#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
This module contains a CloudDLPHook
which allows you to connect to the GCP Cloud DLP service.
"""
import re
import time
from typing import List, Optional, Sequence, Tuple, Union
from google.api_core.retry import Retry
from google.cloud.dlp_v2 import DlpServiceClient
from google.cloud.dlp_v2.types import (
ByteContentItem, ContentItem, DeidentifyConfig, DeidentifyContentResponse, DeidentifyTemplate, DlpJob,
FieldMask, InspectConfig, InspectContentResponse, InspectJobConfig, InspectTemplate, JobTrigger,
ListInfoTypesResponse, RedactImageRequest, RedactImageResponse, ReidentifyContentResponse,
RiskAnalysisJobConfig, StoredInfoType, StoredInfoTypeConfig,
)
from airflow.exceptions import AirflowException
from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
DLP_JOB_PATH_PATTERN = "^projects/[^/]+/dlpJobs/(?P<job>.*?)$"
# Time to sleep between active checks of the operation results
TIME_TO_SLEEP_IN_SECONDS = 1
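# The DLP_JOB_PATH_PATTERN above is used later to recover the short job ID
# from the fully-qualified resource name returned by the API. A minimal,
# self-contained illustration (the project and job IDs below are made up):

```python
import re

# Same pattern as the module-level constant above.
DLP_JOB_PATH_PATTERN = "^projects/[^/]+/dlpJobs/(?P<job>.*?)$"

# A fully-qualified DLP job resource name (hypothetical IDs).
job_name = "projects/example-project/dlpJobs/i-1234567890"

match = re.match(DLP_JOB_PATH_PATTERN, job_name, re.IGNORECASE)
job_id = match.groupdict()["job"] if match else None
print(job_id)  # -> i-1234567890
```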
# pylint: disable=R0904, C0302
class CloudDLPHook(GoogleBaseHook):
"""
Hook for Google Cloud Data Loss Prevention (DLP) APIs.
Cloud DLP allows clients to detect the presence of Personally Identifiable
Information (PII) and other privacy-sensitive data in user-supplied,
unstructured data streams, like text blocks or images. The service also
includes methods for sensitive data redaction and scheduling of data scans
on Google Cloud Platform based data sets.
:param gcp_conn_id: The connection ID to use when fetching connection info.
:type gcp_conn_id: str
:param delegate_to: The account to impersonate, if any.
For this to work, the service account making the request must have
domain-wide delegation enabled.
:type delegate_to: str
"""
def __init__(self, gcp_conn_id: str = "google_cloud_default", delegate_to: Optional[str] = None) -> None:
super().__init__(gcp_conn_id, delegate_to)
self._client = None
def get_conn(self) -> DlpServiceClient:
"""
Provides a client for interacting with the Cloud DLP API.
:return: GCP Cloud DLP API Client
:rtype: google.cloud.dlp_v2.DlpServiceClient
"""
if not self._client:
self._client = DlpServiceClient(credentials=self._get_credentials(), client_info=self.client_info)
return self._client
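# The get_conn method above memoizes the API client: it is built on first
# use and reused on every later call. The same pattern in isolation, with
# FakeClient standing in for DlpServiceClient (names here are illustrative):

```python
class FakeClient:
    """Stand-in for an expensive-to-construct API client."""

class LazyHook:
    def __init__(self):
        self._client = None

    def get_conn(self):
        # Build the client lazily, then cache it on the instance.
        if not self._client:
            self._client = FakeClient()
        return self._client

hook = LazyHook()
assert hook.get_conn() is hook.get_conn()  # same instance, created once
```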
@GoogleBaseHook.fallback_to_default_project_id
def cancel_dlp_job(
self,
dlp_job_id: str,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> None:
"""
Starts asynchronous cancellation on a long-running DLP job.
:param dlp_job_id: ID of the DLP job resource to be cancelled.
:type dlp_job_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default project_id
from the GCP connection is used.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
"""
client = self.get_conn()
if not dlp_job_id:
raise AirflowException("Please provide the ID of the DLP job resource to be cancelled.")
name = DlpServiceClient.dlp_job_path(project_id, dlp_job_id)
client.cancel_dlp_job(name=name, retry=retry, timeout=timeout, metadata=metadata)
def create_deidentify_template(
self,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
deidentify_template: Optional[Union[dict, DeidentifyTemplate]] = None,
template_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> DeidentifyTemplate:
"""
Creates a deidentify template for re-using frequently used configuration for
de-identifying content, images, and storage.
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param deidentify_template: (Optional) The deidentify template to create.
:type deidentify_template: dict or google.cloud.dlp_v2.types.DeidentifyTemplate
:param template_id: (Optional) The template ID.
:type template_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.DeidentifyTemplate
"""
client = self.get_conn()
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
parent = DlpServiceClient.organization_path(organization_id)
elif project_id:
parent = DlpServiceClient.project_path(project_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.create_deidentify_template(
parent=parent,
deidentify_template=deidentify_template,
template_id=template_id,
retry=retry,
timeout=timeout,
metadata=metadata,
)
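# The parent-resolution branch used above (an organization ID takes
# precedence over a project ID, and exactly one of the two is required)
# can be sketched as a standalone helper. `resolve_parent` is a
# hypothetical name; the resource-name formats mirror what
# DlpServiceClient.organization_path / project_path produce:

```python
def resolve_parent(organization_id=None, project_id=None):
    if organization_id:
        return "organizations/{}".format(organization_id)
    if project_id:
        return "projects/{}".format(project_id)
    raise ValueError("Please provide either organization_id or project_id.")

print(resolve_parent(project_id="example-project"))  # -> projects/example-project
print(resolve_parent(organization_id="12345"))       # -> organizations/12345
```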
@GoogleBaseHook.fallback_to_default_project_id
def create_dlp_job(
self,
project_id: Optional[str] = None,
inspect_job: Optional[Union[dict, InspectJobConfig]] = None,
risk_job: Optional[Union[dict, RiskAnalysisJobConfig]] = None,
job_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
wait_until_finished: bool = True,
) -> DlpJob:
"""
Creates a new job to inspect storage or calculate risk metrics.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param inspect_job: (Optional) The configuration for the inspect job.
:type inspect_job: dict or google.cloud.dlp_v2.types.InspectJobConfig
:param risk_job: (Optional) The configuration for the risk job.
:type risk_job: dict or google.cloud.dlp_v2.types.RiskAnalysisJobConfig
:param job_id: (Optional) The job ID.
:type job_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:param wait_until_finished: (Optional) If true, it will keep polling the job state
until it is set to DONE.
:type wait_until_finished: bool
:rtype: google.cloud.dlp_v2.types.DlpJob
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
job = client.create_dlp_job(
parent=parent,
inspect_job=inspect_job,
risk_job=risk_job,
job_id=job_id,
retry=retry,
timeout=timeout,
metadata=metadata,
)
if wait_until_finished:
pattern = re.compile(DLP_JOB_PATH_PATTERN, re.IGNORECASE)
match = pattern.match(job.name)
if match is not None:
job_name = match.groupdict()["job"]
else:
raise AirflowException("Unable to retrieve DLP job's ID from {}.".format(job.name))
while wait_until_finished:
job = self.get_dlp_job(dlp_job_id=job_name, project_id=project_id)
self.log.info("DLP job %s state: %s.", job.name, DlpJob.JobState.Name(job.state))
if job.state == DlpJob.JobState.DONE:
return job
elif job.state in [
DlpJob.JobState.PENDING,
DlpJob.JobState.RUNNING,
DlpJob.JobState.JOB_STATE_UNSPECIFIED,
]:
time.sleep(TIME_TO_SLEEP_IN_SECONDS)
else:
raise AirflowException(
"Stopped polling DLP job state. DLP job {} state: {}.".format(
job.name, DlpJob.JobState.Name(job.state)
)
)
return job
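# The wait_until_finished loop in create_dlp_job follows a generic
# poll-until-terminal-state pattern: keep polling while the state is
# transient, return on DONE, and fail fast on anything unexpected. A
# self-contained sketch (poll_until_done and the plain state strings are
# illustrative, not part of the hook's API):

```python
import time

def poll_until_done(get_state, done="DONE", transient=("PENDING", "RUNNING"), interval=0.0):
    while True:
        state = get_state()
        if state == done:
            return state
        if state not in transient:
            raise RuntimeError("Stopped polling; job state: {}".format(state))
        time.sleep(interval)

# Simulate a job that is RUNNING twice before finishing.
states = iter(["RUNNING", "RUNNING", "DONE"])
print(poll_until_done(lambda: next(states)))  # -> DONE
```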
def create_inspect_template(
self,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
inspect_template: Optional[Union[dict, InspectTemplate]] = None,
template_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> InspectTemplate:
"""
Creates an inspect template for re-using frequently used configuration for
inspecting content, images, and storage.
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param inspect_template: (Optional) The inspect template to create.
:type inspect_template: dict or google.cloud.dlp_v2.types.InspectTemplate
:param template_id: (Optional) The template ID.
:type template_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.InspectTemplate
"""
client = self.get_conn()
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
parent = DlpServiceClient.organization_path(organization_id)
elif project_id:
parent = DlpServiceClient.project_path(project_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.create_inspect_template(
parent=parent,
inspect_template=inspect_template,
template_id=template_id,
retry=retry,
timeout=timeout,
metadata=metadata,
)
@GoogleBaseHook.fallback_to_default_project_id
def create_job_trigger(
self,
project_id: Optional[str] = None,
job_trigger: Optional[Union[dict, JobTrigger]] = None,
trigger_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> JobTrigger:
"""
Creates a job trigger to run DLP actions such as scanning storage for sensitive
information on a set schedule.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param job_trigger: (Optional) The job trigger to create.
:type job_trigger: dict or google.cloud.dlp_v2.types.JobTrigger
:param trigger_id: (Optional) The job trigger ID.
:type trigger_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.JobTrigger
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
return client.create_job_trigger(
parent=parent,
job_trigger=job_trigger,
trigger_id=trigger_id,
retry=retry,
timeout=timeout,
metadata=metadata,
)
def create_stored_info_type(
self,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
config: Optional[Union[dict, StoredInfoTypeConfig]] = None,
stored_info_type_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> StoredInfoType:
"""
Creates a pre-built stored info type to be used for inspection.
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param config: (Optional) The config for the stored info type.
:type config: dict or google.cloud.dlp_v2.types.StoredInfoTypeConfig
:param stored_info_type_id: (Optional) The stored info type ID.
:type stored_info_type_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.StoredInfoType
"""
client = self.get_conn()
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
parent = DlpServiceClient.organization_path(organization_id)
elif project_id:
parent = DlpServiceClient.project_path(project_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.create_stored_info_type(
parent=parent,
config=config,
stored_info_type_id=stored_info_type_id,
retry=retry,
timeout=timeout,
metadata=metadata,
)
@GoogleBaseHook.fallback_to_default_project_id
def deidentify_content(
self,
project_id: Optional[str] = None,
deidentify_config: Optional[Union[dict, DeidentifyConfig]] = None,
inspect_config: Optional[Union[dict, InspectConfig]] = None,
item: Optional[Union[dict, ContentItem]] = None,
inspect_template_name: Optional[str] = None,
deidentify_template_name: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> DeidentifyContentResponse:
"""
De-identifies potentially sensitive info from a content item. This method has limits
on input size and output size.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param deidentify_config: (Optional) Configuration for the de-identification of the
content item. Items specified here will override the template referenced by the
deidentify_template_name argument.
:type deidentify_config: dict or google.cloud.dlp_v2.types.DeidentifyConfig
:param inspect_config: (Optional) Configuration for the inspector. Items specified
here will override the template referenced by the inspect_template_name argument.
:type inspect_config: dict or google.cloud.dlp_v2.types.InspectConfig
:param item: (Optional) The item to de-identify. Will be treated as text.
:type item: dict or google.cloud.dlp_v2.types.ContentItem
:param inspect_template_name: (Optional) Optional template to use. Any configuration
directly specified in inspect_config will override those set in the template.
:type inspect_template_name: str
:param deidentify_template_name: (Optional) Optional template to use. Any
configuration directly specified in deidentify_config will override those set
in the template.
:type deidentify_template_name: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.DeidentifyContentResponse
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
return client.deidentify_content(
parent=parent,
deidentify_config=deidentify_config,
inspect_config=inspect_config,
item=item,
inspect_template_name=inspect_template_name,
deidentify_template_name=deidentify_template_name,
retry=retry,
timeout=timeout,
metadata=metadata,
)
def delete_deidentify_template(
self,
template_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> None:
"""
Deletes a deidentify template.
:param template_id: The ID of the deidentify template to be deleted.
:type template_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
"""
client = self.get_conn()
if not template_id:
raise AirflowException("Please provide the ID of the deidentify template to be deleted.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_deidentify_template_path(organization_id, template_id)
elif project_id:
name = DlpServiceClient.project_deidentify_template_path(project_id, template_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
client.delete_deidentify_template(name=name, retry=retry, timeout=timeout, metadata=metadata)
@GoogleBaseHook.fallback_to_default_project_id
def delete_dlp_job(
self,
dlp_job_id: str,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> None:
"""
Deletes a long-running DLP job. This method indicates that the client is no longer
interested in the DLP job result. The job will be cancelled if possible.
:param dlp_job_id: The ID of the DLP job resource to be deleted.
:type dlp_job_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
"""
client = self.get_conn()
if not dlp_job_id:
raise AirflowException("Please provide the ID of the DLP job resource to be deleted.")
name = DlpServiceClient.dlp_job_path(project_id, dlp_job_id)
client.delete_dlp_job(name=name, retry=retry, timeout=timeout, metadata=metadata)
def delete_inspect_template(
self,
template_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> None:
"""
Deletes an inspect template.
:param template_id: The ID of the inspect template to be deleted.
:type template_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
"""
client = self.get_conn()
if not template_id:
raise AirflowException("Please provide the ID of the inspect template to be deleted.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_inspect_template_path(organization_id, template_id)
elif project_id:
name = DlpServiceClient.project_inspect_template_path(project_id, template_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
client.delete_inspect_template(name=name, retry=retry, timeout=timeout, metadata=metadata)
@GoogleBaseHook.fallback_to_default_project_id
def delete_job_trigger(
self,
job_trigger_id: str,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> None:
"""
Deletes a job trigger.
:param job_trigger_id: The ID of the DLP job trigger to be deleted.
:type job_trigger_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
"""
client = self.get_conn()
if not job_trigger_id:
raise AirflowException("Please provide the ID of the DLP job trigger to be deleted.")
name = DlpServiceClient.project_job_trigger_path(project_id, job_trigger_id)
client.delete_job_trigger(name=name, retry=retry, timeout=timeout, metadata=metadata)
def delete_stored_info_type(
self,
stored_info_type_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> None:
"""
Deletes a stored info type.
:param stored_info_type_id: The ID of the stored info type to be deleted.
:type stored_info_type_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
"""
client = self.get_conn()
if not stored_info_type_id:
raise AirflowException("Please provide the ID of the stored info type to be deleted.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_stored_info_type_path(organization_id, stored_info_type_id)
elif project_id:
name = DlpServiceClient.project_stored_info_type_path(project_id, stored_info_type_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
client.delete_stored_info_type(name=name, retry=retry, timeout=timeout, metadata=metadata)
def get_deidentify_template(
self,
template_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> DeidentifyTemplate:
"""
Gets a deidentify template.
:param template_id: The ID of the deidentify template to be read.
:type template_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.DeidentifyTemplate
"""
client = self.get_conn()
if not template_id:
raise AirflowException("Please provide the ID of the deidentify template to be read.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_deidentify_template_path(organization_id, template_id)
elif project_id:
name = DlpServiceClient.project_deidentify_template_path(project_id, template_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.get_deidentify_template(name=name, retry=retry, timeout=timeout, metadata=metadata)
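# The organization-first branching above resolves to a DLP resource name. A
# dependency-free sketch of the strings the path helpers produce (the formats
# follow the DLP v2 API; the IDs below are hypothetical):

```python
# Sketch of the resource names built by organization_deidentify_template_path /
# project_deidentify_template_path, mirroring the hook's precedence rule:
# organization_id wins over project_id. IDs are hypothetical.
def deidentify_template_name(template_id, organization_id=None, project_id=None):
    if organization_id:
        return f"organizations/{organization_id}/deidentifyTemplates/{template_id}"
    if project_id:
        return f"projects/{project_id}/deidentifyTemplates/{template_id}"
    raise ValueError("Please provide either organization_id or project_id.")


print(deidentify_template_name("redact-pii", organization_id="example-org"))
# organizations/example-org/deidentifyTemplates/redact-pii
```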
@GoogleBaseHook.fallback_to_default_project_id
def get_dlp_job(
self,
dlp_job_id: str,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> DlpJob:
"""
Gets the latest state of a long-running DLP job.
:param dlp_job_id: The ID of the DLP job resource to be read.
:type dlp_job_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.DlpJob
"""
client = self.get_conn()
if not dlp_job_id:
raise AirflowException("Please provide the ID of the DLP job resource to be read.")
name = DlpServiceClient.dlp_job_path(project_id, dlp_job_id)
return client.get_dlp_job(name=name, retry=retry, timeout=timeout, metadata=metadata)
def get_inspect_template(
self,
template_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> InspectTemplate:
"""
Gets an inspect template.
:param template_id: The ID of the inspect template to be read.
:type template_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.InspectTemplate
"""
client = self.get_conn()
if not template_id:
raise AirflowException("Please provide the ID of the inspect template to be read.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_inspect_template_path(organization_id, template_id)
elif project_id:
name = DlpServiceClient.project_inspect_template_path(project_id, template_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.get_inspect_template(name=name, retry=retry, timeout=timeout, metadata=metadata)
@GoogleBaseHook.fallback_to_default_project_id
def get_job_trigger(
self,
job_trigger_id: str,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> JobTrigger:
"""
Gets a DLP job trigger.
:param job_trigger_id: The ID of the DLP job trigger to be read.
:type job_trigger_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.JobTrigger
"""
client = self.get_conn()
if not job_trigger_id:
raise AirflowException("Please provide the ID of the DLP job trigger to be read.")
name = DlpServiceClient.project_job_trigger_path(project_id, job_trigger_id)
return client.get_job_trigger(name=name, retry=retry, timeout=timeout, metadata=metadata)
def get_stored_info_type(
self,
stored_info_type_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> StoredInfoType:
"""
Gets a stored info type.
:param stored_info_type_id: The ID of the stored info type to be read.
:type stored_info_type_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.StoredInfoType
"""
client = self.get_conn()
if not stored_info_type_id:
raise AirflowException("Please provide the ID of the stored info type to be read.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_stored_info_type_path(organization_id, stored_info_type_id)
elif project_id:
name = DlpServiceClient.project_stored_info_type_path(project_id, stored_info_type_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.get_stored_info_type(name=name, retry=retry, timeout=timeout, metadata=metadata)
@GoogleBaseHook.fallback_to_default_project_id
def inspect_content(
self,
project_id: Optional[str] = None,
inspect_config: Optional[Union[dict, InspectConfig]] = None,
item: Optional[Union[dict, ContentItem]] = None,
inspect_template_name: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> InspectContentResponse:
"""
Finds potentially sensitive info in content. This method has limits on input size,
processing time, and output size.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param inspect_config: (Optional) Configuration for the inspector. Items specified
here will override the template referenced by the inspect_template_name argument.
:type inspect_config: dict or google.cloud.dlp_v2.types.InspectConfig
:param item: (Optional) The item to inspect. Will be treated as text.
:type item: dict or google.cloud.dlp_v2.types.ContentItem
:param inspect_template_name: (Optional) Template to use. Any configuration
directly specified in inspect_config will override the corresponding values in the template.
:type inspect_template_name: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.InspectContentResponse
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
return client.inspect_content(
parent=parent,
inspect_config=inspect_config,
item=item,
inspect_template_name=inspect_template_name,
retry=retry,
timeout=timeout,
metadata=metadata,
)
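# inspect_config and item also accept plain dicts in place of protobuf
# messages. A minimal sketch of the dict forms (field names follow the DLP v2
# InspectConfig / ContentItem messages; the values are illustrative):

```python
# Plain-dict forms of the inspect_content arguments. Field names follow the
# DLP v2 InspectConfig / ContentItem messages; the values are illustrative.
inspect_config = {
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
    "min_likelihood": "POSSIBLE",   # report findings at or above this likelihood
    "include_quote": True,          # echo the matched text back in each finding
}
item = {"value": "Reach me at jane.doe@example.com"}  # ContentItem, text payload
```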
def list_deidentify_templates(
self,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
page_size: Optional[int] = None,
order_by: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> List[DeidentifyTemplate]:
"""
Lists deidentify templates.
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param page_size: (Optional) The maximum number of resources contained in the
underlying API response.
:type page_size: int
:param order_by: (Optional) Comma-separated list of fields to order by,
followed by an asc or desc postfix.
:type order_by: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: List[google.cloud.dlp_v2.types.DeidentifyTemplate]
"""
client = self.get_conn()
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
parent = DlpServiceClient.organization_path(organization_id)
elif project_id:
parent = DlpServiceClient.project_path(project_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
results = client.list_deidentify_templates(
parent=parent,
page_size=page_size,
order_by=order_by,
retry=retry,
timeout=timeout,
metadata=metadata,
)
return list(results)
@GoogleBaseHook.fallback_to_default_project_id
def list_dlp_jobs(
self,
project_id: Optional[str] = None,
results_filter: Optional[str] = None,
page_size: Optional[int] = None,
job_type: Optional[str] = None,
order_by: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> List[DlpJob]:
"""
Lists DLP jobs that match the specified filter in the request.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param results_filter: (Optional) Filter used to specify a subset of results.
:type results_filter: str
:param page_size: (Optional) The maximum number of resources contained in the
underlying API response.
:type page_size: int
:param job_type: (Optional) The type of job.
:type job_type: str
:param order_by: (Optional) Comma-separated list of fields to order by,
followed by an asc or desc postfix.
:type order_by: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: List[google.cloud.dlp_v2.types.DlpJob]
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
results = client.list_dlp_jobs(
parent=parent,
filter_=results_filter,
page_size=page_size,
type_=job_type,
order_by=order_by,
retry=retry,
timeout=timeout,
metadata=metadata,
)
return list(results)
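# results_filter is passed straight through as the API's filter parameter. Two
# illustrative filter strings, assuming the jobs.list filter grammar (field,
# operator, value clauses combined with AND; the trigger name is hypothetical):

```python
# Illustrative results_filter values for list_dlp_jobs. The grammar is assumed
# to be the DLP jobs.list filter language; the trigger name is hypothetical.
finished_jobs = "state = DONE"
jobs_from_trigger = (
    "state = RUNNING AND "
    "trigger_name = projects/example-project/jobTriggers/example-trigger"
)
```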
def list_info_types(
self,
language_code: Optional[str] = None,
results_filter: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> ListInfoTypesResponse:
"""
Returns a list of the sensitive information types that the DLP API supports.
:param language_code: (Optional) BCP-47 language code for localized info
type friendly names. If omitted, or if localized strings are not available,
en-US strings will be returned.
:type language_code: str
:param results_filter: (Optional) Filter used to specify a subset of results.
:type results_filter: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.ListInfoTypesResponse
"""
client = self.get_conn()
return client.list_info_types(
language_code=language_code,
filter_=results_filter,
retry=retry,
timeout=timeout,
metadata=metadata,
)
def list_inspect_templates(
self,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
page_size: Optional[int] = None,
order_by: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> List[InspectTemplate]:
"""
Lists inspect templates.
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param page_size: (Optional) The maximum number of resources contained in the
underlying API response.
:type page_size: int
:param order_by: (Optional) Comma-separated list of fields to order by,
followed by an asc or desc postfix.
:type order_by: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: List[google.cloud.dlp_v2.types.InspectTemplate]
"""
client = self.get_conn()
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
parent = DlpServiceClient.organization_path(organization_id)
elif project_id:
parent = DlpServiceClient.project_path(project_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
results = client.list_inspect_templates(
parent=parent,
page_size=page_size,
order_by=order_by,
retry=retry,
timeout=timeout,
metadata=metadata,
)
return list(results)
@GoogleBaseHook.fallback_to_default_project_id
def list_job_triggers(
self,
project_id: Optional[str] = None,
page_size: Optional[int] = None,
order_by: Optional[str] = None,
results_filter: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> List[JobTrigger]:
"""
Lists job triggers.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param page_size: (Optional) The maximum number of resources contained in the
underlying API response.
:type page_size: int
:param order_by: (Optional) Comma-separated list of fields to order by,
followed by an asc or desc postfix.
:type order_by: str
:param results_filter: (Optional) Filter used to specify a subset of results.
:type results_filter: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: List[google.cloud.dlp_v2.types.JobTrigger]
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
results = client.list_job_triggers(
parent=parent,
page_size=page_size,
order_by=order_by,
filter_=results_filter,
retry=retry,
timeout=timeout,
metadata=metadata,
)
return list(results)
def list_stored_info_types(
self,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
page_size: Optional[int] = None,
order_by: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> List[StoredInfoType]:
"""
Lists stored info types.
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param page_size: (Optional) The maximum number of resources contained in the
underlying API response.
:type page_size: int
:param order_by: (Optional) Comma-separated list of fields to order by,
followed by an asc or desc postfix.
:type order_by: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: List[google.cloud.dlp_v2.types.StoredInfoType]
"""
client = self.get_conn()
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
parent = DlpServiceClient.organization_path(organization_id)
elif project_id:
parent = DlpServiceClient.project_path(project_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
results = client.list_stored_info_types(
parent=parent,
page_size=page_size,
order_by=order_by,
retry=retry,
timeout=timeout,
metadata=metadata,
)
return list(results)
@GoogleBaseHook.fallback_to_default_project_id
def redact_image(
self,
project_id: Optional[str] = None,
inspect_config: Optional[Union[dict, InspectConfig]] = None,
image_redaction_configs: Optional[
Union[List[dict], List[RedactImageRequest.ImageRedactionConfig]]
] = None,
include_findings: Optional[bool] = None,
byte_item: Optional[Union[dict, ByteContentItem]] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> RedactImageResponse:
"""
Redacts potentially sensitive info from an image. This method has limits on
input size, processing time, and output size.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param inspect_config: (Optional) Configuration for the inspector. Items specified
here will override the template referenced by the inspect_template_name argument.
:type inspect_config: dict or google.cloud.dlp_v2.types.InspectConfig
:param image_redaction_configs: (Optional) The configuration for specifying what
content to redact from images.
:type image_redaction_configs: List[dict] or
List[google.cloud.dlp_v2.types.RedactImageRequest.ImageRedactionConfig]
:param include_findings: (Optional) Whether the response should include findings
along with the redacted image.
:type include_findings: bool
:param byte_item: (Optional) The content to redact; must be in PNG, JPEG, SVG, or BMP format.
:type byte_item: dict or google.cloud.dlp_v2.types.ByteContentItem
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.RedactImageResponse
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
return client.redact_image(
parent=parent,
inspect_config=inspect_config,
image_redaction_configs=image_redaction_configs,
include_findings=include_findings,
byte_item=byte_item,
retry=retry,
timeout=timeout,
metadata=metadata,
)
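# image_redaction_configs and byte_item also accept plain dicts. A minimal
# sketch of the dict shapes (field names follow the DLP v2
# RedactImageRequest.ImageRedactionConfig / ByteContentItem messages; the enum
# strings, color, and payload are illustrative):

```python
# Plain-dict forms of the redact_image arguments; values are illustrative.
image_redaction_configs = [
    {
        "info_type": {"name": "PHONE_NUMBER"},
        # Paint over each finding in solid black (RGB components in [0, 1]).
        "redaction_color": {"red": 0.0, "green": 0.0, "blue": 0.0},
    },
]
byte_item = {"type": "IMAGE_PNG", "data": b"<png bytes>"}  # the image to redact
```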
@GoogleBaseHook.fallback_to_default_project_id
def reidentify_content(
self,
project_id: Optional[str] = None,
reidentify_config: Optional[Union[dict, DeidentifyConfig]] = None,
inspect_config: Optional[Union[dict, InspectConfig]] = None,
item: Optional[Union[dict, ContentItem]] = None,
inspect_template_name: Optional[str] = None,
reidentify_template_name: Optional[str] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> ReidentifyContentResponse:
"""
Re-identifies content that has been de-identified.
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param reidentify_config: (Optional) Configuration for the re-identification of
the content item.
:type reidentify_config: dict or google.cloud.dlp_v2.types.DeidentifyConfig
:param inspect_config: (Optional) Configuration for the inspector.
:type inspect_config: dict or google.cloud.dlp_v2.types.InspectConfig
:param item: (Optional) The item to re-identify. Will be treated as text.
:type item: dict or google.cloud.dlp_v2.types.ContentItem
:param inspect_template_name: (Optional) Template to use. Any configuration
directly specified in inspect_config will override the corresponding values in the template.
:type inspect_template_name: str
:param reidentify_template_name: (Optional) Template to use. References an
instance of a deidentify template. Any configuration directly specified in
reidentify_config or inspect_config will override the corresponding values in the template.
:type reidentify_template_name: str
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.ReidentifyContentResponse
"""
client = self.get_conn()
parent = DlpServiceClient.project_path(project_id)
return client.reidentify_content(
parent=parent,
reidentify_config=reidentify_config,
inspect_config=inspect_config,
item=item,
inspect_template_name=inspect_template_name,
reidentify_template_name=reidentify_template_name,
retry=retry,
timeout=timeout,
metadata=metadata,
)
def update_deidentify_template(
self,
template_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
deidentify_template: Optional[Union[dict, DeidentifyTemplate]] = None,
update_mask: Optional[Union[dict, FieldMask]] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> DeidentifyTemplate:
"""
Updates the deidentify template.
:param template_id: The ID of the deidentify template to be updated.
:type template_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param deidentify_template: New deidentify template value.
:type deidentify_template: dict or google.cloud.dlp_v2.types.DeidentifyTemplate
:param update_mask: Mask to control which fields get updated.
:type update_mask: dict or google.cloud.dlp_v2.types.FieldMask
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.DeidentifyTemplate
"""
client = self.get_conn()
if not template_id:
raise AirflowException("Please provide the ID of the deidentify template to be updated.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_deidentify_template_path(organization_id, template_id)
elif project_id:
name = DlpServiceClient.project_deidentify_template_path(project_id, template_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.update_deidentify_template(
name=name,
deidentify_template=deidentify_template,
update_mask=update_mask,
retry=retry,
timeout=timeout,
metadata=metadata,
)
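# The update_mask argument also accepts a plain dict in the shape of a
# google.protobuf.FieldMask: only the listed template fields are overwritten,
# the rest are left untouched. A minimal sketch (field values illustrative):

```python
# Plain-dict FieldMask and template fragment for update_deidentify_template:
# only the fields named in "paths" are overwritten on the stored template.
update_mask = {"paths": ["display_name", "deidentify_config"]}
deidentify_template = {"display_name": "Redact PII v2"}  # new values, illustrative
```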
def update_inspect_template(
self,
template_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
inspect_template: Optional[Union[dict, InspectTemplate]] = None,
update_mask: Optional[Union[dict, FieldMask]] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> InspectTemplate:
"""
Updates the inspect template.
:param template_id: The ID of the inspect template to be updated.
:type template_id: str
:param organization_id: (Optional) The organization ID. Required to set this
field if the parent resource is an organization.
:type organization_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. Only set this field if the parent resource is
a project instead of an organization.
:type project_id: str
:param inspect_template: New inspect template value.
:type inspect_template: dict or google.cloud.dlp_v2.types.InspectTemplate
:param update_mask: Mask to control which fields get updated.
:type update_mask: dict or google.cloud.dlp_v2.types.FieldMask
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.InspectTemplate
"""
client = self.get_conn()
if not template_id:
raise AirflowException("Please provide the ID of the inspect template to be updated.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_inspect_template_path(organization_id, template_id)
elif project_id:
name = DlpServiceClient.project_inspect_template_path(project_id, template_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.update_inspect_template(
name=name,
inspect_template=inspect_template,
update_mask=update_mask,
retry=retry,
timeout=timeout,
metadata=metadata,
)
@GoogleBaseHook.fallback_to_default_project_id
def update_job_trigger(
self,
job_trigger_id: str,
project_id: Optional[str] = None,
job_trigger: Optional[Union[dict, JobTrigger]] = None,
update_mask: Optional[Union[dict, FieldMask]] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> JobTrigger:
"""
Updates a job trigger.
:param job_trigger_id: The ID of the DLP job trigger to be updated.
:type job_trigger_id: str
:param project_id: (Optional) Google Cloud Platform project ID where the
DLP Instance exists. If set to None or missing, the default
project_id from the GCP connection is used.
:type project_id: str
:param job_trigger: New job trigger value.
:type job_trigger: dict or google.cloud.dlp_v2.types.JobTrigger
:param update_mask: Mask to control which fields get updated.
:type update_mask: dict or google.cloud.dlp_v2.types.FieldMask
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.JobTrigger
"""
client = self.get_conn()
if not job_trigger_id:
raise AirflowException("Please provide the ID of the DLP job trigger to be updated.")
name = DlpServiceClient.project_job_trigger_path(project_id, job_trigger_id)
return client.update_job_trigger(
name=name,
job_trigger=job_trigger,
update_mask=update_mask,
retry=retry,
timeout=timeout,
metadata=metadata,
)

def update_stored_info_type(
self,
stored_info_type_id: str,
organization_id: Optional[str] = None,
project_id: Optional[str] = None,
config: Optional[Union[dict, StoredInfoTypeConfig]] = None,
update_mask: Optional[Union[dict, FieldMask]] = None,
retry: Optional[Retry] = None,
timeout: Optional[float] = None,
metadata: Optional[Sequence[Tuple[str, str]]] = None,
) -> StoredInfoType:
"""
Updates the stored info type by creating a new version.

:param stored_info_type_id: The ID of the stored info type to be updated.
:type stored_info_type_id: str
        :param organization_id: (Optional) The organization ID. Required to set this
            field if the parent resource is an organization.
        :type organization_id: str
        :param project_id: (Optional) Google Cloud Platform project ID where the
            DLP Instance exists. Only set this field if the parent resource is
            a project instead of an organization.
:type project_id: str
:param config: Updated configuration for the stored info type. If not provided, a new
version of the stored info type will be created with the existing configuration.
:type config: dict or google.cloud.dlp_v2.types.StoredInfoTypeConfig
:param update_mask: Mask to control which fields get updated.
:type update_mask: dict or google.cloud.dlp_v2.types.FieldMask
:param retry: (Optional) A retry object used to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: (Optional) The amount of time, in seconds, to wait for the request
to complete. Note that if retry is specified, the timeout applies to each
individual attempt.
:type timeout: float
:param metadata: (Optional) Additional metadata that is provided to the method.
:type metadata: Sequence[Tuple[str, str]]
:rtype: google.cloud.dlp_v2.types.StoredInfoType
"""
client = self.get_conn()
if not stored_info_type_id:
raise AirflowException("Please provide the ID of the stored info type to be updated.")
# Handle project_id from connection configuration
project_id = project_id or self.project_id
if organization_id:
name = DlpServiceClient.organization_stored_info_type_path(organization_id, stored_info_type_id)
elif project_id:
name = DlpServiceClient.project_stored_info_type_path(project_id, stored_info_type_id)
else:
raise AirflowException("Please provide either organization_id or project_id.")
return client.update_stored_info_type(
name=name, config=config, update_mask=update_mask, retry=retry, timeout=timeout, metadata=metadata
)
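The update methods above share the same parent-resolution branching: an explicit `organization_id` wins, otherwise the (possibly connection-derived) `project_id` is used, otherwise the call fails. A standalone sketch of that logic, using the resource-name format the `DlpServiceClient` path helpers are assumed to produce (the helper name here is illustrative, not part of the hook):

```python
def resolve_inspect_template_name(template_id, organization_id=None, project_id=None):
    """Mirror the hook's parent-resolution order: organization, then project."""
    if not template_id:
        raise ValueError("template_id is required")
    if organization_id:
        # Organization-level templates take precedence over project-level ones.
        return f"organizations/{organization_id}/inspectTemplates/{template_id}"
    if project_id:
        return f"projects/{project_id}/inspectTemplates/{template_id}"
    raise ValueError("Provide either organization_id or project_id.")
```

For example, `resolve_inspect_template_name("tmpl", organization_id="123")` yields `"organizations/123/inspectTemplates/tmpl"`.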
# src/alias_free_torch/resample.py, from junjun3518/alias-free-torch (Apache-2.0)
import torch
import torch.nn as nn
import torch.nn.functional as F
from .filter import LowPassFilter1d, LowPassFilter2d

class UpSample1d(nn.Module):
def __init__(self, ratio=2):
super().__init__()
self.ratio = ratio
self.lowpass = LowPassFilter1d(cutoff=0.5 / ratio,
half_width=0.6 / ratio,
kernel_size=int(6 * ratio // 2) * 2)
def forward(self, x):
shape = list(x.shape)
new_shape = shape[:-1] + [shape[-1] * self.ratio]
xx = torch.zeros(new_shape, device=x.device)
xx[..., ::self.ratio] = x
xx = self.ratio * xx
        x = self.lowpass(xx)  # xx was allocated with new_shape; no reshape needed
return x

class DownSample1d(nn.Module):
def __init__(self, ratio=2):
super().__init__()
self.ratio = ratio
self.lowpass = LowPassFilter1d(cutoff=0.5 / ratio,
half_width=0.6 / ratio,
stride=ratio,
kernel_size=int(6 * ratio // 2) * 2)
def forward(self, x):
xx = self.lowpass(x)
return xx

class UpSample2d(nn.Module):
def __init__(self, ratio=2):
super().__init__()
self.ratio = ratio
self.lowpass = LowPassFilter2d(cutoff=0.5 / ratio,
half_width=0.6 / ratio,
kernel_size=int(6 * ratio // 2) * 2)
def forward(self, x):
shape = list(x.shape)
        new_shape = shape[:-2] + [shape[-2] * self.ratio, shape[-1] * self.ratio]
xx = torch.zeros(new_shape, device=x.device)
xx[..., ::self.ratio, ::self.ratio] = x
xx = self.ratio**2 * xx
x = self.lowpass(xx)
return x

class DownSample2d(nn.Module):
def __init__(self, ratio=2):
super().__init__()
self.ratio = ratio
self.lowpass = LowPassFilter2d(cutoff=0.5 / ratio,
half_width=0.6 / ratio,
stride=ratio,
kernel_size=int(6 * ratio // 2) * 2)
def forward(self, x):
xx = self.lowpass(x)
return xx
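The UpSample modules implement textbook zero-insertion resampling: write the input into every ratio-th slot of a zero tensor, scale by the ratio (ratio**2 in 2d) to preserve amplitude, then low-pass filter away the spectral images. A dependency-free sketch of just the 1d zero-stuffing step, using plain lists instead of tensors (the function name is mine, not part of this package):

```python
def zero_stuff_1d(x, ratio=2):
    # Equivalent of xx[..., ::ratio] = x followed by xx = ratio * xx above.
    out = [0.0] * (len(x) * ratio)
    out[::ratio] = [ratio * v for v in x]
    return out
```

Applied to `[1.0, 2.0, 3.0]` with ratio 2, this produces `[2.0, 0.0, 4.0, 0.0, 6.0, 0.0]`, which the low-pass filter then smooths into an interpolated signal.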
#!/usr/bin/python3
# tests/test_macros_oval.py, from kishorkunal-raj/content (BSD-3-Clause)
import argparse
import oval_tester

def main():
parser = argparse.ArgumentParser(
description="Test Jinja macros that Generate OVAL")
parser.add_argument(
"--verbose", action="store_true", default=False,
help="Show results of each test case")
args = parser.parse_args()
tester = oval_tester.OVALTester(args.verbose)
#######################################################
# Test cases for whitespace separated files
#######################################################
tester.test(
"correct value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 100",
"true"
)
tester.test(
"correct value and a comment",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 100 # be very fast",
"true"
)
tester.test(
"correct value on a new line",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"\n\n\n\n\n\nspeed 100",
"true"
)
tester.test(
"correct value separated by a tab",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed\t100",
"true"
)
tester.test(
"wrong value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 80",
"false"
)
tester.test(
"wrong value which contains the correct value as a substring",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 1000",
"false"
)
tester.test(
"commented value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"# speed 80",
"false"
)
tester.test(
"missing whitespace",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed100",
"false"
)
tester.test(
"parameter without a value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed ",
"false"
)
tester.test(
"misspelled parameter with a value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"sspeed 100",
"false"
)
tester.test(
"parameter is on a different line than the value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed\n100",
"false"
)
tester.test(
"multiple empty lines among parameter and value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"\n\nspeed\n\n\n100\n\n\n\n",
"false"
)
tester.test(
"parameter with multiple values when multi_value disabled",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 100 50 8",
"false"
)
tester.test(
"parameter with single value when multi_value enabled",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 100",
"true"
)
tester.test(
"parameter, single value, trailing comment, multi_value enabled",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 100 #comment",
"true"
)
tester.test(
"parameter with multiple values when multi_value enabled",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 100 50 8",
"true"
)
tester.test(
"multiple values, multi_value enabled, value in the middle",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed abcd 333 100 50 8",
"true"
)
tester.test(
"multiple values, multi_value enabled, value is the last",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 2 4 6 8 10 14 100",
"true"
)
tester.test(
"parameter with extra newlines, multi_value enabled",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed\n\n100",
"false"
)
tester.test(
"parameter with multiple values when multi_value enabled, comment",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 2 4 6 8 10 14 100 # astonishing",
"true"
)
tester.test(
"multi_value is used and value is a suffix of a value",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=true,
missing_config_file_fail=false,
section=''
) }}}""",
"speed 1001000",
"false"
)
tester.test(
"parameter with a value commented out",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed # 100",
"false"
)
tester.test(
"missing parameter fails",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"lights on",
"false"
)
tester.test(
"missing parameter pass",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=true,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"lights on",
"true"
)
tester.test(
"overwriting",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=true,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"""speed 100
speed 60""",
"false"
)
tester.test(
"overwriting commented out",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=true,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"""speed 100
#speed 60""",
"true"
)
tester.test(
"config file missing should fail",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=true,
application='',
multi_value=false,
missing_config_file_fail=true,
section=''
) }}}""",
None,
"false"
)
tester.test(
"config file missing should pass",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=true,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
None,
"true"
)
tester.test(
"config file missing should fail due to missing parameter",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=true,
section=''
) }}}""",
None,
"false"
)
tester.test(
"config file missing but missing parameter isn't allowed",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]+',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
None,
"false"
)
#######################################################
# Test cases for equal sign separated files
#######################################################
tester.test(
"correct value, no whitespace",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed=100",
"true"
)
tester.test(
"correct value, some spaces in the middle",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed = 100",
"true"
)
tester.test(
"correct value, many spaces everywhere",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
" speed = 100 ",
"true"
)
tester.test(
"correct value, tabs",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"\tspeed\t=\t100",
"true"
)
tester.test(
"correct value, and a comment",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed=100 # be very fast",
"true"
)
tester.test(
"wrong value, and a comment",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed=800 # be extremely fast",
"false"
)
tester.test(
"no value, and a comment",
r"""{{{ oval_check_config_file(
path='CONFIG_FILE',
prefix_regex='^[ \t]*',
parameter='speed',
separator_regex='[ \t]*=[ \t]*',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
section=''
) }}}""",
"speed= # 100",
"false"
)
######################################
# Test cases for INI files
######################################
tester.test(
"INI correct value",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
speed = 100""",
"true"
)
tester.test(
"INI correct value trailing whitespace",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
speed = 100 """,
"true"
)
tester.test(
"INI correct value no whitespace",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"[vehicle]\nspeed=100",
"true"
)
tester.test(
"INI correct value tabs",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"[vehicle]\n\tspeed\t=\t100",
"true"
)
tester.test(
"INI correct value commented out",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
#speed = 100""",
"false"
)
tester.test(
"INI section commented out",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"#[vehicle]\nspeed = 100",
"false"
)
tester.test(
"INI correct value among multiple values",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
color = red
speed = 100
doors = 5""",
"true"
)
tester.test(
"INI correct value among multiple values commented out",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
color = red
#speed = 100
doors = 5""",
"false"
)
tester.test(
"INI wrong value",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
speed = 200""",
"false"
)
tester.test(
"INI wrong value which is a substring",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
speed = 10000""",
"false"
)
tester.test(
"INI overwritten",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[vehicle]
speed = 100
speed = 200""",
"false"
)
tester.test(
"INI correct value in a wrong section",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"""[house]
speed = 100""",
"false"
)
tester.test(
"INI section overwritten",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"[vehicle]\n[house]\nspeed = 100",
"false"
)
tester.test(
"INI no section at all",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"speed = 100",
"false"
)
tester.test(
"INI extra newlines",
r"""{{{ oval_check_ini_file(
path='CONFIG_FILE',
section="vehicle",
parameter='speed',
value='100',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"[vehicle]\nspeed =\n100",
"false"
)
tester.test(
"SHELL commented out",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
"# SHELL=/bin/bash\n",
"false"
)
tester.test(
"SHELL correct",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
" SHELL=/bin/bash\n",
"true"
)
tester.test(
"SHELL single-quoted",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin"/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
" SHELL='/bin\"/bash'\n",
"true"
)
tester.test(
"SHELL double-quoted",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value=' /bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
""" SHELL=" /bin/bash"\n""",
"true"
)
tester.test(
"SHELL unwanted double-quoted",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value=' /bin/bash',
no_quotes=true,
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
""" SHELL=" /bin/bash"\n""",
"false"
)
tester.test(
"SHELL unwanted single-quoted",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin"/bash',
no_quotes=true,
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
" SHELL='/bin\"/bash'\n",
"false"
)
tester.test(
"SHELL double-quoted spaced",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
""" SHELL= "/bin/bash"\n""",
"false"
)
tester.test(
"SHELL bad_var_case",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
""" Shell="/bin/bash"\n""",
"false"
)
tester.test(
"SHELL bad_value_case",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
""" SHELL="/bin/Bash"\n""",
"false"
)
tester.test(
"SHELL badly quoted",
r"""{{{ oval_check_shell_file(
path='CONFIG_FILE',
parameter='SHELL',
value='/bin/bash',
missing_parameter_pass=false,
application='',
multi_value=false,
missing_config_file_fail=false,
) }}}""",
""" SHELL="/bin/bash'\n""",
"false"
)
tester.finish()

if __name__ == "__main__":
main()
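The whitespace-separated test cases above exercise a line match that is presumably assembled by concatenating `prefix_regex`, the parameter, `separator_regex`, and the value, anchored per line and tolerant of a trailing comment. A rough Python approximation of that single-value matching (the helper name and exact pattern assembly are assumptions, not the macro's actual generated OVAL):

```python
import re

def config_line_matches(content, parameter, value,
                        prefix_regex=r'^[ \t]*', separator_regex=r'[ \t]+'):
    # Anchor the value at end-of-line, allowing an optional trailing comment,
    # so "speed 100" and "speed 100 # fast" pass but "speed 1000" does not.
    pattern = (prefix_regex + re.escape(parameter) + separator_regex
               + re.escape(value) + r'[ \t]*(?:#.*)?$')
    return re.search(pattern, content, re.MULTILINE) is not None
```

This reproduces the behavior the test cases assert for the simple cases: a commented-out line fails because the `#` breaks the `prefix + parameter` prefix, and a longer value like `1000` fails because the end-of-line anchor rejects the extra digit.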
# list.py, from cyph/The-Big-Username-Blacklist (MIT)
# This file was generated by The-Big-Username-Blacklist VERSION=v1.5.3 (at 2018-12-23 18:39:03.048654)
data = ['.htaccess', '.htpasswd', '.well-known', '400', '401', '403', '404', '405', '406', '407', '408', '409', '410', '411', '412', '413', '414', '415', '416', '417', '421', '422', '423', '424', '426', '428', '429', '431', '500', '501', '502', '503', '504', '505', '506', '507', '508', '509', '510', '511', 'about', 'about-us', 'abuse', 'access', 'account', 'accounts', 'ad', 'add', 'admin', 'administration', 'administrator', 'ads', 'advertise', 'advertising', 'aes128-ctr', 'aes128-gcm', 'aes192-ctr', 'aes256-ctr', 'aes256-gcm', 'affiliate', 'affiliates', 'agse', 'ajax', 'alert', 'alerts', 'alpha', 'amp', 'analytics', 'api', 'app', 'apps', 'asc', 'assets', 'atom', 'auth', 'authentication', 'authorize', 'autoconfig', 'autodiscover', 'avatar', 'backup', 'banner', 'banners', 'baronboehm', 'beta', 'billing', 'billings', 'blog', 'blogs', 'board', 'bookmark', 'bookmarks', 'broadcasthost', 'business', 'buy', 'cache', 'calendar', 'campaign', 'captcha', 'careers', 'cart', 'cas', 'categories', 'category', 'cdn', 'cgi', 'cgi-bin', 'chacha20-poly1305', 'change', 'channel', 'channels', 'chart', 'chat', 'checkout', 'cit', 'clear', 'client', 'close', 'cms', 'com', 'comment', 'comments', 'community', 'compare', 'compose', 'config', 'connect', 'contact', 'contest', 'cookies', 'copy', 'copyright', 'count', 'create', 'crossdomain.xml', 'css', 'curve25519-sha256', 'customer', 'customers', 'customize', 'cyph', 'cyph-admin', 'cyph-support', 'cyph_admin', 'cyph_support', 'cyphadmin', 'cyphsupport', 'dashboard', 'db', 'deals', 'debug', 'delete', 'desc', 'destroy', 'dev', 'developer', 'developers', 'diffie-hellman-group-exchange-sha256', 'diffie-hellman-group14-sha1', 'disconnect', 'discuss', 'dns', 'dns0', 'dns1', 'dns2', 'dns3', 'dns4', 'docs', 'documentation', 'domain', 'download', 'downloads', 'downvote', 'draft', 'drop', 'ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'edit', 'editor', 'email', 'enterprise', 'error', 'errors', 'event', 'events', 'example', 'exception', 
'exit', 'explore', 'export', 'extensions', 'false', 'family', 'faq', 'faqs', 'favicon.ico', 'features', 'feed', 'feedback', 'feeds', 'file', 'files', 'filter', 'follow', 'follower', 'followers', 'following', 'fonts', 'forgot', 'forgot-password', 'forgotpassword', 'form', 'forms', 'forum', 'forums', 'founders', 'friend', 'friends', 'ftp', 'get', 'git', 'go', 'group', 'groups', 'guest', 'guidelines', 'guides', 'head', 'header', 'help', 'hide', 'hmac-sha', 'hmac-sha1', 'hmac-sha1-etm', 'hmac-sha2-256', 'hmac-sha2-256-etm', 'hmac-sha2-512', 'hmac-sha2-512-etm', 'home', 'host', 'hosting', 'hostmaster', 'htpasswd', 'http', 'httpd', 'https', 'humans.txt', 'icons', 'images', 'imap', 'img', 'import', 'index', 'info', 'insert', 'investors', 'invitations', 'invite', 'invites', 'invoice', 'is', 'isatap', 'issues', 'it', 'jobs', 'join', 'joshboehm', 'joshua', 'joshuaboehm', 'joshuacboehm', 'js', 'json', 'keybase.txt', 'learn', 'legal', 'license', 'licensing', 'like', 'limit', 'live', 'load', 'local', 'localdomain', 'localhost', 'lock', 'login', 'logout', 'lost-password', 'mach37', 'mail', 'mail0', 'mail1', 'mail2', 'mail3', 'mail4', 'mail5', 'mail6', 'mail7', 'mail8', 'mail9', 'mailer-daemon', 'mailerdaemon', 'map', 'marketing', 'marketplace', 'master', 'me', 'media', 'member', 'members', 'message', 'messages', 'metrics', 'mis', 'mobile', 'moderator', 'modify', 'more', 'mx', 'my', 'net', 'network', 'new', 'news', 'newsletter', 'newsletters', 'next', 'nil', 'no-reply', 'nobody', 'noc', 'none', 'noreply', 'notification', 'notifications', 'ns', 'ns0', 'ns1', 'ns2', 'ns3', 'ns4', 'ns5', 'ns6', 'ns7', 'ns8', 'ns9', 'null', 'oauth', 'oauth2', 'offer', 'offers', 'online', 'openid', 'order', 'orders', 'overview', 'owner', 'page', 'pages', 'partners', 'passwd', 'password', 'pay', 'payment', 'payments', 'photo', 'photos', 'pixel', 'plans', 'plugins', 'policies', 'policy', 'pop', 'pop3', 'popular', 'portfolio', 'post', 'postfix', 'postmaster', 'poweruser', 'preferences', 'premium', 
'press', 'previous', 'pricing', 'print', 'privacy', 'privacy-policy', 'private', 'prod', 'product', 'production', 'profile', 'profiles', 'project', 'projects', 'public', 'purchase', 'put', 'quota', 'redirect', 'reduce', 'refund', 'refunds', 'register', 'registration', 'remove', 'replies', 'reply', 'report', 'request', 'request-password', 'reset', 'reset-password', 'response', 'return', 'returns', 'review', 'reviews', 'robots.txt', 'root', 'rootuser', 'rsa-sha2-2', 'rsa-sha2-512', 'rss', 'rules', 'ryanlester', 'sales', 'save', 'script', 'sdk', 'search', 'secure', 'security', 'select', 'services', 'session', 'sessions', 'settings', 'setup', 'share', 'shift', 'shop', 'signin', 'signup', 'site', 'sitemap', 'sites', 'smtp', 'sort', 'source', 'sql', 'ssh', 'ssh-rsa', 'ssl', 'ssladmin', 'ssladministrator', 'sslwebmaster', 'stage', 'staging', 'stat', 'static', 'statistics', 'stats', 'status', 'store', 'style', 'styles', 'stylesheet', 'stylesheets', 'subdomain', 'subscribe', 'sudo', 'super', 'superuser', 'support', 'survey', 'sync', 'sysadmin', 'system', 'tablet', 'tag', 'tags', 'team', 'telnet', 'terms', 'terms-of-use', 'test', 'testimonials', 'theme', 'themes', 'today', 'tools', 'topic', 'topics', 'tour', 'training', 'translate', 'translations', 'trending', 'trial', 'true', 'umac-128', 'umac-128-etm', 'umac-64', 'umac-64-etm', 'undefined', 'unfollow', 'unlike', 'unsubscribe', 'update', 'upgrade', 'usenet', 'user', 'username', 'users', 'uucp', 'var', 'verify', 'video', 'view', 'void', 'vote', 'webmail', 'webmaster', 'websign', 'website', 'widget', 'widgets', 'wiki', 'wpad', 'write', 'www', 'www-data', 'www1', 'www2', 'www3', 'www4', 'you', 'yourname', 'yourusername', 'zlib'] | 2,900.5 | 5,698 | 0.608343 | 633 | 5,801 | 5.57188 | 0.887836 | 0.009073 | 0.007372 | 0.011341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047283 | 0.095846 | 5,801 | 2 | 5,698 | 2,900.5 | 0.625167 | 0.017238 | 0 | 0 | 1 | 0 | 0.616667 | 0.011053 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 1 | 0 | 1 | 1 
| 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 9 |
0a1d866a595554517ea135373971b22038417478 | 186 | py | Python | 02/01/upper.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | [
"CC0-1.0"
] | null | null | null | 02/01/upper.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | [
"CC0-1.0"
] | 39 | 2017-07-31T22:54:01.000Z | 2017-08-31T00:19:03.000Z | 02/01/upper.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | [
"CC0-1.0"
] | null | null | null | s = 'abc'; print(s.upper(), s)
s = 'Abc'; print(s.upper(), s)
s = 'aBc'; print(s.upper(), s)
s = 'abC'; print(s.upper(), s)
s = 'abc'; print(s.upper(), s)
s = 'ABC'; print(s.upper(), s)
| 26.571429 | 30 | 0.516129 | 36 | 186 | 2.666667 | 0.111111 | 0.25 | 0.5625 | 0.625 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0.16129 | 186 | 6 | 31 | 31 | 0.615385 | 0 | 0 | 0.333333 | 0 | 0 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
0a28f78d809307270c85063d043015ed3d2a2f46 | 142 | py | Python | sesion5/app1/lib/converter.py | joelibaceta/backend-codigo-10 | 75256580ce9975bcfa831fde884362787d82b71f | [
"MIT"
] | 1 | 2021-11-23T03:05:23.000Z | 2021-11-23T03:05:23.000Z | sesion5/app1/lib/converter.py | joelibaceta/backend-codigo-10 | 75256580ce9975bcfa831fde884362787d82b71f | [
"MIT"
] | 1 | 2021-11-23T02:49:01.000Z | 2021-11-23T02:55:14.000Z | sesion5/app1/lib/converter.py | joelibaceta/backend-codigo-10 | 75256580ce9975bcfa831fde884362787d82b71f | [
"MIT"
] | 1 | 2022-01-26T19:54:33.000Z | 2022-01-26T19:54:33.000Z | class Converter:
    @staticmethod
    def ctof(degrees):
        # Celsius to Fahrenheit: C * 9/5 + 32
        return degrees * 1.8 + 32

    @staticmethod
    def ftoc(degrees):
        # Fahrenheit to Celsius: (F - 32) * 5/9
        return (degrees - 32) * (5 / 9)
| 17.75 | 38 | 0.535211 | 18 | 142 | 4.222222 | 0.666667 | 0.342105 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086022 | 0.34507 | 142 | 8 | 39 | 17.75 | 0.731183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
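A standalone sanity check for the temperature conversions in the converter.py row above (a sketch, not part of the dataset row itself — the function names mirror the class methods but are defined here as plain functions):

```python
# Celsius/Fahrenheit round-trip check for the conversions used above.
def ctof(degrees):
    # Celsius to Fahrenheit: C * 9/5 + 32
    return degrees * 1.8 + 32

def ftoc(degrees):
    # Fahrenheit to Celsius: (F - 32) * 5/9
    return (degrees - 32) * (5 / 9)

print(ctof(100))  # boiling point of water: 212.0
print(ftoc(32))   # freezing point of water: 0.0
```

Note that dividing by 5/9 instead of multiplying (as the row's `ftoc` does) applies the inverse factor 9/5 and converts in the wrong direction.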
0a5c43345978dab30d50f4b95cc547c42e81c7f8 | 42,479 | py | Python | tccli/services/btoe/btoe_client.py | HS-Gray/tencentcloud-cli | 3822fcfdfed570fb526fe49abe6793e2f9127f4a | [
"Apache-2.0"
] | 47 | 2018-05-31T11:26:25.000Z | 2022-03-08T02:12:45.000Z | tccli/services/btoe/btoe_client.py | HS-Gray/tencentcloud-cli | 3822fcfdfed570fb526fe49abe6793e2f9127f4a | [
"Apache-2.0"
] | 23 | 2018-06-14T10:46:30.000Z | 2022-02-28T02:53:09.000Z | tccli/services/btoe/btoe_client.py | HS-Gray/tencentcloud-cli | 3822fcfdfed570fb526fe49abe6793e2f9127f4a | [
"Apache-2.0"
] | 22 | 2018-10-22T09:49:45.000Z | 2022-03-30T08:06:04.000Z | # -*- coding: utf-8 -*-
import os
import sys
import json
import tccli.options_define as OptionsDefine
import tccli.format_output as FormatOutput
from tccli import __version__
from tccli.utils import Utils
from tccli.exceptions import ConfigurationError, ClientError, ParamError
from tencentcloud.common import credential
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.btoe.v20210514 import btoe_client as btoe_client_v20210514
from tencentcloud.btoe.v20210514 import models as models_v20210514
from tencentcloud.btoe.v20210303 import btoe_client as btoe_client_v20210303
from tencentcloud.btoe.v20210303 import models as models_v20210303
from jmespath import search
import time
from tccli import six
def doGetDepositCert(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.GetDepositCertRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.GetDepositCert(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateHashDeposit(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateHashDepositRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateHashDeposit(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doGetDepositFile(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.GetDepositFileRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.GetDepositFile(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateVideoDeposit(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateVideoDepositRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateVideoDeposit(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateDocDeposit(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateDocDepositRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateDocDeposit(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateHashDepositNoCert(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateHashDepositNoCertRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateHashDepositNoCert(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateImageDeposit(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateImageDepositRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateImageDeposit(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateAudioDeposit(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateAudioDepositRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateAudioDeposit(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateHashDepositNoSeal(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateHashDepositNoSealRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateHashDepositNoSeal(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doVerifyEvidenceBlockChainTxHash(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.VerifyEvidenceBlockChainTxHashRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.VerifyEvidenceBlockChainTxHash(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doVerifyEvidenceHash(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.VerifyEvidenceHashRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.VerifyEvidenceHash(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doGetDepositInfo(args, parsed_globals):
    g_param = parse_global_arg(parsed_globals)

    if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
        cred = credential.CVMRoleCredential()
    elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
        cred = credential.STSAssumeRoleCredential(
            g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
            g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
        )
    else:
        cred = credential.Credential(
            g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
        )
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint],
        proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.GetDepositInfoRequest()
    model.from_json_string(json.dumps(args))
    start_time = time.time()
    while True:
        rsp = client.GetDepositInfo(model)
        result = rsp.to_json_string()
        try:
            json_obj = json.loads(result)
        except TypeError as e:
            json_obj = json.loads(result.decode('utf-8'))  # python3.3
        if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
            break
        cur_time = time.time()
        if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
            raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
                              (g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
                               search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
        else:
            print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
        time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
    FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCreateWebpageDeposit(args, parsed_globals):
    g_param = parse_global_arg(parsed_globals)

    if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
        cred = credential.CVMRoleCredential()
    elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
        cred = credential.STSAssumeRoleCredential(
            g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
            g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
        )
    else:
        cred = credential.Credential(
            g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
        )
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint],
        proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CreateWebpageDepositRequest()
    model.from_json_string(json.dumps(args))
    start_time = time.time()
    while True:
        rsp = client.CreateWebpageDeposit(model)
        result = rsp.to_json_string()
        try:
            json_obj = json.loads(result)
        except TypeError as e:
            json_obj = json.loads(result.decode('utf-8'))  # python3.3
        if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
            break
        cur_time = time.time()
        if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
            raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
                              (g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
                               search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
        else:
            print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
        time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
    FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCreateDataDeposit(args, parsed_globals):
    g_param = parse_global_arg(parsed_globals)

    if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
        cred = credential.CVMRoleCredential()
    elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
        cred = credential.STSAssumeRoleCredential(
            g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
            g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
        )
    else:
        cred = credential.Credential(
            g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
        )
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint],
        proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.BtoeClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CreateDataDepositRequest()
    model.from_json_string(json.dumps(args))
    start_time = time.time()
    while True:
        rsp = client.CreateDataDeposit(model)
        result = rsp.to_json_string()
        try:
            json_obj = json.loads(result)
        except TypeError as e:
            json_obj = json.loads(result.decode('utf-8'))  # python3.3
        if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
            break
        cur_time = time.time()
        if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
            raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
                              (g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
                               search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
        else:
            print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
        time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
    FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


CLIENT_MAP = {
    "v20210514": btoe_client_v20210514,
    "v20210303": btoe_client_v20210303,
}

MODELS_MAP = {
    "v20210514": models_v20210514,
    "v20210303": models_v20210303,
}

ACTION_MAP = {
    "GetDepositCert": doGetDepositCert,
    "CreateHashDeposit": doCreateHashDeposit,
    "GetDepositFile": doGetDepositFile,
    "CreateVideoDeposit": doCreateVideoDeposit,
    "CreateDocDeposit": doCreateDocDeposit,
    "CreateHashDepositNoCert": doCreateHashDepositNoCert,
    "CreateImageDeposit": doCreateImageDeposit,
    "CreateAudioDeposit": doCreateAudioDeposit,
    "CreateHashDepositNoSeal": doCreateHashDepositNoSeal,
    "VerifyEvidenceBlockChainTxHash": doVerifyEvidenceBlockChainTxHash,
    "VerifyEvidenceHash": doVerifyEvidenceHash,
    "GetDepositInfo": doGetDepositInfo,
    "CreateWebpageDeposit": doCreateWebpageDeposit,
    "CreateDataDeposit": doCreateDataDeposit,
}

AVAILABLE_VERSION_LIST = [
    "v20210514",
    "v20210303",
]


def action_caller():
    return ACTION_MAP


def parse_global_arg(parsed_globals):
    g_param = parsed_globals
    is_exist_profile = True
    if not parsed_globals["profile"]:
        is_exist_profile = False
        g_param["profile"] = "default"

    configure_path = os.path.join(os.path.expanduser("~"), ".tccli")
    is_conf_exist, conf_path = Utils.file_existed(configure_path, g_param["profile"] + ".configure")
    is_cred_exist, cred_path = Utils.file_existed(configure_path, g_param["profile"] + ".credential")

    conf = {}
    cred = {}

    if is_conf_exist:
        conf = Utils.load_json_msg(conf_path)
    if is_cred_exist:
        cred = Utils.load_json_msg(cred_path)

    if not (isinstance(conf, dict) and isinstance(cred, dict)):
        raise ConfigurationError(
            "file: %s or %s is not json format"
            % (g_param["profile"] + ".configure", g_param["profile"] + ".credential"))

    if OptionsDefine.Token not in cred:
        cred[OptionsDefine.Token] = None

    if not is_exist_profile:
        if os.environ.get(OptionsDefine.ENV_SECRET_ID) and os.environ.get(OptionsDefine.ENV_SECRET_KEY):
            cred[OptionsDefine.SecretId] = os.environ.get(OptionsDefine.ENV_SECRET_ID)
            cred[OptionsDefine.SecretKey] = os.environ.get(OptionsDefine.ENV_SECRET_KEY)
            cred[OptionsDefine.Token] = os.environ.get(OptionsDefine.ENV_TOKEN)

        if os.environ.get(OptionsDefine.ENV_REGION):
            conf[OptionsDefine.Region] = os.environ.get(OptionsDefine.ENV_REGION)

        if os.environ.get(OptionsDefine.ENV_ROLE_ARN) and os.environ.get(OptionsDefine.ENV_ROLE_SESSION_NAME):
            cred[OptionsDefine.RoleArn] = os.environ.get(OptionsDefine.ENV_ROLE_ARN)
            cred[OptionsDefine.RoleSessionName] = os.environ.get(OptionsDefine.ENV_ROLE_SESSION_NAME)

    for param in g_param.keys():
        if g_param[param] is None:
            if param in [OptionsDefine.SecretKey, OptionsDefine.SecretId, OptionsDefine.Token]:
                if param in cred:
                    g_param[param] = cred[param]
                elif not g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
                    raise ConfigurationError("%s is invalid" % param)
            elif param in [OptionsDefine.Region, OptionsDefine.Output]:
                if param in conf:
                    g_param[param] = conf[param]
                else:
                    raise ConfigurationError("%s is invalid" % param)
            elif param.replace('_', '-') in [OptionsDefine.RoleArn, OptionsDefine.RoleSessionName]:
                if param.replace('_', '-') in cred:
                    g_param[param] = cred[param.replace('_', '-')]

    try:
        if g_param[OptionsDefine.ServiceVersion]:
            g_param[OptionsDefine.Version] = "v" + g_param[OptionsDefine.ServiceVersion].replace('-', '')
        else:
            version = conf["btoe"][OptionsDefine.Version]
            g_param[OptionsDefine.Version] = "v" + version.replace('-', '')

        if g_param[OptionsDefine.Endpoint] is None:
            g_param[OptionsDefine.Endpoint] = conf["btoe"][OptionsDefine.Endpoint]
    except Exception as err:
        raise ConfigurationError("config file:%s error, %s" % (conf_path, str(err)))

    if g_param[OptionsDefine.Version] not in AVAILABLE_VERSION_LIST:
        raise Exception("available versions: %s" % " ".join(AVAILABLE_VERSION_LIST))

    if g_param[OptionsDefine.Waiter]:
        param = eval(g_param[OptionsDefine.Waiter])
        if 'expr' not in param:
            raise Exception('`expr` in `--waiter` must be defined')
        if 'to' not in param:
            raise Exception('`to` in `--waiter` must be defined')
        if 'timeout' not in param:
            if 'waiter' in conf and 'timeout' in conf['waiter']:
                param['timeout'] = conf['waiter']['timeout']
            else:
                param['timeout'] = 180
        if 'interval' not in param:
            if 'waiter' in conf and 'interval' in conf['waiter']:
                param['interval'] = conf['waiter']['interval']
            else:
                param['interval'] = 5
        param['interval'] = min(param['interval'], param['timeout'])
        g_param['OptionsDefine.WaiterInfo'] = param

    # When reading field values from the configuration file, json.load in
    # Python 2 returns unicode values, so convert their type here.
    if six.PY2:
        for key, value in g_param.items():
            if isinstance(value, six.text_type):
                g_param[key] = value.encode('utf-8')
    return g_param
# ---------------------------------------------------------------------------
# File: Macropad_Hotkeys/macros/minecraft-pe-equip.py
# Repo: gamblor21/Adafruit_Learning_System_Guides (MIT license)
# ---------------------------------------------------------------------------
# MACROPAD Hotkeys example: Minecraft Effects (Creative) for Bedrock Edition
# Note: Must enable "full keyboad gameplay" to equip armor automatically.
# This is found under "settings", then "keyboard and mouse".
# NOTE: There is a line length limit (? ~100 char ?). Exceeding that limit
# appears to result in silent failure. Therefore, the key sequences are
# split across multiple lines.
from adafruit_hid.keycode import Keycode # REQUIRED if using Keycode.* values
# See https://minecraft.fandom.com/wiki/Effect
# Unfortunately, bedrock edition has no single command that both
# gives an item and enchants it. Thus, have to place the item in
# the player's inventory slot, enchant it, then equip it.
#
# As a result, it is probably better to learn on less complex
# macro files before attempting to adjust settings in this one.
DELAY_AFTER_COMMAND = 0.75
DELAY_AFTER_SLASH = 0.80 # required so minecraft has time to bring up command screen
DELAY_BEFORE_RETURN = 0.10 # give minecraft time to show all the keys pressed...
# If "full-keyboard gameplay" is not enabled, armor can be left in inventory
# CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM = Keycode.PAGE_UP
CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM = Keycode.E
app = {
    'name': 'Minecraft PE (equip)',
    'macros': [
        (0x003000, 'helm', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_helmet',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s protection 4',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s respiration 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s aqua_affinity 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM]),
        (0x003000, 'elytra', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_chestplate',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM]),
        (0x003000, 'legs', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_leggings',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s protection 4',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM]),
        (0x003000, 'boots', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_boots',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s protection 4',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s feather_falling 4',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s depth_strider 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s soul_speed 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM]),
        (0x003000, 'frosty', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_boots',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s protection 4',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s feather_falling 4',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s frost_walker 2',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s soul_speed 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            CONFIGURABLE_KEY_EQUIP_CURRENTLY_HELD_ITEM]),
        (0x300000, 'feedme', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_sword',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s fire_aspect 2',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s knockback 2',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s looting 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s sharpness 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
        (0x300000, 'excal', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_sword',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s fire_aspect 2',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s knockback 2',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s looting 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s sharpness 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
        (0x300000, 'trident', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy trident',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s loyalty 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s channeling 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s riptide 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s impaling 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
        (0x300000, 'bow', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy bow',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s power 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s punch 2',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
        (0x000030, 'silky', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_pickaxe',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s efficiency 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s silk_touch 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
        (0x000030, 'pickme', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_pickaxe',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s efficiency 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s fortune 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
        (0x000030, 'axe', [
            '/', DELAY_AFTER_SLASH,
            'replaceitem entity @s slot.weapon.mainhand 0 destroy netherite_axe',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s mending 1',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s fortune 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s efficiency 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s sharpness 5',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            '/', DELAY_AFTER_SLASH,
            'enchant @s unbreaking 3',
            DELAY_BEFORE_RETURN, Keycode.RETURN, -Keycode.RETURN, DELAY_AFTER_COMMAND,
            Keycode.PAGE_UP, -Keycode.PAGE_UP]),
    ]
}
# ---------------------------------------------------------------------------
# File: pacfish/iohandler/__init__.py
# Repo: IPASC/DataConversionTool (MIT / BSD-3-Clause licenses)
# ---------------------------------------------------------------------------
from pacfish.iohandler.file_reader import load_data
from pacfish.iohandler.file_writer import write_data
# ---------------------------------------------------------------------------
# File: TankBattle_AllVersions/Tank_v1.3.py
# Repo: zfang399/Python_minigames (MIT license)
# ---------------------------------------------------------------------------
#Tank Battle
#Game Imports
import pygame
from pygame.locals import *
import sys
import random
import time
from gameobjects.vector2 import Vector2
pygame.mixer.pre_init(44100,-16,2,1024*4)
pygame.init()
pygame.mixer.set_num_channels(8)
#Play surface Intialization
screen=pygame.display.set_mode((640,480),0,32)
pygame.display.set_caption('Tank Battle')
#FPS Controller
clock=pygame.time.Clock()
#Load sound files
fire_sound=pygame.mixer.Sound("blip.wav")
hit_sound=pygame.mixer.Sound("gameover.wav")
bgm_sound=pygame.mixer.Sound("bgm.wav")
#Game Instructions function
def GameIns():
    lastbgm=pygame.time.get_ticks()-bgm_sound.get_length()*1000
    while True:
        if pygame.time.get_ticks()-lastbgm>bgm_sound.get_length()*1000:
            lastbgm=pygame.time.get_ticks()
            bgm_sound.play()
        #check if quit
        for event in pygame.event.get():
            if event.type==QUIT:
                pygame.quit()
                sys.exit()
        #show game title
        screen.fill((255,255,255))
        title_font=pygame.font.SysFont('arial',72)
        gametitle_surface=title_font.render('TANK BATTLE',True,(54,168,196))
        gametitle_rect=gametitle_surface.get_rect()
        gametitle_rect.midtop=(320,120)
        screen.blit(gametitle_surface,gametitle_rect)
        #show start game instructions
        text_font=pygame.font.SysFont('arial',60)
        single_surface=text_font.render('1P --- Press 1',True,(244,131,66))
        single_rect=single_surface.get_rect()
        single_rect.midtop=(320,240)
        screen.blit(single_surface,single_rect)
        multi_surface=text_font.render('2P --- Press 2',True,(244,131,66))
        multi_rect=multi_surface.get_rect()
        multi_rect.midtop=(320,280)
        screen.blit(multi_surface,multi_rect)
        pygame.display.update()
        #choose game mode
        pressed_keys=pygame.key.get_pressed()
        if pressed_keys[K_2]:
            GameMulti()
        elif pressed_keys[K_1]:
            GameSingle()
        elif pressed_keys[K_ESCAPE]:
            pygame.event.post(pygame.event.Event(pygame.QUIT))
#Show health function
def PrintLife(ahealth=0,bhealth=0,multi=True):
    text_font=pygame.font.SysFont('arial',32)
    if multi:
        alife_surface=text_font.render('Player A',True,(249,175,47))
        alife_rect=alife_surface.get_rect()
        alife_rect.midtop=(50,15)
        screen.blit(alife_surface,alife_rect)
        blife_surface=text_font.render('Player B',True,(47,206,249))
        blife_rect=blife_surface.get_rect()
        blife_rect.midtop=(590,15)
        screen.blit(blife_surface,blife_rect)
    else:
        alife_surface=text_font.render('Player',True,(249,175,47))
        alife_rect=alife_surface.get_rect()
        alife_rect.midtop=(50,15)
        screen.blit(alife_surface,alife_rect)
        blife_surface=text_font.render('Computer',True,(47,206,249))
        blife_rect=blife_surface.get_rect()
        blife_rect.midtop=(590,15)
        screen.blit(blife_surface,blife_rect)
    #show Tank A health
    ap=0
    yy=50
    while ap<ahealth:
        pygame.draw.rect(screen,(255,0,0),pygame.Rect(5+15*ap,yy,15,10))
        ap+=1
    #show Tank B health
    bp=0
    yy=50
    while bp<bhealth:
        pygame.draw.rect(screen,(255,0,0),pygame.Rect(620-15*bp,yy,15,10))
        bp+=1
#Multi-player
def GameMulti():
    bgm_sound.fadeout(1500)
    #Tank A settings
    a_color=pygame.Color(249,175,47)
    a_dir='r'
    a_length=15
    a_width=15
    a_barrel=6
    a_barrelw=4
    a_cpos=[100,240]
    a_speed=5
    a_cd=500
    ahealth=5
    ahealth_max=5
    ainvi=False
    ainvi_start=0
    #Tank B settings
    b_color=pygame.Color(47,206,249)
    b_dir='l'
    b_length=15
    b_width=15
    b_barrel=6
    b_barrelw=4
    b_cpos=[540,240]
    b_speed=5
    b_cd=500
    bhealth=5
    bhealth_max=5
    binvi=False
    binvi_start=0
    #Tank A bullet settings
    abul_speed=10
    abul_pos=[]
    abul_side=4
    abul_color=pygame.Color(244,128,66)
    a_last=pygame.time.get_ticks()
    #Tank B bullet settings
    bbul_speed=10
    bbul_pos=[]
    bbul_side=4
    bbul_color=pygame.Color(197,66,244)
    b_last=a_last
    #Items settings
    #ordinary items (refreshes faster)
    ordi_last=a_last
    ordi_cd=10*1000
    ordi_color=pygame.Color(242,87,216)
    ordi_pos=[]
    ordi_side=6
    #options:
    #restore health by 1
    #faster bullet speed
    #reduces bullet firing interval
    #rare items (refreshes slower)
    rare_color=pygame.Color(188,71,186)
    rare_last=a_last
    rare_cd=23*1000
    rare_pos=[]
    rare_side=8
    #options:
    #increase maximum health
    #increase moving speed
    #invincible for 6 secs
    #main loop
    while True:
        #check if quit
        for event in pygame.event.get():
            if event.type==QUIT:
                pygame.quit()
                sys.exit()
        pressed_keys=pygame.key.get_pressed()
        #check if quit
        if pressed_keys[K_ESCAPE]:
            pygame.event.post(pygame.event.Event(pygame.QUIT))
        #get Tank A's new direction
        achange=False
        amove=True
        if pressed_keys[K_a] and a_dir!='l':
            a_dir='l'
            achange=True
        elif pressed_keys[K_d] and a_dir!='r':
            a_dir='r'
            achange=True
        elif pressed_keys[K_w] and a_dir!='u':
            a_dir='u'
            achange=True
        elif pressed_keys[K_s] and a_dir!='d':
            a_dir='d'
            achange=True
        if not (pressed_keys[K_a] or pressed_keys[K_d] or pressed_keys[K_w] or pressed_keys[K_s]):
            amove=False
        #get Tank B's new direction
        bchange=False
        bmove=True
        if pressed_keys[K_LEFT] and b_dir!='l':
            b_dir='l'
            bchange=True
        elif pressed_keys[K_RIGHT] and b_dir!='r':
            b_dir='r'
            bchange=True
        elif pressed_keys[K_UP] and b_dir!='u':
            b_dir='u'
            bchange=True
        elif pressed_keys[K_DOWN] and b_dir!='d':
            b_dir='d'
            bchange=True
        if not (pressed_keys[K_LEFT] or pressed_keys[K_RIGHT] or pressed_keys[K_UP] or pressed_keys[K_DOWN]):
            bmove=False
        #Tank A position update
        if (not achange) and amove:
            #move if direction hasn't changed
            if a_dir=='l':
                a_cpos[0]-=a_speed
                if a_cpos[0]-(a_length/2)-a_barrel<0:
                    a_cpos[0]=(a_length/2)+a_barrel
            elif a_dir=='r':
                a_cpos[0]+=a_speed
                if a_cpos[0]+(a_length/2)+a_barrel>640:
                    a_cpos[0]=640-(a_length/2)-a_barrel
            elif a_dir=='u':
                a_cpos[1]-=a_speed
                if a_cpos[1]-(a_length/2)-a_barrel<0:
                    a_cpos[1]=(a_length/2)+a_barrel
            elif a_dir=='d':
                a_cpos[1]+=a_speed
                if a_cpos[1]+(a_length/2)+a_barrel>480:
                    a_cpos[1]=480-(a_length/2)-a_barrel
        #Tank B position update
        if (not bchange) and bmove:
            #move if direction hasn't changed
            if b_dir=='l':
                b_cpos[0]-=b_speed
                if b_cpos[0]-(b_length/2)-b_barrel<0:
                    b_cpos[0]=(b_length/2)+b_barrel
            elif b_dir=='r':
                b_cpos[0]+=b_speed
                if b_cpos[0]+(b_length/2)+b_barrel>640:
                    b_cpos[0]=640-(b_length/2)-b_barrel
            elif b_dir=='u':
                b_cpos[1]-=b_speed
                if b_cpos[1]-(b_length/2)-b_barrel<0:
                    b_cpos[1]=(b_length/2)+b_barrel
            elif b_dir=='d':
                b_cpos[1]+=b_speed
                if b_cpos[1]+(b_length/2)+b_barrel>480:
                    b_cpos[1]=480-(b_length/2)-b_barrel
        #Tank A bullets position update
        for bul in abul_pos:
            if bul[2]=='l':
                bul[0]-=abul_speed
            elif bul[2]=='r':
                bul[0]+=abul_speed
            elif bul[2]=='u':
                bul[1]-=abul_speed
            elif bul[2]=='d':
                bul[1]+=abul_speed
            if bul[0]<0 or bul[0]>640-abul_side or bul[1]<0 or bul[1]>480-abul_side:
                abul_pos.remove(bul)
        #Tank B bullets position update
        for bul in bbul_pos:
            if bul[2]=='l':
                bul[0]-=bbul_speed
            elif bul[2]=='r':
                bul[0]+=bbul_speed
            elif bul[2]=='u':
                bul[1]-=bbul_speed
            elif bul[2]=='d':
                bul[1]+=bbul_speed
            if bul[0]<0 or bul[0]>640-bbul_side or bul[1]<0 or bul[1]>480-bbul_side:
                bbul_pos.remove(bul)
        #check if A shoots new bullet
        now_time=pygame.time.get_ticks()
        if pressed_keys[K_c] and now_time-a_last>a_cd:
            a_last=now_time
            if a_dir=='l':
                abul_pos.append([a_cpos[0]-(a_length/2)-a_barrel-abul_side,a_cpos[1]-(a_barrelw/2),'l'])
            elif a_dir=='r':
                abul_pos.append([a_cpos[0]+(a_length/2)+a_barrel,a_cpos[1]-(a_barrelw/2),'r'])
            elif a_dir=='u':
                abul_pos.append([a_cpos[0]-(a_barrelw/2),a_cpos[1]-a_barrel-(a_length/2)-abul_side,'u'])
            elif a_dir=='d':
                abul_pos.append([a_cpos[0]-(a_barrelw/2),a_cpos[1]+(a_length/2)+a_barrel,'d'])
            if abul_pos[-1][0]<0 or abul_pos[-1][0]>640-abul_side or abul_pos[-1][1]<0 or abul_pos[-1][1]>480-abul_side:
                abul_pos.remove(abul_pos[-1])
            fire_sound.play()
        #check if B shoots new bullet
        if pressed_keys[K_l] and now_time-b_last>b_cd:
            b_last=now_time
            if b_dir=='l':
                bbul_pos.append([b_cpos[0]-(b_length/2)-b_barrel-bbul_side,b_cpos[1]-(b_barrelw/2),'l'])
            elif b_dir=='r':
                bbul_pos.append([b_cpos[0]+(b_length/2)+b_barrel,b_cpos[1]-(b_barrelw/2),'r'])
            elif b_dir=='u':
                bbul_pos.append([b_cpos[0]-(b_barrelw/2),b_cpos[1]-b_barrel-(b_length/2)-bbul_side,'u'])
            elif b_dir=='d':
                bbul_pos.append([b_cpos[0]-(b_barrelw/2),b_cpos[1]+(b_length/2)+b_barrel,'d'])
            if bbul_pos[-1][0]<0 or bbul_pos[-1][0]>640-bbul_side or bbul_pos[-1][1]<0 or bbul_pos[-1][1]>480-bbul_side:
                bbul_pos.remove(bbul_pos[-1])
            fire_sound.play()
        #check if it is time for a new ordinary item
        if now_time-ordi_last>ordi_cd:
            ordi_last=now_time
            ordix=random.randint(0,640-ordi_side)
            ordiy=random.randint(0,480-ordi_side)
            proper=True
            if abs(a_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+a_length/2) and abs(a_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+a_length/2):
                proper=False
            if abs(b_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+b_length/2) and abs(b_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+b_length/2):
                proper=False
            for rarep in rare_pos:
                if abs(rarep[0]+(rare_side/2)-ordix-(ordi_side/2))<(rare_side/2+ordi_side/2) and abs(rarep[1]+(rare_side/2)-ordiy-(ordi_side/2))<(rare_side/2+ordi_side/2):
                    proper=False
                    break
            while not proper:
                ordix=random.randint(0,640-ordi_side)
                ordiy=random.randint(0,480-ordi_side)
                if not (abs(a_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+a_length/2) and abs(a_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+a_length/2)):
                    proper=True
                if abs(b_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+b_length/2) and abs(b_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+b_length/2):
                    proper=False
                for rarep in rare_pos:
                    if abs(rarep[0]+(rare_side/2)-ordix-(ordi_side/2))<(rare_side/2+ordi_side/2) and abs(rarep[1]+(rare_side/2)-ordiy-(ordi_side/2))<(rare_side/2+ordi_side/2):
                        proper=False
                        break
            ordi_pos.append([ordix,ordiy])
        #check if it is time for a new rare item
        if now_time-rare_last>rare_cd:
            rare_last=now_time
            rarex=random.randint(0,640-rare_side)
            rarey=random.randint(0,480-rare_side)
            proper=True
            if abs(a_cpos[0]-rarex-(rare_side/2))<(rare_side/2+a_length/2) and abs(a_cpos[1]-rarey-(rare_side/2))<(rare_side/2+a_length/2):
proper=False
if abs(b_cpos[0]-rarex-(rare_side/2))<(rare_side/2+b_length/2) and abs(b_cpos[1]-rarey-(rare_side/2))<(rare_side/2+b_length/2):
proper=False
for ordip in ordi_pos:
if abs(ordip[0]+(ordi_side/2)-rarex-(rare_side/2))<(rare_side/2+ordi_side/2) and abs(ordip[1]+(ordi_side/2)-rarey-(rare_side/2))<(rare_side/2+ordi_side/2):
proper=False
break
while not proper:
rarex=random.randint(0,640-rare_side)
rarey=random.randint(0,480-rare_side)
if not (abs(a_cpos[0]-rarex-(rare_side/2))<(rare_side/2+a_length/2) and abs(a_cpos[1]-rarey-(rare_side/2))<(rare_side/2+a_length/2)):
proper=True
if abs(b_cpos[0]-rarex-(rare_side/2))<(rare_side/2+b_length/2) and abs(b_cpos[1]-rarey-(rare_side/2))<(rare_side/2+b_length/2):
proper=False
for ordip in ordi_pos:
if abs(ordip[0]+(ordi_side/2)-rarex-(rare_side/2))<(rare_side/2+ordi_side/2) and abs(ordip[1]+(ordi_side/2)-rarey-(rare_side/2))<(rare_side/2+ordi_side/2):
proper=False
break
rare_pos.append([rarex,rarey])
#check if the bullets collide
#iterate over copies and break after a hit, so removals cannot skip bullets or remove the same bullet twice
for bula in abul_pos[:]:
for bulb in bbul_pos[:]:
if (bula[0]==bulb[0] and abs(bula[1]-bulb[1])<=10) or (bula[1]==bulb[1] and abs(bula[0]-bulb[0])<=10) or (abs(bula[0]-bulb[0])<abul_side and abs(bula[1]-bulb[1])<abul_side):
abul_pos.remove(bula)
bbul_pos.remove(bulb)
hit_sound.play()
break
#check if A gets any items
#ordinary
for ordip in ordi_pos[:]:
if ((a_dir=='l' or a_dir=='r') and abs(ordip[0]-a_cpos[0])<((a_length+ordi_side)/2) and abs(ordip[1]-a_cpos[1])<((a_width+ordi_side)/2)) or ((a_dir=='u' or a_dir=='d') and abs(ordip[0]-a_cpos[0])<((a_width+ordi_side)/2) and abs(ordip[1]-a_cpos[1])<((a_length+ordi_side)/2)):
choice=random.randint(1,3)
if choice==1:
#restore health by 1
if ahealth<ahealth_max:
ahealth=ahealth+1
elif choice==2:
#faster bullet speed
abul_speed+=2
else:
#reduces bullet firing interval
a_cd-=50
ordi_pos.remove(ordip)
#rare
for rarep in rare_pos[:]:
if ((a_dir=='l' or a_dir=='r') and abs(rarep[0]-a_cpos[0])<((a_length+rare_side)/2) and abs(rarep[1]-a_cpos[1])<((a_width+rare_side)/2)) or ((a_dir=='u' or a_dir=='d') and abs(rarep[0]-a_cpos[0])<((a_width+rare_side)/2) and abs(rarep[1]-a_cpos[1])<((a_length+rare_side)/2)):
choice=random.randint(1,3)
if choice==1:
#increase maximum health
ahealth_max+=1
ahealth+=1
elif choice==2:
#increase moving speed
a_speed+=1
else:
#invincible for 6 secs
ainvi=True
ainvi_start=now_time
a_color=pygame.Color(0,0,0)
rare_pos.remove(rarep)
#check if B gets any items
#ordinary
for ordip in ordi_pos[:]:
if ((b_dir=='l' or b_dir=='r') and abs(ordip[0]-b_cpos[0])<((b_length+ordi_side)/2) and abs(ordip[1]-b_cpos[1])<((b_width+ordi_side)/2)) or ((b_dir=='u' or b_dir=='d') and abs(ordip[0]-b_cpos[0])<((b_width+ordi_side)/2) and abs(ordip[1]-b_cpos[1])<((b_length+ordi_side)/2)):
choice=random.randint(1,3)
if choice==1:
#restore health by 1
if bhealth<bhealth_max:
bhealth=bhealth+1
elif choice==2:
#faster bullet speed
bbul_speed+=2
else:
#reduces bullet firing interval
b_cd-=50
ordi_pos.remove(ordip)
#rare
for rarep in rare_pos[:]:
if ((b_dir=='l' or b_dir=='r') and abs(rarep[0]-b_cpos[0])<((b_length+rare_side)/2) and abs(rarep[1]-b_cpos[1])<((b_width+rare_side)/2)) or ((b_dir=='u' or b_dir=='d') and abs(rarep[0]-b_cpos[0])<((b_width+rare_side)/2) and abs(rarep[1]-b_cpos[1])<((b_length+rare_side)/2)):
choice=random.randint(1,3)
if choice==1:
#increase maximum health
bhealth_max+=1
bhealth+=1
elif choice==2:
#increase moving speed
b_speed+=1
else:
#invincible for 6 secs
binvi=True
binvi_start=now_time
b_color=pygame.Color(0,0,0)
rare_pos.remove(rarep)
#check if A's bullets hit B
for bula in abul_pos[:]:
if b_dir=='l' or b_dir=='r':
if bula[0]-b_cpos[0]<((b_length+bbul_side)/2) and bula[0]-b_cpos[0]>-((b_length+bbul_side)/2) and bula[1]-b_cpos[1]<((b_width+bbul_side)/2) and bula[1]-b_cpos[1]>-((b_width+bbul_side)/2):
if not binvi:
bhealth-=1
abul_pos.remove(bula)
hit_sound.play()
elif b_dir=='u' or b_dir=='d':
if bula[0]-b_cpos[0]<((b_width+bbul_side)/2) and bula[0]-b_cpos[0]>-((b_width+bbul_side)/2) and bula[1]-b_cpos[1]<((b_length+bbul_side)/2) and bula[1]-b_cpos[1]>-((b_length+bbul_side)/2):
if not binvi:
bhealth-=1
abul_pos.remove(bula)
hit_sound.play()
#check if B's bullets hit A
for bulb in bbul_pos[:]:
if a_dir=='l' or a_dir=='r':
if bulb[0]-a_cpos[0]<((a_length+abul_side)/2) and bulb[0]-a_cpos[0]>-((a_length+abul_side)/2) and bulb[1]-a_cpos[1]<((a_width+abul_side)/2) and bulb[1]-a_cpos[1]>-((a_width+abul_side)/2):
if not ainvi:
ahealth-=1
bbul_pos.remove(bulb)
hit_sound.play()
elif a_dir=='u' or a_dir=='d':
if bulb[0]-a_cpos[0]<((a_width+abul_side)/2) and bulb[0]-a_cpos[0]>-((a_width+abul_side)/2) and bulb[1]-a_cpos[1]<((a_length+abul_side)/2) and bulb[1]-a_cpos[1]>-((a_length+abul_side)/2):
if not ainvi:
ahealth-=1
bbul_pos.remove(bulb)
hit_sound.play()
#check if A is still invincible
if ainvi and now_time-ainvi_start>6000:
a_color=pygame.Color(249,175,47)
ainvi=False
#check if B is still invincible
if binvi and now_time-binvi_start>6000:
b_color=pygame.Color(47,206,249)
binvi=False
#Draw all the elements
screen.fill((255,255,255))
#Draw Tank A
if a_dir=='l':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_length/2),a_cpos[1]-(a_width/2),a_length,a_width))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_length/2)-a_barrel,a_cpos[1]-(a_barrelw/2),a_barrel,a_barrelw))
elif a_dir=='r':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_length/2),a_cpos[1]-(a_width/2),a_length,a_width))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]+(a_length/2),a_cpos[1]-(a_barrelw/2),a_barrel,a_barrelw))
elif a_dir=='u':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_width/2),a_cpos[1]-(a_length/2),a_width,a_length))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_barrelw/2),a_cpos[1]-a_barrel-(a_length/2),a_barrelw,a_barrel))
elif a_dir=='d':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_width/2),a_cpos[1]-(a_length/2),a_width,a_length))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_barrelw/2),a_cpos[1]+(a_length/2),a_barrelw,a_barrel))
#Draw Tank B
if b_dir=='l':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_length/2),b_cpos[1]-(b_width/2),b_length,b_width))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_length/2)-b_barrel,b_cpos[1]-(b_barrelw/2),b_barrel,b_barrelw))
elif b_dir=='r':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_length/2),b_cpos[1]-(b_width/2),b_length,b_width))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]+(b_length/2),b_cpos[1]-(b_barrelw/2),b_barrel,b_barrelw))
elif b_dir=='u':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_width/2),b_cpos[1]-(b_length/2),b_width,b_length))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_barrelw/2),b_cpos[1]-b_barrel-(b_length/2),b_barrelw,b_barrel))
elif b_dir=='d':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_width/2),b_cpos[1]-(b_length/2),b_width,b_length))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_barrelw/2),b_cpos[1]+(b_length/2),b_barrelw,b_barrel))
#Draw Tank A's bullets
for bu in abul_pos:
pygame.draw.rect(screen,abul_color,pygame.Rect(bu[0],bu[1],abul_side,abul_side))
#Draw Tank B's bullets
for bu in bbul_pos:
pygame.draw.rect(screen,bbul_color,pygame.Rect(bu[0],bu[1],bbul_side,bbul_side))
#Draw ordinary items
for ordip in ordi_pos:
pygame.draw.rect(screen,ordi_color,pygame.Rect(ordip[0],ordip[1],ordi_side,ordi_side))
#Draw rare items
for rarep in rare_pos:
pygame.draw.rect(screen,rare_color,pygame.Rect(rarep[0],rarep[1],rare_side,rare_side))
#show remaining life
PrintLife(ahealth,bhealth,True)
if ahealth==0 or bhealth==0:
GameOver(ahealth,bhealth,True)
pygame.display.update()
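The tank/bullet/item overlap tests in the loop above all repeat the same axis-aligned bounding-box (AABB) pattern: two squares overlap when the distance between their centres is below half the sum of their sides on both axes. A minimal standalone sketch of that check (the name `aabb_overlap` is ours; the original game inlines the expressions instead of using a helper):

```python
def aabb_overlap(c1, side1, c2, side2):
    """Return True if two axis-aligned squares, given by their centre
    coordinates and side lengths, overlap on both the x and y axes."""
    half_sum = (side1 + side2) / 2
    return abs(c1[0] - c2[0]) < half_sum and abs(c1[1] - c2[1]) < half_sum
```

Note that with a strict `<`, squares whose edges merely touch do not count as overlapping, matching the strict comparisons used in the game code.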
#Versus AI
def GameSingle():
#Tank A settings
a_color=pygame.Color(249,175,47)
a_dir='r'
a_length=15
a_width=15
a_barrel=6
a_barrelw=4
a_cpos=[100,240]
a_speed=5
a_cd=500
ahealth=5
ahealth_max=5
ainvi=False
ainvi_start=0
#Tank B settings
b_color=pygame.Color(47,206,249)
b_dir='l'
b_length=15
b_width=15
b_barrel=6
b_barrelw=4
b_cpos=[540,240]
b_speed=5
b_cd=550
bhealth=5
bmove=True
bhealth_max=5
binvi=False
binvi_start=0
#Tank A bullet settings
abul_speed=10
abul_pos=[]
abul_side=4
abul_color=pygame.Color(244,128,66)
a_last=pygame.time.get_ticks()
#Tank B bullet settings
bbul_speed=10
bbul_pos=[]
bbul_side=4
bbul_color=pygame.Color(197,66,244)
b_last=a_last
#Items settings
#ordinary items (refreshes faster)
ordi_last=a_last
ordi_cd=10*1000
ordi_color=pygame.Color(242,87,216)
ordi_pos=[]
ordi_side=6
#options:
#restore health by 1
#faster bullet speed
#reduces bullet firing interval
#rare items(refreshes slower)
rare_color=pygame.Color(188,71,186)
rare_last=a_last
rare_cd=23*1000
rare_pos=[]
rare_side=8
#options
#increase maximum health
#increase moving speed
#invincible for 6 secs
while True:
#check if quit
for event in pygame.event.get():
if event.type==QUIT:
pygame.quit()
sys.exit()
pressed_keys=pygame.key.get_pressed()
#check if quit
if pressed_keys[K_ESCAPE]:
pygame.event.post(pygame.event.Event(pygame.QUIT))
#get Tank A's new direction
achange=False
amove=True
if pressed_keys[K_a] and a_dir!='l':
a_dir='l'
achange=True
elif pressed_keys[K_d] and a_dir!='r':
a_dir='r'
achange=True
elif pressed_keys[K_w] and a_dir!='u':
a_dir='u'
achange=True
elif pressed_keys[K_s] and a_dir!='d':
a_dir='d'
achange=True
if not (pressed_keys[K_a] or pressed_keys[K_d] or pressed_keys[K_w] or pressed_keys[K_s]):
amove=False
#Tank A position update
if (not achange) and amove:
#move if direction hasn't changed
if a_dir=='l':
a_cpos[0]-=a_speed
if a_cpos[0]-(a_length/2)-a_barrel<0:
a_cpos[0]=(a_length/2)+a_barrel
elif a_dir=='r':
a_cpos[0]+=a_speed
if a_cpos[0]+(a_length/2)+a_barrel>640:
a_cpos[0]=640-(a_length/2)-a_barrel
elif a_dir=='u':
a_cpos[1]-=a_speed
if a_cpos[1]-(a_length/2)-a_barrel<0:
a_cpos[1]=(a_length/2)+a_barrel
elif a_dir=='d':
a_cpos[1]+=a_speed
if a_cpos[1]+(a_length/2)+a_barrel>480:
a_cpos[1]=480-(a_length/2)-a_barrel
#Tank B position update
if bmove:
if b_dir=='l':
b_cpos[0]-=b_speed
if b_cpos[0]-(b_length/2)-b_barrel<0:
b_cpos[0]=(b_length/2)+b_barrel
elif b_dir=='r':
b_cpos[0]+=b_speed
if b_cpos[0]+(b_length/2)+b_barrel>640:
b_cpos[0]=640-(b_length/2)-b_barrel
elif b_dir=='u':
b_cpos[1]-=b_speed
if b_cpos[1]-(b_length/2)-b_barrel<0:
b_cpos[1]=(b_length/2)+b_barrel
elif b_dir=='d':
b_cpos[1]+=b_speed
if b_cpos[1]+(b_length/2)+b_barrel>480:
b_cpos[1]=480-(b_length/2)-b_barrel
#Tank A bullets position update
for bul in abul_pos[:]: #iterate over a copy: removing while iterating skips bullets
if bul[2]=='l':
bul[0]-=abul_speed
elif bul[2]=='r':
bul[0]+=abul_speed
elif bul[2]=='u':
bul[1]-=abul_speed
elif bul[2]=='d':
bul[1]+=abul_speed
if bul[0]<0 or bul[0]>640-abul_side or bul[1]<0 or bul[1]>480-abul_side:
abul_pos.remove(bul)
#Tank B bullets position update
for bul in bbul_pos[:]: #iterate over a copy: removing while iterating skips bullets
if bul[2]=='l':
bul[0]-=bbul_speed
elif bul[2]=='r':
bul[0]+=bbul_speed
elif bul[2]=='u':
bul[1]-=bbul_speed
elif bul[2]=='d':
bul[1]+=bbul_speed
if bul[0]<0 or bul[0]>640-bbul_side or bul[1]<0 or bul[1]>480-bbul_side:
bbul_pos.remove(bul)
#check if it is time for a new ordinary item
now_time=pygame.time.get_ticks()
if now_time-ordi_last>ordi_cd:
ordi_last=now_time
ordix=random.randint(0,640-ordi_side)
ordiy=random.randint(0,480-ordi_side)
proper=True
if abs(a_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+a_length/2) and abs(a_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+a_length/2):
proper=False
if abs(b_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+b_length/2) and abs(b_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+b_length/2):
proper=False
for rarep in rare_pos:
if abs(rarep[0]+(rare_side/2)-ordix-(ordi_side/2))<(rare_side/2+ordi_side/2) and abs(rarep[1]+(rare_side/2)-ordiy-(ordi_side/2))<(rare_side/2+ordi_side/2):
proper=False
break
while not proper:
ordix=random.randint(0,640-ordi_side)
ordiy=random.randint(0,480-ordi_side)
if not (abs(a_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+a_length/2) and abs(a_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+a_length/2)):
proper=True
if abs(b_cpos[0]-ordix-(ordi_side/2))<(ordi_side/2+b_length/2) and abs(b_cpos[1]-ordiy-(ordi_side/2))<(ordi_side/2+b_length/2):
proper=False
for rarep in rare_pos:
if abs(rarep[0]+(rare_side/2)-ordix-(ordi_side/2))<(rare_side/2+ordi_side/2) and abs(rarep[1]+(rare_side/2)-ordiy-(ordi_side/2))<(rare_side/2+ordi_side/2):
proper=False
break
ordi_pos.append([ordix,ordiy])
#check if it is time for a new rare item
if now_time-rare_last>rare_cd:
rare_last=now_time
rarex=random.randint(0,640-rare_side)
rarey=random.randint(0,480-rare_side)
proper=True
if abs(a_cpos[0]-rarex-(rare_side/2))<(rare_side/2+a_length/2) and abs(a_cpos[1]-rarey-(rare_side/2))<(rare_side/2+a_length/2):
proper=False
if abs(b_cpos[0]-rarex-(rare_side/2))<(rare_side/2+b_length/2) and abs(b_cpos[1]-rarey-(rare_side/2))<(rare_side/2+b_length/2):
proper=False
for ordip in ordi_pos:
if abs(ordip[0]+(ordi_side/2)-rarex-(rare_side/2))<(rare_side/2+ordi_side/2) and abs(ordip[1]+(ordi_side/2)-rarey-(rare_side/2))<(rare_side/2+ordi_side/2):
proper=False
break
while not proper:
rarex=random.randint(0,640-rare_side)
rarey=random.randint(0,480-rare_side)
if not (abs(a_cpos[0]-rarex-(rare_side/2))<(rare_side/2+a_length/2) and abs(a_cpos[1]-rarey-(rare_side/2))<(rare_side/2+a_length/2)):
proper=True
if abs(b_cpos[0]-rarex-(rare_side/2))<(rare_side/2+b_length/2) and abs(b_cpos[1]-rarey-(rare_side/2))<(rare_side/2+b_length/2):
proper=False
for ordip in ordi_pos:
if abs(ordip[0]+(ordi_side/2)-rarex-(rare_side/2))<(rare_side/2+ordi_side/2) and abs(ordip[1]+(ordi_side/2)-rarey-(rare_side/2))<(rare_side/2+ordi_side/2):
proper=False
break
rare_pos.append([rarex,rarey])
#check if the bullets collide
#iterate over copies and break after a hit, so removals cannot skip bullets or remove the same bullet twice
for bula in abul_pos[:]:
for bulb in bbul_pos[:]:
if (bula[0]==bulb[0] and abs(bula[1]-bulb[1])<=10) or (bula[1]==bulb[1] and abs(bula[0]-bulb[0])<=10) or (abs(bula[0]-bulb[0])<abul_side and abs(bula[1]-bulb[1])<abul_side):
abul_pos.remove(bula)
bbul_pos.remove(bulb)
hit_sound.play()
break
#check if A gets any items
#ordinary
for ordip in ordi_pos[:]:
if ((a_dir=='l' or a_dir=='r') and abs(ordip[0]-a_cpos[0])<((a_length+ordi_side)/2) and abs(ordip[1]-a_cpos[1])<((a_width+ordi_side)/2)) or ((a_dir=='u' or a_dir=='d') and abs(ordip[0]-a_cpos[0])<((a_width+ordi_side)/2) and abs(ordip[1]-a_cpos[1])<((a_length+ordi_side)/2)):
choice=random.randint(1,3)
if choice==1:
#restore health by 1
if ahealth<ahealth_max:
ahealth=ahealth+1
elif choice==2:
#faster bullet speed
abul_speed+=2
else:
#reduces bullet firing interval
a_cd-=50
ordi_pos.remove(ordip)
#rare
for rarep in rare_pos[:]:
if ((a_dir=='l' or a_dir=='r') and abs(rarep[0]-a_cpos[0])<((a_length+rare_side)/2) and abs(rarep[1]-a_cpos[1])<((a_width+rare_side)/2)) or ((a_dir=='u' or a_dir=='d') and abs(rarep[0]-a_cpos[0])<((a_width+rare_side)/2) and abs(rarep[1]-a_cpos[1])<((a_length+rare_side)/2)):
choice=random.randint(1,3)
if choice==1:
#increase maximum health
ahealth_max+=1
ahealth+=1
elif choice==2:
#increase moving speed
a_speed+=1
else:
#invincible for 6 secs
ainvi=True
ainvi_start=now_time
a_color=pygame.Color(0,0,0)
rare_pos.remove(rarep)
#check if B gets any items
#ordinary
for ordip in ordi_pos[:]:
if ((b_dir=='l' or b_dir=='r') and abs(ordip[0]-b_cpos[0])<((b_length+ordi_side)/2) and abs(ordip[1]-b_cpos[1])<((b_width+ordi_side)/2)) or ((b_dir=='u' or b_dir=='d') and abs(ordip[0]-b_cpos[0])<((b_width+ordi_side)/2) and abs(ordip[1]-b_cpos[1])<((b_length+ordi_side)/2)):
choice=random.randint(1,3)
if choice==1:
#restore health by 1
if bhealth<bhealth_max:
bhealth=bhealth+1
elif choice==2:
#faster bullet speed
bbul_speed+=2
else:
#reduces bullet firing interval
b_cd-=50
ordi_pos.remove(ordip)
#rare
for rarep in rare_pos[:]:
if ((b_dir=='l' or b_dir=='r') and abs(rarep[0]-b_cpos[0])<((b_length+rare_side)/2) and abs(rarep[1]-b_cpos[1])<((b_width+rare_side)/2)) or ((b_dir=='u' or b_dir=='d') and abs(rarep[0]-b_cpos[0])<((b_width+rare_side)/2) and abs(rarep[1]-b_cpos[1])<((b_length+rare_side)/2)):
choice=random.randint(1,3)
if choice==1:
#increase maximum health
bhealth_max+=1
bhealth+=1
elif choice==2:
#increase moving speed
b_speed+=1
else:
#invincible for 6 secs
binvi=True
binvi_start=now_time
b_color=pygame.Color(0,0,0)
rare_pos.remove(rarep)
#check if A's bullets hit B
for bula in abul_pos[:]:
if b_dir=='l' or b_dir=='r':
if bula[0]-b_cpos[0]<((b_length+bbul_side)/2) and bula[0]-b_cpos[0]>-((b_length+bbul_side)/2) and bula[1]-b_cpos[1]<((b_width+bbul_side)/2) and bula[1]-b_cpos[1]>-((b_width+bbul_side)/2):
if not binvi:
bhealth-=1
abul_pos.remove(bula)
hit_sound.play()
elif b_dir=='u' or b_dir=='d':
if bula[0]-b_cpos[0]<((b_width+bbul_side)/2) and bula[0]-b_cpos[0]>-((b_width+bbul_side)/2) and bula[1]-b_cpos[1]<((b_length+bbul_side)/2) and bula[1]-b_cpos[1]>-((b_length+bbul_side)/2):
if not binvi:
bhealth-=1
abul_pos.remove(bula)
hit_sound.play()
#check if B's bullets hit A
for bulb in bbul_pos[:]:
if a_dir=='l' or a_dir=='r':
if bulb[0]-a_cpos[0]<((a_length+abul_side)/2) and bulb[0]-a_cpos[0]>-((a_length+abul_side)/2) and bulb[1]-a_cpos[1]<((a_width+abul_side)/2) and bulb[1]-a_cpos[1]>-((a_width+abul_side)/2):
if not ainvi:
ahealth-=1
bbul_pos.remove(bulb)
hit_sound.play()
elif a_dir=='u' or a_dir=='d':
if bulb[0]-a_cpos[0]<((a_width+abul_side)/2) and bulb[0]-a_cpos[0]>-((a_width+abul_side)/2) and bulb[1]-a_cpos[1]<((a_length+abul_side)/2) and bulb[1]-a_cpos[1]>-((a_length+abul_side)/2):
if not ainvi:
ahealth-=1
bbul_pos.remove(bulb)
hit_sound.play()
#check if A shoots new bullet
if pressed_keys[K_c] and now_time-a_last>a_cd:
a_last=now_time
if a_dir=='l':
abul_pos.append([a_cpos[0]-(a_length/2)-a_barrel-abul_side,a_cpos[1]-(a_barrelw/2),'l'])
elif a_dir=='r':
abul_pos.append([a_cpos[0]+(a_length/2)+a_barrel,a_cpos[1]-(a_barrelw/2),'r'])
elif a_dir=='u':
abul_pos.append([a_cpos[0]-(a_barrelw/2),a_cpos[1]-a_barrel-(a_length/2)-abul_side,'u'])
elif a_dir=='d':
abul_pos.append([a_cpos[0]-(a_barrelw/2),a_cpos[1]+(a_length/2)+a_barrel,'d'])
if abul_pos[-1][0]<0 or abul_pos[-1][0]>640-abul_side or abul_pos[-1][1]<0 or abul_pos[-1][1]>480-abul_side:
abul_pos.remove(abul_pos[-1])
fire_sound.play()
#check if B shoots new bullet
if now_time-b_last>b_cd:
b_last=now_time
if b_dir=='l':
bbul_pos.append([b_cpos[0]-(b_length/2)-b_barrel-bbul_side,b_cpos[1]-(b_barrelw/2),'l'])
elif b_dir=='r':
bbul_pos.append([b_cpos[0]+(b_length/2)+b_barrel,b_cpos[1]-(b_barrelw/2),'r'])
elif b_dir=='u':
bbul_pos.append([b_cpos[0]-(b_barrelw/2),b_cpos[1]-b_barrel-(b_length/2)-bbul_side,'u'])
elif b_dir=='d':
bbul_pos.append([b_cpos[0]-(b_barrelw/2),b_cpos[1]+(b_length/2)+b_barrel,'d'])
if bbul_pos[-1][0]<0 or bbul_pos[-1][0]>640-bbul_side or bbul_pos[-1][1]<0 or bbul_pos[-1][1]>480-bbul_side:
bbul_pos.remove(bbul_pos[-1])
fire_sound.play()
#check if A is still invincible
if ainvi and now_time-ainvi_start>6000:
a_color=pygame.Color(249,175,47)
ainvi=False
#check if B is still invincible
if binvi and now_time-binvi_start>6000:
b_color=pygame.Color(47,206,249)
binvi=False
#Draw all the elements
screen.fill((255,255,255))
#Draw Tank A
if a_dir=='l':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_length/2),a_cpos[1]-(a_width/2),a_length,a_width))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_length/2)-a_barrel,a_cpos[1]-(a_barrelw/2),a_barrel,a_barrelw))
elif a_dir=='r':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_length/2),a_cpos[1]-(a_width/2),a_length,a_width))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]+(a_length/2),a_cpos[1]-(a_barrelw/2),a_barrel,a_barrelw))
elif a_dir=='u':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_width/2),a_cpos[1]-(a_length/2),a_width,a_length))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_barrelw/2),a_cpos[1]-a_barrel-(a_length/2),a_barrelw,a_barrel))
elif a_dir=='d':
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_width/2),a_cpos[1]-(a_length/2),a_width,a_length))
pygame.draw.rect(screen,a_color,pygame.Rect(a_cpos[0]-(a_barrelw/2),a_cpos[1]+(a_length/2),a_barrelw,a_barrel))
#Draw Tank B
if b_dir=='l':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_length/2),b_cpos[1]-(b_width/2),b_length,b_width))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_length/2)-b_barrel,b_cpos[1]-(b_barrelw/2),b_barrel,b_barrelw))
elif b_dir=='r':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_length/2),b_cpos[1]-(b_width/2),b_length,b_width))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]+(b_length/2),b_cpos[1]-(b_barrelw/2),b_barrel,b_barrelw))
elif b_dir=='u':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_width/2),b_cpos[1]-(b_length/2),b_width,b_length))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_barrelw/2),b_cpos[1]-b_barrel-(b_length/2),b_barrelw,b_barrel))
elif b_dir=='d':
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_width/2),b_cpos[1]-(b_length/2),b_width,b_length))
pygame.draw.rect(screen,b_color,pygame.Rect(b_cpos[0]-(b_barrelw/2),b_cpos[1]+(b_length/2),b_barrelw,b_barrel))
#Draw Tank A's bullets
for bu in abul_pos:
pygame.draw.rect(screen,abul_color,pygame.Rect(bu[0],bu[1],abul_side,abul_side))
#Draw Tank B's bullets
for bu in bbul_pos:
pygame.draw.rect(screen,bbul_color,pygame.Rect(bu[0],bu[1],bbul_side,bbul_side))
#Draw ordinary items
for ordip in ordi_pos:
pygame.draw.rect(screen,ordi_color,pygame.Rect(ordip[0],ordip[1],ordi_side,ordi_side))
#Draw rare items
for rarep in rare_pos:
pygame.draw.rect(screen,rare_color,pygame.Rect(rarep[0],rarep[1],rare_side,rare_side))
#show remaining life
PrintLife(ahealth,bhealth,False)
if ahealth==0 or bhealth==0:
GameOver(ahealth,bhealth,False)
pygame.display.update()
min_dis=640000
item_pos=[]
for item in ordi_pos:
now_dis=abs(item[0]-b_cpos[0])*abs(item[1]-b_cpos[1])
if now_dis<min_dis:
min_dis=now_dis
item_pos=item
for item in rare_pos:
now_dis=abs(item[0]-b_cpos[0])*abs(item[1]-b_cpos[1])
if now_dis<min_dis:
min_dis=now_dis
item_pos=item
tank_dis=abs(a_cpos[0]-b_cpos[0])*abs(a_cpos[1]-b_cpos[1])
if tank_dis<min_dis:
if (not (abs(a_cpos[0]-b_cpos[0])<b_width or abs(a_cpos[1]-b_cpos[1])<b_width)):
if a_cpos[0]<b_cpos[0] and a_cpos[1]<b_cpos[1]:
if b_cpos[0]-a_cpos[0]>b_cpos[1]-a_cpos[1]:
b_dir='u'
else:
b_dir='l'
bmove=True
elif a_cpos[0]>b_cpos[0] and a_cpos[1]<b_cpos[1]:
if a_cpos[0]-b_cpos[0]>b_cpos[1]-a_cpos[1]:
b_dir='u'
else:
b_dir='r'
bmove=True
elif a_cpos[0]<b_cpos[0] and a_cpos[1]>b_cpos[1]:
if b_cpos[0]-a_cpos[0]>a_cpos[1]-b_cpos[1]:
b_dir='d'
else:
b_dir='l'
bmove=True
elif a_cpos[0]>b_cpos[0] and a_cpos[1]>b_cpos[1]:
if a_cpos[0]-b_cpos[0]>a_cpos[1]-b_cpos[1]:
b_dir='d'
else:
b_dir='r'
bmove=True
else:
if a_cpos[0]==b_cpos[0] and a_cpos[1]==b_cpos[1]:
bmove=True
elif abs(a_cpos[0]-b_cpos[0])<abs(a_cpos[1]-b_cpos[1]):
bmove=False
if a_cpos[1]>b_cpos[1]:
b_dir='d'
else:
b_dir='u'
else:
bmove=False
if a_cpos[0]>b_cpos[0]:
b_dir='r'
else:
b_dir='l'
else:
if (not (abs(item_pos[0]-b_cpos[0])<b_length or abs(item_pos[1]-b_cpos[1])<b_length)):
if item_pos[0]<b_cpos[0] and item_pos[1]<b_cpos[1]:
if b_cpos[0]-item_pos[0]>b_cpos[1]-item_pos[1]:
b_dir='u'
else:
b_dir='l'
bmove=True
elif item_pos[0]>b_cpos[0] and item_pos[1]<b_cpos[1]:
if item_pos[0]-b_cpos[0]>b_cpos[1]-item_pos[1]:
b_dir='u'
else:
b_dir='r'
bmove=True
elif item_pos[0]<b_cpos[0] and item_pos[1]>b_cpos[1]:
if b_cpos[0]-item_pos[0]>item_pos[1]-b_cpos[1]:
b_dir='d'
else:
b_dir='l'
bmove=True
elif item_pos[0]>b_cpos[0] and item_pos[1]>b_cpos[1]:
if item_pos[0]-b_cpos[0]>item_pos[1]-b_cpos[1]:
b_dir='d'
else:
b_dir='r'
bmove=True
else:
if abs(item_pos[0]-b_cpos[0])<abs(item_pos[1]-b_cpos[1]):
bmove=True
if item_pos[1]>b_cpos[1]:
b_dir='d'
else:
b_dir='u'
else:
bmove=True
if item_pos[0]>b_cpos[0]:
b_dir='r'
else:
b_dir='l'
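Note that the AI target selection above scores candidates with the product `abs(dx)*abs(dy)` rather than a Euclidean or Manhattan distance, so any target aligned with tank B on either axis scores zero and is always preferred. A small sketch of that scoring metric for reference (the function name is ours, not part of the original code):

```python
def product_distance(p, q):
    """Scoring metric used by the AI: the product of per-axis distances.
    It is zero whenever the two points share an x or y coordinate, which
    is why axis-aligned targets always win the comparison."""
    return abs(p[0] - q[0]) * abs(p[1] - q[1])
```

This also explains the sentinel `min_dis=640000`: the product can never exceed 640*480 on this screen.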
#Game over
def GameOver(ahealth=0,bhealth=0,multi=True):
text_font=pygame.font.SysFont('arial',48)
gameover_surface=text_font.render('Game Over!',True,(255,0,0))
gameover_rect=gameover_surface.get_rect()
gameover_rect.midtop=(320,15)
screen.blit(gameover_surface,gameover_rect)
if multi:
#multi-player case
if ahealth==0 and bhealth==0:
result_surface=text_font.render('Draw!',True,(131,18,183))
elif ahealth==0:
result_surface=text_font.render('Player 2 Won!',True,(47,206,249))
else:
result_surface=text_font.render('Player 1 Won!',True,(249,175,47))
else:
#single-player case
if ahealth==0 and bhealth==0:
result_surface=text_font.render('Draw!',True,(131,18,183))
elif ahealth==0:
result_surface=text_font.render('You Lost...',True,(131,18,183))
else:
result_surface=text_font.render('You Won!',True,(131,18,183))
#update result
result_rect=result_surface.get_rect()
result_rect.midtop=(320,50)
screen.blit(result_surface,result_rect)
pygame.display.update()
#wait for 2 seconds
time.sleep(2)
#play again or quit?
lastbgm=pygame.time.get_ticks()-bgm_sound.get_length()*1000
while True:
if pygame.time.get_ticks()-lastbgm>bgm_sound.get_length()*1000:
lastbgm=pygame.time.get_ticks()
bgm_sound.play()
#check if quit
for event in pygame.event.get():
if event.type==QUIT:
pygame.quit()
sys.exit()
#show play again/return to menu option
stext_font=pygame.font.SysFont('arial',40)
gamenext_surface=stext_font.render('Press Enter to play again!',True,(244,66,113))
gamenext_rect=gamenext_surface.get_rect()
gamenext_rect.midtop=(320,320)
screen.blit(gamenext_surface,gamenext_rect)
pygame.display.update()
gameback_surface=stext_font.render('Press Space to return to main menu',True,(244,66,113))
gameback_rect=gameback_surface.get_rect()
gameback_rect.midtop=(320,360)
screen.blit(gameback_surface,gameback_rect)
pygame.display.update()
#check if play again
pressed_keys=pygame.key.get_pressed()
if pressed_keys[K_RETURN]:
if multi:
GameMulti()
else:
GameSingle()
elif pressed_keys[K_ESCAPE]:
pygame.event.post(pygame.event.Event(pygame.QUIT))
elif pressed_keys[K_SPACE]:
GameIns()
GameIns()
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""IQA losses.
This module contains loss functions for the optimization of image quality.
"""
import tensorflow as tf
from tensorflow_mri.python.ops import image_ops
from tensorflow_mri.python.util import keras_util
@tf.keras.utils.register_keras_serializable(package="MRI")
class StructuralSimilarityLoss(keras_util.LossFunctionWrapper):
"""Computes the structural similarity (SSIM) loss.
The SSIM loss is equal to :math:`1.0 - \textrm{SSIM}`.
.. warning::
As of TensorFlow 2.6.0, 3D inputs with `channels` > 1 can only be processed
on GPU.
Args:
max_val: The dynamic range of the images (i.e., the difference between
the maximum and the minimum allowed values). Defaults to 1 for floating
point input images and `MAX` for integer input images, where `MAX` is the
largest positive representable number for the data type.
filter_size: The size of the Gaussian filter. Defaults to 11.
filter_sigma: The standard deviation of the Gaussian filter. Defaults to
1.5.
k1: Factor used to calculate the regularization constant for the luminance
term, as `C1 = (k1 * max_val) ** 2`. Defaults to 0.01.
k2: Factor used to calculate the regularization constant for the contrast
term, as `C2 = (k2 * max_val) ** 2`. Defaults to 0.03.
rank: An `int`. The number of spatial dimensions. Must be 2 or 3. Defaults
to `tf.rank(y_true) - 2`. In other words, if rank is not explicitly set,
`y_true` and `y_pred` should have shape `[batch, height, width, channels]`
if processing 2D images or `[batch, depth, height, width, channels]` if
processing 3D images.
reduction: Type of `tf.keras.losses.Reduction` to apply to loss. Default
value is `AUTO`.
name: String name of the loss instance.
References:
.. [1] Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2016). Loss functions
for image restoration with neural networks. IEEE Transactions on
computational imaging, 3(1), 47-57.
"""
def __init__(self,
max_val=None,
filter_size=11,
filter_sigma=1.5,
k1=0.01,
k2=0.03,
rank=None,
reduction=tf.keras.losses.Reduction.AUTO,
name='ssim_loss'):
super().__init__(ssim_loss, reduction=reduction, name=name, max_val=max_val,
filter_size=filter_size, filter_sigma=filter_sigma,
k1=k1, k2=k2, rank=rank)
@tf.keras.utils.register_keras_serializable(package="MRI")
class MultiscaleStructuralSimilarityLoss(keras_util.LossFunctionWrapper):
"""Computes the multiscale structural similarity (MS-SSIM) loss.
The MS-SSIM loss is equal to :math:`1.0 - \textrm{MS-SSIM}`.
.. warning::
As of TensorFlow 2.6.0, 3D inputs with `channels` > 1 can only be processed
on GPU.
Args:
max_val: The dynamic range of the images (i.e., the difference between
the maximum and the minimum allowed values). Defaults to 1 for floating
point input images and `MAX` for integer input images, where `MAX` is the
largest positive representable number for the data type.
power_factors: A list of weights for each of the scales. The length of the
list determines the number of scales. Index 0 is the unscaled resolution's
weight and each increasing scale corresponds to the image being
downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333),
which are the values obtained in the original paper.
filter_size: The size of the Gaussian filter. Defaults to 11.
filter_sigma: The standard deviation of the Gaussian filter. Defaults to
1.5.
k1: Factor used to calculate the regularization constant for the luminance
term, as `C1 = (k1 * max_val) ** 2`. Defaults to 0.01.
k2: Factor used to calculate the regularization constant for the contrast
term, as `C2 = (k2 * max_val) ** 2`. Defaults to 0.03.
rank: An `int`. The number of spatial dimensions. Must be 2 or 3. Defaults
to `tf.rank(y_true) - 2`. In other words, if rank is not explicitly set,
`y_true` and `y_pred` should have shape `[batch, height, width, channels]`
if processing 2D images or `[batch, depth, height, width, channels]` if
processing 3D images.
reduction: Type of `tf.keras.losses.Reduction` to apply to loss. Default
value is `AUTO`.
name: String name of the loss instance.
References:
.. [1] Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2016). Loss functions
for image restoration with neural networks. IEEE Transactions on
Computational Imaging, 3(1), 47-57.
"""
def __init__(self,
max_val=None,
power_factors=image_ops._MSSSIM_WEIGHTS,  # pylint: disable=protected-access
filter_size=11,
filter_sigma=1.5,
k1=0.01,
k2=0.03,
rank=None,
reduction=tf.keras.losses.Reduction.AUTO,
name='ssim_multiscale_loss'):
super().__init__(ssim_multiscale_loss, reduction=reduction, name=name,
max_val=max_val, power_factors=power_factors,
filter_size=filter_size, filter_sigma=filter_sigma,
k1=k1, k2=k2, rank=rank)
@tf.keras.utils.register_keras_serializable(package="MRI")
def ssim_loss(y_true, y_pred, max_val=None,
filter_size=11, filter_sigma=1.5,
k1=0.01, k2=0.03, rank=None):
r"""Computes the structural similarity (SSIM) loss.
The SSIM loss is equal to :math:`1.0 - \textrm{SSIM}`.
.. warning::
As of TensorFlow 2.6.0, 3D inputs with `channels` > 1 can only be processed
on GPU.
Args:
y_true: A `Tensor`. Ground truth images. For 2D images, must have rank >= 3
with shape `batch_shape + [height, width, channels]`. For 3D images, must
have rank >= 4 with shape
`batch_shape + [depth, height, width, channels]`. `height`, `width` and
`depth` must be greater than or equal to `filter_size`. Must have floating
point type, with values in the range `[0, max_val]`.
y_pred: A `Tensor`. Predicted images. For 2D images, must have rank >= 3
with shape `batch_shape + [height, width, channels]`. For 3D images, must
have rank >= 4 with shape
`batch_shape + [depth, height, width, channels]`. `height`, `width` and
`depth` must be greater than or equal to `filter_size`. Must have floating
point type, with values in the range `[0, max_val]`.
max_val: The dynamic range of the images (i.e., the difference between
the maximum and the minimum allowed values). Defaults to 1 for floating
point input images and `MAX` for integer input images, where `MAX` is the
largest positive representable number for the data type.
filter_size: The size of the Gaussian filter. Defaults to 11.
filter_sigma: The standard deviation of the Gaussian filter. Defaults to
1.5.
k1: Factor used to calculate the regularization constant for the luminance
term, as `C1 = (k1 * max_val) ** 2`. Defaults to 0.01.
k2: Factor used to calculate the regularization constant for the contrast
term, as `C2 = (k2 * max_val) ** 2`. Defaults to 0.03.
rank: An `int`. The number of spatial dimensions. Must be 2 or 3. Defaults
to `tf.rank(y_true) - 2`. In other words, if rank is not explicitly set,
`y_true` and `y_pred` should have shape `[batch, height, width, channels]`
if processing 2D images or `[batch, depth, height, width, channels]` if
processing 3D images.
Returns:
A `Tensor` of type `float32` and shape `batch_shape` containing an SSIM
loss value (`1.0 - SSIM`) for each image in the batch.
References:
.. [1] Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2016). Loss functions
for image restoration with neural networks. IEEE Transactions on
Computational Imaging, 3(1), 47-57.
"""
return 1.0 - image_ops.ssim(y_true, y_pred,
max_val=max_val,
filter_size=filter_size,
filter_sigma=filter_sigma,
k1=k1,
k2=k2,
rank=rank)
@tf.keras.utils.register_keras_serializable(package="MRI")
def ssim_multiscale_loss(y_true, y_pred, max_val=None,
power_factors=image_ops._MSSSIM_WEIGHTS, # pylint: disable=protected-access
filter_size=11, filter_sigma=1.5,
k1=0.01, k2=0.03, rank=None):
r"""Computes the multiscale structural similarity (MS-SSIM) loss.
The MS-SSIM loss is equal to :math:`1.0 - \textrm{MS-SSIM}`.
.. warning::
As of TensorFlow 2.6.0, 3D inputs with `channels` > 1 can only be processed
on GPU.
Args:
y_true: A `Tensor`. Ground truth images. For 2D images, must have rank >= 3
with shape `batch_shape + [height, width, channels]`. For 3D images, must
have rank >= 4 with shape
`batch_shape + [depth, height, width, channels]`. `height`, `width` and
`depth` must be greater than or equal to
`(filter_size - 1) * 2 ** (len(power_factors) - 1) + 1`. Must have
floating point type, with values in the range `[0, max_val]`.
y_pred: A `Tensor`. Predicted images. For 2D images, must have rank >= 3
with shape `batch_shape + [height, width, channels]`. For 3D images, must
have rank >= 4 with shape
`batch_shape + [depth, height, width, channels]`. `height`, `width` and
`depth` must be greater than or equal to
`(filter_size - 1) * 2 ** (len(power_factors) - 1) + 1`. Must have
floating point type, with values in the range `[0, max_val]`.
max_val: The dynamic range of the images (i.e., the difference between
the maximum and the minimum allowed values). Defaults to 1 for floating
point input images and `MAX` for integer input images, where `MAX` is the
largest positive representable number for the data type.
power_factors: A list of weights for each of the scales. The length of the
list determines the number of scales. Index 0 is the unscaled resolution's
weight and each increasing scale corresponds to the image being
downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333),
which are the values obtained in the original paper.
filter_size: The size of the Gaussian filter. Defaults to 11.
filter_sigma: The standard deviation of the Gaussian filter. Defaults to
1.5.
k1: Factor used to calculate the regularization constant for the luminance
term, as `C1 = (k1 * max_val) ** 2`. Defaults to 0.01.
k2: Factor used to calculate the regularization constant for the contrast
term, as `C2 = (k2 * max_val) ** 2`. Defaults to 0.03.
rank: An `int`. The number of spatial dimensions. Must be 2 or 3. Defaults
to `tf.rank(y_true) - 2`. In other words, if rank is not explicitly set,
`y_true` and `y_pred` should have shape `[batch, height, width, channels]`
if processing 2D images or `[batch, depth, height, width, channels]` if
processing 3D images.
Returns:
A `Tensor` of type `float32` and shape `batch_shape` containing an MS-SSIM
loss value (`1.0 - MS-SSIM`) for each image in the batch.
References:
.. [1] Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2016). Loss functions
for image restoration with neural networks. IEEE Transactions on
Computational Imaging, 3(1), 47-57.
"""
return 1.0 - image_ops.ssim_multiscale(y_true, y_pred,
max_val=max_val,
power_factors=power_factors,
filter_size=filter_size,
filter_sigma=filter_sigma,
k1=k1,
k2=k2,
rank=rank)
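For intuition, the quantity these wrappers compute can be sketched without TensorFlow. The sketch below is illustrative only: it uses global image statistics, whereas `image_ops.ssim` averages Gaussian-windowed local statistics, and the function name `ssim_loss_sketch` is hypothetical. It does show how `max_val`, `k1` and `k2` enter through the constants `C1` and `C2`:

```python
import numpy as np

def ssim_loss_sketch(y_true, y_pred, max_val=1.0, k1=0.01, k2=0.03):
    """Illustrative 1 - SSIM using global image statistics.

    Simplification: the real op computes local statistics under a
    Gaussian filter before averaging; here we use one global window.
    """
    c1 = (k1 * max_val) ** 2
    c2 = (k2 * max_val) ** 2
    mu_x, mu_y = y_true.mean(), y_pred.mean()
    var_x, var_y = y_true.var(), y_pred.var()
    cov = ((y_true - mu_x) * (y_pred - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim

img = np.random.default_rng(0).random((32, 32))
assert ssim_loss_sketch(img, img) < 1e-9        # identical images: zero loss
assert ssim_loss_sketch(img, 1.0 - img) > 0.5   # dissimilar images: large loss
```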
# Source: vatervonacht/dagster - python_modules/dagster-graphql/dagster_graphql_tests/graphql/snapshots/snap_test_execute_pipeline.py (Apache-2.0)
# -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots['test_successful_pipeline_reexecution 1'] = {
'startPipelineExecution': {
'__typename': 'StartPipelineExecutionSuccess',
'run': {
'logs': {
'nodes': [
{
'__typename': 'PipelineStartEvent',
'level': 'DEBUG'
},
{
'__typename': 'EngineEvent',
'level': 'DEBUG'
},
{
'__typename': 'ExecutionStepStartEvent',
'level': 'DEBUG',
'step': {
'key': 'sum_solid.compute',
'kind': 'COMPUTE'
}
},
{
'__typename': 'ExecutionStepInputEvent',
'inputName': 'num',
'level': 'DEBUG',
'step': {
'key': 'sum_solid.compute',
'kind': 'COMPUTE'
},
'typeCheck': {
'description': None,
'label': 'num',
'metadataEntries': [
]
}
},
{
'__typename': 'ExecutionStepOutputEvent',
'level': 'DEBUG',
'outputName': 'result',
'step': {
'key': 'sum_solid.compute',
'kind': 'COMPUTE'
},
'typeCheck': {
'description': None,
'label': 'result',
'metadataEntries': [
]
}
},
{
'__typename': 'ObjectStoreOperationEvent',
'level': 'DEBUG',
'operationResult': {
'metadataEntries': [
{
'description': None,
'label': 'key',
'path': 'DUMMY_PATH'
}
],
'op': 'SET_OBJECT'
},
'step': {
'key': 'sum_solid.compute'
}
},
{
'__typename': 'ExecutionStepSuccessEvent',
'level': 'DEBUG',
'step': {
'key': 'sum_solid.compute'
}
},
{
'__typename': 'ExecutionStepStartEvent',
'level': 'DEBUG',
'step': {
'key': 'sum_sq_solid.compute',
'kind': 'COMPUTE'
}
},
{
'__typename': 'ObjectStoreOperationEvent',
'level': 'DEBUG',
'operationResult': {
'metadataEntries': [
{
'description': None,
'label': 'key',
'path': 'DUMMY_PATH'
}
],
'op': 'GET_OBJECT'
},
'step': {
'key': 'sum_sq_solid.compute'
}
},
{
'__typename': 'ExecutionStepInputEvent',
'inputName': 'sum_df',
'level': 'DEBUG',
'step': {
'key': 'sum_sq_solid.compute',
'kind': 'COMPUTE'
},
'typeCheck': {
'description': None,
'label': 'sum_df',
'metadataEntries': [
]
}
},
{
'__typename': 'ExecutionStepOutputEvent',
'level': 'DEBUG',
'outputName': 'result',
'step': {
'key': 'sum_sq_solid.compute',
'kind': 'COMPUTE'
},
'typeCheck': {
'description': None,
'label': 'result',
'metadataEntries': [
]
}
},
{
'__typename': 'ObjectStoreOperationEvent',
'level': 'DEBUG',
'operationResult': {
'metadataEntries': [
{
'description': None,
'label': 'key',
'path': 'DUMMY_PATH'
}
],
'op': 'SET_OBJECT'
},
'step': {
'key': 'sum_sq_solid.compute'
}
},
{
'__typename': 'ExecutionStepSuccessEvent',
'level': 'DEBUG',
'step': {
'key': 'sum_sq_solid.compute'
}
},
{
'__typename': 'EngineEvent',
'level': 'DEBUG'
},
{
'__typename': 'PipelineSuccessEvent',
'level': 'DEBUG'
}
]
},
'pipeline': {
'name': 'csv_hello_world'
},
'tags': [
]
}
}
}
snapshots['test_successful_pipeline_reexecution 2'] = {
'startPipelineExecution': {
'__typename': 'StartPipelineExecutionSuccess',
'run': {
'logs': {
'nodes': [
{
'__typename': 'PipelineStartEvent',
'level': 'DEBUG'
},
{
'__typename': 'EngineEvent',
'level': 'DEBUG'
},
{
'__typename': 'ObjectStoreOperationEvent',
'level': 'DEBUG',
'operationResult': {
'metadataEntries': [
{
'description': None,
'label': 'key',
'path': 'DUMMY_PATH'
}
],
'op': 'CP_OBJECT'
},
'step': {
'key': 'sum_solid.compute'
}
},
{
'__typename': 'ExecutionStepStartEvent',
'level': 'DEBUG',
'step': {
'key': 'sum_sq_solid.compute',
'kind': 'COMPUTE'
}
},
{
'__typename': 'ObjectStoreOperationEvent',
'level': 'DEBUG',
'operationResult': {
'metadataEntries': [
{
'description': None,
'label': 'key',
'path': 'DUMMY_PATH'
}
],
'op': 'GET_OBJECT'
},
'step': {
'key': 'sum_sq_solid.compute'
}
},
{
'__typename': 'ExecutionStepInputEvent',
'inputName': 'sum_df',
'level': 'DEBUG',
'step': {
'key': 'sum_sq_solid.compute',
'kind': 'COMPUTE'
},
'typeCheck': {
'description': None,
'label': 'sum_df',
'metadataEntries': [
]
}
},
{
'__typename': 'ExecutionStepOutputEvent',
'level': 'DEBUG',
'outputName': 'result',
'step': {
'key': 'sum_sq_solid.compute',
'kind': 'COMPUTE'
},
'typeCheck': {
'description': None,
'label': 'result',
'metadataEntries': [
]
}
},
{
'__typename': 'ObjectStoreOperationEvent',
'level': 'DEBUG',
'operationResult': {
'metadataEntries': [
{
'description': None,
'label': 'key',
'path': 'DUMMY_PATH'
}
],
'op': 'SET_OBJECT'
},
'step': {
'key': 'sum_sq_solid.compute'
}
},
{
'__typename': 'ExecutionStepSuccessEvent',
'level': 'DEBUG',
'step': {
'key': 'sum_sq_solid.compute'
}
},
{
'__typename': 'EngineEvent',
'level': 'DEBUG'
},
{
'__typename': 'PipelineSuccessEvent',
'level': 'DEBUG'
}
]
},
'pipeline': {
'name': 'csv_hello_world'
},
'tags': [
]
}
}
}
snapshots['test_pipeline_reexecution_info_query 1'] = [
'sum_sq_solid.compute'
]
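The snapshot dicts above record the ordered event stream of a pipeline run. When such a snapshot drifts, it is often easier to diff just the event-type sequence; the small helper below is hypothetical (not part of the dagster test suite) and simply projects that sequence out of the nested structure:

```python
def event_types(snapshot):
    # Walk the nested snapshot dict down to the log nodes and
    # return the ordered list of GraphQL event type names.
    nodes = snapshot["startPipelineExecution"]["run"]["logs"]["nodes"]
    return [node["__typename"] for node in nodes]

# Minimal stand-in for one of the snapshot dicts above.
sample = {
    "startPipelineExecution": {
        "run": {
            "logs": {
                "nodes": [
                    {"__typename": "PipelineStartEvent", "level": "DEBUG"},
                    {"__typename": "EngineEvent", "level": "DEBUG"},
                    {"__typename": "PipelineSuccessEvent", "level": "DEBUG"},
                ]
            }
        }
    }
}

assert event_types(sample) == [
    "PipelineStartEvent", "EngineEvent", "PipelineSuccessEvent"]
```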
# Source: OVVO-Financial/precise - precise/skaters/covariance/bufhuber.py (MIT)
from precise.skaters.covariance.bufhuberfactory import buf_huber_d0_factory
from precise.skaters.covarianceutil.differencing import d1_factory
def buf_huber_pcov_d0_a1_b2_n50(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=1.0, b=2.0, n_buffer=50)
def buf_huber_pcov_d0_a05_b2_n50(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=0.5, b=2.0, n_buffer=50)
def buf_huber_pcov_d0_a1_b5_n50(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=0.5, b=5.0, n_buffer=50)
def buf_huber_pcov_d0_a1_b2_n100(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=1.0, b=2.0, n_buffer=100)
def buf_huber_pcov_d0_a05_b2_n100(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=0.5, b=2.0, n_buffer=100)
def buf_huber_pcov_d0_a1_b5_n100(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=0.5, b=5.0, n_buffer=100)
def buf_huber_pcov_d0_a1_b2_n200(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=1.0, b=2.0, n_buffer=200)
def buf_huber_pcov_d0_a05_b2_n200(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=0.5, b=2.0, n_buffer=200)
def buf_huber_pcov_d0_a1_b5_n200(y, s, k=1):
assert k==1
return buf_huber_d0_factory(y=y,s=s,a=0.5, b=5.0, n_buffer=200)
BUF_HUBER_D0_COV_SKATERS = [buf_huber_pcov_d0_a1_b2_n50, buf_huber_pcov_d0_a05_b2_n50, buf_huber_pcov_d0_a1_b5_n50,
buf_huber_pcov_d0_a1_b2_n100, buf_huber_pcov_d0_a05_b2_n100, buf_huber_pcov_d0_a1_b5_n100,
buf_huber_pcov_d0_a1_b2_n200, buf_huber_pcov_d0_a05_b2_n200, buf_huber_pcov_d0_a1_b5_n200]
def buf_huber_pcov_d1_a1_b2_n50(y, s, k=1):
return d1_factory(y=y,s=s,k=k,a=1.0, b=2.0, n_buffer=50)
def buf_huber_pcov_d1_a1_b2_n100(y, s, k=1):
return d1_factory(y=y,s=s,k=k,a=1.0, b=2.0, n_buffer=100)
BUF_HUBER_D1_COV_SKATERS = [buf_huber_pcov_d1_a1_b2_n50, buf_huber_pcov_d1_a1_b2_n100]
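The parameters `a` and `b` are robustness thresholds passed through to `buf_huber_d0_factory`, whose implementation is not shown here, so the exact semantics are an assumption. As an illustration only, one common Huber-style reading treats `a` as the linear-region cutoff and `b` as an outlier-rejection bound; the function name `huber_psi` is hypothetical:

```python
import numpy as np

def huber_psi(r, a=1.0, b=2.0):
    # Assumed semantics (illustrative, may differ from the factory):
    # linear inside [-a, a], capped at +/-a up to |r| = b, and fully
    # down-weighted beyond b (a redescending variant).
    r = np.asarray(r, dtype=float)
    psi = np.clip(r, -a, a)
    return np.where(np.abs(r) > b, 0.0, psi)

assert huber_psi(0.5) == 0.5   # small residuals pass through
assert huber_psi(1.5) == 1.0   # moderate residuals are capped at a
assert huber_psi(5.0) == 0.0   # gross outliers beyond b are rejected
```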
# Source: veltri/DLV2 - tests/parser/aggregates.max.propagation.4.test.py (Apache-2.0)
input = """
% Only undefined auxiliary atoms.
a(1) | b(1).
a(2) | b(2).
ok1 :- #max{V:a(V)} = 2.
"""
output = """
% Only undefined auxiliary atoms.
a(1) | b(1).
a(2) | b(2).
ok1 :- #max{V:a(V)} = 2.
"""
# Source: Kiiwi/Syssel - venv/Lib/site-packages/django_extensions/management/technical_response.py (BSD-3-Clause)
import six
def null_technical_500_response(request, exc_type, exc_value, tb):
six.reraise(exc_type, exc_value, tb)
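On Python 3, `six.reraise` reduces to re-raising the exception value with its original traceback attached. A stdlib-only sketch of that behaviour (the name `reraise_py3` is hypothetical; it ignores the Python 2 branch that `six` also handles):

```python
import sys

def reraise_py3(exc_type, exc_value, tb):
    # Python-3-only equivalent of six.reraise: re-raise the value
    # with the original traceback preserved.
    if exc_value is None:
        exc_value = exc_type()
    raise exc_value.with_traceback(tb)

try:
    try:
        1 / 0
    except ZeroDivisionError:
        reraise_py3(*sys.exc_info())
except ZeroDivisionError as err:
    assert err.__traceback__ is not None  # original traceback survived
```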
# Source: RevansChen/online-judge - Codewars/6kyu/sort-the-odd/Python/test.py (MIT)
# Python - 3.4.3
Test.assert_equals(sort_array([5, 3, 2, 8, 1, 4]), [1, 3, 2, 8, 5, 4])
Test.assert_equals(sort_array([5, 3, 1, 8, 0]), [1, 3, 5, 8, 0])
Test.assert_equals(sort_array([]), [])
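The assertions above pin down the kata: odd numbers are sorted in ascending order while even numbers keep their original positions. One possible `sort_array` consistent with these tests (a sketch, not necessarily the author's graded solution):

```python
def sort_array(source_array):
    # Sort the odd numbers ascending; consume them back in order
    # while leaving even numbers exactly where they were.
    odds = iter(sorted(n for n in source_array if n % 2))
    return [next(odds) if n % 2 else n for n in source_array]

assert sort_array([5, 3, 2, 8, 1, 4]) == [1, 3, 2, 8, 5, 4]
assert sort_array([5, 3, 1, 8, 0]) == [1, 3, 5, 8, 0]
assert sort_array([]) == []
```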
# Source: Journera/glutil - tests/cli_test.py (BSD-3-Clause)
from unittest import TestCase
from unittest.mock import MagicMock
from moto import mock_s3, mock_glue
from .helper import GlueHelper, captured_output
from collections import namedtuple
import boto3
import pendulum
import sure # noqa: F401
import sys
from glutil import Cli, Partitioner, Partition, DatabaseCleaner, GlutilError
from glutil.database_cleaner import Table
from glutil.partitioner import PartitionMap
class CliTest(TestCase):
bucket = "test-bucket"
database = "test_database"
table = "test_table"
region = "us-east-1"
def setUp(self):
super().setUp()
self.helper = GlueHelper(
default_bucket=self.bucket,
default_database=self.database,
default_table=self.table)
self.s3 = boto3.client("s3")
self.glue = boto3.client("glue")
self.exit_mock = MagicMock()
self.original_exit = sys.exit
sys.exit = self.exit_mock
def tearDown(self):
sys.exit = self.original_exit
super().tearDown()
def get_cmd_output(self, cli, cli_args):
with captured_output() as (out, err):
cli.main(cli_args)
output = out.getvalue().strip()
error = err.getvalue().strip()
return output, error
@mock_glue
@mock_s3
def test_create_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
expected_output = f"Running Partitioner for {self.database}.{self.table}\n\tLooking for partitions in s3://{self.bucket}/{self.table}/\n\tFound 10 new partitions to create\n\t"
expected_output += ", ".join(map(str, partitions))
out, err = self.get_cmd_output(cli, ["create-partitions", self.database, self.table])
out.should.equal(expected_output)
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
found = partitioner.partitions_on_disk()
set(found).should.equal(set(partitions))
@mock_glue
@mock_s3
def test_create_partitions_dry_run(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
expected_output = f"Running Partitioner for {self.database}.{self.table}\n\tLooking for partitions in s3://{self.bucket}/{self.table}/\n\tFound 10 new partitions to create\n\t"
expected_output += ", ".join(map(str, partitions))
out, err = self.get_cmd_output(cli, ["create-partitions", self.database, self.table, "--dry-run"])
out.should.equal(expected_output)
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
found = partitioner.existing_partitions()
found.should.have.length_of(0)
@mock_glue
@mock_s3
def test_create_partitions_nothing_new(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_output = f"Running Partitioner for {self.database}.{self.table}\n\tLooking for partitions in s3://{self.bucket}/{self.table}/\n\tFound 0 new partitions to create"
out, err = self.get_cmd_output(cli, ["create-partitions", self.database, self.table])
out.should.equal(expected_output)
@mock_glue
@mock_s3
def test_create_partitions_error_output(self):
""" Technically this should _never_ happen, but on the off chance that
batch_get_partition ever returns bad values we'll leave it in"""
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
expected_output = f"Running Partitioner for {self.database}.{self.table}\n\tLooking for partitions in s3://{self.bucket}/{self.table}/\n\tFound 10 new partitions to create\n\t"
expected_output += ", ".join(map(str, partitions))
expected_output += f"\nOne or more errors occurred when attempting to create partitions\nError on {partitions[0].values}: AlreadyExistsException"
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions([partitions[0]])
mock = MagicMock(return_value=partitions)
partitioner.partitions_to_create = mock
partitioner_mock = MagicMock(return_value=partitioner)
cli.get_partitioner = partitioner_mock
out, err = self.get_cmd_output(cli, ["create-partitions", self.database, self.table])
out.should.equal(expected_output)
self.exit_mock.assert_called_with(1)
fresh_partitioner = Partitioner(self.database, self.table, aws_region=self.region)
exists = fresh_partitioner.existing_partitions()
set(exists).should.equal(set(partitions))
@mock_glue
@mock_s3
def test_create_partitions_limit_days(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
today = pendulum.now()
partitions = []
for i in range(1, 11):
partition_date = today.subtract(days=i)
year = partition_date.strftime("%Y")
month = partition_date.strftime("%m")
day = partition_date.strftime("%d")
hour = "03"
partition = Partition([year, month, day, hour], f"s3://{self.bucket}/{self.table}/{year}/{month}/{day}/{hour}/")
self.helper.write_partition_to_s3(partition)
partitions.append(partition)
partitions.sort()
expected_output = f"Running Partitioner for {self.database}.{self.table}\n\tLooking for partitions in s3://{self.bucket}/{self.table}/\n\tFound 7 new partitions to create\n\t"
expected_output += ", ".join(map(str, partitions[3:]))
out, err = self.get_cmd_output(cli, ["create-partitions", self.database, self.table, "--limit-days=7"])
out.should.equal(expected_output)
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
found = partitioner.existing_partitions()
found.should.have.length_of(7)
set(found).should.equal(set(partitions[3:]))
@mock_glue
@mock_s3
def test_delete_all_partitions_no_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
# case without any partitions
out, err = self.get_cmd_output(cli, ["delete-all-partitions", self.database, self.table])
out.should.equal("No partitions found in table test_table")
@mock_glue
@mock_s3
def test_delete_all_partitions_dry_run(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_out = "Deleting the following partitions:"
for partition in partitions:
expected_out += f"\n\t{str(partition)}"
out, err = self.get_cmd_output(cli, ["delete-all-partitions", self.database, self.table, "--dry-run"])
out.should.equal(expected_out)
found_partitions = partitioner.existing_partitions()
found_partitions.should.have.length_of(len(partitions))
@mock_glue
@mock_s3
def test_delete_all_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_out = "Deleting the following partitions:"
for partition in partitions:
expected_out += f"\n\t{str(partition)}"
out, err = self.get_cmd_output(cli, ["delete-all-partitions", self.database, self.table])
out.should.equal(expected_out)
found_partitions = partitioner.existing_partitions()
found_partitions.should.have.length_of(0)
@mock_glue
@mock_s3
def test_delete_all_partitions_error(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partition = self.helper.create_partition_data()
partitioner.create_partitions([partition])
mock = MagicMock()
mock.return_value = [{
"PartitionValues": partition.values,
"ErrorDetail": {
"ErrorCode": "PartitionNotFound",
"ErrorMessage": "Partition not found"
}
}]
partitioner.delete_partitions = mock
partitioner_mock = MagicMock(return_value=partitioner)
cli.get_partitioner = partitioner_mock
expected_output = f"Deleting the following partitions:\n\t{partition}\nOne or more errors occurred when attempting to delete partitions\nError on {partition.values}: PartitionNotFound"
out, err = self.get_cmd_output(cli, ["delete-all-partitions", self.database, self.table])
out.should.equal(expected_output)
@mock_glue
@mock_s3
def test_delete_bad_partitions_no_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
out, err = self.get_cmd_output(cli, ["delete-bad-partitions", self.database, self.table])
out.should.equal("Found 0 partitions to delete")
@mock_glue
@mock_s3
def test_delete_bad_partitions_dry_run(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10, prefix="not-this-table")
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_out = "Found 10 partitions to delete\nDeleting the following partitions:"
for partition in partitions:
expected_out += f"\n\t{str(partition)}"
out, err = self.get_cmd_output(cli, ["delete-bad-partitions", self.database, self.table, "--dry-run"])
out.should.equal(expected_out)
found_partitions = partitioner.existing_partitions()
found_partitions.should.have.length_of(10)
@mock_glue
@mock_s3
def test_delete_bad_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(count=10, prefix="not-this-table")
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_out = "Found 10 partitions to delete\nDeleting the following partitions:"
for partition in partitions:
expected_out += f"\n\t{str(partition)}"
out, err = self.get_cmd_output(cli, ["delete-bad-partitions", self.database, self.table])
out.should.equal(expected_out)
found_partitions = partitioner.existing_partitions()
found_partitions.should.have.length_of(0)
@mock_glue
@mock_s3
def test_delete_bad_partitions_error_output(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partition = self.helper.create_partition_data(prefix="not-this-table")
partitioner.create_partitions([partition])
mock = MagicMock()
mock.return_value = [{
"PartitionValues": partition.values,
"ErrorDetail": {
"ErrorCode": "PartitionNotFound",
"ErrorMessage": "Partition not found"
}
}]
partitioner.delete_partitions = mock
partitioner_mock = MagicMock(return_value=partitioner)
cli.get_partitioner = partitioner_mock
expected_output = f"Found 1 partitions to delete\nDeleting the following partitions:\n\t{partition}\nOne or more errors occurred when attempting to delete partitions\nError on {partition.values}: PartitionNotFound"
out, err = self.get_cmd_output(cli, ["delete-bad-partitions", self.database, self.table])
out.should.equal(expected_output)
self.exit_mock.assert_called_with(1)
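The mocked `delete_partitions` return value above mirrors the `ErrorDetail` entries of a Glue batch-delete response. A minimal sketch (not part of the test suite; variable names are illustrative) of how the asserted error lines are assembled from such entries:

```python
# Build the error report asserted in expected_output from Glue-style
# ErrorDetail entries, one line per failed partition.
errors = [{
    "PartitionValues": ["2019", "01", "01"],
    "ErrorDetail": {
        "ErrorCode": "PartitionNotFound",
        "ErrorMessage": "Partition not found",
    },
}]
lines = ["One or more errors occurred when attempting to delete partitions"]
for error in errors:
    lines.append(
        f"Error on {error['PartitionValues']}: {error['ErrorDetail']['ErrorCode']}")
output = "\n".join(lines)
```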
@mock_glue
def test_delete_bad_tables_nothing_to_delete(self):
database_input = self.helper.create_database_input()
self.glue.create_database(**database_input)
cli = Cli()
location = "s3://bucket/root-table/"
root_table_input = self.helper.create_table_input(location=location)
self.glue.create_table(**root_table_input)
out, err = self.get_cmd_output(cli, ["delete-bad-tables", self.database])
out.should.equal("Nothing to delete")
@mock_glue
def test_delete_bad_tables_dry_run(self):
database_input = self.helper.create_database_input()
self.glue.create_database(**database_input)
cli = Cli()
location = "s3://bucket/root-table/"
root_table_input = self.helper.create_table_input(location=location)
self.glue.create_table(**root_table_input)
tables = []
for i in range(1, 12):
tbl_location = f"{location}{i}/"
tbl_input = self.helper.create_table_input(location=tbl_location, random_name=True)
self.glue.create_table(**tbl_input)
tbl_input['TableInput']["DatabaseName"] = self.database
tables.append(Table(tbl_input['TableInput']))
expected_output = "Going to delete the following tables:"
tables.sort(key=lambda x: x.name)
for table in tables:
expected_output += f"\n\t{table}"
out, err = self.get_cmd_output(cli, ["delete-bad-tables", self.database, "--dry-run"])
cleaner = DatabaseCleaner(self.database, aws_region=self.region)
out.should.equal(expected_output)
found_tables = cleaner.child_tables()
found_tables.sort(key=lambda x: x.name)
found_tables.should.equal(tables)
@mock_glue
def test_delete_bad_tables(self):
database_input = self.helper.create_database_input()
self.glue.create_database(**database_input)
cli = Cli()
location = "s3://bucket/root-table/"
root_table_input = self.helper.create_table_input(location=location)
self.glue.create_table(**root_table_input)
tables = []
for i in range(1, 12):
tbl_location = f"{location}{i}/"
tbl_input = self.helper.create_table_input(location=tbl_location, random_name=True)
self.glue.create_table(**tbl_input)
tbl_input['TableInput']["DatabaseName"] = self.database
tables.append(Table(tbl_input['TableInput']))
expected_output = "Going to delete the following tables:"
tables.sort(key=lambda x: x.name)
for table in tables:
expected_output += f"\n\t{table}"
out, err = self.get_cmd_output(cli, ["delete-bad-tables", self.database])
out.should.equal(expected_output)
cleaner = DatabaseCleaner(self.database, aws_region=self.region)
found_tables = cleaner.child_tables()
found_tables.should.have.length_of(0)
@mock_glue
def test_delete_bad_tables_error_output(self):
database_input = self.helper.create_database_input()
self.glue.create_database(**database_input)
cli = Cli()
location = "s3://bucket/root-table/"
root_table_input = self.helper.create_table_input(location=location)
self.glue.create_table(**root_table_input)
table_input = self.helper.create_table_input(location=location, name="test_table-bazer")
self.glue.create_table(**table_input)
mock = MagicMock()
mock.return_value = [{
"TableName": "test_table-bazer",
"ErrorDetail": {
"ErrorCode": "EntityNotFoundException",
"ErrorMessage": "Table not found",
},
}]
cleaner = DatabaseCleaner(self.database, aws_region=self.region)
cleaner.delete_tables = mock
cleaner_mock = MagicMock(return_value=cleaner)
cli.get_database_cleaner = cleaner_mock
expected_output = f"Going to delete the following tables:\n\t<Table {self.database} / test_table-bazer : {location}>\nOne or more errors occurred when attempting to delete tables\nError on test_table-bazer: EntityNotFoundException"
out, err = self.get_cmd_output(cli, ["delete-bad-tables", self.database])
mock.assert_called()
out.should.equal(expected_output)
self.exit_mock.assert_called_with(1)
@mock_glue
@mock_s3
def test_delete_missing_partitions_no_partitions(self):
self.helper.make_database_and_table()
cli = Cli()
self.s3.create_bucket(Bucket=self.bucket)
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
out, err = self.get_cmd_output(cli, ["delete-missing-partitions", self.database, self.table])
out.should.equal("Found 0 partitions to delete:")
catalog_partitions = partitioner.existing_partitions()
catalog_partitions.should.have.length_of(10)
@mock_glue
@mock_s3
def test_delete_missing_partitions_dry_run(self):
self.helper.make_database_and_table()
cli = Cli()
self.s3.create_bucket(Bucket=self.bucket)
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
s3resource = boto3.resource("s3")
bucket = s3resource.Bucket(self.bucket)
for obj in bucket.objects.all():
obj.delete()
expected_out = "Found 10 partitions to delete:"
for partition in partitions:
expected_out += f"\n\t{partition}"
out, err = self.get_cmd_output(cli, ["delete-missing-partitions", self.database, self.table, "--dry-run"])
out.should.equal(expected_out)
found_partitions = partitioner.existing_partitions()
found_partitions.should.have.length_of(10)
set(found_partitions).should.equal(set(partitions))
@mock_glue
@mock_s3
def test_delete_missing_partitions(self):
self.helper.make_database_and_table()
cli = Cli()
self.s3.create_bucket(Bucket=self.bucket)
partitions = self.helper.create_many_partitions(count=10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
s3resource = boto3.resource("s3")
bucket = s3resource.Bucket(self.bucket)
for obj in bucket.objects.all():
obj.delete()
expected_out = "Found 10 partitions to delete:"
for partition in partitions:
expected_out += f"\n\t{partition}"
out, err = self.get_cmd_output(cli, ["delete-missing-partitions", self.database, self.table])
out.should.equal(expected_out)
found_partitions = partitioner.existing_partitions()
found_partitions.should.have.length_of(0)
@mock_glue
@mock_s3
def test_delete_missing_partitions_error_output(self):
self.helper.make_database_and_table()
cli = Cli()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
self.s3.create_bucket(Bucket=self.bucket)
partition = self.helper.create_partition_data(save=False)
partitioner.create_partitions([partition])
mock = MagicMock()
mock.return_value = [{
"PartitionValues": partition.values,
"ErrorDetail": {
"ErrorCode": "PartitionNotFound",
"ErrorMessage": "Partition not found"
}
}]
partitioner.delete_partitions = mock
partitioner_mock = MagicMock(return_value=partitioner)
cli.get_partitioner = partitioner_mock
expected_output = f"Found 1 partitions to delete:\n\t{partition}\nOne or more errors occurred when attempting to delete partitions\nError on {partition.values}: PartitionNotFound"
out, err = self.get_cmd_output(cli, ["delete-missing-partitions", self.database, self.table])
out.should.equal(expected_output)
self.exit_mock.assert_called_with(1)
@mock_s3
@mock_glue
def test_update_partitions_no_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
# all partitions correctly located
out, err = self.get_cmd_output(cli, ["update-partitions", self.database, self.table])
out.should.equal("No partitions to update")
@mock_s3
@mock_glue
def test_update_partitions_dry_run(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_output = "Found 5 moved partitions"
partitions_to_move = partitions[0:5]
for p in partitions_to_move:
subpath = "/".join(p.values)
new_location = f"s3://old-bucket/old-table/{subpath}/"
p.location = new_location
expected_output += f"\n\t{p}"
partitioner.update_partition_locations(partitions_to_move)
out, err = self.get_cmd_output(cli, ["update-partitions", self.database, self.table, "--dry-run"])
out.should.equal(expected_output)
found_map = PartitionMap(partitioner.existing_partitions())
for partition in partitions_to_move:
matching = found_map.get(partition)
matching.should_not.be.false
matching.location.startswith(f"s3://{self.bucket}/{self.table}/").should.be.false
@mock_s3
@mock_glue
def test_update_partitions(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitions = self.helper.create_many_partitions(10)
partitions.sort()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partitioner.create_partitions(partitions)
expected_output = "Found 5 moved partitions"
partitions_to_move = partitions[0:5]
for p in partitions_to_move:
subpath = "/".join(p.values)
new_location = f"s3://old-bucket/old-table/{subpath}/"
p.location = new_location
expected_output += f"\n\t{p}"
partitioner.update_partition_locations(partitions_to_move)
out, err = self.get_cmd_output(cli, ["update-partitions", self.database, self.table])
out.should.equal(expected_output)
found_map = PartitionMap(partitioner.existing_partitions())
for partition in partitions_to_move:
matching = found_map.get(partition)
matching.should_not.be.false
matching.location.startswith(f"s3://{self.bucket}/{self.table}/").should.be.true
@mock_s3
@mock_glue
def test_update_partitions_error_output(self):
self.s3.create_bucket(Bucket=self.bucket)
self.helper.make_database_and_table()
cli = Cli()
partitioner = Partitioner(self.database, self.table, aws_region=self.region)
partition = self.helper.create_partition_data()
partition.location = "s3://old-bucket/old-table/"
partitioner.create_partitions([partition])
mock = MagicMock()
mock.return_value = [{
"PartitionValues": partition.values,
"ErrorDetail": {
"ErrorCode": "PartitionNotFound",
"ErrorMessage": "Partition not found"
}
}]
partitioner.update_partition_locations = mock
partitioner_mock = MagicMock(return_value=partitioner)
cli.get_partitioner = partitioner_mock
expected_output = f"Found 1 moved partitions\n\t{partition}\nOne or more errors occurred when attempting to update partitions\nError on {partition.values}: PartitionNotFound"
out, err = self.get_cmd_output(cli, ["update-partitions", self.database, self.table])
out.should.equal(expected_output)
self.exit_mock.assert_called_with(1)
@mock_s3
@mock_glue
def test_get_partitioner_and_cleaner_exceptions(self):
Args = namedtuple("Args", ["database", "table", "profile"])
args = Args("nodb", "notable", "noprofile")
no_profile_mock = MagicMock()
no_profile_mock.side_effect = GlutilError(
error_type="ProfileNotFound",
message="No such profile noprofile")
original_init = Partitioner.__init__
Partitioner.__init__ = no_profile_mock
cli = Cli()
try:
with captured_output() as (out, err):
cli.get_partitioner(args)
output = out.getvalue().strip()
output.should.equal("No such profile noprofile\n\tConfirm that noprofile is a locally configured aws profile.")
self.exit_mock.assert_called_with(1)
no_access_mock = MagicMock()
no_access_mock.side_effect = GlutilError(
error_type="AccessDenied",
message="You do not have permissions to run GetTable")
Partitioner.__init__ = no_access_mock
with captured_output() as (out, err):
cli.get_partitioner(args)
output = out.getvalue().strip()
output.should.equal("You do not have permissions to run GetTable\n\tConfirm that noprofile has the glue:GetTable permission.")
self.exit_mock.assert_called_with(1)
with captured_output() as (out, err):
cli.get_partitioner(Args("nodb", "notable", None))
output = out.getvalue().strip()
output.should.equal("You do not have permissions to run GetTable\n\tDid you mean to run this with a profile specified?")
self.exit_mock.assert_called_with(1)
not_found_mock = MagicMock()
not_found_mock.side_effect = GlutilError(
error_type="EntityNotFound",
message="Error, could not find table notable")
Partitioner.__init__ = not_found_mock
with captured_output() as (out, err):
cli.get_partitioner(args)
output = out.getvalue().strip()
output.should.equal("Error, could not find table notable\n\tConfirm notable exists, and you have the ability to access it.")
self.exit_mock.assert_called_with(1)
finally:
# NOTE: this must stay, otherwise tests run after this will still
# have Partitioner.__init__ set to a mock
Partitioner.__init__ = original_init
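The `captured_output` helper used throughout these tests is defined in the shared test utilities; a typical implementation (an assumption here, shown for context) temporarily swaps `sys.stdout`/`sys.stderr` for `StringIO` buffers:

```python
import sys
from contextlib import contextmanager
from io import StringIO


@contextmanager
def captured_output():
    """Redirect stdout/stderr to StringIO buffers for the duration of the block."""
    new_out, new_err = StringIO(), StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = new_out, new_err
        yield new_out, new_err
    finally:
        # Always restore the real streams, even if the block raises.
        sys.stdout, sys.stderr = old_out, old_err
```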
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for the GTFlow split handler Ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.boosted_trees.proto import learner_pb2
from tensorflow.contrib.boosted_trees.proto import split_info_pb2
from tensorflow.contrib.boosted_trees.python.ops import split_handler_ops
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import googletest
class SplitHandlerOpsTest(test_util.TensorFlowTestCase):
def testMakeDenseSplit(self):
"""Tests split handler op."""
with self.test_session() as sess:
      # The data looks like the following after dividing by the number of steps (2).
# Gradients | Partition | Dense Quantile |
# (1.2, 0.2) | 0 | 0 |
# (-0.3, 0.19) | 0 | 1 |
# (4.0, 0.13) | 1 | 1 |
partition_ids = array_ops.constant([0, 0, 1], dtype=dtypes.int32)
bucket_ids = array_ops.constant(
[[0, 0], [1, 0], [1, 0]], dtype=dtypes.int64)
gradients = array_ops.constant([2.4, -0.6, 8.0])
hessians = array_ops.constant([0.4, 0.38, 0.26])
bucket_boundaries = [0.3, 0.52]
partitions, gains, splits = (
split_handler_ops.build_dense_inequality_splits(
num_minibatches=2,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0.1,
l2_regularization=1,
tree_complexity_regularization=0,
min_node_weight=0,
class_id=-1,
feature_column_group_id=0,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = sess.run([partitions, gains, splits])
self.assertAllEqual([0, 1], partitions)
# Check the split on partition 0.
# -(1.2 - 0.1) / (0.2 + 1)
expected_left_weight = -0.91666
# expected_left_weight * -(1.2 - 0.1)
expected_left_gain = 1.0083333333333331
# (-0.3 + 0.1) / (0.19 + 1)
expected_right_weight = 0.1680672
# expected_right_weight * -(-0.3 + 0.1)
expected_right_gain = 0.033613445378151252
# (-0.3 + 1.2 - 0.1) ** 2 / (0.19 + 0.2 + 1)
expected_bias_gain = 0.46043165467625885
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[0])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.dense_float_binary_split
self.assertAllClose(
expected_left_gain + expected_right_gain - expected_bias_gain, gains[0],
0.00001)
self.assertAllClose([expected_left_weight], left_child.value, 0.00001)
self.assertAllClose([expected_right_weight], right_child.value, 0.00001)
self.assertEqual(0, split_node.feature_column)
self.assertAllClose(0.3, split_node.threshold, 0.00001)
# Check the split on partition 1.
# (-4 + 0.1) / (0.13 + 1)
expected_left_weight = -3.4513274336283186
expected_right_weight = 0
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[1])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.dense_float_binary_split
      # There's only one active bucket here, so zero gain is expected.
self.assertAllClose(0.0, gains[1], 0.00001)
self.assertAllClose([expected_left_weight], left_child.value, 0.00001)
self.assertAllClose([expected_right_weight], right_child.value, 0.00001)
self.assertEqual(0, split_node.feature_column)
self.assertAllClose(0.52, split_node.threshold, 0.00001)
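The hand-computed weights and gains in the comments above follow the standard l1/l2-regularized leaf formulas: the gradient is soft-thresholded by `l1`, and `l2` damps the hessian. A minimal sketch of those formulas (helper names are illustrative, not the op's API):

```python
def _soft_threshold(g, l1):
    # Shrink the gradient toward zero by the l1 penalty.
    if g > l1:
        return g - l1
    if g < -l1:
        return g + l1
    return 0.0


def leaf_weight(g, h, l1, l2):
    # Optimal leaf weight: negated shrunken gradient over the damped hessian.
    return -_soft_threshold(g, l1) / (h + l2)


def node_gain(g, h, l1, l2):
    # Loss reduction at the optimal weight, matching the hand computations above.
    gt = _soft_threshold(g, l1)
    return gt * gt / (h + l2)
```

The split gain asserted by the test is then `node_gain(left) + node_gain(right) - node_gain(bias)`.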
def testMakeMulticlassDenseSplit(self):
"""Tests split handler op."""
with self.test_session() as sess:
partition_ids = array_ops.constant([0, 0, 1], dtype=dtypes.int32)
bucket_ids = array_ops.constant(
[[0, 0], [1, 0], [1, 0]], dtype=dtypes.int64)
gradients = array_ops.constant([[2.4, 3.0], [-0.6, 0.1], [8.0, 1.0]])
hessians = array_ops.constant([[[0.4, 1], [1, 1]], [[0.38, 1], [1, 1]],
[[0.26, 1], [1, 1]]])
bucket_boundaries = [0.3, 0.52]
partitions, gains, splits = (
split_handler_ops.build_dense_inequality_splits(
num_minibatches=2,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0,
l2_regularization=1,
tree_complexity_regularization=0,
min_node_weight=0,
class_id=-1,
feature_column_group_id=0,
multiclass_strategy=learner_pb2.LearnerConfig.FULL_HESSIAN))
partitions, gains, splits = sess.run([partitions, gains, splits])
self.assertAllEqual([0, 1], partitions)
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[0])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.dense_float_binary_split
# Each leaf has 2 element vector.
self.assertEqual(2, len(left_child.value))
self.assertEqual(2, len(right_child.value))
self.assertEqual(0, split_node.feature_column)
self.assertAllClose(0.3, split_node.threshold, 1e-6)
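Under `FULL_HESSIAN` the per-leaf weight is a vector rather than a scalar: instead of dividing by a damped scalar hessian, one solves the damped Newton system `(H + l2*I) w = -g`. A pure-Python 2x2 sketch under that assumption (no l1 term; names are illustrative):

```python
def full_hessian_weight(g, H, l2):
    """Solve (H + l2*I) w = -g for a 2x2 hessian H and 2-vector gradient g."""
    a, b = H[0][0] + l2, H[0][1]
    c, d = H[1][0], H[1][1] + l2
    det = a * d - b * c
    # Apply the inverse of the damped hessian, [[d, -b], [-c, a]] / det, to -g.
    return [(-d * g[0] + b * g[1]) / det, (c * g[0] - a * g[1]) / det]
```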
def testMakeDenseSplitEmptyInputs(self):
"""Tests empty inputs op."""
with self.test_session() as sess:
partition_ids = array_ops.constant([], dtype=dtypes.int32)
bucket_ids = array_ops.constant([[]], dtype=dtypes.int64)
gradients = array_ops.constant([])
hessians = array_ops.constant([])
bucket_boundaries = [0.3, 0.52]
partitions, gains, splits = (
split_handler_ops.build_dense_inequality_splits(
num_minibatches=0,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0.1,
l2_regularization=1,
tree_complexity_regularization=0,
min_node_weight=0,
class_id=-1,
feature_column_group_id=0,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = sess.run([partitions, gains, splits])
# .assertEmpty doesn't exist on ubuntu-contrib
self.assertEqual(0, len(partitions))
self.assertEqual(0, len(gains))
self.assertEqual(0, len(splits))
def testMakeSparseSplit(self):
"""Tests split handler op."""
with self.test_session() as sess:
      # The data looks like the following after dividing by the number of steps (2).
# Gradients | Partition | bucket ID |
# (0.9, 0.39) | 0 | -1 |
# (1.2, 0.2) | 0 | 0 |
# (0.2, 0.12) | 0 | 1 |
# (4.0, 0.13) | 1 | -1 |
# (4.0, 0.13) | 1 | 1 |
partition_ids = array_ops.constant([0, 0, 0, 1, 1], dtype=dtypes.int32)
# We have only 1 dimension in our sparse feature column.
bucket_ids = array_ops.constant([-1, 0, 1, -1, 1], dtype=dtypes.int64)
dimension_ids = array_ops.constant([0, 0, 0, 0, 0], dtype=dtypes.int64)
bucket_ids = array_ops.stack([bucket_ids, dimension_ids], axis=1)
gradients = array_ops.constant([1.8, 2.4, 0.4, 8.0, 8.0])
hessians = array_ops.constant([0.78, 0.4, 0.24, 0.26, 0.26])
bucket_boundaries = array_ops.constant([0.3, 0.52])
partitions, gains, splits = (
split_handler_ops.build_sparse_inequality_splits(
num_minibatches=2,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0,
l2_regularization=2,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = (sess.run([partitions, gains, splits]))
self.assertAllEqual([0, 1], partitions)
self.assertEqual(2, len(splits))
# Check the split on partition 0.
# -(0.2 + 1.2) / (0.12 + 0.2 + 2)
expected_left_weight = -0.603448275862069
# (0.2 + 1.2) ** 2 / (0.12 + 0.2 + 2)
expected_left_gain = 0.8448275862068965
# 0.5 / (0.07 + 2)
expected_right_weight = 0.24154589371980678
# 0.5 ** 2 / (0.07 + 2)
expected_right_gain = 0.12077294685990339
# (0.2 + 1.2 - 0.5) ** 2 / (0.12 + 0.2 + 0.07 + 2)
expected_bias_gain = 0.3389121338912133
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[0])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.sparse_float_binary_split_default_right
self.assertAllClose(
expected_left_gain + expected_right_gain - expected_bias_gain, gains[0])
self.assertAllClose([expected_left_weight], left_child.value)
self.assertAllClose([expected_right_weight], right_child.value)
self.assertEqual(0, split_node.split.feature_column)
# Sparse is one dimensional.
self.assertEqual(0, split_node.split.feature_id)
self.assertAllClose(0.52, split_node.split.threshold)
# Check the split on partition 1.
expected_left_weight = -1.8779342723004695
expected_right_weight = 0
      # Verify the candidate for partition 1; there's only one active bucket,
      # so zero gain is expected.
split_info.ParseFromString(splits[1])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.sparse_float_binary_split_default_left
self.assertAllClose(0.0, gains[1])
self.assertAllClose([expected_left_weight], left_child.value)
self.assertAllClose([expected_right_weight], right_child.value)
self.assertEqual(0, split_node.split.feature_column)
# Sparse is one dimensional.
self.assertEqual(0, split_node.split.feature_id)
self.assertAllClose(0.52, split_node.split.threshold)
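The right-hand statistics in the comments above come from the bias bucket: bucket id -1 accumulates over all examples in the partition, so the stats for examples falling right of a split are the bias totals minus the left-side accumulation. A sketch with the partition-0 numbers from the comment table (variable names are illustrative):

```python
# Per-minibatch-averaged stats from the table above (partition 0, l2 = 2).
bias_g, bias_h = 0.9, 0.39                 # bucket -1: every example
left_g, left_h = 1.2 + 0.2, 0.2 + 0.12     # buckets at or below the threshold
right_g, right_h = bias_g - left_g, bias_h - left_h
l2 = 2.0
right_weight = -right_g / (right_h + l2)   # 0.5 / 2.07
right_gain = right_g * right_g / (right_h + l2)
```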
def testMakeSparseSplitAllEmptyDimensions(self):
"""Tests split handler op when all dimensions have only bias bucket id."""
with self.test_session() as sess:
      # The data looks like the following after dividing by the number of steps (2).
# Gradients | Partition | Dimension | bucket ID |
# (0.9, 0.39) | 0 | 0 | -1 |
# (4.0, 0.13) | 1 | 0 | -1 |
partition_ids = array_ops.constant([0, 1], dtype=dtypes.int32)
# We have only 1 dimension in our sparse feature column.
bucket_ids = array_ops.constant([[-1, 0], [-1, 0]], dtype=dtypes.int64)
gradients = array_ops.constant([1.8, 8.0])
hessians = array_ops.constant([0.78, 0.26])
bucket_boundaries = array_ops.constant([0.3, 0.52])
partitions, gains, splits = (
split_handler_ops.build_sparse_inequality_splits(
num_minibatches=2,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0,
l2_regularization=2,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = (sess.run([partitions, gains, splits]))
self.assertEqual(0, len(partitions))
self.assertEqual(0, len(splits))
def testMakeSparseMultidimensionalSplit(self):
"""Tests split handler op."""
with self.test_session() as sess:
# Num of steps is 2.
# The feature column is three dimensional.
      # The first dimension has only the bias bucket, the second has the bias
      # bucket plus two valid buckets, and the third has the bias bucket plus
      # one valid bucket.
# Gradients | Partition | Dimension | bucket ID |
# (0.9, 0.39) | 0 | 0 | -1 |
# (1.2, 0.2) | 0 | 1 | 0 |
# (0.2, 0.12) | 0 | 1 | 2 |
# (0.1, 0.1) | 0 | 2 | 3 |
      # Now the second node: nothing interesting there. It has the same
      # bucket ids for all of its dimensions.
# (4.0, 0.13) | 1 | 0 | -1 |
# (4.0, 0.13) | 1 | 2 | 3 |
# Tree node ids.
partition_ids = array_ops.constant([0, 0, 0, 0, 1, 1], dtype=dtypes.int32)
dimension_ids = array_ops.constant([0, 1, 1, 2, 0, 2], dtype=dtypes.int64)
bucket_ids = array_ops.constant([-1, 0, 2, 3, -1, 3], dtype=dtypes.int64)
bucket_ids = array_ops.stack([bucket_ids, dimension_ids], axis=1)
gradients = array_ops.constant([1.8, 2.4, 0.4, 0.2, 8.0, 8.0])
hessians = array_ops.constant([0.78, 0.4, 0.24, 0.2, 0.26, 0.26])
bucket_boundaries = array_ops.constant([0.3, 0.52, 0.58, 0.6])
partitions, gains, splits = (
split_handler_ops.build_sparse_inequality_splits(
num_minibatches=2,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0,
l2_regularization=2,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = (sess.run([partitions, gains, splits]))
self.assertAllEqual([0, 1], partitions)
self.assertEqual(2, len(splits))
# Check the split on node 0 - it should split on second dimension
# -(0.2 + 1.2) / (0.12 + 0.2 + 2)
expected_left_weight = -0.603448275862069
# (0.2 + 1.2) ** 2 / (0.12 + 0.2 + 2)
expected_left_gain = 0.8448275862068965
# 0.5 / (0.07 + 2)
expected_right_weight = 0.24154589371980678
# 0.5 ** 2 / (0.07 + 2)
expected_right_gain = 0.12077294685990339
# (0.2 + 1.2 - 0.5) ** 2 / (0.12 + 0.2 + 0.07 + 2)
expected_bias_gain = 0.3389121338912133
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[0])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.sparse_float_binary_split_default_right
self.assertAllClose(
expected_left_gain + expected_right_gain - expected_bias_gain, gains[0])
self.assertAllClose([expected_left_weight], left_child.value)
self.assertAllClose([expected_right_weight], right_child.value)
self.assertEqual(0, split_node.split.feature_column)
# Split happened on second dimension.
self.assertEqual(1, split_node.split.feature_id)
self.assertAllClose(0.58, split_node.split.threshold)
# Check the split on partition 1.
expected_left_weight = -1.8779342723004695
expected_right_weight = 0
      # Verify the candidate for partition 1; there's only one active bucket,
      # so zero gain is expected.
split_info.ParseFromString(splits[1])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.sparse_float_binary_split_default_left
self.assertAllClose(0.0, gains[1])
self.assertAllClose([expected_left_weight], left_child.value)
self.assertAllClose([expected_right_weight], right_child.value)
self.assertEqual(0, split_node.split.feature_column)
self.assertEqual(2, split_node.split.feature_id)
self.assertAllClose(0.6, split_node.split.threshold)
def testMakeMulticlassSparseSplit(self):
"""Tests split handler op."""
with self.test_session() as sess:
partition_ids = array_ops.constant([0, 0, 0, 1, 1], dtype=dtypes.int32)
bucket_ids = array_ops.constant(
[[-1, 0], [0, 0], [1, 0], [-1, 0], [1, 0]], dtype=dtypes.int64)
gradients = array_ops.constant([[1.8, 3.5], [2.4, 1.0], [0.4, 4.0],
[8.0, 3.1], [8.0, 0.8]])
hessian_0 = [[0.78, 1], [12, 1]]
hessian_1 = [[0.4, 1], [1, 1]]
hessian_2 = [[0.24, 1], [1, 1]]
hessian_3 = [[0.26, 1], [1, 1]]
hessian_4 = [[0.26, 1], [1, 1]]
hessians = array_ops.constant(
[hessian_0, hessian_1, hessian_2, hessian_3, hessian_4])
bucket_boundaries = array_ops.constant([0.3, 0.52])
partitions, gains, splits = (
split_handler_ops.build_sparse_inequality_splits(
num_minibatches=2,
partition_ids=partition_ids,
bucket_ids=bucket_ids,
gradients=gradients,
hessians=hessians,
bucket_boundaries=bucket_boundaries,
l1_regularization=0,
l2_regularization=2,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.FULL_HESSIAN))
partitions, gains, splits = (sess.run([partitions, gains, splits]))
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[0])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.sparse_float_binary_split_default_right
# Each leaf has 2 element vector.
self.assertEqual(2, len(left_child.value))
self.assertEqual(2, len(right_child.value))
self.assertEqual(0, split_node.split.feature_column)
self.assertAllClose(0.52, split_node.split.threshold)
def testMakeCategoricalEqualitySplit(self):
"""Tests split handler op for categorical equality split."""
with self.test_session() as sess:
      # The data looks like the following after dividing by the number of steps (2).
# Gradients | Partition | Feature ID |
# (0.9, 0.39) | 0 | -1 |
# (0.2, 0.12) | 0 | 1 |
# (1.4, 0.32) | 0 | 2 |
# (4.0, 0.13) | 1 | -1 |
# (4.0, 0.13) | 1 | 1 |
gradients = [1.8, 0.4, 2.8, 8.0, 8.0]
hessians = [0.78, 0.24, 0.64, 0.26, 0.26]
partition_ids = [0, 0, 0, 1, 1]
feature_ids = array_ops.constant(
[[-1, 0], [1, 0], [2, 0], [-1, 0], [1, 0]], dtype=dtypes.int64)
partitions, gains, splits = (
split_handler_ops.build_categorical_equality_splits(
num_minibatches=2,
partition_ids=partition_ids,
feature_ids=feature_ids,
gradients=gradients,
hessians=hessians,
l1_regularization=0.1,
l2_regularization=1,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = sess.run([partitions, gains, splits])
self.assertAllEqual([0, 1], partitions)
# Check the split on partition 0.
# -(0.2 + 1.2 - 0.1) / (0.12 + 0.2 + 1)
expected_left_weight = -0.9848484848484846
# (0.2 + 1.2 - 0.1) ** 2 / (0.12 + 0.2 + 1)
expected_left_gain = 1.2803030303030298
# -(-0.5 + 0.1) / (0.07 + 1)
expected_right_weight = 0.37383177570093457
# (-0.5 + 0.1) ** 2 / (0.07 + 1)
expected_right_gain = 0.14953271028037385
# (0.2 + -0.5 + 1.2 - 0.1) ** 2 / (0.12 + 0.07 + 0.2 + 1)
expected_bias_gain = 0.46043165467625885
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[0])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.categorical_id_binary_split
self.assertEqual(0, split_node.feature_column)
self.assertEqual(2, split_node.feature_id)
self.assertAllClose(
expected_left_gain + expected_right_gain - expected_bias_gain, gains[0],
0.00001)
self.assertAllClose([expected_left_weight], left_child.value, 0.00001)
self.assertAllClose([expected_right_weight], right_child.value, 0.00001)
# Check the split on partition 1.
# (-4 + 0.1) / (0.13 + 1)
expected_left_weight = -3.4513274336283186
# (-4 + 0.1) ** 2 / (0.13 + 1)
expected_left_gain = 13.460176991150442
expected_right_weight = 0
expected_right_gain = 0
# (-4 + 0.1) ** 2 / (0.13 + 1)
expected_bias_gain = 13.460176991150442
# Verify candidate for partition 1, there's only one active feature here
# so zero gain is expected.
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[1])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.categorical_id_binary_split
self.assertAllClose(0.0, gains[1], 0.00001)
self.assertAllClose([expected_left_weight], left_child.value, 0.00001)
self.assertAllClose([expected_right_weight], right_child.value, 0.00001)
self.assertEqual(0, split_node.feature_column)
self.assertEqual(1, split_node.feature_id)
def testMakeMulticlassCategoricalEqualitySplit(self):
"""Tests split handler op for categorical equality split in multiclass."""
with self.test_session() as sess:
gradients = array_ops.constant([[1.8, 3.5], [2.4, 1.0], [0.4, 4.0],
[9.0, 3.1], [3.0, 0.8]])
hessian_0 = [[0.78, 1], [12, 1]]
hessian_1 = [[0.4, 1], [1, 1]]
hessian_2 = [[0.24, 1], [1, 1]]
hessian_3 = [[0.16, 2], [-1, 1]]
hessian_4 = [[0.6, 1], [2, 1]]
hessians = array_ops.constant(
[hessian_0, hessian_1, hessian_2, hessian_3, hessian_4])
partition_ids = [0, 0, 0, 1, 1]
feature_ids = array_ops.constant(
[[-1, 0], [1, 0], [2, 0], [-1, 0], [1, 0]], dtype=dtypes.int64)
partitions, gains, splits = (
split_handler_ops.build_categorical_equality_splits(
num_minibatches=2,
partition_ids=partition_ids,
feature_ids=feature_ids,
gradients=gradients,
hessians=hessians,
l1_regularization=0.1,
l2_regularization=1,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.FULL_HESSIAN))
partitions, gains, splits = sess.run([partitions, gains, splits])
self.assertAllEqual([0, 1], partitions)
split_info = split_info_pb2.SplitInfo()
split_info.ParseFromString(splits[1])
left_child = split_info.left_child.vector
right_child = split_info.right_child.vector
split_node = split_info.split_node.categorical_id_binary_split
# Each leaf has 2 element vector.
self.assertEqual(2, len(left_child.value))
self.assertEqual(2, len(right_child.value))
self.assertEqual(0, split_node.feature_column)
self.assertEqual(1, split_node.feature_id)
def testMakeCategoricalEqualitySplitEmptyInput(self):
with self.test_session() as sess:
gradients = []
hessians = []
partition_ids = []
feature_ids = [[]]
partitions, gains, splits = (
split_handler_ops.build_categorical_equality_splits(
num_minibatches=0,
partition_ids=partition_ids,
feature_ids=feature_ids,
gradients=gradients,
hessians=hessians,
l1_regularization=0.1,
l2_regularization=1,
tree_complexity_regularization=0,
min_node_weight=0,
feature_column_group_id=0,
bias_feature_id=-1,
class_id=-1,
multiclass_strategy=learner_pb2.LearnerConfig.TREE_PER_CLASS))
partitions, gains, splits = (sess.run([partitions, gains, splits]))
self.assertEqual(0, len(partitions))
self.assertEqual(0, len(gains))
self.assertEqual(0, len(splits))
if __name__ == "__main__":
googletest.main()
| 41.766881 | 80 | 0.632164 | 3,391 | 25,979 | 4.615748 | 0.078738 | 0.008306 | 0.038845 | 0.020636 | 0.868579 | 0.848454 | 0.840595 | 0.81012 | 0.79172 | 0.775811 | 0 | 0.081112 | 0.253512 | 25,979 | 621 | 81 | 41.834138 | 0.725984 | 0.181377 | 0 | 0.818605 | 0 | 0 | 0.000379 | 0 | 0 | 0 | 0 | 0 | 0.167442 | 1 | 0.023256 | false | 0 | 0.023256 | 0 | 0.048837 | 0.002326 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
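The `expected_*` constants asserted in the split-handler test file above all come from the same L1/L2-regularized second-order leaf arithmetic: soft-threshold the accumulated gradient by `l1`, then `weight = -g / (H + l2)` and `gain = g**2 / (H + l2)`. A small dependency-free sketch reproducing those numbers (the helper names are mine, not part of the TensorFlow API):

```python
# Illustrative helpers (names are mine, not TensorFlow API) reproducing the
# regularized leaf arithmetic behind the expected_* constants above.
def _shrink(grad_sum, l1):
    # L1 soft-threshold on the accumulated gradient.
    if grad_sum > l1:
        return grad_sum - l1
    if grad_sum < -l1:
        return grad_sum + l1
    return 0.0

def leaf_weight(grad_sum, hess_sum, l1=0.1, l2=1.0):
    return -_shrink(grad_sum, l1) / (hess_sum + l2)

def leaf_gain(grad_sum, hess_sum, l1=0.1, l2=1.0):
    g = _shrink(grad_sum, l1)
    return g * g / (hess_sum + l2)

# Partition 0 of testMakeCategoricalEqualitySplit: the left node accumulates
# gradients 0.2 + 1.2 and hessians 0.12 + 0.2 (after dividing by 2 steps).
print(leaf_weight(1.4, 0.32))   # -0.98484848... (expected_left_weight)
print(leaf_gain(1.4, 0.32))     # 1.28030303...  (expected_left_gain)
print(leaf_gain(-0.5, 0.07))    # 0.14953271...  (expected_right_gain)
```

Plugging in the other gradient/hessian sums from the test comments reproduces the remaining bias and partition-1 constants the same way.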
7c325980bca5941103faddddef94494ffbe1063f | 15,215 | py | Python | station/test/test_connection.py | jawaad-ahmad/brata.station | 510bc7e6465af080f0ad11503afcd0be1ac06f58 | [
"Apache-2.0"
] | null | null | null | station/test/test_connection.py | jawaad-ahmad/brata.station | 510bc7e6465af080f0ad11503afcd0be1ac06f58 | [
"Apache-2.0"
] | null | null | null | station/test/test_connection.py | jawaad-ahmad/brata.station | 510bc7e6465af080f0ad11503afcd0be1ac06f58 | [
"Apache-2.0"
] | null | null | null | # ------------------------------------------------------------------------------
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------------
"""
TODO module description
"""
from mock import MagicMock
from mock import Mock
from mock import patch
import sys
import unittest
sys.modules['flask'] = MagicMock()
from station.connection import ConnectionManager
from station.state import State
# ------------------------------------------------------------------------------
class ConnectionManagerTestCase(unittest.TestCase):
"""
TODO class comment
"""
# --------------------------------------------------------------------------
def setUp(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
station = Mock()
stationTypeId = 'hoiven-glaven'
config = Mock()
config.ResetUrlRule = '/path/to/reset/<int:pin>'
config.StartChallengeUrlRule = '/path/to/sc'
config.HandleSubmissionUrlRule = '/path/to/hs'
config.ShutdownUrlRule = '/path/to/hd'
self.Target = ConnectionManager(station, stationTypeId, config)
# --------------------------------------------------------------------------
def test_connected(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#target = Led(name)
#self.assertEqual(name, target.Name)
# --------------------------------------------------------------------------
def test_enter(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#output = pibrella.output.e
#self.assertEqual(0, output.read())
#self.Target.turnOn()
#self.assertEqual(1, output.read())
# --------------------------------------------------------------------------
def test_exit(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#output = pibrella.output.f
#self.assertEqual(0, output.read())
#self.Target.turnOff()
#self.assertEqual(1, output.read())
# --------------------------------------------------------------------------
def test_run(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_startListening(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_stopListening(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_timestamp(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_callService(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_join(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_leave(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_timeExpired(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_submitCtsComboToMS(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_submitCpaDetectionToMS(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
@patch('station.connection.logger')
def test_resetExpectedPin(self,
mock_logger):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
self._callback = Mock()
self._callback.State = State.PROCESSING
self.Target._resetPin = 'expectedPin'
pin = self.Target._resetPin
resp = self.Target.reset(pin)
self.assertFalse(mock_logger.warning.called)
self.assertEqual(State.READY, self.Target._callback.State, 'incorrect state: {}'.format(self._callback.State))
self.assertEqual(200, resp.status_code)
# --------------------------------------------------------------------------
@patch('station.connection.logger')
def test_resetUnexpectedPin(self,
mock_logger):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
self._callback = Mock()
self._callback.State = State.PROCESSING
self.Target._resetPin = 'expectedPin'
pin = 'unexpectedPin'
resp = self.Target.reset(pin)
self.assertTrue(mock_logger.warning.called)
self.assertEqual(State.PROCESSING, self._callback.State)
self.assertEqual(400, resp.status_code)
# --------------------------------------------------------------------------
def test_startChallenge(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_handleSubmission(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# --------------------------------------------------------------------------
def test_shutdown(self):
"""TODO strictly one-line summary
TODO Detailed multi-line description if
necessary.
Args:
arg1 (type1): TODO describe arg, valid values, etc.
arg2 (type2): TODO describe arg, valid values, etc.
arg3 (type3): TODO describe arg, valid values, etc.
Returns:
TODO describe the return type and details
Raises:
TodoError1: if TODO.
TodoError2: if TODO.
"""
# TODO
#self.Target.setFlashing()
# TODO
# ------------------------------------------------------------------------------
if __name__ == '__main__':
unittest.main()
| 30.188492 | 118 | 0.502859 | 1,436 | 15,215 | 5.298747 | 0.121866 | 0.119858 | 0.112367 | 0.149823 | 0.820739 | 0.812328 | 0.796294 | 0.766855 | 0.766855 | 0.766855 | 0 | 0.015853 | 0.311798 | 15,215 | 503 | 119 | 30.248509 | 0.71082 | 0.664607 | 0 | 0.206897 | 0 | 0 | 0.061574 | 0.024366 | 0 | 0 | 0 | 0.308151 | 0.103448 | 1 | 0.327586 | false | 0 | 0.12069 | 0 | 0.465517 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
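The `sys.modules['flask'] = MagicMock()` line in the test file above is a standard stubbing trick: plant a mock under the dependency's name *before* the code under test is imported, so its `import flask` resolves to the mock instead of the real package. A minimal self-contained sketch of the same technique (the module name `heavy_dep` is made up for illustration):

```python
import sys
from unittest.mock import MagicMock

# Plant the stub before anything tries to import the real package.
# 'heavy_dep' is a hypothetical module name used only for this sketch.
sys.modules['heavy_dep'] = MagicMock()

import heavy_dep  # resolves to the MagicMock planted above

heavy_dep.connect('localhost')   # any attribute access / call just works
print(heavy_dep.connect.called)  # True
```

Because `import` consults `sys.modules` first, every module imported afterwards sees the mock, which is why the stub must run before `from station.connection import ConnectionManager`.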
7c490b6b183c8db4039d8eb2669309fb089d28b7 | 3,006 | py | Python | common/policy.py | abhay97ps/visual-control-ppo-procgen | 765fe1ddb289d384abddc4df8eb865379c8da76a | [
"MIT"
] | null | null | null | common/policy.py | abhay97ps/visual-control-ppo-procgen | 765fe1ddb289d384abddc4df8eb865379c8da76a | [
"MIT"
] | null | null | null | common/policy.py | abhay97ps/visual-control-ppo-procgen | 765fe1ddb289d384abddc4df8eb865379c8da76a | [
"MIT"
] | null | null | null | from .misc_util import orthogonal_init
from .model import GRU
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical, Normal


class CategoricalPolicy(nn.Module):
    def __init__(self,
                 embedder,
                 recurrent,
                 action_size):
        """
        embedder: (torch.Tensor) model to extract the embedding for observation
        action_size: number of the categorical actions
        """
        super(CategoricalPolicy, self).__init__()
        self.embedder = embedder
        # small scale weight-initialization in policy enhances the stability
        self.fc_policy = orthogonal_init(nn.Linear(self.embedder.output_dim, action_size), gain=0.01)
        self.fc_value = orthogonal_init(nn.Linear(self.embedder.output_dim, 1), gain=1.0)

        self.recurrent = recurrent
        if self.recurrent:
            self.gru = GRU(self.embedder.output_dim, self.embedder.output_dim)

    def is_recurrent(self):
        return self.recurrent

    def forward(self, x, hx, masks):
        hidden = self.embedder(x)
        if self.recurrent:
            hidden, hx = self.gru(hidden, hx, masks)
        logits = self.fc_policy(hidden)
        log_probs = F.log_softmax(logits, dim=1)
        p = Categorical(logits=log_probs)
        v = self.fc_value(hidden).reshape(-1)
        return p, v, hx


class CustomCategoricalPolicy(nn.Module):
    def __init__(self,
                 embedder,
                 recurrent,
                 action_size):
        """
        embedder: (torch.Tensor) model to extract the embedding for observation
        action_size: number of the categorical actions
        """
        super(CustomCategoricalPolicy, self).__init__()
        self.embedder = embedder
        # small scale weight-initialization in policy enhances the stability
        self.fc_policy = orthogonal_init(nn.Linear(self.embedder.output_dim, action_size), gain=0.01)
        self.fc_value = orthogonal_init(nn.Linear(self.embedder.output_dim, 1), gain=1.0)

        self.recurrent = recurrent
        if self.recurrent:
            self.gru = GRU(self.embedder.output_dim, self.embedder.output_dim)

    def is_recurrent(self):
        return self.recurrent

    def forward(self, x, hx, masks):
        hidden, _, _, _ = self.embedder(x)
        if self.recurrent:
            hidden, hx = self.gru(hidden, hx, masks)
        logits = self.fc_policy(hidden)
        log_probs = F.log_softmax(logits, dim=1)
        p = Categorical(logits=log_probs)
        v = self.fc_value(hidden).reshape(-1)
        return p, v, hx

    def evaluate(self, x, hx, masks):
        hidden, c1, c2, c3 = self.embedder(x)
        if self.recurrent:
            hidden, hx = self.gru(hidden, hx, masks)
        logits = self.fc_policy(hidden)
        log_probs = F.log_softmax(logits, dim=1)
        p = Categorical(logits=log_probs)
        v = self.fc_value(hidden).reshape(-1)
        return p, v, hx, c1, c2, c3
| 37.111111 | 101 | 0.627412 | 378 | 3,006 | 4.830688 | 0.193122 | 0.098576 | 0.078861 | 0.092004 | 0.862541 | 0.852683 | 0.852683 | 0.852683 | 0.852683 | 0.852683 | 0 | 0.011014 | 0.275116 | 3,006 | 81 | 102 | 37.111111 | 0.826985 | 0.126414 | 0 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116667 | false | 0 | 0.083333 | 0.033333 | 0.316667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
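Both policies in the file above turn raw logits into a distribution via `F.log_softmax` followed by `Categorical(logits=...)`. For reference, log-softmax over one row is just a numerically stabilized normalization in log space; a dependency-free sketch (this mirrors the math, it is not the torch implementation):

```python
import math

def log_softmax(logits):
    # Subtract the max first so exp() cannot overflow for large logits.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

log_probs = log_softmax([2.0, 1.0, 0.1])
probs = [math.exp(lp) for lp in log_probs]
print(sum(probs))  # ~1.0: exponentiated log-probs form a valid distribution
```

The max-subtraction is the same stabilization trick `F.log_softmax(logits, dim=1)` applies row-wise before the sampled `Categorical` head.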
7cb11762e38b2e7e53f498de8eeaaf4d14eac8dd | 174 | py | Python | novelscraper/scrapers/__init__.py | HenryRocha/novel-scraper | 2a5b0b8153d6243777f0d13fc7eca5596d88b416 | [
"MIT"
] | null | null | null | novelscraper/scrapers/__init__.py | HenryRocha/novel-scraper | 2a5b0b8153d6243777f0d13fc7eca5596d88b416 | [
"MIT"
] | null | null | null | novelscraper/scrapers/__init__.py | HenryRocha/novel-scraper | 2a5b0b8153d6243777f0d13fc7eca5596d88b416 | [
"MIT"
] | null | null | null | """
scrapers/__init__.py
Exports all the scrapers provided by the package.
"""
from novelscraper.scrapers.novelfull import *
from novelscraper.scrapers.wuxiaworld import *
| 19.333333 | 49 | 0.793103 | 21 | 174 | 6.380952 | 0.666667 | 0.238806 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12069 | 174 | 8 | 50 | 21.75 | 0.875817 | 0.408046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
7cf00f8ef06992354f0d6c780ec6a25af8e636b9 | 104 | py | Python | entity/cards/LETLT_083/__init__.py | x014/lushi_script | edab2b88e3f0de8139de2541ab2daa331f777c0e | [
"MIT"
] | 102 | 2021-10-20T09:06:39.000Z | 2022-03-28T13:35:11.000Z | entity/cards/LETLT_083/__init__.py | x014/lushi_script | edab2b88e3f0de8139de2541ab2daa331f777c0e | [
"MIT"
] | 98 | 2021-10-19T16:13:27.000Z | 2022-03-27T13:27:49.000Z | entity/cards/LETLT_083/__init__.py | x014/lushi_script | edab2b88e3f0de8139de2541ab2daa331f777c0e | [
"MIT"
] | 55 | 2021-10-19T03:56:50.000Z | 2022-03-25T08:25:26.000Z | # -*- coding: utf-8 -*-
import entity.cards.LETLT_083.LETLT_083
import entity.cards.LETLT_083.LETLT_083
| 26 | 39 | 0.769231 | 17 | 104 | 4.470588 | 0.470588 | 0.421053 | 0.447368 | 0.578947 | 0.868421 | 0.868421 | 0.868421 | 0 | 0 | 0 | 0 | 0.136842 | 0.086538 | 104 | 3 | 40 | 34.666667 | 0.663158 | 0.201923 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 12 |
6b1ef59a36d758d4e6b86f9ff342921291266b6e | 5,644 | py | Python | test/test_marks.py | bpreskit/pytest-describe | d58034c162e8026bd17cabdd97caa559d7d43a92 | [
"MIT"
] | null | null | null | test/test_marks.py | bpreskit/pytest-describe | d58034c162e8026bd17cabdd97caa559d7d43a92 | [
"MIT"
] | null | null | null | test/test_marks.py | bpreskit/pytest-describe | d58034c162e8026bd17cabdd97caa559d7d43a92 | [
"MIT"
] | null | null | null | import py
from util import assert_outcomes
pytest_plugins = 'pytester'
def test_special_marks(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
def describe_marks():
@pytest.mark.xfail
def xfails():
assert False
@pytest.mark.xfail
def xpasses():
pass
@pytest.mark.skipif("0 < 1")
def skipped():
pass
@pytest.mark.parametrize('foo', (1, 2, 3))
def isint(foo):
assert foo == int(foo)
"""))
result = testdir.runpytest()
assert_outcomes(result, passed=3, xfailed=1, xpassed=1, skipped=1)
def test_cartesian_parametrize(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
def describe_marks():
@pytest.mark.parametrize('foo', (1, 2, 3))
@pytest.mark.parametrize('bar', (1, 2, 3))
def isint(foo, bar):
assert foo == int(foo)
assert bar == int(bar)
"""))
result = testdir.runpytest()
assert_outcomes(result, passed=9)
def test_parametrize_applies_to_describe(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
@pytest.mark.parametrize('foo', (1, 2, 3))
def describe_marks():
@pytest.mark.parametrize('bar', (1, 2, 3))
def isint(foo, bar):
assert foo == int(foo)
assert bar == int(bar)
def isint2(foo):
assert foo == int(foo)
def describe_nested():
def isint3(foo):
assert foo == int(foo)
"""))
result = testdir.runpytest()
assert_outcomes(result, passed=15)
def test_cartesian_parametrize_on_describe(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
@pytest.mark.parametrize('foo', (1, 2, 3))
@pytest.mark.parametrize('bar', (1, 2, 3))
def describe_marks():
def isint(foo, bar):
assert foo == int(foo)
assert bar == int(bar)
"""))
result = testdir.runpytest()
assert_outcomes(result, passed=9)
def test_parametrize_with_shared(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
from pytest import fixture
from pytest_describe import behaves_like
def a_duck():
def it_quacks(sound):
assert sound == int(sound)
@pytest.mark.parametrize('foo', (1, 2, 3))
@behaves_like(a_duck)
def describe_something_that_quacks():
@fixture
def sound(foo):
return foo
@pytest.mark.parametrize('foo', (1, 2, 3))
@behaves_like(a_duck)
def describe_something_that_barks():
@fixture
def sound(foo):
return foo
"""))
result = testdir.runpytest()
assert_outcomes(result, passed=6)
def test_coincident_parametrize_at_top(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
@pytest.mark.parametrize('foo', (1, 2, 3))
def describe_marks():
@pytest.mark.parametrize('bar', (1, 2, 3))
def isint(foo, bar):
assert foo == int(foo)
assert bar == int(bar)
@pytest.mark.parametrize('foo', (1, 2, 3))
def describe_marks2():
def isint2(foo):
assert foo == int(foo)
"""))
result = testdir.runpytest()
assert_outcomes(result, passed=12)
def test_keywords(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
def describe_a():
@pytest.mark.foo
def foo_test():
pass
@pytest.mark.bar
def bar_test():
pass
"""))
result = testdir.runpytest('-k', 'foo')
assert_outcomes(result, passed=1, deselected=1)
def test_marks(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
def describe_a():
@pytest.mark.foo
def foo_test():
pass
@pytest.mark.bar
def bar_test():
pass
"""))
result = testdir.runpytest('-m', 'foo')
assert_outcomes(result, passed=1, deselected=1)
def test_module_marks(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
pytestmark = [ pytest.mark.foo ]
def describe_a():
pytestmark = [ pytest.mark.bar ]
def describe_b():
def a_test():
pass
"""))
result = testdir.runpytest('-m', 'foo')
assert_outcomes(result, passed=1)
def test_mark_at_describe_function(testdir):
a_dir = testdir.mkpydir('a_dir')
a_dir.join('test_a.py').write(py.code.Source("""
import pytest
@pytest.mark.foo
def describe_foo():
def describe_a():
def a_test():
pass
@pytest.mark.bar
def b_test():
pass
"""))
result = testdir.runpytest('-m', 'foo')
assert_outcomes(result, passed=2)
| 26.251163 | 70 | 0.542169 | 670 | 5,644 | 4.398507 | 0.119403 | 0.040719 | 0.085511 | 0.061079 | 0.809637 | 0.796742 | 0.765185 | 0.752969 | 0.725484 | 0.712589 | 0 | 0.015453 | 0.323529 | 5,644 | 214 | 71 | 26.373832 | 0.756417 | 0 | 0 | 0.75 | 0 | 0 | 0.627746 | 0.088767 | 0 | 0 | 0 | 0 | 0.152439 | 1 | 0.060976 | false | 0.121951 | 0.085366 | 0 | 0.158537 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
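Stacked `@pytest.mark.parametrize` decorators, as exercised in `test_cartesian_parametrize` above, generate one test item per element of the cartesian product of the value sets; that product is where the expected `passed=9` and `passed=15` counts come from. The counting can be sketched in plain Python without pytest:

```python
from itertools import product

foo_values = (1, 2, 3)
bar_values = (1, 2, 3)

# Two stacked parametrize marks -> one test item per (foo, bar) pair.
items = list(product(foo_values, bar_values))
print(len(items))  # 9

# test_parametrize_applies_to_describe: isint gets 3*3 items, while isint2
# and isint3 each get 3 (only 'foo' applies to them), hence passed=15.
print(3 * 3 + 3 + 3)  # 15
```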
6b388d5cae873988acbe3d14ef56ea04b5f95943 | 250 | py | Python | tests/basics/continue.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | tests/basics/continue.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | tests/basics/continue.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | for i in range(4):
    print('one', i)
    if i > 2:
        continue
    print('two', i)

for i in range(4):
    print('one', i)
    if i < 2:
        continue
    print('two', i)

for i in [1, 2, 3, 4]:
    if i == 3:
        continue
    print(i)
| 14.705882 | 22 | 0.448 | 43 | 250 | 2.604651 | 0.302326 | 0.107143 | 0.160714 | 0.196429 | 0.803571 | 0.803571 | 0.803571 | 0.803571 | 0.803571 | 0.803571 | 0 | 0.058824 | 0.388 | 250 | 16 | 23 | 15.625 | 0.673203 | 0 | 0 | 0.642857 | 0 | 0 | 0.048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.357143 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
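The behavior the test file above exercises is that `continue` skips only the remainder of the current iteration, never the loop itself. A quick sketch of the first and third loops, collecting values instead of printing:

```python
# First loop: 'two' is only reached while i <= 2.
two = []
for i in range(4):
    if i > 2:
        continue
    two.append(i)
print(two)  # [0, 1, 2]

# Third loop: i == 3 is skipped entirely from the output.
printed = []
for i in [1, 2, 3, 4]:
    if i == 3:
        continue
    printed.append(i)
print(printed)  # [1, 2, 4]
```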
861bf9c6a5fb01d92d4226bcc23fd0fbbfa1af53 | 104 | py | Python | src/style_rank/__init__.py | cifkao/style_rank | 3b07ecffcb669b24c22e63919a3dd95d14c29108 | [
"ISC"
] | 11 | 2019-10-25T02:05:21.000Z | 2021-01-04T11:02:42.000Z | src/style_rank/__init__.py | cifkao/style_rank | 3b07ecffcb669b24c22e63919a3dd95d14c29108 | [
"ISC"
] | 2 | 2019-11-29T10:29:24.000Z | 2019-12-02T18:00:56.000Z | src/style_rank/__init__.py | cifkao/style_rank | 3b07ecffcb669b24c22e63919a3dd95d14c29108 | [
"ISC"
] | 2 | 2019-12-01T13:18:53.000Z | 2020-09-28T06:19:52.000Z | from style_rank.api import get_features, get_similarity_matrix, get_feature_csv, get_feature_names, rank | 104 | 104 | 0.884615 | 17 | 104 | 4.941176 | 0.705882 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067308 | 104 | 1 | 104 | 104 | 0.865979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
86682fd841dc83ae8fa7e9096175255691b9a687 | 108 | py | Python | backend/tests/conftest.py | FEDMix/fedmix-viewer | 3116d7c85aa73016d5657c2c084cddb7fb64e399 | [
"Apache-2.0"
] | null | null | null | backend/tests/conftest.py | FEDMix/fedmix-viewer | 3116d7c85aa73016d5657c2c084cddb7fb64e399 | [
"Apache-2.0"
] | 3 | 2020-10-19T09:27:42.000Z | 2020-10-23T11:26:52.000Z | backend/tests/conftest.py | FEDMix/fedmix-viewer | 3116d7c85aa73016d5657c2c084cddb7fb64e399 | [
"Apache-2.0"
] | null | null | null | import pytest
from fedmix_backend import get_schema


@pytest.fixture
def schema():
    return get_schema()
# monolingual-word-aligner/aligner.py (FTAsr/STS, MIT)
from wordSim import *
from util import *
from coreNlpUtil import *
##############################################################################################################################
def alignNouns(source, target, sourceParseResult, targetParseResult, existingAlignments):
# source and target: each is a list of elements of the form:
# [[character begin offset, character end offset], word index, word, lemma, pos tag]
global ppdbSim
global theta1
nounAlignments = []
sourceWordIndices = [i+1 for i in xrange(len(source))]
targetWordIndices = [i+1 for i in xrange(len(target))]
sourceWordIndicesAlreadyAligned = sorted(list(set([item[0] for item in existingAlignments])))
targetWordIndicesAlreadyAligned = sorted(list(set([item[1] for item in existingAlignments])))
sourceWords = [item[2] for item in source]
targetWords = [item[2] for item in target]
sourceLemmas = [item[3] for item in source]
targetLemmas = [item[3] for item in target]
sourcePosTags = [item[4] for item in source]
targetPosTags = [item[4] for item in target]
sourceDParse = dependencyParseAndPutOffsets(sourceParseResult)
targetDParse = dependencyParseAndPutOffsets(targetParseResult)
numberOfNounsInSource = 0
evidenceCountsMatrix = {}
relativeAlignmentsMatrix = {}
wordSimilarities = {}
# construct the two matrices in the following loop
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or (sourcePosTags[i-1][0].lower() != 'n' and sourcePosTags[i-1].lower() != 'prp'):
continue
numberOfNounsInSource += 1
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or (targetPosTags[j-1][0].lower() != 'n' and targetPosTags[j-1].lower() != 'prp'):
continue
if max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))<ppdbSim:
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordParents = findParents(sourceDParse, i, sourceWords[i-1])
sourceWordChildren = findChildren(sourceDParse, i, sourceWords[i-1])
targetWordParents = findParents(targetDParse, j, targetWords[j-1])
targetWordChildren = findChildren(targetDParse, j, targetWords[j-1])
# search for common or equivalent parents
groupOfSimilarRelationsForNounParent = ['pos', 'nn', 'prep_of', 'prep_in', 'prep_at', 'prep_for']
group1OfSimilarRelationsForVerbParent = ['agent', 'nsubj', 'xsubj']
group2OfSimilarRelationsForVerbParent = ['ccomp', 'dobj', 'nsubjpass', 'rel', 'partmod']
group3OfSimilarRelationsForVerbParent = ['tmod', 'prep_in', 'prep_at', 'prep_on']
group4OfSimilarRelationsForVerbParent = ['iobj', 'prep_to']
for ktem in sourceWordParents:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+nounAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsForNounParent and ltem[2] in groupOfSimilarRelationsForNounParent) or
(ktem[2] in group1OfSimilarRelationsForVerbParent and ltem[2] in group1OfSimilarRelationsForVerbParent) or
(ktem[2] in group2OfSimilarRelationsForVerbParent and ltem[2] in group2OfSimilarRelationsForVerbParent) or
(ktem[2] in group3OfSimilarRelationsForVerbParent and ltem[2] in group3OfSimilarRelationsForVerbParent) or
(ktem[2] in group4OfSimilarRelationsForVerbParent and ltem[2] in group4OfSimilarRelationsForVerbParent)):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for common or equivalent children
groupOfSimilarRelationsForNounChild = ['pos', 'nn', 'prep_of', 'prep_in', 'prep_at', 'prep_for']
groupOfSimilarRelationsForVerbChild = ['infmod', 'partmod', 'rcmod']
groupOfSimilarRelationsForAdjectiveChild = ['amod', 'rcmod']
for ktem in sourceWordChildren:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+nounAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsForNounChild and ltem[2] in groupOfSimilarRelationsForNounChild) or
(ktem[2] in groupOfSimilarRelationsForVerbChild and ltem[2] in groupOfSimilarRelationsForVerbChild) or
(ktem[2] in groupOfSimilarRelationsForAdjectiveChild and ltem[2] in groupOfSimilarRelationsForAdjectiveChild)):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent parent-child relations
groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild = [['nsubj'], ['amod', 'rcmod']]
groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild = [['ccomp', 'dobj', 'nsubjpass', 'rel', 'partmod'], ['infmod', 'partmod', 'rcmod']]
group1OfSimilarRelationsInOppositeDirectionForNounParentAndChild = [['conj_and'], ['conj_and']]
group2OfSimilarRelationsInOppositeDirectionForNounParentAndChild = [['conj_or'], ['conj_or']]
group3OfSimilarRelationsInOppositeDirectionForNounParentAndChild = [['conj_nor'], ['conj_nor']]
for ktem in sourceWordParents:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+nounAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForNounParentAndChild[0] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForNounParentAndChild[1]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForNounParentAndChild[0] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForNounParentAndChild[1]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForNounParentAndChild[0] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForNounParentAndChild[1])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent child-parent relations
for ktem in sourceWordChildren:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+nounAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForNounParentAndChild[1] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForNounParentAndChild[0]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForNounParentAndChild[1] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForNounParentAndChild[0]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForNounParentAndChild[1] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForNounParentAndChild[0])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# now use the collected stats to align
for n in xrange(numberOfNounsInSource):
maxEvidenceCountForCurrentPass = 0
maxOverallValueForCurrentPass = 0
indexPairWithStrongestTieForCurrentPass = [-1, -1]
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourcePosTags[i-1][0].lower() != 'n' or sourceLemmas[i-1] in stopwords:
continue
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetPosTags[j-1][0].lower() != 'n' or targetLemmas[j-1] in stopwords:
continue
if (i, j) in evidenceCountsMatrix and theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]>maxOverallValueForCurrentPass:
maxOverallValueForCurrentPass = theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]
maxEvidenceCountForCurrentPass = evidenceCountsMatrix[(i, j)]
indexPairWithStrongestTieForCurrentPass = [i, j]
if maxEvidenceCountForCurrentPass > 0:
nounAlignments.append(indexPairWithStrongestTieForCurrentPass)
sourceWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[0])
targetWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[1])
for item in relativeAlignmentsMatrix[(indexPairWithStrongestTieForCurrentPass[0], indexPairWithStrongestTieForCurrentPass[1])]:
if item[0] != 0 and item[1] != 0 and item[0] not in sourceWordIndicesAlreadyAligned and item[1] not in targetWordIndicesAlreadyAligned:
nounAlignments.append(item)
sourceWordIndicesAlreadyAligned.append(item[0])
targetWordIndicesAlreadyAligned.append(item[1])
else:
break
return nounAlignments
##############################################################################################################################
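The create-or-append updates to evidenceCountsMatrix and relativeAlignmentsMatrix, repeated for every parent/child search above, can be condensed with `dict.get` and `dict.setdefault`. A minimal illustrative sketch (`record_evidence` is a hypothetical helper name, not part of the aligner's API):

```python
# Illustrative helper condensing the repeated "create-or-append" pattern
# used for evidenceCountsMatrix and relativeAlignmentsMatrix above.
def record_evidence(evidence_counts, relative_alignments, pair, score, rel_pair):
    # accumulate the relatedness score as evidence for this (i, j) pair
    evidence_counts[pair] = evidence_counts.get(pair, 0) + score
    # remember which parent/child index pair supplied the evidence
    relative_alignments.setdefault(pair, []).append(rel_pair)
```

Each four-branch `if/else` block in the loops above would then collapse to a single call.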
##############################################################################################################################
def alignMainVerbs(source, target, sourceParseResult, targetParseResult, existingAlignments):
# source and target: each is a list of elements of the form:
# [[character begin offset, character end offset], word index, word, lemma, pos tag]
global ppdbSim
global theta1
mainVerbAlignments = []
sourceWordIndices = [i+1 for i in xrange(len(source))]
targetWordIndices = [i+1 for i in xrange(len(target))]
sourceWordIndicesAlreadyAligned = sorted(list(set([item[0] for item in existingAlignments])))
targetWordIndicesAlreadyAligned = sorted(list(set([item[1] for item in existingAlignments])))
sourceWords = [item[2] for item in source]
targetWords = [item[2] for item in target]
sourceLemmas = [item[3] for item in source]
targetLemmas = [item[3] for item in target]
sourcePosTags = [item[4] for item in source]
targetPosTags = [item[4] for item in target]
sourceDParse = dependencyParseAndPutOffsets(sourceParseResult)
targetDParse = dependencyParseAndPutOffsets(targetParseResult)
numberOfMainVerbsInSource = 0
evidenceCountsMatrix = {}
relativeAlignmentsMatrix = {}
wordSimilarities = {}
# construct the two matrices in the following loop
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourcePosTags[i-1][0].lower() != 'v' or sourceLemmas[i-1] in stopwords:
continue
numberOfMainVerbsInSource += 1
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetPosTags[j-1][0].lower() != 'v' or targetLemmas[j-1] in stopwords:
continue
if max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))<ppdbSim:
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordParents = findParents(sourceDParse, i, sourceWords[i-1])
sourceWordChildren = findChildren(sourceDParse, i, sourceWords[i-1])
targetWordParents = findParents(targetDParse, j, targetWords[j-1])
targetWordChildren = findChildren(targetDParse, j, targetWords[j-1])
# search for common or equivalent children
group1OfSimilarRelationsForNounChild = ['agent', 'nsubj', 'xsubj']
group2OfSimilarRelationsForNounChild = ['ccomp', 'dobj', 'nsubjpass', 'rel', 'partmod']
group3OfSimilarRelationsForNounChild = ['tmod', 'prep_in', 'prep_at', 'prep_on']
group4OfSimilarRelationsForNounChild = ['iobj', 'prep_to']
groupOfSimilarRelationsForVerbChild = ['purpcl', 'xcomp']
for ktem in sourceWordChildren:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+mainVerbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in group1OfSimilarRelationsForNounChild and ltem[2] in group1OfSimilarRelationsForNounChild) or
(ktem[2] in group2OfSimilarRelationsForNounChild and ltem[2] in group2OfSimilarRelationsForNounChild) or
(ktem[2] in group3OfSimilarRelationsForNounChild and ltem[2] in group3OfSimilarRelationsForNounChild) or
(ktem[2] in group4OfSimilarRelationsForNounChild and ltem[2] in group4OfSimilarRelationsForNounChild) or
(ktem[2] in groupOfSimilarRelationsForVerbChild and ltem[2] in groupOfSimilarRelationsForVerbChild)):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for common or equivalent parents
groupOfSimilarRelationsForNounParent = ['infmod', 'partmod', 'rcmod']
groupOfSimilarRelationsForVerbParent = ['purpcl', 'xcomp']
for ktem in sourceWordParents:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+mainVerbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsForNounParent and ltem[2] in groupOfSimilarRelationsForNounParent) or
(ktem[2] in groupOfSimilarRelationsForVerbParent and ltem[2] in groupOfSimilarRelationsForVerbParent)):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent parent-child pairs
groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild = [['cop', 'csubj'], ['acomp']]
group1OfSimilarRelationsInOppositeDirectionForVerbParentAndChild = [['csubj'], ['csubjpass']]
group2OfSimilarRelationsInOppositeDirectionForVerbParentAndChild = [['conj_and'], ['conj_and']]
group3OfSimilarRelationsInOppositeDirectionForVerbParentAndChild = [['conj_or'], ['conj_or']]
group4OfSimilarRelationsInOppositeDirectionForVerbParentAndChild = [['conj_nor'], ['conj_nor']]
for ktem in sourceWordParents:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+mainVerbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1]) or
(ktem[2] in group4OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0] and ltem[2] in group4OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent child-parent pairs
for ktem in sourceWordChildren:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+mainVerbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0]) or
(ktem[2] in group4OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1] and ltem[2] in group4OfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# now use the collected stats to align
for n in xrange(numberOfMainVerbsInSource):
maxEvidenceCountForCurrentPass = 0
maxOverallValueForCurrentPass = 0
indexPairWithStrongestTieForCurrentPass = [-1, -1]
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourcePosTags[i-1][0].lower() != 'v' or sourceLemmas[i-1] in stopwords:
continue
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetPosTags[j-1][0].lower() != 'v' or targetLemmas[j-1] in stopwords:
continue
if (i, j) in evidenceCountsMatrix and theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]>maxOverallValueForCurrentPass:
maxOverallValueForCurrentPass = theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]
maxEvidenceCountForCurrentPass = evidenceCountsMatrix[(i, j)]
indexPairWithStrongestTieForCurrentPass = [i, j]
if maxEvidenceCountForCurrentPass > 0:
mainVerbAlignments.append(indexPairWithStrongestTieForCurrentPass)
sourceWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[0])
targetWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[1])
for item in relativeAlignmentsMatrix[(indexPairWithStrongestTieForCurrentPass[0], indexPairWithStrongestTieForCurrentPass[1])]:
if item[0] != 0 and item[1] != 0 and item[0] not in sourceWordIndicesAlreadyAligned and item[1] not in targetWordIndicesAlreadyAligned:
mainVerbAlignments.append(item)
sourceWordIndicesAlreadyAligned.append(item[0])
targetWordIndicesAlreadyAligned.append(item[1])
else:
break
return mainVerbAlignments
##############################################################################################################################
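The "now use the collected stats to align" passes in alignNouns and alignMainVerbs both implement the same greedy selection: repeatedly pick the candidate pair (i, j) maximizing theta1*similarity + (1-theta1)*evidence, then mark both indices as aligned. A self-contained sketch of that core loop in modern Python (names `greedy_align` and the default `theta1` are illustrative, not the aligner's API):

```python
# Greedy pass over precomputed similarity and evidence dictionaries, both
# keyed by (source index, target index) pairs, mirroring the selection
# loops in alignNouns/alignMainVerbs above.
def greedy_align(word_similarities, evidence_counts, theta1=0.9):
    alignments = []
    aligned_src, aligned_tgt = set(), set()
    for _ in range(len(word_similarities)):
        best_pair, best_value, best_evidence = None, 0, 0
        for (i, j), sim in word_similarities.items():
            if i in aligned_src or j in aligned_tgt or (i, j) not in evidence_counts:
                continue
            value = theta1 * sim + (1 - theta1) * evidence_counts[(i, j)]
            if value > best_value:
                best_pair, best_value = (i, j), value
                best_evidence = evidence_counts[(i, j)]
        # stop once no remaining pair has positive dependency evidence
        if best_pair is None or best_evidence <= 0:
            break
        alignments.append(list(best_pair))
        aligned_src.add(best_pair[0])
        aligned_tgt.add(best_pair[1])
    return alignments
```

The real functions additionally absorb the "relative alignments" (the supporting parent/child pairs) after each greedy pick; this sketch shows only the scoring and one-to-one selection.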
##############################################################################################################################
def alignAdjectives(source, target, sourceParseResult, targetParseResult, existingAlignments):
# source and target: each is a list of elements of the form:
# [[character begin offset, character end offset], word index, word, lemma, pos tag]
global ppdbSim
global theta1
adjectiveAlignments = []
sourceWordIndices = [i+1 for i in xrange(len(source))]
targetWordIndices = [i+1 for i in xrange(len(target))]
sourceWordIndicesAlreadyAligned = sorted(list(set([item[0] for item in existingAlignments])))
targetWordIndicesAlreadyAligned = sorted(list(set([item[1] for item in existingAlignments])))
sourceWords = [item[2] for item in source]
targetWords = [item[2] for item in target]
sourceLemmas = [item[3] for item in source]
targetLemmas = [item[3] for item in target]
sourcePosTags = [item[4] for item in source]
targetPosTags = [item[4] for item in target]
sourceDParse = dependencyParseAndPutOffsets(sourceParseResult)
targetDParse = dependencyParseAndPutOffsets(targetParseResult)
numberOfAdjectivesInSource = 0
evidenceCountsMatrix = {}
relativeAlignmentsMatrix = {}
wordSimilarities = {}
# construct the two matrices in the following loop
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourcePosTags[i-1][0].lower() != 'j':
continue
numberOfAdjectivesInSource += 1
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetPosTags[j-1][0].lower() != 'j':
continue
if max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))<ppdbSim:
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordParents = findParents(sourceDParse, i, sourceWords[i-1])
sourceWordChildren = findChildren(sourceDParse, i, sourceWords[i-1])
targetWordParents = findParents(targetDParse, j, targetWords[j-1])
targetWordChildren = findChildren(targetDParse, j, targetWords[j-1])
# search for common or equivalent parents
groupOfSimilarRelationsForNounParent = ['amod', 'rcmod']
for ktem in sourceWordParents:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+adjectiveAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and ((ktem[2]==ltem[2]) or (ktem[2] in groupOfSimilarRelationsForNounParent and ltem[2] in groupOfSimilarRelationsForNounParent)):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for common children
for ktem in sourceWordChildren:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+adjectiveAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (ktem[2]==ltem[2]):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent parent-child pair
groupOfSimilarRelationsInOppositeDirectionForNounParentAndChild = [['amod', 'rcmod'], ['nsubj']]
groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild = [['acomp'], ['cop', 'csubj']]
group1OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild = [['conj_and'], ['conj_and']]
group2OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild = [['conj_or'], ['conj_or']]
group3OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild = [['conj_nor'], ['conj_nor']]
for ktem in sourceWordParents:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+adjectiveAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForNounParentAndChild[0] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForNounParentAndChild[1]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent child-parent pair
for ktem in sourceWordChildren:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+adjectiveAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForNounParentAndChild[1] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForNounParentAndChild[0]) or
(ktem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[1] and ltem[2] in groupOfSimilarRelationsInOppositeDirectionForVerbParentAndChild[0]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[1] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForAdjectiveParentAndChild[0])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# now use the collected stats to align
for n in xrange(numberOfAdjectivesInSource):
maxEvidenceCountForCurrentPass = 0
maxOverallValueForCurrentPass = 0
indexPairWithStrongestTieForCurrentPass = [-1, -1]
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourcePosTags[i-1][0].lower() != 'j' or sourceLemmas[i-1] in stopwords:
continue
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetPosTags[j-1][0].lower() != 'j' or targetLemmas[j-1] in stopwords:
continue
if (i, j) in evidenceCountsMatrix and theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]>maxOverallValueForCurrentPass:
maxOverallValueForCurrentPass = theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]
maxEvidenceCountForCurrentPass = evidenceCountsMatrix[(i, j)]
indexPairWithStrongestTieForCurrentPass = [i, j]
if maxEvidenceCountForCurrentPass > 0:
adjectiveAlignments.append(indexPairWithStrongestTieForCurrentPass)
sourceWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[0])
targetWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[1])
for item in relativeAlignmentsMatrix[(indexPairWithStrongestTieForCurrentPass[0], indexPairWithStrongestTieForCurrentPass[1])]:
if item[0] != 0 and item[1] != 0 and item[0] not in sourceWordIndicesAlreadyAligned and item[1] not in targetWordIndicesAlreadyAligned:
adjectiveAlignments.append(item)
sourceWordIndicesAlreadyAligned.append(item[0])
targetWordIndicesAlreadyAligned.append(item[1])
else:
break
return adjectiveAlignments
##############################################################################################################################
##############################################################################################################################
def alignAdverbs(source, target, sourceParseResult, targetParseResult, existingAlignments):
# source and target:: each is a list of elements of the form:
# [[character begin offset, character end offset], word index, word, lemma, pos tag]
global ppdbSim
global theta1
adverbAlignments = []
sourceWordIndices = [i+1 for i in xrange(len(source))]
targetWordIndices = [i+1 for i in xrange(len(target))]
sourceWordIndicesAlreadyAligned = sorted(list(set([item[0] for item in existingAlignments])))
targetWordIndicesAlreadyAligned = sorted(list(set([item[1] for item in existingAlignments])))
sourceWords = [item[2] for item in source]
targetWords = [item[2] for item in target]
sourceLemmas = [item[3] for item in source]
targetLemmas = [item[3] for item in target]
sourcePosTags = [item[4] for item in source]
targetPosTags = [item[4] for item in target]
sourceDParse = dependencyParseAndPutOffsets(sourceParseResult)
targetDParse = dependencyParseAndPutOffsets(targetParseResult)
numberOfAdverbsInSource = 0
evidenceCountsMatrix = {}
relativeAlignmentsMatrix = {}
wordSimilarities = {}
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or (sourcePosTags[i-1][0].lower() != 'r'):
continue
numberOfAdverbsInSource += 1
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or (targetPosTags[j-1][0].lower() != 'r'):
continue
if max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))<ppdbSim:
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordParents = findParents(sourceDParse, i, sourceWords[i-1])
sourceWordChildren = findChildren(sourceDParse, i, sourceWords[i-1])
targetWordParents = findParents(targetDParse, j, targetWords[j-1])
targetWordChildren = findChildren(targetDParse, j, targetWords[j-1])
# search for common parents
for ktem in sourceWordParents:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+adverbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (ktem[2]==ltem[2]):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for common children
for ktem in sourceWordChildren:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+adverbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (ktem[2]==ltem[2]):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent parent-child relationships
group1OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild = [['conj_and'], ['conj_and']]
group2OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild = [['conj_or'], ['conj_or']]
group3OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild = [['conj_nor'], ['conj_nor']]
for ktem in sourceWordParents:
for ltem in targetWordChildren:
if ((ktem[0], ltem[0]) in existingAlignments+adverbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[0] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[1]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[0] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[1]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[0] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[1])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# search for equivalent child-parent relationships
for ktem in sourceWordChildren:
for ltem in targetWordParents:
if ((ktem[0], ltem[0]) in existingAlignments+adverbAlignments or max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))>=ppdbSim) and (
(ktem[2]==ltem[2]) or
(ktem[2] in group1OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[1] and ltem[2] in group1OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[0]) or
(ktem[2] in group2OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[1] and ltem[2] in group2OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[0]) or
(ktem[2] in group3OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[1] and ltem[2] in group3OfSimilarRelationsInOppositeDirectionForAdverbParentAndChild[0])):
if (i, j) in evidenceCountsMatrix:
evidenceCountsMatrix[(i, j)] += max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
else:
evidenceCountsMatrix[(i, j)] = max(wordRelatedness(ktem[1], sourcePosTags[ktem[0]-1], ltem[1], targetPosTags[ltem[0]-1]), wordRelatedness(sourceLemmas[ktem[0]-1], sourcePosTags[ktem[0]-1], targetLemmas[ltem[0]-1], targetPosTags[ltem[0]-1]))
if (i, j) in relativeAlignmentsMatrix:
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
else:
relativeAlignmentsMatrix[(i, j)] = []
relativeAlignmentsMatrix[(i, j)].append([ktem[0], ltem[0]])
# now use the collected stats to align
for n in xrange(numberOfAdverbsInSource):
maxEvidenceCountForCurrentPass = 0
maxOverallValueForCurrentPass = 0
indexPairWithStrongestTieForCurrentPass = [-1, -1]
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourcePosTags[i-1][0].lower() != 'r' or sourceLemmas[i-1] in stopwords:
continue
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetPosTags[j-1][0].lower() != 'r' or targetLemmas[j-1] in stopwords:
continue
if (i, j) in evidenceCountsMatrix and theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]>maxOverallValueForCurrentPass:
maxOverallValueForCurrentPass = theta1*wordSimilarities[(i, j)]+(1-theta1)*evidenceCountsMatrix[(i, j)]
maxEvidenceCountForCurrentPass = evidenceCountsMatrix[(i, j)]
indexPairWithStrongestTieForCurrentPass = [i, j]
if maxEvidenceCountForCurrentPass > 0:
adverbAlignments.append(indexPairWithStrongestTieForCurrentPass)
sourceWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[0])
targetWordIndicesAlreadyAligned.append(indexPairWithStrongestTieForCurrentPass[1])
for item in relativeAlignmentsMatrix[(indexPairWithStrongestTieForCurrentPass[0], indexPairWithStrongestTieForCurrentPass[1])]:
if item[0] != 0 and item[1] != 0 and item[0] not in sourceWordIndicesAlreadyAligned and item[1] not in targetWordIndicesAlreadyAligned:
adverbAlignments.append(item)
sourceWordIndicesAlreadyAligned.append(item[0])
targetWordIndicesAlreadyAligned.append(item[1])
else:
break
return adverbAlignments
##############################################################################################################################
##############################################################################################################################
def alignNamedEntities(source, target, sourceParseResult, targetParseResult, existingAlignments):
# source and target:: each is a list of elements of the form:
# [[character begin offset, character end offset], word index, word, lemma, pos tag]
global punctuations
alignments = []
sourceNamedEntities = ner(sourceParseResult)
sourceNamedEntities = sorted(sourceNamedEntities, key=lambda entity: len(entity[1]))  # shortest names first
targetNamedEntities = ner(targetParseResult)
targetNamedEntities = sorted(targetNamedEntities, key=lambda entity: len(entity[1]))  # shortest names first
# learn from the other sentence that a certain word/phrase is a named entity (learn for source from target)
for item in source:
alreadyIncluded = False
for jtem in sourceNamedEntities:
if item[1] in jtem[1]:
alreadyIncluded = True
break
if alreadyIncluded or (len(item[2]) >0 and not item[2][0].isupper()):
continue
for jtem in targetNamedEntities:
if item[2] in jtem[2]:
# construct the item
newItem = [[item[0]], [item[1]], [item[2]], jtem[3]]
# check if the current item is part of a named entity part of which has already been added (by checking contiguousness)
partOfABiggerName = False
for k in xrange(len(sourceNamedEntities)):
if sourceNamedEntities[k][1][-1] == newItem[1][0] - 1:
sourceNamedEntities[k][0].append(newItem[0][0])
sourceNamedEntities[k][1].append(newItem[1][0])
sourceNamedEntities[k][2].append(newItem[2][0])
partOfABiggerName = True
if not partOfABiggerName:
sourceNamedEntities.append(newItem)
elif isAcronym(item[2], jtem[2]) and [[item[0]], [item[1]], [item[2]], jtem[3]] not in sourceNamedEntities:
sourceNamedEntities.append([[item[0]], [item[1]], [item[2]], jtem[3]])
# learn from the other sentence that a certain word/phrase is a named entity (learn for target from source)
for item in target:
alreadyIncluded = False
for jtem in targetNamedEntities:
if item[1] in jtem[1]:
alreadyIncluded = True
break
if alreadyIncluded or (len(item[2]) >0 and not item[2][0].isupper()):
continue
for jtem in sourceNamedEntities:
if item[2] in jtem[2]:
# construct the item
newItem = [[item[0]], [item[1]], [item[2]], jtem[3]]
# check if the current item is part of a named entity part of which has already been added (by checking contiguousness)
partOfABiggerName = False
for k in xrange(len(targetNamedEntities)):
if targetNamedEntities[k][1][-1] == newItem[1][0] - 1:
targetNamedEntities[k][0].append(newItem[0][0])
targetNamedEntities[k][1].append(newItem[1][0])
targetNamedEntities[k][2].append(newItem[2][0])
partOfABiggerName = True
if not partOfABiggerName:
targetNamedEntities.append(newItem)
elif isAcronym(item[2], jtem[2]) and [[item[0]], [item[1]], [item[2]], jtem[3]] not in targetNamedEntities:
targetNamedEntities.append([[item[0]], [item[1]], [item[2]], jtem[3]])
sourceWords = []
targetWords = []
for item in sourceNamedEntities:
for jtem in item[1]:
if item[3] in ['PERSON', 'ORGANIZATION', 'LOCATION']:
sourceWords.append(source[jtem-1][2])
for item in targetNamedEntities:
for jtem in item[1]:
if item[3] in ['PERSON', 'ORGANIZATION', 'LOCATION']:
targetWords.append(target[jtem-1][2])
if len(sourceNamedEntities) == 0 or len(targetNamedEntities) == 0:
return []
sourceNamedEntitiesAlreadyAligned = []
targetNamedEntitiesAlreadyAligned = []
# align all full matches
for item in sourceNamedEntities:
if item[3] not in ['PERSON', 'ORGANIZATION', 'LOCATION']:
continue
# do not align if the current source entity is present more than once
count = 0
for ktem in sourceNamedEntities:
if ktem[2] == item[2]:
count += 1
if count > 1:
continue
for jtem in targetNamedEntities:
if jtem[3] not in ['PERSON', 'ORGANIZATION', 'LOCATION']:
continue
# do not align if the current target entity is present more than once
count = 0
for ktem in targetNamedEntities:
if ktem[2] == jtem[2]:
count += 1
if count > 1:
continue
# get rid of dots and hyphens
canonicalItemWord = [i.replace('.', '').replace('-', '') for i in item[2]]
canonicalJtemWord = [j.replace('.', '').replace('-', '') for j in jtem[2]]
if canonicalItemWord == canonicalJtemWord:
for k in xrange(len(item[1])):
if ([item[1][k], jtem[1][k]]) not in alignments:
alignments.append([item[1][k], jtem[1][k]])
sourceNamedEntitiesAlreadyAligned.append(item)
targetNamedEntitiesAlreadyAligned.append(jtem)
# align acronyms with their elaborations
for item in sourceNamedEntities:
if item[3] not in ['PERSON', 'ORGANIZATION', 'LOCATION']:
continue
for jtem in targetNamedEntities:
if jtem[3] not in ['PERSON', 'ORGANIZATION', 'LOCATION']:
continue
if len(item[2])==1 and isAcronym(item[2][0], jtem[2]):
for i in xrange(len(jtem[1])):
if [item[1][0], jtem[1][i]] not in alignments:
alignments.append([item[1][0], jtem[1][i]])
sourceNamedEntitiesAlreadyAligned.append(item[1][0])
targetNamedEntitiesAlreadyAligned.append(jtem[1][i])
elif len(jtem[2])==1 and isAcronym(jtem[2][0], item[2]):
for i in xrange(len(item[1])):
if [item[1][i], jtem[1][0]] not in alignments:
alignments.append([item[1][i], jtem[1][0]])
sourceNamedEntitiesAlreadyAligned.append(item[1][i])
targetNamedEntitiesAlreadyAligned.append(jtem[1][0])
# align subset matches
for item in sourceNamedEntities:
if item[3] not in ['PERSON', 'ORGANIZATION', 'LOCATION'] or item in sourceNamedEntitiesAlreadyAligned:
continue
# do not align if the current source entity is present more than once
count = 0
for ktem in sourceNamedEntities:
if ktem[2] == item[2]:
count += 1
if count > 1:
continue
for jtem in targetNamedEntities:
if jtem[3] not in ['PERSON', 'ORGANIZATION', 'LOCATION'] or jtem in targetNamedEntitiesAlreadyAligned:
continue
if item[3] != jtem[3]:
continue
# do not align if the current target entity is present more than once
count = 0
for ktem in targetNamedEntities:
if ktem[2] == jtem[2]:
count += 1
if count > 1:
continue
# find if the first is a part of the second
if isSublist(item[2], jtem[2]):
unalignedWordIndicesInTheLongerName = []
for ktem in jtem[1]:
unalignedWordIndicesInTheLongerName.append(ktem)
for k in xrange(len(item[2])):
for l in xrange(len(jtem[2])):
if item[2][k] == jtem[2][l] and [item[1][k], jtem[1][l]] not in alignments:
alignments.append([item[1][k], jtem[1][l]])
if jtem[1][l] in unalignedWordIndicesInTheLongerName:
unalignedWordIndicesInTheLongerName.remove(jtem[1][l])
for k in xrange(len(item[1])): # the shorter name
for l in xrange(len(jtem[1])): # the longer name
# find if the current term in the longer name has already been aligned (before calling alignNamedEntities()), do not align it in that case
alreadyInserted = False
for mtem in existingAlignments:
if mtem[1] == jtem[1][l]:
alreadyInserted = True
break
if jtem[1][l] not in unalignedWordIndicesInTheLongerName or alreadyInserted:
continue
if [item[1][k], jtem[1][l]] not in alignments and target[jtem[1][l]-1][2] not in sourceWords and item[2][k] not in punctuations and jtem[2][l] not in punctuations:
alignments.append([item[1][k], jtem[1][l]])
# else find if the second is a part of the first
elif isSublist(jtem[2], item[2]):
unalignedWordIndicesInTheLongerName = []
for ktem in item[1]:
unalignedWordIndicesInTheLongerName.append(ktem)
for k in xrange(len(jtem[2])):
for l in xrange(len(item[2])):
if jtem[2][k] == item[2][l] and [item[1][l], jtem[1][k]] not in alignments:
alignments.append([item[1][l], jtem[1][k]])
if item[1][l] in unalignedWordIndicesInTheLongerName:
unalignedWordIndicesInTheLongerName.remove(item[1][l])
for k in xrange(len(jtem[1])): # the shorter name
for l in xrange(len(item[1])): # the longer name
# find if the current term in the longer name has already been aligned (before calling alignNamedEntities()), do not align it in that case
alreadyInserted = False
for mtem in existingAlignments:
if mtem[0] == item[1][l]:
alreadyInserted = True
break
if item[1][l] not in unalignedWordIndicesInTheLongerName or alreadyInserted:
continue
if [item[1][l], jtem[1][k]] not in alignments and source[item[1][l]-1][2] not in targetWords and item[2][l] not in punctuations and jtem[2][k] not in punctuations:
alignments.append([item[1][l], jtem[1][k]])
return alignments
##############################################################################################################################
##############################################################################################################################
def alignWords(source, target, sourceParseResult, targetParseResult):
# source and target:: each is a list of elements of the form:
# [[character begin offset, character end offset], word index, word, lemma, pos tag]
# function returns the word alignments from source to target - each alignment returned is of the following form:
# [
# [[source word character begin offset, source word character end offset], source word index, source word, source word lemma],
# [[target word character begin offset, target word character end offset], target word index, target word, target word lemma]
# ]
global punctuations
sourceWordIndices = [i+1 for i in xrange(len(source))]
targetWordIndices = [i+1 for i in xrange(len(target))]
alignments = []
sourceWordIndicesAlreadyAligned = []
targetWordIndicesAlreadyAligned = []
sourceWords = [item[2] for item in source]
targetWords = [item[2] for item in target]
sourceLemmas = [item[3] for item in source]
targetLemmas = [item[3] for item in target]
sourcePosTags = [item[4] for item in source]
targetPosTags = [item[4] for item in target]
# align the sentence ending punctuation first
if (sourceWords[-1] in ['.', '!'] and targetWords[-1] in ['.', '!']) or sourceWords[-1] == targetWords[-1]:
alignments.append([len(source), len(target)])
sourceWordIndicesAlreadyAligned.append(len(source))
targetWordIndicesAlreadyAligned.append(len(target))
elif sourceWords[-2] in ['.', '!'] and targetWords[-1] in ['.', '!']:
alignments.append([len(source)-1, len(target)])
sourceWordIndicesAlreadyAligned.append(len(source)-1)
targetWordIndicesAlreadyAligned.append(len(target))
elif sourceWords[-1] in ['.', '!'] and targetWords[-2] in ['.', '!']:
alignments.append([len(source), len(target)-1])
sourceWordIndicesAlreadyAligned.append(len(source))
targetWordIndicesAlreadyAligned.append(len(target)-1)
elif sourceWords[-2] in ['.', '!'] and targetWords[-2] in ['.', '!']:
alignments.append([len(source)-1, len(target)-1])
sourceWordIndicesAlreadyAligned.append(len(source)-1)
targetWordIndicesAlreadyAligned.append(len(target)-1)
# align all (>=2)-gram matches with at least one content word
commonContiguousSublists = findAllCommonContiguousSublists(sourceWords, targetWords, True)
for item in commonContiguousSublists:
allStopWords = True
for jtem in item[0]:
if sourceWords[jtem] not in stopwords and sourceWords[jtem] not in punctuations:
allStopWords = False
break
if len(item[0]) >= 2 and not allStopWords:
for j in xrange(len(item[0])):
if item[0][j]+1 not in sourceWordIndicesAlreadyAligned and item[1][j]+1 not in targetWordIndicesAlreadyAligned and [item[0][j]+1, item[1][j]+1] not in alignments:
alignments.append([item[0][j]+1, item[1][j]+1])
sourceWordIndicesAlreadyAligned.append(item[0][j]+1)
targetWordIndicesAlreadyAligned.append(item[1][j]+1)
# align hyphenated word groups
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned:
continue
if '-' in sourceWords[i-1] and sourceWords[i-1] != '-':
tokens = sourceWords[i-1].split('-')
commonContiguousSublists = findAllCommonContiguousSublists(tokens, targetWords)
for item in commonContiguousSublists:
if len(item[0]) > 1:
for jtem in item[1]:
if [i, jtem+1] not in alignments:
alignments.append([i, jtem+1])
sourceWordIndicesAlreadyAligned.append(i)
targetWordIndicesAlreadyAligned.append(jtem+1)
for i in targetWordIndices:
if i in targetWordIndicesAlreadyAligned:
continue
if '-' in target[i-1][2] and target[i-1][2] != '-':
tokens = target[i-1][2].split('-')
commonContiguousSublists = findAllCommonContiguousSublists(sourceWords, tokens)
for item in commonContiguousSublists:
if len(item[0]) > 1:
for jtem in item[0]:
if [jtem+1, i] not in alignments:
alignments.append([jtem+1, i])
sourceWordIndicesAlreadyAligned.append(jtem+1)
targetWordIndicesAlreadyAligned.append(i)
# align named entities
neAlignments = alignNamedEntities(source, target, sourceParseResult, targetParseResult, alignments)
for item in neAlignments:
if item not in alignments:
alignments.append(item)
if item[0] not in sourceWordIndicesAlreadyAligned:
sourceWordIndicesAlreadyAligned.append(item[0])
if item[1] not in targetWordIndicesAlreadyAligned:
targetWordIndicesAlreadyAligned.append(item[1])
# align words based on word and dependency match
sourceDParse = dependencyParseAndPutOffsets(sourceParseResult)
targetDParse = dependencyParseAndPutOffsets(targetParseResult)
mainVerbAlignments = alignMainVerbs(source, target, sourceParseResult, targetParseResult, alignments)
for item in mainVerbAlignments:
if item not in alignments:
alignments.append(item)
if item[0] not in sourceWordIndicesAlreadyAligned:
sourceWordIndicesAlreadyAligned.append(item[0])
if item[1] not in targetWordIndicesAlreadyAligned:
targetWordIndicesAlreadyAligned.append(item[1])
nounAlignments = alignNouns(source, target, sourceParseResult, targetParseResult, alignments)
for item in nounAlignments:
if item not in alignments:
alignments.append(item)
if item[0] not in sourceWordIndicesAlreadyAligned:
sourceWordIndicesAlreadyAligned.append(item[0])
if item[1] not in targetWordIndicesAlreadyAligned:
targetWordIndicesAlreadyAligned.append(item[1])
adjectiveAlignments = alignAdjectives(source, target, sourceParseResult, targetParseResult, alignments)
for item in adjectiveAlignments:
if item not in alignments:
alignments.append(item)
if item[0] not in sourceWordIndicesAlreadyAligned:
sourceWordIndicesAlreadyAligned.append(item[0])
if item[1] not in targetWordIndicesAlreadyAligned:
targetWordIndicesAlreadyAligned.append(item[1])
adverbAlignments = alignAdverbs(source, target, sourceParseResult, targetParseResult, alignments)
for item in adverbAlignments:
if item not in alignments:
alignments.append(item)
if item[0] not in sourceWordIndicesAlreadyAligned:
sourceWordIndicesAlreadyAligned.append(item[0])
if item[1] not in targetWordIndicesAlreadyAligned:
targetWordIndicesAlreadyAligned.append(item[1])
# collect evidence from textual neighborhood for aligning content words
wordSimilarities = {}
textualNeighborhoodSimilarities = {}
sourceWordIndicesBeingConsidered = []
targetWordIndicesBeingConsidered = []
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned or sourceLemmas[i-1] in stopwords + punctuations + ['\'s', '\'d', '\'ll']:
continue
for j in targetWordIndices:
if j in targetWordIndicesAlreadyAligned or targetLemmas[j-1] in stopwords + punctuations + ['\'s', '\'d', '\'ll']:
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordIndicesBeingConsidered.append(i)
targetWordIndicesBeingConsidered.append(j)
# textual neighborhood similarities
sourceNeighborhood = findTextualNeighborhood(source, i, 3, 3)
targetNeighborhood = findTextualNeighborhood(target, j, 3, 3)
evidence = 0
for k in xrange(len(sourceNeighborhood[0])):
for l in xrange(len(targetNeighborhood[0])):
if (sourceNeighborhood[1][k] not in stopwords + punctuations) and ((sourceNeighborhood[0][k], targetNeighborhood[0][l]) in alignments or (wordRelatedness(sourceNeighborhood[1][k], 'none', targetNeighborhood[1][l], 'none')>=ppdbSim)):
evidence += wordRelatedness(sourceNeighborhood[1][k], 'none', targetNeighborhood[1][l], 'none')
textualNeighborhoodSimilarities[(i, j)] = evidence
numOfUnalignedWordsInSource = len(sourceWordIndicesBeingConsidered)
# now align: find the best alignment in each iteration of the following loop and include in alignments if good enough
for item in xrange(numOfUnalignedWordsInSource):
highestWeightedSim = 0
bestWordSim = 0
bestSourceIndex = -1
bestTargetIndex = -1
for i in sourceWordIndicesBeingConsidered:
if i in sourceWordIndicesAlreadyAligned:
continue
for j in targetWordIndicesBeingConsidered:
if j in targetWordIndicesAlreadyAligned:
continue
if (i, j) not in wordSimilarities:
continue
theta2 = 1 - theta1
if theta1*wordSimilarities[(i, j)] + theta2*textualNeighborhoodSimilarities[(i, j)] > highestWeightedSim:
highestWeightedSim = theta1*wordSimilarities[(i, j)] + theta2*textualNeighborhoodSimilarities[(i, j)]
bestSourceIndex = i
bestTargetIndex = j
bestWordSim = wordSimilarities[(i, j)]
bestTextNeighborhoodSim = textualNeighborhoodSimilarities[(i, j)]
if bestWordSim>=ppdbSim and [bestSourceIndex, bestTargetIndex] not in alignments:
if sourceLemmas[bestSourceIndex-1] not in stopwords:
alignments.append([bestSourceIndex, bestTargetIndex])
sourceWordIndicesAlreadyAligned.append(bestSourceIndex)
targetWordIndicesAlreadyAligned.append(bestTargetIndex)
if bestSourceIndex in sourceWordIndicesBeingConsidered:
sourceWordIndicesBeingConsidered.remove(bestSourceIndex)
if bestTargetIndex in targetWordIndicesBeingConsidered:
targetWordIndicesBeingConsidered.remove(bestTargetIndex)
# look if any remaining word is a part of a hyphenated word
for i in sourceWordIndices:
if i in sourceWordIndicesAlreadyAligned:
continue
if '-' in sourceWords[i-1] and sourceWords[i-1] != '-':
tokens = sourceWords[i-1].split('-')
commonContiguousSublists = findAllCommonContiguousSublists(tokens, targetWords)
for item in commonContiguousSublists:
if len(item[0]) == 1 and target[item[1][0]][3] not in stopwords:
for jtem in item[1]:
if [i, jtem+1] not in alignments and jtem+1 not in targetWordIndicesAlreadyAligned:
alignments.append([i, jtem+1])
sourceWordIndicesAlreadyAligned.append(i)
targetWordIndicesAlreadyAligned.append(jtem+1)
for i in targetWordIndices:
if i in targetWordIndicesAlreadyAligned:
continue
if '-' in target[i-1][2] and target[i-1][2] != '-':
tokens = target[i-1][2].split('-')
commonContiguousSublists = findAllCommonContiguousSublists(sourceWords, tokens)
for item in commonContiguousSublists:
if len(item[0]) == 1 and source[item[0][0]][3] not in stopwords:
for jtem in item[0]:
if [jtem+1, i] not in alignments and jtem+1 not in sourceWordIndicesAlreadyAligned:
alignments.append([jtem+1, i])
sourceWordIndicesAlreadyAligned.append(jtem+1)
targetWordIndicesAlreadyAligned.append(i)
# collect evidence from dependency neighborhood for aligning stopwords
wordSimilarities = {}
dependencyNeighborhoodSimilarities = {}
sourceWordIndicesBeingConsidered = []
targetWordIndicesBeingConsidered = []
for i in sourceWordIndices:
if sourceLemmas[i-1] not in stopwords or i in sourceWordIndicesAlreadyAligned:
continue
for j in targetWordIndices:
if targetLemmas[j-1] not in stopwords or j in targetWordIndicesAlreadyAligned:
continue
if (sourceLemmas[i-1] != targetLemmas[j-1]) and (wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1])<ppdbSim):
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordIndicesBeingConsidered.append(i)
targetWordIndicesBeingConsidered.append(j)
sourceWordParents = findParents(sourceDParse, i, sourceWords[i-1])
sourceWordChildren = findChildren(sourceDParse, i, sourceWords[i-1])
targetWordParents = findParents(targetDParse, j, targetWords[j-1])
targetWordChildren = findChildren(targetDParse, j, targetWords[j-1])
evidence = 0
for item in sourceWordParents:
for jtem in targetWordParents:
if [item[0], jtem[0]] in alignments:
evidence += 1
for item in sourceWordChildren:
for jtem in targetWordChildren:
if [item[0], jtem[0]] in alignments:
evidence += 1
dependencyNeighborhoodSimilarities[(i, j)] = evidence
numOfUnalignedWordsInSource = len(sourceWordIndicesBeingConsidered)
# now align: find the best alignment in each iteration of the following loop and include in alignments if good enough
for item in xrange(numOfUnalignedWordsInSource):
highestWeightedSim = 0
bestWordSim = 0
bestSourceIndex = -1
bestTargetIndex = -1
for i in sourceWordIndicesBeingConsidered:
for j in targetWordIndicesBeingConsidered:
if (i, j) not in wordSimilarities:
continue
theta2 = 1 - theta1
if theta1*wordSimilarities[(i, j)] + theta2*dependencyNeighborhoodSimilarities[(i, j)] > highestWeightedSim:
highestWeightedSim = theta1*wordSimilarities[(i, j)] + theta2*dependencyNeighborhoodSimilarities[(i, j)]
bestSourceIndex = i
bestTargetIndex = j
bestWordSim = wordSimilarities[(i, j)]
bestDependencyNeighborhoodSim = dependencyNeighborhoodSimilarities[(i, j)]
if bestWordSim>=ppdbSim and bestDependencyNeighborhoodSim>0 and [bestSourceIndex, bestTargetIndex] not in alignments:
alignments.append([bestSourceIndex, bestTargetIndex])
sourceWordIndicesAlreadyAligned.append(bestSourceIndex)
targetWordIndicesAlreadyAligned.append(bestTargetIndex)
if bestSourceIndex in sourceWordIndicesBeingConsidered:
sourceWordIndicesBeingConsidered.remove(bestSourceIndex)
if bestTargetIndex in targetWordIndicesBeingConsidered:
targetWordIndicesBeingConsidered.remove(bestTargetIndex)
# collect evidence from textual neighborhood for aligning stopwords and punctuations
wordSimilarities = {}
textualNeighborhoodSimilarities = {}
sourceWordIndicesBeingConsidered = []
targetWordIndicesBeingConsidered = []
for i in sourceWordIndices:
if (sourceLemmas[i-1] not in stopwords + punctuations + ['\'s', '\'d', '\'ll']) or i in sourceWordIndicesAlreadyAligned:
continue
for j in targetWordIndices:
if (targetLemmas[j-1] not in stopwords + punctuations + ['\'s', '\'d', '\'ll']) or j in targetWordIndicesAlreadyAligned:
continue
if wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]) < ppdbSim:
continue
wordSimilarities[(i, j)] = max(wordRelatedness(sourceWords[i-1], sourcePosTags[i-1], targetWords[j-1], targetPosTags[j-1]), wordRelatedness(sourceLemmas[i-1], sourcePosTags[i-1], targetLemmas[j-1], targetPosTags[j-1]))
sourceWordIndicesBeingConsidered.append(i)
targetWordIndicesBeingConsidered.append(j)
# textual neighborhood evidence
evidence = 0
if [i-1, j-1] in alignments:
evidence += 1
if [i+1, j+1] in alignments:
evidence += 1
try:
textualNeighborhoodSimilarities[(i, j)] = evidence
except ZeroDivisionError:
textualNeighborhoodSimilarities[(i, j)] = 0
numOfUnalignedWordsInSource = len(sourceWordIndicesBeingConsidered)
# now align: find the best alignment in each iteration of the following loop and include in alignments if good enough
for item in xrange(numOfUnalignedWordsInSource):
highestWeightedSim = 0
bestWordSim = 0
bestSourceIndex = -1
bestTargetIndex = -1
for i in sourceWordIndicesBeingConsidered:
if i in sourceWordIndicesAlreadyAligned:
continue
for j in targetWordIndicesBeingConsidered:
if j in targetWordIndicesAlreadyAligned:
continue
if (i, j) not in wordSimilarities:
continue
theta2 = 1 - theta1
if theta1*wordSimilarities[(i, j)] + theta2*textualNeighborhoodSimilarities[(i, j)] > highestWeightedSim:
highestWeightedSim = theta1*wordSimilarities[(i, j)] + theta2*textualNeighborhoodSimilarities[(i, j)]
bestSourceIndex = i
bestTargetIndex = j
bestWordSim = wordSimilarities[(i, j)]
bestTextNeighborhoodSim = textualNeighborhoodSimilarities[(i, j)]
if bestWordSim>=ppdbSim and bestTextNeighborhoodSim>0 and [bestSourceIndex, bestTargetIndex] not in alignments:
alignments.append([bestSourceIndex, bestTargetIndex])
sourceWordIndicesAlreadyAligned.append(bestSourceIndex)
targetWordIndicesAlreadyAligned.append(bestTargetIndex)
if bestSourceIndex in sourceWordIndicesBeingConsidered:
sourceWordIndicesBeingConsidered.remove(bestSourceIndex)
if bestTargetIndex in targetWordIndicesBeingConsidered:
targetWordIndicesBeingConsidered.remove(bestTargetIndex)
alignments = [item for item in alignments if item[0]<>0 and item[1]<>0]
return alignments
##############################################################################################################################
##############################################################################################################################
def align(sentence1, sentence2):
    if isinstance(sentence1, list):
        sentence1 = ' '.join(sentence1)
    if isinstance(sentence2, list):
        sentence2 = ' '.join(sentence2)

    sentence1ParseResult = parseText(sentence1)
    sentence2ParseResult = parseText(sentence2)
    sentence1Lemmatized = lemmatize(sentence1ParseResult)
    sentence2Lemmatized = lemmatize(sentence2ParseResult)
    sentence1PosTagged = posTag(sentence1ParseResult)
    sentence2PosTagged = posTag(sentence2ParseResult)

    sentence1LemmasAndPosTags = []
    for i in xrange(len(sentence1Lemmatized)):
        sentence1LemmasAndPosTags.append([])
    for i in xrange(len(sentence1Lemmatized)):
        for item in sentence1Lemmatized[i]:
            sentence1LemmasAndPosTags[i].append(item)
        sentence1LemmasAndPosTags[i].append(sentence1PosTagged[i][3])

    sentence2LemmasAndPosTags = []
    for i in xrange(len(sentence2Lemmatized)):
        sentence2LemmasAndPosTags.append([])
    for i in xrange(len(sentence2Lemmatized)):
        for item in sentence2Lemmatized[i]:
            sentence2LemmasAndPosTags[i].append(item)
        sentence2LemmasAndPosTags[i].append(sentence2PosTagged[i][3])

    myWordAlignments = alignWords(sentence1LemmasAndPosTags, sentence2LemmasAndPosTags, sentence1ParseResult, sentence2ParseResult)
    myWordAlignmentTokens = [[str(sentence1Lemmatized[item[0]-1][2]), str(sentence2Lemmatized[item[1]-1][2])] for item in myWordAlignments]
    return [myWordAlignments, myWordAlignmentTokens]
##############################################################################################################################
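For reference, a small self-contained illustration of the return format of `align` above: the first element is a list of 1-based `[sourceIndex, targetIndex]` pairs, and the second pairs up the corresponding lemmas exactly as the list comprehension over `myWordAlignments` does (the lemma sits at index 2 of each lemmatized row). The toy rows here are hypothetical stand-ins for the parser output.

```python
# Hypothetical lemmatized rows shaped like the parser output used above;
# only index 2 (the lemma) matters for this illustration.
sentence1Lemmatized = [[1, '0-3', 'a'], [2, '4-7', 'b'], [3, '8-11', 'c']]
sentence2Lemmatized = [[1, '0-3', 'x'], [2, '4-7', 'y']]

# 1-based [sourceIndex, targetIndex] pairs, as produced by alignWords().
myWordAlignments = [[1, 2], [3, 1]]

# Mirror of the comprehension in align(): map index pairs to lemma pairs.
myWordAlignmentTokens = [[str(sentence1Lemmatized[item[0] - 1][2]),
                          str(sentence2Lemmatized[item[1] - 1][2])]
                         for item in myWordAlignments]
print(myWordAlignmentTokens)  # [['a', 'y'], ['c', 'x']]
```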
# File: src/unicon/plugins/tests/test_plugin_iosxe_cat9k.py (repo: fpessoanunes/unicon.plugins, license: Apache-2.0)

"""
Unittests for iosxe/cat9k plugin
"""

import unittest

from unicon import Connection
from unicon.plugins.tests.mock.mock_device_iosxe import MockDeviceTcpWrapperIOSXE
from unicon.plugins.tests.mock.mock_device_iosxe_cat9k import MockDeviceTcpWrapperIOSXECat9k


class TestIosXeCat9kPlugin(unittest.TestCase):

    def test_connect(self):
        d = Connection(hostname='Router',
                       start=['mock_device_cli --os iosxe --state c9k_login'],
                       os='iosxe',
                       platform='cat9k',
                       credentials=dict(default=dict(username='admin', password='cisco')),
                       settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
                       log_buffer=True
                       )
        d.connect()
        d.disconnect()

    def test_boot_from_rommon(self):
        md = MockDeviceTcpWrapperIOSXE(port=0, state='cat9k_rommon')
        md.start()
        c = Connection(
            hostname='switch',
            start=['telnet 127.0.0.1 {}'.format(md.ports[0])],
            os='iosxe',
            platform='cat9k',
            settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
            credentials=dict(default=dict(username='cisco', password='cisco'),
                             alt=dict(username='admin', password='lab'))
        )
        try:
            c.connect()
            self.assertEqual(c.state_machine.current_state, 'enable')
        finally:
            c.disconnect()
            md.stop()

    def test_reload_image_from_rommon(self):
        md = MockDeviceTcpWrapperIOSXE(port=0, state='cat9k_rommon')
        md.start()
        c = Connection(
            hostname='switch',
            start=['telnet 127.0.0.1 {}'.format(md.ports[0])],
            os='iosxe',
            platform='cat9k',
            mit=True,
            settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
            credentials=dict(default=dict(username='cisco', password='cisco'),
                             alt=dict(username='admin', password='lab'))
        )
        try:
            c.connect()
            self.assertEqual(c.state_machine.current_state, 'rommon')
            c.execute('unlock flash:')
            c.reload(image_to_boot='tftp://1.1.1.1/latest.bin')
            self.assertEqual(c.state_machine.current_state, 'enable')
        finally:
            c.disconnect()
            md.stop()


class TestIosXECat9kPluginReload(unittest.TestCase):

    def test_reload(self):
        md = MockDeviceTcpWrapperIOSXE(port=0, state='c9k_login4')
        md.start()
        c = Connection(
            hostname='switch',
            start=['telnet 127.0.0.1 {}'.format(md.ports[0])],
            os='iosxe',
            platform='cat9k',
            settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
            credentials=dict(default=dict(username='cisco', password='cisco'),
                             alt=dict(username='admin', password='lab')),
            mit=True
        )
        try:
            c.connect()
            c.reload()
            self.assertEqual(c.state_machine.current_state, 'enable')
        finally:
            c.disconnect()
            md.stop()

    def test_rommon(self):
        c = Connection(hostname='switch',
                       start=['mock_device_cli --os iosxe --state cat9k_enable_reload_to_rommon'],
                       os='iosxe',
                       platform='cat9k',
                       mit=True,
                       credentials=dict(default=dict(username='cisco', password='cisco'),
                                        alt=dict(username='admin', password='lab')),
                       settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
                       log_buffer=True)
        c.connect()
        c.rommon()
        self.assertEqual(c.state_machine.current_state, 'rommon')
        c.disconnect()

    def test_rommon_enable_break(self):
        c = Connection(hostname='switch',
                       start=['mock_device_cli --os iosxe --state cat9k_enable_reload_to_rommon_break'],
                       os='iosxe',
                       platform='cat9k',
                       mit=True,
                       credentials=dict(default=dict(username='cisco', password='cisco'),
                                        alt=dict(username='admin', password='lab')),
                       settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
                       log_buffer=True)
        c.connect()
        c.rommon()
        self.assertEqual(c.state_machine.current_state, 'rommon')
        c.disconnect()

    def test_reload_with_image(self):
        c = Connection(hostname='switch',
                       start=['mock_device_cli --os iosxe --state cat9k_enable_reload_to_rommon'],
                       os='iosxe',
                       platform='cat9k',
                       mit=True,
                       credentials=dict(default=dict(username='cisco', password='cisco'),
                                        alt=dict(username='admin', password='lab')),
                       settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
                       log_buffer=True)
        c.connect()
        c.reload(image_to_boot='tftp://1.1.1.1/latest.bin')
        self.assertEqual(c.state_machine.current_state, 'enable')
        c.disconnect()

    def test_reload_ha(self):
        md = MockDeviceTcpWrapperIOSXECat9k(port=0, state='cat9k_ha_active_escape,cat9k_ha_standby_escape')
        md.start()
        c = Connection(
            hostname='switch',
            start=[
                'telnet 127.0.0.1 {}'.format(md.ports[0]),
                'telnet 127.0.0.1 {}'.format(md.ports[1]),
            ],
            os='iosxe',
            platform='cat9k',
            settings=dict(POST_DISCONNECT_WAIT_SEC=0, GRACEFUL_DISCONNECT_WAIT_SEC=0.2),
            credentials=dict(default=dict(username='cisco', password='cisco'),
                             alt=dict(username='admin', password='lab')),
            # debug=True
        )
        try:
            c.connect()
            c.reload()
            self.assertEqual(c.state_machine.current_state, 'enable')
        finally:
            c.disconnect()
            md.stop()


if __name__ == '__main__':
    unittest.main()
# File: onnx/backend/test/case/node/reducelogsumexp.py (repo: onnx, license: MIT)

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np # type: ignore
import onnx
from ..base import Base
from . import expect
class ReduceLogSumExp(Base):
    @staticmethod
    def export_do_not_keepdims():  # type: () -> None
        shape = [3, 2, 2]
        axes = [1]
        keepdims = 0

        node = onnx.helper.make_node(
            'ReduceLogSumExp',
            inputs=['data'],
            outputs=['reduced'],
            axes=axes,
            keepdims=keepdims
        )

        data = np.array(
            [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
            dtype=np.float32)
        reduced = np.log(np.sum(
            np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
        # print(reduced)
        # [[20., 2.31326175]
        #  [40.00004578, 2.31326175]
        #  [60.00671387, 2.31326175]]
        expect(node, inputs=[data], outputs=[reduced],
               name='test_reduce_log_sum_exp_do_not_keepdims_example')

        np.random.seed(0)
        data = np.random.uniform(-10, 10, shape).astype(np.float32)
        reduced = np.log(np.sum(
            np.exp(data), axis=tuple(axes), keepdims=keepdims == 1))
        expect(node, inputs=[data], outputs=[reduced],
               name='test_reduce_log_sum_exp_do_not_keepdims_random')

    @staticmethod
    def export_keepdims():  # type: () -> None
        shape = [3, 2, 2]
        axes = [1]
        keepdims = 1

        node = onnx.helper.make_node(
            'ReduceLogSumExp',
            inputs=['data'],
            outputs=['reduced'],
            axes=axes,
            keepdims=keepdims
        )

        data = np.array(
            [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
            dtype=np.float32)
        reduced = np.log(np.sum(np.exp(data),
                                axis=tuple(axes),
                                keepdims=keepdims == 1))
        # print(reduced)
        # [[[20., 2.31326175]]
        #  [[40.00004578, 2.31326175]]
        #  [[60.00671387, 2.31326175]]]
        expect(node, inputs=[data], outputs=[reduced],
               name='test_reduce_log_sum_exp_keepdims_example')

        np.random.seed(0)
        data = np.random.uniform(-10, 10, shape).astype(np.float32)
        reduced = np.log(np.sum(np.exp(data),
                                axis=tuple(axes),
                                keepdims=keepdims == 1))
        expect(node, inputs=[data], outputs=[reduced],
               name='test_reduce_log_sum_exp_keepdims_random')

    @staticmethod
    def export_default_axes_keepdims():  # type: () -> None
        shape = [3, 2, 2]
        axes = None
        keepdims = 1

        node = onnx.helper.make_node(
            'ReduceLogSumExp',
            inputs=['data'],
            outputs=['reduced'],
            keepdims=keepdims
        )

        data = np.array(
            [[[5, 1], [20, 2]], [[30, 1], [40, 2]], [[55, 1], [60, 2]]],
            dtype=np.float32)
        reduced = np.log(np.sum(np.exp(data),
                                axis=axes,
                                keepdims=keepdims == 1))
        # print(reduced)
        # [[[60.00671387]]]
        expect(node, inputs=[data], outputs=[reduced],
               name='test_reduce_log_sum_exp_default_axes_keepdims_example')

        np.random.seed(0)
        data = np.random.uniform(-10, 10, shape).astype(np.float32)
        reduced = np.log(np.sum(np.exp(data),
                                axis=axes,
                                keepdims=keepdims == 1))
        expect(node, inputs=[data], outputs=[reduced],
               name='test_reduce_log_sum_exp_default_axes_keepdims_random')
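A side note on the reference computation above: `np.log(np.sum(np.exp(data), ...))` overflows for large inputs (`np.exp(1000.)` is already `inf`), which is exactly why implementations of ReduceLogSumExp use the log-sum-exp trick. A small standalone sketch of the stable variant (not part of the ONNX test case; the function name is made up):

```python
import numpy as np

def stable_logsumexp(data, axes=None, keepdims=False):
    # log(sum(exp(x))) == m + log(sum(exp(x - m))) for any finite m;
    # choosing m = max(x) keeps every exp() argument <= 0, so nothing overflows.
    m = np.max(data, axis=axes, keepdims=True)
    out = m + np.log(np.sum(np.exp(data - m), axis=axes, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axes)

x = np.array([1000.0, 1000.0])
naive = np.log(np.sum(np.exp(x)))   # overflows to inf
stable = stable_logsumexp(x)        # 1000 + log(2), computed safely
```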
# File: examples.py (repo: ashigirl96/DeepRL, license: MIT)

#######################################################################
# Copyright (C) 2017 Shangtong Zhang(zhangshangtong.cpp@gmail.com) #
# Permission given to modify the code as long as you keep this #
# declaration at the top #
#######################################################################
from deep_rl import *
# DQN
def dqn_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    kwargs.setdefault('n_step', 1)
    kwargs.setdefault('replay_cls', UniformReplay)
    kwargs.setdefault('async_replay', True)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()

    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.network_fn = lambda: VanillaNet(config.action_dim, FCBody(config.state_dim))
    # config.network_fn = lambda: DuelingNet(config.action_dim, FCBody(config.state_dim))
    config.history_length = 1
    config.batch_size = 10
    config.discount = 0.99
    config.max_steps = 1e5

    replay_kwargs = dict(
        memory_size=int(1e4),
        batch_size=config.batch_size,
        n_step=config.n_step,
        discount=config.discount,
        history_length=config.history_length)
    config.replay_fn = lambda: ReplayWrapper(config.replay_cls, replay_kwargs, config.async_replay)
    config.replay_eps = 0.01
    config.replay_alpha = 0.5
    config.replay_beta = LinearSchedule(0.4, 1.0, config.max_steps)

    config.random_action_prob = LinearSchedule(1.0, 0.1, 1e4)
    config.target_network_update_freq = 200
    config.exploration_steps = 1000
    # config.double_q = True
    config.double_q = False
    config.sgd_update_frequency = 4
    config.gradient_clip = 5
    config.eval_interval = int(5e3)
    config.async_actor = False
    run_steps(DQNAgent(config))
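The `LinearSchedule(1.0, 0.1, 1e4)` objects used throughout this file come from the deep_rl package; judging by how they are used for exploration and replay-beta annealing, they interpolate linearly from a start to an end value over a given number of calls and then stay clamped at the end value. A minimal sketch of that presumed behavior (the class name here is made up; the real implementation lives in deep_rl):

```python
class LinearScheduleSketch:
    """Linearly anneal from start to end over `steps` calls, then stay clamped."""

    def __init__(self, start, end=None, steps=None):
        if end is None:  # single-argument form, e.g. LinearSchedule(0.1): constant
            end, steps = start, 1
        self.inc = (end - start) / float(steps)
        self.current = start
        self.end = end
        # clamp from above when decaying, from below when growing
        self.bound = max if end < start else min

    def __call__(self, steps=1):
        val = self.current
        self.current = self.bound(self.current + self.inc * steps, self.end)
        return val

eps = LinearScheduleSketch(1.0, 0.1, 9)
values = [eps() for _ in range(12)]  # 1.0, 0.9, ..., 0.1, then stays at 0.1
```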
def dqn_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    kwargs.setdefault('n_step', 1)
    kwargs.setdefault('replay_cls', UniformReplay)
    kwargs.setdefault('async_replay', True)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()

    config.optimizer_fn = lambda params: torch.optim.RMSprop(
        params, lr=0.00025, alpha=0.95, eps=0.01, centered=True)
    config.network_fn = lambda: VanillaNet(config.action_dim, NatureConvBody(in_channels=config.history_length))
    # config.network_fn = lambda: DuelingNet(config.action_dim, NatureConvBody(in_channels=config.history_length))
    config.random_action_prob = LinearSchedule(1.0, 0.01, 1e6)

    config.batch_size = 32
    config.discount = 0.99
    config.history_length = 4
    config.max_steps = int(2e7)
    replay_kwargs = dict(
        memory_size=int(1e6),
        batch_size=config.batch_size,
        n_step=config.n_step,
        discount=config.discount,
        history_length=config.history_length,
    )
    config.replay_fn = lambda: ReplayWrapper(config.replay_cls, replay_kwargs, config.async_replay)
    config.replay_eps = 0.01
    config.replay_alpha = 0.5
    config.replay_beta = LinearSchedule(0.4, 1.0, config.max_steps)

    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.target_network_update_freq = 10000
    config.exploration_steps = 50000
    # config.exploration_steps = 100
    config.sgd_update_frequency = 4
    config.gradient_clip = 5
    config.double_q = False
    config.async_actor = True
    run_steps(DQNAgent(config))

# QR DQN
def quantile_regression_dqn_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()

    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.network_fn = lambda: QuantileNet(config.action_dim, config.num_quantiles, FCBody(config.state_dim))
    config.batch_size = 10
    replay_kwargs = dict(
        memory_size=int(1e4),
        batch_size=config.batch_size)
    # pass the asynchronous flag positionally: 'async' is a reserved word in Python 3.7+
    config.replay_fn = lambda: ReplayWrapper(UniformReplay, replay_kwargs, True)
    config.random_action_prob = LinearSchedule(1.0, 0.1, 1e4)
    config.discount = 0.99
    config.target_network_update_freq = 200
    config.exploration_steps = 100
    config.num_quantiles = 20
    config.gradient_clip = 5
    config.sgd_update_frequency = 4
    config.eval_interval = int(5e3)
    config.max_steps = 1e5
    run_steps(QuantileRegressionDQNAgent(config))


def quantile_regression_dqn_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()

    config.optimizer_fn = lambda params: torch.optim.Adam(params, lr=0.00005, eps=0.01 / 32)
    config.network_fn = lambda: QuantileNet(config.action_dim, config.num_quantiles, NatureConvBody())
    config.random_action_prob = LinearSchedule(1.0, 0.01, 1e6)

    config.batch_size = 32
    replay_kwargs = dict(
        memory_size=int(1e6),
        batch_size=config.batch_size,
        history_length=4,
    )
    # pass the asynchronous flag positionally: 'async' is a reserved word in Python 3.7+
    config.replay_fn = lambda: ReplayWrapper(UniformReplay, replay_kwargs, True)
    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.discount = 0.99
    config.target_network_update_freq = 10000
    config.exploration_steps = 50000
    config.sgd_update_frequency = 4
    config.gradient_clip = 5
    config.num_quantiles = 200
    config.max_steps = int(2e7)
    run_steps(QuantileRegressionDQNAgent(config))

# C51
def categorical_dqn_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()

    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.network_fn = lambda: CategoricalNet(config.action_dim, config.categorical_n_atoms, FCBody(config.state_dim))
    config.random_action_prob = LinearSchedule(1.0, 0.1, 1e4)

    config.batch_size = 10
    replay_kwargs = dict(
        memory_size=int(1e4),
        batch_size=config.batch_size)
    # pass the asynchronous flag positionally: 'async' is a reserved word in Python 3.7+
    config.replay_fn = lambda: ReplayWrapper(UniformReplay, replay_kwargs, True)

    config.discount = 0.99
    config.target_network_update_freq = 200
    config.exploration_steps = 100
    config.categorical_v_max = 100
    config.categorical_v_min = -100
    config.categorical_n_atoms = 50
    config.gradient_clip = 5
    config.sgd_update_frequency = 4
    config.eval_interval = int(5e3)
    config.max_steps = 1e5
    run_steps(CategoricalDQNAgent(config))


def categorical_dqn_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()

    config.optimizer_fn = lambda params: torch.optim.Adam(params, lr=0.00025, eps=0.01 / 32)
    config.network_fn = lambda: CategoricalNet(config.action_dim, config.categorical_n_atoms, NatureConvBody())
    config.random_action_prob = LinearSchedule(1.0, 0.01, 1e6)

    config.batch_size = 32
    replay_kwargs = dict(
        memory_size=int(1e6),
        batch_size=config.batch_size,
        history_length=4,
    )
    # pass the asynchronous flag positionally: 'async' is a reserved word in Python 3.7+
    config.replay_fn = lambda: ReplayWrapper(UniformReplay, replay_kwargs, True)
    config.discount = 0.99
    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.target_network_update_freq = 10000
    config.exploration_steps = 50000
    config.categorical_v_max = 10
    config.categorical_v_min = -10
    config.categorical_n_atoms = 51
    config.sgd_update_frequency = 4
    config.gradient_clip = 0.5
    config.max_steps = int(2e7)
    run_steps(CategoricalDQNAgent(config))

# Rainbow
def rainbow_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    kwargs.setdefault('n_step', 3)
    kwargs.setdefault('replay_cls', PrioritizedReplay)
    kwargs.setdefault('async_replay', True)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()
    config.max_steps = 1e5

    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.noisy_linear = True
    config.network_fn = lambda: RainbowNet(
        config.action_dim,
        config.categorical_n_atoms,
        FCBody(config.state_dim, noisy_linear=config.noisy_linear),
        noisy_linear=config.noisy_linear
    )
    config.categorical_v_max = 100
    config.categorical_v_min = -100
    config.categorical_n_atoms = 50
    config.discount = 0.99

    config.batch_size = 32
    replay_kwargs = dict(
        memory_size=int(1e4),
        batch_size=config.batch_size,
        n_step=config.n_step,
        discount=config.discount,
        history_length=1)
    config.replay_fn = lambda: ReplayWrapper(config.replay_cls, replay_kwargs, config.async_replay)
    config.replay_eps = 0.01
    config.replay_alpha = 0.5
    config.replay_beta = LinearSchedule(0.4, 1, config.max_steps)

    config.random_action_prob = LinearSchedule(1.0, 0.1, 1e4)
    config.target_network_update_freq = 200
    config.exploration_steps = 1000
    config.double_q = True
    config.sgd_update_frequency = 4
    config.eval_interval = int(5e3)
    config.async_actor = True
    config.gradient_clip = 10
    run_steps(CategoricalDQNAgent(config))


def rainbow_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    kwargs.setdefault('n_step', 1)
    kwargs.setdefault('replay_cls', PrioritizedReplay)
    kwargs.setdefault('async_replay', True)
    kwargs.setdefault('noisy_linear', True)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()
    config.max_steps = int(2e7)

    Config.NOISY_LAYER_STD = 0.5
    config.optimizer_fn = lambda params: torch.optim.Adam(
        params, lr=0.000625, eps=1.5e-4)
    config.network_fn = lambda: RainbowNet(
        config.action_dim,
        config.categorical_n_atoms,
        NatureConvBody(noisy_linear=config.noisy_linear),
        noisy_linear=config.noisy_linear,
    )
    config.categorical_v_max = 10
    config.categorical_v_min = -10
    config.categorical_n_atoms = 51
    config.random_action_prob = LinearSchedule(1, 0.01, 25e4)

    config.batch_size = 32
    config.discount = 0.99
    config.history_length = 4
    replay_kwargs = dict(
        memory_size=int(1e6),
        batch_size=config.batch_size,
        n_step=config.n_step,
        discount=config.discount,
        history_length=config.history_length,
    )
    config.replay_fn = lambda: ReplayWrapper(config.replay_cls, replay_kwargs, config.async_replay)
    config.replay_eps = 0.01
    config.replay_alpha = 0.5
    config.replay_beta = LinearSchedule(0.4, 1.0, config.max_steps)

    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.target_network_update_freq = 2000
    config.exploration_steps = 20000
    # config.exploration_steps = 200
    config.sgd_update_frequency = 4
    config.double_q = True
    config.async_actor = True
    config.gradient_clip = 10
    run_steps(CategoricalDQNAgent(config))

# A2C
def a2c_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.num_workers = 5
    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.network_fn = lambda: CategoricalActorCriticNet(
        config.state_dim, config.action_dim, FCBody(config.state_dim, gate=F.tanh))
    config.discount = 0.99
    config.use_gae = True
    config.gae_tau = 0.95
    config.entropy_weight = 0.01
    config.rollout_length = 5
    config.gradient_clip = 0.5
    run_steps(A2CAgent(config))


def a2c_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.num_workers = 16
    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, lr=1e-4, alpha=0.99, eps=1e-5)
    config.network_fn = lambda: CategoricalActorCriticNet(config.state_dim, config.action_dim, NatureConvBody())
    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.discount = 0.99
    config.use_gae = True
    config.gae_tau = 1.0
    config.entropy_weight = 0.01
    config.rollout_length = 5
    config.gradient_clip = 5
    config.max_steps = int(2e7)
    run_steps(A2CAgent(config))


def a2c_continuous(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.num_workers = 16
    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, lr=0.0007)
    config.network_fn = lambda: GaussianActorCriticNet(
        config.state_dim, config.action_dim,
        actor_body=FCBody(config.state_dim), critic_body=FCBody(config.state_dim))
    config.discount = 0.99
    config.use_gae = True
    config.gae_tau = 1.0
    config.entropy_weight = 0.01
    config.rollout_length = 5
    config.gradient_clip = 5
    config.max_steps = int(2e7)
    run_steps(A2CAgent(config))

# N-Step DQN
def n_step_dqn_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.num_workers = 5
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.network_fn = lambda: VanillaNet(config.action_dim, FCBody(config.state_dim))
    config.random_action_prob = LinearSchedule(1.0, 0.1, 1e4)
    config.discount = 0.99
    config.target_network_update_freq = 200
    config.rollout_length = 5
    config.gradient_clip = 5
    run_steps(NStepDQNAgent(config))


def n_step_dqn_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.num_workers = 16
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, lr=1e-4, alpha=0.99, eps=1e-5)
    config.network_fn = lambda: VanillaNet(config.action_dim, NatureConvBody())
    config.random_action_prob = LinearSchedule(1.0, 0.05, 1e6)
    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.discount = 0.99
    config.target_network_update_freq = 10000
    config.rollout_length = 5
    config.gradient_clip = 5
    config.max_steps = int(2e7)
    run_steps(NStepDQNAgent(config))
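The n-step DQN targets behind these configs are built from n-step returns: the next `rollout_length` rewards discounted by `config.discount`, plus a bootstrapped value estimate discounted by gamma^n. A small standalone sketch of that quantity (not the deep_rl implementation):

```python
def n_step_return(rewards, bootstrap_value, discount):
    # G = r_0 + gamma*r_1 + ... + gamma^(n-1)*r_{n-1} + gamma^n * V(s_n),
    # computed by folding backwards from the bootstrap value.
    ret = bootstrap_value
    for r in reversed(rewards):
        ret = r + discount * ret
    return ret

g = n_step_return([1.0, 2.0, 3.0], bootstrap_value=4.0, discount=0.5)
# 1 + 0.5*2 + 0.25*3 + 0.125*4 = 3.25
```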
# Option-Critic
def option_critic_feature(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.num_workers = 5
    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.001)
    config.network_fn = lambda: OptionCriticNet(FCBody(config.state_dim), config.action_dim, num_options=2)
    config.random_option_prob = LinearSchedule(1.0, 0.1, 1e4)
    config.discount = 0.99
    config.target_network_update_freq = 200
    config.rollout_length = 5
    config.termination_regularizer = 0.01
    config.entropy_weight = 0.01
    config.gradient_clip = 5
    run_steps(OptionCriticAgent(config))


def option_critic_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)

    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.num_workers = 16
    config.optimizer_fn = lambda params: torch.optim.RMSprop(params, lr=1e-4, alpha=0.99, eps=1e-5)
    config.network_fn = lambda: OptionCriticNet(NatureConvBody(), config.action_dim, num_options=4)
    config.random_option_prob = LinearSchedule(0.1)
    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.discount = 0.99
    config.target_network_update_freq = 10000
    config.rollout_length = 5
    config.gradient_clip = 5
    config.max_steps = int(2e7)
    config.entropy_weight = 0.01
    config.termination_regularizer = 0.01
    run_steps(OptionCriticAgent(config))

# PPO
def ppo_continuous(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)
    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()
    config.network_fn = lambda: GaussianActorCriticNet(
        config.state_dim, config.action_dim, actor_body=FCBody(config.state_dim, gate=torch.tanh),
        critic_body=FCBody(config.state_dim, gate=torch.tanh))
    config.actor_opt_fn = lambda params: torch.optim.Adam(params, 3e-4)
    config.critic_opt_fn = lambda params: torch.optim.Adam(params, 1e-3)
    config.discount = 0.99
    config.use_gae = True
    config.gae_tau = 0.95
    config.gradient_clip = 0.5
    config.rollout_length = 2048
    config.optimization_epochs = 10
    config.mini_batch_size = 64
    config.ppo_ratio_clip = 0.2
    config.log_interval = 2048
    config.max_steps = 3e6
    config.target_kl = 0.01
    config.state_normalizer = MeanStdNormalizer()
    run_steps(PPOAgent(config))
def ppo_pixel(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('skip', False)
    config = Config()
    config.merge(kwargs)
    config.task_fn = lambda: Task(config.game, num_envs=config.num_workers)
    config.eval_env = Task(config.game)
    config.num_workers = 8
    config.optimizer_fn = lambda params: torch.optim.Adam(params, lr=2.5e-4)
    config.network_fn = lambda: CategoricalActorCriticNet(config.state_dim, config.action_dim, NatureConvBody())
    config.state_normalizer = ImageNormalizer()
    config.reward_normalizer = SignNormalizer()
    config.discount = 0.99
    config.use_gae = True
    config.gae_tau = 0.95
    config.entropy_weight = 0.01
    config.gradient_clip = 0.5
    config.rollout_length = 128
    config.optimization_epochs = 4
    config.mini_batch_size = config.rollout_length * config.num_workers // 4
    config.ppo_ratio_clip = 0.1
    config.log_interval = config.rollout_length * config.num_workers
    config.shared_repr = True
    config.max_steps = int(2e7)
    run_steps(PPOAgent(config))
# DDPG
def ddpg_continuous(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)
    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()
    config.max_steps = int(1e6)
    config.eval_interval = int(1e4)
    config.eval_episodes = 20
    config.network_fn = lambda: DeterministicActorCriticNet(
        config.state_dim, config.action_dim,
        actor_body=FCBody(config.state_dim, (400, 300), gate=F.relu),
        critic_body=FCBody(config.state_dim + config.action_dim, (400, 300), gate=F.relu),
        actor_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3),
        critic_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3))
    config.replay_fn = lambda: UniformReplay(memory_size=int(1e6), batch_size=100)
    config.discount = 0.99
    config.random_process_fn = lambda: OrnsteinUhlenbeckProcess(
        size=(config.action_dim,), std=LinearSchedule(0.2))
    config.warm_up = int(1e4)
    config.target_network_mix = 5e-3
    run_steps(DDPGAgent(config))
# TD3
def td3_continuous(**kwargs):
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)
    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()
    config.max_steps = int(1e6)
    config.eval_interval = int(1e4)
    config.eval_episodes = 20
    config.network_fn = lambda: TD3Net(
        config.action_dim,
        actor_body_fn=lambda: FCBody(config.state_dim, (400, 300), gate=F.relu),
        critic_body_fn=lambda: FCBody(
            config.state_dim + config.action_dim, (400, 300), gate=F.relu),
        actor_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3),
        critic_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3))
    replay_kwargs = dict(
        memory_size=int(1e6),
        batch_size=100,
    )
    config.replay_fn = lambda: ReplayWrapper(UniformReplay, replay_kwargs)
    config.discount = 0.99
    config.random_process_fn = lambda: GaussianProcess(
        size=(config.action_dim,), std=LinearSchedule(0.1))
    config.td3_noise = 0.2
    config.td3_noise_clip = 0.5
    config.td3_delay = 2
    config.warm_up = int(1e4)
    config.target_network_mix = 5e-3
    run_steps(TD3Agent(config))
def sac_continuous(**kwargs):
    from deep_rl.agent import SACAgent
    generate_tag(kwargs)
    kwargs.setdefault('log_level', 0)
    config = Config()
    config.merge(kwargs)
    config.task_fn = lambda: Task(config.game)
    config.eval_env = config.task_fn()
    config.max_steps = int(1e6)
    config.eval_interval = int(1e4)
    config.eval_episodes = 20
    config.network_fn = lambda: SACNet(
        config.action_dim,
        actor_body_fn=lambda: FCBody(config.state_dim, (400, 300), gate=F.relu),
        critic_body_fn=lambda: FCBody(
            config.state_dim + config.action_dim, (400, 300), gate=F.relu),
        actor_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3),
        critic_opt_fn=lambda params: torch.optim.Adam(params, lr=1e-3))
    config.replay_fn = lambda: Replay(memory_size=int(1e6), batch_size=100)
    config.discount = 0.99
    config.random_process_fn = lambda: GaussianProcess(
        size=(config.action_dim,), std=LinearSchedule(0.1))
    config.td3_noise = 0.2
    config.td3_noise_clip = 0.5
    config.td3_delay = 2
    config.warm_up = int(1e4)
    config.target_network_mix = 5e-3
    config.sac_coef = 0.2
    run_steps(SACAgent(config))
if __name__ == '__main__':
    mkdir('log')
    mkdir('tf_log')
    set_one_thread()
    random_seed()
    # -1 is CPU, a positive integer is the index of GPU
    select_device(-1)
    # select_device(0)

    game = 'CartPole-v0'
    # dqn_feature(game=game, n_step=1, replay_cls=UniformReplay, async_replay=True, noisy_linear=True)
    # quantile_regression_dqn_feature(game=game)
    # categorical_dqn_feature(game=game)
    # rainbow_feature(game=game)
    # a2c_feature(game=game)
    # n_step_dqn_feature(game=game)
    # option_critic_feature(game=game)

    game = 'HalfCheetah-v2'
    # game = 'Hopper-v2'
    # a2c_continuous(game=game)
    # ppo_continuous(game=game)
    # ddpg_continuous(game=game)
    # td3_continuous(game=game)
    sac_continuous(game=game)

    game = 'BreakoutNoFrameskip-v4'
    dqn_pixel(game=game, n_step=1, replay_cls=UniformReplay, async_replay=False)
    # quantile_regression_dqn_pixel(game=game)
    # categorical_dqn_pixel(game=game)
    # rainbow_pixel(game=game, async_replay=False)
    # a2c_pixel(game=game)
    # n_step_dqn_pixel(game=game)
    # option_critic_pixel(game=game)
    # ppo_pixel(game=game)
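The configs above pass `LinearSchedule(...)` objects for ε-greedy exploration and noise annealing, using two call signatures: `LinearSchedule(0.2)` for a constant value and `LinearSchedule(1.0, 0.1, 1e4)` for linear decay. A sketch of how such a schedule is commonly implemented (an illustrative reimplementation, not the library's exact source):

```python
class LinearSchedule:
    """Anneal a value linearly from start to end over a given number of steps."""

    def __init__(self, start, end=None, steps=None):
        if end is None:  # single-argument form: a constant schedule
            end = start
            steps = 1
        self.inc = (end - start) / float(steps)
        # clamp toward end: min for increasing schedules, max for decreasing
        self.bound = min if end > start else max
        self.current = start
        self.end = end

    def __call__(self, steps=1):
        # return the current value, then advance by `steps` increments
        value = self.current
        self.current = self.bound(self.current + self.inc * steps, self.end)
        return value
```

After `steps` calls the value pins at `end`, so agents can keep calling it past the annealing horizon without the probability drifting out of range.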
| 34.632801 | 119 | 0.705808 | 3,195 | 23,862 | 5.038185 | 0.079812 | 0.041747 | 0.023855 | 0.028328 | 0.877865 | 0.850407 | 0.837858 | 0.822079 | 0.793937 | 0.779338 | 0 | 0.038536 | 0.182214 | 23,862 | 688 | 120 | 34.68314 | 0.786359 | 0.052301 | 0 | 0.789762 | 0 | 0 | 0.016184 | 0.000981 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.003656 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
86c547146d50d3f7cca81cd266c9769528014c27 | 200 | py | Python | services/showcase/api/core/utils.py | Open-Earth-Foundation/traction | 908b555a7f408a88541b7692d3730e37a297c919 | [
"Apache-2.0"
] | 12 | 2022-01-29T20:30:03.000Z | 2022-03-29T11:46:14.000Z | services/showcase/api/core/utils.py | Open-Earth-Foundation/traction | 908b555a7f408a88541b7692d3730e37a297c919 | [
"Apache-2.0"
] | 38 | 2021-11-22T17:52:50.000Z | 2022-03-31T17:52:00.000Z | services/showcase/api/core/utils.py | Open-Earth-Foundation/traction | 908b555a7f408a88541b7692d3730e37a297c919 | [
"Apache-2.0"
] | 9 | 2021-11-22T18:05:48.000Z | 2022-03-29T11:25:08.000Z | from passlib.hash import pbkdf2_sha256
def hash_password(password):
    return pbkdf2_sha256.hash(password)


def check_password(password, hashed):
    return pbkdf2_sha256.verify(password, hashed)
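These helpers delegate to passlib's `pbkdf2_sha256`, which generates a per-password salt and derives a key with PBKDF2-HMAC-SHA256. The same scheme can be sketched with only the standard library; the serialized format and iteration count below are illustrative choices, not passlib's modular-crypt format:

```python
import hashlib
import hmac
import os


def hash_password(password, iterations=29000):
    # Fresh random salt per password, then a PBKDF2-HMAC-SHA256 derived key.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return f"{iterations}${salt.hex()}${dk.hex()}"


def check_password(password, hashed):
    # Re-derive with the stored salt/iterations and compare in constant time.
    iterations, salt_hex, dk_hex = hashed.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                             bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(dk.hex(), dk_hex)
```

Because the salt is random, hashing the same password twice yields different strings; only `check_password` can confirm a match.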
| 20 | 49 | 0.795 | 26 | 200 | 5.923077 | 0.461538 | 0.233766 | 0.233766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.13 | 200 | 9 | 50 | 22.222222 | 0.816092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 1 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 8 |
86f1da08feee667aa219ef73ea2379de005b8544 | 3,731 | py | Python | week07/01.ContextManagers/test_silence_exception.py | HackBulgaria/Programming-101-Python-2020-Spring | 443446028df7fe78fcdd6c37dada0b5cd8ed3c93 | [
"MIT"
] | 30 | 2020-01-22T17:22:43.000Z | 2022-01-26T08:28:57.000Z | week07/01.ContextManagers/test_silence_exception.py | HackBulgaria/Programming-101-Python-2020-Spring | 443446028df7fe78fcdd6c37dada0b5cd8ed3c93 | [
"MIT"
] | 1 | 2020-01-21T19:50:47.000Z | 2020-03-18T16:18:31.000Z | week07/01.ContextManagers/test_silence_exception.py | HackBulgaria/Programming-101-Python-2020-Spring | 443446028df7fe78fcdd6c37dada0b5cd8ed3c93 | [
"MIT"
] | 7 | 2019-11-28T15:59:16.000Z | 2020-12-05T08:39:02.000Z | import unittest
from silence_exception import silence_exception, SilenceException
class SilenceExceptionTests(unittest.TestCase):
    def test_silences_passed_exception(self):
        exception = None
        try:
            with silence_exception(ValueError):
                raise ValueError('Testing.')
        except Exception as exc:
            exception = exc
        self.assertIsNone(exception)

    def test_not_silences_different_exception_from_passed_one(self):
        with self.assertRaises(ValueError):
            with silence_exception(TypeError):
                raise ValueError('Testing.')

    def test_not_silences_passed_exception_outside_context_manager(self):
        with self.assertRaises(ValueError, msg='Testing outside with-block'):
            with silence_exception(ValueError):
                raise ValueError('Testing inside with-block')
            raise ValueError('Testing outside with-block')

    def test_silences_passed_exception_with_correct_message(self):
        exception = None
        exc_message = 'Testing with msg argument.'
        try:
            with silence_exception(ValueError, msg=exc_message):
                raise ValueError(exc_message)
        except Exception as exc:
            exception = exc
        self.assertIsNone(exception)

    def test_not_silences_passed_exception_with_different_message(self):
        exc_message = 'Testing with msg argument.'
        with self.assertRaises(ValueError):
            with silence_exception(ValueError, msg=exc_message):
                raise ValueError(f'{exc_message} - different.')

    def test_not_silences_different_exception_with_same_message(self):
        exc_message = 'Testing with msg argument.'
        with self.assertRaises(TypeError):
            with silence_exception(ValueError, msg=exc_message):
                raise TypeError(exc_message)
class SilenceExceptionClassTests(unittest.TestCase):
    def test_silences_passed_exception(self):
        exception = None
        try:
            with SilenceException(ValueError):
                raise ValueError('Testing.')
        except Exception as exc:
            exception = exc
        self.assertIsNone(exception)

    def test_not_silences_different_exception_from_passed_one(self):
        with self.assertRaises(ValueError):
            with SilenceException(TypeError):
                raise ValueError('Testing.')

    def test_not_silences_passed_exception_outside_context_manager(self):
        with self.assertRaises(ValueError, msg='Testing outside with-block'):
            with SilenceException(ValueError):
                raise ValueError('Testing inside with-block')
            raise ValueError('Testing outside with-block')

    def test_silences_passed_exception_with_correct_message(self):
        exception = None
        exc_message = 'Testing with msg argument.'
        try:
            with SilenceException(ValueError, msg=exc_message):
                raise ValueError(exc_message)
        except Exception as exc:
            exception = exc
        self.assertIsNone(exception)

    def test_not_silences_passed_exception_with_different_message(self):
        exc_message = 'Testing with msg argument.'
        with self.assertRaises(ValueError):
            with SilenceException(ValueError, msg=exc_message):
                raise ValueError(f'{exc_message} - different.')

    def test_not_silences_different_exception_with_same_message(self):
        exc_message = 'Testing with msg argument.'
        with self.assertRaises(TypeError):
            with SilenceException(ValueError, msg=exc_message):
                raise TypeError(exc_message)
if __name__ == '__main__':
    unittest.main()
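The tests pin down the contract: the named exception type is swallowed inside the `with` block (but not outside it), and when `msg` is supplied only an exception whose message matches exactly is swallowed. A minimal implementation satisfying both the generator-based and class-based variants might look like this (a sketch, not the course's reference solution):

```python
from contextlib import contextmanager


@contextmanager
def silence_exception(exc_type, msg=None):
    try:
        yield
    except exc_type as exc:
        if msg is not None and str(exc) != msg:
            raise  # message mismatch: let it propagate


class SilenceException:
    def __init__(self, exc_type, msg=None):
        self.exc_type = exc_type
        self.msg = msg

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, traceback):
        if exc_type is None:
            return False
        matches_type = issubclass(exc_type, self.exc_type)
        matches_msg = self.msg is None or str(exc) == self.msg
        # Returning True from __exit__ suppresses the exception.
        return matches_type and matches_msg
```

In the generator form, `except exc_type` does the type filtering for free; the class form must do both checks explicitly in `__exit__`.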
| 33.918182 | 77 | 0.676762 | 382 | 3,731 | 6.327225 | 0.115183 | 0.074472 | 0.076127 | 0.059578 | 0.937526 | 0.937526 | 0.917667 | 0.891187 | 0.832437 | 0.819197 | 0 | 0 | 0.253819 | 3,731 | 109 | 78 | 34.229358 | 0.868175 | 0 | 0 | 0.897436 | 0 | 0 | 0.107746 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.153846 | false | 0.128205 | 0.025641 | 0 | 0.205128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
8115dee3bff12b6e89dda75382eff1a29ae149d6 | 2,689 | py | Python | actymath/columns/whole_of_life.py | ttamg/actymath | 37405449e0e72c44a8c300f18c8c0f6caf313f06 | [
"MIT"
] | 1 | 2021-05-22T17:58:59.000Z | 2021-05-22T17:58:59.000Z | actymath/columns/whole_of_life.py | ttamg/actymath | 37405449e0e72c44a8c300f18c8c0f6caf313f06 | [
"MIT"
] | null | null | null | actymath/columns/whole_of_life.py | ttamg/actymath | 37405449e0e72c44a8c300f18c8c0f6caf313f06 | [
"MIT"
] | null | null | null | import pandas as pd
from .base import Column
""" Actuarial formulae for whole remainder of life. """
class a_due_x(Column):
    """ PV of annuity due (paid in advance) for remainder of life. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "a_due(x{life})"
    dependencies = ["N(x{life})", "D(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"N(x{kwargs['life']})"] / calc[f"D(x{kwargs['life']})"]


class a_x(Column):
    """ PV of annuity (paid in arrears) for remainder of life. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "a(x{life})"
    dependencies = ["N(x{life})", "D(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"N(x{kwargs['life']})"].shift(-1) / calc[f"D(x{kwargs['life']})"]


class A_x(Column):
    """ PV of whole of life assurance paid in arrears. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "A(x{life})"
    dependencies = ["M(x{life})", "D(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"M(x{kwargs['life']})"] / calc[f"D(x{kwargs['life']})"]


class NP_x(Column):
    """ Net premium for whole of life assurance. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "NP(x{life})"
    dependencies = ["A(x{life})", "a_due(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"A(x{kwargs['life']})"] / calc[f"a_due(x{kwargs['life']})"]


class Ia_due_x(Column):
    """ PV of arithmetically increasing annuity due (paid in advance) for remainder of life. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "Ia_due(x{life})"
    dependencies = ["S(x{life})", "D(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"S(x{kwargs['life']})"] / calc[f"D(x{kwargs['life']})"]


class Ia_x(Column):
    """ PV of arithmetically increasing annuity (paid in arrears) for remainder of life. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "Ia(x{life})"
    dependencies = ["S(x{life})", "D(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"S(x{kwargs['life']})"].shift(-1) / calc[f"D(x{kwargs['life']})"]


class IA_x(Column):
    """ PV of arithmetically increasing whole of life assurance paid in arrears. """

    parameters = {"life": "Life identifier (int)"}
    column_name = "IA(x{life})"
    dependencies = ["R(x{life})", "D(x{life})"]

    @classmethod
    def calculate(cls, calc, **kwargs):
        return calc[f"R(x{kwargs['life']})"] / calc[f"D(x{kwargs['life']})"]
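These columns are ratios of the classical commutation functions (D, N, S, M, R). The whole-of-life identities they encode, e.g. ä(x) = N(x)/D(x) and a(x) = N(x+1)/D(x) = ä(x) − 1, can be checked on a toy table; the survivorship and interest rate below are illustrative assumptions, not a published mortality basis:

```python
# Toy commutation functions: D(x) = v^x * l(x),  N(x) = sum of D(t) for t >= x
V = 1 / 1.05                                      # discount factor at 5% interest (assumption)
L = [100000 * (0.99 ** x) for x in range(121)]    # toy survivorship: constant 1% deaths/year
D = [V ** x * L[x] for x in range(121)]
N = [sum(D[x:]) for x in range(121)]


def a_due(x):
    # annuity-due (paid in advance): N(x) / D(x)
    return N[x] / D[x]


def a(x):
    # annuity in arrears: N(x+1) / D(x)
    return N[x + 1] / D[x]
```

Since N(x) = D(x) + N(x+1), the annuity-due always exceeds the annuity in arrears by exactly one payment, which is what the two column classes above compute via the `.shift(-1)` on N.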
d4a1edec0243dcbf58da9bb503a2b9e9f16685ef | 15,880 | py | Python | sdk/python/pulumi_aws/apigatewayv2/integration.py | JakeGinnivan/pulumi-aws | c91ef78932964ac74eda7f5da81f65b0f1798c93 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/apigatewayv2/integration.py | JakeGinnivan/pulumi-aws | c91ef78932964ac74eda7f5da81f65b0f1798c93 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/apigatewayv2/integration.py | JakeGinnivan/pulumi-aws | c91ef78932964ac74eda7f5da81f65b0f1798c93 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class Integration(pulumi.CustomResource):
    api_id: pulumi.Output[str]
    """
    The API identifier.
    """
    connection_id: pulumi.Output[str]
    """
    The ID of the VPC link for a private integration. Supported only for HTTP APIs.
    """
    connection_type: pulumi.Output[str]
    """
    The type of the network connection to the integration endpoint. Valid values: `INTERNET`, `VPC_LINK`. Default is `INTERNET`.
    """
    content_handling_strategy: pulumi.Output[str]
    """
    How to handle response payload content type conversions. Valid values: `CONVERT_TO_BINARY`, `CONVERT_TO_TEXT`. Supported only for WebSocket APIs.
    """
    credentials_arn: pulumi.Output[str]
    """
    The credentials required for the integration, if any.
    """
    description: pulumi.Output[str]
    """
    The description of the integration.
    """
    integration_method: pulumi.Output[str]
    """
    The integration's HTTP method. Must be specified if `integration_type` is not `MOCK`.
    """
    integration_response_selection_expression: pulumi.Output[str]
    """
    The [integration response selection expression](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-selection-expressions.html#apigateway-websocket-api-integration-response-selection-expressions) for the integration.
    """
    integration_type: pulumi.Output[str]
    """
    The integration type of an integration.
    Valid values: `AWS`, `AWS_PROXY`, `HTTP`, `HTTP_PROXY`, `MOCK`.
    """
    integration_uri: pulumi.Output[str]
    """
    The URI of the Lambda function for a Lambda proxy integration, when `integration_type` is `AWS_PROXY`.
    For an `HTTP` integration, specify a fully-qualified URL. For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service.
    """
    passthrough_behavior: pulumi.Output[str]
    """
    The pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the `request_templates` attribute.
    Valid values: `WHEN_NO_MATCH`, `WHEN_NO_TEMPLATES`, `NEVER`. Default is `WHEN_NO_MATCH`. Supported only for WebSocket APIs.
    """
    payload_format_version: pulumi.Output[str]
    """
    The [format of the payload](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html#http-api-develop-integrations-lambda.proxy-format) sent to an integration. Valid values: `1.0`, `2.0`. Default is `1.0`.
    """
    request_templates: pulumi.Output[dict]
    """
    A map of Velocity templates that are applied on the request payload based on the value of the Content-Type header sent by the client. Supported only for WebSocket APIs.
    """
    template_selection_expression: pulumi.Output[str]
    """
    The [template selection expression](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-selection-expressions.html#apigateway-websocket-api-template-selection-expressions) for the integration.
    """
    timeout_milliseconds: pulumi.Output[float]
    """
    Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds or 29 seconds.
    """
    def __init__(__self__, resource_name, opts=None, api_id=None, connection_id=None, connection_type=None, content_handling_strategy=None, credentials_arn=None, description=None, integration_method=None, integration_type=None, integration_uri=None, passthrough_behavior=None, payload_format_version=None, request_templates=None, template_selection_expression=None, timeout_milliseconds=None, __props__=None, __name__=None, __opts__=None):
        """
        Manages an Amazon API Gateway Version 2 integration.
        More information can be found in the [Amazon API Gateway Developer Guide](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html).

        ## Example Usage
        ### Basic

        ```python
        import pulumi
        import pulumi_aws as aws

        example = aws.apigatewayv2.Integration("example",
            api_id=aws_apigatewayv2_api["example"]["id"],
            integration_type="MOCK")
        ```
        ### Lambda Integration

        ```python
        import pulumi
        import pulumi_aws as aws

        example_function = aws.lambda_.Function("exampleFunction",
            code=pulumi.FileArchive("example.zip"),
            handler="index.handler",
            role=aws_iam_role["example"]["arn"],
            runtime="nodejs10.x")
        example_integration = aws.apigatewayv2.Integration("exampleIntegration",
            api_id=aws_apigatewayv2_api["example"]["id"],
            connection_type="INTERNET",
            content_handling_strategy="CONVERT_TO_TEXT",
            description="Lambda example",
            integration_method="POST",
            integration_type="AWS",
            integration_uri=example_function.invoke_arn,
            passthrough_behavior="WHEN_NO_MATCH")
        ```

        :param str resource_name: The name of the resource.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] api_id: The API identifier.
        :param pulumi.Input[str] connection_id: The ID of the VPC link for a private integration. Supported only for HTTP APIs.
        :param pulumi.Input[str] connection_type: The type of the network connection to the integration endpoint. Valid values: `INTERNET`, `VPC_LINK`. Default is `INTERNET`.
        :param pulumi.Input[str] content_handling_strategy: How to handle response payload content type conversions. Valid values: `CONVERT_TO_BINARY`, `CONVERT_TO_TEXT`. Supported only for WebSocket APIs.
        :param pulumi.Input[str] credentials_arn: The credentials required for the integration, if any.
        :param pulumi.Input[str] description: The description of the integration.
        :param pulumi.Input[str] integration_method: The integration's HTTP method. Must be specified if `integration_type` is not `MOCK`.
        :param pulumi.Input[str] integration_type: The integration type of an integration.
               Valid values: `AWS`, `AWS_PROXY`, `HTTP`, `HTTP_PROXY`, `MOCK`.
        :param pulumi.Input[str] integration_uri: The URI of the Lambda function for a Lambda proxy integration, when `integration_type` is `AWS_PROXY`.
               For an `HTTP` integration, specify a fully-qualified URL. For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service.
        :param pulumi.Input[str] passthrough_behavior: The pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the `request_templates` attribute.
               Valid values: `WHEN_NO_MATCH`, `WHEN_NO_TEMPLATES`, `NEVER`. Default is `WHEN_NO_MATCH`. Supported only for WebSocket APIs.
        :param pulumi.Input[str] payload_format_version: The [format of the payload](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html#http-api-develop-integrations-lambda.proxy-format) sent to an integration. Valid values: `1.0`, `2.0`. Default is `1.0`.
        :param pulumi.Input[dict] request_templates: A map of Velocity templates that are applied on the request payload based on the value of the Content-Type header sent by the client. Supported only for WebSocket APIs.
        :param pulumi.Input[str] template_selection_expression: The [template selection expression](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-selection-expressions.html#apigateway-websocket-api-template-selection-expressions) for the integration.
        :param pulumi.Input[float] timeout_milliseconds: Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds or 29 seconds.
        """
        if __name__ is not None:
            warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
            resource_name = __name__
        if __opts__ is not None:
            warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
            opts = __opts__
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = dict()

            if api_id is None:
                raise TypeError("Missing required property 'api_id'")
            __props__['api_id'] = api_id
            __props__['connection_id'] = connection_id
            __props__['connection_type'] = connection_type
            __props__['content_handling_strategy'] = content_handling_strategy
            __props__['credentials_arn'] = credentials_arn
            __props__['description'] = description
            __props__['integration_method'] = integration_method
            if integration_type is None:
                raise TypeError("Missing required property 'integration_type'")
            __props__['integration_type'] = integration_type
            __props__['integration_uri'] = integration_uri
            __props__['passthrough_behavior'] = passthrough_behavior
            __props__['payload_format_version'] = payload_format_version
            __props__['request_templates'] = request_templates
            __props__['template_selection_expression'] = template_selection_expression
            __props__['timeout_milliseconds'] = timeout_milliseconds
            __props__['integration_response_selection_expression'] = None
        super(Integration, __self__).__init__(
            'aws:apigatewayv2/integration:Integration',
            resource_name,
            __props__,
            opts)
    @staticmethod
    def get(resource_name, id, opts=None, api_id=None, connection_id=None, connection_type=None, content_handling_strategy=None, credentials_arn=None, description=None, integration_method=None, integration_response_selection_expression=None, integration_type=None, integration_uri=None, passthrough_behavior=None, payload_format_version=None, request_templates=None, template_selection_expression=None, timeout_milliseconds=None):
        """
        Get an existing Integration resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param str id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] api_id: The API identifier.
        :param pulumi.Input[str] connection_id: The ID of the VPC link for a private integration. Supported only for HTTP APIs.
        :param pulumi.Input[str] connection_type: The type of the network connection to the integration endpoint. Valid values: `INTERNET`, `VPC_LINK`. Default is `INTERNET`.
        :param pulumi.Input[str] content_handling_strategy: How to handle response payload content type conversions. Valid values: `CONVERT_TO_BINARY`, `CONVERT_TO_TEXT`. Supported only for WebSocket APIs.
        :param pulumi.Input[str] credentials_arn: The credentials required for the integration, if any.
        :param pulumi.Input[str] description: The description of the integration.
        :param pulumi.Input[str] integration_method: The integration's HTTP method. Must be specified if `integration_type` is not `MOCK`.
        :param pulumi.Input[str] integration_response_selection_expression: The [integration response selection expression](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-selection-expressions.html#apigateway-websocket-api-integration-response-selection-expressions) for the integration.
        :param pulumi.Input[str] integration_type: The integration type of an integration.
               Valid values: `AWS`, `AWS_PROXY`, `HTTP`, `HTTP_PROXY`, `MOCK`.
        :param pulumi.Input[str] integration_uri: The URI of the Lambda function for a Lambda proxy integration, when `integration_type` is `AWS_PROXY`.
               For an `HTTP` integration, specify a fully-qualified URL. For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service.
        :param pulumi.Input[str] passthrough_behavior: The pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the `request_templates` attribute.
               Valid values: `WHEN_NO_MATCH`, `WHEN_NO_TEMPLATES`, `NEVER`. Default is `WHEN_NO_MATCH`. Supported only for WebSocket APIs.
        :param pulumi.Input[str] payload_format_version: The [format of the payload](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html#http-api-develop-integrations-lambda.proxy-format) sent to an integration. Valid values: `1.0`, `2.0`. Default is `1.0`.
        :param pulumi.Input[dict] request_templates: A map of Velocity templates that are applied on the request payload based on the value of the Content-Type header sent by the client. Supported only for WebSocket APIs.
        :param pulumi.Input[str] template_selection_expression: The [template selection expression](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-selection-expressions.html#apigateway-websocket-api-template-selection-expressions) for the integration.
        :param pulumi.Input[float] timeout_milliseconds: Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds or 29 seconds.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = dict()

        __props__["api_id"] = api_id
        __props__["connection_id"] = connection_id
        __props__["connection_type"] = connection_type
        __props__["content_handling_strategy"] = content_handling_strategy
        __props__["credentials_arn"] = credentials_arn
        __props__["description"] = description
        __props__["integration_method"] = integration_method
        __props__["integration_response_selection_expression"] = integration_response_selection_expression
        __props__["integration_type"] = integration_type
        __props__["integration_uri"] = integration_uri
        __props__["passthrough_behavior"] = passthrough_behavior
        __props__["payload_format_version"] = payload_format_version
        __props__["request_templates"] = request_templates
        __props__["template_selection_expression"] = template_selection_expression
        __props__["timeout_milliseconds"] = timeout_milliseconds
        return Integration(resource_name, opts=opts, __props__=__props__)
    def translate_output_property(self, prop):
        return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

    def translate_input_property(self, prop):
        return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Testing the cache operator with non-mappable datasets
"""
import os
import itertools
import pytest
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.text as text
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore import log as logger
DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
TEXT_TF_DATA_DIR = ["../data/dataset/testTextTFRecord/text.tfrecord"]
SCHEMA_DIR2 = "../data/dataset/testTextTFRecord/datasetSchema.json"
TRAIN_DATA_DIR = ["../data/dataset/test_tf_file_3_images2/train-0000-of-0001.data",
"../data/dataset/test_tf_file_3_images2/train-0000-of-0002.data",
"../data/dataset/test_tf_file_3_images2/train-0000-of-0003.data",
"../data/dataset/test_tf_file_3_images2/train-0000-of-0004.data"]
TRAIN_SCHEMA_DIR = "../data/dataset/test_tf_file_3_images2/datasetSchema.json"
IMAGE_FOLDER_DATA_DIR = "../data/dataset/testImageNetData/train/"
CLUE_DATA_DIR = '../data/dataset/testCLUE/afqmc/train.json'
CSV_DATA_DIR = '../data/dataset/testCSV/1.csv'
TEXT_FILE_DATA_DIR = "../data/dataset/testTextFileDataset/1.txt"
GENERATE_GOLDEN = False
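# Every test below repeats the same SESSION_ID lookup. A hypothetical helper
# (not actually used by this suite) could capture the pattern in one place:

```python
import os

def require_session_id():
    """Return the cache session id from the environment, or fail loudly.

    Hypothetical helper; the tests below inline this logic instead.
    """
    if "SESSION_ID" in os.environ:
        return int(os.environ["SESSION_ID"])
    raise RuntimeError("Testcase requires SESSION_ID environment variable")
```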
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic1():
"""
A random dataset (a non-mappable dataset) with a cache placed just after the leaf
"""
logger.info("Test cache nomap basic 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
schema = ds.Schema()
schema.add_column('image', de_type=mstype.uint8,
shape=[640, 480, 3]) # 921600 bytes (a bit less than 1 MB per image)
schema.add_column('label', de_type=mstype.uint8, shape=[1])
# create a cache. arbitrary session_id for now
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# User-created sampler here
ds1 = ds.RandomDataset(schema=schema, total_rows=10, num_parallel_workers=4, cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for data in ds1.create_dict_iterator(num_epochs=1):
logger.info("printing the label: {}".format(data["label"]))
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 40
logger.info("test_cache_nomap_basic1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic2():
"""
A random dataset (a non-mappable dataset) with a cache placed just after the leaf
"""
logger.info("Test cache nomap basic 2")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
schema = ds.Schema()
schema.add_column('image', de_type=mstype.uint8,
shape=[640, 480, 3]) # 921600 bytes (a bit less than 1 MB per image)
schema.add_column('label', de_type=mstype.uint8, shape=[1])
# create a cache. arbitrary session_id for now
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# The sampler arg is not given directly; however, any of these args will auto-generate
# an appropriate sampler: num_samples, shuffle, num_shards, shard_id.
# In this case, the presence of num_samples chooses a sampler.
ds1 = ds.RandomDataset(schema=schema, total_rows=20, num_samples=20, num_parallel_workers=4, cache=some_cache)
ds1 = ds1.repeat(2)
num_iter = 0
for data in ds1.create_dict_iterator(num_epochs=1):
logger.info("printing the label: {}".format(data["label"]))
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 40
logger.info("test_cache_nomap_basic2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic3():
"""
A TF reader dataset (a non-mappable dataset) with a cache placed just after the leaf
Repeat
|
Map(decode)
|
Cache
|
TFReader
"""
logger.info("Test cache nomap basic 3")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False, cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(operations=decode_op, input_columns=["image"])
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
# Contact the server to get the statistics
stat = some_cache.GetStat()
cache_sz = stat.avg_cache_sz
num_mem_cached = stat.num_mem_cached
num_disk_cached = stat.num_disk_cached
logger.info("Number of rows cached in memory: {}".format(num_mem_cached))
logger.info("Number of rows spilled to disk: {}".format(num_disk_cached))
logger.info("Average row cache size: {}".format(cache_sz))
logger.info("test_cache_nomap_basic3 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic4():
"""
A TF reader dataset (a non-mappable dataset) with a map decode and a cache after it.
Since a global shuffle is requested for the TF reader, a shuffle op would normally be injected over it.
But if a cache sits above, that shuffle becomes redundant and should be removed.
Repeat
|
Cache
|
Map(decode)
|
TFReader
"""
logger.info("Test cache nomap basic 4")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# With shuffle not being set, TF defaults to a "global" shuffle when there is no cache
# in the picture. This causes a shuffle-injection over the TF reader. For clarity, this
# test explicitly passes the global option, even though it is the Python default.
# But when a cache is added in the tree above the TF reader, global shuffling is done
# through the sampler over the cache, not by the shuffle op. In that case, tree prepare
# will remove the shuffle op that got injected by the initial tree creation.
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=ds.Shuffle.GLOBAL)
decode_op = c_vision.Decode()
ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_basic4 Ended.\n")
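# The shuffle removal described above can be sketched as a toy tree-prepare pass.
# The op names and the list-based tree are illustrative, not MindSpore's actual
# internals.

```python
def prepare_tree(ops):
    # Illustrative tree-prepare pass over a leaf-to-root op list: drop a Shuffle
    # injected below a Cache, since the cache's sampler already provides global
    # shuffling.
    if "Cache" in ops and "Shuffle" in ops and ops.index("Shuffle") < ops.index("Cache"):
        ops = [op for op in ops if op != "Shuffle"]
    return ops
```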
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic5():
"""
A TF reader dataset (a non-mappable dataset) with a cache placed just after the leaf.
Same as test 3, but this one does not pass a shuffle arg, so TF defaults to a global
shuffle and attempts to inject a shuffle operator. However, since there is a cache,
global shuffle is not needed, so the shuffle will not be built. The resulting tree is
identical to basic 3, but we arrive at the same tree through a different code path
(if there were no cache, the shuffle WOULD be built).
Repeat
|
Map(decode)
|
Cache
|
TFReader
"""
logger.info("Test cache nomap basic 5")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0)
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(operations=decode_op, input_columns=["image"])
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_basic5 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic6():
"""
A TF reader dataset (a non-mappable dataset) with a cache placed just after the leaf.
In this one, the TF dataset is given a sharding configuration; however, since a cache
is used, tree prepare should undo that configuration and instead choose a distributed
sampler with the same shard config.
Repeat
|
Map(decode)
|
Cache
|
TFReader
"""
logger.info("Test cache nomap basic 6")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# With only 3 records sharded into 3, we expect only 1 record returned for this shard.
# However, the sharding will be done by the sampler, not by the TF record leaf node.
# In this case it is row-based sharding, not the file-based sharding that would happen
# if there were no cache.
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], num_shards=3, shard_id=1, cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(operations=decode_op, input_columns=["image"])
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 4
logger.info("test_cache_nomap_basic6 Ended.\n")
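# The row-based sharding above can be sketched in plain Python. This assumes the
# distributed sampler hands shard k every num_shards-th row; the real sampler's
# ordering may differ, only the row counts matter here.

```python
def shard_rows(num_rows, num_shards, shard_id):
    # Row-based sharding: shard k is assumed to receive rows k, k + num_shards, ...
    return [row for row in range(num_rows) if row % num_shards == shard_id]

# 3 records sharded 3 ways leave 1 row for shard 1; repeat(4) then yields 4 rows.
assert len(shard_rows(3, 3, 1) * 4) == 4
```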
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic7():
"""
A TF reader dataset (a non-mappable dataset) that uses global shuffle and is cached,
followed by a map.
In this one, the TF dataset with global shuffle might want to inject a shuffle op over
the TF reader, but since a cache is given, it will choose not to.
Repeat
|
Map(decode)
|
cache
|
TFReader
"""
logger.info("Test cache nomap basic 7")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=ds.Shuffle.GLOBAL, cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(operations=decode_op, input_columns=["image"])
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_basic7 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic8():
"""
Test cache as root node
cache
|
TFReader
"""
logger.info("Test cache basic 8")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
logger.info("get data from dataset")
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 3
logger.info('test_cache_basic8 Ended.\n')
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_basic9():
"""
Testing the GetStat interface for getting some info from the server; this should fail
because the cache has not been used in any pipeline.
"""
logger.info("Test cache nomap basic 9")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# Contact the server to get the statistics, this should fail because we have not used this cache in any pipeline
# so there will not be any cache to get stats on.
with pytest.raises(RuntimeError) as e:
stat = some_cache.GetStat()
cache_sz = stat.avg_cache_sz
logger.info("Average row cache size: {}".format(cache_sz))
assert "Unexpected error" in str(e.value)
logger.info("test_cache_nomap_basic9 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_allowed_share1():
"""
It is allowed to share the cache between the following two trees:
Repeat Shuffle
| |
Cache Cache
| |
TFReader TFReader
"""
logger.info("Test cache nomap allowed share 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
ds.config.set_seed(1)
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0, prefetch_size=32)
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False, cache=some_cache)
ds1 = ds1.repeat(4)
ds2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False, cache=some_cache)
ds2 = ds2.shuffle(buffer_size=2)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
assert num_iter == 12
logger.info("Number of data in ds1: {} ".format(num_iter))
num_iter = 0
for _ in ds2.create_dict_iterator(num_epochs=1):
num_iter += 1
assert num_iter == 3
logger.info("test_cache_nomap_allowed_share1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_allowed_share2():
"""
It is allowed to share the cache between the following two trees (with map decode):
Repeat Shuffle
| |
Cache Cache
| |
Map(decode) Map(decode)
| |
TFReader TFReader
"""
logger.info("Test cache nomap allowed share 2")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
ds.config.set_seed(1)
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0)
decode_op = c_vision.Decode()
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache)
ds1 = ds1.repeat(4)
ds2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
ds2 = ds2.map(operations=decode_op, input_columns=["image"], cache=some_cache)
ds2 = ds2.shuffle(buffer_size=2)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
num_iter = 0
for _ in ds2.create_dict_iterator(num_epochs=1):
num_iter += 1
assert num_iter == 3
logger.info("test_cache_nomap_allowed_share2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_allowed_share3():
"""
It is allowed to share the cache between the following two trees (different shard ids):
Repeat Repeat
| |
Cache Cache
| |
TFReader(shard_id = 0) TFReader(shard_id = 1)
"""
logger.info("Test cache nomap allowed share 3")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
tf_files = ["../data/dataset/tf_file_dataset/test1.data", "../data/dataset/tf_file_dataset/test2.data"]
ds1 = ds.TFRecordDataset(tf_files, num_shards=2, shard_id=0, num_samples=3, shuffle=False, cache=some_cache)
ds1 = ds1.repeat(4)
ds2 = ds.TFRecordDataset(tf_files, num_shards=2, shard_id=1, num_samples=3, shuffle=False, cache=some_cache)
ds2 = ds2.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
num_iter = 0
for _ in ds2.create_dict_iterator(num_epochs=1):
num_iter += 1
assert num_iter == 12
logger.info("test_cache_nomap_allowed_share3 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_allowed_share4():
"""
It is allowed to share the cache between the following two trees:
Cache Cache
| |
Map(decode, num_parallel_workers=1) Map(decode, num_parallel_workers=2)
| |
TFReader TFReader
"""
logger.info("Test cache nomap allowed share 4")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0)
decode_op = c_vision.Decode()
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache, num_parallel_workers=1)
ds2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
ds2 = ds2.map(operations=decode_op, input_columns=["image"], cache=some_cache, num_parallel_workers=2)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 3
num_iter = 0
for _ in ds2.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds2: {} ".format(num_iter))
assert num_iter == 3
logger.info("test_cache_nomap_allowed_share4 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_disallowed_share1():
"""
It is not allowed to share the cache between the following two trees:
Cache Cache
| |
Map(decode) Map(rescale)
| |
TFReader TFReader
"""
logger.info("Test cache nomap disallowed share1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
# This dataset has 3 records in it only
some_cache = ds.DatasetCache(session_id=session_id, size=0)
decode_op = c_vision.Decode()
rescale_op = c_vision.Rescale(1.0 / 255.0, -1.0)
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache)
ds2 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
ds2 = ds2.map(operations=rescale_op, input_columns=["image"], cache=some_cache)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 3
with pytest.raises(RuntimeError) as e:
sum([1 for _ in ds2])
assert "Cannot re-use a cache for a different tree!" in str(e.value)
logger.info("test_cache_nomap_disallowed_share1 Ended.\n")
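# A minimal sketch of why this share is disallowed, assuming (as the error message
# suggests) that the server keys each cache by a fingerprint of the pipeline beneath
# it. The hashing scheme here is invented for illustration.

```python
import hashlib

def tree_fingerprint(ops):
    # Hypothetical cache key: a hash of the op sequence below the cache.
    return hashlib.sha256("|".join(ops).encode()).hexdigest()

# A decode pipeline and a rescale pipeline produce different keys, so one
# cache cannot serve both trees.
assert tree_fingerprint(["TFReader", "Map(decode)"]) != tree_fingerprint(["TFReader", "Map(rescale)"])
```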
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_running_twice1():
"""
Executing the same pipeline twice (from Python), with the cache injected after map
Repeat
|
Cache
|
Map(decode)
|
TFRecord
"""
logger.info("Test cache nomap running twice 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_running_twice1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_running_twice2():
"""
Executing the same pipeline twice (from the shell), with the cache injected after the leaf
Repeat
|
Map(decode)
|
Cache
|
TFRecord
"""
logger.info("Test cache nomap running twice 2")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_running_twice2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_extra_small_size1():
"""
Test running a pipeline with a cache of extra-small size and spilling enabled
Repeat
|
Map(decode)
|
Cache
|
TFRecord
"""
logger.info("Test cache nomap extra small size 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=1, spilling=True)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_extra_small_size1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_extra_small_size2():
"""
Test running a pipeline with a cache of extra-small size and spilling disabled (expected failure)
Repeat
|
Cache
|
Map(decode)
|
TFRecord
"""
logger.info("Test cache nomap extra small size 2")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=1, spilling=False)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
ds1 = ds1.repeat(4)
with pytest.raises(RuntimeError) as e:
sum([1 for _ in ds1])
assert "Out of memory" in str(e.value)
logger.info("test_cache_nomap_extra_small_size2 Ended.\n")
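# The two extra-small-size tests can be summarized by a toy admission policy.
# The row sizes and capacity below are made up; only the spilling-vs-failure
# branching mirrors the observed behavior, not the server's actual code.

```python
def cache_rows(num_rows, row_size, capacity, spilling):
    """Toy admission policy: fill memory, then spill or fail (illustrative only)."""
    in_mem = spilled = 0
    for _ in range(num_rows):
        if (in_mem + 1) * row_size <= capacity:
            in_mem += 1
        elif spilling:
            spilled += 1  # spilling=True: overflow rows go to disk
        else:
            raise MemoryError("Out of memory")  # spilling=False: hard failure
    return in_mem, spilled
```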
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_parallel_pipeline1(shard):
"""
Test running two parallel pipelines (sharing a cache) with the cache injected after the leaf op
Repeat
|
Map(decode)
|
cache
|
TFReader
"""
logger.info("Test cache nomap parallel pipeline 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, num_shards=3, shard_id=int(shard), cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 4
logger.info("test_cache_nomap_parallel_pipeline1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_parallel_pipeline2(shard):
"""
Test running two parallel pipelines (sharing a cache) with the cache injected after the map op
Repeat
|
cache
|
Map(decode)
|
TFReader
"""
logger.info("Test cache nomap parallel pipeline 2")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, num_shards=3, shard_id=int(shard))
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 4
logger.info("test_cache_nomap_parallel_pipeline2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_parallel_workers():
"""
Test cache with num_parallel_workers > 1 set for map op and leaf op
Repeat
|
Map(decode)
|
cache
|
TFReader
"""
logger.info("Test cache nomap parallel workers")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, num_parallel_workers=4)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op, num_parallel_workers=4, cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator(num_epochs=1):
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_parallel_workers Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_server_workers_1():
"""
Start the cache server with --workers 1 and then test the cache function
Repeat
|
cache
|
Map(decode)
|
TFRecord
"""
logger.info("Test cache nomap server workers 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_server_workers_1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_server_workers_100():
"""
Start the cache server with --workers 100 and then test the cache function
Repeat
|
Map(decode)
|
cache
|
TFRecord
"""
logger.info("Test cache nomap server workers 100")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_server_workers_100 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_num_connections_1():
"""
Test setting num_connections=1 in DatasetCache
Repeat
|
cache
|
Map(decode)
|
TFRecord
"""
logger.info("Test cache nomap num_connections 1")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
some_cache = ds.DatasetCache(session_id=session_id, size=0, num_connections=1)
# This dataset has 3 records in it only
ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
decode_op = c_vision.Decode()
ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
ds1 = ds1.repeat(4)
num_iter = 0
for _ in ds1.create_dict_iterator():
num_iter += 1
logger.info("Number of data in ds1: {} ".format(num_iter))
assert num_iter == 12
logger.info("test_cache_nomap_num_connections_1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_num_connections_100():
"""
Test setting num_connections=100 in DatasetCache
Repeat
|
Map(decode)
|
cache
|
TFRecord
"""
logger.info("Test cache nomap num_connections 100")
if "SESSION_ID" in os.environ:
session_id = int(os.environ['SESSION_ID'])
else:
raise RuntimeError("Testcase requires SESSION_ID environment variable")
    some_cache = ds.DatasetCache(session_id=session_id, size=0, num_connections=100)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op)
    ds1 = ds1.repeat(4)

    num_iter = 0
    for _ in ds1.create_dict_iterator():
        num_iter += 1

    logger.info("Number of data in ds1: {} ".format(num_iter))
    assert num_iter == 12
    logger.info("test_cache_nomap_num_connections_100 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_prefetch_size_1():
    """
    Test setting prefetch_size=1 in DatasetCache

       Repeat
         |
       cache
         |
     Map(decode)
         |
      TFRecord
    """
    logger.info("Test cache nomap prefetch_size 1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0, prefetch_size=1)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
    ds1 = ds1.repeat(4)

    num_iter = 0
    for _ in ds1.create_dict_iterator():
        num_iter += 1

    logger.info("Number of data in ds1: {} ".format(num_iter))
    assert num_iter == 12
    logger.info("test_cache_nomap_prefetch_size_1 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_prefetch_size_100():
    """
    Test setting prefetch_size=100 in DatasetCache

       Repeat
         |
     Map(decode)
         |
       cache
         |
      TFRecord
    """
    logger.info("Test cache nomap prefetch_size 100")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0, prefetch_size=100)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op)
    ds1 = ds1.repeat(4)

    num_iter = 0
    for _ in ds1.create_dict_iterator():
        num_iter += 1

    logger.info("Number of data in ds1: {} ".format(num_iter))
    assert num_iter == 12
    logger.info("test_cache_nomap_prefetch_size_100 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_to_device():
    """
    Test cache with to_device

     DeviceQueue
         |
      EpochCtrl
         |
       Repeat
         |
     Map(decode)
         |
       cache
         |
      TFReader
    """
    logger.info("Test cache nomap to_device")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
    ds1 = ds1.repeat(4)
    ds1 = ds1.to_device()
    ds1.send()

    logger.info("test_cache_nomap_to_device Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_session_destroy():
    """
    Test executing cache_admin -d while the pipeline is running

       Repeat
         |
       Cache
         |
     RandomDataset
    """
    logger.info("Test cache nomap session destroy")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    schema = ds.Schema()
    schema.add_column('image', de_type=mstype.uint8,
                      shape=[640, 480, 3])  # 921600 bytes (a bit less than 1 MB per image)
    schema.add_column('label', de_type=mstype.uint8, shape=[1])

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # User-created sampler here
    ds1 = ds.RandomDataset(schema=schema, num_parallel_workers=4, cache=some_cache)
    ds1 = ds1.repeat()

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in ds1.create_dict_iterator():
            num_iter += 1
    assert "Unexpected error" in str(e.value)

    logger.info("test_cache_nomap_session_destroy Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_server_stop():
    """
    Test executing cache_admin --stop while the pipeline is running

       Repeat
         |
       Cache
         |
     RandomDataset
    """
    logger.info("Test cache nomap server stop")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    schema = ds.Schema()
    schema.add_column('image', de_type=mstype.uint8,
                      shape=[640, 480, 3])  # 921600 bytes (a bit less than 1 MB per image)
    schema.add_column('label', de_type=mstype.uint8, shape=[1])

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # User-created sampler here
    ds1 = ds.RandomDataset(schema=schema, num_parallel_workers=4, cache=some_cache)
    ds1 = ds1.repeat()

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in ds1.create_dict_iterator():
            num_iter += 1
    assert "Network error. Cache server with port 50052 is unreachable. Make sure the server is running." in \
        str(e.value)

    logger.info("test_cache_nomap_server_stop Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_epoch_ctrl1():
    """
    Test using two-loops method to run several epochs

     Map(decode)
         |
       cache
         |
      TFRecord
    """
    logger.info("Test cache nomap epoch ctrl1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op)

    num_epoch = 5
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch)
    epoch_count = 0
    for _ in range(num_epoch):
        row_count = 0
        for _ in iter1:
            row_count += 1
        logger.info("Number of data in ds1: {} ".format(row_count))
        assert row_count == 3
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_epoch_ctrl1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_epoch_ctrl2():
    """
    Test using two-loops method with infinite epochs

       cache
         |
     Map(decode)
         |
      TFRecord
    """
    logger.info("Test cache nomap epoch ctrl2")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)

    num_epoch = 5
    # iter1 will always assume there is a next epoch and never shut down
    iter1 = ds1.create_dict_iterator()
    epoch_count = 0
    for _ in range(num_epoch):
        row_count = 0
        for _ in iter1:
            row_count += 1
        logger.info("Number of data in ds1: {} ".format(row_count))
        assert row_count == 3
        epoch_count += 1
    assert epoch_count == num_epoch

    # manually stop the iterator
    iter1.stop()
    logger.info("test_cache_nomap_epoch_ctrl2 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_epoch_ctrl3():
    """
    Test using two-loops method with infinite epochs over repeat

       repeat
         |
     Map(decode)
         |
       cache
         |
      TFRecord
    """
    logger.info("Test cache nomap epoch ctrl3")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op)
    ds1 = ds1.repeat(2)

    num_epoch = 5
    # iter1 will always assume there is a next epoch and never shut down
    iter1 = ds1.create_dict_iterator()
    epoch_count = 0
    for _ in range(num_epoch):
        row_count = 0
        for _ in iter1:
            row_count += 1
        logger.info("Number of data in ds1: {} ".format(row_count))
        assert row_count == 6
        epoch_count += 1
    assert epoch_count == num_epoch

    # rely on the garbage collector to destroy iter1
    logger.info("test_cache_nomap_epoch_ctrl3 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_epoch_ctrl4():
    """
    Test using two-loops method with repeat under cache

       cache
         |
     Map(decode)
         |
       repeat
         |
      TFRecord
    """
    logger.info("Test cache nomap epoch ctrl4")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    ds1 = ds1.repeat(2)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)

    num_epoch = 5
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch)
    epoch_count = 0
    for _ in range(num_epoch):
        row_count = 0
        for _ in iter1:
            row_count += 1
        logger.info("Number of data in ds1: {} ".format(row_count))
        assert row_count == 6
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_epoch_ctrl4 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_multiple_cache1():
    """
    Test multiple caches in the same python script

       cache                 cache
         |                     |
     Map(decode)           Map(decode)
         |                     |
    TFRecord(train)       TFRecord(eval)
    """
    logger.info("Test cache nomap multiple cache 1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    train_cache = ds.DatasetCache(session_id=session_id, size=0)
    eval_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 12 records in it
    train_dataset = ds.TFRecordDataset(TRAIN_DATA_DIR, TRAIN_SCHEMA_DIR)
    decode_op = c_vision.Decode()
    train_dataset = train_dataset.map(input_columns=["image"], operations=decode_op, cache=train_cache)

    # This dataset has 3 records in it only
    eval_dataset = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    eval_dataset = eval_dataset.map(input_columns=["image"], operations=decode_op, cache=eval_cache)

    num_epoch = 5
    train_iter = train_dataset.create_dict_iterator(num_epochs=num_epoch)
    eval_iter = eval_dataset.create_dict_iterator(num_epochs=num_epoch)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in train_iter]) == 12
        assert sum([1 for _ in eval_iter]) == 3
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_multiple_cache1 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_multiple_cache2():
    """
    Test multiple caches in the same python script

       cache
         |
     Map(decode)           cache
         |                   |
    TFRecord(image)     TFRecord(text)
    """
    logger.info("Test cache nomap multiple cache 2")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    image_cache = ds.DatasetCache(session_id=session_id, size=0)
    text_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    image_dataset = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    image_dataset = image_dataset.map(input_columns=["image"], operations=decode_op, cache=image_cache)

    # This dataset has 3 records in it only
    text_dataset = ds.TFRecordDataset(TEXT_TF_DATA_DIR, SCHEMA_DIR2, cache=text_cache)

    num_epoch = 5
    image_iter = image_dataset.create_dict_iterator(num_epochs=num_epoch)
    text_iter = text_dataset.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        row_count = 0
        for _, _ in itertools.zip_longest(image_iter, text_iter):
            row_count += 1
        assert row_count == 3
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_multiple_cache2 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_multiple_cache3():
    """
    Test multiple caches in the same python script

       cache                 cache
         |                     |
     Map(decode)           Map(decode)
         |                     |
      TFRecord             ImageFolder
    """
    logger.info("Test cache nomap multiple cache 3")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    tf_cache = ds.DatasetCache(session_id=session_id, size=0)
    image_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    tf_dataset = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    tf_dataset = tf_dataset.map(input_columns=["image"], operations=decode_op, cache=tf_cache)

    # This DATA_DIR only has 2 images in it
    image_dataset = ds.ImageFolderDataset(dataset_dir=IMAGE_FOLDER_DATA_DIR)
    image_dataset = image_dataset.map(input_columns=["image"], operations=decode_op, cache=image_cache)

    num_epoch = 5
    tf_iter = tf_dataset.create_dict_iterator(num_epochs=num_epoch)
    image_iter = image_dataset.create_dict_iterator(num_epochs=num_epoch)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in tf_iter]) == 3
        assert sum([1 for _ in image_iter]) == 2
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_multiple_cache3 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_multiple_cache_train():
    """
    Test multiple caches in different python scripts. This test case is going to run concurrently with
    test_cache_nomap_multiple_cache_eval.

       cache
         |
     Map(decode)
         |
    TFRecord(train)
    """
    logger.info("Test cache nomap multiple cache train")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    train_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 12 records in it
    train_dataset = ds.TFRecordDataset(TRAIN_DATA_DIR, TRAIN_SCHEMA_DIR)
    decode_op = c_vision.Decode()
    train_dataset = train_dataset.map(input_columns=["image"], operations=decode_op, cache=train_cache)

    num_epoch = 5
    train_iter = train_dataset.create_dict_iterator(num_epochs=num_epoch)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in train_iter]) == 12
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_multiple_cache_train Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_multiple_cache_eval():
    """
    Test multiple caches in different python scripts. This test case is going to run concurrently with
    test_cache_nomap_multiple_cache_train.

       cache
         |
     Map(decode)
         |
    TFRecord(eval)
    """
    logger.info("Test cache nomap multiple cache eval")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    eval_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset only has 3 records in it
    eval_dataset = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    eval_dataset = eval_dataset.map(input_columns=["image"], operations=decode_op, cache=eval_cache)

    num_epoch = 5
    eval_iter = eval_dataset.create_dict_iterator(num_epochs=num_epoch)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in eval_iter]) == 3
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_multiple_cache_eval Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_clue1():
    """
    A clue dataset (a non-mappable dataset) with a cache over it just after the leaf.
    In this one, the clue dataset will be given a sharding configuration; however, since a cache is
    used, the tree prepare should undo the sharding configuration and instead, a distributed
    sampler will be chosen with the same shard config.

       Cache
         |
        CLUE
    """
    logger.info("Test cache nomap clue 1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # With only 3 records sharded into 3, we expect only 1 record returned for this shard
    # However, the sharding will be done by the sampler, not by the clue leaf node
    # In this case, it is a row-based sharding, not the file-based sharding that would happen if
    # there were no cache.
    ds1 = ds.CLUEDataset(CLUE_DATA_DIR, task='AFQMC', usage='train', num_shards=3, shard_id=1, cache=some_cache)

    num_epoch = 4
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in iter1]) == 1
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_clue1 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_clue2():
    """
    A clue dataset (a non-mappable dataset) with a cache over it after map.
    In this one, a num_samples argument is given.

       Cache
         |
    Map(lambda x: x)
         |
        CLUE
    """
    logger.info("Test cache nomap clue 2")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    ds1 = ds.CLUEDataset(CLUE_DATA_DIR, task='AFQMC', usage='train', num_samples=2)
    ds1 = ds1.map((lambda x: x), ["label"], cache=some_cache)

    num_epoch = 4
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in iter1]) == 2
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_clue2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_csv1():
    """
    A csv dataset (a non-mappable dataset) with a cache over it just after the leaf.
    In this one, the csv dataset will be given a sharding configuration; however, since a cache is
    used, the tree prepare should undo the sharding configuration and instead, a distributed
    sampler will be chosen with the same shard config.

       Cache
         |
        CSV
    """
    logger.info("Test cache nomap csv 1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # With only 3 records sharded into 3, we expect only 1 record returned for this shard
    # However, the sharding will be done by the sampler, not by the csv leaf node
    # In this case, it is a row-based sharding, not the file-based sharding that would happen if
    # there were no cache.
    ds1 = ds.CSVDataset(CSV_DATA_DIR, column_defaults=["1", "2", "3", "4"],
                        column_names=['col1', 'col2', 'col3', 'col4'], num_shards=3, shard_id=1, cache=some_cache)

    num_epoch = 4
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in iter1]) == 1
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_csv1 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_csv2():
    """
    A csv dataset (a non-mappable dataset) with a cache over it after map.
    In this one, a num_samples argument is given.

       Cache
         |
    Map(lambda x: x)
         |
        CSV
    """
    logger.info("Test cache nomap csv 2")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    ds1 = ds.CSVDataset(CSV_DATA_DIR, column_defaults=["1", "2", "3", "4"],
                        column_names=['col1', 'col2', 'col3', 'col4'], num_samples=2)
    ds1 = ds1.map((lambda x: x), ["col1"], cache=some_cache)

    num_epoch = 4
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in iter1]) == 2
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_csv2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_textfile1():
    """
    A text file dataset (a non-mappable dataset) with a cache over it just after the leaf.
    In this one, the text file dataset will be given a sharding configuration; however, since a cache is
    used, the tree prepare should undo the sharding configuration and instead, a distributed
    sampler will be chosen with the same shard config.

       Cache
         |
      TextFile
    """
    logger.info("Test cache nomap textfile 1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # With only 3 records sharded into 3, we expect only 1 record returned for this shard
    # However, the sharding will be done by the sampler, not by the text file leaf node
    # In this case, it is a row-based sharding, not the file-based sharding that would happen if
    # there were no cache.
    ds1 = ds.TextFileDataset(TEXT_FILE_DATA_DIR, num_shards=3, shard_id=1, cache=some_cache)

    num_epoch = 4
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in iter1]) == 1
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_textfile1 Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_textfile2():
    """
    A text file dataset (a non-mappable dataset) with a cache over it after map.
    In this one, a num_samples argument is given.

       Cache
         |
    Map(tokenizer)
         |
      TextFile
    """

    def my_tokenizer(line):
        words = line.split()
        if not words:
            return [""]
        return words

    logger.info("Test cache nomap textfile 2")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    ds1 = ds.TextFileDataset(TEXT_FILE_DATA_DIR, num_samples=2)
    tokenizer = text.PythonTokenizer(my_tokenizer)
    ds1 = ds1.map(operations=tokenizer, cache=some_cache)

    num_epoch = 4
    iter1 = ds1.create_dict_iterator(num_epochs=num_epoch, output_numpy=True)

    epoch_count = 0
    for _ in range(num_epoch):
        assert sum([1 for _ in iter1]) == 2
        epoch_count += 1
    assert epoch_count == num_epoch

    logger.info("test_cache_nomap_textfile2 Ended.\n")
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_nested_repeat():
    """
    Test cache on pipeline with nested repeat ops

       Repeat
         |
       Cache
         |
     Map(decode)
         |
       Repeat
         |
      TFRecord
    """
    logger.info("Test cache nomap nested repeat")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR)
    decode_op = c_vision.Decode()
    ds1 = ds1.repeat(4)
    ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache)
    ds1 = ds1.repeat(2)

    num_iter = 0
    for _ in ds1.create_dict_iterator(num_epochs=1):
        logger.info("get data from dataset")
        num_iter += 1

    logger.info("Number of data in ds1: {} ".format(num_iter))
    assert num_iter == 24
    logger.info('test_cache_nomap_nested_repeat Ended.\n')


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_get_repeat_count():
    """
    Test get_repeat_count() for a pipeline with cache and nested repeat ops

       Cache
         |
     Map(decode)
         |
       Repeat
         |
      TFRecord
    """
    logger.info("Test cache nomap get_repeat_count")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, columns_list=["image"], shuffle=False)
    ds1 = ds1.repeat(4)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache)

    repeat_count = ds1.get_repeat_count()
    logger.info("repeat_count: {}".format(repeat_count))
    assert repeat_count == 4

    num_iter = 0
    for _ in ds1.create_dict_iterator(num_epochs=1):
        logger.info("get data from dataset")
        num_iter += 1
    assert num_iter == 12
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_long_file_list():
    """
    Test cache after TFRecord with a long list of files as arguments

       Cache
         |
      TFRecord
    """
    logger.info("Test cache nomap long file list")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=1)

    ds1 = ds.TFRecordDataset([DATA_DIR[0] for _ in range(0, 1000)], SCHEMA_DIR, columns_list=["image"],
                             cache=some_cache)

    with pytest.raises(RuntimeError) as e:
        sum([1 for _ in ds1])
    assert "Out of memory" in str(e.value)

    logger.info("test_cache_nomap_long_file_list Ended.\n")


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_failure1():
    """
    Test nested cache (failure)

       Repeat
         |
       Cache
         |
     Map(decode)
         |
       Cache
         |
      TFRecord
    """
    logger.info("Test cache nomap failure 1")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    # This dataset has 3 records in it only
    ds1 = ds.TFRecordDataset(DATA_DIR, SCHEMA_DIR, cache=some_cache)
    decode_op = c_vision.Decode()
    ds1 = ds1.map(operations=decode_op, input_columns=["image"], cache=some_cache)
    ds1 = ds1.repeat(4)

    with pytest.raises(RuntimeError) as e:
        ds1.get_batch_size()
    assert "Nested cache operations" in str(e.value)

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in ds1.create_dict_iterator(num_epochs=1):
            num_iter += 1
    assert "Nested cache operations" in str(e.value)

    assert num_iter == 0
    logger.info('test_cache_nomap_failure1 Ended.\n')
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_failure2():
    """
    Test zip under cache (failure)

       repeat
         |
       Cache
         |
     Map(decode)
         |
        Zip
       |    |
    Random  Random
    """
    logger.info("Test cache nomap failure 2")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    schema = ds.Schema()
    schema.add_column('image', de_type=mstype.uint8,
                      shape=[640, 480, 3])  # 921600 bytes (a bit less than 1 MB per image)
    schema.add_column('label', de_type=mstype.uint8, shape=[1])

    ds1 = ds.RandomDataset(schema=schema)
    ds2 = ds.RandomDataset(schema=schema)
    dsz = ds.zip((ds1, ds2))
    decode_op = c_vision.Decode()
    dsz = dsz.map(input_columns=["image"], operations=decode_op, cache=some_cache)
    dsz = dsz.repeat(4)

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in dsz.create_dict_iterator():
            num_iter += 1
    assert "ZipNode is not supported as a descendant operator under a cache" in str(e.value)

    assert num_iter == 0
    logger.info('test_cache_nomap_failure2 Ended.\n')


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_failure3():
    """
    Test batch under cache (failure)

       repeat
         |
       Cache
         |
     Map(resize)
         |
       Batch
         |
        Clue
    """
    logger.info("Test cache nomap failure 3")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    ds1 = ds.CLUEDataset(CLUE_DATA_DIR, task='AFQMC', usage='train')
    ds1 = ds1.batch(2)
    resize_op = c_vision.Resize((224, 224))
    ds1 = ds1.map(input_columns=["image"], operations=resize_op, cache=some_cache)
    ds1 = ds1.repeat(4)

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in ds1.create_dict_iterator():
            num_iter += 1
    assert "BatchNode is not supported as a descendant operator under a cache" in str(e.value)

    assert num_iter == 0
    logger.info('test_cache_nomap_failure3 Ended.\n')
@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_failure4():
    """
    Test filter under cache (failure)

       repeat
         |
       Cache
         |
     Map(decode)
         |
       Filter
         |
        CSV
    """
    logger.info("Test cache nomap failure 4")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    ds1 = ds.CSVDataset(CSV_DATA_DIR, column_defaults=["1", "2", "3", "4"],
                        column_names=['col1', 'col2', 'col3', 'col4'])
    ds1 = ds1.filter(predicate=lambda data: data < 11, input_columns=["label"])
    decode_op = c_vision.Decode()
    ds1 = ds1.map(input_columns=["image"], operations=decode_op, cache=some_cache)
    ds1 = ds1.repeat(4)

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in ds1.create_dict_iterator():
            num_iter += 1
    assert "FilterNode is not supported as a descendant operator under a cache" in str(e.value)

    assert num_iter == 0
    logger.info('test_cache_nomap_failure4 Ended.\n')


@pytest.mark.skipif(os.environ.get('RUN_CACHE_TEST') != 'TRUE', reason="Require to bring up cache server")
def test_cache_nomap_failure5():
    """
    Test Map containing random operation under cache (failure)

       repeat
         |
       Cache
         |
    Map(decode, randomCrop)
         |
      TextFile
    """
    logger.info("Test cache nomap failure 5")
    if "SESSION_ID" in os.environ:
        session_id = int(os.environ['SESSION_ID'])
    else:
        raise RuntimeError("Testcase requires SESSION_ID environment variable")

    some_cache = ds.DatasetCache(session_id=session_id, size=0)

    data = ds.TextFileDataset(TEXT_FILE_DATA_DIR)
    random_crop_op = c_vision.RandomCrop([512, 512], [200, 200, 200, 200])
    decode_op = c_vision.Decode()

    data = data.map(input_columns=["image"], operations=decode_op)
    data = data.map(input_columns=["image"], operations=random_crop_op, cache=some_cache)
    data = data.repeat(4)

    with pytest.raises(RuntimeError) as e:
        num_iter = 0
        for _ in data.create_dict_iterator():
            num_iter += 1
    assert "MapNode containing random operation is not supported as a descendant of cache" in str(e.value)

    assert num_iter == 0
    logger.info('test_cache_nomap_failure5 Ended.\n')
if __name__ == '__main__':
    # This is just a list of tests, don't try to run these tests with 'python test_cache_nomap.py'
    # since cache server is required to be brought up first
    test_cache_nomap_basic1()
    test_cache_nomap_basic2()
    test_cache_nomap_basic3()
    test_cache_nomap_basic4()
    test_cache_nomap_basic5()
    test_cache_nomap_basic6()
    test_cache_nomap_basic7()
    test_cache_nomap_basic8()
    test_cache_nomap_basic9()
    test_cache_nomap_allowed_share1()
    test_cache_nomap_allowed_share2()
    test_cache_nomap_allowed_share3()
    test_cache_nomap_allowed_share4()
    test_cache_nomap_disallowed_share1()
    test_cache_nomap_running_twice1()
    test_cache_nomap_running_twice2()
    test_cache_nomap_extra_small_size1()
    test_cache_nomap_extra_small_size2()
    test_cache_nomap_parallel_pipeline1(shard=0)
    test_cache_nomap_parallel_pipeline2(shard=1)
    test_cache_nomap_parallel_workers()
    test_cache_nomap_server_workers_1()
    test_cache_nomap_server_workers_100()
    test_cache_nomap_num_connections_1()
    test_cache_nomap_num_connections_100()
    test_cache_nomap_prefetch_size_1()
    test_cache_nomap_prefetch_size_100()
    test_cache_nomap_to_device()
    test_cache_nomap_session_destroy()
    test_cache_nomap_server_stop()
    test_cache_nomap_epoch_ctrl1()
    test_cache_nomap_epoch_ctrl2()
    test_cache_nomap_epoch_ctrl3()
    test_cache_nomap_epoch_ctrl4()
    test_cache_nomap_multiple_cache1()
    test_cache_nomap_multiple_cache2()
    test_cache_nomap_multiple_cache3()
    test_cache_nomap_multiple_cache_train()
    test_cache_nomap_multiple_cache_eval()
    test_cache_nomap_clue1()
    test_cache_nomap_clue2()
    test_cache_nomap_csv1()
    test_cache_nomap_csv2()
    test_cache_nomap_textfile1()
    test_cache_nomap_textfile2()
    test_cache_nomap_nested_repeat()
    test_cache_nomap_get_repeat_count()
    test_cache_nomap_long_file_list()
    test_cache_nomap_failure1()
    test_cache_nomap_failure2()
    test_cache_nomap_failure3()
    test_cache_nomap_failure4()
    test_cache_nomap_failure5()

# File: qaoa/util/number_format.py (repo: gregvw/pyQAOA, license: BSD-3-Clause)
from decimal import Decimal
def decim(string,digits):
return ('{0:.'+str(digits)+'E}').format(Decimal(str(string)))
def spaced_decim(string,digits,width):
d = decim(string,digits)
return d + ' '*(width-len(d))
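The two helpers above pad scientific-notation output to a fixed column width. A small usage sketch (the helpers are redefined so the snippet is self-contained; the sample values are illustrative):

```python
from decimal import Decimal

def decim(string, digits):
    # Scientific notation with `digits` fractional digits, via Decimal.
    return ('{0:.' + str(digits) + 'E}').format(Decimal(str(string)))

def spaced_decim(string, digits, width):
    # Right-pad with spaces so columns of numbers line up.
    d = decim(string, digits)
    return d + ' ' * (width - len(d))

# Every cell comes out exactly `width` characters wide.
row = [spaced_decim(v, 3, 14) for v in (3.14159, -0.00027, 6.022e23)]
```

Note that `width` must be at least as long as the formatted string; otherwise the padding expression collapses to the empty string and the cell overflows its column.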
| 22.5 | 65 | 0.662222 | 32 | 225 | 4.625 | 0.5 | 0.222973 | 0.344595 | 0.310811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005208 | 0.146667 | 225 | 9 | 66 | 25 | 0.765625 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.166667 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
be0fc7cce66bea5a377aa0d4385311ebf321ef39 | 662 | py | Python | my_cars.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | my_cars.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | my_cars.py | yiyidhuang/PythonCrashCrouse2nd | 3512f9ab8fcf32c6145604a37e2a62feddf174d1 | [
"MIT"
] | null | null | null | # from car import Car, ElectricCar
# my_beetle = Car('volkswagen', 'beetle', 2019)
# print(my_beetle.get_descriptive_name())
# my_tesla = ElectricCar('tesla', 'roadster', 2019)
# print(my_tesla.get_descriptive_name())
# import car
# my_beetle = car.Car('volkswagen', 'beetle', 2019)
# print(my_beetle.get_descriptive_name())
# my_tesla = car.ElectricCar('tesla', 'roadster', 2019)
# print(my_tesla.get_descriptive_name())
from car import Car
from electric_car import ElectricCar
my_beetle = Car('volkswagen', 'beetle', 2019)
print(my_beetle.get_descriptive_name())
my_tesla = ElectricCar('tesla', 'roadster', 2019)
print(my_tesla.get_descriptive_name())
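my_cars.py expects `car.py` and `electric_car.py` to exist alongside it, but those modules are not part of this file. A minimal sketch of what they might contain (the class layout and the `battery_size` default are assumptions, following the usual Python Crash Course structure):

```python
class Car:
    """Minimal sketch of the Car class that my_cars.py imports."""
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year

    def get_descriptive_name(self):
        # Title-case the combined year/make/model for display.
        return f"{self.year} {self.make} {self.model}".title()

class ElectricCar(Car):
    """Hypothetical electric variant adding a battery size."""
    def __init__(self, make, model, year, battery_size=75):
        super().__init__(make, model, year)
        self.battery_size = battery_size

my_beetle = Car('volkswagen', 'beetle', 2019)
print(my_beetle.get_descriptive_name())  # 2019 Volkswagen Beetle
```

With classes like these, both import styles in the script above (plain `import car` versus `from car import Car`) print the same title-cased descriptions.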
| 26.48 | 55 | 0.749245 | 90 | 662 | 5.233333 | 0.166667 | 0.101911 | 0.140127 | 0.146497 | 0.838641 | 0.838641 | 0.838641 | 0.838641 | 0.838641 | 0.838641 | 0 | 0.040472 | 0.10423 | 662 | 24 | 56 | 27.583333 | 0.753794 | 0.60574 | 0 | 0 | 0 | 0 | 0.116 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
be2b959268e4f5094713c8caf421e868afcc94b4 | 45,347 | py | Python | appengine/swarming/server/lease_management_test.py | Swift1313/luci-py | 0a4fdfc25f89833026be6a8b29c0a27b8f3c5fc4 | [
"Apache-2.0"
] | null | null | null | appengine/swarming/server/lease_management_test.py | Swift1313/luci-py | 0a4fdfc25f89833026be6a8b29c0a27b8f3c5fc4 | [
"Apache-2.0"
] | null | null | null | appengine/swarming/server/lease_management_test.py | Swift1313/luci-py | 0a4fdfc25f89833026be6a8b29c0a27b8f3c5fc4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright 2015 The LUCI Authors. All rights reserved.
# Use of this source code is governed under the Apache License, Version 2.0
# that can be found in the LICENSE file.
"""Unit tests for lease_management.py."""
import datetime
import json
import logging
import sys
import unittest
import test_env
test_env.setup_test_env()
from google.appengine.ext import ndb
from protorpc.remote import protojson
import webtest
from components import machine_provider
from components import utils
from test_support import test_case
import bot_management
import lease_management
from proto.config import bots_pb2
def rpc_to_json(rpc_message):
"""Converts the given RPC message to a POSTable JSON dict.
Args:
rpc_message: A protorpc.message.Message instance.
Returns:
A dict parsed from the JSON encoding of the message.
"""
return json.loads(protojson.encode_message(rpc_message))
class TestCase(test_case.TestCase):
def setUp(self):
super(TestCase, self).setUp()
self.mock_machine_types({})
def mock_machine_types(self, cfg):
self.mock(
lease_management.bot_groups_config,
'fetch_machine_types',
lambda: cfg,
)
class AssociateBotIdTest(TestCase):
"""Tests for lease_management._associate_bot_id."""
def test_hostname_unset(self):
key = lease_management.MachineLease().put()
lease_management._associate_bot_id(key, 'id')
self.assertFalse(key.get().bot_id)
self.assertFalse(key.get().hostname)
def test_hostname_mismatch(self):
key = lease_management.MachineLease(hostname='id1').put()
lease_management._associate_bot_id(key, 'id2')
self.assertFalse(key.get().bot_id)
self.assertEqual(key.get().hostname, 'id1')
def test_bot_id_mismatch(self):
key = lease_management.MachineLease(bot_id='id1', hostname='id1').put()
lease_management._associate_bot_id(key, 'id2')
self.assertEqual(key.get().bot_id, 'id1')
self.assertEqual(key.get().hostname, 'id1')
def test_hostname_set(self):
key = lease_management.MachineLease(hostname='id1').put()
lease_management._associate_bot_id(key, 'id1')
self.assertEqual(key.get().bot_id, 'id1')
self.assertEqual(key.get().hostname, 'id1')
def test_bot_id_match(self):
key = lease_management.MachineLease(bot_id='id1', hostname='id1').put()
lease_management._associate_bot_id(key, 'id1')
self.assertEqual(key.get().bot_id, 'id1')
self.assertEqual(key.get().hostname, 'id1')
class CheckForConnectionTest(TestCase):
"""Tests for lease_management._check_for_connection."""
def test_not_connected(self):
machine_lease = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='req-id',
hostname='bot-id',
instruction_ts=utils.utcnow(),
machine_type=ndb.Key(lease_management.MachineType, 'mt'),
)
machine_lease.put()
bot_management.bot_event(
event_type='bot_leased',
bot_id=machine_lease.hostname,
external_ip=None,
authenticated_as=None,
dimensions=None,
state=None,
version=None,
quarantined=False,
maintenance_msg=None,
task_id='',
task_name=None,
)
lease_management._check_for_connection(machine_lease)
self.failUnless(bot_management.get_info_key(machine_lease.bot_id).get())
self.failUnless(machine_lease.key.get().client_request_id)
self.failIf(machine_lease.key.get().connection_ts)
def test_connected(self):
machine_lease = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='req-id',
hostname='bot-id',
instruction_ts=utils.utcnow(),
machine_type=ndb.Key(lease_management.MachineType, 'mt'),
)
machine_lease.put()
bot_management.bot_event(
event_type='bot_leased',
bot_id=machine_lease.hostname,
external_ip=None,
authenticated_as=None,
dimensions=None,
state=None,
version=None,
quarantined=False,
maintenance_msg=None,
task_id='',
task_name=None,
)
bot_management.bot_event(
event_type='bot_connected',
bot_id=machine_lease.hostname,
external_ip=None,
authenticated_as=None,
dimensions=None,
state=None,
version=None,
quarantined=False,
maintenance_msg=None,
task_id='',
task_name=None,
)
lease_management._check_for_connection(machine_lease)
self.failUnless(bot_management.get_info_key(machine_lease.bot_id).get())
self.failUnless(machine_lease.key.get().client_request_id)
self.failUnless(machine_lease.key.get().connection_ts)
def test_connected_earlier_than_instructed(self):
bot_management.bot_event(
event_type='bot_connected',
bot_id='bot-id',
external_ip=None,
authenticated_as=None,
dimensions=None,
state=None,
version=None,
quarantined=False,
maintenance_msg=None,
task_id='',
task_name=None,
)
machine_lease = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='req-id',
hostname='bot-id',
instruction_ts=utils.utcnow(),
machine_type=ndb.Key(lease_management.MachineType, 'mt'),
)
machine_lease.put()
bot_management.bot_event(
event_type='bot_leased',
bot_id=machine_lease.hostname,
external_ip=None,
authenticated_as=None,
dimensions=None,
state=None,
version=None,
quarantined=False,
maintenance_msg=None,
task_id='',
task_name=None,
)
lease_management._check_for_connection(machine_lease)
self.failUnless(bot_management.get_info_key(machine_lease.bot_id).get())
self.failUnless(machine_lease.key.get().client_request_id)
self.failIf(machine_lease.key.get().connection_ts)
def test_missing(self):
self.mock(lease_management, 'release', lambda *args, **kwargs: True)
machine_lease = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='req-id',
hostname='bot-id',
instruction_ts=utils.utcnow(),
machine_type=ndb.Key(lease_management.MachineType, 'mt'),
)
machine_lease.put()
lease_management._check_for_connection(machine_lease)
self.failIf(bot_management.get_info_key(machine_lease.bot_id).get())
self.failIf(machine_lease.key.get().client_request_id)
self.failIf(machine_lease.key.get().connection_ts)
def test_dead(self):
def is_dead(_self, _now):
return True
self.mock(bot_management.BotInfo, 'is_dead', is_dead)
self.mock(lease_management, 'release', lambda *args, **kwargs: True)
machine_lease = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='req-id',
hostname='bot-id',
instruction_ts=utils.utcnow(),
machine_type=ndb.Key(lease_management.MachineType, 'mt'),
)
machine_lease.put()
bot_management.bot_event(
event_type='bot_leased',
bot_id=machine_lease.hostname,
external_ip=None,
authenticated_as=None,
dimensions=None,
state=None,
version=None,
quarantined=False,
maintenance_msg=None,
task_id='',
task_name=None,
)
lease_management._check_for_connection(machine_lease)
self.failIf(bot_management.get_info_key(machine_lease.bot_id).get())
self.failIf(machine_lease.key.get().client_request_id)
self.failIf(machine_lease.key.get().connection_ts)
class ComputeUtilizationTest(TestCase):
"""Tests for lease_management.cron_compute_utilization."""
APP_DIR = test_env.APP_DIR
def test_no_machine_provider_bots(self):
bots = [
]
def fetch_page(*_args, **_kwargs):
return bots, None
self.mock(lease_management.datastore_utils, 'fetch_page', fetch_page)
lease_management.MachineType(
id='machine-type',
target_size=1,
).put()
key = ndb.Key(lease_management.MachineTypeUtilization, 'machine-type')
self.assertEqual(0, lease_management.cron_compute_utilization())
self.failIf(key.get())
def test_machine_provider_bots(self):
ndb.get_context().set_cache_policy(lambda _: None)
now = utils.utcnow()
bots = [
bot_management.BotInfo(
key=bot_management.get_info_key('bot1'),
machine_type='machine-type-1',
last_seen_ts=now,
),
bot_management.BotInfo(
key=bot_management.get_info_key('bot2'),
machine_type='machine-type-1',
last_seen_ts=now,
),
bot_management.BotInfo(
key=bot_management.get_info_key('bot3'),
machine_type='machine-type-2',
last_seen_ts=now,
task_id='task',
),
bot_management.BotInfo(
key=bot_management.get_info_key('bot4'),
machine_type='machine-type-3',
last_seen_ts=now,
task_id='task',
),
bot_management.BotInfo(
key=bot_management.get_info_key('bot5'),
machine_type='machine-type-3',
last_seen_ts=now,
),
bot_management.BotInfo(
key=bot_management.get_info_key('bot6'),
machine_type='machine-type-3',
last_seen_ts=now,
task_id='task',
),
]
ndb.put_multi(bots)
obj1 = lease_management.MachineType(id='machine-type-1', target_size=2)
obj1.put()
obj2 = lease_management.MachineType(id='machine-type-2', target_size=1)
obj2.put()
obj3 = lease_management.MachineType(id='machine-type-3', target_size=1)
obj3.put()
self.assertEqual(3, lease_management.cron_compute_utilization())
u1 = ndb.Key(lease_management.MachineTypeUtilization,
obj1.key.string_id()).get()
self.assertEqual(u1.busy, 0)
self.assertEqual(u1.idle, 2)
self.failUnless(u1.last_updated_ts)
u2 = ndb.Key(lease_management.MachineTypeUtilization,
obj2.key.string_id()).get()
self.assertEqual(u2.busy, 1)
self.assertEqual(u2.idle, 0)
self.failUnless(u2.last_updated_ts)
u3 = ndb.Key(lease_management.MachineTypeUtilization,
obj3.key.string_id()).get()
self.assertEqual(u3.busy, 2)
self.assertEqual(u3.idle, 1)
self.failUnless(u3.last_updated_ts)
class DrainExcessTest(TestCase):
"""Tests for lease_management._drain_excess."""
def test_no_machine_types(self):
lease_management._drain_excess()
self.failIf(lease_management.MachineLease.query().count())
def test_nothing_to_drain(self):
key = lease_management.MachineType(
target_size=1,
).put()
key = lease_management.MachineLease(
id='%s-0' % key.id(),
machine_type=key,
).put()
lease_management._drain_excess()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.failIf(key.get().drained)
def test_drain_one(self):
key = lease_management.MachineType(
target_size=0,
).put()
key = lease_management.MachineLease(
id='%s-0' % key.id(),
machine_type=key,
).put()
lease_management._drain_excess()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.assertTrue(key.get().drained)
def test_drain_all(self):
key = lease_management.MachineType(
enabled=False,
target_size=3,
).put()
lease_management.MachineLease(
id='%s-0' % key.id(),
machine_type=key,
).put()
lease_management.MachineLease(
id='%s-1' % key.id(),
machine_type=key,
).put()
lease_management.MachineLease(
id='%s-2' % key.id(),
machine_type=key,
).put()
lease_management._drain_excess()
self.assertEqual(lease_management.MachineLease.query().count(), 3)
for machine_lease in lease_management.MachineLease.query():
self.assertTrue(machine_lease.drained)
def test_drain_batched(self):
key = lease_management.MachineType(
enabled=False,
target_size=2,
).put()
lease_management.MachineLease(
id='%s-0' % key.id(),
machine_type=key,
).put()
lease_management.MachineLease(
id='%s-1' % key.id(),
machine_type=key,
).put()
key = lease_management.MachineType(
enabled=False,
target_size=2,
).put()
lease_management.MachineLease(
id='%s-0' % key.id(),
machine_type=key,
).put()
lease_management.MachineLease(
id='%s-1' % key.id(),
machine_type=key,
).put()
key = lease_management.MachineType(
target_size=0,
).put()
lease_management.MachineLease(
id='%s-0' % key.id(),
machine_type=key,
).put()
# Choice of 2, 2, 1 above and 3 here ensures at least one batch contains
# MachineLease entities created for two different MachineTypes.
lease_management._drain_excess(max_concurrent=3)
self.assertEqual(lease_management.MachineLease.query().count(), 5)
for machine_lease in lease_management.MachineLease.query():
self.assertTrue(machine_lease.drained)
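Both batching tests above rely on `_drain_excess(max_concurrent=N)` grouping datastore operations into batches of at most N entities. The grouping itself can be sketched independently of ndb (a simplified stand-in, not the actual implementation, which issues asynchronous RPCs):

```python
def batched(items, max_concurrent):
    """Yield consecutive slices of at most max_concurrent items."""
    for i in range(0, len(items), max_concurrent):
        yield items[i:i + max_concurrent]

# Five MachineLease ids with max_concurrent=3 drain in two batches, and the
# first batch mixes leases created for two different MachineTypes.
leases = ['a-0', 'a-1', 'b-0', 'b-1', 'c-0']
batches = list(batched(leases, 3))
# batches == [['a-0', 'a-1', 'b-0'], ['b-1', 'c-0']]
```

This is why the tests pick sizes 2, 2, 1 with `max_concurrent=3`: the slicing cannot align with MachineType boundaries, so at least one batch must span two types.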
class EnsureBotInfoExistsTest(TestCase):
"""Tests for lease_management._ensure_bot_info_exists."""
def test_creates(self):
key = lease_management.MachineLease(
id='machine-type-1',
hostname='hostname',
lease_id='lease-id',
lease_expiration_ts=utils.utcnow(),
machine_type=ndb.Key(lease_management.MachineType, 'machine-type'),
).put()
lease_management._ensure_bot_info_exists(key.get())
machine_lease = key.get()
bot_info = bot_management.get_info_key(machine_lease.bot_id).get()
self.assertEqual(machine_lease.bot_id, machine_lease.hostname)
self.assertEqual(bot_info.lease_id, machine_lease.lease_id)
self.assertEqual(
bot_info.lease_expiration_ts, machine_lease.lease_expiration_ts)
self.assertTrue(bot_info.lease_expiration_ts)
self.assertEqual(
bot_info.leased_indefinitely, machine_lease.leased_indefinitely)
self.assertFalse(bot_info.leased_indefinitely)
self.assertEqual(bot_info.machine_type, machine_lease.machine_type.id())
self.assertEqual(bot_info.machine_lease, machine_lease.key.id())
def test_creates_indefinite(self):
key = lease_management.MachineLease(
id='machine-type-1',
hostname='hostname',
lease_id='lease-id',
leased_indefinitely=True,
machine_type=ndb.Key(lease_management.MachineType, 'machine-type'),
).put()
lease_management._ensure_bot_info_exists(key.get())
machine_lease = key.get()
bot_info = bot_management.get_info_key(machine_lease.bot_id).get()
self.assertEqual(machine_lease.bot_id, machine_lease.hostname)
self.assertEqual(bot_info.lease_id, machine_lease.lease_id)
self.assertEqual(
bot_info.lease_expiration_ts, machine_lease.lease_expiration_ts)
self.assertFalse(bot_info.lease_expiration_ts)
self.assertEqual(
bot_info.leased_indefinitely, machine_lease.leased_indefinitely)
self.assertTrue(bot_info.leased_indefinitely)
self.assertEqual(bot_info.machine_type, machine_lease.machine_type.id())
self.assertEqual(bot_info.machine_lease, machine_lease.key.id())
class EnsureEntitiesExistTest(TestCase):
"""Tests for lease_management._ensure_entities_exist."""
def test_no_machine_types(self):
lease_management._ensure_entities_exist()
self.failIf(lease_management.MachineLease.query().count())
def test_no_enabled_machine_types(self):
lease_management.MachineType(
enabled=False,
target_size=3,
).put()
lease_management._ensure_entities_exist()
self.failIf(lease_management.MachineLease.query().count())
def test_one_enabled_machine_type(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=[
'disk_gb:100',
'snapshot_labels:label1',
'snapshot_labels:label2',
],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.assertEqual(key.get().early_release_secs, 0)
self.assertEqual(key.get().lease_duration_secs, 1)
self.assertEqual(key.get().mp_dimensions.disk_gb, 100)
self.assertEqual(key.get().mp_dimensions.snapshot_labels[0], 'label1')
self.assertEqual(key.get().mp_dimensions.snapshot_labels[1], 'label2')
self.assertEqual(key.get().target_size, 1)
self.assertEqual(lease_management.MachineLease.query().count(), 1)
def test_two_enabled_machine_types(self):
self.mock_machine_types(
{
'machine-type-a': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type-a',
target_size=1,
),
'machine-type-b': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type-b',
target_size=1,
),
})
lease_management.MachineType(
id='machine-type-a',
target_size=1,
).put()
lease_management.MachineType(
id='machine-type-b',
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 2)
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-a-0'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-b-0'))
def test_one_machine_type_multiple_batches(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=5,
),
})
lease_management.MachineType(
id='machine-type',
target_size=5,
).put()
# Choice of 3 here and 5 above ensures MachineLeases are created in two
# batches of differing sizes.
lease_management._ensure_entities_exist(max_concurrent=3)
self.assertEqual(lease_management.MachineLease.query().count(), 5)
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-0'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-1'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-2'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-3'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-4'))
def test_three_machine_types_multiple_batches(self):
self.mock_machine_types(
{
'machine-type-a': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type-a',
target_size=2,
),
'machine-type-b': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type-b',
target_size=2,
),
'machine-type-c': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type-c',
target_size=1,
),
})
lease_management.MachineType(
id='machine-type-a',
target_size=2,
).put()
lease_management.MachineType(
id='machine-type-b',
target_size=2,
).put()
lease_management.MachineType(
id='machine-type-c',
target_size=1,
).put()
# Choice of 2, 2, 1 above and 3 here ensures at least one batch contains
# MachineLease entities created for two different MachineTypes.
lease_management._ensure_entities_exist(max_concurrent=3)
self.assertEqual(lease_management.MachineLease.query().count(), 5)
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-a-0'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-a-1'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-b-0'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-b-1'))
self.failUnless(lease_management.MachineLease.get_by_id('machine-type-c-0'))
def test_enable_machine_type(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
enabled=False,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.failUnless(key.get().enabled)
def test_update_machine_type(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=2,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
enabled=True,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.assertEqual(key.get().lease_duration_secs, 2)
def test_enable_and_update_machine_type(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=2,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
enabled=False,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.failUnless(key.get().enabled)
self.assertEqual(key.get().lease_duration_secs, 2)
def test_disable_machine_type(self):
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
enabled=True,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.failIf(key.get().enabled)
def test_machine_lease_exists_mismatched_not_updated(self):
key = lease_management.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
key = lease_management.MachineLease(
id='%s-0' % key.id(),
early_release_secs=1,
lease_duration_secs=2,
machine_type=key,
mp_dimensions=machine_provider.Dimensions(
disk_gb=200,
),
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.assertEqual(key.get().early_release_secs, 1)
self.assertEqual(key.get().lease_duration_secs, 2)
self.assertEqual(key.get().mp_dimensions.disk_gb, 200)
def test_machine_lease_exists_mismatched_updated(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
key = lease_management.MachineLease(
id='%s-0' % key.id(),
early_release_secs=1,
lease_duration_secs=2,
lease_expiration_ts=utils.utcnow(),
machine_type=key,
mp_dimensions=machine_provider.Dimensions(
disk_gb=200,
),
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.assertEqual(key.get().early_release_secs, 0)
self.assertEqual(key.get().lease_duration_secs, 1)
self.assertEqual(key.get().mp_dimensions.disk_gb, 100)
def test_machine_lease_exists_mismatched_updated_to_indefinite(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
lease_indefinitely=True,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
lease_indefinitely=True,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
key = lease_management.MachineLease(
id='%s-0' % key.id(),
early_release_secs=1,
lease_duration_secs=2,
lease_expiration_ts=utils.utcnow(),
machine_type=key,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.assertFalse(key.get().early_release_secs)
self.assertFalse(key.get().lease_duration_secs)
self.assertTrue(key.get().lease_indefinitely)
self.assertFalse(key.get().drained)
def test_machine_lease_exists_mismatched_updated_to_finite(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
),
})
key = lease_management.MachineType(
id='machine-type',
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
key = lease_management.MachineLease(
id='%s-0' % key.id(),
lease_indefinitely=True,
leased_indefinitely=True,
machine_type=key,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.assertEqual(key.get().lease_duration_secs, 1)
self.assertFalse(key.get().lease_indefinitely)
self.assertTrue(key.get().drained)
def test_daily_schedule_resize(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
schedule=bots_pb2.Schedule(
daily=[bots_pb2.DailySchedule(
start='0:00',
end='1:00',
days_of_the_week=xrange(7),
target_size=3,
)],
),
),
})
self.mock_now(datetime.datetime(1969, 1, 1, 0, 30))
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 3)
self.assertEqual(key.get().target_size, 3)
def test_daily_schedule_resize_to_default(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
schedule=bots_pb2.Schedule(
daily=[bots_pb2.DailySchedule(
start='0:00',
end='1:00',
days_of_the_week=xrange(7),
target_size=3,
)],
),
),
})
self.mock_now(datetime.datetime(1969, 1, 1, 2))
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.assertEqual(lease_management.MachineLease.query().count(), 1)
self.assertEqual(key.get().target_size, 1)
def test_daily_schedule_resize_to_zero(self):
self.mock_machine_types(
{
'machine-type': bots_pb2.MachineType(
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=['disk_gb:100'],
name='machine-type',
target_size=1,
schedule=bots_pb2.Schedule(
daily=[bots_pb2.DailySchedule(
start='0:00',
end='1:00',
days_of_the_week=xrange(7),
target_size=0,
)],
),
),
})
self.mock_now(datetime.datetime(1969, 1, 1, 0, 30))
key = lease_management.MachineType(
id='machine-type',
early_release_secs=0,
lease_duration_secs=1,
mp_dimensions=machine_provider.Dimensions(
disk_gb=100,
),
target_size=1,
).put()
lease_management._ensure_entities_exist()
self.failIf(lease_management.MachineLease.query().count())
self.failIf(key.get().target_size)
class GetTargetSize(TestCase):
"""Tests for lease_management._get_target_size."""
def test_no_schedules(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule())
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 2), 2)
def test_wrong_day(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
daily=[bots_pb2.DailySchedule(
start='1:00',
end='2:00',
days_of_the_week=xrange(5),
target_size=3,
)],
))
now = datetime.datetime(2012, 1, 1, 1, 2)
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 2, now), 2)
def test_right_day(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
daily=[bots_pb2.DailySchedule(
start='1:00',
end='2:00',
days_of_the_week=xrange(7),
target_size=3,
)],
))
now = datetime.datetime(2012, 1, 1, 1, 2)
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 2, now), 3)
def test_no_utilization(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
load_based=[bots_pb2.LoadBased(
maximum_size=5,
minimum_size=3,
)],
))
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 4), 4)
def test_utilization(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
load_based=[bots_pb2.LoadBased(
maximum_size=6,
minimum_size=2,
)],
))
lease_management.MachineTypeUtilization(
id='mt',
busy=4,
idle=0,
).put()
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 3), 6)
def test_load_based_fallback(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
daily=[bots_pb2.DailySchedule(
start='1:00',
end='2:00',
days_of_the_week=xrange(5),
target_size=3,
)],
load_based=[bots_pb2.LoadBased(
maximum_size=6,
minimum_size=2,
)],
))
lease_management.MachineTypeUtilization(
id='mt',
busy=4,
idle=0,
).put()
now = datetime.datetime(2012, 1, 1, 1, 2)
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 3, now), 6)
def test_upper_bound(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
load_based=[bots_pb2.LoadBased(
maximum_size=4,
minimum_size=2,
)],
))
lease_management.MachineTypeUtilization(
id='mt',
busy=4,
idle=0,
).put()
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 3), 4)
def test_drop_dampening(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
load_based=[bots_pb2.LoadBased(
maximum_size=100,
minimum_size=1,
)],
))
lease_management.MachineTypeUtilization(
id='mt',
busy=60,
idle=20,
).put()
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 100, 50), 99)
def test_lower_bound(self):
config = bots_pb2.MachineType(schedule=bots_pb2.Schedule(
load_based=[bots_pb2.LoadBased(
maximum_size=4,
minimum_size=2,
)],
))
lease_management.MachineTypeUtilization(
id='mt',
busy=0,
idle=4,
).put()
self.assertEqual(
lease_management._get_target_size(config.schedule, 'mt', 1, 3), 2)
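The assertions in this class pin down the contract of `_get_target_size`: daily schedules take precedence, load-based scaling clamps to [minimum_size, maximum_size], growth is immediate but drops are dampened, and the default applies when nothing matches. One hypothetical policy consistent with all of the expectations above (the 1.5x busy multiplier and one-machine drop limit are reverse-engineered guesses, not the real algorithm in lease_management.py):

```python
from datetime import datetime

def target_size_sketch(now, daily, load_based, busy, current, default):
    # daily: iterable of (start_hm, end_hm, days_of_week, target_size).
    for start, end, days, size in daily:
        if now.weekday() in days and start <= (now.hour, now.minute) < end:
            return size
    # load_based: iterable of (minimum_size, maximum_size).
    for minimum, maximum in load_based:
        if busy is None:
            break  # no utilization recorded yet: fall back to the default
        desired = max(minimum, min(maximum, int(busy * 1.5)))
        if desired < current:
            desired = current - 1  # dampen drops to one machine per run
        return desired
    return default

# 2012-01-01 was a Sunday, so a Mon-Fri (days 0-4) schedule does not apply.
now = datetime(2012, 1, 1, 1, 2)
assert target_size_sketch(now, [((1, 0), (2, 0), range(5), 3)], [], None, 1, 2) == 2
assert target_size_sketch(now, [((1, 0), (2, 0), range(7), 3)], [], None, 1, 2) == 3
assert target_size_sketch(now, [], [(1, 100)], 60, 100, 50) == 99
```

The sketch reproduces every case above, including the fallback test where a non-matching daily window yields to load-based scaling.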
class ManageLeasedMachineTest(TestCase):
"""Tests for lease_management._manage_leased_machine."""
def test_creates_bot_id_and_sends_connection_instruction(self):
def _send_connection_instruction(machine_lease):
self.assertTrue(machine_lease)
self.mock(lease_management, '_send_connection_instruction',
_send_connection_instruction)
key = lease_management.MachineLease(
id='machine-lease',
client_request_id='request-id',
hostname='hostname',
lease_id='lease-id',
leased_indefinitely=True,
machine_type=lease_management.MachineType(
id='machine-type',
target_size=1,
).put(),
).put()
lease_management._manage_leased_machine(key.get())
self.assertTrue(key.get().bot_id)
self.assertEqual(key.get().bot_id, key.get().hostname)
def test_checks_for_connection(self):
def _check_for_connection(machine_lease):
self.assertTrue(machine_lease)
def cleanup_bot(*_args, **_kwargs):
self.fail('cleanup_bot called')
self.mock(lease_management, '_check_for_connection', _check_for_connection)
self.mock(lease_management, 'cleanup_bot', cleanup_bot)
key = lease_management.MachineLease(
id='machine-lease',
bot_id='hostname',
client_request_id='request-id',
hostname='hostname',
instruction_ts=utils.utcnow(),
lease_id='lease-id',
leased_indefinitely=True,
machine_type=lease_management.MachineType(
id='machine-type',
target_size=1,
).put(),
).put()
lease_management._manage_leased_machine(key.get())
self.assertTrue(key.get().client_request_id)
def test_cleans_up_bot(self):
key = lease_management.MachineLease(
id='machine-lease',
bot_id='hostname',
client_request_id='request-id',
connection_ts=utils.utcnow(),
hostname='hostname',
instruction_ts=utils.utcnow(),
lease_expiration_ts=utils.utcnow(),
lease_id='lease-id',
machine_type=lease_management.MachineType(
id='machine-type',
target_size=1,
).put(),
).put()
lease_management._manage_leased_machine(key.get())
self.assertFalse(key.get().client_request_id)
def test_releases(self):
key = lease_management.MachineLease(
id='machine-lease',
bot_id='hostname',
client_request_id='request-id',
connection_ts=utils.utcnow(),
early_release_secs=86400,
hostname='hostname',
instruction_ts=utils.utcnow(),
lease_expiration_ts=utils.utcnow() + datetime.timedelta(days=1),
lease_id='lease-id',
machine_type=lease_management.MachineType(
id='machine-type',
target_size=1,
).put(),
).put()
lease_management._manage_leased_machine(key.get())
self.assertTrue(key.get().termination_task)
def test_releases_drained_bot(self):
key = lease_management.MachineLease(
id='machine-lease',
bot_id='hostname',
client_request_id='request-id',
connection_ts=utils.utcnow(),
drained=True,
hostname='hostname',
instruction_ts=utils.utcnow(),
lease_expiration_ts=utils.utcnow() + datetime.timedelta(days=1),
lease_id='lease-id',
machine_type=lease_management.MachineType(
id='machine-type',
target_size=1,
).put(),
).put()
lease_management._manage_leased_machine(key.get())
self.assertTrue(key.get().termination_task)
def test_releases_drained_indefinite_bot(self):
key = lease_management.MachineLease(
id='machine-lease',
bot_id='hostname',
client_request_id='request-id',
connection_ts=utils.utcnow(),
drained=True,
hostname='hostname',
instruction_ts=utils.utcnow(),
leased_indefinitely=True,
lease_id='lease-id',
machine_type=lease_management.MachineType(
id='machine-type',
target_size=1,
).put(),
).put()
lease_management._manage_leased_machine(key.get())
self.assertTrue(key.get().termination_task)
class ScheduleLeaseManagementTest(TestCase):
"""Tests for lease_management.cron_schedule_lease_management."""
def test_none(self):
def enqueue_task(*_args, **_kwargs):
self.fail('enqueue_task called')
self.mock(utils, 'enqueue_task', enqueue_task)
self.assertEqual(0, lease_management.cron_schedule_lease_management())
def test_manageable(self):
def enqueue_task(*_args, **kwargs):
self.assertTrue(kwargs.get('params', {}).get('key'))
self.mock(utils, 'enqueue_task', enqueue_task)
lease_management.MachineLease().put()
self.assertEqual(1, lease_management.cron_schedule_lease_management())
def test_pending_connection(self):
def enqueue_task(*_args, **kwargs):
self.assertTrue(kwargs.get('params', {}).get('key'))
self.mock(utils, 'enqueue_task', enqueue_task)
key = lease_management.MachineLease(
client_request_id='request-id',
).put()
lease_management._log_lease_fulfillment(
key, 'request-id', 'hostname', 0, True, 'lease-id')
self.assertEqual(1, lease_management.cron_schedule_lease_management())
def test_leased(self):
def enqueue_task(*_args, **_kwargs):
self.fail('enqueue_task called')
self.mock(utils, 'enqueue_task', enqueue_task)
key = lease_management.MachineLease(
client_request_id='request-id',
).put()
lease_expiration_ts = utils.datetime_to_timestamp(
utils.utcnow()) / 1000 / 1000 + 3600
lease_management._log_lease_fulfillment(
key, 'request-id', 'hostname', lease_expiration_ts, False, 'lease-id')
lease_management._associate_bot_id(key, 'hostname')
lease_management._associate_connection_ts(key, utils.utcnow())
self.assertEqual(0, lease_management.cron_schedule_lease_management())
def test_expired(self):
def enqueue_task(*_args, **kwargs):
self.assertTrue(kwargs.get('params', {}).get('key'))
self.mock(utils, 'enqueue_task', enqueue_task)
key = lease_management.MachineLease(
client_request_id='request-id',
early_release_secs=3600,
).put()
lease_expiration_ts = utils.datetime_to_timestamp(
utils.utcnow()) / 1000 / 1000
lease_management._log_lease_fulfillment(
key, 'request-id', 'hostname', lease_expiration_ts, False, 'lease-id')
lease_management._associate_connection_ts(key, utils.utcnow())
self.assertEqual(1, lease_management.cron_schedule_lease_management())
def test_leased_indefinitely(self):
def enqueue_task(*_args, **_kwargs):
self.fail('enqueue_task called')
self.mock(utils, 'enqueue_task', enqueue_task)
key = lease_management.MachineLease(
client_request_id='request-id',
).put()
lease_management._log_lease_fulfillment(
key, 'request-id', 'hostname', 0, True, 'lease-id')
lease_management._associate_bot_id(key, 'hostname')
lease_management._associate_connection_ts(key, utils.utcnow())
self.assertEqual(0, lease_management.cron_schedule_lease_management())
def test_drained(self):
def enqueue_task(*_args, **kwargs):
self.assertTrue(kwargs.get('params', {}).get('key'))
self.mock(utils, 'enqueue_task', enqueue_task)
key = lease_management.MachineLease(
client_request_id='request-id',
).put()
lease_management._log_lease_fulfillment(
key, 'request-id', 'hostname', 0, True, 'lease-id')
lease_management._associate_connection_ts(key, utils.utcnow())
lease_management._drain_entity(key)
self.assertEqual(1, lease_management.cron_schedule_lease_management())
class SendConnectionInstructionTest(TestCase):
"""Tests for lease_management._send_connection_instruction."""
def test_empty(self):
def instruct_machine(*_args, **_kwargs):
return {}
self.mock(machine_provider, 'instruct_machine', instruct_machine)
key = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='request-id',
hostname='bot-id',
).put()
lease_management._send_connection_instruction(key.get())
self.assertFalse(key.get().instruction_ts)
def test_ok(self):
def instruct_machine(*_args, **_kwargs):
return {'client_request_id': 'request-id'}
self.mock(machine_provider, 'instruct_machine', instruct_machine)
key = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='request-id',
hostname='bot-id',
).put()
lease_management._send_connection_instruction(key.get())
self.assertTrue(key.get().instruction_ts)
def test_reclaimed(self):
def instruct_machine(*_args, **_kwargs):
return {'client_request_id': 'request-id', 'error': 'ALREADY_RECLAIMED'}
self.mock(machine_provider, 'instruct_machine', instruct_machine)
key = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='request-id',
hostname='bot-id',
).put()
lease_management._send_connection_instruction(key.get())
self.assertFalse(key.get().bot_id)
self.assertFalse(key.get().client_request_id)
self.assertFalse(key.get().hostname)
self.assertFalse(key.get().instruction_ts)
def test_error(self):
def instruct_machine(*_args, **_kwargs):
return {'client_request_id': 'request-id', 'error': 'error'}
self.mock(machine_provider, 'instruct_machine', instruct_machine)
key = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='request-id',
hostname='bot-id',
).put()
lease_management._send_connection_instruction(key.get())
self.assertTrue(key.get().bot_id)
self.assertTrue(key.get().client_request_id)
self.assertTrue(key.get().hostname)
self.assertFalse(key.get().instruction_ts)
def test_race(self):
key = lease_management.MachineLease(
bot_id='bot-id',
client_request_id='request-id',
hostname='bot-id',
).put()
def instruct_machine(*_args, **_kwargs):
# Mimic race condition by clearing the MachineLease.
# In reality this would happen concurrently elsewhere.
lease_management._clear_lease_request(key, key.get().client_request_id)
return {'client_request_id': 'request-id'}
self.mock(machine_provider, 'instruct_machine', instruct_machine)
lease_management._send_connection_instruction(key.get())
self.assertFalse(key.get().instruction_ts)
if __name__ == '__main__':
logging.basicConfig(
level=logging.DEBUG if '-v' in sys.argv else logging.ERROR)
unittest.main()
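The tests above convert the output of `utils.datetime_to_timestamp` (microseconds since the Unix epoch) to seconds by dividing by 1000 twice. A minimal self-contained sketch of that conversion, where `datetime_to_timestamp_us` is a hypothetical stand-in for the real helper:

```python
import datetime

EPOCH = datetime.datetime(1970, 1, 1)

def datetime_to_timestamp_us(dt):
    # Hypothetical stand-in for utils.datetime_to_timestamp:
    # microseconds elapsed since the Unix epoch.
    return int((dt - EPOCH).total_seconds() * 1000 * 1000)

# The tests divide by 1000 twice to get back to seconds, then add an
# offset (e.g. 3600) to place the lease expiration in the future.
dt = datetime.datetime(2020, 1, 1)
lease_expiration_ts = datetime_to_timestamp_us(dt) / 1000 / 1000 + 3600
```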
# ---------------------------------------------------------------------------
# File: Questionnaire/migrations/0001_initial.py
# Repo: AdityaKapoor74/Supervised_Categorization_Study (MIT license)
# ---------------------------------------------------------------------------

# Generated by Django 2.0.2 on 2020-07-20 10:53
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Classify_And_Learn_Samples_set1',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Classify and Learn Samples Set 1',
},
),
migrations.CreateModel(
name='Classify_And_Learn_Samples_set2',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Classify and Learn Samples Set 2',
},
),
migrations.CreateModel(
name='Classify_And_Learn_Samples_set3',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Classify and Learn Samples Set 3',
},
),
migrations.CreateModel(
name='Classify_And_Learn_Samples_set4',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Classify and Learn Samples Set 4',
},
),
migrations.CreateModel(
name='Classify_And_Learn_Samples_set5',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Classify and Learn Samples Set 5',
},
),
migrations.CreateModel(
name='Common_Features_Test_set1',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
],
options={
'verbose_name_plural': 'Common Features Test Samples Set 1',
},
),
migrations.CreateModel(
name='Common_Features_Test_set2',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
],
options={
'verbose_name_plural': 'Common Features Test Samples Set 2',
},
),
migrations.CreateModel(
name='Common_Features_Test_set3',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
],
options={
'verbose_name_plural': 'Common Features Test Samples Set 3',
},
),
migrations.CreateModel(
name='Common_Features_Test_set4',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
],
options={
'verbose_name_plural': 'Common Features Test Samples Set 4',
},
),
migrations.CreateModel(
name='Common_Features_Test_set5',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
],
options={
'verbose_name_plural': 'Common Features Test Samples Set 5',
},
),
migrations.CreateModel(
name='Observe_And_Learn_Samples_set1',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Observe and Learn Samples Set 1',
},
),
migrations.CreateModel(
name='Observe_And_Learn_Samples_set2',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Observe and Learn Samples Set 2',
},
),
migrations.CreateModel(
name='Observe_And_Learn_Samples_set3',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Observe and Learn Samples Set 3',
},
),
migrations.CreateModel(
name='Observe_And_Learn_Samples_set4',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Observe and Learn Samples Set 4',
},
),
migrations.CreateModel(
name='Observe_And_Learn_Samples_set5',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Observe and Learn Samples Set 5',
},
),
migrations.CreateModel(
name='Test_set1',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Test Samples Set 1',
},
),
migrations.CreateModel(
name='Test_set2',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Test Samples Set 2',
},
),
migrations.CreateModel(
name='Test_set3',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Test Samples Set 3',
},
),
migrations.CreateModel(
name='Test_set4',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Test Samples Set 4',
},
),
migrations.CreateModel(
name='Test_set5',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sample_img', models.ImageField(upload_to='images/')),
('sample_label', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'Test Samples Set 5',
},
),
migrations.CreateModel(
name='UserDetails',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_name', models.CharField(blank=True, default=None, max_length=100, null=True)),
('last_name', models.CharField(blank=True, default=None, max_length=100, null=True)),
('email', models.EmailField(max_length=254)),
('gender', models.CharField(blank=True, default=None, max_length=10, null=True)),
('city', models.CharField(blank=True, default=None, max_length=100, null=True)),
('country', models.CharField(blank=True, default=None, max_length=100, null=True)),
('age', models.IntegerField(blank=True, default=None, null=True)),
('set_num', models.CharField(blank=True, default=None, max_length=10, null=True)),
],
options={
'verbose_name_plural': 'User Details',
},
),
migrations.CreateModel(
name='UserResponse_Common_Features_Test_set1',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Common_Features_Test_set1')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Common Features Test phase set 1',
},
),
migrations.CreateModel(
name='UserResponse_Common_Features_Test_set2',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Common_Features_Test_set2')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Common Features Test phase set 2',
},
),
migrations.CreateModel(
name='UserResponse_Common_Features_Test_set3',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Common_Features_Test_set3')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Common Features Test phase set 3',
},
),
migrations.CreateModel(
name='UserResponse_Common_Features_Test_set4',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Common_Features_Test_set4')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Common Features Test phase set 4',
},
),
migrations.CreateModel(
name='UserResponse_Common_Features_Test_set5',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Common_Features_Test_set5')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Common Features Test phase set 5',
},
),
migrations.CreateModel(
name='UserResponse_Test_set1',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Test_set1')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Test phase set 1',
},
),
migrations.CreateModel(
name='UserResponse_Test_set2',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Test_set2')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Test phase set 2',
},
),
migrations.CreateModel(
name='UserResponse_Test_set3',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Test_set3')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Test phase set 3',
},
),
migrations.CreateModel(
name='UserResponse_Test_set4',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Test_set4')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Test phase set 4',
},
),
migrations.CreateModel(
name='UserResponse_Test_set5',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_option', models.CharField(default=None, max_length=10)),
('iteration', models.IntegerField(default=1)),
('time_taken', models.FloatField(default=None)),
('quid', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.Test_set5')),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Response for Test phase set 5',
},
),
migrations.CreateModel(
name='UserResponsesForDescription',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('description', models.TextField(blank=True, default=None, null=True)),
('set_number', models.CharField(blank=True, default=None, max_length=10, null=True)),
('user', models.ForeignKey(blank=True, default=None, on_delete=django.db.models.deletion.CASCADE, to='Questionnaire.UserDetails')),
],
options={
'verbose_name_plural': 'User Responses for Description',
},
),
]
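The migration above repeats near-identical `CreateModel` operations for five numbered sets of each model. The file itself is auto-generated by `makemigrations`, so it should not be hand-edited, but the naming pattern behind it can be sketched in a loop — a hypothetical illustration, with `numbered_model_specs` as an invented helper:

```python
def numbered_model_specs(base_name, verbose_base, count=5):
    """Return (model name, verbose_name_plural) pairs for sets 1..count."""
    return [(f'{base_name}_set{i}', f'{verbose_base} Set {i}')
            for i in range(1, count + 1)]

# Mirrors the naming scheme used by the migration's repeated CreateModel calls.
for name, verbose in numbered_model_specs('Test', 'Test Samples'):
    print(name, '->', verbose)
```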
# ---------------------------------------------------------------------------
# File: Mundo 3 Estruturas Compostas/ex109_md/moeda.py
# Repo: costa53/curso_em_video_python3 (MIT license)
# ---------------------------------------------------------------------------

def aumentar(n=0, a=0, show=False):
n += (n * a / 100)
if show:
return moeda(n)
return n
def diminuir(n=0, a=0, show=False):
n -= (n * a / 100)
if show:
return moeda(n)
return n
def dobro(n=0, show=False):
n *= 2
if show:
return moeda(n)
return n
def metade(n=0, show=False):
n /= 2
if show:
return moeda(n)
return n
def moeda(n=0, moeda='R$'):
    """Format n with two decimals and a comma as the decimal separator."""
    return f'{moeda}{n:.2f}'.replace('.', ',')
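The helpers above compose naturally: the arithmetic functions return a number by default, or a formatted string when `show=True`. A minimal usage sketch, restating the functions so the example runs on its own (the parameter `simbolo` replaces the original shadowed `moeda` name purely for clarity):

```python
def moeda(n=0, simbolo='R$'):
    # Two decimals, comma as decimal separator, e.g. 3.5 -> 'R$3,50'.
    return f'{simbolo}{n:.2f}'.replace('.', ',')

def aumentar(n=0, a=0, show=False):
    n += n * a / 100
    return moeda(n) if show else n

def diminuir(n=0, a=0, show=False):
    n -= n * a / 100
    return moeda(n) if show else n

preco = 1250.0
print(aumentar(preco, 10))         # 1375.0
print(diminuir(preco, 10, True))   # R$1125,00
```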
# ---------------------------------------------------------------------------
# File: Packs/MicrosoftGraphGroups/Integrations/MicrosoftGraphGroups/test_data/response_constants.py
# Repo: diCagri/content (MIT license)
# ---------------------------------------------------------------------------

RESPONSE_LIST_GROUPS = {
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#groups",
"value": [
{
"classification": None,
"createdDateTime": "2018-12-26T09:51:32Z",
"creationOptions": [],
"deletedDateTime": None,
"description": None,
"displayName": "TestDist",
"groupTypes": [],
"id": "TestDist",
"isAssignableToRole": None,
"mail": "testdist@demistodev.onmicrosoft.com",
"mailEnabled": True,
"mailNickname": "testdist",
"onPremisesDomainName": None,
"onPremisesLastSyncDateTime": None,
"onPremisesNetBiosName": None,
"onPremisesProvisioningErrors": [],
"onPremisesSamAccountName": None,
"onPremisesSecurityIdentifier": None,
"onPremisesSyncEnabled": None,
"preferredDataLocation": None,
"proxyAddresses": [
"SMTP:testdist@demistodev.onmicrosoft.com"
],
"renewedDateTime": "2018-12-26T09:51:32Z",
"resourceBehaviorOptions": [],
"resourceProvisioningOptions": [],
"securityEnabled": False,
"securityIdentifier": None,
"visibility": None
},
{
"classification": None,
"createdDateTime": "2019-08-24T09:39:03Z",
"creationOptions": [
"Team",
"ExchangeProvisioningFlags:3552"
],
"deletedDateTime": None,
"description": "DemistoTeam",
"displayName": "DemistoTeam",
"groupTypes": [
"Unified"
],
"id": "DemistoTeam",
"isAssignableToRole": None,
"mail": "DemistoTeam@demistodev.onmicrosoft.com",
"mailEnabled": True,
"mailNickname": "DemistoTeam",
"onPremisesDomainName": None,
"onPremisesLastSyncDateTime": None,
"onPremisesNetBiosName": None,
"onPremisesProvisioningErrors": [],
"onPremisesSamAccountName": None,
"onPremisesSecurityIdentifier": None,
"onPremisesSyncEnabled": None,
"preferredDataLocation": None,
"proxyAddresses": [
"SPO:SPO_6450fabe-0048-4804-8503-9f0f0694662f@SPO_ebac1a16-81bf-449b-8d43-5732c3c1d999",
"SMTP:DemistoTeam@demistodev.onmicrosoft.com"
],
"renewedDateTime": "2019-08-24T09:39:03Z",
"resourceBehaviorOptions": [
"HideGroupInOutlook",
"SubscribeMembersToCalendarEventsDisabled",
"WelcomeEmailDisabled"
],
"resourceProvisioningOptions": [
"Team"
],
"securityEnabled": False,
"securityIdentifier": None,
"visibility": "Public"
}
]
}
RESPONSE_GET_GROUP = {
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#groups/$entity",
"classification": None,
"createdDateTime": "2019-08-24T09:39:03Z",
"creationOptions": [
"Team",
"ExchangeProvisioningFlags:3552"
],
"deletedDateTime": None,
"description": "DemistoTeam",
"displayName": "DemistoTeam",
"groupTypes": [
"Unified"
],
"id": "DemistoTeam",
"isAssignableToRole": None,
"mail": "DemistoTeam@demistodev.onmicrosoft.com",
"mailEnabled": True,
"mailNickname": "DemistoTeam",
"onPremisesDomainName": None,
"onPremisesLastSyncDateTime": None,
"onPremisesNetBiosName": None,
"onPremisesProvisioningErrors": [],
"onPremisesSamAccountName": None,
"onPremisesSecurityIdentifier": None,
"onPremisesSyncEnabled": None,
"preferredDataLocation": None,
"proxyAddresses": [
"SPO:SPO_6450fabe-0048-4804-8503-9f0f0694662f@SPO_ebac1a16-81bf-449b-8d43-5732c3c1d999",
"SMTP:DemistoTeam@demistodev.onmicrosoft.com"
],
"renewedDateTime": "2019-08-24T09:39:03Z",
"resourceBehaviorOptions": [
"HideGroupInOutlook",
"SubscribeMembersToCalendarEventsDisabled",
"WelcomeEmailDisabled"
],
"resourceProvisioningOptions": [
"Team"
],
"securityEnabled": False,
"securityIdentifier": None,
"visibility": "Public"
}
RESPONSE_CREATE_GROUP = {
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#groups/$entity",
"classification": None,
"createdDateTime": "2019-11-05T10:15:55Z",
"creationOptions": [],
"deletedDateTime": None,
"description": None,
"displayName": "my_unit_test_group",
"groupTypes": [],
"id": "1baabf76-0f12-4336-922d-c9669a0d4027",
"isAssignableToRole": None,
"mail": None,
"mailEnabled": False,
"mailNickname": "unit_test",
"onPremisesDomainName": None,
"onPremisesLastSyncDateTime": None,
"onPremisesNetBiosName": None,
"onPremisesProvisioningErrors": [],
"onPremisesSamAccountName": None,
"onPremisesSecurityIdentifier": None,
"onPremisesSyncEnabled": None,
"preferredDataLocation": None,
"proxyAddresses": [],
"renewedDateTime": "2019-11-05T10:15:55Z",
"resourceBehaviorOptions": [],
"resourceProvisioningOptions": [],
"securityEnabled": True,
"securityIdentifier": "S-1-12-1-464174966-1127616274-1724460434-658509210",
"visibility": None
}
RESPONSE_LIST_MEMBERS_UNDER_100 = {
"@odata.context": "someLink",
"value": [
{
"id": "ID1",
"businessPhones": [
],
"displayName": "mock1",
"givenName": "mock1",
"jobTitle": "test",
"mail": "mock1@demistodev.onmicrosoft.com",
"mobilePhone": "None",
"officeLocation": "None",
"preferredLanguage": "en-US",
"surname": "mock1",
"userPrincipalName": "mock1@demistodev.onmicrosoft.com"
},
{
"@odata.type": "#microsoft.graph.user",
"id": "ID2",
"businessPhones": [
],
"displayName": "mock2",
"givenName": "mock2",
"jobTitle": "None",
"mail": "mock2@demistodev.onmicrosoft.com",
"mobilePhone": "050505050",
"officeLocation": "None",
"preferredLanguage": "en-US",
"surname": "mock2",
"userPrincipalName": "mock2@demistodev.onmicrosoft.com"
},
{
"@odata.type": "#microsoft.graph.user",
"id": "ID3",
"businessPhones": [
],
"displayName": "mock3",
"givenName": "mock3",
"jobTitle": "None",
"mail": "None",
"mobilePhone": "None",
"officeLocation": "None",
"preferredLanguage": "None",
"surname": "mock3",
"userPrincipalName": "mock3@demistodev.onmicrosoft.com"
}
]
}
RESPONSE_LIST_MEMBERS_ABOVE_100 = {
"@odata.context": "someLink",
"@odata.nextLink": "someNextLink",
"value": [
{
"@odata.type": "#microsoft.graph.user",
"id": "ID1",
"businessPhones": [
],
"displayName": "mock1",
"givenName": "mock1",
"jobTitle": "test",
"mail": "mock1@demistodev.onmicrosoft.com",
"mobilePhone": "None",
"officeLocation": "None",
"preferredLanguage": "en-US",
"surname": "mock1",
"userPrincipalName": "mock1@demistodev.onmicrosoft.com"
},
{
"@odata.type": "#microsoft.graph.user",
"id": "ID2",
"businessPhones": [
],
"displayName": "mock2",
"givenName": "mock2",
"jobTitle": "None",
"mail": "mock2@demistodev.onmicrosoft.com",
"mobilePhone": "050505050",
"officeLocation": "None",
"preferredLanguage": "en-US",
"surname": "mock2",
"userPrincipalName": "mock2@demistodev.onmicrosoft.com"
},
{
"@odata.type": "#microsoft.graph.user",
"id": "ID3",
"businessPhones": [
],
"displayName": "mock3",
"givenName": "mock3",
"jobTitle": "None",
"mail": "None",
"mobilePhone": "None",
"officeLocation": "None",
"preferredLanguage": "None",
"surname": "mock3",
"userPrincipalName": "mock3@demistodev.onmicrosoft.com"
}
]
}
| 35.24031 | 104 | 0.515288 | 553 | 9,092 | 8.432188 | 0.22604 | 0.072057 | 0.08235 | 0.024662 | 0.859961 | 0.835943 | 0.793481 | 0.793481 | 0.793481 | 0.793481 | 0 | 0.056702 | 0.344369 | 9,092 | 257 | 105 | 35.377432 | 0.725549 | 0 | 0 | 0.810277 | 0 | 0.007905 | 0.49791 | 0.212714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
07e23064716a4c2bb0ee817e09f63d8fe9931db8 | 175 | py | Python | pythonUtils/ExploreDA/SummaryStatistics/__init__.py | tgquintela/pythonUtils | 6f2e5ba3be67a48d3cd5cf72dcabfae04cfa7afe | [
"MIT"
] | 1 | 2015-07-21T05:15:11.000Z | 2015-07-21T05:15:11.000Z | pythonUtils/ExploreDA/SummaryStatistics/__init__.py | tgquintela/pythonUtils | 6f2e5ba3be67a48d3cd5cf72dcabfae04cfa7afe | [
"MIT"
] | null | null | null | pythonUtils/ExploreDA/SummaryStatistics/__init__.py | tgquintela/pythonUtils | 6f2e5ba3be67a48d3cd5cf72dcabfae04cfa7afe | [
"MIT"
] | null | null | null |
"""
Summary statistics
==================
Summary statistics for the data. This module groups functions that
generate summary statistics from the input data.
"""
| 19.444444 | 74 | 0.702857 | 21 | 175 | 5.857143 | 0.619048 | 0.414634 | 0.325203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148571 | 175 | 8 | 75 | 21.875 | 0.825503 | 0.937143 | 0 | null | 1 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
07ec5f7ce67d28b8093029a047a3015bcb083b90 | 22,866 | py | Python | kale/tests/test_worker.py | aces-inc/ndkale | cef8f41801f50ced1a14bac480270b3a9c54b087 | [
"BSD-2-Clause"
] | null | null | null | kale/tests/test_worker.py | aces-inc/ndkale | cef8f41801f50ced1a14bac480270b3a9c54b087 | [
"BSD-2-Clause"
] | null | null | null | kale/tests/test_worker.py | aces-inc/ndkale | cef8f41801f50ced1a14bac480270b3a9c54b087 | [
"BSD-2-Clause"
] | null | null | null | """Module testing the kale.worker module."""
from __future__ import absolute_import
import mock
import signal
import unittest
from kale import exceptions
from kale import test_utils
from kale import worker
from six.moves import range
class WorkerTestCase(unittest.TestCase):
"""Test worker logic."""
def _create_patch(self, name):
"""Helper method for creating scoped mocks."""
patcher = mock.patch(name)
patch = patcher.start()
self.addCleanup(patcher.stop)
return patch
def testRun(self):
"""Test an iteration that has tasks."""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
startup_handler = self._create_patch('kale.settings.ON_WORKER_STARTUP')
worker_inst = worker.Worker()
self.assertTrue(worker_inst is not None)
startup_handler.assert_called_once_with()
def testRunIterationWithTasks(self):
"""Test an iteration that has tasks."""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
fetch_batch = self._create_patch('kale.consumer.Consumer.fetch_batch')
message = test_utils.new_mock_message()
fetch_batch.return_value = [message]
run_batch = self._create_patch('kale.worker.Worker._run_batch')
run_batch.return_value = (1, 1)
worker_inst = worker.Worker()
mock_consumer.assert_called_once_with()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
self.assertTrue(worker_inst._run_single_iteration())
self.assertEqual(fetch_batch.called, 1)
self.assertTrue(worker_inst._dirty)
run_batch.assert_called_once_with([message])
def testRunIterationWithoutTasks(self):
"""Test an iteration that does not have tasks."""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
fetch_batch = self._create_patch('kale.consumer.Consumer.fetch_batch')
fetch_batch.return_value = []
run_batch = self._create_patch('kale.worker.Worker._run_batch')
worker_inst = worker.Worker()
mock_consumer.assert_called_once_with()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
self.assertFalse(worker_inst._run_single_iteration())
self.assertFalse(worker_inst._dirty)
self.assertEqual(fetch_batch.called, 1)
self.assertFalse(run_batch.called)
def testCleanupWorkerStop(self):
"""Test cleanup worker."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
release_batch = self._create_patch('kale.worker.Worker._release_batch')
shutdown_handler = self._create_patch(
'kale.settings.ON_WORKER_SHUTDOWN')
sys_exit = self._create_patch('sys.exit')
worker_inst = worker.Worker()
mock_consumer.assert_called_once_with()
release_batch.return_value = (0, 0)
worker_inst._cleanup_worker(signal.SIGABRT, None)
release_batch.assert_called_once_with()
sys_exit.assert_called_once_with(0)
shutdown_handler.assert_called_once_with()
def testCleanupWorkerSuspend(self):
"""Test cleanup worker."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
release_batch = self._create_patch('kale.worker.Worker._release_batch')
sys_exit = self._create_patch('sys.exit')
worker_inst = worker.Worker()
mock_consumer.assert_called_once_with()
release_batch.return_value = (0, 0)
worker_inst._cleanup_worker(signal.SIGTSTP, None)
release_batch.assert_called_once_with()
assert not sys_exit.called, 'System should not have exited.'
def testReleaseBatchWithTimeToSpare(self):
"""Test releasing a batch where the spare time is over the threshold.
"""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
mock_release = self._create_patch(
'kale.consumer.Consumer.release_messages')
mock_delete = self._create_patch(
'kale.consumer.Consumer.delete_messages')
mock_publish_dlq = self._create_patch(
'kale.publisher.Publisher.publish_messages_to_dead_letter_queue')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._incomplete_messages = [
test_utils.new_mock_message() for i in range(2)]
worker_inst._successful_messages = [
test_utils.new_mock_message() for i in range(3)]
worker_inst._failed_messages = [
test_utils.new_mock_message() for i in range(4)]
worker_inst._batch_stop_time = 20
# _batch_stop_time - get_time > RESET_TIMEOUT_THRESHOLD (20 - 10 > 1)
get_time.return_value = 10
releasable_messages = worker_inst._incomplete_messages
deletable_messages = (
worker_inst._successful_messages + worker_inst._failed_messages)
num_deleted, num_released = worker_inst._release_batch()
mock_release.assert_called_once_with(
releasable_messages, worker_inst._batch_queue.name)
mock_delete.assert_called_once_with(
deletable_messages, worker_inst._batch_queue.name)
assert not mock_publish_dlq.called, ('No messages should have been '
'moved to dlq.')
self.assertEqual(num_deleted, len(deletable_messages))
self.assertEqual(num_released, len(releasable_messages))
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testReleaseBatchWithPermanent(self):
"""Test releasing a batch where the spare time is over the threshold.
"""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
mock_release = self._create_patch(
'kale.consumer.Consumer.release_messages')
mock_delete = self._create_patch(
'kale.consumer.Consumer.delete_messages')
mock_publish_dlq = self._create_patch(
'kale.publisher.Publisher.publish_messages_to_dead_letter_queue')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._incomplete_messages = [
test_utils.new_mock_message() for i in range(2)]
worker_inst._successful_messages = [
test_utils.new_mock_message() for i in range(3)]
worker_inst._failed_messages = [
test_utils.new_mock_message() for i in range(4)]
# Permanent failures should be a subset of failures.
worker_inst._permanent_failures = worker_inst._failed_messages[:2]
worker_inst._batch_stop_time = 20
# _batch_stop_time - get_time > RESET_TIMEOUT_THRESHOLD (20 - 10 > 1)
get_time.return_value = 10
releasable_messages = worker_inst._incomplete_messages
        permanent_failures = worker_inst._permanent_failures
        deletable_messages = (
            worker_inst._successful_messages + worker_inst._failed_messages)
        num_deleted, num_released = worker_inst._release_batch()
        mock_release.assert_called_once_with(
            releasable_messages, worker_inst._batch_queue.name)
        mock_delete.assert_called_once_with(
            deletable_messages, worker_inst._batch_queue.name)
        mock_publish_dlq.assert_called_once_with(
            worker_inst._batch_queue.dlq_name, permanent_failures)
self.assertEqual(num_deleted, len(deletable_messages))
self.assertEqual(num_released, len(releasable_messages))
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testReleaseBatchWithNoSuccessfulAndNoTimeLeft(self):
"""Test releasing a batch where the spare time is over the threshold.
"""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
mock_release = self._create_patch(
'kale.consumer.Consumer.release_messages')
mock_delete = self._create_patch(
'kale.consumer.Consumer.delete_messages')
mock_publish_dlq = self._create_patch(
'kale.publisher.Publisher.publish_messages_to_dead_letter_queue')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._successful_messages = []
worker_inst._incomplete_messages = [
test_utils.new_mock_message() for i in range(2)]
worker_inst._failed_messages = [
test_utils.new_mock_message() for i in range(4)]
worker_inst._batch_stop_time = 20
        # _batch_stop_time - get_time < RESET_TIMEOUT_THRESHOLD (20 - 19.5 < 1)
get_time.return_value = 19.5
deletable_messages = worker_inst._failed_messages
num_deleted, num_released = worker_inst._release_batch()
assert not mock_release.called, ('No messages should have '
'been released.')
# Failed messages should have been deleted.
mock_delete.assert_called_once_with(
deletable_messages, worker_inst._batch_queue.name)
assert not mock_publish_dlq.called, ('No messages should have'
                                             ' been moved to dlq.')
self.assertEqual(num_deleted, len(deletable_messages))
self.assertEqual(num_released, 0)
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testReleaseBatchWithNoDeletableAndNoTimeLeft(self):
"""Test releasing a batch where the spare time is over the threshold.
"""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
mock_release = self._create_patch(
'kale.consumer.Consumer.release_messages')
mock_delete = self._create_patch(
'kale.consumer.Consumer.delete_messages')
mock_publish_dlq = self._create_patch(
'kale.publisher.Publisher.publish_messages_to_dead_letter_queue')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._successful_messages = []
worker_inst._failed_messages = []
worker_inst._incomplete_messages = [
test_utils.new_mock_message() for i in range(2)]
worker_inst._batch_stop_time = 20
        # _batch_stop_time - get_time < RESET_TIMEOUT_THRESHOLD (20 - 19.5 < 1)
get_time.return_value = 19.5
num_deleted, num_released = worker_inst._release_batch()
assert not mock_release.called, ('No messages should have '
'been released.')
assert not mock_delete.called, 'No messages should have been deleted.'
assert not mock_publish_dlq.called, ('No messages should have'
' been moved to dlq.')
self.assertEqual(num_deleted, 0)
self.assertEqual(num_released, 0)
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testReleaseBatchWithNoDeletableAndWithTimeLeft(self):
"""Test releasing a batch where the spare time is over the threshold.
"""
mock_consumer = self._create_patch('kale.consumer.Consumer.__init__')
mock_consumer.return_value = None
mock_release = self._create_patch(
'kale.consumer.Consumer.release_messages')
mock_delete = self._create_patch(
'kale.consumer.Consumer.delete_messages')
mock_publish_dlq = self._create_patch(
'kale.publisher.Publisher.publish_messages_to_dead_letter_queue')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._successful_messages = []
worker_inst._failed_messages = []
worker_inst._incomplete_messages = [
test_utils.new_mock_message() for i in range(2)]
worker_inst._batch_stop_time = 20
        # _batch_stop_time - get_time > RESET_TIMEOUT_THRESHOLD (20 - 10 > 1)
get_time.return_value = 10
releasable_messages = worker_inst._incomplete_messages
num_deleted, num_released = worker_inst._release_batch()
mock_release.assert_called_once_with(
releasable_messages, worker_inst._batch_queue.name)
assert not mock_delete.called, 'No messages should have been deleted.'
assert not mock_publish_dlq.called, ('No messages should have '
'been moved to dlq.')
self.assertEqual(num_deleted, 0)
self.assertEqual(num_released, len(releasable_messages))
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testRunBatchSuccessful(self):
"""Test a successful batch."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._batch_stop_time = 100
# _batch_stop_time - (get_time + task.time_limit) > 0
        # (100 - (10 + 60)) > 0
get_time.return_value = 10
message_batch = [test_utils.new_mock_message()]
num_messages = len(message_batch)
worker_inst._run_batch(message_batch)
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(num_messages, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testRunBatchNoTimeRemaining(self):
"""Test a batch where there is not enough time remaining."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
get_time = self._create_patch('time.time')
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._batch_stop_time = 50
# _batch_stop_time - (get_time + task.time_limit) > 0
        # (50 - (10 + 60)) < 0
get_time.return_value = 10
message_batch = [test_utils.new_mock_message()]
num_messages = len(message_batch)
worker_inst._run_batch(message_batch)
self.assertEqual(num_messages, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._failed_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
def testRunBatchTaskTimeout(self):
"""Test batch with a task timeout."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
get_time = self._create_patch('time.time')
mock_failure = self._create_patch(
'kale.test_utils.TimeoutTask.handle_failure')
mock_failure.return_value = True
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._batch_stop_time = 100
# _batch_stop_time - (get_time + task.time_limit) > 0
        # (100 - (10 + 60)) > 0
get_time.return_value = 10
message = test_utils.new_mock_message(
task_class=test_utils.TimeoutTask)
message_batch = [message]
num_messages = len(message_batch)
worker_inst._run_batch(message_batch)
fail_msg, fail_exc = mock_failure.call_args[0]
self.assertEqual(fail_msg, message)
        self.assertIsInstance(fail_exc, exceptions.TimeoutException)
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
self.assertEqual(num_messages, len(worker_inst._failed_messages))
def testRunBatchTaskException(self):
"""Test batch with a task exception."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
get_time = self._create_patch('time.time')
mock_failure = self._create_patch(
'kale.test_utils.FailTask.handle_failure')
mock_failure.return_value = True
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._batch_stop_time = 100
# _batch_stop_time - (get_time + task.time_limit) > 0
        # (100 - (10 + 60)) > 0
get_time.return_value = 10
message = test_utils.new_mock_message(task_class=test_utils.FailTask)
message_batch = [message]
num_messages = len(message_batch)
worker_inst._run_batch(message_batch)
fail_msg, fail_exc = mock_failure.call_args[0]
self.assertEqual(fail_msg, message)
        self.assertIsInstance(fail_exc, exceptions.TaskException)
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(0, len(worker_inst._permanent_failures))
self.assertEqual(num_messages, len(worker_inst._failed_messages))
def testRunBatchTaskExceptionPermanentFailure(self):
"""Test batch with a task exception."""
mock_consumer = self._create_patch('kale.consumer.Consumer')
get_time = self._create_patch('time.time')
mock_failure = self._create_patch(
'kale.test_utils.FailTask.handle_failure')
mock_failure.return_value = False
worker_inst = worker.Worker()
worker_inst._batch_queue = worker_inst._queue_selector.get_queue()
mock_consumer.assert_called_once_with()
worker_inst._batch_stop_time = 100
# _batch_stop_time - (get_time + task.time_limit) > 0
        # (100 - (10 + 60)) > 0
get_time.return_value = 10
message = test_utils.new_mock_message(task_class=test_utils.FailTask)
message_batch = [message]
num_messages = len(message_batch)
worker_inst._run_batch(message_batch)
fail_msg, fail_exc = mock_failure.call_args[0]
self.assertEqual(fail_msg, message)
        self.assertIsInstance(fail_exc, exceptions.TaskException)
self.assertEqual(0, len(worker_inst._incomplete_messages))
self.assertEqual(0, len(worker_inst._successful_messages))
self.assertEqual(1, len(worker_inst._permanent_failures))
self.assertEqual(num_messages, len(worker_inst._failed_messages))
def testCheckProcessExceedingMemory(self):
"""Test process resources method."""
mock_resource = self._create_patch('resource.getrusage')
sys_exit = self._create_patch('sys.exit')
self._create_patch('kale.consumer.Consumer')
worker_inst = worker.Worker()
mock_resource.return_value = mock.MagicMock(ru_maxrss=1000000000)
worker_inst._check_process_resources()
sys_exit.assert_called_once_with(1)
def testCheckProcessDirty(self):
"""Test process resources method."""
mock_resource = self._create_patch('resource.getrusage')
mock_resource.return_value = mock.MagicMock(ru_maxrss=10)
sys_exit = self._create_patch('sys.exit')
self._create_patch('kale.consumer.Consumer')
worker_inst = worker.Worker()
worker_inst._dirty = True
self.assertTrue(worker_inst._check_process_resources())
self.assertFalse(sys_exit.called)
def testCheckProcessNotDirty(self):
"""Test process resources method."""
mock_logger = self._create_patch('kale.worker.logger.info')
mock_resource = self._create_patch('resource.getrusage')
mock_resource.return_value = mock.MagicMock(ru_maxrss=10)
sys_exit = self._create_patch('sys.exit')
self._create_patch('kale.consumer.Consumer')
worker_inst = worker.Worker()
worker_inst._dirty = False
self.assertTrue(worker_inst._check_process_resources())
self.assertFalse(mock_logger.called)
self.assertFalse(sys_exit.called)
def testRemoveMessageOrExitSuccess(self):
"""Test remove_message_or_exit method."""
sys_exit = self._create_patch('sys.exit')
worker_inst = worker.Worker()
worker_inst._incomplete_messages = [1, 2]
worker_inst.remove_message_or_exit(1)
self.assertEqual(worker_inst._incomplete_messages, [2])
sys_exit.assert_not_called()
def testRemoveMessageOrExitFailure(self):
"""Test remove_message_or_exit method."""
sys_exit = self._create_patch('sys.exit')
worker_inst = worker.Worker()
worker_inst._incomplete_messages = [1, 2]
worker_inst.remove_message_or_exit(3)
self.assertEqual(worker_inst._incomplete_messages, [1, 2])
sys_exit.assert_called()
| 43.804598 | 79 | 0.689539 | 2,721 | 22,866 | 5.3767 | 0.06799 | 0.105947 | 0.066644 | 0.058442 | 0.887491 | 0.873616 | 0.846617 | 0.834997 | 0.823513 | 0.815311 | 0 | 0.011498 | 0.220283 | 22,866 | 521 | 80 | 43.888676 | 0.809075 | 0.07837 | 0 | 0.783854 | 0 | 0 | 0.100659 | 0.074539 | 0 | 0 | 0 | 0 | 0.286458 | 1 | 0.054688 | false | 0 | 0.020833 | 0 | 0.080729 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
07ed5816953d504a98ccaad92a8e94228928d0b1 | 117 | py | Python | MsgBot/__init__.py | LZC6244/DingTalkBot | e5d927d5290d14c62fd34d9ff7795a0ef1252729 | [
"MIT"
] | 3 | 2021-05-17T03:10:28.000Z | 2021-12-08T09:34:44.000Z | MsgBot/__init__.py | LZC6244/MsgBot | 1b7bdfa59b30c25f71d8332ce0670d77a73dbee9 | [
"MIT"
] | null | null | null | MsgBot/__init__.py | LZC6244/MsgBot | 1b7bdfa59b30c25f71d8332ce0670d77a73dbee9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from MsgBot.wx_com_bot.bot import WxComBot
from MsgBot.ding_talk_bot.bot import DingTalkBot
| 23.4 | 48 | 0.769231 | 19 | 117 | 4.526316 | 0.684211 | 0.232558 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.119658 | 117 | 4 | 49 | 29.25 | 0.825243 | 0.179487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5806a2ba877f2fdff44c2286545545f23da3b3de | 68,545 | py | Python | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_pinned/cmp_astar/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_pinned/cmp_astar/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_pinned/cmp_astar/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202689,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.29937,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.5184,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.297317,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.11509,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.295915,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.45214,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0108524,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0784767,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0802601,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0784767,
'Execution Unit/Register Files/Runtime Dynamic': 0.0911124,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.189632,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.542786,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 2.35705,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00224747,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00224747,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00199565,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000793391,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00115294,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00764354,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0201872,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.077156,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.90778,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.198692,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.262057,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.36813,
'Instruction Fetch Unit/Runtime Dynamic': 0.565736,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.033966,
'L2/Runtime Dynamic': 0.00762399,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.59411,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.14163,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0762541,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0762542,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.95566,
'Load Store Unit/Runtime Dynamic': 1.59394,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.18803,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.37606,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0667323,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0672414,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.305148,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0325756,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.579014,
'Memory Management Unit/Runtime Dynamic': 0.099817,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 21.9506,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0153081,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.158965,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.174273,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 4.79844,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202689,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.144817,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.233585,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.117906,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.496307,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.165628,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.14693,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00607428,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0439245,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.044923,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0439245,
'Execution Unit/Register Files/Runtime Dynamic': 0.0509973,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0925366,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.264819,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.42019,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00141035,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00141035,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00129081,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000533818,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000645323,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00475682,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.011293,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0431857,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.74698,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.111537,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.146678,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.09881,
'Instruction Fetch Unit/Runtime Dynamic': 0.317451,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0192477,
'L2/Runtime Dynamic': 0.00433256,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.55926,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.640399,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0427742,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0427743,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.76124,
'Load Store Unit/Runtime Dynamic': 0.894121,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.105474,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.210948,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.037433,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0377216,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.170797,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0182865,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.391209,
'Memory Management Unit/Runtime Dynamic': 0.0560081,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 16.0069,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00653375,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0751839,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0817176,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.77382,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202689,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.14482,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.23359,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.117908,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.496318,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.165633,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.14694,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00607441,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0439257,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.044924,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0439257,
'Execution Unit/Register Files/Runtime Dynamic': 0.0509984,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0925392,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.264824,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.42021,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00141038,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00141038,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00129083,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00053383,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000645337,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00475693,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0112932,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0431866,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.74703,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.111539,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.146681,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.09887,
'Instruction Fetch Unit/Runtime Dynamic': 0.317457,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0192477,
'L2/Runtime Dynamic': 0.00433231,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.55929,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.640412,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0427752,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0427752,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.76128,
'Load Store Unit/Runtime Dynamic': 0.894139,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.105476,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.210952,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0374339,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0377224,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.170801,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0182869,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.391214,
'Memory Management Unit/Runtime Dynamic': 0.0560093,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 16.007,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00653389,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0751855,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0817194,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.77386,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202689,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.144828,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.233603,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.117915,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.496346,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.165642,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.14696,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00607475,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0439282,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0449265,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0439282,
'Execution Unit/Register Files/Runtime Dynamic': 0.0510012,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0925444,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.264839,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.42025,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00141046,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00141046,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00129091,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00053386,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000645373,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00475719,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0112939,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.043189,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.74719,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.111545,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.146689,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.09903,
'Instruction Fetch Unit/Runtime Dynamic': 0.317474,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0192528,
'L2/Runtime Dynamic': 0.00433278,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.55936,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.640446,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0427774,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0427774,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.76136,
'Load Store Unit/Runtime Dynamic': 0.894187,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.105482,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.210964,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0374359,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0377244,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.17081,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0182878,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.391227,
'Memory Management Unit/Runtime Dynamic': 0.0560121,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 16.0073,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00653425,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0751897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0817239,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.77398,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 1.6477843408442538,
'Runtime Dynamic': 1.6477843408442538,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.155394,
'Runtime Dynamic': 0.0898217,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 70.1272,
'Peak Power': 103.239,
'Runtime Dynamic': 13.2099,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 69.9718,
'Total Cores/Runtime Dynamic': 13.1201,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.155394,
'Total L3s/Runtime Dynamic': 0.0898217,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
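The entries above form a flat, McPAT-style power breakdown: each key is a '/'-separated path (top-level unit, optional subcomponent, metric name last), with power metrics apparently in watts and areas in mm². A minimal sketch of pulling out each top-level unit's own aggregate 'Runtime Dynamic' figure, assuming the report has been loaded as a plain Python dict of this shape (the function name and the small sample dict below are illustrative, not from this dump):

```python
def unit_runtime_dynamic(report):
    """Return {unit: value} for each top-level unit's own
    'Runtime Dynamic' entry (exactly one '/' in the key), skipping
    both the processor-wide total and per-subcomponent leaves."""
    return {
        key.split('/')[0]: value
        for key, value in report.items()
        if key.count('/') == 1 and key.endswith('/Runtime Dynamic')
    }

# Illustrative sample with the same key shape as the report above.
report = {
    'L2/Runtime Dynamic': 0.00433231,
    'L2/Peak Dynamic': 0.0192477,
    'Load Store Unit/Runtime Dynamic': 0.894139,
    'Load Store Unit/LoadQ/Runtime Dynamic': 0.0427752,
    'Runtime Dynamic': 2.77386,
}
print(unit_runtime_dynamic(report))
# → {'L2': 0.00433231, 'Load Store Unit': 0.894139}
```

The same pattern generalizes to 'Peak Dynamic' or the leakage metrics by changing the key suffix.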
6af4a93c755cece482a738f856676236b75b41c8 | 4,766 | py | Python | sdk/python/pulumi_azure_nextgen/web/v20200601/__init__.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | ["Apache-2.0"]
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
# Export this package's modules as members:
from .app_service_environment import *
from .app_service_plan import *
from .app_service_plan_route_for_vnet import *
from .certificate import *
from .get_app_service_environment import *
from .get_app_service_plan import *
from .get_certificate import *
from .get_static_site import *
from .get_web_app import *
from .get_web_app_deployment import *
from .get_web_app_deployment_slot import *
from .get_web_app_domain_ownership_identifier import *
from .get_web_app_domain_ownership_identifier_slot import *
from .get_web_app_function import *
from .get_web_app_host_name_binding import *
from .get_web_app_host_name_binding_slot import *
from .get_web_app_hybrid_connection import *
from .get_web_app_hybrid_connection_slot import *
from .get_web_app_instance_function_slot import *
from .get_web_app_premier_add_on import *
from .get_web_app_premier_add_on_slot import *
from .get_web_app_private_endpoint_connection import *
from .get_web_app_public_certificate import *
from .get_web_app_public_certificate_slot import *
from .get_web_app_relay_service_connection import *
from .get_web_app_relay_service_connection_slot import *
from .get_web_app_site_extension import *
from .get_web_app_site_extension_slot import *
from .get_web_app_slot import *
from .get_web_app_slot_configuration_names import *
from .get_web_app_source_control import *
from .get_web_app_source_control_slot import *
from .get_web_app_swift_virtual_network_connection import *
from .get_web_app_swift_virtual_network_connection_slot import *
from .get_web_app_vnet_connection import *
from .get_web_app_vnet_connection_slot import *
from .list_app_service_plan_hybrid_connection_keys import *
from .list_site_identifiers_assigned_to_host_name import *
from .list_static_site_build_function_app_settings import *
from .list_static_site_function_app_settings import *
from .list_static_site_secrets import *
from .list_static_site_users import *
from .list_web_app_auth_settings import *
from .list_web_app_auth_settings_slot import *
from .list_web_app_azure_storage_accounts import *
from .list_web_app_azure_storage_accounts_slot import *
from .list_web_app_backup_configuration import *
from .list_web_app_backup_configuration_slot import *
from .list_web_app_backup_status_secrets import *
from .list_web_app_backup_status_secrets_slot import *
from .list_web_app_connection_strings import *
from .list_web_app_connection_strings_slot import *
from .list_web_app_function_keys import *
from .list_web_app_function_keys_slot import *
from .list_web_app_function_secrets import *
from .list_web_app_function_secrets_slot import *
from .list_web_app_host_keys import *
from .list_web_app_host_keys_slot import *
from .list_web_app_metadata import *
from .list_web_app_metadata_slot import *
from .list_web_app_publishing_credentials import *
from .list_web_app_publishing_credentials_slot import *
from .list_web_app_site_backups import *
from .list_web_app_site_backups_slot import *
from .list_web_app_site_push_settings import *
from .list_web_app_site_push_settings_slot import *
from .list_web_app_sync_function_triggers import *
from .list_web_app_sync_function_triggers_slot import *
from .list_web_application_settings import *
from .list_web_application_settings_slot import *
from .static_site import *
from .web_app import *
from .web_app_deployment import *
from .web_app_deployment_slot import *
from .web_app_domain_ownership_identifier import *
from .web_app_domain_ownership_identifier_slot import *
from .web_app_function import *
from .web_app_host_name_binding import *
from .web_app_host_name_binding_slot import *
from .web_app_hybrid_connection import *
from .web_app_hybrid_connection_slot import *
from .web_app_instance_function_slot import *
from .web_app_premier_add_on import *
from .web_app_premier_add_on_slot import *
from .web_app_private_endpoint_connection import *
from .web_app_public_certificate import *
from .web_app_public_certificate_slot import *
from .web_app_relay_service_connection import *
from .web_app_relay_service_connection_slot import *
from .web_app_site_extension import *
from .web_app_site_extension_slot import *
from .web_app_slot import *
from .web_app_slot_configuration_names import *
from .web_app_source_control import *
from .web_app_source_control_slot import *
from .web_app_swift_virtual_network_connection import *
from .web_app_swift_virtual_network_connection_slot import *
from .web_app_vnet_connection import *
from .web_app_vnet_connection_slot import *
from ._inputs import *
from . import outputs
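The generated `__init__.py` above re-exports every submodule's public names at the package level via star imports. A minimal, self-contained sketch of that re-export mechanism using only stdlib module objects; `minipkg` and `Certificate` are illustrative names, not part of the Pulumi SDK:

```python
import sys
import types

# Build a throwaway submodule the way a package's .certificate module
# would look (illustrative names, not the Pulumi SDK).
sub = types.ModuleType('minipkg.certificate')
sub.Certificate = type('Certificate', (), {})
sub.__all__ = ['Certificate']
sys.modules['minipkg.certificate'] = sub

# Register the parent package, then do the equivalent of
# `from .certificate import *` in minipkg/__init__.py:
pkg = types.ModuleType('minipkg')
sys.modules['minipkg'] = pkg
for name in sub.__all__:
    setattr(pkg, name, getattr(sub, name))

import minipkg
print(minipkg.Certificate.__name__)  # → Certificate
```

In the real package, each `from .xyz import *` line performs this wiring implicitly, honoring the submodule's `__all__` when one is defined.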
ed5c9cac419305a4cc9332513dbc8c0395cc9a23 | 33,315 | py | Python | openmdao/core/tests/test_approx_derivs.py | ryanfarr01/blue | a9aac98c09cce0f7cadf26cf592e3d978bf4e3ff | ["Apache-2.0"]
""" Testing for group finite differencing."""
import unittest
import itertools
from parameterized import parameterized
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ScipyIterativeSolver, ExecComp, NewtonSolver, \
ExplicitComponent, DefaultVector, NonlinearBlockGS
from openmdao.devtools.testutil import assert_rel_error
from openmdao.test_suite.components.impl_comp_array import TestImplCompArray, TestImplCompArrayDense
from openmdao.test_suite.components.paraboloid import Paraboloid
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives
from openmdao.test_suite.components.sellar_feature import SellarNoDerivativesCS
from openmdao.test_suite.components.simple_comps import DoubleArrayComp
from openmdao.test_suite.components.unit_conv import SrcComp, TgtCompC, TgtCompF, TgtCompK
try:
from openmdao.parallel_api import PETScVector
except ImportError:
PETScVector = None
class TestGroupFiniteDifference(unittest.TestCase):
def test_paraboloid(self):
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
model.add_subsystem('p2', IndepVarComp('y', 0.0), promotes=['y'])
model.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])
model.linear_solver = ScipyIterativeSolver()
model.approx_total_derivs()
prob.setup(check=False, mode='fwd')
prob.set_solver_print(level=0)
prob.run_model()
of = ['f_xy']
wrt = ['x', 'y']
derivs = prob.compute_total_derivs(of=of, wrt=wrt)
assert_rel_error(self, derivs['f_xy', 'x'], [[-6.0]], 1e-6)
assert_rel_error(self, derivs['f_xy', 'y'], [[8.0]], 1e-6)
# 1 output x 2 inputs
self.assertEqual(len(model._approx_schemes['fd']._exec_list), 2)
def test_paraboloid_subbed(self):
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
model.add_subsystem('p2', IndepVarComp('y', 0.0), promotes=['y'])
sub = model.add_subsystem('sub', Group(), promotes=['x', 'y', 'f_xy'])
sub.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])
model.linear_solver = ScipyIterativeSolver()
sub.approx_total_derivs()
prob.setup(check=False, mode='fwd')
prob.set_solver_print(level=0)
prob.run_model()
of = ['f_xy']
wrt = ['x', 'y']
derivs = prob.compute_total_derivs(of=of, wrt=wrt)
assert_rel_error(self, derivs['f_xy', 'x'], [[-6.0]], 1e-6)
assert_rel_error(self, derivs['f_xy', 'y'], [[8.0]], 1e-6)
Jfd = sub.jacobian._subjacs
assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.comp.x'], [[6.0]], 1e-6)
assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.comp.y'], [[-8.0]], 1e-6)
# 1 output x 2 inputs
sub = model.get_subsystem('sub')
self.assertEqual(len(sub._approx_schemes['fd']._exec_list), 2)
def test_paraboloid_subbed_in_setup(self):
class MyModel(Group):
def setup(self):
self.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])
self.approx_total_derivs()
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
model.add_subsystem('p2', IndepVarComp('y', 0.0), promotes=['y'])
sub = model.add_subsystem('sub', MyModel(), promotes=['x', 'y', 'f_xy'])
model.linear_solver = ScipyIterativeSolver()
prob.setup(check=False, mode='fwd')
prob.set_solver_print(level=0)
prob.run_model()
of = ['f_xy']
wrt = ['x', 'y']
derivs = prob.compute_total_derivs(of=of, wrt=wrt)
assert_rel_error(self, derivs['f_xy', 'x'], [[-6.0]], 1e-6)
assert_rel_error(self, derivs['f_xy', 'y'], [[8.0]], 1e-6)
Jfd = sub.jacobian._subjacs
assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.comp.x'], [[6.0]], 1e-6)
assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.comp.y'], [[-8.0]], 1e-6)
# 1 output x 2 inputs
sub = model.get_subsystem('sub')
self.assertEqual(len(sub._approx_schemes['fd']._exec_list), 2)
def test_paraboloid_subbed_with_connections(self):
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p1', IndepVarComp('x', 0.0))
model.add_subsystem('p2', IndepVarComp('y', 0.0))
sub = model.add_subsystem('sub', Group())
sub.add_subsystem('bx', ExecComp('xout = xin'))
sub.add_subsystem('by', ExecComp('yout = yin'))
sub.add_subsystem('comp', Paraboloid())
model.connect('p1.x', 'sub.bx.xin')
model.connect('sub.bx.xout', 'sub.comp.x')
model.connect('p2.y', 'sub.by.yin')
model.connect('sub.by.yout', 'sub.comp.y')
model.linear_solver = ScipyIterativeSolver()
sub.approx_total_derivs()
prob.setup(check=False, mode='fwd')
prob.set_solver_print(level=0)
prob.run_model()
of = ['sub.comp.f_xy']
wrt = ['p1.x', 'p2.y']
derivs = prob.compute_total_derivs(of=of, wrt=wrt)
assert_rel_error(self, derivs['sub.comp.f_xy', 'p1.x'], [[-6.0]], 1e-6)
assert_rel_error(self, derivs['sub.comp.f_xy', 'p2.y'], [[8.0]], 1e-6)
Jfd = sub.jacobian._subjacs
assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.bx.xin'], [[6.0]], 1e-6)
assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.by.yin'], [[-8.0]], 1e-6)
# 3 outputs x 2 inputs
sub = model.get_subsystem('sub')
self.assertEqual(len(sub._approx_schemes['fd']._exec_list), 6)
    def test_array_comp(self):
class DoubleArrayFD(DoubleArrayComp):
def compute_partials(self, inputs, outputs, partials):
"""
Override deriv calculation.
"""
pass
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p1', IndepVarComp('x1', val=np.ones(2)))
model.add_subsystem('p2', IndepVarComp('x2', val=np.ones(2)))
comp = model.add_subsystem('comp', DoubleArrayFD())
model.connect('p1.x1', 'comp.x1')
model.connect('p2.x2', 'comp.x2')
model.linear_solver = ScipyIterativeSolver()
model.approx_total_derivs()
prob.setup(check=False)
prob.run_model()
model.run_linearize()
Jfd = model.jacobian._subjacs
assert_rel_error(self, Jfd['comp.y1', 'p1.x1'], -comp.JJ[0:2, 0:2], 1e-6)
assert_rel_error(self, Jfd['comp.y1', 'p2.x2'], -comp.JJ[0:2, 2:4], 1e-6)
assert_rel_error(self, Jfd['comp.y2', 'p1.x1'], -comp.JJ[2:4, 0:2], 1e-6)
assert_rel_error(self, Jfd['comp.y2', 'p2.x2'], -comp.JJ[2:4, 2:4], 1e-6)
def test_implicit_component_fd(self):
# Somehow this wasn't tested in the original fd tests (which are mostly feature tests.)
class TestImplCompArrayDense(TestImplCompArray):
def setup(self):
super(TestImplCompArrayDense, self).setup()
self.approx_partials('*', '*')
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p_rhs', IndepVarComp('rhs', val=np.ones(2)))
sub = model.add_subsystem('sub', Group())
comp = sub.add_subsystem('comp', TestImplCompArrayDense())
model.connect('p_rhs.rhs', 'sub.comp.rhs')
model.linear_solver = ScipyIterativeSolver()
prob.setup(check=False)
prob.run_model()
model.run_linearize()
Jfd = comp.jacobian._subjacs
assert_rel_error(self, Jfd['sub.comp.x', 'sub.comp.rhs'], -np.eye(2), 1e-6)
assert_rel_error(self, Jfd['sub.comp.x', 'sub.comp.x'], comp.mtx, 1e-6)
def test_around_newton(self):
# For a group that is set to FD that has a Newton solver, make sure it doesn't
# try to FD itself while solving.
class TestImplCompArrayDenseNoSolve(TestImplCompArrayDense):
def solve_nonlinear(self, inputs, outputs):
""" Disable local solve."""
pass
prob = Problem()
model = prob.model = Group()
model.add_subsystem('p_rhs', IndepVarComp('rhs', val=np.array([2, 4])))
comp = model.add_subsystem('comp', TestImplCompArrayDenseNoSolve())
model.connect('p_rhs.rhs', 'comp.rhs')
model.nonlinear_solver = NewtonSolver()
model.linear_solver = ScipyIterativeSolver()
model.approx_total_derivs()
prob.setup(check=False)
prob.run_model()
model.approx_total_derivs()
assert_rel_error(self, prob['comp.x'], [1.97959184, 4.02040816], 1e-5)
model.run_linearize()
of = ['comp.x']
wrt = ['p_rhs.rhs']
Jfd = prob.compute_total_derivs(of=of, wrt=wrt)
assert_rel_error(self, Jfd['comp.x', 'p_rhs.rhs'], [[1.01020408, -0.01020408], [-0.01020408, 1.01020408]], 1e-5)
    def test_step_size(self):
        # Test makes sure option metadata propagates to the fd function
        prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
        model.add_subsystem('p2', IndepVarComp('y', 0.0), promotes=['y'])
        model.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])
        model.linear_solver = ScipyIterativeSolver()

        # Worse step so that our answer will be off a wee bit.
        model.approx_total_derivs(step=1e-2)
        prob.setup(check=False, mode='fwd')
        prob.set_solver_print(level=0)
        prob.run_model()

        of = ['f_xy']
        wrt = ['x', 'y']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['f_xy', 'x'], [[-5.99]], 1e-6)
        assert_rel_error(self, derivs['f_xy', 'y'], [[8.01]], 1e-6)

    def test_unit_conv_group(self):
        prob = Problem()
        prob.model = Group()
        prob.model.add_subsystem('px1', IndepVarComp('x1', 100.0), promotes=['x1'])
        sub1 = prob.model.add_subsystem('sub1', Group())
        sub2 = prob.model.add_subsystem('sub2', Group())
        sub1.add_subsystem('src', SrcComp())
        sub2.add_subsystem('tgtF', TgtCompF())
        sub2.add_subsystem('tgtC', TgtCompC())
        sub2.add_subsystem('tgtK', TgtCompK())
        prob.model.connect('x1', 'sub1.src.x1')
        prob.model.connect('sub1.src.x2', 'sub2.tgtF.x2')
        prob.model.connect('sub1.src.x2', 'sub2.tgtC.x2')
        prob.model.connect('sub1.src.x2', 'sub2.tgtK.x2')
        sub2.approx_total_derivs(method='fd')
        prob.setup(check=False)
        prob.run_model()

        assert_rel_error(self, prob['sub1.src.x2'], 100.0, 1e-6)
        assert_rel_error(self, prob['sub2.tgtF.x3'], 212.0, 1e-6)
        assert_rel_error(self, prob['sub2.tgtC.x3'], 100.0, 1e-6)
        assert_rel_error(self, prob['sub2.tgtK.x3'], 373.15, 1e-6)

        wrt = ['x1']
        of = ['sub2.tgtF.x3', 'sub2.tgtC.x3', 'sub2.tgtK.x3']
        J = prob.compute_total_derivs(of=of, wrt=wrt, return_format='dict')
        assert_rel_error(self, J['sub2.tgtF.x3']['x1'][0][0], 1.8, 1e-6)
        assert_rel_error(self, J['sub2.tgtC.x3']['x1'][0][0], 1.0, 1e-6)
        assert_rel_error(self, J['sub2.tgtK.x3']['x1'][0][0], 1.0, 1e-6)

        # Check the total derivatives in reverse mode
        prob.setup(check=False, mode='rev')
        prob.run_model()

        J = prob.compute_total_derivs(of=of, wrt=wrt, return_format='dict')
        assert_rel_error(self, J['sub2.tgtF.x3']['x1'][0][0], 1.8, 1e-6)
        assert_rel_error(self, J['sub2.tgtC.x3']['x1'][0][0], 1.0, 1e-6)
        assert_rel_error(self, J['sub2.tgtK.x3']['x1'][0][0], 1.0, 1e-6)

    def test_sellar(self):
        # Basic sellar test.
        prob = self.prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
        model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])
        model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
        model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])
        model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                                z=np.array([0.0, 0.0]), x=0.0),
                            promotes=['obj', 'x', 'z', 'y1', 'y2'])
        model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
        model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])
        nlbgs = prob.model.nonlinear_solver = NonlinearBlockGS()
        model.approx_total_derivs(method='fd', step=1e-5)
        prob.setup(check=False)
        prob.set_solver_print(level=0)
        prob.run_model()

        assert_rel_error(self, prob['y1'], 25.58830273, .00001)
        assert_rel_error(self, prob['y2'], 12.05848819, .00001)

        wrt = ['z']
        of = ['obj']
        J = prob.compute_total_derivs(of=of, wrt=wrt, return_format='flat_dict')
        assert_rel_error(self, J['obj', 'z'][0][0], 9.61001056, .00001)
        assert_rel_error(self, J['obj', 'z'][0][1], 1.78448534, .00001)


def title(txt):
    """ Provide nice title for parameterized testing."""
    return str(txt).split('.')[-1].replace("'", '').replace('>', '')
class TestGroupComplexStep(unittest.TestCase):

    def setUp(self):
        self.prob = Problem()

    def tearDown(self):
        # Global stuff seems to not get cleaned up if test fails.
        try:
            self.prob.model._outputs._vector_info._under_complex_step = False
        except Exception:
            pass

    @parameterized.expand(itertools.product(
        [DefaultVector, PETScVector],
    ), testcase_func_name=lambda f, n, p: 'test_paraboloid_' + '_'.join(title(a) for a in p.args))
    def test_paraboloid(self, vec_class):
        if not vec_class:
            raise unittest.SkipTest("PETSc is not installed")

        prob = self.prob
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
        model.add_subsystem('p2', IndepVarComp('y', 0.0), promotes=['y'])
        model.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])
        model.linear_solver = ScipyIterativeSolver()
        model.approx_total_derivs(method='cs')
        prob.setup(check=False, vector_class=vec_class, mode='fwd')
        prob.set_solver_print(level=0)
        prob.run_model()

        of = ['f_xy']
        wrt = ['x', 'y']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['f_xy', 'x'], [[-6.0]], 1e-6)
        assert_rel_error(self, derivs['f_xy', 'y'], [[8.0]], 1e-6)

        # 1 output x 2 inputs
        self.assertEqual(len(model._approx_schemes['cs']._exec_list), 2)

    @parameterized.expand(itertools.product(
        [DefaultVector, PETScVector],
    ), testcase_func_name=lambda f, n, p: 'test_paraboloid_subbed_' + '_'.join(title(a) for a in p.args))
    def test_paraboloid_subbed(self, vec_class):
        if not vec_class:
            raise unittest.SkipTest("PETSc is not installed")

        prob = self.prob
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
        model.add_subsystem('p2', IndepVarComp('y', 0.0), promotes=['y'])
        sub = model.add_subsystem('sub', Group(), promotes=['x', 'y', 'f_xy'])
        sub.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])
        model.linear_solver = ScipyIterativeSolver()
        sub.approx_total_derivs(method='cs')
        prob.setup(check=False, vector_class=vec_class, mode='fwd')
        prob.set_solver_print(level=0)
        prob.run_model()

        of = ['f_xy']
        wrt = ['x', 'y']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['f_xy', 'x'], [[-6.0]], 1e-6)
        assert_rel_error(self, derivs['f_xy', 'y'], [[8.0]], 1e-6)

        Jfd = sub.jacobian._subjacs
        assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.comp.x'], [[6.0]], 1e-6)
        assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.comp.y'], [[-8.0]], 1e-6)

        # 1 output x 2 inputs
        sub = model.get_subsystem('sub')
        self.assertEqual(len(sub._approx_schemes['cs']._exec_list), 2)

    @parameterized.expand(itertools.product(
        [DefaultVector, PETScVector],
    ), testcase_func_name=lambda f, n, p: 'test_paraboloid_subbed_with_connections_' + '_'.join(title(a) for a in p.args))
    def test_paraboloid_subbed_with_connections(self, vec_class):
        if not vec_class:
            raise unittest.SkipTest("PETSc is not installed")

        prob = self.prob
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 0.0))
        model.add_subsystem('p2', IndepVarComp('y', 0.0))
        sub = model.add_subsystem('sub', Group())
        sub.add_subsystem('bx', ExecComp('xout = xin'))
        sub.add_subsystem('by', ExecComp('yout = yin'))
        sub.add_subsystem('comp', Paraboloid())
        model.connect('p1.x', 'sub.bx.xin')
        model.connect('sub.bx.xout', 'sub.comp.x')
        model.connect('p2.y', 'sub.by.yin')
        model.connect('sub.by.yout', 'sub.comp.y')
        model.linear_solver = ScipyIterativeSolver()
        sub.approx_total_derivs(method='cs')
        prob.setup(check=False, vector_class=vec_class, mode='fwd')
        prob.set_solver_print(level=0)
        prob.run_model()

        of = ['sub.comp.f_xy']
        wrt = ['p1.x', 'p2.y']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['sub.comp.f_xy', 'p1.x'], [[-6.0]], 1e-6)
        assert_rel_error(self, derivs['sub.comp.f_xy', 'p2.y'], [[8.0]], 1e-6)

        Jfd = sub.jacobian._subjacs
        assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.bx.xin'], [[6.0]], 1e-6)
        assert_rel_error(self, Jfd['sub.comp.f_xy', 'sub.by.yin'], [[-8.0]], 1e-6)

        # 3 outputs x 2 inputs
        sub = model.get_subsystem('sub')
        self.assertEqual(len(sub._approx_schemes['cs']._exec_list), 6)
    @parameterized.expand(itertools.product(
        [DefaultVector, PETScVector],
    ), testcase_func_name=lambda f, n, p: 'test_arrray_comp_' + '_'.join(title(a) for a in p.args))
    def test_arrray_comp(self, vec_class):
        if not vec_class:
            raise unittest.SkipTest("PETSc is not installed")

        class DoubleArrayFD(DoubleArrayComp):

            def compute_partials(self, inputs, outputs, partials):
                """
                Override deriv calculation.
                """
                pass

        prob = self.prob
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x1', val=np.ones(2)))
        model.add_subsystem('p2', IndepVarComp('x2', val=np.ones(2)))
        comp = model.add_subsystem('comp', DoubleArrayFD())
        model.connect('p1.x1', 'comp.x1')
        model.connect('p2.x2', 'comp.x2')
        model.linear_solver = ScipyIterativeSolver()
        model.approx_total_derivs(method='cs')
        prob.setup(check=False, vector_class=vec_class)
        prob.run_model()
        model.run_linearize()

        Jfd = model.jacobian._subjacs
        assert_rel_error(self, Jfd['comp.y1', 'p1.x1'], -comp.JJ[0:2, 0:2], 1e-6)
        assert_rel_error(self, Jfd['comp.y1', 'p2.x2'], -comp.JJ[0:2, 2:4], 1e-6)
        assert_rel_error(self, Jfd['comp.y2', 'p1.x1'], -comp.JJ[2:4, 0:2], 1e-6)
        assert_rel_error(self, Jfd['comp.y2', 'p2.x2'], -comp.JJ[2:4, 2:4], 1e-6)

    @parameterized.expand(itertools.product(
        [DefaultVector, PETScVector],
    ), testcase_func_name=lambda f, n, p: 'test_unit_conv_group_' + '_'.join(title(a) for a in p.args))
    def test_unit_conv_group(self, vec_class):
        if not vec_class:
            raise unittest.SkipTest("PETSc is not installed")

        prob = self.prob
        prob.model = Group()
        prob.model.add_subsystem('px1', IndepVarComp('x1', 100.0), promotes=['x1'])
        sub1 = prob.model.add_subsystem('sub1', Group())
        sub2 = prob.model.add_subsystem('sub2', Group())
        sub1.add_subsystem('src', SrcComp())
        sub2.add_subsystem('tgtF', TgtCompF())
        sub2.add_subsystem('tgtC', TgtCompC())
        sub2.add_subsystem('tgtK', TgtCompK())
        prob.model.connect('x1', 'sub1.src.x1')
        prob.model.connect('sub1.src.x2', 'sub2.tgtF.x2')
        prob.model.connect('sub1.src.x2', 'sub2.tgtC.x2')
        prob.model.connect('sub1.src.x2', 'sub2.tgtK.x2')
        sub2.approx_total_derivs(method='cs')
        prob.setup(check=False, vector_class=vec_class)
        prob.run_model()

        assert_rel_error(self, prob['sub1.src.x2'], 100.0, 1e-6)
        assert_rel_error(self, prob['sub2.tgtF.x3'], 212.0, 1e-6)
        assert_rel_error(self, prob['sub2.tgtC.x3'], 100.0, 1e-6)
        assert_rel_error(self, prob['sub2.tgtK.x3'], 373.15, 1e-6)

        wrt = ['x1']
        of = ['sub2.tgtF.x3', 'sub2.tgtC.x3', 'sub2.tgtK.x3']
        J = prob.compute_total_derivs(of=of, wrt=wrt, return_format='dict')
        assert_rel_error(self, J['sub2.tgtF.x3']['x1'][0][0], 1.8, 1e-6)
        assert_rel_error(self, J['sub2.tgtC.x3']['x1'][0][0], 1.0, 1e-6)
        assert_rel_error(self, J['sub2.tgtK.x3']['x1'][0][0], 1.0, 1e-6)

        # Check the total derivatives in reverse mode
        prob.setup(check=False, vector_class=vec_class, mode='rev')
        prob.run_model()

        J = prob.compute_total_derivs(of=of, wrt=wrt, return_format='dict')
        assert_rel_error(self, J['sub2.tgtF.x3']['x1'][0][0], 1.8, 1e-6)
        assert_rel_error(self, J['sub2.tgtC.x3']['x1'][0][0], 1.0, 1e-6)
        assert_rel_error(self, J['sub2.tgtK.x3']['x1'][0][0], 1.0, 1e-6)

    @parameterized.expand(itertools.product(
        [DefaultVector, PETScVector],
    ), testcase_func_name=lambda f, n, p: 'test_sellar_' + '_'.join(title(a) for a in p.args))
    def test_sellar(self, vec_class):
        # Basic sellar test.
        if not vec_class:
            raise unittest.SkipTest("PETSc is not installed")

        prob = self.prob
        model = prob.model = Group()
        model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
        model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])
        model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
        model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])
        model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                                z=np.array([0.0, 0.0]), x=0.0),
                            promotes=['obj', 'x', 'z', 'y1', 'y2'])
        model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
        model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])
        nlbgs = prob.model.nonlinear_solver = NonlinearBlockGS()

        # Had to make this step larger so that solver would reconverge adequately.
        model.approx_total_derivs(method='cs', step=1.0e-1)
        prob.setup(check=False, vector_class=vec_class)
        prob.set_solver_print(level=0)
        prob.run_model()

        assert_rel_error(self, prob['y1'], 25.58830273, .00001)
        assert_rel_error(self, prob['y2'], 12.05848819, .00001)

        wrt = ['z']
        of = ['obj']
        J = prob.compute_total_derivs(of=of, wrt=wrt, return_format='flat_dict')
        assert_rel_error(self, J['obj', 'z'][0][0], 9.61001056, .00001)
        assert_rel_error(self, J['obj', 'z'][0][1], 1.78448534, .00001)
class TestComponentComplexStep(unittest.TestCase):

    def tearDown(self):
        # Global stuff seems to not get cleaned up if test fails.
        self.prob.model._outputs._vector_info._under_complex_step = False

    def test_implicit_component(self):

        class TestImplCompArrayDense(TestImplCompArray):

            def setup(self):
                super(TestImplCompArrayDense, self).setup()
                self.approx_partials('*', '*', method='cs')

        prob = self.prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('p_rhs', IndepVarComp('rhs', val=np.ones(2)))
        sub = model.add_subsystem('sub', Group())
        comp = sub.add_subsystem('comp', TestImplCompArrayDense())
        model.connect('p_rhs.rhs', 'sub.comp.rhs')
        model.linear_solver = ScipyIterativeSolver()
        prob.setup(check=False)
        prob.run_model()
        model.run_linearize()

        Jfd = comp.jacobian._subjacs
        assert_rel_error(self, Jfd['sub.comp.x', 'sub.comp.rhs'], -np.eye(2), 1e-6)
        assert_rel_error(self, Jfd['sub.comp.x', 'sub.comp.x'], comp.mtx, 1e-6)

    def test_reconfigure(self):
        # In this test, we switch to 'cs' when we reconfigure.

        class TestImplCompArrayDense(TestImplCompArray):

            def initialize(self):
                self.mtx = np.array([
                    [0.99, 0.01],
                    [0.01, 0.99],
                ])
                self.count = 0

            def setup(self):
                super(TestImplCompArrayDense, self).setup()
                if self.count > 0:
                    self.approx_partials('*', '*', method='cs')
                else:
                    self.approx_partials('*', '*', method='fd')
                self.count += 1

        prob = self.prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('p_rhs', IndepVarComp('rhs', val=np.ones(2)))
        sub = model.add_subsystem('sub', Group())
        comp = sub.add_subsystem('comp', TestImplCompArrayDense())
        model.connect('p_rhs.rhs', 'sub.comp.rhs')
        model.linear_solver = ScipyIterativeSolver()
        prob.setup(check=False)
        prob.run_model()

        with self.assertRaises(RuntimeError) as context:
            model.resetup(setup_mode='reconf')

        msg = 'In order to activate complex step during reconfiguration, you need to set ' + \
              '"force_alloc_complex" to True during setup.'
        self.assertEqual(str(context.exception), msg)

        # This time, allocate complex in setup.
        prob.setup(check=False, force_alloc_complex=True)
        prob.run_model()
        model.resetup(setup_mode='reconf')
        prob.run_model()
        model.run_linearize()

        Jfd = comp.jacobian._subjacs
        assert_rel_error(self, Jfd['sub.comp.x', 'sub.comp.rhs'], -np.eye(2), 1e-6)
        assert_rel_error(self, Jfd['sub.comp.x', 'sub.comp.x'], comp.mtx, 1e-6)

    def test_vector_methods(self):

        class KenComp(ExplicitComponent):

            def setup(self):
                self.add_input('x1', np.array([[7.0, 3.0], [2.4, 3.33]]))
                self.add_output('y1', np.zeros((2, 2)))
                self.approx_partials('*', '*', method='cs')

            def compute(self, inputs, outputs):
                x1 = inputs['x1']
                outputs['y1'] = x1
                outputs['y1'][0][0] += 14.0
                outputs['y1'][0][1] *= 3.0
                outputs['y1'][1][0] -= 6.67
                outputs['y1'][1][1] /= 2.34
                pass  # outputs['y1'] *= 1.0

        prob = self.prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('px', IndepVarComp('x', val=np.array([[7.0, 3.0], [2.4, 3.33]])))
        model.add_subsystem('comp', KenComp())
        model.connect('px.x', 'comp.x1')

        prob.setup(check=False)
        prob.run_model()

        of = ['comp.y1']
        wrt = ['px.x']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['comp.y1', 'px.x'][0][0], 1.0, 1e-6)
        assert_rel_error(self, derivs['comp.y1', 'px.x'][1][1], 3.0, 1e-6)
        assert_rel_error(self, derivs['comp.y1', 'px.x'][2][2], 1.0, 1e-6)
        assert_rel_error(self, derivs['comp.y1', 'px.x'][3][3], 1.0/2.34, 1e-6)
class ApproxTotalsFeature(unittest.TestCase):

    def test_basic(self):

        class CompOne(ExplicitComponent):

            def setup(self):
                self.add_input('x', val=0.0)
                self.add_output('y', val=np.zeros(25))
                self._exec_count = 0

            def compute(self, inputs, outputs):
                x = inputs['x']
                outputs['y'] = np.arange(25) * x
                self._exec_count += 1

        class CompTwo(ExplicitComponent):

            def setup(self):
                self.add_input('y', val=np.zeros(25))
                self.add_output('z', val=0.0)
                self._exec_count = 0

            def compute(self, inputs, outputs):
                y = inputs['y']
                outputs['z'] = np.sum(y)
                self._exec_count += 1

        prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
        model.add_subsystem('comp1', CompOne(), promotes=['x', 'y'])
        comp2 = model.add_subsystem('comp2', CompTwo(), promotes=['y', 'z'])
        model.linear_solver = ScipyIterativeSolver()
        model.approx_total_derivs()
        prob.setup()
        prob.run_model()

        of = ['z']
        wrt = ['x']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['z', 'x'], [[300.0]], 1e-6)
        self.assertEqual(comp2._exec_count, 3)

    def test_basic_cs(self):

        class CompOne(ExplicitComponent):

            def setup(self):
                self.add_input('x', val=0.0)
                self.add_output('y', val=np.zeros(25))
                self._exec_count = 0

            def compute(self, inputs, outputs):
                x = inputs['x']
                outputs['y'] = np.arange(25) * x
                self._exec_count += 1

        class CompTwo(ExplicitComponent):

            def setup(self):
                self.add_input('y', val=np.zeros(25))
                self.add_output('z', val=0.0)
                self._exec_count = 0

            def compute(self, inputs, outputs):
                y = inputs['y']
                outputs['z'] = np.sum(y)
                self._exec_count += 1

        prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 0.0), promotes=['x'])
        model.add_subsystem('comp1', CompOne(), promotes=['x', 'y'])
        comp2 = model.add_subsystem('comp2', CompTwo(), promotes=['y', 'z'])
        model.linear_solver = ScipyIterativeSolver()
        model.approx_total_derivs(method='cs')
        prob.setup()
        prob.run_model()

        of = ['z']
        wrt = ['x']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['z', 'x'], [[300.0]], 1e-6)

    def test_arguments(self):

        class CompOne(ExplicitComponent):

            def setup(self):
                self.add_input('x', val=0.0)
                self.add_output('y', val=np.zeros(25))
                self._exec_count = 0

            def compute(self, inputs, outputs):
                x = inputs['x']
                outputs['y'] = np.arange(25) * x
                self._exec_count += 1

        class CompTwo(ExplicitComponent):

            def setup(self):
                self.add_input('y', val=np.zeros(25))
                self.add_output('z', val=0.0)
                self._exec_count = 0

            def compute(self, inputs, outputs):
                y = inputs['y']
                outputs['z'] = np.sum(y)
                self._exec_count += 1

        prob = Problem()
        model = prob.model = Group()
        model.add_subsystem('p1', IndepVarComp('x', 1.0), promotes=['x'])
        model.add_subsystem('comp1', CompOne(), promotes=['x', 'y'])
        comp2 = model.add_subsystem('comp2', CompTwo(), promotes=['y', 'z'])
        model.linear_solver = ScipyIterativeSolver()
        model.approx_total_derivs(method='fd', step=1e-7, form='central', step_calc='rel')
        prob.setup()
        prob.run_model()

        of = ['z']
        wrt = ['x']
        derivs = prob.compute_total_derivs(of=of, wrt=wrt)
        assert_rel_error(self, derivs['z', 'x'], [[300.0]], 1e-6)

    def test_sellarCS(self):
        # Just tests Newton on Sellar with FD derivs.
        prob = Problem()
        prob.model = SellarNoDerivativesCS()

        prob.setup(check=False)
        prob.run_model()

        assert_rel_error(self, prob['y1'], 25.58830273, .00001)
        assert_rel_error(self, prob['y2'], 12.05848819, .00001)

        # Make sure we aren't iterating like crazy
        self.assertLess(prob.model.nonlinear_solver._iter_count, 8)


if __name__ == "__main__":
    unittest.main()
ed71b659b824128e12c92a5be3b2c8f019dc2ff4 | 3,685 | py | Python | ambra_sdk/service/entrypoints/generated/training.py | dicomgrid/sdk-python | Apache-2.0 | 9 stars | 13 issues | 6 forks

""" Training.
Do not edit this file by hand.
This is generated by parsing api.html service doc.
"""

from ambra_sdk.exceptions.service import AllDone
from ambra_sdk.exceptions.service import NotFound
from ambra_sdk.service.query import QueryO
from ambra_sdk.service.query import AsyncQueryO


class Training:
    """Training."""

    def __init__(self, api):
        self._api = api

    def todo(
        self,
    ):
        """Todo.
        """
        request_data = {
        }
        errors_mapping = {}
        errors_mapping[('ALL_DONE', None)] = AllDone('No more training is needed')
        query_data = {
            'api': self._api,
            'url': '/training/todo',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)

    def done(
        self,
        account_id,
        form_number,
        additional_parameters=None,
    ):
        """Done.

        :param account_id: Id of the account the training is for
        :param form_number: The formstack id of the form
        :param additional_parameters: Additional parameters will be logged as part of the TRAINING_DONE user audit event
        """
        request_data = {
            'account_id': account_id,
            'form_number': form_number,
        }
        if additional_parameters is not None:
            additional_parameters_dict = {'{prefix}{k}'.format(prefix='', k=k): v for k, v in additional_parameters.items()}
            request_data.update(additional_parameters_dict)
        errors_mapping = {}
        errors_mapping[('NOT_FOUND', None)] = NotFound('The form was not found for this user')
        query_data = {
            'api': self._api,
            'url': '/training/done',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return QueryO(**query_data)


class AsyncTraining:
    """AsyncTraining."""

    def __init__(self, api):
        self._api = api

    def todo(
        self,
    ):
        """Todo.
        """
        request_data = {
        }
        errors_mapping = {}
        errors_mapping[('ALL_DONE', None)] = AllDone('No more training is needed')
        query_data = {
            'api': self._api,
            'url': '/training/todo',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return AsyncQueryO(**query_data)

    def done(
        self,
        account_id,
        form_number,
        additional_parameters=None,
    ):
        """Done.

        :param account_id: Id of the account the training is for
        :param form_number: The formstack id of the form
        :param additional_parameters: Additional parameters will be logged as part of the TRAINING_DONE user audit event
        """
        request_data = {
            'account_id': account_id,
            'form_number': form_number,
        }
        if additional_parameters is not None:
            additional_parameters_dict = {'{prefix}{k}'.format(prefix='', k=k): v for k, v in additional_parameters.items()}
            request_data.update(additional_parameters_dict)
        errors_mapping = {}
        errors_mapping[('NOT_FOUND', None)] = NotFound('The form was not found for this user')
        query_data = {
            'api': self._api,
            'url': '/training/done',
            'request_data': request_data,
            'errors_mapping': errors_mapping,
            'required_sid': True,
        }
        return AsyncQueryO(**query_data)
71ed810ce9e9c3e314120c17d6a64033d51b6d57 | 3,012 | py | Python | eventsourcing/infrastructure/cassandra/records.py | scbabacus/eventsourcing | BSD-3-Clause | 1 star

from cassandra.cqlengine.models import columns, Model
class IntegerSequencedRecord(Model):
    """Stores integer-sequenced items in Cassandra."""

    __table_name__ = 'integer_sequenced_items'
    _if_not_exists = True

    # Sequence ID (e.g. an entity or aggregate ID).
    sequence_id = columns.UUID(partition_key=True)

    # Position (index) of item in sequence.
    position = columns.BigInt(clustering_order='DESC', primary_key=True)

    # Topic of the item (e.g. path to domain event class).
    topic = columns.Text(required=True)

    # State of the item (serialized dict, possibly encrypted).
    data = columns.Text(required=True)


class TimestampSequencedRecord(Model):
    """Stores timestamp-sequenced items in Cassandra."""

    __table_name__ = 'timestamp_sequenced_items'
    _if_not_exists = True

    # Sequence ID (e.g. an entity or aggregate ID).
    sequence_id = columns.UUID(partition_key=True)

    # Position (in time) of item in sequence.
    position = columns.Decimal(clustering_order='DESC', primary_key=True)

    # Topic of the item (e.g. path to domain event class).
    topic = columns.Text(required=True)

    # State of the item (serialized dict, possibly encrypted).
    data = columns.Text(required=True)


class TimeuuidSequencedRecord(Model):
    """Stores timeuuid-sequenced items in Cassandra."""

    __table_name__ = 'timeuuid_sequenced_items'
    _if_not_exists = True

    # Sequence UUID (e.g. an entity or aggregate ID).
    sequence_id = columns.UUID(partition_key=True)

    # Position (in time) of item in sequence.
    position = columns.TimeUUID(clustering_order='DESC', primary_key=True)

    # Topic of the item (e.g. path to domain event class).
    topic = columns.Text(required=True)

    # State of the item (serialized dict, possibly encrypted).
    data = columns.Text(required=True)


class SnapshotRecord(Model):
    """Stores snapshots in Cassandra."""

    __table_name__ = 'snapshots'
    _if_not_exists = True

    # Sequence ID (e.g. an entity or aggregate ID).
    sequence_id = columns.UUID(partition_key=True)

    # Position (index) of item in sequence.
    position = columns.BigInt(clustering_order='DESC', primary_key=True)

    # Topic of the item (e.g. path to domain entity class).
    topic = columns.Text(required=True)

    # State of the entity (serialized dict, possibly encrypted).
    data = columns.Text(required=True)


class StoredEventRecord(Model):
    """Stores integer-sequenced items in Cassandra."""

    __table_name__ = 'stored_events'
    _if_not_exists = True

    # Aggregate ID (e.g. an entity or aggregate ID).
    originator_id = columns.UUID(partition_key=True)

    # Aggregate version (index) of item in sequence.
    originator_version = columns.BigInt(clustering_order='DESC', primary_key=True)

    # Topic of the item (e.g. path to domain event class).
    event_type = columns.Text(required=True)

    # State of the item (serialized dict, possibly encrypted).
    state = columns.Text(required=True)
9c12320a94010b99a31fb540423966e20085b449 | 241,697 | py | Python | catboost/pytest/test.py | karina-usmanova/catboost | Apache-2.0

import yatest.common
from yatest.common import ExecutionTimeoutError, ExecutionError
import pytest
import os
import filecmp
import numpy as np
import timeit
import json
import catboost
from catboost_pytest_lib import (
apply_catboost,
compare_evals,
compare_evals_with_precision,
data_file,
execute_catboost_fit,
format_crossvalidation,
generate_random_labeled_set,
local_canonical_file,
permute_dataset_columns,
remove_time_from_json,
execute_dist_train,
)
CATBOOST_PATH = yatest.common.binary_path("catboost/app/catboost")
BOOSTING_TYPE = ['Ordered', 'Plain']
PREDICTION_TYPES = ['Probability', 'RawFormulaVal', 'Class']
BINCLASS_LOSSES = ['Logloss', 'CrossEntropy']
MULTICLASS_LOSSES = ['MultiClass', 'MultiClassOneVsAll']
CLASSIFICATION_LOSSES = BINCLASS_LOSSES + MULTICLASS_LOSSES
REGRESSION_LOSSES = ['MAE', 'MAPE', 'Poisson', 'Quantile', 'RMSE', 'LogLinQuantile', 'Lq']
PAIRWISE_LOSSES = ['PairLogit', 'PairLogitPairwise']
GROUPWISE_LOSSES = ['YetiRank', 'YetiRankPairwise', 'QueryRMSE', 'QuerySoftMax']
RANKING_LOSSES = PAIRWISE_LOSSES + GROUPWISE_LOSSES
ALL_LOSSES = CLASSIFICATION_LOSSES + REGRESSION_LOSSES + RANKING_LOSSES
SAMPLING_UNIT_TYPES = ['Object', 'Group']
OVERFITTING_DETECTOR_TYPE = ['IncToDec', 'Iter']
# test both parallel in and non-parallel modes
# default block size (5000000) is too big to run in parallel on these tests
SCORE_CALC_OBJ_BLOCK_SIZES = ['60', '5000000']
SCORE_CALC_OBJ_BLOCK_SIZES_IDS = ['calc_block=60', 'calc_block=5000000']


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_queryrmse(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_queryrmse_newton_gradient(boosting_type, dev_score_calc_obj_block_size):
    newton_eval_path = yatest.common.test_output_path('newton.eval')
    gradient_eval_path = yatest.common.test_output_path('gradient.eval')

    def run_catboost(eval_path, leaf_estimation_method):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'QueryRMSE',
            '-f', data_file('querywise', 'train'),
            '-t', data_file('querywise', 'test'),
            '--column-description', data_file('querywise', 'train.cd'),
            '--boosting-type', boosting_type,
            '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
            '--leaf-estimation-method', leaf_estimation_method,
            '-i', '20',
            '-T', '4',
            '--eval-file', eval_path,
            '--use-best-model', 'false',
        ]
        yatest.common.execute(cmd)

    run_catboost(newton_eval_path, 'Newton')
    run_catboost(gradient_eval_path, 'Gradient')
    assert filecmp.cmp(newton_eval_path, gradient_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_pool_with_QueryId(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd.query_id'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_rmse_on_qwise_pool(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'RMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_averagegain(boosting_type):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '--custom-metric', 'AverageGain:top=2;hints=skip_train~false',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_queryaverage(boosting_type):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '--custom-metric', 'QueryAverage:top=2;hints=skip_train~false',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('sigma', ['sigma=' + str(sigma) for sigma in [0.01, 1, 10]])
@pytest.mark.parametrize('num_estimations', ['num_estimations=' + str(n_estim) for n_estim in [1, 100]])
def test_stochastic_filter(sigma, num_estimations):
    model_path = yatest.common.test_output_path('model.bin')
    cd_path = yatest.common.test_output_path('pool.cd')
    train_path = yatest.common.test_output_path('train.txt')
    test_path = yatest.common.test_output_path('test.txt')

    prng = np.random.RandomState(seed=0)

    n_samples_by_query = 20
    n_features = 10
    n_queries = 50
    n_samples = n_samples_by_query * n_queries

    features = prng.uniform(0, 1, size=(n_samples, n_features))
    weights = prng.uniform(0, 1, size=n_features)

    labels = np.dot(features, weights)
    query_ids = np.arange(0, n_samples) // n_queries
    money = (n_queries - np.arange(0, n_samples) % n_queries) * 10

    labels = labels.reshape((n_samples, 1))
    query_ids = query_ids.reshape((n_samples, 1))
    money = money.reshape((n_samples, 1))

    features = np.hstack((labels, query_ids, money, features))

    n_learn = int(0.7 * n_samples)
    learn = features[:n_learn, :]
    test = features[n_learn:, :]
    np.savetxt(train_path, learn, fmt='%.5f', delimiter='\t')
    np.savetxt(test_path, test, fmt='%.5f', delimiter='\t')
    np.savetxt(cd_path, [[0, 'Target'], [1, 'GroupId']], fmt='%s', delimiter='\t')

    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    learn_error_one_thread_path = yatest.common.test_output_path('learn_error_one_thread.tsv')
    test_error_one_thread_path = yatest.common.test_output_path('test_error_one_thread.tsv')
    loss_description = 'StochasticFilter:' + sigma + ';' + num_estimations

    cmd = [
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_description,
        '--leaf-estimation-backtracking', 'No',
        '-f', train_path,
        '-t', test_path,
        '--column-description', cd_path,
        '--boosting-type', 'Plain',
        '-i', '20',
        '-m', model_path,
        '--use-best-model', 'false',
    ]
    cmd_one_thread = cmd + [
        '--learn-err-log', learn_error_one_thread_path,
        '--test-err-log', test_error_one_thread_path,
        '-T', '1'
    ]
    cmd_four_thread = cmd + [
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '-T', '4'
    ]
    yatest.common.execute(cmd_one_thread)
    yatest.common.execute(cmd_four_thread)

    compare_evals(learn_error_one_thread_path, learn_error_path)
    compare_evals(test_error_one_thread_path, test_error_path)

    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('top', [2, 100])
def test_averagegain_with_query_weights(boosting_type, top):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd.group_weight'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '--custom-metric', 'AverageGain:top={};hints=skip_train~false'.format(top),
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('top_size', [2, 5, 10, -1])
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('cd_file', ['train.cd', 'train.cd.subgroup_id'])
def test_pfound(top_size, boosting_type, cd_file):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', cd_file),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '--custom-metric', 'PFound:top={};hints=skip_train~false'.format(top_size),
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


def test_recall_at_k():
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', 'Ordered',
        '-i', '10',
        '-T', '4',
        '--custom-metric', 'RecallAt:top=3;border=0',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


def test_precision_at_k():
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', 'Ordered',
        '-i', '10',
        '-T', '4',
        '--custom-metric', 'PrecisionAt:top=3;border=0',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_mapk(boosting_type):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '--custom-metric', 'MAP:top={}'.format(10),
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('ndcg_power_mode', ['Base', 'Exp'])
def test_ndcg(boosting_type, ndcg_power_mode):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '--custom-metric', 'NDCG:top={};type={};hints=skip_train~false'.format(10, ndcg_power_mode),
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


def test_queryrmse_approx_on_full_history():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--approx-on-full-history',
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_pairlogit(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')

    def run_catboost(eval_path, learn_pairs):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'PairLogit',
            '--eval-metric', 'PairAccuracy',
            '-f', data_file('querywise', 'train'),
            '-t', data_file('querywise', 'test'),
            '--column-description', data_file('querywise', 'train.cd'),
            '--learn-pairs', data_file('querywise', learn_pairs),
            '--test-pairs', data_file('querywise', 'test.pairs'),
            '--boosting-type', boosting_type,
            '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
            '--ctr', 'Borders,Counter',
            '--l2-leaf-reg', '0',
            '-i', '20',
            '-T', '4',
            '-m', output_model_path,
            '--eval-file', eval_path,
            '--learn-err-log', learn_error_path,
            '--test-err-log', test_error_path,
            '--use-best-model', 'false',
        ]
        yatest.common.execute(cmd)

    run_catboost(output_eval_path, 'train.pairs')

    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path),
            local_canonical_file(output_eval_path)]


def test_pairs_generation():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')

    def run_catboost(eval_path):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'PairLogit',
            '--eval-metric', 'PairAccuracy',
            '-f', data_file('querywise', 'train'),
            '-t', data_file('querywise', 'test'),
            '--column-description', data_file('querywise', 'train.cd'),
            '--ctr', 'Borders,Counter',
            '--l2-leaf-reg', '0',
            '-i', '20',
            '-T', '4',
            '-m', output_model_path,
            '--eval-file', eval_path,
            '--learn-err-log', learn_error_path,
            '--test-err-log', test_error_path,
            '--use-best-model', 'false',
        ]
        yatest.common.execute(cmd)

    run_catboost(output_eval_path)

    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path),
            local_canonical_file(output_eval_path)]


def test_pairs_generation_with_max_pairs():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')

    def run_catboost(eval_path):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'PairLogit:max_pairs=30',
            '--eval-metric', 'PairLogit:max_pairs=30',
            '-f', data_file('querywise', 'train'),
            '-t', data_file('querywise', 'test'),
            '--column-description', data_file('querywise', 'train.cd'),
            '--ctr', 'Borders,Counter',
            '--l2-leaf-reg', '0',
            '-i', '20',
            '-T', '4',
            '-m', output_model_path,
            '--eval-file', eval_path,
            '--learn-err-log', learn_error_path,
            '--test-err-log', test_error_path,
            '--use-best-model', 'false',
        ]
        yatest.common.execute(cmd)

    run_catboost(output_eval_path)

    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path),
            local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_pairlogit_no_target(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'PairLogit',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd.no_target'),
        '--learn-pairs', data_file('querywise', 'train.pairs'),
        '--test-pairs', data_file('querywise', 'test.pairs'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_pairlogit_approx_on_full_history():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'PairLogit',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--learn-pairs', data_file('querywise', 'train.pairs'),
        '--test-pairs', data_file('querywise', 'test.pairs'),
        '--approx-on-full-history',
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
@pytest.mark.parametrize('pairs_file', ['train.pairs', 'train.pairs.weighted'])
def test_pairlogit_pairwise(pairs_file, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'PairLogitPairwise',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--learn-pairs', data_file('querywise', 'train.pairs'),
        '--test-pairs', data_file('querywise', 'test.pairs'),
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_yetirank(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'YetiRank',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', ['QueryRMSE', 'PairLogit', 'YetiRank', 'PairLogitPairwise', 'YetiRankPairwise'])
def test_pairwise_reproducibility(loss_function):

    def run_catboost(threads, model_path, eval_path):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--use-best-model', 'false',
            '--loss-function', loss_function,
            '-f', data_file('querywise', 'train'),
            '-t', data_file('querywise', 'test'),
            '--learn-pairs', data_file('querywise', 'train.pairs'),
            '--test-pairs', data_file('querywise', 'test.pairs'),
            '--cd', data_file('querywise', 'train.cd'),
            '-i', '5',
            '-T', str(threads),
            '-m', model_path,
            '--eval-file', eval_path,
        ]
        yatest.common.execute(cmd)

    model_1 = yatest.common.test_output_path('model_1.bin')
    eval_1 = yatest.common.test_output_path('test_1.eval')
    run_catboost(1, model_1, eval_1)

    model_4 = yatest.common.test_output_path('model_4.bin')
    eval_4 = yatest.common.test_output_path('test_4.eval')
    run_catboost(4, model_4, eval_4)

    assert filecmp.cmp(eval_1, eval_4)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_yetirank_with_params(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'YetiRank:permutations=5;decay=0.9',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_yetirank_pairwise(dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'YetiRankPairwise',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', ('YetiRank', 'YetiRankPairwise'))
def test_yetirank_default_metric(loss_function):
    output_model_path = yatest.common.test_output_path('model.bin')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '--has-header',
        '-f', data_file('black_friday', 'train'),
        '-t', data_file('black_friday', 'test'),
        '--column-description', data_file('black_friday', 'cd'),
        '--model-file', output_model_path,
        '--boosting-type', 'Plain',
        '-i', '10',
        '-T', '4',
        '--test-err-log', test_error_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(test_error_path)]


NAN_MODE = ['Min', 'Max']


@pytest.mark.parametrize('nan_mode', NAN_MODE)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_nan_mode(nan_mode, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '-f', data_file('adult_nan', 'train_small'),
        '-t', data_file('adult_nan', 'test_small'),
        '--column-description', data_file('adult_nan', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--nan-mode', nan_mode,
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult_nan', 'test_small'),
        '--column-description', data_file('adult_nan', 'train.cd'),
        '-m', output_model_path,
        '--output-path', formula_predict_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(calc_cmd)
    assert compare_evals(output_eval_path, formula_predict_path)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('border_count', [64, 255, 350, 1000, 2500])
def test_different_border_count(border_count):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    train_path = data_file('querywise', 'train')
    test_path = data_file('querywise', 'test')
    cd_path = data_file('querywise', 'train.cd')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '-f', train_path,
        '-t', test_path,
        '--column-description', cd_path,
        '-i', '20',
        '-T', '4',
        '-x', str(border_count),
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', test_path,
        '--column-description', cd_path,
        '-m', output_model_path,
        '--output-path', formula_predict_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(calc_cmd)
    assert compare_evals(output_eval_path, formula_predict_path)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_nan_mode_forbidden(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--nan-mode', 'Forbidden',
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_overfit_detector_iter(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '2000',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '-x', '1',
        '-n', '8',
        '-w', '0.5',
        '--rsm', '1',
        '--od-type', 'Iter',
        '--od-wait', '2',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_overfit_detector_inc_to_dec(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '2000',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '-x', '1',
        '-n', '8',
        '-w', '0.5',
        '--rsm', '1',
        '--od-pval', '0.5',
        '--od-type', 'IncToDec',
        '--od-wait', '2',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('overfitting_detector_type', OVERFITTING_DETECTOR_TYPE)
def test_overfit_detector_with_resume_from_snapshot(boosting_type, overfitting_detector_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    snapshot_path = yatest.common.test_output_path('snapshot')

    cmd_prefix = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '-x', '1',
        '-n', '8',
        '-w', '0.5',
        '--rsm', '1',
        '--snapshot-file', snapshot_path,
        '--od-type', overfitting_detector_type
    )
    if overfitting_detector_type == 'IncToDec':
        cmd_prefix += (
            '--od-wait', '2',
            '--od-pval', '0.5'
        )
    elif overfitting_detector_type == 'Iter':
        cmd_prefix += ('--od-wait', '2')

    cmd_first = cmd_prefix + ('-i', '10')
    yatest.common.execute(cmd_first)
    cmd_second = cmd_prefix + ('-i', '2000')
    yatest.common.execute(cmd_second)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_shrink_model(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '100',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '-x', '1',
        '-n', '8',
        '-w', '1',
        '--od-pval', '0.99',
        '--rsm', '1',
        '--use-best-model', 'true'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


LOSS_FUNCTIONS = ['RMSE', 'Logloss', 'MAE', 'CrossEntropy', 'Quantile', 'LogLinQuantile', 'Poisson', 'MAPE', 'MultiClass', 'MultiClassOneVsAll']
LEAF_ESTIMATION_METHOD = ['Gradient', 'Newton']


@pytest.mark.parametrize('leaf_estimation_method', LEAF_ESTIMATION_METHOD)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_multi_leaf_estimation_method(leaf_estimation_method, boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'MultiClass',
        '-f', data_file('cloudness_small', 'train_small'),
        '-t', data_file('cloudness_small', 'test_small'),
        '--column-description', data_file('cloudness_small', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--leaf-estimation-method', leaf_estimation_method,
        '--leaf-estimation-iterations', '2',
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('cloudness_small', 'test_small'),
        '--column-description', data_file('cloudness_small', 'train.cd'),
        '-m', output_model_path,
        '--output-path', formula_predict_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(calc_cmd)
    assert compare_evals(output_eval_path, formula_predict_path)
    return [local_canonical_file(output_eval_path)]


LOSS_FUNCTIONS_SHORT = ['Logloss', 'MultiClass']


@pytest.mark.parametrize(
    'loss_function',
    LOSS_FUNCTIONS_SHORT,
    ids=['loss_function=%s' % loss_function for loss_function in LOSS_FUNCTIONS_SHORT]
)
@pytest.mark.parametrize(
    'column_name',
    ['doc_id', 'sample_id'],
    ids=['column_name=doc_id', 'column_name=sample_id']
)
def test_sample_id(loss_function, column_name):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    column_description = data_file('adult_' + column_name, 'train.cd')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '-f', data_file('adult_doc_id', 'train'),
        '-t', data_file('adult_doc_id', 'test'),
        '--column-description', column_description,
        '--boosting-type', 'Plain',
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult_doc_id', 'test'),
        '--column-description', column_description,
        '-m', output_model_path,
        '--output-path', formula_predict_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(cmd)
    assert compare_evals(output_eval_path, formula_predict_path)
    return [local_canonical_file(output_eval_path)]


POOLS = ['amazon', 'adult']


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_apply_missing_vals(boosting_type):
    model_path = yatest.common.test_output_path('adult_model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', model_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('test_adult_missing_val.tsv'),
        '--column-description', data_file('adult', 'train.cd'),
        '-m', model_path,
        '--output-path', output_eval_path
    )
    yatest.common.execute(calc_cmd)
    return local_canonical_file(output_eval_path)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_crossentropy(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'CrossEntropy',
'-f', data_file('adult_crossentropy', 'train_proba'),
'-t', data_file('adult_crossentropy', 'test_proba'),
'--column-description', data_file('adult_crossentropy', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_permutation_block(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'--fold-permutation-block', '239',
'--use-best-model', 'false',
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_ignored_features(boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'-I', '0:1:3:5-7:10000',
'--eval-file', output_eval_path,
'--use-best-model', 'false',
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
def test_ignored_features_not_read():
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
input_cd_path = data_file('adult', 'train.cd')
cd_path = yatest.common.test_output_path('train.cd')
with open(input_cd_path, "rt") as f:
cd_lines = f.readlines()
with open(cd_path, "wt") as f:
for cd_line in cd_lines:
            # Corrupt some features by making them 'Num'
            # (str.split() returns a list, so compare against lists, not tuples)
            if cd_line.split() == ['5', 'Categ']:  # column 5 --> feature 4
                cd_line = cd_line.replace('Categ', 'Num')
            if cd_line.split() == ['7', 'Categ']:  # column 7 --> feature 6
                cd_line = cd_line.replace('Categ', 'Num')
f.write(cd_line)
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', cd_path,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'-I', '4:6', # Ignore the corrupted features
'--eval-file', output_eval_path,
'--use-best-model', 'false',
)
yatest.common.execute(cmd)
# Not needed: return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_baseline(boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult_weight', 'train_weight'),
'-t', data_file('adult_weight', 'test_weight'),
'--column-description', data_file('train_adult_baseline.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'--use-best-model', 'false',
)
yatest.common.execute(cmd)
formula_predict_path = yatest.common.test_output_path('predict_test.eval')
calc_cmd = (
CATBOOST_PATH,
'calc',
'--input-path', data_file('adult_weight', 'test_weight'),
'--column-description', data_file('train_adult_baseline.cd'),
'-m', output_model_path,
'--output-path', formula_predict_path,
'--prediction-type', 'RawFormulaVal'
)
yatest.common.execute(calc_cmd)
assert(compare_evals(output_eval_path, formula_predict_path))
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
def test_multiclass_baseline(boosting_type, loss_function):
labels = ['0', '1', '2', '3']
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target'], [1, 'Baseline'], [2, 'Baseline'], [3, 'Baseline'], [4, 'Baseline']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'--boosting-type', boosting_type,
'-i', '10',
'-T', '4',
'-m', model_path,
'--eval-file', eval_path,
'--use-best-model', 'false',
'--classes-count', '4'
)
yatest.common.execute(cmd)
formula_predict_path = yatest.common.test_output_path('predict_test.eval')
calc_cmd = (
CATBOOST_PATH,
'calc',
'--input-path', test_path,
'--column-description', cd_path,
'-m', model_path,
'--output-path', formula_predict_path,
'--prediction-type', 'RawFormulaVal'
)
yatest.common.execute(calc_cmd)
assert(compare_evals(eval_path, formula_predict_path))
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
def test_multiclass_baseline_lost_class(boosting_type, loss_function):
labels = [0, 1, 2, 3]
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target'], [1, 'Baseline'], [2, 'Baseline']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, [1, 2], prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'--boosting-type', boosting_type,
'-i', '10',
'-T', '4',
'-m', model_path,
'--eval-file', eval_path,
'--use-best-model', 'false',
'--classes-count', '4',
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_weights(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult_weight', 'train_weight'),
'-t', data_file('adult_weight', 'test_weight'),
'--column-description', data_file('adult_weight', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_weights_no_bootstrap(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult_weight', 'train_weight'),
'-t', data_file('adult_weight', 'test_weight'),
'--column-description', data_file('adult_weight', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'--bootstrap-type', 'No',
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_weights_gradient(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult_weight', 'train_weight'),
'-t', data_file('adult_weight', 'test_weight'),
'--column-description', data_file('adult_weight', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'--leaf-estimation-method', 'Gradient'
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_logloss_with_not_binarized_target(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult_not_binarized', 'train_small'),
'-t', data_file('adult_not_binarized', 'test_small'),
'--column-description', data_file('adult_not_binarized', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('loss_function', LOSS_FUNCTIONS)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_all_targets(loss_function, boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_model_path_without_test = yatest.common.test_output_path('model_without_test.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
base_cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', loss_function,
'-f', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'--counter-calc-method', 'SkipTest', # TODO(kirillovs): remove after setting SkipTest as default type
'-w', '0.03',
'-T', '4',
)
train_with_test_cmd = base_cmd + (
'-t', data_file('adult', 'test_small'),
'-m', output_model_path,
'--eval-file', output_eval_path,
)
yatest.common.execute(train_with_test_cmd)
train_without_test_cmd = base_cmd + (
'-m', output_model_path_without_test,
)
yatest.common.execute(train_without_test_cmd)
formula_predict_path = yatest.common.test_output_path('predict_test.eval')
formula_predict_without_test_path = yatest.common.test_output_path('predict_without_test.eval')
base_calc_cmd = (
CATBOOST_PATH,
'calc',
'--input-path', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--prediction-type', 'RawFormulaVal'
)
calc_cmd = base_calc_cmd + (
'-m', output_model_path,
'--output-path', formula_predict_path,
)
calc_cmd_without_test = base_calc_cmd + (
'-m', output_model_path_without_test,
'--output-path', formula_predict_without_test_path,
)
yatest.common.execute(calc_cmd)
yatest.common.execute(calc_cmd_without_test)
if loss_function == 'MAPE':
# TODO(kirillovs): uncomment this after resolving MAPE problems
# assert(compare_evals(output_eval_path, formula_predict_path))
return [local_canonical_file(output_eval_path), local_canonical_file(formula_predict_path)]
else:
assert(compare_evals(output_eval_path, formula_predict_path))
assert(filecmp.cmp(formula_predict_without_test_path, formula_predict_path))
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('is_inverted', [False, True], ids=['', 'inverted'])
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_cv(is_inverted, boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--cv', format_crossvalidation(is_inverted, 2, 10),
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('is_inverted', [False, True], ids=['', 'inverted'])
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_cv_for_query(is_inverted, boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'QueryRMSE',
'-f', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--cv', format_crossvalidation(is_inverted, 2, 7),
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('is_inverted', [False, True], ids=['', 'inverted'])
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_cv_for_pairs(is_inverted, boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'PairLogit',
'-f', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--learn-pairs', data_file('querywise', 'train.pairs'),
'--boosting-type', boosting_type,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--cv', format_crossvalidation(is_inverted, 2, 7),
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('bad_cv_params', ['XX', 'YY', 'XY'])
def test_multiple_cv_spec(bad_cv_params):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
)
if bad_cv_params == 'XX':
cmd += ('--cv', format_crossvalidation(is_inverted=False, n=2, k=10),
'--cv', format_crossvalidation(is_inverted=False, n=4, k=7))
elif bad_cv_params == 'XY':
cmd += ('--cv', format_crossvalidation(is_inverted=False, n=2, k=10),
'--cv', format_crossvalidation(is_inverted=True, n=4, k=7))
elif bad_cv_params == 'YY':
cmd += ('--cv', format_crossvalidation(is_inverted=True, n=2, k=10),
'--cv', format_crossvalidation(is_inverted=True, n=4, k=7))
else:
        raise Exception('bad bad_cv_params value: ' + bad_cv_params)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
@pytest.mark.parametrize('is_inverted', [False, True], ids=['', 'inverted'])
@pytest.mark.parametrize('error_type', ['0folds', 'fold_idx_overflow'])
def test_bad_fold_cv_spec(is_inverted, error_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'-i', '10',
'-T', '4',
'-m', output_model_path,
('--cv:Inverted' if is_inverted else '--cv:Classical'),
{'0folds': '0/0', 'fold_idx_overflow': '3/2'}[error_type],
'--eval-file', output_eval_path,
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_empty_eval(boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_time(boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--has-time',
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_gradient(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--leaf-estimation-method', 'Gradient',
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_newton(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--leaf-estimation-method', 'Newton',
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_newton_on_pool_with_weights(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult_weight', 'train_weight'),
'-t', data_file('adult_weight', 'test_weight'),
'--column-description', data_file('adult_weight', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--leaf-estimation-method', 'Newton',
'--leaf-estimation-iterations', '7',
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_custom_priors(boosting_type):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--ctr', 'Borders:Prior=-2:Prior=0:Prior=8:Prior=1:Prior=-1:Prior=3,'
'Counter:Prior=0',
'--per-feature-ctr', '4:Borders:Prior=0.444,Counter:Prior=0.444;'
'6:Borders:Prior=0.666,Counter:Prior=0.666;'
'8:Borders:Prior=-0.888:Prior=0.888,Counter:Prior=-0.888:Prior=0.888',
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_ctr_buckets(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'MultiClass',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'--ctr', 'Buckets'
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_fold_len_multiplier(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'MultiClass',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'--fold-len-multiplier', '1.5'
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
FSTR_TYPES = ['PredictionValuesChange', 'InternalFeatureImportance', 'InternalInteraction', 'Interaction', 'ShapValues']
@pytest.mark.parametrize('fstr_type', FSTR_TYPES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_fstr(fstr_type, boosting_type):
model_path = yatest.common.test_output_path('adult_model.bin')
output_fstr_path = yatest.common.test_output_path('fstr.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'-i', '10',
'-w', '0.03',
'-T', '4',
'--one-hot-max-size', '10',
'-m', model_path
)
if fstr_type == 'ShapValues':
cmd = cmd + ('--max-ctr-complexity', '1')
yatest.common.execute(cmd)
fstr_cmd = (
CATBOOST_PATH,
'fstr',
'--input-path', data_file('adult', 'train_small'),
'--column-description', data_file('adult', 'train.cd'),
'-m', model_path,
'-o', output_fstr_path,
'--fstr-type', fstr_type
)
yatest.common.execute(fstr_cmd)
return local_canonical_file(output_fstr_path)
@pytest.mark.parametrize('loss_function', ['QueryRMSE', 'PairLogit', 'YetiRank', 'PairLogitPairwise', 'YetiRankPairwise'])
def test_loss_change_fstr(loss_function):
model_path = yatest.common.test_output_path('model.bin')
output_fstr_path = yatest.common.test_output_path('fstr.tsv')
train_fstr_path = yatest.common.test_output_path('t_fstr.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', loss_function,
'--learn-set', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--learn-pairs', data_file('querywise', 'train.pairs'),
'--boosting-type', 'Plain',
'-i', '10',
'-w', '0.03',
'-T', '4',
'--one-hot-max-size', '10',
'--fstr-file', train_fstr_path,
'--fstr-type', 'LossFunctionChange',
'--model-file', model_path
)
yatest.common.execute(cmd)
fstr_cmd = (
CATBOOST_PATH,
'fstr',
'--input-path', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--input-pairs', data_file('querywise', 'train.pairs'),
'--model-file', model_path,
'--output-path', output_fstr_path,
'--fstr-type', 'LossFunctionChange',
)
yatest.common.execute(fstr_cmd)
    fit_output = np.loadtxt(train_fstr_path, dtype='float', delimiter='\t')
    fstr_output = np.loadtxt(output_fstr_path, dtype='float', delimiter='\t')
    assert(np.allclose(fit_output, fstr_output, rtol=1e-6))
return [local_canonical_file(output_fstr_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('ranking_parameters', [
{'loss-function': 'PairLogit', 'fstr-type': 'LossFunctionChange'},
{'loss-function': 'Logloss', 'fstr-type': 'PredictionValuesChange'}
])
def test_fstr_feature_importance_default_value(boosting_type, ranking_parameters):
model_path = yatest.common.test_output_path('model.bin')
fstr_path_0 = yatest.common.test_output_path('fstr_0.tsv')
fstr_path_1 = yatest.common.test_output_path('fstr_1.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--learn-set', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--learn-pairs', data_file('querywise', 'train.pairs'),
'-i', '10',
'-T', '4',
'--one-hot-max-size', '10',
'--model-file', model_path,
'--loss-function', ranking_parameters['loss-function']
)
yatest.common.execute(
cmd + ('--fstr-file', fstr_path_0,
'--fstr-type', 'FeatureImportance')
)
yatest.common.execute(
cmd + ('--fstr-file', fstr_path_1,
'--fstr-type', ranking_parameters['fstr-type'])
)
    fstr_output_0 = np.loadtxt(fstr_path_0, dtype='float', delimiter='\t')
    fstr_output_1 = np.loadtxt(fstr_path_1, dtype='float', delimiter='\t')
    assert(np.allclose(fstr_output_0, fstr_output_1, rtol=1e-6))
    fstr_cmd = (
        CATBOOST_PATH,
        'fstr',
        '--input-path', data_file('querywise', 'train'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--input-pairs', data_file('querywise', 'train.pairs'),
        '--model-file', model_path,
        '--output-path', fstr_path_1,
        '--fstr-type', 'FeatureImportance',
    )
    yatest.common.execute(fstr_cmd)
    fstr_output_1 = np.loadtxt(fstr_path_1, dtype='float', delimiter='\t')
    assert(np.allclose(fstr_output_0, fstr_output_1, rtol=1e-6))
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_loss_change_fstr_without_pairs(boosting_type):
model_path = yatest.common.test_output_path('model.bin')
output_fstr_path = yatest.common.test_output_path('fstr.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'PairLogit',
'--learn-set', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--learn-pairs', data_file('querywise', 'train.pairs'),
'--boosting-type', boosting_type,
'-i', '10',
'--learning-rate', '0.03',
'-T', '4',
'--one-hot-max-size', '10',
'--model-file', model_path
)
yatest.common.execute(cmd)
fstr_cmd = (
CATBOOST_PATH,
'fstr',
'--input-path', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd'),
'--model-file', model_path,
'--output-path', output_fstr_path,
'--fstr-type', 'LossFunctionChange',
)
yatest.common.execute(fstr_cmd)
try:
fstr_cmd = (
CATBOOST_PATH,
'fstr',
'--input-path', data_file('querywise', 'train'),
'--column-description', data_file('querywise', 'train.cd.no_target'),
'--model-file', model_path,
'--fstr-type', 'LossFunctionChange',
)
yatest.common.execute(fstr_cmd)
    except yatest.common.ExecutionError:
        return [local_canonical_file(output_fstr_path)]
    assert False, "fstr with LossFunctionChange must fail on a pool without Target"
def test_loss_change_fstr_on_different_pool_type():
output_model_path = yatest.common.test_output_path('model.bin')
output_dsv_fstr_path = yatest.common.test_output_path('fstr.tsv')
output_quantized_fstr_path = yatest.common.test_output_path('fstr.tsv.quantized')
train_fstr_path = yatest.common.test_output_path('train_fstr.tsv')
    def get_pool_path(set_name, is_quantized=False):
        path = data_file('querywise', set_name)
        return ('quantized://' + path + '.quantized') if is_quantized else path
cd_file = data_file('querywise', 'train.cd')
cmd = (
CATBOOST_PATH, 'fit',
'--use-best-model', 'false',
'--loss-function', 'PairLogit',
'--learn-set', get_pool_path('train', True),
'--learn-pairs', data_file('querywise', 'train.pairs'),
'-i', '10',
'-T', '4',
'--fstr-file', train_fstr_path,
'--fstr-type', 'LossFunctionChange',
'--model-file', output_model_path,
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH, 'fstr',
'--input-path', get_pool_path('train'),
'--column-description', cd_file,
'--input-pairs', data_file('querywise', 'train.pairs'),
'--model-file', output_model_path,
'--output-path', output_dsv_fstr_path,
'--fstr-type', 'LossFunctionChange',
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH, 'fstr',
'--input-path', get_pool_path('train', True),
'--input-pairs', data_file('querywise', 'train.pairs'),
'--model-file', output_model_path,
'--output-path', output_quantized_fstr_path,
'--fstr-type', 'LossFunctionChange',
)
yatest.common.execute(cmd)
fstr_dsv = np.loadtxt(output_dsv_fstr_path, dtype='float', delimiter='\t')
fstr_quantized = np.loadtxt(output_quantized_fstr_path, dtype='float', delimiter='\t')
train_fstr = np.loadtxt(train_fstr_path, dtype='float', delimiter='\t')
assert(np.allclose(fstr_dsv, fstr_quantized, rtol=1e-6))
assert(np.allclose(fstr_dsv, train_fstr, rtol=1e-6))
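The three fstr outputs above are compared with `np.allclose` after a `np.loadtxt` round-trip; a minimal, self-contained sketch of that tolerance check (the helper name `fstr_outputs_match` is hypothetical, not part of this test suite):

```python
import numpy as np


def fstr_outputs_match(path_a, path_b, rtol=1e-6):
    """Load two tab-separated fstr matrices and compare them elementwise,
    mirroring the np.allclose(..., rtol=1e-6) checks used in the test."""
    a = np.loadtxt(path_a, dtype='float', delimiter='\t')
    b = np.loadtxt(path_b, dtype='float', delimiter='\t')
    return a.shape == b.shape and np.allclose(a, b, rtol=rtol)
```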


@pytest.mark.parametrize('loss_function', LOSS_FUNCTIONS)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_reproducibility(loss_function, dev_score_calc_obj_block_size):

    def run_catboost(threads, model_path, eval_path):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--use-best-model', 'false',
            '--loss-function', loss_function,
            '-f', data_file('adult', 'train_small'),
            '-t', data_file('adult', 'test_small'),
            '--column-description', data_file('adult', 'train.cd'),
            '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
            '-i', '25',
            '-T', str(threads),
            '-m', model_path,
            '--eval-file', eval_path,
        ]
        yatest.common.execute(cmd)

    model_1 = yatest.common.test_output_path('model_1.bin')
    eval_1 = yatest.common.test_output_path('test_1.eval')
    run_catboost(1, model_1, eval_1)
    model_4 = yatest.common.test_output_path('model_4.bin')
    eval_4 = yatest.common.test_output_path('test_4.eval')
    run_catboost(4, model_4, eval_4)
    assert filecmp.cmp(eval_1, eval_4)
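The reproducibility check relies on `filecmp.cmp`, which by default (`shallow=True`) may answer from `os.stat()` signatures alone. A sketch of the stricter byte-level variant a determinism check really wants (the helper name is hypothetical):

```python
import filecmp


def evals_identical(path_a, path_b):
    # shallow=False forces a byte-by-byte comparison instead of trusting
    # the os.stat() signature (file type, size, mtime).
    return filecmp.cmp(path_a, path_b, shallow=False)
```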


BORDER_TYPES = ['Median', 'GreedyLogSum', 'UniformAndQuantiles', 'MinEntropy', 'MaxLogSum', 'Uniform']


@pytest.mark.parametrize('border_type', BORDER_TYPES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_feature_border_types(border_type, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--feature-border-type', border_type,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('depth', [4, 8])
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_deep_tree_classification(depth, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '--depth', str(depth),
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_regularization(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--leaf-estimation-method', 'Newton',
        '--eval-file', output_eval_path,
        '--l2-leaf-reg', '5'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


REG_LOSS_FUNCTIONS = ['RMSE', 'MAE', 'Lq:q=1', 'Lq:q=1.5', 'Lq:q=3', 'Quantile', 'LogLinQuantile', 'Poisson', 'MAPE']


@pytest.mark.parametrize('loss_function', REG_LOSS_FUNCTIONS)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_reg_targets(loss_function, boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('adult_crossentropy', 'train_proba'),
        '-t', data_file('adult_crossentropy', 'test_proba'),
        '--column-description', data_file('adult_crossentropy', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_multi_targets(loss_function, boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('cloudness_small', 'train_small'),
        '-t', data_file('cloudness_small', 'test_small'),
        '--column-description', data_file('cloudness_small', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('cloudness_small', 'test_small'),
        '--column-description', data_file('cloudness_small', 'train.cd'),
        '-m', output_model_path,
        '--output-path', formula_predict_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(calc_cmd)
    assert compare_evals(output_eval_path, formula_predict_path)
    return [local_canonical_file(output_eval_path)]


BORDER_TYPES = ['MinEntropy', 'Median', 'UniformAndQuantiles', 'MaxLogSum', 'GreedyLogSum', 'Uniform']


@pytest.mark.parametrize(
    'border_type',
    BORDER_TYPES,
    ids=lambda border_type: 'border_type=%s' % border_type
)
@pytest.mark.parametrize(
    'border_count',
    [1, 3, 10],
    ids=lambda border_count: 'border_count=%d' % border_count
)
@pytest.mark.parametrize(
    'boosting_type',
    BOOSTING_TYPE,
    ids=lambda boosting_type: 'boosting_type=%s' % boosting_type
)
def test_ctr_target_quantization(border_type, border_count, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', data_file('adult_crossentropy', 'train_proba'),
        '-t', data_file('adult_crossentropy', 'test_proba'),
        '--column-description', data_file('adult_crossentropy', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '3',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--ctr', 'Borders:TargetBorderType=' + border_type,
        '--ctr-target-border-count', str(border_count)
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


COUNTER_METHODS = ['Full', 'SkipTest']


@pytest.mark.parametrize('counter_calc_method', COUNTER_METHODS)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_counter_calc(counter_calc_method, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', data_file('adult_crossentropy', 'train_proba'),
        '-t', data_file('adult_crossentropy', 'test_proba'),
        '--column-description', data_file('adult_crossentropy', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '60',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--counter-calc-method', counter_calc_method
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


CTR_TYPES = [
    'Borders',
    'Buckets',
    'BinarizedTargetMeanValue:TargetBorderCount=10',
    'Borders,BinarizedTargetMeanValue:TargetBorderCount=10',
    'Buckets,Borders'
]


@pytest.mark.parametrize('ctr_type', CTR_TYPES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_ctr_type(ctr_type, boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', data_file('adult_crossentropy', 'train_proba'),
        '-t', data_file('adult_crossentropy', 'test_proba'),
        '--column-description', data_file('adult_crossentropy', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '3',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--ctr', ctr_type
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_custom_overfitting_detector_metric(boosting_type):
    model_path = yatest.common.test_output_path('adult_model.bin')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '--eval-metric', 'AUC:hints=skip_train~false',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', model_path,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path)]


@pytest.mark.parametrize('loss_function', BINCLASS_LOSSES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_custom_loss_for_classification(loss_function, boosting_type):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    custom_metrics = [
        metric for metric in
        [
            'AUC:hints=skip_train~false',
            'Logloss',
            'CrossEntropy',
            'Accuracy',
            'Precision',
            'Recall',
            'F1',
            'TotalF1',
            'MCC',
            'BalancedAccuracy',
            'BalancedErrorRate',
            'Kappa',
            'WKappa',
            'BrierScore',
            'ZeroOneLoss',
            'HammingLoss',
            'HingeLoss'
        ]
        if metric != loss_function
    ]
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('adult_crossentropy', 'train_proba'),
        '-t', data_file('adult_crossentropy', 'test_proba'),
        '--column-description', data_file('adult_crossentropy', 'train.cd'),
        '--boosting-type', boosting_type,
        '-w', '0.03',
        '-i', '10',
        '-T', '4',
        '--custom-metric', ','.join(custom_metrics),
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_loglikelihood_of_prediction(boosting_type):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult_weight', 'train_weight'),
        '-t', data_file('adult_weight', 'test_weight'),
        '--column-description', data_file('adult_weight', 'train.cd'),
        '--boosting-type', boosting_type,
        '-w', '0.03',
        '-i', '10',
        '-T', '4',
        '--custom-metric', 'LogLikelihoodOfPrediction',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_custom_loss_for_multiclassification(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'MultiClass',
        '-f', data_file('cloudness_small', 'train_small'),
        '-t', data_file('cloudness_small', 'test_small'),
        '--column-description', data_file('cloudness_small', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--custom-metric',
        'AUC:hints=skip_train~false,Accuracy,Precision,Recall,F1,TotalF1,MCC,Kappa,WKappa,ZeroOneLoss,HammingLoss,HingeLoss',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_calc_prediction_type(boosting_type):
    model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', model_path,
    )
    yatest.common.execute(cmd)

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-m', model_path,
        '--output-path', output_eval_path,
        '--prediction-type', 'Probability'
    )
    yatest.common.execute(calc_cmd)
    return local_canonical_file(output_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_calc_no_target(boosting_type):
    model_path = yatest.common.test_output_path('adult_model.bin')
    fit_output_eval_path = yatest.common.test_output_path('fit_test.eval')
    calc_output_eval_path = yatest.common.test_output_path('calc_test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', model_path,
        '--counter-calc-method', 'SkipTest',
        '--eval-file', fit_output_eval_path
    )
    yatest.common.execute(cmd)

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult', 'test_small'),
        '--column-description', data_file('train_notarget.cd'),
        '-m', model_path,
        '--output-path', calc_output_eval_path
    )
    yatest.common.execute(calc_cmd)
    assert compare_evals(fit_output_eval_path, calc_output_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_classification_progress_restore(boosting_type):

    def run_catboost(iters, model_path, eval_path, additional_params=None):
        import random
        import shutil
        import string
        letters = string.ascii_lowercase
        train_random_name = ''.join(random.choice(letters) for _ in range(8))
        shutil.copy(data_file('adult', 'train_small'), train_random_name)
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'Logloss',
            '--learning-rate', '0.5',
            '-f', train_random_name,
            '-t', data_file('adult', 'test_small'),
            '--column-description', data_file('adult', 'train.cd'),
            '--boosting-type', boosting_type,
            '-i', str(iters),
            '-T', '4',
            '-m', model_path,
            '--eval-file', eval_path,
        ]
        if additional_params:
            cmd += additional_params
        yatest.common.execute(cmd)

    canon_model_path = yatest.common.test_output_path('canon_model.bin')
    canon_eval_path = yatest.common.test_output_path('canon_test.eval')
    run_catboost(30, canon_model_path, canon_eval_path)
    model_path = yatest.common.test_output_path('model.bin')
    eval_path = yatest.common.test_output_path('test.eval')
    progress_path = yatest.common.test_output_path('test.cbp')
    run_catboost(15, model_path, eval_path, additional_params=['--snapshot-file', progress_path])
    run_catboost(30, model_path, eval_path, additional_params=['--snapshot-file', progress_path])
    assert filecmp.cmp(canon_eval_path, eval_path)
    # TODO(kirillovs): enable this check once the progress_file parameter is removed from the json params
    # assert filecmp.cmp(canon_model_path, model_path)


@pytest.mark.parametrize('loss_function', CLASSIFICATION_LOSSES)
@pytest.mark.parametrize('prediction_type', PREDICTION_TYPES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_prediction_type(prediction_type, loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--prediction-type', prediction_type
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_const_feature(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    train_path = yatest.common.test_output_path('train_small')
    test_path = yatest.common.test_output_path('test_small')
    train_dataset = np.loadtxt(data_file('adult', 'train_small'), dtype=str, delimiter='\t')
    test_dataset = np.loadtxt(data_file('adult', 'test_small'), dtype=str, delimiter='\t')
    train_dataset[:, 14] = '0'
    test_dataset[:, 14] = '0'
    np.savetxt(train_path, train_dataset, fmt='%s', delimiter='\t')
    np.savetxt(test_path, test_dataset[:10, :], fmt='%s', delimiter='\t')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', train_path,
        '-t', test_path,
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]
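test_const_feature rewrites column 14 of both pools to a constant before training; that column-munging step can be sketched in isolation (the function name `make_column_constant` is hypothetical, for illustration only):

```python
import numpy as np


def make_column_constant(rows, col, value='0'):
    """Copy a string matrix and overwrite one column with a constant,
    as done to the train/test datasets above before np.savetxt."""
    data = np.array(rows, dtype=str)
    data[:, col] = value
    return data
```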


QUANTILE_LOSS_FUNCTIONS = ['Quantile', 'LogLinQuantile']


@pytest.mark.parametrize('loss_function', QUANTILE_LOSS_FUNCTIONS)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_quantile_targets(loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function + ':alpha=0.9',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '5',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


CUSTOM_LOSS_FUNCTIONS = ['RMSE,MAE', 'Quantile:alpha=0.9', 'MSLE,MedianAbsoluteError,SMAPE',
                         'NumErrors:greater_than=0.01,NumErrors:greater_than=0.1,NumErrors:greater_than=0.5']


@pytest.mark.parametrize('custom_loss_function', CUSTOM_LOSS_FUNCTIONS)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_custom_loss(custom_loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', data_file('adult_crossentropy', 'train_proba'),
        '-t', data_file('adult_crossentropy', 'test_proba'),
        '--column-description', data_file('adult_crossentropy', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '50',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--custom-metric', custom_loss_function,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


@pytest.mark.parametrize('loss_function', LOSS_FUNCTIONS)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_meta(loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    meta_path = 'meta.tsv'
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--name', 'test experiment',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(meta_path)]


def test_train_dir():
    output_model_path = 'model.bin'
    output_eval_path = 'test.eval'
    train_dir_path = 'trainDir'
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '2',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--train-dir', train_dir_path,
        '--fstr-file', 'fstr.tsv',
        '--fstr-internal-file', 'ifstr.tsv'
    )
    yatest.common.execute(cmd)
    outputs = ['time_left.tsv', 'learn_error.tsv', 'test_error.tsv', 'meta.tsv',
               output_model_path, output_eval_path, 'fstr.tsv', 'ifstr.tsv']
    for output in outputs:
        assert os.path.isfile(os.path.join(train_dir_path, output))
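The artifact check above can be factored into a small helper that reports which expected train-dir outputs are missing, which gives a more informative failure than a bare assert (the helper name `missing_outputs` is hypothetical):

```python
import os


def missing_outputs(train_dir, expected):
    """Return the expected artifact names that are absent from train_dir."""
    return [name for name in expected
            if not os.path.isfile(os.path.join(train_dir, name))]
```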


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('qwise_loss', ['QueryRMSE', 'RMSE'])
def test_train_on_binarized_equal_train_on_float(boosting_type, qwise_loss):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_model_path_binarized = yatest.common.test_output_path('model_binarized.bin')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    borders_file = yatest.common.test_output_path('borders.tsv')
    borders_file_output = borders_file + '.out'
    predictions_path_learn = yatest.common.test_output_path('predictions_learn.tsv')
    predictions_path_learn_binarized = yatest.common.test_output_path('predictions_learn_binarized.tsv')
    predictions_path_test = yatest.common.test_output_path('predictions_test.tsv')
    predictions_path_test_binarized = yatest.common.test_output_path('predictions_test_binarized.tsv')

    learn_file = data_file('querywise', 'train')
    cd_file = data_file('querywise', 'train.cd')
    test_file = data_file('querywise', 'test')
    params = {
        '--loss-function': qwise_loss,
        '-f': learn_file,
        '-t': test_file,
        '--column-description': cd_file,
        '--boosting-type': boosting_type,
        '-i': '100',
        '-T': '4',
        '-m': output_model_path,
        '--learn-err-log': learn_error_path,
        '--test-err-log': test_error_path,
        '--use-best-model': 'false',
        '--output-borders-file': borders_file_output,
    }

    params_binarized = dict(params)
    params_binarized['--input-borders-file'] = borders_file_output
    params_binarized['--output-borders-file'] = borders_file
    params_binarized['-m'] = output_model_path_binarized

    execute_catboost_fit(task_type='CPU', params=params)
    apply_catboost(output_model_path, learn_file, cd_file, predictions_path_learn)
    apply_catboost(output_model_path, test_file, cd_file, predictions_path_test)

    execute_catboost_fit(
        task_type='CPU',
        params=params_binarized,
        input_data={learn_error_path: None, test_error_path: None}
    )
    apply_catboost(output_model_path_binarized, learn_file, cd_file, predictions_path_learn_binarized)
    apply_catboost(output_model_path_binarized, test_file, cd_file, predictions_path_test_binarized)

    assert filecmp.cmp(predictions_path_learn, predictions_path_learn_binarized)
    assert filecmp.cmp(predictions_path_test, predictions_path_test_binarized)

    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path),
            local_canonical_file(predictions_path_test),
            local_canonical_file(predictions_path_learn),
            local_canonical_file(borders_file)]
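params_binarized above is built by copying the base params dict and overriding a few keys; that copy-then-override pattern can be sketched as a tiny helper (the name `with_overrides` is hypothetical, for illustration):

```python
def with_overrides(base_params, overrides):
    """Return a copy of a CLI-params dict with selected entries replaced,
    leaving the base dict untouched."""
    params = dict(base_params)
    params.update(overrides)
    return params
```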


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_feature_id_fstr(boosting_type):
    model_path = yatest.common.test_output_path('adult_model.bin')
    output_fstr_path = yatest.common.test_output_path('fstr.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', model_path,
    )
    yatest.common.execute(cmd)

    fstr_cmd = (
        CATBOOST_PATH,
        'fstr',
        '--input-path', data_file('adult', 'train_small'),
        '--column-description', data_file('adult_with_id.cd'),
        '-m', model_path,
        '-o', output_fstr_path,
    )
    yatest.common.execute(fstr_cmd)
    return local_canonical_file(output_fstr_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_class_names_logloss(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--class-names', '1,0'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_class_names_multiclass(loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('precipitation_small', 'train_small'),
        '-t', data_file('precipitation_small', 'test_small'),
        '--column-description', data_file('precipitation_small', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--prediction-type', 'RawFormulaVal,Class',
        '--eval-file', output_eval_path,
        '--class-names', '0.,0.5,1.,0.25,0.75'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_class_names_multiclass_last_class_missed(loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('precipitation_small', 'train_small'),
        '-t', data_file('precipitation_small', 'test_small'),
        '--column-description', data_file('precipitation_small', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--prediction-type', 'RawFormulaVal,Class',
        '--eval-file', output_eval_path,
        '--class-names', '0.,0.5,0.25,0.75,1.',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_class_weight_logloss(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--class-weights', '0.5,2'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_class_weight_multiclass(loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--class-weights', '0.5,2'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_params_from_file(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '6',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--params-file', data_file('params.json')
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
def test_lost_class(boosting_type, loss_function):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('cloudness_lost_class', 'train_small'),
        '-t', data_file('cloudness_lost_class', 'test_small'),
        '--column-description', data_file('cloudness_lost_class', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--classes-count', '3',
        '--prediction-type', 'RawFormulaVal,Class',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_class_weight_with_lost_class(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'MultiClass',
        '-f', data_file('cloudness_lost_class', 'train_small'),
        '-t', data_file('cloudness_lost_class', 'test_small'),
        '--column-description', data_file('cloudness_lost_class', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--classes-count', '3',
        '--class-weights', '0.5,2,2',
        '--prediction-type', 'RawFormulaVal,Class',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_one_hot(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
calc_eval_path = yatest.common.test_output_path('calc.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'-i', '100',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'-x', '1',
'-n', '8',
'-w', '0.1',
'--one-hot-max-size', '10'
)
yatest.common.execute(cmd)
calc_cmd = (
CATBOOST_PATH,
'calc',
'--input-path', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-m', output_model_path,
'--output-path', calc_eval_path
)
yatest.common.execute(calc_cmd)
assert(compare_evals(output_eval_path, calc_eval_path))
return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_random_strength(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '100',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '-x', '1',
        '-n', '8',
        '-w', '0.1',
        '--random-strength', '100'
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_only_categorical_features(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult_all_categorical.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '100',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '-x', '1',
        '-n', '8',
        '-w', '0.1',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_weight_sampling_per_tree(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--sampling-frequency', 'PerTree',
    )
    yatest.common.execute(cmd)
    return local_canonical_file(output_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('used_ram_limit', ['1Kb', '4Gb'])
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    ['600', '5000000'],
    ids=['calc_block=600', 'calc_block=5000000']
)
def test_allow_writing_files_and_used_ram_limit(boosting_type, used_ram_limit, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--allow-writing-files', 'false',
        '--used-ram-limit', used_ram_limit,
        '--loss-function', 'Logloss',
        '--max-ctr-complexity', '5',
        '--depth', '7',
        '-f', data_file('airlines_5K', 'train'),
        '-t', data_file('airlines_5K', 'test'),
        '--column-description', data_file('airlines_5K', 'cd'),
        '--has-header',
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-w', '0.03',
        '-T', '6',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize(
    'ignored_features',
    [True, False],
    ids=['ignored_features=True', 'ignored_features=False']
)
def test_apply_with_permuted_columns(ignored_features):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('airlines_5K', 'train'),
        '-t', data_file('airlines_5K', 'test'),
        '--column-description', data_file('airlines_5K', 'cd'),
        '--has-header',
        '-i', '20',
        '-w', '0.03',
        '-T', '6',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    if ignored_features:
        cmd += ('--ignore-features', '0:2:5')
    yatest.common.execute(cmd)

    permuted_test_path, permuted_cd_path = permute_dataset_columns(
        data_file('airlines_5K', 'test'),
        data_file('airlines_5K', 'cd'),
        seed=123)
    permuted_predict_path = yatest.common.test_output_path('permuted_predict.eval')

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', permuted_test_path,
        '--has-header',
        '--column-description', permuted_cd_path,
        '-m', output_model_path,
        '--output-path', permuted_predict_path,
        '--output-columns', 'DocId,RawFormulaVal,Label'
    )
    yatest.common.execute(calc_cmd)
    assert filecmp.cmp(output_eval_path, permuted_predict_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_subsample_per_tree(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--sampling-frequency', 'PerTree',
        '--bootstrap-type', 'Bernoulli',
        '--subsample', '0.5',
    )
    yatest.common.execute(cmd)
    return local_canonical_file(output_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_subsample_per_tree_level(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--bootstrap-type', 'Bernoulli',
        '--subsample', '0.5',
    )
    yatest.common.execute(cmd)
    return local_canonical_file(output_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_bagging_per_tree_level(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--bagging-temperature', '0.5',
    )
    yatest.common.execute(cmd)
    return local_canonical_file(output_eval_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_plain(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--boosting-type', 'Plain',
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_bootstrap(boosting_type, dev_score_calc_obj_block_size):
    bootstrap_option = {
        'no': ('--bootstrap-type', 'No',),
        'bayes': ('--bootstrap-type', 'Bayesian', '--bagging-temperature', '0.0',),
        'bernoulli': ('--bootstrap-type', 'Bernoulli', '--subsample', '1.0',)
    }
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
    )
    for bootstrap in bootstrap_option:
        model_path = yatest.common.test_output_path('model_' + bootstrap + '.bin')
        eval_path = yatest.common.test_output_path('test_' + bootstrap + '.eval')
        yatest.common.execute(cmd + ('-m', model_path, '--eval-file', eval_path,) + bootstrap_option[bootstrap])

    ref_eval_path = yatest.common.test_output_path('test_no.eval')
    assert filecmp.cmp(ref_eval_path, yatest.common.test_output_path('test_bayes.eval'))
    assert filecmp.cmp(ref_eval_path, yatest.common.test_output_path('test_bernoulli.eval'))

    return [local_canonical_file(ref_eval_path)]


def test_json_logging():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    json_path = yatest.common.test_output_path('catboost_training.json')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-w', '0.03',
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--json-log', json_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(remove_time_from_json(json_path))]


def test_json_logging_metric_period():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    json_path = yatest.common.test_output_path('catboost_training.json')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--json-log', json_path,
        '--metric-period', '2',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(remove_time_from_json(json_path))]


def test_output_columns_format():
    model_path = yatest.common.test_output_path('adult_model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '-f', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        # Intentionally skipped: -t ...
        '-i', '10',
        '-T', '4',
        '-m', model_path,
        '--output-columns', 'DocId,RawFormulaVal,#2,Label',
        '--eval-file', output_eval_path
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-m', model_path,
        '--output-path', formula_predict_path,
        '--output-columns', 'DocId,RawFormulaVal'
    )
    yatest.common.execute(calc_cmd)
    return local_canonical_file(output_eval_path, formula_predict_path)


def test_eval_period():
    model_path = yatest.common.test_output_path('adult_model.bin')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '-f', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '10',
        '-T', '4',
        '-m', model_path,
    )
    yatest.common.execute(cmd)

    formula_predict_path = yatest.common.test_output_path('predict_test.eval')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-m', model_path,
        '--output-path', formula_predict_path,
        '--eval-period', '2'
    )
    yatest.common.execute(calc_cmd)
    return local_canonical_file(formula_predict_path)


def test_weights_output():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult_weight', 'train_weight'),
        '-t', data_file('adult_weight', 'test_weight'),
        '--column-description', data_file('adult_weight', 'train.cd'),
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--output-columns', 'DocId,RawFormulaVal,Weight,Label',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_baseline_output():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult_weight', 'train_weight'),
        '-t', data_file('adult_weight', 'test_weight'),
        '--column-description', data_file('train_adult_baseline.cd'),
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--output-columns', 'DocId,RawFormulaVal,Baseline,Label',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_baseline_from_file_output():
    output_model_path = yatest.common.test_output_path('model.bin')
    eval_0_path = yatest.common.test_output_path('test_0.eval')
    eval_1_path = yatest.common.test_output_path('test_1.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '--learn-set', data_file('higgs', 'train_small'),
        '--test-set', data_file('higgs', 'test_small'),
        '--column-description', data_file('higgs', 'train_baseline.cd'),
        '-i', '10',
        '--learning-rate', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', eval_0_path,
        '--output-columns', 'DocId,RawFormulaVal',
    )
    yatest.common.execute(cmd)

    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '--learn-set', data_file('higgs', 'train_small'),
        '--test-set', data_file('higgs', 'test_small'),
        '--column-description', data_file('higgs', 'train_weight.cd'),
        '--learn-baseline', data_file('higgs', 'train_baseline'),
        '--test-baseline', data_file('higgs', 'test_baseline'),
        '-i', '10',
        '--ignore-features', '0',  # baseline column
        '--learning-rate', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', eval_1_path,
        '--output-columns', 'DocId,RawFormulaVal',
    )
    yatest.common.execute(cmd)

    assert compare_evals(eval_0_path, eval_1_path)


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
def test_multiclass_baseline_from_file(boosting_type, loss_function):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path_0 = yatest.common.test_output_path('test_0.eval')
    output_eval_path_1 = yatest.common.test_output_path('test_1.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('precipitation_small', 'train_small'),
        '-t', data_file('precipitation_small', 'train_small'),
        '--column-description', data_file('precipitation_small', 'train.cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--prediction-type', 'RawFormulaVal,Class',
        '--eval-file', output_eval_path_0,
    )
    yatest.common.execute(cmd)

    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', data_file('precipitation_small', 'train_small'),
        '-t', data_file('precipitation_small', 'train_small'),
        '--column-description', data_file('precipitation_small', 'train.cd'),
        '--learn-baseline', output_eval_path_0,
        '--test-baseline', output_eval_path_0,
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '-m', output_model_path,
        '--prediction-type', 'RawFormulaVal,Class',
        '--class-names', '0.,0.25,0.5,0.75',
        '--eval-file', output_eval_path_1,
    )
    yatest.common.execute(cmd)

    # A class-names list inconsistent with the baseline file must make training fail.
    try:
        cmd = (
            CATBOOST_PATH,
            'fit',
            '--use-best-model', 'false',
            '--loss-function', loss_function,
            '-f', data_file('precipitation_small', 'train_small'),
            '-t', data_file('precipitation_small', 'train_small'),
            '--column-description', data_file('precipitation_small', 'train.cd'),
            '--learn-baseline', output_eval_path_0,
            '--test-baseline', output_eval_path_0,
            '--boosting-type', boosting_type,
            '-i', '10',
            '-T', '4',
            '-m', output_model_path,
            '--prediction-type', 'RawFormulaVal,Class',
            '--class-names', '0.5,0.25,0.75.,0.',
            '--eval-file', output_eval_path_1,
        )
        yatest.common.execute(cmd)
    except Exception:
        return [local_canonical_file(output_eval_path_0), local_canonical_file(output_eval_path_1)]
    assert False


def test_baseline_from_file_output_on_quantized_pool():
    output_model_path = yatest.common.test_output_path('model.bin')
    eval_0_path = yatest.common.test_output_path('test_0.eval')
    eval_1_path = yatest.common.test_output_path('test_1.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '--learn-set', 'quantized://' + data_file('higgs', 'train_small_x128_greedylogsum.bin'),
        '--test-set', 'quantized://' + data_file('higgs', 'train_small_x128_greedylogsum.bin'),
        '--column-description', data_file('higgs', 'train_baseline.cd'),
        '--learning-rate', '0.03',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', eval_0_path,
    )
    yatest.common.execute(cmd + ('-i', '10'))
    yatest.common.execute(cmd + (
        '-i', '10',
        '--learn-baseline', eval_0_path,
        '--test-baseline', eval_0_path,
        '--eval-file', eval_0_path))
    yatest.common.execute(cmd + (
        '-i', '20',
        '--eval-file', eval_1_path))

    assert compare_evals(eval_0_path, eval_1_path)


def test_query_output():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--output-columns', 'DocId,Label,RawFormulaVal,GroupId',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_subgroup_output():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd.subgroup_id'),
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--output-columns', 'GroupId,SubgroupId,DocId,Label,RawFormulaVal',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_without_cat_features(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '10',
        '-T', '4',
        '-w', '0.1',
        '--one-hot-max-size', '102',
        '--bootstrap-type', 'No',
        '--random-strength', '0',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def make_deterministic_train_cmd(loss_function, pool, train, test, cd, schema='', test_schema='', dev_score_calc_obj_block_size=None, other_options=()):
    pool_path = schema + data_file(pool, train)
    test_path = test_schema + data_file(pool, test)
    cd_path = data_file(pool, cd)
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '-f', pool_path,
        '-t', test_path,
        '--column-description', cd_path,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '--random-strength', '0',
        '--has-time',
        '--bootstrap-type', 'No',
        '--boosting-type', 'Plain',
    )
    if dev_score_calc_obj_block_size:
        cmd += ('--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size)
    return cmd + other_options


def run_dist_train(cmd, output_file_switch='--eval-file'):
    eval_0_path = yatest.common.test_output_path('test_0.eval')
    yatest.common.execute(cmd + (output_file_switch, eval_0_path,))

    eval_1_path = yatest.common.test_output_path('test_1.eval')
    execute_dist_train(cmd + (output_file_switch, eval_1_path,))

    eval_0 = np.loadtxt(eval_0_path, dtype='float', delimiter='\t', skiprows=1)
    eval_1 = np.loadtxt(eval_1_path, dtype='float', delimiter='\t', skiprows=1)
    assert np.allclose(eval_0, eval_1, atol=1e-6, rtol=1e-3)
    return eval_1_path


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='Logloss',
        pool='higgs',
        train='train_small',
        test='test_small',
        cd='train.cd',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_with_weights(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='Logloss',
        pool='higgs',
        train='train_small',
        test='test_small',
        cd='train_weight.cd',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_with_baseline(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='Logloss',
        pool='higgs',
        train='train_small',
        test='test_small',
        cd='train_baseline.cd',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_multiclass(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='MultiClass',
        pool='cloudness_small',
        train='train_small',
        test='test_small',
        cd='train_float.cd',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_multiclass_weight(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='MultiClass',
        pool='cloudness_small',
        train='train_small',
        test='test_small',
        cd='train_float_weight.cd',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_quantized(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='Logloss',
        pool='higgs',
        train='train_small_x128_greedylogsum.bin',
        test='test_small',
        cd='train.cd',
        schema='quantized://',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size,
        other_options=('-x', '128', '--feature-border-type', 'GreedyLogSum'))))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
@pytest.mark.parametrize('pairs_file', ['train.pairs', 'train.pairs.weighted'])
@pytest.mark.parametrize('target', ['PairLogitPairwise', 'QuerySoftMax'])
def test_dist_train_quantized_groupid(dev_score_calc_obj_block_size, pairs_file, target):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function=target,
        pool='querywise',
        train='train_x128_greedylogsum_aqtaa.bin',
        test='test',
        cd='train.cd.query_id',
        schema='quantized://',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size,
        other_options=('-x', '128', '--feature-border-type', 'GreedyLogSum',
                       '--learn-pairs', data_file('querywise', pairs_file)))))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_quantized_group_weights(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='QueryRMSE',
        pool='querywise',
        train='train.quantized',
        test='test',
        cd='train.cd.query_id',
        schema='quantized://',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size,
        other_options=('-x', '128', '--feature-border-type', 'GreedyLogSum',
                       '--learn-group-weights', data_file('querywise', 'train.group_weights')))))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_quantized_baseline(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='Logloss',
        pool='higgs',
        train='train_small_x128_greedylogsum.bin',
        test='train_small_x128_greedylogsum.bin',
        cd='train_baseline.cd',
        schema='quantized://',
        test_schema='quantized://',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size,
        other_options=('-x', '128', '--feature-border-type', 'GreedyLogSum',
                       '--test-baseline', data_file('higgs', 'test_baseline'),
                       '--learn-baseline', data_file('higgs', 'train_baseline')))))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_queryrmse(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='QueryRMSE',
        pool='querywise',
        train='train',
        test='test',
        cd='train.cd.subgroup_id',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_subgroup(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='QueryRMSE',
        pool='querywise',
        train='train',
        test='test',
        cd='train.cd.subgroup_id',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size,
        other_options=('--eval-metric', 'PFound')),
        output_file_switch='--test-err-log'))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_pairlogit(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='PairLogit',
        pool='querywise',
        train='train',
        test='test',
        cd='train.cd.query_id',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size,
        other_options=('--learn-pairs', data_file('querywise', 'train.pairs')))))]


@pytest.mark.parametrize('pairs_file', ['train.pairs', 'train.pairs.weighted'])
def test_dist_train_pairlogitpairwise(pairs_file):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='PairLogitPairwise',
        pool='querywise',
        train='train',
        test='test',
        cd='train.cd',
        other_options=('--learn-pairs', data_file('querywise', pairs_file)))))]


@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_dist_train_querysoftmax(dev_score_calc_obj_block_size):
    return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
        loss_function='QuerySoftMax',
        pool='querywise',
        train='train',
        test='test',
        cd='train.cd.subgroup_id',
        dev_score_calc_obj_block_size=dev_score_calc_obj_block_size)))]
@pytest.mark.parametrize('loss_func', ['Logloss', 'RMSE'])
def test_dist_train_auc(loss_func):
return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
loss_function=loss_func,
pool='higgs',
train='train_small',
test='test_small',
cd='train_baseline.cd',
other_options=('--eval-metric', 'AUC')),
output_file_switch='--test-err-log'))]
@pytest.mark.parametrize('loss_func', ['Logloss', 'RMSE'])
def test_dist_train_auc_weight(loss_func):
return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
loss_function=loss_func,
pool='higgs',
train='train_small',
test='test_small',
cd='train_weight.cd',
other_options=('--eval-metric', 'AUC')),
output_file_switch='--test-err-log'))]
@pytest.mark.parametrize('schema,train', [('quantized://', 'train_small_x128_greedylogsum.bin'), ('', 'train_small')])
def test_dist_train_snapshot(schema, train):
train_cmd = make_deterministic_train_cmd(
loss_function='RMSE',
pool='higgs',
train=train,
test='test_small',
schema=schema,
cd='train.cd')
eval_10_trees_path = yatest.common.test_output_path('10_trees.eval')
yatest.common.execute(train_cmd + ('-i', '10', '--eval-file', eval_10_trees_path,))
snapshot_path = yatest.common.test_output_path('snapshot')
execute_dist_train(train_cmd + ('-i', '5', '--snapshot-file', snapshot_path,))
eval_5_plus_5_trees_path = yatest.common.test_output_path('5_plus_5_trees.eval')
execute_dist_train(train_cmd + ('-i', '10', '--eval-file', eval_5_plus_5_trees_path, '--snapshot-file', snapshot_path,))
assert filecmp.cmp(eval_10_trees_path, eval_5_plus_5_trees_path, shallow=False)
return [local_canonical_file(eval_5_plus_5_trees_path)]
def test_dist_train_yetirank():
return [local_canonical_file(run_dist_train(make_deterministic_train_cmd(
loss_function='YetiRank',
pool='querywise',
train='repeat_same_query_8_times',
test='repeat_same_query_8_times',
cd='train.cd'),
output_file_switch='--test-err-log'))]
def test_no_target():
train_path = yatest.common.test_output_path('train')
cd_path = yatest.common.test_output_path('train.cd')
pairs_path = yatest.common.test_output_path('pairs')
np.savetxt(train_path, [[0], [1], [2], [3], [4]], delimiter='\t', fmt='%.4f')
np.savetxt(cd_path, [('0', 'Num')], delimiter='\t', fmt='%s')
np.savetxt(pairs_path, [[0, 1], [0, 2], [0, 3], [2, 4]], delimiter='\t', fmt='%i')
cmd = (
CATBOOST_PATH,
'fit',
'-f', train_path,
'--cd', cd_path,
'--learn-pairs', pairs_path
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
@pytest.mark.parametrize('loss_function', ALL_LOSSES)
def test_const_target(loss_function):
train_path = yatest.common.test_output_path('train')
cd_path = yatest.common.test_output_path('train.cd')
np.savetxt(
train_path,
[[0, 0, 0],
[0, 0, 1],
[0, 0, 2],
[0, 0, 3],
[0, 0, 4]],
delimiter='\t',
fmt='%.4f'
)
np.savetxt(cd_path, [('0', 'Target'), ('1', 'GroupId')], delimiter='\t', fmt='%s')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'-f', train_path,
'--cd', cd_path,
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
def test_negative_weights():
train_path = yatest.common.test_output_path('train')
cd_path = yatest.common.test_output_path('train.cd')
open(cd_path, 'wt').write('0\tNum\n1\tWeight\n2\tTarget\n')
np.savetxt(train_path, [
[0, 1, 2],
[1, -1, 1]], delimiter='\t', fmt='%.4f')
cmd = (CATBOOST_PATH, 'fit',
'-f', train_path,
'--cd', cd_path,
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
def test_zero_learning_rate():
train_path = yatest.common.test_output_path('train')
cd_path = yatest.common.test_output_path('train.cd')
open(cd_path, 'wt').write(
'0\tNum\n'
'1\tNum\n'
'2\tTarget\n')
np.savetxt(train_path, [
[0, 1, 2],
[1, 1, 1]], delimiter='\t', fmt='%.4f')
cmd = (CATBOOST_PATH, 'fit',
'-f', train_path,
'--cd', cd_path,
'--learning-rate', '0.0',
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
@pytest.mark.parametrize('metric_period', ['1', '2'])
@pytest.mark.parametrize('metric', ['Logloss', 'F1', 'Accuracy', 'PFound', 'TotalF1', 'MCC', 'PairAccuracy'])
def test_eval_metrics(metric, metric_period):
if metric == 'PFound':
train, test, cd, loss_function = data_file('querywise', 'train'), data_file('querywise', 'test'), data_file('querywise', 'train.cd'), 'QueryRMSE'
elif metric == 'PairAccuracy':
# note: pairs are autogenerated
train, test, cd, loss_function = data_file('querywise', 'train'), data_file('querywise', 'test'), data_file('querywise', 'train.cd'), 'PairLogitPairwise'
else:
train, test, cd, loss_function = data_file('adult', 'train_small'), data_file('adult', 'test_small'), data_file('adult', 'train.cd'), 'Logloss'
output_model_path = yatest.common.test_output_path('model.bin')
test_error_path = yatest.common.test_output_path('test_error.tsv')
eval_path = yatest.common.test_output_path('output.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'--eval-metric', metric,
'-f', train,
'-t', test,
'--column-description', cd,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--test-err-log', test_error_path,
'--use-best-model', 'false',
'--metric-period', metric_period
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', metric,
'--input-path', test,
'--column-description', cd,
'-m', output_model_path,
'-o', eval_path,
'--block-size', '100',
'--eval-period', metric_period,
'--save-stats'
)
yatest.common.execute(cmd)
first_metrics = np.round(np.loadtxt(test_error_path, skiprows=1)[:, 1], 8)
second_metrics = np.round(np.loadtxt(eval_path, skiprows=1)[:, 1], 8)
assert np.all(first_metrics == second_metrics)
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('metric_period', ['1', '2'])
@pytest.mark.parametrize('metric', ['MultiClass', 'MultiClassOneVsAll', 'F1', 'Accuracy', 'TotalF1', 'MCC', 'Precision', 'Recall'])
@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
@pytest.mark.parametrize('dataset', ['cloudness_small', 'cloudness_lost_class'])
def test_eval_metrics_multiclass(metric, loss_function, dataset, metric_period):
if metric in MULTICLASS_LOSSES and metric != loss_function:
# a MultiClass-family metric can only be evaluated against its matching loss
return
train, test, cd = data_file(dataset, 'train_small'), data_file(dataset, 'test_small'), data_file(dataset, 'train.cd')
output_model_path = yatest.common.test_output_path('model.bin')
test_error_path = yatest.common.test_output_path('test_error.tsv')
eval_path = yatest.common.test_output_path('output.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'--custom-metric', metric,
'-f', train,
'-t', test,
'--column-description', cd,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--test-err-log', test_error_path,
'--use-best-model', 'false',
'--classes-count', '3',
'--metric-period', metric_period
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', metric,
'--input-path', test,
'--column-description', cd,
'-m', output_model_path,
'-o', eval_path,
'--block-size', '100',
'--eval-period', metric_period,
'--save-stats'
)
yatest.common.execute(cmd)
start_index = 1 if metric == loss_function else 2
first_metrics = np.round(np.loadtxt(test_error_path, skiprows=1)[:, start_index:], 8)
second_metrics = np.round(np.loadtxt(eval_path, skiprows=1)[:, 1:], 8)
assert np.all(first_metrics == second_metrics)
return [local_canonical_file(eval_path)]
def test_eval_metrics_class_names():
labels = ['a', 'b', 'c', 'd']
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
test_error_path = yatest.common.test_output_path('test_error.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'MultiClass',
'--custom-metric', 'TotalF1,AUC',
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'-i', '10',
'-T', '4',
'-m', model_path,
'--test-err-log', test_error_path,
'--use-best-model', 'false',
'--class-names', ','.join(labels),
)
yatest.common.execute(cmd)
eval_cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', 'TotalF1,AUC',
'--input-path', test_path,
'--column-description', cd_path,
'-m', model_path,
'-o', eval_path,
'--block-size', '100',
'--save-stats'
)
yatest.common.execute(eval_cmd)
first_metrics = np.round(np.loadtxt(test_error_path, skiprows=1)[:, 2], 8)
second_metrics = np.round(np.loadtxt(eval_path, skiprows=1)[:, 1], 8)
assert np.all(first_metrics == second_metrics)
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('metric_period', ['1', '2'])
@pytest.mark.parametrize('metric', ['Accuracy', 'AUC'])
def test_eval_metrics_with_baseline(metric_period, metric):
train = data_file('adult_weight', 'train_weight')
test = data_file('adult_weight', 'test_weight')
cd = data_file('train_adult_baseline.cd')
output_model_path = yatest.common.test_output_path('model.bin')
test_error_path = yatest.common.test_output_path('test_error.tsv')
eval_path = yatest.common.test_output_path('output.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'--eval-metric', metric,
'-f', train,
'-t', test,
'--column-description', cd,
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--test-err-log', test_error_path,
'--use-best-model', 'false',
'--metric-period', metric_period
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', metric,
'--input-path', test,
'--column-description', cd,
'-m', output_model_path,
'-o', eval_path,
'--block-size', '100',
'--eval-period', metric_period,
'--save-stats'
)
yatest.common.execute(cmd)
first_metrics = np.round(np.loadtxt(test_error_path, skiprows=1)[:, 1], 8)
second_metrics = np.round(np.loadtxt(eval_path, skiprows=1)[:, 1], 8)
assert np.all(first_metrics == second_metrics)
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('metric_period', ['1', '2'])
@pytest.mark.parametrize('metric', ['Accuracy'])
def test_eval_metrics_multiclass_with_baseline(metric_period, metric):
labels = [0, 1, 2, 3]
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target'], [1, 'Baseline'], [2, 'Baseline'], [3, 'Baseline'], [4, 'Baseline']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
output_model_path = yatest.common.test_output_path('model.bin')
test_error_path = yatest.common.test_output_path('test_error.tsv')
eval_path = yatest.common.test_output_path('output.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'MultiClass',
'--eval-metric', metric,
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'-i', '10',
'-T', '4',
'-m', output_model_path,
'--test-err-log', test_error_path,
'--use-best-model', 'false',
'--classes-count', '4',
'--metric-period', metric_period
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', metric,
'--input-path', test_path,
'--column-description', cd_path,
'-m', output_model_path,
'-o', eval_path,
'--block-size', '100',
'--eval-period', metric_period,
'--save-stats'
)
yatest.common.execute(cmd)
first_metrics = np.round(np.loadtxt(test_error_path, skiprows=1)[:, 1], 8)
second_metrics = np.round(np.loadtxt(eval_path, skiprows=1)[:, 1], 8)
assert np.all(first_metrics == second_metrics)
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
'dev_score_calc_obj_block_size',
SCORE_CALC_OBJ_BLOCK_SIZES,
ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_ctr_leaf_count_limit(boosting_type, dev_score_calc_obj_block_size):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'--boosting-type', boosting_type,
'--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
'--ctr-leaf-count-limit', '10',
'-i', '30',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('eval_period', ['1', '2'])
def test_eval_non_additive_metric(eval_period):
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--use-best-model', 'false',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-i', '10',
'-w', '0.03',
'-T', '4',
'-m', output_model_path,
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', 'AUC:hints=skip_train~false',
'--input-path', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-m', output_model_path,
'-o', output_eval_path,
'--eval-period', eval_period,
'--block-size', '10'
)
yatest.common.execute(cmd)
output_eval_in_parts = yatest.common.test_output_path('eval_in_parts.eval')
cmd = (
CATBOOST_PATH,
'eval-metrics',
'--metrics', 'AUC:hints=skip_train~false',
'--input-path', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-m', output_model_path,
'-o', output_eval_in_parts,
'--eval-period', eval_period,
'--calc-on-parts',
'--block-size', '10'
)
yatest.common.execute(cmd)
first_metrics = np.loadtxt(output_eval_path, skiprows=1)
second_metrics = np.loadtxt(output_eval_in_parts, skiprows=1)
assert np.all(first_metrics == second_metrics)
return [local_canonical_file(output_eval_path)]
@pytest.mark.parametrize('boosting_type', ['Plain', 'Ordered'])
@pytest.mark.parametrize('max_ctr_complexity', [1, 2])
def test_eval_eq_calc(boosting_type, max_ctr_complexity):
one_hot_max_size = 2
cd_path = yatest.common.test_output_path('cd.txt')
train_path = yatest.common.test_output_path('train.txt')
test_path = yatest.common.test_output_path('test.txt')
model_path = yatest.common.test_output_path('model.bin')
test_eval_path = yatest.common.test_output_path('test.eval')
calc_eval_path = yatest.common.test_output_path('calc.eval')
np.savetxt(cd_path, [['0', 'Target'],
['1', 'Categ'],
['2', 'Categ']
], fmt='%s', delimiter='\t')
np.savetxt(train_path, [['1', 'A', 'X'],
['1', 'B', 'Y'],
['1', 'C', 'Y'],
['0', 'A', 'Z'],
['0', 'B', 'Z'],
], fmt='%s', delimiter='\t')
np.savetxt(test_path, [['1', 'A', 'Y'],
['1', 'D', 'U'],
['1', 'D', 'U']
], fmt='%s', delimiter='\t')
cmd_fit = (CATBOOST_PATH, 'fit',
'--loss-function', 'Logloss',
'--boosting-type', boosting_type,
'--cd', cd_path,
'-f', train_path,
'-t', test_path,
'-m', model_path,
'--eval-file', test_eval_path,
'-i', '5',
'-T', '1',
'--max-ctr-complexity', str(max_ctr_complexity),
'--one-hot-max-size', str(one_hot_max_size),
)
cmd_calc = (CATBOOST_PATH, 'calc',
'--cd', cd_path,
'--input-path', test_path,
'-m', model_path,
'-T', '1',
'--output-path', calc_eval_path,
)
yatest.common.execute(cmd_fit)
yatest.common.execute(cmd_calc)
assert compare_evals(test_eval_path, calc_eval_path)
@pytest.mark.parametrize('loss_function', ['RMSE', 'Logloss', 'Poisson'])
@pytest.mark.parametrize('leaf_estimation_iteration', ['1', '2'])
def test_object_importances(loss_function, leaf_estimation_iteration):
output_model_path = yatest.common.test_output_path('model.bin')
object_importances_path = yatest.common.test_output_path('object_importances.tsv')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-i', '10',
'--leaf-estimation-method', 'Gradient',
'--leaf-estimation-iterations', leaf_estimation_iteration,
'--boosting-type', 'Plain',
'-T', '4',
'-m', output_model_path,
'--use-best-model', 'false'
)
yatest.common.execute(cmd)
cmd = (
CATBOOST_PATH,
'ostr',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-m', output_model_path,
'-o', object_importances_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(object_importances_path)]
# Create `num_tests` test files from `test_input_path`.
def split_test_to(num_tests, test_input_path):
test_input_lines = open(test_input_path).readlines()
test_paths = [yatest.common.test_output_path('test{}'.format(i)) for i in range(num_tests)]
for testno in range(num_tests):
test_path = test_paths[testno]
test_lines = test_input_lines[testno::num_tests]
open(test_path, 'wt').write(''.join(test_lines))
return test_paths
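The round-robin slicing in `split_test_to` can be sketched standalone with in-memory lists (the helper name below is illustrative, not part of the suite):

```python
def split_lines_round_robin(lines, num_parts):
    # Part i receives lines i, i + num_parts, i + 2 * num_parts, ...
    # mirroring the `test_input_lines[testno::num_tests]` slicing above.
    return [lines[i::num_parts] for i in range(num_parts)]


parts = split_lines_round_robin(['a', 'b', 'c', 'd', 'e'], 2)
# parts[0] == ['a', 'c', 'e'] and parts[1] == ['b', 'd']
```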
# Create a few shuffles from list of test files, for use with `-t` option.
def create_test_shuffles(test_paths, seed=20181219, prng=None):
if prng is None:
prng = np.random.RandomState(seed=seed)
num_tests = len(test_paths)
num_shuffles = num_tests # if num_tests < 3 else num_tests * (num_tests - 1)
test_shuffles = set()
while len(test_shuffles) < num_shuffles:
test_shuffles.add(tuple(prng.permutation(test_paths)))
return [','.join(shuffle) for shuffle in test_shuffles]
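A minimal standalone sketch of the deduplication trick in `create_test_shuffles`: a `set` of tuples silently discards repeated permutations until enough distinct orderings are collected (the paths here are dummies):

```python
import numpy as np


def distinct_shuffles(paths, num_shuffles, seed=20181219):
    prng = np.random.RandomState(seed=seed)
    shuffles = set()
    # Keep permuting until `num_shuffles` distinct orderings are found;
    # duplicate permutations collapse inside the set.
    while len(shuffles) < num_shuffles:
        shuffles.add(tuple(prng.permutation(paths)))
    return [','.join(shuffle) for shuffle in shuffles]


orderings = distinct_shuffles(['test0', 'test1', 'test2'], 3)
# three distinct comma-joined orderings, each containing all three paths
```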
def fit_calc_cksum(fit_stem, calc_stem, test_shuffles):
import hashlib
last_cksum = None
for i, shuffle in enumerate(test_shuffles):
model_path = yatest.common.test_output_path('model{}.bin'.format(i))
eval_path = yatest.common.test_output_path('eval{}.txt'.format(i))
yatest.common.execute(fit_stem + (
'-t', shuffle,
'-m', model_path,
))
yatest.common.execute(calc_stem + (
'-m', model_path,
'--output-path', eval_path,
))
cksum = hashlib.md5(open(eval_path, 'rb').read()).hexdigest()
if last_cksum is None:
last_cksum = cksum
continue
assert last_cksum == cksum
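`hashlib.md5` digests bytes, which is why the eval files above are read in binary mode; a self-contained sketch of the checksum comparison (the temporary files are throwaways for illustration):

```python
import hashlib
import tempfile


def file_md5(path):
    # Binary mode: hashlib digests operate on bytes, not str.
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


with tempfile.NamedTemporaryFile(delete=False) as a, \
        tempfile.NamedTemporaryFile(delete=False) as b:
    a.write(b'0.5\n1.5\n')
    b.write(b'0.5\n1.5\n')

# identical file contents always yield identical digests
assert file_md5(a.name) == file_md5(b.name)
```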
@pytest.mark.parametrize('num_tests', [3, 4])
@pytest.mark.parametrize('boosting_type', ['Plain', 'Ordered'])
def test_multiple_eval_sets_order_independent(boosting_type, num_tests):
train_path = data_file('adult', 'train_small')
cd_path = data_file('adult', 'train.cd')
test_input_path = data_file('adult', 'test_small')
fit_stem = (CATBOOST_PATH, 'fit',
'--loss-function', 'RMSE',
'-f', train_path,
'--cd', cd_path,
'--boosting-type', boosting_type,
'-i', '5',
'-T', '4',
'--use-best-model', 'false',
)
calc_stem = (CATBOOST_PATH, 'calc',
'--cd', cd_path,
'--input-path', test_input_path,
'-T', '4',
)
# Train with several orderings of the eval sets and check that the resulting eval outputs are identical
prng = np.random.RandomState(seed=20181219)
test_shuffles = create_test_shuffles(split_test_to(num_tests, test_input_path), prng=prng)
fit_calc_cksum(fit_stem, calc_stem, test_shuffles)
@pytest.mark.parametrize('num_tests', [3, 4])
@pytest.mark.parametrize('boosting_type', ['Plain', 'Ordered'])
def test_multiple_eval_sets_querywise_order_independent(boosting_type, num_tests):
train_path = data_file('querywise', 'train')
cd_path = data_file('querywise', 'train.cd.query_id')
test_input_path = data_file('querywise', 'test')
fit_stem = (CATBOOST_PATH, 'fit',
'--loss-function', 'QueryRMSE',
'-f', train_path,
'--cd', cd_path,
'--boosting-type', boosting_type,
'-i', '5',
'-T', '4',
'--use-best-model', 'false',
)
calc_stem = (CATBOOST_PATH, 'calc',
'--cd', cd_path,
'--input-path', test_input_path,
'-T', '4',
)
# Train with several orderings of the eval sets and check that the resulting eval outputs are identical
prng = np.random.RandomState(seed=20181219)
test_shuffles = create_test_shuffles(split_test_to(num_tests, test_input_path), prng=prng)
fit_calc_cksum(fit_stem, calc_stem, test_shuffles)
def test_multiple_eval_sets_no_empty():
train_path = data_file('adult', 'train_small')
cd_path = data_file('adult', 'train.cd')
test_input_path = data_file('adult', 'test_small')
fit_stem = (CATBOOST_PATH, 'fit',
'--loss-function', 'RMSE',
'-f', train_path,
'--cd', cd_path,
'-i', '5',
'-T', '4',
'--use-best-model', 'false',
)
test0_path = yatest.common.test_output_path('test0.txt')
open(test0_path, 'wt').write('')
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(fit_stem + (
'-t', ','.join((test_input_path, test0_path))
))
@pytest.mark.parametrize('loss_function', ['RMSE', 'QueryRMSE'])
def test_multiple_eval_sets(loss_function):
num_tests = 5
train_path = data_file('querywise', 'train')
cd_path = data_file('querywise', 'train.cd.query_id')
test_input_path = data_file('querywise', 'test')
eval_path = yatest.common.test_output_path('test.eval')
test_paths = list(reversed(split_test_to(num_tests, test_input_path)))
cmd = (CATBOOST_PATH, 'fit',
'--loss-function', loss_function,
'-f', train_path,
'-t', ','.join(test_paths),
'--column-description', cd_path,
'-i', '5',
'-T', '4',
'--use-best-model', 'false',
'--eval-file', eval_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(eval_path)]
def test_multiple_eval_sets_err_log():
num_tests = 3
train_path = data_file('querywise', 'train')
cd_path = data_file('querywise', 'train.cd.query_id')
test_input_path = data_file('querywise', 'test')
test_err_log_path = yatest.common.test_output_path('test-err.log')
json_log_path = yatest.common.test_output_path('json.log')
test_paths = reversed(split_test_to(num_tests, test_input_path))
cmd = (CATBOOST_PATH, 'fit',
'--loss-function', 'RMSE',
'-f', train_path,
'-t', ','.join(test_paths),
'--column-description', cd_path,
'-i', '5',
'-T', '4',
'--test-err-log', test_err_log_path,
'--json-log', json_log_path,
)
yatest.common.execute(cmd)
return [local_canonical_file(test_err_log_path),
local_canonical_file(remove_time_from_json(json_log_path))]
# Cast<float>(CityHash('Quvena')) is QNaN
# Cast<float>(CityHash('Sineco')) is SNaN
@pytest.mark.parametrize('cat_value', ['Normal', 'Quvena', 'Sineco'])
def test_const_cat_feature(cat_value):
def make_a_set(nrows, value, seed=20181219, prng=None):
if prng is None:
prng = np.random.RandomState(seed=seed)
label = prng.randint(0, nrows, [nrows, 1])
feature = np.full([nrows, 1], value, dtype='|S{}'.format(len(value)))
return np.concatenate([label, feature], axis=1)
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target'], [1, 'Categ']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=20181219)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, make_a_set(10, cat_value, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, make_a_set(10, cat_value, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
cmd = (CATBOOST_PATH, 'fit',
'--loss-function', 'RMSE',
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'-i', '5',
'-T', '4',
'--eval-file', eval_path,
)
with pytest.raises(yatest.common.ExecutionError):
yatest.common.execute(cmd)
def test_model_metadata():
output_model_path = yatest.common.test_output_path('model.bin')
output_eval_path = yatest.common.test_output_path('test.eval')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'Logloss',
'-f', data_file('adult', 'train_small'),
'-t', data_file('adult', 'test_small'),
'--column-description', data_file('adult', 'train.cd'),
'-i', '2',
'-T', '4',
'-m', output_model_path,
'--eval-file', output_eval_path,
'-w', '0.1',
'--set-metadata-from-freeargs',
'A', 'A',
'BBB', 'BBB',
'CCC', 'A'
)
yatest.common.execute(cmd)
calc_cmd = (
CATBOOST_PATH,
'metadata', 'set',
'-m', output_model_path,
'--key', 'CCC',
'--value', 'CCC'
)
yatest.common.execute(calc_cmd)
py_catboost = catboost.CatBoost()
py_catboost.load_model(output_model_path)
assert 'A' == py_catboost.get_metadata()['A']
assert 'BBB' == py_catboost.get_metadata()['BBB']
assert 'CCC' == py_catboost.get_metadata()['CCC']
def test_fit_multiclass_with_class_names():
labels = ['a', 'b', 'c', 'd']
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
fit_cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'MultiClass',
'--class-names', ','.join(labels),
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'-i', '10',
'-T', '4',
'--use-best-model', 'false',
'--prediction-type', 'RawFormulaVal,Class',
'--eval-file', eval_path
)
yatest.common.execute(fit_cmd)
return [local_canonical_file(eval_path)]
def test_extract_multiclass_labels_from_class_names():
labels = ['a', 'b', 'c', 'd']
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
fit_cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'MultiClass',
'--class-names', ','.join(labels),
'-f', train_path,
'-t', test_path,
'--column-description', cd_path,
'-i', '10',
'-T', '4',
'-m', model_path,
'--use-best-model', 'false',
)
calc_cmd = (
CATBOOST_PATH,
'calc',
'--input-path', test_path,
'--column-description', cd_path,
'-T', '4',
'-m', model_path,
'--output-path', eval_path,
'--prediction-type', 'RawFormulaVal,Class',
)
yatest.common.execute(fit_cmd)
yatest.common.execute(calc_cmd)
py_catboost = catboost.CatBoost()
py_catboost.load_model(model_path)
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_to_label'] == [0, 1, 2, 3]
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_names'] == ['a', 'b', 'c', 'd']
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['classes_count'] == 0
assert json.loads(py_catboost.get_metadata()['params'])['data_processing_options']['class_names'] == ['a', 'b', 'c', 'd']
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('loss_function', ['MultiClass', 'MultiClassOneVsAll', 'Logloss', 'RMSE'])
def test_save_multiclass_labels_from_data(loss_function):
labels = [10000000, 7, 0, 9999]
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', loss_function,
'-f', train_path,
'--column-description', cd_path,
'-i', '10',
'-T', '4',
'-m', model_path,
'--use-best-model', 'false',
)
yatest.common.execute(cmd)
py_catboost = catboost.CatBoost()
py_catboost.load_model(model_path)
if loss_function in MULTICLASS_LOSSES:
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_to_label'] == [0, 1, 2, 3]
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_names'] == ['0.0', '7.0', '9999.0', '10000000.0']
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['classes_count'] == 0
else:
assert 'multiclass_params' not in py_catboost.get_metadata()
@pytest.mark.parametrize('prediction_type', ['Probability', 'RawFormulaVal', 'Class'])
def test_apply_multiclass_labels_from_data(prediction_type):
labels = [10000000, 7, 0, 9999]
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
train_path = yatest.common.test_output_path('train.txt')
np.savetxt(train_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
test_path = yatest.common.test_output_path('test.txt')
np.savetxt(test_path, generate_random_labeled_set(100, 10, labels, prng=prng), fmt='%s', delimiter='\t')
eval_path = yatest.common.test_output_path('eval.txt')
fit_cmd = (
CATBOOST_PATH,
'fit',
'--loss-function', 'MultiClass',
'-f', train_path,
'--column-description', cd_path,
'-i', '10',
'-T', '4',
'-m', model_path,
'--use-best-model', 'false',
)
calc_cmd = (
CATBOOST_PATH,
'calc',
'--input-path', test_path,
'--column-description', cd_path,
'-m', model_path,
'--output-path', eval_path,
'--prediction-type', prediction_type,
)
yatest.common.execute(fit_cmd)
yatest.common.execute(calc_cmd)
py_catboost = catboost.CatBoost()
py_catboost.load_model(model_path)
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_to_label'] == [0, 1, 2, 3]
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_names'] == ['0.0', '7.0', '9999.0', '10000000.0']
assert json.loads(py_catboost.get_metadata()['multiclass_params'])['classes_count'] == 0
if prediction_type in ['Probability', 'RawFormulaVal']:
with open(eval_path, "rt") as f:
for line in f:
assert line[:-1] == 'DocId\t{}:Class=0.0\t{}:Class=7.0\t{}:Class=9999.0\t{}:Class=10000000.0'\
.format(prediction_type, prediction_type, prediction_type, prediction_type)
break
else: # Class
with open(eval_path, "rt") as f:
for i, line in enumerate(f):
if not i:
assert line[:-1] == 'DocId\tClass'
else:
assert float(line[:-1].split()[1]) in labels
return [local_canonical_file(eval_path)]
@pytest.mark.parametrize('loss_function', MULTICLASS_LOSSES)
@pytest.mark.parametrize('prediction_type', ['Probability', 'RawFormulaVal', 'Class'])
def test_save_and_apply_multiclass_labels_from_classes_count(loss_function, prediction_type):
model_path = yatest.common.test_output_path('model.bin')
cd_path = yatest.common.test_output_path('cd.txt')
np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')
prng = np.random.RandomState(seed=0)
    train_path = yatest.common.test_output_path('train.txt')
    np.savetxt(train_path, generate_random_labeled_set(100, 10, [1, 2], prng=prng), fmt='%s', delimiter='\t')

    test_path = yatest.common.test_output_path('test.txt')
    np.savetxt(test_path, generate_random_labeled_set(100, 10, [0, 1, 2, 3], prng=prng), fmt='%s', delimiter='\t')

    eval_path = yatest.common.test_output_path('eval.txt')

    fit_cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '--classes-count', '4',
        '-f', train_path,
        '--column-description', cd_path,
        '-i', '10',
        '-T', '4',
        '-m', model_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(fit_cmd)

    py_catboost = catboost.CatBoost()
    py_catboost.load_model(model_path)

    assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_to_label'] == [1, 2]
    assert json.loads(py_catboost.get_metadata()['multiclass_params'])['classes_count'] == 4
    assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_names'] == []

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', test_path,
        '--column-description', cd_path,
        '-m', model_path,
        '--output-path', eval_path,
        '--prediction-type', prediction_type
    )
    yatest.common.execute(calc_cmd)

    if prediction_type == 'RawFormulaVal':
        with open(eval_path, "rt") as f:
            for i, line in enumerate(f):
                if i == 0:
                    assert line[:-1] == 'DocId\t{}:Class=0\t{}:Class=1\t{}:Class=2\t{}:Class=3' \
                        .format(prediction_type, prediction_type, prediction_type, prediction_type)
                else:
                    # fictitious approxes must be negative infinity
                    assert float(line[:-1].split()[1]) == float('-inf') and float(line[:-1].split()[4]) == float('-inf')

    if prediction_type == 'Probability':
        with open(eval_path, "rt") as f:
            for i, line in enumerate(f):
                if i == 0:
                    assert line[:-1] == 'DocId\t{}:Class=0\t{}:Class=1\t{}:Class=2\t{}:Class=3' \
                        .format(prediction_type, prediction_type, prediction_type, prediction_type)
                else:
                    # fictitious probabilities must be virtually zero
                    assert abs(float(line[:-1].split()[1])) < 1e-307 \
                        and abs(float(line[:-1].split()[4])) < 1e-307

    if prediction_type == 'Class':
        with open(eval_path, "rt") as f:
            for i, line in enumerate(f):
                if i == 0:
                    assert line[:-1] == 'DocId\tClass'
                else:
                    # classes 0 and 3 are absent from the train set, so they must never be predicted
                    assert float(line[:-1].split()[1]) in [1, 2]

    return [local_canonical_file(eval_path)]


def test_set_class_names_implicitly():
    INPUT_CLASS_LABELS = ['a', 'bc', '7.', '8.0', '19.2']
    SAVED_CLASS_LABELS = ['19.2', '7.', '8.0', 'a', 'bc']

    model_path = yatest.common.test_output_path('model.bin')

    cd_path = yatest.common.test_output_path('cd.txt')
    np.savetxt(cd_path, [[0, 'Target']], fmt='%s', delimiter='\t')

    prng = np.random.RandomState(seed=0)

    train_path = yatest.common.test_output_path('train.txt')
    np.savetxt(train_path, generate_random_labeled_set(100, 10, INPUT_CLASS_LABELS, prng=prng), fmt='%s', delimiter='\t')

    test_path = yatest.common.test_output_path('test.txt')
    np.savetxt(test_path, generate_random_labeled_set(100, 10, INPUT_CLASS_LABELS, prng=prng), fmt='%s', delimiter='\t')

    eval_path = yatest.common.test_output_path('eval.txt')

    fit_cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'MultiClass',
        '-f', train_path,
        '--column-description', cd_path,
        '-i', '10',
        '-T', '4',
        '-m', model_path,
        '--use-best-model', 'false',
    )
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', test_path,
        '--column-description', cd_path,
        '-m', model_path,
        '--output-path', eval_path,
        '--prediction-type', 'RawFormulaVal,Class',
    )
    yatest.common.execute(fit_cmd)

    py_catboost = catboost.CatBoost()
    py_catboost.load_model(model_path)

    assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_to_label'] == [0, 1, 2, 3, 4]
    assert json.loads(py_catboost.get_metadata()['multiclass_params'])['class_names'] == SAVED_CLASS_LABELS
    assert json.loads(py_catboost.get_metadata()['multiclass_params'])['classes_count'] == 0

    yatest.common.execute(calc_cmd)

    with open(eval_path, "rt") as f:
        for i, line in enumerate(f):
            if not i:
                assert line[:-1] == 'DocId\t{}:Class=19.2\t{}:Class=7.\t{}:Class=8.0\t{}:Class=a\t{}:Class=bc\tClass' \
                    .format(*(['RawFormulaVal'] * 5))
            else:
                label = line[:-1].split()[-1]
                assert label in SAVED_CLASS_LABELS

    return [local_canonical_file(eval_path)]


CANONICAL_CLOUDNESS_MINI_MULTICLASS_MODEL_PATH = data_file('', 'multiclass_model.bin')


@pytest.mark.parametrize('prediction_type', ['Probability', 'RawFormulaVal', 'Class'])
def test_multiclass_model_backward_compatibility(prediction_type):
    model = catboost.CatBoost()
    model.load_model(CANONICAL_CLOUDNESS_MINI_MULTICLASS_MODEL_PATH)

    assert 'multiclass_params' not in model.get_metadata()

    pool = catboost.Pool(data_file('cloudness_small', 'train_small'),
                         column_description=data_file('cloudness_small', 'train.cd'))
    model.predict(data=pool, prediction_type='Class')
    model.eval_metrics(data=pool, metrics=['Accuracy'])

    output_path = yatest.common.test_output_path('out.txt')
    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('cloudness_small', 'train_small'),
        '--column-description', data_file('cloudness_small', 'train.cd'),
        '-m', CANONICAL_CLOUDNESS_MINI_MULTICLASS_MODEL_PATH,
        '--prediction-type', prediction_type,
        '--output-path', output_path,
    )
    yatest.common.execute(calc_cmd)
    return [local_canonical_file(output_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('use_best_model', ['true', 'false'])
def test_learning_rate_auto_set(boosting_type, use_best_model):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', use_best_model,
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', boosting_type,
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--od-type', 'Iter',
        '--od-wait', '2',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_paths_with_dsv_scheme():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', 'dsv://' + data_file('querywise', 'train'),
        '-t', 'dsv://' + data_file('querywise', 'test'),
        '--column-description', 'dsv://' + data_file('querywise', 'train.cd'),
        '--boosting-type', 'Ordered',
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_skip_train():
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')
    json_log_path = yatest.common.test_output_path('json_log.json')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QueryRMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '-i', '20',
        '-T', '4',
        '--custom-metric', 'AverageGain:top=2;hints=skip_train~true',
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
        '--use-best-model', 'false',
        '--json-log', json_log_path
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(learn_error_path),
            local_canonical_file(test_error_path),
            local_canonical_file(remove_time_from_json(json_log_path))]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_group_weight(boosting_type, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')

    def run_catboost(train_path, test_path, cd_path, eval_path):
        cmd = (
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'YetiRank',
            '-f', data_file('querywise', train_path),
            '-t', data_file('querywise', test_path),
            '--column-description', data_file('querywise', cd_path),
            '--boosting-type', boosting_type,
            '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
            '-i', '10',
            '-T', '4',
            '-m', output_model_path,
            '--eval-file', eval_path,
        )
        yatest.common.execute(cmd)

    output_eval_path_first = yatest.common.test_output_path('test_first.eval')
    output_eval_path_second = yatest.common.test_output_path('test_second.eval')
    run_catboost('train', 'test', 'train.cd', output_eval_path_first)
    run_catboost('train.const_group_weight', 'test.const_group_weight', 'train.cd.group_weight', output_eval_path_second)
    assert filecmp.cmp(output_eval_path_first, output_eval_path_second)

    run_catboost('train', 'test', 'train.cd.group_weight', output_eval_path)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('loss_function', ['QueryRMSE', 'RMSE'])
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_group_weight_and_object_weight(boosting_type, loss_function, dev_score_calc_obj_block_size):

    def run_catboost(train_path, test_path, cd_path, eval_path):
        cmd = (
            CATBOOST_PATH,
            'fit',
            '--loss-function', loss_function,
            '-f', data_file('querywise', train_path),
            '-t', data_file('querywise', test_path),
            '--column-description', data_file('querywise', cd_path),
            '--boosting-type', boosting_type,
            '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
            '-i', '10',
            '-T', '4',
            '--eval-file', eval_path,
        )
        yatest.common.execute(cmd)

    output_eval_path_first = yatest.common.test_output_path('test_first.eval')
    output_eval_path_second = yatest.common.test_output_path('test_second.eval')
    run_catboost('train', 'test', 'train.cd.group_weight', output_eval_path_first)
    run_catboost('train', 'test', 'train.cd.weight', output_eval_path_second)
    assert filecmp.cmp(output_eval_path_first, output_eval_path_second)


def test_snapshot_without_random_seed():

    def run_catboost(iters, eval_path, additional_params=None):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--loss-function', 'Logloss',
            '--learning-rate', '0.5',
            '-f', data_file('adult', 'train_small'),
            '-t', data_file('adult', 'test_small'),
            '--column-description', data_file('adult', 'train.cd'),
            '-i', str(iters),
            '-T', '4',
            '--eval-file', eval_path,
        ]
        if additional_params:
            cmd += additional_params
        tmpfile = 'test_data_dumps'
        with open(tmpfile, 'w') as f:
            yatest.common.execute(cmd, stdout=f)
        with open(tmpfile, 'r') as output:
            line_count = sum(1 for line in output)
        return line_count

    model_path = yatest.common.test_output_path('model.bin')
    eval_path = yatest.common.test_output_path('test.eval')
    progress_path = yatest.common.test_output_path('test.cbp')
    additional_params = ['--snapshot-file', progress_path, '-m', model_path]

    first_line_count = run_catboost(15, eval_path, additional_params=additional_params)
    second_line_count = run_catboost(30, eval_path, additional_params=additional_params)
    third_line_count = run_catboost(45, eval_path, additional_params=additional_params)
    assert first_line_count == second_line_count == third_line_count

    canon_eval_path = yatest.common.test_output_path('canon_test.eval')
    cb_model = catboost.CatBoost()
    cb_model.load_model(model_path)
    random_seed = cb_model.random_seed_
    run_catboost(45, canon_eval_path, additional_params=['-r', str(random_seed)])
    assert filecmp.cmp(canon_eval_path, eval_path)


def test_snapshot_with_interval():

    def run_with_timeout(cmd, timeout):
        try:
            yatest.common.execute(cmd, timeout=timeout)
        except ExecutionTimeoutError:
            return True
        return False

    cmd = [
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-T', '4',
    ]

    measure_time_iters = 100
    exec_time = timeit.timeit(lambda: yatest.common.execute(cmd + ['-i', str(measure_time_iters)]), number=1)

    SNAPSHOT_INTERVAL = 1
    TIMEOUT = 5
    TOTAL_TIME = 25
    iters = int(TOTAL_TIME / (exec_time / measure_time_iters))

    canon_eval_path = yatest.common.test_output_path('canon_test.eval')
    canon_params = cmd + ['--eval-file', canon_eval_path, '-i', str(iters)]
    yatest.common.execute(canon_params)

    eval_path = yatest.common.test_output_path('test.eval')
    progress_path = yatest.common.test_output_path('test.cbp')
    model_path = yatest.common.test_output_path('model.bin')
    params = cmd + ['--snapshot-file', progress_path,
                    '--snapshot-interval', str(SNAPSHOT_INTERVAL),
                    '-m', model_path,
                    '--eval-file', eval_path,
                    '-i', str(iters)]

    was_timeout = False
    while run_with_timeout(params, TIMEOUT):
        was_timeout = True
    assert was_timeout
    assert filecmp.cmp(canon_eval_path, eval_path)


def test_snapshot_with_different_params():
    cmd = [
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-T', '4',
        '-i', '10',
        '--snapshot-file', 'snapshot.cbp'
    ]

    cmd_1 = cmd + ['--eval-metric', 'Logloss']
    cmd_2 = cmd + ['--eval-metric', 'Accuracy']
    yatest.common.execute(cmd_1)
    try:
        yatest.common.execute(cmd_2)
    except ExecutionError:
        return

    assert False


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
@pytest.mark.parametrize('leaf_estimation_method', LEAF_ESTIMATION_METHOD)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_querysoftmax(boosting_type, leaf_estimation_method, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'QuerySoftMax',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--boosting-type', boosting_type,
        '--leaf-estimation-method', leaf_estimation_method,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


def test_shap_verbose():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_values_path = yatest.common.test_output_path('shapval')
    output_log = yatest.common.test_output_path('log')
    cmd_fit = [
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '--learning-rate', '0.5',
        '-f', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '250',
        '-T', '4',
        '-m', output_model_path,
    ]
    yatest.common.execute(cmd_fit)
    cmd_shap = [
        CATBOOST_PATH,
        'fstr',
        '-o', output_values_path,
        '--input-path', data_file('adult', 'train_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--verbose', '12',
        '--fstr-type', 'ShapValues',
        '-T', '4',
        '-m', output_model_path,
    ]
    with open(output_log, 'w') as log:
        yatest.common.execute(cmd_shap, stdout=log)
    with open(output_log, 'r') as log:
        line_count = sum(1 for line in log)
    assert line_count == 5


@pytest.mark.parametrize('bagging_temperature', ['0', '1'])
@pytest.mark.parametrize('sampling_unit', SAMPLING_UNIT_TYPES)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_querywise_bayesian_bootstrap(bagging_temperature, sampling_unit, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'RMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--bootstrap-type', 'Bayesian',
        '--sampling-unit', sampling_unit,
        '--bagging-temperature', bagging_temperature,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('subsample', ['0.5', '1'])
@pytest.mark.parametrize('sampling_unit', SAMPLING_UNIT_TYPES)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_querywise_bernoulli_bootstrap(subsample, sampling_unit, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'RMSE',
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--bootstrap-type', 'Bernoulli',
        '--sampling-unit', sampling_unit,
        '--subsample', subsample,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


LOSS_FUNCTIONS_WITH_PAIRWISE_SCORING = ['YetiRankPairwise', 'PairLogitPairwise']


@pytest.mark.parametrize('bagging_temperature', ['0', '1'])
@pytest.mark.parametrize('sampling_unit', SAMPLING_UNIT_TYPES)
@pytest.mark.parametrize('loss_function', LOSS_FUNCTIONS_WITH_PAIRWISE_SCORING)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_pairwise_bayesian_bootstrap(bagging_temperature, sampling_unit, loss_function, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--learn-pairs', data_file('querywise', 'train.pairs'),
        '--test-pairs', data_file('querywise', 'test.pairs'),
        '--bootstrap-type', 'Bayesian',
        '--sampling-unit', sampling_unit,
        '--bagging-temperature', bagging_temperature,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('subsample', ['0.5', '1'])
@pytest.mark.parametrize('sampling_unit', SAMPLING_UNIT_TYPES)
@pytest.mark.parametrize('loss_function', LOSS_FUNCTIONS_WITH_PAIRWISE_SCORING)
@pytest.mark.parametrize(
    'dev_score_calc_obj_block_size',
    SCORE_CALC_OBJ_BLOCK_SIZES,
    ids=SCORE_CALC_OBJ_BLOCK_SIZES_IDS
)
def test_pairwise_bernoulli_bootstrap(subsample, sampling_unit, loss_function, dev_score_calc_obj_block_size):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '-f', data_file('querywise', 'train'),
        '-t', data_file('querywise', 'test'),
        '--column-description', data_file('querywise', 'train.cd'),
        '--learn-pairs', data_file('querywise', 'train.pairs'),
        '--test-pairs', data_file('querywise', 'test.pairs'),
        '--bootstrap-type', 'Bernoulli',
        '--sampling-unit', sampling_unit,
        '--subsample', subsample,
        '--dev-score-calc-obj-block-size', dev_score_calc_obj_block_size,
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('loss_function', ['Logloss', 'RMSE', 'MultiClass', 'QuerySoftMax', 'QueryRMSE'])
@pytest.mark.parametrize('metric', ['Logloss', 'RMSE', 'MultiClass', 'QuerySoftMax', 'AUC', 'PFound'])
def test_bad_metrics_combination(loss_function, metric):
    BAD_PAIRS = {
        'Logloss': ['RMSE', 'MultiClass'],
        'RMSE': ['Logloss', 'MultiClass', 'QuerySoftMax'],
        'MultiClass': ['Logloss', 'RMSE', 'QuerySoftMax', 'PFound'],
        'QuerySoftMax': ['RMSE', 'MultiClass'],
        'QueryRMSE': ['Logloss', 'MultiClass', 'QuerySoftMax'],
        'YetiRank': ['Logloss', 'RMSE', 'MultiClass', 'QuerySoftMax']
    }

    cd_path = yatest.common.test_output_path('cd.txt')
    np.savetxt(cd_path, [[0, 'Target'], [1, 'QueryId']], fmt='%s', delimiter='\t')

    data = np.array([[0, 1, 0, 1, 0], [0, 0, 1, 1, 2], [1, 2, 3, 4, 5]]).T

    train_path = yatest.common.test_output_path('train.txt')
    np.savetxt(train_path, data, fmt='%s', delimiter='\t')

    test_path = yatest.common.test_output_path('test.txt')
    np.savetxt(test_path, data, fmt='%s', delimiter='\t')

    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '--custom-metric', metric,
        '-f', train_path,
        '-t', test_path,
        '--column-description', cd_path,
        '-i', '4',
        '-T', '4',
    )

    try:
        yatest.common.execute(cmd)
    except Exception:
        assert metric in BAD_PAIRS[loss_function]
        return
    assert metric not in BAD_PAIRS[loss_function]


@pytest.mark.parametrize('metric', [('good', ',AUC,'), ('bad', ',')])
def test_extra_commas(metric):
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-w', '0.03',
        '-i', '10',
        '-T', '4',
        '--custom-metric', metric[1]
    )
    if metric[0] == 'good':
        yatest.common.execute(cmd)
    if metric[0] == 'bad':
        with pytest.raises(yatest.common.ExecutionError):
            yatest.common.execute(cmd)


def execute_fit_for_test_quantized_pool(loss_function, pool_path, test_path, cd_path, eval_path,
                                        border_count=128, other_options=()):
    model_path = yatest.common.test_output_path('model.bin')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', pool_path,
        '-t', test_path,
        '--cd', cd_path,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-x', str(border_count),
        '--feature-border-type', 'GreedyLogSum',
        '-m', model_path,
        '--eval-file', eval_path,
    )
    yatest.common.execute(cmd + other_options)


def test_quantized_pool():
    test_path = data_file('higgs', 'test_small')

    tsv_eval_path = yatest.common.test_output_path('tsv.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='Logloss',
        pool_path=data_file('higgs', 'train_small'),
        test_path=test_path,
        cd_path=data_file('higgs', 'train.cd'),
        eval_path=tsv_eval_path
    )

    quantized_eval_path = yatest.common.test_output_path('quantized.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='Logloss',
        pool_path='quantized://' + data_file('higgs', 'train_small_x128_greedylogsum.bin'),
        test_path=test_path,
        cd_path=data_file('higgs', 'train.cd'),
        eval_path=quantized_eval_path
    )

    assert filecmp.cmp(tsv_eval_path, quantized_eval_path)


def test_quantized_pool_ignored_features():
    test_path = data_file('higgs', 'test_small')

    tsv_eval_path = yatest.common.test_output_path('tsv.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='Logloss',
        pool_path=data_file('higgs', 'train_small'),
        test_path=test_path,
        cd_path=data_file('higgs', 'train.cd'),
        eval_path=tsv_eval_path,
        other_options=('-I', '5',)
    )

    quantized_eval_path = yatest.common.test_output_path('quantized.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='Logloss',
        pool_path='quantized://' + data_file('higgs', 'train_small_x128_greedylogsum.bin'),
        test_path=test_path,
        cd_path=data_file('higgs', 'train.cd'),
        eval_path=quantized_eval_path,
        other_options=('-I', '5',)
    )

    assert filecmp.cmp(tsv_eval_path, quantized_eval_path)


def test_quantized_pool_groupid():
    test_path = data_file('querywise', 'test')

    tsv_eval_path = yatest.common.test_output_path('tsv.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path=data_file('querywise', 'train'),
        test_path=test_path,
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=tsv_eval_path
    )

    quantized_eval_path = yatest.common.test_output_path('quantized.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path='quantized://' + data_file('querywise', 'train_x128_greedylogsum_aqtaa.bin'),
        test_path=test_path,
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=quantized_eval_path
    )

    assert filecmp.cmp(tsv_eval_path, quantized_eval_path)


def test_quantized_pool_ignored_during_quantization():
    test_path = data_file('querywise', 'test')

    tsv_eval_path = yatest.common.test_output_path('tsv.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path=data_file('querywise', 'train'),
        test_path=test_path,
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=tsv_eval_path,
        other_options=('-I', '18-36',)
    )

    quantized_eval_path = yatest.common.test_output_path('quantized.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path='quantized://' + data_file('querywise', 'train_x128_greedylogsum_aqtaa_ignore_18_36.bin'),
        test_path=test_path,
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=quantized_eval_path
    )

    assert filecmp.cmp(tsv_eval_path, quantized_eval_path)


def test_quantized_pool_quantized_test():
    test_path = data_file('querywise', 'test')

    tsv_eval_path = yatest.common.test_output_path('tsv.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path=data_file('querywise', 'train'),
        test_path=test_path,
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=tsv_eval_path
    )

    quantized_eval_path = yatest.common.test_output_path('quantized.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path='quantized://' + data_file('querywise', 'train_x128_greedylogsum_aqtaa.bin'),
        test_path='quantized://' + data_file('querywise', 'test_borders_from_train_aqtaa.bin'),
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=quantized_eval_path
    )

    assert filecmp.cmp(tsv_eval_path, quantized_eval_path)


def test_quantized_pool_with_large_grid():
    test_path = data_file('querywise', 'test')

    tsv_eval_path = yatest.common.test_output_path('tsv.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path=data_file('querywise', 'train'),
        test_path=test_path,
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=tsv_eval_path,
        border_count=1024
    )

    quantized_eval_path = yatest.common.test_output_path('quantized.eval')
    execute_fit_for_test_quantized_pool(
        loss_function='PairLogitPairwise',
        pool_path='quantized://' + data_file('querywise', 'train.quantized_x1024'),
        test_path='quantized://' + data_file('querywise', 'test.quantized_x1024'),
        cd_path=data_file('querywise', 'train.cd.query_id'),
        eval_path=quantized_eval_path
    )

    assert filecmp.cmp(tsv_eval_path, quantized_eval_path)


def test_group_weights_file():
    first_eval_path = yatest.common.test_output_path('first.eval')
    second_eval_path = yatest.common.test_output_path('second.eval')

    def run_catboost(eval_path, cd_file, is_additional_query_weights):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--use-best-model', 'false',
            '--loss-function', 'QueryRMSE',
            '-f', data_file('querywise', 'train'),
            '-t', data_file('querywise', 'test'),
            '--column-description', data_file('querywise', cd_file),
            '-i', '5',
            '-T', '4',
            '--eval-file', eval_path,
        ]
        if is_additional_query_weights:
            cmd += [
                '--learn-group-weights', data_file('querywise', 'train.group_weights'),
                '--test-group-weights', data_file('querywise', 'test.group_weights'),
            ]
        yatest.common.execute(cmd)

    run_catboost(first_eval_path, 'train.cd', True)
    run_catboost(second_eval_path, 'train.cd.group_weight', False)
    assert filecmp.cmp(first_eval_path, second_eval_path)

    return [local_canonical_file(first_eval_path)]


def test_group_weights_file_quantized():
    first_eval_path = yatest.common.test_output_path('first.eval')
    second_eval_path = yatest.common.test_output_path('second.eval')

    def run_catboost(eval_path, train, test, is_additional_query_weights):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--use-best-model', 'false',
            '--loss-function', 'QueryRMSE',
            '-f', 'quantized://' + data_file('querywise', train),
            '-t', 'quantized://' + data_file('querywise', test),
            '-i', '5',
            '-T', '4',
            '--eval-file', eval_path,
        ]
        if is_additional_query_weights:
            cmd += [
                '--learn-group-weights', data_file('querywise', 'train.group_weights'),
                '--test-group-weights', data_file('querywise', 'test.group_weights'),
            ]
        yatest.common.execute(cmd)

    run_catboost(first_eval_path, 'train.quantized', 'test.quantized', True)
    run_catboost(second_eval_path, 'train.quantized.group_weight', 'test.quantized.group_weight', False)
    assert filecmp.cmp(first_eval_path, second_eval_path)

    return [local_canonical_file(first_eval_path)]


def test_mode_roc():
    eval_path = yatest.common.test_output_path('eval.tsv')
    output_roc_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '10',
        '-T', '4',
        '--counter-calc-method', 'SkipTest',
        '--eval-file', eval_path,
        '--use-best-model', 'false',
    )
    yatest.common.execute(cmd)

    roc_cmd = (
        CATBOOST_PATH,
        'roc',
        '--eval-file', eval_path,
        '--output-path', output_roc_path
    )
    yatest.common.execute(roc_cmd)

    return local_canonical_file(output_roc_path)


@pytest.mark.parametrize('pool', ['adult', 'higgs'])
def test_convert_model_to_json(pool):
    output_model_path = yatest.common.test_output_path('model')
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--use-best-model', 'false',
        '-f', data_file(pool, 'train_small'),
        '-t', data_file(pool, 'test_small'),
        '--column-description', data_file(pool, 'train.cd'),
        '-i', '20',
        '-T', '4',
        '--eval-file', output_eval_path,
        '-m', output_model_path,
        '--model-format', 'CatboostBinary,Json'
    )
    yatest.common.execute(cmd)

    formula_predict_path_bin = yatest.common.test_output_path('predict_test_bin.eval')
    formula_predict_path_json = yatest.common.test_output_path('predict_test_json.eval')

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file(pool, 'test_small'),
        '--column-description', data_file(pool, 'train.cd'),
        '-m', output_model_path + '.json',
        '--model-format', 'Json',
        '--output-path', formula_predict_path_json
    )
    yatest.common.execute(calc_cmd)

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file(pool, 'test_small'),
        '--column-description', data_file(pool, 'train.cd'),
        '-m', output_model_path + '.bin',
        '--output-path', formula_predict_path_bin
    )
    yatest.common.execute(calc_cmd)

    assert compare_evals_with_precision(output_eval_path, formula_predict_path_bin)
    assert compare_evals_with_precision(output_eval_path, formula_predict_path_json)


LOSS_FUNCTIONS_NO_MAPE = ['RMSE', 'Logloss', 'MAE', 'CrossEntropy', 'Quantile', 'LogLinQuantile', 'Poisson']


@pytest.mark.parametrize('loss_function', LOSS_FUNCTIONS_NO_MAPE)
@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_quantized_adult_pool(loss_function, boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')

    quantized_train_file = 'quantized://' + data_file('quantized_adult', 'train.qbin')
    quantized_test_file = 'quantized://' + data_file('quantized_adult', 'test.qbin')
    cmd = (
        CATBOOST_PATH, 'fit',
        '--use-best-model', 'false',
        '--loss-function', loss_function,
        '-f', quantized_train_file,
        '-t', quantized_test_file,
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '4',
        '-m', output_model_path,
    )
    yatest.common.execute(cmd)

    cd_file = data_file('quantized_adult', 'pool.cd')
    test_file = data_file('quantized_adult', 'test_small.tsv')
    apply_catboost(output_model_path, test_file, cd_file, output_eval_path)

    return [local_canonical_file(output_eval_path)]


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_quantized_with_one_thread(boosting_type):
    output_model_path = yatest.common.test_output_path('model.bin')
    quantized_train_file = 'quantized://' + data_file('querywise', 'train.quantized')
    cmd = (
        CATBOOST_PATH, 'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '-f', quantized_train_file,
        '--boosting-type', boosting_type,
        '-i', '10',
        '-w', '0.03',
        '-T', '1',
        '-m', output_model_path,
    )
    yatest.common.execute(cmd)


def test_eval_result_on_different_pool_type():
    output_eval_path = yatest.common.test_output_path('test.eval')
    output_quantized_eval_path = yatest.common.test_output_path('test.eval.quantized')

    def run_catboost(train, test, eval_path):
        cmd = (
            CATBOOST_PATH, 'fit',
            '--use-best-model', 'false',
            '--loss-function', 'Logloss',
            '--border-count', '128',
            '-f', train,
            '-t', test,
            '--cd', data_file('querywise', 'train.cd'),
            '-i', '10',
            '-T', '4',
            '--eval-file', eval_path,
        )
        yatest.common.execute(cmd)

    def get_pool_path(set_name, is_quantized=False):
        path = data_file('querywise', set_name)
        return 'quantized://' + path + '.quantized' if is_quantized else path

    run_catboost(get_pool_path('train'), get_pool_path('test'), output_eval_path)
    run_catboost(get_pool_path('train', True), get_pool_path('test', True), output_quantized_eval_path)

    assert filecmp.cmp(output_eval_path, output_quantized_eval_path)

    return [local_canonical_file(output_eval_path)]


def test_apply_on_different_pool_type():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    output_quantized_eval_path = yatest.common.test_output_path('test.eval.quantized')

    def get_pool_path(set_name, is_quantized=False):
        path = data_file('querywise', set_name)
        return 'quantized://' + path + '.quantized' if is_quantized else path

    cd_file = data_file('querywise', 'train.cd')
    cmd = (
        CATBOOST_PATH, 'fit',
        '--use-best-model', 'false',
        '--loss-function', 'Logloss',
        '--learn-set', get_pool_path('train', True),
        '--test-set', get_pool_path('test', True),
        '--column-description', cd_file,
        '-i', '10',
        '-T', '4',
        '--model-file', output_model_path,
    )
    yatest.common.execute(cmd)

    cmd = (
        CATBOOST_PATH, 'calc',
        '--input-path', get_pool_path('test'),
        '--column-description', cd_file,
        '--model-file', output_model_path,
        '--output-path', output_eval_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(cmd)

    cmd = (
        CATBOOST_PATH, 'calc',
        '--input-path', get_pool_path('test', True),
        '--model-file', output_model_path,
        '--output-path', output_quantized_eval_path,
        '--prediction-type', 'RawFormulaVal'
    )
    yatest.common.execute(cmd)

    assert filecmp.cmp(output_eval_path, output_quantized_eval_path)
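The equality checks here use `filecmp.cmp` with its default `shallow=True`. In shallow mode, files whose `os.stat` signatures (type, size, mtime) match are reported equal without reading their bytes; when the signatures differ, the comparison falls back to byte-by-byte, so freshly written files with identical contents still compare equal. A small self-contained check of both directions (paths are temporary, for illustration only):

```python
import filecmp
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, 'a.eval')
b = os.path.join(d, 'b.eval')
for p in (a, b):
    with open(p, 'w') as f:
        f.write('0.123\n0.456\n')

# Identical contents compare equal (byte comparison kicks in if mtimes differ).
assert filecmp.cmp(a, b)

with open(b, 'a') as f:
    f.write('0.789\n')
# Different sizes mean different signatures, and the contents differ too.
assert not filecmp.cmp(a, b)
```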


def test_apply_output_column_by_idx():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')

    learn = data_file('black_friday', 'train')
    test = data_file('black_friday', 'test')
    cd = data_file('black_friday', 'cd')

    cmd = (
        CATBOOST_PATH, 'fit',
        '--use-best-model', 'false',
        '--loss-function', 'RMSE',
        '--learn-set', learn,
        '--test-set', test,
        '--column-description', cd,
        '-i', '10',
        '-T', '4',
        '--model-file', output_model_path,
        '--has-header'
    )
    yatest.common.execute(cmd)

    column_names = [
        'User_ID',
        'Product_ID',
        'Gender',
        'Age',
        'Occupation',
        'City_Category',
        'Stay_In_Current_City_Years',
        'Marital_Status',
        'Product_Category_1',
        'Product_Category_2',
        'Product_Category_3',
        'Purchase'
    ]
    output_columns = ','.join(['#{}:{}'.format(idx, name) for idx, name in enumerate(column_names)])
    output_columns = 'RawFormulaVal,' + output_columns

    cmd = (
        CATBOOST_PATH, 'calc',
        '--input-path', test,
        '--column-description', cd,
        '--model-file', output_model_path,
        '--output-path', output_eval_path,
        '--output-columns', output_columns,
        '--has-header'
    )
    yatest.common.execute(cmd)

    with open(output_eval_path, 'r') as f:
        eval_lines = f.readlines()
    with open(test, 'r') as f:
        test_lines = f.readlines()

    assert len(eval_lines) == len(test_lines)
    for i in range(len(eval_lines)):
        eval_line = eval_lines[i].split('\t')[1:]  # skip RawFormulaVal
        test_line = test_lines[i].split('\t')
        for eval_column, test_column in zip(eval_line, test_line):
            assert eval_column == test_column
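The `--output-columns` string built above uses the `#<index>:<name>` syntax to echo source columns by index, with the prediction column prepended. The string construction itself is plain Python and easy to verify in isolation (the two-element name list is a truncation for illustration):

```python
column_names = ['User_ID', 'Product_ID']  # truncated list, for illustration
output_columns = ','.join('#{}:{}'.format(idx, name) for idx, name in enumerate(column_names))
output_columns = 'RawFormulaVal,' + output_columns
print(output_columns)  # RawFormulaVal,#0:User_ID,#1:Product_ID
```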


@pytest.mark.parametrize(
    'dataset_name,loss_function,has_pairs,has_group_weights',
    [
        ('adult_small_broken_features', 'Logloss', False, False),
        ('querywise_broken_pairs', 'RMSE', True, False),
        ('querywise_broken_group_weights', 'RMSE', False, True),
    ]
)
def test_broken_dsv_format(dataset_name, loss_function, has_pairs, has_group_weights):
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')

    # Iterations and threads are specified just to finish fast if the test unexpectedly passes (xpass).
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '--learn-set', data_file('broken_format', dataset_name, 'train'),
        '--test-set', data_file('broken_format', dataset_name, 'test'),
        '--column-description', data_file('broken_format', dataset_name, 'train.cd'),
        '-i', '1',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
    )
    if has_pairs:
        cmd += (
            '--learn-pairs', data_file('broken_format', dataset_name, 'train.pairs'),
            '--test-pairs', data_file('broken_format', dataset_name, 'test.pairs'),
        )
    if has_group_weights:
        cmd += (
            '--learn-group-weights', data_file('broken_format', dataset_name, 'train.group_weights'),
            '--test-group-weights', data_file('broken_format', dataset_name, 'test.group_weights'),
        )

    with pytest.raises(yatest.common.ExecutionError):
        yatest.common.execute(cmd)


@pytest.mark.parametrize(
    'loss_function,eval_metric,boosting_type',
    [
        ('QueryRMSE', 'NDCG', 'Plain'),
        ('QueryRMSE', 'NDCG', 'Ordered'),
        # Boosting type 'Ordered' is not supported for YetiRankPairwise and PairLogitPairwise
        ('YetiRankPairwise', 'NDCG', 'Plain'),
        ('PairLogit', 'PairAccuracy', 'Plain'),
        ('PairLogitPairwise', 'NDCG', 'Plain'),
        ('PairLogitPairwise', 'PairAccuracy', 'Plain'),
    ],
    ids=[
        'loss_function=QueryRMSE,eval_metric=NDCG,boosting_type=Plain',
        'loss_function=QueryRMSE,eval_metric=NDCG,boosting_type=Ordered',
        'loss_function=YetiRankPairwise,eval_metric=NDCG,boosting_type=Plain',
        'loss_function=PairLogit,eval_metric=PairAccuracy,boosting_type=Plain',
        'loss_function=PairLogitPairwise,eval_metric=NDCG,boosting_type=Plain',
        'loss_function=PairLogitPairwise,eval_metric=PairAccuracy,boosting_type=Plain'
    ]
)
def test_groupwise_with_cat_features(loss_function, eval_metric, boosting_type):
    learn_error_path = yatest.common.test_output_path('learn_error.tsv')
    test_error_path = yatest.common.test_output_path('test_error.tsv')

    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '--has-header',
        '-f', data_file('black_friday', 'train'),
        '-t', data_file('black_friday', 'test'),
        '--column-description', data_file('black_friday', 'cd'),
        '--boosting-type', boosting_type,
        '-i', '10',
        '-T', '4',
        '--eval-metric', eval_metric,
        '--learn-err-log', learn_error_path,
        '--test-err-log', test_error_path,
    )
    yatest.common.execute(cmd)

    return [local_canonical_file(learn_error_path), local_canonical_file(test_error_path)]


def test_gradient_walker():
    output_eval_path = yatest.common.test_output_path('test.eval')
    cmd = (
        CATBOOST_PATH,
        'fit',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '20',
        '-T', '4',
        '--eval-file', output_eval_path,
        '--use-best-model', 'false',
        '--leaf-estimation-backtracking', 'AnyImprovement',
    )
    yatest.common.execute(cmd)
    return [local_canonical_file(output_eval_path)]


# Training with pairwise scoring and categorical features on CPU does not yet support
# one-hot features, so they are disabled by default; an explicit non-default
# specification should therefore be an error.
@pytest.mark.parametrize(
    'loss_function', ['YetiRankPairwise', 'PairLogitPairwise'],
    ids=['loss_function=YetiRankPairwise', 'loss_function=PairLogitPairwise']
)
def test_groupwise_with_bad_one_hot_max_size(loss_function):
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', loss_function,
        '--has-header',
        '-f', data_file('black_friday', 'train'),
        '-t', data_file('black_friday', 'test'),
        '--column-description', data_file('black_friday', 'cd'),
        '--boosting-type', 'Plain',
        '-i', '10',
        '-T', '4',
        '--eval-metric', 'NDCG',
        '--one_hot_max_size', '10'
    )
    with pytest.raises(yatest.common.ExecutionError):
        yatest.common.execute(cmd)


def test_load_quantized_pool_with_double_baseline():
    # Dataset with 3 random columns: the first column is Target, the second column is Num,
    # and the third column is Baseline.
    #
    # There are only 10 rows in the dataset.
    cmd = (
        CATBOOST_PATH, 'fit',
        '-f', 'quantized://' + data_file('quantized_with_baseline', 'dataset.qbin'),
        '-i', '10')
    yatest.common.execute(cmd)


def test_write_predictions_to_streams():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    calc_output_eval_path_redirected = yatest.common.test_output_path('calc_test.eval')

    cmd = (
        CATBOOST_PATH,
        'fit',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--eval-file', output_eval_path,
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '10',
        '-m', output_model_path
    )
    yatest.common.execute(cmd)

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-m', output_model_path,
        '--output-path', 'stream://stdout',
    )
    with open(calc_output_eval_path_redirected, 'w') as catboost_stdout:
        yatest.common.execute(calc_cmd, stdout=catboost_stdout)
    assert compare_evals(output_eval_path, calc_output_eval_path_redirected)

    calc_cmd = (
        CATBOOST_PATH,
        'calc',
        '--input-path', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-m', output_model_path,
        '--output-path', 'stream://stderr'
    )
    with open(calc_output_eval_path_redirected, 'w') as catboost_stderr:
        yatest.common.execute(calc_cmd, stderr=catboost_stderr)
    assert compare_evals(output_eval_path, calc_output_eval_path_redirected)
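The `stream://stdout` / `stream://stderr` outputs above are captured by handing an open file object to `yatest.common.execute`. The same redirection pattern with only the standard library looks like this (the tiny echo script is a made-up stand-in for the `catboost calc` invocation):

```python
import os
import subprocess
import sys
import tempfile

out_path = os.path.join(tempfile.mkdtemp(), 'calc_test.eval')

# Redirect the child's stdout into a file, as the test does for catboost calc.
with open(out_path, 'w') as fh:
    subprocess.check_call(
        [sys.executable, '-c', "print('0.123\\t0.456')"],
        stdout=fh,
    )

with open(out_path) as fh:
    assert fh.read().strip() == '0.123\t0.456'
```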


@pytest.mark.parametrize('boosting_type', BOOSTING_TYPE)
def test_mvs_bootstrap_head_frac(boosting_type):
    def run_catboost(eval_path, mvs_head_fraction):
        cmd = [
            CATBOOST_PATH,
            'fit',
            '--use-best-model', 'false',
            '--allow-writing-files', 'false',
            '--loss-function', 'Logloss',
            '--max-ctr-complexity', '5',
            '-f', data_file('airlines_5K', 'train'),
            '-t', data_file('airlines_5K', 'test'),
            '--column-description', data_file('airlines_5K', 'cd'),
            '--has-header',
            '--boosting-type', boosting_type,
            '--bootstrap-type', 'MVS',
            '--mvs-head-fraction', mvs_head_fraction,
            '-i', '50',
            '-w', '0.03',
            '-T', '6',
            '-r', '0',
            '--eval-file', eval_path,
        ]
        yatest.common.execute(cmd)

    ref_eval_path = yatest.common.test_output_path('test.eval')
    run_catboost(ref_eval_path, '0.5')

    for head_fraction in ('0.1', '0.9'):
        eval_path = yatest.common.test_output_path('test_{}.eval'.format(head_fraction))
        run_catboost(eval_path, head_fraction)
        assert filecmp.cmp(ref_eval_path, eval_path) is False

    return [local_canonical_file(ref_eval_path)]


def test_simple_ctr():
    output_model_path = yatest.common.test_output_path('model.bin')
    output_eval_path = yatest.common.test_output_path('test.eval')
    simple_ctr = ','.join((
        'Borders:TargetBorderCount=15',
        'Buckets:TargetBorderCount=15',
        'Borders:TargetBorderType=MinEntropy',
        'Counter:CtrBorderCount=20',
    ))
    yatest.common.execute((
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'RMSE',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '--boosting-type', 'Ordered',
        '-i', '20',
        '-T', '4',
        '-m', output_model_path,
        '--eval-file', output_eval_path,
        '--simple-ctr', simple_ctr,
    ))
    return [local_canonical_file(output_eval_path)]


def test_output_options():
    output_options_path = 'training_options.json'
    train_dir = 'catboost_info'
    cmd = (
        CATBOOST_PATH,
        'fit',
        '--loss-function', 'Logloss',
        '-f', data_file('adult', 'train_small'),
        '-t', data_file('adult', 'test_small'),
        '--column-description', data_file('adult', 'train.cd'),
        '-i', '10',
        '-T', '4',
        '--train-dir', train_dir,
        '--training-options-file', output_options_path,
    )
    yatest.common.execute(cmd)
    return local_canonical_file(os.path.join(train_dir, output_options_path))
| 35.706456 | 169 | 0.62894 | 29,777 | 241,697 | 4.774625 | 0.024616 | 0.062796 | 0.05942 | 0.074894 | 0.863555 | 0.841736 | 0.816507 | 0.788949 | 0.773869 | 0.75236 | 0 | 0.010144 | 0.210776 | 241,697 | 6,768 | 170 | 35.711732 | 0.735186 | 0.007514 | 0 | 0.744615 | 0 | 0.001379 | 0.236429 | 0.033347 | 0 | 0 | 0 | 0.000148 | 0.016715 | 1 | 0.039635 | false | 0 | 0.003791 | 0.00293 | 0.073238 | 0.000172 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1343a082b29e7277b7f38cbe07b128604d857600 | 79,071 | py | Python | tigershark/parsers/M835_5010_X221_A1.py | CloudCray/TigerShark | e27f1e775652576743518d9f2dfd57266f0c541f | [
"BSD-3-Clause"
] | 19 | 2016-05-09T01:30:37.000Z | 2022-03-15T15:51:24.000Z | tigershark/parsers/M835_5010_X221_A1.py | CloudCray/TigerShark | e27f1e775652576743518d9f2dfd57266f0c541f | [
"BSD-3-Clause"
] | 10 | 2016-04-11T14:55:54.000Z | 2021-08-07T15:41:14.000Z | tigershark/parsers/M835_5010_X221_A1.py | CloudCray/TigerShark | e27f1e775652576743518d9f2dfd57266f0c541f | [
"BSD-3-Clause"
] | 11 | 2015-10-15T16:12:39.000Z | 2021-03-22T19:33:56.000Z | #
# Generated by TigerShark.tools.convertPyX12 on 2014-09-26 07:26:39.089319
#
from tigershark.X12.parse import Message, Loop, Segment, Composite, Element, Properties
parsed_835_1000A = Loop( u'1000A', Properties(position=u'0800',looptype='',repeat=u'1',req_sit=u'R',desc=u'Payer Identification'),
Segment( u'N1', Properties(syntax=u'R0203 P0304',position=u'0800',req_sit=u'R',repeat=u'1',desc=u'Payer Identification'),
Element( u'N101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'PR'] ) ),
Element( u'N102', Properties(desc=u'Name', req_sit=u'R', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'N103', Properties(desc=u'Identification Code Qualifier', req_sit=u'S', data_type=(u'ID',u'1',u'2'), position=3,
codes=[u'XV'] ) ),
Element( u'N104', Properties(desc=u'Identification Code', req_sit=u'S', data_type=(u'AN',u'2',u'80'), position=4,
codes=[] ) ),
Element( u'N105', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=5,
codes=[] ) ),
Element( u'N106', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=6,
codes=[] ) ),
),
Segment( u'N3', Properties(syntax='',position=u'1000',req_sit=u'R',repeat=u'1',desc=u'Payer Address'),
Element( u'N301', Properties(desc=u'Address Information', req_sit=u'R', data_type=(u'AN',u'1',u'55'), position=1,
codes=[] ) ),
Element( u'N302', Properties(desc=u'Address Information', req_sit=u'S', data_type=(u'AN',u'1',u'55'), position=2,
codes=[] ) ),
),
Segment( u'N4', Properties(syntax=u'E0207 C0605 C0704',position=u'1100',req_sit=u'R',repeat=u'1',desc=u'Payer City, State, ZIP Code'),
Element( u'N401', Properties(desc=u'City Name', req_sit=u'R', data_type=(u'AN',u'2',u'30'), position=1,
codes=[] ) ),
Element( u'N402', Properties(desc=u'State or Province Code', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=2,
codes=[] ) ),
Element( u'N403', Properties(desc=u'Postal Code', req_sit=u'S', data_type=(u'ID',u'3',u'15'), position=3,
codes=[] ) ),
Element( u'N404', Properties(desc=u'Country Code', req_sit=u'S', data_type=(u'ID',u'2',u'3'), position=4,
codes=[] ) ),
Element( u'N405', Properties(desc=u'Location Qualifier', req_sit=u'N', data_type=(u'ID',u'1',u'2'), position=5,
codes=[] ) ),
Element( u'N406', Properties(desc=u'Location Identifier', req_sit=u'N', data_type=(u'AN',u'1',u'30'), position=6,
codes=[] ) ),
Element( u'N407', Properties(desc=u'Country Subdivision Code', req_sit=u'S', data_type=(u'ID',u'1',u'3'), position=7,
codes=[] ) ),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'1200',req_sit=u'S',repeat=u'4',desc=u'Additional Payer Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'2U', u'EO', u'HI', u'NF'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'PER', Properties(syntax=u'P0304 P0506 P0708',position=u'1300',req_sit=u'S',repeat=u'1',desc=u'Payer Business Contact Information'),
Element( u'PER01', Properties(desc=u'Contact Function Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'CX'] ) ),
Element( u'PER02', Properties(desc=u'Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'PER03', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=3,
codes=[u'EM', u'FX', u'TE'] ) ),
Element( u'PER04', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=4,
codes=[] ) ),
Element( u'PER05', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=5,
codes=[u'EM', u'EX', u'FX', u'TE'] ) ),
Element( u'PER06', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=6,
codes=[] ) ),
Element( u'PER07', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=7,
codes=[u'EX'] ) ),
Element( u'PER08', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=8,
codes=[] ) ),
Element( u'PER09', Properties(desc=u'Contact Inquiry Reference', req_sit=u'N', data_type=(u'AN',u'1',u'20'), position=9,
codes=[] ) ),
),
Segment( u'PER', Properties(syntax=u'P0304 P0506 P0708',position=u'1300',req_sit=u'R',repeat=u'>1',desc=u'Payer Technical Contact Information'),
Element( u'PER01', Properties(desc=u'Contact Function Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'BL'] ) ),
Element( u'PER02', Properties(desc=u'Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'PER03', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=3,
codes=[u'EM', u'TE', u'UR'] ) ),
Element( u'PER04', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=4,
codes=[] ) ),
Element( u'PER05', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=5,
codes=[u'EM', u'EX', u'FX', u'TE', u'UR'] ) ),
Element( u'PER06', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=6,
codes=[] ) ),
Element( u'PER07', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=7,
codes=[u'EM', u'EX', u'FX', u'UR'] ) ),
Element( u'PER08', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=8,
codes=[] ) ),
Element( u'PER09', Properties(desc=u'Contact Inquiry Reference', req_sit=u'N', data_type=(u'AN',u'1',u'20'), position=9,
codes=[] ) ),
),
Segment( u'PER', Properties(syntax=u'P0304 P0506 P0708',position=u'1300',req_sit=u'S',repeat=u'1',desc=u'Payer WEB Site'),
Element( u'PER01', Properties(desc=u'Contact Function Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'IC'] ) ),
Element( u'PER02', Properties(desc=u'Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'PER03', Properties(desc=u'Communication Number Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=3,
codes=[u'UR'] ) ),
Element( u'PER04', Properties(desc=u'Communication Number', req_sit=u'R', data_type=(u'AN',u'1',u'256'), position=4,
codes=[] ) ),
Element( u'PER05', Properties(desc=u'Communication Number Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=5,
codes=[] ) ),
Element( u'PER06', Properties(desc=u'Communication Number', req_sit=u'N', data_type=(u'AN',u'1',u'256'), position=6,
codes=[] ) ),
Element( u'PER07', Properties(desc=u'Communication Number Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=7,
codes=[] ) ),
Element( u'PER08', Properties(desc=u'Communication Number', req_sit=u'N', data_type=(u'AN',u'1',u'256'), position=8,
codes=[] ) ),
Element( u'PER09', Properties(desc=u'Contact Inquiry Reference', req_sit=u'N', data_type=(u'AN',u'1',u'20'), position=9,
codes=[] ) ),
),
)
parsed_835_1000B = Loop( u'1000B', Properties(position=u'1400',looptype='',repeat=u'1',req_sit=u'R',desc=u'Payee Identification'),
Segment( u'N1', Properties(syntax=u'R0203 P0304',position=u'0800',req_sit=u'R',repeat=u'1',desc=u'Payee Identification'),
Element( u'N101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'PE'] ) ),
Element( u'N102', Properties(desc=u'Name', req_sit=u'R', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'N103', Properties(desc=u'Identification Code Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=3,
codes=[u'FI', u'XV', u'XX'] ) ),
Element( u'N104', Properties(desc=u'Identification Code', req_sit=u'R', data_type=(u'AN',u'2',u'80'), position=4,
codes=[] ) ),
Element( u'N105', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=5,
codes=[] ) ),
Element( u'N106', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=6,
codes=[] ) ),
),
Segment( u'N3', Properties(syntax='',position=u'1000',req_sit=u'S',repeat=u'1',desc=u'Payee Address'),
Element( u'N301', Properties(desc=u'Address Information', req_sit=u'R', data_type=(u'AN',u'1',u'55'), position=1,
codes=[] ) ),
Element( u'N302', Properties(desc=u'Address Information', req_sit=u'S', data_type=(u'AN',u'1',u'55'), position=2,
codes=[] ) ),
),
Segment( u'N4', Properties(syntax=u'E0207 C0605 C0704',position=u'1100',req_sit=u'S',repeat=u'1',desc=u'Payee City, State, ZIP Code'),
Element( u'N401', Properties(desc=u'City Name', req_sit=u'R', data_type=(u'AN',u'2',u'30'), position=1,
codes=[] ) ),
Element( u'N402', Properties(desc=u'State or Province Code', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=2,
codes=[] ) ),
Element( u'N403', Properties(desc=u'Postal Code', req_sit=u'S', data_type=(u'ID',u'3',u'15'), position=3,
codes=[] ) ),
Element( u'N404', Properties(desc=u'Country Code', req_sit=u'S', data_type=(u'ID',u'2',u'3'), position=4,
codes=[] ) ),
Element( u'N405', Properties(desc=u'Location Qualifier', req_sit=u'N', data_type=(u'ID',u'1',u'2'), position=5,
codes=[] ) ),
Element( u'N406', Properties(desc=u'Location Identifier', req_sit=u'N', data_type=(u'AN',u'1',u'30'), position=6,
codes=[] ) ),
Element( u'N407', Properties(desc=u'Country Subdivision Code', req_sit=u'S', data_type=(u'ID',u'1',u'3'), position=7,
codes=[] ) ),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'1200',req_sit=u'S',repeat=u'>1',desc=u'Payee Additional Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'0B', u'D3', u'PQ', u'TJ'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'RDM', Properties(syntax='',position=u'1400',req_sit=u'S',repeat=u'1',desc=u'Remittance Delivery Method'),
Element( u'RDM01', Properties(desc=u'Report Transmission Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=1,
codes=[u'BM', u'EM', u'FT', u'OL'] ) ),
Element( u'RDM02', Properties(desc=u'Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'RDM03', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'05',desc=u'Reference Identifier'),
),
),
)
parsed_835_HEADER = Loop( u'HEADER', Properties(position=u'0110',looptype=u'wrapper',repeat=u'1',req_sit=u'R',desc=u'Header'),
Segment( u'BPR', Properties(syntax=u'P0607 C0809 P1213 C1415 P1819 C2021',position=u'0200',req_sit=u'R',repeat=u'1',desc=u'Financial Information'),
Element( u'BPR01', Properties(desc=u'Transaction Handling Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=1,
codes=[u'C', u'D', u'H', u'I', u'P', u'U', u'X'] ) ),
Element( u'BPR02', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'BPR03', Properties(desc=u'Credit/Debit Flag Code', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=3,
codes=[u'C', u'D'] ) ),
Element( u'BPR04', Properties(desc=u'Payment Method Code', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=4,
codes=[u'ACH', u'BOP', u'CHK', u'FWT', u'NON'] ) ),
Element( u'BPR05', Properties(desc=u'Payment Format Code', req_sit=u'S', data_type=(u'ID',u'1',u'10'), position=5,
codes=[u'CCP', u'CTX'] ) ),
Element( u'BPR06', Properties(desc=u'(DFI) ID Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=6,
codes=[u'01', u'04'] ) ),
Element( u'BPR07', Properties(desc=u'(DFI) Identification Number', req_sit=u'S', data_type=(u'AN',u'3',u'12'), position=7,
codes=[] ) ),
Element( u'BPR08', Properties(desc=u'Account Number Qualifier', req_sit=u'S', data_type=(u'ID',u'1',u'3'), position=8,
codes=[u'DA'] ) ),
Element( u'BPR09', Properties(desc=u'Account Number', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=9,
codes=[] ) ),
Element( u'BPR10', Properties(desc=u'Originating Company Identifier', req_sit=u'S', data_type=(u'AN',u'10',u'10'), position=10,
codes=[] ) ),
Element( u'BPR11', Properties(desc=u'Originating Company Supplemental Code', req_sit=u'S', data_type=(u'AN',u'9',u'9'), position=11,
codes=[] ) ),
Element( u'BPR12', Properties(desc=u'(DFI) ID Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=12,
codes=[u'01', u'04'] ) ),
Element( u'BPR13', Properties(desc=u'(DFI) Identification Number', req_sit=u'S', data_type=(u'AN',u'3',u'12'), position=13,
codes=[] ) ),
Element( u'BPR14', Properties(desc=u'Account Number Qualifier', req_sit=u'S', data_type=(u'ID',u'1',u'3'), position=14,
codes=[u'DA', u'SG'] ) ),
Element( u'BPR15', Properties(desc=u'Account Number', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=15,
codes=[] ) ),
Element( u'BPR16', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=16,
codes=[] ) ),
Element( u'BPR17', Properties(desc=u'Business Function Code', req_sit=u'N', data_type=(u'ID',u'1',u'3'), position=17,
codes=[] ) ),
Element( u'BPR18', Properties(desc=u'(DFI) ID Number Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=18,
codes=[] ) ),
Element( u'BPR19', Properties(desc=u'(DFI) Identification Number', req_sit=u'N', data_type=(u'AN',u'3',u'12'), position=19,
codes=[] ) ),
Element( u'BPR20', Properties(desc=u'Account Number Qualifier', req_sit=u'N', data_type=(u'ID',u'1',u'3'), position=20,
codes=[] ) ),
Element( u'BPR21', Properties(desc=u'Account Number', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=21,
codes=[] ) ),
),
Segment( u'TRN', Properties(syntax='',position=u'0400',req_sit=u'R',repeat=u'1',desc=u'Reassociation Trace Number'),
Element( u'TRN01', Properties(desc=u'Trace Type Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=1,
codes=[u'1'] ) ),
Element( u'TRN02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'TRN03', Properties(desc=u'Originating Company Identifier', req_sit=u'R', data_type=(u'AN',u'10',u'10'), position=3,
codes=[] ) ),
Element( u'TRN04', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=4,
codes=[] ) ),
),
Segment( u'CUR', Properties(syntax=u'C0807 C0907 L101112 C1110 C1210 L131415 C1413 C1513 L161718 C1716 C1816 L192021 C2019 C2119',position=u'0500',req_sit=u'S',repeat=u'1',desc=u'Foreign Currency Information'),
Element( u'CUR01', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'PR'] ) ),
Element( u'CUR02', Properties(desc=u'Currency Code', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=2,
codes=[] ) ),
Element( u'CUR03', Properties(desc=u'Exchange Rate', req_sit=u'N', data_type=(u'R',u'4',u'10'), position=3,
codes=[] ) ),
Element( u'CUR04', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=4,
codes=[] ) ),
Element( u'CUR05', Properties(desc=u'Currency Code', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=5,
codes=[] ) ),
Element( u'CUR06', Properties(desc=u'Currency Market/Exchange Code', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=6,
codes=[] ) ),
Element( u'CUR07', Properties(desc=u'Date/Time Qualifier', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=7,
codes=[] ) ),
Element( u'CUR08', Properties(desc=u'Date', req_sit=u'N', data_type=(u'DT',u'8',u'8'), position=8,
codes=[] ) ),
Element( u'CUR09', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=9,
codes=[] ) ),
Element( u'CUR10', Properties(desc=u'Date/Time Qualifier', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=10,
codes=[] ) ),
Element( u'CUR11', Properties(desc=u'Date', req_sit=u'N', data_type=(u'DT',u'8',u'8'), position=11,
codes=[] ) ),
Element( u'CUR12', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=12,
codes=[] ) ),
Element( u'CUR13', Properties(desc=u'Date/Time Qualifier', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=13,
codes=[] ) ),
Element( u'CUR14', Properties(desc=u'Date', req_sit=u'N', data_type=(u'DT',u'8',u'8'), position=14,
codes=[] ) ),
Element( u'CUR15', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=15,
codes=[] ) ),
Element( u'CUR16', Properties(desc=u'Date/Time Qualifier', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=16,
codes=[] ) ),
Element( u'CUR17', Properties(desc=u'Date', req_sit=u'N', data_type=(u'DT',u'8',u'8'), position=17,
codes=[] ) ),
Element( u'CUR18', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=18,
codes=[] ) ),
Element( u'CUR19', Properties(desc=u'Date/Time Qualifier', req_sit=u'N', data_type=(u'ID',u'3',u'3'), position=19,
codes=[] ) ),
Element( u'CUR20', Properties(desc=u'Date', req_sit=u'N', data_type=(u'DT',u'8',u'8'), position=20,
codes=[] ) ),
Element( u'CUR21', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=21,
codes=[] ) ),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'0600',req_sit=u'S',repeat=u'1',desc=u'Receiver Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'EV'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'0600',req_sit=u'S',repeat=u'1',desc=u'Version Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'F2'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'DTM', Properties(syntax=u'R020305 C0403 P0506',position=u'0700',req_sit=u'S',repeat=u'1',desc=u'Production Date'),
Element( u'DTM01', Properties(desc=u'Date/Time Qualifier', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=1,
codes=[u'405'] ) ),
Element( u'DTM02', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=2,
codes=[] ) ),
Element( u'DTM03', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=3,
codes=[] ) ),
Element( u'DTM04', Properties(desc=u'Time Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'DTM05', Properties(desc=u'Date Time Period Format Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=5,
codes=[] ) ),
Element( u'DTM06', Properties(desc=u'Date Time Period', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=6,
codes=[] ) ),
),
parsed_835_1000A,
parsed_835_1000B,
)
parsed_835_2110 = Loop( u'2110', Properties(position=u'0700',looptype='',repeat=u'999',req_sit=u'S',desc=u'Service Payment Information'),
Segment( u'SVC', Properties(syntax='',position=u'0700',req_sit=u'R',repeat=u'1',desc=u'Service Payment Information'),
Composite( u'C003', Properties(req_sit=u'R',repeat='',refdes='',seq=u'01',desc=u'Composite Medical Procedure Identifier'),
Element( u'SVC01-01', Properties(desc=u'Product/Service ID Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[u'AD', u'ER', u'HC', u'HP', u'IV', u'N4', u'N6', u'NU', u'UI', u'WK'] ) ),
Element( u'SVC01-02', Properties(desc=u'Product/Service ID', req_sit=u'R', data_type=(u'AN',u'1',u'48'), position=1,
codes=[] ) ),
Element( u'SVC01-03', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=2,
codes=[] ) ),
Element( u'SVC01-04', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=3,
codes=[] ) ),
Element( u'SVC01-05', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'SVC01-06', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=5,
codes=[] ) ),
Element( u'SVC01-07', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=6,
codes=[] ) ),
Element( u'SVC01-08', Properties(desc=u'Product/Service ID', req_sit=u'N', data_type=(u'AN',u'1',u'48'), position=7,
codes=[] ) ),
),
Element( u'SVC02', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'SVC03', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=3,
codes=[] ) ),
Element( u'SVC04', Properties(desc=u'Product/Service ID', req_sit=u'S', data_type=(u'AN',u'1',u'48'), position=4,
codes=[] ) ),
Element( u'SVC05', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=5,
codes=[] ) ),
Composite( u'C003', Properties(req_sit=u'S',repeat='',refdes='',seq=u'06',desc=u'Composite Medical Procedure Identifier'),
Element( u'SVC06-01', Properties(desc=u'Product/Service ID Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[u'AD', u'ER', u'HC', u'HP', u'IV', u'N4', u'NU', u'WK'] ) ),
Element( u'SVC06-02', Properties(desc=u'Product/Service ID', req_sit=u'R', data_type=(u'AN',u'1',u'48'), position=1,
codes=[] ) ),
Element( u'SVC06-03', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=2,
codes=[] ) ),
Element( u'SVC06-04', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=3,
codes=[] ) ),
Element( u'SVC06-05', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'SVC06-06', Properties(desc=u'Procedure Modifier', req_sit=u'S', data_type=(u'AN',u'2',u'2'), position=5,
codes=[] ) ),
Element( u'SVC06-07', Properties(desc=u'Description', req_sit=u'S', data_type=(u'AN',u'1',u'80'), position=6,
codes=[] ) ),
Element( u'SVC06-08', Properties(desc=u'Product/Service ID', req_sit=u'N', data_type=(u'AN',u'1',u'48'), position=7,
codes=[] ) ),
),
Element( u'SVC07', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=7,
codes=[] ) ),
),
Segment( u'DTM', Properties(syntax=u'R020305 C0403 P0506',position=u'0800',req_sit=u'S',repeat=u'2',desc=u'Service Date'),
Element( u'DTM01', Properties(desc=u'Date/Time Qualifier', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=1,
codes=[u'150', u'151', u'472'] ) ),
Element( u'DTM02', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=2,
codes=[] ) ),
Element( u'DTM03', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=3,
codes=[] ) ),
Element( u'DTM04', Properties(desc=u'Time Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'DTM05', Properties(desc=u'Date Time Period Format Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=5,
codes=[] ) ),
Element( u'DTM06', Properties(desc=u'Date Time Period', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=6,
codes=[] ) ),
),
Segment( u'CAS', Properties(syntax=u'L050607 C0605 C0705 L080910 C0908 C1008 L111213 C1211 C1311 L141516 C1514 C1614 L171819 C1817 C1917',position=u'0900',req_sit=u'S',repeat=u'99',desc=u'Service Adjustment'),
Element( u'CAS01', Properties(desc=u'Claim Adjustment Group Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=1,
codes=[u'CO', u'OA', u'PI', u'PR'] ) ),
Element( u'CAS02', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'1',u'5'), position=2,
codes=[] ) ),
Element( u'CAS03', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=3,
codes=[] ) ),
Element( u'CAS04', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=4,
codes=[] ) ),
Element( u'CAS05', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=5,
codes=[] ) ),
Element( u'CAS06', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=6,
codes=[] ) ),
Element( u'CAS07', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=7,
codes=[] ) ),
Element( u'CAS08', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=8,
codes=[] ) ),
Element( u'CAS09', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=9,
codes=[] ) ),
Element( u'CAS10', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=10,
codes=[] ) ),
Element( u'CAS11', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=11,
codes=[] ) ),
Element( u'CAS12', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=12,
codes=[] ) ),
Element( u'CAS13', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=13,
codes=[] ) ),
Element( u'CAS14', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=14,
codes=[] ) ),
Element( u'CAS15', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=15,
codes=[] ) ),
Element( u'CAS16', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=16,
codes=[] ) ),
Element( u'CAS17', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=17,
codes=[] ) ),
Element( u'CAS18', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=18,
codes=[] ) ),
Element( u'CAS19', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=19,
codes=[] ) ),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'1000',req_sit=u'S',repeat=u'8',desc=u'Service Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'1S', u'APC', u'BB', u'E9', u'G1', u'G3', u'LU', u'RB'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'1000',req_sit=u'S',repeat=u'1',desc=u'Line Item Control Number'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'6R'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'1000',req_sit=u'S',repeat=u'10',desc=u'Rendering Provider Information'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'0B', u'1A', u'1B', u'1C', u'1D', u'1G', u'1H', u'1J', u'D3', u'G2', u'HPI', u'SY', u'TJ'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'1000',req_sit=u'S',repeat=u'5',desc=u'HealthCare Policy Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'0K'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'AMT', Properties(syntax='',position=u'1100',req_sit=u'S',repeat=u'9',desc=u'Service Supplemental Amount'),
Element( u'AMT01', Properties(desc=u'Amount Qualifier Code', req_sit=u'R', data_type=(u'ID',u'1',u'3'), position=1,
codes=[u'B6', u'KH', u'T', u'T2', u'ZK', u'ZL', u'ZM', u'ZN', u'ZO'] ) ),
Element( u'AMT02', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'AMT03', Properties(desc=u'Credit/Debit Flag Code', req_sit=u'N', data_type=(u'ID',u'1',u'1'), position=3,
codes=[] ) ),
),
Segment( u'QTY', Properties(syntax=u'E0204 R0204',position=u'1200',req_sit=u'S',repeat=u'6',desc=u'Service Supplemental Quantity'),
Element( u'QTY01', Properties(desc=u'Quantity Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'ZK', u'ZL', u'ZM', u'ZN', u'ZO'] ) ),
Element( u'QTY02', Properties(desc=u'Quantity', req_sit=u'R', data_type=(u'R',u'1',u'15'), position=2,
codes=[] ) ),
Composite( u'C001', Properties(req_sit=u'N',repeat='',refdes='',seq=u'03',desc=u'Composite Unit of Measure'),
),
Element( u'QTY04', Properties(desc=u'Free-form Information', req_sit=u'N', data_type=(u'AN',u'1',u'30'), position=4,
codes=[] ) ),
),
Segment( u'LQ', Properties(syntax=u'C0102',position=u'1300',req_sit=u'S',repeat=u'99',desc=u'Health Care Remark Codes'),
Element( u'LQ01', Properties(desc=u'Code List Qualifier Code', req_sit=u'R', data_type=(u'ID',u'1',u'3'), position=1,
codes=[u'HE', u'RX'] ) ),
Element( u'LQ02', Properties(desc=u'Industry Code', req_sit=u'R', data_type=(u'AN',u'1',u'30'), position=2,
codes=[] ) ),
),
)
parsed_835_2100 = Loop( u'2100', Properties(position=u'0100',looptype='',repeat=u'>1',req_sit=u'R',desc=u'Claim Payment Information'),
Segment( u'CLP', Properties(syntax='',position=u'0100',req_sit=u'R',repeat=u'1',desc=u'Claim Payment Information'),
Element( u'CLP01', Properties(desc=u"Claim Submitter's Identifier", req_sit=u'R', data_type=(u'AN',u'1',u'38'), position=1,
codes=[] ) ),
Element( u'CLP02', Properties(desc=u'Claim Status Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=2,
codes=[u'1', u'19', u'2', u'20', u'21', u'22', u'23', u'25', u'3', u'4'] ) ),
Element( u'CLP03', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=3,
codes=[] ) ),
Element( u'CLP04', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=4,
codes=[] ) ),
Element( u'CLP05', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=5,
codes=[] ) ),
Element( u'CLP06', Properties(desc=u'Claim Filing Indicator Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=6,
codes=[u'12', u'13', u'14', u'15', u'16', u'17', u'AM', u'CH', u'DS', u'HM', u'LM', u'MA', u'MB', u'MC', u'OF', u'TV', u'VA', u'WC', u'ZZ'] ) ),
Element( u'CLP07', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=7,
codes=[] ) ),
Element( u'CLP08', Properties(desc=u'Facility Code Value', req_sit=u'S', data_type=(u'AN',u'1',u'2'), position=8,
codes=[] ) ),
Element( u'CLP09', Properties(desc=u'Claim Frequency Type Code', req_sit=u'S', data_type=(u'ID',u'1',u'1'), position=9,
codes=[] ) ),
Element( u'CLP10', Properties(desc=u'Patient Status Code', req_sit=u'N', data_type=(u'ID',u'1',u'2'), position=10,
codes=[] ) ),
Element( u'CLP11', Properties(desc=u'Diagnosis Related Group (DRG) Code', req_sit=u'S', data_type=(u'ID',u'1',u'4'), position=11,
codes=[] ) ),
Element( u'CLP12', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=12,
codes=[] ) ),
Element( u'CLP13', Properties(desc=u'Percentage as Decimal', req_sit=u'S', data_type=(u'R',u'1',u'10'), position=13,
codes=[] ) ),
Element( u'CLP14', Properties(desc=u'Yes/No Condition or Response Code', req_sit=u'N', data_type=(u'ID',u'1',u'1'), position=14,
codes=[] ) ),
),
Segment( u'CAS', Properties(syntax=u'L050607 C0605 C0705 L080910 C0908 C1008 L111213 C1211 C1311 L141516 C1514 C1614 L171819 C1817 C1917',position=u'0200',req_sit=u'S',repeat=u'99',desc=u'Claim Adjustment'),
Element( u'CAS01', Properties(desc=u'Claim Adjustment Group Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=1,
codes=[u'CO', u'OA', u'PI', u'PR'] ) ),
Element( u'CAS02', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'1',u'5'), position=2,
codes=[] ) ),
Element( u'CAS03', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=3,
codes=[] ) ),
Element( u'CAS04', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=4,
codes=[] ) ),
Element( u'CAS05', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=5,
codes=[] ) ),
Element( u'CAS06', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=6,
codes=[] ) ),
Element( u'CAS07', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=7,
codes=[] ) ),
Element( u'CAS08', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=8,
codes=[] ) ),
Element( u'CAS09', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=9,
codes=[] ) ),
Element( u'CAS10', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=10,
codes=[] ) ),
Element( u'CAS11', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=11,
codes=[] ) ),
Element( u'CAS12', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=12,
codes=[] ) ),
Element( u'CAS13', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=13,
codes=[] ) ),
Element( u'CAS14', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=14,
codes=[] ) ),
Element( u'CAS15', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=15,
codes=[] ) ),
Element( u'CAS16', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=16,
codes=[] ) ),
Element( u'CAS17', Properties(desc=u'Claim Adjustment Reason Code', req_sit=u'S', data_type=(u'ID',u'1',u'5'), position=17,
codes=[] ) ),
Element( u'CAS18', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=18,
codes=[] ) ),
Element( u'CAS19', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=19,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'R',repeat=u'1',desc=u'Patient Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'QC'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'1'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'S', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'S', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'S', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'34', u'HN', u'II', u'MI', u'MR'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'S', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'S',repeat=u'1',desc=u'Insured Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'IL'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'1', u'2'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'S', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'S', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'FI', u'II', u'MI'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'R', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'S',repeat=u'1',desc=u'Corrected Patient/Insured Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'74'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'1', u'2'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'S', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'S', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'S', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'C'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'S', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'S',repeat=u'1',desc=u'Service Provider Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'82'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'1', u'2'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'S', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'S', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'BD', u'BS', u'FI', u'MC', u'PC', u'SL', u'UP', u'XX'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'R', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'S',repeat=u'1',desc=u'Crossover Carrier Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'TT'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'2'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'R', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'N', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'AD', u'FI', u'NI', u'PI', u'PP', u'XV'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'R', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'S',repeat=u'1',desc=u'Corrected Priority Payer Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'PR'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'2'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'R', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'N', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'AD', u'FI', u'NI', u'PI', u'PP', u'XV'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'R', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'NM1', Properties(syntax=u'P0809 C1110 C1203',position=u'0300',req_sit=u'S',repeat=u'1',desc=u'Other Subscriber Name'),
Element( u'NM101', Properties(desc=u'Entity Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'GB'] ) ),
Element( u'NM102', Properties(desc=u'Entity Type Qualifier', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=2,
codes=[u'1', u'2'] ) ),
Element( u'NM103', Properties(desc=u'Name Last or Organization Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=3,
codes=[] ) ),
Element( u'NM104', Properties(desc=u'Name First', req_sit=u'S', data_type=(u'AN',u'1',u'35'), position=4,
codes=[] ) ),
Element( u'NM105', Properties(desc=u'Name Middle', req_sit=u'S', data_type=(u'AN',u'1',u'25'), position=5,
codes=[] ) ),
Element( u'NM106', Properties(desc=u'Name Prefix', req_sit=u'N', data_type=(u'AN',u'1',u'10'), position=6,
codes=[] ) ),
Element( u'NM107', Properties(desc=u'Name Suffix', req_sit=u'S', data_type=(u'AN',u'1',u'10'), position=7,
codes=[] ) ),
Element( u'NM108', Properties(desc=u'Identification Code Qualifier', req_sit=u'S', data_type=(u'ID',u'1',u'2'), position=8,
codes=[u'FI', u'II', u'MI'] ) ),
Element( u'NM109', Properties(desc=u'Identification Code', req_sit=u'S', data_type=(u'AN',u'2',u'80'), position=9,
codes=[] ) ),
Element( u'NM110', Properties(desc=u'Entity Relationship Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=10,
codes=[] ) ),
Element( u'NM111', Properties(desc=u'Entity Identifier Code', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=11,
codes=[] ) ),
Element( u'NM112', Properties(desc=u'Name Last or Organization Name', req_sit=u'N', data_type=(u'AN',u'1',u'60'), position=12,
codes=[] ) ),
),
Segment( u'MIA', Properties(syntax='',position=u'0330',req_sit=u'S',repeat=u'1',desc=u'Inpatient Adjudication Information'),
Element( u'MIA01', Properties(desc=u'Quantity', req_sit=u'R', data_type=(u'R',u'1',u'15'), position=1,
codes=[] ) ),
Element( u'MIA02', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'MIA03', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=3,
codes=[] ) ),
Element( u'MIA04', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=4,
codes=[] ) ),
Element( u'MIA05', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=5,
codes=[] ) ),
Element( u'MIA06', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=6,
codes=[] ) ),
Element( u'MIA07', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=7,
codes=[] ) ),
Element( u'MIA08', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=8,
codes=[] ) ),
Element( u'MIA09', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=9,
codes=[] ) ),
Element( u'MIA10', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=10,
codes=[] ) ),
Element( u'MIA11', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=11,
codes=[] ) ),
Element( u'MIA12', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=12,
codes=[] ) ),
Element( u'MIA13', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=13,
codes=[] ) ),
Element( u'MIA14', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=14,
codes=[] ) ),
Element( u'MIA15', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=15,
codes=[] ) ),
Element( u'MIA16', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=16,
codes=[] ) ),
Element( u'MIA17', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=17,
codes=[] ) ),
Element( u'MIA18', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=18,
codes=[] ) ),
Element( u'MIA19', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=19,
codes=[] ) ),
Element( u'MIA20', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=20,
codes=[] ) ),
Element( u'MIA21', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=21,
codes=[] ) ),
Element( u'MIA22', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=22,
codes=[] ) ),
Element( u'MIA23', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=23,
codes=[] ) ),
Element( u'MIA24', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=24,
codes=[] ) ),
),
Segment( u'MOA', Properties(syntax='',position=u'0350',req_sit=u'S',repeat=u'1',desc=u'Outpatient Adjudication Information'),
Element( u'MOA01', Properties(desc=u'Percentage as Decimal', req_sit=u'S', data_type=(u'R',u'1',u'10'), position=1,
codes=[] ) ),
Element( u'MOA02', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'MOA03', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=3,
codes=[] ) ),
Element( u'MOA04', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=4,
codes=[] ) ),
Element( u'MOA05', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=5,
codes=[] ) ),
Element( u'MOA06', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=6,
codes=[] ) ),
Element( u'MOA07', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=7,
codes=[] ) ),
Element( u'MOA08', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=8,
codes=[] ) ),
Element( u'MOA09', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=9,
codes=[] ) ),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'0400',req_sit=u'S',repeat=u'5',desc=u'Other Claim Related Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'1L', u'1W', u'28', u'6P', u'9A', u'9C', u'BB', u'CE', u'EA', u'F8', u'G1', u'G3', u'IG', u'SY'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'REF', Properties(syntax=u'R0203',position=u'0400',req_sit=u'S',repeat=u'10',desc=u'Rendering Provider Identification'),
Element( u'REF01', Properties(desc=u'Reference Identification Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'3'), position=1,
codes=[u'0B', u'1A', u'1B', u'1C', u'1D', u'1G', u'1H', u'1J', u'D3', u'G2', u'LU'] ) ),
Element( u'REF02', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=2,
codes=[] ) ),
Element( u'REF03', Properties(desc=u'Description', req_sit=u'N', data_type=(u'AN',u'1',u'80'), position=3,
codes=[] ) ),
Composite( u'C040', Properties(req_sit=u'N',repeat='',refdes='',seq=u'04',desc=u'Reference Identifier'),
),
),
Segment( u'DTM', Properties(syntax=u'R020305 C0403 P0506',position=u'0500',req_sit=u'S',repeat=u'2',desc=u'Statement From or To Date'),
Element( u'DTM01', Properties(desc=u'Date/Time Qualifier', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=1,
codes=[u'232', u'233'] ) ),
Element( u'DTM02', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=2,
codes=[] ) ),
Element( u'DTM03', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=3,
codes=[] ) ),
Element( u'DTM04', Properties(desc=u'Time Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'DTM05', Properties(desc=u'Date Time Period Format Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=5,
codes=[] ) ),
Element( u'DTM06', Properties(desc=u'Date Time Period', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=6,
codes=[] ) ),
),
Segment( u'DTM', Properties(syntax=u'R020305 C0403 P0506',position=u'0500',req_sit=u'S',repeat=u'1',desc=u'Coverage Expiration Date'),
Element( u'DTM01', Properties(desc=u'Date/Time Qualifier', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=1,
codes=[u'036'] ) ),
Element( u'DTM02', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=2,
codes=[] ) ),
Element( u'DTM03', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=3,
codes=[] ) ),
Element( u'DTM04', Properties(desc=u'Time Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'DTM05', Properties(desc=u'Date Time Period Format Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=5,
codes=[] ) ),
Element( u'DTM06', Properties(desc=u'Date Time Period', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=6,
codes=[] ) ),
),
Segment( u'DTM', Properties(syntax=u'R020305 C0403 P0506',position=u'0500',req_sit=u'S',repeat=u'1',desc=u'Claim Received Date'),
Element( u'DTM01', Properties(desc=u'Date/Time Qualifier', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=1,
codes=[u'050'] ) ),
Element( u'DTM02', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=2,
codes=[] ) ),
Element( u'DTM03', Properties(desc=u'Time', req_sit=u'N', data_type=(u'TM',u'4',u'8'), position=3,
codes=[] ) ),
Element( u'DTM04', Properties(desc=u'Time Code', req_sit=u'N', data_type=(u'ID',u'2',u'2'), position=4,
codes=[] ) ),
Element( u'DTM05', Properties(desc=u'Date Time Period Format Qualifier', req_sit=u'N', data_type=(u'ID',u'2',u'3'), position=5,
codes=[] ) ),
Element( u'DTM06', Properties(desc=u'Date Time Period', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=6,
codes=[] ) ),
),
Segment( u'PER', Properties(syntax=u'P0304 P0506 P0708',position=u'0600',req_sit=u'S',repeat=u'2',desc=u'Claim Contact Information'),
Element( u'PER01', Properties(desc=u'Contact Function Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'CX'] ) ),
Element( u'PER02', Properties(desc=u'Name', req_sit=u'S', data_type=(u'AN',u'1',u'60'), position=2,
codes=[] ) ),
Element( u'PER03', Properties(desc=u'Communication Number Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=3,
codes=[u'EM', u'FX', u'TE'] ) ),
Element( u'PER04', Properties(desc=u'Communication Number', req_sit=u'R', data_type=(u'AN',u'1',u'256'), position=4,
codes=[] ) ),
Element( u'PER05', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=5,
codes=[u'EM', u'EX', u'FX', u'TE'] ) ),
Element( u'PER06', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=6,
codes=[] ) ),
Element( u'PER07', Properties(desc=u'Communication Number Qualifier', req_sit=u'S', data_type=(u'ID',u'2',u'2'), position=7,
codes=[u'EX'] ) ),
Element( u'PER08', Properties(desc=u'Communication Number', req_sit=u'S', data_type=(u'AN',u'1',u'256'), position=8,
codes=[] ) ),
Element( u'PER09', Properties(desc=u'Contact Inquiry Reference', req_sit=u'N', data_type=(u'AN',u'1',u'20'), position=9,
codes=[] ) ),
),
Segment( u'AMT', Properties(syntax='',position=u'0620',req_sit=u'S',repeat=u'13',desc=u'Claim Supplemental Information'),
Element( u'AMT01', Properties(desc=u'Amount Qualifier Code', req_sit=u'R', data_type=(u'ID',u'1',u'3'), position=1,
codes=[u'AU', u'D8', u'DY', u'F5', u'I', u'NL', u'T', u'T2', u'ZK', u'ZL', u'ZM', u'ZN', u'ZO'] ) ),
Element( u'AMT02', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'AMT03', Properties(desc=u'Credit/Debit Flag Code', req_sit=u'N', data_type=(u'ID',u'1',u'1'), position=3,
codes=[] ) ),
),
Segment( u'QTY', Properties(syntax=u'E0204 R0204',position=u'0640',req_sit=u'S',repeat=u'14',desc=u'Claim Supplemental Information Quantity'),
Element( u'QTY01', Properties(desc=u'Quantity Qualifier', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'CA', u'CD', u'LA', u'LE', u'NE', u'NR', u'OU', u'PS', u'VS', u'ZK', u'ZL', u'ZM', u'ZN', u'ZO'] ) ),
Element( u'QTY02', Properties(desc=u'Quantity', req_sit=u'R', data_type=(u'R',u'1',u'15'), position=2,
codes=[] ) ),
Composite( u'C001', Properties(req_sit=u'N',repeat='',refdes='',seq=u'03',desc=u'Composite Unit of Measure'),
),
Element( u'QTY04', Properties(desc=u'Free-form Information', req_sit=u'N', data_type=(u'AN',u'1',u'30'), position=4,
codes=[] ) ),
),
parsed_835_2110,
)
parsed_835_2000 = Loop( u'2000', Properties(position=u'0030',looptype='',repeat=u'>1',req_sit=u'S',desc=u'Header Number'),
Segment( u'LX', Properties(syntax='',position=u'0030',req_sit=u'R',repeat=u'1',desc=u'Header Number'),
Element( u'LX01', Properties(desc=u'Assigned Number', req_sit=u'R', data_type=(u'N0',u'1',u'6'), position=1,
codes=[] ) ),
),
Segment( u'TS3', Properties(syntax='',position=u'0050',req_sit=u'S',repeat=u'1',desc=u'Provider Summary Information'),
Element( u'TS301', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
Element( u'TS302', Properties(desc=u'Facility Code Value', req_sit=u'R', data_type=(u'AN',u'1',u'2'), position=2,
codes=[] ) ),
Element( u'TS303', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=3,
codes=[] ) ),
Element( u'TS304', Properties(desc=u'Quantity', req_sit=u'R', data_type=(u'R',u'1',u'15'), position=4,
codes=[] ) ),
Element( u'TS305', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=5,
codes=[] ) ),
Element( u'TS306', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=6,
codes=[] ) ),
Element( u'TS307', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=7,
codes=[] ) ),
Element( u'TS308', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=8,
codes=[] ) ),
Element( u'TS309', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=9,
codes=[] ) ),
Element( u'TS310', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=10,
codes=[] ) ),
Element( u'TS311', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=11,
codes=[] ) ),
Element( u'TS312', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=12,
codes=[] ) ),
Element( u'TS313', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=13,
codes=[] ) ),
Element( u'TS314', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=14,
codes=[] ) ),
Element( u'TS315', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=15,
codes=[] ) ),
Element( u'TS316', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=16,
codes=[] ) ),
Element( u'TS317', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=17,
codes=[] ) ),
Element( u'TS318', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=18,
codes=[] ) ),
Element( u'TS319', Properties(desc=u'Monetary Amount', req_sit=u'N', data_type=(u'R',u'1',u'18'), position=19,
codes=[] ) ),
Element( u'TS320', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=20,
codes=[] ) ),
Element( u'TS321', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=21,
codes=[] ) ),
Element( u'TS322', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=22,
codes=[] ) ),
Element( u'TS323', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=23,
codes=[] ) ),
Element( u'TS324', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=24,
codes=[] ) ),
),
Segment( u'TS2', Properties(syntax='',position=u'0070',req_sit=u'S',repeat=u'1',desc=u'Provider Supplemental Summary Information'),
Element( u'TS201', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=1,
codes=[] ) ),
Element( u'TS202', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=2,
codes=[] ) ),
Element( u'TS203', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=3,
codes=[] ) ),
Element( u'TS204', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=4,
codes=[] ) ),
Element( u'TS205', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=5,
codes=[] ) ),
Element( u'TS206', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=6,
codes=[] ) ),
Element( u'TS207', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=7,
codes=[] ) ),
Element( u'TS208', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=8,
codes=[] ) ),
Element( u'TS209', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=9,
codes=[] ) ),
Element( u'TS210', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=10,
codes=[] ) ),
Element( u'TS211', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=11,
codes=[] ) ),
Element( u'TS212', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=12,
codes=[] ) ),
Element( u'TS213', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=13,
codes=[] ) ),
Element( u'TS214', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=14,
codes=[] ) ),
Element( u'TS215', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=15,
codes=[] ) ),
Element( u'TS216', Properties(desc=u'Quantity', req_sit=u'S', data_type=(u'R',u'1',u'15'), position=16,
codes=[] ) ),
Element( u'TS217', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=17,
codes=[] ) ),
Element( u'TS218', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=18,
codes=[] ) ),
Element( u'TS219', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=19,
codes=[] ) ),
),
parsed_835_2100,
)
parsed_835_DETAIL = Loop( u'DETAIL', Properties(position=u'0120',looptype=u'wrapper',repeat=u'>1',req_sit=u'S',desc=u'Table2 - Detail'),
parsed_835_2000,
)
parsed_835_FOOTER = Loop( u'FOOTER', Properties(position=u'0130',looptype=u'wrapper',repeat=u'1',req_sit=u'S',desc=u'Footer'),
Segment( u'PLB', Properties(syntax=u'P0506 P0708 P0910 P1112 P1314',position=u'0100',req_sit=u'S',repeat=u'>1',desc=u'Provider Adjustment'),
Element( u'PLB01', Properties(desc=u'Reference Identification', req_sit=u'R', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
Element( u'PLB02', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=2,
codes=[] ) ),
Composite( u'C042', Properties(req_sit=u'R',repeat='',refdes='',seq=u'03',desc=u'Adjustment Identifier'),
Element( u'PLB03-01', Properties(desc=u'Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[u'50', u'51', u'72', u'90', u'AH', u'AM', u'AP', u'B2', u'B3', u'BD', u'BN', u'C5', u'CR', u'CS', u'CT', u'CV', u'CW', u'DM', u'E3', u'FB', u'FC', u'GO', u'HM', u'IP', u'IR', u'IS', u'J1', u'L3', u'L6', u'LE', u'LS', u'OA', u'OB', u'PI', u'PL', u'RA', u'RE', u'SL', u'TL', u'WO', u'WU'] ) ),
Element( u'PLB03-02', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
),
Element( u'PLB04', Properties(desc=u'Monetary Amount', req_sit=u'R', data_type=(u'R',u'1',u'18'), position=4,
codes=[] ) ),
Composite( u'C042', Properties(req_sit=u'S',repeat='',refdes='',seq=u'05',desc=u'Adjustment Identifier'),
Element( u'PLB05-01', Properties(desc=u'Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[] ) ),
Element( u'PLB05-02', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
),
Element( u'PLB06', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=6,
codes=[] ) ),
Composite( u'C042', Properties(req_sit=u'S',repeat='',refdes='',seq=u'07',desc=u'Adjustment Identifier'),
Element( u'PLB07-01', Properties(desc=u'Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[] ) ),
Element( u'PLB07-02', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
),
Element( u'PLB08', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=8,
codes=[] ) ),
Composite( u'C042', Properties(req_sit=u'S',repeat='',refdes='',seq=u'09',desc=u'Adjustment Identifier'),
Element( u'PLB09-01', Properties(desc=u'Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[] ) ),
Element( u'PLB09-02', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
),
Element( u'PLB10', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=10,
codes=[] ) ),
Composite( u'C042', Properties(req_sit=u'S',repeat='',refdes='',seq=u'11',desc=u'Adjustment Identifier'),
Element( u'PLB11-01', Properties(desc=u'Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[] ) ),
Element( u'PLB11-02', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
),
Element( u'PLB12', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=12,
codes=[] ) ),
Composite( u'C042', Properties(req_sit=u'S',repeat='',refdes='',seq=u'13',desc=u'Adjustment Identifier'),
Element( u'PLB13-01', Properties(desc=u'Adjustment Reason Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=0,
codes=[] ) ),
Element( u'PLB13-02', Properties(desc=u'Reference Identification', req_sit=u'S', data_type=(u'AN',u'1',u'50'), position=1,
codes=[] ) ),
),
Element( u'PLB14', Properties(desc=u'Monetary Amount', req_sit=u'S', data_type=(u'R',u'1',u'18'), position=14,
codes=[] ) ),
),
)
parsed_835_ST_LOOP = Loop( u'ST_LOOP', Properties(position=u'0200',looptype=u'explicit',repeat=u'>1',req_sit=u'R',desc=u'Transaction Set Header'),
Segment( u'ST', Properties(syntax='',position=u'0100',req_sit=u'R',repeat=u'1',desc=u'Transaction Set Header'),
Element( u'ST01', Properties(desc=u'Transaction Set Identifier Code', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=1,
codes=[u'835'] ) ),
Element( u'ST02', Properties(desc=u'Transaction Set Control Number', req_sit=u'R', data_type=(u'AN',u'4',u'9'), position=2,
codes=[] ) ),
Element( u'ST03', Properties(desc=u'Implementation Convention Reference', req_sit=u'N', data_type=(u'AN',u'1',u'35'), position=3,
codes=[u'005010X221A1'] ) ),
),
parsed_835_HEADER,
parsed_835_DETAIL,
parsed_835_FOOTER,
Segment( u'SE', Properties(syntax='',position=u'0200',req_sit=u'R',repeat=u'1',desc=u'Transaction Set Trailer'),
Element( u'SE01', Properties(desc=u'Number of Included Segments', req_sit=u'R', data_type=(u'N0',u'1',u'10'), position=1,
codes=[] ) ),
Element( u'SE02', Properties(desc=u'Transaction Set Control Number', req_sit=u'R', data_type=(u'AN',u'4',u'9'), position=2,
codes=[] ) ),
),
)
parsed_835_GS_LOOP = Loop( u'GS_LOOP', Properties(position=u'0200',looptype=u'explicit',repeat=u'>1',req_sit=u'R',desc=u'Functional Group Header'),
Segment( u'GS', Properties(syntax='',position=u'0100',req_sit=u'R',repeat=u'1',desc=u'Functional Group Header'),
Element( u'GS01', Properties(desc=u'Functional Identifier Code', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'HP'] ) ),
Element( u'GS02', Properties(desc=u'Application Senders Code', req_sit=u'R', data_type=(u'AN',u'2',u'15'), position=2,
codes=[] ) ),
Element( u'GS04', Properties(desc=u'Application Receivers Code', req_sit=u'R', data_type=(u'AN',u'2',u'15'), position=3,
codes=[] ) ),
Element( u'GS04', Properties(desc=u'Date', req_sit=u'R', data_type=(u'DT',u'8',u'8'), position=4,
codes=[] ) ),
Element( u'GS05', Properties(desc=u'Time', req_sit=u'R', data_type=(u'TM',u'4',u'8'), position=5,
codes=[] ) ),
Element( u'GS06', Properties(desc=u'Group Control Number', req_sit=u'R', data_type=(u'N0',u'1',u'9'), position=6,
codes=[] ) ),
Element( u'GS07', Properties(desc=u'Responsible Agency Code', req_sit=u'R', data_type=(u'ID',u'1',u'2'), position=7,
codes=[u'X'] ) ),
Element( u'GS08', Properties(desc=u'Version / Release / Industry Identifier Code', req_sit=u'R', data_type=(u'AN',u'1',u'12'), position=8,
codes=[u'005010X221A1'] ) ),
),
parsed_835_ST_LOOP,
Segment( u'GE', Properties(syntax='',position=u'0300',req_sit=u'R',repeat=u'1',desc=u'Functional Group Trailer'),
Element( u'GE01', Properties(desc=u'Number of Transaction Sets Included', req_sit=u'R', data_type=(u'N0',u'1',u'6'), position=1,
codes=[] ) ),
Element( u'GE02', Properties(desc=u'Group Control Number', req_sit=u'R', data_type=(u'N0',u'1',u'9'), position=2,
codes=[] ) ),
),
)
parsed_835_ISA_LOOP = Loop( u'ISA_LOOP', Properties(position=u'0010',looptype=u'explicit',repeat=u'>1',req_sit=u'R',desc=u'Interchange Control Header'),
Segment( u'ISA', Properties(syntax='',position=u'0100',req_sit=u'R',repeat=u'1',desc=u'Interchange Control Header'),
Element( u'ISA01', Properties(desc=u'I01', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=1,
codes=[u'00', u'03'] ) ),
Element( u'ISA02', Properties(desc=u'I02', req_sit=u'R', data_type=(u'AN',u'10',u'10'), position=2,
codes=[] ) ),
Element( u'ISA03', Properties(desc=u'I03', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=3,
codes=[u'00', u'01'] ) ),
Element( u'ISA04', Properties(desc=u'I04', req_sit=u'R', data_type=(u'AN',u'10',u'10'), position=4,
codes=[] ) ),
Element( u'ISA05', Properties(desc=u'I05', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=5,
codes=[u'01', u'14', u'20', u'27', u'28', u'29', u'30', u'33', u'ZZ'] ) ),
Element( u'ISA06', Properties(desc=u'I06', req_sit=u'R', data_type=(u'AN',u'15',u'15'), position=6,
codes=[] ) ),
Element( u'ISA07', Properties(desc=u'I05', req_sit=u'R', data_type=(u'ID',u'2',u'2'), position=7,
codes=[u'01', u'14', u'20', u'27', u'28', u'29', u'30', u'33', u'ZZ'] ) ),
Element( u'ISA08', Properties(desc=u'I07', req_sit=u'R', data_type=(u'AN',u'15',u'15'), position=8,
codes=[] ) ),
Element( u'ISA09', Properties(desc=u'I08', req_sit=u'R', data_type=(u'DT',u'6',u'6'), position=9,
codes=[] ) ),
Element( u'ISA10', Properties(desc=u'I09', req_sit=u'R', data_type=(u'TM',u'4',u'4'), position=10,
codes=[] ) ),
Element( u'ISA11', Properties(desc=u'I65', req_sit=u'R', data_type=(u'AN',u'1',u'1'), position=11,
codes=[] ) ),
Element( u'ISA12', Properties(desc=u'I11', req_sit=u'R', data_type=(u'ID',u'5',u'5'), position=12,
codes=[u'00501'] ) ),
Element( u'ISA13', Properties(desc=u'I12', req_sit=u'R', data_type=(u'N0',u'9',u'9'), position=13,
codes=[] ) ),
Element( u'ISA14', Properties(desc=u'I13', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=14,
codes=[u'0', u'1'] ) ),
Element( u'ISA15', Properties(desc=u'I14', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=15,
codes=[u'P', u'T'] ) ),
Element( u'ISA16', Properties(desc=u'I15', req_sit=u'R', data_type=(u'AN',u'1',u'1'), position=16,
codes=[] ) ),
),
parsed_835_GS_LOOP,
Segment( u'TA1', Properties(syntax='',position=u'0200',req_sit=u'S',repeat=u'1',desc=u'Interchange Acknowledgement'),
Element( u'TA101', Properties(desc=u'I12', req_sit=u'R', data_type=(u'N0',u'9',u'9'), position=1,
codes=[] ) ),
Element( u'TA102', Properties(desc=u'I08', req_sit=u'R', data_type=(u'DT',u'6',u'6'), position=2,
codes=[] ) ),
Element( u'TA103', Properties(desc=u'I09', req_sit=u'R', data_type=(u'TM',u'4',u'4'), position=3,
codes=[] ) ),
Element( u'TA104', Properties(desc=u'I17', req_sit=u'R', data_type=(u'ID',u'1',u'1'), position=4,
codes=[u'A', u'E', u'R'] ) ),
Element( u'TA105', Properties(desc=u'I18', req_sit=u'R', data_type=(u'ID',u'3',u'3'), position=5,
codes=[u'000', u'001', u'002', u'003', u'004', u'005', u'006', u'007', u'008', u'009', u'010', u'011', u'012', u'013', u'014', u'015', u'016', u'017', u'018', u'019', u'020', u'021', u'022', u'023', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'031'] ) ),
),
Segment( u'IEA', Properties(syntax='',position=u'0300',req_sit=u'R',repeat=u'1',desc=u'Interchange Control Trailer'),
Element( u'IEA01', Properties(desc=u'I16', req_sit=u'R', data_type=(u'N0',u'1',u'5'), position=1,
codes=[] ) ),
Element( u'IEA02', Properties(desc=u'I12', req_sit=u'R', data_type=(u'N0',u'9',u'9'), position=2,
codes=[] ) ),
),
)
parsed_835 = Message( u'835W1', Properties(desc=u'HIPAA Health Care Claim Payment/Advice 005010X221A1 835W1'),
parsed_835_ISA_LOOP,
)
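The nested `Loop`/`Segment`/`Element` definitions above form a tree describing the 005010X221A1 835 transaction, with each node's `Properties` carrying its requirement designator (`req_sit`). As a minimal, self-contained sketch of how such a map can be traversed (the stand-in classes below are hypothetical simplifications; the real pyx12 node types carry far more behavior):

```python
# Minimal stand-ins for the map node types used above (hypothetical;
# the real pyx12 classes are richer than this).
class Properties:
    def __init__(self, **kw):
        self.__dict__.update(kw)

class Node:
    def __init__(self, name, props, *children):
        self.name, self.props, self.children = name, props, children

class Element(Node): pass
class Segment(Node): pass
class Loop(Node): pass

def required_segments(node):
    """Recursively collect the names of segments marked req_sit='R'."""
    out = []
    if isinstance(node, Segment) and getattr(node.props, 'req_sit', '') == 'R':
        out.append(node.name)
    for child in node.children:
        out.extend(required_segments(child))
    return out

# A tiny map in the same shape as parsed_835_ST_LOOP above.
demo = Loop('ST_LOOP', Properties(req_sit='R'),
    Segment('ST', Properties(req_sit='R'),
        Element('ST01', Properties(req_sit='R'))),
    Segment('SE', Properties(req_sit='R'),
        Element('SE01', Properties(req_sit='R'))),
)
print(required_segments(demo))  # ['ST', 'SE']
```

The same recursive walk generalizes to other queries over the map, such as gathering every element's allowed `codes` list.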
| 67.988822 | 304 | 0.637756 | 14,086 | 79,071 | 3.50142 | 0.061763 | 0.058089 | 0.081182 | 0.039415 | 0.893271 | 0.863202 | 0.856288 | 0.849598 | 0.844549 | 0.825328 | 0 | 0.06565 | 0.114049 | 79,071 | 1,162 | 305 | 68.047332 | 0.638402 | 0.000911 | 0 | 0.660622 | 1 | 0 | 0.230005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.000864 | 0 | 0.000864 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1348a858bc1cb808c3a66c403bca7675517d0594 | 160 | py | Python | pymc3_models/__init__.py | kkuusisto/pymc3_models | 7dd5baa5552849f15e5a9e09518083e991d89453 | ["Apache-2.0"] | 1 | 2019-10-11T09:29:36.000Z | 2019-10-11T09:29:36.000Z | pymc3_models/__init__.py | kkuusisto/pymc3_models | 7dd5baa5552849f15e5a9e09518083e991d89453 | ["Apache-2.0"] | null | null | null | pymc3_models/__init__.py | kkuusisto/pymc3_models | 7dd5baa5552849f15e5a9e09518083e991d89453 | ["Apache-2.0"] | null | null | null | from pymc3_models.models.HierarchicalLogisticRegression import HierarchicalLogisticRegression
from pymc3_models.models.LinearRegression import LinearRegression
| 53.333333 | 93 | 0.925 | 14 | 160 | 10.428571 | 0.428571 | 0.123288 | 0.205479 | 0.287671 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.05 | 160 | 2 | 94 | 80 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
135e38f09add8920839a82c51f94b545844497e8 | 63,911 | py | Python | tests/parse_91.py | matthewgehring/wptools | 788cdc2078696dacb14652d5f2ad098a585e4763 | ["MIT"] | 482 | 2015-04-13T23:43:42.000Z | 2022-03-31T14:44:50.000Z | tests/parse_91.py | matthewgehring/wptools | 788cdc2078696dacb14652d5f2ad098a585e4763 | ["MIT"] | 168 | 2016-01-06T14:30:05.000Z | 2022-02-17T22:14:36.000Z | tests/parse_91.py | matthewgehring/wptools | 788cdc2078696dacb14652d5f2ad098a585e4763 | ["MIT"] | 80 | 2015-05-03T18:10:58.000Z | 2022-02-17T22:54:25.000Z | # -*- coding:utf-8 -*-
query = 'https://fr.wikipedia.org/w/api.php?action=parse&formatversion=2&contentmodel=text&disableeditsection=&disablelimitreport=&disabletoc=&prop=text|iwlinks|parsetree|wikitext|displaytitle|properties&redirects&page=Okapi'
response = r"""{"parse":{"title":"Okapi","pageid":26145,"redirects":[],"text":"<div class=\"mw-parser-output\"><p><span id=\"sous_titre_h1\"><i>Okapia johnstoni</i></span></p>\n<div class=\"homonymie\"><a href=\"/wiki/Aide:Homonymie\" title=\"Aide:Homonymie\"><img alt=\"Page d'aide sur l'homonymie\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Disambig_colour.svg/20px-Disambig_colour.svg.png\" width=\"20\" height=\"15\" class=\"noviewer\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Disambig_colour.svg/30px-Disambig_colour.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Disambig_colour.svg/40px-Disambig_colour.svg.png 2x\" data-file-width=\"272\" data-file-height=\"200\" /></a> Pour les articles homonymes, voir <a href=\"/wiki/Okapi_(homonymie)\" class=\"mw-disambig\" title=\"Okapi (homonymie)\">Okapi (homonymie)</a>.</div>\n<div class=\"bandeau-article bandeau-niveau-modere plainlinks metadata\">\n<div class=\"floatright\"><a href=\"/wiki/Aide:Liste_de_bandeaux_de_maintenance_d%27articles\" title=\"Si ce bandeau n'est plus pertinent, retirez-le. Cliquez pour voir d'autres modèles.\"><img alt=\"Si ce bandeau n'est plus pertinent, retirez-le. 
Cliquez pour voir d'autres modèles.\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/3/38/Info_Simple.svg/12px-Info_Simple.svg.png\" width=\"12\" height=\"12\" class=\"noviewer\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/3/38/Info_Simple.svg/18px-Info_Simple.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/38/Info_Simple.svg/24px-Info_Simple.svg.png 2x\" data-file-width=\"512\" data-file-height=\"512\" /></a></div>\n<div class=\"bandeau-cell bandeau-icone\"><a href=\"/wiki/Fichier:Question_book-4.svg\" class=\"image\"><img alt=\"\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/6/64/Question_book-4.svg/45px-Question_book-4.svg.png\" width=\"45\" height=\"35\" class=\"noviewer\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/6/64/Question_book-4.svg/68px-Question_book-4.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/6/64/Question_book-4.svg/90px-Question_book-4.svg.png 2x\" data-file-width=\"262\" data-file-height=\"204\" /></a></div>\n<div class=\"bandeau-cell\"><strong class=\"bandeau-titre\">Des informations de cet article ou de cette section devraient être mieux reliées aux sources mentionnées dans les sections « Bibliographie », « Sources » ou « Liens externes »</strong> <small>(juin 2015).</small>\n<p>Améliorez sa <a href=\"/wiki/Wikip%C3%A9dia:V%C3%A9rifiabilit%C3%A9\" title=\"Wikipédia:Vérifiabilité\">vérifiabilité</a> en les <a href=\"/wiki/Mod%C3%A8le:Sources_%C3%A0_lier/Explication\" title=\"Modèle:Sources à lier/Explication\">associant par des références</a> à l'aide d'<a href=\"/wiki/Aide:Note\" title=\"Aide:Note\">appels de notes</a>.</p>\n</div>\n</div>\n<div class=\"infobox_v3 large taxobox_v3 zoologie animal bordered\" style=\"width:20em\">\n<div class=\"entete\" style=\"\">\n<div><i>Okapia johnstoni</i></div>\n</div>\n<div class=\"images\"><a href=\"/wiki/Fichier:Okapi2.jpg\" class=\"image\" title=\"Okapi\"><img alt=\"Description de cette image, également commentée ci-après\" 
src=\"//upload.wikimedia.org/wikipedia/commons/thumb/1/18/Okapi2.jpg/290px-Okapi2.jpg\" width=\"290\" height=\"252\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/1/18/Okapi2.jpg/435px-Okapi2.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/1/18/Okapi2.jpg/580px-Okapi2.jpg 2x\" data-file-width=\"1707\" data-file-height=\"1482\" /></a></div>\n<div class=\"legend\">Okapi</div>\n<table class=\"taxobox_classification\">\n<caption><a href=\"/wiki/Classification_scientifique_des_esp%C3%A8ces\" title=\"Classification scientifique des espèces\">Classification</a></caption>\n<tr>\n<th scope=\"row\" style=\"width:8em;\"><a href=\"/wiki/R%C3%A8gne_(biologie)\" title=\"Règne (biologie)\">Règne</a></th>\n<td><span class=\"normal\"><a href=\"/wiki/Animal\" title=\"Animal\">Animalia</a></span></td>\n</tr>\n<tr>\n<th scope=\"row\" style=\"width:8em;\"><a href=\"/wiki/Embranchement_(biologie)\" title=\"Embranchement (biologie)\">Embranchement</a></th>\n<td><span class=\"normal\"><a href=\"/wiki/Chordata\" title=\"Chordata\">Chordata</a></span></td>\n</tr>\n<tr>\n<th scope=\"row\" style=\"width:8em;\"><a href=\"/wiki/Classe_(biologie)\" title=\"Classe (biologie)\">Classe</a></th>\n<td><span class=\"normal\"><a href=\"/wiki/Mammalia\" class=\"mw-redirect\" title=\"Mammalia\">Mammalia</a></span></td>\n</tr>\n<tr>\n<th scope=\"row\" style=\"width:8em;\"><a href=\"/wiki/Sous-classe_(biologie)\" title=\"Sous-classe (biologie)\">Sous-classe</a></th>\n<td><span class=\"normal\"><a href=\"/wiki/Theria\" title=\"Theria\">Theria</a></span></td>\n</tr>\n<tr>\n<th scope=\"row\" style=\"width:8em;\"><a href=\"/wiki/Ordre_(biologie)\" title=\"Ordre (biologie)\">Ordre</a></th>\n<td><span class=\"normal\"><a href=\"/wiki/Artiodactyla\" title=\"Artiodactyla\">Artiodactyla</a></span></td>\n</tr>\n<tr>\n<th scope=\"row\" style=\"width:8em;\"><a href=\"/wiki/Famille_(biologie)\" title=\"Famille (biologie)\">Famille</a></th>\n<td><span class=\"normal\"><a href=\"/wiki/Giraffidae\" 
title=\"Giraffidae\">Giraffidae</a></span></td>\n</tr>\n</table>\n<p class=\"bloc\"><a href=\"/wiki/Genre_(biologie)\" title=\"Genre (biologie)\">Genre</a></p>\n<div class=\"center taxobox_classification\"><b><span style=\"font-style: normal\"><i>Okapia</i></span></b><br />\n<span class=\"rnormal\"><small><b><a href=\"/wiki/Edwin_Ray_Lankester\" title=\"Edwin Ray Lankester\">Lankester</a>, <a href=\"/wiki/1901\" title=\"1901\">1901</a></b></small></span></div>\n<p class=\"bloc\"><a href=\"/wiki/Nom_binominal\" title=\"Nom binominal\">Nom binominal</a></p>\n<div class=\"center taxobox_classification\"><b><span style=\"font-style: normal\"><i>Okapia johnstoni</i></span></b><br />\n<span class=\"rnormal\"><small><b>(<a href=\"/wiki/Philip_Lutley_Sclater\" title=\"Philip Lutley Sclater\">Sclater</a>, <a href=\"/wiki/1901\" title=\"1901\">1901</a>)</b></small></span></div>\n<p class=\"bloc\"><a href=\"/wiki/Synonyme_(taxinomie)\" title=\"Synonyme (taxinomie)\">Synonymes</a></p>\n<ul>\n<li><i>Equus johnstoni</i> <small>P.L. 
Sclater, 1901</small></li>\n</ul>\n<p class=\"bloc\"><a href=\"/wiki/Statut_de_conservation\" title=\"Statut de conservation\">Statut de conservation</a> <a href=\"/wiki/Union_internationale_pour_la_conservation_de_la_nature\" title=\"Union internationale pour la conservation de la nature\">UICN</a></p>\n<p class=\"center\"><img alt=\"( EN )\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/d/df/Status_iucn3.1_EN-fr.svg/244px-Status_iucn3.1_EN-fr.svg.png\" width=\"244\" height=\"65\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/d/df/Status_iucn3.1_EN-fr.svg/366px-Status_iucn3.1_EN-fr.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/df/Status_iucn3.1_EN-fr.svg/488px-Status_iucn3.1_EN-fr.svg.png 2x\" data-file-width=\"240\" data-file-height=\"64\" /><br />\n<b>EN</b> A2abcd+4abcd : <b>En danger</b></p>\n<p class=\"bloc\">Répartition géographique</p>\n<div class=\"images\"><a href=\"/wiki/Fichier:Okapi_map.jpg\" class=\"image\"><img alt=\"Description de l'image Okapi map.jpg.\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/7/7f/Okapi_map.jpg/290px-Okapi_map.jpg\" width=\"290\" height=\"140\" srcset=\"//upload.wikimedia.org/wikipedia/commons/7/7f/Okapi_map.jpg 1.5x\" data-file-width=\"347\" data-file-height=\"167\" /></a></div>\n<p class=\"bloc\">Répartition géographique</p>\n<div class=\"images\"><a href=\"/wiki/Fichier:Okapi_distribution.PNG\" class=\"image\"><img alt=\"Description de l'image Okapi distribution.PNG.\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/6/6a/Okapi_distribution.PNG/290px-Okapi_distribution.PNG\" width=\"290\" height=\"365\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/6/6a/Okapi_distribution.PNG/435px-Okapi_distribution.PNG 1.5x, //upload.wikimedia.org/wikipedia/commons/6/6a/Okapi_distribution.PNG 2x\" data-file-width=\"477\" data-file-height=\"601\" /></a></div>\n</div>\n<p>L’<b>okapi</b> (<i><b>Okapia johnstoni</b></i>), aussi connu sous le nom de <b>Mondonga</b>, est une <a 
href=\"/wiki/Esp%C3%A8ce\" title=\"Espèce\">espèce</a> de <a href=\"/wiki/Mammif%C3%A8re\" title=\"Mammifère\">mammifères</a> <a href=\"/wiki/Ruminant\" class=\"mw-redirect\" title=\"Ruminant\">ruminants</a> de la même <a href=\"/wiki/Famille_(biologie)\" title=\"Famille (biologie)\">famille</a> que la <a href=\"/wiki/Girafe\" title=\"Girafe\">girafe</a>, venant des forêts équatoriales de l'<a href=\"/wiki/Afrique_centrale\" title=\"Afrique centrale\">Afrique centrale</a>. Bien que connu par les <a href=\"/wiki/Pygm%C3%A9e\" title=\"Pygmée\">Pygmées</a>, il est « <a href=\"/wiki/D%C3%A9couverte_scientifique\" title=\"Découverte scientifique\">découvert</a> » en <a href=\"/wiki/1901\" title=\"1901\">1901</a> par Sir <a href=\"/wiki/Harry_Johnston\" title=\"Harry Johnston\">Harry Johnston</a> à qui il doit son nom. C’est l'un des derniers grands <a href=\"/wiki/Mammif%C3%A8re\" title=\"Mammifère\">mammifères</a> à être observé scientifiquement sur la planète.</p>\n<p>Cet animal dont l’allure rappelle à la fois celle du <a href=\"/wiki/Z%C3%A8bre\" title=\"Zèbre\">zèbre</a> et de la <a href=\"/wiki/Girafe\" title=\"Girafe\">girafe</a> vit exclusivement dans une petite région au nord-est de la <a href=\"/wiki/R%C3%A9publique_d%C3%A9mocratique_du_Congo\" title=\"République démocratique du Congo\">République démocratique du Congo</a>, la <a href=\"/wiki/For%C3%AAt_de_l%27Ituri\" title=\"Forêt de l'Ituri\">forêt tropicale de l’Ituri</a>, où une réserve lui est spécialement dédiée. Son nom vernaculaire en <a href=\"/wiki/Lingala\" title=\"Lingala\">lingala</a> est <i>mondonga</i>.</p>\n<p><span class=\"need_ref\" style=\"cursor:help;\" title=\"Ce passage nécessite une référence.\">Cet animal ne vit pas exclusivement en RD Congo. Il a été aussi observé dans les forêts du sud-est du Gabon, à la frontière avec le Congo-Brazzaville. 
Il a entre autres été vu par des chasseurs français en 1983 sur la piste reliant Boumango à Mbinda</span><sup class=\"need_ref_tag\" style=\"padding-left:2px;\"><a href=\"/wiki/Aide:R%C3%A9f%C3%A9rence_n%C3%A9cessaire\" title=\"Aide:Référence nécessaire\">[réf. nécessaire]</a></sup>.</p>\n<h2><span id=\"Caract.C3.A9ristiques_physiques\"></span><span class=\"mw-headline\" id=\"Caractéristiques_physiques\">Caractéristiques physiques</span></h2>\n<p>L’okapi mesure environ <span class=\"nowrap\">1,80 <abbr class=\"abbr\" title=\"mètre\">m</abbr></span> au <a href=\"/wiki/Garrot_(anatomie)\" title=\"Garrot (anatomie)\">garrot</a> et pèse au maximum <span class=\"nowrap\">200 à 230 <abbr class=\"abbr\" title=\"kilogramme\">kg</abbr></span>. Sa <a href=\"/wiki/Morphologie_(biologie)\" title=\"Morphologie (biologie)\">morphologie</a> est relativement proche de celle de la <a href=\"/wiki/Girafe\" title=\"Girafe\">girafe</a> : son corps est court et massif, ses pattes arrière sont plus courtes que celles de devant (ce qui donne l'impression que sa croupe est plus basse que ses épaules) et sa colonne vertébrale a un axe oblique. Toutefois son cou est moins long et plus épais que celui de la girafe. Le mâle porte des <a href=\"/wiki/Ossic%C3%B4ne\" title=\"Ossicône\">ossicônes</a>, sortes de petites cornes osseuses recouvertes de peau qui se développent entre 1 et 5 ans. Ses oreilles sont larges et particulièrement mobiles. Sa langue <a href=\"/wiki/Pr%C3%A9hensile\" class=\"mw-redirect\" title=\"Préhensile\">préhensile</a> est noire et mesure entre <span class=\"nowrap\">30 et 50 <abbr class=\"abbr\" title=\"centimètre\">cm</abbr></span> de long : avec elle, il peut saisir sa nourriture mais aussi nettoyer toutes les parties de son corps, y compris ses oreilles.</p>\n<p>Son pelage court est d’un brun chocolat sur le corps avec des zébrures noires et blanches sur les pattes et l’arrière-train. 
La tête est marquée d’une tache blanche au niveau de la joue.</p>\n<h2><span class=\"mw-headline\" id=\"Histoire\">Histoire</span></h2>\n<p>Les <a href=\"/wiki/Pygm%C3%A9e\" title=\"Pygmée\">pygmées</a> de l’actuelle <a href=\"/wiki/R%C3%A9publique_d%C3%A9mocratique_du_Congo\" title=\"République démocratique du Congo\">République démocratique du Congo</a> connaissaient depuis longtemps l’okapi qu’ils prenaient parfois au piège dans des trous camouflés. Ils l’appelaient <i>o’api</i>. En 1890, le journaliste <a href=\"/wiki/Henry_Morton_Stanley\" title=\"Henry Morton Stanley\">Henry Morton Stanley</a> (1841-1904), venu à la rencontre des pygmées, rapporte l’existence d’une sorte d’âne-zèbre broutant des feuilles. Sir <a href=\"/wiki/Harry_Hamilton_Johnston\" class=\"mw-redirect\" title=\"Harry Hamilton Johnston\">Harry Hamilton Johnston</a> (1858-1927), futur gouverneur de l’<a href=\"/wiki/Ouganda\" title=\"Ouganda\">Ouganda</a>, curieux de cet animal étrange, partit en 1899 à sa recherche et le baptisa <i>Equus johnstoni</i>, pensant qu’il s’agissait d’une nouvelle espèce de <a href=\"/wiki/Z%C3%A8bre\" title=\"Zèbre\">zèbre</a> (du genre <i>Equus</i>). En 1901, il réussit à se procurer la peau entière d’un okapi ainsi que deux crânes. Leur étude révéla qu’il ne s’agissait pas d’un <a href=\"/wiki/Z%C3%A8bre\" title=\"Zèbre\">zèbre</a> mais d’une espèce appartenant à un nouveau genre et on changea son nom en <i>Okapia johnstoni</i>.</p>\n<h2><span class=\"mw-headline\" id=\"Alimentation\">Alimentation</span></h2>\n<p>L’okapi se nourrit de <a href=\"/wiki/Feuille\" title=\"Feuille\">feuilles</a>, de divers végétaux (dont l’<a href=\"/wiki/Euphorbe\" title=\"Euphorbe\">euphorbe</a>, particulièrement toxique pour l’homme), de bourgeons, de branches tendres, de fruits, de champignons et de fougères. Il cueille sa nourriture à l’aide de sa langue et de ses lèvres préhensiles. 
Il comble ses besoins en minéraux en mangeant de l’argile sulfureuse qu’il trouve près des rivières ou des <a href=\"/wiki/Gramin%C3%A9e\" class=\"mw-redirect\" title=\"Graminée\">graminées</a> poussant sur des sols hautement minéralisés.</p>\n<h2><span class=\"mw-headline\" id=\"Habitat\">Habitat</span></h2>\n<p>L’okapi est un animal discret et solitaire qui ne fréquente ses pairs qu’au moment de la reproduction. On compte généralement deux individus au km². Sédentaire, il vit sur un territoire qu’il marque par des dépôts d’urine et des sécrétions issues de glandes situées entre ses doigts. Il emprunte toujours les mêmes pistes de passage qu’il a ainsi marquées. C’est un animal essentiellement nocturne dont le principal prédateur est le <a href=\"/wiki/L%C3%A9opard_(f%C3%A9lin)\" class=\"mw-redirect\" title=\"Léopard (félin)\">léopard</a>. Ses oreilles très grandes lui permettent d'entendre le moindre bruit en cas d'attaque.</p>\n<h2><span class=\"mw-headline\" id=\"Reproduction\">Reproduction</span></h2>\n<p>La saison des amours a lieu de mai à juillet. La femelle, qui a déjà signalé sa piste par ses sécrétions odoriférantes, guide le mâle à travers la forêt dense en émettant des appels ressemblant à des toussotements. Il peut y avoir des affrontements entre les mâles convoitant une même femelle. Les deux membres du couple se rejoignent finalement dans une courte parade nuptiale faite de fuites et d’esquives puis s’accouplent. Après une <a href=\"/wiki/Gestation\" title=\"Gestation\">gestation</a> de <span class=\"nowrap\">15 mois</span> environ, elle donne naissance à un petit d’environ <span class=\"nowrap\">75 <abbr class=\"abbr\" title=\"centimètre\">cm</abbr></span> au garrot et pesant environ <span class=\"nowrap\">20 <abbr class=\"abbr\" title=\"kilogramme\">kg</abbr></span>. Celui-ci suit sa mère pendant quelques jours jusqu’à trouver un fourré où se cacher. 
Il y reste la plupart du temps jusqu’à atteindre l’âge de deux mois, à partir duquel il suit sa mère dans ses déplacements. Le <a href=\"/wiki/Sevrage_(alimentation)\" title=\"Sevrage (alimentation)\">sevrage</a> a lieu entre <span class=\"nowrap\">6 et 10 mois</span>.</p>\n<h2><span id=\"Une_esp.C3.A8ce_menac.C3.A9e\"></span><span class=\"mw-headline\" id=\"Une_espèce_menacée\">Une espèce menacée</span></h2>\n<p>L’okapi figure sur la liste rouge des espèces menacées de l’<a href=\"/wiki/UICN\" class=\"mw-redirect\" title=\"UICN\">UICN</a>. En effet, son habitat est de plus en plus restreint. Même à l’intérieur de la réserve, l’okapi est victime du <a href=\"/wiki/Braconnage\" title=\"Braconnage\">braconnage</a>, surtout dans le parc national de Virunga. Sa population est estimée entre <span class=\"nowrap\">10 000 et 35 000 individus</span> et la tendance est à la baisse. Cet animal est protégé depuis 1933<sup id=\"cite_ref-1\" class=\"reference\"><a href=\"#cite_note-1\"><span class=\"cite_crochet\">[</span>1<span class=\"cite_crochet\">]</span></a></sup>. 
L'espèce est en danger depuis décembre 2013.</p>\n<h2><span id=\"La_vie_en_captivit.C3.A9\"></span><span class=\"mw-headline\" id=\"La_vie_en_captivité\">La vie en captivité</span></h2>\n<div class=\"thumb tright\">\n<div class=\"thumbinner\" style=\"width:222px;\"><a href=\"/wiki/Fichier:Okapia_johnstoni_(Okapi)_-_437.jpg\" class=\"image\"><img alt=\"\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Okapia_johnstoni_%28Okapi%29_-_437.jpg/220px-Okapia_johnstoni_%28Okapi%29_-_437.jpg\" width=\"220\" height=\"147\" class=\"thumbimage\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Okapia_johnstoni_%28Okapi%29_-_437.jpg/330px-Okapia_johnstoni_%28Okapi%29_-_437.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Okapia_johnstoni_%28Okapi%29_-_437.jpg/440px-Okapia_johnstoni_%28Okapi%29_-_437.jpg 2x\" data-file-width=\"2061\" data-file-height=\"1374\" /></a>\n<div class=\"thumbcaption\">\nUn okapi (<i>Okapia johnstoni</i>) au <a href=\"/wiki/ZooParc_de_Beauval\" title=\"ZooParc de Beauval\">ZooParc de Beauval</a> à <a href=\"/wiki/Saint-Aignan_(Loir-et-Cher)\" title=\"Saint-Aignan (Loir-et-Cher)\">Saint-Aignan</a>, France.</div>\n</div>\n</div>\n<p>La survie de l’okapi dépend aussi des zoos où il peut vivre et se reproduire en sécurité. Toutefois, son acclimatation à la vie en captivité a été difficile. Le premier spécimen ramené en <a href=\"/wiki/Europe\" title=\"Europe\">Europe</a> fut donné au <a href=\"/wiki/Zoo_d%27Anvers\" title=\"Zoo d'Anvers\">zoo d'Anvers</a> en 1918 mais ne survécut que <span class=\"nowrap\">50 jours</span>. Jusqu’en 1940, toutes les tentatives d’acclimatation de l’okapi en zoo furent des échecs, hormis à Anvers où un individu vécut <span class=\"nowrap\">15 ans</span> à partir de 1928. 
La première naissance en captivité eut lieu à Anvers en 1954 mais le petit ne vécut qu’une journée. D’autres naissances eurent lieu dans divers zoos mais les petits ne survivaient jamais longtemps. En 1957 eut lieu la première naissance viable, au <a href=\"/wiki/Zoo_de_Vincennes\" class=\"mw-redirect\" title=\"Zoo de Vincennes\">zoo de Vincennes</a>.</p>\n<h2><span class=\"mw-headline\" id=\"Sources\">Sources</span></h2>\n<ul>\n<li><a rel=\"nofollow\" class=\"external text\" href=\"http://www.leszoosdanslemonde.com/introduction/archives_actualites/2003/07_2003_vincennes.htm\">Les okapis et le parc zoologique de Paris</a></li>\n<li><a rel=\"nofollow\" class=\"external text\" href=\"http://www.thebigzoo.com/Animals/Okapi.asp\">Un site en anglais sur l'okapi</a></li>\n<li><a rel=\"nofollow\" class=\"external text\" href=\"http://www.dinosoria.com/okapi.htm\">L'okapi</a></li>\n<li><a rel=\"nofollow\" class=\"external autonumber\" href=\"http://www.webjunoir.net/encyclopedie/l-okapi-99.php\">[1]</a></li>\n</ul>\n<h2><span id=\"R.C3.A9f.C3.A9rences\"></span><span class=\"mw-headline\" id=\"Références\">Références</span></h2>\n<div class=\"references-small decimal\" style=\"\">\n<div class=\"mw-references-wrap\">\n<ol class=\"references\">\n<li id=\"cite_note-1\"><span class=\"noprint renvois_vers_le_texte\"><a href=\"#cite_ref-1\">↑</a></span> <span class=\"reference-text\"><span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <span class=\"ouvrage\"><a rel=\"nofollow\" class=\"external text\" href=\"http://www.iucnredlist.org/details/15188\">« <cite style=\"font-style: normal;\">l'okapi sur le site de l'UICN</cite> »</a>, sur <i>iucnredlist.org</i> <small style=\"line-height:1em;\">(consulté le <span class=\"nowrap\">3 juillet 2012</span>)</small></span>.</span></li>\n</ol>\n</div>\n</div>\n<h2><span class=\"mw-headline\" id=\"Liens_externes\">Liens externes</span></h2>\n<div class=\"autres-projets boite-grise 
boite-a-droite noprint js-interprojets\">\n<p class=\"titre\">Sur les autres projets Wikimedia :</p>\n<ul class=\"noarchive plainlinks\">\n<li class=\"commons\"><a class=\"external text\" href=\"https://commons.wikimedia.org/wiki/Category:Okapia_johnstoni?uselang=fr\">Okapi</a>, sur <span class=\"project\">Wikimedia Commons</span></li>\n<li class=\"wikispecies\"><a href=\"https://species.wikimedia.org/wiki/Okapia_johnstoni\" class=\"extiw\" title=\"wikispecies:Okapia johnstoni\">Okapi</a>, <span class=\"nowrap\">sur <span class=\"project\">Wikispecies</span></span></li>\n</ul>\n</div>\n<ul>\n<li><span class=\"ouvrage\" id=\"ADW\">Référence <a href=\"/wiki/Animal_Diversity_Web\" title=\"Animal Diversity Web\">Animal Diversity Web</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://animaldiversity.org/accounts/Okapia_johnstoni/\"><i>Okapia johnstoni</i></a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"ARKive_GES\">Référence <a rel=\"nofollow\" class=\"external text\" href=\"http://www.arkive.org/\">Fonds documentaire ARKive</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://www.arkive.org/en/Okapia-johnstoni/\"><i>Okapia johnstoni</i></a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"ITIS\">Référence <a href=\"/wiki/Syst%C3%A8me_d%27information_taxonomique_int%C3%A9gr%C3%A9\" title=\"Système d'information taxonomique intégré\">ITIS</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://www.cbif.gc.ca/acp/fra/siti/regarder?tsn=625037\"><i>Okapia johnstoni</i> (P. L. 
Sclater, 1901)</a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : français\">fr</abbr>)</span> (<span class=\"plainlinksneverexpand\"><small>+ <a rel=\"nofollow\" class=\"external text\" href=\"http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=625037\">version anglaise</a></small></span> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span>) <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"MSW\">Référence <a href=\"/wiki/Mammal_Species_of_the_World\" title=\"Mammal Species of the World\">Mammal Species of the World</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://www.departments.bucknell.edu/biology/resources/msw3/browse.asp?s=y&id=14200484\"><i>Okapia johnstoni</i> P. L. Sclater, 1901</a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"NCBI\">Référence <a href=\"/wiki/National_Center_for_Biotechnology_Information\" title=\"National Center for Biotechnology Information\">NCBI</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?lin=s&p=has_linkout&id=86973\"><i>Okapia johnstoni</i></a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"Tolweb\">Référence <a rel=\"nofollow\" class=\"external text\" href=\"http://tolweb.org/tree/phylogeny.html\"><span class=\"lang-en\" lang=\"en\">Tree of Life</span> Web Project</a> : <a rel=\"nofollow\" class=\"external text\" 
href=\"http://tolweb.org/Okapia+johnstoni\"><i>Okapia johnstoni</i></a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"UBIO\">Référence <a href=\"/wiki/Universal_Biological_Indexer_and_Organizer\" title=\"Universal Biological Indexer and Organizer\">uBio</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://www.ubio.org/browser/details.php?namebankID=106096\"><i>Okapia johnstoni</i> (P. L. Sclater, 1901)</a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2013-11-28\">28 novembre 2013</time>)</small></span></li>\n<li><span class=\"ouvrage\" id=\"UICN\">Référence <a href=\"/wiki/Union_internationale_pour_la_conservation_de_la_nature\" title=\"Union internationale pour la conservation de la nature\">UICN</a> : <a rel=\"nofollow\" class=\"external text\" href=\"http://www.iucnredlist.org/apps/redlist/details/15188\"><small>espèce</small> <i>Okapia johnstoni</i> (Sclater, 1901)</a> <span class=\"indicateur-langue\">(<abbr class=\"abbr\" title=\"Langue : anglais\">en</abbr>)</span> <small>(consulté le <time class=\"nowrap\" datetime=\"2015-05-27\">27 mai 2015</time>)</small></span></li>\n</ul>\n\n</div>","displaytitle":"Okapi","iwlinks":[{"prefix":"wikispecies","url":"https://species.wikimedia.org/wiki/Okapia_johnstoni","title":"wikispecies:Okapia johnstoni"}],"wikitext":"{{sous-titre/Taxon|ns1=Okapia johnstoni}}\n{{Voir homonymes|Okapi (homonymie)}}\n{{Sources à lier|date=juin 2015}}\n{{Taxobox début | animal | ''Okapia johnstoni'' | Okapi2.jpg | Okapi }}\n{{Taxobox | embranchement | Chordata }}\n{{Taxobox | classe | Mammalia }}\n{{Taxobox | sous-classe | Theria }}\n{{Taxobox | ordre | Artiodactyla }}\n{{Taxobox | famille | Giraffidae }}\n{{Taxobox taxon | animal | genre | Okapia | [[Edwin Ray Lankester|Lankester]], [[1901]] }}\n{{Taxobox taxon | animal | espèce | Okapia johnstoni | ([[Philip Lutley Sclater|Sclater]], [[1901]]) }}\n{{Taxobox synonymes |\n* ''Equus johnstoni'' <small>P.L. Sclater, 1901</small>}}\n{{Taxobox UICN | EN | A2abcd+4abcd }}\n{{Taxobox répartition | Okapi map.jpg }}\n{{Taxobox répartition | Okapi distribution.PNG }}\n{{Taxobox fin}}\n\nL’'''okapi''' ('''''Okapia johnstoni'''''), aussi connu sous le nom de '''Mondonga''', est une [[espèce]] de [[mammifère]]s [[ruminant]]s de la même [[Famille (biologie)|famille]] que la [[girafe]], venant des forêts équatoriales de l'[[Afrique centrale]]. Bien que connu par les [[Pygmée]]s, il est « [[Découverte scientifique|découvert]] » en [[1901]] par Sir [[Harry Johnston]] à qui il doit son nom. 
C’est l'un des derniers grands [[mammifère]]s à être observé scientifiquement sur la planète.\n\nCet animal dont l’allure rappelle à la fois celle du [[zèbre]] et de la [[girafe]] vit exclusivement dans une petite région au nord-est de la [[République démocratique du Congo]], la [[Forêt de l'Ituri|forêt tropicale de l’Ituri]], où une réserve lui est spécialement dédiée. Son nom vernaculaire en [[lingala]] est ''mondonga''.\n\n{{ref nec|Cet animal ne vit pas exclusivement en RD Congo. Il a été aussi observé dans les forêts du sud-est du Gabon, à la frontière avec le Congo-Brazzaville. Il a entre autre été vu par des chasseurs français en 1983 sur la piste reliant Boumango à Mbinda}}.\n\n== Caractéristiques physiques ==\nL’okapi mesure environ {{unité|1.80|m}} au [[Garrot (anatomie)|garrot]] et pèse au maximum {{unité|200 à 230|kg}}. Sa [[Morphologie (biologie)|morphologie]] est relativement proche de celle de la [[girafe]] : son corps est court et massif, ses pattes arrières sont plus courtes que celles de devant (ce qui lui donne l'allure d'avoir la croupe plus basse que les épaules) et sa colonne vertébrale a un axe oblique. Toutefois son cou est moins long et plus épais que celui de la girafe. Le mâle porte des [[ossicône]]s, sortes de petites cornes osseuses recouvertes de peau qui se développent entre 1 et 5 ans. Ses oreilles sont larges et particulièrement mobiles. Sa langue [[préhensile]] est noire et mesure entre {{unité|30 et 50|cm}} de long : avec elle, il peut saisir sa nourriture mais aussi nettoyer toutes les parties de son corps, y compris ses oreilles.\n\nSon pelage court est d’un brun chocolat sur le corps avec des zébrures noires et blanches sur les pattes et l’arrière-train. La tête est marquée d’une tache blanche au niveau de la joue.\n\n== Histoire ==\nLes [[pygmée]]s de l’actuelle [[République démocratique du Congo]] connaissaient depuis longtemps l’okapi qu’ils prenaient parfois au piège dans des trous camouflés. Ils l’appelaient ''o’api''. 
En 1890, le journaliste [[Henry Morton Stanley]] (1841-1904) venu à la rencontre des pygmées rapporte l’existence d’une sorte d’âne-zèbre broutant des feuilles. Sir [[Harry Hamilton Johnston]] (1858-1927), futur gouverneur de l’[[Ouganda]], curieux de cet animal étrange, partit en 1899 à sa recherche et le baptisa ''Equus johnstoni'', pensant qu’il s’agissait d’une nouvelle espèce de [[zèbre]] (du genre ''Equus''). En 1901, il réussit à se procurer la peau entière d’un okapi ainsi que deux crânes. Leur étude révéla qu’il ne s’agissait pas d’un [[zèbre]] mais d’une espèce d'un nouveau genre et on changea son nom en ''Okapia johnstoni''.\n\n== Alimentation ==\nL’okapi se nourrit de [[feuille]]s, de divers végétaux différents (dont l’[[euphorbe]], particulièrement toxique pour l’homme), de bourgeons, de branches tendres, de fruits, de champignons et de fougères. Il cueille sa nourriture à l’aide de sa langue et de ses lèvres préhensiles. Il comble ses besoins en minéraux en mangeant de l’argile sulfureuse qu’il trouve près des rivières ou des [[graminée]]s poussant sur des sols hautement minéralisés.\n\n== Habitat ==\nL’okapi est un animal discret et solitaire qui ne fréquente ses pairs qu’au moment de la reproduction. On compte généralement deux individus au km². Sédentaire, il vit sur un territoire qu’il marque par des dépôts d’urine et des sécrétions issues de glandes situées entre ses doigts. Il emprunte toujours les mêmes pistes de passage qu’il a ainsi marquées. C’est un animal essentiellement nocturne dont le principal prédateur est le [[Léopard (félin)|léopard]]. Ses oreilles très grandes lui permettent d'entendre le moindre bruit en cas d'attaque.\n\n== Reproduction ==\nLa saison des amours a lieu de mai à juillet. La femelle, qui a déjà signalé sa piste par ses sécrétions odoriférantes, guide le mâle à travers la forêt dense en émettant des appels ressemblant à des toussotements. Il peut y avoir des affrontements entre les mâles convoitant une même femelle. 
Les deux membres du couple se rejoignent finalement dans une courte parade nuptiale faite de fuites et d’esquives puis s’accouplent. Après une [[gestation]] de {{unité|15|mois}} environ, elle donne naissance à un petit d’environ {{unité|75|cm}} au garrot et pesant environ {{unité|20|kg}}. Celui-ci suit sa mère pendant quelques jours jusqu’à trouver un fourré où se cacher. Il y reste la plupart de son temps jusqu’à atteindre l’âge de deux mois, à partir duquel il suit sa mère dans ses déplacements. Le [[Sevrage (alimentation)|sevrage]] a lieu entre {{unité|6 et 10|mois}}.\n\n== Une espèce menacée ==\nL’okapi figure sur la liste rouge des espèces menacées de l’[[UICN]]. En effet, son habitat est de plus en plus restreint. Même à l’intérieur de la réserve, l’okapi est victime du [[braconnage]], surtout dans le parc national de Virunga. Leur population est estimée de {{unité|10000 à 35000|individus}} et la tendance est à la baisse. Cet animal est protégé depuis 1933<ref>{{en}} {{lien web|url=http://www.iucnredlist.org/details/15188|titre=l'okapi sur le site de l'UICN|site=iucnredlist.org|citation=|en ligne le=|consulté le=3 juillet 2012}}.</ref>. L'espèce est en danger depuis décembre 2013.\n\n== La vie en captivité ==\n[[Fichier:Okapia johnstoni (Okapi) - 437.jpg|vignette|Un okapia johnstoni (Okapi) au [[ZooParc de Beauval]] à [[Saint-Aignan (Loir-et-Cher)|Saint-Aignan]], France.]]\n\nLa survie de l’okapi dépend aussi des zoos où il peut vivre et se reproduire en sécurité. Toutefois, son acclimatation à la vie en captivité a été difficile. Le premier spécimen ramené en [[Europe]] fut donné au [[zoo d'Anvers]] en 1918 mais ne survécut que {{unité|50|jours}}. Jusqu’en 1940, toutes les tentatives d’acclimatation de l’okapi en zoo furent des échecs hormis à Anvers où un individu vécut {{unité|15|ans}} à partir de 1928. La première naissance en captivité eut lieu à Anvers en 1954 mais le petit ne vécut qu’une journée. 
D’autres naissances eurent lieu dans divers zoos mais les petits ne survivaient jamais longtemps. En 1957 eut lieu la première naissance viable, au [[zoo de Vincennes]].\n\n== Sources ==\n* [http://www.leszoosdanslemonde.com/introduction/archives_actualites/2003/07_2003_vincennes.htm Les okapis et le parc zoologique de Paris]\n* [http://www.thebigzoo.com/Animals/Okapi.asp Un site en anglais sur l'okapi]\n* [http://www.dinosoria.com/okapi.htm L'okapi]\n* [http://www.webjunoir.net/encyclopedie/l-okapi-99.php]\n\n== Références ==\n{{Références}}\n\n== Liens externes ==\n{{Autres projets\n|commons=Category:Okapia johnstoni\n|wikispecies=Okapia johnstoni\n}}\n* {{ADW|Okapia_johnstoni|''Okapia johnstoni''|consulté le=28 Nov 2013}}\n* {{ARKive GES|mammals|Okapia|johnstoni|consulté le=28 Nov 2013}}\n* {{ITIS|625037|''Okapia johnstoni'' (P. L. Sclater, 1901)|consulté le=28 Nov 2013}}\n* {{MSW|14200484|Okapia johnstoni|P. L. Sclater, 1901|consulté le=28 Nov 2013}}\n* {{NCBI|86973|''Okapia johnstoni''|consulté le=28 Nov 2013}}\n* {{Tolweb|Okapia johnstoni|consulté le=28 Nov 2013}}\n* {{uBIO|106096|''Okapia johnstoni'' (P. L. 
Sclater, 1901)|consulté le=28 Nov 2013}}\n* {{UICN|15188|''Okapia johnstoni'' (Sclater, 1901)|consulté le=27 mai 2015}}\n\n{{Palette Cryptozoologie}}\n\n{{Portail|Mammifères}}\n\n[[Catégorie:Mammifère (nom vernaculaire)]]\n[[Catégorie:Giraffidae]]\n[[Catégorie:Faune d'Afrique centrale]]","properties":{"wikibase_item":"Q82037"},"parsetree":"<root><template><title>sous-titre/Taxon</title><part><name>ns1</name><equals>=</equals><value>Okapia johnstoni</value></part></template>\n<template lineStart=\"1\"><title>Voir homonymes</title><part><name index=\"1\"/><value>Okapi (homonymie)</value></part></template>\n<template lineStart=\"1\"><title>Sources à lier</title><part><name>date</name><equals>=</equals><value>juin 2015</value></part></template>\n<template lineStart=\"1\"><title>Taxobox début </title><part><name index=\"1\"/><value> animal </value></part><part><name index=\"2\"/><value> ''Okapia johnstoni'' </value></part><part><name index=\"3\"/><value> Okapi2.jpg </value></part><part><name index=\"4\"/><value> Okapi </value></part></template>\n<template lineStart=\"1\"><title>Taxobox </title><part><name index=\"1\"/><value> embranchement </value></part><part><name index=\"2\"/><value> Chordata </value></part></template>\n<template lineStart=\"1\"><title>Taxobox </title><part><name index=\"1\"/><value> classe </value></part><part><name index=\"2\"/><value> Mammalia </value></part></template>\n<template lineStart=\"1\"><title>Taxobox </title><part><name index=\"1\"/><value> sous-classe </value></part><part><name index=\"2\"/><value> Theria </value></part></template>\n<template lineStart=\"1\"><title>Taxobox </title><part><name index=\"1\"/><value> ordre </value></part><part><name index=\"2\"/><value> Artiodactyla </value></part></template>\n<template lineStart=\"1\"><title>Taxobox </title><part><name index=\"1\"/><value> famille </value></part><part><name index=\"2\"/><value> Giraffidae </value></part></template>\n<template lineStart=\"1\"><title>Taxobox taxon 
</title><part><name index=\"1\"/><value> animal </value></part><part><name index=\"2\"/><value> genre </value></part><part><name index=\"3\"/><value> Okapia </value></part><part><name index=\"4\"/><value> [[Edwin Ray Lankester|Lankester]], [[1901]] </value></part></template>\n<template lineStart=\"1\"><title>Taxobox taxon </title><part><name index=\"1\"/><value> animal </value></part><part><name index=\"2\"/><value> espèce </value></part><part><name index=\"3\"/><value> Okapia johnstoni </value></part><part><name index=\"4\"/><value> ([[Philip Lutley Sclater|Sclater]], [[1901]]) </value></part></template>\n<template lineStart=\"1\"><title>Taxobox synonymes </title><part><name index=\"1\"/><value>\n* ''Equus johnstoni'' <small>P.L. Sclater, 1901</small></value></part></template>\n<template lineStart=\"1\"><title>Taxobox UICN </title><part><name index=\"1\"/><value> EN </value></part><part><name index=\"2\"/><value> A2abcd+4abcd </value></part></template>\n<template lineStart=\"1\"><title>Taxobox répartition </title><part><name index=\"1\"/><value> Okapi map.jpg </value></part></template>\n<template lineStart=\"1\"><title>Taxobox répartition </title><part><name index=\"1\"/><value> Okapi distribution.PNG </value></part></template>\n<template lineStart=\"1\"><title>Taxobox fin</title></template>\n\nL’'''okapi''' ('''''Okapia johnstoni'''''), aussi connu sous le nom de '''Mondonga''', est une [[espèce]] de [[mammifère]]s [[ruminant]]s de la même [[Famille (biologie)|famille]] que la [[girafe]], venant des forêts équatoriales de l'[[Afrique centrale]]. Bien que connu par les [[Pygmée]]s, il est « [[Découverte scientifique|découvert]] » en [[1901]] par Sir [[Harry Johnston]] à qui il doit son nom. 
C’est l'un des derniers grands [[mammifère]]s à être observé scientifiquement sur la planète.\n\nCet animal dont l’allure rappelle à la fois celle du [[zèbre]] et de la [[girafe]] vit exclusivement dans une petite région au nord-est de la [[République démocratique du Congo]], la [[Forêt de l'Ituri|forêt tropicale de l’Ituri]], où une réserve lui est spécialement dédiée. Son nom vernaculaire en [[lingala]] est ''mondonga''.\n\n<template lineStart=\"1\"><title>ref nec</title><part><name index=\"1\"/><value>Cet animal ne vit pas exclusivement en RD Congo. Il a été aussi observé dans les forêts du sud-est du Gabon, à la frontière avec le Congo-Brazzaville. Il a entre autre été vu par des chasseurs français en 1983 sur la piste reliant Boumango à Mbinda</value></part></template>.\n\n<h level=\"2\" i=\"1\">== Caractéristiques physiques ==</h>\nL’okapi mesure environ <template><title>unité</title><part><name index=\"1\"/><value>1.80</value></part><part><name index=\"2\"/><value>m</value></part></template> au [[Garrot (anatomie)|garrot]] et pèse au maximum <template><title>unité</title><part><name index=\"1\"/><value>200 à 230</value></part><part><name index=\"2\"/><value>kg</value></part></template>. Sa [[Morphologie (biologie)|morphologie]] est relativement proche de celle de la [[girafe]] : son corps est court et massif, ses pattes arrières sont plus courtes que celles de devant (ce qui lui donne l'allure d'avoir la croupe plus basse que les épaules) et sa colonne vertébrale a un axe oblique. Toutefois son cou est moins long et plus épais que celui de la girafe. Le mâle porte des [[ossicône]]s, sortes de petites cornes osseuses recouvertes de peau qui se développent entre 1 et 5 ans. Ses oreilles sont larges et particulièrement mobiles. 
Sa langue [[préhensile]] est noire et mesure entre <template><title>unité</title><part><name index=\"1\"/><value>30 et 50</value></part><part><name index=\"2\"/><value>cm</value></part></template> de long : avec elle, il peut saisir sa nourriture mais aussi nettoyer toutes les parties de son corps, y compris ses oreilles.\n\nSon pelage court est d’un brun chocolat sur le corps avec des zébrures noires et blanches sur les pattes et l’arrière-train. La tête est marquée d’une tache blanche au niveau de la joue.\n\n<h level=\"2\" i=\"2\">== Histoire ==</h>\nLes [[pygmée]]s de l’actuelle [[République démocratique du Congo]] connaissaient depuis longtemps l’okapi qu’ils prenaient parfois au piège dans des trous camouflés. Ils l’appelaient ''o’api''. En 1890, le journaliste [[Henry Morton Stanley]] (1841-1904) venu à la rencontre des pygmées rapporte l’existence d’une sorte d’âne-zèbre broutant des feuilles. Sir [[Harry Hamilton Johnston]] (1858-1927), futur gouverneur de l’[[Ouganda]], curieux de cet animal étrange, partit en 1899 à sa recherche et le baptisa ''Equus johnstoni'', pensant qu’il s’agissait d’une nouvelle espèce de [[zèbre]] (du genre ''Equus''). En 1901, il réussit à se procurer la peau entière d’un okapi ainsi que deux crânes. Leur étude révéla qu’il ne s’agissait pas d’un [[zèbre]] mais d’une espèce d'un nouveau genre et on changea son nom en ''Okapia johnstoni''.\n\n<h level=\"2\" i=\"3\">== Alimentation ==</h>\nL’okapi se nourrit de [[feuille]]s, de divers végétaux différents (dont l’[[euphorbe]], particulièrement toxique pour l’homme), de bourgeons, de branches tendres, de fruits, de champignons et de fougères. Il cueille sa nourriture à l’aide de sa langue et de ses lèvres préhensiles. 
Il comble ses besoins en minéraux en mangeant de l’argile sulfureuse qu’il trouve près des rivières ou des [[graminée]]s poussant sur des sols hautement minéralisés.\n\n<h level=\"2\" i=\"4\">== Habitat ==</h>\nL’okapi est un animal discret et solitaire qui ne fréquente ses pairs qu’au moment de la reproduction. On compte généralement deux individus au km². Sédentaire, il vit sur un territoire qu’il marque par des dépôts d’urine et des sécrétions issues de glandes situées entre ses doigts. Il emprunte toujours les mêmes pistes de passage qu’il a ainsi marquées. C’est un animal essentiellement nocturne dont le principal prédateur est le [[Léopard (félin)|léopard]]. Ses oreilles très grandes lui permettent d'entendre le moindre bruit en cas d'attaque.\n\n<h level=\"2\" i=\"5\">== Reproduction ==</h>\nLa saison des amours a lieu de mai à juillet. La femelle, qui a déjà signalé sa piste par ses sécrétions odoriférantes, guide le mâle à travers la forêt dense en émettant des appels ressemblant à des toussotements. Il peut y avoir des affrontements entre les mâles convoitant une même femelle. Les deux membres du couple se rejoignent finalement dans une courte parade nuptiale faite de fuites et d’esquives puis s’accouplent. Après une [[gestation]] de <template><title>unité</title><part><name index=\"1\"/><value>15</value></part><part><name index=\"2\"/><value>mois</value></part></template> environ, elle donne naissance à un petit d’environ <template><title>unité</title><part><name index=\"1\"/><value>75</value></part><part><name index=\"2\"/><value>cm</value></part></template> au garrot et pesant environ <template><title>unité</title><part><name index=\"1\"/><value>20</value></part><part><name index=\"2\"/><value>kg</value></part></template>. Celui-ci suit sa mère pendant quelques jours jusqu’à trouver un fourré où se cacher. Il y reste la plupart de son temps jusqu’à atteindre l’âge de deux mois, à partir duquel il suit sa mère dans ses déplacements. 
Le [[Sevrage (alimentation)|sevrage]] a lieu entre <template><title>unité</title><part><name index=\"1\"/><value>6 et 10</value></part><part><name index=\"2\"/><value>mois</value></part></template>.\n\n<h level=\"2\" i=\"6\">== Une espèce menacée ==</h>\nL’okapi figure sur la liste rouge des espèces menacées de l’[[UICN]]. En effet, son habitat est de plus en plus restreint. Même à l’intérieur de la réserve, l’okapi est victime du [[braconnage]], surtout dans le parc national de Virunga. Leur population est estimée de <template><title>unité</title><part><name index=\"1\"/><value>10000 à 35000</value></part><part><name index=\"2\"/><value>individus</value></part></template> et la tendance est à la baisse. Cet animal est protégé depuis 1933<ext><name>ref</name><attr/><inner>{{en}} {{lien web|url=http://www.iucnredlist.org/details/15188|titre=l'okapi sur le site de l'UICN|site=iucnredlist.org|citation=|en ligne le=|consulté le=3 juillet 2012}}.</inner><close></ref></close></ext>. L'espèce est en danger depuis décembre 2013.\n\n<h level=\"2\" i=\"7\">== La vie en captivité ==</h>\n[[Fichier:Okapia johnstoni (Okapi) - 437.jpg|vignette|Un okapia johnstoni (Okapi) au [[ZooParc de Beauval]] à [[Saint-Aignan (Loir-et-Cher)|Saint-Aignan]], France.]]\n\nLa survie de l’okapi dépend aussi des zoos où il peut vivre et se reproduire en sécurité. Toutefois, son acclimatation à la vie en captivité a été difficile. Le premier spécimen ramené en [[Europe]] fut donné au [[zoo d'Anvers]] en 1918 mais ne survécut que <template><title>unité</title><part><name index=\"1\"/><value>50</value></part><part><name index=\"2\"/><value>jours</value></part></template>. Jusqu’en 1940, toutes les tentatives d’acclimatation de l’okapi en zoo furent des échecs hormis à Anvers où un individu vécut <template><title>unité</title><part><name index=\"1\"/><value>15</value></part><part><name index=\"2\"/><value>ans</value></part></template> à partir de 1928. 
La première naissance en captivité eut lieu à Anvers en 1954 mais le petit ne vécut qu’une journée. D’autres naissances eurent lieu dans divers zoos mais les petits ne survivaient jamais longtemps. En 1957 eut lieu la première naissance viable, au [[zoo de Vincennes]].\n\n<h level=\"2\" i=\"8\">== Sources ==</h>\n* [http://www.leszoosdanslemonde.com/introduction/archives_actualites/2003/07_2003_vincennes.htm Les okapis et le parc zoologique de Paris]\n* [http://www.thebigzoo.com/Animals/Okapi.asp Un site en anglais sur l'okapi]\n* [http://www.dinosoria.com/okapi.htm L'okapi]\n* [http://www.webjunoir.net/encyclopedie/l-okapi-99.php]\n\n<h level=\"2\" i=\"9\">== Références ==</h>\n<template lineStart=\"1\"><title>Références</title></template>\n\n<h level=\"2\" i=\"10\">== Liens externes ==</h>\n<template lineStart=\"1\"><title>Autres projets\n</title><part><name>commons</name><equals>=</equals><value>Category:Okapia johnstoni\n</value></part><part><name>wikispecies</name><equals>=</equals><value>Okapia johnstoni\n</value></part></template>\n* <template><title>ADW</title><part><name index=\"1\"/><value>Okapia_johnstoni</value></part><part><name index=\"2\"/><value>''Okapia johnstoni''</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>ARKive GES</title><part><name index=\"1\"/><value>mammals</value></part><part><name index=\"2\"/><value>Okapia</value></part><part><name index=\"3\"/><value>johnstoni</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>ITIS</title><part><name index=\"1\"/><value>625037</value></part><part><name index=\"2\"/><value>''Okapia johnstoni'' (P. L. 
Sclater, 1901)</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>MSW</title><part><name index=\"1\"/><value>14200484</value></part><part><name index=\"2\"/><value>Okapia johnstoni</value></part><part><name index=\"3\"/><value>P. L. Sclater, 1901</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>NCBI</title><part><name index=\"1\"/><value>86973</value></part><part><name index=\"2\"/><value>''Okapia johnstoni''</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>Tolweb</title><part><name index=\"1\"/><value>Okapia johnstoni</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>uBIO</title><part><name index=\"1\"/><value>106096</value></part><part><name index=\"2\"/><value>''Okapia johnstoni'' (P. L. Sclater, 1901)</value></part><part><name>consulté le</name><equals>=</equals><value>28 Nov 2013</value></part></template>\n* <template><title>UICN</title><part><name index=\"1\"/><value>15188</value></part><part><name index=\"2\"/><value>''Okapia johnstoni'' (Sclater, 1901)</value></part><part><name>consulté le</name><equals>=</equals><value>27 mai 2015</value></part></template>\n\n<template lineStart=\"1\"><title>Palette Cryptozoologie</title></template>\n\n<template lineStart=\"1\"><title>Portail</title><part><name index=\"1\"/><value>Mammifères</value></part></template>\n\n[[Catégorie:Mammifère (nom vernaculaire)]]\n[[Catégorie:Giraffidae]]\n[[Catégorie:Faune d'Afrique centrale]]</root>"}}"""
cache = {'info': {'status': 200}, 'query': query, 'response': response}
| 7,988.875 | 63,586 | 0.712131 | 10,256 | 63,911 | 4.413612 | 0.118467 | 0.018557 | 0.032806 | 0.023682 | 0.709407 | 0.680334 | 0.647174 | 0.62442 | 0.600075 | 0.582269 | 0 | 0.031765 | 0.076919 | 63,911 | 7 | 63,587 | 9,130.142857 | 0.734181 | 0.000313 | 0 | 0 | 0 | 0.666667 | 0.998701 | 0.457246 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.333333 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
137894978174d763d5e737edbe6ef4d80d545194 | 10,200 | py | Python | src/prediction/node_vmstat_RAPL_variables.py | sanja7s/EEDC | 6a9aabf61bc857ad9b54d07b256610e766a0d88d | [
"Apache-2.0"
] | null | null | null | src/prediction/node_vmstat_RAPL_variables.py | sanja7s/EEDC | 6a9aabf61bc857ad9b54d07b256610e766a0d88d | [
"Apache-2.0"
] | null | null | null | src/prediction/node_vmstat_RAPL_variables.py | sanja7s/EEDC | 6a9aabf61bc857ad9b54d07b256610e766a0d88d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
author: sanja7s
---------------
plot the distribution
"""
import os
import datetime as dt
from numpy import random
IN_DIR = "../../data/prediction/prediction/weka/RAPL"
os.chdir(IN_DIR)
def node_type(node):
"""
Node numbering scheme is as follows:
[c1-c309] [c321-c478] old compute nodes (Sandy Bridge)
[c579-c628],[c639-c985] new compute nodes (Haswell)
Special nodes:
c309-c320 old big memory nodes (Sandy Bridge)
c629-c638 new big memory nodes (Haswell)
c577,c578 old huge memory nodes (HP Proliant DL560)
c986-c989 new huge memory nodes (Dell R930)
"""
if node.strip() in ['c'+str(x) for x in range(1, 310)]:
return 'SandyBridge'
if node.strip() in ['c'+str(x) for x in range(321, 479)]:
return 'SandyBridge'
if node.strip() in ['c'+str(x) for x in range(579, 629)]:
return 'Haswell'
if node.strip() in ['c'+str(x) for x in range(639, 986)]:
return 'Haswell'
if node.strip() in ['c'+str(x) for x in range(309, 321)]:
return 'SandyBridgeBig'
if node.strip() in ['c'+str(x) for x in range(629, 639)]:
return 'HaswellBig'
if node.strip() in ['c'+str(x) for x in range(577, 579)]:
return 'OldHuge'
if node.strip() in ['c'+str(x) for x in range(986, 990)]:
return 'NewHuge'
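For reference, the same classification can be expressed with numeric range checks instead of repeated list-membership tests. This is a sketch of my own restructuring, not the original author's code (`node_type_fast` is a hypothetical name); the range order is preserved deliberately, because c309 appears in two documented ranges and the first match wins:

```python
def node_type_fast(node):
    # Same first-match-wins order as node_type above.
    num = int(node.strip().lstrip('c'))
    ranges = [(1, 309, 'SandyBridge'), (321, 478, 'SandyBridge'),
              (579, 628, 'Haswell'), (639, 985, 'Haswell'),
              (309, 320, 'SandyBridgeBig'), (629, 638, 'HaswellBig'),
              (577, 578, 'OldHuge'), (986, 989, 'NewHuge')]
    for lo, hi, kind in ranges:
        if lo <= num <= hi:
            return kind
    return None  # node number outside every documented range
```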
def test_RAPL_negatives(node = 'c819'):
f_in = 'node_' + node + '_vmstat_RAPL_timestamp.csv'
f_out = 'cleaned_node_' + node + '_vmstat_RAPL_timestamp.csv'
i = 0
j = 0
with open(f_in, 'r') as f:
with open(f_out, 'w') as fo:
for line in f:
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
i += 1
if float(c1) >= 0 and float(c2) >= 0 and float(d1) >= 0 and float(d2) >= 0 :
fo.write("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19},{20}"\
.format(ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
else:
j += 1
print('Negative instances', j, 'out of total', i)
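The cleaning rule applied above can be stated on its own: a row survives only if all four RAPL readings (cpu1, cpu2, dram1, dram2) are non-negative. A minimal sketch, with column positions taken from the unpacking above (`is_clean` is a name introduced here, not from the original):

```python
def is_clean(fields):
    # fields: the 21 comma-separated values of one vmstat+RAPL row;
    # positions 16..19 hold cpu1, cpu2, dram1, dram2.
    return all(float(v) >= 0 for v in fields[16:20])
```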
def read_in_data_per_node_and_save_timeseries_arff(node='c819'):
f_in = 'cleaned_node_' + node + '_vmstat_RAPL_timestamp.csv'
f_out = 'node_' + node + '_vmstat_RAPL_timestamps.arff'
arff_header = \
"@RELATION " + node + "_traintest_RAPL" + '\n' + '\n' + \
"@ATTRIBUTE Timestamp DATE \"yyyy-MM-dd HH:mm:ss\" " + '\n' + \
"@ATTRIBUTE r NUMERIC" + '\n' + \
"@ATTRIBUTE b NUMERIC" + '\n' + \
"@ATTRIBUTE swpd NUMERIC" + '\n' + \
"@ATTRIBUTE free NUMERIC" + '\n' + \
"@ATTRIBUTE cache NUMERIC" + '\n' + \
"@ATTRIBUTE si NUMERIC" + '\n' + \
"@ATTRIBUTE so NUMERIC" + '\n' + \
"@ATTRIBUTE bi NUMERIC" + '\n' + \
"@ATTRIBUTE bo NUMERIC" + '\n' + \
"@ATTRIBUTE in1 NUMERIC" + '\n' + \
"@ATTRIBUTE cs NUMERIC" + '\n' + \
"@ATTRIBUTE us NUMERIC" + '\n' + \
"@ATTRIBUTE sy NUMERIC" + '\n' + \
"@ATTRIBUTE id NUMERIC" + '\n' + \
"@ATTRIBUTE wa NUMERIC" + '\n' + \
"@ATTRIBUTE cpu1 REAL" + '\n' + \
"@ATTRIBUTE cpu2 REAL" + '\n' + \
"@ATTRIBUTE dram1 REAL" + '\n' + \
"@ATTRIBUTE dram2 REAL" + '\n' + \
"@ATTRIBUTE plug NUMERIC" + '\n' + '\n' + \
"@DATA" + '\n'
with open(f_in, 'r') as f:
with open(f_out, 'w') as fo:
fo.write(arff_header)
for line in f:
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
t = dt.datetime.fromtimestamp(int(ts))
fo.write("\"{0}\",{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19},{20}\n"\
.format(t, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
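The hand-built ARFF header string is nearly identical in all of these writers; a small helper of my own devising shows the shape it produces (attribute names and types copied from the code above, the helper itself is an assumption, not the author's API):

```python
def arff_header(relation, attributes):
    # attributes: list of (name, type) pairs, e.g. ('cpu1', 'REAL').
    lines = ['@RELATION ' + relation, '']
    lines += ['@ATTRIBUTE {0} {1}'.format(name, typ) for name, typ in attributes]
    lines += ['', '@DATA', '']
    return '\n'.join(lines)
```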
def read_in_data_per_node_and_save_3_hours_timeseries_arff(node='c585'):
f_in = 'cleaned_node_' + node + '_vmstat_RAPL_timestamp.csv'
f_out = '3hr_node_' + node + '_vmstat_RAPL_timestamps.arff'
arff_header = \
"@RELATION " + node + "_traintest_RAPL" + '\n' + '\n' + \
"@ATTRIBUTE Timestamp DATE \"yyyy-MM-dd HH:mm:ss\" " + '\n' + \
"@ATTRIBUTE r NUMERIC" + '\n' + \
"@ATTRIBUTE b NUMERIC" + '\n' + \
"@ATTRIBUTE swpd NUMERIC" + '\n' + \
"@ATTRIBUTE free NUMERIC" + '\n' + \
"@ATTRIBUTE cache NUMERIC" + '\n' + \
"@ATTRIBUTE si NUMERIC" + '\n' + \
"@ATTRIBUTE so NUMERIC" + '\n' + \
"@ATTRIBUTE bi NUMERIC" + '\n' + \
"@ATTRIBUTE bo NUMERIC" + '\n' + \
"@ATTRIBUTE in1 NUMERIC" + '\n' + \
"@ATTRIBUTE cs NUMERIC" + '\n' + \
"@ATTRIBUTE us NUMERIC" + '\n' + \
"@ATTRIBUTE sy NUMERIC" + '\n' + \
"@ATTRIBUTE id NUMERIC" + '\n' + \
"@ATTRIBUTE wa NUMERIC" + '\n' + \
"@ATTRIBUTE cpu1 REAL" + '\n' + \
"@ATTRIBUTE cpu2 REAL" + '\n' + \
"@ATTRIBUTE dram1 REAL" + '\n' + \
"@ATTRIBUTE dram2 REAL" + '\n' + \
"@ATTRIBUTE plug NUMERIC" + '\n' + '\n' + \
"@DATA" + '\n'
with open(f_in, 'r') as f:
os.chdir('../../../../prediction/Haswell10/' + node)
with open(f_out, 'w') as fo:
fo.write(arff_header)
for line in f:
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
t = dt.datetime.fromtimestamp(int(ts))
if t > dt.datetime.strptime('2016-06-27 21:00', '%Y-%m-%d %H:%M')\
and t < dt.datetime.strptime('2016-06-27 23:59', '%Y-%m-%d %H:%M'):
if int(plug) == 0:
plug = '100'
print('fixed', t)
fo.write("\"{0}\",{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19},{20}\n"\
.format(t, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
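The time-window filter in this function converts the epoch timestamp to local wall-clock time and keeps only samples between the two bounds. A self-contained sketch with the same bounds as above (the `in_window` helper name is mine; note that `fromtimestamp` is timezone-dependent, as in the original):

```python
import datetime as dt

def in_window(ts, start='2016-06-27 21:00', end='2016-06-27 23:59'):
    fmt = '%Y-%m-%d %H:%M'
    t = dt.datetime.fromtimestamp(int(ts))  # epoch seconds -> local time
    return dt.datetime.strptime(start, fmt) < t < dt.datetime.strptime(end, fmt)
```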
def read_in_data_per_node_and_save_train_test(node='c77'):
def the_header(T):
arff_header = \
"@RELATION " + node + T + '\n' + '\n' + \
"@ATTRIBUTE r NUMERIC" + '\n' + \
"@ATTRIBUTE b NUMERIC" + '\n' + \
"@ATTRIBUTE swpd NUMERIC" + '\n' + \
"@ATTRIBUTE free NUMERIC" + '\n' + \
"@ATTRIBUTE cache NUMERIC" + '\n' + \
"@ATTRIBUTE si NUMERIC" + '\n' + \
"@ATTRIBUTE so NUMERIC" + '\n' + \
"@ATTRIBUTE bi NUMERIC" + '\n' + \
"@ATTRIBUTE bo NUMERIC" + '\n' + \
"@ATTRIBUTE in1 NUMERIC" + '\n' + \
"@ATTRIBUTE cs NUMERIC" + '\n' + \
"@ATTRIBUTE us NUMERIC" + '\n' + \
"@ATTRIBUTE sy NUMERIC" + '\n' + \
"@ATTRIBUTE id NUMERIC" + '\n' + \
"@ATTRIBUTE wa NUMERIC" + '\n' + \
"@ATTRIBUTE cpu1 REAL" + '\n' + \
"@ATTRIBUTE cpu2 REAL" + '\n' + \
"@ATTRIBUTE dram1 REAL" + '\n' + \
"@ATTRIBUTE dram2 REAL" + '\n' + \
"@ATTRIBUTE plug NUMERIC" + '\n' + '\n' + \
"@DATA" + '\n'
return arff_header
f_in = 'cleaned_node_' + node + '_vmstat_RAPL_timestamp.csv'
f_out_1 = 'node_' + node + '_vmstat_RAPL_train.arff'
f_out_2 = 'node_' + node + '_vmstat_RAPL_test.arff'
f = open(f_in, 'r')
TOT = len(f.read().splitlines())
f.close()
with open(f_in, 'r') as f:
print('Total lines in the file', TOT)
i = 0
with open(f_out_1, 'w') as fo1:
fo1.write(the_header('_train_RAPL'))
for line in f:
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
i += 1
fo1.write("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19}"\
.format(r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
if i == int(2 * TOT / 3):
break
with open(f_out_2, 'w') as fo2:
fo2.write(the_header('_test_RAPL'))
for line in f:
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
i += 1
fo2.write("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19}"\
.format(r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
print('Written total lines', i)
def read_in_data_per_node_and_save_SHUFFLE_train_test(node='c836'):
def the_header(T):
arff_header = \
"@RELATION " + node + T + '\n' + '\n' + \
"@ATTRIBUTE r NUMERIC" + '\n' + \
"@ATTRIBUTE b NUMERIC" + '\n' + \
"@ATTRIBUTE swpd NUMERIC" + '\n' + \
"@ATTRIBUTE free NUMERIC" + '\n' + \
"@ATTRIBUTE cache NUMERIC" + '\n' + \
"@ATTRIBUTE si NUMERIC" + '\n' + \
"@ATTRIBUTE so NUMERIC" + '\n' + \
"@ATTRIBUTE bi NUMERIC" + '\n' + \
"@ATTRIBUTE bo NUMERIC" + '\n' + \
"@ATTRIBUTE in1 NUMERIC" + '\n' + \
"@ATTRIBUTE cs NUMERIC" + '\n' + \
"@ATTRIBUTE us NUMERIC" + '\n' + \
"@ATTRIBUTE sy NUMERIC" + '\n' + \
"@ATTRIBUTE id NUMERIC" + '\n' + \
"@ATTRIBUTE wa NUMERIC" + '\n' + \
"@ATTRIBUTE plug NUMERIC" + '\n' + '\n' + \
"@DATA" + '\n'
return arff_header
f_in = 'cleaned_node_' + node + '_vmstat_RAPL_timestamp.csv'
f_out_1 = 'node_' + node + '_vmstat_SHUFFLE_train.arff'
f_out_2 = 'node_' + node + '_vmstat_SHUFFLE_test.arff'
f = open(f_in, 'r')
F = f.read().splitlines()
TOT = len(F)
random.shuffle(F)
print('Total lines in the file', TOT)
n1 = int(9 * TOT / 10)
print('Training set size', n1)
with open(f_out_1, 'w') as fo1:
fo1.write(the_header('_SHUFFLE_train_RAPL'))
for i in range(n1):
line = F[i]
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
fo1.write("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19}"\
.format(r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
print('Test set size', TOT - n1)
with open(f_out_2, 'w') as fo2:
fo2.write(the_header('_SHUFFLE_test_RAPL'))
for i in range(n1,TOT):
line = F[i]
ts, r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug \
= line.split(',')
fo2.write("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15},{16},{17},{18},{19}"\
.format(r, b, swpd, free, cache, si, so, bi, bo, in1, cs, us, sy, id7, wa, c1, c2, d1, d2, plug))
print('Written total lines', TOT)
#read_in_data_per_node_and_save_SHUFFLE_train_test()
#read_in_data_per_node_and_save_train_test()
#test_RAPL_negatives()
#read_in_data_per_node_and_save_timeseries_arff()
#read_in_data_per_node_and_save_3_hours_timeseries_arff()
node='c615'
test_RAPL_negatives(node=node)
read_in_data_per_node_and_save_train_test(node=node)
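# The shuffled 90/10 split used by read_in_data_per_node_and_save_SHUFFLE_train_test
# can be factored into a small standalone helper. This is an illustrative sketch only:
# the shuffle_split name and the seed parameter are not part of the original script.

```python
import random


def shuffle_split(rows, train_fraction=0.9, seed=None):
    """Shuffle the rows, then split them into train/test partitions.

    Mirrors the script above: shuffle first, then take the first
    train_fraction of the rows as the training set.
    """
    shuffled = list(rows)
    random.Random(seed).shuffle(shuffled)
    n_train = int(train_fraction * len(shuffled))
    return shuffled[:n_train], shuffled[n_train:]


# Example: 100 CSV rows -> 90 training rows, 10 test rows
train, test = shuffle_split(range(100), train_fraction=0.9, seed=42)
```

# Seeding the shuffle makes the split reproducible across runs, which the
# module-level random.shuffle() used above does not guarantee.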
# activity/controllers.py
# Brought to you by We Vote. Be good.
# -*- coding: UTF-8 -*-
from .models import ActivityComment, ActivityNoticeSeed, ActivityManager, ActivityNotice, ActivityPost, \
    NOTICE_ACTIVITY_POST_SEED, \
    NOTICE_CAMPAIGNX_FRIEND_HAS_SUPPORTED, \
    NOTICE_CAMPAIGNX_NEWS_ITEM, NOTICE_CAMPAIGNX_NEWS_ITEM_AUTHORED, NOTICE_CAMPAIGNX_NEWS_ITEM_SEED, \
    NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_AUTHORED, NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED, \
    NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE, NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED, \
    NOTICE_FRIEND_ACTIVITY_POSTS, \
    NOTICE_FRIEND_ENDORSEMENTS, NOTICE_FRIEND_ENDORSEMENTS_SEED, \
    NOTICE_VOTER_DAILY_SUMMARY, NOTICE_VOTER_DAILY_SUMMARY_SEED
from config.base import get_environment_variable
from django.utils.timezone import now
from friend.models import FriendManager
import json
from datetime import timedelta
from reaction.models import ReactionManager
from voter.models import \
    NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_EMAIL, NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_SMS, \
    NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT_EMAIL, NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT_SMS, \
    NOTIFICATION_VOTER_DAILY_SUMMARY_EMAIL, NOTIFICATION_VOTER_DAILY_SUMMARY_SMS, \
    VoterDeviceLinkManager, VoterManager
import wevote_functions.admin
from wevote_functions.functions import is_voter_device_id_valid, positive_value_exists, return_first_x_words
logger = wevote_functions.admin.get_logger(__name__)
WE_VOTE_SERVER_ROOT_URL = get_environment_variable("WE_VOTE_SERVER_ROOT_URL")
def delete_activity_comments_for_voter(voter_to_delete_we_vote_id, from_organization_we_vote_id):
    status = ''
    success = True
    activity_comment_entries_deleted = 0
    if not positive_value_exists(voter_to_delete_we_vote_id):
        status += "DELETE_ACTIVITY_COMMENTS-MISSING_EITHER_FROM_OR_TO_VOTER_WE_VOTE_ID "
        success = False
        results = {
            'status': status,
            'success': success,
            'voter_to_delete_we_vote_id': voter_to_delete_we_vote_id,
            'activity_comment_entries_deleted': activity_comment_entries_deleted,
        }
        return results
    try:
        # QuerySet.delete() returns (total_rows_deleted, per_model_counts):
        # only the total is accumulated.
        entries_deleted, _ = ActivityComment.objects\
            .filter(commenter_voter_we_vote_id__iexact=voter_to_delete_we_vote_id)\
            .delete()
        activity_comment_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_COMMENT_UPDATE-INCLUDING_ORG_UPDATE " + str(e) + " "
    # #############################################
    # Delete based on commenter_organization_we_vote_id
    try:
        entries_deleted, _ = ActivityComment.objects \
            .filter(commenter_organization_we_vote_id__iexact=from_organization_we_vote_id) \
            .delete()
        activity_comment_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_COMMENT_DELETE-FROM_ORG_WE_VOTE_ID " + str(e) + " "
    results = {
        'status': status,
        'success': success,
        'voter_to_delete_we_vote_id': voter_to_delete_we_vote_id,
        'activity_comment_entries_deleted': activity_comment_entries_deleted,
    }
    return results
def delete_activity_notices_for_voter(voter_to_delete_we_vote_id, from_organization_we_vote_id):
    status = ''
    success = True
    activity_notice_seed_entries_deleted = 0
    activity_notice_entries_deleted = 0
    if not positive_value_exists(voter_to_delete_we_vote_id):
        status += "DELETE_ACTIVITY_NOTICE_SEEDS-MISSING_VOTER_WE_VOTE_ID "
        success = False
        results = {
            'status': status,
            'success': success,
            'voter_to_delete_we_vote_id': voter_to_delete_we_vote_id,
            'activity_notice_seed_entries_deleted': activity_notice_seed_entries_deleted,
            'activity_notice_entries_deleted': activity_notice_entries_deleted,
        }
        return results
    try:
        # QuerySet.delete() returns (total_rows_deleted, per_model_counts):
        # only the total is accumulated.
        entries_deleted, _ = ActivityNoticeSeed.objects\
            .filter(speaker_voter_we_vote_id__iexact=voter_to_delete_we_vote_id)\
            .delete()
        activity_notice_seed_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_NOTICE_SEED_UPDATE-INCLUDING_ORG_UPDATE " + str(e) + " "
    try:
        entries_deleted, _ = ActivityNotice.objects\
            .filter(speaker_voter_we_vote_id__iexact=voter_to_delete_we_vote_id) \
            .delete()
        activity_notice_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_NOTICE_UPDATE-INCLUDING_ORG_UPDATE " + str(e) + " "
    # #############################################
    # Delete based on speaker_organization_we_vote_id
    try:
        entries_deleted, _ = ActivityNoticeSeed.objects \
            .filter(speaker_organization_we_vote_id__iexact=from_organization_we_vote_id) \
            .delete()
        activity_notice_seed_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_NOTICE_SEED_UPDATE-FROM_ORG_WE_VOTE_ID " + str(e) + " "
    try:
        entries_deleted, _ = ActivityNotice.objects \
            .filter(speaker_organization_we_vote_id__iexact=from_organization_we_vote_id) \
            .delete()
        activity_notice_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_NOTICE_UPDATE-FROM_ORG_WE_VOTE_ID " + str(e) + " "
    # Now delete ActivityNotice entries based on recipient_voter_we_vote_id
    try:
        entries_deleted, _ = ActivityNotice.objects \
            .filter(recipient_voter_we_vote_id__iexact=voter_to_delete_we_vote_id) \
            .delete()
        activity_notice_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_NOTICE_UPDATE-RECIPIENT " + str(e) + " "
    results = {
        'status': status,
        'success': success,
        'voter_to_delete_we_vote_id': voter_to_delete_we_vote_id,
        'activity_notice_seed_entries_deleted': activity_notice_seed_entries_deleted,
        'activity_notice_entries_deleted': activity_notice_entries_deleted,
    }
    return results
def delete_activity_posts_for_voter(voter_to_delete_we_vote_id, from_organization_we_vote_id):
    status = ''
    success = True
    activity_post_entries_deleted = 0
    if not positive_value_exists(voter_to_delete_we_vote_id):
        status += "DELETE_ACTIVITY_POSTS-MISSING_EITHER_FROM_OR_TO_VOTER_WE_VOTE_ID "
        success = False
        results = {
            'status': status,
            'success': success,
            'voter_to_delete_we_vote_id': voter_to_delete_we_vote_id,
            'activity_post_entries_deleted': activity_post_entries_deleted,
        }
        return results
    try:
        # QuerySet.delete() returns (total_rows_deleted, per_model_counts):
        # only the total is accumulated.
        entries_deleted, _ = ActivityPost.objects\
            .filter(speaker_voter_we_vote_id__iexact=voter_to_delete_we_vote_id)\
            .delete()
        activity_post_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_POST_UPDATE-INCLUDING_ORG_UPDATE " + str(e) + " "
    # #############################################
    # Delete based on speaker_organization_we_vote_id
    try:
        entries_deleted, _ = ActivityPost.objects \
            .filter(speaker_organization_we_vote_id__iexact=from_organization_we_vote_id) \
            .delete()
        activity_post_entries_deleted += entries_deleted
    except Exception as e:
        status += "FAILED-ACTIVITY_POST_DELETE-FROM_ORG_WE_VOTE_ID " + str(e) + " "
    results = {
        'status': status,
        'success': success,
        'voter_to_delete_we_vote_id': voter_to_delete_we_vote_id,
        'activity_post_entries_deleted': activity_post_entries_deleted,
    }
    return results
def move_activity_comments_to_another_voter(
        from_voter_we_vote_id, to_voter_we_vote_id, from_organization_we_vote_id, to_organization_we_vote_id,
        to_voter=None):
    status = ''
    success = True
    activity_comment_entries_moved = 0
    if not positive_value_exists(from_voter_we_vote_id) or not positive_value_exists(to_voter_we_vote_id):
        status += "MOVE_ACTIVITY_COMMENTS-MISSING_EITHER_FROM_OR_TO_VOTER_WE_VOTE_ID "
        success = False
        results = {
            'status': status,
            'success': success,
            'from_voter_we_vote_id': from_voter_we_vote_id,
            'to_voter_we_vote_id': to_voter_we_vote_id,
            'activity_comment_entries_moved': activity_comment_entries_moved,
        }
        return results
    if from_voter_we_vote_id == to_voter_we_vote_id:
        status += "MOVE_ACTIVITY_COMMENTS-FROM_AND_TO_VOTER_WE_VOTE_IDS_IDENTICAL "
        success = False
        results = {
            'status': status,
            'success': success,
            'from_voter_we_vote_id': from_voter_we_vote_id,
            'to_voter_we_vote_id': to_voter_we_vote_id,
            'activity_comment_entries_moved': activity_comment_entries_moved,
        }
        return results
    # ######################
    # Migrations
    to_voter_commenter_name = ''
    commenter_profile_image_url_medium = None
    commenter_profile_image_url_tiny = None
    try:
        to_voter_commenter_name = to_voter.get_full_name()
        commenter_profile_image_url_medium = to_voter.we_vote_hosted_profile_image_url_medium
        commenter_profile_image_url_tiny = to_voter.we_vote_hosted_profile_image_url_tiny
    except Exception as e:
        status += "UNABLE_TO_GET_NAME_OR_PHOTOS: " + str(e) + " "
    if positive_value_exists(to_organization_we_vote_id):
        # Move based on commenter_voter_we_vote_id
        try:
            activity_comment_entries_moved += ActivityComment.objects\
                .filter(commenter_voter_we_vote_id__iexact=from_voter_we_vote_id)\
                .update(commenter_name=to_voter_commenter_name,
                        commenter_voter_we_vote_id=to_voter_we_vote_id,
                        commenter_organization_we_vote_id=to_organization_we_vote_id,
                        commenter_profile_image_url_medium=commenter_profile_image_url_medium,
                        commenter_profile_image_url_tiny=commenter_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_COMMENT_UPDATE-INCLUDING_ORG_UPDATE: " + str(e) + " "
        # #############################################
        # Move based on commenter_organization_we_vote_id
        try:
            activity_comment_entries_moved += ActivityComment.objects \
                .filter(commenter_organization_we_vote_id__iexact=from_organization_we_vote_id) \
                .update(commenter_name=to_voter_commenter_name,
                        commenter_voter_we_vote_id=to_voter_we_vote_id,
                        commenter_organization_we_vote_id=to_organization_we_vote_id,
                        commenter_profile_image_url_medium=commenter_profile_image_url_medium,
                        commenter_profile_image_url_tiny=commenter_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_COMMENT_UPDATE-FROM_ORG_WE_VOTE_ID: " + str(e) + " "
    else:
        try:
            activity_comment_entries_moved += ActivityComment.objects\
                .filter(commenter_voter_we_vote_id__iexact=from_voter_we_vote_id)\
                .update(commenter_name=to_voter_commenter_name,
                        commenter_voter_we_vote_id=to_voter_we_vote_id,
                        commenter_profile_image_url_medium=commenter_profile_image_url_medium,
                        commenter_profile_image_url_tiny=commenter_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_COMMENT_UPDATE-MISSING_ORG: " + str(e) + " "
    results = {
        'status': status,
        'success': success,
        'from_voter_we_vote_id': from_voter_we_vote_id,
        'to_voter_we_vote_id': to_voter_we_vote_id,
        'activity_comment_entries_moved': activity_comment_entries_moved,
    }
    return results
def move_activity_notices_to_another_voter(
        from_voter_we_vote_id, to_voter_we_vote_id, from_organization_we_vote_id, to_organization_we_vote_id,
        to_voter=None):
    status = ''
    success = True
    activity_notice_seed_entries_moved = 0
    activity_notice_entries_moved = 0
    if not positive_value_exists(from_voter_we_vote_id) or not positive_value_exists(to_voter_we_vote_id):
        status += "MOVE_ACTIVITY_NOTICE_SEEDS-MISSING_EITHER_FROM_OR_TO_VOTER_WE_VOTE_ID "
        success = False
        results = {
            'status': status,
            'success': success,
            'from_voter_we_vote_id': from_voter_we_vote_id,
            'to_voter_we_vote_id': to_voter_we_vote_id,
            'activity_notice_seed_entries_moved': activity_notice_seed_entries_moved,
            'activity_notice_entries_moved': activity_notice_entries_moved,
        }
        return results
    if from_voter_we_vote_id == to_voter_we_vote_id:
        status += "MOVE_ACTIVITY_NOTICE_SEEDS-FROM_AND_TO_VOTER_WE_VOTE_IDS_IDENTICAL "
        success = False
        results = {
            'status': status,
            'success': success,
            'from_voter_we_vote_id': from_voter_we_vote_id,
            'to_voter_we_vote_id': to_voter_we_vote_id,
            'activity_notice_seed_entries_moved': activity_notice_seed_entries_moved,
            'activity_notice_entries_moved': activity_notice_entries_moved,
        }
        return results
    # ######################
    # Migrations
    to_voter_speaker_name = ''
    speaker_profile_image_url_medium = None
    speaker_profile_image_url_tiny = None
    try:
        to_voter_speaker_name = to_voter.get_full_name()
        speaker_profile_image_url_medium = to_voter.we_vote_hosted_profile_image_url_medium
        speaker_profile_image_url_tiny = to_voter.we_vote_hosted_profile_image_url_tiny
    except Exception as e:
        status += "UNABLE_TO_GET_NAME_OR_PHOTOS: " + str(e) + " "
    if positive_value_exists(to_organization_we_vote_id):
        # Move based on speaker_voter_we_vote_id
        try:
            activity_notice_seed_entries_moved += ActivityNoticeSeed.objects\
                .filter(speaker_voter_we_vote_id__iexact=from_voter_we_vote_id)\
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_organization_we_vote_id=to_organization_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_NOTICE_SEED_UPDATE-INCLUDING_ORG_UPDATE: " + str(e) + " "
        try:
            activity_notice_entries_moved += ActivityNotice.objects\
                .filter(speaker_voter_we_vote_id__iexact=from_voter_we_vote_id) \
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_organization_we_vote_id=to_organization_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_NOTICE_UPDATE-INCLUDING_ORG_UPDATE " + str(e) + " "
        # #############################################
        # Move based on speaker_organization_we_vote_id
        try:
            activity_notice_seed_entries_moved += ActivityNoticeSeed.objects \
                .filter(speaker_organization_we_vote_id__iexact=from_organization_we_vote_id) \
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_organization_we_vote_id=to_organization_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_NOTICE_SEED_UPDATE-FROM_ORG_WE_VOTE_ID: " + str(e) + " "
        try:
            activity_notice_entries_moved += ActivityNotice.objects \
                .filter(speaker_organization_we_vote_id__iexact=from_organization_we_vote_id) \
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_organization_we_vote_id=to_organization_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_NOTICE_UPDATE-FROM_ORG_WE_VOTE_ID: " + str(e) + " "
    else:
        try:
            activity_notice_seed_entries_moved += ActivityNoticeSeed.objects\
                .filter(speaker_voter_we_vote_id__iexact=from_voter_we_vote_id)\
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_NOTICE_SEED_UPDATE-MISSING_ORG: " + str(e) + " "
        try:
            activity_notice_entries_moved += ActivityNotice.objects\
                .filter(speaker_voter_we_vote_id__iexact=from_voter_we_vote_id) \
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_NOTICE_UPDATE-MISSING_ORG: " + str(e) + " "
    # Now move ActivityNotice recipient_voter_we_vote_id
    try:
        activity_notice_entries_moved += ActivityNotice.objects \
            .filter(recipient_voter_we_vote_id__iexact=from_voter_we_vote_id) \
            .update(speaker_name=to_voter_speaker_name,
                    recipient_voter_we_vote_id=to_voter_we_vote_id)
    except Exception as e:
        status += "FAILED-ACTIVITY_NOTICE_UPDATE-RECIPIENT: " + str(e) + " "
    results = {
        'status': status,
        'success': success,
        'from_voter_we_vote_id': from_voter_we_vote_id,
        'to_voter_we_vote_id': to_voter_we_vote_id,
        'activity_notice_seed_entries_moved': activity_notice_seed_entries_moved,
        'activity_notice_entries_moved': activity_notice_entries_moved,
    }
    return results
def move_activity_posts_to_another_voter(
        from_voter_we_vote_id, to_voter_we_vote_id, from_organization_we_vote_id, to_organization_we_vote_id,
        to_voter=None):
    status = ''
    success = True
    activity_post_entries_moved = 0
    if not positive_value_exists(from_voter_we_vote_id) or not positive_value_exists(to_voter_we_vote_id):
        status += "MOVE_ACTIVITY_POSTS-MISSING_EITHER_FROM_OR_TO_VOTER_WE_VOTE_ID "
        success = False
        results = {
            'status': status,
            'success': success,
            'from_voter_we_vote_id': from_voter_we_vote_id,
            'to_voter_we_vote_id': to_voter_we_vote_id,
            'activity_post_entries_moved': activity_post_entries_moved,
        }
        return results
    if from_voter_we_vote_id == to_voter_we_vote_id:
        status += "MOVE_ACTIVITY_POSTS-FROM_AND_TO_VOTER_WE_VOTE_IDS_IDENTICAL "
        success = False
        results = {
            'status': status,
            'success': success,
            'from_voter_we_vote_id': from_voter_we_vote_id,
            'to_voter_we_vote_id': to_voter_we_vote_id,
            'activity_post_entries_moved': activity_post_entries_moved,
        }
        return results
    # ######################
    # Migrations
    to_voter_speaker_name = ''
    speaker_profile_image_url_medium = None
    speaker_profile_image_url_tiny = None
    try:
        to_voter_speaker_name = to_voter.get_full_name()
        speaker_profile_image_url_medium = to_voter.we_vote_hosted_profile_image_url_medium
        speaker_profile_image_url_tiny = to_voter.we_vote_hosted_profile_image_url_tiny
    except Exception as e:
        status += "UNABLE_TO_GET_NAME_OR_PHOTOS: " + str(e) + " "
    if positive_value_exists(to_organization_we_vote_id):
        # Move based on speaker_voter_we_vote_id
        try:
            activity_post_entries_moved += ActivityPost.objects\
                .filter(speaker_voter_we_vote_id__iexact=from_voter_we_vote_id)\
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_organization_we_vote_id=to_organization_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_POST_UPDATE-INCLUDING_ORG_UPDATE " + str(e) + " "
        # #############################################
        # Move based on speaker_organization_we_vote_id
        try:
            activity_post_entries_moved += ActivityPost.objects \
                .filter(speaker_organization_we_vote_id__iexact=from_organization_we_vote_id) \
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_organization_we_vote_id=to_organization_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_POST_UPDATE-FROM_ORG_WE_VOTE_ID: " + str(e) + " "
    else:
        try:
            activity_post_entries_moved += ActivityPost.objects\
                .filter(speaker_voter_we_vote_id__iexact=from_voter_we_vote_id)\
                .update(speaker_name=to_voter_speaker_name,
                        speaker_voter_we_vote_id=to_voter_we_vote_id,
                        speaker_profile_image_url_medium=speaker_profile_image_url_medium,
                        speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
        except Exception as e:
            status += "FAILED-ACTIVITY_POST_UPDATE-MISSING_ORG: " + str(e) + " "
    results = {
        'status': status,
        'success': success,
        'from_voter_we_vote_id': from_voter_we_vote_id,
        'to_voter_we_vote_id': to_voter_we_vote_id,
        'activity_post_entries_moved': activity_post_entries_moved,
    }
    return results
def notice_friend_endorsements_send(
        speaker_voter_we_vote_id='',
        recipient_voter_we_vote_id='',
        invitation_message='',
        activity_tidbit_we_vote_id='',
        position_name_list=None):
    """
    We are sending an email to the speaker's friends who are
    subscribed to NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT or NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS.
    :param speaker_voter_we_vote_id:
    :param recipient_voter_we_vote_id:
    :param invitation_message:
    :param activity_tidbit_we_vote_id:
    :param position_name_list:
    :return:
    """
    from email_outbound.controllers import schedule_email_with_email_outbound_description
    from email_outbound.models import EmailManager, NOTICE_FRIEND_ENDORSEMENTS_TEMPLATE
    status = ""
    success = True
    if position_name_list is None:  # Avoid a mutable default argument
        position_name_list = []
    voter_manager = VoterManager()
    voter_results = voter_manager.retrieve_voter_by_we_vote_id(speaker_voter_we_vote_id)
    from organization.controllers import transform_web_app_url
    web_app_root_url_verified = transform_web_app_url('')  # Change to client URL if needed
    if not voter_results['voter_found']:
        error_results = {
            'status': "SPEAKER_VOTER_NOT_FOUND ",
            'success': False,
        }
        return error_results
    speaker_voter = voter_results['voter']
    recipient_voter_results = voter_manager.retrieve_voter_by_we_vote_id(recipient_voter_we_vote_id)
    if not recipient_voter_results['voter_found']:
        error_results = {
            'status': "RECIPIENT_VOTER_NOT_FOUND ",
            'success': False,
        }
        return error_results
    recipient_voter = recipient_voter_results['voter']
    email_manager = EmailManager()
    # Retrieve the email address of the original_sender (which is the person we are sending this notification to)
    recipient_email_we_vote_id = ""
    recipient_email = ""
    recipient_email_subscription_secret_key = ""
    if recipient_voter.has_email_with_verified_ownership():
        results = email_manager.retrieve_primary_email_with_ownership_verified(recipient_voter_we_vote_id)
        success = results['success']
        if results['email_address_object_found']:
            recipient_email_object = results['email_address_object']
            recipient_email_we_vote_id = recipient_email_object.we_vote_id
            recipient_email = recipient_email_object.normalized_email_address
            if positive_value_exists(recipient_email_object.subscription_secret_key):
                recipient_email_subscription_secret_key = recipient_email_object.subscription_secret_key
            else:
                recipient_email_subscription_secret_key = \
                    email_manager.update_email_address_with_new_subscription_secret_key(
                        email_we_vote_id=recipient_email_we_vote_id)
    else:
        # The recipient must have a valid email
        status += "RECIPIENT_VOTER_DOES_NOT_HAVE_VALID_EMAIL "
        success = True
        results = {
            'success': success,
            'status': status,
        }
        return results
    # Retrieve the email address of the speaker_voter - used in invitation to help the recipient understand who sent
    speaker_voter_email = ""
    speaker_voter_we_vote_id = speaker_voter.we_vote_id
    if speaker_voter.has_email_with_verified_ownership():
        results = email_manager.retrieve_primary_email_with_ownership_verified(speaker_voter_we_vote_id)
        if results['email_address_object_found']:
            speaker_voter_email_object = results['email_address_object']
            speaker_voter_email = speaker_voter_email_object.normalized_email_address
    else:
        # Not having an email is ok now, since the speaker_voter could have signed in with SMS or Twitter
        status += "SPEAKER_VOTER_DOES_NOT_HAVE_VALID_EMAIL "
    if positive_value_exists(recipient_email_we_vote_id):
        recipient_voter_we_vote_id = recipient_voter.we_vote_id
        # Template variables
        real_name_only = True
        recipient_name = recipient_voter.get_full_name(real_name_only)
        speaker_voter_name = speaker_voter.get_full_name(real_name_only)
        speaker_voter_photo = speaker_voter.voter_photo_url()
        speaker_voter_description = ""
        speaker_voter_network_details = ""
        if positive_value_exists(speaker_voter_name):
            subject = speaker_voter_name
        else:
            subject = "Your friend"
        activity_description = ''
        if position_name_list and len(position_name_list) > 0:
            if len(position_name_list) == 1:
                subject += " added an opinion about "
                subject += position_name_list[0]
                activity_description += "added an opinion about "
                activity_description += position_name_list[0]
            elif len(position_name_list) == 2:
                subject += " added opinions about "
                subject += position_name_list[0]
                subject += " and "
                subject += position_name_list[1]
                activity_description += "added opinions about "
                activity_description += position_name_list[0]
                activity_description += " and "
                activity_description += position_name_list[1]
            elif len(position_name_list) >= 3:
                subject += " added opinions about "
                subject += position_name_list[0]
                subject += ", "
                subject += position_name_list[1]
                subject += " and "
                subject += position_name_list[2]
                activity_description += "added opinions about "
                activity_description += position_name_list[0]
                activity_description += ", "
                activity_description += position_name_list[1]
                activity_description += " and "
                activity_description += position_name_list[2]
        else:
            subject += " has added a new opinion"
            activity_description += "has added a new opinion"
        # Variables used by templates/email_outbound/email_templates/notice_friend_endorsements.txt and .html
        template_variables_for_json = {
            "activity_description": activity_description,
            "subject": subject,
            "invitation_message": invitation_message,
            "sender_name": speaker_voter_name,
            "sender_photo": speaker_voter_photo,
            "sender_email_address": speaker_voter_email,  # Does not affect the "From" email header
            "sender_description": speaker_voter_description,
            "sender_network_details": speaker_voter_network_details,
            "recipient_name": recipient_name,
            "recipient_voter_email": recipient_email,
            "recipient_unsubscribe_url": web_app_root_url_verified + "/settings/notifications/esk/" +
            recipient_email_subscription_secret_key,
            "email_open_url": WE_VOTE_SERVER_ROOT_URL + "/apis/v1/emailOpen?email_key=1234",
            "view_new_endorsements_url": web_app_root_url_verified + "/news/a/" + activity_tidbit_we_vote_id,
            "view_your_ballot_url": web_app_root_url_verified + "/ballot",
        }
        template_variables_in_json = json.dumps(template_variables_for_json, ensure_ascii=True)
        # Create the outbound email description, then schedule it
        kind_of_email_template = NOTICE_FRIEND_ENDORSEMENTS_TEMPLATE
        outbound_results = email_manager.create_email_outbound_description(
            sender_voter_we_vote_id=speaker_voter_we_vote_id,
            sender_voter_email=speaker_voter_email,
            sender_voter_name=speaker_voter_name,
            recipient_voter_we_vote_id=recipient_voter_we_vote_id,
            recipient_email_we_vote_id=recipient_email_we_vote_id,
            recipient_voter_email=recipient_email,
            template_variables_in_json=template_variables_in_json,
            kind_of_email_template=kind_of_email_template)
        status += outbound_results['status'] + " "
        success = outbound_results['success']
        if outbound_results['email_outbound_description_saved']:
            email_outbound_description = outbound_results['email_outbound_description']
            schedule_results = schedule_email_with_email_outbound_description(email_outbound_description)
            status += schedule_results['status'] + " "
            success = schedule_results['success']
            if schedule_results['email_scheduled_saved']:
                # messages_to_send.append(schedule_results['email_scheduled_id'])
                email_scheduled = schedule_results['email_scheduled']
                send_results = email_manager.send_scheduled_email(email_scheduled)
                email_scheduled_sent = send_results['email_scheduled_sent']
                status += send_results['status']
                success = send_results['success']
    results = {
        'success': success,
        'status': status,
    }
    return results
def assemble_voter_daily_summary(
assemble_activity_start_date=None,
recipient_voter_we_vote_id='',
number_of_friends_to_display=3):
status = ''
success = True
activity_manager = ActivityManager()
friend_manager = FriendManager()
friend_activity_dict_list = []
reaction_manager = ReactionManager()
subject = 'Discussion(s) have been added'
introduction_line = 'At least one friend has added a discussion.'
# Collect all of the data about activity in this voter's network since the last daily_summary
current_friends_results = friend_manager.retrieve_friends_we_vote_id_list(recipient_voter_we_vote_id)
success = current_friends_results['success']
status += current_friends_results['status']
if not current_friends_results['friends_we_vote_id_list_found']:
status += "ASSEMBLE_VOTER_DAILY_SUMMARY_NO_FRIENDS_FOUND "
results = {
'success': success,
'status': status,
'friend_activity_dict_list': friend_activity_dict_list,
'introduction_line': introduction_line,
'subject': subject,
}
return results
else:
friends_we_vote_id_list = current_friends_results['friends_we_vote_id_list']
# ##########################
# Each activity post, with name, first line, # of comments and # of likes
highest_priority_by_friend_we_vote_id = {}
raw_list_by_friend_we_vote_id = {}
post_results = activity_manager.retrieve_activity_post_list(
speaker_voter_we_vote_id_list=friends_we_vote_id_list,
since_date=assemble_activity_start_date)
if post_results['success']:
friends_post_list = post_results['activity_post_list']
for one_post in friends_post_list:
number_of_comments = activity_manager.fetch_number_of_comments(one_post.we_vote_id)
number_of_likes = reaction_manager.fetch_number_of_likes(one_post.we_vote_id)
# Higher priority score makes it more likely this post is at top of list
priority_score = 0
if not one_post.speaker_name or one_post.speaker_name.startswith('Voter-'):
priority_score -= 20
if one_post.speaker_profile_image_url_medium and len(one_post.speaker_profile_image_url_medium) > 1:
priority_score += 10
if number_of_comments > 0:
priority_score += number_of_comments * 3
if number_of_likes > 0:
priority_score += number_of_likes * 1
highlight_item_dict = {
# 'date_created': one_post.date_created.strftime('%Y-%m-%d %H:%M:%S'),
'number_of_comments': number_of_comments,
'number_of_likes': number_of_likes,
'priority_score': priority_score,
'speaker_name': one_post.speaker_name,
'speaker_profile_image_url_medium': one_post.speaker_profile_image_url_medium,
'speaker_voter_we_vote_id': one_post.speaker_voter_we_vote_id,
'statement_text': one_post.statement_text,
'we_vote_id': one_post.we_vote_id,
}
if one_post.speaker_voter_we_vote_id in highest_priority_by_friend_we_vote_id and \
highest_priority_by_friend_we_vote_id[one_post.speaker_voter_we_vote_id] > priority_score:
# Do not add this highlight_item_dict because the highlight item captured for this person
# already has a higher priority_score
pass
else:
raw_list_by_friend_we_vote_id[one_post.speaker_voter_we_vote_id] = highlight_item_dict
highest_priority_by_friend_we_vote_id[one_post.speaker_voter_we_vote_id] = priority_score
# ##########################
# Endorsements made
# ##########################
# Now that we know raw_list_by_friend_we_vote_id only has one highlight_item_dict per friend,
# drop them into simple friend_activity_dict_list so we can sort them by priority_score
friend_activity_dict_list = raw_list_by_friend_we_vote_id.values()
sorted(friend_activity_dict_list, key=lambda item: item['priority_score'], reverse=True)
friend_name_list_in_order = []
names_stored = 0
for one_activity_dict in friend_activity_dict_list:
if names_stored < number_of_friends_to_display:
friend_name_list_in_order.append(one_activity_dict['speaker_name'])
names_stored += 1
if len(friend_name_list_in_order) > 0:
introduction_line = ''
subject = ''
if len(friend_name_list_in_order) == 1:
subject += friend_name_list_in_order[0]
subject += " added a discussion"
introduction_line += "Your friend "
introduction_line += friend_name_list_in_order[0]
introduction_line += " has added one or more discussion."
elif len(friend_name_list_in_order) == 2:
subject += friend_name_list_in_order[0]
subject += " and "
subject += friend_name_list_in_order[1]
subject += " added discussions"
introduction_line += "Your friends "
introduction_line += friend_name_list_in_order[0]
introduction_line += " and "
introduction_line += friend_name_list_in_order[1]
introduction_line += " have added discussions."
elif len(friend_name_list_in_order) >= 3:
subject += friend_name_list_in_order[0]
subject += ", "
subject += friend_name_list_in_order[1]
subject += " and "
subject += friend_name_list_in_order[2]
subject += " added discussions"
introduction_line += "Your friends "
introduction_line += friend_name_list_in_order[0]
introduction_line += ", "
introduction_line += friend_name_list_in_order[1]
introduction_line += " and "
introduction_line += friend_name_list_in_order[2]
introduction_line += " have added discussions."
results = {
'success': success,
'status': status,
'friend_activity_dict_list': friend_activity_dict_list,
'introduction_line': introduction_line,
'subject': subject,
}
return results
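# The dedupe-and-rank step above (keep only the highest-priority_score highlight per friend,
# then sort descending) can be sketched in isolation. This is a minimal illustration, not the
# production path: the helper name and the sample data are hypothetical; only the two field
# names mirror the highlight_item_dict built above.
```python
def pick_top_highlight_per_friend(highlight_dicts):
    """Keep the highest-priority highlight per speaker, sorted by priority_score descending."""
    highest_priority_by_friend = {}
    best_by_friend = {}
    for item in highlight_dicts:
        friend_id = item['speaker_voter_we_vote_id']
        if friend_id in highest_priority_by_friend and \
                highest_priority_by_friend[friend_id] > item['priority_score']:
            continue  # an earlier highlight for this friend already ranks higher
        best_by_friend[friend_id] = item
        highest_priority_by_friend[friend_id] = item['priority_score']
    # sorted() returns a new list; the dict values view itself is not sorted in place
    return sorted(best_by_friend.values(), key=lambda i: i['priority_score'], reverse=True)
```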
def notice_voter_daily_summary_send(  # NOTICE_VOTER_DAILY_SUMMARY
recipient_voter_we_vote_id='',
friend_activity_dict_list=None,
introduction_line='',
subject=''):
"""
:param recipient_voter_we_vote_id:
:param friend_activity_dict_list:
:param introduction_line:
:param subject:
:return:
"""
# Avoid a shared mutable default argument
if friend_activity_dict_list is None:
friend_activity_dict_list = []
from email_outbound.controllers import schedule_email_with_email_outbound_description
from email_outbound.models import EmailManager, NOTICE_VOTER_DAILY_SUMMARY_TEMPLATE
status = ""
voter_manager = VoterManager()
from organization.controllers import transform_web_app_url
web_app_root_url_verified = transform_web_app_url('') # Change to client URL if needed
recipient_voter_results = voter_manager.retrieve_voter_by_we_vote_id(recipient_voter_we_vote_id)
if not recipient_voter_results['voter_found']:
error_results = {
'status': "RECIPIENT_VOTER_NOT_FOUND ",
'success': False,
}
return error_results
recipient_voter = recipient_voter_results['voter']
email_manager = EmailManager()
# Retrieve the email address of the original_sender (which is the person we are sending this notification to)
recipient_email_we_vote_id = ""
recipient_email = ""
recipient_email_subscription_secret_key = ""
if recipient_voter.has_email_with_verified_ownership():
results = email_manager.retrieve_primary_email_with_ownership_verified(recipient_voter_we_vote_id)
success = results['success']
if results['email_address_object_found']:
recipient_email_object = results['email_address_object']
recipient_email_we_vote_id = recipient_email_object.we_vote_id
recipient_email = recipient_email_object.normalized_email_address
if positive_value_exists(recipient_email_object.subscription_secret_key):
recipient_email_subscription_secret_key = recipient_email_object.subscription_secret_key
else:
recipient_email_subscription_secret_key = \
email_manager.update_email_address_with_new_subscription_secret_key(
email_we_vote_id=recipient_email_we_vote_id)
else:
# The recipient must have a valid email
status += "RECIPIENT_VOTER_DOES_NOT_HAVE_VALID_EMAIL "
success = True
results = {
'success': success,
'status': status,
}
return results
if positive_value_exists(recipient_email_we_vote_id):
recipient_voter_we_vote_id = recipient_voter.we_vote_id
# Trim friend_activity_dict_list down to the first number_of_highlights_to_show items
number_of_highlights_to_show = 3
number_shown = 0
friend_activity_dict_list_modified = []
for highlight_dict in friend_activity_dict_list:
if number_shown < number_of_highlights_to_show:
highlight_dict['view_activity_tidbit_url'] = \
web_app_root_url_verified + "/news/a/" + highlight_dict['we_vote_id']
friend_activity_dict_list_modified.append(highlight_dict)
number_shown += 1
# Template variables
real_name_only = True
recipient_name = recipient_voter.get_full_name(real_name_only)
# speaker_voter_name = speaker_voter.get_full_name(real_name_only)
# speaker_voter_photo = speaker_voter.voter_photo_url()
# speaker_voter_description = ""
# speaker_voter_network_details = ""
# Variables used by templates/email_outbound/email_templates/friend_accepted_invitation.txt and .html
if not positive_value_exists(subject):
subject = "Your friends have commented"
template_variables_for_json = {
"introduction_line": introduction_line,
"subject": subject,
"friend_activity_dict_list": friend_activity_dict_list_modified,
# "sender_name": speaker_voter_name,
# "sender_photo": speaker_voter_photo,
# "sender_email_address": speaker_voter_email, # Does not affect the "From" email header
# "sender_description": speaker_voter_description,
# "sender_network_details": speaker_voter_network_details,
"recipient_name": recipient_name,
"recipient_voter_email": recipient_email,
"recipient_unsubscribe_url": web_app_root_url_verified + "/settings/notifications/esk/" +
recipient_email_subscription_secret_key,
"email_open_url": WE_VOTE_SERVER_ROOT_URL + "/apis/v1/emailOpen?email_key=1234",
"view_main_discussion_page_url": web_app_root_url_verified + "/news",
"view_your_ballot_url": web_app_root_url_verified + "/ballot",
}
template_variables_in_json = json.dumps(template_variables_for_json, ensure_ascii=True)
from_email_for_daily_summary = "We Vote <info@WeVote.US>" # TODO DALE Make system variable
# Create the outbound email description, then schedule it
kind_of_email_template = NOTICE_VOTER_DAILY_SUMMARY_TEMPLATE
outbound_results = email_manager.create_email_outbound_description(
sender_voter_we_vote_id=recipient_voter_we_vote_id,
sender_voter_email=from_email_for_daily_summary,
sender_voter_name='',
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
recipient_email_we_vote_id=recipient_email_we_vote_id,
recipient_voter_email=recipient_email,
template_variables_in_json=template_variables_in_json,
kind_of_email_template=kind_of_email_template)
status += outbound_results['status'] + " "
success = outbound_results['success']
if outbound_results['email_outbound_description_saved']:
email_outbound_description = outbound_results['email_outbound_description']
schedule_results = schedule_email_with_email_outbound_description(email_outbound_description)
status += schedule_results['status'] + " "
success = schedule_results['success']
if schedule_results['email_scheduled_saved']:
# messages_to_send.append(schedule_results['email_scheduled_id'])
email_scheduled = schedule_results['email_scheduled']
send_results = email_manager.send_scheduled_email(email_scheduled)
email_scheduled_sent = send_results['email_scheduled_sent']
status += send_results['status']
success = send_results['success']
results = {
'success': success,
'status': status,
}
return results
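# The trim-and-annotate loop above (cap the highlights at number_of_highlights_to_show, attach a
# view URL, then serialize for the email template) can be factored as below. A sketch only: the
# helper name and the example URL root are hypothetical; the "/news/a/" path and
# ensure_ascii=True mirror the code above.
```python
import json


def build_highlight_payload(friend_activity_dict_list, web_app_root_url, limit=3):
    """Annotate the first `limit` highlights with a view URL and serialize for the template."""
    trimmed = []
    for highlight_dict in friend_activity_dict_list[:limit]:
        highlight_dict = dict(highlight_dict)  # copy so we do not mutate the caller's dicts
        highlight_dict['view_activity_tidbit_url'] = \
            web_app_root_url + "/news/a/" + highlight_dict['we_vote_id']
        trimmed.append(highlight_dict)
    return json.dumps({'friend_activity_dict_list': trimmed}, ensure_ascii=True)
```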
def process_activity_notice_seeds_triggered_by_batch_process():
"""
We assume only one of this function is running at any time.
:return:
"""
status = ''
success = True
activity_notice_seed_count = 0
activity_notice_count = 0
# Retrieve ActivityNoticeSeeds that need to have some processing done, including ActivityNotice entries created
activity_manager = ActivityManager()
# We want this process to stop before it has run for 5 minutes, so that we don't collide with another process
# starting. Please also see: activity_notice_processing_time_out_duration & checked_out_expiration_time
# We adjust timeout for ACTIVITY_NOTICE_PROCESS in retrieve_batch_process_list
longest_activity_notice_processing_run_time_allowed = 270 # 4.5 minutes * 60 seconds
when_process_must_stop = now() + timedelta(seconds=longest_activity_notice_processing_run_time_allowed)
# Update existing ActivityNoticeSeed entries (notices_to_be_updated=True)
# Only run this when the minutes are divisible by "5"
# Note: Because of other processes running we cannot count on every entry updating every 5 minutes -- there
# is some randomness to when they get updated
update_interval = 5
time_now = now()
if time_now.minute % update_interval == 0:
continue_retrieving_notices_to_be_updated = True
activity_notice_seed_id_already_reviewed_list = []
safety_valve_count = 0
while continue_retrieving_notices_to_be_updated and \
safety_valve_count < 1000 and \
when_process_must_stop > now():
safety_valve_count += 1
results = activity_manager.retrieve_next_activity_notice_seed_to_process(
notices_to_be_updated=True,
activity_notice_seed_id_already_reviewed_list=activity_notice_seed_id_already_reviewed_list)
if results['activity_notice_seed_found']:
# We retrieve from these seed types:
# NOTICE_ACTIVITY_POST_SEED
# NOTICE_FRIEND_ENDORSEMENTS_SEED
# We do not need to update (we create once elsewhere and do not update):
# NOTICE_CAMPAIGNX_NEWS_ITEM_SEED
# NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED
# NOTICE_VOTER_DAILY_SUMMARY_SEED
activity_notice_seed = results['activity_notice_seed']
activity_notice_seed_id_already_reviewed_list.append(activity_notice_seed.id)
activity_notice_seed_count += 1
status += "[updated:: "
status += "activity_notice_seed_id: " + str(activity_notice_seed.id) + " "
status += "kind_of_seed: " + str(activity_notice_seed.kind_of_seed) + " "
status += "] "
update_activity_notices = False
if activity_notice_seed.kind_of_seed == NOTICE_ACTIVITY_POST_SEED:
# We are storing number_of_comments and number_of_likes in NOTICE_ACTIVITY_POST_SEED, so we need
# to update in case there have been changes.
update_seed_results = \
update_activity_notice_seed_date_of_notice_earlier_than_update_window(activity_notice_seed)
status += update_seed_results['status']
if update_seed_results['success']:
activity_notice_seed = update_seed_results['activity_notice_seed']
if not activity_notice_seed.date_of_notice_earlier_than_update_window:
update_activity_notices = True
elif activity_notice_seed.kind_of_seed == NOTICE_FRIEND_ENDORSEMENTS_SEED:
update_seed_results = \
update_activity_notice_seed_date_of_notice_earlier_than_update_window(activity_notice_seed)
status += update_seed_results['status']
if update_seed_results['success']:
activity_notice_seed = update_seed_results['activity_notice_seed']
if not activity_notice_seed.date_of_notice_earlier_than_update_window:
# Only update if the number of positions has changed
update_seed_results = update_activity_notice_seed_with_positions(activity_notice_seed)
activity_notice_seed = update_seed_results['activity_notice_seed']
update_activity_notices = True
if update_activity_notices:
# Update the activity drop down in each voter touched (friends of the voter acting)
update_results = update_or_create_activity_notices_from_seed(activity_notice_seed)
status += update_results['status'] # Show all status for now
# if not update_results['success']:
# status += update_results['status']
else:
continue_retrieving_notices_to_be_updated = False
# Create new ActivityNotice entries, which appear in header notification menu (notices_to_be_created=True)
continue_retrieving_notices_to_be_created = True
activity_notice_seed_id_already_reviewed_list = [] # Reset
safety_valve_count = 0
while continue_retrieving_notices_to_be_created and safety_valve_count < 1000 and when_process_must_stop > now():
safety_valve_count += 1
results = activity_manager.retrieve_next_activity_notice_seed_to_process(
notices_to_be_created=True,
activity_notice_seed_id_already_reviewed_list=activity_notice_seed_id_already_reviewed_list)
if results['activity_notice_seed_found']:
# We retrieve from these seed types:
# NOTICE_ACTIVITY_POST_SEED
# NOTICE_CAMPAIGNX_NEWS_ITEM_SEED
# NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED
# NOTICE_FRIEND_ENDORSEMENTS_SEED
activity_notice_seed = results['activity_notice_seed']
activity_notice_seed_id_already_reviewed_list.append(activity_notice_seed.id)
activity_notice_seed_count += 1
status += "[created:: "
status += "activity_notice_seed_id: " + str(activity_notice_seed.id) + " "
status += "kind_of_seed: " + str(activity_notice_seed.kind_of_seed) + " "
status += "] "
# Create the activity drop down in each voter's header for each voter touched (friends of the voter acting)
create_results = update_or_create_activity_notices_from_seed(activity_notice_seed)
# activity_notice_seed.activity_notices_created = True # Marked in function immediately above
activity_notice_count += create_results['activity_notice_count']
# NOTE: Since the daily summary is only sent once per day, we wait to create NOTICE_VOTER_DAILY_SUMMARY_SEED
# entries until the daily-summary step below
else:
continue_retrieving_notices_to_be_created = False
# Create NOTICE_VOTER_DAILY_SUMMARY_SEED entries for any other SEED that needs to go into the DAILY_SUMMARY
# We retrieve from these seed types: NOTICE_ACTIVITY_POST_SEED, NOTICE_FRIEND_ENDORSEMENTS_SEED
continue_retrieving_to_be_added_to_voter_summary = True
activity_notice_seed_id_already_reviewed_list = []
safety_valve_count = 0
while continue_retrieving_to_be_added_to_voter_summary and \
safety_valve_count < 1000 and \
when_process_must_stop > now():
safety_valve_count += 1
results = activity_manager.retrieve_next_activity_notice_seed_to_process(
to_be_added_to_voter_daily_summary=True,
activity_notice_seed_id_already_reviewed_list=activity_notice_seed_id_already_reviewed_list)
if results['activity_notice_seed_found']:
# We retrieve from these seed types: NOTICE_ACTIVITY_POST_SEED, NOTICE_FRIEND_ENDORSEMENTS_SEED
activity_notice_seed = results['activity_notice_seed']
activity_notice_seed_id_already_reviewed_list.append(activity_notice_seed.id)
activity_notice_seed_count += 1
status += "[daily_summary:: "
status += "activity_notice_seed_id: " + str(activity_notice_seed.id) + " "
status += "kind_of_seed: " + str(activity_notice_seed.kind_of_seed) + " "
status += "] "
# Create the seeds (one for each voter touched) which will be used to send a daily summary
# to each voter touched. So we end up with new NOTICE_VOTER_DAILY_SUMMARY_SEED entries for the friends
# of the creators of these seeds: NOTICE_ACTIVITY_POST_SEED, NOTICE_FRIEND_ENDORSEMENTS_SEED
update_results = update_or_create_voter_daily_summary_seeds_from_seed(activity_notice_seed)
# if not update_results['success']:
status += update_results['status']
else:
continue_retrieving_to_be_added_to_voter_summary = False
# Send email notifications (notices_to_be_scheduled=True)
continue_retrieving_notices_to_be_scheduled = True
activity_notice_seed_id_already_reviewed_list = [] # Reset
safety_valve_count = 0
while continue_retrieving_notices_to_be_scheduled and safety_valve_count < 1000 and when_process_must_stop > now():
safety_valve_count += 1
results = activity_manager.retrieve_next_activity_notice_seed_to_process(
notices_to_be_scheduled=True,
activity_notice_seed_id_already_reviewed_list=activity_notice_seed_id_already_reviewed_list)
if results['activity_notice_seed_found']:
# We retrieve from these seed types:
# NOTICE_CAMPAIGNX_NEWS_ITEM_SEED
# NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED
# NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED
# NOTICE_FRIEND_ENDORSEMENTS_SEED
# NOTICE_VOTER_DAILY_SUMMARY_SEED
activity_notice_seed = results['activity_notice_seed']
activity_notice_seed_id_already_reviewed_list.append(activity_notice_seed.id)
# activity_notice_seed_count += 1
schedule_results = schedule_activity_notices_from_seed(activity_notice_seed)
# activity_notice_seed.activity_notices_scheduled = True # Marked in function immediately above
# if not schedule_results['success']:
status += schedule_results['status']
# activity_notice_count += create_results['activity_notice_count']
else:
continue_retrieving_notices_to_be_scheduled = False
results = {
'success': success,
'status': status,
'activity_notice_seed_count': activity_notice_seed_count,
'activity_notice_count': activity_notice_count,
}
return results
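# Each retrieval loop above shares one guard pattern: a wall-clock deadline (so this run stops
# before the next batch process starts), a safety-valve iteration cap, and an already-reviewed id
# list passed back to the fetcher. A generic sketch of that pattern; drain_queue and the
# fetch_next callable are hypothetical stand-ins for retrieve_next_activity_notice_seed_to_process.
```python
from datetime import datetime, timedelta


def drain_queue(fetch_next, deadline_seconds=270, safety_valve_limit=1000):
    """Process items until the fetcher runs dry, the cap is hit, or the deadline passes."""
    deadline = datetime.now() + timedelta(seconds=deadline_seconds)
    already_reviewed_ids = []
    processed = 0
    while len(already_reviewed_ids) < safety_valve_limit and datetime.now() < deadline:
        item = fetch_next(already_reviewed_ids)
        if item is None:
            break  # queue drained
        already_reviewed_ids.append(item['id'])  # never hand the same item back
        processed += 1
    return processed
```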
def update_or_create_activity_notices_from_seed(activity_notice_seed):
status = ''
success = True
activity_notice_count = 0
from campaign.models import CampaignXManager
campaignx_manager = CampaignXManager()
activity_manager = ActivityManager()
friend_manager = FriendManager()
reaction_manager = ReactionManager()
# Create or update ActivityNotice entries for the person who generated activity_notice_seed
if positive_value_exists(activity_notice_seed.campaignx_we_vote_id):
if activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_NEWS_ITEM_SEED:
# #########
# Notice to the creator for drop down.
results = campaignx_manager.retrieve_campaignx(
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
read_only=True
)
statement_subject = ''
if results['campaignx_found']:
statement_subject = results['campaignx'].campaign_title
kind_of_notice = NOTICE_CAMPAIGNX_NEWS_ITEM_AUTHORED
activity_results = update_or_create_activity_notice_for_campaignx_news_item(
activity_notice_seed_id=activity_notice_seed.id,
campaignx_news_item_we_vote_id=activity_notice_seed.campaignx_news_item_we_vote_id,
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
kind_of_seed=activity_notice_seed.kind_of_seed,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
send_to_email=False,
send_to_sms=False,
speaker_name=activity_notice_seed.speaker_name,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
speaker_profile_image_url_medium=activity_notice_seed.speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=activity_notice_seed.speaker_profile_image_url_tiny,
statement_subject=statement_subject,
statement_text_preview=activity_notice_seed.statement_text_preview)
if activity_results['success']:
activity_notice_count += 1
status += activity_results['status'] # We may be able to remove this later to reduce log size
else:
status += activity_results['status']
# Note there are more activity_notice entries for NOTICE_CAMPAIGNX_NEWS_ITEM_SEED created below
elif activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED:
# #########
# Notice to the creator for drop down. Email is sent by the processing of the ActivityNoticeSeed.
kind_of_notice = NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE
activity_results = update_or_create_activity_notice_for_campaignx_supporter_initial_response(
activity_notice_seed_id=activity_notice_seed.id,
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
kind_of_seed=activity_notice_seed.kind_of_seed,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
send_to_email=False,
send_to_sms=False,
speaker_name=activity_notice_seed.speaker_name,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
speaker_profile_image_url_medium=activity_notice_seed.speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=activity_notice_seed.speaker_profile_image_url_tiny,
statement_text_preview=activity_notice_seed.statement_text_preview)
if activity_results['success']:
activity_notice_count += 1
status += activity_results['status'] # We may be able to remove this later to reduce log size
else:
status += activity_results['status']
# Seeds that require a friend list to be found
if activity_notice_seed.kind_of_seed in [
NOTICE_ACTIVITY_POST_SEED,
NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED,
NOTICE_FRIEND_ENDORSEMENTS_SEED,
]:
# Retrieve all friends of activity_notice_seed.speaker_voter_we_vote_id
status += "KIND_OF_LIST_CURRENT_FRIENDS_ACTIVITY_NOTICES "
retrieve_current_friends_as_voters_results = \
friend_manager.retrieve_current_friends_as_voters(activity_notice_seed.speaker_voter_we_vote_id)
success = retrieve_current_friends_as_voters_results['success']
status += retrieve_current_friends_as_voters_results['status']
if retrieve_current_friends_as_voters_results['friend_list_found']:
current_friend_list = retrieve_current_friends_as_voters_results['friend_list']
if activity_notice_seed.kind_of_seed == NOTICE_ACTIVITY_POST_SEED:
# Pop the last activity_tidbit_we_vote_id
activity_tidbit_we_vote_id = ''
if positive_value_exists(activity_notice_seed.activity_tidbit_we_vote_ids_for_friends_serialized):
activity_tidbit_we_vote_id_list_for_friends = \
json.loads(activity_notice_seed.activity_tidbit_we_vote_ids_for_friends_serialized)
if len(activity_tidbit_we_vote_id_list_for_friends) > 0:
activity_tidbit_we_vote_id = activity_tidbit_we_vote_id_list_for_friends.pop()
if not positive_value_exists(activity_tidbit_we_vote_id):
if positive_value_exists(activity_notice_seed.activity_tidbit_we_vote_ids_for_public_serialized):
activity_tidbit_we_vote_id_list_for_public = \
json.loads(activity_notice_seed.activity_tidbit_we_vote_ids_for_public_serialized)
if len(activity_tidbit_we_vote_id_list_for_public) > 0:
activity_tidbit_we_vote_id = activity_tidbit_we_vote_id_list_for_public.pop()
if positive_value_exists(activity_tidbit_we_vote_id):
number_of_comments = activity_manager.fetch_number_of_comments(
parent_we_vote_id=activity_tidbit_we_vote_id)
number_of_likes = reaction_manager.fetch_number_of_likes(activity_tidbit_we_vote_id)
kind_of_notice = NOTICE_FRIEND_ACTIVITY_POSTS
for friend_voter in current_friend_list:
# ###########################
# NOTE: We call update_or_create_voter_daily_summary_seeds_from_seed from the same place
# (process_activity_notice_seeds_triggered_by_batch_process) we call the function
# we are currently in. We don't do it here.
# ###########################
# This is the entry that goes in the header drop-down
activity_results = update_or_create_activity_notice_for_friend_posts(
activity_notice_seed_id=activity_notice_seed.id,
activity_tidbit_we_vote_id=activity_tidbit_we_vote_id,
kind_of_seed=activity_notice_seed.kind_of_seed,
kind_of_notice=kind_of_notice,
number_of_comments=number_of_comments,
number_of_likes=number_of_likes,
recipient_voter_we_vote_id=friend_voter.we_vote_id,
send_to_email=False,
send_to_sms=False,
speaker_name=activity_notice_seed.speaker_name,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
speaker_profile_image_url_medium=activity_notice_seed.speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=activity_notice_seed.speaker_profile_image_url_tiny,
statement_text_preview=activity_notice_seed.statement_text_preview)
if activity_results['success']:
activity_notice_count += 1
else:
status += activity_results['status']
elif activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED:
if positive_value_exists(activity_notice_seed.campaignx_we_vote_id):
# #########
# Notices (and emails) to the creator's friends
kind_of_notice = NOTICE_CAMPAIGNX_FRIEND_HAS_SUPPORTED
twelve_hours_of_seconds = 12 * 60 * 60
for friend_voter in current_friend_list:
# Has the friend already signed this campaign? If so, don't send another email.
is_voter_campaignx_supporter = campaignx_manager.is_voter_campaignx_supporter(
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
voter_we_vote_id=friend_voter.we_vote_id)
# Has the friend already received an email about this supporter signing a campaign recently?
# If so, don't email any more notices for twelve_hours_of_seconds
# Use a distinct name so we do not clobber the running activity_notice_count total for this function
recent_activity_notice_count = activity_manager.fetch_activity_notice_count(
activity_in_last_x_seconds=twelve_hours_of_seconds,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=friend_voter.we_vote_id,
send_to_email=True,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
)
if is_voter_campaignx_supporter or recent_activity_notice_count > 0:
send_to_email = False
send_to_sms = False
else:
# Decide whether to send email or sms based on friend's notification settings
# We will need to figure out if this endorsement is on this voter's ballot
# NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_EMAIL
# NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT_EMAIL
send_to_email = friend_voter.is_notification_status_flag_set(
NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_EMAIL)
# NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT_SMS
# NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_SMS
send_to_sms = friend_voter.is_notification_status_flag_set(
NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_SMS)
# ###########################
# This is the entry that goes in the header drop-down
activity_results = update_or_create_activity_notice_for_friend_campaignx_support(
activity_notice_seed_id=activity_notice_seed.id,
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
kind_of_seed=activity_notice_seed.kind_of_seed,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=friend_voter.we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=activity_notice_seed.speaker_name,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
speaker_profile_image_url_medium=activity_notice_seed.speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=activity_notice_seed.speaker_profile_image_url_tiny,
statement_text_preview=activity_notice_seed.statement_text_preview)
if activity_results['success']:
activity_notice_count += 1
else:
status += activity_results['status']
elif activity_notice_seed.kind_of_seed == NOTICE_FRIEND_ENDORSEMENTS_SEED:
kind_of_notice = NOTICE_FRIEND_ENDORSEMENTS
# Names for quick summaries
position_name_list = []
if positive_value_exists(activity_notice_seed.position_names_for_friends_serialized):
position_name_list_for_friends = \
json.loads(activity_notice_seed.position_names_for_friends_serialized)
position_name_list += position_name_list_for_friends
if positive_value_exists(activity_notice_seed.position_names_for_public_serialized):
position_name_list_for_public = \
json.loads(activity_notice_seed.position_names_for_public_serialized)
position_name_list += position_name_list_for_public
position_name_list_serialized = json.dumps(position_name_list)
# We Vote Ids for full position display
position_we_vote_id_list = []
if positive_value_exists(
activity_notice_seed.position_we_vote_ids_for_friends_serialized):
position_we_vote_id_list_for_friends = \
json.loads(activity_notice_seed.position_we_vote_ids_for_friends_serialized)
position_we_vote_id_list += position_we_vote_id_list_for_friends
if positive_value_exists(
activity_notice_seed.position_we_vote_ids_for_public_serialized):
position_we_vote_id_list_for_public = \
json.loads(activity_notice_seed.position_we_vote_ids_for_public_serialized)
position_we_vote_id_list += position_we_vote_id_list_for_public
position_we_vote_id_list_serialized = json.dumps(position_we_vote_id_list)
for friend_voter in current_friend_list:
# Add switch for NOTICE_FRIEND_ACTIVITY_POSTS here
# Decide whether to send email or sms based on friend's notification settings
# We will need to figure out if this endorsement is on this voter's ballot
# NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_EMAIL
# NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT_EMAIL
send_to_email = friend_voter.is_notification_status_flag_set(
NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_EMAIL)
# NOTIFICATION_FRIEND_OPINIONS_YOUR_BALLOT_SMS
# NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_SMS
send_to_sms = friend_voter.is_notification_status_flag_set(
NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_SMS)
# ###########################
# This is the entry that goes in the header drop-down
activity_results = update_or_create_activity_notice_for_friend_endorsements(
activity_notice_seed_id=activity_notice_seed.id,
activity_tidbit_we_vote_id=activity_notice_seed.we_vote_id,
kind_of_seed=activity_notice_seed.kind_of_seed,
kind_of_notice=kind_of_notice,
position_name_list_serialized=position_name_list_serialized,
position_we_vote_id_list_serialized=position_we_vote_id_list_serialized,
recipient_voter_we_vote_id=friend_voter.we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=activity_notice_seed.speaker_name,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
speaker_profile_image_url_medium=activity_notice_seed.speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=activity_notice_seed.speaker_profile_image_url_tiny)
if activity_results['success']:
activity_notice_count += 1
else:
status += activity_results['status']
else:
status += "CREATE_ACTIVITY_NOTICES_FROM_SEED_NO_FRIENDS "
# These do not require friends for the notices
if activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_NEWS_ITEM_SEED:
if positive_value_exists(activity_notice_seed.campaignx_we_vote_id):
# #########
# Notices (and emails) to the campaignx subscribers
kind_of_notice = NOTICE_CAMPAIGNX_NEWS_ITEM
campaignx_supporter_list = []
results = campaignx_manager.retrieve_campaignx_supporter_list(
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
limit=0,
read_only=True,
)
if results['supporter_list_found']:
campaignx_supporter_list = results['supporter_list']
for campaignx_supporter in campaignx_supporter_list:
if positive_value_exists(campaignx_supporter.is_subscribed_by_email):
send_to_email = True
send_to_sms = False
else:
send_to_email = False
send_to_sms = False
# ###########################
# This is the entry that goes in the header drop-down
activity_results = update_or_create_activity_notice_for_campaignx_news_item(
activity_notice_seed_id=activity_notice_seed.id,
campaignx_news_item_we_vote_id=activity_notice_seed.campaignx_news_item_we_vote_id,
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
kind_of_seed=activity_notice_seed.kind_of_seed,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=campaignx_supporter.voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=activity_notice_seed.speaker_name,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
speaker_profile_image_url_medium=activity_notice_seed.speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=activity_notice_seed.speaker_profile_image_url_tiny,
statement_subject=activity_notice_seed.statement_subject,
statement_text_preview=activity_notice_seed.statement_text_preview)
if activity_results['success']:
activity_notice_count += 1
else:
status += activity_results['status']
# Note: We don't create notices for: NOTICE_VOTER_DAILY_SUMMARY_SEED
try:
activity_notice_seed.activity_notices_created = True
activity_notice_seed.save()
status += "CREATE_ACTIVITY_NOTICES_FROM_SEED_MARKED_CREATED "
except Exception as e:
status += "CREATE_ACTIVITY_NOTICES_FROM_SEED_CANNOT_MARK_NOTICES_CREATED: " + str(e) + " "
success = False
results = {
'success': success,
'status': status,
'activity_notice_count': activity_notice_count,
}
return results
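# The NOTICE_FRIEND_ENDORSEMENTS_SEED branch above merges the friends-only and public position
# lists by deserializing each JSON field, concatenating, and re-serializing. That step can be
# factored as a small helper; a sketch under the same json-encoding assumption (the helper name
# and sample names are illustrative).
```python
import json


def merge_serialized_lists(*serialized_lists):
    """Concatenate any number of JSON-encoded lists, skipping empty fields, and re-serialize."""
    combined = []
    for serialized in serialized_lists:
        if serialized:  # an empty string or None means "no entries stored"
            combined += json.loads(serialized)
    return json.dumps(combined)
```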
def update_or_create_voter_daily_summary_seed(
recipient_name='',
recipient_voter_we_vote_id='',
send_to_email=False,
send_to_sms=False,
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
update_only=False):
"""
:param recipient_name:
:param recipient_voter_we_vote_id:
:param send_to_email:
:param send_to_sms:
:param speaker_organization_we_vote_id: The person's organization who has done something
:param speaker_voter_we_vote_id: The person who has done something
:param update_only:
:return:
"""
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_seed_from_listener(
kind_of_seed=NOTICE_VOTER_DAILY_SUMMARY_SEED,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
)
if results['activity_notice_seed_found']:
status += "WE_DO_NOT_NEED_TO_UPDATE_NOTICE_VOTER_DAILY_SUMMARY_SEED "
# activity_notice_seed = results['activity_notice_seed']
# change_detected = False
# try:
# # DALE Sept 6, 2020: I'm not 100% sure we need to update NOTICE_VOTER_DAILY_SUMMARY_SEED with this data
# # since when we generate the daily summary email we are just querying against activity since the last
# # summary was sent.
# if positive_value_exists(speaker_organization_we_vote_id):
# speaker_organization_we_vote_ids = []
# if positive_value_exists(activity_notice_seed.speaker_organization_we_vote_ids_serialized):
# # Deserialize
# speaker_organization_we_vote_ids = \
# json.loads(activity_notice_seed.speaker_organization_we_vote_ids_serialized)
# if speaker_organization_we_vote_id not in speaker_organization_we_vote_ids:
# speaker_organization_we_vote_ids.append(speaker_organization_we_vote_id)
# change_detected = True
# # Then serialize
# speaker_organization_we_vote_ids_serialized = json.dumps(speaker_organization_we_vote_ids)
# activity_notice_seed.speaker_organization_we_vote_ids_serialized = \
# speaker_organization_we_vote_ids_serialized
#
# if positive_value_exists(speaker_voter_we_vote_id):
# speaker_voter_we_vote_ids = []
# if positive_value_exists(activity_notice_seed.speaker_voter_we_vote_ids_serialized):
# # Deserialize
# speaker_voter_we_vote_ids = json.loads(activity_notice_seed.speaker_voter_we_vote_ids_serialized)
# if speaker_voter_we_vote_id not in speaker_voter_we_vote_ids:
# speaker_voter_we_vote_ids.append(speaker_voter_we_vote_id)
# change_detected = True
# # Then serialize
# speaker_voter_we_vote_ids_serialized = json.dumps(speaker_voter_we_vote_ids)
# activity_notice_seed.speaker_voter_we_vote_ids_serialized = speaker_voter_we_vote_ids_serialized
#
# if activity_notice_seed.recipient_name != recipient_name:
# activity_notice_seed.recipient_name = recipient_name
# change_detected = True
# if positive_value_exists(change_detected):
# activity_notice_seed.save()
# except Exception as e:
# status += "COULD_NOT_UPDATE_ACTIVITY_NOTICE_SEED_FOR_POSTS: " + str(e) + " "
# status += results['status']
elif update_only:
status += "DID_NOT_CREATE_SEED-UPDATE_ONLY_MODE "
elif results['success']:
if positive_value_exists(send_to_email) or positive_value_exists(send_to_sms):
date_of_notice = now()
speaker_organization_we_vote_ids = [speaker_organization_we_vote_id]
speaker_organization_we_vote_ids_serialized = json.dumps(speaker_organization_we_vote_ids)
speaker_voter_we_vote_ids = [speaker_voter_we_vote_id]
speaker_voter_we_vote_ids_serialized = json.dumps(speaker_voter_we_vote_ids)
create_results = activity_manager.create_activity_notice_seed(
date_of_notice=date_of_notice,
kind_of_seed=NOTICE_VOTER_DAILY_SUMMARY_SEED,
recipient_name=recipient_name,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_organization_we_vote_ids_serialized=speaker_organization_we_vote_ids_serialized,
speaker_voter_we_vote_ids_serialized=speaker_voter_we_vote_ids_serialized)
status += create_results['status']
else:
status += "NOT_SENDING-NEITHER_SEND_TO_EMAIL_NOR_SMS_SET "
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
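# The function above (and the commented-out update path inside it) stores speaker
# we_vote_id lists as serialized JSON. A minimal standalone sketch of that
# deserialize-append-reserialize pattern, using a hypothetical helper name (not
# part of ActivityManager):

```python
import json


def append_we_vote_id(serialized_id_list, new_we_vote_id):
    """Deserialize a JSON list, append new_we_vote_id if missing, re-serialize.

    Returns (new_serialized_id_list, change_detected), mirroring the
    change_detected convention used when deciding whether to save a seed.
    """
    id_list = json.loads(serialized_id_list) if serialized_id_list else []
    if new_we_vote_id not in id_list:
        id_list.append(new_we_vote_id)
        return json.dumps(id_list), True
    return serialized_id_list, False
```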
def update_or_create_voter_daily_summary_seeds_from_seed(activity_notice_seed):
"""
Take in seeds like NOTICE_ACTIVITY_POST_SEED and create a NOTICE_VOTER_DAILY_SUMMARY_SEED for
each of the speaker_voter's friends
:param activity_notice_seed:
:return:
"""
status = ''
success = True
activity_notice_count = 0
friend_manager = FriendManager()
seed_types_that_always_cause_the_creation_of_voter_daily_summary_seed = [NOTICE_ACTIVITY_POST_SEED]
# Who needs to see a notice?
audience = 'FRIENDS'
# audience = 'ONE_FRIEND'
if audience == 'FRIENDS':
# Retrieve all friends of activity_notice_seed.speaker_voter_we_vote_id
status += "KIND_OF_LIST_CURRENT_FRIENDS_DAILY_SUMMARY "
retrieve_current_friends_as_voters_results = \
friend_manager.retrieve_current_friends_as_voters(activity_notice_seed.speaker_voter_we_vote_id)
success = retrieve_current_friends_as_voters_results['success']
status += retrieve_current_friends_as_voters_results['status']
if retrieve_current_friends_as_voters_results['friend_list_found']:
current_friend_list = retrieve_current_friends_as_voters_results['friend_list']
for friend_voter in current_friend_list:
create_voter_daily_summary_seed_for_this_voter = False
update_only = False
if activity_notice_seed.kind_of_seed == NOTICE_FRIEND_ENDORSEMENTS_SEED:
# Always add friend endorsements to the daily summary of activity (NOTICE_VOTER_DAILY_SUMMARY),
# but switch to update-only mode below if this voter already receives a direct
# notification about these endorsements
create_voter_daily_summary_seed_for_this_voter = True
opinions_email_turned_on = friend_voter.is_notification_status_flag_set(
NOTIFICATION_FRIEND_OPINIONS_OTHER_REGIONS_EMAIL)
if positive_value_exists(opinions_email_turned_on):
# Since the friend_voter is already getting a notice about the speaker_voter's endorsements
# don't create a VOTER_DAILY_SUMMARY *just* for NOTICE_FRIEND_ENDORSEMENTS
# but updating is ok.
update_only = True
elif activity_notice_seed.kind_of_seed \
in seed_types_that_always_cause_the_creation_of_voter_daily_summary_seed:
create_voter_daily_summary_seed_for_this_voter = True
# Decide whether to send email or sms based on friend's notification settings
send_to_email = friend_voter.is_notification_status_flag_set(
NOTIFICATION_VOTER_DAILY_SUMMARY_EMAIL)
send_to_sms = friend_voter.is_notification_status_flag_set(
NOTIFICATION_VOTER_DAILY_SUMMARY_SMS)
if create_voter_daily_summary_seed_for_this_voter:
results = update_or_create_voter_daily_summary_seed(
recipient_name=friend_voter.get_full_name(real_name_only=True),
recipient_voter_we_vote_id=friend_voter.we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_organization_we_vote_id=activity_notice_seed.speaker_organization_we_vote_id,
speaker_voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
update_only=update_only,
)
status += results['status']
else:
status += "CREATE_DAILY_SUMMARY_FROM_SEED_NO_FRIENDS "
try:
activity_notice_seed.added_to_voter_daily_summary = True
activity_notice_seed.save()
status += "MARKED_ADDED_TO_VOTER_DAILY_SUMMARY "
except Exception as e:
status += "ADDED_TO_VOTER_DAILY_SUMMARY-CANNOT_MARK_CREATED: " + str(e) + " "
success = False
results = {
'success': success,
'status': status,
'activity_notice_count': activity_notice_count,
}
return results
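# is_notification_status_flag_set(), used above to derive send_to_email and
# send_to_sms, is assumed to be a bitwise test against a packed integer of
# notification settings. A standalone illustration with made-up bit positions
# (not We Vote's real flag values):

```python
# Hypothetical example flag values, one bit per notification channel
DEMO_NOTIFICATION_VOTER_DAILY_SUMMARY_EMAIL = 1 << 6
DEMO_NOTIFICATION_VOTER_DAILY_SUMMARY_SMS = 1 << 7


def is_flag_set_demo(notification_settings_flags, flag_to_check):
    # A setting is "on" when its bit is set in the packed integer
    return bool(notification_settings_flags & flag_to_check)
```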
def schedule_activity_notices_from_seed(activity_notice_seed):
status = ''
success = True
activity_notice_count = 0
activity_manager = ActivityManager()
# This is a switch with different branches for:
# NOTICE_CAMPAIGNX_NEWS_ITEM_SEED
# NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED
# NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED
# NOTICE_FRIEND_ENDORSEMENTS_SEED
# NOTICE_VOTER_DAILY_SUMMARY_SEED
if activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_NEWS_ITEM_SEED:
from campaign.controllers_email_outbound import campaignx_news_item_send
from campaign.controllers import fetch_sentence_string_from_politician_list
from campaign.models import CampaignXManager
from organization.controllers import transform_campaigns_url
campaignx_manager = CampaignXManager()
voter_manager = VoterManager()
campaigns_root_url_verified = transform_campaigns_url('') # Change to client URL if needed
results = campaignx_manager.retrieve_campaignx(campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id)
campaignx_title = ''
campaignx_url = campaigns_root_url_verified + '/id/' + activity_notice_seed.campaignx_we_vote_id # Default link
we_vote_hosted_campaign_photo_large_url = ''
if results['campaignx_found']:
campaignx = results['campaignx']
campaignx_title = campaignx.campaign_title
if positive_value_exists(campaignx.seo_friendly_path):
campaignx_url = campaigns_root_url_verified + '/c/' + campaignx.seo_friendly_path
we_vote_hosted_campaign_photo_large_url = campaignx.we_vote_hosted_campaign_photo_large_url
speaker_voter_name = ''
if positive_value_exists(activity_notice_seed.speaker_voter_we_vote_id):
speaker_voter_results = \
voter_manager.retrieve_voter_by_we_vote_id(activity_notice_seed.speaker_voter_we_vote_id)
if speaker_voter_results['voter_found']:
speaker_voter = speaker_voter_results['voter']
speaker_voter_name = speaker_voter.get_full_name(real_name_only=True)
politician_list = campaignx_manager.retrieve_campaignx_politician_list(
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id)
politician_count = len(politician_list)
if politician_count > 0:
politician_full_sentence_string = fetch_sentence_string_from_politician_list(
politician_list=politician_list,
)
else:
politician_full_sentence_string = ''
# Send to the campaignX supporters (which includes the campaign owner)
continue_retrieving = True
activity_notice_id_already_reviewed_list = []
safety_valve_count = 0
while continue_retrieving and success \
and safety_valve_count < 5000: # Current limit: 500,000 supporters (5000 loops)
safety_valve_count += 1
results = activity_manager.retrieve_activity_notice_list(
activity_notice_seed_id=activity_notice_seed.id,
to_be_sent_to_email=True,
retrieve_count_limit=100,
activity_notice_id_already_reviewed_list=activity_notice_id_already_reviewed_list,
)
if not results['success']:
status += results['status']
success = False
elif results['activity_notice_list_found']:
activity_notice_list = results['activity_notice_list']
for activity_notice in activity_notice_list:
send_results = campaignx_news_item_send(
campaignx_news_item_we_vote_id=activity_notice_seed.campaignx_news_item_we_vote_id,
campaigns_root_url_verified=campaigns_root_url_verified,
campaignx_title=campaignx_title,
campaignx_url=campaignx_url,
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
politician_count=politician_count,
politician_full_sentence_string=politician_full_sentence_string,
recipient_voter_we_vote_id=activity_notice.recipient_voter_we_vote_id,
speaker_voter_name=speaker_voter_name,
speaker_voter_we_vote_id=activity_notice.speaker_voter_we_vote_id,
statement_subject=activity_notice_seed.statement_subject,
statement_text_preview=activity_notice_seed.statement_text_preview,
we_vote_hosted_campaign_photo_large_url=we_vote_hosted_campaign_photo_large_url,
)
activity_notice_id_already_reviewed_list.append(activity_notice.id)
try:
activity_notice.scheduled_to_email = True
activity_notice.save()
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_NEWS_ITEM_SCHEDULED: " + str(e) + " "
success = False
if send_results['success']:
try:
activity_notice.sent_to_email = True
activity_notice.scheduled_to_sms = True
activity_notice.sent_to_sms = True
activity_notice.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_NEWS_ITEM: " + str(e) + " "
success = False
else:
status += send_results['status']
success = False
else:
continue_retrieving = False
try:
activity_notice_seed.activity_notices_scheduled = True
activity_notice_seed.save()
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_NEWS_ITEM_SEED_SCHEDULED: " + str(e) + " "
success = False
if success:
try:
activity_notice_seed.scheduled_to_email = True
activity_notice_seed.sent_to_email = True
# activity_notice_seed.scheduled_to_sms = True
# activity_notice_seed.sent_to_sms = True
activity_notice_seed.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_NEWS_ITEM_SEED: " + str(e) + " "
success = False
elif activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED:
from campaign.controllers_email_outbound import campaignx_friend_has_supported_send, \
campaignx_supporter_initial_response_send
# Send to the person who just signed
send_results = campaignx_supporter_initial_response_send(
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
recipient_voter_we_vote_id=activity_notice_seed.recipient_voter_we_vote_id,
)
status += send_results['status']
if not send_results['success']:
success = False
# Successful or not, we need to mark activity_notice_seed.activity_notices_scheduled as True
# to prevent an infinite loop
try:
activity_notice_seed.activity_notices_scheduled = True
activity_notice_seed.save()
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED_AS_SCHEDULED: " \
+ str(e) + " "
success = False
# Send to the friends of the person who signed the campaign
continue_retrieving = True
activity_notice_id_already_reviewed_list = []
safety_valve_count = 0
while continue_retrieving and success \
and safety_valve_count < 500:  # Current limit: 50,000 friends (500 loops of 100 per)
safety_valve_count += 1
results = activity_manager.retrieve_activity_notice_list(
activity_notice_seed_id=activity_notice_seed.id,
to_be_sent_to_email=True,
retrieve_count_limit=100,
activity_notice_id_already_reviewed_list=activity_notice_id_already_reviewed_list,
)
if not results['success']:
status += results['status']
success = False
elif results['activity_notice_list_found']:
activity_notice_list = results['activity_notice_list']
for activity_notice in activity_notice_list:
send_results = campaignx_friend_has_supported_send(
campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id,
recipient_voter_we_vote_id=activity_notice.recipient_voter_we_vote_id,
speaker_voter_we_vote_id=activity_notice.speaker_voter_we_vote_id)
activity_notice_id_already_reviewed_list.append(activity_notice.id)
if send_results['success']:
try:
activity_notice.scheduled_to_email = True
activity_notice.sent_to_email = True
activity_notice.scheduled_to_sms = True
activity_notice.sent_to_sms = True
activity_notice.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_FRIEND_HAS_SUPPORTED: " + str(e) + " "
success = False
else:
status += send_results['status']
success = False
else:
continue_retrieving = False
if success:
try:
# activity_notice_seed.activity_notices_scheduled = True # Saved above
activity_notice_seed.scheduled_to_email = True
activity_notice_seed.sent_to_email = True
# activity_notice_seed.scheduled_to_sms = True
# activity_notice_seed.sent_to_sms = True
activity_notice_seed.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED: " + str(e) + " "
success = False
elif activity_notice_seed.kind_of_seed == NOTICE_FRIEND_ENDORSEMENTS_SEED:
# Schedule/send emails
# For these kinds of seeds, we just send an email notification for the activity_notice
# (which is also displayed to each voter in the header bar)
continue_retrieving = True
activity_notice_id_already_reviewed_list = []
safety_valve_count = 0
while continue_retrieving and success \
and safety_valve_count < 500:  # Current limit: 50,000 friends (500 loops of 100 per)
safety_valve_count += 1
results = activity_manager.retrieve_activity_notice_list(
activity_notice_seed_id=activity_notice_seed.id,
to_be_sent_to_email=True,
retrieve_count_limit=100,
activity_notice_id_already_reviewed_list=activity_notice_id_already_reviewed_list,
)
if not results['success']:
status += results['status']
success = False
elif results['activity_notice_list_found']:
position_name_list = []
if positive_value_exists(activity_notice_seed.position_names_for_friends_serialized):
position_name_list_for_friends = \
json.loads(activity_notice_seed.position_names_for_friends_serialized)
position_name_list += position_name_list_for_friends
if positive_value_exists(activity_notice_seed.position_names_for_public_serialized):
position_name_list_for_public = \
json.loads(activity_notice_seed.position_names_for_public_serialized)
position_name_list += position_name_list_for_public
activity_notice_list = results['activity_notice_list']
for activity_notice in activity_notice_list:
send_results = notice_friend_endorsements_send(
speaker_voter_we_vote_id=activity_notice.speaker_voter_we_vote_id,
recipient_voter_we_vote_id=activity_notice.recipient_voter_we_vote_id,
activity_tidbit_we_vote_id=activity_notice_seed.we_vote_id,
position_name_list=position_name_list)
activity_notice_id_already_reviewed_list.append(activity_notice.id)
if send_results['success']:
try:
activity_notice.scheduled_to_email = True
activity_notice.sent_to_email = True
activity_notice.scheduled_to_sms = True
activity_notice.sent_to_sms = True
activity_notice.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_ACTIVITY_NOTICE: " + str(e) + " "
success = False
else:
status += send_results['status']
success = False
else:
continue_retrieving = False
try:
activity_notice_seed.activity_notices_scheduled = True
activity_notice_seed.save()
status += "SCHEDULE_ACTIVITY_NOTICE_FRIEND_ENDORSEMENTS_SEED_AS_SCHEDULED "
except Exception as e:
status += "SCHEDULE_ACTIVITY_NOTICES_FRIEND_ENDORSEMENTS_SEED-CANNOT_MARK_NOTICES_CREATED: " + str(e) + " "
success = False
elif activity_notice_seed.kind_of_seed == NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED:
from campaign.controllers_email_outbound import campaignx_super_share_item_send
from campaign.controllers import fetch_sentence_string_from_politician_list
from campaign.models import CampaignXManager
from organization.controllers import transform_campaigns_url
from share.models import ShareManager
campaignx_manager = CampaignXManager()
voter_manager = VoterManager()
share_manager = ShareManager()
campaigns_root_url_verified = transform_campaigns_url('') # Change to client URL if needed
results = campaignx_manager.retrieve_campaignx(campaignx_we_vote_id=activity_notice_seed.campaignx_we_vote_id)
campaignx_title = ''
campaignx_url = campaigns_root_url_verified + '/id/' + activity_notice_seed.campaignx_we_vote_id # Default link
we_vote_hosted_campaign_photo_large_url = ''
if results['campaignx_found']:
campaignx = results['campaignx']
campaignx_title = campaignx.campaign_title
if positive_value_exists(campaignx.seo_friendly_path):
campaignx_url = campaigns_root_url_verified + '/c/' + campaignx.seo_friendly_path
we_vote_hosted_campaign_photo_large_url = campaignx.we_vote_hosted_campaign_photo_large_url
speaker_email_address = ''
speaker_photo = ''
speaker_voter_name = ''
if positive_value_exists(activity_notice_seed.speaker_voter_we_vote_id):
speaker_voter_results = \
voter_manager.retrieve_voter_by_we_vote_id(activity_notice_seed.speaker_voter_we_vote_id)
if speaker_voter_results['voter_found']:
speaker_voter = speaker_voter_results['voter']
if positive_value_exists(speaker_voter.email_ownership_is_verified):
speaker_email_address = speaker_voter.email
speaker_photo = speaker_voter.we_vote_hosted_profile_image_url_large
speaker_voter_name = speaker_voter.get_full_name(real_name_only=True)
if not positive_value_exists(activity_notice_seed.super_share_item_id):
status += "MISSING_SUPER_SHARE_ITEM_ID_FROM_ACTIVITY_NOTICE_SEED "
success = False
# Send to the SuperShareEmailRecipients
continue_retrieving = True
super_share_email_recipient_already_reviewed_list = []
safety_valve_count = 0
while continue_retrieving and success \
and safety_valve_count < 5000:  # Current limit: 500,000 recipients (5000 loops of 100)
safety_valve_count += 1
results = share_manager.retrieve_super_share_email_recipient_list(
read_only=False,
retrieve_count_limit=100,
retrieve_only_if_not_sent=True,
super_share_email_recipient_already_reviewed_list=super_share_email_recipient_already_reviewed_list,
super_share_item_id=activity_notice_seed.super_share_item_id,
)
if not results['success']:
status += results['status']
success = False
elif results['email_recipient_list_found']:
email_recipient_list = results['email_recipient_list']
for super_share_email_recipient in email_recipient_list:
send_results = campaignx_super_share_item_send(
campaignx_news_item_we_vote_id=activity_notice_seed.campaignx_news_item_we_vote_id,
campaigns_root_url_verified=campaigns_root_url_verified,
campaignx_title=campaignx_title,
recipient_email_address=super_share_email_recipient.email_address_text,
recipient_first_name=super_share_email_recipient.recipient_first_name,
recipient_voter_we_vote_id=super_share_email_recipient.recipient_voter_we_vote_id,
speaker_email_address=speaker_email_address,
speaker_photo=speaker_photo,
speaker_voter_name=speaker_voter_name,
speaker_voter_we_vote_id=super_share_email_recipient.shared_by_voter_we_vote_id,
statement_subject=activity_notice_seed.statement_subject,
statement_text_preview=activity_notice_seed.statement_text_preview,
view_shared_campaignx_url=campaignx_url,
we_vote_hosted_campaign_photo_large_url=we_vote_hosted_campaign_photo_large_url,
)
super_share_email_recipient_already_reviewed_list.append(super_share_email_recipient.id)
if send_results['success']:
try:
super_share_email_recipient.date_sent_to_email = now()
super_share_email_recipient.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_SUPER_SHARE_EMAIL_RECIPIENT: " + str(e) + " "
success = False
else:
status += send_results['status']
success = False
else:
continue_retrieving = False
# Mark activity_notices_scheduled as True whether success or not
try:
activity_notice_seed.activity_notices_scheduled = True
activity_notice_seed.scheduled_to_email = True
activity_notice_seed.save()
activity_notice_count += 1
except Exception as e:
status += "FAILED_SAVING_NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED_AS_SCHEDULED: " + str(e) + " "
success = False
if success:
try:
activity_notice_seed.sent_to_email = True
activity_notice_seed.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED_AS_SENT: " + str(e) + " "
success = False
elif activity_notice_seed.kind_of_seed == NOTICE_VOTER_DAILY_SUMMARY_SEED:
# TODO: Make this start date either the time the last seed was created OR 24 hours ago, whichever is later
assemble_activity_start_date = now() - timedelta(hours=24)
assemble_results = assemble_voter_daily_summary(
assemble_activity_start_date=assemble_activity_start_date,
recipient_voter_we_vote_id=activity_notice_seed.recipient_voter_we_vote_id,
)
send_results = notice_voter_daily_summary_send(
recipient_voter_we_vote_id=activity_notice_seed.recipient_voter_we_vote_id,
friend_activity_dict_list=assemble_results['friend_activity_dict_list'],
introduction_line=assemble_results['introduction_line'],
subject=assemble_results['subject'])
try:
activity_notice_seed.activity_notices_scheduled = True
activity_notice_seed.save()
activity_notice_count += 1
except Exception as e:
status += "FAILED_SAVING_NOTICE_VOTER_DAILY_SUMMARY_SEED_AS_SCHEDULED: " + str(e) + " "
success = False
if send_results['success']:
try:
activity_notice_seed.scheduled_to_email = True
activity_notice_seed.sent_to_email = True
activity_notice_seed.save()
activity_notice_count += 1
# We'll want to create a routine that connects up to the SendGrid API to tell us
# when the message was received or bounced
except Exception as e:
status += "FAILED_SAVING_NOTICE_VOTER_DAILY_SUMMARY_SEED_AS_SENT: " + str(e) + " "
success = False
else:
status += send_results['status']
success = False
results = {
'success': success,
'status': status,
'activity_notice_count': activity_notice_count,
}
return results
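# Each branch above retrieves notices in batches of 100, tracks ids already
# reviewed so the next query skips them, and caps the loop with a safety valve.
# A self-contained sketch of that batching pattern; fetch_batch stands in for
# ActivityManager.retrieve_activity_notice_list and is purely illustrative:

```python
def process_in_batches(all_ids, fetch_batch, batch_size=100, max_loops=500):
    """Drain all_ids via fetch_batch, never looping more than max_loops times."""
    already_reviewed = []
    loops = 0
    while loops < max_loops:
        loops += 1
        batch = fetch_batch(all_ids, already_reviewed, batch_size)
        if not batch:
            break  # nothing left to process
        for item_id in batch:
            # ...send the notice here, then mark it reviewed...
            already_reviewed.append(item_id)
    return already_reviewed


def fetch_batch_demo(all_ids, already_reviewed, limit):
    # Stand-in for a limited DB query excluding already-reviewed ids
    remaining = [i for i in all_ids if i not in already_reviewed]
    return remaining[:limit]
```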
def update_or_create_activity_notice_for_campaignx_news_item(
activity_notice_seed_id=0,
campaignx_news_item_we_vote_id='',
campaignx_we_vote_id='',
kind_of_seed='',
kind_of_notice='',
number_of_comments=0,
number_of_likes=0,
recipient_voter_we_vote_id='',
send_to_email=True,
send_to_sms=True,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_subject='',
statement_text_preview=''):
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_activity_notice_for_campaignx(
campaignx_news_item_we_vote_id=campaignx_news_item_we_vote_id,
campaignx_we_vote_id=campaignx_we_vote_id,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_found']:
try:
activity_notice = results['activity_notice']
change_found = False
if positive_value_exists(campaignx_we_vote_id) and \
campaignx_we_vote_id != activity_notice.campaignx_we_vote_id:
activity_notice.campaignx_we_vote_id = campaignx_we_vote_id
change_found = True
if positive_value_exists(number_of_comments) and number_of_comments != activity_notice.number_of_comments:
activity_notice.number_of_comments = number_of_comments
change_found = True
if positive_value_exists(number_of_likes) and number_of_likes != activity_notice.number_of_likes:
activity_notice.number_of_likes = number_of_likes
change_found = True
if positive_value_exists(speaker_name) and speaker_name != activity_notice.speaker_name:
activity_notice.speaker_name = speaker_name
change_found = True
if positive_value_exists(speaker_profile_image_url_medium) and \
speaker_profile_image_url_medium != activity_notice.speaker_profile_image_url_medium:
activity_notice.speaker_profile_image_url_medium = speaker_profile_image_url_medium
change_found = True
if positive_value_exists(speaker_profile_image_url_tiny) and \
speaker_profile_image_url_tiny != activity_notice.speaker_profile_image_url_tiny:
activity_notice.speaker_profile_image_url_tiny = speaker_profile_image_url_tiny
change_found = True
if positive_value_exists(statement_subject) and \
statement_subject != activity_notice.statement_subject:
activity_notice.statement_subject = statement_subject
change_found = True
if positive_value_exists(statement_text_preview) and \
statement_text_preview != activity_notice.statement_text_preview:
activity_notice.statement_text_preview = statement_text_preview
change_found = True
if change_found:
activity_notice.save()
except Exception as e:
status += "FAILED_ACTIVITY_NOTICE_SAVE_CAMPAIGNX_NEWS_ITEM: " + str(e) + ' '
status += results['status']
elif results['success']:
date_of_notice = now()
create_results = activity_manager.create_activity_notice(
activity_notice_seed_id=activity_notice_seed_id,
campaignx_news_item_we_vote_id=campaignx_news_item_we_vote_id,
campaignx_we_vote_id=campaignx_we_vote_id,
date_of_notice=date_of_notice,
kind_of_notice=kind_of_notice,
kind_of_seed=kind_of_seed,
number_of_comments=number_of_comments,
number_of_likes=number_of_likes,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_subject=statement_subject,
statement_text_preview=statement_text_preview)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_for_campaignx_supporter_initial_response(
activity_notice_seed_id=0,
campaignx_we_vote_id='',
kind_of_seed='',
kind_of_notice='',
number_of_comments=0,
number_of_likes=0,
recipient_voter_we_vote_id='',
send_to_email=True,
send_to_sms=True,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_text_preview=''):
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_from_speaker_and_recipient(
activity_notice_seed_id=activity_notice_seed_id,
campaignx_we_vote_id=campaignx_we_vote_id,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_found']:
try:
activity_notice = results['activity_notice']
change_found = False
if positive_value_exists(campaignx_we_vote_id) and \
campaignx_we_vote_id != activity_notice.campaignx_we_vote_id:
activity_notice.campaignx_we_vote_id = campaignx_we_vote_id
change_found = True
if positive_value_exists(number_of_comments) and number_of_comments != activity_notice.number_of_comments:
activity_notice.number_of_comments = number_of_comments
change_found = True
if positive_value_exists(number_of_likes) and number_of_likes != activity_notice.number_of_likes:
activity_notice.number_of_likes = number_of_likes
change_found = True
if positive_value_exists(speaker_name) and speaker_name != activity_notice.speaker_name:
activity_notice.speaker_name = speaker_name
change_found = True
if positive_value_exists(statement_text_preview) and \
statement_text_preview != activity_notice.statement_text_preview:
activity_notice.statement_text_preview = statement_text_preview
change_found = True
if change_found:
activity_notice.save()
except Exception as e:
status += "FAILED_ACTIVITY_NOTICE_SAVE_CAMPAIGNX_INITIAL_RESPONSE: " + str(e) + ' '
status += results['status']
elif results['success']:
date_of_notice = now()
create_results = activity_manager.create_activity_notice(
activity_notice_seed_id=activity_notice_seed_id,
campaignx_we_vote_id=campaignx_we_vote_id,
date_of_notice=date_of_notice,
kind_of_notice=kind_of_notice,
kind_of_seed=kind_of_seed,
number_of_comments=number_of_comments,
number_of_likes=number_of_likes,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_text_preview=statement_text_preview)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_for_friend_campaignx_support(
activity_notice_seed_id=0,
campaignx_we_vote_id='',
kind_of_seed='',
kind_of_notice='',
number_of_comments=0,
number_of_likes=0,
recipient_voter_we_vote_id='',
send_to_email=False,
send_to_sms=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_text_preview=''):
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_from_speaker_and_recipient(
activity_notice_seed_id=activity_notice_seed_id,
campaignx_we_vote_id=campaignx_we_vote_id,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_found']:
try:
activity_notice = results['activity_notice']
change_found = False
if positive_value_exists(campaignx_we_vote_id) and \
campaignx_we_vote_id != activity_notice.campaignx_we_vote_id:
activity_notice.campaignx_we_vote_id = campaignx_we_vote_id
change_found = True
if positive_value_exists(number_of_comments) and number_of_comments != activity_notice.number_of_comments:
activity_notice.number_of_comments = number_of_comments
change_found = True
if positive_value_exists(number_of_likes) and number_of_likes != activity_notice.number_of_likes:
activity_notice.number_of_likes = number_of_likes
change_found = True
if positive_value_exists(speaker_name) and speaker_name != activity_notice.speaker_name:
activity_notice.speaker_name = speaker_name
change_found = True
if positive_value_exists(statement_text_preview) and \
statement_text_preview != activity_notice.statement_text_preview:
activity_notice.statement_text_preview = statement_text_preview
change_found = True
if change_found:
activity_notice.save()
except Exception as e:
status += "FAILED_ACTIVITY_NOTICE_SAVE: " + str(e) + ' '
status += results['status']
elif results['success']:
date_of_notice = now()
create_results = activity_manager.create_activity_notice(
activity_notice_seed_id=activity_notice_seed_id,
campaignx_we_vote_id=campaignx_we_vote_id,
date_of_notice=date_of_notice,
kind_of_notice=kind_of_notice,
kind_of_seed=kind_of_seed,
number_of_comments=number_of_comments,
number_of_likes=number_of_likes,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_text_preview=statement_text_preview)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_for_friend_endorsements(
activity_notice_seed_id=0,
activity_tidbit_we_vote_id='',
kind_of_seed='',
kind_of_notice='',
position_name_list_serialized='',
position_we_vote_id_list_serialized='',
recipient_voter_we_vote_id='',
send_to_email=False,
send_to_sms=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny=''):
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_from_speaker_and_recipient(
activity_notice_seed_id=activity_notice_seed_id,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
# Combine friends and public into single position_we_vote_id_list_serialized
if results['activity_notice_found']:
try:
activity_notice = results['activity_notice']
activity_notice.position_name_list_serialized = position_name_list_serialized
activity_notice.position_we_vote_id_list_serialized = position_we_vote_id_list_serialized
if positive_value_exists(activity_tidbit_we_vote_id):
activity_notice.activity_tidbit_we_vote_id = activity_tidbit_we_vote_id
activity_notice.save()
except Exception as e:
            status += "FAILED_ACTIVITY_NOTICE_SAVE: " + str(e) + ' '
            success = False
status += results['status']
elif results['success']:
date_of_notice = now()
create_results = activity_manager.create_activity_notice(
activity_notice_seed_id=activity_notice_seed_id,
activity_tidbit_we_vote_id=activity_tidbit_we_vote_id,
date_of_notice=date_of_notice,
kind_of_notice=kind_of_notice,
kind_of_seed=kind_of_seed,
position_name_list_serialized=position_name_list_serialized,
position_we_vote_id_list_serialized=position_we_vote_id_list_serialized,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_for_friend_posts(
activity_notice_seed_id=0,
activity_tidbit_we_vote_id='',
kind_of_seed='',
kind_of_notice='',
number_of_comments=0,
number_of_likes=0,
recipient_voter_we_vote_id='',
send_to_email=False,
send_to_sms=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_text_preview=''):
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_from_speaker_and_recipient(
activity_notice_seed_id=activity_notice_seed_id,
kind_of_notice=kind_of_notice,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_found']:
try:
activity_notice = results['activity_notice']
change_found = False
if positive_value_exists(activity_tidbit_we_vote_id) and \
activity_tidbit_we_vote_id != activity_notice.activity_tidbit_we_vote_id:
activity_notice.activity_tidbit_we_vote_id = activity_tidbit_we_vote_id
change_found = True
if positive_value_exists(number_of_comments) and number_of_comments != activity_notice.number_of_comments:
activity_notice.number_of_comments = number_of_comments
change_found = True
if positive_value_exists(number_of_likes) and number_of_likes != activity_notice.number_of_likes:
activity_notice.number_of_likes = number_of_likes
change_found = True
if positive_value_exists(speaker_name) and speaker_name != activity_notice.speaker_name:
activity_notice.speaker_name = speaker_name
change_found = True
if positive_value_exists(statement_text_preview) and \
statement_text_preview != activity_notice.statement_text_preview:
activity_notice.statement_text_preview = statement_text_preview
change_found = True
if change_found:
activity_notice.save()
except Exception as e:
            status += "FAILED_ACTIVITY_NOTICE_SAVE: " + str(e) + ' '
            success = False
status += results['status']
elif results['success']:
date_of_notice = now()
create_results = activity_manager.create_activity_notice(
activity_notice_seed_id=activity_notice_seed_id,
activity_tidbit_we_vote_id=activity_tidbit_we_vote_id,
date_of_notice=date_of_notice,
kind_of_notice=kind_of_notice,
kind_of_seed=kind_of_seed,
number_of_comments=number_of_comments,
number_of_likes=number_of_likes,
recipient_voter_we_vote_id=recipient_voter_we_vote_id,
send_to_email=send_to_email,
send_to_sms=send_to_sms,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_text_preview=statement_text_preview)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_seed_for_activity_posts(
activity_post_we_vote_id='',
visibility_is_public=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_text=''):
"""
    NOTE: This seed is tied to ANY activity_posts from this speaker, not only the one passed in
:param activity_post_we_vote_id: Not used for updates
:param visibility_is_public: Not used for updates
:param speaker_name:
:param speaker_organization_we_vote_id:
:param speaker_voter_we_vote_id:
:param speaker_profile_image_url_medium:
:param speaker_profile_image_url_tiny:
:param statement_text:
:return:
"""
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_seed_from_speaker(
kind_of_seed=NOTICE_ACTIVITY_POST_SEED,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_seed_found']:
activity_notice_seed = results['activity_notice_seed']
try:
# This SEED might have multiple ActivityPost entries associated with it
most_recent_activity_post = None
most_recent_activity_post_date = None
# Since the activity is being saved microseconds before the activity_notice_seed is stored, we want to
# "rewind" the date_of_notice by 60 seconds
since_date = activity_notice_seed.date_of_notice - timedelta(seconds=60)
post_results = activity_manager.retrieve_activity_post_list(
speaker_voter_we_vote_id_list=[speaker_voter_we_vote_id],
since_date=since_date,
limit_to_visibility_is_friends_only=True)
activity_tidbit_we_vote_ids_for_friends = []
activity_tidbit_we_vote_ids_for_friends_serialized = None
if post_results['success']:
friends_post_list = post_results['activity_post_list']
for one_post in friends_post_list:
activity_tidbit_we_vote_ids_for_friends.append(one_post.we_vote_id)
if not one_post.date_created:
pass
elif most_recent_activity_post_date and one_post.date_created < most_recent_activity_post_date:
pass
else:
most_recent_activity_post_date = one_post.date_created
most_recent_activity_post = one_post
activity_tidbit_we_vote_ids_for_friends_serialized = json.dumps(activity_tidbit_we_vote_ids_for_friends)
post_results = activity_manager.retrieve_activity_post_list(
speaker_voter_we_vote_id_list=[speaker_voter_we_vote_id],
since_date=since_date,
limit_to_visibility_is_public=True)
activity_tidbit_we_vote_ids_for_public = []
activity_tidbit_we_vote_ids_for_public_serialized = None
if post_results['success']:
public_post_list = post_results['activity_post_list']
for one_post in public_post_list:
activity_tidbit_we_vote_ids_for_public.append(one_post.we_vote_id)
if not one_post.date_created:
pass
elif most_recent_activity_post_date and one_post.date_created < most_recent_activity_post_date:
pass
else:
most_recent_activity_post_date = one_post.date_created
most_recent_activity_post = one_post
activity_tidbit_we_vote_ids_for_public_serialized = json.dumps(activity_tidbit_we_vote_ids_for_public)
activity_notice_seed.activity_tidbit_we_vote_ids_for_friends_serialized = \
activity_tidbit_we_vote_ids_for_friends_serialized
activity_notice_seed.activity_tidbit_we_vote_ids_for_public_serialized = \
activity_tidbit_we_vote_ids_for_public_serialized
activity_notice_seed.speaker_name = speaker_name
activity_notice_seed.speaker_profile_image_url_medium = speaker_profile_image_url_medium
activity_notice_seed.speaker_profile_image_url_tiny = speaker_profile_image_url_tiny
if most_recent_activity_post and most_recent_activity_post.statement_text:
activity_notice_seed.statement_text_preview = return_first_x_words(
most_recent_activity_post.statement_text,
number_of_words_to_return=20,
include_ellipses=True)
activity_notice_seed.save()
except Exception as e:
            status += "COULD_NOT_UPDATE_ACTIVITY_NOTICE_SEED_FOR_POSTS: " + str(e) + " "
            success = False
status += results['status']
elif results['success']:
date_of_notice = now()
activity_tidbit_we_vote_ids_for_friends = []
activity_tidbit_we_vote_ids_for_friends_serialized = None
activity_tidbit_we_vote_ids_for_public = []
activity_tidbit_we_vote_ids_for_public_serialized = None
if positive_value_exists(visibility_is_public):
activity_tidbit_we_vote_ids_for_public.append(activity_post_we_vote_id)
activity_tidbit_we_vote_ids_for_public_serialized = json.dumps(activity_tidbit_we_vote_ids_for_public)
else:
activity_tidbit_we_vote_ids_for_friends.append(activity_post_we_vote_id)
activity_tidbit_we_vote_ids_for_friends_serialized = json.dumps(activity_tidbit_we_vote_ids_for_friends)
if positive_value_exists(statement_text):
statement_text_preview = return_first_x_words(
statement_text,
number_of_words_to_return=30,
include_ellipses=True)
else:
statement_text_preview = ''
create_results = activity_manager.create_activity_notice_seed(
activity_notices_scheduled=True, # Set this to true so it gets ignored by the email-sending routine
activity_tidbit_we_vote_ids_for_friends_serialized=activity_tidbit_we_vote_ids_for_friends_serialized,
activity_tidbit_we_vote_ids_for_public_serialized=activity_tidbit_we_vote_ids_for_public_serialized,
date_of_notice=date_of_notice,
kind_of_seed=NOTICE_ACTIVITY_POST_SEED,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_text_preview=statement_text_preview)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_seed_for_campaignx_news_item(
campaignx_news_item_we_vote_id='',
campaignx_we_vote_id='',
send_campaignx_news_item=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_subject='',
statement_text=''):
status = ''
success = True
activity_notice_seed = None
activity_notice_seed_found = False
activity_manager = ActivityManager()
results = activity_manager.retrieve_activity_notice_seed(
campaignx_news_item_we_vote_id=campaignx_news_item_we_vote_id,
campaignx_we_vote_id=campaignx_we_vote_id,
kind_of_seed=NOTICE_CAMPAIGNX_NEWS_ITEM_SEED,
)
if results['activity_notice_seed_found']:
activity_notice_seed = results['activity_notice_seed']
try:
if positive_value_exists(send_campaignx_news_item):
activity_notice_seed.send_to_email = True
if not positive_value_exists(activity_notice_seed.date_sent_to_email):
activity_notice_seed.date_sent_to_email = now()
activity_notice_seed.speaker_name = speaker_name
activity_notice_seed.speaker_profile_image_url_medium = speaker_profile_image_url_medium
activity_notice_seed.speaker_profile_image_url_tiny = speaker_profile_image_url_tiny
activity_notice_seed.statement_subject = statement_subject
if statement_text:
activity_notice_seed.statement_text_preview = return_first_x_words(
statement_text,
number_of_words_to_return=40,
include_ellipses=True)
else:
activity_notice_seed.statement_text_preview = ''
activity_notice_seed.save()
activity_notice_seed_found = True
except Exception as e:
status += "COULD_NOT_UPDATE_NOTICE_CAMPAIGNX_NEWS_ITEM_SEED: " + str(e) + " "
success = False
status += results['status']
elif results['success']:
date_of_notice = now()
if positive_value_exists(statement_text):
statement_text_preview = return_first_x_words(
statement_text,
number_of_words_to_return=40,
include_ellipses=True)
else:
statement_text_preview = ''
create_results = activity_manager.create_activity_notice_seed(
activity_notices_scheduled=False, # Set this to false so the email-sending routine picks it up
campaignx_news_item_we_vote_id=campaignx_news_item_we_vote_id,
campaignx_we_vote_id=campaignx_we_vote_id,
date_of_notice=date_of_notice,
kind_of_seed=NOTICE_CAMPAIGNX_NEWS_ITEM_SEED,
send_to_email=positive_value_exists(send_campaignx_news_item),
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_subject=statement_subject,
statement_text_preview=statement_text_preview)
status += create_results['status']
activity_notice_seed_found = create_results['activity_notice_seed_found']
activity_notice_seed = create_results['activity_notice_seed']
else:
status += results['status']
success = False
results = {
'activity_notice_seed_found': activity_notice_seed_found,
'activity_notice_seed': activity_notice_seed,
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_seed_for_campaignx_supporter_initial_response(
campaignx_we_vote_id='',
visibility_is_public=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_text=''):
"""
:param campaignx_we_vote_id:
:param visibility_is_public: Not used for updates
:param speaker_name:
:param speaker_organization_we_vote_id:
:param speaker_voter_we_vote_id:
:param speaker_profile_image_url_medium:
:param speaker_profile_image_url_tiny:
:param statement_text:
:return:
"""
status = ''
success = True
activity_manager = ActivityManager()
results = activity_manager.retrieve_recent_activity_notice_seed_from_speaker(
campaignx_we_vote_id=campaignx_we_vote_id,
kind_of_seed=NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_seed_found']:
activity_notice_seed = results['activity_notice_seed']
try:
activity_notice_seed.speaker_name = speaker_name
activity_notice_seed.speaker_profile_image_url_medium = speaker_profile_image_url_medium
activity_notice_seed.speaker_profile_image_url_tiny = speaker_profile_image_url_tiny
if statement_text:
activity_notice_seed.statement_text_preview = return_first_x_words(
statement_text,
number_of_words_to_return=20,
include_ellipses=True)
activity_notice_seed.save()
except Exception as e:
            status += "COULD_NOT_UPDATE_NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED: " + str(e) + " "
            success = False
status += results['status']
elif results['success']:
date_of_notice = now()
if positive_value_exists(statement_text):
statement_text_preview = return_first_x_words(
statement_text,
number_of_words_to_return=20,
include_ellipses=True)
else:
statement_text_preview = ''
create_results = activity_manager.create_activity_notice_seed(
activity_notices_scheduled=False, # Set this to false so the email-sending routine picks it up
campaignx_we_vote_id=campaignx_we_vote_id,
date_of_notice=date_of_notice,
kind_of_seed=NOTICE_CAMPAIGNX_SUPPORTER_INITIAL_RESPONSE_SEED,
recipient_name=speaker_name,
recipient_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_text_preview=statement_text_preview)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_seed_for_super_share_item(
campaignx_news_item_we_vote_id='',
campaignx_we_vote_id='',
send_super_share_item=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny='',
statement_subject='',
statement_text='',
super_share_item_id=0):
status = ''
success = True
activity_notice_seed = None
activity_notice_seed_found = False
activity_manager = ActivityManager()
results = activity_manager.retrieve_activity_notice_seed(
kind_of_seed=NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED,
super_share_item_id=super_share_item_id,
)
if results['activity_notice_seed_found']:
activity_notice_seed = results['activity_notice_seed']
try:
if positive_value_exists(send_super_share_item):
activity_notice_seed.send_to_email = True
if not positive_value_exists(activity_notice_seed.date_sent_to_email):
activity_notice_seed.date_sent_to_email = now()
activity_notice_seed.speaker_name = speaker_name
activity_notice_seed.speaker_profile_image_url_medium = speaker_profile_image_url_medium
activity_notice_seed.speaker_profile_image_url_tiny = speaker_profile_image_url_tiny
activity_notice_seed.statement_subject = statement_subject
if statement_text:
activity_notice_seed.statement_text_preview = return_first_x_words(
statement_text,
number_of_words_to_return=40,
include_ellipses=True)
else:
activity_notice_seed.statement_text_preview = ''
activity_notice_seed.save()
activity_notice_seed_found = True
except Exception as e:
status += "COULD_NOT_UPDATE_NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED: " + str(e) + " "
success = False
status += results['status']
elif results['success']:
date_of_notice = now()
if positive_value_exists(statement_text):
statement_text_preview = statement_text
else:
statement_text_preview = ''
create_results = activity_manager.create_activity_notice_seed(
activity_notices_scheduled=False, # Set this to false so the email-sending routine picks it up
campaignx_news_item_we_vote_id=campaignx_news_item_we_vote_id,
campaignx_we_vote_id=campaignx_we_vote_id,
date_of_notice=date_of_notice,
kind_of_seed=NOTICE_CAMPAIGNX_SUPER_SHARE_ITEM_SEED,
send_to_email=positive_value_exists(send_super_share_item),
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny,
statement_subject=statement_subject,
statement_text_preview=statement_text_preview,
super_share_item_id=super_share_item_id)
status += create_results['status']
activity_notice_seed_found = create_results['activity_notice_seed_found']
activity_notice_seed = create_results['activity_notice_seed']
else:
status += results['status']
success = False
results = {
'activity_notice_seed_found': activity_notice_seed_found,
'activity_notice_seed': activity_notice_seed,
'success': success,
'status': status,
}
return results
def update_or_create_activity_notice_seed_for_voter_position(
position_ballot_item_display_name='',
position_we_vote_id='',
is_public_position=False,
speaker_name='',
speaker_organization_we_vote_id='',
speaker_voter_we_vote_id='',
speaker_profile_image_url_medium='',
speaker_profile_image_url_tiny=''):
"""
:param position_ballot_item_display_name: Not used for updates
:param position_we_vote_id: Not used for updates
:param is_public_position: Not used for updates
:param speaker_name:
:param speaker_organization_we_vote_id:
:param speaker_voter_we_vote_id:
:param speaker_profile_image_url_medium:
:param speaker_profile_image_url_tiny:
:return:
"""
status = ''
success = True
activity_manager = ActivityManager()
from position.models import PositionListManager
position_list_manager = PositionListManager()
results = activity_manager.retrieve_recent_activity_notice_seed_from_speaker(
kind_of_seed=NOTICE_FRIEND_ENDORSEMENTS_SEED,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
)
if results['activity_notice_seed_found']:
activity_notice_seed = results['activity_notice_seed']
try:
# Since the position is being saved microseconds before the activity_notice_seed is stored, we want to
# "rewind" the date_of_notice by 60 seconds
since_date = activity_notice_seed.date_of_notice - timedelta(seconds=60)
position_results = position_list_manager.retrieve_all_positions_for_voter(
voter_we_vote_id=speaker_voter_we_vote_id,
since_date=since_date)
if position_results['success']:
friends_positions_list = position_results['friends_positions_list']
position_name_list_for_friends = []
position_we_vote_id_list_for_friends = []
for one_position in friends_positions_list:
position_name_list_for_friends.append(one_position.ballot_item_display_name)
position_we_vote_id_list_for_friends.append(one_position.we_vote_id)
position_names_for_friends_serialized = json.dumps(position_name_list_for_friends)
position_we_vote_ids_for_friends_serialized = json.dumps(position_we_vote_id_list_for_friends)
public_positions_list = position_results['public_positions_list']
position_name_list_for_public = []
position_we_vote_id_list_for_public = []
for one_position in public_positions_list:
position_name_list_for_public.append(one_position.ballot_item_display_name)
position_we_vote_id_list_for_public.append(one_position.we_vote_id)
position_names_for_public_serialized = json.dumps(position_name_list_for_public)
position_we_vote_ids_for_public_serialized = json.dumps(position_we_vote_id_list_for_public)
else:
# If here, there was a problem retrieving positions since the activity_notice_seed was saved,
# so we just work with the one position_we_vote_id
if is_public_position:
position_names_for_friends_serialized = None
position_name_list_for_public = [position_ballot_item_display_name]
position_names_for_public_serialized = json.dumps(position_name_list_for_public)
position_we_vote_ids_for_friends_serialized = None
position_we_vote_id_list_for_public = [position_we_vote_id]
position_we_vote_ids_for_public_serialized = json.dumps(position_we_vote_id_list_for_public)
else:
position_name_list_for_friends = [position_ballot_item_display_name]
position_names_for_friends_serialized = json.dumps(position_name_list_for_friends)
position_names_for_public_serialized = None
position_we_vote_id_list_for_friends = [position_we_vote_id]
position_we_vote_ids_for_friends_serialized = json.dumps(position_we_vote_id_list_for_friends)
position_we_vote_ids_for_public_serialized = None
activity_notice_seed.position_names_for_friends_serialized = position_names_for_friends_serialized
activity_notice_seed.position_names_for_public_serialized = position_names_for_public_serialized
activity_notice_seed.position_we_vote_ids_for_friends_serialized = \
position_we_vote_ids_for_friends_serialized
activity_notice_seed.position_we_vote_ids_for_public_serialized = \
position_we_vote_ids_for_public_serialized
activity_notice_seed.speaker_name = speaker_name
activity_notice_seed.speaker_profile_image_url_medium = speaker_profile_image_url_medium
activity_notice_seed.speaker_profile_image_url_tiny = speaker_profile_image_url_tiny
activity_notice_seed.save()
except Exception as e:
            status += "COULD_NOT_UPDATE_ACTIVITY_NOTICE_SEED_FOR_VOTER_POSITION: " + str(e) + " "
            success = False
status += results['status']
elif results['success']:
date_of_notice = now()
if is_public_position:
position_name_list_for_public = [position_ballot_item_display_name]
position_names_for_public_serialized = json.dumps(position_name_list_for_public)
position_names_for_friends_serialized = None
position_we_vote_id_list_for_public = [position_we_vote_id]
position_we_vote_ids_for_public_serialized = json.dumps(position_we_vote_id_list_for_public)
position_we_vote_ids_for_friends_serialized = None
else:
position_name_list_for_friends = [position_ballot_item_display_name]
position_names_for_friends_serialized = json.dumps(position_name_list_for_friends)
position_names_for_public_serialized = None
position_we_vote_id_list_for_friends = [position_we_vote_id]
position_we_vote_ids_for_friends_serialized = json.dumps(position_we_vote_id_list_for_friends)
position_we_vote_ids_for_public_serialized = None
create_results = activity_manager.create_activity_notice_seed(
date_of_notice=date_of_notice,
kind_of_seed=NOTICE_FRIEND_ENDORSEMENTS_SEED,
position_names_for_friends_serialized=position_names_for_friends_serialized,
position_names_for_public_serialized=position_names_for_public_serialized,
position_we_vote_ids_for_friends_serialized=position_we_vote_ids_for_friends_serialized,
position_we_vote_ids_for_public_serialized=position_we_vote_ids_for_public_serialized,
speaker_name=speaker_name,
speaker_organization_we_vote_id=speaker_organization_we_vote_id,
speaker_voter_we_vote_id=speaker_voter_we_vote_id,
speaker_profile_image_url_medium=speaker_profile_image_url_medium,
speaker_profile_image_url_tiny=speaker_profile_image_url_tiny)
status += create_results['status']
else:
status += results['status']
results = {
'success': success,
'status': status,
}
return results
def update_activity_notice_seed_date_of_notice_earlier_than_update_window(activity_notice_seed):
status = ''
success = True
activity_notice_seed_changed = False
from activity.models import get_lifespan_of_seed
lifespan_of_seed_in_seconds = get_lifespan_of_seed(activity_notice_seed.kind_of_seed) # In seconds
earliest_date_of_notice = now() - timedelta(seconds=lifespan_of_seed_in_seconds)
# Is this activity_notice_seed.date_of_notice older than earliest_date_of_notice?
if activity_notice_seed.date_of_notice < earliest_date_of_notice:
try:
activity_notice_seed.date_of_notice_earlier_than_update_window = True
activity_notice_seed.save()
activity_notice_seed_changed = True
status += "DATE_OF_NOTICE_EARLIER_THAN_UPDATE_WINDOW_SET_TRUE "
except Exception as e:
status += "COULD_NOT_UPDATE-date_of_notice_earlier_than_update_window: " + str(e) + ' '
success = False
results = {
'success': success,
'status': status,
'activity_notice_seed': activity_notice_seed,
'activity_notice_seed_changed': activity_notice_seed_changed,
'date_of_notice_earlier_than_update_window': activity_notice_seed.date_of_notice_earlier_than_update_window,
}
return results
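The window test above boils down to one comparison: a seed may still be updated only while its `date_of_notice` is newer than `now()` minus the seed's lifespan. A standalone sketch of that check (the function and parameter names here are illustrative, not part of this module):

```python
from datetime import datetime, timedelta

def outside_update_window(date_of_notice, lifespan_seconds, current_time=None):
    """True when date_of_notice falls before the earliest date still inside
    the seed's update window (current_time minus the seed's lifespan)."""
    if current_time is None:
        current_time = datetime.now()
    earliest_date_of_notice = current_time - timedelta(seconds=lifespan_seconds)
    return date_of_notice < earliest_date_of_notice

# With a one-hour lifespan, a two-hour-old notice is outside the window,
# while a thirty-minute-old notice is still inside it.
```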
def update_activity_notice_seed_with_positions(activity_notice_seed):
status = ''
success = True
activity_notice_seed_changed = False
# What values currently exist? We deserialize so we can compare with latest positions
# Position names
position_name_list_for_friends = []
if positive_value_exists(activity_notice_seed.position_names_for_friends_serialized):
position_name_list_for_friends = json.loads(activity_notice_seed.position_names_for_friends_serialized)
position_name_list_for_public = []
if positive_value_exists(activity_notice_seed.position_names_for_public_serialized):
position_name_list_for_public = json.loads(activity_notice_seed.position_names_for_public_serialized)
# Position we_vote_ids
position_we_vote_id_list_for_friends = []
if positive_value_exists(activity_notice_seed.position_we_vote_ids_for_friends_serialized):
position_we_vote_id_list_for_friends = \
json.loads(activity_notice_seed.position_we_vote_ids_for_friends_serialized)
position_we_vote_id_list_for_public = []
if positive_value_exists(activity_notice_seed.position_we_vote_ids_for_public_serialized):
position_we_vote_id_list_for_public = \
json.loads(activity_notice_seed.position_we_vote_ids_for_public_serialized)
from position.models import PositionListManager
position_list_manager = PositionListManager()
since_date = activity_notice_seed.date_of_notice - timedelta(seconds=60)
position_results = position_list_manager.retrieve_all_positions_for_voter(
voter_we_vote_id=activity_notice_seed.speaker_voter_we_vote_id,
since_date=since_date)
if position_results['success']:
friends_positions_list = position_results['friends_positions_list']
position_name_list_for_friends_latest = []
position_we_vote_id_list_for_friends_latest = []
for one_position in friends_positions_list:
position_name_list_for_friends_latest.append(one_position.ballot_item_display_name)
position_we_vote_id_list_for_friends_latest.append(one_position.we_vote_id)
public_positions_list = position_results['public_positions_list']
position_name_list_for_public_latest = []
position_we_vote_id_list_for_public_latest = []
for one_position in public_positions_list:
position_name_list_for_public_latest.append(one_position.ballot_item_display_name)
position_we_vote_id_list_for_public_latest.append(one_position.we_vote_id)
friends_name_list_different = set(position_name_list_for_friends) != \
set(position_name_list_for_friends_latest)
public_name_list_different = set(position_name_list_for_public) != \
set(position_name_list_for_public_latest)
friends_we_vote_id_list_different = set(position_we_vote_id_list_for_friends) != \
set(position_we_vote_id_list_for_friends_latest)
public_we_vote_id_list_different = set(position_we_vote_id_list_for_public) != \
set(position_we_vote_id_list_for_public_latest)
if friends_name_list_different or public_name_list_different or \
friends_we_vote_id_list_different or public_we_vote_id_list_different:
try:
activity_notice_seed.position_names_for_friends_serialized = \
json.dumps(position_name_list_for_friends_latest)
activity_notice_seed.position_names_for_public_serialized = \
json.dumps(position_name_list_for_public_latest)
activity_notice_seed.position_we_vote_ids_for_friends_serialized = \
json.dumps(position_we_vote_id_list_for_friends_latest)
activity_notice_seed.position_we_vote_ids_for_public_serialized = \
json.dumps(position_we_vote_id_list_for_public_latest)
activity_notice_seed.save()
activity_notice_seed_changed = True
except Exception as e:
success = False
status += "COULD_NOT_SAVE: " + str(e) + ' '
results = {
'success': success,
'status': status,
'activity_notice_seed': activity_notice_seed,
'activity_notice_seed_changed': activity_notice_seed_changed,
}
return results
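In `update_activity_notice_seed_with_positions` above, change detection works by deserializing the stored JSON lists and comparing them with the freshly retrieved lists as sets, so a reordering alone never triggers a save. A minimal sketch of that pattern (the helper name is invented for illustration and does not exist in this module):

```python
import json

def serialized_list_differs(stored_serialized, latest_list):
    """Compare a stored JSON-serialized list against the latest list,
    ignoring order; a None or empty stored value counts as an empty list."""
    stored_list = json.loads(stored_serialized) if stored_serialized else []
    return set(stored_list) != set(latest_list)
```

Only when this returns True for any of the four serialized fields does the seed need to be saved again.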
138dc6c94bae27cc0b527c45db9d24dd55da5ffe | 1,475 | py | Python | liblinesdk/api/friends.py | mrexmelle/liblinesdk-py | b87abff848d01ff45b8997f152d6dde3d2a75dbf | [
"MIT"
] | null | null | null | liblinesdk/api/friends.py | mrexmelle/liblinesdk-py | b87abff848d01ff45b8997f152d6dde3d2a75dbf | [
"MIT"
] | null | null | null | liblinesdk/api/friends.py | mrexmelle/liblinesdk-py | b87abff848d01ff45b8997f152d6dde3d2a75dbf | [
"MIT"
] | null | null | null | # coding: utf-8
import json

import requests

from ..models import FriendList, Profile


def get_all(access_token):
    """Fetch the full friend list, following the API's 100-contact pages."""
    headers = {'Authorization': 'Bearer ' + access_token}
    response = FriendList()
    start = 1
    while True:
        r = requests.get(
            'https://api.line.me/v1/friends?start=' + str(start) + '&display=100',
            headers=headers)
        if r.status_code != 200:
            break
        jr = json.loads(r.content)
        response.count += jr['count']
        for contact in jr['contacts']:
            response.contacts.append(Profile.from_dictionary(contact))
        if jr['count'] < 100:
            # A short page means there are no more contacts to fetch.
            break
        start += 100
    return response


def get_ingame(access_token):
    """Fetch the channel (in-game) friend list, one 100-contact page at a time."""
    headers = {'Authorization': 'Bearer ' + access_token}
    response = FriendList()
    start = 1
    while True:
        r = requests.get(
            'https://api.line.me/v1/friends/channel?start=' + str(start) + '&display=100',
            headers=headers)
        if r.status_code != 200:
            break
        jr = json.loads(r.content)
        response.count += jr['count']
        for contact in jr['contacts']:
            response.contacts.append(Profile.from_dictionary(contact))
        if jr['count'] < 100:
            break
        start += 100
    return response
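Both functions above walk a paginated endpoint in steps of 100 until a short page arrives. A small standalone sketch of that paging arithmetic, with no network access (the `page_starts` helper is hypothetical, not part of the SDK):

```python
def page_starts(total_contacts, page_size=100):
    """Return the start indices the loop above would request, 1-based."""
    starts = []
    s = 1
    while True:
        starts.append(s)
        # Number of contacts the server would return for this page.
        count = min(page_size, total_contacts - (s - 1))
        if count < page_size:
            # A short page ends the walk, mirroring `if jr['count'] < 100`.
            break
        s += page_size
    return starts
```

Note that an exact multiple of the page size (e.g. 100 contacts) costs one extra, empty request, just as in the loops above.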
# stdlib
import json
from typing import Dict
from typing import Optional

# third party
from google.protobuf.reflection import GeneratedProtocolMessageType
from typing_extensions import final

# syft absolute
from syft import serialize
from syft.core.common.message import ImmediateSyftMessageWithReply
from syft.core.common.message import ImmediateSyftMessageWithoutReply
from syft.core.common.serde.deserialize import _deserialize
from syft.core.common.uid import UID
from syft.core.io.address import Address
from syft.proto.grid.messages.association_messages_pb2 import (
    DeleteAssociationRequestMessage as DeleteAssociationRequestMessage_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    DeleteAssociationRequestResponse as DeleteAssociationRequestResponse_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    GetAssociationRequestMessage as GetAssociationRequestMessage_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    GetAssociationRequestResponse as GetAssociationRequestResponse_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    GetAssociationRequestsMessage as GetAssociationRequestsMessage_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    GetAssociationRequestsResponse as GetAssociationRequestsResponse_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    ReceiveAssociationRequestMessage as ReceiveAssociationRequestMessage_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    ReceiveAssociationRequestResponse as ReceiveAssociationRequestResponse_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    RespondAssociationRequestMessage as RespondAssociationRequestMessage_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    RespondAssociationRequestResponse as RespondAssociationRequestResponse_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    SendAssociationRequestMessage as SendAssociationRequestMessage_PB,
)
from syft.proto.grid.messages.association_messages_pb2 import (
    SendAssociationRequestResponse as SendAssociationRequestResponse_PB,
)

# syft relative
from ...core.common.serde.serializable import bind_protobuf


@bind_protobuf
@final
class SendAssociationRequestMessage(ImmediateSyftMessageWithReply):
    def __init__(
        self,
        address: Address,
        content: Dict,
        reply_to: Address,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id, reply_to=reply_to)
        self.content = content

    def _object2proto(self) -> SendAssociationRequestMessage_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: SendAssociationRequestMessage_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return SendAssociationRequestMessage_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            content=json.dumps(self.content),
            reply_to=serialize(self.reply_to),
        )

    @staticmethod
    def _proto2object(
        proto: SendAssociationRequestMessage_PB,
    ) -> "SendAssociationRequestMessage":
        """Creates a SendAssociationRequestMessage from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of SendAssociationRequestMessage
        :rtype: SendAssociationRequestMessage

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return SendAssociationRequestMessage(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            content=json.loads(proto.content),
            reply_to=_deserialize(blob=proto.reply_to),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return SendAssociationRequestMessage_PB
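Each message class above carries its `content` payload as a dict, serialized to a JSON string inside the protobuf field by `_object2proto` and restored with `json.loads` in `_proto2object`. A tiny standalone sketch of that round trip (the sample dict values are made up):

```python
import json

# Hypothetical request payload, as a plain dict.
content = {"node_name": "alice-node", "status": "pending"}

wire = json.dumps(content)     # what _object2proto places in the protobuf field
restored = json.loads(wire)    # what _proto2object recovers on the other side
```

The dict survives the round trip unchanged as long as it contains only JSON-serializable values.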


@bind_protobuf
@final
class SendAssociationRequestResponse(ImmediateSyftMessageWithoutReply):
    def __init__(
        self,
        address: Address,
        status_code: int,
        content: Dict,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id)
        self.status_code = status_code
        self.content = content

    def _object2proto(self) -> SendAssociationRequestResponse_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: SendAssociationRequestResponse_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return SendAssociationRequestResponse_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            status_code=self.status_code,
            content=json.dumps(self.content),
        )

    @staticmethod
    def _proto2object(
        proto: SendAssociationRequestResponse_PB,
    ) -> "SendAssociationRequestResponse":
        """Creates a SendAssociationRequestResponse from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of SendAssociationRequestResponse
        :rtype: SendAssociationRequestResponse

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return SendAssociationRequestResponse(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            status_code=proto.status_code,
            content=json.loads(proto.content),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return SendAssociationRequestResponse_PB


@bind_protobuf
@final
class ReceiveAssociationRequestMessage(ImmediateSyftMessageWithReply):
    def __init__(
        self,
        address: Address,
        content: Dict,
        reply_to: Address,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id, reply_to=reply_to)
        self.content = content

    def _object2proto(self) -> ReceiveAssociationRequestMessage_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: ReceiveAssociationRequestMessage_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return ReceiveAssociationRequestMessage_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            content=json.dumps(self.content),
            reply_to=serialize(self.reply_to),
        )

    @staticmethod
    def _proto2object(
        proto: ReceiveAssociationRequestMessage_PB,
    ) -> "ReceiveAssociationRequestMessage":
        """Creates a ReceiveAssociationRequestMessage from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of ReceiveAssociationRequestMessage
        :rtype: ReceiveAssociationRequestMessage

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return ReceiveAssociationRequestMessage(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            content=json.loads(proto.content),
            reply_to=_deserialize(blob=proto.reply_to),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return ReceiveAssociationRequestMessage_PB


@bind_protobuf
@final
class ReceiveAssociationRequestResponse(ImmediateSyftMessageWithoutReply):
    def __init__(
        self,
        address: Address,
        status_code: int,
        content: Dict,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id)
        self.status_code = status_code
        self.content = content

    def _object2proto(self) -> ReceiveAssociationRequestResponse_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: ReceiveAssociationRequestResponse_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return ReceiveAssociationRequestResponse_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            status_code=self.status_code,
            content=json.dumps(self.content),
        )

    @staticmethod
    def _proto2object(
        proto: ReceiveAssociationRequestResponse_PB,
    ) -> "ReceiveAssociationRequestResponse":
        """Creates a ReceiveAssociationRequestResponse from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of ReceiveAssociationRequestResponse
        :rtype: ReceiveAssociationRequestResponse

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return ReceiveAssociationRequestResponse(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            status_code=proto.status_code,
            content=json.loads(proto.content),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return ReceiveAssociationRequestResponse_PB


@bind_protobuf
@final
class RespondAssociationRequestMessage(ImmediateSyftMessageWithReply):
    def __init__(
        self,
        address: Address,
        content: Dict,
        reply_to: Address,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id, reply_to=reply_to)
        self.content = content

    def _object2proto(self) -> RespondAssociationRequestMessage_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: RespondAssociationRequestMessage_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return RespondAssociationRequestMessage_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            content=json.dumps(self.content),
            reply_to=serialize(self.reply_to),
        )

    @staticmethod
    def _proto2object(
        proto: RespondAssociationRequestMessage_PB,
    ) -> "RespondAssociationRequestMessage":
        """Creates a RespondAssociationRequestMessage from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of RespondAssociationRequestMessage
        :rtype: RespondAssociationRequestMessage

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return RespondAssociationRequestMessage(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            content=json.loads(proto.content),
            reply_to=_deserialize(blob=proto.reply_to),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return RespondAssociationRequestMessage_PB


@bind_protobuf
@final
class RespondAssociationRequestResponse(ImmediateSyftMessageWithoutReply):
    def __init__(
        self,
        address: Address,
        status_code: int,
        content: Dict,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id)
        self.status_code = status_code
        self.content = content

    def _object2proto(self) -> RespondAssociationRequestResponse_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: RespondAssociationRequestResponse_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return RespondAssociationRequestResponse_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            status_code=self.status_code,
            content=json.dumps(self.content),
        )

    @staticmethod
    def _proto2object(
        proto: RespondAssociationRequestResponse_PB,
    ) -> "RespondAssociationRequestResponse":
        """Creates a RespondAssociationRequestResponse from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of RespondAssociationRequestResponse
        :rtype: RespondAssociationRequestResponse

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return RespondAssociationRequestResponse(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            status_code=proto.status_code,
            content=json.loads(proto.content),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return RespondAssociationRequestResponse_PB


@bind_protobuf
@final
class GetAssociationRequestMessage(ImmediateSyftMessageWithReply):
    def __init__(
        self,
        address: Address,
        content: Dict,
        reply_to: Address,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id, reply_to=reply_to)
        self.content = content

    def _object2proto(self) -> GetAssociationRequestMessage_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: GetAssociationRequestMessage_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return GetAssociationRequestMessage_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            content=json.dumps(self.content),
            reply_to=serialize(self.reply_to),
        )

    @staticmethod
    def _proto2object(
        proto: GetAssociationRequestMessage_PB,
    ) -> "GetAssociationRequestMessage":
        """Creates a GetAssociationRequestMessage from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of GetAssociationRequestMessage
        :rtype: GetAssociationRequestMessage

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return GetAssociationRequestMessage(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            content=json.loads(proto.content),
            reply_to=_deserialize(blob=proto.reply_to),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return GetAssociationRequestMessage_PB


@bind_protobuf
@final
class GetAssociationRequestResponse(ImmediateSyftMessageWithoutReply):
    def __init__(
        self,
        address: Address,
        status_code: int,
        content: Dict,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id)
        self.status_code = status_code
        self.content = content

    def _object2proto(self) -> GetAssociationRequestResponse_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: GetAssociationRequestResponse_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return GetAssociationRequestResponse_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            status_code=self.status_code,
            content=json.dumps(self.content),
        )

    @staticmethod
    def _proto2object(
        proto: GetAssociationRequestResponse_PB,
    ) -> "GetAssociationRequestResponse":
        """Creates a GetAssociationRequestResponse from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of GetAssociationRequestResponse
        :rtype: GetAssociationRequestResponse

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return GetAssociationRequestResponse(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            status_code=proto.status_code,
            content=json.loads(proto.content),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return GetAssociationRequestResponse_PB


@bind_protobuf
@final
class GetAssociationRequestsMessage(ImmediateSyftMessageWithReply):
    def __init__(
        self,
        address: Address,
        content: Dict,
        reply_to: Address,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id, reply_to=reply_to)
        self.content = content

    def _object2proto(self) -> GetAssociationRequestsMessage_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: GetAssociationRequestsMessage_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return GetAssociationRequestsMessage_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            content=json.dumps(self.content),
            reply_to=serialize(self.reply_to),
        )

    @staticmethod
    def _proto2object(
        proto: GetAssociationRequestsMessage_PB,
    ) -> "GetAssociationRequestsMessage":
        """Creates a GetAssociationRequestsMessage from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of GetAssociationRequestsMessage
        :rtype: GetAssociationRequestsMessage

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return GetAssociationRequestsMessage(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            content=json.loads(proto.content),
            reply_to=_deserialize(blob=proto.reply_to),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return GetAssociationRequestsMessage_PB


@bind_protobuf
@final
class GetAssociationRequestsResponse(ImmediateSyftMessageWithoutReply):
    def __init__(
        self,
        address: Address,
        status_code: int,
        content: Dict,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id)
        self.status_code = status_code
        self.content = content

    def _object2proto(self) -> GetAssociationRequestsResponse_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: GetAssociationRequestsResponse_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return GetAssociationRequestsResponse_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            status_code=self.status_code,
            content=json.dumps(self.content),
        )

    @staticmethod
    def _proto2object(
        proto: GetAssociationRequestsResponse_PB,
    ) -> "GetAssociationRequestsResponse":
        """Creates a GetAssociationRequestsResponse from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of GetAssociationRequestsResponse
        :rtype: GetAssociationRequestsResponse

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return GetAssociationRequestsResponse(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            status_code=proto.status_code,
            content=json.loads(proto.content),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return GetAssociationRequestsResponse_PB
@bind_protobuf
@final
class DeleteAssociationRequestMessage(ImmediateSyftMessageWithReply):
    def __init__(
        self,
        address: Address,
        content: Dict,
        reply_to: Address,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id, reply_to=reply_to)
        self.content = content

    def _object2proto(self) -> DeleteAssociationRequestMessage_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: DeleteAssociationRequestMessage_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return DeleteAssociationRequestMessage_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            content=json.dumps(self.content),
            reply_to=serialize(self.reply_to),
        )

    @staticmethod
    def _proto2object(
        proto: DeleteAssociationRequestMessage_PB,
    ) -> "DeleteAssociationRequestMessage":
        """Creates a DeleteAssociationRequestMessage from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of DeleteAssociationRequestMessage
        :rtype: DeleteAssociationRequestMessage

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return DeleteAssociationRequestMessage(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            content=json.loads(proto.content),
            reply_to=_deserialize(blob=proto.reply_to),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return DeleteAssociationRequestMessage_PB
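# The _object2proto/_proto2object pair above follows one fixed pattern: scalar
# fields go through the module's serialize()/_deserialize() helpers, while the
# free-form `content` dict travels as a JSON string inside the message. A
# minimal, self-contained sketch of that round-trip (JSON stands in for the
# protobuf wire format here; `DeleteRequestLike` is a hypothetical stand-in
# for illustration, not Syft's real API):

```python
import json
import uuid


class DeleteRequestLike:
    """Illustrative stand-in mirroring the _object2proto/_proto2object pattern.

    The wire format here is a plain dict with a JSON-encoded `content` field,
    not a real protobuf message.
    """

    def __init__(self, address, content, reply_to, msg_id=None):
        self.address = address
        self.content = content          # arbitrary JSON-serializable dict
        self.reply_to = reply_to
        self.id = msg_id or str(uuid.uuid4())

    def to_wire(self):
        # mirrors _object2proto: nested dict -> JSON string field
        return {
            "msg_id": self.id,
            "address": self.address,
            "content": json.dumps(self.content),
            "reply_to": self.reply_to,
        }

    @staticmethod
    def from_wire(blob):
        # mirrors _proto2object: JSON string field -> nested dict
        return DeleteRequestLike(
            address=blob["address"],
            content=json.loads(blob["content"]),
            reply_to=blob["reply_to"],
            msg_id=blob["msg_id"],
        )


msg = DeleteRequestLike("node-a", {"association_id": 7}, "node-b")
round_tripped = DeleteRequestLike.from_wire(msg.to_wire())
```

# The key property, as in the classes above, is that deserialization restores
# the same message id and the same nested content dict.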
@bind_protobuf
@final
class DeleteAssociationRequestResponse(ImmediateSyftMessageWithoutReply):
    def __init__(
        self,
        address: Address,
        status_code: int,
        content: Dict,
        msg_id: Optional[UID] = None,
    ):
        super().__init__(address=address, msg_id=msg_id)
        self.status_code = status_code
        self.content = content

    def _object2proto(self) -> DeleteAssociationRequestResponse_PB:
        """Returns a protobuf serialization of self.

        As a requirement of all objects which inherit from Serializable,
        this method transforms the current object into the corresponding
        Protobuf object so that it can be further serialized.

        :return: returns a protobuf object
        :rtype: DeleteAssociationRequestResponse_PB

        .. note::
            This method is purely an internal method. Please use serialize(object) or one of
            the other public serialization methods if you wish to serialize an
            object.
        """
        return DeleteAssociationRequestResponse_PB(
            msg_id=serialize(self.id),
            address=serialize(self.address),
            status_code=self.status_code,
            content=json.dumps(self.content),
        )

    @staticmethod
    def _proto2object(
        proto: DeleteAssociationRequestResponse_PB,
    ) -> "DeleteAssociationRequestResponse":
        """Creates a DeleteAssociationRequestResponse from a protobuf

        As a requirement of all objects which inherit from Serializable,
        this method transforms a protobuf object into an instance of this class.

        :return: returns an instance of DeleteAssociationRequestResponse
        :rtype: DeleteAssociationRequestResponse

        .. note::
            This method is purely an internal method. Please use syft.deserialize()
            if you wish to deserialize an object.
        """
        return DeleteAssociationRequestResponse(
            msg_id=_deserialize(blob=proto.msg_id),
            address=_deserialize(blob=proto.address),
            status_code=proto.status_code,
            content=json.loads(proto.content),
        )

    @staticmethod
    def get_protobuf_schema() -> GeneratedProtocolMessageType:
        """Return the type of protobuf object which stores a class of this type

        As a part of serialization and deserialization, we need the ability to
        lookup the protobuf object type directly from the object type. This
        static method allows us to do this.

        Importantly, this method is also used to create the reverse lookup ability within
        the metaclass of Serializable. In the metaclass, it calls this method and then
        it takes whatever type is returned from this method and adds an attribute to it
        with the type of this class attached to it. See the MetaSerializable class for
        details.

        :return: the type of protobuf object which corresponds to this class.
        :rtype: GeneratedProtocolMessageType
        """
        return DeleteAssociationRequestResponse_PB
# ---------------------------------------------------------------------------
# File: UnitTests/test_MemberSetLoad_test.py
# Repo: r0m30d4c/DlubalRFEM6 @ 4bd0d744007bdc27d86d6ce535a507cdc81552ca
# License: MIT
# ---------------------------------------------------------------------------
] | null | null | null | import sys
sys.path.append(".")
from RFEM.Loads.surfaceLoad import *
from RFEM.Loads.memberLoad import *
from RFEM.Loads.nodalLoad import *
from RFEM.Loads.membersetload import *
from RFEM.LoadCasesAndCombinations.loadCase import *
from RFEM.LoadCasesAndCombinations.staticAnalysisSettings import *
from RFEM.TypesForMembers.memberHinge import *
from RFEM.TypesForNodes.nodalSupport import *
from RFEM.BasicObjects.solidSet import *
from RFEM.BasicObjects.surfaceSet import *
from RFEM.BasicObjects.memberSet import *
from RFEM.BasicObjects.lineSet import *
from RFEM.BasicObjects.opening import *
from RFEM.BasicObjects.solid import *
from RFEM.BasicObjects.surface import *
from RFEM.BasicObjects.member import *
from RFEM.BasicObjects.line import *
from RFEM.BasicObjects.node import *
from RFEM.BasicObjects.thickness import *
from RFEM.BasicObjects.section import *
from RFEM.BasicObjects.material import *
from RFEM.initModel import *
from RFEM.dataTypes import *
from RFEM.enums import *
def test_member_set_load():
    clientModel.service.begin_modification()

    # Create Material
    Material(1, 'S235')

    # Create Section
    Section(1, 'IPE 300')

    # Create Nodes
    Node(1, 0.0, 0.0, 0.0)
    Node(2, 2, 0.0, 0.0)
    Node(3, 4, 0, 0)

    # Create Member
    Member(1, MemberType.TYPE_BEAM, '1', '2', 0, 1, 1)
    Member(2, MemberType.TYPE_BEAM, '2', '3', 0, 1, 1)

    # Create Member Set
    MemberSet(1, '1 2', SetType.SET_TYPE_CONTINUOUS)

    # Create Nodal Supports
    NodalSupport(1, '1', NodalSupportType.FIXED)
    NodalSupport(2, '3', NodalSupportType.FIXED)

    # Create Static Analysis Settings
    StaticAnalysisSettings(1, '1. Order', StaticAnalysisType.GEOMETRICALLY_LINEAR)

    # Create Load Case
    LoadCase(1, 'DEAD', [True, 0.0, 0.0, 1.0])

    ## Initial Member Set Load ##
    MemberSetLoad(1, 1, '1', LoadDirectionType.LOAD_DIRECTION_LOCAL_Z, 5000)

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Force(0, 2, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000])

    ## Force Type Member Load with LOAD_DISTRIBUTION_UNIFORM with Eccentricity ##
    MemberSetLoad.Force(0, 3, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000], force_eccentricity=True, params={'eccentricity_y_at_start' : 0.01, 'eccentricity_z_at_start': 0.02})

    ## Force Type Member Load with LOAD_DISTRIBUTION_UNIFORM_TOTAL ##
    MemberSetLoad.Force(0, 4, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM_TOTAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000])

    ## Force Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Force(0, 5, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, 5000, 1.2])

    ## Force Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Force(0, 6, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 2, 1, 2])

    ## Force Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Force(0, 7, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, False, 5000, 1, 2, 3])

    ## Force Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Force(0, 8, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 6000, 1, 2])

    ## Force Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Force(0, 9, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Force Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Force(0, 10, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Force Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Force(0, 11, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Force Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Force(0, 12, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[4000, 8000, 12000])

    ## Force Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Force(0, 13, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Force Type Member Load with LOAD_DISTRIBUTION_VARYING_IN_Z ##
    MemberSetLoad.Force(0, 14, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING_IN_Z, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Moment(0, 15, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Moment(0, 16, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, 5000, 1.2])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Moment(0, 17, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 2, 1, 2])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Moment(0, 18, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, False, 5000, 1, 2, 3])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Moment(0, 19, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 6000, 1, 2])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Moment(0, 20, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Moment(0, 21, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Moment(0, 22, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Moment(0, 23, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[4000, 8000, 12000])

    ## Moment Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Moment(0, 24, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Mass Type Member Load ##
    MemberSetLoad.Mass(0, 25, 1, mass_components=[1000])

    ## Temperature Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Temperature(0, 26, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[18, 2])

    ## Temperature Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Temperature(0, 27, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## Temperature Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Temperature(0, 28, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## Temperature Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Temperature(0, 29, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3, 4, 5, 6])

    ## Temperature Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Temperature(0, 30, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285, 289], [2, 1, 293, 297]])

    ## TemperatureChange Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.TemperatureChange(0, 31, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[18, 2])

    ## TemperatureChange Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.TemperatureChange(0, 32, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## TemperatureChange Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.TemperatureChange(0, 33, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## TemperatureChange Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.TemperatureChange(0, 34, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3, 4, 5, 6])

    ## TemperatureChange Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.TemperatureChange(0, 35, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285, 289], [2, 1, 293, 297]])

    ## AxialStrain Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.AxialStrain(0, 36, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[0.005])

    ## AxialStrain Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.AxialStrain(0, 37, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[12, 16, False, False, 1, 2])

    ## AxialStrain Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.AxialStrain(0, 38, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[12, 16, False, False, 1, 2])

    ## AxialStrain Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.AxialStrain(0, 39, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[1, 2, 3])

    ## AxialStrain Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.AxialStrain(0, 40, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[[1, 1, 285, 289], [2, 1, 293, 297]])

    ## AxialDisplacement Type Member Load ##
    MemberSetLoad.AxialDisplacement(0, 41, 1, '1', MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, 0.05)

    ## Precamber Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Precamber(0, 42, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[0.005])

    ## Precamber Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Precamber(0, 43, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Precamber Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Precamber(0, 44, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Precamber Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Precamber(0, 45, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3])

    ## Precamber Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Precamber(0, 46, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285], [2, 1, 293]])

    ## InitialPrestress Type Member Load ##
    MemberSetLoad.InitialPrestress(0, 47, 1, '1', MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, 50)

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Displacement(0, 48, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Displacement(0, 49, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, 1])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Displacement(0, 50, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, 1, 2])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Displacement(0, 51, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, False, 1, 2, 3])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Displacement(0, 52, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, 0.6, False, False, 1, 2])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Displacement(0, 53, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [[0.001, 1, 1], [0.002, 2, 1]])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Displacement(0, 54, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Displacement(0, 55, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Displacement(0, 56, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3])

    ## Displacement Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Displacement(0, 57, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285], [2, 1, 293]])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Rotation(0, 58, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Rotation(0, 59, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, 1])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Rotation(0, 60, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, 1, 2])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Rotation(0, 61, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, False, 1, 2, 3])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Rotation(0, 62, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, 0.6, False, False, 1, 2])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Rotation(0, 63, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [[1, 1, 285], [2, 1, 293]])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Rotation(0, 64, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Rotation(0, 65, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Rotation(0, 66, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3])

    ## Rotation Type Member Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Rotation(0, 67, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285], [2, 1, 293]])

    Calculate_all()

    print('Ready!')

    clientModel.service.finish_modification()
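# Each MemberSetLoad.* call above passes a `load_parameter` list whose shape
# depends on the chosen load distribution. The expected arities below are
# inferred purely from the calls in this test (they are illustrative, not
# authoritative -- consult the RFEM Python API documentation); VARYING-style
# distributions take a list of rows instead of a flat list:

```python
# Expected flat load_parameter lengths for MemberSetLoad.Force, as used above.
FORCE_PARAM_COUNTS = {
    "UNIFORM": 1,
    "UNIFORM_TOTAL": 1,
    "CONCENTRATED_1": 3,
    "CONCENTRATED_N": 6,
    "CONCENTRATED_2x2": 7,
    "CONCENTRATED_2": 6,
    "TRAPEZOIDAL": 6,
    "TAPERED": 6,
    "PARABOLIC": 3,
}


def check_load_parameter(distribution, load_parameter):
    """Return True when load_parameter matches the arity used in this test.

    VARYING-style distributions (VARYING, VARYING_IN_Z, CONCENTRATED_VARYING)
    are validated as a list of rows with at least three entries each.
    """
    if "VARYING" in distribution:
        return all(isinstance(row, list) and len(row) >= 3 for row in load_parameter)
    expected = FORCE_PARAM_COUNTS.get(distribution)
    return expected is not None and len(load_parameter) == expected
```

# For example, check_load_parameter("TRAPEZOIDAL", [False, False, 4000, 8000, 1, 2])
# matches the trapezoidal Force call above.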
# ---------------------------------------------------------------------------
# File: python/testData/completion/heavyStarPropagation/lib/_pkg1/_pkg1_0/_pkg1_0_1/_pkg1_0_1_1/_pkg1_0_1_1_1/_mod1_0_1_1_1_0.py
# Repo: jnthn/intellij-community @ 8fa7c8a3ace62400c838e0d5926a7be106aa8557
# License: Apache-2.0
# ---------------------------------------------------------------------------
name1_0_1_1_1_0_0 = None
name1_0_1_1_1_0_1 = None
name1_0_1_1_1_0_2 = None
name1_0_1_1_1_0_3 = None
name1_0_1_1_1_0_4 = None
# ---------------------------------------------------------------------------
# File: salt/modules/vsphere.py
# Repo: klyr/salt @ 90b3fcf345b95c1bfe9f60a500b50e8070d414e6
# License: Apache-2.0
# ---------------------------------------------------------------------------
] | null | null | null | # -*- coding: utf-8 -*-
'''
Manage VMware vCenter servers and ESXi hosts.

.. versionadded:: 2015.8.4

Dependencies
------------

- pyVmomi Python Module
- ESXCLI

.. note::

    Be aware that some functionality in this execution module may depend on the
    type of license attached to a vCenter Server or ESXi host(s).

    For example, certain services are only available to manipulate service state
    or policies with a VMware vSphere Enterprise or Enterprise Plus license, while
    others are available with a Standard license. The ``ntpd`` service is restricted
    to an Enterprise Plus license, while ``ssh`` is available via the Standard
    license.

    Please see the `vSphere Comparison`_ page for more information.

    .. _vSphere Comparison: https://www.vmware.com/products/vsphere/compare

About
-----

This execution module was designed to be able to handle connections both to a
vCenter Server, as well as to an ESXi host. It utilizes the pyVmomi Python
library and the ESXCLI package to run remote execution functions against either
the defined vCenter server or the ESXi host.

Whether or not the function runs against a vCenter Server or an ESXi host depends
entirely upon the arguments passed into the function. Each function requires a
``host`` location, ``username``, and ``password``. If the credentials provided
apply to a vCenter Server, then the function will be run against the vCenter
Server. For example, when listing hosts using vCenter credentials, you'll get a
list of hosts associated with that vCenter Server:

.. code-block:: bash

    # salt my-minion vsphere.list_hosts <vcenter-ip> <vcenter-user> <vcenter-password>
    my-minion:
    - esxi-1.example.com
    - esxi-2.example.com

However, some functions should be used against ESXi hosts, not vCenter Servers.
Functionality such as getting a host's coredump network configuration should be
performed against a host and not a vCenter server. If the authentication information
you're using is against a vCenter server and not an ESXi host, you can provide the
host name that is associated with the vCenter server in the command, as a list, using
the ``host_names`` or ``esxi_host`` kwarg. For example:

.. code-block:: bash

    # salt my-minion vsphere.get_coredump_network_config <vcenter-ip> <vcenter-user> \
        <vcenter-password> esxi_hosts='[esxi-1.example.com, esxi-2.example.com]'
    my-minion:
    ----------
        esxi-1.example.com:
            ----------
            Coredump Config:
                ----------
                enabled:
                    False
        esxi-2.example.com:
            ----------
            Coredump Config:
                ----------
                enabled:
                    True
                host_vnic:
                    vmk0
                ip:
                    coredump-location.example.com
                port:
                    6500

You can also use these functions against an ESXi host directly by establishing a
connection to an ESXi host using the host's location, username, and password. If ESXi
connection credentials are used instead of vCenter credentials, the ``host_names`` and
``esxi_hosts`` arguments are not needed.

.. code-block:: bash

    # salt my-minion vsphere.get_coredump_network_config esxi-1.example.com root <host-password>
    local:
    ----------
        10.4.28.150:
            ----------
            Coredump Config:
                ----------
                enabled:
                    True
                host_vnic:
                    vmk0
                ip:
                    coredump-location.example.com
                port:
                    6500
'''
# Import Python Libs
from __future__ import absolute_import
import datetime
import logging
# Import Salt Libs
import salt.ext.six as six
import salt.utils
import salt.utils.vmware
import salt.utils.http
from salt.exceptions import CommandExecutionError
# Import Third Party Libs
try:
    from pyVmomi import vim, vmodl
    HAS_PYVMOMI = True
except ImportError:
    HAS_PYVMOMI = False
log = logging.getLogger(__name__)
__virtualname__ = 'vsphere'
def __virtual__():
    if not HAS_PYVMOMI:
        return False, 'Missing dependency: The vSphere module requires the pyVmomi Python module.'

    esx_cli = salt.utils.which('esxcli')
    if not esx_cli:
        return False, 'Missing dependency: The vSphere module requires ESXCLI.'

    return __virtualname__
def get_coredump_network_config(host, username, password, protocol=None, port=None, esxi_hosts=None):
    '''
    Retrieve information on ESXi or vCenter network dump collection and
    format it into a dictionary.

    host
        The location of the host.

    username
        The username used to login to the host, such as ``root``.

    password
        The password used to login to the host.

    protocol
        Optionally set to alternate protocol if the host is not using the default
        protocol. Default protocol is ``https``.

    port
        Optionally set to alternate port if the host is not using the default
        port. Default port is ``443``.

    esxi_hosts
        If ``host`` is a vCenter host, then use esxi_hosts to execute this function
        on a list of one or more ESXi machines.

    :return: A dictionary with the network configuration, or, if getting
             the network config failed, an error message retrieved from the
             standard cmd.run_all dictionary, per host.

    CLI Example:

    .. code-block:: bash

        # Used for ESXi host connection information
        salt '*' vsphere.get_coredump_network_config my.esxi.host root bad-password

        # Used for connecting to a vCenter Server
        salt '*' vsphere.get_coredump_network_config my.vcenter.location root bad-password \
            esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
    '''
    cmd = 'system coredump network get'
    ret = {}
    if esxi_hosts:
        if not isinstance(esxi_hosts, list):
            raise CommandExecutionError('\'esxi_hosts\' must be a list.')

        for esxi_host in esxi_hosts:
            response = salt.utils.vmware.esxcli(host, username, password, cmd,
                                                protocol=protocol, port=port,
                                                esxi_host=esxi_host)
            if response['retcode'] != 0:
                ret.update({esxi_host: {'Error': response.get('stdout')}})
            else:
                # format the response stdout into something useful
                ret.update({esxi_host: {'Coredump Config': _format_coredump_stdout(response)}})
    else:
        # Handles a single host or a vCenter connection when no esxi_hosts are provided.
        response = salt.utils.vmware.esxcli(host, username, password, cmd,
                                            protocol=protocol, port=port)
        if response['retcode'] != 0:
            ret.update({host: {'Error': response.get('stdout')}})
        else:
            # format the response stdout into something useful
            stdout = _format_coredump_stdout(response)
            ret.update({host: {'Coredump Config': stdout}})

    return ret
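# The helper _format_coredump_stdout used above is defined elsewhere in this
# module and not shown in this excerpt. A plausible, self-contained sketch of
# what such a helper does is below: it parses the "Key: value" lines that
# `esxcli system coredump network get` prints into a dict. The field names
# ("Enabled", "Host VNic", "Network Server IP", "Network Server Port") are
# assumptions about the esxcli output format, which can vary by ESXi release:

```python
def format_coredump_stdout(response):
    """Parse esxcli coredump-network output from a cmd.run_all-style dict."""
    ret = {}
    for line in response.get("stdout", "").splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Enabled":
            ret["enabled"] = value.lower() == "true"   # esxcli prints true/false
        elif key == "Host VNic":
            ret["host_vnic"] = value
        elif key == "Network Server IP":
            ret["ip"] = value
        elif key == "Network Server Port":
            ret["port"] = value
    return ret


sample = {"retcode": 0, "stdout": (
    "   Enabled: true\n"
    "   Host VNic: vmk0\n"
    "   Network Server IP: coredump-location.example.com\n"
    "   Network Server Port: 6500\n"
)}
config = format_coredump_stdout(sample)
```

# The resulting dict matches the keys shown in the module docstring output
# (enabled, host_vnic, ip, port).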
def coredump_network_enable(host, username, password, enabled, protocol=None, port=None, esxi_hosts=None):
'''
Enable or disable ESXi core dump collection. Returns ``True`` if coredump is enabled
and returns ``False`` if core dump is not enabled. If there was an error, the error
will be the value printed in the ``Error`` key dictionary for the given host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
enabled
Python True or False to enable or disable coredumps.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.coredump_network_enable my.esxi.host root bad-password True
# Used for connecting to a vCenter Server
salt '*' vsphere.coredump_network_enable my.vcenter.location root bad-password True \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
if enabled:
enable_it = 1
else:
enable_it = 0
cmd = 'system coredump network set -e {0}'.format(enable_it)
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
if response['retcode'] != 0:
ret.update({esxi_host: {'Error': response.get('stdout')}})
else:
ret.update({esxi_host: {'Coredump Enabled': enabled}})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port)
if response['retcode'] != 0:
ret.update({host: {'Error': response.get('stdout')}})
else:
ret.update({host: {'Coredump Enabled': enabled}})
return ret
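The only transformation this function applies before shelling out is mapping the Python boolean onto esxcli's ``-e`` flag; a minimal sketch of that command construction (function name is illustrative):

```python
def build_coredump_enable_cmd(enabled):
    # esxcli's 'system coredump network set -e' flag takes 1 or 0, which is
    # why the function above converts the Python boolean before formatting.
    return 'system coredump network set -e {0}'.format(1 if enabled else 0)
```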
def set_coredump_network_config(host,
username,
password,
dump_ip,
protocol=None,
port=None,
host_vnic='vmk0',
dump_port=6500,
esxi_hosts=None):
'''
Set the network parameters for a network coredump collection.
Note that ESXi requires that core dumps first be enabled (see
``coredump_network_enable``) before these parameters may be set.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
dump_ip
IP address of host that will accept the dump.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
host_vnic
Host VNic port through which to communicate. Defaults to ``vmk0``.
dump_port
TCP port to use for the dump, defaults to ``6500``.
:return: A standard cmd.run_all dictionary with a ``success`` key added, per host.
``success`` will be True if the set succeeded, False otherwise.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.set_coredump_network_config my.esxi.host root bad-password 'dump_ip.host.com'
# Used for connecting to a vCenter Server
salt '*' vsphere.set_coredump_network_config my.vcenter.location root bad-password 'dump_ip.host.com' \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
cmd = 'system coredump network set -v {0} -i {1} -o {2}'.format(host_vnic,
dump_ip,
dump_port)
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
if response['retcode'] != 0:
response['success'] = False
else:
response['success'] = True
# Update the cmd.run_all dictionary for each particular host.
ret.update({esxi_host: response})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port)
if response['retcode'] != 0:
response['success'] = False
else:
response['success'] = True
ret.update({host: response})
return ret
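The esxcli invocation above packs the three network parameters into one command string; a small sketch of that builder (function name is illustrative):

```python
def build_coredump_config_cmd(dump_ip, host_vnic='vmk0', dump_port=6500):
    # Mirrors the format string used above: -v selects the VMkernel NIC,
    # -i the collector IP, and -o the collector TCP port.
    return 'system coredump network set -v {0} -i {1} -o {2}'.format(
        host_vnic, dump_ip, dump_port)
```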
def get_firewall_status(host, username, password, protocol=None, port=None, esxi_hosts=None):
'''
Show status of all firewall rule sets.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
:return: Nested dictionary with two top-level keys, ``rulesets`` and ``success``.
``success`` will be True or False depending on query success.
``rulesets`` will list the rulesets and their statuses if ``success``
was True, per host.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.get_firewall_status my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_firewall_status my.vcenter.location root bad-password \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
cmd = 'network firewall ruleset list'
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
if response['retcode'] != 0:
ret.update({esxi_host: {'Error': response['stdout'],
'success': False,
'rulesets': None}})
else:
# format the response stdout into something useful
ret.update({esxi_host: _format_firewall_stdout(response)})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port)
if response['retcode'] != 0:
ret.update({host: {'Error': response['stdout'],
'success': False,
'rulesets': None}})
else:
# format the response stdout into something useful
ret.update({host: _format_firewall_stdout(response)})
return ret
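``_format_firewall_stdout`` is defined elsewhere in the module; as a hedged illustration of the ``{'success': ..., 'rulesets': ...}`` shape described above, a parser for esxcli's two-column ``Name`` / ``Enabled`` table might look like this (the header layout is an assumption):

```python
def parse_firewall_stdout(stdout):
    # Hypothetical stand-in for _format_firewall_stdout: skip the header
    # and separator rows, then map each ruleset name to a boolean.
    rulesets = {}
    for line in stdout.splitlines()[2:]:
        parts = line.split()
        if len(parts) == 2:
            name, enabled = parts
            rulesets[name] = enabled.lower() == 'true'
    return {'success': True, 'rulesets': rulesets}

# Sample table is illustrative only.
sample = ('Name       Enabled\n'
          '---------  -------\n'
          'sshServer  true\n'
          'syslog     false')
result = parse_firewall_stdout(sample)
```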
def enable_firewall_ruleset(host,
username,
password,
ruleset_enable,
ruleset_name,
protocol=None,
port=None,
esxi_hosts=None):
'''
Enable or disable an ESXi firewall rule set.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
ruleset_enable
``True`` to enable the ruleset, ``False`` to disable.
ruleset_name
Name of ruleset to target.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
:return: A standard cmd.run_all dictionary, per host.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.enable_firewall_ruleset my.esxi.host root bad-password True 'syslog'
# Used for connecting to a vCenter Server
salt '*' vsphere.enable_firewall_ruleset my.vcenter.location root bad-password True 'syslog' \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
cmd = 'network firewall ruleset set --enabled {0} --ruleset-id={1}'.format(
ruleset_enable, ruleset_name
)
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
ret.update({esxi_host: response})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port)
ret.update({host: response})
return ret
def syslog_service_reload(host, username, password, protocol=None, port=None, esxi_hosts=None):
'''
Reload the syslog service so it will pick up any changes.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
:return: A standard cmd.run_all dictionary. This dictionary will at least
have a `retcode` key. If `retcode` is 0 the command was successful.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.syslog_service_reload my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.syslog_service_reload my.vcenter.location root bad-password \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
cmd = 'system syslog reload'
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
ret.update({esxi_host: response})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port)
ret.update({host: response})
return ret
def set_syslog_config(host,
username,
password,
syslog_config,
config_value,
protocol=None,
port=None,
firewall=True,
reset_service=True,
esxi_hosts=None):
'''
Set the specified syslog configuration parameter. By default, this function will
reset the syslog service after the configuration is set.
host
ESXi or vCenter host to connect to.
username
User to connect as, usually root.
password
Password to connect with.
syslog_config
Name of parameter to set (corresponds to the command line switch for
esxcli without the double dashes (--))
Valid syslog_config values are ``logdir``, ``loghost``, ``default-rotate``,
``default-size``, ``default-timeout``, and ``logdir-unique``.
config_value
Value for the above parameter. For ``loghost``, URLs or IP addresses to
use for logging. Multiple log servers can be specified by listing them,
comma-separated, but without spaces before or after commas.
(reference: https://blogs.vmware.com/vsphere/2012/04/configuring-multiple-syslog-servers-for-esxi-5.html)
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
firewall
Enable the firewall rule set for syslog. Defaults to ``True``.
reset_service
After a successful parameter set, reset the service. Defaults to ``True``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
:return: Dictionary with a top-level key of 'success' which indicates
if all the parameters were reset, and individual keys
for each parameter indicating which succeeded or failed, per host.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.set_syslog_config my.esxi.host root bad-password \
loghost ssl://localhost:5432,tcp://10.1.0.1:1514
# Used for connecting to a vCenter Server
salt '*' vsphere.set_syslog_config my.vcenter.location root bad-password \
loghost ssl://localhost:5432,tcp://10.1.0.1:1514 \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
ret = {}
# First, enable the syslog firewall ruleset, for each host, if needed.
if firewall and syslog_config == 'loghost':
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = enable_firewall_ruleset(host, username, password,
ruleset_enable=True, ruleset_name='syslog',
protocol=protocol, port=port,
esxi_hosts=[esxi_host]).get(esxi_host)
if response['retcode'] != 0:
ret.update({esxi_host: {'enable_firewall': {'message': response['stdout'],
'success': False}}})
else:
ret.update({esxi_host: {'enable_firewall': {'success': True}}})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = enable_firewall_ruleset(host, username, password,
ruleset_enable=True, ruleset_name='syslog',
protocol=protocol, port=port).get(host)
if response['retcode'] != 0:
ret.update({host: {'enable_firewall': {'message': response['stdout'],
'success': False}}})
else:
ret.update({host: {'enable_firewall': {'success': True}}})
# Set the config value on each esxi_host, if provided.
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = _set_syslog_config_helper(host, username, password, syslog_config,
config_value, protocol=protocol, port=port,
reset_service=reset_service, esxi_host=esxi_host)
# Ensure we don't overwrite any dictionary data already set
# by updating the esxi_host entry directly.
if ret.get(esxi_host) is None:
ret.update({esxi_host: {}})
ret[esxi_host].update(response)
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = _set_syslog_config_helper(host, username, password, syslog_config,
config_value, protocol=protocol, port=port,
reset_service=reset_service)
# Ensure we don't overwrite any dictionary data already set
# by updating the host entry directly.
if ret.get(host) is None:
ret.update({host: {}})
ret[host].update(response)
return ret
def get_syslog_config(host, username, password, protocol=None, port=None, esxi_hosts=None):
'''
Retrieve the syslog configuration.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
:return: Dictionary with keys and values corresponding to the
syslog configuration, per host.
CLI Example:
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.get_syslog_config my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_syslog_config my.vcenter.location root bad-password \
esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
cmd = 'system syslog config get'
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
# format the response stdout into something useful
ret.update({esxi_host: _format_syslog_config(response)})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port)
# format the response stdout into something useful
ret.update({host: _format_syslog_config(response)})
return ret
def reset_syslog_config(host,
username,
password,
protocol=None,
port=None,
syslog_config=None,
esxi_hosts=None):
'''
Reset the syslog service to its default settings.
Valid syslog_config values are ``logdir``, ``loghost``, ``logdir-unique``,
``default-rotate``, ``default-size``, ``default-timeout``,
or ``all`` for all of these.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
syslog_config
List of parameters to reset, or 'all' to reset everything.
esxi_hosts
If ``host`` is a vCenter host, then use esxi_hosts to execute this function
on a list of one or more ESXi machines.
:return: Dictionary with a top-level key of 'success' which indicates
if all the parameters were reset, and individual keys
for each parameter indicating which succeeded or failed, per host.
CLI Example:
``syslog_config`` can be passed as a quoted, comma-separated string, e.g.
.. code-block:: bash
# Used for ESXi host connection information
salt '*' vsphere.reset_syslog_config my.esxi.host root bad-password \
syslog_config='logdir,loghost'
# Used for connecting to a vCenter Server
salt '*' vsphere.reset_syslog_config my.vcenter.location root bad-password \
syslog_config='logdir,loghost' esxi_hosts='[esxi-1.host.com, esxi-2.host.com]'
'''
valid_resets = ['logdir', 'loghost', 'default-rotate',
'default-size', 'default-timeout', 'logdir-unique']
cmd = 'system syslog config set --reset='
if syslog_config is None:
    raise CommandExecutionError('\'syslog_config\' is required: pass a '
                                'parameter name, a comma-separated list, '
                                'or \'all\'.')
if ',' in syslog_config:
resets = [ind_reset.strip() for ind_reset in syslog_config.split(',')]
elif syslog_config == 'all':
resets = valid_resets
else:
resets = [syslog_config]
ret = {}
if esxi_hosts:
if not isinstance(esxi_hosts, list):
raise CommandExecutionError('\'esxi_hosts\' must be a list.')
for esxi_host in esxi_hosts:
response_dict = _reset_syslog_config_params(host, username, password,
cmd, resets, valid_resets,
protocol=protocol, port=port,
esxi_host=esxi_host)
ret.update({esxi_host: response_dict})
else:
# Handles a single host or a vCenter connection when no esxi_hosts are provided.
response_dict = _reset_syslog_config_params(host, username, password,
cmd, resets, valid_resets,
protocol=protocol, port=port)
ret.update({host: response_dict})
return ret
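The branching that turns ``syslog_config`` into a list of reset targets can be isolated as a pure function; this sketch reproduces it (the function name is illustrative):

```python
def expand_syslog_resets(syslog_config, valid_resets):
    # A comma-separated string becomes a list of stripped parameter names,
    # 'all' expands to every valid reset, and a single name is wrapped in
    # a one-element list -- the same branching used above.
    if ',' in syslog_config:
        return [item.strip() for item in syslog_config.split(',')]
    if syslog_config == 'all':
        return list(valid_resets)
    return [syslog_config]
```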
def upload_ssh_key(host, username, password, ssh_key=None, ssh_key_file=None,
protocol=None, port=None, certificate_verify=False):
'''
Upload an SSH key for root to an ESXi host via HTTP PUT.
This function only works for ESXi, not vCenter.
Only one ssh key can be uploaded for root. Uploading a second key will
replace any existing key.
:param host: The location of the ESXi Host
:param username: Username to connect as
:param password: Password for the ESXi web endpoint
:param ssh_key: Public SSH key, will be added to authorized_keys on ESXi
:param ssh_key_file: File containing the SSH key. Use 'ssh_key' or
ssh_key_file, but not both.
:param protocol: defaults to https, can be http if ssl is disabled on ESXi
:param port: defaults to 443 for https
:param certificate_verify: If True, require that the SSL connection presents
a valid certificate
:return: Dictionary with a 'status' key, True if upload is successful.
If upload is unsuccessful, 'status' key will be False and
an 'Error' key will have an informative message.
CLI Example:
.. code-block:: bash
salt '*' vsphere.upload_ssh_key my.esxi.host root bad-password ssh_key_file='/etc/salt/my_keys/my_key.pub'
'''
if protocol is None:
protocol = 'https'
if port is None:
port = 443
url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(protocol,
host,
port)
ret = {}
result = None
try:
if ssh_key:
result = salt.utils.http.query(url,
status=True,
text=True,
method='PUT',
username=username,
password=password,
data=ssh_key,
verify_ssl=certificate_verify)
elif ssh_key_file:
result = salt.utils.http.query(url,
status=True,
text=True,
method='PUT',
username=username,
password=password,
data_file=ssh_key_file,
data_render=False,
verify_ssl=certificate_verify)
if result.get('status') == 200:
ret['status'] = True
else:
ret['status'] = False
ret['Error'] = result['error']
except Exception as exc:
ret['status'] = False
ret['Error'] = str(exc)
return ret
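Under the hood, ``salt.utils.http.query`` issues a plain HTTP PUT against the host's ``/host/ssh_root_authorized_keys`` endpoint. A dependency-free sketch of the equivalent request (built but not sent; authentication and TLS options are omitted, and the hostname and key are placeholders):

```python
import urllib.request

def build_ssh_key_request(host, ssh_key, protocol='https', port=443):
    # Same URL as constructed above; the public key text is the PUT body.
    url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(
        protocol, host, port)
    return urllib.request.Request(url, data=ssh_key.encode(), method='PUT')

req = build_ssh_key_request('my.esxi.host', 'ssh-rsa AAAA... root@master')
```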
def get_ssh_key(host,
username,
password,
protocol=None,
port=None,
certificate_verify=False):
'''
Retrieve the authorized_keys entry for root.
This function only works for ESXi, not vCenter.
:param host: The location of the ESXi Host
:param username: Username to connect as
:param password: Password for the ESXi web endpoint
:param protocol: defaults to https, can be http if ssl is disabled on ESXi
:param port: defaults to 443 for https
:param certificate_verify: If True, require that the SSL connection presents
a valid certificate
:return: Dictionary with a 'status' key, True if the key was retrieved
successfully, with the key contents under a 'key' key. If retrieval is
unsuccessful, 'status' will be False and an 'Error' key will have an
informative message.
CLI Example:
.. code-block:: bash
salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True
'''
if protocol is None:
protocol = 'https'
if port is None:
port = 443
url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(protocol,
host,
port)
ret = {}
try:
result = salt.utils.http.query(url,
status=True,
text=True,
method='GET',
username=username,
password=password,
verify_ssl=certificate_verify)
if result.get('status') == 200:
ret['status'] = True
ret['key'] = result['text']
else:
ret['status'] = False
ret['Error'] = result['error']
except Exception as exc:
ret['status'] = False
ret['Error'] = str(exc)
return ret
def get_host_datetime(host, username, password, protocol=None, port=None, host_names=None):
'''
Get the date/time information for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to get date/time information.
If host_names is not provided, the date/time information will be retrieved for the
``host`` location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_host_datetime my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_host_datetime my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
date_time_manager = _get_date_time_mgr(host_ref)
date_time = date_time_manager.QueryDateTime()
ret.update({host_name: date_time})
return ret
def get_ntp_config(host, username, password, protocol=None, port=None, host_names=None):
'''
Get the NTP configuration information for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to get ntp configuration information.
If host_names is not provided, the NTP configuration will be retrieved for the
``host`` location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_ntp_config my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_ntp_config my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
ntp_config = host_ref.configManager.dateTimeSystem.dateTimeInfo.ntpConfig.server
ret.update({host_name: ntp_config})
return ret
def get_service_policy(host, username, password, service_name, protocol=None, port=None, host_names=None):
'''
Get the service name's policy for a given host or list of hosts.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
service_name
The name of the service for which to retrieve the policy. Supported service names are:
- DCUI
- TSM
- SSH
- lbtd
- lsassd
- lwiod
- netlogond
- ntpd
- sfcbd-watchdog
- snmpd
- vprobed
- vpxa
- xorg
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to get service policy information.
If host_names is not provided, the service policy information will be retrieved
for the ``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_service_policy my.esxi.host root bad-password 'ssh'
# Used for connecting to a vCenter Server
salt '*' vsphere.get_service_policy my.vcenter.location root bad-password 'ntpd' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
valid_services = ['DCUI', 'TSM', 'SSH', 'ssh', 'lbtd', 'lsassd', 'lwiod', 'netlogond',
'ntpd', 'sfcbd-watchdog', 'snmpd', 'vprobed', 'vpxa', 'xorg']
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
# Check if the service_name provided is a valid one.
# If we don't have a valid service, return. The service will be invalid for all hosts.
if service_name not in valid_services:
ret.update({host_name: {'Error': '{0} is not a valid service name.'.format(service_name)}})
return ret
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
services = host_ref.configManager.serviceSystem.serviceInfo.service
# Don't require users to know that VMware lists the ssh service as TSM-SSH
if service_name == 'SSH' or service_name == 'ssh':
temp_service_name = 'TSM-SSH'
else:
temp_service_name = service_name
# Loop through services until we find a matching name
for service in services:
if service.key == temp_service_name:
ret.update({host_name:
{service_name: service.policy}})
# We've found a match - break out of the loop so we don't overwrite the
# updated host_name value with an error message.
break
else:
msg = 'Could not find service \'{0}\' for host \'{1}\'.'.format(service_name,
host_name)
ret.update({host_name: {'Error': msg}})
# If we made it this far, something else has gone wrong.
if ret.get(host_name) is None:
msg = '\'vsphere.get_service_policy\' failed for host {0}.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
return ret
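The SSH-name special case above recurs in ``get_service_running`` below; it amounts to this small mapping (the helper name is illustrative, and it folds the two equality checks into one case-insensitive test):

```python
def normalize_service_name(service_name):
    # VMware lists the SSH service under the key 'TSM-SSH', so translate
    # 'SSH'/'ssh' and pass every other service name through unchanged.
    if service_name.lower() == 'ssh':
        return 'TSM-SSH'
    return service_name
```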
def get_service_running(host, username, password, service_name, protocol=None, port=None, host_names=None):
'''
Get the service name's running state for a given host or list of hosts.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
service_name
The name of the service for which to retrieve the running state. Supported service names are:
- DCUI
- TSM
- SSH
- lbtd
- lsassd
- lwiod
- netlogond
- ntpd
- sfcbd-watchdog
- snmpd
- vprobed
- vpxa
- xorg
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to get the service's running state.
If host_names is not provided, the service's running state will be retrieved
for the ``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_service_running my.esxi.host root bad-password 'ssh'
# Used for connecting to a vCenter Server
salt '*' vsphere.get_service_running my.vcenter.location root bad-password 'ntpd' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
valid_services = ['DCUI', 'TSM', 'SSH', 'ssh', 'lbtd', 'lsassd', 'lwiod', 'netlogond',
'ntpd', 'sfcbd-watchdog', 'snmpd', 'vprobed', 'vpxa', 'xorg']
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
# Check if the service_name provided is a valid one.
# If we don't have a valid service, return. The service will be invalid for all hosts.
if service_name not in valid_services:
ret.update({host_name: {'Error': '{0} is not a valid service name.'.format(service_name)}})
return ret
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
services = host_ref.configManager.serviceSystem.serviceInfo.service
# Don't require users to know that VMware lists the ssh service as TSM-SSH
if service_name == 'SSH' or service_name == 'ssh':
temp_service_name = 'TSM-SSH'
else:
temp_service_name = service_name
# Loop through services until we find a matching name
for service in services:
if service.key == temp_service_name:
ret.update({host_name:
{service_name: service.running}})
# We've found a match - break out of the loop so we don't overwrite the
# updated host_name value with an error message.
break
else:
msg = 'Could not find service \'{0}\' for host \'{1}\'.'.format(service_name,
host_name)
ret.update({host_name: {'Error': msg}})
# If we made it this far, something else has gone wrong.
if ret.get(host_name) is None:
msg = '\'vsphere.get_service_running\' failed for host {0}.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
return ret
def get_vmotion_enabled(host, username, password, protocol=None, port=None, host_names=None):
'''
Get the VMotion enabled status for a given host or a list of host_names. Returns ``True``
if VMotion is enabled, ``False`` if it is not enabled.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts to check if VMotion is enabled.
If host_names is not provided, the VMotion status will be retrieved for the
``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_vmotion_enabled my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_vmotion_enabled my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vmotion_vnic = host_ref.configManager.vmotionSystem.netConfig.selectedVnic
if vmotion_vnic:
ret.update({host_name: {'VMotion Enabled': True}})
else:
ret.update({host_name: {'VMotion Enabled': False}})
return ret
def get_vsan_enabled(host, username, password, protocol=None, port=None, host_names=None):
'''
Get the VSAN enabled status for a given host or a list of host_names. Returns ``True``
if VSAN is enabled, ``False`` if it is not enabled, and ``None`` if a VSAN Host Config
is unset, per host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts to check if VSAN is enabled.
If host_names is not provided, the VSAN status will be retrieved for the
``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_vsan_enabled my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_vsan_enabled my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vsan_config = host_ref.config.vsanHostConfig
# We must have a VSAN Config in place to get information about the VSAN state.
if vsan_config is None:
msg = 'VSAN System Config Manager is unset for host \'{0}\'.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
else:
ret.update({host_name: {'VSAN Enabled': vsan_config.enabled}})
return ret
def get_vsan_eligible_disks(host, username, password, protocol=None, port=None, host_names=None):
'''
Returns a list of VSAN-eligible disks for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts to check if any VSAN-eligible disks are available.
If host_names is not provided, the VSAN-eligible disks will be retrieved
for the ``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_vsan_eligible_disks my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_vsan_eligible_disks my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
response = _get_vsan_eligible_disks(service_instance, host, host_names)
ret = {}
# Use items() rather than iteritems() so this also works under Python 3.
for host_name, value in response.items():
error = value.get('Error')
if error:
ret.update({host_name: {'Error': error}})
continue
disks = value.get('Eligible')
# If we have eligible disks, it will be a list of disk objects
if disks and isinstance(disks, list):
disk_names = []
# We need to return ONLY the disk names, otherwise
# MessagePack can't deserialize the disk objects.
for disk in disks:
disk_names.append(disk.canonicalName)
ret.update({host_name: {'Eligible': disk_names}})
else:
# If we have disks, but it's not a list, it's actually a
# string message that we're passing along.
ret.update({host_name: {'Eligible': disks}})
return ret
def system_info(host, username, password, protocol=None, port=None):
'''
Return system information about a VMware environment.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.system_info 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.get_inventory(service_instance).about.__dict__
def list_datacenters(host, username, password, protocol=None, port=None):
'''
Returns a list of datacenters for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_datacenters 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_datacenters(service_instance)
def list_clusters(host, username, password, protocol=None, port=None):
'''
Returns a list of clusters for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_clusters 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_clusters(service_instance)
def list_datastore_clusters(host, username, password, protocol=None, port=None):
'''
Returns a list of datastore clusters for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_datastore_clusters 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_datastore_clusters(service_instance)
def list_datastores(host, username, password, protocol=None, port=None):
'''
Returns a list of datastores for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_datastores 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_datastores(service_instance)
def list_hosts(host, username, password, protocol=None, port=None):
'''
Returns a list of hosts for the specified VMware environment.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_hosts 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_hosts(service_instance)
def list_resourcepools(host, username, password, protocol=None, port=None):
'''
Returns a list of resource pools for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_resourcepools 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_resourcepools(service_instance)
def list_networks(host, username, password, protocol=None, port=None):
'''
Returns a list of networks for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_networks 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_networks(service_instance)
def list_vms(host, username, password, protocol=None, port=None):
'''
Returns a list of VMs for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_vms 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_vms(service_instance)
def list_folders(host, username, password, protocol=None, port=None):
'''
Returns a list of folders for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_folders 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_folders(service_instance)
def list_dvs(host, username, password, protocol=None, port=None):
'''
Returns a list of distributed virtual switches for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_dvs 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_dvs(service_instance)
def list_vapps(host, username, password, protocol=None, port=None):
'''
Returns a list of vApps for the specified host.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.list_vapps 1.2.3.4 root bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
return salt.utils.vmware.list_vapps(service_instance)
def list_ssds(host, username, password, protocol=None, port=None, host_names=None):
'''
Returns a list of SSDs for the given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter the hosts for which to retrieve SSDs.
If host_names is not provided, SSDs will be retrieved for the
``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.list_ssds my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.list_ssds my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
# Build the names list per host so one host's disks don't bleed into another's results.
names = []
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
disks = _get_host_ssds(host_ref)
for disk in disks:
names.append(disk.canonicalName)
ret.update({host_name: names})
return ret
def list_non_ssds(host, username, password, protocol=None, port=None, host_names=None):
'''
Returns a list of Non-SSD disks for the given host or list of host_names.
.. note::
In the pyVmomi StorageSystem, ScsiDisks may, or may not have an ``ssd`` attribute.
This attribute indicates if the ScsiDisk is SSD backed. As this option is optional,
if a relevant disk in the StorageSystem does not have ``ssd = true``, it will end
up in the ``non_ssds`` list here.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter the hosts for which to retrieve Non-SSD disks.
If host_names is not provided, Non-SSD disks will be retrieved for the
``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.list_non_ssds my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.list_non_ssds my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
# Build the names list per host so one host's disks don't bleed into another's results.
names = []
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
disks = _get_host_non_ssds(host_ref)
for disk in disks:
names.append(disk.canonicalName)
ret.update({host_name: names})
return ret
def set_ntp_config(host, username, password, ntp_servers, protocol=None, port=None, host_names=None):
'''
Set NTP configuration for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
ntp_servers
A list of servers that should be added to and configured for the specified
host's NTP configuration.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter which hosts should have their NTP servers configured.
If host_names is not provided, the NTP servers will be configured for the
``host`` location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.set_ntp_config my.esxi.host root bad-password '[192.174.1.100, 192.174.1.200]'
# Used for connecting to a vCenter Server
salt '*' vsphere.set_ntp_config my.vcenter.location root bad-password '[192.174.1.100, 192.174.1.200]' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
if not isinstance(ntp_servers, list):
raise CommandExecutionError('\'ntp_servers\' must be a list.')
# Get NTP Config Object from ntp_servers
ntp_config = vim.HostNtpConfig(server=ntp_servers)
# Get DateTimeConfig object from ntp_config
date_config = vim.HostDateTimeConfig(ntpConfig=ntp_config)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
date_time_manager = _get_date_time_mgr(host_ref)
log.debug('Configuring NTP Servers \'{0}\' for host \'{1}\'.'.format(ntp_servers, host_name))
try:
date_time_manager.UpdateDateTimeConfig(config=date_config)
except vim.fault.HostConfigFault as err:
msg = '\'vsphere.set_ntp_config\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
# Return the plain server list; the vim config object doesn't serialize cleanly.
ret.update({host_name: {'NTP Servers': ntp_servers}})
return ret
def service_start(host,
username,
password,
service_name,
protocol=None,
port=None,
host_names=None):
'''
Start the named service for the given host or list of hosts.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
service_name
The name of the service to start. Supported service names are:
- DCUI
- TSM
- SSH
- lbtd
- lsassd
- lwiod
- netlogond
- ntpd
- sfcbd-watchdog
- snmpd
- vprobed
- vpxa
- xorg
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to start the service.
If host_names is not provided, the service will be started for the ``host``
location instead. This is useful for when service instance connection information
is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.service_start my.esxi.host root bad-password 'ntpd'
# Used for connecting to a vCenter Server
salt '*' vsphere.service_start my.vcenter.location root bad-password 'ntpd' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
valid_services = ['DCUI', 'TSM', 'SSH', 'ssh', 'lbtd', 'lsassd', 'lwiod', 'netlogond',
'ntpd', 'sfcbd-watchdog', 'snmpd', 'vprobed', 'vpxa', 'xorg']
ret = {}
# Don't require users to know that VMware lists the ssh service as TSM-SSH
if service_name == 'SSH' or service_name == 'ssh':
temp_service_name = 'TSM-SSH'
else:
temp_service_name = service_name
for host_name in host_names:
# Check if the service_name provided is a valid one.
# If we don't have a valid service, return. The service will be invalid for all hosts.
if service_name not in valid_services:
ret.update({host_name: {'Error': '{0} is not a valid service name.'.format(service_name)}})
return ret
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
service_manager = _get_service_manager(host_ref)
log.debug('Starting the \'{0}\' service on {1}.'.format(service_name, host_name))
# Start the service
try:
service_manager.StartService(id=temp_service_name)
except vim.fault.HostConfigFault as err:
msg = '\'vsphere.service_start\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
# Some services are restricted by the vSphere License Level.
except vim.fault.RestrictedVersion as err:
log.debug(err)
ret.update({host_name: {'Error': err}})
continue
ret.update({host_name: {'Service Started': True}})
return ret
def service_stop(host,
username,
password,
service_name,
protocol=None,
port=None,
host_names=None):
'''
Stop the named service for the given host or list of hosts.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
service_name
The name of the service to stop. Supported service names are:
- DCUI
- TSM
- SSH
- lbtd
- lsassd
- lwiod
- netlogond
- ntpd
- sfcbd-watchdog
- snmpd
- vprobed
- vpxa
- xorg
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to stop the service.
If host_names is not provided, the service will be stopped for the ``host``
location instead. This is useful for when service instance connection information
is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.service_stop my.esxi.host root bad-password 'ssh'
# Used for connecting to a vCenter Server
salt '*' vsphere.service_stop my.vcenter.location root bad-password 'ssh' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
valid_services = ['DCUI', 'TSM', 'SSH', 'ssh', 'lbtd', 'lsassd', 'lwiod', 'netlogond',
'ntpd', 'sfcbd-watchdog', 'snmpd', 'vprobed', 'vpxa', 'xorg']
ret = {}
# Don't require users to know that VMware lists the ssh service as TSM-SSH
if service_name == 'SSH' or service_name == 'ssh':
temp_service_name = 'TSM-SSH'
else:
temp_service_name = service_name
for host_name in host_names:
# Check if the service_name provided is a valid one.
# If we don't have a valid service, return. The service will be invalid for all hosts.
if service_name not in valid_services:
ret.update({host_name: {'Error': '{0} is not a valid service name.'.format(service_name)}})
return ret
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
service_manager = _get_service_manager(host_ref)
log.debug('Stopping the \'{0}\' service on {1}.'.format(service_name, host_name))
# Stop the service.
try:
service_manager.StopService(id=temp_service_name)
except vim.fault.HostConfigFault as err:
msg = '\'vsphere.service_stop\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
# Some services are restricted by the vSphere License Level.
except vim.fault.RestrictedVersion as err:
log.debug(err)
ret.update({host_name: {'Error': err}})
continue
ret.update({host_name: {'Service Stopped': True}})
return ret
def service_restart(host,
username,
password,
service_name,
protocol=None,
port=None,
host_names=None):
'''
Restart the named service for the given host or list of hosts.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
service_name
The name of the service to restart. Supported service names are:
- DCUI
- TSM
- SSH
- lbtd
- lsassd
- lwiod
- netlogond
- ntpd
- sfcbd-watchdog
- snmpd
- vprobed
- vpxa
- xorg
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to restart the service.
If host_names is not provided, the service will be restarted for the ``host``
location instead. This is useful for when service instance connection information
is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.service_restart my.esxi.host root bad-password 'ntpd'
# Used for connecting to a vCenter Server
salt '*' vsphere.service_restart my.vcenter.location root bad-password 'ntpd' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
valid_services = ['DCUI', 'TSM', 'SSH', 'ssh', 'lbtd', 'lsassd', 'lwiod', 'netlogond',
'ntpd', 'sfcbd-watchdog', 'snmpd', 'vprobed', 'vpxa', 'xorg']
ret = {}
# Don't require users to know that VMware lists the ssh service as TSM-SSH
if service_name == 'SSH' or service_name == 'ssh':
temp_service_name = 'TSM-SSH'
else:
temp_service_name = service_name
for host_name in host_names:
# Check if the service_name provided is a valid one.
# If we don't have a valid service, return. The service will be invalid for all hosts.
if service_name not in valid_services:
ret.update({host_name: {'Error': '{0} is not a valid service name.'.format(service_name)}})
return ret
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
service_manager = _get_service_manager(host_ref)
log.debug('Restarting the \'{0}\' service on {1}.'.format(service_name, host_name))
# Restart the service.
try:
service_manager.RestartService(id=temp_service_name)
except vim.fault.HostConfigFault as err:
msg = '\'vsphere.service_restart\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
# Some services are restricted by the vSphere License Level.
except vim.fault.RestrictedVersion as err:
log.debug(err)
ret.update({host_name: {'Error': err}})
continue
ret.update({host_name: {'Service Restarted': True}})
return ret
def set_service_policy(host,
username,
password,
service_name,
service_policy,
protocol=None,
port=None,
host_names=None):
'''
Set the service name's policy for a given host or list of hosts.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
service_name
The name of the service for which to set the policy. Supported service names are:
- DCUI
- TSM
- SSH
- lbtd
- lsassd
- lwiod
- netlogond
- ntpd
- sfcbd-watchdog
- snmpd
- vprobed
- vpxa
- xorg
service_policy
The policy to set for the service. Valid values are ``automatic``, ``on``, and ``off``.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to set the service policy.
If host_names is not provided, the service policy information will be retrieved
for the ``host`` location instead. This is useful for when service instance
connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.set_service_policy my.esxi.host root bad-password 'ntpd' 'automatic'
# Used for connecting to a vCenter Server
salt '*' vsphere.set_service_policy my.vcenter.location root bad-password 'ntpd' 'automatic' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
valid_services = ['DCUI', 'TSM', 'SSH', 'ssh', 'lbtd', 'lsassd', 'lwiod', 'netlogond',
'ntpd', 'sfcbd-watchdog', 'snmpd', 'vprobed', 'vpxa', 'xorg']
ret = {}
for host_name in host_names:
# Check if the service_name provided is a valid one.
# If we don't have a valid service, return. The service will be invalid for all hosts.
if service_name not in valid_services:
ret.update({host_name: {'Error': '{0} is not a valid service name.'.format(service_name)}})
return ret
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
service_manager = _get_service_manager(host_ref)
services = host_ref.configManager.serviceSystem.serviceInfo.service
# Services are stored in a general list - we need to loop through the list
# and find the service key that matches our service name.
for service in services:
service_key = None
# Find the service key based on the given service_name
if service.key == service_name:
service_key = service.key
elif service_name == 'ssh' or service_name == 'SSH':
if service.key == 'TSM-SSH':
service_key = 'TSM-SSH'
# If we have a service_key, we've found a match. Update the policy.
if service_key:
try:
service_manager.UpdateServicePolicy(id=service_key, policy=service_policy)
except vim.fault.NotFound:
msg = 'The service name \'{0}\' was not found.'.format(service_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
# Some services are restricted by the vSphere License Level.
except vim.fault.HostConfigFault as err:
msg = '\'vsphere.set_service_policy\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
ret.update({host_name: True})
# If we made it this far, something else has gone wrong.
if ret.get(host_name) is None:
msg = 'Could not find service \'{0}\' for host \'{1}\'.'.format(service_name, host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
return ret
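The service-key matching loop above (including the `ssh`/`SSH` alias for the host's `TSM-SSH` key) can be sketched as a standalone helper. The function name below is hypothetical and plain strings stand in for the pyVmomi service objects the real loop inspects:

```python
# Hypothetical standalone sketch of the service-key lookup used above.
# Plain strings stand in for pyVmomi HostServiceSystem service objects.
def match_service_key(service_name, available_keys):
    """Return the host's service key matching service_name, or None."""
    for key in available_keys:
        if key == service_name:
            return key
        # 'ssh'/'SSH' are accepted as aliases for the 'TSM-SSH' key.
        if service_name in ('ssh', 'SSH') and key == 'TSM-SSH':
            return 'TSM-SSH'
    return None
```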
def update_host_datetime(host, username, password, protocol=None, port=None, host_names=None):
'''
Update the date/time on the given host or list of host_names. This function should be
used with caution since network delays and execution delays can result in time skews.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts should update their date/time.
If host_names is not provided, the date/time will be updated for the ``host``
location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
        salt '*' vsphere.update_host_datetime my.esxi.host root bad-password
# Used for connecting to a vCenter Server
        salt '*' vsphere.update_host_datetime my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
date_time_manager = _get_date_time_mgr(host_ref)
try:
date_time_manager.UpdateDateTime(datetime.datetime.utcnow())
except vim.fault.HostConfigFault as err:
msg = '\'vsphere.update_date_time\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
ret.update({host_name: {'Datetime Updated': True}})
return ret
def update_host_password(host, username, password, new_password, protocol=None, port=None):
'''
Update the password for a given host.
.. note:: Currently only works with connections to ESXi hosts. Does not work with vCenter servers.
host
The location of the ESXi host.
username
The username used to login to the ESXi host, such as ``root``.
password
The password used to login to the ESXi host.
new_password
The new password that will be updated for the provided username on the ESXi host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
CLI Example:
.. code-block:: bash
salt '*' vsphere.update_host_password my.esxi.host root original-bad-password new-bad-password
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
# Get LocalAccountManager object
account_manager = salt.utils.vmware.get_inventory(service_instance).accountManager
# Create user account specification object and assign id and password attributes
user_account = vim.host.LocalAccountManager.AccountSpecification()
user_account.id = username
user_account.password = new_password
# Update the password
try:
account_manager.UpdateUser(user_account)
except vmodl.fault.SystemError as err:
raise CommandExecutionError(err.msg)
except vim.fault.UserNotFound:
raise CommandExecutionError('\'vsphere.update_host_password\' failed for host {0}: '
'User was not found.'.format(host))
# If the username and password already exist, we don't need to do anything.
except vim.fault.AlreadyExists:
pass
return True
def vmotion_disable(host, username, password, protocol=None, port=None, host_names=None):
'''
Disable vMotion for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts should disable VMotion.
If host_names is not provided, VMotion will be disabled for the ``host``
location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.vmotion_disable my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.vmotion_disable my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vmotion_system = host_ref.configManager.vmotionSystem
# Disable VMotion for the host by removing the VNic selected to use for VMotion.
try:
vmotion_system.DeselectVnic()
except vim.fault.HostConfigFault as err:
msg = 'vsphere.vmotion_disable failed: {0}'.format(err)
log.debug(msg)
ret.update({host_name: {'Error': msg,
'VMotion Disabled': False}})
continue
ret.update({host_name: {'VMotion Disabled': True}})
return ret
def vmotion_enable(host, username, password, protocol=None, port=None, host_names=None, device='vmk0'):
'''
Enable vMotion for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts should enable VMotion.
If host_names is not provided, VMotion will be enabled for the ``host``
location instead. This is useful for when service instance connection
information is used for a single ESXi host.
device
The device that uniquely identifies the VirtualNic that will be used for
VMotion for each host. Defaults to ``vmk0``.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.vmotion_enable my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.vmotion_enable my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vmotion_system = host_ref.configManager.vmotionSystem
# Enable VMotion for the host by setting the given device to provide the VNic to use for VMotion.
try:
vmotion_system.SelectVnic(device)
except vim.fault.HostConfigFault as err:
            msg = '\'vsphere.vmotion_enable\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg,
'VMotion Enabled': False}})
continue
ret.update({host_name: {'VMotion Enabled': True}})
return ret
def vsan_add_disks(host, username, password, protocol=None, port=None, host_names=None):
'''
Add any VSAN-eligible disks to the VSAN System for the given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts need to add any VSAN-eligible disks to the host's
VSAN system.
        If host_names is not provided, VSAN-eligible disks will be added to the host's
VSAN system for the ``host`` location instead. This is useful for when service
instance connection information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.vsan_add_disks my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.vsan_add_disks my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
host_names = _check_hosts(service_instance, host, host_names)
response = _get_vsan_eligible_disks(service_instance, host, host_names)
ret = {}
    for host_name, value in six.iteritems(response):
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vsan_system = host_ref.configManager.vsanSystem
# We must have a VSAN Config in place before we can manipulate it.
if vsan_system is None:
msg = 'VSAN System Config Manager is unset for host \'{0}\'. ' \
'VSAN configuration cannot be changed without a configured ' \
'VSAN System.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
else:
eligible = value.get('Eligible')
error = value.get('Error')
if eligible and isinstance(eligible, list):
# If we have eligible, matching disks, add them to VSAN.
try:
task = vsan_system.AddDisks(eligible)
salt.utils.vmware.wait_for_task(task, host_name, 'Adding disks to VSAN', sleep_seconds=3)
except Exception as err:
msg = '\'vsphere.vsan_add_disks\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
log.debug('Successfully added disks to the VSAN system for host \'{0}\'.'.format(host_name))
                    # We need to return ONLY the disk names, otherwise MessagePack can't deserialize the disk objects.
disk_names = []
for disk in eligible:
disk_names.append(disk.canonicalName)
ret.update({host_name: {'Disks Added': disk_names}})
elif eligible and isinstance(eligible, six.string_types):
# If we have a string type in the eligible value, we don't
# have any VSAN-eligible disks. Pull the message through.
ret.update({host_name: {'Disks Added': eligible}})
elif error:
# If we hit an error, populate the Error return dict for state functions.
ret.update({host_name: {'Error': error}})
else:
# If we made it this far, we somehow have eligible disks, but they didn't
# match the disk list and just got an empty list of matching disks.
ret.update({host_name: {'Disks Added': 'No new VSAN-eligible disks were found to add.'}})
return ret
def vsan_disable(host, username, password, protocol=None, port=None, host_names=None):
'''
Disable VSAN for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts should disable VSAN.
If host_names is not provided, VSAN will be disabled for the ``host``
location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.vsan_disable my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.vsan_disable my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
    # Create a VSAN Configuration Object and set the enabled attribute to False
vsan_config = vim.vsan.host.ConfigInfo()
vsan_config.enabled = False
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vsan_system = host_ref.configManager.vsanSystem
# We must have a VSAN Config in place before we can manipulate it.
if vsan_system is None:
msg = 'VSAN System Config Manager is unset for host \'{0}\'. ' \
'VSAN configuration cannot be changed without a configured ' \
'VSAN System.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
else:
try:
# Disable vsan on the host
task = vsan_system.UpdateVsan_Task(vsan_config)
salt.utils.vmware.wait_for_task(task, host_name, 'Disabling VSAN', sleep_seconds=3)
except vmodl.fault.SystemError as err:
log.debug(err.msg)
ret.update({host_name: {'Error': err.msg}})
continue
except Exception as err:
msg = '\'vsphere.vsan_disable\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
ret.update({host_name: {'VSAN Disabled': True}})
return ret
def vsan_enable(host, username, password, protocol=None, port=None, host_names=None):
'''
Enable VSAN for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts should enable VSAN.
If host_names is not provided, VSAN will be enabled for the ``host``
location instead. This is useful for when service instance connection
information is used for a single ESXi host.
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.vsan_enable my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.vsan_enable my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
'''
service_instance = salt.utils.vmware.get_service_instance(host=host,
username=username,
password=password,
protocol=protocol,
port=port)
# Create a VSAN Configuration Object and set the enabled attribute to True
vsan_config = vim.vsan.host.ConfigInfo()
vsan_config.enabled = True
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vsan_system = host_ref.configManager.vsanSystem
# We must have a VSAN Config in place before we can manipulate it.
if vsan_system is None:
msg = 'VSAN System Config Manager is unset for host \'{0}\'. ' \
'VSAN configuration cannot be changed without a configured ' \
'VSAN System.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
else:
try:
# Enable vsan on the host
task = vsan_system.UpdateVsan_Task(vsan_config)
salt.utils.vmware.wait_for_task(task, host_name, 'Enabling VSAN', sleep_seconds=3)
except vmodl.fault.SystemError as err:
log.debug(err.msg)
ret.update({host_name: {'Error': err.msg}})
continue
except vim.fault.VsanFault as err:
msg = '\'vsphere.vsan_enable\' failed for host {0}: {1}'.format(host_name, err)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
ret.update({host_name: {'VSAN Enabled': True}})
return ret
def _check_hosts(service_instance, host, host_names):
'''
Helper function that checks to see if the host provided is a vCenter Server or
an ESXi host. If it's an ESXi host, returns a list of a single host_name.
    If a host reference isn't found, the ``host`` location is assumed to be a
    vCenter Server and a CommandExecutionError is raised, since a list of
    host_names is required to check against in that case.
'''
if not host_names:
        host_ref = _get_host_ref(service_instance, host)
        if host_ref:
            host_names = [host]
else:
raise CommandExecutionError('No host reference found. If connecting to a '
'vCenter Server, a list of \'host_names\' must be '
'provided.')
elif not isinstance(host_names, list):
raise CommandExecutionError('\'host_names\' must be a list.')
return host_names
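The branching rules in `_check_hosts` can be illustrated without a live vSphere connection by replacing the host-reference lookup with a boolean flag. The function and exception names below are stand-ins for this sketch only:

```python
def normalize_host_names(host, host_names, host_ref_found):
    """Sketch of _check_hosts' normalization, minus the vSphere lookup."""
    if not host_names:
        if not host_ref_found:
            # Connecting to a vCenter Server requires an explicit host list.
            raise ValueError("No host reference found. If connecting to a "
                             "vCenter Server, a list of 'host_names' must be "
                             "provided.")
        # A single ESXi connection: the host location is the only host name.
        return [host]
    if not isinstance(host_names, list):
        raise ValueError("'host_names' must be a list.")
    return host_names
```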
def _format_coredump_stdout(cmd_ret):
'''
Helper function to format the stdout from the get_coredump_network_config function.
cmd_ret
The return dictionary that comes from a cmd.run_all call.
'''
ret_dict = {}
for line in cmd_ret['stdout'].splitlines():
line = line.strip().lower()
if line.startswith('enabled:'):
enabled = line.split(':')
if 'true' in enabled[1]:
ret_dict['enabled'] = True
else:
ret_dict['enabled'] = False
break
if line.startswith('host vnic:'):
host_vnic = line.split(':')
ret_dict['host_vnic'] = host_vnic[1].strip()
if line.startswith('network server ip:'):
ip = line.split(':')
ret_dict['ip'] = ip[1].strip()
if line.startswith('network server port:'):
ip_port = line.split(':')
ret_dict['port'] = ip_port[1].strip()
return ret_dict
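In isolation, the coredump parsing above amounts to picking known `key: value` lines out of `esxcli system coredump network get`-style output. This is a simplified sketch (it drops the early-exit behavior, and the sample text in the test is illustrative, not captured from a real host):

```python
def parse_coredump_config(stdout):
    """Sketch of the 'esxcli system coredump network get' parsing above."""
    cfg = {}
    for line in stdout.splitlines():
        line = line.strip().lower()
        if line.startswith('enabled:'):
            cfg['enabled'] = 'true' in line.split(':')[1]
        elif line.startswith('host vnic:'):
            cfg['host_vnic'] = line.split(':')[1].strip()
        elif line.startswith('network server ip:'):
            cfg['ip'] = line.split(':')[1].strip()
        elif line.startswith('network server port:'):
            cfg['port'] = line.split(':')[1].strip()
    return cfg
```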
def _format_firewall_stdout(cmd_ret):
'''
Helper function to format the stdout from the get_firewall_status function.
cmd_ret
The return dictionary that comes from a cmd.run_all call.
'''
ret_dict = {'success': True,
'rulesets': {}}
for line in cmd_ret['stdout'].splitlines():
if line.startswith('Name'):
continue
if line.startswith('---'):
continue
ruleset_status = line.split()
        ret_dict['rulesets'][ruleset_status[0]] = ruleset_status[1].lower() == 'true'
return ret_dict
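A standalone sketch of the ruleset-table parsing above. Note that esxcli prints the literal strings `true`/`false`, so the value must be compared as a string rather than passed to `bool()` (which is truthy for any non-empty string). The sample table in the test is illustrative:

```python
def parse_firewall_rulesets(stdout):
    """Parse 'Name  Enabled' table lines into {name: bool} (sketch)."""
    rulesets = {}
    for line in stdout.splitlines():
        # Skip the header row, the separator row, and blank lines.
        if not line.strip() or line.startswith('Name') or line.startswith('---'):
            continue
        name, enabled = line.split()
        rulesets[name] = enabled.lower() == 'true'
    return rulesets
```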
def _format_syslog_config(cmd_ret):
'''
Helper function to format the stdout from the get_syslog_config function.
cmd_ret
The return dictionary that comes from a cmd.run_all call.
'''
ret_dict = {'success': cmd_ret['retcode'] == 0}
if cmd_ret['retcode'] != 0:
ret_dict['message'] = cmd_ret['stdout']
else:
for line in cmd_ret['stdout'].splitlines():
line = line.strip()
cfgvars = line.split(': ')
key = cfgvars[0].strip()
value = cfgvars[1].strip()
ret_dict[key] = value
return ret_dict
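The syslog-config branch above splits each line on `': '`, which conveniently leaves URI values such as `udp://loghost:514` intact (no `': '` inside them). A minimal sketch with a hypothetical function name:

```python
def parse_syslog_config(stdout):
    """Sketch of the 'key: value' parsing used for esxcli syslog output."""
    cfg = {}
    for line in stdout.splitlines():
        # partition() on ': ' keeps colons inside the value untouched.
        key, sep, value = line.strip().partition(': ')
        if sep:
            cfg[key.strip()] = value.strip()
    return cfg
```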
def _get_date_time_mgr(host_reference):
'''
Helper function that returns a dateTimeManager object
'''
return host_reference.configManager.dateTimeSystem
def _get_host_ref(service_instance, host, host_name=None):
'''
Helper function that returns a host object either from the host location or the host_name.
If host_name is provided, that is the host_object that will be returned.
The function will first search for hosts by DNS Name. If no hosts are found, it will
try searching by IP Address.
'''
search_index = salt.utils.vmware.get_inventory(service_instance).searchIndex
# First, try to find the host reference by DNS Name.
if host_name:
host_ref = search_index.FindByDnsName(dnsName=host_name, vmSearch=False)
else:
host_ref = search_index.FindByDnsName(dnsName=host, vmSearch=False)
# If we couldn't find the host by DNS Name, then try the IP Address.
if host_ref is None:
host_ref = search_index.FindByIp(ip=host, vmSearch=False)
return host_ref
def _get_host_ssds(host_reference):
'''
Helper function that returns a list of ssd objects for a given host.
'''
return _get_host_disks(host_reference).get('SSDs')
def _get_host_non_ssds(host_reference):
'''
Helper function that returns a list of Non-SSD objects for a given host.
'''
return _get_host_disks(host_reference).get('Non-SSDs')
def _get_host_disks(host_reference):
'''
Helper function that returns a dictionary containing a list of SSD and Non-SSD disks.
'''
storage_system = host_reference.configManager.storageSystem
disks = storage_system.storageDeviceInfo.scsiLun
ssds = []
non_ssds = []
for disk in disks:
try:
has_ssd_attr = disk.ssd
except AttributeError:
has_ssd_attr = False
if has_ssd_attr:
ssds.append(disk)
else:
non_ssds.append(disk)
return {'SSDs': ssds, 'Non-SSDs': non_ssds}
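The SSD partitioning in `_get_host_disks` hinges on the fact that some pyVmomi ScsiLun objects lack the `ssd` attribute entirely. A self-contained sketch using a stand-in disk class (all names here are hypothetical):

```python
class FakeDisk(object):
    """Stand-in for a pyVmomi ScsiLun; 'ssd' may be absent, as on real objects."""
    def __init__(self, name, ssd=None):
        self.canonicalName = name
        if ssd is not None:
            self.ssd = ssd

def split_disks_by_ssd(disks):
    """Sketch of _get_host_disks' SSD / non-SSD partitioning."""
    ssds, non_ssds = [], []
    for disk in disks:
        # getattr covers devices that have no 'ssd' attribute at all.
        if getattr(disk, 'ssd', False):
            ssds.append(disk)
        else:
            non_ssds.append(disk)
    return {'SSDs': ssds, 'Non-SSDs': non_ssds}
```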
def _get_service_manager(host_reference):
'''
Helper function that returns a service manager object from a given host object.
'''
return host_reference.configManager.serviceSystem
def _get_vsan_eligible_disks(service_instance, host, host_names):
'''
    Helper function that returns a dictionary of host_name keys with either a list
    of eligible disks that can be added to VSAN, an 'Error' message, or a message
    saying that no eligible disks were found. Possible keys/values look like so:

    return = {'host_1': {'Error': 'VSAN System Config Manager is unset ...'},
              'host_2': {'Eligible': 'The host xxx does not have any VSAN eligible disks.'},
              'host_3': {'Eligible': [disk1, disk2, disk3, disk4]},
              'host_4': {'Eligible': []}}
'''
ret = {}
for host_name in host_names:
# Get VSAN System Config Manager, if available.
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
vsan_system = host_ref.configManager.vsanSystem
if vsan_system is None:
msg = 'VSAN System Config Manager is unset for host \'{0}\'. ' \
'VSAN configuration cannot be changed without a configured ' \
'VSAN System.'.format(host_name)
log.debug(msg)
ret.update({host_name: {'Error': msg}})
continue
# Get all VSAN suitable disks for this host.
suitable_disks = []
query = vsan_system.QueryDisksForVsan()
for item in query:
if item.state == 'eligible':
suitable_disks.append(item)
# No suitable disks were found to add. Warn and move on.
# This isn't an error as the state may run repeatedly after all eligible disks are added.
if not suitable_disks:
msg = 'The host \'{0}\' does not have any VSAN eligible disks.'.format(host_name)
log.warning(msg)
ret.update({host_name: {'Eligible': msg}})
continue
# Get disks for host and combine into one list of Disk Objects
disks = _get_host_ssds(host_ref) + _get_host_non_ssds(host_ref)
# Get disks that are in both the disks list and suitable_disks lists.
matching = []
for disk in disks:
for suitable_disk in suitable_disks:
if disk.canonicalName == suitable_disk.disk.canonicalName:
matching.append(disk)
ret.update({host_name: {'Eligible': matching}})
return ret
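The nested matching loop at the end of `_get_vsan_eligible_disks` joins the two lists on `canonicalName` (the eligible entries wrap the disk one level deeper, under `.disk`). The same join, sketched with stand-in objects and a set for the lookup:

```python
from types import SimpleNamespace

def match_eligible_disks(host_disks, suitable_disks):
    """Sketch of the canonicalName join in _get_vsan_eligible_disks."""
    # QueryDisksForVsan results carry the disk one level down, under .disk.
    suitable_names = set(s.disk.canonicalName for s in suitable_disks)
    return [d for d in host_disks if d.canonicalName in suitable_names]
```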
def _reset_syslog_config_params(host, username, password, cmd, resets, valid_resets,
protocol=None, port=None, esxi_host=None):
'''
Helper function for reset_syslog_config that resets the config and populates the return dictionary.
'''
ret_dict = {}
all_success = True
for reset_param in resets:
if reset_param in valid_resets:
ret = salt.utils.vmware.esxcli(host, username, password, cmd + reset_param,
protocol=protocol, port=port,
esxi_host=esxi_host)
ret_dict[reset_param] = {}
ret_dict[reset_param]['success'] = ret['retcode'] == 0
if ret['retcode'] != 0:
all_success = False
ret_dict[reset_param]['message'] = ret['stdout']
else:
all_success = False
ret_dict[reset_param] = {}
ret_dict[reset_param]['success'] = False
ret_dict[reset_param]['message'] = 'Invalid syslog ' \
'configuration parameter'
ret_dict['success'] = all_success
return ret_dict
def _set_syslog_config_helper(host, username, password, syslog_config, config_value,
protocol=None, port=None, reset_service=None, esxi_host=None):
'''
Helper function for set_syslog_config that sets the config and populates the return dictionary.
'''
cmd = 'system syslog config set --{0} {1}'.format(syslog_config, config_value)
ret_dict = {}
valid_resets = ['logdir', 'loghost', 'default-rotate',
'default-size', 'default-timeout', 'logdir-unique']
if syslog_config not in valid_resets:
        ret_dict.update({'success': False,
                         'message': '\'{0}\' is not a valid config variable.'.format(syslog_config)})
        return ret_dict
response = salt.utils.vmware.esxcli(host, username, password, cmd,
protocol=protocol, port=port,
esxi_host=esxi_host)
# Update the return dictionary for success or error messages.
if response['retcode'] != 0:
ret_dict.update({syslog_config: {'success': False,
'message': response['stdout']}})
else:
ret_dict.update({syslog_config: {'success': True}})
# Restart syslog for each host, if desired.
if reset_service:
if esxi_host:
host_name = esxi_host
esxi_host = [esxi_host]
else:
host_name = host
response = syslog_service_reload(host, username, password,
protocol=protocol, port=port,
esxi_hosts=esxi_host).get(host_name)
ret_dict.update({'syslog_restart': {'success': response['retcode'] == 0}})
return ret_dict
from pySOT.strategy import SRBFStrategy, DYCORSStrategy, \
EIStrategy, RandomSampling, LCBStrategy, SOPStrategy
from pySOT.experimental_design import SymmetricLatinHypercube
from pySOT.surrogate import GPRegressor, \
RBFInterpolant, CubicKernel, LinearTail
from pySOT.optimization_problems import Ackley
from poap.controller import SerialController, \
ThreadController, BasicWorkerThread
import numpy as np
num_threads = 4
ackley = Ackley(dim=10)
def check_strategy(controller):
"""Make sure the strategy object is correct."""
# Check the strategy object
assert controller.strategy.num_evals <= controller.strategy.max_evals
assert controller.strategy.phase == 2
assert controller.strategy.init_pending == 0
assert controller.strategy.pending_evals == 0
assert controller.strategy.X.shape == \
(controller.strategy.num_evals, ackley.dim)
assert controller.strategy.fX.shape == (controller.strategy.num_evals, 1)
assert controller.strategy.Xpend.shape == (0, ackley.dim)
assert len(controller.strategy.fevals) == controller.strategy.num_evals
# Check that all evaluations are in the surrogate model
assert controller.strategy.surrogate.num_pts == \
controller.strategy.num_evals
assert np.all(controller.strategy.X == controller.strategy.surrogate.X)
assert np.all(controller.strategy.fX == controller.strategy.surrogate.fX)
# Check that the strategy and controller have the same information
for i in range(controller.strategy.num_evals):
idx = np.where((controller.strategy.X ==
controller.fevals[i].params[0]).all(axis=1))[0]
assert np.all(controller.fevals[i].params[0] ==
controller.strategy.X[idx, :])
assert controller.fevals[i].value == controller.strategy.fX[idx]
assert np.all(controller.fevals[i].params[0] <= ackley.ub)
assert np.all(controller.fevals[i].params[0] >= ackley.lb)
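`check_strategy` cross-checks controller records against the strategy's `X` matrix using a common NumPy idiom for locating a row inside a 2-D array. In isolation:

```python
import numpy as np

# Find the index of a specific row inside a 2-D array, as check_strategy does
# when matching each evaluation's parameters against strategy.X.
X = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
row = np.array([2.0, 3.0])
# Broadcasting compares 'row' against every row; all(axis=1) keeps exact matches.
idx = np.where((X == row).all(axis=1))[0]
```

Exact floating-point equality works here because the compared values originate from the same evaluations, not from separate computations.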
def test_srbf_serial():
max_evals = 200
rbf = RBFInterpolant(
dim=ackley.dim, kernel=CubicKernel(),
tail=LinearTail(ackley.dim))
slhd = SymmetricLatinHypercube(
dim=ackley.dim, num_pts=2*(ackley.dim+1))
# Create a strategy and a controller
controller = SerialController(ackley.eval)
controller.strategy = SRBFStrategy(
max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
surrogate=rbf, asynchronous=True)
controller.run()
check_strategy(controller)
def test_srbf_sync():
max_evals = 200
rbf = RBFInterpolant(
dim=ackley.dim, kernel=CubicKernel(),
tail=LinearTail(ackley.dim))
slhd = SymmetricLatinHypercube(
dim=ackley.dim, num_pts=2*(ackley.dim+1))
# Create a strategy and a controller
controller = ThreadController()
controller.strategy = SRBFStrategy(
max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
surrogate=rbf, asynchronous=False, batch_size=num_threads)
for _ in range(num_threads):
worker = BasicWorkerThread(controller, ackley.eval)
controller.launch_worker(worker)
controller.run()
check_strategy(controller)
def test_srbf_async():
max_evals = 200
rbf = RBFInterpolant(
dim=ackley.dim, kernel=CubicKernel(),
tail=LinearTail(ackley.dim))
slhd = SymmetricLatinHypercube(
dim=ackley.dim, num_pts=2*(ackley.dim+1))
# Create a strategy and a controller
controller = ThreadController()
controller.strategy = SRBFStrategy(
max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
surrogate=rbf, asynchronous=True, batch_size=None)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


#######################################################################
def test_dycors_serial():
    max_evals = 200
    rbf = RBFInterpolant(
        dim=ackley.dim, kernel=CubicKernel(),
        tail=LinearTail(ackley.dim))
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = SerialController(ackley.eval)
    controller.strategy = DYCORSStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=rbf, asynchronous=True)
    controller.run()
    check_strategy(controller)


def test_dycors_sync():
    max_evals = 200
    rbf = RBFInterpolant(dim=ackley.dim, kernel=CubicKernel(),
                         tail=LinearTail(ackley.dim))
    slhd = SymmetricLatinHypercube(dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = DYCORSStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=rbf, asynchronous=False, batch_size=num_threads)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


def test_dycors_async():
    max_evals = 200
    rbf = RBFInterpolant(
        dim=ackley.dim, kernel=CubicKernel(),
        tail=LinearTail(ackley.dim))
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = DYCORSStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=rbf, asynchronous=True, batch_size=None)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


#######################################################################
def test_ei_serial():
    max_evals = 50
    gp = GPRegressor(dim=ackley.dim)
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = SerialController(ackley.eval)
    controller.strategy = EIStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=gp, asynchronous=True)
    controller.run()
    check_strategy(controller)


def test_ei_sync():
    max_evals = 50
    gp = GPRegressor(dim=ackley.dim)
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = EIStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=gp, asynchronous=False, batch_size=num_threads)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


def test_ei_async():
    max_evals = 50
    gp = GPRegressor(dim=ackley.dim)
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = EIStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=gp, asynchronous=True, batch_size=None)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


#######################################################################
def test_lcb_serial():
    max_evals = 50
    gp = GPRegressor(dim=ackley.dim)
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = SerialController(ackley.eval)
    controller.strategy = LCBStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=gp, asynchronous=True)
    controller.run()
    check_strategy(controller)


def test_lcb_sync():
    max_evals = 50
    gp = GPRegressor(dim=ackley.dim)
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = LCBStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=gp, asynchronous=False, batch_size=num_threads)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


def test_lcb_async():
    max_evals = 50
    gp = GPRegressor(dim=ackley.dim)
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = LCBStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=gp, asynchronous=True, batch_size=None)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


#######################################################################
def test_random_sampling():
    max_evals = 500
    controller = ThreadController()
    controller.strategy = RandomSampling(
        opt_prob=ackley, max_evals=max_evals)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()

    assert len(controller.fevals) == max_evals
    for rec in controller.fevals:
        assert np.all(rec.params[0] <= ackley.ub)
        assert np.all(rec.params[0] >= ackley.lb)


#######################################################################
def test_sop_serial():
    max_evals = 200
    rbf = RBFInterpolant(
        dim=ackley.dim, kernel=CubicKernel(),
        tail=LinearTail(ackley.dim))
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = SerialController(ackley.eval)
    controller.strategy = SOPStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=rbf, asynchronous=True, ncenters=4)
    controller.run()
    check_strategy(controller)


def test_sop_sync():
    max_evals = 200
    rbf = RBFInterpolant(
        dim=ackley.dim, kernel=CubicKernel(),
        tail=LinearTail(ackley.dim))
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = SOPStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=rbf, asynchronous=False, ncenters=num_threads,
        batch_size=num_threads)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


def test_sop_async():
    max_evals = 200
    rbf = RBFInterpolant(
        dim=ackley.dim, kernel=CubicKernel(),
        tail=LinearTail(ackley.dim))
    slhd = SymmetricLatinHypercube(
        dim=ackley.dim, num_pts=2*(ackley.dim+1))

    # Create a strategy and a controller
    controller = ThreadController()
    controller.strategy = SOPStrategy(
        max_evals=max_evals, opt_prob=ackley, exp_design=slhd,
        surrogate=rbf, asynchronous=True, ncenters=num_threads)
    for _ in range(num_threads):
        worker = BasicWorkerThread(controller, ackley.eval)
        controller.launch_worker(worker)
    controller.run()
    check_strategy(controller)


if __name__ == '__main__':
    test_srbf_serial()
    test_srbf_sync()
    test_srbf_async()
    test_dycors_serial()
    test_dycors_sync()
    test_dycors_async()
    test_ei_serial()
    test_ei_sync()
    test_ei_async()
    test_lcb_serial()
    test_lcb_sync()
    test_lcb_async()
    test_random_sampling()
    test_sop_serial()
    test_sop_sync()
    test_sop_async()
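# The tests above optimize the module-level `ackley` problem (with `num_threads`
# and `check_strategy` defined earlier in the file). For reference, the objective
# they minimize is the standard Ackley benchmark; the sketch below is a minimal
# pure-Python version of that formula, not pySOT's own `Ackley` class (which the
# tests call through `ackley.eval`).

```python
import math


def ackley_value(x):
    """Ackley benchmark function; global minimum of 0 at the origin."""
    d = len(x)
    sum_sq = sum(xi * xi for xi in x)
    sum_cos = sum(math.cos(2 * math.pi * xi) for xi in x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + 20.0 + math.e)
```

# A strategy that works should drive `ackley_value` toward 0; `check_strategy`
# presumably asserts something along those lines against `controller.fevals`.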
# --- gamestonk_terminal/common/technical_analysis/volume_view.py | repo: jbushago/GamestonkTerminal @ 73a2b41 | license: MIT | 9651 bytes ---
"""Volume View"""
__docformat__ = "numpy"

import logging
import os
from typing import Optional, List

import matplotlib.pyplot as plt
import pandas as pd

from gamestonk_terminal.config_terminal import theme
from gamestonk_terminal.common.technical_analysis import volume_model
from gamestonk_terminal.config_plot import PLOT_DPI
from gamestonk_terminal.decorators import log_start_end
from gamestonk_terminal.helper_funcs import export_data, plot_autoscale, reindex_dates
from gamestonk_terminal.rich_config import console

logger = logging.getLogger(__name__)


@log_start_end(log=logger)
def display_ad(
    ohlc: pd.DataFrame,
    use_open: bool = False,
    s_ticker: str = "",
    export: str = "",
    external_axes: Optional[List[plt.Axes]] = None,
):
    """Plot AD technical indicator

    Parameters
    ----------
    ohlc : pd.DataFrame
        Dataframe of prices
    use_open : bool
        Whether to use open prices in calculation
    s_ticker : str
        Ticker
    export: str
        Format to export data as
    external_axes : Optional[List[plt.Axes]], optional
        External axes (3 axes is expected in the list), by default None
    """
    divisor = 1_000_000
    df_vol = ohlc["Volume"] / divisor
    df_vol.name = "Adj Volume"
    df_ta = volume_model.ad(ohlc, use_open)
    df_cal = df_ta["AD"] / divisor
    df_cal.name = "Adj AD"

    plot_data = pd.merge(ohlc, df_vol, how="outer", left_index=True, right_index=True)
    plot_data = pd.merge(
        plot_data, df_ta, how="outer", left_index=True, right_index=True
    )
    plot_data = pd.merge(
        plot_data, df_cal, how="outer", left_index=True, right_index=True
    )
    plot_data = reindex_dates(plot_data)

    # This plot has 3 axes
    if external_axes is None:
        _, axes = plt.subplots(
            3,
            1,
            sharex=True,
            figsize=plot_autoscale(),
            dpi=PLOT_DPI,
        )
        ax1, ax2, ax3 = axes
    else:
        if len(external_axes) != 3:
            logger.error("Expected list of three axis items.")
            console.print("[red]Expected list of 3 axis items.\n[/red]")
            return
        (ax1, ax2, ax3) = external_axes

    ax1.plot(plot_data.index, plot_data["Adj Close"].values)
    ax1.set_title(f"{s_ticker} AD", x=0.08, y=1)
    ax1.set_xlim(plot_data.index[0], plot_data.index[-1])
    ax1.set_ylabel("Price")
    theme.style_primary_axis(
        ax1,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    ax2.set_ylabel("Volume [M]")
    bar_colors = [
        theme.down_color if x[1].Open < x[1].Close else theme.up_color
        for x in plot_data.iterrows()
    ]
    ax2.bar(
        plot_data.index,
        plot_data["Adj Volume"].values,
        color=bar_colors,
        width=theme.volume_bar_width,
    )
    ax2.set_xlim(plot_data.index[0], plot_data.index[-1])
    theme.style_primary_axis(
        ax2,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    ax3.set_ylabel("A/D [M]")
    ax3.plot(plot_data.index, plot_data["Adj AD"])
    ax3.set_xlim(plot_data.index[0], plot_data.index[-1])
    ax3.axhline(0, linestyle="--")
    theme.style_primary_axis(
        ax3,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    if external_axes is None:
        theme.visualize_output()

    export_data(
        export,
        os.path.dirname(os.path.abspath(__file__)).replace("common", "stocks"),
        "ad",
        df_ta,
    )


@log_start_end(log=logger)
def display_adosc(
    ohlc: pd.DataFrame,
    fast: int = 3,
    slow: int = 10,
    use_open: bool = False,
    s_ticker: str = "",
    export: str = "",
    external_axes: Optional[List[plt.Axes]] = None,
):
    """Display AD Osc Indicator

    Parameters
    ----------
    ohlc : pd.DataFrame
        Dataframe of prices
    use_open : bool
        Whether to use open prices in calculation
    fast : int
        Length of fast window
    slow : int
        Length of slow window
    s_ticker : str
        Stock ticker
    export : str
        Format to export data
    external_axes : Optional[List[plt.Axes]], optional
        External axes (3 axes is expected in the list), by default None
    """
    divisor = 1_000_000
    df_vol = ohlc["Volume"] / divisor
    df_vol.name = "Adj Volume"
    df_ta = volume_model.adosc(ohlc, use_open, fast, slow)
    df_cal = df_ta[df_ta.columns[0]] / divisor
    df_cal.name = "Adj ADOSC"

    plot_data = pd.merge(ohlc, df_vol, how="outer", left_index=True, right_index=True)
    plot_data = pd.merge(
        plot_data, df_ta, how="outer", left_index=True, right_index=True
    )
    plot_data = pd.merge(
        plot_data, df_cal, how="outer", left_index=True, right_index=True
    )
    plot_data = reindex_dates(plot_data)

    # This plot has 3 axes
    if external_axes is None:
        _, axes = plt.subplots(
            3,
            1,
            sharex=True,
            figsize=plot_autoscale(),
            dpi=PLOT_DPI,
        )
        ax1, ax2, ax3 = axes
    else:
        if len(external_axes) != 3:
            logger.error("Expected list of three axis items.")
            console.print("[red]Expected list of 3 axis items.\n[/red]")
            return
        (ax1, ax2, ax3) = external_axes

    ax1.set_title(f"{s_ticker} AD Oscillator")
    ax1.plot(plot_data.index, plot_data["Adj Close"].values)
    ax1.set_xlim(plot_data.index[0], plot_data.index[-1])
    ax1.set_ylabel("Price")
    theme.style_primary_axis(
        ax1,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    ax2.set_ylabel("Volume [M]")
    bar_colors = [
        theme.down_color if x[1].Open < x[1].Close else theme.up_color
        for x in plot_data.iterrows()
    ]
    ax2.bar(
        plot_data.index,
        plot_data["Adj Volume"],
        color=bar_colors,
        width=theme.volume_bar_width,
    )
    ax2.set_xlim(plot_data.index[0], plot_data.index[-1])
    theme.style_primary_axis(
        ax2,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    ax3.set_ylabel("AD Osc [M]")
    ax3.plot(plot_data.index, plot_data["Adj ADOSC"], label="AD Osc")
    ax3.set_xlim(plot_data.index[0], plot_data.index[-1])
    theme.style_primary_axis(
        ax3,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    if external_axes is None:
        theme.visualize_output()

    export_data(
        export,
        os.path.dirname(os.path.abspath(__file__)).replace("common", "stocks"),
        "adosc",
        df_ta,
    )


@log_start_end(log=logger)
def display_obv(
    ohlc: pd.DataFrame,
    s_ticker: str = "",
    export: str = "",
    external_axes: Optional[List[plt.Axes]] = None,
):
    """Plot OBV technical indicator

    Parameters
    ----------
    ohlc : pd.DataFrame
        Dataframe of prices
    s_ticker : str
        Ticker
    export: str
        Format to export data as
    external_axes : Optional[List[plt.Axes]], optional
        External axes (3 axes is expected in the list), by default None
    """
    divisor = 1_000_000
    df_vol = ohlc["Volume"] / divisor
    df_vol.name = "Adj Volume"
    df_ta = volume_model.obv(ohlc)
    df_cal = df_ta[df_ta.columns[0]] / divisor
    df_cal.name = "Adj OBV"

    plot_data = pd.merge(ohlc, df_vol, how="outer", left_index=True, right_index=True)
    plot_data = pd.merge(
        plot_data, df_ta, how="outer", left_index=True, right_index=True
    )
    plot_data = pd.merge(
        plot_data, df_cal, how="outer", left_index=True, right_index=True
    )
    plot_data = reindex_dates(plot_data)

    # This plot has 3 axes
    if external_axes is None:
        _, axes = plt.subplots(
            3,
            1,
            sharex=True,
            figsize=plot_autoscale(),
            dpi=PLOT_DPI,
        )
        ax1, ax2, ax3 = axes
    else:
        if len(external_axes) != 3:
            logger.error("Expected list of three axis items.")
            console.print("[red]Expected list of 3 axis items.\n[/red]")
            return
        (ax1, ax2, ax3) = external_axes

    ax1.plot(plot_data.index, plot_data["Adj Close"].values)
    ax1.set_title(f"{s_ticker} OBV")
    ax1.set_xlim(plot_data.index[0], plot_data.index[-1])
    ax1.set_ylabel("Price")
    theme.style_primary_axis(
        ax1,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    ax2.set_xlim(plot_data.index[0], plot_data.index[-1])
    ax2.set_ylabel("Volume [M]")
    bar_colors = [
        theme.down_color if x[1].Open < x[1].Close else theme.up_color
        for x in plot_data.iterrows()
    ]
    ax2.bar(
        plot_data.index,
        plot_data["Adj Volume"],
        color=bar_colors,
        alpha=0.8,
        width=theme.volume_bar_width,
    )
    theme.style_primary_axis(
        ax2,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    ax3.set_ylabel("OBV [M]")
    ax3.plot(plot_data.index, plot_data["Adj OBV"])
    ax3.set_xlim(plot_data.index[0], plot_data.index[-1])
    theme.style_primary_axis(
        ax3,
        data_index=plot_data.index.to_list(),
        tick_labels=plot_data["date"].to_list(),
    )

    if external_axes is None:
        theme.visualize_output()

    export_data(
        export,
        os.path.dirname(os.path.abspath(__file__)).replace("common", "stocks"),
        "obv",
        df_ta,
    )
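# `volume_model.obv` above delegates the on-balance-volume computation to a TA
# library. As a rough, dependency-free illustration of the recurrence OBV
# follows (not the library's implementation), a sketch:

```python
def on_balance_volume(closes, volumes):
    """Cumulative OBV: add volume on up-closes, subtract on down-closes."""
    obv = [0]
    for prev, cur, vol in zip(closes, closes[1:], volumes[1:]):
        if cur > prev:
            obv.append(obv[-1] + vol)
        elif cur < prev:
            obv.append(obv[-1] - vol)
        else:
            obv.append(obv[-1])
    return obv
```

# e.g. on_balance_volume([10, 11, 10, 10], [100, 200, 300, 400])
# yields [0, 200, -100, -100].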
b9d5f9210069138fb9871ab568c544f2009a7269 | 474 | py | Python | stubs/3.2/calendar.py | TimSimpsonR/mypy | 5e6fd6335e0662b0477e1d678269f33e6f4194ba | [
"PSF-2.0"
] | 1 | 2019-06-27T11:34:27.000Z | 2019-06-27T11:34:27.000Z | stubs/3.2/calendar.py | silky/mypy | de6a8d3710df9f49109cb682f2092e4967bfb92c | [
"PSF-2.0"
] | null | null | null | stubs/3.2/calendar.py | silky/mypy | de6a8d3710df9f49109cb682f2092e4967bfb92c | [
"PSF-2.0"
] | null | null | null | # Stubs for calendar
# NOTE: These are incomplete!
from typing import overload, Tuple
# TODO actually, any number of items larger than 5 is fine
@overload
def timegm(t: Tuple[int, int, int, int, int, int]) -> int: pass
@overload
def timegm(t: Tuple[int, int, int, int, int, int, int]) -> int: pass
@overload
def timegm(t: Tuple[int, int, int, int, int, int, int, int]) -> int: pass
@overload
def timegm(t: Tuple[int, int, int, int, int, int, int, int, int]) -> int: pass
| 29.625 | 78 | 0.679325 | 81 | 474 | 3.975309 | 0.345679 | 0.559006 | 0.726708 | 0.819876 | 0.652174 | 0.652174 | 0.652174 | 0.652174 | 0.652174 | 0.652174 | 0 | 0.002558 | 0.175105 | 474 | 15 | 79 | 31.6 | 0.820972 | 0.2173 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 1 | 0.444444 | false | 0.444444 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
b9e02009e036e5366d43664a56329e44ebb42dc8 | 52,592 | py | Python | msgraph-cli-extensions/v1_0/users_v1_0/azext_users_v1_0/generated/custom.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | msgraph-cli-extensions/v1_0/users_v1_0/azext_users_v1_0/generated/custom.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | msgraph-cli-extensions/v1_0/users_v1_0/azext_users_v1_0/generated/custom.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=line-too-long
# pylint: disable=too-many-lines
def users_user_list(client,
                    orderby=None,
                    select=None,
                    expand=None):
    return client.list_user(orderby=orderby,
                            select=select,
                            expand=expand)
def users_user_create(client,
                      id_=None,
                      deleted_date_time=None,
                      account_enabled=None,
                      age_group=None,
                      assigned_licenses=None,
                      assigned_plans=None,
                      business_phones=None,
                      city=None,
                      company_name=None,
                      consent_provided_for_minor=None,
                      country=None,
                      created_date_time=None,
                      creation_type=None,
                      department=None,
                      display_name=None,
                      employee_id=None,
                      external_user_state=None,
                      external_user_state_change_date_time=None,
                      fax_number=None,
                      given_name=None,
                      identities=None,
                      im_addresses=None,
                      is_resource_account=None,
                      job_title=None,
                      last_password_change_date_time=None,
                      legal_age_group_classification=None,
                      license_assignment_states=None,
                      mail=None,
                      mail_nickname=None,
                      mobile_phone=None,
                      office_location=None,
                      on_premises_distinguished_name=None,
                      on_premises_domain_name=None,
                      on_premises_extension_attributes=None,
                      on_premises_immutable_id=None,
                      on_premises_last_sync_date_time=None,
                      on_premises_provisioning_errors=None,
                      on_premises_sam_account_name=None,
                      on_premises_security_identifier=None,
                      on_premises_sync_enabled=None,
                      on_premises_user_principal_name=None,
                      other_mails=None,
                      password_policies=None,
                      password_profile=None,
                      postal_code=None,
                      preferred_language=None,
                      provisioned_plans=None,
                      proxy_addresses=None,
                      show_in_address_list=None,
                      sign_in_sessions_valid_from_date_time=None,
                      state=None,
                      street_address=None,
                      surname=None,
                      usage_location=None,
                      user_principal_name=None,
                      user_type=None,
                      device_enrollment_limit=None,
                      about_me=None,
                      birthday=None,
                      hire_date=None,
                      interests=None,
                      my_site=None,
                      past_projects=None,
                      preferred_name=None,
                      responsibilities=None,
                      schools=None,
                      skills=None,
                      app_role_assignments=None,
                      created_objects=None,
                      direct_reports=None,
                      license_details=None,
                      manager=None,
                      member_of=None,
                      oauth2_permission_grants=None,
                      owned_devices=None,
                      owned_objects=None,
                      registered_devices=None,
                      scoped_role_member_of=None,
                      transitive_member_of=None,
                      calendar=None,
                      calendar_groups=None,
                      calendars=None,
                      calendar_view=None,
                      contact_folders=None,
                      contacts=None,
                      events=None,
                      mail_folders=None,
                      messages=None,
                      people=None,
                      photo=None,
                      photos=None,
                      drive=None,
                      drives=None,
                      followed_sites=None,
                      extensions=None,
                      managed_devices=None,
                      managed_app_registrations=None,
                      device_management_troubleshooting_events=None,
                      activities=None,
                      online_meetings=None,
                      joined_teams=None,
                      microsoft_graph_entity_id=None,
                      notebooks=None,
                      operations=None,
                      pages=None,
                      resources=None,
                      section_groups=None,
                      sections=None,
                      id1=None,
                      contribution_to_content_discovery_as_organization_disabled=None,
                      contribution_to_content_discovery_disabled=None,
                      id2=None,
                      microsoft_graph_change_tracked_entity_created_date_time_created_date_time=None,
                      last_modified_date_time=None,
                      application=None,
                      device=None,
                      user=None,
                      availability=None,
                      id3=None,
                      shared=None,
                      trending=None,
                      used=None,
                      id4=None,
                      plans=None,
                      tasks=None,
                      id5=None,
                      master_categories=None,
                      id6=None,
                      overrides=None,
                      archive_folder=None,
                      automatic_replies_setting=None,
                      date_format=None,
                      delegate_meeting_message_delivery_options=None,
                      language=None,
                      time_format=None,
                      time_zone=None,
                      working_hours=None):
    body = {}
    body['id'] = id_
    body['deleted_date_time'] = deleted_date_time
    body['account_enabled'] = account_enabled
    body['age_group'] = age_group
    body['assigned_licenses'] = assigned_licenses
    body['assigned_plans'] = assigned_plans
    body['business_phones'] = business_phones
    body['city'] = city
    body['company_name'] = company_name
    body['consent_provided_for_minor'] = consent_provided_for_minor
    body['country'] = country
    body['created_date_time'] = created_date_time
    body['creation_type'] = creation_type
    body['department'] = department
    body['display_name'] = display_name
    body['employee_id'] = employee_id
    body['external_user_state'] = external_user_state
    body['external_user_state_change_date_time'] = external_user_state_change_date_time
    body['fax_number'] = fax_number
    body['given_name'] = given_name
    body['identities'] = identities
    body['im_addresses'] = im_addresses
    body['is_resource_account'] = is_resource_account
    body['job_title'] = job_title
    body['last_password_change_date_time'] = last_password_change_date_time
    body['legal_age_group_classification'] = legal_age_group_classification
    body['license_assignment_states'] = license_assignment_states
    body['mail'] = mail
    body['mail_nickname'] = mail_nickname
    body['mobile_phone'] = mobile_phone
    body['office_location'] = office_location
    body['on_premises_distinguished_name'] = on_premises_distinguished_name
    body['on_premises_domain_name'] = on_premises_domain_name
    body['on_premises_extension_attributes'] = on_premises_extension_attributes
    body['on_premises_immutable_id'] = on_premises_immutable_id
    body['on_premises_last_sync_date_time'] = on_premises_last_sync_date_time
    body['on_premises_provisioning_errors'] = on_premises_provisioning_errors
    body['on_premises_sam_account_name'] = on_premises_sam_account_name
    body['on_premises_security_identifier'] = on_premises_security_identifier
    body['on_premises_sync_enabled'] = on_premises_sync_enabled
    body['on_premises_user_principal_name'] = on_premises_user_principal_name
    body['other_mails'] = other_mails
    body['password_policies'] = password_policies
    body['password_profile'] = password_profile
    body['postal_code'] = postal_code
    body['preferred_language'] = preferred_language
    body['provisioned_plans'] = provisioned_plans
    body['proxy_addresses'] = proxy_addresses
    body['show_in_address_list'] = show_in_address_list
    body['sign_in_sessions_valid_from_date_time'] = sign_in_sessions_valid_from_date_time
    body['state'] = state
    body['street_address'] = street_address
    body['surname'] = surname
    body['usage_location'] = usage_location
    body['user_principal_name'] = user_principal_name
    body['user_type'] = user_type
    body['device_enrollment_limit'] = device_enrollment_limit
    body['about_me'] = about_me
    body['birthday'] = birthday
    body['hire_date'] = hire_date
    body['interests'] = interests
    body['my_site'] = my_site
    body['past_projects'] = past_projects
    body['preferred_name'] = preferred_name
    body['responsibilities'] = responsibilities
    body['schools'] = schools
    body['skills'] = skills
    body['app_role_assignments'] = app_role_assignments
    body['created_objects'] = created_objects
    body['direct_reports'] = direct_reports
    body['license_details'] = license_details
    body['manager'] = manager
    body['member_of'] = member_of
    body['oauth2_permission_grants'] = oauth2_permission_grants
    body['owned_devices'] = owned_devices
    body['owned_objects'] = owned_objects
    body['registered_devices'] = registered_devices
    body['scoped_role_member_of'] = scoped_role_member_of
    body['transitive_member_of'] = transitive_member_of
    body['calendar'] = calendar
    body['calendar_groups'] = calendar_groups
    body['calendars'] = calendars
    body['calendar_view'] = calendar_view
    body['contact_folders'] = contact_folders
    body['contacts'] = contacts
    body['events'] = events
    body['mail_folders'] = mail_folders
    body['messages'] = messages
    body['people'] = people
    body['photo'] = photo
    body['photos'] = photos
    body['drive'] = drive
    body['drives'] = drives
    body['followed_sites'] = followed_sites
    body['extensions'] = extensions
    body['managed_devices'] = managed_devices
    body['managed_app_registrations'] = managed_app_registrations
    body['device_management_troubleshooting_events'] = device_management_troubleshooting_events
    body['activities'] = activities
    body['online_meetings'] = online_meetings
    body['joined_teams'] = joined_teams
    body['onenote'] = {}
    body['onenote']['id'] = microsoft_graph_entity_id
    body['onenote']['notebooks'] = notebooks
    body['onenote']['operations'] = operations
    body['onenote']['pages'] = pages
    body['onenote']['resources'] = resources
    body['onenote']['section_groups'] = section_groups
    body['onenote']['sections'] = sections
    body['settings'] = {}
    body['settings']['id'] = id1
    body['settings']['contribution_to_content_discovery_as_organization_disabled'] = contribution_to_content_discovery_as_organization_disabled
    body['settings']['contribution_to_content_discovery_disabled'] = contribution_to_content_discovery_disabled
    body['settings']['shift_preferences'] = {}
    body['settings']['shift_preferences']['id'] = id2
    body['settings']['shift_preferences']['created_date_time'] = microsoft_graph_change_tracked_entity_created_date_time_created_date_time
    body['settings']['shift_preferences']['last_modified_date_time'] = last_modified_date_time
    body['settings']['shift_preferences']['last_modified_by'] = {}
    body['settings']['shift_preferences']['last_modified_by']['application'] = application
    body['settings']['shift_preferences']['last_modified_by']['device'] = device
    body['settings']['shift_preferences']['last_modified_by']['user'] = user
    body['settings']['shift_preferences']['availability'] = availability
    body['insights'] = {}
    body['insights']['id'] = id3
    body['insights']['shared'] = shared
    body['insights']['trending'] = trending
    body['insights']['used'] = used
    body['planner'] = {}
    body['planner']['id'] = id4
    body['planner']['plans'] = plans
    body['planner']['tasks'] = tasks
    body['outlook'] = {}
    body['outlook']['id'] = id5
    body['outlook']['master_categories'] = master_categories
    body['inference_classification'] = {}
    body['inference_classification']['id'] = id6
    body['inference_classification']['overrides'] = overrides
    body['mailbox_settings'] = {}
    body['mailbox_settings']['archive_folder'] = archive_folder
    body['mailbox_settings']['automatic_replies_setting'] = automatic_replies_setting
    body['mailbox_settings']['date_format'] = date_format
    body['mailbox_settings']['delegate_meeting_message_delivery_options'] = delegate_meeting_message_delivery_options
    body['mailbox_settings']['language'] = language
    body['mailbox_settings']['time_format'] = time_format
    body['mailbox_settings']['time_zone'] = time_zone
    body['mailbox_settings']['working_hours'] = working_hours
    return client.create_user(body=body)
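# The generated code above builds deeply nested dictionaries key by key
# (e.g. body['settings']['shift_preferences']['id']). A small hypothetical
# helper showing the same nesting pattern via dotted paths -- purely
# illustrative, not part of the generated SDK:

```python
def set_path(body, dotted_path, value):
    """Assign value at a dotted path, creating intermediate dicts as needed."""
    keys = dotted_path.split('.')
    node = body
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return body
```

# e.g. set_path({}, 'settings.shift_preferences.id', id2) produces the same
# shape as the explicit assignments above.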
def users_user_update(client,
                      user_id,
                      id_=None,
                      deleted_date_time=None,
                      account_enabled=None,
                      age_group=None,
                      assigned_licenses=None,
                      assigned_plans=None,
                      business_phones=None,
                      city=None,
                      company_name=None,
                      consent_provided_for_minor=None,
                      country=None,
                      created_date_time=None,
                      creation_type=None,
                      department=None,
                      display_name=None,
                      employee_id=None,
                      external_user_state=None,
                      external_user_state_change_date_time=None,
                      fax_number=None,
                      given_name=None,
                      identities=None,
                      im_addresses=None,
                      is_resource_account=None,
                      job_title=None,
                      last_password_change_date_time=None,
                      legal_age_group_classification=None,
                      license_assignment_states=None,
                      mail=None,
                      mail_nickname=None,
                      mobile_phone=None,
                      office_location=None,
                      on_premises_distinguished_name=None,
                      on_premises_domain_name=None,
                      on_premises_extension_attributes=None,
                      on_premises_immutable_id=None,
                      on_premises_last_sync_date_time=None,
                      on_premises_provisioning_errors=None,
                      on_premises_sam_account_name=None,
                      on_premises_security_identifier=None,
                      on_premises_sync_enabled=None,
                      on_premises_user_principal_name=None,
                      other_mails=None,
                      password_policies=None,
                      password_profile=None,
                      postal_code=None,
                      preferred_language=None,
                      provisioned_plans=None,
                      proxy_addresses=None,
                      show_in_address_list=None,
                      sign_in_sessions_valid_from_date_time=None,
                      state=None,
                      street_address=None,
                      surname=None,
                      usage_location=None,
                      user_principal_name=None,
                      user_type=None,
                      device_enrollment_limit=None,
                      about_me=None,
                      birthday=None,
                      hire_date=None,
                      interests=None,
                      my_site=None,
                      past_projects=None,
                      preferred_name=None,
                      responsibilities=None,
                      schools=None,
                      skills=None,
                      app_role_assignments=None,
                      created_objects=None,
                      direct_reports=None,
                      license_details=None,
                      manager=None,
                      member_of=None,
                      oauth2_permission_grants=None,
                      owned_devices=None,
                      owned_objects=None,
                      registered_devices=None,
                      scoped_role_member_of=None,
                      transitive_member_of=None,
                      calendar=None,
                      calendar_groups=None,
                      calendars=None,
                      calendar_view=None,
                      contact_folders=None,
                      contacts=None,
                      events=None,
                      mail_folders=None,
                      messages=None,
                      people=None,
                      photo=None,
                      photos=None,
                      drive=None,
                      drives=None,
                      followed_sites=None,
                      extensions=None,
                      managed_devices=None,
                      managed_app_registrations=None,
                      device_management_troubleshooting_events=None,
                      activities=None,
                      online_meetings=None,
                      joined_teams=None,
                      microsoft_graph_entity_id=None,
                      notebooks=None,
                      operations=None,
                      pages=None,
                      resources=None,
                      section_groups=None,
                      sections=None,
                      id1=None,
                      contribution_to_content_discovery_as_organization_disabled=None,
                      contribution_to_content_discovery_disabled=None,
                      id2=None,
                      microsoft_graph_change_tracked_entity_created_date_time_created_date_time=None,
                      last_modified_date_time=None,
                      application=None,
                      device=None,
                      user=None,
                      availability=None,
                      id3=None,
                      shared=None,
                      trending=None,
                      used=None,
                      id4=None,
                      plans=None,
                      tasks=None,
                      id5=None,
                      master_categories=None,
                      id6=None,
                      overrides=None,
                      archive_folder=None,
                      automatic_replies_setting=None,
                      date_format=None,
                      delegate_meeting_message_delivery_options=None,
                      language=None,
                      time_format=None,
                      time_zone=None,
                      working_hours=None):
    body = {}
    body['id'] = id_
    body['deleted_date_time'] = deleted_date_time
    body['account_enabled'] = account_enabled
    body['age_group'] = age_group
    body['assigned_licenses'] = assigned_licenses
    body['assigned_plans'] = assigned_plans
    body['business_phones'] = business_phones
    body['city'] = city
    body['company_name'] = company_name
    body['consent_provided_for_minor'] = consent_provided_for_minor
    body['country'] = country
    body['created_date_time'] = created_date_time
    body['creation_type'] = creation_type
    body['department'] = department
    body['display_name'] = display_name
    body['employee_id'] = employee_id
    body['external_user_state'] = external_user_state
    body['external_user_state_change_date_time'] = external_user_state_change_date_time
    body['fax_number'] = fax_number
    body['given_name'] = given_name
    body['identities'] = identities
    body['im_addresses'] = im_addresses
    body['is_resource_account'] = is_resource_account
    body['job_title'] = job_title
    body['last_password_change_date_time'] = last_password_change_date_time
    body['legal_age_group_classification'] = legal_age_group_classification
    body['license_assignment_states'] = license_assignment_states
    body['mail'] = mail
    body['mail_nickname'] = mail_nickname
    body['mobile_phone'] = mobile_phone
    body['office_location'] = office_location
    body['on_premises_distinguished_name'] = on_premises_distinguished_name
    body['on_premises_domain_name'] = on_premises_domain_name
    body['on_premises_extension_attributes'] = on_premises_extension_attributes
    body['on_premises_immutable_id'] = on_premises_immutable_id
    body['on_premises_last_sync_date_time'] = on_premises_last_sync_date_time
    body['on_premises_provisioning_errors'] = on_premises_provisioning_errors
    body['on_premises_sam_account_name'] = on_premises_sam_account_name
    body['on_premises_security_identifier'] = on_premises_security_identifier
    body['on_premises_sync_enabled'] = on_premises_sync_enabled
    body['on_premises_user_principal_name'] = on_premises_user_principal_name
    body['other_mails'] = other_mails
    body['password_policies'] = password_policies
    body['password_profile'] = password_profile
    body['postal_code'] = postal_code
    body['preferred_language'] = preferred_language
    body['provisioned_plans'] = provisioned_plans
    body['proxy_addresses'] = proxy_addresses
    body['show_in_address_list'] = show_in_address_list
    body['sign_in_sessions_valid_from_date_time'] = sign_in_sessions_valid_from_date_time
    body['state'] = state
    body['street_address'] = street_address
    body['surname'] = surname
    body['usage_location'] = usage_location
    body['user_principal_name'] = user_principal_name
    body['user_type'] = user_type
    body['device_enrollment_limit'] = device_enrollment_limit
    body['about_me'] = about_me
    body['birthday'] = birthday
    body['hire_date'] = hire_date
    body['interests'] = interests
    body['my_site'] = my_site
    body['past_projects'] = past_projects
    body['preferred_name'] = preferred_name
    body['responsibilities'] = responsibilities
    body['schools'] = schools
    body['skills'] = skills
    body['app_role_assignments'] = app_role_assignments
    body['created_objects'] = created_objects
    body['direct_reports'] = direct_reports
    body['license_details'] = license_details
    body['manager'] = manager
    body['member_of'] = member_of
    body['oauth2_permission_grants'] = oauth2_permission_grants
    body['owned_devices'] = owned_devices
    body['owned_objects'] = owned_objects
    body['registered_devices'] = registered_devices
    body['scoped_role_member_of'] = scoped_role_member_of
    body['transitive_member_of'] = transitive_member_of
    body['calendar'] = calendar
    body['calendar_groups'] = calendar_groups
body['calendars'] = calendars
body['calendar_view'] = calendar_view
body['contact_folders'] = contact_folders
body['contacts'] = contacts
body['events'] = events
body['mail_folders'] = mail_folders
body['messages'] = messages
body['people'] = people
body['photo'] = photo
body['photos'] = photos
body['drive'] = drive
body['drives'] = drives
body['followed_sites'] = followed_sites
body['extensions'] = extensions
body['managed_devices'] = managed_devices
body['managed_app_registrations'] = managed_app_registrations
body['device_management_troubleshooting_events'] = device_management_troubleshooting_events
body['activities'] = activities
body['online_meetings'] = online_meetings
body['joined_teams'] = joined_teams
body['onenote'] = {}
body['onenote']['id'] = microsoft_graph_entity_id
body['onenote']['notebooks'] = notebooks
body['onenote']['operations'] = operations
body['onenote']['pages'] = pages
body['onenote']['resources'] = resources
body['onenote']['section_groups'] = section_groups
body['onenote']['sections'] = sections
body['settings'] = {}
body['settings']['id'] = id1
body['settings']['contribution_to_content_discovery_as_organization_disabled'] = contribution_to_content_discovery_as_organization_disabled
body['settings']['contribution_to_content_discovery_disabled'] = contribution_to_content_discovery_disabled
body['settings']['shift_preferences'] = {}
body['settings']['shift_preferences']['id'] = id2
body['settings']['shift_preferences']['created_date_time'] = microsoft_graph_change_tracked_entity_created_date_time_created_date_time
body['settings']['shift_preferences']['last_modified_date_time'] = last_modified_date_time
body['settings']['shift_preferences']['last_modified_by'] = {}
body['settings']['shift_preferences']['last_modified_by']['application'] = application
body['settings']['shift_preferences']['last_modified_by']['device'] = device
body['settings']['shift_preferences']['last_modified_by']['user'] = user
body['settings']['shift_preferences']['availability'] = availability
body['insights'] = {}
body['insights']['id'] = id3
body['insights']['shared'] = shared
body['insights']['trending'] = trending
body['insights']['used'] = used
body['planner'] = {}
body['planner']['id'] = id4
body['planner']['plans'] = plans
body['planner']['tasks'] = tasks
body['outlook'] = {}
body['outlook']['id'] = id5
body['outlook']['master_categories'] = master_categories
body['inference_classification'] = {}
body['inference_classification']['id'] = id6
body['inference_classification']['overrides'] = overrides
body['mailbox_settings'] = {}
body['mailbox_settings']['archive_folder'] = archive_folder
body['mailbox_settings']['automatic_replies_setting'] = automatic_replies_setting
body['mailbox_settings']['date_format'] = date_format
body['mailbox_settings']['delegate_meeting_message_delivery_options'] = delegate_meeting_message_delivery_options
body['mailbox_settings']['language'] = language
body['mailbox_settings']['time_format'] = time_format
body['mailbox_settings']['time_zone'] = time_zone
body['mailbox_settings']['working_hours'] = working_hours
return client.update_user(user_id=user_id,
body=body)
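The update body assembled above carries every field, including the ones the caller left as None, and serializing those as JSON null can clobber server-side values. A minimal helper for pruning them before the update — a sketch, not part of the generated module:

```python
def prune_none(value):
    """Recursively drop None entries from dicts/lists built for a PATCH body."""
    if isinstance(value, dict):
        return {k: prune_none(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [prune_none(v) for v in value if v is not None]
    return value

# Example: only the fields the caller actually set survive.
patch = prune_none({'display_name': 'Ada', 'mail': None,
                    'settings': {'id': None, 'time_zone': 'UTC'}})
```

Whether the service treats an explicit null as "clear this field" or "ignore it" varies by property, so pruning is a policy choice rather than a hard requirement.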
def users_user_delete_user(client,
user_id,
if_match=None):
return client.delete_user(user_id=user_id,
if_match=if_match)
def users_user_show_user(client,
user_id,
select=None,
expand=None):
return client.get_user(user_id=user_id,
select=select,
expand=expand)
def users_user_create_extension(client,
user_id,
id_=None):
body = {}
body['id'] = id_
return client.create_extensions(user_id=user_id,
body=body)
def users_user_create_license_detail(client,
user_id,
id_=None,
service_plans=None,
sku_id=None,
sku_part_number=None):
body = {}
body['id'] = id_
body['service_plans'] = service_plans
body['sku_id'] = sku_id
body['sku_part_number'] = sku_part_number
return client.create_license_details(user_id=user_id,
body=body)
def users_user_create_photo(client,
user_id,
id_=None,
height=None,
width=None):
body = {}
body['id'] = id_
body['height'] = height
body['width'] = width
return client.create_photos(user_id=user_id,
body=body)
def users_user_create_ref_created_object(client,
user_id,
body):
return client.create_ref_created_objects(user_id=user_id,
body=body)
def users_user_create_ref_direct_report(client,
user_id,
body):
return client.create_ref_direct_reports(user_id=user_id,
body=body)
def users_user_create_ref_member_of(client,
user_id,
body):
return client.create_ref_member_of(user_id=user_id,
body=body)
def users_user_create_ref_oauth2_permission_grant(client,
user_id,
body):
return client.create_ref_oauth2_permission_grants(user_id=user_id,
body=body)
def users_user_create_ref_owned_device(client,
user_id,
body):
return client.create_ref_owned_devices(user_id=user_id,
body=body)
def users_user_create_ref_owned_object(client,
user_id,
body):
return client.create_ref_owned_objects(user_id=user_id,
body=body)
def users_user_create_ref_registered_device(client,
user_id,
body):
return client.create_ref_registered_devices(user_id=user_id,
body=body)
def users_user_create_ref_transitive_member_of(client,
user_id,
body):
return client.create_ref_transitive_member_of(user_id=user_id,
body=body)
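The users_user_create_ref_* helpers above forward the caller-supplied body verbatim. For Microsoft Graph $ref navigation endpoints that body is expected to be a single @odata.id link to the target object; a sketch, where the object id and Graph root below are placeholders:

```python
def directory_object_ref(object_id, graph_root='https://graph.microsoft.com/v1.0'):
    # $ref endpoints take a JSON body containing only the link to the target object.
    return {'@odata.id': '%s/directoryObjects/%s' % (graph_root, object_id)}

body = directory_object_ref('11111111-2222-3333-4444-555555555555')
# e.g. users_user_create_ref_member_of(client, user_id='...', body=body)
```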
def users_user_delete_extension(client,
user_id,
extension_id,
if_match=None):
return client.delete_extensions(user_id=user_id,
extension_id=extension_id,
if_match=if_match)
def users_user_delete_license_detail(client,
user_id,
license_details_id,
if_match=None):
return client.delete_license_details(user_id=user_id,
license_details_id=license_details_id,
if_match=if_match)
def users_user_delete_outlook(client,
user_id,
if_match=None):
return client.delete_outlook(user_id=user_id,
if_match=if_match)
def users_user_delete_photo(client,
user_id,
profile_photo_id=None,
if_match=None):
if user_id is not None and profile_photo_id is not None:
return client.delete_photos(user_id=user_id,
profile_photo_id=profile_photo_id,
if_match=if_match)
return client.delete_photo(user_id=user_id,
if_match=if_match)
def users_user_delete_ref_manager(client,
user_id,
if_match=None):
return client.delete_ref_manager(user_id=user_id,
if_match=if_match)
def users_user_delete_setting(client,
user_id,
if_match=None):
return client.delete_settings(user_id=user_id,
if_match=if_match)
def users_user_list_created_object(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_created_objects(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_direct_report(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_direct_reports(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_extension(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_extensions(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_license_detail(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_license_details(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_member_of(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_member_of(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_oauth2_permission_grant(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_oauth2_permission_grants(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_owned_device(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_owned_devices(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_owned_object(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_owned_objects(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_photo(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_photos(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_ref_created_object(client,
user_id,
orderby=None):
return client.list_ref_created_objects(user_id=user_id,
orderby=orderby)
def users_user_list_ref_direct_report(client,
user_id,
orderby=None):
return client.list_ref_direct_reports(user_id=user_id,
orderby=orderby)
def users_user_list_ref_member_of(client,
user_id,
orderby=None):
return client.list_ref_member_of(user_id=user_id,
orderby=orderby)
def users_user_list_ref_oauth2_permission_grant(client,
user_id,
orderby=None):
return client.list_ref_oauth2_permission_grants(user_id=user_id,
orderby=orderby)
def users_user_list_ref_owned_device(client,
user_id,
orderby=None):
return client.list_ref_owned_devices(user_id=user_id,
orderby=orderby)
def users_user_list_ref_owned_object(client,
user_id,
orderby=None):
return client.list_ref_owned_objects(user_id=user_id,
orderby=orderby)
def users_user_list_ref_registered_device(client,
user_id,
orderby=None):
return client.list_ref_registered_devices(user_id=user_id,
orderby=orderby)
def users_user_list_ref_transitive_member_of(client,
user_id,
orderby=None):
return client.list_ref_transitive_member_of(user_id=user_id,
orderby=orderby)
def users_user_list_registered_device(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_registered_devices(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_list_transitive_member_of(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_transitive_member_of(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
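Every list_* wrapper hands orderby, select, and expand straight through to the client, which ultimately renders them as OData system query options. Roughly, and assuming list values are comma-joined (the real serialization lives in the generated client layer, not here):

```python
def odata_query(orderby=None, select=None, expand=None):
    # Map the wrapper arguments onto OData query options, skipping unset ones.
    options = {'$orderby': orderby, '$select': select, '$expand': expand}
    return {key: ','.join(val) if isinstance(val, (list, tuple)) else val
            for key, val in options.items() if val}

print(odata_query(select=['id', 'displayName'], orderby='displayName'))
# → {'$orderby': 'displayName', '$select': 'id,displayName'}
```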
def users_user_set_ref_manager(client,
user_id,
body):
return client.set_ref_manager(user_id=user_id,
body=body)
def users_user_show_extension(client,
user_id,
extension_id,
select=None,
expand=None):
return client.get_extensions(user_id=user_id,
extension_id=extension_id,
select=select,
expand=expand)
def users_user_show_license_detail(client,
user_id,
license_details_id,
select=None,
expand=None):
return client.get_license_details(user_id=user_id,
license_details_id=license_details_id,
select=select,
expand=expand)
def users_user_show_manager(client,
user_id,
select=None,
expand=None):
return client.get_manager(user_id=user_id,
select=select,
expand=expand)
def users_user_show_outlook(client,
user_id,
select=None,
expand=None):
return client.get_outlook(user_id=user_id,
select=select,
expand=expand)
def users_user_show_photo(client,
user_id,
profile_photo_id=None,
select=None,
expand=None):
if user_id is not None and profile_photo_id is not None:
return client.get_photos(user_id=user_id,
profile_photo_id=profile_photo_id,
select=select,
expand=expand)
return client.get_photo(user_id=user_id,
select=select,
expand=expand)
def users_user_show_ref_manager(client,
user_id):
return client.get_ref_manager(user_id=user_id)
def users_user_show_setting(client,
user_id,
select=None,
expand=None):
return client.get_settings(user_id=user_id,
select=select,
expand=expand)
def users_user_update_extension(client,
user_id,
extension_id,
id_=None):
body = {}
body['id'] = id_
return client.update_extensions(user_id=user_id,
extension_id=extension_id,
body=body)
def users_user_update_license_detail(client,
user_id,
license_details_id,
id_=None,
service_plans=None,
sku_id=None,
sku_part_number=None):
body = {}
body['id'] = id_
body['service_plans'] = service_plans
body['sku_id'] = sku_id
body['sku_part_number'] = sku_part_number
return client.update_license_details(user_id=user_id,
license_details_id=license_details_id,
body=body)
def users_user_update_outlook(client,
user_id,
id_=None,
master_categories=None):
body = {}
body['id'] = id_
body['master_categories'] = master_categories
return client.update_outlook(user_id=user_id,
body=body)
def users_user_update_photo(client,
user_id,
profile_photo_id=None,
id_=None,
height=None,
width=None):
body = {}
body['id'] = id_
body['height'] = height
body['width'] = width
if user_id is not None and profile_photo_id is not None:
return client.update_photos(user_id=user_id,
profile_photo_id=profile_photo_id,
body=body)
return client.update_photo(user_id=user_id,
body=body)
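users_user_show_photo, users_user_delete_photo, and users_user_update_photo each dispatch between a collection call (*_photos, keyed by profile_photo_id) and a singleton call (*_photo). The two routes can be exercised with a stub client; the stub is illustrative, not the generated SDK:

```python
class StubClient:
    """Records which photo operation a wrapper dispatched to."""
    def update_photos(self, user_id, profile_photo_id, body):
        return ('collection', user_id, profile_photo_id, body)
    def update_photo(self, user_id, body):
        return ('singleton', user_id, body)

def update_photo(client, user_id, profile_photo_id=None, body=None):
    # Same branching as the generated wrapper: the photo id selects the route.
    if user_id is not None and profile_photo_id is not None:
        return client.update_photos(user_id=user_id,
                                    profile_photo_id=profile_photo_id,
                                    body=body)
    return client.update_photo(user_id=user_id, body=body)

print(update_photo(StubClient(), 'u1', 'p1', {'height': 96})[0])   # → collection
print(update_photo(StubClient(), 'u1', body={'height': 96})[0])    # → singleton
```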
def users_user_update_setting(client,
user_id,
id_=None,
contribution_to_content_discovery_as_organization_disabled=None,
contribution_to_content_discovery_disabled=None,
microsoft_graph_entity_id=None,
created_date_time=None,
last_modified_date_time=None,
application=None,
device=None,
user=None,
availability=None):
body = {}
body['id'] = id_
body['contribution_to_content_discovery_as_organization_disabled'] = contribution_to_content_discovery_as_organization_disabled
body['contribution_to_content_discovery_disabled'] = contribution_to_content_discovery_disabled
body['shift_preferences'] = {}
body['shift_preferences']['id'] = microsoft_graph_entity_id
body['shift_preferences']['created_date_time'] = created_date_time
body['shift_preferences']['last_modified_date_time'] = last_modified_date_time
body['shift_preferences']['last_modified_by'] = {}
body['shift_preferences']['last_modified_by']['application'] = application
body['shift_preferences']['last_modified_by']['device'] = device
body['shift_preferences']['last_modified_by']['user'] = user
body['shift_preferences']['availability'] = availability
return client.update_settings(user_id=user_id,
body=body)
def users_user_outlook_create_master_category(client,
user_id,
id_=None,
color=None,
display_name=None):
body = {}
body['id'] = id_
body['color'] = color
body['display_name'] = display_name
return client.create_master_categories(user_id=user_id,
body=body)
def users_user_outlook_delete_master_category(client,
user_id,
outlook_category_id,
if_match=None):
return client.delete_master_categories(user_id=user_id,
outlook_category_id=outlook_category_id,
if_match=if_match)
def users_user_outlook_list_master_category(client,
user_id,
orderby=None,
select=None,
expand=None):
return client.list_master_categories(user_id=user_id,
orderby=orderby,
select=select,
expand=expand)
def users_user_outlook_show_master_category(client,
user_id,
outlook_category_id,
select=None,
expand=None):
return client.get_master_categories(user_id=user_id,
outlook_category_id=outlook_category_id,
select=select,
expand=expand)
def users_user_outlook_update_master_category(client,
user_id,
outlook_category_id,
id_=None,
color=None,
display_name=None):
body = {}
body['id'] = id_
body['color'] = color
body['display_name'] = display_name
return client.update_master_categories(user_id=user_id,
outlook_category_id=outlook_category_id,
body=body)
def users_user_setting_delete_shift_preference(client,
user_id,
if_match=None):
return client.delete_shift_preferences(user_id=user_id,
if_match=if_match)
def users_user_setting_show_shift_preference(client,
user_id,
select=None,
expand=None):
return client.get_shift_preferences(user_id=user_id,
select=select,
expand=expand)
def users_user_setting_update_shift_preference(client,
user_id,
id_=None,
created_date_time=None,
last_modified_date_time=None,
application=None,
device=None,
user=None,
availability=None):
body = {}
body['id'] = id_
body['created_date_time'] = created_date_time
body['last_modified_date_time'] = last_modified_date_time
body['last_modified_by'] = {}
body['last_modified_by']['application'] = application
body['last_modified_by']['device'] = device
body['last_modified_by']['user'] = user
body['availability'] = availability
return client.update_shift_preferences(user_id=user_id,
body=body)
# Source: python/generated/swaggeraem/apis/sling_api.py from mbloch1986/swagger-aem (Apache-2.0)
] | null | null | null | # coding: utf-8
"""
Adobe Experience Manager (AEM) API
Swagger AEM is an OpenAPI specification for Adobe Experience Manager (AEM) API
OpenAPI spec version: 2.2.0
Contact: opensource@shinesolutions.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..api_client import ApiClient
class SlingApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def delete_agent(self, runmode, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_agent(runmode, name, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_agent_with_http_info(runmode, name, **kwargs)
else:
            data = self.delete_agent_with_http_info(runmode, name, **kwargs)
return data
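As the docstring explains, each operation runs synchronously unless async=True is passed, in which case a thread-like handle comes back and .get() blocks for the result. The same contract can be sketched with concurrent.futures — illustrative only, and spelled async_req here because async is a reserved word in Python 3.7+:

```python
from concurrent.futures import ThreadPoolExecutor

class _AsyncResult:
    """Mimics the thread-like handle the generated client returns: .get() blocks."""
    def __init__(self, future):
        self._future = future
    def get(self):
        return self._future.result()

_pool = ThreadPoolExecutor(max_workers=4)

def call(fn, *args, async_req=False, **kwargs):
    # async_req=True returns a handle immediately; otherwise run inline.
    if async_req:
        return _AsyncResult(_pool.submit(fn, *args, **kwargs))
    return fn(*args, **kwargs)

print(call(sum, [1, 2, 3]))                        # → 6
print(call(sum, [1, 2, 3], async_req=True).get())  # → 6
```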
def delete_agent_with_http_info(self, runmode, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_agent_with_http_info(runmode, name, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['runmode', 'name']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_agent" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `delete_agent`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `delete_agent`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/replication/agents.{runmode}/{name}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
                                        async_req=params.get('async'),  # 'async' is reserved in Python 3.7+; assumes the generated ApiClient accepts 'async_req'
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
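Each *_with_http_info method opens with the same guard: collect locals(), reject unknown keyword arguments, and fail fast on missing required parameters. Extracted as a stand-alone sketch of that pattern:

```python
def validate_params(required, allowed, kwargs):
    # Reject typos early instead of silently ignoring unknown options.
    for key in kwargs:
        if key not in allowed:
            raise TypeError("Got an unexpected keyword argument '%s'" % key)
    # Required parameters must be present and non-None.
    for name in required:
        if kwargs.get(name) is None:
            raise ValueError("Missing the required parameter `%s`" % name)
    return kwargs

validate_params(['runmode'], ['runmode', 'name'], {'runmode': 'author'})
```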
def delete_node(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_node(path, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_node_with_http_info(path, name, **kwargs)
else:
            data = self.delete_node_with_http_info(path, name, **kwargs)
return data
def delete_node_with_http_info(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_node_with_http_info(path, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'name']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_node" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `delete_node`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `delete_node`")
collection_formats = {}
path_params = {}
if 'path' in params:
path_params['path'] = params['path']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{path}/{name}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
                                        async_req=params.get('async'),  # 'async' is reserved in Python 3.7+; assumes the generated ApiClient accepts 'async_req'
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_agent(self, runmode, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_agent(runmode, name, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_agent_with_http_info(runmode, name, **kwargs)
else:
            data = self.get_agent_with_http_info(runmode, name, **kwargs)
return data
def get_agent_with_http_info(self, runmode, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_agent_with_http_info(runmode, name, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['runmode', 'name']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_agent" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `get_agent`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `get_agent`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/replication/agents.{runmode}/{name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
                                        async_req=params.get('async'),  # 'async' is reserved in Python 3.7+; assumes the generated ApiClient accepts 'async_req'
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_agents(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_agents(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_agents_with_http_info(runmode, **kwargs)
else:
            data = self.get_agents_with_http_info(runmode, **kwargs)
return data
def get_agents_with_http_info(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_agents_with_http_info(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['runmode']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_agents" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `get_agents`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/replication/agents.{runmode}.-1.json', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
                                        async_req=params.get('async'),  # 'async' is reserved in Python 3.7+; assumes the generated ApiClient accepts 'async_req'
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
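get_agents is typed to return the raw JSON document under /etc/replication/agents.{runmode}.-1.json as a string, leaving deserialization to the caller. A sketch of one way to pick out the agent nodes — the payload shape below is invented for illustration, not taken from AEM:

```python
import json

# Hypothetical response: agent nodes appear as child objects of the folder node.
raw = '{"jcr:primaryType": "cq:Page", "publish": {"jcr:primaryType": "cq:Page"}}'
agents = json.loads(raw)
# Keep only child nodes, i.e. the candidate replication agents themselves.
names = [k for k, v in agents.items() if isinstance(v, dict)]
print(names)  # → ['publish']
```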
def get_authorizable_keystore(self, intermediate_path, authorizable_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_authorizable_keystore(intermediate_path, authorizable_id, async=True)
>>> result = thread.get()
:param async bool
:param str intermediate_path: (required)
:param str authorizable_id: (required)
:return: KeystoreInformations
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_authorizable_keystore_with_http_info(intermediate_path, authorizable_id, **kwargs)
else:
            data = self.get_authorizable_keystore_with_http_info(intermediate_path, authorizable_id, **kwargs)
return data
def get_authorizable_keystore_with_http_info(self, intermediate_path, authorizable_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_authorizable_keystore_with_http_info(intermediate_path, authorizable_id, async=True)
>>> result = thread.get()
:param async bool
:param str intermediate_path: (required)
:param str authorizable_id: (required)
:return: KeystoreInformations
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['intermediate_path', 'authorizable_id']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_authorizable_keystore" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'intermediate_path' is set
if ('intermediate_path' not in params) or (params['intermediate_path'] is None):
raise ValueError("Missing the required parameter `intermediate_path` when calling `get_authorizable_keystore`")
# verify the required parameter 'authorizable_id' is set
if ('authorizable_id' not in params) or (params['authorizable_id'] is None):
raise ValueError("Missing the required parameter `authorizable_id` when calling `get_authorizable_keystore`")
collection_formats = {}
path_params = {}
if 'intermediate_path' in params:
path_params['intermediatePath'] = params['intermediate_path']
if 'authorizable_id' in params:
path_params['authorizableId'] = params['authorizable_id']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{intermediatePath}/{authorizableId}.ks.json', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='KeystoreInformations',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
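# Each endpoint above is exposed as a public method that delegates to a
# `_with_http_info` twin: called synchronously it returns the deserialized
# data, and with the async flag set it returns a thread-like handle whose
# `.get()` yields the result. A minimal standalone sketch of that dispatch
# pattern follows; the `fetch*` names, the explicit `pool` argument, and the
# `async_req` spelling are illustrative only (`async` itself is a reserved
# word in Python 3, so the sketch renames the flag).

```python
from multiprocessing.pool import ThreadPool

def fetch_with_http_info(path):
    # Stand-in for the real HTTP call; returns (data, status, headers).
    return ({"path": path}, 200, {})

def fetch(path, **kwargs):
    """Return data directly, or an AsyncResult when async_req=True."""
    pool = kwargs.pop("pool", None)
    if kwargs.pop("async_req", False):
        # Run the request on a worker thread; the caller joins via .get().
        return pool.apply_async(fetch_with_http_info, (path,))
    data, _status, _headers = fetch_with_http_info(path)
    return data

pool = ThreadPool(1)
thread = fetch("/etc/truststore", async_req=True, pool=pool)
data, status, headers = thread.get()
```

# In the real client the worker pool lives on the ApiClient instance rather
# than being passed per call; the sketch inlines it only to stay self-contained.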
def get_keystore(self, intermediate_path, authorizable_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_keystore(intermediate_path, authorizable_id, async=True)
>>> result = thread.get()
:param bool async:
:param str intermediate_path: (required)
:param str authorizable_id: (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_keystore_with_http_info(intermediate_path, authorizable_id, **kwargs)
else:
data = self.get_keystore_with_http_info(intermediate_path, authorizable_id, **kwargs)
return data
def get_keystore_with_http_info(self, intermediate_path, authorizable_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_keystore_with_http_info(intermediate_path, authorizable_id, async=True)
>>> result = thread.get()
:param bool async:
:param str intermediate_path: (required)
:param str authorizable_id: (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['intermediate_path', 'authorizable_id']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_keystore" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'intermediate_path' is set
if ('intermediate_path' not in params) or (params['intermediate_path'] is None):
raise ValueError("Missing the required parameter `intermediate_path` when calling `get_keystore`")
# verify the required parameter 'authorizable_id' is set
if ('authorizable_id' not in params) or (params['authorizable_id'] is None):
raise ValueError("Missing the required parameter `authorizable_id` when calling `get_keystore`")
collection_formats = {}
path_params = {}
if 'intermediate_path' in params:
path_params['intermediatePath'] = params['intermediate_path']
if 'authorizable_id' in params:
path_params['authorizableId'] = params['authorizable_id']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/octet-stream'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{intermediatePath}/{authorizableId}/keystore/store.p12', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
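# Every `_with_http_info` method opens with the same guard: collect the
# accepted parameter names into `all_params`, then reject any keyword
# argument outside that list before building the request. A self-contained
# sketch of that guard (the `validate_kwargs` helper name is illustrative,
# not part of the client):

```python
def validate_kwargs(method_name, kwargs, all_params):
    # Reject unknown keyword arguments, mirroring the check each
    # *_with_http_info method applies before building the request.
    params = {}
    for key, val in kwargs.items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )
        params[key] = val
    return params

accepted = validate_kwargs("get_keystore", {"_request_timeout": 30},
                           ["intermediate_path", "authorizable_id",
                            "_request_timeout"])
```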
def get_node(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_node(path, name, async=True)
>>> result = thread.get()
:param bool async:
:param str path: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_node_with_http_info(path, name, **kwargs)
else:
data = self.get_node_with_http_info(path, name, **kwargs)
return data
def get_node_with_http_info(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_node_with_http_info(path, name, async=True)
>>> result = thread.get()
:param bool async:
:param str path: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'name']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_node" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `get_node`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `get_node`")
collection_formats = {}
path_params = {}
if 'path' in params:
path_params['path'] = params['path']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{path}/{name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
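# The `path_params` dict built above is substituted into templated resource
# paths such as '/{path}/{name}' by `call_api`. A hedged sketch of that
# expansion, assuming the usual generated-client behavior of URL-quoting each
# value (the `build_path` helper name is illustrative):

```python
from urllib.parse import quote

def build_path(template, path_params):
    """Expand {placeholders} in an endpoint template, URL-quoting values."""
    for key, value in path_params.items():
        template = template.replace("{%s}" % key, quote(str(value), safe=""))
    return template

url = build_path("/{path}/{name}", {"path": "apps", "name": "my_node"})
```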
def get_package(self, group, name, version, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_package(group, name, version, async=True)
>>> result = thread.get()
:param bool async:
:param str group: (required)
:param str name: (required)
:param str version: (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_package_with_http_info(group, name, version, **kwargs)
else:
data = self.get_package_with_http_info(group, name, version, **kwargs)
return data
def get_package_with_http_info(self, group, name, version, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_package_with_http_info(group, name, version, async=True)
>>> result = thread.get()
:param bool async:
:param str group: (required)
:param str name: (required)
:param str version: (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group', 'name', 'version']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_package" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group' is set
if ('group' not in params) or (params['group'] is None):
raise ValueError("Missing the required parameter `group` when calling `get_package`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `get_package`")
# verify the required parameter 'version' is set
if ('version' not in params) or (params['version'] is None):
raise ValueError("Missing the required parameter `version` when calling `get_package`")
collection_formats = {}
path_params = {}
if 'group' in params:
path_params['group'] = params['group']
if 'name' in params:
path_params['name'] = params['name']
if 'version' in params:
path_params['version'] = params['version']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/octet-stream'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/packages/{group}/{name}-{version}.zip', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_package_filter(self, group, name, version, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_package_filter(group, name, version, async=True)
>>> result = thread.get()
:param bool async:
:param str group: (required)
:param str name: (required)
:param str version: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_package_filter_with_http_info(group, name, version, **kwargs)
else:
data = self.get_package_filter_with_http_info(group, name, version, **kwargs)
return data
def get_package_filter_with_http_info(self, group, name, version, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_package_filter_with_http_info(group, name, version, async=True)
>>> result = thread.get()
:param bool async:
:param str group: (required)
:param str name: (required)
:param str version: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['group', 'name', 'version']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_package_filter" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'group' is set
if ('group' not in params) or (params['group'] is None):
raise ValueError("Missing the required parameter `group` when calling `get_package_filter`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `get_package_filter`")
# verify the required parameter 'version' is set
if ('version' not in params) or (params['version'] is None):
raise ValueError("Missing the required parameter `version` when calling `get_package_filter`")
collection_formats = {}
path_params = {}
if 'group' in params:
path_params['group'] = params['group']
if 'name' in params:
path_params['name'] = params['name']
if 'version' in params:
path_params['version'] = params['version']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/packages/{group}/{name}-{version}.zip/jcr:content/vlt:definition/filter.tidy.2.json', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
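# get_package and get_package_filter address the same content package: the
# zip itself at /etc/packages/{group}/{name}-{version}.zip, and its workspace
# filter serialized as JSON under the package's vlt:definition node. A small
# sketch of how those two resource paths relate (helper names are
# illustrative, not part of the client):

```python
def package_path(group, name, version):
    # Packages live under /etc/packages/<group>/<name>-<version>.zip.
    return "/etc/packages/%s/%s-%s.zip" % (group, name, version)

def package_filter_path(group, name, version):
    # The workspace filter is exposed as tidy JSON beneath the package's
    # jcr:content/vlt:definition node.
    return package_path(group, name, version) + \
        "/jcr:content/vlt:definition/filter.tidy.2.json"
```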
def get_query(self, path, p_limit, _1_property, _1_property_value, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_query(path, p_limit, _1_property, _1_property_value, async=True)
>>> result = thread.get()
:param bool async:
:param str path: (required)
:param float p_limit: (required)
:param str _1_property: (required)
:param str _1_property_value: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_query_with_http_info(path, p_limit, _1_property, _1_property_value, **kwargs)
else:
data = self.get_query_with_http_info(path, p_limit, _1_property, _1_property_value, **kwargs)
return data
def get_query_with_http_info(self, path, p_limit, _1_property, _1_property_value, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_query_with_http_info(path, p_limit, _1_property, _1_property_value, async=True)
>>> result = thread.get()
:param bool async:
:param str path: (required)
:param float p_limit: (required)
:param str _1_property: (required)
:param str _1_property_value: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'p_limit', '_1_property', '_1_property_value']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_query" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `get_query`")
# verify the required parameter 'p_limit' is set
if ('p_limit' not in params) or (params['p_limit'] is None):
raise ValueError("Missing the required parameter `p_limit` when calling `get_query`")
# verify the required parameter '_1_property' is set
if ('_1_property' not in params) or (params['_1_property'] is None):
raise ValueError("Missing the required parameter `_1_property` when calling `get_query`")
# verify the required parameter '_1_property_value' is set
if ('_1_property_value' not in params) or (params['_1_property_value'] is None):
raise ValueError("Missing the required parameter `_1_property_value` when calling `get_query`")
collection_formats = {}
path_params = {}
query_params = []
if 'path' in params:
query_params.append(('path', params['path']))
if 'p_limit' in params:
query_params.append(('p.limit', params['p_limit']))
if '_1_property' in params:
query_params.append(('1_property', params['_1_property']))
if '_1_property_value' in params:
query_params.append(('1_property.value', params['_1_property_value']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/bin/querybuilder.json', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
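# get_query maps its Python argument names onto AEM QueryBuilder predicate
# keys: p_limit becomes p.limit (result cap) and the numbered pair
# _1_property / _1_property_value becomes 1_property / 1_property.value
# (match nodes by property). A sketch of the resulting query string for
# /bin/querybuilder.json (the helper name is illustrative):

```python
from urllib.parse import urlencode

def querybuilder_params(path, limit, prop, value):
    # Preserve predicate order when encoding, as the client does by
    # appending (key, value) tuples to a list rather than using a dict.
    params = [
        ("path", path),
        ("p.limit", limit),
        ("1_property", prop),
        ("1_property.value", value),
    ]
    return urlencode(params)

qs = querybuilder_params("/content", 10,
                         "sling:resourceType", "my/components/page")
```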
def get_truststore(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_truststore(async=True)
>>> result = thread.get()
:param bool async:
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_truststore_with_http_info(**kwargs)
else:
data = self.get_truststore_with_http_info(**kwargs)
return data
def get_truststore_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_truststore_with_http_info(async=True)
>>> result = thread.get()
:param bool async:
:return: file
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_truststore" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/octet-stream'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/truststore/truststore.p12', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_truststore_informations(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_truststore_informations(async=True)
>>> result = thread.get()
:param bool async:
:return: TruststoreInformations
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_truststore_informations_with_http_info(**kwargs)
else:
data = self.get_truststore_informations_with_http_info(**kwargs)
return data
def get_truststore_informations_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_truststore_informations_with_http_info(async=True)
>>> result = thread.get()
:param bool async:
:return: TruststoreInformations
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_truststore_informations" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/libs/granite/security/truststore.json', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TruststoreInformations',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
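# Each method derives its Accept header from the endpoint's declared media
# types via self.api_client.select_header_accept(...): text/plain,
# application/json, or application/octet-stream above. A hedged sketch of
# what such a helper typically does in generated clients; the preference for
# application/json is an assumption about the common implementation, not
# taken from this file.

```python
def select_header_accept(accepts):
    # Build the Accept header value from the endpoint's acceptable media
    # types, preferring application/json when it is among them.
    if not accepts:
        return None
    accepts = [a.lower() for a in accepts]
    if "application/json" in accepts:
        return "application/json"
    return ", ".join(accepts)
```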
def post_agent(self, runmode, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_agent(runmode, name, async=True)
>>> result = thread.get()
:param bool async:
:param str runmode: (required)
:param str name: (required)
:param bool jcrcontentcqdistribute:
:param str jcrcontentcqdistribute_type_hint:
:param str jcrcontentcqname:
:param str jcrcontentcqtemplate:
:param bool jcrcontentenabled:
:param str jcrcontentjcrdescription:
:param str jcrcontentjcrlast_modified:
:param str jcrcontentjcrlast_modified_by:
:param str jcrcontentjcrmixin_types:
:param str jcrcontentjcrtitle:
:param str jcrcontentlog_level:
:param bool jcrcontentno_status_update:
:param bool jcrcontentno_versioning:
:param float jcrcontentprotocol_connect_timeout:
:param bool jcrcontentprotocol_http_connection_closed:
:param str jcrcontentprotocol_http_expired:
:param list[str] jcrcontentprotocol_http_headers:
:param str jcrcontentprotocol_http_headers_type_hint:
:param str jcrcontentprotocol_http_method:
:param bool jcrcontentprotocol_https_relaxed:
:param str jcrcontentprotocol_interface:
:param float jcrcontentprotocol_socket_timeout:
:param str jcrcontentprotocol_version:
:param str jcrcontentproxy_ntlm_domain:
:param str jcrcontentproxy_ntlm_host:
:param str jcrcontentproxy_host:
:param str jcrcontentproxy_password:
:param float jcrcontentproxy_port:
:param str jcrcontentproxy_user:
:param float jcrcontentqueue_batch_max_size:
:param str jcrcontentqueue_batch_mode:
:param float jcrcontentqueue_batch_wait_time:
:param str jcrcontentretry_delay:
:param bool jcrcontentreverse_replication:
:param str jcrcontentserialization_type:
:param str jcrcontentslingresource_type:
:param str jcrcontentssl:
:param str jcrcontenttransport_ntlm_domain:
:param str jcrcontenttransport_ntlm_host:
:param str jcrcontenttransport_password:
:param str jcrcontenttransport_uri:
:param str jcrcontenttransport_user:
:param bool jcrcontenttrigger_distribute:
:param bool jcrcontenttrigger_modified:
:param bool jcrcontenttrigger_on_off_time:
:param bool jcrcontenttrigger_receive:
:param bool jcrcontenttrigger_specific:
:param str jcrcontentuser_id:
:param str jcrprimary_type:
:param str operation:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.post_agent_with_http_info(runmode, name, **kwargs)
else:
data = self.post_agent_with_http_info(runmode, name, **kwargs)
return data
def post_agent_with_http_info(self, runmode, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_agent_with_http_info(runmode, name, async=True)
>>> result = thread.get()
:param bool async:
:param str runmode: (required)
:param str name: (required)
:param bool jcrcontentcqdistribute:
:param str jcrcontentcqdistribute_type_hint:
:param str jcrcontentcqname:
:param str jcrcontentcqtemplate:
:param bool jcrcontentenabled:
:param str jcrcontentjcrdescription:
:param str jcrcontentjcrlast_modified:
:param str jcrcontentjcrlast_modified_by:
:param str jcrcontentjcrmixin_types:
:param str jcrcontentjcrtitle:
:param str jcrcontentlog_level:
:param bool jcrcontentno_status_update:
:param bool jcrcontentno_versioning:
:param float jcrcontentprotocol_connect_timeout:
:param bool jcrcontentprotocol_http_connection_closed:
:param str jcrcontentprotocol_http_expired:
:param list[str] jcrcontentprotocol_http_headers:
:param str jcrcontentprotocol_http_headers_type_hint:
:param str jcrcontentprotocol_http_method:
:param bool jcrcontentprotocol_https_relaxed:
:param str jcrcontentprotocol_interface:
:param float jcrcontentprotocol_socket_timeout:
:param str jcrcontentprotocol_version:
:param str jcrcontentproxy_ntlm_domain:
:param str jcrcontentproxy_ntlm_host:
:param str jcrcontentproxy_host:
:param str jcrcontentproxy_password:
:param float jcrcontentproxy_port:
:param str jcrcontentproxy_user:
:param float jcrcontentqueue_batch_max_size:
:param str jcrcontentqueue_batch_mode:
:param float jcrcontentqueue_batch_wait_time:
:param str jcrcontentretry_delay:
:param bool jcrcontentreverse_replication:
:param str jcrcontentserialization_type:
:param str jcrcontentslingresource_type:
:param str jcrcontentssl:
:param str jcrcontenttransport_ntlm_domain:
:param str jcrcontenttransport_ntlm_host:
:param str jcrcontenttransport_password:
:param str jcrcontenttransport_uri:
:param str jcrcontenttransport_user:
:param bool jcrcontenttrigger_distribute:
:param bool jcrcontenttrigger_modified:
:param bool jcrcontenttrigger_on_off_time:
:param bool jcrcontenttrigger_receive:
:param bool jcrcontenttrigger_specific:
:param str jcrcontentuser_id:
:param str jcrprimary_type:
:param str operation:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = [
    'runmode', 'name', 'jcrcontentcqdistribute', 'jcrcontentcqdistribute_type_hint',
    'jcrcontentcqname', 'jcrcontentcqtemplate', 'jcrcontentenabled', 'jcrcontentjcrdescription',
    'jcrcontentjcrlast_modified', 'jcrcontentjcrlast_modified_by', 'jcrcontentjcrmixin_types',
    'jcrcontentjcrtitle', 'jcrcontentlog_level', 'jcrcontentno_status_update',
    'jcrcontentno_versioning', 'jcrcontentprotocol_connect_timeout',
    'jcrcontentprotocol_http_connection_closed', 'jcrcontentprotocol_http_expired',
    'jcrcontentprotocol_http_headers', 'jcrcontentprotocol_http_headers_type_hint',
    'jcrcontentprotocol_http_method', 'jcrcontentprotocol_https_relaxed',
    'jcrcontentprotocol_interface', 'jcrcontentprotocol_socket_timeout',
    'jcrcontentprotocol_version', 'jcrcontentproxy_ntlm_domain', 'jcrcontentproxy_ntlm_host',
    'jcrcontentproxy_host', 'jcrcontentproxy_password', 'jcrcontentproxy_port',
    'jcrcontentproxy_user', 'jcrcontentqueue_batch_max_size', 'jcrcontentqueue_batch_mode',
    'jcrcontentqueue_batch_wait_time', 'jcrcontentretry_delay', 'jcrcontentreverse_replication',
    'jcrcontentserialization_type', 'jcrcontentslingresource_type', 'jcrcontentssl',
    'jcrcontenttransport_ntlm_domain', 'jcrcontenttransport_ntlm_host',
    'jcrcontenttransport_password', 'jcrcontenttransport_uri', 'jcrcontenttransport_user',
    'jcrcontenttrigger_distribute', 'jcrcontenttrigger_modified', 'jcrcontenttrigger_on_off_time',
    'jcrcontenttrigger_receive', 'jcrcontenttrigger_specific', 'jcrcontentuser_id',
    'jcrprimary_type', 'operation']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_agent" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `post_agent`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `post_agent`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
if 'jcrcontentcqdistribute' in params:
query_params.append(('jcr:content/cq:distribute', params['jcrcontentcqdistribute']))
if 'jcrcontentcqdistribute_type_hint' in params:
query_params.append(('jcr:content/cq:distribute@TypeHint', params['jcrcontentcqdistribute_type_hint']))
if 'jcrcontentcqname' in params:
query_params.append(('jcr:content/cq:name', params['jcrcontentcqname']))
if 'jcrcontentcqtemplate' in params:
query_params.append(('jcr:content/cq:template', params['jcrcontentcqtemplate']))
if 'jcrcontentenabled' in params:
query_params.append(('jcr:content/enabled', params['jcrcontentenabled']))
if 'jcrcontentjcrdescription' in params:
query_params.append(('jcr:content/jcr:description', params['jcrcontentjcrdescription']))
if 'jcrcontentjcrlast_modified' in params:
query_params.append(('jcr:content/jcr:lastModified', params['jcrcontentjcrlast_modified']))
if 'jcrcontentjcrlast_modified_by' in params:
query_params.append(('jcr:content/jcr:lastModifiedBy', params['jcrcontentjcrlast_modified_by']))
if 'jcrcontentjcrmixin_types' in params:
query_params.append(('jcr:content/jcr:mixinTypes', params['jcrcontentjcrmixin_types']))
if 'jcrcontentjcrtitle' in params:
query_params.append(('jcr:content/jcr:title', params['jcrcontentjcrtitle']))
if 'jcrcontentlog_level' in params:
query_params.append(('jcr:content/logLevel', params['jcrcontentlog_level']))
if 'jcrcontentno_status_update' in params:
query_params.append(('jcr:content/noStatusUpdate', params['jcrcontentno_status_update']))
if 'jcrcontentno_versioning' in params:
            query_params.append(('jcr:content/noVersioning', params['jcrcontentno_versioning']))
        if 'jcrcontentprotocol_connect_timeout' in params:
            query_params.append(('jcr:content/protocolConnectTimeout', params['jcrcontentprotocol_connect_timeout']))
        if 'jcrcontentprotocol_http_connection_closed' in params:
            query_params.append(('jcr:content/protocolHTTPConnectionClosed', params['jcrcontentprotocol_http_connection_closed']))
        if 'jcrcontentprotocol_http_expired' in params:
            query_params.append(('jcr:content/protocolHTTPExpired', params['jcrcontentprotocol_http_expired']))
        if 'jcrcontentprotocol_http_headers' in params:
            query_params.append(('jcr:content/protocolHTTPHeaders', params['jcrcontentprotocol_http_headers']))
            collection_formats['jcr:content/protocolHTTPHeaders'] = 'multi'
        if 'jcrcontentprotocol_http_headers_type_hint' in params:
            query_params.append(('jcr:content/protocolHTTPHeaders@TypeHint', params['jcrcontentprotocol_http_headers_type_hint']))
        if 'jcrcontentprotocol_http_method' in params:
            query_params.append(('jcr:content/protocolHTTPMethod', params['jcrcontentprotocol_http_method']))
        if 'jcrcontentprotocol_https_relaxed' in params:
            query_params.append(('jcr:content/protocolHTTPSRelaxed', params['jcrcontentprotocol_https_relaxed']))
        if 'jcrcontentprotocol_interface' in params:
            query_params.append(('jcr:content/protocolInterface', params['jcrcontentprotocol_interface']))
        if 'jcrcontentprotocol_socket_timeout' in params:
            query_params.append(('jcr:content/protocolSocketTimeout', params['jcrcontentprotocol_socket_timeout']))
        if 'jcrcontentprotocol_version' in params:
            query_params.append(('jcr:content/protocolVersion', params['jcrcontentprotocol_version']))
        if 'jcrcontentproxy_ntlm_domain' in params:
            query_params.append(('jcr:content/proxyNTLMDomain', params['jcrcontentproxy_ntlm_domain']))
        if 'jcrcontentproxy_ntlm_host' in params:
            query_params.append(('jcr:content/proxyNTLMHost', params['jcrcontentproxy_ntlm_host']))
        if 'jcrcontentproxy_host' in params:
            query_params.append(('jcr:content/proxyHost', params['jcrcontentproxy_host']))
        if 'jcrcontentproxy_password' in params:
            query_params.append(('jcr:content/proxyPassword', params['jcrcontentproxy_password']))
        if 'jcrcontentproxy_port' in params:
            query_params.append(('jcr:content/proxyPort', params['jcrcontentproxy_port']))
        if 'jcrcontentproxy_user' in params:
            query_params.append(('jcr:content/proxyUser', params['jcrcontentproxy_user']))
        if 'jcrcontentqueue_batch_max_size' in params:
            query_params.append(('jcr:content/queueBatchMaxSize', params['jcrcontentqueue_batch_max_size']))
        if 'jcrcontentqueue_batch_mode' in params:
            query_params.append(('jcr:content/queueBatchMode', params['jcrcontentqueue_batch_mode']))
        if 'jcrcontentqueue_batch_wait_time' in params:
            query_params.append(('jcr:content/queueBatchWaitTime', params['jcrcontentqueue_batch_wait_time']))
        if 'jcrcontentretry_delay' in params:
            query_params.append(('jcr:content/retryDelay', params['jcrcontentretry_delay']))
        if 'jcrcontentreverse_replication' in params:
            query_params.append(('jcr:content/reverseReplication', params['jcrcontentreverse_replication']))
        if 'jcrcontentserialization_type' in params:
            query_params.append(('jcr:content/serializationType', params['jcrcontentserialization_type']))
        if 'jcrcontentslingresource_type' in params:
            query_params.append(('jcr:content/sling:resourceType', params['jcrcontentslingresource_type']))
        if 'jcrcontentssl' in params:
            query_params.append(('jcr:content/ssl', params['jcrcontentssl']))
        if 'jcrcontenttransport_ntlm_domain' in params:
            query_params.append(('jcr:content/transportNTLMDomain', params['jcrcontenttransport_ntlm_domain']))
        if 'jcrcontenttransport_ntlm_host' in params:
            query_params.append(('jcr:content/transportNTLMHost', params['jcrcontenttransport_ntlm_host']))
        if 'jcrcontenttransport_password' in params:
            query_params.append(('jcr:content/transportPassword', params['jcrcontenttransport_password']))
        if 'jcrcontenttransport_uri' in params:
            query_params.append(('jcr:content/transportUri', params['jcrcontenttransport_uri']))
        if 'jcrcontenttransport_user' in params:
            query_params.append(('jcr:content/transportUser', params['jcrcontenttransport_user']))
        if 'jcrcontenttrigger_distribute' in params:
            query_params.append(('jcr:content/triggerDistribute', params['jcrcontenttrigger_distribute']))
        if 'jcrcontenttrigger_modified' in params:
            query_params.append(('jcr:content/triggerModified', params['jcrcontenttrigger_modified']))
        if 'jcrcontenttrigger_on_off_time' in params:
            query_params.append(('jcr:content/triggerOnOffTime', params['jcrcontenttrigger_on_off_time']))
        if 'jcrcontenttrigger_receive' in params:
            query_params.append(('jcr:content/triggerReceive', params['jcrcontenttrigger_receive']))
        if 'jcrcontenttrigger_specific' in params:
            query_params.append(('jcr:content/triggerSpecific', params['jcrcontenttrigger_specific']))
        if 'jcrcontentuser_id' in params:
            query_params.append(('jcr:content/userId', params['jcrcontentuser_id']))
        if 'jcrprimary_type' in params:
            query_params.append(('jcr:primaryType', params['jcrprimary_type']))
        if 'operation' in params:
            query_params.append((':operation', params['operation']))

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # Authentication setting
        auth_settings = ['aemAuth']

        return self.api_client.call_api('/etc/replication/agents.{runmode}/{name}', 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type=None,
                                        auth_settings=auth_settings,
                                        async=params.get('async'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def post_authorizable_keystore(self, intermediate_path, authorizable_id, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_authorizable_keystore(intermediate_path, authorizable_id, async=True)
        >>> result = thread.get()

        :param async bool
        :param str intermediate_path: (required)
        :param str authorizable_id: (required)
        :param str operation:
        :param str current_password:
        :param str new_password:
        :param str re_password:
        :param str key_password:
        :param str key_store_pass:
        :param str operation2:
        :param str alias:
        :param str new_alias:
        :param str remove_alias:
        :param file cert_chain:
        :param file pk:
        :param file key_store:
        :return: KeystoreInformations
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.post_authorizable_keystore_with_http_info(intermediate_path, authorizable_id, **kwargs)
        else:
            (data) = self.post_authorizable_keystore_with_http_info(intermediate_path, authorizable_id, **kwargs)
            return data

    def post_authorizable_keystore_with_http_info(self, intermediate_path, authorizable_id, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_authorizable_keystore_with_http_info(intermediate_path, authorizable_id, async=True)
        >>> result = thread.get()

        :param async bool
        :param str intermediate_path: (required)
        :param str authorizable_id: (required)
        :param str operation:
        :param str current_password:
        :param str new_password:
        :param str re_password:
        :param str key_password:
        :param str key_store_pass:
        :param str operation2:
        :param str alias:
        :param str new_alias:
        :param str remove_alias:
        :param file cert_chain:
        :param file pk:
        :param file key_store:
        :return: KeystoreInformations
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['intermediate_path', 'authorizable_id', 'operation', 'current_password', 'new_password', 're_password', 'key_password', 'key_store_pass', 'operation2', 'alias', 'new_alias', 'remove_alias', 'cert_chain', 'pk', 'key_store']
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_authorizable_keystore" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'intermediate_path' is set
        if ('intermediate_path' not in params) or (params['intermediate_path'] is None):
            raise ValueError("Missing the required parameter `intermediate_path` when calling `post_authorizable_keystore`")
        # verify the required parameter 'authorizable_id' is set
        if ('authorizable_id' not in params) or (params['authorizable_id'] is None):
            raise ValueError("Missing the required parameter `authorizable_id` when calling `post_authorizable_keystore`")

        collection_formats = {}

        path_params = {}
        if 'intermediate_path' in params:
            path_params['intermediatePath'] = params['intermediate_path']
        if 'authorizable_id' in params:
            path_params['authorizableId'] = params['authorizable_id']

        query_params = []
        if 'operation' in params:
            query_params.append((':operation', params['operation']))
        if 'current_password' in params:
            query_params.append(('currentPassword', params['current_password']))
        if 'new_password' in params:
            query_params.append(('newPassword', params['new_password']))
        if 're_password' in params:
            query_params.append(('rePassword', params['re_password']))
        if 'key_password' in params:
            query_params.append(('keyPassword', params['key_password']))
        if 'key_store_pass' in params:
            query_params.append(('keyStorePass', params['key_store_pass']))
        if 'operation2' in params:
            query_params.append((':operation', params['operation2']))
        if 'alias' in params:
            query_params.append(('alias', params['alias']))
        if 'new_alias' in params:
            query_params.append(('newAlias', params['new_alias']))
        if 'remove_alias' in params:
            query_params.append(('removeAlias', params['remove_alias']))

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'cert_chain' in params:
            local_var_files['cert-chain'] = params['cert_chain']
        if 'pk' in params:
            local_var_files['pk'] = params['pk']
        if 'key_store' in params:
            local_var_files['keyStore'] = params['key_store']

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['multipart/form-data'])

        # Authentication setting
        auth_settings = ['aemAuth']

        return self.api_client.call_api('/{intermediatePath}/{authorizableId}.ks.html', 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='KeystoreInformations',
                                        auth_settings=auth_settings,
                                        async=params.get('async'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def post_authorizables(self, authorizable_id, intermediate_path, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_authorizables(authorizable_id, intermediate_path, async=True)
        >>> result = thread.get()

        :param async bool
        :param str authorizable_id: (required)
        :param str intermediate_path: (required)
        :param str create_user:
        :param str create_group:
        :param str reppassword:
        :param str profilegiven_name:
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.post_authorizables_with_http_info(authorizable_id, intermediate_path, **kwargs)
        else:
            (data) = self.post_authorizables_with_http_info(authorizable_id, intermediate_path, **kwargs)
            return data

    def post_authorizables_with_http_info(self, authorizable_id, intermediate_path, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_authorizables_with_http_info(authorizable_id, intermediate_path, async=True)
        >>> result = thread.get()

        :param async bool
        :param str authorizable_id: (required)
        :param str intermediate_path: (required)
        :param str create_user:
        :param str create_group:
        :param str reppassword:
        :param str profilegiven_name:
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['authorizable_id', 'intermediate_path', 'create_user', 'create_group', 'reppassword', 'profilegiven_name']
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_authorizables" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'authorizable_id' is set
        if ('authorizable_id' not in params) or (params['authorizable_id'] is None):
            raise ValueError("Missing the required parameter `authorizable_id` when calling `post_authorizables`")
        # verify the required parameter 'intermediate_path' is set
        if ('intermediate_path' not in params) or (params['intermediate_path'] is None):
            raise ValueError("Missing the required parameter `intermediate_path` when calling `post_authorizables`")

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'authorizable_id' in params:
            query_params.append(('authorizableId', params['authorizable_id']))
        if 'intermediate_path' in params:
            query_params.append(('intermediatePath', params['intermediate_path']))
        if 'create_user' in params:
            query_params.append(('createUser', params['create_user']))
        if 'create_group' in params:
            query_params.append(('createGroup', params['create_group']))
        if 'reppassword' in params:
            query_params.append(('rep:password', params['reppassword']))
        if 'profilegiven_name' in params:
            query_params.append(('profile/givenName', params['profilegiven_name']))

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/html'])

        # Authentication setting
        auth_settings = ['aemAuth']

        return self.api_client.call_api('/libs/granite/security/post/authorizables', 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        async=params.get('async'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def post_config_adobe_granite_saml_authentication_handler(self, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_config_adobe_granite_saml_authentication_handler(async=True)
        >>> result = thread.get()

        :param async bool
        :param str key_store_password:
        :param str key_store_password_type_hint:
        :param int service_ranking:
        :param str service_ranking_type_hint:
        :param bool idp_http_redirect:
        :param str idp_http_redirect_type_hint:
        :param bool create_user:
        :param str create_user_type_hint:
        :param str default_redirect_url:
        :param str default_redirect_url_type_hint:
        :param str user_id_attribute:
        :param str user_id_attribute_type_hint:
        :param list[str] default_groups:
        :param str default_groups_type_hint:
        :param str idp_cert_alias:
        :param str idp_cert_alias_type_hint:
        :param bool add_group_memberships:
        :param str add_group_memberships_type_hint:
        :param list[str] path:
        :param str path_type_hint:
        :param list[str] synchronize_attributes:
        :param str synchronize_attributes_type_hint:
        :param int clock_tolerance:
        :param str clock_tolerance_type_hint:
        :param str group_membership_attribute:
        :param str group_membership_attribute_type_hint:
        :param str idp_url:
        :param str idp_url_type_hint:
        :param str logout_url:
        :param str logout_url_type_hint:
        :param str service_provider_entity_id:
        :param str service_provider_entity_id_type_hint:
        :param str assertion_consumer_service_url:
        :param str assertion_consumer_service_url_type_hint:
        :param bool handle_logout:
        :param str handle_logout_type_hint:
        :param str sp_private_key_alias:
        :param str sp_private_key_alias_type_hint:
        :param bool use_encryption:
        :param str use_encryption_type_hint:
        :param str name_id_format:
        :param str name_id_format_type_hint:
        :param str digest_method:
        :param str digest_method_type_hint:
        :param str signature_method:
        :param str signature_method_type_hint:
        :param str user_intermediate_path:
        :param str user_intermediate_path_type_hint:
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.post_config_adobe_granite_saml_authentication_handler_with_http_info(**kwargs)
        else:
            (data) = self.post_config_adobe_granite_saml_authentication_handler_with_http_info(**kwargs)
            return data

    def post_config_adobe_granite_saml_authentication_handler_with_http_info(self, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_config_adobe_granite_saml_authentication_handler_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param str key_store_password:
        :param str key_store_password_type_hint:
        :param int service_ranking:
        :param str service_ranking_type_hint:
        :param bool idp_http_redirect:
        :param str idp_http_redirect_type_hint:
        :param bool create_user:
        :param str create_user_type_hint:
        :param str default_redirect_url:
        :param str default_redirect_url_type_hint:
        :param str user_id_attribute:
        :param str user_id_attribute_type_hint:
        :param list[str] default_groups:
        :param str default_groups_type_hint:
        :param str idp_cert_alias:
        :param str idp_cert_alias_type_hint:
        :param bool add_group_memberships:
        :param str add_group_memberships_type_hint:
        :param list[str] path:
        :param str path_type_hint:
        :param list[str] synchronize_attributes:
        :param str synchronize_attributes_type_hint:
        :param int clock_tolerance:
        :param str clock_tolerance_type_hint:
        :param str group_membership_attribute:
        :param str group_membership_attribute_type_hint:
        :param str idp_url:
        :param str idp_url_type_hint:
        :param str logout_url:
        :param str logout_url_type_hint:
        :param str service_provider_entity_id:
        :param str service_provider_entity_id_type_hint:
        :param str assertion_consumer_service_url:
        :param str assertion_consumer_service_url_type_hint:
        :param bool handle_logout:
        :param str handle_logout_type_hint:
        :param str sp_private_key_alias:
        :param str sp_private_key_alias_type_hint:
        :param bool use_encryption:
        :param str use_encryption_type_hint:
        :param str name_id_format:
        :param str name_id_format_type_hint:
        :param str digest_method:
        :param str digest_method_type_hint:
        :param str signature_method:
        :param str signature_method_type_hint:
        :param str user_intermediate_path:
        :param str user_intermediate_path_type_hint:
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['key_store_password', 'key_store_password_type_hint', 'service_ranking', 'service_ranking_type_hint', 'idp_http_redirect', 'idp_http_redirect_type_hint', 'create_user', 'create_user_type_hint', 'default_redirect_url', 'default_redirect_url_type_hint', 'user_id_attribute', 'user_id_attribute_type_hint', 'default_groups', 'default_groups_type_hint', 'idp_cert_alias', 'idp_cert_alias_type_hint', 'add_group_memberships', 'add_group_memberships_type_hint', 'path', 'path_type_hint', 'synchronize_attributes', 'synchronize_attributes_type_hint', 'clock_tolerance', 'clock_tolerance_type_hint', 'group_membership_attribute', 'group_membership_attribute_type_hint', 'idp_url', 'idp_url_type_hint', 'logout_url', 'logout_url_type_hint', 'service_provider_entity_id', 'service_provider_entity_id_type_hint', 'assertion_consumer_service_url', 'assertion_consumer_service_url_type_hint', 'handle_logout', 'handle_logout_type_hint', 'sp_private_key_alias', 'sp_private_key_alias_type_hint', 'use_encryption', 'use_encryption_type_hint', 'name_id_format', 'name_id_format_type_hint', 'digest_method', 'digest_method_type_hint', 'signature_method', 'signature_method_type_hint', 'user_intermediate_path', 'user_intermediate_path_type_hint']
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_config_adobe_granite_saml_authentication_handler" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'key_store_password' in params:
            query_params.append(('keyStorePassword', params['key_store_password']))
        if 'key_store_password_type_hint' in params:
            query_params.append(('keyStorePassword@TypeHint', params['key_store_password_type_hint']))
        if 'service_ranking' in params:
            query_params.append(('service.ranking', params['service_ranking']))
        if 'service_ranking_type_hint' in params:
            query_params.append(('service.ranking@TypeHint', params['service_ranking_type_hint']))
        if 'idp_http_redirect' in params:
            query_params.append(('idpHttpRedirect', params['idp_http_redirect']))
        if 'idp_http_redirect_type_hint' in params:
            query_params.append(('idpHttpRedirect@TypeHint', params['idp_http_redirect_type_hint']))
        if 'create_user' in params:
            query_params.append(('createUser', params['create_user']))
        if 'create_user_type_hint' in params:
            query_params.append(('createUser@TypeHint', params['create_user_type_hint']))
        if 'default_redirect_url' in params:
            query_params.append(('defaultRedirectUrl', params['default_redirect_url']))
        if 'default_redirect_url_type_hint' in params:
            query_params.append(('defaultRedirectUrl@TypeHint', params['default_redirect_url_type_hint']))
        if 'user_id_attribute' in params:
            query_params.append(('userIDAttribute', params['user_id_attribute']))
        if 'user_id_attribute_type_hint' in params:
            query_params.append(('userIDAttribute@TypeHint', params['user_id_attribute_type_hint']))
        if 'default_groups' in params:
            query_params.append(('defaultGroups', params['default_groups']))
            collection_formats['defaultGroups'] = 'multi'
        if 'default_groups_type_hint' in params:
            query_params.append(('defaultGroups@TypeHint', params['default_groups_type_hint']))
        if 'idp_cert_alias' in params:
            query_params.append(('idpCertAlias', params['idp_cert_alias']))
        if 'idp_cert_alias_type_hint' in params:
            query_params.append(('idpCertAlias@TypeHint', params['idp_cert_alias_type_hint']))
        if 'add_group_memberships' in params:
            query_params.append(('addGroupMemberships', params['add_group_memberships']))
        if 'add_group_memberships_type_hint' in params:
            query_params.append(('addGroupMemberships@TypeHint', params['add_group_memberships_type_hint']))
        if 'path' in params:
            query_params.append(('path', params['path']))
            collection_formats['path'] = 'multi'
        if 'path_type_hint' in params:
            query_params.append(('path@TypeHint', params['path_type_hint']))
        if 'synchronize_attributes' in params:
            query_params.append(('synchronizeAttributes', params['synchronize_attributes']))
            collection_formats['synchronizeAttributes'] = 'multi'
        if 'synchronize_attributes_type_hint' in params:
            query_params.append(('synchronizeAttributes@TypeHint', params['synchronize_attributes_type_hint']))
        if 'clock_tolerance' in params:
            query_params.append(('clockTolerance', params['clock_tolerance']))
        if 'clock_tolerance_type_hint' in params:
            query_params.append(('clockTolerance@TypeHint', params['clock_tolerance_type_hint']))
        if 'group_membership_attribute' in params:
            query_params.append(('groupMembershipAttribute', params['group_membership_attribute']))
        if 'group_membership_attribute_type_hint' in params:
            query_params.append(('groupMembershipAttribute@TypeHint', params['group_membership_attribute_type_hint']))
        if 'idp_url' in params:
            query_params.append(('idpUrl', params['idp_url']))
        if 'idp_url_type_hint' in params:
            query_params.append(('idpUrl@TypeHint', params['idp_url_type_hint']))
        if 'logout_url' in params:
            query_params.append(('logoutUrl', params['logout_url']))
        if 'logout_url_type_hint' in params:
            query_params.append(('logoutUrl@TypeHint', params['logout_url_type_hint']))
        if 'service_provider_entity_id' in params:
            query_params.append(('serviceProviderEntityId', params['service_provider_entity_id']))
        if 'service_provider_entity_id_type_hint' in params:
            query_params.append(('serviceProviderEntityId@TypeHint', params['service_provider_entity_id_type_hint']))
        if 'assertion_consumer_service_url' in params:
            query_params.append(('assertionConsumerServiceURL', params['assertion_consumer_service_url']))
        if 'assertion_consumer_service_url_type_hint' in params:
            query_params.append(('assertionConsumerServiceURL@TypeHint', params['assertion_consumer_service_url_type_hint']))
        if 'handle_logout' in params:
            query_params.append(('handleLogout', params['handle_logout']))
        if 'handle_logout_type_hint' in params:
            query_params.append(('handleLogout@TypeHint', params['handle_logout_type_hint']))
        if 'sp_private_key_alias' in params:
            query_params.append(('spPrivateKeyAlias', params['sp_private_key_alias']))
        if 'sp_private_key_alias_type_hint' in params:
            query_params.append(('spPrivateKeyAlias@TypeHint', params['sp_private_key_alias_type_hint']))
        if 'use_encryption' in params:
            query_params.append(('useEncryption', params['use_encryption']))
        if 'use_encryption_type_hint' in params:
            query_params.append(('useEncryption@TypeHint', params['use_encryption_type_hint']))
        if 'name_id_format' in params:
            query_params.append(('nameIdFormat', params['name_id_format']))
        if 'name_id_format_type_hint' in params:
            query_params.append(('nameIdFormat@TypeHint', params['name_id_format_type_hint']))
        if 'digest_method' in params:
            query_params.append(('digestMethod', params['digest_method']))
        if 'digest_method_type_hint' in params:
            query_params.append(('digestMethod@TypeHint', params['digest_method_type_hint']))
        if 'signature_method' in params:
            query_params.append(('signatureMethod', params['signature_method']))
        if 'signature_method_type_hint' in params:
            query_params.append(('signatureMethod@TypeHint', params['signature_method_type_hint']))
        if 'user_intermediate_path' in params:
            query_params.append(('userIntermediatePath', params['user_intermediate_path']))
        if 'user_intermediate_path_type_hint' in params:
            query_params.append(('userIntermediatePath@TypeHint', params['user_intermediate_path_type_hint']))

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # Authentication setting
        auth_settings = ['aemAuth']

        return self.api_client.call_api('/apps/system/config/com.adobe.granite.auth.saml.SamlAuthenticationHandler.config', 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type=None,
                                        auth_settings=auth_settings,
                                        async=params.get('async'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def post_config_apache_felix_jetty_based_http_service(self, runmode, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_config_apache_felix_jetty_based_http_service(runmode, async=True)
        >>> result = thread.get()

        :param async bool
        :param str runmode: (required)
        :param bool org_apache_felix_https_nio:
        :param str org_apache_felix_https_nio_type_hint:
        :param str org_apache_felix_https_keystore:
        :param str org_apache_felix_https_keystore_type_hint:
        :param str org_apache_felix_https_keystore_password:
        :param str org_apache_felix_https_keystore_password_type_hint:
        :param str org_apache_felix_https_keystore_key:
        :param str org_apache_felix_https_keystore_key_type_hint:
        :param str org_apache_felix_https_keystore_key_password:
        :param str org_apache_felix_https_keystore_key_password_type_hint:
        :param str org_apache_felix_https_truststore:
        :param str org_apache_felix_https_truststore_type_hint:
        :param str org_apache_felix_https_truststore_password:
        :param str org_apache_felix_https_truststore_password_type_hint:
        :param str org_apache_felix_https_clientcertificate:
        :param str org_apache_felix_https_clientcertificate_type_hint:
        :param bool org_apache_felix_https_enable:
        :param str org_apache_felix_https_enable_type_hint:
        :param str org_osgi_service_http_port_secure:
        :param str org_osgi_service_http_port_secure_type_hint:
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.post_config_apache_felix_jetty_based_http_service_with_http_info(runmode, **kwargs)
        else:
            (data) = self.post_config_apache_felix_jetty_based_http_service_with_http_info(runmode, **kwargs)
            return data

    def post_config_apache_felix_jetty_based_http_service_with_http_info(self, runmode, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_config_apache_felix_jetty_based_http_service_with_http_info(runmode, async=True)
        >>> result = thread.get()

        :param async bool
        :param str runmode: (required)
        :param bool org_apache_felix_https_nio:
        :param str org_apache_felix_https_nio_type_hint:
        :param str org_apache_felix_https_keystore:
        :param str org_apache_felix_https_keystore_type_hint:
        :param str org_apache_felix_https_keystore_password:
        :param str org_apache_felix_https_keystore_password_type_hint:
        :param str org_apache_felix_https_keystore_key:
        :param str org_apache_felix_https_keystore_key_type_hint:
        :param str org_apache_felix_https_keystore_key_password:
        :param str org_apache_felix_https_keystore_key_password_type_hint:
        :param str org_apache_felix_https_truststore:
        :param str org_apache_felix_https_truststore_type_hint:
        :param str org_apache_felix_https_truststore_password:
        :param str org_apache_felix_https_truststore_password_type_hint:
        :param str org_apache_felix_https_clientcertificate:
        :param str org_apache_felix_https_clientcertificate_type_hint:
        :param bool org_apache_felix_https_enable:
        :param str org_apache_felix_https_enable_type_hint:
        :param str org_osgi_service_http_port_secure:
        :param str org_osgi_service_http_port_secure_type_hint:
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['runmode', 'org_apache_felix_https_nio', 'org_apache_felix_https_nio_type_hint', 'org_apache_felix_https_keystore', 'org_apache_felix_https_keystore_type_hint', 'org_apache_felix_https_keystore_password', 'org_apache_felix_https_keystore_password_type_hint', 'org_apache_felix_https_keystore_key', 'org_apache_felix_https_keystore_key_type_hint', 'org_apache_felix_https_keystore_key_password', 'org_apache_felix_https_keystore_key_password_type_hint', 'org_apache_felix_https_truststore', 'org_apache_felix_https_truststore_type_hint', 'org_apache_felix_https_truststore_password', 'org_apache_felix_https_truststore_password_type_hint', 'org_apache_felix_https_clientcertificate', 'org_apache_felix_https_clientcertificate_type_hint', 'org_apache_felix_https_enable', 'org_apache_felix_https_enable_type_hint', 'org_osgi_service_http_port_secure', 'org_osgi_service_http_port_secure_type_hint']
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_config_apache_felix_jetty_based_http_service" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'runmode' is set
        if ('runmode' not in params) or (params['runmode'] is None):
            raise ValueError("Missing the required parameter `runmode` when calling `post_config_apache_felix_jetty_based_http_service`")

        collection_formats = {}

        path_params = {}
        if 'runmode' in params:
            path_params['runmode'] = params['runmode']

        query_params = []
        if 'org_apache_felix_https_nio' in params:
            query_params.append(('org.apache.felix.https.nio', params['org_apache_felix_https_nio']))
        if 'org_apache_felix_https_nio_type_hint' in params:
            query_params.append(('org.apache.felix.https.nio@TypeHint', params['org_apache_felix_https_nio_type_hint']))
        if 'org_apache_felix_https_keystore' in params:
            query_params.append(('org.apache.felix.https.keystore', params['org_apache_felix_https_keystore']))
        if 'org_apache_felix_https_keystore_type_hint' in params:
            query_params.append(('org.apache.felix.https.keystore@TypeHint', params['org_apache_felix_https_keystore_type_hint']))
        if 'org_apache_felix_https_keystore_password' in params:
            query_params.append(('org.apache.felix.https.keystore.password', params['org_apache_felix_https_keystore_password']))
        if 'org_apache_felix_https_keystore_password_type_hint' in params:
            query_params.append(('org.apache.felix.https.keystore.password@TypeHint', params['org_apache_felix_https_keystore_password_type_hint']))
        if 'org_apache_felix_https_keystore_key' in params:
            query_params.append(('org.apache.felix.https.keystore.key', params['org_apache_felix_https_keystore_key']))
        if 'org_apache_felix_https_keystore_key_type_hint' in params:
            query_params.append(('org.apache.felix.https.keystore.key@TypeHint', params['org_apache_felix_https_keystore_key_type_hint']))
        if 'org_apache_felix_https_keystore_key_password' in params:
            query_params.append(('org.apache.felix.https.keystore.key.password', params['org_apache_felix_https_keystore_key_password']))
        if 'org_apache_felix_https_keystore_key_password_type_hint' in params:
            query_params.append(('org.apache.felix.https.keystore.key.password@TypeHint', params['org_apache_felix_https_keystore_key_password_type_hint']))
        if 'org_apache_felix_https_truststore' in params:
            query_params.append(('org.apache.felix.https.truststore', params['org_apache_felix_https_truststore']))
        if 'org_apache_felix_https_truststore_type_hint' in params:
            query_params.append(('org.apache.felix.https.truststore@TypeHint', params['org_apache_felix_https_truststore_type_hint']))
        if 'org_apache_felix_https_truststore_password' in params:
            query_params.append(('org.apache.felix.https.truststore.password', params['org_apache_felix_https_truststore_password']))
        if 'org_apache_felix_https_truststore_password_type_hint' in params:
            query_params.append(('org.apache.felix.https.truststore.password@TypeHint', params['org_apache_felix_https_truststore_password_type_hint']))
        if 'org_apache_felix_https_clientcertificate' in params:
            query_params.append(('org.apache.felix.https.clientcertificate', params['org_apache_felix_https_clientcertificate']))
        if 'org_apache_felix_https_clientcertificate_type_hint' in params:
            query_params.append(('org.apache.felix.https.clientcertificate@TypeHint', params['org_apache_felix_https_clientcertificate_type_hint']))
        if 'org_apache_felix_https_enable' in params:
            query_params.append(('org.apache.felix.https.enable', params['org_apache_felix_https_enable']))
        if 'org_apache_felix_https_enable_type_hint' in params:
            query_params.append(('org.apache.felix.https.enable@TypeHint', params['org_apache_felix_https_enable_type_hint']))
        if 'org_osgi_service_http_port_secure' in params:
            query_params.append(('org.osgi.service.http.port.secure', params['org_osgi_service_http_port_secure']))
        if 'org_osgi_service_http_port_secure_type_hint' in params:
            query_params.append(('org.osgi.service.http.port.secure@TypeHint', params['org_osgi_service_http_port_secure_type_hint']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/apps/system/config/org.apache.felix.http', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
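# Usage sketch (illustrative only; the instance name `api` and all values below
# are assumptions, not part of this generated client):
#
#     # Enable HTTPS on the author instance's Felix Jetty service, port 8443:
#     api.post_config_apache_felix_jetty_based_http_service(
#         'author', org_apache_felix_https_enable=True,
#         org_osgi_service_http_port_secure=8443)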
def post_config_apache_sling_dav_ex_servlet(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_config_apache_sling_dav_ex_servlet(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str alias:
:param str alias_type_hint:
:param bool dav_create_absolute_uri:
:param str dav_create_absolute_uri_type_hint:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_config_apache_sling_dav_ex_servlet_with_http_info(runmode, **kwargs)
def post_config_apache_sling_dav_ex_servlet_with_http_info(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_config_apache_sling_dav_ex_servlet_with_http_info(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str alias:
:param str alias_type_hint:
:param bool dav_create_absolute_uri:
:param str dav_create_absolute_uri_type_hint:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['runmode', 'alias', 'alias_type_hint', 'dav_create_absolute_uri', 'dav_create_absolute_uri_type_hint']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_config_apache_sling_dav_ex_servlet" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `post_config_apache_sling_dav_ex_servlet`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
query_params = []
if 'alias' in params:
query_params.append(('alias', params['alias']))
if 'alias_type_hint' in params:
query_params.append(('alias@TypeHint', params['alias_type_hint']))
if 'dav_create_absolute_uri' in params:
query_params.append(('dav.create-absolute-uri', params['dav_create_absolute_uri']))
if 'dav_create_absolute_uri_type_hint' in params:
query_params.append(('dav.create-absolute-uri@TypeHint', params['dav_create_absolute_uri_type_hint']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/apps/system/config/org.apache.sling.jcr.davex.impl.servlets.SlingDavExServlet', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
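# Usage sketch (illustrative only; `api` is an assumed instance of this client
# class and the values are examples):
#
#     # Expose the DavEx servlet at /crx/server, emitting absolute URIs:
#     api.post_config_apache_sling_dav_ex_servlet(
#         'author', alias='/crx/server', dav_create_absolute_uri=True)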
def post_config_apache_sling_get_servlet(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_config_apache_sling_get_servlet(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str json_maximumresults:
:param str json_maximumresults_type_hint:
:param bool enable_html:
:param str enable_html_type_hint:
:param bool enable_txt:
:param str enable_txt_type_hint:
:param bool enable_xml:
:param str enable_xml_type_hint:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_config_apache_sling_get_servlet_with_http_info(runmode, **kwargs)
def post_config_apache_sling_get_servlet_with_http_info(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_config_apache_sling_get_servlet_with_http_info(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param str json_maximumresults:
:param str json_maximumresults_type_hint:
:param bool enable_html:
:param str enable_html_type_hint:
:param bool enable_txt:
:param str enable_txt_type_hint:
:param bool enable_xml:
:param str enable_xml_type_hint:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['runmode', 'json_maximumresults', 'json_maximumresults_type_hint', 'enable_html', 'enable_html_type_hint', 'enable_txt', 'enable_txt_type_hint', 'enable_xml', 'enable_xml_type_hint']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_config_apache_sling_get_servlet" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `post_config_apache_sling_get_servlet`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
query_params = []
if 'json_maximumresults' in params:
query_params.append(('json.maximumresults', params['json_maximumresults']))
if 'json_maximumresults_type_hint' in params:
query_params.append(('json.maximumresults@TypeHint', params['json_maximumresults_type_hint']))
if 'enable_html' in params:
query_params.append(('enable.html', params['enable_html']))
if 'enable_html_type_hint' in params:
query_params.append(('enable.html@TypeHint', params['enable_html_type_hint']))
if 'enable_txt' in params:
query_params.append(('enable.txt', params['enable_txt']))
if 'enable_txt_type_hint' in params:
query_params.append(('enable.txt@TypeHint', params['enable_txt_type_hint']))
if 'enable_xml' in params:
query_params.append(('enable.xml', params['enable_xml']))
if 'enable_xml_type_hint' in params:
query_params.append(('enable.xml@TypeHint', params['enable_xml_type_hint']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/apps/system/config/org.apache.sling.servlets.get.DefaultGetServlet', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def post_config_apache_sling_referrer_filter(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_config_apache_sling_referrer_filter(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param bool allow_empty:
:param str allow_empty_type_hint:
:param str allow_hosts:
:param str allow_hosts_type_hint:
:param str allow_hosts_regexp:
:param str allow_hosts_regexp_type_hint:
:param str filter_methods:
:param str filter_methods_type_hint:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_config_apache_sling_referrer_filter_with_http_info(runmode, **kwargs)
def post_config_apache_sling_referrer_filter_with_http_info(self, runmode, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_config_apache_sling_referrer_filter_with_http_info(runmode, async=True)
>>> result = thread.get()
:param async bool
:param str runmode: (required)
:param bool allow_empty:
:param str allow_empty_type_hint:
:param str allow_hosts:
:param str allow_hosts_type_hint:
:param str allow_hosts_regexp:
:param str allow_hosts_regexp_type_hint:
:param str filter_methods:
:param str filter_methods_type_hint:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['runmode', 'allow_empty', 'allow_empty_type_hint', 'allow_hosts', 'allow_hosts_type_hint', 'allow_hosts_regexp', 'allow_hosts_regexp_type_hint', 'filter_methods', 'filter_methods_type_hint']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_config_apache_sling_referrer_filter" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'runmode' is set
if ('runmode' not in params) or (params['runmode'] is None):
raise ValueError("Missing the required parameter `runmode` when calling `post_config_apache_sling_referrer_filter`")
collection_formats = {}
path_params = {}
if 'runmode' in params:
path_params['runmode'] = params['runmode']
query_params = []
if 'allow_empty' in params:
query_params.append(('allow.empty', params['allow_empty']))
if 'allow_empty_type_hint' in params:
query_params.append(('allow.empty@TypeHint', params['allow_empty_type_hint']))
if 'allow_hosts' in params:
query_params.append(('allow.hosts', params['allow_hosts']))
if 'allow_hosts_type_hint' in params:
query_params.append(('allow.hosts@TypeHint', params['allow_hosts_type_hint']))
if 'allow_hosts_regexp' in params:
query_params.append(('allow.hosts.regexp', params['allow_hosts_regexp']))
if 'allow_hosts_regexp_type_hint' in params:
query_params.append(('allow.hosts.regexp@TypeHint', params['allow_hosts_regexp_type_hint']))
if 'filter_methods' in params:
query_params.append(('filter.methods', params['filter_methods']))
if 'filter_methods_type_hint' in params:
query_params.append(('filter.methods@TypeHint', params['filter_methods_type_hint']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/apps/system/config/org.apache.sling.security.impl.ReferrerFilter', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
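# Usage sketch (illustrative only; `api` and the host name are assumptions):
#
#     # Disallow empty referrers and allow-list a single referrer host:
#     api.post_config_apache_sling_referrer_filter(
#         'author', allow_empty=False, allow_hosts='example.com')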
def post_node(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_node(path, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str name: (required)
:param str operation:
:param str delete_authorizable:
:param file file:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_node_with_http_info(path, name, **kwargs)
def post_node_with_http_info(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_node_with_http_info(path, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str name: (required)
:param str operation:
:param str delete_authorizable:
:param file file:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'name', 'operation', 'delete_authorizable', 'file']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_node" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `post_node`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `post_node`")
collection_formats = {}
path_params = {}
if 'path' in params:
path_params['path'] = params['path']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
if 'operation' in params:
query_params.append((':operation', params['operation']))
if 'delete_authorizable' in params:
query_params.append(('deleteAuthorizable', params['delete_authorizable']))
header_params = {}
form_params = []
local_var_files = {}
if 'file' in params:
local_var_files['file'] = params['file']
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['multipart/form-data'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{path}/{name}', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
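# Usage sketch (illustrative only; `api`, the repository path, and the file
# name are assumptions):
#
#     # Upload a file as multipart/form-data to /{path}/{name}:
#     with open('package.zip', 'rb') as pkg:
#         api.post_node('etc/packages/my_packages', 'package.zip', file=pkg)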
def post_node_rw(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_node_rw(path, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str name: (required)
:param str add_members:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_node_rw_with_http_info(path, name, **kwargs)
def post_node_rw_with_http_info(self, path, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_node_rw_with_http_info(path, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str name: (required)
:param str add_members:
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'name', 'add_members']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_node_rw" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `post_node_rw`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `post_node_rw`")
collection_formats = {}
path_params = {}
if 'path' in params:
path_params['path'] = params['path']
if 'name' in params:
path_params['name'] = params['name']
query_params = []
if 'add_members' in params:
query_params.append(('addMembers', params['add_members']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{path}/{name}.rw.html', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def post_path(self, path, jcrprimary_type, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_path(path, jcrprimary_type, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str jcrprimary_type: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_path_with_http_info(path, jcrprimary_type, name, **kwargs)
def post_path_with_http_info(self, path, jcrprimary_type, name, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_path_with_http_info(path, jcrprimary_type, name, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param str jcrprimary_type: (required)
:param str name: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'jcrprimary_type', 'name']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_path" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `post_path`")
# verify the required parameter 'jcrprimary_type' is set
if ('jcrprimary_type' not in params) or (params['jcrprimary_type'] is None):
raise ValueError("Missing the required parameter `jcrprimary_type` when calling `post_path`")
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `post_path`")
collection_formats = {}
path_params = {}
if 'path' in params:
path_params['path'] = params['path']
query_params = []
if 'jcrprimary_type' in params:
query_params.append(('jcr:primaryType', params['jcrprimary_type']))
if 'name' in params:
query_params.append((':name', params['name']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/{path}/', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def post_query(self, path, p_limit, _1_property, _1_property_value, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_query(path, p_limit, _1_property, _1_property_value, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param float p_limit: (required)
:param str _1_property: (required)
:param str _1_property_value: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_query_with_http_info(path, p_limit, _1_property, _1_property_value, **kwargs)
def post_query_with_http_info(self, path, p_limit, _1_property, _1_property_value, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_query_with_http_info(path, p_limit, _1_property, _1_property_value, async=True)
>>> result = thread.get()
:param async bool
:param str path: (required)
:param float p_limit: (required)
:param str _1_property: (required)
:param str _1_property_value: (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['path', 'p_limit', '_1_property', '_1_property_value']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_query" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `post_query`")
# verify the required parameter 'p_limit' is set
if ('p_limit' not in params) or (params['p_limit'] is None):
raise ValueError("Missing the required parameter `p_limit` when calling `post_query`")
# verify the required parameter '_1_property' is set
if ('_1_property' not in params) or (params['_1_property'] is None):
raise ValueError("Missing the required parameter `_1_property` when calling `post_query`")
# verify the required parameter '_1_property_value' is set
if ('_1_property_value' not in params) or (params['_1_property_value'] is None):
raise ValueError("Missing the required parameter `_1_property_value` when calling `post_query`")
collection_formats = {}
path_params = {}
query_params = []
if 'path' in params:
query_params.append(('path', params['path']))
if 'p_limit' in params:
query_params.append(('p.limit', params['p_limit']))
if '_1_property' in params:
query_params.append(('1_property', params['_1_property']))
if '_1_property_value' in params:
query_params.append(('1_property.value', params['_1_property_value']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/bin/querybuilder.json', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
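# Usage sketch (illustrative only; `api` and the query values are assumptions):
#
#     # QueryBuilder search: up to 10 cq:Page nodes under /content. The call
#     # sends path=/content, p.limit=10, 1_property=jcr:primaryType and
#     # 1_property.value=cq:Page to /bin/querybuilder.json and returns the
#     # JSON response body as a str:
#     result = api.post_query('/content', 10, 'jcr:primaryType', 'cq:Page')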
def post_tree_activation(self, ignoredeactivated, onlymodified, path, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_tree_activation(ignoredeactivated, onlymodified, path, async=True)
>>> result = thread.get()
:param async bool
:param bool ignoredeactivated: (required)
:param bool onlymodified: (required)
:param str path: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_tree_activation_with_http_info(ignoredeactivated, onlymodified, path, **kwargs)
def post_tree_activation_with_http_info(self, ignoredeactivated, onlymodified, path, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_tree_activation_with_http_info(ignoredeactivated, onlymodified, path, async=True)
>>> result = thread.get()
:param async bool
:param bool ignoredeactivated: (required)
:param bool onlymodified: (required)
:param str path: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['ignoredeactivated', 'onlymodified', 'path']
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_tree_activation" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'ignoredeactivated' is set
if ('ignoredeactivated' not in params) or (params['ignoredeactivated'] is None):
raise ValueError("Missing the required parameter `ignoredeactivated` when calling `post_tree_activation`")
# verify the required parameter 'onlymodified' is set
if ('onlymodified' not in params) or (params['onlymodified'] is None):
raise ValueError("Missing the required parameter `onlymodified` when calling `post_tree_activation`")
# verify the required parameter 'path' is set
if ('path' not in params) or (params['path'] is None):
raise ValueError("Missing the required parameter `path` when calling `post_tree_activation`")
collection_formats = {}
path_params = {}
query_params = []
if 'ignoredeactivated' in params:
query_params.append(('ignoredeactivated', params['ignoredeactivated']))
if 'onlymodified' in params:
query_params.append(('onlymodified', params['onlymodified']))
if 'path' in params:
query_params.append(('path', params['path']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['text/plain'])
# Authentication setting
auth_settings = ['aemAuth']
return self.api_client.call_api('/etc/replication/treeactivation.html', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
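# Usage sketch (illustrative only; `api` and the content path are assumptions):
#
#     # Activate (replicate) the whole tree under /content/my-site, including
#     # previously deactivated and unmodified pages:
#     api.post_tree_activation(False, False, '/content/my-site')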
def post_truststore(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.post_truststore(async=True)
>>> result = thread.get()
:param async bool
:param str operation:
:param str new_password:
:param str re_password:
:param str key_store_type:
:param str remove_alias:
:param file certificate:
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.post_truststore_with_http_info(**kwargs)

    def post_truststore_with_http_info(self, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_truststore_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param str operation:
        :param str new_password:
        :param str re_password:
        :param str key_store_type:
        :param str remove_alias:
        :param file certificate:
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['operation', 'new_password', 're_password', 'key_store_type', 'remove_alias', 'certificate']
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_truststore" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'operation' in params:
            query_params.append((':operation', params['operation']))
        if 'new_password' in params:
            query_params.append(('newPassword', params['new_password']))
        if 're_password' in params:
            query_params.append(('rePassword', params['re_password']))
        if 'key_store_type' in params:
            query_params.append(('keyStoreType', params['key_store_type']))
        if 'remove_alias' in params:
            query_params.append(('removeAlias', params['remove_alias']))

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'certificate' in params:
            local_var_files['certificate'] = params['certificate']

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['multipart/form-data'])

        # Authentication setting
        auth_settings = ['aemAuth']

        return self.api_client.call_api('/libs/granite/security/post/truststore', 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        async=params.get('async'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def post_truststore_pkcs12(self, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_truststore_pkcs12(async=True)
        >>> result = thread.get()

        :param async bool
        :param file truststore_p12:
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.post_truststore_pkcs12_with_http_info(**kwargs)
        else:
            (data) = self.post_truststore_pkcs12_with_http_info(**kwargs)
            return data

    def post_truststore_pkcs12_with_http_info(self, **kwargs):
        """
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.post_truststore_pkcs12_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param file truststore_p12:
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['truststore_p12']
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_truststore_pkcs12" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'truststore_p12' in params:
            local_var_files['truststore.p12'] = params['truststore_p12']

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['multipart/form-data'])

        # Authentication setting
        auth_settings = ['aemAuth']

        return self.api_client.call_api('/etc/truststore', 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        async=params.get('async'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
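The generated methods above all marshal optional keyword arguments onto wire-level query parameter names (snake_case Python name to the camelCase or `:`-prefixed name the endpoint expects) only when the caller actually supplied them. A minimal standalone sketch of that pattern; `PARAM_MAP` and `build_query_params` are illustrative names, not part of the generated client:

```python
PARAM_MAP = {
    # python argument name -> wire-level query parameter name, as in post_truststore
    'operation': ':operation',
    'new_password': 'newPassword',
    're_password': 'rePassword',
    'key_store_type': 'keyStoreType',
    'remove_alias': 'removeAlias',
}

def build_query_params(params):
    """Return (wire_name, value) pairs for every recognised parameter the caller supplied."""
    return [(wire, params[py]) for py, wire in PARAM_MAP.items() if py in params]
```

Keeping the parameters as a list of pairs rather than a dict mirrors the generated code, which must preserve insertion order and allow repeated keys in the query string.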

# --- HackyEaster/he2022/level6/solution22.py, repo: tbrup/ctf-writeups, license: MIT ---
msg = '33333336333032303332333333373230333233333330323033323330333032303333333633303230333233333337323033323331333432303332333133323230333433303230333633363230333733303230333433303230333633363230333633353230333433303230333633363230333633323230333433303230333633333230333633303230333433303230333633333230333633323230333433303230333633363230333633323230333433303230333633373230333133343332323033343330323033363336323033373330323033343330323033363331323033363337323033363331323033343330323033363336323033363334323033343330323033363337323033363332323033343330323033363336323033363330323033343330323033363336323033363337323033343330323033363333323033363333323033343330323033363331323033363335323033363336323033343330323033363335323033313334333632303334333032303336333632303331333433353230333433303230333633313230333633333230333633373230333433303230333633333230333633303230333433303230333633373230333733303230333433303230333633313230333633373230333633313230333433303230333633363230333633373230333433303230333633333230333633333230333433303230333633313230333633353230333633363230333433303230333633373230333133343334'

# Layer 1: decode pairs of hex digits into ASCII characters.
msg2 = ''
for i in range(0, len(msg), 2):
    t = int(msg[i:i + 2], 16)
    msg2 += chr(t)

# Layer 2: the result is itself hex-encoded, so decode it the same way again.
msg3 = ''
for i in range(0, len(msg2), 2):
    t = int(msg2[i:i + 2], 16)
    msg3 += chr(t)

# Layer 3: space-separated octal character codes.
msg4 = ''
for s in msg3.split(' '):
    t = int(s, 8)
    msg4 += chr(t)
print(msg4)

# Layer 4: mixed radix -- skip the first token, then treat every code at a
# position divisible by 3 as octal and the rest as hex.
msg5 = ''
index = 0
for s in msg4.split(' '):
    if index > 0:
        if index % 3 == 0:
            t = int(s, 8)
        else:
            t = int(s, 16)
        msg5 += chr(t)
    index += 1
print(msg5)
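The first two layers of the script above are the same operation applied twice: decoding a string of two-character hex codes into text. A minimal sketch with a hypothetical helper `hex_pairs_to_text` (not part of the original script):

```python
def hex_pairs_to_text(s: str) -> str:
    """Decode a string of two-character hex codes into the text they spell."""
    return ''.join(chr(int(s[i:i + 2], 16)) for i in range(0, len(s), 2))

# Two rounds undo a double hex encoding:
# '34383639' -> '4869' -> 'Hi'
print(hex_pairs_to_text(hex_pairs_to_text('34383639')))
```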

# --- comaze_gym/rule_based_agents/__init__.py, repo: Near32/comaze_gym, license: MIT ---
from .action_only_rule_based_agent import ActionOnlyRuleBasedAgent, build_WrappedActionOnlyRuleBasedAgent
from .communicating_rule_based_agent import CommunicatingRuleBasedAgent, build_WrappedCommunicatingRuleBasedAgent

# --- spark_fhir_schemas/r4/resources/immunization.py, repo: imranq2/SparkFhirSchemas, license: Apache-2.0 ---
from typing import Union, List, Optional

from pyspark.sql.types import (
    StructType,
    StructField,
    StringType,
    ArrayType,
    DateType,
    BooleanType,
    DataType,
    TimestampType,
)

# This file is auto-generated by generate_schema so do not edit it manually
# noinspection PyPep8Naming
class ImmunizationSchema:
    """
    Describes the event of a patient being administered a vaccine or a record of
    an immunization as reported by a patient, a clinician or another party.
    """

    # noinspection PyDefaultArgument
    @staticmethod
    def get_schema(
        max_nesting_depth: Optional[int] = 6,
        nesting_depth: int = 0,
        nesting_list: List[str] = [],
        max_recursion_limit: Optional[int] = 2,
        include_extension: Optional[bool] = False,
        extension_fields: Optional[List[str]] = [
            "valueBoolean",
            "valueCode",
            "valueDate",
            "valueDateTime",
            "valueDecimal",
            "valueId",
            "valueInteger",
            "valuePositiveInt",
            "valueString",
            "valueTime",
            "valueUnsignedInt",
            "valueUri",
            "valueUrl",
        ],
        extension_depth: int = 0,
        max_extension_depth: Optional[int] = 2,
        include_modifierExtension: Optional[bool] = False,
    ) -> Union[StructType, DataType]:
        """
        Describes the event of a patient being administered a vaccine or a record of
        an immunization as reported by a patient, a clinician or another party.


        resourceType: This is an Immunization resource

        id: The logical id of the resource, as used in the URL for the resource. Once
            assigned, this value never changes.

        meta: The metadata about the resource. This is content that is maintained by the
            infrastructure. Changes to the content might not always be associated with
            version changes to the resource.

        implicitRules: A reference to a set of rules that were followed when the resource was
            constructed, and which must be understood when processing the content. Often,
            this is a reference to an implementation guide that defines the special rules
            along with other profiles etc.

        language: The base language in which the resource is written.

        text: A human-readable narrative that contains a summary of the resource and can be
            used to represent the content of the resource to a human. The narrative need
            not encode all the structured data, but is required to contain sufficient
            detail to make it "clinically safe" for a human to just read the narrative.
            Resource definitions may define what content should be represented in the
            narrative to ensure clinical safety.

        contained: These resources do not have an independent existence apart from the resource
            that contains them - they cannot be identified independently, and nor can they
            have their own independent transaction scope.

        extension: May be used to represent additional information that is not part of the basic
            definition of the resource. To make the use of extensions safe and manageable,
            there is a strict set of governance applied to the definition and use of
            extensions. Though any implementer can define an extension, there is a set of
            requirements that SHALL be met as part of the definition of the extension.

        modifierExtension: May be used to represent additional information that is not part of the basic
            definition of the resource and that modifies the understanding of the element
            that contains it and/or the understanding of the containing element's
            descendants. Usually modifier elements provide negation or qualification. To
            make the use of extensions safe and manageable, there is a strict set of
            governance applied to the definition and use of extensions. Though any
            implementer is allowed to define an extension, there is a set of requirements
            that SHALL be met as part of the definition of the extension. Applications
            processing a resource are required to check for modifier extensions.

            Modifier extensions SHALL NOT change the meaning of any elements on Resource
            or DomainResource (including cannot change the meaning of modifierExtension
            itself).

        identifier: A unique identifier assigned to this immunization record.

        status: Indicates the current status of the immunization event.

        statusReason: Indicates the reason the immunization event was not performed.

        vaccineCode: Vaccine that was administered or was to be administered.

        patient: The patient who either received or did not receive the immunization.

        encounter: The visit or admission or other contact between patient and health care
            provider the immunization was performed as part of.

        occurrenceDateTime: Date vaccine administered or was to be administered.

        occurrenceString: Date vaccine administered or was to be administered.

        recorded: The date the occurrence of the immunization was first captured in the record -
            potentially significantly after the occurrence of the event.

        primarySource: An indication that the content of the record is based on information from the
            person who administered the vaccine. This reflects the context under which the
            data was originally recorded.

        reportOrigin: The source of the data when the report of the immunization event is not based
            on information from the person who administered the vaccine.

        location: The service delivery location where the vaccine administration occurred.

        manufacturer: Name of vaccine manufacturer.

        lotNumber: Lot number of the vaccine product.

        expirationDate: Date vaccine batch expires.

        site: Body site where vaccine was administered.

        route: The path by which the vaccine product is taken into the body.

        doseQuantity: The quantity of vaccine product that was administered.

        performer: Indicates who performed the immunization event.

        note: Extra information about the immunization that is not conveyed by the other
            attributes.

        reasonCode: Reasons why the vaccine was administered.

        reasonReference: Condition, Observation or DiagnosticReport that supports why the immunization
            was administered.

        isSubpotent: Indication if a dose is considered to be subpotent. By default, a dose should
            be considered to be potent.

        subpotentReason: Reason why a dose is considered to be subpotent.

        education: Educational material presented to the patient (or guardian) at the time of
            vaccine administration.

        programEligibility: Indicates a patient's eligibility for a funding program.

        fundingSource: Indicates the source of the vaccine actually administered. This may be
            different than the patient eligibility (e.g. the patient may be eligible for a
            publically purchased vaccine but due to inventory issues, vaccine purchased
            with private funds was actually administered).

        reaction: Categorical data indicating that an adverse event is associated in time to an
            immunization.

        protocolApplied: The protocol (set of recommendations) being followed by the provider who
            administered the dose.

        """
        from spark_fhir_schemas.r4.simple_types.id import idSchema
        from spark_fhir_schemas.r4.complex_types.meta import MetaSchema
        from spark_fhir_schemas.r4.simple_types.uri import uriSchema
        from spark_fhir_schemas.r4.simple_types.code import codeSchema
        from spark_fhir_schemas.r4.complex_types.narrative import NarrativeSchema
        from spark_fhir_schemas.r4.complex_types.resourcelist import ResourceListSchema
        from spark_fhir_schemas.r4.complex_types.extension import ExtensionSchema
        from spark_fhir_schemas.r4.complex_types.identifier import IdentifierSchema
        from spark_fhir_schemas.r4.complex_types.codeableconcept import (
            CodeableConceptSchema,
        )
        from spark_fhir_schemas.r4.complex_types.reference import ReferenceSchema
        from spark_fhir_schemas.r4.simple_types.datetime import dateTimeSchema
        from spark_fhir_schemas.r4.complex_types.quantity import QuantitySchema
        from spark_fhir_schemas.r4.complex_types.immunization_performer import (
            Immunization_PerformerSchema,
        )
        from spark_fhir_schemas.r4.complex_types.annotation import AnnotationSchema
        from spark_fhir_schemas.r4.complex_types.immunization_education import (
            Immunization_EducationSchema,
        )
        from spark_fhir_schemas.r4.complex_types.immunization_reaction import (
            Immunization_ReactionSchema,
        )
        from spark_fhir_schemas.r4.complex_types.immunization_protocolapplied import (
            Immunization_ProtocolAppliedSchema,
        )

        if (
            max_recursion_limit
            and nesting_list.count("Immunization") >= max_recursion_limit
        ) or (max_nesting_depth and nesting_depth >= max_nesting_depth):
            return StructType([StructField("id", StringType(), True)])
        # add my name to recursion list for later
        my_nesting_list: List[str] = nesting_list + ["Immunization"]
        schema = StructType(
            [
                # This is an Immunization resource
                StructField("resourceType", StringType(), True),
                # The logical id of the resource, as used in the URL for the resource. Once
                # assigned, this value never changes.
                StructField(
                    "id",
                    idSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The metadata about the resource. This is content that is maintained by the
                # infrastructure. Changes to the content might not always be associated with
                # version changes to the resource.
                StructField(
                    "meta",
                    MetaSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # A reference to a set of rules that were followed when the resource was
                # constructed, and which must be understood when processing the content. Often,
                # this is a reference to an implementation guide that defines the special rules
                # along with other profiles etc.
                StructField(
                    "implicitRules",
                    uriSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The base language in which the resource is written.
                StructField(
                    "language",
                    codeSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # A human-readable narrative that contains a summary of the resource and can be
                # used to represent the content of the resource to a human. The narrative need
                # not encode all the structured data, but is required to contain sufficient
                # detail to make it "clinically safe" for a human to just read the narrative.
                # Resource definitions may define what content should be represented in the
                # narrative to ensure clinical safety.
                StructField(
                    "text",
                    NarrativeSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # These resources do not have an independent existence apart from the resource
                # that contains them - they cannot be identified independently, and nor can they
                # have their own independent transaction scope.
                StructField(
                    "contained",
                    ArrayType(
                        ResourceListSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # May be used to represent additional information that is not part of the basic
                # definition of the resource. To make the use of extensions safe and manageable,
                # there is a strict set of governance applied to the definition and use of
                # extensions. Though any implementer can define an extension, there is a set of
                # requirements that SHALL be met as part of the definition of the extension.
                StructField(
                    "extension",
                    ArrayType(
                        ExtensionSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # May be used to represent additional information that is not part of the basic
                # definition of the resource and that modifies the understanding of the element
                # that contains it and/or the understanding of the containing element's
                # descendants. Usually modifier elements provide negation or qualification. To
                # make the use of extensions safe and manageable, there is a strict set of
                # governance applied to the definition and use of extensions. Though any
                # implementer is allowed to define an extension, there is a set of requirements
                # that SHALL be met as part of the definition of the extension. Applications
                # processing a resource are required to check for modifier extensions.
                #
                # Modifier extensions SHALL NOT change the meaning of any elements on Resource
                # or DomainResource (including cannot change the meaning of modifierExtension
                # itself).
                StructField(
                    "modifierExtension",
                    ArrayType(
                        ExtensionSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # A unique identifier assigned to this immunization record.
                StructField(
                    "identifier",
                    ArrayType(
                        IdentifierSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Indicates the current status of the immunization event.
                StructField(
                    "status",
                    codeSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # Indicates the reason the immunization event was not performed.
                StructField(
                    "statusReason",
                    CodeableConceptSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # Vaccine that was administered or was to be administered.
                StructField(
                    "vaccineCode",
                    CodeableConceptSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The patient who either received or did not receive the immunization.
                StructField(
                    "patient",
                    ReferenceSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The visit or admission or other contact between patient and health care
                # provider the immunization was performed as part of.
                StructField(
                    "encounter",
                    ReferenceSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # Date vaccine administered or was to be administered.
                StructField("occurrenceDateTime", TimestampType(), True),
                # Date vaccine administered or was to be administered.
                StructField("occurrenceString", StringType(), True),
                # The date the occurrence of the immunization was first captured in the record -
                # potentially significantly after the occurrence of the event.
                StructField(
                    "recorded",
                    dateTimeSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # An indication that the content of the record is based on information from the
                # person who administered the vaccine. This reflects the context under which the
                # data was originally recorded.
                StructField("primarySource", BooleanType(), True),
                # The source of the data when the report of the immunization event is not based
                # on information from the person who administered the vaccine.
                StructField(
                    "reportOrigin",
                    CodeableConceptSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The service delivery location where the vaccine administration occurred.
                StructField(
                    "location",
                    ReferenceSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # Name of vaccine manufacturer.
                StructField(
                    "manufacturer",
                    ReferenceSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # Lot number of the vaccine product.
                StructField("lotNumber", StringType(), True),
                # Date vaccine batch expires.
                StructField("expirationDate", DateType(), True),
                # Body site where vaccine was administered.
                StructField(
                    "site",
                    CodeableConceptSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The path by which the vaccine product is taken into the body.
                StructField(
                    "route",
                    CodeableConceptSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # The quantity of vaccine product that was administered.
                StructField(
                    "doseQuantity",
                    QuantitySchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                    ),
                    True,
                ),
                # Indicates who performed the immunization event.
                StructField(
                    "performer",
                    ArrayType(
                        Immunization_PerformerSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Extra information about the immunization that is not conveyed by the other
                # attributes.
                StructField(
                    "note",
                    ArrayType(
                        AnnotationSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Reasons why the vaccine was administered.
                StructField(
                    "reasonCode",
                    ArrayType(
                        CodeableConceptSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Condition, Observation or DiagnosticReport that supports why the immunization
                # was administered.
                StructField(
                    "reasonReference",
                    ArrayType(
                        ReferenceSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Indication if a dose is considered to be subpotent. By default, a dose should
                # be considered to be potent.
                StructField("isSubpotent", BooleanType(), True),
                # Reason why a dose is considered to be subpotent.
                StructField(
                    "subpotentReason",
                    ArrayType(
                        CodeableConceptSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Educational material presented to the patient (or guardian) at the time of
                # vaccine administration.
                StructField(
                    "education",
                    ArrayType(
                        Immunization_EducationSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                        )
                    ),
                    True,
                ),
                # Indicates a patient's eligibility for a funding program.
                StructField(
                    "programEligibility",
                    ArrayType(
                        CodeableConceptSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
)
),
True,
),
        # Indicates the source of the vaccine actually administered. This may be
        # different from the patient eligibility (e.g. the patient may be eligible
        # for a publicly purchased vaccine but, due to inventory issues, a vaccine
        # purchased with private funds was actually administered).
StructField(
"fundingSource",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
),
True,
),
# Categorical data indicating that an adverse event is associated in time to an
# immunization.
StructField(
"reaction",
ArrayType(
Immunization_ReactionSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
# The protocol (set of recommendations) being followed by the provider who
# administered the dose.
StructField(
"protocolApplied",
ArrayType(
Immunization_ProtocolAppliedSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
)
),
True,
),
]
)
if not include_extension:
schema.fields = [
c
if c.name != "extension"
else StructField("extension", StringType(), True)
for c in schema.fields
]
if not include_modifierExtension:
schema.fields = [
c
if c.name != "modifierExtension"
else StructField("modifierExtension", StringType(), True)
for c in schema.fields
]
return schema
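The post-processing at the end of `get_schema` swaps the nested extension structs for plain strings via a list comprehension. The substitution pattern itself can be shown with stand-in `(name, type)` tuples instead of Spark `StructField`s (a standalone sketch; the field data is illustrative, not taken from the real schema):

```python
# Stand-ins for Spark StructFields: (name, type) pairs are enough to show the
# replacement pattern used when include_extension is False.
fields = [("doseQuantity", "Quantity"),
          ("extension", "Extension"),
          ("modifierExtension", "Extension")]

include_extension = False
if not include_extension:
    # replace the deep "extension" struct with a flat string column
    fields = [f if f[0] != "extension" else ("extension", "string")
              for f in fields]

print(fields[1])  # ('extension', 'string')
```

The same comprehension is then repeated for `modifierExtension`, which is why the two blocks in the source look almost identical.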
# Source: networks.py, sjliu68/Remote-Sensing-Image-Classification (MIT License)
from __future__ import print_function
import keras
from keras.models import Model
from keras.layers import concatenate, Dense, Dropout, Flatten, Add, SpatialDropout2D, Conv3D
from keras.layers import Conv2D, MaxPooling2D, Input, Activation,AveragePooling2D,BatchNormalization
from keras.layers import MaxPooling3D, AveragePooling3D
from keras import backend as K
from keras import regularizers
from keras import initializers
from keras.initializers import he_normal, RandomNormal
from keras.layers import multiply, GlobalAveragePooling2D, GlobalAveragePooling3D
from keras.layers import Reshape
def DCCNN(band, imx, ncla1):
input1 = Input(shape=(imx,imx,band))
# define network
conv01 = Conv2D(128,kernel_size=(1,1),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv02 = Conv2D(128,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv03 = Conv2D(128,kernel_size=(5,5),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn1 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
bn2 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv0 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv11 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv12 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv21 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv22 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv31 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv32 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv33 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc1 = Dense(ncla1,activation='softmax',name='output1',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
# begin
x1 = conv01(input1)
x2 = conv02(input1)
x3 = conv03(input1)
x1 = MaxPooling2D(pool_size=(5,5))(x1)
x2 = MaxPooling2D(pool_size=(3,3))(x2)
x1 = concatenate([x1,x2,x3],axis=-1)
x1 = Activation('relu')(x1)
x1 = bn1(x1)
x1 = conv0(x1)
x11 = Activation('relu')(x1)
x11 = bn2(x11)
x11 = conv11(x11)
x11 = Activation('relu')(x11)
x11 = conv12(x11)
x1 = Add()([x1,x11])
x11 = Activation('relu')(x1)
x11 = conv21(x11)
x11 = Activation('relu')(x11)
x11 = conv22(x11)
x1 = Add()([x1,x11])
x1 = Activation('relu')(x1)
x1 = conv31(x1)
x1 = Activation('relu')(x1)
x1 = Dropout(0.5)(x1)
x1 = conv32(x1)
x1 = Activation('relu')(x1)
x1 = Dropout(0.5)(x1)
x1 = conv33(x1)
x1 = Flatten()(x1)
pre1 = fc1(x1)
model1 = Model(inputs=input1, outputs=pre1)
return model1
def DBMA(band, ncla1):
input1 = Input(shape=(7,7,band,1))
## spectral branch
conv11 = Conv3D(24,kernel_size=(1,1,7),strides=(1,1,2),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn12 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv12 = Conv3D(24,kernel_size=(1,1,7),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn13 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv13 = Conv3D(24,kernel_size=(1,1,7),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn14 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv14 = Conv3D(24,kernel_size=(1,1,7),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn15 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv15 = Conv3D(60,kernel_size=(1,1,4),strides=(1,1,1),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc11 = Dense(30,activation=None,kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc12 = Dense(60,activation=None,kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
## spatial branch
conv21 = Conv3D(24,kernel_size=(1,1,band),strides=(1,1,1),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn22 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv22 = Conv3D(12,kernel_size=(3,3,1),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn23 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv23 = Conv3D(12,kernel_size=(3,3,1),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn24 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv24 = Conv3D(12,kernel_size=(3,3,1),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn25 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv25 = Conv3D(24,kernel_size=(3,3,1),strides=(1,1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv26 = Conv3D(1,activation=None,kernel_size=(3,3,2),strides=(1,1,2),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc = Dense(ncla1,activation='softmax',name='output1',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
# spectral
x1 = conv11(input1)
x11 = bn12(x1)
x11 = Activation('relu')(x11)
x11 = conv12(x11)
x12 = concatenate([x1,x11],axis=-1)
x12 = bn13(x12)
x12 = Activation('relu')(x12)
x12 = conv13(x12)
x13 = concatenate([x1,x11,x12],axis=-1)
x13 = bn14(x13)
x13 = Activation('relu')(x13)
x13 = conv14(x13)
x14 = concatenate([x1,x11,x12,x13],axis=-1)
x14 = bn15(x14)
x14 = Activation('relu')(x14)
x14 = conv15(x14)
x1_max = MaxPooling3D(pool_size=(7,7,1))(x14)
x1_avg = AveragePooling3D(pool_size=(7,7,1))(x14)
x1_max = fc11(x1_max)
x1_max = fc12(x1_max)
x1_avg = fc11(x1_avg)
x1_avg = fc12(x1_avg)
x1 = Add()([x1_max,x1_avg])
x1 = Activation('sigmoid')(x1)
x1 = multiply([x1,x14])
x1 = GlobalAveragePooling3D()(x1)
# spatial
x2 = conv21(input1)
x21 = bn22(x2)
x21 = Activation('relu')(x21)
x21 = conv22(x21)
x22 = concatenate([x2,x21],axis=-1)
x22 = bn23(x22)
x22 = Activation('relu')(x22)
x22 = conv23(x22)
x23 = concatenate([x2,x21,x22],axis=-1)
x23 = bn24(x23)
x23 = Activation('relu')(x23)
x23 = conv24(x23)
x24 = concatenate([x2,x21,x22,x23],axis=-1)
x24 = Reshape(target_shape=(7,7,60,1))(x24)
x2_max = MaxPooling3D(pool_size=(1,1,60))(x24)
x2_avg = AveragePooling3D(pool_size=(1,1,60))(x24)
x2_max = Reshape(target_shape=(7,7,1))(x2_max)
x2_avg = Reshape(target_shape=(7,7,1))(x2_avg)
x25 = concatenate([x2_max,x2_avg],axis=-1)
x25 = Reshape(target_shape=(7,7,2,1))(x25)
x25 = conv26(x25)
x25 = Activation('sigmoid')(x25)
x2 = multiply([x24,x25])
x2 = Reshape(target_shape=(7,7,1,60))(x2)
x2 = GlobalAveragePooling3D()(x2)
x = concatenate([x1,x2],axis=-1)
pre = fc(x)
model = Model(inputs=input1, outputs=pre)
return model
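The attention gates in DBMA follow the CBAM recipe: globally max- and average-pooled descriptors pass through a shared two-layer MLP, are summed, and squashed by a sigmoid before rescaling the feature map channel-wise. A minimal NumPy sketch of the channel gate (standalone and illustrative; the weights and shapes are made up, not the trained Keras layers above):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel gate: shared MLP over max- and avg-pooled features.

    feat: (H, W, C) feature map; w1: (C, C//2), w2: (C//2, C) shared weights.
    """
    f_max = feat.max(axis=(0, 1))    # (C,) global max pool
    f_avg = feat.mean(axis=(0, 1))   # (C,) global average pool
    mlp = lambda v: (v @ w1) @ w2    # shared two-layer MLP (no bias, no ReLU)
    gate = 1.0 / (1.0 + np.exp(-(mlp(f_max) + mlp(f_avg))))  # sigmoid in (0, 1)
    return feat * gate               # broadcast the per-channel gate over H, W

rng = np.random.default_rng(0)
feat = rng.standard_normal((7, 7, 4))
w1 = rng.standard_normal((4, 2))
w2 = rng.standard_normal((2, 4))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (7, 7, 4)
```

In the Keras code this corresponds to `fc11`/`fc12` applied to both pooled branches, `Add`, `Activation('sigmoid')`, and the final `multiply`.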
def resnet99_avg_se(band, imx, ncla1, l=1):
input1 = Input(shape=(imx,imx,band))
# define network
conv0x = Conv2D(32,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv0 = Conv2D(32,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn11 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv11 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv12 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc11 = Dense(4,activation=None,kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc12 = Dense(64,activation=None,kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn21 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv21 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv22 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc21 = Dense(4,activation=None,kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc22 = Dense(64,activation=None,kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc1 = Dense(ncla1,activation='softmax',name='output1',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
# x1
x1 = conv0(input1)
x1x = conv0x(input1)
# x1 = MaxPooling2D(pool_size=(2,2))(x1)
# x1x = MaxPooling2D(pool_size=(2,2))(x1x)
x1 = concatenate([x1,x1x],axis=-1)
x11 = bn11(x1)
x11 = Activation('relu')(x11)
x11 = conv11(x11)
x11 = Activation('relu')(x11)
x11 = conv12(x11)
x12 = GlobalAveragePooling2D()(x11)
x12 = fc11(x12)
x12 = fc12(x12)
x12 = Activation('sigmoid')(x12)
x11 = multiply([x11,x12])
x1 = Add()([x1,x11])
if l==2:
x11 = bn21(x1)
x11 = Activation('relu')(x11)
x11 = conv21(x11)
x11 = Activation('relu')(x11)
x11 = conv22(x11)
x12 = GlobalAveragePooling2D()(x11)
x12 = fc11(x12)
x12 = fc12(x12)
x12 = Activation('sigmoid')(x12)
x11 = multiply([x11,x12])
x1 = Add()([x1,x11])
x1 = GlobalAveragePooling2D()(x1)
# x1 = Flatten()(x1)
pre1 = fc1(x1)
model1 = Model(inputs=input1, outputs=pre1)
return model1
def resnet99_avg(band, imx, ncla1, l=1):
input1 = Input(shape=(imx,imx,band))
# define network
conv0x = Conv2D(32,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv0 = Conv2D(32,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn11 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv11 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv12 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn21 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv21 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv22 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc1 = Dense(ncla1,activation='softmax',name='output1',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
# x1
x1 = conv0(input1)
x1x = conv0x(input1)
# x1 = MaxPooling2D(pool_size=(2,2))(x1)
# x1x = MaxPooling2D(pool_size=(2,2))(x1x)
x1 = concatenate([x1,x1x],axis=-1)
x11 = bn11(x1)
x11 = Activation('relu')(x11)
x11 = conv11(x11)
x11 = Activation('relu')(x11)
x11 = conv12(x11)
x1 = Add()([x1,x11])
if l==2:
x11 = bn21(x1)
x11 = Activation('relu')(x11)
x11 = conv21(x11)
x11 = Activation('relu')(x11)
x11 = conv22(x11)
x1 = Add()([x1,x11])
x1 = GlobalAveragePooling2D()(x1)
# x1 = Flatten()(x1)
pre1 = fc1(x1)
model1 = Model(inputs=input1, outputs=pre1)
return model1
def resnet99(band, ncla1):
input1 = Input(shape=(9,9,band))
# define network
conv0x = Conv2D(32,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv0 = Conv2D(32,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn11 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv11 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv12 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn21 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv21 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv22 = Conv2D(64,kernel_size=(3,3),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
fc1 = Dense(ncla1,activation='softmax',name='output1',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
# x1
x1 = conv0(input1)
x1x = conv0x(input1)
# x1 = MaxPooling2D(pool_size=(2,2))(x1)
# x1x = MaxPooling2D(pool_size=(2,2))(x1x)
x1 = concatenate([x1,x1x],axis=-1)
x11 = bn11(x1)
x11 = Activation('relu')(x11)
x11 = conv11(x11)
x11 = Activation('relu')(x11)
x11 = conv12(x11)
x1 = Add()([x1,x11])
# x11 = bn21(x1)
# x11 = Activation('relu')(x11)
# x11 = conv21(x11)
# x11 = Activation('relu')(x11)
# x11 = conv22(x11)
# x1 = Add()([x1,x11])
x1 = Flatten()(x1)
pre1 = fc1(x1)
model1 = Model(inputs=input1, outputs=pre1)
return model1
def wcrn3D(band, ncla1):
    # 3-D variant of WCRN. The original code passed 3-tuple kernel sizes to
    # Conv2D, which raises an error at build time; here the input carries an
    # explicit channel axis and Conv3D/MaxPooling3D handle the spectral
    # dimension (requires band >= 7 for the 1x1x7 spectral kernel).
    input1 = Input(shape=(5,5,band,1))

    # define network
    conv0x = Conv3D(64,kernel_size=(1,1,7),padding='valid',
                    kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
    conv0 = Conv3D(64,kernel_size=(3,3,1),padding='valid',
                   kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
    bn11 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
                             beta_initializer='zeros',gamma_initializer='ones',
                             moving_mean_initializer='zeros',
                             moving_variance_initializer='ones')
    conv11 = Conv3D(128,kernel_size=(1,1,1),padding='same',
                    kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
    conv12 = Conv3D(128,kernel_size=(1,1,1),padding='same',
                    kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
    fc1 = Dense(ncla1,activation='softmax',name='output1',
                kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))

    # x1
    x1 = conv0(input1)    # (3,3,band,64)
    x1x = conv0x(input1)  # (5,5,band-6,64)
    # pool both branches down to (1,1,1,.) so they can be concatenated
    x1 = MaxPooling3D(pool_size=(3,3,band))(x1)
    x1x = MaxPooling3D(pool_size=(5,5,band-6))(x1x)
    x1 = concatenate([x1,x1x],axis=-1)
    x11 = bn11(x1)
    x11 = Activation('relu')(x11)
    x11 = conv11(x11)
    x11 = Activation('relu')(x11)
    x11 = conv12(x11)
    x1 = Add()([x1,x11])
    x1 = Flatten()(x1)
    pre1 = fc1(x1)

    model1 = Model(inputs=input1, outputs=pre1)
    return model1
def wcrn(band, ncla1):
input1 = Input(shape=(5,5,band))
# define network
conv0x = Conv2D(64,kernel_size=(1,1),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv0 = Conv2D(64,kernel_size=(3,3),padding='valid',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
bn11 = BatchNormalization(axis=-1,momentum=0.9,epsilon=0.001,center=True,scale=True,
beta_initializer='zeros',gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones')
conv11 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
conv12 = Conv2D(128,kernel_size=(1,1),padding='same',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
#
fc1 = Dense(ncla1,activation='softmax',name='output1',
kernel_initializer=RandomNormal(mean=0.0, stddev=0.01))
# x1
x1 = conv0(input1)
x1x = conv0x(input1)
x1 = MaxPooling2D(pool_size=(3,3))(x1)
x1x = MaxPooling2D(pool_size=(5,5))(x1x)
x1 = concatenate([x1,x1x],axis=-1)
x11 = bn11(x1)
x11 = Activation('relu')(x11)
x11 = conv11(x11)
x11 = Activation('relu')(x11)
x11 = conv12(x11)
x1 = Add()([x1,x11])
x1 = Flatten()(x1)
pre1 = fc1(x1)
model1 = Model(inputs=input1, outputs=pre1)
return model1
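The two branches of WCRN only align because of a small piece of shape arithmetic: a 'valid' 3x3 convolution on the 5x5 patch leaves 3x3, which a 3x3 max-pool reduces to 1x1, while the 1x1 branch stays 5x5 and is pooled by 5x5. A quick standalone check in plain Python (no Keras required; the helper names are illustrative):

```python
def valid_conv_out(size, kernel):
    """Output side length of a 'valid' (no-padding, stride-1) convolution."""
    return size - kernel + 1

def pool_out(size, pool):
    """Output side length of a non-overlapping max pool."""
    return size // pool

main_branch = pool_out(valid_conv_out(5, 3), 3)   # 3x3 conv, then 3x3 pool
skip_branch = pool_out(valid_conv_out(5, 1), 5)   # 1x1 conv, then 5x5 pool
print(main_branch, skip_branch)  # 1 1 -> both branches reach 1x1, so they concatenate
```

The same arithmetic explains why the residual `Add` works afterwards: both tensors are 1x1 with 128 channels.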
# Source: hbaselines/envs/efficient_hrl/envs.py, brandontrabucco/h-baselines (MIT License)
"""Contextual representation of AntMaze, AntPush, and AntFall."""
import numpy as np
import random
from gym.spaces import Box
from hbaselines.utils.reward_fns import negative_distance
from hbaselines.envs.efficient_hrl.ant_maze_env import AntMazeEnv
from hbaselines.envs.efficient_hrl.humanoid_maze_env import HumanoidMazeEnv
# scale to the contextual reward. Does not affect the environmental reward.
REWARD_SCALE = 0.1
# threshold after which the agent is considered to have reached its target
DISTANCE_THRESHOLD = 5
class UniversalAntMazeEnv(AntMazeEnv):
"""Universal environment variant of AntMazeEnv.
This environment extends the generic gym environment by including contexts,
or goals. The goals are added to the observation, and an additional
contextual reward is included to the generic rewards.
"""
def __init__(self,
maze_id,
contextual_reward,
use_contexts=False,
random_contexts=False,
context_range=None,
maze_size_scaling=8,
top_down_view=False,
image_size=32,
horizon=500,
ant_fall=False,
evaluate=False,
num_levels=1):
"""Initialize the Universal environment.
Parameters
----------
maze_id : str
the type of maze environment. One of "Maze", "Push", or "Fall"
contextual_reward : function
a reward function that takes as input (states, goals, next_states)
and returns a float reward and whether the goal has been achieved
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
one of the following three:
1. the desired context / goal
2. the (lower, upper) bound tuple for each dimension of the goal
3. a list of desired contexts / goals. Goals are sampled from these
list of possible goals
top_down_view : bool
specifies whether the observation should have an image prepended
useful for training convolutional policies
image_size : int
determines the width and height of the rendered image
horizon : float, optional
time horizon
ant_fall : bool
specifies whether you are using the AntFall environment. The agent
in this environment is placed on a block of height 4; the "dying"
conditions for the agent need to be accordingly offset.
evaluate : bool
whether to run an evaluation. In this case an additional goal agent
is placed in the environment for visualization purposes.
num_levels : int
number of levels in the policy. 1 refers to non-hierarchical models
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
# Initialize the maze variant of the environment.
super(UniversalAntMazeEnv, self).__init__(
maze_id=maze_id,
maze_height=0.5,
maze_size_scaling=maze_size_scaling,
n_bins=0,
sensor_range=3.,
sensor_span=2 * np.pi,
observe_blocks=False,
put_spin_near_agent=False,
top_down_view=top_down_view,
image_size=image_size,
manual_collision=False,
ant_fall=ant_fall,
evaluate=evaluate,
num_levels=num_levels,
)
self.horizon = horizon
self.step_number = 0
# contextual variables
self.use_contexts = use_contexts
self.random_contexts = random_contexts
self.context_range = context_range
self.contextual_reward = contextual_reward
self.current_context = None
# a hack to deal with previous observations in the reward
self.prev_obs = None
# Check that context_range is the right form based on whether contexts
# are a single value or random across a range.
if self.use_contexts:
if self.random_contexts:
assert all(isinstance(i, tuple) for i in self.context_range), \
"When using random contexts, every element in " \
"context_range, must be a tuple of (min,max) values."
else:
assert all(not isinstance(i, tuple) for i in
self.context_range), \
"When not using random contexts, every element in " \
"context_range, must be a single value or a list of " \
"values."
@property
def context_space(self):
"""Return the shape and bounds of the contextual term."""
# Check if the environment is using contexts, and if not, return a None
# value as the context space.
if self.use_contexts:
# If the context space is random, use the min and max values of
# each context to specify the space range. Otherwise, the min and
# max values are both the deterministic context value.
if self.random_contexts:
context_low = []
context_high = []
for context_i in self.context_range:
low, high = context_i
context_low.append(low)
context_high.append(high)
return Box(low=np.asarray(context_low),
high=np.asarray(context_high),
dtype=np.float32)
else:
# If there are a list of possible goals, use the min and max
# values of each index for the context space.
if isinstance(self.context_range[0], list):
min_val = []
max_val = []
for i in range(len(self.context_range[0])):
min_val.append(min(v[i] for v in self.context_range))
max_val.append(max(v[i] for v in self.context_range))
return Box(low=np.array(min_val), high=np.array(max_val))
else:
# Use the original context as the context space. It is a
# fixed value in this case.
return Box(low=np.asarray(self.context_range),
high=np.asarray(self.context_range),
dtype=np.float32)
else:
return None
def step(self, action):
"""Advance the environment by one simulation step.
If the environment is using the contextual setting, an "is_success"
term is added to the info_dict to specify whether the objective has
been met.
Parameters
----------
action : array_like
actions to be performed by the agent
Returns
-------
array_like
next observation
float
environmental reward
bool
done mask
dict
extra information dictionary
"""
# Run environment update.
obs, rew, done, _ = super(UniversalAntMazeEnv, self).step(action)
info = {}
if self.use_contexts:
# Add success to the info dict
dist = self.contextual_reward(
states=self.prev_obs,
next_states=obs,
goals=self.current_context,
)
info["goal_distance"] = dist / REWARD_SCALE
info["is_success"] = abs(dist) < DISTANCE_THRESHOLD * REWARD_SCALE
# Replace the reward with the contextual reward.
rew = dist
# Check if the time horizon has been met.
self.step_number += 1
done = done or self.step_number == self.horizon
return obs, rew, done, info
def reset(self):
"""Reset the environment.
If the environment is using the contextual setting, a new context is
issued.
Returns
-------
array_like
initial observation
"""
try:
self.prev_obs = super(UniversalAntMazeEnv, self).reset()
except NotImplementedError:
# for testing purposes
self.prev_obs = np.empty(1)
# Reset the step counter.
self.step_number = 0
if self.use_contexts:
if not self.random_contexts:
if isinstance(self.context_range[0], list):
# In this case, sample on of the contexts as the next
# environmental context.
self.current_context = random.sample(self.context_range, 1)
self.current_context = self.current_context[0]
else:
# In this case, the context range is just the context.
self.current_context = self.context_range
else:
# In this case, choose random values between the context range.
self.current_context = []
for range_i in self.context_range:
minval, maxval = range_i
self.current_context.append(random.uniform(minval, maxval))
# Convert to numpy array.
self.current_context = np.asarray(self.current_context)
return self.prev_obs
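The three accepted forms of `context_range` handled in `reset` above (a fixed goal, a list of candidate goals, or per-dimension `(min, max)` bounds) can be mirrored by a standalone helper. This is not part of the library; it only makes the branching explicit:

```python
import random

import numpy as np


def sample_context(context_range, random_contexts):
    """Sketch of the context-sampling branches in UniversalAntMazeEnv.reset."""
    if not random_contexts:
        if isinstance(context_range[0], list):
            # a list of candidate goals: pick one uniformly at random
            return np.asarray(random.sample(context_range, 1)[0])
        # a single fixed goal: use it as-is
        return np.asarray(context_range)
    # per-dimension (min, max) bounds: sample each dimension uniformly
    return np.asarray(
        [random.uniform(lo, hi) for (lo, hi) in context_range])
```

For example, `sample_context([(0.0, 1.0), (2.0, 3.0)], True)` yields a 2-D goal whose first coordinate lies in [0, 1] and second in [2, 3].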
class UniversalHumanoidMazeEnv(HumanoidMazeEnv):
"""Universal environment variant of HumanoidMazeEnv.
This environment extends the generic gym environment by including contexts,
or goals. The goals are added to the observation, and an additional
contextual reward is included to the generic rewards.
"""
def __init__(self,
maze_id,
contextual_reward,
use_contexts=False,
random_contexts=False,
context_range=None,
maze_size_scaling=4,
top_down_view=False,
image_size=32,
horizon=1000):
"""Initialize the Universal environment.
Parameters
----------
maze_id : str
the type of maze environment. One of "Maze", "Push", or "Fall"
contextual_reward : function
a reward function that takes as input (states, goals, next_states)
and returns a float reward and whether the goal has been achieved
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
one of the following three:
1. the desired context / goal
2. the (lower, upper) bound tuple for each dimension of the goal
3. a list of desired contexts / goals. Goals are sampled from these
list of possible goals
top_down_view : bool
specifies whether the observation should have an image prepended
useful for training convolutional policies
        image_size : int
determines the width and height of the rendered image
horizon : float, optional
time horizon
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
# Initialize the maze variant of the environment.
super(UniversalHumanoidMazeEnv, self).__init__(
maze_id=maze_id,
maze_height=0.5,
maze_size_scaling=maze_size_scaling,
n_bins=0,
sensor_range=3.,
sensor_span=2 * np.pi,
observe_blocks=False,
put_spin_near_agent=False,
top_down_view=top_down_view,
image_size=image_size,
manual_collision=False,
)
self.horizon = horizon
self.step_number = 0
# contextual variables
self.use_contexts = use_contexts
self.random_contexts = random_contexts
self.context_range = context_range
self.contextual_reward = contextual_reward
self.current_context = None
# a hack to deal with previous observations in the reward
self.prev_obs = None
# Check that context_range is the right form based on whether contexts
# are a single value or random across a range.
if self.use_contexts:
if self.random_contexts:
assert all(isinstance(i, tuple) for i in self.context_range), \
"When using random contexts, every element in " \
"context_range, must be a tuple of (min,max) values."
else:
assert all(not isinstance(i, tuple) for i in
self.context_range), \
"When not using random contexts, every element in " \
"context_range, must be a single value or a list of " \
"values."
@property
def context_space(self):
"""Return the shape and bounds of the contextual term."""
# Check if the environment is using contexts, and if not, return a None
# value as the context space.
if self.use_contexts:
# If the context space is random, use the min and max values of
# each context to specify the space range. Otherwise, the min and
# max values are both the deterministic context value.
if self.random_contexts:
context_low = []
context_high = []
for context_i in self.context_range:
low, high = context_i
context_low.append(low)
context_high.append(high)
return Box(low=np.asarray(context_low),
high=np.asarray(context_high),
dtype=np.float32)
else:
# If there are a list of possible goals, use the min and max
# values of each index for the context space.
if isinstance(self.context_range[0], list):
min_val = []
max_val = []
for i in range(len(self.context_range[0])):
min_val.append(min(v[i] for v in self.context_range))
max_val.append(max(v[i] for v in self.context_range))
return Box(low=np.array(min_val),
high=np.array(max_val),
dtype=np.float32)
else:
# Use the original context as the context space. It is a
# fixed value in this case.
return Box(low=np.asarray(self.context_range),
high=np.asarray(self.context_range),
dtype=np.float32)
else:
return None
def step(self, action):
"""Advance the environment by one simulation step.
If the environment is using the contextual setting, an "is_success"
term is added to the info_dict to specify whether the objective has
been met.
Parameters
----------
action : array_like
actions to be performed by the agent
Returns
-------
array_like
next observation
float
environmental reward
bool
done mask
dict
extra information dictionary
"""
# Run environment update.
obs, rew, done, info = super(UniversalHumanoidMazeEnv, self).step(
action)
if self.use_contexts:
# Replace the reward with the contextual reward.
rew = self.contextual_reward(
states=self.prev_obs,
next_states=obs,
goals=self.current_context,
)
# Add success to the info dict. The humanoid reward is exp-scaled
# (rew = exp(-dist / 7.2)), so invert it to recover the distance to
# the goal (up to sign, which abs() below discards).
dist = 7.2 * np.log(rew)
info["is_success"] = abs(dist) < DISTANCE_THRESHOLD
# Check if the time horizon has been met.
self.step_number += 1
done = done or self.step_number == self.horizon
return obs, rew, done, info
def reset(self):
"""Reset the environment.
If the environment is using the contextual setting, a new context is
issued.
Returns
-------
array_like
initial observation
"""
try:
self.prev_obs = super(UniversalHumanoidMazeEnv, self).reset()
except (NotImplementedError, AttributeError):
# for testing purposes
self.prev_obs = np.empty(1)
# Reset the step counter.
self.step_number = 0
if self.use_contexts:
if not self.random_contexts:
if isinstance(self.context_range[0], list):
# In this case, sample one of the contexts as the next
# environmental context.
self.current_context = random.sample(self.context_range, 1)
self.current_context = self.current_context[0]
else:
# In this case, the context range is just the context.
self.current_context = self.context_range
else:
# In this case, choose random values between the context range.
self.current_context = []
for range_i in self.context_range:
minval, maxval = range_i
self.current_context.append(random.uniform(minval, maxval))
# Convert to numpy array.
self.current_context = np.asarray(self.current_context)
return self.prev_obs
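The reset logic above dispatches on the three accepted forms of `context_range`: a fixed goal, one `(min, max)` tuple per goal dimension, or a list of candidate goals. A minimal standalone sketch of that dispatch (the helper name `sample_context` is ours, not part of the environment API):

```python
import random

def sample_context(context_range, random_contexts):
    """Mirror the reset() dispatch: pick the next environmental context."""
    if random_contexts:
        # Form 2: one (min, max) tuple per goal dimension.
        return [random.uniform(lo, hi) for lo, hi in context_range]
    if isinstance(context_range[0], list):
        # Form 3: sample one goal from a list of candidate goals.
        return random.sample(context_range, 1)[0]
    # Form 1: the range *is* the (fixed) goal.
    return context_range

fixed = sample_context([16.0, 0.0], random_contexts=False)
picked = sample_context([[16.0, 0.0], [16.0, 16.0]], random_contexts=False)
sampled = sample_context([(-4.0, 20.0), (-4.0, 20.0)], random_contexts=True)
```

The same three forms are what the `context_space` property and the constructor assertions distinguish.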
class AntMaze(UniversalAntMazeEnv):
"""Ant Maze Environment.
In this task, immovable blocks are placed to confine the agent to a
U-shaped corridor. That is, blocks are placed everywhere except at (0,0),
(8,0), (16,0), (16,8), (16,16), (8,16), and (0,16). The agent is
initialized at position (0,0) and tasked at reaching a specific target
position. "Success" in this environment is defined as being within an L2
distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None,
evaluate=False,
num_levels=1):
"""Initialize the Ant Maze environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
evaluate : bool
whether to run an evaluation. In this case an additional goal agent
is placed in the environment for visualization purposes.
num_levels : int
number of levels in the policy. 1 refers to non-hierarchical models
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Maze"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(AntMaze, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
top_down_view=False,
evaluate=evaluate,
num_levels=num_levels,
)
class HumanoidMaze(UniversalHumanoidMazeEnv):
"""Humanoid Maze Environment.
In this task, immovable blocks are placed to confine the agent to a
U-shaped corridor. That is, blocks are placed everywhere except at (0,0),
(4,0), (8,0), (8,4), (8,8), (4,8), and (0,8). The agent is
initialized at position (0,0) and tasked at reaching a specific target
position. "Success" in this environment is defined as being within an L2
distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None):
"""Initialize the Humanoid Maze environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Maze"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1],
relative_context=False,
offset=0.0,
reward_scales=1/7.2,
output_activation=np.exp)
super(HumanoidMaze, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=4)
class ImageAntMaze(UniversalAntMazeEnv):
"""Visual Ant Maze Environment.
In this task, immovable blocks are placed to confine the agent to a
U-shaped corridor. That is, blocks are placed everywhere except at (0,0),
(8,0), (16,0), (16,8), (16,16), (8,16), and (0,16). The agent is
initialized at position (0,0) and tasked at reaching a specific target
position. "Success" in this environment is defined as being within an L2
distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None,
image_size=32,
evaluate=False,
num_levels=1):
"""Initialize the Image Ant Maze environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
image_size : int
determines the width and height of the rendered image
evaluate : bool
whether to run an evaluation. In this case an additional goal agent
is placed in the environment for visualization purposes.
num_levels : int
number of levels in the policy. 1 refers to non-hierarchical models
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Maze"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[image_size * image_size * 3 + 0,
image_size * image_size * 3 + 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(ImageAntMaze, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
top_down_view=True,
image_size=image_size,
evaluate=evaluate,
num_levels=num_levels,
)
class ImageHumanoidMaze(UniversalAntMazeEnv):
"""Visual Humanoid Maze Environment.
In this task, immovable blocks are placed to confine the agent to a
U-shaped corridor. That is, blocks are placed everywhere except at (0,0),
(8,0), (16,0), (16,8), (16,16), (8,16), and (0,16). The agent is
initialized at position (0,0) and tasked at reaching a specific target
position. "Success" in this environment is defined as being within an L2
distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None,
image_size=32):
"""Initialize the Image Humanoid Maze environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Maze"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[image_size*image_size*3 + 0,
image_size*image_size*3 + 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(ImageHumanoidMaze, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
top_down_view=True,
image_size=image_size,
ant_fall=False,
)
class AntPush(UniversalAntMazeEnv):
"""Ant Push Environment.
In this task, immovable blocks are placed everywhere except at (0,0),
(-8,0), (-8,8), (0,8), (8,8), (16,8), and (0,16), and a movable block is
placed at (0,8). The agent is initialized at position (0,0), and is tasked
with the objective of reaching position (0,19). Therefore, the agent must
first move to the left, push the movable block to the right, and then
finally navigate to the target. "Success" in this environment is defined as
being within an L2 distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None,
evaluate=False,
num_levels=1):
"""Initialize the Ant Push environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
evaluate : bool
whether to run an evaluation. In this case an additional goal agent
is placed in the environment for visualization purposes.
num_levels : int
number of levels in the policy. 1 refers to non-hierarchical models
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Push"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(AntPush, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
ant_fall=False,
top_down_view=False,
evaluate=evaluate,
num_levels=num_levels,
)
class HumanoidPush(UniversalHumanoidMazeEnv):
"""Humanoid Push Environment.
In this task, immovable blocks are placed everywhere except at (0,0),
(-8,0), (-8,8), (0,8), (8,8), (16,8), and (0,16), and a movable block is
placed at (0,8). The agent is initialized at position (0,0), and is tasked
with the objective of reaching position (0,19). Therefore, the agent must
first move to the left, push the movable block to the right, and then
finally navigate to the target. "Success" in this environment is defined as
being within an L2 distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None):
"""Initialize the Humanoid Push environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Push"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(HumanoidPush, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
)
class AntFall(UniversalAntMazeEnv):
"""Ant Fall Environment.
In this task, the agent is initialized on a platform of height 4. Immovable
blocks are placed everywhere except at (-8,0), (0,0), (-8,8), (0,8),
(-8,16), (0,16), (-8,24), and (0,24). The raised platform is absent in the
region [-4,12]x[12,20], and a movable block is placed at (8,8). The agent
is initialized at position (0,0,4.5), and is tasked with the objective of reaching
position (0,27,4.5). Therefore, to achieve this, the agent must first push
the movable block into the chasm and walk on top of it before navigating to
the target. "Success" in this environment is defined as being within an L2
distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None,
evaluate=False,
num_levels=1):
"""Initialize the Ant Fall environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
evaluate : bool
whether to run an evaluation. In this case an additional goal agent
is placed in the environment for visualization purposes.
num_levels : int
number of levels in the policy. 1 refers to non-hierarchical models
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Fall"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1, 2],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(AntFall, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
ant_fall=True,
top_down_view=False,
evaluate=evaluate,
num_levels=num_levels,
)
class HumanoidFall(UniversalHumanoidMazeEnv):
"""Humanoid Fall Environment.
In this task, the agent is initialized on a platform of height 4. Immovable
blocks are placed everywhere except at (-8,0), (0,0), (-8,8), (0,8),
(-8,16), (0,16), (-8,24), and (0,24). The raised platform is absent in the
region [-4,12]x[12,20], and a movable block is placed at (8,8). The agent
is initialized at position (0,0,4.5), and is tasked with the objective of reaching
position (0,27,4.5). Therefore, to achieve this, the agent must first push
the movable block into the chasm and walk on top of it before navigating to
the target. "Success" in this environment is defined as being within an L2
distance of 5 from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None):
"""Initialize the Humanoid Fall environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "Fall"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1, 2],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(HumanoidFall, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=8,
)
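Every subclass above builds its `contextual_reward` as a thin wrapper around `negative_distance`, which is defined elsewhere in the module. A simplified stand-in capturing the core computation (scaled negative L2 distance over the indexed state dimensions; the argument names match the calls above, but the body is our assumption and omits the `relative_context` branch):

```python
import math

def negative_distance_sketch(states, goals, next_states, state_indices,
                             relative_context=False, offset=0.0,
                             reward_scales=0.1, output_activation=None):
    # Measure distance only over the indexed next-state dimensions.
    diffs = [next_states[i] - g for i, g in zip(state_indices, goals)]
    dist = math.sqrt(sum(d * d for d in diffs))
    rew = -dist * reward_scales + offset
    if output_activation is not None:
        # e.g. HumanoidMaze passes np.exp with reward_scales=1/7.2.
        rew = output_activation(rew)
    return rew

# Goal (0, 16), agent moved to (3, 12): distance 5, reward -0.5 at scale 0.1.
r = negative_distance_sketch(states=None, goals=[0.0, 16.0],
                             next_states=[3.0, 12.0], state_indices=[0, 1])
```

The Fall variants use `state_indices=[0, 1, 2]` because the goal includes the z-coordinate.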
class AntFourRooms(UniversalAntMazeEnv):
"""Ant Four Rooms Environment.
In this environment, an agent is placed in a four-room network whose
structure is represented in the figure below. The agent is initialized at
position (0,0) and tasked at reaching a specific target position. "Success"
in this environment is defined as being within an L2 distance of 5 from the
target.
+------------------------------------+
| X | |
| | |
| |
| | |
| | |
|---- ----------| |
| |--------- ------|
| | |
| | |
| |
| | |
+------------------------------------+
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None,
evaluate=False,
num_levels=1):
"""Initialize the Ant Four Rooms environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
evaluate : bool
whether to run an evaluation. In this case an additional goal agent
is placed in the environment for visualization purposes.
num_levels : int
number of levels in the policy. 1 refers to non-hierarchical models
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "FourRooms"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(AntFourRooms, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=2,
ant_fall=False,
top_down_view=False,
evaluate=evaluate,
num_levels=num_levels,
)
class HumanoidFourRooms(UniversalHumanoidMazeEnv):
"""Humanoid Four Rooms Environment.
In this environment, a humanoid agent is placed in a four-room network with
the same layout as the Ant Four Rooms environment. The agent is initialized
at position (0,0) and tasked at reaching a specific target position.
"Success" in this environment is defined as being within an L2 distance of 5
from the target.
"""
def __init__(self,
use_contexts=False,
random_contexts=False,
context_range=None):
"""Initialize the Humanoid Four Rooms environment.
Parameters
----------
use_contexts : bool, optional
specifies whether to add contexts to the observations and add the
contextual rewards
random_contexts : bool
specifies whether the context is a single value, or a random set of
values between some range
context_range : [float] or [(float, float)] or [[float]]
the desired context / goal, or the (lower, upper) bound tuple for
each dimension of the goal
Raises
------
AssertionError
If the context_range is not the right form based on whether
contexts are a single value or random across a range.
"""
maze_id = "FourRooms"
def contextual_reward(states, goals, next_states):
return negative_distance(
states=states,
goals=goals,
next_states=next_states,
state_indices=[0, 1],
relative_context=False,
offset=0.0,
reward_scales=REWARD_SCALE
)
super(HumanoidFourRooms, self).__init__(
maze_id=maze_id,
contextual_reward=contextual_reward,
use_contexts=use_contexts,
random_contexts=random_contexts,
context_range=context_range,
maze_size_scaling=3,
)
from flask import Blueprint
api_blueprint = Blueprint('api_blueprint', __name__, template_folder='templates')
from api import routes
from tensorflow.keras import backend as K
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.layers import Layer
from tensorflow.keras.layers import Input, Conv2D, Activation, Dense, Dropout, Lambda, Reshape, Concatenate
from tensorflow.keras.layers import BatchNormalization, MaxPooling2D, Flatten, Conv1D, Conv2DTranspose
from tensorflow.keras.callbacks import Callback, ModelCheckpoint, TensorBoard
from tensorflow.keras.utils import plot_model
from tensorflow.keras.layers import UpSampling2D
import numpy as np
import tensorflow as tf
import os
from PIL import Image
from capslayers import Conv2DCaps, ConvCapsuleLayer3D, CapsuleLayer, CapsToScalars, Mask_CID, Mask, ConvertToCaps, FlattenCaps
# To limit the GPU usage
# config = tf.ConfigProto()
# config.gpu_options.allow_growth=True
# sess = tf.Session(config=config)
# K.set_session(sess)
def DeepCapsNet(input_shape, n_class, routings):
# assemble encoder
x = Input(shape=input_shape)
l = x
l = Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding="same")(l) # common conv layer
l = BatchNormalization()(l)
l = ConvertToCaps()(l)
l = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l1 = l
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = ConvCapsuleLayer3D(kernel_size=3, num_capsule=32, num_atoms=8, strides=1, padding='same', routings=3)(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l2 = l
la = FlattenCaps()(l2)
lb = FlattenCaps()(l1)
l = layers.Concatenate(axis=-2)([la, lb])
# l = Dropout(0.4)(l)
digits_caps = CapsuleLayer(num_capsule=n_class, dim_capsule=32, routings=routings, channels=0, name='digit_caps')(l)
l = CapsToScalars(name='capsnet')(digits_caps)
m_capsnet = models.Model(inputs=x, outputs=l, name='capsnet_model')
y = Input(shape=(n_class,))
masked_by_y = Mask_CID()([digits_caps, y])
masked = Mask_CID()(digits_caps)
# Decoder Network
decoder = models.Sequential(name='decoder')
decoder.add(Dense(8 * 8 * 16, input_dim=32, activation="relu"))
decoder.add(Reshape((8, 8, 16)))
decoder.add(BatchNormalization(momentum=0.8))
decoder.add(Conv2DTranspose(64, (3, 3), strides=(1, 1), padding='same'))
decoder.add(Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same'))
decoder.add(Conv2DTranspose(16, (3, 3), strides=(2, 2), padding='same'))
decoder.add(Conv2DTranspose(8, (3, 3), strides=(2, 2), padding='same'))
decoder.add(Conv2DTranspose(3, (3, 3), strides=(1, 1), padding='same'))
decoder.add(Activation("relu"))
decoder.add(Reshape(target_shape=(64, 64, 3), name='out_recon'))
train_model = models.Model([x, y], [m_capsnet.output, decoder(masked_by_y)])
eval_model = models.Model(x, [m_capsnet.output, decoder(masked)])
train_model.summary()
return train_model, eval_model
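The `capsnet` output of `DeepCapsNet` is a vector of per-class capsule lengths, which capsule networks are typically trained on with a margin loss rather than cross-entropy. A pure-Python sketch of that loss for one sample (the `m_plus`/`m_minus`/`lam` defaults are the usual CapsNet constants; this helper is illustrative and not defined in this file):

```python
def margin_loss(y_true, lengths, m_plus=0.9, m_minus=0.1, lam=0.5):
    """CapsNet-style margin loss for a single one-hot-labelled sample."""
    loss = 0.0
    for t, v in zip(y_true, lengths):
        # Present classes are pushed above m_plus, absent ones below m_minus.
        loss += t * max(0.0, m_plus - v) ** 2
        loss += lam * (1.0 - t) * max(0.0, v - m_minus) ** 2
    return loss

good = margin_loss([1, 0, 0], [0.95, 0.05, 0.05])  # confident and correct
bad = margin_loss([1, 0, 0], [0.05, 0.95, 0.05])   # confident and wrong
```

The decoder's reconstruction loss would be weighted against this when compiling `train_model`.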
def DeepCapsNet28(input_shape, n_class, routings):
# assemble encoder
x = Input(shape=input_shape)
l = x
l = Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding="same")(l) # common conv layer
l = BatchNormalization()(l)
l = ConvertToCaps()(l)
l = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 4, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l_skip = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l1 = l
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
print("shape before ConvCapsuleLayer3D:", l.shape)
l_skip = ConvCapsuleLayer3D(kernel_size=3, num_capsule=32, num_atoms=8, strides=1, padding='same', routings=3)(l)
print("shape after ConvCapsuleLayer3D:", l_skip.shape)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = Conv2DCaps(32, 8, kernel_size=(3, 3), strides=(1, 1), r_num=1, b_alphas=[1, 1, 1])(l)
l = layers.Add()([l, l_skip])
l2 = l
la = FlattenCaps()(l2)
lb = FlattenCaps()(l1)
l = layers.Concatenate(axis=-2)([la, lb])
digits_caps = CapsuleLayer(num_capsule=n_class, dim_capsule=32, routings=routings, channels=0, name='digit_caps')(l)
l = CapsToScalars(name='capsnet')(digits_caps)
m_capsnet = models.Model(inputs=x, outputs=l, name='capsnet_model')
y = Input(shape=(n_class,))
masked_by_y = Mask_CID()([digits_caps, y])
masked = Mask_CID()(digits_caps)
# Decoder network: Dense -> 7x7x16, then strided deconvolutions upsample to 28x28x1 (Keras 1 API)
decoder = models.Sequential(name='decoder')
decoder.add(Dense(input_dim=32, activation="relu", output_dim=7 * 7 * 16))
decoder.add(Reshape((7, 7, 16)))
decoder.add(BatchNormalization(momentum=0.8))
decoder.add(Deconvolution2D(64, 3, 3, subsample=(1, 1), border_mode='same'))
decoder.add(Deconvolution2D(32, 3, 3, subsample=(2, 2), border_mode='same'))
decoder.add(Deconvolution2D(16, 3, 3, subsample=(2, 2), border_mode='same'))
decoder.add(Deconvolution2D(1, 3, 3, subsample=(1, 1), border_mode='same'))
decoder.add(Activation("relu"))
decoder.add(Reshape(target_shape=(28, 28, 1), name='out_recon'))
train_model = models.Model([x, y], [m_capsnet.output, decoder(masked_by_y)])
eval_model = models.Model(x, [m_capsnet.output, decoder(masked)])
train_model.summary()
return train_model, eval_model
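CapsToScalars turns each digit capsule into a scalar class score; by CapsNet convention this is the capsule vector's L2 norm. A minimal NumPy sketch of that convention (an assumption — the layer's actual definition lives elsewhere in this file):

```python
import numpy as np

def caps_to_scalars(caps):
    """Capsule lengths: (batch, num_capsule, dim_capsule) -> (batch, num_capsule)."""
    return np.sqrt(np.sum(np.square(caps), axis=-1))
```

Because the routing squash keeps each length in [0, 1), the norms act directly as per-class probabilities for the 'capsnet' output.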
def BaseCapsNet(input_shape, n_class, routings):
# assemble encoder
x = Input(shape=input_shape)
l = x
l = Conv2D(256, (9, 9), strides=(2, 2), activation='relu', padding="same")(l)
l = BatchNormalization()(l)
l = Conv2D(256, (9, 9), strides=(2, 2), activation='relu', padding="same")(l)
l = BatchNormalization()(l)
l = ConvertToCaps()(l)
l = Conv2DCaps(16, 6, kernel_size=(3, 3), strides=(2, 2), r_num=1, b_alphas=[1, 1, 1])(l)
l = FlattenCaps()(l)
digits_caps = CapsuleLayer(num_capsule=n_class, dim_capsule=8, routings=routings, channels=0, name='digit_caps')(l)
l = CapsToScalars(name='capsnet')(digits_caps)
m_capsnet = models.Model(inputs=x, outputs=l, name='capsnet_model')
y = layers.Input(shape=(n_class,))
masked_by_y = Mask()([digits_caps, y]) # the true label masks the capsule-layer output during training
masked = Mask()(digits_caps)
# Decoder network: Dense -> 8x8x16, then strided deconvolutions upsample to 32x32x3 (Keras 1 API)
decoder = models.Sequential(name='decoder')
decoder.add(Dense(input_dim=8 * n_class, activation="relu", output_dim=8 * 8 * 16))
decoder.add(Reshape((8, 8, 16)))
decoder.add(BatchNormalization(momentum=0.8))
decoder.add(layers.Deconvolution2D(64, 3, 3, subsample=(1, 1), border_mode='same'))
decoder.add(layers.Deconvolution2D(32, 3, 3, subsample=(2, 2), border_mode='same'))
decoder.add(layers.Deconvolution2D(16, 3, 3, subsample=(2, 2), border_mode='same'))
decoder.add(layers.Deconvolution2D(3, 3, 3, subsample=(1, 1), border_mode='same'))
decoder.add(Activation("relu"))
decoder.add(layers.Reshape(target_shape=(32, 32, 3), name='out_recon'))
train_model = models.Model([x, y], [m_capsnet.output, decoder(masked_by_y)])
eval_model = models.Model(x, [m_capsnet.output, decoder(masked)])
train_model.summary()
return train_model, eval_model
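Models like the ones returned here are conventionally compiled with a margin loss on the 'capsnet' output plus a reconstruction (e.g. MSE) loss on the decoder output. A NumPy sketch of the standard capsule margin loss with the usual constants m+=0.9, m-=0.1, lambda=0.5 (an assumption — the loss actually paired with these models is defined outside this excerpt):

```python
import numpy as np

def margin_loss(y_true, y_pred, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Capsule margin loss. y_true: one-hot (batch, n_class);
    y_pred: capsule lengths in [0, 1] of the same shape."""
    present = y_true * np.square(np.maximum(0.0, m_plus - y_pred))
    absent = lam * (1.0 - y_true) * np.square(np.maximum(0.0, y_pred - m_minus))
    # sum over classes, mean over the batch
    return np.mean(np.sum(present + absent, axis=-1))
```

A correct, confident prediction (true-class length above m+, others below m-) gives zero loss; otherwise both terms penalize quadratically.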
| 44.855814 | 126 | 0.649212 | 1,573 | 9,644 | 3.837889 | 0.094088 | 0.030479 | 0.049197 | 0.06162 | 0.848103 | 0.830048 | 0.824416 | 0.823422 | 0.814477 | 0.814477 | 0 | 0.072844 | 0.163003 | 9,644 | 214 | 127 | 45.065421 | 0.67505 | 0.038366 | 0 | 0.75 | 0 | 0 | 0.029386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019737 | false | 0 | 0.085526 | 0 | 0.125 | 0.039474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0ed6477c47fd6414bab3229abc0d611bd31b68ac | 3,262 | py | Python | tests/iterators/find_test.py | SSouik/pyutil | d2250fb585679e49eb9056a3051bf239a58c2e8b | [
"MIT"
] | null | null | null | tests/iterators/find_test.py | SSouik/pyutil | d2250fb585679e49eb9056a3051bf239a58c2e8b | [
"MIT"
] | 21 | 2022-01-05T04:51:33.000Z | 2022-01-28T05:45:57.000Z | tests/iterators/find_test.py | SSouik/pyutil | d2250fb585679e49eb9056a3051bf239a58c2e8b | [
"MIT"
] | null | null | null | import pytest
from pyutil import find
sample_data = [
{"foo": "abc", "bar": "123"},
{"foo": "def", "bar": "456"},
{"foo": "ghi", "bar": "789"},
{"foo": "jkl", "bar": "100"},
{"foo": "mno", "bar": "101"},
]
def test_find_when_seq_is_empty_list():
actual = find([], lambda x: x == True)
expected = None
assert actual == expected
def test_find_when_seq_is_empty_tuple():
actual = find((), lambda x: x == True)
expected = None
assert actual == expected
def test_find_when_seq_is_list():
actual = find(sample_data, lambda x: x["foo"] == "def")
expected = {"foo": "def", "bar": "456"}
assert actual == expected
def test_find_when_seq_is_tuple():
actual = find(tuple(sample_data), lambda x: x["foo"] == "def")
expected = {"foo": "def", "bar": "456"}
assert actual == expected
def test_find_when_seq_is_list_with_start_as_1():
actual = find(sample_data, lambda x: x["foo"] == "def", 1)
expected = {"foo": "def", "bar": "456"}
assert actual == expected
def test_find_when_seq_is_tuple_with_start_as_1():
actual = find(tuple(sample_data), lambda x: x["foo"] == "def", 1)
expected = {"foo": "def", "bar": "456"}
assert actual == expected
def test_find_when_seq_is_list_with_start_as_2():
actual = find(sample_data, lambda x: x["foo"] == "def", 2)
expected = None
assert actual == expected
def test_find_when_seq_is_tuple_with_start_as_2():
actual = find(tuple(sample_data), lambda x: x["foo"] == "def", 2)
expected = None
assert actual == expected
def test_find_when_seq_is_list_when_item_is_in_start_end_range():
actual = find(sample_data, lambda x: x["foo"] == "def", 1, 4)
expected = {"foo": "def", "bar": "456"}
assert actual == expected
def test_find_when_seq_is_tuple_when_item_is_in_start_end_range():
actual = find(tuple(sample_data), lambda x: x["foo"] == "def", 1, 4)
expected = {"foo": "def", "bar": "456"}
assert actual == expected
def test_find_when_seq_is_list_when_item_is_not_in_start_end_range():
actual = find(sample_data, lambda x: x["foo"] == "def", 2, 4)
expected = None
assert actual == expected
def test_find_when_seq_is_tuple_when_item_is_not_in_start_end_range():
actual = find(tuple(sample_data), lambda x: x["foo"] == "def", 2, 4)
expected = None
assert actual == expected
def test_find_when_start_and_end_are_equal():
actual = find(sample_data, lambda x: x["foo"] == "def", 1, 1)
expected = None
assert actual == expected
def test_find_when_seq_is_not_valid():
with pytest.raises(TypeError):
find("foo", lambda x: x == 0)
def test_find_when_func_is_not_callable():
with pytest.raises(TypeError):
find([], "foo")
def test_find_when_start_is_negative():
with pytest.raises(ValueError):
find([], lambda x: x == 0, -1)
def test_find_when_end_is_negative():
with pytest.raises(ValueError):
find([], lambda x: x == 0, 0, -1)
def test_find_when_start_is_greater_than_end():
with pytest.raises(ValueError):
find([], lambda x: x == 2, 1)
def test_find_when_start_is_greater_than_len_of_seq():
with pytest.raises(ValueError):
find([], lambda x: x["foo"] == "def", 10)
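The tests above pin down find's contract: list or tuple input only, a callable predicate, an optional half-open [start, end) window, non-negative bounds, and start no greater than end or len(seq). A plausible implementation consistent with these tests (hypothetical — the real pyutil.find may differ in detail):

```python
def find(seq, func, start=0, end=None):
    """Return the first item in seq[start:end] for which func(item) is
    truthy, or None when no item matches."""
    if not isinstance(seq, (list, tuple)):
        raise TypeError("seq must be a list or tuple")
    if not callable(func):
        raise TypeError("func must be callable")
    if end is None:
        end = len(seq)
    if start < 0 or end < 0:
        raise ValueError("start and end must be non-negative")
    if start > end or start > len(seq):
        raise ValueError("start must not exceed end or len(seq)")
    for item in seq[start:end]:
        if func(item):
            return item
    return None
```

Note the end bound is exclusive: find(seq, func, 1, 1) scans an empty window and returns None even if seq[1] matches, which is exactly what test_find_when_start_and_end_are_equal asserts.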
| 27.411765 | 72 | 0.648375 | 490 | 3,262 | 3.979592 | 0.122449 | 0.061538 | 0.107179 | 0.146154 | 0.901538 | 0.895897 | 0.842051 | 0.826667 | 0.787692 | 0.747692 | 0 | 0.023185 | 0.19344 | 3,262 | 118 | 73 | 27.644068 | 0.717978 | 0 | 0 | 0.405063 | 0 | 0 | 0.064378 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 1 | 0.240506 | false | 0 | 0.025316 | 0 | 0.265823 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0ed7a977542041b5dcdfd280d411e4caca4630f0 | 120,500 | py | Python | pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import lsp_sec_path_config_admin_groups
class sec_path(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-mpls - based on the path /brocade_mpls_rpc/show-mpls-lsp-name-debug/output/lsp/show-mpls-lsp-extensive-info/show-mpls-lsp-sec-path-info/sec-path. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__lsp_sec_path_path_name','__lsp_sec_path_state','__lsp_sec_path_state_up','__lsp_sec_path_active','__lsp_sec_path_is_current_secondary','__lsp_sec_path_is_selected_secondary','__lsp_sec_path_config_reoptimize_timer_configured','__lsp_sec_path_config_reoptimize_timer','__lsp_sec_path_config_tspec_mtu_configured','__lsp_sec_path_sec_path_config_tspec_mtu','__lsp_sec_path_config_cos_configured','__lsp_sec_path_config_cos','__lsp_sec_path_config_mtu_configured','__lsp_sec_path_config_mtu','__lsp_sec_path_config_tie_breaking_configured','__lsp_sec_path_config_tie_break_random','__lsp_sec_path_config_tie_break_least_fill','__lsp_sec_path_config_tie_break_most_fill','__lsp_sec_path_config_cspf_disabled','__lsp_sec_path_config_hot_standby','__lsp_sec_path_config_pinned','__lsp_sec_path_config_persistent','__lsp_sec_path_config_soft_prempt','__lsp_sec_path_config_priority_configured','__lsp_sec_path_config_setup_prority','__lsp_sec_path_config_holding_prority','__lsp_sec_path_config_hop_limit_configured','__lsp_sec_path_config_hop_limit','__lsp_sec_path_config_traffic_eng_rate_configured','__lsp_sec_path_config_traffic_eng_mean_rate','__lsp_sec_path_config_traffic_eng_max_rate','__lsp_sec_path_config_traffic_eng_max_burst','__lsp_sec_path_config_admin_group_configured','__lsp_sec_path_config_admin_groups',)
_yang_name = 'sec-path'
_rest_name = 'sec-path'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__lsp_sec_path_config_mtu_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-mtu-configured", rest_name="lsp-sec-path-config-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_hot_standby = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hot-standby", rest_name="lsp-sec-path-config-hot-standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_reoptimize_timer_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer-configured", rest_name="lsp-sec-path-config-reoptimize-timer-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_admin_groups = YANGDynClass(base=lsp_sec_path_config_admin_groups.lsp_sec_path_config_admin_groups, is_container='container', presence=False, yang_name="lsp-sec-path-config-admin-groups", rest_name="lsp-sec-path-config-admin-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions=None, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='container', is_config=True)
self.__lsp_sec_path_config_cspf_disabled = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cspf-disabled", rest_name="lsp-sec-path-config-cspf-disabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_reoptimize_timer = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer", rest_name="lsp-sec-path-config-reoptimize-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_sec_path_config_soft_prempt = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-soft-prempt", rest_name="lsp-sec-path-config-soft-prempt", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_tie_break_random = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-random", rest_name="lsp-sec-path-config-tie-break-random", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_traffic_eng_mean_rate = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-mean-rate", rest_name="lsp-sec-path-config-traffic-eng-mean-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_sec_path_is_current_secondary = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-current-secondary", rest_name="lsp-sec-path-is-current-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_persistent = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-persistent", rest_name="lsp-sec-path-config-persistent", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_hop_limit = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-hop-limit", rest_name="lsp-sec-path-config-hop-limit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__lsp_sec_path_is_selected_secondary = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-selected-secondary", rest_name="lsp-sec-path-is-selected-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_traffic_eng_max_rate = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-rate", rest_name="lsp-sec-path-config-traffic-eng-max-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_sec_path_config_holding_prority = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-holding-prority", rest_name="lsp-sec-path-config-holding-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__lsp_sec_path_state_up = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-state-up", rest_name="lsp-sec-path-state-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_tie_breaking_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-breaking-configured", rest_name="lsp-sec-path-config-tie-breaking-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_sec_path_config_tspec_mtu = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-sec-path-config-tspec-mtu", rest_name="lsp-sec-path-sec-path-config-tspec-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_sec_path_state = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-state", rest_name="lsp-sec-path-state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__lsp_sec_path_config_setup_prority = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-setup-prority", rest_name="lsp-sec-path-config-setup-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__lsp_sec_path_config_traffic_eng_max_burst = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-burst", rest_name="lsp-sec-path-config-traffic-eng-max-burst", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_sec_path_active = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-active", rest_name="lsp-sec-path-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_tie_break_least_fill = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-least-fill", rest_name="lsp-sec-path-config-tie-break-least-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_hop_limit_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hop-limit-configured", rest_name="lsp-sec-path-config-hop-limit-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_tspec_mtu_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tspec-mtu-configured", rest_name="lsp-sec-path-config-tspec-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_priority_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-priority-configured", rest_name="lsp-sec-path-config-priority-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_traffic_eng_rate_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-rate-configured", rest_name="lsp-sec-path-config-traffic-eng-rate-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_admin_group_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-admin-group-configured", rest_name="lsp-sec-path-config-admin-group-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_mtu = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-mtu", rest_name="lsp-sec-path-config-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
self.__lsp_sec_path_config_pinned = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-pinned", rest_name="lsp-sec-path-config-pinned", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_cos_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cos-configured", rest_name="lsp-sec-path-config-cos-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_tie_break_most_fill = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-most-fill", rest_name="lsp-sec-path-config-tie-break-most-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
self.__lsp_sec_path_config_cos = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-cos", rest_name="lsp-sec-path-config-cos", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
self.__lsp_sec_path_path_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-sec-path-path-name", rest_name="lsp-sec-path-path-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'brocade_mpls_rpc', u'show-mpls-lsp-name-debug', u'output', u'lsp', u'show-mpls-lsp-extensive-info', u'show-mpls-lsp-sec-path-info', u'sec-path']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'show-mpls-lsp-name-debug', u'output', u'lsp', u'sec-path']
def _get_lsp_sec_path_path_name(self):
"""
Getter method for lsp_sec_path_path_name, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_path_name (string)
YANG Description: Secondary path name
"""
return self.__lsp_sec_path_path_name
def _set_lsp_sec_path_path_name(self, v, load=False):
"""
Setter method for lsp_sec_path_path_name, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_path_name (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_path_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_path_name() directly.
YANG Description: Secondary path name
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="lsp-sec-path-path-name", rest_name="lsp-sec-path-path-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_path_name must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-sec-path-path-name", rest_name="lsp-sec-path-path-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)""",
})
self.__lsp_sec_path_path_name = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_path_name(self):
self.__lsp_sec_path_path_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="lsp-sec-path-path-name", rest_name="lsp-sec-path-path-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
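Every setter in this generated class follows the same pyangbind pattern: coerce the incoming value through the typed YANG wrapper, and translate coercion failures into a ValueError dict naming the expected YANG type. A stripped-down, hypothetical sketch of that pattern in plain Python (no pyangbind; make_setter and its arguments are illustrative names only):

```python
def make_setter(typed_cls, leaf_name, yang_type):
    """Build a setter that coerces values via typed_cls, mirroring the
    generated _set_* methods (hypothetical helper, not pyangbind API)."""
    def setter(obj, value):
        try:
            coerced = typed_cls(value)  # may raise TypeError/ValueError
        except (TypeError, ValueError):
            raise ValueError({
                'error-string': "%s must be of a type compatible with %s"
                                % (leaf_name, yang_type),
                'defined-type': yang_type,
            })
        setattr(obj, '_' + leaf_name, coerced)
    return setter
```

The generated code additionally threads path helpers, REST names, and namespaces through YANGDynClass; the sketch keeps only the coercion-and-error-translation core.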
def _get_lsp_sec_path_state(self):
"""
Getter method for lsp_sec_path_state, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_state (uint8)
YANG Description: Secondary path state
"""
return self.__lsp_sec_path_state
def _set_lsp_sec_path_state(self, v, load=False):
"""
Setter method for lsp_sec_path_state, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_state (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_state is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_state() directly.
YANG Description: Secondary path state
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-state", rest_name="lsp-sec-path-state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_state must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-state", rest_name="lsp-sec-path-state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__lsp_sec_path_state = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_state(self):
self.__lsp_sec_path_state = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-state", rest_name="lsp-sec-path-state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_lsp_sec_path_state_up(self):
"""
Getter method for lsp_sec_path_state_up, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_state_up (boolean)
YANG Description: Secondary path state
"""
return self.__lsp_sec_path_state_up
def _set_lsp_sec_path_state_up(self, v, load=False):
"""
Setter method for lsp_sec_path_state_up, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_state_up (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_state_up is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_state_up() directly.
YANG Description: Secondary path state
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-state-up", rest_name="lsp-sec-path-state-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_state_up must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-state-up", rest_name="lsp-sec-path-state-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_state_up = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_state_up(self):
self.__lsp_sec_path_state_up = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-state-up", rest_name="lsp-sec-path-state-up", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_active(self):
"""
Getter method for lsp_sec_path_active, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_active (boolean)
YANG Description: Secondary path state active
"""
return self.__lsp_sec_path_active
def _set_lsp_sec_path_active(self, v, load=False):
"""
Setter method for lsp_sec_path_active, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_active (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_active is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_active() directly.
YANG Description: Secondary path state active
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-active", rest_name="lsp-sec-path-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_active must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-active", rest_name="lsp-sec-path-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_active = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_active(self):
self.__lsp_sec_path_active = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-active", rest_name="lsp-sec-path-active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_is_current_secondary(self):
"""
Getter method for lsp_sec_path_is_current_secondary, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_is_current_secondary (boolean)
YANG Description: Secondary path current secondary
"""
return self.__lsp_sec_path_is_current_secondary
def _set_lsp_sec_path_is_current_secondary(self, v, load=False):
"""
Setter method for lsp_sec_path_is_current_secondary, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_is_current_secondary (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_is_current_secondary is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_is_current_secondary() directly.
YANG Description: Secondary path current secondary
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-current-secondary", rest_name="lsp-sec-path-is-current-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_is_current_secondary must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-current-secondary", rest_name="lsp-sec-path-is-current-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_is_current_secondary = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_is_current_secondary(self):
self.__lsp_sec_path_is_current_secondary = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-current-secondary", rest_name="lsp-sec-path-is-current-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_is_selected_secondary(self):
"""
Getter method for lsp_sec_path_is_selected_secondary, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_is_selected_secondary (boolean)
YANG Description: Secondary path selected secondary
"""
return self.__lsp_sec_path_is_selected_secondary
def _set_lsp_sec_path_is_selected_secondary(self, v, load=False):
"""
Setter method for lsp_sec_path_is_selected_secondary, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_is_selected_secondary (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_is_selected_secondary is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_is_selected_secondary() directly.
YANG Description: Secondary path selected secondary
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-selected-secondary", rest_name="lsp-sec-path-is-selected-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_is_selected_secondary must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-selected-secondary", rest_name="lsp-sec-path-is-selected-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_is_selected_secondary = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_is_selected_secondary(self):
self.__lsp_sec_path_is_selected_secondary = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-is-selected-secondary", rest_name="lsp-sec-path-is-selected-secondary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_reoptimize_timer_configured(self):
"""
Getter method for lsp_sec_path_config_reoptimize_timer_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_reoptimize_timer_configured (boolean)
YANG Description: LSP reoptimization timer configured
"""
return self.__lsp_sec_path_config_reoptimize_timer_configured
def _set_lsp_sec_path_config_reoptimize_timer_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_reoptimize_timer_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_reoptimize_timer_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_reoptimize_timer_configured is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_reoptimize_timer_configured() directly.
YANG Description: LSP reoptimization timer configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer-configured", rest_name="lsp-sec-path-config-reoptimize-timer-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_reoptimize_timer_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer-configured", rest_name="lsp-sec-path-config-reoptimize-timer-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_reoptimize_timer_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_reoptimize_timer_configured(self):
self.__lsp_sec_path_config_reoptimize_timer_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer-configured", rest_name="lsp-sec-path-config-reoptimize-timer-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_reoptimize_timer(self):
"""
Getter method for lsp_sec_path_config_reoptimize_timer, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_reoptimize_timer (uint32)
YANG Description: LSP reoptimization timer value
"""
return self.__lsp_sec_path_config_reoptimize_timer
def _set_lsp_sec_path_config_reoptimize_timer(self, v, load=False):
"""
Setter method for lsp_sec_path_config_reoptimize_timer, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_reoptimize_timer (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_reoptimize_timer is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_reoptimize_timer() directly.
YANG Description: LSP reoptimization timer value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer", rest_name="lsp-sec-path-config-reoptimize-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_reoptimize_timer must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer", rest_name="lsp-sec-path-config-reoptimize-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_sec_path_config_reoptimize_timer = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_reoptimize_timer(self):
self.__lsp_sec_path_config_reoptimize_timer = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-reoptimize-timer", rest_name="lsp-sec-path-config-reoptimize-timer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_sec_path_config_tspec_mtu_configured(self):
"""
Getter method for lsp_sec_path_config_tspec_mtu_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tspec_mtu_configured (boolean)
YANG Description: LSP traffic spec MTU configured
"""
return self.__lsp_sec_path_config_tspec_mtu_configured
def _set_lsp_sec_path_config_tspec_mtu_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_tspec_mtu_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tspec_mtu_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_tspec_mtu_configured is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_tspec_mtu_configured() directly.
YANG Description: LSP traffic spec MTU configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tspec-mtu-configured", rest_name="lsp-sec-path-config-tspec-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_tspec_mtu_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tspec-mtu-configured", rest_name="lsp-sec-path-config-tspec-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_tspec_mtu_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_tspec_mtu_configured(self):
self.__lsp_sec_path_config_tspec_mtu_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tspec-mtu-configured", rest_name="lsp-sec-path-config-tspec-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_sec_path_config_tspec_mtu(self):
"""
Getter method for lsp_sec_path_sec_path_config_tspec_mtu, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_sec_path_config_tspec_mtu (uint32)
YANG Description: LSP traffic spec MTU value
"""
return self.__lsp_sec_path_sec_path_config_tspec_mtu
def _set_lsp_sec_path_sec_path_config_tspec_mtu(self, v, load=False):
"""
Setter method for lsp_sec_path_sec_path_config_tspec_mtu, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_sec_path_config_tspec_mtu (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_sec_path_config_tspec_mtu is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_sec_path_config_tspec_mtu() directly.
YANG Description: LSP traffic spec MTU value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-sec-path-config-tspec-mtu", rest_name="lsp-sec-path-sec-path-config-tspec-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_sec_path_config_tspec_mtu must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-sec-path-config-tspec-mtu", rest_name="lsp-sec-path-sec-path-config-tspec-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_sec_path_sec_path_config_tspec_mtu = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_sec_path_config_tspec_mtu(self):
self.__lsp_sec_path_sec_path_config_tspec_mtu = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-sec-path-config-tspec-mtu", rest_name="lsp-sec-path-sec-path-config-tspec-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_sec_path_config_cos_configured(self):
"""
Getter method for lsp_sec_path_config_cos_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_cos_configured (boolean)
YANG Description: LSP cos value configured
"""
return self.__lsp_sec_path_config_cos_configured
def _set_lsp_sec_path_config_cos_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_cos_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_cos_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_cos_configured is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_cos_configured() directly.
YANG Description: LSP cos value configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cos-configured", rest_name="lsp-sec-path-config-cos-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_cos_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cos-configured", rest_name="lsp-sec-path-config-cos-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_cos_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_cos_configured(self):
self.__lsp_sec_path_config_cos_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cos-configured", rest_name="lsp-sec-path-config-cos-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_cos(self):
"""
Getter method for lsp_sec_path_config_cos, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_cos (uint8)
YANG Description: LSP cos value
"""
return self.__lsp_sec_path_config_cos
def _set_lsp_sec_path_config_cos(self, v, load=False):
"""
Setter method for lsp_sec_path_config_cos, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_cos (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_cos is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_cos() directly.
YANG Description: LSP cos value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-cos", rest_name="lsp-sec-path-config-cos", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_cos must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-cos", rest_name="lsp-sec-path-config-cos", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__lsp_sec_path_config_cos = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_cos(self):
self.__lsp_sec_path_config_cos = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-cos", rest_name="lsp-sec-path-config-cos", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_lsp_sec_path_config_mtu_configured(self):
"""
Getter method for lsp_sec_path_config_mtu_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_mtu_configured (boolean)
YANG Description: LSP MTU value configured
"""
return self.__lsp_sec_path_config_mtu_configured
def _set_lsp_sec_path_config_mtu_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_mtu_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_mtu_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_mtu_configured is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_mtu_configured() directly.
YANG Description: LSP MTU value configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-mtu-configured", rest_name="lsp-sec-path-config-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_mtu_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-mtu-configured", rest_name="lsp-sec-path-config-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_mtu_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_mtu_configured(self):
self.__lsp_sec_path_config_mtu_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-mtu-configured", rest_name="lsp-sec-path-config-mtu-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_mtu(self):
"""
Getter method for lsp_sec_path_config_mtu, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_mtu (uint32)
YANG Description: LSP MTU value
"""
return self.__lsp_sec_path_config_mtu
def _set_lsp_sec_path_config_mtu(self, v, load=False):
"""
Setter method for lsp_sec_path_config_mtu, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_mtu (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_mtu is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_lsp_sec_path_config_mtu() directly.
YANG Description: LSP MTU value
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-mtu", rest_name="lsp-sec-path-config-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_mtu must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-mtu", rest_name="lsp-sec-path-config-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_sec_path_config_mtu = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_mtu(self):
self.__lsp_sec_path_config_mtu = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-mtu", rest_name="lsp-sec-path-config-mtu", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_sec_path_config_tie_breaking_configured(self):
"""
Getter method for lsp_sec_path_config_tie_breaking_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_breaking_configured (boolean)
YANG Description: LSP CSPF tie-breaking configured
"""
return self.__lsp_sec_path_config_tie_breaking_configured
def _set_lsp_sec_path_config_tie_breaking_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_tie_breaking_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_breaking_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_tie_breaking_configured is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_tie_breaking_configured() directly.
YANG Description: LSP CSPF tie-breaking configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-breaking-configured", rest_name="lsp-sec-path-config-tie-breaking-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_tie_breaking_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-breaking-configured", rest_name="lsp-sec-path-config-tie-breaking-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_tie_breaking_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_tie_breaking_configured(self):
self.__lsp_sec_path_config_tie_breaking_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-breaking-configured", rest_name="lsp-sec-path-config-tie-breaking-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_tie_break_random(self):
"""
Getter method for lsp_sec_path_config_tie_break_random, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_break_random (boolean)
YANG Description: LSP CSPF tie-breaking is random
"""
return self.__lsp_sec_path_config_tie_break_random
def _set_lsp_sec_path_config_tie_break_random(self, v, load=False):
"""
Setter method for lsp_sec_path_config_tie_break_random, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_break_random (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_tie_break_random is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_tie_break_random() directly.
YANG Description: LSP CSPF tie-breaking is random
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-random", rest_name="lsp-sec-path-config-tie-break-random", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_tie_break_random must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-random", rest_name="lsp-sec-path-config-tie-break-random", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_tie_break_random = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_tie_break_random(self):
self.__lsp_sec_path_config_tie_break_random = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-random", rest_name="lsp-sec-path-config-tie-break-random", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_tie_break_least_fill(self):
"""
Getter method for lsp_sec_path_config_tie_break_least_fill, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_break_least_fill (boolean)
YANG Description: LSP CSPF tie-breaking is least-fill
"""
return self.__lsp_sec_path_config_tie_break_least_fill
def _set_lsp_sec_path_config_tie_break_least_fill(self, v, load=False):
"""
Setter method for lsp_sec_path_config_tie_break_least_fill, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_break_least_fill (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_tie_break_least_fill is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_tie_break_least_fill() directly.
YANG Description: LSP CSPF tie-breaking is least-fill
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-least-fill", rest_name="lsp-sec-path-config-tie-break-least-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_tie_break_least_fill must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-least-fill", rest_name="lsp-sec-path-config-tie-break-least-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_tie_break_least_fill = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_tie_break_least_fill(self):
self.__lsp_sec_path_config_tie_break_least_fill = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-least-fill", rest_name="lsp-sec-path-config-tie-break-least-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_tie_break_most_fill(self):
"""
Getter method for lsp_sec_path_config_tie_break_most_fill, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_break_most_fill (boolean)
YANG Description: LSP CSPF tie-breaking is most-fill
"""
return self.__lsp_sec_path_config_tie_break_most_fill
def _set_lsp_sec_path_config_tie_break_most_fill(self, v, load=False):
"""
Setter method for lsp_sec_path_config_tie_break_most_fill, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_tie_break_most_fill (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_tie_break_most_fill is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_tie_break_most_fill() directly.
YANG Description: LSP CSPF tie-breaking is most-fill
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-most-fill", rest_name="lsp-sec-path-config-tie-break-most-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_tie_break_most_fill must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-most-fill", rest_name="lsp-sec-path-config-tie-break-most-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_tie_break_most_fill = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_tie_break_most_fill(self):
self.__lsp_sec_path_config_tie_break_most_fill = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-tie-break-most-fill", rest_name="lsp-sec-path-config-tie-break-most-fill", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_cspf_disabled(self):
"""
Getter method for lsp_sec_path_config_cspf_disabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_cspf_disabled (boolean)
YANG Description: LSP CSPF disabled
"""
return self.__lsp_sec_path_config_cspf_disabled
def _set_lsp_sec_path_config_cspf_disabled(self, v, load=False):
"""
Setter method for lsp_sec_path_config_cspf_disabled, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_cspf_disabled (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_cspf_disabled is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_cspf_disabled() directly.
YANG Description: LSP CSPF disabled
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cspf-disabled", rest_name="lsp-sec-path-config-cspf-disabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_cspf_disabled must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cspf-disabled", rest_name="lsp-sec-path-config-cspf-disabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_cspf_disabled = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_cspf_disabled(self):
self.__lsp_sec_path_config_cspf_disabled = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-cspf-disabled", rest_name="lsp-sec-path-config-cspf-disabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_hot_standby(self):
"""
Getter method for lsp_sec_path_config_hot_standby, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_hot_standby (boolean)
YANG Description: LSP is hot standby
"""
return self.__lsp_sec_path_config_hot_standby
def _set_lsp_sec_path_config_hot_standby(self, v, load=False):
"""
Setter method for lsp_sec_path_config_hot_standby, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_hot_standby (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_hot_standby is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_hot_standby() directly.
YANG Description: LSP is hot standby
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hot-standby", rest_name="lsp-sec-path-config-hot-standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_hot_standby must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hot-standby", rest_name="lsp-sec-path-config-hot-standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_hot_standby = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_hot_standby(self):
self.__lsp_sec_path_config_hot_standby = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hot-standby", rest_name="lsp-sec-path-config-hot-standby", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_pinned(self):
"""
Getter method for lsp_sec_path_config_pinned, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_pinned (boolean)
YANG Description: LSP is pinned
"""
return self.__lsp_sec_path_config_pinned
def _set_lsp_sec_path_config_pinned(self, v, load=False):
"""
Setter method for lsp_sec_path_config_pinned, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_pinned (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_pinned is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_pinned() directly.
YANG Description: LSP is pinned
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-pinned", rest_name="lsp-sec-path-config-pinned", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_pinned must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-pinned", rest_name="lsp-sec-path-config-pinned", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_pinned = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_pinned(self):
self.__lsp_sec_path_config_pinned = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-pinned", rest_name="lsp-sec-path-config-pinned", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_persistent(self):
"""
Getter method for lsp_sec_path_config_persistent, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_persistent (boolean)
YANG Description: LSP is persistent
"""
return self.__lsp_sec_path_config_persistent
def _set_lsp_sec_path_config_persistent(self, v, load=False):
"""
Setter method for lsp_sec_path_config_persistent, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_persistent (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_persistent is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_persistent() directly.
YANG Description: LSP is persistent
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-persistent", rest_name="lsp-sec-path-config-persistent", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_persistent must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-persistent", rest_name="lsp-sec-path-config-persistent", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_persistent = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_persistent(self):
self.__lsp_sec_path_config_persistent = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-persistent", rest_name="lsp-sec-path-config-persistent", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_soft_prempt(self):
"""
Getter method for lsp_sec_path_config_soft_prempt, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_soft_prempt (boolean)
YANG Description: LSP soft preemption enabled
"""
return self.__lsp_sec_path_config_soft_prempt
def _set_lsp_sec_path_config_soft_prempt(self, v, load=False):
"""
Setter method for lsp_sec_path_config_soft_prempt, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_soft_prempt (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_soft_prempt is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_soft_prempt() directly.
YANG Description: LSP soft preemption enabled
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-soft-prempt", rest_name="lsp-sec-path-config-soft-prempt", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_soft_prempt must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-soft-prempt", rest_name="lsp-sec-path-config-soft-prempt", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_soft_prempt = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_soft_prempt(self):
self.__lsp_sec_path_config_soft_prempt = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-soft-prempt", rest_name="lsp-sec-path-config-soft-prempt", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_priority_configured(self):
"""
Getter method for lsp_sec_path_config_priority_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_priority_configured (boolean)
YANG Description: LSP priority configured
"""
return self.__lsp_sec_path_config_priority_configured
def _set_lsp_sec_path_config_priority_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_priority_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_priority_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_priority_configured is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_priority_configured() directly.
YANG Description: LSP priority configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-priority-configured", rest_name="lsp-sec-path-config-priority-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_priority_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-priority-configured", rest_name="lsp-sec-path-config-priority-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_priority_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_priority_configured(self):
self.__lsp_sec_path_config_priority_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-priority-configured", rest_name="lsp-sec-path-config-priority-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_setup_prority(self):
"""
Getter method for lsp_sec_path_config_setup_prority, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_setup_prority (uint8)
YANG Description: LSP setup priority
"""
return self.__lsp_sec_path_config_setup_prority
def _set_lsp_sec_path_config_setup_prority(self, v, load=False):
"""
Setter method for lsp_sec_path_config_setup_prority, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_setup_prority (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_setup_prority is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_setup_prority() directly.
YANG Description: LSP setup priority
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-setup-prority", rest_name="lsp-sec-path-config-setup-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_setup_prority must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-setup-prority", rest_name="lsp-sec-path-config-setup-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__lsp_sec_path_config_setup_prority = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_setup_prority(self):
self.__lsp_sec_path_config_setup_prority = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-setup-prority", rest_name="lsp-sec-path-config-setup-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_lsp_sec_path_config_holding_prority(self):
"""
Getter method for lsp_sec_path_config_holding_prority, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_holding_prority (uint8)
YANG Description: LSP holding priority
"""
return self.__lsp_sec_path_config_holding_prority
def _set_lsp_sec_path_config_holding_prority(self, v, load=False):
"""
Setter method for lsp_sec_path_config_holding_prority, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_holding_prority (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_holding_prority is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_holding_prority() directly.
YANG Description: LSP holding priority
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-holding-prority", rest_name="lsp-sec-path-config-holding-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_holding_prority must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-holding-prority", rest_name="lsp-sec-path-config-holding-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__lsp_sec_path_config_holding_prority = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_holding_prority(self):
self.__lsp_sec_path_config_holding_prority = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-holding-prority", rest_name="lsp-sec-path-config-holding-prority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_lsp_sec_path_config_hop_limit_configured(self):
"""
Getter method for lsp_sec_path_config_hop_limit_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_hop_limit_configured (boolean)
YANG Description: LSP hop limit is configured
"""
return self.__lsp_sec_path_config_hop_limit_configured
def _set_lsp_sec_path_config_hop_limit_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_hop_limit_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_hop_limit_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_hop_limit_configured is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_hop_limit_configured() directly.
YANG Description: LSP hop limit is configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hop-limit-configured", rest_name="lsp-sec-path-config-hop-limit-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_hop_limit_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hop-limit-configured", rest_name="lsp-sec-path-config-hop-limit-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_hop_limit_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_hop_limit_configured(self):
self.__lsp_sec_path_config_hop_limit_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-hop-limit-configured", rest_name="lsp-sec-path-config-hop-limit-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_hop_limit(self):
"""
Getter method for lsp_sec_path_config_hop_limit, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_hop_limit (uint8)
YANG Description: LSP hop limit
"""
return self.__lsp_sec_path_config_hop_limit
def _set_lsp_sec_path_config_hop_limit(self, v, load=False):
"""
Setter method for lsp_sec_path_config_hop_limit, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_hop_limit (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_hop_limit is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_hop_limit() directly.
YANG Description: LSP hop limit
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-hop-limit", rest_name="lsp-sec-path-config-hop-limit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_hop_limit must be of a type compatible with uint8""",
'defined-type': "uint8",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-hop-limit", rest_name="lsp-sec-path-config-hop-limit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)""",
})
self.__lsp_sec_path_config_hop_limit = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_hop_limit(self):
self.__lsp_sec_path_config_hop_limit = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), is_leaf=True, yang_name="lsp-sec-path-config-hop-limit", rest_name="lsp-sec-path-config-hop-limit", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint8', is_config=True)
def _get_lsp_sec_path_config_traffic_eng_rate_configured(self):
"""
Getter method for lsp_sec_path_config_traffic_eng_rate_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_rate_configured (boolean)
YANG Description: LSP traffic engineering rates configured
"""
return self.__lsp_sec_path_config_traffic_eng_rate_configured
def _set_lsp_sec_path_config_traffic_eng_rate_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_traffic_eng_rate_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_rate_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_traffic_eng_rate_configured is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_traffic_eng_rate_configured() directly.
YANG Description: LSP traffic engineering rates configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-rate-configured", rest_name="lsp-sec-path-config-traffic-eng-rate-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_traffic_eng_rate_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-rate-configured", rest_name="lsp-sec-path-config-traffic-eng-rate-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_traffic_eng_rate_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_traffic_eng_rate_configured(self):
self.__lsp_sec_path_config_traffic_eng_rate_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-rate-configured", rest_name="lsp-sec-path-config-traffic-eng-rate-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_traffic_eng_mean_rate(self):
"""
Getter method for lsp_sec_path_config_traffic_eng_mean_rate, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_mean_rate (uint32)
YANG Description: LSP traffic engineering mean rate
"""
return self.__lsp_sec_path_config_traffic_eng_mean_rate
def _set_lsp_sec_path_config_traffic_eng_mean_rate(self, v, load=False):
"""
Setter method for lsp_sec_path_config_traffic_eng_mean_rate, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_mean_rate (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_traffic_eng_mean_rate is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_traffic_eng_mean_rate() directly.
YANG Description: LSP traffic engineering mean rate
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-mean-rate", rest_name="lsp-sec-path-config-traffic-eng-mean-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_traffic_eng_mean_rate must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-mean-rate", rest_name="lsp-sec-path-config-traffic-eng-mean-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_sec_path_config_traffic_eng_mean_rate = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_traffic_eng_mean_rate(self):
self.__lsp_sec_path_config_traffic_eng_mean_rate = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-mean-rate", rest_name="lsp-sec-path-config-traffic-eng-mean-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_sec_path_config_traffic_eng_max_rate(self):
"""
Getter method for lsp_sec_path_config_traffic_eng_max_rate, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_max_rate (uint32)
YANG Description: LSP traffic engineering max rate
"""
return self.__lsp_sec_path_config_traffic_eng_max_rate
def _set_lsp_sec_path_config_traffic_eng_max_rate(self, v, load=False):
"""
Setter method for lsp_sec_path_config_traffic_eng_max_rate, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_max_rate (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_traffic_eng_max_rate is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_traffic_eng_max_rate() directly.
YANG Description: LSP traffic engineering max rate
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-rate", rest_name="lsp-sec-path-config-traffic-eng-max-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_traffic_eng_max_rate must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-rate", rest_name="lsp-sec-path-config-traffic-eng-max-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_sec_path_config_traffic_eng_max_rate = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_traffic_eng_max_rate(self):
self.__lsp_sec_path_config_traffic_eng_max_rate = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-rate", rest_name="lsp-sec-path-config-traffic-eng-max-rate", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_sec_path_config_traffic_eng_max_burst(self):
"""
Getter method for lsp_sec_path_config_traffic_eng_max_burst, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_max_burst (uint32)
YANG Description: LSP traffic engineering max-burst
"""
return self.__lsp_sec_path_config_traffic_eng_max_burst
def _set_lsp_sec_path_config_traffic_eng_max_burst(self, v, load=False):
"""
Setter method for lsp_sec_path_config_traffic_eng_max_burst, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_traffic_eng_max_burst (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_traffic_eng_max_burst is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_traffic_eng_max_burst() directly.
YANG Description: LSP traffic engineering max-burst
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-burst", rest_name="lsp-sec-path-config-traffic-eng-max-burst", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_traffic_eng_max_burst must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-burst", rest_name="lsp-sec-path-config-traffic-eng-max-burst", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)""",
})
self.__lsp_sec_path_config_traffic_eng_max_burst = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_traffic_eng_max_burst(self):
self.__lsp_sec_path_config_traffic_eng_max_burst = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="lsp-sec-path-config-traffic-eng-max-burst", rest_name="lsp-sec-path-config-traffic-eng-max-burst", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='uint32', is_config=True)
def _get_lsp_sec_path_config_admin_group_configured(self):
"""
Getter method for lsp_sec_path_config_admin_group_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_admin_group_configured (boolean)
YANG Description: LSP secondary path admin group configured
"""
return self.__lsp_sec_path_config_admin_group_configured
def _set_lsp_sec_path_config_admin_group_configured(self, v, load=False):
"""
Setter method for lsp_sec_path_config_admin_group_configured, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_admin_group_configured (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_admin_group_configured is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_admin_group_configured() directly.
YANG Description: LSP secondary path admin group configured
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-admin-group-configured", rest_name="lsp-sec-path-config-admin-group-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_admin_group_configured must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-admin-group-configured", rest_name="lsp-sec-path-config-admin-group-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)""",
})
self.__lsp_sec_path_config_admin_group_configured = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_admin_group_configured(self):
self.__lsp_sec_path_config_admin_group_configured = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="lsp-sec-path-config-admin-group-configured", rest_name="lsp-sec-path-config-admin-group-configured", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='boolean', is_config=True)
def _get_lsp_sec_path_config_admin_groups(self):
"""
Getter method for lsp_sec_path_config_admin_groups, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_admin_groups (container)
"""
return self.__lsp_sec_path_config_admin_groups
def _set_lsp_sec_path_config_admin_groups(self, v, load=False):
"""
Setter method for lsp_sec_path_config_admin_groups, mapped from YANG variable /brocade_mpls_rpc/show_mpls_lsp_name_debug/output/lsp/show_mpls_lsp_extensive_info/show_mpls_lsp_sec_path_info/sec_path/lsp_sec_path_config_admin_groups (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_lsp_sec_path_config_admin_groups is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_lsp_sec_path_config_admin_groups() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=lsp_sec_path_config_admin_groups.lsp_sec_path_config_admin_groups, is_container='container', presence=False, yang_name="lsp-sec-path-config-admin-groups", rest_name="lsp-sec-path-config-admin-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions=None, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """lsp_sec_path_config_admin_groups must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=lsp_sec_path_config_admin_groups.lsp_sec_path_config_admin_groups, is_container='container', presence=False, yang_name="lsp-sec-path-config-admin-groups", rest_name="lsp-sec-path-config-admin-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions=None, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='container', is_config=True)""",
})
self.__lsp_sec_path_config_admin_groups = t
if hasattr(self, '_set'):
self._set()
def _unset_lsp_sec_path_config_admin_groups(self):
self.__lsp_sec_path_config_admin_groups = YANGDynClass(base=lsp_sec_path_config_admin_groups.lsp_sec_path_config_admin_groups, is_container='container', presence=False, yang_name="lsp-sec-path-config-admin-groups", rest_name="lsp-sec-path-config-admin-groups", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions=None, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='container', is_config=True)
lsp_sec_path_path_name = __builtin__.property(_get_lsp_sec_path_path_name, _set_lsp_sec_path_path_name)
lsp_sec_path_state = __builtin__.property(_get_lsp_sec_path_state, _set_lsp_sec_path_state)
lsp_sec_path_state_up = __builtin__.property(_get_lsp_sec_path_state_up, _set_lsp_sec_path_state_up)
lsp_sec_path_active = __builtin__.property(_get_lsp_sec_path_active, _set_lsp_sec_path_active)
lsp_sec_path_is_current_secondary = __builtin__.property(_get_lsp_sec_path_is_current_secondary, _set_lsp_sec_path_is_current_secondary)
lsp_sec_path_is_selected_secondary = __builtin__.property(_get_lsp_sec_path_is_selected_secondary, _set_lsp_sec_path_is_selected_secondary)
lsp_sec_path_config_reoptimize_timer_configured = __builtin__.property(_get_lsp_sec_path_config_reoptimize_timer_configured, _set_lsp_sec_path_config_reoptimize_timer_configured)
lsp_sec_path_config_reoptimize_timer = __builtin__.property(_get_lsp_sec_path_config_reoptimize_timer, _set_lsp_sec_path_config_reoptimize_timer)
lsp_sec_path_config_tspec_mtu_configured = __builtin__.property(_get_lsp_sec_path_config_tspec_mtu_configured, _set_lsp_sec_path_config_tspec_mtu_configured)
lsp_sec_path_sec_path_config_tspec_mtu = __builtin__.property(_get_lsp_sec_path_sec_path_config_tspec_mtu, _set_lsp_sec_path_sec_path_config_tspec_mtu)
lsp_sec_path_config_cos_configured = __builtin__.property(_get_lsp_sec_path_config_cos_configured, _set_lsp_sec_path_config_cos_configured)
lsp_sec_path_config_cos = __builtin__.property(_get_lsp_sec_path_config_cos, _set_lsp_sec_path_config_cos)
lsp_sec_path_config_mtu_configured = __builtin__.property(_get_lsp_sec_path_config_mtu_configured, _set_lsp_sec_path_config_mtu_configured)
lsp_sec_path_config_mtu = __builtin__.property(_get_lsp_sec_path_config_mtu, _set_lsp_sec_path_config_mtu)
lsp_sec_path_config_tie_breaking_configured = __builtin__.property(_get_lsp_sec_path_config_tie_breaking_configured, _set_lsp_sec_path_config_tie_breaking_configured)
lsp_sec_path_config_tie_break_random = __builtin__.property(_get_lsp_sec_path_config_tie_break_random, _set_lsp_sec_path_config_tie_break_random)
lsp_sec_path_config_tie_break_least_fill = __builtin__.property(_get_lsp_sec_path_config_tie_break_least_fill, _set_lsp_sec_path_config_tie_break_least_fill)
lsp_sec_path_config_tie_break_most_fill = __builtin__.property(_get_lsp_sec_path_config_tie_break_most_fill, _set_lsp_sec_path_config_tie_break_most_fill)
lsp_sec_path_config_cspf_disabled = __builtin__.property(_get_lsp_sec_path_config_cspf_disabled, _set_lsp_sec_path_config_cspf_disabled)
lsp_sec_path_config_hot_standby = __builtin__.property(_get_lsp_sec_path_config_hot_standby, _set_lsp_sec_path_config_hot_standby)
lsp_sec_path_config_pinned = __builtin__.property(_get_lsp_sec_path_config_pinned, _set_lsp_sec_path_config_pinned)
lsp_sec_path_config_persistent = __builtin__.property(_get_lsp_sec_path_config_persistent, _set_lsp_sec_path_config_persistent)
lsp_sec_path_config_soft_prempt = __builtin__.property(_get_lsp_sec_path_config_soft_prempt, _set_lsp_sec_path_config_soft_prempt)
lsp_sec_path_config_priority_configured = __builtin__.property(_get_lsp_sec_path_config_priority_configured, _set_lsp_sec_path_config_priority_configured)
lsp_sec_path_config_setup_prority = __builtin__.property(_get_lsp_sec_path_config_setup_prority, _set_lsp_sec_path_config_setup_prority)
lsp_sec_path_config_holding_prority = __builtin__.property(_get_lsp_sec_path_config_holding_prority, _set_lsp_sec_path_config_holding_prority)
lsp_sec_path_config_hop_limit_configured = __builtin__.property(_get_lsp_sec_path_config_hop_limit_configured, _set_lsp_sec_path_config_hop_limit_configured)
lsp_sec_path_config_hop_limit = __builtin__.property(_get_lsp_sec_path_config_hop_limit, _set_lsp_sec_path_config_hop_limit)
lsp_sec_path_config_traffic_eng_rate_configured = __builtin__.property(_get_lsp_sec_path_config_traffic_eng_rate_configured, _set_lsp_sec_path_config_traffic_eng_rate_configured)
lsp_sec_path_config_traffic_eng_mean_rate = __builtin__.property(_get_lsp_sec_path_config_traffic_eng_mean_rate, _set_lsp_sec_path_config_traffic_eng_mean_rate)
lsp_sec_path_config_traffic_eng_max_rate = __builtin__.property(_get_lsp_sec_path_config_traffic_eng_max_rate, _set_lsp_sec_path_config_traffic_eng_max_rate)
lsp_sec_path_config_traffic_eng_max_burst = __builtin__.property(_get_lsp_sec_path_config_traffic_eng_max_burst, _set_lsp_sec_path_config_traffic_eng_max_burst)
lsp_sec_path_config_admin_group_configured = __builtin__.property(_get_lsp_sec_path_config_admin_group_configured, _set_lsp_sec_path_config_admin_group_configured)
lsp_sec_path_config_admin_groups = __builtin__.property(_get_lsp_sec_path_config_admin_groups, _set_lsp_sec_path_config_admin_groups)
_pyangbind_elements = {'lsp_sec_path_path_name': lsp_sec_path_path_name, 'lsp_sec_path_state': lsp_sec_path_state, 'lsp_sec_path_state_up': lsp_sec_path_state_up, 'lsp_sec_path_active': lsp_sec_path_active, 'lsp_sec_path_is_current_secondary': lsp_sec_path_is_current_secondary, 'lsp_sec_path_is_selected_secondary': lsp_sec_path_is_selected_secondary, 'lsp_sec_path_config_reoptimize_timer_configured': lsp_sec_path_config_reoptimize_timer_configured, 'lsp_sec_path_config_reoptimize_timer': lsp_sec_path_config_reoptimize_timer, 'lsp_sec_path_config_tspec_mtu_configured': lsp_sec_path_config_tspec_mtu_configured, 'lsp_sec_path_sec_path_config_tspec_mtu': lsp_sec_path_sec_path_config_tspec_mtu, 'lsp_sec_path_config_cos_configured': lsp_sec_path_config_cos_configured, 'lsp_sec_path_config_cos': lsp_sec_path_config_cos, 'lsp_sec_path_config_mtu_configured': lsp_sec_path_config_mtu_configured, 'lsp_sec_path_config_mtu': lsp_sec_path_config_mtu, 'lsp_sec_path_config_tie_breaking_configured': lsp_sec_path_config_tie_breaking_configured, 'lsp_sec_path_config_tie_break_random': lsp_sec_path_config_tie_break_random, 'lsp_sec_path_config_tie_break_least_fill': lsp_sec_path_config_tie_break_least_fill, 'lsp_sec_path_config_tie_break_most_fill': lsp_sec_path_config_tie_break_most_fill, 'lsp_sec_path_config_cspf_disabled': lsp_sec_path_config_cspf_disabled, 'lsp_sec_path_config_hot_standby': lsp_sec_path_config_hot_standby, 'lsp_sec_path_config_pinned': lsp_sec_path_config_pinned, 'lsp_sec_path_config_persistent': lsp_sec_path_config_persistent, 'lsp_sec_path_config_soft_prempt': lsp_sec_path_config_soft_prempt, 'lsp_sec_path_config_priority_configured': lsp_sec_path_config_priority_configured, 'lsp_sec_path_config_setup_prority': lsp_sec_path_config_setup_prority, 'lsp_sec_path_config_holding_prority': lsp_sec_path_config_holding_prority, 'lsp_sec_path_config_hop_limit_configured': lsp_sec_path_config_hop_limit_configured, 'lsp_sec_path_config_hop_limit': 
lsp_sec_path_config_hop_limit, 'lsp_sec_path_config_traffic_eng_rate_configured': lsp_sec_path_config_traffic_eng_rate_configured, 'lsp_sec_path_config_traffic_eng_mean_rate': lsp_sec_path_config_traffic_eng_mean_rate, 'lsp_sec_path_config_traffic_eng_max_rate': lsp_sec_path_config_traffic_eng_max_rate, 'lsp_sec_path_config_traffic_eng_max_burst': lsp_sec_path_config_traffic_eng_max_burst, 'lsp_sec_path_config_admin_group_configured': lsp_sec_path_config_admin_group_configured, 'lsp_sec_path_config_admin_groups': lsp_sec_path_config_admin_groups, }
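The generated getter/setter/unset triples above all follow the same pyangbind pattern: the setter validates the incoming value against the YANG type before storing it, and the unset method restores a freshly constructed default. A minimal, self-contained sketch of that pattern for a single uint8 leaf (an assumed simplification; the real `YANGDynClass` and `RestrictedClassType` come from pyangbind and carry far more metadata):

```python
def make_uint8(value):
    """Validate a value as a YANG uint8 (range 0..255), mirroring the
    RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']})
    used by the generated setters above."""
    v = int(value)
    if not 0 <= v <= 255:
        # The generated code raises ValueError with a structured dict payload.
        raise ValueError({
            'error-string': "value must be of a type compatible with uint8",
            'defined-type': "uint8",
        })
    return v

class Uint8Leaf:
    """Mimics the generated _get/_set/_unset triple for one leaf."""
    def __init__(self):
        self._value = make_uint8(0)   # _unset_* rebuilds the default value
    def set(self, v):                 # _set_* validates before storing
        self._value = make_uint8(v)
    def get(self):                    # _get_* simply returns the stored value
        return self._value

leaf = Uint8Leaf()
leaf.set(200)
```

The structured `ValueError` payload lets callers report both the offending leaf and its YANG-defined type, which is why the generated setters wrap construction in a try/except rather than validating inline.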
# === rlkit/torch/sac/gcs/gcs_path_collector.py (MIT license) ===
] | null | null | null | from rlkit.samplers.data_collector.path_collector import MdpPathCollector
from collections import OrderedDict, deque
from rlkit.samplers.rollout_functions import rollout
from rlkit.core.eval_util import create_stats_ordered_dict
from rlkit.torch.core import eval_np
from rlkit.envs.env_utils import get_dim
from rlkit.torch import pytorch_util as ptu
import numpy as np
class GCSMdpPathCollector(MdpPathCollector):
def __init__(self,
env,
policy,
max_num_epoch_paths_saved=None,
render=False,
render_kwargs=None,
exclude_obs_ind=None,
goal_ind=None,
target_obs_name=None,
skill_horizon=1):
super().__init__(
env,
policy,
max_num_epoch_paths_saved,
render,
render_kwargs,
)
self.goal_ind = goal_ind
self.skill_horizon = skill_horizon
self.exclude_obs_ind = exclude_obs_ind
self.target_obs_name = target_obs_name
if exclude_obs_ind:
obs_len = get_dim(env.observation_space)
self.obs_ind = get_indices(obs_len, exclude_obs_ind)
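`get_indices` is not defined or imported in this module; from how `self.obs_ind` is later used to slice `o[self.obs_ind]`, it presumably returns the observation indices that remain after removing the excluded ones. A hypothetical sketch of such a helper (the name and signature are taken from the call above, the body is an assumption):

```python
import numpy as np

def get_indices(obs_len, exclude_obs_ind):
    """Hypothetical helper: indices 0..obs_len-1 with the excluded
    positions removed, suitable for fancy-indexing an observation."""
    excluded = set(exclude_obs_ind)
    return np.array([i for i in range(obs_len) if i not in excluded])

# e.g. a 5-dim observation whose dims 1 and 3 are hidden from the policy
idx = get_indices(5, [1, 3])
```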
def collect_new_paths(
self,
max_path_length,
num_steps,
discard_incomplete_paths,
):
paths = []
num_steps_collected = 0
while num_steps_collected < num_steps:
max_path_length_this_loop = min( # Do not go over num_steps
max_path_length,
num_steps - num_steps_collected,
)
# self._policy.skill_reset()
path = self._rollout(
max_path_length=max_path_length_this_loop,
skill_horizon=self.skill_horizon,
render=self._render
)
# path = self._rollout2(
# max_path_length=max_path_length_this_loop,
# render=self._render
# )
# path = rollout(
# env=self._env,
# agent=self._policy,
# max_path_length=max_path_length_this_loop,
# render=self._render
# )
path_len = len(path['actions'])
if (
path_len != max_path_length
and not path['terminals'][-1]
and discard_incomplete_paths
):
break
num_steps_collected += path_len
paths.append(path)
self._num_paths_total += len(paths)
self._num_steps_total += num_steps_collected
self._epoch_paths.extend(paths)
return paths
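The budget logic in `collect_new_paths` can be isolated as follows: each rollout is capped at the smaller of `max_path_length` and the remaining step budget, and a path that is both shorter than `max_path_length` and non-terminal is discarded when `discard_incomplete_paths` is set. A standalone sketch with a stub rollout (the stub is illustrative only, not the real environment interaction):

```python
def collect(num_steps, max_path_length, rollout_len, discard_incomplete_paths=True):
    """Mirror the step-budget loop of collect_new_paths with a stub rollout
    that always runs for rollout_len steps unless capped sooner."""
    paths, collected = [], 0
    while collected < num_steps:
        cap = min(max_path_length, num_steps - collected)  # don't exceed budget
        path_len = min(rollout_len, cap)                   # stub "rollout"
        terminal = path_len == rollout_len                 # env "done" iff uncapped
        if path_len != max_path_length and not terminal and discard_incomplete_paths:
            break                                          # drop incomplete tail path
        collected += path_len
        paths.append(path_len)
    return paths, collected
```

With a budget of 10 steps and `max_path_length = rollout_len = 4`, two full paths are kept and the capped 2-step tail is discarded, so only 8 of the 10 budgeted steps are actually collected.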
def _rollout(
self,
skill_horizon=1,
max_path_length=np.inf,
render=False,
render_kwargs=None,
):
"""
The following value for the following keys will be a 2D array, with the
first dimension corresponding to the time dimension.
- observations
- actions
- rewards
- next_observations
- terminals
The next two elements will be lists of dictionaries, with the index into
the list being the index into the time
- agent_infos
- env_infos
"""
if render_kwargs is None:
render_kwargs = {}
observations = []
actions = []
rewards = []
terminals = []
agent_infos = []
env_infos = []
skill_goals = []
current_states = []
next_states = []
skill_steps = []
o = self._env.reset()
if self.target_obs_name is not None:
o = o[self.target_obs_name]
o_policy = o
if self.exclude_obs_ind:
o_policy = o[self.obs_ind]
self._policy.reset()
next_o = None
path_length = 0
skill_step = 0
if render:
self._env.render(**render_kwargs)
while path_length < max_path_length:
a, agent_info = self._policy.get_action(o_policy, return_log_prob=True)
next_o, r, d, env_info = self._env.step(a)
if self.target_obs_name is not None:
next_o = next_o[self.target_obs_name]
observations.append(o)
rewards.append(r)
terminals.append(d)
actions.append(a)
agent_infos.append(agent_info)
env_infos.append(env_info)
skill_steps.append(skill_step)
if self.goal_ind:
current_states.append(o[self.goal_ind])
next_states.append(next_o[self.goal_ind])
else:
current_states.append(o)
next_states.append(next_o)
path_length += 1
skill_step += 1
if skill_step >= skill_horizon:
skill_step = 0
self._policy.skill_reset()
if self.goal_ind:
skill_goals.append(next_o[self.goal_ind])
else:
skill_goals.append(next_o)
if max_path_length == np.inf and d:
if self.goal_ind:
skill_goals.append(next_o[self.goal_ind])
else:
skill_goals.append(next_o)
break
o = next_o
o_policy = o
if self.exclude_obs_ind:
o_policy = o_policy[self.obs_ind]
if render:
self._env.render(**render_kwargs)
actions = np.array(actions)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 1)
current_states = np.array(current_states)
if len(current_states.shape) == 1:
current_states = np.expand_dims(current_states, 1)
next_states = np.array(next_states)
if len(next_states.shape) == 1:
next_states = np.expand_dims(next_states, 1)
# skill_steps = np.array(skill_steps)
# if len(skill_steps.shape) == 1:
# skill_steps = np.expand_dims(skill_steps, 1)
observations = np.array(observations)
if len(observations.shape) == 1:
observations = np.expand_dims(observations, 1)
next_o = np.array([next_o])
next_observations = np.vstack(
(
observations[1:, :],
np.expand_dims(next_o, 0)
)
)
skill_goals = np.repeat(np.array(skill_goals), skill_horizon, axis=0)[:len(observations)]
return dict(
observations=observations,
actions=actions,
rewards=np.array(rewards).reshape(-1, 1),
next_observations=next_observations,
terminals=np.array(terminals).reshape(-1, 1),
agent_infos=agent_infos,
env_infos=env_infos,
skill_goals=skill_goals,
current_states=current_states,
next_states=next_states,
# skill_steps=np.array(skill_steps).reshape(-1,1),
)
def _rollout2(
self,
skill_horizon=1,
max_path_length=np.inf,
render=False,
render_kwargs=None,
):
"""
The values for the following keys will be 2D arrays, with the
first dimension corresponding to the time dimension:
- observations
- actions
- rewards
- next_observations
- terminals
The next two entries are lists of dictionaries, indexed by timestep:
- agent_infos
- env_infos
"""
if render_kwargs is None:
render_kwargs = {}
observations = []
actions = []
rewards = []
terminals = []
agent_infos = []
env_infos = []
skill_goals = []
current_states = []
o = self._env.reset()
o_policy = o
if self.exclude_obs_ind:
o_policy = o[self.obs_ind]
self._policy.reset()
next_o = None
last_next_o = None
path_length = 0
skill_step = 0
if render:
self._env.render(**render_kwargs)
while path_length < max_path_length:
a, agent_info = self._policy.get_action(o_policy, return_log_prob=True)
next_o, r, d, env_info = self._env.step(a)
if path_length <= max_path_length - skill_horizon:
observations.append(o)
rewards.append(r)
terminals.append(d)
actions.append(a)
agent_infos.append(agent_info)
env_infos.append(env_info)
last_next_o = next_o
if self.goal_ind:
current_states.append(o[self.goal_ind])
else:
current_states.append(o)
path_length += 1
if path_length >= skill_horizon:
if self.goal_ind:
skill_goals.append(next_o[self.goal_ind])
else:
skill_goals.append(next_o)
if max_path_length == np.inf and d:
raise NotImplementedError()
break
o = next_o
o_policy = o
if self.exclude_obs_ind:
o_policy = o_policy[self.obs_ind]
if render:
self._env.render(**render_kwargs)
actions = np.array(actions)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 1)
current_states = np.array(current_states)
if len(current_states.shape) == 1:
current_states = np.expand_dims(current_states, 1)
observations = np.array(observations)
if len(observations.shape) == 1:
observations = np.expand_dims(observations, 1)
next_observations = np.vstack(
(
observations[1:, :],
np.expand_dims(last_next_o, 0)
)
)
skill_goals = np.array(skill_goals)
if len(skill_goals.shape) == 1:
skill_goals = np.expand_dims(skill_goals, 1)
return dict(
observations=observations,
actions=actions,
rewards=np.array(rewards).reshape(-1, 1),
next_observations=next_observations,
terminals=np.array(terminals).reshape(-1, 1),
agent_infos=agent_infos,
env_infos=env_infos,
skill_goals=skill_goals,
current_states=current_states
)
class GCSMdpPathCollector2(MdpPathCollector):
def __init__(self,
env,
policy,
goal_buffer,
skill_discriminator,
max_num_epoch_paths_saved=None,
render=False,
render_kwargs=None,
exclude_obs_ind=None,
goal_ind=None,
skill_horizon=1):
super().__init__(
env,
policy,
max_num_epoch_paths_saved,
render,
render_kwargs,
)
self.goal_ind = goal_ind
self.skill_horizon = skill_horizon
self.exclude_obs_ind = exclude_obs_ind
self.goal_condition_training = False
self.goal_buffer = goal_buffer
self.skill_discriminator = skill_discriminator
# self.mean, self.std = get_stats(skill_horizon)
if exclude_obs_ind:
obs_len = get_dim(env.observation_space)
self.obs_ind = get_indices(obs_len, exclude_obs_ind)
def collect_new_paths(
self,
max_path_length,
num_steps,
discard_incomplete_paths,
goal_condition_training=False,
):
paths = []
num_steps_collected = 0
while num_steps_collected < num_steps:
max_path_length_this_loop = min( # Do not go over num_steps
max_path_length,
num_steps - num_steps_collected,
)
if goal_condition_training:
goal_conditioned = np.random.choice([True,False], 1)[0]
else:
goal_conditioned = False
path, skill_goals = self._rollout(
max_path_length=max_path_length_this_loop,
skill_horizon=self.skill_horizon,
render=self._render,
goal_conditioned=goal_conditioned
)
if not goal_conditioned:
self.goal_buffer.add(skill_goals)
path_len = len(path['actions'])
if (
path_len != max_path_length
and not path['terminals'][-1]
and discard_incomplete_paths
):
break
num_steps_collected += path_len
paths.append(path)
self._num_paths_total += len(paths)
self._num_steps_total += num_steps_collected
self._epoch_paths.extend(paths)
return paths, skill_goals
def _rollout(
self,
skill_horizon=1,
max_path_length=np.inf,
render=False,
render_kwargs=None,
goal_conditioned=False,
):
"""
The values for the following keys will be 2D arrays, with the
first dimension corresponding to the time dimension:
- observations
- actions
- rewards
- next_observations
- terminals
The next two entries are lists of dictionaries, indexed by timestep:
- agent_infos
- env_infos
"""
if render_kwargs is None:
render_kwargs = {}
observations = []
actions = []
rewards = []
terminals = []
agent_infos = []
env_infos = []
skill_goals = []
current_states = []
next_states = []
skill_steps = []
o = self._env.reset()
o_policy = o
if self.exclude_obs_ind:
o_policy = o[self.obs_ind]
self._policy.reset()
next_o = None
skill = None
path_length = 0
skill_step = 0
if render:
self._env.render(**render_kwargs)
if goal_conditioned:
if self.goal_ind:
sampled_goal = self.goal_buffer.pick(far_away_from=o[self.goal_ind])
sd_input = np.array(np.concatenate((o_policy, sampled_goal[self.goal_ind] - o[self.goal_ind])))
else:
sampled_goal = self.goal_buffer.pick(far_away_from=o)
sd_input = np.array(np.concatenate((o_policy, sampled_goal - o)))
skill = ptu.get_numpy(eval_np(self.skill_discriminator, sd_input).mean)
self._policy.set_skill(skill)
else:
self._policy.skill_reset()
while path_length < max_path_length:
a, agent_info = self._policy.get_action(o_policy, return_log_prob=False)
next_o, r, d, env_info = self._env.step(a)
observations.append(o)
# rewards.append(r)
if goal_conditioned:
rewards.append(calc_reward(o, sampled_goal))
terminals.append(d)
actions.append(a)
agent_infos.append(agent_info)
env_infos.append(env_info)
if self.goal_ind:
current_states.append(o[self.goal_ind])
next_states.append(next_o[self.goal_ind])
else:
current_states.append(o)
next_states.append(next_o)
path_length += 1
skill_step += 1
if skill_step >= skill_horizon:
skill_step = 0
# self._policy.skill_reset()
if self.goal_ind:
skill_goals.append(next_o[self.goal_ind])
else:
skill_goals.append(next_o)
if max_path_length == np.inf and d:
# if self.goal_ind:
# skill_goals.append(next_o[self.goal_ind])
# else:
# skill_goals.append(next_o)
raise NotImplementedError
break
o = next_o
o_policy = o
if self.exclude_obs_ind:
o_policy = o_policy[self.obs_ind]
if render:
self._env.render(**render_kwargs)
actions = np.array(actions)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 1)
current_states = np.array(current_states)
if len(current_states.shape) == 1:
current_states = np.expand_dims(current_states, 1)
next_states = np.array(next_states)
if len(next_states.shape) == 1:
next_states = np.expand_dims(next_states, 1)
# skill_steps = np.array(skill_steps)
# if len(skill_steps.shape) == 1:
# skill_steps = np.expand_dims(skill_steps, 1)
observations = np.array(observations)
if len(observations.shape) == 1:
observations = np.expand_dims(observations, 1)
next_o = np.array([next_o])
next_observations = np.vstack(
(
observations[1:, :],
np.expand_dims(next_o, 0)
)
)
if goal_conditioned:
skill_goals = np.array([sampled_goal])
obs_skill_goals = np.repeat(skill_goals, len(observations), axis=0)
rewards = np.array(rewards)
else:
skill_goals = np.array(skill_goals)
if len(skill_goals.shape) == 1:
skill_goals = np.expand_dims(skill_goals, 1)
obs_skill_goals = np.repeat(skill_goals, skill_horizon, axis=0)[:len(observations)]
rewards = calc_reward(observations, obs_skill_goals)
return dict(
observations=observations,
actions=actions,
rewards=rewards.reshape(-1,1),
next_observations=next_observations,
terminals=np.array(terminals).reshape(-1, 1),
agent_infos=agent_infos,
env_infos=env_infos,
skill_goals=obs_skill_goals,
current_states=current_states,
next_states=next_states,
), skill_goals
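The per-step goal alignment in the non-conditioned branch above can be seen in isolation: one goal is recorded per completed skill segment, np.repeat broadcasts each goal over skill_horizon timesteps, and the result is truncated to the path length (the last segment may be incomplete). The values below are made up for illustration.

```python
import numpy as np

skill_horizon = 3
# One recorded goal per completed skill segment (illustrative 1-D goals).
skill_goals = np.array([[10.0], [20.0]])
# Five timesteps collected: one full segment plus a partial second segment.
num_steps = 5

# Each timestep is paired with the goal of the segment it belongs to.
obs_skill_goals = np.repeat(skill_goals, skill_horizon, axis=0)[:num_steps]
```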
class GCSMdpPathCollector3(MdpPathCollector):
def __init__(self,
env,
policy,
max_num_epoch_paths_saved=None,
render=False,
render_kwargs=None,
exclude_obs_ind=None,
goal_ind=None,
skill_horizon=1):
super().__init__(
env,
policy,
max_num_epoch_paths_saved,
render,
render_kwargs,
)
self.goal_ind = goal_ind
self.skill_horizon = skill_horizon
self.exclude_obs_ind = exclude_obs_ind
# self.mean, self.std = get_stats(skill_horizon)
self._goal_paths = deque(maxlen=self._max_num_epoch_paths_saved)
if exclude_obs_ind:
obs_len = get_dim(env.observation_space)
self.obs_ind = get_indices(obs_len, exclude_obs_ind)
def collect_new_paths(
self,
max_path_length,
num_steps,
discard_incomplete_paths,
):
paths = []
list_goal_path = []
num_steps_collected = 0
while num_steps_collected < num_steps:
max_path_length_this_loop = min( # Do not go over num_steps
max_path_length,
num_steps - num_steps_collected,
)
self._policy.skill_reset()
path, goal_path = self._rollout(
max_path_length=max_path_length_this_loop,
skill_horizon=self.skill_horizon,
render=self._render
)
# path = self._rollout2(
# max_path_length=max_path_length_this_loop,
# render=self._render
# )
# path = rollout(
# env=self._env,
# agent=self._policy,
# max_path_length=max_path_length_this_loop,
# render=self._render
# )
path_len = len(path['actions'])
if (
path_len != max_path_length
and not path['terminals'][-1]
and discard_incomplete_paths
):
break
num_steps_collected += path_len
paths.append(path)
list_goal_path.append(goal_path)
self._num_paths_total += len(paths)
self._num_steps_total += num_steps_collected
self._epoch_paths.extend(paths)
self._goal_paths.extend(list_goal_path)
return paths
def _rollout(
self,
skill_horizon=1,
max_path_length=np.inf,
render=False,
render_kwargs=None,
):
"""
The values for the following keys will be 2D arrays, with the
first dimension corresponding to the time dimension:
- observations
- actions
- rewards
- next_observations
- terminals
The next two entries are lists of dictionaries, indexed by timestep:
- agent_infos
- env_infos
"""
if render_kwargs is None:
render_kwargs = {}
observations = []
actions = []
rewards = []
terminals = []
agent_infos = []
env_infos = []
skill_goals = []
current_states = []
next_states = []
skill_start_states = []
skills = []
o = self._env.reset()
o_policy = o
if self.exclude_obs_ind:
o_policy = o[self.obs_ind]
self._policy.reset()
skill_start_states.append(o)
skills.append(self._policy.skill)
next_o = None
path_length = 0
skill_step = 0
if render:
self._env.render(**render_kwargs)
while path_length < max_path_length:
a, agent_info = self._policy.get_action(o_policy, return_log_prob=True)
next_o, r, d, env_info = self._env.step(a)
observations.append(o)
rewards.append(r)
terminals.append(d)
actions.append(a)
agent_infos.append(agent_info)
env_infos.append(env_info)
if self.goal_ind:
current_states.append(o[self.goal_ind])
next_states.append(next_o[self.goal_ind])
else:
current_states.append(o)
next_states.append(next_o)
path_length += 1
skill_step += 1
if skill_step >= skill_horizon:
skill_step = 0
self._policy.skill_reset()
skill_start_states.append(next_o)
skills.append(self._policy.skill)
if self.goal_ind:
skill_goals.append(next_o[self.goal_ind])
else:
skill_goals.append(next_o)
if max_path_length == np.inf and d:
if self.goal_ind:
skill_goals.append(next_o[self.goal_ind])
else:
skill_goals.append(next_o)
break
o = next_o
o_policy = o
if self.exclude_obs_ind:
o_policy = o_policy[self.obs_ind]
if render:
self._env.render(**render_kwargs)
actions = np.array(actions)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 1)
current_states = np.array(current_states)
if len(current_states.shape) == 1:
current_states = np.expand_dims(current_states, 1)
next_states = np.array(next_states)
if len(next_states.shape) == 1:
next_states = np.expand_dims(next_states, 1)
# skill_steps = np.array(skill_steps)
# if len(skill_steps.shape) == 1:
# skill_steps = np.expand_dims(skill_steps, 1)
observations = np.array(observations)
if len(observations.shape) == 1:
observations = np.expand_dims(observations, 1)
next_o = np.array([next_o])
next_observations = np.vstack(
(
observations[1:, :],
np.expand_dims(next_o, 0)
)
)
skill_goals = np.repeat(np.array(skill_goals), skill_horizon, axis=0)[:len(observations)]
# For discriminator
skill_start_states = np.array(skill_start_states)
if len(skill_start_states.shape) == 1:
skill_start_states = np.expand_dims(skill_start_states, 1)
final_state = next_observations[None, -1].repeat(len(skill_start_states), axis=0)
skills = np.array(skills)
if len(skills.shape) == 1:
skills = np.expand_dims(skills, 1)
return dict(
observations=observations,
actions=actions,
rewards=np.array(rewards).reshape(-1, 1),
next_observations=next_observations,
terminals=np.array(terminals).reshape(-1, 1),
agent_infos=agent_infos,
env_infos=env_infos,
skill_goals=skill_goals,
current_states=current_states,
next_states=next_states,
), dict(
start_states=skill_start_states,
final_states=final_state,
skills=skills,
)
def end_epoch(self, epoch):
self._start_goal_pairs_np = None
super().end_epoch(epoch)
def get_epoch_goal_paths(self):
return self._goal_paths
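self._goal_paths is a bounded deque, so once max_num_epoch_paths_saved goal paths have accumulated, the oldest entries are silently evicted. A minimal illustration (the path objects here are placeholder strings):

```python
from collections import deque

# Mirrors deque(maxlen=self._max_num_epoch_paths_saved) with a capacity of 2.
goal_paths = deque(maxlen=2)
for goal_path in ("path_a", "path_b", "path_c"):
    goal_paths.append(goal_path)
# "path_a" has been evicted: only the two most recent entries remain.
```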
class GCSPathCollector(MdpPathCollector):
def __init__(self,
env,
policy,
df,
max_num_epoch_paths_saved=None,
render=False,
render_kwargs=None,
exclude_obs_ind=None,
goal_ind=None,
target_obs_name=None,
skill_horizon=1):
super().__init__(
env,
policy,
max_num_epoch_paths_saved,
render,
render_kwargs,
)
self.df = df
self.goal_ind = goal_ind
self.skill_horizon = skill_horizon
self.exclude_obs_ind = exclude_obs_ind
self.target_obs_name = target_obs_name
if exclude_obs_ind:
obs_len = get_dim(env.observation_space)
self.obs_ind = get_indices(obs_len, exclude_obs_ind)
def collect_new_paths(
self,
max_path_length,
num_steps,
discard_incomplete_paths,
):
paths = []
num_steps_collected = 0
while num_steps_collected < num_steps:
max_path_length_this_loop = min( # Do not go over num_steps
max_path_length,
num_steps - num_steps_collected,
)
path = GCSRollout(
env = self._env,
agent=self._policy,
df=self.df,
max_path_length=max_path_length_this_loop,
render=self._render
)
path_len = len(path['actions'])
if (
path_len != max_path_length
and not path['terminals'][-1]
and discard_incomplete_paths
):
break
num_steps_collected += path_len
paths.append(path)
self._num_paths_total += len(paths)
self._num_steps_total += num_steps_collected
self._epoch_paths.extend(paths)
return paths
def get_diagnostics(self):
total = 0.
success_count = 0.
dist_to_goal = 0.
for path in self._epoch_paths:
success_count += path['env_infos'][-1]['is_success']
total += 1
dist_to_goal += path['rewards'][-1][0] / path['rewards'][0][0]
return {
'success rate': success_count/total,
'distance to goal': dist_to_goal/total,
}
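The diagnostics above reduce an epoch's paths to a success rate and a normalized distance (final reward divided by initial reward, which shrinks toward 0 as the agent ends near the goal). A self-contained rerun of the same arithmetic on two hypothetical paths:

```python
# Two made-up paths with the fields get_diagnostics reads.
fake_paths = [
    {"env_infos": [{"is_success": True}],  "rewards": [[-10.0], [-2.0]]},
    {"env_infos": [{"is_success": False}], "rewards": [[-8.0],  [-4.0]]},
]
total = success_count = dist_to_goal = 0.0
for path in fake_paths:
    success_count += path["env_infos"][-1]["is_success"]
    total += 1
    dist_to_goal += path["rewards"][-1][0] / path["rewards"][0][0]
diagnostics = {
    "success rate": success_count / total,     # 1 success out of 2 paths
    "distance to goal": dist_to_goal / total,  # mean of 0.2 and 0.5
}
```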
def GCSRollout(env, agent, df, max_path_length=np.inf, render=False):
observations = []
actions = []
rewards = []
terminals = []
agent_infos = []
env_infos = []
images = []
o_env = env.reset()
o = o_env['observation']
goal = o_env['desired_goal']
next_o = None
path_length = 0
if render:
img = env.render('rgb_array')
# img = env.render(mode= 'rgb_array',width=1900,height=860)
# env.viewer.cam.fixedcamid = 0
# env.viewer.cam.type = 2
images.append(img)
df_input = ptu.FloatTensor(np.concatenate([o[:3], goal]))
skill = df(df_input).mean
agent.set_skill(ptu.get_numpy(skill))
while path_length < max_path_length:
a, agent_info = agent.get_action(o)
next_o, r, d, env_info = env.step(a)
observations.append(o)
rewards.append(r)
terminals.append(d)
actions.append(a)
agent_infos.append(agent_info)
env_infos.append(env_info)
path_length += 1
next_o = next_o['observation']
if max_path_length == np.inf and d:
break
o = next_o
if render:
img = env.render('rgb_array')
# img = env.render(mode= 'rgb_array',width=1900,height=860)
images.append(img)
actions = np.array(actions)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 1)
observations = np.array(observations)
if len(observations.shape) == 1:
observations = np.expand_dims(observations, 1)
next_o = np.array([next_o])
next_observations = np.vstack(
(
observations[1:, :],
np.expand_dims(next_o, 0)
)
)
return dict(
observations=observations,
actions=actions,
rewards=np.array(rewards).reshape(-1, 1),
next_observations=next_observations,
terminals=np.array(terminals).reshape(-1, 1),
agent_infos=agent_infos,
env_infos=env_infos,
images=images
)
def get_indices(length, exclude_ind):
length = np.arange(length)
exclude_ind = np.array(exclude_ind).reshape(-1,1)
return np.nonzero(~np.any(length == exclude_ind, axis=0))[0]
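get_indices builds the complement of exclude_ind via broadcasting: arange(length) (shape (L,)) is compared against the column vector of excluded indices (shape (K, 1)), producing a (K, L) boolean grid; np.any over axis 0 marks excluded positions, and np.nonzero returns the survivors. The helper is restated below only so the snippet runs standalone:

```python
import numpy as np

def get_indices(length, exclude_ind):
    # Broadcast (L,) against (K, 1) to mark excluded positions, keep the rest.
    length = np.arange(length)
    exclude_ind = np.array(exclude_ind).reshape(-1, 1)
    return np.nonzero(~np.any(length == exclude_ind, axis=0))[0]

# Excluding positions 1 and 3 from a 5-dimensional observation keeps 0, 2, 4.
kept = get_indices(5, [1, 3])
```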
def calc_reward(obs, goal):
if len(goal.shape) > 1:
return -np.sqrt(np.sum(np.square(obs - goal), axis=1))
return -np.sqrt(np.sum(np.square(np.subtract(obs, goal))))
def get_stats(horizon):
h = np.arange(horizon)
return h.mean(), h.std()
def normalize(x, mean, std):
return (x - mean)/(std + 1e-8)
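calc_reward returns the negative Euclidean distance to the goal — per timestep when goal is a 2D batch, or a single scalar for 1D inputs — and normalize standardizes with a small epsilon guard against zero std. Both helpers are restated below only so the example runs standalone (the observation values are illustrative):

```python
import numpy as np

def calc_reward(obs, goal):
    # Batched goals: one negative distance per timestep.
    if len(goal.shape) > 1:
        return -np.sqrt(np.sum(np.square(obs - goal), axis=1))
    # Single goal: one scalar negative distance.
    return -np.sqrt(np.sum(np.square(np.subtract(obs, goal))))

def normalize(x, mean, std):
    return (x - mean) / (std + 1e-8)

obs = np.array([[0.0, 0.0], [3.0, 4.0]])
goals = np.array([[3.0, 4.0], [3.0, 4.0]])
batch_r = calc_reward(obs, goals)          # one reward per timestep
single_r = calc_reward(obs[0], goals[0])   # scalar for 1-D inputs
z = normalize(np.array([0.0, 5.0]), mean=2.5, std=2.5)
```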
# ---------------------------------------------------------------------------
# File: azure-mgmt-servermanager/azure/mgmt/servermanager/operations/node_operations.py
# Repo: v-Ajnava/azure-sdk-for-python (MIT License)
# ---------------------------------------------------------------------------
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_operation import AzureOperationPoller
import uuid
from .. import models
class NodeOperations(object):
"""NodeOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: Client API Version. Constant value: "2016-07-01-preview".
"""
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2016-07-01-preview"
self.config = config
def create(
self, resource_group_name, node_name, location=None, tags=None, gateway_id=None, connection_name=None, user_name=None, password=None, custom_headers=None, raw=False, **operation_config):
"""Creates or updates a management node.
:param resource_group_name: The resource group name uniquely
identifies the resource group within the user subscriptionId.
:type resource_group_name: str
:param node_name: The node name (256 characters maximum).
:type node_name: str
:param location: Location of the resource.
:type location: str
:param tags: Resource tags.
:type tags: object
:param gateway_id: Gateway ID which will manage this node.
:type gateway_id: str
:param connection_name: myhost.domain.com
:type connection_name: str
:param user_name: User name to be used to connect to node.
:type user_name: str
:param password: Password associated with user name.
:type password: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:rtype:
:class:`AzureOperationPoller<msrestazure.azure_operation.AzureOperationPoller>`
instance that returns :class:`NodeResource
<azure.mgmt.servermanager.models.NodeResource>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises:
:class:`ErrorException<azure.mgmt.servermanager.models.ErrorException>`
"""
gateway_parameters = models.NodeParameters(location=location, tags=tags, gateway_id=gateway_id, connection_name=connection_name, user_name=user_name, password=password)
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServerManagement/nodes/{nodeName}'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', min_length=3, pattern='[a-zA-Z0-9]+'),
'nodeName': self._serialize.url("node_name", node_name, 'str', max_length=256, min_length=1, pattern='^[a-zA-Z0-9][a-zA-Z0-9_.-]*$')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(gateway_parameters, 'NodeParameters')
# Construct and send request
def long_running_send():
request = self._client.put(url, query_parameters)
return self._client.send(
request, header_parameters, body_content, **operation_config)
def get_long_running_status(status_link, headers=None):
request = self._client.get(status_link)
if headers:
request.headers.update(headers)
return self._client.send(
request, header_parameters, **operation_config)
def get_long_running_output(response):
if response.status_code not in [200, 201, 202]:
raise models.ErrorException(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('NodeResource', response)
if response.status_code == 201:
deserialized = self._deserialize('NodeResource', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
if raw:
response = long_running_send()
return get_long_running_output(response)
long_running_operation_timeout = operation_config.get(
'long_running_operation_timeout',
self.config.long_running_operation_timeout)
return AzureOperationPoller(
long_running_send, get_long_running_output,
get_long_running_status, long_running_operation_timeout)
def update(
self, resource_group_name, node_name, location=None, tags=None, gateway_id=None, connection_name=None, user_name=None, password=None, custom_headers=None, raw=False, **operation_config):
"""Updates a management node.
:param resource_group_name: The resource group name uniquely
identifies the resource group within the user subscriptionId.
:type resource_group_name: str
:param node_name: The node name (256 characters maximum).
:type node_name: str
:param location: Location of the resource.
:type location: str
:param tags: Resource tags.
:type tags: object
:param gateway_id: Gateway ID which will manage this node.
:type gateway_id: str
:param connection_name: myhost.domain.com
:type connection_name: str
:param user_name: User name to be used to connect to node.
:type user_name: str
:param password: Password associated with user name.
:type password: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:rtype:
:class:`AzureOperationPoller<msrestazure.azure_operation.AzureOperationPoller>`
instance that returns :class:`NodeResource
<azure.mgmt.servermanager.models.NodeResource>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises:
:class:`ErrorException<azure.mgmt.servermanager.models.ErrorException>`
"""
node_parameters = models.NodeParameters(location=location, tags=tags, gateway_id=gateway_id, connection_name=connection_name, user_name=user_name, password=password)
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServerManagement/nodes/{nodeName}'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', min_length=3, pattern='[a-zA-Z0-9]+'),
'nodeName': self._serialize.url("node_name", node_name, 'str', max_length=256, min_length=1, pattern='^[a-zA-Z0-9][a-zA-Z0-9_.-]*$')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(node_parameters, 'NodeParameters')
# Construct and send request
def long_running_send():
request = self._client.patch(url, query_parameters)
return self._client.send(
request, header_parameters, body_content, **operation_config)
def get_long_running_status(status_link, headers=None):
request = self._client.get(status_link)
if headers:
request.headers.update(headers)
return self._client.send(
request, header_parameters, **operation_config)
def get_long_running_output(response):
if response.status_code not in [200, 202]:
raise models.ErrorException(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('NodeResource', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
if raw:
response = long_running_send()
return get_long_running_output(response)
long_running_operation_timeout = operation_config.get(
'long_running_operation_timeout',
self.config.long_running_operation_timeout)
return AzureOperationPoller(
long_running_send, get_long_running_output,
get_long_running_status, long_running_operation_timeout)
def delete(
self, resource_group_name, node_name, custom_headers=None, raw=False, **operation_config):
"""deletes a management node.
:param resource_group_name: The resource group name uniquely
identifies the resource group within the user subscriptionId.
:type resource_group_name: str
:param node_name: The node name (256 characters maximum).
:type node_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises:
:class:`ErrorException<azure.mgmt.servermanager.models.ErrorException>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServerManagement/nodes/{nodeName}'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', min_length=3, pattern='[a-zA-Z0-9]+'),
'nodeName': self._serialize.url("node_name", node_name, 'str', max_length=256, min_length=1, pattern='^[a-zA-Z0-9][a-zA-Z0-9_.-]*$')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200, 204]:
raise models.ErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
    def get(
            self, resource_group_name, node_name, custom_headers=None, raw=False, **operation_config):
        """Gets a management node.

        :param resource_group_name: The resource group name uniquely
         identifies the resource group within the user subscriptionId.
        :type resource_group_name: str
        :param node_name: The node name (256 characters maximum).
        :type node_name: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :rtype: :class:`NodeResource
         <azure.mgmt.servermanager.models.NodeResource>`
        :rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
         if raw=true
        :raises:
         :class:`ErrorException<azure.mgmt.servermanager.models.ErrorException>`
        """
        # Construct URL
        url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServerManagement/nodes/{nodeName}'
        path_format_arguments = {
            'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', min_length=3, pattern='[a-zA-Z0-9]+'),
            'nodeName': self._serialize.url("node_name", node_name, 'str', max_length=256, min_length=1, pattern='^[a-zA-Z0-9][a-zA-Z0-9_.-]*$')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}
        query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if self.config.generate_client_request_id:
            header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
        if custom_headers:
            header_parameters.update(custom_headers)
        if self.config.accept_language is not None:
            header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')

        # Construct and send request
        request = self._client.get(url, query_parameters)
        response = self._client.send(request, header_parameters, **operation_config)

        if response.status_code not in [200]:
            raise models.ErrorException(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('NodeResource', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    def list(
            self, custom_headers=None, raw=False, **operation_config):
        """Lists nodes in a subscription.

        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :rtype: :class:`NodeResourcePaged
         <azure.mgmt.servermanager.models.NodeResourcePaged>`
        :raises:
         :class:`ErrorException<azure.mgmt.servermanager.models.ErrorException>`
        """
        def internal_paging(next_link=None, raw=False):

            if not next_link:
                # Construct URL
                url = '/subscriptions/{subscriptionId}/providers/Microsoft.ServerManagement/nodes'
                path_format_arguments = {
                    'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
                }
                url = self._client.format_url(url, **path_format_arguments)

                # Construct parameters
                query_parameters = {}
                query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')

            else:
                url = next_link
                query_parameters = {}

            # Construct headers
            header_parameters = {}
            header_parameters['Content-Type'] = 'application/json; charset=utf-8'
            if self.config.generate_client_request_id:
                header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
            if custom_headers:
                header_parameters.update(custom_headers)
            if self.config.accept_language is not None:
                header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')

            # Construct and send request
            request = self._client.get(url, query_parameters)
            response = self._client.send(
                request, header_parameters, **operation_config)

            if response.status_code not in [200]:
                raise models.ErrorException(self._deserialize, response)

            return response

        # Deserialize response
        deserialized = models.NodeResourcePaged(internal_paging, self._deserialize.dependencies)

        if raw:
            header_dict = {}
            client_raw_response = models.NodeResourcePaged(internal_paging, self._deserialize.dependencies, header_dict)
            return client_raw_response

        return deserialized
    def list_for_resource_group(
            self, resource_group_name, custom_headers=None, raw=False, **operation_config):
        """Lists nodes in a resource group.

        :param resource_group_name: The resource group name uniquely
         identifies the resource group within the user subscriptionId.
        :type resource_group_name: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :rtype: :class:`NodeResourcePaged
         <azure.mgmt.servermanager.models.NodeResourcePaged>`
        :raises:
         :class:`ErrorException<azure.mgmt.servermanager.models.ErrorException>`
        """
        def internal_paging(next_link=None, raw=False):

            if not next_link:
                # Construct URL
                url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServerManagement/nodes'
                path_format_arguments = {
                    'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
                    'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', min_length=3, pattern='[a-zA-Z0-9]+')
                }
                url = self._client.format_url(url, **path_format_arguments)

                # Construct parameters
                query_parameters = {}
                query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')

            else:
                url = next_link
                query_parameters = {}

            # Construct headers
            header_parameters = {}
            header_parameters['Content-Type'] = 'application/json; charset=utf-8'
            if self.config.generate_client_request_id:
                header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
            if custom_headers:
                header_parameters.update(custom_headers)
            if self.config.accept_language is not None:
                header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')

            # Construct and send request
            request = self._client.get(url, query_parameters)
            response = self._client.send(
                request, header_parameters, **operation_config)

            if response.status_code not in [200]:
                raise models.ErrorException(self._deserialize, response)

            return response

        # Deserialize response
        deserialized = models.NodeResourcePaged(internal_paging, self._deserialize.dependencies)

        if raw:
            header_dict = {}
            client_raw_response = models.NodeResourcePaged(internal_paging, self._deserialize.dependencies, header_dict)
            return client_raw_response

        return deserialized
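Both list operations hand a closure to the paged collection: `internal_paging` fetches one page per call, taking `None` for the first page and the service-supplied `next_link` afterwards, and the `Paged` wrapper drives it lazily as the caller iterates. A toy stand-in (not msrest's actual implementation — the fake service dict and names below are illustrative only) shows the control flow:

```python
def make_pages():
    """Toy driver for the internal_paging closure style used by list()."""
    # Fake service: next_link -> (items on that page, following link or None).
    service = {None: ([1, 2], "page2"), "page2": ([3], None)}

    def internal_paging(next_link=None):
        # Stands in for: build URL from next_link, send GET, deserialize.
        return service[next_link]

    def iterate():
        link = None
        while True:
            items, link = internal_paging(link)
            for item in items:
                yield item
            if link is None:       # last page carries no next link
                return

    return list(iterate())
```

The key property, preserved by the real `NodeResourcePaged`, is that no page is requested until the iterator actually reaches it.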
# turbogears/i18n/data/ar_BH.py (timmartin19/turbogears, MIT license)
# Formatting configuration for locale ar_BH
languages={'el': u'\u0627\u0644\u064a\u0648\u0646\u0627\u0646\u064a\u0629', 'gu': u'\u0627\u0644\u063a\u0648\u062c\u0627\u0631\u0627\u062a\u064a\u0629', 'en': u'\u0627\u0644\u0627\u0646\u062c\u0644\u064a\u0632\u064a\u0629', 'zh': u'\u0627\u0644\u0635\u064a\u0646\u064a\u0629', 'sw': u'\u0627\u0644\u0633\u0648\u0627\u062d\u0644\u064a\u0629', 'ca': u'\u0627\u0644\u0643\u0627\u062a\u0627\u0644\u0648\u064a\u0646\u064a\u0629', 'it': u'\u0627\u0644\u0627\u064a\u0637\u0627\u0644\u064a\u0629', 'ar': u'\u0627\u0644\u0639\u0631\u0628\u064a\u0629', 'id': u'\u0627\u0644\u0627\u0646\u062f\u0648\u0646\u064a\u0633\u064a\u0629', 'es': u'\u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a\u0629', 'ru': u'\u0627\u0644\u0631\u0648\u0633\u064a\u0629', 'nl': u'\u0627\u0644\u0647\u0648\u0644\u0646\u062f\u064a\u0629', 'pt': u'\u0627\u0644\u0628\u0631\u062a\u063a\u0627\u0644\u064a\u0629', 'tr': u'\u0627\u0644\u062a\u0631\u0643\u064a\u0629', 'ne': u'\u0627\u0644\u0646\u064a\u0628\u0627\u0644\u064a\u0629', 'lt': u'\u0627\u0644\u0644\u062a\u0648\u0627\u0646\u064a\u0629', 'pa': u'\u0627\u0644\u0628\u0646\u062c\u0627\u0628\u064a\u0629', 'th': u'\u0627\u0644\u062a\u0627\u064a\u0644\u0627\u0646\u062f\u064a\u0629', 'vi': u'\u0627\u0644\u0641\u064a\u062a\u0646\u0627\u0645\u064a\u0629', 'ro': u'\u0627\u0644\u0631\u0648\u0645\u0627\u0646\u064a\u0629', 'be': u'\u0627\u0644\u0628\u064a\u0644\u0648\u0631\u0648\u0633\u064a\u0629', 'fr': u'\u0627\u0644\u0641\u0631\u0646\u0633\u064a\u0629', 'bg': u'\u0627\u0644\u0628\u0644\u063a\u0627\u0631\u064a\u0629', 'uk': u'\u0627\u0644\u0627\u0648\u0643\u0631\u0627\u0646\u064a\u0629', 'hr': u'\u0627\u0644\u0643\u0631\u0648\u0627\u062a\u064a\u0629', 'bn': u'\u0627\u0644\u0628\u0646\u063a\u0627\u0644\u064a\u0629', 'bo': u'\u0627\u0644\u062a\u0628\u062a\u064a\u0629', 'da': u'\u0627\u0644\u062f\u0627\u0646\u0645\u0627\u0631\u0643\u064a\u0629', 'fa': u'\u0627\u0644\u0641\u0627\u0631\u0633\u064a\u0629', 'hi': u'\u0627\u0644\u0647\u0646\u062f\u064a\u0629', 'dz': 
u'\u0627\u0644\u0632\u0648\u0646\u062e\u0627\u064a\u0629', 'dv': u'\u0627\u0644\u0645\u0627\u0644\u062f\u064a\u0641\u064a\u0629', 'fi': u'\u0627\u0644\u0641\u0646\u0644\u0646\u062f\u064a\u0629', 'ja': u'\u0627\u0644\u064a\u0627\u0628\u0627\u0646\u064a\u0629', 'he': u'\u0627\u0644\u0639\u0628\u0631\u064a\u0629', 'tl': u'\u0627\u0644\u062a\u0627\u063a\u0627\u0644\u0648\u063a\u064a\u0629', 'sr': u'\u0627\u0644\u0635\u0631\u0628\u064a\u0629', 'sq': u'\u0627\u0644\u0627\u0644\u0628\u0627\u0646\u064a\u0629', 'mn': u'\u0627\u0644\u0645\u0646\u063a\u0648\u0644\u064a\u0629', 'ko': u'\u0627\u0644\u0643\u0648\u0631\u064a\u0629', 'km': u'\u0627\u0644\u062e\u0645\u064a\u0631\u064a\u0629', 'ur': u'\u0627\u0644\u0627\u0631\u062f\u064a\u0629', 'de': u'\u0627\u0644\u0627\u0644\u0645\u0627\u0646\u064a\u0629', 'ms': u'\u0644\u063a\u0629 \u0627\u0644\u0645\u0644\u0627\u064a\u0648', 'ug': u'\u0627\u0644\u0627\u063a\u0648\u0631\u064a\u0629', 'my': u'\u0627\u0644\u0628\u0648\u0631\u0645\u064a\u0629'}
countries={'BD': u'\u0628\u0646\u063a\u0644\u0627\u062f\u064a\u0634', 'BE': u'\u0628\u0644\u062c\u064a\u0643\u0627', 'BF': u'\u0628\u0648\u0631\u0643\u064a\u0646\u0627 \u0641\u0627\u0633\u0648', 'BG': u'\u0628\u0644\u063a\u0627\u0631\u064a\u0627', 'BA': u'\u0627\u0644\u0628\u0648\u0633\u0646\u0629 \u0648\u0627\u0644\u0647\u0631\u0633\u0643', 'BB': u'\u0628\u0631\u0628\u0627\u062f\u0648\u0633', 'BN': u'\u0628\u0631\u0648\u0646\u0627\u064a', 'BO': u'\u0628\u0648\u0644\u064a\u0641\u064a\u0627', 'BH': u'\u0627\u0644\u0628\u062d\u0631\u064a\u0646', 'BI': u'\u0628\u0648\u0631\u0648\u0646\u062f\u064a', 'BJ': u'\u0628\u0646\u064a\u0646', 'BT': u'\u0628\u0648\u062a\u0627\u0646', 'JM': u'\u062c\u0627\u0645\u0627\u064a\u0643\u0627', 'BW': u'\u0628\u0648\u062a\u0633\u0648\u0627\u0646\u0627', 'WS': u'\u0633\u0627\u0645\u0648\u0627', 'BR': u'\u0627\u0644\u0628\u0631\u0627\u0632\u064a\u0644', 'BS': u'\u0627\u0644\u0628\u0647\u0627\u0645\u0627', 'BY': u'\u0631\u0648\u0633\u064a\u0627 \u0627\u0644\u0628\u064a\u0636\u0627\u0621', 'BZ': u'\u0628\u0644\u064a\u0632', 'RU': u'\u0631\u0648\u0633\u064a\u0627', 'RW': u'\u0631\u0648\u0627\u0646\u062f\u0627', 'TM': u'\u062a\u0631\u0643\u0645\u0627\u0646\u0633\u062a\u0627\u0646', 'TJ': u'\u062a\u0627\u062c\u064a\u0643\u0633\u062a\u0627\u0646', 'RO': u'\u0631\u0648\u0645\u0627\u0646\u064a\u0627', 'GW': u'\u063a\u064a\u0646\u064a\u0627 \u0628\u064a\u0633\u0627\u0648', 'GT': u'\u063a\u0648\u0627\u062a\u064a\u0645\u0627\u0644\u0627', 'GR': u'\u0627\u0644\u064a\u0648\u0646\u0627\u0646', 'GQ': u'\u063a\u064a\u0646\u064a\u0627 \u0627\u0644\u0627\u0633\u062a\u0648\u0627\u0626\u064a\u0629', 'JP': u'\u0627\u0644\u064a\u0627\u0628\u0627\u0646', 'GY': u'\u063a\u0648\u0627\u064a\u0627\u0646\u0627', 'GE': u'\u062c\u0648\u0631\u062c\u064a\u0627', 'GD': u'\u063a\u0631\u064a\u0646\u0627\u062f\u0627', 'GB': u'\u0627\u0644\u0645\u0645\u0644\u0643\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629', 'GA': u'\u063a\u0627\u0628\u0648\u0646', 'SV': 
u'\u0627\u0644\u0633\u0644\u0641\u0627\u062f\u0648\u0631', 'GN': u'\u063a\u064a\u0646\u064a\u0627', 'GM': u'\u063a\u0627\u0645\u0628\u064a\u0627', 'GH': u'\u063a\u0627\u0646\u0627', 'OM': u'\u0639\u0645\u0627\u0646', 'TN': u'\u062a\u0648\u0646\u0633', 'JO': u'\u0627\u0644\u0627\u0631\u062f\u0646', 'HR': u'\u0643\u0631\u0648\u0627\u062a\u064a\u0627', 'HT': u'\u0647\u0627\u064a\u062a\u064a', 'HU': u'\u0647\u0646\u063a\u0627\u0631\u064a\u0627', 'HN': u'\u0647\u0646\u062f\u0648\u0631\u0627\u0633', 'VE': u'\u0641\u0646\u0632\u0648\u064a\u0644\u0627', 'PW': u'\u0628\u0627\u0644\u0627\u0648', 'PT': u'\u0627\u0644\u0628\u0631\u062a\u063a\u0627\u0644', 'PY': u'\u0628\u0627\u0631\u0627\u063a\u0648\u0627\u064a', 'IQ': u'\u0627\u0644\u0639\u0631\u0627\u0642', 'PA': u'\u0628\u0646\u0645\u0627', 'PG': u'\u0628\u0627\u0628\u0648\u0627 \u063a\u064a\u0646\u064a\u0627 \u0627\u0644\u062c\u062f\u064a\u062f\u0629', 'PE': u'\u0628\u064a\u0631\u0648', 'PK': u'\u0627\u0644\u0628\u0627\u0643\u0633\u062a\u0627\u0646', 'PH': u'\u0627\u0644\u0641\u064a\u0644\u0628\u064a\u0646', 'PL': u'\u0628\u0648\u0644\u0646\u062f\u0627', 'ZM': u'\u0632\u0627\u0645\u0628\u064a\u0627', 'EH': u'\u0627\u0644\u0635\u062d\u0631\u0627\u0621 \u0627\u0644\u063a\u0631\u0628\u064a\u0629', 'EE': u'\u0627\u0633\u062a\u0648\u0646\u064a\u0627', 'EG': u'\u0645\u0635\u0631', 'ZA': u'\u062c\u0646\u0648\u0628 \u0627\u0641\u0631\u064a\u0642\u064a\u0627', 'EC': u'\u0627\u0643\u0648\u0627\u062f\u0648\u0631', 'VN': u'\u0641\u064a\u062a\u0646\u0627\u0645', 'SB': u'\u062c\u0632\u0631 \u0633\u0644\u064a\u0645\u0627\u0646', 'ET': u'\u0627\u062b\u064a\u0648\u0628\u064a\u0627', 'SO': u'\u0627\u0644\u0635\u0648\u0645\u0627\u0644', 'ZW': u'\u0632\u064a\u0645\u0628\u0627\u0628\u0648\u064a', 'ES': u'\u0627\u0633\u0628\u0627\u0646\u064a\u0627', 'ER': u'\u0627\u0631\u062a\u064a\u0631\u064a\u0627', 'MD': u'\u0645\u0648\u0644\u062f\u0648\u0641\u0627', 'MG': u'\u0645\u062f\u063a\u0634\u0642\u0631', 'MA': 
u'\u0627\u0644\u0645\u063a\u0631\u0628', 'MC': u'\u0645\u0648\u0646\u0627\u0643\u0648', 'UZ': u'\u0627\u0632\u0628\u0643\u0633\u062a\u0627\u0646', 'MM': u'\u0645\u064a\u0627\u0646\u0645\u0627\u0631', 'ML': u'\u0645\u0627\u0644\u064a', 'MN': u'\u0645\u0646\u063a\u0648\u0644\u064a\u0627', 'MH': u'\u062c\u0632\u0631 \u0627\u0644\u0645\u0627\u0631\u0634\u0627\u0644', 'MK': u'\u0645\u0642\u062f\u0648\u0646\u064a\u0627', 'MU': u'\u0645\u0648\u0631\u064a\u0634\u0648\u0633', 'MT': u'\u0645\u0627\u0644\u0637\u0629', 'MW': u'\u0645\u0644\u0627\u0648\u064a', 'MV': u'\u0645\u0627\u0644\u062f\u064a\u0641', 'MR': u'\u0645\u0648\u0631\u064a\u062a\u0627\u0646\u064a\u0627', 'UG': u'\u0627\u0648\u063a\u0646\u062f\u0627', 'MY': u'\u0645\u0627\u0644\u064a\u0632\u064a\u0627', 'MX': u'\u0627\u0644\u0645\u0643\u0633\u064a\u0643', 'IL': u'\u0627\u0633\u0631\u0627\u0626\u064a\u0644', 'FR': u'\u0641\u0631\u0646\u0633\u0627', 'FI': u'\u0641\u0646\u0644\u0646\u062f\u0627', 'FJ': u'\u0641\u064a\u062c\u064a', 'FM': u'\u0645\u064a\u0643\u0631\u0648\u0646\u064a\u0632\u064a\u0627', 'NI': u'\u0646\u064a\u0643\u0627\u0631\u0627\u063a\u0648\u0627', 'NL': u'\u0647\u0648\u0644\u0646\u062f\u0627', 'NO': u'\u0627\u0644\u0646\u0631\u0648\u064a\u062c', 'NA': u'\u0646\u0627\u0645\u064a\u0628\u064a\u0627', 'VU': u'\u0641\u0627\u0646\u0648\u0622\u062a\u0648', 'NE': u'\u0627\u0644\u0646\u064a\u062c\u0631', 'NG': u'\u0646\u064a\u062c\u064a\u0631\u064a\u0627', 'NZ': u'\u0632\u064a\u0644\u0646\u062f\u0627 \u0627\u0644\u062c\u062f\u064a\u062f\u0629', 'NP': u'\u0627\u0644\u0646\u064a\u0628\u0627\u0644', 'NR': u'\u0646\u0627\u0648\u0631\u0648', 'CH': u'\u0633\u0648\u064a\u0633\u0631\u0627', 'CO': u'\u0643\u0648\u0644\u0648\u0645\u0628\u064a\u0627', 'CN': u'\u0627\u0644\u0635\u064a\u0646', 'CM': u'\u0627\u0644\u0643\u0627\u0645\u064a\u0631\u0648\u0646', 'CL': u'\u062a\u0634\u064a\u0644\u064a', 'CA': u'\u0643\u0646\u062f\u0627', 'CG': u'\u0627\u0644\u0643\u0648\u0646\u063a\u0648', 'CF': 
u'\u062c\u0645\u0647\u0648\u0631\u064a\u0629 \u0627\u0641\u0631\u064a\u0642\u064a\u0627 \u0627\u0644\u0648\u0633\u0637\u0649', 'CZ': u'\u062c\u0645\u0647\u0648\u0631\u064a\u0629 \u0627\u0644\u062a\u0634\u064a\u0643', 'CY': u'\u0642\u0628\u0631\u0635', 'CR': u'\u0643\u0648\u0633\u062a\u0627\u0631\u064a\u0643\u0627', 'CV': u'\u0627\u0644\u0631\u0623\u0633 \u0627\u0644\u0627\u062e\u0636\u0631', 'CU': u'\u0643\u0648\u0628\u0627', 'SZ': u'\u0633\u0648\u0627\u0632\u064a\u0644\u0627\u0646\u062f', 'SY': u'\u0633\u0648\u0631\u064a\u0629', 'KG': u'\u0642\u064a\u0631\u063a\u064a\u0632\u0633\u062a\u0627\u0646', 'KE': u'\u0643\u064a\u0646\u064a\u0627', 'SR': u'\u0633\u0648\u0631\u064a\u0646\u0627\u0645', 'KI': u'\u0643\u064a\u0631\u064a\u0628\u0627\u062a\u064a', 'KH': u'\u0643\u0645\u0628\u0648\u062f\u064a\u0627', 'KN': u'\u0633\u0627\u0646\u062a \u0643\u064a\u062a\u0633 \u0648\u0646\u064a\u0641\u064a\u0633', 'KM': u'\u062c\u0632\u0631 \u0627\u0644\u0642\u0645\u0631', 'ST': u'\u0633\u0627\u0646 \u062a\u0648\u0645\u064a \u0648\u0628\u0631\u064a\u0646\u0633\u064a\u0628\u064a', 'SK': u'\u0633\u0644\u0648\u0641\u0627\u0643\u064a\u0627', 'KR': u'\u0643\u0648\u0631\u064a\u0627 \u0627\u0644\u062c\u0646\u0648\u0628\u064a\u0629', 'SI': u'\u0633\u0644\u0648\u0641\u064a\u0646\u064a\u0627', 'KP': u'\u0643\u0648\u0631\u064a\u0627 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629', 'KW': u'\u0627\u0644\u0643\u0648\u064a\u062a', 'SN': u'\u0627\u0644\u0633\u0646\u063a\u0627\u0644', 'SM': u'\u0633\u0627\u0646 \u0645\u0627\u0631\u064a\u0646\u0648', 'SL': u'\u0633\u064a\u0631\u0627\u0644\u064a\u0648\u0646', 'SC': u'\u0633\u064a\u0634\u0644', 'KZ': u'\u0643\u0627\u0632\u0627\u062e\u0633\u062a\u0627\u0646', 'SA': u'\u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0633\u0639\u0648\u062f\u064a\u0629', 'SG': u'\u0633\u0646\u063a\u0627\u0641\u0648\u0631\u0629', 'SE': u'\u0627\u0644\u0633\u0648\u064a\u062f', 'SD': u'\u0627\u0644\u0633\u0648\u062f\u0627\u0646', 'DO': 
u'\u0627\u0644\u062c\u0645\u0647\u0648\u0631\u064a\u0629 \u0627\u0644\u062f\u0648\u0645\u064a\u0646\u064a\u0643\u064a\u0629', 'DM': u'\u062f\u0648\u0645\u064a\u0646\u064a\u0643\u0627', 'DJ': u'\u062c\u064a\u0628\u0648\u062a\u064a', 'DK': u'\u0627\u0644\u062f\u0627\u0646\u0645\u0631\u0643', 'DE': u'\u0627\u0644\u0645\u0627\u0646\u064a\u0627', 'YE': u'\u0627\u0644\u064a\u0645\u0646', 'DZ': u'\u0627\u0644\u062c\u0632\u0627\u0626\u0631', 'US': u'\u0627\u0644\u0627\u0648\u0644\u0627\u064a\u0627\u062a \u0627\u0644\u0645\u062a\u062d\u062f\u0629 \u0627\u0644\u0627\u0645\u0631\u064a\u0643\u064a\u0629', 'UY': u'\u0627\u0631\u0648\u063a\u0648\u0627\u064a', 'LB': u'\u0644\u0628\u0646\u0627\u0646', 'LC': u'\u0633\u0627\u0646\u062a \u0644\u0648\u0633\u064a\u0627', 'LA': u'\u0644\u0627\u0648\u0633', 'TV': u'\u062a\u0648\u0641\u0627\u0644\u0648', 'TW': u'\u062a\u0627\u064a\u0648\u0627\u0646', 'TT': u'\u062a\u0631\u064a\u0646\u064a\u062f\u0627\u062f \u0648\u062a\u0648\u0628\u0627\u063a\u0648', 'TR': u'\u062a\u0631\u0643\u064a\u0627', 'LK': u'\u0633\u0631\u064a \u0644\u0627\u0646\u0643\u0627', 'LI': u'\u0644\u064a\u062e\u062a\u0646\u0634\u062a\u0627\u064a\u0646', 'LV': u'\u0644\u0627\u062a\u0641\u064a\u0627', 'TO': u'\u062a\u0648\u0646\u063a\u0627', 'LT': u'\u0644\u064a\u062a\u0648\u0627\u0646\u064a\u0627', 'LU': u'\u0644\u0648\u0643\u0633\u0648\u0645\u0628\u0631\u063a', 'LR': u'\u0644\u064a\u0628\u064a\u0631\u064a\u0627', 'LS': u'\u0644\u064a\u0633\u0648\u062a\u0648', 'TH': u'\u062a\u0627\u064a\u0644\u0646\u062f', 'TG': u'\u062a\u0648\u063a\u0648', 'TD': u'\u062a\u0634\u0627\u062f', 'LY': u'\u0644\u064a\u0628\u064a\u0627', 'VA': u'\u0627\u0644\u0641\u0627\u062a\u064a\u0643\u0627\u0646', 'VC': u'\u0633\u0627\u0646\u062a \u0641\u0646\u0633\u0646\u062a \u0648\u062c\u0632\u0631 \u063a\u0631\u064a\u0646\u0627\u062f\u064a\u0646', 'AE': u'\u0627\u0644\u0627\u0645\u0627\u0631\u0627\u062a \u0627\u0644\u0639\u0631\u0628\u064a\u0629 \u0627\u0644\u0645\u062a\u062d\u062f\u0629', 'AD': 
u'\u0627\u0646\u062f\u0648\u0631\u0627', 'AG': u'\u0627\u0646\u062a\u064a\u063a\u0648\u0627 \u0648\u0628\u0631\u0628\u0648\u062f\u0627', 'AF': u'\u0627\u0641\u063a\u0627\u0646\u0633\u062a\u0627\u0646', 'AI': u'\u0627\u0644\u0628\u0627\u0646\u064a\u0627', 'IS': u'\u0627\u064a\u0633\u0644\u0646\u062f\u0627', 'IR': u'\u0627\u064a\u0631\u0627\u0646', 'AM': u'\u0627\u0631\u0645\u064a\u0646\u064a\u0627', 'IT': u'\u0627\u064a\u0637\u0627\u0644\u064a\u0627', 'AO': u'\u0627\u0646\u063a\u0648\u0644\u0627', 'AR': u'\u0627\u0644\u0627\u0631\u062c\u0646\u062a\u064a\u0646', 'AU': u'\u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627', 'AT': u'\u0627\u0644\u0646\u0645\u0633\u0627', 'IN': u'\u0627\u0644\u0647\u0646\u062f', 'TZ': u'\u062a\u0627\u0646\u0632\u0627\u0646\u064a\u0627', 'AZ': u'\u0622\u0630\u0631\u0628\u064a\u062c\u0627\u0646', 'IE': u'\u0627\u064a\u0631\u0644\u0646\u062f\u0627', 'ID': u'\u0627\u0646\u062f\u0648\u0646\u064a\u0633\u064a\u0627', 'UA': u'\u0627\u0648\u0643\u0631\u0627\u0646\u064a\u0627', 'QA': u'\u0642\u0637\u0631', 'MZ': u'\u0645\u0648\u0632\u0645\u0628\u064a\u0642'}
months=[u'\u064a\u0646\u0627\u064a\u0631', u'\u0641\u0628\u0631\u0627\u064a\u0631', u'\u0645\u0627\u0631\u0633', u'\u0623\u0628\u0631\u064a\u0644', u'\u0645\u0627\u064a\u0648', u'\u064a\u0648\u0646\u064a\u0648', u'\u064a\u0648\u0644\u064a\u0648', u'\u0623\u063a\u0633\u0637\u0633', u'\u0633\u0628\u062a\u0645\u0628\u0631', u'\u0623\u0643\u062a\u0648\u0628\u0631', u'\u0646\u0648\u0641\u0645\u0628\u0631', u'\u062f\u064a\u0633\u0645\u0628\u0631']
abbrMonths=[u'\u064a\u0646\u0627\u064a\u0631', u'\u0641\u0628\u0631\u0627\u064a\u0631', u'\u0645\u0627\u0631\u0633', u'\u0623\u0628\u0631\u064a\u0644', u'\u0645\u0627\u064a\u0648', u'\u064a\u0648\u0646\u064a\u0648', u'\u064a\u0648\u0644\u064a\u0648', u'\u0623\u063a\u0633\u0637\u0633', u'\u0633\u0628\u062a\u0645\u0628\u0631', u'\u0623\u0643\u062a\u0648\u0628\u0631', u'\u0646\u0648\u0641\u0645\u0628\u0631', u'\u062f\u064a\u0633\u0645\u0628\u0631']
days=[u'\u0627\u0644\u0627\u062b\u0646\u064a\u0646', u'\u0627\u0644\u062b\u0644\u0627\u062b\u0627\u0621', u'\u0627\u0644\u0623\u0631\u0628\u0639\u0627\u0621', u'\u0627\u0644\u062e\u0645\u064a\u0633', u'\u0627\u0644\u062c\u0645\u0639\u0629', u'\u0627\u0644\u0633\u0628\u062a', u'\u0627\u0644\u0623\u062d\u062f']
abbrDays=[u'\u0646', u'\u062b', u'\u0631', u'\u062e', u'\u062c', u'\u0633', u'\u062d']
dateFormats={'medium': '%d/%m/%Y', 'full': '%%(dayname)s, %d %%(monthname)s, %Y', 'long': '%d %%(monthname)s, %Y', 'short': '%d/%m/%Y'}
numericSymbols={'group': u'\u066c', 'nativeZeroDigit': u'\u0660', 'exponential': 'E', 'perMille': u'\u2030', 'nan': u'\ufffd', 'decimal': u'\u066b', 'percentSign': u'\u066a', 'list': ';', 'patternDigit': '#', 'plusSign': '+', 'infinity': u'\u221e', 'minusSign': '-'}
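The `%%(monthname)s` and `%%(dayname)s` escapes in `dateFormats` imply a two-stage scheme: `strftime` expands the numeric fields first (turning `%%` into a literal `%`), and the localized names are then `%`-substituted from the `months`/`days` tables. A sketch of that assumed mechanism, using English stand-in month names in place of the Arabic strings above:

```python
import datetime

# Stand-in tables; the real module defines the Arabic equivalents.
months = ['January', 'February', 'March', 'April', 'May', 'June',
          'July', 'August', 'September', 'October', 'November', 'December']
dateFormats = {'long': '%d %%(monthname)s, %Y'}  # same pattern as above

def format_date(d, fmt='long'):
    # Stage 1: strftime fills %d/%Y and collapses %%(monthname)s -> %(monthname)s.
    partial = d.strftime(dateFormats[fmt])
    # Stage 2: %-substitute the localized month name.
    return partial % {'monthname': months[d.month - 1]}
```

The same substitution would apply `%(dayname)s` for the `full` format using the `days` table.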
# src/rtl/compressed_decoder.py (giraffe50/RISCV-M4F, Apache-2.0 license)
"""
Copyright Digisim, Computer Architecture team of South China University of Technology,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Author Name: Guoyi Mo
Date: 2021-03-03
File Name: compressed_decoder.py
Description: decodes compressed (RVC) instructions for the IF (instruction fetch) stage
"""
from pyhcl import *
from src.include.pkg import *
def compressed_decoder(FPU=0):
    class COMPRESSED_DECODER(Module):
        io = IO(
            instr_i=Input(U.w(32)),
            instr_o=Output(U.w(32)),
            is_compressed_o=Output(Bool),
            illegal_instr_o=Output(Bool)
        )

        ca_format = Wire(U.w(3))
        imm = Wire(U.w(6))
        imm <<= CatBits(io.instr_i[12], io.instr_i[6:2])
        ca_format <<= CatBits(io.instr_i[12], io.instr_i[6:5])

        io.illegal_instr_o <<= U.w(1)(0)
        io.instr_o <<= U(0)
        with when(io.instr_i[1:0] == U.w(2)(0)):
            with when(io.instr_i[15:13] == U.w(3)(0)):
                # c.addi4spn -> addi rd, x2, imm
                io.instr_o <<= CatBits(U.w(2)(0), io.instr_i[10:7], io.instr_i[12:11], io.instr_i[5], io.instr_i[6],
                                       U.w(2)(0), U.w(5)(0x2), U.w(3)(0), U.w(2)(1), io.instr_i[4:2], OPCODE_OPIMM)
                with when(io.instr_i[12:5] == U.w(8)(0)):
                    io.illegal_instr_o <<= U.w(1)(1)
            with elsewhen(io.instr_i[15:13] == U.w(3)(1)):
                # c.fld -> fld rd, imm(rs1)
                if FPU:
                    io.instr_o <<= CatBits(U.w(4)(0), io.instr_i[6:5], io.instr_i[12:10], U.w(3)(0), U.w(2)(1),
                                           io.instr_i[9:7], U.w(3)(3), U.w(2)(1), io.instr_i[4:2], OPCODE_LOAD_FP)
                else:
                    io.illegal_instr_o <<= U.w(1)(1)
            with elsewhen(io.instr_i[15:13] == U.w(3)(2)):
                # c.lw -> lw rd, imm(rs1)
                io.instr_o <<= CatBits(U.w(5)(0), io.instr_i[5], io.instr_i[12:10], io.instr_i[6], U.w(2)(0), U.w(2)(1),
                                       io.instr_i[9:7], U.w(3)(2), U.w(2)(1), io.instr_i[4:2], OPCODE_LOAD)
            with elsewhen(io.instr_i[15:13] == U.w(3)(3)):
                # c.flw -> flw rd, imm(rs1)
                if FPU:
                    io.instr_o <<= CatBits(U.w(5)(0), io.instr_i[5], io.instr_i[12:10], io.instr_i[6], U.w(2)(0),
                                           U.w(2)(1), io.instr_i[9:7], U.w(3)(2), U.w(2)(1), io.instr_i[4:2], OPCODE_LOAD_FP)
                else:
                    io.illegal_instr_o <<= U.w(1)(1)
            with elsewhen(io.instr_i[15:13] == U.w(3)(5)):
                # c.fsd -> fsd rs2, imm(rs1)
                if FPU:
                    io.instr_o <<= CatBits(U.w(4)(0), io.instr_i[6:5], io.instr_i[12], U.w(2)(1), io.instr_i[4:2],
                                           U.w(2)(1), io.instr_i[9:7], U.w(3)(3), io.instr_i[11:10], U.w(3)(0), OPCODE_STORE_FP)
                else:
                    io.illegal_instr_o <<= U.w(1)(1)
            with elsewhen(io.instr_i[15:13] == U.w(3)(6)):
                # c.sw -> sw rs2, imm(rs1)
                io.instr_o <<= CatBits(U.w(5)(0), io.instr_i[5], io.instr_i[12], U.w(2)(1), io.instr_i[4:2], U.w(2)(1),
                                       io.instr_i[9:7], U.w(3)(2), io.instr_i[11:10], io.instr_i[6], U.w(2)(0), OPCODE_STORE)
            with elsewhen(io.instr_i[15:13] == U.w(3)(7)):
                # c.fsw -> fsw rs2, imm(rs1)
                if FPU:
                    io.instr_o <<= CatBits(U.w(5)(0), io.instr_i[5], io.instr_i[12], U.w(2)(1), io.instr_i[4:2], U.w(2)(1),
                                           io.instr_i[9:7], U.w(3)(2), io.instr_i[11:10], io.instr_i[6], U.w(2)(0), OPCODE_STORE_FP)
                else:
                    io.illegal_instr_o <<= U.w(1)(1)
            with otherwise():
                io.illegal_instr_o <<= U.w(1)(1)
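        # The quadrant-00 expansions above are pure bit rearrangement, so each
        # CatBits mapping can be cross-checked in plain Python. A sketch for
        # c.addi4spn (helper name and the 0x13 OP-IMM opcode constant are the
        # standard RV32 values, stated here as an assumption, not taken from
        # this module):
        #
        #     def decode_c_addi4spn(instr16):
        #         bit = lambda x, i: (x >> i) & 1
        #         bits = lambda x, hi, lo: (x >> lo) & ((1 << (hi - lo + 1)) - 1)
        #         # immediate scatter mirrors the CatBits above:
        #         # {2'b0, inst[10:7], inst[12:11], inst[5], inst[6], 2'b00}
        #         imm = (bits(instr16, 10, 7) << 6) | (bits(instr16, 12, 11) << 4) | \
        #               (bit(instr16, 5) << 3) | (bit(instr16, 6) << 2)
        #         rd = 8 + bits(instr16, 4, 2)  # rd' field addresses x8..x15
        #         # addi rd, x2, imm  (funct3 = 0, rs1 = x2, opcode = 0x13)
        #         return (imm << 20) | (2 << 15) | (0 << 12) | (rd << 7) | 0x13
        #
        # e.g. 0x0040 (c.addi4spn s0, sp, 4) expands to 0x00410413 (addi s0, sp, 4).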
        with elsewhen(io.instr_i[1:0] == U.w(2)(1)):
            with when(io.instr_i[15:13] == U.w(3)(0)):
                # c.addi -> addi rd, rd, nzimm
                # c.nop
                io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                       io.instr_i[12], io.instr_i[12], io.instr_i[6:2], io.instr_i[11:7], U.w(3)(0),
                                       io.instr_i[11:7], OPCODE_OPIMM)
            with elsewhen(io.instr_i[15:13] == U.w(3)(1)):
                # c.jal -> jal x1, imm
                io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[8], io.instr_i[10:9], io.instr_i[6], io.instr_i[7],
                                       io.instr_i[2], io.instr_i[11], io.instr_i[5:3], io.instr_i[12], io.instr_i[12],
                                       io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                       io.instr_i[12], io.instr_i[12], U.w(4)(0), ~io.instr_i[15], OPCODE_JAL)
            with elsewhen(io.instr_i[15:13] == U.w(3)(2)):
                with when(io.instr_i[11:7] == U.w(5)(0)):
                    # Hint -> addi x0, x0, nzimm
                    io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                           io.instr_i[12], io.instr_i[12], io.instr_i[6:2], U.w(5)(0), U.w(3)(0),
                                           io.instr_i[11:7], OPCODE_OPIMM)
                with otherwise():
                    # c.li -> addi rd, x0, nzimm
                    io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                           io.instr_i[12], io.instr_i[12], io.instr_i[6:2], U.w(5)(0), U.w(3)(0),
                                           io.instr_i[11:7], OPCODE_OPIMM)
            with elsewhen(io.instr_i[15:13] == U.w(3)(3)):
                with when(imm == U.w(6)(0)):
                    io.illegal_instr_o <<= U.w(1)(1)
                with otherwise():
                    with when(io.instr_i[11:7] == U.w(5)(0x2)):
                        # c.addi16sp -> addi x2, x2, nzimm
                        io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[4:3],
                                               io.instr_i[5], io.instr_i[2], io.instr_i[6], U.w(4)(0), U.w(5)(0x2),
                                               U.w(3)(0), U.w(5)(0x2), OPCODE_OPIMM)
                    with elsewhen(io.instr_i[11:7] == U.w(5)(0)):
                        # Hint -> lui x0, imm
                        io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                               io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                               io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                               io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[6:2],
                                               io.instr_i[11:7], OPCODE_LUI)
                    with otherwise():
                        # c.lui -> lui rd, imm
                        io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                               io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                               io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                               io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[6:2],
                                               io.instr_i[11:7], OPCODE_LUI)
            with elsewhen(io.instr_i[15:13] == U.w(3)(4)):
                with when(io.instr_i[11:10] == U.w(2)(0)):
                    # c.srli -> srli rd, rd, shamt
                    with when(io.instr_i[12] == U.w(1)(1)):
                        # Reserved for future custom extensions
                        io.illegal_instr_o <<= U.w(1)(1)
                        io.instr_o <<= CatBits(U.w(1)(0), io.instr_i[10], U.w(5)(0), io.instr_i[6:2], U.w(2)(1),
                                               io.instr_i[9:7], U.w(3)(5), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                    with otherwise():
                        with when(io.instr_i[6:2] == U.w(5)(0)):
                            io.instr_o <<= CatBits(U.w(1)(0), io.instr_i[10], U.w(5)(0), io.instr_i[6:2], U.w(2)(1),
                                                   io.instr_i[9:7], U.w(3)(5), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                        with otherwise():
                            io.instr_o <<= CatBits(U.w(1)(0), io.instr_i[10], U.w(5)(0), io.instr_i[6:2], U.w(2)(1),
                                                   io.instr_i[9:7], U.w(3)(5), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                with elsewhen(io.instr_i[11:10] == U.w(2)(1)):
                    # c.srai -> srai rd, rd, shamt
                    with when(io.instr_i[12] == U.w(1)(1)):
                        # Reserved for future custom extensions
                        io.illegal_instr_o <<= U.w(1)(1)
                        io.instr_o <<= CatBits(U.w(1)(0), io.instr_i[10], U.w(5)(0), io.instr_i[6:2], U.w(2)(1),
                                               io.instr_i[9:7], U.w(3)(5), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                    with otherwise():
                        with when(io.instr_i[6:2] == U.w(5)(0)):
                            io.instr_o <<= CatBits(U.w(1)(0), io.instr_i[10], U.w(5)(0), io.instr_i[6:2], U.w(2)(1),
                                                   io.instr_i[9:7], U.w(3)(5), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                        with otherwise():
                            io.instr_o <<= CatBits(U.w(1)(0), io.instr_i[10], U.w(5)(0), io.instr_i[6:2], U.w(2)(1),
                                                   io.instr_i[9:7], U.w(3)(5), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                with elsewhen(io.instr_i[11:10] == U.w(2)(2)):
                    # c.andi -> andi rd, rd, imm
                    io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
                                           io.instr_i[12], io.instr_i[12], io.instr_i[6:2], U.w(2)(1), io.instr_i[9:7],
                                           U.w(3)(7), U.w(2)(1), io.instr_i[9:7], OPCODE_OPIMM)
                with elsewhen(io.instr_i[11:10] == U.w(2)(3)):
                    with when(ca_format == U.w(3)(0)):
                        # c.sub -> sub rd, rd, rs2
                        io.instr_o <<= CatBits(U.w(2)(1), U.w(5)(0), U.w(2)(1), io.instr_i[4:2], U.w(2)(1), io.instr_i[9:7],
                                               U.w(3)(0), U.w(2)(1), io.instr_i[9:7], OPCODE_OP)
                    with elsewhen(ca_format == U.w(3)(1)):
                        # c.xor -> xor rd, rd, rs2
                        io.instr_o <<= CatBits(U.w(7)(0), U.w(2)(1), io.instr_i[4:2], U.w(2)(1), io.instr_i[9:7],
                                               U.w(3)(4), U.w(2)(1), io.instr_i[9:7], OPCODE_OP)
                    with elsewhen(ca_format == U.w(3)(2)):
                        # c.or -> or rd, rd, rs2
                        io.instr_o <<= CatBits(U.w(7)(0), U.w(2)(1), io.instr_i[4:2], U.w(2)(1), io.instr_i[9:7],
                                               U.w(3)(6), U.w(2)(1), io.instr_i[9:7], OPCODE_OP)
                    with elsewhen(ca_format == U.w(3)(3)):
                        # c.and -> and rd, rd, rs2
                        io.instr_o <<= CatBits(U.w(7)(0), U.w(2)(1), io.instr_i[4:2], U.w(2)(1), io.instr_i[9:7],
                                               U.w(3)(7), U.w(2)(1), io.instr_i[9:7], OPCODE_OP)
                    with otherwise():
                        io.illegal_instr_o <<= U.w(1)(1)
with elsewhen(io.instr_i[15:13] == U.w(3)(5)):
# c.j -> jal x0, imm
io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[8], io.instr_i[10:9], io.instr_i[6], io.instr_i[7],
io.instr_i[2], io.instr_i[11], io.instr_i[5:3], io.instr_i[12], io.instr_i[12],
io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12],
io.instr_i[12], io.instr_i[12], U.w(4)(0), ~io.instr_i[15], OPCODE_JAL)
with elsewhen(io.instr_i[15:13] == U.w(3)(6)):
# c.beqz -> beq rs1, x0, imm
io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[6:5],
io.instr_i[2], U.w(5)(0), U.w(2)(1), io.instr_i[9:7], U.w(2)(0), io.instr_i[13],
io.instr_i[11:10], io.instr_i[4:3], io.instr_i[12], OPCODE_BRANCH)
with elsewhen(io.instr_i[15:13] == U.w(3)(7)):
# c.bnez -> bne rs1, x0, imm
io.instr_o <<= CatBits(io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[12], io.instr_i[6:5],
io.instr_i[2], U.w(5)(0), U.w(2)(1), io.instr_i[9:7], U.w(2)(0), io.instr_i[13],
io.instr_i[11:10], io.instr_i[4:3], io.instr_i[12], OPCODE_BRANCH)
with elsewhen(io.instr_i[1:0] == U.w(2)(2)):
with when(io.instr_i[15:13] == U.w(3)(0)):
with when(io.instr_i[12] == U.w(1)(1)):
# Reserved for future extensions
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], io.instr_i[11:7], U.w(3)(1), io.instr_i[11:7],
OPCODE_OPIMM)
io.illegal_instr_o << U.w(1)(1)
with otherwise():
with when(io.instr_i[6:2] == U.w(5)(0) | io.instr_i[11:7] == U.w(5)(0)):
# Hint -> slli rd, rd, shamt
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], io.instr_i[11:7], U.w(3)(1), io.instr_i[11:7],
OPCODE_OPIMM)
with otherwise():
# c.slli -> slli rd, rd, shamt
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], io.instr_i[11:7], U.w(3)(1), io.instr_i[11:7],
OPCODE_OPIMM)
with elsewhen(io.instr_i[15:13] == U.w(3)(1)):
# c.fldsp -> fld rd, imm(x2)
if FPU:
io.instr_o <<= CatBits(U.w(3)(0), io.instr_i[4:2], io.instr_i[12], io.instr_i[6:5], U.w(3)(0),
U.w(5)(0x2), U.w(3)(3), io.instr_i[11:7], OPCODE_LOAD_FP)
else:
io.illegal_instr_o <<= U.w(1)(1)
with elsewhen(io.instr_i[15:13] == U.w(3)(2)):
# c.lwsp -> lw rd, imm(x2)
io.instr_o <<= CatBits(U.w(4)(0), io.instr_i[3:2], io.instr_i[12], io.instr_i[6:4], U.w(2)(0), U.w(5)(0x2),
U.w(3)(2), io.instr_i[11:7], OPCODE_LOAD)
with when(io.instr_i[11:7] == U.w(5)(0)):
io.illegal_instr_o <<= U.w(1)(1)
with elsewhen(io.instr_i[15:13] == U.w(3)(3)):
# c.flwsp -> flw rd, imm(x2)
if FPU:
io.instr_o <<= CatBits(U.w(4)(0), io.instr_i[3:2], io.instr_i[12], io.instr_i[6:4], U.w(2)(0),
U.w(5)(0x2), U.w(3)(2), io.instr_i[11:7], OPCODE_LOAD_FP)
else:
io.illegal_instr_o <<= U.w(1)(1)
with elsewhen(io.instr_i[15:13] == U.w(3)(4)):
with when(io.instr_i[12] == U.w(1)(0)):
with when(io.instr_i[6:2] == U.w(5)(0)):
# c.jr -> jalr x0, rd/rs1, 0
io.instr_o <<= CatBits(U.w(12)(0), io.instr_i[11:7], U.w(3)(0), U.w(5)(0), OPCODE_JALR)
# c.jr with rs1 = 0 is reserved
with when(io.instr_i[11:7] == U.w(5)(0)):
io.illegal_instr_o <<= U.w(1)(1)
with otherwise():
with when(io.instr_i[11:7] == U.w(5)(0)):
# Hint -> add x0, x0, rs2
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], U.w(5)(0), U.w(3)(0), io.instr_i[11:7],
OPCODE_OP)
with otherwise():
# c.mv -> add rd, x0, rs2
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], U.w(5)(0), U.w(3)(0), io.instr_i[11:7],
OPCODE_OP)
with otherwise():
with when(io.instr_i[6:2] == U.w(5)(0)):
with when(io.instr_i[11:7] == U.w(5)(0)):
# c.ebreak -> ebreak
io.instr_o <<= CatBits(U.w(32)(0x00100073))
with otherwise():
# c.jalr -> jalr x1, rs1, 0
io.instr_o <<= CatBits(U.w(12)(0), io.instr_i[11:7], U.w(3)(0), U.w(5)(1), OPCODE_JALR)
with otherwise():
with when(io.instr_i[11:7] == U.w(5)(0)):
# Hint -> add x0, x0, rs2
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], io.instr_i[11:7], U.w(3)(0),
io.instr_i[11:7], OPCODE_OP)
with otherwise():
io.instr_o <<= CatBits(U.w(7)(0), io.instr_i[6:2], io.instr_i[11:7], U.w(3)(0),
io.instr_i[11:7], OPCODE_OP)
with elsewhen(io.instr_i[15:13] == U.w(3)(5)):
# c.fsdsp -> fsd rs2, imm(x2)
if FPU:
io.instr_o <<= CatBits(U.w(3)(0), io.instr_i[9:7], io.instr_i[12], io.instr_i[6:2], U.w(5)(0x2),
U.w(3)(3), io.instr_i[11:10], U.w(3)(0), OPCODE_STORE_FP)
else:
io.illegal_instr_o <<= U.w(1)(1)
with elsewhen(io.instr_i[15:13] == U.w(3)(6)):
# c.swsp -> sw rs2, imm(x2)
io.instr_o <<= CatBits(U.w(4)(0), io.instr_i[8:7], io.instr_i[12], io.instr_i[6:2], U.w(5)(0x2),
U.w(3)(2), io.instr_i[11:9], U.w(2)(0), OPCODE_STORE)
with elsewhen(io.instr_i[15:13] == U.w(3)(7)):
# c.fswsp -> fsw rs2, imm(x2)
if FPU:
io.instr_o <<= CatBits(U.w(4)(0), io.instr_i[8:7], io.instr_i[12], io.instr_i[6:2], U.w(5)(0x2),
U.w(3)(2), io.instr_i[11:9], U.w(2)(0), OPCODE_STORE_FP)
else:
io.illegal_instr_o <<= U.w(1)(1)
with otherwise():
io.instr_o <<= io.instr_i
io.is_compressed_o <<= io.instr_i[1:0] != U.w(2)(3)
return COMPRESSED_DECODER()
if __name__ == '__main__':
Emitter.dumpVerilog(Emitter.dump(Emitter.emit(compressed_decoder(1)), "compressed_decoder.fir"))
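The 16-bit to 32-bit expansions above are pure bit concatenations, so each one can be cross-checked against the RISC-V C-extension tables with a throwaway software model before elaborating any hardware. The sketch below is not part of the PyHCL module; the helper names (`bits`, `expand_candi`) are mine. It mirrors the exact `CatBits` argument order of the c.andi branch: seven copies of the sign bit `instr[12]`, then `instr[6:2]`, then `01 ++ instr[9:7]` for both register fields, funct3 `111`, and the OP-IMM opcode.

```python
def bits(value, hi, lo):
    """Extract value[hi:lo] as an unsigned integer."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

OPCODE_OPIMM = 0x13  # same opcode constant the decoder concatenates last

def expand_candi(c16):
    """Expand a 16-bit c.andi encoding into andi rd', rd', imm (32-bit)."""
    sign = bits(c16, 12, 12)                    # imm[5] doubles as the sign bit
    imm12 = ((sign * 0x7F) << 5) | bits(c16, 6, 2)  # 7 sign copies ++ imm[4:0]
    rd = 0b01000 | bits(c16, 9, 7)              # compressed regs map to x8..x15
    return (imm12 << 20) | (rd << 15) | (0b111 << 12) | (rd << 7) | OPCODE_OPIMM

# c.andi x8, 3 (0x880D) expands to andi x8, x8, 3
assert expand_candi(0x880D) == 0x00347413
```

The same pattern (one pure function per branch, one known-answer assertion) scales to the trickier scrambled-immediate cases such as c.j and c.beqz, where a transcription slip in the `CatBits` argument order is easiest to make.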
# ===========================================================================
# File: synapse/tests/test_tools_dumprows_loadrows.py
# Repo: vertexmc/synapse (Apache-2.0)
# ===========================================================================
# -*- coding: utf-8 -*-
"""
synapse - test_tools_dumprows_loadrows.py
Created on 7/26/17.
Unittests for the dumprows and loadrows tools.
"""
import gzip
import synapse.lib.const as s_const
import synapse.lib.msgpack as s_msgpack
import synapse.tools.dumprows as s_dumprows
import synapse.tools.loadrows as s_loadrows
from synapse.tests.common import *
log = logging.getLogger(__name__)
class DumpRowsTest(SynTest):
def make_sql_genrows_json(self, fp):
d = {'slicebytes': 2, 'incvalu': 4}
with open(fp, 'wb') as f:
f.write(json.dumps(d, indent=2, sort_keys=True).encode())
def test_simple_use(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
fp = os.path.join(temp, 'dumpfile.mpk')
new_db = os.path.join(temp, 'test.db')
sqlite_url = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url) as core:
self.true(core.isnew)
core.setBlobValu('syn:test:tel', 8675309)
with core.getCoreXact():
core.formTufoByProp('inet:ipv4', 0x01020304)
for i in range(1000):
core.formTufoByProp('inet:ipv4', i)
# Now dump that sqlite core
argv = ['-s', sqlite_url, '-o', fp]
ret = s_dumprows.main(argv, outp)
self.eq(ret, 0)
# Now ensure our .mpk file is correct
with open(fp, 'rb') as fd:
gen = s_msgpack.iterfd(fd)
evt = next(gen)
self.eq(evt[0], 'syn:cortex:rowdump:info')
self.eq(evt[1].get('rows:compress'), False)
self.eq(evt[1].get('synapse:rows:output'), fp)
self.eq(evt[1].get('synapse:cortex:input'), sqlite_url)
self.eq(evt[1].get('synapse:cortex:blob_store'), False)
self.eq(evt[1].get('synapse:cortex:revstore'), False)
self.eq(evt[1].get('python:version'), version)
self.isin('synapse:version', evt[1])
evt = next(gen)
self.eq(evt[0], 'core:save:add:rows')
self.isin('rows', evt[1])
rows = evt[1].get('rows')
self.isinstance(rows, tuple)
self.isinstance(rows[0], tuple)
self.eq(len(rows[0]), 4)
# Expensive but worth checking
event_types = set()
event_types.add(evt[0])
total_rows = 0
for evt in gen:
event_types.add(evt[0])
if 'rows' in evt[1]:
total_rows = total_rows + len(evt[1].get('rows'))
self.gt(total_rows, 1000)
self.eq(event_types, {'core:save:add:rows'})
def test_simple_compress(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
fp = os.path.join(temp, 'dumpfile.mpk')
new_db = os.path.join(temp, 'test.db')
sqlite_url = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url) as core:
self.true(core.isnew)
core.setBlobValu('syn:test:tel', 8675309)
core.formTufoByProp('inet:ipv4', 0x01020304)
# Now dump that sqlite core
argv = ['-s', sqlite_url, '-o', fp, '--compress']
ret = s_dumprows.main(argv, outp)
self.eq(ret, 0)
# Now ensure our .mpk file is correct
with open(fp, 'rb') as fd:
gen = s_msgpack.iterfd(fd)
evt = next(gen)
self.eq(evt[0], 'syn:cortex:rowdump:info')
self.eq(evt[1].get('rows:compress'), True)
evt = next(gen)
self.eq(evt[0], 'core:save:add:rows')
self.isin('rows', evt[1])
rows = evt[1].get('rows')
# we decode the rows blob not in place but separately here
rows = s_msgpack.un(gzip.decompress(rows))
self.isinstance(rows, tuple)
self.isinstance(rows[0], tuple)
self.eq(len(rows[0]), 4)
# Expensive but worth checking
event_types = set()
event_types.add(evt[0])
for evt in gen:
event_types.add(evt[0])
self.eq(event_types, {'core:save:add:rows'})
def test_blob_dump(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
fp = os.path.join(temp, 'dumpfile.mpk')
new_db = os.path.join(temp, 'test.db')
sqlite_url = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url) as core:
self.true(core.isnew)
core.setBlobValu('syn:test:tel', 8675309)
core.formTufoByProp('inet:ipv4', 0x01020304)
# Now dump that sqlite core
argv = ['-s', sqlite_url, '-o', fp, '--dump-blobstore']
ret = s_dumprows.main(argv, outp)
self.eq(ret, 0)
# Now ensure our .mpk file is correct
with open(fp, 'rb') as fd:
gen = s_msgpack.iterfd(fd)
evt = next(gen)
self.eq(evt[0], 'syn:cortex:rowdump:info')
self.eq(evt[1].get('synapse:cortex:blob_store'), True)
evt = next(gen)
self.eq(evt[0], 'core:save:add:rows')
self.isin('rows', evt[1])
rows = evt[1].get('rows')
self.isinstance(rows, tuple)
self.isinstance(rows[0], tuple)
self.eq(len(rows[0]), 4)
# Expensive but worth checking
event_types = set()
event_types.add(evt[0])
for evt in gen:
event_types.add(evt[0])
self.eq(event_types, {'core:save:add:rows', 'syn:core:blob:set'})
def test_dump_force(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
fp = os.path.join(temp, 'dumpfile.mpk')
new_db = os.path.join(temp, 'test.db')
sqlite_url = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url) as core:
self.true(core.isnew)
core.setBlobValu('syn:test:tel', 8675309)
core.formTufoByProp('inet:ipv4', 0x01020304)
# Now dump that sqlite core
argv = ['-s', sqlite_url, '-o', fp]
ret = s_dumprows.main(argv, outp)
self.eq(ret, 0)
outp = self.getTestOutp()
argv = ['-s', sqlite_url, '-o', fp]
ret = s_dumprows.main(argv, outp)
self.eq(ret, 1)
self.true('Cannot overwrite a backup.' in str(outp))
outp = self.getTestOutp()
# Now dump that sqlite core
argv = ['-s', sqlite_url, '-o', fp, '-f']
ret = s_dumprows.main(argv, outp)
self.eq(ret, 0)
def test_dump_largecore(self):
self.skipLongTest()
self.thisHostMustNot(platform='darwin')
# This ensure we're executing the "dump rows
# when we have N number of bytes cached codepath.
# Unfortunately this is a bit slow (2-4 seconds).
ntufos = 40000
outp = self.getTestOutp()
with self.getTestDir() as temp:
fp = os.path.join(temp, 'dumpfile.mpk')
new_db = os.path.join(temp, 'test.db')
sqlite_url = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url) as core:
self.true(core.isnew)
rows = []
tick = now()
for i in range(1, ntufos):
iden = guid()
rows.append((iden, 'tufo:form', 'inet:asn', tick))
rows.append((iden, 'inet:asn', i, tick))
rows.append((iden, 'inet:asn:name', '??', tick))
core.addRows(rows)
q = 'SELECT count(1) from {}'.format(core.store._getTableName())
num_core_rows = core.store.select(q)[0][0]
# Now dump that sqlite core
argv = ['-s', sqlite_url, '-o', fp]
ret = s_dumprows.main(argv, outp)
self.eq(ret, 0)
stat = os.stat(fp)
self.gt(stat.st_size, s_const.mebibyte * 4)
# Now ensure our .mpk file is correct
with open(fp, 'rb') as fd:
msgpk_rows = 0
for evt in s_msgpack.iterfd(fd):
if 'rows' in evt[1]:
msgpk_rows = msgpk_rows + len(evt[1].get('rows'))
self.eq(num_core_rows, msgpk_rows)
class LoadRowsTest(SynTest):
def make_sql_genrows_json(self, fp):
d = {'slicebytes': 2, 'incvalu': 4}
with open(fp, 'wb') as f:
f.write(json.dumps(d, indent=2, sort_keys=True).encode())
def test_savefile_load(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
# Prepare a savefile to load from a ram core
fp = os.path.join(temp, 'savefile.mpk')
new_db = os.path.join(temp, 'test.db')
sqlite_url = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl('ram:///', savefile=fp) as core:
self.true(core.isnew)
node = core.formTufoByProp('inet:ipv4', 0x01020304)
self.true('.new' in node[1])
core.setBlobValu('foo:bar', ('tufo', {'test': 'value'}))
rammyfo = core.myfo
argv = ['-s', sqlite_url, '-i', fp]
# Execute loadrows tool to create the sqlite store and load the rows
ret = s_loadrows.main(argv, outp)
self.eq(ret, 0)
self.true('Restoring from a savefile' in str(outp))
with s_cortex.openurl(sqlite_url) as core:
self.false(core.isnew)
self.eq(core.myfo[0], rammyfo[0])
node = core.formTufoByProp('inet:ipv4', 0x01020304)
self.true('.new' not in node[1])
self.eq(core.getBlobValu('foo:bar'), ('tufo', {'test': 'value'}))
def test_dumprows_load(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
# Make a sqlite cortex and the associated dupmfile for it
fp = os.path.join(temp, 'dumpfile.mpk')
genrows_json_fp = os.path.join(temp, 'genrows.json')
self.make_sql_genrows_json(genrows_json_fp)
old_db = os.path.join(temp, 'old.db')
new_db = os.path.join(temp, 'new.db')
sqlite_url_old = 'sqlite:///{}'.format(old_db)
sqlite_url_new = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url_old) as core:
self.true(core.isnew)
node = core.formTufoByProp('inet:ipv4', 0x01020304)
self.true('.new' in node[1])
core.setBlobValu('foo:bar', ('tufo', {'test': 'value'}))
# Dump that core and its blobstore to a dumpfile
dump_argv = ['-s', sqlite_url_old, '-o', fp, '--dump-blobstore', '-e', genrows_json_fp]
ret = s_dumprows.main(dump_argv, outp)
self.eq(ret, 0)
# Execute loadrows tool to create the sqlite store and load the rows
argv = ['-s', sqlite_url_new, '-i', fp]
ret = s_loadrows.main(argv, outp)
self.eq(ret, 0)
self.true('Restoring from a dumprows file' in str(outp))
# Make sure the output is valid
with s_cortex.openurl(sqlite_url_new) as core:
self.false(core.isnew)
node = core.formTufoByProp('inet:ipv4', 0x01020304)
self.true('.new' not in node[1])
self.eq(core.getBlobValu('foo:bar'), ('tufo', {'test': 'value'}))
def test_dumprows_load_compressed(self):
self.thisHostMustNot(platform='darwin')
outp = self.getTestOutp()
with self.getTestDir() as temp:
# Make a sqlite cortex and the associated dupmfile for it
fp = os.path.join(temp, 'dumpfile.mpk')
genrows_json_fp = os.path.join(temp, 'genrows.json')
self.make_sql_genrows_json(genrows_json_fp)
old_db = os.path.join(temp, 'old.db')
new_db = os.path.join(temp, 'new.db')
sqlite_url_old = 'sqlite:///{}'.format(old_db)
sqlite_url_new = 'sqlite:///{}'.format(new_db)
with s_cortex.openurl(sqlite_url_old) as core:
self.true(core.isnew)
node = core.formTufoByProp('inet:ipv4', 0x01020304)
self.true('.new' in node[1])
core.setBlobValu('foo:bar', ('tufo', {'test': 'value'}))
# Dump that core and its blobstore to a dumpfile
dump_argv = ['-s', sqlite_url_old, '-o', fp, '--dump-blobstore', '--compress', '-e', genrows_json_fp]
ret = s_dumprows.main(dump_argv, self.getTestOutp())
self.eq(ret, 0)
# Execute loadrows tool to create the sqlite store and load the rows
argv = ['-s', sqlite_url_new, '-i', fp]
ret = s_loadrows.main(argv, outp)
self.eq(ret, 0)
self.true('Gzip row compression enabled' in str(outp))
# Make sure the output is valid
with s_cortex.openurl(sqlite_url_new) as core:
self.false(core.isnew)
node = core.formTufoByProp('inet:ipv4', 0x01020304)
self.true('.new' not in node[1])
self.eq(core.getBlobValu('foo:bar'), ('tufo', {'test': 'value'}))
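The compressed round trip exercised above hinges on one invariant: whatever `dumprows` gzips after serialization, `loadrows` must gunzip before deserialization (the tests do this by hand via `s_msgpack.un(gzip.decompress(rows))`). A standalone sketch of that scheme, with the stdlib `json` module standing in for msgpack so it runs without synapse installed, the `pack_rows`/`unpack_rows` names being mine:

```python
import gzip
import json

def pack_rows(rows, compress=False):
    """Serialize a rows batch; optionally gzip it (stand-in for s_msgpack.en)."""
    blob = json.dumps(rows).encode()
    return gzip.compress(blob) if compress else blob

def unpack_rows(blob, compress=False):
    """Inverse of pack_rows; mirrors s_msgpack.un(gzip.decompress(rows))."""
    if compress:
        blob = gzip.decompress(blob)
    return json.loads(blob.decode())

rows = [['iden0', 'inet:asn', 1, 1500000000000],
        ['iden0', 'inet:asn:name', '??', 1500000000000]]
assert unpack_rows(pack_rows(rows, compress=True), compress=True) == rows
```

Note that the `compress` flag rides along in the dump's `syn:cortex:rowdump:info` header event (`rows:compress`), which is why the loader can pick the right branch without any out-of-band configuration.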
# ===========================================================================
# File: product/__init__.py
# Repo: gingerxman/bdd-steps (MIT)
# ===========================================================================
# -*- coding: utf-8 -*-
import category_steps
import product_label_steps
import product_property_steps
import product_steps
# ===========================================================================
# File: sos_trades_core/tests/l0_test_14_optim_scenario.py
# Repo: os-climate/sostrades-core (Apache-2.0)
# ===========================================================================
'''
Copyright 2022 Airbus SAS
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
"""
mode: python; py-indent-offset: 4; tab-width: 4; coding: utf-8
unit test for optimization scenario
"""
import os
import unittest
from copy import deepcopy
import pandas as pd
from numpy import array, set_printoptions
from numpy.testing import assert_array_almost_equal, assert_array_equal
from gemseo.core.mdo_scenario import MDOScenario
from sos_trades_core.execution_engine.execution_engine import ExecutionEngine
from sos_trades_core.sos_processes.test.test_Griewank_opt.usecase import Study as study_griewank
from sos_trades_core.sos_processes.test.test_sellar_opt.usecase import Study as study_sellar_opt
from sos_trades_core.sos_processes.test.test_sellar_opt_idf.usecase import Study as study_sellar_idf
class TestSoSOptimScenario(unittest.TestCase):
"""
SoSOptimScenario test class
"""
def setUp(self):
self.study_name = 'optim'
self.ns = f'{self.study_name}'
self.sc_name = "SellarOptimScenario"
self.c_name = "SellarCoupling"
dspace_dict = {'variable': ['x', 'z', 'y_1', 'y_2'],
'value': [[1.], [5., 2.], [1.], [1.]],
'lower_bnd': [[0.], [-10., 0.], [-100.], [-100.]],
'upper_bnd': [[10.], [10., 10.], [100.], [100.]],
'enable_variable': [True, True, True, True],
'activated_elem': [[True], [True, True], [True], [True]]}
self.dspace = pd.DataFrame(dspace_dict)
self.repo = 'sos_trades_core.sos_processes.test'
self.proc_name = 'test_sellar_opt'
def test_01_optim_scenario_check_treeview(self):
print("\n Test 1 : check configure and treeview")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
opt_builder = factory.get_builder_from_process(repo=self.repo,
mod_id=self.proc_name)
exec_eng.factory.set_builders_to_coupling_builder(opt_builder)
exec_eng.configure()
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 100
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = self.dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'MDF'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
f'c_1', f'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-10,
"ineq_tolerance": 2e-3,
"normalize_design_space": False}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.x'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.y_1'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.y_2'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.z'] = array([1., 1.])
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
'\t\t|_ Sellar_Problem',
'\t\t|_ Sellar_2',
'\t\t|_ Sellar_1']
exp_tv_str = '\n'.join(exp_tv_list)
assert exp_tv_str == exec_eng.display_treeview_nodes()
# XDSMize test
# exec_eng.root_process.xdsmize()
# to visualize in an internet browser :
# - download XDSMjs at https://github.com/OneraHub/XDSMjs and unzip
# - replace existing xdsm.json inside by yours
# - in the same folder, type in terminal 'python -m http.server 8080'
# - open in browser http://localhost:8080/xdsm.html
def test_02_optim_scenario_execution_mdf(self):
print("\n Test 2 : Sellar optim solution check with MDF formulation")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
builder = factory.get_builder_from_process(repo=self.repo,
mod_id=self.proc_name)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 100
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "NLOPT_MMA"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = self.dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'MDF'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
f'c_1', f'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-5,
"ineq_tolerance": 1e-5,
"normalize_design_space": False}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.y_1'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.y_2'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.z'] = array([1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
'\t\t|_ Sellar_Problem',
'\t\t|_ Sellar_2',
'\t\t|_ Sellar_1']
exp_tv_str = '\n'.join(exp_tv_list)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
# retrieve discipline to check the result...
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
# check optimal x and f
sellar_obj_opt = 3.18339395 + local_dv
self.assertAlmostEqual(
sellar_obj_opt, opt_disc.optimization_result.f_opt, 4, msg="Wrong objective value")
exp_x = array([8.45997174e-15, 1.97763888, 0.0])
assert_array_almost_equal(
exp_x, opt_disc.optimization_result.x_opt, decimal=4, err_msg="Wrong optimal x solution")
def test_03_optim_scenario_execution_idf(self):
print("\n Test 3 : Sellar optim solution check with IDF formulation")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
builder = factory.get_builder_from_process(repo=self.repo,
mod_id=self.proc_name)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "NLOPT_SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = self.dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'IDF'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
f'c_1', f'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.y_1'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.y_2'] = array([1.])
values_dict[f'{self.ns}.{self.sc_name}.z'] = array([1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
'\t\t|_ Sellar_Problem',
'\t\t|_ Sellar_2',
'\t\t|_ Sellar_1']
exp_tv_str = '\n'.join(exp_tv_list)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
# retrieve discipline to check the result...
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
# check optimal x and f
sellar_obj_opt = 3.1800 + local_dv
self.assertAlmostEqual(
sellar_obj_opt, opt_disc.optimization_result.f_opt, places=4, msg="Wrong objective value")
exp_x = array([1.6653e-16, 2.1339, 0., 3.16, 3.911598])
assert_array_almost_equal(
exp_x, opt_disc.optimization_result.x_opt, decimal=4, err_msg="Wrong optimal x solution")
def test_04_optim_scenario_execution_disciplinaryopt(self):
print("\n Test 4 : Sellar optim solution check with DisciplinaryOpt formulation")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "NLOPT_SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
# retrieve discipline to check the result...
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
# check optimal x and f
sellar_obj_opt = 3.18339 + local_dv
self.assertAlmostEqual(
sellar_obj_opt, opt_disc.optimization_result.f_opt, places=4, msg="Wrong objective value")
exp_x = array([8.3109e-15, 1.9776e+00, 3.2586e-13])
assert_array_almost_equal(
exp_x, opt_disc.optimization_result.x_opt, decimal=4,
err_msg="Wrong optimal x solution")
def test_05_optim_scenario_execution_disciplinaryopt_complex_step(self):
print("\n Test 5 : Sellar optim solution check with DisciplinaryOpt formulation with complex step")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "NLOPT_SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.differentiation_method'] = MDOScenario.COMPLEX_STEP
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
# retrieve discipline to check the result...
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
# check optimal x and f
sellar_obj_opt = 3.18339 + local_dv
self.assertAlmostEqual(
sellar_obj_opt, opt_disc.optimization_result.f_opt, places=4, msg="Wrong objective value")
exp_x = array([8.3109e-15, 1.9776e+00, 3.2586e-13])
assert_array_almost_equal(
exp_x, opt_disc.optimization_result.x_opt, decimal=4, err_msg="Wrong optimal x solution")
def test_06_optim_scenario_execution_fd_parallel(self):
if os.name == 'nt':
print("\n Test 6 : skipped, multi-proc not handled on windows")
else:
print("\n Test 6 : Sellar optim with FD in parallel execution")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "NLOPT_SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
# parallel inputs
disc_dict[f'{self.ns}.SellarOptimScenario.parallel_options'] = {"parallel": True,
"n_processes": 2,
"use_threading": False,
"wait_time_between_fork": 0}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
# retrieve discipline to check the result...
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
# check optimal x and f
sellar_obj_opt = 3.18339 + local_dv
self.assertAlmostEqual(
sellar_obj_opt, opt_disc.optimization_result.f_opt, places=4, msg="Wrong objective value")
exp_x = array([8.3109e-15, 1.9776e+00, 3.2586e-13])
assert_array_almost_equal(
exp_x, opt_disc.optimization_result.x_opt, decimal=4, err_msg="Wrong optimal x solution")
def test_07_test_options(self):
print("\n Test 07 : Sellar optim solution check options")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "L-BFGS-B"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = []
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = array([
1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = array([
1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
algo_options = opt_disc.get_sosdisc_inputs('algo_options')
assert ("maxcor" in algo_options.keys())
assert ("max_ls_step_nb" in algo_options.keys())
def test_08_optim_scenario_eval_mode(self):
print("\n Test 8 : Sellar optim with eval_mode")
set_printoptions(precision=20)
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[2.], [2., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.eval_mode'] = True
exec_eng.load_study_from_input_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 2.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 2.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 2.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
2., 2.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.tolerance'] = 1e-9
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.load_study_from_input_dict(values_dict)
self.assertFalse(exec_eng.dm.get_data(
f'{self.ns}.SellarOptimScenario.algo_options', 'editable'))
self.assertFalse(exec_eng.dm.get_data(
f'{self.ns}.SellarOptimScenario.algo', 'editable'))
exec_eng.execute()
# Check that the jacobian has not been executed
self.assertEqual(
exec_eng.root_process.sos_disciplines[0].sos_disciplines[0].jac, None)
# Exec_eng with only the coupling
exec_eng2 = ExecutionEngine(self.study_name)
factory = exec_eng2.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id='test_sellar_coupling')
factory.set_builders_to_coupling_builder(builder)
exec_eng2.configure()
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.eval_mode'] = True
exec_eng.load_study_from_input_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.c_name}.x'] = 2.
values_dict[f'{self.ns}.{self.c_name}.y_1'] = 2.
values_dict[f'{self.ns}.{self.c_name}.y_2'] = 2.
values_dict[f'{self.ns}.{self.c_name}.z'] = array([
2., 2.])
values_dict[f'{self.ns}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
values_dict[f'{self.ns}.{self.c_name}.sub_mda_class'] = 'MDANewtonRaphson'
values_dict[f'{self.ns}.{self.c_name}.tolerance'] = 1e-9
exec_eng2.load_study_from_input_dict(values_dict)
exec_eng2.execute()
for var in ['x', 'y_1', 'y_2', 'z', 'obj', 'c_1', 'c_2']:
eval_value = exec_eng.dm.get_value(
f'{self.ns}.{self.sc_name}.{self.c_name}.{var}')
coupling_value = exec_eng2.dm.get_value(
f'{self.ns}.{self.c_name}.{var}')
try:
self.assertEqual(coupling_value, eval_value)
except Exception:
self.assertListEqual(list(coupling_value), list(eval_value))
def test_09_optim_scenario_eval_mode_with_eval_jac(self):
print("\n Test 9 : Sellar optim with eval_mode and eval_jac")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[2.], [2., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.eval_mode'] = True
disc_dict[f'{self.ns}.SellarOptimScenario.eval_jac'] = True
exec_eng.load_study_from_input_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.sub_mda_class'] = 'MDANewtonRaphson'
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 2.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 2.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 2.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
2., 2.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.load_study_from_input_dict(values_dict)
exec_eng.execute()
# Get the jacobian of each functions (constraints + objective)
computed_jac = exec_eng.root_process.sos_disciplines[0].sos_disciplines[0].jac
self.assertListEqual(sorted(list(computed_jac.keys())), sorted([
f'{self.ns}.{self.sc_name}.{self.c_name}.{var}' for var in ['obj', 'c_1', 'c_2']]))
def test_10_update_dspace(self):
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
opt_builder = factory.get_builder_from_process(repo=self.repo,
mod_id=self.proc_name)
exec_eng.factory.set_builders_to_coupling_builder(opt_builder)
exec_eng.configure()
dspace_dict = {'variable': ['x', 'z', 'y_1', 'y_2'],
'value': [[1.], [5., 12.], [1.], [1.]],
'lower_bnd': [[0.], [-10., 0.], [-100.], [-100.]],
'upper_bnd': [[10.], [10., 10.], [100.], [100.]],
'enable_variable': [True, True, True, True],
'activated_elem': [[True], [True, True], [True], [True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 100
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'MDF'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-10,
"ineq_tolerance": 2e-3,
"normalize_design_space": False}
# Sellar inputs
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.z'] = array([1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.Sellar_Problem.local_dv'] = 10.
values_dict.update(disc_dict)
try:
exec_eng.load_study_from_input_dict(values_dict)
except Exception:
pass
dspace_dict = {'variable': ['x', 'z', 'y_1', 'y_2'],
'value': [[1.], [5., 5.], [1.], [1.]],
'lower_bnd': [[0.], [-10., 0.], [-100.], [-100.]],
'upper_bnd': [[10.], [10., 10.], [100.], [100.]],
'enable_variable': [True, True, True, True],
'activated_elem': [[True], [True, True], [True], [True]]}
dspace = pd.DataFrame(dspace_dict)
values_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
exec_eng.load_study_from_input_dict(values_dict)
exec_eng.execute()
def test_11_update_dspace_from_usecase(self):
uc_cls = study_sellar_opt()
uc_cls.setup_usecase()
uc_cls.load_data()
dspace = deepcopy(uc_cls.execution_engine.dm.get_value(
f'{uc_cls.study_name}.SellarOptimScenario.design_space'))
dspace['value'] = [[1.], [5., 12.], [1.], [1.]]
values_dict = {
f'{uc_cls.study_name}.SellarOptimScenario.design_space': dspace}
try:
uc_cls.load_data(from_input_dict=values_dict)
except Exception:
dspace = deepcopy(uc_cls.execution_engine.dm.get_value(
f'{uc_cls.study_name}.SellarOptimScenario.design_space'))
dspace['value'] = [[1.], [5., 5.], [1.], [1.]]
values_dict = {
f'{uc_cls.study_name}.SellarOptimScenario.design_space': dspace}
uc_cls.load_data(from_input_dict=values_dict)
def test_12_optim_scenario_execution_disciplinaryopt(self):
print("\n Test 12 : Sellar optim solution check with DisciplinaryOpt formulation, check optimum")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 2
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "L-BFGS-B"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = []
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
# retrieve discipline to check the result...
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
opt_array = array([1., 5., 2.])
# check that design space in GEMS contains the optimal value (not last
# iteration)
assert_array_almost_equal(
opt_disc.formulation.design_space.get_current_x(), opt_array,
err_msg="design space does not have optimal value")
# check that in dm we have xopt value
z = exec_eng.dm.get_value(f'{self.ns}.{self.sc_name}.{self.c_name}.z')
opt_z = array([5., 2.])
assert_array_almost_equal(
z, opt_z, err_msg="the value of z in dm does not have the optimal value")
x = exec_eng.dm.get_value(f'{self.ns}.{self.sc_name}.{self.c_name}.x')
opt_x = array([1.])
assert_array_almost_equal(
x, opt_x, err_msg="the value of x in dm does not have the optimal value")
def test_13_optim_scenario_execution_disciplinaryopt_other_dspace(self):
print("\n Test 13 : Sellar optim solution check with DisciplinaryOpt formulation, check optimum")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2., 3.]],
'lower_bnd': [[0.], [-10., 0., 0.]],
'upper_bnd': [[10.], [10., 10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True, False]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 2
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "L-BFGS-B"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = []
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
res = exec_eng.execute()
def test_14_optim_sellar_idf_process(self):
exec_eng = ExecutionEngine(self.study_name)
builder_process = exec_eng.factory.get_builder_from_process(
'sos_trades_core.sos_processes.test', 'test_sellar_opt_idf')
exec_eng.factory.set_builders_to_coupling_builder(builder_process)
exec_eng.configure()
study_dremio = study_sellar_idf()
study_dremio.study_name = self.study_name
dict_values_list = study_dremio.setup_usecase()
dict_values = {}
for dict_val in dict_values_list:
dict_values.update(dict_val)
exec_eng.load_study_from_input_dict(dict_values)
exec_eng.execute()
def test_15_optim_griewank_process(self):
exec_eng = ExecutionEngine(self.study_name)
builder_process = exec_eng.factory.get_builder_from_process(
'sos_trades_core.sos_processes.test', 'test_Griewank_opt')
exec_eng.factory.set_builders_to_coupling_builder(builder_process)
exec_eng.configure()
study_dremio = study_griewank()
study_dremio.study_name = self.study_name
dict_values_list = study_dremio.setup_usecase()
dict_values = {}
for dict_val in dict_values_list:
dict_values.update(dict_val)
exec_eng.load_study_from_input_dict(dict_values)
exec_eng.execute()
def test_16_test_post_run(self):
print("\n Test 16 : Sellar optim check post run exception")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2., 3.]],
'lower_bnd': [[0.], [-10., 0., 0.]],
'upper_bnd': [[10.], [10., 10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True, False]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 10
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "L-BFGS-B"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = []
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
disc_dict[f'{self.ns}.SellarOptimScenario.execute_at_xopt'] = False
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
exp_tv_list = [f'Nodes representation for Treeview {self.ns}',
'|_ optim',
f'\t|_ {self.sc_name}',
f'\t\t|_ {self.c_name}',
'\t\t\t|_ Sellar_2',
'\t\t\t|_ Sellar_1',
'\t\t\t|_ Sellar_Problem']
exp_tv_str = '\n'.join(exp_tv_list)
exec_eng.display_treeview_nodes(True)
assert exp_tv_str == exec_eng.display_treeview_nodes()
# execute without post run
res = exec_eng.execute()
# get sosoptimscenario discipline
disc = exec_eng.root_process.sos_disciplines[0]
disc.formulation.opt_problem.nonproc_constraints = []
disc.formulation.opt_problem.nonproc_objective = None
# execute postrun to trigger exception
disc._post_run()
dm = exec_eng.dm
x_first_execution = dm.get_value(
f'{self.ns}.{self.sc_name}.{self.c_name}.x')
z_first_execution = dm.get_value(
f'{self.ns}.{self.sc_name}.{self.c_name}.z')
# use nominal execution
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
disc_dict[f'{self.ns}.SellarOptimScenario.execute_at_xopt'] = True
exec_eng.dm.set_values_from_dict(disc_dict)
exec_eng.configure()
res = exec_eng.execute()
dm = exec_eng.dm
x_nominal_execution = dm.get_value(
f'{self.ns}.{self.sc_name}.{self.c_name}.x')
z_nominal_execution = dm.get_value(
f'{self.ns}.{self.sc_name}.{self.c_name}.z')
assert x_first_execution == x_nominal_execution
assert_array_equal(z_first_execution, z_nominal_execution)
def test_17_optim_scenario_execution_disciplinaryopt_complex_step_with_custom_step(self):
print("\n Test 17 : Sellar optim solution check with DisciplinaryOpt formulation with complex step and a finite differences step")
exec_eng = ExecutionEngine(self.study_name)
factory = exec_eng.factory
repo_discopt = 'sos_trades_core.sos_processes.test'
proc_name_discopt = 'test_sellar_opt_discopt'
builder = factory.get_builder_from_process(repo=repo_discopt,
mod_id=proc_name_discopt)
exec_eng.factory.set_builders_to_coupling_builder(builder)
exec_eng.configure()
# -- set up design space
dspace_dict = {'variable': ['x', 'z'],
'value': [[1.], [5., 2.]],
'lower_bnd': [[0.], [-10., 0.]],
'upper_bnd': [[10.], [10., 10.]],
'enable_variable': [True, True],
'activated_elem': [[True], [True, True]]}
dspace = pd.DataFrame(dspace_dict)
# -- set up disciplines in Scenario
disc_dict = {}
# Optim inputs
disc_dict[f'{self.ns}.SellarOptimScenario.max_iter'] = 200
disc_dict[f'{self.ns}.SellarOptimScenario.algo'] = "NLOPT_SLSQP"
disc_dict[f'{self.ns}.SellarOptimScenario.design_space'] = dspace
disc_dict[f'{self.ns}.SellarOptimScenario.formulation'] = 'DisciplinaryOpt'
disc_dict[f'{self.ns}.SellarOptimScenario.objective_name'] = 'obj'
disc_dict[f'{self.ns}.SellarOptimScenario.ineq_constraints'] = [
'c_1', 'c_2']
disc_dict[f'{self.ns}.SellarOptimScenario.differentiation_method'] = MDOScenario.COMPLEX_STEP
fd_step = 1.e-15
disc_dict[f'{self.ns}.SellarOptimScenario.fd_step'] = fd_step
disc_dict[f'{self.ns}.SellarOptimScenario.algo_options'] = {"ftol_rel": 1e-6,
"ineq_tolerance": 1e-6,
"normalize_design_space": True}
exec_eng.dm.set_values_from_dict(disc_dict)
# Sellar inputs
local_dv = 10.
values_dict = {}
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.x'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_1'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.y_2'] = 1.
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.z'] = array([
1., 1.])
values_dict[f'{self.ns}.{self.sc_name}.{self.c_name}.Sellar_Problem.local_dv'] = local_dv
exec_eng.dm.set_values_from_dict(values_dict)
exec_eng.configure()
res = exec_eng.execute()
# retrieve discipline to get information to check
opt_disc = exec_eng.dm.get_disciplines_with_name(
"optim." + self.sc_name)[0]
assert opt_disc.opt_problem.fd_step == fd_step
# check optimal x and f
sellar_obj_opt = 3.18339 + local_dv
self.assertAlmostEqual(
sellar_obj_opt, opt_disc.optimization_result.f_opt, places=4, msg="Wrong objective value")
exp_x = array([8.3109e-15, 1.9776e+00, 3.2586e-13])
assert_array_almost_equal(
exp_x, opt_disc.optimization_result.x_opt, decimal=4, err_msg="Wrong optimal x solution")
if '__main__' == __name__:
    cls = TestSoSOptimScenario()
    cls.setUp()
    cls.test_17_optim_scenario_execution_disciplinaryopt_complex_step_with_custom_step()
## Amazon Simple Queue Service Construct Library
<!--BEGIN STABILITY BANNER-->---

---
<!--END STABILITY BANNER-->
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that
enables you to decouple and scale microservices, distributed systems, and serverless
applications. SQS eliminates the complexity and overhead associated with managing and
operating message oriented middleware, and empowers developers to focus on differentiating work.
Using SQS, you can send, store, and receive messages between software components at any volume,
without losing messages or requiring other services to be available.
### Installation
Import to your project:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_sqs as sqs
```
### Basic usage
Here's how to add a basic queue to your application:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
sqs.Queue(self, "Queue")
```
### Encryption
If you want to encrypt the queue contents, set the `encryption` property. You can have
the messages encrypted with a key that SQS manages for you, or a key that you
can manage yourself.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
# Use managed key
sqs.Queue(self, "Queue",
encryption=QueueEncryption.KMS_MANAGED
)
# Use custom key
my_key = kms.Key(self, "Key")
sqs.Queue(self, "Queue",
encryption=QueueEncryption.KMS,
encryption_master_key=my_key
)
```
### First-In-First-Out (FIFO) queues

FIFO queues give guarantees on the order in which messages are dequeued, and have additional
features in order to help guarantee exactly-once processing. For more information, see
the SQS manual. Note that FIFO queues are not available in all AWS regions.

A queue can be made a FIFO queue by either setting `fifo: true`, giving it a name which ends
in `".fifo"`, or enabling content-based deduplication (which requires FIFO queues).
"""
import abc
import datetime
import enum
import typing
import jsii
import jsii.compat
import publication
from jsii.python import classproperty
import aws_cdk.aws_cloudwatch
import aws_cdk.aws_iam
import aws_cdk.aws_kms
import aws_cdk.core
__jsii_assembly__ = jsii.JSIIAssembly.load("@aws-cdk/aws-sqs", "1.18.0", __name__, "aws-sqs@1.18.0.jsii.tgz")
@jsii.implements(aws_cdk.core.IInspectable)
class CfnQueue(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-sqs.CfnQueue"):
"""A CloudFormation ``AWS::SQS::Queue``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html
cloudformationResource:
:cloudformationResource:: AWS::SQS::Queue
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, content_based_deduplication: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, delay_seconds: typing.Optional[jsii.Number]=None, fifo_queue: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, kms_data_key_reuse_period_seconds: typing.Optional[jsii.Number]=None, kms_master_key_id: typing.Optional[str]=None, maximum_message_size: typing.Optional[jsii.Number]=None, message_retention_period: typing.Optional[jsii.Number]=None, queue_name: typing.Optional[str]=None, receive_message_wait_time_seconds: typing.Optional[jsii.Number]=None, redrive_policy: typing.Any=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, visibility_timeout: typing.Optional[jsii.Number]=None) -> None:
"""Create a new ``AWS::SQS::Queue``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param content_based_deduplication: ``AWS::SQS::Queue.ContentBasedDeduplication``.
:param delay_seconds: ``AWS::SQS::Queue.DelaySeconds``.
:param fifo_queue: ``AWS::SQS::Queue.FifoQueue``.
:param kms_data_key_reuse_period_seconds: ``AWS::SQS::Queue.KmsDataKeyReusePeriodSeconds``.
:param kms_master_key_id: ``AWS::SQS::Queue.KmsMasterKeyId``.
:param maximum_message_size: ``AWS::SQS::Queue.MaximumMessageSize``.
:param message_retention_period: ``AWS::SQS::Queue.MessageRetentionPeriod``.
:param queue_name: ``AWS::SQS::Queue.QueueName``.
:param receive_message_wait_time_seconds: ``AWS::SQS::Queue.ReceiveMessageWaitTimeSeconds``.
:param redrive_policy: ``AWS::SQS::Queue.RedrivePolicy``.
:param tags: ``AWS::SQS::Queue.Tags``.
:param visibility_timeout: ``AWS::SQS::Queue.VisibilityTimeout``.
"""
props = CfnQueueProps(content_based_deduplication=content_based_deduplication, delay_seconds=delay_seconds, fifo_queue=fifo_queue, kms_data_key_reuse_period_seconds=kms_data_key_reuse_period_seconds, kms_master_key_id=kms_master_key_id, maximum_message_size=maximum_message_size, message_retention_period=message_retention_period, queue_name=queue_name, receive_message_wait_time_seconds=receive_message_wait_time_seconds, redrive_policy=redrive_policy, tags=tags, visibility_timeout=visibility_timeout)
jsii.create(CfnQueue, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="attrArn")
def attr_arn(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: Arn
"""
return jsii.get(self, "attrArn")
@property
@jsii.member(jsii_name="attrQueueName")
def attr_queue_name(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: QueueName
"""
return jsii.get(self, "attrQueueName")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::SQS::Queue.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#cfn-sqs-queue-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="redrivePolicy")
def redrive_policy(self) -> typing.Any:
"""``AWS::SQS::Queue.RedrivePolicy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-redrive
"""
return jsii.get(self, "redrivePolicy")
@redrive_policy.setter
def redrive_policy(self, value: typing.Any):
return jsii.set(self, "redrivePolicy", value)
@property
@jsii.member(jsii_name="contentBasedDeduplication")
def content_based_deduplication(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::SQS::Queue.ContentBasedDeduplication``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-contentbaseddeduplication
"""
return jsii.get(self, "contentBasedDeduplication")
@content_based_deduplication.setter
def content_based_deduplication(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "contentBasedDeduplication", value)
@property
@jsii.member(jsii_name="delaySeconds")
def delay_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.DelaySeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-delayseconds
"""
return jsii.get(self, "delaySeconds")
@delay_seconds.setter
def delay_seconds(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "delaySeconds", value)
@property
@jsii.member(jsii_name="fifoQueue")
def fifo_queue(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::SQS::Queue.FifoQueue``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-fifoqueue
"""
return jsii.get(self, "fifoQueue")
@fifo_queue.setter
def fifo_queue(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "fifoQueue", value)
@property
@jsii.member(jsii_name="kmsDataKeyReusePeriodSeconds")
def kms_data_key_reuse_period_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.KmsDataKeyReusePeriodSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-kmsdatakeyreuseperiodseconds
"""
return jsii.get(self, "kmsDataKeyReusePeriodSeconds")
@kms_data_key_reuse_period_seconds.setter
def kms_data_key_reuse_period_seconds(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "kmsDataKeyReusePeriodSeconds", value)
@property
@jsii.member(jsii_name="kmsMasterKeyId")
def kms_master_key_id(self) -> typing.Optional[str]:
"""``AWS::SQS::Queue.KmsMasterKeyId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-kmsmasterkeyid
"""
return jsii.get(self, "kmsMasterKeyId")
@kms_master_key_id.setter
def kms_master_key_id(self, value: typing.Optional[str]):
return jsii.set(self, "kmsMasterKeyId", value)
@property
@jsii.member(jsii_name="maximumMessageSize")
def maximum_message_size(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.MaximumMessageSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-maxmesgsize
"""
return jsii.get(self, "maximumMessageSize")
@maximum_message_size.setter
def maximum_message_size(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "maximumMessageSize", value)
@property
@jsii.member(jsii_name="messageRetentionPeriod")
def message_retention_period(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.MessageRetentionPeriod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-msgretentionperiod
"""
return jsii.get(self, "messageRetentionPeriod")
@message_retention_period.setter
def message_retention_period(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "messageRetentionPeriod", value)
@property
@jsii.member(jsii_name="queueName")
def queue_name(self) -> typing.Optional[str]:
"""``AWS::SQS::Queue.QueueName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-name
"""
return jsii.get(self, "queueName")
@queue_name.setter
def queue_name(self, value: typing.Optional[str]):
return jsii.set(self, "queueName", value)
@property
@jsii.member(jsii_name="receiveMessageWaitTimeSeconds")
def receive_message_wait_time_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.ReceiveMessageWaitTimeSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-receivemsgwaittime
"""
return jsii.get(self, "receiveMessageWaitTimeSeconds")
@receive_message_wait_time_seconds.setter
def receive_message_wait_time_seconds(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "receiveMessageWaitTimeSeconds", value)
@property
@jsii.member(jsii_name="visibilityTimeout")
def visibility_timeout(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.VisibilityTimeout``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-visiblitytimeout
"""
return jsii.get(self, "visibilityTimeout")
@visibility_timeout.setter
def visibility_timeout(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "visibilityTimeout", value)
@jsii.implements(aws_cdk.core.IInspectable)
class CfnQueuePolicy(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-sqs.CfnQueuePolicy"):
"""A CloudFormation ``AWS::SQS::QueuePolicy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html
cloudformationResource:
:cloudformationResource:: AWS::SQS::QueuePolicy
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, policy_document: typing.Any, queues: typing.List[str]) -> None:
"""Create a new ``AWS::SQS::QueuePolicy``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param policy_document: ``AWS::SQS::QueuePolicy.PolicyDocument``.
:param queues: ``AWS::SQS::QueuePolicy.Queues``.
"""
props = CfnQueuePolicyProps(policy_document=policy_document, queues=queues)
jsii.create(CfnQueuePolicy, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="policyDocument")
def policy_document(self) -> typing.Any:
"""``AWS::SQS::QueuePolicy.PolicyDocument``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html#cfn-sqs-queuepolicy-policydoc
"""
return jsii.get(self, "policyDocument")
@policy_document.setter
def policy_document(self, value: typing.Any):
return jsii.set(self, "policyDocument", value)
@property
@jsii.member(jsii_name="queues")
def queues(self) -> typing.List[str]:
"""``AWS::SQS::QueuePolicy.Queues``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html#cfn-sqs-queuepolicy-queues
"""
return jsii.get(self, "queues")
@queues.setter
def queues(self, value: typing.List[str]):
return jsii.set(self, "queues", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-sqs.CfnQueuePolicyProps", jsii_struct_bases=[], name_mapping={'policy_document': 'policyDocument', 'queues': 'queues'})
class CfnQueuePolicyProps():
def __init__(self, *, policy_document: typing.Any, queues: typing.List[str]):
"""Properties for defining a ``AWS::SQS::QueuePolicy``.
:param policy_document: ``AWS::SQS::QueuePolicy.PolicyDocument``.
:param queues: ``AWS::SQS::QueuePolicy.Queues``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html
"""
self._values = {
'policy_document': policy_document,
'queues': queues,
}
@property
def policy_document(self) -> typing.Any:
"""``AWS::SQS::QueuePolicy.PolicyDocument``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html#cfn-sqs-queuepolicy-policydoc
"""
return self._values.get('policy_document')
@property
def queues(self) -> typing.List[str]:
"""``AWS::SQS::QueuePolicy.Queues``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html#cfn-sqs-queuepolicy-queues
"""
return self._values.get('queues')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnQueuePolicyProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-sqs.CfnQueueProps", jsii_struct_bases=[], name_mapping={'content_based_deduplication': 'contentBasedDeduplication', 'delay_seconds': 'delaySeconds', 'fifo_queue': 'fifoQueue', 'kms_data_key_reuse_period_seconds': 'kmsDataKeyReusePeriodSeconds', 'kms_master_key_id': 'kmsMasterKeyId', 'maximum_message_size': 'maximumMessageSize', 'message_retention_period': 'messageRetentionPeriod', 'queue_name': 'queueName', 'receive_message_wait_time_seconds': 'receiveMessageWaitTimeSeconds', 'redrive_policy': 'redrivePolicy', 'tags': 'tags', 'visibility_timeout': 'visibilityTimeout'})
class CfnQueueProps():
def __init__(self, *, content_based_deduplication: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, delay_seconds: typing.Optional[jsii.Number]=None, fifo_queue: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, kms_data_key_reuse_period_seconds: typing.Optional[jsii.Number]=None, kms_master_key_id: typing.Optional[str]=None, maximum_message_size: typing.Optional[jsii.Number]=None, message_retention_period: typing.Optional[jsii.Number]=None, queue_name: typing.Optional[str]=None, receive_message_wait_time_seconds: typing.Optional[jsii.Number]=None, redrive_policy: typing.Any=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, visibility_timeout: typing.Optional[jsii.Number]=None):
"""Properties for defining a ``AWS::SQS::Queue``.
:param content_based_deduplication: ``AWS::SQS::Queue.ContentBasedDeduplication``.
:param delay_seconds: ``AWS::SQS::Queue.DelaySeconds``.
:param fifo_queue: ``AWS::SQS::Queue.FifoQueue``.
:param kms_data_key_reuse_period_seconds: ``AWS::SQS::Queue.KmsDataKeyReusePeriodSeconds``.
:param kms_master_key_id: ``AWS::SQS::Queue.KmsMasterKeyId``.
:param maximum_message_size: ``AWS::SQS::Queue.MaximumMessageSize``.
:param message_retention_period: ``AWS::SQS::Queue.MessageRetentionPeriod``.
:param queue_name: ``AWS::SQS::Queue.QueueName``.
:param receive_message_wait_time_seconds: ``AWS::SQS::Queue.ReceiveMessageWaitTimeSeconds``.
:param redrive_policy: ``AWS::SQS::Queue.RedrivePolicy``.
:param tags: ``AWS::SQS::Queue.Tags``.
:param visibility_timeout: ``AWS::SQS::Queue.VisibilityTimeout``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html
"""
self._values = {
}
if content_based_deduplication is not None: self._values["content_based_deduplication"] = content_based_deduplication
if delay_seconds is not None: self._values["delay_seconds"] = delay_seconds
if fifo_queue is not None: self._values["fifo_queue"] = fifo_queue
if kms_data_key_reuse_period_seconds is not None: self._values["kms_data_key_reuse_period_seconds"] = kms_data_key_reuse_period_seconds
if kms_master_key_id is not None: self._values["kms_master_key_id"] = kms_master_key_id
if maximum_message_size is not None: self._values["maximum_message_size"] = maximum_message_size
if message_retention_period is not None: self._values["message_retention_period"] = message_retention_period
if queue_name is not None: self._values["queue_name"] = queue_name
if receive_message_wait_time_seconds is not None: self._values["receive_message_wait_time_seconds"] = receive_message_wait_time_seconds
if redrive_policy is not None: self._values["redrive_policy"] = redrive_policy
if tags is not None: self._values["tags"] = tags
if visibility_timeout is not None: self._values["visibility_timeout"] = visibility_timeout
@property
def content_based_deduplication(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::SQS::Queue.ContentBasedDeduplication``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-contentbaseddeduplication
"""
return self._values.get('content_based_deduplication')
@property
def delay_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.DelaySeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-delayseconds
"""
return self._values.get('delay_seconds')
@property
def fifo_queue(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::SQS::Queue.FifoQueue``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-fifoqueue
"""
return self._values.get('fifo_queue')
@property
def kms_data_key_reuse_period_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.KmsDataKeyReusePeriodSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-kmsdatakeyreuseperiodseconds
"""
return self._values.get('kms_data_key_reuse_period_seconds')
@property
def kms_master_key_id(self) -> typing.Optional[str]:
"""``AWS::SQS::Queue.KmsMasterKeyId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-kmsmasterkeyid
"""
return self._values.get('kms_master_key_id')
@property
def maximum_message_size(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.MaximumMessageSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-maxmesgsize
"""
return self._values.get('maximum_message_size')
@property
def message_retention_period(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.MessageRetentionPeriod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-msgretentionperiod
"""
return self._values.get('message_retention_period')
@property
def queue_name(self) -> typing.Optional[str]:
"""``AWS::SQS::Queue.QueueName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-name
"""
return self._values.get('queue_name')
@property
def receive_message_wait_time_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.ReceiveMessageWaitTimeSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-receivemsgwaittime
"""
return self._values.get('receive_message_wait_time_seconds')
@property
def redrive_policy(self) -> typing.Any:
"""``AWS::SQS::Queue.RedrivePolicy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-redrive
"""
return self._values.get('redrive_policy')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::SQS::Queue.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#cfn-sqs-queue-tags
"""
return self._values.get('tags')
@property
def visibility_timeout(self) -> typing.Optional[jsii.Number]:
"""``AWS::SQS::Queue.VisibilityTimeout``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-visiblitytimeout
"""
return self._values.get('visibility_timeout')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnQueueProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-sqs.DeadLetterQueue", jsii_struct_bases=[], name_mapping={'max_receive_count': 'maxReceiveCount', 'queue': 'queue'})
class DeadLetterQueue():
def __init__(self, *, max_receive_count: jsii.Number, queue: "IQueue"):
"""Dead letter queue settings.
:param max_receive_count: The number of times a message can be unsuccessfully dequeued before being moved to the dead-letter queue.
:param queue: The dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
"""
self._values = {
'max_receive_count': max_receive_count,
'queue': queue,
}
@property
def max_receive_count(self) -> jsii.Number:
"""The number of times a message can be unsuccesfully dequeued before being moved to the dead-letter queue."""
return self._values.get('max_receive_count')
@property
def queue(self) -> "IQueue":
"""The dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded."""
return self._values.get('queue')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DeadLetterQueue(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.interface(jsii_type="@aws-cdk/aws-sqs.IQueue")
class IQueue(aws_cdk.core.IResource, jsii.compat.Protocol):
@staticmethod
def __jsii_proxy_class__():
return _IQueueProxy
@property
@jsii.member(jsii_name="fifo")
def fifo(self) -> bool:
"""Whether this queue is an Amazon SQS FIFO queue.
If false, this is a standard queue.
"""
...
@property
@jsii.member(jsii_name="queueArn")
def queue_arn(self) -> str:
"""The ARN of this queue.
attribute:
:attribute:: true
"""
...
@property
@jsii.member(jsii_name="queueName")
def queue_name(self) -> str:
"""The name of this queue.
attribute:
:attribute:: true
"""
...
@property
@jsii.member(jsii_name="queueUrl")
def queue_url(self) -> str:
"""The URL of this queue.
attribute:
:attribute:: true
"""
...
@property
@jsii.member(jsii_name="encryptionMasterKey")
def encryption_master_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""If this queue is server-side encrypted, this is the KMS encryption key."""
...
@jsii.member(jsii_name="addToResourcePolicy")
def add_to_resource_policy(self, statement: aws_cdk.aws_iam.PolicyStatement) -> None:
"""Adds a statement to the IAM resource policy associated with this queue.
If this queue was created in this stack (``new Queue``), a queue policy
will be automatically created upon the first call to ``addToPolicy``. If
the queue is imported (``Queue.import``), then this is a no-op.
:param statement: -
"""
...
@jsii.member(jsii_name="grant")
def grant(self, grantee: aws_cdk.aws_iam.IGrantable, *queue_actions: str) -> aws_cdk.aws_iam.Grant:
"""Grant the actions defined in queueActions to the identity Principal given on this SQS queue resource.
:param grantee: Principal to grant right to.
:param queue_actions: The actions to grant.
"""
...
@jsii.member(jsii_name="grantConsumeMessages")
def grant_consume_messages(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant permissions to consume messages from a queue.
This will grant the following permissions:
- sqs:ChangeMessageVisibility
- sqs:DeleteMessage
- sqs:ReceiveMessage
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant consume rights to.
"""
...
@jsii.member(jsii_name="grantPurge")
def grant_purge(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant an IAM principal permissions to purge all messages from the queue.
This will grant the following permissions:
- sqs:PurgeQueue
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant purge rights to.
"""
...
@jsii.member(jsii_name="grantSendMessages")
def grant_send_messages(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant access to send messages to a queue to the given identity.
This will grant the following permissions:
- sqs:SendMessage
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant send rights to.
"""
...
@jsii.member(jsii_name="metric")
def metric(self, metric_name: str, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""Return the given named metric for this Queue.
:param metric_name: -
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricApproximateAgeOfOldestMessage")
def metric_approximate_age_of_oldest_message(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The approximate age of the oldest non-deleted message in the queue.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricApproximateNumberOfMessagesDelayed")
def metric_approximate_number_of_messages_delayed(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages in the queue that are delayed and not available for reading immediately.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricApproximateNumberOfMessagesNotVisible")
def metric_approximate_number_of_messages_not_visible(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages that are in flight.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricApproximateNumberOfMessagesVisible")
def metric_approximate_number_of_messages_visible(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages available for retrieval from the queue.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricNumberOfEmptyReceives")
def metric_number_of_empty_receives(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of ReceiveMessage API calls that did not return a message.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricNumberOfMessagesDeleted")
def metric_number_of_messages_deleted(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages deleted from the queue.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricNumberOfMessagesReceived")
def metric_number_of_messages_received(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages returned by calls to the ReceiveMessage action.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricNumberOfMessagesSent")
def metric_number_of_messages_sent(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages added to a queue.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
@jsii.member(jsii_name="metricSentMessageSize")
def metric_sent_message_size(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The size of messages added to a queue.
Average over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
...
class _IQueueProxy(jsii.proxy_for(aws_cdk.core.IResource)):
__jsii_type__ = "@aws-cdk/aws-sqs.IQueue"
@property
@jsii.member(jsii_name="fifo")
def fifo(self) -> bool:
"""Whether this queue is an Amazon SQS FIFO queue.
If false, this is a standard queue.
"""
return jsii.get(self, "fifo")
@property
@jsii.member(jsii_name="queueArn")
def queue_arn(self) -> str:
"""The ARN of this queue.
attribute:
:attribute:: true
"""
return jsii.get(self, "queueArn")
@property
@jsii.member(jsii_name="queueName")
def queue_name(self) -> str:
"""The name of this queue.
attribute:
:attribute:: true
"""
return jsii.get(self, "queueName")
@property
@jsii.member(jsii_name="queueUrl")
def queue_url(self) -> str:
"""The URL of this queue.
attribute:
:attribute:: true
"""
return jsii.get(self, "queueUrl")
@property
@jsii.member(jsii_name="encryptionMasterKey")
def encryption_master_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""If this queue is server-side encrypted, this is the KMS encryption key."""
return jsii.get(self, "encryptionMasterKey")
@jsii.member(jsii_name="addToResourcePolicy")
def add_to_resource_policy(self, statement: aws_cdk.aws_iam.PolicyStatement) -> None:
"""Adds a statement to the IAM resource policy associated with this queue.
If this queue was created in this stack (``new Queue``), a queue policy
will be automatically created upon the first call to ``addToPolicy``. If
the queue is imported (``Queue.import``), then this is a no-op.
:param statement: -
"""
return jsii.invoke(self, "addToResourcePolicy", [statement])
@jsii.member(jsii_name="grant")
def grant(self, grantee: aws_cdk.aws_iam.IGrantable, *queue_actions: str) -> aws_cdk.aws_iam.Grant:
"""Grant the actions defined in queueActions to the identity Principal given on this SQS queue resource.
:param grantee: Principal to grant rights to.
:param queue_actions: The actions to grant.
"""
return jsii.invoke(self, "grant", [grantee, *queue_actions])
@jsii.member(jsii_name="grantConsumeMessages")
def grant_consume_messages(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant permissions to consume messages from a queue.
This will grant the following permissions:
- sqs:ChangeMessageVisibility
- sqs:DeleteMessage
- sqs:ReceiveMessage
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant consume rights to.
"""
return jsii.invoke(self, "grantConsumeMessages", [grantee])
@jsii.member(jsii_name="grantPurge")
def grant_purge(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant an IAM principal permissions to purge all messages from the queue.
This will grant the following permissions:
- sqs:PurgeQueue
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant purge rights to.
"""
return jsii.invoke(self, "grantPurge", [grantee])
@jsii.member(jsii_name="grantSendMessages")
def grant_send_messages(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant access to send messages to a queue to the given identity.
This will grant the following permissions:
- sqs:SendMessage
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant send rights to.
"""
return jsii.invoke(self, "grantSendMessages", [grantee])
@jsii.member(jsii_name="metric")
def metric(self, metric_name: str, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""Return the given named metric for this Queue.
:param metric_name: -
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metric", [metric_name, props])
@jsii.member(jsii_name="metricApproximateAgeOfOldestMessage")
def metric_approximate_age_of_oldest_message(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The approximate age of the oldest non-deleted message in the queue.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateAgeOfOldestMessage", [props])
@jsii.member(jsii_name="metricApproximateNumberOfMessagesDelayed")
def metric_approximate_number_of_messages_delayed(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages in the queue that are delayed and not available for reading immediately.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateNumberOfMessagesDelayed", [props])
@jsii.member(jsii_name="metricApproximateNumberOfMessagesNotVisible")
def metric_approximate_number_of_messages_not_visible(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages that are in flight.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateNumberOfMessagesNotVisible", [props])
@jsii.member(jsii_name="metricApproximateNumberOfMessagesVisible")
def metric_approximate_number_of_messages_visible(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages available for retrieval from the queue.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateNumberOfMessagesVisible", [props])
@jsii.member(jsii_name="metricNumberOfEmptyReceives")
def metric_number_of_empty_receives(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of ReceiveMessage API calls that did not return a message.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfEmptyReceives", [props])
@jsii.member(jsii_name="metricNumberOfMessagesDeleted")
def metric_number_of_messages_deleted(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages deleted from the queue.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfMessagesDeleted", [props])
@jsii.member(jsii_name="metricNumberOfMessagesReceived")
def metric_number_of_messages_received(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages returned by calls to the ReceiveMessage action.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfMessagesReceived", [props])
@jsii.member(jsii_name="metricNumberOfMessagesSent")
def metric_number_of_messages_sent(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages added to a queue.
Sum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfMessagesSent", [props])
@jsii.member(jsii_name="metricSentMessageSize")
def metric_sent_message_size(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The size of messages added to a queue.
Average over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricSentMessageSize", [props])
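# --- Usage sketch (illustrative only; assumed caller code, not generated) ---
# The ``grant*`` helpers attach least-privilege IAM statements for common
# patterns, e.g. one identity consuming and another producing:
#
#     queue.grant_consume_messages(consumer_fn)   # ChangeMessageVisibility,
#                                                 # DeleteMessage, ReceiveMessage,
#                                                 # GetQueueAttributes, GetQueueUrl
#     queue.grant_send_messages(producer_role)    # SendMessage, GetQueueAttributes,
#                                                 # GetQueueUrl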
@jsii.data_type(jsii_type="@aws-cdk/aws-sqs.QueueAttributes", jsii_struct_bases=[], name_mapping={'queue_arn': 'queueArn', 'key_arn': 'keyArn', 'queue_name': 'queueName', 'queue_url': 'queueUrl'})
class QueueAttributes():
def __init__(self, *, queue_arn: str, key_arn: typing.Optional[str]=None, queue_name: typing.Optional[str]=None, queue_url: typing.Optional[str]=None):
"""Reference to a queue.
:param queue_arn: The ARN of the queue.
:param key_arn: KMS encryption key, if this queue is server-side encrypted by a KMS key.
:param queue_name: The name of the queue. Default: if queue name is not specified, the name will be derived from the queue ARN
:param queue_url: The URL of the queue.
"""
self._values = {
'queue_arn': queue_arn,
}
if key_arn is not None: self._values["key_arn"] = key_arn
if queue_name is not None: self._values["queue_name"] = queue_name
if queue_url is not None: self._values["queue_url"] = queue_url
@property
def queue_arn(self) -> str:
"""The ARN of the queue."""
return self._values.get('queue_arn')
@property
def key_arn(self) -> typing.Optional[str]:
"""KMS encryption key, if this queue is server-side encrypted by a KMS key."""
return self._values.get('key_arn')
@property
def queue_name(self) -> typing.Optional[str]:
"""The name of the queue.
default
:default: if queue name is not specified, the name will be derived from the queue ARN
"""
return self._values.get('queue_name')
@property
def queue_url(self) -> typing.Optional[str]:
"""The URL of the queue."""
return self._values.get('queue_url')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'QueueAttributes(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
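# --- Usage sketch (illustrative only; hypothetical caller code) ---
# QueueAttributes is typically consumed via ``Queue.from_queue_attributes``,
# defined elsewhere in this module, to reference an existing queue by ARN:
#
#     imported = sqs.Queue.from_queue_attributes(self, "Imported",
#         queue_arn="arn:aws:sqs:us-east-1:123456789012:my-queue")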
@jsii.implements(IQueue)
class QueueBase(aws_cdk.core.Resource, metaclass=jsii.JSIIAbstractClass, jsii_type="@aws-cdk/aws-sqs.QueueBase"):
"""Reference to a new or existing Amazon SQS queue."""
@staticmethod
def __jsii_proxy_class__():
return _QueueBaseProxy
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, physical_name: typing.Optional[str]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param physical_name: The value passed in by users to the physical name prop of the resource. - ``undefined`` implies that a physical name will be allocated by CloudFormation during deployment. - a concrete value implies a specific physical name - ``PhysicalName.GENERATE_IF_NEEDED`` is a marker that indicates that a physical name will only be generated by the CDK if it is needed for cross-environment references. Otherwise, it will be allocated by CloudFormation. Default: - The physical name will be allocated by CloudFormation at deployment time
"""
props = aws_cdk.core.ResourceProps(physical_name=physical_name)
jsii.create(QueueBase, self, [scope, id, props])
@jsii.member(jsii_name="addToResourcePolicy")
def add_to_resource_policy(self, statement: aws_cdk.aws_iam.PolicyStatement) -> None:
"""Adds a statement to the IAM resource policy associated with this queue.
If this queue was created in this stack (``new Queue``), a queue policy
will be automatically created upon the first call to ``addToPolicy``. If
the queue is imported (``Queue.import``), then this is a no-op.
:param statement: -
"""
return jsii.invoke(self, "addToResourcePolicy", [statement])
@jsii.member(jsii_name="grant")
def grant(self, grantee: aws_cdk.aws_iam.IGrantable, *actions: str) -> aws_cdk.aws_iam.Grant:
"""Grant the actions defined in queueActions to the identity Principal given on this SQS queue resource.
:param grantee: Principal to grant rights to.
:param actions: The actions to grant.
"""
return jsii.invoke(self, "grant", [grantee, *actions])
@jsii.member(jsii_name="grantConsumeMessages")
def grant_consume_messages(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant permissions to consume messages from a queue.
This will grant the following permissions:
- sqs:ChangeMessageVisibility
- sqs:DeleteMessage
- sqs:ReceiveMessage
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant consume rights to.
"""
return jsii.invoke(self, "grantConsumeMessages", [grantee])
@jsii.member(jsii_name="grantPurge")
def grant_purge(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant an IAM principal permissions to purge all messages from the queue.
This will grant the following permissions:
- sqs:PurgeQueue
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant purge rights to.
"""
return jsii.invoke(self, "grantPurge", [grantee])
@jsii.member(jsii_name="grantSendMessages")
def grant_send_messages(self, grantee: aws_cdk.aws_iam.IGrantable) -> aws_cdk.aws_iam.Grant:
"""Grant access to send messages to a queue to the given identity.
This will grant the following permissions:
- sqs:SendMessage
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
:param grantee: Principal to grant send rights to.
"""
return jsii.invoke(self, "grantSendMessages", [grantee])
@jsii.member(jsii_name="metric")
def metric(self, metric_name: str, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""Return the given named metric for this Queue.
:param metric_name: -
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metric", [metric_name, props])
@jsii.member(jsii_name="metricApproximateAgeOfOldestMessage")
def metric_approximate_age_of_oldest_message(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The approximate age of the oldest non-deleted message in the queue.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateAgeOfOldestMessage", [props])
@jsii.member(jsii_name="metricApproximateNumberOfMessagesDelayed")
def metric_approximate_number_of_messages_delayed(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages in the queue that are delayed and not available for reading immediately.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateNumberOfMessagesDelayed", [props])
@jsii.member(jsii_name="metricApproximateNumberOfMessagesNotVisible")
def metric_approximate_number_of_messages_not_visible(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages that are in flight.
Maximum over 5 minutes
:param props: -
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateNumberOfMessagesNotVisible", [props])
@jsii.member(jsii_name="metricApproximateNumberOfMessagesVisible")
def metric_approximate_number_of_messages_visible(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages available for retrieval from the queue.
Maximum over 5 minutes
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricApproximateNumberOfMessagesVisible", [props])
@jsii.member(jsii_name="metricNumberOfEmptyReceives")
def metric_number_of_empty_receives(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of ReceiveMessage API calls that did not return a message.
Sum over 5 minutes
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfEmptyReceives", [props])
@jsii.member(jsii_name="metricNumberOfMessagesDeleted")
def metric_number_of_messages_deleted(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages deleted from the queue.
Sum over 5 minutes
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfMessagesDeleted", [props])
@jsii.member(jsii_name="metricNumberOfMessagesReceived")
def metric_number_of_messages_received(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages returned by calls to the ReceiveMessage action.
Sum over 5 minutes
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfMessagesReceived", [props])
@jsii.member(jsii_name="metricNumberOfMessagesSent")
def metric_number_of_messages_sent(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The number of messages added to a queue.
Sum over 5 minutes
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricNumberOfMessagesSent", [props])
@jsii.member(jsii_name="metricSentMessageSize")
def metric_sent_message_size(self, *, color: typing.Optional[str]=None, dimensions: typing.Optional[typing.Mapping[str,typing.Any]]=None, label: typing.Optional[str]=None, period: typing.Optional[aws_cdk.core.Duration]=None, statistic: typing.Optional[str]=None, unit: typing.Optional[aws_cdk.aws_cloudwatch.Unit]=None) -> aws_cdk.aws_cloudwatch.Metric:
"""The size of messages added to a queue.
Average over 5 minutes
:param color: Color for this metric when added to a Graph in a Dashboard.
:param dimensions: Dimensions of the metric. Default: - No dimensions.
:param label: Label for this metric when added to a Graph in a Dashboard.
:param period: The period over which the specified statistic is applied. Default: Duration.minutes(5)
:param statistic: What function to use for aggregating. Can be one of the following: - "Minimum" | "min" - "Maximum" | "max" - "Average" | "avg" - "Sum" | "sum" - "SampleCount" | "n" - "pNN.NN" Default: Average
:param unit: Unit for the metric that is associated with the alarm.
"""
props = aws_cdk.aws_cloudwatch.MetricOptions(color=color, dimensions=dimensions, label=label, period=period, statistic=statistic, unit=unit)
return jsii.invoke(self, "metricSentMessageSize", [props])
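Every metric method above follows the same shape: gather the keyword-only options into a ``MetricOptions`` struct and forward the call through the jsii kernel. A plain-Python sketch of that option-gathering step, with only the supplied options retained (the helper name is ours, not part of this module):

```python
from typing import Any, Dict, Optional

def collect_metric_options(*, color: Optional[str] = None,
                           label: Optional[str] = None,
                           statistic: Optional[str] = None,
                           unit: Optional[str] = None) -> Dict[str, Any]:
    # Keep only the options the caller actually supplied, mirroring how
    # the generated metric methods assemble a MetricOptions struct before
    # invoking the jsii kernel.
    supplied = {"color": color, "label": label,
                "statistic": statistic, "unit": unit}
    return {key: value for key, value in supplied.items() if value is not None}
```

For example, ``collect_metric_options(statistic="Sum")`` produces ``{"statistic": "Sum"}``, leaving the unset options to their documented defaults.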
@property
@jsii.member(jsii_name="autoCreatePolicy")
@abc.abstractmethod
def _auto_create_policy(self) -> bool:
"""Controls automatic creation of policy objects.
Set by subclasses.
"""
...
@property
@jsii.member(jsii_name="fifo")
@abc.abstractmethod
def fifo(self) -> bool:
"""Whether this queue is an Amazon SQS FIFO queue.
If false, this is a standard queue.
"""
...
@property
@jsii.member(jsii_name="queueArn")
@abc.abstractmethod
def queue_arn(self) -> str:
"""The ARN of this queue."""
...
@property
@jsii.member(jsii_name="queueName")
@abc.abstractmethod
def queue_name(self) -> str:
"""The name of this queue."""
...
@property
@jsii.member(jsii_name="queueUrl")
@abc.abstractmethod
def queue_url(self) -> str:
"""The URL of this queue."""
...
@property
@jsii.member(jsii_name="encryptionMasterKey")
@abc.abstractmethod
def encryption_master_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""If this queue is server-side encrypted, this is the KMS encryption key."""
...
class _QueueBaseProxy(QueueBase, jsii.proxy_for(aws_cdk.core.Resource)):
@property
@jsii.member(jsii_name="autoCreatePolicy")
def _auto_create_policy(self) -> bool:
"""Controls automatic creation of policy objects.
Set by subclasses.
"""
return jsii.get(self, "autoCreatePolicy")
@property
@jsii.member(jsii_name="fifo")
def fifo(self) -> bool:
"""Whether this queue is an Amazon SQS FIFO queue.
If false, this is a standard queue.
"""
return jsii.get(self, "fifo")
@property
@jsii.member(jsii_name="queueArn")
def queue_arn(self) -> str:
"""The ARN of this queue."""
return jsii.get(self, "queueArn")
@property
@jsii.member(jsii_name="queueName")
def queue_name(self) -> str:
"""The name of this queue."""
return jsii.get(self, "queueName")
@property
@jsii.member(jsii_name="queueUrl")
def queue_url(self) -> str:
"""The URL of this queue."""
return jsii.get(self, "queueUrl")
@property
@jsii.member(jsii_name="encryptionMasterKey")
def encryption_master_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""If this queue is server-side encrypted, this is the KMS encryption key."""
return jsii.get(self, "encryptionMasterKey")
class Queue(QueueBase, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-sqs.Queue"):
"""A new Amazon SQS queue."""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, content_based_deduplication: typing.Optional[bool]=None, data_key_reuse: typing.Optional[aws_cdk.core.Duration]=None, dead_letter_queue: typing.Optional["DeadLetterQueue"]=None, delivery_delay: typing.Optional[aws_cdk.core.Duration]=None, encryption: typing.Optional["QueueEncryption"]=None, encryption_master_key: typing.Optional[aws_cdk.aws_kms.IKey]=None, fifo: typing.Optional[bool]=None, max_message_size_bytes: typing.Optional[jsii.Number]=None, queue_name: typing.Optional[str]=None, receive_message_wait_time: typing.Optional[aws_cdk.core.Duration]=None, retention_period: typing.Optional[aws_cdk.core.Duration]=None, visibility_timeout: typing.Optional[aws_cdk.core.Duration]=None) -> None:
"""
:param scope: -
:param id: -
:param content_based_deduplication: Specifies whether to enable content-based deduplication. During the deduplication interval (5 minutes), Amazon SQS treats messages that are sent with identical content (excluding attributes) as duplicates and delivers only one copy of the message. If you don't enable content-based deduplication and you want to deduplicate messages, provide an explicit deduplication ID in your SendMessage() call. (Only applies to FIFO queues.) Default: false
:param data_key_reuse: The length of time that Amazon SQS reuses a data key before calling KMS again. The value must be an integer between 60 (1 minute) and 86,400 (24 hours). The default is 300 (5 minutes). Default: Duration.minutes(5)
:param dead_letter_queue: Send messages to this queue if they were unsuccessfully dequeued a number of times. Default: no dead-letter queue
:param delivery_delay: The time in seconds that the delivery of all messages in the queue is delayed. You can specify an integer value of 0 to 900 (15 minutes). The default value is 0. Default: 0
:param encryption: Whether the contents of the queue are encrypted, and by what type of key. Be aware that encryption is not available in all regions; please see the docs for current availability details. Default: Unencrypted
:param encryption_master_key: External KMS master key to use for queue encryption. Individual messages will be encrypted using data keys. The data keys in turn will be encrypted using this key, and reused for a maximum of ``dataKeyReuseSecs`` seconds. The 'encryption' property must be either not specified or set to "Kms". An error will be emitted if encryption is set to "Unencrypted" or "KmsManaged". Default: If encryption is set to KMS and not specified, a key will be created.
:param fifo: Whether this is a first-in-first-out (FIFO) queue. Default: false, unless queueName ends in '.fifo' or 'contentBasedDeduplication' is true.
:param max_message_size_bytes: The limit of how many bytes a message can contain before Amazon SQS rejects it. You can specify an integer value from 1024 bytes (1 KiB) to 262144 bytes (256 KiB). The default value is 262144 (256 KiB). Default: 256KiB
:param queue_name: A name for the queue. If specified and this is a FIFO queue, must end in the string '.fifo'. Default: CloudFormation-generated name
:param receive_message_wait_time: Default wait time for ReceiveMessage calls. Does not wait if set to 0; otherwise waits this many seconds by default for messages to arrive. For more information, see Amazon SQS Long Poll. Default: 0
:param retention_period: The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1209600 seconds (14 days). The default value is 345600 seconds (4 days). Default: Duration.days(4)
:param visibility_timeout: Timeout of processing a single message. After dequeuing, the processor has this much time to handle the message and delete it from the queue before it becomes visible again for dequeueing by another processor. Values must be from 0 to 43200 seconds (12 hours). If you don't specify a value, AWS CloudFormation uses the default value of 30 seconds. Default: Duration.seconds(30)
"""
props = QueueProps(content_based_deduplication=content_based_deduplication, data_key_reuse=data_key_reuse, dead_letter_queue=dead_letter_queue, delivery_delay=delivery_delay, encryption=encryption, encryption_master_key=encryption_master_key, fifo=fifo, max_message_size_bytes=max_message_size_bytes, queue_name=queue_name, receive_message_wait_time=receive_message_wait_time, retention_period=retention_period, visibility_timeout=visibility_timeout)
jsii.create(Queue, self, [scope, id, props])
@jsii.member(jsii_name="fromQueueArn")
@classmethod
def from_queue_arn(cls, scope: aws_cdk.core.Construct, id: str, queue_arn: str) -> "IQueue":
"""
:param scope: -
:param id: -
:param queue_arn: -
"""
return jsii.sinvoke(cls, "fromQueueArn", [scope, id, queue_arn])
@jsii.member(jsii_name="fromQueueAttributes")
@classmethod
def from_queue_attributes(cls, scope: aws_cdk.core.Construct, id: str, *, queue_arn: str, key_arn: typing.Optional[str]=None, queue_name: typing.Optional[str]=None, queue_url: typing.Optional[str]=None) -> "IQueue":
"""Import an existing queue.
:param scope: -
:param id: -
:param queue_arn: The ARN of the queue.
:param key_arn: KMS encryption key, if this queue is server-side encrypted by a KMS key.
:param queue_name: The name of the queue. Default: if queue name is not specified, the name will be derived from the queue ARN
:param queue_url: The URL of the queue.
"""
attrs = QueueAttributes(queue_arn=queue_arn, key_arn=key_arn, queue_name=queue_name, queue_url=queue_url)
return jsii.sinvoke(cls, "fromQueueAttributes", [scope, id, attrs])
@property
@jsii.member(jsii_name="autoCreatePolicy")
def _auto_create_policy(self) -> bool:
"""Controls automatic creation of policy objects.
Set by subclasses.
"""
return jsii.get(self, "autoCreatePolicy")
@property
@jsii.member(jsii_name="fifo")
def fifo(self) -> bool:
"""Whether this queue is an Amazon SQS FIFO queue.
If false, this is a standard queue.
"""
return jsii.get(self, "fifo")
@property
@jsii.member(jsii_name="queueArn")
def queue_arn(self) -> str:
"""The ARN of this queue."""
return jsii.get(self, "queueArn")
@property
@jsii.member(jsii_name="queueName")
def queue_name(self) -> str:
"""The name of this queue."""
return jsii.get(self, "queueName")
@property
@jsii.member(jsii_name="queueUrl")
def queue_url(self) -> str:
"""The URL of this queue."""
return jsii.get(self, "queueUrl")
@property
@jsii.member(jsii_name="encryptionMasterKey")
def encryption_master_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""If this queue is encrypted, this is the KMS key."""
return jsii.get(self, "encryptionMasterKey")
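The ``fifo`` default documented above ("false, unless queueName ends in '.fifo' or 'contentBasedDeduplication' is true") can be sketched as a small resolver. This illustrates the documented rule only; the real construct resolves it inside the jsii kernel:

```python
from typing import Optional

def infer_fifo(queue_name: Optional[str] = None,
               fifo: Optional[bool] = None,
               content_based_deduplication: Optional[bool] = None) -> bool:
    # An explicit fifo setting always wins.
    if fifo is not None:
        return fifo
    # Otherwise infer FIFO from a '.fifo' queue name suffix...
    if queue_name is not None and queue_name.endswith(".fifo"):
        return True
    # ...or from contentBasedDeduplication, which only applies to FIFO queues.
    return bool(content_based_deduplication)
```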
@jsii.enum(jsii_type="@aws-cdk/aws-sqs.QueueEncryption")
class QueueEncryption(enum.Enum):
"""What kind of encryption to apply to this queue."""
UNENCRYPTED = "UNENCRYPTED"
"""Messages in the queue are not encrypted."""
KMS_MANAGED = "KMS_MANAGED"
"""Server-side KMS encryption with a master key managed by SQS."""
KMS = "KMS"
"""Server-side encryption with a KMS key managed by the user.
If ``encryptionKey`` is specified, this key will be used, otherwise, one will be defined.
"""
class QueuePolicy(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-sqs.QueuePolicy"):
"""Applies a policy to SQS queues."""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, queues: typing.List["IQueue"]) -> None:
"""
:param scope: -
:param id: -
:param queues: The set of queues this policy applies to.
"""
props = QueuePolicyProps(queues=queues)
jsii.create(QueuePolicy, self, [scope, id, props])
@property
@jsii.member(jsii_name="document")
def document(self) -> aws_cdk.aws_iam.PolicyDocument:
"""The IAM policy document for this policy."""
return jsii.get(self, "document")
@jsii.data_type(jsii_type="@aws-cdk/aws-sqs.QueuePolicyProps", jsii_struct_bases=[], name_mapping={'queues': 'queues'})
class QueuePolicyProps():
def __init__(self, *, queues: typing.List["IQueue"]):
"""
:param queues: The set of queues this policy applies to.
"""
self._values = {
'queues': queues,
}
@property
def queues(self) -> typing.List["IQueue"]:
"""The set of queues this policy applies to."""
return self._values.get('queues')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'QueuePolicyProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-sqs.QueueProps", jsii_struct_bases=[], name_mapping={'content_based_deduplication': 'contentBasedDeduplication', 'data_key_reuse': 'dataKeyReuse', 'dead_letter_queue': 'deadLetterQueue', 'delivery_delay': 'deliveryDelay', 'encryption': 'encryption', 'encryption_master_key': 'encryptionMasterKey', 'fifo': 'fifo', 'max_message_size_bytes': 'maxMessageSizeBytes', 'queue_name': 'queueName', 'receive_message_wait_time': 'receiveMessageWaitTime', 'retention_period': 'retentionPeriod', 'visibility_timeout': 'visibilityTimeout'})
class QueueProps():
def __init__(self, *, content_based_deduplication: typing.Optional[bool]=None, data_key_reuse: typing.Optional[aws_cdk.core.Duration]=None, dead_letter_queue: typing.Optional["DeadLetterQueue"]=None, delivery_delay: typing.Optional[aws_cdk.core.Duration]=None, encryption: typing.Optional["QueueEncryption"]=None, encryption_master_key: typing.Optional[aws_cdk.aws_kms.IKey]=None, fifo: typing.Optional[bool]=None, max_message_size_bytes: typing.Optional[jsii.Number]=None, queue_name: typing.Optional[str]=None, receive_message_wait_time: typing.Optional[aws_cdk.core.Duration]=None, retention_period: typing.Optional[aws_cdk.core.Duration]=None, visibility_timeout: typing.Optional[aws_cdk.core.Duration]=None):
"""Properties for creating a new Queue.
:param content_based_deduplication: Specifies whether to enable content-based deduplication. During the deduplication interval (5 minutes), Amazon SQS treats messages that are sent with identical content (excluding attributes) as duplicates and delivers only one copy of the message. If you don't enable content-based deduplication and you want to deduplicate messages, provide an explicit deduplication ID in your SendMessage() call. (Only applies to FIFO queues.) Default: false
:param data_key_reuse: The length of time that Amazon SQS reuses a data key before calling KMS again. The value must be an integer between 60 (1 minute) and 86,400 (24 hours). The default is 300 (5 minutes). Default: Duration.minutes(5)
:param dead_letter_queue: Send messages to this queue if they were unsuccessfully dequeued a number of times. Default: no dead-letter queue
:param delivery_delay: The time in seconds that the delivery of all messages in the queue is delayed. You can specify an integer value of 0 to 900 (15 minutes). The default value is 0. Default: 0
:param encryption: Whether the contents of the queue are encrypted, and by what type of key. Be aware that encryption is not available in all regions; please see the docs for current availability details. Default: Unencrypted
:param encryption_master_key: External KMS master key to use for queue encryption. Individual messages will be encrypted using data keys. The data keys in turn will be encrypted using this key, and reused for a maximum of ``dataKeyReuseSecs`` seconds. The 'encryption' property must be either not specified or set to "Kms". An error will be emitted if encryption is set to "Unencrypted" or "KmsManaged". Default: If encryption is set to KMS and not specified, a key will be created.
:param fifo: Whether this is a first-in-first-out (FIFO) queue. Default: false, unless queueName ends in '.fifo' or 'contentBasedDeduplication' is true.
:param max_message_size_bytes: The limit of how many bytes a message can contain before Amazon SQS rejects it. You can specify an integer value from 1024 bytes (1 KiB) to 262144 bytes (256 KiB). The default value is 262144 (256 KiB). Default: 256KiB
:param queue_name: A name for the queue. If specified and this is a FIFO queue, must end in the string '.fifo'. Default: CloudFormation-generated name
:param receive_message_wait_time: Default wait time for ReceiveMessage calls. Does not wait if set to 0; otherwise waits this many seconds by default for messages to arrive. For more information, see Amazon SQS Long Poll. Default: 0
:param retention_period: The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1209600 seconds (14 days). The default value is 345600 seconds (4 days). Default: Duration.days(4)
:param visibility_timeout: Timeout of processing a single message. After dequeuing, the processor has this much time to handle the message and delete it from the queue before it becomes visible again for dequeueing by another processor. Values must be from 0 to 43200 seconds (12 hours). If you don't specify a value, AWS CloudFormation uses the default value of 30 seconds. Default: Duration.seconds(30)
"""
if isinstance(dead_letter_queue, dict): dead_letter_queue = DeadLetterQueue(**dead_letter_queue)
self._values = {
}
if content_based_deduplication is not None: self._values["content_based_deduplication"] = content_based_deduplication
if data_key_reuse is not None: self._values["data_key_reuse"] = data_key_reuse
if dead_letter_queue is not None: self._values["dead_letter_queue"] = dead_letter_queue
if delivery_delay is not None: self._values["delivery_delay"] = delivery_delay
if encryption is not None: self._values["encryption"] = encryption
if encryption_master_key is not None: self._values["encryption_master_key"] = encryption_master_key
if fifo is not None: self._values["fifo"] = fifo
if max_message_size_bytes is not None: self._values["max_message_size_bytes"] = max_message_size_bytes
if queue_name is not None: self._values["queue_name"] = queue_name
if receive_message_wait_time is not None: self._values["receive_message_wait_time"] = receive_message_wait_time
if retention_period is not None: self._values["retention_period"] = retention_period
if visibility_timeout is not None: self._values["visibility_timeout"] = visibility_timeout
@property
def content_based_deduplication(self) -> typing.Optional[bool]:
"""Specifies whether to enable content-based deduplication.
During the deduplication interval (5 minutes), Amazon SQS treats
messages that are sent with identical content (excluding attributes) as
duplicates and delivers only one copy of the message.
If you don't enable content-based deduplication and you want to deduplicate
messages, provide an explicit deduplication ID in your SendMessage() call.
(Only applies to FIFO queues.)
:default: false
"""
return self._values.get('content_based_deduplication')
@property
def data_key_reuse(self) -> typing.Optional[aws_cdk.core.Duration]:
"""The length of time that Amazon SQS reuses a data key before calling KMS again.
The value must be an integer between 60 (1 minute) and 86,400 (24
hours). The default is 300 (5 minutes).
:default: Duration.minutes(5)
"""
return self._values.get('data_key_reuse')
@property
def dead_letter_queue(self) -> typing.Optional["DeadLetterQueue"]:
"""Send messages to this queue if they were unsuccessfully dequeued a number of times.
:default: no dead-letter queue
"""
return self._values.get('dead_letter_queue')
@property
def delivery_delay(self) -> typing.Optional[aws_cdk.core.Duration]:
"""The time in seconds that the delivery of all messages in the queue is delayed.
You can specify an integer value of 0 to 900 (15 minutes). The default
value is 0.
:default: 0
"""
return self._values.get('delivery_delay')
@property
def encryption(self) -> typing.Optional["QueueEncryption"]:
"""Whether the contents of the queue are encrypted, and by what type of key.
Be aware that encryption is not available in all regions; please see the docs
for current availability details.
:default: Unencrypted
"""
return self._values.get('encryption')
@property
def encryption_master_key(self) -> typing.Optional[aws_cdk.aws_kms.IKey]:
"""External KMS master key to use for queue encryption.
Individual messages will be encrypted using data keys. The data keys in
turn will be encrypted using this key, and reused for a maximum of
``dataKeyReuseSecs`` seconds.
The 'encryption' property must be either not specified or set to "Kms".
An error will be emitted if encryption is set to "Unencrypted" or
"KmsManaged".
:default: If encryption is set to KMS and not specified, a key will be created.
"""
return self._values.get('encryption_master_key')
@property
def fifo(self) -> typing.Optional[bool]:
"""Whether this a first-in-first-out (FIFO) queue.
default
:default: false, unless queueName ends in '.fifo' or 'contentBasedDeduplication' is true.
"""
return self._values.get('fifo')
@property
def max_message_size_bytes(self) -> typing.Optional[jsii.Number]:
"""The limit of how many bytes that a message can contain before Amazon SQS rejects it.
You can specify an integer value from 1024 bytes (1 KiB) to 262144 bytes
(256 KiB). The default value is 262144 (256 KiB).
default
:default: 256KiB
"""
return self._values.get('max_message_size_bytes')
@property
def queue_name(self) -> typing.Optional[str]:
"""A name for the queue.
If specified and this is a FIFO queue, must end in the string '.fifo'.
:default: CloudFormation-generated name
"""
return self._values.get('queue_name')
@property
def receive_message_wait_time(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Default wait time for ReceiveMessage calls.
Does not wait if set to 0; otherwise waits this many seconds
by default for messages to arrive.
For more information, see Amazon SQS Long Poll.
:default: 0
"""
return self._values.get('receive_message_wait_time')
@property
def retention_period(self) -> typing.Optional[aws_cdk.core.Duration]:
"""The number of seconds that Amazon SQS retains a message.
You can specify an integer value from 60 seconds (1 minute) to 1209600
seconds (14 days). The default value is 345600 seconds (4 days).
:default: Duration.days(4)
"""
return self._values.get('retention_period')
@property
def visibility_timeout(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Timeout of processing a single message.
After dequeuing, the processor has this much time to handle the message
and delete it from the queue before it becomes visible again for dequeueing
by another processor.
Values must be from 0 to 43200 seconds (12 hours). If you don't specify
a value, AWS CloudFormation uses the default value of 30 seconds.
:default: Duration.seconds(30)
"""
return self._values.get('visibility_timeout')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'QueueProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
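``QueueProps`` is a value struct: unspecified options are simply absent from ``_values``, and equality compares those dicts rather than object identity. A stripped-down stand-in (not the generated class itself) showing the same pattern:

```python
class Props:
    # Minimal stand-in for the generated value-struct pattern used by
    # QueueProps: store only the supplied options, compare by value.
    def __init__(self, **kwargs):
        self._values = {k: v for k, v in kwargs.items() if v is not None}

    def __eq__(self, rhs):
        return isinstance(rhs, self.__class__) and rhs._values == self._values

    def __ne__(self, rhs):
        return not (rhs == self)
```

Two instances built from the same keyword arguments compare equal, and passing ``None`` is indistinguishable from omitting the option.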
__all__ = ["CfnQueue", "CfnQueuePolicy", "CfnQueuePolicyProps", "CfnQueueProps", "DeadLetterQueue", "IQueue", "Queue", "QueueAttributes", "QueueBase", "QueueEncryption", "QueuePolicy", "QueuePolicyProps", "QueueProps", "__jsii_assembly__"]
publication.publish()
# tests/test_cli.py from cffbots/howfairis (Apache-2.0)
from click.testing import CliRunner
from requests_mock import Mocker
from howfairis.cli.cli import cli
def test_matching_badge(requests_mock: Mocker):
owner = "fair-software"
repo_string = "howfairis"
filename = "README.rst"
url = "https://github.com/{0}/{1}".format(owner, repo_string)
api = "https://api.github.com/repos/{0}/{1}".format(owner, repo_string)
raw = "https://raw.githubusercontent.com/{0}/{1}/main".format(owner, repo_string)
howfairis_badge = "https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F-green"
pypi_badge = "https://img.shields.io/pypi/v/howfairis.svg?colorB=blue"
cii_badge = "https://bestpractices.coreinfrastructure.org/projects/4630/badge"
requests_mock.get(url, status_code=200)
requests_mock.get(api, json={"default_branch": "main"}, status_code=200)
requests_mock.get(api + "/license", status_code=200)
requests_mock.get(raw + "/.howfairis.yml", status_code=200)
requests_mock.get(raw + "/CITATION", status_code=200)
requests_mock.get(raw + "/CITATION.cff", status_code=200)
requests_mock.get(raw + "/codemeta.json", status_code=200)
requests_mock.get(raw + "/" + filename, text=howfairis_badge+pypi_badge+cii_badge, status_code=200)
requests_mock.get(raw + "/.zenodo.json", status_code=200)
requests_mock.get(api + "/commits", json=[], status_code=200)
runner = CliRunner()
response = runner.invoke(cli, [url])
assert response.exit_code == 0


def test_upgraded_badge(requests_mock: Mocker):
    owner = "fair-software"
    repo_string = "howfairis"
    filename = "README.rst"
    url = "https://github.com/{0}/{1}".format(owner, repo_string)
    api = "https://api.github.com/repos/{0}/{1}".format(owner, repo_string)
    raw = "https://raw.githubusercontent.com/{0}/{1}/main".format(owner, repo_string)
    howfairis_badge = "https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8B-yellow"
    pypi_badge = "https://img.shields.io/pypi/v/howfairis.svg?colorB=blue"
    cii_badge = "https://bestpractices.coreinfrastructure.org/projects/4630/badge"
    requests_mock.get(url, status_code=200)
    requests_mock.get(api, json={"default_branch": "main"}, status_code=200)
    requests_mock.get(api + "/license", status_code=200)
    requests_mock.get(raw + "/.howfairis.yml", status_code=200)
    requests_mock.get(raw + "/CITATION", status_code=200)
    requests_mock.get(raw + "/CITATION.cff", status_code=200)
    requests_mock.get(raw + "/codemeta.json", status_code=200)
    requests_mock.get(raw + "/" + filename, text=howfairis_badge + pypi_badge + cii_badge, status_code=200)
    requests_mock.get(raw + "/.zenodo.json", status_code=200)
    requests_mock.get(api + "/commits", json=[], status_code=200)
    runner = CliRunner()
    response = runner.invoke(cli, [url])
    assert response.exit_code == 1


def test_mismatching_badge(requests_mock: Mocker):
    owner = "fair-software"
    repo_string = "howfairis"
    filename = "README.rst"
    url = "https://github.com/{0}/{1}".format(owner, repo_string)
    api = "https://api.github.com/repos/{0}/{1}".format(owner, repo_string)
    raw = "https://raw.githubusercontent.com/{0}/{1}/main".format(owner, repo_string)
    howfairis_badge = "https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F-green"
    requests_mock.get(url, status_code=200)
    requests_mock.get(api, json={"default_branch": "main"}, status_code=200)
    requests_mock.get(api + "/license", status_code=200)
    requests_mock.get(raw + "/.howfairis.yml", status_code=200)
    requests_mock.get(raw + "/CITATION", status_code=200)
    requests_mock.get(raw + "/CITATION.cff", status_code=200)
    requests_mock.get(raw + "/codemeta.json", status_code=200)
    requests_mock.get(raw + "/" + filename, text=howfairis_badge, status_code=200)
    requests_mock.get(raw + "/.zenodo.json", status_code=200)
    requests_mock.get(api + "/commits", json=[], status_code=200)
    runner = CliRunner()
    response = runner.invoke(cli, [url])
    assert response.exit_code == 1


def test_missing_badge(requests_mock: Mocker):
    owner = "fair-software"
    repo_string = "howfairis"
    filename = "README.rst"
    url = "https://github.com/{0}/{1}".format(owner, repo_string)
    api = "https://api.github.com/repos/{0}/{1}".format(owner, repo_string)
    raw = "https://raw.githubusercontent.com/{0}/{1}/main".format(owner, repo_string)
    requests_mock.get(url, status_code=200)
    requests_mock.get(api, json={"default_branch": "main"}, status_code=200)
    requests_mock.get(api + "/license", status_code=200)
    requests_mock.get(raw + "/.howfairis.yml", status_code=200)
    requests_mock.get(raw + "/CITATION", status_code=200)
    requests_mock.get(raw + "/CITATION.cff", status_code=200)
    requests_mock.get(raw + "/codemeta.json", status_code=200)
    requests_mock.get(raw + "/" + filename, text="", status_code=200)
    requests_mock.get(raw + "/.zenodo.json", status_code=200)
    requests_mock.get(api + "/commits", json=[], status_code=200)
    runner = CliRunner()
    response = runner.invoke(cli, [url])
    assert response.exit_code == 1
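The four tests above differ only in which badge string the mocked README serves, and only `test_matching_badge` expects exit code 0. As a rough sketch (the helper name and constant below are illustrative, not part of howfairis), the comparison the CLI is effectively exercising reduces to a containment check between the badge the checker would propose and the README contents:

```python
# Illustrative only: a minimal stand-in for the badge comparison the tests exercise.
GREEN = ("https://img.shields.io/badge/fair--software.eu-"
         "%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F-green")


def badge_matches(readme_text: str, expected_badge: str) -> bool:
    # A README "matches" when it already contains the badge the checker proposes.
    return expected_badge in readme_text


# Matching README -> exit code 0; empty or mismatching README -> exit code 1.
assert badge_matches(GREEN, GREEN)
assert not badge_matches("", GREEN)
```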
from .login_form import LoginForm
from .login_form import ChangePasswordForm
from .signup_form import SignUpForm, UpdateProfileForm
from ..broker import Broker


class IssueDetailBroker(Broker):
    controller = "issue_details"

    def show(self, **kwargs):
        """Shows the details for the specified issue detail.

            **Inputs**

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` True
            | ``default:`` None

            :param IssueID: The internal NetMRI identifier for this issue instance.
            :type IssueID: Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param methods: A list of issue detail methods. The listed methods will be called on each issue detail returned and included in the output. Available methods are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc, title, severity, infradevice.
            :type methods: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc.
            :type include: Array of String

            **Outputs**

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :return issue_detail: The issue detail identified by the specified IssueID.
            :rtype issue_detail: IssueDetail

            """

        return self.api_request(self._get_method_fullname("show"), kwargs)
    def index(self, **kwargs):
        """Lists the available issue details. Any of the inputs listed may be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.

            **Inputs**

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param BatchID: The internal NetMRI identifier for the job execution batch to which this issue applies, if relevant.
            :type BatchID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param BatchID: The internal NetMRI identifier for the job execution batch to which this issue applies, if relevant.
            :type BatchID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param DeviceID: The internal NetMRI identifier for the device to which this issue applies.
            :type DeviceID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param DeviceID: The internal NetMRI identifier for the device to which this issue applies.
            :type DeviceID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param EndTime: The ending effective time of this revision of this record, or empty if still in effect.
            :type EndTime: DateTime

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param EndTime: The ending effective time of this revision of this record, or empty if still in effect.
            :type EndTime: Array of DateTime

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param InterfaceID: The internal NetMRI identifier for the interface to which this issue applies, if relevant.
            :type InterfaceID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param InterfaceID: The internal NetMRI identifier for the interface to which this issue applies, if relevant.
            :type InterfaceID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IprgID: The internal NetMRI identifier for the HSRP or VRRP group to which this issue applies, if relevant.
            :type IprgID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IprgID: The internal NetMRI identifier for the HSRP or VRRP group to which this issue applies, if relevant.
            :type IprgID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IssueID: The internal NetMRI identifier for this issue instance.
            :type IssueID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IssueID: The internal NetMRI identifier for this issue instance.
            :type IssueID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IssueTypeID: An internal NetMRI identifier for the type of this issue.
            :type IssueTypeID: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IssueTypeID: An internal NetMRI identifier for the type of this issue.
            :type IssueTypeID: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param SubnetID: The internal NetMRI identifier for the subnet to which this issue applies, if relevant.
            :type SubnetID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param SubnetID: The internal NetMRI identifier for the subnet to which this issue applies, if relevant.
            :type SubnetID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param Timestamp: The date and time this record was collected or calculated.
            :type Timestamp: DateTime

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param Timestamp: The date and time this record was collected or calculated.
            :type Timestamp: Array of DateTime

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param VlanID: The internal NetMRI identifier of the VLAN to which this issue applies, if relevant.
            :type VlanID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param VlanID: The internal NetMRI identifier of the VLAN to which this issue applies, if relevant.
            :type VlanID: Array of Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
            :type DeviceGroupID: Array of Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param timestamp: The data returned will represent the issue details as of this date and time. If omitted, the result will indicate the most recently collected data.
            :type timestamp: DateTime

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param methods: A list of issue detail methods. The listed methods will be called on each issue detail returned and included in the output. Available methods are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc, title, severity, infradevice.
            :type methods: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc.
            :type include: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` 0

            :param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
            :type start: Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` 1000

            :param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
            :type limit: Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` IssueID

            :param sort: The data field(s) to use for sorting the output. Default is IssueID. Valid values are DataSourceID, IssueID, StartTime, EndTime, ChangedCols, Timestamp, IssueTypeID, DetailID, DeviceID, InterfaceID, VlanID, SubnetID, IprgID, BatchID, AltDeviceID, Criteria, IssueValue, Component, SeverityID, Correctness, Stability, SuppressedInd.
            :type sort: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` asc

            :param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
            :type dir: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param select: The list of attributes to return for each IssueDetail. Valid values are DataSourceID, IssueID, StartTime, EndTime, ChangedCols, Timestamp, IssueTypeID, DetailID, DeviceID, InterfaceID, VlanID, SubnetID, IprgID, BatchID, AltDeviceID, Criteria, IssueValue, Component, SeverityID, Correctness, Stability, SuppressedInd. If empty or omitted, all attributes will be returned.
            :type select: Array

            | ``api version min:`` 2.8
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
            :type goto_field: String

            | ``api version min:`` 2.8
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
            :type goto_value: String

            **Outputs**

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :return issue_details: An array of the IssueDetail objects that match the specified input criteria.
            :rtype issue_details: Array of IssueDetail

            """

        return self.api_list_request(self._get_method_fullname("index"), kwargs)
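The `start`/`limit` paging described in the docstring is worth pinning down: with 100 records, a `:limit` of 10 and a `:start` of 10 yield records 10-19. A pure-Python sketch of that documented semantics (a local illustration, not the server-side implementation):

```python
# Sketch of the documented start/limit paging semantics (illustration only).
def page(records, start=0, limit=1000):
    # Return the page of data beginning at record number `start`,
    # containing at most `limit` records.
    return records[start:start + limit]


records = list(range(100))
# Docstring example: 100 records, limit=10, start=10 -> records 10-19.
assert page(records, start=10, limit=10) == list(range(10, 20))
```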
    def search(self, **kwargs):
        """Lists the available issue details matching the input criteria. This method provides a more flexible search interface than the index method, but searching using this method is more demanding on the system and will not perform to the same level as the index method. The input fields listed below will be used as in the index method, to filter the result, along with the optional query string and XML filter described below.

            **Inputs**

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param AltDeviceID: The internal NetMRI identifier of the alternate device (such as a neighbor) involved in this issue, if relevant.
            :type AltDeviceID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param AltDeviceID: The internal NetMRI identifier of the alternate device (such as a neighbor) involved in this issue, if relevant.
            :type AltDeviceID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param BatchID: The internal NetMRI identifier for the job execution batch to which this issue applies, if relevant.
            :type BatchID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param BatchID: The internal NetMRI identifier for the job execution batch to which this issue applies, if relevant.
            :type BatchID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param ChangedCols: The fields that changed between this revision of the record and the previous revision.
            :type ChangedCols: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param ChangedCols: The fields that changed between this revision of the record and the previous revision.
            :type ChangedCols: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param Component: The issue component (Devices, Configuration, VLANs, etc.).
            :type Component: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param Component: The issue component (Devices, Configuration, VLANs, etc.).
            :type Component: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param Correctness: The correctness contribution for this issue.
            :type Correctness: Float

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param Correctness: The correctness contribution for this issue.
            :type Correctness: Array of Float

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param Criteria: The criteria value for this issue at the time it was raised.
            :type Criteria: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param Criteria: The criteria value for this issue at the time it was raised.
            :type Criteria: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param DataSourceID: The internal NetMRI identifier for the collector NetMRI that raised this issue.
            :type DataSourceID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param DataSourceID: The internal NetMRI identifier for the collector NetMRI that raised this issue.
            :type DataSourceID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param DetailID: A unique identifier for this issue instance.
            :type DetailID: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param DetailID: A unique identifier for this issue instance.
            :type DetailID: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param DeviceID: The internal NetMRI identifier for the device to which this issue applies.
            :type DeviceID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param DeviceID: The internal NetMRI identifier for the device to which this issue applies.
            :type DeviceID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param EndTime: The ending effective time of this revision of this record, or empty if still in effect.
            :type EndTime: DateTime

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param EndTime: The ending effective time of this revision of this record, or empty if still in effect.
            :type EndTime: Array of DateTime

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param InterfaceID: The internal NetMRI identifier for the interface to which this issue applies, if relevant.
            :type InterfaceID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param InterfaceID: The internal NetMRI identifier for the interface to which this issue applies, if relevant.
            :type InterfaceID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IprgID: The internal NetMRI identifier for the HSRP or VRRP group to which this issue applies, if relevant.
            :type IprgID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IprgID: The internal NetMRI identifier for the HSRP or VRRP group to which this issue applies, if relevant.
            :type IprgID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IssueID: The internal NetMRI identifier for this issue instance.
            :type IssueID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IssueID: The internal NetMRI identifier for this issue instance.
            :type IssueID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IssueTypeID: An internal NetMRI identifier for the type of this issue.
            :type IssueTypeID: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IssueTypeID: An internal NetMRI identifier for the type of this issue.
            :type IssueTypeID: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param IssueValue: The meaning of this field varies based upon the specific issue.
            :type IssueValue: String

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param IssueValue: The meaning of this field varies based upon the specific issue.
            :type IssueValue: Array of String

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param SeverityID: The issue severity ID (1 = Error, 2 = Warning, 3 = Info). Useful for sorting.
            :type SeverityID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param SeverityID: The issue severity ID (1 = Error, 2 = Warning, 3 = Info). Useful for sorting.
            :type SeverityID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param Stability: The stability contribution for this issue.
            :type Stability: Float

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param Stability: The stability contribution for this issue.
            :type Stability: Array of Float

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param StartTime: The date/time this issue instance was raised.
            :type StartTime: DateTime

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param StartTime: The date/time this issue instance was raised.
            :type StartTime: Array of DateTime

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param SubnetID: The internal NetMRI identifier for the subnet to which this issue applies, if relevant.
            :type SubnetID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param SubnetID: The internal NetMRI identifier for the subnet to which this issue applies, if relevant.
            :type SubnetID: Array of Integer

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param SuppressedInd: A flag indicating whether this issue is suppressed or not.
            :type SuppressedInd: Boolean

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param SuppressedInd: A flag indicating whether this issue is suppressed or not.
            :type SuppressedInd: Array of Boolean

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param Timestamp: The date and time this record was collected or calculated.
            :type Timestamp: DateTime

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param Timestamp: The date and time this record was collected or calculated.
            :type Timestamp: Array of DateTime

            | ``api version min:`` 2.3
            | ``api version max:`` 2.4
            | ``required:`` False
            | ``default:`` None

            :param VlanID: The internal NetMRI identifier of the VLAN to which this issue applies, if relevant.
            :type VlanID: Integer

            | ``api version min:`` 2.5
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param VlanID: The internal NetMRI identifier of the VLAN to which this issue applies, if relevant.
            :type VlanID: Array of Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
            :type DeviceGroupID: Array of Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param timestamp: The data returned will represent the issue details as of this date and time. If omitted, the result will indicate the most recently collected data.
            :type timestamp: DateTime

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param methods: A list of issue detail methods. The listed methods will be called on each issue detail returned and included in the output. Available methods are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc, title, severity, infradevice.
            :type methods: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc.
            :type include: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` 0

            :param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
            :type start: Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` 1000

            :param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
            :type limit: Integer

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` IssueID

            :param sort: The data field(s) to use for sorting the output. Default is IssueID. Valid values are DataSourceID, IssueID, StartTime, EndTime, ChangedCols, Timestamp, IssueTypeID, DetailID, DeviceID, InterfaceID, VlanID, SubnetID, IprgID, BatchID, AltDeviceID, Criteria, IssueValue, Component, SeverityID, Correctness, Stability, SuppressedInd.
            :type sort: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` asc

            :param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
            :type dir: Array of String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param select: The list of attributes to return for each IssueDetail. Valid values are DataSourceID, IssueID, StartTime, EndTime, ChangedCols, Timestamp, IssueTypeID, DetailID, DeviceID, InterfaceID, VlanID, SubnetID, IprgID, BatchID, AltDeviceID, Criteria, IssueValue, Component, SeverityID, Correctness, Stability, SuppressedInd. If empty or omitted, all attributes will be returned.
            :type select: Array

            | ``api version min:`` 2.8
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
            :type goto_field: String

            | ``api version min:`` 2.8
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
            :type goto_value: String

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param query: This value will be matched against issue details, looking to see if one or more of the listed attributes contain the passed value. You may also surround the value with '/' and '/' to perform a regular expression search rather than a containment operation. Any record that matches will be returned. The attributes searched are: AltDeviceID, BatchID, ChangedCols, Component, Correctness, Criteria, DataSourceID, DetailID, DeviceID, EndTime, InterfaceID, IprgID, IssueID, IssueTypeID, IssueValue, SeverityID, Stability, StartTime, SubnetID, SuppressedInd, Timestamp, VlanID.
            :type query: String

            | ``api version min:`` 2.3
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not associated with a database filtering.
            :type xml_filter: String

            **Outputs**

            | ``api version min:`` None
            | ``api version max:`` None
            | ``required:`` False
            | ``default:`` None

            :return issue_details: An array of the IssueDetail objects that match the specified input criteria.
            :rtype issue_details: Array of IssueDetail

            """

        return self.api_list_request(self._get_method_fullname("search"), kwargs)
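The `query` parameter documented for `search` has two modes: a plain value is a containment test, while a value wrapped in '/' characters is treated as a regular expression. A pure-Python sketch of that documented matching rule (a local illustration, not the server-side implementation):

```python
import re


# Sketch of the documented `query` semantics: '/.../' means regex search,
# anything else means substring containment (illustration only).
def query_matches(attribute_value, query):
    if len(query) >= 2 and query.startswith("/") and query.endswith("/"):
        return re.search(query[1:-1], attribute_value) is not None
    return query in attribute_value


assert query_matches("Devices", "vice")        # containment
assert query_matches("Devices", "/^Dev/")      # regex anchored at the start
assert not query_matches("VLANs", "/^Dev/")
```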
def find(self, **kwargs):
"""Lists the available issue details matching the input specification. This provides the most flexible search specification of all the query mechanisms, enabling searching using comparison operations other than equality. However, it is more complex to use and will not perform as efficiently as the index or search methods. In the input descriptions below, 'field names' refers to the following fields: AltDeviceID, BatchID, ChangedCols, Component, Correctness, Criteria, DataSourceID, DetailID, DeviceID, EndTime, InterfaceID, IprgID, IssueID, IssueTypeID, IssueValue, SeverityID, Stability, StartTime, SubnetID, SuppressedInd, Timestamp, VlanID.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AltDeviceID: The operator to apply to the field AltDeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AltDeviceID: The internal NetMRI identifier of the alternate device (such as a neighbor) involved in this issue, if relevant. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_AltDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AltDeviceID: If op_AltDeviceID is specified, the field named in this input will be compared to the value in AltDeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AltDeviceID must be specified if op_AltDeviceID is specified.
:type val_f_AltDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AltDeviceID: If op_AltDeviceID is specified, this value will be compared to the value in AltDeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AltDeviceID must be specified if op_AltDeviceID is specified.
:type val_c_AltDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_BatchID: The operator to apply to the field BatchID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. BatchID: The internal NetMRI identifier for the job execution batch to which this issue applies, if relevant. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_BatchID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_BatchID: If op_BatchID is specified, the field named in this input will be compared to the value in BatchID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_BatchID must be specified if op_BatchID is specified.
:type val_f_BatchID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_BatchID: If op_BatchID is specified, this value will be compared to the value in BatchID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_BatchID must be specified if op_BatchID is specified.
:type val_c_BatchID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ChangedCols: The operator to apply to the field ChangedCols. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ChangedCols: The fields that changed between this revision of the record and the previous revision. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_ChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ChangedCols: If op_ChangedCols is specified, the field named in this input will be compared to the value in ChangedCols using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ChangedCols must be specified if op_ChangedCols is specified.
:type val_f_ChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ChangedCols: If op_ChangedCols is specified, this value will be compared to the value in ChangedCols using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ChangedCols must be specified if op_ChangedCols is specified.
:type val_c_ChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_Component: The operator to apply to the field Component. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. Component: The issue component (Devices, Configuration, VLANs, etc.). For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_Component: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_Component: If op_Component is specified, the field named in this input will be compared to the value in Component using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_Component must be specified if op_Component is specified.
:type val_f_Component: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_Component: If op_Component is specified, this value will be compared to the value in Component using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_Component must be specified if op_Component is specified.
:type val_c_Component: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_Correctness: The operator to apply to the field Correctness. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. Correctness: The correctness contribution for this issue. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_Correctness: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_Correctness: If op_Correctness is specified, the field named in this input will be compared to the value in Correctness using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_Correctness must be specified if op_Correctness is specified.
:type val_f_Correctness: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_Correctness: If op_Correctness is specified, this value will be compared to the value in Correctness using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_Correctness must be specified if op_Correctness is specified.
:type val_c_Correctness: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_Criteria: The operator to apply to the field Criteria. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. Criteria: The criteria value for this issue at the time it was raised. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_Criteria: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_Criteria: If op_Criteria is specified, the field named in this input will be compared to the value in Criteria using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_Criteria must be specified if op_Criteria is specified.
:type val_f_Criteria: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_Criteria: If op_Criteria is specified, this value will be compared to the value in Criteria using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_Criteria must be specified if op_Criteria is specified.
:type val_c_Criteria: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DataSourceID: The operator to apply to the field DataSourceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DataSourceID: The internal NetMRI identifier for the collector NetMRI that raised this issue. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DataSourceID: If op_DataSourceID is specified, the field named in this input will be compared to the value in DataSourceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DataSourceID must be specified if op_DataSourceID is specified.
:type val_f_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DataSourceID: If op_DataSourceID is specified, this value will be compared to the value in DataSourceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DataSourceID must be specified if op_DataSourceID is specified.
:type val_c_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DetailID: The operator to apply to the field DetailID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DetailID: A unique identifier for this issue instance. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DetailID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DetailID: If op_DetailID is specified, the field named in this input will be compared to the value in DetailID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DetailID must be specified if op_DetailID is specified.
:type val_f_DetailID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DetailID: If op_DetailID is specified, this value will be compared to the value in DetailID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DetailID must be specified if op_DetailID is specified.
:type val_c_DetailID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceID: The operator to apply to the field DeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceID: The internal NetMRI identifier for the device to which this issue applies. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceID: If op_DeviceID is specified, the field named in this input will be compared to the value in DeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceID must be specified if op_DeviceID is specified.
:type val_f_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceID: If op_DeviceID is specified, this value will be compared to the value in DeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceID must be specified if op_DeviceID is specified.
:type val_c_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_EndTime: The operator to apply to the field EndTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. EndTime: The ending effective time of this revision of this record, or empty if still in effect. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_EndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_EndTime: If op_EndTime is specified, the field named in this input will be compared to the value in EndTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_EndTime must be specified if op_EndTime is specified.
:type val_f_EndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_EndTime: If op_EndTime is specified, this value will be compared to the value in EndTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_EndTime must be specified if op_EndTime is specified.
:type val_c_EndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_InterfaceID: The operator to apply to the field InterfaceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. InterfaceID: The internal NetMRI identifier for the interface to which this issue applies, if relevant. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_InterfaceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_InterfaceID: If op_InterfaceID is specified, the field named in this input will be compared to the value in InterfaceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_InterfaceID must be specified if op_InterfaceID is specified.
:type val_f_InterfaceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_InterfaceID: If op_InterfaceID is specified, this value will be compared to the value in InterfaceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_InterfaceID must be specified if op_InterfaceID is specified.
:type val_c_InterfaceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_IprgID: The operator to apply to the field IprgID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. IprgID: The internal NetMRI identifier for the HSRP or VRRP group to which this issue applies, if relevant. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_IprgID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_IprgID: If op_IprgID is specified, the field named in this input will be compared to the value in IprgID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_IprgID must be specified if op_IprgID is specified.
:type val_f_IprgID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_IprgID: If op_IprgID is specified, this value will be compared to the value in IprgID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_IprgID must be specified if op_IprgID is specified.
:type val_c_IprgID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_IssueID: The operator to apply to the field IssueID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. IssueID: The internal NetMRI identifier for this issue instance. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_IssueID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_IssueID: If op_IssueID is specified, the field named in this input will be compared to the value in IssueID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_IssueID must be specified if op_IssueID is specified.
:type val_f_IssueID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_IssueID: If op_IssueID is specified, this value will be compared to the value in IssueID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_IssueID must be specified if op_IssueID is specified.
:type val_c_IssueID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_IssueTypeID: The operator to apply to the field IssueTypeID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. IssueTypeID: An internal NetMRI identifier for the type of this issue. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_IssueTypeID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_IssueTypeID: If op_IssueTypeID is specified, the field named in this input will be compared to the value in IssueTypeID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_IssueTypeID must be specified if op_IssueTypeID is specified.
:type val_f_IssueTypeID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_IssueTypeID: If op_IssueTypeID is specified, this value will be compared to the value in IssueTypeID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_IssueTypeID must be specified if op_IssueTypeID is specified.
:type val_c_IssueTypeID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_IssueValue: The operator to apply to the field IssueValue. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. IssueValue: The meaning of this field varies based upon the specific issue. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_IssueValue: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_IssueValue: If op_IssueValue is specified, the field named in this input will be compared to the value in IssueValue using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_IssueValue must be specified if op_IssueValue is specified.
:type val_f_IssueValue: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_IssueValue: If op_IssueValue is specified, this value will be compared to the value in IssueValue using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_IssueValue must be specified if op_IssueValue is specified.
:type val_c_IssueValue: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_SeverityID: The operator to apply to the field SeverityID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. SeverityID: The issue severity ID (1 = Error, 2 = Warning, 3 = Info). Useful for sorting. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_SeverityID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_SeverityID: If op_SeverityID is specified, the field named in this input will be compared to the value in SeverityID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_SeverityID must be specified if op_SeverityID is specified.
:type val_f_SeverityID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_SeverityID: If op_SeverityID is specified, this value will be compared to the value in SeverityID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_SeverityID must be specified if op_SeverityID is specified.
:type val_c_SeverityID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_Stability: The operator to apply to the field Stability. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. Stability: The stability contribution for this issue. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_Stability: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_Stability: If op_Stability is specified, the field named in this input will be compared to the value in Stability using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_Stability must be specified if op_Stability is specified.
:type val_f_Stability: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_Stability: If op_Stability is specified, this value will be compared to the value in Stability using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_Stability must be specified if op_Stability is specified.
:type val_c_Stability: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_StartTime: The operator to apply to the field StartTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. StartTime: The date/time this issue instance was raised. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_StartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_StartTime: If op_StartTime is specified, the field named in this input will be compared to the value in StartTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_StartTime must be specified if op_StartTime is specified.
:type val_f_StartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_StartTime: If op_StartTime is specified, this value will be compared to the value in StartTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_StartTime must be specified if op_StartTime is specified.
:type val_c_StartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_SubnetID: The operator to apply to the field SubnetID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. SubnetID: The internal NetMRI identifier for the subnet to which this issue applies, if relevant. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_SubnetID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_SubnetID: If op_SubnetID is specified, the field named in this input will be compared to the value in SubnetID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_SubnetID must be specified if op_SubnetID is specified.
:type val_f_SubnetID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_SubnetID: If op_SubnetID is specified, this value will be compared to the value in SubnetID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_SubnetID must be specified if op_SubnetID is specified.
:type val_c_SubnetID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_SuppressedInd: The operator to apply to the field SuppressedInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. SuppressedInd: A flag indicating whether this issue is suppressed or not. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_SuppressedInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_SuppressedInd: If op_SuppressedInd is specified, the field named in this input will be compared to the value in SuppressedInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_SuppressedInd must be specified if op_SuppressedInd is specified.
:type val_f_SuppressedInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_SuppressedInd: If op_SuppressedInd is specified, this value will be compared to the value in SuppressedInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_SuppressedInd must be specified if op_SuppressedInd is specified.
:type val_c_SuppressedInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_Timestamp: The operator to apply to the field Timestamp. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. Timestamp: The date and time this record was collected or calculated. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_Timestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_Timestamp: If op_Timestamp is specified, the field named in this input will be compared to the value in Timestamp using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_Timestamp must be specified if op_Timestamp is specified.
:type val_f_Timestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_Timestamp: If op_Timestamp is specified, this value will be compared to the value in Timestamp using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_Timestamp must be specified if op_Timestamp is specified.
:type val_c_Timestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_VlanID: The operator to apply to the field VlanID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. VlanID: The internal NetMRI identifier of the VLAN to which this issue applies, if relevant. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_VlanID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_VlanID: If op_VlanID is specified, the field named in this input will be compared to the value in VlanID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_VlanID must be specified if op_VlanID is specified.
:type val_f_VlanID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_VlanID: If op_VlanID is specified, this value will be compared to the value in VlanID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_VlanID must be specified if op_VlanID is specified.
:type val_c_VlanID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the issue details as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of issue detail methods. The listed methods will be called on each issue detail returned and included in the output. Available methods are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc, title, severity, infradevice.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: data_source, device, interface, iprg, vlan, subnet, alternate_device, issue_desc.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param sort: The data field(s) to use for sorting the output. Default is IssueID. Valid values are DataSourceID, IssueID, StartTime, EndTime, ChangedCols, Timestamp, IssueTypeID, DetailID, DeviceID, InterfaceID, VlanID, SubnetID, IprgID, BatchID, AltDeviceID, Criteria, IssueValue, Component, SeverityID, Correctness, Stability, SuppressedInd.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` IssueID
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param select: The list of attributes to return for each IssueDetail. Valid values are DataSourceID, IssueID, StartTime, EndTime, ChangedCols, Timestamp, IssueTypeID, DetailID, DeviceID, InterfaceID, VlanID, SubnetID, IprgID, BatchID, AltDeviceID, Criteria, IssueValue, Component, SeverityID, Correctness, Stability, SuppressedInd. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
**Outputs**
:return issue_details: An array of the IssueDetail objects that match the specified input criteria.
:rtype issue_details: Array of IssueDetail
"""
return self.api_list_request(self._get_method_fullname("find"), kwargs)
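A hedged usage sketch for the find() call documented above. The broker object and the helper name below are assumptions for illustration; only the parameter names (limit, start, sort, dir, select) come from the docstring.

```python
def find_recent_issue_details(broker):
    # Request the first page of 100 IssueDetail records, newest StartTime first.
    # Parameter names follow the docstring above; values are illustrative.
    return broker.find(
        limit=100,                    # page size (default 1000, max 10000)
        start=0,                      # first record of the selected page
        sort=["StartTime"],           # one of the documented sort fields
        dir=["desc"],                 # sort direction
        select=["IssueID", "DeviceID", "StartTime"],
    )
```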
# File: outlierRemover.py (repo: dg1223/GestureRecognition, license: MIT)
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Fri Jun 05 23:31:44 2015
@author: Shamir
Comment (Sep 27, 2016): Should have tried interquartile range
"""
import pandas
import os
import time
#import matplotlib.pyplot as plt
#import numpy as np
from scipy.spatial.distance import euclidean
from natsort import natsorted
start = time.clock()
# function for Linear Interpolation
# NOTE: the arguments are column indices into the current row of `file`
# (not the datapoint values themselves); the function also reads the
# globals `file` and `i` set inside the main loops below.
def linearInterpolation(prev_datapoint, target_datapoint, next_datapoint):
    denominator = next_datapoint - prev_datapoint
    numerator = ((target_datapoint - prev_datapoint) * (file.values[i, next_datapoint] - file.values[i, prev_datapoint]))
    interpolated_value = (numerator/denominator) + file.values[i, prev_datapoint]
    return interpolated_value
# function for derivative filtering
def firstDerivative(prev, curr, nexT):
try:
derivative = (abs(prev - curr) + abs(curr - nexT)) / abs(prev - nexT)
return derivative
except ZeroDivisionError: # as detail:
if abs(prev - curr) == abs(nexT - curr):
error = 1
return error
#print 'Two identical datapoints:', detail
pass
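The two helpers above depend on the globals `file` and `i`. A self-contained restatement of the interpolation formula (illustrative only, with the x positions and endpoint y values passed in explicitly instead of column indices) makes the arithmetic easy to verify:

```python
def linear_interpolation_standalone(x_prev, x_target, x_next, y_prev, y_next):
    # Same formula as linearInterpolation() above, but with no globals:
    # interpolate the y value at x_target between (x_prev, y_prev)
    # and (x_next, y_next).
    return y_prev + (x_target - x_prev) * (y_next - y_prev) / float(x_next - x_prev)
```

For example, interpolating at the midpoint between (0, 2.0) and (2, 4.0) gives 3.0.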
#source = 'C:\\Users\\Shamir\\Desktop\\broken down files\\' # broken down files
source = 'C:\\Users\\Shamir\\Desktop\\Grad\\Participant Study\\Broken down files\\P1\\'
filelist = os.listdir(source)
filelist = natsorted(filelist) # naturally sort the file list
#destination = 'C:\\Users\\Shamir\\Desktop\\denoised3(final)\\'
destination = 'C:\\Users\\Shamir\\Desktop\\Grad\\Participant Study\\Denosed_allValues\\P1\\'
fileformat = '.csv'
backslash = '\\'
count = 1
## Algorithm for filtering noisy peaks
for eachfile in range(len(filelist)): # len(filelist)
# fileHandler (can become a different class!)
csvfile = source + filelist[eachfile] # full filepath
file = pandas.read_csv(csvfile, header = None)
#file = file.dropna(axis = 1) # reject every column that contains at least one NaN value (we lose at least one instance of gesture) - use only for unprocessed datasets
#file = file.drop(range(0,40), axis = 1) # delete 1st 40 points
file.values[1:] = file.values[1:].astype(float) # convert all strings to floats; ignore header columns
#plt.plot(file.values[32, 0:5])
num_rows = len(file) # number of rows in the dataset
num_columns = len(file.values[0]) # number of columns after preprocessing
column_limit = num_columns - 1 # boundary condition for iterating through columns
thresh = 0.12 # threshold to find peaks (noisy values based-on euclidean distance)
# start denoising every file (dataset)
for i in range(1, num_rows): # 1, num_rows
index = 1 # index of current datapoint
for j in range(num_columns):
#print "i, j = ", i, j
if index == num_columns - 1:
#print ("error: index == num_columns - 1")
break
else:
# prev_point (1), index (2), next_point (3), secNext_point (4), thirdNext_point (5), fourthNext_point (6), window_bound (7)
prev_point = index - 1
next_point = index + 1
secNext_point = index + 2
thirdNext_point = index + 3
fourthNext_point = index + 4
window_bound = index + 5
## if boundary condition is False and euclidean distance is greater than threshold, perform Linear Interpolation. Check for consecutive, noisy datapoints (window size = 6)
## and perform L.I. on each noisy value with the previous and next clean datapoints.
try:
if (index < (num_columns - 1)) and (euclidean(file.values[i, index], file.values[i, prev_point]) <= thresh):
#print ("0th condition")
index += 1
elif (index < (num_columns - 1)) and (euclidean(file.values[i, index], file.values[i, prev_point]) > thresh)\
and euclidean(file.values[i, next_point], file.values[i, prev_point]) <= thresh:
#print ("1st condition")
file.values[i, index] = linearInterpolation(prev_point, index, next_point)
index += 2
elif (index < (num_columns - 3)) and (euclidean(file.values[i, index], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, next_point], file.values[i, prev_point]) > thresh) and (euclidean(file.values[i, secNext_point], file.values[i, prev_point]) <= thresh):
file.values[i, index] = linearInterpolation(prev_point, index, secNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, secNext_point)
#print ("2nd condition")
index += 3
elif (index < (num_columns - 4)) and (euclidean(file.values[i, index], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, next_point], file.values[i, prev_point]) > thresh) and (euclidean(file.values[i, secNext_point], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, thirdNext_point], file.values[i, prev_point]) <= thresh):
file.values[i, index] = linearInterpolation(prev_point, index, thirdNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, thirdNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, thirdNext_point)
#print ("3rd condition")
index += 4
elif (index < (num_columns - 5)) and (euclidean(file.values[i, index], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, next_point], file.values[i, prev_point]) > thresh) and (euclidean(file.values[i, secNext_point], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, thirdNext_point], file.values[i, prev_point]) > thresh) and (euclidean(file.values[i, window_bound], file.values[i, prev_point]) <= thresh):
file.values[i, index] = linearInterpolation(prev_point, index, window_bound)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, window_bound)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, window_bound)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, window_bound)
#print ("4th condition")
index += 5
elif (index < (num_columns - 6)) and (euclidean(file.values[i, index], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, next_point], file.values[i, prev_point]) > thresh) and (euclidean(file.values[i, secNext_point], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, thirdNext_point], file.values[i, prev_point]) > thresh) and (euclidean(file.values[i, fourthNext_point], file.values[i, prev_point]) > thresh)\
and (euclidean(file.values[i, window_bound], file.values[i, prev_point]) <= thresh):
file.values[i, index] = linearInterpolation(prev_point, index, window_bound)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, window_bound)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, window_bound)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, window_bound)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, window_bound)
                        #print ("5th condition")
index += 6
# if there is no noise inside the window, go to next datapoint
elif index < num_columns - 1:
index += 1
except ValueError: ## every datapoint is important; let's not forget them because of other missing values :)
pass
#print ("prev_point = "), prev_point
#print ("index = "), index
#print ("next_point = "), next_point
#plt.plot(file.values[32, 0:5])
## Derivative filtering
thresh2 = 2.5
for i in range(1, num_rows):#
index = 1
for j in range(num_columns):
if index == num_columns - 1:
#print ("error: index == num_columns - 1")
break
else:
# prev_point (1), index (2), next_point (3), secNext_point (4), thirdNext_point (5), fourthNext_point (6), window_bound (7)
prev_point = index - 1
next_point = index + 1
secNext_point = index + 2
thirdNext_point = index + 3
fourthNext_point = index + 4
fifthNext_point = index + 5
sixthNext_point = index + 6
seventhNext_point = index + 7
eigthNext_point = index + 8
ninthNext_point = index + 9
window_bound = index + 10
try:
if (index < (num_columns - 1))\
and (file.values[i, index] - file.values[i, prev_point] == 0):
#print ("0th derivative condition")
index += 1
elif (index < (num_columns - 1))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) == 1\
and file.values[i, prev_point] == file.values[i, next_point]:
file.values[i, index] = linearInterpolation(prev_point, index, next_point)
                        #print ("condition: zero division error with noise [1] [example: -0.089, 0.024, -0.089]")
index += 2
elif (index < (num_columns - 2))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) == 1\
and file.values[i, prev_point] == file.values[i, secNext_point]:
file.values[i, index] = linearInterpolation(prev_point, index, secNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, secNext_point)
#print i, index
                        #print ("condition: zero division error with noise [2] [example: -0.089, 0.024, 0.024, -0.089]")
index += 3
elif (index < (num_columns - 3))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) == 1\
and file.values[i, prev_point] == file.values[i, thirdNext_point]:
file.values[i, index] = linearInterpolation(prev_point, index, thirdNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, thirdNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, thirdNext_point)
#print i, index
                        #print ("condition: zero division error with noise [3] [example: -0.089, 0.024, 0.024, 0.024, -0.089]")
index += 4
elif (index < (num_columns - 4))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) == 1\
and file.values[i, prev_point] == file.values[i, fourthNext_point]:
file.values[i, index] = linearInterpolation(prev_point, index, fourthNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, fourthNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, fourthNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, fourthNext_point)
#print i, index
                        #print ("condition: zero division error with noise [4] [example: -0.089, 0.024, 0.024, 0.024, 0.024, -0.089]")
index += 5
elif (index < (num_columns - 1))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, next_point)
#print ("first 1st derivative condition")
index += 2
elif (index < (num_columns - 2))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, secNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, secNext_point)
#print ("Second 1st derivative condition")
index += 2
elif (index < (num_columns - 3))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) > thresh2:
#print i, index
#print ("third 1st derivative condition")
file.values[i, index] = linearInterpolation(prev_point, index, thirdNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, thirdNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, thirdNext_point)
index += 3
elif (index < (num_columns - 4))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) > thresh2:
#print i, index
#print ("fourth 1st derivative condition")
file.values[i, index] = linearInterpolation(prev_point, index, fourthNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, fourthNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, fourthNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, fourthNext_point)
index += 4
elif (index < (num_columns - 5))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fifthNext_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, fifthNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, fifthNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, fifthNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, fifthNext_point)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, fifthNext_point)
#print i, index
#print ("fifth 1st derivative condition")
index += 5
elif (index < (num_columns - 6))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fifthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, sixthNext_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, sixthNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, sixthNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, sixthNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, sixthNext_point)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, sixthNext_point)
file.values[i, fifthNext_point] = linearInterpolation(prev_point, fifthNext_point, sixthNext_point)
#print i, index
#print ("sixth 1st derivative condition")
index += 6
elif (index < (num_columns - 7))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fifthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, sixthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, seventhNext_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, seventhNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, seventhNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, seventhNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, seventhNext_point)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, seventhNext_point)
file.values[i, fifthNext_point] = linearInterpolation(prev_point, fifthNext_point, seventhNext_point)
file.values[i, sixthNext_point] = linearInterpolation(prev_point, sixthNext_point, seventhNext_point)
#print i, index
#print ("seventh 1st derivative condition")
index += 7
elif (index < (num_columns - 8))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fifthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, sixthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, seventhNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, eigthNext_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, eigthNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, eigthNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, eigthNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, eigthNext_point)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, eigthNext_point)
file.values[i, fifthNext_point] = linearInterpolation(prev_point, fifthNext_point, eigthNext_point)
file.values[i, sixthNext_point] = linearInterpolation(prev_point, sixthNext_point, eigthNext_point)
file.values[i, seventhNext_point] = linearInterpolation(prev_point, seventhNext_point, eigthNext_point)
#print i, index
#print ("eigth 1st derivative condition")
index += 8
# inconsistent with the earlier ones
elif (index < (num_columns - 9))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fifthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, sixthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, seventhNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, eigthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, ninthNext_point]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, ninthNext_point)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, ninthNext_point)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, ninthNext_point)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, ninthNext_point)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, ninthNext_point)
file.values[i, fifthNext_point] = linearInterpolation(prev_point, fifthNext_point, ninthNext_point)
file.values[i, sixthNext_point] = linearInterpolation(prev_point, sixthNext_point, ninthNext_point)
file.values[i, seventhNext_point] = linearInterpolation(prev_point, seventhNext_point, ninthNext_point)
file.values[i, eigthNext_point] = linearInterpolation(prev_point, eigthNext_point, ninthNext_point)
#print i, index
#print ("ninth 1st derivative condition")
index += 9
elif (index < (num_columns - 10))\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, next_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, secNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, thirdNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fourthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, fifthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, sixthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, seventhNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, eigthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, ninthNext_point]) <= thresh2\
and firstDerivative(file.values[i, prev_point], file.values[i, index], file.values[i, window_bound]) > thresh2:
file.values[i, index] = linearInterpolation(prev_point, index, window_bound)
file.values[i, next_point] = linearInterpolation(prev_point, next_point, window_bound)
file.values[i, secNext_point] = linearInterpolation(prev_point, secNext_point, window_bound)
file.values[i, thirdNext_point] = linearInterpolation(prev_point, thirdNext_point, window_bound)
file.values[i, fourthNext_point] = linearInterpolation(prev_point, fourthNext_point, window_bound)
file.values[i, fifthNext_point] = linearInterpolation(prev_point, fifthNext_point, window_bound)
file.values[i, sixthNext_point] = linearInterpolation(prev_point, sixthNext_point, window_bound)
file.values[i, seventhNext_point] = linearInterpolation(prev_point, seventhNext_point, window_bound)
file.values[i, eigthNext_point] = linearInterpolation(prev_point, eigthNext_point, window_bound)
file.values[i, ninthNext_point] = linearInterpolation(prev_point, ninthNext_point, window_bound)
#print i, index
#print ("tenth 1st derivative condition")
index += 10
# if there is no noise inside the window, go to next datapoint
elif index < num_columns - 1:
index += 1
except ValueError:
pass
#file = file.drop(range(0,40), axis = 1) # delete 1st 40 points
#file = file.drop(range(num_columns - , num_columns), axis = 1) # delete last 5 points
# save data to file
file.to_csv(destination + str(count) + fileformat, header = False, index = False)
count += 1
print time.clock() - start, 'seconds taken to execute the program'
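The loops above repair noisy runs by scanning a fixed window ahead for the next clean datapoint and linearly interpolating everything in between. A compact, self-contained restatement of that idea on a 1-D series (illustrative only, without the fixed window-size ceiling or the derivative pass) might look like:

```python
def repair_spikes(series, thresh):
    # Replace any run of points that jump more than `thresh` away from the
    # last clean point, interpolating linearly towards the next clean point.
    out = list(series)
    i = 1
    while i < len(out):
        if abs(out[i] - out[i - 1]) <= thresh:
            i += 1
            continue
        # scan ahead for the next point close to the last clean one
        j = i + 1
        while j < len(out) and abs(out[j] - out[i - 1]) > thresh:
            j += 1
        if j == len(out):
            break  # no clean point left to interpolate towards
        for k in range(i, j):
            out[k] = out[i - 1] + (k - (i - 1)) * (out[j] - out[i - 1]) / float(j - (i - 1))
        i = j + 1
    return out
```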
# File: features/steps/managers/base.py (repo: lordkyzr/launchkey-python, license: MIT)
def __init__(self, organization_factory):
self._organization_factory = organization_factory
self._organization_client = self._organization_factory. \
make_organization_client()
def _get_directory_client(self, directory_id):
return self._organization_factory.make_directory_client(directory_id)
def _get_service_client(self, service_id):
return self._organization_factory.make_service_client(service_id)
def cleanup(self):
pass
# File: Assignment-2/q3b.py (repo: pankajk22/Artificial-Intelligence-Assignments, license: MIT)
import random
import copy
import numpy as np
class wolverine_MDP():
def __init__(self):
self.jean_coordinates = [(0, 0), (3, 4)]
jean_coordinate = random.choice(self.jean_coordinates)
self.jean_x = jean_coordinate[0]
self.jean_y = jean_coordinate[1]
self.wall_x = 2
self.wall_y = 3
self.xavier_school_x = 0
self.xavier_school_y = 4
def start_state(self):
self.wolverine_x = random.randint(0, 4)
while(self.wolverine_x == self.wall_x):
self.wolverine_x = random.randint(0, 4)
self.wolverine_y = random.randint(0, 4)
while(self.wolverine_y == self.wall_y):
self.wolverine_y = random.randint(0, 4)
self.magneto_x = random.randint(0, 4)
while(self.magneto_x == self.wall_x or self.magneto_x == self.xavier_school_x):
self.magneto_x = random.randint(0, 4)
self.magneto_y = random.randint(0, 4)
while(self.magneto_y == self.wall_y or self.magneto_y == self.xavier_school_y):
self.magneto_y = random.randint(0, 4)
state = (self.magneto_x, self.magneto_y, self.wolverine_x,
self.wolverine_y, self.jean_x, self.jean_y)
return state
def Is_End(self, state):
m_x = state[0]
m_y = state[1]
w_x = state[2]
w_y = state[3]
j_x = state[4]
j_y = state[5]
if(m_x == w_x and w_x == j_x and m_y == w_y and w_y == j_y):
return True
elif(w_x == j_x and w_y == j_y):
return True
elif(m_x == w_x and m_y == w_y):
return True
else:
return False
def Reward(self, state):
m_x = state[0]
m_y = state[1]
w_x = state[2]
w_y = state[3]
j_x = state[4]
j_y = state[5]
if(m_x == w_x and w_x == j_x and m_y == w_y and w_y == j_y):
return -15
elif(w_x == j_x and w_y == j_y):
return 20
elif(m_x == w_x and m_y == w_y):
return -20
else:
return 0

    def lazy_magneto(self, state):
        result = []
        m_x = state[0]
        m_y = state[1]
        wall_coordinates = (self.wall_x, self.wall_y)
        school_coordinates = (self.xavier_school_x, self.xavier_school_y)
        if m_x + 1 < 5:
            if (m_x + 1, m_y) != school_coordinates and (m_x + 1, m_y) != wall_coordinates:
                result.append((m_x + 1, m_y))
        if m_y + 1 < 5:
            if (m_x, m_y + 1) != school_coordinates and (m_x, m_y + 1) != wall_coordinates:
                result.append((m_x, m_y + 1))
        if m_x - 1 >= 0:
            if (m_x - 1, m_y) != school_coordinates and (m_x - 1, m_y) != wall_coordinates:
                result.append((m_x - 1, m_y))
        if m_y - 1 >= 0:
            if (m_x, m_y - 1) != school_coordinates and (m_x, m_y - 1) != wall_coordinates:
                result.append((m_x, m_y - 1))
        return result

    def intelligent_magneto(self, state):
        result = []
        m_x = state[0]
        m_y = state[1]
        w_x = state[2]
        w_y = state[3]
        wall_coordinates = (self.wall_x, self.wall_y)
        school_coordinates = (self.xavier_school_x, self.xavier_school_y)
        if m_x + 1 < 5:
            if (m_x + 1, m_y) != school_coordinates and (m_x + 1, m_y) != wall_coordinates:
                result.append((m_x + 1, m_y))
        if m_y + 1 < 5:
            if (m_x, m_y + 1) != school_coordinates and (m_x, m_y + 1) != wall_coordinates:
                result.append((m_x, m_y + 1))
        if m_x - 1 >= 0:
            if (m_x - 1, m_y) != school_coordinates and (m_x - 1, m_y) != wall_coordinates:
                result.append((m_x - 1, m_y))
        if m_y - 1 >= 0:
            if (m_x, m_y - 1) != school_coordinates and (m_x, m_y - 1) != wall_coordinates:
                result.append((m_x, m_y - 1))
        distance = 100000000000
        valid_results = []
        for valid_actions in result:
            x = valid_actions[0] - w_x
            y = valid_actions[1] - w_y
            dist = (x * x) + (y * y)
            distance = min(dist, distance)
        for valid_actions in result:
            x = valid_actions[0] - w_x
            y = valid_actions[1] - w_y
            dist = (x * x) + (y * y)
            if dist == distance:
                valid_results.append(valid_actions)
        return valid_results

    def wolverine_valid_actions(self, state):
        result = []
        w_x = state[2]
        w_y = state[3]
        if w_x + 1 < 5:
            if (w_x + 1, w_y) != (self.wall_x, self.wall_y):
                result.append((w_x + 1, w_y))
        if w_y + 1 < 5:
            if (w_x, w_y + 1) != (self.wall_x, self.wall_y):
                result.append((w_x, w_y + 1))
        if w_x - 1 >= 0:
            if (w_x - 1, w_y) != (self.wall_x, self.wall_y):
                result.append((w_x - 1, w_y))
        if w_y - 1 >= 0:
            if (w_x, w_y - 1) != (self.wall_x, self.wall_y):
                result.append((w_x, w_y - 1))
        return result

    def new_position_for_lazy_magneto(self, state):
        m_x = state[0]
        m_y = state[1]
        j_x = state[4]
        j_y = state[5]
        index = self.jean_coordinates.index((j_x, j_y))
        if random.randint(1, 10) > 8:
            if index == 1:
                jean_coordinate = self.jean_coordinates[0]
                j_x = jean_coordinate[0]
                j_y = jean_coordinate[1]
            else:
                jean_coordinate = self.jean_coordinates[1]
                j_x = jean_coordinate[0]
                j_y = jean_coordinate[1]
        magneto_next_step = self.lazy_magneto(state)
        if random.randint(1, 100) <= 95:
            next_coordinate = random.choice(magneto_next_step)
            m_x = next_coordinate[0]
            m_y = next_coordinate[1]
        return (m_x, m_y, j_x, j_y)

    def new_position_for_intelligent_magneto(self, state):
        m_x = state[0]
        m_y = state[1]
        j_x = state[4]
        j_y = state[5]
        index = self.jean_coordinates.index((j_x, j_y))
        if random.randint(1, 10) > 8:
            if index == 1:
                jean_coordinate = self.jean_coordinates[0]
                j_x = jean_coordinate[0]
                j_y = jean_coordinate[1]
            else:
                jean_coordinate = self.jean_coordinates[1]
                j_x = jean_coordinate[0]
                j_y = jean_coordinate[1]
        magneto_next_step = self.intelligent_magneto(state)
        if random.randint(1, 100) <= 95:
            next_coordinate = random.choice(magneto_next_step)
            m_x = next_coordinate[0]
            m_y = next_coordinate[1]
        return (m_x, m_y, j_x, j_y)

    def allstates(self):
        jean_state = []
        magneto_state = []
        wolverine_state = []
        for pos in self.jean_coordinates:
            jean_state.append(pos)
        for i in range(0, 5):
            for j in range(0, 5):
                if (i, j) != (self.wall_x, self.wall_y) and (i, j) != (self.xavier_school_x, self.xavier_school_y):
                    magneto_state.append((i, j))
        for i in range(0, 5):
            for j in range(0, 5):
                if (i, j) != (self.wall_x, self.wall_y):
                    wolverine_state.append((i, j))
        allstates = []
        for j_state in jean_state:
            for m_state in magneto_state:
                for w_state in wolverine_state:
                    next_state = (m_state[0], m_state[1], w_state[0],
                                  w_state[1], j_state[0], j_state[1])
                    allstates.append(next_state)
        return allstates

    def next_state_probabilty_reward(self, state, wolverine_action):
        m_x = state[0]
        m_y = state[1]
        w_x = state[2]
        w_y = state[3]
        j_x = state[4]
        j_y = state[5]
        stay_prob = 0.05
        jean_next_step_ = []
        magneto_next_step_ = []
        wolverine_next_step_ = []
        index = self.jean_coordinates.index((j_x, j_y))
        if index == 1:
            jean_coordinate = self.jean_coordinates[0]
            jean_next_step_.append((jean_coordinate, 0.2))
        else:
            jean_coordinate = self.jean_coordinates[1]
            jean_next_step_.append((jean_coordinate, 0.2))
        jean_next_step_.append(((j_x, j_y), 0.8))
        moving_prob = 0.95
        wolverine_next_step_.append((wolverine_action, moving_prob))
        wolverine_next_step_.append(((w_x, w_y), stay_prob))
        magneto_next_step = self.lazy_magneto(state)
        moving_prob = 0.95 / len(magneto_next_step)
        for step in magneto_next_step:
            magneto_next_step_.append((step, moving_prob))
        magneto_next_step_.append(((m_x, m_y), stay_prob))
        result = []
        for w_step in wolverine_next_step_:
            for m_step in magneto_next_step_:
                for j_step in jean_next_step_:
                    next_state = (m_step[0][0], m_step[0][1], w_step[0][0],
                                  w_step[0][1], j_step[0][0], j_step[0][1])
                    next_state_probability = round(m_step[1] * w_step[1] * j_step[1], 5)
                    reward = self.Reward(next_state)
                    result.append((next_state, next_state_probability, reward))
        return result

    def next_state_probabilty_reward_for_active_magneto(self, state, wolverine_action):
        m_x = state[0]
        m_y = state[1]
        w_x = state[2]
        w_y = state[3]
        j_x = state[4]
        j_y = state[5]
        stay_prob = 0.05
        jean_next_step_ = []
        magneto_next_step_ = []
        wolverine_next_step_ = []
        index = self.jean_coordinates.index((j_x, j_y))
        if index == 1:
            jean_coordinate = self.jean_coordinates[0]
            jean_next_step_.append((jean_coordinate, 0.2))
        else:
            jean_coordinate = self.jean_coordinates[1]
            jean_next_step_.append((jean_coordinate, 0.2))
        jean_next_step_.append(((j_x, j_y), 0.8))
        moving_prob = 0.95
        wolverine_next_step_.append((wolverine_action, moving_prob))
        wolverine_next_step_.append(((w_x, w_y), stay_prob))
        magneto_next_step = self.intelligent_magneto(state)
        moving_prob = 0.95 / len(magneto_next_step)
        for step in magneto_next_step:
            magneto_next_step_.append((step, moving_prob))
        magneto_next_step_.append(((m_x, m_y), stay_prob))
        result = []
        for w_step in wolverine_next_step_:
            for m_step in magneto_next_step_:
                for j_step in jean_next_step_:
                    next_state = (m_step[0][0], m_step[0][1], w_step[0][0],
                                  w_step[0][1], j_step[0][0], j_step[0][1])
                    next_state_probability = round(m_step[1] * w_step[1] * j_step[1], 5)
                    reward = self.Reward(next_state)
                    result.append((next_state, next_state_probability, reward))
        return result

    def Discount(self):
        return 0.85

def policy_evaluation(mdp, V, Pi_s):
    all_states = mdp.allstates()

    def Q(state, action):
        return sum(prob * (reward + (mdp.Discount() * V[newState]))
                   for (newState, prob, reward) in mdp.next_state_probabilty_reward(state, action))

    while True:
        newV = {}
        for state in all_states:
            if mdp.Is_End(state) == True:
                newV[state] = 0
            else:
                newV[state] = Q(state, Pi_s[state])
        if max(abs(V[state] - newV[state]) for state in all_states) < 0.0001:
            break
        V = copy.deepcopy(newV)
    pi = {}
    for state in all_states:
        if mdp.Is_End(state) == True:
            pi[state] = 'End'
        else:
            pi[state] = max((Q(state, action), action) for action in mdp.wolverine_valid_actions(state))[1]
    return pi

def policy_stable(pi_s, new_pi_s):
    diff = 0
    for state in pi_s:
        if pi_s[state] != new_pi_s[state]:
            diff += 1
    if diff > 0:
        return False
    return True

def policy_iteration_for_lazy_magneto(mdp):
    V = {}
    all_states = mdp.allstates()
    for state in all_states:
        V[state] = 0
    Pi_s = {}
    for state in all_states:
        if mdp.Is_End(state) == True:
            Pi_s[state] = 'End'
        else:
            wolv_next_moves = mdp.wolverine_valid_actions(state)
            Pi_s[state] = random.choice(wolv_next_moves)
    iterations = 1
    while True:
        new_Pi_s = policy_evaluation(mdp, V, Pi_s)
        iterations += 1
        if policy_stable(Pi_s, new_Pi_s) == True:
            break
        else:
            Pi_s = copy.deepcopy(new_Pi_s)
    return Pi_s

def policy_evaluation_for_active_magneto(mdp, V, Pi_s):
    all_states = mdp.allstates()

    def Q(state, action):
        return sum(prob * (reward + (mdp.Discount() * V[newState]))
                   for (newState, prob, reward) in mdp.next_state_probabilty_reward_for_active_magneto(state, action))

    while True:
        newV = {}
        for state in all_states:
            if mdp.Is_End(state) == True:
                newV[state] = 0
            else:
                newV[state] = Q(state, Pi_s[state])
        if max(abs(V[state] - newV[state]) for state in all_states) < 0.0001:
            break
        V = copy.deepcopy(newV)
    pi = {}
    for state in all_states:
        if mdp.Is_End(state) == True:
            pi[state] = 'End'
        else:
            pi[state] = max((Q(state, action), action) for action in mdp.wolverine_valid_actions(state))[1]
    return pi

def policy_Iteration_for_active_magneto(mdp):
    V = {}
    all_states = mdp.allstates()
    for state in all_states:
        V[state] = 0
    Pi_s = {}
    for state in all_states:
        if mdp.Is_End(state) == True:
            Pi_s[state] = 'End'
        else:
            wolv_next_moves = mdp.wolverine_valid_actions(state)
            Pi_s[state] = random.choice(wolv_next_moves)
    iterations = 1
    while True:
        new_Pi_s = policy_evaluation_for_active_magneto(mdp, V, Pi_s)
        iterations += 1
        if policy_stable(Pi_s, new_Pi_s) == True:
            break
        else:
            Pi_s = copy.deepcopy(new_Pi_s)
    return Pi_s

def make_grid(state, mdp):
    grid = [['-' for i in range(5)] for j in range(5)]
    grid[mdp.wall_x][mdp.wall_y] = 'B'
    grid[mdp.xavier_school_x][mdp.xavier_school_y] = 'X'
    grid[state[0]][state[1]] = 'M'
    grid[state[2]][state[3]] = 'W'
    grid[state[4]][state[5]] = 'J'
    return grid

def Print(grid):
    for r in grid:
        for c in r:
            print(c, end=" ")
        print()
    print()
    print()

def play_game_for_lazy_magneto(mdp, policy):
    print("--------------------Playing Game for Lazy magneto---------------------")
    wolve = 0
    jean = 0
    magneto = 0
    for i in range(15):
        state = mdp.start_state()
        print('Start State: ')
        grid = make_grid(state, mdp)
        Print(grid)
        if policy[state] == 'End':
            print('End State: ')
            total_reward = mdp.Reward(state)
            if total_reward == 20:
                wolve += 1
            elif total_reward == -20:
                magneto += 1
            elif total_reward == -15:
                jean += 1
            Print(grid)
        else:
            while policy[state] != 'End':
                (new_m_x, new_m_y, new_j_x, new_j_y) = mdp.new_position_for_lazy_magneto(state)
                wolverine_next_step = policy[state]
                new_w_x = wolverine_next_step[0]
                new_w_y = wolverine_next_step[1]
                new_state = (new_m_x, new_m_y, new_w_x, new_w_y, new_j_x, new_j_y)
                if policy[new_state] == 'End':
                    print('End State: ')
                    total_reward = mdp.Reward(new_state)
                    if total_reward == 20:
                        wolve += 1
                    elif total_reward == -20:
                        magneto += 1
                    elif total_reward == -15:
                        jean += 1
                grid = make_grid(new_state, mdp)
                Print(grid)
                state = copy.deepcopy(new_state)
    print()
    print("Wolverine wins : ", wolve)
    print("magneto Wins : ", magneto)
    print("All on same position : ", jean)

def play_game_for_Active_magneto(mdp, policy):
    print("--------------------Playing Game for Active magneto---------------------")
    wolve = 0
    jean = 0
    magneto = 0
    for i in range(15):
        state = mdp.start_state()
        print('Start State: ')
        grid = make_grid(state, mdp)
        Print(grid)
        if policy[state] == 'End':
            print('End State: ')
            total_reward = mdp.Reward(state)
            if total_reward == 20:
                wolve += 1
            elif total_reward == -20:
                magneto += 1
            elif total_reward == -15:
                jean += 1
            Print(grid)
        else:
            while policy[state] != 'End':
                (new_m_x, new_m_y, new_j_x, new_j_y) = mdp.new_position_for_intelligent_magneto(state)
                wolverine_next_step = policy[state]
                new_w_x = wolverine_next_step[0]
                new_w_y = wolverine_next_step[1]
                new_state = (new_m_x, new_m_y, new_w_x, new_w_y, new_j_x, new_j_y)
                if policy[new_state] == 'End':
                    print('End State: ')
                    total_reward = mdp.Reward(new_state)
                    if total_reward == 20:
                        wolve += 1
                    elif total_reward == -20:
                        magneto += 1
                    elif total_reward == -15:
                        jean += 1
                grid = make_grid(new_state, mdp)
                Print(grid)
                state = copy.deepcopy(new_state)
    print()
    print("Wolverine wins : ", wolve)
    print("magneto Wins : ", magneto)
    print("All on same position : ", jean)
mdp = wolverine_MDP()
policy=policy_iteration_for_lazy_magneto(mdp)
play_game_for_lazy_magneto(mdp,policy)
mdp1=wolverine_MDP()
policy1=policy_Iteration_for_active_magneto(mdp1)
play_game_for_Active_magneto(mdp1, policy1)
# Confusion Matrix
from basars_addons.metrics.confusion_matrix import ThresholdRecall
from basars_addons.metrics.confusion_matrix import ThresholdPrecision
# IoU
from basars_addons.metrics.intersection_over_union import ThresholdBinaryIoU

#######################################################################
#
# Note: This file is a generated file--do not edit it directly!
# Instead make changes to the appropriate content in the database or
# write up a bug here:
#
# https://bugzilla.mozilla.org/enter_bug.cgi?product=support.mozilla.org
#
# with the specific lines that are problematic and why.
#
# You can generate this file by running:
#
# ./manage.py extract_db
#
#######################################################################
from django.utils.translation import pgettext
pgettext("DB: kbadge.Badge.title", """2021 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2021.""",
)
pgettext("DB: kbadge.Badge.title", """2020 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2020.""",
)
pgettext("DB: kbadge.Badge.title", """2019 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2019.""",
)
pgettext("DB: kbadge.Badge.title", """2020 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 20 support forum replies during 2020.""",
)
pgettext("DB: kbadge.Badge.title", """2019 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 20 support forum replies during 2019.""",
)
pgettext("DB: kbadge.Badge.title", """2020 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 50 Army of Awesome tweets during 2020.""",
)
pgettext("DB: kbadge.Badge.title", """2019 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 50 Army of Awesome tweets during 2019.""",
)
pgettext("DB: kbadge.Badge.title", """2020 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2020.""",
)
pgettext("DB: kbadge.Badge.title", """2019 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2019.""",
)
pgettext("DB: kbadge.Badge.title", """2018 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2018.""",
)
pgettext("DB: kbadge.Badge.title", """2018 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2018.""",
)
pgettext("DB: kbadge.Badge.title", """2018 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 50 Army of Awesome tweets during 2018.""",
)
pgettext("DB: kbadge.Badge.title", """2018 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 20 support forum replies during 2018.""",
)
pgettext("DB: kbadge.Badge.title", """2017 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2017.""",
)
pgettext("DB: kbadge.Badge.title", """2017 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2017.""",
)
pgettext("DB: kbadge.Badge.title", """2017 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 50 Army of Awesome tweets during 2017.""",
)
pgettext("DB: kbadge.Badge.title", """2017 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 20 support forum replies during 2017.""",
)
pgettext("DB: kbadge.Badge.title", """2016 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2016.""",
)
pgettext("DB: kbadge.Badge.title", """2016 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 30 support forum replies during 2016.""",
)
pgettext("DB: kbadge.Badge.title", """2016 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2016.""",
)
pgettext("DB: kbadge.Badge.title", """2016 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 50 Army of Awesome tweets during 2016.""",
)
pgettext("DB: kbadge.Badge.title", """2015 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 30 support forum replies during 2015.""",
)
pgettext("DB: kbadge.Badge.title", """2015 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 50 Army of Awesome tweets during 2015.""",
)
pgettext("DB: kbadge.Badge.title", """2015 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved English edits during 2015.""",
)
pgettext("DB: kbadge.Badge.title", """2015 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2015.""",
)
pgettext("DB: kbadge.Badge.title", """2014 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Army of Awesome 2014 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2014; in this case: 50 Army of Awesome tweets.
Congrats to all SUMO Army of Awesome 2014 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2014 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO L10n 2014 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2014; in this case: 10 approved translation edits of the SUMO Knowledge Base.
Congrats to all SUMO L10n 2014 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2014 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO KB 2014 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2014; in this case: 10 approved edits of the English SUMO Knowledge Base.
Congrats to all SUMO KB 2014 badge earners for advancing the Mozilla Mission""",
)
pgettext("DB: kbadge.Badge.title", """2014 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Forum 2014 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2014 in this case: 30 replies in the English SUMO Forum.
Congrats to all SUMO Forum 2014 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """Firefox 29 Launch Team""")
pgettext(
"DB: kbadge.Badge.description",
"""Awarded to support contributors who contributed (KB article documentation, answering Forum Questions, localizing KB article documentation, tweets, etc) to the launch of Firefox 29, thanks!
Firefox 29 features:
1. Firefox Desktop: Australis new look and feel
AND Firefox Accounts based sync
2. Firefox for Android: Firefox Accounts based sync
MOAR:
https://sumo.etherpad.mozilla.org/sumo-australis-badges""",
)
pgettext("DB: kbadge.Badge.title", """2008 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2008.""",
)
pgettext("DB: kbadge.Badge.title", """2009 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""This badge is awarded to contributors with 10 approved translations edits during 2009.""",
)
pgettext("DB: kbadge.Badge.title", """2012 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Army of Awesome 2012 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2012; in this case: 50 Army of Awesome tweets.
Congrats to all SUMO Army of Awesome 2012 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2013 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Army of Awesome 2013 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2013; in this case: 50 Army of Awesome tweets.
Congrats to all SUMO Army of Awesome 2013 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2010 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Army of Awesome 2010 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2010; in this case: 50 Army of Awesome tweets.
Congrats to all SUMO Army of Awesome 2010 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2011 Army of Awesome Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Army of Awesome 2011 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2011; in this case: 50 Army of Awesome tweets.
Congrats to all SUMO Army of Awesome 2011 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2012 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Forum 2012 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2012 in this case: 30 replies in the English SUMO Forum.
Congrats to all SUMO Forum 2012 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2010 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO KB 2010 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2010; in this case: 10 approved edits of the English SUMO Knowledge Base.
Congrats to all SUMO KB 2010 badge earners for advancing the Mozilla Mission""",
)
pgettext("DB: kbadge.Badge.title", """2010 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO L10n 2010 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2010; in this case: 10 approved translation edits of the SUMO Knowledge Base.
Congrats to all SUMO L10n 2010 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2010 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Forum 2010 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2010 in this case: 30 replies in the English SUMO Forum.
Congrats to all SUMO Forum 2010 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2011 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO KB 2011 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2011; in this case: 10 approved edits of the English SUMO Knowledge Base.
Congrats to all SUMO KB 2011 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2011 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO L10n 2011 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2011; in this case: 10 approved translation edits of the SUMO Knowledge Base.
Congrats to all SUMO L10n 2011 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2011 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Forum 2011 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2011 in this case: 30 replies in the English SUMO Forum.
Congrats to all SUMO Forum 2011 badge earners for advancing the Mozilla Mission!
""",
)
pgettext("DB: kbadge.Badge.title", """2012 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO KB 2012 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2012; in this case: 10 approved edits of the English SUMO Knowledge Base.
Congrats to all SUMO KB 2012 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2013 KB Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO KB 2013 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2013 in this case: 10 approved edits of the English SUMO Knowledge Base.
Congrats to all SUMO KB 2013 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2013 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO L10n 2013 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2013 in this case: 10 approved translation edits of the SUMO Knowledge Base.
Congrats to all SUMO L10n 2013 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """2013 Support Forum Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO Forum 2013 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2013 in this case: 30 replies in the English SUMO Forum.
Congrats to all SUMO Forum 2013 badge earners for advancing the Mozilla Mission""",
)
pgettext("DB: kbadge.Badge.title", """2012 L10n Badge""")
pgettext(
"DB: kbadge.Badge.description",
"""The SUMO L10n 2012 mini-badge is part of the SUMO series. It represents contribution to SUMO in 2012; in this case: 10 approved translation edits of the SUMO Knowledge Base.
Congrats to all SUMO L10n 2012 badge earners for advancing the Mozilla Mission!""",
)
pgettext("DB: kbadge.Badge.title", """Kitsune Contributor""")
pgettext(
"DB: kbadge.Badge.description",
"""Badge awarded to those who have contributed to the Kitsune code base.""",
)
pgettext("DB: products.Topic.title", """Learn the Basics: get started""")
pgettext(
"DB: products.Topic.description", """Learn all you need to know to get started with Firefox."""
)
pgettext("DB: products.Topic.title", """Bookmarks and tabs""")
pgettext(
"DB: products.Topic.description",
"""Access and organize your favorite webpages easily with bookmarks and tabs""",
)
pgettext("DB: products.Topic.title", """Basic browsing""")
pgettext(
"DB: products.Topic.description",
"""Search and navigate easily with these essential features""",
)
pgettext("DB: products.Topic.title", """Import settings from other browsers""")
pgettext(
"DB: products.Topic.description",
"""Learn how to import or export your information between Firefox and another browser""",
)
pgettext("DB: products.Topic.title", """Video, audio and interactive settings""")
pgettext(
"DB: products.Topic.description",
"""Change how Firefox handles videos, animations, music and other interactive content""",
)
pgettext("DB: products.Topic.title", """How to use Firefox""")
pgettext("DB: products.Topic.description", """How to browse, search and customize your settings""")
pgettext("DB: products.Topic.title", """Download, install and migration""")
pgettext(
"DB: products.Topic.description",
"""Learn how to download Firefox on your desktop devices or move information to and from other browsers.""",
)
pgettext("DB: products.Topic.title", """Tips and tricks""")
pgettext(
"DB: products.Topic.description",
"""Go beyond the basics with these shortcuts and other tips.""",
)
pgettext("DB: products.Topic.title", """Install and update""")
pgettext("DB: products.Topic.description", """How to install Firefox and keep it up to date""")
pgettext("DB: products.Topic.title", """Display and appearance""")
pgettext(
"DB: products.Topic.description",
"""Learn how to change your toolbar, font sizes and browser colors""",
)
pgettext("DB: products.Topic.title", """Install and update""")
pgettext(
"DB: products.Topic.description", """Download or update Firefox for Windows, Mac and Linux."""
)
pgettext("DB: products.Topic.title", """Firefox Sync""")
pgettext("DB: products.Topic.description", """Firefox Sync settings""")
pgettext("DB: products.Topic.title", """Sync and save""")
pgettext("DB: products.Topic.description", """Sync information on all your devices""")
pgettext("DB: products.Topic.title", """Manage add-ons""")
pgettext(
"DB: products.Topic.description",
"""Enhance Firefox's functionality and appearance with add-ons""",
)
pgettext("DB: products.Topic.title", """Sync, share and save""")
pgettext(
"DB: products.Topic.description",
"""Sync browsing information and content across multiple devices with Firefox Accounts.""",
)
pgettext("DB: products.Topic.title", """Firefox settings""")
pgettext("DB: products.Topic.description", """Privacy and personalization""")
pgettext("DB: products.Topic.title", """Firefox Hello""")
pgettext(
"DB: products.Topic.description",
"""Have video or voice conversations using the Firefox browser""",
)
pgettext("DB: products.Topic.title", """Chat and share""")
pgettext(
"DB: products.Topic.description", """Connect on video and share pages with your network"""
)
pgettext("DB: products.Topic.title", """Personalize Firefox""")
pgettext(
"DB: products.Topic.description", """Make Firefox yours with these customization options."""
)
pgettext("DB: products.Topic.title", """Customize controls, options and add-ons""")
pgettext(
"DB: products.Topic.description",
"""Make Firefox yours by adding and managing the features that you want.""",
)
pgettext("DB: products.Topic.title", """Personalize Firefox""")
pgettext(
"DB: products.Topic.description", """Change Firefox's appearance, behavior and settings."""
)
pgettext("DB: products.Topic.title", """Privacy and security settings""")
pgettext(
"DB: products.Topic.description",
"""Learn how to keep your information safe and secure with Firefox's private browsing, password features and other security settings.""",
)
pgettext("DB: products.Topic.title", """Do more with apps""")
pgettext(
"DB: products.Topic.description",
"""Install open apps from the Marketplace to add more fun and functionality to your device""",
)
pgettext("DB: products.Topic.title", """Protect your privacy""")
pgettext(
"DB: products.Topic.description",
"""Keep your information safe from prying eyes with the latest privacy and security features.""",
)
pgettext("DB: products.Topic.title", """Get community support""")
pgettext("DB: products.Topic.description", """Get community support""")
pgettext("DB: products.Topic.title", """Manage preferences and add-ons""")
pgettext(
"DB: products.Topic.description",
"""Make Firefox yours through customization settings and add-ons""",
)
pgettext("DB: products.Topic.title", """Fix problems""")
pgettext(
"DB: products.Topic.description", """Troubleshoot slowness, crashing and error messages."""
)
pgettext(
"DB: products.Topic.title", """Fix slowness, crashing, error messages and other problems"""
)
pgettext(
"DB: products.Topic.description",
"""Fix slowness, crashing, error messages and other problems""",
)
pgettext("DB: products.Topic.title", """Advanced and experimental features""")
pgettext(
"DB: products.Topic.description",
"""Learn tips beyond the basics and try features before they're released to the public.""",
)
pgettext("DB: products.Topic.title", """Tab basics""")
pgettext("DB: products.Topic.description", """Tab basics""")
pgettext("DB: products.Topic.title", """Firefox versions and languages""")
pgettext("DB: products.Topic.description", """Firefox versions and languages""")
pgettext(
"DB: products.Topic.title", """Copy your personal information from one browser to another"""
)
pgettext(
"DB: products.Topic.description",
"""Copy your personal information from one browser to another""",
)
pgettext("DB: products.Topic.title", """Cookies and cache""")
pgettext("DB: products.Topic.description", """Control the information that Firefox saves""")
pgettext(
"DB: products.Topic.title",
"""Passwords, forms, search, and history - control what Firefox suggests""",
)
pgettext(
"DB: products.Topic.description",
"""Passwords, forms, search, and history - control what Firefox suggests""",
)
pgettext("DB: products.Topic.title", """Firefox controls and buttons""")
pgettext("DB: products.Topic.description", """Firefox controls and buttons""")
pgettext("DB: products.Topic.title", """Tab settings""")
pgettext("DB: products.Topic.description", """Tab settings""")
pgettext("DB: products.Topic.title", """Customize Firefox with add-ons, plugins, and extensions""")
pgettext(
"DB: products.Topic.description", """Customize Firefox with add-ons, plugins, and extensions"""
)
pgettext("DB: products.Topic.title", """Firefox options, preferences and settings""")
pgettext("DB: products.Topic.description", """Firefox options, preferences and settings""")
pgettext("DB: products.Topic.title", """Bookmark options""")
pgettext("DB: products.Topic.description", """Bookmark options""")
pgettext(
"DB: products.Topic.title", """Fix problems with websites (Facebook, YouTube, webmail etc.)"""
)
pgettext(
"DB: products.Topic.description",
"""Fix problems with websites (Facebook, YouTube, webmail etc.)""",
)
pgettext("DB: products.Topic.title", """Error messages: what they mean and how to fix""")
pgettext("DB: products.Topic.description", """How to troubleshoot error messages on Firefox""")
pgettext("DB: products.Topic.title", """Unblock Firefox from connecting to the Internet""")
pgettext("DB: products.Topic.description", """Unblock Firefox from connecting to the Internet""")
pgettext("DB: products.Topic.title", """Procedures to diagnose and fix problems""")
pgettext("DB: products.Topic.description", """Procedures to diagnose and fix problems""")
pgettext("DB: products.Topic.title", """Videos, sound, pictures and animations don't work""")
pgettext("DB: products.Topic.description", """Videos, sound, pictures and animations don't work""")
pgettext("DB: products.Topic.title", """Firefox is slow or stops working""")
pgettext("DB: products.Topic.description", """Slowness or hanging""")
pgettext("DB: products.Topic.title", """Firefox crashes""")
pgettext("DB: products.Topic.description", """Crashing""")
pgettext("DB: products.Topic.title", """Firefox won't save settings or remember information""")
pgettext(
"DB: products.Topic.description", """Firefox won't save settings or remember information"""
)
pgettext("DB: products.Topic.title", """Problems with add-ons, plugins or unwanted software""")
pgettext(
"DB: products.Topic.description", """Problems with add-ons, plugins or unwanted software"""
)
pgettext("DB: products.Topic.title", """Mozilla Persona""")
pgettext("DB: products.Topic.description", """Mozilla Persona""")
pgettext("DB: products.Topic.title", """Hot topics""")
pgettext("DB: products.Topic.description", """Hot topics""")
pgettext("DB: products.Topic.title", """Other""")
pgettext("DB: products.Topic.description", """Other""")
pgettext("DB: products.Topic.title", """Basic browsing""")
pgettext(
"DB: products.Topic.description",
"""Search and navigate easily with these essential features""",
)
pgettext("DB: products.Topic.title", """How to use Firefox for Android""")
pgettext("DB: products.Topic.description", """How to use Firefox for Android""")
pgettext("DB: products.Topic.title", """What's new in Firefox for Android""")
pgettext(
"DB: products.Topic.description", """See what new features are available in each release"""
)
pgettext("DB: products.Topic.title", """Install and update""")
pgettext(
"DB: products.Topic.description", """How to install and keep Firefox for Android up to date"""
)
pgettext("DB: products.Topic.title", """Save, share and sync""")
pgettext(
"DB: products.Topic.description", """Save, share and synchronize content with other devices"""
)
pgettext("DB: products.Topic.title", """Changing your settings""")
pgettext("DB: products.Topic.description", """Change Firefox's behavior.""")
pgettext("DB: products.Topic.title", """Cast to your TV""")
pgettext("DB: products.Topic.description", """Learn how to view content on another screen""")
pgettext("DB: products.Topic.title", """Popular articles""")
pgettext(
"DB: products.Topic.description", """Popular tips and solutions for Firefox for Android"""
)
pgettext("DB: products.Topic.title", """Protect your privacy""")
pgettext("DB: products.Topic.description", """Control how your information is saved or tracked""")
pgettext("DB: products.Topic.title", """Customize settings and preferences""")
pgettext(
"DB: products.Topic.description",
"""Make Firefox for Android yours with these customization options""",
)
pgettext("DB: products.Topic.title", """Do more with apps""")
pgettext(
"DB: products.Topic.description",
"""Learn to find and install open apps to add more fun and functionality to your device""",
)
pgettext(
"DB: products.Topic.title", """Fix slowness, crashing, error messages and other problems"""
)
pgettext(
"DB: products.Topic.description",
"""Fix slowness, crashing, error messages and other problems""",
)
pgettext("DB: products.Topic.title", """Get community support""")
pgettext("DB: products.Topic.description", """Get community support""")
pgettext("DB: products.Topic.title", """Learn the Basics: get started""")
pgettext(
"DB: products.Topic.description",
"""Learn all you need to know to get Firefox for Android up and running.""",
)
pgettext("DB: products.Topic.title", """Download, install and migration""")
pgettext(
"DB: products.Topic.description",
"""Learn how to install and transfer information to Firefox for Android.""",
)
pgettext("DB: products.Topic.title", """Tips and tricks""")
pgettext("DB: products.Topic.description", """Tips and tricks""")
pgettext("DB: products.Topic.title", """Use bookmarks""")
pgettext("DB: products.Topic.description", """The basics of using bookmarks""")
pgettext("DB: products.Topic.title", """Firefox Sync settings""")
pgettext("DB: products.Topic.description", """Firefox Sync settings""")
pgettext("DB: products.Topic.title", """Tab basics""")
pgettext("DB: products.Topic.description", """Tab basics""")
pgettext("DB: products.Topic.title", """Privacy and security settings""")
pgettext(
"DB: products.Topic.description",
"""Keep your information safe with Firefox for Android's privacy and security settings.""",
)
pgettext("DB: products.Topic.title", """Customize controls, options and add-ons""")
pgettext(
"DB: products.Topic.description",
"""Make Firefox for Android work the way you want through customization.""",
)
pgettext("DB: products.Topic.title", """Cookies""")
pgettext("DB: products.Topic.description", """Cookies""")
pgettext("DB: products.Topic.title", """Firefox controls and buttons""")
pgettext("DB: products.Topic.description", """Firefox controls and buttons""")
pgettext("DB: products.Topic.title", """Customize Firefox with add-ons, plugins, and extensions""")
pgettext(
"DB: products.Topic.description", """Customize Firefox with add-ons, plugins, and extensions"""
)
pgettext(
"DB: products.Topic.title", """Fix problems with websites (Facebook, YouTube, webmail etc.)"""
)
pgettext(
"DB: products.Topic.description",
"""Fix problems with websites (Facebook, YouTube, webmail etc.)""",
)
pgettext("DB: products.Topic.title", """Firefox crashes""")
pgettext("DB: products.Topic.description", """Crashing""")
pgettext("DB: products.Topic.title", """Mozilla Persona""")
pgettext("DB: products.Topic.description", """Mozilla Persona""")
pgettext("DB: products.Topic.title", """Marketplace""")
pgettext("DB: products.Topic.description", """Firefox Marketplace""")
pgettext("DB: products.Topic.title", """Other""")
pgettext("DB: products.Topic.description", """Other""")
pgettext("DB: products.Topic.title", """Install and Update""")
pgettext(
"DB: products.Topic.description",
"""Install and keep Firefox up to date on your iPad, iPhone or iPod Touch.""",
)
pgettext("DB: products.Topic.title", """Reader View and List""")
pgettext(
"DB: products.Topic.description",
"""Read and save web pages in a clutter-free, reader-friendly view""",
)
pgettext("DB: products.Topic.title", """Basic browsing""")
pgettext(
"DB: products.Topic.description",
"""How to use bookmarks, tabs and basic Firefox features on your iOS device""",
)
pgettext("DB: products.Topic.title", """History""")
pgettext("DB: products.Topic.description", """Change your history settings on Firefox for iOS""")
pgettext("DB: products.Topic.title", """How to use Firefox for iOS""")
pgettext("DB: products.Topic.description", """General usage questions""")
pgettext("DB: products.Topic.title", """What's new in Firefox for iOS""")
pgettext("DB: products.Topic.description", """See what features are available in each release.""")
pgettext("DB: products.Topic.title", """Bookmarks and tabs""")
pgettext(
"DB: products.Topic.description", """Access websites easily with bookmarks and tab features"""
)
pgettext("DB: products.Topic.title", """Search""")
pgettext("DB: products.Topic.description", """Customize your search settings in Firefox for iOS""")
pgettext("DB: products.Topic.title", """Firefox for iOS is not working as expected""")
pgettext("DB: products.Topic.description", """Troubleshoot problems with Firefox for iOS.""")
pgettext("DB: products.Topic.title", """Privacy""")
pgettext(
"DB: products.Topic.description",
"""Protect your information with Firefox's privacy settings on iOS""",
)
pgettext("DB: products.Topic.title", """Sync, save and share""")
pgettext("DB: products.Topic.description", """Share web pages on Firefox for iOS""")
pgettext("DB: products.Topic.title", """Customize preferences""")
pgettext("DB: products.Topic.description", """Customize preferences for Firefox for iOS""")
pgettext("DB: products.Topic.title", """Crashes, errors and other issues""")
pgettext("DB: products.Topic.description", """Troubleshoot error message on Firefox for iOS""")
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """Firefox OS basics""")
pgettext("DB: products.Topic.title", """Basic Features""")
pgettext(
"DB: products.Topic.description",
"""Learn the basic functionality for your Firefox OS phone. """,
)
pgettext("DB: products.Topic.title", """Download and Manage Apps""")
pgettext("DB: products.Topic.description", """Download apps from the Marketplace""")
pgettext("DB: products.Topic.title", """Date and Time""")
pgettext("DB: products.Topic.description", """Setting a date and time on your Firefox OS phone""")
pgettext("DB: products.Topic.title", """Display""")
pgettext("DB: products.Topic.description", """Customize your screen on your Firefox OS device.""")
pgettext("DB: products.Topic.title", """Install and update""")
pgettext(
"DB: products.Topic.description", """Download and install the mobile app on your device."""
)
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """Learn the basics""")
pgettext("DB: products.Topic.title", """Calling and Contacts""")
pgettext(
"DB: products.Topic.description",
"""Learn how to add and manage contacts, as well as make one-to-one or conference calls on your Firefox OS phone.""",
)
pgettext("DB: products.Topic.title", """Browsing""")
pgettext("DB: products.Topic.description", """Surf and navigate the Web on Firefox Preview""")
pgettext("DB: products.Topic.title", """How do I use Firefox Preview?""")
pgettext("DB: products.Topic.description", """Get help with using features in Firefox Preview.""")
pgettext("DB: products.Topic.title", """Manage account""")
pgettext("DB: products.Topic.description", """How to change your account settings""")
pgettext("DB: products.Topic.title", """Email and Messages""")
pgettext(
"DB: products.Topic.description",
"""Keep in touch with your contacts through email and messaging.""",
)
pgettext("DB: products.Topic.title", """Library""")
pgettext("DB: products.Topic.description", """Manage bookmarks and history""")
pgettext("DB: products.Topic.title", """Services and Subscriptions""")
pgettext("DB: products.Topic.description", """Free and premium privacy offerings""")
pgettext("DB: products.Topic.title", """Music, Photos and Video""")
pgettext(
"DB: products.Topic.description",
"""Take pictures, record videos and listen to music on your Firefox OS phone.""",
)
pgettext("DB: products.Topic.title", """Sync""")
pgettext(
"DB: products.Topic.description", """Sync your browsing information across other devices."""
)
pgettext("DB: products.Topic.title", """Troubleshoot""")
pgettext("DB: products.Topic.description", """Fix problems with Firefox Accounts""")
pgettext("DB: products.Topic.title", """Marketplace""")
pgettext(
"DB: products.Topic.description",
"""How to download, manage and use your favorite apps on your Firefox OS phone.""",
)
pgettext("DB: products.Topic.title", """Privacy and security""")
pgettext(
"DB: products.Topic.description",
"""Keep your information safe with Firefox OS locks, privacy features and more.""",
)
pgettext("DB: products.Topic.title", """Privacy and security""")
pgettext("DB: products.Topic.description", """Protect your privacy on Firefox Preview.""")
pgettext("DB: products.Topic.title", """Settings""")
pgettext(
"DB: products.Topic.description",
"""Learn how to configure the Internet connection, display and time on your Firefox OS device.""",
)
pgettext("DB: products.Topic.title", """Internet and Connections""")
pgettext(
"DB: products.Topic.description", """Learn more about Wi-Fi, Bluetooth and NFC connections."""
)
pgettext("DB: products.Topic.title", """Settings and preferences""")
pgettext("DB: products.Topic.description", """Manage themes and search settings""")
pgettext("DB: products.Topic.title", """Fix problems with Firefox Preview""")
pgettext("DB: products.Topic.description", """Troubleshoot issues with Firefox Preview""")
pgettext("DB: products.Topic.title", """Fix problems""")
pgettext(
"DB: products.Topic.description",
"""Learn how to troubleshoot issues on your Firefox OS phone.""",
)
pgettext("DB: products.Topic.title", """Advanced Settings""")
pgettext("DB: products.Topic.description", """Do more with Firefox Preview""")
pgettext("DB: products.Topic.title", """Get community support""")
pgettext("DB: products.Topic.description", """Get community support""")
pgettext("DB: products.Topic.title", """View all Firefox OS articles""")
pgettext("DB: products.Topic.description", """View a list of all Firefox OS articles""")
pgettext("DB: products.Topic.title", """Working with messages""")
pgettext("DB: products.Topic.description", """Firefox OS SMS & email""")
pgettext("DB: products.Topic.title", """Procedures to diagnose and fix problems""")
pgettext("DB: products.Topic.description", """Procedures to diagnose and fix problems""")
pgettext("DB: products.Topic.title", """Mozilla Persona""")
pgettext("DB: products.Topic.description", """Mozilla Persona""")
pgettext("DB: products.Topic.title", """Hot topics""")
pgettext("DB: products.Topic.description", """Hot topics""")
pgettext("DB: products.Topic.title", """Technical""")
pgettext(
"DB: products.Topic.description",
"""Find solutions for how to use the Firefox Private Network VPN""",
)
pgettext("DB: products.Topic.title", """Accounts""")
pgettext("DB: products.Topic.description", """Find solutions on managing your account""")
pgettext("DB: products.Topic.title", """Payments""")
pgettext("DB: products.Topic.description", """Manage your payment and subscription""")
pgettext("DB: products.Topic.title", """Troubleshooting""")
pgettext("DB: products.Topic.description", """Fix problems with Firefox Private Network VPN""")
pgettext("DB: products.Topic.title", """Firefox for Fire TV""")
pgettext("DB: products.Topic.description", """Browser for the Amazon Fire TV.""")
pgettext("DB: products.Topic.title", """Firefox for Echo Show""")
pgettext("DB: products.Topic.description", """Browser for the Amazon Echo Show""")
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """Get Started with Firefox for Fire TV""")
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """Basics for using Firefox Private Network.""")
pgettext("DB: products.Topic.title", """Fix problems""")
pgettext("DB: products.Topic.description", """Troubleshoot problems with Firefox Fire TV""")
pgettext("DB: products.Topic.title", """Manage account and settings""")
pgettext("DB: products.Topic.description", """Change account and settings for Private Network.""")
pgettext("DB: products.Topic.title", """Fix problems""")
pgettext("DB: products.Topic.description", """Troubleshoot issues for Private Network""")
pgettext("DB: products.Topic.title", """Popcorn Maker""")
pgettext(
"DB: products.Topic.description",
"""Learn how to remix web video, audio and images into mashups that you can embed on other websites. """,
)
pgettext("DB: products.Topic.title", """Webmaker for Android""")
pgettext("DB: products.Topic.description", """Get help with the Webmaker app for Android.""")
pgettext("DB: products.Topic.title", """Intro to Open Badges""")
pgettext("DB: products.Topic.description", """Learn the basic about Open Badges""")
pgettext("DB: products.Topic.title", """Release notes""")
pgettext(
"DB: products.Topic.description", """Where to find release notes and upcoming features."""
)
pgettext("DB: products.Topic.title", """Windows""")
pgettext("DB: products.Topic.description", """Deploying Firefox on Windows computers.""")
pgettext("DB: products.Topic.title", """Manage certificates""")
pgettext(
"DB: products.Topic.description", """Set up certificates on Firefox for your organization."""
)
pgettext("DB: products.Topic.title", """Thimble""")
pgettext(
"DB: products.Topic.description",
"""Learn how to create and share your own webpages quickly and easily.""",
)
pgettext("DB: products.Topic.title", """BadgeKit""")
pgettext("DB: products.Topic.description", """Learn how to create, assess and issue badges""")
pgettext("DB: products.Topic.title", """Customization of Firefox in an enterprise environment""")
pgettext(
"DB: products.Topic.description", """Customization of Firefox in an enterprise environment"""
)
pgettext("DB: products.Topic.title", """Installation""")
pgettext("DB: products.Topic.description", """How to install Firefox for Enterprise""")
pgettext("DB: products.Topic.title", """Explore""")
pgettext("DB: products.Topic.description", """Learn about Firefox for Enterprise""")
pgettext("DB: products.Topic.title", """Mac""")
pgettext(
"DB: products.Topic.description", """Deploy Firefox on your organization's Mac computers"""
)
pgettext("DB: products.Topic.title", """Policies overview""")
pgettext(
"DB: products.Topic.description",
"""How to set up policies on Firefox for your organization.""",
)
pgettext("DB: products.Topic.title", """X-Ray Goggles""")
pgettext(
"DB: products.Topic.description", """Learn how to inspect the code behind every webpage."""
)
pgettext("DB: products.Topic.title", """Get Involved""")
pgettext("DB: products.Topic.description", """Help the Open Badges community""")
pgettext("DB: products.Topic.title", """Deploy""")
pgettext(
"DB: products.Topic.description", """Deployment of Firefox in an enterprise environment"""
)
pgettext("DB: products.Topic.title", """Manage updates, policies & customization""")
pgettext("DB: products.Topic.description", """Policies for Firefox for Enterprise""")
pgettext("DB: products.Topic.title", """Autoconfiguration""")
pgettext("DB: products.Topic.description", """How to configure Firefox for Enterprise""")
pgettext("DB: products.Topic.title", """Linux""")
pgettext(
"DB: products.Topic.description", """Deploy Firefox on your organization's Linux machines."""
)
pgettext("DB: products.Topic.title", """Manage settings via policy""")
pgettext("DB: products.Topic.description", """Change Firefox's settings using policies.""")
pgettext("DB: products.Topic.title", """Get the most from webmaker.org""")
pgettext("DB: products.Topic.description", """Help or get help on a Webmaker project.""")
pgettext("DB: products.Topic.title", """Earn Badges""")
pgettext(
"DB: products.Topic.description", """Earn Badges for the skills you learn online and offline"""
)
pgettext("DB: products.Topic.title", """Manage add-ons""")
pgettext(
"DB: products.Topic.description", """Working with add-ons on Firefox for your organization."""
)
pgettext("DB: products.Topic.title", """Events and help for Mentors""")
pgettext(
"DB: products.Topic.description",
"""Help teach digital skills and share creative ways of teaching technology.""",
)
pgettext("DB: products.Topic.title", """Issue Badges""")
pgettext(
"DB: products.Topic.description",
"""Issue digital badges to acknowledge new skills and achievements""",
)
pgettext("DB: products.Topic.title", """Display Badges""")
pgettext(
"DB: products.Topic.description",
"""Display your digital badges on your social networks, job sites and your own website.""",
)
pgettext("DB: products.Topic.title", """Knowledge Base""")
pgettext("DB: products.Topic.description", """Windows 8 Touch support articles""")
pgettext("DB: products.Topic.title", """Pocket Basics""")
pgettext("DB: products.Topic.description", """New to Pocket? Start here.""")
pgettext("DB: products.Topic.title", """Install and set up""")
pgettext("DB: products.Topic.description", """Sync your logins across Firefox and your apps.""")
pgettext("DB: products.Topic.title", """About Data Sharing""")
pgettext(
"DB: products.Topic.description",
"""In order to process or provide our products and services to you, we share your information with the following business partners. These entities are contractually obligated to handle the data in ways that are approved by Mozilla.""",
)
pgettext("DB: products.Topic.title", """Pocket for Mobile""")
pgettext(
"DB: products.Topic.description",
"""How to use Pocket on your iPhone, iPad, Android or Kobo.""",
)
pgettext("DB: products.Topic.title", """Manage settings and logins""")
pgettext(
"DB: products.Topic.description", """Setting up your device to work with Firefox Lockwise"""
)
pgettext("DB: products.Topic.title", """Managing Your Data""")
pgettext(
"DB: products.Topic.description",
"""Learn how to manage your data (including deleting) for specific products or services.""",
)
pgettext("DB: products.Topic.title", """Pocket for your Computer""")
pgettext("DB: products.Topic.description", """Using Pocket on the Web.""")
pgettext("DB: products.Topic.title", """Fix problems""")
pgettext("DB: products.Topic.description", """Troubleshoot issues with Firefox Lockwise""")
pgettext("DB: products.Topic.title", """Sensible Settings""")
pgettext(
"DB: products.Topic.description",
"""Give our users actionable and informed choices by informing and educating at the point of collection and providing a choice to opt-out whenever possible. """,
)
pgettext("DB: products.Topic.title", """Advanced""")
pgettext("DB: products.Topic.description", """Information for Developers and Beta users.""")
pgettext("DB: products.Topic.title", """Defense in Depth""")
pgettext(
"DB: products.Topic.description",
"""Make privacy a key factor in selecting and interacting with partners. """,
)
pgettext("DB: products.Topic.title", """How does it work?""")
pgettext(
"DB: products.Topic.description", """Basics to get started with Firefox for Windows 8 Touch."""
)
pgettext("DB: products.Topic.title", """Settings""")
pgettext(
"DB: products.Topic.description",
"""How to configure and customize Firefox for Windows 8 Touch.""",
)
pgettext("DB: products.Topic.title", """Problems with websites""")
pgettext(
"DB: products.Topic.description",
"""Problems with websites that don't work well in Firefox for Windows 8 Touch.""",
)
pgettext("DB: products.Topic.title", """Other""")
pgettext("DB: products.Topic.description", """Other questions with Firefox for Windows 8 Touch.""")
pgettext("DB: products.Topic.title", """General contribution""")
pgettext(
"DB: products.Topic.description",
"""Topic for any KB articles related to the contribution in general""",
)
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """How to use Firefox Reality""")
pgettext("DB: products.Topic.title", """Forum Support""")
pgettext(
"DB: products.Topic.description",
"""Topic for any KB articles related to the Forum Support contribution""",
)
pgettext("DB: products.Topic.title", """Social Support""")
pgettext(
"DB: products.Topic.description",
"""Topic for any KB articles related to the Social Support Program""",
)
pgettext("DB: products.Topic.title", """Localization""")
pgettext(
"DB: products.Topic.description",
"""Topic for any KB articles related to the Localization contribution""",
)
pgettext("DB: products.Topic.title", """KB Contribution""")
pgettext(
"DB: products.Topic.description",
"""Topic for any KB articles related to the KB articles contribution""",
)
pgettext("DB: products.Topic.title", """Respond tool""")
pgettext(
"DB: products.Topic.description",
"""Topic for any KB articles related to the Respond Tool contribution""",
)
pgettext("DB: products.Topic.title", """Troubleshooting""")
pgettext("DB: products.Topic.description", """Fix problems with Firefox Reality""")
pgettext("DB: products.Topic.title", """[Obsolete] Mozilla Persona""")
pgettext("DB: products.Topic.description", """Mozilla Persona""")
pgettext("DB: products.Topic.title", """[Obsolete] Hot topics""")
pgettext("DB: products.Topic.description", """Hot topics""")
pgettext("DB: products.Topic.title", """Get Started""")
pgettext("DB: products.Topic.description", """Klar verwenden""")
pgettext("DB: products.Topic.title", """Firefox Klar for iOS""")
pgettext("DB: products.Topic.description", """Privacy browser for iOS""")
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """Everything you need to know to use Firefox Lite.""")
pgettext("DB: products.Topic.title", """Features""")
pgettext("DB: products.Topic.description", """Getting started with Hubs""")
pgettext("DB: products.Topic.title", """Firefox Klar for Android""")
pgettext("DB: products.Topic.description", """Privacy browser for Android""")
pgettext("DB: products.Topic.title", """Preferences""")
pgettext("DB: products.Topic.description", """Customize Firefox Lite to your desired settings""")
pgettext("DB: products.Topic.title", """Controls""")
pgettext("DB: products.Topic.description", """How to navigate Hubs""")
pgettext("DB: products.Topic.title", """Fix problems""")
pgettext("DB: products.Topic.description", """Troubleshoot problems with Firefox Lite.""")
pgettext("DB: products.Topic.title", """Moderation""")
pgettext("DB: products.Topic.description", """Tools for making Hubs a good experience for all.""")
pgettext("DB: products.Topic.title", """Spoke""")
pgettext("DB: products.Topic.description", """Build scenes with Spoke""")
pgettext("DB: products.Topic.title", """Creators""")
pgettext("DB: products.Topic.description", """Advanced Hubs customization for creators""")
pgettext("DB: products.Topic.title", """Firefox Focus for iOS""")
pgettext("DB: products.Topic.description", """Firefox Focus for iOS""")
pgettext("DB: products.Topic.title", """Firefox Focus for Android""")
pgettext("DB: products.Topic.description", """Privacy browser for Android""")
pgettext("DB: products.Topic.title", """Get started""")
pgettext("DB: products.Topic.description", """Learn the basics about ScreenshotGo""")
pgettext("DB: products.Topic.title", """Learn the Basics. Get Started""")
pgettext("DB: products.Topic.description", """Learn the Basics. Get Started""")
pgettext("DB: products.Topic.title", """Tips and tricks""")
pgettext("DB: products.Topic.description", """Learn tips and shortcuts to help you work faster""")
pgettext("DB: products.Topic.title", """Set up email""")
pgettext(
"DB: products.Topic.description", """Add and configure your email accounts on Thunderbird"""
)
pgettext("DB: products.Topic.title", """Install, Migrate and Update""")
pgettext("DB: products.Topic.description", """How to install and keep Thunderbird up to date""")
pgettext("DB: products.Topic.title", """Read, send and organize emails""")
pgettext("DB: products.Topic.description", """Learn how to manage your email messages""")
pgettext("DB: products.Topic.title", """Emails""")
pgettext(
"DB: products.Topic.description", """Learn to set up accounts, read, send and manage emails"""
)
pgettext("DB: products.Topic.title", """News Feeds (RSS), Blogs and Social""")
pgettext(
"DB: products.Topic.description",
"""Stay up to date with news feeds, blogs and social features""",
)
pgettext("DB: products.Topic.title", """Contacts""")
pgettext("DB: products.Topic.description", """How to use the address book on Thunderbird""")
pgettext("DB: products.Topic.title", """Calendar""")
pgettext("DB: products.Topic.description", """Related to the Lightning add-on for Calendar""")
pgettext("DB: products.Topic.title", """Customize controls, options and add-ons""")
pgettext("DB: products.Topic.description", """Customize controls, options and add-ons""")
pgettext("DB: products.Topic.title", """Thunderbird versions and languages""")
pgettext("DB: products.Topic.description", """Thunderbird versions and languages""")
pgettext("DB: products.Topic.title", """Passwords, forms and search""")
pgettext("DB: products.Topic.description", """Passwords, forms and search""")
pgettext("DB: products.Topic.title", """Thunderbird controls and buttons """)
pgettext(
"DB: products.Topic.description", """Learn all about Thunderbird controls and functionality."""
)
pgettext("DB: products.Topic.title", """Fix problems with email providers (gmail, Yahoo, etc.) """)
pgettext(
"DB: products.Topic.description",
"""Learn how to fix problems with Gmail, Yahoo and other email providers""",
)
pgettext("DB: products.Topic.title", """Download, install and migration""")
pgettext("DB: products.Topic.description", """Download, install and Migration""")
pgettext(
"DB: products.Topic.title",
"""Copy your personal information from one Thunderbird to another""",
)
pgettext(
"DB: products.Topic.description",
"""Copy your personal information from one Thunderbird to another""",
)
pgettext("DB: products.Topic.title", """Tab settings""")
pgettext("DB: products.Topic.description", """Tab settings""")
pgettext("DB: products.Topic.title", """Error messages: what they mean and how to fix""")
pgettext("DB: products.Topic.description", """Error messages: what they mean and how to fix""")
pgettext("DB: products.Topic.title", """Privacy and security settings""")
pgettext(
"DB: products.Topic.description",
"""Keep your information safe with password and security settings""",
)
pgettext(
"DB: products.Topic.title", """Customize Thunderbird with add-ons, plugins, and extensions"""
)
pgettext(
"DB: products.Topic.description",
"""Customize Thunderbird with add-ons, plugins, and extensions""",
)
pgettext("DB: products.Topic.title", """Unblock Thunderbird from connecting to the Internet""")
pgettext(
"DB: products.Topic.description", """Unblock Thunderbird from connecting to the Internet"""
)
pgettext("DB: products.Topic.title", """Thunderbird options, preferences and settings """)
pgettext("DB: products.Topic.description", """Thunderbird options, preferences and settings """)
pgettext("DB: products.Topic.title", """Procedures to diagnose and fix problems""")
pgettext("DB: products.Topic.description", """Procedures to diagnose and fix problems""")
pgettext(
"DB: products.Topic.title", """Fix slowness, crashing, error messages and other problems"""
)
pgettext("DB: products.Topic.description", """Troubleshoot error messages on Thunderbird.""")
pgettext("DB: products.Topic.title", """Thunderbird is slow or stops working""")
pgettext("DB: products.Topic.description", """Thunderbird is slow or stops working""")
pgettext("DB: products.Topic.title", """Thunderbird crashes""")
pgettext("DB: products.Topic.description", """Thunderbird crashes""")
pgettext("DB: products.Topic.title", """Get community support""")
pgettext("DB: products.Topic.description", """Get community support""")
pgettext("DB: products.Topic.title", """Thunderbird won't save settings or remember information""")
pgettext(
"DB: products.Topic.description", """Thunderbird won't save settings or remember information"""
)
pgettext("DB: products.Topic.title", """Problems with add-ons, plugins or unwanted software""")
pgettext(
"DB: products.Topic.description", """Problems with add-ons, plugins or unwanted software"""
)
pgettext("DB: products.Topic.title", """How To""")
pgettext(
"DB: products.Topic.description",
"""Articles that tell you how you can do more with Thunderbird""",
)
pgettext("DB: products.Topic.title", """Other""")
pgettext("DB: products.Topic.description", """Other""")
pgettext("DB: products.Product.title", """Firefox""")
pgettext("DB: products.Product.description", """Web browser for Windows, Mac and Linux""")
pgettext("DB: products.Product.title", """Firefox for Android""")
pgettext("DB: products.Product.description", """Web browser for Android smartphones and tablets""")
pgettext("DB: products.Product.title", """Firefox for iOS""")
pgettext("DB: products.Product.description", """Firefox for iPhone, iPad and iPod touch devices""")
pgettext("DB: products.Product.title", """Firefox Accounts""")
pgettext("DB: products.Product.description", """Privacy-first products for desktop and mobile""")
pgettext("DB: products.Product.title", """Firefox OS""")
pgettext("DB: products.Product.description", """Mobile OS for smartphones""")
pgettext("DB: products.Product.title", """Firefox Preview""")
pgettext(
"DB: products.Product.description",
"""Early version of an experimental Firefox browser for Android.""",
)
pgettext("DB: products.Product.title", """Mozilla VPN""")
pgettext("DB: products.Product.description", """VPN for Windows 10, Android, and iOS devices""")
pgettext("DB: products.Product.title", """Firefox for Amazon Devices""")
pgettext("DB: products.Product.description", """Browser for Amazon devices""")
pgettext("DB: products.Product.title", """Firefox for Fire TV""")
pgettext("DB: products.Product.description", """Browser for Amazon Fire TV""")
pgettext("DB: products.Product.title", """Firefox Private Network Browser-level protection""")
pgettext("DB: products.Product.description", """Browse securely on public Wi-Fi""")
pgettext("DB: products.Product.title", """Firefox for Enterprise""")
pgettext("DB: products.Product.description", """Firefox Quantum for businesses""")
pgettext("DB: products.Product.title", """Open Badges""")
pgettext(
"DB: products.Product.description",
"""A new online standard to recognize and verify learning""",
)
pgettext("DB: products.Product.title", """Webmaker""")
pgettext(
"DB: products.Product.description",
"""Webmaker and other tools for teaching and learning the Web""",
)
pgettext("DB: products.Product.title", """Firefox for Android (ESR)""")
pgettext(
"DB: products.Product.description",
"""Older version of Firefox for Android (no longer supported)""",
)
pgettext("DB: products.Product.title", """Firefox for Windows 8 Touch""")
pgettext("DB: products.Product.description", """Firefox for Windows 8 touch devices""")
pgettext("DB: products.Product.title", """Firefox Lockwise""")
pgettext(
"DB: products.Product.description",
"""Mobile app that gives you access to passwords you've saved to Firefox.""",
)
pgettext("DB: products.Product.title", """Pocket""")
pgettext("DB: products.Product.description", """Discover and save stories for later""")
pgettext("DB: products.Product.title", """Privacy and Security""")
pgettext(
"DB: products.Product.description",
"""Learn more about Mozilla's privacy and security practices.""",
)
pgettext("DB: products.Product.title", """Contributors""")
pgettext("DB: products.Product.description", """Contributor articles""")
pgettext("DB: products.Product.title", """Firefox Reality""")
pgettext("DB: products.Product.description", """Web browser for virtual reality headsets""")
pgettext("DB: products.Product.title", """Firefox Send""")
pgettext("DB: products.Product.description", """An app for sending files to anyone.""")
pgettext("DB: products.Product.title", """Firefox Klar""")
pgettext("DB: products.Product.description", """Was ist Firefox Klar?""")
pgettext("DB: products.Product.title", """Firefox Lite""")
pgettext(
"DB: products.Product.description",
"""Mobile browser for Indonesia, India, The Philippines, and Thailand""",
)
pgettext("DB: products.Product.title", """Hubs""")
pgettext(
"DB: products.Product.description", """Social virtual reality for headsets and browsers"""
)
pgettext("DB: products.Product.title", """Firefox Focus""")
pgettext("DB: products.Product.description", """Automatic privacy browser and content blocker""")
pgettext("DB: products.Product.title", """Firefox ScreenshotGo""")
pgettext("DB: products.Product.description", """Screenshot app for mobile""")
pgettext("DB: products.Product.title", """Thunderbird""")
pgettext("DB: products.Product.description", """Email software for Windows, Mac and Linux""")
# This is a karma title.
pgettext("DB: karma.Title.name", """Administrator""")
# This is a karma title.
pgettext("DB: karma.Title.name", """Buddy of the Month! (10/2015)""")
# This is a karma title.
pgettext("DB: karma.Title.name", """Locale Leader""")
# This is a karma title.
pgettext("DB: karma.Title.name", """Moderator""")
# This is a karma title.
pgettext("DB: karma.Title.name", """SUMO Warrior""")
# This is a karma title.
pgettext("DB: karma.Title.name", """Top 10 Contributor""")
# This is a karma title.
pgettext("DB: karma.Title.name", """Top 25 Contributor""")
| 50.299226 | 239 | 0.715495 | 7,527 | 58,498 | 5.560383 | 0.080776 | 0.149332 | 0.22364 | 0.256087 | 0.845865 | 0.786013 | 0.702769 | 0.642558 | 0.553628 | 0.519126 | 0 | 0.013687 | 0.121987 | 58,498 | 1,162 | 240 | 50.342513 | 0.801176 | 0.008633 | 0 | 0.379945 | 1 | 0 | 0.679879 | 0.195206 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.00184 | 0.00184 | 0 | 0.00184 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d5b2647c7cdabf6f8f8eb9b23907e5b11829b3c0 | 1,002 | py | Python | jupyterlab2pymolpysnips/Jupyter/kernel.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | [
"MIT"
] | null | null | null | jupyterlab2pymolpysnips/Jupyter/kernel.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | [
"MIT"
] | null | null | null | jupyterlab2pymolpysnips/Jupyter/kernel.py | MooersLab/pymolpysnips | 50a89c85adf8006d85c1d6cd3f8aad7e440a0b92 | [
"MIT"
] | null | null | null | """
cmd.do('framerule=2pt,')
cmd.do('framesep=2mm,')
cmd.do('framesep=2mm,')
cmd.do('breaklines=True,')
cmd.do('baselinestretch=1.2')
cmd.do(']{bash}')
cmd.do('{')
cmd.do(' "argv": [')
cmd.do(' "/Applications/PyMOL.app/Contents/bin/python",')
cmd.do(' "-m",')
cmd.do(' "ipykernel_launcher",')
cmd.do(' "-f",')
cmd.do(' "{connection_file}"')
cmd.do(' ],')
cmd.do(' "display_name": "pymol.python",')
cmd.do(' "language": "python"')
cmd.do('}')
"""
cmd.do('framerule=2pt,')
cmd.do('framesep=2mm,')
cmd.do('framesep=2mm,')
cmd.do('breaklines=True,')
cmd.do('baselinestretch=1.2')
cmd.do(']{bash}')
cmd.do('{')
cmd.do(' "argv": [')
cmd.do(' "/Applications/PyMOL.app/Contents/bin/python",')
cmd.do(' "-m",')
cmd.do(' "ipykernel_launcher",')
cmd.do(' "-f",')
cmd.do(' "{connection_file}"')
cmd.do(' ],')
cmd.do(' "display_name": "pymol.python",')
cmd.do(' "language": "python"')
cmd.do('}')
# Description: A kernel.json file for running the PyMOL Python interpreter in the Jupyter notebook.
# Source: placeHolder
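The `cmd.do` lines above emit a Jupyter kernel spec one line at a time. As a sketch, the assembled spec can be validated by round-tripping it through Python's `json` module; the interpreter path below is an assumption specific to a macOS PyMOL install, mirroring the lines above.

```python
import json

# Hypothetical kernel spec mirroring the cmd.do lines above; the
# interpreter path is an assumption for a macOS PyMOL bundle.
kernel_spec = {
    "argv": [
        "/Applications/PyMOL.app/Contents/bin/python",
        "-m",
        "ipykernel_launcher",
        "-f",
        "{connection_file}",
    ],
    "display_name": "pymol.python",
    "language": "python",
}

# Round-trip through JSON to confirm the spec is well-formed.
text = json.dumps(kernel_spec, indent=1)
parsed = json.loads(text)
assert parsed["argv"][-1] == "{connection_file}"
```

In practice this dict would be written to `kernel.json` inside a kernel directory that `jupyter kernelspec install` can pick up.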
| 25.05 | 96 | 0.621756 | 144 | 1,002 | 4.284722 | 0.284722 | 0.267423 | 0.06483 | 0.081037 | 0.839546 | 0.839546 | 0.839546 | 0.839546 | 0.839546 | 0.839546 | 0 | 0.010977 | 0.090818 | 1,002 | 39 | 97 | 25.692308 | 0.666301 | 0.557884 | 0 | 0.125 | 0 | 0 | 0.584296 | 0.154734 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d5e91a263d0d6cd610b4a59794b720aa8f5d4998 | 2,609 | py | Python | _app/cantiin/migrations/0002_auto_20210630_0108.py | OmarThinks/cantiin_django | 3c80ba0aa1b6a92d232b1147e217a0d6ac8fc834 | [
"MIT"
] | 1 | 2021-08-17T21:27:32.000Z | 2021-08-17T21:27:32.000Z | _app/cantiin/migrations/0002_auto_20210630_0108.py | OmarThinks/cantiin_django | 3c80ba0aa1b6a92d232b1147e217a0d6ac8fc834 | [
"MIT"
] | null | null | null | _app/cantiin/migrations/0002_auto_20210630_0108.py | OmarThinks/cantiin_django | 3c80ba0aa1b6a92d232b1147e217a0d6ac8fc834 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.4 on 2021-06-29 23:08
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('cantiin', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='comment',
name='author',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='comments', to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='comment',
name='created_at',
field=models.DateTimeField(auto_now_add=True, null=True),
),
migrations.AddField(
model_name='comment',
name='product',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='comments', to='cantiin.product'),
),
migrations.AddField(
model_name='comment',
name='updated_at',
field=models.DateTimeField(auto_now=True, null=True),
),
migrations.AddField(
model_name='order',
name='author',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='orders', to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='order',
name='created_at',
field=models.DateTimeField(auto_now_add=True, null=True),
),
migrations.AddField(
model_name='order',
name='product',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='orders', to='cantiin.product'),
),
migrations.AddField(
model_name='order',
name='updated_at',
field=models.DateTimeField(auto_now=True, null=True),
),
migrations.AddField(
model_name='product',
name='author',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='products', to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='product',
name='created_at',
field=models.DateTimeField(auto_now_add=True, null=True),
),
migrations.AddField(
model_name='product',
name='updated_at',
field=models.DateTimeField(auto_now=True, null=True),
),
]
| 36.236111 | 146 | 0.604829 | 274 | 2,609 | 5.591241 | 0.20438 | 0.129243 | 0.165144 | 0.193864 | 0.833551 | 0.833551 | 0.786554 | 0.733029 | 0.733029 | 0.622063 | 0 | 0.010042 | 0.274818 | 2,609 | 71 | 147 | 36.746479 | 0.799683 | 0.017248 | 0 | 0.769231 | 1 | 0 | 0.096019 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046154 | 0 | 0.092308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
9100f364009b07eec1b3a6d93982067bd4fd663f | 3,648 | py | Python | tests/test_fn_exec.py | Neoteroi/rodi | ce9877cd54225fe1cf2023e96485569c2da674fb | [
"MIT"
] | 22 | 2021-01-29T20:08:34.000Z | 2022-03-20T14:38:26.000Z | tests/test_fn_exec.py | Neoteroi/rodi | ce9877cd54225fe1cf2023e96485569c2da674fb | [
"MIT"
] | 10 | 2021-01-31T10:44:07.000Z | 2022-03-27T08:27:07.000Z | tests/test_fn_exec.py | RobertoPrevato/rodi | 1bbf2c1dd52cd383dad7196b47ebfa9ede9cab43 | [
"MIT"
] | null | null | null | """Functions exec tests.
exec functions are designed to enable executing any function while injecting its parameters.
"""
import pytest
from rodi import Container, inject
class Example:
def __init__(self, repository):
self.repository = repository
class Context:
def __init__(self):
self.trace_id = "1111"
class Repository:
def __init__(self, context: Context):
self.context = context
def test_execute_function():
class Example:
def __init__(self, repository):
self.repository = repository
class Context:
def __init__(self):
self.trace_id = "1111"
@inject()
class Repository:
def __init__(self, context: Context):
self.context = context
called = False
@inject()
def fn(example, context: Context):
nonlocal called
called = True
assert isinstance(example, Example)
assert isinstance(example.repository, Repository)
assert isinstance(context, Context)
# scoped parameter:
assert context is example.repository.context
return context.trace_id
container = Container()
container.add_transient(Example)
container.add_transient(Repository)
container.add_scoped(Context)
provider = container.build_provider()
result = provider.exec(fn)
assert called
assert result == Context().trace_id
def test_executor():
called = False
@inject()
def fn(example, context: Context):
nonlocal called
called = True
assert isinstance(example, Example)
assert isinstance(example.repository, Repository)
assert isinstance(context, Context)
# scoped parameter:
assert context is example.repository.context
return context.trace_id
container = Container()
container.add_transient(Example)
container.add_transient(Repository)
container.add_scoped(Context)
provider = container.build_provider()
executor = provider.get_executor(fn)
result = executor()
assert called
assert result == Context().trace_id
def test_executor_with_given_scoped_services():
called = False
@inject()
def fn(example, context: Context):
nonlocal called
called = True
assert isinstance(example, Example)
assert isinstance(example.repository, Repository)
assert isinstance(context, Context)
# scoped parameter:
assert context is example.repository.context
return context
container = Container()
container.add_transient(Example)
container.add_transient(Repository)
container.add_scoped(Context)
provider = container.build_provider()
executor = provider.get_executor(fn)
given_context = Context()
result = executor({Context: given_context})
assert called
assert result is given_context
@pytest.mark.asyncio
async def test_async_executor():
called = False
@inject()
async def fn(example, context: Context):
nonlocal called
called = True
assert isinstance(example, Example)
assert isinstance(example.repository, Repository)
assert isinstance(context, Context)
# scoped parameter:
assert context is example.repository.context
return context.trace_id
container = Container()
container.add_transient(Example)
container.add_transient(Repository)
container.add_scoped(Context)
provider = container.build_provider()
executor = provider.get_executor(fn)
result = await executor()
assert called
assert result == Context().trace_id
| 23.535484 | 82 | 0.676809 | 379 | 3,648 | 6.345646 | 0.145119 | 0.075676 | 0.076507 | 0.031601 | 0.850312 | 0.850312 | 0.850312 | 0.850312 | 0.827859 | 0.827859 | 0 | 0.002908 | 0.245888 | 3,648 | 154 | 83 | 23.688312 | 0.87132 | 0.04852 | 0 | 0.852941 | 0 | 0 | 0.002311 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.117647 | false | 0 | 0.019608 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
910261b9a691ba4028a097fbfc36c5f953351d6d | 47,498 | py | Python | tnn/convrnn.py | neuroailab/tnn | 0d5e5dc6ab3669309e8c00c23da2928a04bc8d02 | [
"MIT"
] | 88 | 2018-03-14T15:56:54.000Z | 2022-03-22T17:19:39.000Z | tnn/convrnn.py | neuroailab/tnn | 0d5e5dc6ab3669309e8c00c23da2928a04bc8d02 | [
"MIT"
] | null | null | null | tnn/convrnn.py | neuroailab/tnn | 0d5e5dc6ab3669309e8c00c23da2928a04bc8d02 | [
"MIT"
] | 19 | 2018-07-05T00:17:26.000Z | 2021-11-15T06:22:17.000Z | '''
ConvRNN base class adapted from
https://github.com/loliverhennigh/Convolutional-LSTM-in-Tensorflow/blob/master/BasicConvLSTMCell.py
'''
import tensorflow as tf
from tensorflow.contrib.rnn import LSTMStateTuple
from tnn.cell import *
class ConvRNNCell(object):
"""Abstract object representing an Convolutional RNN cell.
"""
def __call__(self, inputs, state, scope=None):
"""Run this RNN cell on inputs, starting from the given state.
"""
raise NotImplementedError("Abstract method")
@property
def state_size(self):
"""size(s) of state(s) used by this cell.
"""
raise NotImplementedError("Abstract method")
@property
def output_size(self):
"""Integer or TensorShape: size of outputs produced by this cell."""
raise NotImplementedError("Abstract method")
def zero_state(self, batch_size, dtype):
"""Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
Returns:
          tensor of shape [batch_size x shape[0] x shape[1] x out_depth]
          filled with zeros.
"""
shape = self.shape
out_depth = self._out_depth
zeros = tf.zeros([batch_size, shape[0], shape[1], out_depth], dtype=dtype)
return zeros
class ConvBasicCell(ConvRNNCell):
"""Conv basic recurrent network cell.
"""
def __init__(self,
shape,
filter_size,
out_depth,
activation=tf.nn.tanh,
kernel_initializer=None,
bias_initializer=None):
"""Initialize the Conv Basic cell.
Args:
          shape: int tuple that is the height and width of the cell
          filter_size: int tuple that is the height and width of the filter
          out_depth: int that is the depth of the cell
activation: Activation function of the inner states.
"""
self.shape = shape
self.filter_size = filter_size
self._out_depth = out_depth
self._size = tf.TensorShape([self.shape[0], self.shape[1], self._out_depth])
self._activation = activation
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
@property
def state_size(self):
return self._size
@property
def output_size(self):
return self._size
def __call__(self, inputs, state):
"""Basic RNN cell."""
with tf.variable_scope(type(self).__name__): # "ConvBasicCell"
output = self._activation(
_conv_linear([inputs, state], self.filter_size, self._out_depth, True,
self._bias_initializer, self._kernel_initializer))
return output, output
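The update in `ConvBasicCell.__call__` above is `output = activation(conv([inputs, state]))`, with the output reused as the next state. A minimal NumPy sketch of that arithmetic, with a dense product standing in for `_conv_linear` (all names here are illustrative only):

```python
import numpy as np

def basic_step(x, h, W):
    # The output doubles as the next state, as in __call__ above; a
    # dense product stands in for the _conv_linear convolution.
    out = np.tanh(np.concatenate([x, h]) @ W)
    return out, out

rng = np.random.default_rng(2)
d = 4
out, new_state = basic_step(rng.standard_normal(d),
                            np.zeros(d),
                            rng.standard_normal((2 * d, d)))
assert out is new_state
```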
class ConvNormBasicCell(ConvRNNCell):
"""Conv norm recurrent network cell.
"""
def __init__(self,
shape,
filter_size,
out_depth,
layer_norm=True,
kernel_regularizer=5e-4,
bias_regularizer=5e-4,
activation=tf.nn.elu,
kernel_initializer=None,
bias_initializer=None):
"""Initialize the Conv Norm Basic cell.
Args:
          shape: int tuple that is the height and width of the cell
          filter_size: int tuple that is the height and width of the filter
          out_depth: int that is the depth of the cell
activation: Activation function of the inner states.
"""
self.shape = shape
self.filter_size = filter_size
self._out_depth = out_depth
self._size = tf.TensorShape([self.shape[0], self.shape[1], self._out_depth])
self._activation = activation
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
self._kernel_regularizer = kernel_regularizer
self._bias_regularizer = bias_regularizer
self._layer_norm = layer_norm
@property
def state_size(self):
return self._size
@property
def output_size(self):
return self._size
def __call__(self, inputs, state):
"""Basic RNN cell."""
with tf.variable_scope(type(self).__name__): # "ConvNormBasicCell"
if self._activation is not None:
with tf.variable_scope("s"):
s = _conv_linear([state], self.filter_size, self._out_depth, True,
self._bias_initializer, self._kernel_initializer, self._bias_regularizer, self._kernel_regularizer)
with tf.variable_scope("i"):
i = _conv_linear([inputs], self.filter_size, self._out_depth, True,
self._bias_initializer, self._kernel_initializer, self._bias_regularizer, self._kernel_regularizer)
if self._layer_norm:
new_state = tf.contrib.layers.layer_norm(i + s,
activation_fn=self._activation,
reuse=tf.AUTO_REUSE,
scope='layer_norm'
)
else:
new_state = self._activation(i + s)
return new_state, new_state
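`ConvNormBasicCell` above projects the input and state separately, sums them, layer-normalizes, and applies the activation (elu by default). A NumPy sketch of that path under simplifying assumptions: dense products replace the convolutions, and the layer norm omits the learned gain and shift:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Per-feature normalization standing in for
    # tf.contrib.layers.layer_norm (gain/shift omitted in this sketch).
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def norm_basic_step(x, h, Wi, Ws):
    # Separate projections of input and state are summed, then
    # normalized and passed through the activation (elu here).
    pre = x @ Wi + h @ Ws
    z = layer_norm(pre)
    return np.where(z > 0, z, np.exp(z) - 1.0)  # elu

rng = np.random.default_rng(3)
d = 4
out = norm_basic_step(rng.standard_normal(d), np.zeros(d),
                      rng.standard_normal((d, d)),
                      rng.standard_normal((d, d)))
assert out.shape == (d,)
```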
class ConvGRUCell(ConvRNNCell):
"""Conv GRU recurrent network cell.
"""
def __init__(self,
shape,
filter_size,
out_depth,
activation=tf.nn.tanh,
kernel_initializer=None,
bias_initializer=None):
"""Initialize the Conv GRU cell.
Args:
          shape: int tuple that is the height and width of the cell
          filter_size: int tuple that is the height and width of the filter
          out_depth: int that is the depth of the cell
activation: Activation function of the inner states.
"""
self.shape = shape
self.filter_size = filter_size
self._out_depth = out_depth
self._size = tf.TensorShape([self.shape[0], self.shape[1], self._out_depth])
self._activation = activation
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
@property
def state_size(self):
return self._size
@property
def output_size(self):
return self._size
def __call__(self, inputs, state):
"""Gated recurrent unit (GRU)."""
with tf.variable_scope(type(self).__name__): # "ConvGRUCell"
with tf.variable_scope("gates"):
# We start with bias of 1.0 to not reset and not update.
bias_ones = self._bias_initializer
if self._bias_initializer is None:
dtype = [a.dtype for a in [inputs, state]][0]
bias_ones = tf.constant_initializer(1.0, dtype=dtype)
value = tf.nn.sigmoid(
_conv_linear([inputs, state], self.filter_size, 2*self._out_depth, True, bias_ones,
self._kernel_initializer))
r, u = tf.split(value=value, num_or_size_splits=2, axis=3)
with tf.variable_scope("candidates"):
c = self._activation(
_conv_linear([inputs, r * state], self.filter_size, self._out_depth, True,
self._bias_initializer, self._kernel_initializer))
new_h = u * state + (1 - u) * c
return new_h, new_h
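The gate arithmetic in `ConvGRUCell.__call__` above (reset gate `r`, update gate `u`, tanh candidate `c`, and the convex mix `u * state + (1 - u) * c`) can be sketched elementwise with NumPy. Dense products stand in for `_conv_linear` and the helper names are hypothetical; this is a sketch of the math, not the cell's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wr, Wu, Wc):
    # Reset and update gates act on the concatenated [input, state].
    xh = np.concatenate([x, h])
    r = sigmoid(xh @ Wr)
    u = sigmoid(xh @ Wu)
    # The candidate uses the reset-gated state, as in the cell above.
    c = np.tanh(np.concatenate([x, r * h]) @ Wc)
    # New state: convex combination controlled by the update gate.
    return u * h + (1.0 - u) * c

rng = np.random.default_rng(0)
d = 4
x, h = rng.standard_normal(d), rng.standard_normal(d)
Wr, Wu, Wc = (rng.standard_normal((2 * d, d)) for _ in range(3))
new_h = gru_step(x, h, Wr, Wu, Wc)
```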
class ConvLSTMCell(ConvRNNCell):
"""Conv LSTM recurrent network cell.
"""
def __init__(self,
shape,
filter_size,
out_depth,
use_peepholes=False,
forget_bias=1.0,
state_is_tuple=False,
activation=tf.nn.tanh,
kernel_initializer=None,
bias_initializer=None,
weight_decay=0.0,
layer_norm=False,
norm_gain=1.0,
norm_shift=0.0):
"""Initialize the Conv LSTM cell.
Args:
          shape: int tuple that is the height and width of the cell
          filter_size: int tuple that is the height and width of the filter
          out_depth: int that is the depth of the cell
use_peepholes: bool, set True to enable peephole connections
activation: Activation function of the inner states.
forget_bias: float, The bias added to forget gates (see above).
state_is_tuple: If True, accepted and returned states are 2-tuples of
the `c_state` and `m_state`. If False, they are concatenated
along the column axis. The latter behavior will soon be deprecated.
"""
self.shape = shape
self.filter_size = filter_size
self._use_peepholes = use_peepholes
self._out_depth = out_depth
self._size = tf.TensorShape([self.shape[0], self.shape[1], self._out_depth])
self._concat_size = tf.TensorShape([self.shape[0], self.shape[1], 2*self._out_depth])
self._forget_bias = forget_bias
self._state_is_tuple = state_is_tuple
if activation == "elu":
self._activation = tf.nn.elu
else:
self._activation = activation
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
self._layer_norm = layer_norm
self._weight_decay = weight_decay
self._g = norm_gain
self._b = norm_shift
@property
def state_size(self):
return (LSTMStateTuple(self._size, self._size)
if self._state_is_tuple else self._concat_size)
@property
def output_size(self):
return self._size
def zero_state(self, batch_size, dtype):
"""Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
Returns:
          tensor of shape [batch_size x shape[0] x shape[1] x out_depth * 2],
          or an LSTMStateTuple of two [batch_size x shape[0] x shape[1] x
          out_depth] tensors when state_is_tuple is True, filled with zeros.
"""
# last dimension is replaced by 2 * out_depth = (c, h)
shape = self.shape
out_depth = self._out_depth
if self._state_is_tuple:
zeros = LSTMStateTuple(
tf.zeros([batch_size, shape[0], shape[1], out_depth], dtype=dtype),
tf.zeros([batch_size, shape[0], shape[1], out_depth], dtype=dtype))
else:
zeros = tf.zeros([batch_size, shape[0], shape[1], out_depth * 2], dtype=dtype)
return zeros
def _norm(self, inp, scope):
shape = inp.get_shape()[-1:]
gamma_init = tf.constant_initializer(self._g)
beta_init = tf.constant_initializer(self._b)
with tf.variable_scope(scope):
gamma = tf.get_variable(shape=shape, initializer=gamma_init, name="gamma")
beta = tf.get_variable(shape=shape, initializer=beta_init, name="beta")
normalized = tf.contrib.layers.layer_norm(inp, reuse=True, scope=scope)
return normalized
def __call__(self, inputs, state):
"""Long-short term memory (LSTM)."""
with tf.variable_scope(type(self).__name__): # "ConvLSTMCell"
# Parameters of gates are concatenated into one multiply for efficiency
if self._state_is_tuple:
c, h = state
else:
c, h = tf.split(axis=3, num_or_size_splits=2, value=state)
concat = _conv_linear([inputs, h], \
self.filter_size, self._out_depth * 4, True, self._bias_initializer, self._kernel_initializer, kernel_regularizer=self._weight_decay)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f, o = tf.split(axis=3, num_or_size_splits=4, value=concat)
if self._layer_norm:
#print("using layer norm")
i = self._norm(i, "input")
j = self._norm(j, "transform")
f = self._norm(f, "forget")
o = self._norm(o, "output")
if self._use_peepholes:
with tf.variable_scope("peepholes", initializer=self._kernel_initializer):
w_f_diag = tf.get_variable("w_f_diag",
[self.shape[0], self.shape[1], self._out_depth],
dtype=c.dtype)
w_i_diag = tf.get_variable("w_i_diag",
[self.shape[0], self.shape[1], self._out_depth],
dtype=c.dtype)
w_o_diag = tf.get_variable("w_o_diag",
[self.shape[0], self.shape[1], self._out_depth],
dtype=c.dtype)
if self._use_peepholes:
new_c = (c * tf.nn.sigmoid(f + self._forget_bias + w_f_diag * c)
+ tf.nn.sigmoid(i + w_i_diag * c) * self._activation(j))
else:
new_c = (c * tf.nn.sigmoid(f + self._forget_bias)
+ tf.nn.sigmoid(i) * self._activation(j))
# new_c = (c * tf.nn.sigmoid(f)
# + tf.nn.sigmoid(i) * self._activation(j))
if self._layer_norm:
new_c = self._norm(new_c, "state")
if self._use_peepholes:
new_h = self._activation(new_c) * tf.nn.sigmoid(o + w_o_diag * c)
else:
new_h = self._activation(new_c) * tf.nn.sigmoid(o)
if self._state_is_tuple:
new_state = LSTMStateTuple(new_c, new_h)
else:
new_state = tf.concat(axis=3, values=[new_c, new_h])
return new_h, new_state
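The non-peephole path of `ConvLSTMCell.__call__` above splits the conv output four ways (i, j, f, o) and applies `new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * tanh(j)`, then `new_h = tanh(new_c) * sigmoid(o)`. That arithmetic can be sketched elementwise with NumPy; the `gates` vector here plays the role of the `_conv_linear` output, and all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(gates, c, forget_bias=1.0):
    # gates plays the role of the 4-way split of the conv output above:
    # i = input gate, j = new input, f = forget gate, o = output gate.
    i, j, f, o = np.split(gates, 4)
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_h, new_c

rng = np.random.default_rng(1)
d = 3
gates = rng.standard_normal(4 * d)
c = rng.standard_normal(d)
new_h, new_c = lstm_step(gates, c)
```

Note how `forget_bias` shifts the forget gate toward remembering early in training, matching the cell's default of 1.0.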
class ConvUGRNNCell(ConvRNNCell):
"""Conv UGRNN recurrent network cell.
"""
def __init__(self,
shape,
filter_size,
out_depth,
weight_decay=0.0,
forget_bias=1.0,
kernel_initializer=None,
bias_initializer=None,
layer_norm=False,
norm_gain=1.0,
norm_shift=0.0):
"""Initialize the Conv UGRNN cell.
Args:
          shape: int tuple that is the height and width of the cell
          filter_size: int tuple that is the height and width of the filter
          out_depth: int that is the depth of the cell
          forget_bias: float, The bias added to the carry gate g.
"""
self.shape = shape
self.filter_size = filter_size
self._out_depth = out_depth
self._size = tf.TensorShape([self.shape[0], self.shape[1], self._out_depth])
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
self._layer_norm = layer_norm
self._forget_bias = forget_bias
self._g = norm_gain
self._b = norm_shift
self._weight_decay = weight_decay
@property
def state_size(self):
return self._size
@property
def output_size(self):
return self._size
def zero_state(self, batch_size, dtype):
"""Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
Returns:
          tensor of shape [batch_size x shape[0] x shape[1] x out_depth]
          filled with zeros.
"""
shape = self.shape
out_depth = self._out_depth
zeros = tf.zeros([batch_size, shape[0], shape[1], out_depth], dtype=dtype)
return zeros
def _norm(self, inp, scope):
shape = inp.get_shape()[-1:]
gamma_init = tf.constant_initializer(self._g)
beta_init = tf.constant_initializer(self._b)
with tf.variable_scope(scope):
gamma = tf.get_variable(shape=shape, initializer=gamma_init, name="gamma")
beta = tf.get_variable(shape=shape, initializer=beta_init, name="beta")
normalized = tf.contrib.layers.layer_norm(inp, reuse=True, scope=scope)
return normalized
def __call__(self, inputs, state):
"""UGRNN cell."""
with tf.variable_scope(type(self).__name__): # "ConvUGRNNCell"
# Parameters of gates are concatenated into one multiply for efficiency
concat = _conv_linear([inputs, state], \
self.filter_size, 2*self._out_depth, True, self._bias_initializer, self._kernel_initializer, bias_regularizer=self._weight_decay, kernel_regularizer=self._weight_decay)
g_act, c_act = tf.split(axis=3, num_or_size_splits=2, value=concat)
if self._layer_norm:
g_act = self._norm(g_act, "g_act")
                c_act = self._norm(c_act, "c_act")
c = tf.nn.tanh(c_act)
g = tf.nn.sigmoid(g_act + self._forget_bias)
new_state = g * state + (1.0 - g) * c
new_output = new_state
return new_output, new_state
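The UGRNN update in `__call__` above uses a single gate `g = sigmoid(g_act + forget_bias)` to interpolate between the old state and a tanh candidate: `new_state = g * state + (1 - g) * c`. A small elementwise NumPy sketch (the pre-activations `g_act`/`c_act` stand in for the split conv output; names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ugrnn_step(g_act, c_act, state, forget_bias=1.0):
    # A single carry gate g interpolates between the old state and the
    # tanh candidate, matching the return expression in __call__ above.
    c = np.tanh(c_act)
    g = sigmoid(g_act + forget_bias)
    return g * state + (1.0 - g) * c

state = np.zeros(4)
# With a zero state, the output reduces to (1 - g) * tanh(c_act).
out = ugrnn_step(np.zeros(4), np.ones(4), state)
```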
class ConvIntersectionRNNCell(ConvRNNCell):
"""Conv IntersectionRNN recurrent network cell.
"""
def __init__(self,
shape,
filter_size,
out_depth,
weight_decay=0.0,
forget_bias=1.0,
kernel_initializer=None,
bias_initializer=None,
layer_norm=False,
norm_gain=1.0,
norm_shift=0.0):
"""Initialize the Conv IntersectionRNN cell.
Args:
          shape: int tuple that is the height and width of the cell
          filter_size: int tuple that is the height and width of the filter
          out_depth: int that is the depth of the cell
          forget_bias: float, The bias added to the gh and gy gates.
"""
self.shape = shape
self.filter_size = filter_size
self._out_depth = out_depth
self._size = tf.TensorShape([self.shape[0], self.shape[1], self._out_depth])
self._kernel_initializer = kernel_initializer
self._bias_initializer = bias_initializer
self._layer_norm = layer_norm
self._forget_bias = forget_bias
self._g = norm_gain
self._b = norm_shift
self._weight_decay = weight_decay
@property
def state_size(self):
return self._size
@property
def output_size(self):
return self._size
def zero_state(self, batch_size, dtype):
"""Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
Returns:
          tensor of shape `[batch_size, shape[0], shape[1], out_depth]`
          filled with zeros
"""
shape = self.shape
out_depth = self._out_depth
zeros = tf.zeros([batch_size, shape[0], shape[1], out_depth], dtype=dtype)
return zeros
def _norm(self, inp, scope):
shape = inp.get_shape()[-1:]
gamma_init = tf.constant_initializer(self._g)
beta_init = tf.constant_initializer(self._b)
with tf.variable_scope(scope):
gamma = tf.get_variable(shape=shape, initializer=gamma_init, name="gamma")
beta = tf.get_variable(shape=shape, initializer=beta_init, name="beta")
normalized = tf.contrib.layers.layer_norm(inp, reuse=True, scope=scope)
return normalized
def __call__(self, inputs, state):
"""IntersectionRNN cell."""
with tf.variable_scope(type(self).__name__): # "ConvIntersectionRNNCell"
# Parameters of gates are concatenated into one multiply for efficiency
            input_shape = inputs.get_shape().as_list()
            if (input_shape[1] != self.shape[0] or
                    input_shape[2] != self.shape[1] or
                    input_shape[3] != self._out_depth):
                raise ValueError("Input and output shape must match.")
n_dim = i_dim = self._out_depth
            concat = _conv_linear(
                [inputs, state], self.filter_size, 2 * n_dim + 2 * i_dim, True,
                self._bias_initializer, self._kernel_initializer,
                bias_regularizer=self._weight_decay,
                kernel_regularizer=self._weight_decay)
gh_act, h_act, gy_act, y_act = tf.split(axis=3, num_or_size_splits=[n_dim, n_dim, i_dim, i_dim], value=concat)
if self._layer_norm:
gh_act = self._norm(gh_act, "gh_act")
h_act = self._norm(h_act, "h_act")
gy_act = self._norm(gy_act, "gy_act")
y_act = self._norm(y_act, "y_act")
h = tf.nn.tanh(h_act)
y = tf.nn.relu(y_act)
gh = tf.nn.sigmoid(gh_act + self._forget_bias)
gy = tf.nn.sigmoid(gy_act + self._forget_bias)
new_state = gh * state + (1.0 - gh) * h # passed through time
new_y = gy * inputs + (1.0 - gy) * y # passed through depth
return new_y, new_state
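# The two gated paths above (one through time, one through depth) can be
# sketched elementwise in NumPy. Illustrative only; `intersection_update`
# is not part of this module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def intersection_update(x, state, gh_act, h_act, gy_act, y_act, forget_bias=1.0):
    """Elementwise IntersectionRNN update: gated recurrent path plus a
    gated highway-style path through depth."""
    gh = sigmoid(gh_act + forget_bias)
    gy = sigmoid(gy_act + forget_bias)
    new_state = gh * state + (1.0 - gh) * np.tanh(h_act)   # passed through time
    new_y = gy * x + (1.0 - gy) * np.maximum(y_act, 0.0)   # passed through depth
    return new_y, new_state
```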
class tnn_ConvBasicCell(ConvRNNCell):
def __init__(self,
harbor_shape,
harbor=(harbor, None),
pre_memory=None,
memory=(memory, None),
post_memory=None,
input_init=(tf.zeros, None),
state_init=(tf.zeros, None),
dtype=tf.float32,
name=None
):
self.harbor_shape = harbor_shape
self.harbor = harbor if harbor[1] is not None else (harbor[0], {})
self.pre_memory = pre_memory
self.memory = memory if memory[1] is not None else (memory[0], {})
self.post_memory = post_memory
self.input_init = input_init if input_init[1] is not None else (input_init[0], {})
self.state_init = state_init if state_init[1] is not None else (state_init[0], {})
self.dtype_tmp = dtype
self.name_tmp = name
self._reuse = None
self.conv_cell = ConvBasicCell(memory[1]['shape'], memory[1]['filter_size'], memory[1]['out_depth'])
def __call__(self, inputs=None, state=None):
"""
Produce outputs given inputs
If inputs or state are None, they are initialized from scratch.
:Kwargs:
- inputs (list)
A list of inputs. Inputs are combined using the harbor function
- state
:Returns:
(output, state)
"""
with tf.variable_scope(self.name_tmp, reuse=self._reuse):
if inputs is None:
inputs = [self.input_init[0](shape=self.harbor_shape,
**self.input_init[1])]
output = self.harbor[0](inputs, self.harbor_shape, self.name_tmp, reuse=self._reuse, **self.harbor[1])
pre_name_counter = 0
for function, kwargs in self.pre_memory:
with tf.variable_scope("pre_" + str(pre_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs) # component_conv needs to know the inputs
else:
output = function(output, **kwargs)
pre_name_counter += 1
if state is None:
bs = output.get_shape().as_list()[0]
state = self.conv_cell.zero_state(bs, dtype = self.dtype_tmp)
output, state = self.conv_cell(output, state)
self.state = tf.identity(state, name='state')
post_name_counter = 0
for function, kwargs in self.post_memory:
with tf.variable_scope("post_" + str(post_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs)
else:
output = function(output, **kwargs)
post_name_counter += 1
self.output_tmp = tf.identity(tf.cast(output, self.dtype_tmp), name='output')
self._reuse = True
self.state_shape = self.state.shape
self.output_tmp_shape = self.output_tmp.shape
return self.output_tmp, state
@property
def state_size(self):
"""
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
"""
# if self.state is not None:
return self.state_shape
# else:
# raise ValueError('State not initialized yet')
@property
def output_size(self):
"""
Integer or TensorShape: size of outputs produced by this cell.
"""
# if self.output_tmp is not None:
return self.output_tmp_shape
# else:
# raise ValueError('Output not initialized yet')
class tnn_ConvNormBasicCell(ConvRNNCell):
def __init__(self,
harbor_shape,
harbor=(harbor, None),
pre_memory=None,
memory=(memory, None),
post_memory=None,
input_init=(tf.zeros, None),
state_init=(tf.zeros, None),
dtype=tf.float32,
name=None
):
self.harbor_shape = harbor_shape
self.harbor = harbor if harbor[1] is not None else (harbor[0], {})
self.pre_memory = pre_memory
self.memory = memory if memory[1] is not None else (memory[0], {})
self.post_memory = post_memory
self.input_init = input_init if input_init[1] is not None else (input_init[0], {})
self.state_init = state_init if state_init[1] is not None else (state_init[0], {})
self.dtype_tmp = dtype
self.name_tmp = name
self._reuse = None
        self.conv_cell = ConvNormBasicCell(
            memory[1]['shape'],
            memory[1]['filter_size'],
            memory[1]['out_depth'],
            memory[1]['layer_norm'],
            memory[1]['kernel_regularizer'],
            memory[1]['bias_regularizer'])
def __call__(self, inputs=None, state=None):
"""
Produce outputs given inputs
If inputs or state are None, they are initialized from scratch.
:Kwargs:
- inputs (list)
A list of inputs. Inputs are combined using the harbor function
- state
:Returns:
(output, state)
"""
with tf.variable_scope(self.name_tmp, reuse=self._reuse):
if inputs is None:
inputs = [self.input_init[0](shape=self.harbor_shape,
**self.input_init[1])]
output = self.harbor[0](inputs, self.harbor_shape, self.name_tmp, reuse=self._reuse, **self.harbor[1])
pre_name_counter = 0
for function, kwargs in self.pre_memory:
with tf.variable_scope("pre_" + str(pre_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs) # component_conv needs to know the inputs
else:
output = function(output, **kwargs)
pre_name_counter += 1
if state is None:
bs = output.get_shape().as_list()[0]
state = self.conv_cell.zero_state(bs, dtype = self.dtype_tmp)
output, state = self.conv_cell(output, state)
self.state = tf.identity(state, name='state')
post_name_counter = 0
for function, kwargs in self.post_memory:
with tf.variable_scope("post_" + str(post_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs)
else:
output = function(output, **kwargs)
post_name_counter += 1
self.output_tmp = tf.identity(tf.cast(output, self.dtype_tmp), name='output')
self._reuse = True
self.state_shape = self.state.shape
self.output_tmp_shape = self.output_tmp.shape
return self.output_tmp, state
@property
def state_size(self):
"""
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
"""
# if self.state is not None:
return self.state_shape
# else:
# raise ValueError('State not initialized yet')
@property
def output_size(self):
"""
Integer or TensorShape: size of outputs produced by this cell.
"""
# if self.output_tmp is not None:
return self.output_tmp_shape
# else:
# raise ValueError('Output not initialized yet')
class tnn_ConvGRUCell(ConvRNNCell):
def __init__(self,
harbor_shape,
harbor=(harbor, None),
pre_memory=None,
memory=(memory, None),
post_memory=None,
input_init=(tf.zeros, None),
state_init=(tf.zeros, None),
dtype=tf.float32,
name=None
):
self.harbor_shape = harbor_shape
self.harbor = harbor if harbor[1] is not None else (harbor[0], {})
self.pre_memory = pre_memory
self.memory = memory if memory[1] is not None else (memory[0], {})
self.post_memory = post_memory
self.input_init = input_init if input_init[1] is not None else (input_init[0], {})
self.state_init = state_init if state_init[1] is not None else (state_init[0], {})
self.dtype_tmp = dtype
self.name_tmp = name
self._reuse = None
self.conv_cell = ConvGRUCell(memory[1]['shape'], memory[1]['filter_size'], memory[1]['out_depth'])
def __call__(self, inputs=None, state=None):
"""
Produce outputs given inputs
If inputs or state are None, they are initialized from scratch.
:Kwargs:
- inputs (list)
A list of inputs. Inputs are combined using the harbor function
- state
:Returns:
(output, state)
"""
with tf.variable_scope(self.name_tmp, reuse=self._reuse):
if inputs is None:
inputs = [self.input_init[0](shape=self.harbor_shape,
**self.input_init[1])]
output = self.harbor[0](inputs, self.harbor_shape, self.name_tmp, reuse=self._reuse, **self.harbor[1])
pre_name_counter = 0
for function, kwargs in self.pre_memory:
with tf.variable_scope("pre_" + str(pre_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs) # component_conv needs to know the inputs
else:
output = function(output, **kwargs)
pre_name_counter += 1
if state is None:
bs = output.get_shape().as_list()[0]
state = self.conv_cell.zero_state(bs, dtype = self.dtype_tmp)
output, state = self.conv_cell(output, state)
self.state = tf.identity(state, name='state')
post_name_counter = 0
for function, kwargs in self.post_memory:
with tf.variable_scope("post_" + str(post_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs)
else:
output = function(output, **kwargs)
post_name_counter += 1
self.output_tmp = tf.identity(tf.cast(output, self.dtype_tmp), name='output')
self._reuse = True
self.state_shape = self.state.shape
self.output_tmp_shape = self.output_tmp.shape
return self.output_tmp, state
@property
def state_size(self):
"""
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
"""
# if self.state is not None:
return self.state_shape
# else:
# raise ValueError('State not initialized yet')
@property
def output_size(self):
"""
Integer or TensorShape: size of outputs produced by this cell.
"""
# if self.output_tmp is not None:
return self.output_tmp_shape
# else:
# raise ValueError('Output not initialized yet')
class tnn_ConvLSTMCell(ConvRNNCell):
def __init__(self,
harbor_shape,
harbor=(harbor, None),
pre_memory=None,
memory=(memory, None),
post_memory=None,
input_init=(tf.zeros, None),
state_init=(tf.zeros, None),
dtype=tf.float32,
name=None
):
self.harbor_shape = harbor_shape
self.harbor = harbor if harbor[1] is not None else (harbor[0], {})
self.pre_memory = pre_memory
self.memory = memory if memory[1] is not None else (memory[0], {})
self.post_memory = post_memory
self.input_init = input_init if input_init[1] is not None else (input_init[0], {})
self.state_init = state_init if state_init[1] is not None else (state_init[0], {})
self.dtype_tmp = dtype
self.name_tmp = name
self._reuse = None
self.conv_cell = ConvLSTMCell(**self.memory[1])
def __call__(self, inputs=None, state=None):
"""
Produce outputs given inputs
If inputs or state are None, they are initialized from scratch.
:Kwargs:
- inputs (list)
A list of inputs. Inputs are combined using the harbor function
- state
:Returns:
(output, state)
"""
with tf.variable_scope(self.name_tmp, reuse=self._reuse):
if inputs is None:
inputs = [self.input_init[0](shape=self.harbor_shape,
**self.input_init[1])]
output = self.harbor[0](inputs, self.harbor_shape, self.name_tmp, reuse=self._reuse, **self.harbor[1])
pre_name_counter = 0
for function, kwargs in self.pre_memory:
with tf.variable_scope("pre_" + str(pre_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs) # component_conv needs to know the inputs
else:
output = function(output, **kwargs)
pre_name_counter += 1
if state is None:
bs = output.get_shape().as_list()[0]
state = self.conv_cell.zero_state(bs, dtype = self.dtype_tmp)
output, state = self.conv_cell(output, state)
self.state = tf.identity(state, name='state')
post_name_counter = 0
for function, kwargs in self.post_memory:
with tf.variable_scope("post_" + str(post_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs)
else:
output = function(output, **kwargs)
post_name_counter += 1
self.output_tmp = tf.identity(tf.cast(output, self.dtype_tmp), name='output')
self._reuse = True
self.state_shape = self.state.shape
self.output_tmp_shape = self.output_tmp.shape
return self.output_tmp, state
@property
def state_size(self):
"""
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
"""
# if self.state is not None:
return self.state_shape
# else:
# raise ValueError('State not initialized yet')
@property
def output_size(self):
"""
Integer or TensorShape: size of outputs produced by this cell.
"""
# if self.output_tmp is not None:
return self.output_tmp_shape
# else:
# raise ValueError('Output not initialized yet')
class tnn_ConvUGRNNCell(ConvRNNCell):
def __init__(self,
harbor_shape,
harbor=(harbor, None),
pre_memory=None,
memory=(memory, None),
post_memory=None,
input_init=(tf.zeros, None),
state_init=(tf.zeros, None),
dtype=tf.float32,
name=None
):
self.harbor_shape = harbor_shape
self.harbor = harbor if harbor[1] is not None else (harbor[0], {})
self.pre_memory = pre_memory
self.memory = memory if memory[1] is not None else (memory[0], {})
self.post_memory = post_memory
self.input_init = input_init if input_init[1] is not None else (input_init[0], {})
self.state_init = state_init if state_init[1] is not None else (state_init[0], {})
self.dtype_tmp = dtype
self.name_tmp = name
self._reuse = None
self.conv_cell = ConvUGRNNCell(**self.memory[1])
def __call__(self, inputs=None, state=None):
"""
Produce outputs given inputs
If inputs or state are None, they are initialized from scratch.
:Kwargs:
- inputs (list)
A list of inputs. Inputs are combined using the harbor function
- state
:Returns:
(output, state)
"""
with tf.variable_scope(self.name_tmp, reuse=self._reuse):
if inputs is None:
inputs = [self.input_init[0](shape=self.harbor_shape,
**self.input_init[1])]
output = self.harbor[0](inputs, self.harbor_shape, self.name_tmp, reuse=self._reuse, **self.harbor[1])
pre_name_counter = 0
for function, kwargs in self.pre_memory:
with tf.variable_scope("pre_" + str(pre_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs) # component_conv needs to know the inputs
else:
output = function(output, **kwargs)
pre_name_counter += 1
if state is None:
bs = output.get_shape().as_list()[0]
state = self.conv_cell.zero_state(bs, dtype = self.dtype_tmp)
output, state = self.conv_cell(output, state)
self.state = tf.identity(state, name='state')
post_name_counter = 0
for function, kwargs in self.post_memory:
with tf.variable_scope("post_" + str(post_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs)
else:
output = function(output, **kwargs)
post_name_counter += 1
self.output_tmp = tf.identity(tf.cast(output, self.dtype_tmp), name='output')
self._reuse = True
self.state_shape = self.state.shape
self.output_tmp_shape = self.output_tmp.shape
return self.output_tmp, state
@property
def state_size(self):
"""
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
"""
# if self.state is not None:
return self.state_shape
# else:
# raise ValueError('State not initialized yet')
@property
def output_size(self):
"""
Integer or TensorShape: size of outputs produced by this cell.
"""
# if self.output_tmp is not None:
return self.output_tmp_shape
# else:
# raise ValueError('Output not initialized yet')
class tnn_ConvIntersectionRNNCell(ConvRNNCell):
def __init__(self,
harbor_shape,
harbor=(harbor, None),
pre_memory=None,
memory=(memory, None),
post_memory=None,
input_init=(tf.zeros, None),
state_init=(tf.zeros, None),
dtype=tf.float32,
name=None
):
self.harbor_shape = harbor_shape
self.harbor = harbor if harbor[1] is not None else (harbor[0], {})
self.pre_memory = pre_memory
self.memory = memory if memory[1] is not None else (memory[0], {})
self.post_memory = post_memory
self.input_init = input_init if input_init[1] is not None else (input_init[0], {})
self.state_init = state_init if state_init[1] is not None else (state_init[0], {})
self.dtype_tmp = dtype
self.name_tmp = name
self._reuse = None
self.conv_cell = ConvIntersectionRNNCell(**self.memory[1])
def __call__(self, inputs=None, state=None):
"""
Produce outputs given inputs
If inputs or state are None, they are initialized from scratch.
:Kwargs:
- inputs (list)
A list of inputs. Inputs are combined using the harbor function
- state
:Returns:
(output, state)
"""
with tf.variable_scope(self.name_tmp, reuse=self._reuse):
if inputs is None:
inputs = [self.input_init[0](shape=self.harbor_shape,
**self.input_init[1])]
output = self.harbor[0](inputs, self.harbor_shape, self.name_tmp, reuse=self._reuse, **self.harbor[1])
pre_name_counter = 0
for function, kwargs in self.pre_memory:
with tf.variable_scope("pre_" + str(pre_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs) # component_conv needs to know the inputs
else:
output = function(output, **kwargs)
pre_name_counter += 1
if state is None:
bs = output.get_shape().as_list()[0]
state = self.conv_cell.zero_state(bs, dtype = self.dtype_tmp)
output, state = self.conv_cell(output, state)
self.state = tf.identity(state, name='state')
post_name_counter = 0
for function, kwargs in self.post_memory:
with tf.variable_scope("post_" + str(post_name_counter), reuse=self._reuse):
if function.__name__ == "component_conv":
output = function(output, inputs, **kwargs)
else:
output = function(output, **kwargs)
post_name_counter += 1
self.output_tmp = tf.identity(tf.cast(output, self.dtype_tmp), name='output')
self._reuse = True
self.state_shape = self.state.shape
self.output_tmp_shape = self.output_tmp.shape
return self.output_tmp, state
@property
def state_size(self):
"""
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
"""
# if self.state is not None:
return self.state_shape
# else:
# raise ValueError('State not initialized yet')
@property
def output_size(self):
"""
Integer or TensorShape: size of outputs produced by this cell.
"""
# if self.output_tmp is not None:
return self.output_tmp_shape
# else:
# raise ValueError('Output not initialized yet')
def _conv_linear(args, filter_size, out_depth, bias, bias_initializer=None, kernel_initializer=None, bias_regularizer=None, kernel_regularizer=None):
"""convolution:
Args:
args: a 4D Tensor or a list of 4D, batch x n, Tensors.
filter_size: int tuple of filter height and width.
out_depth: int, number of features.
bias: boolean as to whether to have a bias.
bias_initializer: starting value to initialize the bias.
kernel_initializer: starting value to initialize the kernel.
Returns:
A 4D Tensor with shape [batch h w out_depth]
Raises:
ValueError: if some of the arguments has unspecified or wrong shape.
"""
# Calculate the total size of arguments on dimension 1.
total_arg_size_depth = 0
shapes = [a.get_shape().as_list() for a in args]
for shape in shapes:
if len(shape) != 4:
raise ValueError("Linear is expecting 4D arguments: %s" % str(shapes))
if not shape[3]:
raise ValueError("Linear expects shape[4] of arguments: %s" % str(shapes))
else:
total_arg_size_depth += shape[3]
dtype = [a.dtype for a in args][0]
if kernel_regularizer is None:
kernel_regularizer = 0.
if bias_regularizer is None:
bias_regularizer = 0.
if kernel_initializer is None:
kernel_initializer = tf.contrib.layers.xavier_initializer()
if bias_initializer is None:
bias_initializer = tf.contrib.layers.xavier_initializer()
# Now the computation.
    kernel = tf.get_variable(
        "weights",
        [filter_size[0], filter_size[1], total_arg_size_depth, out_depth],
        dtype=dtype,
        initializer=kernel_initializer,
        regularizer=tf.contrib.layers.l2_regularizer(kernel_regularizer))
if len(args) == 1:
res = tf.nn.conv2d(args[0], kernel, strides=[1, 1, 1, 1], padding='SAME')
else:
res = tf.nn.conv2d(tf.concat(axis=3, values=args), kernel, strides=[1, 1, 1, 1], padding='SAME')
if not bias:
return res
bias_term = tf.get_variable(
"bias", [out_depth],
dtype=dtype,
initializer=bias_initializer,
regularizer=tf.contrib.layers.l2_regularizer(bias_regularizer))
return res + bias_term
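# The "one multiply for efficiency" comment in the cells above relies on a
# block-decomposition identity: convolving the depth-concatenation of two
# inputs with a stacked kernel equals the sum of per-input convolutions.
# A NumPy sketch with 1x1 kernels (where convolution reduces to a matmul
# over the depth axis); the names here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 4, 4, 3))          # e.g. inputs, depth 3
b = rng.standard_normal((2, 4, 4, 5))          # e.g. state, depth 5
k = rng.standard_normal((3 + 5, 7))            # stacked 1x1 kernel -> out_depth 7

combined = np.concatenate([a, b], axis=3) @ k  # one multiply over the concat
separate = a @ k[:3] + b @ k[3:]               # equivalent per-input multiplies
```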
def _transpose_conv_linear(args, out_shape, filter_size, out_depth, bias, bias_initializer=None, kernel_initializer=None):
"""transpose convolution for dealing with feedbacks:
Args:
args: a 4D Tensor or a list of 4D, batch x n, Tensors.
filter_size: int tuple of filter height and width.
out_depth: int, number of features.
bias: boolean as to whether to have a bias.
bias_initializer: starting value to initialize the bias.
kernel_initializer: starting value to initialize the kernel.
Returns:
A 4D Tensor with shape [batch h w out_depth]
Raises:
ValueError: if some of the arguments has unspecified or wrong shape.
"""
# Calculate the total size of arguments on dimension 1.
total_arg_size_depth = 0
shapes = [a.get_shape().as_list() for a in args]
for shape in shapes:
if len(shape) != 4:
raise ValueError("Linear is expecting 4D arguments: %s" % str(shapes))
if not shape[3]:
raise ValueError("Linear expects shape[4] of arguments: %s" % str(shapes))
else:
total_arg_size_depth += shape[3]
dtype = [a.dtype for a in args][0]
# Now the computation.
    kernel = tf.get_variable(
        "weights",
        [filter_size[0], filter_size[1], out_depth, total_arg_size_depth],
        dtype=dtype,
        initializer=kernel_initializer)
if len(args) == 1:
new_inp = args[0]
stride_0 = out_shape[1] // new_inp.get_shape().as_list()[1]
stride_1 = out_shape[2] // new_inp.get_shape().as_list()[2]
res = tf.nn.conv2d_transpose(new_inp, kernel, output_shape=out_shape, strides=[1, stride_0, stride_1, 1], padding='VALID')
else:
new_inp = tf.concat(axis=3, values=args)
stride_0 = out_shape[1] // new_inp.get_shape().as_list()[1]
stride_1 = out_shape[2] // new_inp.get_shape().as_list()[2]
res = tf.nn.conv2d_transpose(new_inp, kernel, output_shape=out_shape, strides=[1, stride_0, stride_1, 1], padding='VALID')
if not bias:
return res
if bias_initializer is None:
bias_initializer = tf.constant_initializer(0.0, dtype=dtype)
bias_term = tf.get_variable(
"bias", [out_depth],
dtype=dtype,
initializer=bias_initializer)
return res + bias_term
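# _transpose_conv_linear derives its conv2d_transpose strides as integer
# upsampling factors (output spatial size divided by input spatial size).
# A standalone sketch of that computation; `upsample_strides` is a
# hypothetical helper, not part of this module:

```python
def upsample_strides(in_shape, out_shape):
    """Integer upsampling factors along height and width for NHWC shapes,
    matching the stride_0/stride_1 computation in _transpose_conv_linear."""
    return out_shape[1] // in_shape[1], out_shape[2] // in_shape[2]
```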
# File: bac/protocols/esmacs.py (repo: UCL-CCS/BAC2, license: Apache-2.0)
from simtk.openmm import *
from simtk.unit import *
from simtk import *
from simtk.openmm.app import *
class Esmacs:
def __init__(self):
        # `Baro` was an incomplete, undefined statement (NameError at runtime);
        # the constructor is left as a stub.
        pass
# File: ipython/startup/import_logging.py (repo: dycw/dotfiles2, license: MIT)
import logging  # noqa: F401
from logging import DEBUG # noqa: F401
from logging import ERROR # noqa: F401
from logging import INFO # noqa: F401
from logging import WARNING # noqa: F401
from logging import basicConfig # noqa: F401
from logging.config import dictConfig # noqa: F401
| 36 | 51 | 0.760417 | 41 | 288 | 5.341463 | 0.292683 | 0.255708 | 0.328767 | 0.520548 | 0.570776 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089744 | 0.1875 | 288 | 7 | 52 | 41.142857 | 0.846154 | 0.263889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
# File: jaxrl/datasets/__init__.py (repo: anuragajay/jaxrl, license: MIT)
from jaxrl.datasets.dataset import Batch
from jaxrl.datasets.dataset_utils import make_env_and_dataset
from jaxrl.datasets.replay_buffer import ReplayBuffer
| 40 | 62 | 0.86875 | 23 | 160 | 5.826087 | 0.565217 | 0.201493 | 0.380597 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 160 | 3 | 63 | 53.333333 | 0.924138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
# File: src/python/nimbusml/tests/pipeline/test_load_save.py (repo: michaelgsharp/NimbusML, license: MIT)
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------------------------
import os
import pickle
import unittest
from nimbusml import Pipeline
from nimbusml.datasets import get_dataset
from nimbusml.feature_extraction.categorical import OneHotVectorizer
from nimbusml.linear_model import FastLinearBinaryClassifier
from nimbusml.utils import get_X_y
from numpy.testing import assert_almost_equal
train_file = get_dataset('uciadult_train').as_filepath()
test_file = get_dataset('uciadult_test').as_filepath()
categorical_columns = [
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'ethnicity',
'sex',
'native-country-region']
label_column = 'label'
(train, label) = get_X_y(train_file, label_column, sep=',')
(test, test_label) = get_X_y(test_file, label_column, sep=',')
class TestLoadSave(unittest.TestCase):
def test_model_dataframe(self):
model_nimbusml = Pipeline(
steps=[
('cat',
OneHotVectorizer() << categorical_columns),
('linear',
FastLinearBinaryClassifier(
shuffle=False,
number_of_threads=1))])
model_nimbusml.fit(train, label)
# Save with pickle
pickle_filename = 'nimbusml_model.p'
with open(pickle_filename, 'wb') as f:
pickle.dump(model_nimbusml, f)
with open(pickle_filename, "rb") as f:
model_nimbusml_pickle = pickle.load(f)
os.remove(pickle_filename)
score1 = model_nimbusml.predict(test).head(5)
score2 = model_nimbusml_pickle.predict(test).head(5)
metrics, score = model_nimbusml.test(test, test_label, output_scores=True)
metrics_pickle, score_pickle = model_nimbusml_pickle.test(
test, test_label, output_scores=True)
        assert_almost_equal(score1.sum().sum(), score2.sum().sum(), decimal=2)
        assert_almost_equal(
            metrics.sum().sum(),
            metrics_pickle.sum().sum(),
            decimal=2)
# Save load with pipeline methods
model_nimbusml.save_model('model.nimbusml.m')
model_nimbusml_load = Pipeline()
model_nimbusml_load.load_model('model.nimbusml.m')
score1 = model_nimbusml.predict(test).head(5)
score2 = model_nimbusml_load.predict(test).head(5)
metrics, score = model_nimbusml.test(test, test_label, output_scores=True)
        metrics_load, score_load = model_nimbusml_load.test(
            test, test_label, evaltype='binary', output_scores=True)
        assert_almost_equal(score1.sum().sum(), score2.sum().sum(), decimal=2)
        assert_almost_equal(
            metrics.sum().sum(),
            metrics_load.sum().sum(),
            decimal=2)
os.remove('model.nimbusml.m')
def test_model_datastream(self):
model_nimbusml = Pipeline(
steps=[
('cat',
OneHotVectorizer() << categorical_columns),
('linear',
FastLinearBinaryClassifier(
shuffle=False,
number_of_threads=1))])
model_nimbusml.fit(train, label)
# Save with pickle
pickle_filename = 'nimbusml_model.p'
with open(pickle_filename, 'wb') as f:
pickle.dump(model_nimbusml, f)
with open(pickle_filename, "rb") as f:
model_nimbusml_pickle = pickle.load(f)
os.remove(pickle_filename)
score1 = model_nimbusml.predict(test).head(5)
score2 = model_nimbusml_pickle.predict(test).head(5)
metrics, score = model_nimbusml.test(test, test_label, output_scores=True)
metrics_pickle, score_pickle = model_nimbusml_pickle.test(
test, test_label, output_scores=True)
assert_almost_equal(score1.sum().sum(), score2.sum().sum(), decimal=2)
assert_almost_equal(
metrics.sum().sum(),
metrics_pickle.sum().sum(),
decimal=2)
# Save load with pipeline methods
model_nimbusml.save_model('model.nimbusml.m')
model_nimbusml_load = Pipeline()
model_nimbusml_load.load_model('model.nimbusml.m')
score1 = model_nimbusml.predict(test).head(5)
score2 = model_nimbusml_load.predict(test).head(5)
metrics, score = model_nimbusml.test(test, test_label, output_scores=True)
        metrics_load, score_load = model_nimbusml_load.test(
            test, test_label, evaltype='binary', output_scores=True)
        assert_almost_equal(score1.sum().sum(), score2.sum().sum(), decimal=2)
        assert_almost_equal(
            metrics.sum().sum(),
            metrics_load.sum().sum(),
            decimal=2)
os.remove('model.nimbusml.m')
    def test_pipeline_saves_complete_model_file_when_pickled(self):
        model_nimbusml = Pipeline(
            steps=[
                ('cat',
                 OneHotVectorizer() << categorical_columns),
                ('linear',
                 FastLinearBinaryClassifier(
                     shuffle=False,
                     number_of_threads=1))])
        model_nimbusml.fit(train, label)
        metrics, score = model_nimbusml.test(test, test_label, output_scores=True)

        pickle_filename = 'nimbusml_model.p'

        # Save with pickle
        with open(pickle_filename, 'wb') as f:
            pickle.dump(model_nimbusml, f)

        # Remove the pipeline model from disk so
        # that the unpickled pipeline is forced
        # to get its model from the pickled file.
        os.remove(model_nimbusml.model)

        with open(pickle_filename, 'rb') as f:
            model_nimbusml_pickle = pickle.load(f)
        os.remove(pickle_filename)

        metrics_pickle, score_pickle = model_nimbusml_pickle.test(
            test, test_label, output_scores=True)

        assert_almost_equal(score.sum().sum(),
                            score_pickle.sum().sum(),
                            decimal=2)
        assert_almost_equal(metrics.sum().sum(),
                            metrics_pickle.sum().sum(),
                            decimal=2)
    def test_unfitted_pickled_pipeline_can_be_fit(self):
        pipeline = Pipeline(
            steps=[
                ('cat',
                 OneHotVectorizer() << categorical_columns),
                ('linear',
                 FastLinearBinaryClassifier(
                     shuffle=False,
                     number_of_threads=1))])
        pipeline.fit(train, label)
        metrics, score = pipeline.test(test, test_label, output_scores=True)

        # Create a new unfitted pipeline
        pipeline = Pipeline(
            steps=[
                ('cat',
                 OneHotVectorizer() << categorical_columns),
                ('linear',
                 FastLinearBinaryClassifier(
                     shuffle=False,
                     number_of_threads=1))])

        pickle_filename = 'nimbusml_model.p'

        # Save with pickle
        with open(pickle_filename, 'wb') as f:
            pickle.dump(pipeline, f)
        with open(pickle_filename, 'rb') as f:
            pipeline_pickle = pickle.load(f)
        os.remove(pickle_filename)

        pipeline_pickle.fit(train, label)
        metrics_pickle, score_pickle = pipeline_pickle.test(
            test, test_label, output_scores=True)

        assert_almost_equal(score.sum().sum(),
                            score_pickle.sum().sum(),
                            decimal=2)
        assert_almost_equal(metrics.sum().sum(),
                            metrics_pickle.sum().sum(),
                            decimal=2)
    def test_unpickled_pipeline_has_feature_contributions(self):
        features = ['age', 'education-num', 'hours-per-week']
        model_nimbusml = Pipeline(
            steps=[FastLinearBinaryClassifier(feature=features)])
        model_nimbusml.fit(train, label)
        fc = model_nimbusml.get_feature_contributions(test)

        # Save with pickle
        pickle_filename = 'nimbusml_model.p'
        with open(pickle_filename, 'wb') as f:
            pickle.dump(model_nimbusml, f)

        # Unpickle model
        with open(pickle_filename, 'rb') as f:
            model_nimbusml_pickle = pickle.load(f)

        fc_pickle = model_nimbusml_pickle.get_feature_contributions(test)

        # Check every feature; a bare non-empty list is always truthy.
        assert all('FeatureContributions.' + feature in fc_pickle.columns
                   for feature in features)
        assert all(fc['FeatureContributions.' + feature].equals(
                       fc_pickle['FeatureContributions.' + feature])
                   for feature in features)

        os.remove(pickle_filename)
    def test_unpickled_predictor_has_feature_contributions(self):
        features = ['age', 'education-num', 'hours-per-week']
        model_nimbusml = FastLinearBinaryClassifier(feature=features)
        model_nimbusml.fit(train, label)
        fc = model_nimbusml.get_feature_contributions(test)

        # Save with pickle
        pickle_filename = 'nimbusml_model.p'
        with open(pickle_filename, 'wb') as f:
            pickle.dump(model_nimbusml, f)

        # Unpickle model
        with open(pickle_filename, 'rb') as f:
            model_nimbusml_pickle = pickle.load(f)

        fc_pickle = model_nimbusml_pickle.get_feature_contributions(test)

        assert all('FeatureContributions.' + feature in fc_pickle.columns
                   for feature in features)
        assert all(fc['FeatureContributions.' + feature].equals(
                       fc_pickle['FeatureContributions.' + feature])
                   for feature in features)

        os.remove(pickle_filename)
    def test_pipeline_loaded_from_zip_has_feature_contributions(self):
        features = ['age', 'education-num', 'hours-per-week']
        model_nimbusml = Pipeline(
            steps=[FastLinearBinaryClassifier(feature=features)])
        model_nimbusml.fit(train, label)
        fc = model_nimbusml.get_feature_contributions(test)

        # Save the model to zip
        model_filename = 'nimbusml_model.zip'
        model_nimbusml.save_model(model_filename)

        # Load the model from zip
        model_nimbusml_zip = Pipeline()
        model_nimbusml_zip.load_model(model_filename)

        fc_zip = model_nimbusml_zip.get_feature_contributions(test)

        assert all('FeatureContributions.' + feature in fc_zip.columns
                   for feature in features)
        assert all(fc['FeatureContributions.' + feature].equals(
                       fc_zip['FeatureContributions.' + feature])
                   for feature in features)

        os.remove(model_filename)
    def test_predictor_loaded_from_zip_has_feature_contributions(self):
        features = ['age', 'education-num', 'hours-per-week']
        model_nimbusml = FastLinearBinaryClassifier(feature=features)
        model_nimbusml.fit(train, label)
        fc = model_nimbusml.get_feature_contributions(test)

        # Save the model to zip
        model_filename = 'nimbusml_model.zip'
        model_nimbusml.save_model(model_filename)

        # Load the model from zip
        model_nimbusml_zip = Pipeline()
        model_nimbusml_zip.load_model(model_filename)

        fc_zip = model_nimbusml_zip.get_feature_contributions(test)

        assert all('FeatureContributions.' + feature in fc_zip.columns
                   for feature in features)
        assert all(fc['FeatureContributions.' + feature].equals(
                       fc_zip['FeatureContributions.' + feature])
                   for feature in features)

        os.remove(model_filename)
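# Every test above exercises the same serialize/reload/compare round trip.
# As an illustration only, here is a self-contained, stdlib-only sketch of
# that pattern; the `Model` class and `pickle_round_trip` helper below are
# hypothetical stand-ins for demonstration, not part of nimbusml.

```python
import os
import pickle
import tempfile


class Model:
    """Toy stand-in for a fitted pipeline: remembers a learned offset."""

    def __init__(self, offset):
        self.offset = offset

    def predict(self, values):
        # Apply the "learned" offset to every input value.
        return [v + self.offset for v in values]


def pickle_round_trip(obj):
    """Serialize obj to a temporary file, reload it, and clean up."""
    fd, path = tempfile.mkstemp(suffix='.p')
    os.close(fd)
    try:
        with open(path, 'wb') as f:
            pickle.dump(obj, f)
        with open(path, 'rb') as f:
            return pickle.load(f)
    finally:
        os.remove(path)


model = Model(offset=2)
restored = pickle_round_trip(model)
# The reloaded copy must behave identically to the original,
# which is what the assert_almost_equal checks above verify
# against real nimbusml pipelines.
assert model.predict([1, 2, 3]) == restored.predict([1, 2, 3])
```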

if __name__ == '__main__':
    unittest.main()
# ---------------------------------------------------------------
# File: res/event_parser_test.py
# Repo: initialed85/motion-cctv (MIT license)
# ---------------------------------------------------------------
import datetime
] | null | null | null | import datetime
import os
import unittest
from collections import OrderedDict
from event_parser import parse_events

_TIMESTAMP = datetime.datetime(year=1992, month=2, day=6, hour=13, minute=33, second=37)

_HTMLS = OrderedDict([
('events_2019-06-07.html',
'</html>\n<head>\n<title>Events for 2019-06-07 as at 1992-02-06 13:33:37</title>\n<style type="text/css">\n\nBODY {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: none;\n text-align: center;\n}\n\nTH {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: bold;\n text-align: center;\n}\n\nTD {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: none;\n text-align: center;\n border: 1px solid gray; \n}\n\n</style>\n</head>\n\n<body>\n<h1>Events for 2019-06-07 as at 1992-02-06 13:33:37</h1>\n\n<center>\n<table width="90%">\n\n<tr>\n<th>Event ID</th>\n<th>Camera ID</th>\n<th>Timestamp</th>\n<th>Size</th>\n<th>Camera</th>\n<th>Screenshot</th>\n<th>Download</th>\n</tr>\n\n<tr>\n<td>73</td>\n<td>102</td>\n<td>2019-06-07 00:09:29</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse58__102__2019-06-06_22-49-40__FrontDoor.jpg"><img src="/browse58__102__2019-06-06_22-49-40__FrontDoor.jpg" alt="58__102__2019-06-06_22-49-40__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse73__102__2019-06-07_00-09-29__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>72</td>\n<td>102</td>\n<td>2019-06-07 00:08:58</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse72__102__2019-06-07_00-08-59__FrontDoor.jpg"><img src="/browse72__102__2019-06-07_00-08-59__FrontDoor.jpg" alt="72__102__2019-06-07_00-08-59__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse72__102__2019-06-07_00-08-58__FrontDoor.mkv">Download</a></td>\n</tr>\n\n</table>\n<center>\n\n</body>\n</html>'),
('events_2019-06-06.html',
'</html>\n<head>\n<title>Events for 2019-06-06 as at 1992-02-06 13:33:37</title>\n<style type="text/css">\n\nBODY {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: none;\n text-align: center;\n}\n\nTH {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: bold;\n text-align: center;\n}\n\nTD {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: none;\n text-align: center;\n border: 1px solid gray; \n}\n\n</style>\n</head>\n\n<body>\n<h1>Events for 2019-06-06 as at 1992-02-06 13:33:37</h1>\n\n<center>\n<table width="90%">\n\n<tr>\n<th>Event ID</th>\n<th>Camera ID</th>\n<th>Timestamp</th>\n<th>Size</th>\n<th>Camera</th>\n<th>Screenshot</th>\n<th>Download</th>\n</tr>\n\n<tr>\n<td>61</td>\n<td>101</td>\n<td>2019-06-06 23:44:20</td>\n<td>0.0 MB</td>\n<td>Driveway</td>\n<td style="width: 320px";><a target="_blank" href="/browse61__101__2019-06-06_23-44-22__Driveway.jpg"><img src="/browse61__101__2019-06-06_23-44-22__Driveway.jpg" alt="61__101__2019-06-06_23-44-22__Driveway.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse61__101__2019-06-06_23-44-20__Driveway.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>60</td>\n<td>101</td>\n<td>2019-06-06 23:43:18</td>\n<td>0.0 MB</td>\n<td>Driveway</td>\n<td style="width: 320px";><a target="_blank" href="/browse60__101__2019-06-06_23-43-19__Driveway.jpg"><img src="/browse60__101__2019-06-06_23-43-19__Driveway.jpg" alt="60__101__2019-06-06_23-43-19__Driveway.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse60__101__2019-06-06_23-43-18__Driveway.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>59</td>\n<td>101</td>\n<td>2019-06-06 23:38:03</td>\n<td>0.0 MB</td>\n<td>Driveway</td>\n<td style="width: 320px";><a target="_blank" href="/browse59__101__2019-06-06_23-38-05__Driveway.jpg"><img src="/browse59__101__2019-06-06_23-38-05__Driveway.jpg" alt="59__101__2019-06-06_23-38-05__Driveway.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse59__101__2019-06-06_23-38-03__Driveway.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>71</td>\n<td>102</td>\n<td>2019-06-06 23:32:34</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse71__102__2019-06-06_23-32-36__FrontDoor.jpg"><img src="/browse71__102__2019-06-06_23-32-36__FrontDoor.jpg" alt="71__102__2019-06-06_23-32-36__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse71__102__2019-06-06_23-32-34__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>58</td>\n<td>101</td>\n<td>2019-06-06 23:32:34</td>\n<td>0.0 MB</td>\n<td>Driveway</td>\n<td style="width: 320px";><a target="_blank" href="/browse59__102__2019-06-06_22-50-18__FrontDoor.jpg"><img src="/browse59__102__2019-06-06_22-50-18__FrontDoor.jpg" alt="59__102__2019-06-06_22-50-18__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse58__101__2019-06-06_23-32-34__Driveway.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>70</td>\n<td>102</td>\n<td>2019-06-06 23:32:13</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse70__102__2019-06-06_23-32-15__FrontDoor.jpg"><img src="/browse70__102__2019-06-06_23-32-15__FrontDoor.jpg" alt="70__102__2019-06-06_23-32-15__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse70__102__2019-06-06_23-32-13__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>69</td>\n<td>102</td>\n<td>2019-06-06 23:31:29</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse69__102__2019-06-06_23-31-31__FrontDoor.jpg"><img src="/browse69__102__2019-06-06_23-31-31__FrontDoor.jpg" alt="69__102__2019-06-06_23-31-31__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse69__102__2019-06-06_23-31-29__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>68</td>\n<td>102</td>\n<td>2019-06-06 23:17:13</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse68__102__2019-06-06_23-17-15__FrontDoor.jpg"><img src="/browse68__102__2019-06-06_23-17-15__FrontDoor.jpg" alt="68__102__2019-06-06_23-17-15__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse68__102__2019-06-06_23-17-13__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>67</td>\n<td>102</td>\n<td>2019-06-06 23:10:52</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse67__102__2019-06-06_23-10-54__FrontDoor.jpg"><img src="/browse67__102__2019-06-06_23-10-54__FrontDoor.jpg" alt="67__102__2019-06-06_23-10-54__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse67__102__2019-06-06_23-10-52__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>66</td>\n<td>102</td>\n<td>2019-06-06 23:09:00</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse66__102__2019-06-06_23-09-02__FrontDoor.jpg"><img src="/browse66__102__2019-06-06_23-09-02__FrontDoor.jpg" alt="66__102__2019-06-06_23-09-02__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse66__102__2019-06-06_23-09-00__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>65</td>\n<td>102</td>\n<td>2019-06-06 23:03:16</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse65__102__2019-06-06_23-03-18__FrontDoor.jpg"><img src="/browse65__102__2019-06-06_23-03-18__FrontDoor.jpg" alt="65__102__2019-06-06_23-03-18__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse65__102__2019-06-06_23-03-16__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>64</td>\n<td>102</td>\n<td>2019-06-06 22:59:54</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse64__102__2019-06-06_22-59-55__FrontDoor.jpg"><img src="/browse64__102__2019-06-06_22-59-55__FrontDoor.jpg" alt="64__102__2019-06-06_22-59-55__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse64__102__2019-06-06_22-59-54__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>63</td>\n<td>102</td>\n<td>2019-06-06 22:55:52</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse63__102__2019-06-06_22-55-54__FrontDoor.jpg"><img src="/browse63__102__2019-06-06_22-55-54__FrontDoor.jpg" alt="63__102__2019-06-06_22-55-54__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse63__102__2019-06-06_22-55-52__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>62</td>\n<td>102</td>\n<td>2019-06-06 22:54:38</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse62__102__2019-06-06_22-54-40__FrontDoor.jpg"><img src="/browse62__102__2019-06-06_22-54-40__FrontDoor.jpg" alt="62__102__2019-06-06_22-54-40__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse62__102__2019-06-06_22-54-38__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>61</td>\n<td>102</td>\n<td>2019-06-06 22:52:25</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse61__102__2019-06-06_22-52-27__FrontDoor.jpg"><img src="/browse61__102__2019-06-06_22-52-27__FrontDoor.jpg" alt="61__102__2019-06-06_22-52-27__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse61__102__2019-06-06_22-52-25__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>60</td>\n<td>102</td>\n<td>2019-06-06 22:52:04</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browse60__102__2019-06-06_22-52-05__FrontDoor.jpg"><img src="/browse60__102__2019-06-06_22-52-05__FrontDoor.jpg" alt="60__102__2019-06-06_22-52-05__FrontDoor.jpg" width="320" height="180" /></a></td>\n<td><a href="/browse60__102__2019-06-06_22-52-04__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>59</td>\n<td>102</td>\n<td>2019-06-06 22:50:17</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browsemissing.png"><img src="/browsemissing.png" alt="missing.png" width="320" height="180" /></a></td>\n<td><a href="/browse59__102__2019-06-06_22-50-17__FrontDoor.mkv">Download</a></td>\n</tr>\n\n<tr>\n<td>58</td>\n<td>102</td>\n<td>2019-06-06 22:49:39</td>\n<td>0.0 MB</td>\n<td>FrontDoor</td>\n<td style="width: 320px";><a target="_blank" href="/browsemissing.png"><img src="/browsemissing.png" alt="missing.png" width="320" height="180" /></a></td>\n<td><a href="/browse58__102__2019-06-06_22-49-39__FrontDoor.mkv">Download</a></td>\n</tr>\n\n</table>\n<center>\n\n</body>\n</html>'),
('events.html',
'</html>\n<head>\n<title>All events as at 1992-02-06 13:33:37</title>\n<style type="text/css">\n\nBODY {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: none;\n text-align: center;\n}\n\nTH {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: bold;\n text-align: center;\n}\n\nTD {\n font-family: Tahoma;\n font-size: 8pt;\n font-weight: none;\n text-align: center;\n border: 1px solid gray; \n}\n\n</style>\n</head>\n\n<body>\n<h2>Events as at 1992-02-06 13:33:37</h2>\n\n<center>\n<table width="90%">\n\n<tr>\n<th>Date</th>\n<th>Events</th>\n</tr>\n\n\n<tr>\n<td><a target="event" href="events_2019-06-07.html">2019-06-07</a></td>\n<td>2</td>\n</tr>\n\n\n\n<tr>\n<td><a target="event" href="events_2019-06-06.html">2019-06-06</a></td>\n<td>18</td>\n</tr>\n\n\n</table>\n<center>\n\n</body>\n</html>\n')
])


class EventParserTest(unittest.TestCase):
    def setUp(self):
        os.chdir(os.path.dirname(os.path.realpath(__file__)))

    def test_parse_events(self):
        self.assertEqual(
            parse_events(
                target_dir='../test_files',
                browse_url_prefix='/browse',
                run_timestamp=_TIMESTAMP
            ),
            _HTMLS
        )
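# The fixtures above embed each event as a row of `<tr>`/`<td>` HTML.
# As a hedged, stdlib-only illustration of the kind of cell extraction a
# parser for these pages must perform, here is a minimal sketch; the
# `RowParser` class is a hypothetical stand-in for demonstration, not the
# project's actual `parse_events` implementation.

```python
from html.parser import HTMLParser


class RowParser(HTMLParser):
    """Collect the text content of every <td> cell fed to the parser."""

    def __init__(self):
        super().__init__()
        self._in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self._in_td = False

    def handle_data(self, data):
        # Only keep non-blank text that appears inside a <td> cell.
        if self._in_td and data.strip():
            self.cells.append(data.strip())


parser = RowParser()
parser.feed('<tr><td>73</td><td>102</td><td>2019-06-07 00:09:29</td></tr>')
assert parser.cells == ['73', '102', '2019-06-07 00:09:29']
```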
# ---------------------------------------------------------------
# File: holobot/discord/sdk/data_providers/__init__.py
# Repo: rexor12/holobot (MIT license)
# ---------------------------------------------------------------
from .ibot_data_provider import IBotDataProvider
from .iemoji_data_provider import IEmojiDataProvider