hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8a2694ba8dccf786223043d4f9f44e990f3fa074 | 31 | py | Python | tests/__init__.py | DaSch17/fileminer | fe202f39407a52ef0b8970ba10d7059b05bcaade | [
"MIT"
] | null | null | null | tests/__init__.py | DaSch17/fileminer | fe202f39407a52ef0b8970ba10d7059b05bcaade | [
"MIT"
] | null | null | null | tests/__init__.py | DaSch17/fileminer | fe202f39407a52ef0b8970ba10d7059b05bcaade | [
"MIT"
] | 1 | 2021-12-14T15:08:40.000Z | 2021-12-14T15:08:40.000Z | from .test_fileminer import *
| 15.5 | 30 | 0.774194 | 4 | 31 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 1 | 31 | 31 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a2a6178152f84e18c5f77c9e75bec6bc2a0eb17 | 9,003 | py | Python | hdidx/indexer/hamming.py | aagnone3/hdidx | 81866ceabca8094c5c947926d938d87632c518b3 | [
"MIT"
] | 77 | 2015-10-15T13:16:28.000Z | 2021-11-01T03:33:38.000Z | hdidx/indexer/hamming.py | Neurocomputing/NEUCOM-D-15-02269 | 7b816f651cdb7791e8761ad14b943153a94d4f0a | [
"Python-2.0",
"RSA-MD"
] | 9 | 2016-03-16T03:40:12.000Z | 2021-01-18T14:22:06.000Z | hdidx/indexer/hamming.py | Neurocomputing/NEUCOM-D-15-02269 | 7b816f651cdb7791e8761ad14b943153a94d4f0a | [
"Python-2.0",
"RSA-MD"
] | 23 | 2015-10-07T12:52:01.000Z | 2021-04-29T07:04:04.000Z | #!/usr/bin/env python
# coding: utf-8
"""
File Name: hamming.py
Author: Wan Ji
E-mail: wanji@live.com
Created on: Mon Jul 27 10:22:06 2015 CST
"""
DESCRIPTION = """
Indexers for binary codes in Hamming space.
"""
import os
import logging
import numpy as np
import hdidx.encoder
from hdidx.indexer import Indexer
from hdidx.util import Profiler
from hdidx.storage import createStorage
import hdidx._cext as cext
from hdidx import _mih as mih
BIT_CNT_MAP = np.array([bin(i).count("1") for i in xrange(256)], np.uint16)
DEFAULT_HAMMING_ENCODER = "SHEncoder"
class SHIndexer(Indexer):
def __init__(self, encoder=DEFAULT_HAMMING_ENCODER):
Indexer.__init__(self)
self.encoder = getattr(hdidx.encoder, encoder)()
self.set_storage()
def __del__(self):
pass
def build(self, pardic=None):
self.encoder.build(pardic)
def set_storage(self, storage_type='mem', storage_parm=None):
self.storage = createStorage(storage_type, storage_parm)
def add(self, vals, keys=None):
num_vals = vals.shape[0]
if keys is None:
num_base_items = self.storage.get_num_items()
keys = np.arange(num_base_items, num_base_items + num_vals,
dtype=np.int32)
else:
keys = np.array(keys, dtype=np.int32).reshape(-1)
start_id = 0
for start_id in range(0, num_vals, self.BLKSIZE):
cur_num = min(self.BLKSIZE, num_vals - start_id)
logging.info("%8d/%d: %d" % (start_id, num_vals, cur_num))
codes = self.encoder.encode(vals[start_id:start_id+cur_num, :])
self.storage.add(codes, keys[start_id:start_id+cur_num])
def remove(self, keys):
raise Exception(self.ERR_UNIMPL)
@staticmethod
def hammingDist(B1, B2):
"""
Compute hamming distance between two sets of samples (B1, B2)
Dh=hammingDist(B1, B2);
Input
B1, B2: compact bit vectors. Each datapoint is one row.
size(B1) = [ndatapoints1, nwords]
size(B2) = [ndatapoints2, nwords]
It is faster if ndatapoints1 < ndatapoints2
Output
Dh = hamming distance.
size(Dh) = [ndatapoints1, ndatapoints2]
example query
Dhamm = hammingDist(B2, B1);
this will give the same result as:
Dhamm = distMat(U2>0, U1>0).^2;
the size of the distance matrix is:
size(Dhamm) = [Ntest x Ntraining]
"""
if B1.ndim == 1:
B1 = B1.reshape((1, -1))
if B2.ndim == 1:
B2 = B2.reshape((1, -1))
npt1, dim1 = B1.shape
npt2, dim2 = B2.shape
if dim1 != dim2:
raise Exception("Dimensions not consistent: %d, %d" % (dim1, dim2))
Dh = np.zeros((npt1, npt2), np.uint16)
for i in xrange(npt1):
Dh[i, :] = BIT_CNT_MAP[np.bitwise_xor(B1[i, :], B2)].sum(1)
return Dh
@staticmethod
def hammingDist2(B1, B2):
"""
Compute hamming distance between two sets of samples (B1, B2)
Dh=hammingDist(B1, B2);
Input
B1, B2: compact bit vectors. Each datapoint is one row.
size(B1) = [ndatapoints1, nwords]
size(B2) = [ndatapoints2, nwords]
It is faster if ndatapoints1 < ndatapoints2
Output
Dh = hamming distance.
size(Dh) = [ndatapoints1, ndatapoints2]
example query
Dhamm = hammingDist(B2, B1);
this will give the same result as:
Dhamm = distMat(U2>0, U1>0).^2;
the size of the distance matrix is:
size(Dhamm) = [Ntest x Ntraining]
"""
if B1.ndim == 1:
B1 = B1.reshape((1, -1))
if B2.ndim == 1:
B2 = B2.reshape((1, -1))
npt1, dim1 = B1.shape
npt2, dim2 = B2.shape
if dim1 != dim2:
raise Exception("Dimensions not consistent: %d, %d" % (dim1, dim2))
Dh = cext.hamming(B1, B2)
return Dh
def search(self, queries, topk=None, **kwargs):
nq = queries.shape[0]
nbits = self.encoder.ecdat['nbits']
# qry_codes = self.encoder.encode(queries)
db_codes = self.storage.get_codes()
idsquerybase = self.storage.get_keys()
dis = np.ones((nq, topk), np.single) * np.inf
ids = np.ones((nq, topk), np.int32) * -1
profiler = Profiler()
interval = 100 if nq >= 100 else 10
time_total = 0.0 # total time for all queries
logging.info('Start Querying ...')
for qry_id in range(nq):
profiler.start("encoding") # time for encoding the query
qry_code = self.encoder.encode(queries[qry_id:qry_id+1])
profiler.end()
profiler.start("distance") # time for computing the distances
disquerybase = self.hammingDist2(qry_code, db_codes).reshape(-1)
profiler.end()
profiler.start("knn") # time for finding the kNN
cur_ids = cext.knn_count(disquerybase, nbits, topk)
profiler.end()
profiler.start("result") # time for getting final result
ids[qry_id, :] = idsquerybase[cur_ids]
dis[qry_id, :] = disquerybase[cur_ids]
profiler.end()
if (qry_id+1) % interval == 0:
time_total += profiler.sum_overall()
logging.info(
'\t%d/%d: %.3fms per query' %
(qry_id+1, nq, profiler.sum_average() * 1000))
logging.info("\t\t%s" % profiler.str_average())
profiler.reset()
logging.info('Querying Finished!')
time_total += profiler.sum_overall()
logging.info("Average querying time: %.3fms" % (time_total * 1000 / nq))
return ids, dis
class MIHIndexer(Indexer):
def __init__(self, encoder=DEFAULT_HAMMING_ENCODER):
Indexer.__init__(self)
self.encoder = getattr(hdidx.encoder, encoder)()
self.indexer = None
self.key_map = mih.get_key_map(16)
self.set_storage()
def __del__(self):
pass
def build(self, pardic=None):
self.encoder.build(pardic)
def set_storage(self, storage_type='mem', storage_parm=None):
self.idx_path = storage_parm['path'] if storage_parm else None
if self.idx_path is not None:
nbits = self.encoder.ecdat['nbits']
self.ntbls = nbits // 16
self.indexer = mih.PyMultiIndexer(nbits, self.ntbls, 1000000)
if os.path.exists(self.idx_path):
self.indexer.load(self.idx_path)
def add(self, vals, keys=None):
num_vals = vals.shape[0]
if keys is None:
num_base_items = self.indexer.get_num_items()
keys = np.arange(num_base_items, num_base_items + num_vals,
dtype=np.int32)
else:
keys = np.array(keys, dtype=np.int32).reshape(-1)
start_id = 0
for start_id in range(0, num_vals, self.BLKSIZE):
cur_num = min(self.BLKSIZE, num_vals - start_id)
logging.info("%8d/%d: %d" % (start_id, num_vals, cur_num))
codes = self.encoder.encode(vals[start_id:start_id+cur_num, :])
self.indexer.add(codes)
logging.info("%8d/%d (Done!)" % (num_vals, num_vals))
self.indexer.save(self.idx_path)
def remove(self, keys):
raise Exception(self.ERR_UNIMPL)
def search(self, queries, topk=None, **kwargs):
nq = queries.shape[0]
# qry_codes = self.encoder.encode(queries)
dis = np.ones((nq, topk), np.single) * np.inf
ids = np.ones((nq, topk), np.int32) * -1
profiler = Profiler()
interval = 100 if nq >= 100 else 10
time_total = 0.0 # total time for all queries
logging.info('Start Querying ...')
for qry_id in range(nq):
profiler.start("encoding") # time for encoding the query
qry_code = self.encoder.encode(queries[qry_id:qry_id+1])
profiler.end()
profiler.start("search") # time for searching the index
cur_ids, cur_dis = self.indexer.search(qry_code, topk)
profiler.end()
profiler.start("result") # time for getting final result
ids[qry_id, :] = cur_ids
dis[qry_id, :] = cur_dis
profiler.end()
if (qry_id+1) % interval == 0:
time_total += profiler.sum_overall()
logging.info(
'\t%d/%d: %.3fms per query' %
(qry_id+1, nq, profiler.sum_average() * 1000))
logging.info("\t\t%s" % profiler.str_average())
profiler.reset()
logging.info('Querying Finished!')
time_total += profiler.sum_overall()
logging.info("Average querying time: %.3fms" % (time_total * 1000 / nq))
return ids, dis
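As a standalone sketch of the lookup-table Hamming distance technique used by `SHIndexer.hammingDist` above (NumPy only, no hdidx dependencies; the helper name `hamming_dist` and the sample codes are illustrative, not part of the library):

```python
import numpy as np

# 256-entry popcount table: BIT_CNT_MAP[b] == number of set bits in byte b
BIT_CNT_MAP = np.array([bin(i).count("1") for i in range(256)], np.uint16)

def hamming_dist(B1, B2):
    """Pairwise Hamming distances between packed binary codes.

    B1: (n1, nwords) uint8 array, B2: (n2, nwords) uint8 array.
    Returns an (n1, n2) uint16 distance matrix.
    """
    B1 = np.atleast_2d(B1)
    B2 = np.atleast_2d(B2)
    if B1.shape[1] != B2.shape[1]:
        raise ValueError("code widths differ: %d vs %d" % (B1.shape[1], B2.shape[1]))
    Dh = np.zeros((B1.shape[0], B2.shape[0]), np.uint16)
    for i in range(B1.shape[0]):
        # XOR row i against every row of B2, then sum the per-byte popcounts
        Dh[i, :] = BIT_CNT_MAP[np.bitwise_xor(B1[i, :], B2)].sum(1)
    return Dh

# One query code against a two-item database, one byte per code
codes_a = np.array([[0b00001111]], np.uint8)
codes_b = np.array([[0b00000000], [0b00001110]], np.uint8)
```

The table trades 512 bytes of memory for a branch-free popcount; XOR highlights the differing bits, and indexing with the XOR result broadcasts the lookup over the whole database row-block at once.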
| 32.039146 | 80 | 0.574919 | 1,157 | 9,003 | 4.338807 | 0.191011 | 0.030677 | 0.014343 | 0.023904 | 0.758367 | 0.719124 | 0.70239 | 0.70239 | 0.70239 | 0.684861 | 0 | 0.033355 | 0.307342 | 9,003 | 280 | 81 | 32.153571 | 0.771648 | 0.169166 | 0 | 0.711765 | 0 | 0 | 0.056656 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.094118 | false | 0.011765 | 0.052941 | 0 | 0.182353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a54a93118be817f30965711d94bea41e02ead2a | 99 | py | Python | src/env.py | fabiob/wwwsqldesigner-aws | 5518eae682e8228be30b094c6015054b3cddf8f3 | [
"MIT"
] | null | null | null | src/env.py | fabiob/wwwsqldesigner-aws | 5518eae682e8228be30b094c6015054b3cddf8f3 | [
"MIT"
] | null | null | null | src/env.py | fabiob/wwwsqldesigner-aws | 5518eae682e8228be30b094c6015054b3cddf8f3 | [
"MIT"
] | 1 | 2021-04-04T09:41:51.000Z | 2021-04-04T09:41:51.000Z | import os
S3_BUCKET = os.environ['STORAGE_S3_BUCKET']
S3_PREFIX = os.environ['STORAGE_S3_PREFIX']
| 19.8 | 43 | 0.787879 | 16 | 99 | 4.5 | 0.4375 | 0.222222 | 0.444444 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.090909 | 99 | 4 | 44 | 24.75 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0.343434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8a56782435f0ac37c5433aa0fa86beaa8ff465b3 | 69,019 | py | Python | tests/unit/modules/boto_vpc_test.py | konradstarzyk/salt | 9ef4bb3b464661480cb7c455186870cdfc640f0d | [
"Apache-2.0"
] | null | null | null | tests/unit/modules/boto_vpc_test.py | konradstarzyk/salt | 9ef4bb3b464661480cb7c455186870cdfc640f0d | [
"Apache-2.0"
] | null | null | null | tests/unit/modules/boto_vpc_test.py | konradstarzyk/salt | 9ef4bb3b464661480cb7c455186870cdfc640f0d | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# TODO: Update skipped tests to expect dictionary results from the execution
# module functions.
# Import Python libs
from __future__ import absolute_import
from distutils.version import LooseVersion # pylint: disable=import-error,no-name-in-module
# Import Salt Testing libs
from salttesting.unit import skipIf, TestCase
from salttesting.mock import NO_MOCK, NO_MOCK_REASON, patch
from salttesting.helpers import ensure_in_syspath
ensure_in_syspath('../../')
# Import Salt libs
import salt.config
import salt.loader
from salt.modules import boto_vpc
from salt.exceptions import SaltInvocationError, CommandExecutionError
from salt.modules.boto_vpc import _maybe_set_name_tag, _maybe_set_tags
# Import 3rd-party libs
import salt.ext.six as six
# pylint: disable=import-error,no-name-in-module
try:
import boto
from boto.exception import BotoServerError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
try:
import moto
from moto import mock_ec2
HAS_MOTO = True
except ImportError:
HAS_MOTO = False
def mock_ec2(self):
'''
if the mock_ec2 function is not available due to import failure
this replaces the decorated function with stub_function.
Allows boto_vpc unit tests to use the @mock_ec2 decorator
without a "NameError: name 'mock_ec2' is not defined" error.
'''
def stub_function(self):
pass
return stub_function
# pylint: enable=import-error,no-name-in-module
# the boto_vpc module relies on the connect_to_region() method
# which was added in boto 2.8.0
# https://github.com/boto/boto/commit/33ac26b416fbb48a60602542b4ce15dcc7029f12
required_boto_version = '2.8.0'
required_moto_version = '0.3.7'
region = 'us-east-1'
access_key = 'GKTADJGHEIQSXMKKRBJ08H'
secret_key = 'askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs'
conn_parameters = {'region': region, 'key': access_key, 'keyid': secret_key, 'profile': {}}
cidr_block = '10.0.0.0/24'
dhcp_options_parameters = {'domain_name': 'example.com', 'domain_name_servers': ['1.2.3.4'], 'ntp_servers': ['5.6.7.8'],
'netbios_name_servers': ['10.0.0.1'], 'netbios_node_type': 2}
network_acl_entry_parameters = ('fake', 100, -1, 'allow', cidr_block)
dhcp_options_parameters.update(conn_parameters)
opts = salt.config.DEFAULT_MINION_OPTS
utils = salt.loader.utils(opts, whitelist=['boto'])
boto_vpc.__utils__ = utils
boto_vpc.__init__(opts)
def _has_required_boto():
'''
Returns True/False boolean depending on if Boto is installed and correct
version.
'''
if not HAS_BOTO:
return False
elif LooseVersion(boto.__version__) < LooseVersion(required_boto_version):
return False
else:
return True
def _has_required_moto():
'''
Returns True/False boolean depending on if Moto is installed and correct
version.
'''
if not HAS_MOTO:
return False
else:
try:
if LooseVersion(moto.__version__) < LooseVersion(required_moto_version):
return False
except AttributeError:
import pkg_resources
from pkg_resources import DistributionNotFound
try:
if LooseVersion(pkg_resources.get_distribution('moto').version) < LooseVersion(required_moto_version):
return False
except DistributionNotFound:
return False
return True
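The version gates above use `LooseVersion` rather than plain string comparison. A minimal sketch of why that matters (the `version_tuple` helper is hypothetical, shown only to illustrate component-wise comparison):

```python
def version_tuple(v):
    # Split '2.8.0' into (2, 8, 0) so components compare numerically,
    # the way LooseVersion does for purely numeric versions.
    return tuple(int(part) for part in v.split('.'))

# Numeric comparison gets multi-digit components right...
newer = version_tuple('2.10.0') > version_tuple('2.8.0')

# ...while lexicographic string comparison does not ('1' < '8').
string_says_newer = '2.10.0' > '2.8.0'
```

Here `newer` is True but `string_says_newer` is False, which is exactly the bug the `LooseVersion` checks avoid when gating on boto/moto releases.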
class BotoVpcTestCaseBase(TestCase):
def setUp(self):
boto_vpc.__context__ = {}
class BotoVpcTestCaseMixin(object):
conn = None
def _create_vpc(self, name=None, tags=None):
'''
Helper function to create a test vpc
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
vpc = self.conn.create_vpc(cidr_block)
_maybe_set_name_tag(name, vpc)
_maybe_set_tags(tags, vpc)
return vpc
def _create_subnet(self, vpc_id, cidr_block='10.0.0.0/25', name=None, tags=None, availability_zone=None):
'''
Helper function to create a test subnet
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
subnet = self.conn.create_subnet(vpc_id, cidr_block, availability_zone=availability_zone)
_maybe_set_name_tag(name, subnet)
_maybe_set_tags(tags, subnet)
return subnet
def _create_internet_gateway(self, vpc_id, name=None, tags=None):
'''
Helper function to create a test internet gateway
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
igw = self.conn.create_internet_gateway(vpc_id)
_maybe_set_name_tag(name, igw)
_maybe_set_tags(tags, igw)
return igw
def _create_customer_gateway(self, vpc_id, name=None, tags=None):
'''
Helper function to create a test customer gateway
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
gw = self.conn.create_customer_gateway(vpc_id)
_maybe_set_name_tag(name, gw)
_maybe_set_tags(tags, gw)
return gw
def _create_dhcp_options(self, domain_name='example.com', domain_name_servers=None, ntp_servers=None,
netbios_name_servers=None, netbios_node_type=2):
'''
Helper function to create test dchp options
'''
if not netbios_name_servers:
netbios_name_servers = ['10.0.0.1']
if not ntp_servers:
ntp_servers = ['5.6.7.8']
if not domain_name_servers:
domain_name_servers = ['1.2.3.4']
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
return self.conn.create_dhcp_options(domain_name=domain_name, domain_name_servers=domain_name_servers,
ntp_servers=ntp_servers, netbios_name_servers=netbios_name_servers,
netbios_node_type=netbios_node_type)
def _create_network_acl(self, vpc_id):
'''
Helper function to create test network acl
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
return self.conn.create_network_acl(vpc_id)
def _create_network_acl_entry(self, network_acl_id, rule_number, protocol, rule_action, cidr_block, egress=None,
icmp_code=None, icmp_type=None, port_range_from=None, port_range_to=None):
'''
Helper function to create test network acl entry
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
return self.conn.create_network_acl_entry(network_acl_id, rule_number, protocol, rule_action,
cidr_block,
egress=egress,
icmp_code=icmp_code, icmp_type=icmp_type,
port_range_from=port_range_from, port_range_to=port_range_to)
def _create_route_table(self, vpc_id, name=None, tags=None):
'''
Helper function to create a test route table
'''
if not self.conn:
self.conn = boto.vpc.connect_to_region(region)
rtbl = self.conn.create_route_table(vpc_id)
_maybe_set_name_tag(name, rtbl)
_maybe_set_tags(tags, rtbl)
return rtbl
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
@skipIf(_has_required_moto() is False, 'The moto version must be >= to version {0}'.format(required_moto_version))
class BotoVpcTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
'''
TestCase for salt.modules.boto_vpc module
'''
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_id_and_a_vpc_exists_the_vpc_exists_method_returns_true(self):
'''
Tests checking vpc existence via id when the vpc already exists
'''
vpc = self._create_vpc()
vpc_exists_result = boto_vpc.exists(vpc_id=vpc.id, **conn_parameters)
self.assertTrue(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_id_and_a_vpc_does_not_exist_the_vpc_exists_method_returns_false(
self):
'''
Tests checking vpc existence via id when the vpc does not exist
'''
self._create_vpc() # Created to ensure that the filters are applied correctly
vpc_exists_result = boto_vpc.exists(vpc_id='fake', **conn_parameters)
self.assertFalse(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_name_and_a_vpc_exists_the_vpc_exists_method_returns_true(self):
'''
Tests checking vpc existence via name when vpc exists
'''
self._create_vpc(name='test')
vpc_exists_result = boto_vpc.exists(name='test', **conn_parameters)
self.assertTrue(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_name_and_a_vpc_does_not_exist_the_vpc_exists_method_returns_false(
self):
'''
Tests checking vpc existence via name when vpc does not exist
'''
self._create_vpc() # Created to ensure that the filters are applied correctly
vpc_exists_result = boto_vpc.exists(name='test', **conn_parameters)
self.assertFalse(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_tags_and_a_vpc_exists_the_vpc_exists_method_returns_true(self):
'''
Tests checking vpc existence via tag when vpc exists
'''
self._create_vpc(tags={'test': 'testvalue'})
vpc_exists_result = boto_vpc.exists(tags={'test': 'testvalue'}, **conn_parameters)
self.assertTrue(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_tags_and_a_vpc_does_not_exist_the_vpc_exists_method_returns_false(
self):
'''
Tests checking vpc existence via tag when vpc does not exist
'''
self._create_vpc() # Created to ensure that the filters are applied correctly
vpc_exists_result = boto_vpc.exists(tags={'test': 'testvalue'}, **conn_parameters)
self.assertFalse(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_cidr_and_a_vpc_exists_the_vpc_exists_method_returns_true(self):
'''
Tests checking vpc existence via cidr when vpc exists
'''
self._create_vpc()
vpc_exists_result = boto_vpc.exists(cidr=u'10.0.0.0/24', **conn_parameters)
self.assertTrue(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_by_cidr_and_a_vpc_does_not_exist_the_vpc_exists_method_returns_false(
self):
'''
Tests checking vpc existence via cidr when vpc does not exist
'''
self._create_vpc() # Created to ensure that the filters are applied correctly
vpc_exists_result = boto_vpc.exists(cidr=u'10.10.10.10/24', **conn_parameters)
self.assertFalse(vpc_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_vpc_exists_but_providing_no_filters_the_vpc_exists_method_raises_a_salt_invocation_error(self):
'''
Tests checking vpc existence when no filters are provided
'''
with self.assertRaisesRegexp(SaltInvocationError, 'At least one of the following '
'must be provided: vpc_id, vpc_name, '
'cidr or tags.'):
boto_vpc.exists(**conn_parameters)
@mock_ec2
def test_get_vpc_id_method_when_filtering_by_name(self):
'''
Tests getting vpc id when filtering by name
'''
vpc = self._create_vpc(name='test')
get_id_result = boto_vpc.get_id(name='test', **conn_parameters)
self.assertEqual(vpc.id, get_id_result['id'])
@mock_ec2
def test_get_vpc_id_method_when_filtering_by_invalid_name(self):
'''
Tests getting vpc id when filtering by invalid name
'''
self._create_vpc(name='test')
get_id_result = boto_vpc.get_id(name='test_fake', **conn_parameters)
self.assertEqual(get_id_result['id'], None)
@mock_ec2
def test_get_vpc_id_method_when_filtering_by_cidr(self):
'''
Tests getting vpc id when filtering by cidr
'''
vpc = self._create_vpc()
get_id_result = boto_vpc.get_id(cidr=u'10.0.0.0/24', **conn_parameters)
self.assertEqual(vpc.id, get_id_result['id'])
@mock_ec2
def test_get_vpc_id_method_when_filtering_by_invalid_cidr(self):
'''
Tests getting vpc id when filtering by invalid cidr
'''
self._create_vpc()
get_id_result = boto_vpc.get_id(cidr=u'10.10.10.10/24', **conn_parameters)
self.assertEqual(get_id_result['id'], None)
@mock_ec2
def test_get_vpc_id_method_when_filtering_by_tags(self):
'''
Tests getting vpc id when filtering by tags
'''
vpc = self._create_vpc(tags={'test': 'testvalue'})
get_id_result = boto_vpc.get_id(tags={'test': 'testvalue'}, **conn_parameters)
self.assertEqual(vpc.id, get_id_result['id'])
@mock_ec2
def test_get_vpc_id_method_when_filtering_by_invalid_tags(self):
'''
Tests getting vpc id when filtering by invalid tags
'''
self._create_vpc(tags={'test': 'testvalue'})
get_id_result = boto_vpc.get_id(tags={'test': 'fake-testvalue'}, **conn_parameters)
self.assertEqual(get_id_result['id'], None)
@mock_ec2
def test_get_vpc_id_method_when_not_providing_filters_raises_a_salt_invocation_error(self):
'''
Tests getting vpc id but providing no filters
'''
with self.assertRaisesRegexp(SaltInvocationError, 'At least one of the following must be provided: vpc_id, vpc_name, cidr or tags.'):
boto_vpc.get_id(**conn_parameters)
@mock_ec2
def test_get_vpc_id_method_when_more_than_one_vpc_is_matched_raises_a_salt_command_execution_error(self):
'''
Tests getting vpc id when more than one vpc matches the criteria
'''
self._create_vpc(name='vpc-test1')
self._create_vpc(name='vpc-test2')
with self.assertRaisesRegexp(CommandExecutionError, 'Found more than one VPC matching the criteria.'):
boto_vpc.get_id(cidr=u'10.0.0.0/24', **conn_parameters)
@mock_ec2
def test_that_when_creating_a_vpc_succeeds_the_create_vpc_method_returns_true(self):
'''
Tests that the VPC is created successfully.
'''
vpc_creation_result = boto_vpc.create(cidr_block, **conn_parameters)
self.assertTrue(vpc_creation_result)
@mock_ec2
def test_that_when_creating_a_vpc_and_specifying_a_vpc_name_succeeds_the_create_vpc_method_returns_true(self):
'''
Tests that the VPC is created successfully when a name is specified.
'''
vpc_creation_result = boto_vpc.create(cidr_block, vpc_name='test', **conn_parameters)
self.assertTrue(vpc_creation_result)
@mock_ec2
def test_that_when_creating_a_vpc_and_specifying_tags_succeeds_the_create_vpc_method_returns_true(self):
'''
Tests that the VPC is created successfully when tags are specified.
'''
vpc_creation_result = boto_vpc.create(cidr_block, tags={'test': 'value'}, **conn_parameters)
self.assertTrue(vpc_creation_result)
@mock_ec2
@skipIf(True, 'Disabled pending https://github.com/spulec/moto/issues/493')
def test_that_when_creating_a_vpc_fails_the_create_vpc_method_returns_false(self):
'''
Tests that a failed creation returns False.
'''
with patch('moto.ec2.models.VPCBackend.create_vpc', side_effect=BotoServerError(400, 'Mocked error')):
vpc_creation_result = boto_vpc.create(cidr_block, **conn_parameters)
self.assertFalse(vpc_creation_result['created'])
self.assertTrue('error' in vpc_creation_result)
@mock_ec2
def test_that_when_deleting_an_existing_vpc_the_delete_vpc_method_returns_true(self):
'''
Tests deleting an existing vpc
'''
vpc = self._create_vpc()
vpc_deletion_result = boto_vpc.delete(vpc.id, **conn_parameters)
self.assertTrue(vpc_deletion_result)
@mock_ec2
def test_that_when_deleting_a_non_existent_vpc_the_delete_vpc_method_returns_false(self):
'''
Tests deleting a non-existent vpc
'''
delete_vpc_result = boto_vpc.delete('1234', **conn_parameters)
self.assertFalse(delete_vpc_result['deleted'])
@mock_ec2
def test_that_when_describing_vpc_by_id_it_returns_the_dict_of_properties_returns_true(self):
'''
Tests describing parameters via vpc id if vpc exist
'''
vpc = self._create_vpc(name='test', tags={'test': 'testvalue'})
describe_vpc = boto_vpc.describe(vpc_id=vpc.id, **conn_parameters)
vpc_properties = dict(id=vpc.id,
cidr_block=six.text_type(cidr_block),
is_default=None,
state=u'available',
tags={u'Name': u'test', u'test': u'testvalue'},
dhcp_options_id=u'dopt-7a8b9c2d',
instance_tenancy=u'default')
self.assertEqual(describe_vpc, {'vpc': vpc_properties})
@mock_ec2
def test_that_when_describing_vpc_by_id_it_returns_the_dict_of_properties_returns_false(self):
'''
Tests describing parameters via vpc id if vpc does not exist
'''
vpc = self._create_vpc(name='test', tags={'test': 'testvalue'})
describe_vpc = boto_vpc.describe(vpc_id='vpc-fake', **conn_parameters)
self.assertFalse(describe_vpc['vpc'])
@mock_ec2
@skipIf(True, 'Disabled pending https://github.com/spulec/moto/issues/493')
def test_that_when_describing_vpc_by_id_on_connection_error_it_returns_error(self):
'''
Tests describing parameters failure
'''
vpc = self._create_vpc(name='test', tags={'test': 'testvalue'})
with patch('moto.ec2.models.VPCBackend.get_all_vpcs',
side_effect=BotoServerError(400, 'Mocked error')):
describe_result = boto_vpc.describe(vpc_id=vpc.id, **conn_parameters)
self.assertTrue('error' in describe_result)
@mock_ec2
def test_that_when_describing_vpc_but_providing_no_vpc_id_the_describe_method_raises_a_salt_invocation_error(self):
'''
Tests describing vpc without vpc id
'''
with self.assertRaisesRegexp(SaltInvocationError,
'A valid vpc id or name needs to be specified.'):
boto_vpc.describe(vpc_id=None, **conn_parameters)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
@skipIf(_has_required_moto() is False, 'The moto version must be >= to version {0}'.format(required_moto_version))
class BotoVpcSubnetsTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
@mock_ec2
def test_get_subnet_association_single_subnet(self):
'''
tests that, given a single subnet id, the VPC ID is returned. The test
is valuable because it passes a string as the subnets argument, as
opposed to a list.
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
subnet_association = boto_vpc.get_subnet_association(subnets=subnet.id,
**conn_parameters)
self.assertEqual(vpc.id, subnet_association['vpc_id'])
@mock_ec2
def test_get_subnet_association_multiple_subnets_same_vpc(self):
'''
tests that, given multiple subnet ids in the same VPC, the VPC ID is
returned.
'''
vpc = self._create_vpc()
subnet_a = self._create_subnet(vpc.id, '10.0.0.0/25')
subnet_b = self._create_subnet(vpc.id, '10.0.0.128/25')
subnet_association = boto_vpc.get_subnet_association([subnet_a.id, subnet_b.id],
**conn_parameters)
self.assertEqual(vpc.id, subnet_association['vpc_id'])
@mock_ec2
def test_get_subnet_association_multiple_subnets_different_vpc(self):
'''
tests that, given subnet ids in different VPCs, all of the VPC IDs are
returned.
'''
vpc_a = self._create_vpc()
vpc_b = self.conn.create_vpc(cidr_block)
subnet_a = self._create_subnet(vpc_a.id, '10.0.0.0/24')
subnet_b = self._create_subnet(vpc_b.id, '10.0.0.0/24')
subnet_association = boto_vpc.get_subnet_association([subnet_a.id, subnet_b.id],
**conn_parameters)
self.assertEqual(set(subnet_association['vpc_ids']), set([vpc_a.id, vpc_b.id]))
@mock_ec2
def test_that_when_creating_a_subnet_succeeds_the_create_subnet_method_returns_true(self):
'''
Tests creating a subnet successfully
'''
vpc = self._create_vpc()
subnet_creation_result = boto_vpc.create_subnet(vpc.id, '10.0.0.0/24', **conn_parameters)
self.assertTrue(subnet_creation_result['created'])
self.assertTrue('id' in subnet_creation_result)
@mock_ec2
def test_that_when_creating_a_subnet_and_specifying_a_name_succeeds_the_create_subnet_method_returns_true(self):
'''
Tests creating a subnet successfully when specifying a name
'''
vpc = self._create_vpc()
subnet_creation_result = boto_vpc.create_subnet(vpc.id, '10.0.0.0/24', subnet_name='test', **conn_parameters)
self.assertTrue(subnet_creation_result['created'])
@mock_ec2
def test_that_when_creating_a_subnet_and_specifying_tags_succeeds_the_create_subnet_method_returns_true(self):
'''
Tests creating a subnet successfully when specifying a tag
'''
vpc = self._create_vpc()
subnet_creation_result = boto_vpc.create_subnet(vpc.id, '10.0.0.0/24', tags={'test': 'testvalue'},
**conn_parameters)
self.assertTrue(subnet_creation_result['created'])
@mock_ec2
@skipIf(True, 'Disabled pending https://github.com/spulec/moto/issues/493')
def test_that_when_creating_a_subnet_fails_the_create_subnet_method_returns_error(self):
'''
Tests creating a subnet failure
'''
vpc = self._create_vpc()
with patch('moto.ec2.models.SubnetBackend.create_subnet', side_effect=BotoServerError(400, 'Mocked error')):
subnet_creation_result = boto_vpc.create_subnet(vpc.id, '10.0.0.0/24', **conn_parameters)
self.assertTrue('error' in subnet_creation_result)
@mock_ec2
def test_that_when_deleting_an_existing_subnet_the_delete_subnet_method_returns_true(self):
'''
Tests deleting an existing subnet
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
subnet_deletion_result = boto_vpc.delete_subnet(subnet_id=subnet.id, **conn_parameters)
self.assertTrue(subnet_deletion_result['deleted'])
@mock_ec2
def test_that_when_deleting_a_non_existent_subnet_the_delete_subnet_method_returns_an_error(self):
'''
Tests deleting a subnet that doesn't exist
'''
delete_subnet_result = boto_vpc.delete_subnet(subnet_id='1234', **conn_parameters)
self.assertTrue('error' in delete_subnet_result)
@mock_ec2
def test_that_when_checking_if_a_subnet_exists_by_id_the_subnet_exists_method_returns_true(self):
'''
Tests checking if a subnet exists when it does exist
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
subnet_exists_result = boto_vpc.subnet_exists(subnet_id=subnet.id, **conn_parameters)
self.assertTrue(subnet_exists_result['exists'])
@mock_ec2
def test_that_when_a_subnet_does_not_exist_the_subnet_exists_method_returns_false(self):
'''
Tests checking if a subnet exists which doesn't exist
'''
subnet_exists_result = boto_vpc.subnet_exists('fake', **conn_parameters)
self.assertFalse(subnet_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_subnet_exists_by_name_the_subnet_exists_method_returns_true(self):
'''
Tests checking subnet existence by name
'''
vpc = self._create_vpc()
self._create_subnet(vpc.id, name='test')
subnet_exists_result = boto_vpc.subnet_exists(name='test', **conn_parameters)
self.assertTrue(subnet_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_subnet_exists_by_name_the_subnet_does_not_exist_the_subnet_method_returns_false(self):
'''
Tests checking subnet existence by name when it doesn't exist
'''
vpc = self._create_vpc()
self._create_subnet(vpc.id)
subnet_exists_result = boto_vpc.subnet_exists(name='test', **conn_parameters)
self.assertFalse(subnet_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_subnet_exists_by_tags_the_subnet_exists_method_returns_true(self):
'''
Tests checking subnet existence by tag
'''
vpc = self._create_vpc()
self._create_subnet(vpc.id, tags={'test': 'testvalue'})
subnet_exists_result = boto_vpc.subnet_exists(tags={'test': 'testvalue'}, **conn_parameters)
self.assertTrue(subnet_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_subnet_exists_by_tags_the_subnet_does_not_exist_the_subnet_method_returns_false(self):
'''
Tests checking subnet existence by tag when subnet doesn't exist
'''
vpc = self._create_vpc()
self._create_subnet(vpc.id)
subnet_exists_result = boto_vpc.subnet_exists(tags={'test': 'testvalue'}, **conn_parameters)
self.assertFalse(subnet_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_a_subnet_exists_but_providing_no_filters_the_subnet_exists_method_raises_a_salt_invocation_error(self):
'''
Tests checking subnet existence without any filters
'''
with self.assertRaisesRegexp(SaltInvocationError,
'At least one of the following must be specified: subnet id, cidr, subnet_name, tags, or zones.'):
boto_vpc.subnet_exists(**conn_parameters)
@mock_ec2
def test_that_describe_subnet_by_id_for_existing_subnet_returns_correct_data(self):
'''
Tests describing a subnet by id.
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
describe_subnet_results = boto_vpc.describe_subnet(subnet_id=subnet.id)
self.assertEqual(set(describe_subnet_results['subnet'].keys()),
set(['id', 'cidr_block', 'availability_zone', 'tags']))
@mock_ec2
def test_that_describe_subnet_by_id_for_non_existent_subnet_returns_none(self):
'''
Tests describing a non-existent subnet by id.
'''
vpc = self._create_vpc()
describe_subnet_results = boto_vpc.describe_subnet(subnet_id='subnet-a1b2c3')
self.assertEqual(describe_subnet_results['subnet'], None)
@mock_ec2
def test_that_describe_subnet_by_name_for_existing_subnet_returns_correct_data(self):
'''
Tests describing a subnet by name.
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id, name='test')
describe_subnet_results = boto_vpc.describe_subnet(subnet_name='test')
self.assertEqual(set(describe_subnet_results['subnet'].keys()),
set(['id', 'cidr_block', 'availability_zone', 'tags']))
@mock_ec2
def test_that_describe_subnet_by_name_for_non_existent_subnet_returns_none(self):
'''
Tests describing a non-existent subnet by name.
'''
vpc = self._create_vpc()
describe_subnet_results = boto_vpc.describe_subnet(subnet_name='test')
self.assertEqual(describe_subnet_results['subnet'], None)
@mock_ec2
def test_that_describe_subnets_by_id_for_existing_subnet_returns_correct_data(self):
'''
Tests describing multiple subnets by id.
'''
vpc = self._create_vpc()
subnet1 = self._create_subnet(vpc.id)
subnet2 = self._create_subnet(vpc.id)
describe_subnet_results = boto_vpc.describe_subnets(subnet_ids=[subnet1.id, subnet2.id])
self.assertEqual(len(describe_subnet_results['subnets']), 2)
self.assertEqual(set(describe_subnet_results['subnets'][0].keys()),
set(['id', 'cidr_block', 'availability_zone', 'tags']))
@mock_ec2
def test_that_describe_subnets_by_name_for_existing_subnets_returns_correct_data(self):
'''
Tests describing multiple subnets by name.
'''
vpc = self._create_vpc()
subnet1 = self._create_subnet(vpc.id, name='subnet1')
subnet2 = self._create_subnet(vpc.id, name='subnet2')
describe_subnet_results = boto_vpc.describe_subnets(subnet_names=['subnet1', 'subnet2'])
self.assertEqual(len(describe_subnet_results['subnets']), 2)
self.assertEqual(set(describe_subnet_results['subnets'][0].keys()),
set(['id', 'cidr_block', 'availability_zone', 'tags']))
@mock_ec2
def test_create_subnet_passes_availability_zone(self):
'''
Tests that the availability_zone kwarg is passed on to _create_resource
'''
vpc = self._create_vpc()
self._create_subnet(vpc.id, name='subnet1', availability_zone='us-east-1a')
describe_subnet_results = boto_vpc.describe_subnets(subnet_names=['subnet1'])
self.assertEqual(describe_subnet_results['subnets'][0]['availability_zone'], 'us-east-1a')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
class BotoVpcInternetGatewayTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
@mock_ec2
def test_that_when_creating_an_internet_gateway_the_create_internet_gateway_method_returns_true(self):
'''
Tests creating an internet gateway successfully (with no vpc id or name)
'''
igw_creation_result = boto_vpc.create_internet_gateway()
self.assertTrue(igw_creation_result.get('created'))
@mock_ec2
def test_that_when_creating_an_internet_gateway_with_non_existent_vpc_the_create_internet_gateway_method_returns_an_error(self):
'''
Tests that creating an internet gateway for a non-existent VPC fails.
'''
igw_creation_result = boto_vpc.create_internet_gateway(vpc_name='non-existent-vpc')
self.assertTrue('error' in igw_creation_result)
@mock_ec2
def test_that_when_creating_an_internet_gateway_with_vpc_name_specified_the_create_internet_gateway_method_returns_true(self):
'''
Tests creating an internet gateway with vpc name specified.
'''
self._create_vpc(name='test-vpc')
igw_creation_result = boto_vpc.create_internet_gateway(vpc_name='test-vpc')
self.assertTrue(igw_creation_result.get('created'))
@mock_ec2
def test_that_when_creating_an_internet_gateway_with_vpc_id_specified_the_create_internet_gateway_method_returns_true(self):
'''
Tests creating an internet gateway with vpc id specified.
'''
vpc = self._create_vpc()
igw_creation_result = boto_vpc.create_internet_gateway(vpc_id=vpc.id)
self.assertTrue(igw_creation_result.get('created'))
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
class BotoVpcCustomerGatewayTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_customer_gateway_the_create_customer_gateway_method_returns_true(self):
'''
Tests creating a customer gateway successfully
'''
gw_creation_result = boto_vpc.create_customer_gateway('ipsec.1', '10.1.1.1', None)
self.assertTrue(gw_creation_result.get('created'))
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_checking_if_a_customer_gateway_exists_by_id_the_customer_gateway_exists_method_returns_true(self):
'''
Tests checking if a customer gateway exists when it does exist
'''
gw_creation_result = boto_vpc.create_customer_gateway('ipsec.1', '10.1.1.1', None)
gw_exists_result = boto_vpc.customer_gateway_exists(customer_gateway_id=gw_creation_result['id'])
self.assertTrue(gw_exists_result['exists'])
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_a_customer_gateway_does_not_exist_the_customer_gateway_exists_method_returns_false(self):
'''
Tests checking a customer gateway that doesn't exist
'''
gw_exists_result = boto_vpc.customer_gateway_exists('fake')
self.assertFalse(gw_exists_result['exists'])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
@skipIf(_has_required_moto() is False, 'The moto version must be >= to version {0}'.format(required_moto_version))
class BotoVpcDHCPOptionsTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
@mock_ec2
def test_that_when_creating_dhcp_options_succeeds_the_create_dhcp_options_method_returns_true(self):
'''
Tests creating dhcp options successfully
'''
dhcp_options_creation_result = boto_vpc.create_dhcp_options(**dhcp_options_parameters)
self.assertTrue(dhcp_options_creation_result['created'])
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_dhcp_options_and_specifying_a_name_succeeds_the_create_dhcp_options_method_returns_true(
self):
'''
Tests creating dhcp options with a name successfully
'''
dhcp_options_creation_result = boto_vpc.create_dhcp_options(dhcp_options_name='test',
**dhcp_options_parameters)
self.assertTrue(dhcp_options_creation_result['created'])
@mock_ec2
def test_that_when_creating_dhcp_options_and_specifying_tags_succeeds_the_create_dhcp_options_method_returns_true(
self):
'''
Tests creating dhcp options with tags successfully
'''
dhcp_options_creation_result = boto_vpc.create_dhcp_options(tags={'test': 'testvalue'},
**dhcp_options_parameters)
self.assertTrue(dhcp_options_creation_result['created'])
@mock_ec2
@skipIf(True, 'Disabled pending https://github.com/spulec/moto/issues/493')
def test_that_when_creating_dhcp_options_fails_the_create_dhcp_options_method_returns_error(self):
'''
Tests creating dhcp options failure
'''
with patch('moto.ec2.models.DHCPOptionsSetBackend.create_dhcp_options',
side_effect=BotoServerError(400, 'Mocked error')):
dhcp_options_creation_result = boto_vpc.create_dhcp_options(**dhcp_options_parameters)
self.assertTrue('error' in dhcp_options_creation_result)
@mock_ec2
def test_that_when_associating_an_existing_dhcp_options_set_to_an_existing_vpc_the_associate_dhcp_options_method_returns_true(
self):
'''
Tests associating existing dhcp options successfully
'''
vpc = self._create_vpc()
dhcp_options = self._create_dhcp_options()
dhcp_options_association_result = boto_vpc.associate_dhcp_options_to_vpc(dhcp_options.id, vpc.id,
**conn_parameters)
self.assertTrue(dhcp_options_association_result['associated'])
@mock_ec2
def test_that_when_associating_a_non_existent_dhcp_options_set_to_an_existing_vpc_the_associate_dhcp_options_method_returns_error(
self):
'''
Tests that associating a non-existent dhcp options set returns an error
'''
vpc = self._create_vpc()
dhcp_options_association_result = boto_vpc.associate_dhcp_options_to_vpc('fake', vpc.id, **conn_parameters)
self.assertTrue('error' in dhcp_options_association_result)
@mock_ec2
def test_that_when_associating_an_existing_dhcp_options_set_to_a_non_existent_vpc_the_associate_dhcp_options_method_returns_false(
self):
'''
Tests associating existing dhcp options to a non-existent vpc
'''
dhcp_options = self._create_dhcp_options()
dhcp_options_association_result = boto_vpc.associate_dhcp_options_to_vpc(dhcp_options.id, 'fake',
**conn_parameters)
self.assertTrue('error' in dhcp_options_association_result)
@mock_ec2
def test_that_when_creating_and_associating_dhcp_options_set_to_an_existing_vpc_succeeds_the_associate_new_dhcp_options_method_returns_true(
self):
'''
Tests creation/association of dhcp options to an existing vpc successfully
'''
vpc = self._create_vpc()
dhcp_creation_and_association_result = boto_vpc.associate_new_dhcp_options_to_vpc(vpc.id,
**dhcp_options_parameters)
self.assertTrue(dhcp_creation_and_association_result['created'])
@mock_ec2
@skipIf(True, 'Disabled pending https://github.com/spulec/moto/issues/493')
def test_that_when_creating_and_associating_dhcp_options_set_to_an_existing_vpc_fails_creating_the_dhcp_options_the_associate_new_dhcp_options_method_raises_exception(
self):
'''
Tests creation failure during creation/association of dhcp options to an existing vpc
'''
vpc = self._create_vpc()
with patch('moto.ec2.models.DHCPOptionsSetBackend.create_dhcp_options',
side_effect=BotoServerError(400, 'Mocked error')):
r = boto_vpc.associate_new_dhcp_options_to_vpc(vpc.id, **dhcp_options_parameters)
self.assertTrue('error' in r)
@mock_ec2
@skipIf(True, 'Disabled pending https://github.com/spulec/moto/issues/493')
def test_that_when_creating_and_associating_dhcp_options_set_to_an_existing_vpc_fails_associating_the_dhcp_options_the_associate_new_dhcp_options_method_raises_exception(self):
'''
Tests association failure during creation/association of dhcp options to an existing vpc
'''
vpc = self._create_vpc()
with patch('moto.ec2.models.DHCPOptionsSetBackend.associate_dhcp_options',
side_effect=BotoServerError(400, 'Mocked error')):
r = boto_vpc.associate_new_dhcp_options_to_vpc(vpc.id, **dhcp_options_parameters)
self.assertTrue('error' in r)
@mock_ec2
def test_that_when_creating_and_associating_dhcp_options_set_to_a_non_existent_vpc_the_dhcp_options_the_associate_new_dhcp_options_method_returns_false(
self):
'''
Tests creation/association of dhcp options to non-existent vpc
'''
r = boto_vpc.associate_new_dhcp_options_to_vpc('fake', **dhcp_options_parameters)
self.assertTrue('error' in r)
@mock_ec2
def test_that_when_dhcp_options_exists_the_dhcp_options_exists_method_returns_true(self):
'''
Tests existence of dhcp options successfully
'''
dhcp_options = self._create_dhcp_options()
dhcp_options_exists_result = boto_vpc.dhcp_options_exists(dhcp_options.id, **conn_parameters)
self.assertTrue(dhcp_options_exists_result['exists'])
@mock_ec2
def test_that_when_dhcp_options_do_not_exist_the_dhcp_options_exists_method_returns_false(self):
'''
Tests existence of dhcp options failure
'''
r = boto_vpc.dhcp_options_exists('fake', **conn_parameters)
self.assertFalse(r['exists'])
@mock_ec2
def test_that_when_checking_if_dhcp_options_exists_but_providing_no_filters_the_dhcp_options_exists_method_raises_a_salt_invocation_error(self):
'''
Tests checking dhcp option existence with no filters
'''
with self.assertRaisesRegexp(SaltInvocationError, 'At least one of the following must be provided: id, name, or tags.'):
boto_vpc.dhcp_options_exists(**conn_parameters)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
class BotoVpcNetworkACLTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_network_acl_for_an_existing_vpc_the_create_network_acl_method_returns_true(self):
'''
Tests creation of network acl with existing vpc
'''
vpc = self._create_vpc()
network_acl_creation_result = boto_vpc.create_network_acl(vpc.id, **conn_parameters)
self.assertTrue(network_acl_creation_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_network_acl_for_an_existing_vpc_and_specifying_a_name_the_create_network_acl_method_returns_true(
self):
'''
Tests creation of network acl via name with an existing vpc
'''
vpc = self._create_vpc()
network_acl_creation_result = boto_vpc.create_network_acl(vpc.id, network_acl_name='test', **conn_parameters)
self.assertTrue(network_acl_creation_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_network_acl_for_an_existing_vpc_and_specifying_tags_the_create_network_acl_method_returns_true(
self):
'''
Tests creation of network acl via tags with an existing vpc
'''
vpc = self._create_vpc()
network_acl_creation_result = boto_vpc.create_network_acl(vpc.id, tags={'test': 'testvalue'}, **conn_parameters)
self.assertTrue(network_acl_creation_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_network_acl_for_a_non_existent_vpc_the_create_network_acl_method_returns_an_error(self):
'''
Tests creation of network acl with a non-existent vpc
'''
network_acl_creation_result = boto_vpc.create_network_acl('fake', **conn_parameters)
self.assertTrue('error' in network_acl_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_network_acl_fails_the_create_network_acl_method_returns_false(self):
'''
Tests creation of network acl failure
'''
vpc = self._create_vpc()
with patch('moto.ec2.models.NetworkACLBackend.create_network_acl',
side_effect=BotoServerError(400, 'Mocked error')):
network_acl_creation_result = boto_vpc.create_network_acl(vpc.id, **conn_parameters)
self.assertFalse(network_acl_creation_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_an_existing_network_acl_the_delete_network_acl_method_returns_true(self):
'''
Tests deletion of existing network acl successfully
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
network_acl_deletion_result = boto_vpc.delete_network_acl(network_acl.id, **conn_parameters)
self.assertTrue(network_acl_deletion_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_a_non_existent_network_acl_the_delete_network_acl_method_returns_an_error(self):
'''
Tests deleting a non-existent network acl
'''
network_acl_deletion_result = boto_vpc.delete_network_acl('fake', **conn_parameters)
self.assertTrue('error' in network_acl_deletion_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_a_network_acl_exists_the_network_acl_exists_method_returns_true(self):
'''
Tests existence of network acl
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
network_acl_exists_result = boto_vpc.network_acl_exists(network_acl.id, **conn_parameters)
self.assertTrue(network_acl_exists_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_a_network_acl_does_not_exist_the_network_acl_exists_method_returns_false(self):
'''
Tests checking network acl does not exist
'''
network_acl_exists_result = boto_vpc.network_acl_exists('fake', **conn_parameters)
self.assertFalse(network_acl_exists_result['exists'])
@mock_ec2
def test_that_when_checking_if_network_acl_exists_but_providing_no_filters_the_network_acl_exists_method_raises_a_salt_invocation_error(self):
'''
Tests checking existence of network acl with no filters
'''
with self.assertRaisesRegexp(
SaltInvocationError,
'At least one of the following must be provided: id, name, or tags.'
):
boto_vpc.network_acl_exists(**conn_parameters)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_network_acl_entry_successfully_the_create_network_acl_entry_method_returns_true(self):
'''
Tests creating network acl successfully
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
network_acl_entry_creation_result = boto_vpc.create_network_acl_entry(network_acl.id,
*network_acl_entry_parameters,
**conn_parameters)
self.assertTrue(network_acl_entry_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_network_acl_entry_for_a_non_existent_network_acl_the_create_network_acl_entry_method_returns_false(
self):
'''
Tests creating network acl entry for non-existent network acl
'''
network_acl_entry_creation_result = boto_vpc.create_network_acl_entry(*network_acl_entry_parameters,
**conn_parameters)
self.assertFalse(network_acl_entry_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_replacing_a_network_acl_entry_successfully_the_replace_network_acl_entry_method_returns_true(
self):
'''
Tests replacing network acl entry successfully
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
self._create_network_acl_entry(network_acl.id, *network_acl_entry_parameters)
network_acl_entry_creation_result = boto_vpc.replace_network_acl_entry(network_acl.id,
*network_acl_entry_parameters,
**conn_parameters)
self.assertTrue(network_acl_entry_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_replacing_a_network_acl_entry_for_a_non_existent_network_acl_the_replace_network_acl_entry_method_returns_false(
self):
'''
Tests replacing a network acl entry for a non-existent network acl
'''
network_acl_entry_creation_result = boto_vpc.replace_network_acl_entry(*network_acl_entry_parameters,
**conn_parameters)
self.assertFalse(network_acl_entry_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_an_existing_network_acl_entry_the_delete_network_acl_entry_method_returns_true(self):
'''
Tests deleting existing network acl entry successfully
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
network_acl_entry = self._create_network_acl_entry(network_acl.id, *network_acl_entry_parameters)
network_acl_entry_deletion_result = boto_vpc.delete_network_acl_entry(network_acl_entry.id, 100,
**conn_parameters)
self.assertTrue(network_acl_entry_deletion_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_a_non_existent_network_acl_entry_the_delete_network_acl_entry_method_returns_false(
self):
'''
Tests deleting a non-existent network acl entry
'''
network_acl_entry_deletion_result = boto_vpc.delete_network_acl_entry('fake', 100,
**conn_parameters)
self.assertFalse(network_acl_entry_deletion_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_associating_an_existing_network_acl_to_an_existing_subnet_the_associate_network_acl_method_returns_true(
self):
'''
Tests association of existing network acl to existing subnet successfully
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
subnet = self._create_subnet(vpc.id)
network_acl_association_result = boto_vpc.associate_network_acl_to_subnet(network_acl.id, subnet.id,
**conn_parameters)
self.assertTrue(network_acl_association_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_associating_a_non_existent_network_acl_to_an_existing_subnet_the_associate_network_acl_method_returns_an_error(
self):
'''
Tests associating a non-existent network acl to existing subnet failure
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
network_acl_association_result = boto_vpc.associate_network_acl_to_subnet('fake', subnet.id,
**conn_parameters)
self.assertTrue('error' in network_acl_association_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_associating_an_existing_network_acl_to_a_non_existent_subnet_the_associate_network_acl_method_returns_false(
self):
'''
Tests associating an existing network acl to a non-existent subnet
'''
vpc = self._create_vpc()
network_acl = self._create_network_acl(vpc.id)
network_acl_association_result = boto_vpc.associate_network_acl_to_subnet(network_acl.id, 'fake',
**conn_parameters)
self.assertFalse(network_acl_association_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_and_associating_a_network_acl_to_a_subnet_succeeds_the_associate_new_network_acl_to_subnet_method_returns_true(
self):
'''
Tests creating a new network acl and associating it to a subnet
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
network_acl_creation_and_association_result = boto_vpc.associate_new_network_acl_to_subnet(vpc.id, subnet.id,
**conn_parameters)
self.assertTrue(network_acl_creation_and_association_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_and_associating_a_network_acl_to_a_subnet_and_specifying_a_name_succeeds_the_associate_new_network_acl_to_subnet_method_returns_true(
self):
'''
Tests creation/association of a network acl to subnet via name successfully
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
network_acl_creation_and_association_result = boto_vpc.associate_new_network_acl_to_subnet(vpc.id, subnet.id,
network_acl_name='test',
**conn_parameters)
self.assertTrue(network_acl_creation_and_association_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_and_associating_a_network_acl_to_a_subnet_and_specifying_tags_succeeds_the_associate_new_network_acl_to_subnet_method_returns_true(
self):
'''
Tests creation/association of a network acl to a subnet via tags successfully
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
network_acl_creation_and_association_result = boto_vpc.associate_new_network_acl_to_subnet(vpc.id, subnet.id,
tags={
'test': 'testvalue'},
**conn_parameters)
self.assertTrue(network_acl_creation_and_association_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_and_associating_a_network_acl_to_a_non_existent_subnet_the_associate_new_network_acl_to_subnet_method_returns_false(
self):
'''
Tests creation/association of a network acl to a non-existent subnet
'''
vpc = self._create_vpc()
network_acl_creation_and_association_result = boto_vpc.associate_new_network_acl_to_subnet(vpc.id, 'fake',
**conn_parameters)
self.assertFalse(network_acl_creation_and_association_result)
@mock_ec2
#@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_and_associating_a_network_acl_to_a_non_existent_vpc_the_associate_new_network_acl_to_subnet_method_returns_an_error(
self):
'''
Tests creation/association of a network acl to a non-existent vpc
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
network_acl_creation_and_association_result = boto_vpc.associate_new_network_acl_to_subnet('fake', subnet.id,
**conn_parameters)
self.assertTrue('error' in network_acl_creation_and_association_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_disassociating_network_acl_succeeds_the_disassociate_network_acl_method_should_return_true(self):
'''
Tests disassociation of network acl success
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
dhcp_disassociate_result = boto_vpc.disassociate_network_acl(subnet.id, vpc_id=vpc.id, **conn_parameters)
self.assertTrue(dhcp_disassociate_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_disassociating_network_acl_for_a_non_existent_vpc_the_disassociate_network_acl_method_should_return_false(
self):
'''
Tests disassociation of network acl from non-existent vpc
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
        acl_disassociate_result = boto_vpc.disassociate_network_acl(subnet.id, vpc_id='fake', **conn_parameters)
        self.assertFalse(acl_disassociate_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_disassociating_network_acl_for_a_non_existent_subnet_the_disassociate_network_acl_method_should_return_false(
self):
'''
Tests disassociation of network acl from non-existent subnet
'''
vpc = self._create_vpc()
        acl_disassociate_result = boto_vpc.disassociate_network_acl('fake', vpc_id=vpc.id, **conn_parameters)
        self.assertFalse(acl_disassociate_result)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(HAS_BOTO is False, 'The boto module must be installed.')
@skipIf(HAS_MOTO is False, 'The moto module must be installed.')
@skipIf(_has_required_boto() is False, 'The boto module must be greater than'
' or equal to version {0}'
.format(required_boto_version))
class BotoVpcRouteTablesTestCase(BotoVpcTestCaseBase, BotoVpcTestCaseMixin):
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_route_table_succeeds_the_create_route_table_method_returns_true(self):
'''
Tests creating route table successfully
'''
vpc = self._create_vpc()
route_table_creation_result = boto_vpc.create_route_table(vpc.id, **conn_parameters)
self.assertTrue(route_table_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_route_table_on_a_non_existent_vpc_the_create_route_table_method_returns_false(self):
'''
Tests creating route table on a non-existent vpc
'''
route_table_creation_result = boto_vpc.create_route_table('fake', **conn_parameters)
        self.assertFalse(route_table_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_a_route_table_succeeds_the_delete_route_table_method_returns_true(self):
'''
Tests deleting route table successfully
'''
vpc = self._create_vpc()
route_table = self._create_route_table(vpc.id)
route_table_deletion_result = boto_vpc.delete_route_table(route_table.id, **conn_parameters)
self.assertTrue(route_table_deletion_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_a_non_existent_route_table_the_delete_route_table_method_returns_false(self):
'''
Tests deleting non-existent route table
'''
route_table_deletion_result = boto_vpc.delete_route_table('fake', **conn_parameters)
self.assertFalse(route_table_deletion_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_route_table_exists_the_route_table_exists_method_returns_true(self):
'''
Tests existence of route table success
'''
vpc = self._create_vpc()
route_table = self._create_route_table(vpc.id)
route_table_existence_result = boto_vpc.route_table_exists(route_table.id, **conn_parameters)
self.assertTrue(route_table_existence_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_route_table_does_not_exist_the_route_table_exists_method_returns_false(self):
'''
Tests existence of route table failure
'''
route_table_existence_result = boto_vpc.route_table_exists('fake', **conn_parameters)
self.assertFalse(route_table_existence_result)
@mock_ec2
def test_that_when_checking_if_a_route_table_exists_but_providing_no_filters_the_route_table_exists_method_raises_a_salt_invocation_error(self):
'''
Tests checking route table without filters
'''
with self.assertRaisesRegexp(
SaltInvocationError,
'At least one of the following must be provided: id, name, or tags.'
):
            boto_vpc.route_table_exists(**conn_parameters)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_associating_a_route_table_succeeds_the_associate_route_table_method_should_return_the_association_id(
self):
'''
Tests associating route table successfully
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
route_table = self._create_route_table(vpc.id)
association_id = boto_vpc.associate_route_table(route_table.id, subnet.id, **conn_parameters)
self.assertTrue(association_id)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_associating_a_route_table_with_a_non_existent_route_table_the_associate_route_table_method_should_return_false(
self):
'''
Tests associating of route table to non-existent route table
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
association_id = boto_vpc.associate_route_table('fake', subnet.id, **conn_parameters)
self.assertFalse(association_id)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_associating_a_route_table_with_a_non_existent_subnet_the_associate_route_table_method_should_return_false(
self):
'''
Tests associating of route table with non-existent subnet
'''
vpc = self._create_vpc()
route_table = self._create_route_table(vpc.id)
association_id = boto_vpc.associate_route_table(route_table.id, 'fake', **conn_parameters)
self.assertFalse(association_id)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_disassociating_a_route_table_succeeds_the_disassociate_route_table_method_should_return_true(
self):
'''
        Tests disassociating a route table successfully
'''
vpc = self._create_vpc()
subnet = self._create_subnet(vpc.id)
route_table = self._create_route_table(vpc.id)
association_id = self._associate_route_table(route_table.id, subnet.id)
        route_table_disassociate_result = boto_vpc.disassociate_route_table(association_id, **conn_parameters)
        self.assertTrue(route_table_disassociate_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_route_succeeds_the_create_route_method_should_return_true(self):
'''
Tests successful creation of a route
'''
vpc = self._create_vpc()
route_table = self._create_route_table(vpc.id)
route_creation_result = boto_vpc.create_route(route_table.id, cidr_block, **conn_parameters)
self.assertTrue(route_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_creating_a_route_with_a_non_existent_route_table_the_create_route_method_should_return_false(
self):
'''
Tests creation of route on non-existent route table
'''
route_creation_result = boto_vpc.create_route('fake', cidr_block, **conn_parameters)
self.assertFalse(route_creation_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_a_route_succeeds_the_delete_route_method_should_return_true(self):
'''
Tests deleting route from route table
'''
vpc = self._create_vpc()
route_table = self._create_route_table(vpc.id)
route_deletion_result = boto_vpc.delete_route(route_table.id, cidr_block, **conn_parameters)
self.assertTrue(route_deletion_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_deleting_a_route_with_a_non_existent_route_table_the_delete_route_method_should_return_false(
self):
'''
Tests deleting route from a non-existent route table
'''
route_deletion_result = boto_vpc.delete_route('fake', cidr_block, **conn_parameters)
self.assertFalse(route_deletion_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_replacing_a_route_succeeds_the_replace_route_method_should_return_true(self):
'''
Tests replacing route successfully
'''
vpc = self._create_vpc()
route_table = self._create_route_table(vpc.id)
route_replacing_result = boto_vpc.replace_route(route_table.id, cidr_block, **conn_parameters)
self.assertTrue(route_replacing_result)
@mock_ec2
@skipIf(True, 'Moto has not implemented this feature. Skipping for now.')
def test_that_when_replacing_a_route_with_a_non_existent_route_table_the_replace_route_method_should_return_false(
self):
'''
Tests replacing a route when the route table doesn't exist
'''
route_replacing_result = boto_vpc.replace_route('fake', cidr_block, **conn_parameters)
self.assertFalse(route_replacing_result)
if __name__ == '__main__':
from integration import run_tests # pylint: disable=import-error
run_tests(BotoVpcTestCase, needs_daemon=False)
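The recurring pattern above — each test decorated with `@mock_ec2` plus a `@skipIf` that parks it until moto implements the feature — can be sketched with the standard library alone. The `FEATURE_IMPLEMENTED` flag and `DemoTestCase` below are illustrative stand-ins, not part of the salt suite:

```python
import io
import unittest

# Illustrative flag standing in for "moto has implemented this feature".
FEATURE_IMPLEMENTED = False

class DemoTestCase(unittest.TestCase):
    @unittest.skipIf(not FEATURE_IMPLEMENTED,
                     'Backend has not implemented this feature. Skipping for now.')
    def test_feature(self):
        # Never runs while the skip condition holds.
        self.fail('would exercise the real backend here')

    def test_always_runs(self):
        self.assertTrue(True)

# Run the case quietly and inspect the result object.
suite = unittest.TestLoader().loadTestsFromTestCase(DemoTestCase)
runner = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0)
outcome = runner.run(suite)
```

Skipped tests are recorded on the result object rather than counted as failures, which is why whole blocks of unimplemented-feature tests can stay in the suite without breaking CI.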
| 41.552679 | 180 | 0.680334 | 8,720 | 69,019 | 4.96078 | 0.041858 | 0.046465 | 0.026192 | 0.033635 | 0.871723 | 0.846826 | 0.809885 | 0.753941 | 0.706968 | 0.652249 | 0 | 0.007956 | 0.240586 | 69,019 | 1,660 | 181 | 41.577711 | 0.817358 | 0.121416 | 0 | 0.569717 | 0 | 0.001089 | 0.104042 | 0.007077 | 0 | 0 | 0 | 0.000602 | 0.12963 | 1 | 0.139434 | false | 0.002179 | 0.021786 | 0 | 0.190632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a8922f76a5fc834d14193aa41f49796f5edd961 | 19,311 | py | Python | test/core_tests/test_indexing.py | joshuawall/amuse | c2034074ee76c08057c4faa96c32044ab40952e9 | [
"Apache-2.0"
] | 1 | 2019-12-28T22:47:51.000Z | 2019-12-28T22:47:51.000Z | test/core_tests/test_indexing.py | joshuawall/amuse | c2034074ee76c08057c4faa96c32044ab40952e9 | [
"Apache-2.0"
] | null | null | null | test/core_tests/test_indexing.py | joshuawall/amuse | c2034074ee76c08057c4faa96c32044ab40952e9 | [
"Apache-2.0"
] | 2 | 2021-11-19T04:41:37.000Z | 2021-11-20T02:11:17.000Z | from amuse.test import amusetest
from amuse.support.interface import InCodeComponentImplementation
from amuse.datamodel.indexing import *
from amuse.datamodel import indexing
class TestIndexing(amusetest.TestCase):
def test1(self):
self.assertEquals(2, number_of_dimensions_after_index(3, 1))
self.assertEquals(3, number_of_dimensions_after_index(3, numpy.s_[0:3]))
self.assertEquals(1, number_of_dimensions_after_index(3, combine_indices(3,2)))
self.assertEquals(0, number_of_dimensions_after_index(3, combine_indices(combine_indices(3,2),1)))
self.assertEquals(3, indexing.number_of_dimensions_after_index(3, numpy.s_[1:2,...,...]))
self.assertEquals(3, indexing.number_of_dimensions_after_index(3, numpy.s_[1:2,:,:]))
def test2(self):
a = numpy.arange(12).reshape(3,4)
self.assertEquals(a[combine_indices(0,1)], a[0][1])
self.assertEquals(a[combine_indices(1,0)], a[1][0])
self.assertTrue(numpy.all(a[combine_indices(1,numpy.s_[0:2])] == a[1][0:2]))
indirect = combine_indices(0,1)
self.assertEquals(number_of_dimensions(a, indirect), 0)
def test3(self):
a = numpy.arange(12).reshape(3,4)
self.assertTrue(a[combine_indices(numpy.s_[0:2],0)].shape, a[0:2][0].shape)
self.assertTrue(numpy.all(a[combine_indices(numpy.s_[0:2],0)] == a[0:2][0]))
def test4(self):
a = numpy.arange(12).reshape(3,4)
direct = a[1][:]
indirect = a[combine_indices(1, indexing.normalize_slices(a[1].shape,numpy.s_[:]))]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test5(self):
a = numpy.arange(12).reshape(3,4)
direct = a[0:2][:]
indirect = a[combine_indices(numpy.s_[0:2],indexing.normalize_slices(a[0:2].shape,numpy.s_[:]))]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test6(self):
a = numpy.arange(12).reshape(3,4)
direct = a[1:3][1:]
indirect = a[combine_indices(numpy.s_[1:3],indexing.normalize_slices(a[1:3].shape,numpy.s_[1:]))]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test7(self):
a = numpy.arange(30).reshape(6,5)
direct = a[1:5:2][1:]
indirect = a[combine_indices(numpy.s_[1:5:2],indexing.normalize_slices(a[1:5:2].shape,numpy.s_[1:]))]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test8(self):
a = numpy.arange(30)
direct = a[2:14:3][1:5:2]
indirect = a[combine_indices(numpy.s_[2:14:3],numpy.s_[1:5:2])]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test9(self):
a = numpy.arange(100)
for s in range(0,40):
for e in range(40,101):
for step in range(1,5):
direct = a[s:e:step][1:5:2]
indirect = a[combine_indices(numpy.s_[s:e:step],
indexing.normalize_slices(a[s:e:step].shape,numpy.s_[1:5:2]))]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test10(self):
a = numpy.arange(60).reshape(5,6,2)
direct = a[3][2][1]
indirect = a[combine_indices(combine_indices(3,2),1)]
self.assertEquals(indirect, direct)
def test11(self):
a = numpy.arange(60).reshape(5,6,2)
direct = a[3]
indirect = a[combine_indices(3,Ellipsis)]
self.assertEquals(indirect.shape, direct.shape)
self.assertTrue(numpy.all(indirect == direct))
def test12(self):
self.assertEquals((1,4,2), indexing.shape_after_index((5,4,2), numpy.s_[1:2,...,...]))
self.assertEquals((1,4,2), indexing.shape_after_index((5,4,2), numpy.s_[1:2,:,:]))
self.assertEquals((2,4,2), indexing.shape_after_index((5,4,2), numpy.s_[1:3,...,...]))
self.assertEquals((2,4,2), indexing.shape_after_index((5,4,2), numpy.s_[1:3,:,:]))
self.assertEquals((2,1,2), indexing.shape_after_index((5,4,2), numpy.s_[1:3,2:3,...]))
self.assertEquals((2,1,2), indexing.shape_after_index((5,4,2), numpy.s_[1:3,2:3,:]))
def xtest13(self):
combined_indices = combine_indices(numpy.s_[1:3],numpy.s_[:])
self.assertEquals(combined_indices, numpy.s_[1:3:1])
combined_indices = combine_indices(numpy.s_[:], numpy.s_[1:3])
self.assertEquals(combined_indices, numpy.s_[1:3:1])
combined_indices = combine_indices((numpy.s_[:], numpy.s_[:]), (numpy.s_[1:3], numpy.s_[1:2]))
self.assertEquals(combined_indices,(numpy.s_[1:3:1], numpy.s_[1:2:1]))
combined_indices = combine_indices((numpy.s_[0:2], numpy.s_[:]), (numpy.s_[1:3], numpy.s_[1:2]))
self.assertEquals(combined_indices,(numpy.s_[1:2:1], numpy.s_[1:2:1]))
def test14(self):
self.assertEquals((5,4,2), indexing.shape_after_index((5,4,2), numpy.s_[:10,...,...]))
self.assertEquals((5,4,2), indexing.shape_after_index((5,4,2), numpy.s_[...,:10,...]))
self.assertEquals((4,4,2), indexing.shape_after_index((5,4,2), numpy.s_[1:10,...,...]))
self.assertEquals((1,4,2), indexing.shape_after_index((5,4,2), numpy.s_[-1:,...,...]))
self.assertEquals((2,4,2), indexing.shape_after_index((5,4,2), numpy.s_[-2:,...,...]))
self.assertEquals((1,4,2), indexing.shape_after_index((5,4,2), numpy.s_[-2:-1,...,...]))
self.assertEquals((5,4,2), indexing.shape_after_index((5,4,2), numpy.s_[-10:,...,...]))
def test15(self):
a = numpy.arange(6).reshape(2,3)
indices = numpy.asarray([[True, False, True],[True,False,True]])
direct = a[indices][list([1,3])]
combined = combine_indices(indices,[1,3])
indirect = a[combined]
self.assertEquals(indirect, direct)
def test16(self):
self.assertEquals((4,), indexing.shape_after_index((2,3), [[True, False, True],[True,False,True]]))
self.assertEquals((1,3), indexing.shape_after_index((2,3), [True, False]))
def test17(self):
a = numpy.arange(6).reshape(2,3)
indices = numpy.asarray([True,False])
direct = a[indices, 1:][0,1:]
combined = combine_indices(indexing.normalize_slices(a.shape,numpy.s_[indices,1:]),
indexing.normalize_slices(a[indices,1:].shape,numpy.s_[0,1:]))
indirect = a[combined]
self.assertEquals(indirect, direct)
def test18(self):
a = numpy.arange(6).reshape(2,3)
indices = numpy.asarray([True,False])
direct = a[indices, 1:][0]
combined = combine_indices(numpy.s_[indices,1:],0)
indirect = a[combined]
self.assertEquals(indirect, direct)
def test19(self):
a = numpy.arange(6).reshape(2,3)
indices = numpy.asarray([True,False])
direct = a[:1, 1:][0]
combined = combine_indices(numpy.s_[:1,1:],0)
indirect = a[combined]
self.assertEquals(indirect, direct)
def test20(self):
combined=combine_indices( slice(1, 199, None), slice(3, 5, None) )
self.assertEqual(combined,slice(4,6,1))
def test21(self):
shape=shape_after_index((200,), slice(4, 6, 1))
self.assertEqual(shape,(2,))
def xtest22(self):
tiny=range(2)
small=range(10)
big=range(1000)
# combining slicings w negative stops not possible! e.g. ((7,-1),(2,3),(9,10,1))
# (without normalize)
slicings=[ ((9,19),(5,9,2),(14,18,2)),
((7,19,2),(5,9,1),(17,19,2)),
((1,None),(1,10),(2,11,1)),
((7,None),(1,10),(8,17,1)),
((None,12),(3,5),(3,5,1)),
((None,12),(3,15),(3,12,1)),
((None,None),(3,5),(3,5,1)),
((None,None),(3,15),(3,15,1)),
((None,None),(None,15),(0,15,1)),
((None,None),(None,None),(0,None,1)),
((9,None),(None,None),(9,None,1)),
((9,None),(6,None),(15,None,1)),
((9,None),(None,40),(9,49,1)),
((1,None),(None,40),(1,41,1)),
((9,16),(None,40),(9,16,1)),
((1,16),(None,40),(1,16,1)),
((49,16),(None,40),(16,16,1)),
((41,66),(None,40),(41,66,1)),
]
for t1,t2,t3 in slicings:
s1=slice(*t1)
s2=slice(*t2)
s3=slice(*t3)
self.assertEqual(combine_slices(s1,s2),t3)
self.assertTrue(tiny[s1][s2]==tiny[s3])
self.assertTrue(small[s1][s2]==small[s3])
self.assertTrue(big[s1][s2]==big[s3])
def xtest23(self):
import random
random.seed(123456)
tiny=range(2)
small=range(20)
big=range(2000)
Ntest=1000
start0=[random.randint(0,20) for x in range(Ntest)]
stop0=[random.randint(15,50) for x in range(Ntest)]
step0=[random.randint(1,3) for x in range(Ntest)]
start1=[random.randint(0,10) for x in range(Ntest)]
stop1=[random.randint(5,25) for x in range(Ntest)]
step1=[random.randint(1,3) for x in range(Ntest)]
slicings=[]
for x in zip(start0,stop0,step0,start1,stop1,step1):
slicings.append(((x[0],x[1],x[2]),(x[3],x[4],x[5])))
for t1,t2 in slicings:
s1=slice(*t1)
s2=slice(*t2)
t3=combine_slices(s1,s2)
s3=slice(*t3)
self.assertTrue(tiny[s1][s2]==tiny[s3])
self.assertTrue(small[s1][s2]==small[s3])
self.assertTrue(big[s1][s2]==big[s3])
def test24(self):
import random
random.seed(123456)
tiny=range(2)
small=range(20)
big=range(2000)
Ntest=1000
stop0=[random.randint(0,20) for x in range(Ntest)]
start0=[random.randint(15,50) for x in range(Ntest)]
step0=[random.randint(-3,-1) for x in range(Ntest)]
start1=[random.randint(0,10) for x in range(Ntest)]
stop1=[random.randint(5,25) for x in range(Ntest)]
step1=[random.randint(1,3) for x in range(Ntest)]
slicings=[]
for x in zip(start0,stop0,step0,start1,stop1,step1):
slicings.append(((x[0],x[1],x[2]),(x[3],x[4],x[5])))
for t1,t2 in slicings:
s1=slice(*t1)
s2=slice(*t2)
t3=combine_slices(normalize_slices(len(tiny),s1),normalize_slices(len(tiny[s1]),s2))
s3=slice(*t3)
self.assertTrue(tiny[s1][s2]==tiny[s3])
t3=combine_slices(normalize_slices(len(small),s1),normalize_slices(len(small[s1]),s2))
s3=slice(*t3)
self.assertTrue(small[s1][s2]==small[s3])
t3=combine_slices(normalize_slices(len(big),s1),normalize_slices(len(big[s1]),s2))
s3=slice(*t3)
self.assertTrue(big[s1][s2]==big[s3])
def test25(self):
import random
random.seed(123456)
tiny=range(2)
small=range(20)
big=range(2000)
Ntest=1000
stop0=[random.randint(0,20) for x in range(Ntest)]
start0=[random.randint(15,50) for x in range(Ntest)]
step0=[random.randint(-3,-1) for x in range(Ntest)]
stop1=[random.randint(0,10) for x in range(Ntest)]
start1=[random.randint(5,25) for x in range(Ntest)]
step1=[random.randint(-3,-1) for x in range(Ntest)]
slicings=[]
for x in zip(start0,stop0,step0,start1,stop1,step1):
slicings.append(((x[0],x[1],x[2]),(x[3],x[4],x[5])))
for t1,t2 in slicings:
s1=slice(*t1)
s2=slice(*t2)
t3=combine_slices(normalize_slices(len(tiny),s1),normalize_slices(len(tiny[s1]),s2))
s3=slice(*t3)
self.assertTrue(tiny[s1][s2]==tiny[s3])
t3=combine_slices(normalize_slices(len(small),s1),normalize_slices(len(small[s1]),s2))
s3=slice(*t3)
self.assertTrue(small[s1][s2]==small[s3])
t3=combine_slices(normalize_slices(len(big),s1),normalize_slices(len(big[s1]),s2))
s3=slice(*t3)
self.assertTrue(big[s1][s2]==big[s3])
def test26(self):
oned=numpy.zeros(5)
threed=numpy.zeros((4,5,6))
for index in [0,[1],[1,2],[[1,2],[2,3]],[[2]],[[0,1]]]:
i=numpy.array(index)
self.assertEquals(len(oned[i].shape), number_of_dimensions_after_index(1, i ))
for index in [0,[1],[1,2],[[1,2],[2,3]],[[2]],[[2,1]]]:
i=numpy.array(index)
self.assertEquals(len(threed[i].shape), number_of_dimensions_after_index(3, i ))
def test27(self):
oned=numpy.zeros(5)
threed=numpy.zeros((4,5,6))
for index in [0,[1],[1,2],[[1,2],[2,3]],[[2]],[[0,1]]]:
i=numpy.array(index)
self.assertEquals(oned[i].shape, shape_after_index(oned.shape, i ))
for index in [0,[1],[1,2],[[1,2],[2,3]],[[2]],[[2,1]],[[[[0],[1],[1]]]]]:
i=numpy.array(index)
self.assertEquals(threed[i].shape, shape_after_index(threed.shape, i ))
def test28(self):
twod=numpy.zeros((5,6))
threed=numpy.zeros((4,5,6))
for _i,_j in [([0],[1]),([0,2],[1,3]),([0,2],[1,3]),([[0,1],[1,2]],[[2,3],[3,4]])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(len(twod[i,j].shape), number_of_dimensions_after_index(2, (i,j) ))
for _i,_j in [([0],[1]),([0,2],[1,3]),([0,2],[1,3]),([[0,1],[1,2]],[[2,3],[3,4]])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(len(threed[i,j].shape), number_of_dimensions_after_index(3, (i,j) ))
def test29(self):
twod=numpy.zeros((5,6))
for _i,_j in [([0],[1]),([0,2],[1,3]),([0,2],[1,3]),([[0,1],[1,2]],[[2,3],[3,4]])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(twod[i,j].shape, shape_after_index(twod.shape, (i,j) ))
threed=numpy.zeros((4,5,6))
for _i,_j in [([0],[1]),([0,2],[1,3]),([0,2],[1,3]),([[0,1],[1,2]],[[2,3],[3,4]])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(threed[i,j].shape, shape_after_index(threed.shape, (i,j) ))
fourd=numpy.zeros((4,5,6,7))
for _i,_j in [([0],[1]),([0,2],[1,3])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(fourd[i,j].shape, shape_after_index(fourd.shape, (i,j) ))
for _i,_j in [([0,2],[1,3])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(fourd[i,Ellipsis,j].shape, shape_after_index(fourd.shape, (i,Ellipsis,j) ))
for _i,_j in [([0,2],[1,3])]:
i=numpy.array(_i)
j=numpy.array(_j)
self.assertEquals(fourd[i,slice(None),j].shape, shape_after_index(fourd.shape, (i,slice(None),j) ))
class TestSplitOverDimensions(amusetest.TestCase):
def test1(self):
dimension_values = [
[3,4,5,6],
['a','b','c']
]
split_dimension_values = indexing.split_numpy_index_over_dimensions(0, dimension_values)
self.assertEquals(len(split_dimension_values) ,2)
self.assertEquals(split_dimension_values[0], 3)
self.assertEquals(split_dimension_values[1], ['a','b','c'])
def test2(self):
dimension_values = [
[3,4,5,6],
['a','b','c']
]
split_dimension_values = indexing.split_numpy_index_over_dimensions((1,2), dimension_values)
self.assertEquals(len(split_dimension_values) ,2)
self.assertEquals(split_dimension_values[0], 4)
self.assertEquals(split_dimension_values[1], 'c')
def test3(self):
dimension_values = [
[3,4,5,6],
['a','b','c']
]
split_dimension_values = indexing.split_numpy_index_over_dimensions(slice(0,2), dimension_values)
self.assertEquals(len(split_dimension_values) ,2)
self.assertEquals(split_dimension_values[0], [3,4])
self.assertEquals(split_dimension_values[1], ['a','b','c'])
def test4(self):
dimension_values = [
[0, 1, 2, 3, 4, 5, 6, 7, 8, ]
]
split_dimension_values = indexing.split_numpy_index_over_dimensions(slice(1,7,2), dimension_values)
self.assertEquals(len(split_dimension_values) ,1)
self.assertEquals(split_dimension_values[0], [1, 3, 5])
def test5(self):
dimension_values = [
[0, 1, 2, 3, 4, 5, 6, 7, 8,9 ]
]
split_dimension_values = indexing.split_numpy_index_over_dimensions(slice(-2,10), dimension_values)
self.assertEquals(split_dimension_values[0], [8, 9])
split_dimension_values = indexing.split_numpy_index_over_dimensions(slice(-3,3,-1), dimension_values)
self.assertEquals(split_dimension_values[0], [7, 6, 5, 4])
def test6(self):
dimension_values = [
[0, 1, 2, 3, 4, 5, 6, 7, 8,9 ]
]
split_dimension_values = indexing.split_numpy_index_over_dimensions(slice(5,None), dimension_values)
self.assertEquals(split_dimension_values[0], [5, 6, 7, 8, 9])
def test7(self):
dimension_values = [
[0, 1],
[0, 1, 2],
[0]
]
split_dimension_values = indexing.split_numpy_index_over_dimensions(slice(1,2), dimension_values)
self.assertEquals(split_dimension_values[0], [1])
self.assertEquals(split_dimension_values[1], [0,1,2])
self.assertEquals(split_dimension_values[2], [0])
def test8(self):
dimension_values = [
[0, 1],
[0, 1, 2],
[0]
]
split_dimension_values = indexing.split_numpy_index_over_dimensions((Ellipsis,0), dimension_values)
self.assertEquals(split_dimension_values[0], [0,1])
self.assertEquals(split_dimension_values[1], [0,1,2])
self.assertEquals(split_dimension_values[2], 0)
split_dimension_values = indexing.split_numpy_index_over_dimensions((slice(None),slice(None),0), dimension_values)
self.assertEquals(split_dimension_values[0], [0,1])
self.assertEquals(split_dimension_values[1], [0,1,2])
self.assertEquals(split_dimension_values[2], 0)
split_dimension_values = indexing.split_numpy_index_over_dimensions((Ellipsis,0, Ellipsis), dimension_values)
self.assertEquals(split_dimension_values[0], [0,1])
self.assertEquals(split_dimension_values[1], 0)
self.assertEquals(split_dimension_values[2], [0])
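The invariant that test22 through test25 exercise — two successive slicings collapse into a single equivalent slice — can be sketched in plain Python for the normalized case (non-negative start/stop, positive step). The `compose_slices` helper below is an illustrative stand-in for `indexing.combine_slices`, not amuse's implementation:

```python
def compose_slices(outer, inner, length):
    """Return one slice equivalent to seq[outer][inner] for a sequence of the
    given length, assuming normalized slices (positive step)."""
    # slice.indices() clamps against the concrete length, so the arithmetic
    # below is exact.
    o_start, o_stop, o_step = outer.indices(length)
    inner_len = max(0, (o_stop - o_start + o_step - 1) // o_step)
    i_start, i_stop, i_step = inner.indices(inner_len)
    # Map the inner slice's positions back into the original index space.
    return slice(o_start + i_start * o_step,
                 o_start + i_stop * o_step,
                 o_step * i_step)

seq = list(range(100))
s1, s2 = slice(2, 60, 3), slice(1, 9, 2)
s3 = compose_slices(s1, s2, len(seq))
```

For example, `seq[2:60:3][1:9:2]` and `seq[s3]` both pick out `[5, 11, 17, 23]`; the randomized tests above do the same check over many slice pairs, with `normalize_slices` handling the negative-step cases first.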
# ---------------------------------------------------------------------------
# Source: nni/algorithms/compression/v2/pytorch/pruning/basic_pruner.py
# Repo: Micheallei/nni @ 29fd8cfae4fe99b08a91f9a67be4297093483832 (MIT)
# ---------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
from copy import deepcopy
import logging
from typing import List, Dict, Tuple, Callable, Optional
from schema import And, Or, Optional as SchemaOptional
import torch
from torch import Tensor
import torch.nn as nn
from torch.nn import Module
from torch.optim import Optimizer
from nni.algorithms.compression.v2.pytorch.base.pruner import Pruner
from nni.algorithms.compression.v2.pytorch.utils import CompressorSchema, config_list_canonical
from .tools import (
DataCollector,
HookCollectorInfo,
WeightDataCollector,
WeightTrainerBasedDataCollector,
SingleHookTrainerBasedDataCollector
)
from .tools import (
MetricsCalculator,
NormMetricsCalculator,
MultiDataNormMetricsCalculator,
DistMetricsCalculator,
APoZRankMetricsCalculator,
MeanRankMetricsCalculator
)
from .tools import (
SparsityAllocator,
NormalSparsityAllocator,
GlobalSparsityAllocator,
Conv2dDependencyAwareAllocator
)
_logger = logging.getLogger(__name__)
__all__ = ['LevelPruner', 'L1NormPruner', 'L2NormPruner', 'FPGMPruner', 'SlimPruner', 'ActivationPruner',
'ActivationAPoZRankPruner', 'ActivationMeanRankPruner', 'TaylorFOWeightPruner', 'ADMMPruner']
NORMAL_SCHEMA = {
Or('sparsity', 'sparsity_per_layer'): And(float, lambda n: 0 <= n < 1),
SchemaOptional('op_types'): [str],
SchemaOptional('op_names'): [str],
SchemaOptional('op_partial_names'): [str]
}
GLOBAL_SCHEMA = {
'total_sparsity': And(float, lambda n: 0 <= n < 1),
SchemaOptional('max_sparsity_per_layer'): And(float, lambda n: 0 < n <= 1),
SchemaOptional('op_types'): [str],
SchemaOptional('op_names'): [str],
SchemaOptional('op_partial_names'): [str]
}
EXCLUDE_SCHEMA = {
'exclude': bool,
SchemaOptional('op_types'): [str],
SchemaOptional('op_names'): [str],
SchemaOptional('op_partial_names'): [str]
}
INTERNAL_SCHEMA = {
'total_sparsity': And(float, lambda n: 0 <= n < 1),
SchemaOptional('max_sparsity_per_layer'): {str: float},
SchemaOptional('op_types'): [str],
SchemaOptional('op_names'): [str]
}
class BasicPruner(Pruner):
def __init__(self, model: Module, config_list: List[Dict]):
self.data_collector: DataCollector = None
self.metrics_calculator: MetricsCalculator = None
self.sparsity_allocator: SparsityAllocator = None
super().__init__(model, config_list)
def validate_config(self, model: Module, config_list: List[Dict]):
self._validate_config_before_canonical(model, config_list)
self.config_list = config_list_canonical(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
pass
def reset(self, model: Optional[Module], config_list: Optional[List[Dict]]):
super().reset(model=model, config_list=config_list)
self.reset_tools()
def reset_tools(self):
"""
This function is used to reset `self.data_collector`, `self.metrics_calculator` and `self.sparsity_allocator`.
The subclass needs to implement this function to complete the pruning process.
        See `compress()` to understand how NNI uses these three parts to generate masks for the bound model.
"""
raise NotImplementedError()
def compress(self) -> Tuple[Module, Dict]:
"""
        Used to generate the mask. The pruning process is divided into three stages.
        `self.data_collector` collects the data used to calculate the specified metric.
        `self.metrics_calculator` calculates the metric and `self.sparsity_allocator` generates the mask depending on the metric.
Returns
-------
Tuple[Module, Dict]
Return the wrapped model and mask.
"""
data = self.data_collector.collect()
_logger.debug('Collected Data:\n%s', data)
metrics = self.metrics_calculator.calculate_metrics(data)
_logger.debug('Metrics Calculate:\n%s', metrics)
masks = self.sparsity_allocator.generate_sparsity(metrics)
_logger.debug('Masks:\n%s', masks)
self.load_masks(masks)
return self.bound_model, masks
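The three-stage flow that `compress()` wires together can be sketched framework-free. The `Toy*` classes below are illustrative stand-ins for NNI's `DataCollector` / `MetricsCalculator` / `SparsityAllocator` and use plain lists instead of tensors; they are not the real APIs:

```python
class ToyDataCollector:
    """Stage 1: collect the raw data the metric is computed from (here, weights)."""
    def __init__(self, weights):
        self.weights = weights

    def collect(self):
        return dict(self.weights)

class ToyMetricsCalculator:
    """Stage 2: score each element; here the metric is simply the absolute value."""
    def calculate_metrics(self, data):
        return {name: [abs(w) for w in ws] for name, ws in data.items()}

class ToySparsityAllocator:
    """Stage 3: mask the lowest-metric fraction of each layer."""
    def __init__(self, sparsity):
        self.sparsity = sparsity

    def generate_sparsity(self, metrics):
        masks = {}
        for name, scores in metrics.items():
            k = int(len(scores) * self.sparsity)
            threshold = sorted(scores)[k] if k < len(scores) else float('inf')
            masks[name] = [0 if s < threshold else 1 for s in scores]
        return masks

weights = {'fc': [0.1, -2.0, 0.05, 3.0]}
data = ToyDataCollector(weights).collect()
metrics = ToyMetricsCalculator().calculate_metrics(data)
masks = ToySparsityAllocator(sparsity=0.5).generate_sparsity(metrics)
```

The subclasses below only swap in different implementations of the three stages; the `compress()` driver itself never changes.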
class LevelPruner(BasicPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- op_types : Operation types to prune.
- op_names : Operation names to prune.
- exclude : Set True then the layers setting by op_types and op_names will be excluded from pruning.
"""
def __init__(self, model: Module, config_list: List[Dict]):
super().__init__(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(NORMAL_SCHEMA), deepcopy(EXCLUDE_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def reset_tools(self):
if self.data_collector is None:
self.data_collector = WeightDataCollector(self)
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = NormMetricsCalculator()
if self.sparsity_allocator is None:
self.sparsity_allocator = NormalSparsityAllocator(self)
class NormPruner(BasicPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- op_types : Conv2d and Linear are supported in NormPruner.
- op_names : Operation names to prune.
- exclude : Set True then the layers setting by op_types and op_names will be excluded from pruning.
p : int
The order of norm.
mode : str
'normal' or 'dependency_aware'.
        If pruning the model in a dependency-aware way, this pruner will
prune the model according to the norm of weights and the channel-dependency or
group-dependency of the model. In this way, the pruner will force the conv layers
that have dependencies to prune the same channels, so the speedup module can better
harvest the speed benefit from the pruned model. Note that, if set 'dependency_aware'
, the dummy_input cannot be None, because the pruner needs a dummy input to trace the
dependency between the conv layers.
dummy_input : Optional[torch.Tensor]
The dummy input to analyze the topology constraints. Note that, the dummy_input
should on the same device with the model.
"""
def __init__(self, model: Module, config_list: List[Dict], p: int,
mode: str = 'normal', dummy_input: Optional[Tensor] = None):
self.p = p
self.mode = mode
self.dummy_input = dummy_input
super().__init__(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(NORMAL_SCHEMA), deepcopy(EXCLUDE_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
for sub_schema in schema_list:
sub_schema[SchemaOptional('op_types')] = ['Conv2d', 'Linear']
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def reset_tools(self):
if self.data_collector is None:
self.data_collector = WeightDataCollector(self)
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = NormMetricsCalculator(p=self.p, dim=0)
if self.sparsity_allocator is None:
if self.mode == 'normal':
self.sparsity_allocator = NormalSparsityAllocator(self, dim=0)
elif self.mode == 'dependency_aware':
self.sparsity_allocator = Conv2dDependencyAwareAllocator(self, 0, self.dummy_input)
else:
raise NotImplementedError('Only support mode `normal` and `dependency_aware`')
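NormPruner scores each output channel by the Lp norm of its weights (the `NormMetricsCalculator(p, dim=0)` above) and masks the lowest-scoring channels. Below is a pure-Python sketch of that ranking — independent of NNI; the toy weight matrix and the helper names `lp_channel_norms` / `channel_mask` are illustrative only:

```python
# Toy sketch of the per-channel Lp-norm ranking behind NormPruner:
# each output channel (row) gets an Lp-norm score, and the
# lowest-scoring channels are masked to reach the target sparsity.

def lp_channel_norms(weight, p):
    """Lp norm of each output channel (row) of a 2-D weight matrix."""
    return [sum(abs(w) ** p for w in row) ** (1.0 / p) for row in weight]

def channel_mask(weight, p, sparsity):
    """1 keeps a channel, 0 prunes it; the lowest-norm channels go first."""
    norms = lp_channel_norms(weight, p)
    n_prune = int(len(norms) * sparsity)
    pruned = set(sorted(range(len(norms)), key=norms.__getitem__)[:n_prune])
    return [0 if i in pruned else 1 for i in range(len(norms))]

weight = [[0.1, -0.1], [2.0, 1.0], [0.5, 0.5], [3.0, -2.0]]
print(channel_mask(weight, p=1, sparsity=0.5))  # -> [0, 1, 0, 1]
```

With p=1 this mirrors the ranking L1NormPruner uses; p=2 gives L2NormPruner's.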
class L1NormPruner(NormPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- op_types : Conv2d and Linear are supported in L1NormPruner.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
mode : str
'normal' or 'dependency_aware'.
If pruning the model in a dependency-aware way, this pruner will
prune the model according to the l1-norm of weights and the channel-dependency or
group-dependency of the model. In this way, the pruner will force the conv layers
that have dependencies to prune the same channels, so the speedup module can better
harvest the speed benefit from the pruned model. Note that if 'dependency_aware'
is set, the dummy_input cannot be None, because the pruner needs a dummy input
to trace the dependency between the conv layers.
dummy_input : Optional[torch.Tensor]
The dummy input used to analyze the topology constraints. Note that the dummy_input
should be on the same device as the model.
"""
def __init__(self, model: Module, config_list: List[Dict],
mode: str = 'normal', dummy_input: Optional[Tensor] = None):
super().__init__(model, config_list, 1, mode, dummy_input)
class L2NormPruner(NormPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- op_types : Conv2d and Linear are supported in L2NormPruner.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
mode : str
'normal' or 'dependency_aware'.
If pruning the model in a dependency-aware way, this pruner will
prune the model according to the l2-norm of weights and the channel-dependency or
group-dependency of the model. In this way, the pruner will force the conv layers
that have dependencies to prune the same channels, so the speedup module can better
harvest the speed benefit from the pruned model. Note that if 'dependency_aware'
is set, the dummy_input cannot be None, because the pruner needs a dummy input
to trace the dependency between the conv layers.
dummy_input : Optional[torch.Tensor]
The dummy input used to analyze the topology constraints. Note that the dummy_input
should be on the same device as the model.
"""
def __init__(self, model: Module, config_list: List[Dict],
mode: str = 'normal', dummy_input: Optional[Tensor] = None):
super().__init__(model, config_list, 2, mode, dummy_input)
class FPGMPruner(BasicPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- op_types : Conv2d and Linear are supported in FPGMPruner.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
mode : str
'normal' or 'dependency_aware'.
If pruning the model in a dependency-aware way, this pruner will
prune the model according to the FPGM metric of weights and the channel-dependency or
group-dependency of the model. In this way, the pruner will force the conv layers
that have dependencies to prune the same channels, so the speedup module can better
harvest the speed benefit from the pruned model. Note that if 'dependency_aware'
is set, the dummy_input cannot be None, because the pruner needs a dummy input
to trace the dependency between the conv layers.
dummy_input : Optional[torch.Tensor]
The dummy input used to analyze the topology constraints. Note that the dummy_input
should be on the same device as the model.
"""
def __init__(self, model: Module, config_list: List[Dict],
mode: str = 'normal', dummy_input: Optional[Tensor] = None):
self.mode = mode
self.dummy_input = dummy_input
super().__init__(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(NORMAL_SCHEMA), deepcopy(EXCLUDE_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
for sub_schema in schema_list:
sub_schema[SchemaOptional('op_types')] = ['Conv2d', 'Linear']
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def reset_tools(self):
if self.data_collector is None:
self.data_collector = WeightDataCollector(self)
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = DistMetricsCalculator(p=2, dim=0)
if self.sparsity_allocator is None:
if self.mode == 'normal':
self.sparsity_allocator = NormalSparsityAllocator(self, dim=0)
elif self.mode == 'dependency_aware':
self.sparsity_allocator = Conv2dDependencyAwareAllocator(self, 0, self.dummy_input)
else:
raise NotImplementedError('Only support mode `normal` and `dependency_aware`')
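FPGMPruner's `DistMetricsCalculator(p=2, dim=0)` scores each filter by its total Euclidean distance to every other filter; filters near the geometric median of the filter set are the most replaceable and are pruned first. A stdlib-only sketch of that scoring — toy filters and the helper name `fpgm_scores` are illustrative, not NNI API:

```python
import math

def fpgm_scores(filters):
    """Sum of Euclidean distances from each filter to all the others."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [sum(dist(f, g) for g in filters) for f in filters]

filters = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
scores = fpgm_scores(filters)
# The filter with the smallest total distance sits nearest the
# geometric median and is considered redundant, so it is pruned first.
print(min(range(len(scores)), key=scores.__getitem__))  # -> 1
```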
class SlimPruner(BasicPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- total_sparsity : This is to specify the total sparsity for all layers in this config, each layer may have different sparsity.
- max_sparsity_per_layer : Always used with total_sparsity. Limit the max sparsity of each layer.
- op_types : Only BatchNorm2d is supported in SlimPruner.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
trainer : Callable[[Module, Optimizer, Callable], None]
A callable function used to train the model or just run inference. Takes model, optimizer, criterion as input.
The model will be trained (or run for inference) for `training_epochs` epochs.
Example::
def trainer(model: Module, optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor]):
training = model.training
model.train(mode=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
# If you don't want to update the model, you can skip `optimizer.step()`, and set train mode False.
optimizer.step()
model.train(mode=training)
optimizer : torch.optim.Optimizer
The optimizer instance used in the trainer. Note that this optimizer might be patched during data collection,
so do not use this optimizer in other places.
criterion : Callable[[Tensor, Tensor], Tensor]
The criterion function used in trainer. Take model output and target value as input, and return the loss.
training_epochs : int
The epoch number for training model to sparsify the BN weight.
scale : float
Penalty parameter for sparsification, which could reduce overfitting.
mode : str
'normal' or 'global'.
If pruning the model in a global way, all layer weights with the same config will be considered uniformly.
That means a single layer may not reach or exceed the sparsity setting in config,
but the total pruned weights meet the sparsity setting.
"""
def __init__(self, model: Module, config_list: List[Dict], trainer: Callable[[Module, Optimizer, Callable], None],
optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor],
training_epochs: int, scale: float = 0.0001, mode='global'):
self.mode = mode
self.trainer = trainer
self.optimizer = optimizer
self.criterion = criterion
self.training_epochs = training_epochs
self._scale = scale
super().__init__(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(EXCLUDE_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
if self.mode == 'global':
schema_list.append(deepcopy(GLOBAL_SCHEMA))
else:
schema_list.append(deepcopy(NORMAL_SCHEMA))
for sub_schema in schema_list:
sub_schema[SchemaOptional('op_types')] = ['BatchNorm2d']
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def criterion_patch(self, criterion: Callable[[Tensor, Tensor], Tensor]) -> Callable[[Tensor, Tensor], Tensor]:
def patched_criterion(input_tensor: Tensor, target: Tensor):
sum_l1 = 0
for _, wrapper in self.get_modules_wrapper().items():
sum_l1 += torch.norm(wrapper.module.weight.data, p=1)
return criterion(input_tensor, target) + self._scale * sum_l1
return patched_criterion
def reset_tools(self):
if self.data_collector is None:
self.data_collector = WeightTrainerBasedDataCollector(self, self.trainer, self.optimizer, self.criterion,
self.training_epochs, criterion_patch=self.criterion_patch)
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = NormMetricsCalculator()
if self.sparsity_allocator is None:
if self.mode == 'normal':
self.sparsity_allocator = NormalSparsityAllocator(self)
elif self.mode == 'global':
self.sparsity_allocator = GlobalSparsityAllocator(self)
else:
raise NotImplementedError('Only support mode `normal` and `global`')
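The `criterion_patch` above is the heart of SlimPruner: it adds `scale * Σ|γ|` over the BatchNorm weights to the training loss, driving the scale factors toward zero so that low-scale channels can be pruned. A scalar sketch of that patched loss — the numbers and the helper name `patched_loss` are toy illustrations:

```python
def patched_loss(base_loss, bn_weights, scale=1e-4):
    """Original loss plus an L1 penalty on BatchNorm scale factors."""
    l1 = sum(abs(w) for layer in bn_weights for w in layer)
    return base_loss + scale * l1

# Two BN layers with scale factors [1.0, -2.0] and [0.5]:
# penalty = 0.1 * (1 + 2 + 0.5), so the loss rises from 0.5 to 0.85.
loss = patched_loss(0.5, [[1.0, -2.0], [0.5]], scale=0.1)
```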
class ActivationPruner(BasicPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- op_types : Conv2d and Linear are supported in ActivationPruner.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
trainer : Callable[[Module, Optimizer, Callable], None]
A callable function used to train the model or just run inference. Takes model, optimizer, criterion as input.
The trainer is run for one epoch while activations are collected over `training_batches` batches.
Example::
def trainer(model: Module, optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor]):
training = model.training
model.train(mode=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
# If you don't want to update the model, you can skip `optimizer.step()`, and set train mode False.
optimizer.step()
model.train(mode=training)
optimizer : torch.optim.Optimizer
The optimizer instance used in the trainer. Note that this optimizer might be patched during data collection,
so do not use this optimizer in other places.
criterion : Callable[[Tensor, Tensor], Tensor]
The criterion function used in trainer. Take model output and target value as input, and return the loss.
training_batches : int
The batch number used to collect activations.
mode : str
'normal' or 'dependency_aware'.
If pruning the model in a dependency-aware way, this pruner will
prune the model according to the activation-based metrics and the channel-dependency or
group-dependency of the model. In this way, the pruner will force the conv layers
that have dependencies to prune the same channels, so the speedup module can better
harvest the speed benefit from the pruned model. Note that if 'dependency_aware'
is set, the dummy_input cannot be None, because the pruner needs a dummy input
to trace the dependency between the conv layers.
dummy_input : Optional[torch.Tensor]
The dummy input used to analyze the topology constraints. Note that the dummy_input
should be on the same device as the model.
"""
def __init__(self, model: Module, config_list: List[Dict], trainer: Callable[[Module, Optimizer, Callable], None],
optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor], training_batches: int, activation: str = 'relu',
mode: str = 'normal', dummy_input: Optional[Tensor] = None):
self.mode = mode
self.dummy_input = dummy_input
self.trainer = trainer
self.optimizer = optimizer
self.criterion = criterion
self.training_batches = training_batches
self._activation = self._choose_activation(activation)
super().__init__(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(NORMAL_SCHEMA), deepcopy(EXCLUDE_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
for sub_schema in schema_list:
sub_schema[SchemaOptional('op_types')] = ['Conv2d', 'Linear']
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def _choose_activation(self, activation: str = 'relu') -> Callable:
if activation == 'relu':
return nn.functional.relu
elif activation == 'relu6':
return nn.functional.relu6
else:
raise ValueError('Unsupported activation {}'.format(activation))
def _collector(self, buffer: List) -> Callable[[Module, Tensor, Tensor], None]:
def collect_activation(_module: Module, _input: Tensor, output: Tensor):
if len(buffer) < self.training_batches:
buffer.append(self._activation(output.detach()))
return collect_activation
def reset_tools(self):
collector_info = HookCollectorInfo([layer_info for layer_info, _ in self._detect_modules_to_compress()], 'forward', self._collector)
if self.data_collector is None:
self.data_collector = SingleHookTrainerBasedDataCollector(self, self.trainer, self.optimizer, self.criterion,
1, collector_infos=[collector_info])
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = self._get_metrics_calculator()
if self.sparsity_allocator is None:
if self.mode == 'normal':
self.sparsity_allocator = NormalSparsityAllocator(self, dim=0)
elif self.mode == 'dependency_aware':
self.sparsity_allocator = Conv2dDependencyAwareAllocator(self, 0, self.dummy_input)
else:
raise NotImplementedError('Only support mode `normal` and `dependency_aware`')
def _get_metrics_calculator(self) -> MetricsCalculator:
raise NotImplementedError()
class ActivationAPoZRankPruner(ActivationPruner):
def _get_metrics_calculator(self) -> MetricsCalculator:
return APoZRankMetricsCalculator(dim=1)
class ActivationMeanRankPruner(ActivationPruner):
def _get_metrics_calculator(self) -> MetricsCalculator:
return MeanRankMetricsCalculator(dim=1)
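ActivationAPoZRankPruner ranks channels by APoZ — the Average Percentage of Zeros among the post-activation values collected by the forward hooks; mostly-zero channels are pruned first. A minimal sketch of the metric itself — the toy activations and the helper name `apoz` are illustrative:

```python
def apoz(channel_activations):
    """Fraction of zero activations for one channel across batches."""
    flat = [a for batch in channel_activations for a in batch]
    return sum(1 for a in flat if a == 0.0) / len(flat)

# Two collected batches of post-ReLU outputs for a single channel:
# four of six values are zero, so this channel is a pruning candidate.
score = apoz([[0.0, 1.2, 0.0], [0.0, 3.4, 0.0]])
print(round(score, 3))  # -> 0.667
```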
class TaylorFOWeightPruner(BasicPruner):
"""
Parameters
----------
model : torch.nn.Module
Model to be pruned
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- total_sparsity : This is to specify the total sparsity for all layers in this config, each layer may have different sparsity.
- max_sparsity_per_layer : Always used with total_sparsity. Limit the max sparsity of each layer.
- op_types : Conv2d and Linear are supported in TaylorFOWeightPruner.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
trainer : Callable[[Module, Optimizer, Callable], None]
A callable function used to train the model or just run inference. Takes model, optimizer, criterion as input.
The trainer is run for one epoch while weight gradients are collected over `training_batches` batches.
Example::
def trainer(model: Module, optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor]):
training = model.training
model.train(mode=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
# If you don't want to update the model, you can skip `optimizer.step()`, and set train mode False.
optimizer.step()
model.train(mode=training)
optimizer : torch.optim.Optimizer
The optimizer instance used in the trainer. Note that this optimizer might be patched during data collection,
so do not use this optimizer in other places.
criterion : Callable[[Tensor, Tensor], Tensor]
The criterion function used in trainer. Take model output and target value as input, and return the loss.
training_batches : int
The batch number used to collect weight gradients for the Taylor metric.
mode : str
'normal', 'dependency_aware' or 'global'.
If pruning the model in a dependency-aware way, this pruner will
prune the model according to the taylorFO metric and the channel-dependency or
group-dependency of the model. In this way, the pruner will force the conv layers
that have dependencies to prune the same channels, so the speedup module can better
harvest the speed benefit from the pruned model. Note that if 'dependency_aware'
is set, the dummy_input cannot be None, because the pruner needs a dummy input
to trace the dependency between the conv layers.
If pruning the model in a global way, all layer weights with the same config will be considered uniformly.
That means a single layer may not reach or exceed the sparsity setting in config,
but the total pruned weights meet the sparsity setting.
dummy_input : Optional[torch.Tensor]
The dummy input used to analyze the topology constraints. Note that the dummy_input
should be on the same device as the model.
"""
def __init__(self, model: Module, config_list: List[Dict], trainer: Callable[[Module, Optimizer, Callable], None],
optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor], training_batches: int,
mode: str = 'normal', dummy_input: Optional[Tensor] = None):
self.mode = mode
self.dummy_input = dummy_input
self.trainer = trainer
self.optimizer = optimizer
self.criterion = criterion
self.training_batches = training_batches
super().__init__(model, config_list)
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(EXCLUDE_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
if self.mode == 'global':
schema_list.append(deepcopy(GLOBAL_SCHEMA))
else:
schema_list.append(deepcopy(NORMAL_SCHEMA))
for sub_schema in schema_list:
sub_schema[SchemaOptional('op_types')] = ['Conv2d', 'Linear']
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def _collector(self, buffer: List, weight_tensor: Tensor) -> Callable[[Tensor], None]:
def collect_taylor(grad: Tensor):
if len(buffer) < self.training_batches:
buffer.append(self._calculate_taylor_expansion(weight_tensor, grad))
return collect_taylor
def _calculate_taylor_expansion(self, weight_tensor: Tensor, grad: Tensor) -> Tensor:
return (weight_tensor.detach() * grad.detach()).data.pow(2)
def reset_tools(self):
hook_targets = {layer_info.name: layer_info.module.weight for layer_info, _ in self._detect_modules_to_compress()}
collector_info = HookCollectorInfo(hook_targets, 'tensor', self._collector)
if self.data_collector is None:
self.data_collector = SingleHookTrainerBasedDataCollector(self, self.trainer, self.optimizer, self.criterion,
1, collector_infos=[collector_info])
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = MultiDataNormMetricsCalculator(p=1, dim=0)
if self.sparsity_allocator is None:
if self.mode == 'normal':
self.sparsity_allocator = NormalSparsityAllocator(self, dim=0)
elif self.mode == 'global':
self.sparsity_allocator = GlobalSparsityAllocator(self, dim=0)
elif self.mode == 'dependency_aware':
self.sparsity_allocator = Conv2dDependencyAwareAllocator(self, 0, self.dummy_input)
else:
raise NotImplementedError('Only support mode `normal`, `global` and `dependency_aware`')
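TaylorFOWeightPruner's `_calculate_taylor_expansion` scores each weight as (weight × gradient)², a first-order Taylor estimate of the loss change if the weight were zeroed; `MultiDataNormMetricsCalculator(p=1, dim=0)` then sums these scores per output channel. A pure-Python sketch with toy tensors as nested lists — the helper name `taylor_fo` is illustrative:

```python
def taylor_fo(weights, grads):
    """Per-channel sum of (weight * gradient)^2; each row is one channel."""
    return [sum((w * g) ** 2 for w, g in zip(w_row, g_row))
            for w_row, g_row in zip(weights, grads)]

# Channel 0 barely moves the loss (tiny gradients), channel 1 matters more,
# so channel 0 would be masked first.
scores = taylor_fo([[1.0, 2.0], [0.5, 0.1]], [[0.1, 0.0], [1.0, 1.0]])
```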
class ADMMPruner(BasicPruner):
"""
ADMM (Alternating Direction Method of Multipliers) Pruner is based on a mathematical optimization technique.
The metric used in this pruner is the absolute value of the weight.
In each iteration, weights with small magnitudes will be set to zero.
Only in the final iteration is the mask generated and applied to the model wrapper.
The original paper: https://arxiv.org/abs/1804.03294.
Parameters
----------
model : torch.nn.Module
Model to be pruned.
config_list : List[Dict]
Supported keys:
- sparsity : This is to specify the sparsity for each layer in this config to be compressed.
- sparsity_per_layer : Equals to sparsity.
- rho : Penalty parameters in ADMM algorithm.
- op_types : Operation types to prune.
- op_names : Operation names to prune.
- exclude : Set True to exclude the layers selected by op_types and op_names from pruning.
trainer : Callable[[Module, Optimizer, Callable], None]
A callable function used to train the model or just run inference. Takes model, optimizer, criterion as input.
The model will be trained (or run for inference) for `training_epochs` epochs in each ADMM iteration.
Example::
def trainer(model: Module, optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor]):
training = model.training
model.train(mode=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
# If you don't want to update the model, you can skip `optimizer.step()`, and set train mode False.
optimizer.step()
model.train(mode=training)
optimizer : torch.optim.Optimizer
The optimizer instance used in the trainer. Note that this optimizer might be patched during data collection,
so do not use this optimizer in other places.
criterion : Callable[[Tensor, Tensor], Tensor]
The criterion function used in trainer. Take model output and target value as input, and return the loss.
iterations : int
The total iteration number in admm pruning algorithm.
training_epochs : int
The epoch number for training model in each iteration.
"""
def __init__(self, model: Module, config_list: List[Dict], trainer: Callable[[Module, Optimizer, Callable], None],
optimizer: Optimizer, criterion: Callable[[Tensor, Tensor], Tensor], iterations: int, training_epochs: int):
self.trainer = trainer
# TODO: handling the optimizer here will cause additional memory use and needs improvement, also in WeightTrainerBasedDataCollector
self.optimizer = optimizer
self.criterion = criterion
self.iterations = iterations
self.training_epochs = training_epochs
super().__init__(model, config_list)
def reset(self, model: Optional[Module], config_list: Optional[List[Dict]]):
super().reset(model, config_list)
self.Z = {name: wrapper.module.weight.data.clone().detach() for name, wrapper in self.get_modules_wrapper().items()}
self.U = {name: torch.zeros_like(z).to(z.device) for name, z in self.Z.items()}
def _validate_config_before_canonical(self, model: Module, config_list: List[Dict]):
schema_list = [deepcopy(NORMAL_SCHEMA), deepcopy(INTERNAL_SCHEMA)]
for schema in schema_list:
schema.update({SchemaOptional('rho'): And(float, lambda n: n > 0)})
schema_list.append(deepcopy(EXCLUDE_SCHEMA))
schema = CompressorSchema(schema_list, model, _logger)
schema.validate(config_list)
def criterion_patch(self, origin_criterion: Callable[[Tensor, Tensor], Tensor]):
def patched_criterion(output: Tensor, target: Tensor):
penalty = torch.tensor(0.0).to(output.device)
for name, wrapper in self.get_modules_wrapper().items():
rho = wrapper.config.get('rho', 1e-4)
penalty += (rho / 2) * torch.sqrt(torch.norm(wrapper.module.weight - self.Z[name] + self.U[name]))
return origin_criterion(output, target) + penalty
return patched_criterion
def reset_tools(self):
if self.data_collector is None:
self.data_collector = WeightTrainerBasedDataCollector(self, self.trainer, self.optimizer, self.criterion,
self.training_epochs, criterion_patch=self.criterion_patch)
else:
self.data_collector.reset()
if self.metrics_calculator is None:
self.metrics_calculator = NormMetricsCalculator()
if self.sparsity_allocator is None:
self.sparsity_allocator = NormalSparsityAllocator(self)
def compress(self) -> Tuple[Module, Dict]:
"""
Returns
-------
Tuple[Module, Dict]
Return the wrapped model and mask.
"""
for i in range(self.iterations):
_logger.info('======= ADMM Iteration %d Start =======', i)
data = self.data_collector.collect()
for name, weight in data.items():
self.Z[name] = weight + self.U[name]
metrics = self.metrics_calculator.calculate_metrics(self.Z)
masks = self.sparsity_allocator.generate_sparsity(metrics)
for name, mask in masks.items():
self.Z[name] = self.Z[name].mul(mask['weight'])
self.U[name] = self.U[name] + data[name] - self.Z[name]
self.Z = None
self.U = None
torch.cuda.empty_cache()
metrics = self.metrics_calculator.calculate_metrics(data)
masks = self.sparsity_allocator.generate_sparsity(metrics)
self.load_masks(masks)
return self.bound_model, masks
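Each iteration of `compress()` performs the classic ADMM splitting updates seen above: Z ← mask(W + U), then U ← U + W − Z. A scalar sketch of one such step — toy values; `keep` stands in for the mask, marking the indices the sparse projection retains:

```python
def admm_step(w, u, keep):
    """One ADMM iteration over flat weight/dual lists."""
    z = [wi + ui for wi, ui in zip(w, u)]                     # Z <- W + U
    z = [zi if i in keep else 0.0 for i, zi in enumerate(z)]  # project Z onto the sparse set
    u = [ui + wi - zi for wi, zi, ui in zip(w, z, u)]         # U <- U + W - Z
    return z, u

# Keep index 0, prune index 1; U accumulates the pruning residual.
z, u = admm_step([3.0, 0.2], [0.0, 0.0], keep={0})
print(z, u)  # -> [3.0, 0.0] [0.0, 0.2]
```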
# --- article_26/true_false_as_variables.py (tisnik/go-fedora, Apache-2.0) ---
print(True)
True = False
print(True)
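This file demonstrates Python 2 behaviour, where True and False were ordinary builtin names that could be rebound. In Python 3 they are keywords, so the assignment above fails before the code even runs — a quick check using the stdlib compile():

```python
def is_syntax_error(src):
    """True if `src` fails to compile as Python 3 source."""
    try:
        compile(src, "<demo>", "exec")
        return False
    except SyntaxError:  # e.g. "cannot assign to True"
        return True

print(is_syntax_error("True = False"))  # -> True (under Python 3)
```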
# --- src/url2io_client/api/__init__.py (url2io/url2io-python-client, MIT) ---
from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from url2io_client.api.url2_article_api import URL2ArticleApi
from url2io_client.api.url2_nlp_api import URL2NLPApi
# --- tools/c7n_azure/tests/test_storage.py (Seabreg/cloud-custodian, Apache-2.0) ---
# Copyright 2015-2018 Capital One Services, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
from azure_common import BaseTest, arm_template, cassette_name
from c7n_azure.constants import BLOB_TYPE, FILE_TYPE, QUEUE_TYPE, TABLE_TYPE
from c7n_azure.resources.storage import StorageSettingsUtilities
from c7n_azure.storage_utils import StorageUtilities
from mock import patch, MagicMock
from c7n.utils import get_annotation_prefix
from c7n.utils import local_session
from c7n_azure.session import Session
from azure.mgmt.storage.models import StorageAccountUpdateParameters
class StorageTest(BaseTest):
def setUp(self):
super(StorageTest, self).setUp()
StorageUtilities.get_storage_primary_key.cache_clear()
def test_storage_schema_validate(self):
with self.sign_out_patch():
p = self.load_policy({
'name': 'test-storage',
'resource': 'azure.storage'
}, validate=True)
self.assertTrue(p)
@arm_template('storage.json')
def test_value_filter(self):
p = self.load_policy({
'name': 'test-azure-storage-enum',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cctstorage*'}],
})
resources = p.run()
self.assertEqual(len(resources), 1)
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_include(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'include': ['1.2.2.129']}],
})
resources = p.run()
self.assertEqual(len(resources), 1)
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_any(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'any': ['1.2.2.128/25', '8.8.8.8', '10.10.10.10']}],
})
resources = p.run()
self.assertEqual(len(resources), 1)
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_not_any(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'any': ['8.8.8.8', '10.10.10.10']}],
})
resources = p.run()
self.assertEqual(len(resources), 0)
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_not_only(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'only': ['1.2.2.128/25', '10.10.10.10']}],
})
resources = p.run()
self.assertEqual(len(resources), 0)
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_only(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'only': ['1.2.2.128/25', '3.1.1.1', '10.10.10.10']}],
})
resources = p.run()
self.assertEqual(len(resources), 1)
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_not_include_all_ranges(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'include': ['3.1.1.1', '3.1.1.2-3.1.1.2']}],
}, validate=True)
resources = p.run()
self.assertEqual(0, len(resources))
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_include_cidr(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'include': ['1.2.2.128/25']}],
}, validate=True)
resources = p.run()
self.assertEqual(1, len(resources))
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_not_include_cidr(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'include': ['2.2.2.128/25']}],
}, validate=True)
resources = p.run()
self.assertEqual(0, len(resources))
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_equal(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'equal': ['3.1.1.1-3.1.1.1', '1.2.2.128/25']}],
}, validate=True)
resources = p.run()
self.assertEqual(1, len(resources))
@arm_template('storage.json')
@cassette_name('firewall')
def test_firewall_rules_not_equal(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'ccipstorage*'},
{'type': 'firewall-rules',
'equal': ['3.1.1.1-3.1.1.2', '3.1.1.1-3.1.1.1', '1.2.2.128/25']}],
}, validate=True)
resources = p.run()
self.assertEqual(0, len(resources))
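The firewall tests above exercise the `include` semantics of the `firewall-rules` filter: a storage account matches when every listed address or CIDR is covered by its configured rules. A minimal standalone sketch of that coverage check, using only the stdlib `ipaddress` module (the helper name is illustrative, not part of c7n):

```python
from ipaddress import ip_network

def rules_include(rules, required):
    # 'include' semantics sketch: every required address or CIDR must be
    # fully covered by at least one configured firewall rule.
    nets = [ip_network(r) for r in rules]
    return all(
        any(ip_network(req).subnet_of(net)
            for net in nets
            if net.version == ip_network(req).version)
        for req in required
    )
```

With the fixture values above, `rules_include(["1.2.2.128/25"], ["1.2.2.129"])` holds while `rules_include(["2.2.2.128/25"], ["1.2.2.129"])` does not.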
@arm_template('storage.json')
def test_diagnostic_settings_blob_storage_type(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cctstorage*'},
{'type': 'storage-diagnostic-settings',
'storage-type': 'blob',
'key': 'logging.delete',
'value': False}],
}, validate=True)
resources = p.run()
self.assertEqual(1, len(resources))
self.assertTrue(get_annotation_prefix('blob') in resources[0])
@arm_template('storage.json')
def test_diagnostic_settings_file_storage_type(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cctstorage*'},
{'type': 'storage-diagnostic-settings',
'storage-type': 'file',
'key': 'hour_metrics.enabled',
'value': True}],
}, validate=True)
resources = p.run()
self.assertEqual(1, len(resources))
self.assertTrue(get_annotation_prefix('file') in resources[0])
@arm_template('storage.json')
def test_diagnostic_settings_queue_storage_type(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cctstorage*'},
{'type': 'storage-diagnostic-settings',
'storage-type': 'queue',
'key': 'logging.delete',
'value': False}],
}, validate=True)
resources = p.run()
self.assertEqual(1, len(resources))
self.assertTrue(get_annotation_prefix('queue') in resources[0])
@arm_template('storage.json')
def test_diagnostic_settings_table_storage_type(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cctstorage*'},
{'type': 'storage-diagnostic-settings',
'storage-type': 'table',
'key': 'logging.delete',
'value': False}],
}, validate=True)
resources = p.run()
self.assertEqual(1, len(resources))
self.assertTrue(get_annotation_prefix('table') in resources[0])
@arm_template('storage.json')
def test_enable_log_settings(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cclgstorage*'}],
'actions': [
{
'type': 'set-log-settings',
'storage-types': ['blob', 'queue', 'table'],
'retention': 5,
'log': ['read', 'write', 'delete']
}
]
}, validate=True)
resources = p.run()
session = local_session(p.session_factory)
token = StorageUtilities.get_storage_token(session)
blob_settings = StorageSettingsUtilities.get_settings(
BLOB_TYPE, resources[0], token=token)
queue_settings = StorageSettingsUtilities.get_settings(
QUEUE_TYPE, resources[0], token=token)
table_settings = StorageSettingsUtilities.get_settings(
TABLE_TYPE, resources[0], session=session)
# assert all logging settings are enabled
self.assertTrue(blob_settings.logging.delete and
blob_settings.logging.read and blob_settings.logging.write)
self.assertTrue(queue_settings.logging.delete and
queue_settings.logging.read and queue_settings.logging.write)
self.assertTrue(table_settings.logging.delete and
table_settings.logging.read and table_settings.logging.write)
# assert retention policy is enabled
self.assertTrue(blob_settings.logging.retention_policy.enabled)
self.assertTrue(queue_settings.logging.retention_policy.enabled)
self.assertTrue(table_settings.logging.retention_policy.enabled)
# assert retention days is set to 5
self.assertEqual(blob_settings.logging.retention_policy.days, 5)
self.assertEqual(table_settings.logging.retention_policy.days, 5)
self.assertEqual(queue_settings.logging.retention_policy.days, 5)
@arm_template('storage.json')
def test_disable_log_settings(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cclgstorage*'}],
'actions': [
{
'type': 'set-log-settings',
'storage-types': ['blob', 'queue', 'table'],
'retention': 5,
'log': ['delete']
}
]
}, validate=True)
resources = p.run()
session = local_session(p.session_factory)
token = StorageUtilities.get_storage_token(session)
blob_settings = StorageSettingsUtilities.get_settings(
BLOB_TYPE, resources[0], token=token)
queue_settings = StorageSettingsUtilities.get_settings(
QUEUE_TYPE, resources[0], token=token)
table_settings = StorageSettingsUtilities.get_settings(
TABLE_TYPE, resources[0], session=session)
# assert read and write logging settings are disabled
        self.assertFalse(blob_settings.logging.read or blob_settings.logging.write)
        self.assertFalse(queue_settings.logging.read or queue_settings.logging.write)
        self.assertFalse(table_settings.logging.read or table_settings.logging.write)
# assert delete logging settings are enabled
self.assertTrue(blob_settings.logging.delete)
self.assertTrue(queue_settings.logging.delete)
self.assertTrue(table_settings.logging.delete)
@arm_template('storage.json')
def test_disable_retention_log_settings(self):
p = self.load_policy({
'name': 'test-azure-storage',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cclgstorage*'}],
'actions': [
{
'type': 'set-log-settings',
'storage-types': ['blob', 'queue', 'table'],
'retention': 0,
'log': ['read', 'write', 'delete']
}
]
}, validate=True)
resources = p.run()
session = local_session(p.session_factory)
token = StorageUtilities.get_storage_token(session)
blob_settings = StorageSettingsUtilities.get_settings(
BLOB_TYPE, resources[0], token=token)
queue_settings = StorageSettingsUtilities.get_settings(
QUEUE_TYPE, resources[0], token=token)
table_settings = StorageSettingsUtilities.get_settings(
TABLE_TYPE, resources[0], session=session)
# assert retention policy is disabled
self.assertFalse(blob_settings.logging.retention_policy.enabled)
self.assertFalse(queue_settings.logging.retention_policy.enabled)
self.assertFalse(table_settings.logging.retention_policy.enabled)
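The three retention tests above encode a simple rule: a `retention` value of 0 disables the retention policy, while any positive value enables it with that many days. A standalone sketch of that rule (the helper name and dict shape are illustrative, not part of c7n):

```python
def build_retention_policy(retention_days):
    # retention 0 (or negative) disables the policy entirely;
    # a positive value enables it and keeps logs for that many days.
    enabled = retention_days > 0
    return {"enabled": enabled, "days": retention_days if enabled else None}
```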
@patch('azure.storage.blob.blockblobservice.BlockBlobService.get_blob_service_properties')
def test_storage_settings_get_blob_settings(self, mock_blob_properties_call):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_token = 'mock_token'
StorageSettingsUtilities.get_settings(BLOB_TYPE, mock_storage_account, token=mock_token)
mock_blob_properties_call.assert_called_once()
@patch('azure.storage.file.fileservice.FileService.get_file_service_properties')
@patch('c7n_azure.storage_utils.StorageUtilities.get_storage_primary_key',
return_value='mock_primary_key')
def test_storage_settings_get_file_settings(self, mock_get_storage_key,
mock_file_properties_call):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_session = MagicMock()
StorageSettingsUtilities.get_settings(FILE_TYPE, mock_storage_account, session=mock_session)
mock_get_storage_key.assert_called_with(
'mock_resource_group', 'mock_storage_account', mock_session)
mock_file_properties_call.assert_called_once()
@patch('azure.cosmosdb.table.tableservice.TableService.get_table_service_properties')
@patch('c7n_azure.storage_utils.StorageUtilities.get_storage_primary_key',
return_value='mock_primary_key')
def test_storage_settings_get_table_settings(self, mock_get_storage_key,
mock_get_table_properties):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_session = MagicMock()
StorageSettingsUtilities.get_settings(
TABLE_TYPE, mock_storage_account, session=mock_session)
mock_get_storage_key.assert_called_with(
'mock_resource_group', 'mock_storage_account', mock_session)
mock_get_table_properties.assert_called_once()
@patch('azure.storage.queue.queueservice.QueueService.get_queue_service_properties')
def test_storage_settings_get_queue_settings(self, mock_get_queue_properties):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_token = 'mock_token'
StorageSettingsUtilities.get_settings(
QUEUE_TYPE, mock_storage_account, token=mock_token)
mock_get_queue_properties.assert_called_once()
@patch('azure.storage.queue.queueservice.QueueService.set_queue_service_properties')
def test_storage_settings_update_logging_queue(self, mock_set_queue_properties):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_token = 'mock_token'
log_settings = MagicMock()
StorageSettingsUtilities.update_logging(
QUEUE_TYPE, mock_storage_account, log_settings, token=mock_token)
mock_set_queue_properties.assert_called_once()
@patch('azure.cosmosdb.table.tableservice.TableService.set_table_service_properties')
@patch('c7n_azure.storage_utils.StorageUtilities.get_storage_primary_key',
return_value='mock_primary_key')
def test_storage_settings_update_logging_table(self, mock_get_storage_key,
mock_set_table_properties):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_session = MagicMock()
log_settings = MagicMock()
StorageSettingsUtilities.update_logging(
TABLE_TYPE, mock_storage_account, log_settings, session=mock_session)
mock_get_storage_key.assert_called_with(
'mock_resource_group', 'mock_storage_account', mock_session)
mock_set_table_properties.assert_called_once()
@patch('azure.storage.blob.blockblobservice.BlockBlobService.set_blob_service_properties')
def test_storage_settings_update_logging_blob(self, mock_set_blob_properties):
mock_storage_account = {
"resourceGroup": "mock_resource_group",
"name": "mock_storage_account"
}
mock_token = 'mock_token'
log_settings = MagicMock()
StorageSettingsUtilities.update_logging(
BLOB_TYPE, mock_storage_account, log_settings, token=mock_token)
mock_set_blob_properties.assert_called_once()
def test_storage_settings_require_secure_transfer(self):
with patch('azure.mgmt.storage.v%s.operations.'
'_storage_accounts_operations.StorageAccountsOperations.update'
% self._get_storage_management_client_api_string()) as update_storage_mock:
p = self.load_policy({
'name': 'my-first-policy',
'resource': 'azure.storage',
'filters': [
{'type': 'value',
'key': 'name',
'op': 'glob',
'value_type': 'normalize',
'value': 'cctstorage*'}
],
'actions': [
{'type': 'require-secure-transfer',
'value': True}
]
})
p.run()
args = update_storage_mock.call_args_list[0][0]
self.assertEqual(args[0], 'test_storage')
self.assertTrue(args[1].startswith('cctstorage'))
self.assertEqual(args[2],
StorageAccountUpdateParameters(enable_https_traffic_only=True))
def _get_storage_management_client_api_string(self):
return local_session(Session)\
.client('azure.mgmt.storage.StorageManagementClient')\
.DEFAULT_API_VERSION.replace("-", "_")
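Most tests in this file select resources with a `value` filter using `op: glob` and `value_type: normalize`, i.e. the resource name is lowercased before glob matching. A minimal sketch of that matching step (the helper name is illustrative; c7n's actual normalization may do more):

```python
from fnmatch import fnmatchcase

def glob_normalize_match(name, pattern):
    # 'normalize' lowercases and strips the value before comparison;
    # fnmatchcase then applies the glob pattern without OS-dependent casing.
    return fnmatchcase(name.strip().lower(), pattern)
```

For example, `glob_normalize_match("CCTStorage01", "cctstorage*")` matches, mirroring the `cctstorage*` fixtures above.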
# --- src/python/trust_network_backend/tnb/apps/core/models.py (vladiibine/trust-network, MIT) ---
from tnb.contrib.db import Model
# --- src/euler_python_package/euler_python/medium/p288.py (wilsonify/euler, MIT) ---
def problem288():
pass
# --- regym/environments/envs/__init__.py (KnwSondess/Regym, MIT) ---
from .gym_envs import *
# --- python/airsim/__init__.py (meurissemax/autonomous-drone, MIT) ---
from .client import *
from .utils import *
from .types import *
# --- loglizer/models/__init__.py (nikile/loglizer, MIT) ---
from .PCA import PCA
# --- iqps/frontend/views.py (pasthorizon/iqps, MIT) ---
from django.shortcuts import render
from .forms import FilterForm
def index(request):
return render(request, "index.html", {'filter_form': FilterForm()})
# --- inclearn/lib/losses/__init__.py (Zotkin/incremental_learning.pytorch, MIT) ---
# flake8: noqa
from .base import *
from .distillation import *
from .metrics import *
from .regularizations import *
from .unsupervised import *
from .nt_xent import *
from .sv_reg import *
# --- headmouse/output_drivers/null.py (aranchelk/headmouse, BSD-3-Clause) ---
from __future__ import print_function
def send_xy(xy):
    pass
# --- src/app/tests/clients/test_client_exception.py (ferdn4ndo/candlestick-data-lake, MIT) ---
import unittest
from app.clients.client_exception import ClientException
class TestClientException(unittest.TestCase):
pass
# --- cloudmesh/burn/burner/ubuntu.py (cloudmesh/cloudmesh_pi_burn, Apache-2.0) ---
# class Burner(AbstractBurner):
class Burner:
def __init__(self):
pass
# --- nanobrok/blueprints/resources/resourcesCommands.py (retr0-13/nanobroK, Apache-2.0) ---
from flask_restplus import Resource
from flask import request
from nanobrok.models import (
PacketData,
PacketType,
PacketDataSchema,
Event,
SecCommandType,
ClipboardSchema,
MessageToastSchema,
AlarmSchema,
TimeLookSchema,
)
from nanobrok.exceptions import (
ValidationError as VE,
)
from marshmallow import ValidationError
from nanobrok.blueprints.webui.utils import remove_key_from_dict, build_packet_data
from nanobrok.ext.restapi import ns_commands
from .resourceUtils import build_message_done
from nanobrok.ext.socketio import socketio
import json
import threading
from nanobrok.ext.database import db
from .resourcesAuth import token_required_admin
from dynaconf import settings
# This file is part of the Nanobrok Open Source Project.
# nanobrok is licensed under the Apache 2.0.
# Copyright 2021 p0cL4bs Team - Marcos Bomfim (mh4x0f)
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def register_routes(app):
ns_commands.add_resource(LockNowResource, "/security/locknow")
ns_commands.add_resource(
MaxInactivityTimeLockResource, "/security/maxInactivityTimeLock"
)
ns_commands.add_resource(CheckIsEnableAdminResource, "/security/checkIsEnableAdmin")
ns_commands.add_resource(AlarmResource, "/security/alarm")
ns_commands.add_resource(MessageResource, "/misc/messageToast")
ns_commands.add_resource(ClipBoardResource, "/misc/clipboard")
    print("ROUTERS Registered: CommandsController")
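Every handler below repeats the same emit-and-wait pattern: a `threading.Event` is set by the socket.io ack callback while the request thread blocks for up to five seconds. A standalone sketch of that pattern (the helper name is an illustration, not part of nanobrok):

```python
import threading

def wait_for_ack(emit, timeout=5):
    # `emit` is any callable that takes an ack callback; the callback
    # stores the payload and releases the waiting request thread.
    ev = threading.Event()
    holder = {}

    def ack(data):
        holder["data"] = data
        ev.set()

    emit(ack)
    ev.wait(timeout=timeout)
    return holder.get("data")  # None if the client never acknowledged
```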
class LockNowResource(Resource):
@ns_commands.doc(responses={200: "commands has executed successfully."})
@ns_commands.doc(responses={401: "User does not have permission to access"})
@ns_commands.doc(
responses={
503: "Client unavailable, the client is not ready to handle the request"
}
)
@token_required_admin
def post(self, current_user):
ev = threading.Event()
packet_data = None
def ackResponseLockNow(data):
nonlocal packet_data
nonlocal ev
packet_data = data
ev.set()
packet_data_request = build_packet_data(
Event.COMMAND_SECURITY_CODE, PacketType.SEND_PACKET_CODE
)
db.session.add(packet_data_request)
db.session.commit()
print("sending: event packetdata")
print("Event target => " + Event[packet_data_request.event].name)
print("data send: {}".format(packet_data_request.serialize()))
print("sending: event ")
socketio.emit(
Event.COMMAND_SECURITY_CODE.value,
{
"type": SecCommandType.SET_LOCK_NOW.value,
},
namespace=settings.ENDPOINT_IO_CORE,
callback=ackResponseLockNow,
)
ev.wait(timeout=5)
        if packet_data is not None:
packet_data = json.loads(packet_data)
print("data recv: {}".format(packet_data))
schema_packet = PacketDataSchema()
try:
result_packetData = schema_packet.load(
remove_key_from_dict(packet_data, {"data"})
)
obj_packetdata = PacketData(**result_packetData)
db.session.add(obj_packetdata)
db.session.commit()
message = packet_data["data"].get("message")
return build_message_done(200, message, packet_data["data"])
            except ValidationError as err:
                print(err.messages)
                raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
            except Exception as err:
                # a generic Exception has no .messages attribute; report its string form
                print(err)
                raise VE(msg=str(err), code=400)
raise VE(
msg="Client unavailable, the client is not ready to handle the request",
code=503,
)
class MaxInactivityTimeLockResource(Resource):
@ns_commands.doc(responses={200: "commands has executed successfully."})
@ns_commands.doc(responses={401: "User does not have permission to access"})
@ns_commands.doc(
responses={
503: "Client unavailable, the client is not ready to handle the request"
}
)
@token_required_admin
def post(self, current_user):
timeLookData = request.get_json(silent=True)
if not timeLookData:
raise VE(
msg="There was an error in your request, please try again.", code=400
)
schema = TimeLookSchema()
try:
result_timeLookData = schema.load(timeLookData)
except ValidationError as err:
print(err.messages)
raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
ev = threading.Event()
packet_data = None
def ackResponse(data):
nonlocal packet_data
nonlocal ev
packet_data = data
ev.set()
packet_data_request = build_packet_data(
Event.COMMAND_SECURITY_CODE, PacketType.SEND_PACKET_CODE
)
db.session.add(packet_data_request)
db.session.commit()
print("sending: event packetdata")
print("Event target => " + Event[packet_data_request.event].name)
print("data send: {}".format(packet_data_request.serialize()))
print("sending: event ")
socketio.emit(
Event.COMMAND_SECURITY_CODE.value,
{
"type": SecCommandType.SET_MAX_INACTIVITY_TIME.value,
"timeMs": result_timeLookData["timeMs"] * 1000,
},
namespace=settings.ENDPOINT_IO_CORE,
callback=ackResponse,
)
ev.wait(timeout=5)
        if packet_data is not None:
packet_data = json.loads(packet_data)
print("data recv: {}".format(packet_data))
schema_packet = PacketDataSchema()
try:
result_packetData = schema_packet.load(
remove_key_from_dict(packet_data, {"data"})
)
obj_packetdata = PacketData(**result_packetData)
db.session.add(obj_packetdata)
db.session.commit()
message = packet_data["data"].get("message")
return build_message_done(200, message, packet_data["data"])
            except ValidationError as err:
                print(err.messages)
                raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
            except Exception as err:
                # a generic Exception has no .messages attribute; report its string form
                print(err)
                raise VE(msg=str(err), code=400)
raise VE(
msg="Client unavailable, the client is not ready to handle the request",
code=503,
)
class CheckIsEnableAdminResource(Resource):
@ns_commands.doc(responses={200: "commands has executed successfully."})
@ns_commands.doc(responses={401: "User does not have permission to access"})
@ns_commands.doc(
responses={
503: "Client unavailable, the client is not ready to handle the request"
}
)
@token_required_admin
def post(self, current_user):
ev = threading.Event()
packet_data = None
def ackResponse(data):
nonlocal packet_data
nonlocal ev
packet_data = data
ev.set()
packet_data_request = build_packet_data(
Event.COMMAND_SECURITY_CODE, PacketType.SEND_PACKET_CODE
)
db.session.add(packet_data_request)
db.session.commit()
print("sending: event packetdata")
print("Event target => " + Event[packet_data_request.event].name)
print("data send: {}".format(packet_data_request.serialize()))
print("sending: event ")
socketio.emit(
Event.COMMAND_SECURITY_CODE.value,
{"type": SecCommandType.IS_DEVICE_ADMIN.value},
namespace=settings.ENDPOINT_IO_CORE,
callback=ackResponse,
)
ev.wait(timeout=5)
        if packet_data is not None:
packet_data = json.loads(packet_data)
print("data recv: {}".format(packet_data))
schema_packet = PacketDataSchema()
try:
result_packetData = schema_packet.load(
remove_key_from_dict(packet_data, {"data"})
)
obj_packetdata = PacketData(**result_packetData)
db.session.add(obj_packetdata)
db.session.commit()
message = packet_data["data"].get("message")
return build_message_done(200, message, packet_data["data"])
except Exception as err:
print(err)
raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
raise VE(
msg="Client unavailable, the client is not ready to handle the request",
code=503,
)
class MessageResource(Resource):
    @ns_commands.doc(responses={200: "Message has been sent successfully"})
    @ns_commands.doc(responses={401: "User does not have permission to access"})
    @ns_commands.doc(
        responses={400: "Bad Request: invalid request syntax or message."}
    )
    @ns_commands.doc(
        responses={
            503: "Client unavailable, the client is not ready to handle the request"
        }
    )
    @token_required_admin
    def post(self, current_user):
        message_data = request.get_json(silent=True)
        if not message_data:
            raise VE(
                msg="There was an error in your request, please try again.", code=400
            )
        schema_message = MessageToastSchema()
        try:
            toast_message = schema_message.load(message_data)
        except ValidationError as err:
            print(err.messages)
            raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)

        ev = threading.Event()
        packet_data = None

        def ackResponse(data):
            nonlocal packet_data
            nonlocal ev
            packet_data = data
            ev.set()

        packet_data_request = build_packet_data(
            Event.COMMAND_MISC_TOAST_CODE, PacketType.SEND_PACKET_CODE
        )
        db.session.add(packet_data_request)
        db.session.commit()
        print("sending: event packetdata")
        print("Event target => " + Event[packet_data_request.event].name)
        print("data send: {}".format(packet_data_request.serialize()))
        print("sending: event ")
        socketio.emit(
            Event.COMMAND_MISC_TOAST_CODE.value,
            toast_message,
            namespace=settings.ENDPOINT_IO_CORE,
            callback=ackResponse,
        )
        ev.wait(timeout=5)
        if packet_data is not None:
            packet_data = json.loads(packet_data)
            print("data recv: {}".format(packet_data))
            schema_packet = PacketDataSchema()
            try:
                result_packetData = schema_packet.load(
                    remove_key_from_dict(packet_data, {"data"})
                )
                obj_packetdata = PacketData(**result_packetData)
                db.session.add(obj_packetdata)
                db.session.commit()
                message = packet_data["data"].get("message")
                return build_message_done(200, message, packet_data["data"])
            except ValidationError as err:
                print(err.messages)
                raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
        raise VE(
            msg="Client unavailable, the client is not ready to handle the request",
            code=503,
        )
class ClipBoardResource(Resource):
    @ns_commands.doc(responses={200: "Clipboard has been sent successfully"})
    @ns_commands.doc(responses={401: "User does not have permission to access"})
    @ns_commands.doc(
        responses={400: "Bad Request: invalid request syntax or message."}
    )
    @ns_commands.doc(
        responses={
            503: "Client unavailable, the client is not ready to handle the request"
        }
    )
    def post(self, current_user=None):
        clipboard_data = request.get_json(silent=True)
        if not clipboard_data:
            raise VE(
                msg="There was an error in your request, please try again.", code=400
            )
        schema_clipboard = ClipboardSchema()
        try:
            result_clipboard = schema_clipboard.load(clipboard_data)
        except ValidationError as err:
            print(err.messages)
            raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)

        ev = threading.Event()
        packet_data = None

        def ackResponse(data):
            nonlocal packet_data
            nonlocal ev
            packet_data = data
            ev.set()

        packet_data_request = build_packet_data(
            Event.CLIPBOARD_CODE, PacketType.SEND_PACKET_CODE
        )
        db.session.add(packet_data_request)
        db.session.commit()
        print("sending: event packetdata")
        print("Event target => " + Event[packet_data_request.event].name)
        print("data send: {}".format(packet_data_request.serialize()))
        print("sending: event ")
        socketio.emit(
            Event.CLIPBOARD_CODE.value,
            result_clipboard,
            namespace=settings.ENDPOINT_IO_CORE,
            callback=ackResponse,
        )
        ev.wait(timeout=5)
        if packet_data is not None:
            packet_data = json.loads(packet_data)
            print("data recv: {}".format(packet_data))
            schema_packet = PacketDataSchema()
            try:
                result_packetData = schema_packet.load(
                    remove_key_from_dict(packet_data, {"data"})
                )
                obj_packetdata = PacketData(**result_packetData)
                db.session.add(obj_packetdata)
                db.session.commit()
                message = packet_data["data"].get("message")
                return build_message_done(200, message, packet_data["data"])
            except ValidationError as err:
                print(err.messages)
                raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
        raise VE(
            msg="Client unavailable, the client is not ready to handle the request",
            code=503,
        )
class AlarmResource(Resource):
    @ns_commands.doc(responses={200: "Alarm has been sent successfully"})
    @ns_commands.doc(responses={401: "User does not have permission to access"})
    @ns_commands.doc(
        responses={400: "Bad Request: invalid request syntax or message."}
    )
    @ns_commands.doc(
        responses={
            503: "Client unavailable, the client is not ready to handle the request"
        }
    )
    @token_required_admin
    def post(self, current_user):
        message_data = request.get_json(silent=True)
        if not message_data:
            raise VE(
                msg="There was an error in your request, please try again.", code=400
            )
        schema_alarm = AlarmSchema()
        try:
            alarm_data = schema_alarm.load(message_data)
        except ValidationError as err:
            print(err.messages)
            raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)

        ev = threading.Event()
        packet_data = None

        def ackResponse(data):
            nonlocal packet_data
            nonlocal ev
            packet_data = data
            ev.set()

        packet_data_request = build_packet_data(
            Event.COMMAND_SECURITY_ALARM_CODE, PacketType.SEND_PACKET_CODE
        )
        db.session.add(packet_data_request)
        db.session.commit()
        print("sending: event packetdata")
        print("Event target => " + Event[packet_data_request.event].name)
        print("data send: {}".format(packet_data_request.serialize()))
        print("sending: event ")
        socketio.emit(
            Event.COMMAND_SECURITY_ALARM_CODE.value,
            alarm_data,
            namespace=settings.ENDPOINT_IO_CORE,
            callback=ackResponse,
        )
        ev.wait(timeout=5)
        if packet_data is not None:
            packet_data = json.loads(packet_data)
            print("data recv: {}".format(packet_data))
            schema_packet = PacketDataSchema()
            try:
                result_packetData = schema_packet.load(
                    remove_key_from_dict(packet_data, {"data"})
                )
                obj_packetdata = PacketData(**result_packetData)
                db.session.add(obj_packetdata)
                db.session.commit()
                message = packet_data["data"].get("message")
                return build_message_done(200, message, packet_data["data"])
            except ValidationError as err:
                print(err.messages)
                raise VE(msg=err.messages.get(list(err.messages)[0])[0], code=400)
        raise VE(
            msg="Client unavailable, the client is not ready to handle the request",
            code=503,
        )
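Each of the resource classes above repeats the same acknowledgement dance: create a `threading.Event`, register a callback that stores the payload and sets the event, emit, then wait with a timeout. A hedged refactoring sketch (not part of the original file — `emit_and_wait` and `fake_emit` are hypothetical names, with a stub standing in for `socketio.emit`) shows how that pattern could be factored into one helper:

```python
import threading

def emit_and_wait(emit, event, payload, timeout=5):
    """Emit an event and block until the client acknowledges or the timeout expires."""
    ev = threading.Event()
    result = {}

    def ack(data):
        # Callback invoked by the emitter with the client's acknowledgement.
        result["data"] = data
        ev.set()

    emit(event, payload, callback=ack)
    if ev.wait(timeout=timeout):
        return result["data"]
    return None  # client did not acknowledge in time

# Demo with a fake emitter that acknowledges immediately.
def fake_emit(event, payload, callback):
    callback('{"status": "ok"}')

print(emit_and_wait(fake_emit, "COMMAND", {"type": 1}))  # {"status": "ok"}
```

Each handler could then call the helper and branch on a `None` return to raise the 503 error, instead of repeating the event/callback boilerplate five times.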
| 35.162602 | 88 | 0.606821 | 1,899 | 17,300 | 5.35387 | 0.120063 | 0.089505 | 0.033048 | 0.045441 | 0.792269 | 0.786564 | 0.776827 | 0.769155 | 0.765909 | 0.765909 | 0 | 0.01528 | 0.300173 | 17,300 | 491 | 89 | 35.234216 | 0.824482 | 0.038613 | 0 | 0.709756 | 0 | 0 | 0.14339 | 0.00355 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031707 | false | 0 | 0.031707 | 0 | 0.092683 | 0.102439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0ad9d75fb736a4f2c255cce57cf29417f4f859dc | 37 | py | Python | pylocalc/__init__.py | daurenmuratov/pylocalc | 9e8b2727fd17272b73382b8e4ed7cb29eb4b6d19 | [
"MIT"
] | 2 | 2021-01-13T12:57:51.000Z | 2021-04-16T13:01:09.000Z | pylocalc/__init__.py | daurenmuratov/pylocalc | 9e8b2727fd17272b73382b8e4ed7cb29eb4b6d19 | [
"MIT"
] | null | null | null | pylocalc/__init__.py | daurenmuratov/pylocalc | 9e8b2727fd17272b73382b8e4ed7cb29eb4b6d19 | [
"MIT"
] | 1 | 2021-12-04T00:04:29.000Z | 2021-12-04T00:04:29.000Z | from pylocalc.models import Document
| 18.5 | 36 | 0.864865 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ae25a930f510f894ee0f5cdd0f5bd8d0aab8029 | 46 | py | Python | lino_book/projects/adg/settings/__init__.py | khchine5/book | b6272d33d49d12335d25cf0a2660f7996680b1d1 | [
"BSD-2-Clause"
] | 1 | 2018-01-12T14:09:58.000Z | 2018-01-12T14:09:58.000Z | lino_book/projects/adg/settings/__init__.py | khchine5/book | b6272d33d49d12335d25cf0a2660f7996680b1d1 | [
"BSD-2-Clause"
] | 4 | 2018-02-06T19:53:10.000Z | 2019-08-01T21:47:44.000Z | lino_book/projects/adg/settings/__init__.py | khchine5/book | b6272d33d49d12335d25cf0a2660f7996680b1d1 | [
"BSD-2-Clause"
] | null | null | null | from lino_avanti.lib.avanti.settings import *
| 23 | 45 | 0.826087 | 7 | 46 | 5.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0aea7478a8c9f1dc3585e524fc767d7b1a8a9b90 | 16,681 | py | Python | tests/test_graphqlview.py | jrbeilke/chalice-graphql | deb1167ec319700a38e60f853dfaf635b7bcdc8b | [
"MIT"
] | 9 | 2021-03-11T04:57:53.000Z | 2022-01-25T01:33:20.000Z | tests/test_graphqlview.py | jrbeilke/chalice-graphql | deb1167ec319700a38e60f853dfaf635b7bcdc8b | [
"MIT"
] | 1 | 2020-11-18T15:15:46.000Z | 2020-11-18T15:15:46.000Z | tests/test_graphqlview.py | jrbeilke/chalice-graphql | deb1167ec319700a38e60f853dfaf635b7bcdc8b | [
"MIT"
] | 1 | 2021-05-12T12:00:52.000Z | 2021-05-12T12:00:52.000Z | import json
from io import StringIO
from urllib.parse import urlencode

from chalice.test import Client
import pytest

from .app import app


@pytest.fixture
def test_client():
    with Client(app) as client:
        yield client


def url_string(path, **url_params):
    if url_params:
        path += "?" + urlencode(url_params)
    return path


def json_dump_kwarg(**kwargs):
    return json.dumps(kwargs)


def json_dump_kwarg_list(**kwargs):
    return json.dumps([kwargs])
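The two helpers above build, respectively, a GET URL with query-string-encoded parameters and a JSON request body. A quick standalone illustration (the helper definitions are restated here only so the snippet runs on its own):

```python
import json
from urllib.parse import urlencode

def url_string(path, **url_params):
    # Append URL-encoded parameters, if any, to the path.
    if url_params:
        path += "?" + urlencode(url_params)
    return path

def json_dump_kwarg(**kwargs):
    # Serialize keyword arguments as a JSON object body.
    return json.dumps(kwargs)

print(url_string('/graphql', query="{test}"))  # /graphql?query=%7Btest%7D
print(json_dump_kwarg(query="{test}"))         # {"query": "{test}"}
```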
def test_index(test_client):
    response = test_client.http.get('/')
    assert response.status_code == 200
    assert response.json_body == {'hello': 'world'}


def test_allows_get_with_query_param(test_client):
    response = test_client.http.get(url_string('/graphql', query="{test}"))

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello World"}}


def test_allows_get_with_variable_values(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="query helloWho($who: String){ test(who: $who) }",
            variables=json.dumps({"who": "Dolly"}),
        )
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}


def test_allows_get_with_operation_name(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="""
            query helloYou { test(who: "You"), ...shared }
            query helloWorld { test(who: "World"), ...shared }
            query helloDolly { test(who: "Dolly"), ...shared }
            fragment shared on QueryRoot {
              shared: test(who: "Everyone")
            }
            """,
            operationName="helloWorld",
        )
    )

    assert response.status_code == 200
    assert response.json_body == {
        "data": {"test": "Hello World", "shared": "Hello Everyone"}
    }
def test_reports_validation_errors(test_client):
    response = test_client.http.get(url_string('/graphql', query="{ test, unknownOne, unknownTwo }"))

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {
                "message": "Cannot query field 'unknownOne' on type 'QueryRoot'.",
                "locations": [{"line": 1, "column": 9}],
                "path": None,
            },
            {
                "message": "Cannot query field 'unknownTwo' on type 'QueryRoot'.",
                "locations": [{"line": 1, "column": 21}],
                "path": None,
            },
        ]
    }


def test_errors_when_missing_operation_name(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="""
            query TestQuery { test }
            mutation TestMutation { writeTest { test } }
            """,
        )
    )

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {
                "message": "Must provide operation name if query contains multiple operations.",  # noqa: E501
                "locations": None,
                "path": None,
            }
        ]
    }


def test_errors_when_sending_a_mutation_via_get(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="""
            mutation TestMutation { writeTest { test } }
            """,
        )
    )

    assert response.status_code == 405
    assert response.json_body == {
        "errors": [
            {
                "message": "Can only perform a mutation operation from a POST request.",
                "locations": None,
                "path": None,
            }
        ]
    }


def test_errors_when_selecting_a_mutation_within_a_get(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="""
            query TestQuery { test }
            mutation TestMutation { writeTest { test } }
            """,
            operationName="TestMutation",
        )
    )

    assert response.status_code == 405
    assert response.json_body == {
        "errors": [
            {
                "message": "Can only perform a mutation operation from a POST request.",
                "locations": None,
                "path": None,
            }
        ]
    }


def test_allows_mutation_to_exist_within_a_get(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="""
            query TestQuery { test }
            mutation TestMutation { writeTest { test } }
            """,
            operationName="TestQuery",
        )
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello World"}}
def test_allows_post_with_json_encoding(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=json_dump_kwarg(query="{test}"),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello World"}}


def test_allows_sending_a_mutation_via_post(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=json_dump_kwarg(query="mutation TestMutation { writeTest { test } }"),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"writeTest": {"test": "Hello World"}}}


def test_allows_post_with_url_encoding(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=urlencode(dict(query="{test}")),
        headers={'Content-Type': 'application/x-www-form-urlencoded'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello World"}}


def test_supports_post_json_query_with_string_variables(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=json_dump_kwarg(
            query="query helloWho($who: String){ test(who: $who) }",
            variables=json.dumps({"who": "Dolly"}),
        ),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}


def test_supports_post_json_query_with_json_variables(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=json_dump_kwarg(
            query="query helloWho($who: String){ test(who: $who) }",
            variables={"who": "Dolly"},
        ),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}


def test_supports_post_url_encoded_query_with_string_variables(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=urlencode(
            dict(
                query="query helloWho($who: String){ test(who: $who) }",
                variables=json.dumps({"who": "Dolly"}),
            )
        ),
        headers={'Content-Type': 'application/x-www-form-urlencoded'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}


def test_supports_post_json_query_with_get_variable_values(test_client):
    response = test_client.http.post(
        url_string('/graphql', variables=json.dumps({"who": "Dolly"})),
        body=json_dump_kwarg(query="query helloWho($who: String){ test(who: $who) }"),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}


def test_post_url_encoded_query_with_get_variable_values(test_client):
    response = test_client.http.post(
        url_string('/graphql', variables=json.dumps({"who": "Dolly"})),
        body=urlencode(dict(query="query helloWho($who: String){ test(who: $who) }")),
        headers={'Content-Type': 'application/x-www-form-urlencoded'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}


def test_supports_post_raw_text_query_with_get_variable_values(test_client):
    response = test_client.http.post(
        url_string('/graphql', variables=json.dumps({"who": "Dolly"})),
        body="query helloWho($who: String){ test(who: $who) }",
        headers={'Content-Type': 'application/graphql'},
    )

    assert response.status_code == 200
    assert response.json_body == {"data": {"test": "Hello Dolly"}}
def test_allows_post_with_operation_name(test_client):
    response = test_client.http.post(
        url_string('/graphql'),
        body=json_dump_kwarg(
            query="""
            query helloYou { test(who: "You"), ...shared }
            query helloWorld { test(who: "World"), ...shared }
            query helloDolly { test(who: "Dolly"), ...shared }
            fragment shared on QueryRoot {
              shared: test(who: "Everyone")
            }
            """,
            operationName="helloWorld",
        ),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == {
        "data": {"test": "Hello World", "shared": "Hello Everyone"}
    }


def test_allows_post_with_get_operation_name(test_client):
    response = test_client.http.post(
        url_string('/graphql', operationName="helloWorld"),
        body="""
        query helloYou { test(who: "You"), ...shared }
        query helloWorld { test(who: "World"), ...shared }
        query helloDolly { test(who: "Dolly"), ...shared }
        fragment shared on QueryRoot {
          shared: test(who: "Everyone")
        }
        """,
        headers={'Content-Type': 'application/graphql'},
    )

    assert response.status_code == 200
    assert response.json_body == {
        "data": {"test": "Hello World", "shared": "Hello Everyone"}
    }


def test_not_pretty_by_default(test_client):
    response = test_client.http.get(url_string('/graphql', query="{test}"))

    assert response.body.decode() == '{"data":{"test":"Hello World"}}'


def test_supports_pretty_printing_by_request(test_client):
    response = test_client.http.get(url_string('/graphql', query="{test}", pretty="1"))

    assert response.body.decode() == (
        "{\n" '  "data": {\n' '    "test": "Hello World"\n' "  }\n" "}"
    )
def test_handles_field_errors_caught_by_graphql(test_client):
    response = test_client.http.get(url_string('/graphql', query="{thrower}"))

    assert response.status_code == 200
    assert response.json_body == {
        "errors": [
            {
                "locations": [{"column": 2, "line": 1}],
                "path": ["thrower"],
                "message": "Throws!",
            }
        ],
        "data": None,
    }


def test_handles_syntax_errors_caught_by_graphql(test_client):
    response = test_client.http.get(url_string('/graphql', query="syntaxerror"))

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {
                "locations": [{"column": 1, "line": 1}],
                "message": "Syntax Error: Unexpected Name 'syntaxerror'.",
                "path": None,
            }
        ]
    }


def test_handles_errors_caused_by_a_lack_of_query(test_client):
    response = test_client.http.get(url_string('/graphql'))

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {"message": "Must provide query string.", "locations": None, "path": None}
        ]
    }


def test_handles_batch_correctly_if_is_disabled(test_client):
    response = test_client.http.post(url_string('/graphql'), body="[]", headers={'Content-Type': 'application/json'})

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {
                "message": "Batch GraphQL requests are not enabled.",
                "locations": None,
                "path": None,
            }
        ]
    }


def test_handles_incomplete_json_bodies(test_client):
    response = test_client.http.post(
        url_string('/graphql'), body='{"query":', headers={'Content-Type': 'application/json'}
    )

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {"message": "POST body sent invalid JSON.", "locations": None, "path": None}
        ]
    }


def test_handles_plain_post_text(test_client):
    response = test_client.http.post(
        url_string('/graphql', variables=json.dumps({"who": "Dolly"})),
        body="query helloWho($who: String){ test(who: $who) }",
        headers={'Content-Type': 'text/plain'},
    )

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {"message": "Must provide query string.", "locations": None, "path": None}
        ]
    }
def test_handles_poorly_formed_variables(test_client):
    response = test_client.http.get(
        url_string(
            '/graphql',
            query="query helloWho($who: String){ test(who: $who) }",
            variables="who:You",
        )
    )

    assert response.status_code == 400
    assert response.json_body == {
        "errors": [
            {"message": "Variables are invalid JSON.", "locations": None, "path": None}
        ]
    }


def test_handles_unsupported_http_methods(test_client):
    response = test_client.http.put(url_string('/graphql', query="{test}"))

    assert response.status_code == 405
    # TODO: Add this back in once Chalice supports returning the Allow header
    # https://github.com/aws/chalice/issues/1583
    # assert response.headers["Allow"] in ["GET, POST", "HEAD, GET, POST, OPTIONS"]
    assert response.json_body == {
        "errors": [
            {
                "message": "GraphQL only supports GET and POST requests.",
                "locations": None,
                "path": None,
            }
        ]
    }


def test_passes_request_into_request_context(test_client):
    response = test_client.http.get(url_string('/graphql', query="{request}", q="testing"))

    assert response.status_code == 200
    assert response.json_body == {"data": {"request": "testing"}}


# TODO: Chalice lacks support for multipart requests
# https://github.com/aws/chalice/issues/796
# def test_post_multipart_data(test_client):
#     query = "mutation TestMutation { writeTest { test } }"
#     response = test_client.http.post(
#         url_string('/graphql'),
#         body={"query": query, "file": (StringIO(), "text1.txt")},
#         headers={'Content-Type': 'multipart/form-data'},
#     )
#
#     assert response.status_code == 200
#     assert response.json_body == {
#         "data": {u"writeTest": {u"test": u"Hello World"}}
#     }
def test_batch_allows_post_with_json_encoding(test_client):
    response = test_client.http.post(
        url_string('/graphql/batch'),
        body=json_dump_kwarg_list(
            # id=1,
            query="{test}"
        ),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == [
        {
            # 'id': 1,
            "data": {"test": "Hello World"}
        }
    ]


def test_batch_supports_post_json_query_with_json_variables(test_client):
    response = test_client.http.post(
        url_string('/graphql/batch'),
        body=json_dump_kwarg_list(
            # id=1,
            query="query helloWho($who: String){ test(who: $who) }",
            variables={"who": "Dolly"},
        ),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == [
        {
            # 'id': 1,
            "data": {"test": "Hello Dolly"}
        }
    ]


def test_batch_allows_post_with_operation_name(test_client):
    response = test_client.http.post(
        url_string('/graphql/batch'),
        body=json_dump_kwarg_list(
            # id=1,
            query="""
            query helloYou { test(who: "You"), ...shared }
            query helloWorld { test(who: "World"), ...shared }
            query helloDolly { test(who: "Dolly"), ...shared }
            fragment shared on QueryRoot {
              shared: test(who: "Everyone")
            }
            """,
            operationName="helloWorld",
        ),
        headers={'Content-Type': 'application/json'},
    )

    assert response.status_code == 200
    assert response.json_body == [
        {
            # 'id': 1,
            "data": {"test": "Hello World", "shared": "Hello Everyone"}
        }
    ]
e407d896ee72fda30ed70570567a49ede6de1a6d | 3,774 | py | Python | tests/parsers/java_idx.py | CNR-ITTIG/plasodfaxp | 923797fc00664fa9e3277781b0334d6eed5664fd | [
"Apache-2.0"
] | 1 | 2019-09-26T08:16:30.000Z | 2019-09-26T08:16:30.000Z | tests/parsers/java_idx.py | CNR-ITTIG/plasodfaxp | 923797fc00664fa9e3277781b0334d6eed5664fd | [
"Apache-2.0"
] | null | null | null | tests/parsers/java_idx.py | CNR-ITTIG/plasodfaxp | 923797fc00664fa9e3277781b0334d6eed5664fd | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""Tests for Java Cache IDX file parser."""

import unittest

from plaso.formatters import java_idx as _  # pylint: disable=unused-import
from plaso.lib import eventdata
from plaso.lib import timelib
from plaso.parsers import java_idx

from tests.parsers import test_lib


class IDXTest(test_lib.ParserTestCase):
  """Tests for Java Cache IDX file parser."""

  def setUp(self):
    """Makes preparations before running an individual test."""
    self._parser = java_idx.JavaIDXParser()

  def testParse602(self):
    """Tests the Parse function on a version 602 IDX file."""
    test_file = self._GetTestFilePath([u'java_602.idx'])
    event_queue_consumer = self._ParseFile(self._parser, test_file)
    event_objects = self._GetEventObjectsFromQueue(event_queue_consumer)

    self.assertEqual(len(event_objects), 2)

    event_object = event_objects[0]

    idx_version_expected = 602
    self.assertEqual(event_object.idx_version, idx_version_expected)

    ip_address_expected = u'Unknown'
    self.assertEqual(event_object.ip_address, ip_address_expected)

    url_expected = u'http://www.gxxxxx.com/a/java/xxz.jar'
    self.assertEqual(event_object.url, url_expected)

    description_expected = u'File Hosted Date'
    self.assertEqual(event_object.timestamp_desc, description_expected)

    expected_timestamp = timelib.Timestamp.CopyFromString(
        u'2010-05-05 01:34:19.720')
    self.assertEqual(event_object.timestamp, expected_timestamp)

    # Parse second event. Same metadata; different timestamp event.
    event_object = event_objects[1]

    self.assertEqual(event_object.idx_version, idx_version_expected)
    self.assertEqual(event_object.ip_address, ip_address_expected)
    self.assertEqual(event_object.url, url_expected)

    description_expected = eventdata.EventTimestamp.FILE_DOWNLOADED
    self.assertEqual(event_object.timestamp_desc, description_expected)

    expected_timestamp = timelib.Timestamp.CopyFromString(
        u'2010-05-05 03:52:31')
    self.assertEqual(event_object.timestamp, expected_timestamp)

  def testParse605(self):
    """Tests the Parse function on a version 605 IDX file."""
    test_file = self._GetTestFilePath([u'java.idx'])
    event_queue_consumer = self._ParseFile(self._parser, test_file)
    event_objects = self._GetEventObjectsFromQueue(event_queue_consumer)

    self.assertEqual(len(event_objects), 2)

    event_object = event_objects[0]

    idx_version_expected = 605
    self.assertEqual(event_object.idx_version, idx_version_expected)

    ip_address_expected = u'10.7.119.10'
    self.assertEqual(event_object.ip_address, ip_address_expected)

    url_expected = (
        u'http://xxxxc146d3.gxhjxxwsf.xx:82/forum/dare.php?'
        u'hsh=6&key=b30xxxx1c597xxxx15d593d3f0xxx1ab')
    self.assertEqual(event_object.url, url_expected)

    description_expected = u'File Hosted Date'
    self.assertEqual(event_object.timestamp_desc, description_expected)

    expected_timestamp = timelib.Timestamp.CopyFromString(
        u'2001-07-26 05:00:00')
    self.assertEqual(event_object.timestamp, expected_timestamp)

    # Parse second event. Same metadata; different timestamp event.
    event_object = event_objects[1]

    self.assertEqual(event_object.idx_version, idx_version_expected)
    self.assertEqual(event_object.ip_address, ip_address_expected)
    self.assertEqual(event_object.url, url_expected)

    description_expected = eventdata.EventTimestamp.FILE_DOWNLOADED
    self.assertEqual(event_object.timestamp_desc, description_expected)

    expected_timestamp = timelib.Timestamp.CopyFromString(
        u'2013-01-13 16:22:01')
    self.assertEqual(event_object.timestamp, expected_timestamp)


if __name__ == '__main__':
  unittest.main()
7c1c3f18cb7a5a25b15163bbd9ac58654c12d78c | 26,304 | py | Python | qml/fchl.py | mikstr/qml | 552e273da080a3a1fb9f8c466e4562b7d64ed6bd | [
"MIT"
] | 185 | 2017-04-26T19:57:43.000Z | 2022-03-22T03:50:14.000Z | qml/fchl.py | FarnazH/qml | 552e273da080a3a1fb9f8c466e4562b7d64ed6bd | [
"MIT"
] | 61 | 2017-06-04T11:28:20.000Z | 2021-08-02T15:36:07.000Z | qml/fchl.py | FarnazH/qml | 552e273da080a3a1fb9f8c466e4562b7d64ed6bd | [
"MIT"
] | 78 | 2017-04-25T10:10:17.000Z | 2022-03-31T06:51:47.000Z | # MIT License
#
# Copyright (c) 2017 Felix Faber and Anders Steen Christensen
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import numpy as np
import copy
from .ffchl_module import fget_kernels_fchl
from .ffchl_module import fget_symmetric_kernels_fchl
from .ffchl_module import fget_global_kernels_fchl
from .ffchl_module import fget_global_symmetric_kernels_fchl
from .ffchl_module import fget_atomic_kernels_fchl
from .ffchl_module import fget_atomic_symmetric_kernels_fchl
from .alchemy import get_alchemy
def generate_representation(coordinates, nuclear_charges,
max_size=23, neighbors=23, cut_distance = 5.0, cell=None):
""" Generates a representation for the FCHL kernel module.
:param coordinates: Input coordinates.
:type coordinates: numpy array
:param nuclear_charges: List of nuclear charges.
:type nuclear_charges: numpy array
:param max_size: Max number of atoms in representation.
:type max_size: integer
:param neighbors: Max number of atoms within the cut-off around an atom. (For periodic systems)
:type neighbors: integer
:param cell: Unit cell vectors. The presence of this keyword argument will generate a periodic representation.
:type cell: numpy array
:param cut_distance: Spatial cut-off distance - must be the same as used in the kernel function call.
:type cut_distance: float
:return: FCHL representation, shape = (size,5,neighbors).
:rtype: numpy array
"""
    size = max_size

    if cell is None:
        neighbors = size

    L = len(coordinates)
    coords = np.asarray(coordinates)
    occupation_list = np.asarray(nuclear_charges)

    M = np.zeros((size, 5, neighbors))

    if cell is not None:
        coords = np.dot(coords, cell)
        # Number of periodic images needed along each lattice vector to cover the cut-off.
        n_extend = (np.floor(cut_distance / np.linalg.norm(cell, 2, axis=0)) + 1).astype(int)
        for i in range(-n_extend[0], n_extend[0] + 1):
            for j in range(-n_extend[1], n_extend[1] + 1):
                for k in range(-n_extend[2], n_extend[2] + 1):
                    if i == -n_extend[0] and j == -n_extend[1] and k == -n_extend[2]:
                        coords_ext = coords + i * cell[0, :] + j * cell[1, :] + k * cell[2, :]
                        occupation_list_ext = copy.copy(occupation_list)
                    else:
                        occupation_list_ext = np.append(occupation_list_ext, occupation_list)
                        coords_ext = np.append(coords_ext, coords + i * cell[0, :] + j * cell[1, :] + k * cell[2, :], axis=0)
    else:
        coords_ext = copy.copy(coords)
        occupation_list_ext = copy.copy(occupation_list)

    # Pad unused neighbor slots with a huge sentinel distance.
    M[:, 0, :] = 1E+100

    for i in range(L):
        cD = coords_ext - coords[i]

        oc_ext = np.asarray(occupation_list_ext)
        D1 = np.sqrt(np.sum(cD**2, axis=1))

        # Sort neighbors by distance to atom i.
        args = np.argsort(D1)
        D1 = D1[args]
        oc_ext = np.asarray([oc_ext[l] for l in args])
        cD = cD[args]

        # Keep only neighbors within the cut-off distance.
        args = np.where(D1 < cut_distance)[0]
        D1 = D1[args]
        oc_ext = np.asarray([oc_ext[l] for l in args])
        cD = cD[args]

        M[i, 0, :len(D1)] = D1
        M[i, 1, :len(D1)] = oc_ext[:]
        M[i, 2:5, :len(D1)] = cD.T

    return M
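The periodic branch above replicates the unit cell into every neighboring image that can fall within the cut-off. A minimal standalone sketch of that expansion in plain NumPy (the helper name is hypothetical, not part of the module):

```python
import numpy as np

def expand_periodic_images(coords, cell, cut_distance):
    """Replicate atoms into all neighboring cells, mirroring the triple loop above."""
    # One extra image per axis for each cut-off length that fits in the cell vector.
    n_extend = (np.floor(cut_distance / np.linalg.norm(cell, 2, axis=0)) + 1).astype(int)
    images = []
    for i in range(-n_extend[0], n_extend[0] + 1):
        for j in range(-n_extend[1], n_extend[1] + 1):
            for k in range(-n_extend[2], n_extend[2] + 1):
                images.append(coords + i * cell[0, :] + j * cell[1, :] + k * cell[2, :])
    return np.concatenate(images, axis=0)

# One atom in a 10 Angstrom cubic cell with a 5 Angstrom cut-off:
# floor(5/10) + 1 = 1 extension per axis, so 3**3 = 27 images.
coords = np.array([[0.0, 0.0, 0.0]])
cell = np.eye(3) * 10.0
print(expand_periodic_images(coords, cell, 5.0).shape)  # (27, 3)
```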
def get_local_kernels(A, B, sigmas, \
two_body_scaling=np.sqrt(8), three_body_scaling=1.6,
two_body_width=0.2, three_body_width=np.pi,
two_body_power=4.0, three_body_power=2.0,
cut_start=1.0, cut_distance=5.0,
fourier_order=1, alchemy="periodic-table",
alchemy_period_width=1.6, alchemy_group_width=1.6):
""" Calculates the Gaussian kernel matrix K, where :math:`K_{ij}`:
:math:`K_{ij} = \\exp \\big( -\\frac{\\|A_i - B_j\\|_2^2}{2\sigma^2} \\big)`
Where :math:`A_{i}` and :math:`B_{j}` are FCHL representation vectors.
K is calculated analytically using an OpenMP parallel Fortran routine.
Note, that this kernel will ONLY work with FCHL representations as input.
:param A: Array of FCHL representation - shape=(N, maxsize, 5, maxneighbors).
:type A: numpy array
:param B: Array of FCHL representation - shape=(M, maxsize, 5, maxneighbors).
:type B: numpy array
:param sigma: List of kernel-widths.
:type sigma: list
:param two_body_scaling: Weight for 2-body terms.
:type two_body_scaling: float
:param three_body_scaling: Weight for 3-body terms.
:type three_body_scaling: float
:param two_body_width: Gaussian width for 2-body terms
:type two_body_width: float
:param three_body_width: Gaussian width for 3-body terms.
:type three_body_width: float
:param two_body_power: Powerlaw for :math:`r^{-n}` 2-body terms.
:type two_body_power: float
:param three_body_power: Powerlaw for Axilrod-Teller-Muto 3-body term
:type three_body_power: float
:param cut_start: The fraction of the cut-off radius at which cut-off damping start.
:type cut_start: float
:param cut_distance: Cut-off radius. (default=5 angstrom)
:type cut_distance: float
:param fourier_order: 3-body Fourier-expansion truncation order.
:type fourier_order: integer
:param alchemy: Type of alchemical interpolation ``"periodic-table"`` or ``"off"`` are possible options. Disabling alchemical interpolation can yield dramatic speedups.
:type alchemy: string
:param alchemy_period_width: Gaussian width along periods (columns) in the periodic table.
:type alchemy_period_width: float
:param alchemy_group_width: Gaussian width along groups (rows) in the periodic table.
:type alchemy_group_width: float
:return: Array of FCHL kernel matrices matrix - shape=(n_sigmas, N, M),
:rtype: numpy array
"""
atoms_max = A.shape[1]
neighbors_max = A.shape[3]
assert B.shape[1] == atoms_max, "ERROR: Check FCHL representation sizes! code = 2"
assert B.shape[3] == neighbors_max, "ERROR: Check FCHL representation sizes! code = 3"
nm1 = A.shape[0]
nm2 = B.shape[0]
N1 = np.zeros((nm1),dtype=np.int32)
N2 = np.zeros((nm2),dtype=np.int32)
for a in range(nm1):
N1[a] = len(np.where(A[a,:,1,0] > 0.0001)[0])
for a in range(nm2):
N2[a] = len(np.where(B[a,:,1,0] > 0.0001)[0])
neighbors1 = np.zeros((nm1, atoms_max), dtype=np.int32)
neighbors2 = np.zeros((nm2, atoms_max), dtype=np.int32)
for a, representation in enumerate(A):
ni = N1[a]
for i, x in enumerate(representation[:ni]):
            neighbors1[a, i] = len(np.where(x[0] < cut_distance)[0])
for a, representation in enumerate(B):
ni = N2[a]
for i, x in enumerate(representation[:ni]):
            neighbors2[a, i] = len(np.where(x[0] < cut_distance)[0])
nsigmas = len(sigmas)
doalchemy, pd = get_alchemy(alchemy, emax=100, r_width=alchemy_group_width, c_width=alchemy_period_width)
sigmas = np.array(sigmas)
assert len(sigmas.shape) == 1, "Third argument (sigmas) is not a 1D list/numpy.array!"
return fget_kernels_fchl(A, B, N1, N2, neighbors1, neighbors2, sigmas, \
nm1, nm2, nsigmas, three_body_width, two_body_width, cut_start, cut_distance, fourier_order, pd, two_body_scaling, three_body_scaling, doalchemy, two_body_power, three_body_power)
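The kernel formula in the docstring can be checked on ordinary row vectors. This is only the Gaussian kernel itself - the Fortran routine additionally handles the 2-/3-body FCHL machinery - so the sketch below is illustrative, not a substitute for `fget_kernels_fchl`:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """K[i, j] = exp(-||A_i - B_j||^2 / (2 sigma^2)) for 2D row-vector inputs."""
    # Squared distances via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0]])
K = gaussian_kernel(A, B, sigma=1.0)
# K[0, 0] = exp(0) = 1; K[1, 0] = exp(-1 / 2)
```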
def get_local_symmetric_kernels(A, sigmas, \
two_body_scaling=np.sqrt(8), three_body_scaling=1.6,
two_body_width=0.2, three_body_width=np.pi,
two_body_power=4.0, three_body_power=2.0,
cut_start=1.0, cut_distance=5.0,
fourier_order=1, alchemy="periodic-table",
alchemy_period_width=1.6, alchemy_group_width=1.6):
""" Calculates the Gaussian kernel matrix K, where :math:`K_{ij}`:
:math:`K_{ij} = \\exp \\big( -\\frac{\\|A_i - A_j\\|_2^2}{2\sigma^2} \\big)`
Where :math:`A_{i}` and :math:`A_{j}` are FCHL representation vectors.
K is calculated analytically using an OpenMP parallel Fortran routine.
Note, that this kernel will ONLY work with FCHL representations as input.
:param A: Array of FCHL representation - shape=(N, maxsize, 5, maxneighbors).
:type A: numpy array
:param sigma: List of kernel-widths.
:type sigma: list
:param two_body_scaling: Weight for 2-body terms.
:type two_body_scaling: float
:param three_body_scaling: Weight for 3-body terms.
:type three_body_scaling: float
:param two_body_width: Gaussian width for 2-body terms
:type two_body_width: float
:param three_body_width: Gaussian width for 3-body terms.
:type three_body_width: float
:param two_body_power: Powerlaw for :math:`r^{-n}` 2-body terms.
:type two_body_power: float
:param three_body_power: Powerlaw for Axilrod-Teller-Muto 3-body term
:type three_body_power: float
:param cut_start: The fraction of the cut-off radius at which cut-off damping start.
:type cut_start: float
:param cut_distance: Cut-off radius. (default=5 angstrom)
:type cut_distance: float
:param fourier_order: 3-body Fourier-expansion truncation order.
:type fourier_order: integer
:param alchemy: Type of alchemical interpolation ``"periodic-table"`` or ``"off"`` are possible options. Disabling alchemical interpolation can yield dramatic speedups.
:type alchemy: string
:param alchemy_period_width: Gaussian width along periods (columns) in the periodic table.
:type alchemy_period_width: float
:param alchemy_group_width: Gaussian width along groups (rows) in the periodic table.
:type alchemy_group_width: float
:return: Array of FCHL kernel matrices matrix - shape=(n_sigmas, N, N),
:rtype: numpy array
"""
atoms_max = A.shape[1]
neighbors_max = A.shape[3]
nm1 = A.shape[0]
N1 = np.zeros((nm1),dtype=np.int32)
for a in range(nm1):
N1[a] = len(np.where(A[a,:,1,0] > 0.0001)[0])
neighbors1 = np.zeros((nm1, atoms_max), dtype=np.int32)
for a, representation in enumerate(A):
ni = N1[a]
for i, x in enumerate(representation[:ni]):
            neighbors1[a, i] = len(np.where(x[0] < cut_distance)[0])
nsigmas = len(sigmas)
doalchemy, pd = get_alchemy(alchemy, emax=100, r_width=alchemy_group_width, c_width=alchemy_period_width)
sigmas = np.array(sigmas)
assert len(sigmas.shape) == 1, "Second argument (sigmas) is not a 1D list/numpy.array!"
return fget_symmetric_kernels_fchl(A, N1, neighbors1, sigmas, \
nm1, nsigmas, three_body_width, two_body_width, cut_start, cut_distance, fourier_order, pd, two_body_scaling, three_body_scaling, doalchemy, two_body_power, three_body_power)
def get_global_symmetric_kernels(A, sigmas, \
two_body_scaling=np.sqrt(8), three_body_scaling=1.6,
two_body_width=0.2, three_body_width=np.pi,
two_body_power=4.0, three_body_power=2.0,
cut_start=1.0, cut_distance=5.0,
fourier_order=1, alchemy="periodic-table",
alchemy_period_width=1.6, alchemy_group_width=1.6):
""" Calculates the Gaussian kernel matrix K, where :math:`K_{ij}`:
:math:`K_{ij} = \\exp \\big( -\\frac{\\|A_i - A_j\\|_2^2}{2\sigma^2} \\big)`
Where :math:`A_{i}` and :math:`A_{j}` are FCHL representation vectors.
K is calculated analytically using an OpenMP parallel Fortran routine.
Note, that this kernel will ONLY work with FCHL representations as input.
:param A: Array of FCHL representation - shape=(N, maxsize, 5, maxneighbors).
:type A: numpy array
:param sigma: List of kernel-widths.
:type sigma: list
:param two_body_scaling: Weight for 2-body terms.
:type two_body_scaling: float
:param three_body_scaling: Weight for 3-body terms.
:type three_body_scaling: float
:param two_body_width: Gaussian width for 2-body terms
:type two_body_width: float
:param three_body_width: Gaussian width for 3-body terms.
:type three_body_width: float
:param two_body_power: Powerlaw for :math:`r^{-n}` 2-body terms.
:type two_body_power: float
:param three_body_power: Powerlaw for Axilrod-Teller-Muto 3-body term
:type three_body_power: float
:param cut_start: The fraction of the cut-off radius at which cut-off damping start.
:type cut_start: float
:param cut_distance: Cut-off radius. (default=5 angstrom)
:type cut_distance: float
:param fourier_order: 3-body Fourier-expansion truncation order.
:type fourier_order: integer
:param alchemy: Type of alchemical interpolation ``"periodic-table"`` or ``"off"`` are possible options. Disabling alchemical interpolation can yield dramatic speedups.
:type alchemy: string
:param alchemy_period_width: Gaussian width along periods (columns) in the periodic table.
:type alchemy_period_width: float
:param alchemy_group_width: Gaussian width along groups (rows) in the periodic table.
:type alchemy_group_width: float
:return: Array of FCHL kernel matrices matrix - shape=(n_sigmas, N, N),
:rtype: numpy array
"""
atoms_max = A.shape[1]
neighbors_max = A.shape[3]
nm1 = A.shape[0]
N1 = np.zeros((nm1),dtype=np.int32)
for a in range(nm1):
N1[a] = len(np.where(A[a,:,1,0] > 0.0001)[0])
neighbors1 = np.zeros((nm1, atoms_max), dtype=np.int32)
for a, representation in enumerate(A):
ni = N1[a]
for i, x in enumerate(representation[:ni]):
            neighbors1[a, i] = len(np.where(x[0] < cut_distance)[0])
nsigmas = len(sigmas)
doalchemy, pd = get_alchemy(alchemy, emax=100, r_width=alchemy_group_width, c_width=alchemy_period_width)
sigmas = np.array(sigmas)
assert len(sigmas.shape) == 1, "Second argument (sigmas) is not a 1D list/numpy.array!"
return fget_global_symmetric_kernels_fchl(A, N1, neighbors1, sigmas, \
nm1, nsigmas, three_body_width, two_body_width, cut_start, cut_distance, fourier_order, pd, two_body_scaling, three_body_scaling, doalchemy, two_body_power, three_body_power)
def get_global_kernels(A, B, sigmas, \
two_body_scaling=np.sqrt(8), three_body_scaling=1.6,
two_body_width=0.2, three_body_width=np.pi,
two_body_power=4.0, three_body_power=2.0,
cut_start=1.0, cut_distance=5.0,
fourier_order=1, alchemy="periodic-table",
alchemy_period_width=1.6, alchemy_group_width=1.6):
""" Calculates the Gaussian kernel matrix K, where :math:`K_{ij}`:
:math:`K_{ij} = \\exp \\big( -\\frac{\\|A_i - B_j\\|_2^2}{2\sigma^2} \\big)`
Where :math:`A_{i}` and :math:`B_{j}` are FCHL representation vectors.
K is calculated analytically using an OpenMP parallel Fortran routine.
Note, that this kernel will ONLY work with FCHL representations as input.
:param A: Array of FCHL representation - shape=(N, maxsize, 5, maxneighbors).
:type A: numpy array
:param B: Array of FCHL representation - shape=(M, maxsize, 5, maxneighbors).
:type B: numpy array
:param sigma: List of kernel-widths.
:type sigma: list
:param two_body_scaling: Weight for 2-body terms.
:type two_body_scaling: float
:param three_body_scaling: Weight for 3-body terms.
:type three_body_scaling: float
:param two_body_width: Gaussian width for 2-body terms
:type two_body_width: float
:param three_body_width: Gaussian width for 3-body terms.
:type three_body_width: float
:param two_body_power: Powerlaw for :math:`r^{-n}` 2-body terms.
:type two_body_power: float
:param three_body_power: Powerlaw for Axilrod-Teller-Muto 3-body term
:type three_body_power: float
:param cut_start: The fraction of the cut-off radius at which cut-off damping start.
:type cut_start: float
:param cut_distance: Cut-off radius. (default=5 angstrom)
:type cut_distance: float
:param fourier_order: 3-body Fourier-expansion truncation order.
:type fourier_order: integer
:param alchemy: Type of alchemical interpolation ``"periodic-table"`` or ``"off"`` are possible options. Disabling alchemical interpolation can yield dramatic speedups.
:type alchemy: string
:param alchemy_period_width: Gaussian width along periods (columns) in the periodic table.
:type alchemy_period_width: float
:param alchemy_group_width: Gaussian width along groups (rows) in the periodic table.
:type alchemy_group_width: float
:return: Array of FCHL kernel matrices matrix - shape=(n_sigmas, N, M),
:rtype: numpy array
"""
atoms_max = A.shape[1]
neighbors_max = A.shape[3]
assert B.shape[1] == atoms_max, "ERROR: Check FCHL representation sizes!"
assert B.shape[3] == neighbors_max, "ERROR: Check FCHL representation sizes!"
nm1 = A.shape[0]
nm2 = B.shape[0]
N1 = np.zeros((nm1),dtype=np.int32)
N2 = np.zeros((nm2),dtype=np.int32)
for a in range(nm1):
N1[a] = len(np.where(A[a,:,1,0] > 0.0001)[0])
for a in range(nm2):
N2[a] = len(np.where(B[a,:,1,0] > 0.0001)[0])
neighbors1 = np.zeros((nm1, atoms_max), dtype=np.int32)
neighbors2 = np.zeros((nm2, atoms_max), dtype=np.int32)
for a, representation in enumerate(A):
ni = N1[a]
for i, x in enumerate(representation[:ni]):
            neighbors1[a, i] = len(np.where(x[0] < cut_distance)[0])
for a, representation in enumerate(B):
ni = N2[a]
for i, x in enumerate(representation[:ni]):
            neighbors2[a, i] = len(np.where(x[0] < cut_distance)[0])
nsigmas = len(sigmas)
doalchemy, pd = get_alchemy(alchemy, emax=100, r_width=alchemy_group_width, c_width=alchemy_period_width)
sigmas = np.array(sigmas)
assert len(sigmas.shape) == 1, "Third argument (sigmas) is not a 1D list/numpy.array!"
return fget_global_kernels_fchl(A, B, N1, N2, neighbors1, neighbors2, sigmas, \
nm1, nm2, nsigmas, three_body_width, two_body_width, cut_start, cut_distance, fourier_order, pd, two_body_scaling, three_body_scaling, doalchemy, two_body_power, three_body_power)
def get_atomic_kernels(A, B, sigmas, \
two_body_scaling=np.sqrt(8), three_body_scaling=1.6,
two_body_width=0.2, three_body_width=np.pi,
two_body_power=4.0, three_body_power=2.0,
cut_start=1.0, cut_distance=5.0,
fourier_order=1, alchemy="periodic-table",
alchemy_period_width=1.6, alchemy_group_width=1.6):
""" Calculates the Gaussian kernel matrix K, where :math:`K_{ij}`:
:math:`K_{ij} = \\exp \\big( -\\frac{\\|A_i - B_j\\|_2^2}{2\sigma^2} \\big)`
Where :math:`A_{i}` and :math:`B_{j}` are FCHL representation vectors.
K is calculated analytically using an OpenMP parallel Fortran routine.
Note, that this kernel will ONLY work with FCHL representations as input.
:param A: Array of FCHL representation - shape=(N, maxsize, 5, size).
:type A: numpy array
:param B: Array of FCHL representation - shape=(M, maxsize, 5, size).
:type B: numpy array
:param sigma: List of kernel-widths.
:type sigma: list
:param two_body_scaling: Weight for 2-body terms.
:type two_body_scaling: float
:param three_body_scaling: Weight for 3-body terms.
:type three_body_scaling: float
:param two_body_width: Gaussian width for 2-body terms
:type two_body_width: float
:param three_body_width: Gaussian width for 3-body terms.
:type three_body_width: float
:param two_body_power: Powerlaw for :math:`r^{-n}` 2-body terms.
:type two_body_power: float
:param three_body_power: Powerlaw for Axilrod-Teller-Muto 3-body term
:type three_body_power: float
:param cut_start: The fraction of the cut-off radius at which cut-off damping start.
:type cut_start: float
:param cut_distance: Cut-off radius. (default=5 angstrom)
:type cut_distance: float
:param fourier_order: 3-body Fourier-expansion truncation order.
:type fourier_order: integer
:param alchemy: Type of alchemical interpolation ``"periodic-table"`` or ``"off"`` are possible options. Disabling alchemical interpolation can yield dramatic speedups.
:type alchemy: string
:param alchemy_period_width: Gaussian width along periods (columns) in the periodic table.
:type alchemy_period_width: float
:param alchemy_group_width: Gaussian width along groups (rows) in the periodic table.
:type alchemy_group_width: float
:return: Array of FCHL kernel matrices matrix - shape=(n_sigmas, N, M),
:rtype: numpy array
"""
assert len(A.shape) == 3
assert len(B.shape) == 3
na1 = A.shape[0]
na2 = B.shape[0]
neighbors1 = np.zeros((na1), dtype=np.int32)
neighbors2 = np.zeros((na2), dtype=np.int32)
for i, x in enumerate(A):
        neighbors1[i] = len(np.where(x[0] < cut_distance)[0])
for i, x in enumerate(B):
        neighbors2[i] = len(np.where(x[0] < cut_distance)[0])
nsigmas = len(sigmas)
doalchemy, pd = get_alchemy(alchemy, emax=100, r_width=alchemy_group_width, c_width=alchemy_period_width)
sigmas = np.array(sigmas)
assert len(sigmas.shape) == 1
return fget_atomic_kernels_fchl(A, B, neighbors1, neighbors2, sigmas, \
na1, na2, nsigmas, three_body_width, two_body_width, cut_start, cut_distance, fourier_order, pd, two_body_scaling, three_body_scaling, doalchemy, two_body_power, three_body_power)
def get_atomic_symmetric_kernels(A, sigmas, \
two_body_scaling=np.sqrt(8), three_body_scaling=1.6,
two_body_width=0.2, three_body_width=np.pi,
two_body_power=4.0, three_body_power=2.0,
cut_start=1.0, cut_distance=5.0,
fourier_order=1, alchemy="periodic-table",
alchemy_period_width=1.6, alchemy_group_width=1.6):
""" Calculates the Gaussian kernel matrix K, where :math:`K_{ij}`:
:math:`K_{ij} = \\exp \\big( -\\frac{\\|A_i - B_j\\|_2^2}{2\sigma^2} \\big)`
Where :math:`A_{i}` and :math:`B_{j}` are FCHL representation vectors.
K is calculated analytically using an OpenMP parallel Fortran routine.
Note, that this kernel will ONLY work with FCHL representations as input.
:param A: Array of FCHL representation - shape=(N, maxsize, 5, size).
:type A: numpy array
:param sigma: List of kernel-widths.
:type sigma: list
:param two_body_scaling: Weight for 2-body terms.
:type two_body_scaling: float
:param three_body_scaling: Weight for 3-body terms.
:type three_body_scaling: float
:param two_body_width: Gaussian width for 2-body terms
:type two_body_width: float
:param three_body_width: Gaussian width for 3-body terms.
:type three_body_width: float
:param two_body_power: Powerlaw for :math:`r^{-n}` 2-body terms.
:type two_body_power: float
:param three_body_power: Powerlaw for Axilrod-Teller-Muto 3-body term
:type three_body_power: float
:param cut_start: The fraction of the cut-off radius at which cut-off damping start.
:type cut_start: float
:param cut_distance: Cut-off radius. (default=5 angstrom)
:type cut_distance: float
:param fourier_order: 3-body Fourier-expansion truncation order.
:type fourier_order: integer
:param alchemy: Type of alchemical interpolation ``"periodic-table"`` or ``"off"`` are possible options. Disabling alchemical interpolation can yield dramatic speedups.
:type alchemy: string
:param alchemy_period_width: Gaussian width along periods (columns) in the periodic table.
:type alchemy_period_width: float
:param alchemy_group_width: Gaussian width along groups (rows) in the periodic table.
:type alchemy_group_width: float
:return: Array of FCHL kernel matrices matrix - shape=(n_sigmas, N, M),
:rtype: numpy array
"""
assert len(A.shape) == 3
na1 = A.shape[0]
neighbors1 = np.zeros((na1), dtype=np.int32)
for i, x in enumerate(A):
        neighbors1[i] = len(np.where(x[0] < cut_distance)[0])
nsigmas = len(sigmas)
doalchemy, pd = get_alchemy(alchemy, emax=100, r_width=alchemy_group_width, c_width=alchemy_period_width)
sigmas = np.array(sigmas)
assert len(sigmas.shape) == 1, "Second argument (sigmas) is not a 1D list/numpy.array!"
return fget_atomic_symmetric_kernels_fchl(A, neighbors1, sigmas, \
na1, nsigmas, three_body_width, two_body_width, cut_start, cut_distance, fourier_order, pd, two_body_scaling, three_body_scaling, doalchemy, two_body_power, three_body_power)
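All of the wrappers above derive the per-atom neighbor counts by counting entries below the cut-off in the first channel of the representation, where unused slots are padded with the sentinel `1E+100`. A tiny self-contained illustration of that counting step:

```python
import numpy as np

# One atom's distance channel M[i, 0, :]: three real neighbors plus two padded slots.
distances = np.array([0.0, 1.2, 4.9, 1e100, 1e100])
cut_distance = 5.0

# Same idiom as neighbors1[a, i] = len(np.where(x[0] < cut_distance)[0]) above.
n_within = len(np.where(distances < cut_distance)[0])
print(n_within)  # 3
```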
# ---------------------------------------------------------------------------
# File: dvc/dependency/gs.py (repo: yfarjoun/dvc, license: Apache-2.0)
# ---------------------------------------------------------------------------

from __future__ import unicode_literals
from dvc.output.gs import OutputGS
from dvc.dependency.base import DependencyBase
class DependencyGS(DependencyBase, OutputGS):
pass
# ---------------------------------------------------------------------------
# File: bot/util/messages/buttons/music.py (repo: abindent/Utility-Bot, license: MIT)
# ---------------------------------------------------------------------------

import nextcord
import wavelink
from util.constants import Emojis
class MusicController(nextcord.ui.View):
def __init__(self, ctx):
super().__init__(timeout=None)
self.ctx = ctx
        self.paused = True
async def interaction_check(self, interaction):
if interaction.user != self.ctx.author:
await interaction.response.send_message(":no_entry: This is not for you.", ephemeral=True)
return False
else:
return True
@nextcord.ui.button(style=nextcord.ButtonStyle.secondary, emoji=Emojis.mute)
async def mute(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 Your are not playing a song.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
await vc.set_volume(volume=0)
await interaction.response.send_message("Successfully muted the player.", ephemeral=True)
@nextcord.ui.button(style=nextcord.ButtonStyle.secondary, emoji=Emojis.pause)
async def pause(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
if self.paused:
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 | Your are not playing a song.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
embed = nextcord.Embed(
title=f"📢 | {Emojis.pause} Paused the player.", color=0x91cd0e)
await vc.pause()
self.pause.emoji = Emojis.resume
self.pause.style = nextcord.ButtonStyle.green
self.paused = False
await interaction.message.edit(view=self)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 | Your are not playing a song.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
embed = nextcord.Embed(
title=f"📢 | ⏯️ Resumed the player.", color=0x91cd0e)
await vc.resume()
self.pause.emoji = Emojis.pause
self.pause.style = nextcord.ButtonStyle.secondary
                self.paused = True
await interaction.message.edit(view=self)
await interaction.response.send_message(embed=embed, ephemeral=True)
@nextcord.ui.button(style=nextcord.ButtonStyle.secondary, emoji=Emojis.halfvolume)
async def halfvolume(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 | Your are not playing a song.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
await vc.set_volume(volume=50)
await interaction.response.send_message("Successfully set you volume to `50%`", ephemeral=True)
@nextcord.ui.button(style=nextcord.ButtonStyle.secondary, emoji=Emojis.fullvolume)
    async def fullvolume(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 | Your are not playing a song.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
await vc.set_volume(volume=100)
await interaction.response.send_message("Successfully set you volume to `100%`", ephemeral=True)
@nextcord.ui.button(style=nextcord.ButtonStyle.secondary, emoji=Emojis.loop)
async def loop(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 | Your are not playing a song.", color=0x91cd0e)
return await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
return await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
try:
vc.loop ^= True
except Exception:
setattr(vc, "loop", False)
if vc.loop:
return await interaction.response.send_message(f"Enabled {Emojis.loop} Loop", ephemeral=True)
else:
return await interaction.response.send_message(f"Disabled {Emojis.loop} Loop", ephemeral=True)
@nextcord.ui.button(style=nextcord.ButtonStyle.secondary, emoji=Emojis.closeConnection)
async def stop(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
if not interaction.guild.voice_client:
embed = nextcord.Embed(
title=f"📢 | Your are not playing a song.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
elif not getattr(interaction.user.voice, "channel", None):
embed = nextcord.Embed(
title=f"📢 | Join a voice channel please.", color=0x91cd0e)
await interaction.response.send_message(embed=embed, ephemeral=True)
else:
vc: wavelink.Player = interaction.guild.voice_client
await vc.stop()
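The loop button toggles looping with `vc.loop ^= True` and lazily initialises the attribute the first time it is missing. A standalone sketch of that pattern (`FakePlayer` is a hypothetical stand-in for `wavelink.Player`; the sketch catches `AttributeError` specifically, where the handler above catches any `Exception`):

```python
class FakePlayer:
    """Hypothetical stand-in for wavelink.Player, used only to show the toggle."""
    pass

vc = FakePlayer()

# First press: reading vc.loop raises AttributeError, so looping starts disabled.
try:
    vc.loop ^= True
except AttributeError:
    setattr(vc, "loop", False)
print(vc.loop)  # False

# Second press: the attribute now exists, so XOR flips it on.
try:
    vc.loop ^= True
except AttributeError:
    setattr(vc, "loop", False)
print(vc.loop)  # True
```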
# ---------------------------------------------------------------------------
# File: src/SimCSE/SentEval/senteval/sst.py (repo: roronoayhd/2021daguan, license: Apache-2.0)
# ---------------------------------------------------------------------------

# Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
'''
SST - binary classification
'''
from __future__ import absolute_import, division, unicode_literals
import json
import os
import io
import logging
import numpy as np
from senteval.tools.validation import SplitClassifier
class SSTEval(object):
def __init__(self, task_path, nclasses=2, seed=1111):
self.seed = seed
        # binary or fine-grained
assert nclasses in [2, 5]
self.nclasses = nclasses
self.task_name = 'Binary' if self.nclasses == 2 else 'Fine-Grained'
logging.debug('***** Transfer task : SST %s classification *****\n\n', self.task_name)
train = self.loadFile(os.path.join(task_path, 'sentiment-train'))
dev = self.loadFile(os.path.join(task_path, 'sentiment-dev'))
test = self.loadFile(os.path.join(task_path, 'sentiment-test'))
self.sst_data = {'train': train, 'dev': dev, 'test': test}
def do_prepare(self, params, prepare):
samples = self.sst_data['train']['X'] + self.sst_data['dev']['X'] + \
self.sst_data['test']['X']
return prepare(params, samples)
def loadFile(self, fpath):
sst_data = {'X': [], 'y': []}
with io.open(fpath, 'r', encoding='utf-8') as f:
for line in f:
if self.nclasses == 2:
sample = line.strip().split('\t')
sst_data['y'].append(int(sample[1]))
sst_data['X'].append(sample[0].split())
elif self.nclasses == 5:
sample = line.strip().split(' ', 1)
sst_data['y'].append(int(sample[0]))
sst_data['X'].append(sample[1].split())
assert max(sst_data['y']) == self.nclasses - 1
return sst_data
def run(self, params, batcher):
sst_embed = {'train': {}, 'dev': {}, 'test': {}}
bsize = params.batch_size
for key in self.sst_data:
logging.info('Computing embedding for {0}'.format(key))
# Sort to reduce padding
sorted_data = sorted(zip(self.sst_data[key]['X'],
self.sst_data[key]['y']),
key=lambda z: (len(z[0]), z[1]))
self.sst_data[key]['X'], self.sst_data[key]['y'] = map(list, zip(*sorted_data))
sst_embed[key]['X'] = []
for ii in range(0, len(self.sst_data[key]['y']), bsize):
batch = self.sst_data[key]['X'][ii:ii + bsize]
embeddings = batcher(params, batch)
sst_embed[key]['X'].append(embeddings)
sst_embed[key]['X'] = np.vstack(sst_embed[key]['X'])
sst_embed[key]['y'] = np.array(self.sst_data[key]['y'])
logging.info('Computed {0} embeddings'.format(key))
config_classifier = {'nclasses': self.nclasses, 'seed': self.seed,
'usepytorch': params.usepytorch,
'classifier': params.classifier}
clf = SplitClassifier(X={'train': sst_embed['train']['X'],
'valid': sst_embed['dev']['X'],
'test': sst_embed['test']['X']},
y={'train': sst_embed['train']['y'],
'valid': sst_embed['dev']['y'],
'test': sst_embed['test']['y']},
config=config_classifier)
devacc, testacc = clf.run()
logging.debug('\nDev acc : {0} Test acc : {1} for \
SST {2} classification\n'.format(devacc, testacc, self.task_name))
return {'devacc': devacc, 'acc': testacc,
'ndev': len(sst_embed['dev']['X']),
'ntest': len(sst_embed['test']['X'])}
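The sort-then-batch trick in `run` above (sorting by sentence length before slicing into fixed-size batches) exists purely to reduce padding waste. A minimal standalone sketch with toy data (not the real SST loader) makes the effect measurable:

```python
# Toy illustration of the "Sort to reduce padding" step: batches built from
# length-sorted data need far less padding than batches in arbitrary order.

def make_batches(X, y, bsize):
    # Sort jointly by (sentence length, label), mirroring the key used above.
    paired = sorted(zip(X, y), key=lambda z: (len(z[0]), z[1]))
    X, y = map(list, zip(*paired))
    return [X[i:i + bsize] for i in range(0, len(y), bsize)]

def padding_cost(batches):
    # Padding tokens needed if each batch is padded to its longest sentence.
    return sum(len(b) * max(len(s) for s in b) - sum(len(s) for s in b)
               for b in batches)

X = [["tok"] * n for n in [9, 1, 8, 2, 7, 3]]
y = [0] * len(X)
sorted_cost = padding_cost(make_batches(X, y, 2))
unsorted_cost = padding_cost([X[i:i + 2] for i in range(0, len(X), 2)])
# Sorting groups similar lengths together, so the padding cost drops sharply.
```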
class DaguanEval(object):
def __init__(self, task_path, nclasses=35, seed=1112):
self.seed = seed
        # level-1 (coarse, 10 classes) or level-2 (fine-grained, 35 classes)
assert nclasses in [35, 10]
self.nclasses = nclasses
self.task_name = 'level-2' if self.nclasses == 35 else 'level-1'
logging.debug('***** Transfer task : Daguan %s classification *****\n\n', self.task_name)
        with open("../../datasets/phase_1/labels_level_1.txt", 'r', encoding='utf-8') as f:
            self.label_list_level_1 = [label.strip() for label in f]
        with open("../../datasets/phase_1/labels_level_2.txt", 'r', encoding='utf-8') as f:
            self.label_list_level_2 = [label.strip() for label in f]
train = self.loadFile(os.path.join(task_path, 'train.txt'))
dev = self.loadFile(os.path.join(task_path, 'dev.txt'))
test = self.loadFile(os.path.join(task_path, 'test.txt'))
self.sst_data = {'train': train, 'dev': dev, 'test': test}
def do_prepare(self, params, prepare):
samples = self.sst_data['train']['X'] + self.sst_data['dev']['X'] + \
self.sst_data['test']['X']
return prepare(params, samples)
def loadFile(self, fpath):
sst_data = {'X': [], 'y': []}
with io.open(fpath, 'r', encoding='utf-8') as f:
for line in f:
if self.nclasses == 35:
sample = line.strip().split(',')
# print(sample)
if len(sample) == 2:
sst_data['y'].append(0)
else:
sst_data['y'].append(int(self.label_list_level_2.index(sample[-1])))
sst_data['X'].append(sample[1].split())
elif self.nclasses == 10:
sample = line.strip().split('\t')
if len(sample) == 2:
sst_data['y'].append(0)
else:
sst_data['y'].append(int(self.label_list_level_1.index(sample[-1])))
sst_data['X'].append(sample[1].split())
        logging.debug('%s classes, max label %s', self.nclasses, max(sst_data['y']))
assert max(sst_data['y']) == self.nclasses - 1 or max(sst_data['y']) == 0
return sst_data
def run(self, params, batcher):
sst_embed = {'train': {}, 'dev': {}, 'test': {}}
bsize = params.batch_size
for key in self.sst_data:
logging.info('Computing embedding for {0}'.format(key))
# Sort to reduce padding
sorted_data = sorted(zip(self.sst_data[key]['X'],
self.sst_data[key]['y']),
key=lambda z: (len(z[0]), z[1]))
self.sst_data[key]['X'], self.sst_data[key]['y'] = map(list, zip(*sorted_data))
sst_embed[key]['X'] = []
for ii in range(0, len(self.sst_data[key]['y']), bsize):
batch = self.sst_data[key]['X'][ii:ii + bsize]
embeddings = batcher(params, batch)
sst_embed[key]['X'].append(embeddings)
sst_embed[key]['X'] = np.vstack(sst_embed[key]['X'])
sst_embed[key]['y'] = np.array(self.sst_data[key]['y'])
logging.info('Computed {0} embeddings'.format(key))
config_classifier = {'nclasses': self.nclasses, 'seed': self.seed,
'usepytorch': params.usepytorch,
'classifier': params.classifier}
clf = SplitClassifier(X={'train': sst_embed['train']['X'],
'valid': sst_embed['dev']['X'],
'test': sst_embed['test']['X']},
y={'train': sst_embed['train']['y'],
'valid': sst_embed['dev']['y'],
'test': sst_embed['test']['y']},
config=config_classifier)
devacc, testacc = clf.run()
        logging.debug('\nDev acc : {0} Test acc : {1} for '
                      'Daguan {2} classification\n'.format(devacc, testacc, self.task_name))
return {'devacc': devacc, 'acc': testacc,
'ndev': len(sst_embed['dev']['X']),
'ntest': len(sst_embed['test']['X'])}
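The `loadFile` method of `DaguanEval` above treats a line with only two fields as unlabeled (falling back to class 0) and otherwise maps the last field through a label list. A hedged sketch of that parsing pattern, with invented label names standing in for the real Daguan label files:

```python
# Stand-in for label_list_level_1 / label_list_level_2; names are made up.
label_list = ["label_a", "label_b", "label_c"]

def parse_line(line, sep="\t"):
    sample = line.strip().split(sep)
    if len(sample) == 2:
        # Two fields means (id, text) with no label, e.g. the test split.
        label = 0
    else:
        label = label_list.index(sample[-1])
    tokens = sample[1].split()
    return tokens, label

tokens, label = parse_line("42\tsome words here\tlabel_b")
test_tokens, test_label = parse_line("43\tunlabeled text")
```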
# ===== tests/test_metadata.py (repo: carj/pyPreservica, license: Apache-2.0) =====
import os
import uuid
import xml
import xml.etree.ElementTree
from xml.etree import ElementTree
import pytest
from pyPreservica import *
FOLDER_ID = "ebd977f6-bebd-4ecf-99be-e054989f9af4"
ASSET_ID = "683f9db7-ff81-4859-9c03-f68cfa5d9c3d"
CO_ID = "0f2997f7-728c-4e55-9f92-381ed1260d70"
XML_DOCUMENT = "<person:Person xmlns:person='https://www.person.com/person'>" \
"<person:Name>Name</person:Name>" \
"<person:Phone>01234 100 100</person:Phone>" \
"<person:Email>test@test.com</person:Email>" \
"<person:Address>Abingdon, UK</person:Address>" \
"</person:Person>"
def test_get_folder_metadata():
client = EntityAPI()
entity = client.entity(EntityType.FOLDER, FOLDER_ID)
xml_string = client.metadata_for_entity(entity, "http://purl.org/dc/elements/1.1/")
assert xml_string is not None
document = xml.etree.ElementTree.fromstring(xml_string)
identifier = document.find(".//{http://purl.org/dc/elements/1.1/}identifier")
assert identifier.text == "LC-USZ62-43601"
def test_update_folder_metadata():
client = EntityAPI()
entity = client.entity(EntityType.FOLDER, FOLDER_ID)
xml_string = client.metadata_for_entity(entity, "http://purl.org/dc/elements/1.1/")
assert xml_string is not None
document = xml.etree.ElementTree.fromstring(xml_string)
identifier = document.find(".//{http://purl.org/dc/elements/1.1/}identifier")
assert identifier.text == "LC-USZ62-43601"
description = document.find(".//{http://purl.org/dc/elements/1.1/}description")
assert description.text == "a"
description.text = "description"
xml_string = ElementTree.tostring(document, encoding='utf-8').decode("utf-8")
folder = client.update_metadata(entity, "http://purl.org/dc/elements/1.1/", xml_string)
document = xml.etree.ElementTree.fromstring(client.metadata_for_entity(folder, "http://purl.org/dc/elements/1.1/"))
description = document.find(".//{http://purl.org/dc/elements/1.1/}description")
assert description.text == "description"
description.text = "a"
xml_string = ElementTree.tostring(document, encoding='utf-8').decode("utf-8")
folder = client.update_metadata(entity, "http://purl.org/dc/elements/1.1/", xml_string)
def test_add_folder_metadata_string():
client = EntityAPI()
entity = client.entity(EntityType.FOLDER, FOLDER_ID)
assert len(entity.metadata) == 3
folder = client.add_metadata(entity, "https://www.person.com/person", XML_DOCUMENT)
assert len(folder.metadata) == 4
xml_string = client.metadata_for_entity(folder, "https://www.person.com/person")
document = xml.etree.ElementTree.fromstring(xml_string)
name = document.find(".//{https://www.person.com/person}Name")
assert name.text == "Name"
folder = client.delete_metadata(folder, "https://www.person.com/person")
assert len(folder.metadata) == 3
def test_get_asset_metadata():
client = EntityAPI()
entity = client.entity(EntityType.ASSET, ASSET_ID)
xml_string = client.metadata_for_entity(entity, "http://purl.org/dc/elements/1.1/")
assert xml_string is not None
document = xml.etree.ElementTree.fromstring(xml_string)
filename = document.find(".//{http://purl.org/dc/elements/1.1/}filename")
assert filename.text == "LC-USZ62-20901.tiff"
def test_get_all_asset_metadata():
client = EntityAPI()
entity = client.entity(EntityType.ASSET, ASSET_ID)
for m in client.all_metadata(entity):
assert m[0] is not None
document = xml.etree.ElementTree.fromstring(m[1])
assert document is not None
def test_get_co_metadata():
client = EntityAPI()
entity = client.entity(EntityType.CONTENT_OBJECT, CO_ID)
entity = client.delete_metadata(entity, "https://www.person.com/person")
xml_string = client.metadata_for_entity(entity, "https://www.person.com/person")
assert xml_string is None
co = client.add_metadata(entity, "https://www.person.com/person", XML_DOCUMENT)
xml_string = client.metadata_for_entity(co, "https://www.person.com/person")
document = xml.etree.ElementTree.fromstring(xml_string)
name = document.find(".//{https://www.person.com/person}Name")
assert name.text == "Name"
e = client.delete_metadata(co, "https://www.person.com/person")
xml_string = client.metadata_for_entity(e, "https://www.person.com/person")
assert xml_string is None
def test_get_folder_metadata_file():
client = EntityAPI()
entity = client.entity(EntityType.FOLDER, FOLDER_ID)
assert len(entity.metadata) == 3
filename = str(uuid.uuid4()) + ".xml"
    with open(filename, "wt", encoding="utf-8") as fd:
        fd.write(XML_DOCUMENT)
with open(filename, "rt", encoding="utf-8") as file:
folder = client.add_metadata(entity, "https://www.person.com/person", file)
assert len(folder.metadata) == 4
xml_string = client.metadata_for_entity(folder, "https://www.person.com/person")
document = xml.etree.ElementTree.fromstring(xml_string)
name = document.find(".//{https://www.person.com/person}Name")
assert name.text == "Name"
folder = client.delete_metadata(folder, "https://www.person.com/person")
assert len(folder.metadata) == 3
os.remove(filename)
def test_get_asset_metadata_file():
client = EntityAPI()
entity = client.entity(EntityType.ASSET, ASSET_ID)
assert len(entity.metadata) == 2
filename = str(uuid.uuid4()) + ".xml"
    with open(filename, "wt", encoding="utf-8") as fd:
        fd.write(XML_DOCUMENT)
with open(filename, "rt", encoding="utf-8") as file:
asset = client.add_metadata(entity, "https://www.person.com/person", file)
assert len(asset.metadata) == 3
xml_string = client.metadata_for_entity(asset, "https://www.person.com/person")
document = xml.etree.ElementTree.fromstring(xml_string)
name = document.find(".//{https://www.person.com/person}Name")
assert name.text == "Name"
asset = client.delete_metadata(asset, "https://www.person.com/person")
assert len(asset.metadata) == 2
os.remove(filename)
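The tests above repeatedly address elements as `.//{namespace-uri}tag`; with the standard library's `ElementTree`, that Clark notation is how namespaced elements are found. A self-contained illustration using the same `person` document shape:

```python
import xml.etree.ElementTree as ET

XML_DOCUMENT = ("<person:Person xmlns:person='https://www.person.com/person'>"
                "<person:Name>Name</person:Name>"
                "<person:Email>test@test.com</person:Email>"
                "</person:Person>")

document = ET.fromstring(XML_DOCUMENT)
# The "person:" prefix is only a document-local alias; ElementTree addresses
# each element by its full namespace URI in Clark notation: {uri}localname.
name = document.find(".//{https://www.person.com/person}Name")
email = document.find(".//{https://www.person.com/person}Email")
```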
# ===== test/unit/test_coverage.py (repo: Izecson/sockeye-1.16.6, license: Apache-2.0) =====
# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not
# use this file except in compliance with the License. A copy of the License
# is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is distributed on
# an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
from unittest.mock import patch
import mxnet as mx
import numpy as np
import pytest
import sockeye.coverage
from test.common import gaussian_vector, integer_vector, uniform_vector
activation_types = ["tanh", "sigmoid", "relu", "softrelu"]
def setup_module():
# Store a reference to the original MXNet sequence mask function.
_mask_with_one.original_sequence_mask = mx.sym.SequenceMask
@pytest.mark.parametrize("act_type", activation_types)
def test_activation_coverage(act_type):
# Before running our test we patch MXNet's sequence mask function with a custom implementation. Our custom function
# will call the built in masking operation, but ensure the masking value is the number one. This masking value
# allows for clear test assertions.
_patch_sequence_mask(lambda: _test_activation_coverage(act_type))
def test_gru_coverage():
# Before running our test we patch MXNet's sequence mask function with a custom implementation. Our custom function
# will call the built in masking operation, but ensure the masking value is the number one. This masking value
# allows for clear test assertions.
_patch_sequence_mask(lambda: _test_gru_coverage())
def _test_activation_coverage(act_type):
config_coverage = sockeye.coverage.CoverageConfig(type=act_type, num_hidden=2, layer_normalization=False)
encoder_num_hidden, decoder_num_hidden, source_seq_len, batch_size = 5, 5, 10, 4
# source: (batch_size, source_seq_len, encoder_num_hidden)
source = mx.sym.Variable("source")
# source_length: (batch_size,)
source_length = mx.sym.Variable("source_length")
# prev_hidden: (batch_size, decoder_num_hidden)
prev_hidden = mx.sym.Variable("prev_hidden")
# prev_coverage: (batch_size, source_seq_len, coverage_num_hidden)
prev_coverage = mx.sym.Variable("prev_coverage")
# attention_scores: (batch_size, source_seq_len)
attention_scores = mx.sym.Variable("attention_scores")
source_shape = (batch_size, source_seq_len, encoder_num_hidden)
source_length_shape = (batch_size,)
prev_hidden_shape = (batch_size, decoder_num_hidden)
attention_scores_shape = (batch_size, source_seq_len)
prev_coverage_shape = (batch_size, source_seq_len, config_coverage.num_hidden)
source_data = gaussian_vector(shape=source_shape)
source_length_data = integer_vector(shape=source_length_shape, max_value=source_seq_len)
prev_hidden_data = gaussian_vector(shape=prev_hidden_shape)
prev_coverage_data = gaussian_vector(shape=prev_coverage_shape)
attention_scores_data = uniform_vector(shape=attention_scores_shape)
attention_scores_data = attention_scores_data / np.sum(attention_scores_data)
coverage = sockeye.coverage.get_coverage(config_coverage)
coverage_func = coverage.on(source, source_length, source_seq_len)
updated_coverage = coverage_func(prev_hidden, attention_scores, prev_coverage)
executor = updated_coverage.simple_bind(ctx=mx.cpu(),
source=source_shape,
source_length=source_length_shape,
prev_hidden=prev_hidden_shape,
prev_coverage=prev_coverage_shape,
attention_scores=attention_scores_shape)
executor.arg_dict["source"][:] = source_data
executor.arg_dict["source_length"][:] = source_length_data
executor.arg_dict["prev_hidden"][:] = prev_hidden_data
executor.arg_dict["prev_coverage"][:] = prev_coverage_data
executor.arg_dict["attention_scores"][:] = attention_scores_data
result = executor.forward()
# this is needed to modulate the 0 input. The output changes according to the activation type used.
activation = mx.sym.Activation(name="activation", act_type=act_type)
modulated = activation.eval(ctx=mx.cpu(), activation_data=mx.nd.zeros((1,1)))[0].asnumpy()
new_coverage = result[0].asnumpy()
assert new_coverage.shape == prev_coverage_shape
assert (np.sum(np.sum(new_coverage == modulated, axis=2) != 0, axis=1) == source_length_data).all()
def _test_gru_coverage():
config_coverage = sockeye.coverage.CoverageConfig(type="gru", num_hidden=2, layer_normalization=False)
encoder_num_hidden, decoder_num_hidden, source_seq_len, batch_size = 5, 5, 10, 4
# source: (batch_size, source_seq_len, encoder_num_hidden)
source = mx.sym.Variable("source")
# source_length: (batch_size,)
source_length = mx.sym.Variable("source_length")
# prev_hidden: (batch_size, decoder_num_hidden)
prev_hidden = mx.sym.Variable("prev_hidden")
# prev_coverage: (batch_size, source_seq_len, coverage_num_hidden)
prev_coverage = mx.sym.Variable("prev_coverage")
# attention_scores: (batch_size, source_seq_len)
attention_scores = mx.sym.Variable("attention_scores")
source_shape = (batch_size, source_seq_len, encoder_num_hidden)
source_length_shape = (batch_size,)
prev_hidden_shape = (batch_size, decoder_num_hidden)
attention_scores_shape = (batch_size, source_seq_len)
prev_coverage_shape = (batch_size, source_seq_len, config_coverage.num_hidden)
source_data = gaussian_vector(shape=source_shape)
source_length_data = integer_vector(shape=source_length_shape, max_value=source_seq_len)
prev_hidden_data = gaussian_vector(shape=prev_hidden_shape)
prev_coverage_data = gaussian_vector(shape=prev_coverage_shape)
attention_scores_data = uniform_vector(shape=attention_scores_shape)
attention_scores_data = attention_scores_data / np.sum(attention_scores_data)
coverage = sockeye.coverage.get_coverage(config_coverage)
coverage_func = coverage.on(source, source_length, source_seq_len)
updated_coverage = coverage_func(prev_hidden, attention_scores, prev_coverage)
executor = updated_coverage.simple_bind(ctx=mx.cpu(),
source=source_shape,
source_length=source_length_shape,
prev_hidden=prev_hidden_shape,
prev_coverage=prev_coverage_shape,
attention_scores=attention_scores_shape)
executor.arg_dict["source"][:] = source_data
executor.arg_dict["source_length"][:] = source_length_data
executor.arg_dict["prev_hidden"][:] = prev_hidden_data
executor.arg_dict["prev_coverage"][:] = prev_coverage_data
executor.arg_dict["attention_scores"][:] = attention_scores_data
result = executor.forward()
new_coverage = result[0].asnumpy()
assert new_coverage.shape == prev_coverage_shape
assert (np.sum(np.sum(new_coverage != 1, axis=2) != 0, axis=1) == source_length_data).all()
def _mask_with_one(data, use_sequence_length, sequence_length):
return _mask_with_one.original_sequence_mask(data=data, use_sequence_length=use_sequence_length,
sequence_length=sequence_length, value=1)
def _patch_sequence_mask(test):
# Wrap mx.sym to make it easily patchable. All un-patched methods will fall-back to their default implementation.
with patch.object(mx, 'sym', wraps=mx.sym) as mxnet_mock:
# Patch Sequence Mask to use ones for padding.
mxnet_mock.SequenceMask = _mask_with_one
test()
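The `_patch_sequence_mask` helper relies on `patch.object(..., wraps=...)`: the wrapping mock forwards any attribute it has not been told about to the real object, so only `SequenceMask` changes behavior. A toy reproduction of that mechanism, with a `SimpleNamespace` standing in for the `mx.sym` module:

```python
import types
from unittest.mock import patch

# Toy stand-in for mx.sym: a namespace with two "operators".
sym = types.SimpleNamespace(double=lambda x: 2 * x, negate=lambda x: -x)
mx_like = types.SimpleNamespace(sym=sym)

def negate_with_offset(x):
    # Stand-in for _mask_with_one: same role, distinguishable return value.
    return -x - 100

with patch.object(mx_like, "sym", wraps=mx_like.sym) as sym_mock:
    sym_mock.negate = negate_with_offset
    doubled = mx_like.sym.double(3)  # un-patched: falls through to the original
    negated = mx_like.sym.negate(3)  # patched: uses the replacement
restored = mx_like.sym.negate(3)     # patch.object restores the original on exit
```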
# ===== models/backbone.py (repo: Max-luo-song/fs-map-project, license: Apache-2.0) =====
import torch
import torch.nn as nn
import torch.nn.functional as F
import pretrainedmodels
from models.utils import conv2DBatchNormRelu, deconv2DBatchNormRelu
class resnet_encoder(nn.Module):
def __init__(self, n_classes=21, in_channels=3):
super(resnet_encoder, self).__init__()
feat_chn = 256
# self.feature_backbone = n_segnet_encoder(n_classes=n_classes, in_channels=in_channels)
self.feature_backbone = pretrainedmodels.__dict__['resnet18'](num_classes=1000, pretrained=None)
# print(self.feature_backbone)
self.backbone_0 = self.feature_backbone.conv1
pool = torch.nn.MaxPool2d(kernel_size=5, stride=4, padding=1)
self.backbone_1 = nn.Sequential(self.feature_backbone.bn1, self.feature_backbone.relu,
pool, self.feature_backbone.layer1)
# self.backbone_1 = nn.Sequential(self.feature_backbone.bn1, self.feature_backbone.relu,
# self.feature_backbone.maxpool, self.feature_backbone.layer1)
self.backbone_2 = self.feature_backbone.layer2
self.backbone_3 = self.feature_backbone.layer3
self.backbone_4 = self.feature_backbone.layer4
del self.feature_backbone.last_linear
def forward(self, inputs):
# torch.Size([12, 3, 512, 512])
# torch.Size([12, 64, 256, 256])
# torch.Size([12, 64, 128, 128])
# torch.Size([12, 128, 64, 64])
# torch.Size([12, 256, 32, 32])
# torch.Size([12, 512, 16, 16])
# torch.Size([30, 3, 512, 512])
# torch.Size([30, 64, 256, 256])
# torch.Size([30, 64, 64, 64])
# torch.Size([30, 128, 32, 32])
# torch.Size([30, 256, 16, 16])
# torch.Size([30, 512, 8, 8])
# print(inputs.size())
outputs = self.backbone_0(inputs)
# print(outputs.size())
outputs = self.backbone_1(outputs)
# print(outputs.size())
outputs = self.backbone_2(outputs)
# print(outputs.size())
outputs = self.backbone_3(outputs)
# print(outputs.size())
outputs = self.backbone_4(outputs)
# print(outputs.size())
# print()
return outputs
class resnet_encoder_small(nn.Module):
def __init__(self, n_classes=21, in_channels=3):
super(resnet_encoder_small, self).__init__()
feat_chn = 256
# self.feature_backbone = n_segnet_encoder(n_classes=n_classes, in_channels=in_channels)
self.feature_backbone = pretrainedmodels.__dict__['resnet18'](num_classes=1000, pretrained=None)
# print(self.feature_backbone)
self.backbone_0 = self.feature_backbone.conv1
pool = torch.nn.MaxPool2d(kernel_size=5, stride=4, padding=1)
pool2 = torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.backbone_1 = nn.Sequential(self.feature_backbone.bn1, self.feature_backbone.relu,
pool, self.feature_backbone.layer1, pool2)
# self.backbone_1 = nn.Sequential(self.feature_backbone.bn1, self.feature_backbone.relu,
# self.feature_backbone.maxpool, self.feature_backbone.layer1)
self.backbone_2 = self.feature_backbone.layer2
self.backbone_3 = self.feature_backbone.layer3
self.backbone_4 = self.feature_backbone.layer4
del self.feature_backbone.last_linear
def forward(self, inputs):
# torch.Size([12, 3, 512, 512])
# torch.Size([12, 64, 256, 256])
# torch.Size([12, 64, 128, 128])
# torch.Size([12, 128, 64, 64])
# torch.Size([12, 256, 32, 32])
# torch.Size([12, 512, 16, 16])
# torch.Size([30, 3, 512, 512])
# torch.Size([30, 64, 256, 256])
# torch.Size([30, 64, 64, 64])
# torch.Size([30, 128, 32, 32])
# torch.Size([30, 256, 16, 16])
# torch.Size([30, 512, 8, 8])
# print(inputs.size())
outputs = self.backbone_0(inputs)
#print(outputs.size())
outputs = self.backbone_1(outputs)
# print(outputs.size())
outputs = self.backbone_2(outputs)
# print(outputs.size())
outputs = self.backbone_3(outputs)
# print(outputs.size())
outputs = self.backbone_4(outputs)
#print(outputs.size())
# print()
return outputs
class simple_classifier(nn.Module):
def __init__(self, n_classes=5, in_channels=512):
super(simple_classifier, self).__init__()
self.in_channels = in_channels
feat_chn = 256
self.pred = nn.Sequential(
nn.Conv2d(self.in_channels, 256, kernel_size=3, stride=2, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 128, kernel_size=3, stride=2, padding=1),
nn.ReLU(inplace=True),
nn.AdaptiveAvgPool2d([1, 1])
# nn.Conv2d(feat_chn, n_classes, kernel_size=3, padding=1)
)
self.fc = nn.Linear(128, n_classes)
def forward(self, inputs):
# torch.Size([12, 512, 16, 16])
# torch.Size([12, 11, 16, 16])
# torch.Size([12, 11, 512, 512])
# print(inputs.size())
out = self.pred(inputs) # [50, 128, 1, 1]
out = out.view(out.size(0), out.size(1))
# print(out.size(),'====')
pred = self.fc(out)
# pred = nn.functional.interpolate(pred, size=torch.Size([inputs.size()[2] * 64, inputs.size()[3] * 64]),
# mode='bilinear', align_corners=False)
return pred
class simple_decoder(nn.Module):
def __init__(self, n_classes=21, in_channels=128):
super(simple_decoder, self).__init__()
self.in_channels = in_channels
feat_chn = 128
self.pred = nn.Sequential(
nn.Conv2d(self.in_channels, feat_chn, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(feat_chn, n_classes, kernel_size=3, padding=1)
)
def forward(self, inputs):
pred = self.pred(inputs)
#print(pred.size()) # [3,4,16,16]
pred = F.interpolate(pred, size=[512, 512], mode='bilinear', align_corners=False)
return pred
def forward_func(self, inputs, vars):
o1 = F.conv2d(inputs, vars[0], vars[1], padding=1)
o2 = F.relu(o1)
o3 = F.conv2d(o2, vars[2], vars[3], padding=1)
pred = F.interpolate(o3, size=[512, 512], mode='bilinear', align_corners=False)
return pred
class simple_decoder_classifier(nn.Module):
def __init__(self, n_classes=21, in_channels=128):
super(simple_decoder_classifier, self).__init__()
self.in_channels = in_channels
feat_chn = 128
self.pred = nn.Sequential(
nn.Conv2d(self.in_channels, feat_chn, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(feat_chn, n_classes, kernel_size=3, padding=1)
)
def forward(self, inputs):
pred = self.pred(inputs)
pred = F.adaptive_max_pool2d(pred, output_size=[1, 1])
pred = pred.view(pred.size(0), pred.size(1))
return pred
def forward_func(self, inputs, vars):
o1 = F.conv2d(inputs, vars[0], vars[1], padding=1)
o2 = F.relu(o1)
o3 = F.conv2d(o2, vars[2], vars[3], padding=1)
pred = F.adaptive_max_pool2d(o3, output_size=[1, 1])
pred = pred.view(pred.size(0), pred.size(1))
return pred
class n_segnet_decoder(nn.Module):
def __init__(self, in_channels=512):
# def __init__(self, n_classes=21, in_channels=512,agent_num=5):
super(n_segnet_decoder, self).__init__()
self.in_channels = in_channels
# Decoder
self.deconv1 = deconv2DBatchNormRelu(self.in_channels, 512, k_size=3, stride=2, padding=1, output_padding=1)
self.deconv2 = conv2DBatchNormRelu(512, 256, k_size=3, stride=1, padding=1)
self.deconv3 = conv2DBatchNormRelu(256, 128, k_size=3, stride=1, padding=1)
def forward(self, inputs):
outputs = self.deconv1(inputs)
# print(outputs.size())
outputs = self.deconv2(outputs)
# print(outputs.size())
outputs = self.deconv3(outputs)
# print(outputs.size(),'---')
return outputs
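The shape comments inside the `forward` methods above can be sanity-checked without torch: each conv/pool stage maps the spatial size through `floor((n + 2p - k) / s) + 1`. A pure-Python check, assuming the torchvision-style resnet18 stem (conv1 with k=7, s=2, p=3) that `pretrainedmodels` mirrors:

```python
def out_size(n, k, s, p):
    # Output spatial size of a conv/pool layer: floor((n + 2p - k) / s) + 1.
    return (n + 2 * p - k) // s + 1

n = 512
n = out_size(n, k=7, s=2, p=3)   # resnet18 conv1: 512 -> 256
n = out_size(n, k=5, s=4, p=1)   # extra MaxPool2d(5, 4, 1): 256 -> 64

base = n                          # layer1 keeps 64 in resnet_encoder
for _ in range(3):                # layer2..layer4 each halve the size
    base = out_size(base, k=3, s=2, p=1)
# base == 8, matching the torch.Size([30, 512, 8, 8]) comment above.

small = out_size(n, k=3, s=2, p=1)  # extra pool2 in resnet_encoder_small: 64 -> 32
for _ in range(3):
    small = out_size(small, k=3, s=2, p=1)
# small == 4 for the "small" variant.
```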
# ===== main/context_processors.py (repo: Aaron1011/texting_wall, license: MIT) =====
from texting_wall import settings as django_settings
def analytics(request):
return {'GOOGLE_ANALYTICS': django_settings.GOOGLE_ANALYTICS}
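For `analytics` to run on every template render, its dotted path must be listed under the template engine's `context_processors` (standard Django configuration; the settings fragment below is an assumed sketch, not copied from this project's settings module):

```python
# Hypothetical Django settings fragment registering the context processor.
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
                # Makes GOOGLE_ANALYTICS available in every template context.
                "main.context_processors.analytics",
            ],
        },
    },
]
```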
# ===== python/testData/refactoring/move/moveFileDoesntReorderImports/before/src/mod.py (repo: jnthn/intellij-community, license: Apache-2.0) =====
import c
import b
import a
print(a, b, c)
# ===== Variable_Coefficient/Evaluation/Plot_Class_Loss.py (repo: nw2190/ConvPDE, license: MIT) =====
import numpy as np
import matplotlib.pyplot as plt
import csv
def main():
legend_entries = []
filename = "class_losses.csv"
alt_filename = "noprob_class_losses.csv"
classes = 20
length_scales = [0.2, 0.2125, 0.225, 0.24, 0.25, 0.2625, 0.275, 0.2875, 0.3, 0.325,
0.35, 0.375, 0.4, 0.425, 0.45, 0.475, 0.5, 0.533, 0.566, 0.6]
#plt.rc('text', usetex=True)
#plt.rc('font', family='serif')
def get_data(f):
cs = []
l1_means = []
l1_stds = []
mse_means = []
mse_stds = []
with open(f, "r") as csvfile:
csvreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in csvreader:
c, l1_mean, l1_std, mse_mean, mse_std = row
cs.append(float(c))
l1_means.append(float(l1_mean))
l1_stds.append(float(l1_std))
mse_means.append(float(mse_mean))
mse_stds.append(float(mse_std))
#print([c, l1_mean, l1_std, mse_mean, mse_std])
cs = np.array(cs)
l1_means = np.array(l1_means)
l1_stds = np.array(l1_stds)
mse_means = np.array(mse_means)
mse_stds = np.array(mse_stds)
return cs, l1_means, l1_stds, mse_means, mse_stds
cs, l1_means, l1_stds, mse_means, mse_stds = get_data(filename)
acs, al1_means, al1_stds, amse_means, amse_stds = get_data(alt_filename)
# Plot parameters
linewidth = 3
titlesize = 24
ylabelsize = 24
xlabelsize = 24
xticksize = 16
yticksize = 16
ylabelpad = 20
xlabelpad = 20
# Bar parametrs
width = 0.4
# Plot L^2 errors
#alt_color = 'tab:purple'
fig, ax1 = plt.subplots(figsize=(14.0,7.0))
ax1.set_xlabel('Length Scale', fontsize=xlabelsize, labelpad=xlabelpad)
#ax1.set_ylabel('L^2 Error', color=color, fontsize=ylabelsize, labelpad=ylabelpad)
#ax1.set_ylabel('Average L^2 Error', color='k', fontsize=ylabelsize, labelpad=ylabelpad)
#ax1.set_ylabel('Relative L^2 Error', color='k', fontsize=ylabelsize, labelpad=ylabelpad)
ax1.set_ylabel(r'Average $\,L^2\,$ Relative Error', color='k', fontsize=ylabelsize, labelpad=ylabelpad)
#ax1.plot(cs, mse_means, color=color, label="L^2 Error", linewidth=linewidth)
error_kw = {"capsize": 4.5,
"elinewidth": 1.75,
"capthick": 2.25}
color = 'tab:orange'
#rects1 = ax1.bar(acs - 0.5*width, amse_means, width, color=color, yerr=amse_stds, label="MSE Training", alpha=0.7,
rects1 = ax1.bar(acs - 0.5*width, amse_means, width, color=color, yerr=amse_stds, label="MSE Network", alpha=0.7,
ecolor="black", error_kw=error_kw)
#, edgecolor='black', hatch="-")
color = 'tab:blue'
#rects2 = ax1.bar(cs + 0.5*width, mse_means, width, color=color, yerr=mse_stds, label="Probability Training", alpha=0.7,
rects2 = ax1.bar(cs + 0.5*width, mse_means, width, color=color, yerr=mse_stds, label="Probability Network", alpha=0.7,
edgecolor='white', hatch="/", ecolor="black", error_kw=error_kw)
#ax1.plot(acs, amse_means, linestyle='dashed', color=alt_color, label="L^2 Error (noprob)")
#ax1.plot(acs, amse_means, linestyle='dashed', color=alt_color, label="(without probability)", linewidth=linewidth)
#ax1.tick_params(axis='y', labelcolor=color, labelsize=yticksize)
#legend_entries.append("L^2 Mean Error")
ax1.tick_params(axis='y', labelsize=yticksize)
# Set xticks to length scale values
ticks = [n+1 for n in range(0,classes)]
#labels = (0.2, 0.2125, 0.225, 0.24, 0.25, 0.2625, 0.275, 0.2875, 0.3, 0.325,
# 0.35, 0.375, 0.4, 0.425, 0.45, 0.475, 0.5, 0.533, 0.566, 0.6)
labels = tuple(["{0:.2}".format(l).replace("0","",1) for l in length_scales])
plt.xticks(ticks, labels, fontsize=xticksize)
"""
# Plot L^1 errors
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:orange'
alt_color = color
#alt_color = 'tab:red'
#ax2.set_ylabel('L^1 Error', color=color, fontsize=ylabelsize, labelpad=ylabelpad)
ax2.set_ylabel('L^1 Error', color='k', fontsize=ylabelsize, labelpad=ylabelpad)
#ax2.plot(cs, l1_means, color=color, linestyle='dashed', label="L^1 Error")
rects3 = ax2.bar(cs + 2*width, l1_means, width, color=color, yerr=l1_stds)
rects4 = ax2.bar(acs + 3*width, al1_means, width, color=color, yerr=al1_stds)
#ax2.plot(acs, al1_means, color=alt_color, linestyle='dashed', label="L^1 Error (noprob)")
#ax2.plot(acs, al1_means, color=alt_color, linestyle='dashed', label="(without probability)", linewidth=linewidth)
ax2.tick_params(axis='y', labelcolor=color, labelsize=yticksize)
#legend_entries.append("L^1 Mean Error")
"""
fig.tight_layout() # otherwise the right y-label is slightly clipped
#fig.legend(fontsize=24, loc=(0.525,0.7))
ax1.legend(fontsize=24, loc=(0.675,0.775))
plt.show()
def old_main():
NOPROB = True
legend_entries = []
filename = "class_losses.csv"
alt_filename = "noprob_class_losses.csv"
classes = 20
length_scales = [0.2, 0.2125, 0.225, 0.24, 0.25, 0.2625, 0.275, 0.2875, 0.3, 0.325,
0.35, 0.375, 0.4, 0.425, 0.45, 0.475, 0.5, 0.533, 0.566, 0.6]
def get_data(f):
cs = []
l1_means = []
l1_stds = []
mse_means = []
mse_stds = []
with open(f, "r") as csvfile:
csvreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in csvreader:
c, l1_mean, l1_std, mse_mean, mse_std = row
cs.append(float(c))
l1_means.append(float(l1_mean))
l1_stds.append(float(l1_std))
mse_means.append(float(mse_mean))
mse_stds.append(float(mse_std))
#print([c, l1_mean, l1_std, mse_mean, mse_std])
cs = np.array(cs)
l1_means = np.array(l1_means)
l1_stds = np.array(l1_stds)
mse_means = np.array(mse_means)
mse_stds = np.array(mse_stds)
return cs, l1_means, l1_stds, mse_means, mse_stds
cs, l1_means, l1_stds, mse_means, mse_stds = get_data(filename)
if NOPROB:
acs, al1_means, al1_stds, amse_means, amse_stds = get_data(alt_filename)
"""
cs = []
l1_means = []
l1_stds = []
mse_means = []
mse_stds = []
with open(filename, "r") as csvfile:
csvreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in csvreader:
c, l1_mean, l1_std, mse_mean, mse_std = row
cs.append(float(c))
l1_means.append(float(l1_mean))
l1_stds.append(float(l1_std))
mse_means.append(float(mse_mean))
mse_stds.append(float(mse_std))
#print([c, l1_mean, l1_std, mse_mean, mse_std])
cs = np.array(cs)
l1_means = np.array(l1_means)
l1_stds = np.array(l1_stds)
mse_means = np.array(mse_means)
mse_stds = np.array(mse_stds)
"""
# Plot parameters
linewidth = 3
titlesize = 24
ylabelsize = 20
xlabelsize = 24
xticksize = 16
yticksize = 16
ylabelpad = 20
xlabelpad = 20
# Plot L^2 errors
color = 'tab:blue'
alt_color = color
#alt_color = 'tab:purple'
fig, ax1 = plt.subplots()
ax1.set_xlabel('Length Scale', fontsize=xlabelsize, labelpad=xlabelpad)
#ax1.set_ylabel('L^2 Error', color=color, fontsize=ylabelsize, labelpad=ylabelpad)
ax1.set_ylabel('L^2 Error', color='k', fontsize=ylabelsize, labelpad=ylabelpad)
ax1.plot(cs, mse_means, color=color, label="L^2 Error", linewidth=linewidth)
if NOPROB:
#ax1.plot(acs, amse_means, linestyle='dashed', color=alt_color, label="L^2 Error (noprob)")
ax1.plot(acs, amse_means, linestyle='dashed', color=alt_color, label="(without probability)", linewidth=linewidth)
ax1.tick_params(axis='y', labelcolor=color, labelsize=yticksize)
#legend_entries.append("L^2 Mean Error")
# Plot L^2 standard deviations
alpha = 0.1
y1 = np.array(mse_means - mse_stds, dtype=np.float32)
y2 = np.array(mse_means + mse_stds, dtype=np.float32)
plt.fill_between(cs, y1, y2, where=y2 >= y1, facecolor=color, alpha=alpha, interpolate=True, label=None)
if NOPROB:
# Plot L^2 standard deviations (noprob)
y1 = np.array(amse_means - amse_stds, dtype=np.float32)
y2 = np.array(amse_means + amse_stds, dtype=np.float32)
plt.fill_between(acs, y1, y2, where=y2 >= y1, facecolor=alt_color, alpha=alpha/2., interpolate=True, label=None, hatch='X', edgecolor='k')
# Set xticks to length scale values
ticks = [n+1 for n in range(0,classes)]
#labels = (0.2, 0.2125, 0.225, 0.24, 0.25, 0.2625, 0.275, 0.2875, 0.3, 0.325,
# 0.35, 0.375, 0.4, 0.425, 0.45, 0.475, 0.5, 0.533, 0.566, 0.6)
labels = tuple(["{0:.2}".format(l).replace("0","",1) for l in length_scales])
plt.xticks(ticks, labels, fontsize=xticksize)
# Plot L^1 errors
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:orange'
alt_color = color
#alt_color = 'tab:red'
#ax2.set_ylabel('L^1 Error', color=color, fontsize=ylabelsize, labelpad=ylabelpad)
ax2.set_ylabel('L^1 Error', color='k', fontsize=ylabelsize, labelpad=ylabelpad)
#ax2.plot(cs, l1_means, color=color, linestyle='dashed', label="L^1 Error")
ax2.plot(cs, l1_means, color=color, label="L^1 Error", linewidth=linewidth)
if NOPROB:
#ax2.plot(acs, al1_means, color=alt_color, linestyle='dashed', label="L^1 Error (noprob)")
ax2.plot(acs, al1_means, color=alt_color, linestyle='dashed', label="(without probability)", linewidth=linewidth)
ax2.tick_params(axis='y', labelcolor=color, labelsize=yticksize)
#legend_entries.append("L^1 Mean Error")
# Plot L^1 standard deviations
alpha = 0.15
y1 = np.array(l1_means - l1_stds, dtype=np.float32)
y2 = np.array(l1_means + l1_stds, dtype=np.float32)
plt.fill_between(cs, y1, y2, where=y2 >= y1, facecolor=color, alpha=alpha, interpolate=True, label=None)
if NOPROB:
# Plot L^1 standard deviations (noprob)
y1 = np.array(al1_means - al1_stds, dtype=np.float32)
y2 = np.array(al1_means + al1_stds, dtype=np.float32)
plt.fill_between(acs, y1, y2, where=y2 >= y1, facecolor=alt_color, alpha=alpha/2, interpolate=True, label=None, hatch='X', edgecolor='k')
fig.tight_layout() # otherwise the right y-label is slightly clipped
if NOPROB:
fig.legend(fontsize=24, loc=(0.525,0.7))
else:
fig.legend(fontsize=24, loc=(0.55,0.8))
plt.show()
# Run main() function when called directly
if __name__ == '__main__':
main()
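Both plotting functions above repeat the same space-delimited CSV parsing inside `get_data`; a minimal self-contained sketch of that pattern (standard library only, with hypothetical sample rows standing in for `class_losses.csv`):

```python
import csv
import io

def parse_losses(text):
    """Parse space-delimited rows of (class, l1_mean, l1_std, mse_mean, mse_std)
    into parallel lists of floats, mirroring get_data above."""
    cols = ([], [], [], [], [])
    reader = csv.reader(io.StringIO(text), delimiter=' ', quotechar='|')
    for row in reader:
        for values, field in zip(cols, row):
            values.append(float(field))
    return cols

# Hypothetical sample data in the same column order as class_losses.csv
sample = "1 0.10 0.01 0.20 0.02\n2 0.12 0.015 0.22 0.03\n"
cs, l1_means, l1_stds, mse_means, mse_stds = parse_losses(sample)
```

In the script itself each list is then wrapped in `np.array` so that expressions like `mse_means - mse_stds` broadcast elementwise when building the error bands.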
| 37.771626 | 146 | 0.620923 | 1,622 | 10,916 | 4.037608 | 0.125771 | 0.024584 | 0.017865 | 0.02382 | 0.910979 | 0.885173 | 0.864101 | 0.845167 | 0.834784 | 0.793251 | 0 | 0.070276 | 0.23351 | 10,916 | 288 | 147 | 37.902778 | 0.712442 | 0.223067 | 0 | 0.675862 | 0 | 0 | 0.054425 | 0.006785 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027586 | false | 0 | 0.02069 | 0 | 0.062069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
864f27a79f5fd11bcb3221577d795e86ace0b033 | 19,576 | py | Python | pybind/slxos/v17r_2_00/telemetry/profile/mpls_traffic_fec/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v17r_2_00/telemetry/profile/mpls_traffic_fec/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v17r_2_00/telemetry/profile/mpls_traffic_fec/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import mpls_traffic_fecs
import add_
class mpls_traffic_fec(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-telemetry - based on the path /telemetry/profile/mpls-traffic-fec. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__name','__interval','__mpls_traffic_fecs','__add_',)
_yang_name = 'mpls-traffic-fec'
_rest_name = 'mpls-traffic-fec'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__add_ = YANGDynClass(base=YANGListType("object",add_.add_, yang_name="add", rest_name="add", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='object', extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}), is_container='list', yang_name="add", rest_name="add", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)
self.__interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'60..3600']}), is_leaf=True, yang_name="interval", rest_name="interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'MPLS profile interval'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-profile-interval-type', is_config=True)
self.__name = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'default_mpls_traffic_fec_statistics', 'length': [u'3..64']}), is_leaf=True, yang_name="name", rest_name="name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MPLS profile name'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-fec-profile-name-type', is_config=True)
self.__mpls_traffic_fecs = YANGDynClass(base=YANGListType("mpls_traffic_fec_address",mpls_traffic_fecs.mpls_traffic_fecs, yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='mpls-traffic-fec-address', extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}), is_container='list', yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'telemetry', u'profile', u'mpls-traffic-fec']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'telemetry', u'profile', u'mpls-traffic-fec']
def _get_name(self):
"""
Getter method for name, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/name (mpls-traffic-fec-profile-name-type)
"""
return self.__name
def _set_name(self, v, load=False):
"""
Setter method for name, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/name (mpls-traffic-fec-profile-name-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_name() directly.
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'default_mpls_traffic_fec_statistics', 'length': [u'3..64']}), is_leaf=True, yang_name="name", rest_name="name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MPLS profile name'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-fec-profile-name-type', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """name must be of a type compatible with mpls-traffic-fec-profile-name-type""",
'defined-type': "brocade-telemetry:mpls-traffic-fec-profile-name-type",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'default_mpls_traffic_fec_statistics', 'length': [u'3..64']}), is_leaf=True, yang_name="name", rest_name="name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MPLS profile name'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-fec-profile-name-type', is_config=True)""",
})
self.__name = t
if hasattr(self, '_set'):
self._set()
def _unset_name(self):
self.__name = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'default_mpls_traffic_fec_statistics', 'length': [u'3..64']}), is_leaf=True, yang_name="name", rest_name="name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MPLS profile name'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-fec-profile-name-type', is_config=True)
def _get_interval(self):
"""
Getter method for interval, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/interval (mpls-traffic-profile-interval-type)
"""
return self.__interval
def _set_interval(self, v, load=False):
"""
Setter method for interval, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/interval (mpls-traffic-profile-interval-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_interval() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'60..3600']}), is_leaf=True, yang_name="interval", rest_name="interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'MPLS profile interval'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-profile-interval-type', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """interval must be of a type compatible with mpls-traffic-profile-interval-type""",
'defined-type': "brocade-telemetry:mpls-traffic-profile-interval-type",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'60..3600']}), is_leaf=True, yang_name="interval", rest_name="interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'MPLS profile interval'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-profile-interval-type', is_config=True)""",
})
self.__interval = t
if hasattr(self, '_set'):
self._set()
def _unset_interval(self):
self.__interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'60..3600']}), is_leaf=True, yang_name="interval", rest_name="interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'MPLS profile interval'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='mpls-traffic-profile-interval-type', is_config=True)
def _get_mpls_traffic_fecs(self):
"""
Getter method for mpls_traffic_fecs, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/mpls_traffic_fecs (list)
"""
return self.__mpls_traffic_fecs
def _set_mpls_traffic_fecs(self, v, load=False):
"""
Setter method for mpls_traffic_fecs, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/mpls_traffic_fecs (list)
If this variable is read-only (config: false) in the
source YANG file, then _set_mpls_traffic_fecs is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mpls_traffic_fecs() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("mpls_traffic_fec_address",mpls_traffic_fecs.mpls_traffic_fecs, yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='mpls-traffic-fec-address', extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}), is_container='list', yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """mpls_traffic_fecs must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("mpls_traffic_fec_address",mpls_traffic_fecs.mpls_traffic_fecs, yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='mpls-traffic-fec-address', extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}), is_container='list', yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)""",
})
self.__mpls_traffic_fecs = t
if hasattr(self, '_set'):
self._set()
def _unset_mpls_traffic_fecs(self):
self.__mpls_traffic_fecs = YANGDynClass(base=YANGListType("mpls_traffic_fec_address",mpls_traffic_fecs.mpls_traffic_fecs, yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='mpls-traffic-fec-address', extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}), is_container='list', yang_name="mpls-traffic-fecs", rest_name="fec", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'Mplstrafficfec', u'cli-suppress-mode': None, u'alt-name': u'fec', u'info': u'MPLS Stats profile by FEC address', u'cli-suppress-list-no': None}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)
def _get_add_(self):
"""
Getter method for add_, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/add (list)
"""
return self.__add_
def _set_add_(self, v, load=False):
"""
Setter method for add_, mapped from YANG variable /telemetry/profile/mpls_traffic_fec/add (list)
If this variable is read-only (config: false) in the
source YANG file, then _set_add_ is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_add_() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("object",add_.add_, yang_name="add", rest_name="add", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='object', extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}), is_container='list', yang_name="add", rest_name="add", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """add_ must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("object",add_.add_, yang_name="add", rest_name="add", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='object', extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}), is_container='list', yang_name="add", rest_name="add", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)""",
})
self.__add_ = t
if hasattr(self, '_set'):
self._set()
def _unset_add_(self):
self.__add_ = YANGDynClass(base=YANGListType("object",add_.add_, yang_name="add", rest_name="add", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='object', extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}), is_container='list', yang_name="add", rest_name="add", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'MplstrafficfecProfileObject', u'cli-suppress-list-no': None, u'cli-suppress-mode': None, u'info': u'Add MPLS traffic telemetry object'}}, namespace='urn:brocade.com:mgmt:brocade-telemetry', defining_module='brocade-telemetry', yang_type='list', is_config=True)
name = __builtin__.property(_get_name, _set_name)
interval = __builtin__.property(_get_interval, _set_interval)
mpls_traffic_fecs = __builtin__.property(_get_mpls_traffic_fecs, _set_mpls_traffic_fecs)
add_ = __builtin__.property(_get_add_, _set_add_)
_pyangbind_elements = {'name': name, 'interval': interval, 'mpls_traffic_fecs': mpls_traffic_fecs, 'add_': add_, }
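The accessor pattern this generated class uses — a private backing attribute, `_get_x`/`_set_x` methods that validate against the YANG restriction, and a `property` tying them together — can be sketched in plain Python. This is a simplified stand-in, not the actual pyangbind `YANGDynClass` machinery:

```python
class IntervalContainer:
    """Simplified analog of the generated accessors: the 'interval' leaf is a
    property whose setter enforces the YANG range restriction 60..3600."""

    def __init__(self):
        self.__interval = 60  # default within the allowed range

    def _get_interval(self):
        return self.__interval

    def _set_interval(self, v):
        v = int(v)
        if not 60 <= v <= 3600:
            raise ValueError("interval must be in the range 60..3600")
        self.__interval = v

    interval = property(_get_interval, _set_interval)

c = IntervalContainer()
c.interval = 120  # accepted: within range
```

Out-of-range assignments raise `ValueError`, analogous to the structured error dictionaries raised by `_set_interval` above.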
| 83.302128 | 971 | 0.736718 | 2,743 | 19,576 | 5.02953 | 0.073277 | 0.069368 | 0.044651 | 0.031313 | 0.841041 | 0.817846 | 0.804654 | 0.791679 | 0.789214 | 0.774427 | 0 | 0.005452 | 0.11933 | 19,576 | 234 | 972 | 83.65812 | 0.79478 | 0.109215 | 0 | 0.415584 | 0 | 0.025974 | 0.419196 | 0.171125 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097403 | false | 0 | 0.064935 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8652120951426ad09a0a0d67910381fd1298d3ce | 174 | py | Python | netmiko/raisecom/__init__.py | mtuska/netmiko | 90ae69a7c251c13e483f7c52629dbbe4356e7a6d | [
"MIT"
] | 2,833 | 2015-01-04T20:04:10.000Z | 2022-03-31T13:03:17.000Z | netmiko/raisecom/__init__.py | MrPaulAR/netmiko | bc9700a803ccd89e29672dbe544368b946352aa0 | [
"MIT"
] | 2,137 | 2015-01-28T17:33:41.000Z | 2022-03-31T18:41:21.000Z | netmiko/raisecom/__init__.py | georgesnow/netmiko | 185f51ca5c24ea2977d6ca31db1ae263aa72cc12 | [
"MIT"
] | 1,367 | 2015-01-04T20:04:10.000Z | 2022-03-31T19:13:28.000Z | from netmiko.raisecom.raisecom_roap import RaisecomRoapSSH
from netmiko.raisecom.raisecom_roap import RaisecomRoapTelnet
__all__ = ["RaisecomRoapSSH", "RaisecomRoapTelnet"]
| 34.8 | 61 | 0.856322 | 17 | 174 | 8.411765 | 0.470588 | 0.153846 | 0.265734 | 0.377622 | 0.517483 | 0.517483 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074713 | 174 | 4 | 62 | 43.5 | 0.888199 | 0 | 0 | 0 | 0 | 0 | 0.189655 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
86774f829be05233a23a0fb058504072225f3b18 | 113 | py | Python | frosted/test/__init__.py | magro11/frosted | bd05f782d9bee62379b8447dd4dcb2818f7f2142 | [
"MIT"
] | 59 | 2015-01-05T19:23:58.000Z | 2018-05-11T09:42:53.000Z | scripts/__init__.py | timothycrosley/frosted_original_fork | 54056e5d85759b286bcb199ff174df370cff2ada | [
"MIT"
] | 5 | 2015-09-15T03:57:22.000Z | 2017-12-27T16:17:53.000Z | scripts/__init__.py | timothycrosley/frosted_original_fork | 54056e5d85759b286bcb199ff174df370cff2ada | [
"MIT"
] | 10 | 2015-01-27T10:37:10.000Z | 2018-03-05T19:10:44.000Z | from __future__ import absolute_import, division, print_function, unicode_literals
from pies.overrides import *
| 28.25 | 82 | 0.849558 | 14 | 113 | 6.357143 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106195 | 113 | 3 | 83 | 37.666667 | 0.881188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
86b8602fb82fbc86bd01d40e67f047a24c452753 | 742 | py | Python | bot/py/r3340/__init__.py | sonoprob/0x56 | fa4cf9365696b1461492244a016e3fb57a33ce7a | [
"Artistic-2.0"
] | null | null | null | bot/py/r3340/__init__.py | sonoprob/0x56 | fa4cf9365696b1461492244a016e3fb57a33ce7a | [
"Artistic-2.0"
] | null | null | null | bot/py/r3340/__init__.py | sonoprob/0x56 | fa4cf9365696b1461492244a016e3fb57a33ce7a | [
"Artistic-2.0"
] | null | null | null | """r3340_____________________"""
"""r3340___------___------___"""
"""r3340__--------_--------__"""
"""r3340__-----------------__"""
"""r3340___---------------___"""
"""r3340_____-----------_____"""
"""r3340_______-------_______"""
"""r3340_________---_________"""
"""r3340__________-__________"""
"""r3340_____________________"""
from cervelle import cervelle
from dec300 import dec
from notify import notify
"""r3340_____________________"""
"""r3340___------___------___"""
"""r3340__--------_--------__"""
"""r3340__-----------------__"""
"""r3340___---------------___"""
"""r3340_____-----------_____"""
"""r3340_______-------_______"""
"""r3340_________---_________"""
"""r3340__________-__________"""
"""r3340_____________________"""
| 27.481481 | 32 | 0.570081 | 32 | 742 | 5.21875 | 0.21875 | 1.077844 | 1.437126 | 1.676647 | 0.598802 | 0.598802 | 0.598802 | 0.598802 | 0.598802 | 0.598802 | 0 | 0.117397 | 0.04717 | 742 | 26 | 33 | 28.538462 | 0.118812 | 0.03504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
86cbc6b937590fd8d170bfaa083617e2f56a9739 | 65 | py | Python | latex/slides/resources/08_unpacking_lambda/lamda_syntax.py | Bizarious/python-lessons | 24144f03d70d9ed52b0430d4cc2aca9dcded14a3 | [
"CC-BY-4.0"
] | null | null | null | latex/slides/resources/08_unpacking_lambda/lamda_syntax.py | Bizarious/python-lessons | 24144f03d70d9ed52b0430d4cc2aca9dcded14a3 | [
"CC-BY-4.0"
] | null | null | null | latex/slides/resources/08_unpacking_lambda/lamda_syntax.py | Bizarious/python-lessons | 24144f03d70d9ed52b0430d4cc2aca9dcded14a3 | [
"CC-BY-4.0"
] | null | null | null | def funktion(a1, a2):
return a1 + a2
lambda a1, a2: a1 + a2
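Both forms define the same callable; the bare lambda on the last line is discarded immediately, so to reuse it you bind it to a name (a quick equivalence check, same addition as above):

```python
def funktion(a1, a2):
    return a1 + a2

addiere = lambda a1, a2: a1 + a2  # equivalent anonymous form, bound to a name

result_def = funktion(2, 3)
result_lambda = addiere(2, 3)
```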
| 13 | 22 | 0.6 | 12 | 65 | 3.25 | 0.5 | 0.410256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 0.276923 | 65 | 4 | 23 | 16.25 | 0.659574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
86e719a8ba0356a4a65010275dfda97abc3ae722 | 6,051 | py | Python | Encrypt_Text_Into_Image.py | JackieChenssh/TextEmbeddingImage | 03939af48c9f1ad993efdfd4b90541ed78cac096 | [
"MIT"
] | null | null | null | Encrypt_Text_Into_Image.py | JackieChenssh/TextEmbeddingImage | 03939af48c9f1ad993efdfd4b90541ed78cac096 | [
"MIT"
] | null | null | null | Encrypt_Text_Into_Image.py | JackieChenssh/TextEmbeddingImage | 03939af48c9f1ad993efdfd4b90541ed78cac096 | [
"MIT"
] | null | null | null |
from RandomCorEnc4Text import RandomCorEnc4Text
from waterMarkEmbedding import waterMarkEmbedding
from ImageProcess import imageScrambling
from ImageAttacker import ImageAttacker,FilterType,NoiseType
from PIL import Image
import numpy as np
text_key = 'happy_sugar_life'
test_text = 'You must - there are over 200,000 words in our free online dictionary, but you are looking for one that\'s only in the Merriam-Webster Unabridged Dictionary.'
watermark_img = Image.open('./img/img_watermark.jpg')
carrier_img = Image.open('./img/img_carrier.jpg')
carrier_img = carrier_img.resize(np.array(carrier_img.size) * 2)
md5_key,watermarked_img = RandomCorEnc4Text().convertEncAndDec(text_key,test_text,watermark_img,'enc')
logic_ini = np.var(md5_key) - np.var(md5_key,dtype = int)
scrambled_img = imageScrambling(watermarked_img,logic_ini)
encoded_img,watermark_shape = waterMarkEmbedding().DCTwaterMarkEmbedding4RGB(carrier_img, md5_key, scrambled_img)
encoded_img.save('./img/img_encoded_src.bmp')
# Apply each filter attack channel-by-channel and save the attacked image.
for filter_name in ['Median', 'Maximum', 'Minimum', 'Gaussian', 'Smooth', 'Shapen', 'Mean']:
    attacked_layers = [Image.fromarray(ImageAttacker().ImageFilter(colorLayer, method=getattr(FilterType, filter_name)))
                       for colorLayer in encoded_img.split()]
    Image.merge('RGB', attacked_layers).save('./img/img_encoded_%s.bmp' % filter_name)
# Apply the two additive-noise attacks in the same way.
for noise_name in ['Gaussian', 'Uniform']:
    attacked_layers = [Image.fromarray(ImageAttacker().AdditionNoise(colorLayer, method=getattr(NoiseType, noise_name)))
                       for colorLayer in encoded_img.split()]
    Image.merge('RGB', attacked_layers).save('./img/img_encoded_%sNoise.bmp' % noise_name)
# Decode the clean image and every attacked variant with the same key material.
for suffix in ['src', 'Median', 'Minimum', 'Maximum', 'Gaussian',
               'Smooth', 'Shapen', 'Mean', 'GaussianNoise', 'UniformNoise']:
    encoded_img = Image.open('./img/img_encoded_%s.bmp' % suffix)
    decoded_img, _ = waterMarkEmbedding().DCTwaterMarkEmbedding4RGB(encoded_img, md5_key, method='dec', watermark_shape=watermark_shape)
    watered_img = imageScrambling(decoded_img, logic_ini, 'dec')
    RandomCorEnc4Text().convertEncAndDec(text_key, carrier_img=watered_img, method='dec')
 | 67.233333 | 189 | 0.809949 | 757 | 6,051 | 6.200793 | 0.11889 | 0.066042 | 0.05539 | 0.038347 | 0.83447 | 0.792714 | 0.792714 | 0.785897 | 0.785897 | 0.785897 | 0 | 0.007911 | 0.05999 | 6,051 | 90 | 190 | 67.233333 | 0.817335 | 0 | 0 | 0.461538 | 0 | 0.153846 | 0.143424 | 0.103767 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.092308 | 0 | 0.092308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
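Encrypt_Text_Into_Image.py derives its scrambling seed as the gap between a float variance and an integer-typed variance of the MD5 key, the intent being a small key-dependent value in [0, 1) to initialise the image scrambling. A rough stdlib-only sketch of a fractional seed of that kind (the helper name and the byte values are hypothetical, and `v - floor(v)` is a simplification of the script's dtype trick):

```python
import math
from statistics import pvariance

def fractional_seed(key_bytes):
    """Fractional part of the population variance of the key bytes: always in [0, 1)."""
    v = pvariance(key_bytes)
    return v - math.floor(v)

# e.g. the byte values of a short key string
seed = fractional_seed([ord(c) for c in 'happy'])
print(seed)
```

Because the fractional part is bounded in [0, 1), it is a valid initial value for a logistic-map style scrambling, while still depending on every byte of the key.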
86f8b70034a09fcb6ed80e05bee1ad0869f8179a | 11,821 | py | Python | plaidcloud/utilities/xray.py | PlaidCloud/public-utilities | 663e94f2657a02a4249177945e0880bb968c3439 | [
"Apache-2.0"
] | null | null | null | plaidcloud/utilities/xray.py | PlaidCloud/public-utilities | 663e94f2657a02a4249177945e0880bb968c3439 | [
"Apache-2.0"
] | 48 | 2020-10-30T10:15:39.000Z | 2022-03-25T17:23:57.000Z | plaidcloud/utilities/xray.py | PlaidCloud/plaid-utilities | 1031cb87580bbe110f56455925e483a0ae177fe1 | [
"Apache-2.0"
] | null | null | null |
#!/usr/bin/env python
# coding=utf-8
"""
For development and debugging purposes
"""
from __future__ import absolute_import
__author__ = "Michael Rea"
__copyright__ = "© Copyright 2010-2011, Tartan Solutions, Inc"
__credits__ = ["Michael Rea, Paul Morel"]
__license__ = "Apache 2.0"
__maintainer__ = "Michael Rea"
__email__ = "michael.rea@tartansolutions.com"
def Xray(input_object, id_list=[], level=1, printout=False, show_private=False, max_level=9):
    """
    Xray recursively dissects an object into its component parts.
    It is intended to be a debugging tool.
    To use Xray, just pass an object into it.
    Args:
        input_object (object): The object to xray
        id_list (:type:`list` of :type:`str`, optional): A list of object properties to filter to
        level (int, optional): The current level of recursion
        printout (bool, optional): Print this xray out? Defaults to `False`
        show_private (bool, optional): Show private items? Defaults to `False`
        max_level (int, optional): The maximum number of levels to recurse down.
    Returns:
        str: The formatted x-ray of `input_object`
    >>> test_a = ObjectA()
    >>> test_b = ObjectB()
    >>> test_a.set_b(test_b)
    >>> test_b.set_a(test_a)
    >>> xray_a = Xray(test_a)
    >>> from six import string_types
    >>> isinstance(xray_a, string_types)
    True
    """
    #MWR 20101124 Next 4 lines block looping recursion
    #if id(input_object) in id_list:
    if level >= max_level:
        return ""
    else:
        id_list.append(id(input_object))
    cr = "\r\n"
    xray_results = ""
    if level == 1:
        prepend = "".join([cr, (level * " "), "Object Type: ", " ", str(type(input_object)), cr])
    else:
        prepend = ""
    x = True
    for index, item in enumerate(dir(input_object)):
        my_line = []
        lvl = (level * " ")
        itm = str(item)
        #Just forcing some alignment here. Won't work so well if items have more than 30 characters.
        length = max(0, 30 - len(itm))
        itm = itm + (length * " ")
        if item[2:] != "__" and item[:2] != "__":
            #if item[1:] != "_":
            private = False
        else:
            private = True
        if "__getattribute__" in dir(input_object):
            try:
                myobj = input_object.__getattribute__(item)
            except:
                pass
            else:
                if type(input_object.__getattribute__(item)) in([type("")]):
                    my_line.extend([lvl, "str ", itm, str(input_object.__getattribute__(item)), " <", str(id(itm)), ">", "[", str(level), "]"])
                elif "numpy" in str(type(input_object.__getattribute__(item))):
                    my_line.extend([lvl, str(type(input_object.__getattribute__(item))), " ", itm, str(input_object.__getattribute__(item)), " <", str(id(itm)), ">", "[", str(level), "]"])
                elif type(input_object.__getattribute__(item)) in([type(x)]):
                    my_line.extend([lvl, "bool ", itm, str(input_object.__getattribute__(item)), " <", str(id(itm)), ">", "[", str(level), "]"])
                elif type(input_object.__getattribute__(item)) in([type(1)]):
                    my_line.extend([lvl, "int ", itm, str(input_object.__getattribute__(item)), " <", str(id(itm)), ">", "[", str(level), "]"])
                elif type(input_object.__getattribute__(item)) in([type(1.1)]):
                    my_line.extend([lvl, "float ", itm, str(input_object.__getattribute__(item)), " <", str(id(itm)), ">", "[", str(level), "]"])
                elif type(input_object.__getattribute__(item)) in([type({})]):
                    #if id(input_object.__getattribute__(item)) not in id_list:
                    dict_description = XrayDict(input_object.__getattribute__(item), id_list, level+1, show_private)
                    my_line.extend([lvl, "dict ", itm, " <", str(id(itm)), ">", "[", str(level), "]", dict_description])
                elif type(input_object.__getattribute__(item)) in([type([])]):
                    #change list to dict so we can reuse XrayDict (rather than build XrayList)
                    my_dict = {}
                    for my_index, my_item in enumerate(input_object.__getattribute__(item)):
                        my_dict[my_index] = my_item
                    #if id(input_object.__getattribute__(item)) not in id_list:
                    dict_description = XrayDict(my_dict, id_list, level+1, show_private)
                    my_line.extend([lvl, "list ", itm, " <", str(id(itm)), ">", "[", str(level), "]", dict_description])
                elif type(input_object.__getattribute__(item)) in([type(())]):
                    my_line.extend([lvl, "tuple ", itm, str(input_object.__getattribute__(item)), " <", str(id(itm)), ">", "[", str(level), "]"])
                elif type(input_object.__getattribute__(item)) in([type(None)]):
                    my_line.extend([lvl, "None ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
                elif type(input_object.__getattribute__(item)).__name__ == "instancemethod":
                    my_line.extend([lvl, "method ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
                elif "builtin" in str(type(input_object.__getattribute__(item))):
                    my_line.extend([lvl, "builtin ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
                elif "__class__" in item:
                    my_line.extend([lvl, "class ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
                elif "method-wrapper" in str(type(input_object.__getattribute__(item))):
                    my_line.extend([lvl, "meth-wrap ", " <", str(id(itm)), ">", "[", str(level), "]"])
                else:
                    #if id(input_object.__getattribute__(item)) not in id_list:
                    #my_line.extend(type(item))
                    my_line.extend([lvl, "--- object ", itm, str(input_object.__getattribute__(item)).replace("\n", "").replace("\r", ""), " <", str(id(itm)), ">", "[", str(level), "]", Xray(input_object.__getattribute__(item), id_list, level + 1, show_private)])
                    #return contents.replace("\n", "").replace("\r", "")
                if private == True and show_private == False:
                    pass
                else:
                    xray_results = "".join([xray_results, cr, " ".join(my_line)])
        else:
            my_line.extend([lvl, "---", " <", item, ">", "[", str(level), "]"])
            xray_results = "".join([xray_results, cr, " ".join(my_line)])
    xray_results = "".join([prepend, xray_results])
    return xray_results
def XrayDict(input_object, id_list, level=1, printout=False, show_private=False, max_level=9):
    """XRays a provided dict
    Args:
        input_object (dict): The dict to xray
        id_list (:type:`list` of :type:`str`, optional): A list of object properties to filter to
        level (int, optional): The current level of recursion
        printout (bool, optional): Print this xray out? Defaults to `False`
        show_private (bool, optional): Show private items? Defaults to `False`
        max_level (int, optional): The maximum number of levels to recurse down.
    Returns:
        str: The formatted x-ray of `input_object`
    """
    #MWR 20101124 Next 4 lines block looping recursion
    if level >= max_level:
        return ""
    else:
        id_list.append(id(input_object))
    xray_results = ""
    cr = "\r\n"
    if level == 1:
        prepend = "".join([cr, (level * " "), "Object Type: ", " ", str(type(input_object)), cr])
    else:
        prepend = ""
    x = True
    for key in input_object.keys():
        my_line = []
        lvl = (level * " ")
        itm = str(key)
        length = 20 - len(itm)
        itm = itm + (length * " ")
        if str(key)[2:] != "__" and str(key)[:2] != "__":
            #if str(key)[1:] != "_":
            private = False
        else:
            private = True
        if type(input_object.get(key)) in([type("")]):
            my_line.extend([lvl, "str ", itm, str(input_object.get(key)), " <", str(id(itm)), ">", "[", str(level), "]"])
        elif "numpy" in str(type(input_object.get(key))):
            my_line.extend([lvl, str(type(input_object.get(key))), " ", itm, str(input_object.get(key)), " <", str(id(itm)), ">", "[", str(level), "]"])
        elif type(input_object.get(key)) in([type(x)]):
            my_line.extend([lvl, "bool ", itm, str(input_object.get(key)), " <", str(id(itm)), ">", "[", str(level), "]"])
        elif type(input_object.get(key)) in([type(1)]):
            my_line.extend([lvl, "int ", itm, str(input_object.get(key)), " <", str(id(itm)), ">", "[", str(level), "]"])
        elif type(input_object.get(key)) in([type(1.1)]):
            my_line.extend([lvl, "float ", itm, str(input_object.get(key)), " <", str(id(itm)), ">", "[", str(level), "]"])
        elif type(input_object.get(key)) in([type({})]):
            #if id(input_object.get(key)) not in id_list:
            dict_description = XrayDict(input_object.get(key), id_list, level+1, show_private)
            my_line.extend([lvl, "dict ", " <", str(id(itm)), ">", "[", str(level), "]", itm, dict_description])
        elif type(input_object.get(key)) in([type([])]):
            #change list to dict so we can reuse XrayDict (rather than build XrayList)
            my_dict = {}
            for my_index, my_item in enumerate(input_object.get(key)):
                my_dict[my_index] = my_item
            #if id(input_object.get(key)) not in id_list:
            dict_description = XrayDict(my_dict, id_list, level+1, show_private)
            my_line.extend([lvl, "list ", itm, " <", str(id(itm)), ">", "[", str(level), "]", dict_description])
        elif type(input_object.get(key)) in([type(())]):
            my_line.extend([lvl, "tuple ", itm, str(input_object.get(key)), " <", str(id(itm)), ">", "[", str(level), "]"])
        elif type(input_object.get(key)) in([type(None)]):
            my_line.extend([lvl, "None ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
        elif type(input_object.get(key)).__name__ == "instancemethod":
            my_line.extend([lvl, "method ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
        elif "builtin" in str(type(input_object.get(key))):
            my_line.extend([lvl, "builtin ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
        #elif "__class__" in key:
        #    #TypeError: argument of type 'VisitableType' is not iterable
        #    my_line.extend([lvl, "class ", itm])
        elif "method-wrapper" in str(type(input_object.get(key))):
            my_line.extend([lvl, "meth-wrap ", itm, " <", str(id(itm)), ">", "[", str(level), "]"])
        else:
            my_line.extend([lvl, "+++ object ", str(type(input_object.get(key))).replace("\n", "").replace("\r", ""), " <", str(id(itm)), ">", "[", str(level), "]", Xray(input_object.get(key), id_list, level + 1, show_private)])
        if private == True and show_private == False:
            pass
        else:
            xray_results = "".join([xray_results, cr, " ".join(my_line)])
    xray_results = "".join([prepend, xray_results])
    return xray_results
def __add(xray_results, new_line):
    a = len(xray_results)
    new_line.append("\r\n")
    xray_results = " ".join(new_line)
    return xray_results


class ObjectA(object):
    def __init__(self):
        pass

    def set_b(self, new_obj):
        self.b = new_obj


class ObjectB(object):
    def __init__(self):
        pass

    def set_a(self, new_obj):
        self.a = new_obj
| 42.369176 | 266 | 0.547416 | 1,439 | 11,821 | 4.236275 | 0.134121 | 0.119094 | 0.059055 | 0.071358 | 0.806102 | 0.78855 | 0.745735 | 0.723917 | 0.712434 | 0.692093 | 0 | 0.006987 | 0.273581 | 11,821 | 278 | 267 | 42.521583 | 0.702807 | 0.193215 | 0 | 0.473684 | 0 | 0 | 0.083333 | 0.003308 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046053 | false | 0.039474 | 0.006579 | 0 | 0.098684 | 0.013158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
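xray.py above walks `dir()` output and dispatches on each attribute's runtime type to build a text report. The same idea can be sketched far more compactly with `getattr` and `type(...).__name__` (the function name, report format, and `Point` class here are my own illustrations, not part of the module's API):

```python
def inspect_object(obj, max_items=50):
    """Return one 'name: type = value' line per public attribute of obj."""
    lines = []
    for name in dir(obj):
        if name.startswith('_'):
            continue  # skip private/dunder attributes, like show_private=False does
        try:
            value = getattr(obj, name)
        except Exception:
            continue  # some descriptors raise on access; skip them like Xray's bare except
        kind = type(value).__name__
        lines.append('%s: %s = %r' % (name, kind, value))
    return '\n'.join(lines[:max_items])

class Point:
    def __init__(self):
        self.x = 1
        self.y = 2.5

print(inspect_object(Point()))
```

Unlike xray.py this sketch does not recurse, so it cannot loop on cyclic references such as the `ObjectA`/`ObjectB` pair in the doctest; the original guards against that with `id_list` and `max_level`.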
811c2ee73ff0b5c2a997113197c7bab1f34dee88 | 14,144 | py | Python | model/train_models.py | keiradams/ChIRo | 2686d3a1801db8fb3dec10b61fb0cb0cec047c7c | [
"MIT"
] | 12 | 2021-10-19T21:57:07.000Z | 2022-03-16T20:32:19.000Z | model/train_models.py | keiradams/ChIRo | 2686d3a1801db8fb3dec10b61fb0cb0cec047c7c | [
"MIT"
] | null | null | null | model/train_models.py | keiradams/ChIRo | 2686d3a1801db8fb3dec10b61fb0cb0cec047c7c | [
"MIT"
] | 4 | 2021-11-05T12:10:30.000Z | 2022-03-09T19:52:31.000Z |
import torch
import torch.nn as nn
import torch_geometric
import datetime
import numpy as np
from tqdm import tqdm
from copy import deepcopy
from .train_functions import classification_loop_alpha, contrastive_loop_alpha, binary_ranking_regression_loop_alpha
def train_binary_ranking_regression_model(model, train_loader, val_loader, N_epochs, optimizers, device, batch_size, absolute_penalty = 0.0, relative_penalty = 1.0, ranking_margin = 0.3, auxillary_torsion_loss = 0.02, weighted_sum = False, save = True, PATH = ''):
    train_epoch_losses = []
    train_epoch_aux_losses = []
    train_epoch_abs_losses = []
    train_epoch_rel_losses = []
    train_epoch_accuracies = []
    val_epoch_losses = []
    val_epoch_aux_losses = []
    val_epoch_abs_losses = []
    val_epoch_rel_losses = []
    val_epoch_accuracies = []
    best_val_acc = 0.0
    best_val_loss = np.inf
    best_epoch = 0
    for epoch in tqdm(range(1, N_epochs+1)):
        train_losses, train_aux_losses, train_batch_sizes, train_abs_losses, train_rel_losses, train_accuracies = binary_ranking_regression_loop_alpha(model, train_loader, optimizers, device, epoch, batch_size, training = True, absolute_penalty = absolute_penalty, relative_penalty = relative_penalty, ranking_margin = ranking_margin, auxillary_torsion_loss = auxillary_torsion_loss)
        if weighted_sum:
            epoch_loss = torch.sum(torch.tensor(train_losses) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes))) #weighted mean based on the batch sizes
            epoch_abs_loss = torch.sum(torch.tensor(train_abs_losses) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes)))
            epoch_rel_loss = torch.sum(torch.tensor(train_rel_losses) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes)))
            epoch_aux_loss = torch.sum(torch.tensor(train_aux_losses) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes)))
            epoch_acc = torch.sum(torch.tensor(train_accuracies) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes)))
        else:
            epoch_loss = torch.mean(torch.tensor(train_losses))
            epoch_abs_loss = torch.mean(torch.tensor(train_abs_losses))
            epoch_rel_loss = torch.mean(torch.tensor(train_rel_losses))
            epoch_aux_loss = torch.mean(torch.tensor(train_aux_losses))
            epoch_acc = torch.mean(torch.tensor(train_accuracies))
        train_epoch_losses.append(epoch_loss)
        train_epoch_abs_losses.append(epoch_abs_loss)
        train_epoch_rel_losses.append(epoch_rel_loss)
        train_epoch_aux_losses.append(epoch_aux_loss)
        train_epoch_accuracies.append(epoch_acc)
        with torch.no_grad():
            val_losses, val_aux_losses, val_batch_sizes, val_abs_losses, val_rel_losses, val_accuracies = binary_ranking_regression_loop_alpha(model, val_loader, optimizers, device, epoch, batch_size, training = False, absolute_penalty = absolute_penalty, relative_penalty = relative_penalty, ranking_margin = ranking_margin, auxillary_torsion_loss = auxillary_torsion_loss)
            if weighted_sum:
                val_epoch_loss = torch.sum(torch.tensor(val_losses) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes))) #weighted mean based on the batch sizes
                val_epoch_abs_loss = torch.sum(torch.tensor(val_abs_losses) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes)))
                val_epoch_rel_loss = torch.sum(torch.tensor(val_rel_losses) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes)))
                val_epoch_aux_loss = torch.sum(torch.tensor(val_aux_losses) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes)))
                val_epoch_acc = torch.sum(torch.tensor(val_accuracies) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes)))
            else:
                val_epoch_loss = torch.mean(torch.tensor(val_losses))
                val_epoch_abs_loss = torch.mean(torch.tensor(val_abs_losses))
                val_epoch_rel_loss = torch.mean(torch.tensor(val_rel_losses))
                val_epoch_aux_loss = torch.mean(torch.tensor(val_aux_losses))
                val_epoch_acc = torch.mean(torch.tensor(val_accuracies))
        val_epoch_losses.append(val_epoch_loss)
        val_epoch_abs_losses.append(val_epoch_abs_loss)
        val_epoch_rel_losses.append(val_epoch_rel_loss)
        val_epoch_aux_losses.append(val_epoch_aux_loss)
        val_epoch_accuracies.append(val_epoch_acc)
        if val_epoch_acc > best_val_acc:
            best_val_acc = val_epoch_acc
            best_epoch = epoch
            best_state_dict = deepcopy(model.state_dict())
            if save == True:
                torch.save(model.state_dict(), PATH + 'best_model.pt')
                print('\n saving best model:' + str(epoch))
            print(' Best Epoch:', epoch, 'Train Loss:', epoch_loss, 'Train Acc.:', epoch_acc, 'Validation Loss:', val_epoch_loss, 'Validation Aux. Loss:', val_epoch_aux_loss, 'Validation Acc.:', val_epoch_acc)
            print(' Train Losses (abs., rel.):', (epoch_abs_loss, epoch_rel_loss), 'Validation Losses (abs., rel.):', (val_epoch_abs_loss, val_epoch_rel_loss))
        if epoch % 5 == 0:
            print('Epoch:', epoch, 'Train Loss:', epoch_loss, 'Train Acc.:', epoch_acc, 'Validation Loss:', val_epoch_loss, 'Validation Aux. Loss:', val_epoch_aux_loss, 'Validation Acc.:', val_epoch_acc)
            print(' Train Losses (abs., rel.):', (epoch_abs_loss, epoch_rel_loss), 'Validation Losses (abs., rel.):', (val_epoch_abs_loss, val_epoch_rel_loss))
        if (save == True) and (epoch % 5 == 0):
            torch.save(model.state_dict(), PATH + 'checkpoint_models/' + 'checkpoint_model_' + str(epoch) + '.pt')
            torch.save(train_epoch_losses, PATH + 'train_epoch_losses.pt')
            torch.save(train_epoch_abs_losses, PATH + 'train_epoch_abs_losses.pt')
            torch.save(train_epoch_rel_losses, PATH + 'train_epoch_rel_losses.pt')
            torch.save(val_epoch_losses, PATH + 'val_epoch_losses.pt')
            torch.save(val_epoch_abs_losses, PATH + 'val_epoch_abs_losses.pt')
            torch.save(val_epoch_rel_losses, PATH + 'val_epoch_rel_losses.pt')
            torch.save(train_epoch_aux_losses, PATH + 'train_epoch_aux_losses.pt')
            torch.save(val_epoch_aux_losses, PATH + 'val_epoch_aux_losses.pt')
            torch.save(train_epoch_accuracies, PATH + 'train_epoch_accuracies.pt')
            torch.save(val_epoch_accuracies, PATH + 'val_epoch_accuracies.pt')
    return best_state_dict
def train_classification_model(model, train_loader, val_loader, N_epochs, optimizers, device, batch_size, auxillary_torsion_loss = 0.02, weighted_sum = False, save = True, PATH = ''):
    train_epoch_losses = []
    train_epoch_aux_losses = []
    train_epoch_accuracy = []
    val_epoch_losses = []
    val_epoch_aux_losses = []
    val_epoch_accuracy = []
    best_val_accuracy = 0.0
    best_epoch = 0
    for epoch in tqdm(range(1, N_epochs+1)):
        train_losses, train_aux_losses, train_batch_sizes, train_batch_accuracy = classification_loop_alpha(model, train_loader, optimizers, device, epoch, batch_size, training = True, auxillary_torsion_loss = auxillary_torsion_loss)
        if weighted_sum:
            epoch_loss = torch.sum(torch.tensor(train_losses) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes))) #weighted mean based on the batch sizes
            train_accuracy = torch.sum(torch.tensor(train_batch_accuracy) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes)))
            epoch_aux_loss = torch.sum(torch.tensor(train_aux_losses) * torch.tensor(train_batch_sizes)) / (torch.sum(torch.tensor(train_batch_sizes)))
        else:
            epoch_loss = torch.mean(torch.tensor(train_losses))
            epoch_aux_loss = torch.mean(torch.tensor(train_aux_losses))
            train_accuracy = torch.mean(torch.tensor(train_batch_accuracy))
        train_epoch_losses.append(epoch_loss)
        train_epoch_aux_losses.append(epoch_aux_loss)
        train_epoch_accuracy.append(train_accuracy)
        with torch.no_grad():
            val_losses, val_aux_losses, val_batch_sizes, val_batch_accuracy = classification_loop_alpha(model, val_loader, optimizers, device, epoch, batch_size, training = False, auxillary_torsion_loss = auxillary_torsion_loss)
            if weighted_sum:
                val_epoch_loss = torch.sum(torch.tensor(val_losses) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes))) #weighted mean based on the batch sizes
                val_accuracy = torch.sum(torch.tensor(val_batch_accuracy) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes)))
                val_epoch_aux_loss = torch.sum(torch.tensor(val_aux_losses) * torch.tensor(val_batch_sizes)) / (torch.sum(torch.tensor(val_batch_sizes)))
            else:
                val_epoch_loss = torch.mean(torch.tensor(val_losses))
                val_epoch_aux_loss = torch.mean(torch.tensor(val_aux_losses))
                val_accuracy = torch.mean(torch.tensor(val_batch_accuracy))
        val_epoch_losses.append(val_epoch_loss)
        val_epoch_aux_losses.append(val_epoch_aux_loss)
        val_epoch_accuracy.append(val_accuracy)
        if val_accuracy > best_val_accuracy:
            best_val_accuracy = val_accuracy
            best_epoch = epoch
            best_state_dict = deepcopy(model.state_dict())
            if save == True:
                torch.save(model.state_dict(), PATH + 'best_model.pt')
                print('\n saving best model:' + str(epoch))
            print(' Best Epoch:', epoch, 'Train Loss:', epoch_loss, 'Validation Loss:', val_epoch_loss, 'Validation Acc.', val_accuracy, 'Validation Aux. Loss', val_epoch_aux_loss)
        if epoch % 1 == 0:
            print('Epoch:', epoch, 'Train Loss:', epoch_loss, 'Validation Loss:', val_epoch_loss)
            print(' Epoch:', epoch, 'Train Loss:', epoch_loss, 'Validation Loss:', val_epoch_loss, 'Validation Acc.', val_accuracy, 'Validation Aux. Loss', val_epoch_aux_loss)
        if (save == True) and (epoch % 5 == 0):
            torch.save(model.state_dict(), PATH + 'checkpoint_models/' + 'checkpoint_model_' + str(epoch) + '.pt')
            torch.save(train_epoch_losses, PATH + 'train_epoch_losses.pt')
            torch.save(val_epoch_losses, PATH + 'val_epoch_losses.pt')
            torch.save(train_epoch_aux_losses, PATH + 'train_epoch_aux_losses.pt')
            torch.save(val_epoch_aux_losses, PATH + 'val_epoch_aux_losses.pt')
    return best_state_dict
def train_contrastive_model(model, train_loader, val_loader, N_epochs, optimizers, device, loss_function, batch_size, margin, contrastive_vector, auxillary_torsion_loss = 0.02, save = True, PATH = ''):
    train_epoch_losses = []
    train_epoch_aux_losses = []
    val_epoch_losses = []
    val_epoch_aux_losses = []
    best_val_loss = np.inf
    best_epoch = 0
    for epoch in tqdm(range(1, N_epochs+1)):
        train_losses, train_aux_losses = contrastive_loop_alpha(model, train_loader, optimizers, device, epoch, loss_function, batch_size, margin, training = True, contrastive_vector = contrastive_vector, auxillary_torsion_loss = auxillary_torsion_loss)
        epoch_loss = torch.mean(torch.tensor(train_losses))
        epoch_aux_loss = torch.mean(torch.tensor(train_aux_losses))
        train_epoch_losses.append(epoch_loss)
        train_epoch_aux_losses.append(epoch_aux_loss)
        with torch.no_grad():
            # compute validation losses on the validation loader
            val_losses, val_aux_losses = contrastive_loop_alpha(model, val_loader, optimizers, device, epoch, loss_function, batch_size, margin, training = False, contrastive_vector = contrastive_vector, auxillary_torsion_loss = auxillary_torsion_loss)
            val_epoch_loss = torch.mean(torch.tensor(val_losses))
            val_epoch_aux_loss = torch.mean(torch.tensor(val_aux_losses))
        val_epoch_losses.append(val_epoch_loss)
        val_epoch_aux_losses.append(val_epoch_aux_loss)
        if val_epoch_loss < best_val_loss:
            best_val_loss = val_epoch_loss
            best_epoch = epoch
            best_state_dict = deepcopy(model.state_dict())
            if save == True:
                torch.save(model.state_dict(), PATH + 'best_model.pt')
                print('\n saving best model:' + str(epoch))
            print(' Best Epoch:', epoch, 'Train Loss:', epoch_loss, 'Validation Loss:', val_epoch_loss, 'Validation Aux. Loss', val_epoch_aux_loss)
        if epoch % 1 == 0:
            print('Epoch:', epoch, 'Train Loss:', epoch_loss, 'Validation Loss:', val_epoch_loss)
            print(' Epoch:', epoch, 'Train Loss:', epoch_loss, 'Validation Loss:', val_epoch_loss, 'Validation Aux. Loss', val_epoch_aux_loss)
        if (save == True) and (epoch % 5 == 0):
            torch.save(model.state_dict(), PATH + 'checkpoint_models/' + 'checkpoint_model_' + str(epoch) + '.pt')
            torch.save(train_epoch_losses, PATH + 'train_epoch_losses.pt')
            torch.save(val_epoch_losses, PATH + 'val_epoch_losses.pt')
            torch.save(train_epoch_aux_losses, PATH + 'train_epoch_aux_losses.pt')
            torch.save(val_epoch_aux_losses, PATH + 'val_epoch_aux_losses.pt')
    return best_state_dict
| 63.711712 | 383 | 0.670532 | 1,841 | 14,144 | 4.75774 | 0.0478 | 0.079461 | 0.062108 | 0.069414 | 0.890855 | 0.849983 | 0.811166 | 0.764357 | 0.758991 | 0.742436 | 0 | 0.00367 | 0.229355 | 14,144 | 221 | 384 | 64 | 0.799908 | 0.010747 | 0 | 0.6 | 0 | 0 | 0.090649 | 0.025093 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017143 | false | 0 | 0.045714 | 0 | 0.08 | 0.074286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
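All three training loops above reduce per-batch losses to an epoch loss, optionally weighting each batch by its size so a small final batch does not skew the mean. The reduction can be sketched without torch (plain floats, same arithmetic; function name is my own):

```python
def epoch_loss(batch_losses, batch_sizes, weighted_sum=False):
    """Unweighted mean of batch losses, or a mean weighted by batch size."""
    if weighted_sum:
        # Each batch contributes in proportion to the number of samples it holds.
        total = sum(l * n for l, n in zip(batch_losses, batch_sizes))
        return total / sum(batch_sizes)
    return sum(batch_losses) / len(batch_losses)

# Two batches of sizes 32 and 8: the small batch counts less under weighting.
print(epoch_loss([1.0, 2.0], [32, 8], weighted_sum=True))   # -> 1.2
print(epoch_loss([1.0, 2.0], [32, 8], weighted_sum=False))  # -> 1.5
```

The weighted form equals the mean over all individual samples, which is usually what "epoch loss" is meant to report when the last batch is smaller than the rest.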
81264769fe02bb428fa5e0b37316775468648183 | 107 | py | Python | by-session/ta-921/j8/variables.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | 2 | 2015-04-29T20:59:35.000Z | 2018-09-26T13:33:43.000Z | by-session/ta-921/j8/variables.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | null | null | null | by-session/ta-921/j8/variables.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | null | null | null |
a = 2
print 'Salam a'
print 'Salam', a
print 'Salam ' + str(a)
x = '3'
y = 4
print int(x) + y
| 9.727273 | 24 | 0.485981 | 20 | 107 | 2.6 | 0.5 | 0.576923 | 0.423077 | 0.615385 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 0.345794 | 107 | 10 | 25 | 10.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0.197917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.571429 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
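variables.py above is a Python 2 lesson on printing and mixing strings with integers; the same conversions in Python 3 syntax (a sketch, not part of the course files):

```python
a = 2
print('Salam', a)          # print() inserts a space between its arguments
print('Salam ' + str(a))   # concatenation needs an explicit str() conversion

x = '3'
y = 4
print(int(x) + y)          # -> 7, after converting the string '3' to an int
```

The key point of the lesson survives the syntax change: `+` between a str and an int raises a TypeError, so one side must be converted with `str()` or `int()` first.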
8137da863d892bad8b0facf6c8eb6ca1024861f9 | 46 | py | Python | hydra/io/__init__.py | JimBoonie/hydra | 63665090812e4e209c67d5dc0b84b5bb35a57ead | [
"MIT"
] | 28 | 2015-12-30T22:38:16.000Z | 2021-03-21T07:52:39.000Z | hydra/io/__init__.py | JimBoonie/hydra | 63665090812e4e209c67d5dc0b84b5bb35a57ead | [
"MIT"
] | 33 | 2020-05-12T01:21:05.000Z | 2021-12-07T16:13:42.000Z | hypertools/io/__init__.py | jeremymanning/hypertools | 1b39b41aaa634e816d73635e0b9b773f1ed6e709 | [
"MIT"
] | 7 | 2017-02-23T09:43:24.000Z | 2022-01-10T12:17:36.000Z | from .load import load
from .save import save
| 15.333333 | 22 | 0.782609 | 8 | 46 | 4.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 46 | 2 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4c19b956c3791217c71c8e7bf4d23762ef119e5 | 19,098 | py | Python | day05/day05_puz2.py | KirstieJane/advent-code-2019 | 3aafb08a99d7fdd989a4a7e7b171bf19ac5aaca1 | [
"MIT"
] | 2 | 2019-12-20T04:59:19.000Z | 2020-11-22T13:40:01.000Z | day05/day05_puz2.py | KirstieJane/advent-code-2019 | 3aafb08a99d7fdd989a4a7e7b171bf19ac5aaca1 | [
"MIT"
] | 6 | 2019-12-02T17:12:36.000Z | 2019-12-27T21:41:39.000Z | day05/day05_puz2.py | KirstieJane/advent-code-2019 | 3aafb08a99d7fdd989a4a7e7b171bf19ac5aaca1 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
def run_opcode(code_list, programme_input=1):
"""Run the opcode as determined by the values in code_list
Before you enter the next loop, check to see if the opcode
(the first number in the sequence) is 99. If it is, then
you can stop and return the code as it stands.
Parameters
----------
code_list : list
The opcode
programme_input : int
The input to the programme, default 1
"""
# Start reading in the programme at position 0
opcode_loc = 0 # Also known as the instruction pointer
opcode = None
output = None
while opcode != '99':
# Get and parse the opcode
code = code_list[opcode_loc]
opcode, parameter_mode_dict = parse_opcode(code)
if opcode == '01':
# Add the appropriate values together if you have an opcode of 1
code_list = apply_opcode1(code_list,
opcode_loc,
parameter_mode_dict)
# Increase the opcode_loc by 4 to keep yourself moving forwards
# through the code
opcode_loc += 4
if opcode == '02':
# Multiply the appropriate values together if you have an opcode
# of 2
code_list = apply_opcode2(code_list,
opcode_loc,
parameter_mode_dict)
# Increase the opcode_loc by 4 to keep yourself moving forwards
# through the code
opcode_loc += 4
if opcode == '03':
# Put the input value in the appropriate location if you have an
# opcode of 3
code_list = apply_opcode3(code_list,
opcode_loc,
programme_input=programme_input)
# Increase the opcode_loc by 2 to keep yourself moving forwards
# through the code
opcode_loc += 2
if opcode == '04':
# Return the output value if you have an opcode of 4
code_list, output = apply_opcode4(code_list,
opcode_loc,
parameter_mode_dict)
# Print the output value to screen
print(f'Output value: {output}')
# Increase the opcode_loc by 2 to keep yourself moving forwards
# through the code
opcode_loc += 2
if opcode == '05':
# Jump if true if you have an opcode of 5
code_list, inc_steps = apply_opcode5(code_list,
opcode_loc,
parameter_mode_dict)
# Increase the opcode_loc by inc_steps to keep yourself moving
# forwards through the code
opcode_loc += inc_steps
if opcode == '06':
# Jump if false if you have an opcode of 6
code_list, inc_steps = apply_opcode6(code_list,
opcode_loc,
parameter_mode_dict)
# Increase the opcode_loc by inc_steps to keep yourself moving
# forwards through the code
opcode_loc += inc_steps
if opcode == '07':
# Assess whether the 1st parameter is less than the 2nd parameter
# if you have an opcode of 7
code_list, inc_steps = apply_opcode7(code_list,
opcode_loc,
parameter_mode_dict)
# Increase the opcode_loc by inc_steps to keep yourself moving
# forwards through the code
opcode_loc += inc_steps
if opcode == '08':
# Assess whether the 1st parameter is equal to the 2nd parameter
# if you have an opcode of 8
code_list, inc_steps = apply_opcode8(code_list,
opcode_loc,
parameter_mode_dict)
# Increase the opcode_loc by inc_steps to keep yourself moving
# forwards through the code
opcode_loc += inc_steps
return code_list, output
def load_computer_data(fname):
"""Read in input file with the computer's opcode as provided.
Parameters
----------
fname : string
File provided by advent of code competition
"""
# Create empty code list
code_list = []
# Read in each line, and split by comma
with open(fname, 'r') as f:
for line in f:
code_list += line.split(',')
# Convert all items to integer
code_list = [int(item) for item in code_list]
return code_list
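The same read-split-convert pattern can be sanity-checked without a file on disk; this is an illustrative sketch (the `line` value is made up, not from the original input):

```python
# Apply the same comma-splitting and int conversion to an in-memory string
# instead of reading from a file handle.
line = '1002,4,3,4,33'
code_list = [int(item) for item in line.split(',')]
# code_list is now [1002, 4, 3, 4, 33]
```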
def parse_opcode(code):
"""Each opcode is up to 5 digits long. The two on the furthest right
contain the instruction, and then the 3 on the left (reading from right
to left) indicate the mode (position or immediate) for each of the
parameters.
This function converts the number to a 0 padded string and then splits up
the 5 digits into the opcode and parameter modes.
Parameters
----------
code : int
instruction as integer that is up to 5 digits long
Returns
-------
opcode : str
two digit string corresponding to an instruction
parameter_mode_dict : dict
dictionary containing the parameter mode for each of the opcode
parameters
"""
code = f'{code:05}'
opcode = code[3:5]
parameter_mode_dict = {1: code[2], 2: code[1], 3: code[0]}
return opcode, parameter_mode_dict
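The zero-padding and slicing above can be checked by hand; a minimal standalone sketch (inlining the same logic rather than calling the function):

```python
# The instruction 1002 zero-pads to '01002': the two right-most digits are
# the opcode ('02', multiply), and reading the remaining digits from right
# to left gives the mode of each parameter (here the 2nd is immediate).
code = f'{1002:05}'
opcode = code[3:5]
parameter_mode_dict = {1: code[2], 2: code[1], 3: code[0]}
# opcode == '02', parameter_mode_dict == {1: '0', 2: '1', 3: '0'}
```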
# Define Python user-defined exceptions
# Adapted from https://www.programiz.com/python-programming/user-defined-exception # noqa
class Error(Exception):
"""Base class for other exceptions"""
pass
class ForbiddenValueError(Error):
"""Raised when the opcode mode is not permitted"""
pass
def apply_opcode1(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 1 - which means to add the
following two numbers (or the values at the position of those two numbers,
depending on the parameter mode) then you can use this function to adjust
code_list.
Parameters
----------
code_list : list
The whole programme
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following 3 values after an opcode of 1
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
"""
opcode, param1, param2, param3 = code_list[opcode_loc:opcode_loc+4]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
if parameter_mode_dict[2] == '0':
param2 = code_list[param2]
# The parameter mode for the 3rd parameter (which is the location that
# the answer will be stored) should never be anything other than 0, so
# we're going to raise an error if it is
if parameter_mode_dict[3] != '0':
print('Something has gone wrong! ' +
'The 3rd parameter should never be anything other than 0')
raise ForbiddenValueError
# Now lets actually do what the opcode says: add param1 and param2 and
# put the value at param3
code_list[param3] = param1 + param2
return code_list
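Position versus immediate mode for the add instruction can be illustrated with the instruction `1101,2,3,0`; this is a self-contained sketch with the mode handling inlined (variable names are illustrative):

```python
# Instruction 1101 pads to '01101': both value parameters are in immediate
# mode, so 2 + 3 is computed directly and stored at position 0.
program = [1101, 2, 3, 0]
modes = f'{program[0]:05}'
_, p1, p2, p3 = program[0:4]
if modes[2] == '0':   # mode of the 1st parameter (position mode)
    p1 = program[p1]
if modes[1] == '0':   # mode of the 2nd parameter (position mode)
    p2 = program[p2]
program[p3] = p1 + p2
# program is now [5, 2, 3, 0]
```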
def apply_opcode2(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 2 - which means to multiply
the following two numbers (or the values at the position of those two
numbers, depending on the parameter mode) then you can use this function to
adjust code_list.
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following 3 values after an opcode of 2
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
"""
opcode, param1, param2, param3 = code_list[opcode_loc:opcode_loc+4]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
if parameter_mode_dict[2] == '0':
param2 = code_list[param2]
# The parameter mode for the 3rd parameter (which is the location that
# the answer will be stored) should never be anything other than 0, so
# we're going to raise an error if it is
if parameter_mode_dict[3] != '0':
print('Something has gone wrong! ' +
'The 3rd parameter should never be anything other than 0')
raise ForbiddenValueError
# Now lets actually do what the opcode says: multiply param1 and param2 and
# put the value at param3
code_list[param3] = param1 * param2
return code_list
def apply_opcode3(code_list, opcode_loc, programme_input=1):
"""When you've determined that the opcode is 3 - which means to take an
input value and store it in the location of its only parameter - then you
can use this function to adjust code_list.
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
programme_input : int
input value, default 1
Returns
-------
code_list : list
The whole programme
"""
opcode, param1 = code_list[opcode_loc:opcode_loc+2]
# Now lets actually do what the opcode says: put the input value at the
# location given by param1
code_list[param1] = programme_input
return code_list
def apply_opcode4(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 4 - which means to return a
value in the location of its only parameter as an output - you can use this
function to adjust code_list.
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following value after an opcode of 4
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
output : int
The value in the location determined by the parameter of the opcode
"""
opcode, param1 = code_list[opcode_loc:opcode_loc+2]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
# Return that value as an output
output = param1
return code_list, output
def apply_opcode5(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 5 - which means to set the
instruction pointer to the value from the second parameter if the first
parameter is non zero - you can use this function to adjust code_list
and return how many steps to increase the instruction pointer
(aka opcode_loc).
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following 2 values after an opcode of 5
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
inc_steps : int
The number of steps to increase the opcode_loc by for the next
instruction
"""
opcode, param1, param2 = code_list[opcode_loc:opcode_loc+3]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
if parameter_mode_dict[2] == '0':
param2 = code_list[param2]
# If param1 is non-zero then set instruction pointer (opcode_loc) to the
# value from param2
if param1:
new_opcode_loc = param2
inc_steps = new_opcode_loc - opcode_loc
else:
inc_steps = 3
return code_list, inc_steps
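The jump arithmetic above can be verified in isolation; a minimal sketch with made-up values (not from the puzzle input):

```python
# If the test value is non-zero, inc_steps is chosen so that
# opcode_loc + inc_steps lands exactly on the jump target; note it can
# be negative, i.e. the instruction pointer may move backwards.
opcode_loc = 10
param1, param2 = 1, 4   # non-zero test value, jump target
if param1:
    inc_steps = param2 - opcode_loc   # -6 here
else:
    inc_steps = 3
opcode_loc += inc_steps
# opcode_loc is now 4, the jump target
```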
def apply_opcode6(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 6 - which means to set the
instruction pointer to the value from the second parameter if the first
parameter is zero - you can use this function to adjust code_list
and return how many steps to increase the instruction pointer
(aka opcode_loc).
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following 2 values after an opcode of 6
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
inc_steps : int
The number of steps to increase the opcode_loc by for the next
instruction
"""
opcode, param1, param2 = code_list[opcode_loc:opcode_loc+3]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
if parameter_mode_dict[2] == '0':
param2 = code_list[param2]
# If param1 is zero then set instruction pointer (opcode_loc) to the
# value from param2
if not param1:
new_opcode_loc = param2
inc_steps = new_opcode_loc - opcode_loc
else:
inc_steps = 3
return code_list, inc_steps
def apply_opcode7(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 7 - which means to set the
instruction pointer to 1 if the value from the first parameter is less than
the value from the second parameter, otherwise set it to 0 - you can use
this function to adjust code_list and return how many steps to increase the
instruction pointer (aka opcode_loc).
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following 3 values after an opcode of 7
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
inc_steps : int
The number of steps to increase the opcode_loc by for the next
instruction
"""
opcode, param1, param2, param3 = code_list[opcode_loc:opcode_loc+4]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
if parameter_mode_dict[2] == '0':
param2 = code_list[param2]
# The parameter mode for the 3rd parameter (which is the location that
# the answer will be stored) should never be anything other than 0, so
# we're going to raise an error if it is
if parameter_mode_dict[3] != '0':
print('Something has gone wrong! ' +
'The 3rd parameter should never be anything other than 0')
raise ForbiddenValueError
# If param1 is less than param2 then set the value at the location given by
# param3 to 1, otherwise set it to 0
if param1 < param2:
code_list[param3] = 1
else:
code_list[param3] = 0
# If you've overwritten the opcode for this instruction (the instruction
# pointer) then don't increase the steps, otherwise increase by 4
if param3 == opcode_loc:
inc_steps = 0
else:
inc_steps = 4
return code_list, inc_steps
def apply_opcode8(code_list, opcode_loc, parameter_mode_dict):
"""When you've determined that the opcode is 8 - which means to set the
instruction pointer to 1 if the value from the first parameter is equal to
the value from the second parameter, otherwise set it to 0 - you can use
this function to adjust code_list and return how many steps to increase the
instruction pointer (aka opcode_loc).
Parameters
----------
code_list : list
The opcode
opcode_loc : int
The index of the opcode in code_list
parameter_mode_dict : dict
A dictionary indicating for the following 3 values after an opcode of 8
whether they should be considered in position (0) or immediate (1)
modes
Returns
-------
code_list : list
The whole programme
inc_steps : int
The number of steps to increase the opcode_loc by for the next
instruction
"""
opcode, param1, param2, param3 = code_list[opcode_loc:opcode_loc+4]
# If the mode is 1 then the parameter should be interpreted as it stands.
# If the mode is 0 then we need to get the value at that location in the
# code list
if parameter_mode_dict[1] == '0':
param1 = code_list[param1]
if parameter_mode_dict[2] == '0':
param2 = code_list[param2]
# The parameter mode for the 3rd parameter (which is the location that
# the answer will be stored) should never be anything other than 0, so
# we're going to raise an error if it is
if parameter_mode_dict[3] != '0':
print('Something has gone wrong! ' +
'The 3rd parameter should never be anything other than 0')
raise ForbiddenValueError
# If param1 is equal to param2 then set the value at the location given by
# param3 to 1, otherwise set it to 0
if param1 == param2:
code_list[param3] = 1
else:
code_list[param3] = 0
# If you've overwritten the opcode for this instruction (the instruction
# pointer) then don't increase the steps, otherwise increase by 4
if param3 == opcode_loc:
inc_steps = 0
else:
inc_steps = 4
return code_list, inc_steps
if __name__ == "__main__":
"""Load in the data, adjust it to the state before the computer caught fire,
then run the opcode and print the value in position 0 to the screen.
"""
code_list = load_computer_data('day05/input.txt')
print('\n---- Day 5, Puzzle 2 ----')
code_list, output = run_opcode(code_list, programme_input=5)
| 33.388112 | 89 | 0.633469 | 2,698 | 19,098 | 4.358784 | 0.097109 | 0.076871 | 0.060714 | 0.036139 | 0.804422 | 0.790986 | 0.771088 | 0.761139 | 0.748469 | 0.723724 | 0 | 0.02157 | 0.308147 | 19,098 | 571 | 90 | 33.446585 | 0.868463 | 0.557283 | 0 | 0.648485 | 0 | 0 | 0.059981 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.012121 | 0 | 0 | 0.145455 | 0.036364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4c4bea218e269f3b05f0cf2db62ab90c94b8300 | 60 | py | Python | reinhardt/models/__init__.py | cceit/reinhardt | 80cacb3c7d81c7128d3567bb42b75c277f05a53f | [
"BSD-3-Clause"
] | 8 | 2016-06-23T14:41:26.000Z | 2018-07-06T17:54:08.000Z | reinhardt/models/__init__.py | cceit/reinhardt | 80cacb3c7d81c7128d3567bb42b75c277f05a53f | [
"BSD-3-Clause"
] | null | null | null | reinhardt/models/__init__.py | cceit/reinhardt | 80cacb3c7d81c7128d3567bb42b75c277f05a53f | [
"BSD-3-Clause"
] | null | null | null | from .models import * # NOQA
from .mixins import * # NOQA
| 20 | 29 | 0.666667 | 8 | 60 | 5 | 0.625 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 60 | 2 | 30 | 30 | 0.869565 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4d839ebad2692b2c592bf555a2d54ccdfab81dc | 145 | py | Python | kafkaSchemaManager/decorator/__init__.py | YendiyarovSV/kafka-avro-producer-topkrabbensteam | d7a318b465ff38897150a4a4db267309793373bc | [
"Apache-2.0"
] | null | null | null | kafkaSchemaManager/decorator/__init__.py | YendiyarovSV/kafka-avro-producer-topkrabbensteam | d7a318b465ff38897150a4a4db267309793373bc | [
"Apache-2.0"
] | null | null | null | kafkaSchemaManager/decorator/__init__.py | YendiyarovSV/kafka-avro-producer-topkrabbensteam | d7a318b465ff38897150a4a4db267309793373bc | [
"Apache-2.0"
] | null | null | null | from .commandCenter import CommandCenter
from .commandCenterEnum import CommandCenterEnum
from .CommandCenterFactory import CommandCenterFactory
| 36.25 | 54 | 0.896552 | 12 | 145 | 10.833333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082759 | 145 | 3 | 55 | 48.333333 | 0.977444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4fc1a0fc9d8b08292fc60579e3d61c1b39209c3 | 1,445 | py | Python | docs/source/tutorials/scripts/composing_multipanel_figures_examples.py | emthanh/svg_utils | 1ebd8e5a8cae067ed3f1d40939997eeed0a2d4fb | [
"MIT"
] | 195 | 2015-01-08T16:57:14.000Z | 2022-03-08T10:08:01.000Z | docs/source/tutorials/scripts/composing_multipanel_figures_examples.py | emthanh/svg_utils | 1ebd8e5a8cae067ed3f1d40939997eeed0a2d4fb | [
"MIT"
] | 61 | 2015-12-16T17:22:11.000Z | 2022-03-07T02:03:30.000Z | docs/source/tutorials/scripts/composing_multipanel_figures_examples.py | emthanh/svg_utils | 1ebd8e5a8cae067ed3f1d40939997eeed0a2d4fb | [
"MIT"
] | 58 | 2015-04-08T17:00:51.000Z | 2022-02-27T20:06:13.000Z | #!/usr/bin/env python3
# coding=utf-8
from svgutils.compose import *
CONFIG["figure.save_path"] = "composing_multipanel_figures"
Figure("16cm", "6.5cm", SVG("sigmoid_fit.svg")).save("ex1.svg")
Figure("16cm", "6.5cm", Text("A", 25, 20), SVG("sigmoid_fit.svg")).save("ex1a.svg")
Figure(
"16cm", "6.5cm", Text("A", 25, 20, size=12, weight="bold"), SVG("sigmoid_fit.svg")
).save("ex1b.svg")
Figure("16cm", "6.5cm", SVG("sigmoid_fit.svg"), SVG("anscombe.svg")).save("ex2.svg")
Figure("16cm", "6.5cm", SVG("sigmoid_fit.svg"), SVG("anscombe.svg")).tile(2, 1).save(
"ex3.svg"
)
Figure("16cm", "6.5cm", SVG("sigmoid_fit.svg"), SVG("anscombe.svg").scale(0.5)).tile(
2, 1
).save("ex3b.svg")
Figure("16cm", "6.5cm", SVG("sigmoid_fit.svg"), SVG("anscombe.svg").move(280, 0)).save(
"ex4.svg"
)
Figure(
"16cm", "6.5cm", SVG("sigmoid_fit.svg"), SVG("anscombe.svg").scale(0.5).move(280, 0)
).save("ex5.svg")
Figure(
"16cm",
"6.5cm",
SVG("sigmoid_fit.svg"),
SVG("anscombe.svg").scale(0.5).move(280, 0),
Grid(20, 20),
).save("ex6.svg")
Figure(
"16cm",
"6.5cm",
Panel(Text("A", 25, 20), SVG("sigmoid_fit.svg")),
Panel(Text("B", 25, 20).move(280, 0), SVG("anscombe.svg").scale(0.5).move(280, 0)),
).save("ex7.svg")
Figure(
"16cm",
"6.5cm",
Panel(Text("A", 25, 20), SVG("sigmoid_fit.svg")),
Panel(Text("B", 25, 20), SVG("anscombe.svg").scale(0.5)).move(280, 0),
).save("ex8.svg")
| 25.350877 | 88 | 0.594464 | 237 | 1,445 | 3.565401 | 0.232068 | 0.130178 | 0.143195 | 0.182249 | 0.753846 | 0.72071 | 0.72071 | 0.72071 | 0.666272 | 0.604734 | 0 | 0.099206 | 0.128028 | 1,445 | 56 | 89 | 25.803571 | 0.571429 | 0.023529 | 0 | 0.333333 | 0 | 0 | 0.350603 | 0.019872 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.025641 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
077cb03408425c0a9f0096b26e9e40cf7c3ad071 | 11,366 | py | Python | tests/test_catboost.py | a-wozniakowski/scikit-physlearn | 3d4530fca1a7c997d4d6fc463fd8082d4ddc0e73 | [
"MIT"
] | 8 | 2020-10-20T08:25:32.000Z | 2022-02-17T10:27:20.000Z | tests/test_catboost.py | tzislam/scikit-physlearn | 1241bbc4e3cedd581a1753b660a4d23d2e4f0ef4 | [
"MIT"
] | 2 | 2021-07-14T16:25:08.000Z | 2021-07-20T03:05:14.000Z | tests/test_catboost.py | tzislam/scikit-physlearn | 1241bbc4e3cedd581a1753b660a4d23d2e4f0ef4 | [
"MIT"
] | 3 | 2020-07-16T04:20:51.000Z | 2021-06-23T21:22:43.000Z | """
Unit tests for CatBoost compatibility.
"""
# Author: Alex Wozniakowski
# License: MIT
import unittest
import pandas as pd
from scipy.stats import randint
from sklearn import __version__ as sk_version
from sklearn.base import clone
from sklearn.datasets import load_boston, load_linnerud
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.model_selection import train_test_split
from sklearn.pipeline import FeatureUnion
from physlearn import Regressor
from physlearn.datasets import load_benchmark
from physlearn.supervised import ShapInterpret
class TestCatBoost(unittest.TestCase):
def test_regressor_gridsearchcv(self):
X, y = load_boston(return_X_y=True)
X, y = pd.DataFrame(X), pd.Series(y)
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params)
search_params = dict(reg__iterations=[3, 5, 10],
tr__with_std=[True, False])
reg.search(X_train, y_train, search_params=search_params)
self.assertLess(reg.best_score_.values, 3.6)
self.assertIn(reg.best_params_['reg__iterations'], [3, 5, 10])
# sklearn < 0.23 does not have as_frame parameter
@unittest.skipIf(sk_version < '0.23.0', 'scikit-learn version is less than 0.23')
def test_multioutput_regressor_gridsearchcv(self):
bunch = load_linnerud(as_frame=True) # returns a Bunch instance
X, y = bunch['data'], bunch['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params)
search_params = dict(reg__iterations=[3, 5, 10],
tr__with_std=[True, False])
reg.search(X_train, y_train, search_params=search_params)
self.assertLess(reg.best_score_.values, 10.0)
self.assertIn(reg.best_params_['reg__estimator__iterations'], [3, 5, 10])
# sklearn < 0.23 does not have as_frame parameter
@unittest.skipIf(sk_version < '0.23.0', 'scikit-learn version is less than 0.23')
def test_multioutput_regressorchain_gridsearchcv(self):
bunch = load_linnerud(as_frame=True) # returns a Bunch instance
X, y = bunch['data'], bunch['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params, chain_order=[2, 0, 1])
search_params = dict(reg__iterations=[3, 5, 10],
tr__with_std=[True, False])
reg.search(X_train, y_train, search_params=search_params)
self.assertLess(reg.best_score_.values, 10.0)
self.assertIn(reg.best_params_['reg__base_estimator__iterations'], [3, 5, 10])
def test_regressor_randomizedsearchcv(self):
X, y = load_boston(return_X_y=True)
X, y = pd.DataFrame(X), pd.Series(y)
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params, randomizedcv_n_iter=6)
search_params = dict(reg__iterations=randint(low=3, high=10),
tr__with_std=[True, False])
reg.search(X_train, y_train, search_params=search_params,
search_method='randomizedsearchcv')
self.assertLess(reg.best_score_.values, 3.6)
self.assertLessEqual(reg.best_params_['reg__iterations'], 10)
self.assertGreaterEqual(reg.best_params_['reg__iterations'], 3)
# sklearn < 0.23 does not have as_frame parameter
@unittest.skipIf(sk_version < '0.23.0', 'scikit-learn version is less than 0.23')
def test_multioutput_regressor_randomizedsearchcv(self):
bunch = load_linnerud(as_frame=True) # returns a Bunch instance
X, y = bunch['data'], bunch['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params, randomizedcv_n_iter=6)
search_params = dict(reg__iterations=randint(low=3, high=10),
tr__with_std=[True, False])
reg.search(X_train, y_train, search_params=search_params,
search_method='randomizedsearchcv')
self.assertLess(reg.best_score_.values, 10.0)
self.assertLessEqual(reg.best_params_['reg__estimator__iterations'], 10)
self.assertGreaterEqual(reg.best_params_['reg__estimator__iterations'], 3)
# sklearn < 0.23 does not have as_frame parameter
@unittest.skipIf(sk_version < '0.23.0', 'scikit-learn version is less than 0.23')
def test_multioutput_regressorchain_randomizedsearchcv(self):
bunch = load_linnerud(as_frame=True) # returns a Bunch instance
X, y = bunch['data'], bunch['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params, randomizedcv_n_iter=6, chain_order=[2, 0, 1])
search_params = dict(reg__iterations=randint(low=3, high=10),
tr__with_std=[True, False])
reg.search(X_train, y_train, search_params=search_params,
search_method='randomizedsearchcv')
self.assertLess(reg.best_score_.values, 10.4)
self.assertLessEqual(reg.best_params_['reg__base_estimator__iterations'], 10)
self.assertGreaterEqual(reg.best_params_['reg__base_estimator__iterations'], 3)
def test_regressor_fit_score(self):
X, y = load_boston(return_X_y=True)
X, y = pd.DataFrame(X), pd.Series(y)
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params)
reg.fit(X_train, y_train)
y_pred = reg.fit(X_train, y_train).predict(X_test)
score = reg.score(y_test, y_pred)
self.assertCountEqual(y_pred.index, y_test.index)
self.assertGreaterEqual(score['mae'].values, 0.0)
self.assertGreaterEqual(score['mse'].values, 0.0)
self.assertLess(score['mae'].values, 2.7)
self.assertLess(score['mse'].values, 18.0)
# sklearn < 0.23 does not have as_frame parameter
@unittest.skipIf(sk_version < '0.23.0', 'scikit-learn version is less than 0.23')
def test_multioutput_regressor_fit_score(self):
bunch = load_linnerud(as_frame=True) # returns a Bunch instance
X, y = bunch['data'], bunch['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params)
y_pred = reg.fit(X_train, y_train).predict(X_test)
score = reg.score(y_test, y_pred).mean()
self.assertCountEqual(y_pred.index, y_test.index)
self.assertGreaterEqual(score['mae'], 0.0)
self.assertGreaterEqual(score['mse'], 0.0)
self.assertLess(score['mae'], 12.0)
self.assertLess(score['mse'], 250.0)
# sklearn < 0.23 does not have as_frame parameter
@unittest.skipIf(sk_version < '0.23.0', 'scikit-learn version is less than 0.23')
def test_multioutput_regressorchain_fit_score(self):
bunch = load_linnerud(as_frame=True) # returns a Bunch instance
X, y = bunch['data'], bunch['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform='standardscaler',
params=params, chain_order=[0, 2, 1])
y_pred = reg.fit(X_train, y_train).predict(X_test)
score = reg.score(y_test, y_pred).mean()
self.assertCountEqual(y_pred.index, y_test.index)
self.assertGreaterEqual(score['mae'], 0.0)
self.assertGreaterEqual(score['mse'], 0.0)
self.assertLess(score['mae'], 11.0)
self.assertLess(score['mse'], 240.0)
def test_pipeline_clone_fit_score(self):
X, y = load_boston(return_X_y=True)
X, y = pd.DataFrame(X), pd.Series(y)
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42)
transformer_list = [('pca', PCA(n_components=1)),
('svd', TruncatedSVD(n_components=2))]
union = FeatureUnion(transformer_list=transformer_list, n_jobs=-1)
params = dict(iterations=10, loss_function='RMSE')
reg = Regressor(regressor_choice='catboostregressor', pipeline_transform=('tr', union),
params=params)
reg.get_pipeline(y=y_train)
_class_before_clone = reg.pipe.__class__
reg.pipe = clone(reg.pipe)
y_pred = reg.fit(X_train, y_train).predict(X_test)
score = reg.score(y_test, y_pred)
self.assertEqual(_class_before_clone, reg.pipe.__class__)
self.assertCountEqual(y_pred.index, y_test.index)
self.assertGreaterEqual(score['mae'].values, 0.0)
self.assertGreaterEqual(score['mse'].values, 0.0)
self.assertLess(score['mae'].values, 11.0)
self.assertLess(score['mse'].values, 232.0)
def test_shap_explainer(self):
X_train, _, y_train, _ = load_benchmark(return_split=True)
index = 3
params = dict(iterations=10, loss_function='RMSE')
interpret = ShapInterpret(regressor_choice='catboostregressor', target_index=index,
params=params)
interpret.fit(X=X_train, y=y_train, index=index)
explainer, shap_values = interpret.explainer(X=X_train)
self.assertEqual(X_train.shape, shap_values.shape)
if __name__ == '__main__':
unittest.main()
# Source: tests/sentry/models/test_monitor.py (pierredup/sentry @ 0145e4b3bc0e775bf3482fe65f5e1a689d0dbb80, BSD-3-Clause)
from __future__ import absolute_import, print_function
import six
from datetime import datetime
from django.utils import timezone
from mock import patch
from sentry.models import Monitor, MonitorFailure, MonitorType, ScheduleType
from sentry.testutils import TestCase
class MonitorTestCase(TestCase):
def test_next_run_crontab_implicit(self):
monitor = Monitor(
last_checkin=datetime(2019, 1, 1, 1, 10, 20, tzinfo=timezone.utc),
config={"schedule": "* * * * *"},
)
assert monitor.get_next_scheduled_checkin() == datetime(
2019, 1, 1, 1, 11, tzinfo=timezone.utc
)
monitor.config["schedule"] = "*/5 * * * *"
assert monitor.get_next_scheduled_checkin() == datetime(
2019, 1, 1, 1, 15, tzinfo=timezone.utc
)
def test_next_run_crontab_explicit(self):
monitor = Monitor(
last_checkin=datetime(2019, 1, 1, 1, 10, 20, tzinfo=timezone.utc),
config={"schedule": "* * * * *", "schedule_type": ScheduleType.CRONTAB},
)
assert monitor.get_next_scheduled_checkin() == datetime(
2019, 1, 1, 1, 11, tzinfo=timezone.utc
)
monitor.config["schedule"] = "*/5 * * * *"
assert monitor.get_next_scheduled_checkin() == datetime(
2019, 1, 1, 1, 15, tzinfo=timezone.utc
)
def test_next_run_interval(self):
monitor = Monitor(
last_checkin=datetime(2019, 1, 1, 1, 10, 20, tzinfo=timezone.utc),
config={"schedule": [1, "month"], "schedule_type": ScheduleType.INTERVAL},
)
assert monitor.get_next_scheduled_checkin() == datetime(
2019, 2, 1, 1, 10, 20, tzinfo=timezone.utc
)
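The interval assertion above (one month after a 2019-01-01 check-in lands on 2019-02-01 at the same time of day) can be sketched with stdlib datetime arithmetic. This is a hypothetical helper illustrating the asserted behaviour, not Sentry's implementation:

```python
from datetime import datetime, timedelta, timezone

def next_interval_checkin(last_checkin, count, unit):
    # Advance last_checkin by `count` units (assumed semantics). Month
    # arithmetic keeps the day-of-month and raises ValueError for days
    # that do not exist in the target month (e.g. Jan 31 + 1 month).
    if unit == 'month':
        month = last_checkin.month - 1 + count
        year = last_checkin.year + month // 12
        return last_checkin.replace(year=year, month=month % 12 + 1)
    # Fixed-length units (days, hours, ...) map directly onto timedelta.
    return last_checkin + count * timedelta(**{unit: 1})
```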
@patch("sentry.coreapi.ClientApiHelper.insert_data_to_database")
def test_mark_failed_default_params(self, mock_insert_data_to_database):
monitor = Monitor.objects.create(
name="test monitor",
organization_id=self.organization.id,
project_id=self.project.id,
type=MonitorType.CRON_JOB,
config={"schedule": [1, "month"], "schedule_type": ScheduleType.INTERVAL},
)
assert monitor.mark_failed()
assert len(mock_insert_data_to_database.mock_calls) == 1
event = mock_insert_data_to_database.mock_calls[0].args[0]
assert dict(
event,
**{
"level": "error",
"project": self.project.id,
"platform": "other",
"contexts": {
"monitor": {
"status": "active",
"type": "cron_job",
"config": {"schedule_type": 2, "schedule": [1, u"month"]},
"id": six.text_type(monitor.guid),
"name": monitor.name,
}
},
"logentry": {"formatted": "Monitor failure: test monitor (unknown)"},
"fingerprint": ["monitor", six.text_type(monitor.guid), u"unknown"],
"logger": "",
"type": "default",
}
) == dict(event)
@patch("sentry.coreapi.ClientApiHelper.insert_data_to_database")
def test_mark_failed_with_reason(self, mock_insert_data_to_database):
monitor = Monitor.objects.create(
name="test monitor",
organization_id=self.organization.id,
project_id=self.project.id,
type=MonitorType.CRON_JOB,
config={"schedule": [1, "month"], "schedule_type": ScheduleType.INTERVAL},
)
assert monitor.mark_failed(reason=MonitorFailure.DURATION)
assert len(mock_insert_data_to_database.mock_calls) == 1
event = mock_insert_data_to_database.mock_calls[0].args[0]
assert dict(
event,
**{
"level": "error",
"project": self.project.id,
"platform": "other",
"contexts": {
"monitor": {
"status": "active",
"type": "cron_job",
"config": {"schedule_type": 2, "schedule": [1, u"month"]},
"id": six.text_type(monitor.guid),
"name": monitor.name,
}
},
"logentry": {"formatted": "Monitor failure: test monitor (duration)"},
"fingerprint": ["monitor", six.text_type(monitor.guid), u"duration"],
"logger": "",
"type": "default",
}
) == dict(event)
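Both `mark_failed` tests rely on the same dict idiom: `dict(event, **expected) == dict(event)` holds exactly when merging `expected` into a copy of `event` changes nothing, i.e. every expected key is already present with the same value. A small sketch of that subset check:

```python
def is_subset(expected, actual):
    # Merging `expected` into a copy of `actual` is a no-op precisely
    # when every expected key already appears in `actual` with the same
    # value, so the event may carry extra keys without failing the check.
    return dict(actual, **expected) == dict(actual)
```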
# Source: test/unit/test_class_snapshot_list.py (lgdop/elastic-curator @ fe186c24a28eecf2c8284ddc7c43b7cd94b65847, Apache-2.0)
from unittest import TestCase
from mock import Mock, patch
import elasticsearch
import yaml
import curator
# Get test variables and constants from a single source
from . import testvars as testvars
class TestSnapshotListClientAndInit(TestCase):
def test_init_bad_client(self):
client = 'not a real client'
self.assertRaises(TypeError, curator.SnapshotList, client)
def test_init_no_repo_exception(self):
client = Mock()
self.assertRaises(curator.MissingArgument, curator.SnapshotList, client)
def test_init_get_snapshots_exception(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get.side_effect = testvars.fake_fail
client.snapshot.get_repository.return_value = {}
self.assertRaises(
curator.FailedExecution,
curator.SnapshotList, client, repository=testvars.repo_name
)
def test_init(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(testvars.snapshots['snapshots'],sl.all_snapshots)
self.assertEqual(
['snap_name','snapshot-2015.03.01'], sorted(sl.snapshots)
)
class TestSnapshotListOtherMethods(TestCase):
def test_empty_list(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(2, len(sl.snapshots))
sl.snapshots = []
self.assertRaises(curator.NoSnapshots, sl.empty_list_check)
def test_working_list(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(['snap_name', 'snapshot-2015.03.01'], sl.working_list())
class TestSnapshotListAgeFilterName(TestCase):
def test_get_name_based_ages_match(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl._get_name_based_ages('%Y.%m.%d')
self.assertEqual(1425168000,
sl.snapshot_info['snapshot-2015.03.01']['age_by_name']
)
def test_get_name_based_ages_no_match(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl._get_name_based_ages('%Y.%m.%d')
self.assertIsNone(sl.snapshot_info['snap_name']['age_by_name'])
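The two `_get_name_based_ages` tests above assert that a snapshot name containing a `%Y.%m.%d` date maps to the UTC epoch 1425168000 (2015-03-01 00:00:00), while a name without one yields `None`. A minimal sketch of that behaviour, assuming the date is the only part of the name matched:

```python
import re
from datetime import datetime, timezone

def name_based_age(snapshot_name, timestring='%Y.%m.%d'):
    # Extract the date portion of a snapshot name and convert it to epoch
    # seconds (assumed semantics); names with no matching date give None.
    match = re.search(r'\d{4}\.\d{2}\.\d{2}', snapshot_name)
    if not match:
        return None
    dt = datetime.strptime(match.group(), timestring).replace(tzinfo=timezone.utc)
    return int(dt.timestamp())
```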
class TestSnapshotListStateFilter(TestCase):
def test_success_inclusive(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_state(state='SUCCESS')
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
def test_success_exclusive(self):
client = Mock()
client.snapshot.get.return_value = testvars.inprogress
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_state(state='SUCCESS', exclude=True)
self.assertEqual([u'snapshot-2015.03.01'], sorted(sl.snapshots))
def test_invalid_state(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(ValueError, sl.filter_by_state, state='invalid')
class TestSnapshotListRegexFilters(TestCase):
def test_filter_by_regex_prefix(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
sl.filter_by_regex(kind='prefix', value='sna')
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
sl.filter_by_regex(kind='prefix', value='sna', exclude=True)
self.assertEqual([], sl.snapshots)
def test_filter_by_regex_prefix_exclude(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
sl.filter_by_regex(kind='prefix', value='snap_', exclude=True)
self.assertEqual([u'snapshot-2015.03.01'], sl.snapshots)
def test_filter_by_regex_timestring(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
sl.filter_by_regex(kind='timestring', value='%Y.%m.%d')
self.assertEqual(
[u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
sl.filter_by_regex(kind='timestring', value='%Y.%m.%d', exclude=True)
self.assertEqual([], sl.snapshots)
def test_filter_by_regex_no_value(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
self.assertRaises(ValueError, sl.filter_by_regex, kind='prefix', value=None)
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
sl.filter_by_regex(kind='prefix', value=0)
self.assertEqual([], sl.snapshots)
def test_filter_by_regex_bad_kind(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertEqual(
[u'snap_name', u'snapshot-2015.03.01'],
sorted(sl.snapshots)
)
self.assertRaises(
ValueError, sl.filter_by_regex, kind='invalid', value=None)
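The regex-filter tests above exercise three behaviours: `prefix` anchors a literal value at the start of the name, `timestring` turns a strftime pattern into a date regex, and invalid kinds or a `None` value raise `ValueError`. A sketch of those semantics (assumed, not curator's actual implementation):

```python
import re

def filter_by_regex(names, kind, value, exclude=False):
    # Keep names matching the derived pattern; with exclude=True keep the
    # complement instead (assumed semantics mirroring the tests above).
    if kind == 'prefix':
        if value is None:
            raise ValueError('value must not be None')
        pattern = r'^' + re.escape(str(value))
    elif kind == 'timestring':
        pattern = (value.replace('%Y', r'\d{4}')
                        .replace('%m', r'\d{2}')
                        .replace('%d', r'\d{2}'))
    else:
        raise ValueError('invalid kind: %s' % kind)
    matched = [n for n in names if re.search(pattern, n)]
    return [n for n in names if n not in matched] if exclude else matched
```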
class TestSnapshotListFilterByAge(TestCase):
def test_filter_by_age_missing_direction(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(curator.MissingArgument,
sl.filter_by_age, unit='days', unit_count=1
)
def test_filter_by_age_bad_direction(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(ValueError, sl.filter_by_age, unit='days',
unit_count=1, direction="invalid"
)
def test_filter_by_age_invalid_source(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(ValueError, sl.filter_by_age, unit='days',
source='invalid', unit_count=1, direction="older"
)
def test_filter_by_age__name_no_timestring(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(curator.MissingArgument,
sl.filter_by_age,
source='name', unit='days', unit_count=1, direction='older'
)
def test_filter_by_age__name_older_than_now(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(source='name', direction='older',
timestring='%Y.%m.%d', unit='days', unit_count=1
)
self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
def test_filter_by_age__name_younger_than_now(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(source='name', direction='younger',
timestring='%Y.%m.%d', unit='days', unit_count=1
)
self.assertEqual([], sl.snapshots)
def test_filter_by_age__name_younger_than_past_date(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(source='name', direction='younger',
timestring='%Y.%m.%d', unit='seconds', unit_count=0,
epoch=1422748800
)
self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
def test_filter_by_age__name_older_than_past_date(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(source='name', direction='older',
timestring='%Y.%m.%d', unit='seconds', unit_count=0,
epoch=1456963200
)
self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
def test_filter_by_age__creation_date_older_than_now(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(direction='older', unit='days', unit_count=1)
self.assertEqual(
['snap_name', 'snapshot-2015.03.01'], sorted(sl.snapshots))
def test_filter_by_age__creation_date_younger_than_now(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(direction='younger',
timestring='%Y.%m.%d', unit='days', unit_count=1
)
self.assertEqual([], sl.snapshots)
def test_filter_by_age__creation_date_younger_than_past_date(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(direction='younger',
timestring='%Y.%m.%d', unit='seconds', unit_count=0,
epoch=1422748801
)
self.assertEqual(['snapshot-2015.03.01'], sl.snapshots)
def test_filter_by_age__creation_date_older_than_past_date(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_by_age(direction='older',
timestring='%Y.%m.%d', unit='seconds', unit_count=0,
epoch=1425168001
)
self.assertEqual(['snap_name'], sl.snapshots)
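The age-filter tests above compare each snapshot's age timestamp against a cutoff epoch: `older` keeps timestamps before the cutoff, `younger` keeps those on or after it, and snapshots with no usable timestamp drop out. A sketch of that comparison under those assumed semantics:

```python
def filter_by_age(info, direction, cutoff_epoch):
    # info maps snapshot name -> epoch seconds (or None when the age
    # could not be derived, e.g. a name without a timestring).
    if direction not in ('older', 'younger'):
        raise ValueError('direction must be older or younger')
    if direction == 'older':
        return [name for name, ts in info.items()
                if ts is not None and ts < cutoff_epoch]
    return [name for name, ts in info.items()
            if ts is not None and ts >= cutoff_epoch]
```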
class TestIterateFiltersSnaps(TestCase):
def test_no_filters(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
slo.iterate_filters({})
self.assertEqual(
['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots)
)
def test_no_filtertype(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
config = {'filters': [{'no_filtertype':'fail'}]}
self.assertRaises(
curator.ConfigurationError, slo.iterate_filters, config)
def test_invalid_filtertype_class(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
config = {'filters': [{'filtertype':12345.6789}]}
self.assertRaises(
curator.ConfigurationError, slo.iterate_filters, config)
def test_invalid_filtertype(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
config = yaml.safe_load(testvars.invalid_ft)['actions'][1]
self.assertRaises(
curator.ConfigurationError,
slo.iterate_filters, config
)
def test_age_filtertype(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
config = yaml.safe_load(testvars.snap_age_ft)['actions'][1]
slo.iterate_filters(config)
self.assertEqual(
['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots))
def test_pattern_filtertype(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
config = yaml.safe_load(testvars.snap_pattern_ft)['actions'][1]
slo.iterate_filters(config)
self.assertEqual(
['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots))
def test_none_filtertype(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
config = yaml.safe_load(testvars.snap_none_ft)['actions'][1]
slo.iterate_filters(config)
self.assertEqual(
['snap_name', 'snapshot-2015.03.01'], sorted(slo.snapshots))
class TestSnapshotListFilterCount(TestCase):
def test_missing_count(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(curator.MissingArgument, slo.filter_by_count)
def test_without_age(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
slo.filter_by_count(count=1)
self.assertEqual(['snap_name'], slo.snapshots)
def test_without_age_reversed(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
slo.filter_by_count(count=1, reverse=False)
self.assertEqual(['snapshot-2015.03.01'], slo.snapshots)
def test_with_age(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
slo.filter_by_count(
count=1, source='creation_date', use_age=True
)
self.assertEqual(['snap_name'], slo.snapshots)
def test_with_age_reversed(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
slo.filter_by_count(
count=1, source='creation_date', use_age=True, reverse=False
)
self.assertEqual(['snapshot-2015.03.01'], slo.snapshots)
def test_sort_by_age(self):
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
slo = curator.SnapshotList(client, repository=testvars.repo_name)
slo._calculate_ages()
slo.age_keyfield = 'invalid'
snaps = slo.snapshots
slo._sort_by_age(snaps)
self.assertEqual(['snapshot-2015.03.01'], slo.snapshots)
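The count-filter tests above show retention-by-count behaviour: keep the `count` most recent snapshots out of the candidate list and leave the rest as the actionable remainder, with `reverse` flipping the ordering. A sketch of those assumed semantics, using name ordering as a stand-in for the age ordering curator derives:

```python
def filter_by_count(names, count, reverse=True):
    # Sort the candidates (newest-first when reverse=True for these
    # date-stamped names), then drop the first `count` entries so only
    # the excess remains in the actionable list (assumed semantics).
    ordered = sorted(names, reverse=reverse)
    return ordered[count:]
```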
class TestSnapshotListPeriodFilter(TestCase):
def test_bad_args(self):
unit = 'days'
range_from = -1
range_to = -2
timestring = '%Y.%m.%d'
epoch = 1456963201
expected = curator.FailedExecution
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(expected, sl.filter_period, unit=unit,
range_from=range_from, range_to=range_to, source='name',
timestring=timestring, epoch=epoch
)
def test_in_range(self):
unit = 'days'
range_from = -2
range_to = 2
timestring = '%Y.%m.%d'
epoch = 1425168000
expected = ['snapshot-2015.03.01']
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_period(source='name', range_from=range_from, epoch=epoch,
range_to=range_to, timestring='%Y.%m.%d', unit=unit,
)
self.assertEqual(expected, sl.snapshots)
def test_not_in_range(self):
unit = 'days'
range_from = 2
range_to = 4
timestring = '%Y.%m.%d'
epoch = 1425168000
expected = []
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.filter_period(source='name', range_from=range_from, epoch=epoch,
range_to=range_to, timestring='%Y.%m.%d', unit=unit,
)
self.assertEqual(expected, sl.snapshots)
def test_no_creation_date(self):
unit = 'days'
range_from = -2
range_to = 2
epoch = 1456963201
expected = []
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
sl.snapshot_info['snap_name']['start_time_in_millis'] = None
sl.snapshot_info['snapshot-2015.03.01']['start_time_in_millis'] = None
sl.filter_period(source='creation_date', range_from=range_from,
epoch=epoch, range_to=range_to, unit=unit,
)
self.assertEqual(expected, sl.snapshots)
def test_invalid_period_type(self):
unit = 'days'
range_from = -1
range_to = -2
timestring = '%Y.%m.%d'
epoch = 1456963201
expected = ValueError
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(expected, sl.filter_period, unit=unit, period_type='invalid',
range_from=range_from, range_to=range_to, source='name',
timestring=timestring, epoch=epoch
)
def test_invalid_range_from(self):
unit = 'days'
range_from = -1
range_to = 'invalid'
timestring = '%Y.%m.%d'
epoch = 1456963201
expected = curator.ConfigurationError
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(expected, sl.filter_period, unit=unit, period_type='relative',
range_from=range_from, range_to=range_to, source='name',
timestring=timestring, epoch=epoch
)
def test_missing_absolute_date_values(self):
unit = 'days'
range_from = -1
range_to = 'invalid'
timestring = '%Y.%m.%d'
epoch = 1456963201
expected = curator.ConfigurationError
client = Mock()
client.snapshot.get.return_value = testvars.snapshots
client.snapshot.get_repository.return_value = testvars.test_repo
sl = curator.SnapshotList(client, repository=testvars.repo_name)
self.assertRaises(expected, sl.filter_period, unit=unit, period_type='absolute',
range_from=range_from, range_to=range_to, source='name',
timestring=timestring, epoch=epoch
)
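The relative period tests above define a window of whole units around a reference epoch: `range_from`/`range_to` are offsets from that point in time, the window is inclusive of the final unit, and an inverted range is an error. A sketch of that window arithmetic under those assumed semantics (days only):

```python
from datetime import datetime, timedelta, timezone

def relative_period(unit, range_from, range_to, epoch):
    # Return the (start, end) datetimes of the window; end is exclusive
    # so the whole final unit is covered (assumed semantics).
    if range_from > range_to:
        raise ValueError('range_from must not exceed range_to')
    origin = datetime.fromtimestamp(epoch, tz=timezone.utc)
    delta = {'days': timedelta(days=1)}[unit]  # only 'days' sketched here
    return origin + range_from * delta, origin + (range_to + 1) * delta
```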
# Source: tests/unit/cli/commands/test_deployment_location.py (rajahaidar/lmctl @ 48984047d3656eca51a382bdfb936304cf48d5aa, Apache-2.0)
import tests.unit.cli.commands.command_testing as command_testing
import lmctl.drivers.lm.base as lm_drivers
import lmctl.cli.commands.deployment_location as deployment_cmds
import tempfile
import shutil
import os
import json
import yaml
from unittest.mock import patch
from tests.common.simulations.lm_simulator import LmSimulator
class TestDeploymentLocationCommands(command_testing.CommandTestCase):
def setUp(self):
super().setUp()
# Create a simulated LM session when requested
self.lm_sim = LmSimulator().start()
create_lm_session_patcher = patch('lmctl.cli.ctlmgmt.create_lm_session')
self.mock_create_lm_session = create_lm_session_patcher.start()
self.mock_create_lm_session.return_value = self.lm_sim.as_mocked_session()
self.addCleanup(create_lm_session_patcher.stop)
self.lm_sim.add_rm({'name': 'rm123'})
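The setUp above follows the standard patch-and-restore pattern: start a patcher on the session factory, substitute a mock return value, and register `patcher.stop` with `addCleanup` so the real attribute is always restored, even if the test fails. A self-contained sketch of that wiring (the default target string mirrors the tests above and is only illustrative):

```python
from unittest.mock import MagicMock, patch

def start_patch(testcase, target='lmctl.cli.ctlmgmt.create_lm_session'):
    # Swap `target` for a MagicMock for the duration of the test; the
    # undo is registered with addCleanup so it runs on pass or fail.
    patcher = patch(target)
    mock_factory = patcher.start()
    mock_factory.return_value = MagicMock()
    testcase.addCleanup(patcher.stop)
    return mock_factory
```

Any object exposing `addCleanup` (normally the `TestCase` itself) can own the restore step.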
def test_add_with_defaults(self):
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123'])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.add_location.assert_called_once_with({'name': 'testdl', 'description': None, 'resourceManager': 'rm123', 'infrastructureType': None, 'infrastructureSpecificProperties': {}})
def test_add_with_params(self):
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '-i', 'Openstack', '-d', 'test location'])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | Openstack | test location |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.add_location.assert_called_once_with({'name': 'testdl', 'description': 'test location', 'resourceManager': 'rm123', 'infrastructureType': 'Openstack', 'infrastructureSpecificProperties': {}})
def test_add_with_json_properties(self):
tmp_dir = tempfile.mkdtemp()
try:
properties_dict = {
'propA': 'valueA'
}
properties_file = os.path.join(tmp_dir, 'props.json')
with open(properties_file, 'w') as f:
json.dump(properties_dict, f)
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '-p', properties_file])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.add_location.assert_called_once_with({'name': 'testdl', 'description': None, 'resourceManager': 'rm123', 'infrastructureType': None, 'infrastructureSpecificProperties': properties_dict})
finally:
if os.path.exists(tmp_dir):
shutil.rmtree(tmp_dir)
def test_add_with_yaml_properties(self):
tmp_dir = tempfile.mkdtemp()
try:
properties_dict = {
'propA': 'valueA'
}
properties_file = os.path.join(tmp_dir, 'props.yaml')
with open(properties_file, 'w') as f:
yaml.dump(properties_dict, f)
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '-p', properties_file])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.add_location.assert_called_once_with({'name': 'testdl', 'description': None, 'resourceManager': 'rm123', 'infrastructureType': None, 'infrastructureSpecificProperties': properties_dict})
finally:
if os.path.exists(tmp_dir):
shutil.rmtree(tmp_dir)
def test_add_with_config(self):
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_add_with_pwd(self):
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '--pwd', 'secret'])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_add_with_output_json_format(self):
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '-f', 'json'])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = '{'
expected_output += '\n \"name\": \"testdl\",'
expected_output += '\n \"description\": null,'
expected_output += '\n \"resourceManager\": \"rm123\",'
expected_output += '\n \"infrastructureType\": null,'
expected_output += '\n \"infrastructureSpecificProperties\": {},'
expected_output += '\n \"id\": \"{0}\"'.format(expected_id)
expected_output += '\n}'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_add_with_output_yaml_format(self):
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123', '-f', 'yaml'])
self.assert_no_errors(result)
expected_id = None
for dl_id, dl in self.lm_sim.deployment_locations.items():
expected_id = dl_id
expected_output = 'name: testdl'
expected_output += '\ndescription: null'
expected_output += '\nresourceManager: rm123'
expected_output += '\ninfrastructureType: null'
expected_output += '\ninfrastructureSpecificProperties: {}'
expected_output += '\nid: {0}\n'.format(expected_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_add_handles_lm_driver_error(self):
self.mock_create_lm_session.return_value.deployment_location_driver.add_location.side_effect = lm_drivers.LmDriverException('Mocked error')
result = self.runner.invoke(deployment_cmds.add, ['TestEnv', 'testdl', '--rm', 'rm123'])
self.assert_has_system_exit(result)
expected_output = 'LM error occurred: Mocked error'
self.assert_output(result, expected_output)
def test_delete_with_defaults(self):
dl_id = '123'
dl_name = 'abc'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.delete, ['TestEnv', dl_name])
self.assert_no_errors(result)
expected_output = 'Deleting deployment location: {0}...'.format(dl_id)
expected_output += '\nDeleted deployment location: {0}'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.get_locations_by_name.assert_called_once_with(dl_name)
mock_dl_driver.delete_location.assert_called_once_with(dl_id)
def test_delete_with_config(self):
dl_id = '123'
dl_name = 'abc'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.delete, ['TestEnv', dl_name, '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_output = 'Deleting deployment location: {0}...'.format(dl_id)
expected_output += '\nDeleted deployment location: {0}'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_delete_with_pwd(self):
dl_id = '123'
dl_name = 'abc'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.delete, ['TestEnv', dl_name, '--pwd', 'secret'])
self.assert_no_errors(result)
expected_output = 'Deleting deployment location: {0}...'.format(dl_id)
expected_output += '\nDeleted deployment location: {0}'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_delete_handles_lm_driver_error(self):
result = self.runner.invoke(deployment_cmds.delete, ['TestEnv', 'SomeDl'])
self.assert_has_system_exit(result)
expected_output = 'Error: No deployment location with name: SomeDl'
self.assert_output(result, expected_output)
def test_get_with_defaults(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.get, ['TestEnv', dl_name])
self.assert_no_errors(result)
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.get_locations_by_name.assert_called_once_with(dl_name)
def test_get_with_config(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.get, ['TestEnv', dl_name, '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_get_with_pwd(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.get, ['TestEnv', dl_name, '--pwd', 'secret'])
self.assert_no_errors(result)
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_get_not_found(self):
result = self.runner.invoke(deployment_cmds.get, ['TestEnv', 'SomeDl'])
self.assert_has_system_exit(result)
expected_output = 'Error: No deployment location with name: SomeDl'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_get_with_output_json_format(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.get, ['TestEnv', dl_name, '-f', 'json'])
self.assert_no_errors(result)
expected_output = '{'
expected_output += '\n \"id\": \"{0}\",'.format(dl_id)
expected_output += '\n \"name\": \"{0}\",'.format(dl_name)
expected_output += '\n \"resourceManager\": \"rm123\"'
expected_output += '\n}'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_get_with_output_yaml_format(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.get, ['TestEnv', dl_name, '-f', 'yaml'])
self.assert_no_errors(result)
expected_output = 'id: {0}'.format(dl_id)
expected_output += '\nname: {0}'.format(dl_name)
expected_output += '\nresourceManager: rm123\n'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_list_with_defaults(self):
dl_A_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_A_name = 'testdl_a'
self.lm_sim.add_deployment_location({'id': dl_A_id, 'name': dl_A_name, 'resourceManager': 'rm123'})
dl_B_id = 'c502bc73-6278-42e0-a5e3-a0fe40674754'
dl_B_name = 'testdl_b'
self.lm_sim.add_deployment_location({'id': dl_B_id, 'name': dl_B_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.list_locations, ['TestEnv'])
self.assert_no_errors(result)
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+----------+-------------------+----------------------+---------------|'
expected_output += '\n| f801fa73-6278-42f0-b5d3-a0fe40675327 | testdl_a | rm123 | | |'
expected_output += '\n| c502bc73-6278-42e0-a5e3-a0fe40674754 | testdl_b | rm123 | | |'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
mock_dl_driver = self.mock_create_lm_session.return_value.deployment_location_driver
mock_dl_driver.get_locations.assert_called_once()
def test_list_with_config(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.list_locations, ['TestEnv', '--config', 'my/config/file'])
self.assert_no_errors(result)
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, 'my/config/file')
def test_list_with_pwd(self):
dl_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_name = 'testdl'
self.lm_sim.add_deployment_location({'id': dl_id, 'name': dl_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.list_locations, ['TestEnv', '--pwd', 'secret'])
self.assert_no_errors(result)
expected_output = '| id | name | resourceManager | infrastructureType | description |'
expected_output += '\n|--------------------------------------+--------+-------------------+----------------------+---------------|'
expected_output += '\n| {0} | testdl | rm123 | | |'.format(dl_id)
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', 'secret', None)
def test_list_with_output_json_format(self):
dl_A_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_A_name = 'testdl_a'
self.lm_sim.add_deployment_location({'id': dl_A_id, 'name': dl_A_name, 'resourceManager': 'rm123'})
dl_B_id = 'c502bc73-6278-42e0-a5e3-a0fe40674754'
dl_B_name = 'testdl_b'
self.lm_sim.add_deployment_location({'id': dl_B_id, 'name': dl_B_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.list_locations, ['TestEnv', '-f', 'json'])
self.assert_no_errors(result)
expected_output = '{'
expected_output += '\n \"items\": ['
expected_output += '\n {'
expected_output += '\n \"id\": \"{0}\",'.format(dl_A_id)
expected_output += '\n \"name\": \"{0}\",'.format(dl_A_name)
expected_output += '\n \"resourceManager\": \"rm123\"'
expected_output += '\n },'
expected_output += '\n {'
expected_output += '\n \"id\": \"{0}\",'.format(dl_B_id)
expected_output += '\n \"name\": \"{0}\",'.format(dl_B_name)
expected_output += '\n \"resourceManager\": \"rm123\"'
expected_output += '\n }'
expected_output += '\n ]'
expected_output += '\n}'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
def test_list_with_output_yaml_format(self):
dl_A_id = 'f801fa73-6278-42f0-b5d3-a0fe40675327'
dl_A_name = 'testdl_a'
self.lm_sim.add_deployment_location({'id': dl_A_id, 'name': dl_A_name, 'resourceManager': 'rm123'})
dl_B_id = 'c502bc73-6278-42e0-a5e3-a0fe40674754'
dl_B_name = 'testdl_b'
self.lm_sim.add_deployment_location({'id': dl_B_id, 'name': dl_B_name, 'resourceManager': 'rm123'})
result = self.runner.invoke(deployment_cmds.list_locations, ['TestEnv', '-f', 'yaml'])
self.assert_no_errors(result)
expected_output = 'items:'
expected_output += '\n- id: {0}'.format(dl_A_id)
expected_output += '\n name: {0}'.format(dl_A_name)
expected_output += '\n resourceManager: rm123'
expected_output += '\n- id: {0}'.format(dl_B_id)
expected_output += '\n name: {0}'.format(dl_B_name)
expected_output += '\n resourceManager: rm123\n'
self.assert_output(result, expected_output)
self.mock_create_lm_session.assert_called_once_with('TestEnv', None, None)
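The tests above lean on `unittest.mock` call tracking (`assert_called_once_with`, the `ANY` matcher, `call_count`). A minimal, self-contained sketch of that pattern — the mock name and arguments here are illustrative, not from the code under test:

```python
from unittest.mock import MagicMock, ANY

# A MagicMock records every call made to it.
session_factory = MagicMock(name='create_lm_session')
session_factory('TestEnv', None, 'my/config/file')

# Exact-argument check, as used throughout the tests above.
session_factory.assert_called_once_with('TestEnv', None, 'my/config/file')

# ANY matches any value in that position, so this also passes.
session_factory.assert_called_once_with('TestEnv', ANY, ANY)
```

Both assertions pass against the same single recorded call; `ANY` only relaxes matching, it does not change the call count.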
# -*- coding: utf-8 -*-
import warnings
from django.conf import settings
from django.http import HttpRequest
from django.test import TestCase
from ipware.ip import get_ip
from ipware.ip import get_real_ip
from ipware.ip import get_trusted_ip
warnings.simplefilter('ignore')
class IPv4TestCase(TestCase):
"""IP address Test"""
def test_meta_none(self):
request = HttpRequest()
request.META = {
}
ip = get_real_ip(request)
self.assertIsNone(ip)
def test_http_x_forwarded_for_multiple(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '192.168.255.182, 10.0.0.0, 127.0.0.1, 198.84.193.157, 177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_multiple_left_most_ip(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '192.168.255.182, 198.84.193.157, 10.0.0.0, 127.0.0.1, 177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_multiple_right_most_ip(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '192.168.255.182, 198.84.193.157, 10.0.0.0, 127.0.0.1, 177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request, right_most_proxy=True)
self.assertEqual(ip, "177.139.233.139")
def test_http_x_forwarded_for_multiple_right_most_ip_private(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '192.168.255.182, 198.84.193.157, 10.0.0.0, 127.0.0.1, 177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request, right_most_proxy=True)
self.assertEqual(ip, "177.139.233.139")
def test_http_x_forwarded_for_multiple_bad_address(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': 'unknown, 192.168.255.182, 10.0.0.0, 127.0.0.1, 198.84.193.157, 177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_singleton(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.139")
def test_http_x_forwarded_for_singleton_private_address(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '192.168.255.182',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.132")
def test_bad_http_x_forwarded_for_fallback_on_x_real_ip(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': 'unknown 177.139.233.139',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.132")
def test_empty_http_x_forwarded_for_fallback_on_x_real_ip(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '',
'HTTP_X_REAL_IP': '177.139.233.132',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.132")
def test_empty_http_x_forwarded_for_empty_x_real_ip_fallback_on_remote_addr(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '',
'HTTP_X_REAL_IP': '',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_empty_http_x_forwarded_for_private_x_real_ip_fallback_on_remote_addr(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '',
'HTTP_X_REAL_IP': '192.168.255.182',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_private_http_x_forward_for_ip_addr(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '127.0.0.1',
'HTTP_X_REAL_IP': '',
'REMOTE_ADDR': '',
}
ip = get_real_ip(request)
self.assertEqual(ip, None)
def test_private_remote_addr_for_ip_addr(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '',
'REMOTE_ADDR': '127.0.0.1',
}
ip = get_real_ip(request)
self.assertEqual(ip, None)
def test_missing_x_forwarded(self):
request = HttpRequest()
request.META = {
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_missing_x_forwarded_missing_real_ip(self):
request = HttpRequest()
request.META = {
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_best_matched_real_ip(self):
request = HttpRequest()
request.META = {
'HTTP_X_REAL_IP': '127.0.0.1',
'REMOTE_ADDR': '177.31.233.133',
}
ip = get_ip(request)
self.assertEqual(ip, "177.31.233.133")
def test_best_matched_private_ip(self):
request = HttpRequest()
request.META = {
'HTTP_X_REAL_IP': '127.0.0.1',
'REMOTE_ADDR': '192.31.233.133',
}
ip = get_ip(request)
self.assertEqual(ip, "192.31.233.133")
def test_best_matched_private_ip_2(self):
request = HttpRequest()
request.META = {
'HTTP_X_REAL_IP': '192.31.233.133',
'REMOTE_ADDR': '127.0.0.1',
}
ip = get_ip(request)
self.assertEqual(ip, "192.31.233.133")
def test_x_forwarded_for_multiple(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '192.168.255.182, 10.0.0.0, 127.0.0.1, 198.84.193.157, 177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_x_forwarded_for_multiple_left_most_ip(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '192.168.255.182, 198.84.193.157, 10.0.0.0, 127.0.0.1, 177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_x_forwarded_for_multiple_right_most_ip(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '192.168.255.182, 198.84.193.157, 10.0.0.0, 127.0.0.1, 177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request, right_most_proxy=True)
self.assertEqual(ip, "177.139.233.139")
def test_x_forwarded_for_multiple_right_most_ip_private(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '192.168.255.182, 198.84.193.157, 10.0.0.0, 127.0.0.1, 177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request, right_most_proxy=True)
self.assertEqual(ip, "177.139.233.139")
def test_x_forwarded_for_multiple_bad_address(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': 'unknown, 192.168.255.182, 10.0.0.0, 127.0.0.1, 198.84.193.157, 177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_x_forwarded_for_singleton(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.139")
def test_x_forwarded_for_singleton_private_address(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '192.168.255.182',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_bad_x_forwarded_for_fallback_on_remote_addr(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': 'unknown 177.139.233.139',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_empty_x_forwarded_for_fallback_on_remote_addr(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_empty_x_forwarded_for_empty_x_real_ip_fallback_on_remote_addr(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_empty_x_forwarded_for_private_x_real_ip_fallback_on_remote_addr(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '',
'REMOTE_ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.133")
def test_private_x_forward_for_ip_addr(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '127.0.0.1',
'REMOTE_ADDR': '',
}
ip = get_real_ip(request)
self.assertEqual(ip, None)
def test_x_forwarded_for_singleton_hyphen_as_delimiter(self):
request = HttpRequest()
request.META = {
'X-FORWARDED-FOR': '177.139.233.139',
'REMOTE-ADDR': '177.139.233.133',
}
ip = get_real_ip(request)
self.assertEqual(ip, "177.139.233.139")
class IPv4TrustedProxiesTestCase(TestCase):
"""Trusted Proxies - IP address Test"""
def test_meta_none(self):
request = HttpRequest()
request.META = {
}
ip = get_trusted_ip(request)
self.assertIsNone(ip)
def test_http_x_forwarded_for_conf_settings(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.100',
}
ip = get_trusted_ip(request)
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_no_proxy(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=[])
self.assertIsNone(ip)
def test_http_x_forwarded_for_single_proxy(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139.233.139'])
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_single_proxy_with_right_most(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '177.139.233.139, 177.139.200.139, 198.84.193.157',
}
ip = get_trusted_ip(request, right_most_proxy=True, trusted_proxies=['177.139.233.139'])
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_multi_proxy(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139.233.138', '177.139.233.139'])
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_all_proxies_in_subnet(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139.233'])
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_all_proxies_in_subnet_2(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139'])
self.assertEqual(ip, "198.84.193.157")
def test_x_forwarded_for_single_proxy(self):
request = HttpRequest()
request.META = {
'X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139.233.139'])
self.assertEqual(ip, "198.84.193.157")
def test_x_forwarded_for_single_proxy_hyphens(self):
request = HttpRequest()
request.META = {
'X-FORWARDED-FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139.233.139'])
self.assertEqual(ip, "198.84.193.157")
def test_http_x_forwarded_for_and_x_forward_for_single_proxy(self):
request = HttpRequest()
request.META = {
'HTTP_X_FORWARDED_FOR': '198.84.193.156, 177.139.200.139, 177.139.233.139',
'X_FORWARDED_FOR': '198.84.193.157, 177.139.200.139, 177.139.233.139',
}
ip = get_trusted_ip(request, trusted_proxies=['177.139.233.139'])
self.assertEqual(ip, "198.84.193.156")
'''
(c) Copyright 2013 Telefonica, I+D. Printed in Spain (Europe). All Rights
Reserved.
The copyright to the software program(s) is property of Telefonica I+D.
The program(s) may be used and or copied only with the express written
consent of Telefonica I+D or in accordance with the terms and conditions
stipulated in the agreement/contract under which the program(s) have
been supplied.
'''
import unittest
from com.tdigital.sd.sd_discovery import ServiceDirectory
from mock import MagicMock, patch, ANY, call
import sys
import os
import time
from requests.exceptions import Timeout, TooManyRedirects
from com.tdigital.sd.exceptions import SDLibraryException, RemoteException,\
ConnectionException
class TestSdDiscovery(unittest.TestCase):
def setUp(self):
self.patcher_request_get = patch('com.tdigital.sd.sd_discovery.get')
self.requestGetMock = self.patcher_request_get.start()
self.sdRespMock = MagicMock(name='sdRespMock')
self.sdRespMock.status_code = 200
# We don't care about the rules in SD; we just want to unit test the library
self.sdRespMock.json.return_value = {
"class_name": "test_api",
"uri": "uri_test",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}
}
self.requestGetMock.return_value = self.sdRespMock
def tearDown(self):
self.sdRespMock.reset_mock()
self.sdRespMock.json.reset_mock()
self.sdRespMock.json.side_effect = None
self.patcher_request_get.stop()
def test_get_endpoints_uncached_should_return_existing(self):
library = ServiceDirectory('localhost', 8000, 'v1')
instance = library.bind_instance('test_api')
self.assertEquals('test_api', instance.class_name)
self.assertEquals('uri_test', instance.uri)
self.assertEquals({"ob": "oba"}, instance.attributes)
self.requestGetMock.assert_called_once_with(ANY, timeout=30, auth=ANY, params=ANY)
def test_get_instance_cached_should_return_from_cache(self):
library = ServiceDirectory('localhost', 8000, 'v1')
instance = library.bind_instance('test_api')
self.assertEquals('test_api', instance.class_name)
self.assertEquals('uri_test', instance.uri)
self.assertEquals({"ob": "oba"}, instance.attributes)
# A second bind_instance call must be served from the cache
instance = library.bind_instance('test_api')
self.assertEquals('test_api', instance.class_name)
self.assertEquals('uri_test', instance.uri)
self.assertEquals({"ob": "oba"}, instance.attributes)
self.requestGetMock.assert_called_once_with(ANY, timeout=30, auth=ANY, params=ANY)
# Now check how many times the cache was hit
info_cache = library.bind_instance.cache_info()
self.assertEquals(1, info_cache.hits)
@patch('com.tdigital.sd.sd_discovery._cache_size', 1)
def test_get_endpoints_max_size_cached_reached_should_return_lru_from_cache(self):
library = ServiceDirectory('localhost', 8000, 'v1')
library.bind_instance('test_api')
info_cache = library.bind_instance.cache_info()
self.assertEquals(0, info_cache.hits)
self.assertEquals(1, info_cache.currsize)
self.assertEquals(1, info_cache.misses)
# A second bind_instance call must be served from the cache
library.bind_instance('test_api')
info_cache = library.bind_instance.cache_info()
self.assertEquals(1, info_cache.hits)
self.assertEquals(1, info_cache.currsize)
# A new class name must call SD again and update the cache, without a hit
library.bind_instance('test_new_api')
info_cache = library.bind_instance.cache_info()
self.assertEquals(1, info_cache.hits)
self.assertEquals(1, info_cache.currsize)
self.assertEquals(2, info_cache.misses)
library.bind_instance('test_new_api')
info_cache = library.bind_instance.cache_info()
self.assertEquals(2, info_cache.hits)
self.assertEquals(1, info_cache.currsize)  # as expected, the cache maximum is reached
self.assertEquals(2, info_cache.misses)
# Only two calls to SD
self.assertEquals(2, self.requestGetMock.call_count, "SD was not called exactly twice")
self.requestGetMock.assert_has_calls([call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY)])
library.bind_instance.cache_clear()
info_cache = library.bind_instance.cache_info()
self.assertEquals(0, info_cache.hits)
self.assertEquals(0, info_cache.currsize)  # cache is empty after cache_clear
self.assertEquals(0, info_cache.misses)
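The hits/misses/currsize bookkeeping asserted above is exactly what `functools.lru_cache` exposes via `cache_info()`. A stdlib sketch with an illustrative cached function, reproducing the same eviction behaviour at `maxsize=1`:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def lookup(name):
    # Stand-in for a remote SD lookup; only the cache behaviour matters here.
    return name.upper()

lookup('test_api')       # miss, cached
lookup('test_api')       # hit, served from cache
lookup('test_new_api')   # miss; evicts 'test_api' because maxsize=1
info = lookup.cache_info()
# info is now (hits=1, misses=2, maxsize=1, currsize=1)
```

This mirrors the test above: the second distinct class name bumps `misses` to 2 while `currsize` stays pinned at the cache's maximum of 1.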
def test_init_library_with_config_file_should_get_values_from_config(self):
path_properties = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'config')
sys.path.append(path_properties)
library = ServiceDirectory()
sys.path.remove(path_properties)
self.assertEquals('localhosttest', library.host, "Host was not read from config file")
self.assertEquals(9000, library.port, "Port was not read from config file")
self.assertEquals(1, library.ttl, "ttl was not read from config file")
self.assertEquals(60, library.ttr, "ttr was not read from config file")
self.assertEquals(10, library.timeout, "timeout was not read from config file")
self.assertEquals('v2', library.version, "version was not read from config file")
def test_init_library_with_config_file_and_constructor_should_get_values_from_const_first(self):
path_properties = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'config')
sys.path.append(path_properties)
library = ServiceDirectory('hostconstructor', 9900, timeout=38)
sys.path.remove(path_properties)
self.assertEquals('hostconstructor', library.host, "Host was not read from init")
self.assertEquals(9900, library.port, "Port was not read from init")
self.assertEquals(1, library.ttl, "ttl was not read from config file")
self.assertEquals(60, library.ttr, "ttr was not read from config file")
self.assertEquals(38, library.timeout, "timeout was not read from config file")
self.assertEquals('v2', library.version, "version was not read from config file")
def test_init_library_with_small_config_file_and_constructor_should_get_default_values(self):
path_properties = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'small_config')
sys.path.append(path_properties)
library = ServiceDirectory('hostconstructor', 9900)
sys.path.remove(path_properties)
self.assertEquals('hostconstructor', library.host, "Host was not read from init")
self.assertEquals(9900, library.port, "Port was not read from init")
self.assertEquals(168, library.ttl, "ttl default incorrect")
self.assertEquals(3600, library.ttr, "ttr default incorrect")
self.assertEquals(30, library.timeout, "timeout default value incorrect")
self.assertEquals('vsmall', library.version, "version was not read from config file")
def test_init_library_with_wrong_values_should_raise_sd_library_exception(self):
self.assertRaises(SDLibraryException, ServiceDirectory, "host")
self.assertRaises(SDLibraryException, ServiceDirectory, "host", "port")
self.assertRaises(SDLibraryException, ServiceDirectory, "host", 8000, 'v1', ttl='undefined')
self.assertRaises(SDLibraryException, ServiceDirectory, "host", 8000, 'v1', ttl=90, ttr='bad')
self.assertRaises(SDLibraryException, ServiceDirectory, "host", 8000, 'v1', ttl=90, ttr=90, timeout='bad')
self.assertRaises(SDLibraryException, ServiceDirectory, "host", 8000, 'v1', ttl=1.0 / 3700, ttr=90, timeout=30)
self.assertRaises(SDLibraryException, ServiceDirectory, "host", 8000, 'v1', ttl=1, ttr=4000, timeout=30)
self.assertRaises(SDLibraryException, ServiceDirectory, "host", 8000, 'v1', ttl=16, ttr=3000, timeout=0.1)
def test_init_library_without_version_should_get_last_version_from_sd(self):
sdRespInfoMock = MagicMock(name='sdRespMockInfo')
sdRespInfoMock.status_code = 200
sdRespInfoMock.json.return_value = {"app_name": "Service Directory", "default_version": "vlast"}
self.requestGetMock.return_value = sdRespInfoMock
library = ServiceDirectory('localhost', 8000)
self.assertEquals('localhost', library.host, "Host was not read from config file")
self.assertEquals(8000, library.port, "Port was not read from config file")
self.assertEquals(168, library.ttl, "ttl was not read from config file")
self.assertEquals(3600, library.ttr, "ttr was not read from config file")
self.assertEquals(30, library.timeout, "timeout was not read from config file")
self.assertEquals('vlast', library.version, "version was not obtained from SD")
def test_init_library_with_timeout_from_sd_should_raise_Remote_exception(self):
sdRespInfoMock = MagicMock(name='sdRespMockInfo')
sdRespInfoMock.status_code = 200
sdRespInfoMock.json.return_value = {"app_name": "Service Directory", "wrong_version": "vlast"}
self.requestGetMock.return_value = sdRespInfoMock
self.assertRaises(RemoteException, ServiceDirectory, 'localhost', 8000)
def test_init_library_with_error_from_sd_should_raise_Sd_exception(self):
sdRespInfoMock = MagicMock(name='sdRespMockInfo')
sdRespInfoMock.status_code = 500
sdRespInfoMock.json.return_value = {"exceptionId": "SVC00000", "exceptionText": "vlast"}
self.requestGetMock.return_value = sdRespInfoMock
self.assertRaises(SDLibraryException, ServiceDirectory, 'localhost', 8000)
def test_init_library_with_timeout_from_sd_should_raise_Sd_exception(self):
self.requestGetMock.side_effect = Timeout()
self.assertRaises(SDLibraryException, ServiceDirectory, 'localhost', 8000)
def test_get_endpoints_ttl_zero_should_not_hit_cache(self):
library = ServiceDirectory('localhost', 8000, 'v1', ttl=0, timeout=30)
library.bind_instance('test_api')
library.bind_instance('test_api')
self.requestGetMock.assert_has_calls([call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY)])
self.assertEquals(2, self.requestGetMock.call_count, "SD not called exactly 2 times")
def test_get_endpoints_low_ttr_should_refresh_element(self):
sdRespMock = MagicMock(name='sdRespMockRefresh')
sdRespMock.status_code = 200
sdRespMock.json.side_effect = [
{"class_name": "test_api",
"uri": "uri_test",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}}
, {
"class_name": "test_api",
"uri": "uri_test_refreshed",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}
}
]
self.requestGetMock.return_value = sdRespMock
library = ServiceDirectory('localhost', 8000, 'v1', ttl=1, ttr=1, timeout=30)
instance = library.bind_instance('test_api') # update cache and call SD
self.assertEquals('uri_test', instance.uri)
time.sleep(1.1) # after 1 second the cache should be refreshed
instance = library.bind_instance('test_api') # refresh element cache and call SD
self.assertEquals('uri_test_refreshed', instance.uri)
self.requestGetMock.assert_has_calls([call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY)])
self.assertEquals(2, self.requestGetMock.call_count, "SD not called exactly 2 times")
def test_get_endpoints_ttr_expired_SD_Unavailable_should_return_cached(self):
sdRespMock = MagicMock(name='sdRespMockRefresh')
sdRespMock.status_code = 200
returns = [
{
"class_name": "test_api",
"uri": "uri_test",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}
},
Timeout('SD unavailable')
]
def side_effect_calls(*args):
result = returns.pop(0)
if isinstance(result, Exception):
raise result
return result
sdRespMock.json.side_effect = side_effect_calls
self.requestGetMock.return_value = sdRespMock
library = ServiceDirectory('localhost', 8000, 'v1', ttl=1, ttr=1, timeout=30)
instance = library.bind_instance('test_api') # update cache and call SD
self.assertEquals('uri_test', instance.uri)
time.sleep(1.1) # after just over 1 second the ttr has expired, so a refresh is attempted
instance = library.bind_instance('test_api') # sd return Timeout and we return cached
self.assertEquals('uri_test', instance.uri)
self.requestGetMock.assert_has_calls([call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY)])
self.assertEquals(2, self.requestGetMock.call_count, "SD not called exactly 2 times")
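The hand-rolled `side_effect_calls` helper above is not strictly needed: `unittest.mock` already raises exception instances that appear in a `side_effect` iterable, so the same value-then-timeout sequence can be expressed directly. A sketch (using `RuntimeError` as a stand-in for the `Timeout` exception):

```python
from unittest.mock import MagicMock

resp = MagicMock(name="sdRespMock")
resp.status_code = 200
# side_effect accepts an iterable; exception instances in it are raised
resp.json.side_effect = [{"uri": "uri_test"}, RuntimeError("SD unavailable")]

first = resp.json()          # first call returns the cached-style payload
try:
    resp.json()              # second call raises automatically
    raised = False
except RuntimeError:
    raised = True
```

This keeps the test setup declarative and drops the manual `returns.pop(0)` bookkeeping.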
def test_get_endpoints_ttr_and_ttl_expired_SD_Unavailable_should_return_exception(self):
sdRespMock = MagicMock(name='sdRespMockRefresh')
sdRespMock.status_code = 200
returns = [
{
"class_name": "test_api",
"uri": "uri_test_ttl",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}
}, Timeout('SD unavailable')]
def side_effect_calls(*args):
result = returns.pop(0)
if isinstance(result, Exception):
raise result
return result
sdRespMock.json.side_effect = side_effect_calls
self.requestGetMock.return_value = sdRespMock
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
instance = library.bind_instance('test_api') # update cache and call SD
self.assertEquals('uri_test_ttl', instance.uri)
time.sleep(1.1) # after 1.1 seconds ttr and ttl are expired
self.assertRaises(ConnectionException, library.bind_instance, 'test_api')
self.requestGetMock.assert_has_calls([call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY)])
self.assertEquals(2, self.requestGetMock.call_count, "SD not called exactly 2 times")
def test_get_endpoints_ttl_expired_SD_Unavailable_should_return_new_value(self):
sdRespMock = MagicMock(name='sdRespMockRefresh')
sdRespMock.status_code = 200
returns = [
{
"class_name": "test_api",
"uri": "uri_test_ttl",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}
}, Timeout('SD unavailable'),
{
"class_name": "test_api",
"uri": "uri_test_available",
"version": "1.0",
"environment": "production",
"attributes": {"ob": "oba"}
}]
def side_effect_calls(*args):
result = returns.pop(0)
if isinstance(result, Exception):
raise result
return result
sdRespMock.json.side_effect = side_effect_calls
self.requestGetMock.return_value = sdRespMock
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
instance = library.bind_instance('test_api') # update cache and call SD
self.assertEquals('uri_test_ttl', instance.uri)
time.sleep(1.1) # after 1.1 seconds ttr and ttl are expired, we get Exception
self.assertRaises(ConnectionException, library.bind_instance, 'test_api')
# Now SD is available again and the cache entry is gone, so SD is called
instance = library.bind_instance('test_api') # fill cache and call SD
self.assertEquals('uri_test_available', instance.uri)
# now the value is in the cache again
instance = library.bind_instance('test_api') # served from cache, no SD call
self.assertEquals('uri_test_available', instance.uri)
self.requestGetMock.assert_has_calls([call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY),
call(ANY, timeout=30, auth=ANY, params=ANY)])
self.assertEquals(3, self.requestGetMock.call_count, "SD not called exactly 3 times")
def test_get_endpoints_with_context_cached_should_return_from_cache(self):
library = ServiceDirectory('localhost', 8000, 'v1')
instance = library.bind_instance('test_api',
context={'ob': 'oba', 'premium': True})
self.assertEquals('test_api', instance.class_name)
self.assertEquals('uri_test', instance.uri)
self.assertEquals({"ob": "oba"}, instance.attributes)
# A second bind_instance call with the same context must be served from the cache
instance = library.bind_instance('test_api',
context={'premium': True, 'ob': 'oba'})
self.assertEquals('test_api', instance.class_name)
self.assertEquals('uri_test', instance.uri)
self.assertEquals({"ob": "oba"}, instance.attributes)
self.requestGetMock.assert_called_once_with(ANY, timeout=30, auth=ANY, params=ANY)
def test_get_endpoints_uncached_SD_timeout_should_raise_conn_exception(self):
self.requestGetMock.side_effect = Timeout()
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
self.assertRaises(ConnectionException, library.bind_instance, 'test_api')
self.requestGetMock.assert_called_once_with(ANY, timeout=30, auth=ANY, params=ANY)
def test_get_endpoints_uncached_SD_conn_error_should_raise_conn_exception(self):
self.requestGetMock.side_effect = TooManyRedirects()
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
self.assertRaises(ConnectionException, library.bind_instance, 'test_api')
self.requestGetMock.assert_called_once_with(ANY, timeout=30, auth=ANY, params=ANY)
def test_get_endpoints_bad_json_resp_SD_should_raise_remote(self):
sdRespMock = MagicMock(name='sdRespMockRefresh')
sdRespMock.status_code = 200
sdRespMock.json.return_value = {
"class_name": "api"
}
self.requestGetMock.return_value = sdRespMock
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
self.assertRaises(RemoteException, library.bind_instance, 'test_api')
def test_get_endpoints_error_resp_SD_should_raise_remote(self):
sdRespMock = MagicMock(name='sdRespMockRefresh')
sdRespMock.status_code = 400
sdRespMock.json.return_value = {
"exceptionText": "Binding not found",
"exceptionId": "SVC0000"
}
self.requestGetMock.return_value = sdRespMock
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
self.assertRaises(RemoteException, library.bind_instance, 'test_api')
def test_get_endpoints_invalid_context_should_raise_sdLibrary(self):
library = ServiceDirectory('localhost', 8000, 'v1', ttr=0.1, ttl=1.0 / 3600, timeout=30)
self.assertRaises(SDLibraryException, library.bind_instance, 'test_api', 'invalid_context')
if __name__ == "__main__":
unittest.main(argv=unittest.sys.argv + ['--verbose'])
| 52.088542 | 119 | 0.675082 | 2,394 | 20,002 | 5.439432 | 0.109023 | 0.083551 | 0.048149 | 0.045922 | 0.833436 | 0.806174 | 0.778145 | 0.74904 | 0.703041 | 0.680003 | 0 | 0.026951 | 0.217178 | 20,002 | 383 | 120 | 52.224543 | 0.8047 | 0.066793 | 0 | 0.626168 | 0 | 0 | 0.149442 | 0.003863 | 0 | 0 | 0 | 0 | 0.302181 | 1 | 0.084112 | false | 0 | 0.024922 | 0 | 0.121495 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed3baf66f5e0371345123af5016ed8ff2b002a8f | 40 | py | Python | src/log_mine/__init__.py | nr-blablacar/logmine | 6ac777a41bbb870707a6f1471b6b78f1af17e127 | [
"MIT"
] | null | null | null | src/log_mine/__init__.py | nr-blablacar/logmine | 6ac777a41bbb870707a6f1471b6b78f1af17e127 | [
"MIT"
] | null | null | null | src/log_mine/__init__.py | nr-blablacar/logmine | 6ac777a41bbb870707a6f1471b6b78f1af17e127 | [
"MIT"
] | null | null | null | from logmine_pkg.log_mine import LogMine | 40 | 40 | 0.9 | 7 | 40 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed66d66b1b13b63736606d5265f31f69dd513549 | 103 | py | Python | src/moncash/exceptions/payment_error.py | MLHaiti/moncash_python | 3b8677312306379020c36b774bbfbf39c085a7be | [
"MIT"
] | 15 | 2021-03-02T01:25:37.000Z | 2022-03-12T14:20:07.000Z | src/moncash/exceptions/payment_error.py | Wadprog/moncash_python | 3b8677312306379020c36b774bbfbf39c085a7be | [
"MIT"
] | 6 | 2021-03-04T17:22:11.000Z | 2022-03-12T16:54:43.000Z | src/moncash/exceptions/payment_error.py | Wadprog/moncash_python | 3b8677312306379020c36b774bbfbf39c085a7be | [
"MIT"
] | 3 | 2022-03-07T15:54:41.000Z | 2022-03-12T14:24:27.000Z | from moncash.exceptions.moncash_error import MoncashError
class PaymentError(MoncashError):
pass
| 20.6 | 58 | 0.825243 | 11 | 103 | 7.636364 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126214 | 103 | 4 | 59 | 25.75 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
ed7dd9d29c00190d229cad13da4368336391ae2e | 1,196 | py | Python | decapodcli/decapodcli/__init__.py | angry-tony/ceph-lcm-decapod | 535944d3ee384c3a7c4af82f74041b0a7792433f | [
"Apache-2.0"
] | 41 | 2016-11-03T16:40:17.000Z | 2019-05-23T08:39:17.000Z | decapodcli/decapodcli/__init__.py | Mirantis/ceph-lcm | fad9bad0b94f2ef608362953583b10a54a841d24 | [
"Apache-2.0"
] | 30 | 2016-10-14T10:54:46.000Z | 2017-10-20T15:58:01.000Z | decapodcli/decapodcli/__init__.py | angry-tony/ceph-lcm-decapod | 535944d3ee384c3a7c4af82f74041b0a7792433f | [
"Apache-2.0"
] | 28 | 2016-09-17T01:17:36.000Z | 2019-07-05T03:32:54.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decapod CLI tools package."""
import warnings
import click
click.disable_unicode_literals_warning = True
warnings.simplefilter("always", PendingDeprecationWarning)
from decapodcli import cloud_config # NOQA
from decapodcli import cluster # NOQA
from decapodcli import execution # NOQA
from decapodcli import password_reset # NOQA
from decapodcli import permission # NOQA
from decapodcli import playbook_configuration # NOQA
from decapodcli import playbook # NOQA
from decapodcli import role # NOQA
from decapodcli import server # NOQA
from decapodcli import user # NOQA
| 31.473684 | 69 | 0.774247 | 165 | 1,196 | 5.575758 | 0.587879 | 0.152174 | 0.217391 | 0.234783 | 0.069565 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008991 | 0.163043 | 1,196 | 37 | 70 | 32.324324 | 0.91009 | 0.545987 | 0 | 0 | 0 | 0 | 0.011696 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.071429 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
9c3858b9db4e8f716d3f69d9f2a054fa03d6811e | 16 | py | Python | src/load/load.py | aaronwinter/graph-politics | afb240f8f37475fe38fe9f1d036666044984538e | [
"MIT"
] | 9 | 2016-11-27T06:07:57.000Z | 2021-04-02T04:38:47.000Z | src/load/load.py | aaronwinter/graph-politics | afb240f8f37475fe38fe9f1d036666044984538e | [
"MIT"
] | null | null | null | src/load/load.py | aaronwinter/graph-politics | afb240f8f37475fe38fe9f1d036666044984538e | [
"MIT"
] | 2 | 2016-11-01T22:17:15.000Z | 2018-03-23T01:11:25.000Z | import py2neo
| 4 | 13 | 0.75 | 2 | 16 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.25 | 16 | 3 | 14 | 5.333333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c5dff0d2339ddf4c874429906af10991c50fa24 | 135 | py | Python | classifier/source/connector.py | YeongHyeon/GNN_with_MNIST | 260c5c9a73348aeac9da868f3327c4687ac5895b | [
"MIT"
] | 1 | 2021-12-13T06:15:50.000Z | 2021-12-13T06:15:50.000Z | classifier/source/connector.py | YeongHyeon/GNN_with_MNIST | 260c5c9a73348aeac9da868f3327c4687ac5895b | [
"MIT"
] | null | null | null | classifier/source/connector.py | YeongHyeon/GNN_with_MNIST | 260c5c9a73348aeac9da868f3327c4687ac5895b | [
"MIT"
] | null | null | null | def connect(nn):
if(nn == 0): import neuralnet.net00_gcn as nn
elif(nn == 1): import neuralnet.net01_gat as nn
return nn
| 19.285714 | 51 | 0.651852 | 23 | 135 | 3.73913 | 0.652174 | 0.348837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058252 | 0.237037 | 135 | 6 | 52 | 22.5 | 0.776699 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9c6aab52fbb68971912b929435db9f5f61b39fb4 | 411 | py | Python | PyPowerDNS/exceptions.py | TheDJVG/PyPowerDNS | 2e0e47c3bb7a7b20c08ddfa6f0cd93e663d02dc7 | [
"MIT"
] | 1 | 2021-04-05T21:40:34.000Z | 2021-04-05T21:40:34.000Z | PyPowerDNS/exceptions.py | TheDJVG/PyPowerDNS | 2e0e47c3bb7a7b20c08ddfa6f0cd93e663d02dc7 | [
"MIT"
] | 1 | 2020-09-21T15:00:44.000Z | 2020-09-22T00:38:15.000Z | PyPowerDNS/exceptions.py | TheDJVG/PyPowerDNS | 2e0e47c3bb7a7b20c08ddfa6f0cd93e663d02dc7 | [
"MIT"
] | null | null | null | class PDNSApiException(Exception):
def __init__(self, status, message):
self.status = status
self.message = message
super(PDNSApiException, self).__init__()
def __str__(self):
return f"status_code={self.status}: {self.message}"
def __repr__(self):
return f"{type(self).__name__}({self.status}: {self.message})"
class PDNSApiNotFound(Exception):
pass
| 25.6875 | 70 | 0.656934 | 45 | 411 | 5.533333 | 0.4 | 0.160643 | 0.204819 | 0.168675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211679 | 411 | 15 | 71 | 27.4 | 0.768519 | 0 | 0 | 0 | 0 | 0 | 0.22871 | 0.150852 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.090909 | 0 | 0.181818 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
92dbbc93a9e62a38a7a69b19ec2decb89c860d46 | 23,829 | py | Python | pypika/tests/test_inserts.py | jmaschad/pypika | 48fcac96e4ba54856c53f8047bc18f99baebf00c | [
"Apache-2.0"
] | null | null | null | pypika/tests/test_inserts.py | jmaschad/pypika | 48fcac96e4ba54856c53f8047bc18f99baebf00c | [
"Apache-2.0"
] | null | null | null | pypika/tests/test_inserts.py | jmaschad/pypika | 48fcac96e4ba54856c53f8047bc18f99baebf00c | [
"Apache-2.0"
] | null | null | null | import unittest
from pypika import (
Field as F,
MySQLQuery,
PostgreSQLQuery,
Query,
Table,
Tables,
functions as fn,
)
from pypika.functions import Avg
from pypika.terms import Values
from pypika.utils import QueryException
__author__ = "Timothy Heys"
__email__ = "theys@kayak.com"
class InsertIntoTests(unittest.TestCase):
table_abc = Table("abc")
def test_insert_one_column(self):
query = Query.into(self.table_abc).insert(1)
self.assertEqual('INSERT INTO "abc" VALUES (1)', str(query))
def test_insert_one_column_single_element_array(self):
query = Query.into(self.table_abc).insert((1,))
self.assertEqual('INSERT INTO "abc" VALUES (1)', str(query))
def test_insert_one_column_multi_element_array(self):
query = Query.into(self.table_abc).insert((1,), (2,))
self.assertEqual('INSERT INTO "abc" VALUES (1),(2)', str(query))
def test_insert_single_row_with_array_value(self):
query = Query.into(self.table_abc).insert(1, ["a", "b", "c"])
self.assertEqual("INSERT INTO \"abc\" VALUES (1,['a','b','c'])", str(query))
def test_insert_multiple_rows_with_array_value(self):
query = Query.into(self.table_abc).insert(
(1, ["a", "b", "c"]), (2, ["c", "d", "e"]),
)
self.assertEqual(
'INSERT INTO "abc" ' "VALUES (1,['a','b','c']),(2,['c','d','e'])",
str(query),
)
def test_insert_all_columns(self):
query = Query.into(self.table_abc).insert(1, "a", True)
self.assertEqual("INSERT INTO \"abc\" VALUES (1,'a',true)", str(query))
def test_insert_all_columns_single_element(self):
query = Query.into(self.table_abc).insert((1, "a", True))
self.assertEqual("INSERT INTO \"abc\" VALUES (1,'a',true)", str(query))
def test_insert_all_columns_multi_rows(self):
query = Query.into(self.table_abc).insert((1, "a", True), (2, "b", False))
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true),(2,'b',false)", str(query)
)
def test_insert_all_columns_multi_rows_chained(self):
query = Query.into(self.table_abc).insert(1, "a", True).insert(2, "b", False)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true),(2,'b',false)", str(query)
)
def test_insert_all_columns_multi_rows_chained_mixed(self):
query = (
Query.into(self.table_abc)
.insert((1, "a", True), (2, "b", False))
.insert(3, "c", True)
)
self.assertEqual(
'INSERT INTO "abc" VALUES ' "(1,'a',true),(2,'b',false)," "(3,'c',true)",
str(query),
)
def test_insert_all_columns_multi_rows_chained_multiple_rows(self):
query = (
Query.into(self.table_abc)
.insert((1, "a", True), (2, "b", False))
.insert((3, "c", True), (4, "d", False))
)
self.assertEqual(
'INSERT INTO "abc" VALUES '
"(1,'a',true),(2,'b',false),"
"(3,'c',true),(4,'d',false)",
str(query),
)
def test_insert_selected_columns(self):
query = (
Query.into(self.table_abc)
.columns(self.table_abc.foo, self.table_abc.bar, self.table_abc.buz)
.insert(1, "a", True)
)
self.assertEqual(
'INSERT INTO "abc" ("foo","bar","buz") VALUES (1,\'a\',true)', str(query)
)
def test_insert_none_skipped(self):
query = Query.into(self.table_abc).insert()
self.assertEqual("", str(query))
def test_insert_ignore(self):
query = Query.into(self.table_abc).insert(1).ignore()
self.assertEqual('INSERT IGNORE INTO "abc" VALUES (1)', str(query))
def test_insert_null(self):
query = Query.into(self.table_abc).insert(None)
self.assertEqual('INSERT INTO "abc" VALUES (NULL)', str(query))
def test_insert_column_using_table_alias(self):
q = self.table_abc.insert(1)
self.assertEqual('INSERT INTO "abc" VALUES (1)', str(q))
def test_insert_column_using_alias_with_chain(self):
q = self.table_abc.insert(1, "a", True).insert(2, "b", False)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true),(2,'b',false)", str(q)
)
class PostgresInsertIntoOnConflictTests(unittest.TestCase):
table_abc = Table("abc")
def test_insert_on_conflict_do_nothing_field(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.on_conflict(self.table_abc.id)
.do_nothing()
)
self.assertEqual(
'INSERT INTO "abc" VALUES (1) ON CONFLICT (id) DO NOTHING', str(query)
)
def test_insert_on_conflict_do_nothing_field_str(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.on_conflict("id")
.do_nothing()
)
self.assertEqual(
'INSERT INTO "abc" VALUES (1) ON CONFLICT (id) DO NOTHING', str(query)
)
def test_insert_on_conflict_do_update_field(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1, "m")
.on_conflict(self.table_abc.id)
.do_update(self.table_abc.name, "m")
)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'m') ON CONFLICT (id) DO UPDATE SET name='m'",
str(query),
)
def test_insert_on_conflict_do_update_field_str(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1, "m")
.on_conflict("id")
.do_update("name", "m")
)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'m') ON CONFLICT (id) DO UPDATE SET name='m'",
str(query),
)
def test_insert_on_conflict_no_handler(self):
with self.assertRaises(QueryException):
query = str(
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.on_conflict(self.table_abc.id)
)
def test_insert_on_conflict_two_handlers_do_nothing(self):
with self.assertRaises(QueryException):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.on_conflict("id")
.do_nothing()
.do_update(self.table_abc.name, "m")
)
def test_insert_on_conflict_two_handlers_do_update(self):
with self.assertRaises(QueryException):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.on_conflict(self.table_abc.id)
.do_update(self.table_abc.name, "m")
.do_nothing()
)
def test_non_insert_on_conflict_do_nothing(self):
with self.assertRaises(QueryException):
query = (
PostgreSQLQuery.update(self.table_abc)
.set("foo", "bar")
.on_conflict("id")
.do_nothing()
)
def test_non_insert_on_conflict_do_update(self):
with self.assertRaises(QueryException):
query = (
PostgreSQLQuery.update(self.table_abc)
.set("foo", "bar")
.on_conflict("id")
.do_update(["name"], ["m"])
)
def test_insert_on_fieldless_conflict_do_nothing(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.on_conflict(None)
.do_nothing()
)
self.assertEqual(
'INSERT INTO "abc" VALUES (1) ON CONFLICT DO NOTHING', str(query)
)
def test_insert_on_fieldless_conflict_do_update_field(self):
with self.assertRaises(QueryException):
query = str(
PostgreSQLQuery.into(self.table_abc)
.insert(1, "m")
.on_conflict(None)
.do_update(self.table_abc.name, "m")
)
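The `ON CONFLICT ... DO NOTHING / DO UPDATE` clauses these tests expect from the Postgres builder are standard upsert SQL, and SQLite (3.24 or newer) accepts the same syntax, so the semantics can be sketched without a Postgres server. The table and values here are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE abc (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO abc VALUES (1, 'a')")
# DO NOTHING: the conflicting insert is silently dropped
con.execute("INSERT INTO abc VALUES (1, 'b') ON CONFLICT (id) DO NOTHING")
# DO UPDATE: the existing row is updated instead of inserted
con.execute("INSERT INTO abc VALUES (1, 'c') ON CONFLICT (id) DO UPDATE SET name='c'")
name = con.execute("SELECT name FROM abc WHERE id = 1").fetchone()[0]
```

After both upserts the table still holds a single row, now carrying the updated value, which is the behavior the generated Postgres SQL is asserting.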
class PostgresInsertIntoReturningTests(unittest.TestCase):
table_abc = Table("abc")
def test_insert_returning_one_field(self):
query = (
PostgreSQLQuery.into(self.table_abc).insert(1).returning(self.table_abc.id)
)
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING id', str(query))
def test_insert_returning_one_field_str(self):
query = PostgreSQLQuery.into(self.table_abc).insert(1).returning("id")
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING id', str(query))
def test_insert_returning_all_fields(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.returning(self.table_abc.star)
)
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING *', str(query))
def test_insert_returning_all_fields_and_arithmetics(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.returning(self.table_abc.star, self.table_abc.f1 + self.table_abc.f2)
)
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING *,f1+f2', str(query))
def test_insert_all_columns_multi_rows_chained_returning_star(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1, "a", True)
.insert(2, "b", False)
.returning(self.table_abc.star)
)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true),(2,'b',false) RETURNING *",
str(query),
)
def test_insert_all_columns_multi_rows_chained_returning_star_and_id(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1, "a", True)
.insert(2, "b", False)
.returning(self.table_abc.name, self.table_abc.star, self.table_abc.id,)
)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true),(2,'b',false) RETURNING *",
str(query),
)
def test_insert_all_columns_multi_rows_chained_returning_star_str(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1, "a", True)
.insert(2, "b", False)
.returning("*")
)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true),(2,'b',false) RETURNING *",
str(query),
)
def test_insert_all_columns_single_element_arrays(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert((1, "a", True))
.returning(self.table_abc.star)
)
self.assertEqual(
"INSERT INTO \"abc\" VALUES (1,'a',true) RETURNING *", str(query)
)
def test_insert_returning_null(self):
query = PostgreSQLQuery.into(self.table_abc).insert(1).returning(None)
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING NULL', str(query))
def test_insert_returning_tuple(self):
query = PostgreSQLQuery.into(self.table_abc).insert(1).returning((1, 2, 3))
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING (1,2,3)', str(query))
def test_insert_returning_arithmetics(self):
query = (
PostgreSQLQuery.into(self.table_abc)
.insert(1)
.returning(self.table_abc.f1 + self.table_abc.f2)
)
self.assertEqual('INSERT INTO "abc" VALUES (1) RETURNING f1+f2', str(query))
def test_insert_returning_aggregate(self):
with self.assertRaises(QueryException):
PostgreSQLQuery.into(self.table_abc).insert(1).returning(
Avg(self.table_abc.views)
)
def test_insert_returning_from_other_table(self):
table_cba = Table("cba")
with self.assertRaises(QueryException):
PostgreSQLQuery.into(self.table_abc).insert(1).returning(table_cba.id)
class InsertIntoOnDuplicateTests(unittest.TestCase):
table_abc = Table("abc")
def test_insert_one_column(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1)
.on_duplicate_key_update(self.table_abc.foo, self.table_abc.foo)
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1) ON DUPLICATE KEY UPDATE `foo`=`foo`",
str(query),
)
def test_insert_one_column_using_values(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1)
.on_duplicate_key_update(self.table_abc.foo, Values(self.table_abc.foo))
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1) ON DUPLICATE KEY UPDATE `foo`=VALUES(`foo`)",
str(query),
)
def test_insert_one_column_single_element_array(self):
query = (
MySQLQuery.into(self.table_abc)
.insert((1,))
.on_duplicate_key_update(self.table_abc.foo, self.table_abc.foo)
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1) ON DUPLICATE KEY UPDATE `foo`=`foo`",
str(query),
)
def test_insert_one_column_multi_element_array(self):
query = (
MySQLQuery.into(self.table_abc)
.insert((1,), (2,))
.on_duplicate_key_update(self.table_abc.foo, self.table_abc.foo)
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1),(2) ON DUPLICATE KEY UPDATE `foo`=`foo`",
str(query),
)
def test_insert_multiple_columns_on_duplicate_update_one_with_same_value(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1, "a")
.on_duplicate_key_update(self.table_abc.bar, Values(self.table_abc.bar))
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1,'a') ON DUPLICATE KEY UPDATE `bar`=VALUES(`bar`)",
str(query),
)
def test_insert_multiple_columns_on_duplicate_update_one_with_different_value(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1, "a")
.on_duplicate_key_update(self.table_abc.bar, "b")
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1,'a') ON DUPLICATE KEY UPDATE `bar`='b'",
str(query),
)
def test_insert_multiple_columns_on_duplicate_update_one_with_expression(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1, 2)
.on_duplicate_key_update(self.table_abc.bar, 4 + F("bar"))
) # todo sql expression? not python
self.assertEqual(
"INSERT INTO `abc` VALUES (1,2) ON DUPLICATE KEY UPDATE `bar`=4+`bar`",
str(query),
)
def test_insert_multiple_columns_on_duplicate_update_one_with_expression_using_original_field_value(
self,
):
query = (
MySQLQuery.into(self.table_abc)
.insert(1, "a")
.on_duplicate_key_update(
self.table_abc.bar, fn.Concat(self.table_abc.bar, "update")
)
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1,'a') ON DUPLICATE KEY UPDATE `bar`=CONCAT(`bar`,'update')",
str(query),
)
def test_insert_multiple_columns_on_duplicate_update_one_with_expression_using_values(
self,
):
query = (
MySQLQuery.into(self.table_abc)
.insert(1, "a")
.on_duplicate_key_update(
self.table_abc.bar, fn.Concat(Values(self.table_abc.bar), "update")
)
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1,'a') ON DUPLICATE KEY UPDATE `bar`=CONCAT(VALUES(`bar`),'update')",
str(query),
)
def test_insert_multiple_columns_on_duplicate_update_multiple(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1, "a", "b")
.on_duplicate_key_update(self.table_abc.bar, "b")
.on_duplicate_key_update(self.table_abc.baz, "c")
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1,'a','b') ON DUPLICATE KEY UPDATE `bar`='b',`baz`='c'",
str(query),
)
def test_insert_multi_rows_chained_mixed_on_duplicate_update_multiple(self):
query = (
MySQLQuery.into(self.table_abc)
.insert((1, "a", True), (2, "b", False))
.insert(3, "c", True)
.on_duplicate_key_update(self.table_abc.foo, self.table_abc.foo)
.on_duplicate_key_update(self.table_abc.bar, Values(self.table_abc.bar))
)
self.assertEqual(
"INSERT INTO `abc` VALUES (1,'a',true),(2,'b',false),(3,'c',true) "
"ON DUPLICATE KEY UPDATE `foo`=`foo`,`bar`=VALUES(`bar`)",
str(query),
)
def test_insert_selected_columns_on_duplicate_update_one(self):
query = (
MySQLQuery.into(self.table_abc)
.columns(self.table_abc.foo, self.table_abc.bar, self.table_abc.baz)
.insert(1, "a", True)
.on_duplicate_key_update(self.table_abc.baz, False)
)
self.assertEqual(
"INSERT INTO `abc` (`foo`,`bar`,`baz`) VALUES (1,'a',true) ON DUPLICATE KEY UPDATE `baz`=false",
str(query),
)
def test_insert_selected_columns_on_duplicate_update_multiple(self):
query = (
MySQLQuery.into(self.table_abc)
.columns(self.table_abc.foo, self.table_abc.bar, self.table_abc.baz)
.insert(1, "a", True)
.on_duplicate_key_update(self.table_abc.baz, False)
.on_duplicate_key_update(self.table_abc.bar, Values(self.table_abc.bar))
)
self.assertEqual(
"INSERT INTO `abc` (`foo`,`bar`,`baz`) VALUES (1,'a',true) "
"ON DUPLICATE KEY UPDATE `baz`=false,`bar`=VALUES(`bar`)",
str(query),
)
def test_insert_none_skipped(self):
query = (
MySQLQuery.into(self.table_abc)
.insert()
.on_duplicate_key_update(self.table_abc.baz, False)
)
self.assertEqual("", str(query))
def test_insert_ignore(self):
query = (
MySQLQuery.into(self.table_abc)
.insert(1)
.ignore()
.on_duplicate_key_update(self.table_abc.baz, False)
)
self.assertEqual(
"INSERT IGNORE INTO `abc` VALUES (1) ON DUPLICATE KEY UPDATE `baz`=false",
str(query),
)
class InsertSelectFromTests(unittest.TestCase):
table_abc, table_efg, table_hij = Tables("abc", "efg", "hij")
def test_insert_star(self):
query = Query.into(self.table_abc).from_(self.table_efg).select("*")
self.assertEqual('INSERT INTO "abc" SELECT * FROM "efg"', str(query))
def test_insert_ignore_star(self):
query = Query.into(self.table_abc).from_(self.table_efg).select("*").ignore()
self.assertEqual('INSERT IGNORE INTO "abc" SELECT * FROM "efg"', str(query))
def test_insert_from_columns(self):
query = (
Query.into(self.table_abc)
.from_(self.table_efg)
.select(self.table_efg.fiz, self.table_efg.buz, self.table_efg.baz)
)
self.assertEqual(
'INSERT INTO "abc" SELECT "fiz","buz","baz" FROM "efg"', str(query)
)
def test_insert_columns_from_star(self):
query = (
Query.into(self.table_abc)
.columns(self.table_abc.foo, self.table_abc.bar, self.table_abc.buz)
.from_(self.table_efg)
.select("*")
)
self.assertEqual(
'INSERT INTO "abc" ("foo","bar","buz") SELECT * FROM "efg"', str(query)
)
def test_insert_columns_from_columns(self):
query = (
Query.into(self.table_abc)
.columns(self.table_abc.foo, self.table_abc.bar, self.table_abc.buz)
.from_(self.table_efg)
.select(self.table_efg.fiz, self.table_efg.buz, self.table_efg.baz)
)
self.assertEqual(
'INSERT INTO "abc" ("foo","bar","buz") '
'SELECT "fiz","buz","baz" FROM "efg"',
str(query),
)
def test_insert_columns_from_columns_with_join(self):
query = (
Query.into(self.table_abc)
.columns(
self.table_abc.c1,
self.table_abc.c2,
self.table_abc.c3,
self.table_abc.c4,
)
.from_(self.table_efg)
.select(self.table_efg.foo, self.table_efg.bar)
.join(self.table_hij)
.on(self.table_efg.id == self.table_hij.abc_id)
.select(self.table_hij.fiz, self.table_hij.buz)
)
self.assertEqual(
'INSERT INTO "abc" ("c1","c2","c3","c4") '
'SELECT "efg"."foo","efg"."bar","hij"."fiz","hij"."buz" FROM "efg" '
'JOIN "hij" ON "efg"."id"="hij"."abc_id"',
str(query),
)
class InsertSubqueryTests(unittest.TestCase):
def test_insert_subquery_wrapped_in_brackets(self):
purchase_order_item, part = Tables("purchase_order_item", "part")
q = (
Query.into(purchase_order_item)
.columns(purchase_order_item.id_part, purchase_order_item.id_customer)
.insert(
Query.from_(part)
.select(part.part_id)
.where(part.part_number == "FOOBAR"),
12345,
)
)
self.assertEqual(
'INSERT INTO "purchase_order_item" '
'("id_part","id_customer") '
"VALUES "
'((SELECT "part_id" FROM "part" WHERE "part_number"=\'FOOBAR\'),12345)',
str(q),
)
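As the expected SQL shows, a subquery used as an inserted value gets wrapped in an extra pair of parentheses while scalars are emitted as-is. A simplified sketch of that rendering decision (a hypothetical helper operating on already-rendered value strings; real pypika works on QueryBuilder objects, not strings):

```python
def render_insert_values(values):
    rendered = []
    for value in values:
        if isinstance(value, str) and value.lstrip().upper().startswith("SELECT"):
            rendered.append("(" + value + ")")  # subquery: parenthesize
        else:
            rendered.append(str(value))  # scalar: emit as-is
    return "VALUES (" + ",".join(rendered) + ")"

sql = render_insert_values(
    ['SELECT "part_id" FROM "part" WHERE "part_number"=\'FOOBAR\'', 12345]
)
print(sql)
# VALUES ((SELECT "part_id" FROM "part" WHERE "part_number"='FOOBAR'),12345)
```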
class SelectIntoTests(unittest.TestCase):
table_abc, table_efg, table_hij = Tables("abc", "efg", "hij")
def test_select_star_into(self):
query = Query.from_(self.table_abc).select("*").into(self.table_efg)
self.assertEqual('SELECT * INTO "efg" FROM "abc"', str(query))
def test_select_columns_into(self):
query = (
Query.from_(self.table_abc)
.select(self.table_abc.foo, self.table_abc.bar, self.table_abc.buz)
.into(self.table_efg)
)
self.assertEqual('SELECT "foo","bar","buz" INTO "efg" FROM "abc"', str(query))
def test_select_columns_into_with_join(self):
query = (
Query.from_(self.table_abc)
.select(self.table_abc.foo, self.table_abc.bar)
.join(self.table_hij)
.on(self.table_abc.id == self.table_hij.abc_id)
.select(self.table_hij.fiz, self.table_hij.buz)
.into(self.table_efg)
)
self.assertEqual(
'SELECT "abc"."foo","abc"."bar","hij"."fiz","hij"."buz" '
'INTO "efg" FROM "abc" '
'JOIN "hij" ON "abc"."id"="hij"."abc_id"',
str(query),
)
class ReplaceTests(unittest.TestCase):
table_abc, table_efg = Tables("abc", "efg")
def test_replace_simple(self):
query = Query.into(self.table_abc).replace("v1", "v2", "v3")
expected_output = "REPLACE INTO \"abc\" VALUES ('v1','v2','v3')"
self.assertEqual(str(query), expected_output)
def test_replace_subquery(self):
query = Query.into(self.table_abc).replace(
Query.from_(self.table_efg).select("f1", "f2")
)
expected_output = 'REPLACE INTO "abc" VALUES ((SELECT "f1","f2" FROM "efg"))'
self.assertEqual(str(query), expected_output)
# Source: workflows/coverage_simulations_and_bias_reduction/gw/pipeline.py
# Repo: montefiore-ai/averting-a-crisis-in-simulation-based-inference (BSD-3-Clause)
import argparse
import hypothesis.workflow as w
import logging
import numpy as np
import os
import shutil
from hypothesis.workflow import shell
from ratio_estimation import load_estimator, train_flow_sbi
from util import coverage_of_estimator, compute_sbc
from util import measure_diagnostic
from util import simulate
# Argument parsing
parser = argparse.ArgumentParser()
parser.add_argument("--redo", action="store_true", help="Executes the workflow from scratch by removing all postconditions (default: false).")
parser.add_argument("--simulate-test-n", type=int, default=100000, help="Number of testing simulations (default: 100 000).")
parser.add_argument("--simulate-train-n", type=int, default=200000, help="Number of training simulations (default: 10 000 000).")
parser.add_argument("--slurm", action="store_true", help="Execute the workflow on a Slurm-enabled HPC system (default: false).")
parser.add_argument("--test", action="store_true", help="Execute the workflow with fast hyper parameters for testing (default: false).")
parser.add_argument("--only_flows", action="store_true", help="Execute only the flow part of the workflow (default: false).")
arguments, _ = parser.parse_known_args()
### BEGIN Pre-workflow #########################################################
# Pipeline constants
root = os.path.dirname(os.path.abspath(__file__))
datadir = root + "/data"
outputdir = root + "/output"
# Hyperparameters
learning_rate = 0.001
if arguments.test:
batch_size = 32
num_ensembles = 2
epochs = 2
simulations = [2 ** n for n in range(10, 11)]
credible_interval_levels = [0.9, 0.95]
simulation_block_size = 10
arguments.simulate_train_n = 3000
arguments.simulate_test_n = 20
sbc_nb_rank_samples = 10
sbc_nb_posterior_samples = 100
diagnostic_n = 10
else:
batch_size = 64
num_ensembles = 5
epochs = 100
simulations = [2 ** n for n in range(10, 18)]
credible_interval_levels = [x/20 for x in range(1, 20)]
simulation_block_size = 10000
sbc_nb_rank_samples = 10000
sbc_nb_posterior_samples = 1000
diagnostic_n = 100000
assert arguments.simulate_train_n % simulation_block_size == 0
assert arguments.simulate_test_n % simulation_block_size == 0
num_train_blocks = int(arguments.simulate_train_n / simulation_block_size)
num_test_blocks = int(arguments.simulate_test_n / simulation_block_size)
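The settings above reduce to simple arithmetic: budgets are powers of two, credible-interval levels form a 0.05 grid, and each dataset splits into fixed-size blocks, one per Slurm task. A standalone sketch mirroring the non-test values (the `num_blocks` helper name is an illustration, not part of the pipeline):

```python
simulation_budgets = [2 ** n for n in range(10, 18)]  # 1024 .. 131072
ci_levels = [x / 20 for x in range(1, 20)]  # 0.05 .. 0.95

def num_blocks(total, block_size):
    # Divisibility is asserted so every task owns exactly one full block.
    assert total % block_size == 0, "budget must be a multiple of the block size"
    return total // block_size

print(simulation_budgets[0], simulation_budgets[-1])  # 1024 131072
print(num_blocks(200000, 10000))  # 20
```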
# Check if everything needs to be cleaned.
if arguments.redo:
shutil.rmtree(datadir, ignore_errors=True)
shutil.rmtree(outputdir, ignore_errors=True)
# Simulation arguments
datadir_simulate_test = root + "/data/test"
datadir_simulate_train = root + "/data/train"
### END Pre-workflow ###########################################################
### BEGIN Workflow definition ##################################################
@w.root
def main():
# Prepare simulation environment
if not os.path.exists(datadir_simulate_train):
logging.info("Creating training data directory.")
os.makedirs(datadir_simulate_train)
if not os.path.exists(datadir_simulate_test):
logging.info("Creating testing data directory.")
os.makedirs(datadir_simulate_test)
# Prepare the output directory
if not os.path.exists(outputdir):
logging.info("Creating the output directory.")
os.makedirs(outputdir)
@w.dependency(main)
@w.postcondition(w.num_files(datadir_simulate_train + "/block-*/inputs.npy", num_train_blocks))
@w.postcondition(w.num_files(datadir_simulate_train + "/block-*/outputs.npy", num_train_blocks))
@w.slurm.cpu_and_memory(1, "8g")
@w.slurm.timelimit("01:00:00")
@w.tasks(num_train_blocks)
def simulate_train(task_index):
output_directory = datadir_simulate_train + "/block-" + str(task_index).zfill(5)
# Check if the data has already been simulated
dont_simulate = True
dont_simulate &= os.path.exists(output_directory + "/inputs.npy")
dont_simulate &= os.path.exists(output_directory + "/outputs.npy")
if not dont_simulate:
logging.info("Simulating training data block (" + str(task_index + 1) + " / " + str(num_train_blocks) + ")")
simulate(n=simulation_block_size, directory=output_directory)
@w.dependency(main)
@w.postcondition(w.num_files(datadir_simulate_test + "/block-*/inputs.npy", num_test_blocks))
@w.postcondition(w.num_files(datadir_simulate_test + "/block-*/outputs.npy", num_test_blocks))
@w.slurm.cpu_and_memory(1, "8g")
@w.slurm.timelimit("01:00:00")
@w.tasks(num_test_blocks)
def simulate_test(task_index):
output_directory = datadir_simulate_test + "/block-" + str(task_index).zfill(5)
# Check if the data has already been simulated
dont_simulate = True
dont_simulate &= os.path.exists(output_directory + "/inputs.npy")
dont_simulate &= os.path.exists(output_directory + "/outputs.npy")
if not dont_simulate:
logging.info("Simulating testing data block (" + str(task_index + 1) + " / " + str(num_test_blocks) + ")")
simulate(n=simulation_block_size, directory=output_directory)
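Both simulation tasks use the same guard: a block directory named with a zero-padded index is recomputed only when one of its output files is missing, which makes reruns of the workflow idempotent. A self-contained sketch of that pattern (`block_dir` and `needs_simulation` are hypothetical helper names):

```python
import os
import tempfile

def block_dir(base, index):
    # Zero-padding keeps lexicographic and numeric order identical.
    return base + "/block-" + str(index).zfill(5)

def needs_simulation(directory, filenames=("inputs.npy", "outputs.npy")):
    return not all(os.path.exists(os.path.join(directory, f)) for f in filenames)

with tempfile.TemporaryDirectory() as base:
    target = block_dir(base, 7)
    os.makedirs(target)
    before = needs_simulation(target)  # no outputs yet -> True
    for name in ("inputs.npy", "outputs.npy"):
        open(os.path.join(target, name), "wb").close()
    after = needs_simulation(target)  # both present -> False

print(block_dir("data/train", 7))  # data/train/block-00007
print(before, after)  # True False
```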
@w.dependency(simulate_train)
@w.postcondition(w.exists(datadir_simulate_train + "/inputs.npy"))
@w.postcondition(w.exists(datadir_simulate_train + "/outputs.npy"))
@w.slurm.cpu_and_memory(1, "32g")
@w.slurm.timelimit("01:00:00")
def merge_train():
logging.info("Merging training data.")
cwd = os.getcwd()
os.chdir(root)
shell("hypothesis merge --extension numpy --dimension 0 --in-memory --files 'data/train/block-*/inputs.npy' --sort --out data/train/inputs.npy")
shell("hypothesis merge --extension numpy --dimension 0 --in-memory --files 'data/train/block-*/outputs.npy' --sort --out data/train/outputs.npy")
shell("rm -rf data/train/block-*")
os.chdir(cwd)
@w.dependency(simulate_test)
@w.postcondition(w.exists(datadir_simulate_test + "/inputs.npy"))
@w.postcondition(w.exists(datadir_simulate_test + "/outputs.npy"))
@w.slurm.cpu_and_memory(1, "16g")
@w.slurm.timelimit("01:00:00")
def merge_test():
logging.info("Merging testing data.")
cwd = os.getcwd()
os.chdir(root)
shell("hypothesis merge --extension numpy --dimension 0 --in-memory --files 'data/test/block-*/inputs.npy' --sort --out data/test/inputs.npy")
shell("hypothesis merge --extension numpy --dimension 0 --in-memory --files 'data/test/block-*/outputs.npy' --sort --out data/test/outputs.npy")
shell("rm -rf data/test/block-*")
os.chdir(cwd)
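The merge steps glob per-block files and concatenate them along dimension 0 in sorted order; the zero-padded block names are what make lexicographic sorting agree with numeric order. A numpy-free sketch of the same idea on plain lists (`merge_blocks` is an illustrative helper, not the `hypothesis merge` CLI):

```python
def merge_blocks(blocks):
    merged = []
    for name in sorted(blocks):  # 'block-00000' < 'block-00001' < ...
        merged.extend(blocks[name])
    return merged

blocks = {"block-00001": [3, 4], "block-00000": [1, 2]}
print(merge_blocks(blocks))  # [1, 2, 3, 4]
```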
dependencies = []
def evaluate_point_classifier(simulation_budget, cl_list=[0.95]):
r"""Trains classifier ratio estimators at the given simulation budget and evaluates their coverage, SBC, and expectation diagnostic."""
storagedir = outputdir + "/" + str(simulation_budget)
@w.dependency(simulate_test)
@w.dependency(merge_train)
@w.postcondition(w.num_files(storagedir + "/mlp-0*/weights.th", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def train_ratio_estimator(task_index):
resultdir = storagedir + "/mlp-" + str(task_index).zfill(5)
os.makedirs(resultdir, exist_ok=True)
if not os.path.exists(resultdir + "/weights.th"):
logging.info("Training classifier ratio estimator ({index} / {n}) for the GW problem.".format(index=task_index + 1, n=num_ensembles))
logging.info("Using the following hyper parameters:")
logging.info(" - batch-size : " + str(batch_size))
logging.info(" - epochs : " + str(epochs))
logging.info(" - learning-rate : 0.001")
logging.info(" - simulations : " + str(simulation_budget))
command = r"""python -m hypothesis.bin.ratio_estimation.train --batch-size {batch_size} \
--data-test ratio_estimation.DatasetJointTest \
--data-train ratio_estimation.DatasetJointTrain{simulations} \
--epochs {epochs} \
--estimator ratio_estimation.ClassifierRatioEstimator \
--hooks hooks.add \
--lr {lr} \
--lrsched-on-plateau \
--out {out} \
--show""".format(
batch_size=batch_size,
epochs=epochs,
simulations=simulation_budget,
lr=0.001,
out=resultdir)
command += " --criterion hypothesis.nn.ratio_estimation.BaseCriterion"
# Execute the training procedure
shell(command)
@w.dependency(train_ratio_estimator)
@w.postcondition(w.num_files(storagedir + "/mlp-0*/coverage.npy", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
@w.tasks(num_ensembles)
def coverage_individual(task_index):
resultdir = storagedir + "/mlp-" + str(task_index).zfill(5)
if not os.path.exists(resultdir + "/coverage.npy"):
query = resultdir + "/weights.th"
coverage = coverage_of_estimator(query, cl_list=cl_list, max_samples=10000)
np.save(resultdir + "/coverage.npy", coverage)
@w.dependency(train_ratio_estimator)
@w.postcondition(w.exists(storagedir + "/coverage-classifier.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def coverage_ensemble():
if not os.path.exists(storagedir + "/coverage-classifier.npy"):
query = storagedir + "/mlp-0*/weights.th"
coverage = coverage_of_estimator(query, cl_list=cl_list, max_samples=10000)
np.save(storagedir + "/coverage-classifier.npy", coverage)
@w.dependency(train_ratio_estimator)
@w.postcondition(w.num_files(storagedir + "/mlp-0*/diagnostic.npy", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("00:15:00")
@w.tasks(num_ensembles)
def diagnostic_individual(task_index):
resultdir = storagedir + "/mlp-" + str(task_index).zfill(5)
outputfile = resultdir + "/diagnostic.npy"
if not os.path.exists(outputfile):
query = resultdir + "/weights.th"
r = load_estimator(query)
result = measure_diagnostic(r, n=diagnostic_n)
np.save(outputfile, result)
@w.dependency(train_ratio_estimator)
@w.postcondition(w.exists(storagedir + "/diagnostic-mlp.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("00:15:00")
def diagnostic_ensemble():
resultdir = storagedir
outputfile = resultdir + "/diagnostic-mlp.npy"
if not os.path.exists(outputfile):
query = resultdir + "/mlp-0*/weights.th"
r = load_estimator(query)
result = measure_diagnostic(r, n=diagnostic_n)
np.save(outputfile, result)
@w.dependency(train_ratio_estimator)
@w.postcondition(w.num_files(storagedir + "/mlp-0*/sbc.npy", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
@w.tasks(num_ensembles)
def sbc_individual(task_index):
resultdir = storagedir + "/mlp-" + str(task_index).zfill(5)
if not os.path.exists(resultdir + "/sbc.npy"):
query = resultdir + "/weights.th"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, resultdir + "/sbc.npy")
@w.dependency(train_ratio_estimator)
@w.postcondition(w.exists(storagedir + "/sbc-classifier.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def sbc_ensemble():
if not os.path.exists(storagedir + "/sbc-classifier.npy"):
query = storagedir + "/mlp-0*/weights.th"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, storagedir + "/sbc-classifier.npy")
# Add the dependencies for the summary notebook.
dependencies.append(diagnostic_individual)
dependencies.append(diagnostic_ensemble)
dependencies.append(coverage_individual)
dependencies.append(coverage_ensemble)
dependencies.append(sbc_individual)
dependencies.append(sbc_ensemble)
@w.dependency(simulate_test)
@w.dependency(merge_train)
@w.postcondition(w.num_files(storagedir + "/mlp-bagging-0*/weights.th", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def train_ratio_estimator_bagging(task_index):
resultdir = storagedir + "/mlp-bagging-" + str(task_index).zfill(5)
os.makedirs(resultdir, exist_ok=True)
if not os.path.exists(resultdir + "/weights.th"):
logging.info("Training classifier ratio estimator ({index} / {n}) for the GW problem.".format(index=task_index + 1, n=num_ensembles))
logging.info("Using the following hyper parameters:")
logging.info(" - batch-size : " + str(batch_size))
logging.info(" - epochs : " + str(epochs))
logging.info(" - learning-rate : 0.001")
logging.info(" - simulations : " + str(simulation_budget))
command = r"""python -m hypothesis.bin.ratio_estimation.train --batch-size {batch_size} \
--data-test ratio_estimation.DatasetJointTest \
--data-train ratio_estimation.DatasetJointTrainBagging{simulations} \
--epochs {epochs} \
--estimator ratio_estimation.ClassifierRatioEstimator \
--hooks hooks.add \
--lr {lr} \
--lrsched-on-plateau \
--out {out} \
--show""".format(
batch_size=batch_size,
epochs=epochs,
simulations=simulation_budget,
lr=0.001,
out=resultdir)
command += " --criterion hypothesis.nn.ratio_estimation.BaseCriterion"
# Execute the training procedure
shell(command)
@w.dependency(train_ratio_estimator_bagging)
@w.postcondition(w.exists(storagedir + "/coverage-classifier-bagging.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def coverage_ensemble_bagging():
if not os.path.exists(storagedir + "/coverage-classifier-bagging.npy"):
query = storagedir + "/mlp-bagging-0*/weights.th"
coverage = coverage_of_estimator(query, cl_list=cl_list, max_samples=10000)
np.save(storagedir + "/coverage-classifier-bagging.npy", coverage)
@w.dependency(train_ratio_estimator_bagging)
@w.postcondition(w.exists(storagedir + "/diagnostic-mlp-bagging.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("00:15:00")
def diagnostic_ensemble_bagging():
resultdir = storagedir
outputfile = resultdir + "/diagnostic-mlp-bagging.npy"
if not os.path.exists(outputfile):
query = resultdir + "/mlp-bagging-0*/weights.th"
r = load_estimator(query)
result = measure_diagnostic(r, n=diagnostic_n)
np.save(outputfile, result)
@w.dependency(train_ratio_estimator_bagging)
@w.postcondition(w.exists(storagedir + "/sbc-classifier-bagging.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def sbc_ensemble_bagging():
if not os.path.exists(storagedir + "/sbc-classifier-bagging.npy"):
query = storagedir + "/mlp-bagging-0*/weights.th"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, storagedir + "/sbc-classifier-bagging.npy")
# Add the dependencies for the summary notebook.
dependencies.append(diagnostic_ensemble_bagging)
dependencies.append(coverage_ensemble_bagging)
dependencies.append(sbc_ensemble_bagging)
@w.dependency(simulate_test)
@w.dependency(merge_train)
@w.postcondition(w.num_files(storagedir + "/mlp-static-0*/weights.th", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def train_ratio_estimator_static(task_index):
resultdir = storagedir + "/mlp-static-" + str(task_index).zfill(5)
os.makedirs(resultdir, exist_ok=True)
if not os.path.exists(resultdir + "/weights.th"):
logging.info("Training classifier ratio estimator ({index} / {n}) for the GW problem.".format(index=task_index + 1, n=num_ensembles))
logging.info("Using the following hyper parameters:")
logging.info(" - batch-size : " + str(batch_size))
logging.info(" - epochs : " + str(epochs))
logging.info(" - learning-rate : 0.001")
logging.info(" - simulations : " + str(simulation_budget))
command = r"""python -m hypothesis.bin.ratio_estimation.train --batch-size {batch_size} \
--data-test ratio_estimation.DatasetJointTest \
--data-train ratio_estimation.DatasetJointTrainStatic{simulations} \
--epochs {epochs} \
--estimator ratio_estimation.ClassifierRatioEstimator \
--hooks hooks.add \
--lr {lr} \
--lrsched-on-plateau \
--out {out} \
--show""".format(
batch_size=batch_size,
epochs=epochs,
simulations=simulation_budget,
lr=0.001,
out=resultdir)
command += " --criterion hypothesis.nn.ratio_estimation.BaseCriterion"
# Execute the training procedure
shell(command)
@w.dependency(train_ratio_estimator_static)
@w.postcondition(w.exists(storagedir + "/coverage-classifier-static.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def coverage_ensemble_static():
if not os.path.exists(storagedir + "/coverage-classifier-static.npy"):
query = storagedir + "/mlp-static-0*/weights.th"
coverage = coverage_of_estimator(query, cl_list=cl_list, max_samples=10000)
np.save(storagedir + "/coverage-classifier-static.npy", coverage)
@w.dependency(train_ratio_estimator_static)
@w.postcondition(w.exists(storagedir + "/diagnostic-mlp-static.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("00:15:00")
def diagnostic_ensemble_static():
resultdir = storagedir
outputfile = resultdir + "/diagnostic-mlp-static.npy"
if not os.path.exists(outputfile):
query = resultdir + "/mlp-static-0*/weights.th"
r = load_estimator(query)
result = measure_diagnostic(r, n=diagnostic_n)
np.save(outputfile, result)
@w.dependency(train_ratio_estimator_static)
@w.postcondition(w.exists(storagedir + "/sbc-classifier-static.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def sbc_ensemble_static():
if not os.path.exists(storagedir + "/sbc-classifier-static.npy"):
query = storagedir + "/mlp-static-0*/weights.th"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, storagedir + "/sbc-classifier-static.npy")
# Add the dependencies for the summary notebook.
dependencies.append(diagnostic_ensemble_static)
dependencies.append(coverage_ensemble_static)
dependencies.append(sbc_ensemble_static)
def evaluate_point_flow_sbi(simulation_budget, storagedir=None, cl_list=[0.95]):
if storagedir is None:
storagedir = outputdir + "/" + str(simulation_budget) + "/without-regularization"
@w.dependency(simulate_test)
@w.dependency(merge_train)
@w.postcondition(w.num_files(storagedir + "/flow-sbi-0*/posterior.pkl", num_ensembles))
@w.slurm.cpu_and_memory(4, "64g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def train_flow(task_index):
resultdir = storagedir + "/flow-sbi-" + str(task_index).zfill(5)
os.makedirs(resultdir, exist_ok=True)
train_flow_sbi(simulation_budget, epochs, learning_rate, batch_size, resultdir, task_index)
@w.dependency(train_flow)
@w.postcondition(w.num_files(storagedir + "/flow-sbi-0*/coverage.npy", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def coverage_individual(task_index):
resultdir = storagedir + "/flow-sbi-" + str(task_index).zfill(5)
if not os.path.exists(resultdir + "/coverage.npy"):
query = resultdir + "/posterior.pkl"
coverage = coverage_of_estimator(query, cl_list=cl_list, flow_sbi=True, max_samples=10000)
np.save(resultdir + "/coverage.npy", coverage)
@w.dependency(train_flow)
@w.postcondition(w.exists(storagedir + "/coverage-flow-sbi.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
def coverage_ensemble():
if not os.path.exists(storagedir + "/coverage-flow-sbi.npy"):
query = storagedir + "/flow-sbi-0*/posterior.pkl"
coverage = coverage_of_estimator(query, cl_list=cl_list, flow_sbi=True, max_samples=5000)
np.save(storagedir + "/coverage-flow-sbi.npy", coverage)
@w.dependency(train_flow)
@w.postcondition(w.num_files(storagedir + "/flow-sbi-0*/sbc.npy", num_ensembles))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
@w.tasks(num_ensembles)
def sbc_individual(task_index):
resultdir = storagedir + "/flow-sbi-" + str(task_index).zfill(5)
if not os.path.exists(resultdir + "/sbc.npy"):
query = resultdir + "/posterior.pkl"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, resultdir + "/sbc.npy", flow_sbi=True)
@w.dependency(train_flow)
@w.postcondition(w.exists(storagedir + "/sbc-flow-sbi.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def sbc_ensemble():
if not os.path.exists(storagedir + "/sbc-flow-sbi.npy"):
query = storagedir + "/flow-sbi-0*/posterior.pkl"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, storagedir + "/sbc-flow-sbi.npy", flow_sbi=True)
# Add the dependencies for the summary notebook.
dependencies.append(coverage_individual)
dependencies.append(coverage_ensemble)
dependencies.append(sbc_individual)
dependencies.append(sbc_ensemble)
@w.dependency(simulate_test)
@w.dependency(merge_train)
@w.postcondition(w.num_files(storagedir + "/flow-sbi-bagging-0*/posterior.pkl", num_ensembles))
@w.slurm.cpu_and_memory(4, "64g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def train_flow_bagging(task_index):
resultdir = storagedir + "/flow-sbi-bagging-" + str(task_index).zfill(5)
os.makedirs(resultdir, exist_ok=True)
train_flow_sbi(simulation_budget, epochs, learning_rate, batch_size, resultdir, task_index, bagging=True)
@w.dependency(train_flow_bagging)
@w.postcondition(w.exists(storagedir + "/coverage-flow-sbi-bagging.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
def coverage_ensemble_bagging():
if not os.path.exists(storagedir + "/coverage-flow-sbi-bagging.npy"):
query = storagedir + "/flow-sbi-bagging-0*/posterior.pkl"
coverage = coverage_of_estimator(query, cl_list=cl_list, flow_sbi=True, max_samples=5000)
np.save(storagedir + "/coverage-flow-sbi-bagging.npy", coverage)
@w.dependency(train_flow_bagging)
@w.postcondition(w.exists(storagedir + "/sbc-flow-sbi-bagging.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def sbc_ensemble_bagging():
if not os.path.exists(storagedir + "/sbc-flow-sbi-bagging.npy"):
query = storagedir + "/flow-sbi-bagging-0*/posterior.pkl"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, storagedir + "/sbc-flow-sbi-bagging.npy", flow_sbi=True)
# Add the dependencies for the summary notebook.
dependencies.append(coverage_ensemble_bagging)
dependencies.append(sbc_ensemble_bagging)
@w.dependency(simulate_test)
@w.dependency(merge_train)
@w.postcondition(w.num_files(storagedir + "/flow-sbi-static-0*/posterior.pkl", num_ensembles))
@w.slurm.cpu_and_memory(4, "64g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
@w.tasks(num_ensembles)
def train_flow_static(task_index):
resultdir = storagedir + "/flow-sbi-static-" + str(task_index).zfill(5)
os.makedirs(resultdir, exist_ok=True)
train_flow_sbi(simulation_budget, epochs, learning_rate, batch_size, resultdir, task_index, static=True)
@w.dependency(train_flow_static)
@w.postcondition(w.exists(storagedir + "/coverage-flow-sbi-static.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("72:00:00")
def coverage_ensemble_static():
if not os.path.exists(storagedir + "/coverage-flow-sbi-static.npy"):
query = storagedir + "/flow-sbi-static-0*/posterior.pkl"
coverage = coverage_of_estimator(query, cl_list=cl_list, flow_sbi=True, max_samples=5000)
np.save(storagedir + "/coverage-flow-sbi-static.npy", coverage)
@w.dependency(train_flow_static)
@w.postcondition(w.exists(storagedir + "/sbc-flow-sbi-static.npy"))
@w.slurm.cpu_and_memory(4, "32g")
@w.slurm.gpu(1)
@w.slurm.timelimit("48:00:00")
def sbc_ensemble_static():
if not os.path.exists(storagedir + "/sbc-flow-sbi-static.npy"):
query = storagedir + "/flow-sbi-static-0*/posterior.pkl"
compute_sbc(query, sbc_nb_rank_samples, sbc_nb_posterior_samples, storagedir + "/sbc-flow-sbi-static.npy", flow_sbi=True)
# Add the dependencies for the summary notebook.
dependencies.append(coverage_ensemble_static)
dependencies.append(sbc_ensemble_static)
for simulation_budget in simulations:
if arguments.only_flows:
evaluate_point_flow_sbi(simulation_budget, cl_list=credible_interval_levels)
else:
evaluate_point_classifier(simulation_budget, cl_list=credible_interval_levels)
evaluate_point_flow_sbi(simulation_budget, cl_list=credible_interval_levels)
### END Workflow definition ####################################################
# Execute the workflow
if __name__ == "__main__":
if arguments.slurm:
w.slurm.execute(directory=root)
else:
w.local.execute()
# Source: gathernomics/models/__init__.py (repo: SuperOxigen/CanDev-Finance-Canada, license: MIT)
"""Restaurant Site - Gathernomics Model Init.
Copyright (c) 2018 Alex Dale
See LICENSE for information.
"""
from gathernomics.models.factor import TemporalFrequency, FinancialFactor
from gathernomics.models.sourcetbl import SourceTableType, SourceTable
1320efd0b549d7b0c0ef436433ad19310534e568 | active_rl/utils/KGW_memory.py | Tony-Cheng/Active-Reinforcement-Learning | MIT

from collections import namedtuple
import random
import torch
import cv2


class ReplayMemory(object):
    def __init__(self, capacity, state_shape, n_actions, device):
        c, h, w = state_shape
        self.capacity = capacity
        self.device = device
        self.m_states = torch.zeros((capacity, c, h, w), dtype=torch.uint8)
        self.m_actions = torch.zeros((capacity, 1), dtype=torch.long)
        self.m_rewards = torch.zeros((capacity, 1), dtype=torch.int8)
        self.m_dones = torch.zeros((capacity, 1), dtype=torch.bool)
        self.position = 0
        self.size = 0

    def push(self, state, action, reward, done):
        """Saves a transition."""
        self.m_states[self.position] = state  # 5,84,84
        self.m_actions[self.position, 0] = action
        self.m_rewards[self.position, 0] = reward
        self.m_dones[self.position, 0] = done
        self.position = (self.position + 1) % self.capacity
        self.size = max(self.size, self.position)

    def sample(self, bs):
        i = torch.randint(0, high=self.size, size=(bs,))
        bs = self.m_states[i, :32].to(self.device)
        bns = self.m_states[i, 8:].to(self.device)
        ba = self.m_actions[i].to(self.device)
        br = self.m_rewards[i].to(self.device).float()
        bd = self.m_dones[i].to(self.device).float()
        return bs, ba, br, bns, bd

    def __len__(self):
        return self.size


class RankedReplayMemory(object):
    def __init__(self, capacity, state_shape, n_actions, rank_func, AMN_net, replacement=False, device='cuda'):
        c, h, w = state_shape
        self.capacity = capacity
        self.device = device
        self.m_states = torch.zeros((capacity, c, h, w), dtype=torch.uint8)
        self.m_actions = torch.zeros((capacity, 1), dtype=torch.long)
        self.m_rewards = torch.zeros((capacity, 1), dtype=torch.int8)
        self.m_dones = torch.zeros((capacity, 1), dtype=torch.bool)
        self.position = 0
        self.size = 0
        self.rank_func = rank_func
        self.AMN_net = AMN_net
        self.replacement = replacement

    def push(self, state, action, reward, done):
        """Saves a transition."""
        self.m_states[self.position] = state  # 5,84,84
        self.m_actions[self.position, 0] = action
        self.m_rewards[self.position, 0] = reward
        self.m_dones[self.position, 0] = done
        self.position = (self.position + 1) % self.capacity
        self.size = max(self.size, self.position)

    def sample(self, percentage=0.1):
        _, i = torch.sort(self.rank_func(
            self.AMN_net, self.m_states[: self.size, :32], device=self.device), descending=True)
        i = i[: int(percentage * self.size)]
        i = i[torch.randperm(i.shape[0])]
        # i = torch.randint(0, high=self.size, size=(bs,))
        bs = self.m_states[i, :32]
        bns = self.m_states[i, 8:]
        ba = self.m_actions[i]
        br = self.m_rewards[i].float()
        bd = self.m_dones[i].float()
        return bs, ba, br, bns, bd

    def __len__(self):
        return self.size
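Both classes above share the same circular-buffer bookkeeping in `push`. Stripped of the torch tensors, the position/size logic can be sketched with a plain list; note that, exactly as in the original, `size` is derived from the high-water mark of `position`, so it stops one short of `capacity` once the buffer wraps:

```python
class RingBuffer:
    """Plain-list sketch of the push bookkeeping used by ReplayMemory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.position = 0
        self.size = 0

    def push(self, item):
        self.data[self.position] = item
        self.position = (self.position + 1) % self.capacity
        # Mirrors the original: size tracks the high-water mark of position.
        self.size = max(self.size, self.position)

    def __len__(self):
        return self.size


buf = RingBuffer(3)
for item in "abcde":
    buf.push(item)
# After wrapping, the oldest slots are overwritten in place.
```

Pushing five items into a capacity-3 buffer leaves `data == ['d', 'e', 'c']` but `len(buf) == 2`, since `size` is reset toward `position` after every wrap.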
133cbdd9e9f65b5a260e890a56937bc2b4583393 | DMOJ/saoj.py | zzh8829/CompetitiveProgramming | MIT

print((lambda n:n*(n+1)*(2*n+1)*(3*n**4+6*n**3-3*n+1)//42)(int(input()))%int(1e99))
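The one-liner above reads `n`, evaluates a polynomial in closed form, and prints the result modulo 10**99. The polynomial matches the standard closed form for the sum of sixth powers (an inference from the formula; the source itself does not say so):

```python
def sum_sixth_powers(n):
    # Closed form for 1**6 + 2**6 + ... + n**6.
    return n * (n + 1) * (2 * n + 1) * (3 * n**4 + 6 * n**3 - 3 * n + 1) // 42


# Cross-check the closed form against a brute-force sum.
ok = all(sum_sixth_powers(n) == sum(k**6 for k in range(1, n + 1)) for n in range(1, 50))
```

Integer division by 42 is exact here because the numerator is always divisible by 42, which is why the one-liner can use `//` safely.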
1359ba3f9bc763fb1faf34cbc0528c65794848e7 | app/tests/test_timestamp.py | require-id/core | MIT

import datetime
import pytest

from app.shared.utils import convert_timestamp


@pytest.mark.asyncio
async def test_timestamp(loop):
    assert convert_timestamp('1984-08-01T22:30:20.004711Z') == datetime.datetime(1984, 8, 1, 22, 30, 20, 4711)
    assert convert_timestamp('2019-02-28 10:22:30') == datetime.datetime(2019, 2, 28, 10, 22, 30, 0)
    assert convert_timestamp('2016-01-01 00:00:01 UTC') == datetime.datetime(2016, 1, 1, 0, 0, 1, 0)
    assert convert_timestamp('2016-01-01 00:00:01 CEST') is None
    assert convert_timestamp('2016-02-29T00:00:00.100000Z') == datetime.datetime(2016, 2, 29, 0, 0, 0, 100000)
    assert convert_timestamp('2016-02-29T00:00:00Z') == datetime.datetime(2016, 2, 29, 0, 0, 0, 0)
    assert convert_timestamp('2017-02-29T00:00:00.000000Z') is None
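The `convert_timestamp` under test lives in `app.shared.utils` and is not shown here. A hypothetical implementation that would satisfy the assertions above can simply try a handful of `strptime` layouts and fall back to `None` on anything unparseable (including invalid dates such as Feb 29 in a non-leap year):

```python
import datetime


def convert_timestamp(value):
    # Hypothetical sketch, not the actual app.shared.utils implementation.
    formats = (
        "%Y-%m-%dT%H:%M:%S.%fZ",
        "%Y-%m-%dT%H:%M:%SZ",
        "%Y-%m-%d %H:%M:%S UTC",
        "%Y-%m-%d %H:%M:%S",
    )
    for fmt in formats:
        try:
            return datetime.datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None
```

Unknown timezone suffixes like `CEST` match none of the layouts, so they return `None` rather than a silently wrong naive datetime.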
135f58b301fddf1b14c3125d581eff16126e44b9 | src/lib/xmllib.py | DTenore/skulpt | MIT

import _sk_fail; _sk_fail._("xmllib")
136b101b48418dab6dbff447803055f5353801df | packages/pyright-internal/src/tests/samples/protocol3.py | sasano8/pyright | MIT

# This sample tests the assignment of protocols that
# include property declarations.

from typing import Protocol


class Foo1(Protocol):
    @property
    def batch_shape(self) -> int:
        return 0


class MockFoo1:
    def __init__(self, batch_shape: int):
        self._batch_shape = batch_shape

    @property
    def batch_shape(self) -> int:
        return self._batch_shape


# This should not generate an error.
d: Foo1 = MockFoo1(batch_shape=1)


class Foo2(Protocol):
    @property
    def batch_shape(self) -> int:
        return 0


class MockFoo2:
    def __init__(self, batch_shape: int):
        self._batch_shape = batch_shape

    @property
    def batch_shape(self) -> float:
        return self._batch_shape


# This should generate an error because the
# type of the batch_shape property is not compatible.
e: Foo2 = MockFoo2(batch_shape=1)


class Foo3(Protocol):
    @property
    def batch_shape(self) -> int:
        return 0

    @batch_shape.setter
    def batch_shape(self, value: int) -> None:
        pass


class MockFoo3:
    def __init__(self, batch_shape: int):
        self._batch_shape = batch_shape

    @property
    def batch_shape(self) -> int:
        return self._batch_shape


# This should generate an error because it is missing
# a setter.
f: Foo3 = MockFoo3(batch_shape=1)


class Foo4(Protocol):
    @property
    def batch_shape(self) -> int:
        return 0

    @batch_shape.deleter
    def batch_shape(self) -> None:
        pass


class MockFoo4:
    def __init__(self, batch_shape: int):
        self._batch_shape = batch_shape

    @property
    def batch_shape(self) -> int:
        return self._batch_shape

    @batch_shape.setter
    def batch_shape(self, value: int) -> None:
        pass


# This should generate an error because it is missing
# a deleter.
g: Foo4 = MockFoo4(batch_shape=1)
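For contrast with the failing cases in the sample above, a hypothetical mock that does satisfy the setter-bearing protocol needs both accessors with matching types. Restating the `Foo3` protocol so the example is self-contained:

```python
from typing import Protocol


class Foo3(Protocol):
    @property
    def batch_shape(self) -> int:
        return 0

    @batch_shape.setter
    def batch_shape(self, value: int) -> None:
        pass


class GoodMockFoo3:
    def __init__(self, batch_shape: int):
        self._batch_shape = batch_shape

    @property
    def batch_shape(self) -> int:
        return self._batch_shape

    @batch_shape.setter
    def batch_shape(self, value: int) -> None:
        self._batch_shape = value


# No type error expected: getter and setter are both present with int types.
h: Foo3 = GoodMockFoo3(batch_shape=1)
```

Protocol conformance here is purely structural; `GoodMockFoo3` never inherits from `Foo3`, it just exposes the same property surface.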
136c9d79e9fa0434077aa9119ea504a8dbb5fa4b | tests/__init__.py | jugla/pyAtome | Apache-2.0

from pyatome.client import *
138d268aa9873dbe0fe7931e77a579f61ac69b87 | navi/__init__.py | RimorRes/MDRS_Rover | MIT

# flake8: noqa
from navi.pathfinder import *
from navi.a_star import *
13de54d34b9287cb47507de4d32c15ae03e043c6 | how_to_use.py | Othergreengrasses/TextGrid-SyllableTier-Injector | MIT

# Sample python script to demonstrate how to use the Text Grid Syllable Tier Injector Tool
from textGridSyllableTierInjector import insertSyllableTierInTextGrid
insertSyllableTierInTextGrid('TextGrid/sample-librispeech.TextGrid','TextGrid/sample-librispeech.withSyllableTier.TextGrid')
13ece80a92a151a7827ccd371ae7bad289b86321 | entrega2.py | maariagarrcia/calentamiento | Apache-2.0

print("Hola Mundo")
# We introduce a code so that only line 4 is printed in green
print(chr(27)+"[0;32m"+"Hola Mundo")
print(chr(27)+"[0;37m")
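`chr(27)` is the ASCII ESC character that starts an ANSI escape sequence: `[0;32m` selects a green foreground and `[0;37m` white/gray. The same snippet can be written with string escapes and named constants (the constant names are my own):

```python
GREEN = "\033[0;32m"   # ESC[0;32m — green foreground
WHITE = "\033[0;37m"   # ESC[0;37m — white/gray foreground

# chr(27), "\033", and "\x1b" all denote the same ESC byte.
print(GREEN + "Hola Mundo" + WHITE)
```

Printing `WHITE` at the end restores a light foreground so later terminal output is not left green.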
b91818dd1c7fd0042e0ffd6366e044087e3c262a | sdk/storage/azure-storage-queue/tests/test_queue_client.py | anuchandy/azure-sdk-for-python | MIT

# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import unittest
import pytest
import platform
from devtools_testutils import ResourceGroupPreparer, StorageAccountPreparer
from azure.storage.queue import (
    VERSION,
    QueueServiceClient,
    QueueClient,
)

from _shared.testcase import GlobalStorageAccountPreparer, StorageTestCase

# ------------------------------------------------------------------------------
SERVICES = {
    QueueServiceClient: 'queue',
    QueueClient: 'queue',
}

_CONNECTION_ENDPOINTS = {'queue': 'QueueEndpoint'}
_CONNECTION_ENDPOINTS_SECONDARY = {'queue': 'QueueSecondaryEndpoint'}
class StorageQueueClientTest(StorageTestCase):
    def setUp(self):
        super(StorageQueueClientTest, self).setUp()
        self.sas_token = self.generate_sas_token()
        self.token_credential = self.generate_oauth_token()

    # --Helpers-----------------------------------------------------------------
    def validate_standard_account_endpoints(self, service, url_type, account_name, account_key):
        self.assertIsNotNone(service)
        self.assertEqual(service.account_name, account_name)
        self.assertEqual(service.credential.account_name, account_name)
        self.assertEqual(service.credential.account_key, account_key)
        self.assertTrue('{}.{}.core.windows.net'.format(account_name, url_type) in service.url)
        self.assertTrue('{}-secondary.{}.core.windows.net'.format(account_name, url_type) in service.secondary_endpoint)

    # --Direct Parameters Test Cases --------------------------------------------
    @GlobalStorageAccountPreparer()
    def test_create_service_with_key(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for client, url in SERVICES.items():
            # Act
            service = client(
                self.account_url(storage_account, "queue"), credential=storage_account_key, queue_name='foo')

            # Assert
            self.validate_standard_account_endpoints(service, url, storage_account.name, storage_account_key)
            self.assertEqual(service.scheme, 'https')
    @GlobalStorageAccountPreparer()
    def test_create_service_with_connection_string(self, resource_group, location, storage_account, storage_account_key):
        for service_type in SERVICES.items():
            # Act
            service = service_type[0].from_connection_string(
                self.connection_string(storage_account, storage_account_key), queue_name="test")

            # Assert
            self.validate_standard_account_endpoints(service, service_type[1], storage_account.name, storage_account_key)
            self.assertEqual(service.scheme, 'https')

    @GlobalStorageAccountPreparer()
    def test_create_service_with_sas(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES:
            # Act
            service = service_type(
                self.account_url(storage_account, "queue"), credential=self.sas_token, queue_name='foo')

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertTrue(service.url.startswith('https://' + storage_account.name + '.queue.core.windows.net'))
            self.assertTrue(service.url.endswith(self.sas_token))
            self.assertIsNone(service.credential)

    @GlobalStorageAccountPreparer()
    def test_create_service_with_token(self, resource_group, location, storage_account, storage_account_key):
        for service_type in SERVICES:
            # Act
            service = service_type(
                self.account_url(storage_account, "queue"), credential=self.token_credential, queue_name='foo')

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertTrue(service.url.startswith('https://' + storage_account.name + '.queue.core.windows.net'))
            self.assertEqual(service.credential, self.token_credential)
            self.assertFalse(hasattr(service.credential, 'account_key'))
            self.assertTrue(hasattr(service.credential, 'get_token'))

    @GlobalStorageAccountPreparer()
    def test_create_service_with_token_and_http(self, resource_group, location, storage_account, storage_account_key):
        for service_type in SERVICES:
            # Act
            with self.assertRaises(ValueError):
                url = self.account_url(storage_account, "queue").replace('https', 'http')
                service_type(url, credential=self.token_credential, queue_name='foo')

    @GlobalStorageAccountPreparer()
    def test_create_service_china(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            # Act
            url = self.account_url(storage_account, "queue").replace('core.windows.net', 'core.chinacloudapi.cn')
            service = service_type[0](
                url, credential=storage_account_key, queue_name='foo')

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertTrue(service.primary_endpoint.startswith(
                'https://{}.{}.core.chinacloudapi.cn'.format(storage_account.name, service_type[1])))
            self.assertTrue(service.secondary_endpoint.startswith(
                'https://{}-secondary.{}.core.chinacloudapi.cn'.format(storage_account.name, service_type[1])))

    @GlobalStorageAccountPreparer()
    def test_create_service_protocol(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            # Act
            url = self.account_url(storage_account, "queue").replace('https', 'http')
            service = service_type[0](
                url, credential=storage_account_key, queue_name='foo')

            # Assert
            self.validate_standard_account_endpoints(service, service_type[1], storage_account.name, storage_account_key)
            self.assertEqual(service.scheme, 'http')

    @GlobalStorageAccountPreparer()
    def test_create_service_empty_key(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        QUEUE_SERVICES = [QueueServiceClient, QueueClient]

        for service_type in QUEUE_SERVICES:
            # Act
            with self.assertRaises(ValueError) as e:
                test_service = service_type('testaccount', credential='', queue_name='foo')

            self.assertEqual(
                str(e.exception), "You need to provide either a SAS token or an account shared key to authenticate.")

    @GlobalStorageAccountPreparer()
    def test_create_service_with_socket_timeout(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            # Act
            default_service = service_type[0](
                self.account_url(storage_account, "queue"), credential=storage_account_key, queue_name='foo')
            service = service_type[0](
                self.account_url(storage_account, "queue"), credential=storage_account_key,
                queue_name='foo', connection_timeout=22)

            # Assert
            self.validate_standard_account_endpoints(service, service_type[1], storage_account.name, storage_account_key)
            assert service._client._client._pipeline._transport.connection_config.timeout == 22
            assert default_service._client._client._pipeline._transport.connection_config.timeout in [20, (20, 2000)]
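The connection-string tests in this file exercise strings of semicolon-separated `Key=Value` pairs (`AccountName`, `AccountKey`, `QueueEndpoint`, and so on). A minimal parser sketch (not the SDK's internal implementation) shows the shape of the format; `partition` is used so that base64 account keys containing `=` survive intact:

```python
def parse_connection_string(conn_str):
    # Split 'Key=Value;Key=Value;...' into a dict, keeping '=' inside values.
    settings = {}
    for segment in conn_str.strip(";").split(";"):
        if segment:
            key, _, value = segment.partition("=")
            settings[key] = value
    return settings


conn = "AccountName=storagename;AccountKey=abc123==;QueueEndpoint=www.mydomain.com"
settings = parse_connection_string(conn)
```

Splitting on the first `=` only is the important detail: a naive `segment.split("=")` would mangle the padded base64 `AccountKey` value.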
    # --Connection String Test Cases --------------------------------------------
    @GlobalStorageAccountPreparer()
    def test_create_service_with_connection_string_key(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        conn_string = 'AccountName={};AccountKey={};'.format(storage_account.name, storage_account_key)

        for service_type in SERVICES.items():
            # Act
            service = service_type[0].from_connection_string(conn_string, queue_name='foo')

            # Assert
            self.validate_standard_account_endpoints(service, service_type[1], storage_account.name, storage_account_key)
            self.assertEqual(service.scheme, 'https')

    @GlobalStorageAccountPreparer()
    def test_create_service_with_connection_string_sas(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        conn_string = 'AccountName={};SharedAccessSignature={};'.format(storage_account.name, self.sas_token)

        for service_type in SERVICES:
            # Act
            service = service_type.from_connection_string(conn_string, queue_name='foo')

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertTrue(service.url.startswith('https://' + storage_account.name + '.queue.core.windows.net'))
            self.assertTrue(service.url.endswith(self.sas_token))
            self.assertIsNone(service.credential)

    @GlobalStorageAccountPreparer()
    def test_create_service_with_connection_string_endpoint_protocol(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        conn_string = 'AccountName={};AccountKey={};DefaultEndpointsProtocol=http;EndpointSuffix=core.chinacloudapi.cn;'.format(
            storage_account.name, storage_account_key)

        for service_type in SERVICES.items():
            # Act
            service = service_type[0].from_connection_string(conn_string, queue_name="foo")

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertTrue(
                service.primary_endpoint.startswith(
                    'http://{}.{}.core.chinacloudapi.cn/'.format(storage_account.name, service_type[1])))
            self.assertTrue(
                service.secondary_endpoint.startswith(
                    'http://{}-secondary.{}.core.chinacloudapi.cn'.format(storage_account.name, service_type[1])))
            self.assertEqual(service.scheme, 'http')

    @GlobalStorageAccountPreparer()
    def test_create_service_with_connection_string_emulated(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            conn_string = 'UseDevelopmentStorage=true;'.format(storage_account.name, storage_account_key)

            # Act
            with self.assertRaises(ValueError):
                service = service_type[0].from_connection_string(conn_string, queue_name="foo")

    @GlobalStorageAccountPreparer()
    def test_create_service_with_connection_string_custom_domain(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            conn_string = 'AccountName={};AccountKey={};QueueEndpoint=www.mydomain.com;'.format(
                storage_account.name, storage_account_key)

            # Act
            service = service_type[0].from_connection_string(conn_string, queue_name="foo")

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertTrue(service.primary_endpoint.startswith('https://www.mydomain.com/'))
            self.assertTrue(service.secondary_endpoint.startswith('https://' + storage_account.name + '-secondary.queue.core.windows.net'))

    @GlobalStorageAccountPreparer()
    def test_create_service_with_conn_str_custom_domain_trailing_slash(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            conn_string = 'AccountName={};AccountKey={};QueueEndpoint=www.mydomain.com/;'.format(
                storage_account.name, storage_account_key)

            # Act
            service = service_type[0].from_connection_string(conn_string, queue_name="foo")

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertTrue(service.primary_endpoint.startswith('https://www.mydomain.com/'))
            self.assertTrue(service.secondary_endpoint.startswith('https://' + storage_account.name + '-secondary.queue.core.windows.net'))
    @GlobalStorageAccountPreparer()
    def test_create_service_with_conn_str_custom_domain_sec_override(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        for service_type in SERVICES.items():
            conn_string = 'AccountName={};AccountKey={};QueueEndpoint=www.mydomain.com/;'.format(
                storage_account.name, storage_account_key)

            # Act
            service = service_type[0].from_connection_string(
                conn_string, secondary_hostname="www-sec.mydomain.com", queue_name="foo")

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertTrue(service.primary_endpoint.startswith('https://www.mydomain.com/'))
            self.assertTrue(service.secondary_endpoint.startswith('https://www-sec.mydomain.com/'))

    @GlobalStorageAccountPreparer()
    def test_create_service_with_conn_str_fails_if_sec_without_primary(self, resource_group, location, storage_account, storage_account_key):
        for service_type in SERVICES.items():
            # Arrange
            conn_string = 'AccountName={};AccountKey={};{}=www.mydomain.com;'.format(
                storage_account.name, storage_account_key,
                _CONNECTION_ENDPOINTS_SECONDARY.get(service_type[1]))

            # Act
            # Fails if primary excluded
            with self.assertRaises(ValueError):
                service = service_type[0].from_connection_string(conn_string, queue_name="foo")

    @GlobalStorageAccountPreparer()
    def test_create_service_with_conn_str_succeeds_if_sec_with_primary(self, resource_group, location, storage_account, storage_account_key):
        for service_type in SERVICES.items():
            # Arrange
            conn_string = 'AccountName={};AccountKey={};{}=www.mydomain.com;{}=www-sec.mydomain.com;'.format(
                storage_account.name,
                storage_account_key,
                _CONNECTION_ENDPOINTS.get(service_type[1]),
                _CONNECTION_ENDPOINTS_SECONDARY.get(service_type[1]))

            # Act
            service = service_type[0].from_connection_string(conn_string, queue_name="foo")

            # Assert
            self.assertIsNotNone(service)
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertTrue(service.primary_endpoint.startswith('https://www.mydomain.com/'))
            self.assertTrue(service.secondary_endpoint.startswith('https://www-sec.mydomain.com/'))

    @GlobalStorageAccountPreparer()
    def test_create_service_with_custom_account_endpoint_path(self, resource_group, location, storage_account, storage_account_key):
        custom_account_url = "http://local-machine:11002/custom/account/path/" + self.sas_token
        for service_type in SERVICES.items():
            conn_string = 'DefaultEndpointsProtocol=http;AccountName={};AccountKey={};QueueEndpoint={};'.format(
                storage_account.name, storage_account_key, custom_account_url)

            # Act
            service = service_type[0].from_connection_string(conn_string, queue_name="foo")

            # Assert
            self.assertEqual(service.account_name, storage_account.name)
            self.assertEqual(service.credential.account_name, storage_account.name)
            self.assertEqual(service.credential.account_key, storage_account_key)
            self.assertEqual(service.primary_hostname, 'local-machine:11002/custom/account/path')

        service = QueueServiceClient(account_url=custom_account_url)
        self.assertEqual(service.account_name, None)
        self.assertEqual(service.credential, None)
        self.assertEqual(service.primary_hostname, 'local-machine:11002/custom/account/path')
        self.assertTrue(service.url.startswith('http://local-machine:11002/custom/account/path/?'))

        service = QueueClient(account_url=custom_account_url, queue_name="foo")
        self.assertEqual(service.account_name, None)
        self.assertEqual(service.queue_name, "foo")
        self.assertEqual(service.credential, None)
        self.assertEqual(service.primary_hostname, 'local-machine:11002/custom/account/path')
        self.assertTrue(service.url.startswith('http://local-machine:11002/custom/account/path/foo?'))

        service = QueueClient.from_queue_url("http://local-machine:11002/custom/account/path/foo" + self.sas_token)
        self.assertEqual(service.account_name, None)
        self.assertEqual(service.queue_name, "foo")
        self.assertEqual(service.credential, None)
        self.assertEqual(service.primary_hostname, 'local-machine:11002/custom/account/path')
        self.assertTrue(service.url.startswith('http://local-machine:11002/custom/account/path/foo?'))
    @GlobalStorageAccountPreparer()
    def test_request_callback_signed_header(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        service = QueueServiceClient(self.account_url(storage_account, "queue"), credential=storage_account_key)
        name = self.get_resource_name('cont')

        # Act
        try:
            headers = {'x-ms-meta-hello': 'world'}
            queue = service.create_queue(name, headers=headers)

            # Assert
            metadata = queue.get_queue_properties().metadata
            self.assertEqual(metadata, {'hello': 'world'})
        finally:
            service.delete_queue(name)

    @GlobalStorageAccountPreparer()
    def test_response_callback(self, resource_group, location, storage_account, storage_account_key):
        # Arrange
        service = QueueServiceClient(self.account_url(storage_account, "queue"), credential=storage_account_key)
        name = self.get_resource_name('cont')
        queue = service.get_queue_client(name)

        # Act
        def callback(response):
            response.http_response.status_code = 200
            response.http_response.headers.clear()

        # Assert
        exists = queue.get_queue_properties(raw_response_hook=callback)
        self.assertTrue(exists)

    @GlobalStorageAccountPreparer()
    def test_user_agent_default(self, resource_group, location, storage_account, storage_account_key):
        service = QueueServiceClient(self.account_url(storage_account, "queue"), credential=storage_account_key)

        def callback(response):
            self.assertTrue('User-Agent' in response.http_request.headers)
            self.assertEqual(
                response.http_request.headers['User-Agent'],
                "azsdk-python-storage-queue/{} Python/{} ({})".format(
                    VERSION,
                    platform.python_version(),
                    platform.platform()))

        service.get_service_properties(raw_response_hook=callback)

    @GlobalStorageAccountPreparer()
    def test_user_agent_custom(self, resource_group, location, storage_account, storage_account_key):
        custom_app = "TestApp/v1.0"
        service = QueueServiceClient(
            self.account_url(storage_account, "queue"), credential=storage_account_key, user_agent=custom_app)

        def callback(response):
            self.assertTrue('User-Agent' in response.http_request.headers)
            self.assertEqual(
                response.http_request.headers['User-Agent'],
                "TestApp/v1.0 azsdk-python-storage-queue/{} Python/{} ({})".format(
                    VERSION,
                    platform.python_version(),
                    platform.platform()))

        service.get_service_properties(raw_response_hook=callback)

        def callback(response):
            self.assertTrue('User-Agent' in response.http_request.headers)
            self.assertEqual(
                response.http_request.headers['User-Agent'],
                "TestApp/v2.0 azsdk-python-storage-queue/{} Python/{} ({})".format(
                    VERSION,
                    platform.python_version(),
                    platform.platform()))
service.get_service_properties(raw_response_hook=callback, user_agent="TestApp/v2.0")
@GlobalStorageAccountPreparer()
def test_user_agent_append(self, resource_group, location, storage_account, storage_account_key):
service = QueueServiceClient(self.account_url(storage_account, "queue"), credential=storage_account_key)
def callback(response):
self.assertTrue('User-Agent' in response.http_request.headers)
self.assertEqual(
response.http_request.headers['User-Agent'],
"azsdk-python-storage-queue/{} Python/{} ({}) customer_user_agent".format(
VERSION,
platform.python_version(),
platform.platform()))
custom_headers = {'User-Agent': 'customer_user_agent'}
service.get_service_properties(raw_response_hook=callback, headers=custom_headers)
@GlobalStorageAccountPreparer()
def test_create_queue_client_with_complete_queue_url(self, resource_group, location, storage_account, storage_account_key):
# Arrange
queue_url = self.account_url(storage_account, "queue") + "/foo"
service = QueueClient(queue_url, queue_name='bar', credential=storage_account_key)
# Assert
self.assertEqual(service.scheme, 'https')
self.assertEqual(service.queue_name, 'bar')
def test_error_with_malformed_conn_str(self):
# Arrange
for conn_str in ["", "foobar", "foobar=baz=foo", "foo;bar;baz", "foo=;bar=;", "=", ";", "=;=="]:
for service_type in SERVICES.items():
# Act
with self.assertRaises(ValueError) as e:
service = service_type[0].from_connection_string(conn_str, queue_name="test")
if conn_str in("", "foobar", "foo;bar;baz", ";"):
self.assertEqual(
str(e.exception), "Connection string is either blank or malformed.")
elif conn_str in ("foobar=baz=foo" , "foo=;bar=;", "=", "=;=="):
self.assertEqual(
str(e.exception), "Connection string missing required connection details.")
@GlobalStorageAccountPreparer()
def test_closing_pipeline_client(self, resource_group, location, storage_account, storage_account_key):
# Arrange
for client, url in SERVICES.items():
# Act
service = client(
self.account_url(storage_account, "queue"), credential=storage_account_key, queue_name='queue')
# Assert
with service:
assert hasattr(service, 'close')
service.close()
@GlobalStorageAccountPreparer()
def test_closing_pipeline_client_simple(self, resource_group, location, storage_account, storage_account_key):
# Arrange
for client, url in SERVICES.items():
# Act
service = client(
self.account_url(storage_account, "queue"), credential=storage_account_key, queue_name='queue')
service.close()
# ------------------------------------------------------------------------------
if __name__ == '__main__':
unittest.main()
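The malformed connection-string cases in `test_error_with_malformed_conn_str` imply a two-stage validation: a syntax check first ("blank or malformed"), then a check for required settings ("missing required connection details"). A minimal standalone sketch of such a parser, assuming these rules and reusing the error messages asserted above (this is illustrative, not the Azure SDK's actual implementation; `AccountName` is picked here as the stand-in required setting):

```python
# Hypothetical connection-string validator mirroring the two error paths
# exercised by test_error_with_malformed_conn_str above.
def parse_connection_string(conn_str):
    # Stage 1: every ";"-separated segment must be a "key=value" pair.
    pairs = [part.split("=", 1) for part in conn_str.rstrip(";").split(";")]
    if any(len(pair) != 2 for pair in pairs):
        raise ValueError("Connection string is either blank or malformed.")

    # Stage 2: well-formed pairs must still include the required settings.
    settings = {key: value for key, value in pairs if key}
    if "AccountName" not in settings:
        raise ValueError("Connection string missing required connection details.")
    return settings
```

With these rules, `""`, `"foobar"`, `"foo;bar;baz"`, and `";"` fail at stage 1, while `"foobar=baz=foo"`, `"foo=;bar=;"`, `"="`, and `"=;=="` parse syntactically but fail at stage 2, matching the branching in the test.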
| 50.163347 | 141 | 0.668295 | 2,593 | 25,182 | 6.209024 | 0.081373 | 0.127826 | 0.065466 | 0.048137 | 0.828696 | 0.799752 | 0.785031 | 0.75646 | 0.707516 | 0.69528 | 0 | 0.004753 | 0.214558 | 25,182 | 501 | 142 | 50.263473 | 0.809242 | 0.04396 | 0 | 0.6 | 0 | 0 | 0.106879 | 0.044835 | 0 | 0 | 0 | 0 | 0.305882 | 1 | 0.102941 | false | 0 | 0.017647 | 0 | 0.123529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b92827b6cadbc2d1cd0b71b80019debadaba97bf | 234 | py | Python | simulation/summary_functions/summary_function_base.py | LBNL-ETA/LPDM | 3384a784b97e49cd7a801b758717a7107a51119f | [
"BSD-3-Clause-LBNL"
] | 2 | 2019-01-05T02:33:38.000Z | 2020-04-22T16:57:50.000Z | simulation/summary_functions/summary_function_base.py | LBNL-ETA/LPDM | 3384a784b97e49cd7a801b758717a7107a51119f | [
"BSD-3-Clause-LBNL"
] | 3 | 2019-04-17T18:13:08.000Z | 2021-04-23T22:40:23.000Z | simulation/summary_functions/summary_function_base.py | LBNL-ETA/LPDM | 3384a784b97e49cd7a801b758717a7107a51119f | [
"BSD-3-Clause-LBNL"
] | 1 | 2019-01-31T08:37:44.000Z | 2019-01-31T08:37:44.000Z |
class SummaryFunction(object):
    def __init__(self):
        pass

    def __repr__(self):
        pass

    def file_match(self, file_name):
        pass

    def process_line(self):
        pass

    def end(self):
        pass
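The class above is an interface of no-op stubs. A hypothetical concrete subclass might look like the following (the base class is repeated for self-containment, and the `.csv` matching and line-counting behavior are assumptions for illustration, not part of the original project):

```python
class SummaryFunction(object):
    """Base interface (mirrors the stub class above)."""
    def file_match(self, file_name):
        pass

    def process_line(self):
        pass

    def end(self):
        pass


class LineCounter(SummaryFunction):
    """Illustrative subclass: counts processed lines for .csv files."""
    def __init__(self):
        self.count = 0

    def file_match(self, file_name):
        # Only handle files this summary function cares about.
        return file_name.endswith(".csv")

    def process_line(self):
        self.count += 1

    def end(self):
        # Return the accumulated summary when processing finishes.
        return self.count
```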
| 13 | 36 | 0.559829 | 27 | 234 | 4.444444 | 0.518519 | 0.266667 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.358974 | 234 | 17 | 37 | 13.764706 | 0.8 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | false | 0.454545 | 0 | 0 | 0.545455 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
b94e04050d6d9250ebea3da301afbcc1b63b9780 | 29 | py | Python | aniko/__init__.py | FilmBee/Aniko | 209c8e84fbeb293d06de85c0af31b529c8ccb8b5 | [
"MIT"
] | 11 | 2022-02-02T00:29:52.000Z | 2022-03-18T10:32:36.000Z | aniko/__init__.py | Doctorstra/Aniko | 7f2305a88d6d3a98cd1f002ff66931d68871fd9d | [
"MIT"
] | 2 | 2022-02-02T12:23:43.000Z | 2022-02-03T01:44:32.000Z | aniko/__init__.py | Doctorstra/Aniko | 7f2305a88d6d3a98cd1f002ff66931d68871fd9d | [
"MIT"
] | 13 | 2022-02-02T00:29:56.000Z | 2022-03-31T11:09:53.000Z | from aniko.aniko import Aniko | 29 | 29 | 0.862069 | 5 | 29 | 5 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b97b8ea8404a94d39807b9f6ce12775b4d6fe5bf | 102 | py | Python | malt/__init__.py | Anavros/malt | d972ce9851f0174160c1ae73b9d54b56575fe5c0 | [
"MIT"
] | null | null | null | malt/__init__.py | Anavros/malt | d972ce9851f0174160c1ae73b9d54b56575fe5c0 | [
"MIT"
] | 8 | 2015-12-05T17:28:39.000Z | 2016-12-09T18:41:25.000Z | malt/__init__.py | Anavros/malt | d972ce9851f0174160c1ae73b9d54b56575fe5c0 | [
"MIT"
] | null | null | null |
from malt.cmd import parse, offer, read, load
try:
    import readline
except ImportError:
    pass
| 12.75 | 45 | 0.715686 | 14 | 102 | 5.214286 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22549 | 102 | 7 | 46 | 14.571429 | 0.924051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b9813302732764c32a5c79a0c783adf8a1d17aa0 | 37 | py | Python | tests/integrations/providers/__init__.py | cercos/masonite | f7f220efa7fae833683e9f07ce13c3795a87d3b8 | [
"MIT"
] | 1,816 | 2018-02-14T01:59:51.000Z | 2022-03-31T17:09:20.000Z | tests/integrations/providers/__init__.py | cercos/masonite | f7f220efa7fae833683e9f07ce13c3795a87d3b8 | [
"MIT"
] | 340 | 2018-02-11T00:27:26.000Z | 2022-03-21T12:00:24.000Z | tests/integrations/providers/__init__.py | cercos/masonite | f7f220efa7fae833683e9f07ce13c3795a87d3b8 | [
"MIT"
] | 144 | 2018-03-18T00:08:16.000Z | 2022-02-26T01:51:58.000Z | from .AppProvider import AppProvider
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b992a8470876154b3565ad489f61d3d787ccca42 | 16,210 | py | Python | tests/actions.py | uhavin/slacken | be45af974029bc16ad34a0d73c9952cd1821678b | [
"MIT"
] | 56 | 2019-11-08T22:46:34.000Z | 2022-03-27T21:53:13.000Z | tests/actions.py | uhavin/slacken | be45af974029bc16ad34a0d73c9952cd1821678b | [
"MIT"
] | 18 | 2019-09-16T10:40:01.000Z | 2022-03-16T16:30:29.000Z | tests/actions.py | uhavin/slacken | be45af974029bc16ad34a0d73c9952cd1821678b | [
"MIT"
] | 12 | 2020-01-17T14:43:57.000Z | 2022-03-15T21:33:39.000Z | import json
import pytest
from starlette.status import HTTP_200_OK
from starlette.testclient import TestClient
from slackers.hooks import actions
from slackers.models import SlackAction
from slackers.registry import R
@pytest.fixture(autouse=True)
def reset_registry():
    R.callbacks = {}


@pytest.fixture
def action_defaults():
    action_defaults = SlackAction(type="...", token="...")
    return action_defaults.dict()


@pytest.fixture
def message_action(action_defaults):
    action_defaults.update(
        {
            "token": "TOKEN",
            "callback_id": "CALLBACK_ID",
            "trigger_id": "TRIGGER_ID",
            "response_url": "https://example.com/response",
            "type": "message_action",
            "user": {"id": "USER_ID", "name": "USER_NAME"},
            "message": {},
            "channel": {"id": "CHANNEL_ID", "name": "CHANNEL_NAME"},
            "team": {"id": "TEAM_ID", "domain": "TEAM_DOMAIN"},
            "actions": [],
            "view": {},
        }
    )
    return action_defaults


@pytest.fixture
def interactive_message(message_action):
    message_action.update(
        {
            "type": "interactive_message",
            "actions": [
                {"name": "ACTION_1_NAME", "type": "ACTION_1_TYPE"},
                {"name": "ACTION_2_NAME", "type": "ACTION_2_TYPE"},
            ],
        }
    )
    return message_action


@pytest.fixture
def block_actions(action_defaults):
    action_defaults.update(
        {
            "token": "TOKEN",
            "trigger_id": "TRIGGER_ID",
            "response_url": "https://example.com/response",
            "type": "block_actions",
            "user": {"id": "USER_ID", "name": "USER_NAME"},
            "message": {},
            "channel": {"id": "CHANNEL_ID", "name": "CHANNEL_NAME"},
            "team": {"id": "TEAM_ID", "domain": "TEAM_DOMAIN"},
            "actions": [{"action_id": "ACTION_ID_1"}, {"action_id": "ACTION_ID_2"}],
            "view": {},
        }
    )
    return action_defaults


@pytest.fixture
def view_submission(action_defaults):
    action_defaults.update(
        {
            "type": "view_submission",
            "team": {},
            "user": {},
            "view": {
                "id": "VIEW_ID",
                "type": "modal",
                "title": {},
                "submit": {},
                "blocks": [],
                "private_metadata": "private!",
                "callback_id": "VIEW_CALLBACK_ID",
                "state": {
                    "values": {
                        "multi-line": {
                            "ml-value": {
                                "type": "plain_text_input",
                                "value": "This is my example inputted value",
                            }
                        }
                    }
                },
                "hash": "156663117.cd33ad1f",
            },
        }
    )
    return action_defaults


@pytest.fixture
def view_closed(action_defaults):
    action_defaults.update(
        {
            "type": "view_closed",
            "team": {"id": "TXXXXXX", "domain": "coverbands"},
            "user": {"id": "UXXXXXX", "name": "dreamweaver"},
            "view": {"callback_id": "VIEW_CLOSED_CALLBACK_ID"},
            "api_app_id": "AXXXXXX",
            "is_cleared": False,
        }
    )
    return action_defaults
@pytest.mark.usefixtures("pass_header_verification")
def post_message_actions_should_emit_actions_event_with_payload(
    mocker, client: TestClient, test_headers, message_action
):
    action_payload = json.dumps(message_action)
    base_event_callee = mocker.Mock()

    @actions.on("message_action")
    def on_message_action(payload):
        base_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee.assert_called_once_with(payload=message_action)


@pytest.mark.usefixtures("pass_header_verification")
def post_message_actions_should_emit_callback_id_event_with_payload(
    mocker, client: TestClient, test_headers, message_action
):
    specific_event_callee = mocker.Mock()
    action_payload = json.dumps(message_action)

    @actions.on("message_action:CALLBACK_ID")
    def on_message_action_callback_id(payload):
        specific_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    specific_event_callee.assert_called_once_with(payload=message_action)


@pytest.mark.usefixtures("pass_header_verification")
def post_block_actions_should_emit_actions_event_with_payload(
    mocker, client: TestClient, test_headers, block_actions
):
    action_payload = json.dumps(block_actions)
    base_event_callee = mocker.Mock()

    @actions.on("block_actions")
    def on_foo(payload):
        base_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee.assert_called_once_with(payload=block_actions)


@pytest.mark.usefixtures("pass_header_verification")
def post_block_actions_should_emit_action_event_with_payload(
    mocker, client: TestClient, test_headers, block_actions
):
    action_payload = json.dumps(block_actions)
    specific_event_callee_1 = mocker.Mock()
    specific_event_callee_2 = mocker.Mock()

    @actions.on("block_actions:ACTION_ID_1")
    def on_block_actions_action_id_1(payload):
        specific_event_callee_1(payload=payload)

    @actions.on("block_actions:ACTION_ID_2")
    def on_block_actions_action_id_2(payload):
        specific_event_callee_2(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    specific_event_callee_1.assert_called_once_with(payload=block_actions)
    specific_event_callee_2.assert_called_once_with(payload=block_actions)


@pytest.mark.usefixtures("pass_header_verification")
def post_view_submission_should_emit_submission_event_with_payload(
    mocker, client: TestClient, test_headers, view_submission
):
    # test that callback_id is not required
    view_submission["view"].pop("callback_id")
    action_payload = json.dumps(view_submission)
    base_event_callee = mocker.Mock()

    @actions.on("view_submission")
    def on_view_submission_callback_id(payload):
        base_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee.assert_called_once_with(payload=view_submission)


@pytest.mark.usefixtures("pass_header_verification")
def post_view_submission_should_emit_selected_action_event_with_payload(
    mocker, client: TestClient, test_headers, view_submission
):
    action_payload = json.dumps(view_submission)
    specific_event_callee = mocker.Mock()

    @actions.on("view_submission:VIEW_CALLBACK_ID")
    def on_view_submission_callback_id(payload):
        specific_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    specific_event_callee.assert_called_once_with(payload=view_submission)


@pytest.mark.usefixtures("pass_header_verification")
def post_block_actions_should_return_a_custom_response(
    client: TestClient, test_headers, block_actions
):
    action_payload = json.dumps(block_actions)

    from slackers.hooks import responder

    @responder("block_actions:ACTION_ID_1")
    def custom_response(actual_payload):
        from starlette.responses import JSONResponse

        assert actual_payload == block_actions
        return JSONResponse(content={"custom": "Custom Response"})

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    assert {"custom": "Custom Response"} == response.json()
@pytest.mark.usefixtures("pass_header_verification")
def post_interactive_message_should_emit_interactive_message_event_with_payload(
    mocker, client: TestClient, test_headers, interactive_message
):
    interactive_message_payload = json.dumps(interactive_message)
    base_event_callee = mocker.Mock()

    @actions.on("interactive_message")
    def on_foo(payload):
        base_event_callee(payload=payload)

    response = client.post(
        url="/actions",
        data={"payload": interactive_message_payload},
        headers=test_headers,
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee.assert_called_once_with(payload=interactive_message)


@pytest.mark.usefixtures("pass_header_verification")
def post_interactive_message_should_emit_interactive_message_event_names_with_payload(
    mocker, client: TestClient, test_headers, interactive_message
):
    interactive_message_payload = json.dumps(interactive_message)
    base_event_callee_1 = mocker.Mock()
    base_event_callee_2 = mocker.Mock()

    @actions.on("interactive_message:ACTION_1_NAME")
    def on_foo(payload):
        base_event_callee_1(payload=payload)

    @actions.on("interactive_message:ACTION_2_NAME")
    def on_foo(payload):
        base_event_callee_2(payload=payload)

    response = client.post(
        url="/actions",
        data={"payload": interactive_message_payload},
        headers=test_headers,
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee_1.assert_called_once_with(payload=interactive_message)
    base_event_callee_2.assert_called_once_with(payload=interactive_message)


@pytest.mark.usefixtures("pass_header_verification")
def post_interactive_message_should_emit_interactive_message_event_types_with_payload(
    mocker, client: TestClient, test_headers, interactive_message
):
    interactive_message_payload = json.dumps(interactive_message)
    base_event_callee_1 = mocker.Mock()
    base_event_callee_2 = mocker.Mock()

    @actions.on("interactive_message:ACTION_1_TYPE")
    def on_foo(payload):
        base_event_callee_1(payload=payload)

    @actions.on("interactive_message:ACTION_2_TYPE")
    def on_foo(payload):
        base_event_callee_2(payload=payload)

    response = client.post(
        url="/actions",
        data={"payload": interactive_message_payload},
        headers=test_headers,
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee_1.assert_called_once_with(payload=interactive_message)
    base_event_callee_2.assert_called_once_with(payload=interactive_message)


@pytest.mark.usefixtures("pass_header_verification")
def post_interactive_message_should_emit_interactive_message_event_name_type_combo_with_payload(
    mocker, client: TestClient, test_headers, interactive_message
):
    interactive_message_payload = json.dumps(interactive_message)
    base_event_callee_1 = mocker.Mock()
    base_event_callee_2 = mocker.Mock()

    @actions.on("interactive_message:ACTION_1_NAME:ACTION_1_TYPE")
    def on_foo(payload):
        base_event_callee_1(payload=payload)

    @actions.on("interactive_message:ACTION_2_NAME:ACTION_2_TYPE")
    def on_foo(payload):
        base_event_callee_2(payload=payload)

    response = client.post(
        url="/actions",
        data={"payload": interactive_message_payload},
        headers=test_headers,
    )

    assert HTTP_200_OK == response.status_code
    base_event_callee_1.assert_called_once_with(payload=interactive_message)
    base_event_callee_2.assert_called_once_with(payload=interactive_message)
@pytest.mark.usefixtures("pass_header_verification")
def post_interactive_message_should_be_able_to_return_custom_response(
    client: TestClient, test_headers, interactive_message
):
    from slackers.hooks import responder

    interactive_message_payload = json.dumps(interactive_message)

    @responder("interactive_message")
    def custom_response(actual_payload):
        from starlette.responses import JSONResponse

        assert actual_payload == interactive_message
        return JSONResponse(content={"custom": "Custom Response"})

    response = client.post(
        url="/actions",
        data={"payload": interactive_message_payload},
        headers=test_headers,
    )

    assert HTTP_200_OK == response.status_code
    assert {"custom": "Custom Response"} == response.json()


@pytest.mark.usefixtures("pass_header_verification")
def post_view_submission_should_return_a_custom_response(
    client: TestClient, test_headers, view_submission
):
    action_payload = json.dumps(view_submission)

    from slackers.hooks import responder

    @responder("view_submission:VIEW_CALLBACK_ID")
    def custom_response(actual_payload):
        from starlette.responses import JSONResponse

        assert actual_payload == view_submission
        return JSONResponse(content={"custom": "Custom Response"})

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    assert {"custom": "Custom Response"} == response.json()


@pytest.mark.usefixtures("pass_header_verification")
def max_one_custom_response_should_be_possible(
    client: TestClient, test_headers, view_submission
):
    action_payload = json.dumps(view_submission)

    from slackers.hooks import responder

    @responder("view_submission")
    def custom_response(payload):
        ...  # pragma: no cover, exception raised before calling function

    @responder("view_submission:VIEW_CALLBACK_ID")
    def custom_response(payload):
        ...  # pragma: no cover, exception raised before calling function

    with pytest.raises(ValueError, match="Multiple response handlers found"):
        client.post(
            url="/actions", data={"payload": action_payload}, headers=test_headers
        )


@pytest.mark.usefixtures("pass_header_verification")
def handler_should_return_starlette_response(
    client: TestClient, test_headers, view_submission
):
    action_payload = json.dumps(view_submission)

    from slackers.hooks import responder

    @responder("view_submission:VIEW_CALLBACK_ID")
    def custom_response(payload):
        from requests import Response

        return Response()

    with pytest.raises(
        AssertionError, match="Please return a starlette.responses.Response"
    ):
        client.post(
            url="/actions", data={"payload": action_payload}, headers=test_headers
        )


@pytest.mark.usefixtures("pass_header_verification")
def post_view_closed_should_emit_closed_event(
    mocker, client: TestClient, test_headers, view_closed
):
    action_payload = json.dumps(view_closed)
    specific_event_callee = mocker.Mock()

    @actions.on("view_closed")
    def on_view_closed(payload):
        specific_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    specific_event_callee.assert_called_once_with(payload=view_closed)


@pytest.mark.usefixtures("pass_header_verification")
def post_view_closed_should_emit_closed_event_callback_id(
    mocker, client: TestClient, test_headers, view_closed
):
    # This might be a nonexistent use case, but even so, if a callback id
    # is in a view_closed body, the callback_id event will be emitted as
    # a side effect anyway
    action_payload = json.dumps(view_closed)
    specific_event_callee = mocker.Mock()

    @actions.on("view_closed:VIEW_CLOSED_CALLBACK_ID")
    def on_view_closed_callback_id(payload):
        specific_event_callee(payload=payload)

    response = client.post(
        url="/actions", data={"payload": action_payload}, headers=test_headers
    )

    assert HTTP_200_OK == response.status_code
    specific_event_callee.assert_called_once_with(payload=view_closed)
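The tests above register handlers through `@actions.on(...)` and expect them to be called with the posted payload. A minimal standalone sketch of that decorator-registry pattern (the class and method names here are illustrative, not the slackers API):

```python
# Illustrative decorator-based event registry, similar in spirit to the
# `actions.on(...)` hook exercised by the tests above.
class EventRegistry:
    def __init__(self):
        # event name -> list of registered handler functions
        self.callbacks = {}

    def on(self, event):
        """Decorator that registers a handler for an event name."""
        def decorator(func):
            self.callbacks.setdefault(event, []).append(func)
            return func
        return decorator

    def emit(self, event, payload):
        """Call every handler registered for the event with the payload."""
        for func in self.callbacks.get(event, []):
            func(payload)


actions = EventRegistry()
received = []

@actions.on("view_closed")
def on_view_closed(payload):
    received.append(payload)

actions.emit("view_closed", {"type": "view_closed"})
```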
| 32.48497 | 96 | 0.70074 | 1,875 | 16,210 | 5.6816 | 0.088 | 0.049564 | 0.042242 | 0.039895 | 0.84502 | 0.830189 | 0.795926 | 0.734816 | 0.72205 | 0.717732 | 0 | 0.007906 | 0.196299 | 16,210 | 498 | 97 | 32.550201 | 0.809794 | 0.019186 | 0 | 0.592689 | 0 | 0 | 0.150894 | 0.059716 | 0 | 0 | 0 | 0 | 0.099217 | 1 | 0.120104 | false | 0.044386 | 0.041775 | 0 | 0.18799 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9d71178397011781d81c30056287a5676fdb3cf | 70 | py | Python | python/testData/refactoring/extractsuperclass/moveExtendsCheckReference/source_module.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/refactoring/extractsuperclass/moveExtendsCheckReference/source_module.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/extractsuperclass/moveExtendsCheckReference/source_module.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from dest_module import NewParent
class MyClass(NewParent):
pass | 14 | 33 | 0.785714 | 9 | 70 | 6 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 70 | 5 | 34 | 14 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b9d7c044d9317438cde9def730f9be708ecb7ce2 | 3,300 | py | Python | nssrc/com/citrix/netscaler/nitro/resource/config/authentication/__init__.py | benfinke/ns_python | d651d7aa01d7dc63c1cd435c7b3314d7f5b26659 | [
"Apache-2.0"
] | 1 | 2015-04-05T21:21:26.000Z | 2015-04-05T21:21:26.000Z | nssrc/com/citrix/netscaler/nitro/resource/config/authentication/__init__.py | benfinke/ns_python | d651d7aa01d7dc63c1cd435c7b3314d7f5b26659 | [
"Apache-2.0"
] | 1 | 2017-01-20T22:56:58.000Z | 2017-01-20T22:56:58.000Z | nssrc/com/citrix/netscaler/nitro/resource/config/authentication/__init__.py | benfinke/ns_python | d651d7aa01d7dc63c1cd435c7b3314d7f5b26659 | [
"Apache-2.0"
] | 6 | 2015-04-21T13:14:08.000Z | 2020-12-03T07:27:52.000Z | __all__ = ['authenticationauthnprofile', 'authenticationcertaction', 'authenticationcertpolicy', 'authenticationcertpolicy_authenticationvserver_binding', 'authenticationcertpolicy_binding', 'authenticationcertpolicy_systemglobal_binding', 'authenticationcertpolicy_vpnglobal_binding', 'authenticationcertpolicy_vpnvserver_binding', 'authenticationldapaction', 'authenticationldappolicy', 'authenticationldappolicy_authenticationvserver_binding', 'authenticationldappolicy_binding', 'authenticationldappolicy_systemglobal_binding', 'authenticationldappolicy_vpnglobal_binding', 'authenticationldappolicy_vpnvserver_binding', 'authenticationlocalpolicy', 'authenticationlocalpolicy_authenticationvserver_binding', 'authenticationlocalpolicy_binding', 'authenticationlocalpolicy_systemglobal_binding', 'authenticationlocalpolicy_vpnglobal_binding', 'authenticationlocalpolicy_vpnvserver_binding', 'authenticationnegotiateaction', 'authenticationnegotiatepolicy', 'authenticationnegotiatepolicy_authenticationvserver_binding', 'authenticationnegotiatepolicy_binding', 'authenticationpolicy', 'authenticationpolicylabel', 'authenticationpolicylabel_authenticationpolicy_binding', 'authenticationpolicylabel_binding', 'authenticationradiusaction', 'authenticationradiuspolicy', 'authenticationradiuspolicy_authenticationvserver_binding', 'authenticationradiuspolicy_binding', 'authenticationradiuspolicy_systemglobal_binding', 'authenticationradiuspolicy_vpnglobal_binding', 'authenticationradiuspolicy_vpnvserver_binding', 'authenticationsamlaction', 'authenticationsamlidppolicy', 'authenticationsamlidppolicy_authenticationvserver_binding', 'authenticationsamlidppolicy_binding', 'authenticationsamlidppolicy_vpnvserver_binding', 'authenticationsamlidpprofile', 'authenticationsamlpolicy', 'authenticationsamlpolicy_authenticationvserver_binding', 'authenticationsamlpolicy_binding', 'authenticationtacacsaction', 
'authenticationtacacspolicy', 'authenticationtacacspolicy_authenticationvserver_binding', 'authenticationtacacspolicy_binding', 'authenticationtacacspolicy_systemglobal_binding', 'authenticationtacacspolicy_vpnglobal_binding', 'authenticationtacacspolicy_vpnvserver_binding', 'authenticationvserver', 'authenticationvserver_auditnslogpolicy_binding', 'authenticationvserver_auditsyslogpolicy_binding', 'authenticationvserver_authenticationcertpolicy_binding', 'authenticationvserver_authenticationldappolicy_binding', 'authenticationvserver_authenticationlocalpolicy_binding', 'authenticationvserver_authenticationnegotiatepolicy_binding', 'authenticationvserver_authenticationpolicy_binding', 'authenticationvserver_authenticationradiuspolicy_binding', 'authenticationvserver_authenticationsamlidppolicy_binding', 'authenticationvserver_authenticationsamlpolicy_binding', 'authenticationvserver_authenticationtacacspolicy_binding', 'authenticationvserver_authenticationwebauthpolicy_binding', 'authenticationvserver_binding', 'authenticationvserver_tmsessionpolicy_binding', 'authenticationwebauthaction', 'authenticationwebauthpolicy', 'authenticationwebauthpolicy_authenticationvserver_binding', 'authenticationwebauthpolicy_binding', 'authenticationwebauthpolicy_systemglobal_binding', 'authenticationwebauthpolicy_vpnglobal_binding', 'authenticationwebauthpolicy_vpnvserver_binding'] | 3,300 | 3,300 | 0.909394 | 170 | 3,300 | 17.070588 | 0.170588 | 0.135079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 3,300 | 1 | 3,300 | 3,300 | 0.899845 | 0 | 0 | 0 | 0 | 0 | 0.906998 | 0.900939 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9e39a0cff047fe28ba7c09e98b1bfb5ade812b6 | 183 | py | Python | lab07/tienda/admin.py | AlexanderRod/TECSUP-DAE-2021-2 | 47b2cce717ff012c1b40394955388d8b2a8beb63 | [
"MIT"
] | null | null | null | lab07/tienda/admin.py | AlexanderRod/TECSUP-DAE-2021-2 | 47b2cce717ff012c1b40394955388d8b2a8beb63 | [
"MIT"
] | null | null | null | lab07/tienda/admin.py | AlexanderRod/TECSUP-DAE-2021-2 | 47b2cce717ff012c1b40394955388d8b2a8beb63 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from .models import Categoria
from .models import Producto
admin.site.register(Categoria)
admin.site.register(Producto) | 22.875 | 32 | 0.819672 | 25 | 183 | 6 | 0.48 | 0.133333 | 0.213333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10929 | 183 | 8 | 33 | 22.875 | 0.920245 | 0.142077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9e5ba737f5d7b29b3320b97c44e8447ddc008bf | 50,729 | py | Python | test/test_scrapbook_convert_wsb2sb.py | clach04/PyWebScrapBook | 310e8f20cc5337336875679246b9269265b4476a | [
"MIT"
] | 39 | 2019-04-10T18:07:40.000Z | 2022-02-07T07:11:30.000Z | test/test_scrapbook_convert_wsb2sb.py | clach04/PyWebScrapBook | 310e8f20cc5337336875679246b9269265b4476a | [
"MIT"
] | 56 | 2019-05-07T23:29:14.000Z | 2022-02-24T10:33:43.000Z | test/test_scrapbook_convert_wsb2sb.py | clach04/PyWebScrapBook | 310e8f20cc5337336875679246b9269265b4476a | [
"MIT"
] | 15 | 2019-06-12T05:16:43.000Z | 2022-01-16T13:24:11.000Z | from unittest import mock
import unittest
import os
import shutil
import glob
import zipfile
import time
from base64 import b64decode
from datetime import datetime, timezone
from lxml import etree
from webscrapbook import WSB_DIR
from webscrapbook import util
from webscrapbook.scrapbook.host import Host
from webscrapbook.scrapbook.convert import wsb2sb
from webscrapbook.scrapbook.convert.wsb2sb import RDF, NS1, NC
root_dir = os.path.abspath(os.path.dirname(__file__))
test_root = os.path.join(root_dir, 'test_scrapbook_convert')
def setUpModule():
# mock out user config
global mockings
mockings = [
mock.patch('webscrapbook.scrapbook.host.WSB_USER_DIR', os.path.join(test_root, 'wsb')),
mock.patch('webscrapbook.WSB_USER_DIR', os.path.join(test_root, 'wsb')),
mock.patch('webscrapbook.WSB_USER_CONFIG', test_root),
]
for mocking in mockings:
mocking.start()
def tearDownModule():
# stop mock
for mocking in mockings:
mocking.stop()
class Test(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.maxDiff = 8192
cls.test_input = os.path.join(test_root, 'input')
cls.test_input_config = os.path.join(cls.test_input, WSB_DIR, 'config.ini')
cls.test_input_tree = os.path.join(cls.test_input, WSB_DIR, 'tree')
cls.test_input_meta = os.path.join(cls.test_input_tree, 'meta.js')
cls.test_input_toc = os.path.join(cls.test_input_tree, 'toc.js')
cls.test_output = os.path.join(test_root, 'output')
cls.test_output_rdf = os.path.join(cls.test_output, 'scrapbook.rdf')
def setUp(self):
"""Set up a general temp test folder
"""
os.makedirs(self.test_input_tree, exist_ok=True)
os.makedirs(self.test_output, exist_ok=True)
def tearDown(self):
"""Remove general temp test folder
"""
try:
shutil.rmtree(self.test_input)
except NotADirectoryError:
os.remove(self.test_input)
except FileNotFoundError:
pass
try:
shutil.rmtree(self.test_output)
except NotADirectoryError:
os.remove(self.test_output)
except FileNotFoundError:
pass
class TestRun(Test):
def test_meta_basic(self):
        """A sample of a typical WebScrapBook item."""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"title": "Hello 中文",
"create": "20200102000000000",
"modify": "20200103000000000",
"source": "http://example.com",
"icon": "favicon.bmp",
"comment": "some comment\\nsecond line\\nthird line",
"charset": "UTF-8",
"locked": true
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write('page content')
icon_file = os.path.join(self.test_input, '20200101000000000', 'favicon.bmp')
os.makedirs(os.path.dirname(icon_file), exist_ok=True)
with open(icon_file, 'wb') as fh:
fh.write(b64decode('Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA'))
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(tree.getroot().tag, f'{RDF}RDF')
self.assertEqual(dict(tree.find(f'{RDF}Description').attrib), {
f'{RDF}about': f'urn:scrapbook:item{oid}',
f'{NS1}id': oid,
f'{NS1}type': '',
f'{NS1}title': 'Hello 中文',
f'{NS1}create': util.datetime_to_id_legacy(util.id_to_datetime('20200102000000000')),
f'{NS1}modify': util.datetime_to_id_legacy(util.id_to_datetime('20200103000000000')),
f'{NS1}source': 'http://example.com',
f'{NS1}icon': f'resource://scrapbook/data/{oid}/favicon.bmp',
f'{NS1}comment': 'some comment __BR__ second line __BR__ third line',
f'{NS1}chars': 'UTF-8',
f'{NS1}lock': 'true'
})
def test_meta_separator(self):
        """A sample of a typical WebScrapBook separator item."""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"type": "separator",
"title": "Hello 中文",
"create": "20200102000000000",
"modify": "20200103000000000"
}
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(dict(tree.find(f'{NC}BookmarkSeparator').attrib), {
f'{RDF}about': f'urn:scrapbook:item{oid}',
f'{NS1}id': oid,
f'{NS1}type': 'separator',
f'{NS1}title': 'Hello 中文',
f'{NS1}create': util.datetime_to_id_legacy(util.id_to_datetime('20200102000000000')),
f'{NS1}modify': util.datetime_to_id_legacy(util.id_to_datetime('20200103000000000')),
f'{NS1}source': '',
f'{NS1}icon': '',
f'{NS1}comment': '',
f'{NS1}chars': '',
})
def test_meta_type01(self):
"""postit => note"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "postit"
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write("""\
<!DOCTYPE html><html><head>\
<meta charset="UTF-8">\
<meta name="viewport" content="width=device-width">\
<style>pre { white-space: pre-wrap; overflow-wrap: break-word; }</style>\
</head><body><pre>
postit page content < & > < & >
</pre></body></html>""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(tree.find(f'{RDF}Description').attrib[f'{NS1}type'], 'note')
# check output legacy note format
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
self.assertEqual(fh.read(), """\
<html><head><meta http-equiv="Content-Type" content="text/html;Charset=UTF-8"></head><body><pre>
postit page content < & > < & >
</pre></body></html>""")
def test_meta_type02(self):
"""note => notex"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "note"
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write('note page content')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(tree.find(f'{RDF}Description').attrib[f'{NS1}type'], 'notex')
def test_meta_marked01(self):
"""true marked property with "" type => marked type"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"marked": true
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write('page content')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(tree.find(f'{RDF}Description').attrib[f'{NS1}type'], 'marked')
self.assertIsNone(tree.find(f'{RDF}Description').attrib.get(f'{NS1}marked'))
def test_meta_marked02(self):
"""marked property with other type => discard marked"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "file",
"marked": true
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write('page content')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(tree.find(f'{RDF}Description').attrib[f'{NS1}type'], 'file')
self.assertIsNone(tree.find(f'{RDF}Description').attrib.get(f'{NS1}marked'))
def test_meta_marked03(self):
"""false marked property => normal type"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"marked": false
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write('page content')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(tree.find(f'{RDF}Description').attrib[f'{NS1}type'], '')
self.assertIsNone(tree.find(f'{RDF}Description').attrib.get(f'{NS1}marked'))
def test_meta_create(self):
"""empty create property => no create property"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"create": ""
}
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertIsNone(tree.find(f'{RDF}Description').attrib.get(f'{NS1}create'))
def test_meta_modify(self):
"""empty modify property => no modify property"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"modify": ""
}
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertIsNone(tree.find(f'{RDF}Description').attrib.get(f'{NS1}modify'))
def test_meta_icon01(self):
"""Empty icon with icon-moz property => moz-icon:// """
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "image",
"icon": "",
"icon-moz": "moz-icon://myimage.png?size=16"
}
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
'moz-icon://myimage.png?size=16'
)
def test_meta_icon02(self):
"""File with empty icon => moz-icon:// """
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "file",
"icon": ""
}
})""")
index_file = os.path.join(self.test_input, '20200101000000000', 'index.html')
os.makedirs(os.path.dirname(index_file), exist_ok=True)
with open(index_file, 'w', encoding='UTF-8') as fh:
fh.write('<meta charset="UTF-8"><meta http-equiv="refresh" content="0;URL=./myfile.txt">')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
'moz-icon://myfile.txt?size=16'
)
def test_meta_icon03(self):
"""Absolute URL => use as-is"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"icon": "data:image/bmp;base64,Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA"
}
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
'data:image/bmp;base64,Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA'
)
def test_meta_icon04(self):
"""Favicon cache => icon folder"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000.html",
"type": "",
"icon": ".wsb/tree/favicon/dbc82be549e49d6db9a5719086722a4f1c5079cd.bmp"
}
})""")
icon_file = os.path.join(self.test_input_tree, 'favicon', 'dbc82be549e49d6db9a5719086722a4f1c5079cd.bmp')
os.makedirs(os.path.dirname(icon_file), exist_ok=True)
with open(icon_file, 'wb') as fh:
fh.write(b64decode('Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA'))
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
'resource://scrapbook/icon/dbc82be549e49d6db9a5719086722a4f1c5079cd.bmp'
)
self.assertTrue(
os.path.isfile(os.path.join(self.test_output, 'icon', 'dbc82be549e49d6db9a5719086722a4f1c5079cd.bmp'))
)
def test_meta_icon05(self):
"""Item folder => mapped item folder"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"icon": "favicon.bmp"
}
})""")
icon_file = os.path.join(self.test_input, '20200101000000000', 'favicon.bmp')
os.makedirs(os.path.dirname(icon_file), exist_ok=True)
with open(icon_file, 'wb') as fh:
fh.write(b64decode('Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA'))
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
ts = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
f'resource://scrapbook/data/{ts}/favicon.bmp'
)
self.assertTrue(
os.path.isfile(os.path.join(self.test_output, 'data', ts, 'favicon.bmp'))
)
def test_meta_icon06(self):
"""Data folder => data folder"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"icon": "../icons/favicon.bmp"
}
})""")
icon_file = os.path.join(self.test_input, 'icons', 'favicon.bmp')
os.makedirs(os.path.dirname(icon_file), exist_ok=True)
with open(icon_file, 'wb') as fh:
fh.write(b64decode('Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA'))
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
'resource://scrapbook/data/icons/favicon.bmp'
)
self.assertTrue(
os.path.isfile(os.path.join(self.test_output, 'data', 'icons', 'favicon.bmp'))
)
def test_meta_icon07(self):
"""Outside of data folder => scrapbook folder"""
with open(self.test_input_config, 'w', encoding='UTF-8') as fh:
fh.write("""\
[book ""]
data_dir = data
tree_dir = tree
""")
meta_file = os.path.join(self.test_input, 'tree', 'meta.js')
os.makedirs(os.path.dirname(meta_file), exist_ok=True)
with open(meta_file, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": "",
"icon": "../../icons/favicon.bmp"
}
})""")
icon_file = os.path.join(self.test_input, 'icons', 'favicon.bmp')
os.makedirs(os.path.dirname(icon_file), exist_ok=True)
with open(icon_file, 'wb') as fh:
fh.write(b64decode('Qk08AAAAAAAAADYAAAAoAAAAAQAAAAEAAAABACAAAAAAAAYAAAASCwAAEgsAAAAAAAAAAAAAAP8AAAAA'))
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual(
tree.find(f'{RDF}Description').attrib[f'{NS1}icon'],
'resource://scrapbook/icons/favicon.bmp'
)
self.assertTrue(
os.path.isfile(os.path.join(self.test_output, 'icons', 'favicon.bmp'))
)
def test_id_mapping01(self):
"""WebScrapBook timestamp => legacy ScrapBook timestamp"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"type": "folder"
},
"20200101000001000": {
"type": "folder"
},
"20200101000002000": {
"type": "folder"
}
})""")
with open(self.test_input_toc, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.toc({
"root": [
"20200101000000000",
"20200101000001000",
"20200101000002000"
]
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual([node.attrib[f'{NS1}id'] for node in tree.findall(f'{RDF}Description')], [
util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000')),
util.datetime_to_id_legacy(util.id_to_datetime('20200101000001000')),
util.datetime_to_id_legacy(util.id_to_datetime('20200101000002000')),
])
self.assertEqual([node.attrib[f'{RDF}resource'] for node in tree.findall(f'{RDF}Seq/{RDF}li')], [
'urn:scrapbook:item' + util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000')),
'urn:scrapbook:item' + util.datetime_to_id_legacy(util.id_to_datetime('20200101000001000')),
'urn:scrapbook:item' + util.datetime_to_id_legacy(util.id_to_datetime('20200101000002000')),
])
def test_id_mapping02(self):
        """If there is a conflict, increment by 1 from the timestamp"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"type": "folder"
},
"20200101000000001": {
"type": "folder"
},
"20200101000000010": {
"type": "folder"
}
})""")
with open(self.test_input_toc, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.toc({
"root": [
"20200101000000000",
"20200101000000001",
"20200101000000010"
]
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual([node.attrib[f'{NS1}id'] for node in tree.findall(f'{RDF}Description')], [
util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000')),
util.datetime_to_id_legacy(util.id_to_datetime('20200101000001000')),
util.datetime_to_id_legacy(util.id_to_datetime('20200101000002000')),
])
self.assertEqual([node.attrib[f'{RDF}resource'] for node in tree.findall(f'{RDF}Seq/{RDF}li')], [
'urn:scrapbook:item' + util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000')),
'urn:scrapbook:item' + util.datetime_to_id_legacy(util.id_to_datetime('20200101000001000')),
'urn:scrapbook:item' + util.datetime_to_id_legacy(util.id_to_datetime('20200101000002000')),
])
def test_id_mapping03(self):
"""Legacy timestamp => use as-is"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000": {
"type": "folder"
},
"20200101000010": {
"type": "folder"
},
"20200101000100": {
"type": "folder"
}
})""")
with open(self.test_input_toc, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.toc({
"root": [
"20200101000000",
"20200101000010",
"20200101000100"
]
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual([node.attrib[f'{NS1}id'] for node in tree.findall(f'{RDF}Description')], [
'20200101000000',
'20200101000010',
'20200101000100',
])
self.assertEqual([node.attrib[f'{RDF}resource'] for node in tree.findall(f'{RDF}Seq/{RDF}li')], [
'urn:scrapbook:item20200101000000',
'urn:scrapbook:item20200101000010',
'urn:scrapbook:item20200101000100',
])
def test_id_mapping04(self):
        """Increment by 1 from now if the ID is not a timestamp"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"dummy1": {
"type": "folder"
},
"dummy2": {
"type": "folder"
},
"dummy3": {
"type": "folder"
}
})""")
with open(self.test_input_toc, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.toc({
"root": [
"dummy1",
"dummy2",
"dummy3"
]
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
ts_now = datetime.now(timezone.utc).timestamp()
id_list = [n.attrib[f'{NS1}id'] for n in tree.findall(f'{RDF}Description')]
ts_list = [util.id_to_datetime_legacy(id).timestamp() for id in id_list]
self.assertAlmostEqual(ts_list[0], ts_now, delta=3)
self.assertEqual(ts_list[0] + 1, ts_list[1])
self.assertEqual(ts_list[0] + 2, ts_list[2])
self.assertEqual([node.attrib[f'{RDF}resource'] for node in tree.findall(f'{RDF}Seq/{RDF}li')], [
'urn:scrapbook:item' + id_list[0],
'urn:scrapbook:item' + id_list[1],
'urn:scrapbook:item' + id_list[2],
])
def test_toc_no_root(self):
        """root list does not exist => empty root container"""
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertIsNotNone(tree.find(f'{RDF}Seq[@{RDF}about="urn:scrapbook:root"]'))
self.assertIsNone(tree.find(f'{RDF}Seq[@{RDF}about="urn:scrapbook:root"]/{RDF}li'))
def test_toc_duplicate(self):
        """Duplicated item => preserve only the first one (depth-first)"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000001": {
"type": "folder"
},
"20200101000002": {
"type": "folder"
},
"20200101000003": {
"type": "folder"
}
})""")
with open(self.test_input_toc, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.toc({
"root": [
"20200101000001",
"20200101000002",
"20200101000003"
],
"20200101000001": [
"20200101000002"
]
})""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
with open(self.test_output_rdf, 'rb') as fh:
tree = etree.parse(fh)
self.assertEqual([
node.attrib[f'{RDF}resource']
for node in
tree.findall(f'{RDF}Seq[@{RDF}about="urn:scrapbook:root"]/{RDF}li')
], [
'urn:scrapbook:item20200101000001',
'urn:scrapbook:item20200101000003',
])
self.assertEqual([
node.attrib[f'{RDF}resource']
for node in
tree.findall(f'{RDF}Seq[@{RDF}about="urn:scrapbook:item20200101000001"]/{RDF}li')
], [
'urn:scrapbook:item20200101000002',
])
def test_copy_data_files01(self):
"""###/index.html => copy ###/* to <ID>/*"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": ""
}
})""")
index_dir = os.path.join(self.test_input, '20200101000000000')
os.makedirs(index_dir, exist_ok=True)
with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
fh.write('page content')
with open(os.path.join(index_dir, 'page.html'), 'w', encoding='UTF-8') as fh:
fh.write('dummy')
os.makedirs(os.path.join(self.test_input, '20200101000000001'), exist_ok=True)
with open(os.path.join(self.test_input, 'other.html'), 'w', encoding='UTF-8') as fh:
fh.write('dummy')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
set(glob.iglob(os.path.join(self.test_output, '**'), recursive=True)), {
os.path.join(self.test_output, ''),
os.path.join(self.test_output, 'scrapbook.rdf'),
os.path.join(self.test_output, 'data'),
os.path.join(self.test_output, 'data', oid),
os.path.join(self.test_output, 'data', oid, 'index.html'),
os.path.join(self.test_output, 'data', oid, 'page.html'),
})
def test_copy_data_files02(self):
"""###.html => copy ###.html to <ID>/*"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000.html",
"type": ""
}
})""")
with open(os.path.join(self.test_input, '20200101000000000.html'), 'w', encoding='UTF-8') as fh:
fh.write('page content')
with open(os.path.join(self.test_input, 'page.html'), 'w', encoding='UTF-8') as fh:
fh.write('dummy')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
set(glob.iglob(os.path.join(self.test_output, '**'), recursive=True)), {
os.path.join(self.test_output, ''),
os.path.join(self.test_output, 'scrapbook.rdf'),
os.path.join(self.test_output, 'data'),
os.path.join(self.test_output, 'data', oid),
os.path.join(self.test_output, 'data', oid, 'index.html'),
})
def test_copy_data_files03(self):
"""###.htz => copy internal files to <ID>/*"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000.htz",
"type": ""
}
})""")
with zipfile.ZipFile(os.path.join(self.test_input, '20200101000000000.htz'), 'w') as zh:
zh.writestr('index.html', 'page content')
zh.writestr('page.html', 'dummy')
zh.writestr('subdir/page2.html', 'dummy2')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
set(glob.iglob(os.path.join(self.test_output, '**'), recursive=True)), {
os.path.join(self.test_output, ''),
os.path.join(self.test_output, 'scrapbook.rdf'),
os.path.join(self.test_output, 'data'),
os.path.join(self.test_output, 'data', oid),
os.path.join(self.test_output, 'data', oid, 'index.html'),
os.path.join(self.test_output, 'data', oid, 'page.html'),
os.path.join(self.test_output, 'data', oid, 'subdir'),
os.path.join(self.test_output, 'data', oid, 'subdir', 'page2.html'),
})
def test_copy_data_files04(self):
"""###.maff => copy internal files of first topdir to <ID>/*"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000.maff",
"type": ""
}
})""")
with zipfile.ZipFile(os.path.join(self.test_input, '20200101000000000.maff'), 'w') as zh:
zh.writestr('20200101000000000/index.html', 'page content')
zh.writestr('20200101000000000/page.html', 'dummy')
zh.writestr('20200101000000000/subdir/page2.html', 'dummy2')
zh.writestr('20200101000000001/index.html', 'page content 2')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
set(glob.iglob(os.path.join(self.test_output, '**'), recursive=True)), {
os.path.join(self.test_output, ''),
os.path.join(self.test_output, 'scrapbook.rdf'),
os.path.join(self.test_output, 'data'),
os.path.join(self.test_output, 'data', oid),
os.path.join(self.test_output, 'data', oid, 'index.html'),
os.path.join(self.test_output, 'data', oid, 'page.html'),
os.path.join(self.test_output, 'data', oid, 'subdir'),
os.path.join(self.test_output, 'data', oid, 'subdir', 'page2.html'),
})
def test_copy_data_files05(self):
"""###.maff => copy nothing if no page"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000.maff",
"type": ""
}
})""")
with zipfile.ZipFile(os.path.join(self.test_input, '20200101000000000.maff'), 'w') as zh:
zh.writestr('index.html', 'dummy')
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
set(glob.iglob(os.path.join(self.test_output, '**'), recursive=True)), {
os.path.join(self.test_output, ''),
os.path.join(self.test_output, 'scrapbook.rdf'),
})
def test_copy_data_files06(self):
"""foo.bar => copy it and create meta refresh"""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "中文#1.xhtml",
"type": ""
}
})""")
with open(os.path.join(self.test_input, '中文#1.xhtml'), 'w', encoding='UTF-8') as fh:
fh.write("""\
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Title of document</title>
</head>
<body>
some content
</body>
</html>
""")
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
self.assertEqual(
set(glob.iglob(os.path.join(self.test_output, '**'), recursive=True)), {
os.path.join(self.test_output, ''),
os.path.join(self.test_output, 'scrapbook.rdf'),
os.path.join(self.test_output, 'data'),
os.path.join(self.test_output, 'data', oid),
os.path.join(self.test_output, 'data', oid, 'index.html'),
os.path.join(self.test_output, 'data', oid, '中文#1.xhtml'),
})
self.assertEqual(
util.get_meta_refreshed_file(os.path.join(self.test_output, 'data', oid, 'index.html')),
os.path.join(self.test_output, 'data', oid, '中文#1.xhtml'),
)
class TestConvertHtmlFile(Test):
def test_convert_html_file_linemarker01(self):
"""Convert linemarker."""
with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
fh.write("""\
scrapbook.meta({
"20200101000000000": {
"index": "20200101000000000/index.html",
"type": ""
}
})""")
input = """<html><body><scrapbook-linemarker data-scrapbook-id="20200101000000000" data-scrapbook-elem="linemarker" style="background: #FFFF00; background: linear-gradient(transparent 40%, rgba(255,255,0,0.9) 90%, transparent 100%);" class="first">Lorem ipsum dolor </scrapbook-linemarker><strong><scrapbook-linemarker data-scrapbook-id="20200101000000000" data-scrapbook-elem="linemarker" style="background: #FFFF00; background: linear-gradient(transparent 40%, rgba(255,255,0,0.9) 90%, transparent 100%);">sit amet</scrapbook-linemarker></strong><scrapbook-linemarker data-scrapbook-id="20200101000000000" data-scrapbook-elem="linemarker" style="background: #FFFF00; background: linear-gradient(transparent 40%, rgba(255,255,0,0.9) 90%, transparent 100%);" class="last">, consectetur adipiscing elit.</scrapbook-linemarker></body></html>"""
expected = """<html><body><span data-sb-id="20200101000000000" data-sb-obj="linemarker" class="linemarker-marked-line" style="background: #FFFF00; background: linear-gradient(transparent 40%, rgba(255,255,0,0.9) 90%, transparent 100%);">Lorem ipsum dolor </span><strong><span data-sb-id="20200101000000000" data-sb-obj="linemarker" class="linemarker-marked-line" style="background: #FFFF00; background: linear-gradient(transparent 40%, rgba(255,255,0,0.9) 90%, transparent 100%);">sit amet</span></strong><span data-sb-id="20200101000000000" data-sb-obj="linemarker" class="linemarker-marked-line" style="background: #FFFF00; background: linear-gradient(transparent 40%, rgba(255,255,0,0.9) 90%, transparent 100%);">, consectetur adipiscing elit.</span></body></html>"""
index_dir = os.path.join(self.test_input, '20200101000000000')
os.makedirs(index_dir, exist_ok=True)
with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
fh.write(input)
for info in wsb2sb.run(self.test_input, self.test_output):
pass
oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
self.assertEqual(fh.read(), expected)
    def test_convert_html_file_linemarker02(self):
        """Convert annotated linemarker."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """<html><body><scrapbook-linemarker data-scrapbook-id="20200101000000000" data-scrapbook-elem="linemarker" style="border-bottom: 2px dotted #FF0000;" class="first" title="inline annotation
2nd line">Suspendisse eget</scrapbook-linemarker></b><scrapbook-linemarker data-scrapbook-id="20200101000000000" data-scrapbook-elem="linemarker" style="border-bottom: 2px dotted #FF0000;" class="last" title="inline annotation
2nd line"> interdum quam, eu semper ipsum</scrapbook-linemarker>.<style data-scrapbook-elem="annotation-css">/* stylesheet */</style><script data-scrapbook-elem="annotation-loader">/* script */</script></body></html>"""
        expected = """<html><body><span data-sb-id="20200101000000000" data-sb-obj="inline" class="scrapbook-inline" style="border-bottom: 2px dotted #FF0000;" title="inline annotation
2nd line">Suspendisse eget</span></b><span data-sb-id="20200101000000000" data-sb-obj="inline" class="scrapbook-inline" style="border-bottom: 2px dotted #FF0000;" title="inline annotation
2nd line"> interdum quam, eu semper ipsum</span>.</body></html>"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_sticky01(self):
        """Convert sticky (styled plaintext)."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """<html><body><scrapbook-sticky data-scrapbook-id="20200101000000000" data-scrapbook-elem="sticky" class="styled plaintext" style="width: 250px; height: 100px; left: 572px; top: 83px;">annotation
2nd line</scrapbook-sticky><style data-scrapbook-elem="annotation-css">/* stylesheet */</style><script data-scrapbook-elem="annotation-loader">/* script */</script></body></html>"""
        expected = """<html><body><div data-sb-obj="freenote" style="cursor: help; overflow: visible; border: 1px solid #CCCCCC; border-top-width: 12px; background: #FAFFFA; opacity: 0.95; padding: 0px; z-index: 500000; text-align: start; font-size: small; line-height: 1.2em; word-wrap: break-word; position: absolute; width: 250px; height: 100px; left: 572px; top: 83px;">annotation<br>2nd line</div></body></html>"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_sticky02(self):
        """Convert sticky (styled plaintext relative)."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """<html><body><scrapbook-sticky data-scrapbook-id="20200101000000000" data-scrapbook-elem="sticky" class="styled plaintext relative">annotation
2nd line</scrapbook-sticky><style data-scrapbook-elem="annotation-css">/* stylesheet */</style><script data-scrapbook-elem="annotation-loader">/* script */</script></body></html>"""
        expected = """<html><body><div data-sb-obj="freenote" style="cursor: help; overflow: visible; margin: 16px auto; border: 1px solid #CCCCCC; border-top-width: 12px; background: #FAFFFA; opacity: 0.95; padding: 0px; z-index: 500000; text-align: start; font-size: small; line-height: 1.2em; word-wrap: break-word; position: static;">annotation<br>2nd line</div></body></html>"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_sticky03(self):
        """Convert sticky (styled)."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """<html><body><scrapbook-sticky data-scrapbook-id="20200101000000000" data-scrapbook-elem="sticky" class="styled" style="left: 367px; top: 323px; width: 250px; height: 100px;">annotation<div><b>2nd</b> line</div></scrapbook-sticky><style data-scrapbook-elem="annotation-css">/* stylesheet */</style><script data-scrapbook-elem="annotation-loader">/* script */</script></body></html>"""
        expected = """<html><body><div data-sb-obj="freenote" style="cursor: help; overflow: visible; border: 1px solid #CCCCCC; border-top-width: 12px; background: #FAFFFA; opacity: 0.95; padding: 0px; z-index: 500000; text-align: start; font-size: small; line-height: 1.2em; word-wrap: break-word; position: absolute; left: 367px; top: 323px; width: 250px; height: 100px;">annotation<div><b>2nd</b> line</div></div></body></html>"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_sticky04(self):
        """Convert sticky (styled relative)."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """<html><body><scrapbook-sticky data-scrapbook-id="20200101000000000" data-scrapbook-elem="sticky" class="styled relative" style="height: 42.6px;">annotation<div><b>2nd</b> line</div></scrapbook-sticky><style data-scrapbook-elem="annotation-css">/* stylesheet */</style><script data-scrapbook-elem="annotation-loader">/* script */</script></body></html>"""
        expected = """<html><body><div data-sb-obj="freenote" style="cursor: help; overflow: visible; margin: 16px auto; border: 1px solid #CCCCCC; border-top-width: 12px; background: #FAFFFA; opacity: 0.95; padding: 0px; z-index: 500000; text-align: start; font-size: small; line-height: 1.2em; word-wrap: break-word; position: static; height: 42.6px;">annotation<div><b>2nd</b> line</div></div></body></html>"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_sticky05(self):
        """Convert sticky (plaintext relative)."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """<html><body><scrapbook-sticky data-scrapbook-elem="sticky" class="plaintext relative" style="border: 1px dotted rgb(215, 221, 191) !important; margin: 10px !important; padding: 10px !important; font-size: 12px !important; font-weight: normal !important; line-height: 16px !important; text-decoration: none !important; color: rgb(96, 96, 96) !important; background-color: rgb(239, 248, 206) !important; cursor: pointer !important; white-space: pre-wrap;">Legacy block comment.
Second line.</scrapbook-sticky><style data-scrapbook-elem="annotation-css">/* stylesheet */</style><script data-scrapbook-elem="annotation-loader">/* script */</script></body></html>"""
        expected = """<html><body><div class="scrapbook-block-comment" style="border: 1px dotted rgb(215, 221, 191) !important; margin: 10px !important; padding: 10px !important; font-size: 12px !important; font-weight: normal !important; line-height: 16px !important; text-decoration: none !important; color: rgb(96, 96, 96) !important; background-color: rgb(239, 248, 206) !important; cursor: pointer !important; white-space: pre-wrap;">Legacy block comment.
Second line.</div></body></html>"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_other(self):
        """Convert other elements."""
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "index": "20200101000000000/index.html",
    "type": ""
  }
})""")
        input = """\
<!DOCTYPE html>
<html>
<head>
<title data-scrapbook-elem="title">My page</title>
</head>
<body>
Donec nec lacus<span data-scrapbook-elem="annotation">(my legacy <em>inline</em>annotation)</span> efficitur.
<a data-scrapbook-elem="link-url" href="http://example.com">Suspendisse eget interdum quam</a>, eu semper <span data-scrapbook-id="20200101000000000">ipsum</span>.
</body>
</html>
"""
        expected = """\
<!DOCTYPE html>
<html>
<head>
<title data-sb-obj="title">My page</title>
</head>
<body>
Donec nec lacus<span data-sb-obj="annotation">(my legacy <em>inline</em>annotation)</span> efficitur.
<a data-sb-obj="link-url" href="http://example.com">Suspendisse eget interdum quam</a>, eu semper <span data-sb-id="20200101000000000">ipsum</span>.
</body>
</html>
"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
    def test_convert_html_file_skip_special_tags(self):
        """Do not rewrite content in <template>, <xml>, <math>, etc.
        """
        with open(self.test_input_meta, 'w', encoding='UTF-8') as fh:
            fh.write("""\
scrapbook.meta({
  "20200101000000000": {
    "type": "",
    "index": "20200101000000000/index.html"
  }
})""")
        input = """\
<!DOCTYPE html>
<html>
<body>
<xmp>
<span data-scrapbook-elem="annotation">foo</span>
</xmp>
<template>
<span data-scrapbook-elem="annotation">foo</span>
</template>
<svg>
<text data-scrapbook-elem="annotation">foo</text>
</svg>
<math>
<mtext data-scrapbook-elem="annotation">foo</mtext>
</math>
</body>
</html>
"""
        expected = """\
<!DOCTYPE html>
<html>
<body>
<xmp>
<span data-scrapbook-elem="annotation">foo</span>
</xmp>
<template>
<span data-scrapbook-elem="annotation">foo</span>
</template>
<svg>
<text data-scrapbook-elem="annotation">foo</text>
</svg>
<math>
<mtext data-scrapbook-elem="annotation">foo</mtext>
</math>
</body>
</html>
"""
        index_dir = os.path.join(self.test_input, '20200101000000000')
        os.makedirs(index_dir, exist_ok=True)
        with open(os.path.join(index_dir, 'index.html'), 'w', encoding='UTF-8') as fh:
            fh.write(input)
        for info in wsb2sb.run(self.test_input, self.test_output):
            pass
        oid = util.datetime_to_id_legacy(util.id_to_datetime('20200101000000000'))
        with open(os.path.join(self.test_output, 'data', oid, 'index.html'), encoding='UTF-8') as fh:
            self.assertEqual(fh.read(), expected)
if __name__ == '__main__':
    unittest.main()
| 39.570203 | 850 | 0.623588 | 6,386 | 50,729 | 4.836674 | 0.06984 | 0.059831 | 0.053939 | 0.039887 | 0.846343 | 0.825331 | 0.804934 | 0.793311 | 0.787807 | 0.782044 | 0 | 0.092229 | 0.208539 | 50,729 | 1,281 | 851 | 39.601093 | 0.677061 | 0.031087 | 0 | 0.693625 | 0 | 0.026641 | 0.405156 | 0.135066 | 0 | 0 | 0 | 0 | 0.052331 | 1 | 0.039962 | false | 0.037108 | 0.016175 | 0 | 0.058991 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a2c5892af24c1eef0f50b8db3d48fb9322f987a | 32 | py | Python | Modulo_1/semana4/Modulos_Paquetes/Modulo/main-import-as.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | null | null | null | Modulo_1/semana4/Modulos_Paquetes/Modulo/main-import-as.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | null | null | null | Modulo_1/semana4/Modulos_Paquetes/Modulo/main-import-as.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | 1 | 2022-03-04T00:57:18.000Z | 2022-03-04T00:57:18.000Z | import fibo as fib
fib.fib(500)
| 10.666667 | 18 | 0.75 | 7 | 32 | 3.428571 | 0.714286 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.15625 | 32 | 2 | 19 | 16 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6a4d8cebae3cfab8f4c54f5058919b3c2756518a | 28 | py | Python | legofy/__init__.py | oliveirarodolfo/legofy | 6653e5cb6257bd89b8e660bd206afbaaedbff7e0 | [
"MIT"
] | 2 | 2015-11-05T02:11:44.000Z | 2015-11-07T15:30:28.000Z | legofy/__init__.py | oliveirarodolfo/legofy | 6653e5cb6257bd89b8e660bd206afbaaedbff7e0 | [
"MIT"
] | 1 | 2015-12-02T07:37:30.000Z | 2015-12-03T00:24:03.000Z | legofy/__init__.py | oliveirarodolfo/legofy | 6653e5cb6257bd89b8e660bd206afbaaedbff7e0 | [
"MIT"
] | null | null | null |
from .legofy import legofy
| 9.333333 | 26 | 0.785714 | 4 | 28 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 2 | 27 | 14 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a4db4a458aab22ccb9e635e5e4503a78b9bccc7 | 36,905 | py | Python | venv/lib/python3.6/site-packages/ansible_collections/arista/eos/plugins/modules/eos_prefix_lists.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 1 | 2020-01-22T13:11:23.000Z | 2020-01-22T13:11:23.000Z | venv/lib/python3.6/site-packages/ansible_collections/arista/eos/plugins/modules/eos_prefix_lists.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 12 | 2020-02-21T07:24:52.000Z | 2020-04-14T09:54:32.000Z | venv/lib/python3.6/site-packages/ansible_collections/arista/eos/plugins/modules/eos_prefix_lists.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2021 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The module file for eos_prefix_lists
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
---
module: eos_prefix_lists
short_description: Manages Prefix lists resource module
description: This module configures and manages the attributes of Prefix lists on Arista
EOS platforms.
version_added: 2.2.0
author: Gomathi Selvi Srinivasan (@GomathiselviS)
notes:
- Tested against Arista EOS 4.20.10M
- This module works with connection C(network_cli). See the L(EOS Platform Options,eos_platform_options).
options:
config:
description: A list of dictionary of prefix-list options
type: list
elements: dict
suboptions:
afi:
description:
- The Address Family Indicator (AFI) for the prefix list.
type: str
required: true
choices:
- ipv4
- ipv6
prefix_lists:
description:
- A list of prefix-lists.
type: list
elements: dict
suboptions:
name:
description: Name of the prefix-list
type: str
required: true
entries:
description: List of prefix-lists
type: list
elements: dict
suboptions:
action:
description: action to be performed on the specified path
type: str
choices: ['deny', 'permit']
address:
description: ipv4/v6 address in prefix-mask or address-masklen format
type: str
match:
description: match masklen
type: dict
suboptions:
operator:
description: equalto/greater than/lesser than
type: str
choices: ['eq', 'le', 'ge']
masklen:
description: Mask Length.
type: int
sequence:
description: sequence number
type: int
resequence:
description: Resequence the list.
type: dict
suboptions:
default:
description: Resequence with default values (10).
type: bool
start_seq:
description: Starting sequence number.
type: int
step:
description: Step to increment the sequence number.
type: int
running_config:
description:
- This option is used only with state I(parsed).
- The value of this option should be the output received from the EOS device by
executing the command B(show running-config | section access-list).
- The state I(parsed) reads the configuration from C(running_config) option and
transforms it into Ansible structured data as per the resource module's argspec
and the value is then returned in the I(parsed) key within the result.
type: str
state:
description:
- The state the configuration should be left in.
type: str
choices:
- deleted
- merged
- overridden
- replaced
- gathered
- rendered
- parsed
default: merged
"""
EXAMPLES = """
# Using merged
# Before state
# veos#show running-config | section prefix-lists
# veos#
- name: Merge provided configuration with device configuration
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv4"
        prefix_lists:
          - name: "v401"
            entries:
              - sequence: 25
                action: "deny"
                address: "45.55.4.0/24"
              - sequence: 100
                action: "permit"
                address: "11.11.2.0/24"
                match:
                  masklen: 32
                  operator: "ge"
          - name: "v402"
            entries:
              - action: "deny"
                address: "10.1.1.0/24"
                sequence: 10
                match:
                  masklen: 32
                  operator: "ge"
      - afi: "ipv6"
        prefix_lists:
          - name: "v601"
            entries:
              - sequence: 125
                action: "deny"
                address: "5000:1::/64"
# After State
# veos#
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
#
# Module Execution:
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "before": {},
# "changed": true,
# "commands": [
# "ipv6 prefix-list v601",
# "seq 125 deny 5000:1::/64",
# "ip prefix-list v401",
# "seq 25 deny 45.55.4.0/24",
# "seq 100 permit 11.11.2.0/24 ge 32",
# "ip prefix-list v402",
# "seq 10 deny 10.1.1.0/24 ge 32"
# ],
#
# using merged:
# Failure scenario : 'merged' should not be used when an existing prefix-list (sequence number)
# is to be modified.
# Before State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Merge provided configuration with device configuration
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv4"
        prefix_lists:
          - name: "v401"
            entries:
              - sequence: 25
                action: "deny"
                address: "45.55.4.0/24"
                match:
                  masklen: 32
                  operator: "ge"
              - sequence: 100
                action: "permit"
                address: "11.11.2.0/24"
                match:
                  masklen: 32
                  operator: "ge"
          - name: "v402"
            entries:
              - action: "deny"
                address: "10.1.1.0/24"
                sequence: 10
                match:
                  masklen: 32
                  operator: "ge"
      - afi: "ipv6"
        prefix_lists:
          - name: "v601"
            entries:
              - sequence: 125
                action: "deny"
                address: "5000:1::/64"
    state: merged
# Module Execution:
# fatal: [192.168.122.113]: FAILED! => {
# "changed": false,
# "invocation": {
# "module_args": {
# "config": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "resequence": null,
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "resequence": null,
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "resequence": null,
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "match": null,
# "resequence": null,
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "running_config": null,
# "state": "merged"
# }
# },
# "msg": "Sequence number 25 is already present. Use replaced/overridden operation to change the configuration"
# }
#
# Using Replaced:
# Before state:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Replace
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv4"
        prefix_lists:
          - name: "v401"
            entries:
              - sequence: 25
                action: "deny"
                address: "45.55.4.0/24"
                match:
                  masklen: 32
                  operator: "ge"
              - sequence: 200
                action: "permit"
                address: "200.11.2.0/24"
                match:
                  masklen: 32
                  operator: "ge"
    state: replaced
# After State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 200 permit 200.11.2.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
#
#
# Module Execution:
#
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "200.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 200
# }
# ],
# "name": "v401"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "ip prefix-list v401",
# "no seq 25",
# "seq 25 deny 45.55.4.0/24 ge 32",
# "seq 200 permit 200.11.2.0/24 ge 32",
# "no seq 100",
# "no ip prefix-list v402"
# ],
# Using overridden:
# Before State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 200 permit 200.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Override
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv4"
        prefix_lists:
          - name: "v401"
            entries:
              - sequence: 25
                action: "deny"
                address: "45.55.4.0/24"
              - sequence: 300
                action: "permit"
                address: "30.11.2.0/24"
                match:
                  masklen: 32
                  operator: "ge"
          - name: "v403"
            entries:
              - action: "deny"
                address: "10.1.1.0/24"
                sequence: 10
    state: overridden
# After State
# veos#
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
# veos#
#
#
# Module Execution:
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# }
# ],
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "200.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 200
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "no ipv6 prefix-list v601",
# "ip prefix-list v401",
# "seq 25 deny 45.55.4.0/24",
# "seq 300 permit 30.11.2.0/24 ge 32",
# "no seq 100",
# "no seq 200",
# "ip prefix-list v403",
# "seq 10 deny 10.1.1.0/24",
# "no ip prefix-list v402"
# ],
#
# Using deleted:
# Before State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Delete device configuration
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv6"
    state: deleted
# after State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
#
#
# Module Execution:
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# }
# ],
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "no ipv6 prefix-list v601"
# ],
#
# Using deleted
# Before state:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
# veos#
- name: Delete device configuration
  arista.eos.eos_prefix_lists:
    state: deleted
# After State:
# veos#show running-config | section prefix-list
# veos#
#
# Module Execution:
# "after": {},
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "no ip prefix-list v401",
# "no ip prefix-list v402",
# "no ip prefix-list v403"
# ],
#
# Using parsed:
# parse_prefix_lists.cfg
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
#
- name: parse configs
  arista.eos.eos_prefix_lists:
    running_config: "{{ lookup('file', './parsed_prefix_lists.cfg') }}"
    state: parsed
# Module Execution:
# "parsed": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ]
# Using rendered:
- name: Render provided configuration
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv4"
        prefix_lists:
          - name: "v401"
            entries:
              - sequence: 25
                action: "deny"
                address: "45.55.4.0/24"
              - sequence: 200
                action: "permit"
                address: "200.11.2.0/24"
                match:
                  masklen: 32
                  operator: "ge"
          - name: "v403"
            entries:
              - action: "deny"
                address: "10.1.1.0/24"
                sequence: 10
    state: rendered
# Module Execution:
# "rendered": [
# "ip prefix-list v401",
# "seq 25 deny 45.55.4.0/24",
# "seq 200 permit 200.11.2.0/24 ge 32",
# "ip prefix-list v403",
# "seq 10 deny 10.1.1.0/24"
# ]
#
# using gathered:
# Device config:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: gather configs
  arista.eos.eos_prefix_lists:
    state: gathered
# Module Execution:
#
# "gathered": [
#     {
#         "afi": "ipv4",
#         "prefix_lists": [
#             {
#                 "entries": [
#                     {
#                         "action": "deny",
#                         "address": "45.55.4.0/24",
#                         "sequence": 25
#                     },
#                     {
#                         "action": "permit",
#                         "address": "11.11.2.0/24",
#                         "match": {
#                             "masklen": 32,
#                             "operator": "ge"
#                         },
#                         "sequence": 100
#                     }
#                 ],
#                 "name": "v401"
#             },
#             {
#                 "entries": [
#                     {
#                         "action": "deny",
#                         "address": "10.1.1.0/24",
#                         "match": {
#                             "masklen": 32,
#                             "operator": "ge"
#                         },
#                         "sequence": 10
#                     }
#                 ],
#                 "name": "v402"
#             }
#         ]
#     },
#     {
#         "afi": "ipv6",
#         "prefix_lists": [
#             {
#                 "entries": [
#                     {
#                         "action": "deny",
#                         "address": "5000:1::/64",
#                         "sequence": 125
#                     }
#                 ],
#                 "name": "v601"
#             }
#         ]
#     }
# ],
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.arista.eos.plugins.module_utils.network.eos.argspec.prefix_lists.prefix_lists import (
    Prefix_listsArgs,
)
from ansible_collections.arista.eos.plugins.module_utils.network.eos.config.prefix_lists.prefix_lists import (
    Prefix_lists,
)


def main():
    """
    Main entry point for module execution

    :returns: the result from module invocation
    """
    module = AnsibleModule(
        argument_spec=Prefix_listsArgs.argument_spec,
        mutually_exclusive=[["config", "running_config"]],
        required_if=[
            ["state", "merged", ["config"]],
            ["state", "replaced", ["config"]],
            ["state", "overridden", ["config"]],
            ["state", "rendered", ["config"]],
            ["state", "parsed", ["running_config"]],
        ],
        supports_check_mode=True,
    )

    result = Prefix_lists(module).execute_module()
    module.exit_json(**result)


if __name__ == "__main__":
    main()
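The `AnsibleModule` spec above ties each `state` value to the option it requires via `required_if`. As an illustration only (a hypothetical helper named `check_required_if`, not part of the collection or of Ansible's public API), the validation rule it encodes can be sketched in plain Python:

```python
def check_required_if(required_if, params):
    """Return a list of error messages for unmet ``required_if`` rules.

    Each rule is [key, trigger_value, required_params]: when
    params[key] equals trigger_value, every name in required_params
    must be present and non-None.
    """
    errors = []
    for key, value, requirements in required_if:
        if params.get(key) == value:
            missing = [r for r in requirements if params.get(r) is None]
            if missing:
                errors.append(f"state is {value} but all of the following "
                              f"are missing: {', '.join(missing)}")
    return errors
```

For example, `state: parsed` passes only if `running_config` is supplied, mirroring the `["state", "parsed", ["running_config"]]` rule above.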
| 30.857023 | 115 | 0.299851 | 2,611 | 36,905 | 4.198008 | 0.100345 | 0.028191 | 0.066691 | 0.021348 | 0.727762 | 0.719734 | 0.704042 | 0.698203 | 0.694097 | 0.6847 | 0 | 0.111977 | 0.579434 | 36,905 | 1,195 | 116 | 30.882845 | 0.594227 | 0.00737 | 0 | 0.856392 | 0 | 0.001751 | 0.974701 | 0.009071 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000876 | false | 0 | 0.003503 | 0 | 0.004378 | 0.000876 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbf7f2610343a281c47720fc6010428ba5bcf218 | 7,068 | py | Python | tests/layers/test_misc.py | VarunBal/keras-retinanet | c45b6316515f058feed68808c548fa477523f707 | [
"Apache-2.0"
] | null | null | null | tests/layers/test_misc.py | VarunBal/keras-retinanet | c45b6316515f058feed68808c548fa477523f707 | [
"Apache-2.0"
] | null | null | null | tests/layers/test_misc.py | VarunBal/keras-retinanet | c45b6316515f058feed68808c548fa477523f707 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2017-2018 Fizyr (https://fizyr.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import tensorflow.keras as keras
import keras_retinanet.backend
import keras_retinanet.layers
import numpy as np
class TestAnchors(object):
    def test_simple(self):
        # create simple Anchors layer
        anchors_layer = keras_retinanet.layers.Anchors(
            size=32,
            stride=8,
            ratios=np.array([1], dtype=keras.backend.floatx()),
            scales=np.array([1], dtype=keras.backend.floatx()),
        )

        # create fake features input (only shape is used anyway)
        features = np.zeros((1, 2, 2, 1024), dtype=keras.backend.floatx())
        features = keras.backend.variable(features)

        # call the Anchors layer
        anchors = anchors_layer.call(features)
        anchors = keras.backend.eval(anchors)

        # expected anchor values
        expected = np.array([[
            [-12, -12, 20, 20],
            [-4 , -12, 28, 20],
            [-12, -4 , 20, 28],
            [-4 , -4 , 28, 28],
        ]], dtype=keras.backend.floatx())

        # test anchor values
        np.testing.assert_array_equal(anchors, expected)

    # mark test to fail
    def test_mini_batch(self):
        # create simple Anchors layer
        anchors_layer = keras_retinanet.layers.Anchors(
            size=32,
            stride=8,
            ratios=np.array([1], dtype=keras.backend.floatx()),
            scales=np.array([1], dtype=keras.backend.floatx()),
        )

        # create fake features input with batch_size=2
        features = np.zeros((2, 2, 2, 1024), dtype=keras.backend.floatx())
        features = keras.backend.variable(features)

        # call the Anchors layer
        anchors = anchors_layer.call(features)
        anchors = keras.backend.eval(anchors)

        # expected anchor values
        expected = np.array([[
            [-12, -12, 20, 20],
            [-4 , -12, 28, 20],
            [-12, -4 , 20, 28],
            [-4 , -4 , 28, 28],
        ]], dtype=keras.backend.floatx())
        expected = np.tile(expected, (2, 1, 1))

        # test anchor values
        np.testing.assert_array_equal(anchors, expected)
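The expected values in these tests follow from how a square base anchor of the given size is shifted across the feature map: each cell (x, y) contributes an anchor centered at ((x + 0.5) * stride, (y + 0.5) * stride). A dependency-free sketch of that shifting (an illustration inferred from the expected arrays, not the layer's actual implementation):

```python
def shift_anchors(shape, stride, size):
    """Place one square anchor of side `size` at each cell center of a
    feature map with `shape` = (height, width), `stride` pixels per
    cell. Returns [x1, y1, x2, y2] boxes in row-major order."""
    half = size / 2
    anchors = []
    for y in range(shape[0]):
        for x in range(shape[1]):
            cx = (x + 0.5) * stride   # cell center in image coordinates
            cy = (y + 0.5) * stride
            anchors.append([cx - half, cy - half, cx + half, cy + half])
    return anchors
```

For a 2x2 feature map with stride 8 and size 32, the first cell's center is (4, 4), giving the box [-12, -12, 20, 20] that the tests assert.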
class TestUpsampleLike(object):
    def test_simple(self):
        # create simple UpsampleLike layer
        upsample_like_layer = keras_retinanet.layers.UpsampleLike()

        # create input source
        source = np.zeros((1, 2, 2, 1), dtype=keras.backend.floatx())
        source = keras.backend.variable(source)
        target = np.zeros((1, 5, 5, 1), dtype=keras.backend.floatx())
        expected = target
        target = keras.backend.variable(target)

        # compute output
        actual = upsample_like_layer.call([source, target])
        actual = keras.backend.eval(actual)

        np.testing.assert_array_equal(actual, expected)

    def test_mini_batch(self):
        # create simple UpsampleLike layer
        upsample_like_layer = keras_retinanet.layers.UpsampleLike()

        # create input source
        source = np.zeros((2, 2, 2, 1), dtype=keras.backend.floatx())
        source = keras.backend.variable(source)
        target = np.zeros((2, 5, 5, 1), dtype=keras.backend.floatx())
        expected = target
        target = keras.backend.variable(target)

        # compute output
        actual = upsample_like_layer.call([source, target])
        actual = keras.backend.eval(actual)

        np.testing.assert_array_equal(actual, expected)
class TestRegressBoxes(object):
    def test_simple(self):
        mean = [0, 0, 0, 0]
        std = [0.2, 0.2, 0.2, 0.2]

        # create simple RegressBoxes layer
        regress_boxes_layer = keras_retinanet.layers.RegressBoxes(mean=mean, std=std)

        # create input
        anchors = np.array([[
            [0 , 0 , 10 , 10 ],
            [50, 50, 100, 100],
            [20, 20, 40 , 40 ],
        ]], dtype=keras.backend.floatx())
        anchors = keras.backend.variable(anchors)
        regression = np.array([[
            [0  , 0  , 0  , 0  ],
            [0.1, 0.1, 0  , 0  ],
            [0  , 0  , 0.1, 0.1],
        ]], dtype=keras.backend.floatx())
        regression = keras.backend.variable(regression)

        # compute output
        actual = regress_boxes_layer.call([anchors, regression])
        actual = keras.backend.eval(actual)

        # compute expected output
        expected = np.array([[
            [0 , 0 , 10  , 10  ],
            [51, 51, 100 , 100 ],
            [20, 20, 40.4, 40.4],
        ]], dtype=keras.backend.floatx())

        np.testing.assert_array_almost_equal(actual, expected, decimal=2)

    # mark test to fail
    def test_mini_batch(self):
        mean = [0, 0, 0, 0]
        std = [0.2, 0.2, 0.2, 0.2]

        # create simple RegressBoxes layer
        regress_boxes_layer = keras_retinanet.layers.RegressBoxes(mean=mean, std=std)

        # create input
        anchors = np.array([
            [
                [0 , 0 , 10 , 10 ],  # 1
                [50, 50, 100, 100],  # 2
                [20, 20, 40 , 40 ],  # 3
            ],
            [
                [20, 20, 40 , 40 ],  # 3
                [0 , 0 , 10 , 10 ],  # 1
                [50, 50, 100, 100],  # 2
            ],
        ], dtype=keras.backend.floatx())
        anchors = keras.backend.variable(anchors)
        regression = np.array([
            [
                [0  , 0  , 0  , 0  ],  # 1
                [0.1, 0.1, 0  , 0  ],  # 2
                [0  , 0  , 0.1, 0.1],  # 3
            ],
            [
                [0  , 0  , 0.1, 0.1],  # 3
                [0  , 0  , 0  , 0  ],  # 1
                [0.1, 0.1, 0  , 0  ],  # 2
            ],
        ], dtype=keras.backend.floatx())
        regression = keras.backend.variable(regression)

        # compute output
        actual = regress_boxes_layer.call([anchors, regression])
        actual = keras.backend.eval(actual)

        # compute expected output
        expected = np.array([
            [
                [0 , 0 , 10  , 10  ],  # 1
                [51, 51, 100 , 100 ],  # 2
                [20, 20, 40.4, 40.4],  # 3
            ],
            [
                [20, 20, 40.4, 40.4],  # 3
                [0 , 0 , 10  , 10  ],  # 1
                [51, 51, 100 , 100 ],  # 2
            ],
        ], dtype=keras.backend.floatx())

        np.testing.assert_array_almost_equal(actual, expected, decimal=2)
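The expected outputs in these tests are consistent with applying each regression delta, scaled by `std` and offset by `mean`, as a fraction of the anchor's width or height. A plain-Python sketch of that transform for a single box (inferred from the test values; the layer itself operates on batched tensors):

```python
def regress_box(anchor, deltas, mean=(0, 0, 0, 0), std=(0.2, 0.2, 0.2, 0.2)):
    """Apply normalized regression deltas to an [x1, y1, x2, y2] anchor.

    Each coordinate moves by (delta * std + mean) times the anchor's
    width (x coordinates) or height (y coordinates).
    """
    width = anchor[2] - anchor[0]
    height = anchor[3] - anchor[1]
    sizes = (width, height, width, height)
    return [a + (d * s + m) * sz
            for a, d, s, m, sz in zip(anchor, deltas, std, mean, sizes)]
```

For the anchor [50, 50, 100, 100] with deltas [0.1, 0.1, 0, 0]: width is 50, so x1 shifts by 0.1 * 0.2 * 50 = 1, matching the expected [51, 51, 100, 100].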
| 33.183099 | 85 | 0.549802 | 845 | 7,068 | 4.537278 | 0.171598 | 0.106416 | 0.084507 | 0.073031 | 0.809338 | 0.783255 | 0.743871 | 0.703704 | 0.649452 | 0.633803 | 0 | 0.074339 | 0.326259 | 7,068 | 212 | 86 | 33.339623 | 0.730785 | 0.174731 | 0 | 0.729323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045113 | 1 | 0.045113 | false | 0 | 0.030075 | 0 | 0.097744 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e00c2fcd6259fe36a100f999f63a29b4005e4601 | 63 | py | Python | src/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer/__init__.py | m3-learning/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer | 93f4b568998a28ff474e98d8c68f3fefb4365232 | [
"BSD-3-Clause"
] | 2 | 2021-10-15T15:50:46.000Z | 2022-02-04T17:56:30.000Z | src/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer/__init__.py | m3-learning/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer | 93f4b568998a28ff474e98d8c68f3fefb4365232 | [
"BSD-3-Clause"
] | null | null | null | src/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer/__init__.py | m3-learning/Recursive_Symmetry_Aware_Materials_Microstructure_Explorer | 93f4b568998a28ff474e98d8c68f3fefb4365232 | [
"BSD-3-Clause"
] | null | null | null | from . import util
from . import viz
from . import select_model | 21 | 26 | 0.777778 | 10 | 63 | 4.8 | 0.6 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174603 | 63 | 3 | 26 | 21 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0594517b86688d17a482691b76586f3876f78bf | 176 | py | Python | lokahi_dropbox/wall_post/models.py | y4ahmed/Crowdfunding-Web-Application | 52beab945ee88f8fd773f942577137c770a601c1 | [
"MIT"
] | 4 | 2017-09-28T04:26:33.000Z | 2022-01-04T22:51:17.000Z | lokahi_dropbox/wall_post/models.py | y4ahmed/Crowdfunding-Web-Application | 52beab945ee88f8fd773f942577137c770a601c1 | [
"MIT"
] | null | null | null | lokahi_dropbox/wall_post/models.py | y4ahmed/Crowdfunding-Web-Application | 52beab945ee88f8fd773f942577137c770a601c1 | [
"MIT"
] | 1 | 2021-01-17T23:11:21.000Z | 2021-01-17T23:11:21.000Z | from django.db import models
# Create your models here.
class Post(models.Model):
    message = models.CharField(max_length=255)
    sender = models.CharField(max_length=255)
| 25.142857 | 46 | 0.755682 | 25 | 176 | 5.24 | 0.68 | 0.229008 | 0.274809 | 0.366412 | 0.412214 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.147727 | 176 | 6 | 47 | 29.333333 | 0.833333 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
e071c7306d7aa4b1659f4489b2d5581951f990eb | 23 | py | Python | pylsci/__init__.py | pkeilbach/pylsci | c57d0e60b80b65eb85cbc949e2d4c18fb371d378 | [
"MIT"
] | 5 | 2021-06-09T11:24:11.000Z | 2022-03-04T08:24:23.000Z | pylsci/__init__.py | pkeilbach/pylsci | c57d0e60b80b65eb85cbc949e2d4c18fb371d378 | [
"MIT"
] | 1 | 2021-01-26T21:11:14.000Z | 2021-01-26T21:11:14.000Z | pylsci/__init__.py | pkeilbach/pylsci | c57d0e60b80b65eb85cbc949e2d4c18fb371d378 | [
"MIT"
] | 2 | 2021-09-17T08:21:09.000Z | 2022-02-09T12:40:11.000Z | from .lsci import Lsci
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0779799e1bda7fb27f75912efcceaf57c657bf0 | 154,028 | py | Python | tests/track/loader_test.py | karmi/rally | 51a83d7ad2b94de90b135749956b354cb50bcffc | [
"Apache-2.0"
] | null | null | null | tests/track/loader_test.py | karmi/rally | 51a83d7ad2b94de90b135749956b354cb50bcffc | [
"Apache-2.0"
] | null | null | null | tests/track/loader_test.py | karmi/rally | 51a83d7ad2b94de90b135749956b354cb50bcffc | [
"Apache-2.0"
] | null | null | null | # Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
import random
import re
import textwrap
import unittest.mock as mock
import urllib.error
from unittest import TestCase
from esrally import exceptions, config
from esrally.track import loader, track
from esrally.utils import io
def strip_ws(s):
    return re.sub(r"\s", "", s)
class StaticClock:
    NOW = 1453362707.0

    @staticmethod
    def now():
        return StaticClock.NOW

    @staticmethod
    def stop_watch():
        return None
class SimpleTrackRepositoryTests(TestCase):
    @mock.patch("os.path.exists")
    @mock.patch("os.path.isdir")
    def test_track_from_directory(self, is_dir, path_exists):
        is_dir.return_value = True
        path_exists.return_value = True

        repo = loader.SimpleTrackRepository("/path/to/track/unit-test")
        self.assertEqual("unit-test", repo.track_name)
        self.assertEqual(["unit-test"], repo.track_names)
        self.assertEqual("/path/to/track/unit-test", repo.track_dir("unit-test"))
        self.assertEqual("/path/to/track/unit-test/track.json", repo.track_file("unit-test"))

    @mock.patch("os.path.exists")
    @mock.patch("os.path.isdir")
    @mock.patch("os.path.isfile")
    def test_track_from_file(self, is_file, is_dir, path_exists):
        is_file.return_value = True
        is_dir.return_value = False
        path_exists.return_value = True

        repo = loader.SimpleTrackRepository("/path/to/track/unit-test/my-track.json")
        self.assertEqual("my-track", repo.track_name)
        self.assertEqual(["my-track"], repo.track_names)
        self.assertEqual("/path/to/track/unit-test", repo.track_dir("my-track"))
        self.assertEqual("/path/to/track/unit-test/my-track.json", repo.track_file("my-track"))

    @mock.patch("os.path.exists")
    @mock.patch("os.path.isdir")
    @mock.patch("os.path.isfile")
    def test_track_from_named_pipe(self, is_file, is_dir, path_exists):
        is_file.return_value = False
        is_dir.return_value = False
        path_exists.return_value = True

        with self.assertRaises(exceptions.SystemSetupError) as ctx:
            loader.SimpleTrackRepository("a named pipe cannot point to a track")
        self.assertEqual("a named pipe cannot point to a track is neither a file nor a directory", ctx.exception.args[0])

    @mock.patch("os.path.exists")
    def test_track_from_non_existing_path(self, path_exists):
        path_exists.return_value = False
        with self.assertRaises(exceptions.SystemSetupError) as ctx:
            loader.SimpleTrackRepository("/path/does/not/exist")
        self.assertEqual("Track path /path/does/not/exist does not exist", ctx.exception.args[0])

    @mock.patch("os.path.isdir")
    @mock.patch("os.path.exists")
    def test_track_from_directory_without_track(self, path_exists, is_dir):
        # directory exists, but not the file
        path_exists.side_effect = [True, False]
        is_dir.return_value = True
        with self.assertRaises(exceptions.SystemSetupError) as ctx:
            loader.SimpleTrackRepository("/path/to/not/a/track")
        self.assertEqual("Could not find track.json in /path/to/not/a/track", ctx.exception.args[0])

    @mock.patch("os.path.exists")
    @mock.patch("os.path.isdir")
    @mock.patch("os.path.isfile")
    def test_track_from_file_but_not_json(self, is_file, is_dir, path_exists):
        is_file.return_value = True
        is_dir.return_value = False
        path_exists.return_value = True

        with self.assertRaises(exceptions.SystemSetupError) as ctx:
            loader.SimpleTrackRepository("/path/to/track/unit-test/my-track.xml")
        self.assertEqual("/path/to/track/unit-test/my-track.xml has to be a JSON file", ctx.exception.args[0])
class GitRepositoryTests(TestCase):
    class MockGitRepo:
        def __init__(self, remote_url, root_dir, repo_name, resource_name, offline, fetch=True):
            self.repo_dir = "%s/%s" % (root_dir, repo_name)

    @mock.patch("os.path.exists")
    @mock.patch("os.walk")
    def test_track_from_existing_repo(self, walk, exists):
        walk.return_value = iter([(".", ["unittest", "unittest2", "unittest3"], [])])
        exists.return_value = True
        cfg = config.Config()
        cfg.add(config.Scope.application, "track", "track.name", "unittest")
        cfg.add(config.Scope.application, "track", "repository.name", "default")
        cfg.add(config.Scope.application, "system", "offline.mode", False)
        cfg.add(config.Scope.application, "node", "root.dir", "/tmp")
        cfg.add(config.Scope.application, "benchmarks", "track.repository.dir", "tracks")

        repo = loader.GitTrackRepository(cfg, fetch=False, update=False, repo_class=GitRepositoryTests.MockGitRepo)

        self.assertEqual("unittest", repo.track_name)
        self.assertEqual(["unittest", "unittest2", "unittest3"], list(repo.track_names))
        self.assertEqual("/tmp/tracks/default/unittest", repo.track_dir("unittest"))
        self.assertEqual("/tmp/tracks/default/unittest/track.json", repo.track_file("unittest"))
class TrackPreparationTests(TestCase):
    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_does_nothing_if_document_file_available(self, is_file, get_size, prepare_file_offset_table):
        is_file.return_value = True
        get_size.return_value = 2000
        prepare_file_offset_table.return_value = 5

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        p.prepare_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                document_file="docs.json",
                document_archive="docs.json.bz2",
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="/tmp")

        prepare_file_offset_table.assert_called_with("/tmp/docs.json")

    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_decompresses_if_archive_available(self, is_file, get_size, prepare_file_offset_table):
        is_file.return_value = True
        get_size.return_value = 2000
        prepare_file_offset_table.return_value = 5

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        p.prepare_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                document_file="docs.json",
                document_archive="docs.json.bz2",
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="/tmp")

        prepare_file_offset_table.assert_called_with("/tmp/docs.json")

    @mock.patch("esrally.utils.io.decompress")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_raise_error_on_wrong_uncompressed_file_size(self, is_file, get_size, decompress):
        # uncompressed file does not exist
        # compressed file exists
        # after decompression, uncompressed file exists
        is_file.side_effect = [False, True, True]
        # compressed file size is 200
        # uncompressed is corrupt, only 1 byte available
        get_size.side_effect = [200, 1]

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.DataError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    document_file="docs.json",
                    document_archive="docs.json.bz2",
                    number_of_documents=5,
                    compressed_size_in_bytes=200,
                    uncompressed_size_in_bytes=2000),
                data_root="/tmp")
        self.assertEqual("[/tmp/docs.json] is corrupt. Extracted [1] bytes but [2000] bytes are expected.", ctx.exception.args[0])

        decompress.assert_called_with("/tmp/docs.json.bz2", "/tmp")
    @mock.patch("esrally.utils.io.decompress")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_raise_error_if_compressed_does_not_contain_expected_document_file(self, is_file, get_size, decompress):
        # uncompressed file does not exist
        # compressed file exists
        # after decompression, uncompressed file does not exist (e.g. because the output file name is called differently)
        is_file.side_effect = [False, True, False]
        # compressed file size is 200
        get_size.return_value = 200

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.DataError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    base_url="http://benchmarks.elasticsearch.org/corpora/unit-test",
                    document_file="docs.json",
                    document_archive="docs.json.bz2",
                    number_of_documents=5,
                    compressed_size_in_bytes=200,
                    uncompressed_size_in_bytes=2000),
                data_root="/tmp")
        self.assertEqual("Decompressing [/tmp/docs.json.bz2] did not create [/tmp/docs.json]. Please check with the track author if the "
                         "compressed archive has been created correctly.", ctx.exception.args[0])

        decompress.assert_called_with("/tmp/docs.json.bz2", "/tmp")

    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("esrally.utils.io.decompress")
    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_download_document_archive_if_no_file_available(self, is_file, get_size, ensure_dir, download, decompress,
                                                            prepare_file_offset_table):
        # uncompressed file does not exist
        # compressed file does not exist
        # after download compressed file exists
        # after download uncompressed file still does not exist (in main loop)
        # after download compressed file exists (in main loop)
        # after decompression, uncompressed file exists
        is_file.side_effect = [False, False, True, False, True, True, True]
        # compressed file size is 200 after download
        # compressed file size is 200 after download (in main loop)
        # uncompressed file size is 2000 after decompression
        # uncompressed file size is 2000 after decompression (in main loop)
        get_size.side_effect = [200, 200, 2000, 2000]
        prepare_file_offset_table.return_value = 5

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        p.prepare_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                base_url="http://benchmarks.elasticsearch.org/corpora/unit-test",
                document_file="docs.json",
                document_archive="docs.json.bz2",
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="/tmp")

        ensure_dir.assert_called_with("/tmp")
        decompress.assert_called_with("/tmp/docs.json.bz2", "/tmp")
        download.assert_called_with("http://benchmarks.elasticsearch.org/corpora/unit-test/docs.json.bz2",
                                    "/tmp/docs.json.bz2", 200, progress_indicator=mock.ANY)
        prepare_file_offset_table.assert_called_with("/tmp/docs.json")
    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("esrally.utils.io.decompress")
    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_download_document_with_trailing_baseurl_slash(self, is_file, get_size, ensure_dir, download, decompress,
                                                           prepare_file_offset_table):
        # uncompressed file does not exist
        # after download uncompressed file exists
        # after download uncompressed file exists (main loop)
        is_file.side_effect = [False, True, True]
        # uncompressed file size is 2000
        get_size.return_value = 2000
        scheme = random.choice(["http", "https", "s3", "gs"])
        prepare_file_offset_table.return_value = 5

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        p.prepare_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                base_url=f"{scheme}://benchmarks.elasticsearch.org/corpora/unit-test/",
                document_file="docs.json",
                # --> We don't provide a document archive here <--
                document_archive=None,
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="/tmp")

        ensure_dir.assert_called_with("/tmp")
        download.assert_called_with(f"{scheme}://benchmarks.elasticsearch.org/corpora/unit-test/docs.json",
                                    "/tmp/docs.json", 2000, progress_indicator=mock.ANY)
        prepare_file_offset_table.assert_called_with("/tmp/docs.json")

    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_download_document_file_if_no_file_available(self, is_file, get_size, ensure_dir, download, prepare_file_offset_table):
        # uncompressed file does not exist
        # after download uncompressed file exists
        # after download uncompressed file exists (main loop)
        is_file.side_effect = [False, True, True]
        # uncompressed file size is 2000
        get_size.return_value = 2000
        prepare_file_offset_table.return_value = 5

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        p.prepare_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                base_url="http://benchmarks.elasticsearch.org/corpora/unit-test",
                document_file="docs.json",
                # --> We don't provide a document archive here <--
                document_archive=None,
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="/tmp")

        ensure_dir.assert_called_with("/tmp")
        download.assert_called_with("http://benchmarks.elasticsearch.org/corpora/unit-test/docs.json",
                                    "/tmp/docs.json", 2000, progress_indicator=mock.ANY)
        prepare_file_offset_table.assert_called_with("/tmp/docs.json")

    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.isfile")
    def test_raise_download_error_if_offline(self, is_file, ensure_dir, download):
        # uncompressed file does not exist
        is_file.return_value = False

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=True, test_mode=False),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.SystemSetupError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    base_url="http://benchmarks.elasticsearch.org/corpora/unit-test",
                    document_file="docs.json",
                    number_of_documents=5,
                    uncompressed_size_in_bytes=2000),
                data_root="/tmp")
        self.assertEqual("Cannot find [/tmp/docs.json]. Please disable offline mode and retry.", ctx.exception.args[0])

        self.assertEqual(0, ensure_dir.call_count)
        self.assertEqual(0, download.call_count)
    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.isfile")
    def test_raise_download_error_if_no_url_provided_and_file_missing(self, is_file, ensure_dir, download):
        # uncompressed file does not exist
        is_file.return_value = False

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.DataError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    base_url=None,
                    document_file="docs.json",
                    document_archive=None,
                    number_of_documents=5,
                    uncompressed_size_in_bytes=2000),
                data_root="/tmp")
        self.assertEqual("Cannot download data because no base URL is provided.", ctx.exception.args[0])

        self.assertEqual(0, ensure_dir.call_count)
        self.assertEqual(0, download.call_count)

    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_raise_download_error_if_no_url_provided_and_wrong_file_size(self, is_file, get_size, ensure_dir, download):
        # uncompressed file exists...
        is_file.return_value = True
        # but its size is wrong
        get_size.return_value = 100

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.DataError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    document_file="docs.json",
                    number_of_documents=5,
                    uncompressed_size_in_bytes=2000),
                data_root="/tmp")
        self.assertEqual("[/tmp/docs.json] is present but does not have the expected size of [2000] bytes and it "
                         "cannot be downloaded because no base URL is provided.", ctx.exception.args[0])

        self.assertEqual(0, ensure_dir.call_count)
        self.assertEqual(0, download.call_count)

    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.isfile")
    def test_raise_download_error_no_test_mode_file(self, is_file, ensure_dir, download):
        # uncompressed file does not exist
        is_file.return_value = False

        download.side_effect = urllib.error.HTTPError(
            "http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/unit-test/docs-1k.json", 404, "", None, None)

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=True),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.DataError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    base_url="http://benchmarks.elasticsearch.org/corpora/unit-test",
                    document_file="docs-1k.json",
                    number_of_documents=5,
                    uncompressed_size_in_bytes=None),
                data_root="/tmp")
        self.assertEqual("This track does not support test mode. Ask the track author to add it or disable "
                         "test mode and retry.", ctx.exception.args[0])

        ensure_dir.assert_called_with("/tmp")
        download.assert_called_with("http://benchmarks.elasticsearch.org/corpora/unit-test/docs-1k.json",
                                    "/tmp/docs-1k.json", None, progress_indicator=mock.ANY)
    @mock.patch("esrally.utils.net.download")
    @mock.patch("esrally.utils.io.ensure_dir")
    @mock.patch("os.path.isfile")
    def test_raise_download_error_on_connection_problems(self, is_file, ensure_dir, download):
        # uncompressed file does not exist
        is_file.return_value = False

        download.side_effect = urllib.error.HTTPError(
            "http://benchmarks.elasticsearch.org/corpora/unit-test/docs.json", 500, "Internal Server Error", None, None)

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        with self.assertRaises(exceptions.DataError) as ctx:
            p.prepare_document_set(
                document_set=track.Documents(
                    source_format=track.Documents.SOURCE_FORMAT_BULK,
                    base_url="http://benchmarks.elasticsearch.org/corpora/unit-test",
                    document_file="docs.json",
                    number_of_documents=5,
                    uncompressed_size_in_bytes=2000),
                data_root="/tmp")
        self.assertEqual("Could not download [http://benchmarks.elasticsearch.org/corpora/unit-test/docs.json] "
                         "to [/tmp/docs.json] (HTTP status: 500, reason: Internal Server Error)", ctx.exception.args[0])

        ensure_dir.assert_called_with("/tmp")
        download.assert_called_with("http://benchmarks.elasticsearch.org/corpora/unit-test/docs.json",
                                    "/tmp/docs.json", 2000, progress_indicator=mock.ANY)

    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("esrally.utils.io.decompress")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_prepare_bundled_document_set_if_document_file_available(self, is_file, get_size, decompress, prepare_file_offset_table):
        is_file.return_value = True
        # check only uncompressed
        get_size.side_effect = [2000]
        prepare_file_offset_table.return_value = 5

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        self.assertTrue(p.prepare_bundled_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                document_file="docs.json",
                document_archive="docs.json.bz2",
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="."))

        prepare_file_offset_table.assert_called_with("./docs.json")

    @mock.patch("esrally.utils.io.prepare_file_offset_table")
    @mock.patch("esrally.utils.io.decompress")
    @mock.patch("os.path.getsize")
    @mock.patch("os.path.isfile")
    def test_prepare_bundled_document_set_does_nothing_if_no_document_files(self, is_file, get_size, decompress, prepare_file_offset_table):
        # no files present
        is_file.return_value = False

        p = loader.DocumentSetPreparator(
            track_name="unit-test",
            downloader=loader.Downloader(offline=False, test_mode=False),
            decompressor=loader.Decompressor())

        self.assertFalse(p.prepare_bundled_document_set(
            document_set=track.Documents(
                source_format=track.Documents.SOURCE_FORMAT_BULK,
                document_file="docs.json",
                document_archive="docs.json.bz2",
                number_of_documents=5,
                compressed_size_in_bytes=200,
                uncompressed_size_in_bytes=2000),
            data_root="."))

        self.assertEqual(0, decompress.call_count)
        self.assertEqual(0, prepare_file_offset_table.call_count)
def test_used_corpora(self):
track_specification = {
"description": "description for unit test",
"indices": [
{"name": "logs-181998"},
{"name": "logs-191998"},
{"name": "logs-201998"},
],
"corpora": [
{
"name": "http_logs_unparsed",
"target-type": "type",
"documents": [
{
"target-index": "logs-181998",
"source-file": "documents-181998.unparsed.json.bz2",
"document-count": 2708746,
"compressed-bytes": 13064317,
"uncompressed-bytes": 303920342
},
{
"target-index": "logs-191998",
"source-file": "documents-191998.unparsed.json.bz2",
"document-count": 9697882,
"compressed-bytes": 47211781,
"uncompressed-bytes": 1088378738
},
{
"target-index": "logs-201998",
"source-file": "documents-201998.unparsed.json.bz2",
"document-count": 13053463,
"compressed-bytes": 63174979,
"uncompressed-bytes": 1456836090
}
]
},
{
"name": "http_logs",
"target-type": "type",
"documents": [
{
"target-index": "logs-181998",
"source-file": "documents-181998.json.bz2",
"document-count": 2708746,
"compressed-bytes": 13815456,
"uncompressed-bytes": 363512754
},
{
"target-index": "logs-191998",
"source-file": "documents-191998.json.bz2",
"document-count": 9697882,
"compressed-bytes": 49439633,
"uncompressed-bytes": 1301732149
},
{
"target-index": "logs-201998",
"source-file": "documents-201998.json.bz2",
"document-count": 13053463,
"compressed-bytes": 65623436,
"uncompressed-bytes": 1744012279
}
]
}
],
"operations": [
{
"name": "bulk-index-1",
"operation-type": "bulk",
"corpora": ["http_logs"],
"indices": ["logs-181998"],
"bulk-size": 500
},
{
"name": "bulk-index-2",
"operation-type": "bulk",
"corpora": ["http_logs"],
"indices": ["logs-191998"],
"bulk-size": 500
},
{
"name": "bulk-index-3",
"operation-type": "bulk",
"corpora": ["http_logs_unparsed"],
"indices": ["logs-201998"],
"bulk-size": 500
},
{
"name": "node-stats",
"operation-type": "node-stats"
},
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"parallel": {
"tasks": [
{
"name": "index-1",
"operation": "bulk-index-1",
},
{
"name": "index-2",
"operation": "bulk-index-2",
},
{
"name": "index-3",
"operation": "bulk-index-3",
},
]
}
},
{
"operation": "node-stats"
}
]
}
]
}
reader = loader.TrackSpecificationReader(selected_challenge="default-challenge")
full_track = reader("unittest", track_specification, "/mappings")
used_corpora = sorted(loader.used_corpora(full_track), key=lambda c: c.name)
self.assertEqual(2, len(used_corpora))
self.assertEqual("http_logs", used_corpora[0].name)
# each bulk operation requires a different data file but they should have been merged properly.
self.assertEqual({"documents-181998.json.bz2", "documents-191998.json.bz2"},
{d.document_archive for d in used_corpora[0].documents})
self.assertEqual("http_logs_unparsed", used_corpora[1].name)
self.assertEqual({"documents-201998.unparsed.json.bz2"}, {d.document_archive for d in used_corpora[1].documents})
@mock.patch("esrally.utils.io.prepare_file_offset_table")
@mock.patch("esrally.utils.io.decompress")
@mock.patch("os.path.getsize")
@mock.patch("os.path.isfile")
def test_prepare_bundled_document_set_decompresses_compressed_docs(self, is_file, get_size, decompress, prepare_file_offset_table):
        # uncompressed file is missing
        # compressed archive is present
        # uncompressed file is present after decompression
        # final loop iteration - uncompressed file is present now
is_file.side_effect = [False, True, True, True]
# compressed
# uncompressed after decompression
# uncompressed in final loop iteration
get_size.side_effect = [200, 2000, 2000]
prepare_file_offset_table.return_value = 5
p = loader.DocumentSetPreparator(track_name="unit-test",
downloader=loader.Downloader(offline=False, test_mode=False),
decompressor=loader.Decompressor())
self.assertTrue(p.prepare_bundled_document_set(document_set=track.Documents(source_format=track.Documents.SOURCE_FORMAT_BULK,
document_file="docs.json",
document_archive="docs.json.bz2",
number_of_documents=5,
compressed_size_in_bytes=200,
uncompressed_size_in_bytes=2000),
data_root="."))
prepare_file_offset_table.assert_called_with("./docs.json")
@mock.patch("os.path.getsize")
@mock.patch("os.path.isfile")
def test_prepare_bundled_document_set_error_compressed_docs_wrong_size(self, is_file, get_size):
        # uncompressed file is missing
        # compressed archive is present
is_file.side_effect = [False, True]
# compressed has wrong size
get_size.side_effect = [150]
p = loader.DocumentSetPreparator(track_name="unit-test",
downloader=loader.Downloader(offline=False, test_mode=False),
decompressor=loader.Decompressor())
with self.assertRaises(exceptions.DataError) as ctx:
p.prepare_bundled_document_set(document_set=track.Documents(source_format=track.Documents.SOURCE_FORMAT_BULK,
document_file="docs.json",
document_archive="docs.json.bz2",
number_of_documents=5,
compressed_size_in_bytes=200,
uncompressed_size_in_bytes=2000),
data_root=".")
self.assertEqual("[./docs.json.bz2] is present but does not have the expected size of [200] bytes.",
ctx.exception.args[0])
@mock.patch("esrally.utils.io.prepare_file_offset_table")
@mock.patch("esrally.utils.io.decompress")
@mock.patch("os.path.getsize")
@mock.patch("os.path.isfile")
def test_prepare_bundled_document_set_uncompressed_docs_wrong_size(self, is_file, get_size, decompress, prepare_file_offset_table):
# uncompressed is present
is_file.side_effect = [True]
# uncompressed
get_size.side_effect = [1500]
p = loader.DocumentSetPreparator(track_name="unit-test",
downloader=loader.Downloader(offline=False, test_mode=False),
decompressor=loader.Decompressor())
with self.assertRaises(exceptions.DataError) as ctx:
p.prepare_bundled_document_set(document_set=track.Documents(source_format=track.Documents.SOURCE_FORMAT_BULK,
document_file="docs.json",
document_archive="docs.json.bz2",
number_of_documents=5,
compressed_size_in_bytes=200,
uncompressed_size_in_bytes=2000),
data_root=".")
self.assertEqual("[./docs.json] is present but does not have the expected size of [2000] bytes.",
ctx.exception.args[0])
self.assertEqual(0, prepare_file_offset_table.call_count)
class TemplateSource(TestCase):
@mock.patch("esrally.utils.io.dirname")
@mock.patch.object(loader.TemplateSource, "read_glob_files")
def test_entrypoint_of_replace_includes(self, patched_read_glob, patched_dirname):
track = textwrap.dedent("""
{% import "rally.helpers" as rally with context %}
{
"version": 2,
"description": "unittest track",
"data-url": "http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames",
"indices": [
{
"name": "geonames",
"body": "index.json"
}
],
"corpora": [
{
"name": "geonames",
"base-url": "http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames",
"documents": [
{
"source-file": "documents-2.json.bz2",
"document-count": 11396505,
"compressed-bytes": 264698741,
"uncompressed-bytes": 3547614383
}
]
}
],
"operations": [
{{ rally.collect(parts="operations/*.json") }}
],
"challenges": [
{{ rally.collect(parts="challenges/*.json") }}
]
}
""")
def dummy_read_glob(c):
return "{{\"replaced {}\": \"true\"}}".format(c)
patched_read_glob.side_effect = dummy_read_glob
base_path = "~/.rally/benchmarks/tracks/default/geonames"
template_file_name = "track.json"
tmpl_src = loader.TemplateSource(base_path, template_file_name)
# pylint: disable=trailing-whitespace
expected_response = textwrap.dedent("""
{% import "rally.helpers" as rally with context %}
{
"version": 2,
"description": "unittest track",
"data-url": "http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames",
"indices": [
{
"name": "geonames",
"body": "index.json"
}
],
"corpora": [
{
"name": "geonames",
"base-url": "http://benchmarks.elasticsearch.org.s3.amazonaws.com/corpora/geonames",
"documents": [
{
"source-file": "documents-2.json.bz2",
"document-count": 11396505,
"compressed-bytes": 264698741,
"uncompressed-bytes": 3547614383
}
]
}
],
"operations": [
{"replaced ~/.rally/benchmarks/tracks/default/geonames/operations/*.json": "true"}
],
"challenges": [
{"replaced ~/.rally/benchmarks/tracks/default/geonames/challenges/*.json": "true"}
]
}
""")
self.assertEqual(
expected_response,
tmpl_src.replace_includes(base_path, track)
)
def test_read_glob_files(self):
tmpl_obj = loader.TemplateSource(
base_path="/some/path/to/a/rally/track",
template_file_name="track.json",
fileglobber=lambda pat: [
os.path.join(os.path.dirname(__file__), "resources", "track_fragment_1.json"),
os.path.join(os.path.dirname(__file__), "resources", "track_fragment_2.json")
]
)
response = tmpl_obj.read_glob_files("*track_fragment_*.json")
expected_response = '{\n "item1": "value1"\n}\n,\n{\n "item2": "value2"\n}\n'
self.assertEqual(expected_response, response)
class TemplateRenderTests(TestCase):
unittest_template_internal_vars = loader.default_internal_template_vars(clock=StaticClock)
def test_render_simple_template(self):
template = """
{
"key": {{'01-01-2000' | days_ago(now)}},
"key2": "static value"
}
"""
rendered = loader.render_template(template, template_internal_vars=TemplateRenderTests.unittest_template_internal_vars)
expected = """
{
"key": 5864,
"key2": "static value"
}
"""
self.assertEqual(expected, rendered)
def test_render_template_with_external_variables(self):
template = """
{
"greeting": "{{greeting | default("Aloha")}}",
"name": "{{name | default("stranger")}}"
}
"""
rendered = loader.render_template(template, template_vars={"greeting": "Hi"},
template_internal_vars=TemplateRenderTests.unittest_template_internal_vars)
expected = """
{
"greeting": "Hi",
"name": "stranger"
}
"""
self.assertEqual(expected, rendered)
def test_render_template_with_globbing(self):
def key_globber(e):
if e == "dynamic-key-*":
return [
"dynamic-key-1",
"dynamic-key-2",
"dynamic-key-3",
]
else:
return []
template = """
{% import "rally.helpers" as rally %}
{
"key1": "static value",
{{ rally.collect(parts="dynamic-key-*") }}
}
"""
source = io.DictStringFileSourceFactory({
"dynamic-key-1": [
textwrap.dedent('"dkey1": "value1"')
],
"dynamic-key-2": [
textwrap.dedent('"dkey2": "value2"')
],
"dynamic-key-3": [
textwrap.dedent('"dkey3": "value3"')
]
})
template_source = loader.TemplateSource("", "track.json", source=source, fileglobber=key_globber)
template_source.load_template_from_string(template)
rendered = loader.render_template(
template_source.assembled_source,
template_internal_vars=TemplateRenderTests.unittest_template_internal_vars)
expected = """
{
"key1": "static value",
"dkey1": "value1",
"dkey2": "value2",
"dkey3": "value3"
}
"""
self.assertEqualIgnoreWhitespace(expected, rendered)
def test_render_template_with_variables(self):
template = """
{% set _clients = clients if clients is defined else 16 %}
{% set _bulk_size = bulk_size if bulk_size is defined else 100 %}
{% import "rally.helpers" as rally with context %}
{
"key1": "static value",
"dkey1": {{ _clients }},
"dkey2": {{ _bulk_size }}
}
"""
rendered = loader.render_template(
template,
template_vars={"clients": 8},
template_internal_vars=TemplateRenderTests.unittest_template_internal_vars)
expected = """
{
"key1": "static value",
"dkey1": 8,
"dkey2": 100
}
"""
self.assertEqualIgnoreWhitespace(expected, rendered)
def assertEqualIgnoreWhitespace(self, expected, actual):
self.assertEqual(strip_ws(expected), strip_ws(actual))
class CompleteTrackParamsTests(TestCase):
assembled_source = textwrap.dedent("""{% import "rally.helpers" as rally with context %}
"key1": "value1",
"key2": {{ value2 | default(3) }},
"key3": {{ value3 | default("default_value3") }}
"key4": {{ value2 | default(3) }}
""")
def test_check_complete_track_params_contains_all_track_params(self):
complete_track_params = loader.CompleteTrackParams()
loader.register_all_params_in_track(CompleteTrackParamsTests.assembled_source, complete_track_params)
self.assertEqual(
["value2", "value3"],
complete_track_params.sorted_track_defined_params
)
def test_check_complete_track_params_does_not_fail_with_no_track_params(self):
complete_track_params = loader.CompleteTrackParams()
loader.register_all_params_in_track('{}', complete_track_params)
self.assertEqual(
[],
complete_track_params.sorted_track_defined_params
)
def test_unused_user_defined_track_params(self):
track_params = {
"number_of_repliacs": 1, # deliberate typo
"enable_source": True, # unknown parameter
"number_of_shards": 5
}
complete_track_params = loader.CompleteTrackParams(user_specified_track_params=track_params)
complete_track_params.populate_track_defined_params(list_of_track_params=[
"bulk_indexing_clients",
"bulk_indexing_iterations",
"bulk_size",
"cluster_health",
"number_of_replicas",
"number_of_shards"]
)
self.assertEqual(
["enable_source", "number_of_repliacs"],
sorted(complete_track_params.unused_user_defined_track_params())
)
    def test_unused_user_defined_track_params_doesnt_fail_with_defaults(self):
complete_track_params = loader.CompleteTrackParams()
complete_track_params.populate_track_defined_params(list_of_track_params=[
"bulk_indexing_clients",
"bulk_indexing_iterations",
"bulk_size",
"cluster_health",
"number_of_replicas",
"number_of_shards"]
)
self.assertEqual(
[],
sorted(complete_track_params.unused_user_defined_track_params())
)
class TrackPostProcessingTests(TestCase):
track_with_params_as_string = textwrap.dedent("""{
"indices": [
{
"name": "test-index",
"body": "test-index-body.json",
"types": ["test-type"]
}
],
"corpora": [
{
"name": "unittest",
"documents": [
{
"source-file": "documents.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
},
{
"name": "search",
"operation-type": "search"
}
],
"challenges": [
{
"name": "default-challenge",
"description": "Default challenge",
"schedule": [
{
"clients": {{ bulk_indexing_clients | default(8) }},
"operation": "index-append",
"warmup-time-period": 100,
"time-period": 240
},
{
"parallel": {
"tasks": [
{
"name": "search #1",
"clients": 4,
"operation": "search",
"warmup-iterations": 1000,
"iterations": 2000,
"target-interval": 30
},
{
"name": "search #2",
"clients": 1,
"operation": "search",
"warmup-iterations": 1000,
"iterations": 2000,
"target-throughput": 200
},
{
"name": "search #3",
"clients": 1,
"operation": "search",
"iterations": 1
}
]
}
}
]
}
]
}""")
def test_post_processes_track_spec(self):
track_specification = {
"indices": [
{
"name": "test-index",
"body": "test-index-body.json",
"types": ["test-type"]
}
],
"corpora": [
{
"name": "unittest",
"documents": [
{
"source-file": "documents.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
},
{
"name": "search",
"operation-type": "search"
}
],
"challenges": [
{
"name": "default-challenge",
"description": "Default challenge",
"schedule": [
{
"clients": 8,
"operation": "index-append",
"warmup-time-period": 100,
"time-period": 240,
},
{
"parallel": {
"tasks": [
{
"name": "search #1",
"clients": 4,
"operation": "search",
"warmup-iterations": 1000,
"iterations": 2000,
"target-interval": 30
},
{
"name": "search #2",
"clients": 1,
"operation": "search",
"warmup-iterations": 1000,
"iterations": 2000,
"target-throughput": 200
},
{
"name": "search #3",
"clients": 1,
"operation": "search",
"iterations": 1
}
]
}
}
]
}
]
}
expected_post_processed = {
"indices": [
{
"name": "test-index",
"body": "test-index-body.json",
"types": ["test-type"]
}
],
"corpora": [
{
"name": "unittest",
"documents": [
{
"source-file": "documents-1k.json.bz2",
"document-count": 1000
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
},
{
"name": "search",
"operation-type": "search"
}
],
"challenges": [
{
"name": "default-challenge",
"description": "Default challenge",
"schedule": [
{
"clients": 8,
"operation": "index-append",
"warmup-time-period": 0,
"time-period": 10,
},
{
"parallel": {
"tasks": [
{
"name": "search #1",
"clients": 4,
"operation": "search",
"warmup-iterations": 4,
"iterations": 4
},
{
"name": "search #2",
"clients": 1,
"operation": "search",
"warmup-iterations": 1,
"iterations": 1
},
{
"name": "search #3",
"clients": 1,
"operation": "search",
"iterations": 1
}
]
}
}
]
}
]
}
complete_track_params = loader.CompleteTrackParams()
index_body = '{"settings": {"index.number_of_shards": {{ number_of_shards | default(5) }}, '\
'"index.number_of_replicas": {{ number_of_replicas | default(0)}} }}'
cfg = config.Config()
cfg.add(config.Scope.application, "track", "test.mode.enabled", True)
self.assertEqual(
self.as_track(expected_post_processed, complete_track_params=complete_track_params, index_body=index_body),
loader.TestModeTrackProcessor(cfg).on_after_load_track(
self.as_track(track_specification, complete_track_params=complete_track_params, index_body=index_body)
)
)
self.assertEqual(
["number_of_replicas", "number_of_shards"],
complete_track_params.sorted_track_defined_params
)
def as_track(self, track_specification, track_params=None, complete_track_params=None, index_body=None):
reader = loader.TrackSpecificationReader(
track_params=track_params,
complete_track_params=complete_track_params,
source=io.DictStringFileSourceFactory({
"/mappings/test-index-body.json": [index_body]
})
)
return reader("unittest", track_specification, "/mappings")
class TrackPathTests(TestCase):
@mock.patch("os.path.exists")
def test_sets_absolute_path(self, path_exists):
path_exists.return_value = True
cfg = config.Config()
cfg.add(config.Scope.application, "benchmarks", "local.dataset.cache", "/data")
default_challenge = track.Challenge("default", default=True, schedule=[
track.Task(name="index", operation=track.Operation("index", operation_type=track.OperationType.Bulk), clients=4)
])
another_challenge = track.Challenge("other", default=False)
t = track.Track(name="u", challenges=[another_challenge, default_challenge],
corpora=[
track.DocumentCorpus("unittest", documents=[
track.Documents(source_format=track.Documents.SOURCE_FORMAT_BULK,
document_file="docs/documents.json",
document_archive="docs/documents.json.bz2")
])
],
indices=[track.Index(name="test", types=["docs"])])
loader.set_absolute_data_path(cfg, t)
self.assertEqual("/data/unittest/docs/documents.json", t.corpora[0].documents[0].document_file)
self.assertEqual("/data/unittest/docs/documents.json.bz2", t.corpora[0].documents[0].document_archive)
class TrackFilterTests(TestCase):
def filter(self, track_specification, include_tasks=None, exclude_tasks=None):
cfg = config.Config()
cfg.add(config.Scope.application, "track", "include.tasks", include_tasks)
cfg.add(config.Scope.application, "track", "exclude.tasks", exclude_tasks)
processor = loader.TaskFilterTrackProcessor(cfg)
return processor.on_after_load_track(track_specification)
def test_rejects_invalid_syntax(self):
with self.assertRaises(exceptions.SystemSetupError) as ctx:
self.filter(track_specification=None, include_tasks=["valid", "a:b:c"])
self.assertEqual("Invalid format for filtered tasks: [a:b:c]", ctx.exception.args[0])
def test_rejects_unknown_filter_type(self):
with self.assertRaises(exceptions.SystemSetupError) as ctx:
self.filter(track_specification=None, include_tasks=["valid", "op-type:index"])
self.assertEqual("Invalid format for filtered tasks: [op-type:index]. Expected [type] but got [op-type].",
ctx.exception.args[0])
def test_filters_tasks(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index", "auto-managed": False}],
"operations": [
{
"name": "create-index",
"operation-type": "create-index"
},
{
"name": "bulk-index",
"operation-type": "bulk"
},
{
"name": "node-stats",
"operation-type": "node-stats"
},
{
"name": "cluster-stats",
"operation-type": "custom-operation-type"
},
{
"name": "match-all",
"operation-type": "search",
"body": {
"query": {
"match_all": {}
}
}
},
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"operation": "create-index"
},
{
"parallel": {
"tasks": [
{
"name": "index-1",
"operation": "bulk-index",
},
{
"name": "index-2",
"operation": "bulk-index",
},
{
"name": "index-3",
"operation": "bulk-index",
},
{
"name": "match-all-parallel",
"operation": "match-all",
},
]
}
},
{
"operation": "node-stats"
},
{
"name": "match-all-serial",
"operation": "match-all"
},
{
"operation": "cluster-stats"
},
{
"parallel": {
"tasks": [
{
"name": "query-filtered",
"tags": "include-me",
"operation": "match-all",
},
{
"name": "index-4",
"tags": ["include-me", "bulk-task"],
"operation": "bulk-index",
},
{
"name": "index-5",
"operation": "bulk-index",
}
]
}
},
{
"name": "final-cluster-stats",
"operation": "cluster-stats",
"tags": "include-me"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
full_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(7, len(full_track.challenges[0].schedule))
filtered = self.filter(full_track, include_tasks=["index-3",
"type:search",
# Filtering should also work for non-core operation types.
"type:custom-operation-type",
"tag:include-me"])
schedule = filtered.challenges[0].schedule
self.assertEqual(5, len(schedule))
self.assertEqual(["index-3", "match-all-parallel"], [t.name for t in schedule[0].tasks])
self.assertEqual("match-all-serial", schedule[1].name)
self.assertEqual("cluster-stats", schedule[2].name)
self.assertEqual(["query-filtered", "index-4"], [t.name for t in schedule[3].tasks])
self.assertEqual("final-cluster-stats", schedule[4].name)
def test_filters_exclude_tasks(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index", "auto-managed": False}],
"operations": [
{
"name": "create-index",
"operation-type": "create-index"
},
{
"name": "bulk-index",
"operation-type": "bulk"
},
{
"name": "node-stats",
"operation-type": "node-stats"
},
{
"name": "cluster-stats",
"operation-type": "custom-operation-type"
},
{
"name": "match-all",
"operation-type": "search",
"body": {
"query": {
"match_all": {}
}
}
},
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"operation": "create-index"
},
{
"parallel": {
"tasks": [
{
"name": "index-1",
"operation": "bulk-index",
},
{
"name": "index-2",
"operation": "bulk-index",
},
{
"name": "index-3",
"operation": "bulk-index",
},
{
"name": "match-all-parallel",
"operation": "match-all",
},
]
}
},
{
"operation": "node-stats"
},
{
"name": "match-all-serial",
"operation": "match-all"
},
{
"operation": "cluster-stats"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
full_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(5, len(full_track.challenges[0].schedule))
filtered = self.filter(full_track, exclude_tasks=["index-3", "type:search", "create-index"])
schedule = filtered.challenges[0].schedule
self.assertEqual(3, len(schedule))
self.assertEqual(["index-1", "index-2"], [t.name for t in schedule[0].tasks])
self.assertEqual("node-stats", schedule[1].name)
self.assertEqual("cluster-stats", schedule[2].name)
def test_unmatched_exclude_runs_everything(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index", "auto-managed": False}],
"operations": [
{
"name": "create-index",
"operation-type": "create-index"
},
{
"name": "bulk-index",
"operation-type": "bulk"
},
{
"name": "node-stats",
"operation-type": "node-stats"
},
{
"name": "cluster-stats",
"operation-type": "custom-operation-type"
},
{
"name": "match-all",
"operation-type": "search",
"body": {
"query": {
"match_all": {}
}
}
},
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"operation": "create-index"
},
{
"operation": "bulk-index"
},
{
"operation": "node-stats"
},
{
"name": "match-all-serial",
"operation": "match-all"
},
{
"operation": "cluster-stats"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
full_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(5, len(full_track.challenges[0].schedule))
expected_schedule = full_track.challenges[0].schedule.copy()
filtered = self.filter(full_track, exclude_tasks=["nothing"])
schedule = filtered.challenges[0].schedule
self.assertEqual(expected_schedule, schedule)
def test_unmatched_include_runs_nothing(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index", "auto-managed": False}],
"operations": [
{
"name": "create-index",
"operation-type": "create-index"
},
{
"name": "bulk-index",
"operation-type": "bulk"
},
{
"name": "node-stats",
"operation-type": "node-stats"
},
{
"name": "cluster-stats",
"operation-type": "custom-operation-type"
},
{
"name": "match-all",
"operation-type": "search",
"body": {
"query": {
"match_all": {}
}
}
},
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"operation": "create-index"
},
{
"operation": "bulk-index"
},
{
"operation": "node-stats"
},
{
"name": "match-all-serial",
"operation": "match-all"
},
{
"operation": "cluster-stats"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
full_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(5, len(full_track.challenges[0].schedule))
expected_schedule = []
filtered = self.filter(full_track, include_tasks=["nothing"])
schedule = filtered.challenges[0].schedule
self.assertEqual(expected_schedule, schedule)
# pylint: disable=too-many-public-methods
class TrackSpecificationReaderTests(TestCase):
def test_description_is_optional(self):
track_specification = {
# no description here
"challenges": []
}
reader = loader.TrackSpecificationReader()
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("", resulting_track.description)
def test_can_read_track_info(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index", "types": ["test-type"]}],
"data-streams": [],
"corpora": [],
"operations": [],
"challenges": []
}
reader = loader.TrackSpecificationReader()
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
def test_document_count_mandatory_if_file_present(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index", "types": ["docs"]}],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [{"source-file": "documents-main.json.bz2"}]
}
],
"challenges": []
}
reader = loader.TrackSpecificationReader()
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Mandatory element 'document-count' is missing.", ctx.exception.args[0])
@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_with_mixed_warmup_iterations_and_measurement(self, mocked_params_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "test-index",
"body": "index.json",
"types": ["docs"]
}
],
"corpora": [
{
"name": "test",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000,
}
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"clients": 8,
"operation": "index-append",
"warmup-iterations": 3,
"time-period": 60
}
]
}
]
}
reader = loader.TrackSpecificationReader(source=io.DictStringFileSourceFactory({
"/mappings/index.json": ['{"mappings": {"docs": "empty-for-test"}}'],
}))
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Operation 'index-append' in challenge 'default-challenge' defines '3' warmup "
"iterations and a time period of '60' seconds. Please do not mix time periods and iterations.",
ctx.exception.args[0])
@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_missing_challenge_or_challenges(self, mocked_params_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "test-index",
"body": "index.json",
"types": ["docs"]
}
],
"corpora": [
{
"name": "test",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
# no challenge or challenges element
}
reader = loader.TrackSpecificationReader(source=io.DictStringFileSourceFactory({
"/mappings/index.json": ['{"mappings": {"docs": "empty-for-test"}}'],
}))
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. You must define 'challenge', 'challenges' or 'schedule' but none is specified.",
ctx.exception.args[0])
@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_challenge_and_challenges_are_defined(self, mocked_params_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "test-index",
"body": "index.json",
"types": ["docs"]
}
],
"corpora": [
{
"name": "test",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
# We define both. Note that challenges without any properties would not pass JSON schema validation but we don't test this here.
"challenge": {},
"challenges": []
}
reader = loader.TrackSpecificationReader(source=io.DictStringFileSourceFactory({
"/mappings/index.json": ['{"mappings": {"docs": "empty-for-test"}}'],
}))
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Multiple out of 'challenge', 'challenges' or 'schedule' are defined but only "
"one of them is allowed.", ctx.exception.args[0])
@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_with_mixed_warmup_time_period_and_iterations(self, mocked_params_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "test-index",
"body": "index.json",
"types": ["docs"]
}
],
"corpora": [
{
"name": "test",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "index",
"bulk-size": 5000,
}
],
"challenges": [
{
"name": "default-challenge",
"schedule": [
{
"clients": 8,
"operation": "index-append",
"warmup-time-period": 20,
"iterations": 1000
}
]
}
]
}
reader = loader.TrackSpecificationReader(source=io.DictStringFileSourceFactory({
"/mappings/index.json": ['{"mappings": {"docs": "empty-for-test"}}'],
}))
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Operation 'index-append' in challenge 'default-challenge' defines a warmup time "
"period of '20' seconds and '1000' iterations. Please do not mix time periods and iterations.",
ctx.exception.args[0])

def test_parse_duplicate_implicit_task_names(self):
track_specification = {
"description": "description for unit test",
"operations": [
{
"name": "search",
"operation-type": "search",
"index": "_all"
}
],
"challenge": {
"name": "default-challenge",
"schedule": [
{
"operation": "search",
"clients": 1
},
{
"operation": "search",
"clients": 2
}
]
}
}
reader = loader.TrackSpecificationReader()
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Challenge 'default-challenge' contains multiple tasks with the name 'search'. Please"
" use the task's name property to assign a unique name for each task.",
ctx.exception.args[0])

def test_parse_duplicate_explicit_task_names(self):
track_specification = {
"description": "description for unit test",
"operations": [
{
"name": "search",
"operation-type": "search",
"index": "_all"
}
],
"challenge": {
"name": "default-challenge",
"schedule": [
{
"name": "duplicate-task-name",
"operation": "search",
"clients": 1
},
{
"name": "duplicate-task-name",
"operation": "search",
"clients": 2
}
]
}
}
reader = loader.TrackSpecificationReader()
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Challenge 'default-challenge' contains multiple tasks with the name "
"'duplicate-task-name'. Please use the task's name property to assign a unique name for each task.",
ctx.exception.args[0])

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_load_invalid_index_body(self, mocked_params_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "index-historical",
"body": "body.json",
"types": ["_doc"]
}
],
"corpora": [
{
"name": "test",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "index",
"bulk-size": 5000
}
}
]
}
reader = loader.TrackSpecificationReader(
track_params={"number_of_shards": 3},
source=io.DictStringFileSourceFactory({
"/mappings/body.json": ["""
{
"settings": {
"number_of_shards": {{ number_of_shards }}
},
"mappings": {
"_doc": "no closing quotation mark!!,
}
}
"""]
}))
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Could not load file template for 'definition for index index-historical in body.json'", ctx.exception.args[0])

def test_parse_unique_task_names(self):
track_specification = {
"description": "description for unit test",
"operations": [
{
"name": "search",
"operation-type": "search",
"index": "_all"
}
],
"challenge": {
"name": "default-challenge",
"schedule": [
{
"name": "search-one-client",
"operation": "search",
"clients": 1
},
{
"name": "search-two-clients",
"operation": "search",
"clients": 2
}
]
}
}
reader = loader.TrackSpecificationReader(selected_challenge="default-challenge")
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual("unittest", resulting_track.name)
challenge = resulting_track.challenges[0]
self.assertTrue(challenge.selected)
schedule = challenge.schedule
self.assertEqual(2, len(schedule))
self.assertEqual("search-one-client", schedule[0].name)
self.assertEqual("search", schedule[0].operation.name)
self.assertEqual("search-two-clients", schedule[1].name)
self.assertEqual("search", schedule[1].operation.name)

def test_parse_indices_valid_track_specification(self):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "index-historical",
"body": "body.json",
"types": ["main", "secondary"]
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"meta": {
"test-corpus": True
},
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
"target-index": "index-historical",
"target-type": "main",
"meta": {
"test-docs": True,
"role": "main"
}
},
{
"source-file": "documents-secondary.json.bz2",
"includes-action-and-meta-data": True,
"document-count": 20,
"compressed-bytes": 200,
"uncompressed-bytes": 20000,
"meta": {
"test-docs": True,
"role": "secondary"
}
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "index",
"bulk-size": 5000,
"meta": {
"append": True
}
},
{
"name": "search",
"operation-type": "search",
"index": "index-historical"
}
],
"challenges": [
{
"name": "default-challenge",
"description": "Default challenge",
"meta": {
"mixed": True,
"max-clients": 8
},
"schedule": [
{
"clients": 8,
"operation": "index-append",
"meta": {
"operation-index": 0
}
},
{
"clients": 1,
"operation": "search"
}
]
}
]
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
track_params={"number_of_shards": 3},
complete_track_params=complete_track_params,
source=io.DictStringFileSourceFactory({
"/mappings/body.json": ["""
{
"settings": {
"number_of_shards": {{ number_of_shards }}
},
"mappings": {
"main": "empty-for-test",
"secondary": "empty-for-test"
}
}
"""]
}))
resulting_track = reader("unittest", track_specification, "/mappings")
# j2 variables defined in the track -- used for checking mismatching user track params
self.assertEqual(
["number_of_shards"],
complete_track_params.sorted_track_defined_params
)
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
# indices
self.assertEqual(1, len(resulting_track.indices))
self.assertEqual("index-historical", resulting_track.indices[0].name)
self.assertDictEqual({
"settings": {
"number_of_shards": 3
},
"mappings":
{
"main": "empty-for-test",
"secondary": "empty-for-test"
}
}, resulting_track.indices[0].body)
self.assertEqual(2, len(resulting_track.indices[0].types))
self.assertEqual("main", resulting_track.indices[0].types[0])
self.assertEqual("secondary", resulting_track.indices[0].types[1])
# corpora
self.assertEqual(1, len(resulting_track.corpora))
self.assertEqual("test", resulting_track.corpora[0].name)
self.assertDictEqual({"test-corpus": True}, resulting_track.corpora[0].meta_data)
self.assertEqual(2, len(resulting_track.corpora[0].documents))
docs_primary = resulting_track.corpora[0].documents[0]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_primary.source_format)
self.assertEqual("documents-main.json", docs_primary.document_file)
self.assertEqual("documents-main.json.bz2", docs_primary.document_archive)
self.assertEqual("https://localhost/data", docs_primary.base_url)
self.assertFalse(docs_primary.includes_action_and_meta_data)
self.assertEqual(10, docs_primary.number_of_documents)
self.assertEqual(100, docs_primary.compressed_size_in_bytes)
self.assertEqual(10000, docs_primary.uncompressed_size_in_bytes)
self.assertEqual("index-historical", docs_primary.target_index)
self.assertEqual("main", docs_primary.target_type)
self.assertDictEqual({
"test-docs": True,
"role": "main"
}, docs_primary.meta_data)
docs_secondary = resulting_track.corpora[0].documents[1]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_secondary.source_format)
self.assertEqual("documents-secondary.json", docs_secondary.document_file)
self.assertEqual("documents-secondary.json.bz2", docs_secondary.document_archive)
self.assertEqual("https://localhost/data", docs_secondary.base_url)
self.assertTrue(docs_secondary.includes_action_and_meta_data)
self.assertEqual(20, docs_secondary.number_of_documents)
self.assertEqual(200, docs_secondary.compressed_size_in_bytes)
self.assertEqual(20000, docs_secondary.uncompressed_size_in_bytes)
# No targets are set because these documents include action-and-meta-data lines.
self.assertIsNone(docs_secondary.target_index)
self.assertIsNone(docs_secondary.target_type)
self.assertDictEqual({
"test-docs": True,
"role": "secondary"
}, docs_secondary.meta_data)
# challenges
self.assertEqual(1, len(resulting_track.challenges))
self.assertEqual("default-challenge", resulting_track.challenges[0].name)
self.assertEqual("Default challenge", resulting_track.challenges[0].description)
self.assertEqual({"mixed": True, "max-clients": 8}, resulting_track.challenges[0].meta_data)
self.assertEqual({"append": True}, resulting_track.challenges[0].schedule[0].operation.meta_data)
self.assertEqual({"operation-index": 0}, resulting_track.challenges[0].schedule[0].meta_data)

def test_parse_data_streams_valid_track_specification(self):
track_specification = {
"description": "description for unit test",
"data-streams": [
{
"name": "data-stream-historical"
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
"target-data-stream": "data-stream-historical"
},
{
"source-file": "documents-secondary.json.bz2",
"includes-action-and-meta-data": True,
"document-count": 20,
"compressed-bytes": 200,
"uncompressed-bytes": 20000
},
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
"target-data-stream": "data-stream-historical"
}
]
}
],
"operations": [
{
"name": "index-append",
"operation-type": "index",
"bulk-size": 5000,
"meta": {
"append": True
}
},
{
"name": "search",
"operation-type": "search",
"data-stream": "data-stream-historical"
}
],
"challenges": [
{
"name": "default-challenge",
"description": "Default challenge",
"meta": {
"mixed": True,
"max-clients": 8
},
"schedule": [
{
"clients": 8,
"operation": "index-append",
"meta": {
"operation-index": 0
}
},
{
"clients": 1,
"operation": "search"
}
]
}
]
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
complete_track_params=complete_track_params)
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
# data streams
self.assertEqual(1, len(resulting_track.data_streams))
self.assertEqual("data-stream-historical", resulting_track.data_streams[0].name)
# corpora
self.assertEqual(1, len(resulting_track.corpora))
self.assertEqual("test", resulting_track.corpora[0].name)
self.assertEqual(3, len(resulting_track.corpora[0].documents))
docs_primary = resulting_track.corpora[0].documents[0]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_primary.source_format)
self.assertEqual("documents-main.json", docs_primary.document_file)
self.assertEqual("documents-main.json.bz2", docs_primary.document_archive)
self.assertEqual("https://localhost/data", docs_primary.base_url)
self.assertFalse(docs_primary.includes_action_and_meta_data)
self.assertEqual(10, docs_primary.number_of_documents)
self.assertEqual(100, docs_primary.compressed_size_in_bytes)
self.assertEqual(10000, docs_primary.uncompressed_size_in_bytes)
self.assertEqual("data-stream-historical", docs_primary.target_data_stream)
self.assertIsNone(docs_primary.target_index)
self.assertIsNone(docs_primary.target_type)
docs_secondary = resulting_track.corpora[0].documents[1]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_secondary.source_format)
self.assertEqual("documents-secondary.json", docs_secondary.document_file)
self.assertEqual("documents-secondary.json.bz2", docs_secondary.document_archive)
self.assertEqual("https://localhost/data", docs_secondary.base_url)
self.assertTrue(docs_secondary.includes_action_and_meta_data)
self.assertEqual(20, docs_secondary.number_of_documents)
self.assertEqual(200, docs_secondary.compressed_size_in_bytes)
self.assertEqual(20000, docs_secondary.uncompressed_size_in_bytes)
# No targets are set because these documents include action-and-meta-data lines.
self.assertIsNone(docs_secondary.target_data_stream)
self.assertIsNone(docs_secondary.target_index)
self.assertIsNone(docs_secondary.target_type)
docs_tertiary = resulting_track.corpora[0].documents[2]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_tertiary.source_format)
self.assertEqual("documents-main.json", docs_tertiary.document_file)
self.assertEqual("documents-main.json.bz2", docs_tertiary.document_archive)
self.assertEqual("https://localhost/data", docs_tertiary.base_url)
self.assertFalse(docs_tertiary.includes_action_and_meta_data)
self.assertEqual(10, docs_tertiary.number_of_documents)
self.assertEqual(100, docs_tertiary.compressed_size_in_bytes)
self.assertEqual(10000, docs_tertiary.uncompressed_size_in_bytes)
self.assertIsNone(docs_tertiary.target_index)
self.assertIsNone(docs_tertiary.target_type)
self.assertEqual("data-stream-historical", docs_tertiary.target_data_stream)
# challenges
self.assertEqual(1, len(resulting_track.challenges))
self.assertEqual("default-challenge", resulting_track.challenges[0].name)
self.assertEqual("Default challenge", resulting_track.challenges[0].description)
self.assertEqual({"mixed": True, "max-clients": 8}, resulting_track.challenges[0].meta_data)
self.assertEqual({"append": True}, resulting_track.challenges[0].schedule[0].operation.meta_data)
self.assertEqual({"operation-index": 0}, resulting_track.challenges[0].schedule[0].meta_data)

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_valid_without_types(self, mocked_param_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "index-historical",
"body": "body.json"
# no type information here
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
},
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
}
}
]
}
reader = loader.TrackSpecificationReader(
track_params={"number_of_shards": 3},
source=io.DictStringFileSourceFactory({
"/mappings/body.json": ["""
{
"settings": {
"number_of_shards": {{ number_of_shards }}
}
}
"""]
}))
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
# indices
self.assertEqual(1, len(resulting_track.indices))
self.assertEqual("index-historical", resulting_track.indices[0].name)
self.assertDictEqual({
"settings": {
"number_of_shards": 3
}
}, resulting_track.indices[0].body)
self.assertEqual(0, len(resulting_track.indices[0].types))
# corpora
self.assertEqual(1, len(resulting_track.corpora))
self.assertEqual("test", resulting_track.corpora[0].name)
self.assertEqual(1, len(resulting_track.corpora[0].documents))
docs_primary = resulting_track.corpora[0].documents[0]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_primary.source_format)
self.assertEqual("documents-main.json", docs_primary.document_file)
self.assertEqual("documents-main.json.bz2", docs_primary.document_archive)
self.assertEqual("https://localhost/data", docs_primary.base_url)
self.assertFalse(docs_primary.includes_action_and_meta_data)
self.assertEqual(10, docs_primary.number_of_documents)
self.assertEqual(100, docs_primary.compressed_size_in_bytes)
self.assertEqual(10000, docs_primary.uncompressed_size_in_bytes)
self.assertEqual("index-historical", docs_primary.target_index)
self.assertIsNone(docs_primary.target_type)
self.assertIsNone(docs_primary.target_data_stream)
# challenges
self.assertEqual(1, len(resulting_track.challenges))

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_invalid_data_streams_with_indices(self, mocked_param_checker):
track_specification = {
"description": "description for unit test",
"indices": [
{
"name": "index-historical",
# no type information here
}
],
"data-streams": [
{
"name": "historical-data-stream"
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
},
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
}
}
]
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
complete_track_params=complete_track_params)
with self.assertRaises(loader.TrackSyntaxError):
reader("unittest", track_specification, "/mappings")

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_invalid_data_streams_with_target_index(self, mocked_param_checker):
track_specification = {
"description": "description for unit test",
"data-streams": [
{
"name": "historical-data-stream"
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
"target-index": "historical-index",
},
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
}
}
]
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
complete_track_params=complete_track_params)
with self.assertRaises(loader.TrackSyntaxError):
reader("unittest", track_specification, "/mappings")

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_invalid_data_streams_with_target_type(self, mocked_param_checker):
track_specification = {
"description": "description for unit test",
"data-streams": [
{
"name": "historical-data-stream"
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
"target-type": "_doc",
},
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
}
}
]
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
complete_track_params=complete_track_params)
with self.assertRaises(loader.TrackSyntaxError):
reader("unittest", track_specification, "/mappings")

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_invalid_no_data_stream_target(self, mocked_param_checker):
track_specification = {
"description": "description for unit test",
"data-streams": [
{
"name": "historical-data-stream"
},
{
"name": "historical-data-stream-2"
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000
}
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
}
}
]
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
complete_track_params=complete_track_params)
with self.assertRaises(loader.TrackSyntaxError):
reader("unittest", track_specification, "/mappings")

@mock.patch("esrally.track.loader.register_all_params_in_track")
def test_parse_valid_without_indices(self, mocked_param_checker):
track_specification = {
"description": "description for unit test",
"data-streams": [
{
"name": "historical-data-stream"
}
],
"corpora": [
{
"name": "test",
"base-url": "https://localhost/data",
"documents": [
{
"source-file": "documents-main.json.bz2",
"document-count": 10,
"compressed-bytes": 100,
"uncompressed-bytes": 10000,
},
]
}
],
"schedule": [
{
"clients": 8,
"operation": {
"name": "index-append",
"operation-type": "bulk",
"bulk-size": 5000
}
}
]
}
reader = loader.TrackSpecificationReader(
track_params={"number_of_shards": 3},
source=io.DictStringFileSourceFactory({
"/mappings/body.json": ["""
{
"settings": {
"number_of_shards": {{ number_of_shards }}
}
}
"""]
}))
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
# indices
self.assertEqual(0, len(resulting_track.indices))
# data streams
self.assertEqual(1, len(resulting_track.data_streams))
self.assertEqual("historical-data-stream", resulting_track.data_streams[0].name)
# corpora
self.assertEqual(1, len(resulting_track.corpora))
self.assertEqual("test", resulting_track.corpora[0].name)
self.assertEqual(1, len(resulting_track.corpora[0].documents))
docs_primary = resulting_track.corpora[0].documents[0]
self.assertEqual(track.Documents.SOURCE_FORMAT_BULK, docs_primary.source_format)
self.assertEqual("documents-main.json", docs_primary.document_file)
self.assertEqual("documents-main.json.bz2", docs_primary.document_archive)
self.assertEqual("https://localhost/data", docs_primary.base_url)
self.assertFalse(docs_primary.includes_action_and_meta_data)
self.assertEqual(10, docs_primary.number_of_documents)
self.assertEqual(100, docs_primary.compressed_size_in_bytes)
self.assertEqual(10000, docs_primary.uncompressed_size_in_bytes)
self.assertEqual("historical-data-stream", docs_primary.target_data_stream)
self.assertIsNone(docs_primary.target_type)
self.assertIsNone(docs_primary.target_index)
# challenges
self.assertEqual(1, len(resulting_track.challenges))

def test_parse_valid_track_specification_with_index_template(self):
track_specification = {
"description": "description for unit test",
"templates": [
{
"name": "my-index-template",
"index-pattern": "*",
"template": "default-template.json"
}
],
"operations": [],
"challenges": []
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
track_params={"index_pattern": "*"},
complete_track_params=complete_track_params,
source=io.DictStringFileSourceFactory({
"/mappings/default-template.json": ["""
{
"index_patterns": [ "{{index_pattern}}"],
"settings": {
"number_of_shards": {{ number_of_shards | default(1) }}
}
}
"""],
}))
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(
["index_pattern", "number_of_shards"],
complete_track_params.sorted_track_defined_params
)
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
self.assertEqual(0, len(resulting_track.indices))
self.assertEqual(1, len(resulting_track.templates))
self.assertEqual("my-index-template", resulting_track.templates[0].name)
self.assertEqual("*", resulting_track.templates[0].pattern)
self.assertDictEqual(
{
"index_patterns": ["*"],
"settings": {
"number_of_shards": 1
}
}, resulting_track.templates[0].content)
self.assertEqual(0, len(resulting_track.challenges))

def test_parse_valid_track_specification_with_composable_template(self):
track_specification = {
"description": "description for unit test",
"composable-templates": [
{
"name": "my-index-template",
"index-pattern": "*",
"template": "default-template.json"
}
],
"component-templates": [
{
"name": "my-component-template-1",
"template": "component-template-1.json"
},
{
"name": "my-component-template-2",
"template": "component-template-2.json"
}
],
"operations": [],
"challenges": []
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
track_params={"index_pattern": "logs-*", "number_of_replicas": 1},
complete_track_params=complete_track_params,
source=io.DictStringFileSourceFactory({
"/mappings/default-template.json": ["""
{
"index_patterns": [ "{{index_pattern}}"],
"template": {
"settings": {
"number_of_shards": {{ number_of_shards | default(1) }}
}
},
"composed_of": ["my-component-template-1", "my-component-template-2"]
}
"""],
"/mappings/component-template-1.json": ["""
{
"template": {
"settings": {
"index.number_of_shards": 2
}
}
}
"""],
"/mappings/component-template-2.json": ["""
{
"template": {
"settings": {
"index.number_of_replicas": {{ number_of_replicas }}
},
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
}
}
}
}
}
"""]
}))
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(
["index_pattern", "number_of_replicas", "number_of_shards"],
complete_track_params.sorted_track_defined_params
)
self.assertEqual("unittest", resulting_track.name)
self.assertEqual("description for unit test", resulting_track.description)
self.assertEqual(0, len(resulting_track.indices))
self.assertEqual(1, len(resulting_track.composable_templates))
self.assertEqual(2, len(resulting_track.component_templates))
self.assertEqual("my-index-template", resulting_track.composable_templates[0].name)
self.assertEqual("*", resulting_track.composable_templates[0].pattern)
self.assertEqual("my-component-template-1", resulting_track.component_templates[0].name)
self.assertEqual("my-component-template-2", resulting_track.component_templates[1].name)
self.assertDictEqual(
{
"index_patterns": ["logs-*"],
"template": {
"settings": {
"number_of_shards": 1
}
},
"composed_of": ["my-component-template-1", "my-component-template-2"]
}, resulting_track.composable_templates[0].content)
self.assertDictEqual(
{
"template": {
"settings": {
"index.number_of_shards": 2
}
}
}, resulting_track.component_templates[0].content)
self.assertDictEqual(
{
"template": {
"settings": {
"index.number_of_replicas": 1
},
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
}
}
}
}
}, resulting_track.component_templates[1].content)
self.assertEqual(0, len(resulting_track.challenges))

def test_parse_invalid_track_specification_with_composable_template(self):
track_specification = {
"description": "description for unit test",
"component-templates": [
{
"name": "my-component-template-2"
}
],
"operations": [],
"challenges": []
}
complete_track_params = loader.CompleteTrackParams()
reader = loader.TrackSpecificationReader(
track_params={"index_pattern": "logs-*", "number_of_replicas": 1},
complete_track_params=complete_track_params)
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Mandatory element 'template' is missing.",
ctx.exception.args[0])

def test_unique_challenge_names(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"operations": [
{
"name": "index-append",
"operation-type": "bulk"
}
],
"challenges": [
{
"name": "test-challenge",
"description": "Some challenge",
"default": True,
"schedule": [
{
"operation": "index-append"
}
]
},
{
"name": "test-challenge",
"description": "Another challenge with the same name",
"schedule": [
{
"operation": "index-append"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Duplicate challenge with name 'test-challenge'.", ctx.exception.args[0])

def test_not_more_than_one_default_challenge_possible(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"operations": [
{
"name": "index-append",
"operation-type": "bulk"
}
],
"challenges": [
{
"name": "default-challenge",
"description": "Default challenge",
"default": True,
"schedule": [
{
"operation": "index-append"
}
]
},
{
"name": "another-challenge",
"description": "See if we can sneak it in as another default",
"default": True,
"schedule": [
{
"operation": "index-append"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. Both 'default-challenge' and 'another-challenge' are defined as default challenges. "
"Please define only one of them as default.", ctx.exception.args[0])

def test_at_least_one_default_challenge(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"operations": [
{
"name": "index-append",
"operation-type": "bulk"
}
],
"challenges": [
{
"name": "challenge",
"schedule": [
{
"operation": "index-append"
}
]
},
{
"name": "another-challenge",
"schedule": [
{
"operation": "index-append"
}
]
}
]
}
reader = loader.TrackSpecificationReader()
with self.assertRaises(loader.TrackSyntaxError) as ctx:
reader("unittest", track_specification, "/mappings")
self.assertEqual("Track 'unittest' is invalid. No default challenge specified. Please edit the track and add \"default\": true "
"to one of the challenges challenge, another-challenge.", ctx.exception.args[0])

def test_exactly_one_default_challenge(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"operations": [
{
"name": "index-append",
"operation-type": "bulk"
}
],
"challenges": [
{
"name": "challenge",
"default": True,
"schedule": [
{
"operation": "index-append"
}
]
},
{
"name": "another-challenge",
"schedule": [
{
"operation": "index-append"
}
]
}
]
}
reader = loader.TrackSpecificationReader(selected_challenge="another-challenge")
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(2, len(resulting_track.challenges))
self.assertEqual("challenge", resulting_track.challenges[0].name)
self.assertTrue(resulting_track.challenges[0].default)
self.assertFalse(resulting_track.challenges[1].default)
self.assertTrue(resulting_track.challenges[1].selected)

def test_selects_sole_challenge_implicitly_as_default(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"operations": [
{
"name": "index-append",
"operation-type": "bulk"
}
],
"challenge": {
"name": "challenge",
"schedule": [
{
"operation": "index-append"
}
]
}
}
reader = loader.TrackSpecificationReader()
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(1, len(resulting_track.challenges))
self.assertEqual("challenge", resulting_track.challenges[0].name)
self.assertTrue(resulting_track.challenges[0].default)
self.assertTrue(resulting_track.challenges[0].selected)

def test_auto_generates_challenge_from_schedule(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"operations": [
{
"name": "index-append",
"operation-type": "bulk"
}
],
"schedule": [
{
"operation": "index-append"
}
]
}
reader = loader.TrackSpecificationReader()
resulting_track = reader("unittest", track_specification, "/mappings")
self.assertEqual(1, len(resulting_track.challenges))
self.assertTrue(resulting_track.challenges[0].auto_generated)
self.assertTrue(resulting_track.challenges[0].default)
self.assertTrue(resulting_track.challenges[0].selected)

def test_inline_operations(self):
track_specification = {
"description": "description for unit test",
"indices": [{"name": "test-index"}],
"challenge": {
"name": "challenge",
"schedule": [
# an operation with parameters still needs to define a type
{
"operation": {
"operation-type": "bulk",
"bulk-size": 5000
}
},
# a parameterless operation can use the operation type as an implicit reference to the operation
{
"operation": "force-merge"
}
]
}
}
reader = loader.TrackSpecificationReader()
resulting_track = reader("unittest", track_specification, "/mappings")
challenge = resulting_track.challenges[0]
self.assertEqual(2, len(challenge.schedule))
self.assertEqual(track.OperationType.Bulk.to_hyphenated_string(), challenge.schedule[0].operation.type)
self.assertEqual(track.OperationType.ForceMerge.to_hyphenated_string(), challenge.schedule[1].operation.type)

    def test_supports_target_throughput(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-append",
                    "operation-type": "bulk"
                }
            ],
            "challenge": {
                "name": "default-challenge",
                "schedule": [
                    {
                        "operation": "index-append",
                        "target-throughput": 10,
                    }
                ]
            }
        }

        reader = loader.TrackSpecificationReader()
        resulting_track = reader("unittest", track_specification, "/mappings")

        self.assertEqual(10, resulting_track.challenges[0].schedule[0].params["target-throughput"])

    def test_supports_target_interval(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-append",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "operation": "index-append",
                            "target-interval": 5,
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()
        resulting_track = reader("unittest", track_specification, "/mappings")

        self.assertEqual(5, resulting_track.challenges[0].schedule[0].params["target-interval"])

    def test_parallel_tasks_with_default_values(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-1",
                    "operation-type": "bulk"
                },
                {
                    "name": "index-2",
                    "operation-type": "bulk"
                },
                {
                    "name": "index-3",
                    "operation-type": "bulk"
                },
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "parallel": {
                                "warmup-time-period": 2400,
                                "time-period": 36000,
                                "tasks": [
                                    {
                                        "operation": "index-1",
                                        "warmup-time-period": 300,
                                        "clients": 2
                                    },
                                    {
                                        "operation": "index-2",
                                        "time-period": 3600,
                                        "clients": 4
                                    },
                                    {
                                        "operation": "index-3",
                                        "target-throughput": 10,
                                        "clients": 16
                                    },
                                ]
                            }
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()
        resulting_track = reader("unittest", track_specification, "/mappings")

        parallel_element = resulting_track.challenges[0].schedule[0]
        parallel_tasks = parallel_element.tasks

        self.assertEqual(22, parallel_element.clients)
        self.assertEqual(3, len(parallel_tasks))

        self.assertEqual("index-1", parallel_tasks[0].operation.name)
        self.assertEqual(300, parallel_tasks[0].warmup_time_period)
        self.assertEqual(36000, parallel_tasks[0].time_period)
        self.assertEqual(2, parallel_tasks[0].clients)
        self.assertFalse("target-throughput" in parallel_tasks[0].params)

        self.assertEqual("index-2", parallel_tasks[1].operation.name)
        self.assertEqual(2400, parallel_tasks[1].warmup_time_period)
        self.assertEqual(3600, parallel_tasks[1].time_period)
        self.assertEqual(4, parallel_tasks[1].clients)
        self.assertFalse("target-throughput" in parallel_tasks[1].params)

        self.assertEqual("index-3", parallel_tasks[2].operation.name)
        self.assertEqual(2400, parallel_tasks[2].warmup_time_period)
        self.assertEqual(36000, parallel_tasks[2].time_period)
        self.assertEqual(16, parallel_tasks[2].clients)
        self.assertEqual(10, parallel_tasks[2].params["target-throughput"])

    def test_parallel_tasks_with_default_clients_does_not_propagate(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-1",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "parallel": {
                                "warmup-time-period": 2400,
                                "time-period": 36000,
                                "clients": 2,
                                "tasks": [
                                    {
                                        "name": "index-1-1",
                                        "operation": "index-1"
                                    },
                                    {
                                        "name": "index-1-2",
                                        "operation": "index-1"
                                    },
                                    {
                                        "name": "index-1-3",
                                        "operation": "index-1"
                                    },
                                    {
                                        "name": "index-1-4",
                                        "operation": "index-1"
                                    }
                                ]
                            }
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()
        resulting_track = reader("unittest", track_specification, "/mappings")

        parallel_element = resulting_track.challenges[0].schedule[0]
        parallel_tasks = parallel_element.tasks

        # we will only have two clients *in total*
        self.assertEqual(2, parallel_element.clients)
        self.assertEqual(4, len(parallel_tasks))
        for task in parallel_tasks:
            self.assertEqual(1, task.clients)

    def test_parallel_tasks_with_completed_by_set(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-1",
                    "operation-type": "bulk"
                },
                {
                    "name": "index-2",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "parallel": {
                                "warmup-time-period": 2400,
                                "time-period": 36000,
                                "completed-by": "index-2",
                                "tasks": [
                                    {
                                        "operation": "index-1"
                                    },
                                    {
                                        "operation": "index-2"
                                    }
                                ]
                            }
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()
        resulting_track = reader("unittest", track_specification, "/mappings")

        parallel_element = resulting_track.challenges[0].schedule[0]
        parallel_tasks = parallel_element.tasks

        # we will only have two clients *in total*
        self.assertEqual(2, parallel_element.clients)
        self.assertEqual(2, len(parallel_tasks))

        self.assertEqual("index-1", parallel_tasks[0].operation.name)
        self.assertFalse(parallel_tasks[0].completes_parent)

        self.assertEqual("index-2", parallel_tasks[1].operation.name)
        self.assertTrue(parallel_tasks[1].completes_parent)

    def test_parallel_tasks_with_named_task_completed_by_set(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-1",
                    "operation-type": "bulk"
                },
                {
                    "name": "index-2",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "parallel": {
                                "warmup-time-period": 2400,
                                "time-period": 36000,
                                "completed-by": "name-index-2",
                                "tasks": [
                                    {
                                        "name": "name-index-1",
                                        "operation": "index-1"
                                    },
                                    {
                                        "name": "name-index-2",
                                        "operation": "index-2"
                                    }
                                ]
                            }
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()
        resulting_track = reader("unittest", track_specification, "/mappings")

        parallel_element = resulting_track.challenges[0].schedule[0]
        parallel_tasks = parallel_element.tasks

        # we will only have two clients *in total*
        self.assertEqual(2, parallel_element.clients)
        self.assertEqual(2, len(parallel_tasks))

        self.assertEqual("index-1", parallel_tasks[0].operation.name)
        self.assertFalse(parallel_tasks[0].completes_parent)

        self.assertEqual("index-2", parallel_tasks[1].operation.name)
        self.assertTrue(parallel_tasks[1].completes_parent)

    def test_parallel_tasks_with_completed_by_set_no_task_matches(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-1",
                    "operation-type": "bulk"
                },
                {
                    "name": "index-2",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "parallel": {
                                "completed-by": "non-existing-task",
                                "tasks": [
                                    {
                                        "operation": "index-1"
                                    },
                                    {
                                        "operation": "index-2"
                                    }
                                ]
                            }
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()

        with self.assertRaises(loader.TrackSyntaxError) as ctx:
            reader("unittest", track_specification, "/mappings")
        self.assertEqual("Track 'unittest' is invalid. 'parallel' element for challenge 'default-challenge' is marked with 'completed-by' "
                         "with task name 'non-existing-task' but no task with this name exists.", ctx.exception.args[0])

    def test_parallel_tasks_with_completed_by_set_multiple_tasks_match(self):
        track_specification = {
            "description": "description for unit test",
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-1",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "default-challenge",
                    "schedule": [
                        {
                            "parallel": {
                                "completed-by": "index-1",
                                "tasks": [
                                    {
                                        "operation": "index-1"
                                    },
                                    {
                                        "operation": "index-1"
                                    }
                                ]
                            }
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader()

        with self.assertRaises(loader.TrackSyntaxError) as ctx:
            reader("unittest", track_specification, "/mappings")
        self.assertEqual("Track 'unittest' is invalid. 'parallel' element for challenge 'default-challenge' contains multiple tasks with "
                         "the name 'index-1' which are marked with 'completed-by' but only task is allowed to match.",
                         ctx.exception.args[0])

    def test_propagate_parameters_to_challenge_level(self):
        track_specification = {
            "description": "description for unit test",
            "parameters": {
                "level": "track",
                "value": 7
            },
            "indices": [{"name": "test-index"}],
            "operations": [
                {
                    "name": "index-append",
                    "operation-type": "bulk"
                }
            ],
            "challenges": [
                {
                    "name": "challenge",
                    "default": True,
                    "parameters": {
                        "level": "challenge",
                        "another-value": 17
                    },
                    "schedule": [
                        {
                            "operation": "index-append"
                        }
                    ]
                },
                {
                    "name": "another-challenge",
                    "schedule": [
                        {
                            "operation": "index-append"
                        }
                    ]
                }
            ]
        }

        reader = loader.TrackSpecificationReader(selected_challenge="another-challenge")
        resulting_track = reader("unittest", track_specification, "/mappings")

        self.assertEqual(2, len(resulting_track.challenges))
        self.assertEqual("challenge", resulting_track.challenges[0].name)
        self.assertTrue(resulting_track.challenges[0].default)
        self.assertDictEqual({
            "level": "challenge",
            "value": 7,
            "another-value": 17
        }, resulting_track.challenges[0].parameters)

        self.assertFalse(resulting_track.challenges[1].default)
        self.assertTrue(resulting_track.challenges[1].selected)
        self.assertDictEqual({
            "level": "track",
            "value": 7
        }, resulting_track.challenges[1].parameters)


class MyMockTrackProcessor(loader.TrackProcessor):
    pass


class TrackProcessorRegistryTests(TestCase):

    def test_default_track_processors(self):
        cfg = config.Config()
        cfg.add(config.Scope.application, "system", "offline.mode", False)
        tpr = loader.TrackProcessorRegistry(cfg)
        expected_defaults = [
            loader.TaskFilterTrackProcessor,
            loader.TestModeTrackProcessor,
            loader.DefaultTrackPreparator
        ]
        actual_defaults = [proc.__class__ for proc in tpr.processors]
        self.assertCountEqual(expected_defaults, actual_defaults)

    def test_override_default_preparator(self):
        cfg = config.Config()
        cfg.add(config.Scope.application, "system", "offline.mode", False)
        tpr = loader.TrackProcessorRegistry(cfg)
        # call this once beforehand to make sure we don't "harden" the default in case calls are made out of order
        tpr.processors  # pylint: disable=pointless-statement
        tpr.register_track_processor(MyMockTrackProcessor())
        expected_processors = [
            loader.TaskFilterTrackProcessor,
            loader.TestModeTrackProcessor,
            MyMockTrackProcessor
        ]
        actual_processors = [proc.__class__ for proc in tpr.processors]
        self.assertCountEqual(expected_processors, actual_processors)

    def test_allow_to_specify_default_preparator(self):
        cfg = config.Config()
        cfg.add(config.Scope.application, "system", "offline.mode", False)
        tpr = loader.TrackProcessorRegistry(cfg)
        tpr.register_track_processor(MyMockTrackProcessor())
        # should be idempotent now that we have a custom config
        tpr.processors  # pylint: disable=pointless-statement
        tpr.register_track_processor(loader.DefaultTrackPreparator())
        expected_processors = [
            loader.TaskFilterTrackProcessor,
            loader.TestModeTrackProcessor,
            MyMockTrackProcessor,
            loader.DefaultTrackPreparator
        ]
        actual_processors = [proc.__class__ for proc in tpr.processors]
        self.assertCountEqual(expected_processors, actual_processors)
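The registry behavior exercised above — a built-in default preparator that applies only until a custom processor is registered, and can still be re-registered explicitly — can be sketched independently of the actual loader module. All names below are hypothetical stand-ins, not the real Rally API:

```python
class DefaultPreparator:
    """Hypothetical stand-in for a built-in default processor."""


class ProcessorRegistry:
    """Minimal sketch: the default preparator is only supplied while no
    custom processor has been registered; registering it explicitly
    keeps it alongside custom ones."""

    def __init__(self):
        self._custom = []

    def register(self, processor):
        self._custom.append(processor)

    @property
    def processors(self):
        # fall back to the default preparator unless custom ones were added
        return list(self._custom) if self._custom else [DefaultPreparator()]


registry = ProcessorRegistry()
assert isinstance(registry.processors[0], DefaultPreparator)

registry.register("my-processor")
assert registry.processors == ["my-processor"]

registry.register(DefaultPreparator())  # explicit opt-in keeps the default
assert len(registry.processors) == 2
```

Note that, as in the tests above, reading `processors` before registering must not "harden" the default into the result; here that property holds because the fallback is recomputed on every access.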
| 43.024581 | 140 | 0.466149 | 12,007 | 154,028 | 5.792454 | 0.055884 | 0.054565 | 0.014479 | 0.0155 | 0.827247 | 0.794177 | 0.766341 | 0.73294 | 0.70706 | 0.683781 | 0 | 0.018505 | 0.430591 | 154,028 | 3,579 | 141 | 43.036602 | 0.774494 | 0.026047 | 0 | 0.57032 | 0 | 0.007762 | 0.239974 | 0.032806 | 0 | 0 | 0 | 0 | 0.113008 | 1 | 0.028873 | false | 0.00031 | 0.004657 | 0.001242 | 0.041602 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0eb535bddeb2ef571a82f06270e5074ec93b058c | 118 | py | Python | threeML/catalogs/__init__.py | BjoernBiltzinger/threeML | fc3d989173b1613a199633455f260e67fdb50369 | ["BSD-3-Clause"] | null | null | null | threeML/catalogs/__init__.py | BjoernBiltzinger/threeML | fc3d989173b1613a199633455f260e67fdb50369 | ["BSD-3-Clause"] | null | null | null | threeML/catalogs/__init__.py | BjoernBiltzinger/threeML | fc3d989173b1613a199633455f260e67fdb50369 | ["BSD-3-Clause"] | null | null | null | from Fermi import FermiGBMBurstCatalog, FermiLATSourceCatalog, FermiLLEBurstCatalog
from Swift import SwiftGRBCatalog
| 39.333333 | 83 | 0.898305 | 10 | 118 | 10.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 118 | 2 | 84 | 59 | 0.981481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ebbbfc1e7bc967c7c2cc91af09eeaac4462d7ab | 162 | py | Python | scitbx/wigner/__init__.py | hbrunie/cctbx_project | 2d8cb383d50fe20cdbbe4bebae8ed35fabce61e5 | ["BSD-3-Clause-LBNL"] | 2 | 2021-03-18T12:31:57.000Z | 2022-03-14T06:27:06.000Z | scitbx/wigner/__init__.py | hbrunie/cctbx_project | 2d8cb383d50fe20cdbbe4bebae8ed35fabce61e5 | ["BSD-3-Clause-LBNL"] | null | null | null | scitbx/wigner/__init__.py | hbrunie/cctbx_project | 2d8cb383d50fe20cdbbe4bebae8ed35fabce61e5 | ["BSD-3-Clause-LBNL"] | 1 | 2021-03-26T12:52:30.000Z | 2021-03-26T12:52:30.000Z | from __future__ import absolute_import, division, print_function
import boost.python
boost.python.import_ext("scitbx_wigner_ext")
from scitbx_wigner_ext import *
| 32.4 | 64 | 0.858025 | 23 | 162 | 5.565217 | 0.521739 | 0.171875 | 0.234375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080247 | 162 | 4 | 65 | 40.5 | 0.85906 | 0 | 0 | 0 | 0 | 0 | 0.104938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.25 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ec2ebfea47a87c23614feb58c14446f1fbb4b90 | 221 | py | Python | analyzer/parser/object_parser.py | ltthacker/bdi_final | d2758cc00670d0f2eae3f468f36731a25e9a30bc | [
"MIT"
] | null | null | null | analyzer/parser/object_parser.py | ltthacker/bdi_final | d2758cc00670d0f2eae3f468f36731a25e9a30bc | [
"MIT"
] | null | null | null | analyzer/parser/object_parser.py | ltthacker/bdi_final | d2758cc00670d0f2eae3f468f36731a25e9a30bc | [
"MIT"
] | null | null | null | from .object_parser_util import getObject
from .object_parser_fakenew_util import checkObject
def parse(new):
return getObject(new['content'],new['timestamp'])
def fakenew(content):
return checkObject(content)
| 22.1 | 53 | 0.782805 | 28 | 221 | 6 | 0.5 | 0.119048 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122172 | 221 | 9 | 54 | 24.555556 | 0.865979 | 0 | 0 | 0 | 0 | 0 | 0.072398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
0eda4935027ffc79b5c425eff0a70a6d492b21d1 | 16,294 | py | Python | pytdx/reader/gbbq_reader.py | AtlantixJJ/vnpy | 28992c7d5391f6dd42a14b481d01ceafde048b5f | ["MIT"] | 13 | 2019-06-07T04:34:09.000Z | 2022-03-21T07:46:01.000Z | pytdx/reader/gbbq_reader.py | AtlantixJJ/vnpy | 28992c7d5391f6dd42a14b481d01ceafde048b5f | ["MIT"] | 1 | 2020-04-21T02:42:32.000Z | 2020-04-21T02:42:32.000Z | venv/lib/python3.7/site-packages/pytdx/reader/gbbq_reader.py | CatTiger/vnpy | 7901a0fb80a5b44d6fc752bd4b2b64ec62c8f84b | ["MIT"] | 2 | 2021-07-08T03:44:41.000Z | 2021-09-15T00:41:19.000Z | #encoding=utf-8
import struct
from ctypes import *
import pandas as pd
import sys


### take ref this article :http://blog.csdn.net/fangle6688/article/details/50956609
### and this http://blog.sina.com.cn/s/blog_6b2f87db0102uxo3.html

class GbbqReader(object):

    def get_df(self, fname):
        if sys.version_info.major == 2:
            bin_keys = bytearray.fromhex(self.hexdump_keys)
        else:
            bin_keys = bytes.fromhex(self.hexdump_keys)

        result = []
        with open(fname, "rb") as f:
            content = f.read()
            pos = 0
            (count, ) = struct.unpack("<I", content[pos: pos + 4])
            pos += 4
            encrypt_data = content
            # data_len = len(encrypt_data)
            data_offset = pos
            for _ in range(count):
                clear_data = bytearray()
                # each record is decrypted in three 8-byte rounds
                for i in range(3):
                    (eax, ) = struct.unpack("<I", bin_keys[0x44: 0x44 + 4])
                    (ebx, ) = struct.unpack("<I", encrypt_data[data_offset: data_offset + 4])
                    num = c_uint32(eax ^ ebx).value
                    (numold, ) = struct.unpack("<I", encrypt_data[data_offset + 0x4: data_offset + 0x4 + 4])
                    for j in reversed(range(4, 0x40 + 4, 4)):
                        ebx = (num & 0xff0000) >> 16
                        (eax, ) = struct.unpack("<I", bin_keys[ebx * 4 + 0x448: ebx * 4 + 0x448 + 4])
                        ebx = num >> 24
                        (eax_add, ) = struct.unpack("<I", bin_keys[ebx * 4 + 0x48: ebx * 4 + 0x48 + 4])
                        eax += eax_add
                        eax = c_uint32(eax).value
                        ebx = (num & 0xff00) >> 8
                        (eax_xor, ) = struct.unpack("<I", bin_keys[ebx * 4 + 0x848: ebx * 4 + 0x848 + 4])
                        eax ^= eax_xor
                        eax = c_uint32(eax).value
                        ebx = num & 0xff
                        (eax_add, ) = struct.unpack("<I", bin_keys[ebx * 4 + 0xC48: ebx * 4 + 0xC48 + 4])
                        eax += eax_add
                        eax = c_uint32(eax).value
                        (eax_xor, ) = struct.unpack("<I", bin_keys[j: j + 4])
                        eax ^= eax_xor
                        eax = c_uint32(eax).value
                        ebx = num
                        num = numold ^ eax
                        num = c_uint32(num).value
                        numold = ebx
                    (numold_op, ) = struct.unpack("<I", bin_keys[0: 4])
                    numold ^= numold_op
                    numold = c_uint32(numold).value
                    clear_data.extend(struct.pack("<II", numold, num))
                    data_offset += 8
                # the last 5 bytes of each 29-byte record are stored in the clear
                clear_data.extend(encrypt_data[data_offset: data_offset + 5])
                (v1, v2, v3, v4, v5, v6, v7, v8) = struct.unpack("<B7sIBffff", clear_data)
                line = (v1,
                        v2.rstrip(b"\x00").decode("utf-8"),
                        v3,
                        v4,
                        v5,
                        v6,
                        v7,
                        v8)
                result.append(line)
                data_offset += 5

        df = pd.DataFrame(data=result, columns=['market', 'code', 'datetime', 'category',
                                                'hongli_panqianliutong',
                                                'peigujia_qianzongguben',
                                                'songgu_qianzongguben',
                                                'peigu_houzongguben'])
        return df
hexdump_keys = "38 A7 C2 1D E0 6A 17 E2 D1 39 A2 40 9C BA 46 AF 42 C6 FF 05 74 EA DA BB 89 B4 F8 44 AC 89 D7 F2 98 7F B6 BC E4 F7 6B 75 05 04 58 67 79 C8 6D C6 2B 06 96 8C FB 86 06 8B BF D6 E8 E1 87 49 6B 36 C7 18 02 79 53 25 72 72 13 CC 04 0B 90 24 0C DC DB 03 1A D5 2E 04 85 5C 7E 8E BD 02 26 2D BD 06 1B 50 34 99 1B A2 24 04 F2 88 35 C8 89 EA D5 FB 12 24 BB B5 3B 29 CA 14 A6 04 CE A9 A8 58 02 B9 AA E3 97 A3 A6 22 57 BB AD A0 22 5F EB 05 86 11 C3 ED B1 3F 39 C2 36 D1 4A 43 C8 64 4D B0 6E 3A 7C 51 6D F7 8E C6 DF F3 8E A4 1E 74 9D B2 22 05 4D 07 3F 96 7F 97 F9 63 B9 C4 2B 98 75 F6 D6 84 56 DC 15 D3 52 8B 60 F3 D6 0E A9 AD 07 07 E9 02 86 58 C2 32 9C 90 BC C9 19 BF B0 54 7A F8 CC A8 27 63 82 29 EE FB 98 11 BF 35 29 62 91 93 95 FC F4 F0 08 E4 B2 3A B4 5E B3 B0 2E 3E 20 C1 D7 43 59 7D C6 29 5F 69 74 7F B2 77 E1 0E FA 85 A1 C9 77 73 83 B3 CB 1C 60 DB E9 53 69 FC B3 18 59 15 0F 97 8A 7A C8 83 F5 49 DC 1B 3E 86 C1 95 45 46 E2 16 67 7F 12 35 A0 BB 27 FB CC F8 30 7E 4F C8 6D AB 18 B2 0D 01 CC 79 20 80 7B FA 37 AA 14 9E 85 E8 25 E9 D4 2D 35 4E 8F D3 DE B0 06 8D 15 15 52 65 E8 39 03 28 09 02 67 99 3D 13 BA F3 68 5C 4C 89 B0 E3 6B AE 16 5C 88 25 F8 33 03 19 02 5B 29 7B 2A 41 2D 75 49 48 9B B3 B6 B3 BF AA DF 8C 95 FE 0F 13 B8 7B 02 BB 52 E1 1C 34 C3 9B 87 59 E2 46 CC 22 77 4B D7 C4 2C 31 AA 84 7C 44 51 88 15 1A CC AE 40 9D 1F 44 97 29 98 45 60 74 47 A1 0D A5 73 F0 53 FF 01 F9 F4 9A F1 36 07 D0 2D A0 79 2D 81 23 25 AD 4B 9C C8 BC 12 55 4D D4 BB 95 B1 B9 BE 7D A6 E6 A0 53 BA 83 8C DD 7E E9 4B ED BA 28 42 D8 FF 98 69 35 CA 4E 9C 9D 57 D6 CF A0 89 5C A2 E7 54 D2 AF 4C FB 54 C4 B4 4F C3 BA F8 A2 58 69 19 79 0E A8 0E 3D C8 04 FD 26 32 C8 E1 02 8B A7 1C C3 91 25 E5 D8 49 DB DF 19 5F 16 F5 A7 8B 18 23 04 D4 BF FB 44 C4 61 7C 79 6E C8 90 15 B5 EB 50 87 CA 7A 69 47 2F AF A8 B5 A2 8A 84 C4 41 79 E8 DE 0C AC D0 D5 6F 34 C6 CB A7 76 F9 00 24 42 05 26 7E 7B 14 86 59 7B DB 1C 62 D5 B7 3E F7 17 44 27 4B D2 C6 6F FF C8 49 55 AD 65 52 2D 43 C2 33 9B 63 AB 3D 54 54 28 E2 02 65 03 9A 03 4B 8F 64 1A 
92 52 DE 32 D6 2B F0 BE BE 1D 54 B1 7C 70 41 9B 90 55 DA 71 55 21 B9 B6 68 90 19 5F BC AA B4 55 0E E6 81 4C A3 BE BC 64 D7 59 00 59 BD 0F 6A 57 1A A6 A0 D5 1A 0A 80 D3 09 06 73 5A 51 E2 DD 29 66 AC A0 86 29 21 2B 7A 6D 9E 3A 68 D0 A3 DC A7 2B 85 A0 4C D4 F0 C5 C4 43 E4 CF 0C 19 81 30 B6 F6 BE 71 F5 AC 25 AA CF 42 90 06 64 1B 45 29 FD 3A A3 B6 0B 9D 29 9F FA 31 B8 6D D8 EC 43 F5 92 7E 35 22 E0 C3 D3 09 06 61 71 DA E8 36 0A 19 F6 23 81 CB 89 E0 67 6E FE B1 E6 47 72 63 5C 25 18 E0 B4 65 85 EF B5 1B 26 23 90 89 CC EE E3 01 77 95 63 DF C4 AC BF E6 37 14 99 15 49 8A 96 02 91 AA 1D 98 21 57 5E 87 96 C7 B5 87 08 3F 58 06 52 58 17 8F AB A8 4E A1 7A 60 B1 69 5E 9C BE E2 D0 C5 12 59 DF 31 EB D2 19 54 96 E2 10 11 8E 68 B4 1A 2D D3 2F AB 12 F7 FE F3 A7 F7 61 FC F7 7C CB FC 87 8C 6A 10 40 29 7B 30 D6 0D 13 4C 71 CD 5E AB 36 A2 F1 4C 05 ED 53 88 E5 FF 8E 71 79 5D B5 AF D3 67 6D C4 44 6B AB C1 A7 AA 38 D8 70 1E 08 E6 D2 36 7B 88 11 96 DB D2 68 D9 FF D8 50 2B 3A A9 CC 45 1A CA CD D2 05 C6 FC A0 35 0C EE 98 2B 5C B2 39 6A 27 12 8F 97 EC CB 7B B6 C0 27 F6 A7 48 75 09 82 98 CA 3A 5D E3 96 0C A5 D2 B3 6C A4 D1 1F AE 99 67 B0 3D D6 9A 7A 3E 00 8B FD 45 32 F7 9F 28 7C 94 03 DB 64 AA 44 80 D2 27 AF B3 73 87 57 31 EB 08 D9 BA 73 4D 2C 77 03 BF F5 0F 47 3C 22 DA 3F B9 F1 9A 1B 22 83 16 EE F4 18 FC 08 E8 3B 30 1C 04 50 AA 4C E3 28 53 AB DE F8 5F 32 D9 E1 78 7B F1 C5 A8 CA 85 B6 9F 89 1F 40 B8 2C 88 D7 C1 66 34 45 D6 46 FD 7B F3 72 A3 32 55 23 CF B5 B0 79 AB A0 F1 00 5C DB EE 3F 51 AA AE C0 89 8E 47 A5 30 4E 4B DD D6 AE D8 6D 40 1C 4E 8E FB 0C 60 8D 54 1E 2F 17 B7 3A ED DE DC 81 F5 72 85 B7 A6 39 31 6F 47 50 84 43 C5 11 F3 6A 26 8E BA 7F 81 98 31 FD 13 6B 83 C9 11 61 48 64 FA E3 F5 39 2C 12 11 C1 6D 4D 03 13 A6 C2 E0 DF F5 32 8E 5B 35 A7 7F 08 F7 85 27 0D 71 9D B8 CE 9C 1E BA 77 3A F6 A1 A7 26 94 29 C0 20 10 65 75 6E EF AA 32 0C 66 91 3A 4E 0E 74 E2 8A FE B6 F8 17 C7 A7 E4 D8 35 67 2E F0 83 A8 9F A6 28 13 40 A3 96 DC 49 83 55 E1 85 AB BD 4D ED 88 FA 36 69 A9 77 59 5A 9C D0 A0 B1 3D EB 31 16 
DC 3E 29 7B 39 01 5B D4 FF 5C E5 9E DA F7 55 D5 3F E3 3B 51 76 83 8E 40 AE E1 2E E8 3E F8 08 B7 B0 24 26 91 AD 82 4C 2E 2F 37 7A 34 A1 05 BD 8C 9A 75 52 5C CD 59 80 CB 92 F8 B1 F8 A5 F2 2C 9F 4A 59 BF EF 76 A3 74 4F E1 C9 7C 7F 91 D9 0D 12 05 B2 8E D0 E0 BB 46 D4 5C 44 2F 65 6D 7A 1C 02 86 FB 7E 7D B6 2A 57 B9 DB 80 CD 02 BF E7 9E 35 21 FB BE 28 13 82 9F F0 74 F7 92 55 DE F2 7B F2 F2 7D F5 A0 14 0F 99 4D 25 F4 DC 11 17 7A 77 65 77 CC BE EF 90 88 E8 FD B2 4E 8E F5 26 FE 53 5D 65 A9 74 47 0B CB E9 E8 71 95 95 87 6C FD 86 94 A7 E5 FC 20 00 1E 0A 0A E3 85 17 24 D4 D0 73 8A 11 1E 1E EF 83 E3 D7 E1 BF CC 98 07 6D 70 37 3A 8F 31 17 55 4E 60 A8 C8 AB 4F 08 2D 37 76 E6 2B 58 DD 81 0F D1 6E 9A A6 55 3D 80 82 99 9E 2D 16 9A DF 4E CB 3B 5D DA A8 53 08 C7 FF 54 DD C6 11 31 1A B6 EB A3 03 08 4A FB B4 45 EC C0 7C 0D C6 CF CB 1B 78 46 88 8F F4 6A 15 62 2F 17 12 E6 41 64 76 58 96 78 DB 29 B5 6A AE DE 63 41 6F BE 9B 37 6C C9 D0 EC 1B F6 79 17 9E FE 79 0E B1 82 28 F2 06 15 C2 BE 96 9C E0 81 80 D7 00 DB 95 87 4B C0 0D 91 55 5B 1F 86 22 64 74 EA 1B 89 85 D2 DD F7 9F F1 D9 09 06 64 FA 6D 59 72 EF CE 66 A7 03 D1 99 E8 DF AE D7 63 5F 60 5F AB 6E C5 22 C8 3A 94 6A 3B 00 72 F8 DB 90 E7 05 DC A2 89 0F 83 AA 03 FE 42 14 1C 8A E6 1C 9E DB D8 D0 CA 97 21 6C AD ED 0A E0 A2 9E EC C1 FF D1 B4 8A 9A AD AB 34 0B 13 3F B5 18 8D 85 9E 0D F9 FB AC 21 2E DD 7A DE BF 9F 7E BD BF 84 DF F5 FD 1E BE E1 1F 0F F8 18 9D 73 09 02 29 B7 5B 26 7E 44 75 04 4D B1 AA 2F 3A DB 46 38 12 D1 41 35 91 29 06 DF C9 98 69 92 02 F2 48 12 A9 71 D2 AE 3B 23 6D 1C E2 6B 8B 75 87 4A 13 A7 1F 81 4D 29 65 53 0A 3A 34 CE 6D E6 31 8D 7E 4E DD 25 6E 76 44 82 3C 47 36 4C B9 C4 9B F4 4F 84 43 11 56 C2 94 53 7E B0 2E 36 DA EB 77 5F C1 64 E2 CA 9F BE 29 D8 06 36 53 D0 6F 82 19 DA BC 8C 5F 4D 45 E7 21 37 9E 90 A6 D4 33 A8 64 4D EC BC 90 5E FE 8E 8B CA 17 7C FF AC 96 BB 21 CF 3D 24 71 3B C2 A1 74 68 85 CF 32 8E 7F 63 39 C5 E7 8E A5 E0 CD 3A F5 9A B8 FD 43 D4 43 39 08 8E 45 76 5F DF E9 17 54 59 12 ED D0 E9 3D 6F 3F 02 14 8A 0A 47 9A D1 E7 
FA 4E A1 41 00 50 EF 60 9D 4D C1 CA 87 98 40 E7 B2 0F 76 C0 9D 71 EF D7 46 93 C1 2B 9F 11 B8 F9 05 AC ED A7 72 6B F5 11 9B 3E 0A 04 21 7D 06 D7 46 76 7B AD AE 9D 95 A6 47 68 05 AD F5 38 7C C7 A5 5A CA B2 CB 48 18 C1 F2 62 55 98 36 39 08 80 C5 28 B1 06 E4 FB 46 11 3C 38 A1 4F 1C FE A1 81 B7 FC DB 94 B0 7A FE B5 74 F1 BB 92 AA FF B0 FE 1E 31 8B C6 BC F0 4F 1A FE 91 C5 7A 9C 73 09 4A 32 90 51 01 8B 12 C0 20 CA 3C CB 14 83 D3 C7 7C 5A 12 79 EE 56 1A 36 C4 09 E2 3E DC E8 CE F1 C1 A1 9E 99 DA 64 4F CF 1E D6 2B 70 27 86 3E CF BE 75 1C 39 9B F9 53 63 C1 6B 58 CC 71 D2 07 41 88 BB 14 70 96 F1 68 CE 13 75 FE F4 A0 C8 85 A2 67 18 49 56 0D 07 94 1D 74 61 89 0C 32 49 9D 0D 94 73 4A AB 1A E9 0F E0 BA B6 4A 34 F9 33 1D B3 71 C2 B8 64 D7 0B CB 19 F7 BD E0 69 3E 24 96 B1 C4 28 09 5F 58 AE 8A C0 83 99 19 64 4D 44 37 55 A6 9B A1 42 50 84 B8 18 29 B5 21 91 58 23 88 EB 8F 13 4A 24 09 EC 0F 6D 7D AF 3E FC F7 F3 9F 34 39 15 C4 84 03 BB 7E 67 39 5F 2A 2C 67 94 F4 A6 B5 02 3F 45 56 79 0C 2A 9B 25 77 67 C2 3B CC F2 71 3B 4F 83 2A 8D 8C 53 0D 18 49 54 CA 58 0E BE 8B 3A 53 74 FC 6F 47 28 07 8E C1 F5 53 D3 34 4B 08 05 FF E9 14 29 40 1B 57 AD 77 EC E8 DA DA 35 55 A7 78 03 56 4C 7C B2 ED 3B B5 61 65 91 DF 41 B4 5D C9 B7 9B 13 82 41 15 D7 B3 6E 1C C8 15 B4 F0 F3 3F 91 4B A1 C8 90 78 91 39 5A 21 55 DA 6A E1 2C BA C9 38 69 F6 AE A8 2B 8C B7 14 C1 35 82 35 A0 78 47 56 C0 9A A7 7F 74 14 64 85 F1 B7 48 BC 55 8C 6A A4 95 1C CB F3 52 F9 54 61 15 27 56 43 D0 27 95 E3 35 AA 39 DC 23 38 DA EF 1F 27 65 3A AB F7 CC BB 25 DB 00 36 34 96 D1 F7 C4 EC 44 37 42 7E 17 18 67 C8 9C 9A 5B 39 08 5C 3C F4 92 F1 16 31 88 FA 12 44 9E 79 27 1C C2 0B 46 AC CD 1F 39 B8 9F 9A 56 34 0A 85 86 C2 B1 B1 9B 31 CE 47 57 05 3E A7 AE 3F 3E 01 2D C5 B9 C1 CB BA AB 0A 2A D2 71 E4 EC F8 0A 71 85 CC A1 CA 6E EF 9D 87 22 38 5D 80 81 F7 1A 6C 31 7B 82 86 BD 7F 10 9D 89 B6 F7 AF E4 41 0D 4F 97 28 80 34 06 3E 19 3A 21 60 ED 54 18 02 0F 2F D5 D5 3B A5 87 01 21 38 1B A6 99 32 28 E9 8D 6F 02 35 60 85 BD 64 C4 B0 26 7E 68 D1 E6 97 B5 32 6E B2 
4F EB 06 4C 4D C2 97 8E 6B 30 22 C0 B4 3D 47 93 78 67 AC 27 42 DD 5C 3C 27 ED 0A 6C E4 4A 0D 0F DF 52 63 A6 70 76 09 F0 2E 58 F6 05 B2 DF EE C9 1F CB 1D 11 0C A1 8B 19 26 B8 10 2C 81 48 FF 98 EF 30 36 0C 01 C5 4A D9 AC 05 72 89 C7 3F D6 4D E0 17 BA BA B3 D3 E8 1B 0C 8C C8 DF 6B FE 7E BA 91 FD F6 A0 CB 59 19 B0 01 2F D7 0B A0 62 0F 5F CE 74 B8 EB 42 89 B5 BE CA C9 EF DA 9A BB C6 66 1B E0 65 EE D4 3A CE D9 CC 0E BB 85 50 41 45 01 BA 1B 29 11 6F 34 11 55 03 DD 0C B5 99 56 3A 93 4D 4D 95 6D CE C3 51 E0 15 54 3E FF 2F A3 DA 59 EC 3D 59 2D 62 FC 64 39 D6 7B C8 80 78 1D D7 FD E8 0B 5D 8A ED 1A 9D 98 CB C2 EE 78 47 30 AD 8F 64 A5 82 12 23 DA B3 3E CA 4C 85 7A 80 D5 9F 46 20 D6 EE D1 F9 33 FA 1F C5 9C 8E F9 1E 66 51 A5 46 68 DC B7 7F A8 5A DE E6 18 D7 8C 2B 5D EA A8 EC 6B 8B 48 C1 92 5A C1 B1 6A 5E 37 82 22 4B 6A B6 F0 40 16 89 16 A5 81 F8 D4 1B 20 26 86 35 E5 AD C1 01 6E C9 B5 D0 69 C5 0B 31 08 51 5D 35 FC 74 F5 13 04 7A F4 57 10 53 5B A4 CC 8B 21 82 82 15 4B 8C 3D 6B DA 91 85 CB D6 CF 05 80 D0 F0 CF 0D DF 7A B4 99 C7 F8 D5 4C 76 56 30 E9 65 B6 58 60 C1 C0 39 8A 42 54 BC 4A 48 8B A1 D9 5C 32 05 7A 1C BB 50 51 5B 7F C7 75 2D 68 55 E6 83 7B C3 98 FD E6 D5 B8 DA A8 31 01 78 F5 60 8B 1A D2 FD 51 34 47 FA AF 23 AE E2 DE 15 A7 07 66 69 35 9A 40 61 55 25 98 23 54 2A 50 C9 7D A6 CE 74 F8 19 0C 8E 63 E5 49 2F F9 17 05 FD 39 15 55 F4 B0 91 BF 60 B7 B2 40 2E 7A D3 68 86 C0 FC 38 88 AB B9 03 8A 04 05 1A 9F 61 AE F2 D3 B8 A4 29 F8 51 43 CF 84 26 4A 90 6E 13 27 AF 7B 52 DB F9 00 E8 AE C0 B5 6F 64 03 57 20 59 7C F5 E1 65 A8 47 C3 BD EE 72 2A 85 E2 70 8D EA 9D 98 D4 2A D5 70 A2 E9 76 A2 DA E6 7C B0 F7 14 D9 23 B6 88 C0 B3 6F 42 12 F4 69 0C 15 81 D6 F7 0B B7 1B DF 15 E6 75 63 13 53 B3 20 43 79 90 34 E3 34 48 80 D6 86 BB 45 A2 85 DD F8 23 64 3B D5 68 AB 99 53 34 C6 25 0A 87 73 17 37 56 39 BA 8C 0E 39 24 4B CC AA 98 84 0C 2F 27 E6 E2 AC 86 34 5D 1E 25 AE FD 1E FF 3C 27 AD 26 18 4A 1A E5 09 61 5D 83 5F 2C DC 41 A7 C6 07 55 5B B5 0B 71 FE 86 E7 30 A1 BC 27 AF 5F 24 51 1A DD 20 F6 32 9E 3D 64 
6F DC 43 65 2A 80 CB 95 C4 B6 F0 E1 F3 CF 6C F2 C2 9C EA 81 88 0C 2D D2 DA 74 82 C6 A5 1E 98 D3 BC 71 ED E2 0B 05 DA BB 0E FA 35 0A 2C D5 C8 62 E7 B1 AF 95 14 6C 83 7D F1 CE 9F 13 6B D8 68 C9 A5 F5 87 2E A5 8F D7 5C B2 C6 99 37 31 5A A4 D0 E2 43 DF C8 BE BD 10 C0 D8 22 63 95 46 1E E7 8C A8 61 E4 74 02 6C B4 30 F3 06 15 11 E6 2A 3A 0D 3B 2F B9 3B B3 83 40 18 79 FB 39 38 B7 CE 4D BA F6 9E AA E1 8F 32 1C B1 68 DD 5C 2C 37 65 61 73 3D C6 34 56 CD EA BC 77 6A A1 7D 6A F1 F9 78 AF 0F D9 C2 AA D3 D7 A8 2D A8 6E BC 19 83 96 B5 A3 3E B3 B2 5C 54 AD 77 CE 1D E5 D5 AA B3 0D 36 7A 32 7D 5C A3 60 66 8D 84 A0 BD 4F 0F A9 09 89 B8 EC 14 8A 2B 2B 74 8E 75 77 5A 8E B2 51 D0 26 D6 06 8C 9A CA 31 D6 94 17 F0 14 D7 43 1C 82 0C 00 83 E6 75 05 5C 52 AB 0C 38 8F A3 35 77 52 E8 3E 3B CB 48 81 E3 25 B1 A9 40 12 76 4F 16 F1 CE 3D D7 23 89 44 D7 3F 24 7E B7 46 66 C1 16 7A 17 B2 2A 99 F1 AC 3C C9 9D C5 FE 89 BE BF 2C 68 BC 2C A7 F1 C5 2F 26 1E CC D1 AF 7D AA 7D C5 94 4A 4D C4 87 97 2D 2B 6A 5E 5E BF 39 82 18 AB 8C B9 DC 80 83 A1 D1 80 D2 65 FE 2E CC 6A F1 02 84 B2 36 60 37 24 4E 5E 57 AD A5 C5 50 1A 5E A4 5C 31 B6 93 60 57 AC EB ED 65 3F BF EA C7 08 CA 13 00 93 E5 E6 79 F6 37 20 CA B4 6E 39 9E 83 4F 15 8B 15 CD E7 8C 90 93 B0 85 91 9B AE 21 EF 03 D0 A4 B6 2A B4 C6 D3 07 04 92 54 72 8E EC 2E B3 47 6C CE 42 06 7F E0 5B 96 F2 48 8B FA 8F 83 E2 47 10 A5 B7 30 F8 68 B0 FD 02 74 6F 48 71 D7 F1 2E DF A1 52 61 76 99 47 BE 0A 2F F8 F2 69 9D AD 03 FA E6 84 A7 CF 35 7D 8F 5F C5 A6 9B 21 66 35 BC 58 D5 89 B5 E0 9F 11 F0 A8 8A 1F C8 3C 24 B2 B7 F1 6C 8A DB 3B 39 7A CA D0 EF 15 61 22 72 FD FC 02 3D BD 76 35 9E E1 C6 D7 2C B2 59 E1 03 E0 FF 7A 87 03 79 9F 61 AB CC 49 98 C2 41 CF 6E 9B AA 52 9B D0 08 B5 9E 23 F6 C1 39 82 77 16 5D D4 E1 B3 AD A0 0C 58 F8 E2 67 00 6A 0B 4B D2 6C E1 C5 6B 9D BA 3F 40 82 C5 28 B8 C1 60 75 85 EE C4 FA 04 ED 62 64 B6 29 10 67 4B 9B D6 6C 0E 06 62 64 83 CA F0 2F 2D B8 F6 0A D7 D7 6A 1C 58 14 BE 18 60 80 29 02 CD F6 B1 95 A5 6D 2E 27 9C 08 E3 1F C5 C2 07 7F 63 7F DB 82 C6 C6 85 AC 
A6 D2 4C F1 7F DB 1D CF 86 20 56 60 C0 24 E0 C0 42 0B 4E 00 5F 8B 78 60 FE EA EC 6D 31 93 49 70 EB 2A 45 4F 92 9B 6C 17 28 BB 89 FC C0 07 84 CC AD 1B 85 F2 85 18 5C 3D 5A 60 54 AF 03 9D 9E E4 26 D3 86 AA 0B 7C A3 32 9C C2 0F 3A D4 3E 1F 52 43 A8 31 E9 70 FC 0C B4 7C F5 E3 C7 6F 11 ED 22 4C 0C 1B 82 CB 72 A4 95 28 1A D4 1B E5 C4 6E D7 F1 EC BF 25 2C B8 92 87 A8 D2 15 79 34 39 C0 BE 0D C8 68 2D F2 D3 8E 01 09 3C 48 94 32 69 89 D5 C0 5D E8 2C E6 A6 97 59 4B 9A C6 61 B0 9E DB 81 DC D3 F9 47 34 84 00 CA 87 BE 5D 6D 56 F3 01 02 3B FF FF FF FF 00 00 00 00"


if __name__ == '__main__':
    result = GbbqReader().get_df("/Users/rainx/tmp/gbbq")
    print(result)
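Each decrypted GBBQ record is 29 bytes and is unpacked with the format string `"<B7sIBffff"` used in `get_df` above: a market byte, a NUL-padded 7-byte stock code, the date as an unsigned integer, a category byte, and four little-endian floats (1 + 7 + 4 + 1 + 4×4 = 29). A self-contained sketch of that layout on a synthetic record (the field values are made up for illustration):

```python
import struct

# build one synthetic 29-byte record matching "<B7sIBffff":
# market, code (7 bytes, NUL-padded by struct), yyyymmdd date,
# category, then the four float payload fields
record = struct.pack("<B7sIBffff", 0, b"000001", 20200101, 1,
                     0.5, 0.0, 0.0, 0.0)
assert len(record) == 29

market, code, date, category, v5, v6, v7, v8 = struct.unpack("<B7sIBffff", record)
# the reader strips the NUL padding before decoding, as above
assert code.rstrip(b"\x00").decode("utf-8") == "000001"
assert date == 20200101
```

The per-record advance in the loop matches this layout: three 8-byte decryption rounds (24 bytes) plus 5 trailing clear-text bytes.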
| 169.729167 | 12,548 | 0.612434 | 4,565 | 16,294 | 2.171742 | 0.084995 | 0.013315 | 0.013113 | 0.011297 | 0.045289 | 0.043272 | 0.034093 | 0.01856 | 0.01856 | 0.006859 | 0 | 0.516175 | 0.360685 | 16,294 | 95 | 12,549 | 171.515789 | 0.435538 | 0.011354 | 0 | 0.108108 | 0 | 0.013514 | 0.789206 | 0.003975 | 0 | 1 | 0.004596 | 0 | 0 | 1 | 0.013514 | false | 0 | 0.054054 | 0 | 0.108108 | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
160078949dbf2532f034fe96599aa212f61df315 | 15,988 | py | Python | ore/tests/tests_visibility.py | lukegb/Ore-python | 1d1c73795406fa52ae969726feb89f7aedbc4afc | [
"MIT"
] | 1 | 2016-05-24T14:49:42.000Z | 2016-05-24T14:49:42.000Z | ore/tests/tests_visibility.py | gratimax/ore-old | 1d1c73795406fa52ae969726feb89f7aedbc4afc | [
"MIT"
] | null | null | null | ore/tests/tests_visibility.py | gratimax/ore-old | 1d1c73795406fa52ae969726feb89f7aedbc4afc | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AnonymousUser
from ore.accounts.models import OreUser
from ore.core.models import Organization
from django.test import TestCase as TestCase
from ore.projects.models import Project
from ore.teams.models import OrganizationTeam, ProjectTeam
from ore.versions.models import Version, File
class VisibilityTestCase(TestCase):
def make_project(self, name, namespace, status=Project.STATUS.active):
proj = Project.objects.create(
name=name, namespace=namespace,
description='?', status=status,
)
return proj
def make_user(self, username, status=OreUser.STATUS.active):
user = OreUser.objects.create_user(
username, 'password', '{}@ore.spongepowered.org'.format(username)
)
# The first user is always an admin, turn this off
if OreUser.objects.count() == 1:
user.is_staff = False
user.is_superuser = False
user.save()
        if status != OreUser.STATUS.active:
user.status = status
user.save()
return user
def make_superuser(self, *args, **kwargs):
user = self.make_user(*args, **kwargs)
user.is_staff = True
user.is_superuser = True
user.save()
return user
def make_organization(self, name, owners=None, status=Organization.STATUS.active):
org = Organization.objects.create(
name=name, status=status,
)
if owners:
org.teams.get(is_owner_team=True).users = owners
return org
def make_version(self, name, project, status=Version.STATUS.active):
return Version.objects.create(
name=name, project=project, status=status,
)
def make_organization_team(self, name, organization, users=None, projects=None, permissions=None, **kwargs):
team = OrganizationTeam.objects.create(
name=name, organization=organization, **kwargs
)
if users:
team.users = users
if projects:
team.projects = projects
if permissions:
team.permissions = permissions
return team
def make_project_team(self, name, project, users=None, permissions=None, **kwargs):
team = ProjectTeam.objects.create(
name=name, project=project, **kwargs
)
if users:
team.users = users
if permissions:
team.permissions = permissions
return team
def make_file(self, version, filetype, status=File.STATUS.active):
return File.objects.create(
version=version, filetype=filetype, status=status
)
def assertUserCanSee(self, model, user, item):
self.assertIn(item, model.objects.as_user(user))
def assertUserCanNotSee(self, model, user, item):
self.assertNotIn(item, model.objects.as_user(user))
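The assertion helpers above rely on each model's manager exposing an `as_user(user)` method that narrows the queryset to what that user may see. A minimal plain-Python sketch of that pattern (the class and field names here are illustrative stand-ins, not Ore's actual implementation):

```python
class FakeQuerySet:
    """Stand-in for a Django queryset: a membership-testable collection."""
    def __init__(self, items):
        self.items = list(items)

    def __contains__(self, item):
        return item in self.items


class VisibilityManager:
    """Sketch of a manager whose as_user() hides non-active rows
    from everyone except staff users."""
    def __init__(self, rows):
        self.rows = rows

    def as_user(self, user):
        if getattr(user, "is_staff", False):
            return FakeQuerySet(self.rows)  # staff see everything
        return FakeQuerySet(r for r in self.rows if r["status"] == "active")


class User:
    def __init__(self, is_staff=False):
        self.is_staff = is_staff


rows = [{"name": "joe", "status": "active"},
        {"name": "jim", "status": "deleted"}]
manager = VisibilityManager(rows)

assert rows[0] in manager.as_user(User())                # randomer sees active row
assert rows[1] not in manager.as_user(User())            # deleted row hidden
assert rows[1] in manager.as_user(User(is_staff=True))   # staff see deleted row
```

The test cases below exercise exactly this contract for users, organizations, and projects.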
class RepoUserVisibilityTestCase(VisibilityTestCase):
def test_user_visible_anonymously(self):
user_joe = self.make_user('joe')
self.assertUserCanSee(OreUser, AnonymousUser(), user_joe)
def test_user_visible_randomer(self):
user_joe = self.make_user('joe')
user_jane = self.make_user('jane')
self.assertUserCanSee(OreUser, user_jane, user_joe)
def test_user_visible_themselves(self):
user_joe = self.make_user('joe')
self.assertUserCanSee(OreUser, user_joe, user_joe)
def test_user_visible_staff(self):
user_joe = self.make_user('joe')
user_janet = self.make_superuser('janet')
self.assertUserCanSee(OreUser, user_janet, user_joe)
def test_deleted_user_not_visible_anonymously(self):
user_joe = self.make_user('joe', status=OreUser.STATUS.deleted)
self.assertUserCanNotSee(OreUser, AnonymousUser(), user_joe)
def test_deleted_user_not_visible_randomer(self):
user_joe = self.make_user('joe', status=OreUser.STATUS.deleted)
user_jane = self.make_user('jane')
self.assertUserCanNotSee(OreUser, user_jane, user_joe)
def test_deleted_user_visible_staff(self):
user_joe = self.make_user('joe', status=OreUser.STATUS.deleted)
user_janet = self.make_superuser('janet')
self.assertUserCanSee(OreUser, user_janet, user_joe)
class OrganizationVisibilityTestCase(VisibilityTestCase):
def test_organization_visible_anonymously(self):
org_sponge = self.make_organization('Sponge')
self.assertUserCanSee(Organization, AnonymousUser(), org_sponge)
def test_organization_visible_randomer(self):
org_sponge = self.make_organization('Sponge')
user_jane = self.make_user('jane')
self.assertUserCanSee(Organization, user_jane, org_sponge)
def test_organization_visible_owner(self):
user_joe = self.make_user('joe')
org_sponge = self.make_organization('Sponge', owners=[user_joe])
self.assertUserCanSee(Organization, user_joe, org_sponge)
def test_organization_visible_staff(self):
org_sponge = self.make_organization('Sponge')
user_janet = self.make_superuser('janet')
self.assertUserCanSee(Organization, user_janet, org_sponge)
def test_deleted_organization_not_visible_anonymously(self):
org_sponge = self.make_organization(
'Sponge', status=Organization.STATUS.deleted)
self.assertUserCanNotSee(Organization, AnonymousUser(), org_sponge)
def test_deleted_organization_not_visible_randomer(self):
org_sponge = self.make_organization(
'Sponge', status=Organization.STATUS.deleted)
user_jane = self.make_user('jane')
self.assertUserCanNotSee(Organization, user_jane, org_sponge)
def test_deleted_organization_not_visible_owner(self):
user_joe = self.make_user('joe')
org_sponge = self.make_organization(
'Sponge', owners=[user_joe], status=Organization.STATUS.deleted)
self.assertUserCanNotSee(Organization, user_joe, org_sponge)
def test_deleted_organization_visible_staff(self):
org_sponge = self.make_organization(
'Sponge', status=Organization.STATUS.deleted)
user_janet = self.make_superuser('janet')
self.assertUserCanSee(Organization, user_janet, org_sponge)
class UserNamespaceMixin(object):
def make_namespace(self, **kwargs):
user_joe = self.make_user('joe', **kwargs)
return user_joe, user_joe
class OrganizationNamespaceMixin(object):
def make_namespace(self, **kwargs):
user_joe = self.make_user('joe')
org_sponge = self.make_organization(
'Sponge', owners=[user_joe], **kwargs)
return user_joe, org_sponge
class ProjectVisibilityTestCaseMixin(object):
def test_project_visible_anonymously(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
self.assertUserCanSee(Project, AnonymousUser(), proj_sponge)
def test_project_visible_randomer(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
user_jane = self.make_user('jane')
self.assertUserCanSee(Project, user_jane, proj_sponge)
def test_project_visible_owner(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
self.assertUserCanSee(Project, user_joe, proj_sponge)
def test_project_visible_project_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
pteam_spongers = self.make_project_team(
'Spongers', proj_sponge, users=[user_jack])
self.assertUserCanSee(Project, user_jack, proj_sponge)
def test_project_visible_staff(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
user_janet = self.make_superuser('janet')
self.assertUserCanSee(Project, user_janet, proj_sponge)
def test_namespace_deleted_project_not_visible_anonymously(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
self.assertUserCanNotSee(Project, AnonymousUser(), proj_sponge)
def test_namespace_deleted_project_not_visible_randomer(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
user_jane = self.make_user('jane')
self.assertUserCanNotSee(Project, user_jane, proj_sponge)
def test_namespace_deleted_project_not_visible_owner(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
self.assertUserCanNotSee(Project, user_joe, proj_sponge)
def test_namespace_deleted_project_not_visible_project_team_member(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
pteam_spongers = self.make_project_team(
'Spongers', proj_sponge, users=[user_jack])
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_namespace_deleted_project_visible_staff(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
user_janet = self.make_superuser('janet')
self.assertUserCanSee(Project, user_janet, proj_sponge)
def test_deleted_project_not_visible_anonymously(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
self.assertUserCanNotSee(Project, AnonymousUser(), proj_sponge)
def test_deleted_project_not_visible_randomer(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
user_jane = self.make_user('jane')
self.assertUserCanNotSee(Project, user_jane, proj_sponge)
def test_deleted_project_not_visible_owner(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
self.assertUserCanNotSee(Project, user_joe, proj_sponge)
def test_deleted_project_not_visible_project_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
user_jack = self.make_user('jack')
pteam_spongers = self.make_project_team(
'Spongers', proj_sponge, users=[user_jack])
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_deleted_project_visible_staff(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
user_janet = self.make_superuser('janet')
self.assertUserCanSee(Project, user_janet, proj_sponge)
class UserProjectVisibilityTestCase(UserNamespaceMixin, ProjectVisibilityTestCaseMixin, VisibilityTestCase):
pass
class OrganizationProjectVisibilityTestCase(OrganizationNamespaceMixin, ProjectVisibilityTestCaseMixin, VisibilityTestCase):
def test_project_visible_irrelevant_organization_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team(
'Spongers', namespace, users=[user_jack], projects=[], is_all_projects=False)
self.assertUserCanSee(Project, user_jack, proj_sponge)
def test_namespace_deleted_project_not_visible_irrelevant_organization_team_member(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team(
'Spongers', namespace, users=[user_jack], projects=[], is_all_projects=False)
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_deleted_project_not_visible_irrelevant_organization_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team(
'Spongers', namespace, users=[user_jack], projects=[], is_all_projects=False)
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_project_visible_all_projects_organization_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team(
'Spongers', namespace, users=[user_jack], projects=[], is_all_projects=True)
self.assertUserCanSee(Project, user_jack, proj_sponge)
def test_namespace_deleted_project_not_visible_all_projects_organization_team_member(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team(
'Spongers', namespace, users=[user_jack], projects=[], is_all_projects=True)
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_deleted_project_not_visible_all_projects_organization_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team(
'Spongers', namespace, users=[user_jack], projects=[], is_all_projects=True)
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_project_visible_project_organization_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team('Spongers', namespace, users=[
user_jack], projects=[proj_sponge], is_all_projects=False)
self.assertUserCanSee(Project, user_jack, proj_sponge)
def test_namespace_deleted_project_not_visible_project_organization_team_member(self):
user_joe, namespace = self.make_namespace(status='deleted')
proj_sponge = self.make_project('Sponge', namespace)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team('Spongers', namespace, users=[
user_jack], projects=[proj_sponge], is_all_projects=False)
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
def test_deleted_project_not_visible_project_organization_team_member(self):
user_joe, namespace = self.make_namespace()
proj_sponge = self.make_project(
'Sponge', namespace, status=Project.STATUS.deleted)
user_jack = self.make_user('jack')
oteam_spongers = self.make_organization_team('Spongers', namespace, users=[
user_jack], projects=[proj_sponge], is_all_projects=False)
self.assertUserCanNotSee(Project, user_jack, proj_sponge)
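The two concrete test cases above multiply coverage by composing a namespace mixin with the shared `ProjectVisibilityTestCaseMixin`: Python's MRO resolves `make_namespace` from whichever namespace mixin is listed first. A minimal sketch of that composition pattern (names are illustrative):

```python
class UserNamespace:
    def make_namespace(self):
        return "user"

class OrgNamespace:
    def make_namespace(self):
        return "org"

class ChecksMixin:
    # Shared test logic; relies on the composing class supplying make_namespace().
    def describe(self):
        return "project in {} namespace".format(self.make_namespace())

class UserCase(UserNamespace, ChecksMixin):
    pass

class OrgCase(OrgNamespace, ChecksMixin):
    pass

assert UserCase().describe() == "project in user namespace"
assert OrgCase().describe() == "project in org namespace"
```

Each shared test method thus runs once per namespace flavour without being written twice.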
16141c105531a3e68c7cb0828fb8d7d38e7da49c | 81 | py | Python | test/__init__.py | mbdevpl/lidar-playground | [Apache-2.0]
"""Tests for lidar-playground package."""
from lidar_playground import _logging
1621ccbe4860a9270b854050234439ae4544ad28 | 192 | py | Python | djrest_wrapper/exceptions/services/exceptions.py | almohress/djrest-wrapper | [MIT]
from .base import BaseServiceExp
class DoesNotExistsExp(BaseServiceExp):
pass
class DuplicateModelExp(BaseServiceExp):
pass
class InvalidCredentialsExp(BaseServiceExp):
pass
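The empty subclasses above exist purely so callers can catch service failures by type rather than by message. A hedged usage sketch — `BaseServiceExp` is redefined here as a stand-in, and the `get_user` helper is hypothetical, not part of djrest-wrapper:

```python
class BaseServiceExp(Exception):
    """Stand-in for djrest_wrapper's base service exception."""

class DoesNotExistsExp(BaseServiceExp):
    pass

def get_user(users, pk):
    # Hypothetical service-layer lookup that raises the typed exception.
    if pk not in users:
        raise DoesNotExistsExp("user {} not found".format(pk))
    return users[pk]

try:
    get_user({1: "joe"}, 2)
except DoesNotExistsExp as exp:
    result = str(exp)

assert result == "user 2 not found"
assert issubclass(DoesNotExistsExp, BaseServiceExp)
```

Catching `BaseServiceExp` at a view boundary lets one handler translate any service failure into an API error response.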
1664d58bf96073096531afeb0d5e32e834ea2311 | 20,141 | py | Python | monk/pytorch/transforms/transforms.py | Sanskar329/monk_v1 | [Apache-2.0]
from pytorch.transforms.imports import *
from system.imports import *
@accepts(dict, int, bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_center_crop(system_dict, input_size, train, val, test, retrieve=False):
'''
Apply Center Cropping transformation
Args:
system_dict (dict): System dictionary storing experiment state and set variables
input_size (int, list): Crop size
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["CenterCrop"] = {};
tmp["CenterCrop"]["input_size"] = input_size;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.CenterCrop(input_size));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.CenterCrop(input_size));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.CenterCrop(input_size));
return system_dict;
@accepts(dict, [list, float, int], [list, float, int], [list, float, int], [list, float, int], bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_color_jitter(system_dict, brightness, contrast, saturation, hue, train, val, test, retrieve=False):
'''
Apply Color jittering transformations
Args:
system_dict (dict): System dictionary storing experiment state and set variables
brightness (float): Levels to jitter brightness.
0 - min
1 - max
contrast (float): Levels to jitter contrast.
0 - min
1 - max
saturation (float): Levels to jitter saturation.
0 - min
1 - max
hue (float): Levels to jitter hue.
0 - min
1 - max
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["ColorJitter"] = {};
tmp["ColorJitter"]["brightness"] = brightness;
tmp["ColorJitter"]["contrast"] = contrast;
tmp["ColorJitter"]["saturation"] = saturation;
tmp["ColorJitter"]["hue"] = hue;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.ColorJitter(brightness=brightness, contrast=contrast, saturation=saturation, hue=hue));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.ColorJitter(brightness=brightness, contrast=contrast, saturation=saturation, hue=hue));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.ColorJitter(brightness=brightness, contrast=contrast, saturation=saturation, hue=hue));
return system_dict;
@accepts(dict, [list, float, int], [tuple, list, type(None)], [tuple, list, type(None)], [list, float, int, tuple, type(None)],
bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_affine(system_dict, degrees, translate, scale, shear, train, val, test, retrieve=False):
'''
Apply random affine transformations
Args:
system_dict (dict): System dictionary storing experiment state and set variables
degrees (float): Max Rotation range limit for transforms
        translate (tuple): Maximum absolute fractions for horizontal and vertical translations
scale (float, list): Range for randomly scaling
shear (float, list): Range for randomly applying sheer changes
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomAffine"] = {};
tmp["RandomAffine"]["degrees"] = degrees;
tmp["RandomAffine"]["translate"] = translate;
tmp["RandomAffine"]["scale"] = scale;
tmp["RandomAffine"]["shear"] = shear;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomAffine(degrees, translate=translate, scale=scale, shear=shear));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomAffine(degrees, translate=translate, scale=scale, shear=shear));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomAffine(degrees, translate=translate, scale=scale, shear=shear));
return system_dict;
@accepts(dict, int, bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_crop(system_dict, input_size, train, val, test, retrieve=False):
'''
Apply Random Cropping transformation
Args:
system_dict (dict): System dictionary storing experiment state and set variables
input_size (int, list): Crop size
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomCrop"] = {};
tmp["RandomCrop"]["input_size"] = input_size;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomCrop(input_size));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomCrop(input_size));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomCrop(input_size));
return system_dict;
@accepts(dict, float, bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_horizontal_flip(system_dict, probability, train, val, test, retrieve=False):
'''
Apply random horizontal flip transformations
Args:
system_dict (dict): System dictionary storing experiment state and set variables
probability (float): Probability of flipping the input image
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomHorizontalFlip"] = {};
tmp["RandomHorizontalFlip"]["p"] = probability;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomHorizontalFlip(p=probability));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomHorizontalFlip(p=probability));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomHorizontalFlip(p=probability));
return system_dict;
@accepts(dict, [float, int], [float, int], bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_perspective(system_dict, distortion_scale, probability, train, val, test, retrieve=False):
'''
Apply random perspective transformations
Args:
system_dict (dict): System dictionary storing experiment state and set variables
distortion_scale (float): Max limit for perspective distortion
probability (float): Probability of applying transformation
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomPerspective"] = {};
tmp["RandomPerspective"]["distortion_scale"] = distortion_scale;
tmp["RandomPerspective"]["p"] = probability;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomPerspective(distortion_scale=distortion_scale, p=probability));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomPerspective(distortion_scale=distortion_scale, p=probability));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomPerspective(distortion_scale=distortion_scale, p=probability));
return system_dict;
@accepts(dict, int, [tuple, list, float], [tuple, list, float], bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_resized_crop(system_dict, input_size, scale, ratio, train, val, test, retrieve=False):
'''
Apply Random Resized Cropping transformation
Args:
system_dict (dict): System dictionary storing experiment state and set variables
input_size (int, list): Crop size
scale (float, tuple): scaling ratio limits; for maximum and minimum random scaling
        ratio (float, tuple): aspect ratio limits; for maximum and minimum changes to aspect ratios
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomResizedCrop"] = {};
tmp["RandomResizedCrop"]["input_size"] = input_size;
tmp["RandomResizedCrop"]["scale"] = scale;
tmp["RandomResizedCrop"]["ratio"] = ratio;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomResizedCrop(size=input_size, scale=scale, ratio=ratio));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomResizedCrop(size=input_size, scale=scale, ratio=ratio));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomResizedCrop(size=input_size, scale=scale, ratio=ratio));
return system_dict;
@accepts(dict, int, bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_grayscale(system_dict, num_output_channels, train, val, test, retrieve=False):
'''
Not active
'''
tmp = {};
tmp["Grayscale"] = {};
tmp["Grayscale"]["num_output_channels"] = num_output_channels;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.Grayscale(num_output_channels=num_output_channels));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.Grayscale(num_output_channels=num_output_channels));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.Grayscale(num_output_channels=num_output_channels));
return system_dict;
@accepts(dict, [float, int, list], bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_rotation(system_dict, degrees, train, val, test, retrieve=False):
'''
Apply random rotation transformations
Args:
system_dict (dict): System dictionary storing experiment state and set variables
degrees (float): Max Rotation range limit for transforms
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomRotation"] = {};
tmp["RandomRotation"]["degrees"] = degrees;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomRotation(degrees));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomRotation(degrees));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomRotation(degrees));
return system_dict;
@accepts(dict, float, bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_random_vertical_flip(system_dict, probability, train, val, test, retrieve=False):
'''
Apply random vertical flip transformations
Args:
system_dict (dict): System dictionary storing experiment state and set variables
probability (float): Probability of flipping the input image
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["RandomVerticalFlip"] = {};
tmp["RandomVerticalFlip"]["p"] = probability;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.RandomVerticalFlip(p=probability));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.RandomVerticalFlip(p=probability));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.RandomVerticalFlip(p=probability));
return system_dict;
@accepts(dict, int, bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_resize(system_dict, input_size, train, val, test, retrieve=False):
'''
Apply standard resizing
Args:
system_dict (dict): System dictionary storing experiment state and set variables
input_size (int, list): expected final size
train (bool): If True, transform applied to training data
val (bool): If True, transform applied to validation data
test (bool): If True, transform applied to testing/inferencing data
Returns:
dict: updated system dict
'''
tmp = {};
tmp["Resize"] = {};
tmp["Resize"]["input_size"] = input_size;
if(train):
if(not retrieve):
system_dict["dataset"]["transforms"]["train"].append(tmp);
system_dict["local"]["transforms_train"].append(transforms.Resize(size=(input_size, input_size)));
if(val):
if(not retrieve):
system_dict["dataset"]["transforms"]["val"].append(tmp);
system_dict["local"]["transforms_val"].append(transforms.Resize(size=(input_size, input_size)));
if(test):
if(not retrieve):
system_dict["dataset"]["transforms"]["test"].append(tmp);
system_dict["local"]["transforms_test"].append(transforms.Resize(size=(input_size, input_size)));
return system_dict;

@accepts(dict, [float, list], [float, list], bool, bool, bool, retrieve=bool, post_trace=False)
#@TraceFunction(trace_args=False, trace_rv=False)
def transform_normalize(system_dict, mean, std, train, val, test, retrieve=False):
    '''
    Apply mean subtraction and standard deviation normalization

    Args:
        system_dict (dict): System dictionary storing experiment state and set variables
        mean (float, list): Per-channel mean value to subtract
        std (float, list): Per-channel standard deviation to divide by
        train (bool): If True, the transform is applied to training data
        val (bool): If True, the transform is applied to validation data
        test (bool): If True, the transform is applied to testing/inferencing data
        retrieve (bool): If True, the transform is being rebuilt from a saved experiment,
                         so it is not re-recorded in system_dict["dataset"]["transforms"]

    Returns:
        dict: Updated system dict
    '''
    tmp = {};
    tmp["Normalize"] = {};
    tmp["Normalize"]["mean"] = mean;
    tmp["Normalize"]["std"] = std;
    system_dict["local"]["normalize"] = True;

    input_size = system_dict["dataset"]["params"]["input_size"];
    if(type(input_size) == tuple or type(input_size) == list):
        h = input_size[0];
        w = input_size[1];
    else:
        h = input_size;
        w = input_size;

    if(train):
        if(not retrieve):
            system_dict["dataset"]["transforms"]["train"].append(tmp);
        system_dict["local"]["transforms_train"].append(transforms.Resize(size=(w, h)));
        system_dict["local"]["transforms_train"].append(transforms.ToTensor());
        system_dict["local"]["transforms_train"].append(transforms.Normalize(mean=mean, std=std));
    if(val):
        if(not retrieve):
            system_dict["dataset"]["transforms"]["val"].append(tmp);
        system_dict["local"]["transforms_val"].append(transforms.Resize(size=(w, h)));
        system_dict["local"]["transforms_val"].append(transforms.ToTensor());
        system_dict["local"]["transforms_val"].append(transforms.Normalize(mean=mean, std=std));
    if(test):
        if(not retrieve):
            system_dict["dataset"]["transforms"]["test"].append(tmp);
        system_dict["local"]["transforms_test"].append(transforms.Resize(size=(w, h)));
        system_dict["local"]["transforms_test"].append(transforms.ToTensor());
        system_dict["local"]["transforms_test"].append(transforms.Normalize(mean=mean, std=std));
    return system_dict;
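The transform_* functions above all share one bookkeeping pattern: a JSON-serializable record is appended to `system_dict["dataset"]["transforms"][split]` (skipped when `retrieve` is True, i.e. when rebuilding a saved experiment), while the live torchvision transform object is always appended to `system_dict["local"]["transforms_<split>"]`. A minimal, self-contained sketch of that pattern, with the torchvision objects stubbed as tuples (`record_transform` is an illustrative name, not part of the library):

```python
def record_transform(system_dict, name, params, train, val, test, retrieve=False):
    # Serializable record of the transform and its parameters.
    entry = {name: dict(params)}
    for split, enabled in (("train", train), ("val", val), ("test", test)):
        if not enabled:
            continue
        if not retrieve:
            # New experiment: persist the transform so it can be re-created later.
            system_dict["dataset"]["transforms"][split].append(entry)
        # Stand-in for the live torchvision transform object.
        system_dict["local"]["transforms_" + split].append((name, dict(params)))
    return system_dict


system_dict = {
    "dataset": {"transforms": {"train": [], "val": [], "test": []}},
    "local": {"transforms_train": [], "transforms_val": [], "transforms_test": []},
}
# First call records the transform; the retrieve=True call only rebuilds
# the live object without duplicating the persisted record.
record_transform(system_dict, "Resize", {"input_size": 224}, True, True, False)
record_transform(system_dict, "Resize", {"input_size": 224}, True, True, False, retrieve=True)
```

This is why retrieval of a saved experiment does not grow the serialized transform list while still reconstructing the runnable pipeline.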
# ---- Advanced/Unit-Test/main.py (repo: Alperencode/Python, MIT license) ----
def add(a, b):
    return a + b


def sub(a, b):
    return a - b


def mul(a, b):
    return a * b


def div(a, b):
    if b == 0:
        return "error"
    return a / b
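The file path (`Advanced/Unit-Test/main.py`) suggests these helpers are meant to be exercised with Python's `unittest` module. A minimal sketch of such tests follows; the class and test names are illustrative, not taken from the repository, and the helpers are redefined locally so the sketch runs on its own:

```python
import unittest


# Local copies of the helpers so this sketch is self-contained.
def add(a, b): return a + b
def sub(a, b): return a - b
def mul(a, b): return a * b
def div(a, b): return "error" if b == 0 else a / b


class TestArithmetic(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

    def test_sub(self):
        self.assertEqual(sub(2, 3), -1)

    def test_mul(self):
        self.assertEqual(mul(4, 3), 12)

    def test_div(self):
        self.assertEqual(div(10, 4), 2.5)

    def test_div_by_zero(self):
        # div() signals division by zero with the string "error"
        # rather than raising ZeroDivisionError.
        self.assertEqual(div(1, 0), "error")


# Run the suite programmatically (instead of unittest.main(), which
# would call sys.exit()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Returning the sentinel string `"error"` from `div` mixes types in the return value; raising an exception would usually be more idiomatic, but the tests above pin down the behavior the module actually has.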
# ---- zwende/__init__.py (repo: daxm/zwende, BSD-3-Clause license) ----
from .zwende import *