hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4343518882271496aaea1025e987346434d5d990 | 113,816 | py | Python | Plugins/UnrealEnginePython/Binaries/Win64/Lib/site-packages/tensorflow/python/pywrap_tensorflow_internal.py | JustinACoder/H22-GR3-UnrealAI | 361eb9ef1147f8a2991e5f98c4118cd823184adf | ["MIT"] | 6 | 2022-02-04T18:12:24.000Z | 2022-03-21T23:57:12.000Z | Lib/site-packages/tensorflow/python/pywrap_tensorflow_internal.py | shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings | 1fa4cd6a566c8745f455fc3d2273208f21f88ced | ["bzip2-1.0.6"] | null | null | null | Lib/site-packages/tensorflow/python/pywrap_tensorflow_internal.py | shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings | 1fa4cd6a566c8745f455fc3d2273208f21f88ced | ["bzip2-1.0.6"] | 1 | 2022-02-08T03:53:23.000Z | 2022-02-08T03:53:23.000Z |
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.8
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2, 6, 0):
    def swig_import_helper():
        from os.path import dirname
        import imp
        fp = None
        try:
            fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
        except ImportError:
            import _pywrap_tensorflow_internal
            return _pywrap_tensorflow_internal
        if fp is not None:
            try:
                _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
            finally:
                fp.close()
            return _mod
    _pywrap_tensorflow_internal = swig_import_helper()
    del swig_import_helper
else:
    import _pywrap_tensorflow_internal
del version_info
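# SWIG proxy plumbing: the helpers below route attribute access on the
# generated proxy classes through their __swig_setmethods__/__swig_getmethods__
# tables, falling back to plain instance attributes where permitted.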
try:
    _swig_property = property
except NameError:
    pass  # Python < 2.2 doesn't have 'property'.

def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
    if (name == "thisown"):
        return self.this.own(value)
    if (name == "this"):
        if type(value).__name__ == 'SwigPyObject':
            self.__dict__[name] = value
            return
    method = class_type.__swig_setmethods__.get(name, None)
    if method:
        return method(self, value)
    if (not static):
        if _newclass:
            object.__setattr__(self, name, value)
        else:
            self.__dict__[name] = value
    else:
        raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
    return _swig_setattr_nondynamic(self, class_type, name, value, 0)

def _swig_getattr_nondynamic(self, class_type, name, static=1):
    if (name == "thisown"):
        return self.this.own()
    method = class_type.__swig_getmethods__.get(name, None)
    if method:
        return method(self)
    if (not static):
        return object.__getattr__(self, name)
    else:
        raise AttributeError(name)

def _swig_getattr(self, class_type, name):
    return _swig_getattr_nondynamic(self, class_type, name, 0)

def _swig_repr(self):
    try:
        strthis = "proxy of " + self.this.__repr__()
    except Exception:
        strthis = ""
    return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)

try:
    _object = object
    _newclass = 1
except AttributeError:
    class _object:
        pass
    _newclass = 0
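# --- Eager execution (TFE_*) C API wrappers ---
# Every export below follows the same SWIG pattern: a thin Python def that
# forwards to the extension module, immediately re-bound to the C entry point
# so calls bypass the Python wrapper.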
def TFE_NewContextOptions():
return _pywrap_tensorflow_internal.TFE_NewContextOptions()
TFE_NewContextOptions = _pywrap_tensorflow_internal.TFE_NewContextOptions
def TFE_ContextOptionsSetConfig(options, proto):
return _pywrap_tensorflow_internal.TFE_ContextOptionsSetConfig(options, proto)
TFE_ContextOptionsSetConfig = _pywrap_tensorflow_internal.TFE_ContextOptionsSetConfig
_pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_EXPLICIT_swigconstant(_pywrap_tensorflow_internal)
TFE_DEVICE_PLACEMENT_EXPLICIT = _pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_EXPLICIT
_pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_WARN_swigconstant(_pywrap_tensorflow_internal)
TFE_DEVICE_PLACEMENT_WARN = _pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_WARN
_pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_SILENT_swigconstant(_pywrap_tensorflow_internal)
TFE_DEVICE_PLACEMENT_SILENT = _pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_SILENT
_pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_SILENT_FOR_INT32_swigconstant(_pywrap_tensorflow_internal)
TFE_DEVICE_PLACEMENT_SILENT_FOR_INT32 = _pywrap_tensorflow_internal.TFE_DEVICE_PLACEMENT_SILENT_FOR_INT32
def TFE_ContextOptionsSetAsync(arg1, enable):
return _pywrap_tensorflow_internal.TFE_ContextOptionsSetAsync(arg1, enable)
TFE_ContextOptionsSetAsync = _pywrap_tensorflow_internal.TFE_ContextOptionsSetAsync
def TFE_ContextOptionsSetDevicePlacementPolicy(arg1, arg2):
return _pywrap_tensorflow_internal.TFE_ContextOptionsSetDevicePlacementPolicy(arg1, arg2)
TFE_ContextOptionsSetDevicePlacementPolicy = _pywrap_tensorflow_internal.TFE_ContextOptionsSetDevicePlacementPolicy
def TFE_DeleteContextOptions(arg1):
return _pywrap_tensorflow_internal.TFE_DeleteContextOptions(arg1)
TFE_DeleteContextOptions = _pywrap_tensorflow_internal.TFE_DeleteContextOptions
def TFE_NewContext(opts):
return _pywrap_tensorflow_internal.TFE_NewContext(opts)
TFE_NewContext = _pywrap_tensorflow_internal.TFE_NewContext
def TFE_DeleteContext(ctx):
return _pywrap_tensorflow_internal.TFE_DeleteContext(ctx)
TFE_DeleteContext = _pywrap_tensorflow_internal.TFE_DeleteContext
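# Illustrative lifecycle of the raw handles above (a sketch only; TensorFlow
# itself drives these through tensorflow.python.eager.context):
#   opts = TFE_NewContextOptions()
#   TFE_ContextOptionsSetAsync(opts, 0)
#   ctx = TFE_NewContext(opts)
#   TFE_DeleteContextOptions(opts)
#   ...
#   TFE_DeleteContext(ctx)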
def TFE_ContextListDevices(ctx):
return _pywrap_tensorflow_internal.TFE_ContextListDevices(ctx)
TFE_ContextListDevices = _pywrap_tensorflow_internal.TFE_ContextListDevices
def TFE_ContextClearCaches(ctx):
return _pywrap_tensorflow_internal.TFE_ContextClearCaches(ctx)
TFE_ContextClearCaches = _pywrap_tensorflow_internal.TFE_ContextClearCaches
def TFE_ContextSetThreadLocalDevicePlacementPolicy(arg1, arg2):
return _pywrap_tensorflow_internal.TFE_ContextSetThreadLocalDevicePlacementPolicy(arg1, arg2)
TFE_ContextSetThreadLocalDevicePlacementPolicy = _pywrap_tensorflow_internal.TFE_ContextSetThreadLocalDevicePlacementPolicy
def TFE_ContextGetDevicePlacementPolicy(arg1):
return _pywrap_tensorflow_internal.TFE_ContextGetDevicePlacementPolicy(arg1)
TFE_ContextGetDevicePlacementPolicy = _pywrap_tensorflow_internal.TFE_ContextGetDevicePlacementPolicy
def TFE_ContextSetAsyncForThread(arg1, enable):
return _pywrap_tensorflow_internal.TFE_ContextSetAsyncForThread(arg1, enable)
TFE_ContextSetAsyncForThread = _pywrap_tensorflow_internal.TFE_ContextSetAsyncForThread
def TFE_ContextSetServerDef(ctx, keep_alive_secs, proto):
return _pywrap_tensorflow_internal.TFE_ContextSetServerDef(ctx, keep_alive_secs, proto)
TFE_ContextSetServerDef = _pywrap_tensorflow_internal.TFE_ContextSetServerDef
def TFE_ContextAsyncWait(arg1):
return _pywrap_tensorflow_internal.TFE_ContextAsyncWait(arg1)
TFE_ContextAsyncWait = _pywrap_tensorflow_internal.TFE_ContextAsyncWait
def TFE_ContextAsyncClearError(arg1):
return _pywrap_tensorflow_internal.TFE_ContextAsyncClearError(arg1)
TFE_ContextAsyncClearError = _pywrap_tensorflow_internal.TFE_ContextAsyncClearError
def TFE_OpNameGetAttrType(ctx, op_or_function_name, attr_name):
return _pywrap_tensorflow_internal.TFE_OpNameGetAttrType(ctx, op_or_function_name, attr_name)
TFE_OpNameGetAttrType = _pywrap_tensorflow_internal.TFE_OpNameGetAttrType
def TFE_ContextAddFunctionDef(ctx, serialized_function_def, size):
return _pywrap_tensorflow_internal.TFE_ContextAddFunctionDef(ctx, serialized_function_def, size)
TFE_ContextAddFunctionDef = _pywrap_tensorflow_internal.TFE_ContextAddFunctionDef
def TFE_ContextAddFunction(ctx, function):
return _pywrap_tensorflow_internal.TFE_ContextAddFunction(ctx, function)
TFE_ContextAddFunction = _pywrap_tensorflow_internal.TFE_ContextAddFunction
def TFE_ContextEnableRunMetadata(ctx):
return _pywrap_tensorflow_internal.TFE_ContextEnableRunMetadata(ctx)
TFE_ContextEnableRunMetadata = _pywrap_tensorflow_internal.TFE_ContextEnableRunMetadata
def TFE_ContextDisableRunMetadata(ctx):
return _pywrap_tensorflow_internal.TFE_ContextDisableRunMetadata(ctx)
TFE_ContextDisableRunMetadata = _pywrap_tensorflow_internal.TFE_ContextDisableRunMetadata
def TFE_ContextExportRunMetadata(ctx, buf):
return _pywrap_tensorflow_internal.TFE_ContextExportRunMetadata(ctx, buf)
TFE_ContextExportRunMetadata = _pywrap_tensorflow_internal.TFE_ContextExportRunMetadata
def TFE_ContextStartStep(ctx):
return _pywrap_tensorflow_internal.TFE_ContextStartStep(ctx)
TFE_ContextStartStep = _pywrap_tensorflow_internal.TFE_ContextStartStep
def TFE_ContextEndStep(ctx):
return _pywrap_tensorflow_internal.TFE_ContextEndStep(ctx)
TFE_ContextEndStep = _pywrap_tensorflow_internal.TFE_ContextEndStep
def TFE_Py_Execute(ctx, device_name, op_name, inputs, attrs, outputs):
return _pywrap_tensorflow_internal.TFE_Py_Execute(ctx, device_name, op_name, inputs, attrs, outputs)
TFE_Py_Execute = _pywrap_tensorflow_internal.TFE_Py_Execute
def TFE_Py_RegisterExceptionClass(e):
return _pywrap_tensorflow_internal.TFE_Py_RegisterExceptionClass(e)
TFE_Py_RegisterExceptionClass = _pywrap_tensorflow_internal.TFE_Py_RegisterExceptionClass
def TFE_Py_RegisterResourceVariableType(e):
return _pywrap_tensorflow_internal.TFE_Py_RegisterResourceVariableType(e)
TFE_Py_RegisterResourceVariableType = _pywrap_tensorflow_internal.TFE_Py_RegisterResourceVariableType
def TFE_Py_RegisterVSpace(e):
return _pywrap_tensorflow_internal.TFE_Py_RegisterVSpace(e)
TFE_Py_RegisterVSpace = _pywrap_tensorflow_internal.TFE_Py_RegisterVSpace
def TFE_Py_RegisterFallbackExceptionClass(e):
return _pywrap_tensorflow_internal.TFE_Py_RegisterFallbackExceptionClass(e)
TFE_Py_RegisterFallbackExceptionClass = _pywrap_tensorflow_internal.TFE_Py_RegisterFallbackExceptionClass
def TFE_Py_RegisterGradientFunction(e):
return _pywrap_tensorflow_internal.TFE_Py_RegisterGradientFunction(e)
TFE_Py_RegisterGradientFunction = _pywrap_tensorflow_internal.TFE_Py_RegisterGradientFunction
def TFE_Py_UID():
return _pywrap_tensorflow_internal.TFE_Py_UID()
TFE_Py_UID = _pywrap_tensorflow_internal.TFE_Py_UID
def TFE_Py_InitEagerTensor(base_class):
return _pywrap_tensorflow_internal.TFE_Py_InitEagerTensor(base_class)
TFE_Py_InitEagerTensor = _pywrap_tensorflow_internal.TFE_Py_InitEagerTensor
def TFE_Py_SetEagerTensorProfiler(profiler):
return _pywrap_tensorflow_internal.TFE_Py_SetEagerTensorProfiler(profiler)
TFE_Py_SetEagerTensorProfiler = _pywrap_tensorflow_internal.TFE_Py_SetEagerTensorProfiler
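# Gradient-tape plumbing: the TFE_Py_Tape* entry points below are what
# tf.GradientTape (tensorflow.python.eager.backprop) uses to record executed
# ops, watch variables, and compute gradients via TFE_Py_TapeGradient.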
def TFE_Py_TapeSetNew(persistent, watch_accessed_variables):
return _pywrap_tensorflow_internal.TFE_Py_TapeSetNew(persistent, watch_accessed_variables)
TFE_Py_TapeSetNew = _pywrap_tensorflow_internal.TFE_Py_TapeSetNew
def TFE_Py_TapeSetRemove(tape):
return _pywrap_tensorflow_internal.TFE_Py_TapeSetRemove(tape)
TFE_Py_TapeSetRemove = _pywrap_tensorflow_internal.TFE_Py_TapeSetRemove
def TFE_Py_TapeSetAdd(tape):
return _pywrap_tensorflow_internal.TFE_Py_TapeSetAdd(tape)
TFE_Py_TapeSetAdd = _pywrap_tensorflow_internal.TFE_Py_TapeSetAdd
def TFE_Py_TapeSetIsEmpty():
return _pywrap_tensorflow_internal.TFE_Py_TapeSetIsEmpty()
TFE_Py_TapeSetIsEmpty = _pywrap_tensorflow_internal.TFE_Py_TapeSetIsEmpty
def TFE_Py_TapeSetShouldRecord(tensors):
return _pywrap_tensorflow_internal.TFE_Py_TapeSetShouldRecord(tensors)
TFE_Py_TapeSetShouldRecord = _pywrap_tensorflow_internal.TFE_Py_TapeSetShouldRecord
def TFE_Py_TapeWatch(tape, tensor):
return _pywrap_tensorflow_internal.TFE_Py_TapeWatch(tape, tensor)
TFE_Py_TapeWatch = _pywrap_tensorflow_internal.TFE_Py_TapeWatch
def TFE_Py_TapeSetDeleteTrace(tensor_id):
return _pywrap_tensorflow_internal.TFE_Py_TapeSetDeleteTrace(tensor_id)
TFE_Py_TapeSetDeleteTrace = _pywrap_tensorflow_internal.TFE_Py_TapeSetDeleteTrace
def TFE_Py_TapeSetStopOnThread():
return _pywrap_tensorflow_internal.TFE_Py_TapeSetStopOnThread()
TFE_Py_TapeSetStopOnThread = _pywrap_tensorflow_internal.TFE_Py_TapeSetStopOnThread
def TFE_Py_TapeSetRestartOnThread():
return _pywrap_tensorflow_internal.TFE_Py_TapeSetRestartOnThread()
TFE_Py_TapeSetRestartOnThread = _pywrap_tensorflow_internal.TFE_Py_TapeSetRestartOnThread
def TFE_Py_TapeSetRecordOperation(op_type, output_tensors, input_tensor_ids, backward_function):
return _pywrap_tensorflow_internal.TFE_Py_TapeSetRecordOperation(op_type, output_tensors, input_tensor_ids, backward_function)
TFE_Py_TapeSetRecordOperation = _pywrap_tensorflow_internal.TFE_Py_TapeSetRecordOperation
def TFE_Py_TapeVariableAccessed(variable):
return _pywrap_tensorflow_internal.TFE_Py_TapeVariableAccessed(variable)
TFE_Py_TapeVariableAccessed = _pywrap_tensorflow_internal.TFE_Py_TapeVariableAccessed
def TFE_Py_TapeWatchVariable(tape, variable):
return _pywrap_tensorflow_internal.TFE_Py_TapeWatchVariable(tape, variable)
TFE_Py_TapeWatchVariable = _pywrap_tensorflow_internal.TFE_Py_TapeWatchVariable
def TFE_Py_TapeGradient(tape, target, sources, output_gradients):
return _pywrap_tensorflow_internal.TFE_Py_TapeGradient(tape, target, sources, output_gradients)
TFE_Py_TapeGradient = _pywrap_tensorflow_internal.TFE_Py_TapeGradient
def TFE_Py_RecordGradient(op_name, inputs, attrs, results, name):
return _pywrap_tensorflow_internal.TFE_Py_RecordGradient(op_name, inputs, attrs, results, name)
TFE_Py_RecordGradient = _pywrap_tensorflow_internal.TFE_Py_RecordGradient
def TFE_Py_TapeWatchedVariables(tape):
return _pywrap_tensorflow_internal.TFE_Py_TapeWatchedVariables(tape)
TFE_Py_TapeWatchedVariables = _pywrap_tensorflow_internal.TFE_Py_TapeWatchedVariables
def TFE_Py_TensorShapeSlice(tensors, slice_dim):
return _pywrap_tensorflow_internal.TFE_Py_TensorShapeSlice(tensors, slice_dim)
TFE_Py_TensorShapeSlice = _pywrap_tensorflow_internal.TFE_Py_TensorShapeSlice
def TFE_Py_TensorShapeOnDevice(tensor):
return _pywrap_tensorflow_internal.TFE_Py_TensorShapeOnDevice(tensor)
TFE_Py_TensorShapeOnDevice = _pywrap_tensorflow_internal.TFE_Py_TensorShapeOnDevice
def TFE_Py_EncodeArg(arg1):
return _pywrap_tensorflow_internal.TFE_Py_EncodeArg(arg1)
TFE_Py_EncodeArg = _pywrap_tensorflow_internal.TFE_Py_EncodeArg
def IsGoogleCudaEnabled():
return _pywrap_tensorflow_internal.IsGoogleCudaEnabled()
IsGoogleCudaEnabled = _pywrap_tensorflow_internal.IsGoogleCudaEnabled
def CudaSupportsHalfMatMulAndConv():
return _pywrap_tensorflow_internal.CudaSupportsHalfMatMulAndConv()
CudaSupportsHalfMatMulAndConv = _pywrap_tensorflow_internal.CudaSupportsHalfMatMulAndConv
def IsMklEnabled():
return _pywrap_tensorflow_internal.IsMklEnabled()
IsMklEnabled = _pywrap_tensorflow_internal.IsMklEnabled
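# Checkpoint inspection: CheckpointReader wraps the C++ checkpoint reader and
# is exported publicly as tf.train.NewCheckpointReader (see the _tf_api_names
# annotations below).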
def CheckpointReader_GetTensor(reader, name, out_status):
    return _pywrap_tensorflow_internal.CheckpointReader_GetTensor(reader, name, out_status)
CheckpointReader_GetTensor = _pywrap_tensorflow_internal.CheckpointReader_GetTensor

def NewCheckpointReader(filepattern):
    from tensorflow.python.framework import errors
    with errors.raise_exception_on_not_ok_status() as status:
        from tensorflow.python.util import compat
        return CheckpointReader(compat.as_bytes(filepattern), status)
NewCheckpointReader._tf_api_names = ['train.NewCheckpointReader']
NewCheckpointReader._tf_api_names_v1 = ['train.NewCheckpointReader']

class CheckpointReader(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, CheckpointReader, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, CheckpointReader, name)
    __repr__ = _swig_repr

    def __init__(self, filepattern, out_status):
        this = _pywrap_tensorflow_internal.new_CheckpointReader(filepattern, out_status)
        try:
            self.this.append(this)
        except Exception:
            self.this = this

    def _HasTensor(self, name):
        return _pywrap_tensorflow_internal.CheckpointReader__HasTensor(self, name)

    def debug_string(self):
        return _pywrap_tensorflow_internal.CheckpointReader_debug_string(self)

    def get_variable_to_shape_map(self):
        return _pywrap_tensorflow_internal.CheckpointReader_get_variable_to_shape_map(self)

    def _GetVariableToDataTypeMap(self):
        return _pywrap_tensorflow_internal.CheckpointReader__GetVariableToDataTypeMap(self)

    def get_variable_to_dtype_map(self):
        from tensorflow.python.framework import dtypes
        return {name: dtypes.DType(type_enum)
                for name, type_enum in self._GetVariableToDataTypeMap().items()}

    def has_tensor(self, tensor_str):
        from tensorflow.python.util import compat
        return self._HasTensor(compat.as_bytes(tensor_str))

    def get_tensor(self, tensor_str):
        from tensorflow.python.framework import errors
        with errors.raise_exception_on_not_ok_status() as status:
            from tensorflow.python.util import compat
            return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str),
                                              status)
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_CheckpointReader
    __del__ = lambda self: None
CheckpointReader_swigregister = _pywrap_tensorflow_internal.CheckpointReader_swigregister
CheckpointReader_swigregister(CheckpointReader)
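# Illustrative use of the reader (a sketch; the checkpoint path and variable
# name are hypothetical):
#   reader = NewCheckpointReader("/tmp/model.ckpt")
#   if reader.has_tensor("my_variable"):
#       value = reader.get_tensor("my_variable")   # returns a numpy array
#   shapes = reader.get_variable_to_shape_map()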
TFE_Py_FastPathExecute = _pywrap_tensorflow_internal.TFE_Py_FastPathExecute
def NewStatSummarizer(unused):
    return _pywrap_tensorflow_internal.NewStatSummarizer(unused)
NewStatSummarizer = _pywrap_tensorflow_internal.NewStatSummarizer

def DeleteStatSummarizer(ss):
    return _pywrap_tensorflow_internal.DeleteStatSummarizer(ss)
DeleteStatSummarizer = _pywrap_tensorflow_internal.DeleteStatSummarizer

class StatSummarizer(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, StatSummarizer, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, StatSummarizer, name)
    __repr__ = _swig_repr
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_StatSummarizer
    __del__ = lambda self: None

    def ProcessStepStats(self, step_stats):
        return _pywrap_tensorflow_internal.StatSummarizer_ProcessStepStats(self, step_stats)

    def GetOutputString(self):
        return _pywrap_tensorflow_internal.StatSummarizer_GetOutputString(self)

    def PrintStepStats(self):
        return _pywrap_tensorflow_internal.StatSummarizer_PrintStepStats(self)

    def ProcessStepStatsStr(self, step_stats_str):
        return _pywrap_tensorflow_internal.StatSummarizer_ProcessStepStatsStr(self, step_stats_str)

    def __init__(self, *args):
        this = _pywrap_tensorflow_internal.new_StatSummarizer(*args)
        try:
            self.this.append(this)
        except Exception:
            self.this = this
StatSummarizer_swigregister = _pywrap_tensorflow_internal.StatSummarizer_swigregister
StatSummarizer_swigregister(StatSummarizer)
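# tfprof hooks: NewProfiler/AddStep/Profile/PrintModelAnalysis below back the
# model-analysis tooling in tensorflow.python.profiler.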
def NewProfiler(graph, op_log):
return _pywrap_tensorflow_internal.NewProfiler(graph, op_log)
NewProfiler = _pywrap_tensorflow_internal.NewProfiler
def DeleteProfiler():
return _pywrap_tensorflow_internal.DeleteProfiler()
DeleteProfiler = _pywrap_tensorflow_internal.DeleteProfiler
def AddStep(step, graph, run_meta, op_log):
return _pywrap_tensorflow_internal.AddStep(step, graph, run_meta, op_log)
AddStep = _pywrap_tensorflow_internal.AddStep
def WriteProfile(filename):
return _pywrap_tensorflow_internal.WriteProfile(filename)
WriteProfile = _pywrap_tensorflow_internal.WriteProfile
def ProfilerFromFile(filename):
return _pywrap_tensorflow_internal.ProfilerFromFile(filename)
ProfilerFromFile = _pywrap_tensorflow_internal.ProfilerFromFile
def SerializeToString():
return _pywrap_tensorflow_internal.SerializeToString()
SerializeToString = _pywrap_tensorflow_internal.SerializeToString
def Profile(command, options):
return _pywrap_tensorflow_internal.Profile(command, options)
Profile = _pywrap_tensorflow_internal.Profile
def PrintModelAnalysis(graph, run_meta, op_log, command, options):
return _pywrap_tensorflow_internal.PrintModelAnalysis(graph, run_meta, op_log, command, options)
PrintModelAnalysis = _pywrap_tensorflow_internal.PrintModelAnalysis
def InitializePyTrampoline(trampoline):
return _pywrap_tensorflow_internal.InitializePyTrampoline(trampoline)
InitializePyTrampoline = _pywrap_tensorflow_internal.InitializePyTrampoline
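# PyExceptionRegistry maps TF_Code status codes to the Python exception classes
# raised by the framework; the record reader/writer classes that follow back
# TFRecord file I/O (tensorflow.python.lib.io.tf_record).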
class PyExceptionRegistry(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, PyExceptionRegistry, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, PyExceptionRegistry, name)

    def __init__(self, *args, **kwargs):
        raise AttributeError("No constructor defined")
    __repr__ = _swig_repr
    __swig_getmethods__["Init"] = lambda x: _pywrap_tensorflow_internal.PyExceptionRegistry_Init
    if _newclass:
        Init = staticmethod(_pywrap_tensorflow_internal.PyExceptionRegistry_Init)
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_PyExceptionRegistry
    __del__ = lambda self: None
PyExceptionRegistry_swigregister = _pywrap_tensorflow_internal.PyExceptionRegistry_swigregister
PyExceptionRegistry_swigregister(PyExceptionRegistry)

def PyExceptionRegistry_Init(code_to_exc_type_map):
    return _pywrap_tensorflow_internal.PyExceptionRegistry_Init(code_to_exc_type_map)
PyExceptionRegistry_Init = _pywrap_tensorflow_internal.PyExceptionRegistry_Init

class PyRecordReader(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, PyRecordReader, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, PyRecordReader, name)

    def __init__(self, *args, **kwargs):
        raise AttributeError("No constructor defined")
    __repr__ = _swig_repr
    __swig_getmethods__["New"] = lambda x: _pywrap_tensorflow_internal.PyRecordReader_New
    if _newclass:
        New = staticmethod(_pywrap_tensorflow_internal.PyRecordReader_New)
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_PyRecordReader
    __del__ = lambda self: None

    def GetNext(self):
        return _pywrap_tensorflow_internal.PyRecordReader_GetNext(self)

    def record(self):
        return _pywrap_tensorflow_internal.PyRecordReader_record(self)

    def offset(self):
        return _pywrap_tensorflow_internal.PyRecordReader_offset(self)

    def Close(self):
        return _pywrap_tensorflow_internal.PyRecordReader_Close(self)
PyRecordReader_swigregister = _pywrap_tensorflow_internal.PyRecordReader_swigregister
PyRecordReader_swigregister(PyRecordReader)

def PyRecordReader_New(filename, start_offset, compression_type_string, out_status):
    return _pywrap_tensorflow_internal.PyRecordReader_New(filename, start_offset, compression_type_string, out_status)
PyRecordReader_New = _pywrap_tensorflow_internal.PyRecordReader_New

class RecordWriterOptions(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, RecordWriterOptions, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, RecordWriterOptions, name)
    __repr__ = _swig_repr
    __swig_getmethods__["CreateRecordWriterOptions"] = lambda x: _pywrap_tensorflow_internal.RecordWriterOptions_CreateRecordWriterOptions
    if _newclass:
        CreateRecordWriterOptions = staticmethod(_pywrap_tensorflow_internal.RecordWriterOptions_CreateRecordWriterOptions)
    __swig_setmethods__["zlib_options"] = _pywrap_tensorflow_internal.RecordWriterOptions_zlib_options_set
    __swig_getmethods__["zlib_options"] = _pywrap_tensorflow_internal.RecordWriterOptions_zlib_options_get
    if _newclass:
        zlib_options = _swig_property(_pywrap_tensorflow_internal.RecordWriterOptions_zlib_options_get, _pywrap_tensorflow_internal.RecordWriterOptions_zlib_options_set)

    def __init__(self):
        this = _pywrap_tensorflow_internal.new_RecordWriterOptions()
        try:
            self.this.append(this)
        except Exception:
            self.this = this
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_RecordWriterOptions
    __del__ = lambda self: None
RecordWriterOptions_swigregister = _pywrap_tensorflow_internal.RecordWriterOptions_swigregister
RecordWriterOptions_swigregister(RecordWriterOptions)

def RecordWriterOptions_CreateRecordWriterOptions(compression_type):
    return _pywrap_tensorflow_internal.RecordWriterOptions_CreateRecordWriterOptions(compression_type)
RecordWriterOptions_CreateRecordWriterOptions = _pywrap_tensorflow_internal.RecordWriterOptions_CreateRecordWriterOptions

class ZlibCompressionOptions(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, ZlibCompressionOptions, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, ZlibCompressionOptions, name)

    def __init__(self, *args, **kwargs):
        raise AttributeError("No constructor defined")
    __repr__ = _swig_repr
    __swig_setmethods__["flush_mode"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_flush_mode_set
    __swig_getmethods__["flush_mode"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_flush_mode_get
    if _newclass:
        flush_mode = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_flush_mode_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_flush_mode_set)
    __swig_setmethods__["input_buffer_size"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_input_buffer_size_set
    __swig_getmethods__["input_buffer_size"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_input_buffer_size_get
    if _newclass:
        input_buffer_size = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_input_buffer_size_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_input_buffer_size_set)
    __swig_setmethods__["output_buffer_size"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_output_buffer_size_set
    __swig_getmethods__["output_buffer_size"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_output_buffer_size_get
    if _newclass:
        output_buffer_size = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_output_buffer_size_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_output_buffer_size_set)
    __swig_setmethods__["window_bits"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_window_bits_set
    __swig_getmethods__["window_bits"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_window_bits_get
    if _newclass:
        window_bits = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_window_bits_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_window_bits_set)
    __swig_setmethods__["compression_level"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_level_set
    __swig_getmethods__["compression_level"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_level_get
    if _newclass:
        compression_level = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_compression_level_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_level_set)
    __swig_setmethods__["compression_method"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_method_set
    __swig_getmethods__["compression_method"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_method_get
    if _newclass:
        compression_method = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_compression_method_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_method_set)
    __swig_setmethods__["mem_level"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_mem_level_set
    __swig_getmethods__["mem_level"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_mem_level_get
    if _newclass:
        mem_level = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_mem_level_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_mem_level_set)
    __swig_setmethods__["compression_strategy"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_strategy_set
    __swig_getmethods__["compression_strategy"] = _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_strategy_get
    if _newclass:
        compression_strategy = _swig_property(_pywrap_tensorflow_internal.ZlibCompressionOptions_compression_strategy_get, _pywrap_tensorflow_internal.ZlibCompressionOptions_compression_strategy_set)
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_ZlibCompressionOptions
    __del__ = lambda self: None
ZlibCompressionOptions_swigregister = _pywrap_tensorflow_internal.ZlibCompressionOptions_swigregister
ZlibCompressionOptions_swigregister(ZlibCompressionOptions)
class PyRecordWriter(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, PyRecordWriter, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, PyRecordWriter, name)

    def __init__(self, *args, **kwargs):
        raise AttributeError("No constructor defined")
    __repr__ = _swig_repr
    __swig_getmethods__["New"] = lambda x: _pywrap_tensorflow_internal.PyRecordWriter_New
    if _newclass:
        New = staticmethod(_pywrap_tensorflow_internal.PyRecordWriter_New)
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_PyRecordWriter
    __del__ = lambda self: None

    def WriteRecord(self, record, out_status):
        return _pywrap_tensorflow_internal.PyRecordWriter_WriteRecord(self, record, out_status)

    def Flush(self, out_status):
        return _pywrap_tensorflow_internal.PyRecordWriter_Flush(self, out_status)

    def Close(self, out_status):
        return _pywrap_tensorflow_internal.PyRecordWriter_Close(self, out_status)
PyRecordWriter_swigregister = _pywrap_tensorflow_internal.PyRecordWriter_swigregister
PyRecordWriter_swigregister(PyRecordWriter)

def PyRecordWriter_New(filename, compression_options, out_status):
    return _pywrap_tensorflow_internal.PyRecordWriter_New(filename, compression_options, out_status)
PyRecordWriter_New = _pywrap_tensorflow_internal.PyRecordWriter_New

class Status(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, Status, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, Status, name)
    __repr__ = _swig_repr

    def __init__(self, *args):
        this = _pywrap_tensorflow_internal.new_Status(*args)
        try:
            self.this.append(this)
        except Exception:
            self.this = this
    __swig_getmethods__["OK"] = lambda x: _pywrap_tensorflow_internal.Status_OK
    if _newclass:
        OK = staticmethod(_pywrap_tensorflow_internal.Status_OK)

    def ok(self):
        return _pywrap_tensorflow_internal.Status_ok(self)

    def code(self):
        return _pywrap_tensorflow_internal.Status_code(self)

    def error_message(self):
        return _pywrap_tensorflow_internal.Status_error_message(self)

    def __eq__(self, x):
        return _pywrap_tensorflow_internal.Status___eq__(self, x)

    def __ne__(self, x):
        return _pywrap_tensorflow_internal.Status___ne__(self, x)

    def Update(self, new_status):
        return _pywrap_tensorflow_internal.Status_Update(self, new_status)

    def ToString(self):
        return _pywrap_tensorflow_internal.Status_ToString(self)

    def IgnoreError(self):
        return _pywrap_tensorflow_internal.Status_IgnoreError(self)
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_Status
    __del__ = lambda self: None
Status_swigregister = _pywrap_tensorflow_internal.Status_swigregister
Status_swigregister(Status)

def Status_OK():
    return _pywrap_tensorflow_internal.Status_OK()
Status_OK = _pywrap_tensorflow_internal.Status_OK
def __lshift__(os, x):
return _pywrap_tensorflow_internal.__lshift__(os, x)
__lshift__ = _pywrap_tensorflow_internal.__lshift__
def TfCheckOpHelperOutOfLine(v, msg):
return _pywrap_tensorflow_internal.TfCheckOpHelperOutOfLine(v, msg)
TfCheckOpHelperOutOfLine = _pywrap_tensorflow_internal.TfCheckOpHelperOutOfLine
def TfCheckOpHelper(v, msg):
return _pywrap_tensorflow_internal.TfCheckOpHelper(v, msg)
TfCheckOpHelper = _pywrap_tensorflow_internal.TfCheckOpHelper
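# EventsWriter writes TensorBoard event files; it is driven by EventFileWriter
# in tensorflow.python.summary.writer.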
class EventsWriter(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, EventsWriter, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, EventsWriter, name)
    __repr__ = _swig_repr

    def __init__(self, file_prefix):
        this = _pywrap_tensorflow_internal.new_EventsWriter(file_prefix)
        try:
            self.this.append(this)
        except Exception:
            self.this = this
    __swig_destroy__ = _pywrap_tensorflow_internal.delete_EventsWriter
    __del__ = lambda self: None

    def InitWithSuffix(self, suffix):
        return _pywrap_tensorflow_internal.EventsWriter_InitWithSuffix(self, suffix)

    def FileName(self):
        return _pywrap_tensorflow_internal.EventsWriter_FileName(self)

    def _WriteSerializedEvent(self, event_str):
        return _pywrap_tensorflow_internal.EventsWriter__WriteSerializedEvent(self, event_str)

    def Flush(self):
        return _pywrap_tensorflow_internal.EventsWriter_Flush(self)

    def Close(self):
        return _pywrap_tensorflow_internal.EventsWriter_Close(self)

    def WriteEvent(self, event):
        from tensorflow.core.util.event_pb2 import Event
        if not isinstance(event, Event):
            raise TypeError("Expected an event_pb2.Event proto, "
                            " but got %s" % type(event))
        return self._WriteSerializedEvent(event.SerializeToString())
EventsWriter_swigregister = _pywrap_tensorflow_internal.EventsWriter_swigregister
EventsWriter_swigregister(EventsWriter)
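# Illustrative use (a sketch only; the file prefix is hypothetical):
#   from tensorflow.core.util.event_pb2 import Event
#   writer = EventsWriter(b"/tmp/logdir/events")
#   writer.WriteEvent(Event(wall_time=0.0, step=0, file_version="brain.Event:2"))
#   writer.Flush()
#   writer.Close()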
_pywrap_tensorflow_internal.__version___swigconstant(_pywrap_tensorflow_internal)
__version__ = _pywrap_tensorflow_internal.__version__
_pywrap_tensorflow_internal.GRAPH_DEF_VERSION_swigconstant(_pywrap_tensorflow_internal)
GRAPH_DEF_VERSION = _pywrap_tensorflow_internal.GRAPH_DEF_VERSION
_pywrap_tensorflow_internal.GRAPH_DEF_VERSION_MIN_CONSUMER_swigconstant(_pywrap_tensorflow_internal)
GRAPH_DEF_VERSION_MIN_CONSUMER = _pywrap_tensorflow_internal.GRAPH_DEF_VERSION_MIN_CONSUMER
_pywrap_tensorflow_internal.GRAPH_DEF_VERSION_MIN_PRODUCER_swigconstant(_pywrap_tensorflow_internal)
GRAPH_DEF_VERSION_MIN_PRODUCER = _pywrap_tensorflow_internal.GRAPH_DEF_VERSION_MIN_PRODUCER
_pywrap_tensorflow_internal.__git_version___swigconstant(_pywrap_tensorflow_internal)
__git_version__ = _pywrap_tensorflow_internal.__git_version__
_pywrap_tensorflow_internal.__compiler_version___swigconstant(_pywrap_tensorflow_internal)
__compiler_version__ = _pywrap_tensorflow_internal.__compiler_version__
_pywrap_tensorflow_internal.__cxx11_abi_flag___swigconstant(_pywrap_tensorflow_internal)
__cxx11_abi_flag__ = _pywrap_tensorflow_internal.__cxx11_abi_flag__
_pywrap_tensorflow_internal.__monolithic_build___swigconstant(_pywrap_tensorflow_internal)
__monolithic_build__ = _pywrap_tensorflow_internal.__monolithic_build__
_pywrap_tensorflow_internal.TENSOR_HANDLE_KEY_swigconstant(_pywrap_tensorflow_internal)
TENSOR_HANDLE_KEY = _pywrap_tensorflow_internal.TENSOR_HANDLE_KEY
def TF_Version():
return _pywrap_tensorflow_internal.TF_Version()
TF_Version = _pywrap_tensorflow_internal.TF_Version
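# TF_DataType constants: the TF_FLOAT ... TF_UINT64 values below mirror the
# DataType enum in the C API and are consumed by tensorflow.python.framework.dtypes.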
_pywrap_tensorflow_internal.TF_FLOAT_swigconstant(_pywrap_tensorflow_internal)
TF_FLOAT = _pywrap_tensorflow_internal.TF_FLOAT
_pywrap_tensorflow_internal.TF_DOUBLE_swigconstant(_pywrap_tensorflow_internal)
TF_DOUBLE = _pywrap_tensorflow_internal.TF_DOUBLE
_pywrap_tensorflow_internal.TF_INT32_swigconstant(_pywrap_tensorflow_internal)
TF_INT32 = _pywrap_tensorflow_internal.TF_INT32
_pywrap_tensorflow_internal.TF_UINT8_swigconstant(_pywrap_tensorflow_internal)
TF_UINT8 = _pywrap_tensorflow_internal.TF_UINT8
_pywrap_tensorflow_internal.TF_INT16_swigconstant(_pywrap_tensorflow_internal)
TF_INT16 = _pywrap_tensorflow_internal.TF_INT16
_pywrap_tensorflow_internal.TF_INT8_swigconstant(_pywrap_tensorflow_internal)
TF_INT8 = _pywrap_tensorflow_internal.TF_INT8
_pywrap_tensorflow_internal.TF_STRING_swigconstant(_pywrap_tensorflow_internal)
TF_STRING = _pywrap_tensorflow_internal.TF_STRING
_pywrap_tensorflow_internal.TF_COMPLEX64_swigconstant(_pywrap_tensorflow_internal)
TF_COMPLEX64 = _pywrap_tensorflow_internal.TF_COMPLEX64
_pywrap_tensorflow_internal.TF_COMPLEX_swigconstant(_pywrap_tensorflow_internal)
TF_COMPLEX = _pywrap_tensorflow_internal.TF_COMPLEX
_pywrap_tensorflow_internal.TF_INT64_swigconstant(_pywrap_tensorflow_internal)
TF_INT64 = _pywrap_tensorflow_internal.TF_INT64
_pywrap_tensorflow_internal.TF_BOOL_swigconstant(_pywrap_tensorflow_internal)
TF_BOOL = _pywrap_tensorflow_internal.TF_BOOL
_pywrap_tensorflow_internal.TF_QINT8_swigconstant(_pywrap_tensorflow_internal)
TF_QINT8 = _pywrap_tensorflow_internal.TF_QINT8
_pywrap_tensorflow_internal.TF_QUINT8_swigconstant(_pywrap_tensorflow_internal)
TF_QUINT8 = _pywrap_tensorflow_internal.TF_QUINT8
_pywrap_tensorflow_internal.TF_QINT32_swigconstant(_pywrap_tensorflow_internal)
TF_QINT32 = _pywrap_tensorflow_internal.TF_QINT32
_pywrap_tensorflow_internal.TF_BFLOAT16_swigconstant(_pywrap_tensorflow_internal)
TF_BFLOAT16 = _pywrap_tensorflow_internal.TF_BFLOAT16
_pywrap_tensorflow_internal.TF_QINT16_swigconstant(_pywrap_tensorflow_internal)
TF_QINT16 = _pywrap_tensorflow_internal.TF_QINT16
_pywrap_tensorflow_internal.TF_QUINT16_swigconstant(_pywrap_tensorflow_internal)
TF_QUINT16 = _pywrap_tensorflow_internal.TF_QUINT16
_pywrap_tensorflow_internal.TF_UINT16_swigconstant(_pywrap_tensorflow_internal)
TF_UINT16 = _pywrap_tensorflow_internal.TF_UINT16
_pywrap_tensorflow_internal.TF_COMPLEX128_swigconstant(_pywrap_tensorflow_internal)
TF_COMPLEX128 = _pywrap_tensorflow_internal.TF_COMPLEX128
_pywrap_tensorflow_internal.TF_HALF_swigconstant(_pywrap_tensorflow_internal)
TF_HALF = _pywrap_tensorflow_internal.TF_HALF
_pywrap_tensorflow_internal.TF_RESOURCE_swigconstant(_pywrap_tensorflow_internal)
TF_RESOURCE = _pywrap_tensorflow_internal.TF_RESOURCE
_pywrap_tensorflow_internal.TF_VARIANT_swigconstant(_pywrap_tensorflow_internal)
TF_VARIANT = _pywrap_tensorflow_internal.TF_VARIANT
_pywrap_tensorflow_internal.TF_UINT32_swigconstant(_pywrap_tensorflow_internal)
TF_UINT32 = _pywrap_tensorflow_internal.TF_UINT32
_pywrap_tensorflow_internal.TF_UINT64_swigconstant(_pywrap_tensorflow_internal)
TF_UINT64 = _pywrap_tensorflow_internal.TF_UINT64
def TF_DataTypeSize(dt):
return _pywrap_tensorflow_internal.TF_DataTypeSize(dt)
TF_DataTypeSize = _pywrap_tensorflow_internal.TF_DataTypeSize
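# TF_Code status constants: TF_OK ... TF_DATA_LOSS mirror the canonical error
# codes; PyExceptionRegistry_Init maps them to the exception classes in tf.errors.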
_pywrap_tensorflow_internal.TF_OK_swigconstant(_pywrap_tensorflow_internal)
TF_OK = _pywrap_tensorflow_internal.TF_OK
_pywrap_tensorflow_internal.TF_CANCELLED_swigconstant(_pywrap_tensorflow_internal)
TF_CANCELLED = _pywrap_tensorflow_internal.TF_CANCELLED
_pywrap_tensorflow_internal.TF_UNKNOWN_swigconstant(_pywrap_tensorflow_internal)
TF_UNKNOWN = _pywrap_tensorflow_internal.TF_UNKNOWN
_pywrap_tensorflow_internal.TF_INVALID_ARGUMENT_swigconstant(_pywrap_tensorflow_internal)
TF_INVALID_ARGUMENT = _pywrap_tensorflow_internal.TF_INVALID_ARGUMENT
_pywrap_tensorflow_internal.TF_DEADLINE_EXCEEDED_swigconstant(_pywrap_tensorflow_internal)
TF_DEADLINE_EXCEEDED = _pywrap_tensorflow_internal.TF_DEADLINE_EXCEEDED
_pywrap_tensorflow_internal.TF_NOT_FOUND_swigconstant(_pywrap_tensorflow_internal)
TF_NOT_FOUND = _pywrap_tensorflow_internal.TF_NOT_FOUND
_pywrap_tensorflow_internal.TF_ALREADY_EXISTS_swigconstant(_pywrap_tensorflow_internal)
TF_ALREADY_EXISTS = _pywrap_tensorflow_internal.TF_ALREADY_EXISTS
_pywrap_tensorflow_internal.TF_PERMISSION_DENIED_swigconstant(_pywrap_tensorflow_internal)
TF_PERMISSION_DENIED = _pywrap_tensorflow_internal.TF_PERMISSION_DENIED
_pywrap_tensorflow_internal.TF_UNAUTHENTICATED_swigconstant(_pywrap_tensorflow_internal)
TF_UNAUTHENTICATED = _pywrap_tensorflow_internal.TF_UNAUTHENTICATED
_pywrap_tensorflow_internal.TF_RESOURCE_EXHAUSTED_swigconstant(_pywrap_tensorflow_internal)
TF_RESOURCE_EXHAUSTED = _pywrap_tensorflow_internal.TF_RESOURCE_EXHAUSTED
_pywrap_tensorflow_internal.TF_FAILED_PRECONDITION_swigconstant(_pywrap_tensorflow_internal)
TF_FAILED_PRECONDITION = _pywrap_tensorflow_internal.TF_FAILED_PRECONDITION
_pywrap_tensorflow_internal.TF_ABORTED_swigconstant(_pywrap_tensorflow_internal)
TF_ABORTED = _pywrap_tensorflow_internal.TF_ABORTED
_pywrap_tensorflow_internal.TF_OUT_OF_RANGE_swigconstant(_pywrap_tensorflow_internal)
TF_OUT_OF_RANGE = _pywrap_tensorflow_internal.TF_OUT_OF_RANGE
_pywrap_tensorflow_internal.TF_UNIMPLEMENTED_swigconstant(_pywrap_tensorflow_internal)
TF_UNIMPLEMENTED = _pywrap_tensorflow_internal.TF_UNIMPLEMENTED
_pywrap_tensorflow_internal.TF_INTERNAL_swigconstant(_pywrap_tensorflow_internal)
TF_INTERNAL = _pywrap_tensorflow_internal.TF_INTERNAL
_pywrap_tensorflow_internal.TF_UNAVAILABLE_swigconstant(_pywrap_tensorflow_internal)
TF_UNAVAILABLE = _pywrap_tensorflow_internal.TF_UNAVAILABLE
_pywrap_tensorflow_internal.TF_DATA_LOSS_swigconstant(_pywrap_tensorflow_internal)
TF_DATA_LOSS = _pywrap_tensorflow_internal.TF_DATA_LOSS
def TF_NewStatus():
return _pywrap_tensorflow_internal.TF_NewStatus()
TF_NewStatus = _pywrap_tensorflow_internal.TF_NewStatus
def TF_DeleteStatus(arg1):
return _pywrap_tensorflow_internal.TF_DeleteStatus(arg1)
TF_DeleteStatus = _pywrap_tensorflow_internal.TF_DeleteStatus
def TF_SetStatus(s, code, msg):
return _pywrap_tensorflow_internal.TF_SetStatus(s, code, msg)
TF_SetStatus = _pywrap_tensorflow_internal.TF_SetStatus
def TF_GetCode(s):
return _pywrap_tensorflow_internal.TF_GetCode(s)
TF_GetCode = _pywrap_tensorflow_internal.TF_GetCode
def TF_Message(s):
return _pywrap_tensorflow_internal.TF_Message(s)
TF_Message = _pywrap_tensorflow_internal.TF_Message
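# TF_Buffer and the TF_Tensor helpers below carry serialized protos and raw
# tensor data across the C API boundary.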
class TF_Buffer(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, TF_Buffer, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, TF_Buffer, name)
__repr__ = _swig_repr
__swig_setmethods__["data"] = _pywrap_tensorflow_internal.TF_Buffer_data_set
__swig_getmethods__["data"] = _pywrap_tensorflow_internal.TF_Buffer_data_get
if _newclass:
data = _swig_property(_pywrap_tensorflow_internal.TF_Buffer_data_get, _pywrap_tensorflow_internal.TF_Buffer_data_set)
__swig_setmethods__["length"] = _pywrap_tensorflow_internal.TF_Buffer_length_set
__swig_getmethods__["length"] = _pywrap_tensorflow_internal.TF_Buffer_length_get
if _newclass:
length = _swig_property(_pywrap_tensorflow_internal.TF_Buffer_length_get, _pywrap_tensorflow_internal.TF_Buffer_length_set)
__swig_setmethods__["data_deallocator"] = _pywrap_tensorflow_internal.TF_Buffer_data_deallocator_set
__swig_getmethods__["data_deallocator"] = _pywrap_tensorflow_internal.TF_Buffer_data_deallocator_get
if _newclass:
data_deallocator = _swig_property(_pywrap_tensorflow_internal.TF_Buffer_data_deallocator_get, _pywrap_tensorflow_internal.TF_Buffer_data_deallocator_set)
def __init__(self):
this = _pywrap_tensorflow_internal.new_TF_Buffer()
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_TF_Buffer
__del__ = lambda self: None
TF_Buffer_swigregister = _pywrap_tensorflow_internal.TF_Buffer_swigregister
TF_Buffer_swigregister(TF_Buffer)
def TF_NewBufferFromString(proto):
return _pywrap_tensorflow_internal.TF_NewBufferFromString(proto)
TF_NewBufferFromString = _pywrap_tensorflow_internal.TF_NewBufferFromString
def TF_NewBuffer():
return _pywrap_tensorflow_internal.TF_NewBuffer()
TF_NewBuffer = _pywrap_tensorflow_internal.TF_NewBuffer
def TF_DeleteBuffer(arg1):
return _pywrap_tensorflow_internal.TF_DeleteBuffer(arg1)
TF_DeleteBuffer = _pywrap_tensorflow_internal.TF_DeleteBuffer
def TF_GetBuffer(buffer):
return _pywrap_tensorflow_internal.TF_GetBuffer(buffer)
TF_GetBuffer = _pywrap_tensorflow_internal.TF_GetBuffer
def TF_NewTensor(arg1, dims, num_dims, data, len, deallocator, deallocator_arg):
return _pywrap_tensorflow_internal.TF_NewTensor(arg1, dims, num_dims, data, len, deallocator, deallocator_arg)
TF_NewTensor = _pywrap_tensorflow_internal.TF_NewTensor
def TF_AllocateTensor(arg1, dims, num_dims, len):
return _pywrap_tensorflow_internal.TF_AllocateTensor(arg1, dims, num_dims, len)
TF_AllocateTensor = _pywrap_tensorflow_internal.TF_AllocateTensor
def TF_TensorMaybeMove(tensor):
return _pywrap_tensorflow_internal.TF_TensorMaybeMove(tensor)
TF_TensorMaybeMove = _pywrap_tensorflow_internal.TF_TensorMaybeMove
def TF_DeleteTensor(arg1):
return _pywrap_tensorflow_internal.TF_DeleteTensor(arg1)
TF_DeleteTensor = _pywrap_tensorflow_internal.TF_DeleteTensor
def TF_TensorType(arg1):
return _pywrap_tensorflow_internal.TF_TensorType(arg1)
TF_TensorType = _pywrap_tensorflow_internal.TF_TensorType
def TF_NumDims(arg1):
return _pywrap_tensorflow_internal.TF_NumDims(arg1)
TF_NumDims = _pywrap_tensorflow_internal.TF_NumDims
def TF_Dim(tensor, dim_index):
return _pywrap_tensorflow_internal.TF_Dim(tensor, dim_index)
TF_Dim = _pywrap_tensorflow_internal.TF_Dim
def TF_TensorByteSize(arg1):
return _pywrap_tensorflow_internal.TF_TensorByteSize(arg1)
TF_TensorByteSize = _pywrap_tensorflow_internal.TF_TensorByteSize
def TF_TensorData(arg1):
return _pywrap_tensorflow_internal.TF_TensorData(arg1)
TF_TensorData = _pywrap_tensorflow_internal.TF_TensorData
def TF_StringEncode(src, src_len, dst, dst_len):
return _pywrap_tensorflow_internal.TF_StringEncode(src, src_len, dst, dst_len)
TF_StringEncode = _pywrap_tensorflow_internal.TF_StringEncode
def TF_StringDecode(src, src_len, dst, dst_len):
return _pywrap_tensorflow_internal.TF_StringDecode(src, src_len, dst, dst_len)
TF_StringDecode = _pywrap_tensorflow_internal.TF_StringDecode
def TF_StringEncodedSize(len):
return _pywrap_tensorflow_internal.TF_StringEncodedSize(len)
TF_StringEncodedSize = _pywrap_tensorflow_internal.TF_StringEncodedSize
def _TF_NewSessionOptions():
return _pywrap_tensorflow_internal._TF_NewSessionOptions()
_TF_NewSessionOptions = _pywrap_tensorflow_internal._TF_NewSessionOptions
def _TF_SetTarget(options, target):
return _pywrap_tensorflow_internal._TF_SetTarget(options, target)
_TF_SetTarget = _pywrap_tensorflow_internal._TF_SetTarget
def _TF_SetConfig(options, proto):
return _pywrap_tensorflow_internal._TF_SetConfig(options, proto)
_TF_SetConfig = _pywrap_tensorflow_internal._TF_SetConfig
def TF_DeleteSessionOptions(arg1):
return _pywrap_tensorflow_internal.TF_DeleteSessionOptions(arg1)
TF_DeleteSessionOptions = _pywrap_tensorflow_internal.TF_DeleteSessionOptions
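# Graph-construction C API: TF_NewGraph/TF_NewOperation and the TF_Input/
# TF_Output structs below are the primitives the graph-mode Python frontend
# uses to build and inspect nodes.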
def TF_NewGraph():
return _pywrap_tensorflow_internal.TF_NewGraph()
TF_NewGraph = _pywrap_tensorflow_internal.TF_NewGraph
def TF_DeleteGraph(arg1):
return _pywrap_tensorflow_internal.TF_DeleteGraph(arg1)
TF_DeleteGraph = _pywrap_tensorflow_internal.TF_DeleteGraph
class TF_Input(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, TF_Input, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, TF_Input, name)
__repr__ = _swig_repr
__swig_setmethods__["oper"] = _pywrap_tensorflow_internal.TF_Input_oper_set
__swig_getmethods__["oper"] = _pywrap_tensorflow_internal.TF_Input_oper_get
if _newclass:
oper = _swig_property(_pywrap_tensorflow_internal.TF_Input_oper_get, _pywrap_tensorflow_internal.TF_Input_oper_set)
__swig_setmethods__["index"] = _pywrap_tensorflow_internal.TF_Input_index_set
__swig_getmethods__["index"] = _pywrap_tensorflow_internal.TF_Input_index_get
if _newclass:
index = _swig_property(_pywrap_tensorflow_internal.TF_Input_index_get, _pywrap_tensorflow_internal.TF_Input_index_set)
def __init__(self):
this = _pywrap_tensorflow_internal.new_TF_Input()
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_TF_Input
__del__ = lambda self: None
TF_Input_swigregister = _pywrap_tensorflow_internal.TF_Input_swigregister
TF_Input_swigregister(TF_Input)
class TF_Output(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, TF_Output, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, TF_Output, name)
__repr__ = _swig_repr
__swig_setmethods__["oper"] = _pywrap_tensorflow_internal.TF_Output_oper_set
__swig_getmethods__["oper"] = _pywrap_tensorflow_internal.TF_Output_oper_get
if _newclass:
oper = _swig_property(_pywrap_tensorflow_internal.TF_Output_oper_get, _pywrap_tensorflow_internal.TF_Output_oper_set)
__swig_setmethods__["index"] = _pywrap_tensorflow_internal.TF_Output_index_set
__swig_getmethods__["index"] = _pywrap_tensorflow_internal.TF_Output_index_get
if _newclass:
index = _swig_property(_pywrap_tensorflow_internal.TF_Output_index_get, _pywrap_tensorflow_internal.TF_Output_index_set)
def __init__(self):
this = _pywrap_tensorflow_internal.new_TF_Output()
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_TF_Output
__del__ = lambda self: None
TF_Output_swigregister = _pywrap_tensorflow_internal.TF_Output_swigregister
TF_Output_swigregister(TF_Output)
def TF_GraphSetTensorShape(graph, output, dims, num_dims):
return _pywrap_tensorflow_internal.TF_GraphSetTensorShape(graph, output, dims, num_dims)
TF_GraphSetTensorShape = _pywrap_tensorflow_internal.TF_GraphSetTensorShape
def TF_GraphGetTensorNumDims(graph, output):
return _pywrap_tensorflow_internal.TF_GraphGetTensorNumDims(graph, output)
TF_GraphGetTensorNumDims = _pywrap_tensorflow_internal.TF_GraphGetTensorNumDims
def TF_GraphGetTensorShape(graph, output, dims, num_dims):
return _pywrap_tensorflow_internal.TF_GraphGetTensorShape(graph, output, dims, num_dims)
TF_GraphGetTensorShape = _pywrap_tensorflow_internal.TF_GraphGetTensorShape
def TF_NewOperation(graph, op_type, oper_name):
return _pywrap_tensorflow_internal.TF_NewOperation(graph, op_type, oper_name)
TF_NewOperation = _pywrap_tensorflow_internal.TF_NewOperation
def TF_SetDevice(desc, device):
return _pywrap_tensorflow_internal.TF_SetDevice(desc, device)
TF_SetDevice = _pywrap_tensorflow_internal.TF_SetDevice
def TF_AddInput(desc, input):
return _pywrap_tensorflow_internal.TF_AddInput(desc, input)
TF_AddInput = _pywrap_tensorflow_internal.TF_AddInput
def TF_AddInputList(desc, inputs):
return _pywrap_tensorflow_internal.TF_AddInputList(desc, inputs)
TF_AddInputList = _pywrap_tensorflow_internal.TF_AddInputList
def TF_AddControlInput(desc, input):
return _pywrap_tensorflow_internal.TF_AddControlInput(desc, input)
TF_AddControlInput = _pywrap_tensorflow_internal.TF_AddControlInput
def TF_ColocateWith(desc, op):
return _pywrap_tensorflow_internal.TF_ColocateWith(desc, op)
TF_ColocateWith = _pywrap_tensorflow_internal.TF_ColocateWith
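# TF_SetAttr* family: sets typed attributes (strings, ints, floats, bools,
# types, shapes, tensors, protos) on a pending TF_OperationDescription before
# TF_FinishOperation materializes the node.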
def TF_SetAttrString(desc, attr_name, value, length):
return _pywrap_tensorflow_internal.TF_SetAttrString(desc, attr_name, value, length)
TF_SetAttrString = _pywrap_tensorflow_internal.TF_SetAttrString
def TF_SetAttrStringList(desc, attr_name, values, lengths, num_values):
return _pywrap_tensorflow_internal.TF_SetAttrStringList(desc, attr_name, values, lengths, num_values)
TF_SetAttrStringList = _pywrap_tensorflow_internal.TF_SetAttrStringList
def TF_SetAttrInt(desc, attr_name, value):
return _pywrap_tensorflow_internal.TF_SetAttrInt(desc, attr_name, value)
TF_SetAttrInt = _pywrap_tensorflow_internal.TF_SetAttrInt
def TF_SetAttrIntList(desc, attr_name, values, num_values):
return _pywrap_tensorflow_internal.TF_SetAttrIntList(desc, attr_name, values, num_values)
TF_SetAttrIntList = _pywrap_tensorflow_internal.TF_SetAttrIntList
def TF_SetAttrFloat(desc, attr_name, value):
return _pywrap_tensorflow_internal.TF_SetAttrFloat(desc, attr_name, value)
TF_SetAttrFloat = _pywrap_tensorflow_internal.TF_SetAttrFloat
def TF_SetAttrFloatList(desc, attr_name, values, num_values):
return _pywrap_tensorflow_internal.TF_SetAttrFloatList(desc, attr_name, values, num_values)
TF_SetAttrFloatList = _pywrap_tensorflow_internal.TF_SetAttrFloatList
def TF_SetAttrBool(desc, attr_name, value):
return _pywrap_tensorflow_internal.TF_SetAttrBool(desc, attr_name, value)
TF_SetAttrBool = _pywrap_tensorflow_internal.TF_SetAttrBool
def TF_SetAttrBoolList(desc, attr_name, values, num_values):
return _pywrap_tensorflow_internal.TF_SetAttrBoolList(desc, attr_name, values, num_values)
TF_SetAttrBoolList = _pywrap_tensorflow_internal.TF_SetAttrBoolList
def TF_SetAttrType(desc, attr_name, value):
return _pywrap_tensorflow_internal.TF_SetAttrType(desc, attr_name, value)
TF_SetAttrType = _pywrap_tensorflow_internal.TF_SetAttrType
def TF_SetAttrTypeList(desc, attr_name, values, num_values):
return _pywrap_tensorflow_internal.TF_SetAttrTypeList(desc, attr_name, values, num_values)
TF_SetAttrTypeList = _pywrap_tensorflow_internal.TF_SetAttrTypeList
def TF_SetAttrFuncName(desc, attr_name, value, length):
return _pywrap_tensorflow_internal.TF_SetAttrFuncName(desc, attr_name, value, length)
TF_SetAttrFuncName = _pywrap_tensorflow_internal.TF_SetAttrFuncName
def TF_SetAttrShape(desc, attr_name, dims, num_dims):
return _pywrap_tensorflow_internal.TF_SetAttrShape(desc, attr_name, dims, num_dims)
TF_SetAttrShape = _pywrap_tensorflow_internal.TF_SetAttrShape
def TF_SetAttrShapeList(desc, attr_name, dims, num_dims, num_shapes):
return _pywrap_tensorflow_internal.TF_SetAttrShapeList(desc, attr_name, dims, num_dims, num_shapes)
TF_SetAttrShapeList = _pywrap_tensorflow_internal.TF_SetAttrShapeList
def TF_SetAttrTensorShapeProto(desc, attr_name, proto):
return _pywrap_tensorflow_internal.TF_SetAttrTensorShapeProto(desc, attr_name, proto)
TF_SetAttrTensorShapeProto = _pywrap_tensorflow_internal.TF_SetAttrTensorShapeProto
def TF_SetAttrTensorShapeProtoList(desc, attr_name, protos, proto_lens, num_shapes):
return _pywrap_tensorflow_internal.TF_SetAttrTensorShapeProtoList(desc, attr_name, protos, proto_lens, num_shapes)
TF_SetAttrTensorShapeProtoList = _pywrap_tensorflow_internal.TF_SetAttrTensorShapeProtoList
def TF_SetAttrTensor(desc, attr_name, value):
return _pywrap_tensorflow_internal.TF_SetAttrTensor(desc, attr_name, value)
TF_SetAttrTensor = _pywrap_tensorflow_internal.TF_SetAttrTensor
def TF_SetAttrTensorList(desc, attr_name, values, num_values):
return _pywrap_tensorflow_internal.TF_SetAttrTensorList(desc, attr_name, values, num_values)
TF_SetAttrTensorList = _pywrap_tensorflow_internal.TF_SetAttrTensorList
def TF_SetAttrValueProto(desc, attr_name, proto):
return _pywrap_tensorflow_internal.TF_SetAttrValueProto(desc, attr_name, proto)
TF_SetAttrValueProto = _pywrap_tensorflow_internal.TF_SetAttrValueProto
def TF_FinishOperation(desc):
return _pywrap_tensorflow_internal.TF_FinishOperation(desc)
TF_FinishOperation = _pywrap_tensorflow_internal.TF_FinishOperation
def TF_OperationName(oper):
return _pywrap_tensorflow_internal.TF_OperationName(oper)
TF_OperationName = _pywrap_tensorflow_internal.TF_OperationName
def TF_OperationOpType(oper):
return _pywrap_tensorflow_internal.TF_OperationOpType(oper)
TF_OperationOpType = _pywrap_tensorflow_internal.TF_OperationOpType
def TF_OperationDevice(oper):
return _pywrap_tensorflow_internal.TF_OperationDevice(oper)
TF_OperationDevice = _pywrap_tensorflow_internal.TF_OperationDevice
def TF_OperationNumOutputs(oper):
return _pywrap_tensorflow_internal.TF_OperationNumOutputs(oper)
TF_OperationNumOutputs = _pywrap_tensorflow_internal.TF_OperationNumOutputs
def TF_OperationOutputType(oper_out):
return _pywrap_tensorflow_internal.TF_OperationOutputType(oper_out)
TF_OperationOutputType = _pywrap_tensorflow_internal.TF_OperationOutputType
def TF_OperationOutputListLength(oper, arg_name):
return _pywrap_tensorflow_internal.TF_OperationOutputListLength(oper, arg_name)
TF_OperationOutputListLength = _pywrap_tensorflow_internal.TF_OperationOutputListLength
def TF_OperationNumInputs(oper):
return _pywrap_tensorflow_internal.TF_OperationNumInputs(oper)
TF_OperationNumInputs = _pywrap_tensorflow_internal.TF_OperationNumInputs
def TF_OperationInputType(oper_in):
return _pywrap_tensorflow_internal.TF_OperationInputType(oper_in)
TF_OperationInputType = _pywrap_tensorflow_internal.TF_OperationInputType
def TF_OperationInputListLength(oper, arg_name):
return _pywrap_tensorflow_internal.TF_OperationInputListLength(oper, arg_name)
TF_OperationInputListLength = _pywrap_tensorflow_internal.TF_OperationInputListLength
def TF_OperationInput(oper_in):
return _pywrap_tensorflow_internal.TF_OperationInput(oper_in)
TF_OperationInput = _pywrap_tensorflow_internal.TF_OperationInput
def TF_OperationOutputNumConsumers(oper_out):
return _pywrap_tensorflow_internal.TF_OperationOutputNumConsumers(oper_out)
TF_OperationOutputNumConsumers = _pywrap_tensorflow_internal.TF_OperationOutputNumConsumers
def TF_OperationNumControlInputs(oper):
return _pywrap_tensorflow_internal.TF_OperationNumControlInputs(oper)
TF_OperationNumControlInputs = _pywrap_tensorflow_internal.TF_OperationNumControlInputs
def TF_OperationNumControlOutputs(oper):
return _pywrap_tensorflow_internal.TF_OperationNumControlOutputs(oper)
TF_OperationNumControlOutputs = _pywrap_tensorflow_internal.TF_OperationNumControlOutputs
_pywrap_tensorflow_internal.TF_ATTR_STRING_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_STRING = _pywrap_tensorflow_internal.TF_ATTR_STRING
_pywrap_tensorflow_internal.TF_ATTR_INT_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_INT = _pywrap_tensorflow_internal.TF_ATTR_INT
_pywrap_tensorflow_internal.TF_ATTR_FLOAT_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_FLOAT = _pywrap_tensorflow_internal.TF_ATTR_FLOAT
_pywrap_tensorflow_internal.TF_ATTR_BOOL_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_BOOL = _pywrap_tensorflow_internal.TF_ATTR_BOOL
_pywrap_tensorflow_internal.TF_ATTR_TYPE_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_TYPE = _pywrap_tensorflow_internal.TF_ATTR_TYPE
_pywrap_tensorflow_internal.TF_ATTR_SHAPE_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_SHAPE = _pywrap_tensorflow_internal.TF_ATTR_SHAPE
_pywrap_tensorflow_internal.TF_ATTR_TENSOR_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_TENSOR = _pywrap_tensorflow_internal.TF_ATTR_TENSOR
_pywrap_tensorflow_internal.TF_ATTR_PLACEHOLDER_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_PLACEHOLDER = _pywrap_tensorflow_internal.TF_ATTR_PLACEHOLDER
_pywrap_tensorflow_internal.TF_ATTR_FUNC_swigconstant(_pywrap_tensorflow_internal)
TF_ATTR_FUNC = _pywrap_tensorflow_internal.TF_ATTR_FUNC
class TF_AttrMetadata(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, TF_AttrMetadata, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, TF_AttrMetadata, name)
__repr__ = _swig_repr
__swig_setmethods__["is_list"] = _pywrap_tensorflow_internal.TF_AttrMetadata_is_list_set
__swig_getmethods__["is_list"] = _pywrap_tensorflow_internal.TF_AttrMetadata_is_list_get
if _newclass:
is_list = _swig_property(_pywrap_tensorflow_internal.TF_AttrMetadata_is_list_get, _pywrap_tensorflow_internal.TF_AttrMetadata_is_list_set)
__swig_setmethods__["list_size"] = _pywrap_tensorflow_internal.TF_AttrMetadata_list_size_set
__swig_getmethods__["list_size"] = _pywrap_tensorflow_internal.TF_AttrMetadata_list_size_get
if _newclass:
list_size = _swig_property(_pywrap_tensorflow_internal.TF_AttrMetadata_list_size_get, _pywrap_tensorflow_internal.TF_AttrMetadata_list_size_set)
__swig_setmethods__["type"] = _pywrap_tensorflow_internal.TF_AttrMetadata_type_set
__swig_getmethods__["type"] = _pywrap_tensorflow_internal.TF_AttrMetadata_type_get
if _newclass:
type = _swig_property(_pywrap_tensorflow_internal.TF_AttrMetadata_type_get, _pywrap_tensorflow_internal.TF_AttrMetadata_type_set)
__swig_setmethods__["total_size"] = _pywrap_tensorflow_internal.TF_AttrMetadata_total_size_set
__swig_getmethods__["total_size"] = _pywrap_tensorflow_internal.TF_AttrMetadata_total_size_get
if _newclass:
total_size = _swig_property(_pywrap_tensorflow_internal.TF_AttrMetadata_total_size_get, _pywrap_tensorflow_internal.TF_AttrMetadata_total_size_set)
def __init__(self):
this = _pywrap_tensorflow_internal.new_TF_AttrMetadata()
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_TF_AttrMetadata
__del__ = lambda self: None
TF_AttrMetadata_swigregister = _pywrap_tensorflow_internal.TF_AttrMetadata_swigregister
TF_AttrMetadata_swigregister(TF_AttrMetadata)
def TF_OperationGetAttrMetadata(oper, attr_name):
return _pywrap_tensorflow_internal.TF_OperationGetAttrMetadata(oper, attr_name)
TF_OperationGetAttrMetadata = _pywrap_tensorflow_internal.TF_OperationGetAttrMetadata
def TF_OperationGetAttrString(oper, attr_name, value, max_length):
return _pywrap_tensorflow_internal.TF_OperationGetAttrString(oper, attr_name, value, max_length)
TF_OperationGetAttrString = _pywrap_tensorflow_internal.TF_OperationGetAttrString
def TF_OperationGetAttrStringList(oper, attr_name, values, lengths, max_values, storage, storage_size):
return _pywrap_tensorflow_internal.TF_OperationGetAttrStringList(oper, attr_name, values, lengths, max_values, storage, storage_size)
TF_OperationGetAttrStringList = _pywrap_tensorflow_internal.TF_OperationGetAttrStringList
def TF_OperationGetAttrInt(oper, attr_name, value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrInt(oper, attr_name, value)
TF_OperationGetAttrInt = _pywrap_tensorflow_internal.TF_OperationGetAttrInt
def TF_OperationGetAttrIntList(oper, attr_name, values, max_values):
return _pywrap_tensorflow_internal.TF_OperationGetAttrIntList(oper, attr_name, values, max_values)
TF_OperationGetAttrIntList = _pywrap_tensorflow_internal.TF_OperationGetAttrIntList
def TF_OperationGetAttrFloat(oper, attr_name, value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrFloat(oper, attr_name, value)
TF_OperationGetAttrFloat = _pywrap_tensorflow_internal.TF_OperationGetAttrFloat
def TF_OperationGetAttrFloatList(oper, attr_name, values, max_values):
return _pywrap_tensorflow_internal.TF_OperationGetAttrFloatList(oper, attr_name, values, max_values)
TF_OperationGetAttrFloatList = _pywrap_tensorflow_internal.TF_OperationGetAttrFloatList
def TF_OperationGetAttrBool(oper, attr_name, value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrBool(oper, attr_name, value)
TF_OperationGetAttrBool = _pywrap_tensorflow_internal.TF_OperationGetAttrBool
def TF_OperationGetAttrBoolList(oper, attr_name, values, max_values):
return _pywrap_tensorflow_internal.TF_OperationGetAttrBoolList(oper, attr_name, values, max_values)
TF_OperationGetAttrBoolList = _pywrap_tensorflow_internal.TF_OperationGetAttrBoolList
def TF_OperationGetAttrType(oper, attr_name, value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrType(oper, attr_name, value)
TF_OperationGetAttrType = _pywrap_tensorflow_internal.TF_OperationGetAttrType
def TF_OperationGetAttrTypeList(oper, attr_name, values, max_values):
return _pywrap_tensorflow_internal.TF_OperationGetAttrTypeList(oper, attr_name, values, max_values)
TF_OperationGetAttrTypeList = _pywrap_tensorflow_internal.TF_OperationGetAttrTypeList
def TF_OperationGetAttrShape(oper, attr_name, value, num_dims):
return _pywrap_tensorflow_internal.TF_OperationGetAttrShape(oper, attr_name, value, num_dims)
TF_OperationGetAttrShape = _pywrap_tensorflow_internal.TF_OperationGetAttrShape
def TF_OperationGetAttrShapeList(oper, attr_name, dims, num_dims, num_shapes, storage, storage_size):
return _pywrap_tensorflow_internal.TF_OperationGetAttrShapeList(oper, attr_name, dims, num_dims, num_shapes, storage, storage_size)
TF_OperationGetAttrShapeList = _pywrap_tensorflow_internal.TF_OperationGetAttrShapeList
def TF_OperationGetAttrTensorShapeProto(oper, attr_name, value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrTensorShapeProto(oper, attr_name, value)
TF_OperationGetAttrTensorShapeProto = _pywrap_tensorflow_internal.TF_OperationGetAttrTensorShapeProto
def TF_OperationGetAttrTensorShapeProtoList(oper, attr_name, values, max_values):
return _pywrap_tensorflow_internal.TF_OperationGetAttrTensorShapeProtoList(oper, attr_name, values, max_values)
TF_OperationGetAttrTensorShapeProtoList = _pywrap_tensorflow_internal.TF_OperationGetAttrTensorShapeProtoList
def TF_OperationGetAttrTensor(oper, attr_name, value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrTensor(oper, attr_name, value)
TF_OperationGetAttrTensor = _pywrap_tensorflow_internal.TF_OperationGetAttrTensor
def TF_OperationGetAttrTensorList(oper, attr_name, values, max_values):
return _pywrap_tensorflow_internal.TF_OperationGetAttrTensorList(oper, attr_name, values, max_values)
TF_OperationGetAttrTensorList = _pywrap_tensorflow_internal.TF_OperationGetAttrTensorList
def TF_OperationGetAttrValueProto(oper, attr_name, output_attr_value):
return _pywrap_tensorflow_internal.TF_OperationGetAttrValueProto(oper, attr_name, output_attr_value)
TF_OperationGetAttrValueProto = _pywrap_tensorflow_internal.TF_OperationGetAttrValueProto
def TF_GraphOperationByName(graph, oper_name):
return _pywrap_tensorflow_internal.TF_GraphOperationByName(graph, oper_name)
TF_GraphOperationByName = _pywrap_tensorflow_internal.TF_GraphOperationByName
def TF_GraphNextOperation(graph, pos):
return _pywrap_tensorflow_internal.TF_GraphNextOperation(graph, pos)
TF_GraphNextOperation = _pywrap_tensorflow_internal.TF_GraphNextOperation
def TF_GraphToGraphDef(graph, output_graph_def):
return _pywrap_tensorflow_internal.TF_GraphToGraphDef(graph, output_graph_def)
TF_GraphToGraphDef = _pywrap_tensorflow_internal.TF_GraphToGraphDef
def TF_GraphGetOpDef(graph, op_name, output_op_def):
return _pywrap_tensorflow_internal.TF_GraphGetOpDef(graph, op_name, output_op_def)
TF_GraphGetOpDef = _pywrap_tensorflow_internal.TF_GraphGetOpDef
def TF_GraphVersions(graph, output_version_def):
return _pywrap_tensorflow_internal.TF_GraphVersions(graph, output_version_def)
TF_GraphVersions = _pywrap_tensorflow_internal.TF_GraphVersions
def TF_NewImportGraphDefOptions():
return _pywrap_tensorflow_internal.TF_NewImportGraphDefOptions()
TF_NewImportGraphDefOptions = _pywrap_tensorflow_internal.TF_NewImportGraphDefOptions
def TF_DeleteImportGraphDefOptions(opts):
return _pywrap_tensorflow_internal.TF_DeleteImportGraphDefOptions(opts)
TF_DeleteImportGraphDefOptions = _pywrap_tensorflow_internal.TF_DeleteImportGraphDefOptions
def TF_ImportGraphDefOptionsSetPrefix(opts, prefix):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsSetPrefix(opts, prefix)
TF_ImportGraphDefOptionsSetPrefix = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsSetPrefix
def TF_ImportGraphDefOptionsSetUniquifyNames(opts, uniquify_names):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsSetUniquifyNames(opts, uniquify_names)
TF_ImportGraphDefOptionsSetUniquifyNames = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsSetUniquifyNames
def TF_ImportGraphDefOptionsSetUniquifyPrefix(opts, uniquify_prefix):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsSetUniquifyPrefix(opts, uniquify_prefix)
TF_ImportGraphDefOptionsSetUniquifyPrefix = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsSetUniquifyPrefix
def TF_ImportGraphDefOptionsAddInputMapping(opts, src_name, src_index, dst):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddInputMapping(opts, src_name, src_index, dst)
TF_ImportGraphDefOptionsAddInputMapping = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddInputMapping
def TF_ImportGraphDefOptionsRemapControlDependency(opts, src_name, dst):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsRemapControlDependency(opts, src_name, dst)
TF_ImportGraphDefOptionsRemapControlDependency = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsRemapControlDependency
def TF_ImportGraphDefOptionsAddControlDependency(opts, oper):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddControlDependency(opts, oper)
TF_ImportGraphDefOptionsAddControlDependency = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddControlDependency
def TF_ImportGraphDefOptionsAddReturnOutput(opts, oper_name, index):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddReturnOutput(opts, oper_name, index)
TF_ImportGraphDefOptionsAddReturnOutput = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddReturnOutput
def TF_ImportGraphDefOptionsNumReturnOutputs(opts):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsNumReturnOutputs(opts)
TF_ImportGraphDefOptionsNumReturnOutputs = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsNumReturnOutputs
def TF_ImportGraphDefOptionsAddReturnOperation(opts, oper_name):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddReturnOperation(opts, oper_name)
TF_ImportGraphDefOptionsAddReturnOperation = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsAddReturnOperation
def TF_ImportGraphDefOptionsNumReturnOperations(opts):
return _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsNumReturnOperations(opts)
TF_ImportGraphDefOptionsNumReturnOperations = _pywrap_tensorflow_internal.TF_ImportGraphDefOptionsNumReturnOperations
def TF_ImportGraphDefResultsReturnOutputs(results):
return _pywrap_tensorflow_internal.TF_ImportGraphDefResultsReturnOutputs(results)
TF_ImportGraphDefResultsReturnOutputs = _pywrap_tensorflow_internal.TF_ImportGraphDefResultsReturnOutputs
def TF_ImportGraphDefResultsReturnOperations(results):
return _pywrap_tensorflow_internal.TF_ImportGraphDefResultsReturnOperations(results)
TF_ImportGraphDefResultsReturnOperations = _pywrap_tensorflow_internal.TF_ImportGraphDefResultsReturnOperations
def TF_DeleteImportGraphDefResults(results):
return _pywrap_tensorflow_internal.TF_DeleteImportGraphDefResults(results)
TF_DeleteImportGraphDefResults = _pywrap_tensorflow_internal.TF_DeleteImportGraphDefResults
def TF_GraphImportGraphDefWithResults(graph, graph_def, options):
return _pywrap_tensorflow_internal.TF_GraphImportGraphDefWithResults(graph, graph_def, options)
TF_GraphImportGraphDefWithResults = _pywrap_tensorflow_internal.TF_GraphImportGraphDefWithResults
def TF_GraphImportGraphDefWithReturnOutputs(graph, graph_def, options, return_outputs, num_return_outputs):
return _pywrap_tensorflow_internal.TF_GraphImportGraphDefWithReturnOutputs(graph, graph_def, options, return_outputs, num_return_outputs)
TF_GraphImportGraphDefWithReturnOutputs = _pywrap_tensorflow_internal.TF_GraphImportGraphDefWithReturnOutputs
def TF_GraphImportGraphDef(graph, graph_def, options):
return _pywrap_tensorflow_internal.TF_GraphImportGraphDef(graph, graph_def, options)
TF_GraphImportGraphDef = _pywrap_tensorflow_internal.TF_GraphImportGraphDef
def TF_GraphCopyFunction(g, func, grad):
return _pywrap_tensorflow_internal.TF_GraphCopyFunction(g, func, grad)
TF_GraphCopyFunction = _pywrap_tensorflow_internal.TF_GraphCopyFunction
def TF_GraphNumFunctions(g):
return _pywrap_tensorflow_internal.TF_GraphNumFunctions(g)
TF_GraphNumFunctions = _pywrap_tensorflow_internal.TF_GraphNumFunctions
def TF_GraphGetFunctions(g, funcs, max_func):
return _pywrap_tensorflow_internal.TF_GraphGetFunctions(g, funcs, max_func)
TF_GraphGetFunctions = _pywrap_tensorflow_internal.TF_GraphGetFunctions
def TF_OperationToNodeDef(oper, output_node_def):
return _pywrap_tensorflow_internal.TF_OperationToNodeDef(oper, output_node_def)
TF_OperationToNodeDef = _pywrap_tensorflow_internal.TF_OperationToNodeDef
def TF_AddGradients(g, y, ny, x, nx, dx, dy):
return _pywrap_tensorflow_internal.TF_AddGradients(g, y, ny, x, nx, dx, dy)
TF_AddGradients = _pywrap_tensorflow_internal.TF_AddGradients
def TF_AddGradientsWithPrefix(g, prefix, y, ny, x, nx, dx, dy):
return _pywrap_tensorflow_internal.TF_AddGradientsWithPrefix(g, prefix, y, ny, x, nx, dx, dy)
TF_AddGradientsWithPrefix = _pywrap_tensorflow_internal.TF_AddGradientsWithPrefix
def TF_GraphToFunction(fn_body, fn_name, append_hash_to_fn_name, num_opers, opers, ninputs, inputs, noutputs, outputs, output_names, opts, description):
return _pywrap_tensorflow_internal.TF_GraphToFunction(fn_body, fn_name, append_hash_to_fn_name, num_opers, opers, ninputs, inputs, noutputs, outputs, output_names, opts, description)
TF_GraphToFunction = _pywrap_tensorflow_internal.TF_GraphToFunction
def TF_FunctionName(func):
return _pywrap_tensorflow_internal.TF_FunctionName(func)
TF_FunctionName = _pywrap_tensorflow_internal.TF_FunctionName
def TF_FunctionToFunctionDef(func, output_func_def):
return _pywrap_tensorflow_internal.TF_FunctionToFunctionDef(func, output_func_def)
TF_FunctionToFunctionDef = _pywrap_tensorflow_internal.TF_FunctionToFunctionDef
def TF_FunctionImportFunctionDef(proto):
return _pywrap_tensorflow_internal.TF_FunctionImportFunctionDef(proto)
TF_FunctionImportFunctionDef = _pywrap_tensorflow_internal.TF_FunctionImportFunctionDef
def TF_FunctionSetAttrValueProto(func, attr_name, proto):
return _pywrap_tensorflow_internal.TF_FunctionSetAttrValueProto(func, attr_name, proto)
TF_FunctionSetAttrValueProto = _pywrap_tensorflow_internal.TF_FunctionSetAttrValueProto
def TF_FunctionGetAttrValueProto(func, attr_name, output_attr_value):
return _pywrap_tensorflow_internal.TF_FunctionGetAttrValueProto(func, attr_name, output_attr_value)
TF_FunctionGetAttrValueProto = _pywrap_tensorflow_internal.TF_FunctionGetAttrValueProto
def TF_DeleteFunction(func):
return _pywrap_tensorflow_internal.TF_DeleteFunction(func)
TF_DeleteFunction = _pywrap_tensorflow_internal.TF_DeleteFunction
def TF_TryEvaluateConstant(graph, output, result):
return _pywrap_tensorflow_internal.TF_TryEvaluateConstant(graph, output, result)
TF_TryEvaluateConstant = _pywrap_tensorflow_internal.TF_TryEvaluateConstant
def TF_NewSession(graph, opts):
return _pywrap_tensorflow_internal.TF_NewSession(graph, opts)
TF_NewSession = _pywrap_tensorflow_internal.TF_NewSession
def TF_LoadSessionFromSavedModel(session_options, run_options, export_dir, tags, tags_len, graph, meta_graph_def):
return _pywrap_tensorflow_internal.TF_LoadSessionFromSavedModel(session_options, run_options, export_dir, tags, tags_len, graph, meta_graph_def)
TF_LoadSessionFromSavedModel = _pywrap_tensorflow_internal.TF_LoadSessionFromSavedModel
def TF_CloseSession(arg1):
return _pywrap_tensorflow_internal.TF_CloseSession(arg1)
TF_CloseSession = _pywrap_tensorflow_internal.TF_CloseSession
def TF_DeleteSession(arg1):
return _pywrap_tensorflow_internal.TF_DeleteSession(arg1)
TF_DeleteSession = _pywrap_tensorflow_internal.TF_DeleteSession
def TF_DeletePRunHandle(handle):
return _pywrap_tensorflow_internal.TF_DeletePRunHandle(handle)
TF_DeletePRunHandle = _pywrap_tensorflow_internal.TF_DeletePRunHandle
def TF_NewDeprecatedSession(arg1):
return _pywrap_tensorflow_internal.TF_NewDeprecatedSession(arg1)
TF_NewDeprecatedSession = _pywrap_tensorflow_internal.TF_NewDeprecatedSession
def TF_CloseDeprecatedSession(arg1):
return _pywrap_tensorflow_internal.TF_CloseDeprecatedSession(arg1)
TF_CloseDeprecatedSession = _pywrap_tensorflow_internal.TF_CloseDeprecatedSession
def TF_DeleteDeprecatedSession(arg1):
return _pywrap_tensorflow_internal.TF_DeleteDeprecatedSession(arg1)
TF_DeleteDeprecatedSession = _pywrap_tensorflow_internal.TF_DeleteDeprecatedSession
def TF_Reset(opt, containers, ncontainers):
return _pywrap_tensorflow_internal.TF_Reset(opt, containers, ncontainers)
TF_Reset = _pywrap_tensorflow_internal.TF_Reset
def TF_ExtendGraph(arg1, proto, arg3):
return _pywrap_tensorflow_internal.TF_ExtendGraph(arg1, proto, arg3)
TF_ExtendGraph = _pywrap_tensorflow_internal.TF_ExtendGraph
def TF_SessionListDevices(session):
return _pywrap_tensorflow_internal.TF_SessionListDevices(session)
TF_SessionListDevices = _pywrap_tensorflow_internal.TF_SessionListDevices
def TF_DeprecatedSessionListDevices(session):
return _pywrap_tensorflow_internal.TF_DeprecatedSessionListDevices(session)
TF_DeprecatedSessionListDevices = _pywrap_tensorflow_internal.TF_DeprecatedSessionListDevices
def TF_DeleteDeviceList(list):
return _pywrap_tensorflow_internal.TF_DeleteDeviceList(list)
TF_DeleteDeviceList = _pywrap_tensorflow_internal.TF_DeleteDeviceList
def TF_DeviceListCount(list):
return _pywrap_tensorflow_internal.TF_DeviceListCount(list)
TF_DeviceListCount = _pywrap_tensorflow_internal.TF_DeviceListCount
def TF_DeviceListName(list, index):
return _pywrap_tensorflow_internal.TF_DeviceListName(list, index)
TF_DeviceListName = _pywrap_tensorflow_internal.TF_DeviceListName
def TF_DeviceListType(list, index):
return _pywrap_tensorflow_internal.TF_DeviceListType(list, index)
TF_DeviceListType = _pywrap_tensorflow_internal.TF_DeviceListType
def TF_DeviceListMemoryBytes(list, index):
return _pywrap_tensorflow_internal.TF_DeviceListMemoryBytes(list, index)
TF_DeviceListMemoryBytes = _pywrap_tensorflow_internal.TF_DeviceListMemoryBytes
def TF_DeviceListIncarnation(list, index):
return _pywrap_tensorflow_internal.TF_DeviceListIncarnation(list, index)
TF_DeviceListIncarnation = _pywrap_tensorflow_internal.TF_DeviceListIncarnation
def TF_LoadLibrary(library_filename):
return _pywrap_tensorflow_internal.TF_LoadLibrary(library_filename)
TF_LoadLibrary = _pywrap_tensorflow_internal.TF_LoadLibrary
def TF_GetOpList(lib_handle):
return _pywrap_tensorflow_internal.TF_GetOpList(lib_handle)
TF_GetOpList = _pywrap_tensorflow_internal.TF_GetOpList
def TF_DeleteLibraryHandle(lib_handle):
return _pywrap_tensorflow_internal.TF_DeleteLibraryHandle(lib_handle)
TF_DeleteLibraryHandle = _pywrap_tensorflow_internal.TF_DeleteLibraryHandle
def TF_GetAllOpList():
return _pywrap_tensorflow_internal.TF_GetAllOpList()
TF_GetAllOpList = _pywrap_tensorflow_internal.TF_GetAllOpList
def TF_NewApiDefMap(op_list_buffer):
return _pywrap_tensorflow_internal.TF_NewApiDefMap(op_list_buffer)
TF_NewApiDefMap = _pywrap_tensorflow_internal.TF_NewApiDefMap
def TF_DeleteApiDefMap(apimap):
return _pywrap_tensorflow_internal.TF_DeleteApiDefMap(apimap)
TF_DeleteApiDefMap = _pywrap_tensorflow_internal.TF_DeleteApiDefMap
def TF_ApiDefMapPut(api_def_map, text, text_len):
return _pywrap_tensorflow_internal.TF_ApiDefMapPut(api_def_map, text, text_len)
TF_ApiDefMapPut = _pywrap_tensorflow_internal.TF_ApiDefMapPut
def TF_ApiDefMapGet(api_def_map, name, name_len):
return _pywrap_tensorflow_internal.TF_ApiDefMapGet(api_def_map, name, name_len)
TF_ApiDefMapGet = _pywrap_tensorflow_internal.TF_ApiDefMapGet
def TF_GetAllRegisteredKernels():
return _pywrap_tensorflow_internal.TF_GetAllRegisteredKernels()
TF_GetAllRegisteredKernels = _pywrap_tensorflow_internal.TF_GetAllRegisteredKernels
def TF_GetRegisteredKernelsForOp(name):
return _pywrap_tensorflow_internal.TF_GetRegisteredKernelsForOp(name)
TF_GetRegisteredKernelsForOp = _pywrap_tensorflow_internal.TF_GetRegisteredKernelsForOp
def AddControlInput(graph, op, input):
return _pywrap_tensorflow_internal.AddControlInput(graph, op, input)
AddControlInput = _pywrap_tensorflow_internal.AddControlInput
def SetAttr(graph, op, attr_name, attr_value_proto):
return _pywrap_tensorflow_internal.SetAttr(graph, op, attr_name, attr_value_proto)
SetAttr = _pywrap_tensorflow_internal.SetAttr
def SetRequestedDevice(graph, op, device):
return _pywrap_tensorflow_internal.SetRequestedDevice(graph, op, device)
SetRequestedDevice = _pywrap_tensorflow_internal.SetRequestedDevice
def UpdateEdge(graph, new_src, dst):
return _pywrap_tensorflow_internal.UpdateEdge(graph, new_src, dst)
UpdateEdge = _pywrap_tensorflow_internal.UpdateEdge
def RemoveAllControlInputs(graph, op):
return _pywrap_tensorflow_internal.RemoveAllControlInputs(graph, op)
RemoveAllControlInputs = _pywrap_tensorflow_internal.RemoveAllControlInputs
def SetRequireShapeInferenceFns(graph, require):
return _pywrap_tensorflow_internal.SetRequireShapeInferenceFns(graph, require)
SetRequireShapeInferenceFns = _pywrap_tensorflow_internal.SetRequireShapeInferenceFns
def ExtendSession(session):
return _pywrap_tensorflow_internal.ExtendSession(session)
ExtendSession = _pywrap_tensorflow_internal.ExtendSession
def GetHandleShapeAndType(graph, output):
return _pywrap_tensorflow_internal.GetHandleShapeAndType(graph, output)
GetHandleShapeAndType = _pywrap_tensorflow_internal.GetHandleShapeAndType
def SetHandleShapeAndType(graph, output, proto):
return _pywrap_tensorflow_internal.SetHandleShapeAndType(graph, output, proto)
SetHandleShapeAndType = _pywrap_tensorflow_internal.SetHandleShapeAndType
def TF_NewSessionOptions(target=None, config=None):
# NOTE: target and config are validated in the session constructor.
opts = _TF_NewSessionOptions()
if target is not None:
_TF_SetTarget(opts, target)
if config is not None:
from tensorflow.python.framework import errors
config_str = config.SerializeToString()
_TF_SetConfig(opts, config_str)
return opts
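# Illustrative usage (not part of the generated wrapper; `config_proto` below is an
# assumed tensorflow ConfigProto instance): build options for a session and free
# them once the session has been created.
# opts = TF_NewSessionOptions(target="", config=config_proto)
# sess = TF_NewSession(graph, opts)  # `graph` is an assumed TF_Graph handle
# TF_DeleteSessionOptions(opts)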
def TF_Reset(target, containers=None, config=None):
from tensorflow.python.framework import errors
opts = TF_NewSessionOptions(target=target, config=config)
try:
with errors.raise_exception_on_not_ok_status() as status:
TF_Reset_wrapper(opts, containers, status)
finally:
TF_DeleteSessionOptions(opts)
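# Illustrative call (container name is an assumption): clear resources held in a
# named container on the default in-process target.
# TF_Reset(target="", containers=["my_container"])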
def TF_NewSessionRef(graph, opts):
return _pywrap_tensorflow_internal.TF_NewSessionRef(graph, opts)
TF_NewSessionRef = _pywrap_tensorflow_internal.TF_NewSessionRef
def TF_Run(session, run_options, feed_dict, output_names, target_nodes, out_status, run_outputs):
return _pywrap_tensorflow_internal.TF_Run(session, run_options, feed_dict, output_names, target_nodes, out_status, run_outputs)
TF_Run = _pywrap_tensorflow_internal.TF_Run
def TF_DeprecatedSessionMakeCallable(session, callable_options, out_status):
return _pywrap_tensorflow_internal.TF_DeprecatedSessionMakeCallable(session, callable_options, out_status)
TF_DeprecatedSessionMakeCallable = _pywrap_tensorflow_internal.TF_DeprecatedSessionMakeCallable
def TF_SessionMakeCallable(session, callable_options, out_status):
return _pywrap_tensorflow_internal.TF_SessionMakeCallable(session, callable_options, out_status)
TF_SessionMakeCallable = _pywrap_tensorflow_internal.TF_SessionMakeCallable
def TF_DeprecatedSessionRunCallable(session, handle, feed_values, out_status, run_metadata):
return _pywrap_tensorflow_internal.TF_DeprecatedSessionRunCallable(session, handle, feed_values, out_status, run_metadata)
TF_DeprecatedSessionRunCallable = _pywrap_tensorflow_internal.TF_DeprecatedSessionRunCallable
def TF_SessionRunCallable(session, handle, feed_values, out_status, run_metadata):
return _pywrap_tensorflow_internal.TF_SessionRunCallable(session, handle, feed_values, out_status, run_metadata)
TF_SessionRunCallable = _pywrap_tensorflow_internal.TF_SessionRunCallable
def TF_DeprecatedSessionReleaseCallable(session, handle, out_status):
return _pywrap_tensorflow_internal.TF_DeprecatedSessionReleaseCallable(session, handle, out_status)
TF_DeprecatedSessionReleaseCallable = _pywrap_tensorflow_internal.TF_DeprecatedSessionReleaseCallable
def TF_SessionReleaseCallable(session, handle, out_status):
return _pywrap_tensorflow_internal.TF_SessionReleaseCallable(session, handle, out_status)
TF_SessionReleaseCallable = _pywrap_tensorflow_internal.TF_SessionReleaseCallable
def TF_PRunSetup(session, input_names, output_names, target_nodes, out_status):
return _pywrap_tensorflow_internal.TF_PRunSetup(session, input_names, output_names, target_nodes, out_status)
TF_PRunSetup = _pywrap_tensorflow_internal.TF_PRunSetup
def TF_PRun(session, handle, feed_dict, output_names, out_status):
return _pywrap_tensorflow_internal.TF_PRun(session, handle, feed_dict, output_names, out_status)
TF_PRun = _pywrap_tensorflow_internal.TF_PRun
def TF_Reset_wrapper(opt, containers, out_status):
return _pywrap_tensorflow_internal.TF_Reset_wrapper(opt, containers, out_status)
TF_Reset_wrapper = _pywrap_tensorflow_internal.TF_Reset_wrapper
def EqualGraphDefWrapper(actual, expected):
return _pywrap_tensorflow_internal.EqualGraphDefWrapper(actual, expected)
EqualGraphDefWrapper = _pywrap_tensorflow_internal.EqualGraphDefWrapper
def EqualAttrValueWrapper(actual, expected):
return _pywrap_tensorflow_internal.EqualAttrValueWrapper(actual, expected)
EqualAttrValueWrapper = _pywrap_tensorflow_internal.EqualAttrValueWrapper
def TF_GraphGetTensorShapeHelper(graph, output):
return _pywrap_tensorflow_internal.TF_GraphGetTensorShapeHelper(graph, output)
TF_GraphGetTensorShapeHelper = _pywrap_tensorflow_internal.TF_GraphGetTensorShapeHelper
def TF_SessionRun_wrapper(session, run_options, inputs, outputs, targets, run_metadata):
return _pywrap_tensorflow_internal.TF_SessionRun_wrapper(session, run_options, inputs, outputs, targets, run_metadata)
TF_SessionRun_wrapper = _pywrap_tensorflow_internal.TF_SessionRun_wrapper
def TF_SessionPRunSetup_wrapper(session, inputs, outputs, targets):
return _pywrap_tensorflow_internal.TF_SessionPRunSetup_wrapper(session, inputs, outputs, targets)
TF_SessionPRunSetup_wrapper = _pywrap_tensorflow_internal.TF_SessionPRunSetup_wrapper
def TF_SessionPRun_wrapper(session, handle, inputs, outputs):
return _pywrap_tensorflow_internal.TF_SessionPRun_wrapper(session, handle, inputs, outputs)
TF_SessionPRun_wrapper = _pywrap_tensorflow_internal.TF_SessionPRun_wrapper
def GetOperationInputs(oper):
return _pywrap_tensorflow_internal.GetOperationInputs(oper)
GetOperationInputs = _pywrap_tensorflow_internal.GetOperationInputs
def TF_OperationGetControlInputs_wrapper(oper):
return _pywrap_tensorflow_internal.TF_OperationGetControlInputs_wrapper(oper)
TF_OperationGetControlInputs_wrapper = _pywrap_tensorflow_internal.TF_OperationGetControlInputs_wrapper
def TF_OperationGetControlOutputs_wrapper(oper):
return _pywrap_tensorflow_internal.TF_OperationGetControlOutputs_wrapper(oper)
TF_OperationGetControlOutputs_wrapper = _pywrap_tensorflow_internal.TF_OperationGetControlOutputs_wrapper
def TF_OperationOutputConsumers_wrapper(oper_out):
return _pywrap_tensorflow_internal.TF_OperationOutputConsumers_wrapper(oper_out)
TF_OperationOutputConsumers_wrapper = _pywrap_tensorflow_internal.TF_OperationOutputConsumers_wrapper
def TF_GraphToFunction_wrapper(fn_body, fn_name, append_hash_to_fn_name, opers, inputs, outputs, output_names, opts, description):
return _pywrap_tensorflow_internal.TF_GraphToFunction_wrapper(fn_body, fn_name, append_hash_to_fn_name, opers, inputs, outputs, output_names, opts, description)
TF_GraphToFunction_wrapper = _pywrap_tensorflow_internal.TF_GraphToFunction_wrapper
def TF_GraphSetOutputHandleShapesAndTypes_wrapper(graph, output, shapes, ranks, types):
return _pywrap_tensorflow_internal.TF_GraphSetOutputHandleShapesAndTypes_wrapper(graph, output, shapes, ranks, types)
TF_GraphSetOutputHandleShapesAndTypes_wrapper = _pywrap_tensorflow_internal.TF_GraphSetOutputHandleShapesAndTypes_wrapper
def TF_GraphSetTensorShape_wrapper(graph, output, dims, unknown_shape):
return _pywrap_tensorflow_internal.TF_GraphSetTensorShape_wrapper(graph, output, dims, unknown_shape)
TF_GraphSetTensorShape_wrapper = _pywrap_tensorflow_internal.TF_GraphSetTensorShape_wrapper
def TF_ImportGraphDefResultsMissingUnusedInputMappings_wrapper(results):
return _pywrap_tensorflow_internal.TF_ImportGraphDefResultsMissingUnusedInputMappings_wrapper(results)
TF_ImportGraphDefResultsMissingUnusedInputMappings_wrapper = _pywrap_tensorflow_internal.TF_ImportGraphDefResultsMissingUnusedInputMappings_wrapper
def TF_TryEvaluateConstant_wrapper(graph, output):
return _pywrap_tensorflow_internal.TF_TryEvaluateConstant_wrapper(graph, output)
TF_TryEvaluateConstant_wrapper = _pywrap_tensorflow_internal.TF_TryEvaluateConstant_wrapper
def ListDevices(out_status):
return _pywrap_tensorflow_internal.ListDevices(out_status)
ListDevices = _pywrap_tensorflow_internal.ListDevices
def ListDevicesWithSessionConfig(config, out_status):
return _pywrap_tensorflow_internal.ListDevicesWithSessionConfig(config, out_status)
ListDevicesWithSessionConfig = _pywrap_tensorflow_internal.ListDevicesWithSessionConfig
def list_devices(session_config=None):
from tensorflow.python.framework import errors
with errors.raise_exception_on_not_ok_status() as status:
if session_config:
return ListDevicesWithSessionConfig(session_config.SerializeToString(),
status)
else:
return ListDevices(status)
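# Illustrative usage: enumerate devices visible to the local runtime, optionally
# forwarding a session ConfigProto (`my_config` below is an assumed instance).
# devices = list_devices()
# devices = list_devices(session_config=my_config)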
def TF_bfloat16_type():
return _pywrap_tensorflow_internal.TF_bfloat16_type()
TF_bfloat16_type = _pywrap_tensorflow_internal.TF_bfloat16_type
def FileExists(filename, out_status):
return _pywrap_tensorflow_internal.FileExists(filename, out_status)
FileExists = _pywrap_tensorflow_internal.FileExists
def DeleteFile(filename, out_status):
return _pywrap_tensorflow_internal.DeleteFile(filename, out_status)
DeleteFile = _pywrap_tensorflow_internal.DeleteFile
def ReadFileToString(filename, out_status):
return _pywrap_tensorflow_internal.ReadFileToString(filename, out_status)
ReadFileToString = _pywrap_tensorflow_internal.ReadFileToString
def WriteStringToFile(filename, file_content, out_status):
return _pywrap_tensorflow_internal.WriteStringToFile(filename, file_content, out_status)
WriteStringToFile = _pywrap_tensorflow_internal.WriteStringToFile
def GetChildren(dir, out_status):
return _pywrap_tensorflow_internal.GetChildren(dir, out_status)
GetChildren = _pywrap_tensorflow_internal.GetChildren
def GetMatchingFiles(filename, out_status):
return _pywrap_tensorflow_internal.GetMatchingFiles(filename, out_status)
GetMatchingFiles = _pywrap_tensorflow_internal.GetMatchingFiles
def CreateDir(dirname, out_status):
return _pywrap_tensorflow_internal.CreateDir(dirname, out_status)
CreateDir = _pywrap_tensorflow_internal.CreateDir
def RecursivelyCreateDir(dirname, out_status):
return _pywrap_tensorflow_internal.RecursivelyCreateDir(dirname, out_status)
RecursivelyCreateDir = _pywrap_tensorflow_internal.RecursivelyCreateDir
def CopyFile(oldpath, newpath, overwrite, out_status):
return _pywrap_tensorflow_internal.CopyFile(oldpath, newpath, overwrite, out_status)
CopyFile = _pywrap_tensorflow_internal.CopyFile
def RenameFile(oldname, newname, overwrite, out_status):
return _pywrap_tensorflow_internal.RenameFile(oldname, newname, overwrite, out_status)
RenameFile = _pywrap_tensorflow_internal.RenameFile
def DeleteRecursively(dirname, out_status):
return _pywrap_tensorflow_internal.DeleteRecursively(dirname, out_status)
DeleteRecursively = _pywrap_tensorflow_internal.DeleteRecursively
def IsDirectory(dirname, out_status):
return _pywrap_tensorflow_internal.IsDirectory(dirname, out_status)
IsDirectory = _pywrap_tensorflow_internal.IsDirectory
def Stat(filename, stats, out_status):
return _pywrap_tensorflow_internal.Stat(filename, stats, out_status)
Stat = _pywrap_tensorflow_internal.Stat
def CreateBufferedInputStream(filename, buffer_size, out_status):
return _pywrap_tensorflow_internal.CreateBufferedInputStream(filename, buffer_size, out_status)
CreateBufferedInputStream = _pywrap_tensorflow_internal.CreateBufferedInputStream
def CreateWritableFile(filename, mode, out_status):
return _pywrap_tensorflow_internal.CreateWritableFile(filename, mode, out_status)
CreateWritableFile = _pywrap_tensorflow_internal.CreateWritableFile
def AppendToFile(file_content, file, out_status):
return _pywrap_tensorflow_internal.AppendToFile(file_content, file, out_status)
AppendToFile = _pywrap_tensorflow_internal.AppendToFile
def ReadFromStream(stream, bytes, out_status):
return _pywrap_tensorflow_internal.ReadFromStream(stream, bytes, out_status)
ReadFromStream = _pywrap_tensorflow_internal.ReadFromStream
class WritableFile(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, WritableFile, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, WritableFile, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined - class is abstract")
__repr__ = _swig_repr
__swig_destroy__ = _pywrap_tensorflow_internal.delete_WritableFile
__del__ = lambda self: None
def Close(self):
return _pywrap_tensorflow_internal.WritableFile_Close(self)
def Flush(self):
return _pywrap_tensorflow_internal.WritableFile_Flush(self)
WritableFile_swigregister = _pywrap_tensorflow_internal.WritableFile_swigregister
WritableFile_swigregister(WritableFile)
class BufferedInputStream(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, BufferedInputStream, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, BufferedInputStream, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__swig_destroy__ = _pywrap_tensorflow_internal.delete_BufferedInputStream
__del__ = lambda self: None
def Tell(self):
return _pywrap_tensorflow_internal.BufferedInputStream_Tell(self)
def Seek(self, position):
return _pywrap_tensorflow_internal.BufferedInputStream_Seek(self, position)
def ReadLineAsString(self):
return _pywrap_tensorflow_internal.BufferedInputStream_ReadLineAsString(self)
BufferedInputStream_swigregister = _pywrap_tensorflow_internal.BufferedInputStream_swigregister
BufferedInputStream_swigregister(BufferedInputStream)
def Set_TF_Status_from_Status(tf_status, status):
return _pywrap_tensorflow_internal.Set_TF_Status_from_Status(tf_status, status)
Set_TF_Status_from_Status = _pywrap_tensorflow_internal.Set_TF_Status_from_Status
def StatusFromTF_Status(tf_status):
return _pywrap_tensorflow_internal.StatusFromTF_Status(tf_status)
StatusFromTF_Status = _pywrap_tensorflow_internal.StatusFromTF_Status
def IsAbsolutePath(path):
return _pywrap_tensorflow_internal.IsAbsolutePath(path)
IsAbsolutePath = _pywrap_tensorflow_internal.IsAbsolutePath
def Dirname(path):
return _pywrap_tensorflow_internal.Dirname(path)
Dirname = _pywrap_tensorflow_internal.Dirname
def Basename(path):
return _pywrap_tensorflow_internal.Basename(path)
Basename = _pywrap_tensorflow_internal.Basename
def Extension(path):
return _pywrap_tensorflow_internal.Extension(path)
Extension = _pywrap_tensorflow_internal.Extension
def CleanPath(path):
return _pywrap_tensorflow_internal.CleanPath(path)
CleanPath = _pywrap_tensorflow_internal.CleanPath
def ParseURI(uri, scheme, host, path):
return _pywrap_tensorflow_internal.ParseURI(uri, scheme, host, path)
ParseURI = _pywrap_tensorflow_internal.ParseURI
def CreateURI(scheme, host, path):
return _pywrap_tensorflow_internal.CreateURI(scheme, host, path)
CreateURI = _pywrap_tensorflow_internal.CreateURI
def GetTempFilename(extension):
return _pywrap_tensorflow_internal.GetTempFilename(extension)
GetTempFilename = _pywrap_tensorflow_internal.GetTempFilename
class FileStatistics(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, FileStatistics, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, FileStatistics, name)
__repr__ = _swig_repr
__swig_setmethods__["length"] = _pywrap_tensorflow_internal.FileStatistics_length_set
__swig_getmethods__["length"] = _pywrap_tensorflow_internal.FileStatistics_length_get
if _newclass:
length = _swig_property(_pywrap_tensorflow_internal.FileStatistics_length_get, _pywrap_tensorflow_internal.FileStatistics_length_set)
__swig_setmethods__["mtime_nsec"] = _pywrap_tensorflow_internal.FileStatistics_mtime_nsec_set
__swig_getmethods__["mtime_nsec"] = _pywrap_tensorflow_internal.FileStatistics_mtime_nsec_get
if _newclass:
mtime_nsec = _swig_property(_pywrap_tensorflow_internal.FileStatistics_mtime_nsec_get, _pywrap_tensorflow_internal.FileStatistics_mtime_nsec_set)
__swig_setmethods__["is_directory"] = _pywrap_tensorflow_internal.FileStatistics_is_directory_set
__swig_getmethods__["is_directory"] = _pywrap_tensorflow_internal.FileStatistics_is_directory_get
if _newclass:
is_directory = _swig_property(_pywrap_tensorflow_internal.FileStatistics_is_directory_get, _pywrap_tensorflow_internal.FileStatistics_is_directory_set)
def __init__(self, *args):
this = _pywrap_tensorflow_internal.new_FileStatistics(*args)
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_FileStatistics
__del__ = lambda self: None
FileStatistics_swigregister = _pywrap_tensorflow_internal.FileStatistics_swigregister
FileStatistics_swigregister(FileStatistics)
def DoQuantizeTrainingOnGraphDefHelper(input_graph, num_bits, out_status):
return _pywrap_tensorflow_internal.DoQuantizeTrainingOnGraphDefHelper(input_graph, num_bits, out_status)
DoQuantizeTrainingOnGraphDefHelper = _pywrap_tensorflow_internal.DoQuantizeTrainingOnGraphDefHelper
from tensorflow.python.util import deprecation
from tensorflow.python.util.tf_export import tf_export
@deprecation.deprecated(
None,
"GraphDef quantized training rewriter is deprecated in the long term")
@tf_export(v1=["train.do_quantize_training_on_graphdef"])
def do_quantize_training_on_graphdef(input_graph, num_bits):
"""A general quantization scheme is being developed in `tf.contrib.quantize`.
Consider using that instead, though since it is in the tf.contrib namespace,
it is not subject to backward compatibility guarantees.
"""
from tensorflow.core.framework.graph_pb2 import GraphDef
from tensorflow.python.framework import errors
with errors.raise_exception_on_not_ok_status() as status:
graph = GraphDef()
result_graph_string = DoQuantizeTrainingOnGraphDefHelper(
input_graph.SerializeToString(), num_bits, status)
graph.ParseFromString(result_graph_string)
return graph
do_quantize_training_on_graphdef._tf_api_names = [
'train.do_quantize_training_on_graphdef']
do_quantize_training_on_graphdef._tf_api_names_v1 = [
'train.do_quantize_training_on_graphdef']
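# Illustrative usage (assumes `graph_def` is an existing GraphDef proto; 8 bits is
# only an example value): returns a rewritten GraphDef for quantized training.
# rewritten_graph_def = do_quantize_training_on_graphdef(graph_def, num_bits=8)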
def PyServer_New(server_def, out_status):
return _pywrap_tensorflow_internal.PyServer_New(server_def, out_status)
PyServer_New = _pywrap_tensorflow_internal.PyServer_New
def PyServer_Start(in_server, out_status):
return _pywrap_tensorflow_internal.PyServer_Start(in_server, out_status)
PyServer_Start = _pywrap_tensorflow_internal.PyServer_Start
def PyServer_Stop(in_server, out_status):
return _pywrap_tensorflow_internal.PyServer_Stop(in_server, out_status)
PyServer_Stop = _pywrap_tensorflow_internal.PyServer_Stop
def PyServer_Join(in_server, out_status):
return _pywrap_tensorflow_internal.PyServer_Join(in_server, out_status)
PyServer_Join = _pywrap_tensorflow_internal.PyServer_Join
class ServerInterface(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, ServerInterface, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, ServerInterface, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined - class is abstract")
__repr__ = _swig_repr
__swig_destroy__ = _pywrap_tensorflow_internal.delete_ServerInterface
__del__ = lambda self: None
def target(self):
return _pywrap_tensorflow_internal.ServerInterface_target(self)
ServerInterface_swigregister = _pywrap_tensorflow_internal.ServerInterface_swigregister
ServerInterface_swigregister(ServerInterface)
def GetPythonWrappers(op_list_buf):
return _pywrap_tensorflow_internal.GetPythonWrappers(op_list_buf)
GetPythonWrappers = _pywrap_tensorflow_internal.GetPythonWrappers
def RunCppShapeInference(graph_def_version, serialized_node_def, input_serialized_shapes, input_constant_tensor_values, input_constant_tensor_as_shape_values, out_status):
return _pywrap_tensorflow_internal.RunCppShapeInference(graph_def_version, serialized_node_def, input_serialized_shapes, input_constant_tensor_values, input_constant_tensor_as_shape_values, out_status)
RunCppShapeInference = _pywrap_tensorflow_internal.RunCppShapeInference
def InstallStacktraceHandler():
return _pywrap_tensorflow_internal.InstallStacktraceHandler()
InstallStacktraceHandler = _pywrap_tensorflow_internal.InstallStacktraceHandler
def TryFindKernelClass(serialized_node_def):
return _pywrap_tensorflow_internal.TryFindKernelClass(serialized_node_def)
TryFindKernelClass = _pywrap_tensorflow_internal.TryFindKernelClass
def TransformGraphWithStringInputs(graph_def_string, inputs_string, outputs_string, transforms_string, out_status):
return _pywrap_tensorflow_internal.TransformGraphWithStringInputs(graph_def_string, inputs_string, outputs_string, transforms_string, out_status)
TransformGraphWithStringInputs = _pywrap_tensorflow_internal.TransformGraphWithStringInputs
def IsSequence(o):
"""
Returns a true if its input is a collections.Sequence (except strings).
Args:
seq: an input sequence.
Returns:
True if the sequence is a not a string and is a collections.Sequence or a
dict.
"""
return _pywrap_tensorflow_internal.IsSequence(o)
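# Expected behaviour per the docstring above (illustrative):
# IsSequence((1, 2)) -> True
# IsSequence([1, 2]) -> True
# IsSequence({"a": 1}) -> True
# IsSequence("abc") -> False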
def IsNamedtuple(o, strict):
return _pywrap_tensorflow_internal.IsNamedtuple(o, strict)
IsNamedtuple = _pywrap_tensorflow_internal.IsNamedtuple
def IsMapping(o):
"""
Returns True iff `instance` is a `collections.Mapping`.
Args:
instance: An instance of a Python object.
Returns:
True if `instance` is a `collections.Mapping`.
"""
return _pywrap_tensorflow_internal.IsMapping(o)
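# Illustrative: IsMapping({"a": 1}) -> True, IsMapping([("a", 1)]) -> False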
def IsAttrs(o):
"""
Returns True iff `instance` is an instance of an `attr.s` decorated class.
Args:
instance: An instance of a Python object.
Returns:
True if `instance` is an instance of an `attr.s` decorated class.
"""
return _pywrap_tensorflow_internal.IsAttrs(o)
def SameNamedtuples(o1, o2):
"""Returns True if the two namedtuples have the same name and fields."""
return _pywrap_tensorflow_internal.SameNamedtuples(o1, o2)
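# Illustrative (assumes `collections` is imported by the caller; values are ignored,
# only the type name and field names matter):
# Point = collections.namedtuple("Point", ["x", "y"])
# Coord = collections.namedtuple("Coord", ["x", "y"])
# SameNamedtuples(Point(1, 2), Point(3, 4)) -> True
# SameNamedtuples(Point(1, 2), Coord(1, 2)) -> False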
def AssertSameStructure(o1, o2, check_types):
return _pywrap_tensorflow_internal.AssertSameStructure(o1, o2, check_types)
AssertSameStructure = _pywrap_tensorflow_internal.AssertSameStructure
def Flatten(nested):
"""
Returns a flat list from a given nested structure.
If `nest` is not a sequence, tuple, or dict, then returns a single-element
list: `[nest]`.
In the case of dict instances, the sequence consists of the values, sorted by
key to ensure deterministic behavior. This is true also for `OrderedDict`
instances: their sequence order is ignored, the sorting order of keys is
used instead. The same convention is followed in `pack_sequence_as`. This
correctly repacks dicts and `OrderedDict`s after they have been flattened,
and also allows flattening an `OrderedDict` and then repacking it back using
a corresponding plain dict, or vice-versa.
Dictionaries with non-sortable keys cannot be flattened.
Users must not modify any collections used in `nest` while this function is
running.
Args:
nest: an arbitrarily nested structure or a scalar object. Note, numpy
arrays are considered scalars.
Returns:
A Python list, the flattened version of the input.
Raises:
TypeError: The nest is or contains a dict with non-sortable keys.
"""
return _pywrap_tensorflow_internal.Flatten(nested)
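# Illustrative behaviour per the docstring above:
# Flatten([[1, 2], 3]) -> [1, 2, 3]
# Flatten({"b": 2, "a": 1}) -> [1, 2]   # dict values in sorted-key order
# Flatten("hello") -> ["hello"]         # non-nested inputs become one-element lists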
def IsSequenceForData(o):
"""
Returns True if `seq` is a Sequence or dict (except strings/lists).
NOTE(mrry): This differs from `tensorflow.python.util.nest.is_sequence()`,
which *does* treat a Python list as a sequence. For ergonomic
reasons, `tf.data` users would prefer to treat lists as
implicit `tf.Tensor` objects, and dicts as (nested) sequences.
Args:
seq: an input sequence.
Returns:
True if the sequence is not a string or list and is a
collections.Sequence.
"""
return _pywrap_tensorflow_internal.IsSequenceForData(o)
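# Illustrative: lists are deliberately *not* treated as sequences here, since
# tf.data maps them to implicit tensors:
# IsSequenceForData([1, 2, 3]) -> False
# IsSequenceForData((1, 2, 3)) -> True
# IsSequenceForData({"a": 1}) -> True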
def FlattenForData(nested):
"""
Returns a flat sequence from a given nested structure.
If `nest` is not a sequence, this returns a single-element list: `[nest]`.
Args:
nest: an arbitrarily nested structure or a scalar object.
Note, numpy arrays are considered scalars.
Returns:
A Python list, the flattened version of the input.
"""
return _pywrap_tensorflow_internal.FlattenForData(nested)
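# Illustrative (follows the IsSequenceForData convention, so a list stays whole):
# FlattenForData((1, (2, 3))) -> [1, 2, 3]
# FlattenForData([1, 2, 3]) -> [[1, 2, 3]]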
def AssertSameStructureForData(o1, o2, check_types):
return _pywrap_tensorflow_internal.AssertSameStructureForData(o1, o2, check_types)
AssertSameStructureForData = _pywrap_tensorflow_internal.AssertSameStructureForData
def RegisterType(type_name, type):
return _pywrap_tensorflow_internal.RegisterType(type_name, type)
RegisterType = _pywrap_tensorflow_internal.RegisterType
_pywrap_tensorflow_internal.SHARED_PTR_DISOWN_swigconstant(_pywrap_tensorflow_internal)
SHARED_PTR_DISOWN = _pywrap_tensorflow_internal.SHARED_PTR_DISOWN
class GItem(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, GItem, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, GItem, name)
__repr__ = _swig_repr
__swig_setmethods__["item_"] = _pywrap_tensorflow_internal.GItem_item__set
__swig_getmethods__["item_"] = _pywrap_tensorflow_internal.GItem_item__get
if _newclass:
item_ = _swig_property(_pywrap_tensorflow_internal.GItem_item__get, _pywrap_tensorflow_internal.GItem_item__set)
def __init__(self):
this = _pywrap_tensorflow_internal.new_GItem()
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_GItem
__del__ = lambda self: None
GItem_swigregister = _pywrap_tensorflow_internal.GItem_swigregister
GItem_swigregister(GItem)
def TF_NewItem(meta_graph, ignore_colocation, ignore_user_placement, out_status):
return _pywrap_tensorflow_internal.TF_NewItem(meta_graph, ignore_colocation, ignore_user_placement, out_status)
TF_NewItem = _pywrap_tensorflow_internal.TF_NewItem
def TF_IdentifyImportantOps(item, sort_topologically):
return _pywrap_tensorflow_internal.TF_IdentifyImportantOps(item, sort_topologically)
TF_IdentifyImportantOps = _pywrap_tensorflow_internal.TF_IdentifyImportantOps
def TF_GetOpProperties(item):
return _pywrap_tensorflow_internal.TF_GetOpProperties(item)
TF_GetOpProperties = _pywrap_tensorflow_internal.TF_GetOpProperties
def TF_GetColocationGroups(item):
return _pywrap_tensorflow_internal.TF_GetColocationGroups(item)
TF_GetColocationGroups = _pywrap_tensorflow_internal.TF_GetColocationGroups
class GCluster(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, GCluster, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, GCluster, name)
__repr__ = _swig_repr
__swig_setmethods__["cluster_"] = _pywrap_tensorflow_internal.GCluster_cluster__set
__swig_getmethods__["cluster_"] = _pywrap_tensorflow_internal.GCluster_cluster__get
if _newclass:
cluster_ = _swig_property(_pywrap_tensorflow_internal.GCluster_cluster__get, _pywrap_tensorflow_internal.GCluster_cluster__set)
def __init__(self):
this = _pywrap_tensorflow_internal.new_GCluster()
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _pywrap_tensorflow_internal.delete_GCluster
__del__ = lambda self: None
GCluster_swigregister = _pywrap_tensorflow_internal.GCluster_swigregister
GCluster_swigregister(GCluster)
def TF_NewCluster(allow_soft_placement, disable_detailed_stats, out_status):
return _pywrap_tensorflow_internal.TF_NewCluster(allow_soft_placement, disable_detailed_stats, out_status)
TF_NewCluster = _pywrap_tensorflow_internal.TF_NewCluster
def TF_NewVirtualCluster(named_devices, out_status):
return _pywrap_tensorflow_internal.TF_NewVirtualCluster(named_devices, out_status)
TF_NewVirtualCluster = _pywrap_tensorflow_internal.TF_NewVirtualCluster
def TF_ShutdownCluster(cluster):
return _pywrap_tensorflow_internal.TF_ShutdownCluster(cluster)
TF_ShutdownCluster = _pywrap_tensorflow_internal.TF_ShutdownCluster
def TF_ListDevices(cluster):
return _pywrap_tensorflow_internal.TF_ListDevices(cluster)
TF_ListDevices = _pywrap_tensorflow_internal.TF_ListDevices
def TF_ListAvailableOps():
return _pywrap_tensorflow_internal.TF_ListAvailableOps()
TF_ListAvailableOps = _pywrap_tensorflow_internal.TF_ListAvailableOps
def TF_GetSupportedDevices(cluster, item):
return _pywrap_tensorflow_internal.TF_GetSupportedDevices(cluster, item)
TF_GetSupportedDevices = _pywrap_tensorflow_internal.TF_GetSupportedDevices
def TF_EstimatePerformance(device):
return _pywrap_tensorflow_internal.TF_EstimatePerformance(device)
TF_EstimatePerformance = _pywrap_tensorflow_internal.TF_EstimatePerformance
def TF_MeasureCosts(item, cluster, generate_timeline, out_status):
return _pywrap_tensorflow_internal.TF_MeasureCosts(item, cluster, generate_timeline, out_status)
TF_MeasureCosts = _pywrap_tensorflow_internal.TF_MeasureCosts
def TF_DeterminePeakMemoryUsage(item, cluster, out_status):
return _pywrap_tensorflow_internal.TF_DeterminePeakMemoryUsage(item, cluster, out_status)
TF_DeterminePeakMemoryUsage = _pywrap_tensorflow_internal.TF_DeterminePeakMemoryUsage
def TF_OptimizeGraph(cluster, rewriter_config, metagraph, verbose, graph_id, out_status):
return _pywrap_tensorflow_internal.TF_OptimizeGraph(cluster, rewriter_config, metagraph, verbose, graph_id, out_status)
TF_OptimizeGraph = _pywrap_tensorflow_internal.TF_OptimizeGraph
def GenerateCostReport(metagraph, per_node_report, verbose, cluster):
return _pywrap_tensorflow_internal.GenerateCostReport(metagraph, per_node_report, verbose, cluster)
GenerateCostReport = _pywrap_tensorflow_internal.GenerateCostReport
def GraphAnalyzer(file_path, n):
return _pywrap_tensorflow_internal.GraphAnalyzer(file_path, n)
GraphAnalyzer = _pywrap_tensorflow_internal.GraphAnalyzer
def GenerateModelReport(metagraph, assume_valid_feeds, debug):
return _pywrap_tensorflow_internal.GenerateModelReport(metagraph, assume_valid_feeds, debug)
GenerateModelReport = _pywrap_tensorflow_internal.GenerateModelReport
# This file is compatible with both classic and new-style classes.
| 49.143351 | 206 | 0.82933 | 12,333 | 113,816 | 7.057488 | 0.070786 | 0.191912 | 0.287868 | 0.173254 | 0.715085 | 0.514683 | 0.371473 | 0.297783 | 0.223093 | 0.155653 | 0 | 0.001959 | 0.11639 | 113,816 | 2,315 | 207 | 49.164579 | 0.863517 | 0.026385 | 0 | 0.133489 | 1 | 0 | 0.010519 | 0.002246 | 0 | 0 | 0 | 0 | 0.003513 | 1 | 0.232436 | false | 0.001171 | 0.050351 | 0.209602 | 0.596604 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4a2f12a0d7d7917a6cef0ea83a1c7d17b09afe9b | 2,399 | py | Python | tests/libs/measurements/test_measure.py | izm51/obniz-python-sdk | 40a738b5fe2c0a415cdc09f46d28c143982bfb07 | ["MIT"] | 11 | 2019-03-22T12:02:11.000Z | 2021-01-21T04:57:18.000Z | tests/libs/measurements/test_measure.py | izm51/obniz-python-sdk | 40a738b5fe2c0a415cdc09f46d28c143982bfb07 | ["MIT"] | 5 | 2019-03-02T08:28:25.000Z | 2021-02-02T22:06:37.000Z | tests/libs/measurements/test_measure.py | izm51/obniz-python-sdk | 40a738b5fe2c0a415cdc09f46d28c143982bfb07 | ["MIT"] | 3 | 2019-07-20T06:55:09.000Z | 2019-12-04T05:05:00.000Z | from ...utils import assert_finished, assert_send, receive_json
class TestObnizMeasure:
    def test_echo(self, obniz):
        obniz.measure.echo(
            {
                "io_pulse": 1, # io for generate pulse
                "io_echo": 2, # io to be measured
                "pulse": "positive", # generate pulse pattern
                "pulse_width": 0.1, # generate pulse width
                "measure_edges": 3, # 1 to 4. maximum edges to measure
                "timeout": 1000, # self is optional. 1000(1sec) is default
            }
        )
        assert_send(
            obniz,
            [
                {
                    "measure": {
                        "echo": {
                            "io_pulse": 1,
                            "io_echo": 2,
                            "pulse": "positive",
                            "pulse_width": 0.1,
                            "measure_edges": 3,
                            "timeout": 1000,
                        }
                    }
                }
            ],
        )
        assert_finished(obniz)
    def test_echo_response(self, mocker, obniz):
        stub = mocker.stub()
        obniz.measure.echo(
            {
                "io_pulse": 1, # io for generate pulse
                "io_echo": 2, # io to be measured
                "pulse": "positive", # generate pulse pattern
                "pulse_width": 0.1, # generate pulse width
                "measure_edges": 3, # 1 to 4. maximum edges to measure
                "timeout": 1000, # self is optional. 1000(1sec) is default
                "callback": stub,
            }
        )
        assert_send(
            obniz,
            [
                {
                    "measure": {
                        "echo": {
                            "io_pulse": 1,
                            "io_echo": 2,
                            "pulse": "positive",
                            "pulse_width": 0.1,
                            "measure_edges": 3,
                            "timeout": 1000,
                        }
                    }
                }
            ],
        )
        assert_finished(obniz)
        receive_json(obniz, [{"measure": {"echo": [{"edge": True, "timing": 500}]}}])
        assert len(stub.call_args[0]) == 1
        assert stub.call_args[0][0] == [{"edge": True, "timing": 500}]
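# A minimal usage sketch distilled from the two tests above (an assumption for
# illustration, not part of the original test module): "device" stands for a
# connected Obniz object and "on_echo" for a user callback; the parameter names
# and the default noted for "timeout" mirror the payload asserted in the tests.
def measure_echo_once(device, on_echo):
    device.measure.echo(
        {
            "io_pulse": 1,        # io used to generate the pulse
            "io_echo": 2,         # io to be measured
            "pulse": "positive",  # pattern of the generated pulse
            "pulse_width": 0.1,   # width of the generated pulse (0.1 in the tests)
            "measure_edges": 3,   # 1 to 4, maximum edges to measure
            "timeout": 1000,      # optional, 1000 (1 sec) is the default
            "callback": on_echo,  # called with e.g. [{"edge": True, "timing": 500}]
        }
    )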
| 32.863014 | 85 | 0.380992 | 200 | 2,399 | 4.425 | 0.25 | 0.088136 | 0.090395 | 0.081356 | 0.711864 | 0.711864 | 0.711864 | 0.711864 | 0.711864 | 0.711864 | 0 | 0.05137 | 0.51313 | 2,399 | 72 | 86 | 33.319444 | 0.706336 | 0.130471 | 0 | 0.584615 | 0 | 0 | 0.143271 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 1 | 0.030769 | false | 0 | 0.015385 | 0 | 0.061538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a384662ecd3ed20160d26a62c08d57078218863 | 144 | py | Python | other_exercises/repeating_anagrams.py | katchengli/tech-interview-prep | 0e62d37339c79a943637fed13305e60ac2e6e6aa | [
"Apache-2.0"
] | null | null | null | other_exercises/repeating_anagrams.py | katchengli/tech-interview-prep | 0e62d37339c79a943637fed13305e60ac2e6e6aa | [
"Apache-2.0"
] | null | null | null | other_exercises/repeating_anagrams.py | katchengli/tech-interview-prep | 0e62d37339c79a943637fed13305e60ac2e6e6aa | [
"Apache-2.0"
] | null | null | null | # find all words that possess an anagram of themselves in a dictionary
def find_anagrams(listOfWords):
    print(find_anagrams(listOfWords))
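# The stub above only names the exercise and recurses without a base case; a
# minimal working sketch of the stated task (my assumption of the intent:
# bucket words by their sorted letters and keep the words whose bucket holds
# more than one entry, i.e. words that have an anagram in the dictionary) is:
def find_anagram_words(listOfWords):
    groups = {}
    for word in listOfWords:
        # anagrams of each other share the same sorted-letter key
        groups.setdefault("".join(sorted(word)), []).append(word)
    return [w for bucket in groups.values() if len(bucket) > 1 for w in bucket]
# Example (commented out): find_anagram_words(["listen", "silent", "apple"])
# would return ['listen', 'silent'].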
| 20.571429 | 70 | 0.784722 | 20 | 144 | 5.55 | 0.8 | 0.216216 | 0.414414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159722 | 144 | 6 | 71 | 24 | 0.917355 | 0.472222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4a56e999a3b0a80235fb30a0028da23f2e14f7e1 | 150 | py | Python | src/wai/spectralio/util/__init__.py | waikato-datamining/wai-spectral-io | a0edba2208b0b646ed54782cb0832ce10eed0d5e | [
"MIT"
] | null | null | null | src/wai/spectralio/util/__init__.py | waikato-datamining/wai-spectral-io | a0edba2208b0b646ed54782cb0832ce10eed0d5e | [
"MIT"
] | 3 | 2020-07-01T01:54:03.000Z | 2020-12-02T07:47:30.000Z | src/wai/spectralio/util/__init__.py | waikato-datamining/wai-spectral-io | a0edba2208b0b646ed54782cb0832ce10eed0d5e | [
"MIT"
] | null | null | null | from ._instanceoptionalmethod import instanceoptionalmethod
from ._non_default_kwargs import non_default_kwargs
from ._with_locale import with_locale
| 37.5 | 59 | 0.9 | 18 | 150 | 7 | 0.444444 | 0.15873 | 0.253968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 150 | 3 | 60 | 50 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a991d7ca81961f14ffa004b0d56c1473fe35bd4 | 96 | py | Python | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_instr.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_instr.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_instr.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/85/0e/84/caa49d76325e52fe2f85916b122cb5c71322aa9c83b822b4c5f29b81ff | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43a8ee8921313099eddbbfe1dd6dff47cef9e1e7 | 17,530 | py | Python | tests/test_checker.py | mabrowning/pylint-protobuf | 1e5a873a3fa329703ec9e890a2579a56e2c0d19a | [
"MIT"
] | null | null | null | tests/test_checker.py | mabrowning/pylint-protobuf | 1e5a873a3fa329703ec9e890a2579a56e2c0d19a | [
"MIT"
] | null | null | null | tests/test_checker.py | mabrowning/pylint-protobuf | 1e5a873a3fa329703ec9e890a2579a56e2c0d19a | [
"MIT"
] | null | null | null | import sys
import pytest
import astroid
import pylint.testutils
import pylint_protobuf
from hypothesis import given, strategies as st
@pytest.fixture
def fake_pb2(proto_builder):
    return proto_builder("""
        message Foo {
          required string valid_field = 1;
        }
    """, name='fake')
class TestProtobufDescriptorChecker(pylint.testutils.CheckerTestCase):
    CHECKER_CLASS = pylint_protobuf.ProtobufDescriptorChecker
    def test_unaliased_module_happy_path_should_not_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo = person_pb2.Person()
        foo.id = 123 #@
        """)
        with self.assertNoMessages():
            self.walk(node.root())
    def test_star_import_no_errors(self):
        node = astroid.extract_node("""
        from person_pb2 import *
        """)
        with self.assertNoMessages():
            self.walk(node.root())
    def test_unaliased_module_happy_path_should_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo = person_pb2.Person()
        foo.should_warn #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node, args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_star_import_should_warn(self):
        node = astroid.extract_node("""
        from person_pb2 import *
        foo = Person()
        foo.should_warn #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node, args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    @pytest.mark.skipif(sys.version_info < (3, 6),
                        reason='AnnAssign requires Python 3.6+')
    def test_annassign_happy_path_should_not_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo: Person = person_pb2.Person()
        foo.id = 123 #@
        """)
        with self.assertNoMessages():
            self.walk(node.root())
    @pytest.mark.skipif(sys.version_info < (3, 6),
                        reason='AnnAssign requires Python 3.6+')
    def test_annassign_attr_happy_path_should_not_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo: Person = person_pb2.Person()
        foo.id: int = 123 #@
        """)
        with self.assertNoMessages():
            self.walk(node.root())
    def test_unaliased_module_import_should_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo = person_pb2.Person()
        foo.invalid_field = 'should warn' #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('invalid_field', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    @pytest.mark.skipif(sys.version_info < (3, 6),
                        reason='AnnAssign requires Python 3.6+')
    def test_annassign_invalid_field_should_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo: Person = person_pb2.Person()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    @pytest.mark.skipif(sys.version_info < (3, 6),
                        reason='AnnAssign requires Python 3.6+')
    def test_annassign_attribute_invalid_field_should_warn(self):
        node = astroid.extract_node("""
        import person_pb2
        foo = person_pb2.Person()
        foo.should_warn: int = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.target, args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_module_import_should_warn(self):
        node = astroid.extract_node("""
        import person_pb2 as person
        foo = person.Person()
        foo.invalid_field = 'should warn' #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('invalid_field', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_module_import_as_self_should_warn(self):
        node = astroid.extract_node("""
        import person_pb2 as person_pb2
        foo = person_pb2.Person()
        foo.invalid_field = 'should warn' #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('invalid_field', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_importfrom_should_warn(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        foo = Foo()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_importfrom_with_aliasing_should_warn(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo as Bar
        class Foo(object):
            pass # normal class, not fake_pb2.Foo (nor fake_pb2.Bar)
        bar = Bar()
        bar.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_importfrom_with_multiple_aliasing(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo, Foo as Bar
        bar = Foo()
        bar.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_importfrom_with_aliasing_no_warning(self):
        node = astroid.extract_node("""
        from fake_pb2 import Foo as Bar
        class Foo(object):
            pass # normal class, not fake_pb2.Foo (nor fake_pb2.Bar)
        foo = Foo()
        foo.no_error = 123 #@
        """)
        with self.assertNoMessages():
            self.walk(node.root())
    def test_aliasing_via_getitem_does_not_throw(self):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        foo = [Foo][0]() #@
        """)
        self.walk(node.root())
    def test_aliasing_via_getitem_list(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        bar = [Foo]
        foo = bar[0]()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_getitem_dict(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        bar = {
            'baz': Foo,
        }
        foo = bar['baz']()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_getitem_uninferable_should_not_warn(self):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        from random import randint
        types = [Foo, int]
        foo = types[randint(0, 2)]()
        foo.should_warn = 123 #@
        """)
        with self.assertNoMessages():
            self.walk(node.root())
    def test_aliasing_via_getitem_nested_lists(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        bar = [[Foo]]
        foo = bar[0][0]()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_indirection_class_renaming(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        Indirect = Foo
        foo = Indirect()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_instance_renaming(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        foo = Foo()
        bar = foo
        bar.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_multiple_assignment(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        baz = bar = Foo()
        baz.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_bad_fields_in_multiple_assignment_multiple_messages(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        foo = Foo()
        bar = Foo()
        foo.should_warn = bar.should_also_warn = 123 #@
        """)
        messages = [
            pylint.testutils.Message(
                'protobuf-undefined-attribute',
                node=node.targets[0], args=('should_warn', 'Foo')
            ),
            pylint.testutils.Message(
                'protobuf-undefined-attribute',
                node=node.targets[1], args=('should_also_warn', 'Foo')
            ),
        ]
        with self.assertAddsMessages(*messages):
            self.walk(node.root())
    @pytest.mark.xfail(reason='unimplemented')
    def test_aliasing_via_indirection_getitem(self):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        types = {}
        types[0] = Foo
        foo = types[0]()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_getitem_list_indirection(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        baz = [Foo]
        bar = bar[0]
        foo = baz[0]()
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_aliasing_via_tuple_unpacking(self, fake_pb2):
        node = astroid.extract_node("""
        from fake_pb2 import Foo
        foo, bar = Foo(), 'bar'
        foo.should_warn = 123 #@
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Foo')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_issue5_inferenceerror_should_not_propagate(self):
        node = astroid.extract_node("""
        foo = 'bar/baz'.split('/')[-1]
        """)
        try:
            self.walk(node.root())
        except astroid.exceptions.InferenceError:
            pytest.fail("InferenceError should not propagate")
    def test_issue6_importing_a_missing_module(self, error_on_missing_modules):
        node = astroid.extract_node('import missing_module_pb2')
        with pytest.raises(AssertionError, match='expected to import module "missing_module_pb2"'):
            self.walk(node.root())
    def test_issue6_importing_a_missing_module_as_alias(self, error_on_missing_modules):
        node = astroid.extract_node('import missing_module_pb2 as foo')
        with pytest.raises(AssertionError, match='expected to import module "missing_module_pb2"'):
            self.walk(node.root())
    def test_issue6_from_importing_a_missing_module(self, error_on_missing_modules):
        node = astroid.extract_node('from missing_module_pb2 import foo')
        with pytest.raises(AssertionError, match='expected to import module "missing_module_pb2"'):
            self.walk(node.root())
    def test_issue7_indexerror_on_slice_inference(self):
        node = astroid.extract_node("""
        foo = []
        bar = foo[0] #@
        """)
        self.walk(node.root())
    @pytest.mark.skip(reason='probably should be Uninferable')
    def test_issue7_indexerror_on_correct_slice_inference(self):
        # TODO: this shouldn't raise IndexError, like above, but the value of
        # bar could be correctly inferred unlike above. Should we do this, and
        # where should we draw the line on what is too complex to infer?
        node = astroid.extract_node("""
        foo = []
        foo.append(123)
        bar = foo[0] #@
        """)
        self.walk(node.root())
    def test_lookup_on_nonetype_should_not_raise(self):
        node = astroid.extract_node('foo = None[0]')
        self.walk(node.root())
    @given(st.sampled_from(pylint_protobuf.PROTOBUF_IMPLICIT_ATTRS))
    def test_implicit_attrs_issue8(self, attr):
        node = astroid.extract_node("""
        from person_pb2 import Person
        p = Person()
        print(p.{})
        """.format(attr))
        with self.assertNoMessages():
            self.walk(node.root())
    def test_issue13_importing_a_module_from_package(self):
        node = astroid.extract_node("""
        from fixture import innerclass_pb2
        p = innerclass_pb2.Person()
        p.should_warn = 123
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_issue13_importing_a_module_with_alias_from_package(self):
        node = astroid.extract_node("""
        from fixture import innerclass_pb2 as foo
        p = foo.Person()
        p.should_warn = 123
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_issue13_importing_many_modules_from_package_no_errors(self):
        node = astroid.extract_node("""
        from fixture import innerclass_pb2, child_pb2
        """)
        self.walk(node.root())
    def test_issue13_importing_many_modules_with_aliases_from_package(self):
        node = astroid.extract_node("""
        from fixture import child_pb2 as bar, innerclass_pb2 as foo
        p = foo.Person()
        p.should_warn = 123
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    def test_module_import_renaming_still_warns(self):
        node = astroid.extract_node("""
        import person_pb2 as person_pb2
        import person_pb2 as foobar
        p = person_pb2.Person()
        p.should_warn = 123
        """)
        message = pylint.testutils.Message(
            'protobuf-undefined-attribute',
            node=node.targets[0], args=('should_warn', 'Person')
        )
        with self.assertAddsMessages(message):
            self.walk(node.root())
    @pytest.mark.xfail(reason='unimplemented')
    def test_typeerror_on_attrassign(self):
        node = astroid.extract_node("""
        import person_pb2 as person_pb2
        p = person_pb2.Person()
        p.name = 123
        """)
        message = pylint.testutils.Message(
            'protobuf-type-error', # TODO
            node=node.targets[0], args=('name', 'Person')
        )
        with self.assertAddsMessages(message):
self.walk(node.root()) | 32.951128 | 99 | 0.592185 | 1,929 | 17,530 | 5.166407 | 0.097978 | 0.055188 | 0.074052 | 0.090508 | 0.84989 | 0.829019 | 0.821092 | 0.805338 | 0.792494 | 0.769215 | 0 | 0.017794 | 0.2915 | 17,530 | 532 | 100 | 32.951128 | 0.784622 | 0.011637 | 0 | 0.686534 | 0 | 0 | 0.337779 | 0.043124 | 0 | 0 | 0 | 0.00188 | 0.077263 | 1 | 0.092715 | false | 0.004415 | 0.143488 | 0.002208 | 0.242826 | 0.002208 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43c603b19e616f7a747b15bb662ec4d1e2c82eef | 129,031 | py | Python | RecoLocalTracker/SiPixelClusterizer/test/testClusters.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 6 | 2017-09-08T14:12:56.000Z | 2022-03-09T23:57:01.000Z | RecoLocalTracker/SiPixelClusterizer/test/testClusters.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 545 | 2017-09-19T17:10:19.000Z | 2022-03-07T16:55:27.000Z | RecoLocalTracker/SiPixelClusterizer/test/testClusters.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 14 | 2017-10-04T09:47:21.000Z | 2019-10-23T18:04:45.000Z | #
# Last update: new version for python
#
#
import FWCore.ParameterSet.Config as cms
process = cms.Process("cluTest")
import HLTrigger.HLTfilters.hltHighLevel_cfi as hlt
# accept if 'path_1' succeeds
process.hltfilter = hlt.hltHighLevel.clone(
    # Min-Bias
    # HLTPaths = ['HLT_Physics_v*'],
    # HLTPaths = ['HLT_Random_v*'],
    HLTPaths = ['HLT_ZeroBias_*'],
    # HLTPaths = ['HLT_PAZeroBias*'],
    # HLTPaths = ['HLT_PARandom*'],
    # HLTPaths = ['HLT_PAMinBias*'],
    # Commissioning:
    # HLTPaths = ['HLT_BeamGas_HF_Beam1_v*'],
    # HLTPaths = ['HLT_BeamGas_HF_Beam2_v*'],
    # HLTPaths = ['HLT_BeamGas_HF_Beam1_v*','HLT_BeamGas_HF_Beam2_v*'],
    #
    # HLTPaths = ['p*'],
    # HLTPaths = ['path_?'],
    andOr = True, # False = and, True = or
    throw = False
)
# to select PhysicsBit
process.load('HLTrigger.special.hltPhysicsDeclared_cfi')
process.hltPhysicsDeclared.L1GtReadoutRecordTag = 'gtDigis'
# I do not know what this is doing (note: it is only bound to a plain Python
# name, not attached to the process, so it is not run as configured).
triggerSelection = cms.EDFilter( "TriggerResultsFilter",
    triggerConditions = cms.vstring( 'HLT_ZeroBias' ),
    hltResults = cms.InputTag( "TriggerResults", "", "HLT" ),
    l1tResults = cms.InputTag( "gtDigis" ),
    l1tIgnoreMaskAndPrescale = cms.bool( True ),
    throw = cms.bool( True )
)
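# A minimal sketch (not part of the original configuration) of how the filters
# defined above are typically used: scheduling them at the front of a cms.Path
# makes every module placed after them run only for events that pass the
# physics-declared bit and the selected HLT paths, e.g.
# process.selection = cms.Path(process.hltPhysicsDeclared * process.hltfilter)
# Note that "triggerSelection" above would first have to be attached to the
# process (process.triggerSelection = ...) before it could be put into a path.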
process.maxEvents = cms.untracked.PSet(
    input = cms.untracked.int32(1000)
)
process.MessageLogger = cms.Service("MessageLogger",
    debugModules = cms.untracked.vstring('siPixelClusters'),
    destinations = cms.untracked.vstring('cout'),
    # destinations = cms.untracked.vstring("log","cout"),
    cout = cms.untracked.PSet(
        threshold = cms.untracked.string('ERROR')
    )
    # log = cms.untracked.PSet(
    #     threshold = cms.untracked.string('DEBUG')
    # )
)
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring(
# "file:/afs/cern.ch/work/d/dkotlins/public/data/digis.root"
## 2012, cosmics
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/791/CEA46376-7069-E111-B395-001D09F24D67.root",
# "/store/data/Commissioning12/Commissioning/RECO/PromptReco-v1/000/186/791/6EC3470C-6F69-E111-93CA-001D09F241B9.root",
# "/store/data/Commissioning12/Cosmics/RECO/PromptReco-v1/000/186/791/6A54D2A0-6D69-E111-ABA8-001D09F2441B.root",
# R186822
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/2C4E0F91-C569-E111-B751-003048D2C01A.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/38A8E118-C969-E111-B30B-003048F117EC.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/3AF5B2FF-C669-E111-8930-003048F024FA.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/48F7CB3D-C469-E111-AB5D-BCAEC53296F8.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/664D8A17-C769-E111-81CA-003048F11114.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/6C772594-C569-E111-ADAE-BCAEC5364C93.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/B29DAEDB-C469-E111-A9DF-0025901D6268.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/BC3CB891-C569-E111-A8A8-BCAEC518FF89.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/BEC6EFFE-C669-E111-BE09-003048F0258C.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/E8CDFC34-CB69-E111-A466-001D09F2AF1E.root",
# "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/186/822/FEC1AA17-C769-E111-BDAE-003048CF94A6.root",
# R 187446
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/FE7B607F-D76D-E111-993E-003048D37538.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/F4E94D8C-D36D-E111-8B8E-0025B3203898.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/F45BE48A-D16D-E111-8873-001D09F25041.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/F2A06371-D36D-E111-89CB-0025901D5D90.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/F01C4674-D36D-E111-B595-5404A63886EB.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/EEC93216-DB6D-E111-A734-BCAEC5329709.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/E447DE30-D86D-E111-928E-5404A63886C7.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/E2193A73-D36D-E111-B41A-E0CB4E4408E7.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/E09580FD-D86D-E111-96FC-003048F1C420.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/E08B0C31-D86D-E111-B44B-E0CB4E55365D.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/D8E4334B-D26D-E111-856A-5404A63886AB.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/CC38AC13-DB6D-E111-8A1B-E0CB4E4408E3.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/CA77838E-D36D-E111-BF84-003048F0258C.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/C8B07B4E-DA6D-E111-A95D-003048F24A04.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/BE506EC4-D66D-E111-B3F5-003048673374.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/BE4F2977-D36D-E111-9884-BCAEC53296F4.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/BC1FA198-D56D-E111-A364-001D09F252E9.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/BC0A2B97-D56D-E111-9878-001D09F24399.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/B8EE564F-DA6D-E111-A3C6-0025B32036D2.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/B07B8B7F-D76D-E111-BB09-003048D2BDD8.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/ACC25274-D36D-E111-862C-BCAEC518FF54.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/AA900C51-DA6D-E111-A4DF-00215AEDFCCC.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/9E5A6448-D26D-E111-B2EC-BCAEC518FF74.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/9C679D51-DA6D-E111-B8FB-002481E0D646.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/98D2E27F-D76D-E111-9DFD-0015C5FDE067.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/8ED12356-DA6D-E111-85F7-001D09F25267.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/8E3DDA51-DA6D-E111-8066-002481E0D958.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/866BDE7F-D76D-E111-B95E-0025B32035BC.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/8666AD8C-D36D-E111-986C-003048F1C424.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/801FD68D-D36D-E111-A36D-BCAEC5329717.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/74E7F77E-D76D-E111-9572-003048D2C16E.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/72C42773-D36D-E111-915A-BCAEC518FF41.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/72AF0133-D66D-E111-837F-003048D2BC38.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/7280F888-D16D-E111-B684-001D09F295FB.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/702B534F-DA6D-E111-BD9A-001D09F29619.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/5AB938FD-D86D-E111-BC54-003048F024FA.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/5A8B532F-D86D-E111-8F81-5404A63886C4.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/58D3E574-D36D-E111-97FA-BCAEC5329705.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/56941D79-D36D-E111-AB00-BCAEC5329719.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/563B1597-D56D-E111-94D5-002481E0D73C.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/54EF4674-D36D-E111-9019-BCAEC5329713.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/5427C280-D76D-E111-8948-001D09F242EF.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/4EF9C201-D56D-E111-825E-00215AEDFD74.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/3EA8C331-D86D-E111-A427-001D09F2A690.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/3E4BCC7C-D76D-E111-BCCF-003048F11114.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/3C6C6C51-DA6D-E111-99CA-00237DDBE49C.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/3AE663DF-D26D-E111-964E-BCAEC518FF44.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/32F29045-D26D-E111-98C6-BCAEC5364C4C.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/2EA3407F-D76D-E111-9847-002481E0DEC6.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/2E137EFD-D86D-E111-9CBC-0025B32035A2.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/28BE4756-DA6D-E111-8899-003048F11DE2.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/2010C213-DB6D-E111-99CC-5404A63886D6.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/1E1641BD-DB6D-E111-B1EB-003048F1C832.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/1C15184F-DA6D-E111-BA10-003048F118C4.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/1A5F5A56-DA6D-E111-A8DD-003048F11942.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/06681347-D26D-E111-8005-E0CB4E4408E3.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/04E47C26-DD6D-E111-8D1C-003048F024F6.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/0233A574-D36D-E111-91C3-BCAEC5364C42.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/187/446/0228C49A-D56D-E111-A48B-001D09F24EE3.root",
# 190389 (ran OK, no zb)
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/009B5147-9F80-E111-90B5-001D09F2424A.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/06DD07A7-A080-E111-AA52-0015C5FDE067.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/06FF4150-AA80-E111-B82A-5404A63886B0.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/0E8628A8-A080-E111-AA55-001D09F292D1.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/0EE8D651-AA80-E111-9B0C-5404A63886EF.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/10976EFE-9180-E111-9390-5404A63886B6.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/1A0F5A0C-AD80-E111-BA2A-BCAEC5364C93.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/1A806611-AD80-E111-B702-001D09F2A690.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/220A939E-B180-E111-806C-003048CF94A6.root",
## "/store/data/Commissioning12/MinimumBias/RECO/PromptReco-v1/000/190/389/260242F4-B080-E111-AE76-00215AEDFD74.root",
# "/store/express/Commissioning12/ExpressPhysics/FEVT/Express-v1/000/190/411/0280693C-F87E-E111-9911-BCAEC532970F.root",
# run 191271
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/0C745F0F-BE88-E111-9978-485B3977172C.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/0EC6C2CC-B288-E111-93DB-001D09F24353.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/7A86F1C4-B988-E111-BF91-00215AEDFCCC.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/7C042DFF-B488-E111-8171-5404A640A642.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/8CC241CE-B288-E111-8320-001D09F29321.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/AAD273EC-BB88-E111-891A-BCAEC5329713.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/B20AAA76-B688-E111-A69E-BCAEC5364C42.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/B43F5CB7-C388-E111-BBB3-5404A63886EF.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/B60FA7FF-C288-E111-B5D7-5404A6388699.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/C84D5FFB-B488-E111-B0BE-5404A6388694.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/D0E4D5F9-B488-E111-9E82-BCAEC518FF41.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/271/D6F911A0-BC88-E111-A1C9-BCAEC518FF7A.root",
# 191411
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/411/66FD8F4B-088A-E111-A96C-001D09F252E9.root",
# 191718
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/2AB8ED3A-B88B-E111-A086-5404A6388692.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/3CCF1232-D28B-E111-8E14-001D09F2424A.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/46F6126B-D68B-E111-8895-003048F11114.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/52075473-C58B-E111-95CD-0015C5FDE067.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/AC4A5932-D28B-E111-BD5F-001D09F24FEC.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/B42DACB6-CE8B-E111-A6DC-001D09F23C73.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/C0241E8A-CC8B-E111-B8A4-001D09F24303.root",
# "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/191/718/F2E89EB6-CE8B-E111-8F33-0019B9F4A1D7.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/44DC8D5C-DB8B-E111-B786-5404A63886B2.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/46C834F6-D98B-E111-A1D7-5404A638869E.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/4E7B5EAE-CE8B-E111-9E02-001D09F29114.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/8057AD53-E08B-E111-B0B3-485B39897227.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/8801A26B-D68B-E111-A0FC-002481E0DEC6.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/A028490E-D08B-E111-8D6D-0025901D627C.root",
# "/store/data/Run2012A/Commissioning/RECO/PromptReco-v1/000/191/718/DE0D2BA4-DA8B-E111-A55A-0025B320384C.root",
# fill 2576
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/10CC3327-9B95-E111-A670-001D09F2A465.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/1AA80E34-8C95-E111-A50D-001D09F24FBA.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/801B1A24-8F95-E111-B7F8-003048D2C108.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/86223BA8-8795-E111-9FEB-BCAEC5329713.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/903C8452-9395-E111-A3EC-001D09F2905B.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/9E6FE753-8995-E111-AE3B-003048F118E0.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/A6D3BB38-8795-E111-ADF6-BCAEC518FF6E.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/B48B8874-9095-E111-9A6A-003048F11C5C.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_ZeroBias/RECO/PromptReco-v1/000/193/092/DA2D45C9-8995-E111-88A1-0025901D631E.root",
# "rfio:/castor/cern.ch/cms/store/data/Run2012A/LP_MinBias1/RECO/PromptReco-v1/000/193/092/002CD9DE-8995-E111-869C-001D09F2AF1E.root",
# fill 2596,
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/1E63A782-8B9A-E111-97CF-001D09F2B30B.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/20257BCD-8F9A-E111-9EE4-001D09F25460.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/3CDAD5EC-949A-E111-9B86-003048D2BC42.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/4C13AD2F-919A-E111-A327-001D09F2B30B.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/5E47D39E-929A-E111-89A8-001D09F25267.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/72295CE6-949A-E111-B7DB-003048F1C836.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/7AC6A094-989A-E111-9F81-001D09F2512C.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/82FD1049-999A-E111-B882-003048F024FE.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/94441B27-949A-E111-93D1-003048F1BF68.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/A0EAD5B6-9C9A-E111-89BB-BCAEC518FF6B.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/A8869045-8C9A-E111-8D13-001D09F27003.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/AAF776F5-8C9A-E111-B091-001D09F25460.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/B05A6127-949A-E111-A824-003048CF9B28.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/B6ED0C24-8A9A-E111-AA86-0019B9F72D71.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/BA234F8A-939A-E111-9F84-001D09F241B9.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/CA8D40AC-8D9A-E111-A572-001D09F244DE.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/CC0FE027-949A-E111-9390-003048F118DE.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/CE296C9F-929A-E111-8748-001D09F244DE.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/D8D77840-8C9A-E111-9332-001D09F2447F.root",
## "/store/data/Run2012A/MinimumBias/RECO/PromptReco-v1/000/193/621/FEF2BAC8-999A-E111-A3DC-485B3977172C.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/161575BF-339D-E111-B009-485B3977172C.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/5464CD17-329D-E111-8062-5404A63886BE.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/8271BC4A-2F9D-E111-891F-003048F11C28.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/84AE016A-319D-E111-9EB2-5404A63886C0.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/C67F52B9-419D-E111-98E9-003048D37560.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/D2897664-2A9D-E111-AE88-003048D2BC52.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/DCBD02EA-399D-E111-966F-BCAEC518FF68.root",
## "rfio:/castor/cern.ch/cms/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/193/998/E8BA354A-2F9D-E111-BEC6-0015C5FDE067.root",
# fill 2621
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/0E3C3137-179E-E111-9A06-5404A63886C5.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/0E727127-159E-E111-AC08-00215AEDFD74.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/102488EE-3C9E-E111-B275-003048D3733E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/16F23FD3-269E-E111-90F3-0025B320384C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/20D96233-179E-E111-9BCD-BCAEC5329702.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/261AE52F-429E-E111-8BAE-0025901D5DF4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/2638B30A-309E-E111-8973-BCAEC518FF6E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/268D0C20-269E-E111-9561-003048D2BD66.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/44301897-0F9E-E111-82A4-003048D374F2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/6213E48B-119E-E111-BA26-0030486733B4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/68790229-1C9E-E111-84F2-003048D2BC4C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/6C1ECA8D-0F9E-E111-93BC-002481E0D7C0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/6C301D9C-0F9E-E111-B74C-BCAEC5364C93.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/743C1AC6-1A9E-E111-9B5D-00215AEDFCCC.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/762082AC-139E-E111-816F-0025901D631E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/7A1FF1BF-1A9E-E111-AFBA-002481E0D7EC.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/880CCD28-1C9E-E111-9664-003048D2BC38.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/92236757-149E-E111-8389-002481E0D73C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/92E3A46D-0F9E-E111-8A45-E0CB4E553651.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/A03F677E-229E-E111-8D0C-003048D2BC4C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/A0DEB0E1-179E-E111-A3AF-003048F1C82A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/A6ABBAF3-0F9E-E111-90A2-001D09F2437B.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/B0C7F697-189E-E111-A9E0-003048D37538.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/B6460AFE-199E-E111-A3A2-003048D374CA.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/B8F5592F-429E-E111-BDF6-5404A63886AB.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/BE90BB55-149E-E111-95B9-003048F118DE.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/C4225BD2-0F9E-E111-A17A-BCAEC5364C42.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/CEBEC4A4-0F9E-E111-A9B1-003048D37538.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/D28804BE-1A9E-E111-90F9-0025B32035BC.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/D2B5D280-119E-E111-915D-5404A63886D2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/DE912530-139E-E111-AAE6-0030486780B4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/F6587C8E-519E-E111-A914-5404A63886AD.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/FEDC688B-279E-E111-9EF7-0025901D62A0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/16A57AB6-649E-E111-9D6D-001D09F28F25.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/5457E6EE-489E-E111-81DA-5404A63886E6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/9ADAB38B-519E-E111-B3A4-BCAEC5329717.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/A44197F4-579E-E111-8264-003048D2BE06.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/BA6CD7F5-579E-E111-A79D-003048D2BB90.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/C4A51D8C-519E-E111-A84B-5404A638868F.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/0E3ABC62-499E-E111-9BD7-485B3962633D.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/050/887E0357-5E9E-E111-99DC-0025901D5D80.root",
# fill 2663
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/00471167-82A8-E111-B467-BCAEC53296F3.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/0E6700C0-25A8-E111-93C0-003048D2C108.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/0ECA6CAF-89A8-E111-895D-5404A63886B1.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/101E6F20-5CA8-E111-8806-BCAEC5329709.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/103AD8C4-64A8-E111-8049-003048D2BC42.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/1470FC66-3CA8-E111-ABED-003048D2C01A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/16DDF92A-66A8-E111-9C6D-001D09F23174.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/1E2AEDB0-60A8-E111-B148-5404A63886EC.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/1EFEA2DF-1BA8-E111-97F0-001D09F24353.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/22826145-2EA8-E111-8ABF-001D09F292D1.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/24E517CB-64A8-E111-8521-0030486780B8.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/264A3786-91A8-E111-A791-BCAEC518FF80.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/28C2B478-5CA8-E111-88C0-BCAEC532971E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/2A6C9B7B-80A8-E111-8EF0-E0CB4E5536AE.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/2E4C455F-9BA8-E111-B1B0-003048D3751E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/2EBC8E96-5FA8-E111-BF27-BCAEC532970F.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/328F7379-A2A8-E111-B1BC-001D09F297EF.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/367D5007-D2A8-E111-B369-0025901D5C88.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/3805738A-9DA8-E111-889D-003048D2C0F4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/384D77E3-1BA8-E111-988D-0025901D5DB2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/544C491F-5CA8-E111-A7F0-BCAEC532971D.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/54AF243E-72A8-E111-9FBA-003048F118C6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/6418506B-41A8-E111-A313-001D09F24DA8.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/68C64FBF-36A8-E111-A588-0025901D5D7E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/6AE7927D-80A8-E111-98D3-E0CB4E4408E7.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/722C7109-5BA8-E111-BA78-0025901D62A0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/741E027E-3EA8-E111-B23E-BCAEC5329716.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/74E1E304-4FA8-E111-BF2D-003048D2C01E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/76E4A9E1-97A8-E111-AD5E-003048D2C020.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/7CE53D59-35A8-E111-BE61-003048D37580.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/8E480FB0-60A8-E111-9AC3-BCAEC518FF54.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/8E98112E-36A8-E111-BB0E-5404A63886CF.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/90725E0D-5BA8-E111-A7BF-5404A6388694.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/96AC63B1-53A8-E111-B916-5404A6388698.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/98D4701A-7CA8-E111-8281-BCAEC518FF74.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/A6187646-94A8-E111-8444-001D09F29533.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/A8DAE671-61A8-E111-AAEA-0025B3203898.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/AA15F3EF-99A8-E111-AF0E-003048D2BC52.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/B035818D-98A8-E111-B805-003048D373AE.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/B0E3C7D0-3DA8-E111-B0F1-5404A63886B2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/B20E582B-A8A8-E111-8BFE-001D09F2527B.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/B427EA9A-A9A8-E111-9E21-001D09F25460.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/B43B7388-1FA8-E111-866E-001D09F24DA8.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/BC322A0E-70A8-E111-90F8-003048D2BD66.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/C4421C8B-9DA8-E111-8FE5-00237DDBE41A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/C4695770-65A8-E111-96DB-BCAEC53296F8.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/CADF58A6-34A8-E111-B24D-0025B32445E0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/CC08A75D-6DA8-E111-A049-001D09F2447F.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/D08E43A4-87A8-E111-AB98-485B3962633D.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/D439CAEF-33A8-E111-B362-BCAEC518FF52.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/D6097E1A-A6A8-E111-BEEB-5404A63886A0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/E8F99B4E-52A8-E111-905B-BCAEC53296F8.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/F2C035BD-69A8-E111-B021-003048F1BF66.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/F8A3A8A1-4CA8-E111-9481-BCAEC518FF8D.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/FC394E53-24A8-E111-A17D-003048F118C6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/FCCD33CD-B7A8-E111-B7F1-003048D2C01E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/194/912/FEC82608-A1A8-E111-B4CD-BCAEC518FF50.root",
# fill 2670
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/EA8428A6-6BAA-E111-90B6-485B3962633D.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/E20CB792-5CAA-E111-9CBC-003048D37666.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/A2DB76A2-44AA-E111-872A-5404A638868F.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/78E58267-66AA-E111-9505-BCAEC5364C4C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/4475310E-65AA-E111-A450-BCAEC518FF52.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/40AB4A06-72AA-E111-83E5-0025901D5DB2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/32B9AA8E-61AA-E111-B6F5-003048F118AA.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/2AAD3A6B-59AA-E111-8ED7-0025901D629C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/1AC3A20F-6FAA-E111-9A64-BCAEC53296FB.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/06BB53B2-73AA-E111-B98A-003048F117B4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/099/024EFA12-60AA-E111-BA44-E0CB4E553673.root",
# fill 2671
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/26682BC3-EAAA-E111-8D66-002481E0D7EC.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/2627AD36-F0AA-E111-93B7-003048D2C16E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/30AA5C32-E0AA-E111-A2E5-5404A63886B4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/5E44AA1A-ECAA-E111-ADC1-BCAEC518FF52.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/5E7E23B0-F8AA-E111-A69F-5404A63886B9.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/7A2DAF17-E5AA-E111-ADD1-0025901D6288.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/8EEDAF3D-0DAB-E111-8D24-003048D2BE08.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/B22F7F59-E4AA-E111-B424-BCAEC518FF89.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/B2D4DCA8-F6AA-E111-BFAC-5404A638869E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/BC8ACBB7-FCAA-E111-A853-0025901D5DB2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/C4ABF907-00AB-E111-B9A1-0025901D626C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/C800A10C-00AB-E111-9077-003048F11CF0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/109/EA16EF88-02AB-E111-88D7-002481E0D790.root",
# fill 2712
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/FC8E7F2E-71B3-E111-BB47-003048D3C980.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/E0C9E8C7-63B3-E111-ABED-003048F11942.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/D8FEDF23-8EB3-E111-893A-003048F118C2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/D42341D5-14B3-E111-AB6A-001D09F2910A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/D20E09CA-05B3-E111-8CC6-485B39897227.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/CA5DD02C-71B3-E111-A055-003048D3756A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/C4B0B7BD-3DB3-E111-ABEC-003048F118AC.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/C40F632F-71B3-E111-A14B-003048D2BBF0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/C409FE7D-FCB2-E111-8669-003048F118C6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/C273B1DB-1EB3-E111-BBC2-BCAEC5329717.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/B2F74F91-10B3-E111-BB98-001D09F2A690.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/B0B3A13D-21B3-E111-A08F-001D09F2525D.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/A8C73DC4-42B3-E111-9B8F-001D09F295FB.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/A4E62C1E-48B3-E111-9733-5404A6388692.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/98B4A3D4-12B3-E111-BADA-0025901D5DB8.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/981E1BF9-02B3-E111-A155-003048D3750A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/9015A3AB-74B3-E111-A605-001D09F24303.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/900DA09E-FBB2-E111-A068-5404A63886C4.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/8EE22EF5-FCB2-E111-9E12-E0CB4E553651.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/8CA02797-41B3-E111-996B-BCAEC518FF80.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/8A86F139-78B3-E111-93D0-001D09F23174.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/84EC2056-0EB3-E111-9FB8-BCAEC5329709.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/7ED3B246-28B3-E111-A4A0-BCAEC518FF44.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/6A6E2CB7-25B3-E111-AD7D-003048F024C2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/6858C9F5-84B3-E111-956E-001D09F2924F.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/5EE9BC9B-A5B3-E111-9549-003048D3750A.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/4EA0E383-6DB3-E111-994D-0025901D624E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/4CA14E2C-42B3-E111-8DDF-5404A63886C6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/3EDDF63F-21B3-E111-8E50-001D09F23C73.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/366E7FDF-F7B2-E111-9236-003048F1183E.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/345C88B1-03B3-E111-83FA-003048F117F6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/2A2B5349-5EB3-E111-BD9C-001D09F2841C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/2252F549-24B3-E111-BD1C-BCAEC518FF6B.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/224874A1-4DB3-E111-A6E8-BCAEC518FF76.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/1CCBD219-54B3-E111-8117-0025901D6268.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/1C584E1B-54B3-E111-8675-5404A63886B6.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/109D50B0-79B3-E111-A0EE-001D09F24303.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/06F334FC-0CB3-E111-BA49-5404A63886A0.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/195/774/04E0B480-FAB2-E111-995A-003048F11114.root",
# fill 2739
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/EA14CD67-FBBA-E111-9E54-0015C5FDE067.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/EA143873-02BB-E111-879A-001D09F25041.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/D864DC90-FDBA-E111-AB59-001D09F2447F.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/D0FE1038-F9BA-E111-8CA2-003048D2C092.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/CECA8AD0-08BB-E111-A4A8-003048D2C092.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/CA48FDED-F9BA-E111-8EEF-003048D375AA.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/C073FC49-00BB-E111-B0EA-003048D374F2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/7011E9E6-0ABB-E111-A5CD-BCAEC5329702.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/669D6022-03BB-E111-A41B-003048D2C092.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/64BC6062-FBBA-E111-A5F5-001D09F2A465.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/628479F6-05BB-E111-8386-003048D2C020.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/5E6A478B-FDBA-E111-BF6D-001D09F27067.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/4E5F98E8-0ABB-E111-9A1F-BCAEC5364C4C.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/38306919-08BB-E111-AE8F-001D09F29169.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/323D086C-F6BA-E111-9C42-003048D374F2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/2438DA6B-FBBA-E111-9493-003048F01E88.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/18CF61CB-FCBA-E111-AFE9-003048D374F2.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/121B0271-02BB-E111-AE1C-001D09F25460.root",
## "/store/data/Run2012B/MinimumBias/RECO/PromptReco-v1/000/196/531/0CE99CF6-05BB-E111-A3FD-003048673374.root",
# fill 2865, high PU
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/F4A50C72-2CCD-E111-9FE5-001D09F276CF.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/E233D83D-28CD-E111-AF07-001D09F2924F.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/C0C2D2F3-28CD-E111-8599-001D09F2A465.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/BEED0F57-23CD-E111-BF8B-0019B9F72CE5.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/BE589B78-33CD-E111-8CB6-001D09F28E80.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/B25F5628-2DCD-E111-B057-0019B9F72CE5.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/A62801FA-2FCD-E111-A105-001D09F24EE3.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/9AE138C7-37CD-E111-8F87-001D09F251FE.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/76FADBC2-2BCD-E111-8227-001D09F29146.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/681CCA4D-21CD-E111-B409-001D09F23F2A.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/66687A98-27CD-E111-A411-001D09F25267.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/62BE0501-37CD-E111-B316-001D09F242EF.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/3070DF12-2BCD-E111-89E8-001D09F2462D.root",
## "/store/data/Run2012C/ZeroBias/RECO/PromptReco-v1/000/198/609/0E478DDC-26CD-E111-90D2-001D09F2B30B.root",
# fill 2855 (VDM?)
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/FEE0ADFD-77D3-E111-A45A-003048F1C424.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/FE9FFD87-D4D3-E111-B40A-003048F1C836.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/FE78951D-B0D3-E111-9397-BCAEC518FF65.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/FE385AFE-81D3-E111-9634-BCAEC53296F3.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/FC0E084C-D5D3-E111-B070-003048F1BF68.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/FACD5C14-B5D3-E111-A08E-003048F024C2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F88FAE24-B6D3-E111-B736-002481E0D646.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F807D5DF-93D3-E111-93BE-0025B32035A2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F6FD9200-CBD3-E111-AD7D-BCAEC532971C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F6741513-B5D3-E111-BDDD-003048F24A04.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F4B41F88-D4D3-E111-8960-002481E0D7D8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F4907CF6-78D3-E111-B2A3-003048D37694.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F2C3FAE0-93D3-E111-96E8-001D09F292D1.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F29AF713-B5D3-E111-A214-BCAEC518FF80.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F291886D-C9D3-E111-B075-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F258E7B2-C2D3-E111-A7F8-BCAEC5364CFB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/F0C356E4-A4D3-E111-91DB-BCAEC518FF8E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/EE3BB526-7FD3-E111-824A-BCAEC518FF8F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/EE37D177-8ED3-E111-95B6-003048D2BC52.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/EADF0246-8BD3-E111-BDAB-003048D2C020.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/EACA0766-8AD3-E111-A75F-BCAEC518FF54.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/EABF3CE2-93D3-E111-B797-002481E0D90C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E6BB052A-C2D3-E111-9602-BCAEC518FF7C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E6325DF1-9FD3-E111-8756-00237DDC5CB0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E4D7BE73-A5D3-E111-80A7-003048678098.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E48CBF8B-AAD3-E111-ABC4-00237DDBE0E2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E47CCB14-D5D3-E111-BE3C-003048D37456.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E44EDE00-78D3-E111-B76C-003048D2BE06.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E2A51DF7-9CD3-E111-9EB6-003048F0258C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E27E7BCE-84D3-E111-BD2F-003048F11112.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E27E22EC-D3D3-E111-8DB5-001D09F23A20.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E01F7C34-CFD3-E111-B928-E0CB4E4408E7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/E0043FE1-93D3-E111-A90D-001D09F24024.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DECA8B8E-94D3-E111-9DCE-003048678098.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DC7A921D-A1D3-E111-BC44-001D09F23A20.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DC70E9E2-93D3-E111-B22E-0030486733B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DC295D7E-CBD3-E111-97AD-003048F1C420.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DC25AAD5-B7D3-E111-A7FB-0025901D5DEE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DABB0276-A8D3-E111-96B1-002481E0DEC6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DAA6DC80-ABD3-E111-9E93-5404A640A648.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/DA9572C7-C6D3-E111-9136-003048D2BBF0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D8FC0934-C2D3-E111-B507-001D09F29524.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D6CCD28E-94D3-E111-AB81-001D09F28F25.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D6B7434A-D7D3-E111-9B05-003048F024F6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D6635452-90D3-E111-8A01-BCAEC518FF80.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D4A7E74B-D5D3-E111-A7F9-5404A63886B6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D45C8E81-A3D3-E111-9BCD-BCAEC518FF30.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D2CA4B1D-7AD3-E111-B93D-BCAEC5329701.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D2A9E4AD-9ED3-E111-9724-001D09F2AD4D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D0D05F23-B6D3-E111-B910-002481E0D7C0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D08BBBEA-82D3-E111-8B2D-001D09F2A465.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/D06A767A-A9D3-E111-B482-BCAEC5329716.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/CE99E4EA-A2D3-E111-A3D2-002481E94050.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/CE5ABDEF-9FD3-E111-97AE-001D09F253C0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/CCE71BFA-83D3-E111-A3C0-0025901D5E10.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/CCCE46DB-81D3-E111-A438-003048F1110E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/CCA53F6C-8FD3-E111-BD81-5404A63886B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C8367534-6ED3-E111-8AB7-0025901D5C88.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C68D927A-CED3-E111-AE86-BCAEC5364CED.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C66E58E3-D2D3-E111-97BB-002481E0D646.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C66AC1E0-B1D3-E111-93B7-003048F0258C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C42D9DD8-81D3-E111-A807-001D09F29597.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C408CFB2-C2D3-E111-9D2E-0025901D625A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C2A48E61-BAD3-E111-AE07-003048F024E0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C2A488FF-81D3-E111-A7B3-0025901D5D9A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C29FF01D-A1D3-E111-88CF-003048D3C982.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C267FC8A-ABD3-E111-8D4D-5404A63886C3.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C053E8DD-D7D3-E111-A2D1-0019B9F72F97.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/C0396F14-D5D3-E111-9427-003048678110.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BEE8D9CA-84D3-E111-828D-0025901D5DEE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BEAD99B2-C2D3-E111-8387-BCAEC53296FB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BE761E36-97D3-E111-916D-001D09F24024.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BC94AEC0-83D3-E111-B60F-001D09F291D7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BC603D35-6ED3-E111-9757-5404A638869E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BAF74AED-9FD3-E111-B9A6-003048CF94A6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BA53E5CB-D4D3-E111-9EF9-BCAEC518FF5F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/BA0C0F29-D1D3-E111-948C-5404A63886AB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B6F5155B-77D3-E111-9F39-001D09F24399.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B4F6A26B-8FD3-E111-86BC-BCAEC518FF54.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B4BB9D65-8AD3-E111-8022-BCAEC5364C93.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B496F3A2-7BD3-E111-81E4-E0CB4E5536AE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B26C3C64-D9D3-E111-BA49-001D09F24024.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B0FF3272-85D3-E111-A75C-0025901D5DF4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B0F9A071-96D3-E111-9272-BCAEC532971E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B0DE33FF-81D3-E111-AF0B-5404A640A63D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/B0BC86C0-6CD3-E111-9DEE-003048D2BBF0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/AEA9543F-BCD3-E111-9D1C-0025901D6272.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/AE8A75FF-77D3-E111-A542-5404A638869B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/AE4533A9-87D3-E111-BFC6-BCAEC518FF67.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/AC84E233-CFD3-E111-A94A-BCAEC532971C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/AC2A0080-CED3-E111-99ED-003048D2C01A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/AC19DAE4-D2D3-E111-B725-0030486780A8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/A81CD9C7-A1D3-E111-A3B0-001D09F23174.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/A66E8800-8DD3-E111-AEB9-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/A60CD91A-7AD3-E111-95D7-BCAEC532971C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/A2DEDF1E-A1D3-E111-A2CE-001D09F23D1D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/A0E68DCD-84D3-E111-9312-003048F11C28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/A077A3B1-C2D3-E111-92C9-BCAEC518FF65.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9EA2AEEA-C8D3-E111-B343-0025901D631E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9E60D9CD-84D3-E111-AB59-BCAEC518FF8A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9CC693E3-D2D3-E111-BA37-0025901D5C86.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9CABA7D7-81D3-E111-A6C2-003048F11DE2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9AFD4A6A-C9D3-E111-8068-5404A63886C0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9A7D5BB3-C2D3-E111-B4C3-5404A63886A5.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9842DE49-D7D3-E111-8AA5-003048CF9B28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/981D92F4-CFD3-E111-B0DA-0025901D5DF4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/96E7042D-DAD3-E111-85C3-001D09F2305C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/96B65458-90D3-E111-AC0E-001D09F2B30B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/94F73000-78D3-E111-8A38-00237DDC5BBC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/941477EE-A3D3-E111-82EB-BCAEC5364C4C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/9238BEBA-AAD3-E111-96E4-0025901D5D78.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/908BCD9B-AAD3-E111-9C37-003048D2BF1C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/905FFB27-D1D3-E111-B78B-BCAEC5329700.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/8EBD8DC1-6CD3-E111-8D46-0030486733D8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/8E53D46C-8FD3-E111-98C6-5404A63886AB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/8CD297FD-77D3-E111-BA36-002481E94C7E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/8C97989C-C7D3-E111-A9A5-001D09F241B9.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/8AC7DE91-7CD3-E111-B972-003048D2C020.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/8A2002EB-74D3-E111-BF75-003048D3C982.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/88C4D516-B5D3-E111-B038-0030486780B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/88A17664-C6D3-E111-B645-003048D37560.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/86DA94A6-7FD3-E111-845F-001D09F28D4A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/86BD408F-94D3-E111-B835-001D09F29146.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/86B83EC1-AFD3-E111-8229-003048D2BED6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/80991280-CED3-E111-A014-0030486780B8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7EEF5919-BDD3-E111-A9A0-001D09F252DA.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7C8926A5-9FD3-E111-87AE-001D09F25267.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7C442963-D3D3-E111-B84F-0025901D5E10.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7AF938D8-B9D3-E111-A832-001D09F290CE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7A158772-79D3-E111-BBE7-001D09F242EF.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/785AFF76-CDD3-E111-9077-001D09F29146.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/768D12F2-9AD3-E111-A2D2-E0CB4E5536AE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7447E31C-88D3-E111-BA8A-BCAEC518FF63.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/7445F477-CDD3-E111-B11F-003048D2BE06.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/742696CA-84D3-E111-A625-BCAEC5329716.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/72C7C1CD-84D3-E111-B1B2-5404A63886C5.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/70EABFD8-6ED3-E111-AAE6-003048D37694.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/70D82516-B5D3-E111-9F8C-5404A6388694.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/6E74C778-D5D3-E111-A40F-0025901D5C86.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/6AFCBCF5-CFD3-E111-8CCD-BCAEC532971F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/68FA5519-EFD3-E111-8C3B-5404A6388698.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/68B4AF34-CFD3-E111-B13A-BCAEC5329700.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/6659193F-8BD3-E111-9816-003048F11C28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/6658B2E6-D2D3-E111-AC2A-003048F11C28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/64FA2A8A-A3D3-E111-AF59-0030486780A8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/62C1DF28-D1D3-E111-A3DB-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/627CC97D-BED3-E111-AC6A-0025901D5E10.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/6226DCDA-81D3-E111-BAB2-001D09F25041.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/60BFF0E0-93D3-E111-81E8-001D09F2932B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/603B763E-BCD3-E111-8269-BCAEC518FF76.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/60049947-72D3-E111-9F35-BCAEC532971E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5EED2C0D-73D3-E111-800E-003048D2BE08.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5EB9EFF3-98D3-E111-A6FB-001D09F253D4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5E68F0FB-77D3-E111-ACE0-BCAEC518FF74.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5E5B7F7A-A9D3-E111-8ABF-BCAEC5364CFB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5E1A619D-C7D3-E111-BBD9-001D09F24024.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5CC18056-81D3-E111-9EF7-001D09F2A690.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5C5F24C1-75D3-E111-9D2C-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5AC576AD-D7D3-E111-BF55-001D09F2A690.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5A90C4EE-C8D3-E111-89AE-BCAEC532971D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5A834EE9-74D3-E111-AE6F-001D09F2305C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5A665A3E-D6D3-E111-8CAB-BCAEC5329713.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5A400C30-D1D3-E111-BD3A-001D09F2305C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/58E372F7-CFD3-E111-B4BE-5404A640A63D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5689077C-A9D3-E111-BEA7-BCAEC518FF6E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/54591156-98D3-E111-AED6-BCAEC5329700.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/541775EA-82D3-E111-8B2A-001D09F25479.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/52C99010-D7D3-E111-BAD8-BCAEC5364C93.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/52518C2B-8ED3-E111-A836-003048D2BD66.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/5238DAAE-9ED3-E111-A9E3-001D09F2915A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/520CBAA8-87D3-E111-953A-BCAEC5329708.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/50B2FD23-B6D3-E111-9B81-003048F118E0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/505A1365-C6D3-E111-9EFB-003048678098.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/50211FC1-83D3-E111-AB95-0019B9F70468.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4EEA8D9E-84D3-E111-A6AA-002481E0D790.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4E061FEF-9FD3-E111-92CA-001D09F24FBA.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4CEA342C-8ED3-E111-A882-003048D2BB90.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4CA39686-D7D3-E111-9BBC-001D09F34488.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4C8F8A61-AED3-E111-A3CF-E0CB4E553651.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4AB8C98D-94D3-E111-9D6C-002481E0CC00.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/48760A39-97D3-E111-9C7A-001D09F2932B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/46D7A527-D1D3-E111-85DE-E0CB4E553673.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/44B32FE3-B1D3-E111-88DF-003048F117EC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/42CFCB63-C6D3-E111-8B69-BCAEC5329720.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/40B31CCD-C0D3-E111-A7DE-5404A638869E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/4027D534-6ED3-E111-9439-BCAEC518FF7C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/40040E3F-BCD3-E111-BF23-0025901D6288.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3EAD79C9-A1D3-E111-BA58-001D09F241B9.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3E2EEEA3-6DD3-E111-AB58-003048678098.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3E13E029-8ED3-E111-9C49-0030486780EC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3CD68EB2-C2D3-E111-82F5-0025901D631E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3AA93C36-D4D3-E111-A6EE-0025901D624A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3A3AF5A7-87D3-E111-97BD-BCAEC5364C93.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/38C97BDA-6ED3-E111-876D-001D09F2983F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/38C51F5F-AED3-E111-8530-5404A63886C1.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/38AF10E5-A2D3-E111-8253-003048F1C58C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/386FDDD8-81D3-E111-8B7B-001D09F241B9.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/36DD2B73-96D3-E111-8A6E-5404A63886A8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/368EB4F5-CFD3-E111-B373-0025901D5C80.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/368E04F6-CFD3-E111-B6D9-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/34E40D33-C2D3-E111-B344-003048F110BE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/349C9388-D4D3-E111-9D5A-00237DDBE49C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/32B8CAE2-74D3-E111-9AB7-BCAEC518FF6B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/3297F10E-C5D3-E111-819C-003048D2C01E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/2CB81972-79D3-E111-8B6A-0025B32035A2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/2C682764-BAD3-E111-AF06-00237DDC5CB0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/28691FED-BED3-E111-9EC4-5404A63886CC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/26E927FE-77D3-E111-9AB3-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/26E37BEB-82D3-E111-B0BA-003048D2BC4C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/26A88391-92D3-E111-BC8F-003048CF9B28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/269AAD20-A1D3-E111-9657-003048D37694.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/24A74629-C2D3-E111-BA03-E0CB4E55365D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/2493AC9B-ADD3-E111-9DD4-003048F11942.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/24738499-84D3-E111-94CE-BCAEC5329727.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/22AB4F55-72D3-E111-931C-5404A6388698.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/224E95F2-BED3-E111-A0F1-003048F1C424.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/20F5CCCD-84D3-E111-A504-003048D2C020.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1E3C18FF-77D3-E111-AE3B-00237DDBE0E2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1CDE24B2-C2D3-E111-AC19-BCAEC518FF8A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1CA5C264-BAD3-E111-9475-003048CFB40C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1C5CBA7B-A9D3-E111-98EA-5404A63886B6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1A4A16D8-B9D3-E111-82D0-001D09F24303.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1A143B6E-8AD3-E111-8A72-003048D3C932.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/18757DF3-D6D3-E111-AF95-5404A63886BB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/184E844A-D7D3-E111-A5EA-002481E0D646.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/1682D5A6-7FD3-E111-B3AC-001D09F2424A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/12CE676F-A5D3-E111-83FF-BCAEC5329713.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/125FBB6B-D3D3-E111-A795-002481E0D646.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/10F1968E-B8D3-E111-9DE2-003048F11942.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/10DF7D12-B5D3-E111-90C7-BCAEC53296F3.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/0CEBA7F5-9AD3-E111-AA21-0025901D5DB8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/0C3EA435-D4D3-E111-A19C-0025901D5DEE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/08FAD2F9-9CD3-E111-A043-00237DDC5BBC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/089DFF83-C0D3-E111-83D4-001D09F2525D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/088DF336-D4D3-E111-8A8F-0025901D626C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/04F32824-B4D3-E111-929F-5404A638869E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/04DBE034-C2D3-E111-A9F9-001D09F2512C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/02746071-79D3-E111-AE2C-003048F1C832.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/199/282/00757C58-90D3-E111-A67D-001D09F248F8.root",
# fill 2858
## ## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/318/E83D7D32-22D4-E111-86A8-BCAEC5364C4C.root",
## ## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/318/B044AF4B-1CD4-E111-B627-BCAEC53296F4.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/318/AC6EBE7D-28D4-E111-88E2-BCAEC518FF7C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/318/32741137-22D4-E111-BE27-00237DDBE49C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/318/3078BF82-3AD4-E111-AB87-BCAEC518FF5F.root",
#"/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/318/0A1A4091-1BD4-E111-B477-0025B32445E0.root",
#"/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/318/5E850A7D-14D4-E111-A1CF-001D09F24303.root",
#"/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/318/B6F59161-12D4-E111-9DC1-001D09F241B9.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/FEC77CAF-37D4-E111-89F6-003048F118D4.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/FCC70F48-3BD4-E111-9BCB-5404A63886D4.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/FC735283-39D4-E111-AEDF-0025901D624A.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/FC237955-36D4-E111-A15D-003048D37580.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/F68C21AC-65D4-E111-BAD4-00215AEDFD98.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/EE8CEEF7-3BD4-E111-9286-001D09F25267.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/E42D09B3-65D4-E111-AE8C-003048D2BC62.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/E0D88393-6DD4-E111-A04D-001D09F28F25.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/CC84F0F1-4BD4-E111-AC19-0025B3203898.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/C872BC7C-42D4-E111-B3E1-003048D2C0F0.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/C2FA8B79-3AD4-E111-ABE5-5404A640A642.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/B8DC4239-64D4-E111-B78A-001D09F28EA3.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/B610D272-4BD4-E111-B9BB-00215AEDFCCC.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/B2621577-42D4-E111-A456-00237DDBE49C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/B020059C-44D4-E111-8CC7-5404A63886BB.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/AAED13FF-60D4-E111-BE11-001D09F24FEC.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/A85081AB-36D4-E111-BDE1-BCAEC518FF89.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/944ABC82-39D4-E111-A186-5404A63886D4.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/8C262836-32D4-E111-BA7A-BCAEC518FF7C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/803F7641-5DD4-E111-BFDF-001D09F297EF.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/7A5FF783-3ED4-E111-91B8-003048D3733E.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/6CA2B16D-6AD4-E111-9405-003048D2BC62.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/6487F22E-40D4-E111-AC6B-BCAEC5364C42.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/60D0BE63-5ED4-E111-89E0-001D09F24D8A.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/3CA94B62-50D4-E111-96EF-5404A63886EF.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/300FA3D1-6BD4-E111-8B3E-5404A63886EE.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/26B096BA-48D4-E111-A463-00237DDBE49C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/268EB5C2-75D4-E111-9924-0025901D6288.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/26866A5A-45D4-E111-B156-BCAEC5364C93.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/227D17CC-5BD4-E111-95ED-003048F1183E.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/1A35B0DD-74D4-E111-8F08-001D09F23C73.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/162ADE59-4DD4-E111-9A9D-001D09F2512C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/0A0C4CF7-34D4-E111-9513-5404A63886C5.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/04BC0641-3BD4-E111-A555-BCAEC518FF50.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/199/319/027CCF25-57D4-E111-8085-5404A640A648.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/E66F789D-31D4-E111-9D70-E0CB4E55365C.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/D42C98DB-4BD4-E111-9B15-BCAEC5329717.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/C849CC69-4BD4-E111-AEFA-5404A63886D6.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/C042820D-38D4-E111-9CD3-5404A6388699.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/B090B9F6-6AD4-E111-8291-003048CF99BA.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/AEAB037A-3ED4-E111-9519-003048CF9B28.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/A6D09EEE-61D4-E111-A783-BCAEC518FF8E.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/92035DE3-49D4-E111-B8AE-001D09F24763.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/90E0FE00-5FD4-E111-8FF0-5404A63886BD.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/86763099-54D4-E111-B76E-001D09F24763.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/840734EF-39D4-E111-A450-0025B32036E2.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/7C6E6015-4CD4-E111-A820-0025B32035A2.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/6AAAA6ED-67D4-E111-BF5D-003048D2BF1C.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/689EAFB6-35D4-E111-B9DE-5404A63886B7.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/5CD43F0F-37D4-E111-A691-5404A63886B2.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/4AFC9053-36D4-E111-A14C-003048F024C2.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/3C9B70A2-4ED4-E111-A05D-001D09F28E80.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/3ABEFD30-5DD4-E111-9E09-001D09F2AF96.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/240AE3BF-5DD4-E111-8755-BCAEC5329713.root",
## "/store/data/Run2012C/Commissioning/RECO/PromptReco-v2/000/199/319/22495FAA-37D4-E111-9D1A-003048F1BF68.root",
# run 200781, B=0
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/781/7026A08B-6EE6-E111-B3A6-001D09F28F25.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/781/8C9E6FD8-6DE6-E111-AB15-0030486733B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/781/B472FDD5-6DE6-E111-B1EC-0025901D5DF4.root",
## "/store/data/Run2012C/ZeroBias2/RECO/PromptReco-v2/000/200/781/444239C7-77E6-E111-8AB3-00215AEDFD74.root",
## "/store/data/Run2012C/ZeroBias2/RECO/PromptReco-v2/000/200/781/84EB0AF6-6FE6-E111-8855-0030486780E6.root",
## "/store/data/Run2012C/ZeroBias2/RECO/PromptReco-v2/000/200/781/C2327DC4-77E6-E111-93C5-003048F118AC.root",
## "/store/data/Run2012C/ZeroBias3/RECO/PromptReco-v2/000/200/781/0CD53E26-6DE6-E111-972A-001D09F29321.root",
## "/store/data/Run2012C/ZeroBias3/RECO/PromptReco-v2/000/200/781/1C9ADC26-6DE6-E111-BEA0-003048F118AC.root",
## "/store/data/Run2012C/ZeroBias3/RECO/PromptReco-v2/000/200/781/DE22D826-6DE6-E111-AE8F-00215AEDFD74.root",
## "/store/data/Run2012C/ZeroBias4/RECO/PromptReco-v2/000/200/781/66C51627-6DE6-E111-8DE2-BCAEC518FF74.root",
## "/store/data/Run2012C/ZeroBias4/RECO/PromptReco-v2/000/200/781/96D2EE3A-7BE6-E111-A336-0030486733B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/FECD36CC-ECE7-E111-9223-0025901D5DF4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/FEA90EAA-BFE7-E111-A689-BCAEC518FF74.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/FA546FAA-B2E7-E111-9D08-0025901D5D90.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/FA056E87-DFE7-E111-9C8B-0025B32445E0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F8EA3E8D-07E8-E111-85AC-002481E0D524.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F8DCD02B-D6E7-E111-A14E-00237DDBE49C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F64F390F-E2E7-E111-88D7-003048D2C0F2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F4C5CC90-E5E7-E111-AF76-5404A63886C0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F4270EBB-B3E7-E111-8D31-003048D37666.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F2B97040-C8E7-E111-898E-003048D2BC5C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F289A6ED-88E7-E111-97AC-BCAEC5329700.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/F0A40917-2AE7-E111-A262-BCAEC518FF89.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/EEFE0862-46E7-E111-8620-5404A63886C0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/EE5F5789-1FE7-E111-8042-003048F024E0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/EAA1C906-88E7-E111-B2E3-003048F118DE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E81F9C63-46E7-E111-9EAB-BCAEC5364CFB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E81767BD-8DE7-E111-AB3F-003048D2BDD8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E48A49F8-7EE7-E111-BE7F-001D09F28E80.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E44AB5AD-F7E7-E111-94F2-BCAEC5329708.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E4194014-0AE8-E111-B00F-003048D37456.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E2AE552A-42E7-E111-8E2C-002481E0CC00.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E2A2C70C-7CE7-E111-B4E5-001D09F27067.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/E0DB057C-C5E6-E111-9E83-001D09F24FEC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/DEE7F4AE-42E8-E111-9667-E0CB4E55367F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/DEE62168-45E7-E111-B456-BCAEC5364CED.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/DE127A7D-38E7-E111-82D3-5404A640A643.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/DC190952-16E7-E111-A369-5404A640A648.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/DA166410-48E7-E111-94CF-BCAEC518FF50.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/D8EC030E-E1E6-E111-853A-003048D3756A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/D617BDA9-D0E7-E111-8228-003048F24A04.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/D4ADE567-0AE8-E111-B723-001D09F29146.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/D4506E65-46E7-E111-AD1A-BCAEC53296F4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/CE8407AA-C2E6-E111-A13F-BCAEC518FF41.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/CC2ADFB0-07E8-E111-A89F-003048F1110E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/CA15954B-A1E7-E111-87B1-BCAEC518FF68.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/C8551D05-7CE7-E111-B951-5404A63886B0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/C6D5EAA1-1AE8-E111-91F0-0025901D624A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/C0B625A4-F3E7-E111-9C2E-BCAEC532971E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/C07D0812-B4E7-E111-8816-0030486780B8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/BC9D8712-14E7-E111-8160-001D09F291D7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/BA452ADE-3FE8-E111-8BDE-E0CB4E55367F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/BA3EB2EE-36E7-E111-9991-5404A63886CE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/BA1171EB-15E8-E111-9E07-003048F24A04.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B8AAA964-C3E7-E111-B97F-002481E94C7E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B83C384F-08E8-E111-A357-003048F117EC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B81C4DCA-48E7-E111-BB87-5404A640A643.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B6B1BCF4-84E7-E111-88EA-002481E0D7D8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B2F09DF9-C8E6-E111-A9DC-002481E0D7D8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B079F912-40E7-E111-ADC2-001D09F2441B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/B0412B38-A4E6-E111-902E-5404A63886C1.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/ACB25A41-2BE7-E111-A7DF-0025901D625A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/AC9CFFC1-3DE8-E111-BF0D-E0CB4E55367F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/AC6F6642-7FE7-E111-B16C-5404A6388698.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/AAFCE5E8-09E8-E111-BB20-5404A638869C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/AA17B2D7-93E7-E111-85B2-BCAEC5329708.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A882B255-E8E7-E111-AD9C-0025901D5DF4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A80DF0C9-10E7-E111-9A00-BCAEC518FF41.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A6FF0384-52E7-E111-BB8C-002481E0D73C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A612B124-42E7-E111-A046-003048F1182E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A4812A56-E8E7-E111-BA0C-BCAEC532971F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A42AD103-09E8-E111-BB9A-003048D2BC38.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A2A33A5B-9FE7-E111-839F-5404A63886C6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A2809EE2-17E8-E111-A5A0-BCAEC518FF6B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A0F460E4-91E7-E111-A9D6-0025901D5DEE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A0F208DF-1DE8-E111-AA84-001D09F29169.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A0F0964C-34E7-E111-9E7D-003048F117F6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A09E3CB4-40E8-E111-B7C3-001D09F25267.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A030147E-CCE7-E111-9CB3-BCAEC532971B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/A0298DA1-3BE8-E111-84B5-001D09F29114.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/9CE01877-18E8-E111-ACA5-BCAEC518FF8E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/9AFAC9D8-33E8-E111-9AF5-E0CB4E55367F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/9AD41A7C-6BE7-E111-990F-00237DDC5C24.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/961D2CA7-C2E6-E111-932A-5404A63886A8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/94BBB4B9-16E8-E111-B99C-0025901D5E10.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/9024AC47-2CE8-E111-BF11-E0CB4E553673.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/9023509D-1AE8-E111-9398-0025901D631E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/8E859DD3-19E7-E111-8274-003048D37694.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/8E81632B-1DE8-E111-96F7-001D09F2525D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/8C796ED7-43E7-E111-9D0E-003048F024FA.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/8C3E3CC3-ECE7-E111-BA5C-003048F118C4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/88EB3A2E-67E7-E111-89AF-5404A63886CC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/88C94E1C-8CE7-E111-821D-00237DDC5CB0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/86A52DCC-BEE7-E111-830A-BCAEC532970F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/8414AF26-80E7-E111-AE90-5404A63886B2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/840A5CE4-C4E7-E111-A362-003048D2BB58.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/8287223A-B5E7-E111-81C5-BCAEC518FF62.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/826DFABE-26E8-E111-BC28-485B3962633D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/823451A3-93E7-E111-80C1-5404A6388698.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/80863B2A-C3E7-E111-B87B-5404A63886D6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/7EC6253E-18E8-E111-9235-0025901D6272.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/7EA18D64-C3E7-E111-9814-BCAEC518FF68.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/7E88432B-ACE6-E111-839F-001D09F28D54.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/7E136395-C3E7-E111-A7B3-00237DDBE0E2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/7A4FAAD2-93E7-E111-9816-BCAEC518FF5F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/7A4B7427-6AE7-E111-AA49-00237DDC5C24.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/76CBBC02-7FE7-E111-AD4A-001D09F2447F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/72D5D2DF-AEE6-E111-9DDF-5404A638869B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/728B65AF-8BE7-E111-B5F8-BCAEC5329700.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/723CA5F5-CDE6-E111-A12E-001D09F24D8A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/722D0CAB-8DE7-E111-B4D3-003048D2BC30.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/704CF854-07E8-E111-8792-BCAEC518FF6E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/6EFBA375-B2E6-E111-AC73-5404A63886BB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/6CCBC1CF-DEE7-E111-AA6B-003048F1BF66.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/6C625004-20E8-E111-BE58-003048F117B6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/6A98FD85-B9E6-E111-9392-0025901D5C86.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/6893E615-56E7-E111-8509-BCAEC518FF41.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/68859900-C6E7-E111-BCC4-003048D2C092.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/66F82236-A0E7-E111-B038-0025901D629C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/66A861E8-D6E7-E111-A0C6-001D09F2512C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/6465FD59-3CE7-E111-BC35-003048D37456.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/625AADE2-E3E6-E111-95E6-5404A63886B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/60A3106A-4DE7-E111-B0A5-0030486780B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5ED0C10F-A3E7-E111-849A-00237DDC5CB0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5EB040D4-C1E6-E111-AF68-003048D2C16E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5E3E5D6D-C3E7-E111-8CCE-002481E0D7C0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5CD2A977-07E8-E111-B051-BCAEC53296F7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5C5CA45D-1FE8-E111-A1FE-001D09F2910A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5A9F24CE-2AE7-E111-8F30-003048F117EA.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5851F0D2-90E7-E111-B67B-001D09F297EF.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5845E2CE-EAE7-E111-8343-003048F117B4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/569EBDAB-E0E7-E111-B465-003048F118AA.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/567FA57F-07E8-E111-9B63-5404A63886A8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/54FE3DDC-17E8-E111-B119-BCAEC518FF68.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/542C6CF8-3AE8-E111-BCFD-001D09F25267.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5419D1B8-19E8-E111-BBA0-003048F11C28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/52C3DAF4-08E8-E111-9B97-001D09F24664.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/52B9D387-9BE7-E111-9F98-E0CB4E55365D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/52945E7D-1CE8-E111-9957-001D09F2910A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/5055C4CF-37E7-E111-9B6E-BCAEC518FF8D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/500A8D7D-10E8-E111-8D81-E0CB4E4408E3.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/4EED44FE-88E7-E111-943E-00215AEDFD98.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/4E5ECB45-CDE6-E111-B5A7-BCAEC518FF44.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/48FE10F6-09E8-E111-97F1-BCAEC518FF6B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/46434494-40E8-E111-817B-001D09F26509.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/44734DE7-CFE7-E111-BE7E-5404A63886AB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/40FE72BD-CAE7-E111-8E20-BCAEC518FF65.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/40C2A0C3-36E8-E111-9711-001D09F29619.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/409BEFC8-E1E7-E111-A4AC-0025B32445E0.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/3AFB8F03-30E7-E111-B140-BCAEC532970F.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/3ACFC54C-32E7-E111-ADA2-BCAEC518FF44.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/3A2CDA1C-F5E7-E111-8C8A-003048678098.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/38B76F4A-08E8-E111-A586-BCAEC518FF6B.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/38B17738-B4E7-E111-B12C-001D09F2A49C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/38A00331-1DE8-E111-93C4-0019B9F4A1D7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/38758183-55E7-E111-AF3B-BCAEC532971D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/36D689DE-A7E6-E111-989A-BCAEC518FF67.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/3658B63E-01E8-E111-B72A-BCAEC532971D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/348C5417-C7E7-E111-9EE3-485B3962633D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/326B536F-B2E6-E111-8473-5404A63886E6.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/30DE3F19-8CE7-E111-BB0C-003048F1C836.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/30AB0E41-C8E7-E111-8B7E-001D09F2512C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/300E756C-7EE7-E111-8621-0025901D5DEE.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/2E4AD4F4-70E7-E111-9E20-BCAEC518FF40.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/2AD18703-95E7-E111-ABF4-003048F11C28.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/2A9668B9-42E8-E111-ADAE-001D09F29321.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/28DA4C22-A5E7-E111-98D5-BCAEC5329719.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/2889C694-E1E7-E111-ABE5-003048D2C16E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/26DF1F06-18E8-E111-9B67-5404A63886B7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/264E7B06-EEE7-E111-977D-001D09F28D54.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/260526BB-36E8-E111-AAA1-001D09F2A465.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/247F656C-CAE6-E111-9921-001D09F2525D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/22AA5E32-1EE7-E111-A1CF-003048D2BA82.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/1EF6C2FD-C3E7-E111-8A8F-003048F1182E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/1EA56B14-56E7-E111-80ED-0025B32036E2.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/1CB66784-30E7-E111-BEFD-BCAEC5364C62.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/1C46B2D5-22E7-E111-A017-003048F1182E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/1AF70E0C-95E7-E111-8730-003048D37560.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/18CFCC0F-E9E7-E111-8B49-0025901D5C86.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/18A7D8AD-53E7-E111-8A06-001D09F28EA3.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/188BAA82-55E7-E111-9054-5404A640A648.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/183ECE04-89E7-E111-93E9-001D09F2424A.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/1677D0BC-6AE7-E111-9789-0030486733D8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/16755B54-DBE7-E111-99DB-BCAEC5329700.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/12BE4C0B-F8E7-E111-BC85-BCAEC5329702.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/12B466E7-15E8-E111-AF6D-003048D2C108.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/10542285-83E7-E111-8498-003048F118D4.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/100CD1CD-51E7-E111-979A-0025901D5DB8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0E829C59-D1E7-E111-AB0C-0025901D629C.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0E3D1FA9-C7E6-E111-91B2-5404A63886B7.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0CA54A78-CAE7-E111-8C29-BCAEC518FF54.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0C70B306-EFE7-E111-985E-002481E0D7D8.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0C39A124-70E7-E111-9B6F-001D09F2462D.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0AA7F5A5-F7E6-E111-9C23-BCAEC5329720.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0A908D12-F5E7-E111-803D-5404A63886A5.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0A89708C-2CE7-E111-93A5-0025901D6268.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0A42EC81-E3E7-E111-BBD5-BCAEC518FF6E.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/08CE5605-A3E7-E111-8978-BCAEC518FF41.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/08BCF32F-39E7-E111-8643-0025B32034EA.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/08B7052A-6EE7-E111-BBBC-0030486780EC.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/06E57763-46E7-E111-A27D-0025901D5C86.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/0276AE4F-69E7-E111-9378-5404A63886CB.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/00A1E991-2CE7-E111-9053-E0CB4E4408E3.root",
## "/store/data/Run2012C/ZeroBias1/RECO/PromptReco-v2/000/200/786/001CB74F-16E8-E111-A88F-5404A638869C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/F6C1E89C-9DF0-E111-8644-5404A638869B.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/EC8CD0C5-7BF0-E111-8289-003048F118C4.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/E69762EE-89F0-E111-867B-5404A63886C1.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/D270E54A-7CF0-E111-89D7-003048F1BF66.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/BA5C8DB2-89F0-E111-9927-BCAEC53296FB.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/AA33C323-6CF0-E111-939C-003048D37538.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/A28A4B3D-A3F0-E111-93CB-5404A63886C1.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/848C2069-A0F0-E111-A39F-BCAEC518FF7C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/4273B153-94F0-E111-8448-5404A63886B2.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/424FE001-81F0-E111-8481-0030486780A8.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/201/624/2CCCC2EA-8FF0-E111-847A-5404A63886CE.root",
# fill 3071
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/ECC7C56C-5602-E211-9F9B-0025901D6288.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/ECAEE003-6802-E211-8F3A-001D09F244DE.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/EA478775-9002-E211-876D-003048F1C424.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/DAE6F3B5-4F02-E211-838C-003048F117EA.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/C8FF2344-3002-E211-831C-001D09F24399.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/C81042C3-4F02-E211-8B75-003048D374F2.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/BE0759FA-AA02-E211-81E9-001D09F28EA3.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/BC117AA0-9002-E211-BD58-0019B9F4A1D7.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/B8A8DFE1-2E02-E211-894A-485B3977172C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/B6240BBD-6202-E211-9E1A-003048F024FA.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/B4435C66-5A02-E211-BA80-003048D2C0F2.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/B25FEF7D-9002-E211-BFED-001D09F2B30B.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/B22A307C-9002-E211-8DB3-001D09F248F8.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/ACBD22BB-9002-E211-8174-001D09F25479.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/ACAA75AC-5002-E211-AAFB-001D09F25109.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/A8A383BC-5102-E211-B908-0019B9F70468.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/A87FA1FB-2702-E211-98C1-BCAEC5329709.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/A875CB2A-5602-E211-AC21-001D09F23D1D.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/A80C2B6D-9002-E211-B027-003048F1C58C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/A22EEDD8-AA02-E211-8788-0025B32036D2.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/9E45E0C4-2702-E211-8295-0025901D627C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/9CF92AC9-5802-E211-840A-5404A63886BE.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/9CD8087D-3802-E211-B1FC-0019B9F72CE5.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/9A8905B0-4F02-E211-B1C3-001D09F2983F.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/905612D6-9002-E211-9366-001D09F2462D.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/8ED8CE85-9002-E211-8268-BCAEC518FF6B.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/8A167F6E-9002-E211-A753-003048F11CF0.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/866E5AB2-4F02-E211-8833-003048D3756A.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/860A4275-9002-E211-BEE0-0025901D627C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/84D74272-9002-E211-8987-485B39897227.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/849D35AF-4F02-E211-98DA-003048D2BED6.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/80B63F2F-6002-E211-B82E-001D09F29533.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/7EF6DFB7-6202-E211-ACD4-0025B320384C.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/7A8720C9-6502-E211-AC34-003048D2BC52.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/5C9B937B-9002-E211-BEBE-BCAEC518FF74.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/5C5AD239-6102-E211-9C92-5404A640A643.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/524B616C-9002-E211-8D9A-003048F1182E.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/4A7805D9-AA02-E211-AFB5-003048CF94A8.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/4A4296E2-AA02-E211-8916-5404A6388698.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/38A919AC-4F02-E211-AA09-5404A640A648.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/38588776-9002-E211-8EC6-E0CB4E55367F.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/36D448D5-AA02-E211-AFF9-BCAEC5329705.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/2C215F1C-5602-E211-944B-001D09F29524.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/1E23A8F7-BF02-E211-8880-003048678098.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/1C256F6F-9002-E211-86EA-BCAEC518FF8A.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/1AE33299-5002-E211-A138-BCAEC5329717.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/1A3FE904-2802-E211-A156-001D09F29533.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/147F27BA-9002-E211-8934-001D09F24DA8.root",
## "/store/data/Run2012C/MinimumBias/RECO/PromptReco-v2/000/203/002/129CC79C-5702-E211-8F5A-485B3962633D.root",
# fill 3110, 203853
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/203/853/9EECE9DF-B60B-E211-8CFE-001D09F2906A.root",
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/203/853/882B0EC7-B40B-E211-A0F2-001D09F2906A.root",
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/203/853/72D6848D-BF0B-E211-9EA9-003048D2C174.root",
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/203/853/66306A5F-BA0B-E211-830A-003048D2C0F2.root",
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/203/853/1A294A8F-BC0B-E211-AFE5-001D09F2906A.root",
# fill 3138, 204599
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/E084F1DE-0A13-E211-B10B-BCAEC5329708.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/D8D4F5A5-0113-E211-830D-5404A63886BE.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/D023509A-0213-E211-8211-001D09F28D54.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/BC3B9A63-1313-E211-AC8E-003048F11114.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/B05DEACF-FF12-E211-B4D4-0025901D624E.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/9A6810DB-1A13-E211-85BF-BCAEC518FF62.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/8A7052CC-1713-E211-B91B-0025B32445E0.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/7A27763C-1413-E211-AFD2-5404A63886BE.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/6A4FAC3B-1713-E211-9627-BCAEC518FF74.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/5CCD5414-2613-E211-9657-BCAEC518FF41.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/509B2695-0213-E211-A70C-001D09F28E80.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/4A1E8FCB-FF12-E211-9995-BCAEC518FF89.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/409AC08D-2113-E211-B051-0025901D5DB8.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/3891EADF-2913-E211-9327-0025B32035A2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/204/599/0063F1BF-1713-E211-9620-0025901D62A0.root",
# fill 3207, 205718
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/F2ACC626-DB1D-E211-90A7-003048F024DE.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/F2A9F6A8-D61D-E211-87E7-001D09F2437B.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/DEA53A15-FD1D-E211-96E9-0025B3203898.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/DE532F21-DB1D-E211-ABCF-0030486780A8.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/DAF4D3EC-091E-E211-ACF7-5404A640A643.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/D2EC5795-DE1D-E211-A4A3-003048F1C82A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/CA8972E1-231E-E211-8801-001D09F23F2A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/C41C48FD-F01D-E211-AA63-00237DDBE41A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/BE545438-131E-E211-9C78-BCAEC518FF40.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/B8E8D1C6-EC1D-E211-92A6-BCAEC518FF68.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/B8D812FF-051E-E211-83FD-00237DDC5C24.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/8C88DC31-DB1D-E211-AE00-00215AEDFD98.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/8AA894EB-FF1D-E211-82B2-003048D2BC4C.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/84B7A0B7-2D1E-E211-BC40-5404A63886BE.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/70060C5E-DC1D-E211-B446-003048F1C58C.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/6090F10F-F81D-E211-896F-003048673374.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/546B4838-F01D-E211-AEA1-003048D2BD28.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/4C3ECC8A-F41D-E211-ABA4-0025901D5DB2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/46F4F880-121E-E211-945D-003048D2BD66.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/3CA30833-241E-E211-AC65-0025901D5DEE.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/3A3CC81B-DB1D-E211-A83A-BCAEC5329713.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/32C26601-DE1D-E211-9973-0025B32034EA.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/2E9A731F-E41D-E211-A427-BCAEC518FF50.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/1E563A35-E81D-E211-B94B-BCAEC5329709.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/1E1F7BF3-D91D-E211-8933-002481E0D524.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/1CAE3300-141E-E211-8B6B-00215AEDFD74.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/1AC502BC-E71D-E211-A514-BCAEC518FF54.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/126CB092-5F1E-E211-BFD0-003048F11DE2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/0CF52756-0B1E-E211-89F1-003048D2C108.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/0C9543C3-1E1E-E211-A9FC-001D09F28E80.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/0A1E7E95-081E-E211-B394-BCAEC518FF44.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/205/718/02E36D68-E31D-E211-AF30-003048D37560.root",
# fill 3273 run 206940
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/FA55823C-312C-E211-94AB-001D09F29533.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/ECC47D25-282C-E211-9503-001D09F24D67.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/E4CD3D82-0A2C-E211-B072-0015C5FDE067.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/E241CCA4-372C-E211-8E11-001D09F291D2.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/DEEE805C-1D2C-E211-8ADE-0019B9F72F97.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/D86EF7B2-262C-E211-BEAB-003048F118C6.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/D6E8B8A3-152C-E211-9EBE-001D09F248F8.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/D085A321-392C-E211-901A-BCAEC532971D.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/C6429DA7-152C-E211-99F8-001D09F291D2.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/A86476FE-2C2C-E211-9586-485B3977172C.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/A499F954-272C-E211-A72B-001D09F24763.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/9ED3CFEC-022C-E211-BAB6-003048D3751E.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/98F5F80E-002C-E211-899D-002481E0D7D8.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/92F99AFE-3D2C-E211-86F2-002481E94C7E.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/90B64ABA-262C-E211-B3B9-00237DDBE0E2.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/908CDADC-252C-E211-8EE0-0025901D626C.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/8E953609-322C-E211-92F2-003048678110.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/889E0DED-052C-E211-B0C9-5404A63886AD.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/6A5A3C96-302C-E211-AE1A-001D09F24664.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/68AA4E8D-1A2C-E211-A551-003048D2BA82.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/68735AF9-512C-E211-B0BC-003048D2C0F0.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/5E8892F6-2A2C-E211-B378-001D09F27067.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/5C0FC841-2A2C-E211-AB41-001D09F25267.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/58CCE83B-4C2C-E211-BE24-001D09F24303.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/588B9F22-FF2B-E211-9610-5404A63886CF.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/4CE79042-202C-E211-93A6-003048D2BB58.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/3EC73922-342C-E211-914E-00215AEDFD74.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/3CCA5DF8-422C-E211-8302-0025901D5E10.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/3AC7225A-082C-E211-BEEC-5404A63886C1.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/38DB8AA8-172C-E211-96AD-001D09F23D1D.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/2C797B1E-1F2C-E211-A986-001D09F2983F.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/226FB799-2B2C-E211-9BCA-001D09F23D1D.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/02D3B6A8-152C-E211-9176-001D09F24FBA.root",
"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/206/940/0223D51D-192C-E211-B06D-001D09F29619.root",
# fill 3297, 207469
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/469/C2B693B7-5133-E211-ADA5-00215AEDFD98.root",
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/469/B65A3EA3-2D33-E211-9C97-003048D3750A.root",
#"/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/469/9E2D078F-4F33-E211-8A94-5404A63886A9.root",
# fill 3298, 207477
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/00F73F89-4133-E211-A502-BCAEC518FF68.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/0217FDEC-6C33-E211-9140-003048F118E0.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/0AA58A41-2733-E211-9FB5-5404A63886B9.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/16DDCC18-4733-E211-9467-5404A63886B0.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/26F62CB2-5133-E211-8EAA-BCAEC518FF65.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/32EEB561-4233-E211-8D13-001D09F25109.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/3CF7C6A1-3333-E211-90E3-5404A63886BD.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/6C568240-4533-E211-87C8-003048F117B6.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/7011A04E-5A33-E211-A9FD-5404A63886EC.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/949ACCAA-3333-E211-8429-5404A63886AE.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/94B1A3F7-7033-E211-A78E-BCAEC532970F.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/9C7F723D-4533-E211-9D0E-003048CF94A6.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/A260E4A3-5D33-E211-BED7-0025B32035A2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/AEE6557E-5233-E211-A218-BCAEC5329702.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/C0D8D082-1533-E211-9A2C-5404A6388692.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/C206CDDA-3133-E211-8A5E-0025B3203898.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/C2219D33-4833-E211-9E9E-5404A63886B2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/C4712C03-2433-E211-9615-001D09F2AD84.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/D6BC2EAD-5133-E211-B0C9-0025901D629C.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/E0A7D205-4C33-E211-A5B4-E0CB4E4408E3.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/E236D88C-7433-E211-8471-003048F1C58C.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/E4A9C446-2733-E211-ACF2-0025901D5E10.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/EC6AA7C0-5B33-E211-85C2-003048673374.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/ECCE1F42-5333-E211-B37D-0019B9F72F97.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/EE59238B-5633-E211-B154-BCAEC5329702.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/207/477/FECFED3C-4533-E211-9699-003048D2BDD8.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/F60495B3-1E41-E211-BB7C-003048D3756A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/F2BA6B22-2C41-E211-9D7A-003048D2BED6.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/E4E6B318-2041-E211-B351-001D09F29114.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/B27AC385-3241-E211-AD10-0019B9F4A1D7.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/AC6EF0B7-4941-E211-9EFB-003048D374F2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/AA4018D3-2C41-E211-8279-00215AEDFD98.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/A8CF653C-4D41-E211-811E-003048673374.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/98EEEB5E-4A41-E211-A591-001D09F25460.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/96BE2949-2241-E211-9993-001D09F23F2A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/90F6F479-2641-E211-99E5-001D09F29524.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/8AAFC294-2141-E211-89E8-003048F1182E.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/82B885ED-2241-E211-9877-001D09F252E9.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/6440884D-2941-E211-BBA9-0025901D6288.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/604E8D2C-2741-E211-B542-003048F11C28.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/4EB6D745-2241-E211-9738-001D09F24D8A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/3E863C44-2241-E211-9255-001D09F25041.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/3E76F7E8-2741-E211-8249-003048D37666.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/3C1D83B8-3641-E211-8C66-0025B32035A2.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/2A12A045-2241-E211-8BF5-001D09F2915A.root",
## "/store/data/Run2012D/MinimumBias/RECO/PromptReco-v1/000/208/686/1661335F-3041-E211-9B96-00237DDBE0E2.root",
# HI
# fill 3478, run 210534
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/EA29B575-1865-E211-AACC-BCAEC518FF65.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/EA20D405-1865-E211-967D-BCAEC5329716.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/E08072AF-2E65-E211-92B5-001D09F24664.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/DE498BF4-1865-E211-9087-5404A63886C1.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/DAB793F5-1865-E211-A34A-BCAEC5329701.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/D8FA72FD-1865-E211-8ABE-003048F118D2.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/CC0D3101-1965-E211-AD07-BCAEC518FF7E.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/C49A6303-1965-E211-8D40-0025901D624A.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/C2F5B2F1-1865-E211-9CA7-BCAEC532971D.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/C252D102-1965-E211-9CC5-BCAEC518FF6B.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/BCB90CF3-1865-E211-8C66-5404A63886D6.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/B63B290D-2A65-E211-B03D-0030486733D8.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/B491BE5C-1965-E211-9C6B-BCAEC518FF76.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/B0E8D029-1965-E211-97C5-003048D2BE08.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/AA1521EF-1865-E211-BCB4-BCAEC518FF7A.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/A2613BFA-1865-E211-AB55-BCAEC518FF8D.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/9ACFF272-3265-E211-B582-5404A63886B0.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/98763587-4565-E211-94BB-BCAEC5364C62.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/941169FD-1865-E211-8C6C-BCAEC53296F8.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/903C9383-3765-E211-B7A4-5404A63886C1.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/844E8C8A-4565-E211-9753-003048678110.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/7A7F7C09-1965-E211-AF18-BCAEC518FF41.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/74018417-1965-E211-A275-BCAEC518FF74.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/7065AD32-1A65-E211-8415-0025901D5C86.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/6A2E32F6-1865-E211-B3F2-0025901D5C88.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/6498550A-1965-E211-80B2-001D09F28D4A.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/56E9BFDE-2865-E211-8BCA-001D09F24D67.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/4E6E13FB-1865-E211-859A-003048F117EA.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/4CD1BF06-1965-E211-8F2B-003048F11112.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/402CCA76-1865-E211-9F05-0025901D6268.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/3C728574-1865-E211-8F3E-003048F11942.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/3A9CA223-1A65-E211-B733-BCAEC518FF44.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/38E46556-2465-E211-9514-001D09F24664.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/361A1128-1965-E211-A61B-0025901D5D78.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/34D9070B-1965-E211-84B0-BCAEC518FF54.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/3259F06D-1865-E211-B535-003048F024C2.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/2237D1F5-1865-E211-A191-002481E0D7EC.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/164F482C-1965-E211-9A0D-002481E0D524.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/066FF3EC-1A65-E211-B4B0-0025901D5DB2.root",
## "/store/hidata/HIRun2013/PAMinBiasUPC/RECO/PromptReco-v1/000/210/534/00000/0448E6F8-4E65-E211-B7E0-003048D3C980.root",
##"/store/caf/user/venturia/logerrevent_HI2013_express_v1_210634_210635_v14.root"
)
)
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('124230:26-124230:9999','124030:2-124030:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('190389:40-190389:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('191271:55-191271:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('191718:30-191718:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('193621:58-193621:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('193998:63-193998:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('194050:52-194050:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('194912:52-194912:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('194912:52-194912:330')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('195099:61-195099:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('195109:85-195109:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('196531:61-196531:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('198609:47-198609:112')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('199282:44-199282:1477')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('199318:64-199318:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('200781:72-200781:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('201624:82-201624:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('203002:74-203002:1596')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('203853:122-203853:229')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('204599:72-204599:9999')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('205718:49-205718:734')
process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('206940:0-206940:1027')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('207469:0-207469:51')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('207477:76-207477:570')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('208686:73-208686:463')
#process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('210534:24-210534:347')
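# Example (sketch): VLuminosityBlockRange accepts several 'run:firstLumi-run:lastLumi'
# strings, so more than one of the ranges above can be combined in a single job, as is
# already done for runs 124230/124030 further up, e.g.:
# process.source.lumisToProcess = cms.untracked.VLuminosityBlockRange('206940:0-206940:1027',
#                                                                     '207477:76-207477:570')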
process.TFileService = cms.Service("TFileService",
    fileName = cms.string('h.root')
)
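# The analyzer books its histograms through the TFileService above; after the job the
# output can be inspected interactively (assuming a ROOT environment), e.g. root -l h.root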
process.load("Configuration.Geometry.GeometryIdeal_cff")
process.load("Configuration.StandardSequences.MagneticField_38T_cff")
# purpose unclear, left disabled:
# process.load("Configuration.StandardSequences.Services_cff")
# purpose unclear, left disabled:
# process.load("SimTracker.Configuration.SimTracker_cff")
# needed for global transformation
# process.load("Configuration.StandardSequences.FakeConditions_cff")
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")# Choose the global tag here:
process.GlobalTag.globaltag = "GR_P_V40::All"
# process.GlobalTag.globaltag = "GR_P_V28::All" 2012 A&B
# 2011
# process.GlobalTag.globaltag = "GR_P_V20::All"
# process.GlobalTag.globaltag = "GR_R_311_V2::All"
# 2010
# process.GlobalTag.globaltag = 'GR10_P_V5::All'
# process.GlobalTag.globaltag = 'GR10_P_V4::All'
# OK for 2009 LHC data
#process.GlobalTag.globaltag = 'CRAFT09_R_V4::All'
process.d = cms.EDAnalyzer("TestClusters",
    Verbosity = cms.untracked.bool(False),
    src = cms.InputTag("siPixelClusters"),
    Select1 = cms.untracked.int32(1),  # cut on the number of dets: <4 skip, 0 means 4 (default)
    Select2 = cms.untracked.int32(0),  # 6 = require no BPTX, 0 = no selection
)
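# Example (sketch): individual parameters can also be overridden after the module is
# defined, e.g. to switch on verbose printout for debugging:
# process.d.Verbosity = cms.untracked.bool(True)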
#process.p = cms.Path(process.hltPhysicsDeclared*process.hltfilter*process.d)
process.p = cms.Path(process.hltPhysicsDeclared*process.d)
#process.p = cms.Path(process.hltfilter*process.d)
#process.p = cms.Path(process.d)
# define an EndPath to analyze all other path results
#process.hltTrigReport = cms.EDAnalyzer( 'HLTrigReport',
# HLTriggerResults = cms.InputTag( 'TriggerResults','','' )
#)
#process.HLTAnalyzerEndpath = cms.EndPath( process.hltTrigReport )
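# To run this configuration (assuming the CMSSW runtime environment is set up, e.g. via cmsenv):
#   cmsRun <this_config_file>.py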
| 99.560957 | 138 | 0.790314 | 18,272 | 129,031 | 5.577167 | 0.190072 | 0.148922 | 0.127696 | 0.118492 | 0.698654 | 0.681716 | 0.673061 | 0.663258 | 0.663258 | 0.661306 | 0 | 0.314679 | 0.025893 | 129,031 | 1,295 | 139 | 99.637838 | 0.496094 | 0.930885 | 0 | 0 | 0 | 0.425 | 0.588775 | 0.557826 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.025 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60398922213b2fdcaccc86d21b6674e10df5d89a | 23,208 | py | Python | pirates/leveleditor/worldData/interior_spanish_store_tattoo.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | ["BSD-3-Clause"] | 3 | 2021-02-25T06:38:13.000Z | 2022-03-22T07:00:15.000Z | pirates/leveleditor/worldData/interior_spanish_store_tattoo.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | ["BSD-3-Clause"] | null | null | null | pirates/leveleditor/worldData/interior_spanish_store_tattoo.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | ["BSD-3-Clause"] | 1 | 2021-02-25T06:38:17.000Z | 2021-02-25T06:38:17.000Z | # uncompyle6 version 3.2.0
# Python bytecode 2.4 (62061)
# Decompiled from: Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)]
# Embedded file name: pirates.leveleditor.worldData.interior_spanish_store_tattoo
from pandac.PandaModules import Point3, VBase3, Vec4, Vec3
objectStruct = {'Objects': {'1156268617.43dzlu0j': {'Type': 'Building Interior', 'Name': '', 'Instanced': True, 'Objects': {'1172095480.47kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': VBase3(129.334, 0.0, 0.0), 'Pos': Point3(-13.634, 6.934, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.4300000071525574, 0.3499999940395355, 0.3499999940395355, 1.0), 'Model': 'models/props/shop_tatoo_bottles'}}, '1172095536.58kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-4.967, 11.284, 2.83), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/shop_tatoo_heater'}}, '1172100435.43kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-4.818, 11.41, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/table_shanty'}}, '1172100717.96kmuller': {'Type': 'Furniture', 'DisableCollision': True, 'Hpr': VBase3(90.427, 0.0, 0.0), 'Pos': Point3(-19.193, -3.644, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cabinet_spanish'}}, '1172100724.71kmuller': {'Type': 'Furniture', 'DisableCollision': True, 'Hpr': VBase3(90.427, 0.0, 0.0), 'Pos': Point3(-18.851, -14.336, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cabinet_spanish'}}, '1172100752.18kmuller': {'Type': 'Furniture', 'DisableCollision': True, 'Hpr': VBase3(90.427, 0.0, 0.0), 'Pos': Point3(-18.754, -8.986, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cabinet_spanish_low'}}, '1172101251.99kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(1.277, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(13.454, 11.635, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.699999988079071, 0.5699999928474426, 0.4699999988079071, 1.0), 'Model': 'models/props/table_shanty'}}, '1172101331.02kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': VBase3(1.277, 0.0, 0.0), 'Pos': Point3(13.053, 11.627, 2.971), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/shop_tatoo_heater'}}, '1172101372.05kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(51.097, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(12.276, 5.021, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/table_bar_square'}}, '1172101830.61kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': VBase3(90.945, 0.0, 0.0), 'Pos': Point3(-18.834, -8.579, 3.345), 'Scale': VBase3(0.854, 0.854, 0.854), 'Visual': {'Model': 'models/props/shop_doctor_bottles'}}, '1172101986.27kmuller': {'Type': 'Trunks', 'DisableCollision': True, 'Hpr': VBase3(72.057, 0.0, 0.0), 'Pos': Point3(-19.016, -2.673, 3.256), 'Scale': VBase3(0.548, 0.548, 0.548), 'Visual': {'Model': 'models/props/Trunk_rounded'}}, '1172102034.1kmuller': {'Type': 'Sack', 'DisableCollision': True, 'Hpr': VBase3(87.338, 0.0, 0.0), 'Pos': Point3(-19.321, -4.363, 4.881), 'Scale': VBase3(0.421, 0.421, 0.421), 'Visual': {'Model': 'models/props/Sack'}}, '1172102089.18kmuller': {'Type': 'Sack', 'DisableCollision': True, 'Hpr': VBase3(87.338, 0.166, 0.0), 'Pos': Point3(-19.413, -4.384, 5.493), 'Scale': VBase3(0.421, 0.421, 0.421), 'Visual': {'Model': 'models/props/Sack'}}, '1172102242.03kmuller': {'Type': 'Jugs_and_Jars', 'DisableCollision': False, 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(-18.069, -12.725, 6.953), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/jar'}}, 
'1172102341.66kmuller': {'Type': 'Mortar_Pestle', 'DisableCollision': False, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-18.06, -15.625, 5.076), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/mortar_pestle_stone'}}, '1172102400.72kmuller': {'Type': 'Crate', 'DisableCollision': True, 'Hpr': VBase3(11.4, 0.0, 0.0), 'Pos': Point3(-18.33, -13.146, 3.257), 'Scale': VBase3(0.431, 0.431, 0.431), 'Visual': {'Model': 'models/props/crate'}}, '1172102479.08kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(44.574, 0.0, 0.0), 'Pos': Point3(10.319, 6.724, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.3100000023841858, 0.25999999046325684, 0.25, 1.0), 'Model': 'models/props/stool_shanty'}}, '1172102495.93kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(-37.021, 0.0, 0.0), 'Pos': Point3(16.124, 3.592, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.5799999833106995, 0.47999998927116394, 0.4000000059604645, 1.0), 'Model': 'models/props/chair_shanty'}}, '1172102515.35kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(-19.437, 0.0, 0.0), 'Pos': Point3(-6.532, 4.023, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.5799999833106995, 0.47999998927116394, 0.4000000059604645, 1.0), 'Model': 'models/props/chair_shanty'}}, '1172102612.1kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(-0.506, 0.0, 0.0), 'Pos': Point3(13.258, -26.678, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.3100000023841858, 0.25999999046325684, 0.25, 1.0), 'Model': 'models/props/bench_shanty_2'}}, '1172102617.41kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(-0.506, 0.0, 0.0), 'Pos': Point3(12.494, -17.563, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.3100000023841858, 0.25999999046325684, 0.25, 1.0), 'Model': 'models/props/bench_shanty_2'}}, '1172102641.11kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(92.532, 0.0, 0.0), 'Pos': Point3(13.213, -21.9, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.3100000023841858, 0.25999999046325684, 0.25, 1.0), 'Model': 'models/props/table_shanty_2'}}, '1172102778.21kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': VBase3(0.295, 0.0, 0.0), 'Pos': Point3(-4.971, 12.716, 6.646), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.6700000166893005, 0.7900000214576721, 0.7799999713897705, 1.0), 'Model': 'models/props/shop_tatoo_sample'}}, '1174687513.07dzlu': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '120.0000', 'DropOff': '1.3636', 'FlickRate': '0.5000', 'Flickering': False, 'Hpr': VBase3(-84.14, -65.515, -60.818), 'Intensity': '1.1515', 'LightType': 'SPOT', 'Pos': Point3(-8.431, 8.768, 49.745), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (1.0, 0.87, 1.0, 1.0), 'Model': 'models/props/light_tool_bulb'}}, '1176415428.42dzlu': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '43.4639', 'DropOff': '2.7273', 'FlickRate': '0.5000', 'Flickering': False, 'Hpr': VBase3(105.328, -16.239, 11.996), 'Intensity': '0.6627', 'LightType': 'SPOT', 'Pos': Point3(34.364, 9.339, 11.766), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (1.0, 0.89, 1.0, 1.0), 'Model': 'models/props/light_tool_bulb'}}, '1176415733.23dzlu': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '49.1566', 'DropOff': '2.7273', 'FlickRate': '0.5000', 'Flickering': False, 'Hpr': VBase3(110.857, -18.436, -2.646), 
'Intensity': '0.6506', 'LightType': 'SPOT', 'Pos': Point3(29.898, -7.489, 17.195), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.87, 0.89, 1.0, 1.0), 'Model': 'models/props/light_tool_bulb'}}, '1176416289.58dzlu': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '60.0000', 'DropOff': '0.0000', 'FlickRate': '0.5000', 'Flickering': False, 'Hpr': VBase3(20.961, -1.881, -28.253), 'Intensity': '0.6747', 'LightType': 'DIRECTIONAL', 'Pos': Point3(1.361, -22.622, 6.648), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (1.0, 0.88, 1.0, 1.0), 'Model': 'models/props/light_tool_bulb'}}, '1178152578.37kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': VBase3(-93.794, 0.0, 0.0), 'Pos': Point3(17.551, -0.176, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.5899999737739563, 0.5299999713897705, 0.44999998807907104, 1.0), 'Model': 'models/props/shop_tatoo_bottles'}}, '1178152773.68kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(-37.462, 0.0, 0.0), 'Pos': Point3(-8.422, 0.694, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/table_bar_square'}}, '1178152790.56kmuller': {'Type': 'Furniture', 'DisableCollision': False, 'Hpr': VBase3(139.569, 0.0, 0.0), 'Pos': Point3(-10.168, -1.846, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/chair_bar'}}, '1178152833.04kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': True, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-8.226, 13.169, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/interior_wall_spanish'}}, '1178152841.92kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': True, 'Holiday': '', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(14.081, 13.241, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'VisSize': '', 'Visual': {'Model': 'models/props/interior_wall_spanish'}}, '1178152917.4kmuller': {'Type': 'Interior_furnishings', 'DisableCollision': False, 'Hpr': VBase3(-26.017, 0.0, 0.0), 'Pos': Point3(18.629, 11.458, 0.0), 'Scale': VBase3(1.24, 1.24, 1.24), 'Visual': {'Model': 'models/props/stove_potbelly'}}, '1178153169.18kmuller': {'Type': 'Prop_Groups', 'DisableCollision': True, 'Hpr': VBase3(-23.668, 0.0, 0.0), 'Pos': Point3(-17.446, -32.083, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.5899999737739563, 0.5299999713897705, 0.44999998807907104, 1.0), 'Model': 'models/props/prop_group_A'}}, '1178153218.76kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(12.382, 4.515, 0.047), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rug_oriental'}}, '1178153257.25kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(49.395, 0.0, 0.0), 'Pos': Point3(-7.651, 1.478, 0.05), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rug_oriental'}}, '1185497561.36kmuller': {'Type': 'Collision Barrier', 'DisableCollision': False, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-16.081, -25.951, -0.677), 'Scale': VBase3(1.529, 1.961, 1.405), 'Visual': {'Model': 'models/misc/pir_m_prp_lev_cambarrier_cube'}}, '1185497607.82kmuller': {'Type': 'Collision Barrier', 'DisableCollision': False, 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(-18.639, -9.024, -0.502), 'Scale': VBase3(0.611, 3.186, 1.908), 'Visual': {'Model': 'models/misc/pir_m_prp_lev_cambarrier_cube'}}, '1185497680.03kmuller': {'Type': 'Collision Barrier', 'DisableCollision': False, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(3.494, 
12.348, -0.634), 'Scale': VBase3(5.279, 5.279, 5.279), 'Visual': {'Model': 'models/misc/pir_m_prp_lev_cambarrier_plane'}}, '1185497728.65kmuller': {'Type': 'Bucket', 'DisableCollision': False, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-13.448, 11.706, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1185497813.03kmuller': {'Type': 'Trunks', 'DisableCollision': False, 'Hpr': VBase3(88.394, 0.0, 0.0), 'Pos': Point3(-19.145, 7.328, 0.0), 'Scale': VBase3(0.849, 0.849, 0.849), 'Visual': {'Model': 'models/props/Trunk_rounded_2'}}, '1185497833.32kmuller': {'Type': 'Collision Barrier', 'DisableCollision': False, 'Hpr': VBase3(40.143, 0.0, 0.0), 'Pos': Point3(-15.206, 8.884, 0.0), 'Scale': VBase3(1.444, 1.0, 1.0), 'Visual': {'Model': 'models/misc/pir_m_prp_lev_cambarrier_plane'}}, '1201049157.09dxschafe': {'Type': 'Door Locator Node', 'Name': 'door_locator', 'Hpr': VBase3(-1.084, 0.0, 0.0), 'Pos': Point3(0.226, -30.04, -0.042), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1201049204.75dxschafe': {'Type': 'Light - Dynamic', 'Attenuation': '0.005', 'ConeAngle': '60.0000', 'DropOff': '31.4458', 'FlickRate': '0.5000', 'Flickering': False, 'Hpr': VBase3(-72.576, -34.776, 0.0), 'Intensity': '0.8554', 'LightType': 'SPOT', 'Pos': Point3(-20.948, 0.641, 16.373), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'models/props/light_tool_bulb'}}, '1257964982.69caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(-88.81, 0.0, 0.0), 'Pos': Point3(19.499, -26.187, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965007.85caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(91.369, 0.0, 0.0), 'Pos': Point3(-19.396, -26.019, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965016.58caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(91.369, 0.0, 0.0), 'Pos': Point3(-19.487, 8.738, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965047.55caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(4.546, 0.0, 0.0), 'Pos': Point3(-12.425, 12.26, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965080.59caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(0.576, 0.0, 0.0), 'Pos': Point3(11.085, 12.305, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965133.95caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(173.37, 0.0, 0.0), 'Pos': Point3(16.882, 12.641, 6.966), 'Scale': VBase3(1.0, 1.0, 1.0), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoStocking01_winter09'}}, '1257965186.47caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(-88.81, 0.0, 0.0), 'Pos': Point3(19.448, 8.281, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965195.58caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 
'WinterFestival', 'Hpr': VBase3(-88.81, 0.0, 0.0), 'Pos': Point3(19.157, -9.731, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965211.5caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(91.369, 0.0, 0.0), 'Pos': Point3(-19.478, -8.991, 9.194), 'Scale': VBase3(1.175, 1.175, 1.175), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoSwag_winter08'}}, '1257965252.0caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(-179.619, 0.281, 36.49), 'Pos': Point3(1.333, 12.471, 9.878), 'Scale': VBase3(1.749, 1.749, 1.749), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_candycane_winter09'}}, '1257965268.67caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(2.341, 12.438, 9.02), 'Scale': VBase3(1.749, 1.749, 1.749), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_decoBow_winter08'}}, '1257965276.03caoconno': {'Type': 'Holiday', 'DisableCollision': False, 'Holiday': 'WinterFestival', 'Hpr': VBase3(0.0, 0.0, 38.003), 'Pos': Point3(3.343, 12.483, 9.722), 'Scale': VBase3(1.749, 1.749, 1.749), 'VisSize': '', 'Visual': {'Model': 'models/props/pir_m_prp_hol_candycane_winter09'}}}, 'Visual': {'Model': 'models/buildings/interior_spanish_npc'}}}, 'Node Links': [], 'Layers': {'Collisions': ['1184008208.59kmuller', '1184016064.62kmuller', '1184013852.84kmuller', '1185822696.06kmuller', '1184006140.32kmuller', '1184002350.98kmuller', '1184007573.29kmuller', '1184021176.59kmuller', '1184005963.59kmuller', '1188324241.31akelts', '1184006537.34kmuller', '1184006605.81kmuller', '1187139568.33kmuller', '1188324186.98akelts', '1184006730.66kmuller', '1184007538.51kmuller', '1184006188.41kmuller', '1184021084.27kmuller', '1185824396.94kmuller', '1185824250.16kmuller', '1185823630.52kmuller', '1185823760.23kmuller', '1185824497.83kmuller', '1185824751.45kmuller', '1187739103.34akelts', '1188323993.34akelts', '1184016538.29kmuller', '1185822200.97kmuller', '1184016225.99kmuller', '1195241421.34akelts', '1195242796.08akelts', '1184020642.13kmuller', '1195237994.63akelts', '1184020756.88kmuller', '1184020833.4kmuller', '1185820992.97kmuller', '1185821053.83kmuller', '1184015068.54kmuller', '1184014935.82kmuller', '1185821432.88kmuller', '1185821701.86kmuller', '1195240137.55akelts', '1195241539.38akelts', '1195238422.3akelts', '1195238473.22akelts', '1185821453.17kmuller', '1184021269.96kmuller', '1185821310.89kmuller', '1185821165.59kmuller', '1185821199.36kmuller', '1185822035.98kmuller', '1184015806.59kmuller', '1185822059.48kmuller', '1185920461.76kmuller', '1194984449.66akelts', '1185824206.22kmuller', '1184003446.23kmuller', '1184003254.85kmuller', '1184003218.74kmuller', '1184002700.44kmuller', '1186705073.11kmuller', '1187658531.86akelts', '1186705214.3kmuller', '1185824927.28kmuller', '1184014204.54kmuller', '1184014152.84kmuller']}, 'ObjectIds': {'1156268617.43dzlu0j': '["Objects"]["1156268617.43dzlu0j"]', '1172095480.47kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172095480.47kmuller"]', '1172095536.58kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172095536.58kmuller"]', '1172100435.43kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172100435.43kmuller"]', '1172100717.96kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172100717.96kmuller"]', 
'1172100724.71kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172100724.71kmuller"]', '1172100752.18kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172100752.18kmuller"]', '1172101251.99kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172101251.99kmuller"]', '1172101331.02kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172101331.02kmuller"]', '1172101372.05kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172101372.05kmuller"]', '1172101830.61kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172101830.61kmuller"]', '1172101986.27kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172101986.27kmuller"]', '1172102034.1kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102034.1kmuller"]', '1172102089.18kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102089.18kmuller"]', '1172102242.03kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102242.03kmuller"]', '1172102341.66kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102341.66kmuller"]', '1172102400.72kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102400.72kmuller"]', '1172102479.08kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102479.08kmuller"]', '1172102495.93kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102495.93kmuller"]', '1172102515.35kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102515.35kmuller"]', '1172102612.1kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102612.1kmuller"]', '1172102617.41kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102617.41kmuller"]', '1172102641.11kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102641.11kmuller"]', '1172102778.21kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1172102778.21kmuller"]', '1174687513.07dzlu': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1174687513.07dzlu"]', '1176415428.42dzlu': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1176415428.42dzlu"]', '1176415733.23dzlu': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1176415733.23dzlu"]', '1176416289.58dzlu': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1176416289.58dzlu"]', '1178152578.37kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178152578.37kmuller"]', '1178152773.68kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178152773.68kmuller"]', '1178152790.56kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178152790.56kmuller"]', '1178152833.04kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178152833.04kmuller"]', '1178152841.92kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178152841.92kmuller"]', '1178152917.4kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178152917.4kmuller"]', '1178153169.18kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178153169.18kmuller"]', '1178153218.76kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178153218.76kmuller"]', '1178153257.25kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1178153257.25kmuller"]', '1185497561.36kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1185497561.36kmuller"]', '1185497607.82kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1185497607.82kmuller"]', '1185497680.03kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1185497680.03kmuller"]', '1185497728.65kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1185497728.65kmuller"]', '1185497813.03kmuller': 
'["Objects"]["1156268617.43dzlu0j"]["Objects"]["1185497813.03kmuller"]', '1185497833.32kmuller': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1185497833.32kmuller"]', '1201049157.09dxschafe': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1201049157.09dxschafe"]', '1201049204.75dxschafe': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1201049204.75dxschafe"]', '1257964982.69caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257964982.69caoconno"]', '1257965007.85caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965007.85caoconno"]', '1257965016.58caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965016.58caoconno"]', '1257965047.55caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965047.55caoconno"]', '1257965080.59caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965080.59caoconno"]', '1257965133.95caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965133.95caoconno"]', '1257965186.47caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965186.47caoconno"]', '1257965195.58caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965195.58caoconno"]', '1257965211.5caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965211.5caoconno"]', '1257965252.0caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965252.0caoconno"]', '1257965268.67caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965268.67caoconno"]', '1257965276.03caoconno': '["Objects"]["1156268617.43dzlu0j"]["Objects"]["1257965276.03caoconno"]'}}
extraInfo = {'camPos': Point3(2.3673, -19.323, 12.4913), 'camHpr': VBase3(2.6033, -12.1413, 0), 'focalLength': 0.852765381336, 'skyState': -1, 'fog': 0} | 3,315.428571 | 22,752 | 0.675371 | 3,064 | 23,208 | 5.064948 | 0.202676 | 0.025646 | 0.023584 | 0.018816 | 0.481023 | 0.443456 | 0.416651 | 0.337908 | 0.321606 | 0.305883 | 0 | 0.286408 | 0.068295 | 23,208 | 7 | 22,753 | 3,315.428571 | 0.4313 | 0.010126 | 0 | 0 | 0 | 0 | 0.576168 | 0.265184 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
604f121aa523b6e486160e4764d0837c6e9773da | 99 | py | Python | src/d03_Modelling/__init__.py | bruno154/project-1-santander-customers | 2f049b73da4ef2bd7e6e278e27d723dde92182bc | [
"MIT"
] | null | null | null | src/d03_Modelling/__init__.py | bruno154/project-1-santander-customers | 2f049b73da4ef2bd7e6e278e27d723dde92182bc | [
"MIT"
] | null | null | null | src/d03_Modelling/__init__.py | bruno154/project-1-santander-customers | 2f049b73da4ef2bd7e6e278e27d723dde92182bc | [
"MIT"
] | null | null | null | from .cart import *
from .model_selection import *
from .Random_Forest import *
from .SGD import *
| 19.8 | 30 | 0.757576 | 14 | 99 | 5.214286 | 0.571429 | 0.410959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 99 | 4 | 31 | 24.75 | 0.879518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
60639c6623b4556f5e12599f6bf2c49d4d4b6b79 | 70 | py | Python | habitat/water/admin/__init__.py | matrach/habitatOS | 1ae2a3caf6f279cf6d6d20bcd81f24d50f61d7d3 | [
"MIT"
] | 1 | 2021-02-01T19:04:39.000Z | 2021-02-01T19:04:39.000Z | habitat/water/models/__init__.py | matrach/habitatOS | 1ae2a3caf6f279cf6d6d20bcd81f24d50f61d7d3 | [
"MIT"
] | null | null | null | habitat/water/models/__init__.py | matrach/habitatOS | 1ae2a3caf6f279cf6d6d20bcd81f24d50f61d7d3 | [
"MIT"
] | null | null | null | from .drinking import *
from .green import *
from .technical import *
| 17.5 | 24 | 0.742857 | 9 | 70 | 5.777778 | 0.555556 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 70 | 3 | 25 | 23.333333 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
609e9ec5fd8b668550b08adc55556c10830e0ebf | 1,133 | py | Python | parser_test.py | AricHasting/senior-software | 0424cd9aa94533ef8ba58a2f70e279761028f96e | [
"MIT"
] | null | null | null | parser_test.py | AricHasting/senior-software | 0424cd9aa94533ef8ba58a2f70e279761028f96e | [
"MIT"
] | 7 | 2018-09-02T23:42:43.000Z | 2018-11-08T22:14:28.000Z | parser_test.py | AricHasting/senior-software | 0424cd9aa94533ef8ba58a2f70e279761028f96e | [
"MIT"
] | 4 | 2018-08-30T01:12:11.000Z | 2018-09-11T17:44:57.000Z | import unittest
import parser
class TestAvatar(unittest.TestCase):
def test_no_avatar(self):
self.assertEqual(parser.getAvatar('Hello'), False)
def test_avatar(self):
self.assertEqual(parser.getAvatar('/avatar Happy'), 'Happy')
def test_with_whitespace(self):
self.assertEqual(parser.getAvatar(' \t /avatar \t \n Happy\n\t '), 'Happy')
def test_empty(self):
self.assertEqual(parser.getAvatar(''), False)
def test_no_value(self):
self.assertEqual(parser.getAvatar('/avatar'), '')
class TestCommand(unittest.TestCase):
def test_no_command(self):
self.assertEqual(parser.getCommand('This is a test!'), False)
def test_command(self):
self.assertEqual(parser.getCommand('/wizard'), 'wizard')
def test_arguments(self):
self.assertEqual(parser.getArguments('/connect 127.0.0.1 8080'), ['127.0.0.1', '8080'])
def test_no_arguments(self):
self.assertEqual(parser.getArguments('/connect'), [])
def test_empty(self):
self.assertEqual(parser.getCommand(''), False)
self.assertEqual(parser.getArguments(''), [])
if __name__ == '__main__':
unittest.main() | 29.815789 | 91 | 0.699029 | 141 | 1,133 | 5.453901 | 0.276596 | 0.214564 | 0.30039 | 0.325098 | 0.6671 | 0.507152 | 0.23407 | 0 | 0 | 0 | 0 | 0.020534 | 0.140335 | 1,133 | 38 | 92 | 29.815789 | 0.768994 | 0 | 0 | 0.074074 | 0 | 0 | 0.126984 | 0 | 0 | 0 | 0 | 0 | 0.407407 | 1 | 0.37037 | false | 0 | 0.074074 | 0 | 0.518519 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
60ccc01322e50318e059e18317b2c8b41eae2daa | 841 | py | Python | terrascript/azuread/d.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/azuread/d.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/azuread/d.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/azuread/d.py
# Automatically generated by tools/makecode.py ()
import warnings
warnings.warn(
"using the 'legacy layout' is deprecated", DeprecationWarning, stacklevel=2
)
import terrascript
class azuread_application(terrascript.Data):
pass
class azuread_application_published_app_ids(terrascript.Data):
pass
class azuread_application_template(terrascript.Data):
pass
class azuread_client_config(terrascript.Data):
pass
class azuread_domains(terrascript.Data):
pass
class azuread_group(terrascript.Data):
pass
class azuread_groups(terrascript.Data):
pass
class azuread_service_principal(terrascript.Data):
pass
class azuread_service_principals(terrascript.Data):
pass
class azuread_user(terrascript.Data):
pass
class azuread_users(terrascript.Data):
pass
| 15.574074 | 79 | 0.774078 | 98 | 841 | 6.459184 | 0.397959 | 0.208531 | 0.330174 | 0.379147 | 0.546603 | 0.252765 | 0 | 0 | 0 | 0 | 0 | 0.001401 | 0.151011 | 841 | 53 | 80 | 15.867925 | 0.885154 | 0.085612 | 0 | 0.407407 | 1 | 0 | 0.050914 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.407407 | 0.074074 | 0 | 0.481481 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
714dbb37c9f4cc50747ee1e53eb1f10703e19e6c | 43 | py | Python | app/db/__init__.py | FaiZaman/BreakingBoundrio | 2127c67542f65f46c5d6e41ab22f6f1438b90e84 | [
"MIT"
] | null | null | null | app/db/__init__.py | FaiZaman/BreakingBoundrio | 2127c67542f65f46c5d6e41ab22f6f1438b90e84 | [
"MIT"
] | null | null | null | app/db/__init__.py | FaiZaman/BreakingBoundrio | 2127c67542f65f46c5d6e41ab22f6f1438b90e84 | [
"MIT"
] | null | null | null | from . import models
from .models import * | 14.333333 | 21 | 0.744186 | 6 | 43 | 5.333333 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 43 | 3 | 21 | 14.333333 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
716162dc441584e7122a44d00660036aaa064fd8 | 37 | py | Python | crop_image/__init__.py | martinig94/crop_image | b009f483e96cea1bcb37d7f8b9ebbb7dfecc1d53 | [
"MIT"
] | null | null | null | crop_image/__init__.py | martinig94/crop_image | b009f483e96cea1bcb37d7f8b9ebbb7dfecc1d53 | [
"MIT"
] | null | null | null | crop_image/__init__.py | martinig94/crop_image | b009f483e96cea1bcb37d7f8b9ebbb7dfecc1d53 | [
"MIT"
] | null | null | null | from crop_image.main import CropImage | 37 | 37 | 0.891892 | 6 | 37 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
719a24a4c1255eae7bc563fb858907246b28ddb6 | 192 | py | Python | ckan/plugins/__init__.py | Gnafu/ckan | d81f69b90291e50ef7e85821ccb83daa94eb3bb7 | [
"BSD-3-Clause"
] | 2 | 2021-02-19T20:06:52.000Z | 2021-04-15T20:42:11.000Z | ckan/plugins/__init__.py | Gnafu/ckan | d81f69b90291e50ef7e85821ccb83daa94eb3bb7 | [
"BSD-3-Clause"
] | 1 | 2018-01-17T19:11:24.000Z | 2018-04-27T19:53:34.000Z | ckan/plugins/__init__.py | Gnafu/ckan | d81f69b90291e50ef7e85821ccb83daa94eb3bb7 | [
"BSD-3-Clause"
] | 4 | 2016-12-17T22:26:06.000Z | 2017-01-20T21:51:24.000Z | from ckan.plugins.core import *
from ckan.plugins.interfaces import *
# Expose the toolkit object without doing an import *
import toolkit as _toolkit
toolkit = _toolkit.toolkit
del _toolkit
| 24 | 53 | 0.802083 | 27 | 192 | 5.592593 | 0.555556 | 0.278146 | 0.198676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 192 | 7 | 54 | 27.428571 | 0.920732 | 0.265625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e08152fb2530ae8949187a7664bfe62fd8eabf20 | 524 | py | Python | pefs/cli.py | atbentley/postgres-efs | 4fd0d2f3de67bfb203e84f49f2c213060301f6ef | [
"MIT"
] | null | null | null | pefs/cli.py | atbentley/postgres-efs | 4fd0d2f3de67bfb203e84f49f2c213060301f6ef | [
"MIT"
] | null | null | null | pefs/cli.py | atbentley/postgres-efs | 4fd0d2f3de67bfb203e84f49f2c213060301f6ef | [
"MIT"
] | null | null | null | import os
import click
from .pefs import Pefs
@click.group()
def cli():
pass
@cli.command()
@click.argument('db')
@click.argument('efs-root')
@click.option('--pg-user')
def clone(db, efs_root, pg_user):
pefs = Pefs(db, 'public', efs_root, pg_user, '')
pefs.clone_db()
@cli.command()
@click.argument('db')
@click.argument('efs-root')
@click.option('--pg-user')
def link(db, efs_root, pg_user):
pefs = Pefs(db, 'public', efs_root, pg_user, '')
pefs.link_db()
if __name__ == '__main__':
cli()
| 15.878788 | 52 | 0.641221 | 79 | 524 | 4.025316 | 0.291139 | 0.132075 | 0.113208 | 0.163522 | 0.710692 | 0.710692 | 0.710692 | 0.710692 | 0.710692 | 0.710692 | 0 | 0 | 0.158397 | 524 | 32 | 53 | 16.375 | 0.721088 | 0 | 0 | 0.454545 | 0 | 0 | 0.110687 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0.045455 | 0.136364 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e08c183d72a802b3e679e653bf4c0109d7b52653 | 29 | py | Python | djangoproj/djangoapp/csc/conceptnet4/__init__.py | pbarton666/buzz_bot | 9f44c66e8ecb10e231f70989421f164d7a55029a | [
"MIT"
] | null | null | null | djangoproj/djangoapp/csc/conceptnet4/__init__.py | pbarton666/buzz_bot | 9f44c66e8ecb10e231f70989421f164d7a55029a | [
"MIT"
] | null | null | null | djangoproj/djangoapp/csc/conceptnet4/__init__.py | pbarton666/buzz_bot | 9f44c66e8ecb10e231f70989421f164d7a55029a | [
"MIT"
] | null | null | null | from csc.conceptnet import *
| 14.5 | 28 | 0.793103 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0d4bff546fe9a2edf7732ef7cccc2e10e32e39f | 1,008 | py | Python | Chapter04/winner_AI.py | PacktPublishing/Learning-Python-by-building-games | 0713e6fc141b2cd201128560ae0c3b689b7d2116 | [
"MIT"
] | 25 | 2019-09-01T16:19:16.000Z | 2021-12-20T07:08:35.000Z | Chapter04/winner_AI.py | PacktPublishing/Learning-Python-by-building-games. | 0713e6fc141b2cd201128560ae0c3b689b7d2116 | [
"MIT"
] | 4 | 2019-08-27T19:45:48.000Z | 2020-07-24T12:29:56.000Z | Chapter04/winner_AI.py | PacktPublishing/Learning-Python-by-building-games | 0713e6fc141b2cd201128560ae0c3b689b7d2116 | [
"MIT"
] | 24 | 2019-06-01T18:31:07.000Z | 2022-03-15T19:24:34.000Z | def isWinner(board, current_player):
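    # True if current_player occupies any of the eight winning lines of the
    # 3x3 board (keys 1-9): rows 7-8-9 / 4-5-6 / 1-2-3, the three columns,
    # and the two diagonals.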
return ((board[7] == current_player and board[8] == current_player
and board[9] == current_player)
or (board[4] == current_player and board[5] == current_player
and board[6] == current_player)
or (board[1] == current_player and board[2] == current_player
and board[3] == current_player)
or (board[7] == current_player and board[4] == current_player
and board[1] == current_player)
or (board[8] == current_player and board[5] == current_player
and board[2] == current_player)
or (board[9] == current_player and board[6] == current_player
and board[3] == current_player)
or (board[7] == current_player and board[5] == current_player
and board[3] == current_player)
or (board[9] == current_player and board[5] == current_player
and board[1] == current_player))
| 56 | 73 | 0.573413 | 125 | 1,008 | 4.424 | 0.136 | 0.587703 | 0.462929 | 0.607595 | 0.889693 | 0.889693 | 0.777577 | 0.620253 | 0.620253 | 0.231465 | 0 | 0.034483 | 0.309524 | 1,008 | 17 | 74 | 59.294118 | 0.760057 | 0 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0.058824 | 0.117647 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e0dfde12e1aee624efd61f63ce2ac4d159e6d51a | 505 | py | Python | simpleredial/inference_utils/__init__.py | gmftbyGMFTBY/SimpleReDial-v1 | f45b8eb23d1499ec617b4cc4f417d83d8f2b6bde | [
"MIT"
] | 36 | 2021-10-13T10:32:08.000Z | 2022-03-20T07:50:05.000Z | simpleredial/inference_utils/__init__.py | gmftbyGMFTBY/SimpleReDial-v1 | f45b8eb23d1499ec617b4cc4f417d83d8f2b6bde | [
"MIT"
] | 3 | 2021-11-24T10:57:59.000Z | 2022-03-27T15:37:40.000Z | simpleredial/inference_utils/__init__.py | gmftbyGMFTBY/SimpleReDial-v1 | f45b8eb23d1499ec617b4cc4f417d83d8f2b6bde | [
"MIT"
] | 1 | 2022-03-15T07:13:22.000Z | 2022-03-15T07:13:22.000Z | from .response import *
from .gray_test import *
from .data_augmentation import *
from .data_filter import *
from .response_with_source import *
from .writer_with_source import *
from .gray import *
from .gray_simcse import *
from .gray_simcse_unlikelyhood import *
from .gray_hard import *
from .gray_extend import *
from .gray_one2many import *
from .gray_one2many_res import *
from .gray_one2many_with_source import *
from .unparallel import *
from .self_play import *
from .gray_one2many_ctx import *
| 28.055556 | 40 | 0.79802 | 72 | 505 | 5.305556 | 0.291667 | 0.418848 | 0.366492 | 0.230366 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009153 | 0.134653 | 505 | 17 | 41 | 29.705882 | 0.864989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0e1be71eb9b0eca455e9cdaf073ca072a6709e5 | 169 | py | Python | wp_app/winnerspie_app/doctype/oriflame_premium_member/test_oriflame_premium_member.py | avtserver/wp_app | 4a0e75b8362bf6908e73a5ba58f064dd1c9c8c78 | [
"MIT"
] | null | null | null | wp_app/winnerspie_app/doctype/oriflame_premium_member/test_oriflame_premium_member.py | avtserver/wp_app | 4a0e75b8362bf6908e73a5ba58f064dd1c9c8c78 | [
"MIT"
] | null | null | null | wp_app/winnerspie_app/doctype/oriflame_premium_member/test_oriflame_premium_member.py | avtserver/wp_app | 4a0e75b8362bf6908e73a5ba58f064dd1c9c8c78 | [
"MIT"
] | null | null | null | # Copyright (c) 2021, AV Tutoring Pvt Ltd and Contributors
# See license.txt
# import frappe
import unittest
class TestOriflamePremiumMember(unittest.TestCase):
pass
| 18.777778 | 58 | 0.792899 | 21 | 169 | 6.380952 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027586 | 0.142012 | 169 | 8 | 59 | 21.125 | 0.896552 | 0.508876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
4612ab858e0bf07756231c31370a177179db4a3f | 205 | py | Python | base/admin.py | Kunal614/Resources | 6bba32f9f70554ddc658e9dab864433d150e46d2 | [
"Apache-2.0"
] | 1 | 2021-10-08T10:42:39.000Z | 2021-10-08T10:42:39.000Z | base/admin.py | Kunal614/Resources | 6bba32f9f70554ddc658e9dab864433d150e46d2 | [
"Apache-2.0"
] | 1 | 2021-07-10T04:22:44.000Z | 2021-07-10T04:22:44.000Z | base/admin.py | Kunal614/Resources | 6bba32f9f70554ddc658e9dab864433d150e46d2 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import *
# Register your models here.
admin.site.register(about)
admin.site.register(details)
admin.site.register(tokenStuff)
admin.site.register(notification) | 29.285714 | 33 | 0.819512 | 28 | 205 | 6 | 0.5 | 0.214286 | 0.404762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078049 | 205 | 7 | 33 | 29.285714 | 0.888889 | 0.126829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
461864e1500310693736a5847cacdcdefcfdb40c | 28 | py | Python | __init__.py | yi-xuan-huang/census-api | a2430ab6a8459e85c2a8b436f60174a34798a541 | [
"MIT"
] | null | null | null | __init__.py | yi-xuan-huang/census-api | a2430ab6a8459e85c2a8b436f60174a34798a541 | [
"MIT"
] | null | null | null | __init__.py | yi-xuan-huang/census-api | a2430ab6a8459e85c2a8b436f60174a34798a541 | [
"MIT"
] | null | null | null | from censusapi.core import * | 28 | 28 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1cc85aaa418ac66a50cb4044773a84eedb4cea80 | 3,275 | py | Python | router/router/tests/test_router.py | darius-kia/director4 | 1d2c2c4c3ec12cc9b7f846d5dc075ea3bbef36f9 | [
"MIT"
] | 7 | 2020-08-23T23:08:34.000Z | 2021-12-02T04:17:37.000Z | router/router/tests/test_router.py | darius-kia/director4 | 1d2c2c4c3ec12cc9b7f846d5dc075ea3bbef36f9 | [
"MIT"
] | 43 | 2020-08-24T16:48:29.000Z | 2022-03-02T19:45:54.000Z | router/router/tests/test_router.py | darius-kia/director4 | 1d2c2c4c3ec12cc9b7f846d5dc075ea3bbef36f9 | [
"MIT"
] | 10 | 2020-08-17T20:42:52.000Z | 2021-07-16T03:46:51.000Z | import unittest
from unittest.mock import mock_open, patch
from ..app import app
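# Flask test-client tests for the router service: /ping, the per-site nginx
# update/remove endpoints and the certbot endpoints. File writes, the nginx
# reload command and certbot's subprocess calls are patched out with mocks.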
class RouterTest(unittest.TestCase):
def setUp(self) -> None:
app.testing = True
self.client = app.test_client()
def test_ping(self) -> None:
request = self.client.get("/ping")
self.assertEqual(b"Pong", request.data)
def test_update_nginx_page(self) -> None:
request = self.client.post("/sites/1234/update-nginx")
self.assertEqual(400, request.status_code)
request = self.client.post(
"/sites/1234/update-nginx", data={"data": '{"name": "hello", "custom_domains": {}}'}
)
self.assertEqual(500, request.status_code)
mock_open_obj = mock_open()
with patch("router.nginx.open", mock_open_obj):
with patch("router.nginx.settings.NGINX_RELOAD_COMMAND", "echo"):
request = self.client.post(
"/sites/1234/update-nginx",
data={"data": '{"name": "hello", "custom_domains": {}}'},
)
self.assertEqual(200, request.status_code)
self.assertEqual(b"Success", request.data)
mock_open_obj = mock_open()
with patch("router.nginx.open", mock_open_obj):
with patch("router.nginx.settings.NGINX_RELOAD_COMMAND", "echo"):
request = self.client.post(
"/sites/1234/update-nginx",
data={"data": '{"name": "hello", "custom_domains": ["tjhsst.edu"]}'},
)
self.assertEqual(200, request.status_code)
self.assertEqual(b"Success", request.data)
def test_remove_nginx_page(self) -> None:
request = self.client.post("/sites/1234/remove-nginx")
self.assertEqual(200, request.status_code)
with patch("router.nginx.os.path.exists", return_value=True):
request = self.client.post("/sites/1234/remove-nginx")
self.assertEqual(500, request.status_code)
def test_setup_certbot_page(self) -> None:
request = self.client.post("/sites/1234/certbot-setup")
self.assertEqual(400, request.status_code)
self.assertEqual(b"Error", request.data)
with patch("router.certbot.subprocess.run", return_value=True) as mock_obj:
request = self.client.post(
"/sites/1234/certbot-setup",
data={"data": '{"name": "hello", "custom_domains": []}'},
)
self.assertEqual(200, request.status_code)
self.assertEqual(b"Success", request.data)
mock_obj.assert_called_once()
def test_remove_old_certbot_domains_page(self) -> None:
request = self.client.post("/sites/certbot-remove-old-domains")
self.assertEqual(400, request.status_code)
self.assertEqual(b"Error", request.data)
with patch("router.certbot.subprocess.run", return_value=True) as mock_obj:
request = self.client.post(
"/sites/certbot-remove-old-domains",
data={"domains": '["tjhsst.edu", "tjhsst.fcps.edu", "sysadmins.tjhsst.edu"]'},
)
self.assertEqual(200, request.status_code)
self.assertEqual(b"Success", request.data)
mock_obj.assert_called_once()
| 37.643678 | 96 | 0.607328 | 377 | 3,275 | 5.127321 | 0.177719 | 0.131919 | 0.096741 | 0.108639 | 0.812726 | 0.799793 | 0.750647 | 0.750647 | 0.724263 | 0.665287 | 0 | 0.025275 | 0.250992 | 3,275 | 86 | 97 | 38.081395 | 0.76274 | 0 | 0 | 0.538462 | 0 | 0 | 0.233893 | 0.138015 | 0 | 0 | 0 | 0 | 0.292308 | 1 | 0.092308 | false | 0 | 0.046154 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cef8d41a144546f9160a9a0b7fced17f07e05b4 | 55,827 | py | Python | BacklogReport.py | flopezag/fiware-scrum-reports | 56773c2b1d0603f019f08ca7b66fc091e2b975a0 | [
"Apache-2.0"
] | null | null | null | BacklogReport.py | flopezag/fiware-scrum-reports | 56773c2b1d0603f019f08ca7b66fc091e2b975a0 | [
"Apache-2.0"
] | 6 | 2018-09-04T08:49:29.000Z | 2018-09-05T10:31:32.000Z | BacklogReport.py | flopezag/fiware-scrum-reports | 56773c2b1d0603f019f08ca7b66fc091e2b975a0 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- encoding: utf-8 -*-
##
# Copyright 2018 FIWARE Foundation, e.V.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__author__ = "Fernando López"
import os
from datetime import date, datetime
import xlsxwriter
from xlsxwriter.utility import xl_range
from kernel.Reporter import ChapterReporter, EnablerReporter, CoordinationReporter, \
ChaptersReporter, ToolReporter, LabReporter
from kernel.Calendar import agileCalendar
from kernel.TrackerBook import chaptersBook
from kernel.DataFactory import DataEngine
from kernel.NodesBook import helpdeskNodesBook
from kernel.Settings import settings
from kernel.SheetFormats import SpreadsheetFormats
from kernel.BacklogFactory import BacklogFactory
from kernel.DeploymentModel import deploymentBook
from kernel.UploaderTool import Uploader
from kernel.ComponentsBook import labNodesBook
from collections import Counter
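# Painter turns pre-computed backlog metrics into xlsxwriter charts (pie, column,
# bar and line); BacklogReporter builds one worksheet per coordination project,
# enabler, tool, Lab node or chapter and feeds Painter with the data to plot.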
class Painter:
def __init__(self, wb, ws):
self._wb = wb
self._ws = ws
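        # Chart source data is written into the worksheet starting at column 10;
        # each draw_* call advances _column past its own block so the next
        # chart's data does not overwrite it.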
self._column = 10
def draw_composition(self, data):
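        # Pie chart of issue-type counts; zero-valued categories are filtered out first.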
data = {item: data[item] for item in data if data[item]}
wb, ws = self._wb, self._ws
chart = wb.add_chart({'type': 'pie'})
headings = ('Type', 'Amount')
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, data)
ws.write_column(1, col + 1, [data[k] for k in data])
sheet_name = ws.get_name()
chart.add_series({
'name': [sheet_name, 0, col + 1],
'categories': [sheet_name, 1, col + 0, len(data), col + 0],
'values': [sheet_name, 1, col + 1, len(data), col + 1],
'data_labels': {'category': True, 'value': True, 'leader_lines': True, 'percentage': True}
})
chart.set_title({'name': 'Backlog Composition'})
# chart.set_title({'none': True})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 507, 'height': 502, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_status(self, data):
wb = self._wb
ws = self._ws
# print(data)
chart = wb.add_chart({'type': 'column'})
headings = ('Perspective', '#Items')
_perspectives = ('Implemented', 'Working On', 'Foreseen')
col = self._column
ws.write_row(0, col, headings)
#
ws.write_column(1, col + 0, _perspectives)
ws.write_column(1, col + 1, [data[k] for k in _perspectives])
sheet_name = ws.get_name()
chart.add_series({
'name': [sheet_name, 0, col + 0],
'categories': [sheet_name, 1, col + 0, len(data), col + 0],
'values': [sheet_name, 1, col + 1, len(data), col + 1],
'data_labels': {'value': True}
})
chart.set_title({'name': 'Backlog Status'})
# chart.set_title({'none': True})
chart.set_x_axis({'name': 'Perspective'})
chart.set_y_axis({'name': '# items'})
chart.set_legend({'none': True})
chart.set_size({'width': 480, 'height': 502, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_errors(self, data):
wb, ws = self._wb, self._ws
chart = wb.add_chart({'type': 'pie'})
headings = ('Type', 'Amount')
col = self._column
ws.write_row(0, col, headings)
_types = ('OK', 'KO')
ws.write_column(1, col + 0, _types)
ws.write_column(1, col + 1, [data[k] for k in _types])
sheet_name = ws.get_name()
chart.add_series({
'name': [sheet_name, 0, col + 1],
'categories': [sheet_name, 1, col + 0, len(data), col + 0],
'values': [sheet_name, 1, col + 1, len(data), col + 1],
'data_labels': {'category': True, 'value': True, 'leader_lines': True, 'percentage': True},
'points': [{'fill': {'color': 'green'}},
{'fill': {'color': 'red'}}]
})
chart.set_title({'name': 'Backlog Errors'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 288, 'height': 288, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_sprint_burndown(self, data):
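        # Sprint burndown: 'Reference' and 'Actual' line series per day of the
        # month, combined with a column series of items closed per day.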
wb = self._wb
ws = self._ws
chart = wb.add_chart({'type': 'line'})
headings = ('Day', 'Reference', 'Actual', 'Closed')
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, data['categories'])
ws.write_column(1, col + 1, data['reference'])
ws.write_column(1, col + 2, data['actual'])
ws.write_column(1, col + 3, data['closed'])
sheet_name = ws.get_name()
chart.add_series({
'name': [sheet_name, 0, col + 1],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 1, len(data['reference']), col + 1],
'line': {'dash_type': 'dash_dot'}
})
chart.add_series({
'name': [sheet_name, 0, col + 2],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 2, len(data['actual']), col + 2]
})
cchart = wb.add_chart({'type': 'column'})
cchart.add_series({
'name': [sheet_name, 0, col + 3],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 3, len(data['closed']), col + 3],
'data_labels': {'value': True}
})
chart.combine(cchart)
chart.set_title({'name': 'Backlog Sprint Evolution'})
chart.set_x_axis({'name': '# day in month'})
chart.set_y_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 700, 'height': 288, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_sprint_status(self, data, legend=True):
wb, ws = self._wb, self._ws
chart = wb.add_chart({'type': 'pie'})
headings = ('Type', 'Amount')
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, data)
ws.write_column(1, col + 1, [data[k] for k in data])
sheet_name = ws.get_name()
chart.add_series({
'name': [sheet_name, 0, col + 1],
'categories': [sheet_name, 1, col + 0, len(data), col + 0],
'values': [sheet_name, 1, col + 1, len(data), col + 1],
'data_labels': {'category': True, 'value': True, 'percentage': True}
})
chart.set_title({'name': 'Backlog Sprint Status'})
if legend:
chart.set_legend({'position': 'top'})
else:
chart.set_legend({'none': True})
chart.set_size({'width': 288, 'height': 288, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_evolution(self, data):
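        # Monthly created/resolved/updated/released line series combined with a
        # 'Progress' column series; the 'Dummy' column is written but never plotted.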
wb = self._wb
ws = self._ws
chart = wb.add_chart({'type': 'line'})
headings = ('Month', 'Created', 'Resolved', 'Updated', 'Released', 'Progress', 'Dummy')
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, data['categories'])
ws.write_column(1, col + 1, data['created'])
ws.write_column(1, col + 2, data['resolved'])
ws.write_column(1, col + 3, data['updated'])
ws.write_column(1, col + 4, data['released'])
ws.write_column(1, col + 5, data['progress'])
ws.write_column(1, col + 6, [0 for i in data['categories']])
sheet_name = ws.get_name()
chart.add_series({
'name': [sheet_name, 0, col + 1],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 1, len(data['created']), col + 1]
})
chart.add_series({
'name': [sheet_name, 0, col + 2],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 2, len(data['resolved']), col + 2]
})
chart.add_series({
'name': [sheet_name, 0, col + 3],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 3, len(data['updated']), col + 3]
})
chart.add_series({
'name': [sheet_name, 0, col + 4],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 4, len(data['released']), col + 4]
})
cchart = wb.add_chart({'type': 'column'})
cchart.add_series({
'name': [sheet_name, 0, col + 5],
'categories': [sheet_name, 1, col + 0, len(data['categories']), col + 0],
'values': [sheet_name, 1, col + 5, len(data['categories']), col + 5],
# 'values': [sheet_name, 1, col+5, len(data['progress']), col+5],
'data_labels': {'value': True}
})
chart.combine(cchart)
chart.set_title({'name': 'Backlog Evolution'})
chart.set_x_axis({'name': '# Month'})
chart.set_y_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 291, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_component_sprint_status(self, cmpType, components, data):
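        # Stacked column chart: one column per component, segmented by sprint status.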
wb = self._wb
ws = self._ws
_data = {item['name']: item['data'] for item in data}
chart = wb.add_chart({'type': 'column', 'subtype': 'stacked'})
status = tuple([item['name'] for item in data])
headings = (cmpType,) + status
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, components)
for i, _status in enumerate(status, start=1):
ws.write_column(1, col + i, _data[_status])
sheet_name = ws.get_name()
for i, _status in enumerate(status, start=1):
chart.add_series({
'name': [sheet_name, 0, col + i],
'categories': [sheet_name, 1, col + 0, len(components), col + 0],
'values': [sheet_name, 1, col + i, len(components), col + i],
'data_labels': {'value': True}
})
chart.set_title({'name': "{}s' Backlog Sprint Status".format(cmpType)})
chart.set_x_axis({'name': cmpType})
chart.set_y_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 291, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_component_status(self, cmpType, components, data):
wb = self._wb
ws = self._ws
_data = {item['name']: item['data'] for item in data}
chart = wb.add_chart({'type': 'column', 'subtype': 'stacked'})
status = tuple([item['name'] for item in data])
headings = (cmpType,) + status
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, components)
for i, _status in enumerate(status, start=1):
ws.write_column(1, col + i, _data[_status])
sheet_name = ws.get_name()
for i, _status in enumerate(status, start=1):
chart.add_series({
'name': [sheet_name, 0, col + i],
'categories': [sheet_name, 1, col + 0, len(components), col + 0],
'values': [sheet_name, 1, col + i, len(components), col + i],
'data_labels': {'value': True}
})
chart.set_title({'name': "{}s' Backlog Status".format(cmpType)})
        chart.set_x_axis({'name': cmpType})  # axis label follows the component type ('Enabler' or 'Tool')
chart.set_y_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 291, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_chapters_sprint_status(self, chapters, data):
wb = self._wb
ws = self._ws
_data = {item['name']: item['data'] for item in data}
chart = wb.add_chart({'type': 'column', 'subtype': 'stacked'})
status = tuple([item['name'] for item in data])
headings = ('Chapter',) + status
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, chapters)
for i, _status in enumerate(status, start=1):
ws.write_column(1, col + i, _data[_status])
sheet_name = ws.get_name()
for i, _status in enumerate(status, start=1):
chart.add_series({
'name': [sheet_name, 0, col + i],
'categories': [sheet_name, 1, col + 0, len(chapters), col + 0],
'values': [sheet_name, 1, col + i, len(chapters), col + i],
'data_labels': {'value': True}
})
chart.set_title({'name': "Chapters' Backlog Sprint Status"})
chart.set_x_axis({'name': 'Chapters'})
chart.set_y_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 291, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_lab_status(self, nodes, data):
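        # Horizontal stacked bars, one per Lab node; categories and values are
        # written in reverse order because xlsxwriter bar charts plot bottom-up.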
wb = self._wb
ws = self._ws
_data = {item['name']: reversed(item['data']) for item in data}
chart = wb.add_chart({'type': 'bar', 'subtype': 'stacked'})
status = tuple([item['name'] for item in data])
headings = ('Node',) + status
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, reversed(nodes))
for i, _status in enumerate(status, start=1):
ws.write_column(1, col + i, _data[_status])
sheet_name = ws.get_name()
for i, _status in enumerate(status, start=1):
chart.add_series({
'name': [sheet_name, 0, col + i],
'categories': [sheet_name, 1, col + 0, len(nodes), col + 0],
'values': [sheet_name, 1, col + i, len(nodes), col + i],
'data_labels': {'value': True}
})
chart.set_title({'name': "Enablers' Backlog Status"})
chart.set_y_axis({'name': 'Enablers'})
chart.set_x_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 1600, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_chapters_status(self, chapters, data):
wb = self._wb
ws = self._ws
_data = {item['name']: item['data'] for item in data}
chart = wb.add_chart({'type': 'column', 'subtype': 'stacked'})
status = tuple([item['name'] for item in data])
headings = ('Chapter',) + status
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, chapters)
for i, _status in enumerate(status, start=1):
ws.write_column(1, col + i, _data[_status])
sheet_name = ws.get_name()
for i, _status in enumerate(status, start=1):
chart.add_series({
'name': [sheet_name, 0, col + i],
'categories': [sheet_name, 1, col + 0, len(chapters), col + 0],
'values': [sheet_name, 1, col + i, len(chapters), col + i],
'data_labels': {'value': True}
})
chart.set_title({'name': "Chapters' Backlog Status"})
chart.set_x_axis({'name': 'Chapters'})
chart.set_y_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 291, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
def draw_enablers_status(self, enablers, data):
wb = self._wb
ws = self._ws
_data = {item['name']: reversed(item['data']) for item in data}
chart = wb.add_chart({'type': 'bar', 'subtype': 'stacked'})
status = tuple([item['name'] for item in data])
headings = ('Enabler',) + status
col = self._column
ws.write_row(0, col, headings)
ws.write_column(1, col + 0, reversed(enablers))
for i, _status in enumerate(status, start=1):
ws.write_column(1, col + i, _data[_status])
sheet_name = ws.get_name()
for i, _status in enumerate(status, start=1):
chart.add_series({
'name': [sheet_name, 0, col + i],
'categories': [sheet_name, 1, col + 0, len(enablers), col + 0],
'values': [sheet_name, 1, col + i, len(enablers), col + i],
'data_labels': {'value': True}
})
chart.set_title({'name': "Enablers' Backlog Status"})
chart.set_y_axis({'name': 'Enablers'})
chart.set_x_axis({'name': '# items'})
chart.set_legend({'position': 'top'})
chart.set_size({'width': 1000, 'height': 1600, 'x_scale': 1, 'y_scale': 1})
chart.set_plotarea({'fill': {'color': '#FFFF99'}})
chart.set_style(2)
self._column += len(headings) + 1
return chart
class BacklogReporter:
def __init__(self):
self.calendar = agileCalendar
self.workbook = None
self.spFormats = None
self.factory = BacklogFactory()
self.gReporter = ChaptersReporter(self.factory.getTechChaptersBacklog())
self.gLabReporter = LabReporter(self.factory.getLabChapterBacklog())
self.start = date(2016, 12, 1) # year, month, day
self.end = date(2017, 11, 30) # year, month, day
def get_format(self, issue):
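        # Colour-code an issue by its time slot: brown for past sprints, green
        # for the current ones, blue for future or unscheduled work.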
_timeSlot = issue.timeSlot.split(' ')[1] if issue.timeSlot != 'Unscheduled' else 'Unscheduled'
if _timeSlot in agileCalendar.pastTimeSlots:
return self.spFormats.brown
elif _timeSlot in agileCalendar.currentTimeSlots():
return self.spFormats.green
else:
return self.spFormats.blue
def _write_issue(self, ws, row, item):
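        # One row per backlog item: linked key, linked name (when a URL exists),
        # time slot, status and type. Epics get a green background; other types
        # are coloured by time slot via get_format().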
ws.write_url(row, 0, item.url, self.spFormats.link, item.key)
if item.issueType == 'Epic':
_format = self.workbook.add_format({'color': 'blue', 'underline': 1,
'align': 'left', 'bg_color': '#99CC00'})
if item.p_url:
ws.write_url(row, 1, item.p_url, _format, item.name)
else:
ws.write(row, 1, item.name)
_format = self.workbook.add_format({'bg_color': '#99CC00'})
ws.write(row, 2, '', _format)
ws.write(row, 3, item.status, _format)
ws.write(row, 4, item.issueType, _format)
elif item.issueType == 'Feature':
if item.p_url:
ws.write_url(row, 1, item.p_url, self.spFormats.lefty_link, item.name)
else:
ws.write(row, 1, item.name)
_format = self.get_format(item)
ws.write(row, 2, item.timeSlot, _format)
ws.write(row, 3, item.status, _format)
ws.write(row, 4, item.issueType, _format)
else:
if item.p_url:
ws.write_url(row, 1, item.p_url, self.spFormats.lefty_link, item.name)
else:
ws.write(row, 1, item.name)
_format = self.get_format(item)
ws.write(row, 2, item.timeSlot, _format)
ws.write(row, 3, item.status, _format)
ws.write(row, 4, item.issueType, _format)
def _coordination_dashboard(self, coordination):
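        # Worksheet layout: title banner, project/report dates, backlog owner,
        # composition and status summaries, the composition/status/evolution
        # charts, and finally the full table of backlog entries.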
wb = self.workbook
ws = wb.add_worksheet(coordination.name[1:])
backlog = self.factory.getCoordinationBacklog(coordination.key)
backlog.sort(key=backlog.sortDict['name'])
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3), "Coordination Backlog", _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Backlog Owner:', self.spFormats.bold_right)
ws.write(row, 1, coordination.leader, _format)
ws.write(row, 2, '', _format)
row += 2
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
reporter = CoordinationReporter(coordination.project, backlog)
data = reporter.issueType
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '{0} Issues = {Epic} Epics + {Feature} Features + '
'{Story} User Stories + {WorkItem} WorkItems + {Bug} Bugs'.format(sum(data.values()), **data))
row += 1
data = reporter.perspective
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0} Issues = {Implemented} Implemented + {Working On} Working On + '
' {Foreseen} Foreseen'.format(sum(data.values()), **data))
row += 2
chart = painter.draw_composition(reporter.issueType)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(reporter.perspective)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
row += 26
chart = painter.draw_evolution(reporter.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
_format = self.workbook.add_format({'bold': True, 'font_size': 20, 'bg_color': '#60C1CF', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 4), 'Backlog Entries', _format)
row += 1
ws.write_row(row, 0, ('Item Id', 'Item reference', 'Time frame', 'Status', 'Item type'),
self.spFormats.column_heading)
for issue in backlog:
row += 1
self._write_issue(ws, row, issue)
def _enabler_dashboard(self, enabler):
print('------>', enabler.name)
wb = self.workbook
ws = wb.add_worksheet(enabler.name)
backlog = self.factory.getEnablerBacklog(enabler.name)
backlog.sort(key=backlog.sortDict['name'])
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3),
"Backlog for Enabler: '{0}'".format(enabler.name), _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'color': 'green'})
ename = enabler.Name if enabler.GE else enabler.name
ws.write(row, 0, 'Enabler:', self.spFormats.bold_right)
ws.write(row, 1, ename, _format)
row += 1
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Product Owner:', self.spFormats.bold_right)
ws.write(row, 1, '{} - {}'.format(enabler.owner, enabler.leader), _format)
ws.write(row, 2, '', _format)
row += 1
ws.write(row, 0, 'Work Mode:', self.spFormats.bold_right)
ws.write(row, 1, enabler.mode)
row += 2
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
reporter = EnablerReporter(enabler.name, backlog)
data = reporter.issueType
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Epic} Epics + {Feature} Features + '
'{Story:,} User Stories + {WorkItem:,} WorkItems + {Bug} Bugs'.format(sum(data.values()),
**data))
row += 1
data = reporter.perspective
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Implemented:,} Implemented + {Working On} Working On + '
' {Foreseen} Foreseen'.format(sum(data.values()), **data))
if not reporter.length:
return
row += 2
chart = painter.draw_composition(reporter.issueType)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(reporter.perspective)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
row += 26
chart = painter.draw_evolution(reporter.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
row += 1
_format = self.workbook.add_format({'bold': True, 'font_size': 20, 'bg_color': '#60C1CF', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 4), 'Backlog Entries', _format)
row += 1
ws.write_row(row, 0, ('Item Id', 'Item reference', 'Time frame', 'Status', 'Item type'),
self.spFormats.column_heading)
for issue in backlog:
row += 1
self._write_issue(ws, row, issue)
def _lab_node_dashboard(self, node):
print('------>', node)
wb = self.workbook
ws = wb.add_worksheet(node)
backlog = self.gLabReporter
try:
key = labNodesBook[node].key
backlog = list(filter(lambda x: x.component == key, list(backlog.backlog)))
except Exception:
            # There is no data for the corresponding node, so treat it as an empty backlog
backlog = ()
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3),
"Backlog for Lab Node: '{0}'".format(node), _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'color': 'green'})
ws.write(row, 0, 'Node:', self.spFormats.bold_right)
ws.write(row, 1, node, _format)
row += 1
ws.write(row, 0, 'Work Mode:', self.spFormats.bold_right)
try:
ws.write(row, 1, labNodesBook[node].mode)
except Exception:
            # There is no work-mode data for the node, so it is reported as Inactive
ws.write(row, 1, 'Inactive')
row += 2
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
if len(backlog) == 0:
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '0 Issues = 0 Epics + 0 Features + 0 User Stories + 0 WorkItems + 0 Bugs')
row += 1
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '0 Issues = 0 Implemented + 0 Working On + 0 Foreseen')
return
else:
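            # Aggregate the raw issue dicts by type and by execution frame,
            # padding any missing key with a zero count.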
data = Counter(list(map(lambda x: x['issueType'], backlog)))
data_issue_type = \
BacklogReporter.fix_values(a_dict=data, keys=['Epic', 'Feature', 'Story', 'WorkItem', 'Bug'])
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
text = '{0:,} Issues = {Epic} Epics + {Feature} Features + {Story} User Stories ' \
'+ {WorkItem} WorkItems + {Bug} Bugs'.format(sum(data_issue_type.values()), **data_issue_type)
ws.write(row, 1, text)
row += 1
data = Counter(list(map(lambda x: x['frame'], backlog)))
data_frame = \
BacklogReporter.fix_values(a_dict=data, keys=['Implemented', 'Working On', 'Foreseen'])
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Implemented:,} Implemented + {Working On} Working On + '
' {Foreseen} Foreseen'.format(sum(data_frame.values()), **data_frame))
row += 2
chart = painter.draw_composition(data_issue_type)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(data_frame)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
row += 26
from kernel.Reporter import Reporter
data = Reporter(backlog)
chart = painter.draw_evolution(data.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 16
_format = \
self.workbook.add_format({'bold': True, 'font_size': 20, 'bg_color': '#60C1CF', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 4), 'Backlog Entries', _format)
row += 1
ws.write_row(row, 0, ('Item Id', 'Item reference', 'Time frame', 'Status', 'Item type'),
self.spFormats.column_heading)
for issue in backlog:
row += 1
self._write_issue(ws, row, issue)
@staticmethod
def fix_values(a_dict, keys, value=0):
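        # Return a copy of a_dict restricted to `keys`, filling missing keys with `value`.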
result = {}
for key in keys:
try:
result[key] = a_dict[key]
except KeyError:
result[key] = value
return result
def _tool_dashboard(self, tool):
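        # Same layout as the enabler dashboard, plus sprint status, test results
        # (OK/KO) and error counts taken from the tool's backlog reporter.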
print('------>', tool.name)
wb = self.workbook
ws = wb.add_worksheet(tool.name)
backlog = self.factory.getToolBacklog(tool.name)
backlog.sort(key=backlog.sortDict['name'])
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3),
"Backlog for Tool: '{0}'".format(tool.name), _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Product Owner:', self.spFormats.bold_right)
ws.write(row, 1, '{} - {}'.format(tool.owner, tool.leader), _format)
ws.write(row, 2, '', _format)
row += 1
ws.write(row, 0, 'Work Mode:', self.spFormats.bold_right)
ws.write(row, 1, tool.mode)
row += 2
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
reporter = ToolReporter(tool.name, backlog)
data = reporter.issueType
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '{0} Issues = {Epic} Epics + {Feature} Features + '
'{Story} User Stories + {WorkItem} WorkItems + {Bug} Bugs'.format(sum(data.values()), **data))
row += 1
data = reporter.perspective
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0} Issues = {Implemented} Implemented + {Working On} Working On + '
' {Foreseen} Foreseen'.format(sum(data.values()), **data))
row += 1
data = reporter.sprint_status
ws.write(row, 0, 'Sprint Status', self.spFormats.red_bold_right)
ws.write_string(row, 1, '{} Issues = {}'.format(sum(data.values()),
' + '.join("{!s} {}".format(v, k) for (k, v) in data.items())))
row += 1
ws.write(row, 0, 'Tests', self.spFormats.bold_right)
data = reporter.backlog.testMetrics
total = sum(data['OK'].values()) + sum(data['KO'].values())
ws.write_rich_string(row, 1,
'{0:,} Tests = {1:,}'.format(total, sum(data['OK'].values())),
self.spFormats.green, ' OK', ' + ',
'{0:,}'.format(sum(data['KO'].values())), self.spFormats.red, ' KO ')
row += 1
data = reporter.errors
ws.write(row, 0, 'Errors', self.spFormats.bold_right)
ws.write_rich_string(row, 1,
'{:,} Issues = {OK:,}'.format(sum(data.values()), **data), self.spFormats.green, ' OK',
' + '
' {KO:,}'.format(sum(data.values()), **data), self.spFormats.red, ' KO')
if not reporter.length:
return
row += 2
chart = painter.draw_composition(reporter.issueType)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(reporter.perspective)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
row += 26
chart = painter.draw_evolution(reporter.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
_format = self.workbook.add_format({'bold': True, 'font_size': 20, 'bg_color': '#60C1CF', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 4), 'Backlog Entries', _format)
row += 1
ws.write_row(row, 0, ('Item Id', 'Item reference', 'Time frame', 'Status', 'Item type'),
self.spFormats.column_heading)
for issue in backlog:
row += 1
self._write_issue(ws, row, issue)
def _chapter_dashboard(self, chapter):
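        # Chapter dashboard: adds roadmap/tracker/backlog links, the backlog
        # structure (enablers + tools + coordination) and per-enabler / per-tool
        # stacked status charts on top of the usual summary and evolution charts.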
print('------>', chapter.name)
wb = self.workbook
ws = wb.add_worksheet('{} Chapter'.format(chapter.name))
backlog = self.factory.getChapterBacklog(chapter.name)
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3),
"Backlog for Chapter: '{0}'".format(chapter.name), _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'color': 'green'})
ws.write(row, 0, 'Chapter Name:', self.spFormats.bold_right)
ws.write(row, 1, chapter.name, _format)
row += 1
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Chapter Leader:', self.spFormats.bold_right)
ws.write(row, 1, chapter.leader, _format)
ws.write(row, 2, '', _format)
if chapter.architect:
row += 1
ws.write(row, 0, 'Chapter Architect:', self.spFormats.bold_right)
ws.write(row, 1, chapter.architect, _format)
ws.write(row, 2, '', _format)
row += 2
if deploymentBook.roadmap[chapter.name]:
ws.write(row, 0, 'Roadmap:', self.spFormats.bold_right)
ws.write_url(row, 1, '{0}'.format(deploymentBook.roadmap[chapter.name]))
row += 1
link = deploymentBook.tracker[chapter.name] + '&func=browse'
ws.write(row, 0, 'Tracker:', self.spFormats.bold_right)
ws.write_url(row, 1, '{0}'.format(link))
row += 1
link = 'http://backlog.fiware.org/chapter/{}'.format(chapter.name)
ws.write(row, 0, 'Backlog:', self.spFormats.bold_right)
ws.write_url(row, 1, '{0}'.format(link))
if deploymentBook.materializing[chapter.name]:
row += 1
ws.write(row, 0, 'Materializing:', self.spFormats.bold_right)
ws.write_url(row, 1, '{0}'.format(deploymentBook.materializing[chapter.name]))
row += 2
ws.write(row, 0, 'Backlog Structure:', self.spFormats.bold_right)
n_enablers = len(chapter.enablers)
n_tools = len(chapter.tools)
n_coordination = 1
data = (n_enablers + n_tools + n_coordination, n_enablers, n_tools, n_coordination)
ws.write(row, 1, '{} Components = {} Enablers + {} Tools + {} Coordination'.format(*data))
row += 1
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
reporter = ChapterReporter(chapter.name, backlog)
data = reporter.issueType
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Epic:,} Epics + {Feature:,} Features + '
'{Story:,} User Stories + {WorkItem:,} WorkItems + {Bug:,} Bugs'.format(sum(data.values()),
**data))
row += 1
data = reporter.perspective
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Implemented:,} Implemented + {Working On:,} Working On + '
' {Foreseen:,} Foreseen'.format(sum(data.values()), **data))
row += 2
chart = painter.draw_composition(reporter.issueType)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(reporter.perspective)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
if len(reporter.enablers):
row += 26
chart = painter.draw_component_status('Enabler', reporter.enablers, reporter.enablers_execution_status)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
if len(reporter.tools):
row += 15
chart = painter.draw_component_status('Tool', reporter.tools, reporter.tools_execution_status)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
chart = painter.draw_evolution(reporter.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
ws.write(row, 0, '')
def _techChapters_dashboard(self):
print('---> TechChapters')
wb = self.workbook
ws = wb.add_worksheet('Overview')
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3),
"Backlog for Technical Chapters", _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Scrum Master:', self.spFormats.bold_right)
ws.write(row, 1, 'FF - Veronika Vlnkova', _format)
ws.write(row, 2, '', _format)
row += 1
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Technical Scrum Master:', self.spFormats.bold_right)
ws.write(row, 1, 'FF - Fernando López', _format)
ws.write(row, 2, '', _format)
row += 1
ws.write(row, 0, 'Backlog Structure:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
ws.write(row, 1, '{} Chapters'.format(len(self.gReporter.chapters)))
n_enablers = sum([len(chaptersBook[chapter].enablers) for chapter in chaptersBook])
n_tools = sum([len(chaptersBook[chapter].tools) for chapter in chaptersBook])
n_coordination = len([chaptersBook[chapter].coordination for chapter in chaptersBook])
data = (n_enablers + n_tools + n_coordination, n_enablers, n_tools, n_coordination)
row += 1
ws.write(row, 1, '{} Components = {} Enablers + {} Tools + {} Coordination'.format(*data))
row += 1
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
reporter = self.gReporter
row += 1
data = reporter.issueType
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Epic:,} Epics + {Feature:,} Features + '
'{Story:,} User Stories + {WorkItem:,} WorkItems + {Bug:,} Bugs'.format(sum(data.values()),
**data))
row += 1
data = reporter.perspective
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Implemented:,} Implemented + {Working On:,} Working On + '
' {Foreseen:,} Foreseen'.format(sum(data.values()), **data))
row += 2
chart = painter.draw_composition(reporter.issueType)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(reporter.perspective)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
row += 26
chart = painter.draw_evolution(reporter.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
chart = painter.draw_chapters_status(reporter.chapters, reporter.chapters_execution_status)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
chart = painter.draw_enablers_status(reporter.enablers, reporter.enablers_execution_status)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 50
ws.write(row, 0, '')
def chapter(self, chaptername):
if chaptername not in settings.chapters:
raise Exception("Unknown chapter: {}".format(chaptername))
print()
print("--monitor-- chapter:", chaptername)
_date = datetime.now().strftime("%Y%m%d-%H%M")
filename = 'FIWARE.backlog.report.' + chaptername + '.' + _date + '.xlsx'
myfile = os.path.join(settings.outHome, filename)
self.workbook = xlsxwriter.Workbook(myfile)
self.spFormats = SpreadsheetFormats(self.workbook)
self._techChapters_dashboard()
chapter = chaptersBook[chaptername]
self._chapter_dashboard(chapter)
self._coordination_dashboard(chapter.coordination)
for _enabler in chapter.enablers:
self._enabler_dashboard(chapter.enablers[_enabler])
for _tool in chapter.tools:
self._tool_dashboard(chapter.tools[_tool])
print(chaptername, ': W:' + myfile)
self.workbook.close()
def _lab_chapter_dashboard(self):
print('---> LabChapter')
wb = self.workbook
ws = wb.add_worksheet('Overview')
painter = Painter(wb, ws)
ws.set_zoom(80)
ws.set_column(0, 0, 30)
ws.set_column(1, 1, 122)
ws.set_column(2, 5, 20)
row, col = 0, 0
_heading = self.workbook.add_format({'bold': True, 'font_size': 30,
'bg_color': '#002D67', 'font_color': '#FFE616', 'align': 'center'})
ws.merge_range(xl_range(row, 0, row, 3),
"Backlog for Lab Chapter", _heading)
ws.set_row(0, 42)
ws.insert_image(0, 0, settings.logofiware, {'x_scale': 0.5, 'y_scale': 0.5, 'x_offset': 0, 'y_offset': 0})
row += 1
ws.write(row, 0, 'Project Time:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime()))
ws.write(row, 2, 'Report Date:', self.spFormats.bold_right)
ws.write(row, 3, date.today().strftime('%d-%m-%Y'))
row += 1
ws.write(row, 0, 'Start of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.start)))
row += 1
ws.write(row, 0, 'End of Data Analysis:', self.spFormats.bold_right)
ws.write(row, 1, '{}'.format(agileCalendar.projectTime(current_date=self.end)))
row += 2
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Scrum Master:', self.spFormats.bold_right)
ws.write(row, 1, 'FF - Veronika Vlnkova', _format)
ws.write(row, 2, '', _format)
row += 1
_format = self.workbook.add_format({'bold': True, 'font_size': 15, 'bg_color': '#60C1CF'})
ws.write(row, 0, 'Technical Scrum Master:', self.spFormats.bold_right)
ws.write(row, 1, 'FF - Fernando López', _format)
ws.write(row, 2, '', _format)
row += 1
ws.write(row, 0, 'Backlog Structure:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
row += 1
ws.write(row, 1, '{} Nodes'.format(len(self.gLabReporter.lab_nodes)))
row += 2
ws.write(row, 0, 'Backlog Summary:', self.spFormats.bold_right)
ws.write(row, 1, '# Items', self.spFormats.bold_left)
reporter = self.gLabReporter
row += 1
data = reporter.issueType
ws.write(row, 0, 'Composition', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Epic:,} Epics + {Feature:,} Features + '
'{Story:,} User Stories + {WorkItem:,} WorkItems + {Bug:,} Bugs'.format(sum(data.values()),
**data))
row += 1
data = reporter.perspective
ws.write(row, 0, 'Status', self.spFormats.bold_right)
ws.write(row, 1, '{0:,} Issues = {Implemented:,} Implemented + {Working On:,} Working On + '
' {Foreseen:,} Foreseen'.format(sum(data.values()), **data))
row += 2
chart = painter.draw_composition(reporter.issueType)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
chart = painter.draw_status(reporter.perspective)
ws.insert_chart(row, 1, chart, {'x_offset': 520, 'y_offset': 0})
row += 26
chart = painter.draw_evolution(reporter.implemented(self.start, self.end))
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 15
chart = painter.draw_lab_status(self.gLabReporter.lab_nodes, reporter.nodes_execution_status)
ws.insert_chart(row, 1, chart, {'x_offset': 0, 'y_offset': 0})
row += 50
ws.write(row, 0, '')
def lab(self):
print()
print("--monitor-- chapter: Lab")
_date = datetime.now().strftime("%Y%m%d-%H%M")
filename = 'FIWARE.backlog.report.lab.' + _date + '.xlsx'
myfile = os.path.join(settings.outHome, filename)
self.workbook = xlsxwriter.Workbook(myfile)
self.spFormats = SpreadsheetFormats(self.workbook)
self._lab_chapter_dashboard()
# for each node we have to get the data to show detailed information
for node in helpdeskNodesBook:
self._lab_node_dashboard(node)
print('Lab: W:' + myfile)
self.workbook.close()
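# WorkBench groups the batch operations exposed by the interactive menu below:
# taking a data snapshot, building the per-chapter (and Lab) Excel reports,
# and uploading the generated reports.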
class WorkBench:
@staticmethod
def report():
print('report')
reporter = BacklogReporter()
chapters = settings.chapters
for _chapter in chapters:
reporter.chapter(_chapter)
reporter.lab()
@staticmethod
def snapshot():
print('snapshot')
DataEngine.snapshot(storage=settings.storeHome)
@staticmethod
def upload():
print('upload')
uploader = Uploader()
uploader.upload('backlog', 'report', settings.chapters)
if __name__ == "__main__":
options = {'0': WorkBench.snapshot,
'1': WorkBench.report,
'2': WorkBench.upload,
'E': exit}
while True:
menu = '\nMenu:\n\t0: get snapshot\n\t1: create reports \n\t2: upload report\n\tE: Exit'
choice = input(menu + '\nEnter your choice[0-2,(E)xit] : ')
print('\nChosen option: {}\n'.format(choice))
if choice in ('0', '1', '2', 'E'):
options[choice]()
else:
print('\n\n\nWrong option, please try again... ')
| 41.630872 | 119 | 0.562076 | 6,974 | 55,827 | 4.354603 | 0.062661 | 0.053245 | 0.062235 | 0.030426 | 0.788172 | 0.771445 | 0.761336 | 0.746254 | 0.728111 | 0.714281 | 0 | 0.030051 | 0.279954 | 55,827 | 1,340 | 120 | 41.66194 | 0.725434 | 0.018486 | 0 | 0.72026 | 0 | 0.000929 | 0.149963 | 0.000876 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026952 | false | 0 | 0.015799 | 0 | 0.063197 | 0.025093 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e82eb21b1cf7f4d918f57973715a606f2515e9c9 | 122 | py | Python | external_connections/__init__.py | RamonWill/portfolio-management-project | ac8ce313f8d62f09810fc1da19d6b252f193871b | [
"MIT"
] | 14 | 2020-01-01T04:59:06.000Z | 2022-02-08T06:48:21.000Z | external_connections/__init__.py | linhvien/portfolio-management-project | ac8ce313f8d62f09810fc1da19d6b252f193871b | [
"MIT"
] | null | null | null | external_connections/__init__.py | linhvien/portfolio-management-project | ac8ce313f8d62f09810fc1da19d6b252f193871b | [
"MIT"
] | 8 | 2020-10-15T06:52:37.000Z | 2021-10-04T06:44:36.000Z | from .news_api import NewsConnection
from .alphavantage_api import AlphaVantageAPI
from .oanda_api import OandaConnection
| 30.5 | 45 | 0.877049 | 15 | 122 | 6.933333 | 0.6 | 0.259615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 122 | 3 | 46 | 40.666667 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c3d1c2f9ec0e9adbd1e4c5d5dd86197d544fb49 | 251 | py | Python | SourceModel/SM_IfStmt.py | crossminer/CrossPuppeteer | ab99f67f9c3440752e767ad284de5049f6fd1da9 | [
"Apache-2.0"
] | 47 | 2016-02-08T08:46:17.000Z | 2021-01-17T23:56:34.000Z | SourceModel/SM_IfStmt.py | crossminer/CrossPuppeteer | ab99f67f9c3440752e767ad284de5049f6fd1da9 | [
"Apache-2.0"
] | null | null | null | SourceModel/SM_IfStmt.py | crossminer/CrossPuppeteer | ab99f67f9c3440752e767ad284de5049f6fd1da9 | [
"Apache-2.0"
] | 15 | 2016-02-09T13:34:48.000Z | 2021-05-12T14:34:26.000Z | import SourceModel.SM_Element
class SM_IfStmt(SourceModel.SM_Element.SM_Element):
def __init__(self, text):
self.resourceText = text
super().__init__(text)
def getUsedVariables(self):
return super().getUsedVariables() | 27.888889 | 51 | 0.709163 | 28 | 251 | 5.928571 | 0.5 | 0.162651 | 0.240964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191235 | 251 | 9 | 52 | 27.888889 | 0.817734 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1c52fc06e2519fe82ee1ae66e4f2f568db75608d | 31 | py | Python | samples/src/main/resources/datasets/python/126.py | sritchie/kotlingrad | 8165ed1cd77220a5347c58cded4c6f2bcf22ee30 | [
"Apache-2.0"
] | 11 | 2020-12-19T01:19:44.000Z | 2021-12-25T20:43:33.000Z | src/main/resources/datasets/python/126.py | breandan/katholic | 081c39f3acc73ff41f5865563debe78a36e1038f | [
"Apache-2.0"
] | null | null | null | src/main/resources/datasets/python/126.py | breandan/katholic | 081c39f3acc73ff41f5865563debe78a36e1038f | [
"Apache-2.0"
] | 2 | 2021-01-25T07:59:20.000Z | 2021-08-07T07:13:49.000Z | def unaryOp0(a):
return +a
| 10.333333 | 16 | 0.612903 | 5 | 31 | 3.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.258065 | 31 | 2 | 17 | 15.5 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1c54f0ab806e669b2a2513e59a39f24bdae98ad7 | 31 | py | Python | services/alphabot/__init__.py | MegOBonus/aplhabot-controller | 7007c5afc1cec02b374305b724507200664b242b | [
"MIT"
] | null | null | null | services/alphabot/__init__.py | MegOBonus/aplhabot-controller | 7007c5afc1cec02b374305b724507200664b242b | [
"MIT"
] | null | null | null | services/alphabot/__init__.py | MegOBonus/aplhabot-controller | 7007c5afc1cec02b374305b724507200664b242b | [
"MIT"
] | null | null | null | from .alphabot import Alphabot
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98be7475351d101a0140e2ad313637e6de9e5c3a | 83 | py | Python | torchility/callbacks/__init__.py | hitlic/torchility | c28701d8c93955ad115d364b35b680a60ecfd360 | [
"MIT"
] | 9 | 2021-05-15T14:48:47.000Z | 2021-11-08T04:09:59.000Z | torchility/callbacks/__init__.py | hitlic/torchility | c28701d8c93955ad115d364b35b680a60ecfd360 | [
"MIT"
] | null | null | null | torchility/callbacks/__init__.py | hitlic/torchility | c28701d8c93955ad115d364b35b680a60ecfd360 | [
"MIT"
] | 1 | 2021-07-01T08:04:55.000Z | 2021-07-01T08:04:55.000Z | from .progressbars import *
from .interpreters import *
from .performances import * | 27.666667 | 27 | 0.795181 | 9 | 83 | 7.333333 | 0.555556 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13253 | 83 | 3 | 28 | 27.666667 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98c10acf5eb1b0fcacddfeca572385985ea45111 | 7,272 | py | Python | src/connectedmachine/azext_connectedmachine/generated/_params.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 2 | 2021-03-24T21:06:20.000Z | 2021-03-24T21:07:58.000Z | src/connectedmachine/azext_connectedmachine/generated/_params.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 3 | 2020-05-27T20:16:26.000Z | 2020-07-23T19:46:49.000Z | src/connectedmachine/azext_connectedmachine/generated/_params.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 5 | 2020-09-08T22:46:48.000Z | 2020-11-08T14:54:35.000Z | # --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
# pylint: disable=too-many-statements
from azure.cli.core.commands.parameters import (
tags_type,
get_three_state_flag,
resource_group_name_type,
get_location_type
)
from azure.cli.core.commands.validators import (
get_default_location_from_resource_group,
validate_file_or_dict
)
def load_arguments(self, _):
with self.argument_context('connectedmachine list') as c:
c.argument('resource_group_name', resource_group_name_type)
with self.argument_context('connectedmachine show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', options_list=['--name', '-n', '--machine-name'], type=str, help='The name of the '
'hybrid machine.', id_part='name')
with self.argument_context('connectedmachine delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', options_list=['--name', '-n', '--machine-name'], type=str, help='The name of the '
'hybrid machine.', id_part='name')
with self.argument_context('connectedmachine extension list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', type=str, help='The name of the machine containing the extension.')
c.argument('expand', type=str, help='The expand expression to apply on the operation.')
with self.argument_context('connectedmachine extension show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', type=str, help='The name of the machine containing the extension.', id_part='name')
c.argument('name', options_list=['-n', '--extension-name', '--name'], type=str, help='The name of the machine '
'extension.', id_part='child_name_1')
with self.argument_context('connectedmachine extension create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', type=str, help='The name of the machine where the extension should be created or '
'updated.')
c.argument('name', options_list=['-n', '--extension-name', '--name'], type=str, help='The name of the machine '
'extension.')
c.argument('tags', tags_type)
c.argument('location', arg_type=get_location_type(self.cli_ctx),
validator=get_default_location_from_resource_group)
c.argument('force_update_tag', type=str, help='How the extension handler should be forced to update even if '
'the extension configuration has not changed.')
c.argument('publisher', type=str, help='The name of the extension handler publisher.')
c.argument('type_', options_list=['--type'], type=str, help='Specifies the type of the extension; an example '
'is "CustomScriptExtension".')
c.argument('type_handler_version', type=str, help='Specifies the version of the script handler.')
c.argument('auto_upgrade_minor_version', options_list=['--auto-upgrade-minor'],
arg_type=get_three_state_flag(), help='Indicates whether the extension should use a newer minor '
'version if one is available at deployment time. Once deployed, however, the extension will not '
'upgrade minor versions unless redeployed, even with this property set to true.')
c.argument('settings', type=validate_file_or_dict, help='Json formatted public settings for the extension. '
'Expected value: json-string/@json-file.')
c.argument('protected_settings', type=validate_file_or_dict, help='The extension can contain either '
'protectedSettings or protectedSettingsFromKeyVault or no protected settings at all. Expected '
'value: json-string/@json-file.')
with self.argument_context('connectedmachine extension update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', type=str, help='The name of the machine where the extension should be created or '
'updated.', id_part='name')
c.argument('name', options_list=['-n', '--extension-name', '--name'], type=str, help='The name of the machine '
'extension.', id_part='child_name_1')
c.argument('tags', tags_type)
c.argument('force_update_tag', type=str, help='How the extension handler should be forced to update even if '
'the extension configuration has not changed.')
c.argument('publisher', type=str, help='The name of the extension handler publisher.')
c.argument('type_', options_list=['--type'], type=str, help='Specifies the type of the extension; an example '
'is "CustomScriptExtension".')
c.argument('type_handler_version', type=str, help='Specifies the version of the script handler.')
c.argument('auto_upgrade_minor_version', options_list=['--auto-upgrade-minor'],
arg_type=get_three_state_flag(), help='Indicates whether the extension should use a newer minor '
'version if one is available at deployment time. Once deployed, however, the extension will not '
'upgrade minor versions unless redeployed, even with this property set to true.')
c.argument('settings', type=validate_file_or_dict, help='Json formatted public settings for the extension. '
'Expected value: json-string/@json-file.')
c.argument('protected_settings', type=validate_file_or_dict, help='The extension can contain either '
'protectedSettings or protectedSettingsFromKeyVault or no protected settings at all. Expected '
'value: json-string/@json-file.')
with self.argument_context('connectedmachine extension delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', type=str, help='The name of the machine where the extension should be deleted.',
id_part='name')
c.argument('name', options_list=['-n', '--extension-name', '--name'], type=str, help='The name of the machine '
'extension.', id_part='child_name_1')
with self.argument_context('connectedmachine extension wait') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('machine_name', type=str, help='The name of the machine containing the extension.', id_part='name')
c.argument('name', options_list=['-n', '--extension-name', '--name'], type=str, help='The name of the machine '
'extension.', id_part='child_name_1')
| 65.513514 | 119 | 0.663229 | 921 | 7,272 | 5.064061 | 0.175896 | 0.077187 | 0.051887 | 0.048027 | 0.884434 | 0.852916 | 0.825686 | 0.814751 | 0.814751 | 0.814751 | 0 | 0.000692 | 0.205308 | 7,272 | 110 | 120 | 66.109091 | 0.806368 | 0.069582 | 0 | 0.666667 | 0 | 0 | 0.470684 | 0.037015 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011494 | false | 0 | 0.022989 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98cb315eb30b925f5ced1bcdcef3eeab88a392ac | 5,409 | py | Python | visbeer/test/test_beer_service.py | lukaselmer/vis-beer | ecabbfa8aeb540ea3cb66cd1a1d2192b8e439085 | [
"MIT"
] | null | null | null | visbeer/test/test_beer_service.py | lukaselmer/vis-beer | ecabbfa8aeb540ea3cb66cd1a1d2192b8e439085 | [
"MIT"
] | null | null | null | visbeer/test/test_beer_service.py | lukaselmer/vis-beer | ecabbfa8aeb540ea3cb66cd1a1d2192b8e439085 | [
"MIT"
] | null | null | null | import unittest
import datetime
from visbeer.services.beer_service import BeerService
from visbeer.services.data_service import DataService, DATETIME_FORMAT
from visbeer.test.mocks.flag_service_mock import FlagServiceMock
class BeerServiceTestCase(unittest.TestCase):
def test_ctor(self):
self.assertEqual('010101@rfid.ethz.ch', BeerService('010101@rfid.ethz.ch', DataService(FlagServiceMock())).rfid)
self.assertEqual('010203@rfid.ethz.ch', BeerService('010203@rfid.ethz.ch', DataService(FlagServiceMock())).rfid)
def test_invalid_rfid(self):
with self.assertRaises(Exception):
BeerService('12345@rfid.ethz.ch', None)
with self.assertRaises(Exception):
BeerService('1234567@rfid.ethz.ch', None)
with self.assertRaises(Exception):
BeerService('@rfid.ethz.ch', None)
with self.assertRaises(Exception):
BeerService('rfid.ethz.ch', None)
with self.assertRaises(Exception):
BeerService('awefefw@rfid.ethz.ch', None)
with self.assertRaises(Exception):
BeerService('awefefw@', None)
with self.assertRaises(Exception):
BeerService('awefefw@whatever.com', None)
with self.assertRaises(Exception):
BeerService('234234', None)
with self.assertRaises(Exception):
BeerService('234@2q3r', None)
def test_status(self):
mock = FlagServiceMock()
bs = BeerService('010101@rfid.ethz.ch', DataService(mock))
self.assertEqual(1, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 10
self.assertEqual(5, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = None
self.assertEqual(5, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 5
three_years_ago = (datetime.datetime.now() - datetime.timedelta(days=3 * 365)).strftime(DATETIME_FORMAT)
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = three_years_ago
self.assertEqual(5, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 5
one_day_ago = (datetime.datetime.now() - datetime.timedelta(days=1)).strftime(DATETIME_FORMAT)
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = one_day_ago
self.assertEqual(5, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 5
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = datetime.datetime.now().strftime(DATETIME_FORMAT)
self.assertEqual(2, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 2
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = datetime.datetime.now().strftime(DATETIME_FORMAT)
self.assertEqual(1, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 1
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = datetime.datetime.now().strftime(DATETIME_FORMAT)
self.assertEqual(0, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 1
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 1
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = datetime.datetime.now().strftime(DATETIME_FORMAT)
self.assertEqual(0, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 1
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 0
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = datetime.datetime.now().strftime(DATETIME_FORMAT)
self.assertEqual(0, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits_per_day'] = 10
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 0
mock.data['010101@rfid.ethz.ch']['coffee_beer|last_consumption'] = datetime.datetime.now().strftime(DATETIME_FORMAT)
self.assertEqual(0, bs.status())
def test_status_and_dispensed(self):
mock = FlagServiceMock()
bs = BeerService('010101@rfid.ethz.ch', DataService(mock))
self.assertEqual(1, bs.status())
bs.dispensed()
self.assertEqual(0, bs.status())
bs.dispensed()
self.assertEqual(0, bs.status())
mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'] = 4
self.assertEqual(2, bs.status())
bs.dispensed()
self.assertEqual(2, mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'])
self.assertEqual(1, bs.status())
bs.dispensed()
self.assertEqual(0, mock.data['010101@rfid.ethz.ch']['coffee_beer|credits'])
self.assertEqual(0, bs.status())
bs.dispensed()
self.assertEqual(0, bs.status())
if __name__ == '__main__':
unittest.main()
| 47.447368 | 124 | 0.666112 | 687 | 5,409 | 5.106259 | 0.103348 | 0.095781 | 0.119726 | 0.159635 | 0.859749 | 0.843501 | 0.781072 | 0.742018 | 0.742018 | 0.726055 | 0 | 0.06704 | 0.175448 | 5,409 | 113 | 125 | 47.867257 | 0.719507 | 0 | 0 | 0.641304 | 0 | 0 | 0.291551 | 0.091329 | 0 | 0 | 0 | 0 | 0.336957 | 1 | 0.043478 | false | 0 | 0.054348 | 0 | 0.108696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98ed4c84547e015eb547c8c02145213c2bd86792 | 111 | py | Python | PyEDA/build/lib/PyEDA/__init__.py | cescgina/NormalModes-PYT_SBI | 6b7b77dffe45157c0b9a7ac24ad9daa096464f5b | [
"MIT"
] | null | null | null | PyEDA/build/lib/PyEDA/__init__.py | cescgina/NormalModes-PYT_SBI | 6b7b77dffe45157c0b9a7ac24ad9daa096464f5b | [
"MIT"
] | null | null | null | PyEDA/build/lib/PyEDA/__init__.py | cescgina/NormalModes-PYT_SBI | 6b7b77dffe45157c0b9a7ac24ad9daa096464f5b | [
"MIT"
] | null | null | null | from PyEDA import edanalysis, helper_module, interface
__all__ = ['edanalysis', 'helper_module', 'interface']
| 27.75 | 54 | 0.774775 | 12 | 111 | 6.666667 | 0.666667 | 0.4 | 0.55 | 0.775 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 111 | 3 | 55 | 37 | 0.808081 | 0 | 0 | 0 | 0 | 0 | 0.288288 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c706b057b82466dfeb0eac0f16024ccc338e56da | 42 | py | Python | kick/device2/elektra/actions/__init__.py | CiscoDevNet/firepower-kickstart | 37a36856fcdc661e8c51edaa694e48f74cc6fcb5 | [
"Apache-2.0"
] | 2 | 2020-02-10T23:36:57.000Z | 2020-03-25T15:46:05.000Z | kick/device2/elektra/actions/__init__.py | CiscoDevNet/firepower-kickstart | 37a36856fcdc661e8c51edaa694e48f74cc6fcb5 | [
"Apache-2.0"
] | 1 | 2020-08-07T13:01:32.000Z | 2020-08-07T13:01:32.000Z | kick/device2/elektra/actions/__init__.py | CiscoDevNet/firepower-kickstart | 37a36856fcdc661e8c51edaa694e48f74cc6fcb5 | [
"Apache-2.0"
] | 1 | 2020-02-19T13:58:35.000Z | 2020-02-19T13:58:35.000Z | from .elektra import Elektra, ElektraLine
| 21 | 41 | 0.833333 | 5 | 42 | 7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c744445e8c6d4a114881225ae44af8a200eb27af | 22,647 | py | Python | multi_armed_bandit/pymab/policy/contextual.py | smn-ailab/ysaito-qiita | 8c5dd1496efa662cc636024cc49e2fd374a3daa5 | [
"MIT"
] | 14 | 2018-03-16T09:40:05.000Z | 2021-08-16T16:38:57.000Z | multi_armed_bandit/pymab/policy/contextual.py | smn-ailab/ysaito-qiita | 8c5dd1496efa662cc636024cc49e2fd374a3daa5 | [
"MIT"
] | 1 | 2019-02-18T01:09:24.000Z | 2019-02-18T01:09:24.000Z | multi_armed_bandit/pymab/policy/contextual.py | smn-ailab/ysaito-qiita | 8c5dd1496efa662cc636024cc49e2fd374a3daa5 | [
"MIT"
] | 6 | 2018-12-21T08:58:31.000Z | 2021-12-08T10:05:06.000Z | """This Module contains Contextual Bandit Policies."""
import copy
import math
import random
from typing import Optional, Tuple, Union
import numpy as np
from scipy.stats import norm
from pymab.utils import _check_x_input
from .base import BaseContextualPolicy
class LinUCB(BaseContextualPolicy):
"""Linear Upper Confidence Bound.
Parameters
----------
n_arms: int
The number of given bandit arms.
n_features: int
The dimension of context vectors.
alpha: float, optional(default=1.0)
The hyper-parameter which represents how often the algorithm explores.
warmup: int, optional(default=1)
The minimum number of pulls of each arm.
batch_size: int, optional (default=1)
The number of data given in each batch.
References
-------
[1] L. Li, W. Chu, J. Langford, and E. Schapire.
A contextual-bandit approach to personalized news article recommendation.
In Proceedings of the 19th International Conference on World Wide Web, pp. 661–670. ACM, 2010.
"""
def __init__(self, n_arms: int, n_features: int, alpha: float=1.0, warmup: int=1, batch_size: int=1) -> None:
"""Initialize class."""
super().__init__(n_arms, n_features, warmup, batch_size)
self.alpha = alpha
self.name = f"LinUCB(α={self.alpha})"
self.theta_hat = np.zeros((self.n_features, self.n_arms)) # d * k
self.A_inv = np.concatenate([np.identity(self.n_features)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.n_features, self.n_features) # k * d * d
self.b = np.zeros((self.n_features, self.n_arms)) # d * k
self._A_inv = np.concatenate([np.identity(self.n_features)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.n_features, self.n_features)
self._b = np.zeros((self.n_features, self.n_arms))
def select_arm(self, x: np.ndarray) -> int:
"""Select arms according to the policy for new data.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
Returns
-------
result: int
The selected arm.
"""
if True in (self.counts < self.warmup):
result = np.argmax(np.array(self.counts < self.warmup, dtype=int))
else:
x = _check_x_input(x)
self.theta_hat = np.concatenate([self.A_inv[i] @ np.expand_dims(self.b[:, i], axis=1)
for i in np.arange(self.n_arms)], axis=1) # user_dim * n_arms
sigma_hat = np.concatenate([np.sqrt(x.T @ self.A_inv[i] @ x) for i in np.arange(self.n_arms)], axis=1) # 1 * n_arms
result = np.argmax(x.T @ self.theta_hat + self.alpha * sigma_hat)
return result
def update(self, x: np.matrix, chosen_arm: int, reward: Union[int, float]) -> None:
"""Update the reward and parameter information about earch arm.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
chosen_arm: int
The chosen arm.
reward: int, float
The observed reward value from the chosen arm.
"""
x = _check_x_input(x)
self.data_size += 1
self.counts[chosen_arm] += 1
self.rewards += reward
self._A_inv[chosen_arm] -= \
self._A_inv[chosen_arm] @ x @ x.T @ self._A_inv[chosen_arm] / (1 + x.T @ self._A_inv[chosen_arm] @ x) # d * d
self._b[:, chosen_arm] += np.ravel(x) * reward # d * 1
if self.data_size % self.batch_size == 0:
self.A_inv, self.b = np.copy(self._A_inv), np.copy(self._b) # d * d, d * 1
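# Usage sketch (kept as a comment): a minimal simulation loop for LinUCB under
# an assumed synthetic linear reward model. `true_theta`, the number of rounds
# and the Gaussian context distribution are illustrative choices only, not part
# of the library.
#
#   import numpy as np
#   policy = LinUCB(n_arms=3, n_features=5, alpha=1.0)
#   true_theta = np.random.normal(size=(5, 3))   # hypothetical ground truth
#   for t in range(1000):
#       x = np.random.normal(size=5)             # context observed this round
#       arm = policy.select_arm(x)
#       reward = float(x @ true_theta[:, arm] + np.random.normal())
#       policy.update(x, arm, reward)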
class HybridLinUCB(BaseContextualPolicy):
"""Hybrid Linear Upper Confidence Bound.
Parameters
----------
n_arms: int
The number of given bandit arms.
z_dim: int
The dimension of context vectors which are common to all arms.
x_dim: int
The dimension of context vectors which are unique to each arm.
alpha: float, optional(default=1.0)
The hyper-parameter which represents how often the algorithm explores.
warmup: int, optional(default=1)
The minimum number of pulls of each arm.
batch_size: int, optional (default=1)
The number of data given in each batch.
References
-------
[1] L. Li, W. Chu, J. Langford, and E. Schapire.
A contextual-bandit approach to personalized news article recommendation.
In Proceedings of the 19th International Conference on World Wide Web, pp. 661–670. ACM, 2010.
"""
def __init__(self, n_arms: int, z_dim: int, x_dim: int, alpha: float=1.0, warmup: int=1, batch_size: int=1) -> None:
"""Initialize class."""
super().__init__(n_arms, z_dim + x_dim, warmup, batch_size)
self.z_dim = z_dim # k
self.x_dim = x_dim # d
self.alpha = alpha
self.name = f"HybridLinUCB(α={self.alpha})"
self.beta = np.zeros(self.z_dim)
self.theta_hat = np.zeros((self.x_dim, self.n_arms)) # d * k
# matrices which are common to all context
self.A_zero, self.b_zero = np.identity(self.z_dim), np.zeros((self.z_dim, 1)) # k * k, k * 1
self.A_inv = np.concatenate([np.identity(self.x_dim)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.x_dim, self.x_dim) # k * d * d
self.B = np.concatenate([np.zeros((self.x_dim, self.z_dim))
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.x_dim, self.z_dim)
self.b = np.zeros((self.x_dim, self.n_arms))
self._A_zero, self._b_zero = np.identity(self.z_dim), np.zeros((self.z_dim, 1))
self._A_inv = np.concatenate([np.identity(self.x_dim)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.x_dim, self.x_dim) # k * d * d
self._B = np.concatenate([np.zeros((self.x_dim, self.z_dim))
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.x_dim, self.z_dim)
self._b = np.zeros((self.x_dim, self.n_arms))
def select_arm(self, x: np.ndarray) -> int:
"""Select arms according to the policy for new data.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
Returns
-------
result: int
The selected arm.
"""
if True in (self.counts < self.warmup):
result = np.argmax(np.array(self.counts < self.warmup, dtype=int))
else:
z, x = _check_x_input(x[:self.z_dim]), _check_x_input(x[self.z_dim:])
self.beta = np.linalg.inv(self.A_zero) @ self.b_zero # k * 1
self.theta_hat = np.concatenate([(self.A_inv[i] @ (np.expand_dims(self.b[:, i], axis=1) - self.B[i] @ self.beta))
for i in np.arange(self.n_arms)], axis=1)
s1 = z.T @ np.linalg.inv(self.A_zero) @ z
s2 = - 2 * np.concatenate([z.T @ np.linalg.inv(self.A_zero) @ self.B[i].T @ self.A_inv[i] @ x
for i in np.arange(self.n_arms)], axis=1)
s3 = np.concatenate([x.T @ self.A_inv[i] @ x for i in np.arange(self.n_arms)], axis=1)
s4 = np.concatenate([x.T @ self.A_inv[i] @ self.B[i] @ np.linalg.inv(self.A_zero) @ self.B[i].T @ self.A_inv[i] @ x
for i in np.arange(self.n_arms)], axis=1)
sigma_hat = s1 + s2 + s3 + s4
result = np.argmax(z.T @ self.beta + x.T @ self.theta_hat + self.alpha * sigma_hat)
return result
def update(self, x: np.ndarray, chosen_arm: int, reward: float) -> None:
"""Update the reward and parameter information about earch arm.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
chosen_arm: int
The chosen arm.
reward: int, float
The observed reward value from the chosen arm.
"""
z, x = _check_x_input(x[:self.z_dim]), _check_x_input(x[self.z_dim:])
self.data_size += 1
self.counts[chosen_arm] += 1
self.rewards += reward
self._A_zero += self._B[chosen_arm].T @ self._A_inv[chosen_arm] @ self._B[chosen_arm]
self._b_zero += self._B[chosen_arm].T @ self._A_inv[chosen_arm] @ self._b[chosen_arm]
self._A_inv[chosen_arm] -= self._A_inv[chosen_arm] @ x @ x.T @ self._A_inv[chosen_arm] / (1 + x.T @ self._A_inv[chosen_arm] @ x)
self._B[chosen_arm] += x @ z.T
self._b[:, chosen_arm] += np.ravel(x) * reward
self._A_zero += z @ z.T - self._B[chosen_arm].T @ self._A_inv[chosen_arm] @ self._B[chosen_arm]
self._b_zero += z * reward - self._B[chosen_arm].T @ self._A_inv[chosen_arm] @ np.expand_dims(self._b[:, chosen_arm], axis=1)
if self.data_size % self.batch_size == 0:
self.A_zero, self.b_zero = np.copy(self._A_zero), np.copy(self._b_zero)
self.A_inv, self.B, self.b = np.copy(self._A_inv), np.copy(self._B), np.copy(self._b)
class LinTS(BaseContextualPolicy):
"""Linear Thompson Sampling.
Parameters
----------
n_arms: int
The number of given bandit arms.
n_features: int
The dimension of context vectors.
sigma: float, optional(default=1.0)
The variance of the prior Gaussian distribution.
warmup: int, optional(default=1)
The minimum number of pulls of each arm.
sample_batch: int, optional (default=1)
How often the policy samples new parameters.
batch_size: int, optional (default=1)
The number of data given in each batch.
References
-------
[1] J. Honda and A. Nakamura. Theory and Algorithms for Bandit Problems (in Japanese). Kodansha Machine Learning Professional Series, 2016.
"""
def __init__(self, n_arms: int, n_features: int, sigma: float=1.0,
warmup: int=1, sample_batch: int=1, batch_size: int=1) -> None:
"""Initialize class."""
super().__init__(n_arms, n_features, warmup, batch_size)
self.sigma = sigma
self.sample_batch = sample_batch
self.name = f"LinTS(σ={self.sigma})"
self.theta_hat, self.theta_tilde = np.zeros((self.n_features, self.n_arms)), np.zeros((self.n_features, self.n_arms))
self.A_inv = np.concatenate([np.identity(self.n_features)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.n_features, self.n_features) # k * d * d
self.b = np.zeros((self.n_features, self.n_arms)) # d * k
self._A_inv = np.concatenate([np.identity(self.n_features)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.n_features, self.n_features)
self._b = np.zeros((self.n_features, self.n_arms))
def select_arm(self, x: np.matrix) -> int:
"""Select arms according to the policy for new data.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
Returns
-------
result: int
The selected arm.
"""
if True in (self.counts < self.warmup):
result = np.argmax(np.array(self.counts < self.warmup, dtype=int))
else:
x = _check_x_input(x)
if self.data_size % self.sample_batch == 0:
self.theta_hat = np.concatenate([self.A_inv[i] @ np.expand_dims(self.b[:, i], axis=1)
for i in np.arange(self.n_arms)], axis=1)
self.theta_tilde = np.concatenate([np.expand_dims(np.random.multivariate_normal(self.theta_hat[:, i], self.A_inv[i]), axis=1)
for i in np.arange(self.n_arms)], axis=1)
result = np.argmax(x.T @ self.theta_tilde)
return result
def update(self, x: np.matrix, chosen_arm: int, reward: float) -> None:
"""Update the reward and parameter information about earch arm.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
chosen_arm: int
The chosen arm.
reward: int, float
The observed reward value from the chosen arm.
"""
x = _check_x_input(x)
self.data_size += 1
self.counts[chosen_arm] += 1
self.rewards += reward
self._A_inv[chosen_arm] -= \
self._A_inv[chosen_arm] @ x @ x.T @ self._A_inv[chosen_arm] / (1 + x.T @ self._A_inv[chosen_arm] @ x) # d * d
self._b[:, chosen_arm] += np.ravel(x) * reward # d * 1
if self.data_size % self.batch_size == 0:
self.A_inv, self.b = np.copy(self._A_inv), np.copy(self._b) # d * d, d * 1
class LogisticTS(BaseContextualPolicy):
"""Logistic Thompson Sampling.
Parameters
----------
n_arms: int
The number of given bandit arms.
n_features: int
The dimension of context vectors.
sigma: float, optional(default=0.1)
The variance of the prior Gaussian distribution.
n_iter: int, optional(default=1)
The number of iterations of Newton's method in each parameter update.
sample_batch: int, optional (default=1)
How often the policy samples new parameters.
warmup: int, optional(default=1)
The minimum number of pulls of each arm.
batch_size: int, optional (default=1)
The number of data given in each batch.
References
-------
[1] J. Honda and A. Nakamura. Theory and Algorithms for Bandit Problems (in Japanese). Kodansha Machine Learning Professional Series, 2016.
[2] O. Chapelle, L. Li. An Empirical Evaluation of Thompson Sampling. In NIPS, pp. 2249–2257, 2011.
"""
def __init__(self, n_arms: int, n_features: int, sigma: float=0.1,
n_iter: int=1, warmup: int=1, sample_batch: int=1, batch_size: int=1) -> None:
"""Initialize Class."""
super().__init__(n_arms, n_features, warmup, batch_size)
self.sigma = sigma
self.n_iter = n_iter
self.sample_batch = sample_batch
self.name = f"LogisticTS(σ={self.sigma})"
self.data_stock: list = [[] for i in np.arange(self.n_arms)]
self.reward_stock: list = [[] for i in np.arange(self.n_arms)]
# array - (n_arms * user_dim),
self.theta_hat, self.theta_tilde = np.zeros((self.n_features, self.n_arms)), np.zeros((self.n_features, self.n_arms))
self.hessian_inv = np.concatenate([np.identity(self.n_features)
for i in np.arange(self.n_arms)]).reshape(self.n_arms, self.n_features, self.n_features)
def select_arm(self, x: np.ndarray) -> int:
"""Select arms according to the policy for new data.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
Returns
-------
result: int
The selected arm.
"""
if True in (self.counts < self.warmup):
result = np.argmax(np.array(self.counts < self.warmup, dtype=int))
else:
x = _check_x_input(x)
if self.data_size % self.sample_batch == 0:
self.theta_tilde = np.concatenate([np.expand_dims(np.random.multivariate_normal(self.theta_hat[:, i], self.hessian_inv[i]), axis=1)
for i in np.arange(self.n_arms)], axis=1)
result = np.argmax(x.T @ self.theta_tilde)
return result
def update(self, x: np.ndarray, chosen_arm: int, reward: float) -> None:
"""Update the reward and parameter information about earch arm.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
chosen_arm: int
The chosen arm.
reward: int, float
The observed reward value from the chosen arm.
"""
x = _check_x_input(x)
self.counts[chosen_arm] += 1
self.rewards += reward
self.data_stock[chosen_arm].append(x) # (user_dim + arm_dim) * 1
self.reward_stock[chosen_arm].append(reward)
self.data_size += 1
if self.data_size % self.batch_size == 0:
for i in np.arange(self.n_iter):
self.theta_hat[:, chosen_arm], self.hessian_inv[chosen_arm] = \
self._update_theta_hat(chosen_arm, self.theta_hat[:, chosen_arm])
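# The helpers below perform one Newton step on the negative log-posterior of a
# Bayesian logistic regression with a Gaussian prior of variance `sigma`:
# _calc_gradient and _calc_hessian evaluate the gradient and Hessian on the data
# stored for the chosen arm, and _update_theta_hat applies
# theta <- theta - H^(-1) g, returning the new estimate together with H^(-1),
# which select_arm later uses as the covariance when sampling theta_tilde.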
def _calc_gradient(self, chosen_arm: int, theta_hat: np.ndarray) -> np.ndarray:
_hat = np.expand_dims(theta_hat, axis=1)
_gradient = _hat / self.sigma
_data = np.concatenate(self.data_stock[chosen_arm], axis=1) # arm_dim * n_user
_gradient += np.expand_dims(np.sum(_data * (np.exp(_hat.T @ _data) / (1 + np.exp(_hat.T @ _data))), axis=1), axis=1)
_gradient -= np.expand_dims(np.sum(_data[:, np.array(self.reward_stock[chosen_arm]) == 1], axis=1), axis=1)
return _gradient
def _calc_hessian(self, chosen_arm: int, theta_hat: np.ndarray) -> np.ndarray:
_hat = np.expand_dims(theta_hat, axis=1)
_hessian = np.identity(self.n_features) / self.sigma
_data = np.concatenate(self.data_stock[chosen_arm], axis=1)
mat = [np.expand_dims(_data[:, i], axis=1) @ np.expand_dims(_data[:, i], axis=1).T
for i in np.arange(self.counts[chosen_arm])]
weight = np.ravel(np.exp(_hat.T @ _data) / (1 + np.exp(_hat.T @ _data)) ** 2) # 1 * data_size
_hessian += np.sum(
np.concatenate([_mat * w for _mat, w in zip(mat, weight)], axis=0).reshape(self.counts[chosen_arm],
self.n_features,
self.n_features),
axis=0)
return _hessian
def _update_theta_hat(self, chosen_arm: int, theta_hat: np.ndarray) -> np.ndarray:
_theta_hat = np.expand_dims(theta_hat, axis=1) # (user_dim * arm_dim) * 1
_gradient = self._calc_gradient(chosen_arm, theta_hat)
_hessian_inv = np.linalg.inv(self._calc_hessian(chosen_arm, theta_hat))
_theta_hat -= _hessian_inv @ _gradient
return np.ravel(_theta_hat), _hessian_inv
class ACTS(BaseContextualPolicy):
"""Action Centered Thompson Sampling Algorithm for Contextual Multi-Armed Bandit Problem.
References
-------
[1] K. Greenewald, Ambuj Tewari, S. Murphy, and P. Klasnja. Action centered contextual bandits. In NIPS, 2017.
"""
def __init__(self, n_arms: int, n_features: int, v: float = 1.0,
pi_min: float = 0.1, pi_max: float = 0.9, warmup: int = 10,
batch_size: int = 100, sample_batch_size: int = 20) -> None:
"""Initialize class."""
self.n_arms = n_arms
self.n_features = n_features # n_arms * user_dim
self.warmup = warmup
self.sigma = v ** 2 # v ** 2 ?
self.pi_min = pi_min
self.pi_max = pi_max
self.a_bar = 0
self.pi_t = pi_max
self.sample_batch_size = sample_batch_size
self.B_inv = [np.copy(np.matrix(np.identity(self.n_features))) for i in np.arange(self.n_arms)]
self.b = [np.copy(np.matrix(np.zeros(self.n_features)).T) for i in np.arange(self.n_arms)]
self.theta = [np.copy(np.zeros(self.n_features)) for i in np.arange(self.n_arms)]
self.theta_tilde = np.matrix(np.zeros(shape=(self.n_features, self.n_arms)))
self.data_size = 0
self.batch_size = batch_size
self._B_inv = [np.copy(np.matrix(np.identity(self.n_features))) for i in np.arange(self.n_arms)] * 1
self._b = [np.copy(np.matrix(np.zeros(self.n_features)).T) for i in np.arange(self.n_arms)]
self._theta = [np.copy(np.zeros(self.n_features)) for i in np.arange(self.n_arms)]
self.counts_warmup = np.zeros(n_arms, dtype=int)
self.counts = np.zeros(n_arms + 1, dtype=int)
self.rewards = 0.0
def select_arm(self, x: np.matrix) -> int:
"""Select arms according to the policy for new data.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
Returns
-------
result: int
The selected arm.
"""
if True in (self.counts_warmup < self.warmup):
self.a_bar = np.where(self.counts_warmup < self.warmup)[0][0]
self.counts_warmup[self.a_bar] += 1
result = self.a_bar + 1
else:
values = np.zeros(self.n_arms)
if self.data_size % self.sample_batch_size == 0:
self.theta_tilde = np.concatenate([np.matrix(np.random.multivariate_normal(mean=self.theta[i], cov=self.sigma * self.B_inv[i])).T
for i in np.arange(self.n_arms)], axis=1)
values = self.theta_tilde.T @ x
self.a_bar = np.argmax(values)
mu_bar = self.theta_tilde[:, self.a_bar].T @ x
sigma_bar = self.sigma * (x.T @ self.B_inv[self.a_bar] @ x).A[0]
self.pi_t = 1.0 - np.clip(a=norm.cdf(x=0, loc=mu_bar, scale=sigma_bar), a_min=self.pi_min, a_max=self.pi_max)[0][0]
result = np.random.choice([0, self.a_bar + 1], p=[1 - self.pi_t, self.pi_t])
return result
def update(self, x: np.matrix, chosen_arm: int, reward: float) -> None:
"""Update the reward and parameter information about earch arm.
Parameters
----------
x : array-like, shape = (n_features, )
A test sample.
chosen_arm: int
The chosen arm.
reward: int, float
The observed reward value from the chosen arm.
"""
self.data_size += 1
self.counts[chosen_arm] += 1
self.rewards += reward
_x = (1 - self.pi_t) * self.pi_t * x
self._B_inv[self.a_bar] -= self._B_inv[self.a_bar] @ _x @ _x.T @ self._B_inv[self.a_bar] / (1 + _x.T @ self._B_inv[self.a_bar] @ _x)
self._b[self.a_bar] += x * reward * (np.sign([chosen_arm]) - self.pi_t)
self._theta[self.a_bar] = (self._B_inv[self.a_bar] @ self._b[self.a_bar]).A.reshape(self.n_features)
if self.data_size % self.batch_size == 0:
self.B_inv = np.copy(self._B_inv) # d * d
self.b = np.copy(self._b) # d * 1
self.theta = np.copy(self._theta)
| 40.154255 | 147 | 0.581666 | 3,295 | 22,647 | 3.799697 | 0.077997 | 0.03754 | 0.040256 | 0.01853 | 0.806629 | 0.770367 | 0.760064 | 0.741933 | 0.716134 | 0.697843 | 0 | 0.012669 | 0.289001 | 22,647 | 563 | 148 | 40.225577 | 0.764688 | 0.251733 | 0 | 0.452675 | 0 | 0 | 0.006164 | 0.006164 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.032922 | 0 | 0.160494 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c74bdd3b259eed5f8a9834fe21ee19f7c8d8839d | 2,407 | py | Python | conans/test/model/ref_test.py | ytimenkov/conan | 89eb275b9696b308aaaa1fbfaa0f8cdab284a764 | [
"MIT"
] | 3 | 2016-11-11T01:09:44.000Z | 2017-07-19T13:30:17.000Z | conans/test/model/ref_test.py | ytimenkov/conan | 89eb275b9696b308aaaa1fbfaa0f8cdab284a764 | [
"MIT"
] | 6 | 2017-06-14T11:40:15.000Z | 2020-05-23T01:43:28.000Z | conans/test/model/ref_test.py | ytimenkov/conan | 89eb275b9696b308aaaa1fbfaa0f8cdab284a764 | [
"MIT"
] | 2 | 2017-11-29T14:05:22.000Z | 2018-09-19T12:43:33.000Z | import unittest
from conans.model.ref import ConanFileReference
from conans.errors import ConanException
class RefTest(unittest.TestCase):
def basic_test(self):
ref = ConanFileReference.loads("opencv/2.4.10 @ lasote/testing")
self.assertEqual(ref.name, "opencv")
self.assertEqual(ref.version, "2.4.10")
self.assertEqual(ref.user, "lasote")
self.assertEqual(ref.channel, "testing")
self.assertEqual(str(ref), "opencv/2.4.10@lasote/testing")
ref = ConanFileReference.loads("opencv_lite/2.4.10@phil_lewis/testing")
self.assertEqual(ref.name, "opencv_lite")
self.assertEqual(ref.version, "2.4.10")
self.assertEqual(ref.user, "phil_lewis")
self.assertEqual(ref.channel, "testing")
self.assertEqual(str(ref), "opencv_lite/2.4.10@phil_lewis/testing")
ref = ConanFileReference.loads("opencv/2.4.10@3rd-party/testing")
self.assertEqual(ref.name, "opencv")
self.assertEqual(ref.version, "2.4.10")
self.assertEqual(ref.user, "3rd-party")
self.assertEqual(ref.channel, "testing")
self.assertEqual(str(ref), "opencv/2.4.10@3rd-party/testing")
def errors_test(self):
self.assertRaises(ConanException, ConanFileReference.loads, "")
self.assertRaises(ConanException, ConanFileReference.loads, "opencv/2.4.10")
self.assertRaises(ConanException, ConanFileReference.loads, "opencv/2.4.10 @ lasote")
self.assertRaises(ConanException, ConanFileReference.loads, "opencv??/2.4.10@laso/testing")
self.assertRaises(ConanException, ConanFileReference.loads, ".opencv/2.4.10@lasote/testing")
self.assertRaises(ConanException, ConanFileReference.loads, "o/2.4.10 @ lasote/testing")
self.assertRaises(ConanException, ConanFileReference.loads, "lib/1.0@user&surname/channel")
self.assertRaises(ConanException, ConanFileReference.loads,
"opencv%s/2.4.10@laso/testing" % "A" * 40)
self.assertRaises(ConanException, ConanFileReference.loads,
"opencv/2.4.10%s@laso/testing" % "A" * 40)
self.assertRaises(ConanException, ConanFileReference.loads,
"opencv/2.4.10@laso%s/testing" % "A" * 40)
self.assertRaises(ConanException, ConanFileReference.loads,
"opencv/2.4.10@laso/testing%s" % "A" * 40)
| 53.488889 | 100 | 0.669713 | 281 | 2,407 | 5.708185 | 0.156584 | 0.022444 | 0.044888 | 0.068579 | 0.868454 | 0.816708 | 0.754364 | 0.715711 | 0.682045 | 0.618454 | 0 | 0.043635 | 0.190694 | 2,407 | 44 | 101 | 54.704545 | 0.779774 | 0 | 0 | 0.307692 | 0 | 0 | 0.225177 | 0.149979 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.051282 | false | 0 | 0.076923 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c75adf9f343f504cc7ea04bc64063c0d6256e0dd | 3,639 | py | Python | packages/girder_worker/tests/integration/test_traditional.py | ShenQianwithC/HistomicsTK | 4ad7e72a7ebdabbdfc879254fad04ce7ca47e320 | [
"Apache-2.0"
] | null | null | null | packages/girder_worker/tests/integration/test_traditional.py | ShenQianwithC/HistomicsTK | 4ad7e72a7ebdabbdfc879254fad04ce7ca47e320 | [
"Apache-2.0"
] | null | null | null | packages/girder_worker/tests/integration/test_traditional.py | ShenQianwithC/HistomicsTK | 4ad7e72a7ebdabbdfc879254fad04ce7ca47e320 | [
"Apache-2.0"
] | null | null | null | from girder_worker.utils import JobStatus
import pytest
# This may create custom statuses that are hard to test for.
# Make sure we check that we have at least the expected
# standard statuses
@pytest.mark.parametrize('endpoint,standard_statuses', [
('integration_tests/traditional/test_job_girder_worker_run',
[JobStatus.QUEUED,
JobStatus.RUNNING,
JobStatus.SUCCESS]),
('integration_tests/traditional/test_girder_worker_run_as_celery_task',
[JobStatus.RUNNING,
JobStatus.SUCCESS])], ids=['traditional', 'celery'])
def test_girder_worker_run(session, endpoint, standard_statuses):
r = session.post(endpoint)
assert r.status_code == 200, r.content
with session.wait_for_success(r.json()['_id']) as job:
assert [ts['status'] for ts in job['timestamps']
if ts['status'] in standard_statuses] == standard_statuses
assert 'celeryTaskId' in job
assert session.get_result(job['celeryTaskId']) == '{"c": {"data": 3, "format": "integer"}}'
# Note: This may create custom statuses that are hard to test for.
# Make sure we check that we have at least the expected
# standard statuses
@pytest.mark.parametrize('endpoint,standard_statuses', [
('integration_tests/traditional/test_job_girder_worker_run_fails',
[JobStatus.QUEUED,
JobStatus.RUNNING,
JobStatus.ERROR]),
('integration_tests/traditional/test_girder_worker_run_as_celery_task_fails',
[JobStatus.RUNNING,
JobStatus.ERROR])], ids=['traditional', 'celery'])
def test_girder_worker_run_fails(session, endpoint, standard_statuses):
r = session.post(endpoint)
assert r.status_code == 200, r.content
with session.wait_for_error(r.json()['_id']) as job:
assert [ts['status'] for ts in job['timestamps']
if ts['status'] in standard_statuses] == standard_statuses
assert job['log'][0].startswith('Exception: invalid syntax (<string>, line 1)')
def test_custom_task_name(session):
r = session.post('integration_tests/traditional/test_job_custom_task_name')
assert r.status_code == 200, r.content
with session.wait_for_success(r.json()['_id']) as job:
assert [ts['status'] for ts in job['timestamps']] == \
[JobStatus.QUEUED, JobStatus.RUNNING, JobStatus.SUCCESS]
assert 'celeryTaskId' in job
assert session.get_result(job['celeryTaskId']) == '6765'
def test_custom_task_name_fails(session):
r = session.post('integration_tests/traditional/test_job_custom_task_name_fails')
assert r.status_code == 200, r.content
with session.wait_for_error(r.json()['_id']) as job:
assert [ts['status'] for ts in job['timestamps']] == \
[JobStatus.QUEUED, JobStatus.RUNNING, JobStatus.ERROR]
assert job['log'][0].startswith('Exception: Intentionally failed after 0.5 seconds')
def test_task_cancel(session):
url = 'integration_tests/traditional/test_task_cancel'
r = session.post(url)
assert r.status_code == 200, r.content
with session.wait_for_canceled(r.json()['_id']) as job:
assert [ts['status'] for ts in job['timestamps']] == \
[JobStatus.QUEUED, JobStatus.RUNNING, JobStatus.CANCELING,
JobStatus.CANCELED]
def test_task_cancel_in_queue(session):
url = 'integration_tests/traditional/test_task_cancel_in_queue'
r = session.post(url)
assert r.status_code == 200, r.content
with session.wait_for_canceled(r.json()['_id']) as job:
assert [ts['status'] for ts in job['timestamps']] == \
[JobStatus.QUEUED, JobStatus.CANCELING, JobStatus.CANCELED]
| 39.129032 | 99 | 0.698818 | 475 | 3,639 | 5.143158 | 0.195789 | 0.065493 | 0.088416 | 0.101515 | 0.880884 | 0.852231 | 0.785919 | 0.785919 | 0.709783 | 0.709783 | 0 | 0.009355 | 0.177521 | 3,639 | 92 | 100 | 39.554348 | 0.806883 | 0.074471 | 0 | 0.539683 | 0 | 0 | 0.260934 | 0.156799 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.095238 | false | 0 | 0.031746 | 0 | 0.126984 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c7edadc88c2027569e6c70c62ea86110e2c0eaa6 | 22 | py | Python | widgets/VBox/__init__.py | flatironinstitute/ephys-viz | 8da5f334e5b0cea41ad746872ef82ef858348fdf | [
"Apache-2.0"
] | 6 | 2019-10-23T03:11:53.000Z | 2021-09-23T01:08:49.000Z | widgets/VBox/__init__.py | flatironinstitute/reactopya_examples | 9b270c2cf3bab7bb53c3eabbae4adb48621cb8ba | [
"Apache-2.0"
] | 13 | 2018-05-16T19:08:39.000Z | 2019-12-31T04:40:32.000Z | widgets/VBox/__init__.py | flatironinstitute/ephys-viz | 8da5f334e5b0cea41ad746872ef82ef858348fdf | [
"Apache-2.0"
] | 7 | 2018-05-08T15:32:12.000Z | 2021-09-23T01:08:50.000Z | from .VBox import VBox | 22 | 22 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4006b5f5866aa4c23e9a2b16b930191035e700e8 | 287 | py | Python | back/utils/types/chat.py | azakharau/chatify | 8e85285ecbac8e1dac5b14af7b2b591ba3ccc1c2 | [
"MIT"
] | null | null | null | back/utils/types/chat.py | azakharau/chatify | 8e85285ecbac8e1dac5b14af7b2b591ba3ccc1c2 | [
"MIT"
] | null | null | null | back/utils/types/chat.py | azakharau/chatify | 8e85285ecbac8e1dac5b14af7b2b591ba3ccc1c2 | [
"MIT"
] | null | null | null | import typing
from dataclasses import dataclass
from utils.mixins import DataMixin
@dataclass()
class Chat(DataMixin):
id: typing.Optional[int] = None
username: typing.Optional[str] = None
first_name: typing.Optional[str] = None
last_name: typing.Optional[str] = None
| 22.076923 | 43 | 0.738676 | 37 | 287 | 5.675676 | 0.513514 | 0.266667 | 0.242857 | 0.3 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167247 | 287 | 12 | 44 | 23.916667 | 0.878661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.888889 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
402d6dfaed87f0fd03822b89577b1c2f4f0645d9 | 31 | py | Python | metric/__init__.py | edwardsdean/maubot_metric_bot | 5a38ba5f081f9e0e897a8604d152201de4144ae7 | [
"MIT"
] | 1 | 2022-02-28T04:04:52.000Z | 2022-02-28T04:04:52.000Z | metric/__init__.py | edwardsdean/maubot_metric_bot | 5a38ba5f081f9e0e897a8604d152201de4144ae7 | [
"MIT"
] | null | null | null | metric/__init__.py | edwardsdean/maubot_metric_bot | 5a38ba5f081f9e0e897a8604d152201de4144ae7 | [
"MIT"
] | null | null | null | from .bot import MetricPlugin
| 15.5 | 30 | 0.806452 | 4 | 31 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 1 | 31 | 31 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
403e8f1da6cc0366d2c20b7ebec15c1355f704c1 | 138 | py | Python | iter_tasks/src/designer.py | Wisc-HCI/ITER | 2ae8a5f0ae17783db4db25198ec0d97e72cd7296 | [
"MIT"
] | 1 | 2021-04-07T15:54:44.000Z | 2021-04-07T15:54:44.000Z | iter_tasks/src/designer.py | Wisc-HCI/ITER | 2ae8a5f0ae17783db4db25198ec0d97e72cd7296 | [
"MIT"
] | null | null | null | iter_tasks/src/designer.py | Wisc-HCI/ITER | 2ae8a5f0ae17783db4db25198ec0d97e72cd7296 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
'''
Designer Node
Intended to interface with RVIZ via interactive markers to generate an assembly
plan
'''
# TODO
| 13.8 | 79 | 0.746377 | 20 | 138 | 5.15 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 138 | 9 | 80 | 15.333333 | 0.895652 | 0.905797 | 0 | null | 1 | null | 0 | 0 | null | 0 | 0 | 0.111111 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
406b0b7f65efcf1a0a4869003013f00719d925bd | 174 | py | Python | index.py | Grace-lekaro/e-array-zero-padding | a2e7b02a45ec199a287e8da9ec7f62139beb91ed | [
"Apache-2.0"
] | null | null | null | index.py | Grace-lekaro/e-array-zero-padding | a2e7b02a45ec199a287e8da9ec7f62139beb91ed | [
"Apache-2.0"
] | null | null | null | index.py | Grace-lekaro/e-array-zero-padding | a2e7b02a45ec199a287e8da9ec7f62139beb91ed | [
"Apache-2.0"
] | null | null | null | current_words=[1,2,3,4]
current_words = list(current_words + [0] * (10 - len(current_words)))
print(len(current_words))
print("---- : " + str(current_words))
##By lekaro
| 21.75 | 69 | 0.666667 | 26 | 174 | 4.230769 | 0.538462 | 0.654545 | 0.272727 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046053 | 0.126437 | 174 | 7 | 70 | 24.857143 | 0.677632 | 0.051724 | 0 | 0 | 0 | 0 | 0.04908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
40c9f40e17c34c62e8b4a87419f5f5e96a267b3b | 69 | py | Python | network/models/__init__.py | PDillis/coiltraine | a682aa62af5f6ecb95a837d33b70d893d3d261f6 | [
"MIT"
] | 1 | 2021-03-01T19:43:12.000Z | 2021-03-01T19:43:12.000Z | network/models/__init__.py | PDillis/coiltraine | a682aa62af5f6ecb95a837d33b70d893d3d261f6 | [
"MIT"
] | null | null | null | network/models/__init__.py | PDillis/coiltraine | a682aa62af5f6ecb95a837d33b70d893d3d261f6 | [
"MIT"
] | null | null | null | from .coil_icra import CoILICRA
from .coil_reverse import CoILReverse | 34.5 | 37 | 0.869565 | 10 | 69 | 5.8 | 0.7 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101449 | 69 | 2 | 37 | 34.5 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40e2f0cfc600d490f4cf8441bfb4b944a9f2a528 | 32 | py | Python | nexthiv/db/__init__.py | sdwfrost/nexthiv | d66d513a27352f915927f9d25689730bbd27a28d | [
"MIT"
] | 6 | 2016-12-21T19:56:37.000Z | 2018-08-06T09:28:22.000Z | nexthiv/db/__init__.py | sdwfrost/nexthiv | d66d513a27352f915927f9d25689730bbd27a28d | [
"MIT"
] | null | null | null | nexthiv/db/__init__.py | sdwfrost/nexthiv | d66d513a27352f915927f9d25689730bbd27a28d | [
"MIT"
] | null | null | null | from . backends import db_setup
| 16 | 31 | 0.8125 | 5 | 32 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 1 | 32 | 32 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40f81e2ff6af675fc4f8ee1d8388e88c1d1d6d72 | 39 | py | Python | Site/Instagram/Instagram-bruteforce/lib/__init__.py | darknoyan/Hack-keriy | 78a4795d293a4214098cdbeadcefca59f589235c | [
"Apache-2.0"
] | null | null | null | Site/Instagram/Instagram-bruteforce/lib/__init__.py | darknoyan/Hack-keriy | 78a4795d293a4214098cdbeadcefca59f589235c | [
"Apache-2.0"
] | null | null | null | Site/Instagram/Instagram-bruteforce/lib/__init__.py | darknoyan/Hack-keriy | 78a4795d293a4214098cdbeadcefca59f589235c | [
"Apache-2.0"
] | null | null | null | # Date: 12/30/2018
# Author: Mohamed
| 13 | 19 | 0.641026 | 6 | 39 | 4.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258065 | 0.205128 | 39 | 2 | 20 | 19.5 | 0.548387 | 0.820513 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9088dbfee432896f84b501078762fc3d10134b4e | 148 | py | Python | python/7kyu/factorial.py | Sigmanificient/codewars | b34df4bf55460d312b7ddf121b46a707b549387a | [
"MIT"
] | 3 | 2021-06-08T01:57:13.000Z | 2021-06-26T10:52:47.000Z | python/7kyu/factorial.py | Sigmanificient/codewars | b34df4bf55460d312b7ddf121b46a707b549387a | [
"MIT"
] | null | null | null | python/7kyu/factorial.py | Sigmanificient/codewars | b34df4bf55460d312b7ddf121b46a707b549387a | [
"MIT"
] | 2 | 2021-06-10T21:20:13.000Z | 2021-06-30T10:13:26.000Z | """Kata url: https://www.codewars.com/kata/57a049e253ba33ac5e000212."""
def factorial(n: int) -> int:
return n * factorial(n - 1) if n else 1
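# (Editor's note, illustrative.) factorial(5) expands to 5 * 4 * 3 * 2 * 1 * 1 == 120;
# the `if n else 1` conditional expression supplies the base case factorial(0) == 1.
# Negative inputs are not guarded against and would recurse without terminating.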
| 24.666667 | 71 | 0.668919 | 22 | 148 | 4.5 | 0.681818 | 0.20202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 0.162162 | 148 | 5 | 72 | 29.6 | 0.645161 | 0.439189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
2907a0cf7a200f365f4854b753768503f15a01e8 | 121 | py | Python | Birnn_Transformer/ncc/tasks/translation/__init__.py | code-backdoor/code-backdoor | 1eeb3d79aa8a54c8f08e8d0156b569de5edd974e | [
"MIT"
] | 1 | 2021-12-21T05:52:37.000Z | 2021-12-21T05:52:37.000Z | ncc/tasks/translation/__init__.py | hrshy0629/naturalcc | 9c3329dd8387c8242deb52bf590ebe3ac795f8de | [
"MIT"
] | null | null | null | ncc/tasks/translation/__init__.py | hrshy0629/naturalcc | 9c3329dd8387c8242deb52bf590ebe3ac795f8de | [
"MIT"
] | null | null | null | from .translation import TranslationTask
from .translation_from_pretrained_bart import TranslationFromPretrainedBARTTask
| 40.333333 | 79 | 0.917355 | 11 | 121 | 9.818182 | 0.636364 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066116 | 121 | 2 | 80 | 60.5 | 0.955752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
292bbafbbcb9621bf86faeee90ea0d74d6dc271d | 44 | py | Python | bowling/sort/insertion/__init__.py | necromuralist/Bowling-For-Data | 8fb2bff206bf419812f96a5ad243e1d82959a00a | [
"MIT"
] | null | null | null | bowling/sort/insertion/__init__.py | necromuralist/Bowling-For-Data | 8fb2bff206bf419812f96a5ad243e1d82959a00a | [
"MIT"
] | null | null | null | bowling/sort/insertion/__init__.py | necromuralist/Bowling-For-Data | 8fb2bff206bf419812f96a5ad243e1d82959a00a | [
"MIT"
] | null | null | null | from .insertion_stuff import insertion_sort
| 22 | 43 | 0.886364 | 6 | 44 | 6.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29467ff5601a9ee7f95ebf1444d5e57da4eb8b6c | 105 | py | Python | src/bpp/tests/test_views/test_logout.py | iplweb/django-bpp | 85f183a99d8d5027ae4772efac1e4a9f21675849 | [
"BSD-3-Clause"
] | 1 | 2017-04-27T19:50:02.000Z | 2017-04-27T19:50:02.000Z | src/bpp/tests/test_views/test_logout.py | mpasternak/django-bpp | 434338821d5ad1aaee598f6327151aba0af66f5e | [
"BSD-3-Clause"
] | 41 | 2019-11-07T00:07:02.000Z | 2022-02-27T22:09:39.000Z | src/bpp/tests/test_views/test_logout.py | iplweb/bpp | f027415cc3faf1ca79082bf7bacd4be35b1a6fdf | [
"BSD-3-Clause"
] | null | null | null | from django.urls import reverse
def test_logout(admin_client):
admin_client.get(reverse("logout"))
| 17.5 | 39 | 0.771429 | 15 | 105 | 5.2 | 0.733333 | 0.282051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12381 | 105 | 5 | 40 | 21 | 0.847826 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
297b3b14b056429f45ef2fd8426698cb8de684f6 | 318 | py | Python | bdd/contact_scenarios.py | yulia-baturina/python_training | ef29b64e284ef2a2526092c9cb474b9bb489e1d0 | [
"Apache-2.0"
] | null | null | null | bdd/contact_scenarios.py | yulia-baturina/python_training | ef29b64e284ef2a2526092c9cb474b9bb489e1d0 | [
"Apache-2.0"
] | null | null | null | bdd/contact_scenarios.py | yulia-baturina/python_training | ef29b64e284ef2a2526092c9cb474b9bb489e1d0 | [
"Apache-2.0"
] | null | null | null | from pytest_bdd import scenario
from .contact_steps import *
@scenario("contacts.feature","add new contact")
def test_add_new_contact():
pass
@scenario("contacts.feature","delete a contact")
def test_delete_contact():
pass
@scenario("contacts.feature","update a contact")
def test_update_contact():
pass | 22.714286 | 48 | 0.754717 | 43 | 318 | 5.372093 | 0.395349 | 0.207792 | 0.298701 | 0.233766 | 0.294372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125786 | 318 | 14 | 49 | 22.714286 | 0.830935 | 0 | 0 | 0.272727 | 0 | 0 | 0.297806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | true | 0.272727 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
4639dd0760e495496df4872cc9ab43be57d5219b | 26,641 | py | Python | examples/convert_script/convert_bart__cloudwalk.py | Tongjilibo/bert4torch | 71d5ffb3698730b16e5a252b06644a136787711e | [
"MIT"
] | 49 | 2022-03-15T07:28:16.000Z | 2022-03-31T07:16:15.000Z | examples/convert_script/convert_bart__cloudwalk.py | Tongjilibo/bert4torch | 71d5ffb3698730b16e5a252b06644a136787711e | [
"MIT"
] | null | null | null | examples/convert_script/convert_bart__cloudwalk.py | Tongjilibo/bert4torch | 71d5ffb3698730b16e5a252b06644a136787711e | [
"MIT"
] | null | null | null | #! -*- coding: utf-8 -*-
# Convert cloudwalk's pretrained BART model into weights usable by bert4keras
# Baidu Cloud link for the pretrained weights:
import torch
ckpt_file = 'F:/Projects/pretrain_ckpt/bart/[cloudwalk_torch_base]/pytorch_base_model_2024000.pt'
torch_weights = torch.load(ckpt_file)
name_map = {'bart.embeddings.word_embeddings.weight': 'encoder.embed_tokens.weight',
'bart.embeddings.position_embeddings.weight': 'encoder.embed_positions.weight',
'bart.embeddings.LayerNorm.weight': 'encoder.layernorm_embedding.weight',
'bart.embeddings.LayerNorm.bias': 'encoder.layernorm_embedding.bias',
'bart.encoder.encoder_layer.0.attention.self.query.weight': 'encoder.layers.0.self_attn.q_proj.weight',
'bart.encoder.encoder_layer.0.attention.self.query.bias': 'encoder.layers.0.self_attn.q_proj.bias',
'bart.encoder.encoder_layer.0.attention.self.key.weight': 'encoder.layers.0.self_attn.k_proj.weight',
'bart.encoder.encoder_layer.0.attention.self.key.bias': 'encoder.layers.0.self_attn.k_proj.bias',
'bart.encoder.encoder_layer.0.attention.self.value.weight': 'encoder.layers.0.self_attn.v_proj.weight',
'bart.encoder.encoder_layer.0.attention.self.value.bias': 'encoder.layers.0.self_attn.v_proj.bias',
'bart.encoder.encoder_layer.0.attention.output.dense.weight': 'encoder.layers.0.self_attn.out_proj.weight',
'bart.encoder.encoder_layer.0.attention.output.dense.bias': 'encoder.layers.0.self_attn.out_proj.bias',
'bart.encoder.encoder_layer.0.attention.output.LayerNorm.weight': 'encoder.layers.0.self_attn_layer_norm.weight',
'bart.encoder.encoder_layer.0.attention.output.LayerNorm.bias': 'encoder.layers.0.self_attn_layer_norm.bias',
'bart.encoder.encoder_layer.0.intermediate.dense.weight': 'encoder.layers.0.fc1.weight',
'bart.encoder.encoder_layer.0.intermediate.dense.bias': 'encoder.layers.0.fc1.bias',
'bart.encoder.encoder_layer.0.output.dense.weight': 'encoder.layers.0.fc2.weight',
'bart.encoder.encoder_layer.0.output.dense.bias': 'encoder.layers.0.fc2.bias',
'bart.encoder.encoder_layer.0.output.LayerNorm.weight': 'encoder.layers.0.final_layer_norm.weight',
'bart.encoder.encoder_layer.0.output.LayerNorm.bias': 'encoder.layers.0.final_layer_norm.bias',
'bart.encoder.encoder_layer.1.attention.self.query.weight': 'encoder.layers.1.self_attn.q_proj.weight',
'bart.encoder.encoder_layer.1.attention.self.query.bias': 'encoder.layers.1.self_attn.q_proj.bias',
'bart.encoder.encoder_layer.1.attention.self.key.weight': 'encoder.layers.1.self_attn.k_proj.weight',
'bart.encoder.encoder_layer.1.attention.self.key.bias': 'encoder.layers.1.self_attn.k_proj.bias',
'bart.encoder.encoder_layer.1.attention.self.value.weight': 'encoder.layers.1.self_attn.v_proj.weight',
'bart.encoder.encoder_layer.1.attention.self.value.bias': 'encoder.layers.1.self_attn.v_proj.bias',
'bart.encoder.encoder_layer.1.attention.output.dense.weight': 'encoder.layers.1.self_attn.out_proj.weight',
'bart.encoder.encoder_layer.1.attention.output.dense.bias': 'encoder.layers.1.self_attn.out_proj.bias',
'bart.encoder.encoder_layer.1.attention.output.LayerNorm.weight': 'encoder.layers.1.self_attn_layer_norm.weight',
'bart.encoder.encoder_layer.1.attention.output.LayerNorm.bias': 'encoder.layers.1.self_attn_layer_norm.bias',
'bart.encoder.encoder_layer.1.intermediate.dense.weight': 'encoder.layers.1.fc1.weight',
'bart.encoder.encoder_layer.1.intermediate.dense.bias': 'encoder.layers.1.fc1.bias',
'bart.encoder.encoder_layer.1.output.dense.weight': 'encoder.layers.1.fc2.weight',
'bart.encoder.encoder_layer.1.output.dense.bias': 'encoder.layers.1.fc2.bias',
'bart.encoder.encoder_layer.1.output.LayerNorm.weight': 'encoder.layers.1.final_layer_norm.weight',
'bart.encoder.encoder_layer.1.output.LayerNorm.bias': 'encoder.layers.1.final_layer_norm.bias',
'bart.encoder.encoder_layer.2.attention.self.query.weight': 'encoder.layers.2.self_attn.q_proj.weight',
'bart.encoder.encoder_layer.2.attention.self.query.bias': 'encoder.layers.2.self_attn.q_proj.bias',
'bart.encoder.encoder_layer.2.attention.self.key.weight': 'encoder.layers.2.self_attn.k_proj.weight',
'bart.encoder.encoder_layer.2.attention.self.key.bias': 'encoder.layers.2.self_attn.k_proj.bias',
'bart.encoder.encoder_layer.2.attention.self.value.weight': 'encoder.layers.2.self_attn.v_proj.weight',
'bart.encoder.encoder_layer.2.attention.self.value.bias': 'encoder.layers.2.self_attn.v_proj.bias',
'bart.encoder.encoder_layer.2.attention.output.dense.weight': 'encoder.layers.2.self_attn.out_proj.weight',
'bart.encoder.encoder_layer.2.attention.output.dense.bias': 'encoder.layers.2.self_attn.out_proj.bias',
'bart.encoder.encoder_layer.2.attention.output.LayerNorm.weight': 'encoder.layers.2.self_attn_layer_norm.weight',
'bart.encoder.encoder_layer.2.attention.output.LayerNorm.bias': 'encoder.layers.2.self_attn_layer_norm.bias',
'bart.encoder.encoder_layer.2.intermediate.dense.weight': 'encoder.layers.2.fc1.weight',
'bart.encoder.encoder_layer.2.intermediate.dense.bias': 'encoder.layers.2.fc1.bias',
'bart.encoder.encoder_layer.2.output.dense.weight': 'encoder.layers.2.fc2.weight',
'bart.encoder.encoder_layer.2.output.dense.bias': 'encoder.layers.2.fc2.bias',
'bart.encoder.encoder_layer.2.output.LayerNorm.weight': 'encoder.layers.2.final_layer_norm.weight',
'bart.encoder.encoder_layer.2.output.LayerNorm.bias': 'encoder.layers.2.final_layer_norm.bias',
'bart.encoder.encoder_layer.3.attention.self.query.weight': 'encoder.layers.3.self_attn.q_proj.weight',
'bart.encoder.encoder_layer.3.attention.self.query.bias': 'encoder.layers.3.self_attn.q_proj.bias',
'bart.encoder.encoder_layer.3.attention.self.key.weight': 'encoder.layers.3.self_attn.k_proj.weight',
'bart.encoder.encoder_layer.3.attention.self.key.bias': 'encoder.layers.3.self_attn.k_proj.bias',
'bart.encoder.encoder_layer.3.attention.self.value.weight': 'encoder.layers.3.self_attn.v_proj.weight',
'bart.encoder.encoder_layer.3.attention.self.value.bias': 'encoder.layers.3.self_attn.v_proj.bias',
'bart.encoder.encoder_layer.3.attention.output.dense.weight': 'encoder.layers.3.self_attn.out_proj.weight',
'bart.encoder.encoder_layer.3.attention.output.dense.bias': 'encoder.layers.3.self_attn.out_proj.bias',
'bart.encoder.encoder_layer.3.attention.output.LayerNorm.weight': 'encoder.layers.3.self_attn_layer_norm.weight',
'bart.encoder.encoder_layer.3.attention.output.LayerNorm.bias': 'encoder.layers.3.self_attn_layer_norm.bias',
'bart.encoder.encoder_layer.3.intermediate.dense.weight': 'encoder.layers.3.fc1.weight',
'bart.encoder.encoder_layer.3.intermediate.dense.bias': 'encoder.layers.3.fc1.bias',
'bart.encoder.encoder_layer.3.output.dense.weight': 'encoder.layers.3.fc2.weight',
'bart.encoder.encoder_layer.3.output.dense.bias': 'encoder.layers.3.fc2.bias',
'bart.encoder.encoder_layer.3.output.LayerNorm.weight': 'encoder.layers.3.final_layer_norm.weight',
'bart.encoder.encoder_layer.3.output.LayerNorm.bias': 'encoder.layers.3.final_layer_norm.bias',
'bart.encoder.encoder_layer.4.attention.self.query.weight': 'encoder.layers.4.self_attn.q_proj.weight',
'bart.encoder.encoder_layer.4.attention.self.query.bias': 'encoder.layers.4.self_attn.q_proj.bias',
'bart.encoder.encoder_layer.4.attention.self.key.weight': 'encoder.layers.4.self_attn.k_proj.weight',
'bart.encoder.encoder_layer.4.attention.self.key.bias': 'encoder.layers.4.self_attn.k_proj.bias',
'bart.encoder.encoder_layer.4.attention.self.value.weight': 'encoder.layers.4.self_attn.v_proj.weight',
'bart.encoder.encoder_layer.4.attention.self.value.bias': 'encoder.layers.4.self_attn.v_proj.bias',
'bart.encoder.encoder_layer.4.attention.output.dense.weight': 'encoder.layers.4.self_attn.out_proj.weight',
'bart.encoder.encoder_layer.4.attention.output.dense.bias': 'encoder.layers.4.self_attn.out_proj.bias',
'bart.encoder.encoder_layer.4.attention.output.LayerNorm.weight': 'encoder.layers.4.self_attn_layer_norm.weight',
'bart.encoder.encoder_layer.4.attention.output.LayerNorm.bias': 'encoder.layers.4.self_attn_layer_norm.bias',
'bart.encoder.encoder_layer.4.intermediate.dense.weight': 'encoder.layers.4.fc1.weight',
'bart.encoder.encoder_layer.4.intermediate.dense.bias': 'encoder.layers.4.fc1.bias',
'bart.encoder.encoder_layer.4.output.dense.weight': 'encoder.layers.4.fc2.weight',
'bart.encoder.encoder_layer.4.output.dense.bias': 'encoder.layers.4.fc2.bias',
'bart.encoder.encoder_layer.4.output.LayerNorm.weight': 'encoder.layers.4.final_layer_norm.weight',
'bart.encoder.encoder_layer.4.output.LayerNorm.bias': 'encoder.layers.4.final_layer_norm.bias',
'bart.encoder.encoder_layer.5.attention.self.query.weight': 'encoder.layers.5.self_attn.q_proj.weight',
'bart.encoder.encoder_layer.5.attention.self.query.bias': 'encoder.layers.5.self_attn.q_proj.bias',
'bart.encoder.encoder_layer.5.attention.self.key.weight': 'encoder.layers.5.self_attn.k_proj.weight',
'bart.encoder.encoder_layer.5.attention.self.key.bias': 'encoder.layers.5.self_attn.k_proj.bias',
'bart.encoder.encoder_layer.5.attention.self.value.weight': 'encoder.layers.5.self_attn.v_proj.weight',
'bart.encoder.encoder_layer.5.attention.self.value.bias': 'encoder.layers.5.self_attn.v_proj.bias',
'bart.encoder.encoder_layer.5.attention.output.dense.weight': 'encoder.layers.5.self_attn.out_proj.weight',
'bart.encoder.encoder_layer.5.attention.output.dense.bias': 'encoder.layers.5.self_attn.out_proj.bias',
'bart.encoder.encoder_layer.5.attention.output.LayerNorm.weight': 'encoder.layers.5.self_attn_layer_norm.weight',
'bart.encoder.encoder_layer.5.attention.output.LayerNorm.bias': 'encoder.layers.5.self_attn_layer_norm.bias',
'bart.encoder.encoder_layer.5.intermediate.dense.weight': 'encoder.layers.5.fc1.weight',
'bart.encoder.encoder_layer.5.intermediate.dense.bias': 'encoder.layers.5.fc1.bias',
'bart.encoder.encoder_layer.5.output.dense.weight': 'encoder.layers.5.fc2.weight',
'bart.encoder.encoder_layer.5.output.dense.bias': 'encoder.layers.5.fc2.bias',
'bart.encoder.encoder_layer.5.output.LayerNorm.weight': 'encoder.layers.5.final_layer_norm.weight',
'bart.encoder.encoder_layer.5.output.LayerNorm.bias': 'encoder.layers.5.final_layer_norm.bias',
'bart.decoder.decoder_layer.0.attention.self.query.weight': 'decoder.layers.0.self_attn.q_proj.weight',
'bart.decoder.decoder_layer.0.attention.self.query.bias': 'decoder.layers.0.self_attn.q_proj.bias',
'bart.decoder.decoder_layer.0.attention.self.key.weight': 'decoder.layers.0.self_attn.k_proj.weight',
'bart.decoder.decoder_layer.0.attention.self.key.bias': 'decoder.layers.0.self_attn.k_proj.bias',
'bart.decoder.decoder_layer.0.attention.self.value.weight': 'decoder.layers.0.self_attn.v_proj.weight',
'bart.decoder.decoder_layer.0.attention.self.value.bias': 'decoder.layers.0.self_attn.v_proj.bias',
'bart.decoder.decoder_layer.0.attention.output.dense.weight': 'decoder.layers.0.self_attn.out_proj.weight',
'bart.decoder.decoder_layer.0.attention.output.dense.bias': 'decoder.layers.0.self_attn.out_proj.bias',
'bart.decoder.decoder_layer.0.attention.output.LayerNorm.weight': 'decoder.layers.0.self_attn_layer_norm.weight',
'bart.decoder.decoder_layer.0.attention.output.LayerNorm.bias': 'decoder.layers.0.self_attn_layer_norm.bias',
'bart.decoder.decoder_layer.0.crossattention.self.query.weight': 'decoder.layers.0.encoder_attn.q_proj.weight',
'bart.decoder.decoder_layer.0.crossattention.self.query.bias': 'decoder.layers.0.encoder_attn.q_proj.bias',
'bart.decoder.decoder_layer.0.crossattention.self.key.weight': 'decoder.layers.0.encoder_attn.k_proj.weight',
'bart.decoder.decoder_layer.0.crossattention.self.key.bias': 'decoder.layers.0.encoder_attn.k_proj.bias',
'bart.decoder.decoder_layer.0.crossattention.self.value.weight': 'decoder.layers.0.encoder_attn.v_proj.weight',
'bart.decoder.decoder_layer.0.crossattention.self.value.bias': 'decoder.layers.0.encoder_attn.v_proj.bias',
'bart.decoder.decoder_layer.0.crossattention.output.dense.weight': 'decoder.layers.0.encoder_attn.out_proj.weight',
'bart.decoder.decoder_layer.0.crossattention.output.dense.bias': 'decoder.layers.0.encoder_attn.out_proj.bias',
'bart.decoder.decoder_layer.0.crossattention.output.LayerNorm.weight': 'decoder.layers.0.encoder_attn_layer_norm.weight',
'bart.decoder.decoder_layer.0.crossattention.output.LayerNorm.bias': 'decoder.layers.0.encoder_attn_layer_norm.bias',
'bart.decoder.decoder_layer.0.intermediate.dense.weight': 'decoder.layers.0.fc1.weight',
'bart.decoder.decoder_layer.0.intermediate.dense.bias': 'decoder.layers.0.fc1.bias',
'bart.decoder.decoder_layer.0.output.dense.weight': 'decoder.layers.0.fc2.weight',
'bart.decoder.decoder_layer.0.output.dense.bias': 'decoder.layers.0.fc2.bias',
'bart.decoder.decoder_layer.0.output.LayerNorm.weight': 'decoder.layers.0.final_layer_norm.weight',
'bart.decoder.decoder_layer.0.output.LayerNorm.bias': 'decoder.layers.0.final_layer_norm.bias',
'bart.decoder.decoder_layer.1.attention.self.query.weight': 'decoder.layers.1.self_attn.q_proj.weight',
'bart.decoder.decoder_layer.1.attention.self.query.bias': 'decoder.layers.1.self_attn.q_proj.bias',
'bart.decoder.decoder_layer.1.attention.self.key.weight': 'decoder.layers.1.self_attn.k_proj.weight',
'bart.decoder.decoder_layer.1.attention.self.key.bias': 'decoder.layers.1.self_attn.k_proj.bias',
'bart.decoder.decoder_layer.1.attention.self.value.weight': 'decoder.layers.1.self_attn.v_proj.weight',
'bart.decoder.decoder_layer.1.attention.self.value.bias': 'decoder.layers.1.self_attn.v_proj.bias',
'bart.decoder.decoder_layer.1.attention.output.dense.weight': 'decoder.layers.1.self_attn.out_proj.weight',
'bart.decoder.decoder_layer.1.attention.output.dense.bias': 'decoder.layers.1.self_attn.out_proj.bias',
'bart.decoder.decoder_layer.1.attention.output.LayerNorm.weight': 'decoder.layers.1.self_attn_layer_norm.weight',
'bart.decoder.decoder_layer.1.attention.output.LayerNorm.bias': 'decoder.layers.1.self_attn_layer_norm.bias',
'bart.decoder.decoder_layer.1.crossattention.self.query.weight': 'decoder.layers.1.encoder_attn.q_proj.weight',
'bart.decoder.decoder_layer.1.crossattention.self.query.bias': 'decoder.layers.1.encoder_attn.q_proj.bias',
'bart.decoder.decoder_layer.1.crossattention.self.key.weight': 'decoder.layers.1.encoder_attn.k_proj.weight',
'bart.decoder.decoder_layer.1.crossattention.self.key.bias': 'decoder.layers.1.encoder_attn.k_proj.bias',
'bart.decoder.decoder_layer.1.crossattention.self.value.weight': 'decoder.layers.1.encoder_attn.v_proj.weight',
'bart.decoder.decoder_layer.1.crossattention.self.value.bias': 'decoder.layers.1.encoder_attn.v_proj.bias',
'bart.decoder.decoder_layer.1.crossattention.output.dense.weight': 'decoder.layers.1.encoder_attn.out_proj.weight',
'bart.decoder.decoder_layer.1.crossattention.output.dense.bias': 'decoder.layers.1.encoder_attn.out_proj.bias',
'bart.decoder.decoder_layer.1.crossattention.output.LayerNorm.weight': 'decoder.layers.1.encoder_attn_layer_norm.weight',
'bart.decoder.decoder_layer.1.crossattention.output.LayerNorm.bias': 'decoder.layers.1.encoder_attn_layer_norm.bias',
'bart.decoder.decoder_layer.1.intermediate.dense.weight': 'decoder.layers.1.fc1.weight',
'bart.decoder.decoder_layer.1.intermediate.dense.bias': 'decoder.layers.1.fc1.bias',
'bart.decoder.decoder_layer.1.output.dense.weight': 'decoder.layers.1.fc2.weight',
'bart.decoder.decoder_layer.1.output.dense.bias': 'decoder.layers.1.fc2.bias',
'bart.decoder.decoder_layer.1.output.LayerNorm.weight': 'decoder.layers.1.final_layer_norm.weight',
'bart.decoder.decoder_layer.1.output.LayerNorm.bias': 'decoder.layers.1.final_layer_norm.bias',
'bart.decoder.decoder_layer.2.attention.self.query.weight': 'decoder.layers.2.self_attn.q_proj.weight',
'bart.decoder.decoder_layer.2.attention.self.query.bias': 'decoder.layers.2.self_attn.q_proj.bias',
'bart.decoder.decoder_layer.2.attention.self.key.weight': 'decoder.layers.2.self_attn.k_proj.weight',
'bart.decoder.decoder_layer.2.attention.self.key.bias': 'decoder.layers.2.self_attn.k_proj.bias',
'bart.decoder.decoder_layer.2.attention.self.value.weight': 'decoder.layers.2.self_attn.v_proj.weight',
'bart.decoder.decoder_layer.2.attention.self.value.bias': 'decoder.layers.2.self_attn.v_proj.bias',
'bart.decoder.decoder_layer.2.attention.output.dense.weight': 'decoder.layers.2.self_attn.out_proj.weight',
'bart.decoder.decoder_layer.2.attention.output.dense.bias': 'decoder.layers.2.self_attn.out_proj.bias',
'bart.decoder.decoder_layer.2.attention.output.LayerNorm.weight': 'decoder.layers.2.self_attn_layer_norm.weight',
'bart.decoder.decoder_layer.2.attention.output.LayerNorm.bias': 'decoder.layers.2.self_attn_layer_norm.bias',
'bart.decoder.decoder_layer.2.crossattention.self.query.weight': 'decoder.layers.2.encoder_attn.q_proj.weight',
'bart.decoder.decoder_layer.2.crossattention.self.query.bias': 'decoder.layers.2.encoder_attn.q_proj.bias',
'bart.decoder.decoder_layer.2.crossattention.self.key.weight': 'decoder.layers.2.encoder_attn.k_proj.weight',
'bart.decoder.decoder_layer.2.crossattention.self.key.bias': 'decoder.layers.2.encoder_attn.k_proj.bias',
'bart.decoder.decoder_layer.2.crossattention.self.value.weight': 'decoder.layers.2.encoder_attn.v_proj.weight',
'bart.decoder.decoder_layer.2.crossattention.self.value.bias': 'decoder.layers.2.encoder_attn.v_proj.bias',
'bart.decoder.decoder_layer.2.crossattention.output.dense.weight': 'decoder.layers.2.encoder_attn.out_proj.weight',
'bart.decoder.decoder_layer.2.crossattention.output.dense.bias': 'decoder.layers.2.encoder_attn.out_proj.bias',
'bart.decoder.decoder_layer.2.crossattention.output.LayerNorm.weight': 'decoder.layers.2.encoder_attn_layer_norm.weight',
'bart.decoder.decoder_layer.2.crossattention.output.LayerNorm.bias': 'decoder.layers.2.encoder_attn_layer_norm.bias',
'bart.decoder.decoder_layer.2.intermediate.dense.weight': 'decoder.layers.2.fc1.weight',
'bart.decoder.decoder_layer.2.intermediate.dense.bias': 'decoder.layers.2.fc1.bias',
'bart.decoder.decoder_layer.2.output.dense.weight': 'decoder.layers.2.fc2.weight',
'bart.decoder.decoder_layer.2.output.dense.bias': 'decoder.layers.2.fc2.bias',
'bart.decoder.decoder_layer.2.output.LayerNorm.weight': 'decoder.layers.2.final_layer_norm.weight',
'bart.decoder.decoder_layer.2.output.LayerNorm.bias': 'decoder.layers.2.final_layer_norm.bias',
'bart.decoder.decoder_layer.3.attention.self.query.weight': 'decoder.layers.3.self_attn.q_proj.weight',
'bart.decoder.decoder_layer.3.attention.self.query.bias': 'decoder.layers.3.self_attn.q_proj.bias',
'bart.decoder.decoder_layer.3.attention.self.key.weight': 'decoder.layers.3.self_attn.k_proj.weight',
'bart.decoder.decoder_layer.3.attention.self.key.bias': 'decoder.layers.3.self_attn.k_proj.bias',
'bart.decoder.decoder_layer.3.attention.self.value.weight': 'decoder.layers.3.self_attn.v_proj.weight',
'bart.decoder.decoder_layer.3.attention.self.value.bias': 'decoder.layers.3.self_attn.v_proj.bias',
'bart.decoder.decoder_layer.3.attention.output.dense.weight': 'decoder.layers.3.self_attn.out_proj.weight',
'bart.decoder.decoder_layer.3.attention.output.dense.bias': 'decoder.layers.3.self_attn.out_proj.bias',
'bart.decoder.decoder_layer.3.attention.output.LayerNorm.weight': 'decoder.layers.3.self_attn_layer_norm.weight',
'bart.decoder.decoder_layer.3.attention.output.LayerNorm.bias': 'decoder.layers.3.self_attn_layer_norm.bias',
'bart.decoder.decoder_layer.3.crossattention.self.query.weight': 'decoder.layers.3.encoder_attn.q_proj.weight',
'bart.decoder.decoder_layer.3.crossattention.self.query.bias': 'decoder.layers.3.encoder_attn.q_proj.bias',
'bart.decoder.decoder_layer.3.crossattention.self.key.weight': 'decoder.layers.3.encoder_attn.k_proj.weight',
'bart.decoder.decoder_layer.3.crossattention.self.key.bias': 'decoder.layers.3.encoder_attn.k_proj.bias',
'bart.decoder.decoder_layer.3.crossattention.self.value.weight': 'decoder.layers.3.encoder_attn.v_proj.weight',
'bart.decoder.decoder_layer.3.crossattention.self.value.bias': 'decoder.layers.3.encoder_attn.v_proj.bias',
'bart.decoder.decoder_layer.3.crossattention.output.dense.weight': 'decoder.layers.3.encoder_attn.out_proj.weight',
'bart.decoder.decoder_layer.3.crossattention.output.dense.bias': 'decoder.layers.3.encoder_attn.out_proj.bias',
'bart.decoder.decoder_layer.3.crossattention.output.LayerNorm.weight': 'decoder.layers.3.encoder_attn_layer_norm.weight',
'bart.decoder.decoder_layer.3.crossattention.output.LayerNorm.bias': 'decoder.layers.3.encoder_attn_layer_norm.bias',
'bart.decoder.decoder_layer.3.intermediate.dense.weight': 'decoder.layers.3.fc1.weight',
'bart.decoder.decoder_layer.3.intermediate.dense.bias': 'decoder.layers.3.fc1.bias',
'bart.decoder.decoder_layer.3.output.dense.weight': 'decoder.layers.3.fc2.weight',
'bart.decoder.decoder_layer.3.output.dense.bias': 'decoder.layers.3.fc2.bias',
'bart.decoder.decoder_layer.3.output.LayerNorm.weight': 'decoder.layers.3.final_layer_norm.weight',
'bart.decoder.decoder_layer.3.output.LayerNorm.bias': 'decoder.layers.3.final_layer_norm.bias',
'bart.decoder.decoder_layer.4.attention.self.query.weight': 'decoder.layers.4.self_attn.q_proj.weight',
'bart.decoder.decoder_layer.4.attention.self.query.bias': 'decoder.layers.4.self_attn.q_proj.bias',
'bart.decoder.decoder_layer.4.attention.self.key.weight': 'decoder.layers.4.self_attn.k_proj.weight',
'bart.decoder.decoder_layer.4.attention.self.key.bias': 'decoder.layers.4.self_attn.k_proj.bias',
'bart.decoder.decoder_layer.4.attention.self.value.weight': 'decoder.layers.4.self_attn.v_proj.weight',
'bart.decoder.decoder_layer.4.attention.self.value.bias': 'decoder.layers.4.self_attn.v_proj.bias',
'bart.decoder.decoder_layer.4.attention.output.dense.weight': 'decoder.layers.4.self_attn.out_proj.weight',
'bart.decoder.decoder_layer.4.attention.output.dense.bias': 'decoder.layers.4.self_attn.out_proj.bias',
'bart.decoder.decoder_layer.4.attention.output.LayerNorm.weight': 'decoder.layers.4.self_attn_layer_norm.weight',
'bart.decoder.decoder_layer.4.attention.output.LayerNorm.bias': 'decoder.layers.4.self_attn_layer_norm.bias',
'bart.decoder.decoder_layer.4.crossattention.self.query.weight': 'decoder.layers.4.encoder_attn.q_proj.weight',
'bart.decoder.decoder_layer.4.crossattention.self.query.bias': 'decoder.layers.4.encoder_attn.q_proj.bias',
'bart.decoder.decoder_layer.4.crossattention.self.key.weight': 'decoder.layers.4.encoder_attn.k_proj.weight',
'bart.decoder.decoder_layer.4.crossattention.self.key.bias': 'decoder.layers.4.encoder_attn.k_proj.bias',
'bart.decoder.decoder_layer.4.crossattention.self.value.weight': 'decoder.layers.4.encoder_attn.v_proj.weight',
'bart.decoder.decoder_layer.4.crossattention.self.value.bias': 'decoder.layers.4.encoder_attn.v_proj.bias',
'bart.decoder.decoder_layer.4.crossattention.output.dense.weight': 'decoder.layers.4.encoder_attn.out_proj.weight',
'bart.decoder.decoder_layer.4.crossattention.output.dense.bias': 'decoder.layers.4.encoder_attn.out_proj.bias',
'bart.decoder.decoder_layer.4.crossattention.output.LayerNorm.weight': 'decoder.layers.4.encoder_attn_layer_norm.weight',
'bart.decoder.decoder_layer.4.crossattention.output.LayerNorm.bias': 'decoder.layers.4.encoder_attn_layer_norm.bias',
'bart.decoder.decoder_layer.4.intermediate.dense.weight': 'decoder.layers.4.fc1.weight',
'bart.decoder.decoder_layer.4.intermediate.dense.bias': 'decoder.layers.4.fc1.bias',
'bart.decoder.decoder_layer.4.output.dense.weight': 'decoder.layers.4.fc2.weight',
'bart.decoder.decoder_layer.4.output.dense.bias': 'decoder.layers.4.fc2.bias',
'bart.decoder.decoder_layer.4.output.LayerNorm.weight': 'decoder.layers.4.final_layer_norm.weight',
'bart.decoder.decoder_layer.4.output.LayerNorm.bias': 'decoder.layers.4.final_layer_norm.bias',
'bart.decoder.decoder_layer.5.attention.self.query.weight': 'decoder.layers.5.self_attn.q_proj.weight',
'bart.decoder.decoder_layer.5.attention.self.query.bias': 'decoder.layers.5.self_attn.q_proj.bias',
'bart.decoder.decoder_layer.5.attention.self.key.weight': 'decoder.layers.5.self_attn.k_proj.weight',
'bart.decoder.decoder_layer.5.attention.self.key.bias': 'decoder.layers.5.self_attn.k_proj.bias',
'bart.decoder.decoder_layer.5.attention.self.value.weight': 'decoder.layers.5.self_attn.v_proj.weight',
'bart.decoder.decoder_layer.5.attention.self.value.bias': 'decoder.layers.5.self_attn.v_proj.bias',
'bart.decoder.decoder_layer.5.attention.output.dense.weight': 'decoder.layers.5.self_attn.out_proj.weight',
'bart.decoder.decoder_layer.5.attention.output.dense.bias': 'decoder.layers.5.self_attn.out_proj.bias',
'bart.decoder.decoder_layer.5.attention.output.LayerNorm.weight': 'decoder.layers.5.self_attn_layer_norm.weight',
'bart.decoder.decoder_layer.5.attention.output.LayerNorm.bias': 'decoder.layers.5.self_attn_layer_norm.bias',
'bart.decoder.decoder_layer.5.crossattention.self.query.weight': 'decoder.layers.5.encoder_attn.q_proj.weight',
'bart.decoder.decoder_layer.5.crossattention.self.query.bias': 'decoder.layers.5.encoder_attn.q_proj.bias',
'bart.decoder.decoder_layer.5.crossattention.self.key.weight': 'decoder.layers.5.encoder_attn.k_proj.weight',
'bart.decoder.decoder_layer.5.crossattention.self.key.bias': 'decoder.layers.5.encoder_attn.k_proj.bias',
'bart.decoder.decoder_layer.5.crossattention.self.value.weight': 'decoder.layers.5.encoder_attn.v_proj.weight',
'bart.decoder.decoder_layer.5.crossattention.self.value.bias': 'decoder.layers.5.encoder_attn.v_proj.bias',
'bart.decoder.decoder_layer.5.crossattention.output.dense.weight': 'decoder.layers.5.encoder_attn.out_proj.weight',
'bart.decoder.decoder_layer.5.crossattention.output.dense.bias': 'decoder.layers.5.encoder_attn.out_proj.bias',
'bart.decoder.decoder_layer.5.crossattention.output.LayerNorm.weight': 'decoder.layers.5.encoder_attn_layer_norm.weight',
'bart.decoder.decoder_layer.5.crossattention.output.LayerNorm.bias': 'decoder.layers.5.encoder_attn_layer_norm.bias',
'bart.decoder.decoder_layer.5.intermediate.dense.weight': 'decoder.layers.5.fc1.weight',
'bart.decoder.decoder_layer.5.intermediate.dense.bias': 'decoder.layers.5.fc1.bias',
'bart.decoder.decoder_layer.5.output.dense.weight': 'decoder.layers.5.fc2.weight',
'bart.decoder.decoder_layer.5.output.dense.bias': 'decoder.layers.5.fc2.bias',
'bart.decoder.decoder_layer.5.output.LayerNorm.weight': 'decoder.layers.5.final_layer_norm.weight',
'bart.decoder.decoder_layer.5.output.LayerNorm.bias': 'decoder.layers.5.final_layer_norm.bias'}
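# (Editor's sketch, not part of the original script; the helper name is illustrative.)
# The per-layer entries in the mapping above follow a fixed naming pattern, so an
# equivalent sub-mapping can be generated programmatically. The function below only
# covers the self-attention q/k/v/out projections, as an illustration of that pattern
# rather than a drop-in replacement for the hand-written dict.
def build_self_attn_map(num_layers=6):
    pairs = {}
    for side in ('encoder', 'decoder'):
        for i in range(num_layers):
            src = 'bart.%s.%s_layer.%d' % (side, side, i)   # e.g. bart.encoder.encoder_layer.0
            dst = '%s.layers.%d' % (side, i)                # e.g. encoder.layers.0
            for a, b in (('attention.self.query', 'self_attn.q_proj'),
                         ('attention.self.key', 'self_attn.k_proj'),
                         ('attention.self.value', 'self_attn.v_proj'),
                         ('attention.output.dense', 'self_attn.out_proj')):
                for w in ('weight', 'bias'):
                    pairs['%s.%s.%s' % (src, a, w)] = '%s.%s.%s' % (dst, b, w)
    return pairs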
model_new = {}
for key, value in name_map.items():
model_new[value] = torch_weights[key]
torch.save(model_new, 'F:/Projects/pretrain_ckpt/bart/[cloudwalk_torch_base]/bert4torch_pytorch_model.bin') | 99.037175 | 122 | 0.80241 | 4,137 | 26,641 | 4.993232 | 0.016437 | 0.083071 | 0.135935 | 0.173694 | 0.977393 | 0.955318 | 0.863727 | 0.661035 | 0.477223 | 0 | 0 | 0.021753 | 0.030217 | 26,641 | 269 | 123 | 99.037175 | 0.777791 | 0.00274 | 0 | 0 | 0 | 0 | 0.906045 | 0.906045 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.003802 | 0 | 0.003802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
467a2caefffbaf358a55d50774855dffc1101d48 | 87 | py | Python | test/test.py | kamzadias/Python1_Ass1 | 6d4e5645a1dd57e63a968173df044b08d0f1c252 | [
"MIT"
] | null | null | null | test/test.py | kamzadias/Python1_Ass1 | 6d4e5645a1dd57e63a968173df044b08d0f1c252 | [
"MIT"
] | null | null | null | test/test.py | kamzadias/Python1_Ass1 | 6d4e5645a1dd57e63a968173df044b08d0f1c252 | [
"MIT"
] | null | null | null | from CoinGeckoAPI import functionTest
print(functionTest(3))
print(functionTest(4)) | 21.75 | 38 | 0.804598 | 10 | 87 | 7 | 0.7 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.103448 | 87 | 4 | 39 | 21.75 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
467db35f4cbfac232461789a938b522bec218fa1 | 29 | py | Python | test/test_null.py | aroberge/pyextensions | cd18f6936df2c4ffafacb445fe77f8908d67f4f1 | [
"MIT"
] | null | null | null | test/test_null.py | aroberge/pyextensions | cd18f6936df2c4ffafacb445fe77f8908d67f4f1 | [
"MIT"
] | null | null | null | test/test_null.py | aroberge/pyextensions | cd18f6936df2c4ffafacb445fe77f8908d67f4f1 | [
"MIT"
] | null | null | null | from .null_testfile import *
| 14.5 | 28 | 0.793103 | 4 | 29 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3c2ca4dc6089f768291843c780f88a2be288fd9 | 6,630 | py | Python | LeetCode/15-3Sum☆/3Sum.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | 10 | 2020-07-06T11:00:58.000Z | 2022-01-29T09:25:24.000Z | LeetCode/15-3Sum☆/3Sum.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | null | null | null | LeetCode/15-3Sum☆/3Sum.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | 3 | 2020-07-13T06:39:23.000Z | 2020-08-15T16:29:48.000Z | class Solution(object):
def threeSum(self, nums):
"""
:type nums: List[int]
:rtype: List[List[int]]
"""
sort = sorted(nums)
res = []
for i in range(len(sort) - 2):
if i > 0 and sort[i] == sort[i-1]:
continue
l, r = i+1, len(sort) - 1
while l < r:
if sort[i] + sort[l] + sort[r] < 0:
l += 1
elif sort[i] + sort[l] + sort[r] > 0:
r -= 1
else:
res.append([sort[i], sort[l], sort[r]])
while l < r and sort[l] == sort[l+1]:
l += 1
while l < r and sort[r] == sort[r-1]:
r -= 1
l += 1
r -= 1
return res
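# (Editor's worked example, added for illustration.) For nums = [-1, 0, 1, 2, -1, -4]
# the sorted array is [-4, -1, -1, 0, 1, 2]. Anchoring i on the first -1 and moving the
# l/r pointers inward yields [-1, -1, 2] and then [-1, 0, 1]; the sort[i] == sort[i-1]
# check skips the second -1, so no duplicate triplets are emitted and the result is
# [[-1, -1, 2], [-1, 0, 1]], in O(n^2) time after the O(n log n) sort.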
# lazy approach
class Solution(object):
def threeSum(self, nums):
"""
:type nums: List[int]
:rtype: List[List[int]]
"""
res = []
for i in range(len(nums)):
for j in range(i, len(nums)):
for k in range(j, len(nums)):
if i == j or i == k or j == k:
continue
if nums[i] + nums[j] + nums[k] == 0:
item = sorted((nums[i], nums[j], nums[k]))
if item in res:
continue
res.append(item)
return res
# O(N*N)
def threeSum1(nums):
    if len(nums) < 3:
        return []
    res = []
zeros, negs, poss = [], [], []
for i in range(len(nums)):
item = nums[i]
if item == 0:
zeros.append(item)
elif item > 0:
poss.append(item)
else:
negs.append(item)
res = []
if len(zeros) > 0:
for i in range(len(negs)):
if -negs[i] in poss:
item = [negs[i], 0, -negs[i]]
if item not in res:
res.append(item)
if len(zeros) > 2:
res.append([0, 0, 0])
for i in range(len(negs)):
for j in range(len(poss)):
tmp = -(negs[i] + poss[j])
if tmp in negs[0:i] + negs[i+1:]:
big, small = (negs[i], tmp) if negs[i] > tmp else (tmp, negs[i])
item = [small, big, poss[j]]
elif tmp in poss[0:j] + poss[j+1:]:
big, small = (poss[j], tmp) if poss[j] > tmp else (tmp, poss[j])
item = [negs[i], small, big]
else:
continue
if item not in res:
res.append(item)
return res
# Be careful with Python's `ele in list` here: membership testing on a list is O(n)
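# (Editor's sketch, not part of the original file; the helper name is illustrative.)
# A common way to avoid the repeated `item in res` list scans used below is to keep a
# parallel set of tuples, so each duplicate check is an O(1) average-time set lookup:
def append_unique(res, seen, triplet):
    key = tuple(triplet)   # tuples are hashable, lists are not
    if key not in seen:    # set membership instead of the O(len(res)) `triplet in res`
        seen.add(key)
        res.append(triplet)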
def threeSum2(nums):
    if len(nums) < 3:
        return []
    res = []
zeros, negs, poss = [], [], []
for i in range(len(nums)):
item = nums[i]
if item == 0:
zeros.append(item)
elif item > 0:
poss.append(item)
else:
negs.append(item)
res = []
if len(zeros) > 0:
for i in range(len(negs)):
if -negs[i] in poss:
item = [negs[i], 0, -negs[i]]
if item not in res:
res.append(item)
if len(zeros) > 2:
res.append([0, 0, 0])
sorted_negs = sorted(negs)
sorted_poss = sorted(poss)
for i in range(len(sorted_negs)):
if i > 0 and sorted_negs[i] == sorted_negs[i-1]:
continue
l, r = 0, len(sorted_poss) - 1
while l < r:
if sorted_negs[i] + sorted_poss[l] + sorted_poss[r] == 0:
item = [sorted_negs[i], sorted_poss[l], sorted_poss[r]]
if item not in res:
res.append(item)
l += 1
r -= 1
elif sorted_negs[i] + sorted_poss[l] + sorted_poss[r] < 0:
l += 1
else:
r -= 1
for i in range(len(sorted_poss)):
if i > 0 and sorted_poss[i] == sorted_poss[i-1]:
continue
l, r = 0, len(sorted_negs) - 1
while l < r:
if sorted_poss[i] + sorted_negs[l] + sorted_negs[r] == 0:
item = [sorted_negs[l], sorted_negs[r], sorted_poss[i]]
if item not in res:
res.append(item)
l += 1
r -= 1
elif sorted_poss[i] + sorted_negs[l] + sorted_negs[r] < 0:
l += 1
else:
r -= 1
return res
def threeNums3(nums):
    if len(nums) < 3:
        return []
    res = []
zeros, negs, poss = [], [], []
for i in range(len(nums)):
item = nums[i]
if item == 0:
zeros.append(item)
elif item > 0:
poss.append(item)
else:
negs.append(item)
res = []
if len(zeros) > 0:
for i in range(len(negs)):
if -negs[i] in poss:
item = [negs[i], 0, -negs[i]]
if item not in res:
res.append(item)
if len(zeros) > 2:
res.append([0, 0, 0])
sorted_negs = sorted(negs)
sorted_poss = sorted(poss)
for i in range(len(sorted_negs)):
if i > 0 and sorted_negs[i] == sorted_negs[i-1]:
continue
l, r = 0, len(sorted_poss) - 1
while l < r:
if sorted_negs[i] + sorted_poss[l] + sorted_poss[r] == 0:
item = [sorted_negs[i], sorted_poss[l], sorted_poss[r]]
res.append(item)
while l < r and sorted_poss[l] == sorted_poss[l+1]:
l += 1
while l < r and sorted_poss[r] == sorted_poss[r-1]:
r -= 1
l += 1
r -= 1
elif sorted_negs[i] + sorted_poss[l] + sorted_poss[r] < 0:
l += 1
else:
r -= 1
for i in range(len(sorted_poss)):
if i > 0 and sorted_poss[i] == sorted_poss[i-1]:
continue
l, r = 0, len(sorted_negs) - 1
while l < r:
if sorted_poss[i] + sorted_negs[l] + sorted_negs[r] == 0:
item = [sorted_negs[l], sorted_negs[r], sorted_poss[i]]
res.append(item)
while l < r and sorted_negs[l] == sorted_negs[l+1]:
l += 1
while l < r and sorted_negs[r] == sorted_negs[r-1]:
r -= 1
l += 1
r -= 1
elif sorted_poss[i] + sorted_negs[l] + sorted_negs[r] < 0:
l += 1
else:
r -= 1
return res
| 32.028986 | 80 | 0.415083 | 874 | 6,630 | 3.075515 | 0.067506 | 0.126488 | 0.052083 | 0.053199 | 0.823289 | 0.78869 | 0.75186 | 0.733259 | 0.693452 | 0.678571 | 0 | 0.027057 | 0.453695 | 6,630 | 206 | 81 | 32.184466 | 0.715075 | 0.024585 | 0 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0 | 0 | 0.064865 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d3dcf50390621ea7ce7c4eea24dce4e8c6639992 | 31,197 | py | Python | src/study_keras/5_hello_style_transfer/hello_style_transfer_2.py | iascchen/ai_study_notes | 03f46c5e37670c10bd99000d979940db8878f36c | [
"MIT"
] | 4 | 2019-08-08T09:39:01.000Z | 2019-08-08T09:44:58.000Z | src/study_keras/5_hello_style_transfer/hello_style_transfer_2.py | iascchen/ai_study_notes | 03f46c5e37670c10bd99000d979940db8878f36c | [
"MIT"
] | 5 | 2020-01-28T22:54:31.000Z | 2021-12-13T20:07:11.000Z | src/study_keras/5_hello_style_transfer/hello_style_transfer_2.py | iascchen/ai_study_notes | 03f46c5e37670c10bd99000d979940db8878f36c | [
"MIT"
] | null | null | null | import datetime
import json
import time
from argparse import ArgumentParser
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from imageio import imwrite, imsave
from style_transfer_utils import get_style_loss, get_content_loss, get_tv_loss, residual_block, OutputScale, \
InputReflect, AverageAddTwo, process_image, expand_input, get_vgg_activation, dummy_loss, zero_loss, \
deprocess_image, get_padding, remove_padding
from tensorflow.keras import optimizers
from tensorflow.keras.applications import vgg16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import plot_model
from tensorflow.keras import Model
from tensorflow.keras import layers
tf.enable_eager_execution()
print("Eager execution: {}".format(tf.executing_eagerly()))
def get_training_model(width, height, bs=1, bi_style=False):
input_o = layers.Input(shape=(height, width, 3), dtype='float32', name='input_o')
c1 = layers.Conv2D(32, (9, 9), strides=1, padding='same', name='conv_1')(input_o)
c1 = layers.BatchNormalization(name='normal_1')(c1)
c1 = layers.Activation('relu', name='relu_1')(c1)
c2 = layers.Conv2D(64, (3, 3), strides=2, padding='same', name='conv_2')(c1)
c2 = layers.BatchNormalization(name='normal_2')(c2)
c2 = layers.Activation('relu', name='relu_2')(c2)
c3 = layers.Conv2D(128, (3, 3), strides=2, padding='same', name='conv_3')(c2)
c3 = layers.BatchNormalization(name='normal_3')(c3)
c3 = layers.Activation('relu', name='relu_3')(c3)
r1 = residual_block(c3, 1)
r2 = residual_block(r1, 2)
r3 = residual_block(r2, 3)
r4 = residual_block(r3, 4)
r5 = residual_block(r4, 5)
d1 = layers.Conv2DTranspose(64, (3, 3), strides=2, padding='same', name='conv_4')(r5)
d1 = layers.BatchNormalization(name='normal_4')(d1)
d1 = layers.Activation('relu', name='relu_4')(d1)
d2 = layers.Conv2DTranspose(32, (3, 3), strides=2, padding='same', name='conv_5')(d1)
d2 = layers.BatchNormalization(name='normal_5')(d2)
d2 = layers.Activation('relu', name='relu_5')(d2)
c4 = layers.Conv2D(3, (9, 9), strides=1, padding='same', name='conv_6')(d2)
c4 = layers.BatchNormalization(name='normal_6')(c4)
c4 = layers.Activation('tanh', name='tanh_1')(c4)
c4 = OutputScale(name='output')(c4)
content_activation = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation1 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
if bi_style:
style_activation1_2 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2_2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3_2 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4_2 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
total_variation_loss = layers.Lambda(get_tv_loss, output_shape=(1,), name='tv',
arguments={'width': width, 'height': height})([c4])
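    # (Editor's note) get_tv_loss is the total-variation term: it penalises differences
    # between neighbouring pixels of the generated image c4, encouraging smoother outputs.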
# Block 1
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(c4)
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
style_loss1 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1', arguments={'batch_size': bs})([x, style_activation1])
if bi_style:
style_loss1_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1_2', arguments={'batch_size': bs})([x, style_activation1_2])
style_loss1 = AverageAddTwo(name='style1_out')([style_loss1, style_loss1_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
content_loss = layers.Lambda(get_content_loss, output_shape=(1,), name='content')([x, content_activation])
style_loss2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2', arguments={'batch_size': bs})([x, style_activation2])
if bi_style:
style_loss2_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2_2', arguments={'batch_size': bs})([x, style_activation2_2])
style_loss2 = AverageAddTwo(name='style2_out')([style_loss2, style_loss2_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
style_loss3 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3', arguments={'batch_size': bs})([x, style_activation3])
if bi_style:
style_loss3_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3_2', arguments={'batch_size': bs})([x, style_activation3_2])
style_loss3 = AverageAddTwo(name='style3_out')([style_loss3, style_loss3_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
style_loss4 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4', arguments={'batch_size': bs})([x, style_activation4])
if bi_style:
style_loss4_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4_2', arguments={'batch_size': bs})([x, style_activation4_2])
style_loss4 = AverageAddTwo(name='style4_out')([style_loss4, style_loss4_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
if bi_style:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3, style_activation4,
style_activation1_2, style_activation2_2, style_activation3_2, style_activation4_2],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, c4])
else:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3, style_activation4],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, c4])
model_layers = {layer.name: layer for layer in model.layers}
original_vgg = vgg16.VGG16(weights='imagenet', include_top=False)
original_vgg_layers = {layer.name: layer for layer in original_vgg.layers}
# Load the pre-trained ImageNet weights into the VGG layers and freeze them
for layer in original_vgg.layers:
if layer.name in model_layers:
model_layers[layer.name].set_weights(original_vgg_layers[layer.name].get_weights())
model_layers[layer.name].trainable = False
print("training model built successfully!")
return model
def get_evaluate_model(width, height):
input_o = layers.Input(shape=(height, width, 3), dtype='float32', name='input_o')
c1 = layers.Conv2D(32, (9, 9), strides=1, padding='same', name='conv_1')(input_o)
c1 = layers.BatchNormalization(name='normal_1')(c1)
c1 = layers.Activation('relu', name='relu_1')(c1)
c2 = layers.Conv2D(64, (3, 3), strides=2, padding='same', name='conv_2')(c1)
c2 = layers.BatchNormalization(name='normal_2')(c2)
c2 = layers.Activation('relu', name='relu_2')(c2)
c3 = layers.Conv2D(128, (3, 3), strides=2, padding='same', name='conv_3')(c2)
c3 = layers.BatchNormalization(name='normal_3')(c3)
c3 = layers.Activation('relu', name='relu_3')(c3)
r1 = residual_block(c3, 1)
r2 = residual_block(r1, 2)
r3 = residual_block(r2, 3)
r4 = residual_block(r3, 4)
r5 = residual_block(r4, 5)
d1 = layers.Conv2DTranspose(64, (3, 3), strides=2, padding='same', name='conv_4')(r5)
d1 = layers.BatchNormalization(name='normal_4')(d1)
d1 = layers.Activation('relu', name='relu_4')(d1)
d2 = layers.Conv2DTranspose(32, (3, 3), strides=2, padding='same', name='conv_5')(d1)
d2 = layers.BatchNormalization(name='normal_5')(d2)
d2 = layers.Activation('relu', name='relu_5')(d2)
c4 = layers.Conv2D(3, (9, 9), strides=1, padding='same', name='conv_6')(d2)
c4 = layers.BatchNormalization(name='normal_6')(c4)
c4 = layers.Activation('tanh', name='tanh_1')(c4)
c4 = OutputScale(name='output')(c4)
model = Model([input_o], c4)
print("evaluate model built successfully!")
return model
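# The evaluation model rebuilds only the transform network, reusing the same
# layer names as get_training_model so that trained weights can be copied
# across by name (see the end of train()).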
def get_temp_view_model(width, height, bs=1, bi_style=False):
input_o = layers.Input(shape=(height, width, 3), dtype='float32')
y = InputReflect(width, height, name='output')(input_o)
total_variation_loss = layers.Lambda(get_tv_loss, output_shape=(1,), name='tv',
arguments={'width': width, 'height': height})([y])
content_activation = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation1 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
if bi_style:
style_activation1_2 = layers.Input(shape=(height, width, 64), dtype='float32')
style_activation2_2 = layers.Input(shape=(height // 2, width // 2, 128), dtype='float32')
style_activation3_2 = layers.Input(shape=(height // 4, width // 4, 256), dtype='float32')
style_activation4_2 = layers.Input(shape=(height // 8, width // 8, 512), dtype='float32')
# Block 1
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(y)
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
style_loss1 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1', arguments={'batch_size': bs})([x, style_activation1])
if bi_style:
style_loss1_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style1_2', arguments={'batch_size': bs})([x, style_activation1_2])
style_loss1 = AverageAddTwo(name='style1_out')([style_loss1, style_loss1_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
content_loss = layers.Lambda(get_content_loss, output_shape=(1,), name='content')([x, content_activation])
style_loss2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2', arguments={'batch_size': bs})([x, style_activation2])
if bi_style:
style_loss2_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style2_2', arguments={'batch_size': bs})([x, style_activation2_2])
style_loss2 = AverageAddTwo(name='style2_out')([style_loss2, style_loss2_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
style_loss3 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3', arguments={'batch_size': bs})([x, style_activation3])
if bi_style:
style_loss3_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style3_2', arguments={'batch_size': bs})([x, style_activation3_2])
style_loss3 = AverageAddTwo(name='style3_out')([style_loss3, style_loss3_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
style_loss4 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4', arguments={'batch_size': bs})([x, style_activation4])
if bi_style:
style_loss4_2 = layers.Lambda(get_style_loss, output_shape=(1,),
name='style4_2', arguments={'batch_size': bs})([x, style_activation4_2])
style_loss4 = AverageAddTwo(name='style4_out')([style_loss4, style_loss4_2])
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
if bi_style:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3,
style_activation4,
style_activation1_2, style_activation2_2, style_activation3_2, style_activation4_2],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, y])
else:
model = Model(
[input_o, content_activation, style_activation1, style_activation2, style_activation3,
style_activation4],
[content_loss, style_loss1, style_loss2, style_loss3, style_loss4, total_variation_loss, y])
model_layers = {layer.name: layer for layer in model.layers}
original_vgg = vgg16.VGG16(weights='imagenet', include_top=False)
original_vgg_layers = {layer.name: layer for layer in original_vgg.layers}
# Load the pre-trained ImageNet weights into the VGG layers and freeze them
for layer in original_vgg.layers:
if layer.name in model_layers:
model_layers[layer.name].set_weights(original_vgg_layers[layer.name].get_weights())
model_layers[layer.name].trainable = False
print("temp_view model built successfully!")
return model
def train(options):
width = options["train_image_width"]
height = options["train_image_height"]
# Get style activations
style_tensor = process_image(options["style_image_path"], width, height)
style_acts = list()
for layer_name in options["style_layer"]:
func = get_vgg_activation(layer_name, width, height)
style_act = expand_input(options["batch_size"], func([style_tensor])[0])
style_acts.append(style_act)
if "style_image_path_2" in options:
style_tensor_2 = process_image(options["style_image_path_2"], width, height)
style_acts_2 = list()
for layer_name in options["style_layer"]:
func = get_vgg_activation(layer_name, width, height)
style_act_2 = expand_input(options["batch_size"], func([style_tensor_2])[0])
style_acts_2.append(style_act_2)
# Get content activations for test_image
content_test = process_image(options["test_image_path"], width, height)
content_func = get_vgg_activation(options["content_layer"], width, height)
content_act_test = expand_input(options["batch_size"], content_func([content_test])[0])
content_test = expand_input(options["batch_size"], content_test)
# Get weights
style_w = options["style_weight"] / len(style_acts)
content_w = options["content_weight"]
tv_w = options["total_variation_weight"]
# Get training model
bi_style = False
if "style_image_path_2" in options:
bi_style = True
training_model = get_training_model(width, height, bs=options['batch_size'], bi_style=bi_style)
if bi_style:
training_model.compile(loss={'content': dummy_loss, 'style1_out': dummy_loss, 'style2_out': dummy_loss,
'style3_out': dummy_loss, 'style4_out': dummy_loss, 'tv': dummy_loss,
'output': zero_loss},
optimizer=optimizers.Adam(lr=options["learning_rate"]),
loss_weights=[content_w, style_w, style_w, style_w, style_w, tv_w, 0])
else:
training_model.compile(loss={'content': dummy_loss, 'style1': dummy_loss, 'style2': dummy_loss,
'style3': dummy_loss, 'style4': dummy_loss, 'tv': dummy_loss, 'output': zero_loss},
optimizer=optimizers.Adam(lr=options["learning_rate"]),
loss_weights=[content_w, style_w, style_w, style_w, style_w, tv_w, 0])
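# Note on this compile setup: every loss term is already computed inside the
# graph by the Lambda layers, so the Keras-level losses are placeholders.
# dummy_loss and zero_loss are defined elsewhere in this project; judging from
# their use here, they presumably forward the predicted loss value and ignore
# the image output respectively, with loss_weights applying the content/style/
# tv weighting. This is why fit() below feeds dummy_in arrays as targets for
# the loss heads and the input batch itself for the 'output' head.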
# If flag is set, print model summary and generate model description
if options["plot_model"]:
training_model.summary()
plot_model(training_model, to_file='model.png')
# function for printing test information
def print_test_results(cur_res, cur_iter, prev_loss):
losses = list()
losses.append(cur_res[0][0] * content_w)
losses.append(cur_res[1][0] * style_w)
losses.append(cur_res[2][0] * style_w)
losses.append(cur_res[3][0] * style_w)
losses.append(cur_res[4][0] * style_w)
losses.append(cur_res[5][0] * tv_w)
cur_loss = sum(losses)
if prev_loss is None:
prev_loss = cur_loss
print("----------------------------------------------------")
print("Details: iteration %d, " % cur_iter, end='')
print('improvement: %.2f percent, ' % ((prev_loss - cur_loss) / prev_loss * 100), end='')
print("loss: %.0f" % cur_loss)
print("content_loss: %.0f, style_loss_1: %.0f, style_loss_2: %.0f\n"
"style_loss_3: %.0f, style_loss_4: %.0f, tv_loss: %.0f"
% (losses[0], losses[1], losses[2], losses[3], losses[4], losses[5]))
print("----------------------------------------------------")
return cur_loss
# Prepare for training
dg = ImageDataGenerator()
dummy_in = expand_input(options["batch_size"], np.array([0.0]))
interrupted = False
c_loss = None
t_sum = 0.0
# Begin Training
t_total_1 = time.time()
for i in range(options["epochs"]):
print("Epoch: %d" % (i + 1))
iters = 0
for x in dg.flow_from_directory(options["train_image_path"], class_mode=None,
batch_size=options["batch_size"], target_size=(height, width)):
try:
t1 = time.time()
x = vgg16.preprocess_input(x)
content_act = content_func([x])[0]
if bi_style:
res = training_model.fit([x, content_act, style_acts[0], style_acts[1], style_acts[2],
style_acts[3], style_acts_2[0], style_acts_2[1], style_acts_2[2],
style_acts_2[3]], [dummy_in, dummy_in, dummy_in, dummy_in, dummy_in,
dummy_in, x],
epochs=1, verbose=0, batch_size=options["batch_size"])
else:
res = training_model.fit([x, content_act, style_acts[0], style_acts[1], style_acts[2],
style_acts[3]], [dummy_in, dummy_in, dummy_in, dummy_in, dummy_in,
dummy_in, x],
epochs=1, verbose=0, batch_size=options["batch_size"])
t2 = time.time()
t_sum += t2 - t1
iters += 1
if iters % options["view_iter"] == 0:
loss = res.history['loss'][0]
est_time = int((options["steps_per_epoch"] * (options["epochs"] - i) - iters)
* (t_sum / options["view_iter"]))
print("Iter : %d / %d, Time elapsed: %0.2f seconds, Loss: %.0f, EST: " %
(iters, options["steps_per_epoch"], t_sum / options["view_iter"], loss) +
str(datetime.timedelta(seconds=est_time)))
t_sum = 0.0
if iters % options["test_iter"] == 0:
if bi_style:
res = training_model.predict([content_test, content_act_test, style_acts[0], style_acts[1],
style_acts[2], style_acts[3], style_acts_2[0], style_acts_2[1],
style_acts_2[2], style_acts_2[3]])
else:
res = training_model.predict([content_test, content_act_test, style_acts[0], style_acts[1],
style_acts[2], style_acts[3]])
c_loss = print_test_results(res, iters, c_loss)
output = deprocess_image(res[6][0], width, height)
imsave(options["test_res_save_path"] + '%d_%d_output.jpg' % (i, iters), output)
if iters >= options["steps_per_epoch"]:
break
except KeyboardInterrupt:
print("Interrupted, training suspended.")
interrupted = True
break
if interrupted:
break
t_total_2 = time.time()
print("Training ended. Time used: " + str(datetime.timedelta(seconds=int(t_total_2 - t_total_1))))
# Saving models
print("Saving models...")
model_eval = get_evaluate_model(width, height)
training_model_layers = {layer.name: layer for layer in training_model.layers}
for layer in model_eval.layers:
if layer.name in training_model_layers:
print(layer.name)
layer.set_weights(training_model_layers[layer.name].get_weights())
model_eval.save_weights(options["weights_save_path"] + '%s_weights.h5' % options["net_name"])
def temp_view(options, img_read_path, img_write_path, iters):
width = options["train_image_width"]
height = options["train_image_height"]
# Get style activations
style_tensor = K.variable(process_image(options["style_image_path"], width, height))
style_acts = list()
for layer_name in options["style_layer"]:
func = get_vgg_activation(layer_name, width, height)
style_act = func([style_tensor])[0]
style_acts.append(style_act)
if "style_image_path_2" in options:
style_tensor_2 = process_image(options["style_image_path_2"], width, height)
style_acts_2 = list()
for layer_name in options["style_layer"]:
func = get_vgg_activation(layer_name, width, height)
style_act_2 = func([style_tensor_2])[0]
style_acts_2.append(style_act_2)
# Get content activations
content_tensor = K.variable(process_image(img_read_path, width, height))
func = get_vgg_activation(options["content_layer"], width, height)
content_act = func([content_tensor])[0]
dummy_in = np.array([0.0])
style_w = options["style_weight"] / len(style_acts)
content_w = options["content_weight"]
tv_w = options["total_variation_weight"]
# Get training model
bi_style = False
if "style_image_path_2" in options:
bi_style = True
training_model = get_temp_view_model(width, height, bi_style=bi_style)
if bi_style:
training_model.compile(loss={'content': dummy_loss, 'style1_out': dummy_loss, 'style2_out': dummy_loss,
'style3_out': dummy_loss, 'style4_out': dummy_loss, 'tv': dummy_loss,
'output': zero_loss},
optimizer=optimizers.Adam(lr=1),
loss_weights=[content_w, style_w, style_w, style_w, style_w, tv_w, 0])
else:
training_model.compile(loss={'content': dummy_loss, 'style1': dummy_loss, 'style2': dummy_loss,
'style3': dummy_loss, 'style4': dummy_loss, 'tv': dummy_loss, 'output': zero_loss},
optimizer=optimizers.Adam(lr=1),
loss_weights=[content_w, style_w, style_w, style_w, style_w, tv_w, 0])
# If flag is set, print model summary and generate model description
if options["plot_model"]:
training_model.summary()
plot_model(training_model, to_file='model.png')
# Input should always be ones
x = np.ones([1, height, width, 3], dtype='float32')
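# temp_view optimises a single image rather than the transform network: the
# InputReflect layer (defined elsewhere in this project) presumably stores the
# generated image as trainable weights, so the actual model input can stay a
# constant tensor of ones.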
# Begin training
prev_loss = None
for i in range(iters):
t1 = time.time()
if bi_style:
res = training_model.fit(
[x, content_act, style_acts[0], style_acts[1], style_acts[2], style_acts[3], style_acts_2[0],
style_acts_2[1], style_acts_2[2], style_acts_2[3]],
[dummy_in, dummy_in, dummy_in, dummy_in, dummy_in, dummy_in, x],
epochs=1, verbose=0, batch_size=1)
else:
res = training_model.fit([x, content_act, style_acts[0], style_acts[1], style_acts[2], style_acts[3]],
[dummy_in, dummy_in, dummy_in, dummy_in, dummy_in, dummy_in, x],
epochs=1, verbose=0, batch_size=1)
t2 = time.time()
if i % 10 == 0:
loss = res.history['loss'][0]
if prev_loss is None:
prev_loss = loss
improvement = (prev_loss - loss) / prev_loss * 100
prev_loss = loss
print("Iter: %d / %d, Time elapsed: %0.2f seconds, Loss: %.0f, Improvement: %0.2f percent." %
(i, iters, t2 - t1, loss, improvement))
if bi_style:
print("Detail: content_loss: %0.0f, style_loss_1: %0.0f, style_loss_2: %0.0f,"
" style_loss_3: %0.0f, style_loss_4: %0.0f, tv_loss: %0.0f"
% (float(res.history['content_loss'][0]) * content_w,
float(res.history['style1_out_loss'][0]) * style_w,
float(res.history['style2_out_loss'][0]) * style_w,
float(res.history['style3_out_loss'][0]) * style_w,
float(res.history['style4_out_loss'][0]) * style_w,
float(res.history['tv_loss'][0]) * tv_w))
else:
print("Detail: content_loss: %0.0f, style_loss_1: %0.0f, style_loss_2: %0.0f,"
" style_loss_3: %0.0f, style_loss_4: %0.0f, tv_loss: %0.0f"
% (float(res.history['content_loss'][0]) * content_w,
float(res.history['style1_loss'][0]) * style_w,
float(res.history['style2_loss'][0]) * style_w,
float(res.history['style3_loss'][0]) * style_w,
float(res.history['style4_loss'][0]) * style_w,
float(res.history['tv_loss'][0]) * tv_w))
if bi_style:
res = training_model.predict(
[x, content_act, style_acts[0], style_acts[1], style_acts[2], style_acts[3], style_acts_2[0],
style_acts_2[1], style_acts_2[2], style_acts_2[3]])
else:
res = training_model.predict([x, content_act, style_acts[0], style_acts[1], style_acts[2], style_acts[3]])
output = deprocess_image(res[6][0], width, height)
imsave(img_write_path, output)
def predict(options, img_read_path, img_write_path):
# Read image
content = process_image(img_read_path, -1, -1, resize=False)
ori_height = content.shape[1]
ori_width = content.shape[2]
# Pad image
content = get_padding(content)
height = content.shape[1]
width = content.shape[2]
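# get_padding/remove_padding are defined elsewhere; presumably they pad the
# image so height and width are multiples of 4, since the network downsamples
# twice with stride 2 and then upsamples twice, and other sizes would not
# reconstruct to the original resolution.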
# Get eval model
eval_model = get_evaluate_model(width, height)
eval_model.load_weights(options['weights_read_path'])
# If flag is set, print model summary and generate model description
if options["plot_model"]:
eval_model.summary()
plot_model(eval_model, to_file='model.png')
# Generate output and save image
res = eval_model.predict([content])
output = deprocess_image(res[0], width, height)
output = remove_padding(output, ori_height, ori_width)
imwrite(img_write_path, output)
def build_parser():
parser = ArgumentParser()
parser.add_argument('-c', type=str, dest='config_path', help='config path',
metavar='CONFIG_PATH', required=True)
parser.add_argument('-m', type=str, dest='mode', help='train, predict or temp_view',
metavar='MODE', required=True)
parser.add_argument('-i', type=str, dest='image_path', help='image for transformation or viewing',
metavar='IMAGE_PATH')
parser.add_argument('-o', type=str, dest='image_output_path', help='image output path',
metavar='IMAGE_OUTPUT_PATH')
parser.add_argument('--iters', type=int, dest='iters', help='iter times, only for temp_view mode',
metavar='ITER_TIMES', default=500)
return parser
if __name__ == '__main__':
parser = build_parser()
args = parser.parse_args()
with open(args.config_path) as f_config:
options = json.load(f_config)
if args.mode == 'train':
train(options)
elif args.mode == 'predict':
predict(options, args.image_path, args.image_output_path)
elif args.mode == 'temp_view':
temp_view(options, args.image_path, args.image_output_path, args.iters)
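# Example invocation (the script and config file names below are illustrative,
# not taken from this repository):
#   python style_transfer.py -c config.json -m train
#   python style_transfer.py -c config.json -m predict -i input.jpg -o styled.jpg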
| 50.97549 | 120 | 0.615348 | 4,092 | 31,197 | 4.439394 | 0.071359 | 0.028735 | 0.031377 | 0.0229 | 0.808598 | 0.781625 | 0.771992 | 0.750963 | 0.730541 | 0.725917 | 0 | 0.046421 | 0.241818 | 31,197 | 611 | 121 | 51.05892 | 0.7216 | 0.021797 | 0 | 0.63278 | 0 | 0.008299 | 0.124984 | 0.004855 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016598 | false | 0 | 0.03112 | 0 | 0.058091 | 0.043568 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d3efb5cddd97fde84a7828bddcb89221bdeafc93 | 32,299 | py | Python | tiled-lutnet/training-software/MNIST-CIFAR-SVHN/models/SVHN/scripts/bnn_pruning.py | awai54st/LUTNet | 81b044f31d1131bee1a7fae41fc4d2fb102ea73a | [
"BSD-2-Clause"
] | 38 | 2019-10-28T10:06:33.000Z | 2022-02-21T21:38:39.000Z | tiled-lutnet/training-software/MNIST-CIFAR-SVHN/models/SVHN/scripts/bnn_pruning.py | awai54st/LUTNet | 81b044f31d1131bee1a7fae41fc4d2fb102ea73a | [
"BSD-2-Clause"
] | null | null | null | tiled-lutnet/training-software/MNIST-CIFAR-SVHN/models/SVHN/scripts/bnn_pruning.py | awai54st/LUTNet | 81b044f31d1131bee1a7fae41fc4d2fb102ea73a | [
"BSD-2-Clause"
] | 13 | 2019-10-28T10:17:48.000Z | 2021-08-10T21:37:11.000Z | import h5py
import numpy as np
from shutil import copyfile
copyfile("baseline_reg.h5", "pretrained_pruned.h5")  # create pretrained_pruned.h5 with the same structure as the baseline file
bl = h5py.File("baseline_reg.h5", 'r')
#dummy = h5py.File("dummy.h5", 'r')
pretrained = h5py.File("pretrained_pruned.h5", 'r+')
normalisation="l2"
channel_threshold=0.5
p_c1=-1
p_c2=1
p_c3=1.00
p_c4=1.00
p_c5=1.00
p_c6=1.00
p_d1=1.00
p_d2=1.00
p_d3=-1
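# Per-layer pruning thresholds. With the l2 normalisation selected above, each
# tile norm is non-negative, so a threshold of -1 keeps every tile of that
# layer (no pruning), while values near 1.0 prune far more aggressively. Note
# that the "l1" branches below accumulate the signed weights without abs(),
# which looks like an oversight, but they are dead code while
# normalisation == "l2".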
# conv layer 1
bl_w1 = bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["Variable_1:0"]
#bl_rand_map = bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["Variable:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_conv_1"]["binary_conv_1"]["Variable_1:0"]
#pret_rand_map = pretrained["model_weights"]["binary_conv_1"]["binary_conv_1"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_conv_1"]["binary_conv_1"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_conv_1"]["binary_conv_1"]["Variable:0"]
pret_w1[...] = np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
weight = np.array(bl_w1)
TRC = 1
TM = 1
TN = 2
Tsize_RC = np.shape(weight)[0] // TRC
Tsize_M = np.shape(weight)[2] // TM
Tsize_N = np.shape(weight)[3] // TN
one_tile = np.zeros([Tsize_RC,Tsize_RC,Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TRC*TRC*TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TRC*TRC*TM*TN)
norm = np.sqrt(norm)
norm=np.reshape(norm, [-1,np.shape(norm)[3]])
pruning_mask = np.greater(norm, p_c1)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
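# The tile-norm computation above is repeated almost verbatim for every layer
# below. As a sketch only (this helper is illustrative, is not called by the
# rest of the script, and its name/signature are not part of the original
# code), the same per-tile l2 norm could be factored out like this:
def tiled_l2_norm_conv(weight, trc, tm, tn):
    """Per-tile l2 norm of a 4-D conv weight tensor, reshaped to 2-D as above."""
    size_rc = weight.shape[0] // trc
    size_m = weight.shape[2] // tm
    size_n = weight.shape[3] // tn
    acc = np.zeros((size_rc, size_rc, size_m, size_n))
    for n in range(tn):
        for m in range(tm):
            for rc in range(trc):
                tile = weight[rc * size_rc:(rc + 1) * size_rc,
                              rc * size_rc:(rc + 1) * size_rc,
                              m * size_m:(m + 1) * size_m,
                              n * size_n:(n + 1) * size_n]
                acc = acc + tile ** 2
    acc = np.sqrt(acc / (trc * trc * tm * tn))
    return np.reshape(acc, [-1, acc.shape[3]])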
# conv layer 2
bl_w1 = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_1"]["residual_sign_1"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_1"]["residual_sign_1"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TRC = 1
TM = 8
TN = 8
Tsize_RC = np.shape(weight)[0] // TRC
Tsize_M = np.shape(weight)[2] // TM
Tsize_N = np.shape(weight)[3] // TN
one_tile = np.zeros([Tsize_RC,Tsize_RC,Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TRC*TRC*TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TRC*TRC*TM*TN)
norm = np.sqrt(norm)
norm=np.reshape(norm, [-1,np.shape(norm)[3]])
pruning_mask = np.greater(norm, p_c2)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# conv layer 3
bl_w1 = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_2"]["residual_sign_2"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_2"]["residual_sign_2"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TRC = 1
TM = 8
TN = 8
Tsize_RC = np.shape(weight)[0] // TRC
Tsize_M = np.shape(weight)[2] // TM
Tsize_N = np.shape(weight)[3] // TN
one_tile = np.zeros([Tsize_RC,Tsize_RC,Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TRC*TRC*TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TRC*TRC*TM*TN)
norm = np.sqrt(norm)
norm=np.reshape(norm, [-1,np.shape(norm)[3]])
pruning_mask = np.greater(norm, p_c3)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# conv layer 4
bl_w1 = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_3"]["residual_sign_3"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_3"]["residual_sign_3"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TRC = 1
TM = 8
TN = 8
Tsize_RC = np.shape(weight)[0] // TRC
Tsize_M = np.shape(weight)[2] // TM
Tsize_N = np.shape(weight)[3] // TN
one_tile = np.zeros([Tsize_RC,Tsize_RC,Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TRC*TRC*TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TRC*TRC*TM*TN)
norm = np.sqrt(norm)
norm=np.reshape(norm, [-1,np.shape(norm)[3]])
pruning_mask = np.greater(norm, p_c4)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# conv layer 5
bl_w1 = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_4"]["residual_sign_4"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_4"]["residual_sign_4"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TRC = 1
TM = 8
TN = 8
Tsize_RC = np.shape(weight)[0] // TRC
Tsize_M = np.shape(weight)[2] // TM
Tsize_N = np.shape(weight)[3] // TN
one_tile = np.zeros([Tsize_RC,Tsize_RC,Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TRC*TRC*TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TRC*TRC*TM*TN)
norm = np.sqrt(norm)
norm=np.reshape(norm, [-1,np.shape(norm)[3]])
pruning_mask = np.greater(norm, p_c5)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# conv layer 6
bl_w1 = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_5"]["residual_sign_5"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_5"]["residual_sign_5"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TRC = 1
TM = 8
TN = 8
Tsize_RC = np.shape(weight)[0] // TRC
Tsize_M = np.shape(weight)[2] // TM
Tsize_N = np.shape(weight)[3] // TN
one_tile = np.zeros([Tsize_RC,Tsize_RC,Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TRC*TRC*TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
for rc in range(TRC):
norm = norm + weight[(rc*Tsize_RC):((rc+1)*Tsize_RC),(rc*Tsize_RC):((rc+1)*Tsize_RC),(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TRC*TRC*TM*TN)
norm = np.sqrt(norm)
norm=np.reshape(norm, [-1,np.shape(norm)[3]])
pruning_mask = np.greater(norm, p_c6)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# dense layer 1
bl_w1 = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_6"]["residual_sign_6"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_6"]["residual_sign_6"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TM = 8
TN = 8
Tsize_M = np.shape(weight)[0] // TM
Tsize_N = np.shape(weight)[1] // TN
one_tile = np.zeros([Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
norm = norm + weight[(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
norm = norm + weight[(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TM*TN)
norm = np.sqrt(norm)
#l1_norm=np.reshape(l1_norm, [-1,np.shape(l1_norm)[3]])
pruning_mask = np.greater(norm, p_d1)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# dense layer 2
bl_w1 = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_7"]["residual_sign_7"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_7"]["residual_sign_7"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TM = 8
TN = 8
Tsize_M = np.shape(weight)[0] // TM
Tsize_N = np.shape(weight)[1] // TN
one_tile = np.zeros([Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
norm = norm + weight[(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
norm = norm + weight[(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TM*TN)
norm = np.sqrt(norm)
#l1_norm=np.reshape(l1_norm, [-1,np.shape(l1_norm)[3]])
pruning_mask = np.greater(norm, p_d2)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# dense layer 3
bl_w1 = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_1:0"]
#bl_w2 = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_2:0"]
#bl_w3 = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_3:0"]
#bl_w4 = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_4:0"]
#bl_rand_map = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["rand_map:0"]
bl_pruning_mask = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["pruning_mask:0"]
bl_gamma = bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable:0"]
bl_means = bl["model_weights"]["residual_sign_8"]["residual_sign_8"]["means:0"]
zero_fill = np.zeros(np.shape(np.array(bl_w1)))
pret_w1 = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_1:0"]
#pret_w2 = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_2:0"]
#pret_w3 = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_3:0"]
#pret_w4 = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_4:0"]
#pret_rand_map = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["rand_map:0"]
pret_pruning_mask = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["pruning_mask:0"]
p_gamma = pretrained["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable:0"]
pret_means = pretrained["model_weights"]["residual_sign_8"]["residual_sign_8"]["means:0"]
pret_w1[...] = np.array(bl_w1)
#pret_w2[...] = zero_fill
#pret_w3[...] = zero_fill
#pret_w4[...] = -np.array(bl_w1)
#pret_rand_map[...] = np.array(bl_rand_map)
p_gamma[...] = np.array(bl_gamma)
pret_means[...] = np.array(bl_means)
weight = np.array(bl_w1)
TM = 8
TN = 10
Tsize_M = np.shape(weight)[0] // TM
Tsize_N = np.shape(weight)[1] // TN
one_tile = np.zeros([Tsize_M,Tsize_N])
# set up pruning_mask
#mean=np.mean(abs(weight),axis=3)
norm=one_tile
if normalisation=="l1":
for n in range(TN):
for m in range(TM):
norm = norm + weight[(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]
norm = norm / (TM*TN)
elif normalisation=="l2":
for n in range(TN):
for m in range(TM):
norm = norm + weight[(m*Tsize_M):((m+1)*Tsize_M),(n*Tsize_N):((n+1)*Tsize_N)]**2
norm = norm / (TM*TN)
norm = np.sqrt(norm)
#l1_norm=np.reshape(l1_norm, [-1,np.shape(l1_norm)[3]])
pruning_mask = np.greater(norm, p_d3)
pret_pruning_mask[...] = np.array(pruning_mask,dtype=float)
print(np.sum(np.array(pret_pruning_mask)))
# bn 1
bl_beta = bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
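# The eight batch-norm blocks below repeat the same four copies with a different
# layer index. A sketch of a helper that would do the same (illustrative only;
# the original per-layer blocks are kept unchanged below):
def copy_batch_norm(src_file, dst_file, layer_name):
    """Copy beta, gamma and moving statistics for one BatchNormalization layer."""
    src = src_file["model_weights"][layer_name][layer_name]
    dst = dst_file["model_weights"][layer_name][layer_name]
    for var in ("beta:0", "gamma:0", "moving_mean:0", "moving_variance:0"):
        dst[var][...] = np.array(src[var])
# Usage would be e.g.: for i in range(1, 10): copy_batch_norm(bl, pretrained, "batch_normalization_%d" % i)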
# bn 2
bl_beta = bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 3
bl_beta = bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 4
bl_beta = bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 5
bl_beta = bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 6
bl_beta = bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 7
bl_beta = bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 8
bl_beta = bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
# bn 9
bl_beta = bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["beta:0"]
bl_gamma = bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["gamma:0"]
bl_moving_mean = bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["moving_mean:0"]
bl_moving_variance = bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["moving_variance:0"]
p_beta = pretrained["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["beta:0"]
p_gamma = pretrained["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["gamma:0"]
p_moving_mean = pretrained["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["moving_mean:0"]
p_moving_variance = pretrained["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["moving_variance:0"]
p_beta[...] = np.array(bl_beta)
p_gamma[...] = np.array(bl_gamma)
p_moving_mean[...] = np.array(bl_moving_mean)
p_moving_variance[...] = np.array(bl_moving_variance)
pretrained.close()
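# Minor cleanup (an addition, not in the original script): the read-only
# baseline file opened at the top is never closed, so close it here as well.
bl.close()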
| 48.135618 | 148 | 0.727855 | 5,431 | 32,299 | 3.957466 | 0.019702 | 0.116131 | 0.100498 | 0.07984 | 0.983576 | 0.983576 | 0.983576 | 0.981064 | 0.978086 | 0.960266 | 0 | 0.035316 | 0.064584 | 32,299 | 670 | 149 | 48.207463 | 0.676067 | 0.229883 | 0 | 0.617778 | 0 | 0 | 0.341042 | 0.122266 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006667 | 0 | 0.006667 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d3f53de972e9a287851ec94e3ebe3a9ca63cc6e2 | 275 | py | Python | shopify/__init__.py | subhrajyoti21/shopify_python_api | 8efdafd7a57aad782e7e93b5b16cde47d350da2a | [
"MIT"
] | 828 | 2015-01-08T16:03:55.000Z | 2022-03-25T16:58:37.000Z | shopify/__init__.py | subhrajyoti21/shopify_python_api | 8efdafd7a57aad782e7e93b5b16cde47d350da2a | [
"MIT"
] | 389 | 2015-02-01T03:33:49.000Z | 2022-03-23T08:42:33.000Z | shopify/__init__.py | subhrajyoti21/shopify_python_api | 8efdafd7a57aad782e7e93b5b16cde47d350da2a | [
"MIT"
] | 267 | 2015-01-20T21:40:19.000Z | 2022-03-29T04:09:56.000Z | from shopify.version import VERSION
from shopify.session import Session, ValidationException
from shopify.resources import *
from shopify.limits import Limits
from shopify.api_version import *
from shopify.api_access import *
from shopify.collection import PaginatedIterator
| 34.375 | 56 | 0.854545 | 35 | 275 | 6.657143 | 0.342857 | 0.330472 | 0.218884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105455 | 275 | 7 | 57 | 39.285714 | 0.947154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3110be7b92215fcda831926e2c522182ac865eae | 34,759 | py | Python | pattern/tests/test_recognition.py | IKATS/op-pattern | ee9c06b2494a949739e249414a7d6f964f2b5fe2 | [
"Apache-2.0"
] | null | null | null | pattern/tests/test_recognition.py | IKATS/op-pattern | ee9c06b2494a949739e249414a7d6f964f2b5fe2 | [
"Apache-2.0"
] | null | null | null | pattern/tests/test_recognition.py | IKATS/op-pattern | ee9c06b2494a949739e249414a7d6f964f2b5fe2 | [
"Apache-2.0"
] | 1 | 2019-10-29T08:08:11.000Z | 2019-10-29T08:08:11.000Z | """
Copyright 2018-2019 CS Systèmes d'Information
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import unittest
import numpy as np
from ikats.algo.sax.sliding_sax import SaxResult
from ikats.algo.pattern.random_proj import NeighborhoodSearch, ConfigRecognition
from ikats.algo.pattern.collision import SparseMatrix
from ikats.algo.pattern.recognition import OPT_USING_BRUTE_FORCE, OPT_USING_COLLISIONS, \
_start_alphabet, _get_mindist
from ikats.core.library.spark import ScManager
class TestRecognition(unittest.TestCase):
"""
Tests the pattern recognition
"""
@staticmethod
def _print_mindist_mat(search_info, activate=False):
"""
        Helper for building new test cases: this method displays the mindist matrix from the NeighborhoodSearch.
        Set activate to False to disable this verbose printing.
:param search_info: tested object
:type search_info: NeighborhoodSearch
:param activate:
:type activate: boolean
"""
if activate:
alphabet = _start_alphabet(search_info.alphabet_size)
nb_seqs = len(search_info.sax)
mindist_mat = np.zeros((nb_seqs, nb_seqs))
for i in range(0, nb_seqs):
for j in range(0, nb_seqs):
mindist_mat[i][j] = _get_mindist(search_info.size_sequence, search_info.sax[i],
search_info.sax[j],
search_info.mindist_lookup_table,
alphabet)
print("mindist distances:")
print(mindist_mat)
@staticmethod
def _print_matrix(test, data, nb_seq, activate=False):
"""
        Helper for building new test cases: this method displays the matrix corresponding to a SparseMatrix.
        Set activate to False to disable this verbose printing.
:param test: name of the test
:type test: str
:param data: data from SparseMatrix
:type data: list of tuple
:param nb_seq: nb sequences
:type nb_seq: int
:param activate: False to disable useless printing, once the test is well prepared
:type activate: boolean
"""
if activate:
mat = np.zeros((nb_seq, nb_seq))
for coll, (row, col) in data:
mat[row, col] = coll
                # the symmetric entry mat[col, row] = coll is intentionally not initialized
print(test)
print("np.array({})".format(str(mat).replace('.', ',').replace(']\n', '],\n')))
def test_global_same_words_spark(self):
"""
Test: see _apply_motif_global_same_words(activate_spark=True)
"""
self._apply_motif_global_same_words(activate_spark=True)
def test_global_same_words_no_spark(self):
"""
Test: see _apply_motif_global_same_words(activate_spark=False)
"""
self._apply_motif_global_same_words(activate_spark=False)
def _apply_motif_global_same_words(self, activate_spark):
"""
Test
- with the global method to search the neighborhood motif,
- with/without spark jobs according to activate_spark
- and where the words are all the same
"""
spark_context = ScManager.get()
# Build the SAX result with large breakpoints
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-300, -100, 100, 300],
sax_word='abcdeabcdeabcdeabcde')
sax, _, _ = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(alphabet_size=5)
# Build the collision matrix result
collision_matrix = SparseMatrix(np.array([[0, 0, 0, 0, ],
[100, 0, 0, 0, ],
[100, 100, 0, 0, ],
[100, 100, 100, 0, ]]))
# two identical cases here: brute force / with collisions
for method_opt in [OPT_USING_BRUTE_FORCE, OPT_USING_COLLISIONS]:
# mindist distances:
#
# [[ 0. 0. 0. 0.]
# [ 0. 0. 0. 0.]
# [ 0. 0. 0. 0.]
# [ 0. 0. 0. 0.]]
# Build the class for motif search
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=0.01,
collision_matrix=collision_matrix)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=0,
min_value=1,
is_algo_method_global=True,
activate_spark=activate_spark,
radius=0.01,
neighborhood_method=method_opt)
# neighborhood_method=OPT_USING_BRUTE_FORCE (compare with all the words)
result = search_info.motif_neighborhood_global(30, recognition_info)
self._print_mindist_mat(search_info)
            # The words corresponding to the six largest-value cells have a MINDIST < radius
self.assertEqual(len(result), 1)
            # These results are all the same: [0, 1, 2, 3]: the 6 groups have been reduced to a single one
self.assertEqual(result, [[0, 1, 2, 3]])
def test_global_zero_coll_spark(self):
"""
Test: see _apply_motif_global_zero_coll(activate_spark=False)
"""
self._apply_motif_global_zero_coll(activate_spark=True)
def test_global_zero_coll_no_spark(self):
"""
Test: see _apply_motif_global_zero_coll(activate_spark=False)
"""
self._apply_motif_global_zero_coll(activate_spark=False)
def _apply_motif_global_zero_coll(self, activate_spark):
"""
Test
- with the global method to search the neighborhood motif,
- with/without spark jobs, according to activate_spark
- and where the words are all different.
"""
spark_context = ScManager.get()
# Build the SAX result with different words, and small breakpoints
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-0.3, -0.1, 0.1, 0.3],
sax_word='abcdebcdeacdeabdeabceabcd')
sax, _, _ = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(5)
        # Different words => only zero cells in the collision matrix
collision_matrix = SparseMatrix(np.zeros((2, 2)))
# two identical cases here: brute force / with collisions
for method_opt in [OPT_USING_BRUTE_FORCE, OPT_USING_COLLISIONS]:
# Build the class for motif search
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=1000,
collision_matrix=collision_matrix)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=0,
min_value=1,
is_algo_method_global=True,
activate_spark=activate_spark,
radius=1000,
neighborhood_method=method_opt)
# neighborhood_method=OPT_USING_BRUTE_FORCE
result = search_info.motif_neighborhood_global(30, recognition_info)
            # There are no similar sequences
self.assertEqual(len(result), 0)
def test_global_brute_spark_ex1(self):
"""
Test: see _apply_motif_global_brute_ex1(activate_spark=None)
"""
self._apply_motif_global_brute_ex1(activate_spark=True)
def test_global_brute_no_spark_ex1(self):
"""
Test: see _apply_motif_global_brute_ex1(activate_spark=None)
"""
self._apply_motif_global_brute_ex1(activate_spark=None)
def _apply_motif_global_brute_ex1(self, activate_spark):
"""
Test
- with the global method to search the neighborhood motif,
- with brute force
- with/without spark jobs according to activate_spark
- and where the words have only one different letter.
"""
# Build the SAX result where the words have only one different letter (words: 5 letters)
sequences = ["abcde", "abcdd", "abcdc", "abcdb", "abcda"]
tested_sax_word = ''.join(sequences)
spark_context = ScManager.get()
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-1.1, -1, 0, 1.501],
sax_word=tested_sax_word)
sax, _, nb_seq = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(5)
# Build a collision matrix (the real collision matrix is different, but we take this one for the test)
collision_matrix = SparseMatrix(np.array([[0, 0, 0, 0, 0, ],
[30, 0, 0, 0, 0, ],
[2, 40, 0, 0, 0, ],
[4, 8, 50, 0, 0, ],
[6, 10, 20, 60, 0, ]]))
self._print_matrix("test_global_brute_force_ex1", collision_matrix.data, nb_seq)
# mindist distances:
# [[ 0. 0. 3.002 5.002 5.202]
# [ 0. 0. 0. 2. 2.2 ]
# [ 3.002 0. 0. 0. 0.2 ]
# [ 5.002 2. 0. 0. 0. ]
# [ 5.202 2.2 0.2 0. 0. ]]
# Using neighborhood_method=OPT_USING_BRUTE_FORCE
#
# brute force: for collisions (0,1) (1,2) (2,3) (3,4) greater than min_value==25
#
# for radius 1.9 => global result is [[0, 1, 2], [0, 1, 2, 3, 4], [1, 2, 3, 4], [2, 3, 4]]
#
# for radius 2.5 => global result is [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
        #                  => reduced to [[0, 1, 2, 3, 4], [1, 2, 3, 4]]
#
# for radius 3.5 => global result is [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [1, 2, 3, 4]]
# => reduced to [[0, 1, 2, 3, 4], [1, 2, 3, 4]]
#
# for radius 6 => global result is [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
# => reduced to [[0, 1, 2, 3, 4]]
#
for radius, expected_res in [[2.5, [[0, 1, 2, 3, 4], [1, 2, 3, 4]]],
[1.9, [[0, 1, 2], [0, 1, 2, 3, 4], [1, 2, 3, 4], [2, 3, 4]]],
[3.5, [[0, 1, 2, 3, 4], [1, 2, 3, 4]]],
[6, [[0, 1, 2, 3, 4]]]]:
# Build the class for motif search where the min_value is 25
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=radius,
collision_matrix=collision_matrix)
# for info: here is the mindist:
# (see _print_mindist_mat doc: in order to activate print)
self._print_mindist_mat(search_info)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=0,
min_value=25,
is_algo_method_global=True,
activate_spark=activate_spark,
radius=radius,
neighborhood_method=OPT_USING_BRUTE_FORCE)
search_info.radius = radius
recognition_info.radius = radius
result = search_info.motif_neighborhood_global(recognition_info.min_value, recognition_info)
self.assertEqual(len(result), len(expected_res))
for group in result:
self.assertTrue(group in expected_res)
def test_global_coll_no_spark_ex1(self):
"""
Tests without spark: see apply_motif_neighborhood_global__with_collisions_ex1(activate_spark=False)
"""
self._apply_motif_global_coll_ex1(activate_spark=False)
def test_global_coll_spark_ex1(self):
"""
Tests with spark: see apply_motif_neighborhood_global__with_collisions_ex1(activate_spark=True)
"""
self._apply_motif_global_coll_ex1(activate_spark=True)
def _apply_motif_global_coll_ex1(self, activate_spark):
"""
Test
- with the global method to search the neighborhood motif,
- with/without spark according to activate_spark
- exploring similarities with collisions heuristic
        - with input: the words have only one different letter, and every sequence
          Si has collisions with Sj in that matrix.
Note: results ought to be equal to test_global_brute_no_spark_ex1
"""
# Build the SAX result where the words have only one different letter (words: 5 letters)
sequences = ["abcde", "abcdd", "abcdc", "abcdb", "abcda"]
tested_sax_word = ''.join(sequences)
spark_context = ScManager.get()
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-1.1, -1, 0, 1.501],
sax_word=tested_sax_word)
sax, _, nb_seq = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(5)
# Build a collision matrix (the real collision matrix is different, but we take this one for the test)
collision_matrix = SparseMatrix(np.array([[0, 0, 0, 0, 0, ],
[30, 0, 0, 0, 0, ],
[2, 40, 0, 0, 0, ],
[4, 8, 50, 0, 0, ],
[6, 10, 20, 60, 0, ]]))
self._print_matrix("test_global_coll_no_spark_ex1",
collision_matrix.data,
nb_seq)
# mindist distances:
# [[ 0. 0. 3.002 5.002 5.202]
# [ 0. 0. 0. 2. 2.2 ]
# [ 3.002 0. 0. 0. 0.2 ]
# [ 5.002 2. 0. 0. 0. ]
# [ 5.202 2.2 0.2 0. 0. ]]
# Using neighborhood_method=OPT_USING_COLLISIONS
#
# for collisions (0,1) (1,2) (2,3) (3,4) greater than min_value==25
# and with the collisions heuristic: only sequences having collisions with Si or Sj are examined
#
# for radius 1.9 => global result is [[0, 1, 2], [0, 1, 2, 3, 4], [1, 2, 3, 4], [2, 3, 4]]
#
# for radius 2.5 => global result is [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
        #                  => reduced to [[0, 1, 2, 3, 4], [1, 2, 3, 4]]
#
# for radius 3.5 => global result is [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [1, 2, 3, 4]]
# => reduced to [[0, 1, 2, 3, 4], [1, 2, 3, 4]]
#
# for radius 6 => global result is [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
# => reduced to [[0, 1, 2, 3, 4]]
#
for radius, expected_res in [[2.5, [[0, 1, 2, 3, 4], [1, 2, 3, 4]]],
[1.9, [[0, 1, 2], [0, 1, 2, 3, 4], [1, 2, 3, 4], [2, 3, 4]]],
[3.5, [[0, 1, 2, 3, 4], [1, 2, 3, 4]]],
[6, [[0, 1, 2, 3, 4]]]]:
# Build the class for motif search where the min_value is 25
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=radius,
collision_matrix=collision_matrix)
# for info: here is the mindist:
# (see _print_mindist_mat doc: in order to activate print)
self._print_mindist_mat(search_info)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=0,
min_value=25,
is_algo_method_global=True,
activate_spark=activate_spark,
radius=radius,
neighborhood_method=OPT_USING_COLLISIONS)
print("radius {}:expected: {}".format(radius, expected_res))
result = search_info.motif_neighborhood_global(recognition_info.min_value, recognition_info)
print("radius {}:->global with collisions: {}".format(radius, result))
self.assertEqual(len(result), len(expected_res))
for group in result:
self.assertTrue(group in expected_res)
def test_iter_same_words_spark(self):
"""
Test: see _apply_motif_iter_same_words(activate_spark=True)
"""
self._apply_motif_iter_same_words(activate_spark=True)
def test_iter_same_words_no_spark(self):
"""
Test: see _apply_motif_iter_same_words(activate_spark=False)
"""
self._apply_motif_iter_same_words(activate_spark=False)
def _apply_motif_iter_same_words(self, activate_spark):
"""
Test
- with the iterative method to search the neighborhood motif,
- with/without spark jobs according to activate_spark
- and where the words are all the same
"""
spark_context = ScManager.get()
# Build the SAX result with large breakpoints
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-300, -100, 100, 300],
sax_word='abcdeabcdeabcdeabcde')
sax, _, _ = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(alphabet_size=5)
# Build the collision matrix result
collision_matrix = SparseMatrix(np.array([[0, 0, 0, 0, ],
[100, 0, 0, 0, ],
[99, 97, 0, 0, ],
[98, 96, 95, 0, ]]))
# two identical cases here: brute force / with collisions
for method_opt in [OPT_USING_BRUTE_FORCE, OPT_USING_COLLISIONS]:
# mindist distances:
#
# [[ 0. 0. 0. 0.]
# [ 0. 0. 0. 0.]
# [ 0. 0. 0. 0.]
# [ 0. 0. 0. 0.]]
# Build the class for motif search
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=0.01,
collision_matrix=collision_matrix)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=4,
min_value=1,
is_algo_method_global=False,
activate_spark=activate_spark,
radius=0.01,
neighborhood_method=method_opt)
# neighborhood_method=OPT_USING_BRUTE_FORCE (compare with all the words)
result = search_info.motif_neighborhood_iterative(30, recognition_info)
            # The words corresponding to the six largest-value cells have a MINDIST < radius,
            # but the iterative method builds the groups of similar sequences iteration by iteration
            # (see recognition_info.iterations)
self.assertEqual(len(result), 1)
            # These results are the same: [[0, 1, 2, 3]]
self.assertListEqual(result[0], [0, 1, 2, 3])
def test_iter_zero_coll_spark(self):
"""
Test: see _apply_motif_iter_zero_coll(activate_spark=True)
"""
self._apply_motif_iter_zero_coll(activate_spark=True)
def test_iter_zero_coll_no_spark(self):
"""
Test: see _apply_motif_iter_zero_coll(activate_spark=False)
"""
self._apply_motif_iter_zero_coll(activate_spark=False)
def _apply_motif_iter_zero_coll(self, activate_spark):
"""
Test
- with the iterative method to search the neighborhood motif,
- with/without spark jobs
- and where the words are all different => no collisions
"""
spark_context = ScManager.get()
# Build the SAX result with different words, and small breakpoints
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-0.3, -0.1, 0.1, 0.3],
sax_word='abcdebcdeacdeabdeabceabcd')
sax, _, nb_seq = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(nb_seq)
# Different words => only zero cells in the collision matrix
collision_matrix = SparseMatrix(np.zeros((nb_seq, nb_seq)))
# Build the class for motif search
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=1000,
collision_matrix=collision_matrix)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=100,
min_value=1,
is_algo_method_global=False,
activate_spark=activate_spark,
radius=1000,
neighborhood_method=OPT_USING_BRUTE_FORCE)
# neighborhood_method=OPT_USING_BRUTE_FORCE
result = search_info.motif_neighborhood_iterative(30, recognition_info)
        # There are no similar sequences
self.assertEqual(len(result), 0)
# neighborhood_method=OPT_USING_COLLISIONS
recognition_info.neighborhood_method = OPT_USING_COLLISIONS
result = search_info.motif_neighborhood_iterative(30, recognition_info)
        # There are no similar sequences
self.assertEqual(len(result), 0)
def test_iter_brute_ex1_spark(self):
"""
Test: see _apply_iter_brute_ex1(activate_spark=True)
"""
self._apply_iter_brute_ex1(activate_spark=True)
def test_iter_brute_ex1_no_spark(self):
"""
Test: see _apply_iter_brute_ex1(activate_spark=False)
"""
self._apply_iter_brute_ex1(activate_spark=False)
def _apply_iter_brute_ex1(self, activate_spark):
"""
Tests motif_neighborhood_iterative()
- the iterative method
- using the brute force method
- to search the neighborhood motif
- with/without spark jobs according to activate_spark
Note: test where the words have only one different letter.
"""
# Build the SAX result where the words have only one different letter (words: 5 letters)
sequences = ["abcde", "abcdd", "abcdc", "abcdb", "abcda"]
tested_sax_word = ''.join(sequences)
spark_context = ScManager.get()
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-1.1, -1, 0, 1.501],
sax_word=tested_sax_word)
sax, _, nb_seq = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(5)
# Build a collision matrix
collision_matrix = SparseMatrix(np.array([[0, 0, 0, 0, 0, ],
[30, 0, 0, 0, 0, ],
[2, 40, 0, 0, 0, ],
[4, 8, 50, 0, 0, ],
[6, 10, 20, 50, 0, ]]))
self._print_matrix("test_iterative__brute_no_spark_ex1",
collision_matrix.data,
nb_seq)
# mindist distances:
# [[ 0. 0. 3.002 5.002 5.202]
# [ 0. 0. 0. 2. 2.2 ]
# [ 3.002 0. 0. 0. 0.2 ]
# [ 5.002 2. 0. 0. 0. ]
# [ 5.202 2.2 0.2 0. 0. ]]
# Using neighborhood_method=OPT_USING_BRUTE_FORCE
#
# iterative: examining collisions (i,j) per iteration:
# (3,4)+(2,3) then (1,2) then (0,1)
#
# (collisions greater than min_value==25)
#
# Test with fixed radius 1.9:
# - iter=1 => result is [[1,2,3,4],[2, 3, 4]] considering (S2,S3) and (S3,S4) neighborhoods
# - iter=2 => result extended with [0,1,2,3,4] considering (S1,S2)
# - iter=3 => result extended with [0,1,2] considering (S0,S1)
        # - iter=100 => result is the same as for iter=3: no more collisions available
#
for radius, nb_iter, expected_res in [[1.9, 1, [[1, 2, 3, 4], [2, 3, 4]]],
[1.9, 2, [[1, 2, 3, 4], [2, 3, 4], [0, 1, 2, 3, 4]]],
[1.9, 3, [[1, 2, 3, 4], [2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2]]],
[1.9, 100, [[1, 2, 3, 4], [2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2]]]]:
# Build the class for motif search where the min_value is 25
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=radius,
collision_matrix=collision_matrix)
# for info: here is the mindist:
# (see _print_mindist_mat doc: in order to activate print)
self._print_mindist_mat(search_info)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=nb_iter,
min_value=25,
is_algo_method_global=False,
activate_spark=activate_spark,
radius=radius,
neighborhood_method=OPT_USING_BRUTE_FORCE)
result = search_info.motif_neighborhood_iterative(recognition_info.min_value, recognition_info)
self.assertEqual(len(result), len(expected_res))
for group in result:
self.assertTrue(group in expected_res)
def test_iter_coll_ex1_spark(self):
"""
Test: see _apply_iter_coll_no_spark_ex1(activate_spark=True)
"""
self._apply_iter_coll_no_spark_ex1(activate_spark=True)
def test_iter_coll_ex1_no_spark(self):
"""
Test: see _apply_iter_coll_no_spark_ex1(activate_spark=False)
"""
self._apply_iter_coll_no_spark_ex1(activate_spark=False)
def _apply_iter_coll_no_spark_ex1(self, activate_spark):
"""
Tests motif_neighborhood_iterative()
- the iterative method
- using the heuristic based upon collisions
- to search the neighborhood motif
Note: test where the words have only one different letter.
"""
# Build the SAX result where the words have only one different letter (words: 5 letters)
sequences = ["abcde", "abcdd", "abcdc", "abcdb", "abcda"]
tested_sax_word = ''.join(sequences)
spark_context = ScManager.get()
sax_result = SaxResult(paa=spark_context.parallelize([]), breakpoints=[-1.1, -1, 0, 1.501],
sax_word=tested_sax_word)
sax, _, nb_seq = sax_result.start_sax(5, spark_ctx=spark_context)
# sax is an rdd -> to np.array
sax = np.transpose(sax.collect())
breakpoint = sax_result.build_mindist_lookup_table(5)
# Build a collision matrix
# Note: this matrix is different from the one from
# test test_iterative__brute_no_spark_ex1:
# => see zeros are added: coll(3,2) == coll(4,2) == 0
collision_matrix = SparseMatrix(np.array([[0, 0, 0, 0, 0, ],
[40, 0, 0, 0, 0, ],
[2, 40, 0, 0, 0, ],
[4, 8, 0, 0, 0, ],
[6, 10, 0, 50, 0, ]]))
self._print_matrix("test_iterative__brute_no_spark_ex1",
collision_matrix.data,
nb_seq)
# mindist distances:
# [[ 0. 0. 3.002 5.002 5.202]
# [ 0. 0. 0. 2. 2.2 ]
# [ 3.002 0. 0. 0. 0.2 ]
# [ 5.002 2. 0. 0. 0. ]
# [ 5.202 2.2 0.2 0. 0. ]]
        # Using neighborhood_method=OPT_USING_COLLISIONS
#
# iterative: examining collisions (i,j) per iteration:
# (3,4) then (1,2) +(0,1)
#
# (collisions greater than min_value==25)
#
# Test with fixed radius 1.9:
# - iter=1 => result is [[3, 4]] considering (S3,S4) neighborhood
# - iter=2 => result extended with [0,1,2] considering (S0,S1), unchanged for (S1,S2)
        # - iter=3 => result is the same as for iter=2: no more collisions available
        # - iter=100 => result is the same as for iter=2: no more collisions available
#
for radius, nb_iter, expected_res in [[1.9, 1, [[3, 4]]],
[1.9, 2, [[3, 4], [0, 1, 2]]],
[1.9, 3, [[3, 4], [0, 1, 2]]],
[1.9, 100, [[3, 4], [0, 1, 2]]]]:
# Build the class for motif search where the min_value is 25
search_info = NeighborhoodSearch(size_sequence=20,
mindist_lookup_table=breakpoint,
alphabet_size=5,
sax=np.transpose(sax),
radius=radius,
collision_matrix=collision_matrix)
# for info: here is the mindist:
# (see _print_mindist_mat doc: in order to activate print)
self._print_mindist_mat(search_info)
recognition_info = ConfigRecognition(is_stopped_by_eq9=True,
iterations=nb_iter,
min_value=25,
is_algo_method_global=False,
activate_spark=activate_spark,
radius=radius,
neighborhood_method=OPT_USING_COLLISIONS)
result = search_info.motif_neighborhood_iterative(recognition_info.min_value, recognition_info)
self.assertEqual(len(result), len(expected_res))
for group in result:
self.assertTrue(group in expected_res)
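
# --- Hedged sketch (editor's addition): the MINDIST used in the expected values above ---
# The "mindist distances" matrices quoted in the comments follow the standard SAX
# lower-bounding distance (Lin et al.).  Assuming ikats' _get_mindist implements that
# definition, it can be sketched as below; the real signature and lookup-table layout
# may differ.
def _sketch_mindist(n, word_a, word_b, cell_dist):
    """MINDIST(a, b) = sqrt(n / w) * sqrt(sum_i cell_dist(a_i, b_i) ** 2).

    n is the length of the original sequence, w = len(word_a) is the SAX word
    length, and cell_dist(r, c) is 0 when the two symbols are equal or adjacent,
    and the breakpoint gap beta[max(r, c) - 1] - beta[min(r, c)] otherwise.
    """
    import math
    w = len(word_a)
    total = sum(cell_dist(a, b) ** 2 for a, b in zip(word_a, word_b))
    return math.sqrt(n / w) * math.sqrt(total)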
if __name__ == '__main__':
unittest.main()
| 46.971622 | 114 | 0.506459 | 3,914 | 34,759 | 4.270567 | 0.07767 | 0.014358 | 0.013461 | 0.014598 | 0.85157 | 0.830571 | 0.817768 | 0.797786 | 0.757045 | 0.738857 | 0 | 0.054848 | 0.401508 | 34,759 | 739 | 115 | 47.035183 | 0.748642 | 0.307805 | 0 | 0.7 | 0 | 0 | 0.019065 | 0.007591 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.078788 | false | 0 | 0.021212 | 0 | 0.10303 | 0.051515 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3138a5954d5bcb2fd19bab6f48df69825df0d485 | 13,501 | py | Python | migrations/versions/5fe85b823f57_create_iris_bite_status_table.py | ARM-DOE/warno | 231f0eb87fa3011133f361ebac780fc21d0968c6 | [
"BSD-3-Clause"
] | 4 | 2017-08-09T15:27:19.000Z | 2021-03-11T07:16:09.000Z | migrations/versions/5fe85b823f57_create_iris_bite_status_table.py | ARM-DOE/warno | 231f0eb87fa3011133f361ebac780fc21d0968c6 | [
"BSD-3-Clause"
] | null | null | null | migrations/versions/5fe85b823f57_create_iris_bite_status_table.py | ARM-DOE/warno | 231f0eb87fa3011133f361ebac780fc21d0968c6 | [
"BSD-3-Clause"
] | 2 | 2017-08-09T15:27:28.000Z | 2019-05-22T16:09:06.000Z | """Create Iris BITE status table
Revision ID: 5fe85b823f57
Revises: 952e55f64a88
Create Date: 2017-04-05 21:02:21.574011
"""
# revision identifiers, used by Alembic.
revision = '5fe85b823f57'
down_revision = '952e55f64a88'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table('iris_bite',
sa.Column('packet_id', sa.Integer(), nullable=False),
sa.Column('time', sa.DateTime(), nullable=False),
sa.Column('site_id', sa.Integer(), nullable=False),
sa.Column('instrument_id', sa.Integer(), nullable=False),
sa.Column('ab842_digital_az_timeout', sa.Integer(), nullable=True),
sa.Column('ab842_digital_checksum_error', sa.Integer(), nullable=True),
sa.Column('ab842_digital_diagnostic_error', sa.Integer(), nullable=True),
sa.Column('ab842_digital_el_timeout', sa.Integer(), nullable=True),
sa.Column('ab842_digital_frequency_exceeded', sa.Integer(), nullable=True),
sa.Column('ab842_digital_light_cntrl_reserve', sa.Integer(), nullable=True),
sa.Column('ab842_digital_max__accel_flag', sa.Integer(), nullable=True),
sa.Column('ab842_digital_max__velocity_flag', sa.Integer(), nullable=True),
sa.Column('ab842_digital_min__accel_flag', sa.Integer(), nullable=True),
sa.Column('ab842_digital_min__velocity_flag', sa.Integer(), nullable=True),
sa.Column('ab842_digital_position_error', sa.Integer(), nullable=True),
sa.Column('ab842_digital_position_limits_exc', sa.Integer(), nullable=True),
sa.Column('ab842_digital_startup_error', sa.Integer(), nullable=True),
sa.Column('ab842_digital_temp_out_of_range', sa.Integer(), nullable=True),
sa.Column('ab842_digital_volt__out_of_range', sa.Integer(), nullable=True),
sa.Column('antenna_local_mode', sa.Integer(), nullable=True),
sa.Column('azimuth', sa.Float(), nullable=True),
sa.Column('azimuth_encoder_calibrated', sa.Integer(), nullable=True),
sa.Column('azimuth_rate_of_change', sa.Float(), nullable=True),
sa.Column('elevation', sa.Float(), nullable=True),
sa.Column('elevation_encoder_calibrated', sa.Integer(), nullable=True),
sa.Column('elevation_rate_of_change', sa.Float(), nullable=True),
sa.Column('interlock_open', sa.Integer(), nullable=True),
sa.Column('internal_adc_pos15vdc_status', sa.Float(), nullable=True),
sa.Column('internal_adc_pos24_vdc_status', sa.Float(), nullable=True),
sa.Column('internal_adc_pos5v_dc_status', sa.Float(), nullable=True),
sa.Column('internal_adc_temperature_1', sa.Float(), nullable=True),
sa.Column('internal_adc_temperature_2', sa.Float(), nullable=True),
sa.Column('internal_adc_temperature_3', sa.Float(), nullable=True),
sa.Column('internal_saux_dehy__duty_cycle', sa.Integer(), nullable=True),
sa.Column('internal_saux_dehy__wg__pressure', sa.Integer(), nullable=True),
sa.Column('internal_saux_low_el__interlock', sa.Integer(), nullable=True),
sa.Column('internal_saux_main_power_status', sa.Integer(), nullable=True),
sa.Column('internal_saux_noise_source_status', sa.Integer(), nullable=True),
sa.Column('internal_saux_pedestal_interlock', sa.Integer(), nullable=True),
sa.Column('internal_saux_radome_door_ilock', sa.Integer(), nullable=True),
sa.Column('internal_saux_servo_pdu_power', sa.Integer(), nullable=True),
sa.Column('internal_saux_servo_power_status', sa.Integer(), nullable=True),
sa.Column('internal_saux_stalo_status', sa.Integer(), nullable=True),
sa.Column('internal_saux_tx_pdu_power', sa.Integer(), nullable=True),
sa.Column('internal_saux_ups_status', sa.Integer(), nullable=True),
sa.Column('internal_saux_waveguide_sw_h', sa.Integer(), nullable=True),
sa.Column('internal_saux_waveguide_sw_v', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_alignment_error', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_az_timeout', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_bad_hall_state', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_bridge_foldback_err', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_bridge_foldback_war', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_bridge_hardware_err', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_bridge_temp_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_control_pwr_active', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_drive_enabled', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_drive_faulted_error', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_drive_param_error', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_drive_temp_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_el_timeout', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_encoder_loss_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_encoder_read_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_excess_enc__count', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_excess_speed_at_ena', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_excessive_position', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_excessive_velocity', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_feedback_failure', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_low_voltage_at_ena', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_motor_config_error', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_motor_config_warn', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_motor_current_high', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_motor_temp_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_motor_therm_model', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_network_loss_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_over_voltage_err', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_peak_current_high', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_power_regen_fault', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_power_regen_warning', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_pwm_not_active', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_shaft_power_limited', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_torque_rating_high', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_under_voltage_err', sa.Integer(), nullable=True),
sa.Column('ipc15hc_digital_user_fault', sa.Integer(), nullable=True),
sa.Column('iris_mode_2', sa.Integer(), nullable=True),
sa.Column('iris_mode_0', sa.Integer(), nullable=True),
sa.Column('iris_mode_1', sa.Integer(), nullable=True),
sa.Column('low_air_flow', sa.Integer(), nullable=True),
sa.Column('low_waveguide_pressure', sa.Integer(), nullable=True),
sa.Column('lsb_pulse_width', sa.Integer(), nullable=True),
sa.Column('magnetron_current_normal', sa.Integer(), nullable=True),
sa.Column('milliseconds_since_sweep_start', sa.Integer(), nullable=True),
sa.Column('msb_pulse_width', sa.Integer(), nullable=True),
sa.Column('radiate_on', sa.Integer(), nullable=True),
sa.Column('radxcm_analog_pos15v_ps', sa.Float(), nullable=True),
sa.Column('radxcm_analog_minus15v_ps', sa.Float(), nullable=True),
sa.Column('radxcm_analog_24v_ps', sa.Float(), nullable=True),
sa.Column('radxcm_analog_28v_ps', sa.Float(), nullable=True),
sa.Column('radxcm_analog_360v_ps', sa.Float(), nullable=True),
sa.Column('radxcm_analog_5v_ps', sa.Float(), nullable=True),
sa.Column('radxcm_analog_cooling_air_temp', sa.Float(), nullable=True),
sa.Column('radxcm_analog_duty_cycle', sa.Float(), nullable=True),
sa.Column('radxcm_analog_filament_dac', sa.Float(), nullable=True),
sa.Column('radxcm_analog_filament_voltage', sa.Float(), nullable=True),
sa.Column('radxcm_analog_forward_power', sa.Float(), nullable=True),
sa.Column('radxcm_analog_high_voltage', sa.Float(), nullable=True),
sa.Column('radxcm_analog_high_voltage_minus', sa.Float(), nullable=True),
sa.Column('radxcm_analog_high_voltage_plus', sa.Float(), nullable=True),
sa.Column('radxcm_analog_horizontal_vswr', sa.Float(), nullable=True),
sa.Column('radxcm_analog_hv_current', sa.Float(), nullable=True),
sa.Column('radxcm_analog_hv_dac', sa.Float(), nullable=True),
sa.Column('radxcm_analog_igbt_assy_air_temp', sa.Float(), nullable=True),
sa.Column('radxcm_analog_mag_ave_current', sa.Float(), nullable=True),
sa.Column('radxcm_analog_mag_peak_current', sa.Float(), nullable=True),
sa.Column('radxcm_analog_misfires', sa.Float(), nullable=True),
sa.Column('radxcm_analog_prf', sa.Float(), nullable=True),
sa.Column('radxcm_analog_pulse_width', sa.Float(), nullable=True),
sa.Column('radxcm_analog_reset_current', sa.Float(), nullable=True),
sa.Column('radxcm_analog_reset_voltage', sa.Float(), nullable=True),
sa.Column('radxcm_analog_reverse_power_h', sa.Float(), nullable=True),
sa.Column('radxcm_analog_reverse_power_v', sa.Float(), nullable=True),
sa.Column('radxcm_analog_timer', sa.Float(), nullable=True),
sa.Column('radxcm_analog_vertical_vswr', sa.Float(), nullable=True),
sa.Column('radxcm_digital_24v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_28v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_360v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_5v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_airflow_switch', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_cooldown_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_cooling_air', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_dc_dc_temp_switch', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_door_interlock', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_duty_cycle_fault', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_fault_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_filament_current', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_filament_v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_forward_power', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_hv_current', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_hv_current_fault', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_hvm', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_hvp', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_local_mode_control', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_m15', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_mag_current_avg', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_mag_current_fault', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_mag_temp', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_magnetron_current', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_modulator_temp_swit', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_p15', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_pfc1_status', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_pfc2_status', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_powerup_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_pulse_width_fault', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_radiate_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_radtec_xcm_timeout', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_reset_i', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_reset_v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_reverse_power_h', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_reverse_power_v', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_shutdown_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_spare_switch_input', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_standby_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_warmup_state', sa.Integer(), nullable=True),
sa.Column('radxcm_digital_waveguide_pressure', sa.Integer(), nullable=True),
sa.Column('rcp02_is_shutdown', sa.Integer(), nullable=True),
sa.Column('servo_power', sa.Integer(), nullable=True),
sa.Column('signal_generator_cw', sa.Integer(), nullable=True),
sa.Column('signal_generator_fault', sa.Integer(), nullable=True),
sa.Column('signal_generator_level', sa.Integer(), nullable=True),
sa.Column('signal_generator_on', sa.Integer(), nullable=True),
sa.Column('standby', sa.Integer(), nullable=True),
sa.Column('t_r_local_mode', sa.Integer(), nullable=True),
sa.Column('t_r_power_on', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['instrument_id'], ['instruments.instrument_id'], ),
sa.ForeignKeyConstraint(['site_id'], ['sites.site_id'], ),
sa.PrimaryKeyConstraint('packet_id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table('iris_bite')
### end Alembic commands ###
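
# --- Hedged usage note (editor's addition) ---
# In a standard Alembic setup this revision is applied or rolled back from the
# directory holding alembic.ini with the usual CLI commands:
#
#   alembic upgrade 5fe85b823f57     # create the iris_bite table
#   alembic downgrade 952e55f64a88   # drop it again (back to the previous revision)
#
# The exact invocation depends on how this repository configures its Alembic
# environment, so treat these commands as a sketch rather than project documentation.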
| 66.507389 | 82 | 0.740612 | 1,838 | 13,501 | 5.135473 | 0.136017 | 0.145778 | 0.249179 | 0.353851 | 0.855917 | 0.854222 | 0.837801 | 0.792987 | 0.585126 | 0.237631 | 0 | 0.01726 | 0.098808 | 13,501 | 202 | 83 | 66.836634 | 0.758527 | 0.022739 | 0 | 0 | 0 | 0 | 0.351493 | 0.305296 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010811 | false | 0 | 0.016216 | 0 | 0.027027 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
313a082aa603ab995a9513240a3b507e9251eeed | 134 | py | Python | singleton_injector/__init__.py | santiagoMeloMedina/singleton-injector | bff2748f0fb6b7ca03681461fb1334b491999d91 | [
"MIT"
] | null | null | null | singleton_injector/__init__.py | santiagoMeloMedina/singleton-injector | bff2748f0fb6b7ca03681461fb1334b491999d91 | [
"MIT"
] | null | null | null | singleton_injector/__init__.py | santiagoMeloMedina/singleton-injector | bff2748f0fb6b7ca03681461fb1334b491999d91 | [
"MIT"
] | null | null | null | from singleton_injector.singleton import Singleton
from singleton_injector.injector import Injector
injector = Injector(Singleton())
| 26.8 | 50 | 0.858209 | 15 | 134 | 7.533333 | 0.266667 | 0.424779 | 0.371681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089552 | 134 | 4 | 51 | 33.5 | 0.92623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
314754d137ebbda354d65cc0f277137032425f15 | 24,638 | py | Python | startup/users/01-ames.py | NSLS-II-OPLS/profile_collection | 8c1987fc99a477f21d4e879bbc12a8551788fa4f | [
"BSD-3-Clause"
] | null | null | null | startup/users/01-ames.py | NSLS-II-OPLS/profile_collection | 8c1987fc99a477f21d4e879bbc12a8551788fa4f | [
"BSD-3-Clause"
] | 4 | 2021-01-06T14:51:40.000Z | 2021-01-12T05:32:41.000Z | startup/users/01-ames.py | NSLS-II-OPLS/profile_collection | 8c1987fc99a477f21d4e879bbc12a8551788fa4f | [
"BSD-3-Clause"
] | null | null | null |
##Copied from 76-GID in /nsls2/xf12id1/user/2021_c2/..../honghu
# yield from one_gid( name=sam, xpos=start_pos, stth = stth, exp_time=30, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
# -82 -34 16 62
# -80 -33.5 14.5 60
# -83 -34 15 60
# -79 -32 14 58
# -83 -35 12 58
# -33 14
#sample_1 = "peg1k-NP5_100mM-k"
#sample_1 = "peg2k-NP10_peg5k-NP5_1-4_100mM-k"
sample_2 = "peg2k-Ag20_100mM-k"
sample_3 = "peg2k-Ag20_peg5k-Ag10_100mM-k"
#sample_4 = "peg1k-NP20_peg2k-NP5_2-1_100mM-k"
def ames_4():
# indent = 1
yield from he_on()
yield from bps.mv(abs2,6)
yield from bps.mv(shutter,1) # open shutter
yield from check_ih() #Align the spectrometer height
yield from check_tth() #Align the spectrometer rotation angle
#yield from ames_1(sample_1, -81, detector=pilatus100k)
yield from ames_1(sample_2, -33, detector=pilatus100k)
yield from ames_1(sample_3, +15.5, detector=pilatus100k)
#yield from ames_1(sample_4, +61, detector=pilatus100k)
yield from shclose()
yield from he_off()
def ames_1(sam, xpos_start,detector=lambda_det):
yield from bps.mv(geo.det_mode,1)
# yield from sample_height_set_coarse(detector=detector) #scan the detector arm height (sh) from -1 to 1 with 41 points
# yield from sample_height_set_fine(detector=detector) #scan the detector arm height from -0.2 to 0.2 with 21 points
yield from one_ref(name=sam, xpos=xpos_start, tiltx=0,detector=pilatus100k)
yield from mabt(0.08,0.08,0)
# This takes the GID
yield from bps.mv(geo.det_mode,2)
alphai = 0.11
yield from bps.mvr(x2,-0.5)
print("at 1")
# yield from gid_scan(md={'sample_name': sam + '_GID-'},
# exp_time = 1,
# detector = pilatus300k,
# alphai = alphai,
# attenuator=1)
# yield from bps.mvr(x2,1.0)
# yield from gid_scan(md={'sample_name': sam +'_GID+'},
# exp_time = 1,
# detector = pilatus300k,
# alphai = alphai,
# attenuator=1)
yield from gid_scan_stitch(md={'sample_name': sam + '_GID-'},
exp_time = 1,
detector = pilatus300k,
alphai = alphai,
attenuator=1)
yield from gid_scan_stitch(md={'sample_name': sam + '_GID-5s'},
exp_time = 5,
detector = pilatus300k,
alphai = alphai,
attenuator=1)
yield from bps.mvr(x2,1.0)
yield from gid_scan_stitch(md={'sample_name': sam +'_GID+'},
exp_time = 1,
detector = pilatus300k,
alphai = alphai,
attenuator=1)
yield from gid_scan_stitch(md={'sample_name': sam +'_GID+5s'},
exp_time = 5,
detector = pilatus300k,
alphai = alphai,
attenuator=1)
def tmp():
alphai=0.11
yield from gid_scan_stitch(md={'sample_name': "zero" + '_GID'},
exp_time = 5,
detector = pilatus300k,
alphai = alphai,
attenuator=1)
def cfn(name):
# This takes the reflectivity
yield from bps.mv(geo.stblx2,0.2)
yield from bps.mv(flow3,3.2) # need to change back to 3.1
yield from bps.mv(geo.det_mode,1)
# sets sample height at alpha=0.08
yield from sample_height_set()
print('Sleeping time before reflectivity')
yield from bps.sleep(10)
yield from bps.mv(flow3,2.7)
# takes the reflectivity
yield from fast_scan(name)
# sets sample height at alpha=0.08 so that it is ready for GID
yield from bps.mv(abs2,6)
yield from mabt(0.08,0.08,0)
print('Start the height scan before GID')
yield from sample_height_set()
# This takes the GID
yield from bps.mv(geo.det_mode,2)
alphai = 0.11
yield from gid_scan(md={'sample_name': name+'_GID'},
exp_time = 1,
detector = pilatus100k,
alphai = alphai,
attenuator=1)
yield from bps.mvr(geo.stblx2,2)
yield from sample_height_set()
yield from bps.mv(geo.det_mode,2)
alphai = 0.11
yield from gid_scan(md={'sample_name': name+'_fresh1_GID'},
exp_time = 1,
detector = pilatus100k,
alphai = alphai,
attenuator=1)
yield from bps.mvr(geo.stblx2,-4)
yield from sample_height_set()
yield from bps.mv(geo.det_mode,2)
alphai = 0.11
yield from gid_scan(md={'sample_name': name+'_fresh2_GID'},
exp_time = 1,
detector = pilatus100k,
alphai = alphai,
attenuator=1)
yield from bps.mv(flow3,2.7)
yield from bps.mv(geo.stblx2,0.2)
# gid_dets = [pilatus300k, quadem]
# @bpp.stage_decorator(gid_dets)
def gid_cfn_cal(md=None, exp_time=1, detector = 'pilatus300k', alphai = 0.1, attenuator=2):
# Bluesky command to record metadata
base_md = {'plan_name': 'gid',
'detector': detector,
'energy': energy.energy.position,
'alphai': alphai,
# ...
}
base_md.update(md or {})
bec.disable_plots()
yield from bps.open_run(md=base_md)
    # Create a Signal to record the attenuation
yield from bps.mv(abs2, attenuator)# to avoid pilatus saturation
attenuation = calculate_att_comb([np.sum(current_att_thickness[0:attenuator+1])], ['Mo'], energy.energy.position)
attenuation_factor_signal = Signal(name='attenuation', value = attenuation[0])
# Set and record the exposure time to 0.1 for the precount
exposure_time = Signal(name='exposure_time', value = exp_time)
yield from det_exposure_time_pilatus(exp_time, exp_time)
# Move to the good geometry position
    yield from mabt(alphai, 0, 0) # gid position with beam stop
yield from bps.sleep(5)
# yield from bps.mv(abs2, 0)
# yield from bps.mv(abs2, 3)# to avoid pilatus saturation
# yield from bps.mv(attenuation_factor_signal, 1)
# yield from bps.mvr(geo.stblx2, -1) # move stable X2
yield from bps.mv(shutter,1)
    yield from bps.sleep(0.5) # brief pause so the QuadEM I0 reading settles
yield from bps.trigger_and_read(gid_dets + [geo] + [attenuation_factor_signal] + [exposure_time], name='primary')
yield from bps.mv(shutter,0)
yield from bps.mv(abs2, 6)
    yield from mabt(alphai, 0, -1) # gid position without beam stop
yield from bps.sleep(5)
yield from bps.mv(shutter,1)
    yield from bps.sleep(0.5) # brief pause so the QuadEM I0 reading settles
yield from bps.trigger_and_read(gid_dets + [geo] + [attenuation_factor_signal] + [exposure_time], name='primary')
yield from bps.mv(shutter,0)
yield from bps.mv(abs2, 6)
    yield from mabt(alphai, 0, -2) # gid position without beam stop
yield from bps.sleep(5)
yield from bps.mv(shutter,1)
    yield from bps.sleep(0.5) # brief pause so the QuadEM I0 reading settles
yield from bps.trigger_and_read(gid_dets + [geo] + [attenuation_factor_signal] + [exposure_time], name='primary')
yield from bps.mv(shutter,0)
# Bluesky command to stop recording metadata
yield from close_run()
bec.enable_plots()
# yield from bps.mv(abs2, 5)
print('The gid is over')
def cfn_3():
# name_cfn = { 1: 'AuNR_5_19_stock',
# 2: 'AuNR_5_19_T6K',
# 3: 'AuNR_5_19_E6K',
# }
# name_cfn = { 1: 'AuNR_5_19_stock_10mMNaCl', # add 20.2uL 1M NaCl @9:35pm 06/30/21
# 2: 'AuNR_5_19_T6K_10mMNaCl',
# 3: 'AuNR_5_19_E6K_10mMNaCl',
# }
# name_cfn = { 1: 'AuNR_5_19_stock_100mMNaCl', # add 36.6uL 5M NaCl @11:35pm 06/30/21
# 2: 'AuNR_5_19_T6K_100mMNaCl',
# 3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_10_30_E6K', # add 2ml @7:32pm 07/01/21
# 2: 'AuNR_10_30_T6K',
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_10_30_E6K_10mMNaCl', # add 20.2uL 1M NaCl @8:53pm 07/01/21
# 2: 'AuNR_10_30_T6K_10mMNaCl',
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_10_30_E6K_100mMNaCl', # add 36.6uL 5M NaCl @10.43pm 07/01/21
# 2: 'AuNR_10_30_T6K_100mMNaCl',
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { #1: 'AuNR_10_30_E6K_100mMNaCl', # add 36.6uL 5M NaCl @10.43pm 07/01/21
# 2: 'AuNR_10_30_T6K_100mMNaCl_LowConc', #remove 1340 ul solution and then add 1340 100mMNaCl to make the NP conc as the E6K
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_5_19_E2K', # add 2ml, 12 nM @12:51 am 07/02/21
# 2: 'AuNR_5_19_T2K', # add 2ml, 12 nM @12:51 am 07/02/21
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_5_19_E2K_10mMNaCl', # add 20.2uL 1M NaCl @2am 07/02/21
# 2: 'AuNR_5_19_T2K_10mMNaCl', # add 20.2uL 1M NaCl @2am 07/02/21
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_5_19_E2K_100mMNaCl', # add 36.6uL 5M NaCl @3:05am 07/02/21
# 2: 'AuNR_5_19_T2K_100mMNaCl', # add 36.6uL 5M NaCl @3:05am 07/02/21
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
# name_cfn = { 1: 'AuNR_5_19_E2KS6k_10mMNaCl', # add 2ml, 10 nM and 20.2uL 1M NaCl @5:04am 07/02/21
# 2: 'AuNR_5_19_E6kS2k_10mMNaCl', # add 2ml, 10 nM and 20.2uL 1M NaCl @5:04am 07/02/21
# #3: 'AuNR_5_19_E6K_100mMNaCl',
# }
name_cfn = { 1: 'AuNR_5_19_E2KS6k_100mMNaCl', # add 36.6uL 5M NaCl @6:10am 07/02/21
2: 'AuNR_5_19_E6kS2k_100mMNaCl', # add 36.6uL 5M NaCl @6:10am 07/02/21
#3: 'AuNR_5_19_E6K_100mMNaCl',
}
yield from bps.mv(geo.det_mode,1)
# x2_pos1 = -47.6 # -11.3-38.1
# tilt1 = 0
# x2_pos2 = -10 # -11.3-0.2
# tilt2 = 0
# x2_pos2 = -9 # for AuNR_10_30_T6K_100mMNaCl_LowConc
# tilt2 = 0
# x2_pos3 = -11.3+38.1-0.5
# tilt3 = -0.4
# x2_pos1 = -50.8
# tilt1 = 0
# x2_pos2 = -12.5-0.2
# tilt2 = 0
# x2_pos1 = -46.4
# tilt1 = 0
# x2_pos2 = -8.5
# tilt2 = 0
x2_pos1 = -46.4
tilt1 = 0
x2_pos2 = -8
tilt2 = 0
yield from cfn_ref(name_cfn[1],x2_pos1,tilt1)
yield from cfn_gid(name_cfn[1])
yield from cfn_ref(name_cfn[2],x2_pos2,tilt2)
yield from cfn_gid(name_cfn[2])
#yield from cfn_ref(name_cfn[3],x2_pos3,tilt3)
#yield from cfn_gid(name_cfn[3])
def cfn_1():
'''
XR and GID run for one sample cell
'''
# name_cfn = { 2: 'AuNR_10_50_T6K', # @11:14am 07/01/21
# }
# name_cfn = { 2: 'AuNR_10_50_T6K_10mMNaCl', # add 20.2uL 1M NaCl @ 12pm 07/01/21
# }
# name_cfn = { 2: 'AuNR_10_50_T6K_100mMNaCl', # add 36.6uL 5M NaCl @ 12:45pm 07/01/21
# }
# name_cfn = { 2: 'AuNR_10_40_T6K', # @1:27 pm 07/01/21
# }
#name_cfn = { 2: 'AuNR_10_40_T6K_10mMNaCl', # add 20.2uL 1M NaCl @2:04 pm 07/01/21
#}
name_cfn = { 2: 'AuNR_10_40_T6K_100mMNaCl', # add 36.6uL 5M NaCl @4:31 pm 07/01/21
}
yield from bps.mv(geo.det_mode,1)
x2_pos2 = -9.0 #-11.3+2+0.3
tilt2 = -0.4
yield from cfn_ref(name_cfn[2],x2_pos2,tilt2)
yield from cfn_gid(name_cfn[2])
# def cfn_gid(name):
# # sets sample height at alpha=0.08 so that it is ready for GID
# print('Start the height scan before GID')
# gid_dets = [pilatus300k, quadem]
# @bpp.stage_decorator(gid_dets)
# if _dx2 == 0:
# yield from bps.mv(shutter,1)
# yield from ih_set() #Align the spectrometer height
# yield from tth_set() #Align the spectrometer rotation angle
# yield from sample_height_set_fine()
# yield from bps.mv(geo.det_mode,2)
# alphai = 0.1
# yield from gid_scan(md={'sample_name': name+'_GID_pos_' + str(_dx2)+'_exp_' + str(_exp_time)+'s'},
# exp_time = _exp_time,
# detector = 'pilatus300k',
# alphai = alphai,
# attenuator=0)
def cfn_ref(name,xpos,tiltx):
    '''Conduct reflectivity measurements'''
print("file name=",name)
yield from bps.mv(geo.stblx2,xpos) #move the Sample Table X2 to xpos
yield from bps.mv(tilt.x,tiltx) #move the Sample tilt
yield from bps.mv(shutter,1) # open shutter
yield from ih_set() #Align the spectrometer height
yield from tth_set() #Align the spectrometer rotation angle
yield from sample_height_set_coarse() #scan the detector arm height (sh) from -1 to 1 with 41 points
yield from sample_height_set_fine() #scan the detector arm height from -0.2 to 0.2 with 21 points
yield from bps.mv(shutter,1) # open shutter
# yield from astth_set() #Align the detector arm rotation angle# comment out as it might affect OFFSET
yield from fast_scan(name)
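
# --- Hedged usage note (editor's addition) ---
# These plans are bluesky generators, so they are normally queued through the
# RunEngine rather than called directly.  Assuming the usual RunEngine instance
# named RE in this profile, a reflectivity measurement on a hypothetical cell at
# stblx2 = -56 with no tilt would look like:
#
#   RE(cfn_ref('test_cell', -56, 0))
#
# The sample name and position above are placeholders, not values from a real run.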
def cfn_20210718_night():
yield from he_on()
for run_num in range(3):
yield from bps.mv(shutter,1) # open shutter
yield from cfn_20210718_night_one_scan(run_num)
yield from bps.mv(shutter,0) # close shutter
yield from bps.sleep(3600 * 2 ) #
def cfn_20210718_night_one_scan(run_num):
#yield from he_on()
yield from one_ref("S1, 1ul_DSPEP_P14_run=%s"%(run_num),-57+run_num, tiltx=0,detector=pilatus100k)
yield from one_ref("S2, 2ul_DSPEP_Px_run=%s"%(run_num), -10+run_num, tiltx=0,detector=pilatus100k)
yield from one_ref("S3, 4ul_DSPEP_P38_run=%s"%(run_num), 36+run_num, tiltx=0,detector=pilatus100k)
#yield from he_off()
def cfn_20210719_pm_one_scan( ):
yield from he_on()
sam = 'S1, DNA-DSPEP_1-5_P34'
yield from one_ref(sam,-56 , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, -56 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-56 + 1, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13 )
yield from one_gid( name=sam, xpos=-56 + 1 , stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13 )
sam = 'S2, DNA-DSPEP_1-10_Px'
yield from one_ref(sam, -9, tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, -9 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13 )
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13 )
sam = 'S3, DNA-DSPEP_1-1_P36'
yield from one_ref( sam, 37 , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, 37 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13 )
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13 )
def cfn_20210719_pm_np_one_scan( ):
yield from he_on()
sam = 'S1, DNA-DSPEP_1-5_NP_P32_run1'
yield from one_ref(sam,-56 , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, -56 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-56+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-56+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-56+1, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=-56+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S2, DNA-DSPEP_1-10_NP_Px_run1'
yield from one_ref(sam, -9, tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, -9 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-9+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-9+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S3, DNA-DSPEP_1-1_NP_P35_run1'
yield from one_ref( sam, 37 , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, 37 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=37+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=37+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
#yield from he_off()
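
# --- Hedged sketch (editor's addition) ---
# The *_one_scan plans above repeat the same reflectivity + GID sequence for every
# sample cell, differing only in the sample name and the stblx2 position.  Using only
# plans already defined in this profile, the series could be expressed as a loop such
# as the sketch below; the names and positions passed to it are placeholders.
def sample_series_sketch(samples):
    """samples: iterable of (name, xpos) pairs, e.g. [('S1_test', -56), ('S2_test', -9)]."""
    for name, xpos in samples:
        # reflectivity at the nominal position, then shift for the GID footprint
        yield from one_ref(name, xpos, tiltx=0, detector=pilatus100k)
        yield from bps.mv(geo.stblx2, xpos + 1)
        yield from sample_height_set_fine(detector=pilatus100k)
        # low-angle GID pair at two slightly different footprints (det_mode=2)
        yield from one_gid(name=name, xpos=xpos + 0.5, stth=0, exp_time=1,
                           attenuator=0, beta1=0, beta_off=0.4, det_mode=2)
        yield from one_gid(name=name, xpos=xpos + 0.75, stth=0, exp_time=5,
                           attenuator=0, beta1=0, beta_off=0.4, det_mode=2)
        # wide-angle GID pair at beta1 = 0 and beta1 = 2 (det_mode=3)
        yield from one_gid(name=name, xpos=xpos + 1, stth=17.5, exp_time=60,
                           attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
        yield from one_gid(name=name, xpos=xpos + 1, stth=17.5, exp_time=60,
                           attenuator=0, beta1=2, beta_off=0.13, det_mode=3)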
def cfn_20210719_night_npsalt_one_scan( ):
yield from he_on()
sam = 'S1, DNA-DSPEP_1-5_NPsalt_P34_run1'
yield from one_ref(sam,-56 , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, -56 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-56+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-56+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-56+1, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=-56+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S2, DNA-DSPEP_1-10_NPsalt_Px_run1'
yield from one_ref(sam, -9, tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, -9 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-9+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-9+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S3, DNA-DSPEP_1-1_NPsalt_P42_run1'
yield from one_ref( sam, 37 , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, 37 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=37+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=37+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
yield from bps.mv(abs2, 6)
def cfn_20210719_night_npsalt_gid_scan( ):
yield from he_on()
sam = 'S1, DNA-DSPEP_1-5_NPsalt_P34_run2'
yield from bps.mv(geo.stblx2, -56 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-56+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-56+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-56+1, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=-56+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S2, DNA-DSPEP_1-10_NPsalt_Px_run2'
yield from bps.mv(geo.stblx2, -9 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=-9+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-9+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=-9+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S3, DNA-DSPEP_1-1_NPsalt_P42_run2'
yield from bps.mv(geo.stblx2, 37 + 1 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=37+0.5, stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=37+0.75, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=37+1, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
yield from bps.mv(abs2, 6)
def cfn_20210720_night():
yield from he_on()
for run_num in range(3):
#yield from shopen()
yield from bps.mv(shutter,1) # open shutter
yield from cfn_20210720_night_npsalt_one_scan( run_num )
yield from bps.mv(shutter,0) # close shutter
#yield from shclose()
#yield from bps.sleep(3600 * .5 ) #
yield from he_off()
def cfn_20210720_night_npsalt_one_scan( run_num ):
'''
Need to add tth and ih check before GID
'''
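# A possible version of that check (sketch only, kept commented out): the axis
# names geo.stth and geo.ih are assumptions here; only geo.stblx2 appears elsewhere
# in these plans.
# current_stth = yield from bps.rd(geo.stth)   # hypothetical two-theta readback
# current_ih = yield from bps.rd(geo.ih)       # hypothetical incident-height readback
# if abs(current_stth) > 0.01 or abs(current_ih) > 0.01:
#     yield from bps.mv(geo.stth, 0, geo.ih, 0)  # park both axes before the GID scans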
yield from he_on()
sam = 'S1, DNA-DSPEP_1-5_NPsalt_P34_run%i'%(3+run_num)
start_pos = -57 + run_num
yield from one_ref(sam, start_pos , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, start_pos + .5 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=start_pos + .25 , stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=start_pos + .25, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=start_pos + .75, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=start_pos + .75, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S2, DNA-DSPEP_1-10_NPsalt_Px_run%i'%(3+run_num)
start_pos = -10 + run_num
yield from one_ref(sam, start_pos , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, start_pos + .5 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=start_pos + .25 , stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=start_pos + .25, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=start_pos + .75, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=start_pos + .75, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
sam = 'S3, DNA-DSPEP_1-1_NPsalt_P42_run%i'%(3+run_num)
start_pos = 36 + run_num
# yield from one_ref(sam, start_pos , tiltx=0,detector=pilatus100k)
yield from bps.mv(geo.stblx2, start_pos + .5 )
yield from sample_height_set_fine(detector=pilatus100k)
yield from one_gid( name=sam, xpos=start_pos + .25 , stth = 0, exp_time=1, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=start_pos + .25, stth = 0, exp_time=5, attenuator=0, beta1=0, beta_off= 0.4, det_mode=2)
yield from one_gid( name=sam, xpos=start_pos + .75, stth = 17.5, exp_time=60, attenuator=0, beta1=0, beta_off=0.13, det_mode=3)
yield from one_gid( name=sam, xpos=start_pos + .75, stth = 17.5, exp_time=60,attenuator=0, beta1=2, beta_off=0.13, det_mode=3)
| 44.392793 | 142 | 0.633331 | 4,171 | 24,638 | 3.541597 | 0.075521 | 0.138302 | 0.060926 | 0.053073 | 0.864338 | 0.837937 | 0.807744 | 0.763878 | 0.737138 | 0.7127 | 0 | 0.116969 | 0.239386 | 24,638 | 554 | 143 | 44.472924 | 0.671291 | 0.265525 | 0 | 0.633333 | 0 | 0 | 0.054593 | 0.024887 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053333 | false | 0 | 0 | 0 | 0.053333 | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
314dad0189b0b7b176fe57837c8f733e1c645c69 | 143 | py | Python | tests/test_dir.py | timgates42/pew | 37d9ff79342336b8ef6437d9a551008be07afe9b | [
"MIT"
] | 1,031 | 2015-01-02T05:24:43.000Z | 2022-03-07T23:21:12.000Z | tests/test_dir.py | timgates42/pew | 37d9ff79342336b8ef6437d9a551008be07afe9b | [
"MIT"
] | 181 | 2015-01-03T14:01:56.000Z | 2022-02-14T21:37:01.000Z | tests/test_dir.py | timgates42/pew | 37d9ff79342336b8ef6437d9a551008be07afe9b | [
"MIT"
] | 99 | 2015-01-10T19:11:03.000Z | 2022-02-09T17:17:29.000Z | from pew._utils import invoke_pew as invoke
def test_dir(workon_home, env1):
assert str(workon_home / 'env1') == invoke('dir', 'env1').out | 35.75 | 65 | 0.72028 | 23 | 143 | 4.26087 | 0.652174 | 0.204082 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.13986 | 143 | 4 | 65 | 35.75 | 0.772358 | 0 | 0 | 0 | 0 | 0 | 0.076389 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
316561c9020e82c3e16d31311e9a9224bdbebfcf | 42 | py | Python | mmskeleton/models/skeleton_head/__init__.py | fserracant/mmskeleton | 44008bdef3dd6354a17c220fac8bcd8cd08ed201 | [
"Apache-2.0"
] | 1,347 | 2019-08-24T19:03:50.000Z | 2022-03-29T05:44:57.000Z | mmskeleton/models/skeleton_head/__init__.py | fserracant/mmskeleton | 44008bdef3dd6354a17c220fac8bcd8cd08ed201 | [
"Apache-2.0"
] | 246 | 2019-08-24T15:36:11.000Z | 2022-03-23T06:57:02.000Z | mmskeleton/models/skeleton_head/__init__.py | fserracant/mmskeleton | 44008bdef3dd6354a17c220fac8bcd8cd08ed201 | [
"Apache-2.0"
] | 335 | 2019-08-25T14:54:19.000Z | 2022-03-31T23:07:18.000Z | from .simplehead import SimpleSkeletonHead | 42 | 42 | 0.904762 | 4 | 42 | 9.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 42 | 1 | 42 | 42 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3172506cf43b09a56fafbf2b7f61eb9a53d4fc1c | 37 | py | Python | dpy_cooldowns/__init__.py | TheGabDooSan/dpy-psql-cooldowns | 413d1dc536c70c256722d8649e4ced94debb8b30 | [
"MIT"
] | 1 | 2021-04-05T16:29:32.000Z | 2021-04-05T16:29:32.000Z | dpy_cooldowns/__init__.py | gabriel-dahan/dpy-cooldowns | 413d1dc536c70c256722d8649e4ced94debb8b30 | [
"MIT"
] | null | null | null | dpy_cooldowns/__init__.py | gabriel-dahan/dpy-cooldowns | 413d1dc536c70c256722d8649e4ced94debb8b30 | [
"MIT"
] | null | null | null | from .errors import CommandOnCooldown | 37 | 37 | 0.891892 | 4 | 37 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31a32686828b6fa59b0c6abc56fc2c88794508f1 | 35 | py | Python | spongebob/__init__.py | AntonTrainer/spongebob | 09d1f7a3697d5fc49d24bcae1ec2ca6548026123 | [
"MIT"
] | 1 | 2019-04-16T22:15:57.000Z | 2019-04-16T22:15:57.000Z | spongebob/__init__.py | AntonTrainer/spongebob | 09d1f7a3697d5fc49d24bcae1ec2ca6548026123 | [
"MIT"
] | null | null | null | spongebob/__init__.py | AntonTrainer/spongebob | 09d1f7a3697d5fc49d24bcae1ec2ca6548026123 | [
"MIT"
] | null | null | null | from .spongebob import spongebobify | 35 | 35 | 0.885714 | 4 | 35 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
730928216310ca233fa6dba6f9ec59786984ebb4 | 10,057 | py | Python | axelrod/tests/unit/test_match_generator.py | nandhinianandj/Axelrod | 379b907d64c51816a50abfd8480240276c893953 | [
"MIT"
] | 596 | 2015-03-30T17:34:14.000Z | 2022-03-21T19:32:38.000Z | axelrod/tests/unit/test_match_generator.py | nandhinianandj/Axelrod | 379b907d64c51816a50abfd8480240276c893953 | [
"MIT"
] | 1,018 | 2015-03-30T14:57:33.000Z | 2022-03-14T14:57:48.000Z | axelrod/tests/unit/test_match_generator.py | nandhinianandj/Axelrod | 379b907d64c51816a50abfd8480240276c893953 | [
"MIT"
] | 263 | 2015-03-31T10:26:28.000Z | 2022-03-29T09:26:02.000Z | import unittest
import axelrod as axl
from axelrod.match_generator import graph_is_connected
from hypothesis import example, given, settings
from hypothesis.strategies import integers
test_strategies = [
axl.Cooperator,
axl.TitForTat,
axl.Defector,
axl.Grudger,
axl.GoByMajority,
]
test_turns = 100
test_repetitions = 20
test_game = axl.Game()
class TestMatchGenerator(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.players = [s() for s in test_strategies]
def test_build_single_match_params(self):
rr = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=test_repetitions,
)
match_params = rr.build_single_match_params()
self.assertIsInstance(match_params, dict)
self.assertEqual(match_params["turns"], test_turns)
self.assertEqual(match_params["game"], test_game)
self.assertEqual(match_params["noise"], 0)
self.assertIsNone(match_params["prob_end"])
# Check that can build a match
players = [axl.Cooperator(), axl.Defector()]
match_params["players"] = players
match = axl.Match(**match_params)
self.assertIsInstance(match, axl.Match)
self.assertEqual(len(match), test_turns)
def test_build_single_match_params_with_noise(self):
rr = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=test_repetitions,
noise=0.5,
)
match_params = rr.build_single_match_params()
self.assertIsInstance(match_params, dict)
self.assertEqual(match_params["turns"], test_turns)
self.assertEqual(match_params["game"], test_game)
self.assertEqual(match_params["noise"], 0.5)
self.assertIsNone(match_params["prob_end"])
# Check that can build a match
players = [axl.Cooperator(), axl.Defector()]
match_params["players"] = players
match = axl.Match(**match_params)
self.assertIsInstance(match, axl.Match)
self.assertEqual(len(match), test_turns)
def test_build_single_match_params_with_prob_end(self):
rr = axl.MatchGenerator(
players=self.players,
game=test_game,
repetitions=test_repetitions,
prob_end=0.5,
)
match_params = rr.build_single_match_params()
self.assertIsInstance(match_params, dict)
self.assertIsNone(match_params["turns"])
self.assertEqual(match_params["game"], test_game)
self.assertEqual(match_params["noise"], 0)
self.assertEqual(match_params["prob_end"], 0.5)
# Check that can build a match
players = [axl.Cooperator(), axl.Defector()]
match_params["players"] = players
match = axl.Match(**match_params)
self.assertIsInstance(match, axl.Match)
with self.assertRaises(TypeError):
len(match)
def test_build_single_match_params_with_prob_end_and_noise(self):
rr = axl.MatchGenerator(
players=self.players,
game=test_game,
repetitions=test_repetitions,
noise=0.5,
prob_end=0.5,
)
match_params = rr.build_single_match_params()
self.assertIsInstance(match_params, dict)
self.assertIsNone(match_params["turns"])
self.assertEqual(match_params["game"], rr.game)
self.assertEqual(match_params["prob_end"], 0.5)
self.assertEqual(match_params["noise"], 0.5)
# Check that can build a match
players = [axl.Cooperator(), axl.Defector()]
match_params["players"] = players
match = axl.Match(**match_params)
self.assertIsInstance(match, axl.Match)
with self.assertRaises(TypeError):
len(match)
def test_build_single_match_params_with_prob_end_and_turns(self):
rr = axl.MatchGenerator(
players=self.players,
game=test_game,
repetitions=test_repetitions,
turns=5,
prob_end=0.5,
)
match_params = rr.build_single_match_params()
self.assertIsInstance(match_params, dict)
self.assertEqual(match_params["turns"], 5)
self.assertEqual(match_params["game"], test_game)
self.assertEqual(match_params["prob_end"], 0.5)
self.assertEqual(match_params["noise"], 0)
# Check that can build a match
players = [axl.Cooperator(), axl.Defector()]
match_params["players"] = players
match = axl.Match(**match_params)
self.assertIsInstance(match, axl.Match)
self.assertIsInstance(len(match), int)
self.assertGreater(len(match), 0)
self.assertLessEqual(len(match), 10)
def test_build_single_match_params_with_fixed_length_unknown(self):
rr = axl.MatchGenerator(
players=self.players,
game=test_game,
repetitions=test_repetitions,
turns=5,
match_attributes={"length": float("inf")},
)
match_params = rr.build_single_match_params()
self.assertIsInstance(match_params, dict)
self.assertEqual(match_params["turns"], 5)
self.assertEqual(match_params["game"], test_game)
self.assertEqual(match_params["prob_end"], None)
self.assertEqual(match_params["noise"], 0)
self.assertEqual(
match_params["match_attributes"], {"length": float("inf")}
)
# Check that can build a match
players = [axl.Cooperator(), axl.Defector()]
match_params["players"] = players
match = axl.Match(**match_params)
self.assertIsInstance(match, axl.Match)
self.assertEqual(len(match), 5)
self.assertEqual(match.match_attributes, {"length": float("inf")})
@given(repetitions=integers(min_value=1, max_value=test_repetitions))
@settings(max_examples=5)
@example(repetitions=test_repetitions)
def test_build_match_chunks(self, repetitions):
rr = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=repetitions,
)
chunks = list(rr.build_match_chunks())
match_definitions = [
tuple(list(index_pair) + [repetitions])
for (index_pair, match_params, repetitions, _) in chunks
]
expected_match_definitions = [
(i, j, repetitions) for i in range(5) for j in range(i, 5)
]
self.assertEqual(
sorted(match_definitions), sorted(expected_match_definitions)
)
@given(
repetitions=integers(min_value=1, max_value=test_repetitions),
seed=integers(min_value=1, max_value=4294967295),
)
@settings(max_examples=5)
def test_seeding_equality(self, repetitions, seed):
rr1 = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=repetitions,
seed=seed,
)
chunks1 = list(rr1.build_match_chunks())
rr2 = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=repetitions,
seed=seed,
)
chunks2 = list(rr2.build_match_chunks())
self.assertEqual(chunks1, chunks2)
def test_seeding_inequality(self, repetitions=10):
rr1 = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=repetitions,
seed=0,
)
chunks1 = list(rr1.build_match_chunks())
rr2 = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=repetitions,
seed=1,
)
chunks2 = list(rr2.build_match_chunks())
self.assertNotEqual(chunks1, chunks2)
@given(repetitions=integers(min_value=1, max_value=test_repetitions))
@settings(max_examples=5)
@example(repetitions=test_repetitions)
def test_spatial_build_match_chunks(self, repetitions):
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 1)]
rr = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
edges=cycle,
repetitions=repetitions,
)
chunks = list(rr.build_match_chunks())
match_definitions = [
tuple(list(index_pair) + [repetitions])
for (index_pair, match_params, repetitions, _) in chunks
]
expected_match_definitions = [(i, j, repetitions) for i, j in cycle]
self.assertEqual(
sorted(match_definitions), sorted(expected_match_definitions)
)
def test_len(self):
turns = 5
repetitions = 10
rr = axl.MatchGenerator(
players=self.players,
turns=test_turns,
game=test_game,
repetitions=test_repetitions,
)
self.assertEqual(len(rr), len(list(rr.build_match_chunks())))
def test_init_with_graph_edges_not_including_all_players(self):
edges = [(0, 1), (1, 2)]
with self.assertRaises(ValueError):
axl.MatchGenerator(
players=self.players,
repetitions=3,
game=test_game,
turns=5,
edges=edges,
noise=0,
)
class TestUtilityFunctions(unittest.TestCase):
def test_connected_graph(self):
edges = [(0, 0), (0, 1), (1, 1)]
players = ["Cooperator", "Defector"]
self.assertTrue(graph_is_connected(edges, players))
def test_unconnected_graph(self):
edges = [(0, 0), (0, 1), (1, 1)]
players = ["Cooperator", "Defector", "Alternator"]
self.assertFalse(graph_is_connected(edges, players))
| 35.164336 | 76 | 0.616585 | 1,101 | 10,057 | 5.407811 | 0.105359 | 0.116392 | 0.0739 | 0.091703 | 0.824824 | 0.78082 | 0.77175 | 0.754619 | 0.751764 | 0.726234 | 0 | 0.014866 | 0.277618 | 10,057 | 285 | 77 | 35.287719 | 0.80468 | 0.017202 | 0 | 0.646825 | 0 | 0 | 0.026628 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.059524 | false | 0 | 0.019841 | 0 | 0.087302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
735ef51aceda43ac4014f217c5a6f035a57f6087 | 34,129 | py | Python | routing_simulator.py | minyee/TAGO | 9fea77cc39aa035796ab3ca52e95ebb66ffe0e7f | [
"MIT"
] | 2 | 2020-06-01T00:10:06.000Z | 2020-09-25T03:29:28.000Z | routing_simulator.py | minyee/TAGO | 9fea77cc39aa035796ab3ca52e95ebb66ffe0e7f | [
"MIT"
] | null | null | null | routing_simulator.py | minyee/TAGO | 9fea77cc39aa035796ab3ca52e95ebb66ffe0e7f | [
"MIT"
] | null | null | null | import sys, os
sys.path.append('../')
sys.path.append('./traffic_generator')
from adaptive_routing import *
import routing_simulation_util as util
import UniformGroupDragonfly
import SkewedGroupDragonfly
import UniformGroupExpander
import SkewedGroupExpander
import DragonflyAdversarialTrafficGenerator
import DragonflyUniformTrafficGenerator
import DragonflyLoadSingleGlobalLinkTrafficGenerator
import DragonflyAdversarialSingleSwitchTrafficGenerator
import Stencil27PTrafficGenerator
import TraceBasedTrafficGenerator
NETBENCH_DIRECTORY = os.environ.get('NETBENCH_TAGO_DIRECTORY')
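# Root of the local NetBench checkout; each simulation below chdirs here and runs
# 'java -jar -ea NetBench.jar <properties file>' on the generated parameter file.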
traces_directory = os.getcwd() + "/traces"
def develop_custom_toy_example():
topology = {
0 : [1, 2, 3, ],
1 : [0, 2, 3, 4],
2 : [0, 1, 3, 5],
3 : [0, 1, 2, 6],
4 : [5, 6, 7, 1],
5 : [4, 6, 7, 2],
6 : [4, 5, 7, 3],
7 : [4, 5, 6, ],
}
switch_to_block_map = {
0 : 0,
1 : 0,
2 : 0,
3 : 0,
4 : 1,
5 : 1,
6 : 1,
7 : 1,
}
s2s_traffic_matrix = [
[0, 0, 0, 0, 0, 0, 0, 5., ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
[0, 0, 0, 0, 0, 0, 0, 0, ],
]
return topology, switch_to_block_map, s2s_traffic_matrix
def toy_example_main():
tolerance_fairness = 0
concentration = 1
load_level = 5.
network_link_capacity = 100 # in gbps
injection_link_capacity = 200 # in gbps
average_flow_size_in_bytes = 23199798
average_flow_size_in_gbits = float(8 * average_flow_size_in_bytes) / 1E9
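# per-server flow arrival rate (flows/sec) = target load fraction * injection rate (Gbit/s) / mean flow size (Gbit)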
per_server_flow_arrival_rate = load_level * injection_link_capacity / average_flow_size_in_gbits
'''
adaptive_router = AdaptiveRouting(tolerance_fairness, max_intrablock_distance=2)
## test it on dfly
dragonfly = dragonfly_module.Dragonfly(5,4,1)
dragonfly.DesignFullTopology()
dfly_adj_list = dragonfly.GetAdjacencyList()
dfly_switch_to_block = dragonfly.GetSwitchesToBlock()
'''
adaptive_router = AdaptiveRouting(tolerance_fairness, max_intrablock_distance=2)
topology, switch_to_block_map, s2s_traffic_matrix = develop_custom_toy_example()
routing_weights = adaptive_router.route(topology, switch_to_block_map, s2s_traffic_matrix)
## generate
base_directory = "/Users/minyee/src/jocn_reconf_expander/routing"
if not os.path.exists(base_directory + "/" + "netbench_simulations"):
os.mkdir(base_directory + "/" + "netbench_simulations")
if not os.path.exists(base_directory + "/netbench_simulations/toy_example"):
os.mkdir(base_directory + "/netbench_simulations/toy_example")
os.chdir(base_directory + "/netbench_simulations/toy_example")
topology_adj_list_filename = "logical_topology.topology"
util.write_topology_file("logical_topology.topology", topology, concentration=concentration)
switch_to_block_map_filename = "switch_to_block_filename.txt"
util.write_switch_to_block_map(switch_to_block_map_filename, switch_to_block_map)
num_switches = len(topology.keys())
traffic_probability_filename = "traffic_probability_filename"
server_to_server_traffic_matrix = util.rescale_square_matrix(s2s_traffic_matrix, num_switches * concentration)
traffic_probability_matrix = util.normalize_square_matrix(server_to_server_traffic_matrix, 1.)
util.write_traffic_probability_file(traffic_probability_filename, traffic_probability_matrix, num_switches)
routing_weights_filename = "routing_weights.txt"
util.write_routing_weights_file(routing_weights_filename, routing_weights)
## Tested Routing Schemes
routing_schemes = [util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.TRAFFIC_AWARE_SRC]
routing_schemes = [util.ROUTING.BLOCK_VALIANT]
#routing_schemes = [util.ROUTING.UGAL_L, util.ROUTING.UGAL_G]
#routing_schemes = [util.ROUTING.UGAL_G]
output_directory = base_directory + "/netbench_simulations/toy_example"
for routing_scheme in routing_schemes:
sim_param_filename = util.write_simulation_properties_file(output_directory,
output_directory + "/" + topology_adj_list_filename,
output_directory + "/" + switch_to_block_map_filename,
output_directory + "/" + traffic_probability_filename,
output_directory + "/" + routing_weights_filename,
concentration=concentration,
network_link_capacity=network_link_capacity,
injection_link_capacity=injection_link_capacity,
load_level=per_server_flow_arrival_rate,
routing_class=routing_scheme)
os.chdir(NETBENCH_DIRECTORY)
os.system('java -jar -ea NetBench.jar {}/{}'.format(output_directory, sim_param_filename))
os.chdir(output_directory)
return
### uniform dragonfly simulation
def uniform_dragonfly_simulation():
print("\n##########################################################################")
print("Beginning Uniform Dragonfly Simulation")
print("##########################################################################")
## preamble, network parameters
concentration = 1
number_of_injectors_per_switch = 10
#load_levels = [0.1, 0.3, 0.5, 0.7, 0.9]
load_levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
network_link_capacity = 40 # in gbps
injection_link_capacity = number_of_injectors_per_switch * network_link_capacity # in gbps
average_flow_size_in_bytes = 23199798
average_flow_size_in_gbits = float(8 * average_flow_size_in_bytes) / 1E9
## define the dragonfly parameters
number_links_between_each_group = 4
#number_of_groups = 5
number_of_groups = 8
number_of_switches_per_group = (number_of_groups - 1) * number_links_between_each_group
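# with this sizing each group terminates (number_of_groups - 1) * number_links_between_each_group
# global links, i.e. exactly one global link per switch in the group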
uniform_dfly = UniformGroupDragonfly.UniformGroupDragonfly(number_of_groups, number_of_switches_per_group, number_links_between_each_group)
uniform_dfly.design_full_topology()
uniform_dfly_name_str = uniform_dfly.get_name()
## Initialize the traffic generator
interblock_traffic_fraction = 0.9
#traffic_generator = DragonflyAdversarialTrafficGenerator.DragonflyAdversarialTrafficGenerator(uniform_dfly, intergroup_traffic_fraction=interblock_traffic_fraction)
traffic_generator = DragonflyAdversarialSingleSwitchTrafficGenerator.DragonflyAdversarialSingleSwitchTrafficGenerator(uniform_dfly, intergroup_traffic_fraction=interblock_traffic_fraction)
traffic_generator = Stencil27PTrafficGenerator.Stencil27PTrafficGenerator(uniform_dfly, (4,4,5))
#### Trace based traffic generator
randomize_placement = False
trace_files = ["AMG_1728", "nekbone_1024_shortened_original"]
trace_alias = ["AMG1728", "Nekbone1024"]
#trace_files = ["facebook_hadoop_6690.txt"]
#trace_alias = ["fbHadoop"]
trace_subdir = "/Users/minyee/src/arpa_e/traces/"
trace_files = [trace_subdir + x for x in trace_files]
traffic_generator = TraceBasedTrafficGenerator.TraceBasedTrafficGenerator(uniform_dfly, trace_files, trace_alias, randomize_job_mapping=randomize_placement)
#traffic_generator = DragonflyUniformTrafficGenerator.DragonflyUniformTrafficGenerator(uniform_dfly, intergroup_traffic_fraction=interblock_traffic_fraction)
#traffic_generator = DragonflyLoadSingleGlobalLinkTrafficGenerator.DragonflyLoadSingleGlobalLinkTrafficGenerator(uniform_dfly)
switch_traffic_matrix = traffic_generator.generate_traffic()
## generate the directories
base_directory = "/Users/minyee/src/jocn_reconf_expander/routing"
if not os.path.exists(base_directory + "/" + "netbench_simulations"):
os.mkdir(base_directory + "/" + "netbench_simulations")
if not os.path.exists(base_directory + "/netbench_simulations/{}".format(uniform_dfly_name_str)):
os.mkdir(base_directory + "/netbench_simulations/{}".format(uniform_dfly_name_str))
#os.chdir(base_directory + "/netbench_simulations/{}".format(uniform_dfly_name_str))
if not os.path.exists(base_directory + "/netbench_simulations/{}/{}".format(uniform_dfly_name_str, traffic_generator.to_string())):
os.mkdir(base_directory + "/netbench_simulations/{}/{}".format(uniform_dfly_name_str, traffic_generator.to_string()))
uniform_dfly_adj_list = uniform_dfly.get_adjacency_list()
## write the topology file
topology_adj_list_filename = base_directory + "/netbench_simulations/{}/".format(uniform_dfly_name_str) + "topology_description.topology"
util.write_topology_file(topology_adj_list_filename, uniform_dfly_adj_list)
## then write the switch to block ID file
switch_to_block_map_filename = base_directory + "/netbench_simulations/{}/".format(uniform_dfly_name_str) + "switch_to_block_file.txt"
util.write_switch_to_block_map(switch_to_block_map_filename, uniform_dfly.get_switch_id_to_block_id_map())
## Generate the traffic files
traffic_filename = base_directory + "/netbench_simulations/{}/{}/".format(uniform_dfly_name_str, traffic_generator.to_string()) + "traffic_filename.txt"
util.write_traffic_probability_file(traffic_filename, switch_traffic_matrix, uniform_dfly.get_total_num_switches())
## Traffic aware source routing (start cracking the routing weights)
tolerance_fairness = 0.00
adaptive_router = AdaptiveRouting(tolerance_fairness, max_intrablock_distance=1)
routing_weights = adaptive_router.route(uniform_dfly.get_adjacency_list(), uniform_dfly.get_switch_id_to_block_id_map(), switch_traffic_matrix)
routing_weights_filename = base_directory + "/netbench_simulations/{}/{}/".format(uniform_dfly_name_str, traffic_generator.to_string()) + "routing_weights.txt"
util.write_routing_weights_file(routing_weights_filename, routing_weights)
### Finally, start writing the simulation property file for each routing algorithm, and then run the netbench simulations
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT]
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC]
#routing_schemes = [util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.BLOCK_VALIANT]
#routing_schemes = [util.ROUTING.UGAL_L, util.ROUTING.UGAL_G]
#routing_schemes = [util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
output_directory = base_directory + "/netbench_simulations/{}/{}".format(uniform_dfly_name_str, traffic_generator.to_string())
for routing_scheme in routing_schemes:
num_total_servers = uniform_dfly.get_total_num_switches() * concentration
for load_level in load_levels:
per_server_flow_arrival_rate = load_level * injection_link_capacity / average_flow_size_in_gbits
sim_param_filename = util.write_simulation_properties_file(output_directory,
topology_adj_list_filename,
switch_to_block_map_filename,
traffic_filename,
routing_weights_filename,
concentration=concentration,
network_link_capacity=network_link_capacity,
injection_link_capacity=injection_link_capacity,
load_level=load_level,
flow_arrival_per_sec=per_server_flow_arrival_rate * num_total_servers,
routing_class=routing_scheme)
os.chdir(NETBENCH_DIRECTORY)
os.system('java -jar -ea NetBench.jar {}/{}'.format(output_directory, sim_param_filename))
os.chdir(output_directory)
print("##########################################################################")
print("Ending Uniform Dragonfly Simulation")
print("##########################################################################\n")
return
### uniform expander simulation
def uniform_expander_simulation():
print("\n##########################################################################")
print("Beginning Uniform Expander Simulation")
print("##########################################################################")
## preamble, network parameters
concentration = 1
number_of_injectors_per_switch = 10
#load_levels = [0.1, 0.3, 0.5, 0.7, 0.9]
load_levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
network_link_capacity = 40 # in gbps
injection_link_capacity = number_of_injectors_per_switch * network_link_capacity # in gbps
average_flow_size_in_bytes = 23199798
average_flow_size_in_gbits = float(8 * average_flow_size_in_bytes) / 1E9
## define the dragonfly parameters
number_links_between_each_group = 4
#number_of_groups = 5
number_of_groups = 8
number_of_switches_per_group = (number_of_groups - 1) * number_links_between_each_group
num_intragroup_links_per_switch = int(0.7 * (number_of_switches_per_group - 1))
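# intra-group connectivity is an expander: each switch links to roughly 70% of the
# other switches in its group rather than the full intra-group mesh of a dragonfly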
uniform_expander = UniformGroupExpander.UniformGroupExpander(number_of_groups, number_of_switches_per_group, number_links_between_each_group, num_intragroup_links_per_switch)
uniform_expander.design_full_topology()
uniform_expander_name_str = uniform_expander.get_name()
## Initialize the traffic generator
interblock_traffic_fraction = 0.9
#traffic_generator = DragonflyAdversarialTrafficGenerator.DragonflyAdversarialTrafficGenerator(uniform_expander, intergroup_traffic_fraction=interblock_traffic_fraction)
traffic_generator = DragonflyAdversarialSingleSwitchTrafficGenerator.DragonflyAdversarialSingleSwitchTrafficGenerator(uniform_expander, intergroup_traffic_fraction=interblock_traffic_fraction)
traffic_generator = Stencil27PTrafficGenerator.Stencil27PTrafficGenerator(uniform_expander, (4,4,5))
#### Trace based traffic generator
randomize_placement = False
trace_files = ["AMG_1728", "nekbone_1024_shortened_original"]
trace_alias = ["AMG1728", "Nekbone1024"]
#trace_files = ["facebook_hadoop_6690.txt"]
#trace_alias = ["fbHadoop"]
trace_subdir = "/Users/minyee/src/arpa_e/traces/"
trace_files = [trace_subdir + x for x in trace_files]
traffic_generator = TraceBasedTrafficGenerator.TraceBasedTrafficGenerator(uniform_expander, trace_files, trace_alias, randomize_job_mapping=randomize_placement)
#traffic_generator = DragonflyUniformTrafficGenerator.DragonflyUniformTrafficGenerator(uniform_expander, intergroup_traffic_fraction=interblock_traffic_fraction)
#traffic_generator = DragonflyLoadSingleGlobalLinkTrafficGenerator.DragonflyLoadSingleGlobalLinkTrafficGenerator(uniform_expander)
switch_traffic_matrix = traffic_generator.generate_traffic()
## generate the directories
base_directory = "/Users/minyee/src/jocn_reconf_expander/routing"
if not os.path.exists(base_directory + "/" + "netbench_simulations"):
os.mkdir(base_directory + "/" + "netbench_simulations")
if not os.path.exists(base_directory + "/netbench_simulations/{}".format(uniform_expander_name_str)):
os.mkdir(base_directory + "/netbench_simulations/{}".format(uniform_expander_name_str))
#os.chdir(base_directory + "/netbench_simulations/{}".format(uniform_expander_name_str))
if not os.path.exists(base_directory + "/netbench_simulations/{}/{}".format(uniform_expander_name_str, traffic_generator.to_string())):
os.mkdir(base_directory + "/netbench_simulations/{}/{}".format(uniform_expander_name_str, traffic_generator.to_string()))
uniform_expander_adj_list = uniform_expander.get_adjacency_list()
## write the topology file
topology_adj_list_filename = base_directory + "/netbench_simulations/{}/".format(uniform_expander_name_str) + "topology_description.topology"
util.write_topology_file(topology_adj_list_filename, uniform_expander_adj_list)
## then write the switch to block ID file
switch_to_block_map_filename = base_directory + "/netbench_simulations/{}/".format(uniform_expander_name_str) + "switch_to_block_file.txt"
util.write_switch_to_block_map(switch_to_block_map_filename, uniform_expander.get_switch_id_to_block_id_map())
## Generate the traffic files
traffic_filename = base_directory + "/netbench_simulations/{}/{}/".format(uniform_expander_name_str, traffic_generator.to_string()) + "traffic_filename.txt"
util.write_traffic_probability_file(traffic_filename, switch_traffic_matrix, uniform_expander.get_total_num_switches())
## Traffic aware source routing (start cracking the routing weights)
tolerance_fairness = 0.00
adaptive_router = AdaptiveRouting(tolerance_fairness, max_intrablock_distance=2)
routing_weights = adaptive_router.route(uniform_expander.get_adjacency_list(), uniform_expander.get_switch_id_to_block_id_map(), switch_traffic_matrix)
routing_weights_filename = base_directory + "/netbench_simulations/{}/{}/".format(uniform_expander_name_str, traffic_generator.to_string()) + "routing_weights.txt"
util.write_routing_weights_file(routing_weights_filename, routing_weights)
### Finally, start writing the simulation property file for each routing algorithm, and then run the netbench simulations
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT]
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC]
#routing_schemes = [util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.BLOCK_VALIANT]
#routing_schemes = [util.ROUTING.UGAL_L, util.ROUTING.UGAL_G]
#routing_schemes = [util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
output_directory = base_directory + "/netbench_simulations/{}/{}".format(uniform_expander_name_str, traffic_generator.to_string())
for routing_scheme in routing_schemes:
num_total_servers = uniform_expander.get_total_num_switches() * concentration
for load_level in load_levels:
per_server_flow_arrival_rate = load_level * injection_link_capacity / average_flow_size_in_gbits
sim_param_filename = util.write_simulation_properties_file(output_directory,
topology_adj_list_filename,
switch_to_block_map_filename,
traffic_filename,
routing_weights_filename,
concentration=concentration,
network_link_capacity=network_link_capacity,
injection_link_capacity=injection_link_capacity,
load_level=load_level,
flow_arrival_per_sec=per_server_flow_arrival_rate * num_total_servers,
routing_class=routing_scheme)
os.chdir(NETBENCH_DIRECTORY)
os.system('java -jar -ea NetBench.jar {}/{}'.format(output_directory, sim_param_filename))
os.chdir(output_directory)
print("##########################################################################")
print("Ending Uniform Expander Simulation")
print("##########################################################################\n")
return
### skewed dragonfly simulation
def skewed_dragonfly_simulation():
print("\n##########################################################################")
print("Beginning Skewed Dragonfly Simulation")
print("##########################################################################")
## preamble, network parameters
concentration = 1
number_of_injectors_per_switch = 10
load_levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
network_link_capacity = 40 # in gbps
injection_link_capacity = number_of_injectors_per_switch * network_link_capacity # in gbps
average_flow_size_in_bytes = 23199798
average_flow_size_in_gbits = float(8 * average_flow_size_in_bytes) / 1E9
## define the dragonfly parameters
number_links_between_each_group = 4
#number_of_groups = 5
number_of_groups = 8
number_of_switches_per_group = (number_of_groups - 1) * number_links_between_each_group
skewed_dfly = SkewedGroupDragonfly.SkewedGroupDragonfly(number_of_groups, number_of_switches_per_group, number_links_between_each_group)
skewed_dfly_name_str = skewed_dfly.get_name()
## Initialize the traffic generator
interblock_traffic_fraction = 0.9
#traffic_generator = DragonflyAdversarialTrafficGenerator.DragonflyAdversarialTrafficGenerator(skewed_dfly, intergroup_traffic_fraction=interblock_traffic_fraction)
traffic_generator = DragonflyAdversarialSingleSwitchTrafficGenerator.DragonflyAdversarialSingleSwitchTrafficGenerator(skewed_dfly, intergroup_traffic_fraction=interblock_traffic_fraction)
traffic_generator = Stencil27PTrafficGenerator.Stencil27PTrafficGenerator(skewed_dfly, (4,4,5))
#### Trace based traffic generator
randomize_placement = False
trace_files = ["AMG_1728", "nekbone_1024_shortened_original"]
trace_alias = ["AMG1728", "Nekbone1024"]
#trace_files = ["facebook_hadoop_6690.txt"]
#trace_alias = ["fbHadoop"]
trace_subdir = "/Users/minyee/src/arpa_e/traces/"
trace_files = [trace_subdir + x for x in trace_files]
traffic_generator = TraceBasedTrafficGenerator.TraceBasedTrafficGenerator(skewed_dfly, trace_files, trace_alias, randomize_job_mapping=randomize_placement)
#traffic_generator = DragonflyUniformTrafficGenerator.DragonflyUniformTrafficGenerator(skewed_dfly, intergroup_traffic_fraction=interblock_traffic_fraction)
#traffic_generator = DragonflyLoadSingleGlobalLinkTrafficGenerator.DragonflyLoadSingleGlobalLinkTrafficGenerator(skewed_dfly)
switch_traffic_matrix = traffic_generator.generate_traffic()
## generate the traffic by performing bandwidth steering
block_traffic_matrix = traffic_generator.compute_interblock_traffic_from_switch_traffic(switch_traffic_matrix, skewed_dfly.get_block_id_to_switch_ids())
skewed_dfly.design_full_topology(block_traffic_matrix) ## need to feed in the interblock traffic
## generate the directories
base_directory = "/Users/minyee/src/jocn_reconf_expander/routing"
if not os.path.exists(base_directory + "/" + "netbench_simulations"):
os.mkdir(base_directory + "/" + "netbench_simulations")
if not os.path.exists(base_directory + "/netbench_simulations/{}".format(skewed_dfly_name_str)):
os.mkdir(base_directory + "/netbench_simulations/{}".format(skewed_dfly_name_str))
#os.chdir(base_directory + "/netbench_simulations/{}".format(skewed_dfly_name_str))
if not os.path.exists(base_directory + "/netbench_simulations/{}/{}".format(skewed_dfly_name_str, traffic_generator.to_string())):
os.mkdir(base_directory + "/netbench_simulations/{}/{}".format(skewed_dfly_name_str, traffic_generator.to_string()))
skewed_dfly_adj_list = skewed_dfly.get_adjacency_list()
print("Printing the interblock connectivity:\n{}\n".format(skewed_dfly.get_interblock_topology()))
## write the topology file
topology_adj_list_filename = base_directory + "/netbench_simulations/{}/".format(skewed_dfly_name_str) + "topology_description.topology"
util.write_topology_file(topology_adj_list_filename, skewed_dfly_adj_list)
## then write the switch to block ID file
switch_to_block_map_filename = base_directory + "/netbench_simulations/{}/".format(skewed_dfly_name_str) + "switch_to_block_file.txt"
util.write_switch_to_block_map(switch_to_block_map_filename, skewed_dfly.get_switch_id_to_block_id_map())
## Generate the traffic files
traffic_filename = base_directory + "/netbench_simulations/{}/{}/".format(skewed_dfly_name_str, traffic_generator.to_string()) + "traffic_filename.txt"
util.write_traffic_probability_file(traffic_filename, switch_traffic_matrix, skewed_dfly.get_total_num_switches())
## Traffic aware source routing (start cracking the routing weights)
tolerance_fairness = 0.00
adaptive_router = AdaptiveRouting(tolerance_fairness, max_intrablock_distance=1)
routing_weights = adaptive_router.route(skewed_dfly.get_adjacency_list(), skewed_dfly.get_switch_id_to_block_id_map(), switch_traffic_matrix)
routing_weights_filename = base_directory + "/netbench_simulations/{}/{}/".format(skewed_dfly_name_str, traffic_generator.to_string()) + "routing_weights.txt"
util.write_routing_weights_file(routing_weights_filename, routing_weights)
### Finally, start writing the simulation property file for each routing algorithm, and then run the netbench simulations
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT]
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC]
#routing_schemes = [util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.BLOCK_VALIANT]
#routing_schemes = [util.ROUTING.UGAL_L, util.ROUTING.UGAL_G]
#routing_schemes = [util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
output_directory = base_directory + "/netbench_simulations/{}/{}".format(skewed_dfly_name_str, traffic_generator.to_string())
for routing_scheme in routing_schemes:
num_total_servers = skewed_dfly.get_total_num_switches() * concentration
for load_level in load_levels:
per_server_flow_arrival_rate = load_level * injection_link_capacity / average_flow_size_in_gbits
sim_param_filename = util.write_simulation_properties_file(output_directory,
topology_adj_list_filename,
switch_to_block_map_filename,
traffic_filename,
routing_weights_filename,
concentration=concentration,
network_link_capacity=network_link_capacity,
injection_link_capacity=injection_link_capacity,
load_level=load_level,
flow_arrival_per_sec=per_server_flow_arrival_rate * num_total_servers,
routing_class=routing_scheme)
os.chdir(NETBENCH_DIRECTORY)
os.system('java -jar -ea NetBench.jar {}/{}'.format(output_directory, sim_param_filename))
os.chdir(output_directory)
print("##########################################################################")
print("Ending Skewed Dragonfly Simulation")
print("##########################################################################\n")
return
### skewed expander simulation
def skewed_expander_simulation():
print("\n##########################################################################")
print("Beginning Skewed Expander Simulation")
print("##########################################################################")
## preamble, network parameters
concentration = 1
number_of_injectors_per_switch = 10
load_levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
#load_levels = [0.2]
network_link_capacity = 40 # in gbps
injection_link_capacity = number_of_injectors_per_switch * network_link_capacity # in gbps
average_flow_size_in_bytes = 23199798
average_flow_size_in_gbits = float(8 * average_flow_size_in_bytes) / 1E9
## define the dragonfly parameters
number_links_between_each_group = 4
#number_of_groups = 5
number_of_groups = 8
number_of_switches_per_group = (number_of_groups - 1) * number_links_between_each_group
num_intragroup_links_per_switch = int(0.7 * (number_of_switches_per_group - 1))
skewed_expander = SkewedGroupExpander.SkewedGroupExpander(number_of_groups, number_of_switches_per_group, number_links_between_each_group, num_intragroup_links_per_switch)
skewed_expander_name_str = skewed_expander.get_name()
## Initialize the traffic generator
interblock_traffic_fraction = 0.9
#traffic_generator = DragonflyAdversarialTrafficGenerator.DragonflyAdversarialTrafficGenerator(skewed_expander, intergroup_traffic_fraction=interblock_traffic_fraction)
#traffic_generator = DragonflyAdversarialSingleSwitchTrafficGenerator.DragonflyAdversarialSingleSwitchTrafficGenerator(skewed_expander, intergroup_traffic_fraction=interblock_traffic_fraction)
#traffic_generator = Stencil27PTrafficGenerator.Stencil27PTrafficGenerator(skewed_expander, (4,4,5))
#### Trace based traffic generator
randomize_placement = False
trace_files = ["AMG_1728", "nekbone_1024_shortened_original"]
trace_alias = ["AMG1728", "Nekbone1024"]
#trace_files = ["facebook_hadoop_6690.txt"]
#trace_alias = ["fbHadoop"]
trace_subdir = "/Users/minyee/src/arpa_e/traces/"
trace_files = [trace_subdir + x for x in trace_files]
traffic_generator = TraceBasedTrafficGenerator.TraceBasedTrafficGenerator(skewed_expander, trace_files, trace_alias, randomize_job_mapping=randomize_placement)
#traffic_generator = DragonflyUniformTrafficGenerator.DragonflyUniformTrafficGenerator(skewed_expander, intergroup_traffic_fraction=interblock_traffic_fraction)
#traffic_generator = DragonflyLoadSingleGlobalLinkTrafficGenerator.DragonflyLoadSingleGlobalLinkTrafficGenerator(skewed_expander)
switch_traffic_matrix = traffic_generator.generate_traffic()
## generate the traffic by performing bandwidth steering
block_traffic_matrix = traffic_generator.compute_interblock_traffic_from_switch_traffic(switch_traffic_matrix, skewed_expander.get_block_id_to_switch_ids())
skewed_expander.design_full_topology(block_traffic_matrix) ## need to feed in the interblock traffic
## generate the directories
base_directory = "/Users/minyee/src/jocn_reconf_expander/routing"
if not os.path.exists(base_directory + "/" + "netbench_simulations"):
os.mkdir(base_directory + "/" + "netbench_simulations")
if not os.path.exists(base_directory + "/netbench_simulations/{}".format(skewed_expander_name_str)):
os.mkdir(base_directory + "/netbench_simulations/{}".format(skewed_expander_name_str))
#os.chdir(base_directory + "/netbench_simulations/{}".format(skewed_expander_name_str))
if not os.path.exists(base_directory + "/netbench_simulations/{}/{}".format(skewed_expander_name_str, traffic_generator.to_string())):
os.mkdir(base_directory + "/netbench_simulations/{}/{}".format(skewed_expander_name_str, traffic_generator.to_string()))
skewed_expander_adj_list = skewed_expander.get_adjacency_list()
print("Printing the interblock connectivity:\n{}\n".format(skewed_expander.get_interblock_topology()))
## write the topology file
topology_adj_list_filename = base_directory + "/netbench_simulations/{}/".format(skewed_expander_name_str) + "topology_description.topology"
util.write_topology_file(topology_adj_list_filename, skewed_expander_adj_list)
## then write the switch to block ID file
switch_to_block_map_filename = base_directory + "/netbench_simulations/{}/".format(skewed_expander_name_str) + "switch_to_block_file.txt"
util.write_switch_to_block_map(switch_to_block_map_filename, skewed_expander.get_switch_id_to_block_id_map())
## Generate the traffic files
traffic_filename = base_directory + "/netbench_simulations/{}/{}/".format(skewed_expander_name_str, traffic_generator.to_string()) + "traffic_filename.txt"
util.write_traffic_probability_file(traffic_filename, switch_traffic_matrix, skewed_expander.get_total_num_switches())
## Traffic aware source routing (start cracking the routing weights)
tolerance_fairness = 0.00
adaptive_router = AdaptiveRouting(tolerance_fairness, max_intrablock_distance=3)
routing_weights = adaptive_router.route(skewed_expander.get_adjacency_list(), skewed_expander.get_switch_id_to_block_id_map(), switch_traffic_matrix)
routing_weights_filename = base_directory + "/netbench_simulations/{}/{}/".format(skewed_expander_name_str, traffic_generator.to_string()) + "routing_weights.txt"
util.write_routing_weights_file(routing_weights_filename, routing_weights)
### Finally, start writing the simulation property file for each routing algorithm, and then run the netbench simulations
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT]
routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.TRAFFIC_AWARE_SRC, util.ROUTING.BLOCK_VALIANT]
#routing_schemes = [util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.ECMP, util.ROUTING.SIMPLE_FORWARDING, util.ROUTING.BLOCK_VALIANT, util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
#routing_schemes = [util.ROUTING.BLOCK_VALIANT]
#routing_schemes = [util.ROUTING.UGAL_L, util.ROUTING.UGAL_G]
#routing_schemes = [util.ROUTING.UGAL_G, util.ROUTING.UGAL_L]
output_directory = base_directory + "/netbench_simulations/{}/{}".format(skewed_expander_name_str, traffic_generator.to_string())
for routing_scheme in routing_schemes:
num_total_servers = skewed_expander.get_total_num_switches() * concentration
for load_level in load_levels:
per_server_flow_arrival_rate = load_level * injection_link_capacity / average_flow_size_in_gbits
sim_param_filename = util.write_simulation_properties_file(output_directory,
topology_adj_list_filename,
switch_to_block_map_filename,
traffic_filename,
routing_weights_filename,
concentration=concentration,
network_link_capacity=network_link_capacity,
injection_link_capacity=injection_link_capacity,
load_level=load_level,
flow_arrival_per_sec=per_server_flow_arrival_rate * num_total_servers,
routing_class=routing_scheme)
os.chdir(NETBENCH_DIRECTORY)
os.system('java -jar -ea NetBench.jar {}/{}'.format(output_directory, sim_param_filename))
os.chdir(output_directory)
print("##########################################################################")
print("Ending Skewed Expander Simulation")
print("##########################################################################\n")
return
if __name__ == "__main__":
print("\n#################################################")
print("Starting Routing evaluation")
print("#################################################\n")
#toy_example_main()
#uniform_dragonfly_simulation()
#skewed_dragonfly_simulation()
#uniform_expander_simulation()
skewed_expander_simulation()
print("\n#################################################")
print("Completed main")
print("#################################################\n") | 58.042517 | 193 | 0.763896 | 4,129 | 34,129 | 5.894405 | 0.056672 | 0.042033 | 0.007273 | 0.009368 | 0.919098 | 0.894445 | 0.870778 | 0.863177 | 0.855863 | 0.84855 | 0 | 0.015179 | 0.106244 | 34,129 | 588 | 194 | 58.042517 | 0.78271 | 0.197222 | 0 | 0.558673 | 0 | 0 | 0.163465 | 0.122562 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015306 | false | 0 | 0.033163 | 0 | 0.063776 | 0.081633 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7df8c936bdc024da27ce618c6069a7118927edaa | 96 | py | Python | venv/lib/python3.8/site-packages/pip/_internal/utils/setuptools_build.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pip/_internal/utils/setuptools_build.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pip/_internal/utils/setuptools_build.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/c6/4f/6c/4418d4c8d4c7b3f4ef11679b556b3519f2cf376d3c333a525ebf4e93f0 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b45fb8fa0e7b9603ffbfe76281b9cf6d105b59ab | 117 | py | Python | meta_analysis/__init__.py | aperezlebel/meta_analysis | 10f983a4f3a94d385b9cd69a13c36ac610b1be93 | [
"MIT"
] | null | null | null | meta_analysis/__init__.py | aperezlebel/meta_analysis | 10f983a4f3a94d385b9cd69a13c36ac610b1be93 | [
"MIT"
] | null | null | null | meta_analysis/__init__.py | aperezlebel/meta_analysis | 10f983a4f3a94d385b9cd69a13c36ac610b1be93 | [
"MIT"
] | null | null | null | '''
A python module for building Meta Analysis Maps
'''
from .Maps import Maps
from .tools import print_percent | 16.714286 | 51 | 0.735043 | 17 | 117 | 5 | 0.764706 | 0.188235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196581 | 117 | 7 | 52 | 16.714286 | 0.904255 | 0.401709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c32cdb7a56d530b32b03fd008747991d25a99ed3 | 22,117 | py | Python | tests/unit/states/test_elasticsearch.py | edusperoni/salt | c9bfb00c2a81a9d4734fa7d1aa80e893d5ef790b | [
"Apache-2.0"
] | 12 | 2015-01-21T00:18:25.000Z | 2021-07-11T07:35:26.000Z | tests/unit/states/test_elasticsearch.py | edusperoni/salt | c9bfb00c2a81a9d4734fa7d1aa80e893d5ef790b | [
"Apache-2.0"
] | 1 | 2015-10-05T22:03:10.000Z | 2015-10-05T22:03:10.000Z | tests/unit/states/test_elasticsearch.py | edusperoni/salt | c9bfb00c2a81a9d4734fa7d1aa80e893d5ef790b | [
"Apache-2.0"
] | 12 | 2015-01-05T09:50:42.000Z | 2019-08-19T01:43:40.000Z | # -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Lukas Raska <lukas@raska.me>`
'''
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import skipIf, TestCase
from tests.support.mock import (
NO_MOCK,
NO_MOCK_REASON,
MagicMock,
patch)
from salt.exceptions import CommandExecutionError
# Import Salt Libs
import salt.utils.dictdiffer as dictdiffer
from salt.states import elasticsearch
@skipIf(NO_MOCK, NO_MOCK_REASON)
class ElasticsearchTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.states.elasticsearch
'''
def setup_loader_modules(self):
return {
elasticsearch: {
'__opts__': {'test': False},
'__utils__': {'dictdiffer.deep_diff': dictdiffer.deep_diff}
}
}
# 'index_absent' function tests: 1
def test_index_absent(self):
'''
Test to manage an elasticsearch index.
'''
name = 'foo'
ret = {'name': name,
'result': True,
'comment': 'Index foo is already absent',
'changes': {}}
mock_get = MagicMock(side_effect=[None, {name: {"test": "key"}}, {name: {}}, {name: {"test": "key"}}, CommandExecutionError, {name: {"test": "key"}}])
mock_delete = MagicMock(side_effect=[True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.index_get': mock_get,
'elasticsearch.index_delete': mock_delete}):
self.assertDictEqual(elasticsearch.index_absent(name), ret)
ret.update({'comment': 'Successfully removed index foo', 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
ret.update({'comment': 'Failed to remove index foo for unknown reasons', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Index foo will be removed", 'result': None, 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
# 'index_present' function tests: 1
def test_index_present(self):
'''
Test to manage an elasticsearch index.
'''
name = 'foo'
ret = {'name': name,
'result': True,
'comment': 'Index foo is already present',
'changes': {}}
mock_exists = MagicMock(side_effect=[True, False, False, False, CommandExecutionError, False, False])
mock_get = MagicMock(side_effect=[{name: {"test": "key"}}, CommandExecutionError])
mock_create = MagicMock(side_effect=[True, False, CommandExecutionError, True])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.index_get': mock_get,
'elasticsearch.index_exists': mock_exists,
'elasticsearch.index_create': mock_create}):
self.assertDictEqual(elasticsearch.index_present(name), ret)
ret.update({'comment': 'Successfully created index foo', 'changes': {"new": {"test": "key"}}})
self.assertDictEqual(elasticsearch.index_present(name), ret)
ret.update({'comment': 'Cannot create index foo, False', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_present(name), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Index foo does not exist and will be created", 'result': None, 'changes': {"new": {"test2": "key"}}})
self.assertDictEqual(elasticsearch.index_present(name, {"test2": "key"}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_absent(name), ret)
# 'alias_absent' function tests: 1
def test_alias_absent(self):
'''
Test to manage an elasticsearch alias.
'''
name = 'foo'
index = 'bar'
alias = {index: {"aliases": {name: {"test": "key"}}}}
ret = {'name': name,
'result': True,
'comment': 'Alias foo for index bar is already absent',
'changes': {}}
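# The first two alias_get return values (None, and a mapping that does not contain the
# foo alias on bar) are both treated as already absent, hence the two identical asserts
# directly below; the remaining values drive the success, failure, dry-run and error branches.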
mock_get = MagicMock(side_effect=[None, {"foo2": {}}, alias, alias, alias, CommandExecutionError, alias])
mock_delete = MagicMock(side_effect=[True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.alias_get': mock_get,
'elasticsearch.alias_delete': mock_delete}):
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
ret.update({'comment': 'Successfully removed alias foo for index bar', 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
ret.update({'comment': 'Failed to remove alias foo for index bar for unknown reasons', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Alias foo for index bar will be removed", 'result': None, 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.alias_absent(name, index), ret)
# 'alias_present' function tests: 1
def test_alias_present(self):
'''
Test to manage a elasticsearch alias.
'''
name = 'foo'
index = 'bar'
alias = {index: {"aliases": {name: {"test": "key"}}}}
ret = {'name': name,
'result': True,
'comment': 'Alias foo for index bar is already present',
'changes': {}}
mock_get = MagicMock(side_effect=[alias, alias, None, None, None, alias, CommandExecutionError, None])
mock_create = MagicMock(side_effect=[True, True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.alias_get': mock_get,
'elasticsearch.alias_create': mock_create}):
self.assertDictEqual(elasticsearch.alias_present(name, index, {"test": "key"}), ret)
ret.update({'comment': "Successfully replaced alias foo for index bar", 'changes': {'old': {"test": "key"}, 'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.alias_present(name, index, {"test2": "key"}), ret)
ret.update({'comment': "Successfully created alias foo for index bar", 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.alias_present(name, index, {"test2": "key"}), ret)
ret.update({'comment': 'Cannot create alias foo for index bar, False', 'result': False})
self.assertDictEqual(elasticsearch.alias_present(name, index, {"test2": "key"}), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Alias foo for index bar does not exist and will be created", 'result': None, 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.alias_present(name, index, {"test2": "key"}), ret)
ret.update({'comment': "Alias foo for index bar exists with wrong configuration and will be overridden", 'result': None, 'changes': {'old': {"test": "key"}, 'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.alias_present(name, index, {"test2": "key"}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.alias_present(name, index), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.alias_present(name, index), ret)
# 'index_template_absent' function tests: 1
def test_index_template_absent(self):
'''
Test to manage an elasticsearch index template.
'''
name = 'foo'
index_template = {name: {"test": "key"}}
ret = {'name': name,
'result': True,
'comment': 'Index template foo is already absent',
'changes': {}}
mock_get = MagicMock(side_effect=[None, {"bar": {}}, index_template, index_template, index_template, CommandExecutionError, index_template])
mock_delete = MagicMock(side_effect=[True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.index_template_get': mock_get,
'elasticsearch.index_template_delete': mock_delete}):
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
ret.update({'comment': 'Successfully removed index template foo', 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
ret.update({'comment': 'Failed to remove index template foo for unknown reasons', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Index template foo will be removed", 'result': None, 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_absent(name), ret)
# 'index_template_present' function tests: 1
def test_index_template_present(self):
'''
Test to manage an elasticsearch index template.
'''
name = 'foo'
index_template = {name: {"test": "key"}}
ret = {'name': name,
'result': True,
'comment': 'Index template foo is already present',
'changes': {}}
mock_exists = MagicMock(side_effect=[True, False, False, False, CommandExecutionError, False, False])
mock_create = MagicMock(side_effect=[True, False, CommandExecutionError, True])
mock_get = MagicMock(side_effect=[index_template, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.index_template_get': mock_get,
'elasticsearch.index_template_create': mock_create,
'elasticsearch.index_template_exists': mock_exists}):
self.assertDictEqual(elasticsearch.index_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': "Successfully created index template foo", 'changes': {'new': {"test": "key"}}})
self.assertDictEqual(elasticsearch.index_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': 'Cannot create index template foo, False', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_present(name, {"test2": "key"}), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Index template foo does not exist and will be created", 'result': None, 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.index_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_present(name, {}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_present(name, {}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.index_template_present(name, {}), ret)
# 'pipeline_absent' function tests: 1
def test_pipeline_absent(self):
'''
Test to manage an elasticsearch pipeline.
'''
name = 'foo'
pipeline = {name: {"test": "key"}}
ret = {'name': name,
'result': True,
'comment': 'Pipeline foo is already absent',
'changes': {}}
mock_get = MagicMock(side_effect=[None, {"foo2": {}}, pipeline, pipeline, pipeline, CommandExecutionError, pipeline])
mock_delete = MagicMock(side_effect=[True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.pipeline_get': mock_get,
'elasticsearch.pipeline_delete': mock_delete}):
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
ret.update({'comment': 'Successfully removed pipeline foo', 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
ret.update({'comment': 'Failed to remove pipeline foo for unknown reasons', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Pipeline foo will be removed", 'result': None, 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.pipeline_absent(name), ret)
# 'pipeline_present' function tests: 1
def test_pipeline_present(self):
'''
Test to manage an elasticsearch pipeline.
'''
name = 'foo'
pipeline = {name: {"test": "key"}}
ret = {'name': name,
'result': True,
'comment': 'Pipeline foo is already present',
'changes': {}}
mock_get = MagicMock(side_effect=[pipeline, pipeline, None, None, None, pipeline, CommandExecutionError, None])
mock_create = MagicMock(side_effect=[True, True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.pipeline_get': mock_get,
'elasticsearch.pipeline_create': mock_create}):
self.assertDictEqual(elasticsearch.pipeline_present(name, {"test": "key"}), ret)
ret.update({'comment': "Successfully replaced pipeline foo", 'changes': {'old': {"test": "key"}, 'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.pipeline_present(name, {"test2": "key"}), ret)
ret.update({'comment': "Successfully created pipeline foo", 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.pipeline_present(name, {"test2": "key"}), ret)
ret.update({'comment': 'Cannot create pipeline foo, False', 'result': False})
self.assertDictEqual(elasticsearch.pipeline_present(name, {"test2": "key"}), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Pipeline foo does not exist and will be created", 'result': None, 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.pipeline_present(name, {"test2": "key"}), ret)
ret.update({'comment': "Pipeline foo exists with wrong configuration and will be overridden", 'result': None, 'changes': {'old': {"test": "key"}, 'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.pipeline_present(name, {"test2": "key"}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.pipeline_present(name, {}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.pipeline_present(name, {}), ret)
# 'search_template_absent' function tests: 1
def test_search_template_absent(self):
'''
Test to manage an elasticsearch search template.
'''
name = 'foo'
template = {"template": '{"test": "key"}'}
ret = {'name': name,
'result': True,
'comment': 'Search template foo is already absent',
'changes': {}}
mock_get = MagicMock(side_effect=[None, template, template, template, CommandExecutionError, template])
mock_delete = MagicMock(side_effect=[True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.search_template_get': mock_get,
'elasticsearch.search_template_delete': mock_delete}):
self.assertDictEqual(elasticsearch.search_template_absent(name), ret)
ret.update({'comment': 'Successfully removed search template foo', 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.search_template_absent(name), ret)
ret.update({'comment': 'Failed to remove search template foo for unknown reasons', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.search_template_absent(name), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Search template foo will be removed", 'result': None, 'changes': {"old": {"test": "key"}}})
self.assertDictEqual(elasticsearch.search_template_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.search_template_absent(name), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.search_template_absent(name), ret)
# 'search_template_present' function tests: 1
def test_search_template_present(self):
'''
Test to manage an elasticsearch search template.
'''
name = 'foo'
template = {"template": '{"test": "key"}'}
ret = {'name': name,
'result': True,
'comment': 'Search template foo is already present',
'changes': {}}
mock_get = MagicMock(side_effect=[template, template, None, None, None, template, CommandExecutionError, None])
mock_create = MagicMock(side_effect=[True, True, False, CommandExecutionError])
with patch.dict(elasticsearch.__salt__, {'elasticsearch.search_template_get': mock_get,
'elasticsearch.search_template_create': mock_create}):
self.assertDictEqual(elasticsearch.search_template_present(name, {"test": "key"}), ret)
ret.update({'comment': "Successfully replaced search template foo", 'changes': {'old': {"test": "key"}, 'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.search_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': "Successfully created search template foo", 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.search_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': 'Cannot create search template foo, False', 'result': False})
self.assertDictEqual(elasticsearch.search_template_present(name, {"test2": "key"}), ret)
with patch.dict(elasticsearch.__opts__, {'test': True}):
ret.update({'comment': "Search template foo does not exist and will be created", 'result': None, 'changes': {'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.search_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': "Search template foo exists with wrong configuration and will be overridden", 'result': None, 'changes': {'old': {"test": "key"}, 'new': {"test2": "key"}}})
self.assertDictEqual(elasticsearch.search_template_present(name, {"test2": "key"}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.search_template_present(name, {}), ret)
ret.update({'comment': '', 'result': False, 'changes': {}})
self.assertDictEqual(elasticsearch.search_template_present(name, {}), ret)
| 49.701124 | 199 | 0.607768 | 2,194 | 22,117 | 5.969462 | 0.050137 | 0.103001 | 0.173475 | 0.069634 | 0.907994 | 0.896847 | 0.871803 | 0.830496 | 0.806979 | 0.771016 | 0 | 0.002805 | 0.242528 | 22,117 | 444 | 200 | 49.813063 | 0.778965 | 0.043903 | 0 | 0.626374 | 0 | 0 | 0.222265 | 0.030609 | 0 | 0 | 0 | 0 | 0.260073 | 1 | 0.040293 | false | 0 | 0.025641 | 0.003663 | 0.07326 | 0.003663 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c352f1a26eff56c22be4c01f97c96ad5935e0053 | 10,376 | py | Python | tests/integration_tests/test_wandb.py | divyanshugit/optuna | af7f11ccc3f0f8840dc4d867a17ba2c68664a032 | [
"MIT"
] | 1,300 | 2018-12-03T06:11:11.000Z | 2019-11-15T01:28:25.000Z | tests/integration_tests/test_wandb.py | divyanshugit/optuna | af7f11ccc3f0f8840dc4d867a17ba2c68664a032 | [
"MIT"
] | 274 | 2018-12-04T09:54:07.000Z | 2019-11-15T02:23:18.000Z | tests/integration_tests/test_wandb.py | divyanshugit/optuna | af7f11ccc3f0f8840dc4d867a17ba2c68664a032 | [
"MIT"
] | 148 | 2018-12-03T10:48:50.000Z | 2019-11-11T16:37:51.000Z | from typing import Any
from typing import Dict
from typing import List
from typing import Sequence
from typing import Tuple
from typing import Union
from unittest import mock
import pytest
import optuna
from optuna.integration import WeightsAndBiasesCallback
def _objective_func(trial: optuna.trial.Trial) -> float:
x = trial.suggest_float("x", low=-10, high=10)
y = trial.suggest_float("y", low=1, high=10, log=True)
return (x - 2) ** 2 + (y - 25) ** 2
def _multiobjective_func(trial: optuna.trial.Trial) -> Tuple[float, float]:
x = trial.suggest_float("x", low=-10, high=10)
y = trial.suggest_float("y", low=1, high=10, log=True)
first_objective = (x - 2) ** 2 + (y - 25) ** 2
second_objective = (x - 2) ** 3 + (y - 25) ** 3
return first_objective, second_objective
@mock.patch("optuna.integration.wandb.wandb")
def test_run_initialized(wandb: mock.MagicMock) -> None:
wandb.sdk.wandb_run.Run = mock.MagicMock
n_trials = 10
wandb_kwargs = {
"project": "optuna",
"group": "summary",
"job_type": "logging",
"mode": "offline",
"tags": ["test-tag"],
}
WeightsAndBiasesCallback(metric_name="mse", wandb_kwargs=wandb_kwargs, as_multirun=False)
wandb.init.assert_called_once_with(
project="optuna", group="summary", job_type="logging", mode="offline", tags=["test-tag"]
)
wandbc = WeightsAndBiasesCallback(
metric_name="mse", wandb_kwargs=wandb_kwargs, as_multirun=True
)
wandb.run = None
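# With as_multirun=True and no active run, both the track_in_wandb decorator and the
# callback are expected to start a fresh run via wandb.init, which the asserts below
# check (one init call for the wrapped trial, then one per optimized trial).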
study = optuna.create_study(direction="minimize")
_wrapped_func = wandbc.track_in_wandb()(lambda t: 1.0)
wandb.init.reset_mock()
trial = optuna.create_trial(value=1.0)
_wrapped_func(trial)
wandb.init.assert_called_once_with(
project="optuna", group="summary", job_type="logging", mode="offline", tags=["test-tag"]
)
wandb.init.reset_mock()
study.optimize(_objective_func, n_trials=n_trials, callbacks=[wandbc])
wandb.init.assert_called_with(
project="optuna", group="summary", job_type="logging", mode="offline", tags=["test-tag"]
)
assert wandb.init.call_count == n_trials
wandb.init().finish.assert_called()
assert wandb.init().finish.call_count == n_trials
@mock.patch("optuna.integration.wandb.wandb")
@pytest.mark.parametrize("as_multirun", [True, False])
def test_attributes_set_on_epoch(wandb: mock.MagicMock, as_multirun: bool) -> None:
wandb.sdk.wandb_run.Run = mock.MagicMock
expected_config: Dict[str, Any] = {"direction": ["MINIMIZE"]}
trial_params = {"x": 1.1, "y": 2.2}
expected_config_with_params = {**expected_config, **trial_params}
study = optuna.create_study(direction="minimize")
wandbc = WeightsAndBiasesCallback(as_multirun=as_multirun)
if as_multirun:
wandb.run = None
study.enqueue_trial(trial_params)
study.optimize(_objective_func, n_trials=1, callbacks=[wandbc])
if as_multirun:
wandb.init().config.update.assert_called_once_with(expected_config_with_params)
else:
wandb.run.config.update.assert_called_once_with(expected_config)
@mock.patch("optuna.integration.wandb.wandb")
@pytest.mark.parametrize("as_multirun", [True, False])
def test_multiobjective_attributes_set_on_epoch(wandb: mock.MagicMock, as_multirun: bool) -> None:
wandb.sdk.wandb_run.Run = mock.MagicMock
expected_config: Dict[str, Any] = {"direction": ["MINIMIZE", "MAXIMIZE"]}
trial_params = {"x": 1.1, "y": 2.2}
expected_config_with_params = {**expected_config, **trial_params}
study = optuna.create_study(directions=["minimize", "maximize"])
wandbc = WeightsAndBiasesCallback(as_multirun=as_multirun)
if as_multirun:
wandb.run = None
study.enqueue_trial(trial_params)
study.optimize(_multiobjective_func, n_trials=1, callbacks=[wandbc])
if as_multirun:
wandb.init().config.update.assert_called_once_with(expected_config_with_params)
else:
wandb.run.config.update.assert_called_once_with(expected_config)
@mock.patch("optuna.integration.wandb.wandb")
def test_log_api_call_count(wandb: mock.MagicMock) -> None:
wandb.sdk.wandb_run.Run = mock.MagicMock
study = optuna.create_study()
wandbc = WeightsAndBiasesCallback()
@wandbc.track_in_wandb()
def _decorated_objective(trial: optuna.trial.Trial) -> float:
result = _objective_func(trial)
wandb.run.log({"result": result})
return result
target_n_trials = 10
study.optimize(_objective_func, n_trials=target_n_trials, callbacks=[wandbc])
assert wandb.run.log.call_count == target_n_trials
wandbc = WeightsAndBiasesCallback(as_multirun=True)
wandb.run.reset_mock()
study.optimize(_decorated_objective, n_trials=target_n_trials, callbacks=[wandbc])
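# The decorated objective logs once itself ("result") and the callback logs the trial
# values once more, so each trial contributes two wandb.run.log calls.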
assert wandb.run.log.call_count == 2 * target_n_trials
wandb.run = None
study.optimize(_objective_func, n_trials=target_n_trials, callbacks=[wandbc])
assert wandb.init().log.call_count == target_n_trials
@pytest.mark.parametrize(
"metric,as_multirun,expected",
[("value", False, ["x", "y", "value"]), ("foo", True, ["x", "y", "foo", "trial_number"])],
)
@mock.patch("optuna.integration.wandb.wandb")
def test_values_registered_on_epoch(
wandb: mock.MagicMock, metric: str, as_multirun: bool, expected: List[str]
) -> None:
def assert_call_args(log_func: mock.MagicMock, as_multirun: bool) -> None:
call_args = log_func.call_args
assert list(call_args[0][0].keys()) == expected
assert call_args[1] == {"step": None if as_multirun else 0}
wandb.sdk.wandb_run.Run = mock.MagicMock
if as_multirun:
wandb.run = None
log_func = wandb.init().log
else:
log_func = wandb.run.log
study = optuna.create_study()
wandbc = WeightsAndBiasesCallback(metric_name=metric, as_multirun=as_multirun)
study.optimize(_objective_func, n_trials=1, callbacks=[wandbc])
assert_call_args(log_func, as_multirun)
@pytest.mark.parametrize("metric,expected", [("foo", ["x", "y", "foo", "trial_number"])])
@mock.patch("optuna.integration.wandb.wandb")
def test_values_registered_on_epoch_with_logging(
wandb: mock.MagicMock, metric: str, expected: List[str]
) -> None:
wandb.sdk.wandb_run.Run = mock.MagicMock
study = optuna.create_study()
wandbc = WeightsAndBiasesCallback(metric_name=metric, as_multirun=True)
@wandbc.track_in_wandb()
def _decorated_objective(trial: optuna.trial.Trial) -> float:
result = _objective_func(trial)
wandb.run.log({"result": result})
return result
study.enqueue_trial({"x": 2, "y": 25})
study.optimize(_decorated_objective, n_trials=1, callbacks=[wandbc])
logged_in_decorator = wandb.run.log.mock_calls[0][1][0]
logged_in_callback = wandb.run.log.mock_calls[1][1][0]
assert len(wandb.run.log.mock_calls) == 2
assert list(logged_in_decorator) == ["result"]
assert list(logged_in_callback) == expected
call_args = wandb.run.log.call_args
assert call_args[1] == {"step": 0}
@pytest.mark.parametrize(
"metrics,as_multirun,expected",
[
("value", False, ["x", "y", "value_0", "value_1"]),
("value", True, ["x", "y", "value_0", "value_1", "trial_number"]),
(["foo", "bar"], False, ["x", "y", "foo", "bar"]),
(("foo", "bar"), True, ["x", "y", "foo", "bar", "trial_number"]),
],
)
@mock.patch("optuna.integration.wandb.wandb")
def test_multiobjective_values_registered_on_epoch(
wandb: mock.MagicMock,
metrics: Union[str, Sequence[str]],
as_multirun: bool,
expected: List[str],
) -> None:
def assert_call_args(log_func: mock.MagicMock, as_multirun: bool) -> None:
call_args = log_func.call_args
assert list(call_args[0][0].keys()) == expected
assert call_args[1] == {"step": None if as_multirun else 0}
wandb.sdk.wandb_run.Run = mock.MagicMock
if as_multirun:
wandb.run = None
log_func = wandb.init().log
else:
log_func = wandb.run.log
study = optuna.create_study(directions=["minimize", "maximize"])
wandbc = WeightsAndBiasesCallback(metric_name=metrics, as_multirun=as_multirun)
study.optimize(_multiobjective_func, n_trials=1, callbacks=[wandbc])
assert_call_args(log_func, as_multirun)
@pytest.mark.parametrize(
"metrics,expected",
[
("value", ["x", "y", "value_0", "value_1", "trial_number"]),
(("foo", "bar"), ["x", "y", "foo", "bar", "trial_number"]),
],
)
@mock.patch("optuna.integration.wandb.wandb")
def test_multiobjective_values_registered_on_epoch_with_logging(
wandb: mock.MagicMock, metrics: Union[str, Sequence[str]], expected: List[str]
) -> None:
wandbc = WeightsAndBiasesCallback(as_multirun=True, metric_name=metrics)
@wandbc.track_in_wandb()
def _decorated_objective(trial: optuna.trial.Trial) -> Tuple[float, float]:
result0, result1 = _multiobjective_func(trial)
wandb.run.log({"result0": result0, "result1": result1})
return result0, result1
study = optuna.create_study(directions=["minimize", "maximize"])
study.enqueue_trial({"x": 2, "y": 24})
study.optimize(_decorated_objective, n_trials=1, callbacks=[wandbc])
logged_in_decorator = wandb.run.log.mock_calls[0][1][0]
logged_in_callback = wandb.run.log.mock_calls[1][1][0]
assert len(wandb.run.log.mock_calls) == 2
assert list(logged_in_decorator) == ["result0", "result1"]
assert list(logged_in_callback) == expected
call_args = wandb.run.log.call_args
assert call_args[1] == {"step": 0}
@pytest.mark.parametrize("metrics", [["foo"], ["foo", "bar", "baz"]])
@mock.patch("optuna.integration.wandb.wandb")
def test_multiobjective_raises_on_name_mismatch(wandb: mock.MagicMock, metrics: List[str]) -> None:
wandb.sdk.wandb_run.Run = mock.MagicMock
study = optuna.create_study(directions=["minimize", "maximize"])
wandbc = WeightsAndBiasesCallback(metric_name=metrics)
with pytest.raises(ValueError):
study.optimize(_multiobjective_func, n_trials=1, callbacks=[wandbc])
@pytest.mark.parametrize("metrics", [{0: "foo", 1: "bar"}])
def test_multiobjective_raises_on_type_mismatch(metrics: Any) -> None:
with pytest.raises(TypeError):
WeightsAndBiasesCallback(metric_name=metrics)
| 34.019672 | 99 | 0.691018 | 1,357 | 10,376 | 5.044215 | 0.095063 | 0.04821 | 0.024105 | 0.034186 | 0.831702 | 0.797224 | 0.760847 | 0.744485 | 0.717166 | 0.678159 | 0 | 0.011877 | 0.164225 | 10,376 | 304 | 100 | 34.131579 | 0.777445 | 0 | 0 | 0.54955 | 0 | 0 | 0.094931 | 0.031322 | 0 | 0 | 0 | 0 | 0.130631 | 1 | 0.076577 | false | 0 | 0.045045 | 0 | 0.144144 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c373a0c4fa1dd67ad6085854a7755f2fc4e392db | 141 | py | Python | app/utils/middleware.py | acidbutter96/number_to_string | df922e3aec5fc717dd4937b5cd089e384ad7188a | [
"MIT"
] | null | null | null | app/utils/middleware.py | acidbutter96/number_to_string | df922e3aec5fc717dd4937b5cd089e384ad7188a | [
"MIT"
] | null | null | null | app/utils/middleware.py | acidbutter96/number_to_string | df922e3aec5fc717dd4937b5cd089e384ad7188a | [
"MIT"
] | null | null | null | class Middleware:
def __init__(self, entrance):
self.entrance = entrance
def getEntrance(self):
return self.entrance | 20.142857 | 32 | 0.666667 | 15 | 141 | 6 | 0.533333 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255319 | 141 | 7 | 33 | 20.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c37a2ae9e4f3dd932613429b4f8edc4fcabe6854 | 31 | py | Python | remi/res/__init__.py | Qu4n7r01d/remi | 6ba6c9cbc5121e00d849ff385966ac7e72e1409f | [
"MIT"
] | 1 | 2021-12-31T08:35:59.000Z | 2021-12-31T08:35:59.000Z | lib/jnpr/junos/cfg/__init__.py | stoned/py-junos-eznc | 93e5530e998a8d6aae758aa7ad1cca420e6501b8 | [
"Apache-2.0",
"BSD-3-Clause"
] | 13 | 2019-06-25T13:23:30.000Z | 2022-02-10T07:00:39.000Z | lib/jnpr/junos/cfg/__init__.py | stoned/py-junos-eznc | 93e5530e998a8d6aae758aa7ad1cca420e6501b8 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | from .resource import Resource
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ee0504c717df948ca14652b4f652e70e7ae6ade | 149 | py | Python | titan/react_state_pkg/highlightbvr/resources.py | mnieber/gen | 65f8aa4fb671c4f90d5cbcb1a0e10290647a31d9 | [
"MIT"
] | null | null | null | titan/react_state_pkg/highlightbvr/resources.py | mnieber/gen | 65f8aa4fb671c4f90d5cbcb1a0e10290647a31d9 | [
"MIT"
] | null | null | null | titan/react_state_pkg/highlightbvr/resources.py | mnieber/gen | 65f8aa4fb671c4f90d5cbcb1a0e10290647a31d9 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from titan.react_state_pkg.behavior.resources import Behavior
@dataclass
class HighlightBvr(Behavior):
pass
| 16.555556 | 61 | 0.825503 | 18 | 149 | 6.722222 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127517 | 149 | 8 | 62 | 18.625 | 0.930769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6f06a20f9414f7cd8fdfb352d7337e9f1cf16643 | 51 | py | Python | tests/test_cli.py | joergbrech/mapfix | 6eb827892fb22a54931c6af98780326804c6c224 | [
"MIT"
] | null | null | null | tests/test_cli.py | joergbrech/mapfix | 6eb827892fb22a54931c6af98780326804c6c224 | [
"MIT"
] | 4 | 2019-10-21T13:26:25.000Z | 2019-11-09T17:41:08.000Z | tests/test_cli.py | joergbrech/mapfix | 6eb827892fb22a54931c6af98780326804c6c224 | [
"MIT"
] | null | null | null | import pytest
def test_dummy():
assert(1==1)
| 8.5 | 17 | 0.647059 | 8 | 51 | 4 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.215686 | 51 | 5 | 18 | 10.2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f2742e9b397b1614ecff9b25fab1c6bc7f4ba41 | 44 | py | Python | venv/Lib/site-packages/nornir/plugins/functions/inv/__init__.py | melihteke/ebook_study | 4848ea42e37ee1d6ec777bfc33f49984653ace34 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/nornir/plugins/functions/inv/__init__.py | melihteke/ebook_study | 4848ea42e37ee1d6ec777bfc33f49984653ace34 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/nornir/plugins/functions/inv/__init__.py | melihteke/ebook_study | 4848ea42e37ee1d6ec777bfc33f49984653ace34 | [
"MIT"
] | null | null | null | from .helper import populate_ip, resolve_ip
| 22 | 43 | 0.840909 | 7 | 44 | 5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 1 | 44 | 44 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f2ab83a96a5fb4875c5e1c4afb0022ad50c436b | 269 | py | Python | backend/views/__init__.py | Luanee/MoneyThreads | 786610855ad4d4fdda4a25a95c9e4756c6a9dedf | [
"MIT"
] | null | null | null | backend/views/__init__.py | Luanee/MoneyThreads | 786610855ad4d4fdda4a25a95c9e4756c6a9dedf | [
"MIT"
] | null | null | null | backend/views/__init__.py | Luanee/MoneyThreads | 786610855ad4d4fdda4a25a95c9e4756c6a9dedf | [
"MIT"
] | null | null | null | from backend.views.home import index, signin, dashboard, error_404, error_500
from backend.views.users import SignUpView, SignInView, ResetPasswordView, logout, ActivateAccountView, ConfirmedAccountView, UserDashboardView
from backend.views.budget import DashboardView
| 67.25 | 143 | 0.858736 | 30 | 269 | 7.633333 | 0.7 | 0.144105 | 0.209607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024292 | 0.081784 | 269 | 3 | 144 | 89.666667 | 0.902834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6f31597043d44fa3b035d664d37ee5783d76bfa5 | 27 | py | Python | strongr/secretsdomain/model/__init__.py | bigr-erasmusmc/StrongR | 48573e170771a251f629f2d13dba7173f010a38c | [
"Apache-2.0"
] | null | null | null | strongr/secretsdomain/model/__init__.py | bigr-erasmusmc/StrongR | 48573e170771a251f629f2d13dba7173f010a38c | [
"Apache-2.0"
] | null | null | null | strongr/secretsdomain/model/__init__.py | bigr-erasmusmc/StrongR | 48573e170771a251f629f2d13dba7173f010a38c | [
"Apache-2.0"
] | null | null | null | from .secret import Secret
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f434121462cd82628062bfc684d56c7264b540a | 62 | py | Python | src/main/resources/assets/openpython/opos/v1.0/lib/computer.py | fossabot/OpenPython | 8fe3f794f2a6c543d96c1ef5c097ffa18f90b680 | [
"PSF-2.0",
"Apache-2.0",
"CC0-1.0",
"MIT"
] | 41 | 2018-10-25T06:15:31.000Z | 2022-02-20T11:20:43.000Z | src/main/resources/assets/openpython/opos/v1.0/lib/computer.py | fossabot/OpenPython | 8fe3f794f2a6c543d96c1ef5c097ffa18f90b680 | [
"PSF-2.0",
"Apache-2.0",
"CC0-1.0",
"MIT"
] | 16 | 2018-03-20T12:25:27.000Z | 2018-03-25T13:34:44.000Z | src/main/resources/assets/openpython/opos/v1.0/lib/computer.py | fossabot/OpenPython | 8fe3f794f2a6c543d96c1ef5c097ffa18f90b680 | [
"PSF-2.0",
"Apache-2.0",
"CC0-1.0",
"MIT"
] | 8 | 2018-11-04T02:03:15.000Z | 2022-01-13T11:46:28.000Z | # noinspection PyUnresolvedReferences
from ucomputer import *
| 20.666667 | 37 | 0.854839 | 5 | 62 | 10.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 62 | 2 | 38 | 31 | 0.963636 | 0.564516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b10ebf2e1e76e507c9bf37c002ad0a6bd701807 | 28 | py | Python | test/test_noop.py | DorianGray/qtile-conf | 8ce02d016b987e6a0dcbaca3ecc3de018df14d0c | [
"MIT"
] | null | null | null | test/test_noop.py | DorianGray/qtile-conf | 8ce02d016b987e6a0dcbaca3ecc3de018df14d0c | [
"MIT"
] | null | null | null | test/test_noop.py | DorianGray/qtile-conf | 8ce02d016b987e6a0dcbaca3ecc3de018df14d0c | [
"MIT"
] | null | null | null |
def test_noop():
pass
| 5.6 | 16 | 0.571429 | 4 | 28 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.321429 | 28 | 4 | 17 | 7 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d2c94a4fd60d0311b064126e9b3dc241b7cf068f | 21 | py | Python | cdjs/__init__.py | ofhellsfire/cdjs | d181eb42ad8076ca592d4b40b5c8f92c46bc757c | [
"MIT"
] | 1 | 2021-02-27T15:05:09.000Z | 2021-02-27T15:05:09.000Z | cdjs/__init__.py | ofhellsfire/cdjs | d181eb42ad8076ca592d4b40b5c8f92c46bc757c | [
"MIT"
] | null | null | null | cdjs/__init__.py | ofhellsfire/cdjs | d181eb42ad8076ca592d4b40b5c8f92c46bc757c | [
"MIT"
] | null | null | null | from .cdjs import *
| 7 | 19 | 0.666667 | 3 | 21 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 21 | 2 | 20 | 10.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2dea617251bdbc5af05266539175fb6f81a3473 | 31 | py | Python | brunel_hand/__init__.py | pollen-robotics/brunel-hand | 0c1c936eb89791ec891e7cd4a11925644b4d100e | [
"Apache-2.0"
] | 5 | 2018-10-04T04:20:29.000Z | 2021-01-14T09:23:53.000Z | brunel_hand/__init__.py | pollen-robotics/brunel-hand | 0c1c936eb89791ec891e7cd4a11925644b4d100e | [
"Apache-2.0"
] | null | null | null | brunel_hand/__init__.py | pollen-robotics/brunel-hand | 0c1c936eb89791ec891e7cd4a11925644b4d100e | [
"Apache-2.0"
] | 1 | 2021-01-14T09:24:17.000Z | 2021-01-14T09:24:17.000Z | from .brunel import BrunelHand
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2ef843fa8a014561b865d4cbef3543af53b070f | 5,226 | py | Python | hsstock/app/collect/futu/int_mysqlschema_kline_history_5M_app.py | hsstock/hsstock | f8841331022e8844537a5c5b08d047e2cc328856 | [
"Apache-2.0"
] | 2 | 2018-10-04T08:04:24.000Z | 2021-01-21T06:58:30.000Z | hsstock/app/collect/futu/int_mysqlschema_kline_history_5M_app.py | hsstock/hsstock | f8841331022e8844537a5c5b08d047e2cc328856 | [
"Apache-2.0"
] | null | null | null | hsstock/app/collect/futu/int_mysqlschema_kline_history_5M_app.py | hsstock/hsstock | f8841331022e8844537a5c5b08d047e2cc328856 | [
"Apache-2.0"
] | 1 | 2018-10-20T09:39:50.000Z | 2018-10-20T09:39:50.000Z | # -*- coding: UTF-8 -*-
import logging
import sqlalchemy as sa
import pandas as pd
from hsstock.service.mysql_service import MysqlService
from hsstock.utils.app_logging import setup_logging
def main():
storeservice = MysqlService(2)
# The total number of history_5M tables is 80, but the last table is 63
kline_5m_tables_number = 81
schemaArr = [
{
"table": "ft_5M_{0}",
"dtype": {
"id": sa.types.BIGINT,
"code": sa.types.NVARCHAR(20),
"time_key": sa.types.DATETIME,
"open": sa.types.FLOAT,
"close": sa.types.FLOAT,
"high": sa.types.FLOAT,
"low": sa.types.FLOAT,
"pe_ratio": sa.types.FLOAT,
"turnover_rate": sa.types.FLOAT,
"volume": sa.types.BIGINT,
"turnover": sa.types.FLOAT,
"change_rate": sa.types.FLOAT,
"last_close": sa.types.FLOAT
},
"clauses": [
'ALTER TABLE `{0}` ADD PRIMARY KEY (`id`);',
'ALTER TABLE `{0}` ADD UNIQUE INDEX (`code`,`time_key`);',
'ALTER TABLE `{0}` MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT COMMENT \'id\'',
'ALTER TABLE `{0}` MODIFY COLUMN pe_ratio FLOAT COMMENT \'市盈率\';',
'ALTER TABLE `{0}` MODIFY COLUMN turnover_rate FLOAT COMMENT \'换手率\';',
'ALTER TABLE `{0}` MODIFY COLUMN volume BIGINT COMMENT \'成交量\';',
'ALTER TABLE `{0}` MODIFY COLUMN turnover FLOAT COMMENT \'成交额\';',
'ALTER TABLE `{0}` MODIFY COLUMN change_rate FLOAT COMMENT \'涨跌幅\';',
'ALTER TABLE `{0}` MODIFY COLUMN last_close FLOAT COMMENT \'昨收价\';',
'ALTER TABLE `{0}` ENGINE=MyISAM;'
]
},
]
# try:
# logging.info("create sub kline 5m schema, starting")
#
# for index in range(61,kline_5m_tables_number,1):
# for schema in schemaArr:
# df = pd.DataFrame(None, columns=schema['dtype'].keys())
# table = schema['table'].format(index)
# logging.info(table)
# logging.info('table:{0}'.format(table))
# clauses = []
# for clause in schema['clauses']:
# clause = clause.format(table)
# clauses.append(clause)
# storeservice.init_schema(table, df, schema['dtype'], clauses)
#
# logging.info("create sub kline 5m, end")
# except IOError as err:
# logging.error("OS|error: {0}".format(err))
# else:
# logging.info('create sub kline success')
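# Combine the 80 MyISAM sub-tables ft_5M_1 .. ft_5M_80 into a single MRG_MyISAM merge
# table (ft_5m) so queries can address all 5-minute k-line partitions through one table;
# new rows go to the last sub-table (INSERT_METHOD = LAST).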
union_table = [('ft_5M_{0}'.format(table)) for table in range(1, kline_5m_tables_number, 1)]
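# Keep '{0}' as a literal placeholder here: every clause is later formatted with the
# actual table name via clause.format(table) inside the loop below, so only the union
# list ({1}) is filled in at this point.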
mrg_kline_claus = 'ALTER TABLE `{0}` ENGINE = MRG_MyISAM UNION = ({1}) INSERT_METHOD = LAST;'.format('{0}', ','.join(union_table))
schemaArr = [
{
"table": "ft_5m",
"dtype": {
"id": sa.types.BIGINT,
"code": sa.types.NVARCHAR(20),
"time_key": sa.types.DATETIME,
"open": sa.types.FLOAT,
"close": sa.types.FLOAT,
"high": sa.types.FLOAT,
"low": sa.types.FLOAT,
"pe_ratio": sa.types.FLOAT,
"turnover_rate": sa.types.FLOAT,
"volume": sa.types.BIGINT,
"turnover": sa.types.FLOAT,
"change_rate": sa.types.FLOAT,
"last_close": sa.types.FLOAT
},
"clauses": [
'ALTER TABLE `{0}` ADD PRIMARY KEY (`id`);',
'ALTER TABLE `{0}` ADD UNIQUE INDEX (`code`,`time_key`);',
'ALTER TABLE `{0}` MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT COMMENT \'id\'',
'ALTER TABLE `{0}` MODIFY COLUMN pe_ratio FLOAT COMMENT \'市盈率\';',
'ALTER TABLE `{0}` MODIFY COLUMN turnover_rate FLOAT COMMENT \'换手率\';',
'ALTER TABLE `{0}` MODIFY COLUMN volume BIGINT COMMENT \'成交量\';',
'ALTER TABLE `{0}` MODIFY COLUMN turnover FLOAT COMMENT \'成交额\';',
'ALTER TABLE `{0}` MODIFY COLUMN change_rate FLOAT COMMENT \'涨跌幅\';',
'ALTER TABLE `{0}` MODIFY COLUMN last_close FLOAT COMMENT \'昨收价\';',
mrg_kline_claus
]
}
]
try:
logging.info("create kline 5m schema, starting")
for schema in schemaArr:
df = pd.DataFrame(None, columns=schema['dtype'].keys())
table = schema['table']
logging.info(table)
logging.info('table:{0}'.format(table))
clauses = []
for clause in schema['clauses']:
clause = clause.format(table)
clauses.append(clause)
storeservice.init_schema(table, df, schema['dtype'], clauses)
logging.info("create kline 5m, end")
except IOError as err:
logging.error("OS|error: {0}".format(err))
else:
logging.info('create kline 5m success')
if __name__ == "__main__":
setup_logging()
main()
| 41.149606 | 132 | 0.519326 | 573 | 5,226 | 4.633508 | 0.205934 | 0.06855 | 0.082863 | 0.089642 | 0.803013 | 0.761582 | 0.750282 | 0.750282 | 0.750282 | 0.750282 | 0 | 0.016623 | 0.343858 | 5,226 | 126 | 133 | 41.47619 | 0.757655 | 0.159587 | 0 | 0.531915 | 0 | 0 | 0.348592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010638 | false | 0 | 0.053191 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2f51066ed5988d745fc30dbd3f2e1e5cd34354e | 122 | py | Python | words2num/__init__.py | michelleful/words2num | bbaf2f8244ae56691ce386673a63b2e5e766caaf | [
"MIT"
] | 12 | 2020-09-09T13:38:34.000Z | 2022-03-22T04:04:01.000Z | words2num/__init__.py | michelleful/words2num | bbaf2f8244ae56691ce386673a63b2e5e766caaf | [
"MIT"
] | 5 | 2020-07-24T19:30:51.000Z | 2022-03-31T19:00:40.000Z | words2num/__init__.py | michelleful/words2num | bbaf2f8244ae56691ce386673a63b2e5e766caaf | [
"MIT"
] | 2 | 2021-07-28T00:04:33.000Z | 2021-09-06T08:37:25.000Z | from .base import (w2n)
from .base import w2n as words2num
from .core import (NumberParseException)
__version__ = '0.3.2'
| 24.4 | 40 | 0.762295 | 18 | 122 | 4.944444 | 0.666667 | 0.179775 | 0.314607 | 0.382022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.139344 | 122 | 4 | 41 | 30.5 | 0.790476 | 0 | 0 | 0 | 0 | 0 | 0.040984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
961474c294ecb04d4276ea540be933a1087837fa | 89 | py | Python | giraffe/__init__.py | Julian/giraffe | 8ef37fcb0a7cc5aa24c684d17568c55ad04692dc | [
"MIT"
] | 1 | 2017-05-02T21:28:02.000Z | 2017-05-02T21:28:02.000Z | giraffe/__init__.py | Julian/giraffe | 8ef37fcb0a7cc5aa24c684d17568c55ad04692dc | [
"MIT"
] | null | null | null | giraffe/__init__.py | Julian/giraffe | 8ef37fcb0a7cc5aa24c684d17568c55ad04692dc | [
"MIT"
] | null | null | null | from giraffe.exceptions import GiraffeException
from giraffe.graph import Graph, DiGraph
| 29.666667 | 47 | 0.865169 | 11 | 89 | 7 | 0.636364 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101124 | 89 | 2 | 48 | 44.5 | 0.9625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
961a08653d4fd85e7be3faed065a78890225ff64 | 113 | py | Python | una_hora/core/views.py | Hectoronian/una-hora | 955a9ad19e263f959642e7c7cf4093fdca22676d | [
"MIT"
] | 8 | 2021-09-09T22:01:12.000Z | 2022-01-11T23:32:08.000Z | una_hora/core/views.py | Hectoronian/una-hora | 955a9ad19e263f959642e7c7cf4093fdca22676d | [
"MIT"
] | 21 | 2021-09-02T21:31:13.000Z | 2022-02-14T14:27:16.000Z | una_hora/core/views.py | Hectoronian/una-hora | 955a9ad19e263f959642e7c7cf4093fdca22676d | [
"MIT"
] | 1 | 2021-11-11T02:37:34.000Z | 2021-11-11T02:37:34.000Z | from django.shortcuts import render # noqa: F401
def legal(request):
return render(request, "legal.html")
| 18.833333 | 49 | 0.725664 | 15 | 113 | 5.466667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031915 | 0.168142 | 113 | 5 | 50 | 22.6 | 0.840426 | 0.088496 | 0 | 0 | 0 | 0 | 0.09901 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
96227082d6cb461e7d5f25fcbe3d7fd114b22582 | 48 | py | Python | pynewproject_ciaa/__init__.py | ericsonj/pynewproject_ciaa | 1191d798dd9f2c420ebd6cc787c90543f17fcf18 | [
"MIT"
] | null | null | null | pynewproject_ciaa/__init__.py | ericsonj/pynewproject_ciaa | 1191d798dd9f2c420ebd6cc787c90543f17fcf18 | [
"MIT"
] | null | null | null | pynewproject_ciaa/__init__.py | ericsonj/pynewproject_ciaa | 1191d798dd9f2c420ebd6cc787c90543f17fcf18 | [
"MIT"
] | null | null | null | generators = [
"edu_ciaa_nxp.EDU_CIAA_NXP"
] | 16 | 31 | 0.708333 | 7 | 48 | 4.285714 | 0.571429 | 0.466667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 3 | 32 | 16 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.510204 | 0.510204 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
829b3464c85855d0d48c6a106117800d239cb9d6 | 14 | py | Python | reverse.py | jesseqzhen/source_tree_proj | 4c45df0c4dc2aca6ac3ed0cef61eb120c0f836a8 | [
"MIT"
] | null | null | null | reverse.py | jesseqzhen/source_tree_proj | 4c45df0c4dc2aca6ac3ed0cef61eb120c0f836a8 | [
"MIT"
] | null | null | null | reverse.py | jesseqzhen/source_tree_proj | 4c45df0c4dc2aca6ac3ed0cef61eb120c0f836a8 | [
"MIT"
] | null | null | null | print(a[::-1]) | 14 | 14 | 0.5 | 3 | 14 | 2.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | 14 | 1 | 14 | 14 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
82a957981ea07c4535fb3fe17682f162d2b0bbb2 | 31 | py | Python | msdsl/plugin/__init__.py | sgherbst/msdsl | e38d5ecdb88b3574bda62f22a4f91ce3e4173d12 | [
"MIT"
] | 15 | 2019-05-14T10:12:23.000Z | 2022-03-29T15:29:52.000Z | msdsl/plugin/__init__.py | sgherbst/msdsl | e38d5ecdb88b3574bda62f22a4f91ce3e4173d12 | [
"MIT"
] | 19 | 2020-01-22T21:44:33.000Z | 2021-06-05T02:10:41.000Z | msdsl/plugin/__init__.py | sgherbst/msdsl | e38d5ecdb88b3574bda62f22a4f91ce3e4173d12 | [
"MIT"
] | 5 | 2019-10-21T09:53:17.000Z | 2021-08-10T17:32:20.000Z | from .msdsl import CustomPlugin | 31 | 31 | 0.870968 | 4 | 31 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82ad42b48043f7ae2186a7826c1f162a2142a665 | 40 | py | Python | tests/urls.py | cluesblues/drf-jwt-knox | fc75c36d37b7da7a6290b2dc97ade4e8eff99682 | [
"Apache-2.0"
] | 10 | 2016-08-08T12:23:51.000Z | 2022-01-16T08:01:15.000Z | tests/urls.py | cluesblues/drf-jwt-knox | fc75c36d37b7da7a6290b2dc97ade4e8eff99682 | [
"Apache-2.0"
] | 4 | 2016-08-08T11:35:19.000Z | 2021-12-26T12:46:48.000Z | tests/urls.py | cluesblues/drf-jwt-knox | fc75c36d37b7da7a6290b2dc97ade4e8eff99682 | [
"Apache-2.0"
] | 1 | 2021-04-26T22:13:27.000Z | 2021-04-26T22:13:27.000Z | from jwt_knox.urls import urlpatterns
| 10 | 37 | 0.825 | 6 | 40 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 40 | 3 | 38 | 13.333333 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82d60c84167572031a93f07b1a8bf22635f6eb47 | 30 | py | Python | mayan/apps/document_indexing/tests/__init__.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | 1 | 2021-06-17T18:24:25.000Z | 2021-06-17T18:24:25.000Z | mayan/apps/document_indexing/tests/__init__.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | 7 | 2020-06-06T00:01:04.000Z | 2022-01-13T01:47:17.000Z | mayan/apps/document_indexing/tests/__init__.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | null | null | null | from .mixins import * # NOQA
| 15 | 29 | 0.666667 | 4 | 30 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 30 | 1 | 30 | 30 | 0.869565 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82e34fd31fc97e2cbaaf57ee4d20cd92445e48d1 | 21,403 | py | Python | tests/unit_tests/various/test_shower_card.py | khurtado/MG5_aMC | 9cde676b0a1097058c416983017af257385fa375 | [
"NCSA"
] | 5 | 2018-10-23T14:37:18.000Z | 2021-11-22T20:59:02.000Z | tests/unit_tests/various/test_shower_card.py | khurtado/MG5_aMC | 9cde676b0a1097058c416983017af257385fa375 | [
"NCSA"
] | 26 | 2018-10-08T15:49:32.000Z | 2020-05-15T13:33:36.000Z | tests/unit_tests/various/test_shower_card.py | khurtado/MG5_aMC | 9cde676b0a1097058c416983017af257385fa375 | [
"NCSA"
] | 4 | 2019-02-18T11:42:18.000Z | 2021-11-11T20:46:08.000Z | ################################################################################
#
# Copyright (c) 2011 The MadGraph5_aMC@NLO Development team and Contributors
#
# This file is a part of the MadGraph5_aMC@NLO project, an application which
# automatically generates Feynman diagrams and matrix elements for arbitrary
# high-energy processes in the Standard Model and beyond.
#
# It is subject to the MadGraph5_aMC@NLO license which should accompany this
# distribution.
#
# For more information, visit madgraph.phys.ucl.ac.be and amcatnlo.web.cern.ch
#
################################################################################
import os
import sys
import tests.unit_tests as unittest
import madgraph.various.shower_card as shower_card
class TestShowerCard(unittest.TestCase):
"""Check the class linked to a block of the param_card"""
def setUp(self):
if not hasattr(self, 'card') or not hasattr(self, 'card_analyse'):
text = \
"""#***********************************************************************
# MadGraph5_aMC@NLO *
# *
# shower_card.dat aMC@NLO *
# *
# This file is used to set the parameters for the shower. *
# *
# Some notation/conventions: *
# *
# Lines starting with a hash (#) are info or comments *
# *
# mind the format: variable = value # comment *
#***********************************************************************
#
#****************
# Shower settings
#****************
#
#***********************************************************************
# Number of events, jobs, errors, and random seeds *
#***********************************************************************
nevents = -1 # N evts to shower (< 0 = all)
nsplit_jobs = 1 # N jobs to run in parallel (< 100!!)
combine_td = T # combine the topdrawer files if nsplit_jobs > 1
maxprint = 2 # N evts to print in the log
maxerrs = 0.1 # max fraction of errors
rnd_seed = 0 # 1st random seed (0 = default)
rnd_seed2 = 0 # 2nd random seed (0 = default) !ONLY FOR HERWIG6!
#***********************************************************************
# PDFs and non-perturbative modelling *
#***********************************************************************
pdfcode = 0 # 0 = internal, 1 = same as NLO, other = lhaglue
ue_enabled = F # underlying event
hadronize = T # hadronisation on/off !IGNORED BY HERWIG6!
lambda_5 = -1 # Lambda_5 (< 0 = default) !IGNORED BY PYTHIA8!
#***********************************************************************
# Stable or unstable particles *
#***********************************************************************
b_stable = F # set B hadrons stable
pi_stable = T # set pi0's stable
wp_stable = F # set w+'s stable
wm_stable = F # set w-'s stable
z_stable = F # set z0's stable
h_stable = F # set Higgs' stable
tap_stable = F # set tau+'s stable
tam_stable = F # set tau-'s stable
mup_stable = F # set mu+'s stable
mum_stable = F # set mu-'s stable
#***********************************************************************
# Mass of the b quark *
#***********************************************************************
b_mass = -1 # b mass, (< 0 = default)
#***********************************************************************
# Special settings *
#***********************************************************************
is_4lep = F # T if 4-lepton production !ONLY FOR PYTHIA6!
is_bbar = F # T if bb~ production !ONLY FOR HERWIG6!
#***********************************************************************
# Decay channels *
# Write down the decay channels for the resonances, to be performed by *
# the shower. *
# The syntax (for a two-body decay) is *
# DM_I = M > D1 D2 @ BR @ ME *
# where I < 100, M is the decaying resonance, D1, D2 are the decay *
# products (up to D5 if such a decay is supported by the shower), BR *
# is the branching ratio (only used by the HERWIG6 shower, ignored *
# otherwise) and ME is the type of matrix element to be used in the *
# decay (only used by HERWIG6, ignored otherwise). *
# BR's are correctly understood by HERWIG6 only if they add up to 1 *
# and only if no more than three modes are required for a given *
# resonance. *
# ME corresponds to the third entry of subroutine HWMODK, see the *
# relevant manual. *
# *
# WARNING: in HERWIG6, the order of decay products in > 2-body decays *
# IS RELEVANT. *
# WARNING: in PYTHIA6, turning hadronisation off disables top decays *
# WARNING: in PYTHIA6 and PYTHIA8, 1 -> n decays (with n > 2) are *
# handled through a sequence of 1 -> 2 decays. *
# *
# Examples of syntax: *
# Z -> e+ e- or mu+ mu- with BR = 0.5 each *
# DM_1 = 23 > -11 11 @ 0.5d0 @ 100 *
# DM_2 = 23 > -13 13 @ 0.5d0 @ 100 *
# H -> tau+ tau- with BR = 1 *
# DM_3 = 25 > -15 15 @ 1.0d0 @ 0 *
# t -> nu_e e+ b with BR = 1 (HERWIG) *
# DM_4 = 6 > 12 -11 5 @ 1d0 @ 100 *
# t -> nu_e e+ b with BR = 1 (PYTHIA) *
# DM_5 = 6 > 24 5 @ 1d0 @ 100 *
# DM_6 = 24 > 12 -11 @ 1d0 @ 100 *
#***********************************************************************
#***********************************************************************
# Extra Libraries/analyses *
# The following lines need to be changed if the user does not want to *
# create a StdHEP/HepMC file, but to directly run an own analysis (to *
# be placed in HWAnalyzer or analogous MCatNLO subfolders). *
# Please use files in those folders as examples. *
#***********************************************************************
EXTRALIBS = stdhep Fmcfio # Extra-libraries (not LHAPDF)
# Default: "stdhep Fmcfio"
# PYTHIA > 8.200 may require library dl
EXTRAPATHS = ../lib # Path to the extra-libraries
# Default: "../lib"
INCLUDEPATHS = # Path to header files needed by c++
# Dir names separated by white spaces
ANALYSE = # User's analysis and histogramming
# routines (please use .o as extension
# and use spaces to separate files)
"""
TestShowerCard.card = shower_card.ShowerCard(text, testing = True)
text_analyse = \
"""#***********************************************************************
# MadGraph5_aMC@NLO *
# *
# shower_card.dat aMC@NLO *
# *
# This file is used to set the parameters for the shower. *
# *
# Some notation/conventions: *
# *
# Lines starting with a hash (#) are info or comments *
# *
# mind the format: variable = value # comment *
#***********************************************************************
#
#****************
# Shower settings
#****************
#
#***********************************************************************
# Number of events, jobs, errors, and random seeds *
#***********************************************************************
nevents = -1 # N evts to shower (< 0 = all)
nsplit_jobs = 1 # N jobs to run in parallel (< 100!!)
combine_td = T # combine the topdrawer files if nsplit_jobs > 1
maxprint = 2 # N evts to print in the log
maxerrs = 0.1 # max fraction of errors
rnd_seed = 0 # 1st random seed (0 = default)
rnd_seed2 = 0 # 2nd random seed (0 = default) !ONLY FOR HERWIG6!
#***********************************************************************
# PDFs and non-perturbative modelling *
#***********************************************************************
pdfcode = 0 # 0 = internal, 1 = same as NLO, other = lhaglue
ue_enabled = F # underlying event
hadronize = T # hadronisation on/off !IGNORED BY HERWIG6!
lambda_5 = -1 # Lambda_5 (< 0 = default) !IGNORED BY PYTHIA8!
#***********************************************************************
# Stable or unstable particles *
#***********************************************************************
b_stable = F # set B hadrons stable
pi_stable = T # set pi0's stable
wp_stable = F # set w+'s stable
wm_stable = F # set w-'s stable
z_stable = F # set z0's stable
h_stable = F # set Higgs' stable
tap_stable = F # set tau+'s stable
tam_stable = F # set tau-'s stable
mup_stable = F # set mu+'s stable
mum_stable = F # set mu-'s stable
#***********************************************************************
# Mass of the b quark *
#***********************************************************************
b_mass = -1 # b mass, (< 0 = default)
#***********************************************************************
# Special settings *
#***********************************************************************
is_4lep = F # T if 4-lepton production !ONLY FOR PYTHIA6!
is_bbar = F # T if bb~ production !ONLY FOR HERWIG6!
#***********************************************************************
# Decay channels *
# Write down the decay channels for the resonances, to be performed by *
# the shower. *
# The syntax (for a two-body decay) is *
# DM_I = M > D1 D2 @ BR @ ME *
# where I < 100, M is the decaying resonance, D1, D2 are the decay *
# products (up to D5 if such a decay is supported by the shower), BR *
# is the branching ratio (only used by the HERWIG6 shower, ignored *
# otherwise) and ME is the type of matrix element to be used in the *
# decay (only used by HERWIG6, ignored otherwise). *
# BR's are correctly understood by HERWIG6 only if they add up to 1 *
# and only if no more than three modes are required for a given *
# resonance. *
# ME corresponds to the third entry of subroutine HWMODK, see the *
# relevant manual. *
# *
# WARNING: in HERWIG6, the order of decay products in > 2-body decays *
# IS RELEVANT. *
# WARNING: in PYTHIA6, turning hadronisation off disables top decays *
# WARNING: in PYTHIA6 and PYTHIA8, 1 -> n decays (with n > 2) are *
# handled through a sequence of 1 -> 2 decays. *
# *
# Examples of syntax: *
# Z -> e+ e- or mu+ mu- with BR = 0.5 each *
# DM_1 = 23 > -11 11 @ 0.5d0 @ 100 *
# DM_2 = 23 > -13 13 @ 0.5d0 @ 100 *
# H -> tau+ tau- with BR = 1 *
# DM_3 = 25 > -15 15 @ 1.0d0 @ 0 *
# t -> nu_e e+ b with BR = 1 (HERWIG) *
# DM_4 = 6 > 12 -11 5 @ 1d0 @ 100 *
# t -> nu_e e+ b with BR = 1 (PYTHIA) *
# DM_5 = 6 > 24 5 @ 1d0 @ 100 *
# DM_6 = 24 > 12 -11 @ 1d0 @ 100 *
#***********************************************************************
#***********************************************************************
# Extra Libraries/analyses *
# The following lines need to be changed if the user does not want to *
# create a StdHEP/HepMC file, but to directly run an own analysis (to *
# be placed in HWAnalyzer or analogous MCatNLO subfolders). *
# Please use files in those folders as examples. *
#***********************************************************************
EXTRALIBS = stdhep Fmcfio # Extra-libraries (not LHAPDF)
# Default: "stdhep Fmcfio"
# PYTHIA > 8.200 may require library dl
EXTRAPATHS = ../lib # Path to the extra-libraries
# Default: "../lib"
INCLUDEPATHS = # Path to header files needed by c++
# Dir names separated by white spaces
ANALYSE = # User's analysis and histogramming
# routines (please use .o as extension
# and use spaces to separate files)
"""
TestShowerCard.card_analyse = shower_card.ShowerCard(text_analyse, testing = True)
def test_shower_card_py8(self):
"""test that the py8 card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_PY8=2
ERR_FR_PY8=0.100
RNDEVSEED_PY8=0
PDFCODE=0
UE_PY8=.FALSE.
HADRONIZE_PY8=.TRUE.
LAMBDAPYTH=-1.000
B_STABLE_PY8=.FALSE.
PI_STABLE_PY8=.TRUE.
WP_STABLE_PY8=.FALSE.
WM_STABLE_PY8=.FALSE.
Z_STABLE_PY8=.FALSE.
H_STABLE_PY8=.FALSE.
TAUP_STABLE_PY8=.FALSE.
TAUM_STABLE_PY8=.FALSE.
MUP_STABLE_PY8=.FALSE.
MUM_STABLE_PY8=.FALSE.
B_MASS=-1.000
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
PY8UTI=""
"""
text = self.card.write_card('PYTHIA8', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_py8_analyse(self):
"""test that the py8 card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_PY8=2
ERR_FR_PY8=0.100
RNDEVSEED_PY8=0
PDFCODE=0
UE_PY8=.FALSE.
HADRONIZE_PY8=.TRUE.
LAMBDAPYTH=-1.000
B_STABLE_PY8=.FALSE.
PI_STABLE_PY8=.TRUE.
WP_STABLE_PY8=.FALSE.
WM_STABLE_PY8=.FALSE.
Z_STABLE_PY8=.FALSE.
H_STABLE_PY8=.FALSE.
TAUP_STABLE_PY8=.FALSE.
TAUM_STABLE_PY8=.FALSE.
MUP_STABLE_PY8=.FALSE.
MUM_STABLE_PY8=.FALSE.
B_MASS=-1.000
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
PY8UTI=""
"""
text = self.card_analyse.write_card('PYTHIA8', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_hwpp(self):
"""test that the hwpp card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_HWPP=2
ERR_FR_HWPP=0.100
RNDEVSEED_HWPP=0
PDFCODE=0
UE_HWPP=.FALSE.
HADRONIZE_HWPP=.TRUE.
LAMBDAHERW=-1.000
B_STABLE_HWPP=.FALSE.
PI_STABLE_HWPP=.TRUE.
WP_STABLE_HWPP=.FALSE.
WM_STABLE_HWPP=.FALSE.
Z_STABLE_HWPP=.FALSE.
H_STABLE_HWPP=.FALSE.
TAUP_STABLE_HWPP=.FALSE.
TAUM_STABLE_HWPP=.FALSE.
MUP_STABLE_HWPP=.FALSE.
MUM_STABLE_HWPP=.FALSE.
B_MASS=-1.000
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
HWPPUTI=""
"""
text = self.card.write_card('HERWIGPP', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_hwpp_analyse(self):
"""test that the hwpp card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_HWPP=2
ERR_FR_HWPP=0.100
RNDEVSEED_HWPP=0
PDFCODE=0
UE_HWPP=.FALSE.
HADRONIZE_HWPP=.TRUE.
LAMBDAHERW=-1.000
B_STABLE_HWPP=.FALSE.
PI_STABLE_HWPP=.TRUE.
WP_STABLE_HWPP=.FALSE.
WM_STABLE_HWPP=.FALSE.
Z_STABLE_HWPP=.FALSE.
H_STABLE_HWPP=.FALSE.
TAUP_STABLE_HWPP=.FALSE.
TAUM_STABLE_HWPP=.FALSE.
MUP_STABLE_HWPP=.FALSE.
MUM_STABLE_HWPP=.FALSE.
B_MASS=-1.000
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
HWPPUTI=""
"""
text = self.card_analyse.write_card('HERWIGPP', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_hw6(self):
"""test that the hw6 card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_HW=2
ERR_FR_HW=0.100
RNDEVSEED1_HW=0
RNDEVSEED2_HW=0
PDFCODE=0
LHSOFT=.FALSE.
LAMBDAHERW=-1.000
B_STABLE_HW=.FALSE.
PI_STABLE_HW=.TRUE.
WP_STABLE_HW=.FALSE.
WM_STABLE_HW=.FALSE.
Z_STABLE_HW=.FALSE.
H_STABLE_HW=.FALSE.
TAUP_STABLE_HW=.FALSE.
TAUM_STABLE_HW=.FALSE.
MUP_STABLE_HW=.FALSE.
MUM_STABLE_HW=.FALSE.
B_MASS=-1.000
IS_BB_HW=.FALSE.
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
HWUTI="mcatnlo_hwan_stdhep.o"
"""
text = self.card.write_card('HERWIG6', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_hw6_analyse(self):
"""test that the hw6 card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_HW=2
ERR_FR_HW=0.100
RNDEVSEED1_HW=0
RNDEVSEED2_HW=0
PDFCODE=0
LHSOFT=.FALSE.
LAMBDAHERW=-1.000
B_STABLE_HW=.FALSE.
PI_STABLE_HW=.TRUE.
WP_STABLE_HW=.FALSE.
WM_STABLE_HW=.FALSE.
Z_STABLE_HW=.FALSE.
H_STABLE_HW=.FALSE.
TAUP_STABLE_HW=.FALSE.
TAUM_STABLE_HW=.FALSE.
MUP_STABLE_HW=.FALSE.
MUM_STABLE_HW=.FALSE.
B_MASS=-1.000
IS_BB_HW=.FALSE.
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
HWUTI="mcatnlo_hwan_stdhep.o"
"""
text = self.card_analyse.write_card('HERWIG6', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_py6(self):
"""test that the py6 card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_PY=2
ERR_FR_PY=0.100
RNDEVSEED_PY=0
PDFCODE=0
MSTP_81=0
MSTP_111=1
LAMBDAPYTH=-1.000
B_STABLE_PY=.FALSE.
PI_STABLE_PY=.TRUE.
WP_STABLE_PY=.FALSE.
WM_STABLE_PY=.FALSE.
Z_STABLE_PY=.FALSE.
H_STABLE_PY=.FALSE.
TAUP_STABLE_PY=.FALSE.
TAUM_STABLE_PY=.FALSE.
MUP_STABLE_PY=.FALSE.
MUM_STABLE_PY=.FALSE.
B_MASS=-1.000
IS_4L_PY=.FALSE.
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
PYUTI="mcatnlo_pyan_stdhep.o"
"""
text = self.card.write_card('PYTHIA6Q', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
def test_shower_card_py6_analyse(self):
"""test that the py6 card is correctly written"""
goal = \
"""NEVENTS=-1
MAXPR_PY=2
ERR_FR_PY=0.100
RNDEVSEED_PY=0
PDFCODE=0
MSTP_81=0
MSTP_111=1
LAMBDAPYTH=-1.000
B_STABLE_PY=.FALSE.
PI_STABLE_PY=.TRUE.
WP_STABLE_PY=.FALSE.
WM_STABLE_PY=.FALSE.
Z_STABLE_PY=.FALSE.
H_STABLE_PY=.FALSE.
TAUP_STABLE_PY=.FALSE.
TAUM_STABLE_PY=.FALSE.
MUP_STABLE_PY=.FALSE.
MUM_STABLE_PY=.FALSE.
B_MASS=-1.000
IS_4L_PY=.FALSE.
EXTRALIBS="stdhep Fmcfio"
EXTRAPATHS="../lib"
INCLUDEPATHS=
PYUTI="mcatnlo_pyan_stdhep.o"
"""
text = self.card_analyse.write_card('PYTHIA6Q', '')
for a, b in zip(text.split('\n'), goal.split('\n')):
self.assertEqual(a,b)
self.assertEqual(text, goal)
| 41.001916 | 95 | 0.460636 | 2,335 | 21,403 | 4.073662 | 0.152891 | 0.016821 | 0.018923 | 0.014298 | 0.92704 | 0.921678 | 0.920206 | 0.920206 | 0.918734 | 0.918734 | 0 | 0.033967 | 0.338364 | 21,403 | 521 | 96 | 41.080614 | 0.637737 | 0.039948 | 0 | 0.542373 | 0 | 0 | 0.042303 | 0 | 0 | 0 | 0 | 0 | 0.271186 | 1 | 0.152542 | false | 0 | 0.067797 | 0 | 0.237288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d5447eb21c0cbaa4f491badf347e662bc2b9eed | 122 | py | Python | pavlov/stats/timeseries/__init__.py | jzf2101/boardlaw | 29126c2a6ab7f11154fb242c303d3b11f1566201 | [
"MIT"
] | 20 | 2021-01-20T17:15:18.000Z | 2022-01-25T21:51:29.000Z | pavlov/stats/timeseries/__init__.py | jzf2101/boardlaw | 29126c2a6ab7f11154fb242c303d3b11f1566201 | [
"MIT"
] | 17 | 2021-01-21T08:14:11.000Z | 2021-06-09T22:27:00.000Z | pavlov/stats/timeseries/__init__.py | jzf2101/boardlaw | 29126c2a6ab7f11154fb242c303d3b11f1566201 | [
"MIT"
] | 3 | 2021-02-15T05:18:41.000Z | 2021-06-30T14:11:26.000Z | from ... import tests, runs
# Have to import kinds before importing KINDS
from . import kinds
from .factory import KINDS
| 20.333333 | 45 | 0.762295 | 18 | 122 | 5.166667 | 0.555556 | 0.354839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 122 | 5 | 46 | 24.4 | 0.93 | 0.352459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7d6cc827721c95d72a5fae4653d980deca4bf593 | 156 | py | Python | active_learning/strategies/__init__.py | bpanahij/maskal | 5a565854d43c80cac8a4c5d9996a1042db70633e | [
"Apache-2.0"
] | 11 | 2021-12-17T09:12:57.000Z | 2022-03-23T18:27:17.000Z | active_learning/strategies/__init__.py | bpanahij/maskal | 5a565854d43c80cac8a4c5d9996a1042db70633e | [
"Apache-2.0"
] | null | null | null | active_learning/strategies/__init__.py | bpanahij/maskal | 5a565854d43c80cac8a4c5d9996a1042db70633e | [
"Apache-2.0"
] | 1 | 2022-01-26T23:25:08.000Z | 2022-01-26T23:25:08.000Z | # @Author: Pieter Blok
# @Date: 2021-03-22 09:43:07
# @Last Modified by: Pieter Blok
# @Last Modified time: 2021-03-26 09:42:51
from .dropout import *
| 22.285714 | 42 | 0.673077 | 27 | 156 | 3.888889 | 0.740741 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220472 | 0.185897 | 156 | 6 | 43 | 26 | 0.606299 | 0.788462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7d6e1ff499a14a0dc6aecd7eb3f13bce53fe3efb | 3,470 | py | Python | health/migrations/0001_initial.py | jkan2i/HealthBuddy | 520d28f01551bb50e347057c3dcbc8fac3db3db7 | [
"MIT"
] | null | null | null | health/migrations/0001_initial.py | jkan2i/HealthBuddy | 520d28f01551bb50e347057c3dcbc8fac3db3db7 | [
"MIT"
] | null | null | null | health/migrations/0001_initial.py | jkan2i/HealthBuddy | 520d28f01551bb50e347057c3dcbc8fac3db3db7 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2020-12-14 13:54
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Patient',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('mobile', models.CharField(max_length=10, null=True)),
('address', models.CharField(max_length=100, null=True)),
('image', models.FileField(null=True, upload_to='')),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Medical',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('timing', models.CharField(max_length=10, null=True)),
('mobile', models.CharField(max_length=10, null=True)),
('address', models.CharField(max_length=100, null=True)),
('image', models.FileField(null=True, upload_to='')),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Hospital',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('mobile', models.CharField(max_length=10, null=True)),
('timing', models.CharField(max_length=10, null=True)),
('days_time', models.CharField(max_length=10, null=True)),
('address', models.CharField(max_length=100, null=True)),
('image', models.FileField(null=True, upload_to='')),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Doctor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('mobile', models.CharField(max_length=10, null=True)),
('address', models.CharField(max_length=100, null=True)),
('experience', models.CharField(max_length=100, null=True)),
('specialist', models.CharField(max_length=100, null=True)),
('clinic', models.CharField(max_length=100, null=True)),
('cl_address', models.CharField(max_length=100, null=True)),
('daystiming', models.CharField(max_length=100, null=True)),
('timing', models.CharField(max_length=100, null=True)),
('dob', models.DateField(null=True)),
('gender', models.CharField(max_length=100, null=True)),
('biography', models.TextField(null=True)),
('image', models.FileField(null=True, upload_to='')),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| 49.571429 | 129 | 0.588184 | 371 | 3,470 | 5.363881 | 0.202156 | 0.112563 | 0.162814 | 0.217085 | 0.801508 | 0.801508 | 0.801508 | 0.692965 | 0.631658 | 0.631658 | 0 | 0.024247 | 0.263112 | 3,470 | 69 | 130 | 50.289855 | 0.754009 | 0.012968 | 0 | 0.612903 | 1 | 0 | 0.065148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.048387 | 0 | 0.112903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7daf46b634e03d3e2abb31f3996bf3b09e98f148 | 94 | py | Python | losses/WGANLoss.py | NoelShin/LIT | ac08254c6ef2d29f5bb823d79f613b355f286953 | [
"MIT"
] | 1 | 2019-01-23T07:44:47.000Z | 2019-01-23T07:44:47.000Z | losses/WGANLoss.py | NoelShin/LIT | ac08254c6ef2d29f5bb823d79f613b355f286953 | [
"MIT"
] | null | null | null | losses/WGANLoss.py | NoelShin/LIT | ac08254c6ef2d29f5bb823d79f613b355f286953 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from torch.autograd import grad
from base_loss import Loss
| 18.8 | 31 | 0.829787 | 17 | 94 | 4.529412 | 0.529412 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 94 | 4 | 32 | 23.5 | 0.9625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7dddd4879c9b125208e5677084cd406b6ed6659f | 4,513 | py | Python | tests/test_metrics.py | pzelasko/daseg | 5e3aaf6e81a44a5eb42226bd376c92c7d1879261 | [
"Apache-2.0"
] | 4 | 2021-07-12T00:46:32.000Z | 2022-02-28T07:02:27.000Z | tests/test_metrics.py | pzelasko/daseg | 5e3aaf6e81a44a5eb42226bd376c92c7d1879261 | [
"Apache-2.0"
] | 2 | 2021-12-09T12:34:24.000Z | 2022-02-14T20:37:01.000Z | tests/test_metrics.py | pzelasko/daseg | 5e3aaf6e81a44a5eb42226bd376c92c7d1879261 | [
"Apache-2.0"
] | null | null | null | from functools import reduce
from operator import add
from typing import List
import pytest
from daseg import Call, DialogActCorpus, FunctionalSegment
from daseg.metrics import compute_original_zhao_kawahara_metrics, compute_zhao_kawahara_metrics
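# Flatten every turn of a corpus into one list of encoded dialog-act labels per turn (joint coding, no
# continuation labels, no turn token); this is the label format fed to the original Zhao-Kawahara scorer below.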
def as_labels(corpus: DialogActCorpus) -> List[List[str]]:
return [
reduce(
add,
(s.encoded_acts for s in Call(turn).encode(
use_joint_coding=True, continuations_allowed=False, add_turn_token=False
))
)
for turn in corpus.turns
]
@pytest.fixture
def true_dataset():
return DialogActCorpus(dialogues={
'call1': Call([
FunctionalSegment('a b c', dialog_act='sd', speaker='A'),
FunctionalSegment('a b c', dialog_act='ad', speaker='A'),
FunctionalSegment('a b c', dialog_act='h', speaker='A'),
FunctionalSegment('a b', dialog_act='qy', speaker='A'),
])
})
@pytest.fixture
def pred_dataset():
return DialogActCorpus(dialogues={
'call1': Call([
FunctionalSegment('a b c', dialog_act='sd', speaker='A'),
FunctionalSegment('a b c a', dialog_act='ad', speaker='A'),
FunctionalSegment('b c', dialog_act='h', speaker='A'),
FunctionalSegment('a b', dialog_act='qy^d', speaker='A'),
])
})
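# Relative to the 4-segment, 11-word truth above: segments 2 and 3 get wrong boundaries (2/4 segments, 6/11 words),
# and segment 4 keeps its boundary but swaps qy for qy^d, so the label-aware metrics cover 3/4 segments and 8/11 words.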
def test_zhao_kwahara_metrics(true_dataset, pred_dataset):
metrics = compute_zhao_kawahara_metrics(true_dataset=true_dataset, pred_dataset=pred_dataset)
assert metrics['DSER'] == 2 / 4
assert metrics['SegmentationWER'] == 6 / 11
assert metrics['DER'] == 3 / 4
assert metrics['JointWER'] == 8 / 11
def test_original_zhao_kwahara_metrics(true_dataset, pred_dataset):
metrics = compute_original_zhao_kawahara_metrics(
true_turns=as_labels(true_dataset),
pred_turns=as_labels(pred_dataset)
)
assert metrics['DSER'] == 2 / 4
assert metrics['SegmentationWER'] == 6 / 11
assert metrics['DER'] == 3 / 4
assert metrics['JointWER'] == 8 / 11
@pytest.fixture
def true_dataset_ins():
return DialogActCorpus(dialogues={
'call1': Call([
FunctionalSegment('a b c', dialog_act='sd', speaker='A'),
])
})
@pytest.fixture
def pred_dataset_ins():
return DialogActCorpus(dialogues={
'call1': Call([
FunctionalSegment('a', dialog_act='sd', speaker='A'),
FunctionalSegment('b c', dialog_act='sd', speaker='A'),
])
})
@pytest.fixture
def pred_dataset_ins_diff_label():
return DialogActCorpus(dialogues={
'call1': Call([
FunctionalSegment('a', dialog_act='sv', speaker='A'),
FunctionalSegment('b c', dialog_act='sd', speaker='A'),
])
})
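# Insertion case: the single 3-word true segment is split into two predicted segments, so every metric saturates
# (1/1 segments, 3/3 words) whether or not the extra segment keeps the reference label.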
def test_zhao_kwahara_metrics_segment_insertion(true_dataset_ins, pred_dataset_ins):
metrics = compute_zhao_kawahara_metrics(true_dataset=true_dataset_ins, pred_dataset=pred_dataset_ins)
assert metrics['DSER'] == 1 / 1
assert metrics['SegmentationWER'] == 3 / 3
assert metrics['DER'] == 1 / 1
assert metrics['JointWER'] == 3 / 3
def test_zhao_kwahara_metrics_segment_insertion_different_label(true_dataset_ins, pred_dataset_ins_diff_label):
metrics = compute_zhao_kawahara_metrics(true_dataset=true_dataset_ins, pred_dataset=pred_dataset_ins_diff_label)
assert metrics['DSER'] == 1 / 1
assert metrics['SegmentationWER'] == 3 / 3
assert metrics['DER'] == 1 / 1
assert metrics['JointWER'] == 3 / 3
@pytest.mark.skip(reason="The original Zhao-Kawahara code scores this incorrectly")
def test_original_zhao_kwahara_metrics_segment_insertion(true_dataset_ins, pred_dataset_ins):
metrics = compute_original_zhao_kawahara_metrics(
true_turns=as_labels(true_dataset_ins),
pred_turns=as_labels(pred_dataset_ins)
)
assert metrics['DSER'] == 1 / 1
assert metrics['SegmentationWER'] == 3 / 3
assert metrics['DER'] == 1 / 1
assert metrics['JointWER'] == 3 / 3
@pytest.mark.skip(reason="The original Zhao-Kawahara code scores this incorrectly")
def test_original_zhao_kwahara_metrics_segment_insertion_different_label(true_dataset_ins, pred_dataset_ins_diff_label):
metrics = compute_original_zhao_kawahara_metrics(
true_turns=as_labels(true_dataset_ins),
pred_turns=as_labels(pred_dataset_ins_diff_label)
)
assert metrics['DSER'] == 1 / 1
assert metrics['SegmentationWER'] == 3 / 3
assert metrics['DER'] == 1 / 1
assert metrics['JointWER'] == 3 / 3
| 33.932331 | 120 | 0.675382 | 560 | 4,513 | 5.1625 | 0.148214 | 0.107921 | 0.048426 | 0.030439 | 0.883777 | 0.846074 | 0.829817 | 0.801453 | 0.789346 | 0.693532 | 0 | 0.015877 | 0.20452 | 4,513 | 132 | 121 | 34.189394 | 0.789415 | 0 | 0 | 0.575472 | 0 | 0 | 0.089298 | 0 | 0 | 0 | 0 | 0 | 0.226415 | 1 | 0.113208 | false | 0 | 0.056604 | 0.056604 | 0.226415 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
815e11a2c148aa63f6628f6ee2cf54142b24a984 | 30 | py | Python | scout/adapter/mongo/__init__.py | szilvajuhos/scout | 2f4a03fb3192a57c99fd62be626e8c22051e81af | [
"BSD-3-Clause"
] | 1 | 2019-08-17T21:20:04.000Z | 2019-08-17T21:20:04.000Z | scout/adapter/mongo/__init__.py | szilvajuhos/scout | 2f4a03fb3192a57c99fd62be626e8c22051e81af | [
"BSD-3-Clause"
] | null | null | null | scout/adapter/mongo/__init__.py | szilvajuhos/scout | 2f4a03fb3192a57c99fd62be626e8c22051e81af | [
"BSD-3-Clause"
] | null | null | null | from .base import MongoAdapter | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8162e0d346b3d126ca419f33ae72f53a90aceafe | 5,921 | py | Python | mlni/adml_regression_rbf.py | anbai106/pyhydra | 1b1060c06b15c02ca417fee13dc7def77b95d4da | [
"MIT"
] | 1 | 2022-03-21T13:18:13.000Z | 2022-03-21T13:18:13.000Z | mlni/adml_regression_rbf.py | anbai106/pyhydra | 1b1060c06b15c02ca417fee13dc7def77b95d4da | [
"MIT"
] | 4 | 2021-04-20T13:37:32.000Z | 2021-05-07T01:54:22.000Z | mlni/adml_regression_rbf.py | anbai106/pyhydra | 1b1060c06b15c02ca417fee13dc7def77b95d4da | [
"MIT"
] | 2 | 2020-10-24T16:45:07.000Z | 2021-01-11T03:13:17.000Z | from mlni.regression_rbf import RB_RepeatedHoldOut_DualSVM_Regression, RB_KFold_DualSVM_Regression, \
VB_RepeatedHoldOut_DualSVM_Regression, VB_KFold_DualSVM_Regression
from mlni.base import RB_Input, VB_Input
import os, pickle
from mlni.utils import make_cv_partition
__author__ = "Junhao Wen"
__copyright__ = "Copyright 2019-2020 The CBICA & SBIA Lab"
__credits__ = ["Junhao Wen"]
__license__ = "See LICENSE file"
__version__ = "0.1.0"
__maintainer__ = "Junhao Wen"
__email__ = "junhao.wen89@gmail.com"
__status__ = "Development"
def regression_roi(feature_tsv, output_dir, cv_repetition, cv_strategy='hold_out', n_threads=8, seed=None, verbose=False):
"""
Core function for regression with ROI-based features
Args:
feature_tsv:str, path to the tsv containing extracted feature, following the BIDS convention. The tsv contains
the following headers: "
"i) the first column is the participant_id;"
"ii) the second column should be the session_id;"
"iii) the third column should be the diagnosis;"
"The following column should be the extracted features. e.g., the ROI features"
output_dir: str, path to store the regression results.
cv_repetition: int, number of repetitions for cross-validation (CV)
cv_strategy: str, cross validation strategy used. Default is hold_out. choices=['k_fold', 'hold_out']
n_threads: int, default is 8. The number of threads to run model in parallel.
verbose: Bool, default is False. If the output message is verbose.
Returns: regression outputs.
"""
print('MLNI for a regression with nested CV...')
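# RB_Input loads the ROI features from feature_tsv (format described in the docstring) with min-max scaling.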
input_data = RB_Input(feature_tsv, standardization_method="minmax")
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition, seed=seed)
print('Data split has been done!\n')
print('Starts regression with SVR...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for regression.
if cv_strategy == 'hold_out':
wf_regression = RB_RepeatedHoldOut_DualSVM_Regression(input_data, split_index, os.path.join(output_dir, 'regression'),
n_threads=n_threads, n_iterations=cv_repetition, verbose=verbose)
wf_regression.run()
elif cv_strategy == 'k_fold':
wf_regression = RB_KFold_DualSVM_Regression(input_data, split_index, os.path.join(output_dir, 'regression'),
cv_repetition, n_threads=n_threads, verbose=verbose)
wf_regression.run()
else:
raise Exception("CV methods have not been implemented")
print('Finish...')
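# A minimal usage sketch (hypothetical file names, not shipped with the package):
#   regression_roi('roi_features.tsv', './output', cv_repetition=50, cv_strategy='hold_out', n_threads=8)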
def regression_voxel(participant_tsv, output_dir, cv_repetition, cv_strategy='hold_out', n_threads=8, seed=None, verbose=False):
"""
Core function for regression with voxel-wise images
Args:
participant_tsv:str, path to the tsv containing extracted feature, following the BIDS convention. The tsv contains
the following headers: "
"i) the first column is the participant_id;"
"ii) the second column should be the session_id;"
"iii) the third column should be the diagnosis;"
"iv) the forth column should be the path to each image;"
output_dir: str, path to store the regression results.
cv_repetition: int, number of repetitions for cross-validation (CV)
cv_strategy: str, cross validation strategy used. Default is hold_out. choices=['k_fold', 'hold_out']
n_threads: int, default is 8. The number of threads to run model in parallel.
verbose: Bool, default is False. If the output message is verbose.
Returns: regression outputs.
"""
print('MLNI for a regression with nested CV...')
input_data = VB_Input(participant_tsv)
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition, seed=seed)
print('Data split has been done!\n')
print('Starts regression with SVR...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for regression.
if cv_strategy == 'hold_out':
wf_regression = VB_RepeatedHoldOut_DualSVM_Regression(input_data, split_index, os.path.join(output_dir, 'regression'),
n_threads=n_threads, n_iterations=cv_repetition, verbose=verbose)
wf_regression.run()
elif cv_strategy == 'k_fold':
wf_regression = VB_KFold_DualSVM_Regression(input_data, split_index, os.path.join(output_dir, 'regression'),
cv_repetition, n_threads=n_threads, verbose=verbose)
wf_regression.run()
else:
raise Exception("CV methods have not been implemented")
print('Finish...') | 54.824074 | 135 | 0.675224 | 788 | 5,921 | 4.837563 | 0.217005 | 0.037775 | 0.020986 | 0.033578 | 0.849423 | 0.823715 | 0.823715 | 0.823715 | 0.823715 | 0.823715 | 0 | 0.004634 | 0.234589 | 5,921 | 108 | 136 | 54.824074 | 0.836496 | 0.387435 | 0 | 0.618182 | 0 | 0 | 0.217816 | 0.031609 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.072727 | 0 | 0.109091 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
817a810be2018fae9f333560b082fb250af2d462 | 30 | py | Python | exercises/saddle-points/saddle_points.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 1 | 2021-05-15T19:59:04.000Z | 2021-05-15T19:59:04.000Z | exercises/saddle-points/saddle_points.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | null | null | null | exercises/saddle-points/saddle_points.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 2 | 2018-03-03T08:32:12.000Z | 2019-08-22T11:55:53.000Z | def saddle_points():
pass
| 10 | 20 | 0.666667 | 4 | 30 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 30 | 2 | 21 | 15 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
81a5af0e25a2d163e2f9bd5c49cc87d282619275 | 67 | py | Python | geekshop/adminapp/views/__init__.py | tortilla1310/Django_shop | b61bea6a7f09eeb445321d4d3f508b1e8b88d18d | [
"MIT"
] | null | null | null | geekshop/adminapp/views/__init__.py | tortilla1310/Django_shop | b61bea6a7f09eeb445321d4d3f508b1e8b88d18d | [
"MIT"
] | 1 | 2022-03-29T22:06:56.000Z | 2022-03-29T22:06:56.000Z | geekshop/adminapp/views/__init__.py | ignat-cmd/internet_store | 638ed290b07b360335bc64a144bb8bafeccc077f | [
"MIT"
] | null | null | null | from .category import *
from .product import *
from .user import *
| 16.75 | 23 | 0.731343 | 9 | 67 | 5.444444 | 0.555556 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 67 | 3 | 24 | 22.333333 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |