| id | text | dataset_id |
|---|---|---|
/HVFM-1.0.1.tar.gz/HVFM-1.0.1/README.md | - feature_engineering.py
- func.py
- generate_dataset.py
- model.py
- olga_func.py
- train.py
# demo_work1: Case
## AutoOLGAFleet
- mech_case:
  1. base
  2. config
  3. auto_generate
  4. other: info
## BiVFM
- dataset
  1. dataset_0_tpl
  2. dataset_1_csv
  3. dataset_2_xy
  4. dataset_3_pkl
- logs
  1. logs_Model
     1. log
     2. save_dir
     3. log_imgs
     4. imgs_save
  2. ...
- feature_extractor.xlsx
- conf_base.yaml
- conf_info.yaml
Flow measurement is the basis of production monitoring and productivity allocation. The virtual flowmeter (VFM) provides a multiphase flow prediction method for petroleum assets, with the advantages of flexibility and low cost compared with the multiphase flowmeter (MPFM). Mechanism models and data-driven models are the two main methods used in VFM, but previous studies on VFM cannot fully combine the advantages of the two and impose harsh requirements on operating conditions and measurements, so they struggle to meet production requirements in oil fields. In this study, a bi-model-driven distributed virtual flowmeter framework (BiVFM-Fleet) is proposed, in which the mechanism-driven model is coupled with the data-driven model. In this framework, domain knowledge is obtained with a large-scale distributed multiphase flow mechanism calculation method, combined with grid search, to study the flow behavior of arbitrary well patterns under complex conditions. Furthermore, the multiphase flow convolution network (MPFNet), built on Temporal Convolutional Network (TCN) layers, is proposed as the core component of the framework for the multiphase flow problem. In a case of 30 single-well flow cycle evaluations, BiVFM-Fleet achieves state-of-the-art performance with a MAPE of 0.15%, a 140% improvement over the mainstream LSTM method and 33.33% better than the next-best method, TCN. In a case of 1,024 large-scale multi-well evaluations, a MAPE of 5.05% is obtained, 0.34 percentage points lower than TCN, indicating a 6.73% performance improvement. A transient flow prediction case for multi-well flow is computed against a real-time monitored data source. In addition, the Python package of the BiVFM-Fleet framework and the largest dataset of transient multiphase flow patterns to date are released, providing a basis for subsequent multiphase flow studies.
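The MPFNet architecture itself is not included in this README. As a rough, minimal sketch of the TCN building block the abstract refers to, the following PyTorch snippet shows a dilated causal convolution residual block; it is an illustrative assumption, not the released implementation, and all class names and hyperparameters are hypothetical:

```python
# Minimal sketch of a TCN-style residual block (illustrative, not MPFNet itself).
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution trimmed so each output depends only on past inputs."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        out = self.conv(x)
        return out[:, :, :-self.pad] if self.pad else out

class TCNBlock(nn.Module):
    """Two dilated causal convolutions with a residual connection."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.net = nn.Sequential(
            CausalConv1d(in_ch, out_ch, kernel_size, dilation), nn.ReLU(),
            CausalConv1d(out_ch, out_ch, kernel_size, dilation), nn.ReLU(),
        )
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return self.net(x) + self.skip(x)

# Example: a (batch=2, sensor channels=8, timesteps=50) window.
y = TCNBlock(8, 16, dilation=2)(torch.randn(2, 8, 50))  # -> (2, 16, 50)
```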
Paper: A Bi-Model Driven Transient Multiphase Virtual Flow Metering Paradigm for Arbitrary Well Patterns Based on the Mechanism Model and Data-Driven Model Pip Package: https://pypi.org/project/HVFM Github Repository: https://github.com/mmmahhhhe/HVFM | PypiClean |
/BayesDB-0.2.0.tar.gz/BayesDB-0.2.0/bayesdb/parser.py |
import utils
import os
import bql_grammar as bql
import pyparsing as pp
import functions
import operator
class Parser(object):
def __init__(self):
self.reset_root_dir()
def pyparse_input(self, input_string):
"""
Uses the grammar defined in bql_grammar to create a pyparsing object out of an input string
:param str input_string: the input block of bql statement(s)
:return: the pyparsing object for the parsed statement.
:rtype: pyparsing.ParseResults
:raises BayesDBParseError: if the string is not a valid bql statement.
"""
try:
bql_blob_ast = bql.bql_input.parseString(input_string, parseAll=True)
except pp.ParseException as x:
raise utils.BayesDBParseError("Invalid query. Could not parse (Line {e.lineno}, column {e.col}):\n\t'{e.line}'\n\t".format(e=x) + ' ' * x.col + '^')
return bql_blob_ast
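# Illustrative usage (hypothetical statement; the exact ParseResults layout
# depends on the bql_grammar definitions):
#   parser = Parser()
#   ast = parser.pyparse_input("SELECT name FROM my_btable;")
#   ast[0].statement_id   # e.g. 'select'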
def parse_single_statement(self,bql_statement_ast):
"""
Calls parse_method on the statement_id in bql_statement_ast.
:param bql_statement_ast pyparsing.ParseResults: The single statement pyparsing result
:return: parse_method(bql_statement_ast)
"""
## TODO Check for nest
parse_method = getattr(self,'parse_' + bql_statement_ast.statement_id)
return parse_method(bql_statement_ast)
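# For example, a statement whose statement_id is 'select' is dispatched to
# self.parse_select, and 'show_schema' to self.parse_show_schema; an unknown
# statement_id would raise AttributeError from getattr.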
#####################################################################################
## -------------------------- Individual Parse Methods --------------------------- ##
#####################################################################################
def parse_list_btables(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('list_btables', args_dict, None):
"""
if bql_statement_ast.statement_id == "list_btables":
return 'list_btables', dict(), None
else:
raise utils.BayesDBParseError("Parsing statement as LIST BTABLES failed")
def parse_help(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('help', args_dict, None):
"""
method = None
if bql_statement_ast.method_name != '':
method = bql_statement_ast.method_name
return 'help', dict(method=method), None
def parse_execute_file(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('execute_file', args_dict, None):
"""
return 'execute_file', dict(filename=self.get_absolute_path(bql_statement_ast.filename)), None
def parse_show_schema(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_schema', args_dict, None):
"""
return 'show_schema', dict(tablename=bql_statement_ast.btable), None
def parse_show_models(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_models', args_dict, None):
"""
return 'show_models', dict(tablename=bql_statement_ast.btable), None
def parse_show_diagnostics(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_diagnostics', args_dict, None):
"""
return 'show_diagnostics', dict(tablename=bql_statement_ast.btable), None
def parse_drop_column_list(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('drop_column_list', args_dict, None):
"""
list_name = None
if bql_statement_ast.list_name != '':
list_name = bql_statement_ast.list_name
return 'drop_column_list', dict(tablename=bql_statement_ast.btable, list_name=list_name), None
def parse_drop_row_list(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('drop_row_list', args_dict, None):
"""
list_name = None
if bql_statement_ast.list_name != '':
list_name = bql_statement_ast.list_name
return 'drop_row_list', dict(tablename=bql_statement_ast.btable, list_name=list_name), None
def parse_drop_models(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('drop_models', args_dict, None):
"""
model_indices = None
if bql_statement_ast.index_clause != '':
model_indices = bql_statement_ast.index_clause.asList()
return 'drop_models', dict(tablename=bql_statement_ast.btable, model_indices=model_indices), None
def parse_initialize_models(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('initialize_models', args_dict, None):
"""
n_models = 16 ##TODO magic number - move to config file
if bql_statement_ast.num_models != '':
n_models = int(bql_statement_ast.num_models)
tablename = bql_statement_ast.btable
arguments_dict = dict(tablename=tablename, n_models=n_models, model_config=None)
if bql_statement_ast.config != '':
arguments_dict['model_config'] = bql_statement_ast.config
return 'initialize_models', arguments_dict, None
def parse_create_btable(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('create_btable', args_dict, client_dict):
"""
tablename = bql_statement_ast.btable
filename = self.get_absolute_path(bql_statement_ast.filename)
client_dict = dict(csv_path = filename)
if bql_statement_ast.codebook_file != '':
codebook_path = self.get_absolute_path(bql_statement_ast.codebook_file)
client_dict['codebook_path'] = codebook_path
return 'create_btable', dict(tablename=tablename, cctypes_full=None), client_dict
#TODO types?
def parse_upgrade_btable(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('upgrade_btable', args_dict, client_dict):
"""
tablename = bql_statement_ast.btable
return 'upgrade_btable', dict(tablename=tablename), None
def parse_describe(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('describe', args_dict, None):
"""
tablename = None
if bql_statement_ast.btable != '':
tablename = bql_statement_ast.btable
columnset = None
if bql_statement_ast.columnset != '':
columnset = bql_statement_ast.columnset
return 'describe', dict(tablename=tablename, columnset=columnset), None
def parse_update_codebook(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('update_codebook', args_dict, client_dict):
"""
tablename = None
if bql_statement_ast.btable != '':
tablename = bql_statement_ast.btable
codebook_path = None
if bql_statement_ast.filename != '':
codebook_path = bql_statement_ast.filename
return 'update_codebook', dict(tablename=tablename), dict(codebook_path=codebook_path)
def parse_update_short_names(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('update_short_names', args_dict, None):
"""
tablename = None
if bql_statement_ast.btable != '':
tablename = bql_statement_ast.btable
mappings = None
if bql_statement_ast.label_clause != '':
mappings = {}
for label_set in bql_statement_ast.label_clause:
mappings[label_set[0]] = label_set[1]
return 'update_short_names', dict(tablename=tablename, mappings=mappings), None
def parse_update_descriptions(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('update_descriptions', args_dict, None):
"""
tablename = None
if bql_statement_ast.btable != '':
tablename = bql_statement_ast.btable
mappings = None
if bql_statement_ast.label_clause != '':
mappings = {}
for label_set in bql_statement_ast.label_clause:
mappings[label_set[0]] = label_set[1]
return 'update_descriptions', dict(tablename=tablename, mappings=mappings), None
def parse_update_schema(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('update_schema', args_dict, None):
"""
tablename = bql_statement_ast.btable
mappings = dict()
type_clause = bql_statement_ast.type_clause
for update in type_clause:
column = update[0]
cctype = update[1]
mappings[column] = dict()
mappings[column]['cctype'] = cctype
if 'parameters' in update:
mappings[column]['parameters'] = dict()
for param_type in update.parameters.keys():
mappings[column]['parameters'][param_type] = update.parameters[param_type]
else:
mappings[column]['parameters'] = None
return 'update_schema', dict(tablename=tablename, mappings=mappings), None
def parse_drop_btable(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('drop_btable', args_dict, client_dict):
"""
return 'drop_btable', dict(tablename=bql_statement_ast.btable), None
def parse_analyze(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('analyze', args_dict, None):
"""
model_indices = None
iterations = None
seconds = None
kernel = 0
wait = True
tablename = bql_statement_ast.btable
if bql_statement_ast.index_clause != '':
model_indices = bql_statement_ast.index_clause.asList()
if bql_statement_ast.num_minutes != '':
seconds = int(bql_statement_ast.num_minutes) * 60
if bql_statement_ast.num_iterations != '':
iterations = int(bql_statement_ast.num_iterations)
if bql_statement_ast.with_kernel_clause != '':
kernel = bql_statement_ast.with_kernel_clause.kernel_id
if kernel == 'mh': ## TODO should return None or something for invalid kernels
kernel = 1
if bql_statement_ast.wait != '':
wait = False
return 'analyze', dict(tablename=tablename, model_indices=model_indices,
iterations=iterations, seconds=seconds,
ct_kernel=kernel, background=wait), None
def parse_cancel_analyze(self, bql_statement_ast): ##TODO no tests written
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('cancel_analyze', args_dict, None):
"""
return 'cancel_analyze', dict(tablename=bql_statement_ast.btable), None
def parse_show_analyze(self, bql_statement_ast): ##TODO no tests written
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_analyze', args_dict, None):
"""
return 'show_analyze', dict(tablename=bql_statement_ast.btable), None
def parse_show_row_lists(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_row_lists', args_dict, None):
"""
return 'show_row_lists', dict(tablename=bql_statement_ast.btable), None
def parse_show_column_lists(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_column_lists', args_dict, None):
"""
return 'show_column_lists', dict(tablename=bql_statement_ast.btable), None
def parse_show_columns(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_columns', args_dict, None):
"""
return 'show_columns', dict(tablename=bql_statement_ast.btable, column_list=bql_statement_ast.column_list[0]), None
def parse_save_models(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('save_models', args_dict, client_dict):
"""
return 'save_models', dict(tablename=bql_statement_ast.btable), dict(pkl_path=bql_statement_ast.filename)
def parse_load_models(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('load_models', args_dict, client_dict):
"""
return 'load_models', dict(tablename=bql_statement_ast.btable), dict(pkl_path=bql_statement_ast.filename)
def parse_label_columns(self, bql_statement_ast): ##TODO only smoke test right now
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('label_columns', args_dict, client_dict):
"""
tablename = bql_statement_ast.btable
source = None
mappings = None
if bql_statement_ast.label_clause != '':
source = 'inline'
mappings = {}
for label_set in bql_statement_ast.label_clause:
mappings[label_set[0]] = label_set[1]
csv_path = None
if bql_statement_ast.filename != '':
csv_path = bql_statement_ast.filename
source = 'file'
return 'label_columns', \
dict(tablename=tablename, mappings=mappings), \
dict(source=source, csv_path=csv_path)
def parse_show_metadata(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_metadata', args_dict, None):
"""
tablename = None
if bql_statement_ast.btable != '':
tablename = bql_statement_ast.btable
keyset = None
if bql_statement_ast.keyset != '':
keyset = bql_statement_ast.keyset
return 'show_metadata', dict(tablename=tablename, keyset=keyset), None
def parse_show_label(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('show_labels', args_dict, None):
"""
tablename = None
if bql_statement_ast.btable != '':
tablename = bql_statement_ast.btable
columnset = None
if bql_statement_ast.columnset != '':
columnset = bql_statement_ast.columnset
return 'show_labels', dict(tablename=tablename, columnset=columnset), None
def parse_update_metadata(self, bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('update_metadata', args_dict, client_dict):
"""
tablename = bql_statement_ast.btable
source = None
mappings = None
if bql_statement_ast.label_clause != '':
source = 'inline'
mappings = {}
for label_set in bql_statement_ast.label_clause:
mappings[label_set[0]] = label_set[1]
csv_path = None
if bql_statement_ast.filename != '':
csv_path = bql_statement_ast.filename
source = 'file'
return 'update_metadata', \
dict(tablename=tablename, mappings=mappings), \
dict(source=source, csv_path=csv_path)
def parse_query(self, bql_statement_ast):
'''
master parser for queries (select, infer, simulate, estimate pairwise, etc).
returns a general args dict which the specific versions of those functions
will then trim and check for illegal arguments through assertions.
:param bql_statement_ast pyparsing.ParseResults:
:return (statement_id, args_dict, client_dict):
'''
statement_id = bql_statement_ast.statement_id
confidence = None
if bql_statement_ast.confidence != '':
confidence = float(bql_statement_ast.confidence)
filename = None
if bql_statement_ast.filename != '':
filename = bql_statement_ast.filename
functions = bql_statement_ast.functions
givens = None
if bql_statement_ast.given_clause != '':
givens = bql_statement_ast.given_clause
limit = float('inf')
if bql_statement_ast.limit != '':
limit = int(bql_statement_ast.limit)
modelids = None
if bql_statement_ast.using_models_index_clause != '':
modelids = bql_statement_ast.using_models_index_clause.asList()
name = None
if bql_statement_ast.as_column_list != '':
## TODO name is a bad name
name = bql_statement_ast.as_column_list
newtablename = None
if bql_statement_ast.newtablename != '':
newtablename = bql_statement_ast.newtablename
numpredictions = None
if bql_statement_ast.times != '':
numpredictions = int(bql_statement_ast.times)
numsamples = None
if bql_statement_ast.samples != '':
numsamples = int(bql_statement_ast.samples)
order_by = False
if bql_statement_ast.order_by != '':
order_by = bql_statement_ast.order_by
plot = (bql_statement_ast.plot == 'plot')
column_list = None
row_list = None
if bql_statement_ast.for_list != '':
if statement_id == 'estimate_pairwise':
##TODO can only handle single column or row lists in this clause.
column_list = bql_statement_ast.for_list.asList()
assert len(column_list) == 1, "BayesDBParseError: You may only specify one column list. Remove %s" % column_list[1]
column_list = column_list[0]
elif statement_id == 'estimate_pairwise_row':
row_list = bql_statement_ast.for_list.asList()
assert len(row_list) == 1, "BayesDBParseError: You may only specify one row list. Remove %s" % row_list[1]
row_list = row_list[0]
else:
raise utils.BayesDBParseError("Invalid query: FOR <list> only acceptable in estimate pairwise.")
summarize = (bql_statement_ast.summarize == 'summarize')
hist = (bql_statement_ast.hist == 'hist')
freq = (bql_statement_ast.freq == 'freq')
tablename = bql_statement_ast.btable
clusters_name = None
threshold = None
if bql_statement_ast.clusters_clause != '':
clusters_name = bql_statement_ast.clusters_clause.as_label
threshold = float(bql_statement_ast.clusters_clause.threshold)
whereclause = None
if bql_statement_ast.where_conditions != '':
whereclause = bql_statement_ast.where_conditions
return statement_id, \
dict(clusters_name=clusters_name,
confidence=confidence,
functions=functions,
givens=givens,
limit=limit,
modelids=modelids,
name=name,
newtablename=newtablename,
numpredictions=numpredictions,
numsamples=numsamples,
order_by=order_by,
plot=plot,
column_list=column_list,
row_list=row_list,
summarize=summarize,
hist=hist,
freq=freq,
tablename=tablename,
threshold=threshold,
whereclause=whereclause), \
dict(plot=plot,
scatter=False, ##TODO remove scatter from args
pairwise=False, ##TODO remove pairwise from args
filename=filename)
def parse_infer(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('infer', args_dict, client_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
tablename = args_dict['tablename']
functions = args_dict['functions']
summarize = args_dict['summarize']
hist = args_dict['hist']
freq = args_dict['freq']
plot = args_dict['plot']
whereclause = args_dict['whereclause']
limit = args_dict['limit']
order_by = args_dict['order_by']
modelids = args_dict['modelids']
newtablename = args_dict['newtablename']
confidence = args_dict['confidence']
numsamples = args_dict['numsamples']
pairwise = client_dict['pairwise']
filename = client_dict['filename']
scatter = client_dict['scatter']
assert args_dict['clusters_name'] == None, "BayesDBParsingError: SAVE CLUSTERS clause not allowed in INFER"
assert args_dict['threshold'] == None, "BayesDBParsingError: SAVE CLUSTERS clause not allowed in INFER"
assert args_dict['givens'] == None, "BayesDBParsingError: GIVENS clause not allowed in INFER"
assert args_dict['name'] == None, "BayesDBParsingError: SAVE AS <column_list> clause not allowed in INFER"
assert args_dict['numpredictions'] == None, "BayesDBParsingError: TIMES clause not allowed in INFER"
assert args_dict['column_list'] == None, "BayesDBParsingError: FOR <columns> clause not allowed in INFER"
assert args_dict['row_list'] == None, "BayesDBParsingError: FOR <rows> not allowed in INFER"
for function in functions:
assert function.function_id == '', "BayesDBParsingError: %s not valid in INFER" % function.function_id
assert confidence is None or 0.0 <= confidence <= 1.0, "BayesDBParsingError: CONFIDENCE must be unspecified (default 0) or in the interval [0, 1]"
return 'infer', \
dict(tablename=tablename, functions=functions,
newtablename=newtablename, confidence=confidence,
whereclause=whereclause, limit=limit,
numsamples=numsamples, order_by=order_by,
plot=plot, modelids=modelids, summarize=summarize, hist=hist, freq=freq), \
dict(plot=plot, scatter=scatter, pairwise=pairwise, filename=filename)
def parse_select(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('select', args_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
tablename = args_dict['tablename']
functions = args_dict['functions']
summarize = args_dict['summarize']
hist = args_dict['hist']
freq = args_dict['freq']
plot = args_dict['plot']
whereclause = args_dict['whereclause']
limit = args_dict['limit']
order_by = args_dict['order_by']
modelids = args_dict['modelids']
newtablename = args_dict['newtablename']
numsamples = args_dict['numsamples']
pairwise = client_dict['pairwise']
filename = client_dict['filename']
scatter = client_dict['scatter']
assert args_dict['clusters_name'] == None, "BayesDBParsingError: SAVE CLUSTERS clause not allowed in SELECT"
assert args_dict['threshold'] == None, "BayesDBParsingError: SAVE CLUSTERS clause not allowed in SELECT"
assert args_dict['givens'] == None, "BayesDBParsingError: GIVENS clause not allowed in SELECT"
assert args_dict['name'] == None, "BayesDBParsingError: SAVE AS <column_list> clause not allowed in SELECT"
assert args_dict['numpredictions'] == None, "BayesDBParsingError: TIMES clause not allowed in SELECT"
assert args_dict['column_list'] == None, "BayesDBParsingError: FOR <columns> clause not allowed in SELECT"
assert args_dict['row_list'] == None, "BayesDBParsingError: FOR <rows> not allowed in SELECT"
assert args_dict['confidence'] == None, "BayesDBParsingError: CONFIDENCE not allowed in SELECT"
for function in functions:
assert function.conf == '', "BayesDBParsingError: CONF (WITH CONFIDENCE) not valid in SELECT"
return 'select', \
dict(tablename=tablename, whereclause=whereclause,
functions=functions, limit=limit, order_by=order_by, plot=plot, numsamples=numsamples,
modelids=modelids, summarize=summarize, hist=hist, freq=freq, newtablename=newtablename), \
dict(pairwise=pairwise, scatter=scatter, filename=filename, plot=plot)
def parse_simulate(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('simulate', args_dict, client_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
tablename = args_dict['tablename']
functions = args_dict['functions']
summarize = args_dict['summarize']
hist = args_dict['hist']
freq = args_dict['freq']
plot = args_dict['plot']
order_by = args_dict['order_by']
modelids = args_dict['modelids']
newtablename = args_dict['newtablename']
givens = args_dict['givens']
numpredictions = args_dict['numpredictions']
pairwise = client_dict['pairwise']
filename = client_dict['filename']
scatter = client_dict['scatter']
# Require the number of observations to simulate
assert args_dict['numpredictions'] is not None, "BayesDBParsingError: Include TIMES: SIMULATE <columns> FROM <btable> TIMES <times>."
assert args_dict['clusters_name'] == None, "BayesDBParsingError: SAVE CLUSTERS clause not allowed in SIMULATE."
assert args_dict['numsamples'] == None, 'BayesDBParsingError: WITH <numsamples> SAMPLES clause not allowed in SIMULATE.'
assert args_dict['threshold'] == None, "BayesDBParsingError: SAVE CLUSTERS clause not allowed in SIMULATE."
assert args_dict['name'] == None, "BayesDBParsingError: SAVE AS <column_list> clause not allowed in SIMULATE."
assert args_dict['column_list'] == None, "BayesDBParsingError: FOR <columns> clause not allowed in SIMULATE."
assert args_dict['row_list'] == None, "BayesDBParsingError: FOR <rows> not allowed in SIMULATE."
assert args_dict['confidence'] == None, "BayesDBParsingError: CONFIDENCE not allowed in SIMULATE."
assert args_dict['whereclause'] == None, "BayesDBParsingError: whereclause not allowed in SIMULATE. Use GIVEN instead."
for function in functions:
assert function.function_id == '', "BayesDBParsingError: %s not valid in SIMULATE" % function.function_id
assert function.conf == '', "BayesDBParsingError: CONF (WITH CONFIDENCE) not valid in SIMULATE"
return 'simulate', \
dict(tablename=tablename, functions=functions,
newtablename=newtablename, givens=givens,
numpredictions=numpredictions, order_by=order_by,
plot=plot, modelids=modelids, summarize=summarize, hist=hist, freq=freq), \
dict(filename=filename, plot=plot, scatter=scatter, pairwise=pairwise)
def parse_estimate(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('estimate', args_dict, client_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
assert args_dict['functions'][0] == 'column', "BayesDBParseError: must be ESTIMATE COLUMNS."
functions = args_dict['functions']
tablename = args_dict['tablename']
whereclause = args_dict['whereclause']
limit = args_dict['limit']
order_by = args_dict['order_by']
modelids = args_dict['modelids']
name = args_dict['name']
numsamples = args_dict['numsamples']
assert args_dict['clusters_name'] == None, "BayesDBParsingError: SAVE CLUSTERS not allowed in estimate columns."
assert args_dict['confidence'] == None, "BayesDBParsingError: WITH CONFIDENCE not allowed in estimate columns."
assert args_dict['givens'] == None, "BayesDBParsingError: GIVENS not allowed in estimate columns."
assert args_dict['newtablename'] == None, "BayesDBParsingError: INTO TABLE not allowed in estimate columns."
assert args_dict['numpredictions'] == None, "BayesDBParsingError: TIMES not allowed in estimate columns."
assert args_dict['column_list'] == None, "BayesDBParsingError: FOR COLUMNS not allowed in estimate columns."
assert args_dict['row_list'] == None, "BayesDBParsingError: FOR ROWS not allowed in estimate columns."
assert args_dict['summarize'] == False, "BayesDBParsingError: SUMMARIZE not allowed in estimate columns."
assert args_dict['hist'] == False, "BayesDBParsingError: HIST not allowed in estimate columns."
assert args_dict['freq'] == False, "BayesDBParsingError: FREQ not allowed in estimate columns."
assert args_dict['threshold'] == None, "BayesDBParsingError: SAVE CLUSTERS not allowed in estimate columns."
assert args_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in estimate columns."
assert client_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in estimate columns."
assert client_dict['scatter'] == False, "BayesDBParsingError: SCATTER not allowed in estimate columns."
assert client_dict['pairwise'] == False, "BayesDBParsingError: PAIRWISE not allowed in estimate columns."
assert client_dict['filename'] == None, "BayesDBParsingError: AS FILE not allowed in estimate columns."
return 'estimate_columns', \
dict(tablename=tablename, functions=functions,
whereclause=whereclause, limit=limit, numsamples=numsamples,
order_by=order_by, name=name, modelids=modelids), \
None
def parse_create_column_list(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('create_column_list', args_dict, client_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
functions = args_dict['functions']
name = args_dict['name']
tablename = args_dict['tablename']
assert args_dict['clusters_name'] == None, "BayesDBParsingError: SAVE CLUSTERS not allowed in create column list."
assert args_dict['confidence'] == None, "BayesDBParsingError: WITH CONFIDENCE not allowed in create column list."
assert args_dict['givens'] == None, "BayesDBParsingError: GIVENS not allowed in create column list."
assert args_dict['newtablename'] == None, "BayesDBParsingError: INTO TABLE not allowed in create column list."
assert args_dict['numpredictions'] == None, "BayesDBParsingError: TIMES not allowed in create column list."
assert args_dict['column_list'] == None, "BayesDBParsingError: FOR COLUMNS not allowed in create column list."
assert args_dict['row_list'] == None, "BayesDBParsingError: FOR ROWS not allowed in create column list."
assert args_dict['summarize'] == False, "BayesDBParsingError: SUMMARIZE not allowed in create column list."
assert args_dict['hist'] == False, "BayesDBParsingError: HIST not allowed in create column list."
assert args_dict['freq'] == False, "BayesDBParsingError: FREQ not allowed in create column list."
assert args_dict['threshold'] == None, "BayesDBParsingError: SAVE CLUSTERS not allowed in create column list."
assert args_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in create column list."
assert args_dict['numsamples'] == None, "BayesDBParsingError: NUMSAMPLES not allowed in create column list."
assert args_dict['modelids'] == None, "BayesDBParsingError: USING MODELS not allowed in create column list."
assert args_dict['order_by'] == False, "BayesDBParsingError: ORDER BY not allowed in create column list."
assert args_dict['limit'] == float('inf'), "BayesDBParsingError: LIMIT not allowed in create column list."
assert args_dict['whereclause'] == None, "BayesDBParsingError: WHERE not allowed in create column list."
assert client_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in create column list."
assert client_dict['scatter'] == False, "BayesDBParsingError: SCATTER not allowed in create column list."
assert client_dict['pairwise'] == False, "BayesDBParsingError: PAIRWISE not allowed in create column list."
assert client_dict['filename'] == None, "BayesDBParsingError: AS FILE not allowed in create column list."
return 'create_column_list', \
dict(tablename=tablename, functions=functions, name=name), None
def parse_estimate_pairwise_row(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('estimate_pairwise_row', args_dict, client_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
functions = args_dict['functions'][0]
assert len(args_dict['functions']) == 1, "BayesDBParsingError: Only one function allowed in estimate pairwise."
tablename = args_dict['tablename']
row_list = args_dict['row_list']
clusters_name = args_dict['clusters_name']
threshold = args_dict['threshold']
modelids = args_dict['modelids']
filename = client_dict['filename']
assert args_dict['confidence'] == None, "BayesDBParsingError: WITH CONFIDENCE not allowed in ESTIMATE PAIRWISE."
assert args_dict['givens'] == None, "BayesDBParsingError: GIVENS not allowed in ESTIMATE PAIRWISE."
assert args_dict['newtablename'] == None, "BayesDBParsingError: INTO TABLE not allowed in ESTIMATE PAIRWISE."
assert args_dict['numpredictions'] == None, "BayesDBParsingError: TIMES not allowed in ESTIMATE PAIRWISE."
assert args_dict['numsamples'] == None, "BayesDBParsingError: WITH SAMPLES not allowed in ESTIMATE PAIRWISE."
assert args_dict['column_list'] == None, "BayesDBParsingError: FOR COLUMNS not allowed in ESTIMATE PAIRWISE."
assert args_dict['summarize'] == False, "BayesDBParsingError: SUMMARIZE not allowed in ESTIMATE PAIRWISE."
assert args_dict['hist'] == False, "BayesDBParsingError: HIST not allowed in ESTIMATE PAIRWISE."
assert args_dict['freq'] == False, "BayesDBParsingError: FREQ not allowed in ESTIMATE PAIRWISE."
assert args_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in ESTIMATE PAIRWISE."
assert args_dict['whereclause'] == None, "BayesDBParsingError: WHERE not allowed in ESTIMATE PAIRWISE."
assert client_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in ESTIMATE PAIRWISE."
assert client_dict['scatter'] == False, "BayesDBParsingError: SCATTER not allowed in ESTIMATE PAIRWISE."
assert client_dict['pairwise'] == False, "BayesDBParsingError: PAIRWISE not allowed in ESTIMATE PAIRWISE."
return 'estimate_pairwise_row', \
dict(tablename=tablename, function=functions,
row_list=row_list, clusters_name=clusters_name,
threshold=threshold, modelids=modelids), \
dict(filename=filename)
def parse_estimate_pairwise(self,bql_statement_ast):
"""
:param bql_statement_ast pyparsing.ParseResults:
:return ('estimate_pairwise', args_dict, client_dict):
"""
method_name, args_dict, client_dict = self.parse_query(bql_statement_ast)
functions = args_dict['functions']
assert len(args_dict['functions']) == 1, "BayesDBParsingError: Only one function allowed in estimate pairwise."
assert functions[0].function_id in ['correlation', 'mutual information', 'dependence probability']
function_name = functions[0].function_id
tablename = args_dict['tablename']
column_list = args_dict['column_list']
clusters_name = args_dict['clusters_name']
threshold = args_dict['threshold']
modelids = args_dict['modelids']
numsamples = args_dict['numsamples']
filename = client_dict['filename']
assert args_dict['confidence'] == None, "BayesDBParsingError: WITH CONFIDENCE not allowed in ESTIMATE PAIRWISE."
assert args_dict['givens'] == None, "BayesDBParsingError: GIVENS not allowed in ESTIMATE PAIRWISE."
assert args_dict['newtablename'] == None, "BayesDBParsingError: INTO TABLE not allowed in ESTIMATE PAIRWISE."
assert args_dict['numpredictions'] == None, "BayesDBParsingError: TIMES not allowed in ESTIMATE PAIRWISE."
assert args_dict['row_list'] == None, "BayesDBParsingError: FOR ROWS not allowed in ESTIMATE PAIRWISE."
assert args_dict['summarize'] == False, "BayesDBParsingError: SUMMARIZE not allowed in ESTIMATE PAIRWISE."
assert args_dict['hist'] == False, "BayesDBParsingError: HIST not allowed in ESTIMATE PAIRWISE."
assert args_dict['freq'] == False, "BayesDBParsingError: FREQ not allowed in ESTIMATE PAIRWISE."
assert args_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in ESTIMATE PAIRWISE."
assert args_dict['whereclause'] == None, "BayesDBParsingError: whereclause not allowed in ESTIMATE PAIRWISE"
assert client_dict['plot'] == False, "BayesDBParsingError: PLOT not allowed in ESTIMATE PAIRWISE."
assert client_dict['scatter'] == False, "BayesDBParsingError: SCATTER not allowed in ESTIMATE PAIRWISE."
assert client_dict['pairwise'] == False, "BayesDBParsingError: PAIRWISE not allowed in ESTIMATE PAIRWISE."
return 'estimate_pairwise', \
dict(tablename=tablename, function_name=function_name, numsamples=numsamples,
column_list=column_list, clusters_name=clusters_name,
threshold=threshold, modelids=modelids), \
dict(filename=filename)
#####################################################################################
## ------------------------------ Function parsing ------------------------------- ##
#####################################################################################
def get_args_pred_prob(self, function_group, M_c):
"""
:param function_group pyparsing.ParseResults:
:param M_c:
:return c_idx: column index
:rtype int:
returns the column index from a predictive probability function
raises exceptions for unfound columns
"""
if function_group.column != '' and function_group.column in M_c['name_to_idx']:
column = function_group.column
c_idx = M_c['name_to_idx'][column]
return c_idx
elif function_group.column != '':
raise utils.BayesDBParseError("Invalid query: could not parse '%s'" % function_group.column)
else:
raise utils.BayesDBParseError("Invalid query: missing column argument")
def get_args_prob(self,function_group, M_c):
"""
:param function_group:
Returns (column_index, value) from a probability function.
Raises an exception for unfound columns.
"""
if function_group.column != '' and function_group.column in M_c['name_to_idx']:
column = function_group.column
c_idx = M_c['name_to_idx'][column]
elif function_group.column != '':
raise utils.BayesDBParseError("Invalid query: could not parse '%s'" % function_group.column)
else:
raise utils.BayesDBParseError("Invalid query: missing column argument")
value = utils.string_to_column_type(function_group.value, column, M_c)
return c_idx, value
def get_args_similarity(self,function_group, M_c, M_c_full, T, T_full, column_lists):
"""
Returns the target_row_id and a list of with_respect_to column indexes based on
a similarity function.
Raises an exception for unfound columns.
"""
##TODO some cleaning needed with row_clause
target_row_id = None
target_columns = None
if function_group != '':
## Case for given row_id
if function_group.row_id != '':
target_row_id = int(function_group.row_id)
## Case for format column = value
elif function_group.column != '':
assert T is not None
assert M_c_full is not None
target_col_name = function_group.column
target_col_value = function_group.column_value
target_row_id = utils.row_id_from_col_value(target_col_value, target_col_name, M_c_full, T_full)
## With respect to clause
with_respect_to_clause = function_group.with_respect_to
if with_respect_to_clause !='':
column_set = with_respect_to_clause.column_list
target_column_names = []
for column_name in column_set:
if column_name == '*':
target_columns = None
break
elif column_lists is not None and column_name in column_lists.keys():
target_column_names += column_lists[column_name]
elif column_name in M_c['name_to_idx']:
target_column_names.append(column_name)
else:
raise utils.BayesDBParseError("Invalid query: column '%s' not found" % column_name)
target_columns = [M_c['name_to_idx'][column_name] for column_name in target_column_names]
return target_row_id, target_columns
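# Hypothetical example: for a clause like "SIMILARITY TO 5 WITH RESPECT TO age"
# (assuming 'age' is a modeled column), this returns
# (5, [M_c['name_to_idx']['age']]); with no WITH RESPECT TO clause the second
# element stays None, meaning similarity over all columns.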
def get_args_typicality(self,function_group, M_c):
"""
returns the column index if a column is specified; if not, returns True. ##TODO this needs a ton of testing
raises an exception for an invalid column
"""
if function_group.column == '':
return True
else:
return utils.get_index_from_colname(M_c, function_group.column)
def get_args_pairwise_column(self,function_group, M_c):
"""
Designed to handle dependence probability, mutual information, and correlation
function_groups, all of which have an optional OF clause.
Returns (of_column_index, with_column_index).
Raises an exception for an invalid column.
"""
with_column = function_group.with_column
with_column_index = utils.get_index_from_colname(M_c, with_column)
of_column_index = None
if function_group.of_column != '':
of_column_index = utils.get_index_from_colname(M_c, function_group.of_column)
return of_column_index, with_column_index
#####################################################################################
## ----------------------------- Sub query parsing ------------------------------ ##
#####################################################################################
def parse_where_clause(self, where_clause_ast, M_c, T, M_c_full, T_full, column_lists):
"""
Creates conditions: the list of conditions in the whereclause.
List of (function, args, op, value) tuples.
"""
conditions = []
operator_map = {'<=': operator.le, '<': operator.lt, '=': operator.eq,
'>': operator.gt, '>=': operator.ge, 'in': operator.contains}
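# Note: operator.contains(a, b) evaluates `b in a`, so unlike the comparison
# operators, the 'in' entry expects the container as its first argument when
# these conditions are later applied.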
for single_condition in where_clause_ast:
## Confidence in the where clause not yet handled by client/engine.
confidence = None
if single_condition.conf != '':
confidence = float(single_condition.conf)
raw_value = single_condition.value
function = None
args = None
## SELECT and INFER versions
if single_condition.function.function_id == 'typicality':
value = utils.value_string_to_num(raw_value)
function = functions._row_typicality
assert self.get_args_typicality(single_condition.function, M_c) == True
args = True
elif single_condition.function.function_id == 'similarity':
value = utils.value_string_to_num(raw_value)
function = functions._similarity
args = self.get_args_similarity(single_condition.function, M_c, M_c_full, T, T_full, column_lists)
elif single_condition.function.function_id == 'predictive probability':
value = utils.value_string_to_num(raw_value)
function = functions._predictive_probability
args = self.get_args_pred_prob(single_condition.function, M_c)
elif single_condition.function.function_id == 'key':
value = raw_value
function = functions._row_id
elif single_condition.function.column != '':
## whereclause of the form "where col = val"
column_name = single_condition.function.column
assert column_name != '*'
if column_name in M_c['name_to_idx']:
args = (M_c['name_to_idx'][column_name], confidence)
value = utils.string_to_column_type(raw_value, column_name, M_c)
function = functions._column
elif column_name in M_c_full['name_to_idx']:
args = M_c_full['name_to_idx'][column_name]
value = raw_value
function = functions._column_ignore
else:
raise utils.BayesDBParseError("Invalid where clause: column %s was not found in the table" %
column_name)
else:
if single_condition.function.function_id != '':
raise utils.BayesDBParseError("Invalid where clause: %s not allowed." %
single_condition.function.function_id)
else:
raise utils.BayesDBParseError("Invalid where clause. Unrecognized function")
if single_condition.operation != '':
op = operator_map[single_condition.operation]
else:
raise utils.BayesDBParseError("Invalid where clause: no operator found")
conditions.append((function, args, op, value))
return conditions
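# Hypothetical example: for "WHERE age > 30" (with 'age' a modeled continuous
# column), conditions would hold a single tuple:
#   (functions._column, (age_idx, None), operator.gt, 30.0)
# where age_idx is M_c['name_to_idx']['age'] and the value has been cast by
# utils.string_to_column_type.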
def parse_column_whereclause(self, whereclause, M_c, T): ##TODO throw exception on parseable, invalid
"""
Creates conditions: the list of conditions in the whereclause.
List of ((function, args), op, value) tuples.
"""
conditions = []
if whereclause is None:
return conditions
operator_map = {'<=': operator.le, '<': operator.lt, '=': operator.eq,
'>': operator.gt, '>=': operator.ge, 'in': operator.contains}
for single_condition in whereclause:
## Confidence in the where clause not yet handled by client/engine.
raw_value = single_condition.value
value = utils.value_string_to_num(raw_value)
function = None
args = None
_ = None
## SELECT and INFER versions
if single_condition.function.function_id == 'typicality':
function = functions._col_typicality
assert self.get_args_typicality(single_condition.function, M_c) == True
args = None
elif single_condition.function.function_id == 'dependence probability':
function = functions._dependence_probability
_, args = self.get_args_pairwise_column(single_condition.function, M_c)
elif single_condition.function.function_id == 'mutual information':
function = functions._mutual_information
_, args = self.get_args_pairwise_column(single_condition.function, M_c)
elif single_condition.function.function_id == 'correlation':
function = functions._correlation
_, args = self.get_args_pairwise_column(single_condition.function, M_c)
else:
if single_condition.function.function_id != '':
raise utils.BayesDBParseError("Invalid where clause: %s not allowed." %
single_condition.function.function_id)
else:
raise utils.BayesDBParseError("Invalid where clause. Unrecognized function")
if single_condition.operation != '':
op = operator_map[single_condition.operation]
else:
raise utils.BayesDBParseError("Invalid where clause: no operator found")
if _ is not None:
raise utils.BayesDBParseError("Invalid where clause, do not specify an 'of' column in estimate columns")
conditions.append(((function, args), op, value))
return conditions
def parse_order_by_clause(self, order_by_clause_ast, M_c, T, M_c_full, T_full, column_lists):
function_list = []
for orderable in order_by_clause_ast:
confidence = None
if orderable.conf != '':
confidence = float(orderable.conf)
desc = True
if orderable.asc_desc == 'asc':
desc = False
if orderable.function.function_id == 'similarity':
function = functions._similarity
args = self.get_args_similarity(orderable.function, M_c, M_c_full, T, T_full, column_lists)
elif orderable.function.function_id == 'typicality':
function = functions._row_typicality
args = self.get_args_typicality(orderable.function, M_c)
elif orderable.function.function_id == 'predictive probability':
function = functions._predictive_probability
args = self.get_args_pred_prob(orderable.function, M_c)
elif orderable.function.column != '':
if orderable.function.column in M_c['name_to_idx']:
function = functions._column
args = (M_c['name_to_idx'][orderable.function.column], confidence)
else:
function = functions._column_ignore
args = M_c_full['name_to_idx'][orderable.function.column]
else:
raise utils.BayesDBParseError("Invalid order by clause.")
function_list.append((function, args, desc))
return function_list
def parse_column_order_by_clause(self, order_by_clause_ast, M_c):
function_list = []
for orderable in order_by_clause_ast:
desc = True
if orderable.asc_desc == 'asc':
desc = False
if orderable.function.function_id == 'typicality':
assert orderable.function.column == '', "BayesDBParseError: Column order by typicality cannot include 'of %s'" % orderable.function.column
function = functions._col_typicality
args = None
elif orderable.function.function_id == 'dependence probability':
function = functions._dependence_probability
_, args = self.get_args_pairwise_column(orderable.function, M_c)
elif orderable.function.function_id == 'correlation':
function = functions._correlation
_, args = self.get_args_pairwise_column(orderable.function, M_c)
elif orderable.function.function_id == 'mutual information':
function = functions._mutual_information
_, args = self.get_args_pairwise_column(orderable.function, M_c)
else:
raise utils.BayesDBParseError("Invalid order by clause. Can only order by typicality, correlation, mutual information, or dependence probability.")
function_list.append((function, args, desc))
return function_list
def parse_functions(self, function_groups, M_c=None, T=None, M_c_full=None, T_full=None, column_lists=None, key_column_name=None):
'''
Generates two parallel lists: (function, args, aggregate) tuples and their column names.
Returns queries, query_colnames
queries is a list of (query_function, query_args, aggregate) tuples,
where query_function is: row_id, column, probability, similarity.
For row_id: query_args is ignored (so it is None).
For column: query_args is a c_idx.
For probability: query_args is a (c_idx, value) tuple.
For similarity: query_args is a (target_row_id, target_column) tuple.
'''
## Return the table key as the first column - should only be None if called from simulate
if key_column_name is not None:
query_colnames = [key_column_name]
index_list, name_list, ignore_column = self.parse_column_set(key_column_name, M_c, M_c_full, column_lists)
queries = [(functions._column_ignore, column_index, False) for column_index in index_list]
else:
query_colnames = []
queries = []
for function_group in function_groups:
if function_group.function_id == 'predictive probability':
queries.append((functions._predictive_probability,
self.get_args_pred_prob(function_group, M_c),
False))
query_colnames.append(' '.join(function_group))
elif function_group.function_id == 'typicality':
if function_group.column != '':
queries.append((functions._col_typicality,
self.get_args_typicality(function_group, M_c),
True))
else:
queries.append((functions._row_typicality,
self.get_args_typicality(function_group, M_c),
False))
query_colnames.append(' '.join(function_group))
elif function_group.function_id == 'probability':
queries.append((functions._probability,
self.get_args_prob(function_group, M_c),
True))
query_colnames.append(' '.join(function_group))
elif function_group.function_id == 'similarity':
assert M_c is not None
assert M_c_full is not None
queries.append((functions._similarity,
self.get_args_similarity(function_group, M_c, M_c_full, T, T_full, column_lists),
False))
pre_name_list = function_group.asList()
if function_group.with_respect_to != '':
pre_name_list[-1] = ', '.join(pre_name_list[-1])
query_colnames.append(' '.join(pre_name_list))
elif function_group.function_id == 'dependence probability':
queries.append((functions._dependence_probability,
self.get_args_pairwise_column(function_group, M_c),
True))
query_colnames.append(' '.join(function_group))
elif function_group.function_id == 'mutual information':
queries.append((functions._mutual_information,
self.get_args_pairwise_column(function_group, M_c),
True))
query_colnames.append(' '.join(function_group))
elif function_group.function_id == 'correlation':
queries.append((functions._correlation,
self.get_args_pairwise_column(function_group, M_c),
True))
query_colnames.append(' '.join(function_group))
## single column, column_list, or *
elif function_group.column_id not in ['', key_column_name]:
column_name = function_group.column_id
confidence = None
assert M_c is not None
index_list, name_list, ignore_column = self.parse_column_set(column_name, M_c, M_c_full, column_lists)
if ignore_column:
queries += [(functions._column_ignore, column_index, False) for column_index in index_list]
else:
if function_group.conf != '':
confidence = float(function_group.conf)
queries += [(functions._column, (column_index, confidence), False) for column_index in index_list]
if confidence is not None:
query_colnames += [name + ' with confidence %s' % confidence for name in name_list]
else:
query_colnames += [name for name in name_list]
elif function_group.column_id == key_column_name:
# Key column will be added to the output anyways.
pass
else:
raise utils.BayesDBParseError("Invalid query: could not parse function")
return queries, query_colnames
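# Hypothetical example: for "SELECT age, TYPICALITY FROM t" with key column
# 'key', this might yield
#   queries = [(functions._column_ignore, key_idx, False),
#              (functions._column, (age_idx, None), False),
#              (functions._row_typicality, True, False)]
#   query_colnames = ['key', 'age', 'typicality']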
def parse_column_set(self, column_name, M_c, M_c_full, column_lists = None):
"""
given a string representation of a column name or column_list,
returns a list of the column indexes, a list of the column names, and an ignore_column flag.
"""
index_list = []
name_list = []
ignore_column = False
if column_name == '*':
all_columns = utils.get_all_column_names_in_original_order(M_c)
index_list += [M_c['name_to_idx'][column_name] for column_name in all_columns]
name_list += [name for name in all_columns]
elif (column_lists is not None) and (column_name in column_lists.keys()):
index_list += [M_c['name_to_idx'][name] for name in column_lists[column_name]]
name_list += [name for name in column_lists[column_name]]
elif column_name in M_c['name_to_idx']:
index_list += [M_c['name_to_idx'][column_name]]
name_list += [column_name]
elif column_name in M_c_full['name_to_idx']:
index_list += [M_c_full['name_to_idx'][column_name]]
name_list += [column_name]
ignore_column = True
else:
raise utils.BayesDBParseError("Invalid query: %s not found." % column_name)
return index_list, name_list, ignore_column
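# Hypothetical example: parse_column_set('*', M_c, M_c_full) returns the indexes
# and names of every modeled column with ignore_column False, while a name found
# only in M_c_full (e.g. a key column) comes back with ignore_column True.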
#####################################################################################
## --------------------------- Other Helper functions ---------------------------- ##
#####################################################################################
def set_root_dir(self, root_dir):
"""Set the root_directory, used as the base for all relative paths."""
self.root_directory = root_dir
def reset_root_dir(self):
"""Set the root_directory, used as the base for all relative paths, to
the current working directory."""
self.root_directory = os.getcwd()
def get_absolute_path(self, relative_path):
"""
If a relative file path is given by the user in a command,
this method is used to convert the path to an absolute path
by assuming that the correct base directory is self.root_directory.
"""
relative_path = os.path.expanduser(relative_path)
if os.path.isabs(relative_path):
return relative_path
else:
return os.path.join(self.root_directory, relative_path) | PypiClean |
/DjangoDjangoAppCenter-0.0.11-py3-none-any.whl/DjangoAppCenter/simpleui/static/admin/js/vendor/xregexp/xregexp.js | /**
* XRegExp provides augmented, extensible JavaScript regular expressions. You get new syntax,
* flags, and methods beyond what browsers support natively. XRegExp is also a regex utility belt
* with tools to make your client-side grepping simpler and more powerful, while freeing you from
* worrying about pesky cross-browser inconsistencies and the dubious `lastIndex` property. See
* XRegExp's documentation (http://xregexp.com/) for more details.
* @module xregexp
* @requires N/A
*/
var XRegExp;
// Avoid running twice; that would reset tokens and could break references to native globals
XRegExp = XRegExp || (function (undef) {
"use strict";
/*--------------------------------------
* Private variables
*------------------------------------*/
var self,
addToken,
add,
// Optional features; can be installed and uninstalled
features = {
natives: false,
extensibility: false
},
// Store native methods to use and restore ("native" is an ES3 reserved keyword)
nativ = {
exec: RegExp.prototype.exec,
test: RegExp.prototype.test,
match: String.prototype.match,
replace: String.prototype.replace,
split: String.prototype.split
},
// Storage for fixed/extended native methods
fixed = {},
// Storage for cached regexes
cache = {},
// Storage for addon tokens
tokens = [],
// Token scopes
defaultScope = "default",
classScope = "class",
// Regexes that match native regex syntax
nativeTokens = {
// Any native multicharacter token in default scope (includes octals, excludes character classes)
"default": /^(?:\\(?:0(?:[0-3][0-7]{0,2}|[4-7][0-7]?)?|[1-9]\d*|x[\dA-Fa-f]{2}|u[\dA-Fa-f]{4}|c[A-Za-z]|[\s\S])|\(\?[:=!]|[?*+]\?|{\d+(?:,\d*)?}\??)/,
// Any native multicharacter token in character class scope (includes octals)
"class": /^(?:\\(?:[0-3][0-7]{0,2}|[4-7][0-7]?|x[\dA-Fa-f]{2}|u[\dA-Fa-f]{4}|c[A-Za-z]|[\s\S]))/
},
// Any backreference in replacement strings
replacementToken = /\$(?:{([\w$]+)}|(\d\d?|[\s\S]))/g,
// Any character with a later instance in the string
duplicateFlags = /([\s\S])(?=[\s\S]*\1)/g,
// Any greedy/lazy quantifier
quantifier = /^(?:[?*+]|{\d+(?:,\d*)?})\??/,
// Check for correct `exec` handling of nonparticipating capturing groups
compliantExecNpcg = nativ.exec.call(/()??/, "")[1] === undef,
// Check for flag y support (Firefox 3+)
hasNativeY = RegExp.prototype.sticky !== undef,
// Used to kill infinite recursion during XRegExp construction
isInsideConstructor = false,
// Storage for known flags, including addon flags
registeredFlags = "gim" + (hasNativeY ? "y" : "");
/*--------------------------------------
* Private helper functions
*------------------------------------*/
/**
* Attaches XRegExp.prototype properties and named capture supporting data to a regex object.
* @private
* @param {RegExp} regex Regex to augment.
* @param {Array} captureNames Array with capture names, or null.
* @param {Boolean} [isNative] Whether the regex was created by `RegExp` rather than `XRegExp`.
* @returns {RegExp} Augmented regex.
*/
function augment(regex, captureNames, isNative) {
var p;
// Can't auto-inherit these since the XRegExp constructor returns a nonprimitive value
for (p in self.prototype) {
if (self.prototype.hasOwnProperty(p)) {
regex[p] = self.prototype[p];
}
}
regex.xregexp = {captureNames: captureNames, isNative: !!isNative};
return regex;
}
/**
* Returns native `RegExp` flags used by a regex object.
* @private
* @param {RegExp} regex Regex to check.
* @returns {String} Native flags in use.
*/
function getNativeFlags(regex) {
//return nativ.exec.call(/\/([a-z]*)$/i, String(regex))[1];
return (regex.global ? "g" : "") +
(regex.ignoreCase ? "i" : "") +
(regex.multiline ? "m" : "") +
(regex.extended ? "x" : "") + // Proposed for ES6, included in AS3
(regex.sticky ? "y" : ""); // Proposed for ES6, included in Firefox 3+
}
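// For example, getNativeFlags(/abc/gim) returns "gim", and
// getNativeFlags(/abc/) returns "".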
/**
* Copies a regex object while preserving special properties for named capture and augmenting with
* `XRegExp.prototype` methods. The copy has a fresh `lastIndex` property (set to zero). Allows
* adding and removing flags while copying the regex.
* @private
* @param {RegExp} regex Regex to copy.
* @param {String} [addFlags] Flags to be added while copying the regex.
* @param {String} [removeFlags] Flags to be removed while copying the regex.
* @returns {RegExp} Copy of the provided regex, possibly with modified flags.
*/
function copy(regex, addFlags, removeFlags) {
if (!self.isRegExp(regex)) {
throw new TypeError("type RegExp expected");
}
var flags = nativ.replace.call(getNativeFlags(regex) + (addFlags || ""), duplicateFlags, "");
if (removeFlags) {
// Would need to escape `removeFlags` if this was public
flags = nativ.replace.call(flags, new RegExp("[" + removeFlags + "]+", "g"), "");
}
if (regex.xregexp && !regex.xregexp.isNative) {
// Compiling the current (rather than precompilation) source preserves the effects of nonnative source flags
regex = augment(self(regex.source, flags),
regex.xregexp.captureNames ? regex.xregexp.captureNames.slice(0) : null);
} else {
// Augment with `XRegExp.prototype` methods, but use native `RegExp` (avoid searching for special tokens)
regex = augment(new RegExp(regex.source, flags), null, true);
}
return regex;
}
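// For example, copy(/abc/g, "i", "g") returns a fresh regex equivalent to
// /abc/i, with its own lastIndex starting at zero.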
/*
* Returns the last index at which a given value can be found in an array, or `-1` if it's not
* present. The array is searched backwards.
* @private
* @param {Array} array Array to search.
* @param {*} value Value to locate in the array.
* @returns {Number} Last zero-based index at which the item is found, or -1.
*/
function lastIndexOf(array, value) {
var i = array.length;
if (Array.prototype.lastIndexOf) {
return array.lastIndexOf(value); // Use the native method if available
}
while (i--) {
if (array[i] === value) {
return i;
}
}
return -1;
}
/**
* Determines whether an object is of the specified type.
* @private
* @param {*} value Object to check.
* @param {String} type Type to check for, in lowercase.
* @returns {Boolean} Whether the object matches the type.
*/
function isType(value, type) {
return Object.prototype.toString.call(value).toLowerCase() === "[object " + type + "]";
}
/**
* Prepares an options object from the given value.
* @private
* @param {String|Object} value Value to convert to an options object.
* @returns {Object} Options object.
*/
function prepareOptions(value) {
value = value || {};
if (value === "all" || value.all) {
value = {natives: true, extensibility: true};
} else if (isType(value, "string")) {
value = self.forEach(value, /[^\s,]+/, function (m) {
this[m] = true;
}, {});
}
return value;
}
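/* Usage sketch for `prepareOptions`; both forms yield the same options object:
 * prepareOptions('natives extensibility'); // -> {natives: true, extensibility: true}
 * prepareOptions('all');                   // -> {natives: true, extensibility: true}
 */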
/**
* Runs built-in/custom tokens in reverse insertion order, until a match is found.
* @private
* @param {String} pattern Original pattern from which an XRegExp object is being built.
* @param {Number} pos Position to search for tokens within `pattern`.
* @param {Number} scope Current regex scope.
* @param {Object} context Context object assigned to token handler functions.
* @returns {Object} Object with properties `output` (the substitution string returned by the
* successful token handler) and `match` (the token's match array), or null.
*/
function runTokens(pattern, pos, scope, context) {
var i = tokens.length,
result = null,
match,
t;
// Protect against constructing XRegExps within token handler and trigger functions
isInsideConstructor = true;
// Must reset `isInsideConstructor`, even if a `trigger` or `handler` throws
try {
while (i--) { // Run in reverse order
t = tokens[i];
if ((t.scope === "all" || t.scope === scope) && (!t.trigger || t.trigger.call(context))) {
t.pattern.lastIndex = pos;
match = fixed.exec.call(t.pattern, pattern); // Fixed `exec` here allows use of named backreferences, etc.
if (match && match.index === pos) {
result = {
output: t.handler.call(context, match, scope),
match: match
};
break;
}
}
}
} finally {
isInsideConstructor = false;
}
return result;
}
/**
* Enables or disables XRegExp syntax and flag extensibility.
* @private
* @param {Boolean} on `true` to enable; `false` to disable.
*/
function setExtensibility(on) {
self.addToken = addToken[on ? "on" : "off"];
features.extensibility = on;
}
/**
* Enables or disables native method overrides.
* @private
* @param {Boolean} on `true` to enable; `false` to disable.
*/
function setNatives(on) {
RegExp.prototype.exec = (on ? fixed : nativ).exec;
RegExp.prototype.test = (on ? fixed : nativ).test;
String.prototype.match = (on ? fixed : nativ).match;
String.prototype.replace = (on ? fixed : nativ).replace;
String.prototype.split = (on ? fixed : nativ).split;
features.natives = on;
}
/*--------------------------------------
* Constructor
*------------------------------------*/
/**
* Creates an extended regular expression object for matching text with a pattern. Differs from a
* native regular expression in that additional syntax and flags are supported. The returned object
* is in fact a native `RegExp` and works with all native methods.
* @class XRegExp
* @constructor
* @param {String|RegExp} pattern Regex pattern string, or an existing `RegExp` object to copy.
* @param {String} [flags] Any combination of flags:
* <li>`g` - global
* <li>`i` - ignore case
* <li>`m` - multiline anchors
* <li>`n` - explicit capture
* <li>`s` - dot matches all (aka singleline)
* <li>`x` - free-spacing and line comments (aka extended)
* <li>`y` - sticky (Firefox 3+ only)
* Flags cannot be provided when constructing one `RegExp` from another.
* @returns {RegExp} Extended regular expression object.
* @example
*
* // With named capture and flag x
* date = XRegExp('(?<year> [0-9]{4}) -? # year \n\
* (?<month> [0-9]{2}) -? # month \n\
* (?<day> [0-9]{2}) # day ', 'x');
*
* // Passing a regex object to copy it. The copy maintains special properties for named capture,
* // is augmented with `XRegExp.prototype` methods, and has a fresh `lastIndex` property (set to
* // zero). Native regexes are not recompiled using XRegExp syntax.
* XRegExp(/regex/);
*/
self = function (pattern, flags) {
if (self.isRegExp(pattern)) {
if (flags !== undef) {
throw new TypeError("can't supply flags when constructing one RegExp from another");
}
return copy(pattern);
}
// Tokens become part of the regex construction process, so protect against infinite recursion
// when an XRegExp is constructed within a token handler function
if (isInsideConstructor) {
throw new Error("can't call the XRegExp constructor within token definition functions");
}
var output = [],
scope = defaultScope,
tokenContext = {
hasNamedCapture: false,
captureNames: [],
hasFlag: function (flag) {
return flags.indexOf(flag) > -1;
}
},
pos = 0,
tokenResult,
match,
chr;
pattern = pattern === undef ? "" : String(pattern);
flags = flags === undef ? "" : String(flags);
if (nativ.match.call(flags, duplicateFlags)) { // Don't use test/exec because they would update lastIndex
throw new SyntaxError("invalid duplicate regular expression flag");
}
// Strip/apply leading mode modifier with any combination of flags except g or y: (?imnsx)
pattern = nativ.replace.call(pattern, /^\(\?([\w$]+)\)/, function ($0, $1) {
if (nativ.test.call(/[gy]/, $1)) {
throw new SyntaxError("can't use flag g or y in mode modifier");
}
flags = nativ.replace.call(flags + $1, duplicateFlags, "");
return "";
});
self.forEach(flags, /[\s\S]/, function (m) {
if (registeredFlags.indexOf(m[0]) < 0) {
throw new SyntaxError("invalid regular expression flag " + m[0]);
}
});
while (pos < pattern.length) {
// Check for custom tokens at the current position
tokenResult = runTokens(pattern, pos, scope, tokenContext);
if (tokenResult) {
output.push(tokenResult.output);
pos += (tokenResult.match[0].length || 1);
} else {
// Check for native tokens (except character classes) at the current position
match = nativ.exec.call(nativeTokens[scope], pattern.slice(pos));
if (match) {
output.push(match[0]);
pos += match[0].length;
} else {
chr = pattern.charAt(pos);
if (chr === "[") {
scope = classScope;
} else if (chr === "]") {
scope = defaultScope;
}
// Advance position by one character
output.push(chr);
++pos;
}
}
}
return augment(new RegExp(output.join(""), nativ.replace.call(flags, /[^gimy]+/g, "")),
tokenContext.hasNamedCapture ? tokenContext.captureNames : null);
};
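/* Usage sketch for the constructor's leading mode modifier handling (the
 * `(?imnsx)` strip/apply step above); equivalent to XRegExp('ab.', 'is'):
 * XRegExp('(?is)ab.').test('AB\n'); // -> true
 */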
/*--------------------------------------
* Public methods/properties
*------------------------------------*/
// Installed and uninstalled states for `XRegExp.addToken`
addToken = {
on: function (regex, handler, options) {
options = options || {};
if (regex) {
tokens.push({
pattern: copy(regex, "g" + (hasNativeY ? "y" : "")),
handler: handler,
scope: options.scope || defaultScope,
trigger: options.trigger || null
});
}
// Providing `customFlags` with null `regex` and `handler` allows adding flags that do
// nothing, without triggering an invalid-flag error
if (options.customFlags) {
registeredFlags = nativ.replace.call(registeredFlags + options.customFlags, duplicateFlags, "");
}
},
off: function () {
throw new Error("extensibility must be installed before using addToken");
}
};
/**
* Extends or changes XRegExp syntax and allows custom flags. This is used internally and can be
* used to create XRegExp addons. `XRegExp.install('extensibility')` must be run before calling
* this function, or an error is thrown. If more than one token can match the same string, the last
* added wins.
* @memberOf XRegExp
* @param {RegExp} regex Regex object that matches the new token.
* @param {Function} handler Function that returns a new pattern string (using native regex syntax)
* to replace the matched token within all future XRegExp regexes. Has access to persistent
* properties of the regex being built, through `this`. Invoked with two arguments:
* <li>The match array, with named backreference properties.
* <li>The regex scope where the match was found.
* @param {Object} [options] Options object with optional properties:
* <li>`scope` {String} Scopes where the token applies: 'default', 'class', or 'all'.
* <li>`trigger` {Function} Function that returns `true` when the token should be applied; e.g.,
* if a flag is set. If `false` is returned, the matched string can be matched by other tokens.
* Has access to persistent properties of the regex being built, through `this` (including
* function `this.hasFlag`).
* <li>`customFlags` {String} Nonnative flags used by the token's handler or trigger functions.
* Prevents XRegExp from throwing an invalid flag error when the specified flags are used.
* @example
*
* // Basic usage: Adds \a for ALERT character
* XRegExp.addToken(
* /\\a/,
* function () {return '\\x07';},
* {scope: 'all'}
* );
* XRegExp('\\a[\\a-\\n]+').test('\x07\n\x07'); // -> true
*/
self.addToken = addToken.off;
/**
* Caches and returns the result of calling `XRegExp(pattern, flags)`. On any subsequent call with
* the same pattern and flag combination, the cached copy is returned.
* @memberOf XRegExp
* @param {String} pattern Regex pattern string.
* @param {String} [flags] Any combination of XRegExp flags.
* @returns {RegExp} Cached XRegExp object.
* @example
*
* while (match = XRegExp.cache('.', 'gs').exec(str)) {
* // The regex is compiled once only
* }
*/
self.cache = function (pattern, flags) {
var key = pattern + "/" + (flags || "");
return cache[key] || (cache[key] = self(pattern, flags));
};
/**
* Escapes any regular expression metacharacters, for use when matching literal strings. The result
* can safely be used at any point within a regex that uses any flags.
* @memberOf XRegExp
* @param {String} str String to escape.
* @returns {String} String with regex metacharacters escaped.
* @example
*
* XRegExp.escape('Escaped? <.>');
* // -> 'Escaped\?\ <\.>'
*/
self.escape = function (str) {
return nativ.replace.call(str, /[-[\]{}()*+?.,\\^$|#\s]/g, "\\$&");
};
/**
* Executes a regex search in a specified string. Returns a match array or `null`. If the provided
* regex uses named capture, named backreference properties are included on the match array.
* Optional `pos` and `sticky` arguments specify the search start position, and whether the match
* must start at the specified position only. The `lastIndex` property of the provided regex is not
* used, but is updated for compatibility. Also fixes browser bugs compared to the native
* `RegExp.prototype.exec` and can be used reliably cross-browser.
* @memberOf XRegExp
* @param {String} str String to search.
* @param {RegExp} regex Regex to search with.
* @param {Number} [pos=0] Zero-based index at which to start the search.
* @param {Boolean|String} [sticky=false] Whether the match must start at the specified position
* only. The string `'sticky'` is accepted as an alternative to `true`.
* @returns {Array} Match array with named backreference properties, or null.
* @example
*
* // Basic use, with named backreference
* var match = XRegExp.exec('U+2620', XRegExp('U\\+(?<hex>[0-9A-F]{4})'));
* match.hex; // -> '2620'
*
* // With pos and sticky, in a loop
* var pos = 2, result = [], match;
* while (match = XRegExp.exec('<1><2><3><4>5<6>', /<(\d)>/, pos, 'sticky')) {
* result.push(match[1]);
* pos = match.index + match[0].length;
* }
* // result -> ['2', '3', '4']
*/
self.exec = function (str, regex, pos, sticky) {
var r2 = copy(regex, "g" + (sticky && hasNativeY ? "y" : ""), (sticky === false ? "y" : "")),
match;
r2.lastIndex = pos = pos || 0;
match = fixed.exec.call(r2, str); // Fixed `exec` required for `lastIndex` fix, etc.
if (sticky && match && match.index !== pos) {
match = null;
}
if (regex.global) {
regex.lastIndex = match ? r2.lastIndex : 0;
}
return match;
};
/**
* Executes a provided function once per regex match.
* @memberOf XRegExp
* @param {String} str String to search.
* @param {RegExp} regex Regex to search with.
* @param {Function} callback Function to execute for each match. Invoked with four arguments:
* <li>The match array, with named backreference properties.
* <li>The zero-based match index.
* <li>The string being traversed.
* <li>The regex object being used to traverse the string.
* @param {*} [context] Object to use as `this` when executing `callback`.
* @returns {*} Provided `context` object.
* @example
*
* // Extracts every other digit from a string
* XRegExp.forEach('1a2345', /\d/, function (match, i) {
* if (i % 2) this.push(+match[0]);
* }, []);
* // -> [2, 4]
*/
self.forEach = function (str, regex, callback, context) {
var pos = 0,
i = -1,
match;
while ((match = self.exec(str, regex, pos))) {
callback.call(context, match, ++i, str, regex);
pos = match.index + (match[0].length || 1);
}
return context;
};
/**
* Copies a regex object and adds flag `g`. The copy maintains special properties for named
* capture, is augmented with `XRegExp.prototype` methods, and has a fresh `lastIndex` property
* (set to zero). Native regexes are not recompiled using XRegExp syntax.
* @memberOf XRegExp
* @param {RegExp} regex Regex to globalize.
* @returns {RegExp} Copy of the provided regex with flag `g` added.
* @example
*
* var globalCopy = XRegExp.globalize(/regex/);
* globalCopy.global; // -> true
*/
self.globalize = function (regex) {
return copy(regex, "g");
};
/**
* Installs optional features according to the specified options.
* @memberOf XRegExp
* @param {Object|String} options Options object or string.
* @example
*
* // With an options object
* XRegExp.install({
* // Overrides native regex methods with fixed/extended versions that support named
* // backreferences and fix numerous cross-browser bugs
* natives: true,
*
* // Enables extensibility of XRegExp syntax and flags
* extensibility: true
* });
*
* // With an options string
* XRegExp.install('natives extensibility');
*
* // Using a shortcut to install all optional features
* XRegExp.install('all');
*/
self.install = function (options) {
options = prepareOptions(options);
if (!features.natives && options.natives) {
setNatives(true);
}
if (!features.extensibility && options.extensibility) {
setExtensibility(true);
}
};
/**
* Checks whether an individual optional feature is installed.
* @memberOf XRegExp
* @param {String} feature Name of the feature to check. One of:
* <li>`natives`
* <li>`extensibility`
* @returns {Boolean} Whether the feature is installed.
* @example
*
* XRegExp.isInstalled('natives');
*/
self.isInstalled = function (feature) {
return !!(features[feature]);
};
/**
* Returns `true` if an object is a regex; `false` if it isn't. This works correctly for regexes
* created in another frame, when `instanceof` and `constructor` checks would fail.
* @memberOf XRegExp
* @param {*} value Object to check.
* @returns {Boolean} Whether the object is a `RegExp` object.
* @example
*
* XRegExp.isRegExp('string'); // -> false
* XRegExp.isRegExp(/regex/i); // -> true
* XRegExp.isRegExp(RegExp('^', 'm')); // -> true
* XRegExp.isRegExp(XRegExp('(?s).')); // -> true
*/
self.isRegExp = function (value) {
return isType(value, "regexp");
};
/**
* Retrieves the matches from searching a string using a chain of regexes that successively search
* within previous matches. The provided `chain` array can contain regexes and objects with `regex`
* and `backref` properties. When a backreference is specified, the named or numbered backreference
* is passed forward to the next regex or returned.
* @memberOf XRegExp
* @param {String} str String to search.
* @param {Array} chain Regexes that each search for matches within preceding results.
* @returns {Array} Matches by the last regex in the chain, or an empty array.
* @example
*
* // Basic usage; matches numbers within <b> tags
* XRegExp.matchChain('1 <b>2</b> 3 <b>4 a 56</b>', [
* XRegExp('(?is)<b>.*?</b>'),
* /\d+/
* ]);
* // -> ['2', '4', '56']
*
* // Passing forward and returning specific backreferences
* html = '<a href="http://xregexp.com/api/">XRegExp</a>\
* <a href="http://www.google.com/">Google</a>';
* XRegExp.matchChain(html, [
* {regex: /<a href="([^"]+)">/i, backref: 1},
* {regex: XRegExp('(?i)^https?://(?<domain>[^/?#]+)'), backref: 'domain'}
* ]);
* // -> ['xregexp.com', 'www.google.com']
*/
self.matchChain = function (str, chain) {
return (function recurseChain(values, level) {
var item = chain[level].regex ? chain[level] : {regex: chain[level]},
matches = [],
addMatch = function (match) {
matches.push(item.backref ? (match[item.backref] || "") : match[0]);
},
i;
for (i = 0; i < values.length; ++i) {
self.forEach(values[i], item.regex, addMatch);
}
return ((level === chain.length - 1) || !matches.length) ?
matches :
recurseChain(matches, level + 1);
}([str], 0));
};
/**
* Returns a new string with one or all matches of a pattern replaced. The pattern can be a string
* or regex, and the replacement can be a string or a function to be called for each match. To
* perform a global search and replace, use the optional `scope` argument or include flag `g` if
* using a regex. Replacement strings can use `${n}` for named and numbered backreferences.
* Replacement functions can use named backreferences via `arguments[0].name`. Also fixes browser
* bugs compared to the native `String.prototype.replace` and can be used reliably cross-browser.
* @memberOf XRegExp
* @param {String} str String to search.
* @param {RegExp|String} search Search pattern to be replaced.
* @param {String|Function} replacement Replacement string or a function invoked to create it.
* Replacement strings can include special replacement syntax:
* <li>$$ - Inserts a literal '$'.
* <li>$&, $0 - Inserts the matched substring.
* <li>$` - Inserts the string that precedes the matched substring (left context).
* <li>$' - Inserts the string that follows the matched substring (right context).
* <li>$n, $nn - Where n/nn are digits referencing an existing capturing group, inserts
* backreference n/nn.
* <li>${n} - Where n is a name or any number of digits that reference an existing capturing
* group, inserts backreference n.
* Replacement functions are invoked with three or more arguments:
* <li>The matched substring (corresponds to $& above). Named backreferences are accessible as
* properties of this first argument.
* <li>0..n arguments, one for each backreference (corresponding to $1, $2, etc. above).
* <li>The zero-based index of the match within the total search string.
* <li>The total string being searched.
* @param {String} [scope='one'] Use 'one' to replace the first match only, or 'all'. If not
* explicitly specified and using a regex with flag `g`, `scope` is 'all'.
* @returns {String} New string with one or all matches replaced.
* @example
*
* // Regex search, using named backreferences in replacement string
* var name = XRegExp('(?<first>\\w+) (?<last>\\w+)');
* XRegExp.replace('John Smith', name, '${last}, ${first}');
* // -> 'Smith, John'
*
* // Regex search, using named backreferences in replacement function
* XRegExp.replace('John Smith', name, function (match) {
* return match.last + ', ' + match.first;
* });
* // -> 'Smith, John'
*
* // Global string search/replacement
* XRegExp.replace('RegExp builds RegExps', 'RegExp', 'XRegExp', 'all');
* // -> 'XRegExp builds XRegExps'
*/
self.replace = function (str, search, replacement, scope) {
var isRegex = self.isRegExp(search),
search2 = search,
result;
if (isRegex) {
if (scope === undef && search.global) {
scope = "all"; // Follow flag g when `scope` isn't explicit
}
// Note that since a copy is used, `search`'s `lastIndex` isn't updated *during* replacement iterations
search2 = copy(search, scope === "all" ? "g" : "", scope === "all" ? "" : "g");
} else if (scope === "all") {
search2 = new RegExp(self.escape(String(search)), "g");
}
result = fixed.replace.call(String(str), search2, replacement); // Fixed `replace` required for named backreferences, etc.
if (isRegex && search.global) {
search.lastIndex = 0; // Fixes IE, Safari bug (last tested IE 9, Safari 5.1)
}
return result;
};
/**
* Splits a string into an array of strings using a regex or string separator. Matches of the
* separator are not included in the result array. However, if `separator` is a regex that contains
* capturing groups, backreferences are spliced into the result each time `separator` is matched.
* Fixes browser bugs compared to the native `String.prototype.split` and can be used reliably
* cross-browser.
* @memberOf XRegExp
* @param {String} str String to split.
* @param {RegExp|String} separator Regex or string to use for separating the string.
* @param {Number} [limit] Maximum number of items to include in the result array.
* @returns {Array} Array of substrings.
* @example
*
* // Basic use
* XRegExp.split('a b c', ' ');
* // -> ['a', 'b', 'c']
*
* // With limit
* XRegExp.split('a b c', ' ', 2);
* // -> ['a', 'b']
*
* // Backreferences in result array
* XRegExp.split('..word1..', /([a-z]+)(\d+)/i);
* // -> ['..', 'word', '1', '..']
*/
self.split = function (str, separator, limit) {
return fixed.split.call(str, separator, limit);
};
/**
* Executes a regex search in a specified string. Returns `true` or `false`. Optional `pos` and
* `sticky` arguments specify the search start position, and whether the match must start at the
* specified position only. The `lastIndex` property of the provided regex is not used, but is
* updated for compatibility. Also fixes browser bugs compared to the native
* `RegExp.prototype.test` and can be used reliably cross-browser.
* @memberOf XRegExp
* @param {String} str String to search.
* @param {RegExp} regex Regex to search with.
* @param {Number} [pos=0] Zero-based index at which to start the search.
* @param {Boolean|String} [sticky=false] Whether the match must start at the specified position
* only. The string `'sticky'` is accepted as an alternative to `true`.
* @returns {Boolean} Whether the regex matched the provided value.
* @example
*
* // Basic use
* XRegExp.test('abc', /c/); // -> true
*
* // With pos and sticky
* XRegExp.test('abc', /c/, 0, 'sticky'); // -> false
*/
self.test = function (str, regex, pos, sticky) {
// Do this the easy way :-)
return !!self.exec(str, regex, pos, sticky);
};
/**
* Uninstalls optional features according to the specified options.
* @memberOf XRegExp
* @param {Object|String} options Options object or string.
* @example
*
* // With an options object
* XRegExp.uninstall({
* // Restores native regex methods
* natives: true,
*
* // Disables additional syntax and flag extensions
* extensibility: true
* });
*
* // With an options string
* XRegExp.uninstall('natives extensibility');
*
* // Using a shortcut to uninstall all optional features
* XRegExp.uninstall('all');
*/
self.uninstall = function (options) {
options = prepareOptions(options);
if (features.natives && options.natives) {
setNatives(false);
}
if (features.extensibility && options.extensibility) {
setExtensibility(false);
}
};
/**
* Returns an XRegExp object that is the union of the given patterns. Patterns can be provided as
* regex objects or strings. Metacharacters are escaped in patterns provided as strings.
* Backreferences in provided regex objects are automatically renumbered to work correctly. Native
* flags used by provided regexes are ignored in favor of the `flags` argument.
* @memberOf XRegExp
* @param {Array} patterns Regexes and strings to combine.
* @param {String} [flags] Any combination of XRegExp flags.
* @returns {RegExp} Union of the provided regexes and strings.
* @example
*
* XRegExp.union(['a+b*c', /(dogs)\1/, /(cats)\1/], 'i');
* // -> /a\+b\*c|(dogs)\1|(cats)\2/i
*
* XRegExp.union([XRegExp('(?<pet>dogs)\\k<pet>'), XRegExp('(?<pet>cats)\\k<pet>')]);
* // -> XRegExp('(?<pet>dogs)\\k<pet>|(?<pet>cats)\\k<pet>')
*/
self.union = function (patterns, flags) {
var parts = /(\()(?!\?)|\\([1-9]\d*)|\\[\s\S]|\[(?:[^\\\]]|\\[\s\S])*]/g,
numCaptures = 0,
numPriorCaptures,
captureNames,
rewrite = function (match, paren, backref) {
var name = captureNames[numCaptures - numPriorCaptures];
if (paren) { // Capturing group
++numCaptures;
if (name) { // If the current capture has a name
return "(?<" + name + ">";
}
} else if (backref) { // Backreference
return "\\" + (+backref + numPriorCaptures);
}
return match;
},
output = [],
pattern,
i;
if (!(isType(patterns, "array") && patterns.length)) {
throw new TypeError("patterns must be a nonempty array");
}
for (i = 0; i < patterns.length; ++i) {
pattern = patterns[i];
if (self.isRegExp(pattern)) {
numPriorCaptures = numCaptures;
captureNames = (pattern.xregexp && pattern.xregexp.captureNames) || [];
// Rewrite backreferences. Passing the source through XRegExp throws on octals and
// ensures each pattern is independently valid; this helps keep the rewrite simple.
// Named captures are put back by `rewrite`
output.push(self(pattern.source).source.replace(parts, rewrite));
} else {
output.push(self.escape(pattern));
}
}
return self(output.join("|"), flags);
};
/**
* The XRegExp version number.
* @static
* @memberOf XRegExp
* @type String
*/
self.version = "2.0.0";
/*--------------------------------------
* Fixed/extended native methods
*------------------------------------*/
/**
* Adds named capture support (with backreferences returned as `result.name`), and fixes browser
* bugs in the native `RegExp.prototype.exec`. Calling `XRegExp.install('natives')` uses this to
* override the native method. Use via `XRegExp.exec` without overriding natives.
* @private
* @param {String} str String to search.
* @returns {Array} Match array with named backreference properties, or null.
*/
fixed.exec = function (str) {
var match, name, r2, origLastIndex, i;
if (!this.global) {
origLastIndex = this.lastIndex;
}
match = nativ.exec.apply(this, arguments);
if (match) {
// Fix browsers whose `exec` methods don't consistently return `undefined` for
// nonparticipating capturing groups
if (!compliantExecNpcg && match.length > 1 && lastIndexOf(match, "") > -1) {
r2 = new RegExp(this.source, nativ.replace.call(getNativeFlags(this), "g", ""));
// Using `str.slice(match.index)` rather than `match[0]` in case lookahead allowed
// matching due to characters outside the match
nativ.replace.call(String(str).slice(match.index), r2, function () {
var i;
for (i = 1; i < arguments.length - 2; ++i) {
if (arguments[i] === undef) {
match[i] = undef;
}
}
});
}
// Attach named capture properties
if (this.xregexp && this.xregexp.captureNames) {
for (i = 1; i < match.length; ++i) {
name = this.xregexp.captureNames[i - 1];
if (name) {
match[name] = match[i];
}
}
}
// Fix browsers that increment `lastIndex` after zero-length matches
if (this.global && !match[0].length && (this.lastIndex > match.index)) {
this.lastIndex = match.index;
}
}
if (!this.global) {
this.lastIndex = origLastIndex; // Fixes IE, Opera bug (last tested IE 9, Opera 11.6)
}
return match;
};
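/* Usage sketch for `fixed.exec`; assumes `XRegExp.install('natives')` has been
 * run, so this fixed version backs `RegExp.prototype.exec`:
 * XRegExp('(?<year>\\d{4})').exec('c. 1492').year; // -> '1492'
 */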
/**
* Fixes browser bugs in the native `RegExp.prototype.test`. Calling `XRegExp.install('natives')`
* uses this to override the native method.
* @private
* @param {String} str String to search.
* @returns {Boolean} Whether the regex matched the provided value.
*/
fixed.test = function (str) {
// Do this the easy way :-)
return !!fixed.exec.call(this, str);
};
/**
* Adds named capture support (with backreferences returned as `result.name`), and fixes browser
* bugs in the native `String.prototype.match`. Calling `XRegExp.install('natives')` uses this to
* override the native method.
* @private
* @param {RegExp} regex Regex to search with.
* @returns {Array} If `regex` uses flag g, an array of match strings or null. Without flag g, the
* result of calling `regex.exec(this)`.
*/
fixed.match = function (regex) {
if (!self.isRegExp(regex)) {
regex = new RegExp(regex); // Use native `RegExp`
} else if (regex.global) {
var result = nativ.match.apply(this, arguments);
regex.lastIndex = 0; // Fixes IE bug
return result;
}
return fixed.exec.call(regex, this);
};
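/* Usage sketch for `fixed.match`; assumes `XRegExp.install('natives')` has been run:
 * 'John Smith'.match(XRegExp('(?<first>\\w+)')).first; // -> 'John'
 */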
/**
* Adds support for `${n}` tokens for named and numbered backreferences in replacement text, and
* provides named backreferences to replacement functions as `arguments[0].name`. Also fixes
* browser bugs in replacement text syntax when performing a replacement using a nonregex search
* value, and the value of a replacement regex's `lastIndex` property during replacement iterations
* and upon completion. Note that this doesn't support SpiderMonkey's proprietary third (`flags`)
* argument. Calling `XRegExp.install('natives')` uses this to override the native method. Use via
* `XRegExp.replace` without overriding natives.
* @private
* @param {RegExp|String} search Search pattern to be replaced.
* @param {String|Function} replacement Replacement string or a function invoked to create it.
* @returns {String} New string with one or all matches replaced.
*/
fixed.replace = function (search, replacement) {
var isRegex = self.isRegExp(search), captureNames, result, str, origLastIndex;
if (isRegex) {
if (search.xregexp) {
captureNames = search.xregexp.captureNames;
}
if (!search.global) {
origLastIndex = search.lastIndex;
}
} else {
search += "";
}
if (isType(replacement, "function")) {
result = nativ.replace.call(String(this), search, function () {
var args = arguments, i;
if (captureNames) {
// Change the `arguments[0]` string primitive to a `String` object that can store properties
args[0] = new String(args[0]);
// Store named backreferences on the first argument
for (i = 0; i < captureNames.length; ++i) {
if (captureNames[i]) {
args[0][captureNames[i]] = args[i + 1];
}
}
}
// Update `lastIndex` before calling `replacement`.
// Fixes IE, Chrome, Firefox, Safari bug (last tested IE 9, Chrome 17, Firefox 11, Safari 5.1)
if (isRegex && search.global) {
search.lastIndex = args[args.length - 2] + args[0].length;
}
return replacement.apply(null, args);
});
} else {
str = String(this); // Ensure `args[args.length - 1]` will be a string when given nonstring `this`
result = nativ.replace.call(str, search, function () {
var args = arguments; // Keep this function's `arguments` available through closure
return nativ.replace.call(String(replacement), replacementToken, function ($0, $1, $2) {
var n;
// Named or numbered backreference with curly brackets
if ($1) {
/* XRegExp behavior for `${n}`:
* 1. Backreference to numbered capture, where `n` is one or more digits. `0`, `00`, etc. refer to the entire match.
* 2. Backreference to named capture `n`, if it exists and is not a number overridden by numbered capture.
* 3. Otherwise, it's an error.
*/
n = +$1; // Type-convert; drop leading zeros
if (n <= args.length - 3) {
return args[n] || "";
}
n = captureNames ? lastIndexOf(captureNames, $1) : -1;
if (n < 0) {
throw new SyntaxError("backreference to undefined group " + $0);
}
return args[n + 1] || "";
}
// Else, special variable or numbered backreference (without curly brackets)
if ($2 === "$") return "$";
if ($2 === "&" || +$2 === 0) return args[0]; // $&, $0 (not followed by 1-9), $00
if ($2 === "`") return args[args.length - 1].slice(0, args[args.length - 2]);
if ($2 === "'") return args[args.length - 1].slice(args[args.length - 2] + args[0].length);
// Else, numbered backreference (without curly brackets)
$2 = +$2; // Type-convert; drop leading zero
/* XRegExp behavior:
* - Backreferences without curly brackets end after 1 or 2 digits. Use `${..}` for more digits.
* - `$1` is an error if there are no capturing groups.
* - `$10` is an error if there are fewer than 10 capturing groups. Use `${1}0` instead.
* - `$01` is equivalent to `$1` if a capturing group exists, otherwise it's an error.
* - `$0` (not followed by 1-9), `$00`, and `$&` are the entire match.
* Native behavior, for comparison:
* - Backreferences end after 1 or 2 digits. Cannot use backreference to capturing group 100+.
* - `$1` is a literal `$1` if there are no capturing groups.
* - `$10` is `$1` followed by a literal `0` if there are fewer than 10 capturing groups.
* - `$01` is equivalent to `$1` if a capturing group exists, otherwise it's a literal `$01`.
* - `$0` is a literal `$0`. `$&` is the entire match.
*/
if (!isNaN($2)) {
if ($2 > args.length - 3) {
throw new SyntaxError("backreference to undefined group " + $0);
}
return args[$2] || "";
}
throw new SyntaxError("invalid token " + $0);
});
});
}
if (isRegex) {
if (search.global) {
search.lastIndex = 0; // Fixes IE, Safari bug (last tested IE 9, Safari 5.1)
} else {
search.lastIndex = origLastIndex; // Fixes IE, Opera bug (last tested IE 9, Opera 11.6)
}
}
return result;
};
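/* Usage sketch for `fixed.replace`; assumes `XRegExp.install('natives')` has been run:
 * var name = XRegExp('(?<first>\\w+) (?<last>\\w+)');
 * 'John Smith'.replace(name, '${last}, ${first}'); // -> 'Smith, John'
 */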
/**
* Fixes browser bugs in the native `String.prototype.split`. Calling `XRegExp.install('natives')`
* uses this to override the native method. Use via `XRegExp.split` without overriding natives.
* @private
* @param {RegExp|String} separator Regex or string to use for separating the string.
* @param {Number} [limit] Maximum number of items to include in the result array.
* @returns {Array} Array of substrings.
*/
fixed.split = function (separator, limit) {
if (!self.isRegExp(separator)) {
return nativ.split.apply(this, arguments); // Use the faster native method
}
var str = String(this),
origLastIndex = separator.lastIndex,
output = [],
lastLastIndex = 0,
lastLength;
/* Values for `limit`, per the spec:
* If undefined: pow(2,32) - 1
* If 0, Infinity, or NaN: 0
* If positive number: limit = floor(limit); if (limit >= pow(2,32)) limit -= pow(2,32);
* If negative number: pow(2,32) - floor(abs(limit))
* If other: Type-convert, then use the above rules
*/
limit = (limit === undef ? -1 : limit) >>> 0;
self.forEach(str, separator, function (match) {
if ((match.index + match[0].length) > lastLastIndex) { // Not equivalent to `if (match[0].length)`
output.push(str.slice(lastLastIndex, match.index));
if (match.length > 1 && match.index < str.length) {
Array.prototype.push.apply(output, match.slice(1));
}
lastLength = match[0].length;
lastLastIndex = match.index + lastLength;
}
});
if (lastLastIndex === str.length) {
if (!nativ.test.call(separator, "") || lastLength) {
output.push("");
}
} else {
output.push(str.slice(lastLastIndex));
}
separator.lastIndex = origLastIndex;
return output.length > limit ? output.slice(0, limit) : output;
};
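/* Usage sketch for `fixed.split`; assumes `XRegExp.install('natives')` has been run:
 * '..word1..'.split(XRegExp('([a-z]+)(\\d+)', 'i')); // -> ['..', 'word', '1', '..']
 */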
/*--------------------------------------
* Built-in tokens
*------------------------------------*/
// Shortcut
add = addToken.on;
/* Letter identity escapes that natively match literal characters: \p, \P, etc.
* Should be SyntaxErrors but are allowed in web reality. XRegExp makes them errors for cross-
* browser consistency and to reserve their syntax, but lets them be superseded by XRegExp addons.
*/
add(/\\([ABCE-RTUVXYZaeg-mopqyz]|c(?![A-Za-z])|u(?![\dA-Fa-f]{4})|x(?![\dA-Fa-f]{2}))/,
function (match, scope) {
// \B is allowed in default scope only
if (match[1] === "B" && scope === defaultScope) {
return match[0];
}
throw new SyntaxError("invalid escape " + match[0]);
},
{scope: "all"});
/* Empty character class: [] or [^]
* Fixes a critical cross-browser syntax inconsistency. Unless this is standardized (per the spec),
* regex syntax can't be accurately parsed because character class endings can't be determined.
*/
add(/\[(\^?)]/,
function (match) {
// For cross-browser compatibility with ES3, convert [] to \b\B and [^] to [\s\S].
// (?!) should work like \b\B, but is unreliable in Firefox
return match[1] ? "[\\s\\S]" : "\\b\\B";
});
/* Comment pattern: (?# )
* Inline comments are an alternative to the line comments allowed in free-spacing mode (flag x).
*/
add(/(?:\(\?#[^)]*\))+/,
function (match) {
// Keep tokens separated unless the following token is a quantifier
return nativ.test.call(quantifier, match.input.slice(match.index + match[0].length)) ? "" : "(?:)";
});
/* Named backreference: \k<name>
* Backreference names can use the characters A-Z, a-z, 0-9, _, and $ only.
*/
add(/\\k<([\w$]+)>/,
function (match) {
var index = isNaN(match[1]) ? (lastIndexOf(this.captureNames, match[1]) + 1) : +match[1],
endIndex = match.index + match[0].length;
if (!index || index > this.captureNames.length) {
throw new SyntaxError("backreference to undefined group " + match[0]);
}
// Keep backreferences separate from subsequent literal numbers
return "\\" + index + (
endIndex === match.input.length || isNaN(match.input.charAt(endIndex)) ? "" : "(?:)"
);
});
/* Whitespace and line comments, in free-spacing mode (aka extended mode, flag x) only.
*/
add(/(?:\s+|#.*)+/,
function (match) {
// Keep tokens separated unless the following token is a quantifier
return nativ.test.call(quantifier, match.input.slice(match.index + match[0].length)) ? "" : "(?:)";
},
{
trigger: function () {
return this.hasFlag("x");
},
customFlags: "x"
});
/* Dot, in dotall mode (aka singleline mode, flag s) only.
*/
add(/\./,
function () {
return "[\\s\\S]";
},
{
trigger: function () {
return this.hasFlag("s");
},
customFlags: "s"
});
/* Named capturing group; match the opening delimiter only: (?<name>
* Capture names can use the characters A-Z, a-z, 0-9, _, and $ only. Names can't be integers.
* Supports Python-style (?P<name> as an alternate syntax to avoid issues in recent Opera (which
* natively supports the Python-style syntax). Otherwise, XRegExp might treat numbered
* backreferences to Python-style named capture as octals.
*/
add(/\(\?P?<([\w$]+)>/,
function (match) {
if (!isNaN(match[1])) {
// Avoid incorrect lookups, since named backreferences are added to match arrays
throw new SyntaxError("can't use integer as capture name " + match[0]);
}
this.captureNames.push(match[1]);
this.hasNamedCapture = true;
return "(";
});
/* Numbered backreference or octal, plus any following digits: \0, \11, etc.
 * Octal escapes (except \0 when not followed by 0-9) and backreferences to unopened capture
 * groups throw an error. Other matches are returned unaltered. IE <= 8 doesn't support
 * backreferences greater than \99 in regex syntax.
*/
add(/\\(\d+)/,
function (match, scope) {
if (!(scope === defaultScope && /^[1-9]/.test(match[1]) && +match[1] <= this.captureNames.length) &&
match[1] !== "0") {
throw new SyntaxError("can't use octal escape or backreference to undefined group " + match[0]);
}
return match[0];
},
{scope: "all"});
/* Capturing group; match the opening parenthesis only.
* Required for support of named capturing groups. Also adds explicit capture mode (flag n).
*/
add(/\((?!\?)/,
function () {
if (this.hasFlag("n")) {
return "(?:";
}
this.captureNames.push(null);
return "(";
},
{customFlags: "n"});
/*--------------------------------------
* Expose XRegExp
*------------------------------------*/
// For CommonJS environments
if (typeof exports !== "undefined") {
exports.XRegExp = self;
}
return self;
}());
/***** unicode-base.js *****/
/*!
* XRegExp Unicode Base v1.0.0
* (c) 2008-2012 Steven Levithan <http://xregexp.com/>
* MIT License
* Uses Unicode 6.1 <http://unicode.org/>
*/
/**
* Adds support for the `\p{L}` or `\p{Letter}` Unicode category. Addon packages for other Unicode
* categories, scripts, blocks, and properties are available separately. All Unicode tokens can be
* inverted using `\P{..}` or `\p{^..}`. Token names are case insensitive, and any spaces, hyphens,
* and underscores are ignored.
* @requires XRegExp
*/
(function (XRegExp) {
"use strict";
var unicode = {};
/*--------------------------------------
* Private helper functions
*------------------------------------*/
// Generates a standardized token name (lowercase, with hyphens, spaces, and underscores removed)
function slug(name) {
return name.replace(/[- _]+/g, "").toLowerCase();
}
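/* Usage sketch for `slug`:
 * slug('Uppercase_Letter'); // -> 'uppercaseletter'
 */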
// Expands a list of Unicode code points and ranges to be usable in a regex character class
function expand(str) {
return str.replace(/\w{4}/g, "\\u$&");
}
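/* Usage sketch for `expand`:
 * expand('0041-005A'); // -> '\\u0041-\\u005A' (usable in a character class)
 */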
// Adds leading zeros if shorter than four characters
function pad4(str) {
while (str.length < 4) {
str = "0" + str;
}
return str;
}
// Converts a hexadecimal number to decimal
function dec(hex) {
return parseInt(hex, 16);
}
// Converts a decimal number to hexadecimal
function hex(dec) {
return parseInt(dec, 10).toString(16);
}
// Inverts a list of Unicode code points and ranges
function invert(range) {
var output = [],
lastEnd = -1,
start;
XRegExp.forEach(range, /\\u(\w{4})(?:-\\u(\w{4}))?/, function (m) {
start = dec(m[1]);
if (start > (lastEnd + 1)) {
output.push("\\u" + pad4(hex(lastEnd + 1)));
if (start > (lastEnd + 2)) {
output.push("-\\u" + pad4(hex(start - 1)));
}
}
lastEnd = dec(m[2] || m[1]);
});
if (lastEnd < 0xFFFF) {
output.push("\\u" + pad4(hex(lastEnd + 1)));
if (lastEnd < 0xFFFE) {
output.push("-\\uFFFF");
}
}
return output.join("");
}
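/* Usage sketch for `invert`, inverting the expanded range for A-Z:
 * invert('\\u0041-\\u005A'); // -> '\\u0000-\\u0040\\u005B-\\uFFFF'
 */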
// Generates an inverted token on first use
function cacheInversion(item) {
return unicode["^" + item] || (unicode["^" + item] = invert(unicode[item]));
}
/*--------------------------------------
* Core functionality
*------------------------------------*/
XRegExp.install("extensibility");
/**
* Adds to the list of Unicode properties that XRegExp regexes can match via \p{..} or \P{..}.
* @memberOf XRegExp
* @param {Object} pack Named sets of Unicode code points and ranges.
* @param {Object} [aliases] Aliases for the primary token names.
* @example
*
* XRegExp.addUnicodePackage({
* XDigit: '0030-00390041-00460061-0066' // 0-9A-Fa-f
* }, {
* XDigit: 'Hexadecimal'
* });
*/
XRegExp.addUnicodePackage = function (pack, aliases) {
var p;
if (!XRegExp.isInstalled("extensibility")) {
throw new Error("extensibility must be installed before adding Unicode packages");
}
if (pack) {
for (p in pack) {
if (pack.hasOwnProperty(p)) {
unicode[slug(p)] = expand(pack[p]);
}
}
}
if (aliases) {
for (p in aliases) {
if (aliases.hasOwnProperty(p)) {
unicode[slug(aliases[p])] = unicode[slug(p)];
}
}
}
};
/* Adds data for the Unicode `Letter` category. Addon packages include other categories, scripts,
* blocks, and properties.
*/
XRegExp.addUnicodePackage({
L: "0041-005A0061-007A00AA00B500BA00C0-00D600D8-00F600F8-02C102C6-02D102E0-02E402EC02EE0370-037403760377037A-037D03860388-038A038C038E-03A103A3-03F503F7-0481048A-05270531-055605590561-058705D0-05EA05F0-05F20620-064A066E066F0671-06D306D506E506E606EE06EF06FA-06FC06FF07100712-072F074D-07A507B107CA-07EA07F407F507FA0800-0815081A082408280840-085808A008A2-08AC0904-0939093D09500958-09610971-09770979-097F0985-098C098F09900993-09A809AA-09B009B209B6-09B909BD09CE09DC09DD09DF-09E109F009F10A05-0A0A0A0F0A100A13-0A280A2A-0A300A320A330A350A360A380A390A59-0A5C0A5E0A72-0A740A85-0A8D0A8F-0A910A93-0AA80AAA-0AB00AB20AB30AB5-0AB90ABD0AD00AE00AE10B05-0B0C0B0F0B100B13-0B280B2A-0B300B320B330B35-0B390B3D0B5C0B5D0B5F-0B610B710B830B85-0B8A0B8E-0B900B92-0B950B990B9A0B9C0B9E0B9F0BA30BA40BA8-0BAA0BAE-0BB90BD00C05-0C0C0C0E-0C100C12-0C280C2A-0C330C35-0C390C3D0C580C590C600C610C85-0C8C0C8E-0C900C92-0CA80CAA-0CB30CB5-0CB90CBD0CDE0CE00CE10CF10CF20D05-0D0C0D0E-0D100D12-0D3A0D3D0D4E0D600D610D7A-0D7F0D85-0D960D9A-0DB10DB3-0DBB0DBD0DC0-0DC60E01-0E300E320E330E40-0E460E810E820E840E870E880E8A0E8D0E94-0E970E99-0E9F0EA1-0EA30EA50EA70EAA0EAB0EAD-0EB00EB20EB30EBD0EC0-0EC40EC60EDC-0EDF0F000F40-0F470F49-0F6C0F88-0F8C1000-102A103F1050-1055105A-105D106110651066106E-10701075-1081108E10A0-10C510C710CD10D0-10FA10FC-1248124A-124D1250-12561258125A-125D1260-1288128A-128D1290-12B012B2-12B512B8-12BE12C012C2-12C512C8-12D612D8-13101312-13151318-135A1380-138F13A0-13F41401-166C166F-167F1681-169A16A0-16EA1700-170C170E-17111720-17311740-17511760-176C176E-17701780-17B317D717DC1820-18771880-18A818AA18B0-18F51900-191C1950-196D1970-19741980-19AB19C1-19C71A00-1A161A20-1A541AA71B05-1B331B45-1B4B1B83-1BA01BAE1BAF1BBA-1BE51C00-1C231C4D-1C4F1C5A-1C7D1CE9-1CEC1CEE-1CF11CF51CF61D00-1DBF1E00-1F151F18-1F1D1F20-1F451F48-1F4D1F50-1F571F591F5B1F5D1F5F-1F7D1F80-1FB41FB6-1FBC1FBE1FC2-1FC41FC6-1FCC1FD0-1FD31FD6-1FDB1FE0-1FEC1FF2-1FF41FF6-1FFC2071207F2090-209C21022107210A-211321152119-211D212421262128212A-212D212F-2139213C-213F2145-2149214E218321842C00-2C2E2C30-2C5E2C60-2CE42CEB-2CEE2CF22CF32D00-2D252D272D2D2D30-2D672D6F2D80-2D962DA0-2DA62DA8-2DAE2DB0-2DB62DB8-2DBE2DC0-2DC62DC8-2DCE2DD0-2DD62DD8-2DDE2E2F300530063031-3035303B303C3041-3096309D-309F30A1-30FA30FC-30FF3105-312D3131-318E31A0-31BA31F0-31FF3400-4DB54E00-9FCCA000-A48CA4D0-A4FDA500-A60CA610-A61FA62AA62BA640-A66EA67F-A697A6A0-A6E5A717-A71FA722-A788A78B-A78EA790-A793A7A0-A7AAA7F8-A801A803-A805A807-A80AA80C-A822A840-A873A882-A8B3A8F2-A8F7A8FBA90A-A925A930-A946A960-A97CA984-A9B2A9CFAA00-AA28AA40-AA42AA44-AA4BAA60-AA76AA7AAA80-AAAFAAB1AAB5AAB6AAB9-AABDAAC0AAC2AADB-AADDAAE0-AAEAAAF2-AAF4AB01-AB06AB09-AB0EAB11-AB16AB20-AB26AB28-AB2EABC0-ABE2AC00-D7A3D7B0-D7C6D7CB-D7FBF900-FA6DFA70-FAD9FB00-FB06FB13-FB17FB1DFB1F-FB28FB2A-FB36FB38-FB3CFB3EFB40FB41FB43FB44FB46-FBB1FBD3-FD3DFD50-FD8FFD92-FDC7FDF0-FDFBFE70-FE74FE76-FEFCFF21-FF3AFF41-FF5AFF66-FFBEFFC2-FFC7FFCA-FFCFFFD2-FFD7FFDA-FFDC"
}, {
L: "Letter"
});
/* Adds Unicode property syntax to XRegExp: \p{..}, \P{..}, \p{^..}
*/
XRegExp.addToken(
/\\([pP]){(\^?)([^}]*)}/,
function (match, scope) {
var inv = (match[1] === "P" || match[2]) ? "^" : "",
item = slug(match[3]);
// The double negative \P{^..} is invalid
if (match[1] === "P" && match[2]) {
throw new SyntaxError("invalid double negation \\P{^");
}
if (!unicode.hasOwnProperty(item)) {
throw new SyntaxError("invalid or unknown Unicode property " + match[0]);
}
return scope === "class" ?
(inv ? cacheInversion(item) : unicode[item]) :
"[" + inv + unicode[item] + "]";
},
{scope: "all"}
);
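/* Usage sketch for the \p{..} syntax; only the Letter category is bundled here:
 * XRegExp('\\p{L}+').test('café');  // -> true
 * XRegExp('[\\p{^L}]').test('7');   // -> true (inverted, inside a character class)
 */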
}(XRegExp));
/***** unicode-categories.js *****/
/*!
* XRegExp Unicode Categories v1.2.0
* (c) 2010-2012 Steven Levithan <http://xregexp.com/>
* MIT License
* Uses Unicode 6.1 <http://unicode.org/>
*/
/**
* Adds support for all Unicode categories (aka properties), e.g. `\p{Lu}` or
* `\p{Uppercase Letter}`. Token names are case insensitive, and any spaces, hyphens, and
* underscores are ignored.
* @requires XRegExp, XRegExp Unicode Base
*/
(function (XRegExp) {
"use strict";
if (!XRegExp.addUnicodePackage) {
throw new ReferenceError("Unicode Base must be loaded before Unicode Categories");
}
XRegExp.install("extensibility");
XRegExp.addUnicodePackage({
//L: "", // Included in the Unicode Base addon
Ll: "0061-007A00B500DF-00F600F8-00FF01010103010501070109010B010D010F01110113011501170119011B011D011F01210123012501270129012B012D012F01310133013501370138013A013C013E014001420144014601480149014B014D014F01510153015501570159015B015D015F01610163016501670169016B016D016F0171017301750177017A017C017E-0180018301850188018C018D019201950199-019B019E01A101A301A501A801AA01AB01AD01B001B401B601B901BA01BD-01BF01C601C901CC01CE01D001D201D401D601D801DA01DC01DD01DF01E101E301E501E701E901EB01ED01EF01F001F301F501F901FB01FD01FF02010203020502070209020B020D020F02110213021502170219021B021D021F02210223022502270229022B022D022F02310233-0239023C023F0240024202470249024B024D024F-02930295-02AF037103730377037B-037D039003AC-03CE03D003D103D5-03D703D903DB03DD03DF03E103E303E503E703E903EB03ED03EF-03F303F503F803FB03FC0430-045F04610463046504670469046B046D046F04710473047504770479047B047D047F0481048B048D048F04910493049504970499049B049D049F04A104A304A504A704A904AB04AD04AF04B104B304B504B704B904BB04BD04BF04C204C404C604C804CA04CC04CE04CF04D104D304D504D704D904DB04DD04DF04E104E304E504E704E904EB04ED04EF04F104F304F504F704F904FB04FD04FF05010503050505070509050B050D050F05110513051505170519051B051D051F05210523052505270561-05871D00-1D2B1D6B-1D771D79-1D9A1E011E031E051E071E091E0B1E0D1E0F1E111E131E151E171E191E1B1E1D1E1F1E211E231E251E271E291E2B1E2D1E2F1E311E331E351E371E391E3B1E3D1E3F1E411E431E451E471E491E4B1E4D1E4F1E511E531E551E571E591E5B1E5D1E5F1E611E631E651E671E691E6B1E6D1E6F1E711E731E751E771E791E7B1E7D1E7F1E811E831E851E871E891E8B1E8D1E8F1E911E931E95-1E9D1E9F1EA11EA31EA51EA71EA91EAB1EAD1EAF1EB11EB31EB51EB71EB91EBB1EBD1EBF1EC11EC31EC51EC71EC91ECB1ECD1ECF1ED11ED31ED51ED71ED91EDB1EDD1EDF1EE11EE31EE51EE71EE91EEB1EED1EEF1EF11EF31EF51EF71EF91EFB1EFD1EFF-1F071F10-1F151F20-1F271F30-1F371F40-1F451F50-1F571F60-1F671F70-1F7D1F80-1F871F90-1F971FA0-1FA71FB0-1FB41FB61FB71FBE1FC2-1FC41FC61FC71FD0-1FD31FD61FD71FE0-1FE71FF2-1FF41FF61FF7210A210E210F2113212F21342139213C213D2146-2149214E21842C30-2C5E2C612C652C662C682C6A2C6C2C712C732C742C76-2C7B2C812C832C852C872C892C8B2C8D2C8F2C912C932C952C972C992C9B2C9D2C9F2CA12CA32CA52CA72CA92CAB2CAD2CAF2CB12CB32CB52CB72CB92CBB2CBD2CBF2CC12CC32CC52CC72CC92CCB2CCD2CCF2CD12CD32CD52CD72CD92CDB2CDD2CDF2CE12CE32CE42CEC2CEE2CF32D00-2D252D272D2DA641A643A645A647A649A64BA64DA64FA651A653A655A657A659A65BA65DA65FA661A663A665A667A669A66BA66DA681A683A685A687A689A68BA68DA68FA691A693A695A697A723A725A727A729A72BA72DA72F-A731A733A735A737A739A73BA73DA73FA741A743A745A747A749A74BA74DA74FA751A753A755A757A759A75BA75DA75FA761A763A765A767A769A76BA76DA76FA771-A778A77AA77CA77FA781A783A785A787A78CA78EA791A793A7A1A7A3A7A5A7A7A7A9A7FAFB00-FB06FB13-FB17FF41-FF5A",
Lu: "0041-005A00C0-00D600D8-00DE01000102010401060108010A010C010E01100112011401160118011A011C011E01200122012401260128012A012C012E01300132013401360139013B013D013F0141014301450147014A014C014E01500152015401560158015A015C015E01600162016401660168016A016C016E017001720174017601780179017B017D018101820184018601870189-018B018E-0191019301940196-0198019C019D019F01A001A201A401A601A701A901AC01AE01AF01B1-01B301B501B701B801BC01C401C701CA01CD01CF01D101D301D501D701D901DB01DE01E001E201E401E601E801EA01EC01EE01F101F401F6-01F801FA01FC01FE02000202020402060208020A020C020E02100212021402160218021A021C021E02200222022402260228022A022C022E02300232023A023B023D023E02410243-02460248024A024C024E03700372037603860388-038A038C038E038F0391-03A103A3-03AB03CF03D2-03D403D803DA03DC03DE03E003E203E403E603E803EA03EC03EE03F403F703F903FA03FD-042F04600462046404660468046A046C046E04700472047404760478047A047C047E0480048A048C048E04900492049404960498049A049C049E04A004A204A404A604A804AA04AC04AE04B004B204B404B604B804BA04BC04BE04C004C104C304C504C704C904CB04CD04D004D204D404D604D804DA04DC04DE04E004E204E404E604E804EA04EC04EE04F004F204F404F604F804FA04FC04FE05000502050405060508050A050C050E05100512051405160518051A051C051E05200522052405260531-055610A0-10C510C710CD1E001E021E041E061E081E0A1E0C1E0E1E101E121E141E161E181E1A1E1C1E1E1E201E221E241E261E281E2A1E2C1E2E1E301E321E341E361E381E3A1E3C1E3E1E401E421E441E461E481E4A1E4C1E4E1E501E521E541E561E581E5A1E5C1E5E1E601E621E641E661E681E6A1E6C1E6E1E701E721E741E761E781E7A1E7C1E7E1E801E821E841E861E881E8A1E8C1E8E1E901E921E941E9E1EA01EA21EA41EA61EA81EAA1EAC1EAE1EB01EB21EB41EB61EB81EBA1EBC1EBE1EC01EC21EC41EC61EC81ECA1ECC1ECE1ED01ED21ED41ED61ED81EDA1EDC1EDE1EE01EE21EE41EE61EE81EEA1EEC1EEE1EF01EF21EF41EF61EF81EFA1EFC1EFE1F08-1F0F1F18-1F1D1F28-1F2F1F38-1F3F1F48-1F4D1F591F5B1F5D1F5F1F68-1F6F1FB8-1FBB1FC8-1FCB1FD8-1FDB1FE8-1FEC1FF8-1FFB21022107210B-210D2110-211221152119-211D212421262128212A-212D2130-2133213E213F214521832C00-2C2E2C602C62-2C642C672C692C6B2C6D-2C702C722C752C7E-2C802C822C842C862C882C8A2C8C2C8E2C902C922C942C962C982C9A2C9C2C9E2CA02CA22CA42CA62CA82CAA2CAC2CAE2CB02CB22CB42CB62CB82CBA2CBC2CBE2CC02CC22CC42CC62CC82CCA2CCC2CCE2CD02CD22CD42CD62CD82CDA2CDC2CDE2CE02CE22CEB2CED2CF2A640A642A644A646A648A64AA64CA64EA650A652A654A656A658A65AA65CA65EA660A662A664A666A668A66AA66CA680A682A684A686A688A68AA68CA68EA690A692A694A696A722A724A726A728A72AA72CA72EA732A734A736A738A73AA73CA73EA740A742A744A746A748A74AA74CA74EA750A752A754A756A758A75AA75CA75EA760A762A764A766A768A76AA76CA76EA779A77BA77DA77EA780A782A784A786A78BA78DA790A792A7A0A7A2A7A4A7A6A7A8A7AAFF21-FF3A",
Lt: "01C501C801CB01F21F88-1F8F1F98-1F9F1FA8-1FAF1FBC1FCC1FFC",
Lm: "02B0-02C102C6-02D102E0-02E402EC02EE0374037A0559064006E506E607F407F507FA081A0824082809710E460EC610FC17D718431AA71C78-1C7D1D2C-1D6A1D781D9B-1DBF2071207F2090-209C2C7C2C7D2D6F2E2F30053031-3035303B309D309E30FC-30FEA015A4F8-A4FDA60CA67FA717-A71FA770A788A7F8A7F9A9CFAA70AADDAAF3AAF4FF70FF9EFF9F",
Lo: "00AA00BA01BB01C0-01C3029405D0-05EA05F0-05F20620-063F0641-064A066E066F0671-06D306D506EE06EF06FA-06FC06FF07100712-072F074D-07A507B107CA-07EA0800-08150840-085808A008A2-08AC0904-0939093D09500958-09610972-09770979-097F0985-098C098F09900993-09A809AA-09B009B209B6-09B909BD09CE09DC09DD09DF-09E109F009F10A05-0A0A0A0F0A100A13-0A280A2A-0A300A320A330A350A360A380A390A59-0A5C0A5E0A72-0A740A85-0A8D0A8F-0A910A93-0AA80AAA-0AB00AB20AB30AB5-0AB90ABD0AD00AE00AE10B05-0B0C0B0F0B100B13-0B280B2A-0B300B320B330B35-0B390B3D0B5C0B5D0B5F-0B610B710B830B85-0B8A0B8E-0B900B92-0B950B990B9A0B9C0B9E0B9F0BA30BA40BA8-0BAA0BAE-0BB90BD00C05-0C0C0C0E-0C100C12-0C280C2A-0C330C35-0C390C3D0C580C590C600C610C85-0C8C0C8E-0C900C92-0CA80CAA-0CB30CB5-0CB90CBD0CDE0CE00CE10CF10CF20D05-0D0C0D0E-0D100D12-0D3A0D3D0D4E0D600D610D7A-0D7F0D85-0D960D9A-0DB10DB3-0DBB0DBD0DC0-0DC60E01-0E300E320E330E40-0E450E810E820E840E870E880E8A0E8D0E94-0E970E99-0E9F0EA1-0EA30EA50EA70EAA0EAB0EAD-0EB00EB20EB30EBD0EC0-0EC40EDC-0EDF0F000F40-0F470F49-0F6C0F88-0F8C1000-102A103F1050-1055105A-105D106110651066106E-10701075-1081108E10D0-10FA10FD-1248124A-124D1250-12561258125A-125D1260-1288128A-128D1290-12B012B2-12B512B8-12BE12C012C2-12C512C8-12D612D8-13101312-13151318-135A1380-138F13A0-13F41401-166C166F-167F1681-169A16A0-16EA1700-170C170E-17111720-17311740-17511760-176C176E-17701780-17B317DC1820-18421844-18771880-18A818AA18B0-18F51900-191C1950-196D1970-19741980-19AB19C1-19C71A00-1A161A20-1A541B05-1B331B45-1B4B1B83-1BA01BAE1BAF1BBA-1BE51C00-1C231C4D-1C4F1C5A-1C771CE9-1CEC1CEE-1CF11CF51CF62135-21382D30-2D672D80-2D962DA0-2DA62DA8-2DAE2DB0-2DB62DB8-2DBE2DC0-2DC62DC8-2DCE2DD0-2DD62DD8-2DDE3006303C3041-3096309F30A1-30FA30FF3105-312D3131-318E31A0-31BA31F0-31FF3400-4DB54E00-9FCCA000-A014A016-A48CA4D0-A4F7A500-A60BA610-A61FA62AA62BA66EA6A0-A6E5A7FB-A801A803-A805A807-A80AA80C-A822A840-A873A882-A8B3A8F2-A8F7A8FBA90A-A925A930-A946A960-A97CA984-A9B2AA00-AA28AA40-AA42AA44-AA4BAA60-AA6FAA71-AA76AA7AAA80-AAAFAAB1AAB5AAB6AAB9-AABDAAC0AAC2AADBAADCAAE0-AAEAAAF2AB01-AB06AB09-AB0EAB11-AB16AB20-AB26AB28-AB2EABC0-ABE2AC00-D7A3D7B0-D7C6D7CB-D7FBF900-FA6DFA70-FAD9FB1DFB1F-FB28FB2A-FB36FB38-FB3CFB3EFB40FB41FB43FB44FB46-FBB1FBD3-FD3DFD50-FD8FFD92-FDC7FDF0-FDFBFE70-FE74FE76-FEFCFF66-FF6FFF71-FF9DFFA0-FFBEFFC2-FFC7FFCA-FFCFFFD2-FFD7FFDA-FFDC",
M: "0300-036F0483-04890591-05BD05BF05C105C205C405C505C70610-061A064B-065F067006D6-06DC06DF-06E406E706E806EA-06ED07110730-074A07A6-07B007EB-07F30816-0819081B-08230825-08270829-082D0859-085B08E4-08FE0900-0903093A-093C093E-094F0951-0957096209630981-098309BC09BE-09C409C709C809CB-09CD09D709E209E30A01-0A030A3C0A3E-0A420A470A480A4B-0A4D0A510A700A710A750A81-0A830ABC0ABE-0AC50AC7-0AC90ACB-0ACD0AE20AE30B01-0B030B3C0B3E-0B440B470B480B4B-0B4D0B560B570B620B630B820BBE-0BC20BC6-0BC80BCA-0BCD0BD70C01-0C030C3E-0C440C46-0C480C4A-0C4D0C550C560C620C630C820C830CBC0CBE-0CC40CC6-0CC80CCA-0CCD0CD50CD60CE20CE30D020D030D3E-0D440D46-0D480D4A-0D4D0D570D620D630D820D830DCA0DCF-0DD40DD60DD8-0DDF0DF20DF30E310E34-0E3A0E47-0E4E0EB10EB4-0EB90EBB0EBC0EC8-0ECD0F180F190F350F370F390F3E0F3F0F71-0F840F860F870F8D-0F970F99-0FBC0FC6102B-103E1056-1059105E-10601062-10641067-106D1071-10741082-108D108F109A-109D135D-135F1712-17141732-1734175217531772177317B4-17D317DD180B-180D18A91920-192B1930-193B19B0-19C019C819C91A17-1A1B1A55-1A5E1A60-1A7C1A7F1B00-1B041B34-1B441B6B-1B731B80-1B821BA1-1BAD1BE6-1BF31C24-1C371CD0-1CD21CD4-1CE81CED1CF2-1CF41DC0-1DE61DFC-1DFF20D0-20F02CEF-2CF12D7F2DE0-2DFF302A-302F3099309AA66F-A672A674-A67DA69FA6F0A6F1A802A806A80BA823-A827A880A881A8B4-A8C4A8E0-A8F1A926-A92DA947-A953A980-A983A9B3-A9C0AA29-AA36AA43AA4CAA4DAA7BAAB0AAB2-AAB4AAB7AAB8AABEAABFAAC1AAEB-AAEFAAF5AAF6ABE3-ABEAABECABEDFB1EFE00-FE0FFE20-FE26",
Mn: "0300-036F0483-04870591-05BD05BF05C105C205C405C505C70610-061A064B-065F067006D6-06DC06DF-06E406E706E806EA-06ED07110730-074A07A6-07B007EB-07F30816-0819081B-08230825-08270829-082D0859-085B08E4-08FE0900-0902093A093C0941-0948094D0951-095709620963098109BC09C1-09C409CD09E209E30A010A020A3C0A410A420A470A480A4B-0A4D0A510A700A710A750A810A820ABC0AC1-0AC50AC70AC80ACD0AE20AE30B010B3C0B3F0B41-0B440B4D0B560B620B630B820BC00BCD0C3E-0C400C46-0C480C4A-0C4D0C550C560C620C630CBC0CBF0CC60CCC0CCD0CE20CE30D41-0D440D4D0D620D630DCA0DD2-0DD40DD60E310E34-0E3A0E47-0E4E0EB10EB4-0EB90EBB0EBC0EC8-0ECD0F180F190F350F370F390F71-0F7E0F80-0F840F860F870F8D-0F970F99-0FBC0FC6102D-10301032-10371039103A103D103E10581059105E-10601071-1074108210851086108D109D135D-135F1712-17141732-1734175217531772177317B417B517B7-17BD17C617C9-17D317DD180B-180D18A91920-19221927192819321939-193B1A171A181A561A58-1A5E1A601A621A65-1A6C1A73-1A7C1A7F1B00-1B031B341B36-1B3A1B3C1B421B6B-1B731B801B811BA2-1BA51BA81BA91BAB1BE61BE81BE91BED1BEF-1BF11C2C-1C331C361C371CD0-1CD21CD4-1CE01CE2-1CE81CED1CF41DC0-1DE61DFC-1DFF20D0-20DC20E120E5-20F02CEF-2CF12D7F2DE0-2DFF302A-302D3099309AA66FA674-A67DA69FA6F0A6F1A802A806A80BA825A826A8C4A8E0-A8F1A926-A92DA947-A951A980-A982A9B3A9B6-A9B9A9BCAA29-AA2EAA31AA32AA35AA36AA43AA4CAAB0AAB2-AAB4AAB7AAB8AABEAABFAAC1AAECAAEDAAF6ABE5ABE8ABEDFB1EFE00-FE0FFE20-FE26",
Mc: "0903093B093E-09400949-094C094E094F0982098309BE-09C009C709C809CB09CC09D70A030A3E-0A400A830ABE-0AC00AC90ACB0ACC0B020B030B3E0B400B470B480B4B0B4C0B570BBE0BBF0BC10BC20BC6-0BC80BCA-0BCC0BD70C01-0C030C41-0C440C820C830CBE0CC0-0CC40CC70CC80CCA0CCB0CD50CD60D020D030D3E-0D400D46-0D480D4A-0D4C0D570D820D830DCF-0DD10DD8-0DDF0DF20DF30F3E0F3F0F7F102B102C10311038103B103C105610571062-10641067-106D108310841087-108C108F109A-109C17B617BE-17C517C717C81923-19261929-192B193019311933-193819B0-19C019C819C91A19-1A1B1A551A571A611A631A641A6D-1A721B041B351B3B1B3D-1B411B431B441B821BA11BA61BA71BAA1BAC1BAD1BE71BEA-1BEC1BEE1BF21BF31C24-1C2B1C341C351CE11CF21CF3302E302FA823A824A827A880A881A8B4-A8C3A952A953A983A9B4A9B5A9BAA9BBA9BD-A9C0AA2FAA30AA33AA34AA4DAA7BAAEBAAEEAAEFAAF5ABE3ABE4ABE6ABE7ABE9ABEAABEC",
Me: "0488048920DD-20E020E2-20E4A670-A672",
N: "0030-003900B200B300B900BC-00BE0660-066906F0-06F907C0-07C90966-096F09E6-09EF09F4-09F90A66-0A6F0AE6-0AEF0B66-0B6F0B72-0B770BE6-0BF20C66-0C6F0C78-0C7E0CE6-0CEF0D66-0D750E50-0E590ED0-0ED90F20-0F331040-10491090-10991369-137C16EE-16F017E0-17E917F0-17F91810-18191946-194F19D0-19DA1A80-1A891A90-1A991B50-1B591BB0-1BB91C40-1C491C50-1C5920702074-20792080-20892150-21822185-21892460-249B24EA-24FF2776-27932CFD30073021-30293038-303A3192-31953220-32293248-324F3251-325F3280-328932B1-32BFA620-A629A6E6-A6EFA830-A835A8D0-A8D9A900-A909A9D0-A9D9AA50-AA59ABF0-ABF9FF10-FF19",
Nd: "0030-00390660-066906F0-06F907C0-07C90966-096F09E6-09EF0A66-0A6F0AE6-0AEF0B66-0B6F0BE6-0BEF0C66-0C6F0CE6-0CEF0D66-0D6F0E50-0E590ED0-0ED90F20-0F291040-10491090-109917E0-17E91810-18191946-194F19D0-19D91A80-1A891A90-1A991B50-1B591BB0-1BB91C40-1C491C50-1C59A620-A629A8D0-A8D9A900-A909A9D0-A9D9AA50-AA59ABF0-ABF9FF10-FF19",
Nl: "16EE-16F02160-21822185-218830073021-30293038-303AA6E6-A6EF",
No: "00B200B300B900BC-00BE09F4-09F90B72-0B770BF0-0BF20C78-0C7E0D70-0D750F2A-0F331369-137C17F0-17F919DA20702074-20792080-20892150-215F21892460-249B24EA-24FF2776-27932CFD3192-31953220-32293248-324F3251-325F3280-328932B1-32BFA830-A835",
P: "0021-00230025-002A002C-002F003A003B003F0040005B-005D005F007B007D00A100A700AB00B600B700BB00BF037E0387055A-055F0589058A05BE05C005C305C605F305F40609060A060C060D061B061E061F066A-066D06D40700-070D07F7-07F90830-083E085E0964096509700AF00DF40E4F0E5A0E5B0F04-0F120F140F3A-0F3D0F850FD0-0FD40FD90FDA104A-104F10FB1360-13681400166D166E169B169C16EB-16ED1735173617D4-17D617D8-17DA1800-180A194419451A1E1A1F1AA0-1AA61AA8-1AAD1B5A-1B601BFC-1BFF1C3B-1C3F1C7E1C7F1CC0-1CC71CD32010-20272030-20432045-20512053-205E207D207E208D208E2329232A2768-277527C527C627E6-27EF2983-299829D8-29DB29FC29FD2CF9-2CFC2CFE2CFF2D702E00-2E2E2E30-2E3B3001-30033008-30113014-301F3030303D30A030FBA4FEA4FFA60D-A60FA673A67EA6F2-A6F7A874-A877A8CEA8CFA8F8-A8FAA92EA92FA95FA9C1-A9CDA9DEA9DFAA5C-AA5FAADEAADFAAF0AAF1ABEBFD3EFD3FFE10-FE19FE30-FE52FE54-FE61FE63FE68FE6AFE6BFF01-FF03FF05-FF0AFF0C-FF0FFF1AFF1BFF1FFF20FF3B-FF3DFF3FFF5BFF5DFF5F-FF65",
Pd: "002D058A05BE140018062010-20152E172E1A2E3A2E3B301C303030A0FE31FE32FE58FE63FF0D",
Ps: "0028005B007B0F3A0F3C169B201A201E2045207D208D23292768276A276C276E27702772277427C527E627E827EA27EC27EE2983298529872989298B298D298F299129932995299729D829DA29FC2E222E242E262E283008300A300C300E3010301430163018301A301DFD3EFE17FE35FE37FE39FE3BFE3DFE3FFE41FE43FE47FE59FE5BFE5DFF08FF3BFF5BFF5FFF62",
Pe: "0029005D007D0F3B0F3D169C2046207E208E232A2769276B276D276F27712773277527C627E727E927EB27ED27EF298429862988298A298C298E2990299229942996299829D929DB29FD2E232E252E272E293009300B300D300F3011301530173019301B301E301FFD3FFE18FE36FE38FE3AFE3CFE3EFE40FE42FE44FE48FE5AFE5CFE5EFF09FF3DFF5DFF60FF63",
Pi: "00AB2018201B201C201F20392E022E042E092E0C2E1C2E20",
Pf: "00BB2019201D203A2E032E052E0A2E0D2E1D2E21",
Pc: "005F203F20402054FE33FE34FE4D-FE4FFF3F",
Po: "0021-00230025-0027002A002C002E002F003A003B003F0040005C00A100A700B600B700BF037E0387055A-055F058905C005C305C605F305F40609060A060C060D061B061E061F066A-066D06D40700-070D07F7-07F90830-083E085E0964096509700AF00DF40E4F0E5A0E5B0F04-0F120F140F850FD0-0FD40FD90FDA104A-104F10FB1360-1368166D166E16EB-16ED1735173617D4-17D617D8-17DA1800-18051807-180A194419451A1E1A1F1AA0-1AA61AA8-1AAD1B5A-1B601BFC-1BFF1C3B-1C3F1C7E1C7F1CC0-1CC71CD3201620172020-20272030-2038203B-203E2041-20432047-205120532055-205E2CF9-2CFC2CFE2CFF2D702E002E012E06-2E082E0B2E0E-2E162E182E192E1B2E1E2E1F2E2A-2E2E2E30-2E393001-3003303D30FBA4FEA4FFA60D-A60FA673A67EA6F2-A6F7A874-A877A8CEA8CFA8F8-A8FAA92EA92FA95FA9C1-A9CDA9DEA9DFAA5C-AA5FAADEAADFAAF0AAF1ABEBFE10-FE16FE19FE30FE45FE46FE49-FE4CFE50-FE52FE54-FE57FE5F-FE61FE68FE6AFE6BFF01-FF03FF05-FF07FF0AFF0CFF0EFF0FFF1AFF1BFF1FFF20FF3CFF61FF64FF65",
S: "0024002B003C-003E005E0060007C007E00A2-00A600A800A900AC00AE-00B100B400B800D700F702C2-02C502D2-02DF02E5-02EB02ED02EF-02FF03750384038503F60482058F0606-0608060B060E060F06DE06E906FD06FE07F609F209F309FA09FB0AF10B700BF3-0BFA0C7F0D790E3F0F01-0F030F130F15-0F170F1A-0F1F0F340F360F380FBE-0FC50FC7-0FCC0FCE0FCF0FD5-0FD8109E109F1390-139917DB194019DE-19FF1B61-1B6A1B74-1B7C1FBD1FBF-1FC11FCD-1FCF1FDD-1FDF1FED-1FEF1FFD1FFE20442052207A-207C208A-208C20A0-20B9210021012103-21062108210921142116-2118211E-2123212521272129212E213A213B2140-2144214A-214D214F2190-2328232B-23F32400-24262440-244A249C-24E92500-26FF2701-27672794-27C427C7-27E527F0-29822999-29D729DC-29FB29FE-2B4C2B50-2B592CE5-2CEA2E80-2E992E9B-2EF32F00-2FD52FF0-2FFB300430123013302030363037303E303F309B309C319031913196-319F31C0-31E33200-321E322A-324732503260-327F328A-32B032C0-32FE3300-33FF4DC0-4DFFA490-A4C6A700-A716A720A721A789A78AA828-A82BA836-A839AA77-AA79FB29FBB2-FBC1FDFCFDFDFE62FE64-FE66FE69FF04FF0BFF1C-FF1EFF3EFF40FF5CFF5EFFE0-FFE6FFE8-FFEEFFFCFFFD",
Sm: "002B003C-003E007C007E00AC00B100D700F703F60606-060820442052207A-207C208A-208C21182140-2144214B2190-2194219A219B21A021A321A621AE21CE21CF21D221D421F4-22FF2308-230B23202321237C239B-23B323DC-23E125B725C125F8-25FF266F27C0-27C427C7-27E527F0-27FF2900-29822999-29D729DC-29FB29FE-2AFF2B30-2B442B47-2B4CFB29FE62FE64-FE66FF0BFF1C-FF1EFF5CFF5EFFE2FFE9-FFEC",
Sc: "002400A2-00A5058F060B09F209F309FB0AF10BF90E3F17DB20A0-20B9A838FDFCFE69FF04FFE0FFE1FFE5FFE6",
Sk: "005E006000A800AF00B400B802C2-02C502D2-02DF02E5-02EB02ED02EF-02FF0375038403851FBD1FBF-1FC11FCD-1FCF1FDD-1FDF1FED-1FEF1FFD1FFE309B309CA700-A716A720A721A789A78AFBB2-FBC1FF3EFF40FFE3",
So: "00A600A900AE00B00482060E060F06DE06E906FD06FE07F609FA0B700BF3-0BF80BFA0C7F0D790F01-0F030F130F15-0F170F1A-0F1F0F340F360F380FBE-0FC50FC7-0FCC0FCE0FCF0FD5-0FD8109E109F1390-1399194019DE-19FF1B61-1B6A1B74-1B7C210021012103-210621082109211421162117211E-2123212521272129212E213A213B214A214C214D214F2195-2199219C-219F21A121A221A421A521A7-21AD21AF-21CD21D021D121D321D5-21F32300-2307230C-231F2322-2328232B-237B237D-239A23B4-23DB23E2-23F32400-24262440-244A249C-24E92500-25B625B8-25C025C2-25F72600-266E2670-26FF2701-27672794-27BF2800-28FF2B00-2B2F2B452B462B50-2B592CE5-2CEA2E80-2E992E9B-2EF32F00-2FD52FF0-2FFB300430123013302030363037303E303F319031913196-319F31C0-31E33200-321E322A-324732503260-327F328A-32B032C0-32FE3300-33FF4DC0-4DFFA490-A4C6A828-A82BA836A837A839AA77-AA79FDFDFFE4FFE8FFEDFFEEFFFCFFFD",
Z: "002000A01680180E2000-200A20282029202F205F3000",
Zs: "002000A01680180E2000-200A202F205F3000",
Zl: "2028",
Zp: "2029",
C: "0000-001F007F-009F00AD03780379037F-0383038B038D03A20528-05300557055805600588058B-058E059005C8-05CF05EB-05EF05F5-0605061C061D06DD070E070F074B074C07B2-07BF07FB-07FF082E082F083F085C085D085F-089F08A108AD-08E308FF097809800984098D098E0991099209A909B109B3-09B509BA09BB09C509C609C909CA09CF-09D609D8-09DB09DE09E409E509FC-0A000A040A0B-0A0E0A110A120A290A310A340A370A3A0A3B0A3D0A43-0A460A490A4A0A4E-0A500A52-0A580A5D0A5F-0A650A76-0A800A840A8E0A920AA90AB10AB40ABA0ABB0AC60ACA0ACE0ACF0AD1-0ADF0AE40AE50AF2-0B000B040B0D0B0E0B110B120B290B310B340B3A0B3B0B450B460B490B4A0B4E-0B550B58-0B5B0B5E0B640B650B78-0B810B840B8B-0B8D0B910B96-0B980B9B0B9D0BA0-0BA20BA5-0BA70BAB-0BAD0BBA-0BBD0BC3-0BC50BC90BCE0BCF0BD1-0BD60BD8-0BE50BFB-0C000C040C0D0C110C290C340C3A-0C3C0C450C490C4E-0C540C570C5A-0C5F0C640C650C70-0C770C800C810C840C8D0C910CA90CB40CBA0CBB0CC50CC90CCE-0CD40CD7-0CDD0CDF0CE40CE50CF00CF3-0D010D040D0D0D110D3B0D3C0D450D490D4F-0D560D58-0D5F0D640D650D76-0D780D800D810D840D97-0D990DB20DBC0DBE0DBF0DC7-0DC90DCB-0DCE0DD50DD70DE0-0DF10DF5-0E000E3B-0E3E0E5C-0E800E830E850E860E890E8B0E8C0E8E-0E930E980EA00EA40EA60EA80EA90EAC0EBA0EBE0EBF0EC50EC70ECE0ECF0EDA0EDB0EE0-0EFF0F480F6D-0F700F980FBD0FCD0FDB-0FFF10C610C8-10CC10CE10CF1249124E124F12571259125E125F1289128E128F12B112B612B712BF12C112C612C712D7131113161317135B135C137D-137F139A-139F13F5-13FF169D-169F16F1-16FF170D1715-171F1737-173F1754-175F176D17711774-177F17DE17DF17EA-17EF17FA-17FF180F181A-181F1878-187F18AB-18AF18F6-18FF191D-191F192C-192F193C-193F1941-1943196E196F1975-197F19AC-19AF19CA-19CF19DB-19DD1A1C1A1D1A5F1A7D1A7E1A8A-1A8F1A9A-1A9F1AAE-1AFF1B4C-1B4F1B7D-1B7F1BF4-1BFB1C38-1C3A1C4A-1C4C1C80-1CBF1CC8-1CCF1CF7-1CFF1DE7-1DFB1F161F171F1E1F1F1F461F471F4E1F4F1F581F5A1F5C1F5E1F7E1F7F1FB51FC51FD41FD51FDC1FF01FF11FF51FFF200B-200F202A-202E2060-206F20722073208F209D-209F20BA-20CF20F1-20FF218A-218F23F4-23FF2427-243F244B-245F27002B4D-2B4F2B5A-2BFF2C2F2C5F2CF4-2CF82D262D28-2D2C2D2E2D2F2D68-2D6E2D71-2D7E2D97-2D9F2DA72DAF2DB72DBF2DC72DCF2DD72DDF2E3C-2E7F2E9A2EF4-2EFF2FD6-2FEF2FFC-2FFF3040309730983100-3104312E-3130318F31BB-31BF31E4-31EF321F32FF4DB6-4DBF9FCD-9FFFA48D-A48FA4C7-A4CFA62C-A63FA698-A69EA6F8-A6FFA78FA794-A79FA7AB-A7F7A82C-A82FA83A-A83FA878-A87FA8C5-A8CDA8DA-A8DFA8FC-A8FFA954-A95EA97D-A97FA9CEA9DA-A9DDA9E0-A9FFAA37-AA3FAA4EAA4FAA5AAA5BAA7C-AA7FAAC3-AADAAAF7-AB00AB07AB08AB0FAB10AB17-AB1FAB27AB2F-ABBFABEEABEFABFA-ABFFD7A4-D7AFD7C7-D7CAD7FC-F8FFFA6EFA6FFADA-FAFFFB07-FB12FB18-FB1CFB37FB3DFB3FFB42FB45FBC2-FBD2FD40-FD4FFD90FD91FDC8-FDEFFDFEFDFFFE1A-FE1FFE27-FE2FFE53FE67FE6C-FE6FFE75FEFD-FF00FFBF-FFC1FFC8FFC9FFD0FFD1FFD8FFD9FFDD-FFDFFFE7FFEF-FFFBFFFEFFFF",
Cc: "0000-001F007F-009F",
Cf: "00AD0600-060406DD070F200B-200F202A-202E2060-2064206A-206FFEFFFFF9-FFFB",
Co: "E000-F8FF",
Cs: "D800-DFFF",
Cn: "03780379037F-0383038B038D03A20528-05300557055805600588058B-058E059005C8-05CF05EB-05EF05F5-05FF0605061C061D070E074B074C07B2-07BF07FB-07FF082E082F083F085C085D085F-089F08A108AD-08E308FF097809800984098D098E0991099209A909B109B3-09B509BA09BB09C509C609C909CA09CF-09D609D8-09DB09DE09E409E509FC-0A000A040A0B-0A0E0A110A120A290A310A340A370A3A0A3B0A3D0A43-0A460A490A4A0A4E-0A500A52-0A580A5D0A5F-0A650A76-0A800A840A8E0A920AA90AB10AB40ABA0ABB0AC60ACA0ACE0ACF0AD1-0ADF0AE40AE50AF2-0B000B040B0D0B0E0B110B120B290B310B340B3A0B3B0B450B460B490B4A0B4E-0B550B58-0B5B0B5E0B640B650B78-0B810B840B8B-0B8D0B910B96-0B980B9B0B9D0BA0-0BA20BA5-0BA70BAB-0BAD0BBA-0BBD0BC3-0BC50BC90BCE0BCF0BD1-0BD60BD8-0BE50BFB-0C000C040C0D0C110C290C340C3A-0C3C0C450C490C4E-0C540C570C5A-0C5F0C640C650C70-0C770C800C810C840C8D0C910CA90CB40CBA0CBB0CC50CC90CCE-0CD40CD7-0CDD0CDF0CE40CE50CF00CF3-0D010D040D0D0D110D3B0D3C0D450D490D4F-0D560D58-0D5F0D640D650D76-0D780D800D810D840D97-0D990DB20DBC0DBE0DBF0DC7-0DC90DCB-0DCE0DD50DD70DE0-0DF10DF5-0E000E3B-0E3E0E5C-0E800E830E850E860E890E8B0E8C0E8E-0E930E980EA00EA40EA60EA80EA90EAC0EBA0EBE0EBF0EC50EC70ECE0ECF0EDA0EDB0EE0-0EFF0F480F6D-0F700F980FBD0FCD0FDB-0FFF10C610C8-10CC10CE10CF1249124E124F12571259125E125F1289128E128F12B112B612B712BF12C112C612C712D7131113161317135B135C137D-137F139A-139F13F5-13FF169D-169F16F1-16FF170D1715-171F1737-173F1754-175F176D17711774-177F17DE17DF17EA-17EF17FA-17FF180F181A-181F1878-187F18AB-18AF18F6-18FF191D-191F192C-192F193C-193F1941-1943196E196F1975-197F19AC-19AF19CA-19CF19DB-19DD1A1C1A1D1A5F1A7D1A7E1A8A-1A8F1A9A-1A9F1AAE-1AFF1B4C-1B4F1B7D-1B7F1BF4-1BFB1C38-1C3A1C4A-1C4C1C80-1CBF1CC8-1CCF1CF7-1CFF1DE7-1DFB1F161F171F1E1F1F1F461F471F4E1F4F1F581F5A1F5C1F5E1F7E1F7F1FB51FC51FD41FD51FDC1FF01FF11FF51FFF2065-206920722073208F209D-209F20BA-20CF20F1-20FF218A-218F23F4-23FF2427-243F244B-245F27002B4D-2B4F2B5A-2BFF2C2F2C5F2CF4-2CF82D262D28-2D2C2D2E2D2F2D68-2D6E2D71-2D7E2D97-2D9F2DA72DAF2DB72DBF2DC72DCF2DD72DDF2E3C-2E7F2E9A2EF4-2EFF2FD6-2FEF2FFC-2FFF3040309730983100-3104312E-3130318F31BB-31BF31E4-31EF321F32FF4DB6-4DBF9FCD-9FFFA48D-A48FA4C7-A4CFA62C-A63FA698-A69EA6F8-A6FFA78FA794-A79FA7AB-A7F7A82C-A82FA83A-A83FA878-A87FA8C5-A8CDA8DA-A8DFA8FC-A8FFA954-A95EA97D-A97FA9CEA9DA-A9DDA9E0-A9FFAA37-AA3FAA4EAA4FAA5AAA5BAA7C-AA7FAAC3-AADAAAF7-AB00AB07AB08AB0FAB10AB17-AB1FAB27AB2F-ABBFABEEABEFABFA-ABFFD7A4-D7AFD7C7-D7CAD7FC-D7FFFA6EFA6FFADA-FAFFFB07-FB12FB18-FB1CFB37FB3DFB3FFB42FB45FBC2-FBD2FD40-FD4FFD90FD91FDC8-FDEFFDFEFDFFFE1A-FE1FFE27-FE2FFE53FE67FE6C-FE6FFE75FEFDFEFEFF00FFBF-FFC1FFC8FFC9FFD0FFD1FFD8FFD9FFDD-FFDFFFE7FFEF-FFF8FFFEFFFF"
}, {
//L: "Letter", // Included in the Unicode Base addon
Ll: "Lowercase_Letter",
Lu: "Uppercase_Letter",
Lt: "Titlecase_Letter",
Lm: "Modifier_Letter",
Lo: "Other_Letter",
M: "Mark",
Mn: "Nonspacing_Mark",
Mc: "Spacing_Mark",
Me: "Enclosing_Mark",
N: "Number",
Nd: "Decimal_Number",
Nl: "Letter_Number",
No: "Other_Number",
P: "Punctuation",
Pd: "Dash_Punctuation",
Ps: "Open_Punctuation",
Pe: "Close_Punctuation",
Pi: "Initial_Punctuation",
Pf: "Final_Punctuation",
Pc: "Connector_Punctuation",
Po: "Other_Punctuation",
S: "Symbol",
Sm: "Math_Symbol",
Sc: "Currency_Symbol",
Sk: "Modifier_Symbol",
So: "Other_Symbol",
Z: "Separator",
Zs: "Space_Separator",
Zl: "Line_Separator",
Zp: "Paragraph_Separator",
C: "Other",
Cc: "Control",
Cf: "Format",
Co: "Private_Use",
Cs: "Surrogate",
Cn: "Unassigned"
});
}(XRegExp));
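/*
 * Illustrative usage sketch (not part of the original bundle): both the
 * short aliases and the long names registered above become usable tokens,
 * assuming the XRegExp constructor defined earlier in this file:
 *
 *   XRegExp('^\\p{Nd}+$').test('42');             // -> true
 *   XRegExp('^\\p{Decimal_Number}+$').test('42'); // -> true (long alias)
 */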
/***** unicode-scripts.js *****/
/*!
* XRegExp Unicode Scripts v1.2.0
* (c) 2010-2012 Steven Levithan <http://xregexp.com/>
* MIT License
* Uses Unicode 6.1 <http://unicode.org/>
*/
/**
* Adds support for all Unicode scripts in the Basic Multilingual Plane (U+0000-U+FFFF).
* E.g., `\p{Latin}`. Token names are case insensitive, and any spaces, hyphens, and underscores
* are ignored.
* @requires XRegExp, XRegExp Unicode Base
*/
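/*
 * Illustrative usage sketch (not part of the original bundle):
 *
 *   XRegExp('^\\p{Hiragana}+$').test('\u3072\u3089');       // -> true
 *   XRegExp('^[\\p{Latin}\\p{Common}]+$').test('abc 123!'); // -> true
 */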
(function (XRegExp) {
"use strict";
if (!XRegExp.addUnicodePackage) {
throw new ReferenceError("Unicode Base must be loaded before Unicode Scripts");
}
XRegExp.install("extensibility");
XRegExp.addUnicodePackage({
Arabic: "0600-06040606-060B060D-061A061E0620-063F0641-064A0656-065E066A-066F0671-06DC06DE-06FF0750-077F08A008A2-08AC08E4-08FEFB50-FBC1FBD3-FD3DFD50-FD8FFD92-FDC7FDF0-FDFCFE70-FE74FE76-FEFC",
Armenian: "0531-05560559-055F0561-0587058A058FFB13-FB17",
Balinese: "1B00-1B4B1B50-1B7C",
Bamum: "A6A0-A6F7",
Batak: "1BC0-1BF31BFC-1BFF",
Bengali: "0981-09830985-098C098F09900993-09A809AA-09B009B209B6-09B909BC-09C409C709C809CB-09CE09D709DC09DD09DF-09E309E6-09FB",
Bopomofo: "02EA02EB3105-312D31A0-31BA",
Braille: "2800-28FF",
Buginese: "1A00-1A1B1A1E1A1F",
Buhid: "1740-1753",
Canadian_Aboriginal: "1400-167F18B0-18F5",
Cham: "AA00-AA36AA40-AA4DAA50-AA59AA5C-AA5F",
Cherokee: "13A0-13F4",
Common: "0000-0040005B-0060007B-00A900AB-00B900BB-00BF00D700F702B9-02DF02E5-02E902EC-02FF0374037E038503870589060C061B061F06400660-066906DD096409650E3F0FD5-0FD810FB16EB-16ED173517361802180318051CD31CE11CE9-1CEC1CEE-1CF31CF51CF62000-200B200E-2064206A-20702074-207E2080-208E20A0-20B92100-21252127-2129212C-21312133-214D214F-215F21892190-23F32400-24262440-244A2460-26FF2701-27FF2900-2B4C2B50-2B592E00-2E3B2FF0-2FFB3000-300430063008-30203030-3037303C-303F309B309C30A030FB30FC3190-319F31C0-31E33220-325F327F-32CF3358-33FF4DC0-4DFFA700-A721A788-A78AA830-A839FD3EFD3FFDFDFE10-FE19FE30-FE52FE54-FE66FE68-FE6BFEFFFF01-FF20FF3B-FF40FF5B-FF65FF70FF9EFF9FFFE0-FFE6FFE8-FFEEFFF9-FFFD",
Coptic: "03E2-03EF2C80-2CF32CF9-2CFF",
Cyrillic: "0400-04840487-05271D2B1D782DE0-2DFFA640-A697A69F",
Devanagari: "0900-09500953-09630966-09770979-097FA8E0-A8FB",
Ethiopic: "1200-1248124A-124D1250-12561258125A-125D1260-1288128A-128D1290-12B012B2-12B512B8-12BE12C012C2-12C512C8-12D612D8-13101312-13151318-135A135D-137C1380-13992D80-2D962DA0-2DA62DA8-2DAE2DB0-2DB62DB8-2DBE2DC0-2DC62DC8-2DCE2DD0-2DD62DD8-2DDEAB01-AB06AB09-AB0EAB11-AB16AB20-AB26AB28-AB2E",
Georgian: "10A0-10C510C710CD10D0-10FA10FC-10FF2D00-2D252D272D2D",
Glagolitic: "2C00-2C2E2C30-2C5E",
Greek: "0370-03730375-0377037A-037D038403860388-038A038C038E-03A103A3-03E103F0-03FF1D26-1D2A1D5D-1D611D66-1D6A1DBF1F00-1F151F18-1F1D1F20-1F451F48-1F4D1F50-1F571F591F5B1F5D1F5F-1F7D1F80-1FB41FB6-1FC41FC6-1FD31FD6-1FDB1FDD-1FEF1FF2-1FF41FF6-1FFE2126",
Gujarati: "0A81-0A830A85-0A8D0A8F-0A910A93-0AA80AAA-0AB00AB20AB30AB5-0AB90ABC-0AC50AC7-0AC90ACB-0ACD0AD00AE0-0AE30AE6-0AF1",
Gurmukhi: "0A01-0A030A05-0A0A0A0F0A100A13-0A280A2A-0A300A320A330A350A360A380A390A3C0A3E-0A420A470A480A4B-0A4D0A510A59-0A5C0A5E0A66-0A75",
Han: "2E80-2E992E9B-2EF32F00-2FD5300530073021-30293038-303B3400-4DB54E00-9FCCF900-FA6DFA70-FAD9",
Hangul: "1100-11FF302E302F3131-318E3200-321E3260-327EA960-A97CAC00-D7A3D7B0-D7C6D7CB-D7FBFFA0-FFBEFFC2-FFC7FFCA-FFCFFFD2-FFD7FFDA-FFDC",
Hanunoo: "1720-1734",
Hebrew: "0591-05C705D0-05EA05F0-05F4FB1D-FB36FB38-FB3CFB3EFB40FB41FB43FB44FB46-FB4F",
Hiragana: "3041-3096309D-309F",
Inherited: "0300-036F04850486064B-0655065F0670095109521CD0-1CD21CD4-1CE01CE2-1CE81CED1CF41DC0-1DE61DFC-1DFF200C200D20D0-20F0302A-302D3099309AFE00-FE0FFE20-FE26",
Javanese: "A980-A9CDA9CF-A9D9A9DEA9DF",
Kannada: "0C820C830C85-0C8C0C8E-0C900C92-0CA80CAA-0CB30CB5-0CB90CBC-0CC40CC6-0CC80CCA-0CCD0CD50CD60CDE0CE0-0CE30CE6-0CEF0CF10CF2",
Katakana: "30A1-30FA30FD-30FF31F0-31FF32D0-32FE3300-3357FF66-FF6FFF71-FF9D",
Kayah_Li: "A900-A92F",
Khmer: "1780-17DD17E0-17E917F0-17F919E0-19FF",
Lao: "0E810E820E840E870E880E8A0E8D0E94-0E970E99-0E9F0EA1-0EA30EA50EA70EAA0EAB0EAD-0EB90EBB-0EBD0EC0-0EC40EC60EC8-0ECD0ED0-0ED90EDC-0EDF",
Latin: "0041-005A0061-007A00AA00BA00C0-00D600D8-00F600F8-02B802E0-02E41D00-1D251D2C-1D5C1D62-1D651D6B-1D771D79-1DBE1E00-1EFF2071207F2090-209C212A212B2132214E2160-21882C60-2C7FA722-A787A78B-A78EA790-A793A7A0-A7AAA7F8-A7FFFB00-FB06FF21-FF3AFF41-FF5A",
Lepcha: "1C00-1C371C3B-1C491C4D-1C4F",
Limbu: "1900-191C1920-192B1930-193B19401944-194F",
Lisu: "A4D0-A4FF",
Malayalam: "0D020D030D05-0D0C0D0E-0D100D12-0D3A0D3D-0D440D46-0D480D4A-0D4E0D570D60-0D630D66-0D750D79-0D7F",
Mandaic: "0840-085B085E",
Meetei_Mayek: "AAE0-AAF6ABC0-ABEDABF0-ABF9",
Mongolian: "1800180118041806-180E1810-18191820-18771880-18AA",
Myanmar: "1000-109FAA60-AA7B",
New_Tai_Lue: "1980-19AB19B0-19C919D0-19DA19DE19DF",
Nko: "07C0-07FA",
Ogham: "1680-169C",
Ol_Chiki: "1C50-1C7F",
Oriya: "0B01-0B030B05-0B0C0B0F0B100B13-0B280B2A-0B300B320B330B35-0B390B3C-0B440B470B480B4B-0B4D0B560B570B5C0B5D0B5F-0B630B66-0B77",
Phags_Pa: "A840-A877",
Rejang: "A930-A953A95F",
Runic: "16A0-16EA16EE-16F0",
Samaritan: "0800-082D0830-083E",
Saurashtra: "A880-A8C4A8CE-A8D9",
Sinhala: "0D820D830D85-0D960D9A-0DB10DB3-0DBB0DBD0DC0-0DC60DCA0DCF-0DD40DD60DD8-0DDF0DF2-0DF4",
Sundanese: "1B80-1BBF1CC0-1CC7",
Syloti_Nagri: "A800-A82B",
Syriac: "0700-070D070F-074A074D-074F",
Tagalog: "1700-170C170E-1714",
Tagbanwa: "1760-176C176E-177017721773",
Tai_Le: "1950-196D1970-1974",
Tai_Tham: "1A20-1A5E1A60-1A7C1A7F-1A891A90-1A991AA0-1AAD",
Tai_Viet: "AA80-AAC2AADB-AADF",
Tamil: "0B820B830B85-0B8A0B8E-0B900B92-0B950B990B9A0B9C0B9E0B9F0BA30BA40BA8-0BAA0BAE-0BB90BBE-0BC20BC6-0BC80BCA-0BCD0BD00BD70BE6-0BFA",
Telugu: "0C01-0C030C05-0C0C0C0E-0C100C12-0C280C2A-0C330C35-0C390C3D-0C440C46-0C480C4A-0C4D0C550C560C580C590C60-0C630C66-0C6F0C78-0C7F",
Thaana: "0780-07B1",
Thai: "0E01-0E3A0E40-0E5B",
Tibetan: "0F00-0F470F49-0F6C0F71-0F970F99-0FBC0FBE-0FCC0FCE-0FD40FD90FDA",
Tifinagh: "2D30-2D672D6F2D702D7F",
Vai: "A500-A62B",
Yi: "A000-A48CA490-A4C6"
});
}(XRegExp));
/***** unicode-blocks.js *****/
/*!
* XRegExp Unicode Blocks v1.2.0
* (c) 2010-2012 Steven Levithan <http://xregexp.com/>
* MIT License
* Uses Unicode 6.1 <http://unicode.org/>
*/
/**
* Adds support for all Unicode blocks in the Basic Multilingual Plane (U+0000-U+FFFF). Unicode
* blocks use the prefix "In". E.g., `\p{InBasicLatin}`. Token names are case insensitive, and any
* spaces, hyphens, and underscores are ignored.
* @requires XRegExp, XRegExp Unicode Base
*/
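/*
 * Illustrative usage sketch (not part of the original bundle): block tokens
 * carry the "In" prefix defined below, and name matching ignores case,
 * spaces, hyphens, and underscores:
 *
 *   XRegExp('^\\p{InBasicLatin}+$').test('abc');     // -> true
 *   XRegExp('\\p{InGreekandCoptic}').test('\u03C0'); // -> true
 */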
(function (XRegExp) {
"use strict";
if (!XRegExp.addUnicodePackage) {
throw new ReferenceError("Unicode Base must be loaded before Unicode Blocks");
}
XRegExp.install("extensibility");
XRegExp.addUnicodePackage({
InBasic_Latin: "0000-007F",
InLatin_1_Supplement: "0080-00FF",
InLatin_Extended_A: "0100-017F",
InLatin_Extended_B: "0180-024F",
InIPA_Extensions: "0250-02AF",
InSpacing_Modifier_Letters: "02B0-02FF",
InCombining_Diacritical_Marks: "0300-036F",
InGreek_and_Coptic: "0370-03FF",
InCyrillic: "0400-04FF",
InCyrillic_Supplement: "0500-052F",
InArmenian: "0530-058F",
InHebrew: "0590-05FF",
InArabic: "0600-06FF",
InSyriac: "0700-074F",
InArabic_Supplement: "0750-077F",
InThaana: "0780-07BF",
InNKo: "07C0-07FF",
InSamaritan: "0800-083F",
InMandaic: "0840-085F",
InArabic_Extended_A: "08A0-08FF",
InDevanagari: "0900-097F",
InBengali: "0980-09FF",
InGurmukhi: "0A00-0A7F",
InGujarati: "0A80-0AFF",
InOriya: "0B00-0B7F",
InTamil: "0B80-0BFF",
InTelugu: "0C00-0C7F",
InKannada: "0C80-0CFF",
InMalayalam: "0D00-0D7F",
InSinhala: "0D80-0DFF",
InThai: "0E00-0E7F",
InLao: "0E80-0EFF",
InTibetan: "0F00-0FFF",
InMyanmar: "1000-109F",
InGeorgian: "10A0-10FF",
InHangul_Jamo: "1100-11FF",
InEthiopic: "1200-137F",
InEthiopic_Supplement: "1380-139F",
InCherokee: "13A0-13FF",
InUnified_Canadian_Aboriginal_Syllabics: "1400-167F",
InOgham: "1680-169F",
InRunic: "16A0-16FF",
InTagalog: "1700-171F",
InHanunoo: "1720-173F",
InBuhid: "1740-175F",
InTagbanwa: "1760-177F",
InKhmer: "1780-17FF",
InMongolian: "1800-18AF",
InUnified_Canadian_Aboriginal_Syllabics_Extended: "18B0-18FF",
InLimbu: "1900-194F",
InTai_Le: "1950-197F",
InNew_Tai_Lue: "1980-19DF",
InKhmer_Symbols: "19E0-19FF",
InBuginese: "1A00-1A1F",
InTai_Tham: "1A20-1AAF",
InBalinese: "1B00-1B7F",
InSundanese: "1B80-1BBF",
InBatak: "1BC0-1BFF",
InLepcha: "1C00-1C4F",
InOl_Chiki: "1C50-1C7F",
InSundanese_Supplement: "1CC0-1CCF",
InVedic_Extensions: "1CD0-1CFF",
InPhonetic_Extensions: "1D00-1D7F",
InPhonetic_Extensions_Supplement: "1D80-1DBF",
InCombining_Diacritical_Marks_Supplement: "1DC0-1DFF",
InLatin_Extended_Additional: "1E00-1EFF",
InGreek_Extended: "1F00-1FFF",
InGeneral_Punctuation: "2000-206F",
InSuperscripts_and_Subscripts: "2070-209F",
InCurrency_Symbols: "20A0-20CF",
InCombining_Diacritical_Marks_for_Symbols: "20D0-20FF",
InLetterlike_Symbols: "2100-214F",
InNumber_Forms: "2150-218F",
InArrows: "2190-21FF",
InMathematical_Operators: "2200-22FF",
InMiscellaneous_Technical: "2300-23FF",
InControl_Pictures: "2400-243F",
InOptical_Character_Recognition: "2440-245F",
InEnclosed_Alphanumerics: "2460-24FF",
InBox_Drawing: "2500-257F",
InBlock_Elements: "2580-259F",
InGeometric_Shapes: "25A0-25FF",
InMiscellaneous_Symbols: "2600-26FF",
InDingbats: "2700-27BF",
InMiscellaneous_Mathematical_Symbols_A: "27C0-27EF",
InSupplemental_Arrows_A: "27F0-27FF",
InBraille_Patterns: "2800-28FF",
InSupplemental_Arrows_B: "2900-297F",
InMiscellaneous_Mathematical_Symbols_B: "2980-29FF",
InSupplemental_Mathematical_Operators: "2A00-2AFF",
InMiscellaneous_Symbols_and_Arrows: "2B00-2BFF",
InGlagolitic: "2C00-2C5F",
InLatin_Extended_C: "2C60-2C7F",
InCoptic: "2C80-2CFF",
InGeorgian_Supplement: "2D00-2D2F",
InTifinagh: "2D30-2D7F",
InEthiopic_Extended: "2D80-2DDF",
InCyrillic_Extended_A: "2DE0-2DFF",
InSupplemental_Punctuation: "2E00-2E7F",
InCJK_Radicals_Supplement: "2E80-2EFF",
InKangxi_Radicals: "2F00-2FDF",
InIdeographic_Description_Characters: "2FF0-2FFF",
InCJK_Symbols_and_Punctuation: "3000-303F",
InHiragana: "3040-309F",
InKatakana: "30A0-30FF",
InBopomofo: "3100-312F",
InHangul_Compatibility_Jamo: "3130-318F",
InKanbun: "3190-319F",
InBopomofo_Extended: "31A0-31BF",
InCJK_Strokes: "31C0-31EF",
InKatakana_Phonetic_Extensions: "31F0-31FF",
InEnclosed_CJK_Letters_and_Months: "3200-32FF",
InCJK_Compatibility: "3300-33FF",
InCJK_Unified_Ideographs_Extension_A: "3400-4DBF",
InYijing_Hexagram_Symbols: "4DC0-4DFF",
InCJK_Unified_Ideographs: "4E00-9FFF",
InYi_Syllables: "A000-A48F",
InYi_Radicals: "A490-A4CF",
InLisu: "A4D0-A4FF",
InVai: "A500-A63F",
InCyrillic_Extended_B: "A640-A69F",
InBamum: "A6A0-A6FF",
InModifier_Tone_Letters: "A700-A71F",
InLatin_Extended_D: "A720-A7FF",
InSyloti_Nagri: "A800-A82F",
InCommon_Indic_Number_Forms: "A830-A83F",
InPhags_pa: "A840-A87F",
InSaurashtra: "A880-A8DF",
InDevanagari_Extended: "A8E0-A8FF",
InKayah_Li: "A900-A92F",
InRejang: "A930-A95F",
InHangul_Jamo_Extended_A: "A960-A97F",
InJavanese: "A980-A9DF",
InCham: "AA00-AA5F",
InMyanmar_Extended_A: "AA60-AA7F",
InTai_Viet: "AA80-AADF",
InMeetei_Mayek_Extensions: "AAE0-AAFF",
InEthiopic_Extended_A: "AB00-AB2F",
InMeetei_Mayek: "ABC0-ABFF",
InHangul_Syllables: "AC00-D7AF",
InHangul_Jamo_Extended_B: "D7B0-D7FF",
InHigh_Surrogates: "D800-DB7F",
InHigh_Private_Use_Surrogates: "DB80-DBFF",
InLow_Surrogates: "DC00-DFFF",
InPrivate_Use_Area: "E000-F8FF",
InCJK_Compatibility_Ideographs: "F900-FAFF",
InAlphabetic_Presentation_Forms: "FB00-FB4F",
InArabic_Presentation_Forms_A: "FB50-FDFF",
InVariation_Selectors: "FE00-FE0F",
InVertical_Forms: "FE10-FE1F",
InCombining_Half_Marks: "FE20-FE2F",
InCJK_Compatibility_Forms: "FE30-FE4F",
InSmall_Form_Variants: "FE50-FE6F",
InArabic_Presentation_Forms_B: "FE70-FEFF",
InHalfwidth_and_Fullwidth_Forms: "FF00-FFEF",
InSpecials: "FFF0-FFFF"
});
}(XRegExp));
/***** unicode-properties.js *****/
/*!
* XRegExp Unicode Properties v1.0.0
* (c) 2012 Steven Levithan <http://xregexp.com/>
* MIT License
* Uses Unicode 6.1 <http://unicode.org/>
*/
/**
* Adds Unicode properties necessary to meet Level 1 Unicode support (detailed in UTS#18 RL1.2).
* Includes code points from the Basic Multilingual Plane (U+0000-U+FFFF) only. Token names are
* case insensitive, and any spaces, hyphens, and underscores are ignored.
* @requires XRegExp, XRegExp Unicode Base
*/
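/*
 * Illustrative usage sketch (not part of the original bundle):
 *
 *   XRegExp('^\\p{Alphabetic}+$').test('Caf\u00E9'); // -> true
 *   XRegExp('\\p{White_Space}').test('a b');         // -> true
 */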
(function (XRegExp) {
"use strict";
if (!XRegExp.addUnicodePackage) {
throw new ReferenceError("Unicode Base must be loaded before Unicode Properties");
}
XRegExp.install("extensibility");
XRegExp.addUnicodePackage({
Alphabetic: "0041-005A0061-007A00AA00B500BA00C0-00D600D8-00F600F8-02C102C6-02D102E0-02E402EC02EE03450370-037403760377037A-037D03860388-038A038C038E-03A103A3-03F503F7-0481048A-05270531-055605590561-058705B0-05BD05BF05C105C205C405C505C705D0-05EA05F0-05F20610-061A0620-06570659-065F066E-06D306D5-06DC06E1-06E806ED-06EF06FA-06FC06FF0710-073F074D-07B107CA-07EA07F407F507FA0800-0817081A-082C0840-085808A008A2-08AC08E4-08E908F0-08FE0900-093B093D-094C094E-09500955-09630971-09770979-097F0981-09830985-098C098F09900993-09A809AA-09B009B209B6-09B909BD-09C409C709C809CB09CC09CE09D709DC09DD09DF-09E309F009F10A01-0A030A05-0A0A0A0F0A100A13-0A280A2A-0A300A320A330A350A360A380A390A3E-0A420A470A480A4B0A4C0A510A59-0A5C0A5E0A70-0A750A81-0A830A85-0A8D0A8F-0A910A93-0AA80AAA-0AB00AB20AB30AB5-0AB90ABD-0AC50AC7-0AC90ACB0ACC0AD00AE0-0AE30B01-0B030B05-0B0C0B0F0B100B13-0B280B2A-0B300B320B330B35-0B390B3D-0B440B470B480B4B0B4C0B560B570B5C0B5D0B5F-0B630B710B820B830B85-0B8A0B8E-0B900B92-0B950B990B9A0B9C0B9E0B9F0BA30BA40BA8-0BAA0BAE-0BB90BBE-0BC20BC6-0BC80BCA-0BCC0BD00BD70C01-0C030C05-0C0C0C0E-0C100C12-0C280C2A-0C330C35-0C390C3D-0C440C46-0C480C4A-0C4C0C550C560C580C590C60-0C630C820C830C85-0C8C0C8E-0C900C92-0CA80CAA-0CB30CB5-0CB90CBD-0CC40CC6-0CC80CCA-0CCC0CD50CD60CDE0CE0-0CE30CF10CF20D020D030D05-0D0C0D0E-0D100D12-0D3A0D3D-0D440D46-0D480D4A-0D4C0D4E0D570D60-0D630D7A-0D7F0D820D830D85-0D960D9A-0DB10DB3-0DBB0DBD0DC0-0DC60DCF-0DD40DD60DD8-0DDF0DF20DF30E01-0E3A0E40-0E460E4D0E810E820E840E870E880E8A0E8D0E94-0E970E99-0E9F0EA1-0EA30EA50EA70EAA0EAB0EAD-0EB90EBB-0EBD0EC0-0EC40EC60ECD0EDC-0EDF0F000F40-0F470F49-0F6C0F71-0F810F88-0F970F99-0FBC1000-10361038103B-103F1050-10621065-1068106E-1086108E109C109D10A0-10C510C710CD10D0-10FA10FC-1248124A-124D1250-12561258125A-125D1260-1288128A-128D1290-12B012B2-12B512B8-12BE12C012C2-12C512C8-12D612D8-13101312-13151318-135A135F1380-138F13A0-13F41401-166C166F-167F1681-169A16A0-16EA16EE-16F01700-170C170E-17131720-17331740-17531760-176C176E-1770177217731780-17B317B6-17C817D717DC1820-18771880-18AA18B0-18F51900-191C1920-192B1930-19381950-196D1970-19741980-19AB19B0-19C91A00-1A1B1A20-1A5E1A61-1A741AA71B00-1B331B35-1B431B45-1B4B1B80-1BA91BAC-1BAF1BBA-1BE51BE7-1BF11C00-1C351C4D-1C4F1C5A-1C7D1CE9-1CEC1CEE-1CF31CF51CF61D00-1DBF1E00-1F151F18-1F1D1F20-1F451F48-1F4D1F50-1F571F591F5B1F5D1F5F-1F7D1F80-1FB41FB6-1FBC1FBE1FC2-1FC41FC6-1FCC1FD0-1FD31FD6-1FDB1FE0-1FEC1FF2-1FF41FF6-1FFC2071207F2090-209C21022107210A-211321152119-211D212421262128212A-212D212F-2139213C-213F2145-2149214E2160-218824B6-24E92C00-2C2E2C30-2C5E2C60-2CE42CEB-2CEE2CF22CF32D00-2D252D272D2D2D30-2D672D6F2D80-2D962DA0-2DA62DA8-2DAE2DB0-2DB62DB8-2DBE2DC0-2DC62DC8-2DCE2DD0-2DD62DD8-2DDE2DE0-2DFF2E2F3005-30073021-30293031-30353038-303C3041-3096309D-309F30A1-30FA30FC-30FF3105-312D3131-318E31A0-31BA31F0-31FF3400-4DB54E00-9FCCA000-A48CA4D0-A4FDA500-A60CA610-A61FA62AA62BA640-A66EA674-A67BA67F-A697A69F-A6EFA717-A71FA722-A788A78B-A78EA790-A793A7A0-A7AAA7F8-A801A803-A805A807-A80AA80C-A827A840-A873A880-A8C3A8F2-A8F7A8FBA90A-A92AA930-A952A960-A97CA980-A9B2A9B4-A9BFA9CFAA00-AA36AA40-AA4DAA60-AA76AA7AAA80-AABEAAC0AAC2AADB-AADDAAE0-AAEFAAF2-AAF5AB01-AB06AB09-AB0EAB11-AB16AB20-AB26AB28-AB2EABC0-ABEAAC00-D7A3D7B0-D7C6D7CB-D7FBF900-FA6DFA70-FAD9FB00-FB06FB13-FB17FB1D-FB28FB2A-FB36FB38-FB3CFB3EFB40FB41FB43FB44FB46-FBB1FBD3-FD3DFD50-FD8FFD92-FDC7FDF0-FDFBFE70-FE74FE76-FEFCFF21-FF3AFF41-FF5AFF66-FFBEFFC2-FFC7FFCA-FFCFFFD2-FFD7FFDA-FFDC",
Uppercase: "0041-005A00C0-00D600D8-00DE01000102010401060108010A010C010E01100112011401160118011A011C011E01200122012401260128012A012C012E01300132013401360139013B013D013F0141014301450147014A014C014E01500152015401560158015A015C015E01600162016401660168016A016C016E017001720174017601780179017B017D018101820184018601870189-018B018E-0191019301940196-0198019C019D019F01A001A201A401A601A701A901AC01AE01AF01B1-01B301B501B701B801BC01C401C701CA01CD01CF01D101D301D501D701D901DB01DE01E001E201E401E601E801EA01EC01EE01F101F401F6-01F801FA01FC01FE02000202020402060208020A020C020E02100212021402160218021A021C021E02200222022402260228022A022C022E02300232023A023B023D023E02410243-02460248024A024C024E03700372037603860388-038A038C038E038F0391-03A103A3-03AB03CF03D2-03D403D803DA03DC03DE03E003E203E403E603E803EA03EC03EE03F403F703F903FA03FD-042F04600462046404660468046A046C046E04700472047404760478047A047C047E0480048A048C048E04900492049404960498049A049C049E04A004A204A404A604A804AA04AC04AE04B004B204B404B604B804BA04BC04BE04C004C104C304C504C704C904CB04CD04D004D204D404D604D804DA04DC04DE04E004E204E404E604E804EA04EC04EE04F004F204F404F604F804FA04FC04FE05000502050405060508050A050C050E05100512051405160518051A051C051E05200522052405260531-055610A0-10C510C710CD1E001E021E041E061E081E0A1E0C1E0E1E101E121E141E161E181E1A1E1C1E1E1E201E221E241E261E281E2A1E2C1E2E1E301E321E341E361E381E3A1E3C1E3E1E401E421E441E461E481E4A1E4C1E4E1E501E521E541E561E581E5A1E5C1E5E1E601E621E641E661E681E6A1E6C1E6E1E701E721E741E761E781E7A1E7C1E7E1E801E821E841E861E881E8A1E8C1E8E1E901E921E941E9E1EA01EA21EA41EA61EA81EAA1EAC1EAE1EB01EB21EB41EB61EB81EBA1EBC1EBE1EC01EC21EC41EC61EC81ECA1ECC1ECE1ED01ED21ED41ED61ED81EDA1EDC1EDE1EE01EE21EE41EE61EE81EEA1EEC1EEE1EF01EF21EF41EF61EF81EFA1EFC1EFE1F08-1F0F1F18-1F1D1F28-1F2F1F38-1F3F1F48-1F4D1F591F5B1F5D1F5F1F68-1F6F1FB8-1FBB1FC8-1FCB1FD8-1FDB1FE8-1FEC1FF8-1FFB21022107210B-210D2110-211221152119-211D212421262128212A-212D2130-2133213E213F21452160-216F218324B6-24CF2C00-2C2E2C602C62-2C642C672C692C6B2C6D-2C702C722C752C7E-2C802C822C842C862C882C8A2C8C2C8E2C902C922C942C962C982C9A2C9C2C9E2CA02CA22CA42CA62CA82CAA2CAC2CAE2CB02CB22CB42CB62CB82CBA2CBC2CBE2CC02CC22CC42CC62CC82CCA2CCC2CCE2CD02CD22CD42CD62CD82CDA2CDC2CDE2CE02CE22CEB2CED2CF2A640A642A644A646A648A64AA64CA64EA650A652A654A656A658A65AA65CA65EA660A662A664A666A668A66AA66CA680A682A684A686A688A68AA68CA68EA690A692A694A696A722A724A726A728A72AA72CA72EA732A734A736A738A73AA73CA73EA740A742A744A746A748A74AA74CA74EA750A752A754A756A758A75AA75CA75EA760A762A764A766A768A76AA76CA76EA779A77BA77DA77EA780A782A784A786A78BA78DA790A792A7A0A7A2A7A4A7A6A7A8A7AAFF21-FF3A",
Lowercase: "0061-007A00AA00B500BA00DF-00F600F8-00FF01010103010501070109010B010D010F01110113011501170119011B011D011F01210123012501270129012B012D012F01310133013501370138013A013C013E014001420144014601480149014B014D014F01510153015501570159015B015D015F01610163016501670169016B016D016F0171017301750177017A017C017E-0180018301850188018C018D019201950199-019B019E01A101A301A501A801AA01AB01AD01B001B401B601B901BA01BD-01BF01C601C901CC01CE01D001D201D401D601D801DA01DC01DD01DF01E101E301E501E701E901EB01ED01EF01F001F301F501F901FB01FD01FF02010203020502070209020B020D020F02110213021502170219021B021D021F02210223022502270229022B022D022F02310233-0239023C023F0240024202470249024B024D024F-02930295-02B802C002C102E0-02E40345037103730377037A-037D039003AC-03CE03D003D103D5-03D703D903DB03DD03DF03E103E303E503E703E903EB03ED03EF-03F303F503F803FB03FC0430-045F04610463046504670469046B046D046F04710473047504770479047B047D047F0481048B048D048F04910493049504970499049B049D049F04A104A304A504A704A904AB04AD04AF04B104B304B504B704B904BB04BD04BF04C204C404C604C804CA04CC04CE04CF04D104D304D504D704D904DB04DD04DF04E104E304E504E704E904EB04ED04EF04F104F304F504F704F904FB04FD04FF05010503050505070509050B050D050F05110513051505170519051B051D051F05210523052505270561-05871D00-1DBF1E011E031E051E071E091E0B1E0D1E0F1E111E131E151E171E191E1B1E1D1E1F1E211E231E251E271E291E2B1E2D1E2F1E311E331E351E371E391E3B1E3D1E3F1E411E431E451E471E491E4B1E4D1E4F1E511E531E551E571E591E5B1E5D1E5F1E611E631E651E671E691E6B1E6D1E6F1E711E731E751E771E791E7B1E7D1E7F1E811E831E851E871E891E8B1E8D1E8F1E911E931E95-1E9D1E9F1EA11EA31EA51EA71EA91EAB1EAD1EAF1EB11EB31EB51EB71EB91EBB1EBD1EBF1EC11EC31EC51EC71EC91ECB1ECD1ECF1ED11ED31ED51ED71ED91EDB1EDD1EDF1EE11EE31EE51EE71EE91EEB1EED1EEF1EF11EF31EF51EF71EF91EFB1EFD1EFF-1F071F10-1F151F20-1F271F30-1F371F40-1F451F50-1F571F60-1F671F70-1F7D1F80-1F871F90-1F971FA0-1FA71FB0-1FB41FB61FB71FBE1FC2-1FC41FC61FC71FD0-1FD31FD61FD71FE0-1FE71FF2-1FF41FF61FF72071207F2090-209C210A210E210F2113212F21342139213C213D2146-2149214E2170-217F218424D0-24E92C30-2C5E2C612C652C662C682C6A2C6C2C712C732C742C76-2C7D2C812C832C852C872C892C8B2C8D2C8F2C912C932C952C972C992C9B2C9D2C9F2CA12CA32CA52CA72CA92CAB2CAD2CAF2CB12CB32CB52CB72CB92CBB2CBD2CBF2CC12CC32CC52CC72CC92CCB2CCD2CCF2CD12CD32CD52CD72CD92CDB2CDD2CDF2CE12CE32CE42CEC2CEE2CF32D00-2D252D272D2DA641A643A645A647A649A64BA64DA64FA651A653A655A657A659A65BA65DA65FA661A663A665A667A669A66BA66DA681A683A685A687A689A68BA68DA68FA691A693A695A697A723A725A727A729A72BA72DA72F-A731A733A735A737A739A73BA73DA73FA741A743A745A747A749A74BA74DA74FA751A753A755A757A759A75BA75DA75FA761A763A765A767A769A76BA76DA76F-A778A77AA77CA77FA781A783A785A787A78CA78EA791A793A7A1A7A3A7A5A7A7A7A9A7F8-A7FAFB00-FB06FB13-FB17FF41-FF5A",
White_Space: "0009-000D0020008500A01680180E2000-200A20282029202F205F3000",
Noncharacter_Code_Point: "FDD0-FDEFFFFEFFFF",
Default_Ignorable_Code_Point: "00AD034F115F116017B417B5180B-180D200B-200F202A-202E2060-206F3164FE00-FE0FFEFFFFA0FFF0-FFF8",
// \p{Any} matches a code unit. To match any code point via surrogate pairs, use (?:[\0-\uD7FF\uDC00-\uFFFF]|[\uD800-\uDBFF][\uDC00-\uDFFF]|[\uD800-\uDBFF])
Any: "0000-FFFF", // \p{^Any} compiles to [^\u0000-\uFFFF]; [\p{^Any}] to []
Ascii: "0000-007F",
// \p{Assigned} is equivalent to \p{^Cn}
//Assigned: XRegExp("[\\p{^Cn}]").source.replace(/[[\]]|\\u/g, "") // Negation inside a character class triggers inversion
Assigned: "0000-0377037A-037E0384-038A038C038E-03A103A3-05270531-05560559-055F0561-05870589058A058F0591-05C705D0-05EA05F0-05F40600-06040606-061B061E-070D070F-074A074D-07B107C0-07FA0800-082D0830-083E0840-085B085E08A008A2-08AC08E4-08FE0900-09770979-097F0981-09830985-098C098F09900993-09A809AA-09B009B209B6-09B909BC-09C409C709C809CB-09CE09D709DC09DD09DF-09E309E6-09FB0A01-0A030A05-0A0A0A0F0A100A13-0A280A2A-0A300A320A330A350A360A380A390A3C0A3E-0A420A470A480A4B-0A4D0A510A59-0A5C0A5E0A66-0A750A81-0A830A85-0A8D0A8F-0A910A93-0AA80AAA-0AB00AB20AB30AB5-0AB90ABC-0AC50AC7-0AC90ACB-0ACD0AD00AE0-0AE30AE6-0AF10B01-0B030B05-0B0C0B0F0B100B13-0B280B2A-0B300B320B330B35-0B390B3C-0B440B470B480B4B-0B4D0B560B570B5C0B5D0B5F-0B630B66-0B770B820B830B85-0B8A0B8E-0B900B92-0B950B990B9A0B9C0B9E0B9F0BA30BA40BA8-0BAA0BAE-0BB90BBE-0BC20BC6-0BC80BCA-0BCD0BD00BD70BE6-0BFA0C01-0C030C05-0C0C0C0E-0C100C12-0C280C2A-0C330C35-0C390C3D-0C440C46-0C480C4A-0C4D0C550C560C580C590C60-0C630C66-0C6F0C78-0C7F0C820C830C85-0C8C0C8E-0C900C92-0CA80CAA-0CB30CB5-0CB90CBC-0CC40CC6-0CC80CCA-0CCD0CD50CD60CDE0CE0-0CE30CE6-0CEF0CF10CF20D020D030D05-0D0C0D0E-0D100D12-0D3A0D3D-0D440D46-0D480D4A-0D4E0D570D60-0D630D66-0D750D79-0D7F0D820D830D85-0D960D9A-0DB10DB3-0DBB0DBD0DC0-0DC60DCA0DCF-0DD40DD60DD8-0DDF0DF2-0DF40E01-0E3A0E3F-0E5B0E810E820E840E870E880E8A0E8D0E94-0E970E99-0E9F0EA1-0EA30EA50EA70EAA0EAB0EAD-0EB90EBB-0EBD0EC0-0EC40EC60EC8-0ECD0ED0-0ED90EDC-0EDF0F00-0F470F49-0F6C0F71-0F970F99-0FBC0FBE-0FCC0FCE-0FDA1000-10C510C710CD10D0-1248124A-124D1250-12561258125A-125D1260-1288128A-128D1290-12B012B2-12B512B8-12BE12C012C2-12C512C8-12D612D8-13101312-13151318-135A135D-137C1380-139913A0-13F41400-169C16A0-16F01700-170C170E-17141720-17361740-17531760-176C176E-1770177217731780-17DD17E0-17E917F0-17F91800-180E1810-18191820-18771880-18AA18B0-18F51900-191C1920-192B1930-193B19401944-196D1970-19741980-19AB19B0-19C919D0-19DA19DE-1A1B1A1E-1A5E1A60-1A7C1A7F-1A891A90-1A991AA0-1AAD1B00-1B4B1B50-1B7C1B80-1BF31BFC-1C371C3B-1C491C4D-1C7F1CC0-1CC71CD0-1CF61D00-1DE61DFC-1F151F18-1F1D1F20-1F451F48-1F4D1F50-1F571F591F5B1F5D1F5F-1F7D1F80-1FB41FB6-1FC41FC6-1FD31FD6-1FDB1FDD-1FEF1FF2-1FF41FF6-1FFE2000-2064206A-20712074-208E2090-209C20A0-20B920D0-20F02100-21892190-23F32400-24262440-244A2460-26FF2701-2B4C2B50-2B592C00-2C2E2C30-2C5E2C60-2CF32CF9-2D252D272D2D2D30-2D672D6F2D702D7F-2D962DA0-2DA62DA8-2DAE2DB0-2DB62DB8-2DBE2DC0-2DC62DC8-2DCE2DD0-2DD62DD8-2DDE2DE0-2E3B2E80-2E992E9B-2EF32F00-2FD52FF0-2FFB3000-303F3041-30963099-30FF3105-312D3131-318E3190-31BA31C0-31E331F0-321E3220-32FE3300-4DB54DC0-9FCCA000-A48CA490-A4C6A4D0-A62BA640-A697A69F-A6F7A700-A78EA790-A793A7A0-A7AAA7F8-A82BA830-A839A840-A877A880-A8C4A8CE-A8D9A8E0-A8FBA900-A953A95F-A97CA980-A9CDA9CF-A9D9A9DEA9DFAA00-AA36AA40-AA4DAA50-AA59AA5C-AA7BAA80-AAC2AADB-AAF6AB01-AB06AB09-AB0EAB11-AB16AB20-AB26AB28-AB2EABC0-ABEDABF0-ABF9AC00-D7A3D7B0-D7C6D7CB-D7FBD800-FA6DFA70-FAD9FB00-FB06FB13-FB17FB1D-FB36FB38-FB3CFB3EFB40FB41FB43FB44FB46-FBC1FBD3-FD3FFD50-FD8FFD92-FDC7FDF0-FDFDFE00-FE19FE20-FE26FE30-FE52FE54-FE66FE68-FE6BFE70-FE74FE76-FEFCFEFFFF01-FFBEFFC2-FFC7FFCA-FFCFFFD2-FFD7FFDA-FFDCFFE0-FFE6FFE8-FFEEFFF9-FFFD"
});
}(XRegExp));
/***** matchrecursive.js *****/
/*!
* XRegExp.matchRecursive v0.2.0
* (c) 2009-2012 Steven Levithan <http://xregexp.com/>
* MIT License
*/
(function (XRegExp) {
"use strict";
/**
* Returns a match detail object composed of the provided values.
* @private
*/
    function row(name, value, start, end) {
        return {name: name, value: value, start: start, end: end};
}
/**
* Returns an array of match strings between outermost left and right delimiters, or an array of
* objects with detailed match parts and position data. An error is thrown if delimiters are
* unbalanced within the data.
* @memberOf XRegExp
* @param {String} str String to search.
* @param {String} left Left delimiter as an XRegExp pattern.
* @param {String} right Right delimiter as an XRegExp pattern.
* @param {String} [flags] Flags for the left and right delimiters. Use any of: `gimnsxy`.
* @param {Object} [options] Lets you specify `valueNames` and `escapeChar` options.
* @returns {Array} Array of matches, or an empty array.
* @example
*
* // Basic usage
* var str = '(t((e))s)t()(ing)';
* XRegExp.matchRecursive(str, '\\(', '\\)', 'g');
* // -> ['t((e))s', '', 'ing']
*
* // Extended information mode with valueNames
* str = 'Here is <div> <div>an</div></div> example';
* XRegExp.matchRecursive(str, '<div\\s*>', '</div>', 'gi', {
* valueNames: ['between', 'left', 'match', 'right']
* });
* // -> [
* // {name: 'between', value: 'Here is ', start: 0, end: 8},
* // {name: 'left', value: '<div>', start: 8, end: 13},
* // {name: 'match', value: ' <div>an</div>', start: 13, end: 27},
* // {name: 'right', value: '</div>', start: 27, end: 33},
* // {name: 'between', value: ' example', start: 33, end: 41}
* // ]
*
* // Omitting unneeded parts with null valueNames, and using escapeChar
* str = '...{1}\\{{function(x,y){return y+x;}}';
* XRegExp.matchRecursive(str, '{', '}', 'g', {
* valueNames: ['literal', null, 'value', null],
* escapeChar: '\\'
* });
* // -> [
* // {name: 'literal', value: '...', start: 0, end: 3},
* // {name: 'value', value: '1', start: 4, end: 5},
* // {name: 'literal', value: '\\{', start: 6, end: 8},
* // {name: 'value', value: 'function(x,y){return y+x;}', start: 9, end: 35}
* // ]
*
* // Sticky mode via flag y
* str = '<1><<<2>>><3>4<5>';
* XRegExp.matchRecursive(str, '<', '>', 'gy');
* // -> ['1', '<<2>>', '3']
*/
XRegExp.matchRecursive = function (str, left, right, flags, options) {
flags = flags || "";
options = options || {};
var global = flags.indexOf("g") > -1,
sticky = flags.indexOf("y") > -1,
basicFlags = flags.replace(/y/g, ""), // Flag y controlled internally
escapeChar = options.escapeChar,
vN = options.valueNames,
output = [],
openTokens = 0,
delimStart = 0,
delimEnd = 0,
lastOuterEnd = 0,
outerStart,
innerStart,
leftMatch,
rightMatch,
esc;
left = XRegExp(left, basicFlags);
right = XRegExp(right, basicFlags);
if (escapeChar) {
if (escapeChar.length > 1) {
throw new SyntaxError("can't use more than one escape character");
}
escapeChar = XRegExp.escape(escapeChar);
// Using XRegExp.union safely rewrites backreferences in `left` and `right`
esc = new RegExp(
"(?:" + escapeChar + "[\\S\\s]|(?:(?!" + XRegExp.union([left, right]).source + ")[^" + escapeChar + "])+)+",
flags.replace(/[^im]+/g, "") // Flags gy not needed here; flags nsx handled by XRegExp
);
}
while (true) {
// If using an escape character, advance to the delimiter's next starting position,
// skipping any escaped characters in between
if (escapeChar) {
delimEnd += (XRegExp.exec(str, esc, delimEnd, "sticky") || [""])[0].length;
}
leftMatch = XRegExp.exec(str, left, delimEnd);
rightMatch = XRegExp.exec(str, right, delimEnd);
// Keep the leftmost match only
if (leftMatch && rightMatch) {
if (leftMatch.index <= rightMatch.index) {
rightMatch = null;
} else {
leftMatch = null;
}
}
/* Paths (LM:leftMatch, RM:rightMatch, OT:openTokens):
LM | RM | OT | Result
1 | 0 | 1 | loop
1 | 0 | 0 | loop
0 | 1 | 1 | loop
0 | 1 | 0 | throw
0 | 0 | 1 | throw
0 | 0 | 0 | break
* Doesn't include the sticky mode special case
* Loop ends after the first completed match if `!global` */
if (leftMatch || rightMatch) {
delimStart = (leftMatch || rightMatch).index;
delimEnd = delimStart + (leftMatch || rightMatch)[0].length;
} else if (!openTokens) {
break;
}
if (sticky && !openTokens && delimStart > lastOuterEnd) {
break;
}
if (leftMatch) {
if (!openTokens) {
outerStart = delimStart;
innerStart = delimEnd;
}
++openTokens;
} else if (rightMatch && openTokens) {
if (!--openTokens) {
if (vN) {
if (vN[0] && outerStart > lastOuterEnd) {
output.push(row(vN[0], str.slice(lastOuterEnd, outerStart), lastOuterEnd, outerStart));
}
if (vN[1]) {
output.push(row(vN[1], str.slice(outerStart, innerStart), outerStart, innerStart));
}
if (vN[2]) {
output.push(row(vN[2], str.slice(innerStart, delimStart), innerStart, delimStart));
}
if (vN[3]) {
output.push(row(vN[3], str.slice(delimStart, delimEnd), delimStart, delimEnd));
}
} else {
output.push(str.slice(innerStart, delimStart));
}
lastOuterEnd = delimEnd;
if (!global) {
break;
}
}
} else {
throw new Error("string contains unbalanced delimiters");
}
// If the delimiter matched an empty string, avoid an infinite loop
if (delimStart === delimEnd) {
++delimEnd;
}
}
if (global && !sticky && vN && vN[0] && str.length > lastOuterEnd) {
output.push(row(vN[0], str.slice(lastOuterEnd), lastOuterEnd, str.length));
}
return output;
};
}(XRegExp));
/***** build.js *****/
/*!
* XRegExp.build v0.1.0
* (c) 2012 Steven Levithan <http://xregexp.com/>
* MIT License
* Inspired by RegExp.create by Lea Verou <http://lea.verou.me/>
*/
(function (XRegExp) {
"use strict";
var subparts = /(\()(?!\?)|\\([1-9]\d*)|\\[\s\S]|\[(?:[^\\\]]|\\[\s\S])*]/g,
parts = XRegExp.union([/\({{([\w$]+)}}\)|{{([\w$]+)}}/, subparts], "g");
/**
* Strips a leading `^` and trailing unescaped `$`, if both are present.
* @private
* @param {String} pattern Pattern to process.
* @returns {String} Pattern with edge anchors removed.
*/
function deanchor(pattern) {
var startAnchor = /^(?:\(\?:\))?\^/, // Leading `^` or `(?:)^` (handles /x cruft)
endAnchor = /\$(?:\(\?:\))?$/; // Trailing `$` or `$(?:)` (handles /x cruft)
if (endAnchor.test(pattern.replace(/\\[\s\S]/g, ""))) { // Ensure trailing `$` isn't escaped
return pattern.replace(startAnchor, "").replace(endAnchor, "");
}
return pattern;
}
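    /*
     * Illustration (not part of the original bundle): anchors are stripped
     * only when both are present, so independently useful anchored regexes
     * can be embedded without breaking partially anchored ones:
     *
     *   deanchor('^[0-5][0-9]$'); // -> '[0-5][0-9]'
     *   deanchor('^abc');         // -> '^abc' (kept; no trailing $)
     */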
/**
* Converts the provided value to an XRegExp.
* @private
* @param {String|RegExp} value Value to convert.
* @returns {RegExp} XRegExp object with XRegExp syntax applied.
*/
function asXRegExp(value) {
return XRegExp.isRegExp(value) ?
(value.xregexp && !value.xregexp.isNative ? value : XRegExp(value.source)) :
XRegExp(value);
}
/**
* Builds regexes using named subpatterns, for readability and pattern reuse. Backreferences in the
* outer pattern and provided subpatterns are automatically renumbered to work correctly. Native
* flags used by provided subpatterns are ignored in favor of the `flags` argument.
* @memberOf XRegExp
* @param {String} pattern XRegExp pattern using `{{name}}` for embedded subpatterns. Allows
* `({{name}})` as shorthand for `(?<name>{{name}})`. Patterns cannot be embedded within
* character classes.
* @param {Object} subs Lookup object for named subpatterns. Values can be strings or regexes. A
* leading `^` and trailing unescaped `$` are stripped from subpatterns, if both are present.
* @param {String} [flags] Any combination of XRegExp flags.
* @returns {RegExp} Regex with interpolated subpatterns.
* @example
*
* var time = XRegExp.build('(?x)^ {{hours}} ({{minutes}}) $', {
* hours: XRegExp.build('{{h12}} : | {{h24}}', {
* h12: /1[0-2]|0?[1-9]/,
* h24: /2[0-3]|[01][0-9]/
* }, 'x'),
* minutes: /^[0-5][0-9]$/
* });
* time.test('10:59'); // -> true
* XRegExp.exec('10:59', time).minutes; // -> '59'
*/
XRegExp.build = function (pattern, subs, flags) {
var inlineFlags = /^\(\?([\w$]+)\)/.exec(pattern),
data = {},
numCaps = 0, // Caps is short for captures
numPriorCaps,
numOuterCaps = 0,
outerCapsMap = [0],
outerCapNames,
sub,
p;
// Add flags within a leading mode modifier to the overall pattern's flags
if (inlineFlags) {
flags = flags || "";
inlineFlags[1].replace(/./g, function (flag) {
flags += (flags.indexOf(flag) > -1 ? "" : flag); // Don't add duplicates
});
}
for (p in subs) {
if (subs.hasOwnProperty(p)) {
            // Passing to XRegExp enables extended syntax for subpatterns provided as strings
// and ensures independent validity, lest an unescaped `(`, `)`, `[`, or trailing
// `\` breaks the `(?:)` wrapper. For subpatterns provided as regexes, it dies on
// octals and adds the `xregexp` property, for simplicity
sub = asXRegExp(subs[p]);
// Deanchoring allows embedding independently useful anchored regexes. If you
// really need to keep your anchors, double them (i.e., `^^...$$`)
data[p] = {pattern: deanchor(sub.source), names: sub.xregexp.captureNames || []};
}
}
// Passing to XRegExp dies on octals and ensures the outer pattern is independently valid;
// helps keep this simple. Named captures will be put back
pattern = asXRegExp(pattern);
outerCapNames = pattern.xregexp.captureNames || [];
pattern = pattern.source.replace(parts, function ($0, $1, $2, $3, $4) {
var subName = $1 || $2, capName, intro;
if (subName) { // Named subpattern
if (!data.hasOwnProperty(subName)) {
throw new ReferenceError("undefined property " + $0);
}
if ($1) { // Named subpattern was wrapped in a capturing group
capName = outerCapNames[numOuterCaps];
outerCapsMap[++numOuterCaps] = ++numCaps;
// If it's a named group, preserve the name. Otherwise, use the subpattern name
// as the capture name
intro = "(?<" + (capName || subName) + ">";
} else {
intro = "(?:";
}
numPriorCaps = numCaps;
return intro + data[subName].pattern.replace(subparts, function (match, paren, backref) {
if (paren) { // Capturing group
capName = data[subName].names[numCaps - numPriorCaps];
++numCaps;
if (capName) { // If the current capture has a name, preserve the name
return "(?<" + capName + ">";
}
} else if (backref) { // Backreference
return "\\" + (+backref + numPriorCaps); // Rewrite the backreference
}
return match;
}) + ")";
}
if ($3) { // Capturing group
capName = outerCapNames[numOuterCaps];
outerCapsMap[++numOuterCaps] = ++numCaps;
if (capName) { // If the current capture has a name, preserve the name
return "(?<" + capName + ">";
}
} else if ($4) { // Backreference
return "\\" + outerCapsMap[+$4]; // Rewrite the backreference
}
return $0;
});
return XRegExp(pattern, flags);
};
}(XRegExp));
/***** prototypes.js *****/
/*!
* XRegExp Prototype Methods v1.0.0
* (c) 2012 Steven Levithan <http://xregexp.com/>
* MIT License
*/
/**
* Adds a collection of methods to `XRegExp.prototype`. RegExp objects copied by XRegExp are also
* augmented with any `XRegExp.prototype` methods. Hence, the following work equivalently:
*
* XRegExp('[a-z]', 'ig').xexec('abc');
* XRegExp(/[a-z]/ig).xexec('abc');
* XRegExp.globalize(/[a-z]/i).xexec('abc');
*/
(function (XRegExp) {
"use strict";
/**
* Copy properties of `b` to `a`.
* @private
* @param {Object} a Object that will receive new properties.
* @param {Object} b Object whose properties will be copied.
*/
function extend(a, b) {
for (var p in b) {
if (b.hasOwnProperty(p)) {
a[p] = b[p];
}
}
//return a;
}
extend(XRegExp.prototype, {
/**
* Implicitly calls the regex's `test` method with the first value in the provided arguments array.
* @memberOf XRegExp.prototype
* @param {*} context Ignored. Accepted only for congruity with `Function.prototype.apply`.
* @param {Array} args Array with the string to search as its first value.
* @returns {Boolean} Whether the regex matched the provided value.
* @example
*
* XRegExp('[a-z]').apply(null, ['abc']); // -> true
*/
apply: function (context, args) {
return this.test(args[0]);
},
/**
* Implicitly calls the regex's `test` method with the provided string.
* @memberOf XRegExp.prototype
* @param {*} context Ignored. Accepted only for congruity with `Function.prototype.call`.
* @param {String} str String to search.
* @returns {Boolean} Whether the regex matched the provided value.
* @example
*
* XRegExp('[a-z]').call(null, 'abc'); // -> true
*/
call: function (context, str) {
return this.test(str);
},
/**
* Implicitly calls {@link #XRegExp.forEach}.
* @memberOf XRegExp.prototype
* @example
*
* XRegExp('\\d').forEach('1a2345', function (match, i) {
* if (i % 2) this.push(+match[0]);
* }, []);
* // -> [2, 4]
*/
forEach: function (str, callback, context) {
return XRegExp.forEach(str, this, callback, context);
},
/**
* Implicitly calls {@link #XRegExp.globalize}.
* @memberOf XRegExp.prototype
* @example
*
* var globalCopy = XRegExp('regex').globalize();
* globalCopy.global; // -> true
*/
globalize: function () {
return XRegExp.globalize(this);
},
/**
* Implicitly calls {@link #XRegExp.exec}.
* @memberOf XRegExp.prototype
* @example
*
* var match = XRegExp('U\\+(?<hex>[0-9A-F]{4})').xexec('U+2620');
* match.hex; // -> '2620'
*/
xexec: function (str, pos, sticky) {
return XRegExp.exec(str, this, pos, sticky);
},
/**
* Implicitly calls {@link #XRegExp.test}.
* @memberOf XRegExp.prototype
* @example
*
* XRegExp('c').xtest('abc'); // -> true
*/
xtest: function (str, pos, sticky) {
return XRegExp.test(str, this, pos, sticky);
}
});
}(XRegExp)); | PypiClean |
/Hotel_Lite-0.6.tar.gz/Hotel_Lite-0.6/Hotel_Lite/Eventos.py | import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk
from Hotel_Lite import FuncionesCompresion, FuncionesServicios, FuncionesHab, FuncionesCli, FuncionesImportExport, \
Conexion, FuncionesIm, FuncionesRes
from Hotel_Lite.FuncionesIm import *
# Explicit stdlib imports for names used below (os.system, datetime.strptime);
# the star import above may already re-export them, but being explicit keeps
# this module self-contained
import os
from datetime import datetime
class Eventos:
    '''Close the main window.
    '''
def salir(self):
Gtk.main_quit()
    # General events
'''
    Closing of the main window
'''
def on_venPrincipal_destroy(self, widget):
self.salir()
'''
    Add a client
'''
def on_btn_alta_clicked(self, widget):
Conexion.abrirDB()
FuncionesCli.altacli()
Conexion.cerrarbbdd()
'''
    Delete a client
'''
def on_btn_baja_clicked(self, widget):
Conexion.abrirDB()
FuncionesCli.bajacli()
Conexion.cerrarbbdd()
'''
    Client selection in the treeview
'''
def on_treeclientes_cursor_changed(self, widget):
Conexion.abrirDB()
FuncionesCli.mostrarcli()
Conexion.cerrarbbdd()
'''
    Modify a client
'''
def on_btn_modif_cli_clicked(self, widget):
Conexion.abrirDB()
FuncionesCli.modificarcliente()
Conexion.cerrarbbdd()
'''
    Client date picker
'''
def on_btn_fecha_clicked(self, widget):
try:
Variables.ventcalendar.connect('delete-event', lambda w, e: w.hide() or True)
Variables.ventcalendar.show()
except:
FuncionesGenericas.showError("Error al abrir calendario")
def on_calendar_day_selected_double_click(self, widget):
try:
agno, mes, dia = Variables.calendar.get_date()
fecha = '%02d/' % dia + "%02d/" % (mes + 1) + "%s" % agno
Variables.filacli[5].set_text(fecha)
Variables.ventcalendar.hide()
except:
FuncionesGenericas.showError("Error en seleccionar fecha")
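    # Illustration (not in the original file): Gtk.Calendar.get_date() returns
    # a 0-based month, hence the (mes + 1) above. For example, March 5th 2024
    # arrives as (2024, 2, 5) and formats as:
    #   '%02d/' % 5 + "%02d/" % (2 + 1) + "%s" % 2024  # -> '05/03/2024'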
'''
    Add a room
'''
def on_btn_AltaHabitacion_clicked(self, widget):
try:
tipohab = ""
if Variables.filahabitacion[0].get_active():
tipohab = "Simple"
elif Variables.filahabitacion[1].get_active():
tipohab = "Double"
elif Variables.filahabitacion[2].get_active():
tipohab = "Family"
if Variables.habocupada.get_active():
ocu = 'SI'
else:
ocu = 'NO'
            nab_txt = Variables.filahabitacion[3].get_text()
            ncamas_txt = Variables.filahabitacion[4].get_text()
            npers_txt = Variables.filahabitacion[5].get_text()
            pnoche_txt = Variables.filahabitacion[6].get_text()
            # The old check compared ints against "" after conversion, which
            # was always True; validate the raw text before converting instead
            if nab_txt != "" and ncamas_txt != "" and npers_txt != "" and pnoche_txt != "":
                nab = int(nab_txt)
                ncamas = int(ncamas_txt)
                npers = int(npers_txt)
                pnoche = float(pnoche_txt)
                registro = (nab, tipohab, ncamas, npers, pnoche, ocu)
Conexion.abrirDB()
FuncionesHab.listarhab(Variables.listlhabitaciones)
FuncionesHab.insertarhab(registro)
Variables.listlhabitaciones.append(registro)
Variables.treehabitaciones.show()
FuncionesHab.limpiarEnt(Variables.filahabitacion)
Conexion.cerrarbbdd()
except:
FuncionesGenericas.showError("Error en alta")
def on_btn_BajaHabitacion_clicked(self, widget):
try:
nab = Variables.filahabitacion[3].get_text()
if nab != "":
FuncionesHab.bajanab(nab)
except:
FuncionesGenericas.showError("Error en baja")
def on_treeHabitaciones_cursor_changed(self, widget):
FuncionesHab.cargarenthab()
def on_btn_modif_hab_clicked(self, widget):
try:
Conexion.abrirDB()
            # Read the room number text first so missing input can be
            # reported; the old `nab is not None` check was always True,
            # since int() either succeeds or raises into the except below
            nab_txt = Variables.filahabitacion[3].get_text()
            tipohab = ""
            if Variables.filahabitacion[0].get_active():
                tipohab = "Simple"
            elif Variables.filahabitacion[1].get_active():
                tipohab = "Double"
            elif Variables.filahabitacion[2].get_active():
                tipohab = "Family"
            if nab_txt != "":
                nab = int(nab_txt)
                ncamas = int(Variables.filahabitacion[4].get_text())
                npers = int(Variables.filahabitacion[5].get_text())
                pnoche = float(Variables.filahabitacion[6].get_text())
                registro = (nab, tipohab, ncamas, npers, pnoche)
                FuncionesHab.modifhab(registro)
                FuncionesHab.listarhab(Variables.listlhabitaciones)
                FuncionesHab.limpiarEnt(Variables.filahabitacion)
            else:
                FuncionesGenericas.showError("Faltan datos")
except Exception as e:
FuncionesGenericas.showError("Error en modificacion")
    # Toolbar button handlers
def on_btn_salir_tool_clicked(self, widget):
self.salir()
def on_btn_cli_tool_clicked(self, widget):
try:
panelactual = Variables.panel.get_current_page()
if panelactual != 0:
Variables.panel.set_current_page(0)
except:
FuncionesGenericas.showError("Error en btn_cli")
def on_btn_reservas_tool_clicked(self, widget):
try:
panelactual = Variables.panel.get_current_page()
if panelactual != 1:
Variables.panel.set_current_page(1)
except:
FuncionesGenericas.showError("Error en btn_cli")
def on_btn_camas_tool_clicked(self, widget):
try:
panelactual = Variables.panel.get_current_page()
if panelactual != 2:
Variables.panel.set_current_page(2)
except:
FuncionesGenericas.showError("Error en btn_cli")
def on_btn_service_tool_clicked(self, widget):
try:
panelactual = Variables.panel.get_current_page()
if panelactual != 3:
Variables.panel.set_current_page(3)
except:
FuncionesGenericas.showError("Error en btn_servicio")
def on_btn_cal_tool_clicked(self, widget):
os.system("gnome-calculator")
def on_btn_limpiar_tool_clicked(self, widget):
FuncionesHab.limpiarEnt(Variables.filahabitacion)
FuncionesCli.limpiarEnt()
FuncionesRes.limpiarEnt()
Variables.filaservicios[0].set_text("")
FuncionesServicios.limpiarent()
    # Menu bar events
def on_menu_bar_salir_activate(self, widget):
self.salir()
def on_btn_menu_acerca_activate(self, widget):
Variables.ventacerca.connect('delete-event', lambda w, e: w.hide() or True)
Variables.ventacerca.show()
def on_bk_activate(self, widget):
Variables.ventbackup.connect('delete-event', lambda w, e: w.hide() or True)
Variables.ventbackup.show()
def on_bk_salir_clicked(self, widget):
Variables.ventbackup.connect('delete-event', lambda w, e: w.hide() or True)
Variables.ventbackup.hide()
def on_btnSalirAcerca_clicked(self, widget):
Variables.ventacerca.connect('delete-event', lambda w, e: w.hide() or True)
Variables.ventacerca.hide()
def on_btn_do_backup_clicked(self, widget):
FuncionesCompresion.comprimir(Variables.ventbackup.get_filename())
Variables.ventbackup.connect('delete-event', lambda w, e: w.hide() or True)
Variables.ventbackup.hide()
def on_mainnote_switch_page(self, widget, DATA=None, PAGE=None):
Conexion.abrirDB()
FuncionesRes.cargarHabsPik()
Conexion.cerrarbbdd()
def on_entr_dnires_changed(self, widget):
FuncionesRes.setapeldni(Variables.filareservas[0].get_text())
def on_btn_fechckin_clicked(self, widget):
Variables.vent_calendarcheckin.connect('delete-event', lambda w, e: w.hide() or True)
Variables.vent_calendarcheckin.show()
def on_btn_fechcheckout_clicked(self, widget):
Variables.vent_calendarcheckout.connect('delete-event', lambda w, e: w.hide() or True)
Variables.vent_calendarcheckout.show()
def on_calendar_checkin_day_selected_double_click(self, widget):
agno, mes, dia = Variables.calcheckin.get_date()
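        # Gtk.Calendar months are zero-based, hence the "mes + 1" below.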
fecha = '%02d/' % dia + "%02d/" % (mes + 1) + "%s" % agno
Variables.filareservas[3].set_text(fecha)
Variables.vent_calendarcheckin.hide()
if Variables.filareservas[4].get_text() != "":
d_fchkout = datetime.strptime(Variables.filareservas[4].get_text(), "%d/%m/%Y")
d = d_fchkout - datetime(agno, mes + 1, dia)
dias = d.days
if dias > 0:
Variables.filareservas[5].set_text(str(dias))
def on_btn_imprimir_tool_clicked(self, widget):
os.system("system-config-printer")
def on_calendar_checkout_day_selected_double_click(self, widget):
agno, mes, dia = Variables.calcheckout.get_date()
fecha = '%02d/' % dia + "%02d/" % (mes + 1) + "%s" % agno
Variables.filareservas[4].set_text(fecha)
Variables.vent_calendarcheckout.hide()
if Variables.filareservas[3].get_text() != "":
d_fchkin = datetime.strptime(Variables.filareservas[3].get_text(), "%d/%m/%Y")
d = datetime(agno, mes + 1, dia) - d_fchkin
dias = d.days
if dias > 0:
Variables.filareservas[5].set_text(str(dias))
def on_btn_AltaReserva_clicked(self, widget):
Conexion.abrirDB()
FuncionesRes.altareserva()
Conexion.cerrarbbdd()
def on_btn_BajaReserva_clicked(self, widget):
Conexion.abrirDB()
FuncionesRes.bajareserva()
Conexion.cerrarbbdd()
def on_treeReservas_cursor_changed(self, widget):
Conexion.abrirDB()
FuncionesRes.cargarres()
Conexion.cerrarbbdd()
def on_btn_modif_res_clicked(self, widget):
Conexion.abrirDB()
FuncionesRes.modifres(Variables.filareservas)
Conexion.cerrarbbdd()
    # Error window
def on_btn_error_aceptar_clicked(self, widget):
Variables.ventError.hide()
    # Billing
def on_btn_imprimir_fact_clicked(self, widget):
factura()
# Checkout
def on_acep_del_res_clicked(self, widget):
dni = Variables.filareservas[0].get_text()
index = Variables.filareservas[2].get_active()
model = Variables.filareservas[2].get_model()
checkin = Variables.filareservas[3].get_text()
it = model[index]
Conexion.abrirDB()
FuncionesRes.bajares(dni, it[0], checkin, True)
Conexion.cerrarbbdd()
Variables.diagdialog.connect('delete-event', lambda w, e: w.hide() or True)
Variables.diagdialog.hide()
def on_del_res_cancel_clicked(self, widget):
Variables.diagdialog.connect('delete-event', lambda w, e: w.hide() or True)
Variables.diagdialog.hide()
    # Import
def on_menu_bar_importar_activate(self, widget):
Variables.diagimp.connect('delete-event', lambda w, e: w.hide() or True)
Variables.diagimp.show()
def on_btn_import_1_clicked(self, widget):
FuncionesImportExport.importar(Variables.diagimp.get_filename())
Variables.diagimp.connect('delete-event', lambda w, e: w.hide() or True)
Variables.diagimp.hide()
def on_import_salir_clicked(self, widget):
Variables.diagimp.connect('delete-event', lambda w, e: w.hide() or True)
Variables.diagimp.hide()
def on_menu_bar_exportar_activate(self, widget):
Conexion.abrirDB()
FuncionesImportExport.exportar()
Conexion.cerrarbbdd()
def on_menu_list_clientes_activate(self, widget):
FuncionesIm.listadoCli()
    # Services
def on_btn_altaservicio_clicked(self, widget):
Conexion.abrirDB()
FuncionesServicios.altaServicioOrd()
Conexion.cerrarbbdd()
def on_btn_bajaservicio_clicked(self, widget):
Conexion.abrirDB()
FuncionesServicios.bajaserv()
Conexion.cerrarbbdd()
def on_treeServicios_cursor_changed(self, widget):
FuncionesServicios.entrserv() | PypiClean |
/Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/build/inline_copy/lib/scons-2.3.2/SCons/Scanner/C.py |
__revision__ = "src/engine/SCons/Scanner/C.py 2014/07/05 09:42:21 garyo"
import SCons.Node.FS
import SCons.Scanner
import SCons.Util
import SCons.Warnings
import SCons.cpp
class SConsCPPScanner(SCons.cpp.PreProcessor):
"""
SCons-specific subclass of the cpp.py module's processing.
We subclass this so that: 1) we can deal with files represented
by Nodes, not strings; 2) we can keep track of the files that are
missing.
"""
def __init__(self, *args, **kw):
SCons.cpp.PreProcessor.__init__(self, *args, **kw)
self.missing = []
def initialize_result(self, fname):
self.result = SCons.Util.UniqueList([fname])
def finalize_result(self, fname):
return self.result[1:]
def find_include_file(self, t):
keyword, quote, fname = t
result = SCons.Node.FS.find_file(fname, self.searchpath[quote])
if not result:
self.missing.append((fname, self.current_file))
return result
def read_file(self, file):
try:
fp = open(str(file.rfile()))
        except EnvironmentError as e:
self.missing.append((file, self.current_file))
return ''
else:
return fp.read()
def dictify_CPPDEFINES(env):
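    # Normalize CPPDEFINES (None, a dict, a single name, or a sequence of
    # names and (name, value) pairs) into a plain {name: value} dict.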
cppdefines = env.get('CPPDEFINES', {})
if cppdefines is None:
return {}
if SCons.Util.is_Sequence(cppdefines):
result = {}
for c in cppdefines:
if SCons.Util.is_Sequence(c):
result[c[0]] = c[1]
else:
result[c] = None
return result
if not SCons.Util.is_Dict(cppdefines):
return {cppdefines : None}
return cppdefines
class SConsCPPScannerWrapper(object):
"""
The SCons wrapper around a cpp.py scanner.
This is the actual glue between the calling conventions of generic
SCons scanners, and the (subclass of) cpp.py class that knows how
to look for #include lines with reasonably real C-preprocessor-like
evaluation of #if/#ifdef/#else/#elif lines.
"""
def __init__(self, name, variable):
self.name = name
self.path = SCons.Scanner.FindPathDirs(variable)
def __call__(self, node, env, path = ()):
cpp = SConsCPPScanner(current = node.get_dir(),
cpppath = path,
dict = dictify_CPPDEFINES(env))
result = cpp(node)
for included, includer in cpp.missing:
fmt = "No dependency generated for file: %s (included from: %s) -- file not found"
SCons.Warnings.warn(SCons.Warnings.DependencyWarning,
fmt % (included, includer))
return result
def recurse_nodes(self, nodes):
return nodes
def select(self, node):
return self
def CScanner():
"""Return a prototype Scanner instance for scanning source files
that use the C pre-processor"""
# Here's how we would (or might) use the CPP scanner code above that
# knows how to evaluate #if/#ifdef/#else/#elif lines when searching
# for #includes. This is commented out for now until we add the
# right configurability to let users pick between the scanners.
#return SConsCPPScannerWrapper("CScanner", "CPPPATH")
cs = SCons.Scanner.ClassicCPP("CScanner",
"$CPPSUFFIXES",
"CPPPATH",
'^[ \t]*#[ \t]*(?:include|import)[ \t]*(<|")([^>"]+)(>|")')
return cs
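# Rough usage sketch (hypothetical; SCons normally attaches this scanner to
# the construction environment itself rather than calling it directly):
#   scanner = CScanner()
#   deps = scanner(env.File('main.c'), env, path=())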
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4: | PypiClean |
/EnforsML-0.0.3.tar.gz/EnforsML-0.0.3/enforsml/text/utils.py | from enforsml.text import word
def normalize(txt):
"""Return a normalized copy of txt.
"""
txt = normalize_whitespace(txt)
txt = unify_sentence_dividers(txt)
return txt
def unify_sentence_dividers(txt):
"""Return copy of txt with ? and ! replaced with .
"""
for ch in ["!", "?"]:
txt = txt.replace(ch, ".")
return txt
def remove_junk_chars(txt):
"""Return copy of txt without unneeded chars.
"""
for ch in [": ", "; "]:
txt = txt.replace(ch, " ")
for ch in [".", ",", "(", ")", '"']:
txt = txt.replace(ch, "")
return txt
def remove_words(txt, words_to_remove):
"""Return a copy of the txt string with the specified words (not Words)
removed.
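    >>> remove_words("the quick brown fox", ["the", "brown"])
    'quick fox'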
"""
output = ""
for wrd in txt.split(" "):
if wrd not in words_to_remove:
output += wrd + " "
return output.strip()
def split_sentences(txt):
"""Attempt to split a txt into sentences.
"""
txt = normalize_whitespace(txt)
txt.replace("!", ".")
txt.replace("?", ".")
sentences = [sentence.strip() for sentence in txt.split(". ")]
sentences[-1] = sentences[-1].rstrip(".")
return sentences
def split_sentence(txt):
"""Given a normalized sentence, return a list of Words.
"""
words = []
for part in txt.split(" "):
words.append(word.Word(part))
return words
def normalize_and_split_sentences(txt):
"""Return normalized sentences.
>>> normalize_and_split_sentences("Foo bar. Another small sentence.")
['Foo bar', 'Another small sentence']
>>> normalize_and_split_sentences(" Foo bar. Another small sentence.")
['Foo bar', 'Another small sentence']
>>> normalize_and_split_sentences("Foo bar . Another small sentence.")
['Foo bar', 'Another small sentence']
"""
txt = normalize(txt)
sentences = split_sentences(txt)
return sentences
def normalize_whitespace(txt):
"""Return a copy of txt with one space between all words, with all
newlines and tab characters removed.
>>> print(normalize_whitespace("some text"))
some text
>>> print(normalize_whitespace(" some text "))
some text
>>> print(normalize_whitespace(" some text"))
some text
>>> print(normalize_whitespace('\t\tsome text'))
some text
>>> print(normalize_whitespace(" some text "))
some text
"""
new_txt = txt.replace("\n", " ")
new_txt = new_txt.replace("\r", "")
new_txt = new_txt.replace("\t", " ")
words = [word.strip() for word in new_txt.split(" ") if len(word) > 0]
new_txt = " ".join(words)
return new_txt | PypiClean |
/MultiEncoder-0.0.6.tar.gz/MultiEncoder-0.0.6/mle/multi_layer_encoder.py | from transformers import AutoTokenizer, AutoModel
import numpy as np
import torch
import torch.nn.functional as F
class multi_layer_encoder():
def __init__(self, model_dir):
self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
self.model = AutoModel.from_pretrained(model_dir, output_hidden_states=True)
if torch.cuda.is_available():
self.device = torch.device("cuda")
print(f'There are {torch.cuda.device_count()} GPU(s) available.')
print('Device name:', torch.cuda.get_device_name(0))
self.model.to(self.device)
        else:
            print('No GPU available, using the CPU instead.')
            self.device = torch.device("cpu")
# Helper: Mean and Max Pooling - Take attention mask into account for correct averaging
def mean_pooling(self,model_output, attention_mask):
"""
Take the mean of the token embeddings.
:param model_output:
:param attention_mask:
:return: mean pooled embedding
:rtype: torch.tensor
"""
        token_embeddings = model_output  # hidden-state tensor for one layer: (batch, seq_len, dim)
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(self,model_output, attention_mask):
"""
Take the max of the token embeddings.
:param model_output:
:param attention_mask:
:return: max pooled embedding
:rtype: torch.tensor
"""
        token_embeddings = model_output  # hidden-state tensor for one layer: (batch, seq_len, dim)
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
return torch.max(token_embeddings, 1)[0]
    def return_last_six_hidden(self, tensor_list, cut_point):
        length_of_tensor_list = len(tensor_list)
        cut_off_point = cut_point  # e.g. 6
        dict_of_layer_outputs = {}
        # Layers are numbered from 1, so the 6th layer sits at index 5 and the
        # 13th layer (the last hidden state) at index 12 of the zero-based list.
for i in range(cut_off_point - 1, length_of_tensor_list):
layer_outputs = tensor_list[i]
dict_of_layer_outputs[f"layer_{i + 1}"] = layer_outputs
return dict_of_layer_outputs
def get_encoded_longformer_input(self,text):
"""
Encode a text with the longformer model.
:param text: input text
:return:
"""
sentence = text
encoded_input = self.tokenizer(sentence, padding=True, truncation=True, return_tensors='pt').to(self.device)
# Compute token embeddings
with torch.no_grad():
model_output = self.model(**encoded_input)
return model_output, encoded_input
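    # Rough usage sketch (the checkpoint name below is hypothetical; any
    # transformers checkpoint that returns hidden states should work):
    #   enc = multi_layer_encoder("allenai/longformer-base-4096")
    #   vectors, layer_dict = enc.multi_encode("some text", encode_layers=6)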
def multi_encode(self, input_text, encode_layers=6, max_pool=False):
"""
Encode a text with the longformer model.
:param input_text: The text to encode.
:param encode_layers: which layers to get encoding from. (From layer 12 to encode_layers)
:param max_pool: boolean, if true, use max pooling, else use mean pooling
:return: list of encoded input. The first is the lowest layer, the last is the highest layer.
        :return: dictionary, with keys being the layer names and values being the layer outputs.
:rtype: list, dict
"""
model_output, encoded_input = self.get_encoded_longformer_input(input_text)
dect = self.return_last_six_hidden(model_output.hidden_states, encode_layers)
        list_of_encoded_inputs = []  # ordered from encode_layers up to the last layer (12) of the model.
for key in dect.keys():
#print("Pooling :", key) # Take this away!
if max_pool:
sentence_embeddings = self.max_pooling(dect.get(key), encoded_input['attention_mask'])
else:
sentence_embeddings = self.mean_pooling(dect.get(key), encoded_input['attention_mask'])
if torch.cuda.is_available():
list_of_encoded_inputs.append(sentence_embeddings.cpu().numpy()[0])
else:
list_of_encoded_inputs.append(sentence_embeddings.numpy()[0])
#print("CAUTION: Dictionary embeddings are not pooled and is of type torch.tensor")
return list_of_encoded_inputs, dect | PypiClean |
/MegEngine-1.13.1-cp37-cp37m-macosx_10_14_x86_64.whl/megengine/tools/compare_binary_iodump.py | import argparse
import os
import struct
import textwrap
from pathlib import Path
import numpy as np
def load_tensor_binary(fobj):
"""Load a tensor dumped by the :class:`BinaryOprIODump` plugin; the actual
tensor value dump is implemented by ``mgb::debug::dump_tensor``.
Args:
fobj: file object, or a string that contains the file name.
Returns:
tuple ``(tensor_value, tensor_name)``.
"""
if isinstance(fobj, str):
with open(fobj, "rb") as fin:
return load_tensor_binary(fin)
DTYPE_LIST = {
0: np.float32,
1: np.uint8,
2: np.int8,
3: np.int16,
4: np.int32,
# 5: _mgb.intb1,
# 6: _mgb.intb2,
# 7: _mgb.intb4,
8: None,
9: np.float16,
# quantized dtype start from 100000
# see MEGDNN_PARAMETERIZED_DTYPE_ENUM_BASE in
# dnn/include/megdnn/dtype.h
100000: np.uint8,
100001: np.int32,
100002: np.int8,
}
header_fmt = struct.Struct("III")
name_len, dtype, max_ndim = header_fmt.unpack(fobj.read(header_fmt.size))
assert (
DTYPE_LIST[dtype] is not None
), "Cannot load this tensor: dtype Byte is unsupported."
shape = list(struct.unpack("I" * max_ndim, fobj.read(max_ndim * 4)))
while shape[-1] == 0:
shape.pop(-1)
name = fobj.read(name_len).decode("ascii")
return np.fromfile(fobj, dtype=DTYPE_LIST[dtype]).reshape(shape), name
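# Rough sketch of the inverse (dumping a float32 numpy array `arr` under an
# ASCII `name`; `max_ndim` is whatever slot count the producer uses, with the
# shape zero-padded to that length, mirroring the layout parsed above):
#   fout.write(struct.pack("III", len(name), 0, max_ndim))
#   fout.write(struct.pack("I" * max_ndim, *(list(arr.shape) + [0] * (max_ndim - arr.ndim))))
#   fout.write(name.encode("ascii"))
#   arr.astype(np.float32).tofile(fout)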
def check(v0, v1, name, max_err):
v0 = np.ascontiguousarray(v0, dtype=np.float32)
v1 = np.ascontiguousarray(v1, dtype=np.float32)
assert np.isfinite(v0.sum()) and np.isfinite(
v1.sum()
), "{} not finite: sum={} vs sum={}".format(name, v0.sum(), v1.sum())
assert v0.shape == v1.shape, "{} shape mismatch: {} vs {}".format(
name, v0.shape, v1.shape
)
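    # Relative error with the denominator clamped to at least 1, so values
    # near zero are compared on an absolute rather than relative scale.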
vdiv = np.max([np.abs(v0), np.abs(v1), np.ones_like(v0)], axis=0)
err = np.abs(v0 - v1) / vdiv
rst = err > max_err
if rst.sum():
idx = tuple(i[0] for i in np.nonzero(rst))
raise AssertionError(
"{} not equal: "
"shape={} nonequal_idx={} v0={} v1={} err={}".format(
name, v0.shape, idx, v0[idx], v1[idx], err[idx]
)
)
def main():
parser = argparse.ArgumentParser(
description=(
"compare tensor dumps generated BinaryOprIODump plugin, "
"it can compare two dirs or two single files"
),
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
parser.add_argument("input0", help="dirname or filename")
parser.add_argument("input1", help="dirname or filename")
parser.add_argument(
"-e", "--max-err", type=float, default=1e-3, help="max allowed error"
)
parser.add_argument(
"-s", "--stop-on-error", action="store_true", help="do not compare "
)
args = parser.parse_args()
files0 = set()
files1 = set()
if os.path.isdir(args.input0):
assert os.path.isdir(args.input1)
name0 = set()
name1 = set()
for i in os.listdir(args.input0):
files0.add(str(Path(args.input0) / i))
name0.add(i)
for i in os.listdir(args.input1):
files1.add(str(Path(args.input1) / i))
name1.add(i)
assert name0 == name1, "dir files mismatch: a-b={} b-a={}".format(
name0 - name1, name1 - name0
)
else:
files0.add(args.input0)
files1.add(args.input1)
files0 = sorted(files0)
files1 = sorted(files1)
for i, j in zip(files0, files1):
val0, name0 = load_tensor_binary(i)
val1, name1 = load_tensor_binary(j)
name = "{}: \n{}\n{}\n".format(
i, "\n ".join(textwrap.wrap(name0)), "\n ".join(textwrap.wrap(name1))
)
try:
check(val0, val1, name, args.max_err)
except Exception as exc:
if args.stop_on_error:
raise exc
print(exc)
if __name__ == "__main__":
main() | PypiClean |
/Electrum-CHI-3.3.8.tar.gz/Electrum-CHI-3.3.8/electrum_chi/electrum/base_wizard.py |
import os
import sys
import copy
import traceback
from functools import partial
from typing import List, TYPE_CHECKING, Tuple, NamedTuple, Any, Dict, Optional
from . import bitcoin
from . import keystore
from . import mnemonic
from .bip32 import is_bip32_derivation, xpub_type, normalize_bip32_derivation
from .keystore import bip44_derivation, purpose48_derivation
from .wallet import (Imported_Wallet, Standard_Wallet, Multisig_Wallet,
wallet_types, Wallet, Abstract_Wallet)
from .storage import (WalletStorage, STO_EV_USER_PW, STO_EV_XPUB_PW,
get_derivation_used_for_hw_device_encryption)
from .i18n import _
from .util import UserCancelled, InvalidPassword, WalletFileException
from .simple_config import SimpleConfig
from .plugin import Plugins, HardwarePluginLibraryUnavailable
from .logging import Logger
from .plugins.hw_wallet.plugin import OutdatedHwFirmwareException, HW_PluginBase
if TYPE_CHECKING:
from .plugin import DeviceInfo
# hardware device setup purpose
HWD_SETUP_NEW_WALLET, HWD_SETUP_DECRYPT_WALLET = range(0, 2)
class ScriptTypeNotSupported(Exception): pass
class GoBack(Exception): pass
class WizardStackItem(NamedTuple):
action: Any
args: Any
kwargs: Dict[str, Any]
storage_data: dict
class BaseWizard(Logger):
def __init__(self, config: SimpleConfig, plugins: Plugins):
super(BaseWizard, self).__init__()
Logger.__init__(self)
self.config = config
self.plugins = plugins
self.data = {}
self.pw_args = None
self._stack = [] # type: List[WizardStackItem]
self.plugin = None
self.keystores = []
self.is_kivy = config.get('gui') == 'kivy'
self.seed_type = None
def set_icon(self, icon):
pass
def run(self, *args, **kwargs):
action = args[0]
args = args[1:]
storage_data = copy.deepcopy(self.data)
self._stack.append(WizardStackItem(action, args, kwargs, storage_data))
if not action:
return
if type(action) is tuple:
self.plugin, action = action
if self.plugin and hasattr(self.plugin, action):
f = getattr(self.plugin, action)
f(self, *args, **kwargs)
elif hasattr(self, action):
f = getattr(self, action)
f(*args, **kwargs)
else:
raise Exception("unknown action", action)
def can_go_back(self):
return len(self._stack) > 1
def go_back(self):
if not self.can_go_back():
return
# pop 'current' frame
self._stack.pop()
# pop 'previous' frame
stack_item = self._stack.pop()
# try to undo side effects since we last entered 'previous' frame
# FIXME only self.storage is properly restored
self.data = copy.deepcopy(stack_item.storage_data)
# rerun 'previous' frame
self.run(stack_item.action, *stack_item.args, **stack_item.kwargs)
def reset_stack(self):
self._stack = []
def new(self):
title = _("Create new wallet")
message = '\n'.join([
_("What kind of wallet do you want to create?")
])
wallet_kinds = [
('standard', _("Standard wallet")),
('2fa', _("Wallet with two-factor authentication")),
('multisig', _("Multi-signature wallet")),
('imported', _("Import Xaya addresses or private keys")),
]
choices = [pair for pair in wallet_kinds if pair[0] in wallet_types]
self.choice_dialog(title=title, message=message, choices=choices, run_next=self.on_wallet_type)
def upgrade_storage(self, storage):
exc = None
def on_finished():
if exc is None:
self.terminate(storage=storage)
else:
raise exc
def do_upgrade():
nonlocal exc
try:
storage.upgrade()
except Exception as e:
exc = e
self.waiting_dialog(do_upgrade, _('Upgrading wallet format...'), on_finished=on_finished)
def load_2fa(self):
self.data['wallet_type'] = '2fa'
self.data['use_trustedcoin'] = True
self.plugin = self.plugins.load_plugin('trustedcoin')
def on_wallet_type(self, choice):
self.data['wallet_type'] = self.wallet_type = choice
if choice == 'standard':
action = 'choose_keystore'
elif choice == 'multisig':
action = 'choose_multisig'
elif choice == '2fa':
self.load_2fa()
action = self.plugin.get_action(self.data)
elif choice == 'imported':
action = 'import_addresses_or_keys'
self.run(action)
def choose_multisig(self):
def on_multisig(m, n):
multisig_type = "%dof%d" % (m, n)
self.data['wallet_type'] = multisig_type
self.n = n
self.run('choose_keystore')
self.multisig_dialog(run_next=on_multisig)
def choose_keystore(self):
assert self.wallet_type in ['standard', 'multisig']
i = len(self.keystores)
title = _('Add cosigner') + ' (%d of %d)'%(i+1, self.n) if self.wallet_type=='multisig' else _('Keystore')
if self.wallet_type =='standard' or i==0:
message = _('Do you want to create a new seed, or to restore a wallet using an existing seed?')
choices = [
('choose_seed_type', _('Create a new seed')),
('restore_from_seed', _('I already have a seed')),
('restore_from_key', _('Use a master key')),
]
if not self.is_kivy:
choices.append(('choose_hw_device', _('Use a hardware device')))
else:
message = _('Add a cosigner to your multi-sig wallet')
choices = [
('restore_from_key', _('Enter cosigner key')),
('restore_from_seed', _('Enter cosigner seed')),
]
if not self.is_kivy:
choices.append(('choose_hw_device', _('Cosign with hardware device')))
self.choice_dialog(title=title, message=message, choices=choices, run_next=self.run)
def import_addresses_or_keys(self):
v = lambda x: keystore.is_address_list(x) or keystore.is_private_key_list(x, raise_on_error=True)
title = _("Import Xaya Addresses")
message = _("Enter a list of Xaya addresses (this will create a watching-only wallet), or a list of private keys.")
self.add_xpub_dialog(title=title, message=message, run_next=self.on_import,
is_valid=v, allow_multi=True, show_wif_help=True)
def on_import(self, text):
# text is already sanitized by is_address_list and is_private_keys_list
if keystore.is_address_list(text):
self.data['addresses'] = {}
for addr in text.split():
assert bitcoin.is_address(addr)
self.data['addresses'][addr] = {}
elif keystore.is_private_key_list(text):
self.data['addresses'] = {}
k = keystore.Imported_KeyStore({})
keys = keystore.get_private_keys(text)
for pk in keys:
assert bitcoin.is_private_key(pk)
txin_type, pubkey = k.import_privkey(pk, None)
addr = bitcoin.pubkey_to_address(txin_type, pubkey)
self.data['addresses'][addr] = {'type':txin_type, 'pubkey':pubkey, 'redeem_script':None}
self.keystores.append(k)
else:
return self.terminate()
return self.run('create_wallet')
def restore_from_key(self):
if self.wallet_type == 'standard':
v = keystore.is_master_key
title = _("Create keystore from a master key")
message = ' '.join([
_("To create a watching-only wallet, please enter your master public key (xpub/ypub/zpub)."),
_("To create a spending wallet, please enter a master private key (xprv/yprv/zprv).")
])
self.add_xpub_dialog(title=title, message=message, run_next=self.on_restore_from_key, is_valid=v)
else:
i = len(self.keystores) + 1
self.add_cosigner_dialog(index=i, run_next=self.on_restore_from_key, is_valid=keystore.is_bip32_key)
def on_restore_from_key(self, text):
k = keystore.from_master_key(text)
self.on_keystore(k)
def choose_hw_device(self, purpose=HWD_SETUP_NEW_WALLET, *, storage=None):
title = _('Hardware Keystore')
# check available plugins
supported_plugins = self.plugins.get_hardware_support()
devices = [] # type: List[Tuple[str, DeviceInfo]]
devmgr = self.plugins.device_manager
debug_msg = ''
def failed_getting_device_infos(name, e):
nonlocal debug_msg
err_str_oneline = ' // '.join(str(e).splitlines())
self.logger.warning(f'error getting device infos for {name}: {err_str_oneline}')
indented_error_msg = ' '.join([''] + str(e).splitlines(keepends=True))
debug_msg += f' {name}: (error getting device infos)\n{indented_error_msg}\n'
# scan devices
try:
scanned_devices = devmgr.scan_devices()
except BaseException as e:
self.logger.info('error scanning devices: {}'.format(repr(e)))
debug_msg = ' {}:\n {}'.format(_('Error scanning devices'), e)
else:
for splugin in supported_plugins:
name, plugin = splugin.name, splugin.plugin
# plugin init errored?
if not plugin:
e = splugin.exception
indented_error_msg = ' '.join([''] + str(e).splitlines(keepends=True))
debug_msg += f' {name}: (error during plugin init)\n'
debug_msg += ' {}\n'.format(_('You might have an incompatible library.'))
debug_msg += f'{indented_error_msg}\n'
continue
# see if plugin recognizes 'scanned_devices'
try:
# FIXME: side-effect: unpaired_device_info sets client.handler
device_infos = devmgr.unpaired_device_infos(None, plugin, devices=scanned_devices,
include_failing_clients=True)
except HardwarePluginLibraryUnavailable as e:
failed_getting_device_infos(name, e)
continue
except BaseException as e:
self.logger.exception('')
failed_getting_device_infos(name, e)
continue
device_infos_failing = list(filter(lambda di: di.exception is not None, device_infos))
for di in device_infos_failing:
failed_getting_device_infos(name, di.exception)
device_infos_working = list(filter(lambda di: di.exception is None, device_infos))
devices += list(map(lambda x: (name, x), device_infos_working))
if not debug_msg:
debug_msg = ' {}'.format(_('No exceptions encountered.'))
if not devices:
msg = (_('No hardware device detected.') + '\n' +
_('To trigger a rescan, press \'Next\'.') + '\n\n')
if sys.platform == 'win32':
msg += _('If your device is not detected on Windows, go to "Settings", "Devices", "Connected devices", '
'and do "Remove device". Then, plug your device again.') + '\n'
msg += _('While this is less than ideal, it might help if you run Electrum-CHI as Administrator.') + '\n'
else:
msg += _('On Linux, you might have to add a new permission to your udev rules.') + '\n'
msg += '\n\n'
msg += _('Debug message') + '\n' + debug_msg
self.confirm_dialog(title=title, message=msg,
run_next=lambda x: self.choose_hw_device(purpose, storage=storage))
return
# select device
self.devices = devices
choices = []
for name, info in devices:
state = _("initialized") if info.initialized else _("wiped")
label = info.label or _("An unnamed {}").format(name)
try: transport_str = info.device.transport_ui_string[:20]
except: transport_str = 'unknown transport'
descr = f"{label} [{name}, {state}, {transport_str}]"
choices.append(((name, info), descr))
msg = _('Select a device') + ':'
self.choice_dialog(title=title, message=msg, choices=choices,
run_next=lambda *args: self.on_device(*args, purpose=purpose, storage=storage))
def on_device(self, name, device_info, *, purpose, storage=None):
self.plugin = self.plugins.get_plugin(name) # type: HW_PluginBase
try:
self.plugin.setup_device(device_info, self, purpose)
except OSError as e:
self.show_error(_('We encountered an error while connecting to your device:')
+ '\n' + str(e) + '\n'
+ _('To try to fix this, we will now re-pair with your device.') + '\n'
+ _('Please try again.'))
devmgr = self.plugins.device_manager
devmgr.unpair_id(device_info.device.id_)
self.choose_hw_device(purpose, storage=storage)
return
except OutdatedHwFirmwareException as e:
if self.question(e.text_ignore_old_fw_and_continue(), title=_("Outdated device firmware")):
self.plugin.set_ignore_outdated_fw()
# will need to re-pair
devmgr = self.plugins.device_manager
devmgr.unpair_id(device_info.device.id_)
self.choose_hw_device(purpose, storage=storage)
return
except (UserCancelled, GoBack):
self.choose_hw_device(purpose, storage=storage)
return
except BaseException as e:
self.logger.exception('')
self.show_error(str(e))
self.choose_hw_device(purpose, storage=storage)
return
if purpose == HWD_SETUP_NEW_WALLET:
def f(derivation, script_type):
derivation = normalize_bip32_derivation(derivation)
self.run('on_hw_derivation', name, device_info, derivation, script_type)
self.derivation_and_script_type_dialog(f)
elif purpose == HWD_SETUP_DECRYPT_WALLET:
derivation = get_derivation_used_for_hw_device_encryption()
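            # The encryption password is a pubkey derived at a fixed path on
            # the device, so the same hardware wallet can always unlock the file.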
xpub = self.plugin.get_xpub(device_info.device.id_, derivation, 'standard', self)
password = keystore.Xpub.get_pubkey_from_xpub(xpub, ())
try:
storage.decrypt(password)
except InvalidPassword:
# try to clear session so that user can type another passphrase
devmgr = self.plugins.device_manager
client = devmgr.client_by_id(device_info.device.id_)
if hasattr(client, 'clear_session'): # FIXME not all hw wallet plugins have this
client.clear_session()
raise
else:
raise Exception('unknown purpose: %s' % purpose)
def derivation_and_script_type_dialog(self, f):
message1 = _('Choose the type of addresses in your wallet.')
message2 = ' '.join([
_('You can override the suggested derivation path.'),
_('If you are not sure what this is, leave this field unchanged.')
])
if self.wallet_type == 'multisig':
# There is no general standard for HD multisig.
# For legacy, this is partially compatible with BIP45; assumes index=0
# For segwit, a custom path is used, as there is no standard at all.
default_choice_idx = 2
choices = [
('standard', 'legacy multisig (p2sh)', "m/45'/0"),
('p2wsh-p2sh', 'p2sh-segwit multisig (p2wsh-p2sh)', purpose48_derivation(0, xtype='p2wsh-p2sh')),
('p2wsh', 'native segwit multisig (p2wsh)', purpose48_derivation(0, xtype='p2wsh')),
]
else:
default_choice_idx = 2
choices = [
('standard', 'legacy (p2pkh)', bip44_derivation(0, bip43_purpose=44)),
('p2wpkh-p2sh', 'p2sh-segwit (p2wpkh-p2sh)', bip44_derivation(0, bip43_purpose=49)),
('p2wpkh', 'native segwit (p2wpkh)', bip44_derivation(0, bip43_purpose=84)),
]
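        # The BIP43 purpose number encodes the script type: 44' for legacy
        # p2pkh, 49' for p2wpkh-p2sh, 84' for native segwit p2wpkh (48' covers
        # the multisig variants above).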
while True:
try:
self.choice_and_line_dialog(
run_next=f, title=_('Script type and Derivation path'), message1=message1,
message2=message2, choices=choices, test_text=is_bip32_derivation,
default_choice_idx=default_choice_idx)
return
except ScriptTypeNotSupported as e:
self.show_error(e)
# let the user choose again
def on_hw_derivation(self, name, device_info, derivation, xtype):
from .keystore import hardware_keystore
try:
xpub = self.plugin.get_xpub(device_info.device.id_, derivation, xtype, self)
except ScriptTypeNotSupported:
raise # this is handled in derivation_dialog
except BaseException as e:
self.logger.exception('')
self.show_error(e)
return
d = {
'type': 'hardware',
'hw_type': name,
'derivation': derivation,
'xpub': xpub,
'label': device_info.label,
}
k = hardware_keystore(d)
self.on_keystore(k)
def passphrase_dialog(self, run_next, is_restoring=False):
title = _('Seed extension')
message = '\n'.join([
_('You may extend your seed with custom words.'),
_('Your seed extension must be saved together with your seed.'),
])
warning = '\n'.join([
_('Note that this is NOT your encryption password.'),
_('If you do not know what this is, leave this field empty.'),
])
warn_issue4566 = is_restoring and self.seed_type == 'bip39'
self.line_dialog(title=title, message=message, warning=warning,
default='', test=lambda x:True, run_next=run_next,
warn_issue4566=warn_issue4566)
def restore_from_seed(self):
self.opt_bip39 = True
self.opt_ext = True
is_cosigning_seed = lambda x: mnemonic.seed_type(x) in ['standard', 'segwit']
test = mnemonic.is_seed if self.wallet_type == 'standard' else is_cosigning_seed
self.restore_seed_dialog(run_next=self.on_restore_seed, test=test)
def on_restore_seed(self, seed, is_bip39, is_ext):
self.seed_type = 'bip39' if is_bip39 else mnemonic.seed_type(seed)
if self.seed_type == 'bip39':
f = lambda passphrase: self.on_restore_bip39(seed, passphrase)
self.passphrase_dialog(run_next=f, is_restoring=True) if is_ext else f('')
elif self.seed_type in ['standard', 'segwit']:
f = lambda passphrase: self.run('create_keystore', seed, passphrase)
self.passphrase_dialog(run_next=f, is_restoring=True) if is_ext else f('')
elif self.seed_type == 'old':
self.run('create_keystore', seed, '')
elif mnemonic.is_any_2fa_seed_type(self.seed_type):
self.load_2fa()
self.run('on_restore_seed', seed, is_ext)
else:
raise Exception('Unknown seed type', self.seed_type)
def on_restore_bip39(self, seed, passphrase):
def f(derivation, script_type):
derivation = normalize_bip32_derivation(derivation)
self.run('on_bip43', seed, passphrase, derivation, script_type)
self.derivation_and_script_type_dialog(f)
def create_keystore(self, seed, passphrase):
k = keystore.from_seed(seed, passphrase, self.wallet_type == 'multisig')
self.on_keystore(k)
def on_bip43(self, seed, passphrase, derivation, script_type):
k = keystore.from_bip39_seed(seed, passphrase, derivation, xtype=script_type)
self.on_keystore(k)
def on_keystore(self, k):
has_xpub = isinstance(k, keystore.Xpub)
if has_xpub:
t1 = xpub_type(k.xpub)
if self.wallet_type == 'standard':
if has_xpub and t1 not in ['standard', 'p2wpkh', 'p2wpkh-p2sh']:
self.show_error(_('Wrong key type') + ' %s'%t1)
self.run('choose_keystore')
return
self.keystores.append(k)
self.run('create_wallet')
elif self.wallet_type == 'multisig':
assert has_xpub
if t1 not in ['standard', 'p2wsh', 'p2wsh-p2sh']:
self.show_error(_('Wrong key type') + ' %s'%t1)
self.run('choose_keystore')
return
if k.xpub in map(lambda x: x.xpub, self.keystores):
self.show_error(_('Error: duplicate master public key'))
self.run('choose_keystore')
return
if len(self.keystores)>0:
t2 = xpub_type(self.keystores[0].xpub)
if t1 != t2:
self.show_error(_('Cannot add this cosigner:') + '\n' + "Their key type is '%s', we are '%s'"%(t1, t2))
self.run('choose_keystore')
return
self.keystores.append(k)
if len(self.keystores) == 1:
xpub = k.get_master_public_key()
self.reset_stack()
self.run('show_xpub_and_add_cosigners', xpub)
elif len(self.keystores) < self.n:
self.run('choose_keystore')
else:
self.run('create_wallet')
def create_wallet(self):
encrypt_keystore = any(k.may_have_password() for k in self.keystores)
# note: the following condition ("if") is duplicated logic from
# wallet.get_available_storage_encryption_version()
if self.wallet_type == 'standard' and isinstance(self.keystores[0], keystore.Hardware_KeyStore):
# offer encrypting with a pw derived from the hw device
k = self.keystores[0]
try:
k.handler = self.plugin.create_handler(self)
password = k.get_password_for_storage_encryption()
except UserCancelled:
devmgr = self.plugins.device_manager
devmgr.unpair_xpub(k.xpub)
self.choose_hw_device()
return
except BaseException as e:
self.logger.exception('')
self.show_error(str(e))
return
self.request_storage_encryption(
run_next=lambda encrypt_storage: self.on_password(
password,
encrypt_storage=encrypt_storage,
storage_enc_version=STO_EV_XPUB_PW,
encrypt_keystore=False))
else:
# reset stack to disable 'back' button in password dialog
self.reset_stack()
# prompt the user to set an arbitrary password
self.request_password(
run_next=lambda password, encrypt_storage: self.on_password(
password,
encrypt_storage=encrypt_storage,
storage_enc_version=STO_EV_USER_PW,
encrypt_keystore=encrypt_keystore),
force_disable_encrypt_cb=not encrypt_keystore)
def on_password(self, password, *, encrypt_storage,
storage_enc_version=STO_EV_USER_PW, encrypt_keystore):
for k in self.keystores:
if k.may_have_password():
k.update_password(None, password)
if self.wallet_type == 'standard':
self.data['seed_type'] = self.seed_type
keys = self.keystores[0].dump()
self.data['keystore'] = keys
elif self.wallet_type == 'multisig':
for i, k in enumerate(self.keystores):
self.data['x%d/'%(i+1)] = k.dump()
elif self.wallet_type == 'imported':
if len(self.keystores) > 0:
keys = self.keystores[0].dump()
self.data['keystore'] = keys
else:
raise Exception('Unknown wallet type')
self.pw_args = password, encrypt_storage, storage_enc_version
self.terminate()
def create_storage(self, path):
if os.path.exists(path):
raise Exception('file already exists at path')
if not self.pw_args:
return
password, encrypt_storage, storage_enc_version = self.pw_args
self.pw_args = None # clean-up so that it can get GC-ed
storage = WalletStorage(path)
storage.set_keystore_encryption(bool(password))
if encrypt_storage:
storage.set_password(password, enc_version=storage_enc_version)
for key, value in self.data.items():
storage.put(key, value)
storage.write()
storage.load_plugins()
return storage
def terminate(self, *, storage: Optional[WalletStorage] = None):
raise NotImplementedError() # implemented by subclasses
def show_xpub_and_add_cosigners(self, xpub):
self.show_xpub_dialog(xpub=xpub, run_next=lambda x: self.run('choose_keystore'))
def choose_seed_type(self, message=None, choices=None):
title = _('Choose Seed type')
if message is None:
message = ' '.join([
_("The type of addresses used by your wallet will depend on your seed."),
_("Segwit wallets use bech32 addresses, defined in BIP173."),
_("Please note that websites and other wallets may not support these addresses yet."),
_("Thus, you might want to keep using a non-segwit wallet in order to be able to receive CHI during the transition period.")
])
if choices is None:
choices = [
('create_segwit_seed', _('Segwit')),
('create_standard_seed', _('Legacy')),
]
self.choice_dialog(title=title, message=message, choices=choices, run_next=self.run)
def create_segwit_seed(self): self.create_seed('segwit')
def create_standard_seed(self): self.create_seed('standard')
def create_seed(self, seed_type):
from . import mnemonic
self.seed_type = seed_type
seed = mnemonic.Mnemonic('en').make_seed(self.seed_type)
self.opt_bip39 = False
f = lambda x: self.request_passphrase(seed, x)
self.show_seed_dialog(run_next=f, seed_text=seed)
def request_passphrase(self, seed, opt_passphrase):
if opt_passphrase:
f = lambda x: self.confirm_seed(seed, x)
self.passphrase_dialog(run_next=f)
else:
self.run('confirm_seed', seed, '')
def confirm_seed(self, seed, passphrase):
f = lambda x: self.confirm_passphrase(seed, passphrase)
self.confirm_seed_dialog(run_next=f, test=lambda x: x==seed)
def confirm_passphrase(self, seed, passphrase):
f = lambda x: self.run('create_keystore', seed, x)
if passphrase:
title = _('Confirm Seed Extension')
message = '\n'.join([
_('Your seed extension must be saved together with your seed.'),
_('Please type it here.'),
])
self.line_dialog(run_next=f, title=title, message=message, default='', test=lambda x: x==passphrase)
else:
f('') | PypiClean |
/Mopidy-Syncprojects-0.0.1.tar.gz/Mopidy-Syncprojects-0.0.1/mopidy_syncprojects/syncprojects.py | import datetime
import logging
import re
import requests
import string
import time
import unicodedata
from contextlib import closing
from mopidy import httpclient
from mopidy.models import Album, Artist, Track
from requests.adapters import HTTPAdapter
from requests.exceptions import HTTPError
from urllib.parse import quote_plus
import mopidy_syncprojects
logger = logging.getLogger(__name__)
def safe_url(uri):
return quote_plus(
unicodedata.normalize("NFKD", uri).encode("ASCII", "ignore")
)
def readable_url(uri):
valid_chars = f"-_.() {string.ascii_letters}{string.digits}"
safe_uri = (
unicodedata.normalize("NFKD", uri).encode("ascii", "ignore").decode()
)
return re.sub(
r"\s+", " ", "".join(c for c in safe_uri if c in valid_chars)
).strip()
def get_user_url(user_id):
return "me" if not user_id else f"users/self/"
def get_requests_session(proxy_config, user_agent, token, public=False):
proxy = httpclient.format_proxy(proxy_config)
full_user_agent = httpclient.format_user_agent(user_agent)
session = requests.Session()
session.proxies.update({"http": proxy, "https": proxy})
if not public:
session.headers.update({"user-agent": full_user_agent})
session.headers.update({"Authorization": f"Token {token}"})
return session
def get_mopidy_requests_session(config, public=False):
return get_requests_session(
proxy_config=config["proxy"],
user_agent=(
f"{mopidy_syncprojects.Extension.dist_name}/"
f"{mopidy_syncprojects.__version__}"
),
token=config["syncprojects"]["auth_token"],
public=public,
)
class cache: # noqa
# TODO: merge this to util library
def __init__(self, ctl=8, ttl=3600):
self.cache = {}
self.ctl = ctl
self.ttl = ttl
self._call_count = 1
def __call__(self, func):
def _memoized(*args):
self.func = func
now = time.time()
try:
value, last_update = self.cache[args]
age = now - last_update
if self._call_count >= self.ctl or age > self.ttl:
self._call_count = 1
raise AttributeError
self._call_count += 1
return value
except (KeyError, AttributeError):
value = self.func(*args)
self.cache[args] = (value, now)
return value
except TypeError:
return self.func(*args)
return _memoized
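# Usage sketch for the decorator above: re-fetch after 8 cached hits or once
# the entry is an hour old, whichever comes first:
#   @cache(ctl=8, ttl=3600)
#   def expensive_lookup(key): ...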
class ThrottlingHttpAdapter(HTTPAdapter):
def __init__(self, burst_length, burst_window, wait_window):
super().__init__()
self.max_hits = burst_length
self.hits = 0
self.rate = burst_length / burst_window
self.burst_window = datetime.timedelta(seconds=burst_window)
self.total_window = datetime.timedelta(
seconds=burst_window + wait_window
)
self.timestamp = datetime.datetime.min
def _is_too_many_requests(self):
now = datetime.datetime.utcnow()
if now < self.timestamp + self.total_window:
elapsed = now - self.timestamp
self.hits += 1
if (now < self.timestamp + self.burst_window) and (
self.hits < self.max_hits
):
return False
else:
logger.debug(
f"Request throttling after {self.hits} hits in "
f"{elapsed.microseconds} us "
f"(window until {self.timestamp + self.total_window})"
)
return True
else:
self.timestamp = now
self.hits = 0
return False
def send(self, request, **kwargs):
if request.method == "HEAD" and self._is_too_many_requests():
resp = requests.Response()
resp.request = request
resp.url = request.url
resp.status_code = 429
            resp.reason = (
                f"Client throttled to {self.rate:.1f} requests per second"
            )
return resp
else:
return super().send(request, **kwargs)
def sanitize_list(tracks):
return [t for t in tracks if t]
class SyncprojectsClient:
public_client_id = None
def __init__(self, config):
super().__init__()
self.explore_songs = config["syncprojects"].get("explore_songs", 25)
self.http_client = get_mopidy_requests_session(config)
adapter = ThrottlingHttpAdapter(
burst_length=3, burst_window=1, wait_window=10
)
self.http_client.mount("https://www.syncprojects.app/", adapter)
self.public_stream_client = get_mopidy_requests_session(
config, public=True
)
@property
@cache()
def user(self):
return self._get("users/self/")
@cache(ttl=10)
def get_projects(self):
projects = self._get('projects/', multi=True)
return self.parse_projects(projects)
@cache(ttl=10)
def get_songs(self, artist_id=None):
if artist_id:
projects = [self._get(f'projects/{artist_id}/')]
else:
projects = self._get('projects/', multi=True)
return self.parse_songs_from_projects(projects)
@cache()
def get_track_artwork(self, track_id):
track = self._get(f"songs/{track_id}/")
if track["album"] is not None:
album = self.get_album(track["album"])
if album["cover"] is not None:
return album["cover"]
artist = self.get_project(track["project"])
return artist["image"]
@cache()
def get_track(self, track_id, streamable=False):
logger.debug(f"Getting info for track with ID {track_id}")
try:
return self.parse_track(self._get(f"songs/{track_id}/"), None, streamable)
except Exception:
import traceback
traceback.print_exc()
return None
@cache()
def get_project(self, project_id):
logger.debug(f"Getting info for project with ID {project_id}")
try:
return self._get(f"projects/{project_id}/")
except Exception:
return None
@cache()
def get_album(self, album_id):
logger.debug(f"Getting info for album with ID {album_id}")
try:
return self._get(f"albums/{album_id}/")
except Exception:
return None
@staticmethod
def get_uri_id(track):
logger.debug(f"Parsing track {track}")
if hasattr(track, "uri"):
track = track.uri
return track.split(".")[-1]
def search(self, query):
raise NotImplementedError()
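        # NOTE: everything below is unreachable until the raise above is removed.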
query = quote_plus(query.encode("utf-8"))
search_results = self._get(f"tracks?q={query}", limit=True)
tracks = []
for track in search_results:
tracks.append(self.parse_track(track))
return sanitize_list(tracks)
def parse_songs_from_projects(self, res):
tracks = []
logger.debug(f"Parsing {len(res)} result(s)...")
for project in res:
for song in project['songs']:
tracks.append(self.parse_track(song, project))
return sanitize_list(tracks)
def parse_projects(self, res):
artists = []
logger.debug(f"Parsing {len(res)} result(s)...")
for project in res:
artists.append(self.parse_artists(project))
return sanitize_list(artists)
def _get(self, path, limit=None, multi=False):
url = f"https://www.syncprojects.app/api/v1/{path}"
params = []
if limit:
params.insert(0, ("limit", self.explore_songs))
try:
if multi:
results = []
while True:
with closing(self.http_client.get(url, params=params)) as res:
logger.debug(f"Requested {res.url}")
res.raise_for_status()
j = res.json()
if not multi:
return j
results.extend(j['results'])
if 'next' in j and j['next'] is not None:
url = j['next']
else:
return results
except Exception as e:
if isinstance(e, HTTPError) and e.response.status_code == 401:
logger.error(
'Invalid "auth_token" used for Syncprojects '
"authentication!"
)
else:
logger.error(f"Syncprojects API request failed: {e}")
return {}
@cache()
def parse_artists(self, project):
artist_kwargs = {"name": project["name"],
"uri": f"syncprojects:artist/{readable_url(project['name'])}.{project['id']}"}
return Artist(**artist_kwargs)
@cache()
def parse_track(self, song, project=None, remote_url=False):
if song['url'] is None:
return None
if project is None:
project = self.get_project(song['project'])
if song['album'] is not None:
album = self.get_album(song['album'])
else:
album = {'name': "Unknown Album"}
track_kwargs = {}
artist_kwargs = {}
album_kwargs = {}
track_kwargs["name"] = song["name"]
if song["album_order"] is not None:
track_kwargs["track_no"] = song["album_order"]
artist_kwargs["name"] = project["name"]
album_kwargs["name"] = album["name"]
if remote_url:
track_kwargs["uri"] = song["url"]
else:
track_kwargs[
"uri"
] = f"syncprojects:song/{readable_url(song['name'])}.{song['id']}"
if artist_kwargs:
track_kwargs["artists"] = [Artist(**artist_kwargs)]
if album_kwargs:
track_kwargs["album"] = Album(**album_kwargs)
return Track(**track_kwargs)
@staticmethod
def parse_fail_reason(reason):
return "" if reason == "Unknown" else f"({reason})" | PypiClean |
/Messenger_Desktop_Server_Application-0.7.tar.gz/Messenger_Desktop_Server_Application-0.7/server/gui/config_window.py |
import sys
from PyQt5.QtWidgets import QApplication, QMessageBox, QDialog, QLabel, \
QLineEdit, QPushButton, QFileDialog
class ConfigWindow(QDialog):
"""
The class describes the settings window.
"""
def __init__(self):
super().__init__()
        # Build the window interface.
self.initUI()
def initUI(self):
"""The method creates interface elements of the main window."""
self.setFixedSize(370, 200)
self.setWindowTitle('Настройки сервера')
# Text to the line with the path to the database.
self.db_path_text = QLabel('Путь к файлу БД: ', self)
self.db_path_text.move(15, 15)
self.db_path_text.setFixedSize(240, 15)
# The path to the database.
self.db_path = QLineEdit(self)
self.db_path.move(15, 35)
self.db_path.setFixedSize(250, 20)
self.db_path.setReadOnly(True)
# Button to select the path to the database.
self.db_path_select = QPushButton('Обзор...', self)
self.db_path_select.move(270, 30)
# The text for the database file name field.
self.db_file_text = QLabel('Имя файла БД: ', self)
self.db_file_text.move(15, 70)
self.db_file_text.setFixedSize(150, 20)
# Field for entering database file name.
self.db_file_name = QLineEdit(self)
self.db_file_name.move(125, 70)
self.db_file_name.setFixedSize(235, 20)
        # Text for the IP address input field.
self.ip_address_text = QLabel('IP-адрес: ', self)
self.ip_address_text.move(15, 95)
self.ip_address_text.setFixedSize(150, 20)
# A field for entering an IP address.
self.ip_address_field = QLineEdit(self)
self.ip_address_field.move(125, 95)
self.ip_address_field.setFixedSize(235, 20)
# Text for port input field.
self.port_text = QLabel('Порт: ', self)
self.port_text.move(15, 120)
self.port_text.setFixedSize(150, 20)
# Field for entering the port.
self.port_field = QLineEdit(self)
self.port_field.move(125, 120)
self.port_field.setFixedSize(235, 20)
# Button for saving settings.
self.save_button = QPushButton('Сохранить', self)
self.save_button.move(80, 155)
# Button to close the window.
self.close_button = QPushButton(' Закрыть ', self)
self.close_button.move(190, 155)
self.close_button.clicked.connect(self.close)
self.db_path_select.clicked.connect(self.open_path_select)
self.show()
def open_path_select(self):
"""
The handler method opens the path selection window.
"""
window = QFileDialog()
path = window.getExistingDirectory()
self.db_path.insert(path)
if __name__ == '__main__':
# Create an application object.
APP = QApplication(sys.argv)
# Create a message box.
MESSAGE = QMessageBox
# Test settings menu.
CONFIG_WINDOW = ConfigWindow()
# Application launch (event polling cycle).
APP.exec_() | PypiClean |
/AtomPy-0.5.1.1.zip/AtomPy-0.5.1.1/atompy/DownloadAPI.py | import httplib2
import sys
from httplib2 import Http
from urllib import urlencode
from apiclient.discovery import build
from oauth2client.client import AccessTokenCredentials
import gdata.docs
import gdata.docs.service
import gdata.spreadsheet.service
import pandas
from StringIO import StringIO
def getDriveService():
#Gets the drive service instance using google drive credentials
#Takes: nothing
#Returns: a drive service instance
try:
#Google Drive Credentials (unique per account)
ClientID = '230798942269.apps.googleusercontent.com'
ClientSecret = 'JZ7dyNbQEHQ9XLXHxcFlcAad'
OAUTH_SCOPE = 'https://www.googleapis.com/auth/drive'
REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob'
SavedRefreshToken = '1/WjgLdc0RekqCu0s5uae1dJm9ZbmyufQWulsaXdvu3b8'
h = Http(disable_ssl_certificate_validation=True)
post_data = {'client_id':ClientID,
'client_secret':ClientSecret,
'refresh_token':SavedRefreshToken,
'grant_type':'refresh_token'}
headers = {'Content-type': 'application/x-www-form-urlencoded'}
resp, content = h.request("https://accounts.google.com/o/oauth2/token",
"POST",
urlencode(post_data),
headers=headers)
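        # Crude parse of the JSON token response: this assumes the value of
        # access_token is the fourth whitespace-separated field, then strips
        # the surrounding quotes and trailing comma.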
content2 = content.split()
access_token = content2[3]
access_token = access_token.replace('"', '')
access_token = access_token.replace(',', '')
#Exchange the code / access token for credentials
credentials = AccessTokenCredentials(access_token, ClientID)
        #Initialize the drive service and return it
http = Http(disable_ssl_certificate_validation=True)
http = credentials.authorize(http)
return build('drive', 'v2', http=http)
#Error may occur if the user's network is down
except httplib2.ServerNotFoundError:
sys.exit('Can not connect to Google Drive. Please check internet connection.')
#Unexpected error may occur
except httplib2.HttpLib2Error, e:
sys.exit('httplib2 exception: ' + str(e))
def getFileList(drive_service):
#Retrieves a list of the files in the database
#Takes: a drive service instance
#Returns: a list containing the google file info
onlineDataFiles = []
page_token = None
while True:
#Request a list of the database files
param = {}
if page_token:
param['pageToken'] = page_token
files = drive_service.files().list(**param).execute()
#Make sure that the files aren't in the trash or something like that
for x in range(len(files['items'])):
if files['items'][x]['labels']['hidden'] == False:
if files['items'][x]['labels']['trashed'] == False:
if files['items'][x]['labels']['restricted'] == False:
onlineDataFiles.append(files['items'][x])
        #Iterate through all of the pages in the database
page_token = files.get('nextPageToken')
if not page_token:
break
#Return the files
return onlineDataFiles
def recursiveList(driveService, fileList, id, level):
#Print the current file/folder
printString = ''
for y in range(level):
printString += '-'
found = False
for y in range(len(fileList)):
if fileList[y]['id'] == id:
found = True
printString += str(fileList[y]['title'])
if found == True:
print printString
#Figure out if there are some children
children = driveService.children().list(folderId=id).execute()['items']
#Check to see that they are valid
temp = []
for x in range(len(children)):
for y in range(len(fileList)):
if fileList[y]['id'] == children[x]['id']:
temp.append(children[x])
children = temp
#If there are children, recursive call each
if len(children) > 0:
for y in range(len(children)):
recursiveList(driveService, fileList, children[y]['id'], level+1)
def listContent():
#Prints a directory view of database
#First, lets get drive service
driveService = getDriveService()
#Now get our files listing (includes folders)
fileList = getFileList(driveService)
    #Now sort the files by first separating the
#files from the folders
files = []
folders = []
for x in range(len(fileList)):
if 'spreadsheet' in fileList[x]['mimeType']:
files.append(fileList[x])
if 'folder' in fileList[x]['mimeType']:
folders.append(fileList[x])
#Find the root
root = -1
for x in range(len(folders)):
if folders[x]['parents'][0]['isRoot'] == True:
root = folders[x]['id']
break
#Now begin the recursive call sequence
recursiveList(driveService, fileList, root, 0)
def getGDClient():
#Returns a Google Drive Client object that is
    #already authenticated with valid credentials to
#the AtomPy database
gd_client = gdata.spreadsheet.service.SpreadsheetsService()
gd_client.email = "atompython@gmail.com"
gd_client.password = "kalamazoo01"
gd_client.ProgrammaticLogin()
return gd_client
def getFile(filename):
#Gets the file data from Google Drive with
#the queried filename, and returns a workbook
#object containing a list that contains
#titles, data, and sources for each worksheet
# (these are later transferred to Ion)
print 'Retrieving workbook: ' + filename
#First, lets login to our drive
GDClient = getGDClient()
#Create our list object
workbook = {'title':None,
'worksheets':[]}
#Get our query feed
q = gdata.spreadsheet.service.DocumentQuery()
feed = GDClient.GetSpreadsheetsFeed(query=q)
#Now to search for the file on the drive
found = -1
for x in range(len(feed.entry)):
if feed.entry[x].title.text == filename:
found = x
workbook['title'] = feed.entry[x].title.text
break
#If the file is not on the drive, return error
if found == -1:
return 'ERROR: File (' + filename + ') not found in database.'
#Now get the spreadsheet ID and use it to change
#the feed from a workbook feed to a spreadsheets
#feed
workbook_id = feed.entry[found].id.text.rsplit('/',1)[1]
feed = GDClient.GetWorksheetsFeed(workbook_id)
#Now cycle through all of the worksheets and add the
#data and source information to the file object
for x in range(len(feed.entry)):
#First, lets get our worksheet ID
worksheet_id = feed.entry[x].id.text.rsplit('/',1)[1]
#Second, lets create our worksheet list object
worksheet = {'title':None,#plain string
'type':feed.entry[x].title.text,#string of data type
'data':None,#data is a Pandas dataframe
'sources':None}#sources are in an array
#Now to get the data and sources from the worksheet
#In order to get all of the data, we need to get the
#spreadsheet cell feed (comes in the form of list of rows)
query = gdata.spreadsheet.service.CellQuery()
cells = GDClient.GetCellsFeed(workbook_id, worksheet_id, query=query)
nCol = int(cells.col_count.text)
nRow = int(cells.row_count.text)
cells = cells.entry
#Grab the title
worksheet['title'] = str(cells[0].content.text)
#Now cycle through all of the rows and extract the data
#into a 2D Array (initialized to NULL)
rawData = [['' for z in range(nCol)] for y in range(nRow)]
for y in range(len(cells)):
rawData[int(cells[y].cell.row)-1][int(cells[y].cell.col)-1] = str(cells[y].content.text)
#Delete all empty rows
delOffset = 0
for y in range(len(rawData)):
if rawData[y-delOffset] == ['' for z in range(nCol)]:
rawData.pop(y - delOffset)
delOffset += 1
nRow = len(rawData)
#Delete all empty columns
delOffset = 0
for y in range(nCol):
emptyCol = True
for z in range(nRow):
if rawData[z][y-delOffset] != '':
emptyCol = False
break
if emptyCol == True:
for z in range(nRow):
rawData[z].pop(y-delOffset)
delOffset += 1
#Figure out where the category line is for
#splitting up the file into its components
#Also fix the category line values
#And remove units
categoryLine = -1
for y in range(len(rawData)):
if rawData[y][0] == 'Z':
categoryLine = y
break
lastAddition = None
additionModified = False
for y in range(len(rawData[categoryLine])):
#Remove units
if '(' in rawData[categoryLine][y] and ')' in rawData[categoryLine][y]:
rawData[categoryLine][y] = rawData[categoryLine][y].split('(')[0]
#Remove spaces
while ' ' in rawData[categoryLine][y]:
rawData[categoryLine][y] = rawData[categoryLine][y].replace(' ','')
#Add source info
if lastAddition == None and rawData[categoryLine-1][y] != '':
lastAddition = rawData[categoryLine-1][y]
additionModified = True
if lastAddition != None and rawData[categoryLine-1][y] != '' and additionModified == False:
lastAddition = rawData[categoryLine-1][y]
if lastAddition != None:
rawData[categoryLine][y] += '_' + lastAddition
additionModified = False
#Now to extract the sources from the file content
rawSources = ''
for y in range(len(rawData)):
#Breakpoint
if y == categoryLine - 1:
break
#Skip first line (title already collected)
if y == 0:
continue
#Collect the info from the first cell
rawSources += rawData[y][0] + '\n'
worksheet['sources'] = rawSources
#Bug fix: data containing commas replaced with dashes
    #Otherwise the CSV file doesn't work properly
for y in range(len(rawData)):
for z in range(len(rawData[y])):
if ',' in rawData[y][z]:
rawData[y][z] = rawData[y][z].replace(',','-')
#Now to convert the raw data into a csv file string
csv_string = ''
for y in range(len(rawData)):
#Skip until we get to the data
if y < categoryLine:
continue
#Convert the data
for z in range(len(rawData[y])):
csv_string += rawData[y][z] + ','
csv_string = csv_string[:-1] + '\n'
        #Setup our dataframe
        dataframe = None
        #Reference data
        #(use 'is None' checks: comparing a pandas DataFrame with '== None' is ambiguous and raises)
        if 'elements' in workbook['title'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z'])
        if 'ions' in workbook['title'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z','N'])
        if 'isotopes' in workbook['title'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z','M'])
        #Regular data
        if 'E' in worksheet['type'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z','N','i'])
        if 'A' in worksheet['type'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z','N','k','i'])
        if 'U' in worksheet['type'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z','N','k','i','np'])
        if 'O' in worksheet['type'] and dataframe is None:
            dataframe = pandas.read_csv(StringIO(csv_string), index_col=['Z','N'])
#Set our dataframe to the worksheet object
worksheet['data'] = dataframe
workbook['worksheets'].append(worksheet)
    print('Finished workbook: ' + workbook['title'])
return workbook | PypiClean |
/Anisearch-1.1.0.tar.gz/Anisearch-1.1.0/README.md | # Anisearch
Anilist API module for Python. You only need to copy the Anilist folder into your own project.
### Executing program
* How to run the program
* Import module
```python
from Anisearch import Anilist
instance = Anilist()
```
From there you can get information from Anilist using their new GraphQL API.
To get data on a known ID.
```python
instance.get.anime(13601) # Return data on PSYCHO-PASS
instance.get.manga(64127) # Return data on Mahouka Koukou no Rettousei
instance.get.staff(113803) # Return data on Kantoku
instance.get.studio(7) # Return data on J.C. Staff
```
Searching is also making a return.
```python
instance.search.anime("Sword") # Anime search results for Sword.
instance.search.manga("Sword") # Manga search results for Sword.
instance.search.character("Tsutsukakushi") # Character search results for Tsutsukakushi.
instance.search.staff("Kantoku") # Staff search results for Kantoku.
instance.search.studio("J.C. Staff") # Studio search result for J.C. Staff.
```
A note about the searching and getting:
```python
search(term, page = 1, perpage = 10)
get(item_id)
```
Pagination is done automatically in the API. By default, you'll get 10 results per page.
If you want more, just change the `perpage` value. `pageInfo` is always the first result in the returned data.
Pages start at 1; if you want another page, just replace `page` with the next number, as shown below.
`query_string` sets what info you want returned.
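For example (hypothetical search term and values), fetching the second page with 25 results per page looks like this:
```python
instance.search.anime("Sword", page=2, perpage=25)  # results 26-50
```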
### Customization
You can set your own settings as follows
```python
import logging
from Anisearch import Anilist
# for init instance
SETTINGS = {
'header': {
'Content-Type': 'application/json',
'User-Agent': 'Anisearch (github.com/MeGaNeKoS/Anisearch)',
'Accept': 'application/json'},
'api_url': 'https://graphql.anilist.co'
}
request_param = {} # this is for the requests lib parameters.
instance = Anilist(log_level=logging.INFO, settings = SETTINGS, request_param = request_param)
# for instance get/search parameters
retry = 10
instance.get.anime(13601, num_retries=retry) # default 10
```
### Todo
* Add more error handling when the API returns an error.
  - currently it is limited to 429 Too Many Requests. You can help me by providing a log when other errors occur. | PypiClean
/NlvWxPython-4.2.0-cp37-cp37m-win_amd64.whl/wx/lib/wxcairo/wx_pycairo.py | import wx
from six import PY3
import cairo
import ctypes
import ctypes.util
#----------------------------------------------------------------------------
# A reference to the cairo shared lib via ctypes.CDLL
cairoLib = None
# A reference to the pycairo C API structure
pycairoAPI = None
# a convenience function, just to save a bit of typing below
def voidp(ptr):
"""Convert a SIP void* type to a ctypes c_void_p"""
return ctypes.c_void_p(int(ptr))
#----------------------------------------------------------------------------
def _ContextFromDC(dc):
"""
Creates and returns a Cairo context object using the wxDC as the
surface. (Only window, client, paint and memory DC's are allowed
at this time.)
"""
if not isinstance(dc, wx.WindowDC) and not isinstance(dc, wx.MemoryDC):
raise TypeError("Only window and memory DC's are supported at this time.")
if 'wxMac' in wx.PlatformInfo:
width, height = dc.GetSize()
# use the CGContextRef of the DC to make the cairo surface
cgc = dc.GetHandle()
assert cgc is not None, "Unable to get CGContext from DC."
cgref = voidp( cgc )
surfaceptr = voidp(surface_create(cgref, width, height))
# create a cairo context for that surface
ctxptr = voidp(cairo_create(surfaceptr))
# Turn it into a pycairo context object
ctx = pycairoAPI.Context_FromContext(ctxptr, pycairoAPI.Context_Type, None)
# The context keeps its own reference to the surface
cairoLib.cairo_surface_destroy(surfaceptr)
elif 'wxMSW' in wx.PlatformInfo:
# This one is easy, just fetch the HDC and use PyCairo to make
# the surface and context.
hdc = dc.GetHandle()
        # Ensure the pointer value is clamped into the range of a C signed long
hdc = ctypes.c_long(hdc)
surface = cairo.Win32Surface(hdc.value)
ctx = cairo.Context(surface)
elif 'wxGTK' in wx.PlatformInfo:
if 'gtk3' in wx.PlatformInfo:
# With wxGTK3, GetHandle() returns a cairo context directly
ctxptr = voidp( dc.GetHandle() )
# pyCairo will try to destroy it so we need to increase ref count
cairoLib.cairo_reference(ctxptr)
else:
# Get the GdkDrawable from the dc
drawable = voidp( dc.GetHandle() )
# Call a GDK API to create a cairo context
ctxptr = gdkLib.gdk_cairo_create(drawable)
# Turn it into a pycairo context object
ctx = pycairoAPI.Context_FromContext(ctxptr, pycairoAPI.Context_Type, None)
else:
raise NotImplementedError("Help me, I'm lost...")
return ctx
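# A minimal usage sketch (an assumption for illustration, not part of the
# original module): inside an EVT_PAINT handler one would typically write
#
#   dc = wx.PaintDC(self)
#   ctx = _ContextFromDC(dc)
#   ctx.set_source_rgb(1, 0, 0)
#   ctx.rectangle(10, 10, 50, 50)
#   ctx.fill()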
#----------------------------------------------------------------------------
def _FontFaceFromFont(font):
"""
Creates and returns a cairo.FontFace object from the native
information in a wx.Font.
"""
if 'wxMac' in wx.PlatformInfo:
fontfaceptr = font_face_create(voidp(font.OSXGetCGFont()))
fontface = pycairoAPI.FontFace_FromFontFace(fontfaceptr)
elif 'wxMSW' in wx.PlatformInfo:
fontfaceptr = voidp( cairoLib.cairo_win32_font_face_create_for_hfont(
ctypes.c_ulong(font.GetHFONT())) )
fontface = pycairoAPI.FontFace_FromFontFace(fontfaceptr)
elif 'wxGTK' in wx.PlatformInfo:
# wow, this is a hell of a lot of steps...
desc = voidp( font.GetPangoFontDescription() )
pcfm = voidp(pcLib.pango_cairo_font_map_get_default())
pctx = voidp(gdkLib.gdk_pango_context_get())
pfnt = voidp( pcLib.pango_font_map_load_font(pcfm, pctx, desc) )
scaledfontptr = voidp( pcLib.pango_cairo_font_get_scaled_font(pfnt) )
fontfaceptr = voidp(cairoLib.cairo_scaled_font_get_font_face(scaledfontptr))
cairoLib.cairo_font_face_reference(fontfaceptr)
fontface = pycairoAPI.FontFace_FromFontFace(fontfaceptr)
gdkLib.g_object_unref(pctx)
else:
raise NotImplementedError("Help me, I'm lost...")
return fontface
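# Usage sketch (illustrative assumption): combine the font face with a cairo
# context created above, e.g.
#
#   ctx.set_font_face(_FontFaceFromFont(wx.Font(wx.FontInfo(12))))
#   ctx.set_font_size(12)
#   ctx.show_text("hello")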
#----------------------------------------------------------------------------
# Only implementation helpers after this point
#----------------------------------------------------------------------------
def _findCairoLib():
"""
Try to locate the Cairo shared library and make a CDLL for it.
"""
global cairoLib
if cairoLib is not None:
return
names = ['cairo', 'cairo-2', 'libcairo', 'libcairo-2']
# first look using just the base name
for name in names:
try:
cairoLib = ctypes.CDLL(name)
return
except:
pass
# if that didn't work then use the ctypes util to search the paths
# appropriate for the system
for name in names:
location = ctypes.util.find_library(name)
if location:
try:
cairoLib = ctypes.CDLL(location)
return
except:
pass
# If the above didn't find it on OS X then we still have a
# trick up our sleeve...
if 'wxMac' in wx.PlatformInfo:
# look at the libs linked to by the pycairo extension module
import macholib.MachO
m = macholib.MachO.MachO(cairo._cairo.__file__)
for h in m.headers:
for idx, name, path in h.walkRelocatables():
if 'libcairo' in path:
try:
cairoLib = ctypes.CDLL(path)
return
except:
pass
if not cairoLib:
raise RuntimeError("Unable to find the Cairo shared library")
#----------------------------------------------------------------------------
# For other DLLs we'll just use a dictionary to track them, as there
# probably isn't any need to use them outside of this module.
_dlls = dict()
def _findHelper(names, key, msg):
dll = _dlls.get(key, None)
if dll is not None:
return dll
location = None
for name in names:
location = ctypes.util.find_library(name)
if location:
break
if not location:
raise RuntimeError(msg)
dll = ctypes.CDLL(location)
_dlls[key] = dll
return dll
def _findGDKLib():
if 'gtk3' in wx.PlatformInfo:
libname = 'gdk-3'
else:
libname = 'gdk-x11-2.0'
return _findHelper([libname], 'gdk',
"Unable to find the GDK shared library")
def _findPangoCairoLib():
return _findHelper(['pangocairo-1.0'], 'pangocairo',
"Unable to find the pangocairo shared library")
def _findAppSvcLib():
return _findHelper(['ApplicationServices'], 'appsvc',
"Unable to find the ApplicationServices Framework")
#----------------------------------------------------------------------------
# PyCairo exports a C API in a structure via a PyCObject. Using
# ctypes will let us use that API from Python too. We'll use it to
# convert a C pointer value to pycairo objects. The information about
# this API structure is gleaned from pycairo.h.
class Pycairo_CAPI(ctypes.Structure):
if cairo.version_info < (1,8): # This structure is known good with pycairo 1.6.4
_fields_ = [
('Context_Type', ctypes.py_object),
('Context_FromContext', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object,
ctypes.py_object)),
('FontFace_Type', ctypes.py_object),
('FontFace_FromFontFace', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('FontOptions_Type', ctypes.py_object),
('FontOptions_FromFontOptions', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Matrix_Type', ctypes.py_object),
('Matrix_FromMatrix', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Path_Type', ctypes.py_object),
('Path_FromPath', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Pattern_Type', ctypes.py_object),
('SolidPattern_Type', ctypes.py_object),
('SurfacePattern_Type', ctypes.py_object),
('Gradient_Type', ctypes.py_object),
('LinearGradient_Type', ctypes.py_object),
('RadialGradient_Type', ctypes.py_object),
('Pattern_FromPattern', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('ScaledFont_Type', ctypes.py_object),
('ScaledFont_FromScaledFont', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Surface_Type', ctypes.py_object),
('ImageSurface_Type', ctypes.py_object),
('PDFSurface_Type', ctypes.py_object),
('PSSurface_Type', ctypes.py_object),
('SVGSurface_Type', ctypes.py_object),
('Win32Surface_Type', ctypes.py_object),
('XlibSurface_Type', ctypes.py_object),
('Surface_FromSurface', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object)),
('Check_Status', ctypes.PYFUNCTYPE(ctypes.c_int, ctypes.c_int))]
# This structure is known good with pycairo 1.8.4.
# We have to also test for (1,10,8) because pycairo 1.8.10 has an
# incorrect version_info value
elif cairo.version_info < (1,9) or cairo.version_info == (1,10,8):
_fields_ = [
('Context_Type', ctypes.py_object),
('Context_FromContext', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object,
ctypes.py_object)),
('FontFace_Type', ctypes.py_object),
('ToyFontFace_Type', ctypes.py_object), #** new in 1.8.4
('FontFace_FromFontFace', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('FontOptions_Type', ctypes.py_object),
('FontOptions_FromFontOptions', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Matrix_Type', ctypes.py_object),
('Matrix_FromMatrix', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Path_Type', ctypes.py_object),
('Path_FromPath', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Pattern_Type', ctypes.py_object),
('SolidPattern_Type', ctypes.py_object),
('SurfacePattern_Type', ctypes.py_object),
('Gradient_Type', ctypes.py_object),
('LinearGradient_Type', ctypes.py_object),
('RadialGradient_Type', ctypes.py_object),
('Pattern_FromPattern', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p,
ctypes.py_object)), #** changed in 1.8.4
('ScaledFont_Type', ctypes.py_object),
('ScaledFont_FromScaledFont', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Surface_Type', ctypes.py_object),
('ImageSurface_Type', ctypes.py_object),
('PDFSurface_Type', ctypes.py_object),
('PSSurface_Type', ctypes.py_object),
('SVGSurface_Type', ctypes.py_object),
('Win32Surface_Type', ctypes.py_object),
('XlibSurface_Type', ctypes.py_object),
('Surface_FromSurface', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object)),
('Check_Status', ctypes.PYFUNCTYPE(ctypes.c_int, ctypes.c_int))]
# This structure is known good with pycairo 1.10.0. They keep adding stuff
# to the middle of the structure instead of only adding to the end!
elif cairo.version_info < (1,11):
_fields_ = [
('Context_Type', ctypes.py_object),
('Context_FromContext', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object,
ctypes.py_object)),
('FontFace_Type', ctypes.py_object),
('ToyFontFace_Type', ctypes.py_object),
('FontFace_FromFontFace', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('FontOptions_Type', ctypes.py_object),
('FontOptions_FromFontOptions', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Matrix_Type', ctypes.py_object),
('Matrix_FromMatrix', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Path_Type', ctypes.py_object),
('Path_FromPath', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Pattern_Type', ctypes.py_object),
('SolidPattern_Type', ctypes.py_object),
('SurfacePattern_Type', ctypes.py_object),
('Gradient_Type', ctypes.py_object),
('LinearGradient_Type', ctypes.py_object),
('RadialGradient_Type', ctypes.py_object),
('Pattern_FromPattern', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p,
ctypes.py_object)), #** changed in 1.8.4
('ScaledFont_Type', ctypes.py_object),
('ScaledFont_FromScaledFont', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Surface_Type', ctypes.py_object),
('ImageSurface_Type', ctypes.py_object),
('PDFSurface_Type', ctypes.py_object),
('PSSurface_Type', ctypes.py_object),
('SVGSurface_Type', ctypes.py_object),
('Win32Surface_Type', ctypes.py_object),
('Win32PrintingSurface_Type', ctypes.py_object), #** new
('XCBSurface_Type', ctypes.py_object), #** new
('XlibSurface_Type', ctypes.py_object),
('Surface_FromSurface', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object)),
('Check_Status', ctypes.PYFUNCTYPE(ctypes.c_int, ctypes.c_int))]
# This structure is known good with pycairo 1.11.1+.
else:
_fields_ = [
('Context_Type', ctypes.py_object),
('Context_FromContext', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object,
ctypes.py_object)),
('FontFace_Type', ctypes.py_object),
('ToyFontFace_Type', ctypes.py_object),
('FontFace_FromFontFace', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('FontOptions_Type', ctypes.py_object),
('FontOptions_FromFontOptions', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Matrix_Type', ctypes.py_object),
('Matrix_FromMatrix', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Path_Type', ctypes.py_object),
('Path_FromPath', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Pattern_Type', ctypes.py_object),
('SolidPattern_Type', ctypes.py_object),
('SurfacePattern_Type', ctypes.py_object),
('Gradient_Type', ctypes.py_object),
('LinearGradient_Type', ctypes.py_object),
('RadialGradient_Type', ctypes.py_object),
('Pattern_FromPattern', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p,
ctypes.py_object)), #** changed in 1.8.4
('ScaledFont_Type', ctypes.py_object),
('ScaledFont_FromScaledFont', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Surface_Type', ctypes.py_object),
('ImageSurface_Type', ctypes.py_object),
('PDFSurface_Type', ctypes.py_object),
('PSSurface_Type', ctypes.py_object),
('SVGSurface_Type', ctypes.py_object),
('Win32Surface_Type', ctypes.py_object),
('Win32PrintingSurface_Type', ctypes.py_object), #** new
('XCBSurface_Type', ctypes.py_object), #** new
('XlibSurface_Type', ctypes.py_object),
('Surface_FromSurface', ctypes.PYFUNCTYPE(ctypes.py_object,
ctypes.c_void_p,
ctypes.py_object)),
('Check_Status', ctypes.PYFUNCTYPE(ctypes.c_int, ctypes.c_int)),
('RectangleInt_Type', ctypes.py_object),
('RectangleInt_FromRectangleInt', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('Region_Type', ctypes.py_object),
('Region_FromRegion', ctypes.PYFUNCTYPE(ctypes.py_object, ctypes.c_void_p)),
('RecordingSurface_Type', ctypes.py_object)]
def _loadPycairoAPI():
global pycairoAPI
if pycairoAPI is not None:
return
if not hasattr(cairo, 'CAPI'):
return
if PY3:
PyCapsule_GetPointer = ctypes.pythonapi.PyCapsule_GetPointer
PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
PyCapsule_GetPointer.restype = ctypes.c_void_p
ptr = PyCapsule_GetPointer(cairo.CAPI, b'cairo.CAPI')
else:
PyCObject_AsVoidPtr = ctypes.pythonapi.PyCObject_AsVoidPtr
PyCObject_AsVoidPtr.argtypes = [ctypes.py_object]
PyCObject_AsVoidPtr.restype = ctypes.c_void_p
ptr = PyCObject_AsVoidPtr(cairo.CAPI)
pycairoAPI = ctypes.cast(ptr, ctypes.POINTER(Pycairo_CAPI)).contents
#----------------------------------------------------------------------------
# Load these at import time. That seems a bit better than doing it at
# first use...
_findCairoLib()
_loadPycairoAPI()
if 'wxMac' in wx.PlatformInfo:
surface_create = cairoLib.cairo_quartz_surface_create_for_cg_context
surface_create.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_int]
surface_create.restype = ctypes.c_void_p
cairo_create = cairoLib.cairo_create
cairo_create.argtypes = [ctypes.c_void_p]
cairo_create.restype = ctypes.c_void_p
font_face_create = cairoLib.cairo_quartz_font_face_create_for_cgfont
font_face_create.argtypes = [ctypes.c_void_p]
font_face_create.restype = ctypes.c_void_p
elif 'wxMSW' in wx.PlatformInfo:
cairoLib.cairo_win32_font_face_create_for_hfont.restype = ctypes.c_void_p
elif 'wxGTK' in wx.PlatformInfo:
gdkLib = _findGDKLib()
pcLib = _findPangoCairoLib()
gdkLib.gdk_cairo_create.restype = ctypes.c_void_p
pcLib.pango_cairo_font_map_get_default.restype = ctypes.c_void_p
gdkLib.gdk_pango_context_get.restype = ctypes.c_void_p
pcLib.pango_font_map_load_font.restype = ctypes.c_void_p
pcLib.pango_cairo_font_get_scaled_font.restype = ctypes.c_void_p
cairoLib.cairo_scaled_font_get_font_face.restype = ctypes.c_void_p
#---------------------------------------------------------------------------- | PypiClean |
/ManagerTk-0.3-py3-none-any.whl/ManagerTk.py | from tkinter import *
from time import sleep
import re
coords = [0,0]
debug = False
boolc = False
window = None
time_cview = 0
def interface(window):
window.bind('<Motion>', move)
def move(event):
    global coords
    coords[0] = event.x
    coords[1] = event.y
    if boolc:
        try:
            sleep(time_cview)
            print(coords[0], coords[1])
        except TypeError:
            raise TypeError("time_cview SHOULD be integer or float.")
def run(window):
    interface(window)
def coord_view(window, boolcoords):
    "Bind a duplicate view for printing mouse coordinates (x, y)."
    global boolc
    boolc = boolcoords
    interface(window)
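# Minimal usage sketch (assuming a plain Tk root window):
#
#   root = Tk()
#   run(root)               # start tracking mouse motion in `coords`
#   coord_view(root, True)  # additionally print coordinates on every move
#   root.mainloop()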
def typely(listm):
    "Checks that every entry is a (widget, (x, y)) pair with addable coordinates."
    for coord in listm:
        try:
            coord[1][0] + coord[1][1]
        except Exception:
            return False
    return True
def create_mpage(window,ui_page,size):
    if type(ui_page) == list:
        window.geometry(size)
        for ui in ui_page:
            ui.pack()
        if debug == True:
            print("Page ui loaded.")
    else:
        raise TypeError("create_mpage(ui_page), ui_page SHOULD be list type.")
def redirect_pages(page_ui_1,page_ui_2,side=None):
    "This function makes redirecting from page_1 to page_2, ui reconstruction."
    if type(page_ui_1) == list and type(page_ui_2) == list:
        if type(side) != list and type(side) != str and side is not None:
            raise TypeError("redirect_pages(ui_page_1,ui_page_2,side), side SHOULD be str or list type. \nEXAMPLE:\nredirect_pages(main,second,'left')\nOR\nredirect_pages(ui_page_1,ui_page_2,['top','right','left'])\n\n")
        else:
            if debug == True:print("Start Redirecting.")
            for ui_page_1 in page_ui_1:
                ui_page_1.pack_forget()
            if type(side) == str:
                # A single side name: pack every widget on that side
                for ui_page_2 in page_ui_2:
                    ui_page_2.pack(side=side)
                if debug == True:print("Packed ui.")
            elif type(side) == list and all(s in ["bottom","top","right","left"] for s in side):
                # One side name per widget
                side_index = 0
                for ui_page_2 in page_ui_2:
                    try:
                        ui_page_2.pack(side=side[side_index])
                    except IndexError:
                        raise SyntaxError("if side is a list, count index side SHOULD be == count index ui_page_2.\n\n")
                    side_index += 1
                if debug == True:print("Packed ui.")
            else:
                # side is None: place every (widget, (x, y)) pair at its coordinates
                if typely(page_ui_2):
                    for sidec in page_ui_2:
                        try:
                            sidec[0].place(x=sidec[1][0],y=sidec[1][1])
                            print("x: {},\ny: {}.".format(sidec[1][0],sidec[1][1]))
                        except (IndexError, TypeError):
                            raise SyntaxError("each page_ui_2 entry SHOULD be a (widget, (x, y)) pair.\n\n")
                    if debug == True:print("Packed ui.")
                else:
                    print("ERROR SYNTAX list.")
    else:
        raise TypeError("redirect_pages(ui_page), page_ui_1 and page_ui_2 SHOULD be list type.")
def window_exit(title: str, message: str):
"Message Box for Exit(two buttons 'Yes','No' and Title text and message text boxs)."
output = messagebox.askyesno(title=title,message=message)
if(output==True):
if debug == True:
print("Program Exit.")
exit()
def documentation():
"Show Documentation for ManagerTk."
print("Documentation: https://pypi.org/project/ManagerTk/") | PypiClean |
/MFAProblem-1.0.1b0.tar.gz/MFAProblem-1.0.1b0/mfa_problem/su_trace.py | import os
import time
import logging
import logging.handlers
import psutil
import pandas as pd
logger = logging.getLogger()
def logger_init(
logname,
mode
):
global logger
logger = logging.getLogger("sumoptimisation") # root logger
if len(logger.handlers) > 0:
logger.handlers[0].close()
logger.removeHandler(logger.handlers[0])
logger.setLevel(logging.DEBUG)
hdlr = logging.FileHandler(logname, mode)
fmt = logging.Formatter("%(levelname)-5s %(message)s", "%x %X")
hdlr.setFormatter(fmt)
logger.addHandler(hdlr)
def base_filename():
return logging.getLogger("sumoptimisation").handlers[0].baseFilename
def run_log(myfile):
IsNew = not os.path.isfile(myfile)
logging.basicConfig(filename=myfile, format='%(asctime)s,%(msecs)03d - %(levelname)-8s - \
%(funcName)-20s (%(lineno)04d): %(message)s', datefmt='%H:%M:%S', level=logging.INFO)
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
# console.setLevel(logging.INFO)
# set a format which is simpler for console use
co_formatter = logging.Formatter('%(funcName)-20s (%(lineno)04d): %(message)s')
# tell the handler to use this format
console.setFormatter(co_formatter)
# add the handler to the root logger
logging.getLogger().addHandler(console)
if IsNew:
strnow = time.strftime('%Y-%m-%d')
logging.info(f'Log file just created. Date of creation : {strnow}')
else:
logging.info('*****************************')
logging.info('********** New run **********')
logging.info('*****************************')
def log_level(StrLevel="INFO"):
'''
Change the level information of the current logger
Possible values are (All calls with a higher value than the selected one are logged)
"NOTSET"(value 0), "DEBUG"(value 10), "INFO"(20), "WARNING"(30), "ERROR"(40), "CRITICAL"(50)
'''
switcher = {"NOTSET": 0, "DEBUG": 10, "INFO": 20, "WARNING": 30, "ERROR": 40, "CRITICAL": 50}
NewLevel = 20
if StrLevel in switcher:
NewLevel = switcher[StrLevel]
logging.getLogger().setLevel(NewLevel)
def check_log(nbmax=20):
log_def = 'log_' + time.strftime('%Y%m%d') + '.log'
dir_fi = 'logs' + os.path.sep
if not os.path.isdir('logs'):
os.makedirs('logs')
else:
li_file = os.listdir('logs')
df_file = pd.DataFrame(li_file, columns=['files'])
df_file['date'] = [os.path.getctime(dir_fi + fi) for fi in li_file]
df_sort = df_file.sort_values(by=['date'], ascending=False)
        if len(df_sort) > nbmax:
df_del = df_sort['files'][nbmax:]
for fi in df_del:
os.remove(dir_fi + fi)
log_def = dir_fi + log_def
return log_def
def timems(
t_input: float,
f_out='',
b_full=False,
):
if b_full:
st0 = time.localtime(t_input)
comp = 0
if f_out == 'milli':
comp = int(round((t_input-int(t_input))*1000))
elif f_out == 'micro':
comp = int(round((t_input-int(t_input))*1000000))
return time.strftime(f'%Y-%m-%d %H:%M:%S,{comp}', st0) # return a string
else:
if f_out == 'milli':
comp = int(round(t_input*1000))
elif f_out == 'micro':
comp = int(round(t_input*1000000))
else:
comp = int(round(t_input)) # time in sec
return comp # return an integer
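# Examples with illustrative inputs: timems(1.2, 'milli') returns 1200 (an int
# number of milliseconds), while timems(time.time(), 'milli', b_full=True)
# returns a full timestamp string such as '2021-03-04 12:00:00,123'.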
def perf_process(procname='python'):
first_pid_found = False
procname = procname.lower()
    for proc in psutil.process_iter():
        pname = proc.name().lower()
        if procname in pname:
            if not first_pid_found:
                first_pid_found = True
                pyid = proc.pid
                stproc_id = f'process id is : {pyid}'
            else:
                stproc_id += f', id_add: {proc.pid}'
    if not first_pid_found:
        # no matching process: return early instead of failing on an unbound pyid below
        return [f'process not found: {procname}', '', '']
# PROCESS INFOS
ppy = psutil.Process(pyid)
stproc_inf = f'PROCESS - cpu_percent: {ppy.cpu_percent()}, '
mem_val = str(round(ppy.memory_percent(), 2))
stproc_inf += f'mem_percent: {mem_val}, '
ppym = ppy.memory_full_info()
mem_val = str(round(ppym.uss/1024/1024))
stproc_inf += f'mem_dedic_PID: {mem_val} Mo, '
ppym = psutil.Process().memory_full_info()
mem_val = str(round(ppym.uss/1024/1024))
stproc_inf += f'mem_dedic: {mem_val} Mo'
sysm = psutil.virtual_memory()
# SYSTEM INFOS
stsys_inf = f'SYSTEM - cpu_percent: {psutil.cpu_percent()}, '
mem_val = str(round(sysm.percent, 2))
stsys_inf += f'mem_percent: {mem_val}, '
mem_val = str(round(sysm.used/1024/1024))
stsys_inf += f'mem_used: {mem_val} Mo, '
mem_val = str(round(sysm.available/1024/1024))
stsys_inf += f'mem_avail: {mem_val} Mo'
return [stproc_id, stproc_inf, stsys_inf] | PypiClean |
/ImSwitchUC2-2.1.0.tar.gz/ImSwitchUC2-2.1.0/imswitch/imcontrol/view/widgets/FFTWidget.py | import pyqtgraph as pg
from qtpy import QtCore, QtWidgets
from imswitch.imcommon.view.guitools import pyqtgraphtools
from imswitch.imcontrol.view import guitools
from .basewidgets import Widget
class FFTWidget(Widget):
""" Displays the FFT transform of the image. """
sigShowToggled = QtCore.Signal(bool) # (enabled)
sigPosToggled = QtCore.Signal(bool) # (enabled)
sigPosChanged = QtCore.Signal(float) # (pos)
sigUpdateRateChanged = QtCore.Signal(float) # (rate)
sigResized = QtCore.Signal()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Graphical elements
self.showCheck = QtWidgets.QCheckBox('Show FFT')
self.showCheck.setCheckable(True)
self.posCheck = guitools.BetterPushButton('Period (pix)')
self.posCheck.setCheckable(True)
self.linePos = QtWidgets.QLineEdit('4')
self.lineRate = QtWidgets.QLineEdit('0')
self.labelRate = QtWidgets.QLabel('Update rate')
# Vertical and horizontal lines
self.vline = pg.InfiniteLine()
self.hline = pg.InfiniteLine()
self.rvline = pg.InfiniteLine()
self.lvline = pg.InfiniteLine()
self.uhline = pg.InfiniteLine()
self.dhline = pg.InfiniteLine()
# Viewbox
self.cwidget = pg.GraphicsLayoutWidget()
self.vb = self.cwidget.addViewBox(row=1, col=1)
self.vb.setMouseMode(pg.ViewBox.RectMode)
self.img = pg.ImageItem(axisOrder='row-major')
self.img.setTransform(self.img.transform().translate(-0.5, -0.5))
self.vb.addItem(self.img)
self.vb.setAspectLocked(True)
self.hist = pg.HistogramLUTItem(image=self.img)
self.hist.vb.setLimits(yMin=0, yMax=66000)
self.hist.gradient.loadPreset('greyclip')
for tick in self.hist.gradient.ticks:
tick.hide()
self.cwidget.addItem(self.hist, row=1, col=2)
# Add lines to viewbox
self.vb.addItem(self.vline)
self.vb.addItem(self.hline)
self.vb.addItem(self.lvline)
self.vb.addItem(self.rvline)
self.vb.addItem(self.uhline)
self.vb.addItem(self.dhline)
# Add elements to GridLayout
grid = QtWidgets.QGridLayout()
self.setLayout(grid)
grid.addWidget(self.cwidget, 0, 0, 1, 6)
grid.addWidget(self.showCheck, 1, 0, 1, 1)
grid.addWidget(self.posCheck, 2, 0, 1, 1)
grid.addWidget(self.linePos, 2, 1, 1, 1)
grid.addWidget(self.labelRate, 2, 2, 1, 1)
grid.addWidget(self.lineRate, 2, 3, 1, 1)
# grid.setRowMinimumHeight(0, 300)
# Connect signals
self.showCheck.toggled.connect(self.sigShowToggled)
self.posCheck.toggled.connect(self.sigPosToggled)
self.linePos.textChanged.connect(
lambda: self.sigPosChanged.emit(self.getPos())
)
self.lineRate.textChanged.connect(
lambda: self.sigUpdateRateChanged.emit(self.getUpdateRate())
)
self.vb.sigResized.connect(self.sigResized)
def getShowFFTChecked(self):
return self.showCheck.isChecked()
def getShowPosChecked(self):
return self.posCheck.isChecked()
def getPos(self):
return float(self.linePos.text())
def getUpdateRate(self):
return float(self.lineRate.text())
def getImage(self):
return self.img.image
def setImage(self, im):
self.img.setImage(im, autoLevels=False)
def updateImageLimits(self, imgWidth, imgHeight):
pyqtgraphtools.setPGBestImageLimits(self.vb, imgWidth, imgHeight)
def getImageDisplayLevels(self):
return self.hist.getLevels()
def setImageDisplayLevels(self, minimum, maximum):
self.hist.setLevels(minimum, maximum)
self.hist.vb.autoRange()
def setPosLinesVisible(self, visible):
self.vline.setVisible(visible)
self.hline.setVisible(visible)
self.rvline.setVisible(visible)
self.lvline.setVisible(visible)
self.uhline.setVisible(visible)
self.dhline.setVisible(visible)
def updatePosLines(self, pos, imgWidth, imgHeight):
self.vline.setValue(0.5 * imgWidth)
self.hline.setAngle(0)
self.hline.setValue(0.5 * imgHeight)
self.rvline.setValue((0.5 + pos) * imgWidth)
self.lvline.setValue((0.5 - pos) * imgWidth)
self.dhline.setAngle(0)
self.dhline.setValue((0.5 - pos) * imgHeight)
self.uhline.setAngle(0)
self.uhline.setValue((0.5 + pos) * imgHeight)
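# Minimal usage sketch (an assumption for illustration; in ImSwitch the widget
# is constructed and wired up by the controller layer):
#
#   widget = FFTWidget()
#   widget.sigShowToggled.connect(lambda enabled: print('FFT display:', enabled))
#   widget.setImage(np.abs(np.fft.fftshift(np.fft.fft2(img))))  # img: 2D numpy array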
# Copyright (C) 2020-2021 ImSwitch developers
# This file is part of ImSwitch.
#
# ImSwitch is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ImSwitch is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. | PypiClean |
/Flask-Statics-Helper-1.0.0.tar.gz/Flask-Statics-Helper-1.0.0/flask_statics/static/angular/i18n/angular-locale_ky.js | 'use strict';
angular.module("ngLocale", [], ["$provide", function($provide) {
var PLURAL_CATEGORY = {ZERO: "zero", ONE: "one", TWO: "two", FEW: "few", MANY: "many", OTHER: "other"};
$provide.value("$locale", {
"DATETIME_FORMATS": {
"AMPMS": [
"\u0442\u0430\u04a3\u043a\u044b",
"\u0442\u04af\u0448\u0442\u04e9\u043d \u043a\u0438\u0439\u0438\u043d"
],
"DAY": [
"\u0436\u0435\u043a\u0448\u0435\u043c\u0431\u0438",
"\u0434\u04af\u0439\u0448\u04e9\u043c\u0431\u04af",
"\u0448\u0435\u0439\u0448\u0435\u043c\u0431\u0438",
"\u0448\u0430\u0440\u0448\u0435\u043c\u0431\u0438",
"\u0431\u0435\u0439\u0448\u0435\u043c\u0431\u0438",
"\u0436\u0443\u043c\u0430",
"\u0438\u0448\u0435\u043c\u0431\u0438"
],
"MONTH": [
"\u044f\u043d\u0432\u0430\u0440\u044c",
"\u0444\u0435\u0432\u0440\u0430\u043b\u044c",
"\u043c\u0430\u0440\u0442",
"\u0430\u043f\u0440\u0435\u043b\u044c",
"\u043c\u0430\u0439",
"\u0438\u044e\u043d\u044c",
"\u0438\u044e\u043b\u044c",
"\u0430\u0432\u0433\u0443\u0441\u0442",
"\u0441\u0435\u043d\u0442\u044f\u0431\u0440\u044c",
"\u043e\u043a\u0442\u044f\u0431\u0440\u044c",
"\u043d\u043e\u044f\u0431\u0440\u044c",
"\u0434\u0435\u043a\u0430\u0431\u0440\u044c"
],
"SHORTDAY": [
"\u0436\u0435\u043a.",
"\u0434\u04af\u0439.",
"\u0448\u0435\u0439\u0448.",
"\u0448\u0430\u0440\u0448.",
"\u0431\u0435\u0439\u0448.",
"\u0436\u0443\u043c\u0430",
"\u0438\u0448\u043c."
],
"SHORTMONTH": [
"\u044f\u043d\u0432.",
"\u0444\u0435\u0432.",
"\u043c\u0430\u0440.",
"\u0430\u043f\u0440.",
"\u043c\u0430\u0439",
"\u0438\u044e\u043d.",
"\u0438\u044e\u043b.",
"\u0430\u0432\u0433.",
"\u0441\u0435\u043d.",
"\u043e\u043a\u0442.",
"\u043d\u043e\u044f.",
"\u0434\u0435\u043a."
],
"fullDate": "EEEE, d-MMMM, y-'\u0436'.",
"longDate": "y MMMM d",
"medium": "y MMM d HH:mm:ss",
"mediumDate": "y MMM d",
"mediumTime": "HH:mm:ss",
"short": "dd.MM.yy HH:mm",
"shortDate": "dd.MM.yy",
"shortTime": "HH:mm"
},
"NUMBER_FORMATS": {
"CURRENCY_SYM": "KGS",
"DECIMAL_SEP": ",",
"GROUP_SEP": "\u00a0",
"PATTERNS": [
{
"gSize": 3,
"lgSize": 3,
"maxFrac": 3,
"minFrac": 0,
"minInt": 1,
"negPre": "-",
"negSuf": "",
"posPre": "",
"posSuf": ""
},
{
"gSize": 3,
"lgSize": 3,
"maxFrac": 2,
"minFrac": 2,
"minInt": 1,
"negPre": "-",
"negSuf": "\u00a0\u00a4",
"posPre": "",
"posSuf": "\u00a0\u00a4"
}
]
},
"id": "ky",
"pluralCat": function(n, opt_precision) { if (n == 1) { return PLURAL_CATEGORY.ONE; } return PLURAL_CATEGORY.OTHER;}
});
}]); | PypiClean |
/ClueDojo-1.4.3-1.tar.gz/ClueDojo-1.4.3-1/src/cluedojo/static/dijit/nls/dijit-all_th.js | dojo.provide("dijit.nls.dijit-all_th");dojo.provide("dojo.nls.colors");dojo.nls.colors._built=true;dojo.provide("dojo.nls.colors.th");dojo.nls.colors.th={"lightsteelblue":"light steel blue","orangered":"ส้มแกมแดง","midnightblue":"midnight blue","cadetblue":"cadet blue","seashell":"seashell","slategrey":"slate gray","coral":"coral","darkturquoise":"dark turquoise","antiquewhite":"antique white","mediumspringgreen":"medium spring green","salmon":"salmon","darkgrey":"เทาเข้ม","ivory":"งาช้าง","greenyellow":"เขียวแกมเหลือง","mistyrose":"misty rose","lightsalmon":"light salmon","silver":"เงิน","dimgrey":"dim gray","orange":"ส้ม","white":"ขาว","navajowhite":"navajo white","royalblue":"royal blue","deeppink":"ชมพูเข้ม","lime":"เหลืองมะนาว","oldlace":"old lace","chartreuse":"chartreuse","darkcyan":"เขียวแกมน้ำเงินเข้ม","yellow":"เหลือง","linen":"linen","olive":"โอลีฟ","gold":"ทอง","lawngreen":"lawn green","lightyellow":"เหลืองอ่อน","tan":"tan","darkviolet":"ม่วงเข้ม","lightslategrey":"light slate gray","grey":"เทา","darkkhaki":"dark khaki","green":"เขียว","deepskyblue":"deep sky blue","aqua":"ฟ้าน้ำทะเล","sienna":"sienna","mintcream":"mint cream","rosybrown":"rosy brown","mediumslateblue":"medium slate blue","magenta":"แดงแกมม่วง","lightseagreen":"light sea green","cyan":"เขียวแกมน้ำเงิน","olivedrab":"olive drab","darkgoldenrod":"dark goldenrod","slateblue":"slate blue","mediumaquamarine":"medium aquamarine","lavender":"ม่วงลาเวนเดอร์","mediumseagreen":"medium sea green","maroon":"น้ำตาลแดง","darkslategray":"dark slate gray","mediumturquoise":"medium turquoise","ghostwhite":"ghost white","darkblue":"น้ำเงินเข้ม","mediumvioletred":"medium violet-red","brown":"น้ำตาล","lightgray":"เทาอ่อน","sandybrown":"sandy brown","pink":"ชมพู","firebrick":"สีอิฐ","indigo":"indigo","snow":"snow","darkorchid":"dark orchid","turquoise":"turquoise","chocolate":"ช็อกโกแลต","springgreen":"spring green","moccasin":"ม็อคค่า","navy":"น้ำเงินเข้ม","lemonchiffon":"lemon chiffon","teal":"teal","floralwhite":"floral white","cornflowerblue":"cornflower blue","paleturquoise":"pale turquoise","purple":"ม่วง","gainsboro":"gainsboro","plum":"plum","red":"แดง","blue":"น้ำเงิน","forestgreen":"forest green","darkgreen":"เขียวเข้ม","honeydew":"honeydew","darkseagreen":"dark sea green","lightcoral":"light coral","palevioletred":"pale violet-red","mediumpurple":"medium purple","saddlebrown":"saddle brown","darkmagenta":"แดงแกมม่วงเข้ม","thistle":"thistle","whitesmoke":"ขาวควัน","wheat":"wheat","violet":"ม่วง","lightskyblue":"ฟ้าอ่อน","goldenrod":"goldenrod","mediumblue":"medium blue","skyblue":"sky blue","crimson":"แดงเลือดหมู","darksalmon":"dark salmon","darkred":"แดงเข้ม","darkslategrey":"dark slate gray","peru":"peru","lightgrey":"เทาอ่อน","lightgoldenrodyellow":"light goldenrod yellow","blanchedalmond":"blanched almond","aliceblue":"alice blue","bisque":"bisque","slategray":"slate gray","palegoldenrod":"pale goldenrod","darkorange":"ส้มเข้ม","aquamarine":"aquamarine","lightgreen":"เขียวอ่อน","burlywood":"burlywood","dodgerblue":"dodger blue","darkgray":"เทาเข้ม","lightcyan":"เขียวแกมน้ำเงินอ่อน","powderblue":"powder blue","blueviolet":"น้ำเงินม่วง","orchid":"orchid","dimgray":"dim gray","beige":"น้ำตาลเบจ","fuchsia":"fuchsia","lavenderblush":"lavender blush","hotpink":"hot pink","steelblue":"steel blue","tomato":"tomato","lightpink":"ชมพูอ่อน","limegreen":"เขียวมะนาว","indianred":"indian red","papayawhip":"papaya whip","lightslategray":"light 
slate gray","gray":"เทา","mediumorchid":"medium orchid","cornsilk":"cornsilk","black":"ดำ","seagreen":"sea green","darkslateblue":"dark slate blue","khaki":"khaki","lightblue":"น้ำเงินอ่อน","palegreen":"pale green","azure":"น้ำเงินฟ้า","peachpuff":"peach puff","darkolivegreen":"เขียวโอลีฟเข้ม","yellowgreen":"เหลืองแกมเขียว"};dojo.provide("dijit.nls.loading");dijit.nls.loading._built=true;dojo.provide("dijit.nls.loading.th");dijit.nls.loading.th={"loadingState":"กำลังโหลด...","errorState":"ขออภัย เกิดข้อผิดพลาด"};dojo.provide("dijit.nls.common");dijit.nls.common._built=true;dojo.provide("dijit.nls.common.th");dijit.nls.common.th={"buttonOk":"ตกลง","buttonCancel":"ยกเลิก","buttonSave":"บันทึก","itemClose":"ปิด"};dojo.provide("dijit._editor.nls.commands");dijit._editor.nls.commands._built=true;dojo.provide("dijit._editor.nls.commands.th");dijit._editor.nls.commands.th={"removeFormat":"ลบรูปแบบออก","copy":"คัดลอก","paste":"วาง","selectAll":"เลือกทั้งหมด","insertOrderedList":"ลำดับเลข","insertTable":"แทรก/แก้ไข ตาราง","underline":"ขีดเส้นใต้","foreColor":"สีพื้นหน้า","htmlToggle":"ซอร์ส HTML","formatBlock":"ลักษณะย่อหน้า","insertHorizontalRule":"ไม้บรรทัดแนวนอน","delete":"ลบ","insertUnorderedList":"หัวข้อย่อย","tableProp":"คุณสมบัติตาราง","insertImage":"แทรกอิมเมจ","superscript":"ตัวยก","subscript":"ตัวห้อย","createLink":"สร้างลิงก์","undo":"เลิกทำ","italic":"ตัวเอียง","fontName":"ชื่อฟอนต์","justifyLeft":"จัดชิดซ้าย","unlink":"ลบลิงก์ออก","toggleTableBorder":"สลับเส้นขอบตาราง","fontSize":"ขนาดฟอนต์","systemShortcut":"แอ็กชัน \"${0}\" ใช้งานได้เฉพาะกับเบราว์เซอร์ของคุณโดยใช้แป้นพิมพ์ลัด ใช้ ${1}","indent":"เพิ่มการเยื้อง","redo":"ทำซ้ำ","strikethrough":"ขีดทับ","justifyFull":"จัดชิดขอบ","justifyCenter":"จัดกึ่งกลาง","hiliteColor":"สีพื้นหลัง","deleteTable":"ลบตาราง","outdent":"ลดการเยื้อง","cut":"ตัด","plainFormatBlock":"ลักษณะย่อหน้า","toggleDir":"สลับทิศทาง","bold":"ตัวหนา","tabIndent":"เยื้องแท็บ","justifyRight":"จัดชิดขวา","print":"Print","newPage":"New Page","appleKey":"⌘${0}","fullScreen":"Toggle Full Screen","viewSource":"View HTML Source","ctrlKey":"ctrl+${0}"};dojo.provide("dojo.cldr.nls.number");dojo.cldr.nls.number._built=true;dojo.provide("dojo.cldr.nls.number.th");dojo.cldr.nls.number.th={"group":",","percentSign":"%","exponential":"E","percentFormat":"#,##0%","scientificFormat":"#E0","list":";","infinity":"∞","patternDigit":"#","minusSign":"-","decimal":".","nan":"NaN","nativeZeroDigit":"0","perMille":"‰","decimalFormat":"#,##0.###","currencyFormat":"¤#,##0.00;¤-#,##0.00","plusSign":"+","currencySpacing-afterCurrency-currencyMatch":"[:letter:]","currencySpacing-beforeCurrency-surroundingMatch":"[:digit:]","currencySpacing-afterCurrency-insertBetween":" ","currencySpacing-afterCurrency-surroundingMatch":"[:digit:]","currencySpacing-beforeCurrency-currencyMatch":"[:letter:]","currencySpacing-beforeCurrency-insertBetween":" "};dojo.provide("dijit.form.nls.validate");dijit.form.nls.validate._built=true;dojo.provide("dijit.form.nls.validate.th");dijit.form.nls.validate.th={"rangeMessage":"ค่านี้เกินช่วง","invalidMessage":"ค่าที่ป้อนไม่ถูกต้อง","missingMessage":"จำเป็นต้องมีค่านี้"};dojo.provide("dojo.cldr.nls.currency");dojo.cldr.nls.currency._built=true;dojo.provide("dojo.cldr.nls.currency.th");dojo.cldr.nls.currency.th={"HKD_displayName":"ดอลลาร์ฮ่องกง","CHF_displayName":"ฟรังก์สวิส","JPY_symbol":"¥","CAD_displayName":"ดอลลาร์แคนาดา","CNY_displayName":"หยวนเหรินหมินปี้ 
(สาธารณรัฐประชาชนจีน)","AUD_displayName":"ดอลลาร์ออสเตรเลีย","JPY_displayName":"เยนญี่ปุ่น","USD_displayName":"ดอลลาร์สหรัฐ","GBP_displayName":"ปอนด์สเตอร์ลิง (สหราชอาณาจักร)","EUR_displayName":"ยูโร","CHF_symbol":"Fr.","HKD_symbol":"HK$","USD_symbol":"US$","CAD_symbol":"CA$","EUR_symbol":"€","CNY_symbol":"CN¥","GBP_symbol":"£","AUD_symbol":"AU$"};dojo.provide("dojo.cldr.nls.gregorian");dojo.cldr.nls.gregorian._built=true;dojo.provide("dojo.cldr.nls.gregorian.th");dojo.cldr.nls.gregorian.th={"dateFormatItem-yM":"M/yyyy","field-dayperiod":"ช่วงวัน","dateFormatItem-yQ":"Q yyyy","field-minute":"นาที","eraNames":["ปีก่อนคริสต์ศักราช","คริสต์ศักราช"],"dateFormatItem-MMMEd":"E d MMM","dateTimeFormat-full":"{1}, {0}","field-weekday":"วันในสัปดาห์","dateFormatItem-yQQQ":"QQQ y","days-standAlone-wide":["วันอาทิตย์","วันจันทร์","วันอังคาร","วันพุธ","วันพฤหัสบดี","วันศุกร์","วันเสาร์"],"dateFormatItem-MMM":"LLL","months-standAlone-narrow":["ม.ค.","ก.พ.","มี.ค.","เม.ย.","พ.ค.","มิ.ย.","ก.ค.","ส.ค.","ก.ย.","ต.ค.","พ.ย.","ธ.ค."],"dateTimeFormat-short":"{1}, {0}","field-era":"สมัย","field-hour":"ชั่วโมง","dateTimeFormat-medium":"{1}, {0}","dateFormatItem-y":"y","timeFormat-full":"H นาฬิกา m นาที ss วินาที zzzz","months-standAlone-abbr":["ม.ค.","ก.พ.","มี.ค.","เม.ย.","พ.ค.","มิ.ย.","ก.ค.","ส.ค.","ก.ย.","ต.ค.","พ.ย.","ธ.ค."],"dateFormatItem-yMMM":"MMM y","days-standAlone-narrow":["อ","จ","อ","พ","พ","ศ","ส"],"eraAbbr":["ปีก่อน ค.ศ.","ค.ศ."],"dateFormatItem-yyyyMMMM":"MMMM y","dateFormat-long":"d MMMM y","timeFormat-medium":"H:mm:ss","dateFormatItem-EEEd":"EEE d","field-zone":"เขต","dateFormatItem-Hm":"H:mm","dateFormat-medium":"d MMM y","quarters-standAlone-wide":["ไตรมาส 1","ไตรมาส 2","ไตรมาส 3","ไตรมาส 4"],"dateFormatItem-yMMMM":"MMMM y","dateFormatItem-ms":"mm:ss","field-year":"ปี","quarters-standAlone-narrow":["1","2","3","4"],"dateTimeFormat-long":"{1}, {0}","dateFormatItem-HHmmss":"HH:mm:ss","field-week":"สัปดาห์","months-standAlone-wide":["มกราคม","กุมภาพันธ์","มีนาคม","เมษายน","พฤษภาคม","มิถุนายน","กรกฎาคม","สิงหาคม","กันยายน","ตุลาคม","พฤศจิกายน","ธันวาคม"],"dateFormatItem-MMMMEd":"E d MMMM","dateFormatItem-MMMd":"d MMM","dateFormatItem-HHmm":"HH:mm","dateFormatItem-yyQ":"Q yy","timeFormat-long":"H นาฬิกา m นาที ss วินาที z","months-format-abbr":["ม.ค.","ก.พ.","มี.ค.","เม.ย.","พ.ค.","มิ.ย.","ก.ค.","ส.ค.","ก.ย.","ต.ค.","พ.ย.","ธ.ค."],"timeFormat-short":"H:mm","field-month":"เดือน","quarters-format-abbr":["Q1","Q2","Q3","Q4"],"dateFormatItem-MMMMd":"d MMMM","days-format-abbr":["อา.","จ.","อ.","พ.","พฤ.","ศ.","ส."],"pm":"หลังเที่ยง","dateFormatItem-M":"L","dateFormatItem-mmss":"mm:ss","days-format-narrow":["อ","จ","อ","พ","พ","ศ","ส"],"field-second":"วินาที","field-day":"วัน","dateFormatItem-MEd":"E, d/M","months-format-narrow":["ม.ค.","ก.พ.","มี.ค.","เม.ย.","พ.ค.","มิ.ย.","ก.ค.","ส.ค.","ก.ย.","ต.ค.","พ.ย.","ธ.ค."],"am":"ก่อนเที่ยง","days-standAlone-abbr":["อา.","จ.","อ.","พ.","พฤ.","ศ.","ส."],"dateFormat-short":"d/M/yyyy","dateFormatItem-yyyyM":"M/yyyy","dateFormatItem-yMMMEd":"EEE d MMM y","dateFormat-full":"EEEEที่ d MMMM G y","dateFormatItem-Md":"d/M","dateFormatItem-yMEd":"EEE d/M/yyyy","months-format-wide":["มกราคม","กุมภาพันธ์","มีนาคม","เมษายน","พฤษภาคม","มิถุนายน","กรกฎาคม","สิงหาคม","กันยายน","ตุลาคม","พฤศจิกายน","ธันวาคม"],"dateFormatItem-d":"d","quarters-format-wide":["ไตรมาส 1","ไตรมาส 2","ไตรมาส 3","ไตรมาส 4"],"days-format-wide":["วันอาทิตย์","วันจันทร์","วันอังคาร","วันพุธ","วันพฤหัสบดี","วันศุกร์","วันเสาร์"],"eraNarrow":"ก่อน ค.ศ.","dateTimeFormats-appendItem-Day-Of-Week":"{0} 
{1}","dateTimeFormats-appendItem-Second":"{0} ({2}: {1})","dateTimeFormats-appendItem-Era":"{0} {1}","dateTimeFormats-appendItem-Week":"{0} ({2}: {1})","quarters-standAlone-abbr":["Q1","Q2","Q3","Q4"],"quarters-format-narrow":["1","2","3","4"],"dateTimeFormats-appendItem-Day":"{0} ({2}: {1})","dateFormatItem-hm":"h:mm a","dateTimeFormats-appendItem-Year":"{0} {1}","dateTimeFormats-appendItem-Hour":"{0} ({2}: {1})","dateTimeFormats-appendItem-Quarter":"{0} ({2}: {1})","dateTimeFormats-appendItem-Month":"{0} ({2}: {1})","dateTimeFormats-appendItem-Minute":"{0} ({2}: {1})","dateTimeFormats-appendItem-Timezone":"{0} {1}","dateFormatItem-Hms":"H:mm:ss","dateFormatItem-hms":"h:mm:ss a"};dojo.provide("dijit.form.nls.ComboBox");dijit.form.nls.ComboBox._built=true;dojo.provide("dijit.form.nls.ComboBox.th");dijit.form.nls.ComboBox.th={"previousMessage":"การเลือกก่อนหน้า","nextMessage":"การเลือกเพิ่มเติม"}; | PypiClean |
/IsoCon-0.3.3.tar.gz/IsoCon-0.3.3/modules/SW_alignment_module.py | import signal
from multiprocessing import Pool
import multiprocessing as mp
import sys
import parasail
import re
# import ssw
# from Bio import pairwise2
# from Bio.SubsMat import MatrixInfo as matlist
def cigar_to_seq(cigar, query, ref):
cigar_tuples = []
result = re.split(r'[=DXSMI]+', cigar)
i = 0
for length in result[:-1]:
i += len(length)
type_ = cigar[i]
i += 1
cigar_tuples.append((int(length), type_ ))
r_index = 0
q_index = 0
q_aln = []
r_aln = []
for length_ , type_ in cigar_tuples:
if type_ == "=" or type_ == "X":
q_aln.append(query[q_index : q_index + length_])
r_aln.append(ref[r_index : r_index + length_])
r_index += length_
q_index += length_
elif type_ == "I":
# insertion w.r.t. reference
r_aln.append('-' * length_)
q_aln.append(query[q_index: q_index + length_])
# only query index change
q_index += length_
elif type_ == 'D':
# deletion w.r.t. reference
r_aln.append(ref[r_index: r_index + length_])
q_aln.append('-' * length_)
# only ref index change
r_index += length_
else:
print("error")
print(cigar)
sys.exit()
return "".join([s for s in q_aln]), "".join([s for s in r_aln])
def parasail_alignment_helper(arguments):
args, kwargs = arguments
return parasail_alignment(*args, **kwargs)
def parasail_alignment(s1, s2, i, j, x_acc = "", y_acc = "", match_score = 2, mismatch_penalty = -3, opening_penalty = 2, gap_ext = 0):
user_matrix = parasail.matrix_create("ACGT", match_score, mismatch_penalty)
result = parasail.sg_trace_scan_16(s1, s2, opening_penalty, gap_ext, user_matrix)
if result.saturated:
print("SATURATED!")
result = parasail.sg_trace_scan_32(s1, s2, opening_penalty, gap_ext, user_matrix)
# print(result.cigar.seq)
# print(result.cigar.decode )
# print(str(result.cigar.decode,'utf-8') )
if sys.version_info[0] < 3:
cigar_string = str(result.cigar.decode).decode('utf-8')
else:
cigar_string = str(result.cigar.decode, 'utf-8')
s1_alignment, s2_alignment = cigar_to_seq(cigar_string, s1, s2)
mismatches = len([ 1 for n1, n2 in zip(s1_alignment,s2_alignment) if n1 != n2 and n1 != "-" and n2 != "-" ])
matches = len([ 1 for n1, n2 in zip(s1_alignment,s2_alignment) if n1 == n2 and n1 != "-"])
indels = len(s1_alignment) - mismatches - matches
if x_acc == y_acc == "":
return (s1, s2, (s1_alignment, s2_alignment, (matches, mismatches, indels)) )
else:
return (x_acc, y_acc, (s1_alignment, s2_alignment, (matches, mismatches, indels)) )
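# Usage sketch (illustrative values): i and j are only bookkeeping indices, so a
# direct call looks like
#
#   s1, s2, (aln1, aln2, (matches, mismatches, indels)) = \
#       parasail_alignment("ACGT", "ACGGT", 0, 0)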
def sw_align_sequences(matches, nr_cores = 1, mismatch_penalty = -1):
"""
Matches should be a 2D matrix implemented as a dict of dict, the value should be the edit distance.
"""
exact_matches = {}
if nr_cores == 1:
for j, s1 in enumerate(matches):
for i, s2 in enumerate(matches[s1]):
if s1 in exact_matches:
if s2 in exact_matches[s1]:
continue
ed = matches[s1][s2]
error_rate = float(ed)/ min(len(s1), len(s2))
if error_rate <= 0.01:
mismatch_penalty = -1
elif 0.01 < error_rate <= 0.09:
mismatch_penalty = -2
else:
mismatch_penalty = -4
# print(s1,s2)
s1, s2, stats = parasail_alignment_helper( ((s1, s2, i, j), {"mismatch_penalty" : mismatch_penalty }) )
if stats:
if s1 in exact_matches:
exact_matches[s1][s2] = stats
else:
exact_matches[s1] = {}
exact_matches[s1][s2] = stats
else:
pass
else:
####### parallelize alignment #########
# pool = Pool(processes=mp.cpu_count())
original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGINT, original_sigint_handler)
pool = Pool(processes=nr_cores)
matches_with_mismatch = {}
for j, s1 in enumerate(matches):
matches_with_mismatch[s1] = {}
for i, s2 in enumerate(matches[s1]):
ed = matches[s1][s2]
error_rate = float(ed)/ min(len(s1), len(s2))
if error_rate <= 0.01:
mismatch_penalty = -1
elif 0.01 < error_rate <= 0.09:
mismatch_penalty = -2
else:
mismatch_penalty = -4
matches_with_mismatch[s1][s2] = mismatch_penalty
try:
res = pool.map_async(parasail_alignment_helper, [ ((s1, s2, i,j), {"mismatch_penalty" : mismatch_penalty}) for j, s1 in enumerate(matches_with_mismatch) for i, (s2, mismatch_penalty) in enumerate(matches_with_mismatch[s1].items()) ] )
alignment_results =res.get(999999999) # Without the timeout this blocking call ignores all signals.
except KeyboardInterrupt:
print("Caught KeyboardInterrupt, terminating workers")
pool.terminate()
sys.exit()
else:
# print("Normal termination")
pool.close()
pool.join()
for s1, s2, stats in alignment_results:
if stats:
if s1 in exact_matches:
exact_matches[s1][s2] = stats
else:
exact_matches[s1] = {}
exact_matches[s1][s2] = stats
else:
pass
return exact_matches
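# Usage sketch (illustrative): `matches` maps sequence -> {sequence: edit distance},
#
#   matches = {"ACGT": {"ACGGT": 1}}
#   alignments = sw_align_sequences(matches, nr_cores=1)
#   aln1, aln2, (m, mm, ind) = alignments["ACGT"]["ACGGT"]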
def sw_align_sequences_keeping_accession(matches, nr_cores = 1):
"""
Matches should be a 2D matrix implemented as a dict of dict, the value should be a tuple (s1,s2, edit_distance) .
"""
print(len(matches))
exact_matches = {}
if nr_cores == 1:
for j, s1_acc in enumerate(matches):
for i, s2_acc in enumerate(matches[s1_acc]):
s1, s2, ed = matches[s1_acc][s2_acc]
if s1_acc in exact_matches:
if s2_acc in exact_matches[s1_acc]:
continue
error_rate = float(ed)/ min(len(s1), len(s2))
# print("ERROR_RATE", error_rate)
if error_rate <= 0.01:
mismatch_penalty = -1
elif 0.01 < error_rate <= 0.09:
mismatch_penalty = -2
else:
mismatch_penalty = -4
s1_acc, s2_acc, stats = parasail_alignment_helper( ((s1, s2, i, j), {"x_acc" : s1_acc, "y_acc" :s2_acc, "mismatch_penalty" : mismatch_penalty}) )
if stats:
if s1_acc in exact_matches:
exact_matches[s1_acc][s2_acc] = stats
else:
exact_matches[s1_acc] = {}
exact_matches[s1_acc][s2_acc] = stats
else:
pass
else:
####### parallelize alignment #########
# pool = Pool(processes=mp.cpu_count())
original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGINT, original_sigint_handler)
pool = Pool(processes=nr_cores)
matches_with_mismatch = {}
for j, s1_acc in enumerate(matches):
matches_with_mismatch[s1_acc] = {}
for i, s2_acc in enumerate(matches[s1_acc]):
s1, s2, ed = matches[s1_acc][s2_acc]
error_rate = float(ed)/ min(len(s1), len(s2))
# print("ERROR_RATE",error_rate ) #, s1_acc, ed, len(s1), len(s2) )
if error_rate <= 0.01:
mismatch_penalty = -1
elif 0.01 < error_rate <= 0.09:
mismatch_penalty = -2
else:
mismatch_penalty = -4
matches_with_mismatch[s1_acc][s2_acc] = mismatch_penalty
print(len(matches_with_mismatch), "lewl")
# for j, s1_acc in enumerate(matches):
# for i, s2_acc in enumerate(matches[s1_acc]):
# print("lool", matches[s1_acc][s2_acc][0], matches[s1_acc][s2_acc][1], i,j, {"x_acc": s1_acc, "y_acc" : s2_acc} )
try:
res = pool.map_async(parasail_alignment_helper, [ ((matches[s1_acc][s2_acc][0], matches[s1_acc][s2_acc][1], i,j), {"x_acc": s1_acc, "y_acc" : s2_acc, "mismatch_penalty" : mismatch_penalty}) for j, s1_acc in enumerate(matches_with_mismatch) for i, (s2_acc, mismatch_penalty) in enumerate(matches_with_mismatch[s1_acc].items()) ] )
alignment_results =res.get(999999999) # Without the timeout this blocking call ignores all signals.
except KeyboardInterrupt:
print("Caught KeyboardInterrupt, terminating workers")
pool.terminate()
sys.exit()
else:
# print("Normal termination")
pool.close()
pool.join()
for s1_acc, s2_acc, stats in alignment_results:
if stats:
if s1_acc in exact_matches:
exact_matches[s1_acc][s2_acc] = stats
else:
exact_matches[s1_acc] = {}
exact_matches[s1_acc][s2_acc] = stats
else:
# print("OMG!")
# print(len(matches[s1_acc][s2_acc][0]), len(matches[s1_acc][s2_acc][1]) )
pass
print(len(exact_matches))
return exact_matches
# def ssw_alignment_helper(arguments):
# args, kwargs = arguments
# return ssw_alignment(*args, **kwargs)
# def ssw_alignment(x, y, i,j, ends_discrepancy_threshold = 25 , x_acc = "", y_acc = "", mismatch_penalty = -3 ):
# """
# Aligns two sequences with SSW
# x: query
# y: reference
# """
# # if i == 100 and j % 1000 == 0:
# # print("SW processed alignments:{0}, mismatch_penalty: {1}".format(j+1, mismatch_penalty))
# score_matrix = ssw.DNA_ScoreMatrix(match=2, mismatch=mismatch_penalty)
# aligner = ssw.Aligner(gap_open=2, gap_extend=0, matrix=score_matrix)
# # for the ends that SSW leaves behind
# bio_matrix = matlist.blosum62
# g_open = -1
# g_extend = -0.5
# ######################################
# # result = aligner.align("GA", "G", revcomp=False)
# # y_alignment, match_line, x_alignment = result.alignment
# # c = Counter(match_line)
# # matches, mismatches, indels = c["|"], c["*"], c[" "]
# # alignment_length = len(match_line)
# # print("matches:{0}, mismatches:{1}, indels:{2} ".format(matches, mismatches, indels))
# # print(match_line)
# result = aligner.align(x, y, revcomp=False)
# y_alignment, match_line, x_alignment = result.alignment
# matches, mismatches, indels = match_line.count("|"), match_line.count("*"), match_line.count(" ")
# # alignment_length = len(match_line)
# start_discrepancy = max(result.query_begin, result.reference_begin) # 0-indexed # max(result.query_begin, result.reference_begin) - min(result.query_begin, result.reference_begin)
# query_end_discrepancy = len(x) - result.query_end - 1
# ref_end_discrepancy = len(y) - result.reference_end - 1
# end_discrepancy = max(query_end_discrepancy, ref_end_discrepancy) # max(result.query_end, result.reference_end) - min(result.query_end, result.reference_end)
# # print(start_discrepancy, end_discrepancy)
# tot_discrepancy = start_discrepancy + end_discrepancy
# if 0 < start_discrepancy <= ends_discrepancy_threshold:
# # print("HERE")
# matches_snippet = 0
# mismatches_snippet = 0
# if result.query_begin and result.reference_begin:
# query_start_snippet = x[:result.query_begin]
# ref_start_snippet = y[:result.reference_begin]
# alns = pairwise2.align.globalds(query_start_snippet, ref_start_snippet, bio_matrix, g_open, g_extend)
# top_aln = alns[0]
# # print(alns)
# mismatches_snippet = len(list(filter(lambda x: x[0] != x[1] and x[0] != '-' and x[1] != "-", zip(top_aln[0],top_aln[1]))))
# indels_snippet = top_aln[0].count("-") + top_aln[1].count("-")
# matches_snippet = len(top_aln[0]) - mismatches_snippet - indels_snippet
# # print(matches_snippet, mismatches_snippet, indels_snippet)
# query_start_alignment_snippet = top_aln[0]
# ref_start_alignment_snippet = top_aln[1]
# elif result.query_begin:
# query_start_alignment_snippet = x[:result.query_begin]
# ref_start_alignment_snippet = "-"*len(query_start_alignment_snippet)
# indels_snippet = len(ref_start_alignment_snippet)
# elif result.reference_begin:
# ref_start_alignment_snippet = y[:result.reference_begin]
# query_start_alignment_snippet = "-"*len(ref_start_alignment_snippet)
# indels_snippet = len(query_start_alignment_snippet)
# else:
# print("BUG")
# sys.exit()
# matches, mismatches, indels = matches + matches_snippet, mismatches + mismatches_snippet, indels + indels_snippet
# # print(ref_start_alignment_snippet)
# # print(query_start_alignment_snippet)
# y_alignment = ref_start_alignment_snippet + y_alignment
# x_alignment = query_start_alignment_snippet + x_alignment
# if 0 < end_discrepancy <= ends_discrepancy_threshold:
# # print("HERE2")
# matches_snippet = 0
# mismatches_snippet = 0
# if query_end_discrepancy and ref_end_discrepancy:
# query_end_snippet = x[result.query_end+1:]
# ref_end_snippet = y[result.reference_end+1:]
# alns = pairwise2.align.globalds(query_end_snippet, ref_end_snippet, bio_matrix, g_open, g_extend)
# top_aln = alns[0]
# mismatches_snippet = len(list(filter(lambda x: x[0] != x[1] and x[0] != '-' and x[1] != "-", zip(top_aln[0],top_aln[1]))))
# indels_snippet = top_aln[0].count("-") + top_aln[1].count("-")
# matches_snippet = len(top_aln[0]) - mismatches_snippet - indels_snippet
# query_end_alignment_snippet = top_aln[0]
# ref_end_alignment_snippet = top_aln[1]
# elif query_end_discrepancy:
# query_end_alignment_snippet = x[result.query_end+1:]
# ref_end_alignment_snippet = "-"*len(query_end_alignment_snippet)
# indels_snippet = len(ref_end_alignment_snippet)
# elif ref_end_discrepancy:
# ref_end_alignment_snippet = y[result.reference_end+1:]
# query_end_alignment_snippet = "-"*len(ref_end_alignment_snippet)
# indels_snippet = len(query_end_alignment_snippet)
# else:
# print("BUG")
# sys.exit()
# matches, mismatches, indels = matches + matches_snippet, mismatches + mismatches_snippet, indels + indels_snippet
# y_alignment = y_alignment + ref_end_alignment_snippet
# x_alignment = x_alignment + query_end_alignment_snippet
# if x_acc == y_acc == "":
# if start_discrepancy > ends_discrepancy_threshold or end_discrepancy > ends_discrepancy_threshold:
# # print("REMOVING", start_discrepancy, end_discrepancy)
# return (x, y, None)
# else:
# return (x, y, (x_alignment, y_alignment, (matches, mismatches, indels)) )
# else:
# if start_discrepancy > ends_discrepancy_threshold or end_discrepancy > ends_discrepancy_threshold:
# # print("REMOVING", start_discrepancy, end_discrepancy)
# return (x_acc, y_acc, None)
# else:
# return (x_acc, y_acc, (x_alignment, y_alignment, (matches, mismatches, indels)) ) | PypiClean |
/NNBuilder-0.3.7.tar.gz/NNBuilder-0.3.7/nnbuilder/extensions/saveload.py | import os
from basic import *
class SaveLoad(ExtensionBase):
def __init__(self):
ExtensionBase.__init__(self)
self.max = 3
self.freq = 10000
self.save = True
self.load = True
self.epoch = False
self.overwrite = True
self.loadfile = None
def init(self):
self.path = './'+self.config.name+'/save/'
def before_train(self):
if self.load:
self.mainloop_load(self.model, '')
def after_iteration(self):
if (self.train_history['n_iter']) % self.freq == 0:
savename = '{}.npz'.format(self.train_history['n_iter'])
self.mainloop_save(self.model, '', savename, self.max, self.overwrite)
def after_epoch(self):
if self.epoch:
savename = '{}.npz'.format(self.train_history['n_epoch'])
self.mainloop_save(self.model, 'epoch/', savename, self.max, self.overwrite)
def after_train(self):
self.mainloop_save(self.model, 'final/', 'final.npz', self.max, self.overwrite)
def mainloop_save(self, model, path, file, max=1, overwrite=True):
filepath = self.path + path + file
np.savez(filepath,
parameter=SaveLoad.get_params(model),
train_history=SaveLoad.get_train_history(self.train_history),
extensions=SaveLoad.get_extensions_dict(self.extensions),
optimizer=SaveLoad.get_optimizer_dict(self.model.optimizer))
if self.is_log_detail():
self.logger("")
self.logger("Save Sucessfully At File : [{}]".format(filepath), 1)
# delete old files
if overwrite:
filelist = [self.path + path + name for name in os.listdir(self.path + path) if name.endswith('.npz')]
            filelist.sort(key=SaveLoad.get_mtime)  # oldest first
for i in range(len(filelist) - max):
os.remove(filelist[i])
if self.is_log_detail():
self.logger("Deleted Old File : [{}]".format(filelist[i]), 1)
if self.is_log_detail():
self.logger("")
def mainloop_load(self, model, file):
self.logger('Loading saved model from checkpoint:', 1, 1)
# prepare loading
if os.path.isfile(file):
file = self.path + file
else:
filelist = [self.path + filename for filename in os.listdir(self.path + file) if filename.endswith('.npz')]
if filelist == []:
self.logger('Checkpoint not found, exit loading', 2)
return
            filelist.sort(key=SaveLoad.get_mtime)
file = filelist[-1]
self.logger('Checkpoint found : [{}]'.format(file), 2)
# load params
SaveLoad.load_params(model, file)
SaveLoad.load_train_history(self.train_history, file)
SaveLoad.load_extensions(self.extensions, file)
SaveLoad.load_optimizer(self.model.optimizer, file)
        self.logger('Loaded successfully', 2)
self.logger('', 2)
@staticmethod
def get_params(model):
params = OrderedDict()
for name, param in model.params.items():
params[name] = param().get()
return params
@staticmethod
def get_train_history(train_history):
return train_history
@staticmethod
def get_extensions_dict(extensions):
extensions_dict = OrderedDict()
for ex in extensions:
extensions_dict[ex.__class__.__name__] = ex.save_(OrderedDict())
return extensions_dict
@staticmethod
def get_optimizer_dict(optimizer):
optimizer_dict = OrderedDict()
optimizer_dict[optimizer.__class__.__name__] = optimizer.save_(OrderedDict())
return optimizer_dict
@staticmethod
def load_params(model, file):
params = np.load(file)['parameter'].tolist()
for name, param in params.items():
model.params[name]().set(param)
@staticmethod
def load_train_history(train_history, file):
loaded_train_history = np.load(file)['train_history'].tolist()
for key, value in loaded_train_history.items():
train_history[key] = value
@staticmethod
def load_extensions(extensions, file):
loaded_extensions = np.load(file)['extensions'].tolist()
for ex in extensions:
if ex.__class__.__name__ in loaded_extensions:
ex.load_(loaded_extensions[ex.__class__.__name__])
@staticmethod
def load_optimizer(optimizer, file):
loaded_optimizer = np.load(file)['optimizer'].tolist()
if optimizer.__class__.__name__ in loaded_optimizer:
optimizer.load_(loaded_optimizer[optimizer.__class__.__name__])
@staticmethod
def save_file(model, file):
params = SaveLoad.get_params(model)
np.savez(file, parameter=params)
    @staticmethod
    def get_mtime(path):
        # Sort key: file modification time. Replaces the old cmp-style
        # comparator, which Python 3's list.sort no longer accepts.
        return os.stat(path).st_mtime
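# Usage sketch (hypothetical mainloop wiring; the surrounding NNBuilder
# framework supplies the model and training loop):
#     saveload.freq = 5000   # checkpoint every 5000 iterations
#     saveload.max = 5       # keep at most five checkpoint files
#     extensions = [saveload]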
saveload = SaveLoad() | PypiClean |
/Balltic-0.2.0.tar.gz/Balltic-0.2.0/balltic/gasdynamics/pneumatic.py | __author__ = 'Anthony Byuraev'
__all__ = ['Pneumatic']
import typing
import numpy as np
from balltic.core.grid import EulerianGrid
from balltic.core.guns import AirGun
from balltic.core.gas import Gas
class Pneumatic(EulerianGrid):
"""
    Solves the main problem of interior ballistics in a gas-dynamic
    formulation on a moving grid, using the Eulerian method
    Parameters
    ----------
    gun: AirGun
        Named tuple of initial conditions and pneumatic (air) gun parameters
    gas: Gas
        Named tuple of light-gas parameters
    nodes: int
        Number of grid nodes (interfaces)
    initialp: float, optional
        Initial pressure in the chamber
    chamber: int or float, optional
        Initial length of the space behind the piston
    barrel: int or float, optional
        Length of the barrel's guiding section
    kurant: int or float, optional
        Courant number
    Returns
    -------
    solution:
"""
def __str__(self):
        return 'Instance of class Pneumatic'
def __repr__(self):
return f'{self.__class__.__name__}(gun, gas)'
def __init__(self, gun: AirGun, gas: Gas, nodes=100,
initialp: typing.Union[int, float] = None,
chamber: typing.Union[int, float] = None,
barrel: typing.Union[int, float] = None,
kurant: typing.Union[int, float] = None) -> None:
if isinstance(gun, AirGun):
self.gun = gun
else:
            raise ValueError('Parameter gun must be an AirGun')
if isinstance(gas, Gas):
self.gas = gas
else:
            raise ValueError('Parameter gas must be a Gas')
self.nodes = nodes
if initialp is not None:
self.gun = self.gun._replace(initialp=initialp)
if kurant is not None:
self.gun = self.gun._replace(kurant=kurant)
if chamber is not None:
self.gun = self.gun._replace(chamber=chamber)
if barrel is not None:
self.gun = self.gun._replace(barrel=barrel)
self.gun = self.gun._replace(cs_area=np.pi * self.gun.caliber ** 2 / 4)
self.energy_cell = np.full(
self.nodes,
self.gun.initialp / (self.gas.k - 1) / self.gas.ro
)
self.c_cell = np.full(
self.nodes,
np.sqrt(self.gas.k * self.gun.initialp / self.gas.ro)
)
self.ro_cell = np.full(self.nodes, self.gas.ro)
self.v_cell = np.zeros(self.nodes)
self.press_cell = np.zeros(self.nodes)
        # For computing the Mach number at the interfaces
self.mah_cell_m = np.zeros(self.nodes - 1)
self.mah_cell_p = np.zeros(self.nodes - 1)
        # For computing the flux f (the F vectors)
self.F_param_p = np.array(
[
np.zeros(self.nodes - 1),
np.zeros(self.nodes - 1),
np.zeros(self.nodes - 1)
]
)
self.F_param_m = np.array(
[
np.zeros(self.nodes - 1),
np.zeros(self.nodes - 1),
np.zeros(self.nodes - 1)
]
)
self.c_interface = np.zeros(self.nodes - 1)
self.mah_interface = np.zeros(self.nodes - 1)
self.press_interface = np.zeros(self.nodes - 1)
self.v_interface = np.zeros(self.nodes - 1)
self.x_interface = np.zeros(self.nodes - 1)
self.f_param = np.array(
[
np.zeros(self.nodes - 1),
self.press_cell[1:],
np.zeros(self.nodes - 1)
]
)
self.q_param = np.array(
[
self.ro_cell,
self.ro_cell * self.v_cell,
self.ro_cell * (self.energy_cell + self.v_cell ** 2 / 2)
]
)
self.is_solved = False
return self._run()
def _get_q(self):
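        # Conservative finite-volume update on the stretched moving grid:
        #     Q_new = (dx_old / dx_new) * (Q_old - dt / dx_old * (F_{i+1/2} - F_{i-1/2}))
        # applied component-wise to mass, momentum and total energy below.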
coef_stretch = self._x_previous / self.x_interface[1]
self.q_param[0][1:-1] = coef_stretch \
* (self.q_param[0][1:-1]
- self.tau / self._x_previous
* (self.f_param[0][1:] - self.f_param[0][:-1]))
self.q_param[1][1:-1] = coef_stretch \
* (self.q_param[1][1:-1]
- self.tau / self._x_previous
* (self.f_param[1][1:] - self.f_param[1][:-1]))
self.q_param[2][1:-1] = coef_stretch \
* (self.q_param[2][1:-1]
- self.tau / self._x_previous
* (self.f_param[2][1:] - self.f_param[2][:-1]))
self.ro_cell = self.q_param[0]
self.v_cell = self.q_param[1] / self.q_param[0]
self.energy_cell = self.q_param[2] \
/ self.q_param[0] - self.v_cell ** 2 / 2
self.press_cell = self.ro_cell * self.energy_cell * (self.gas.k - 1)
self.c_cell = np.sqrt(self.gas.k * self.press_cell / self.ro_cell)
self._border()
def _get_f(self):
self.f_param[0] = self.c_interface / 2 \
* (self.mah_interface
* (self.F_param_p[0] + self.F_param_m[0])
- abs(self.mah_interface)
* (self.F_param_p[0] - self.F_param_m[0]))
self.f_param[1] = self.c_interface / 2 \
* (self.mah_interface
* (self.F_param_p[1] + self.F_param_m[1])
- abs(self.mah_interface)
* (self.F_param_p[1] - self.F_param_m[1])) \
+ self.press_interface
self.f_param[2] = self.c_interface / 2 \
* (self.mah_interface
* (self.F_param_p[2] + self.F_param_m[2])
- abs(self.mah_interface)
* (self.F_param_p[2] - self.F_param_m[2])) \
+ self.press_interface * self.v_interface
    def _get_F_mines(self):  # "mines" reads as "minus": builds the F^- flux vector; name kept to avoid breaking callers
self.F_param_m[0] = self.ro_cell[:-1]
self.F_param_m[1] = self.ro_cell[:-1] * self.v_cell[:-1]
self.F_param_m[2] = self.ro_cell[:-1] * (
self.energy_cell[:-1]
+ self.v_cell[:-1] ** 2 / 2
+ self.press_cell[:-1] / self.ro_cell[:-1]
)
def _get_F_plus(self):
self.F_param_p[0] = self.ro_cell[1:]
self.F_param_p[1] = self.ro_cell[1:] * self.v_cell[1:]
self.F_param_p[2] = self.ro_cell[1:] * (
self.energy_cell[1:]
+ self.v_cell[1:] ** 2 / 2
+ self.press_cell[1:] / self.ro_cell[1:]
)
def _border(self):
self.q_param[0][0] = self.q_param[0][1]
self.q_param[0][self.nodes - 1] = self.q_param[0][self.nodes - 2]
self.v_cell[0] = -self.v_cell[1]
self.q_param[1][0] = self.ro_cell[0] * self.v_cell[0]
self.q_param[1][self.nodes - 1] = \
self.q_param[0][self.nodes - 1] \
* (2 * self.v_interface[self.nodes - 2]
- self.v_cell[self.nodes - 2])
self.q_param[2][0] = self.q_param[2][1]
self.q_param[2][self.nodes - 1] = self.q_param[2][self.nodes - 2]
def _end_vel_x(self) -> tuple:
"""
        Returns the velocity and coordinate of the last boundary
"""
acceleration = self.press_cell[-2] * self.gun.cs_area / self.gun.shell
velocity = self.v_interface[-1] + acceleration * self.tau
x = self.x_interface[-1] + self.v_interface[-1] * self.tau \
+ acceleration * self.tau ** 2 / 2
return velocity, x | PypiClean |
/Microservice_ml_classifier-0.0.0.tar.gz/Microservice_ml_classifier-0.0.0/README.md | # Microservice ml classifier
This repository is my attempt to play with areas that are sometimes omitted in machine
learning projects. Some of them are:
- Packaging the machine learning model
- Implement a CI/CD pipeline to ensure code works as expected
- Deployment to a production environment
- Build an API to create an interface between the client and the server (see the sketch after this list)
- Explore different architectures based on project needs: Docker, IaaS, PaaS...
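As a minimal illustration of the API bullet above, here is a hedged sketch of a prediction endpoint. FastAPI, pydantic, the `model.pkl` artifact and the `Features` schema are assumptions for illustration, not things this repo ships:
```python
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]  # one flat feature vector

# Hypothetical artifact produced by a training pipeline
with open("model.pkl", "rb") as fh:
    model = pickle.load(fh)

@app.post("/predict")
def predict(features: Features) -> dict:
    # Delegate to the sklearn-style model loaded above
    label = model.predict([features.values])[0]
    return {"label": int(label)}
```
Loading the model at import time means the artifact is deserialized once per worker rather than once per request.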
These areas are very often managed by other roles in software teams such as DevOps,
Architects or Backend software engineers. On the other hand, a Machine Learning Engineer,
is usually in charge of building training and inference pipelines for models in a development
environment.
I want to enrich my craft and experience working in these areas.
Hope this helps anyone who reads it! | PypiClean
/CAuthomatic-0.1.5.tar.gz/CAuthomatic-0.1.5/examples/gae/showcase/static/js/foundation/foundation.reveal.js |
;(function ($, window, document, undefined) {
'use strict';
Foundation.libs.reveal = {
name: 'reveal',
version : '4.0.9',
locked : false,
settings : {
animation: 'fadeAndPop',
animationSpeed: 250,
closeOnBackgroundClick: true,
dismissModalClass: 'close-reveal-modal',
bgClass: 'reveal-modal-bg',
open: function(){},
opened: function(){},
close: function(){},
closed: function(){},
bg : $('.reveal-modal-bg'),
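      // Start/end inline styles: 'open' begins transparent but visible so the
      // modal can animate in; 'close' is the final hidden state after animating out.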
css : {
open : {
'opacity': 0,
'visibility': 'visible',
'display' : 'block'
},
close : {
'opacity': 1,
'visibility': 'hidden',
'display': 'none'
}
}
},
init : function (scope, method, options) {
this.scope = scope || this.scope;
Foundation.inherit(this, 'data_options delay');
if (typeof method === 'object') {
$.extend(true, this.settings, method);
}
if (typeof method != 'string') {
this.events();
return this.settings.init;
} else {
return this[method].call(this, options);
}
},
events : function () {
var self = this;
$(this.scope)
.off('.fndtn.reveal')
.on('click.fndtn.reveal', '[data-reveal-id]', function (e) {
e.preventDefault();
if (!self.locked) {
self.locked = true;
self.open.call(self, $(this));
}
})
.on('click.fndtn.reveal touchend.click.fndtn.reveal', this.close_targets(), function (e) {
e.preventDefault();
if (!self.locked) {
self.locked = true;
self.close.call(self, $(this).closest('.reveal-modal'));
}
})
.on('open.fndtn.reveal', '.reveal-modal', this.settings.open)
.on('opened.fndtn.reveal', '.reveal-modal', this.settings.opened)
.on('opened.fndtn.reveal', '.reveal-modal', this.open_video)
.on('close.fndtn.reveal', '.reveal-modal', this.settings.close)
.on('closed.fndtn.reveal', '.reveal-modal', this.settings.closed)
.on('closed.fndtn.reveal', '.reveal-modal', this.close_video);
return true;
},
open : function (target) {
if (target) {
var modal = $('#' + target.data('reveal-id'));
} else {
var modal = $(this.scope);
}
if (!modal.hasClass('open')) {
var open_modal = $('.reveal-modal.open');
if (typeof modal.data('css-top') === 'undefined') {
modal.data('css-top', parseInt(modal.css('top'), 10))
.data('offset', this.cache_offset(modal));
}
modal.trigger('open');
if (open_modal.length < 1) {
this.toggle_bg(modal);
}
this.hide(open_modal, this.settings.css.open);
this.show(modal, this.settings.css.open);
}
},
close : function (modal) {
var modal = modal || $(this.scope),
open_modals = $('.reveal-modal.open');
if (open_modals.length > 0) {
this.locked = true;
modal.trigger('close');
this.toggle_bg(modal);
this.hide(open_modals, this.settings.css.close);
}
},
close_targets : function () {
var base = '.' + this.settings.dismissModalClass;
if (this.settings.closeOnBackgroundClick) {
return base + ', .' + this.settings.bgClass;
}
return base;
},
toggle_bg : function (modal) {
if ($('.reveal-modal-bg').length === 0) {
this.settings.bg = $('<div />', {'class': this.settings.bgClass})
.insertAfter(modal);
}
if (this.settings.bg.filter(':visible').length > 0) {
this.hide(this.settings.bg);
} else {
this.show(this.settings.bg);
}
},
show : function (el, css) {
// is modal
if (css) {
if (/pop/i.test(this.settings.animation)) {
css.top = $(window).scrollTop() - el.data('offset') + 'px';
var end_css = {
top: $(window).scrollTop() + el.data('css-top') + 'px',
opacity: 1
}
return this.delay(function () {
return el
.css(css)
.animate(end_css, this.settings.animationSpeed, 'linear', function () {
this.locked = false;
el.trigger('opened');
}.bind(this))
.addClass('open');
}.bind(this), this.settings.animationSpeed / 2);
}
if (/fade/i.test(this.settings.animation)) {
var end_css = {opacity: 1};
return this.delay(function () {
return el
.css(css)
.animate(end_css, this.settings.animationSpeed, 'linear', function () {
this.locked = false;
el.trigger('opened');
}.bind(this))
.addClass('open');
}.bind(this), this.settings.animationSpeed / 2);
}
return el.css(css).show().css({opacity: 1}).addClass('open').trigger('opened');
}
// should we animate the background?
if (/fade/i.test(this.settings.animation)) {
return el.fadeIn(this.settings.animationSpeed / 2);
}
return el.show();
},
hide : function (el, css) {
// is modal
if (css) {
if (/pop/i.test(this.settings.animation)) {
var end_css = {
top: - $(window).scrollTop() - el.data('offset') + 'px',
opacity: 0
};
return this.delay(function () {
return el
.animate(end_css, this.settings.animationSpeed, 'linear', function () {
this.locked = false;
el.css(css).trigger('closed');
}.bind(this))
.removeClass('open');
}.bind(this), this.settings.animationSpeed / 2);
}
if (/fade/i.test(this.settings.animation)) {
var end_css = {opacity: 0};
return this.delay(function () {
return el
.animate(end_css, this.settings.animationSpeed, 'linear', function () {
this.locked = false;
el.css(css).trigger('closed');
}.bind(this))
.removeClass('open');
}.bind(this), this.settings.animationSpeed / 2);
}
return el.hide().css(css).removeClass('open').trigger('closed');
}
// should we animate the background?
if (/fade/i.test(this.settings.animation)) {
return el.fadeOut(this.settings.animationSpeed / 2);
}
return el.hide();
},
close_video : function (e) {
var video = $(this).find('.flex-video'),
iframe = video.find('iframe');
if (iframe.length > 0) {
iframe.attr('data-src', iframe[0].src);
iframe.attr('src', 'about:blank');
video.fadeOut(100).hide();
}
},
open_video : function (e) {
var video = $(this).find('.flex-video'),
iframe = video.find('iframe');
if (iframe.length > 0) {
var data_src = iframe.attr('data-src');
if (typeof data_src === 'string') {
iframe[0].src = iframe.attr('data-src');
}
video.show().fadeIn(100);
}
},
cache_offset : function (modal) {
var offset = modal.show().height() + parseInt(modal.css('top'), 10);
modal.hide();
return offset;
},
off : function () {
$(this.scope).off('.fndtn.reveal');
}
};
}(Foundation.zj, this, this.document)); | PypiClean |
/EPViz-0.0.0.tar.gz/EPViz-0.0.0/visualization/image_saving/saveImg_options.py | import sys
from PyQt5.QtWidgets import QWidget, QDialogButtonBox, QFileDialog
import numpy as np
from matplotlib.backends.backend_qt5agg import FigureCanvas
from matplotlib.figure import Figure
from visualization.plot_utils import check_annotations
from visualization.ui_files.saveImg import Ui_Form
class SaveImgOptions(QWidget):
""" Class for the print preview window """
def __init__(self, data, parent):
""" Constructor.
Args:
data - the save image info object
parent - the main (parent) window
"""
super().__init__()
self.data = data
self.parent = parent
self.sio_ui = Ui_Form()
self.sio_ui.setupUi(self)
self.setup_ui() # Show the GUI
def setup_ui(self):
""" Setup the UI for the window.
"""
self.m = PlotCanvas(self, width=7, height=7)
self.sio_ui.plot_layout.addWidget(self.m)
self.set_sig_slots()
def set_sig_slots(self):
""" Set signals and slots
"""
self.sio_ui.okBtn.button(QDialogButtonBox.Ok).clicked.connect(self.print_plot)
self.sio_ui.okBtn.button(QDialogButtonBox.Cancel).clicked.connect(self.close_window)
self.ann_list = []
self.aspan_list = []
self.nchns = self.parent.ci.nchns_to_plot
self.fs = self.parent.edf_info.fs
self.plot_data = self.data.data
self.count = self.data.count
self.window_size = self.data.window_size
self.y_lim = self.data.y_lim
self.predicted = self.data.predicted
self.thresh = self.data.thresh
# add items to the combo boxes
self.sio_ui.textSizeInput.addItems(["6pt","8pt","10pt", "12pt",
"14pt", "16pt"])
self.sio_ui.textSizeInput.setCurrentIndex(3)
self.sio_ui.lineThickInput.addItems(["0.25px","0.5px","0.75px",
"1px","1.25px","1.5px","1.75px",
"2px"])
self.sio_ui.lineThickInput.setCurrentIndex(1)
self.sio_ui.annCbox.setChecked(1)
self.make_plot()
        if (self.parent.argv.export_png_file is not None) and self.parent.init == 0:
self.data.plot_ann = self.parent.argv.print_annotations
self.data.line_thick = self.parent.argv.line_thickness
self.data.font_size = self.parent.argv.font_size
self.data.plot_title = 1
self.data.title = self.parent.argv.plot_title
self.make_plot()
self.print_plot()
else:
self.show()
def ann_checked(self):
""" Called when the annotation cbox is toggled.
"""
self.make_plot()
def title_checked(self):
""" Called when the title cbox is toggled.
"""
if self.sio_ui.titleCbox.isChecked():
self.data.title = self.sio_ui.titleInput.text()
else:
self.data.title = ""
self.ax.set_title(self.data.title, fontsize=self.data.font_size)
self.m.draw()
def title_changed(self):
""" Called when the text in the title input is changed.
"""
self.title_checked()
def chg_line_thick(self):
""" Called when the line thickness is changed.
"""
self.make_plot()
def chg_text_size(self):
""" Called when the font size is changed.
"""
self.make_plot()
def make_plot(self):
""" Makes the plot with the given specifications.
"""
# Get values
self.data.plot_ann = self.sio_ui.annCbox.isChecked()
# Protect against special case when text size gets set before line thickness
if self.sio_ui.lineThickInput.currentIndex() != -1:
thickness = self.sio_ui.lineThickInput.currentText()
thickness = float(thickness.split("px")[0])
self.data.line_thick = thickness
font_size = self.sio_ui.textSizeInput.currentText()
font_size = int(font_size.split("pt")[0])
self.data.font_size = font_size
self.m.fig.clf()
self.ax = self.m.fig.add_subplot(self.m.gs[0])
del self.ax.lines[:]
        for a in self.ann_list:
a.remove()
self.ann_list[:] = []
for aspan in self.aspan_list:
aspan.remove()
self.aspan_list[:] = []
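        # Each channel trace is offset vertically by one y_lim; `width` computed in
        # the loop below is the per-channel fraction of total axes height, used as
        # the ymin/ymax bounds when shading per-channel predictions with axvspan.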
for i in range(self.nchns):
if self.data.plot_ann:
self.ax.plot(self.plot_data[i, :]
+ (i + 1) * self.y_lim, '-', linewidth=self.data.line_thick,
color=self.data.ci.colors[i])
self.ax.set_ylim([-self.y_lim, self.y_lim * (self.nchns + 1)])
self.ax.set_yticks(np.arange(0, (self.nchns + 1)*self.y_lim, step=self.y_lim))
self.ax.set_yticklabels(self.data.ci.labels_to_plot, fontdict=None,
minor=False, fontsize=self.data.font_size)
width = 1 / (self.nchns + 2)
else:
self.ax.plot(self.plot_data[i, :] + (i) * self.y_lim,
'-', linewidth=self.data.line_thick,
color=self.data.ci.colors[i])
self.ax.set_ylim([-self.y_lim, self.y_lim * (self.nchns)])
self.ax.set_yticks(np.arange(0, (self.nchns)*self.y_lim, step=self.y_lim))
self.ax.set_yticklabels(self.data.ci.labels_to_plot[1:], fontdict=None,
minor=False, fontsize=self.data.font_size)
width = 1 / (self.nchns + 1)
if self.predicted == 1:
starts, ends, chns, class_vals = self.data.pi.compute_starts_ends_chns(self.thresh,
self.count, self.window_size, self.fs, self.nchns)
for k in range(len(starts)):
if self.data.pi.pred_by_chn and not self.data.pi.multi_class:
if chns[k][i]:
if i == self.plot_data.shape[0] - 1:
if self.data.plot_ann:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count *
self.fs, ymin=width*(i+1.5),
ymax=1, color='paleturquoise',
alpha=1))
else:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count *
self.fs, ymin=width*(i+0.5),
ymax=1, color='paleturquoise',
alpha=1))
else:
if self.data.plot_ann:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count * self.fs,
ymin=width*(i+1.5), ymax=width*(i+2.5),
color='paleturquoise', alpha=1))
else:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count *
self.fs, ymin=width*(i+0.5),
ymax=width*(i+1.5),
color='paleturquoise',
alpha=1))
x_vals = range(int(starts[k]) - self.count * self.fs,
int(ends[k]) - self.count * self.fs)
if self.data.plot_ann:
self.ax.plot(x_vals, self.plot_data[i, int(starts[k])
- self.count * self.fs:int(ends[k])
- self.count * self.fs] + i * self.y_lim
+ self.y_lim, '-',
linewidth=self.data.line_thick * 2,
color=self.data.ci.colors[i])
else:
self.ax.plot(x_vals, self.plot_data[i, int(starts[k])
- self.count * self.fs:int(ends[k])
- self.count * self.fs] + (i - 1) * self.y_lim
+ self.y_lim, '-',
linewidth=self.data.line_thick * 2,
color=self.data.ci.colors[i])
elif not self.data.pi.pred_by_chn and not self.data.pi.multi_class:
self.aspan_list.append(self.ax.axvspan(starts[k] - self.count * self.fs,
ends[k] - self.count * self.fs, color='paleturquoise', alpha=0.5))
elif not self.data.pi.pred_by_chn and self.data.pi.multi_class:
if i == 0: # only plot for first chn
r, g, b, a = self.data.pi.get_color(class_vals[k])
r = r / 255
g = g / 255
b = b / 255
a = a / 255
self.aspan_list.append(self.ax.axvspan(starts[k] - self.count * self.fs,
ends[k] - self.count * self.fs, color=(r,g,b,a)))
else:
for i in range(self.nchns):
r, g, b, a = self.data.pi.get_color(chns[i][k])
r = r / 255
g = g / 255
b = b / 255
a = a / 255
if i == self.plot_data.shape[0] - 1:
if self.data.plot_ann:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count *
self.fs, ymin=width*(i+1.5),
ymax=1, color=(r,g,b,a)))
else:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count *
self.fs, ymin=width*(i+0.5),
ymax=1, color=(r,g,b,a)))
else:
if self.data.plot_ann:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count * self.fs,
ymin=width*(i+1.5), ymax=width*(i+2.5),
color=(r,g,b,a)))
else:
self.aspan_list.append(self.ax.axvspan(starts[k] -
self.count * self.fs,
ends[k] - self.count *
self.fs, ymin=width*(i+0.5),
ymax=width*(i+1.5),
color=(r,g,b,a)))
self.ax.set_xlim([0, self.fs * self.window_size])
step_size = self.fs # Updating the x labels with scaling
step_width = 1
if self.window_size >= 15 and self.window_size <= 25:
step_size = step_size * 2
step_width = step_width * 2
elif self.window_size > 25:
step_size = step_size * 3
step_width = step_width * 3
self.ax.set_xticks(np.arange(0, self.window_size *
self.fs + 1, step=step_size))
self.ax.set_xticklabels(np.arange(self.count, self.count + self.window_size + 1,
step=step_width), fontdict=None,
minor=False, fontsize=self.data.font_size)
self.ax.set_xlabel("Time (s)", fontsize=self.data.font_size)
self.ax.set_title(self.data.title, fontsize=self.data.font_size)
if self.data.plot_ann:
ann, idx_w_ann = check_annotations(
self.count, self.window_size, self.parent.edf_info)
font_size = self.data.font_size - 4
# Add in annotations
if len(ann) != 0:
ann = np.array(ann).T
txt = ""
int_prev = int(float(ann[0, 0]))
for i in range(ann.shape[1]):
int_i = int(float(ann[0, i]))
if int_prev == int_i:
txt = txt + "\n" + ann[2, i]
else:
if idx_w_ann[int_prev - self.count] and int_prev % 2 == 1:
self.ann_list.append(self.ax.annotate(txt, xy=(
(int_prev - self.count)*self.fs, -self.y_lim / 2 + self.y_lim),
color='black', size=font_size))
else:
self.ann_list.append(self.ax.annotate(txt, xy=(
(int_prev - self.count)*self.fs, -self.y_lim / 2),
color='black', size=font_size))
txt = ann[2, i]
int_prev = int_i
if txt != "":
if idx_w_ann[int_i - self.count] and int_i % 2 == 1:
self.ann_list.append(self.ax.annotate(txt, xy=(
(int_i - self.count)*self.fs, -self.y_lim / 2 + self.y_lim),
color='black', size=font_size))
else:
self.ann_list.append(self.ax.annotate(
txt, xy=((int_i - self.count)*self.fs, -self.y_lim / 2),
color='black', size=font_size))
self.m.draw()
def print_plot(self):
""" Saves the plot and exits.
"""
        if (self.parent.argv.export_png_file is not None) and self.parent.init == 0:
self.ax.figure.savefig(self.parent.argv.export_png_file,
bbox_inches='tight', dpi=300)
if self.parent.argv.show == 0:
sys.exit()
else:
self.close_window()
else:
file = QFileDialog.getSaveFileName(self, 'Save File')
if len(file[0]) == 0 or file[0] is None:
return
else:
self.ax.figure.savefig(file[0] + ".png",
bbox_inches='tight', dpi=300)
self.close_window()
def reset_initial_state(self):
""" Reset initial values for the next time this window is opened.
"""
self.data.plot_ann = 1
self.data.line_thick = 0.5
self.data.font_size = 12
self.data.plot_title = 0
self.data.title = ""
def close_window(self):
""" Close the window and exit.
"""
self.parent.saveimg_win_open = 0
self.reset_initial_state()
self.close()
def closeEvent(self, event):
""" Called when the window is closed.
"""
self.parent.saveimg_win_open = 0
self.reset_initial_state()
event.accept()
class PlotCanvas(FigureCanvas):
""" Plot canvas class used to make a matplotlib graph into a widget.
"""
def __init__(self, parent=None, width=5, height=4, dpi=100):
""" Create the widget.
"""
self.fig = Figure(figsize=(width, height), dpi=dpi,
constrained_layout=False)
self.gs = self.fig.add_gridspec(1, 1, wspace=0.0, hspace=0.0)
FigureCanvas.__init__(self, self.fig)
self.setParent(parent) | PypiClean |
/Flask_RESTful_extend-0.3.3-py3-none-any.whl/flask_restful_extend/error_handling.py | import flask_restful as restful  # flask.ext.* was removed from Flask; import the extension package directly
class ErrorHandledApi(restful.Api):
"""Usage:
api = restful_extend.ErrorHandledApi(app)
instead of:
api = restful.Api(app)
"""
def handle_error(self, e):
"""Resolve sometimes the error message specified by programmer won't output to user's problem.
Flask-RESTFul's error handler handling format different exceptions has different behavior.
If we report error by `restful.abort()`,
likes `restful.abort(400, "message": "my_msg", "custom_data": "value")`,
it will output:
Status 400
Content {"message": "my_msg", "custom_data": "value"}
The error message was outputted normally.
But, if we use `flask.abort()`, or raise an exception by any other way,
example `from werkzeug.exceptions import BadRequest; raise BadRequest('my_msg')`,
it will output:
Status 400
Content {"status": 400, "message": "Bad Request"}
The output content's format was change, and the error message specified by ourselves was lose.
Let's see why.
The exception-format supported by Flask-RESTFul's error handler was:
code: status code
data: {
message: error message
}
Exceptions raised by Flask-RESTFul's format was, so error handler can handling it normally:
code: status code
description: predefined error message for this status code
data: {
message: error message
}
This is python's standard Exception's format:
message: error message
And this is `werkzeug.exceptions.HTTPException` (same as BadRequest) 's format:
code: status code
name: the name correspondence to status code
description: error message
Flask-RESTFul's error handler hasn't handle these exceptions as my expectation.
What I need to do, was create an attribute names `code` for exceptions doesn't have it,
and create an attribute names `data` to represent the original error message."""
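        # Illustration (hypothetical): `raise BadRequest('my_msg')` arrives here with
        # e.code == 400 and e.description == 'my_msg' but no e.data, so the branches
        # below normalize it to e.data = {'message': 'my_msg'} before delegating.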
if not hasattr(e, 'data'):
if hasattr(e, 'description'):
e.data = dict(message=e.description)
elif hasattr(e, 'message'):
if not hasattr(e, 'code'):
e.code = 500
e.data = dict(message=e.message)
return super(ErrorHandledApi, self).handle_error(e)
def unauthorized(self, response):
"""In default, when users was unauthorized, Flask-RESTFul will popup an login dialog for user.
But for an RESTFul app, this is useless, so I override the method to remove this behavior."""
return response | PypiClean |
/CryptoLyzer-0.9.1-py3-none-any.whl/cryptolyzer/ssh/server.py |
import abc
import attr
from cryptodatahub.ssh.algorithm import (
SshCompressionAlgorithm,
SshEncryptionAlgorithm,
SshHostKeyAlgorithm,
SshKexAlgorithm,
SshMacAlgorithm,
)
from cryptoparser.common.classes import LanguageTag
from cryptoparser.ssh.record import SshRecordInit
from cryptoparser.ssh.subprotocol import SshMessageCode, SshReasonCode, SshKeyExchangeInit, SshDisconnectMessage
from cryptoparser.ssh.version import SshProtocolVersion, SshVersion
from cryptolyzer.common.application import L7ServerBase, L7ServerHandshakeBase, L7ServerConfigurationBase
from cryptolyzer.ssh.client import SshProtocolMessageDefault
from cryptolyzer.ssh.transfer import SshHandshakeBase
@attr.s
class SshServerConfiguration(L7ServerConfigurationBase): # pylint: disable=too-many-instance-attributes
protocol_version = attr.ib(
validator=attr.validators.instance_of(SshProtocolVersion),
default=SshProtocolVersion(SshVersion.SSH2)
)
kex_algorithms = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshKexAlgorithm)),
default=list(SshKexAlgorithm)
)
server_host_key_algorithms = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshHostKeyAlgorithm)),
default=list(SshHostKeyAlgorithm)
)
encryption_algorithms_client_to_server = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshEncryptionAlgorithm)),
default=list(SshEncryptionAlgorithm)
)
encryption_algorithms_server_to_client = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshEncryptionAlgorithm)),
default=list(SshEncryptionAlgorithm)
)
mac_algorithms_client_to_server = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshMacAlgorithm)),
default=list(SshMacAlgorithm)
)
mac_algorithms_server_to_client = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshMacAlgorithm)),
default=list(SshMacAlgorithm)
)
compression_algorithms_client_to_server = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshCompressionAlgorithm)),
default=list(SshCompressionAlgorithm)
)
compression_algorithms_server_to_client = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.in_(SshCompressionAlgorithm)),
default=list(SshCompressionAlgorithm)
)
languages_client_to_server = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(LanguageTag)),
default=()
)
languages_server_to_client = attr.ib(
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(LanguageTag)),
default=()
)
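# Usage sketch (hypothetical wiring; the exact L7ServerBase constructor arguments
# are defined in cryptolyzer.common.application, so treat them as assumptions):
#     config = SshServerConfiguration(kex_algorithms=list(SshKexAlgorithm)[:1])
#     server = L7ServerSsh('127.0.0.1', configuration=config)
#     client_messages = server.do_ssh_handshake()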
@attr.s
class L7ServerSshBase(L7ServerBase):
@classmethod
@abc.abstractmethod
def get_scheme(cls):
raise NotImplementedError()
@classmethod
@abc.abstractmethod
def get_default_port(cls):
raise NotImplementedError()
def _get_handshake_class(self):
return SshServerHandshake
def _do_handshake(self, last_handshake_message_type):
try:
handshake_class = self._get_handshake_class()
handshake_object = handshake_class(self, self.configuration)
handshake_object.do_handshake(last_handshake_message_type)
finally:
self.l4_transfer.close()
return handshake_object.client_messages
def do_ssh_handshake(self, last_handshake_message_type=SshMessageCode.KEXINIT):
return self._do_handshakes(last_handshake_message_type)
@attr.s
class SshServerHandshake(L7ServerHandshakeBase, SshHandshakeBase):
def _init_connection(self, last_handshake_message_type):
protocol_message = SshProtocolMessageDefault()
protocol_message.protocol_version = self.configuration.protocol_version
key_exchange_init_message = SshKeyExchangeInit(
self.configuration.kex_algorithms,
self.configuration.server_host_key_algorithms,
self.configuration.encryption_algorithms_client_to_server,
self.configuration.encryption_algorithms_server_to_client,
self.configuration.mac_algorithms_client_to_server,
self.configuration.mac_algorithms_server_to_client,
self.configuration.compression_algorithms_client_to_server,
self.configuration.compression_algorithms_server_to_client,
self.configuration.languages_client_to_server,
self.configuration.languages_server_to_client,
)
return self.do_key_exchange_init(
transfer=self.l7_transfer,
protocol_message=protocol_message,
key_exchange_init_message=key_exchange_init_message,
last_handshake_message_type=last_handshake_message_type
)
def _parse_record(self):
record = SshRecordInit.parse_exact_size(self.l7_transfer.buffer)
is_handshake = record.packet.get_message_code() == SshMessageCode.KEXINIT
return record, len(self.l7_transfer.buffer), is_handshake
def _parse_message(self, record):
return record.packet
def _process_handshake_message(self, message, last_handshake_message_type):
self._last_processed_message_type = message.get_message_code()
self.client_messages[self._last_processed_message_type] = message
if self._last_processed_message_type == last_handshake_message_type:
self._send_disconnect(SshReasonCode.HOST_NOT_ALLOWED_TO_CONNECT, 'not allowed to connect')
raise StopIteration()
def _process_non_handshake_message(self, message):
self._send_disconnect(SshReasonCode.PROTOCOL_ERROR, 'protocol error', 'en')
raise StopIteration()
def _process_invalid_message(self):
self._send_disconnect(SshReasonCode.PROTOCOL_ERROR, 'protocol error', 'en')
raise StopIteration()
def _send_disconnect(self, reason, description, language=None):
kwargs = {
'reason': reason,
'description': description,
}
if language is not None:
kwargs['language'] = language
self.l7_transfer.send(SshRecordInit(SshDisconnectMessage(**kwargs)).compose())
class L7ServerSsh(L7ServerSshBase):
def __attrs_post_init__(self):
if self.configuration is None:
self.configuration = SshServerConfiguration()
@classmethod
def get_scheme(cls):
return 'ssh'
@classmethod
def get_default_port(cls):
return 2222 | PypiClean |
/models/MLP/model_maker.py |
# Built in
import math
# Libs
import numpy as np
# Pytorch module
import torch.nn as nn
import torch.nn.functional as F
import torch
from torch import pow, add, mul, div, sqrt
class Forward(nn.Module):
def __init__(self, flags):
super(Forward, self).__init__()
self.skip_connection = flags.skip_connection
if flags.dropout > 0:
self.dp = True
self.dropout = nn.Dropout(p=flags.dropout)
else:
self.dp = False
self.skip_head = flags.skip_head
"""
General layer definitions:
"""
# Linear Layer and Batch_norm Layer definitions here
self.linears = nn.ModuleList([])
self.bn_linears = nn.ModuleList([])
#self.dropout = nn.ModuleList([]) #Dropout layer was tested for fixing overfitting problem
for ind, fc_num in enumerate(flags.linear[0:-1]): # Excluding the last one as we need intervals
self.linears.append(nn.Linear(fc_num, flags.linear[ind + 1]))
self.bn_linears.append(nn.BatchNorm1d(flags.linear[ind + 1]))
#self.dropout.append(nn.Dropout(p=0.05))
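    # Usage sketch (hypothetical `flags` namespace; only the attributes read in
    # __init__ above are required):
    #     flags = SimpleNamespace(linear=[10, 64, 64, 3], skip_connection=False,
    #                             skip_head=0, dropout=0.0)
    #     net = Forward(flags)
    #     S = net(torch.rand(8, 10))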
def forward(self, G):
"""
The forward function which defines how the network is connected
:param G: The input geometry (Since this is a forward network)
:return: S: Spectrum outputs
"""
out = G # initialize the out
# For the linear part
for ind, (fc, bn) in enumerate(zip(self.linears, self.bn_linears)):
            # print(out.size())
if self.skip_connection:
if ind < len(self.linears) - 1:
if ind == self.skip_head:
out = F.relu(bn(fc(out)))
if self.dp:
out = self.dropout(out)
identity = out
elif ind > self.skip_head and (ind - self.skip_head)%2 == 0:
out = F.relu(bn(fc(out))) # ReLU + BN + Linear
if self.dp:
out = self.dropout(out)
out += identity
identity = out
else:
out = F.relu(bn(fc(out)))
if self.dp:
out = self.dropout(out)
else:
out = (fc(out))
else:
if ind < len(self.linears) - 1:
out = F.relu(bn(fc(out)))
else:
out = fc(out)
return out | PypiClean |
/KegLogin-0.5.4.tar.gz/KegLogin-0.5.4/scripts/release.sh |
# VARIABLES
SRC_DIR=keg_login
REPO_URL="https://github.com/level12/keg-login"
# FUNCTIONS
confirm () {
# call with a prompt string or use a default
read -r -p "${1:-Are you sure?} [y/N]: " response
case $response in
[yY][eE][sS]|[yY])
true
;;
*)
false
;;
esac
}
check_for_auth() {
    FOUND_AUTH=false
    echo "Searching for credentials..."
    if [ -z "$PYPI_USER" ] || [ -z "$PYPI_PASS" ]; then
        echo "    PYPI_USER or PYPI_PASS not set; looking for another auth method."
    else
        echo "    Found username and password in environment."
        FOUND_AUTH='env'
    fi
    if [ "$FOUND_AUTH" = false ] && [ ! -f "$HOME/.pypirc" ]; then
        echo "    Unable to find a ~/.pypirc file."
    elif [ "$FOUND_AUTH" = false ]; then
        echo "    Found pypirc for authentication."
        FOUND_AUTH='file'
    fi
    if [ "$FOUND_AUTH" = false ]; then
        echo "ERROR: Unable to find an authentication mechanism"; exit 1;
    fi
    # `return` takes a numeric status in bash; emit the mechanism on stdout instead
    echo "$FOUND_AUTH"
}
# ENVIRONMENT CHECKS;
test -z "$(git status --porcelain)" || { echo "Unclean head... aborting"; exit 1; }
# capture "true"/"false" so the `if $HAS_SED && ...` checks below execute the
# corresponding shell builtin (the old `2>&1>/dev/null` capture left these empty)
HAS_SED=$(command -v sed >/dev/null 2>&1 && echo true || echo false)
HAS_AWK=$(command -v awk >/dev/null 2>&1 && echo true || echo false)
HAS_GPG=$(command -v gpg >/dev/null 2>&1 && echo true || echo false)
PROJECT_NAME=$(python setup.py --name)
CURRENT_VERSION=$(python setup.py -V)
echo -n "Enter new version number (current $CURRENT_VERSION): "
read NEW_VERSION
if $HAS_SED && confirm "Update version file?"; then
echo "VERSION = '$NEW_VERSION'" > $SRC_DIR/version.py
fi
CURRENT_GIT_TAG=$(git describe --tags --abbrev=0)
GIT_CHANGELOG=$(git log --merges --pretty=format:"* %s (%h_)" $CURRENT_GIT_TAG..HEAD)
GIT_CHANGELOG_LINKS=$(git log --merges --pretty=format:".. _%h: $REPO_URL/commit/%h" $CURRENT_GIT_TAG..HEAD)
echo -e "CHANGELOG UPDATES\n"
echo -e "$GIT_CHANGELOG\n"
echo "$GIT_CHANGELOG_LINKS\n"
if $HAS_AWK && confirm "Update changelog?"; then
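    # awk: locate the '=========' title underline in changelog.rst and splice the
    # new release heading, merged-PR log, and commit links in right after it.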
awk -v H="$NEW_VERSION - $(date +%Y-%m-%d)" \
-v HB="------------------" \
-v CL="$GIT_CHANGELOG" \
-v CLL="$GIT_CHANGELOG_LINKS" \
'/=========/{print;
print "";
print H;
print HB;
print "";
print CL;
print "";
print CLL;
print "";
next}1' \
changelog.rst > tmp && mv tmp changelog.rst
fi
if confirm "Commit?"; then
git add changelog.rst $SRC_DIR/version.py
git commit --quiet -m "Version Bump $NEW_VERSION"
GPG_TTY=$(tty) git tag --sign -m "Version Bump $NEW_VERSION" $NEW_VERSION
if confirm "Push to Git remote?"; then
echo -n "Which remote should I push to?"
read UPSTREAM
git push --tags $UPSTREAM master
fi
fi
echo "Building package..."
rm -rf dist
pip --quiet install twine wheel
python setup.py --quiet sdist bdist_wheel
if $HAS_GPG && confirm "Sign files?"; then
gpg --detach-sign -a "dist/$PROJECT_NAME-$(python setup.py -V).tar.gz"
gpg --detach-sign -a "dist/$PROJECT_NAME-$(python setup.py -V)-py2.py3-none-any.whl"
fi
if confirm "Push to pypi?"; then
twine upload dist/*
fi | PypiClean |
/DigLabTools-0.0.2-py3-none-any.whl/redcap_bridge/server_interface.py | import os
import warnings
import json
import pandas as pd
import redcap
from redcap_bridge.utils import map_header_csv_to_json, compress_record
def upload_datadict(csv_file, server_config_json):
"""
Parameters
----------
csv_file: str
Path to the csv file to be used as data dictionary
server_config_json: str
Path to the json file containing the redcap url and api token
Returns
-------
(int): The number of uploaded fields
"""
df = pd.read_csv(csv_file, dtype=str)
df.rename(columns=map_header_csv_to_json, inplace=True)
# Upload csv using pycap
redproj = get_redcap_project(server_config_json)
n = redproj.import_metadata(df, import_format='df', return_format_type='json')
return n
def download_datadict(save_to, server_config_json, format='csv'):
"""
Parameters
----------
save_to: str
Path where to save the retrieved data dictionary
server_config_json: str
Path to the json file containing the redcap url and api token
format: 'csv', 'json', 'df'
Format of the retrieved data dictionary
"""
redproj = get_redcap_project(server_config_json)
data_dict = redproj.export_metadata(format_type=format)
if format == 'csv':
with open(save_to, 'w') as save_file:
save_file.writelines(data_dict)
elif format == 'df':
data_dict.to_csv(save_to)
elif format == 'json':
with open(save_to, 'w') as save_file:
json.dump(data_dict, save_file)
else:
raise ValueError(f'Unknown format {format}. Valid formats are "csv" '
f'and "json".')
def download_records(save_to, server_config_json, format='csv', compressed=False, **kwargs):
"""
Download records from the redcap server.
Parameters
----------
save_to: str
Path where to save the retrieved records csv
server_config_json: str
Path to the json file containing the redcap url and api token
format: 'csv', 'json'
Format of the retrieved records
kwargs: dict
Additional arguments passed to PyCap `export_records`
"""
if compressed:
fixed_params = {'raw_or_label': 'label',
'raw_or_label_headers': 'label',
'export_checkbox_labels': True}
for fix_key, fix_value in fixed_params.items():
if fix_key in kwargs and kwargs[fix_key] != fix_value:
warnings.warn(f'`compressed` is overwriting current {fix_key} setting.')
kwargs.update(fixed_params)
redproj = get_redcap_project(server_config_json)
records = redproj.export_records(format_type=format, **kwargs)
if format == 'csv':
with open(save_to, 'w') as save_file:
save_file.writelines(records)
elif format == 'df':
records.to_csv(save_to)
elif format == 'json':
with open(save_to, 'w') as save_file:
json.dump(records, save_file)
else:
raise ValueError(f'Unknown format {format}. Valid formats are "csv" '
f'and "json".')
if compressed:
if format != 'csv':
warnings.warn('Can only compress csv output. Ignoring `compressed` parameter.')
else:
# run compression in place
compress_record(save_to, save_to)
def upload_records(csv_file, server_config_json):
"""
Parameters
----------
csv_file: str
Path to the csv file to be used as records
server_config_json: str
Path to the json file containing the redcap url and api token
Returns
-------
(int): Number of uploaded records
"""
df = pd.read_csv(csv_file, dtype=str)
df.rename(columns=map_header_csv_to_json, inplace=True)
# Upload csv using pycap
redproj = get_redcap_project(server_config_json)
# activate repeating instrument feature if present in records
if 'redcap_repeat_instrument' in df.columns:
form_name = df['redcap_repeat_instrument'].values[0]
redproj.import_repeating_instruments_events([{"form_name": form_name,
"custom_form_label": ""}])
n = redproj.import_records(df, import_format='df', return_format_type='json')
return n['count']
def get_json_csv_header_mapping(server_config_json):
"""
Returns
-------
dict:
Mapping of json to csv headers
"""
# TODO: This function should replace utils.py/map_header_json_to_csv
raise NotImplementedError()
def check_external_modules(server_config_json):
"""
Download records from the redcap server.
Parameters
----------
server_config_json: str
Path to the json file containing the redcap url, api token and required external modules
Returns
-------
bool: True if required external modules are present
"""
config = json.load(open(server_config_json, 'r'))
if 'external_modules' not in config:
warnings.warn('No external_modules defined in project configuration')
return True
redproj = get_redcap_project(server_config_json)
proj_json = redproj.export_project_info(format_type='json')
missing_modules = []
for ext_mod in config['external_modules']:
if ext_mod not in proj_json['external_modules']:
missing_modules.append(ext_mod)
if missing_modules:
warnings.warn(f'Project on server is missing external modules: {missing_modules}')
return False
else:
return True
def download_project_settings(server_config_json, format='json'):
"""
Get project specific settings from server
Parameters
----------
server_config_json: str
Path to the json file containing the redcap url, api token and required external modules
format: str
Return format to use (json, csv, xml, df)
Returns
-------
(dict|list|xml|df): The project settings in the corresponding format
"""
redproj = get_redcap_project(server_config_json)
proj_settings = redproj.export_project_info(format_type=format)
return proj_settings
def configure_project_settings(server_config_json):
"""
Setting project specific settings on server
Parameters
----------
server_config_json: str
Path to the json file containing the redcap url, api token and required external modules
"""
redproj = get_redcap_project(server_config_json)
proj_json = redproj.export_project_info(format_type='json')
config = json.load(open(server_config_json, 'r'))
# configure default settings (surveys) and configured settings
# setting project info requires a SUPER API token - not for standard usage
if not proj_json['surveys_enabled']:
warnings.warn(f'Surveys are not enabled for project {proj_json["project_title"]} '
f'(project_id {proj_json["project_id"]}). Visit the RedCap webinterface and '
f'enable surveys to be able to collect data via the survey URL')
def get_redcap_project(server_config_json):
"""
Initialize a pycap project based on the provided server configuration
:param server_config_json: json file containing the api_url and api_token
:return: pycap project
"""
config = json.load(open(server_config_json, 'r'))
if config['api_token'] in os.environ:
config['api_token'] = os.environ[config['api_token']]
redproj = redcap.Project(config['api_url'], config['api_token'])
return redproj
if __name__ == '__main__':
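    # Usage sketch (hypothetical paths; the JSON config holds api_url/api_token):
    #     download_records('records.csv', 'server_config.json', compressed=True)
    #     upload_datadict('data_dict.csv', 'server_config.json')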
pass | PypiClean |
/FlydraAnalysisTools-0.0.1.tar.gz/FlydraAnalysisTools-0.0.1/flydra_analysis_tools/flydra_analysis_dataset.py |
# depends on flydra.analysis and flydra.a2 -- if these are not installed, point to the appropriate path
# once you have this dataset class, the data no longer depends on flydra, and therefore, matplotlib 0.99
import numpy as np
import pickle
import sys
import os
import time
try:
import flydra.a2.core_analysis as core_analysis
import flydra.analysis.result_utils as result_utils
except:
print 'Need to install flydra if you want to load raw data!'
print 'For unpickling, however, flydra is not necessary'
class Dataset:
def __init__(self):
self.trajecs = {}
def test(self, filename, kalman_smoothing=False, dynamic_model=None, fps=None, info={}, save_covariance=False):
ca = core_analysis.get_global_CachingAnalyzer()
(obj_ids, use_obj_ids, is_mat_file, data_file, extra) = ca.initial_file_load(filename)
return ca, obj_ids, use_obj_ids, is_mat_file, data_file, extra
def load_data(self, filename, kalman_smoothing=False, dynamic_model=None, fps=None, info={}, save_covariance=False):
# use info to pass information to trajectory instances as a dictionary.
# eg. info={"post_type": "black"}
# save_covariance: set to True if you need access to the covariance data. Keep as False if this is not important for analysis (takes up lots of space)
# set up analyzer
ca = core_analysis.get_global_CachingAnalyzer()
(obj_ids, use_obj_ids, is_mat_file, data_file, extra) = ca.initial_file_load(filename)
data_file.flush()
# data set defaults
if fps is None:
fps = result_utils.get_fps(data_file)
if dynamic_model is None:
try:
dyn_model = extra['dynamic_model_name']
except:
print 'cannot find dynamic model'
print 'using EKF mamarama, units: mm'
dyn_model = 'EKF mamarama, units: mm'
if dynamic_model is not None:
dyn_model = dynamic_model
# if kalman smoothing is on, then we cannot use the EKF model - remove that from the model name
print '** Kalman Smoothing is: ', kalman_smoothing, ' **'
if kalman_smoothing is True:
dyn_model = dyn_model[4:]
print 'using dynamic model: ', dyn_model
print 'framerate: ', fps
print 'loading data.... '
self.dynamic_model = dyn_model
self.fps = fps
# load object id's and save as Trajectory instances
for obj_id in use_obj_ids:
print 'processing: ', obj_id
try:
print obj_id
kalman_rows = ca.load_data( obj_id, data_file,
dynamic_model_name = dyn_model,
use_kalman_smoothing= kalman_smoothing,
frames_per_second= fps)
except:
print 'object id failed to load (probably no data): ', obj_id
continue
# couple object ID dictionary with trajectory objects
trajec_id = str(obj_id) # this is not necessarily redundant with the obj_id, it allows for making a unique trajectory id when merging multiple datasets
tmp = Trajectory(trajec_id, kalman_rows, info=info, fps=fps, save_covariance=save_covariance, extra=extra)
self.trajecs.setdefault(trajec_id, tmp)
return
def del_trajec(self, key):
del (self.trajecs[key])
def get_trajec(self, n=0):
key = self.trajecs.keys()[n]
return self.trajecs[key]
class Trajectory(object):
def __init__(self, trajec_id, kalman_rows=None, info={}, fps=None, save_covariance=False, extra=None):
self.key = trajec_id
if kalman_rows is None:
return
self.info = info
self.fps = fps
"""
kalman rows = [0] = obj_id
[1] = frame
[2] = timestamp
[3:6] = positions
[6:9] = velocities
[9:12] = P00-P02
[12:18] = P11,P12,P22,P33,P44,P55
[18:21] = rawdir_pos
[21:24] = dir_pos
dtype=[('obj_id', '<u4'), ('frame', '<i8'), ('timestamp', '<f8'), ('x', '<f8'), ('y', '<f8'), ('z', '<f8'), ('xvel', '<f8'), ('yvel', '<f8'), ('zvel', '<f8'), ('P00', '<f8'), ('P01', '<f8'), ('P02', '<f8'), ('P11', '<f8'), ('P12', '<f8'), ('P22', '<f8'), ('P33', '<f8'), ('P44', '<f8'), ('P55', '<f8'), ('rawdir_x', '<f4'), ('rawdir_y', '<f4'), ('rawdir_z', '<f4'), ('dir_x', '<f4'), ('dir_y', '<f4'), ('dir_z', '<f4')])
Covariance Matrix (P):
[ xx xy xz
xy yy yz
xz yz zz ]
full covariance for trajectories as velc as well
"""
self.obj_id = kalman_rows[0][0]
self.first_frame = int(kalman_rows[0][1])
self.fps = float(fps)
self.length = len(kalman_rows)
print 'local time: ', time.strftime( '%Y%m%d_%H%M%S', time.localtime(extra['time_model'].framestamp2timestamp(kalman_rows[0][1])) )
print 'epochtime: ', extra['time_model'].framestamp2timestamp(kalman_rows[0][1])
self.timestamp_local = time.strftime( '%Y%m%d_%H%M%S', time.localtime(extra['time_model'].framestamp2timestamp(kalman_rows[0][1])) )
self.timestamp_epoch = extra['time_model'].framestamp2timestamp(kalman_rows[0][1])
self.time_fly = np.arange(0,self.length/self.fps,1/self.fps)
self.positions = np.zeros([self.length, 3])
self.velocities = np.zeros([self.length, 3])
self.speed = np.zeros([self.length])
self.speed_xy = np.zeros([self.length])
self.length = len(self.speed)
self.cull = False
self.frame_range_roi = (0, self.length)
for i in range(len(kalman_rows)):
for j in range(3):
self.positions[i][j] = kalman_rows[i][j+3]
self.velocities[i][j] = kalman_rows[i][j+6]
self.speed[i] = np.sqrt(kalman_rows[i][6]**2+kalman_rows[i][7]**2+kalman_rows[i][8]**2)
self.speed_xy[i] = np.sqrt(kalman_rows[i][6]**2+kalman_rows[i][7]**2)
if save_covariance:
self.covariance_position = [np.zeros([3,3]) for i in range(self.length)]
self.covariance_velocity = [np.zeros([3]) for i in range(self.length)]
for i in range(self.length):
self.covariance_position[i][0,0] = kalman_rows[i]['P00']
self.covariance_position[i][0,1] = kalman_rows[i]['P01']
self.covariance_position[i][0,2] = kalman_rows[i]['P02']
self.covariance_position[i][1,1] = kalman_rows[i]['P11']
self.covariance_position[i][1,2] = kalman_rows[i]['P12']
self.covariance_position[i][2,2] = kalman_rows[i]['P22']
self.covariance_velocity[i][0] = kalman_rows[i]['P33']
self.covariance_velocity[i][1] = kalman_rows[i]['P44']
self.covariance_velocity[i][2] = kalman_rows[i]['P55']
###################################################################################################
# General use dataset functions
###################################################################################################
# recommended function naming scheme:
# calc_foo(trajec): will save values to trajec.foo (use sparingly on large datasets)
# get_foo(trajec): will return values
# use iterate_calc_function(dataset, function) to run functions on all the trajectories in a dataset
def save(dataset, filename):
print 'saving dataset to file: ', filename
fname = (filename)
fd = open( fname, mode='w' )
pickle.dump(dataset, fd)
fd.close()
return 1
def load(filename):
fname = (filename)
fd = open( fname, mode='r')
print 'loading data... '
dataset = pickle.load(fd)
fd.close()
return dataset
# this function lets you write functions that operate on the trajectory class, but then easily apply that function to all the trajectories within a dataset class. It makes debugging new functions easier/faster if you write to operate on a trajectory.
def iterate_calc_function(dataset, function, keys=None, *args, **kwargs):
# keys allows you to provide a list of keys to perform the function on, default is all the keys
if keys is None:
keys = dataset.trajecs.keys()
for key in keys:
trajec = dataset.trajecs[key]
function(trajec, *args, **kwargs)
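# Example (hypothetical per-trajectory function):
#     def calc_mean_speed(trajec):
#         trajec.mean_speed = np.mean(trajec.speed)
#     iterate_calc_function(dataset, calc_mean_speed)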
def merge_datasets(dataset_list):
# dataset_list should be a list of datasets
dataset = Dataset()
n = 0
for d in dataset_list:
for k, trajec in d.trajecs.iteritems():
new_trajec_id = str(n) + '_' + k # make sure we use unique identifier
trajec.key = new_trajec_id
# backwards compatibility stuff: NOTE: not fully backwards compatible! (laziness)
if type(trajec) is not Trajectory:
new_trajec = Trajectory(new_trajec_id)
new_trajec.positions = trajec.positions
new_trajec.velocities = trajec.velocities
new_trajec.speed = trajec.speed
new_trajec.length = len(trajec.speed)
new_trajec.fps = trajec.fps
dataset.trajecs.setdefault(new_trajec_id, new_trajec)
else:
dataset.trajecs.setdefault(new_trajec_id, trajec)
n += 1
return dataset
def make_mini_dataset(dataset, nkeys = 500):
# helpful if working with large datasets and you want a small one to test out code/plots
new_dataset = Dataset()
n = 0
for k, trajec in dataset.trajecs.iteritems():
n += 1
new_trajec_id = str(n) + '_' + k # make sure we use unique identifier
trajec.key = new_trajec_id
new_dataset.trajecs.setdefault(new_trajec_id, trajec)
if n >= nkeys:
break
return new_dataset
def count_flies(dataset, attr=None, val=None):
print 'n flies: ', len(dataset.trajecs.keys())
def count_for_attribute(a, v):
n = 0
for k, trajec in dataset.trajecs.iteritems():
test_val = trajec.__getattribute__(a)
if test_val == v:
n += 1
print a + ': ' + v + ': ' + str(n)
if attr is not None:
if type(attr) is list:
for i, a in enumerate(attr):
count_for_attribute(a, val[i])
else:
count_for_attribute(attr, val)
def get_basic_statistics(dataset):
n_flies = 0
mean_speeds = []
frame_length = []
for k, trajec in dataset.trajecs.items():
n_flies += 1
mean_speeds.append( np.mean(trajec.speed) )
frame_length.append( len(trajec.speed) )
mean_speeds = np.array(mean_speeds)
frame_length = np.array(frame_length)
print 'num flies: ', n_flies
print 'mean speed: ', np.mean(mean_speeds)
print 'mean frame length: ', np.mean(frame_length)
def load_single_h5(filename, save_as=None, save_dataset=True, return_dataset=True, kalman_smoothing=True, save_covariance=False, info={}):
# filename should be a .h5 file
if save_as is None:
save_as = 'dataset_' + filename.split('/')[-1]
dataset = Dataset()
    dataset.load_data(filename, kalman_smoothing=kalman_smoothing, save_covariance=save_covariance, info=info)
if save_dataset:
print 'saving dataset...'
save(dataset, save_as)
if return_dataset:
return dataset
def load_all_h5s_in_directory(path, print_filenames_only=False, kalmanized=True, savedataset=True, savename='merged_dataset', kalman_smoothing=True, dynamic_model=None, fps=None, info={}, save_covariance=False):
    # only looks at files that end in '.h5', assumes they are indeed .h5 files
    # kalmanized=True will only load files that have a name like kalmanized.h5
    all_filelist = os.listdir(path)  # more robust than parsing 'ls' output
    filelist = []
    for filename in all_filelist:
        if filename[-3:] != '.h5':
            continue
        if kalmanized:
            if filename[-13:] == 'kalmanized.h5':
                filelist.append(os.path.join(path, filename))
        else:
            if filename[-13:] != 'kalmanized.h5':
                filelist.append(os.path.join(path, filename))
print
print 'loading files: '
for filename in filelist:
print filename
if print_filenames_only:
return
dataset_list = []
n = 0
for filename in filelist:
n += 1
dataset = Dataset()
dataset.load_data(filename, kalman_smoothing=kalman_smoothing, dynamic_model=dynamic_model, fps=fps, info=info, save_covariance=save_covariance)
tmpname = 'dataset_tmp_' + str(n)
save(dataset, tmpname)
dataset_list.append(dataset)
merged_dataset = merge_datasets(dataset_list)
if savedataset:
save(merged_dataset, savename)
return merged_dataset
def get_keys_with_attr(dataset, attr, val):
keys = []
for k, trajec in dataset.trajecs.iteritems():
if trajec.__getattribute__(attr) == val:
keys.append(k)
return keys
def get_trajec_with_attr(dataset, attr, val, n=0):
keys = get_keys_with_attr(dataset, attr, val)
    if n >= len(keys):
        n = -1
return dataset.trajecs[keys[n]]
def set_attribute_for_trajecs(dataset, attr, val, keys=None):
if keys is None:
keys = dataset.trajecs.keys()
for key in keys:
trajec = dataset.trajecs[key]
trajec.__setattr__(attr, val)
def make_dataset_with_attribute_filter(dataset, attr, val):
new_dataset = Dataset()
keys = get_keys_with_attr(dataset, attr, val)
for key in keys:
new_dataset.trajecs.setdefault(key, dataset.trajecs[key])
return new_dataset
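# Example (illustrative sketch; the 'behavior' attribute and key list are hypothetical):
# the attribute helpers above can be chained to tag and then filter trajectories:
#
#   set_attribute_for_trajecs(dataset, 'behavior', 'landing', keys=landing_keys)
#   tagged_keys = get_keys_with_attr(dataset, 'behavior', 'landing')
#   landing_dataset = make_dataset_with_attribute_filter(dataset, 'behavior', 'landing')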
###################################################################################################
# Example usage
###################################################################################################
def example_load_single_h5_file(filename):
# filename should be a .h5 file
info = {'post_type': 'black', 'post_position': np.zeros([3]), 'post_radius': 0.0965}
dataset = Dataset()
dataset.load_data(filename, kalman_smoothing=True, save_covariance=False, info=info)
print 'saving dataset...'
#save(dataset, 'example_load_single_h5_file_pickled_dataset')
#for k, trajec in dataset.trajecs.iteritems():
# print 'key: ', k, 'trajectory length: ', trajec.length, 'speed at end of trajec: ', trajec.speed[-1]
return dataset
if __name__ == "__main__":
    pass
/ASGIWebDAV-1.3.2.tar.gz/ASGIWebDAV-1.3.2/asgi_webdav/helpers.py
import hashlib
import re
import xml.parsers.expat
from collections.abc import AsyncGenerator, Callable
from logging import getLogger
from mimetypes import guess_type as orig_guess_type
from pathlib import Path
import aiofiles
import xmltodict
from chardet import UniversalDetector
from asgi_webdav.config import Config
from asgi_webdav.constants import RESPONSE_DATA_BLOCK_SIZE
logger = getLogger(__name__)
async def receive_all_data_in_one_call(receive: Callable) -> bytes:
data = b""
more_body = True
while more_body:
request_data = await receive()
data += request_data.get("body", b"")
more_body = request_data.get("more_body")
return data
async def empty_data_generator() -> AsyncGenerator[tuple[bytes, bool], None]:
yield b"", False
async def get_data_generator_from_content(
content: bytes,
content_range_start: int | None = None,
content_range_end: int | None = None,
block_size: int = RESPONSE_DATA_BLOCK_SIZE,
) -> AsyncGenerator[tuple[bytes, bool], None]:
"""
content_range_start: start with 0
https://developer.mozilla.org/zh-CN/docs/Web/HTTP/Range_requests
"""
if content_range_start is None:
start = 0
else:
start = content_range_start
if content_range_end is None:
content_range_end = len(content)
more_body = True
while more_body:
end = start + block_size
if end > content_range_end:
end = content_range_end
data = content[start:end]
data_length = len(data)
start += data_length
more_body = data_length >= block_size
yield data, more_body
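# Example (illustrative sketch): the generator yields (chunk, more_body) pairs,
# which map directly onto ASGI "http.response.body" messages. A hypothetical
# consumer could look like:
#
#   async def send_content(send, content: bytes):
#       async for data, more_body in get_data_generator_from_content(content):
#           await send({"type": "http.response.body", "body": data, "more_body": more_body})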
def generate_etag(f_size: float | int, f_modify_time: float) -> str:
"""
https://tools.ietf.org/html/rfc7232#section-2.3 ETag
https://developer.mozilla.org/zh-CN/docs/Web/HTTP/Headers/ETag
"""
return 'W/"{}"'.format(
hashlib.md5(f"{f_size}{f_modify_time}".encode("utf-8")).hexdigest()
)
def guess_type(config: Config, file: str | Path) -> tuple[str | None, str | None]:
"""
https://tools.ietf.org/html/rfc6838
https://developer.mozilla.org/zh-CN/docs/Web/HTTP/Basics_of_HTTP/MIME_types
https://www.iana.org/assignments/media-types/media-types.xhtml
"""
if isinstance(file, str):
file = Path(file)
elif not isinstance(file, Path):
        raise TypeError(f"file must be a str or Path, got {type(file).__name__}")
content_encoding = None
if config.guess_type_extension.enable:
# extension guess
content_type = config.guess_type_extension.filename_mapping.get(file.name)
if content_type:
return content_type, content_encoding
content_type = config.guess_type_extension.suffix_mapping.get(file.suffix)
if content_type:
return content_type, content_encoding
# basic guess
content_type, content_encoding = orig_guess_type(file, strict=False)
return content_type, content_encoding
async def detect_charset(file: str | Path, content_type: str | None) -> str | None:
"""
https://docs.python.org/3/library/codecs.html
"""
if isinstance(file, str):
return None
if content_type is None or not content_type.startswith("text/"):
return None
detector = UniversalDetector()
async with aiofiles.open(file, "rb") as fp:
for line in await fp.readlines():
detector.feed(line)
if detector.done:
break
    detector.close()  # finalize the detection result before reading it
    confidence = detector.result.get("confidence")
    if confidence is not None and confidence >= 0.6:
        return detector.result.get("encoding")
    return None
USER_AGENT_PATTERN = r"firefox|chrome|safari"
def is_browser_user_agent(user_agent: bytes | None) -> bool:
if user_agent is None:
return False
    user_agent = user_agent.decode("utf-8", errors="ignore").lower()
    return re.search(USER_AGENT_PATTERN, user_agent) is not None
def dav_dict2xml(data: dict) -> bytes:
return (
xmltodict.unparse(data, short_empty_elements=True)
.replace("\n", "")
.encode("utf-8")
)
def dav_xml2dict(data: bytes) -> dict | None:
try:
data = xmltodict.parse(data, process_namespaces=True)
except (xmltodict.ParsingInterrupted, xml.parsers.expat.ExpatError) as e:
logger.warning(f"parser XML failed, {e}, {data}")
return None
    return data
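# Example (illustrative sketch): dav_dict2xml / dav_xml2dict round-trip a
# dict-shaped DAV payload through XML:
#
#   xml_bytes = dav_dict2xml({"propfind": {"prop": None}})
#   parsed = dav_xml2dict(xml_bytes)  # -> dict, or None on a parse error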
/DjangoDjangoAppCenter-0.0.11-py3-none-any.whl/DjangoAppCenter/simpleui/static/admin/simpleui-x/elementui/umd/locale/lv.js
(function (global, factory) {
if (typeof define === "function" && define.amd) {
define('element/locale/lv', ['module', 'exports'], factory);
} else if (typeof exports !== "undefined") {
factory(module, exports);
} else {
var mod = {
exports: {}
};
factory(mod, mod.exports);
global.ELEMENT.lang = global.ELEMENT.lang || {};
global.ELEMENT.lang.lv = mod.exports;
}
})(this, function (module, exports) {
'use strict';
exports.__esModule = true;
exports.default = {
el: {
colorpicker: {
confirm: 'Labi',
clear: 'Notīrīt'
},
datepicker: {
now: 'Tagad',
today: 'Šodien',
cancel: 'Atcelt',
clear: 'Notīrīt',
confirm: 'Labi',
selectDate: 'Izvēlēties datumu',
selectTime: 'Izvēlēties laiku',
startDate: 'Sākuma datums',
startTime: 'Sākuma laiks',
endDate: 'Beigu datums',
endTime: 'Beigu laiks',
prevYear: 'Iepriekšējais gads',
nextYear: 'Nākamais gads',
prevMonth: 'Iepriekšējais mēnesis',
nextMonth: 'Nākamais mēnesis',
year: '',
month1: 'Janvāris',
month2: 'Februāris',
month3: 'Marts',
month4: 'Aprīlis',
month5: 'Maijs',
month6: 'Jūnijs',
month7: 'Jūlijs',
month8: 'Augusts',
month9: 'Septembris',
month10: 'Oktobris',
month11: 'Novembris',
month12: 'Decembris',
// week: 'nedēļa',
weeks: {
sun: 'Sv',
mon: 'Pr',
tue: 'Ot',
wed: 'Tr',
thu: 'Ce',
fri: 'Pk',
sat: 'Se'
},
months: {
jan: 'Jan',
feb: 'Feb',
mar: 'Mar',
apr: 'Apr',
may: 'Mai',
jun: 'Jūn',
jul: 'Jūl',
aug: 'Aug',
sep: 'Sep',
oct: 'Okt',
nov: 'Nov',
dec: 'Dec'
}
},
select: {
loading: 'Ielādē',
noMatch: 'Nav atbilstošu datu',
noData: 'Nav datu',
placeholder: 'Izvēlēties'
},
cascader: {
noMatch: 'Nav atbilstošu datu',
loading: 'Ielādē',
placeholder: 'Izvēlēties',
noData: 'Nav datu'
},
pagination: {
goto: 'Iet uz',
pagesize: '/lapa',
total: 'Kopā {total}',
pageClassifier: ''
},
messagebox: {
title: 'Paziņojums',
confirm: 'Labi',
cancel: 'Atcelt',
error: 'Nederīga ievade'
},
upload: {
deleteTip: 'Nospiediet dzēst lai izņemtu',
delete: 'Dzēst',
preview: 'Priekšskatīt',
continue: 'Turpināt'
},
table: {
emptyText: 'Nav datu',
confirmFilter: 'Apstiprināt',
resetFilter: 'Atiestatīt',
clearFilter: 'Visi',
sumText: 'Summa'
},
tree: {
emptyText: 'Nav datu'
},
transfer: {
noMatch: 'Nav atbilstošu datu',
noData: 'Nav datu',
titles: ['Saraksts 1', 'Saraksts 2'],
filterPlaceholder: 'Ievadīt atslēgvārdu',
noCheckedFormat: '{total} vienības',
hasCheckedFormat: '{checked}/{total} atzīmēti'
},
image: {
error: 'FAILED' // to be translated
},
pageHeader: {
title: 'Back' // to be translated
}
}
};
module.exports = exports['default'];
});
/CIRpy-1.0.2.tar.gz/CIRpy-1.0.2/docs/source/guide/install.rst
.. _install:
Installation
============
CIRpy supports Python versions 2.7, 3.3, 3.4 and 3.5. There are no required dependencies.
Option 1: Use pip (recommended)
-------------------------------
The easiest and recommended way to install is using pip::
pip install cirpy
This will download the latest version of CIRpy, and place it in your `site-packages` folder so it is automatically
available to all your Python scripts.
If you don't already have pip installed, you can `install it using get-pip.py`_::
curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py
python get-pip.py
Option 2: Download the latest release
-------------------------------------
Alternatively, `download the latest release`_ manually and install yourself::
tar -xzvf CIRpy-1.0.2.tar.gz
cd CIRpy-1.0.2
python setup.py install
The setup.py command will install CIRpy in your `site-packages` folder so it is automatically available to all your
Python scripts.
Option 3: Clone the repository
------------------------------
The latest development version of CIRpy is always `available on GitHub`_. This version is not guaranteed to be
stable, but may include new features that have not yet been released. Simply clone the repository and install as usual::
git clone https://github.com/mcs07/CIRpy.git
cd CIRpy
python setup.py install
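To check that the installation worked, try importing the package and resolving a
simple identifier (this requires an internet connection, since CIRpy queries the
NCI/CADD Chemical Identifier Resolver web service)::
    >>> import cirpy
    >>> cirpy.resolve('Aspirin', 'smiles')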
.. _`install it using get-pip.py`: http://www.pip-installer.org/en/latest/installing.html
.. _`download the latest release`: https://github.com/mcs07/CIRpy/releases/
.. _`available on GitHub`: https://github.com/mcs07/CIRpy
/OASYS1-shadow4-0.0.10.tar.gz/OASYS1-shadow4-0.0.10/orangecontrib/shadow4/widgets/compatibility/ow_shadow_compare_beam3_beam4.py
import os, sys
from PyQt5 import QtGui, QtWidgets
from PyQt5.QtGui import QPalette, QColor, QFont
from PyQt5.QtWidgets import QApplication, QFileDialog
from PyQt5.QtCore import QRect
from PyQt5.QtGui import QTextCursor
from orangewidget import gui
from orangewidget.settings import Setting
from oasys.widgets import widget
from oasys.widgets import gui as oasysgui
from oasys.widgets import congruence
from oasys.util.oasys_util import TriggerIn, TriggerOut, EmittingStream
from orangecontrib.shadow.util.shadow_objects import ShadowBeam
from orangecontrib.shadow.util.shadow_util import ShadowCongruence, ShadowPlot
from orangecontrib.shadow4.util.shadow4_objects import ShadowData
from orangecontrib.shadow4.util.shadow4_util import ShadowCongruence as ShadowCongruence4
from shadow4.beam.s4_beam import S4Beam
import Shadow  # shadow3 API; used by the beam-comparison helpers below
"""
Tools to compare beams from shadow3 and Shadow4
"""
import numpy
from srxraylib.plot.gol import plot_scatter
from numpy.testing import assert_almost_equal
def check_six_columns_mean_and_std(beam3, beam4, do_plot=True, do_assert=False, assert_value=1e-2, to_meters=1.0, good_only=True):
raysnew = beam4.rays
rays = beam3.rays
if good_only:
indices = numpy.where(rays[:,9] > 0 )[0]
rays = rays[indices, :].copy()
raysnew = raysnew[indices, :].copy()
if do_plot:
plot_scatter(rays[:,3],rays[:,5],title="Divergences shadow3",show=False)
plot_scatter(raysnew[:,3],raysnew[:,5],title="Divergences shadow4")
plot_scatter(rays[:,0],rays[:,2],title="Real Space shadow3",show=False)
plot_scatter(raysnew[:,0],raysnew[:,2],title="Real Space shadow4")
#
b3 = Shadow.Beam()
b3.rays = rays
b4 = Shadow.Beam()
b4.rays = raysnew
Shadow.ShadowTools.histo1(b3,11,ref=23,nolost=1)
Shadow.ShadowTools.histo1(b4,11,ref=23,nolost=1)
print("Comparing...")
for i in range(6):
m0 = (raysnew[:,i]).mean()
m1 = (rays[:,i]*to_meters).mean()
print("\ncol %d, mean sh3, sh4, |sh4-sh3|: %10g %10g %10g"%(i+1,m1,m0,numpy.abs(m0-m1)))
std0 = raysnew[:,i].std()
std1 = (rays[:,i]*to_meters).std()
print("col %d, stdv sh3, sh4, |sh4-sh3|: %10g %10g %10g"%(i+1,std1,std0,numpy.abs(std0-std1)))
if do_assert:
print("\n\n\n\n")
for i in range(6):
m0 = (raysnew[:, i]).mean()
m1 = (rays[:, i] * to_meters).mean()
std0 = raysnew[:, i].std()
std1 = (rays[:, i] * to_meters).std()
try:
assert(numpy.abs(m0-m1) < assert_value)
assert(numpy.abs(std0-std1) < assert_value)
print("col %d **passed**" % (i + 1))
except:
print("col %d **failed**" % (i + 1))
def check_almost_equal(beam3, beam4, do_assert=True, display_ray_number=10, level=1, skip_columns=[], good_only=1):
print("\ncol# shadow3 shadow4 (showing ray index=%d)" % display_ray_number)
# display
for i in range(18):
txt = "col%d %20.10f %20.10f " % (i + 1, beam3.rays[display_ray_number, i], beam4.rays[display_ray_number, i])
print(txt)
if do_assert:
rays3 = beam3.rays.copy()
rays4 = beam4.get_rays()
if good_only:
f = numpy.where(rays4[:,9] > 0.0)
if len(f[0])==0:
print ('Warning: no GOOD rays, using ALL rays')
else:
rays3 = rays3[f[0], :].copy()
rays4 = rays4[f[0], :].copy()
print("\n\n\n\n")
for i in range(18):
txt = "col%d " % (i + 1)
if (i+1) in skip_columns:
print(txt+"**column not asserted**")
else:
if i in [13,14]: # angles
try:
assert_almost_equal( numpy.mod(rays3[:, i], numpy.pi), numpy.mod(rays4[:, i], numpy.pi), level)
print(txt + "**passed**")
except:
print(txt + "********failed********", S4Beam.column_short_names()[i])
else:
try:
assert_almost_equal(rays3[:,i], rays4[:,i], level)
print(txt + "**passed**")
except:
print(txt + "********failed********", S4Beam.column_short_names()[i])
class OWShadowCompareBeam3Beam4(widget.OWWidget):
name = "Shadow3 Shadow4 Comparison"
description = "Shadow3 Shadow4 Comparison"
icon = "icons/compare3to4.png"
maintainer = "Manuel Sanchez del Rio"
maintainer_email = "srio(@at@)esrf.eu"
priority = 30
category = "Tools"
keywords = ["script"]
inputs = [("Input Beam", ShadowBeam, "setBeam3"),
("Shadow Data", ShadowData, "set_shadow_data")]
    input_data = None  # shadow4
    input_beam = None  # shadow3
columns_6_flag = Setting(1)
columns_6_assert = Setting(1)
columns_6_good_only = Setting(1)
columns_6_plot = Setting(0)
columns_18_flag = Setting(1)
columns_18_ray_index = Setting(10)
columns_18_assert = Setting(1)
columns_18_good_only = Setting(1)
columns_18_level = Setting(6)
columns_18_skip_columns = Setting("[]")
IMAGE_WIDTH = 890
IMAGE_HEIGHT = 680
is_automatic_run = Setting(True)
error_id = 0
warning_id = 0
info_id = 0
MAX_WIDTH = 1320
MAX_HEIGHT = 700
CONTROL_AREA_WIDTH = 405
TABS_AREA_HEIGHT = 560
def __init__(self, show_automatic_box=True, show_general_option_box=True):
super().__init__() # show_automatic_box=show_automatic_box)
geom = QApplication.desktop().availableGeometry()
self.setGeometry(QRect(round(geom.width()*0.05),
round(geom.height()*0.05),
round(min(geom.width()*0.98, self.MAX_WIDTH)),
round(min(geom.height()*0.95, self.MAX_HEIGHT))))
self.setMaximumHeight(self.geometry().height())
self.setMaximumWidth(self.geometry().width())
self.controlArea.setFixedWidth(self.CONTROL_AREA_WIDTH)
self.general_options_box = gui.widgetBox(self.controlArea, "General Options", addSpace=True, orientation="horizontal")
self.general_options_box.setVisible(show_general_option_box)
if show_automatic_box :
gui.checkBox(self.general_options_box, self, 'is_automatic_run', 'Automatic Execution')
#
#
#
button_box = oasysgui.widgetBox(self.controlArea, "", addSpace=False, orientation="horizontal")
button = gui.button(button_box, self, "Compare Beams", callback=self.compare_beams)
font = QFont(button.font())
font.setBold(True)
button.setFont(font)
palette = QPalette(button.palette()) # make a copy of the palette
palette.setColor(QPalette.ButtonText, QColor('Dark Blue'))
button.setPalette(palette) # assign new palette
button.setFixedHeight(45)
gui.separator(self.controlArea)
gen_box6 = oasysgui.widgetBox(self.controlArea, "Compare 6 main columns",
addSpace=False, orientation="vertical", height=200,
width=self.CONTROL_AREA_WIDTH-5)
gui.comboBox(gen_box6, self, "columns_6_flag", label="compare 6 cols mean & stdev",
items=["No", "Yes"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
box60 = gui.widgetBox(gen_box6, orientation="horizontal")
gui.comboBox(box60, self, "columns_6_assert", label="Assert?",
items=["No", "Yes"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
self.show_at("self.columns_6_flag == 1", box60)
box60b = gui.widgetBox(gen_box6, orientation="horizontal")
gui.comboBox(box60b, self, "columns_6_good_only", label="used rays",
items=["All", "Good only"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
self.show_at("self.columns_6_flag == 1 and self.columns_6_assert == 1", box60b)
box61 = gui.widgetBox(gen_box6, orientation="horizontal")
gui.comboBox(box61, self, "columns_6_plot", label="show plots",
items=["No", "Yes"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
self.show_at("self.columns_6_flag == 1", box61)
gen_box18 = oasysgui.widgetBox(self.controlArea, "Compare all columns",
addSpace=False, orientation="vertical", height=200,
width=self.CONTROL_AREA_WIDTH-5)
gui.comboBox(gen_box18, self, "columns_18_flag", label="compare 18 columns",
items=["No", "Yes"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
box179 = gui.widgetBox(gen_box18, orientation="horizontal")
oasysgui.lineEdit(box179, self, "columns_18_ray_index", "display ray index", labelWidth=300, valueType=int,
orientation="horizontal")
self.show_at("self.columns_18_flag == 1", box179)
box180 = gui.widgetBox(gen_box18, orientation="horizontal")
gui.comboBox(box180, self, "columns_18_assert", label="Assert?",
items=["No", "Yes"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
self.show_at("self.columns_18_flag == 1", box180)
box180b = gui.widgetBox(gen_box18, orientation="horizontal")
gui.comboBox(box180b, self, "columns_18_good_only", label="used rays",
items=["All", "Good only"], labelWidth=300,
sendSelectedValue=False, orientation="horizontal")
self.show_at("self.columns_18_flag == 1 and self.columns_18_assert == 1", box180b)
box181 = gui.widgetBox(gen_box18, orientation="horizontal")
oasysgui.lineEdit(box181, self, "columns_18_level", "depth", labelWidth=150, valueType=int,
orientation="horizontal")
self.show_at("self.columns_18_flag == 1 and self.columns_18_assert == 1", box181)
box182 = gui.widgetBox(gen_box18, orientation="horizontal")
oasysgui.lineEdit(box182, self, "columns_18_skip_columns", "skip columns (e.g. [10,11])", labelWidth=250, valueType=str,
orientation="horizontal")
self.show_at("self.columns_18_flag == 1 and self.columns_18_assert == 1", box182)
tabs_setting = oasysgui.tabWidget(self.mainArea)
tabs_setting.setFixedHeight(self.IMAGE_HEIGHT)
tabs_setting.setFixedWidth(self.IMAGE_WIDTH)
tab_out = oasysgui.createTabPage(tabs_setting, "System Output")
self.shadow_output = oasysgui.textArea()
out_box = oasysgui.widgetBox(tab_out, "System Output", addSpace=True, orientation="horizontal", height=self.IMAGE_WIDTH - 45)
out_box.layout().addWidget(self.shadow_output)
gui.rubber(self.controlArea)
self.process_showers()
def callResetSettings(self):
pass
def setBeam3(self, beam):
if ShadowCongruence.checkEmptyBeam(beam):
if ShadowCongruence.checkGoodBeam(beam):
# sys.stdout = EmittingStream(textWritten=self.writeStdOut)
self.input_beam = beam
if self.is_automatic_run:
self.compare_beams()
else:
QtWidgets.QMessageBox.critical(self, "Error",
"Data not displayable: No good rays or bad content",
QtWidgets.QMessageBox.Ok)
def set_shadow_data(self, input_data):
if ShadowCongruence4.check_empty_data(input_data):
self.input_data = input_data.duplicate()
if self.is_automatic_run: self.compare_beams()
def writeStdOut(self, text):
cursor = self.shadow_output.textCursor()
cursor.movePosition(QTextCursor.End)
cursor.insertText(text)
self.shadow_output.setTextCursor(cursor)
self.shadow_output.ensureCursorVisible()
def compare_beams(self):
self.shadow_output.setText("")
sys.stdout = EmittingStream(textWritten=self.writeStdOut)
print(">>>> comparing shadow3 and shadow4 beams")
fail = 0
try:
beam3 = self.input_beam._beam
except:
print(">>> Error retrieving beam3")
fail = 1
try:
beam4 = self.input_data.beam
except:
print(">>> Error retrieving beam4")
fail = 1
if not fail:
if self.columns_6_flag:
print("\n\n>>>> comparing mean and stdev of 6 first columns")
check_six_columns_mean_and_std(beam3, beam4, do_assert=self.columns_6_assert, do_plot=self.columns_6_plot)
if self.columns_18_flag:
print("\n\n>>>> comparing columns of all 18 columns")
if self.columns_18_skip_columns.strip() == "":
columns_18_skip_columns = "[]"
else:
columns_18_skip_columns = self.columns_18_skip_columns
skip_columns = eval(columns_18_skip_columns)
check_almost_equal(beam3, beam4,
do_assert=self.columns_18_assert,
level=self.columns_18_level,
skip_columns=skip_columns,
display_ray_number=self.columns_18_ray_index,
good_only=self.columns_18_good_only)
for i in range(20): # needed to display correctly the full text. Why?
print("")
if __name__ == "__main__":
a = QApplication(sys.argv)
ow = OWShadowCompareBeam3Beam4()
ow.show()
a.exec_()
    ow.saveSettings()
/ConTexto-0.2.0-py3-none-any.whl/contexto/comparacion.py
import jellyfish
import numpy as np
from scipy.sparse import issparse
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import pairwise_distances
from lenguajes import definir_lenguaje
from vectorizacion import VectorizadorWord2Vec
from utils.auxiliares import cargar_objeto
# Suppress scikit-learn's "DataConversionWarning" messages
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
# Similitud class ---------------------------------------------------
class Similitud:
def __init__(self, vectorizador=None, lenguaje="es"):
"""
        Constructor of the Similitud class.\
        Allows computing cosine and Jaccard similarity between texts \
        and/or vectors.
        :param vectorizador: Object of type `vectorizador`, or `string` with the \
            location of the file that contains it. Vectorizer that will be used \
            to generate vector representations of the texts. If no vectorizer \
            is specified, a `Word2Vec` vectorizer will be used. If a \
            vectorizador is passed to the `Similitud` object, it must already \
            be fitted. Default value `None`.
        :type vectorizador: str/None/Vectorizador, optional
        :param lenguaje: Language that the vectorizer will use. \
            For more information, see the \
            :ref:`Supported languages <seccion_lenguajes_soportados>` section. \
            If an already fitted vectorizer is passed, this parameter will not \
            be used. Default value `'es'`.
        :type lenguaje: str, optional
"""
        # Set the vectorizer language and the vectorizer to use
self.establecer_lenguaje(lenguaje)
self.establecer_vectorizador(vectorizador)
    # Helper function
def __jaccard_textos(self, texto1, texto2):
if type(texto1) == str:
texto1 = texto1.split()
if type(texto2) == str:
texto2 = texto2.split()
interseccion = set(texto1).intersection(set(texto2))
union = set(texto1).union(set(texto2))
        return len(interseccion) / len(union)  # scalar; assigned element-wise by the callers
def establecer_lenguaje(self, lenguaje):
"""
        Sets the language of the Similitud object.
        :param lenguaje: Language that the \
            vectorizer will use. For more information, see the \
            :ref:`Supported languages <seccion_lenguajes_soportados>` section. \
            If a fitted vectorizer is passed, this parameter will not be used.
        :type lenguaje: str
"""
self.lenguaje = definir_lenguaje(lenguaje)
def establecer_vectorizador(self, vectorizador=None):
"""
        Sets the `vectorizador` of the Similitud object.
        :param vectorizador: Object of type `vectorizador`, or `string` with the \
            location of the file that contains it. Vectorizer that will be used \
            to generate vector representations of the texts. If a vectorizador \
            is passed to the Similitud object, it must already be fitted.
        :type vectorizador: vectorizador, optional
"""
        # Define the vectorization model
if vectorizador is None:
            # default vectorizer
self.vectorizador = VectorizadorWord2Vec(self.lenguaje)
elif isinstance(vectorizador, str):
self.vectorizador = cargar_objeto(vectorizador)
else:
self.vectorizador = vectorizador
def coseno(self, lista1, lista2=[]):
"""
        Computes the cosine similarity between one or two groups of input \
        texts or vectors.
        :param lista1: Text or list of texts of interest for \
            the similarity computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: list, str
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the similarity is computed between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            The pre-computed vectors of the texts can also be passed \
            directly, as a numpy array or a sparse matrix. Default value [].
        :type lista2: list, optional
        :return: (numpy.array) Two-dimensional matrix with the cosine \
            similarities between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns a \
            symmetric `n x n` matrix with the cosine similarities between all \
            the elements of `lista1`. If the parameters `lista1` and `lista2` \
            are used, with `n_1` and `n_2` texts respectively, it returns an \
            `n_1 x n_2` matrix with the cosine similarities between the \
            texts/vectors of `lista1` and the elements of `lista2`.
"""
if isinstance(lista1, str):
lista1 = [lista1]
if isinstance(lista2, str):
lista2 = [lista2]
        # Number of elements in lista2
n2 = len(lista2) if not issparse(lista2) else lista2.shape[0]
        # If texts are provided, run them through the vectorizer
if isinstance(lista1[0], str):
try:
lista1 = self.vectorizador.vectorizar(lista1, disperso=True)
except Exception:
lista1 = self.vectorizador.vectorizar(lista1)
if n2 > 0 and isinstance(lista2[0], str):
try:
lista2 = self.vectorizador.vectorizar(lista2, disperso=True)
except Exception:
lista2 = self.vectorizador.vectorizar(lista2)
if n2 < 1:
return cosine_similarity(lista1)
else:
return cosine_similarity(lista1, lista2)
def jaccard(self, lista1, lista2=[], vectorizar=False):
"""
        Computes the Jaccard similarity between one or two groups of input \
        texts or vectors.
        :param lista1: Text or list of texts of interest for \
            the similarity computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array produced by \
            frequency-based vectorizers (`BOW`, `TF-IDF`, `Hashing`).
        :type lista1: list, str
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the similarity is computed between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            The pre-computed vectors of the texts can also be passed \
            directly, as a numpy array produced by frequency-based \
            vectorizers (`BOW`, `TF-IDF`, `Hashing`). Default value `[]`.
        :type lista2: list, str, optional
        :param vectorizar: Indicates whether to vectorize the input texts \
            belonging to `lista1` and `lista2`. \
            If `vectorizar=False`, the Jaccard similarity of \
            each pair of texts is computed directly, without obtaining their \
            vector representations. Default value `False`.
        :type vectorizar: bool, optional
        :return: (numpy.array) Two-dimensional matrix with the Jaccard \
            similarities between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns a \
            symmetric `n x n` matrix with the Jaccard similarities between \
            all the elements of `lista1`. If the parameters `lista1` and \
            `lista2` are used, with `n_1` and `n_2` texts respectively, it \
            returns an `n_1 x n_2` matrix with the Jaccard similarities \
            between the elements of `lista1` and the elements of `lista2`.
"""
if isinstance(lista1, str):
lista1 = [lista1]
if isinstance(lista2, str):
lista2 = [lista2]
        # This function does not accept sparse matrices, so
        # they are converted to numpy arrays
if issparse(lista1):
lista1 = lista1.toarray()
if issparse(lista2):
lista2 = lista2.toarray()
        # Number of elements in each list
n1, n2 = len(lista1), len(lista2)
        # If requested, compute the vectors for the input
        # lists of texts
if vectorizar:
if isinstance(lista1[0], str):
lista1 = self.vectorizador.vectorizar(lista1)
if n2 > 0 and isinstance(lista2[0], str):
lista2 = self.vectorizador.vectorizar(lista2)
if n2 < 1:
if isinstance(lista1[0], str):
similitudes = np.zeros((n1, n1))
for i in range(n1):
for j in range(i, n1):
similitudes[i, j] = self.__jaccard_textos(
lista1[i], lista1[j]
)
                # Make the similarity matrix symmetric
similitudes += similitudes.T - np.diag(np.diag(similitudes))
else:
similitudes = 1 - pairwise_distances(lista1, metric="jaccard")
else:
if isinstance(lista1[0], str) and isinstance(lista2[0], str):
similitudes = np.zeros((n1, n2))
for i in range(n1):
for j in range(n2):
similitudes[i, j] = self.__jaccard_textos(
lista1[i], lista2[j]
)
else:
similitudes = 1 - pairwise_distances(
lista1, lista2, metric="jaccard"
)
        # Return the similarity matrix
return similitudes
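# Example (illustrative sketch; the sample texts are made up): both metrics return
# an n x n similarity matrix for a single list of texts. Note that the default
# constructor builds a Word2Vec vectorizer, which may download a pretrained model:
#
#   similitud = Similitud()
#   textos = ["el gato duerme", "el perro duerme", "hoy llueve mucho"]
#   matriz_coseno = similitud.coseno(textos)    # 3 x 3 symmetric matrix
#   matriz_jaccard = similitud.jaccard(textos)  # 3 x 3 symmetric matrix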
# Distancia class ----------------------------------------------------
class Distancia:
def __init__(self, vectorizador=None, lenguaje="es"):
"""
        Constructor of the `Distancia` class.
        Allows computing different distance measures between texts and/or \
        vectors.
        :param vectorizador: Object of type `vectorizador`, or `string` with \
            the location of the file that contains it. Vectorizer that will \
            be used to generate vector representations of the texts. If no \
            vectorizer is specified, a `Word2Vec` vectorizer will be used. If \
            a vectorizador is passed to the `Distancia` object, it must \
            already be fitted. Default value `None`.
        :type vectorizador: vectorizador, str, optional
        :param lenguaje: Language that the `vectorizador` will use. \
            For more information, see the \
            :ref:`Supported languages <seccion_lenguajes_soportados>` section. \
            If an already fitted vectorizer is passed, this parameter will \
            not be used. Default value `'es'`.
        :type lenguaje: str
"""
        # Set the vectorizer language and the vectorizer to use
self.establecer_lenguaje(lenguaje)
self.establecer_vectorizador(vectorizador)
        # sklearn distances that accept sparse matrices
self.aceptan_dispersas = [
"cityblock",
"cosine",
"euclidean",
"l1",
"l2",
"manhattan",
]
def establecer_lenguaje(self, lenguaje):
"""
        Sets the language of the `Distancia` object.
        :param lenguaje: Language that the \
            vectorizer will use. For more information, see the \
            :ref:`Supported languages <seccion_lenguajes_soportados>` section. \
            If an already fitted vectorizer is passed, this parameter will \
            not be used.
        :type lenguaje: str
"""
self.lenguaje = definir_lenguaje(lenguaje)
def establecer_vectorizador(self, vectorizador):
"""
        Sets the `vectorizador` of the `Distancia` object.
        :param vectorizador: Object of type vectorizador, or `string` with the \
            location of the file that contains it. Vectorizer that will be \
            used to generate vector representations of the texts. If no \
            `vectorizador` is specified, a `Word2Vec` vectorizer will be \
            used. If a `vectorizador` is passed to the `Distancia` object, \
            it must already be fitted.
        :type vectorizador: vectorizador, str
"""
        # Define the vectorization model
if vectorizador is None:
            # default vectorizer
self.vectorizador = VectorizadorWord2Vec(self.lenguaje)
elif isinstance(vectorizador, str):
self.vectorizador = cargar_objeto(vectorizador)
else:
self.vectorizador = vectorizador
def distancia_pares(
self, lista1, lista2=[], tipo_distancia="l2", **kwargs
):
"""
        Computes different distance metrics between one or two groups of \
        texts and/or vectors.
        :param lista1: Text or list of texts of interest for \
            the distance computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: list, str, numpy.array
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the distance is computed between each of the \
            texts/vectors of `lista1` and each of the elements of `lista2`. \
            The pre-computed vectors of the texts can also be passed \
            directly, as a numpy array or a sparse matrix. Default value `[]`.
        :type lista2: list, str, numpy.array, optional
        :param tipo_distancia: Distance metric to \
            compute. For a list of all the distances that can be computed \
            with this function, see the scikit-learn and scipy documentation: \
            https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise_distances.html.
            Default value `l2`.
        :type tipo_distancia: str, optional
        :param kwargs: Optional parameters that can be adjusted, \
            depending on the chosen distance metric.
        :type kwargs: dict, optional
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns a \
            symmetric `n x n` matrix with the distances between all the \
            elements of `lista1`. If the parameters `lista1` \
            and `lista2` are used, with `n_1` and `n_2` texts/vectors \
            respectively, it returns an `n_1 x n_2` matrix with the distances \
            between the elements of `lista1` and the elements of `lista2`.
"""
if isinstance(lista1, str):
lista1 = [lista1]
if isinstance(lista2, str):
lista2 = [lista2]
        # Number of elements in lista2
n2 = len(lista2) if not issparse(lista2) else lista2.shape[0]
        # If texts are provided, run them through the vectorizer
if isinstance(lista1[0], str):
try:
lista1 = self.vectorizador.vectorizar(lista1, disperso=True)
except Exception:
lista1 = self.vectorizador.vectorizar(lista1)
if n2 > 0 and isinstance(lista2[0], str):
try:
lista2 = self.vectorizador.vectorizar(lista2, disperso=True)
except Exception:
lista2 = self.vectorizador.vectorizar(lista2)
        # If the requested distance does not support sparse matrices,
        # make sure there are none
if tipo_distancia not in self.aceptan_dispersas:
if issparse(lista1):
lista1 = lista1.toarray()
if issparse(lista2):
lista2 = lista2.toarray()
if n2 < 1:
return pairwise_distances(lista1, metric=tipo_distancia, **kwargs)
else:
return pairwise_distances(
lista1, lista2, metric=tipo_distancia, **kwargs
)
def l1(self, lista1, lista2=[]):
"""
        Computes the `L1` distance, also known as the Manhattan \
        distance, between one or two groups of input texts and/or \
        vectors.
        :param lista1: Text or list of texts of interest for \
            the distance computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: str, list, numpy.array
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the distances are computed between each \
            of the texts/vectors of `lista1` and each of the \
            elements of `lista2`. The pre-computed vectors of the texts can \
            also be passed directly, as a numpy array or \
            a sparse matrix. Default value `[]`.
        :type lista2: str, list, numpy.array, optional
        :return: (numpy.array) Two-dimensional matrix with the `L1` \
            distances computed between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns \
            a symmetric `n x n` matrix with the `L1` distances between \
            all the elements of `lista1`. If the parameters \
            `lista1` and `lista2` with `n_1` and `n_2` texts/vectors \
            respectively are used, it returns an `n_1 x n_2` matrix with the \
            distances between the elements of `lista1` and the elements of \
            `lista2`.
"""
return self.distancia_pares(lista1, lista2, tipo_distancia="l1")
def l2(self, lista1, lista2=[]):
"""
        Computes the `L2` distance, also known as the Euclidean distance, \
        between one or two groups of input texts and/or vectors.
        :param lista1: Text or list of texts of interest for \
            the distance computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: str, list, numpy.array
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the distances are computed between each \
            of the texts/vectors of `lista1` and each of the \
            elements of `lista2`. The pre-computed vectors of the texts can \
            also be passed directly, as a numpy array or \
            a sparse matrix. Default value `[]`.
        :type lista2: str, list, numpy.array, optional
        :return: (numpy.array) Two-dimensional matrix with the `L2` \
            distances computed between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns \
            a symmetric `n x n` matrix with the `L2` distances between \
            all the elements of `lista1`. If the parameters \
            `lista1` and `lista2` with `n_1` and `n_2` texts/vectors \
            respectively are used, it returns an `n_1 x n_2` matrix with the \
            distances between the elements of `lista1` and the elements of \
            `lista2`.
"""
return self.distancia_pares(lista1, lista2, tipo_distancia="l2")
def minkowski(self, lista1, lista2=[], p=2):
"""
        Computes the Minkowski distance between one or two groups of texts \
        and/or vectors.
        :param lista1: Text or list of texts of interest for \
            the distance computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: str, list, numpy.array
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the distances are computed between each \
            of the texts/vectors of `lista1` and each of the \
            elements of `lista2`. The pre-computed vectors of the texts can \
            also be passed directly, as a numpy array or \
            a sparse matrix. Default value `[]`.
        :type lista2: str, list, numpy.array, optional
        :param p: Order or degree of the Minkowski distance \
            to compute. If `p = 1`, the computed distance is \
            equivalent to the Manhattan distance (L1), and \
            when `p = 2` it is equivalent to the Euclidean \
            distance (L2). Default value `2`.
        :type p: int
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns \
            a symmetric `n x n` matrix with the distances between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts/vectors respectively are \
            used, it returns an `n_1 x n_2` matrix with the distances between \
            the elements of `lista1` and the elements of `lista2`.
"""
if p == 1:
return self.distancia_pares(lista1, lista2, tipo_distancia="l1")
elif p == 2:
return self.distancia_pares(lista1, lista2, tipo_distancia="l2")
else:
return self.distancia_pares(
lista1, lista2, tipo_distancia="minkowski", p=p
)
def jaccard(self, lista1, lista2=[]):
"""
        Computes the Jaccard distance between one or two groups of input \
        texts and/or vectors.
        :param lista1: Text or list of texts of interest for \
            the distance computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: str, list, numpy.array
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the distances are computed between each \
            of the texts/vectors of `lista1` and each of the \
            elements of `lista2`. The pre-computed vectors of the texts can \
            also be passed directly, as a numpy array or \
            a sparse matrix. Default value `[]`.
        :type lista2: str, list, numpy.array, optional
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns \
            a symmetric `n x n` matrix with the distances between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts/vectors respectively are \
            used, it returns an `n_1 x n_2` matrix with the distances between \
            the elements of `lista1` and the elements of `lista2`.
"""
return self.distancia_pares(lista1, lista2, tipo_distancia="jaccard")
def hamming(self, lista1, lista2=[]):
"""
        Computes the Hamming distance between one or two groups of input \
        texts and/or vectors.
        :param lista1: Text or list of texts of interest for \
            the distance computation. The pre-computed vectors of the texts \
            can also be passed directly, as a numpy array or a sparse matrix.
        :type lista1: str, list, numpy.array
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, the distances are computed between each \
            of the texts/vectors of `lista1` and each of the \
            elements of `lista2`. The pre-computed vectors of the texts can \
            also be passed directly, as a numpy array or \
            a sparse matrix. Default value `[]`.
        :type lista2: str, list, numpy.array, optional
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts/vectors. If only the \
            parameter `lista1` with `n` texts/vectors was used, it returns \
            a symmetric `n x n` matrix with the distances between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts/vectors respectively are \
            used, it returns an `n_1 x n_2` matrix with the distances between \
            the elements of `lista1` and the elements of `lista2`.
"""
return self.distancia_pares(lista1, lista2, tipo_distancia="hamming")
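# Example (illustrative sketch; the sample texts are made up): Distancia follows
# the same calling convention, and any metric name accepted by sklearn's
# pairwise_distances can be requested via distancia_pares:
#
#   distancia = Distancia()
#   textos = ["primer texto", "segundo texto"]
#   d_l2 = distancia.l2(textos)                                # 2 x 2 matrix
#   d_mink = distancia.minkowski(textos, p=3)                  # order-3 Minkowski
#   d_cheb = distancia.distancia_pares(textos, tipo_distancia="chebyshev")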
# DiferenciaStrings class ----------------------------------------------------
class DiferenciaStrings:
"""
    This class is recommended for comparing relatively short strings, \
    such as names, addresses, and other similar character strings. For \
    longer texts, the :py:meth:`comparacion.Similitud` or \
    :py:meth:`comparacion.Distancia` classes are recommended.
"""
def comparacion_pares(self, texto1, texto2, tipo="levenshtein", norm=None):
"""
        Compares two input texts according to a given \
        distance or similarity measure.
        :param texto1: First text of interest to compare.
        :type texto1: str
        :param texto2: Second text of interest to compare.
        :type texto2: str
        :param tipo: Comparison criterion to use between the texts. \
            Default value `'levenshtein'`.
        :type tipo: {'damerau_levenshtein', 'levenshtein', 'hamming', \
            'jaro_winkler', 'jaro'}, optional
        :param norm: Normalizes the results by the \
            length of the texts. If `norm = 1`, the result is normalized by \
            the shorter text; if `norm = 2`, by the longer \
            text.
        :type norm: {1,2}, optional
        :return: (float) Result of the comparison between `texto1` and \
            `texto2`.
"""
tipo = tipo.lower()
if "damerau" in tipo:
salida = jellyfish.damerau_levenshtein_distance(texto1, texto2)
elif "levenshtein" in tipo:
salida = jellyfish.levenshtein_distance(texto1, texto2)
elif "hamming" in tipo:
salida = jellyfish.hamming_distance(texto1, texto2)
elif "winkler" in tipo:
salida = jellyfish.jaro_winkler_similarity(texto1, texto2)
elif "jaro" in tipo:
salida = jellyfish.jaro_similarity(texto1, texto2)
else:
            print(
                "Please select a valid criterion "
                "for comparing the strings."
            )
return None
if norm in [1, 2] and "jaro" not in tipo:
if norm == 1:
salida /= min(len(texto1), len(texto2))
else:
salida /= max(len(texto1), len(texto2))
return salida
def comparacion_lista(
self, lista1, lista2=[], tipo="levenshtein", norm=None
):
"""
        Compares one or two lists of input \
        texts.
        :param lista1: Text or list of texts of interest for \
            the comparisons.
        :type lista1: str, list
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, comparisons are made between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            Default value `[]`.
        :type lista2: str, list, optional
        :param tipo: Comparison criterion to use between the texts. \
            Default value `'levenshtein'`.
        :type tipo: {'damerau_levenshtein', 'levenshtein', 'hamming', \
            'jaro_winkler', 'jaro'}, optional
        :param norm: Normalizes the results by the \
            length of the texts. If `norm = 1`, the result is normalized by \
            the shorter text; if `norm = 2`, by the longer \
            text.
        :type norm: {1,2}, optional
        :return: (numpy.array) Two-dimensional matrix with the comparison \
            values between the input texts. If only the \
            parameter `lista1` with `n` texts was used, it returns \
            a symmetric `n x n` matrix with the comparisons between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts respectively are \
            used, it returns an `n_1 x n_2` matrix with the comparisons \
            between the elements of `lista1` and the elements of `lista2`.
"""
if isinstance(lista1, str):
lista1 = [lista1]
if isinstance(lista2, str):
lista2 = [lista2]
n1, n2 = len(lista1), len(lista2)
if n2 < 1:
diferencias = np.zeros((n1, n1))
for i in range(n1):
for j in range(i, n1):
diferencias[i, j] = self.comparacion_pares(
lista1[i], lista1[j], tipo, norm
)
            # Make the matrix symmetric
diferencias += diferencias.T - np.diag(np.diag(diferencias))
else:
diferencias = np.zeros((n1, n2))
for i in range(n1):
for j in range(n2):
diferencias[i, j] = self.comparacion_pares(
lista1[i], lista2[j], tipo, norm
)
return diferencias
def distancia_levenshtein(self, lista1, lista2=[], norm=None):
"""
        Computes the Levenshtein distance between one or two lists \
        of input texts.
        :param lista1: Text or list of texts of interest for \
            the comparisons.
        :type lista1: str, list
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, comparisons are made between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            Default value `[]`.
        :type lista2: str, list, optional
        :param norm: Normalizes the results by the \
            length of the texts. If `norm = 1`, the result is normalized by \
            the shorter text; if `norm = 2`, by the longer \
            text.
        :type norm: {1,2}, optional
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts. If only the \
            parameter `lista1` with `n` texts was used, it returns \
            a symmetric `n x n` matrix with the distances between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts respectively are \
            used, it returns an `n_1 x n_2` matrix with the distances \
            between the elements of `lista1` and the elements of `lista2`.
"""
return self.comparacion_lista(lista1, lista2, "levenshtein", norm)
def distancia_damerau_levenshtein(self, lista1, lista2=[], norm=None):
"""
        Computes the Damerau-Levenshtein distance between one or two lists \
        of input texts.
        :param lista1: Text or list of texts of interest for \
            the comparisons.
        :type lista1: str, list
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, comparisons are made between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            Default value `[]`.
        :type lista2: str, list, optional
        :param norm: Normalizes the results by the \
            length of the texts. If `norm = 1`, the result is normalized by \
            the shorter text; if `norm = 2`, by the longer \
            text.
        :type norm: {1,2}, optional
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts. If only the \
            parameter `lista1` with `n` texts was used, it returns \
            a symmetric `n x n` matrix with the distances between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts respectively are \
            used, it returns an `n_1 x n_2` matrix with the distances \
            between the elements of `lista1` and the elements of `lista2`.
"""
return self.comparacion_lista(
lista1, lista2, "damerau_levenshtein", norm
)
def distancia_hamming(self, lista1, lista2=[], norm=None):
"""
        Computes the Hamming distance between one or two lists \
        of input texts.
        :param lista1: Text or list of texts of interest for \
            the comparisons.
        :type lista1: str, list
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, comparisons are made between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            Default value `[]`.
        :type lista2: str, list, optional
        :param norm: Normalizes the results by the \
            length of the texts. If `norm = 1`, the result is normalized by \
            the shorter text; if `norm = 2`, by the longer \
            text.
        :type norm: {1,2}, optional
        :return: (numpy.array) Two-dimensional matrix with the distances \
            computed between the input texts. If only the \
            parameter `lista1` with `n` texts was used, it returns \
            a symmetric `n x n` matrix with the distances between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts respectively are \
            used, it returns an `n_1 x n_2` matrix with the distances \
            between the elements of `lista1` and the elements of `lista2`.
"""
return self.comparacion_lista(lista1, lista2, "hamming", norm)
def similitud_jaro(self, lista1, lista2=[]):
"""
        Computes the Jaro similarity between one or two lists of \
        input texts.
        :param lista1: Text or list of texts of interest for \
            the comparisons.
        :type lista1: str, list
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, comparisons are made between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            Default value `[]`.
        :type lista2: str, list, optional
        :return: (numpy.array) Two-dimensional matrix with the similarities \
            computed between the input texts. If only the \
            parameter `lista1` with `n` texts was used, it returns \
            a symmetric `n x n` matrix with the similarities between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts respectively are \
            used, it returns an `n_1 x n_2` matrix with the similarities \
            between the elements of `lista1` and the elements of `lista2`.
"""
return self.comparacion_lista(lista1, lista2, "jaro")
def similitud_jaro_winkler(self, lista1, lista2=[]):
"""
        Computes the Jaro-Winkler similarity between one or two lists of \
        input texts.
        :param lista1: Text or list of texts of interest for \
            the comparisons.
        :type lista1: str, list
        :param lista2: Text or list of texts to compare. If this \
            parameter is used, comparisons are made between each of \
            the texts of `lista1` and each of the texts of `lista2`. \
            Default value `[]`.
        :type lista2: str, list, optional
        :return: (numpy.array) Two-dimensional matrix with the similarities \
            computed between the input texts. If only the \
            parameter `lista1` with `n` texts was used, it returns \
            a symmetric `n x n` matrix with the similarities between all \
            the elements of `lista1`. If the parameters `lista1` \
            and `lista2` with `n_1` and `n_2` texts respectively are \
            used, it returns an `n_1 x n_2` matrix with the similarities \
            between the elements of `lista1` and the elements of `lista2`.
"""
        return self.comparacion_lista(lista1, lista2, "jaro_winkler")
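# Example (illustrative sketch; the sample strings are made up): DiferenciaStrings
# works directly on short strings, with no vectorizer involved:
#
#   dif = DiferenciaStrings()
#   dif.comparacion_pares("carro", "caro")                  # Levenshtein distance: 1
#   dif.distancia_levenshtein(["carro"], ["caro"], norm=2)  # 1 x 1 matrix, 1/5 = 0.2
#   dif.similitud_jaro_winkler("maria", "marian")           # 1 x 1 matrix in [0, 1]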
/JsOnXmLSe814Rializer749-1.0.tar.gz/JsOnXmLSe814Rializer749-1.0/Lab_3/helpers/functions.py
from __future__ import annotations
import re
from types import FunctionType, MethodType, CodeType, ModuleType, \
BuiltinMethodType, BuiltinFunctionType, CellType
from typing import Any, Collection, Iterable
from ..helpers.constants import IGNORED_FIELDS, IGNORED_FIELD_TYPES, TYPE_MAPPING
def get_items(obj) -> dict[str, Any]:
if isinstance(obj, (BuiltinFunctionType, BuiltinMethodType)):
return {}
if isinstance(obj, dict):
return obj
elif isinstance(obj, Collection):
return dict(enumerate(obj))
elif isinstance(obj, CodeType):
return {
"argcount": obj.co_argcount,
"posonlyargcount": obj.co_posonlyargcount,
"kwonlyargcount": obj.co_kwonlyargcount,
"nlocals": obj.co_nlocals,
"stacksize": obj.co_stacksize,
"flags": obj.co_flags,
"code": obj.co_code,
"consts": obj.co_consts,
"names": obj.co_names,
"varnames": obj.co_varnames,
"filename": obj.co_filename,
"name": obj.co_name,
"firstlineno": obj.co_firstlineno,
"lnotab": obj.co_lnotab,
"freevars": obj.co_freevars,
"cellvars": obj.co_cellvars,
}
elif isinstance(obj, FunctionType):
        if obj.__closure__ and "__class__" in obj.__code__.co_freevars:
            # cells that capture __class__ cannot be serialized directly;
            # store Ellipsis placeholders instead
            closure = [... for _ in obj.__closure__]
        elif obj.__closure__:
            closure = [cell.cell_contents for cell in obj.__closure__]
        else:
            closure = None
return {
"argcount": obj.__code__.co_argcount,
"posonlyargcount": obj.__code__.co_posonlyargcount,
"kwonlyargcount": obj.__code__.co_kwonlyargcount,
"nlocals": obj.__code__.co_nlocals,
"stacksize": obj.__code__.co_stacksize,
"flags": obj.__code__.co_flags,
"code": obj.__code__.co_code,
"consts": obj.__code__.co_consts,
"names": obj.__code__.co_names,
"varnames": obj.__code__.co_varnames,
"filename": obj.__code__.co_filename,
"name": obj.__code__.co_name,
"firstlineno": obj.__code__.co_firstlineno,
"lnotab": obj.__code__.co_lnotab,
"freevars": obj.__code__.co_freevars,
"cellvars": obj.__code__.co_cellvars,
"globals": {
k: obj.__globals__[k]
for k in (
set(
k for k, v in obj.__globals__.items()
if isinstance(v, ModuleType)
) |
set(obj.__globals__) &
set(obj.__code__.co_names) -
{obj.__name__}
)
},
"closure": closure,
"qualname": obj.__qualname__
}
elif isinstance(obj, MethodType):
return {
"__func__": obj.__func__,
"__self__": obj.__self__
}
elif issubclass(type(obj), type):
return {
'name': obj.__name__,
'mro': tuple(obj.mro()[1:-1]),
'attrs': {
k: v for k, v in obj.__dict__.items()
if (
k not in IGNORED_FIELDS and
type(v) not in IGNORED_FIELD_TYPES
)
}
}
elif issubclass(type(obj), ModuleType):
return {'name': obj.__name__}
elif isinstance(obj, staticmethod):
return get_items(obj.__func__)
elif isinstance(obj, classmethod):
return get_items(obj.__func__)
else:
return {
'class': obj.__class__,
'attrs': {
k: v for k, v in obj.__dict__.items()
if (
k not in IGNORED_FIELDS and
                type(v) not in IGNORED_FIELD_TYPES
)
}
}
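# Illustrative sketch (the Point class is a hypothetical example, not part of
# this package): get_items decomposes a plain object into a serializable dict
# via the final else-branch above.
#
#     class Point:
#         def __init__(self, x, y):
#             self.x, self.y = x, y
#
#     get_items(Point(1, 2))
#     # -> {'class': <class 'Point'>, 'attrs': {'x': 1, 'y': 2}}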
def get_key(value, obj: dict) -> int:
return [key for key in obj if obj[key] == value][0]
def to_number(s: str) -> int | float | complex | None:
for num_type in (int, float, complex):
try:
return num_type(s)
except (ValueError, TypeError):
pass
def create_object(obj_type: type, obj_data):
if issubclass(obj_type, dict):
return obj_data
elif issubclass(obj_type, Iterable):
return obj_type(obj_data.values())
elif issubclass(obj_type, CodeType):
return CodeType(*list(obj_data.values()))
elif issubclass(obj_type, FunctionType):
        if obj_data.get('closure') and '__class__' in obj_data.get('freevars'):
            # Mirror get_items: cells that captured the enclosing class were
            # serialized as placeholders, so rebuild them as Ellipsis cells.
            closure = tuple(CellType(...) for _ in obj_data.get('closure'))
        elif obj_data.get('closure'):
            closure = tuple(CellType(x) for x in obj_data.get('closure'))
        else:
            closure = tuple()
obj = FunctionType(
code=CodeType(*list(obj_data.values())[:16]),
globals=obj_data.get('globals'),
name=obj_data['name'],
closure=closure
)
obj.__qualname__ = obj_data.get('qualname')
obj.__globals__[obj.__name__] = obj
return obj
elif issubclass(obj_type, MethodType):
return MethodType(
obj_data.get('__func__'),
obj_data.get('__self__'),
)
elif issubclass(obj_type, (staticmethod, classmethod)):
return create_object(FunctionType, obj_data)
elif issubclass(obj_type, type):
obj = type(obj_data.get('name'), obj_data.get('mro'), obj_data.get('attrs'))
try:
obj.__init__.__closure__[0].cell_contents = obj
except (AttributeError, IndexError):
...
return obj
elif issubclass(obj_type, ModuleType):
return __import__(obj_data.get('name'))
else:
obj = object.__new__(obj_data.get('class'))
obj.__dict__ = obj_data.get('attrs')
return obj
def type_from_str(s: str, pattern: str) -> type:
if not re.search(pattern, s):
return type(None)
return TYPE_MAPPING[re.search(pattern, s).group(1)] | PypiClean |
/Mesa-2.1.1-py3-none-any.whl/mesa/visualization/modules/BarChartVisualization.py | import json
from typing import ClassVar
from mesa.visualization.ModularVisualization import D3_JS_FILE, VisualizationElement
class BarChartModule(VisualizationElement):
"""Each bar chart can either visualize model-level or agent-level fields from a datcollector
with a bar chart.
Attributes:
scope: whether to visualize agent-level or model-level fields
fields: A List of Dictionaries containing information about each field to be charted,
including the name of the datacollector field and the desired color of the
corresponding bar.
Ex: [{"Label":"<your field name>", "Color":"<your desired color in hex>"}]
sorting: Whether to sort ascending, descending, or neither when charting agent fields
sort_by: The agent field to sort by
        canvas_height, canvas_width: The height and width to draw the chart on the page, in pixels.
                                     Default to 400 and 800, respectively.
data_collector_name: Name of the DataCollector object in the model to retrieve data from.
"""
package_includes: ClassVar = [D3_JS_FILE, "BarChartModule.js"]
def __init__(
self,
fields,
scope="model",
sorting="none",
sort_by="none",
canvas_height=400,
canvas_width=800,
data_collector_name="datacollector",
):
"""
Create a new bar chart visualization.
Args:
scope: "model" if visualizing model-level fields, "agent" if visualizing agent-level
fields.
fields: A List of Dictionaries containing information about each field to be charted,
including the name of the datacollector field and the desired color of the
corresponding bar.
Ex: [{"Label":"<your field name>", "Color":"<your desired color in hex>"}]
sorting: "ascending", "descending", or "none"
sort_by: The agent field to sort by
canvas_height, canvas_width: Size in pixels of the chart to draw.
data_collector_name: Name of the DataCollector to use.
"""
self.scope = scope
self.fields = fields
self.sorting = sorting
self.canvas_height = canvas_height
self.canvas_width = canvas_width
self.data_collector_name = data_collector_name
fields_json = json.dumps(self.fields)
new_element = "new BarChartModule({}, {}, {}, '{}', '{}')"
new_element = new_element.format(
fields_json, canvas_width, canvas_height, sorting, sort_by
)
self.js_code = "elements.push(" + new_element + ")"
def render(self, model):
current_values = []
data_collector = getattr(model, self.data_collector_name)
if self.scope == "agent":
df = data_collector.get_agent_vars_dataframe().astype("float")
latest_step = df.index.levels[0][-1]
label_strings = [f["Label"] for f in self.fields]
            values_dict = df.loc[latest_step].T.loc[label_strings].to_dict()
            current_values = list(values_dict.values())
elif self.scope == "model":
out_dict = {}
for s in self.fields:
name = s["Label"]
try:
val = data_collector.model_vars[name][-1]
except (IndexError, KeyError):
val = 0
out_dict[name] = val
current_values.append(out_dict)
else:
raise ValueError("scope must be 'agent' or 'model'")
return current_values | PypiClean |
/Flask-CKEditor-0.4.6.tar.gz/Flask-CKEditor-0.4.6/flask_ckeditor/static/full/plugins/specialchar/dialogs/lang/km.js | /*
Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or https://ckeditor.com/legal/ckeditor-oss-license
*/
CKEDITOR.plugins.setLang("specialchar","km",{euro:"សញ្ញាអឺរ៉ូ",lsquo:"Left single quotation mark",rsquo:"Right single quotation mark",ldquo:"Left double quotation mark",rdquo:"Right double quotation mark",ndash:"En dash",mdash:"Em dash",iexcl:"Inverted exclamation mark",cent:"សញ្ញាសេន",pound:"សញ្ញាផោន",curren:"សញ្ញារូបិយបណ្ណ",yen:"សញ្ញាយ៉េន",brvbar:"Broken bar",sect:"Section sign",uml:"Diaeresis",copy:"សញ្ញារក្សាសិទ្ធិ",ordf:"Feminine ordinal indicator",laquo:"Left-pointing double angle quotation mark",
not:"Not sign",reg:"Registered sign",macr:"Macron",deg:"សញ្ញាដឺក្រេ",sup2:"Superscript two",sup3:"Superscript three",acute:"Acute accent",micro:"សញ្ញាមីក្រូ",para:"Pilcrow sign",middot:"Middle dot",cedil:"Cedilla",sup1:"Superscript one",ordm:"Masculine ordinal indicator",raquo:"Right-pointing double angle quotation mark",frac14:"Vulgar fraction one quarter",frac12:"Vulgar fraction one half",frac34:"Vulgar fraction three quarters",iquest:"Inverted question mark",Agrave:"Latin capital letter A with grave accent",
Aacute:"Latin capital letter A with acute accent",Acirc:"Latin capital letter A with circumflex",Atilde:"Latin capital letter A with tilde",Auml:"Latin capital letter A with diaeresis",Aring:"Latin capital letter A with ring above",AElig:"Latin capital letter Æ",Ccedil:"Latin capital letter C with cedilla",Egrave:"Latin capital letter E with grave accent",Eacute:"Latin capital letter E with acute accent",Ecirc:"Latin capital letter E with circumflex",Euml:"Latin capital letter E with diaeresis",Igrave:"Latin capital letter I with grave accent",
Iacute:"Latin capital letter I with acute accent",Icirc:"Latin capital letter I with circumflex",Iuml:"Latin capital letter I with diaeresis",ETH:"Latin capital letter Eth",Ntilde:"Latin capital letter N with tilde",Ograve:"Latin capital letter O with grave accent",Oacute:"Latin capital letter O with acute accent",Ocirc:"Latin capital letter O with circumflex",Otilde:"Latin capital letter O with tilde",Ouml:"Latin capital letter O with diaeresis",times:"Multiplication sign",Oslash:"Latin capital letter O with stroke",
Ugrave:"Latin capital letter U with grave accent",Uacute:"Latin capital letter U with acute accent",Ucirc:"Latin capital letter U with circumflex",Uuml:"Latin capital letter U with diaeresis",Yacute:"Latin capital letter Y with acute accent",THORN:"Latin capital letter Thorn",szlig:"Latin small letter sharp s",agrave:"Latin small letter a with grave accent",aacute:"Latin small letter a with acute accent",acirc:"Latin small letter a with circumflex",atilde:"Latin small letter a with tilde",auml:"Latin small letter a with diaeresis",
aring:"Latin small letter a with ring above",aelig:"Latin small letter æ",ccedil:"Latin small letter c with cedilla",egrave:"Latin small letter e with grave accent",eacute:"Latin small letter e with acute accent",ecirc:"Latin small letter e with circumflex",euml:"Latin small letter e with diaeresis",igrave:"Latin small letter i with grave accent",iacute:"Latin small letter i with acute accent",icirc:"Latin small letter i with circumflex",iuml:"Latin small letter i with diaeresis",eth:"Latin small letter eth",
ntilde:"Latin small letter n with tilde",ograve:"Latin small letter o with grave accent",oacute:"Latin small letter o with acute accent",ocirc:"Latin small letter o with circumflex",otilde:"Latin small letter o with tilde",ouml:"Latin small letter o with diaeresis",divide:"Division sign",oslash:"Latin small letter o with stroke",ugrave:"Latin small letter u with grave accent",uacute:"Latin small letter u with acute accent",ucirc:"Latin small letter u with circumflex",uuml:"Latin small letter u with diaeresis",
yacute:"Latin small letter y with acute accent",thorn:"Latin small letter thorn",yuml:"Latin small letter y with diaeresis",OElig:"Latin capital ligature OE",oelig:"Latin small ligature oe",372:"Latin capital letter W with circumflex",374:"Latin capital letter Y with circumflex",373:"Latin small letter w with circumflex",375:"Latin small letter y with circumflex",sbquo:"Single low-9 quotation mark",8219:"Single high-reversed-9 quotation mark",bdquo:"Double low-9 quotation mark",hellip:"Horizontal ellipsis",
trade:"Trade mark sign",9658:"Black right-pointing pointer",bull:"Bullet",rarr:"Rightwards arrow",rArr:"Rightwards double arrow",hArr:"Left right double arrow",diams:"Black diamond suit",asymp:"Almost equal to"}); | PypiClean |
/DeepPhysX.Torch-22.12-py3-none-any.whl/DeepPhysX/examples/Torch/tutorial/FC.py | import torch
from time import time
# DeepPhysX's PyTorch imports
from DeepPhysX.Torch.FC.FCConfig import FCConfig
def main():
# FC configuration
fc_config = FCConfig(network_dir=None, # Path with a trained network to load parameters
network_name="MyUnet", # Nickname of the network
which_network=0, # Instance index in case several where saved
save_each_epoch=False, # Save network parameters at each epoch end or not
lr=1e-5, # Leaning rate
require_training_stuff=True, # Loss & optimizer are required or not for training
loss=torch.nn.MSELoss, # Loss class to use
optimizer=torch.optim.Adam, # Optimizer class to manage the learning process
dim_output=3, # Number of dimensions of the output
dim_layers=[60, 60, 60, 60], # Size of each layer of the FC
biases=[True, True, False]) # Layers contain biases or not
"""
The following methods are automatically called by the NetworkManager in a normal DeepPhysX pipeline.
They are only used here to demonstrate what is performed during the pipeline.
"""
# Creating network, data_transformation and network_optimization
print("Creating FC...")
fc = fc_config.create_network()
fc.set_device()
fc.set_eval()
data_transformation = fc_config.create_data_transformation()
optimization = fc_config.create_optimization()
print("\nNETWORK DESCRIPTION:", fc)
print("\nDATA TRANSFORMATION DESCRIPTION:", data_transformation)
print("\nOPTIMIZATION DESCRIPTION:", optimization)
    # Data transformations and forward pass of the FC network on a random tensor
t = torch.rand((1, 20, 3), dtype=torch.float, device=fc.device)
data = {'input': t}
start_time = time()
fc_input = data_transformation.transform_before_prediction(data)
fc_output = fc.predict(fc_input)
fc_loss, _ = data_transformation.transform_before_loss(fc_output, None)
fc_pred = data_transformation.transform_before_apply(fc_loss)['prediction']
fc_apply = fc_pred.reshape(t.shape)
end_time = time()
print(f"Prediction time: {round(end_time - start_time, 5) * 1e3} ms")
print("Tensor shape:", t.shape)
print("Input shape:", fc_input['input'].shape)
print("Output shape:", fc_output['prediction'].shape)
print("Loss shape:", fc_loss['prediction'].shape)
print("Prediction shape:", fc_pred.shape)
print("Apply shape:", fc_apply.shape)
if __name__ == '__main__':
main() | PypiClean |
/MagnetiCalc-1.15.2.tar.gz/MagnetiCalc-1.15.2/magneticalc/About_Dialog.py |
# ISC License
#
# Copyright (c) 2020–2022, Paul Wilhelm, M. Sc. <anfrage@paulwilhelm.de>
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import webbrowser
from functools import partial
from PyQt5.Qt import QShowEvent
from magneticalc.QtWidgets2.QDialog2 import QDialog2
from magneticalc.QtWidgets2.QTextBrowser2 import QTextBrowser2
from magneticalc.Debug import Debug
from magneticalc.Theme import Theme
from magneticalc.Version import Version
class About_Dialog(QDialog2):
""" About_Dialog class. """
# Donation URL
DonationURL = "https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=TN6YTPVX36YHA&source=url"
# HTML content
HTML = f"""
<span style="color: {Theme.MainColor};"><b>{Version.String}</b></span><br>
<br>
Copyright © 2020–2022, Paul Wilhelm, M. Sc.
<<a href="mailto:anfrage@paulwilhelm.de">anfrage@paulwilhelm.de</a>><br>
<br>
<small>
<b>ISC License</b><br>
<br>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.<br>
<br>
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.<br>
</small>
<br>
<span style="color: {Theme.MainColor}; font-weight: bold;">
If you like this software, please consider buying me a coffee!
</span>
<br>
"""
def __init__(self) -> None:
"""
Initializes the dialog.
"""
QDialog2.__init__(self, title="About", width=640)
Debug(self, ": Init", init=True)
self.text_browser = QTextBrowser2(html=self.HTML)
self.addWidget(self.text_browser)
buttons = self.addButtons({
"OK" : ("fa.check", self.accept),
"Donate 3€ …" : ("fa.paypal", partial(webbrowser.open, About_Dialog.DonationURL))
})
buttons[0].setFocus()
def showEvent(self, event: QShowEvent) -> None:
"""
Gets called when the dialog is opened.
@param event: QShowEvent
"""
Debug(self, ".showEvent()")
self.text_browser.fit_to_contents() | PypiClean |
/Flask_Unchained-0.9.0-py3-none-any.whl/flask_unchained/bundles/security/config.py | from datetime import datetime, timezone
from flask import abort
from flask_unchained import BundleConfig
from http import HTTPStatus
from .forms import (
LoginForm,
RegisterForm,
ForgotPasswordForm,
ResetPasswordForm,
ChangePasswordForm,
SendConfirmationForm,
)
from .models import AnonymousUser
class AuthenticationConfig:
"""
Config options for logging in and out.
"""
SECURITY_LOGIN_FORM = LoginForm
"""
The form class to use for the login view.
"""
SECURITY_DEFAULT_REMEMBER_ME = False
"""
Whether or not the login form should default to checking the
"Remember me?" option.
"""
SECURITY_REMEMBER_SALT = 'security-remember-salt'
"""
Salt used for the remember me cookie token.
"""
SECURITY_USER_IDENTITY_ATTRIBUTES = ['email'] # FIXME-identity
"""
    List of attributes on the user model that can be used for logging in.
Each must be unique.
"""
SECURITY_POST_LOGIN_REDIRECT_ENDPOINT = '/'
"""
The endpoint or url to redirect to after a successful login.
"""
SECURITY_POST_LOGOUT_REDIRECT_ENDPOINT = '/'
"""
The endpoint or url to redirect to after a user logs out.
"""
class ChangePasswordConfig:
"""
Config options for changing passwords
"""
SECURITY_CHANGEABLE = False
"""
Whether or not to enable change password functionality.
"""
SECURITY_CHANGE_PASSWORD_FORM = ChangePasswordForm
"""
Form class to use for the change password view.
"""
SECURITY_POST_CHANGE_REDIRECT_ENDPOINT = None
"""
Endpoint or url to redirect to after the user changes their password.
"""
SECURITY_SEND_PASSWORD_CHANGED_EMAIL = \
'mail_bundle' in BundleConfig.current_app.unchained.bundles
"""
Whether or not to send the user an email when their password has been changed.
    Defaults to True when the mail bundle is enabled; it's strongly recommended
    to leave this option enabled.
"""
class EncryptionConfig:
"""
Config options for encryption hashing.
"""
SECURITY_PASSWORD_SALT = 'security-password-salt'
"""
Specifies the HMAC salt. This is only used if the password hash type is
set to something other than plain text.
"""
SECURITY_PASSWORD_HASH = 'bcrypt'
"""
Specifies the password hash algorithm to use when hashing passwords.
Recommended values for production systems are ``argon2``, ``bcrypt``,
or ``pbkdf2_sha512``. May require extra packages to be installed.
"""
SECURITY_PASSWORD_SINGLE_HASH = False
"""
Specifies that passwords should only be hashed once. By default, passwords
are hashed twice, first with SECURITY_PASSWORD_SALT, and then with a random
salt. May be useful for integrating with other applications.
"""
SECURITY_PASSWORD_SCHEMES = ['argon2',
'bcrypt',
'pbkdf2_sha512',
# and always the last one...
'plaintext']
"""
List of algorithms that can be used for hashing passwords.
"""
SECURITY_PASSWORD_HASH_OPTIONS = {}
"""
Specifies additional options to be passed to the hashing method.
"""
SECURITY_DEPRECATED_PASSWORD_SCHEMES = ['auto']
"""
List of deprecated algorithms for hashing passwords.
"""
SECURITY_HASHING_SCHEMES = ['sha512_crypt']
"""
List of algorithms that can be used for creating and validating tokens.
"""
SECURITY_DEPRECATED_HASHING_SCHEMES = []
"""
List of deprecated algorithms for creating and validating tokens.
"""
class ForgotPasswordConfig:
"""
Config options for recovering forgotten passwords
"""
SECURITY_RECOVERABLE = False
"""
Whether or not to enable forgot password functionality.
"""
SECURITY_FORGOT_PASSWORD_FORM = ForgotPasswordForm
"""
Form class to use for the forgot password form.
"""
# reset password (when the user clicks the link from the email sent by forgot pw)
# --------------
SECURITY_RESET_PASSWORD_FORM = ResetPasswordForm
"""
Form class to use for the reset password form.
"""
SECURITY_RESET_SALT = 'security-reset-salt'
"""
Salt used for the reset token.
"""
SECURITY_RESET_PASSWORD_WITHIN = '5 days'
"""
Specifies the amount of time a user has before their password reset link
    expires. Always pluralize the time unit in this value. Defaults to 5 days.
"""
SECURITY_POST_RESET_REDIRECT_ENDPOINT = None
"""
Endpoint or url to redirect to after the user resets their password.
"""
SECURITY_INVALID_RESET_TOKEN_REDIRECT = 'security_controller.forgot_password'
"""
Endpoint or url to redirect to if the reset token is invalid.
"""
SECURITY_EXPIRED_RESET_TOKEN_REDIRECT = 'security_controller.forgot_password'
"""
Endpoint or url to redirect to if the reset token is expired.
"""
SECURITY_API_RESET_PASSWORD_HTTP_GET_REDIRECT = None
"""
Endpoint or url to redirect to if a GET request is made to the reset password
view. Defaults to None, meaning no redirect. Useful for single page apps.
"""
SECURITY_SEND_PASSWORD_RESET_NOTICE_EMAIL = \
'mail_bundle' in BundleConfig.current_app.unchained.bundles
"""
Whether or not to send the user an email when their password has been reset.
    Defaults to True when the mail bundle is enabled; it's strongly recommended
    to leave this option enabled.
"""
class RegistrationConfig:
"""
Config options for user registration
"""
SECURITY_REGISTERABLE = False
"""
Whether or not to enable registration.
"""
SECURITY_REGISTER_FORM = RegisterForm
"""
The form class to use for the register view.
"""
SECURITY_POST_REGISTER_REDIRECT_ENDPOINT = None
"""
The endpoint or url to redirect to after a user completes the
registration form.
"""
SECURITY_SEND_REGISTER_EMAIL = \
'mail_bundle' in BundleConfig.current_app.unchained.bundles
"""
    Whether or not to send a welcome email after a user completes the
registration form.
"""
# email confirmation options
# --------------------------
SECURITY_CONFIRMABLE = False
"""
Whether or not to enable required email confirmation for new users.
"""
SECURITY_SEND_CONFIRMATION_FORM = SendConfirmationForm
"""
Form class to use for the (re)send confirmation email form.
"""
SECURITY_CONFIRM_SALT = 'security-confirm-salt'
"""
Salt used for the confirmation token.
"""
SECURITY_LOGIN_WITHOUT_CONFIRMATION = False
"""
    Allow users to log in without confirming their email first. (This option
only applies when :attr:`SECURITY_CONFIRMABLE` is True.)
"""
SECURITY_CONFIRM_EMAIL_WITHIN = '5 days'
"""
How long to wait until considering the token in confirmation emails to
be expired.
"""
SECURITY_POST_CONFIRM_REDIRECT_ENDPOINT = None
"""
Endpoint or url to redirect to after the user confirms their email.
Defaults to :attr:`SECURITY_POST_LOGIN_REDIRECT_ENDPOINT`.
"""
SECURITY_CONFIRM_ERROR_REDIRECT_ENDPOINT = None
"""
Endpoint to redirect to if there's an error confirming the user's email.
"""
class TokenConfig:
"""
Config options for token authentication.
"""
SECURITY_TOKEN_AUTHENTICATION_KEY = 'auth_token'
"""
Specifies the query string parameter to read when using token authentication.
"""
SECURITY_TOKEN_AUTHENTICATION_HEADER = 'Authentication-Token'
"""
Specifies the HTTP header to read when using token authentication.
"""
SECURITY_TOKEN_MAX_AGE = None
"""
Specifies the number of seconds before an authentication token expires.
Defaults to None, meaning the token never expires.
"""
class Config(AuthenticationConfig,
ChangePasswordConfig,
EncryptionConfig,
ForgotPasswordConfig,
RegistrationConfig,
TokenConfig,
BundleConfig):
"""
Config options for the Security Bundle.
"""
SECURITY_ANONYMOUS_USER = AnonymousUser
"""
Class to use for representing anonymous users.
"""
SECURITY_UNAUTHORIZED_CALLBACK = lambda: abort(HTTPStatus.UNAUTHORIZED)
"""
This callback gets called when authorization fails. By default we abort with
an HTTP status code of 401 (UNAUTHORIZED).
"""
# make datetimes timezone-aware by default
SECURITY_DATETIME_FACTORY = lambda: datetime.now(timezone.utc)
"""
Factory function to use when creating new dates. By default we use
``datetime.now(timezone.utc)`` to create a timezone-aware datetime.
"""
ADMIN_CATEGORY_ICON_CLASSES = {
'Security': 'fa fa-lock',
}
class TestConfig(Config):
"""
Default test settings for the Security Bundle.
"""
SECURITY_PASSWORD_HASH = 'plaintext'
"""
Disable password-hashing in tests (shaves about 30% off the test-run time)
""" | PypiClean |
/Docassemble-Pattern-3.6.7.tar.gz/Docassemble-Pattern-3.6.7/docassemble_pattern/vector/svm/libsvmutil.py |
from __future__ import absolute_import
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from builtins import str, bytes, dict, int
from builtins import object, range
from builtins import map, zip, filter
import os
import sys
from .libsvm import *
from .libsvm import __all__ as svm_all
__all__ = ['evaluations', 'svm_load_model', 'svm_predict', 'svm_read_problem',
'svm_save_model', 'svm_train'] + svm_all
sys.path = [os.path.dirname(os.path.abspath(__file__))] + sys.path
def svm_read_problem(data_file_name):
"""
svm_read_problem(data_file_name) -> [y, x]
Read LIBSVM-format data from data_file_name and return labels y
and data instances x.
"""
prob_y = []
prob_x = []
for line in open(data_file_name):
line = line.split(None, 1)
        # Handle an instance whose features are all zero
if len(line) == 1:
line += ['']
label, features = line
xi = {}
for e in features.split():
ind, val = e.split(":")
xi[int(ind)] = float(val)
prob_y += [float(label)]
prob_x += [xi]
return (prob_y, prob_x)
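# Illustrative usage (the file contents shown are an assumption). A
# LIBSVM-format file stores one instance per line, "label index:value ...":
#
#     +1 1:0.7 3:1.0
#     -1 2:0.5
#
# Reading it yields parallel label/feature-dict lists:
#
#     y, x = svm_read_problem('data.txt')
#     # y == [1.0, -1.0]
#     # x == [{1: 0.7, 3: 1.0}, {2: 0.5}]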
def svm_load_model(model_file_name):
"""
svm_load_model(model_file_name) -> model
Load a LIBSVM model from model_file_name and return.
"""
model = libsvm.svm_load_model(model_file_name.encode())
if not model:
print("can't open model file %s" % model_file_name)
return None
model = toPyModel(model)
return model
def svm_save_model(model_file_name, model):
"""
svm_save_model(model_file_name, model) -> None
Save a LIBSVM model to the file model_file_name.
"""
libsvm.svm_save_model(model_file_name.encode(), model)
def evaluations(ty, pv):
"""
evaluations(ty, pv) -> (ACC, MSE, SCC)
Calculate accuracy, mean squared error and squared correlation coefficient
using the true values (ty) and predicted values (pv).
"""
if len(ty) != len(pv):
raise ValueError("len(ty) must equal to len(pv)")
total_correct = total_error = 0
sumv = sumy = sumvv = sumyy = sumvy = 0
for v, y in zip(pv, ty):
if y == v:
total_correct += 1
total_error += (v - y) * (v - y)
sumv += v
sumy += y
sumvv += v * v
sumyy += y * y
sumvy += v * y
l = len(ty)
ACC = 100.0 * total_correct / l
MSE = total_error / l
try:
SCC = ((l * sumvy - sumv * sumy) * (l * sumvy - sumv * sumy)) / ((l * sumvv - sumv * sumv) * (l * sumyy - sumy * sumy))
    except ZeroDivisionError:
SCC = float('nan')
return (ACC, MSE, SCC)
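# Worked example (values are assumptions): for ty = [1, -1, 1] and
# pv = [1, 1, 1], two of three labels match, so:
#
#     ACC, MSE, SCC = evaluations([1, -1, 1], [1, 1, 1])
#     # ACC == 66.66..., MSE == (0 + 4 + 0) / 3 == 1.33...; SCC is degenerate
#     # here (all predictions equal, zero denominator), so it comes back nan.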
def svm_train(arg1, arg2=None, arg3=None):
"""
svm_train(y, x [, options]) -> model | ACC | MSE
svm_train(prob [, options]) -> model | ACC | MSE
svm_train(prob, param) -> model | ACC| MSE
Train an SVM model from data (y, x) or an svm_problem prob using
'options' or an svm_parameter param.
If '-v' is specified in 'options' (i.e., cross validation)
either accuracy (ACC) or mean-squared error (MSE) is returned.
options:
-s svm_type : set type of SVM (default 0)
0 -- C-SVC (multi-class classification)
1 -- nu-SVC (multi-class classification)
2 -- one-class SVM
3 -- epsilon-SVR (regression)
4 -- nu-SVR (regression)
-t kernel_type : set type of kernel function (default 2)
0 -- linear: u'*v
1 -- polynomial: (gamma*u'*v + coef0)^degree
2 -- radial basis function: exp(-gamma*|u-v|^2)
3 -- sigmoid: tanh(gamma*u'*v + coef0)
4 -- precomputed kernel (kernel values in training_set_file)
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/num_features)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n: n-fold cross validation mode
-q : quiet mode (no outputs)
"""
prob, param = None, None
if isinstance(arg1, (list, tuple)):
assert isinstance(arg2, (list, tuple))
y, x, options = arg1, arg2, arg3
param = svm_parameter(options)
prob = svm_problem(y, x, isKernel=(param.kernel_type == PRECOMPUTED))
elif isinstance(arg1, svm_problem):
prob = arg1
if isinstance(arg2, svm_parameter):
param = arg2
else:
param = svm_parameter(arg2)
if prob is None or param is None:
raise TypeError("Wrong types for the arguments")
if param.kernel_type == PRECOMPUTED:
for xi in prob.x_space:
idx, val = xi[0].index, xi[0].value
if xi[0].index != 0:
raise ValueError('Wrong input format: first column must be 0:sample_serial_number')
if val <= 0 or val > prob.n:
raise ValueError('Wrong input format: sample_serial_number out of range')
if param.gamma == 0 and prob.n > 0:
param.gamma = 1.0 / prob.n
libsvm.svm_set_print_string_function(param.print_func)
err_msg = libsvm.svm_check_parameter(prob, param)
if err_msg:
raise ValueError('Error: %s' % err_msg)
if param.cross_validation:
l, nr_fold = prob.l, param.nr_fold
target = (c_double * l)()
libsvm.svm_cross_validation(prob, param, nr_fold, target)
ACC, MSE, SCC = evaluations(prob.y[:l], target[:l])
if param.svm_type in [EPSILON_SVR, NU_SVR]:
print("Cross Validation Mean squared error = %g" % MSE)
print("Cross Validation Squared correlation coefficient = %g" % SCC)
return MSE
else:
print("Cross Validation Accuracy = %g%%" % ACC)
return ACC
else:
m = libsvm.svm_train(prob, param)
m = toPyModel(m)
# If prob is destroyed, data including SVs pointed by m can remain.
m.x_space = prob.x_space
return m
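# Illustrative training calls (the data values are assumptions):
#
#     y, x = [1, -1], [{1: 1.0, 2: 0.0}, {1: 0.0, 2: 1.0}]
#     m = svm_train(y, x, '-t 0 -c 1')   # linear C-SVC, returns a model
#     acc = svm_train(y, x, '-v 2')      # 2-fold CV returns ACC, not a model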
def svm_predict(y, x, m, options=""):
"""
svm_predict(y, x, m [, options]) -> (p_labels, p_acc, p_vals)
Predict data (y, x) with the SVM model m.
options:
-b probability_estimates: whether to predict probability estimates,
0 or 1 (default 0); for one-class SVM only 0 is supported.
-q : quiet mode (no outputs).
The return tuple contains
p_labels: a list of predicted labels
p_acc: a tuple including accuracy (for classification), mean-squared
error, and squared correlation coefficient (for regression).
p_vals: a list of decision values or probability estimates (if '-b 1'
is specified). If k is the number of classes, for decision values,
each element includes results of predicting k(k-1)/2 binary-class
SVMs. For probabilities, each element contains k values indicating
the probability that the testing instance is in each class.
Note that the order of classes here is the same as 'model.label'
field in the model structure.
"""
def info(s):
print(s)
predict_probability = 0
argv = options.split()
i = 0
while i < len(argv):
if argv[i] == '-b':
i += 1
predict_probability = int(argv[i])
elif argv[i] == '-q':
info = print_null
else:
raise ValueError("Wrong options")
i += 1
svm_type = m.get_svm_type()
is_prob_model = m.is_probability_model()
nr_class = m.get_nr_class()
pred_labels = []
pred_values = []
if predict_probability:
if not is_prob_model:
raise ValueError("Model does not support probabiliy estimates")
if svm_type in [NU_SVR, EPSILON_SVR]:
info("Prob. model for test data: target value = predicted value + z,\n"
"z: Laplace distribution e^(-|z|/sigma)/(2sigma),sigma=%g" % m.get_svr_probability());
nr_class = 0
prob_estimates = (c_double * nr_class)()
for xi in x:
xi, idx = gen_svm_nodearray(xi, isKernel=(m.param.kernel_type == PRECOMPUTED))
label = libsvm.svm_predict_probability(m, xi, prob_estimates)
values = prob_estimates[:nr_class]
pred_labels += [label]
pred_values += [values]
else:
if is_prob_model:
info("Model supports probability estimates, but disabled in predicton.")
if svm_type in (ONE_CLASS, EPSILON_SVR, NU_SVC):
nr_classifier = 1
else:
nr_classifier = nr_class * (nr_class - 1) // 2
dec_values = (c_double * nr_classifier)()
for xi in x:
xi, idx = gen_svm_nodearray(xi, isKernel=(m.param.kernel_type == PRECOMPUTED))
label = libsvm.svm_predict_values(m, xi, dec_values)
            if nr_class == 1:
values = [1]
else:
values = dec_values[:nr_classifier]
pred_labels += [label]
pred_values += [values]
ACC, MSE, SCC = evaluations(y, pred_labels)
l = len(y)
if svm_type in [EPSILON_SVR, NU_SVR]:
info("Mean squared error = %g (regression)" % MSE)
info("Squared correlation coefficient = %g (regression)" % SCC)
else:
info("Accuracy = %g%% (%d/%d) (classification)" % (ACC, int(l * ACC / 100), l))
return pred_labels, (ACC, MSE, SCC), pred_values | PypiClean |
/DendroPy_calver-2023.330.2-py3-none-any.whl/dendropy/utility/processio.py |
##############################################################################
## DendroPy Phylogenetic Computing Library.
##
## Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder.
## All rights reserved.
##
## See "LICENSE.rst" for terms and conditions of usage.
##
## If you use this work or any portion thereof in published work,
## please cite it as:
##
## Sukumaran, J. and M. T. Holder. 2010. DendroPy: a Python library
## for phylogenetic computing. Bioinformatics 26: 1569-1571.
##
##############################################################################
"""
Wraps an external process for non-blocking I/O, i.e., allows non-blocking
reads from and writes to the process's stdout/stderr/stdin.
"""
from dendropy.utility import textprocessing
import sys
import subprocess
import threading
try:
from Queue import Queue, Empty
except ImportError:
from queue import Queue, Empty # python 3.x
ON_POSIX = 'posix' in sys.builtin_module_names
############################################################################
## Handling of byte/string conversion during subprocess calls
def communicate(p, commands=None, timeout=None):
if isinstance(commands, list) or isinstance(commands, tuple):
commands = "\n".join(str(c) for c in commands)
if commands is not None:
commands = str.encode(commands)
if timeout is None:
stdout, stderr = p.communicate(commands)
else:
try:
stdout, stderr = p.communicate(commands, timeout=timeout)
except TypeError as e:
if "unexpected keyword argument 'timeout'" in str(e):
stdout, stderr = p.communicate(commands)
else:
raise
if stdout is not None:
stdout = textprocessing.bytes_to_text(stdout)
if stderr is not None:
stderr = textprocessing.bytes_to_text(stderr)
return stdout, stderr
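# Illustrative usage (the `cat` command is an assumption for demonstration):
#
#     p = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
#                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
#     stdout, stderr = communicate(p, commands=['hello', 'world'])
#     # commands are joined with newlines and encoded to bytes before
#     # writing; stdout/stderr come back decoded to text.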
############################################################################
## SessionReader
class SessionReader(object):
def __init__(self, file_handle):
self.queue = Queue()
self.stream = file_handle
self.thread = threading.Thread(
target=self.enqueue_stream,
)
self.thread.daemon = True
self.thread.start()
def enqueue_stream(self):
# for line in self.stream.readline():
for line in iter(self.stream.readline, b''):
self.queue.put(line)
self.stream.close()
def read(self):
# read line without blocking
try:
line = self.queue.get_nowait()
# line = self.queue.get(timeout=0.1)
except Empty:
return None
else:
return line # got line
class Session(object):
def __init__(self, join_err_to_out=False):
self.process = None
self.stdin = None
self._stdout_reader = None
self._stderr_reader = None
self.queue = None
self.thread = None
self.join_err_to_out = join_err_to_out
def start(self, command):
if self.join_err_to_out:
stderr = subprocess.STDOUT
else:
stderr = subprocess.PIPE
self.process = subprocess.Popen(command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=stderr,
bufsize=1,
close_fds=ON_POSIX)
self._stdout_reader = SessionReader(self.process.stdout)
if not self.join_err_to_out:
self._stderr_reader = SessionReader(self.process.stderr)
def _stdin_write(self, command):
self.process.stdin.write(command)
self.process.stdin.flush() | PypiClean |
/Newcalls-0.0.1-cp37-cp37m-win_amd64.whl/newcalls/node_modules/@types/node/ts4.8/stream/web.d.ts | declare module 'stream/web' {
// stub module, pending copy&paste from .d.ts or manual impl
// copy from lib.dom.d.ts
    interface ReadableWritablePair<R = any, W = any> {
        readable: ReadableStream<R>;
        writable: WritableStream<W>;
    }
    interface StreamPipeOptions {
        preventAbort?: boolean;
        preventCancel?: boolean;
        preventClose?: boolean;
        signal?: AbortSignal;
    }
interface ReadableStreamGenericReader {
readonly closed: Promise<undefined>;
cancel(reason?: any): Promise<void>;
}
interface ReadableStreamDefaultReadValueResult<T> {
done: false;
value: T;
}
interface ReadableStreamDefaultReadDoneResult {
done: true;
value?: undefined;
}
type ReadableStreamController<T> = ReadableStreamDefaultController<T>;
type ReadableStreamDefaultReadResult<T> = ReadableStreamDefaultReadValueResult<T> | ReadableStreamDefaultReadDoneResult;
interface ReadableByteStreamControllerCallback {
(controller: ReadableByteStreamController): void | PromiseLike<void>;
}
interface UnderlyingSinkAbortCallback {
(reason?: any): void | PromiseLike<void>;
}
interface UnderlyingSinkCloseCallback {
(): void | PromiseLike<void>;
}
interface UnderlyingSinkStartCallback {
(controller: WritableStreamDefaultController): any;
}
interface UnderlyingSinkWriteCallback<W> {
(chunk: W, controller: WritableStreamDefaultController): void | PromiseLike<void>;
}
interface UnderlyingSourceCancelCallback {
(reason?: any): void | PromiseLike<void>;
}
interface UnderlyingSourcePullCallback<R> {
(controller: ReadableStreamController<R>): void | PromiseLike<void>;
}
interface UnderlyingSourceStartCallback<R> {
(controller: ReadableStreamController<R>): any;
}
interface TransformerFlushCallback<O> {
(controller: TransformStreamDefaultController<O>): void | PromiseLike<void>;
}
interface TransformerStartCallback<O> {
(controller: TransformStreamDefaultController<O>): any;
}
interface TransformerTransformCallback<I, O> {
(chunk: I, controller: TransformStreamDefaultController<O>): void | PromiseLike<void>;
}
interface UnderlyingByteSource {
autoAllocateChunkSize?: number;
cancel?: ReadableStreamErrorCallback;
pull?: ReadableByteStreamControllerCallback;
start?: ReadableByteStreamControllerCallback;
type: 'bytes';
}
interface UnderlyingSource<R = any> {
cancel?: UnderlyingSourceCancelCallback;
pull?: UnderlyingSourcePullCallback<R>;
start?: UnderlyingSourceStartCallback<R>;
type?: undefined;
}
interface UnderlyingSink<W = any> {
abort?: UnderlyingSinkAbortCallback;
close?: UnderlyingSinkCloseCallback;
start?: UnderlyingSinkStartCallback;
type?: undefined;
write?: UnderlyingSinkWriteCallback<W>;
}
interface ReadableStreamErrorCallback {
(reason: any): void | PromiseLike<void>;
}
/** This Streams API interface represents a readable stream of byte data. */
    interface ReadableStream<R = any> {
        readonly locked: boolean;
        cancel(reason?: any): Promise<void>;
        getReader(): ReadableStreamDefaultReader<R>;
        /**
         * Provides a convenient, chainable way of piping this readable stream
         * through a transform stream (or any other { writable, readable }
         * pair). It simply pipes the stream into the writable side of the
         * supplied pair, and returns the readable side for further use.
         *
         * Piping a stream will lock it for the duration of the pipe, preventing
         * any other consumer from acquiring a reader.
         */
        pipeThrough<T>(transform: ReadableWritablePair<T, R>, options?: StreamPipeOptions): ReadableStream<T>;
        /**
         * Pipes this readable stream to a given writable stream destination.
         * The way in which the piping process behaves under various error
         * conditions can be customized with a number of passed options. It
         * returns a promise that fulfills when the piping process completes
         * successfully, or rejects if any errors were encountered.
         *
         * Piping a stream will lock it for the duration of the pipe, preventing
         * any other consumer from acquiring a reader.
         *
         * Errors and closures of the source and destination streams propagate
         * as follows:
         *
         * An error in this source readable stream will abort destination,
         * unless preventAbort is truthy. The returned promise will be rejected
         * with the source's error, or with any error that occurs during
         * aborting the destination.
         *
         * An error in destination will cancel this source readable stream,
         * unless preventCancel is truthy. The returned promise will be rejected
         * with the destination's error, or with any error that occurs during
         * canceling the source.
         *
         * When this source readable stream closes, destination will be closed,
         * unless preventClose is truthy. The returned promise will be fulfilled
         * once this process completes, unless an error is encountered while
         * closing the destination, in which case it will be rejected with that
         * error.
         *
         * If destination starts out closed or closing, this source readable
         * stream will be canceled, unless preventCancel is true. The returned
         * promise will be rejected with an error indicating piping to a closed
         * stream failed, or with any error that occurs during canceling the
         * source.
         *
         * The signal option can be set to an AbortSignal to allow aborting an
         * ongoing pipe operation via the corresponding AbortController. In this
         * case, this source readable stream will be canceled, and destination
         * aborted, unless the respective options preventCancel or preventAbort
         * are set.
         */
        pipeTo(destination: WritableStream<R>, options?: StreamPipeOptions): Promise<void>;
        tee(): [ReadableStream<R>, ReadableStream<R>];
        values(options?: { preventCancel?: boolean }): AsyncIterableIterator<R>;
        [Symbol.asyncIterator](): AsyncIterableIterator<R>;
    }
const ReadableStream: {
prototype: ReadableStream;
new (underlyingSource: UnderlyingByteSource, strategy?: QueuingStrategy<Uint8Array>): ReadableStream<Uint8Array>;
new <R = any>(underlyingSource?: UnderlyingSource<R>, strategy?: QueuingStrategy<R>): ReadableStream<R>;
};
interface ReadableStreamDefaultReader<R = any> extends ReadableStreamGenericReader {
read(): Promise<ReadableStreamDefaultReadResult<R>>;
releaseLock(): void;
}
const ReadableStreamDefaultReader: {
prototype: ReadableStreamDefaultReader;
new <R = any>(stream: ReadableStream<R>): ReadableStreamDefaultReader<R>;
};
const ReadableStreamBYOBReader: any;
const ReadableStreamBYOBRequest: any;
interface ReadableByteStreamController {
readonly byobRequest: undefined;
readonly desiredSize: number | null;
close(): void;
enqueue(chunk: ArrayBufferView): void;
error(error?: any): void;
}
const ReadableByteStreamController: {
prototype: ReadableByteStreamController;
new (): ReadableByteStreamController;
};
interface ReadableStreamDefaultController<R = any> {
readonly desiredSize: number | null;
close(): void;
enqueue(chunk?: R): void;
error(e?: any): void;
}
const ReadableStreamDefaultController: {
prototype: ReadableStreamDefaultController;
new (): ReadableStreamDefaultController;
};
interface Transformer<I = any, O = any> {
flush?: TransformerFlushCallback<O>;
readableType?: undefined;
start?: TransformerStartCallback<O>;
transform?: TransformerTransformCallback<I, O>;
writableType?: undefined;
}
interface TransformStream<I = any, O = any> {
readonly readable: ReadableStream<O>;
readonly writable: WritableStream<I>;
}
const TransformStream: {
prototype: TransformStream;
new <I = any, O = any>(transformer?: Transformer<I, O>, writableStrategy?: QueuingStrategy<I>, readableStrategy?: QueuingStrategy<O>): TransformStream<I, O>;
};
interface TransformStreamDefaultController<O = any> {
readonly desiredSize: number | null;
enqueue(chunk?: O): void;
error(reason?: any): void;
terminate(): void;
}
const TransformStreamDefaultController: {
prototype: TransformStreamDefaultController;
new (): TransformStreamDefaultController;
};
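    // Illustrative usage (not part of these declarations; `readable` and
    // `writable` are assumed pre-existing streams): an uppercasing
    // TransformStream wired between a readable and a writable side.
    //
    //     const upper = new TransformStream<string, string>({
    //         transform(chunk, controller) {
    //             controller.enqueue(chunk.toUpperCase());
    //         },
    //     });
    //     readable.pipeThrough(upper).pipeTo(writable);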
/**
* This Streams API interface provides a standard abstraction for writing
* streaming data to a destination, known as a sink. This object comes with
* built-in back pressure and queuing.
*/
interface WritableStream<W = any> {
readonly locked: boolean;
abort(reason?: any): Promise<void>;
close(): Promise<void>;
getWriter(): WritableStreamDefaultWriter<W>;
}
const WritableStream: {
prototype: WritableStream;
new <W = any>(underlyingSink?: UnderlyingSink<W>, strategy?: QueuingStrategy<W>): WritableStream<W>;
};
/**
* This Streams API interface is the object returned by
* WritableStream.getWriter() and once created locks the < writer to the
* WritableStream ensuring that no other streams can write to the underlying
* sink.
*/
interface WritableStreamDefaultWriter<W = any> {
readonly closed: Promise<undefined>;
readonly desiredSize: number | null;
readonly ready: Promise<undefined>;
abort(reason?: any): Promise<void>;
close(): Promise<void>;
releaseLock(): void;
write(chunk?: W): Promise<void>;
}
const WritableStreamDefaultWriter: {
prototype: WritableStreamDefaultWriter;
new <W = any>(stream: WritableStream<W>): WritableStreamDefaultWriter<W>;
};
/**
* This Streams API interface represents a controller allowing control of a
* WritableStream's state. When constructing a WritableStream, the
* underlying sink is given a corresponding WritableStreamDefaultController
* instance to manipulate.
*/
interface WritableStreamDefaultController {
error(e?: any): void;
}
const WritableStreamDefaultController: {
prototype: WritableStreamDefaultController;
new (): WritableStreamDefaultController;
};
interface QueuingStrategy<T = any> {
highWaterMark?: number;
size?: QueuingStrategySize<T>;
}
interface QueuingStrategySize<T = any> {
(chunk?: T): number;
}
interface QueuingStrategyInit {
/**
* Creates a new ByteLengthQueuingStrategy with the provided high water
* mark.
*
* Note that the provided high water mark will not be validated ahead of
* time. Instead, if it is negative, NaN, or not a number, the resulting
* ByteLengthQueuingStrategy will cause the corresponding stream
* constructor to throw.
*/
highWaterMark: number;
}
/**
* This Streams API interface provides a built-in byte length queuing
* strategy that can be used when constructing streams.
*/
interface ByteLengthQueuingStrategy extends QueuingStrategy<ArrayBufferView> {
readonly highWaterMark: number;
readonly size: QueuingStrategySize<ArrayBufferView>;
}
const ByteLengthQueuingStrategy: {
prototype: ByteLengthQueuingStrategy;
new (init: QueuingStrategyInit): ByteLengthQueuingStrategy;
};
/**
* This Streams API interface provides a built-in byte length queuing
* strategy that can be used when constructing streams.
*/
interface CountQueuingStrategy extends QueuingStrategy {
readonly highWaterMark: number;
readonly size: QueuingStrategySize;
}
const CountQueuingStrategy: {
prototype: CountQueuingStrategy;
new (init: QueuingStrategyInit): CountQueuingStrategy;
};
interface TextEncoderStream {
/** Returns "utf-8". */
readonly encoding: 'utf-8';
readonly readable: ReadableStream<Uint8Array>;
readonly writable: WritableStream<string>;
readonly [Symbol.toStringTag]: string;
}
const TextEncoderStream: {
prototype: TextEncoderStream;
new (): TextEncoderStream;
};
interface TextDecoderOptions {
fatal?: boolean;
ignoreBOM?: boolean;
}
type BufferSource = ArrayBufferView | ArrayBuffer;
interface TextDecoderStream {
/** Returns encoding's name, lower cased. */
readonly encoding: string;
/** Returns `true` if error mode is "fatal", and `false` otherwise. */
readonly fatal: boolean;
/** Returns `true` if ignore BOM flag is set, and `false` otherwise. */
readonly ignoreBOM: boolean;
readonly readable: ReadableStream<string>;
readonly writable: WritableStream<BufferSource>;
readonly [Symbol.toStringTag]: string;
}
const TextDecoderStream: {
prototype: TextDecoderStream;
new (label?: string, options?: TextDecoderOptions): TextDecoderStream;
};
}
declare module 'node:stream/web' {
export * from 'stream/web';
} | PypiClean |
/C_MTEB-1.0.0.tar.gz/C_MTEB-1.0.0/C_MTEB/tasks/Retrieval.py | from collections import defaultdict
from datasets import load_dataset, DatasetDict
from mteb import AbsTaskRetrieval
def load_retrieval_data(hf_hub_name, eval_splits):
eval_split = eval_splits[0]
dataset = load_dataset(hf_hub_name)
qrels = load_dataset(hf_hub_name + '-qrels')[eval_split]
corpus = {e['id']: {'text': e['text']} for e in dataset['corpus']}
queries = {e['id']: e['text'] for e in dataset['queries']}
relevant_docs = defaultdict(dict)
for e in qrels:
relevant_docs[e['qid']][e['pid']] = e['score']
corpus = DatasetDict({eval_split:corpus})
queries = DatasetDict({eval_split:queries})
relevant_docs = DatasetDict({eval_split:relevant_docs})
return corpus, queries, relevant_docs
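# Illustrative shape of the returned DatasetDicts (the ids and texts below are
# assumptions, not actual dataset contents):
#
#     corpus, queries, relevant_docs = load_retrieval_data(
#         'C-MTEB/T2Retrieval', ['dev'])
#     # corpus['dev']        -> {'doc-1': {'text': '...'}, ...}
#     # queries['dev']       -> {'q-1': '...', ...}
#     # relevant_docs['dev'] -> {'q-1': {'doc-1': 1}, ...}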
class T2Retrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'T2Retrieval',
'hf_hub_name': 'C-MTEB/T2Retrieval',
'reference': 'https://arxiv.org/abs/2304.03679',
'description': 'T2Ranking: A large-scale Chinese Benchmark for Passage Ranking',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class MMarcoRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'MMarcoRetrieval',
'hf_hub_name': 'C-MTEB/MMarcoRetrieval',
'reference': 'https://github.com/unicamp-dl/mMARCO',
'description': 'mMARCO is a multilingual version of the MS MARCO passage ranking dataset',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class DuRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'DuRetrieval',
'hf_hub_name': 'C-MTEB/DuRetrieval',
'reference': 'https://aclanthology.org/2022.emnlp-main.357.pdf',
'description': 'A Large-scale Chinese Benchmark for Passage Retrieval from Web Search Engine',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class CovidRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'CovidRetrieval',
'hf_hub_name': 'C-MTEB/CovidRetrieval',
'reference': 'https://aclanthology.org/2022.emnlp-main.357.pdf',
'description': 'COVID-19 news articles',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class CmedqaRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'CmedqaRetrieval',
'hf_hub_name': 'C-MTEB/CmedqaRetrieval',
'reference': 'https://aclanthology.org/2022.emnlp-main.357.pdf',
'description': 'Online medical consultation text',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class EcomRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'EcomRetrieval',
'hf_hub_name': 'C-MTEB/EcomRetrieval',
'reference': 'https://arxiv.org/abs/2203.03367',
'description': 'Passage retrieval dataset collected from Alibaba search engine systems in ecom domain',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class MedicalRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'MedicalRetrieval',
'hf_hub_name': 'C-MTEB/MedicalRetrieval',
'reference': 'https://arxiv.org/abs/2203.03367',
'description': 'Passage retrieval dataset collected from Alibaba search engine systems in medical domain',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'],
self.description['eval_splits'])
self.data_loaded = True
class VideoRetrieval(AbsTaskRetrieval):
@property
def description(self):
return {
'name': 'VideoRetrieval',
'hf_hub_name': 'C-MTEB/VideoRetrieval',
'reference': 'https://arxiv.org/abs/2203.03367',
'description': 'Passage retrieval dataset collected from Alibaba search engine systems in video domain',
'type': 'Retrieval',
'category': 's2p',
'eval_splits': ['dev'],
'eval_langs': ['zh'],
'main_score': 'ndcg_at_10',
}
def load_data(self, **kwargs):
if self.data_loaded:
return
self.corpus, self.queries, self.relevant_docs = load_retrieval_data(self.description['hf_hub_name'], self.description['eval_splits'])
self.data_loaded = True | PypiClean |
/CPAT-3.0.4.tar.gz/CPAT-3.0.4/.eggs/nose-1.3.7-py3.7.egg/nose/commands.py | try:
from setuptools import Command
except ImportError:
Command = nosetests = None
else:
from nose.config import Config, option_blacklist, user_config_files, \
flag, _bool
from nose.core import TestProgram
from nose.plugins import DefaultPluginManager
def get_user_options(parser):
"""convert a optparse option list into a distutils option tuple list"""
opt_list = []
for opt in parser.option_list:
if opt._long_opts[0][2:] in option_blacklist:
continue
long_name = opt._long_opts[0][2:]
if opt.action not in ('store_true', 'store_false'):
long_name = long_name + "="
short_name = None
if opt._short_opts:
short_name = opt._short_opts[0][1:]
opt_list.append((long_name, short_name, opt.help or ""))
return opt_list
class nosetests(Command):
description = "Run unit tests using nosetests"
__config = Config(files=user_config_files(),
plugins=DefaultPluginManager())
__parser = __config.getParser()
user_options = get_user_options(__parser)
def initialize_options(self):
"""create the member variables, but change hyphens to
underscores
"""
self.option_to_cmds = {}
for opt in self.__parser.option_list:
cmd_name = opt._long_opts[0][2:]
option_name = cmd_name.replace('-', '_')
self.option_to_cmds[option_name] = cmd_name
setattr(self, option_name, None)
self.attr = None
def finalize_options(self):
"""nothing to do here"""
pass
def run(self):
"""ensure tests are capable of being run, then
run nose.main with a reconstructed argument list"""
if getattr(self.distribution, 'use_2to3', False):
# If we run 2to3 we can not do this inplace:
# Ensure metadata is up-to-date
build_py = self.get_finalized_command('build_py')
build_py.inplace = 0
build_py.run()
bpy_cmd = self.get_finalized_command("build_py")
build_path = bpy_cmd.build_lib
# Build extensions
egg_info = self.get_finalized_command('egg_info')
egg_info.egg_base = build_path
egg_info.run()
build_ext = self.get_finalized_command('build_ext')
build_ext.inplace = 0
build_ext.run()
else:
self.run_command('egg_info')
# Build extensions in-place
build_ext = self.get_finalized_command('build_ext')
build_ext.inplace = 1
build_ext.run()
if self.distribution.install_requires:
self.distribution.fetch_build_eggs(
self.distribution.install_requires)
if self.distribution.tests_require:
self.distribution.fetch_build_eggs(
self.distribution.tests_require)
ei_cmd = self.get_finalized_command("egg_info")
argv = ['nosetests', '--where', ei_cmd.egg_base]
for (option_name, cmd_name) in list(self.option_to_cmds.items()):
if option_name in option_blacklist:
continue
value = getattr(self, option_name)
if value is not None:
argv.extend(
self.cfgToArg(option_name.replace('_', '-'), value))
TestProgram(argv=argv, config=self.__config)
def cfgToArg(self, optname, value):
argv = []
long_optname = '--' + optname
opt = self.__parser.get_option(long_optname)
if opt.action in ('store_true', 'store_false'):
if not flag(value):
raise ValueError("Invalid value '%s' for '%s'" % (
value, optname))
if _bool(value):
argv.append(long_optname)
else:
argv.extend([long_optname, value])
return argv | PypiClean |
/LUBEAT-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl/econml/policy/_drlearner.py |
from warnings import warn
import numpy as np
from sklearn.base import clone
from ..utilities import check_inputs, filter_none_kwargs, check_input_arrays
from ..dr import DRLearner
from ..dr._drlearner import _ModelFinal
from .._tree_exporter import _SingleTreeExporterMixin
from ._base import PolicyLearner
from . import PolicyTree, PolicyForest
class _PolicyModelFinal(_ModelFinal):
def fit(self, Y, T, X=None, W=None, *, nuisances,
sample_weight=None, freq_weight=None, sample_var=None, groups=None):
if sample_var is not None:
warn('Parameter `sample_var` is ignored by the final estimator')
sample_var = None
Y_pred, _ = nuisances
self.d_y = Y_pred.shape[1:-1] # track whether there's a Y dimension (must be a singleton)
if (X is not None) and (self._featurizer is not None):
X = self._featurizer.fit_transform(X)
filtered_kwargs = filter_none_kwargs(sample_weight=sample_weight, sample_var=sample_var)
ys = Y_pred[..., 1:] - Y_pred[..., [0]] # subtract control results from each other arm
if self.d_y: # need to squeeze out singleton so that we fit on 2D array
ys = ys.squeeze(1)
ys = np.hstack([np.zeros((ys.shape[0], 1)), ys])
self.model_cate = self._model_final.fit(X, ys, **filtered_kwargs)
return self
def predict(self, X=None):
if (X is not None) and (self._featurizer is not None):
X = self._featurizer.transform(X)
pred = self.model_cate.predict_value(X)[:, 1:]
if self.d_y: # need to reintroduce singleton Y dimension
return pred[:, np.newaxis, :]
return pred
def score(self, Y, T, X=None, W=None, *, nuisances, sample_weight=None, groups=None):
return 0
class _DRLearnerWrapper(DRLearner):
def _gen_ortho_learner_model_final(self):
return _PolicyModelFinal(self._gen_model_final(), self._gen_featurizer(), self.multitask_model_final)
class _BaseDRPolicyLearner(PolicyLearner):
def _gen_drpolicy_learner(self):
pass
def fit(self, Y, T, *, X=None, W=None, sample_weight=None, groups=None):
"""
Estimate a policy model from data.
Parameters
----------
Y: (n,) vector of length n
Outcomes for each sample
T: (n,) vector of length n
Treatments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight: optional(n,) vector or None (Default=None)
Weights for each samples
groups: (n,) vector, optional
All rows corresponding to the same group will be kept together during splitting.
If groups is not None, the `cv` argument passed to this class's initializer
must support a 'groups' argument to its split method.
Returns
-------
self: object instance
"""
self.drlearner_ = self._gen_drpolicy_learner()
self.drlearner_.fit(Y, T, X=X, W=W, sample_weight=sample_weight, groups=groups)
return self
def predict_value(self, X):
""" Get effect values for each non-baseline treatment and for each sample.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The training input samples.
Returns
-------
values : array-like of shape (n_samples, n_treatments - 1)
The predicted average value for each sample and for each non-baseline treatment, as compared
to the baseline treatment value and based on the feature neighborhoods defined by the trees.
"""
return self.drlearner_.const_marginal_effect(X)
def predict_proba(self, X):
""" Predict the probability of recommending each treatment
Parameters
----------
X : array-like of shape (n_samples, n_features)
The input samples.
Returns
-------
treatment_proba : array-like of shape (n_samples, n_treatments)
The probability of each treatment recommendation
"""
X, = check_input_arrays(X)
if self.drlearner_.featurizer_ is not None:
X = self.drlearner_.featurizer_.fit_transform(X)
return self.policy_model_.predict_proba(X)
def predict(self, X):
""" Get recommended treatment for each sample.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The training input samples.
Returns
-------
treatment : array-like of shape (n_samples,)
The index of the recommended treatment in the same order as in categories, or in
lexicographic order if `categories='auto'`. 0 corresponds to the baseline/control treatment.
For ensemble policy models, recommended treatments are aggregated from each model in the ensemble
and the treatment that receives the most votes is returned. Use `predict_proba` to get the fraction
of models in the ensemble that recommend each treatment for each sample.
"""
return np.argmax(self.predict_proba(X), axis=1)
def policy_feature_names(self, *, feature_names=None):
"""
Get the output feature names.
Parameters
----------
feature_names: list of strings of length X.shape[1] or None
The names of the input features. If None and X is a dataframe, it defaults to the column names
from the dataframe.
Returns
-------
out_feature_names: list of strings or None
The names of the output features on which the policy model is fitted.
"""
return self.drlearner_.cate_feature_names(feature_names=feature_names)
def policy_treatment_names(self, *, treatment_names=None):
"""
Get the names of the treatments.
Parameters
----------
treatment_names: list of strings of length n_categories
            The names of the treatments (including the baseline). If None then values are auto-generated
based on input metadata.
Returns
-------
out_treatment_names: list of strings
The names of the treatments including the baseline/control treatment.
"""
if treatment_names is not None:
if len(treatment_names) != len(self.drlearner_.cate_treatment_names()) + 1:
raise ValueError('The variable `treatment_names` should have length equal to '
'n_treatments + 1, containing the value of the control/none/baseline treatment as '
'the first element and the names of all the treatments as subsequent elements.')
return treatment_names
return ['None'] + self.drlearner_.cate_treatment_names()
def feature_importances(self, max_depth=4, depth_decay_exponent=2.0):
"""
Parameters
----------
max_depth : int, default=4
Splits of depth larger than `max_depth` are not used in this calculation
depth_decay_exponent: double, default=2.0
The contribution of each split to the total score is re-weighted by ``1 / (1 + `depth`)**2.0``.
Returns
-------
feature_importances_ : ndarray of shape (n_features,)
Normalized total parameter heterogeneity inducing importance of each feature
"""
return self.policy_model_.feature_importances(max_depth=max_depth,
depth_decay_exponent=depth_decay_exponent)
@property
def feature_importances_(self):
return self.feature_importances()
@property
def policy_model_(self):
""" The trained final stage policy model
"""
return self.drlearner_.multitask_model_cate
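# A minimal end-to-end sketch of the policy learner API above, on hypothetical
# synthetic data (`DRPolicyTree` is defined just below):
#
#     import numpy as np
#     X = np.random.normal(size=(1000, 3))
#     T = np.random.binomial(1, 0.5, size=1000)           # binary treatment
#     Y = T * (X[:, 0] > 0) + np.random.normal(size=1000)
#     est = DRPolicyTree(max_depth=2, min_samples_leaf=10)
#     est.fit(Y, T, X=X)
#     est.predict(X[:5])        # 0 = baseline/control, 1 = treat
#     est.predict_value(X[:5])  # doubly robust value of treating vs. control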
class DRPolicyTree(_BaseDRPolicyLearner):
"""
Policy learner that uses doubly-robust correction techniques to account for
covariate shift (selection bias) between the treatment arms.
In this estimator, the policy is estimated by first constructing doubly robust estimates of the counterfactual
outcomes
.. math ::
Y_{i, t}^{DR} = E[Y | X_i, W_i, T_i=t]\
+ \\frac{Y_i - E[Y | X_i, W_i, T_i=t]}{Pr[T_i=t | X_i, W_i]} \\cdot 1\\{T_i=t\\}
Then optimizing the objective
.. math ::
        V(\\pi) = \\sum_i \\sum_t \\pi_t(X_i) \\cdot (Y_{i, t}^{DR} - Y_{i, 0}^{DR})
with the constraint that only one of :math:`\\pi_t(X_i)` is 1 and the rest are 0, for each :math:`X_i`.
Thus if we estimate the nuisance functions :math:`h(X, W, T) = E[Y | X, W, T]` and
:math:`p_t(X, W)=Pr[T=t | X, W]` in the first stage, we can estimate the final stage cate for each
    treatment t by constructing a decision tree that maximizes the objective :math:`V(\\pi)`.
The problem of estimating the nuisance function :math:`p` is a simple multi-class classification
problem of predicting the label :math:`T` from :math:`X, W`. The :class:`.DRLearner`
class takes as input the parameter ``model_propensity``, which is an arbitrary scikit-learn
classifier, that is internally used to solve this classification problem.
The second nuisance function :math:`h` is a simple regression problem and the :class:`.DRLearner`
    class takes as input the parameter ``model_regression``, which is an arbitrary scikit-learn regressor that
is internally used to solve this regression problem.
Parameters
----------
model_propensity : scikit-learn classifier or 'auto', optional (default='auto')
Estimator for Pr[T=t | X, W]. Trained by regressing treatments on (features, controls) concatenated.
Must implement `fit` and `predict_proba` methods. The `fit` method must be able to accept X and T,
where T is a shape (n, ) array.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV` will be chosen.
model_regression : scikit-learn regressor or 'auto', optional (default='auto')
Estimator for E[Y | X, W, T]. Trained by regressing Y on (features, controls, one-hot-encoded treatments)
concatenated. The one-hot-encoding excludes the baseline treatment. Must implement `fit` and
`predict` methods. If different models per treatment arm are desired, see the
:class:`.MultiModelWrapper` helper class.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
featurizer : :term:`transformer`, optional, default None
Must support fit_transform and transform. Used to create composite features in the final CATE regression.
It is ignored if X is None. The final CATE will be trained on the outcome of featurizer.fit_transform(X).
If featurizer=None, then CATE is trained on X.
min_propensity : float, optional, default ``1e-6``
The minimum propensity at which to clip propensity estimates to avoid dividing by zero.
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional (default is 2)
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
max_depth : integer or None, optional (default=None)
The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples.
min_samples_split : int, float, optional (default=10)
The minimum number of splitting samples required to split an internal node.
- If int, then consider `min_samples_split` as the minimum number.
- If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split.
min_samples_leaf : int, float, optional (default=5)
The minimum number of samples required to be at a leaf node.
A split point at any depth will only be considered if it leaves at
least ``min_samples_leaf`` splitting samples in each of the left and
right branches. This may have the effect of smoothing the model,
especially in regression. After construction the tree is also pruned
so that there are at least min_samples_leaf estimation samples on
each leaf.
- If int, then consider `min_samples_leaf` as the minimum number.
- If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node.
min_weight_fraction_leaf : float, optional (default=0.)
The minimum weighted fraction of the sum total of weights (of all
splitting samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided. After construction
the tree is pruned so that the fraction of the sum total weight
of the estimation samples contained in each leaf node is at
least min_weight_fraction_leaf
max_features : int, float, string or None, optional (default="auto")
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split.
- If float, then `max_features` is a fraction and
`int(max_features * n_features)` features are considered at each
split.
- If "auto", then `max_features=n_features`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features.
min_impurity_decrease : float, optional (default=0.)
A node will be split if this split induces a decrease of the impurity
greater than or equal to this value.
The weighted impurity decrease equation is the following::
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where ``N`` is the total number of split samples, ``N_t`` is the number of
split samples at the current node, ``N_t_L`` is the number of split samples in the
left child, and ``N_t_R`` is the number of split samples in the right child.
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed.
min_balancedness_tol: float in [0, .5], default=.45
How imbalanced a split we can tolerate. This enforces that each split leaves at least
(.5 - min_balancedness_tol) fraction of samples on each side of the split; or fraction
        of the total weight of samples, when sample_weight is not None. The default value ensures
that at least 5% of the parent node weight falls in each side of the split. Set it to 0.0 for no
balancedness and to .5 for perfectly balanced splits. For the formal inference theory
to be valid, this has to be any positive constant bounded away from zero.
honest : boolean, optional (default=True)
Whether to use honest trees, i.e. half of the samples are used for
creating the tree structure and the other half for the estimation at
        the leaves. If False, then all samples are used for both parts.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
"""
def __init__(self, *,
model_regression="auto",
model_propensity="auto",
featurizer=None,
min_propensity=1e-6,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
max_depth=None,
min_samples_split=10,
min_samples_leaf=5,
min_weight_fraction_leaf=0.,
max_features="auto",
min_impurity_decrease=0.,
min_balancedness_tol=.45,
honest=True,
random_state=None):
self.model_regression = clone(model_regression, safe=False)
self.model_propensity = clone(model_propensity, safe=False)
self.featurizer = clone(featurizer, safe=False)
self.min_propensity = min_propensity
self.categories = categories
self.cv = cv
self.mc_iters = mc_iters
self.mc_agg = mc_agg
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.min_weight_fraction_leaf = min_weight_fraction_leaf
self.max_features = max_features
self.min_impurity_decrease = min_impurity_decrease
self.min_balancedness_tol = min_balancedness_tol
self.honest = honest
self.random_state = random_state
def _gen_drpolicy_learner(self):
return _DRLearnerWrapper(model_regression=self.model_regression,
model_propensity=self.model_propensity,
featurizer=self.featurizer,
min_propensity=self.min_propensity,
categories=self.categories,
cv=self.cv,
mc_iters=self.mc_iters,
mc_agg=self.mc_agg,
model_final=PolicyTree(max_depth=self.max_depth,
min_samples_split=self.min_samples_split,
min_samples_leaf=self.min_samples_leaf,
min_weight_fraction_leaf=self.min_weight_fraction_leaf,
max_features=self.max_features,
min_impurity_decrease=self.min_impurity_decrease,
min_balancedness_tol=self.min_balancedness_tol,
honest=self.honest,
random_state=self.random_state),
multitask_model_final=True,
random_state=self.random_state)
def plot(self, *, feature_names=None, treatment_names=None, ax=None, title=None,
max_depth=None, filled=True, rounded=True, precision=3, fontsize=None):
"""
        Export the policy tree to matplotlib
Parameters
----------
ax : :class:`matplotlib.axes.Axes`, optional, default None
The axes on which to plot
title : string, optional, default None
A title for the final figure to be printed at the top of the page.
feature_names : list of strings, optional, default None
Names of each of the features.
treatment_names : list of strings, optional, default None
Names of each of the treatments including the baseline/control
max_depth: int or None, optional, default None
The maximum tree depth to plot
        filled : bool, optional, default True
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
rounded : bool, optional, default True
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
precision : int, optional, default 3
Number of digits of precision for floating point in the values of
impurity, threshold and value attributes of each node.
fontsize : int, optional, default None
Font size for text
"""
return self.policy_model_.plot(feature_names=self.policy_feature_names(feature_names=feature_names),
treatment_names=self.policy_treatment_names(treatment_names=treatment_names),
ax=ax,
title=title,
max_depth=max_depth,
filled=filled,
rounded=rounded,
precision=precision,
fontsize=fontsize)
def export_graphviz(self, *, out_file=None,
feature_names=None, treatment_names=None,
max_depth=None, filled=True, leaves_parallel=True,
rotate=False, rounded=True, special_characters=False, precision=3):
"""
Export a graphviz dot file representing the learned tree model
Parameters
----------
out_file : file object or string, optional, default None
Handle or name of the output file. If ``None``, the result is
returned as a string.
feature_names : list of strings, optional, default None
Names of each of the features.
treatment_names : list of strings, optional, default None
Names of each of the treatments, including the baseline treatment
max_depth: int or None, optional, default None
The maximum tree depth to plot
        filled : bool, optional, default True
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
leaves_parallel : bool, optional, default True
When set to ``True``, draw all leaf nodes at the bottom of the tree.
rotate : bool, optional, default False
When set to ``True``, orient tree left to right rather than top-down.
rounded : bool, optional, default True
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
special_characters : bool, optional, default False
When set to ``False``, ignore special characters for PostScript
compatibility.
precision : int, optional, default 3
Number of digits of precision for floating point in the values of
impurity, threshold and value attributes of each node.
"""
return self.policy_model_.export_graphviz(out_file=out_file,
feature_names=self.policy_feature_names(feature_names=feature_names),
treatment_names=self.policy_treatment_names(
treatment_names=treatment_names),
max_depth=max_depth,
filled=filled,
leaves_parallel=leaves_parallel,
rotate=rotate,
rounded=rounded,
special_characters=special_characters,
precision=precision)
def render(self, out_file, *, format='pdf', view=True, feature_names=None,
treatment_names=None, max_depth=None,
filled=True, leaves_parallel=True, rotate=False, rounded=True,
special_characters=False, precision=3):
"""
        Render the tree to a file
Parameters
----------
out_file : file name to save to
format : string, optional, default 'pdf'
The file format to render to; must be supported by graphviz
view : bool, optional, default True
Whether to open the rendered result with the default application.
feature_names : list of strings, optional, default None
Names of each of the features.
treatment_names : list of strings, optional, default None
Names of each of the treatments, including the baseline/control
max_depth: int or None, optional, default None
The maximum tree depth to plot
        filled : bool, optional, default True
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
leaves_parallel : bool, optional, default True
When set to ``True``, draw all leaf nodes at the bottom of the tree.
rotate : bool, optional, default False
When set to ``True``, orient tree left to right rather than top-down.
rounded : bool, optional, default True
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
special_characters : bool, optional, default False
When set to ``False``, ignore special characters for PostScript
compatibility.
precision : int, optional, default 3
Number of digits of precision for floating point in the values of
impurity, threshold and value attributes of each node.
"""
return self.policy_model_.render(out_file,
format=format,
view=view,
feature_names=self.policy_feature_names(feature_names=feature_names),
treatment_names=self.policy_treatment_names(treatment_names=treatment_names),
max_depth=max_depth,
filled=filled,
leaves_parallel=leaves_parallel,
rotate=rotate,
rounded=rounded,
special_characters=special_characters,
precision=precision)
class DRPolicyForest(_BaseDRPolicyLearner):
"""
Policy learner that uses doubly-robust correction techniques to account for
covariate shift (selection bias) between the treatment arms.
In this estimator, the policy is estimated by first constructing doubly robust estimates of the counterfactual
outcomes
.. math ::
Y_{i, t}^{DR} = E[Y | X_i, W_i, T_i=t]\
+ \\frac{Y_i - E[Y | X_i, W_i, T_i=t]}{Pr[T_i=t | X_i, W_i]} \\cdot 1\\{T_i=t\\}
Then optimizing the objective
.. math ::
        V(\\pi) = \\sum_i \\sum_t \\pi_t(X_i) \\cdot (Y_{i, t}^{DR} - Y_{i, 0}^{DR})
with the constraint that only one of :math:`\\pi_t(X_i)` is 1 and the rest are 0, for each :math:`X_i`.
Thus if we estimate the nuisance functions :math:`h(X, W, T) = E[Y | X, W, T]` and
:math:`p_t(X, W)=Pr[T=t | X, W]` in the first stage, we can estimate the final stage cate for each
    treatment t by constructing a decision tree that maximizes the objective :math:`V(\\pi)`.
The problem of estimating the nuisance function :math:`p` is a simple multi-class classification
problem of predicting the label :math:`T` from :math:`X, W`. The :class:`.DRLearner`
class takes as input the parameter ``model_propensity``, which is an arbitrary scikit-learn
classifier, that is internally used to solve this classification problem.
The second nuisance function :math:`h` is a simple regression problem and the :class:`.DRLearner`
    class takes as input the parameter ``model_regression``, which is an arbitrary scikit-learn regressor that
is internally used to solve this regression problem.
Parameters
----------
model_propensity : scikit-learn classifier or 'auto', optional (default='auto')
Estimator for Pr[T=t | X, W]. Trained by regressing treatments on (features, controls) concatenated.
Must implement `fit` and `predict_proba` methods. The `fit` method must be able to accept X and T,
where T is a shape (n, ) array.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV` will be chosen.
model_regression : scikit-learn regressor or 'auto', optional (default='auto')
Estimator for E[Y | X, W, T]. Trained by regressing Y on (features, controls, one-hot-encoded treatments)
concatenated. The one-hot-encoding excludes the baseline treatment. Must implement `fit` and
`predict` methods. If different models per treatment arm are desired, see the
:class:`.MultiModelWrapper` helper class.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
featurizer : :term:`transformer`, optional, default None
Must support fit_transform and transform. Used to create composite features in the final CATE regression.
It is ignored if X is None. The final CATE will be trained on the outcome of featurizer.fit_transform(X).
If featurizer=None, then CATE is trained on X.
min_propensity : float, optional, default ``1e-6``
The minimum propensity at which to clip propensity estimates to avoid dividing by zero.
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional (default is 2)
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
n_estimators : integer, optional (default=100)
        The total number of trees in the forest. The forest consists of
        sqrt(n_estimators) sub-forests, where each sub-forest contains
        sqrt(n_estimators) trees.
max_depth : integer or None, optional (default=None)
The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples.
min_samples_split : int, float, optional (default=10)
The minimum number of splitting samples required to split an internal node.
- If int, then consider `min_samples_split` as the minimum number.
- If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split.
min_samples_leaf : int, float, optional (default=5)
The minimum number of samples required to be at a leaf node.
A split point at any depth will only be considered if it leaves at
least ``min_samples_leaf`` splitting samples in each of the left and
right branches. This may have the effect of smoothing the model,
especially in regression. After construction the tree is also pruned
so that there are at least min_samples_leaf estimation samples on
each leaf.
- If int, then consider `min_samples_leaf` as the minimum number.
- If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node.
min_weight_fraction_leaf : float, optional (default=0.)
The minimum weighted fraction of the sum total of weights (of all
splitting samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided. After construction
the tree is pruned so that the fraction of the sum total weight
of the estimation samples contained in each leaf node is at
least min_weight_fraction_leaf
max_features : int, float, string or None, optional (default="auto")
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split.
- If float, then `max_features` is a fraction and
`int(max_features * n_features)` features are considered at each
split.
- If "auto", then `max_features=n_features`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features.
min_impurity_decrease : float, optional (default=0.)
A node will be split if this split induces a decrease of the impurity
greater than or equal to this value.
The weighted impurity decrease equation is the following::
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where ``N`` is the total number of split samples, ``N_t`` is the number of
split samples at the current node, ``N_t_L`` is the number of split samples in the
left child, and ``N_t_R`` is the number of split samples in the right child.
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed.
max_samples : int or float in (0, 1], default=.5,
The number of samples to use for each subsample that is used to train each tree:
- If int, then train each tree on `max_samples` samples, sampled without replacement from all the samples
- If float, then train each tree on ceil(`max_samples` * `n_samples`), sampled without replacement
from all the samples.
min_balancedness_tol: float in [0, .5], default=.45
How imbalanced a split we can tolerate. This enforces that each split leaves at least
(.5 - min_balancedness_tol) fraction of samples on each side of the split; or fraction
        of the total weight of samples, when sample_weight is not None. The default value ensures
that at least 5% of the parent node weight falls in each side of the split. Set it to 0.0 for no
balancedness and to .5 for perfectly balanced splits. For the formal inference theory
to be valid, this has to be any positive constant bounded away from zero.
honest : boolean, optional (default=True)
Whether to use honest trees, i.e. half of the samples are used for
creating the tree structure and the other half for the estimation at
        the leaves. If False, then all samples are used for both parts.
n_jobs : int or None, optional (default=-1)
The number of jobs to run in parallel for both `fit` and `predict`.
``None`` means 1 unless in a :func:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
verbose : int, optional (default=0)
Controls the verbosity when fitting and predicting.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
"""
def __init__(self, *,
model_regression="auto",
model_propensity="auto",
featurizer=None,
min_propensity=1e-6,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
n_estimators=100,
max_depth=None,
min_samples_split=10,
min_samples_leaf=5,
min_weight_fraction_leaf=0.,
max_features="auto",
min_impurity_decrease=0.,
max_samples=.5,
min_balancedness_tol=.45,
honest=True,
n_jobs=-1,
verbose=0,
random_state=None):
self.model_regression = clone(model_regression, safe=False)
self.model_propensity = clone(model_propensity, safe=False)
self.featurizer = clone(featurizer, safe=False)
self.min_propensity = min_propensity
self.categories = categories
self.cv = cv
self.mc_iters = mc_iters
self.mc_agg = mc_agg
self.n_estimators = n_estimators
self.max_depth = max_depth
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.min_weight_fraction_leaf = min_weight_fraction_leaf
self.max_features = max_features
self.min_impurity_decrease = min_impurity_decrease
self.max_samples = max_samples
self.min_balancedness_tol = min_balancedness_tol
self.honest = honest
self.n_jobs = n_jobs
self.verbose = verbose
self.random_state = random_state
def _gen_drpolicy_learner(self):
return _DRLearnerWrapper(model_regression=self.model_regression,
model_propensity=self.model_propensity,
featurizer=self.featurizer,
min_propensity=self.min_propensity,
categories=self.categories,
cv=self.cv,
mc_iters=self.mc_iters,
mc_agg=self.mc_agg,
model_final=PolicyForest(max_depth=self.max_depth,
min_samples_split=self.min_samples_split,
min_samples_leaf=self.min_samples_leaf,
min_weight_fraction_leaf=self.min_weight_fraction_leaf,
max_features=self.max_features,
min_impurity_decrease=self.min_impurity_decrease,
max_samples=self.max_samples,
min_balancedness_tol=self.min_balancedness_tol,
honest=self.honest,
n_jobs=self.n_jobs,
verbose=self.verbose,
random_state=self.random_state),
multitask_model_final=True,
random_state=self.random_state)
def plot(self, tree_id, *, feature_names=None, treatment_names=None,
ax=None, title=None,
max_depth=None, filled=True, rounded=True, precision=3, fontsize=None):
"""
        Export a single policy tree of the forest to matplotlib
Parameters
----------
tree_id : int
The id of the tree of the forest to plot
ax : :class:`matplotlib.axes.Axes`, optional, default None
The axes on which to plot
title : string, optional, default None
A title for the final figure to be printed at the top of the page.
feature_names : list of strings, optional, default None
Names of each of the features.
treatment_names : list of strings, optional, default None
Names of each of the treatments, starting with a name for the baseline/control treatment
(alphanumerically smallest)
max_depth: int or None, optional, default None
The maximum tree depth to plot
        filled : bool, optional, default True
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
rounded : bool, optional, default True
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
precision : int, optional, default 3
Number of digits of precision for floating point in the values of
impurity, threshold and value attributes of each node.
fontsize : int, optional, default None
Font size for text
"""
return self.policy_model_[tree_id].plot(feature_names=self.policy_feature_names(feature_names=feature_names),
treatment_names=self.policy_treatment_names(
treatment_names=treatment_names),
ax=ax,
title=title,
max_depth=max_depth,
filled=filled,
rounded=rounded,
precision=precision,
fontsize=fontsize)
def export_graphviz(self, tree_id, *, out_file=None, feature_names=None, treatment_names=None,
max_depth=None,
filled=True, leaves_parallel=True,
rotate=False, rounded=True, special_characters=False, precision=3):
"""
Export a graphviz dot file representing the learned tree model
Parameters
----------
tree_id : int
The id of the tree of the forest to plot
out_file : file object or string, optional, default None
Handle or name of the output file. If ``None``, the result is
returned as a string.
feature_names : list of strings, optional, default None
Names of each of the features.
treatment_names : list of strings, optional, default None
Names of each of the treatments, starting with a name for the baseline/control/None treatment
(alphanumerically smallest in case of discrete treatment)
max_depth: int or None, optional, default None
The maximum tree depth to plot
        filled : bool, optional, default True
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
leaves_parallel : bool, optional, default True
When set to ``True``, draw all leaf nodes at the bottom of the tree.
rotate : bool, optional, default False
When set to ``True``, orient tree left to right rather than top-down.
rounded : bool, optional, default True
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
special_characters : bool, optional, default False
When set to ``False``, ignore special characters for PostScript
compatibility.
precision : int, optional, default 3
Number of digits of precision for floating point in the values of
impurity, threshold and value attributes of each node.
"""
feature_names = self.policy_feature_names(feature_names=feature_names)
return self.policy_model_[tree_id].export_graphviz(out_file=out_file,
feature_names=feature_names,
treatment_names=self.policy_treatment_names(
treatment_names=treatment_names),
max_depth=max_depth,
filled=filled,
leaves_parallel=leaves_parallel,
rotate=rotate,
rounded=rounded,
special_characters=special_characters,
precision=precision)
def render(self, tree_id, out_file, *, format='pdf', view=True,
feature_names=None,
treatment_names=None,
max_depth=None,
filled=True, leaves_parallel=True, rotate=False, rounded=True,
special_characters=False, precision=3):
"""
        Render the tree to a file
Parameters
----------
tree_id : int
The id of the tree of the forest to plot
out_file : file name to save to
format : string, optional, default 'pdf'
The file format to render to; must be supported by graphviz
view : bool, optional, default True
Whether to open the rendered result with the default application.
feature_names : list of strings, optional, default None
Names of each of the features.
treatment_names : list of strings, optional, default None
Names of each of the treatments, starting with a name for the baseline/control treatment
(alphanumerically smallest in case of discrete treatment)
max_depth: int or None, optional, default None
The maximum tree depth to plot
        filled : bool, optional, default True
When set to ``True``, paint nodes to indicate majority class for
classification, extremity of values for regression, or purity of node
for multi-output.
leaves_parallel : bool, optional, default True
When set to ``True``, draw all leaf nodes at the bottom of the tree.
rotate : bool, optional, default False
When set to ``True``, orient tree left to right rather than top-down.
rounded : bool, optional, default True
When set to ``True``, draw node boxes with rounded corners and use
Helvetica fonts instead of Times-Roman.
special_characters : bool, optional, default False
When set to ``False``, ignore special characters for PostScript
compatibility.
precision : int, optional, default 3
Number of digits of precision for floating point in the values of
impurity, threshold and value attributes of each node.
"""
feature_names = self.policy_feature_names(feature_names=feature_names)
return self.policy_model_[tree_id].render(out_file,
feature_names=feature_names,
treatment_names=self.policy_treatment_names(
treatment_names=treatment_names),
format=format,
view=view,
max_depth=max_depth,
filled=filled,
leaves_parallel=leaves_parallel,
rotate=rotate,
rounded=rounded,
special_characters=special_characters,
                                                  precision=precision)
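# Example sketch (hypothetical names): inspecting single trees of a fitted
# DRPolicyForest; `tree_id` indexes into the underlying PolicyForest:
#
#     forest = DRPolicyForest(n_estimators=100, max_depth=3)
#     forest.fit(Y, T, X=X)
#     forest.plot(0, feature_names=['x0', 'x1', 'x2'])
#     forest.export_graphviz(0, out_file='tree0.dot',
#                            treatment_names=['control', 'treatment'])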
/FEV_KEGG-1.1.4.tar.gz/FEV_KEGG-1.1.4/FEV_KEGG/Evolution/Events.py
from typing import Set, Dict, Tuple
from FEV_KEGG.Graph.Elements import Enzyme, EcNumber
from FEV_KEGG.Graph.SubstanceGraphs import SubstanceEnzymeGraph, SubstanceEcGraph
from FEV_KEGG.KEGG import Database
from builtins import set
from FEV_KEGG.KEGG.Organism import Group
from collections.abc import Iterable
from FEV_KEGG import settings
import itertools
from FEV_KEGG.Drawing import Export
defaultEValue = settings.defaultEvalue
"""
Default threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
This can be overridden in each relevant method's `eValue` parameter in this module.
"""
class GeneFunctionConservation(object):
"""
Evolutionary event of conserving a gene function (EC number) between a pair of arbitrary ancestor and descendant.
The conditions for a gene function conservation are:
- The EC number has been conserved, along the way from an older group of organisms to a newer one.
"""
@staticmethod
def getECs(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> Set[EcNumber]:
"""
Get EC numbers which have been conserved between ancestor and descendant, existing in both.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
Set[EcNumber]
            Set of EC numbers which occur in the ancestor's EC graph and in the descendant's, i.e. EC numbers which are conserved in the descendant.
"""
conservedECs = ancestorEcGraph.getECs()
conservedECs.intersection_update(descendantEcGraph.getECs())
return conservedECs
@staticmethod
def getGraph(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> SubstanceEcGraph:
"""
Get graph containing EC numbers which have been conserved between ancestor and descendant, existing in both.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
SubstanceEcGraph
            Graph of EC numbers which occur in the ancestor's EC graph and in the descendant's, i.e. EC numbers which are conserved in the descendant.
Substance-EC-product edges are only included if both graphs, ancestor and descendant, have both nodes, substrate and product.
"""
conservedGraph = ancestorEcGraph.intersection(descendantEcGraph, addCount=False)
conservedGraph.removeIsolatedNodes()
return conservedGraph
class GeneFunctionAddition(object):
"""
Evolutionary event of adding a gene function (EC number) between a pair of arbitrary ancestor and descendant.
The conditions for a gene function addition are:
- The EC number has been added, from an unknown origin, along the way from an older group of organisms to a newer one.
"""
@staticmethod
def getECs(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> Set[EcNumber]:
"""
Get EC numbers which have been added between ancestor and descendant, existing only in the descendant.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
Set[EcNumber]
Set of EC numbers which occur in the descendant's EC graph, but not in the ancestor's, i.e. EC numbers which are new to the descendant.
"""
addedECs = descendantEcGraph.getECs()
addedECs.difference_update(ancestorEcGraph.getECs())
return addedECs
@staticmethod
def getGraph(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> SubstanceEcGraph:
"""
Get graph containing EC numbers which have been added between ancestor and descendant, existing only in the descendant.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
SubstanceEcGraph
Graph of EC numbers which occur in the descendant's EC graph, but not in the ancestor's, i.e. EC numbers which are new in the descendant.
Substance-EC-product edges are only included if both graphs, ancestor and descendant, have both nodes, substrate and product.
"""
addedGraph = descendantEcGraph.difference(ancestorEcGraph, subtractNodes=False)
addedGraph.removeIsolatedNodes()
return addedGraph
class GeneFunctionLoss(object):
"""
Evolutionary event of losing a gene function (EC number) between a pair of arbitrary ancestor and descendant.
The conditions for a gene function loss are:
- The EC number has been lost, along the way from an older group of organisms to a newer one.
"""
@staticmethod
def getECs(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> Set[EcNumber]:
"""
Get EC numbers which have been lost between ancestor and descendant, existing only in the ancestor.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
Set[EcNumber]
            Set of EC numbers which occur in the ancestor's EC graph, but not in the descendant's, i.e. EC numbers which are lost to the descendant.
"""
lostECs = ancestorEcGraph.getECs()
lostECs.difference_update(descendantEcGraph.getECs())
return lostECs
@staticmethod
def getGraph(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> SubstanceEcGraph:
"""
Get graph containing EC numbers which have been lost between ancestor and descendant, existing only in the ancestor.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
SubstanceEcGraph
            Graph of EC numbers which occur in the ancestor's EC graph, but not in the descendant's, i.e. EC numbers which are lost in the descendant.
Substance-EC-product edges are only included if both graphs, ancestor and descendant, have both nodes, substrate and product.
"""
lostGraph = ancestorEcGraph.difference(descendantEcGraph, subtractNodes=False)
lostGraph.removeIsolatedNodes()
return lostGraph
class GeneFunctionDivergence(object):
"""
Evolutionary event of diverging (adding or losing) a gene function (EC number) between a pair of arbitrary ancestor and descendant.
The conditions for a gene function divergence are:
- The EC number exists in an older group of organisms, but not in a newer one, or the other way around.
"""
@staticmethod
def getECs(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> Set[EcNumber]:
"""
Get EC numbers which have diverged between ancestor and descendant, existing only in either one of them.
Obviously, `ancestorEcGraph` and `descendantEcGraph` can be swapped here without changing the result.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
Set[EcNumber]
            Set of EC numbers which occur in the ancestor's EC graph, but not in the descendant's and vice versa, i.e. EC numbers which only exist in either one of the organism groups.
"""
divergedECs = ancestorEcGraph.getECs()
        divergedECs.symmetric_difference_update(descendantEcGraph.getECs())  # update in place; plain symmetric_difference() would discard its result
return divergedECs
@staticmethod
def getGraph(ancestorEcGraph: SubstanceEcGraph, descendantEcGraph: SubstanceEcGraph) -> SubstanceEcGraph:
"""
Get graph containing EC numbers which have diverged between ancestor and descendant, existing only in either one of them.
Parameters
----------
ancestorEcGraph : SubstanceEcGraph
descendantEcGraph : SubstanceEcGraph
Returns
-------
SubstanceEcGraph
            Graph of EC numbers which occur in the ancestor's EC graph, but not in the descendant's and vice versa, i.e. EC numbers which only exist in either one of the organism groups.
Substance-EC-product edges are only included if both graphs, ancestor and descendant, have both nodes, substrate and product.
"""
addedGraph = GeneFunctionAddition.getGraph(ancestorEcGraph, descendantEcGraph)
lostGraph = GeneFunctionLoss.getGraph(ancestorEcGraph, descendantEcGraph)
divergedGraph = addedGraph.union(lostGraph, addCount=False)
return divergedGraph
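# Hedged usage sketch of the four event classes above; the two EC graphs are
# assumed to have been built elsewhere, e.g. as collective graphs of two
# organism groups:
#
#     conserved = GeneFunctionConservation.getECs(ancestorEcGraph, descendantEcGraph)
#     added = GeneFunctionAddition.getECs(ancestorEcGraph, descendantEcGraph)
#     lost = GeneFunctionLoss.getECs(ancestorEcGraph, descendantEcGraph)
#     diverged = GeneFunctionDivergence.getECs(ancestorEcGraph, descendantEcGraph)
#     # divergence is, by definition, the union of addition and loss:
#     assert diverged == added | lost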
class GeneDuplication(object):
"""
Abstract class for any type of gene duplication.
"""
class SimpleGroupGeneDuplication(GeneDuplication):
"""
    Evolutionary event of duplicating a gene, regardless of ancestral bonds in the comparison group.
The conditions for a 'simple group' gene duplication are:
- The gene has at least one homolog within the set of organisms its organism belongs to.
"""
def __init__(self, sameGroupOrganisms: 'Iterable[Organism] or KEGG.Organism.Group'):
"""
        Simple group gene duplication extends simple gene duplication by expanding the term 'paralog' to every organism in the set of organisms the gene's organism belongs to.
In contrast to :class:`SimpleGeneDuplication`, this class has to be instantiated, using the aforementioned set of organisms belonging to each other.
This would usually be a :class:`FEV_KEGG.KEGG.Organism.Group` of the same :class:`FEV_KEGG.Evolution.Clade.Clade`.
Parameters
----------
sameGroupOrganisms : Iterable[Organism] or Organism.Group
Organisms which will be searched for the occurence of homologs, i.e. are considered "semi-paralogously" related.
Attributes
----------
self.sameGroupOrganisms : Iterable[Organism]
Raises
------
ValueError
If `sameGroupOrganisms` is of wrong type.
Warnings
--------
This takes much longer than :class:`SimpleGeneDuplication`, because each sought gene is compared between all organisms of the same group, not only within its own organism.
"""
if isinstance(sameGroupOrganisms, Group):
self.sameGroupOrganisms = sameGroupOrganisms.organisms
elif isinstance(sameGroupOrganisms, Iterable):
self.sameGroupOrganisms = sameGroupOrganisms
else:
raise ValueError("'sameGroupOrganisms' must be of type Iterable or KEGG.Organism.Group")
def getEnzymes(self, enzymes: 'Set[Enzyme] or SubstanceEnzymeGraph', eValue = defaultEValue, returnMatches = False, ignoreDuplicatesOutsideSet = None):
"""
Get gene-duplicated enzymes.
Parameters
----------
enzymes : Set[Enzyme] or SubstanceEnzymeGraph
Set of enzymes to be checked for gene duplication, or a graph.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
returnMatches : bool, optional
If *True*, return not only `enzymes` that have homologs, but also which homologs they have. Useful for filtering for relevant homologs afterwards.
ignoreDuplicatesOutsideSet : Set[GeneID], optional
If *None*, report all found duplicates.
If not *None*, count only such enzymes as gene duplicated, which have at least one of their duplicates inside this set.
This can, for example, serve to exclude duplicates in secondary metabolism.
Returns
-------
Set[Enzyme] or Dict[Element, Set[GeneID]]
If returnMatches == *False*, all enzymes in `enzymes` which fulfil the conditions of this gene duplication definition.
If returnMatches == *True*, all enzymes in `enzymes` which fulfil the conditions of this gene duplication definition, pointing to a set of gene IDs of the found homologs.
Raises
------
ValueError
If any organism does not exist.
HTTPError
If any gene does not exist.
URLError
If connection to KEGG fails.
"""
if isinstance(enzymes, SubstanceEnzymeGraph):
enzymes = enzymes.getEnzymes()
# find gene duplicated enzymes according to simple gene duplication
shallReturnMatches = returnMatches or (ignoreDuplicatesOutsideSet is not None)
simpleGeneDuplicates = SimpleGeneDuplication.getEnzymes(enzymes, eValue, returnMatches = shallReturnMatches, ignoreDuplicatesOutsideSet = ignoreDuplicatesOutsideSet)
if returnMatches or ignoreDuplicatesOutsideSet is not None:
simpleGeneDuplicatesSet = simpleGeneDuplicates.keys()
else:
simpleGeneDuplicatesSet = simpleGeneDuplicates
duplicatedEnzymes = dict()
# those enzymes already duplicated according to simple gene duplication do not have to be tested again
soughtEnzymes = enzymes.copy()
soughtEnzymes.difference_update( simpleGeneDuplicatesSet )
# get orthologs
geneIDs = [enzyme.geneID for enzyme in soughtEnzymes]
if returnMatches or ignoreDuplicatesOutsideSet is not None: # need to get all orthologs
matchingsDict = Database.getOrthologsBulk(geneIDs, self.sameGroupOrganisms, eValue) # GeneID -> List[ Matching ]
for enzyme in enzymes: # for all enzymes, even the ones already found as paralogs
# add paralogs
paralogousGeneIDs = simpleGeneDuplicates.get(enzyme.geneID)
if paralogousGeneIDs is not None:
if len(paralogousGeneIDs) > 0:
duplicatedEnzymes[enzyme] = paralogousGeneIDs
continue # can not be in orthologs if it was already paralogous
# add orthologs
orthologousMatchings = matchingsDict.get(enzyme.geneID)
if orthologousMatchings is not None:
if len(orthologousMatchings) > 0:
matches = []
for matching in orthologousMatchings:
matches.extend(matching.matches)
matchedGeneIDs = {match.foundGeneID for match in matches}
if len(matchedGeneIDs) > 0:
currentDuplicates = duplicatedEnzymes.get(enzyme)
if currentDuplicates is None:
currentDuplicates = set()
duplicatedEnzymes[enzyme] = currentDuplicates
currentDuplicates.update(matchedGeneIDs)
else: # only interesting IF there are orthologs
orthologousOrganismsDict = Database.hasOrthologsBulk(geneIDs, self.sameGroupOrganisms, eValue) # GeneID -> List[ organismAbbreviation ]
# regarding each enzyme sought
for enzyme in soughtEnzymes:
# count orthologous GeneIDs
orthologousOrganisms = orthologousOrganismsDict.get(enzyme.geneID)
if orthologousOrganisms is None:
continue
if len(orthologousOrganisms) > 0: # has at least one ortholog in any other organism
                    duplicatedEnzymes[enzyme] = None  # could store orthologousOrganisms here, but only membership matters in this branch
if ignoreDuplicatesOutsideSet is not None:
filteredDuplicatedEnzymes = dict()
for enzyme, matchGeneIDs in duplicatedEnzymes.items():
matchesInSearchedSet = ignoreDuplicatesOutsideSet.intersection( matchGeneIDs )
if len(matchesInSearchedSet) > 0: # some of the matches are in the set of enzymes to be checked for duplicates
filteredDuplicatedEnzymes[enzyme] = matchesInSearchedSet
duplicatedEnzymes = filteredDuplicatedEnzymes
if returnMatches:
return duplicatedEnzymes
else:
return set(duplicatedEnzymes.keys())
def getEnzymePairs(self, enzymes: 'Set[Enzyme] or SubstanceEnzymeGraph', eValue = defaultEValue, ignoreDuplicatesOutsideSet = None, geneIdToEnzyme = None) -> Set[Tuple[Enzyme, Enzyme]]:
"""
Get gene-duplicated enzymes, in pairs of duplicates.
If enzymeA is a duplicate of enzymeB and vice versa, this returns symmetric duplicates of the form (enzymeA, enzymeB) and (enzymeB, enzymeA).
Parameters
----------
enzymes : Set[Enzyme] or SubstanceEnzymeGraph
Set of enzymes to be checked for gene duplication, or a graph.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
ignoreDuplicatesOutsideSet : Set[GeneID], optional
If *None*, report all found duplicates.
If not *None*, count only such enzymes as gene duplicated, which have at least one of their duplicates inside this set.
This can, for example, serve to exclude duplicates in secondary metabolism.
geneIdToEnzyme : Dict[GeneID, Enzyme], optional
Dictionary for mapping each gene ID of every found duplicate to an enzyme object.
If *None*, gets the enzyme from the database. This avoids the KeyError, but can cause a lot of network load.
Returns
-------
Set[Tuple[Enzyme, Enzyme]]
            Set of pairs of gene-duplicated enzymes, realised as tuples. The order within each pair is arbitrary, and since homology is symmetric, almost every pair also occurs with its elements reversed.
Raises
------
KeyError
If `geneIdToEnzyme` is passed, but does not contain the gene ID of every duplicate.
"""
duplicatedEnzymeMatches = self.getEnzymes(enzymes, eValue, returnMatches = True, ignoreDuplicatesOutsideSet = ignoreDuplicatesOutsideSet)
# expand matches of homologous gene IDs to pairs of duplicated enzymes. Which we can do (without wasting further resources) only here, because only here we have the geneID -> enzyme dict!
duplicatedEnzymePairs = set()
if geneIdToEnzyme is None: # need to get enzyme objects from database
allGeneIDs = set()
for geneIDs in duplicatedEnzymeMatches.values():
allGeneIDs.update( geneIDs )
geneIdToEnzyme = dict()
for geneID, gene in Database.getGeneBulk(allGeneIDs).items():
enzyme = Enzyme.fromGene(gene)
geneIdToEnzyme[geneID] = enzyme
for enzymeA, geneIDs in duplicatedEnzymeMatches.items():
for geneID in geneIDs:
enzymeB = geneIdToEnzyme[geneID]
duplicatedEnzymePairs.add( (enzymeA, enzymeB) )
return duplicatedEnzymePairs
def filterEnzymes(self, substanceEnzymeGraph: SubstanceEnzymeGraph, eValue = defaultEValue) -> SubstanceEnzymeGraph:
"""
Remove all enzymes from a graph which have not been gene-duplicated.
Parameters
----------
substanceEnzymeGraph : SubstanceEnzymeGraph
Graph of enzymes to be checked for gene duplication.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
Returns
-------
SubstanceEnzymeGraph
A copy of the `substanceEnzymeGraph` containing only enzymes which fulfil the conditions of this gene duplication definition.
Raises
------
ValueError
If any organism does not exist.
HTTPError
If any gene does not exist.
URLError
If connection to KEGG fails.
"""
graph = substanceEnzymeGraph.copy()
possibleGeneDuplicates = self.getEnzymes(substanceEnzymeGraph, eValue)
graph.removeAllEnzymesExcept( possibleGeneDuplicates )
return graph
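# Hedged usage sketch (organism abbreviations and constructor arguments are
# assumptions; see FEV_KEGG.KEGG.Organism for the actual Group API):
#
#     clade = Group(['eco', 'ecj', 'ecd'])
#     duplication = SimpleGroupGeneDuplication(clade)
#     duplicatedEnzymes = duplication.getEnzymes(substanceEnzymeGraph)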
class SimpleGeneDuplication(GeneDuplication):
"""
    Evolutionary event of duplicating a gene, regardless of ancestral bonds.
The conditions for a 'simple' gene duplication are:
- The gene has at least one paralog.
"""
@staticmethod
def getEnzymes(enzymes: 'Set[Enzyme] or SubstanceEnzymeGraph', eValue = defaultEValue, returnMatches = False, ignoreDuplicatesOutsideSet = None, preCalculatedEnzymes = None):
"""
Get gene-duplicated enzymes.
Parameters
----------
enzymes : Set[Enzyme] or SubstanceEnzymeGraph
Set of enzymes to be checked for gene duplication, or a graph.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
returnMatches : bool, optional
If *True*, return not only `enzymes` that have homologs, but also which homologs they have. Useful for filtering for relevant homologs afterwards.
ignoreDuplicatesOutsideSet : Set[GeneID] or *True*, optional
If *None*, report all found duplicates.
            If *True*, automatically restrict to the gene IDs of all enzymes in `enzymes`.
If a set, count only such enzymes as gene duplicated, which have at least one of their duplicates inside this set. Beware, the set has to contain the enzymes' gene ID!
This can, for example, serve to exclude duplicates in secondary metabolism.
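        preCalculatedEnzymes : Dict[Enzyme, Set[Enzyme]], optional
            If given, the database is not queried at all; this pre-calculated mapping of enzymes to
            their duplicate enzymes is instead filtered against `enzymes` and `ignoreDuplicatesOutsideSet`.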
Returns
-------
Set[Enzyme] or Dict[Element, Set[GeneID]]
If returnMatches == *False*, all enzymes in `enzymes` which fulfil the conditions of this gene duplication definition.
If returnMatches == *True*, all enzymes in `enzymes` which fulfil the conditions of this gene duplication definition, pointing to a set of gene IDs of the found homologs.
Raises
------
ValueError
If any organism does not exist.
HTTPError
If any gene does not exist.
URLError
If connection to KEGG fails.
"""
if isinstance(enzymes, SubstanceEnzymeGraph):
enzymes = enzymes.getEnzymes()
if preCalculatedEnzymes is not None:
if returnMatches is True:
result = dict()
else:
result = set()
# filter preCalculatedEnzymes by enzymes
for enzyme, duplicatesSet in preCalculatedEnzymes.items():
if enzyme in enzymes:
validDuplicates = set()
for duplicate in duplicatesSet:
if ignoreDuplicatesOutsideSet is None or duplicate.geneID in ignoreDuplicatesOutsideSet:
validDuplicates.add( duplicate.geneID )
if len(validDuplicates) > 0:
if returnMatches is True:
result[enzyme] = validDuplicates
else:
result.add( enzyme )
return result
possibleGeneDuplicates = dict()
geneIDs = {enzyme.geneID for enzyme in enzymes}
matchingsDict = Database.getParalogsBulk(geneIDs, eValue)
for enzyme in enzymes:
matching = matchingsDict.get(enzyme.geneID, None)
if matching is None:
print( 'WARNING: data for GeneID ' + str(enzyme.geneID) + ' could not be downloaded. Maybe you want to exclude this erroneous organism completely, before it skews statistics? See quirks.py')
continue
paralogs = matching.matches
if len(paralogs) > 0:
possibleGeneDuplicates[enzyme] = [match.foundGeneID for match in paralogs]
if ignoreDuplicatesOutsideSet is True:
ignoreDuplicatesOutsideSet = geneIDs
if ignoreDuplicatesOutsideSet is not None and len(ignoreDuplicatesOutsideSet) > 0:
filteredPossibleGeneDuplicates = dict()
for enzyme, matchGeneIDs in possibleGeneDuplicates.items():
# which genes have been found AND are in the set of relevant duplicates?
matchesInSearchedSet = ignoreDuplicatesOutsideSet.intersection( matchGeneIDs )
if len(matchesInSearchedSet) > 0: # some of the matches are in the set of relevant duplicates
filteredPossibleGeneDuplicates[enzyme] = matchesInSearchedSet
possibleGeneDuplicates = filteredPossibleGeneDuplicates
if returnMatches:
return possibleGeneDuplicates
else:
return set(possibleGeneDuplicates.keys())
@classmethod
def getEnzymePairs(cls, enzymes: 'Set[Enzyme] or SubstanceEnzymeGraph', eValue = defaultEValue, ignoreDuplicatesOutsideSet = None, geneIdToEnzyme = None, preCalculatedEnzymes = None) -> Set[Tuple[Enzyme, Enzyme]]:
"""
Get gene-duplicated enzymes, in pairs of duplicates.
If enzyme A is a duplicate of enzyme B and vice versa, this does not return duplicates, but returns only one pair, with the "smaller" enzyme as the first value. An enzyme is "smaller" if its gene ID string is "smaller".
Parameters
----------
enzymes : Set[Enzyme] or SubstanceEnzymeGraph
Set of enzymes to be checked for gene duplication, or a graph.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
ignoreDuplicatesOutsideSet : Set[GeneID] or *True*, optional
If *None*, report all found duplicates.
            If *True*, automatically restrict to the gene IDs of all enzymes in `enzymes`.
            If a set, count an enzyme as gene-duplicated only if at least one of its duplicates lies inside this set. Beware, the set has to contain the duplicates' gene IDs!
This can, for example, serve to exclude duplicates in secondary metabolism.
geneIdToEnzyme : Dict[GeneID, Enzyme], optional
Dictionary for mapping each gene ID of every found duplicate to an enzyme object.
If *None*, gets the enzyme from the database. This avoids the KeyError, but can cause a lot of network load.
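        preCalculatedEnzymes : Dict[Enzyme, Set[Enzyme]], optional
            If not *None*, do not query the database for paralogs, but instead filter this pre-calculated dictionary of enzymes, each pointing to a set of its duplicate enzymes. Passed through to :func:`getEnzymes`.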
Returns
-------
Set[Tuple[Enzyme, Enzyme]]
            Set of pairs of gene-duplicated enzymes, realised as tuples. The order of pairs within the set is arbitrary; within each tuple, the "smaller" enzyme comes first.
Raises
------
KeyError
If `geneIdToEnzyme` is passed, but does not contain the gene ID of every duplicate.
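        Examples
        --------
        A minimal sketch, assuming `someEnzymes` is a set of :class:`Enzyme` objects:
        >>> pairs = SimpleGeneDuplication.getEnzymePairs(someEnzymes, ignoreDuplicatesOutsideSet = True)
        >>> for enzymeA, enzymeB in pairs: # within each tuple, the "smaller" gene ID comes first
        ...     print(enzymeA, enzymeB)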
"""
duplicatedEnzymeMatches = cls.getEnzymes(enzymes, eValue, returnMatches = True, ignoreDuplicatesOutsideSet = ignoreDuplicatesOutsideSet, preCalculatedEnzymes = preCalculatedEnzymes)
        # expand matches of homologous gene IDs to pairs of duplicated enzymes, which we can do (without wasting further resources) only here, because only here do we have the geneID -> enzyme dict
duplicatedEnzymePairs = set()
if geneIdToEnzyme is None: # need to get enzyme objects from database
allGeneIDs = set()
for geneIDs in duplicatedEnzymeMatches.values():
allGeneIDs.update( geneIDs )
geneIdToEnzyme = dict()
for geneID, gene in Database.getGeneBulk(allGeneIDs).items():
enzyme = Enzyme.fromGene(gene)
geneIdToEnzyme[geneID] = enzyme
for enzymeA, geneIDs in duplicatedEnzymeMatches.items():
for geneID in geneIDs:
enzymeB = geneIdToEnzyme[geneID]
duplicatedEnzymePairs.add( (enzymeA, enzymeB) )
# filter symmetric duplicates
deduplicatedEnzymePairs = set()
for enzymeA, enzymeB in duplicatedEnzymePairs:
if enzymeA <= enzymeB:
deduplicatedEnzymePairs.add( (enzymeA, enzymeB) )
else:
deduplicatedEnzymePairs.add( (enzymeB, enzymeA) )
return deduplicatedEnzymePairs
@classmethod
def filterEnzymes(cls, substanceEnzymeGraph: SubstanceEnzymeGraph, eValue = defaultEValue, ignoreDuplicatesOutsideSet = None, preCalculatedEnzymes = None) -> SubstanceEnzymeGraph:
"""
Remove all enzymes from a graph which have not been gene-duplicated.
Parameters
----------
substanceEnzymeGraph : SubstanceEnzymeGraph
Graph of enzymes to be checked for gene duplication.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
ignoreDuplicatesOutsideSet : Set[GeneID] or *True*, optional
If *None*, report all found duplicates.
If *True*, automatically restrict to all enzymes in `substanceEnzymeGraph`.
            If a set, count an enzyme as gene-duplicated only if at least one of its duplicates lies inside this set. Beware, the set has to contain the duplicates' gene IDs!
This can, for example, serve to exclude duplicates in secondary metabolism.
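        preCalculatedEnzymes : Dict[Enzyme, Set[Enzyme]], optional
            If not *None*, do not query the database for paralogs, but instead filter this pre-calculated dictionary of enzymes, each pointing to a set of its duplicate enzymes. Passed through to :func:`getEnzymes`.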
Returns
-------
SubstanceEnzymeGraph
A copy of the `substanceEnzymeGraph` containing only enzymes which fulfil the conditions of this gene duplication definition.
Raises
------
ValueError
If any organism does not exist.
HTTPError
If any gene does not exist.
URLError
If connection to KEGG fails.
"""
graph = substanceEnzymeGraph.copy()
possibleGeneDuplicates = cls.getEnzymes(substanceEnzymeGraph, eValue, returnMatches = False, ignoreDuplicatesOutsideSet = ignoreDuplicatesOutsideSet, preCalculatedEnzymes = preCalculatedEnzymes)
graph.removeAllEnzymesExcept( possibleGeneDuplicates )
return graph
class ChevronGeneDuplication(GeneDuplication):
"""
    Evolutionary event of duplicating a gene, depending on a certain ancestral bond.
The conditions for a 'chevron' gene duplication are:
- The gene has at least one paralog.
- The gene has at least one ortholog in a pre-defined set of organisms.
"""
def __init__(self, possiblyOrthologousOrganisms: 'Iterable[Organism] or KEGG.Organism.Group'):
"""
Chevron gene duplication extends simple gene duplication by limiting the possibly duplicated genes via a set of possibly orthologous organisms.
In contrast to :class:`SimpleGeneDuplication`, this class has to be instantiated, using the aforementioned set of possibly orthologous organisms.
Parameters
----------
possiblyOrthologousOrganisms : Iterable[Organism] or Organism.Group
            Organisms which will be searched for the occurrence of orthologs, i.e. which are considered ancestral.
Attributes
----------
self.possiblyOrthologousOrganisms : Iterable[Organism]
Raises
------
ValueError
If `possiblyOrthologousOrganisms` is of wrong type.
Warnings
--------
This takes much longer than :class:`SimpleGeneDuplication`, because additionally, each found paralog is searched for an ortholog in all organisms of the other group.
However, if you set `returnMatches` == *False* and `ignoreDuplicatesOutsideSelf` == *False*, the search is aborted with the very first ortholog, which is much faster than getting all orthologs.
Because in this model even a single orthologous match is enough to prove gene duplication, we do not necessarily have to fully search all organisms.
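        Examples
        --------
        A minimal sketch; the organism abbreviations are hypothetical and assume that an :class:`Organism` can be constructed from a KEGG abbreviation:
        >>> possiblyAncestralOrganisms = [Organism('eco'), Organism('bsu')]
        >>> model = ChevronGeneDuplication(possiblyAncestralOrganisms)
        >>> duplicatedEnzymes = model.getEnzymes(someEnzymes)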
"""
if isinstance(possiblyOrthologousOrganisms, Group):
self.possiblyOrthologousOrganisms = possiblyOrthologousOrganisms.organisms
elif isinstance(possiblyOrthologousOrganisms, Iterable):
self.possiblyOrthologousOrganisms = possiblyOrthologousOrganisms
else:
            raise ValueError("'possiblyOrthologousOrganisms' must be of type Iterable or KEGG.Organism.Group")
def getEnzymes(self, enzymes: 'Set[Enzyme] or SubstanceEnzymeGraph', eValue = defaultEValue, returnMatches = False, ignoreDuplicatesOutsideSelf = False):
"""
Get gene-duplicated enzymes.
Parameters
----------
enzymes : Set[Enzyme] or SubstanceEnzymeGraph
Set of enzymes to be checked for gene duplication, or a graph.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
returnMatches : bool, optional
If *True*, return not only `enzymes` that have homologs, but also which homologs they have. Useful for filtering for relevant homologs afterwards.
ignoreDuplicatesOutsideSelf : bool, optional
            If *True*, count an enzyme as gene-duplicated only if at least one of its duplicates lies inside the set of enzymes searched for duplicates.
This can, for example, serve to exclude duplicates in secondary metabolism.
Returns
-------
Set[Enzyme] or Dict[Element, Set[GeneID]]
If returnMatches == *False*, all enzymes in `enzymes` which fulfil the conditions of this gene duplication definition.
If returnMatches == *True*, all enzymes in `enzymes` which fulfil the conditions of this gene duplication definition, pointing to a set of gene IDs of the found homologs.
Raises
------
ValueError
If any organism does not exist.
HTTPError
If any gene does not exist.
URLError
If connection to KEGG fails.
"""
# graph -> set
if isinstance(enzymes, SubstanceEnzymeGraph):
enzymes = enzymes.getEnzymes()
# get paralogs first
possibleGeneDuplicates = SimpleGeneDuplication.getEnzymes(enzymes, eValue, returnMatches = returnMatches or ignoreDuplicatesOutsideSelf, ignoreDuplicatesOutsideSet = {enzyme.geneID for enzyme in enzymes}) # always need returnMatches if ignoreDuplicatesOutsideSelf is True!
if returnMatches or ignoreDuplicatesOutsideSelf:
possibleGeneDuplicatesSet = possibleGeneDuplicates.keys()
else:
possibleGeneDuplicatesSet = possibleGeneDuplicates
if len( possibleGeneDuplicatesSet ) == 0: # nothing to do, because there are no paralogs
return possibleGeneDuplicates
duplicatedEnzymes = dict()
# get orthologs
geneIDs = {enzyme.geneID for enzyme in possibleGeneDuplicatesSet}
if returnMatches or ignoreDuplicatesOutsideSelf: # need to get all orthologs
matchingsDict = Database.getOrthologsBulk(geneIDs, self.possiblyOrthologousOrganisms, eValue) # GeneID -> List[ Matching ]
for enzyme in possibleGeneDuplicatesSet:
# add orthologs
orthologousMatchings = matchingsDict.get(enzyme.geneID)
if orthologousMatchings is not None:
if len(orthologousMatchings) > 0:
matches = []
for matching in orthologousMatchings:
matches.extend(matching.matches)
duplicatedEnzymes[enzyme] = {match.foundGeneID for match in matches}
# add paralogs
                paralogousGeneIDs = possibleGeneDuplicates.get(enzyme) # keyed by Enzyme objects, not by gene IDs
                if paralogousGeneIDs is not None:
                    if len(paralogousGeneIDs) > 0:
                        currentDuplicates = duplicatedEnzymes.setdefault(enzyme, set())
                        currentDuplicates.update(paralogousGeneIDs)
else: # only interesting IF there are orthologs
orthologousOrganismsDict = Database.hasOrthologsBulk(geneIDs, self.possiblyOrthologousOrganisms, eValue) # GeneID -> List[ organismAbbreviation ]
for enzyme in possibleGeneDuplicatesSet:
orthologousOrganisms = orthologousOrganismsDict.get(enzyme.geneID)
if orthologousOrganisms is None:
continue
if len(orthologousOrganisms) > 0:
duplicatedEnzymes[enzyme] = None #orthologousOrganisms
if ignoreDuplicatesOutsideSelf:
filteredDuplicatedEnzymes = dict()
for enzyme, matchGeneIDs in duplicatedEnzymes.items():
matchesInSearchedSet = geneIDs.intersection( matchGeneIDs )
if len(matchesInSearchedSet) > 0: # some of the matches are in the set of enzymes to be checked for duplicates
filteredDuplicatedEnzymes[enzyme] = matchesInSearchedSet
duplicatedEnzymes = filteredDuplicatedEnzymes
if returnMatches:
return duplicatedEnzymes
else:
return set(duplicatedEnzymes.keys())
def getEnzymePairs(self, enzymes: 'Set[Enzyme] or SubstanceEnzymeGraph', eValue = defaultEValue, ignoreDuplicatesOutsideSelf = False, geneIdToEnzyme = None) -> Set[Tuple[Enzyme, Enzyme]]:
"""
Get gene-duplicated enzymes, in pairs of duplicates.
If enzyme A is a duplicate of enzyme B and vice versa, this does not return duplicates, but returns only one pair, with the "smaller" enzyme as the first value. An enzyme is "smaller" if its gene ID string is "smaller".
Parameters
----------
enzymes : Set[Enzyme] or SubstanceEnzymeGraph
Set of enzymes to be checked for gene duplication, or a graph.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
ignoreDuplicatesOutsideSelf : bool, optional
            If *True*, count an enzyme as gene-duplicated only if at least one of its duplicates lies inside the set of enzymes searched for duplicates.
This can, for example, serve to exclude duplicates in secondary metabolism.
geneIdToEnzyme : Dict[GeneID, Enzyme], optional
Dictionary for mapping each gene ID of every found duplicate to an enzyme object.
If *None*, gets the enzyme from the database. This avoids the KeyError, but can cause a lot of network load.
Returns
-------
Set[Tuple[Enzyme, Enzyme]]
            Set of pairs of gene-duplicated enzymes, realised as tuples. The order of pairs within the set is arbitrary; within each tuple, the "smaller" enzyme comes first.
Raises
------
KeyError
If `geneIdToEnzyme` is passed, but does not contain the gene ID of every duplicate.
"""
duplicatedEnzymeMatches = self.getEnzymes(enzymes, eValue, returnMatches = True, ignoreDuplicatesOutsideSelf = ignoreDuplicatesOutsideSelf)
        # expand matches of homologous gene IDs to pairs of duplicated enzymes, which we can do (without wasting further resources) only here, because only here do we have the geneID -> enzyme dict
duplicatedEnzymePairs = set()
if geneIdToEnzyme is None: # need to get enzyme objects from database
allGeneIDs = set()
for geneIDs in duplicatedEnzymeMatches.values():
allGeneIDs.update( geneIDs )
geneIdToEnzyme = dict()
for geneID, gene in Database.getGeneBulk(allGeneIDs).items():
enzyme = Enzyme.fromGene(gene)
geneIdToEnzyme[geneID] = enzyme
for enzymeA, geneIDs in duplicatedEnzymeMatches.items():
for geneID in geneIDs:
enzymeB = geneIdToEnzyme[geneID]
duplicatedEnzymePairs.add( (enzymeA, enzymeB) )
# filter symmetric duplicates
deduplicatedEnzymePairs = set()
for enzymeA, enzymeB in duplicatedEnzymePairs:
if enzymeA <= enzymeB:
deduplicatedEnzymePairs.add( (enzymeA, enzymeB) )
else:
deduplicatedEnzymePairs.add( (enzymeB, enzymeA) )
return deduplicatedEnzymePairs
def filterEnzymes(self, substanceEnzymeGraph: SubstanceEnzymeGraph, eValue = defaultEValue, ignoreDuplicatesOutsideSelf = False) -> SubstanceEnzymeGraph:
"""
Remove all enzymes from a graph which have not been gene-duplicated.
Parameters
----------
substanceEnzymeGraph : SubstanceEnzymeGraph
Graph of enzymes to be checked for gene duplication.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
ignoreDuplicatesOutsideSelf : bool, optional
            If *True*, count an enzyme as gene-duplicated only if at least one of its duplicates lies inside the set of enzymes searched for duplicates.
This can, for example, serve to exclude duplicates in secondary metabolism.
Returns
-------
SubstanceEnzymeGraph
A copy of the `substanceEnzymeGraph` containing only enzymes which fulfil the conditions of this gene duplication definition.
Raises
------
ValueError
If any organism does not exist.
HTTPError
If any gene does not exist.
URLError
If connection to KEGG fails.
"""
graph = substanceEnzymeGraph.copy()
possibleGeneDuplicates = self.getEnzymes(substanceEnzymeGraph, eValue, returnMatches = False, ignoreDuplicatesOutsideSelf = ignoreDuplicatesOutsideSelf)
graph.removeAllEnzymesExcept( possibleGeneDuplicates )
return graph
class Neofunctionalisation():
def __init__(self, enzymeA: Enzyme, enzymeB: Enzyme):
"""
Evolutionary event of Neofunctionalisation between a pair of enzymes.
The conditions for a neofunctionalisation are:
- The enzyme's gene has been duplicated, according to a certain class of GeneDuplication.
- The duplicated enzyme is associated with a different EC number than its duplicate.
        The order of the two enzymes has no meaning; it has been arbitrarily chosen to reflect the lexicographic order of their associated EC numbers. The enzyme possessing the "smallest" EC number comes first.
        This absolute ordering prevents duplicate events: without it, there would always be a second event with the exact same enzymes, but in swapped positions, because a neofunctionalisation has no direction here and is, thus, symmetric.
Parameters
----------
enzymeA : Enzyme
An enzyme, which is a gene duplicate of `enzymeB`. The order is arbitrary.
enzymeB : Enzyme
An enzyme, which is a gene duplicate of `enzymeA`. The order is arbitrary.
Attributes
----------
self.enzymePair : Tuple[Enzyme, Enzyme]
Tuple of the two enzymes, sorted by the lexicographic order of their "smallest" EC number.
Raises
------
ValueError
If the enzymes are equal, have the same set of EC numbers, or one has no EC number.
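        Examples
        --------
        A minimal sketch, assuming `enzymeA` and `enzymeB` are gene-duplicated :class:`Enzyme` objects with differing EC numbers:
        >>> neofunctionalisation = Neofunctionalisation(enzymeA, enzymeB)
        >>> neofunctionalisation.getEnzymes() # ordered by the lexicographically smallest associated EC number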
"""
if enzymeA == enzymeB:
raise ValueError('The enzymes must be unequal!')
ecNumbersAset = enzymeA.ecNumbers
ecNumbersBset = enzymeB.ecNumbers
if ecNumbersAset == ecNumbersBset:
raise ValueError('The enzymes must have differing EC numbers to be NEOfunctionalised!')
if len(ecNumbersAset) == 0 or len(ecNumbersBset) == 0:
raise ValueError('The enzymes have to be associated with at least one EC number!')
ecNumbersA = list(ecNumbersAset)
ecNumbersA.sort()
ecNumbersB = list(ecNumbersBset)
ecNumbersB.sort()
if len(ecNumbersA) <= len(ecNumbersB):
smallerSet = ecNumbersA
biggerSet = ecNumbersB
isAsmaller = True
else:
smallerSet = ecNumbersB
biggerSet = ecNumbersA
isAsmaller = False
        for index, ec1 in enumerate(smallerSet):
            ec2 = biggerSet[index]
            if ec1 == ec2:
                if index == len(smallerSet) - 1: # end of smaller set reached, the shorter list sorts first
                    if isAsmaller:
                        self.enzymePair = (enzymeA, enzymeB)
                    else:
                        self.enzymePair = (enzymeB, enzymeA)
                else: # not yet at the end, compare next indices
                    continue
            elif ec1 < ec2:
                if isAsmaller:
                    self.enzymePair = (enzymeA, enzymeB)
                else:
                    self.enzymePair = (enzymeB, enzymeA)
                break # the first differing EC number decides the order
            else:
                if isAsmaller:
                    self.enzymePair = (enzymeB, enzymeA)
                else:
                    self.enzymePair = (enzymeA, enzymeB)
                break # the first differing EC number decides the order
def getEnzymes(self) -> Tuple[Enzyme, Enzyme]:
"""
Get the pair of enzymes.
Returns
-------
Tuple[Enzyme, Enzyme]
"""
return self.enzymePair
def getEcNumbers(self) -> Tuple[Set[EcNumber], Set[EcNumber]]:
"""
Get the enzymes' EC numbers.
Returns
-------
Tuple[Set[EcNumber], Set[EcNumber]]
Same order as in :func:`getEnzymes`.
Because an enzyme could have multiple EC numbers, they are given as sets.
"""
return (self.enzymePair[0].ecNumbers, self.enzymePair[1].ecNumbers)
def getDifferingEcLevels(self) -> int:
"""
Get the maximum number of EC levels in which the enzymes' EC numbers differ.
Returns
-------
int
Number of differing EC levels between the two enzymes' EC numbers, starting with the substrate-level.
If an enzyme has multiple EC numbers, returns the biggest difference.
            For example, comparing 1.2.3.4 and 1.2.3.7 returns 1, while comparing 1.2.3.4 and 1.8.9.10 returns 3.
            However, wildcards do **not** match numbers: comparing 1.2.3.4 and 1.2.3.- returns 1!
"""
biggestDifference = 0
for ecA in self.getEcNumbers()[0]:
for ecB in self.getEcNumbers()[1]:
difference = 4 - ecA.matchingLevels(ecB, wildcardMatchesNumber = False)
if difference > biggestDifference:
biggestDifference = difference
return biggestDifference
def isSameEcReaction(self) -> bool:
"""
        Whether the enzymes' EC numbers merely describe a different substrate of the same reaction, rather than a different reaction.
Returns
-------
bool
*True*, if the biggest difference in EC levels of the two enzymes is still on the fourth (substrate) level.
"""
        return self.getDifferingEcLevels() <= 1
def toHtml(self, short = False):
"""
Get the string representation as an HTML line.
"""
enzymePair = self.enzymePair
ecPair = self.getEcNumbers()
return '<td>' + enzymePair[0].toHtml() + '<td>' + ', '.join([ec.toHtml(short) for ec in sorted(ecPair[0])]) + '</td></td><td><-></td><td>'+ enzymePair[1].toHtml() + '<td>' + ', '.join([ec.toHtml(short) for ec in sorted(ecPair[1])]) + '</td></td>'
def __str__(self):
enzymePair = self.enzymePair
ecPair = self.getEcNumbers()
return '(' + str(enzymePair[0]) + ' [' + ', '.join([str(ec) for ec in sorted(ecPair[0])]) + '],\t'+ str(enzymePair[1]) + ' [' + ', '.join([str(ec) for ec in sorted(ecPair[1])]) + '])'
def __repr__(self):
return self.__str__()
def __eq__(self, other):
if isinstance(self, other.__class__):
return self.enzymePair == other.enzymePair
return False
def __ne__(self, other):
return not self == other
def __hash__(self):
return self.enzymePair.__hash__()
def __lt__(self, other):
# sort by EC number first
selfEnzyme1 = self.enzymePair[0]
selfEnzyme2 = self.enzymePair[1]
        selfEnzyme1EcList = sorted(selfEnzyme1.ecNumbers) # sorted for a deterministic comparison
        selfEnzyme2EcList = sorted(selfEnzyme2.ecNumbers)
        otherEnzyme1 = other.enzymePair[0]
        otherEnzyme2 = other.enzymePair[1]
        otherEnzyme1EcList = sorted(otherEnzyme1.ecNumbers)
        otherEnzyme2EcList = sorted(otherEnzyme2.ecNumbers)
if selfEnzyme1EcList == otherEnzyme1EcList:
if selfEnzyme2EcList == otherEnzyme2EcList:
# then by gene ID
if selfEnzyme1.uniqueID == otherEnzyme1.uniqueID:
return selfEnzyme2.uniqueID < otherEnzyme2.uniqueID
else:
return selfEnzyme1.uniqueID < otherEnzyme1.uniqueID
else:
return selfEnzyme2EcList < otherEnzyme2EcList
else:
return selfEnzyme1EcList < otherEnzyme1EcList
def __gt__(self, other):
# sort by EC number first
selfEnzyme1 = self.enzymePair[0]
selfEnzyme2 = self.enzymePair[1]
        selfEnzyme1EcList = sorted(selfEnzyme1.ecNumbers) # sorted for a deterministic comparison
        selfEnzyme2EcList = sorted(selfEnzyme2.ecNumbers)
        otherEnzyme1 = other.enzymePair[0]
        otherEnzyme2 = other.enzymePair[1]
        otherEnzyme1EcList = sorted(otherEnzyme1.ecNumbers)
        otherEnzyme2EcList = sorted(otherEnzyme2.ecNumbers)
if selfEnzyme1EcList == otherEnzyme1EcList:
if selfEnzyme2EcList == otherEnzyme2EcList:
# then by gene ID
if selfEnzyme1.uniqueID == otherEnzyme1.uniqueID:
return selfEnzyme2.uniqueID > otherEnzyme2.uniqueID
else:
return selfEnzyme1.uniqueID > otherEnzyme1.uniqueID
else:
return selfEnzyme2EcList > otherEnzyme2EcList
else:
return selfEnzyme1EcList > otherEnzyme1EcList
def __le__(self, other):
# sort by EC number first
selfEnzyme1 = self.enzymePair[0]
selfEnzyme2 = self.enzymePair[1]
        selfEnzyme1EcList = sorted(selfEnzyme1.ecNumbers) # sorted for a deterministic comparison
        selfEnzyme2EcList = sorted(selfEnzyme2.ecNumbers)
        otherEnzyme1 = other.enzymePair[0]
        otherEnzyme2 = other.enzymePair[1]
        otherEnzyme1EcList = sorted(otherEnzyme1.ecNumbers)
        otherEnzyme2EcList = sorted(otherEnzyme2.ecNumbers)
if selfEnzyme1EcList == otherEnzyme1EcList:
if selfEnzyme2EcList == otherEnzyme2EcList:
# then by gene ID
if selfEnzyme1.uniqueID == otherEnzyme1.uniqueID:
return selfEnzyme2.uniqueID <= otherEnzyme2.uniqueID
else:
return selfEnzyme1.uniqueID <= otherEnzyme1.uniqueID
else:
return selfEnzyme2EcList <= otherEnzyme2EcList
else:
return selfEnzyme1EcList <= otherEnzyme1EcList
def __ge__(self, other):
# sort by EC number first
selfEnzyme1 = self.enzymePair[0]
selfEnzyme2 = self.enzymePair[1]
        selfEnzyme1EcList = sorted(selfEnzyme1.ecNumbers) # sorted for a deterministic comparison
        selfEnzyme2EcList = sorted(selfEnzyme2.ecNumbers)
        otherEnzyme1 = other.enzymePair[0]
        otherEnzyme2 = other.enzymePair[1]
        otherEnzyme1EcList = sorted(otherEnzyme1.ecNumbers)
        otherEnzyme2EcList = sorted(otherEnzyme2.ecNumbers)
if selfEnzyme1EcList == otherEnzyme1EcList:
if selfEnzyme2EcList == otherEnzyme2EcList:
# then by gene ID
if selfEnzyme1.uniqueID == otherEnzyme1.uniqueID:
return selfEnzyme2.uniqueID >= otherEnzyme2.uniqueID
else:
return selfEnzyme1.uniqueID >= otherEnzyme1.uniqueID
else:
return selfEnzyme2EcList >= otherEnzyme2EcList
else:
return selfEnzyme1EcList >= otherEnzyme1EcList
class NeofunctionalisedEnzymes():
def __init__(self, enzymes: Set[Enzyme], geneDuplicationModel: GeneDuplication, eValue = defaultEValue, ignoreDuplicatesOutsideSet: bool = True):
"""
Neofunctionalisation events among certain `enzymes`.
Parameters
----------
enzymes : Set[Enzyme]
Enzymes among which to test for neofunctionalisation. Neofunctionalisations involving enzymes outside this set are **not** reported.
geneDuplicationModel : GeneDuplication
The model of gene duplication to use.
eValue : float, optional
Threshold for the statistical expectation value (E-value), below which a sequence alignment is considered significant.
ignoreDuplicatesOutsideSet : bool, optional
If *True*, any neofunctionalisation involving an enzyme outside the `enzymes` set is not reported.
This helps to exclude secondary metabolism when examining core metabolism.
Raises
------
ValueError
If a gene duplication model is used which requires instantiation, but only its class was given.
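        Examples
        --------
        A minimal sketch, assuming `someEnzymes` is a set of :class:`Enzyme` objects of a core metabolism:
        >>> neofunctionalisedEnzymes = NeofunctionalisedEnzymes(someEnzymes, SimpleGeneDuplication)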
"""
if geneDuplicationModel == ChevronGeneDuplication:
raise ValueError("Chevron gene duplication model requires you to instantiate an object, parametrised with the set of possibly orthologous organisms.")
elif geneDuplicationModel == SimpleGroupGeneDuplication:
raise ValueError("Simple group gene duplication model requires you to instantiate an object, parametrised with the set of organisms belonging to the same group.")
self._geneDuplicationModel = geneDuplicationModel
self._neofunctionalisations = set()
# get possibly gene-duplicated enzymes, pointing to their duplicates
geneIdToEnzyme = dict()
for enzyme in enzymes:
geneIdToEnzyme[enzyme.geneID] = enzyme
if ignoreDuplicatesOutsideSet is True: # restrict allowed gene duplicates to the set of passed enzymes. This helps to avoid parts of the metabolism outside the core metabolism
if isinstance(geneDuplicationModel, ChevronGeneDuplication):
duplicatedEnzymePairs = geneDuplicationModel.getEnzymePairs(enzymes, eValue, ignoreDuplicatesOutsideSelf = ignoreDuplicatesOutsideSet, geneIdToEnzyme = geneIdToEnzyme)
else:
duplicatedEnzymePairs = geneDuplicationModel.getEnzymePairs(enzymes, eValue, ignoreDuplicatesOutsideSet = set(geneIdToEnzyme.keys()), geneIdToEnzyme = geneIdToEnzyme)
else: # you want to check for neofunctionalisation outside the passed enzymes (usually core metabolism)
duplicatedEnzymePairs = geneDuplicationModel.getEnzymePairs(enzymes, eValue)
# in all pairs of duplicated enzymes, find neofunctionalised ones
for enzymePair in duplicatedEnzymePairs:
try:
neofunctionalisation = Neofunctionalisation(enzymePair[0], enzymePair[1])
self._neofunctionalisations.add( neofunctionalisation )
except ValueError: # obviously not neofunctionalised
pass # ignore
def getNeofunctionalisations(self, minimumEcDifference: int = None) -> Set[Neofunctionalisation]:
"""
Get neofunctionalisation events between two enzymes each.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
Returns
-------
Set[Neofunctionalisation]
Set of possible neofunctionalisation events.
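        Examples
        --------
        A minimal sketch, continuing from a :class:`NeofunctionalisedEnzymes` instance:
        >>> allEvents = neofunctionalisedEnzymes.getNeofunctionalisations()
        >>> reactionChangingEvents = neofunctionalisedEnzymes.getNeofunctionalisations(minimumEcDifference = 2)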
"""
if minimumEcDifference is not None and minimumEcDifference > 1:
filteredNeofunctionalisations = set()
for neofunctionalisation in self._neofunctionalisations:
if neofunctionalisation.getDifferingEcLevels() >= minimumEcDifference:
filteredNeofunctionalisations.add( neofunctionalisation )
return filteredNeofunctionalisations
else:
return self._neofunctionalisations.copy()
def getEnzymes(self, minimumEcDifference: int = None) -> Set[Enzyme]:
"""
Get all neofunctionalised enzymes.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
Returns
-------
Set[Enzyme]
Set of all possibly neofunctionalised enzymes, regardless of the real direction of neofunctionalisation, which we can not determine here.
"""
neofunctionalisedEnzymes = set()
for neofunctionalisation in self.getNeofunctionalisations(minimumEcDifference):
neofunctionalisedEnzymes.update( neofunctionalisation.getEnzymes() )
return neofunctionalisedEnzymes
def filterGraph(self, enzymeGraph: SubstanceEnzymeGraph, minimumEcDifference: int = None) -> SubstanceEnzymeGraph:
"""
Filter enzyme graph to only contain neofunctionalised enzymes.
Parameters
----------
enzymeGraph : SubstanceEnzymeGraph
The enzyme graph to filter.
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
Returns
-------
SubstanceEnzymeGraph
A copy of `enzymeGraph`, leaving only edges with a neofunctionalised enzyme as key.
"""
graph = enzymeGraph.copy()
neofunctionalisedEnzymes = self.getEnzymes(minimumEcDifference)
graph.removeAllEnzymesExcept( neofunctionalisedEnzymes )
return graph
def colourGraph(self, enzymeGraph: SubstanceEnzymeGraph, colour: Export.Colour = Export.Colour.GREEN, minimumEcDifference: int = None) -> SubstanceEnzymeGraph:
"""
Colour enzyme graph's neofunctionalised enzyme edges.
Parameters
----------
enzymeGraph : SubstanceEnzymeGraph
The enzyme graph to colour.
colour : Export.Colour, optional
The colour to use for edges with neofunctionalised enzymes as key.
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
Returns
-------
SubstanceEnzymeGraph
A copy of `enzymeGraph` in which edges with neofunctionalised enzymes as key have an additional colour attribute, see :func:`FEV_KEGG.Drawing.Export.addColourAttribute`.
"""
graph = enzymeGraph.copy()
neofunctionalisedEnzymes = self.getEnzymes(minimumEcDifference)
Export.addColourAttribute(graph, colour, nodes = False, edges = neofunctionalisedEnzymes)
return graph
class FunctionChange():
def __init__(self, ecA: EcNumber, ecB: EcNumber):
"""
Possible evolutionary change of enzymatic function, from one EC number to another.
        The direction of change, or whether it really happened, cannot be determined here!
        The order of EC numbers in this object is arbitrarily chosen to reflect their lexicographic order.
        A function change represents the possibility that the first EC number has evolutionarily changed into the second one, or the other way around, since the direction of evolution cannot be determined here.
        A function change can never contain the same EC number twice, nor can the first be lexicographically "bigger" than the second.
Parameters
----------
ecA: EcNumber
ecB: EcNumber
Must be lexicographically "bigger" than `ecA`. Can not be equal to `ecA`.
Raises
------
ValueError
If `ecA` is lexicographically "bigger" than, or equal to, `ecB`.
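        Examples
        --------
        A minimal sketch, assuming an :class:`EcNumber` can be constructed from a dotted EC string:
        >>> change = FunctionChange(EcNumber('1.2.3.4'), EcNumber('1.2.5.6'))
        >>> change.getDifferingEcLevels()
        2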
"""
if ecA >= ecB:
            raise ValueError("'ecA' must be lexicographically smaller than 'ecB', and they must not be equal.")
self.ecA = ecA
self.ecB = ecB
self.ecPair = (ecA, ecB)
@classmethod
def fromNeofunctionalisation(cls, neofunctionalisation: Neofunctionalisation) -> Set['FunctionChange']:
"""
Create combinations of function changes from a `neofunctionalisation`.
Parameters
----------
neofunctionalisation : Neofunctionalisation
Returns
-------
Set[FunctionChange]
Set of function changes which might have been caused by the `neofunctionalisation`.
Since an enzyme of a neofunctionalisation can have multiple EC numbers, all combinations of the two enzymes' EC numbers are formed and treated as separate possible function changes.
Examples
--------
A: 1
B: 2
= (1, 2)
A: 1
B: 1, 2
= (1, 2)
A: 1, 2
B: 1, 3
= (1, 3) (1, 2) (2, 3)
A: 1, 2
B: 3, 4
= (1, 3) (1, 4) (2, 3) (2, 4)
A: 1, 2
B: 1, 2, 3
= (1, 3) (2, 3)
"""
ecNumbersA, ecNumbersB = neofunctionalisation.getEcNumbers()
# cross product
questionableEcPairs = itertools.product(ecNumbersA, ecNumbersB)
# filter illegal products
functionChanges = set()
for pair in questionableEcPairs:
try:
functionChanges.add(cls(pair[0], pair[1]))
except ValueError:
pass
return functionChanges
def getDifferingEcLevels(self) -> int:
"""
Get the number of EC levels in which the EC numbers differ.
Returns
-------
int
Number of differing EC levels between the two EC numbers, starting with the substrate-level.
            For example, comparing 1.2.3.4 and 1.2.3.7 returns 1, while comparing 1.2.3.4 and 1.8.9.10 returns 3.
            However, wildcards do **not** match numbers: comparing 1.2.3.4 and 1.2.3.- returns 1!
"""
return 4 - self.ecA.matchingLevels(self.ecB, wildcardMatchesNumber = False)
def toHtml(self, short = False):
"""
Get the string representation as an HTML line.
"""
return '<td>' + self.ecPair[0].toHtml(short) + '</td><td><-></td><td>' + self.ecPair[1].toHtml(short) + '</td>'
def __str__(self):
return self.ecPair.__str__()
def __repr__(self):
return self.__str__()
def __eq__(self, other):
if isinstance(self, other.__class__):
return self.ecPair == other.ecPair
return False
def __ne__(self, other):
return not self == other
def __hash__(self):
return self.ecPair.__hash__()
def __lt__(self, other):
return self.ecPair < other.ecPair
def __gt__(self, other):
return self.ecPair > other.ecPair
def __le__(self, other):
return self.ecPair <= other.ecPair
def __ge__(self, other):
return self.ecPair >= other.ecPair
class NeofunctionalisedECs():
def __init__(self, neofunctionalisedEnzymes: NeofunctionalisedEnzymes):
"""
EC numbers which are affected by neofunctionalisation events.
Parameters
----------
neofunctionalisedEnzymes : NeofunctionalisedEnzymes
Neofunctionalisation events among certain enzymes.
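        Examples
        --------
        A minimal sketch, building on a :class:`NeofunctionalisedEnzymes` instance:
        >>> neofunctionalisedECs = NeofunctionalisedECs(neofunctionalisedEnzymes)
        >>> robustECs = neofunctionalisedECs.getECs(minimumEcDifference = 2, minimumOrganismsCount = 4)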
"""
self._neofunctionalisedEnzymes = neofunctionalisedEnzymes
def getNeofunctionalisationsForFunctionChange(self, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> Dict[FunctionChange, Set[Neofunctionalisation]]:
"""
        Get neofunctionalisation events, keyed by a change of function between the two enzymes.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each function change.
If *None*, there is no filtering due to organism involvement.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc). If `minimumOrganismsCount` <= 3, the function change 1->2 is therefore returned.
Returns
-------
Dict[FunctionChange, Set[Neofunctionalisation]]
Dictionary of function changes, pointing to a set of neofunctionalisations which might have caused them.
Since an enzyme of a neofunctionalisation can have multiple EC numbers, all combinations of the two enzymes' EC numbers are formed and treated as separate possible function changes.
The neofunctionalisation is then saved again for each function change, which obviously leads to duplicated neofunctionalisation objects.
Examples
--------
A: 1
B: 2
= (1, 2)
A: 1
B: 1, 2
= (1, 2)
A: 1, 2
B: 1, 3
= (1, 3) (1, 2) (2, 3)
A: 1, 2
B: 3, 4
= (1, 3) (1, 4) (2, 3) (2, 4)
A: 1, 2
B: 1, 2, 3
= (1, 3) (2, 3)
"""
neofunctionalisationForFunctionChange = dict()
# get neofunctionalisations
for neofunctionalisation in self._neofunctionalisedEnzymes.getNeofunctionalisations(minimumEcDifference):
# split sets of EC numbers into pair-wise combinations
functionChanges = FunctionChange.fromNeofunctionalisation(neofunctionalisation)
for functionChange in functionChanges:
                currentSet = neofunctionalisationForFunctionChange.setdefault(functionChange, set())
currentSet.add(neofunctionalisation)
# filter function changes with neofunctionalised enzymes which stem from too few organisms
if minimumOrganismsCount is not None:
keysToDelete = []
for functionChange, neofunctionalisations in neofunctionalisationForFunctionChange.items():
enzymes = set()
for neofunctionalisation in neofunctionalisations:
enzymes.update( neofunctionalisation.getEnzymes() )
organisms = set()
for enzyme in enzymes:
organisms.add( enzyme.organismAbbreviation )
                # enough occurring organisms?
                if len(organisms) < minimumOrganismsCount:
keysToDelete.append( functionChange )
# delete keys which failed the test
for key in keysToDelete:
del neofunctionalisationForFunctionChange[key]
return neofunctionalisationForFunctionChange
def getFunctionChanges(self, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> Set[FunctionChange]:
"""
Get all possible changes of function between the two enzymes of every neofunctionalisation.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each function change.
If *None*, there is no filtering due to organism involvement.
Returns
-------
Set[FunctionChange]
Set of all function changes, which meet the criteria.
"""
return set( self.getNeofunctionalisationsForFunctionChange(minimumEcDifference, minimumOrganismsCount).keys() )
def getEnzymesForFunctionChange(self, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> Dict[FunctionChange, Set[Enzyme]]:
"""
Get enzymes of neofunctionalisations, keyed by a possible change of function.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each function change.
If *None*, there is no filtering due to organism involvement.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc). If `minimumOrganismsCount` <= 3, the function change 1->2 is therefore returned.
Returns
-------
Dict[FunctionChange, Set[Enzyme]]
Dictionary of function changes, pointing to a set of enzymes involved in the neofunctionalisations which might have caused the function change.
This can lead to many duplicated enzymes.
"""
enzymesForFunctionChange = dict()
for functionChange, neofunctionalisations in self.getNeofunctionalisationsForFunctionChange(minimumEcDifference, minimumOrganismsCount).items():
            currentSet = enzymesForFunctionChange.setdefault(functionChange, set())
for neofunctionalisation in neofunctionalisations:
currentSet.update( neofunctionalisation.getEnzymes() )
return enzymesForFunctionChange
def getNeofunctionalisationsForEC(self, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> Dict[EcNumber, Set[Neofunctionalisation]]:
"""
Get neofunctionalisation events, keyed by an EC number participating in the change of function between the two enzymes.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each EC number.
If *None*, there is no filtering due to organism involvement.
            This sums the occurrences of organisms across all function changes an EC number overlaps with. Hence, it is much less likely that a neofunctionalisation is filtered, compared to filtering per function change.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc).
            Also, the function change 1->3 involves two organisms ('eco:53235'->'iuf:34587'). If `minimumOrganismsCount` == 4, neither 1->2 nor 1->3 is reported.
            However, if we look at single EC numbers, 1 is involved in function changes affecting four organisms (eco, obc, abc, iuf). Thus, 1 would be reported here, but neither 2 nor 3.
Returns
-------
Dict[EcNumber, Set[Neofunctionalisation]]
Dictionary of EC numbers which are part of function changes, pointing to a set of neofunctionalisations which might have caused them.
Very likely has duplicated neofunctionalisations, because there are always at least two EC numbers involved in a neofunctionalisation.
"""
neofunctionalisationForEC = dict()
# get neofunctionalisations
for neofunctionalisation in self._neofunctionalisedEnzymes.getNeofunctionalisations(minimumEcDifference):
# split sets of EC numbers into pair-wise combinations
functionChanges = FunctionChange.fromNeofunctionalisation(neofunctionalisation)
for functionChange in functionChanges:
# for each EC number of a function change, save neofunctionalisation
for ec in functionChange.ecPair:
                    currentSet = neofunctionalisationForEC.setdefault(ec, set())
currentSet.add(neofunctionalisation)
# filter ECs with neofunctionalised enzymes which stem from too few organisms
if minimumOrganismsCount is not None:
keysToDelete = []
for ec, neofunctionalisations in neofunctionalisationForEC.items():
enzymes = set()
for neofunctionalisation in neofunctionalisations:
enzymes.update( neofunctionalisation.getEnzymes() )
organisms = set()
for enzyme in enzymes:
organisms.add( enzyme.organismAbbreviation )
                # enough occurring organisms?
                if len(organisms) < minimumOrganismsCount:
keysToDelete.append( ec )
# delete keys which failed the test
for key in keysToDelete:
del neofunctionalisationForEC[key]
return neofunctionalisationForEC
def getECs(self, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> Set[EcNumber]:
"""
Get EC numbers participating in the change of function due to neofunctionalisations.
They could also be called "neofunctionalised" EC numbers.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each EC number.
If *None*, there is no filtering due to organism involvement.
            This sums the occurrences of organisms across all function changes an EC number overlaps with. Hence, it is much less likely that a neofunctionalisation is filtered, compared to filtering per function change.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc).
            Also, the function change 1->3 involves two organisms ('eco:53235'->'iuf:34587'). If `minimumOrganismsCount` == 4, neither 1->2 nor 1->3 is reported.
            However, if we look at single EC numbers, 1 is involved in function changes affecting four organisms (eco, obc, abc, iuf). Thus, 1 would be reported here, but neither 2 nor 3.
Returns
-------
Set[EcNumber]
Set of EC numbers which are part of function changes which possibly happened due to neofunctionalisations.
"""
return set( self.getNeofunctionalisationsForEC(minimumEcDifference, minimumOrganismsCount).keys() )
def getEnzymesForEC(self, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> Dict[EcNumber, Set[Enzyme]]:
"""
Get enzymes of neofunctionalisations, keyed by an EC number of a possible function change.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each EC number.
If *None*, there is no filtering due to organism involvement.
            This sums the occurrences of organisms across all function changes an EC number overlaps with. Hence, it is much less likely that a neofunctionalisation is filtered, compared to filtering per function change.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc).
            Also, the function change 1->3 involves two organisms ('eco:53235'->'iuf:34587'). If `minimumOrganismsCount` == 4, neither 1->2 nor 1->3 is reported.
            However, if we look at single EC numbers, 1 is involved in function changes affecting four organisms (eco, obc, abc, iuf). Thus, 1 would be reported here, but neither 2 nor 3.
Returns
-------
Dict[EcNumber, Set[Enzyme]]
Dictionary of EC numbers, pointing to a set of enzymes involved in the neofunctionalisations which might have caused the function changes the EC number is part of.
This can lead to many duplicated enzymes.
"""
enzymesForEC = dict()
for ec, neofunctionalisations in self.getNeofunctionalisationsForEC(minimumEcDifference, minimumOrganismsCount).items():
            currentSet = enzymesForEC.setdefault(ec, set())
for neofunctionalisation in neofunctionalisations:
currentSet.update( neofunctionalisation.getEnzymes() )
return enzymesForEC
def filterGraph(self, ecGraph: SubstanceEcGraph, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> SubstanceEcGraph:
"""
Filter EC graph to only contain "neofunctionalised" EC numbers.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each EC number.
If *None*, there is no filtering due to organism involvement.
            This sums the occurrences of organisms across all function changes an EC number overlaps with. Hence, it is much less likely that a neofunctionalisation is filtered, compared to filtering per function change.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc).
            Also, the function change 1->3 involves two organisms ('eco:53235'->'iuf:34587'). If `minimumOrganismsCount` == 4, neither 1->2 nor 1->3 is reported.
            However, if we look at single EC numbers, 1 is involved in function changes affecting four organisms (eco, obc, abc, iuf). Thus, 1 would be reported here, but neither 2 nor 3.
Returns
-------
SubstanceEcGraph
A copy of `ecGraph`, leaving only edges with a "neofunctionalised" EC as key.
"""
graph = ecGraph.copy()
neofunctionalisedECs = self.getECs(minimumEcDifference, minimumOrganismsCount)
graph.removeAllECsExcept( neofunctionalisedECs )
return graph
def colourGraph(self, ecGraph: SubstanceEcGraph, colour: Export.Colour = Export.Colour.GREEN, minimumEcDifference: int = None, minimumOrganismsCount: int = None) -> SubstanceEcGraph:
"""
Colour EC graph's "neofunctionalised" EC number edges.
Parameters
----------
minimumEcDifference : int, optional
May only be one of [1, 2, 3, 4].
If *None* or *1*, all neofunctionalisations are returned.
            If > *1*, return only neofunctionalisations in which the EC numbers differ in at least the `minimumEcDifference` lowest levels.
They then describe a different reaction, instead of only a different substrate.
For example, `minimumEcDifference` == *2* means that 1.2.3.4/1.2.3.5 is not reported, while 1.2.3.4/1.2.5.6 is.
minimumOrganismsCount : int, optional
Minimum number of organisms which have to be involved in the neofunctionalisations of each EC number.
If *None*, there is no filtering due to organism involvement.
            This sums the occurrences of organisms across all function changes an EC number overlaps with. Hence, it is much less likely that a neofunctionalisation is filtered, compared to filtering per function change.
            For example, suppose the function change 1->2 is associated with the two neofunctionalisations 'eco:12345'->'eco:69875' and 'obc:76535'->'abc:41356'; this involves three organisms in total (eco, obc, abc).
            Also, the function change 1->3 involves two organisms ('eco:53235'->'iuf:34587'). If `minimumOrganismsCount` == 4, neither 1->2 nor 1->3 is reported.
            However, if we look at single EC numbers, 1 is involved in function changes affecting four organisms (eco, obc, abc, iuf). Thus, 1 would be reported here, but neither 2 nor 3.
Returns
-------
SubstanceEcGraph
A copy of `ecGraph` in which edges with "neofunctionalised" ECs as key have an additional colour attribute, see :func:`FEV_KEGG.Drawing.Export.addColourAttribute`.
"""
graph = ecGraph.copy()
neofunctionalisedECs = self.getECs(minimumEcDifference, minimumOrganismsCount)
Export.addColourAttribute(graph, colour, nodes = False, edges = neofunctionalisedECs)
return graph | PypiClean |
/Finance-Python-0.9.10.tar.gz/Finance-Python-0.9.10/PyFin/Analysis/__init__.py | u"""
Created on 2015-8-8
@author: cheng.li
"""
from PyFin.Analysis.DataProviders import DataProvider
from PyFin.Analysis.SecurityValueHolders import SecurityShiftedValueHolder
from PyFin.Analysis.SecurityValueHolders import SecurityDeltaValueHolder
from PyFin.Analysis.SecurityValueHolders import SecurityIIFValueHolder
from PyFin.Analysis.SecurityValueHolders import SecurityConstArrayValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSRankedSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSTopNSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSBottomNSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSAverageSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSAverageAdjustedSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSZScoreSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSFillNASecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSPercentileSecurityValueHolder
from PyFin.Analysis.CrossSectionValueHolders import CSResidueSecurityValueHolder
from PyFin.Analysis.SecurityValueHolders import SecurityCurrentValueHolder
from PyFin.Analysis.SecurityValueHolders import SecurityLatestValueHolder
from PyFin.Analysis import TechnicalAnalysis
from PyFin.Analysis.transformer import transform
__all__ = ['DataProvider',
'SecurityShiftedValueHolder',
'SecurityDeltaValueHolder',
'SecurityIIFValueHolder',
'SecurityConstArrayValueHolder',
'CSRankedSecurityValueHolder',
'CSTopNSecurityValueHolder',
'CSBottomNSecurityValueHolder',
'CSAverageSecurityValueHolder',
'CSAverageAdjustedSecurityValueHolder',
'CSZScoreSecurityValueHolder',
'CSFillNASecurityValueHolder',
'CSPercentileSecurityValueHolder',
'CSResidueSecurityValueHolder',
'SecurityCurrentValueHolder',
'SecurityLatestValueHolder',
'TechnicalAnalysis',
'transform'] | PypiClean |
/OBP_security_pillar_2-0.0.4.tar.gz/OBP_security_pillar_2-0.0.4/OBP_security_pillar_2/ec2/vpc_flow_logs_enabled.py | import logging
from botocore.exceptions import ClientError
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
# Ensure VPC flow logging is enabled in all VPCs (Scored)
def vpc_logging_enabled(self) -> dict:
logger.info(" ---Inside vpc_logging_enabled()--- ")
"""Summary
Returns:
TYPE: Description
"""
result = True
failReason = ""
offenders = []
control_id = 'Id3.102'
compliance_type = "VPC flow logs enabled"
description = "Ensure VPC flow logging is enabled in all VPCs"
resource_type = "EC2"
risk_level = 'High'
regions = self.session.get_available_regions('ec2')
for n in regions:
try:
client = self.session.client('ec2', region_name=n)
flowlogs = client.describe_flow_logs(
# No paginator support in boto atm.
)
activeLogs = []
for m in flowlogs['FlowLogs']:
if "vpc-" in str(m['ResourceId']):
activeLogs.append(m['ResourceId'])
vpcs = client.describe_vpcs(
Filters=[
{
'Name': 'state',
'Values': [
'available',
]
},
]
)
for m in vpcs['Vpcs']:
if not str(m['VpcId']) in str(activeLogs):
result = False
failReason = "VPC without active VPC Flow Logs found"
offenders.append(str(n) + " : " + str(m['VpcId']))
except ClientError as e:
if e.response['Error']['Code'] == 'UnauthorizedOperation':
logger.info('---------Ec2 read access denied----------')
result = False
failReason = "Access Denied"
break
logger.warning("Something went wrong with the region {}: {}".format(n, e))
return {
'Result': result,
'failReason': failReason,
'resource_type': resource_type,
'Offenders': offenders,
'Compliance_type': compliance_type,
'Description': description,
'Risk Level': risk_level,
'ControlId': control_id
} | PypiClean |
/EOxServer-1.2.12-py3-none-any.whl/eoxserver/services/ows/wps/parameters/allowed_values.py |
from math import isinf
try:
from itertools import chain, ifilterfalse as filterfalse
except ImportError:
from itertools import filterfalse
from .data_types import BaseType, Double, DTYPES
class TypedMixIn(object):
""" Mix-in class adding date-type to an allowed value range. """
# pylint: disable=too-few-public-methods
def __init__(self, dtype):
if issubclass(dtype, BaseType):
self._dtype = dtype
elif dtype in DTYPES:
self._dtype = DTYPES[dtype]
else:
raise TypeError("Non-supported data type %s!" % dtype)
@property
def dtype(self):
""" Get data-type. """
return self._dtype
class BaseAllowed(object):
""" Allowed values base class. """
def check(self, value):
""" check validity """
raise NotImplementedError
def verify(self, value):
""" Verify the value."""
raise NotImplementedError
class AllowedAny(BaseAllowed):
""" Allowed values class allowing any value. """
def check(self, value):
return True
def verify(self, value):
return value
class AllowedByReference(BaseAllowed):
""" Allowed values class defined by a reference.
    NOTE: As it is not clear what such a reference definition looks like,
    this class currently behaves the same as the AllowedAny class.
"""
# TODO: Implement proper handling of the allowed values defined by a reference.
def __init__(self, url):
self._url = url
@property
def url(self):
""" Get the URL of the reference. """
return self._url
def check(self, value):
return True
def verify(self, value):
return value
class AllowedEnum(BaseAllowed, TypedMixIn):
""" Allowed values class allowing values from an enumerated set. """
@staticmethod
def _unique(values):
""" Get list of order-preserved unique values and the corresponding
unordered set.
"""
vset = set()
vlist = []
vset_add = vset.add
vlist_append = vlist.append
for value in filterfalse(vset.__contains__, values):
vset_add(value)
vlist_append(value)
return vlist, vset
def __init__(self, values, dtype=Double):
TypedMixIn.__init__(self, dtype)
self._values_list, self._values_set = self._unique(
self._dtype.parse(value) for value in values
)
@property
def values(self):
""" Get the allowed values. """
return self._values_list
def check(self, value):
return self._dtype.parse(value) in self._values_set
def verify(self, value):
if self.check(value):
return value
raise ValueError("The value is not in the set of the allowed values.")
class AllowedRange(BaseAllowed, TypedMixIn):
""" Allowed values class allowing values from a range.
Constructor parameters:
minval range lower bound - set to None if unbound
maxval range upper bound - set to None if unbound
closure *'closed'|'open'|'open-closed'|'closed-open'
spacing uniform spacing of discretely sampled ranges
spacing_rtol relative tolerance of the spacing match
"""
ALLOWED_CLOSURES = ['closed', 'open', 'open-closed', 'closed-open']
# NOTE: Use of spacing with float discrete range is not recommended.
def __init__(self, minval, maxval, closure='closed',
spacing=None, spacing_rtol=1e-9, dtype=Double):
# pylint: disable=too-many-arguments
TypedMixIn.__init__(self, dtype)
if not self._dtype.comparable:
raise ValueError("Non-supported range data type '%s'!" % self._dtype)
if closure not in self.ALLOWED_CLOSURES:
raise ValueError("Invalid closure specification!")
if minval is None and maxval is None:
raise ValueError("Invalid range bounds!")
if spacing_rtol < 0.0 or spacing_rtol > 1.0:
raise ValueError("Invalid spacing relative tolerance!")
self._closure = self.ALLOWED_CLOSURES.index(closure)
self._minval = None if minval is None else self._dtype.parse(minval)
self._maxval = None if maxval is None else self._dtype.parse(maxval)
self._base = self._maxval if self._minval is None else self._minval
# verify the spacing
if spacing is not None:
ddtype = self._dtype.get_diff_dtype()
# check whether the type has difference operation defined
if ddtype is None or ddtype.zero is None:
raise TypeError(
"Spacing is not applicable for type '%s'!" % dtype
)
spacing = ddtype.parse(spacing)
if spacing <= ddtype.zero:
raise ValueError("Invalid spacing '%s'!" % spacing)
self._spacing = spacing
self._rtol = spacing_rtol
@property
def minval(self):
""" Get the lower bound of the range. """
return self._minval
@property
def maxval(self):
""" Get the upper bound of the range. """
return self._maxval
@property
def spacing(self):
""" Get the range spacing. """
return self._spacing
@property
def closure(self):
""" Get the range closure type. """
return self.ALLOWED_CLOSURES[self._closure]
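    # A value respects the spacing when its offset from the range base is an
    # integer multiple of the spacing, within the spacing_rtol tolerance.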
def _out_of_spacing(self, value):
if self._spacing is None:
return False
ddtype = self._dtype.get_diff_dtype()
tmp0 = ddtype.as_number(self._dtype.sub(value, self._base))
tmp1 = ddtype.as_number(self.spacing)
tmp2 = float(tmp0) / float(tmp1)
if isinf(tmp2):
return True
return not self._rtol >= abs(tmp2 - round(tmp2))
def _out_of_bounds(self, value):
if value != value: # simple type-safe NaN check (works in Python > 2.5)
return True
below = self._minval is not None and (
value < self._minval or (
value == self._minval and self._closure in (1, 2)
)
)
above = self._maxval is not None and (
value > self._maxval or (
value == self._maxval and self._closure in (1, 3)
)
)
return below or above
def check(self, value):
value = self._dtype.parse(value)
return not (self._out_of_bounds(value) or self._out_of_spacing(value))
def verify(self, value):
parsed_value = self._dtype.parse(value)
if self._out_of_bounds(parsed_value):
raise ValueError("The value is not within the allowed range.")
if self._out_of_spacing(parsed_value):
raise ValueError("The value does not fit the requested spacing.")
return value
class AllowedRangeCollection(BaseAllowed, TypedMixIn):
""" Allowed value class allowing values from a collection of AllowedEnum
and AllowedRange instances.
"""
def __init__(self, *objs):
if not objs:
raise ValueError(
"At least one AllowedEnum or AllowedRange object "
"must be provided!"
)
TypedMixIn.__init__(self, objs[0].dtype)
values = set()
ranges = []
for obj in objs:
# Merge enumerated values into one set.
if isinstance(obj, AllowedEnum):
values.update(obj.values)
# Collect ranges.
elif isinstance(obj, AllowedRange):
ranges.append(obj)
else:
raise ValueError(
"An object which is neither AllowedEnum"
" nor AllowedRange instance! OBJ=%r" % obj
)
# Check that all ranges and value sets are of the same type.
if self.dtype != obj.dtype:
raise TypeError("Data type mismatch!")
self._enum = AllowedEnum(values, dtype=self.dtype)
self._ranges = ranges
@property
def enum(self):
""" Get merged set of the enumerated allowed values. """
return self._enum
@property
def ranges(self):
""" Get list of the allowed values' ranges. """
return self._ranges
def check(self, value):
for obj in chain([self._enum], self._ranges):
if obj.check(value):
return True
return False
def verify(self, value):
if self.check(value):
return value
raise ValueError(
"The value does not match the range of the allowed values!"
) | PypiClean |
/Daarmaan-0.2.2.tar.gz/Daarmaan-0.2.2/daarmaan/server/views/general.py |
import json
from django.shortcuts import render_to_response as rr
from django.shortcuts import redirect, get_object_or_404
from django.template import RequestContext
from django.contrib.auth import authenticate, login
from django.utils.translation import ugettext as _
from django.core.urlresolvers import reverse
from django.http import (HttpResponse, HttpResponseRedirect,
HttpResponseForbidden)
from django.contrib.auth.decorators import login_required
from daarmaan.server.forms import PreRegistrationForm
from daarmaan.server.models import Profile, Service
@login_required
def dashboard(request):
return HttpResponse()
@login_required
def setup_session(request):
"""
Insert all needed values into user session.
"""
return
services = request.user.get_profile().services.all()
services_id = [i.id for i in services]
request.session["services"] = services_id
def login_view(request):
username = request.POST['username']
password = request.POST['password']
remember = request.POST.get("remember_me", False)
next_url = request.POST.get("next", None)
form = PreRegistrationForm()
user = authenticate(username=username,
password=password)
if user is not None:
if user.is_active:
login(request, user)
setup_session(request)
if next_url:
                return HttpResponseRedirect(next_url)
return redirect(reverse(
"dashboard-index",
args=[]))
else:
return rr("index.html", {"regform": form,
"msgclass": "error",
"next": next_url,
"msg": _("Your account is disabled.")},
context_instance=RequestContext(request))
else:
return rr("index.html", {"regform": form,
"msgclass": "error",
"next": next_url,
"msg": _("Username or Password is invalid.")},
context_instance=RequestContext(request))
def pre_register(request):
form = PreRegistrationForm(request.POST)
if form.is_valid():
email = form.cleaned_data["email"]
username = form.cleaned_data["username"]
return HttpResponse()
def index(request):
"""
Main page.
"""
if request.user.is_authenticated():
return HttpResponseRedirect(reverse('dashboard-index'))
if request.method == "POST":
if request.POST["form"] == "login":
return login_view(request)
else:
return pre_register(request)
else:
form = PreRegistrationForm()
next_url = request.GET.get("next", "")
return rr("index.html", {"regform": form,
"next": next_url},
context_instance=RequestContext(request)) | PypiClean |
/MSM_PELE-1.1.1-py3-none-any.whl/AdaptivePELE/AdaptivePELE/automateRoundsAdaptive.py | from __future__ import absolute_import, division, print_function, unicode_literals
from AdaptivePELE.simulation import simulationrunner
import AdaptivePELE.adaptiveSampling as adaptiveSampling
import argparse
import os
def automateSimulation(args):
"""
    Run multiple AdaptivePELE simulations with the same parameters, changing
only the seed
:param args: Object containing the command line arguments
:type args: object
"""
controlFile = args.controlFile
numSimulations = args.numSimulations
nProcessors = args.nProcessors
nSteps = args.nSteps
simulationName = args.simulationName
simulationParameters = simulationrunner.SimulationParameters()
simulationParameters.templetizedControlFile = controlFile
simulationRunner = simulationrunner.SimulationRunner(simulationParameters)
epochs = args.epochs
if epochs:
rangeOfEpochs = epochs
else:
rangeOfEpochs = list(range(1, numSimulations+1))
print("rangeOfEpochs", rangeOfEpochs)
for i in rangeOfEpochs:
controlFileDictionary = {"SEED": "%d%d%d", "OUTPUTPATH": "%s_%d"}
SEED_i = int(controlFileDictionary["SEED"] % (i, nProcessors, nSteps))
controlFileDictionary["SEED"] = SEED_i
outputPath_i = controlFileDictionary["OUTPUTPATH"] % (simulationName, i)
controlFileDictionary["OUTPUTPATH"] = outputPath_i
controlFileName = "tmp_%s_controlfile_%s_%d.conf" % (os.path.splitext(controlFile)[0], simulationName, i)
controlFileName = controlFileName.replace("/", "_")
simulationRunner.makeWorkingControlFile(controlFileName, controlFileDictionary)
print("Starting simulation %d" % i)
adaptiveSampling.main(controlFileName)
def parseArguments():
"""
Parse the command line arguments
"""
parser = argparse.ArgumentParser(description="Automate the process "
"of repeating simulations")
parser.add_argument('controlFile', type=str)
parser.add_argument('numSimulations', type=int)
parser.add_argument('nProcessors', type=int)
parser.add_argument('nSteps', type=int)
parser.add_argument('simulationName', type=str)
parser.add_argument("-e", "--epochs", nargs='*', type=int, help="Epochs to run")
args = parser.parse_args()
return args
def main():
"""
Run the multiple simulations
"""
args = parseArguments()
automateSimulation(args)
if __name__ == "__main__":
main() | PypiClean |
/Djblets-3.3.tar.gz/Djblets-3.3/docs/releasenotes/2.2.rst | .. default-intersphinx:: django1.11 djblets2.x
=========================
Djblets 2.2 Release Notes
=========================
**Release date**: March 2, 2021
Performance Improvements
========================
* Improved compilation times with our
:py:class:`~djblets.pipeline.compilers.less.LessCompiler` for LessCSS files.
This has a significant impact on development servers, where all LessCSS
files are checked every page load to see which need to be compiled. The
first page's load time will be reduced, and subsequent page loads will be
nearly instantaneous.
New Features
============
* Added a ``settings.LOGGING_TO_STDOUT`` setting, which can be set to ``True``
to force all log messages to go to standard out.
This is useful particularly when running in a Docker container, where
applications are expected to log to standard output. This can be used along
with the standard file-based logging.
Bug Fixes
=========
* Fixed packaging extensions when running on Python 3.
Our extension packaging support was previously using byte strings on
Python 3 for some command line arguments used for packaging. This crashed
when trying to invoke the :file:`setup.py` for an extension's package.
Contributors
============
* Christian Hammond
| PypiClean |
/Flask-MDEditor-0.1.4.tar.gz/Flask-MDEditor-0.1.4/flask_mdeditor/static/mdeditor/js/lib/codemirror/addon/comment/continuecomment.js |
(function(mod) {
if (typeof exports == "object" && typeof module == "object") // CommonJS
mod(require("../../lib/codemirror"));
else if (typeof define == "function" && define.amd) // AMD
define(["../../lib/codemirror"], mod);
else // Plain browser env
mod(CodeMirror);
})(function(CodeMirror) {
var modes = ["clike", "css", "javascript"];
for (var i = 0; i < modes.length; ++i)
CodeMirror.extendMode(modes[i], {blockCommentContinue: " * "});
function continueComment(cm) {
if (cm.getOption("disableInput")) return CodeMirror.Pass;
var ranges = cm.listSelections(), mode, inserts = [];
for (var i = 0; i < ranges.length; i++) {
var pos = ranges[i].head, token = cm.getTokenAt(pos);
if (token.type != "comment") return CodeMirror.Pass;
var modeHere = CodeMirror.innerMode(cm.getMode(), token.state).mode;
if (!mode) mode = modeHere;
else if (mode != modeHere) return CodeMirror.Pass;
var insert = null;
if (mode.blockCommentStart && mode.blockCommentContinue) {
var end = token.string.indexOf(mode.blockCommentEnd);
var full = cm.getRange(CodeMirror.Pos(pos.line, 0), CodeMirror.Pos(pos.line, token.end)), found;
if (end != -1 && end == token.string.length - mode.blockCommentEnd.length && pos.ch >= end) {
// Comment ended, don't continue it
} else if (token.string.indexOf(mode.blockCommentStart) == 0) {
insert = full.slice(0, token.start);
if (!/^\s*$/.test(insert)) {
insert = "";
for (var j = 0; j < token.start; ++j) insert += " ";
}
} else if ((found = full.indexOf(mode.blockCommentContinue)) != -1 &&
found + mode.blockCommentContinue.length > token.start &&
/^\s*$/.test(full.slice(0, found))) {
insert = full.slice(0, found);
}
if (insert != null) insert += mode.blockCommentContinue;
}
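      // If block-comment continuation did not apply, fall back to continuing a line comment.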
if (insert == null && mode.lineComment && continueLineCommentEnabled(cm)) {
var line = cm.getLine(pos.line), found = line.indexOf(mode.lineComment);
if (found > -1) {
insert = line.slice(0, found);
if (/\S/.test(insert)) insert = null;
else insert += mode.lineComment + line.slice(found + mode.lineComment.length).match(/^\s*/)[0];
}
}
if (insert == null) return CodeMirror.Pass;
inserts[i] = "\n" + insert;
}
cm.operation(function() {
for (var i = ranges.length - 1; i >= 0; i--)
cm.replaceRange(inserts[i], ranges[i].from(), ranges[i].to(), "+insert");
});
}
function continueLineCommentEnabled(cm) {
var opt = cm.getOption("continueComments");
if (opt && typeof opt == "object")
return opt.continueLineComment !== false;
return true;
}
CodeMirror.defineOption("continueComments", null, function(cm, val, prev) {
if (prev && prev != CodeMirror.Init)
cm.removeKeyMap("continueComment");
if (val) {
var key = "Enter";
if (typeof val == "string")
key = val;
else if (typeof val == "object" && val.key)
key = val.key;
var map = {name: "continueComment"};
map[key] = continueComment;
cm.addKeyMap(map);
}
});
}); | PypiClean |
/Mathics_Django-6.0.0-py3-none-any.whl/mathics_django/web/media/js/mathjax/localization/ja/FontWarnings.js | MathJax.Localization.addTranslation("ja","FontWarnings",{version:"2.7.9",isLoaded:true,strings:{webFont:"MathJax \u306F\u3053\u306E\u30DA\u30FC\u30B8\u3067\u3001\u6570\u5F0F\u3092\u8868\u793A\u3059\u308B\u305F\u3081\u306B\u30A6\u30A7\u30D6 \u30D9\u30FC\u30B9\u306E\u30D5\u30A9\u30F3\u30C8\u3092\u4F7F\u7528\u3057\u3066\u3044\u307E\u3059\u3002\u30D5\u30A9\u30F3\u30C8\u306E\u30C0\u30A6\u30F3\u30ED\u30FC\u30C9\u306B\u6642\u9593\u304C\u304B\u304B\u308B\u305F\u3081\u3001\u3042\u306A\u305F\u306E\u30B7\u30B9\u30C6\u30E0\u306E\u30D5\u30A9\u30F3\u30C8 \u30D5\u30A9\u30EB\u30C0\u30FC\u306B\u6570\u5F0F\u30D5\u30A9\u30F3\u30C8\u3092\u76F4\u63A5\u30A4\u30F3\u30B9\u30C8\u30FC\u30EB\u3059\u308B\u3053\u3068\u3067\u30DA\u30FC\u30B8\u306E\u30EC\u30F3\u30C0\u30EA\u30F3\u30B0\u304C\u3088\u308A\u901F\u304F\u306A\u308A\u307E\u3059\u3002",imageFonts:"MathJax \u306F\u30ED\u30FC\u30AB\u30EB \u30D5\u30A9\u30F3\u30C8\u3084 Web \u30D5\u30A9\u30F3\u30C8\u3067\u306F\u306A\u304F\u753B\u50CF\u30D5\u30A9\u30F3\u30C8\u3092\u4F7F\u7528\u3057\u3066\u3044\u307E\u3059\u3002\u63CF\u753B\u304C\u901A\u5E38\u3088\u308A\u9045\u3044\u304A\u305D\u308C\u304C\u3042\u308A\u3001\u30D7\u30EA\u30F3\u30BF\u30FC\u3067\u306E\u9AD8\u89E3\u50CF\u5EA6\u306E\u5370\u5237\u306B\u5411\u304B\u306A\u3044\u304A\u305D\u308C\u304C\u3042\u308A\u307E\u3059\u3002",noFonts:"MathJax \u304C\u6570\u5F0F\u306E\u8868\u793A\u306B\u4F7F\u7528\u3059\u308B\u30D5\u30A9\u30F3\u30C8\u3092\u898B\u3064\u3051\u3089\u308C\u305A\u3001\u753B\u50CF\u30D5\u30A9\u30F3\u30C8\u3082\u5229\u7528\u3067\u304D\u306A\u3044\u305F\u3081\u3001\u4EE3\u308F\u308A\u306B\u6C4E\u7528\u306E Unicode \u6587\u5B57\u3092\u4F7F\u7528\u3057\u3066\u3044\u307E\u3059\u3002\u3054\u4F7F\u7528\u4E2D\u306E\u30D6\u30E9\u30A6\u30B6\u30FC\u304C\u8868\u793A\u3067\u304D\u308B\u3082\u306E\u3068\u671F\u5F85\u3057\u3066\u3044\u307E\u3059\u304C\u3001\u4E00\u90E8\u306E\u6587\u5B57\u304C\u9069\u5207\u306B\u8868\u793A\u3055\u308C\u306A\u3044\u3001\u307E\u305F\u306F\u5168\u304F\u8868\u793A\u3055\u308C\u306A\u3044\u304A\u305D\u308C\u304C\u3042\u308A\u307E\u3059\u3002",webFonts:"\u591A\u304F\u306E\u30A6\u30A7\u30D6 \u30D6\u30E9\u30A6\u30B6\u30FC\u306F\u30A6\u30A7\u30D6\u304B\u3089\u30D5\u30A9\u30F3\u30C8\u3092\u30C0\u30A6\u30F3\u30ED\u30FC\u30C9\u3067\u304D\u307E\u3059\u3002\u3054\u4F7F\u7528\u4E2D\u306E\u30D6\u30E9\u30A6\u30B6\u30FC\u3092\u3088\u308A\u65B0\u3057\u3044\u30D0\u30FC\u30B8\u30E7\u30F3\u306B\u66F4\u65B0\u3059\u308B (\u307E\u305F\u306F\u5225\u306E\u30D6\u30E9\u30A6\u30B6\u30FC\u306B\u5909\u66F4\u3059\u308B) \u3053\u3068\u3067\u3001\u3053\u306E\u30DA\u30FC\u30B8\u306E\u6570\u5F0F\u306E\u54C1\u8CEA\u304C\u5411\u4E0A\u3059\u308B\u53EF\u80FD\u6027\u304C\u3042\u308A\u307E\u3059\u3002",fonts:"MathJax \u3067\u306F [STIX \u30D5\u30A9\u30F3\u30C8](%1)\u3084 [MathJax Tex \u30D5\u30A9\u30F3\u30C8](%2)\u3092\u4F7F\u7528\u3067\u304D\u307E\u3059\u3002MathJax \u4F53\u9A13\u3092\u6539\u5584\u3059\u308B\u305F\u3081\u306B\u3001\u30D5\u30A9\u30F3\u30C8\u3092\u30C0\u30A6\u30F3\u30ED\u30FC\u30C9\u304A\u3088\u3073\u30A4\u30F3\u30B9\u30C8\u30FC\u30EB\u3057\u3066\u304F\u3060\u3055\u3044\u3002",STIXPage:"\u3053\u306E\u30DA\u30FC\u30B8\u306F [STIX \u30D5\u30A9\u30F3\u30C8](%1)\u3092\u4F7F\u7528\u3059\u308B\u3088\u3046\u306B\u8A2D\u8A08\u3055\u308C\u3066\u3044\u307E\u3059\u3002MathJax 
\u4F53\u9A13\u3092\u6539\u5584\u3059\u308B\u305F\u3081\u306B\u3001\u30D5\u30A9\u30F3\u30C8\u3092\u30C0\u30A6\u30F3\u30ED\u30FC\u30C9\u304A\u3088\u3073\u30A4\u30F3\u30B9\u30C8\u30FC\u30EB\u3057\u3066\u304F\u3060\u3055\u3044\u3002",TeXPage:"\u3053\u306E\u30DA\u30FC\u30B8\u306F [MathJax TeX \u30D5\u30A9\u30F3\u30C8](%1)\u3092\u4F7F\u7528\u3059\u308B\u3088\u3046\u306B\u8A2D\u8A08\u3055\u308C\u3066\u3044\u307E\u3059\u3002MathJax \u4F53\u9A13\u3092\u6539\u5584\u3059\u308B\u305F\u3081\u306B\u3001\u30D5\u30A9\u30F3\u30C8\u3092\u30C0\u30A6\u30F3\u30ED\u30FC\u30C9\u304A\u3088\u3073\u30A4\u30F3\u30B9\u30C8\u30FC\u30EB\u3057\u3066\u304F\u3060\u3055\u3044\u3002"}});MathJax.Ajax.loadComplete("[MathJax]/localization/ja/FontWarnings.js"); | PypiClean |
/Boodler-2.0.3.tar.gz/Boodler-2.0.3/src/boodle/sample.py | import fileinput
import os
import os.path
import aifc
import wave
import sunau
import struct
import bisect
# Maps File objects, and also str/unicode pathnames, to Samples.
cache = {}
# We still support $BOODLER_SOUND_PATH, for old times' sake.
# But packaged modules should not rely on it.
sound_dirs = os.environ.get('BOODLER_SOUND_PATH', os.curdir)
sound_dirs = sound_dirs.split(':')
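# Detect the native byte order: packing 1 as a native short yields "\000\001"
# only on big-endian machines.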
if struct.pack("h", 1) == "\000\001":
big_endian = 1
else:
big_endian = 0
class Sample:
"""Sample: represents a sound file, held in memory.
This is really just a container for a native object (csamp), which
is used by the cboodle native module. Samples may only be created
by the SampleLoader classes in this module.
"""
reloader = None
def __init__(self, filename, csamp):
self.filename = filename
self.refcount = 0
self.lastused = 0
self.csamp = csamp
def __repr__(self):
return '<Sample at ' + str(self.filename) + '>'
def queue_note(self, pitch, volume, pan, starttime, chan):
if (cboodle.is_sample_error(self.csamp)):
raise SampleError('sample is unplayable')
if (not cboodle.is_sample_loaded(self.csamp)):
if (not (self.reloader is None)):
self.reloader.reload(self)
if (not cboodle.is_sample_loaded(self.csamp)):
raise SampleError('sample is unloaded')
(panscx, panshx, panscy, panshy) = stereo.extend_tuple(pan)
def closure(samp=self, chan=chan):
samp.refcount -= 1
chan.remnote()
dur = cboodle.create_note(self.csamp, pitch, volume,
panscx, panshx, panscy, panshy,
starttime, chan, closure)
chan.addnote()
self.refcount += 1
if (self.lastused < starttime + dur):
self.lastused = starttime + dur
return dur
def queue_note_duration(self, pitch, volume, pan, starttime, duration, chan):
if (cboodle.is_sample_error(self.csamp)):
raise SampleError('sample is unplayable')
if (not cboodle.is_sample_loaded(self.csamp)):
if (not (self.reloader is None)):
self.reloader.reload(self)
if (not cboodle.is_sample_loaded(self.csamp)):
raise SampleError('sample is unloaded')
(panscx, panshx, panscy, panshy) = stereo.extend_tuple(pan)
def closure(samp=self, chan=chan):
samp.refcount -= 1
chan.remnote()
dur = cboodle.create_note_duration(self.csamp, pitch, volume,
panscx, panshx, panscy, panshy,
starttime, duration, chan, closure)
chan.addnote()
self.refcount += 1
if (self.lastused < starttime + dur):
self.lastused = starttime + dur
return dur
def get_info(self, pitch=1.0):
if (cboodle.is_sample_error(self.csamp)):
raise SampleError('sample is unplayable')
res = cboodle.sample_info(self.csamp)
ratio = float(res[0]) * float(pitch) * float(cboodle.framespersec())
if (len(res) == 2):
return (float(res[1]) / ratio, None)
else:
return (float(res[1]) / ratio,
(float(res[2]) / ratio, float(res[3]) / ratio))
class MixinSample(Sample):
def __init__(self, filename, ranges, default, modname=None):
self.ranges = ranges
self.minvals = [ rn.min for rn in ranges ]
self.default = default
if (filename is None):
filename = '<constructed>'
self.filename = filename
if (modname):
self.__module__ = modname
self.lastused = 0
self.refcount = 0
self.csamp = None
def find(self, pitch):
pos = bisect.bisect(self.minvals, pitch)
pos -= 1
while (pos >= 0):
rn = self.ranges[pos]
if (pitch <= rn.max):
return rn
pos -= 1
if (not (self.default is None)):
return self.default
raise SampleError(str(pitch) + ' is outside mixin ranges')
def queue_note(self, pitch, volume, pan, starttime, chan):
rn = self.find(pitch)
if (not (rn.pitch is None)):
pitch *= rn.pitch
if (not (rn.volume is None)):
volume *= rn.volume
samp = get(rn.sample)
return samp.queue_note(pitch, volume, pan, starttime, chan)
def queue_note_duration(self, pitch, volume, pan, starttime, duration, chan):
rn = self.find(pitch)
if (not (rn.pitch is None)):
pitch *= rn.pitch
if (not (rn.volume is None)):
volume *= rn.volume
samp = get(rn.sample)
return samp.queue_note_duration(pitch, volume, pan, starttime, duration, chan)
def get_info(self, pitch=1.0):
rn = self.find(pitch)
if (not (rn.pitch is None)):
pitch *= rn.pitch
samp = get(rn.sample)
return samp.get_info(pitch)
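# Unload the audio data of cached samples that are no longer referenced and
# whose last scheduled note has finished playing.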
def unload_unused(deathtime):
for samp in list(cache.values()):
if (samp.refcount == 0
and (not (samp.csamp is None))
and deathtime >= samp.lastused
and cboodle.is_sample_loaded(samp.csamp)):
cboodle.unload_sample(samp.csamp)
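# Shift the cached samples' last-used timestamps when the scheduler's time base is trimmed.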
def adjust_timebase(trimoffset, maxage):
for samp in cache.values():
if (samp.lastused >= -maxage):
samp.lastused = samp.lastused - trimoffset
def get(sname):
"""get(sample) -> Sample
Load a sample object, given a filename or File object. (You can also
pass a Sample object; it will be returned back to you.)
(If the filename is relative, $BOODLER_SOUND_PATH is searched.)
The module maintains a cache of sample objects, so if you load the
same filename twice, the second get() call will be fast.
    You will rarely need to call this function directly, since
    agent.sched_note() and similar methods call it for you -- they accept
    filenames as well as sample objects. It is available nevertheless.
"""
# If the argument is a Sample in the first place, return it.
if (isinstance(sname, Sample)):
return sname
# If we've seen it before, it's in the cache.
samp = cache.get(sname)
if (not (samp is None)):
return samp
suffix = None
if (isinstance(sname, boopak.pinfo.MemFile)):
filename = sname
suffix = sname.suffix
elif (isinstance(sname, boopak.pinfo.File)):
filename = sname
if (not os.access(sname.pathname, os.R_OK)):
raise SampleError('file not readable: ' + sname.pathname)
(dummy, suffix) = os.path.splitext(sname.pathname)
elif (not (type(sname) in [str, unicode])):
raise SampleError('not a File or filename')
elif (os.path.isabs(sname)):
filename = sname
if (not os.access(filename, os.R_OK)):
raise SampleError('file not readable: ' + filename)
(dummy, suffix) = os.path.splitext(filename)
else:
for dir in sound_dirs:
filename = os.path.join(dir, sname)
if (os.access(filename, os.R_OK)):
(dummy, suffix) = os.path.splitext(filename)
break
else:
raise SampleError('file not readable: ' + sname)
suffix = suffix.lower()
loader = find_loader(suffix)
samp = loader.load(filename, suffix)
# Cache under the original key (may be File, str, or unicode)
cache[sname] = samp
return samp
def get_info(samp, pitch=1):
"""get_info(sample, pitch=1) -> tuple
Measure the expected running time and looping parameters of a sound.
The argument can be either a filename, or a sample object (as
returned by get()).
The result is a 2-tuple. The first member is the duration of the
sound (in seconds, if played with the given pitch -- by default,
the sound's original pitch). The second member is None, if the
sound has no looping parameters, or a 2-tuple (loopstart, loopend).
The result of this function may not be precisely accurate, due
to rounding annoyances. In particular, the duration may not be
exactly equal to the value returned by agent.sched_note(), when
the note is actually played.
"""
samp = get(samp)
return samp.get_info(pitch)
class MixIn:
"""MixIn: base class for statically declared mix-in samples.
To use this, declare a construct:
class your_sample_name(MixIn):
ranges = [
MixIn.range(...),
MixIn.range(...),
MixIn.range(...),
]
default = MixIn.default(...)
A range declaration looks like
MixIn.range(maxval, sample)
or
MixIn.range(minval, maxval, sample)
or
MixIn.range(minval, maxval, sample, pitch=1.0, volume=1.0)
If you don't give a minval, the maxval of the previous range is used.
You may use the constants MixIn.MIN and MixIn.MAX to represent the
limits of the range. The pitch and volume arguments are optional.
A default declaration looks like
MixIn.default(sample)
or
MixIn.default(sample, pitch=1.0, volume=1.0)
    The default declaration is optional. (As are, again, the pitch and
    volume arguments.)
When your declaration is complete, your_sample_name will magically
be a MixinSample instance (not a class).
"""
MIN = 0.0
MAX = 1000000.0
def default(samp, pitch=None, volume=None):
if (samp is None):
raise SampleError('default must have a sample')
return MixIn.range(MixIn.MIN, MixIn.MAX, samp,
pitch=pitch, volume=volume)
default = staticmethod(default)
class range:
def __init__(self, arg1, arg2, arg3=None, pitch=None, volume=None):
if (arg3 is None):
(min, max, samp) = (None, arg1, arg2)
else:
(min, max, samp) = (arg1, arg2, arg3)
if (samp is None):
raise SampleError('range must have a sample')
if (max is None):
raise SampleError('range must have a maximum value')
(self.min, self.max) = (min, max)
self.sample = samp
self.pitch = pitch
self.volume = volume
def __repr__(self):
return '<range %s, %s>' % (self.min, self.max)
def __cmp__(self, other):
if (not (self.min is None or other.min is None)):
res = cmp(self.min, other.min)
if (res):
return res
if (not (self.max is None or other.max is None)):
res = cmp(self.max, other.max)
if (res):
return res
return 0
def __class__(name, bases, dic):
ranges = dic['ranges']
default = dic.get('default', None)
modname = dic['__module__']
MixIn.sort_mixin_ranges(ranges)
return MixinSample('<'+name+'>', ranges, default, modname)
__class__ = staticmethod(__class__)
def sort_mixin_ranges(ranges):
ranges.sort()
lastmin = 0.0
for rn in ranges:
if (rn.min is None):
rn.min = lastmin
if (rn.min > rn.max):
raise SampleError('range\'s min must be less than its max')
lastmin = rn.max
sort_mixin_ranges = staticmethod(sort_mixin_ranges)
class SampleLoader:
"""SampleLoader: Base class for the facility to load a particular
form of sound sample from a file.
Subclasses of this are defined and instantiated later in the module.
"""
suffixmap = {}
def __init__(self):
self.register_suffixes()
def register_suffixes(self):
for val in self.suffixlist:
SampleLoader.suffixmap[val] = self
def load(self, filename, suffix):
csamp = cboodle.new_sample()
try:
self.raw_load(filename, csamp)
except Exception, ex:
cboodle.delete_sample(csamp)
raise
samp = Sample(filename, csamp)
samp.reloader = self
return samp
def reload(self, samp):
self.raw_load(samp.filename, samp.csamp)
def find_loader(suffix):
"""find_loader(suffix) -> SampleLoader
Locate the SampleLoader instance which handles the given file
suffix. (The suffix should be given as a dot followed by lower-case
characters.)
"""
clas = SampleLoader.suffixmap.get(suffix)
if (clas is None):
raise SampleError('unknown sound file extension \''
+ suffix + '\'')
return clas
class AifcLoader(SampleLoader):
suffixlist = ['.aifc', '.aiff', '.aif']
def raw_load(self, filename, csamp):
if (isinstance(filename, boopak.pinfo.File)):
afl = filename.open(True)
else:
afl = open(filename, 'rb')
try:
fl = aifc.open(afl)
numframes = fl.getnframes()
dat = fl.readframes(numframes)
numchannels = fl.getnchannels()
samplebits = fl.getsampwidth()*8
framerate = fl.getframerate()
markers = fl.getmarkers()
fl.close()
finally:
afl.close()
loopstart = -1
loopend = -1
if (not (markers is None)):
for (mark, pos, name) in markers:
if (mark == 1):
loopstart = pos
elif (mark == 2):
loopend = pos
if (loopstart < 0 or loopend < 0):
loopstart = -1
loopend = -1
params = (framerate, numframes, dat, loopstart, loopend, numchannels, samplebits, 1, 1)
res = cboodle.load_sample(csamp, params)
if (not res):
raise SampleError('unable to load aiff data')
aifc_loader = AifcLoader()
class WavLoader(SampleLoader):
suffixlist = ['.wav']
def raw_load(self, filename, csamp):
if (isinstance(filename, boopak.pinfo.File)):
afl = filename.open(True)
else:
afl = open(filename, 'rb')
try:
fl = wave.open(afl)
numframes = fl.getnframes()
dat = fl.readframes(numframes)
numchannels = fl.getnchannels()
samplebits = fl.getsampwidth()*8
framerate = fl.getframerate()
fl.close()
finally:
afl.close()
params = (framerate, numframes, dat, -1, -1, numchannels, samplebits, 1, big_endian)
res = cboodle.load_sample(csamp, params)
if (not res):
raise SampleError('unable to load wav data')
wav_loader = WavLoader()
class SunAuLoader(SampleLoader):
suffixlist = ['.au']
def raw_load(self, filename, csamp):
if (isinstance(filename, boopak.pinfo.File)):
afl = filename.open(True)
else:
afl = open(filename, 'rb')
try:
fl = sunau.open(afl, 'r')
numframes = fl.getnframes()
dat = fl.readframes(numframes)
numchannels = fl.getnchannels()
samplebits = fl.getsampwidth()*8
framerate = fl.getframerate()
fl.close()
finally:
afl.close()
params = (framerate, numframes, dat, -1, -1, numchannels, samplebits, 1, 1)
res = cboodle.load_sample(csamp, params)
if (not res):
raise SampleError('unable to load au data')
sunau_loader = SunAuLoader()
class MixinLoader(SampleLoader):
suffixlist = ['.mixin']
def load(self, filename, suffix):
dirname = None
modname = None
if (isinstance(filename, boopak.pinfo.File)):
afl = filename.open(True)
modname = filename.package.encoded_name
else:
dirname = os.path.dirname(filename)
afl = open(filename, 'rb')
linelist = afl.readlines()
afl.close()
ranges = []
defval = None
for line in linelist:
tok = line.split()
if len(tok) == 0:
continue
if (tok[0].startswith('#')):
continue
if (tok[0] == 'range'):
if (len(tok) < 4):
raise SampleError('range and filename required after range')
tup = self.parseparam(filename, dirname, tok[3:])
if (tok[1] == '-'):
startval = None
else:
startval = float(tok[1])
if (tok[2] == '-'):
endval = MixIn.MAX
else:
endval = float(tok[2])
rn = MixIn.range(startval, endval, tup[0], pitch=tup[1], volume=tup[2])
ranges.append(rn)
elif (tok[0] == 'else'):
if (len(tok) < 2):
raise SampleError('filename required after else')
tup = self.parseparam(filename, dirname, tok[1:])
rn = MixIn.default(tup[0], pitch=tup[1], volume=tup[2])
defval = rn
else:
raise SampleError('unknown statement in mixin: ' + tok[0])
MixIn.sort_mixin_ranges(ranges)
return MixinSample(filename, ranges, defval, modname)
def parseparam(self, filename, dirname, tok):
if (dirname is None):
pkg = filename.package
samp = pkg.loader.load_item_by_name(tok[0], package=pkg)
else:
newname = os.path.join(dirname, tok[0])
newname = os.path.normpath(newname)
samp = get(newname)
pitch = None
volume = None
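        # A '-' token leaves the corresponding pitch or volume at its default.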
if (len(tok) > 2):
if (tok[2] != '-'):
volume = float(tok[2])
if (len(tok) > 1):
if (tok[1] != '-'):
pitch = float(tok[1])
return (samp, pitch, volume)
def reload(self, samp):
pass
mixin_loader = MixinLoader()
# Late imports.
import boodle
from boodle import stereo
# cboodle may be updated later, by a set_driver() call.
cboodle = boodle.cboodle
import boopak
class SampleError(boodle.BoodlerError):
"""SampleError: Represents problems encountered while finding or
loading sound files.
"""
pass | PypiClean |
/BCPy2000-1.6.tar.gz/BCPy2000-1.6/src/AppTools/Displays.py | __all__ = [
'monitors', 'monitor', 'number_of_monitors',
'init_screen', 'split_33_66', 'fullscreen',
'main_coordinate_frame', 'scr',
]
from . import Boxes
import platform
if platform.system().lower() == 'windows':
try:
import win32api
def monitors():
"""
Returns a list of AppTools.Boxes.box objects, one for each of the
displays attached.
"""###
m = [Boxes.Box(rect=x[2]) for x in win32api.EnumDisplayMonitors()]
rebase = m[0].height
for i in range(1,len(m)): m[i].bottom,m[i].top = rebase-m[i].top,rebase-m[i].bottom
return m
def monitor(id=0):
"""
Return the coordinates of the specified display.
"""###
m = monitors()
if id >= len(m): raise IndexError("index %d out of range (%d monitors detected)" % (id,len(m)))
return m[id]
def number_of_monitors():
"""
Return the number of displays detected.
"""###
return len(monitors())
except ImportError:
print(__name__,"module failed to import win32api")
import ctypes
GetSystemMetrics = ctypes.windll.user32.GetSystemMetrics
def monitors():
return [Boxes.Box(rect=(0,0,GetSystemMetrics(0),GetSystemMetrics(1)))]
def monitor(id=0):
if not id in [0,-1]: raise IndexError("win32api not available---cannot get information about multiple displays")
return monitors()[id]
def number_of_monitors():
return 0
else:
print(__name__,"module does not know how to get information about the number and size of displays")
def monitors():
        try: import pygame
except ImportError: pass
else:
try:
pygame.init()
a = pygame.display.Info()
return [Boxes.Box(rect=(0,0,a.current_w,a.current_h))]
except:
print("failed to get screen size using pygame.display.Info()")
return [Boxes.Box(rect=(0,0,640,480))]
def monitor(id=0):
if not id in [0,-1]: raise IndexError("do not know how to enumerate displays on this system")
return monitors()[id]
def number_of_monitors():
return 0
from . import CurrentRenderer
def init_screen(b, **kwargs):
"""
Initialize the drawing window, via screen.setup(), to have the
dimensions specified in box object <b>.
"""###
screen = CurrentRenderer.get_screen()
if b == None:
b = Boxes.Box(rect=(0,0,screen.size[0],screen.size[1]))
else:
b = Boxes.Box(b) # makes a copy
for k in ['left', 'top', 'right', 'bottom', 'width', 'height', 'x', 'y']:
v = kwargs.pop(k, None)
if v != None: setattr(b, k, v)
b.width = int(round(b.width))
b.height = int(round(b.height))
b.left = int(round(b.left))
b.top = int(round(b.top))
screen.setup(
width=b.width, height=b.height,
left=b.left, top=monitor(0).top-b.top,
**kwargs
)
b.sticky = True
b.anchor = 'bottom left'
b.position = (0,0)
if b.internal == None: b.internal = b.__class__(rect=(-1,-1,+1,+1), sticky=False)
else: b.internal = b.__class__(b.internal) # makes a copy
screen.__dict__['coords'] = b
return b
def main_coordinate_frame():
"""
Return CurrentRenderer.get_screen().coords, initializing it
if it has not already been put in place by init_screen()
"""###
screen = CurrentRenderer.get_screen()
if not hasattr(screen, 'coords'): init_screen(None)
return screen.coords
def scr(*pargs):
"""
scr((x,y)) or scr(x,y) maps the specified position via
CurrentRenderer.get_screen().coords, which is a coordinate
frame initialized by init_screen().
"""###
return main_coordinate_frame().scr(*pargs)
def fullscreen(scale=1.0, id=-1, anchor='center', **kwargs):
"""
Initialize the drawing window to be <scale> times the full size
of the display indexed by <id>.
"""###
m = monitor(id)
m.anchor = anchor
m.scale(scale)
return init_screen(m, **kwargs)
def split_33_66(bci, **kwargs):
"""
Initialize the drawing window such that one third of it is an
"experimenter panel" on the first monitor, and two thirds are
a "subject panel" on the last monitor. The two monitors should
be arranged side-by-side with the subject monitor logically to
the right. The experimenter panel becomes the main coordinate
frame (returned by main_coordinate_frame() or bci.screen.coords).
<bci> is the BciApplication instance. It receives attributes
called experimenter_panel and subject_panel, Boxes.box objects
representing the two coordinate frames.
"""###
m = monitor(-1)
if number_of_monitors() == 2:
panelwidth = monitor(0).width/3
m.anchor = 'top right'
m.width += panelwidth
else:
m.anchor = 'top right'
        m.height /= 2; m.width *= 0.8
panelwidth = m.width/3
bci.experimenter_panel = init_screen(m, **kwargs) # this one also recorded in screen.coords
bci.subject_panel = bci.experimenter_panel.copy()
bci.experimenter_panel.anchor = 'left'
bci.experimenter_panel.width = panelwidth
bci.subject_panel.anchor = 'right'
bci.subject_panel.width -= panelwidth | PypiClean |
/0x-web3-5.0.0a5.tar.gz/0x-web3-5.0.0a5/web3/middleware/validation.py | from eth_utils.curried import (
apply_formatter_at_index,
apply_formatter_if,
apply_formatters_to_dict,
is_null,
)
from hexbytes import (
HexBytes,
)
from web3._utils.toolz import (
complement,
compose,
curry,
dissoc,
)
from web3.exceptions import (
ValidationError,
)
from web3.middleware.formatting import (
construct_web3_formatting_middleware,
)
MAX_EXTRADATA_LENGTH = 32
is_not_null = complement(is_null)
@curry
def validate_chain_id(web3, chain_id):
if chain_id == web3.net.chainId:
return chain_id
else:
raise ValidationError(
"The transaction declared chain ID %r, "
"but the connected node is on %r" % (
chain_id,
"UNKNOWN",
)
)
def check_extradata_length(val):
if not isinstance(val, (str, int, bytes)):
return val
result = HexBytes(val)
if len(result) > MAX_EXTRADATA_LENGTH:
raise ValidationError(
"The field extraData is %d bytes, but should be %d. "
"It is quite likely that you are connected to a POA chain. "
"Refer "
"http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority "
"for more details. The full extraData is: %r" % (
len(result), MAX_EXTRADATA_LENGTH, result
)
)
return val
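# chainId is validated first (see chain_id_validator) and then stripped by
# transaction_normalizer, since the node's JSON-RPC transaction payload does not use it.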
def transaction_normalizer(transaction):
return dissoc(transaction, 'chainId')
def transaction_param_validator(web3):
transactions_params_validators = {
'chainId': apply_formatter_if(
# Bypass `validate_chain_id` if chainId can't be determined
lambda _: is_not_null(web3.net.chainId),
validate_chain_id(web3)
),
}
return apply_formatter_at_index(
apply_formatters_to_dict(transactions_params_validators),
0
)
BLOCK_VALIDATORS = {
'extraData': check_extradata_length,
}
block_validator = apply_formatter_if(
is_not_null,
apply_formatters_to_dict(BLOCK_VALIDATORS)
)
@curry
def chain_id_validator(web3):
return compose(
apply_formatter_at_index(transaction_normalizer, 0),
transaction_param_validator(web3)
)
def build_validators_with_web3(w3):
return dict(
request_formatters={
'eth_sendTransaction': chain_id_validator(w3),
'eth_estimateGas': chain_id_validator(w3),
'eth_call': chain_id_validator(w3),
},
result_formatters={
'eth_getBlockByHash': block_validator,
'eth_getBlockByNumber': block_validator,
},
)
validation_middleware = construct_web3_formatting_middleware(build_validators_with_web3) | PypiClean |
/GailBot-0.2a0-py3-none-any.whl/gailbot/api.py | from typing import List, Dict, Union, Tuple, Callable
from gailbot.services import ServiceController, SettingDict
from gailbot.workspace import WorkspaceManager
from .plugins.suite import PluginSuite
from gailbot.core.utils.logger import makelogger
logger = makelogger("gb_api")
class GailBot:
"""
    Top-level API wrapper for GailBot: manages sources, transcription,
    setting profiles, plugin suites, and engine settings
"""
def __init__(self, ws_root: str):
"""initialize a gailbot object that provides a suite of functions
to interact with gailbot
Args:
ws_root (str): the path to workspace root
"""
self.ws_manager: WorkspaceManager = WorkspaceManager(ws_root)
self.init_workspace()
logger.info("workspace manager initialized")
self.gb: ServiceController = ServiceController(
self.ws_manager, load_exist_setting=True
)
logger.info("gailbot service controller initialized")
def init_workspace(self):
"""
Resets the workspace: clears the old workspace and initializes a new one.
Returns:
            bool: True if the workspace was successfully initialized, False otherwise
"""
try:
self.ws_manager.clear_gb_temp_dir()
self.ws_manager.init_workspace()
return True
except Exception as e:
logger.error(f"failed to reset workspace due to the error {e}", exc_info=e)
return False
def clear_workspace(self):
"""
Clears current workspace
        Returns: True if the workspace was successfully cleared; false otherwise
"""
try:
self.ws_manager.clear_gb_temp_dir()
return True
except Exception as e:
logger.error(f"failed to reset workspace due to the error {e}", exc_info=e)
return False
def reset_workspace(self) -> bool:
"""
Reset the gailbot workspace
Returns: True if workspace successfully reset; false otherwise
"""
return self.ws_manager.reset_workspace()
###########################################################################
# Sources #
###########################################################################
def add_source(self, source_path: str, output_dir: str) -> bool:
"""
Adds a given source
Args:
source_path : str: Source path of the given source
output_dir : str: Path to the output directory of the given source
Returns:
            bool: True if the source was successfully added, False if not
"""
return self.gb.add_source(source_path, output_dir)
def add_sources(self, src_output_pairs: List[Tuple[str, str]]) -> bool:
"""
Adds a given list of sources
Args:
src_output_pairs: List [Tuple [str, str]]: List of Tuples of strings
to strings, each representing the source path and output path of
a source to add
Returns:
Bool: True if each given source was successfully added, false if not
"""
return self.gb.add_sources(src_output_pairs)
def is_source(self, name: str) -> bool:
"""
Determines if a given name corresponds to an existing source
Args:
name: str: Name of the source to look for
Returns:
Bool: True if the given name corresponds to an existing source,
false if not
"""
return self.gb.is_source(name)
def get_source_outdir(self, name: str) -> str:
"""
Accesses source output directory with a given name
Args:
source_name: str: source name to access
Returns:
            a string storing the output path of the source
"""
return self.gb.get_source_out_dir(name)
def remove_source(self, source_name: str) -> bool:
"""
Removes the given source
Args:
source_name : str: Name of the existing source to remove
Returns:
Bool: True if source was successfully removed, false if not
"""
return self.gb.remove_source(source_name)
def remove_sources(self, source_names: List[str]) -> bool:
"""
Removes the given list of sources
Args:
            source_names : List[str]: Names of the existing sources to remove
Returns:
Bool: True if all sources were successfully removed, false if not
"""
return self.gb.remove_sources(source_names)
def get_source_setting_dict(self, source_name) -> Union[bool, SettingDict]:
"""
Given a source, returns its setting content as a dictionary
Args:
source_name (str): the name of the source
Returns:
Union[bool, Dict[str, Union[str, Dict]]]: a dictionary that stores
the source setting content
"""
return self.gb.get_source_setting_dict(source_name)
def clear_source_memory(self) -> bool:
"""
Clears source memory
Returns:
bool: True if successfully cleared, False if not
"""
return self.gb.clear_source_memory()
def get_all_source_names(self) -> List[str]:
"""
Returns list of all source names
Returns: List[str] : list of source names
"""
return self.gb.get_all_source_names()
def get_src_setting_name(self, source_name: str) -> Union[bool, str]:
"""given a source name, return the setting name applied to the source
Args:
source_name (str): the name that identify the source
Returns:
Union[bool, str]: if the source is found, return the setting name
applied to the source, else return false
"""
return self.gb.get_src_setting_name(source_name)
###########################################################################
# Transcribe #
###########################################################################
def transcribe(self, sources: List[str] = None) -> Tuple[List[str], List[str]]:
"""given a list of the source name, and transcribe the sources
Args:
sources (List[str], optional): a list of source name, which
can be either a list of source paths or the file name of the
source file without the file extension
if sources is None, the default is to transcribe all sources
that have been configured
        Returns:
            Tuple[List[str], List[str]]:
                a tuple of two lists of strings: the first contains files
                that are not valid inputs, the second contains files that
                failed to be processed
"""
return self.gb.transcribe(sources)
###########################################################################
# Setting (Profile) #
###########################################################################
def create_new_setting(
self, name: str, setting: Dict[str, str], overwrite: bool = True
) -> bool:
"""
Creates a new setting profile
Args:
name: str: Name to assign to the newly created setting profile
setting : Dict[str, str]: Dictionary representation of the setting
            overwrite : bool: whether to overwrite an existing setting with the same name; defaults to True
Returns:
True if setting was successfully created, false if not
"""
return self.gb.create_new_setting(name, setting, overwrite)
def save_setting(self, setting_name: str) -> bool:
"""
Saves the given setting
Args:
setting_name: str: Name of the setting to save
Returns:
Bool: True if setting was successfully saved, false if not
"""
return self.gb.save_setting(setting_name)
def get_setting_dict(self, setting_name: str) -> Union[bool, SettingDict]:
"""
Given a setting name, returns the setting content in a dictionary
Args:
setting_name (str): name that identifies a setting
Returns:
Union[bool, SettingDict]: if the setting is found, returns its setting
content stored in a dictionary, else returns false
"""
return self.gb.get_setting_dict(setting_name)
def get_all_settings_data(self) -> Dict[str, SettingDict]:
"""
        Return the content of every stored setting, keyed by setting name
        Returns:
            Dict[str, SettingDict]: mapping from setting name to its setting content
"""
return self.gb.get_all_settings_data()
def get_all_profile_names(self) -> List[str]:
"""get the names fo available settings
Returns:
List[str]: a list of available setting names
"""
return self.gb.get_all_settings_names()
def rename_setting(self, old_name: str, new_name: str) -> bool:
"""
Renames a given setting to a given new name
Args:
old_name: str: original name of the setting to rename
new_name: str: name to rename the setting to
Returns:
Bool: True if setting was successfully renamed, false if not
"""
return self.gb.rename_setting(old_name, new_name)
def update_setting(self, setting_name: str, new_setting: Dict[str, str]) -> bool:
"""
Updates a given setting to a newly given structure
Args:
setting_name: str: name of the setting to update
new_setting: Dict[str, str]: dictionary representation of
the new structure of the setting
Returns:
Bool: true if setting was successfully updated, false if not
"""
return self.gb.update_setting(setting_name, new_setting)
def get_plugin_setting(self, setting_name: str) -> bool | list[str]:
"""
Accesses the plugin setting of a given setting
Args:
setting_name: str: name of the setting to get the plugin setting of
Returns:
            bool | list[str]: the plugin setting of the given profile, or False if it cannot be retrieved
"""
return self.gb.get_plugin_setting(setting_name)
def remove_setting(self, setting_name: str) -> bool:
"""
Removes the given setting
Args:
setting_name: str: name of the setting to remove
Returns:
Bool: True if setting was successfully removed, false if not
"""
return self.gb.remove_setting(setting_name)
def remove_multiple_settings(self, setting_names: List[str]) -> bool:
"""
Removes the given list of settings
Args:
setting_name: List[str]: names of the setting to remove
Returns:
Bool: True if all settings were successfully removed, false if not
"""
return self.gb.remove_multiple_settings(setting_names)
def is_setting(self, name: str) -> bool:
"""
Determines if a given setting name corresponds to an existing setting
Args:
            name: str: name of the setting to search for
Returns:
Bool: True if given setting is an existing setting, false if not
"""
return self.gb.is_setting(name)
def apply_setting_to_source(
self, source: str, setting: str, overwrite: bool = True
) -> bool:
"""
Applies a given setting to a given source
Args:
source: str: name of the source to which to apply the given setting profile
setting: str: name of the setting to apply to the given source
            overwrite: bool: determines whether an existing setting should be overwritten.
Defaults to true
Returns:
Bool: true if setting was successfully applied, false if not
"""
return self.gb.apply_setting_to_source(source, setting, overwrite)
def apply_setting_to_sources(
self, sources: List[str], setting: str, overwrite: bool = True
) -> bool:
"""
Applies a given setting to a given list of sources
Args:
sources: List[str]: list of names of the sources to which to apply the given setting profile
setting: str: name of the setting to apply to the given sources
            overwrite: bool: determines whether an existing setting should be overwritten.
Defaults to true
Returns:
Bool: true if setting was successfully applied, false if not
"""
return self.gb.apply_setting_to_sources(sources, setting, overwrite)
def is_setting_in_use(self, setting_name: str) -> bool:
"""check if a setting is being used by any source
Args:
setting_name (str): the name of the setting
Returns:
bool: return true if the setting is being used, false otherwise
"""
return self.gb.is_setting_in_use(setting_name)
def get_default_profile_setting_name(self) -> str:
"""get the name of current default setting
Returns:
str: the name of current default setting
"""
return self.gb.get_default_profile_setting_name()
def get_default_engine_setting_name(self) -> str:
"""get the default engine setting name
Returns:
str: a string that represent the default engine setting
"""
return self.gb.get_default_engine_setting_name()
def set_default_setting(self, setting_name) -> bool:
"""set the default setting to setting name
Args:
setting_name (str): the name of the default setting
Returns:
bool: true if default setting is set correctly
"""
return self.gb.set_default_setting(setting_name)
###########################################################################
# Plugin Suite #
###########################################################################
def register_plugin_suite(self, plugin_source: str) -> Union[List[str], str]:
"""
Registers a gailbot plugin suite
Args:
plugin_source : str: Name of the plugin suite to register
Returns:
return a list of plugin name if the plugin is registered,
return the string that stores the error message if the plugin suite
is not registered
"""
return self.gb.register_plugin_suite(plugin_source)
def get_plugin_suite(self, suite_name) -> PluginSuite:
"""
Gets the plugin suite with a given name
Args:
suite_name: string name of the given plugin suite
Returns:
PluginSuite with the given name
"""
return self.gb.get_plugin_suite(suite_name)
def is_plugin_suite(self, suite_name: str) -> bool:
"""
Determines if a given plugin suite is an existing plugin suite
Args:
suite_name: str: name of the plugin suite of which to determine existence
Returns:
Bool: true if given plugin suite exists, false if not
"""
return self.gb.is_plugin_suite(suite_name)
def delete_plugin_suite(self, suite_name: str) -> bool:
"""
Removes the given plugin suite
Args:
suite_name: str: name of the plugin suite to delete
Returns:
Bool: true if plugin suite was successfully removed, false if not
"""
return self.gb.delete_plugin_suite(suite_name)
def delete_plugin_suites(self, suite_names: List[str]) -> bool:
"""
Removes the given list of plugin suites
Args:
suite_name: List[str]: list of names of the plugin suites to delete
Returns:
Bool: true if all plugin suites were successfully removed, false if not
"""
return self.gb.delete_plugin_suites(suite_names)
def add_progress_display(self, source: str, displayer: Callable) -> bool:
"""
Add a function displayer to track for the progress of source,
Args:
source (str): the name of the source
displayer (Callable): displayer is a function that takes in a string as
argument, and the string encodes the progress of
the source
Returns:
bool: return true if the displayer is added, false otherwise
"""
return self.gb.add_progress_display(source, displayer)
def get_all_plugin_suites(self) -> List[str]:
"""get names of available plugin suites
Returns:
List[str]: a list of available plugin suites name
"""
return self.gb.get_all_plugin_suites()
def get_plugin_suite_metadata(self, suite_name: str) -> Dict[str, str]:
"""get the metadata of a plugin suite identified by suite name
Args:
suite_name (str): the name of the suite
Returns:
MetaData: a MetaData object that stores the suite's metadata,
"""
return self.gb.get_plugin_suite_metadata(suite_name)
def get_plugin_suite_dependency_graph(
self, suite_name: str
) -> Dict[str, List[str]]:
"""get the dependency map of the plugin suite identified by suite_name
Args:
suite_name (str): the name of the suite
Returns:
Dict[str, List[str]]: the dependency graph of the suite
"""
return self.gb.get_plugin_suite_dependency_graph(suite_name)
def get_plugin_suite_documentation_path(self, suite_name: str) -> str:
"""get the path to the documentation map of the plugin suite identified by suite_name
Args:
suite_name (str): the name of the suite
Returns:
str: the path to the documentation file
"""
return self.gb.get_plugin_suite_documentation_path(suite_name)
def is_suite_in_use(self, suite_name: str) -> bool:
"""given a suite_name, check if this suite is used
in any of the setting
Args:
suite_name (str): the name of the plugin suite
Returns:
bool: return true if the suite is used in any of the setting,
false otherwise
"""
return self.gb.is_suite_in_use(suite_name)
def is_official_suite(self, suite_name: str) -> bool:
"""given a suite_name, check if the suite identified by the suite_name
is official
Args:
suite_name (str): the name of the suite
Returns:
bool: true if the suite is official false otherwise
"""
return self.gb.is_official_suite(suite_name)
def get_suite_source_path(self, suite_name: str) -> str:
"""
given the name of the suite , return the path to the source
code of the suite
"""
return self.gb.get_suite_path(suite_name)
###########################################################################
# Engines #
###########################################################################
def get_engine_setting_names(self) -> List[str]:
"""get a list of available engine setting name
Returns:
List[str]: the list of engine setting name
"""
return self.gb.get_engine_setting_names()
def add_new_engine(self, name, setting, overwrite=False) -> bool:
"""add a new engine setting
Args:
name (str): the name of the engine setting
setting (Dict[str, str]): the setting data stored in a dictionary
overwrite (bool, optional): if True, overwrite the existing
engine setting with the same name. Defaults to False.
Returns:
bool: return True if the engine setting is successfully created
"""
return self.gb.add_new_engine(name, setting, overwrite)
def remove_engine_setting(self, name) -> bool:
"""remove the engine setting identified by name
Args:
name (str): the name of the engine setting to be removed
Returns:
bool: return True if the engine setting is successfully removed
"""
return self.gb.remove_engine_setting(name)
def update_engine_setting(self, name: str, setting_data: Dict[str, str]) -> bool:
"""update the engine setting identified by name
Args:
name (str): the name of the engine setting to be updated
setting_data (Dict[str, str]): the content of the new setting
Returns:
bool: return True if the engine setting is successfully updated
"""
return self.gb.update_engine_setting(name, setting_data)
def get_engine_setting_data(self, name: str) -> Union[bool, Dict[str, str]]:
"""get the engine setting data
Args:
name (str): the name of the engine setting
Returns:
Union[bool, Dict[str, str]]: if the engine setting name is available
return the engine setting data as stored in a dictionary, else return False
"""
return self.gb.get_engine_setting_data(name)
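# A minimal usage sketch for the engine-setting wrappers above and below
# (hypothetical: `api` is an instance of this controller; the setting name
# and payload are invented):
#
#     if not api.is_engine_setting("my-engine"):
#         api.add_new_engine("my-engine", {"engine": "example"})
#     data = api.get_engine_setting_data("my-engine")  # Dict, or False if missing
#     if data and not api.is_engine_setting_in_use("my-engine"):
#         api.update_engine_setting("my-engine", {"engine": "example-v2"})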
def is_engine_setting_in_use(self, name: str) -> bool:
"""check if the engine setting identified by name is in use
Args:
name (str): the name of the engine setting
Returns:
bool: return true if the engine setting is in use, false otherwise
"""
return self.gb.is_engine_setting_in_use(name)
def is_engine_setting(self, name: str) -> bool:
"""check if the given name identifies an engine setting
Args:
name (str): the name of the engine setting
Returns:
bool: true if an engine setting with the given name exists, false otherwise
"""
return self.gb.is_engine_setting(name)
| PypiClean |
/Mopidy_Multisonic-0.5.2-py3-none-any.whl/mopidy_multisonic/lookup.py
from mopidy import models
from .backend import logger
from . import cache
from . import uri_parser
from . import httpclient
from . import browser
def log_subsonic_error(error):
logger.error("Failed to lookup: " + str(error["code"]) + " : " + error["message"])
def build_track_model(song, http_client_config):
artists = [models.Artist(
name=song["artist"],
uri=uri_parser.build_artist(
str(song["artistId"]),
http_client_config.name
)
)]
album = models.Album(
uri=uri_parser.build_album(
str(song["albumId"]),
http_client_config.name
),
name=song["album"],
date=str(song["year"]),
artists=artists
)
track = models.Track(
uri=uri_parser.build_track(
str(song["id"]),
http_client_config.name
),
name=song["title"],
date=str(song["year"]),
length=song["duration"]*1000,
disc_no=song["discNumber"],
track_no=song["track"],
artists=artists,
album=album,
)
return track
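def _example_build_track_model():
    """A hedged, self-contained sketch (not part of the original module):
    builds a Track from a fake Subsonic song payload. The keys mirror what
    build_track_model() reads above; every value here is invented."""
    from types import SimpleNamespace
    song = {
        "id": 11, "title": "Demo Song", "artist": "Demo Artist",
        "artistId": 3, "album": "Demo Album", "albumId": 5,
        "year": 2020, "duration": 200, "discNumber": 1, "track": 2,
    }
    # stands in for the real backend config; only .name is used here
    http_client_config = SimpleNamespace(name="demo-backend")
    return build_track_model(song, http_client_config)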
def lookup_track(http_client_config, uri):
id = uri_parser.get_id(uri)
data = httpclient.get_track(http_client_config, id).json()
data = data["subsonic-response"]
if data["status"] == 'failed':
log_subsonic_error(data["error"])
return []
song = data["song"]
track = build_track_model(song, http_client_config)
return [track]
def lookup_album(http_client_config, uri):
id = uri_parser.get_id(uri)
data = httpclient.get_album(http_client_config, id).json()
data = data["subsonic-response"]
if data["status"] == 'failed':
log_subsonic_error(data["error"])
return []
data = data["album"]
artists = [models.Artist(
name=data["artist"],
uri=uri_parser.build_artist(
str(data["artistId"]),
http_client_config.name
)
)]
album = models.Album(
uri=uri_parser.build_album(
str(data["id"]),
http_client_config.name
),
name=data["name"],
date=str(data["year"]),
artists=artists
)
tracks = []
for song in data["song"]:
track = models.Track(
uri=uri_parser.build_track(
str(song["id"]),
http_client_config.name
),
name=song["title"],
date=str(song["year"]),
length=song["duration"]*1000,
disc_no=song["discNumber"],
track_no=song["track"],
artists=artists,
album=album,
)
tracks.append(track)
return tracks
def lookup_artist(http_client_config, uri):
id = uri_parser.get_id(uri)
data = httpclient.get_artist(http_client_config, id).json()
data = data["subsonic-response"]
if data["status"] == 'failed':
log_subsonic_error(data["error"])
return []
data = data["artist"]
artists = [models.Artist(
name=data["name"],
uri=uri_parser.build_artist(
str(data["id"]),
http_client_config.name
)
)]
tracks = []
for album_data in data["album"]:
tracks += lookup_album(
http_client_config,
uri_parser.build_album(
str(data["id"]),
http_client_config.name
)
)
return tracks
def lookup(http_client_configs, uri):
http_client_config = browser.get_http_client_config(http_client_configs, uri)
tracks = cache.fetch_model(uri)
if tracks:
return tracks
if uri_parser.is_track(uri):
return lookup_track(http_client_config, uri)
if uri_parser.is_album(uri):
return lookup_album(http_client_config, uri)
if uri_parser.is_artist(uri):
return lookup_artist(http_client_config, uri)
logger.error("Can't lookup " + uri)
return []
| PypiClean |
/NVDA-addonTemplate-0.5.2.zip/NVDA-addonTemplate-0.5.2/NVDAAddonTemplate/data/{{cookiecutter.project_slug}}/scons-local-2.5.0/SCons/Scanner/Dir.py
__revision__ = "src/engine/SCons/Scanner/Dir.py rel_2.5.0:3543:937e55cd78f7 2016/04/09 11:29:54 bdbaddog"
import SCons.Node.FS
import SCons.Scanner
def only_dirs(nodes):
is_Dir = lambda n: isinstance(n.disambiguate(), SCons.Node.FS.Dir)
return list(filter(is_Dir, nodes))
def DirScanner(**kw):
"""Return a prototype Scanner instance for scanning
directories for on-disk files"""
kw['node_factory'] = SCons.Node.FS.Entry
kw['recursive'] = only_dirs
return SCons.Scanner.Base(scan_on_disk, "DirScanner", **kw)
def DirEntryScanner(**kw):
"""Return a prototype Scanner instance for "scanning"
directory Nodes for their in-memory entries"""
kw['node_factory'] = SCons.Node.FS.Entry
kw['recursive'] = None
return SCons.Scanner.Base(scan_in_memory, "DirEntryScanner", **kw)
skip_entry = {}
skip_entry_list = [
'.',
'..',
'.sconsign',
# Used by the native dblite.py module.
'.sconsign.dblite',
# Used by dbm and dumbdbm.
'.sconsign.dir',
# Used by dbm.
'.sconsign.pag',
# Used by dumbdbm.
'.sconsign.dat',
'.sconsign.bak',
# Used by some dbm emulations using Berkeley DB.
'.sconsign.db',
]
for skip in skip_entry_list:
skip_entry[skip] = 1
skip_entry[SCons.Node.FS._my_normcase(skip)] = 1
do_not_scan = lambda k: k not in skip_entry
def scan_on_disk(node, env, path=()):
"""
Scans a directory for on-disk files and directories therein.
Looking up the entries will add these to the in-memory Node tree
representation of the file system, so all we have to do is just
that and then call the in-memory scanning function.
"""
try:
flist = node.fs.listdir(node.get_abspath())
except (IOError, OSError):
return []
e = node.Entry
for f in filter(do_not_scan, flist):
# Add ./ to the beginning of the file name so if it begins with a
# '#' we don't look it up relative to the top-level directory.
e('./' + f)
return scan_in_memory(node, env, path)
def scan_in_memory(node, env, path=()):
"""
"Scans" a Node.FS.Dir for its in-memory entries.
"""
try:
entries = node.entries
except AttributeError:
# It's not a Node.FS.Dir (or doesn't look enough like one for
# our purposes), which can happen if a target list containing
# mixed Node types (Dirs and Files, for example) has a Dir as
# the first entry.
return []
entry_list = sorted(filter(do_not_scan, list(entries.keys())))
return [entries[n] for n in entry_list]
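# Hedged usage sketch (not part of the original file): a tool or SConscript
# could attach these scanners to directory-producing builders, e.g.
#
#     scanner = DirScanner()             # rescans directories found on disk
#     entry_scanner = DirEntryScanner()  # only walks in-memory entries
#
# Exact registration with an Environment depends on the SCons version.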
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
| PypiClean |
/FightMan01dc.pymod-2.0.4.tar.gz/FightMan01dc.pymod-2.0.4/discord/invite.py
from .asset import Asset
from .utils import parse_time, snowflake_time
from .mixins import Hashable
from .enums import ChannelType, VerificationLevel, try_enum
from collections import namedtuple
class PartialInviteChannel(namedtuple('PartialInviteChannel', 'id name type')):
"""Represents a "partial" invite channel.
This model will be given when the user is not part of the
guild the :class:`Invite` resolves to.
.. container:: operations
.. describe:: x == y
Checks if two partial channels are the same.
.. describe:: x != y
Checks if two partial channels are not the same.
.. describe:: hash(x)
Return the partial channel's hash.
.. describe:: str(x)
Returns the partial channel's name.
Attributes
-----------
name: :class:`str`
The partial channel's name.
id: :class:`int`
The partial channel's ID.
type: :class:`ChannelType`
The partial channel's type.
"""
__slots__ = ()
def __str__(self):
return self.name
@property
def mention(self):
""":class:`str`: The string that allows you to mention the channel."""
return '<#%s>' % self.id
@property
def created_at(self):
""":class:`datetime.datetime`: Returns the channel's creation time in UTC."""
return snowflake_time(self.id)
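# Hedged illustration (not part of the original file; the id and name are
# invented):
#
#     pc = PartialInviteChannel(id=1234, name="general", type=ChannelType.text)
#     str(pc)     # -> "general"
#     pc.mention  # -> "<#1234>"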
class PartialInviteGuild:
"""Represents a "partial" invite guild.
This model will be given when the user is not part of the
guild the :class:`Invite` resolves to.
.. container:: operations
.. describe:: x == y
Checks if two partial guilds are the same.
.. describe:: x != y
Checks if two partial guilds are not the same.
.. describe:: hash(x)
Return the partial guild's hash.
.. describe:: str(x)
Returns the partial guild's name.
Attributes
-----------
name: :class:`str`
The partial guild's name.
id: :class:`int`
The partial guild's ID.
verification_level: :class:`VerificationLevel`
The partial guild's verification level.
features: List[:class:`str`]
A list of features the guild has. See :attr:`Guild.features` for more information.
icon: Optional[:class:`str`]
The partial guild's icon.
banner: Optional[:class:`str`]
The partial guild's banner.
splash: Optional[:class:`str`]
The partial guild's invite splash.
description: Optional[:class:`str`]
The partial guild's description.
"""
__slots__ = ('_state', 'features', 'icon', 'banner', 'id', 'name', 'splash',
'verification_level', 'description')
def __init__(self, state, data, id):
self._state = state
self.id = id
self.name = data['name']
self.features = data.get('features', [])
self.icon = data.get('icon')
self.banner = data.get('banner')
self.splash = data.get('splash')
self.verification_level = try_enum(VerificationLevel, data.get('verification_level'))
self.description = data.get('description')
def __str__(self):
return self.name
def __repr__(self):
return '<{0.__class__.__name__} id={0.id} name={0.name!r} features={0.features} ' \
'description={0.description!r}>'.format(self)
@property
def created_at(self):
""":class:`datetime.datetime`: Returns the guild's creation time in UTC."""
return snowflake_time(self.id)
@property
def icon_url(self):
""":class:`Asset`: Returns the guild's icon asset."""
return self.icon_url_as()
def icon_url_as(self, *, format='webp', size=1024):
"""The same operation as :meth:`Guild.icon_url_as`."""
return Asset._from_guild_image(self._state, self.id, self.icon, 'icons', format=format, size=size)
@property
def banner_url(self):
""":class:`Asset`: Returns the guild's banner asset."""
return self.banner_url_as()
def banner_url_as(self, *, format='webp', size=2048):
"""The same operation as :meth:`Guild.banner_url_as`."""
return Asset._from_guild_image(self._state, self.id, self.banner, 'banners', format=format, size=size)
@property
def splash_url(self):
""":class:`Asset`: Returns the guild's invite splash asset."""
return self.splash_url_as()
def splash_url_as(self, *, format='webp', size=2048):
"""The same operation as :meth:`Guild.splash_url_as`."""
return Asset._from_guild_image(self._state, self.id, self.splash, 'splashes', format=format, size=size)
class Invite(Hashable):
r"""Represents a Discord :class:`Guild` or :class:`abc.GuildChannel` invite.
Depending on the way this object was created, some of the attributes can
have a value of ``None``.
.. container:: operations
.. describe:: x == y
Checks if two invites are equal.
.. describe:: x != y
Checks if two invites are not equal.
.. describe:: hash(x)
Returns the invite hash.
.. describe:: str(x)
Returns the invite URL.
The following table illustrates what methods will obtain the attributes:
+------------------------------------+----------------------------------------------------------+
| Attribute | Method |
+====================================+==========================================================+
| :attr:`max_age` | :meth:`abc.GuildChannel.invites`\, :meth:`Guild.invites` |
+------------------------------------+----------------------------------------------------------+
| :attr:`max_uses` | :meth:`abc.GuildChannel.invites`\, :meth:`Guild.invites` |
+------------------------------------+----------------------------------------------------------+
| :attr:`created_at` | :meth:`abc.GuildChannel.invites`\, :meth:`Guild.invites` |
+------------------------------------+----------------------------------------------------------+
| :attr:`temporary` | :meth:`abc.GuildChannel.invites`\, :meth:`Guild.invites` |
+------------------------------------+----------------------------------------------------------+
| :attr:`uses` | :meth:`abc.GuildChannel.invites`\, :meth:`Guild.invites` |
+------------------------------------+----------------------------------------------------------+
| :attr:`approximate_member_count` | :meth:`Client.fetch_invite` |
+------------------------------------+----------------------------------------------------------+
| :attr:`approximate_presence_count` | :meth:`Client.fetch_invite` |
+------------------------------------+----------------------------------------------------------+
If it's not in the table above then it is available by all methods.
Attributes
-----------
max_age: :class:`int`
How long before the invite expires, in seconds. A value of 0 indicates that it doesn't expire.
code: :class:`str`
The URL fragment used for the invite.
guild: Union[:class:`Guild`, :class:`PartialInviteGuild`]
The guild the invite is for.
revoked: :class:`bool`
Indicates if the invite has been revoked.
created_at: :class:`datetime.datetime`
A datetime object denoting the time the invite was created.
temporary: :class:`bool`
Indicates that the invite grants temporary membership.
If ``True``, members who joined via this invite will be kicked upon disconnect.
uses: :class:`int`
How many times the invite has been used.
max_uses: :class:`int`
How many times the invite can be used.
inviter: :class:`User`
The user who created the invite.
approximate_member_count: Optional[:class:`int`]
The approximate number of members in the guild.
approximate_presence_count: Optional[:class:`int`]
The approximate number of members currently active in the guild.
This includes idle, dnd, online, and invisible members. Offline members are excluded.
channel: Union[:class:`abc.GuildChannel`, :class:`PartialInviteChannel`]
The channel the invite is for.
"""
__slots__ = ('max_age', 'code', 'guild', 'revoked', 'created_at', 'uses',
'temporary', 'max_uses', 'inviter', 'channel', '_state',
'approximate_member_count', 'approximate_presence_count')
def __init__(self, *, state, data):
self._state = state
self.max_age = data.get('max_age')
self.code = data.get('code')
self.guild = data.get('guild')
self.revoked = data.get('revoked')
self.created_at = parse_time(data.get('created_at'))
self.temporary = data.get('temporary')
self.uses = data.get('uses')
self.max_uses = data.get('max_uses')
self.approximate_presence_count = data.get('approximate_presence_count')
self.approximate_member_count = data.get('approximate_member_count')
inviter_data = data.get('inviter')
self.inviter = None if inviter_data is None else self._state.store_user(inviter_data)
self.channel = data.get('channel')
@classmethod
def from_incomplete(cls, *, state, data):
guild_id = int(data['guild']['id'])
channel_id = int(data['channel']['id'])
guild = state._get_guild(guild_id)
if guild is not None:
channel = guild.get_channel(channel_id)
else:
channel_data = data['channel']
guild_data = data['guild']
channel_type = try_enum(ChannelType, channel_data['type'])
channel = PartialInviteChannel(id=channel_id, name=channel_data['name'], type=channel_type)
guild = PartialInviteGuild(state, guild_data, guild_id)
data['guild'] = guild
data['channel'] = channel
return cls(state=state, data=data)
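# Hedged sketch of the fallback above (payload shape invented; `state` is
# some connection state): when the guild is not cached, from_incomplete
# substitutes the Partial* stand-ins defined earlier in this file:
#
#     data = {"code": "abc123",
#             "guild": {"id": "1", "name": "G", "verification_level": 2},
#             "channel": {"id": "2", "name": "general", "type": 0}}
#     invite = Invite.from_incomplete(state=state, data=data)
#     type(invite.guild)    # PartialInviteGuild when the guild is not cached
#     type(invite.channel)  # PartialInviteChannel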
def __str__(self):
return self.url
def __repr__(self):
return '<Invite code={0.code!r} guild={0.guild!r} ' \
'online={0.approximate_presence_count} ' \
'members={0.approximate_member_count}>'.format(self)
def __hash__(self):
return hash(self.code)
@property
def id(self):
""":class:`str`: Returns the proper code portion of the invite."""
return self.code
@property
def url(self):
""":class:`str`: A property that retrieves the invite URL."""
return 'https://discord.gg/' + self.code
async def delete(self, *, reason=None):
"""|coro|
Revokes the instant invite.
You must have the :attr:`~Permissions.manage_channels` permission to do this.
Parameters
-----------
reason: Optional[:class:`str`]
The reason for deleting this invite. Shows up on the audit log.
Raises
-------
Forbidden
You do not have permissions to revoke invites.
NotFound
The invite is invalid or expired.
HTTPException
Revoking the invite failed.
"""
await self._state.http.delete_invite(self.code, reason=reason)
| PypiClean |
/EnergyCapSdk-8.2304.4743.tar.gz/EnergyCapSdk-8.2304.4743/energycap/sdk/models/address_child_py3.py
from msrest.serialization import Model
class AddressChild(Model):
"""AddressChild.
:param address_type_id: The address type identifier
:type address_type_id: int
:param line1: The line 1 of the address <span
class='property-internal'>Must be between 0 and 100 characters</span>
:type line1: str
:param line2: The line 2 of the address <span
class='property-internal'>Must be between 0 and 100 characters</span>
:type line2: str
:param line3: The line 3 of the address <span
class='property-internal'>Must be between 0 and 100 characters</span>.
Default value: "Æ" .
:type line3: str
:param city: The city of the place <span class='property-internal'>Must be
between 0 and 100 characters</span>
:type city: str
:param state: The state of the place <span class='property-internal'>Must
be between 0 and 100 characters</span>
:type state: str
:param country: The country of the place <span
class='property-internal'>Must be between 0 and 64 characters</span>
:type country: str
:param postal_code: The postal code of the place <span
class='property-internal'>Must be between 0 and 32 characters</span>
:type postal_code: str
:param latitude: The latitude of the place
Required when the country is not United States or Canada <span
class='property-internal'>Must be between -90 and 90</span>
:type latitude: float
:param longitude: The longitude of the place
Required when the country is not United States or Canada <span
class='property-internal'>Must be between -180 and 180</span>
:type longitude: float
"""
_validation = {
'line1': {'max_length': 100, 'min_length': 0},
'line2': {'max_length': 100, 'min_length': 0},
'line3': {'max_length': 100, 'min_length': 0},
'city': {'max_length': 100, 'min_length': 0},
'state': {'max_length': 100, 'min_length': 0},
'country': {'max_length': 64, 'min_length': 0},
'postal_code': {'max_length': 32, 'min_length': 0},
'latitude': {'maximum': 90, 'minimum': -90},
'longitude': {'maximum': 180, 'minimum': -180},
}
_attribute_map = {
'address_type_id': {'key': 'addressTypeId', 'type': 'int'},
'line1': {'key': 'line1', 'type': 'str'},
'line2': {'key': 'line2', 'type': 'str'},
'line3': {'key': 'line3', 'type': 'str'},
'city': {'key': 'city', 'type': 'str'},
'state': {'key': 'state', 'type': 'str'},
'country': {'key': 'country', 'type': 'str'},
'postal_code': {'key': 'postalCode', 'type': 'str'},
'latitude': {'key': 'latitude', 'type': 'float'},
'longitude': {'key': 'longitude', 'type': 'float'},
}
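# Hedged construction sketch (not part of the generated file; the values
# are invented). msrest uses _attribute_map above to serialize snake_case
# attributes to the wire names, e.g. postal_code -> "postalCode":
#
#     addr = AddressChild(address_type_id=1, line1="1 Main St",
#                         city="Springfield", state="PA",
#                         country="United States", postal_code="19064")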
def __init__(self, *, address_type_id: int=None, line1: str=None, line2: str=None, line3: str="Æ", city: str=None, state: str=None, country: str=None, postal_code: str=None, latitude: float=None, longitude: float=None, **kwargs) -> None:
super(AddressChild, self).__init__(**kwargs)
self.address_type_id = address_type_id
self.line1 = line1
self.line2 = line2
self.line3 = line3
self.city = city
self.state = state
self.country = country
self.postal_code = postal_code
self.latitude = latitude
self.longitude = longitude
| PypiClean |
/OctoBot-Trading-2.4.23.tar.gz/OctoBot-Trading-2.4.23/octobot_trading/personal_data/trades/trade_pnl.py
import decimal
import octobot_trading.constants as constants
import octobot_trading.errors as errors
import octobot_trading.enums as enums
import octobot_trading.personal_data.orders.order_util as order_util
import octobot_commons.symbols as symbols
class TradePnl:
def __init__(self, entries, closes):
self.entries = entries
self.closes = closes
def get_entry_time(self) -> float:
try:
return min(
entry.executed_time
for entry in self.entries
)
except ValueError as err:
raise errors.IncompletePNLError from err
def get_close_time(self) -> float:
try:
return max(
close.executed_time
for close in self.closes
)
except ValueError as err:
raise errors.IncompletePNLError from err
def get_total_entry_quantity(self) -> decimal.Decimal:
return sum(
entry.executed_quantity
for entry in self.entries
) or constants.ZERO
def get_total_close_quantity(self) -> decimal.Decimal:
return sum(
close.executed_quantity
for close in self.closes
) or constants.ZERO
def get_entry_price(self) -> decimal.Decimal:
try:
return sum(
entry.executed_price
for entry in self.entries
) / len(self.entries)
except ZeroDivisionError as err:
raise errors.IncompletePNLError from err
def get_close_price(self) -> decimal.Decimal:
try:
return sum(
close.executed_price
for close in self.closes
) / len(self.closes)
except ZeroDivisionError as err:
raise errors.IncompletePNLError from err
def get_entry_total_cost(self):
return self.get_entry_price() * self.get_total_entry_quantity()
def get_close_total_cost(self):
return self.get_close_price() * self.get_total_close_quantity()
def get_close_ratio(self):
try:
if self.entries[0].side is enums.TradeOrderSide.BUY:
return min(self.get_total_close_quantity() / self.get_total_entry_quantity(), constants.ONE)
return min(self.get_close_total_cost() / self.get_entry_total_cost(), constants.ONE)
except (IndexError, decimal.DivisionByZero) as err:
raise errors.IncompletePNLError from err
def get_closed_entry_value(self) -> decimal.Decimal:
return self.get_entry_price() * self.get_total_entry_quantity()
def get_closed_close_value(self) -> decimal.Decimal:
return self.get_close_price() * self.get_closed_pnl_quantity()
def get_closed_pnl_quantity(self):
try:
if self.entries[0].side is enums.TradeOrderSide.BUY:
# entry is a buy, exit is a sell: take exit size capped at the entry size
# (can't account in pnl for more than what has been bought)
return min(self.get_total_close_quantity(), self.get_total_entry_quantity())
# entry is a sell, exit is a buy: take closing size capped at the equivalent entry cost
# (can't account in pnl for more than what has been sold for entry)
entry_cost = self.get_entry_price() * self.get_total_entry_quantity()
max_close_quantity = entry_cost / self.get_close_price()
return min(self.get_total_close_quantity(), max_close_quantity)
except IndexError as err:
raise errors.IncompletePNLError from err
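# Hedged worked example for the two branches above (numbers invented, no
# fees): a long trade that buys 2 units at 100 and sells 3 units at 110
# caps the closed quantity at the 2 units actually bought; a short trade
# that sells 2 units at 100 (entry cost 200) and buys back at 50 caps the
# closed quantity at 200 / 50 = 4 units.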
def _get_fees(self, trade) -> decimal.Decimal:
if not trade.fee:
return constants.ZERO
symbol = symbols.parse_symbol(trade.symbol)
# return fees denominated in quote
fees = order_util.get_fees_for_currency(trade.fee, symbol.quote)
return fees \
+ order_util.get_fees_for_currency(trade.fee, symbol.base) * trade.executed_price
def get_paid_special_fees_by_currency(self) -> dict:
"""
:return: a dict containing fees paid in currencies different from the base or quote of the trades pair
values are not converted into base currency of the trading pair
"""
if not self.entries:
return {}
try:
fees = {}
base_and_quote = symbols.parse_symbol(self.entries[0].symbol).base_and_quote()
for trade in (*self.entries, *self.closes):
if trade.fee:
currency = trade.fee[enums.FeePropertyColumns.CURRENCY.value]
if currency in base_and_quote:
# not a special fee
continue
if currency in fees:
fees[currency] += order_util.get_fees_for_currency(trade.fee, currency)
else:
fees[currency] = order_util.get_fees_for_currency(trade.fee, currency)
return fees
except IndexError as err:
raise errors.IncompletePNLError from err
def get_paid_regular_fees_in_quote(self) -> decimal.Decimal:
"""
:return: the total value (in quote) of paid fees when paid in base or quote of the trades pair
"""
return sum(
self._get_fees(trade)
for trade in (*self.entries, *self.closes)
) or constants.ZERO
def get_profits(self) -> (decimal.Decimal, decimal.Decimal):
"""
:return: the pnl profits as flat value and percent
"""
close_holdings = self.get_closed_close_value() - self.get_paid_regular_fees_in_quote()
entry_holdings = self.get_closed_entry_value() * self.get_close_ratio()
try:
percent_profit = (close_holdings * constants.ONE_HUNDRED / entry_holdings) - constants.ONE_HUNDRED
except decimal.DivisionByZero:
percent_profit = constants.ZERO
return (
close_holdings - entry_holdings,
percent_profit
)
| PypiClean |
/EOxServer-1.2.12-py3-none-any.whl/eoxserver/services/ows/common/v20/encoders.py
from django.conf import settings
import traceback
from itertools import chain
from lxml import etree
from lxml.builder import ElementMaker
from eoxserver.core.util.xmltools import XMLEncoder, NameSpace, NameSpaceMap
from eoxserver.services.ows.dispatch import filter_handlers
ns_xlink = NameSpace("http://www.w3.org/1999/xlink", "xlink")
ns_ows = NameSpace("http://www.opengis.net/ows/2.0", "ows", "http://schemas.opengis.net/ows/2.0/owsAll.xsd")
ns_xml = NameSpace("http://www.w3.org/XML/1998/namespace", "xml")
nsmap = NameSpaceMap(ns_ows)
OWS = ElementMaker(namespace=ns_ows.uri, nsmap=nsmap)
class OWS20Encoder(XMLEncoder):
def get_conf(self):
raise NotImplementedError
def get_http_service_url(self, request):
conf = self.get_conf()
if conf.http_service_url:
return conf.http_service_url
from eoxserver.services.urls import get_http_service_url
return get_http_service_url(request)
def encode_reference(self, node_name, href, reftype="simple"):
attributes = {ns_xlink("href"): href}
if reftype:
attributes[ns_xlink("type")] = reftype
return OWS(node_name, **attributes)
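# Hedged illustration (not part of the original file): for
# encode_reference("Reference", "http://example.com") the element built
# above serializes roughly as
#     <ows:Reference xlink:href="http://example.com" xlink:type="simple"/>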
def encode_service_identification(self, service, conf, profiles):
# get a list of versions in descending order from all active
# GetCapabilities handlers.
handlers = filter_handlers(
service=service, request="GetCapabilities"
)
versions = sorted(
set(chain(*[handler.versions for handler in handlers])),
reverse=True
)
elem = OWS("ServiceIdentification",
OWS("Title", conf.title),
OWS("Abstract", conf.abstract),
OWS("Keywords", *[
OWS("Keyword", keyword) for keyword in conf.keywords
]),
OWS("ServiceType", "OGC WCS", codeSpace="OGC")
)
elem.extend(
OWS("ServiceTypeVersion", version) for version in versions
)
elem.extend(
OWS("Profile", "http://www.opengis.net/%s" % profile)
for profile in profiles
)
elem.extend((
OWS("Fees", conf.fees),
OWS("AccessConstraints", conf.access_constraints)
))
return elem
def encode_service_provider(self, conf):
return OWS("ServiceProvider",
OWS("ProviderName", conf.provider_name),
self.encode_reference("ProviderSite", conf.provider_site),
OWS("ServiceContact",
OWS("IndividualName", conf.individual_name),
OWS("PositionName", conf.position_name),
OWS("ContactInfo",
OWS("Phone",
OWS("Voice", conf.phone_voice),
OWS("Facsimile", conf.phone_facsimile)
),
OWS("Address",
OWS("DeliveryPoint", conf.delivery_point),
OWS("City", conf.city),
OWS("AdministrativeArea", conf.administrative_area),
OWS("PostalCode", conf.postal_code),
OWS("Country", conf.country),
OWS(
"ElectronicMailAddress",
conf.electronic_mail_address
)
),
self.encode_reference(
"OnlineResource", conf.onlineresource
),
OWS("HoursOfService", conf.hours_of_service),
OWS("ContactInstructions", conf.contact_instructions)
),
OWS("Role", conf.role)
)
)
def encode_operations_metadata(self, request, service, versions):
get_handlers = filter_handlers(
service=service, versions=versions, method="GET"
)
post_handlers = filter_handlers(
service=service, versions=versions, method="POST"
)
all_handlers = sorted(
set(get_handlers + post_handlers),
key=lambda h: (getattr(h, "index", 10000), h.request)
)
http_service_url = self.get_http_service_url(request)
operations = []
for handler in all_handlers:
methods = []
if handler in get_handlers:
methods.append(
self.encode_reference("Get", http_service_url)
)
if handler in post_handlers:
post = self.encode_reference("Post", http_service_url)
post.append(
OWS("Constraint",
OWS("AllowedValues",
OWS("Value", "XML")
), name="PostEncoding"
)
)
methods.append(post)
operations.append(
OWS("Operation",
OWS("DCP",
OWS("HTTP", *methods)
),
# apply default values as constraints
*([
OWS("Constraint",
OWS("NoValues"),
OWS("DefaultValue", str(default)),
name=name
)
for name, default
in getattr(handler(), "constraints", {}).items()
] + [
OWS("Parameter",
OWS("AnyValue")
if allowed_values is None else
OWS("AllowedValues", *[
OWS("Value", value)
for value in allowed_values
]),
name=name
)
for name, allowed_values
in getattr(handler(), "additional_parameters", {}).items()
]),
name=handler.request
)
)
return OWS("OperationsMetadata", *operations)
class OWS20ExceptionXMLEncoder(XMLEncoder):
def encode_exception(self, message, version, code, locator=None, request=None, exception=None):
exception_attributes = {
"exceptionCode": str(code)
}
if locator:
exception_attributes["locator"] = str(locator)
exception_text = (OWS("ExceptionText", message),) if message else ()
report = OWS("ExceptionReport",
OWS("Exception", *exception_text, **exception_attributes),
version=version, **{ns_xml("lang"): "en"}
)
try:
if getattr(settings, 'DEBUG', False):
report.append(etree.Comment(traceback.format_exc()))
except Exception:
pass
return report
def get_schema_locations(self):
return nsmap.schema_locations
| PypiClean |
/GNS-1.0-py3-none-any.whl/gns/nested_sampler.py
import numpy as np
import matplotlib.pyplot as plt
import sys
import scipy.stats
import scipy.integrate
#import getdist
##########misc functions
def logAddArr(x, y, axis = None):
"""
logaddexp where x is a scalar and y is an array.
Returns a scalar in case that axis = None (exponentiates elements of array then adds them together).
np implementation on its own returns an array i.e. doesn't sum exponentiated elements of array.
Axis specifies axis to do summation of exponentials over, and if specified returns an array with shape of the remaining dimensions.
"""
yExp = np.exp(y)
logySum = np.log(yExp.sum(axis = axis))
return np.logaddexp(x, logySum)
def logAddArr2(x, y, indexes = (None,)):
"""
Alternative version of logAddArr that avoids over/ underflow errors of exponentiating the array y, to the same extent that np.logaddexp() does
Note however that it is slower than logAddArr, so in cases where over/ underflow isn't an issue, use that
Loops over each specified element of y (using rowcol values) and adds to log of sums.
By default loops over entire array, but to specify a certain row for e.g. a 2d array set indexes to (row_index, slice(None))
or for a certain column (slice(None), col_index)
"""
result = x
for l in np.nditer(y[indexes]):
result = np.logaddexp(result, l)
return result
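def _example_logAddArr2():
    """A hedged sanity check (not part of the original module): where
    overflow is not an issue, logAddArr and logAddArr2 should agree, and
    the indexes argument restricts the sum to part of the array."""
    x = 0.5
    y = np.array([[0.1, 0.2], [0.3, 0.4]])
    full = logAddArr2(x, y)                   # loops over every element of y
    row = logAddArr2(x, y, (0, slice(None)))  # only the first row of y
    assert np.isclose(full, logAddArr(x, y))
    return full, row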
def logsubexp(a, b):
"""
Only currently works for scalars.
Calculates log(exp(a) - exp(b)).
Subtracts max(a,b) from a and b before exponentiating,
then adds it back after taking the log, to avoid underflow when a and b are small
"""
maxab = max(a, b)
expa = np.exp(a - maxab)
expb = np.exp(b - maxab)
return maxab + np.log(expa - expb)
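def _example_logsubexp():
    """A hedged sanity check (not part of the original module): with the
    max(a, b) shift added back after the log, logsubexp stays finite where
    direct evaluation of log(exp(a) - exp(b)) would underflow to log(0)."""
    a, b = -1000.0, -1001.0  # np.exp(-1000) underflows to 0.0
    val = logsubexp(a, b)
    expected = a + np.log1p(-np.exp(b - a))
    assert np.isclose(val, expected)
    return val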
###########PDF related functions
def fitPriors(priorParams):
"""
Only currently handles one dimensional (independent priors). Scipy.stats multivariate functions do not have built in inverse CDF methods, so if I want to consider multivariate priors I may have to write my own code.
Note scipy.stats.uniform takes parameters loc and scale where the boundaries are defined to be
loc and loc + scale, so scale = upper - lower bound.
Returns list of fitted prior objects length of nDims (one function for each parameter)
"""
priorFuncs = []
priorType = priorParams[0,:]
param1Vec = priorParams[1,:]
param2Vec = priorParams[2,:]
for i in range(len(priorType)):
if priorType[i] == 1:
priorFunc = scipy.stats.uniform(param1Vec[i], param2Vec[i] - param1Vec[i])
elif priorType[i] == 2:
priorFunc = scipy.stats.norm(param1Vec[i], param2Vec[i])
else:
print("priors other than uniform and Gaussian not currently supported")
sys.exit(1)
priorFuncs.append(priorFunc)
return priorFuncs
def getPriorPdfs(priorObjs):
"""
Takes list of fitted prior objects, returns list of objects' .pdf() methods
"""
priorFuncsPdf = []
for obj in priorObjs:
priorFuncsPdf.append(obj.pdf)
return priorFuncsPdf
def getPriorLogPdfs(priorObjs):
"""
Takes list of fitted prior objects, returns list of objects' .logpdf() methods
"""
priorFuncsLogPdf = []
for obj in priorObjs:
priorFuncsLogPdf.append(obj.logpdf)
return priorFuncsLogPdf
def getPriorPpfs(priorObjs):
"""
Takes list of fitted prior objects, returns list of objects' .ppf() methods
"""
priorFuncsPpf = []
for obj in priorObjs:
priorFuncsPpf.append(obj.ppf)
return priorFuncsPpf
def invPrior(livePoints, priorFuncsPpf):
"""
take in an array of livepoints, each of which has values in [0,1] and nDims dimensions. Output a physical array of values corresponding to the priors for each parameter dimension.
"""
livePointsPhys = np.zeros_like(livePoints)
for i in range(len(priorFuncsPpf)):
livePointsPhys[:,i] = priorFuncsPpf[i](livePoints[:,i])
return livePointsPhys
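def _example_invPrior():
    """A hedged, self-contained sketch (not part of the original module):
    maps two unit-cube livepoints through a uniform U(0, 10) prior and a
    Gaussian N(0, 1) prior using the inverse-CDF transform above."""
    priorParams = np.array([[1., 2.],    # types: 1 = uniform, 2 = Gaussian
                            [0., 0.],    # lower bound / mean
                            [10., 1.]])  # upper bound / standard deviation
    priorFuncsPpf = getPriorPpfs(fitPriors(priorParams))
    livePoints = np.array([[0.5, 0.5],
                           [0.975, 0.841]])
    # first column maps to 5.0 and 9.75; second to 0.0 and roughly 1.0
    return invPrior(livePoints, priorFuncsPpf)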
def priorFuncsProd(livePoint, priorFuncsPdf):
"""
calculates pdf of prior for each parameter dimension, then multiplies these together to get the pdf of the prior (i.e. the prior pdf assuming the parameters are independent)
Works in linear space, but can easily be adapted if this ever leads to underflow errors (consider sum of log(pi(theta))).
"""
livePointPriorValues = np.zeros_like(livePoint)
for i in range(len(priorFuncsPdf)):
livePointPriorValues[i] = priorFuncsPdf[i](livePoint[i])
priorProdValue = livePointPriorValues.prod()
return priorProdValue
def fitLhood(LLhoodParams):
"""
fit lhood (without data) for parameters to make future evaluations much faster.
"""
LLhoodType = LLhoodParams[0]
mu = LLhoodParams[1].reshape(-1)
sigma = LLhoodParams[2]
if LLhoodType == 2:
LhoodObj = scipy.stats.multivariate_normal(mu, sigma)
return LhoodObj
def Lhood(LhoodObj):
"""
Returns .pdf method of LhoodObj
"""
return LhoodObj.pdf
def LLhood(LhoodObj):
"""
Returns .logpdf method of LhoodObj
"""
return LhoodObj.logpdf
##########Lhood sampling functions
def getNewLiveBlind(priorFuncsPpf, LhoodFunc, LhoodStar):
"""
Blindly picks points isin U[0, 1]^D, converts these to physical values according to physical prior and uses as candidates for new livepoint until L > L* is found
LhoodFunc can be Lhood or LLhood func, either works fine
"""
trialPointLhood = -np.inf
while trialPointLhood <= LhoodStar:
nDims = len(priorFuncsPpf)
trialPoint = np.random.rand(1, nDims)
trialPointPhys = invPrior(trialPoint, priorFuncsPpf) #Convert trialpoint value to physical value
trialPointLhood = LhoodFunc(trialPointPhys) #calculate LLhood value of trialpoint
return trialPointPhys, trialPointLhood
def getNewLiveMH(livePointsPhys, deadIndex, priorFuncsPdf, priorParams, LhoodFunc, LhoodStar):
"""
gets new livepoint using variant of MCMC MH algorithm. From current livepoints (not including one to be excluded in current NS iteration), first picks a point at random as starting point.
Next the standard deviation of the trial distribution is calculated from the width of the current livepoints (including the one to be excluded) as 0.1 * [max(param_value) - min(param_value)] in each dimension.
A trial distribution (CURRENTLY GAUSSIAN) is centred on the selected livepoint with the calculated variance, and the output is used as a trial point.
This point is kept with probability prior(trial) / prior(previous) if L_trial > L* and rejected otherwise.
Each time a trial point is proposed, nTrials is incremented. If the trial point was accepted nAccept is incremented, if not nReject is.
The trial distribution is updated based on nAccept, nReject such that the acceptance rate should be roughly 50%.
Once nTrials = maxTrials, if nAccept = 0 the whole process is repeated, as not doing so would mean the returned live point is a copy of another point. If not, the new livepoint is returned.
One would hope that nAccept is > 1 to ensure that the sampling space is explored uniformly.
LhoodFunc can be Lhood or LLhood func, either works fine
"""
nAccept = 0
maxTrials = 80
#current deadpoint not a possible starting candidate. This could be ignored
startCandidates = np.delete(livePointsPhys, (deadIndex), axis = 0)
trialSigma = calcInitTrialSig(livePointsPhys)
while nAccept == 0: #ensure that at least one move was made from initially picked point, or new returned livepoint will be same as pre-existing livepoint.
#in the case of no acceptances, process is started again from step of picking starting livepoint
#randomly pick starting candidate
startIndex = np.random.randint(0, len(startCandidates[:,0]))
startPoint = startCandidates[startIndex]
startLhood = LhoodFunc(startPoint) #this is not used per se, but an arbitrary value is needed for first value in loop
nTrials = 0
nReject = 0
while nTrials < maxTrials:
nTrials += 1
#find physical values of trial point candidate
trialPoint = np.random.multivariate_normal(startPoint, trialSigma ** 2)
#check trial point has physical values within sampling space domain
trialPoint = checkBoundaries(trialPoint, priorParams)
trialLhood = LhoodFunc(trialPoint)
#returns previous point values if test fails, or trial point values if it passes
acceptFlag, startPoint, startLhood = testTrial(trialPoint, startPoint, trialLhood, startLhood, LhoodStar, priorFuncsPdf)
if acceptFlag:
nAccept += 1
else:
nReject += 1
#update trial distribution variance
trialSigma = updateTrialSigma(trialSigma, nAccept, nReject)
return startPoint, startLhood
################MH related functions
def calcInitTrialSig(livePoints):
"""
calculate initial standard deviation based on width of domain defined by max and min parameter values of livepoints in each dimension
"""
minParams = livePoints.min(axis = 0)
maxParams = livePoints.max(axis = 0)
livePointsWidth = maxParams - minParams
trialSigma = np.diag(0.1 * livePointsWidth) #Sivia 2006 uses 0.1 * domain width
return trialSigma
def testTrial(trialPoint, startPoint, trialLhood, startLhood, LhoodStar, priorFuncsPdf):
"""
Check if trial point has L > L* and accept with probability prior(trial) / prior(previous)
"""
newPoint = startPoint
newLhood = startLhood
acceptFlag = False
if trialLhood > LhoodStar:
prob = np.random.rand()
priorRatio = priorFuncsProd(trialPoint, priorFuncsPdf) / priorFuncsProd(startPoint, priorFuncsPdf)
if priorRatio > prob:
newPoint = trialPoint
newLhood = trialLhood
acceptFlag = True
return acceptFlag, newPoint, newLhood
def updateTrialSigma(trialSigma, nAccept, nReject):
"""
update standard deviation as in Sivia 2006.
Apparently this ensures that ~50% of the points are accepted, but I'm sceptical.
"""
if nAccept > nReject:
trialSigma = trialSigma * np.exp(1. / nAccept)
else:
trialSigma = trialSigma * np.exp(-1. / nReject)
return trialSigma
def checkBoundaries(livePoint, priorParams):
"""
For all parameters with fixed boundaries (just uniform for NOW), ensures the trial point in that dimension has a value in the allowed domain
"""
#it makes most sense for these two to be the same
differenceCorrection = 'reflect'
pointCorrection = 'reflect'
priorType = priorParams[0,:]
param1Vec = priorParams[1,:]
param2Vec = priorParams[2,:]
for i in range(len(priorType)):
if priorType[i] == 1:
livePoint[i] = applyBoundary(livePoint[i], param1Vec[i], param2Vec[i], differenceCorrection, pointCorrection)
return livePoint
def applyBoundary(point, lower, upper, differenceCorrection, pointCorrection):
"""
give a point in or outside the domain, in case of point being in the domain it does nothing.
When it is outside, it calculates a 'distance' from the domain according to differenceCorrection type, and then uses this 'distance' to transform the point into the domain by using either reflective or wrapping methods according to the value of pointCorrection.
It is recommended that differenceCorrection and pointCorrection take the same values (makes most intuitive sense to me)
"""
#get 'distance' from boundary
if differenceCorrection == 'wrap':
pointTemp = modToDomainWrap(point, lower, upper)
elif differenceCorrection == 'reflect':
pointTemp = modToDomainReflect(point, lower, upper)
#use 'distance' to reflect or wrap point into domain
if pointCorrection == 'wrap':
if point < lower:
point = upper - pointTemp
elif point > upper:
point = lower + pointTemp
if pointCorrection == 'reflect':
if point < lower:
point = lower + pointTemp
elif point > upper:
point = upper - pointTemp
return point
#following ensures point isin [lower, upper]. This wraps according to (positive) difference between point and nearest bound and returns a 'distance' which can actually be used to get the correct value of the point within the domain. Effect of this is basically mod'ing the difference by the width of the domain. It makes most intuitive sense to me to use this when you want to wrap the points around the domain.
modToDomainWrap = lambda point, lower, upper : (lower - point) % (upper - lower) if point < lower else (point - upper) % (upper - lower)
def modToDomainReflect(point, lower, upper):
"""
following ensures point isin [lower, upper]. This reflects according to (positive) difference between point and nearest bound and returns a 'distance' which can actually be used to get the correct value of the point within the domain. It makes most intuitive sense to me to use this when you want to reflect the points in the domain.
Operation done to ensure reflecting is different based on whether the difference between the point and the nearest part of the domain is an odd or even (incl. 0) multiple of the boundary.
"""
if point < lower:
outsideMultiple = (lower - point) // (upper - lower) #number of multiples (truncated) of the width of the domain the point lays outside it
oddFlag = outsideMultiple % 2 #checks if number of multiples of width of domain the point is outside the domain is odd or even (the latter including zero)
if oddFlag:
#in this case for a reflective value the mod'd distance needs to be counted from the opposite boundary. This can be done by calculating - delta mod width where delta is difference between closest boundary and point
pointTemp = (point - lower) % (upper - lower)
else:
#this is the simpler case in which the reflection is counted from the nearest boundary which is just delta mod width
pointTemp = (lower - point) % (upper - lower)
elif point > upper:
outsideMultiple = (point - upper) // (upper - lower) #as above but delta is calculated from upper bound
oddFlag = outsideMultiple % 2 #as above
if oddFlag:
#as above
pointTemp = (upper - point) % (upper - lower)
else:
#as above
pointTemp = (point - upper) % (upper - lower)
else:
pointTemp = None
return pointTemp
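# Hedged worked example for the corrections above (domain [0, 1], numbers
# invented): a trial point at 1.3 lies 0.3 past the upper bound, so
#     wrap    -> lower + 0.3 = 0.3  (re-enters from the opposite side)
#     reflect -> upper - 0.3 = 0.7  (bounces off the boundary)
# A point at 2.3 is a full domain width plus 0.3 outside; reflection then
# measures from the opposite bound (the oddFlag branch) and gives 0.3.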
#############setup related functions
def checkInputParamsShape(priorParams, LhoodParams, nDims):
"""
checks prior and Lhood input arrays are correct shape
"""
assert (priorParams.shape == (3, nDims)), "Prior parameter array should have shape (3, nDims)"
assert (LhoodParams[1].shape == (1, nDims)), "LLhood params mean array should have shape (1, nDims)"
assert (LhoodParams[2].shape == (nDims, nDims)), "LLhood covariance array should have shape (nDims, nDims)"
###############NS loop related functions
def tryTerminationLog(verbose, terminationType, terminationFactor, nest, nLive, logEofX, livePointsLLhood, LLhoodStar, ZLiveType, trapezoidalFlag, logEofZ, H):
"""
See if termination condition for main loop of NS has been met. Can be related to information value H or whether estimated remaining evidence is below a given fraction of the Z value calculated up to that iteration
"""
breakFlag = False
if terminationType == 'information':
terminator = terminationFactor * nLive * H
if verbose:
printTerminationUpdateInfo(nest, terminator)
if nest > terminator:
#since it is terminating need to calculate remaining Z
liveMaxIndex, liveLLhoodMax, logEofZLive, avLLhood, nFinal = getLogEofZLive(nLive, logEofX, livePointsLLhood, LLhoodStar, ZLiveType, trapezoidalFlag)
breakFlag = True
else:
liveMaxIndex = None #no point calculating
liveLLhoodMax = None #these values if not terminating
avLLhood = None #but they must still be defined for the return statement below
nFinal = None
elif terminationType == 'evidence':
liveMaxIndex, liveLLhoodMax, logEofZLive, avLLhood, nFinal = getLogEofZLive(nLive, logEofX, livePointsLLhood, LLhoodStar, ZLiveType, trapezoidalFlag)
endValue = np.exp(logEofZLive - logEofZ)
if verbose:
printTerminationUpdateZ(logEofZLive, endValue, terminationFactor, 'log')
if endValue <= terminationFactor:
breakFlag = True
return breakFlag, liveMaxIndex, liveLLhoodMax, avLLhood, nFinal
def tryTermination(verbose, terminationType, terminationFactor, nest, nLive, EofX, livePointsLhood, LhoodStar, ZLiveType, trapezoidalFlag, EofZ, H):
"""
as above but in linear space
"""
breakFlag = False
if terminationType == 'information':
terminator = terminationFactor * nLive * H
if verbose:
printTerminationUpdateInfo(nest, terminator)
if nest > terminator:
liveMaxIndex, liveLhoodMax, ZLive, avLhood, nFinal = getEofZLive(nLive, EofX, livePointsLhood, LhoodStar, ZLiveType, trapezoidalFlag)
breakFlag = True
else:
liveMaxIndex = None
liveLhoodMax = None
avLhood = None #must still be defined for the return statement below
nFinal = None
elif terminationType == 'evidence':
liveMaxIndex, liveLhoodMax, EofZLive, avLhood, nFinal = getEofZLive(nLive, EofX, livePointsLhood, LhoodStar, ZLiveType, trapezoidalFlag)
endValue = EofZLive / EofZ
if verbose:
printTerminationUpdateZ(EofZLive, endValue, terminationFactor, 'linear')
if endValue <= terminationFactor:
breakFlag = True
return breakFlag, liveMaxIndex, liveLhoodMax, avLhood, nFinal
def getLogEofZLive(nLive, logEofX, livePointsLLhood, LLhoodStar, ZLiveType, trapezoidalFlag):
"""
NOTE logWeightsLive here is an np array
newLiveLLhoods has same shape as logWeightsLive (i.e. account for averageLhoodOrX value). If ZLiveType == 'max' avLLhood will just be the maximum LLhood value.
there is no averaging to consider if ZLiveType == 'max Lhood'.
Could return live weights, but these need to be calculated again in final contribution function so don't bother
"""
livePointsLLhood2, liveLLhoodMax, liveMaxIndex = getMaxLhood(ZLiveType, livePointsLLhood)
logEofwLive = getLogEofwLive(nLive, logEofX, ZLiveType)
logEofWeightsLive, avLLhood, nFinal = getLogEofWeightsLive(logEofwLive, LLhoodStar, livePointsLLhood2, trapezoidalFlag, ZLiveType) #this will be an array nLive long for 'average' ZLiveType and 'X' averageLhoodOrX or a 1 element array for 'max' ZLiveType or 'Lhood' averageLhoodOrX
logEofZLive = logAddArr2(-np.inf, logEofWeightsLive)
return liveMaxIndex, liveLLhoodMax, logEofZLive, avLLhood, nFinal
def getEofZLive(nLive, EofX, livePointsLhood, LhoodStar, ZLiveType, trapezoidalFlag):
"""
as above but in linear space
"""
livePointsLhood2, liveLhoodMax, liveMaxIndex = getMaxLhood(ZLiveType, livePointsLhood)
EofwLive = getEofwLive(nLive, EofX, ZLiveType)
EofWeightsLive, avLhood, nFinal = getEofWeightsLive(EofwLive, LhoodStar, livePointsLhood2, trapezoidalFlag, ZLiveType)
EofZLive = np.sum(EofWeightsLive)
return liveMaxIndex, liveLhoodMax, EofZLive, avLhood, nFinal
def getMaxLhood(ZLiveType, livePointsLhood):
"""
For ZLiveType == 'max' returns a 1 element array with maximum LLhood value, its value as a scalar, and the index of the max LLhood in the given array.
For ZLiveType == 'average' it essentially does nothing
"""
if 'average' in ZLiveType: #average of remaining LLhood values/ X for final Z estimate
livePointsLhood2 = livePointsLhood
liveLhoodMax = None #liveLLhoodMax is redundant for this method so just return None for it
liveMaxIndex = None #same as line above
elif ZLiveType == 'max Lhood': #max of remaining LLhood values & remaining X for final Z estimate
liveMaxIndex = np.argmax(livePointsLhood)
liveLhoodMax = livePointsLhood[liveMaxIndex].item()
livePointsLhood2 = np.array([liveLhoodMax])
return livePointsLhood2, liveLhoodMax, liveMaxIndex
def getLogEofwLive(nLive, logEofX, ZLiveType):
"""
Determines final logw based on ZLiveType and averageLhoodOrX, i.e. it determines whether final contribution is averaged/ maximised over L or averaged over X.
"""
if (ZLiveType == 'max Lhood') or (ZLiveType == 'average Lhood'):
return logEofX
else:
return logEofX - np.log(nLive)
def getEofwLive(nLive, EofX, ZLiveType):
"""
as above but in non-log space
"""
if (ZLiveType == 'max Lhood') or (ZLiveType == 'average Lhood'):
return EofX
else:
return EofX / nLive
def getLogEofWeightsLive(logEofw, LLhoodStar, liveLLhoods, trapezoidalFlag, ZLiveType):
"""
From Will's implementation, Z = sum (X_im1 - X_i) * 0.5 * (L_i + L_im1)
Unsure whether you should treat final contribution using trapezium rule (when it is used for rest of sum). I think you should
and in case of ZLiveType == 'average *', the L values used are L* + {L_live}
and in the case of ZLiveType == 'max', the L values used are L* + {max(L_live)}.
When trapezium rule isn't used (for rest of sum), L values used are
{L_live} in case of ZLiveType == 'average *'
and {max(L_live)} in case of ZLiveType == 'max'.
When ZLiveType == 'average *' there is an added complication of what the average is 'taken over' (for both trapezium rule and standard quadrature) i.e. over the prior volume or the likelihood.
If ZLiveType == 'average X' the average is taken over X, meaning there are still nLive live log weights (equally spaced in X with values X / nLive) which for standard quadrature have values: {log(X / nLive) + log(L_1), ..., log(X / nLive) + log(L_nLive)}
and for trapezium rule: {log(X / nLive) + log((L* + L_1) / 2. ), ..., log(X / nLive) + log((L_nLive-1 + L_nLive) / 2. )}
If ZLiveType == 'average Lhood' the average is taken over the remaining L values, meaning there is 1 live log weight with X value X (i.e. the L_average value is assumed to be at X = 0). For the standard quadrature method the live log weight thus has a value log(X) + log(sum_i^nLive[L_i] / nLive)
and for the trapezoidal rule log(X) + log((L* + sum_i^nLive[L_i] / nLive) / 2.).
When ZLiveType == 'max', the maximum is obviously taken over the remaining Lhoods. Thus there is only one live log weight. For standard quadrature this is log(X) + log(max(L_i)
and for the trapezium rule it is log(X) + log((L* + max(L_i)) / 2.)
If averaging over L, final livepoint needs to be attributed this L, so it is stored here under the variable avLLhood
"""
if trapezoidalFlag:
if ZLiveType == 'average X': #assumes there is still another nLive points to be added to the posterior samples, as averaging is done over X, not L
nFinal = len(liveLLhoods)
laggedLLhoods = np.concatenate(([LLhoodStar], liveLLhoods[:-1])) #slower than appending lists together, but liveLLhoods is a numpy array, and converting it to a list is slow
logEofWeightsLive = logEofw + np.log(0.5) + np.logaddexp(liveLLhoods, laggedLLhoods)
avLLhood = None #if not averaging over Lhood this isn't needed
else: #assumes 'final' Lhood value is given by the average of the remaining L values, and that this is at X = 0
nFinal = 1
LSumLhood = np.array([logAddArr2(-np.inf, liveLLhoods)]) #Make array for consistency
n = len(liveLLhoods) #1 for ZLiveType == 'max' or nLive for ZLiveType == 'average Lhood'
avLLhood = LSumLhood - np.log(n)
logEofWeightsLive = np.log(0.5) + logEofw + np.logaddexp(LLhoodStar, avLLhood)
else:
if ZLiveType == 'average X':
nFinal = len(liveLLhoods)
logEofWeightsLive = logEofw + liveLLhoods
avLLhood = None
else:
nFinal = 1
LSumLhood = np.array([logAddArr2(-np.inf, liveLLhoods)])
n = len(liveLLhoods)
avLLhood = LSumLhood - np.log(n)
logEofWeightsLive = logEofw + avLLhood
return logEofWeightsLive, avLLhood, nFinal
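# Hedged worked example for the trapezoidal 'average X' branch above
# (invented symbols, nLive = 2): with logEofw = log(X/2), LLhoodStar = l0
# and live log-likelihoods (l1, l2), the two live log-weights are
#     log(X/2) + log((e^l0 + e^l1) / 2)  and  log(X/2) + log((e^l1 + e^l2) / 2),
# i.e. each remaining point gets an equal slice of X and a trapezoid in L.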
def getEofWeightsLive(Eofw, LhoodStar, liveLhoods, trapezoidalFlag, ZLiveType):
"""
as above but non-log space version
"""
if trapezoidalFlag:
if ZLiveType == 'average X':
nFinal = len(liveLhoods)
laggedLhoods = np.concatenate(([LhoodStar], liveLhoods[:-1]))
EofWeightsLive = Eofw * 0.5 * (liveLhoods + laggedLhoods)
avLhood = None
else:
nFinal = 1
sumLhood = np.array([liveLhoods.sum()])
n = len(liveLhoods)
avLhood = sumLhood / n
EofWeightsLive = Eofw * 0.5 * (LhoodStar + avLhood)
else:
if ZLiveType == 'average X':
nFinal = len(liveLhoods)
EofWeightsLive = Eofw * liveLhoods
avLhood = None
else:
nFinal = 1
sumLhood = np.array([liveLhoods.sum()])
n = len(liveLhoods)
avLhood = sumLhood / n
EofWeightsLive = Eofw * avLhood
return EofWeightsLive, avLhood, nFinal
############final contribution to NS sampling functions
def getFinalContributionLog(verbose, ZLiveType, trapezoidalFlag, nFinal, logEofZ, logEofZ2, logEofX, logEofWeights, H, livePointsPhys, livePointsLLhood, avLLhood, liveLLhoodMax, liveMaxIndex, LLhoodStar, errorEval = 'recursive'):
"""
Get final contribution from livepoints after NS loop has ended. Way of estimating final contribution is dictated by ZLiveType.
Also updates H value and gets final weights (and physical values) for posterior
this function could be quite taxing on memory as it has to copy all arrays/ lists across
NOTE: for standard quadrature summation, average Lhood and average X give same values of Z (averaging over X is equivalent to averaging over L). However, correct posterior weights are given by latter method, and Z errors are different in both cases
"""
livePointsLLhood = checkIfAveragedLhood(nFinal, livePointsLLhood, avLLhood) #only relevant for 'average' ZLiveType
if 'average' in ZLiveType:
LLhoodsFinal = np.concatenate((np.array([LLhoodStar]), livePointsLLhood))
logEofZOld = logEofZ
logEofZ2Old = logEofZ2
for i in range(nFinal): #add weight of each remaining live point incrementally so H can be calculated easily (according to formulation given in Skilling)
logEofZLive, logEofZ2Live, logEofWeightLive = updateLogZnXMomentsFinal(nFinal, logEofZOld, logEofZ2Old, logEofX, LLhoodsFinal[i], LLhoodsFinal[i+1], trapezoidalFlag, errorEval)
logEofWeights.append(logEofWeightLive)
H = updateHLog(H, logEofWeightLive, logEofZLive, LLhoodsFinal[i+1], logEofZOld)
logEofZOld = logEofZLive
logEofZ2Old = logEofZ2Live
if verbose:
printFinalLivePoints(i, livePointsPhys[i], LLhoodsFinal[i+1], ZLiveType, 'log')
livePointsPhysFinal, livePointsLLhoodFinal, logEofXFinalArr = getFinalAverage(livePointsPhys, livePointsLLhood, logEofX, nFinal, avLLhood, 'log')
elif ZLiveType == 'max Lhood': #assigns all remaining prior mass to one point which has highest likelihood (of remaining livepoints)
logEofZLive, logEofZ2Live, logEofWeightLive = updateLogZnXMomentsFinal(nFinal, logEofZ, logEofZ2, logEofX, LLhoodStar, liveLLhoodMax, trapezoidalFlag, errorEval)
logEofWeights.append(logEofWeightLive) #add scalar to list (as in 'average' case) instead of 1 element array
H = updateHLog(H, logEofWeightLive, logEofZLive, liveLLhoodMax, logEofZ)
livePointsPhysFinal, livePointsLLhoodFinal, logEofXFinalArr = getFinalMax(liveMaxIndex, livePointsPhys, liveLLhoodMax, logEofX)
if verbose:
printFinalLivePoints(liveMaxIndex, livePointsPhysFinal, livePointsLLhoodFinal, ZLiveType, 'log')
return logEofZLive, logEofZ2Live, H, livePointsPhysFinal, livePointsLLhoodFinal, logEofXFinalArr
def getFinalContribution(verbose, ZLiveType, trapezoidalFlag, nFinal, EofZ, EofZ2, EofX, EofWeights, H, livePointsPhys, livePointsLhood, avLhood, liveLhoodMax, liveMaxIndex, LhoodStar, errorEval = 'recursive'):
"""
as above but in linear space
"""
livePointsLhood = checkIfAveragedLhood(nFinal, livePointsLhood, avLhood)
if (ZLiveType == 'average Lhood') or (ZLiveType == 'average X'):
EofZOld = EofZ
EofZ2Old = EofZ2
LhoodsFinal = np.concatenate((np.array([LhoodStar]), livePointsLhood))
for i in range(nFinal):
EofZLive, EofZ2Live, EofWeightLive = updateZnXMomentsFinal(nFinal, EofZOld, EofZ2Old, EofX, LhoodsFinal[i], LhoodsFinal[i+1], trapezoidalFlag, 'recursive')
EofWeights.append(EofWeightLive)
H = updateH(H, EofWeightLive, EofZLive, LhoodsFinal[i+1], EofZOld)
EofZOld = EofZLive
EofZ2Old = EofZ2Live
if verbose:
printFinalLivePoints(i, livePointsPhys[i], LhoodsFinal[i+1], ZLiveType, 'linear')
livePointsPhysFinal, livePointsLhoodFinal, EofXFinalArr = getFinalAverage(livePointsPhys, livePointsLhood, EofX, nFinal, avLhood, 'linear')
elif ZLiveType == 'max Lhood':
EofZLive, EofZ2Live, EofWeightLive = updateZnXMomentsFinal(nFinal, EofZ, EofZ2, EofX, LhoodStar, liveLhoodMax, trapezoidalFlag, 'recursive')
EofWeights.append(EofWeightLive)
H = updateH(H, EofWeightLive, EofZLive, liveLhoodMax, EofZ)
livePointsPhysFinal, livePointsLhoodFinal, EofXFinalArr = getFinalMax(liveMaxIndex, livePointsPhys, liveLhoodMax, EofX)
if verbose:
printFinalLivePoints(liveMaxIndex, livePointsPhysFinal, livePointsLhoodFinal, ZLiveType, 'linear')
return EofZLive, EofZ2Live, H, livePointsPhysFinal, livePointsLhoodFinal, EofXFinalArr
def checkIfAveragedLhood(nFinal, livePointsLhood, avLhood):
"""
Checks if Lhood was averaged over in getLogWeightsLive or not.
If it was, need to work with average LLhood value for remainder of calculations,
if not then carry on working with nLive size array of LLhoods.
For ZLiveType == 'max', average is just taken over array size one with max Lhood value in it, so it is still just max value
"""
if nFinal == 1:
return avLhood
else:
return livePointsLhood
def getFinalAverage(livePointsPhys, livePointsLLhood, X, nFinal, avLLhood, space):
"""
	gets the final livepoint values and the X value per remaining livepoint for the 'average' ZLiveType criterion
NOTE Xfinal is a list not a numpy array
space says whether you are working in linear or log space (X or logX)
"""
livePointsPhysFinal = getLivePointsPhysFinal(livePointsPhys, avLLhood) #only relevant for 'average' ZLiveType
livePointsLLhoodFinal = livePointsLLhood
if space == 'linear':
Xfinalarr = [X / nFinal] * nFinal
else:
Xfinalarr = [X - np.log(nFinal)] * nFinal
return livePointsPhysFinal, livePointsLLhoodFinal, Xfinalarr
def getLivePointsPhysFinal(livePointsPhys, avLhood):
"""
	Get physical values associated with the remaining contribution of the livepoints. If LLhood isn't averaged over (X is), this is just the input livepoint values, but if LLhood is averaged it is non-trivial, I.E. THE PHYSICAL VALUES ASSOCIATED WITH THIS POINT ARE MEANINGLESS
Only relevant for ZLiveType == 'average' as for 'max' case, physical values are just that corresponding to max(L)
These are needed for posterior samples of remaining contribution of livepoints
"""
if not avLhood: #ZLiveType == 'max' means livePointsPhys is already just one livepoint, averageLhoodOrX == 'average X' means retain previous array
return livePointsPhys
	else: #need to obtain one livepoint from set of nLive. NO PHYSICAL VECTOR (KNOWN AT THIS STAGE OF THE ALGORITHM) CORRESPONDS TO THIS LIKELIHOOD, SO THIS VALUE IS MEANINGLESS
return livePointsPhys.mean(axis = 0).reshape(1,-1)
def getFinalMax(liveMaxIndex, livePointsPhys, liveLhoodMax, X):
"""
get livepoint and physical livepoint values
corresponding to maximum likelihood point in remaining
livepoints.
Note Xfinal is a list not a numpy array or a scalar
Function works for log or linear space
"""
livePointsPhysFinal = livePointsPhys[liveMaxIndex].reshape(1,-1)
livePointsLhoodFinal = np.array([liveLhoodMax]) #for consistency with 'average' equivalent function
Xfinal = [X] #for consistency with 'average' equivalent function, make it a list.
return livePointsPhysFinal, livePointsLhoodFinal, Xfinal
###############final datastructure / output functions
def getTotal(deadPointsPhys, livePointsPhysFinal, deadPointsLhood, livePointsLhoodFinal, XArr, XFinalArr, weights):
"""
gets final arrays of physical, llhood and X values for all accepted points in algorithm.
This function mutates deadPointsPhys by appending numpy array livePointsPhysFinal. This is at the end of the program
however, so it shouldn't be an issue.
	np.concatenate works on a list of numpy arrays: those corresponding to deadPoints should have shape (1, nDims) and there should be nest of them;
	the single numpy array corresponding to the final live points should have shape (nLive, nDims) if the average of Z was used for the final contribution, or (1, nDims) if the max of Z was used.
	Concatenating a list of numpy arrays once is much more efficient than using np.append() at each iteration.
"""
deadPointsPhys.append(livePointsPhysFinal)
totalPointsPhys = np.concatenate(deadPointsPhys)
totalPointsLhood = np.append(deadPointsLhood, livePointsLhoodFinal)
XArr = np.append(XArr, XFinalArr)
weights = np.array(weights)
return totalPointsPhys, totalPointsLhood, XArr, weights
def writeOutput(outputFile, totalPointsPhys, totalPointsLhood, weights, XArr, paramNames, space):
"""
writes a summary file which contains values for all sampled points.
Also writes files needed for getDist.
"""
paramNamesStr = ', '.join(paramNames)
if space == 'linear':
summaryStr = ' Lhood, weights, X'
else:
summaryStr = ' LLhood, logWeights, logX'
#summary file containing most information of sampled points
np.savetxt(outputFile + '_summary.txt', np.column_stack((totalPointsPhys, totalPointsLhood, weights, XArr)), delimiter = ',', header = paramNamesStr + summaryStr)
#chains file in format needed for getDist: importance weight (weights or logWeights), LHood (Lhood or LLhood), phys param values
np.savetxt(outputFile + '.txt', np.column_stack((weights, totalPointsLhood, totalPointsPhys)))
#index and list parameter names for getDist
nameFile = open(outputFile + '.paramnames', 'w')
for i, name in enumerate(paramNames):
nameFile.write('p%i %s\n' %(i+1, name))
nameFile.close()
#write file with hard constraints on parameter boundaries.
#Hard constraints are currently inferred from data for all parameters
rangeFile = open(outputFile + '.ranges', 'w')
for i in range(len(paramNames)):
rangeFile.write('p%i N N\n' %(i+1))
rangeFile.close()
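#Sketch of the files written for outputFile = './output/test' (contents inferred from writeOutput above):
#	test_summary.txt - physical values, (L)Lhood, weights and X per sample
#	test.txt - getDist chains file: weight, Lhood, then the physical parameter values
#	test.paramnames - one 'p<i> <name>' line per parameter
#	test.ranges - one 'p<i> N N' line per parameter (no hard bounds imposed)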
####################Z & H theoretical functions
def nDIntegratorZTheor(integrandFuncs, limitsList, integrandLogVal = 200.):
"""
integrator used for calculating theoretical value of Z. integrand is function which evaluates to value of integrand at given parameter values (which are determined by nquad function).
	integrandFuncs is a dict of functions (Lhood & non-uniform priors) which are multiplied together to give the value of the integrand.
	If integrandLogVal evaluates to true, uses an integration method which adds the large constant integrandLogVal to the log of the integrand before exponentiating (i.e. it integrates integrand * exp(integrandLogVal)),
	to avoid underflow. The final integral result (and its error) is then divided by exp(integrandLogVal) to recover the true value.
"""
	if integrandLogVal:
		#scipy.integrate.nquad returns a (result, abserr) tuple, so rescale each element rather than dividing the tuple itself
		integral, error = scipy.integrate.nquad(evalExpLogIntegrand, limitsList, args = (integrandFuncs, integrandLogVal))
		return integral / np.exp(integrandLogVal), error / np.exp(integrandLogVal)
else: #this should only occur if you aren't concerned about underflow
return scipy.integrate.nquad(evalIntegrand, limitsList, args = (integrandFuncs, ))
def nDIntegratorHTheor(integrandFuncs, limitsList, integrandLogVal = 200., LLhoodFunc = None):
"""
As above but has to call nquad with a slightly different function representing integrand when using
exp(log(integrand)) method for calculating, due to LLhood(theta) part of integrand
"""
	if integrandLogVal:
		#scipy.integrate.nquad returns a (result, abserr) tuple, so rescale each element rather than dividing the tuple itself
		integral, error = scipy.integrate.nquad(evalLogLExpLogIntegrand, limitsList, args = (integrandFuncs, integrandLogVal, LLhoodFunc))
		return integral / np.exp(integrandLogVal), error / np.exp(integrandLogVal)
else: #this should only occur if you aren't concerned about underflow
return scipy.integrate.nquad(evalIntegrand, limitsList, args = (integrandFuncs,))
def evalIntegrand(*args):
"""
	evaluates the a-priori fitted parameter pdfs with the given data. The data has to be reshaped because the scipy functions require the last axis to be the number of dimensions.
	The last element of args is a dict mapping the pdf objects for the Lhood and priors to their argument indices; they are evaluated and multiplied together to give the value of the integrand.
Note in general, Lhood will have dimensionality nDims whereas each prior will have dimensionality 1.
The value (of the key corresponding to the function) is the relevant slice of x to get the correct dimensions
for each function.
May suffer from underflow either when evaluating the .pdf() calls, or when multiplying them together.
args consists of 1) the vector of theta values for given call (from nquad)
	2) dict of functions which make up the integrand
"""
theta = np.array(args[:-1]).reshape(1,-1)
integrandFuncs = args[-1]
integrandVal = 1.
for func, argIndices in integrandFuncs.items():
integrandVal *= func(theta[argIndices])
return integrandVal
def evalExpLogIntegrand(*args):
"""
	evaluates .logpdf() of the Lhood/prior functions, adds them together
	along with an arbitrary 'large' value given in args[-1],
	then exponentiates this value to avoid underflow when evaluating the pdfs/multiplying them together.
This should avoid any underflow at all, provided the given value of integrandLogVal is large enough
There is some underflow you cannot avoid, regardless of the value of integrandLogVal. This is because the LLhood values range from e.g. O(-10) to ~ O(-10^3).
Subtracting approximation of max value of integrand is also not helpful for the same reason (spans too many orders of magnitude)
Using an integrandLogVal suitable for the former will still cause the latter to underflow,
but using a value to prevent the latter from underflow will result in the former overflowing!
Dynamically calculating integrandLogVal wouldn't help, as you would need to include this number again after exponentiating (which would cause it to underflow), but before the value is added to integral (if factor isn't constant, doesn't commute with integral)
n.b. underflow is usually better than overflow, as you don't want to miss your most likely values
args consists of 1) the vector of theta values for given call (from nquad)
	2) dict of functions which make up the integrand
	3) 'large' number to be added to the logarithm before it is exponentiated to give the integrand
"""
theta = np.array(args[:-2]).reshape(1,-1)
integrandLogFuncs = args[-2]
integrandLogVal = args[-1]
for logFunc, argIndices in integrandLogFuncs.items():
integrandLogVal += logFunc(theta[argIndices])
return np.exp(integrandLogVal)
def evalLogLExpLogIntegrand(*args):
"""
Same as above evalExpLogIntegrand but multiplies by log(L) for calculating H
using exp(log(integrand)) method.
LLhoodFunc has to be passed in separately as well as in the dictionary of functions, so it can be used
to evaluate LLhoodFunc(theta)*np.exp(integrandLogVal)
args consists of 1) the vector of theta values for given call (from nquad)
	2) dict of functions which make up the integrand
	3) 'large' number to be added to the logarithm before it is exponentiated to give the integrand
4) LLhood function required for log(L(theta)) part of integrand
"""
theta = np.array(args[:-3]).reshape(1,-1)
integrandLogFuncs = args[-3]
integrandLogVal = args[-2]
LLhoodFunc = args[-1]
LLhoodFuncArgs = integrandLogFuncs[LLhoodFunc] #should be all of theta
for logFunc, argIndices in integrandLogFuncs.items():
integrandLogVal += logFunc(theta[argIndices])
return LLhoodFunc(theta[LLhoodFuncArgs]) * np.exp(integrandLogVal)
def getPriorIntegrandAndLimits(priorParams, priorFuncsPdf, integrandFuncs):
"""
	Adds prior functions to the integrandFuncs dict for non-uniform (e.g. Gaussian) dimensions, along with a mapping to the dimensions of the data that each function integrates over. For uniform priors, calculates the hyperrectangular volume instead. Also creates the list of integration limits in each dimension; for Gaussian priors, the limits are set to +- infinity
"""
hyperRectangleVolume = 1.
priorTypes = priorParams[0,:]
bounds = priorParams[1:,:]
limitsList = []
for i in range(len(priorTypes)):
if priorTypes[i] == 1: #uniform
limitsList.append(np.array([priorParams[1,i], priorParams[2,i]]))
priorWidth = priorParams[2,i] - priorParams[1,i]
hyperRectangleVolume *= priorWidth
elif priorTypes[i] == 2: #gauss
limitsList.append(np.array([-np.inf, np.inf])) #parameters of Gauss dists aren't limits, so change to +-inf here
integrandFuncs[priorFuncsPdf[i]] = (0, i) #tuple representing slice of data array x for this prior function, is mapped to that prior function via the dictionary
return integrandFuncs, limitsList, hyperRectangleVolume
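#Illustrative layout assumed for priorParams throughout (matching the conventions documented in main() below):
#shape (3, nDims); row 0 gives the prior type per dimension (1 = uniform, 2 = Gaussian),
#rows 1-2 give (lower, upper) bounds for uniform priors or (mean, variance) for Gaussian ones, e.g.
#priorParams = np.array([[1, -5., 5.], [2, 0., 1.]]).T #U(-5, 5) in dimension 0, N(0, 1) in dimension 1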
class ZTheorException(Exception):
pass
def calcZTheor(priorParams, priorFuncsPdf, LhoodFunc, nDims):
"""
numerically integrates L(theta) * pi(theta) over theta
priorFuncs must be in same order as dimensions of LhoodFunc when it was fitted (and in same order as priorParams).
	Have to temporarily set np.seterr to warnings only, as underflow always seems to occur when evaluating the integral (which makes sense).
	A bit slow, but hard to make faster, as the cost grows exponentially with the number of dimensions
LhoodFunc and priorFuncsPdf can be .pdf() or .logpdf() methods
"""
np.seterr(all = 'warn')
integrandFuncs = {LhoodFunc:slice(None)} #slice refers to which dimensions of data array are required for given function in integration call. In case of Lhood, all dimensions of the parameter space are required
integrandFuncs, limitsList, hyperRectangleVolume = getPriorIntegrandAndLimits(priorParams, priorFuncsPdf, integrandFuncs)
ZIntegral, ZIntegralE = nDIntegratorZTheor(integrandFuncs, limitsList)
ZTheor = 1. / hyperRectangleVolume * ZIntegral
np.seterr(all = 'raise')
return ZTheor, ZIntegralE, hyperRectangleVolume
def calcZTheorApprox(priorParams):
"""
Only valid in limit that prior is hyperrectangle, and majority of lhood is contained in prior hypervolume
	such that the limits of integration (the domain of the sampling space defined by the prior) can be extended close enough to +- infinity that the Lhood integrates to 1 over this domain
"""
priorDifferences = priorParams[2,:] - priorParams[1,:]
priorVolume = priorDifferences.prod()
ZTheor = 1. / priorVolume
return ZTheor, priorVolume
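#Worked example under the stated approximation: for the two U(-5, 5) priors used in main() below,
#priorVolume = 10. * 10. = 100., so ZTheor ~ 1. / 100. = 0.01, provided essentially all of the
#(unit Gaussian) likelihood mass lies inside the prior box.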
def calcHTheor(priorParams, priorFuncsPdf, LLhoodFunc, nDims, Z, ZErr, LhoodFunc = None):
"""
Calculates HTheor from the KL divergence equation: H = int[P(theta) * ln(P(theta) / pi(theta))] = 1/Z * int[L(theta) * pi(theta) * ln(L(theta))] - ln(Z).
For uniform priors, calculates volume and skips that part of integral (over pi(theta)).
Uses same trick as ZTheor in that it composes a dictionary of functions to integrate, mapped to dimension(s) of theta vector to integrate along for given function.
Passes this dictionary to function which nquad actually evaluates.
"""
np.seterr(all = 'warn')
if LhoodFunc: #calculate H without considering underflow
integrandFuncs = {LhoodFunc:slice(None), LLhoodFunc:slice(None)} #slice refers to which dimensions of data array are required for given function in integration call. In case of Lhood, all dimensions of the parameter space are required
integrandLogVal = None
else:
integrandFuncs = {LLhoodFunc:slice(None)} #evaluate L(theta) using exp(log(L(theta)))
integrandLogVal = 100.
integrandFuncs, limitsList, hyperRectangleVolume = getPriorIntegrandAndLimits(priorParams, priorFuncsPdf, integrandFuncs)
LhoodPiLogLIntegral, LhoodPiLogLErr = nDIntegratorHTheor(integrandFuncs, limitsList, integrandLogVal, LLhoodFunc)
HErr = calcHErr(Z, ZErr, LhoodPiLogLIntegral, LhoodPiLogLErr)
return 1. / (hyperRectangleVolume * Z) * LhoodPiLogLIntegral - np.log(Z), HErr
def calcHErr(Z, ZErr, LhoodPiLogLIntegral, LhoodPiLogLErr):
"""
Calculates error on H due to uncertainty of Z, HIntegrand and ln(Z).
ignores possible correlation between Z and LhoodPiLogLIntegral
"""
logZErr = ZErr / Z
IntOverZErr = LhoodPiLogLIntegral / Z * np.sqrt((ZErr / Z)**2. + (LhoodPiLogLErr / LhoodPiLogLIntegral)**2.)
return np.sqrt(logZErr**2. + IntOverZErr**2.)
def calcHTheorApprox(Z, nDims, priorVolume):
"""
Only valid in limit that prior is hyperrectangle, and majority of lhood is contained in prior hypervolume
such that limits of integration can be extended to +- infinity
"""
return -0.5 * nDims / (Z * priorVolume) * (1. + np.log(2. * np.pi)) - np.log(Z)
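#Worked example for the main() setup (2D unit Gaussian Lhood, U(-5, 5)^2 prior), where Z * priorVolume ~ 1:
#H ~ ln(100) - (2. / 2.) * (1. + ln(2. * pi)) ~ 4.605 - 2.838 ~ 1.77 nats.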
###########Updating expected values of Z, X and H functions
def calct(nLive, expectation = 't', sampling = False, maxPoints = False):
"""
calc value of t from its pdf,
	from the (supposedly equivalent) way of deriving the form of the pdf,
	or from E[.] or E[log(.)]
	"""
if sampling:
if maxPoints:
t = np.random.rand(nLive).max()
else:
t = np.random.rand() ** (1. / nLive)
else:
if expectation == 'logt':
t = np.exp(-1. / nLive)
elif expectation == 't':
t = nLive / (nLive + 1.)
return t
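#Illustrative sanity check (not used by the algorithm): for nLive = 50,
#E[t] = 50. / 51. ~ 0.98039 while exp(E[log(t)]) = exp(-1. / 50.) ~ 0.98020,
#so the two choices of expectation give nearly identical shrinkage per iteration,
#agreeing ever more closely as nLive grows.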
def calct2(nLive, expectation = 't2', sampling = False, maxPoints = False):
"""
calc value of t^2 from its pdf,
	from the (supposedly equivalent) way of deriving the form of the pdf,
	or from E[.] or E[log(.)]
"""
	if sampling:
		if maxPoints:
			raise NotImplementedError #TODO
		else:
			raise NotImplementedError #TODO
	else:
		if expectation == 'logt2':
			raise NotImplementedError #TODO
		elif expectation == 't2':
			t = nLive / (nLive + 2.)
	return t
def calc1mt(nLive, expectation = '1mt', sampling = False, maxPoints = False):
"""
calc value of 1-t from its pdf,
	from the (supposedly equivalent) way of deriving the form of the pdf,
	or from E[.] or E[log(.)]
"""
	if sampling:
		if maxPoints:
			raise NotImplementedError #TODO
		else:
			raise NotImplementedError #TODO
	else:
		if expectation == 'log1mt':
			raise NotImplementedError #TODO
		elif expectation == '1mt':
			t = 1. / (nLive + 1.)
	return t
def calc1mt2(nLive, expectation = '1mt2', sampling = False, maxPoints = False):
"""
calc value of (1-t)^2 from its pdf,
	from the (supposedly equivalent) way of deriving the form of the pdf,
	or from E[.] or E[log(.)]
"""
	if sampling:
		if maxPoints:
			raise NotImplementedError #TODO
		else:
			raise NotImplementedError #TODO
	else:
		if expectation == 'log1mt2':
			raise NotImplementedError #TODO
		elif expectation == '1mt2':
			t = 2. / ((nLive + 1.) * (nLive + 2.))
	return t
def calcEofts(nLive):
"""
calculate expected values of t related variables to update Z and X moments
"""
Eoft = calct(nLive)
Eoft2 = calct2(nLive)
Eof1mt = calc1mt(nLive)
Eof1mt2 = calc1mt2(nLive)
return Eoft, Eoft2, Eof1mt, Eof1mt2
def updateZnXMoments(nLive, EofZ, EofZ2, EofZX, EofX, EofX2, LhoodStarOld, LhoodStar, trapezoidalFlag):
"""
Wrapper around updateZnXM taking into account whether trapezium rule is used or not
"""
if trapezoidalFlag:
EofZ, EofZ2, EofZX, EofX, EofX2, EofWeight = updateZnXM(nLive, EofZ, EofZ2, EofZX, EofX, EofX2, 0.5 * (LhoodStarOld + LhoodStar))
else:
EofZ, EofZ2, EofZX, EofX, EofX2, EofWeight = updateZnXM(nLive, EofZ, EofZ2, EofZX, EofX, EofX2, LhoodStar)
return EofZ, EofZ2, EofZX, EofX, EofX2, EofWeight
def updateZnXM(nLive, EofZ, EofZ2, EofZX, EofX, EofX2, L):
"""
Update moments of Z and X based on their previous values, expected value of random variable t and Lhood value ((L_i + L_i-1) / 2. in case of trapezium rule).
Used to calculate the mean and standard deviation of Z, and thus of log(Z) as well
TODO: CONSIDER KEETON NON-RECURSIVE METHOD
"""
Eoft, Eoft2, Eof1mt, Eof1mt2 = calcEofts(nLive)
EofZ, EofWeight = updateEofZ(EofZ, Eof1mt, EofX, L)
EofZ2 = updateEofZ2(EofZ2, Eof1mt, EofZX, Eof1mt2, EofX2, L)
EofZX = updateEofZX(Eoft, EofZX, Eoft2, EofX2, L)
EofX2 = updateEofX2(Eoft2, EofX2)
EofX = updateEofX(Eoft, EofX)
return EofZ, EofZ2, EofZX, EofX, EofX2, EofWeight
def updateEofZ(EofZ, Eof1mt, EofX, L):
"""
Update mean estimate of Z.
"""
EofWeight = Eof1mt * EofX * L
return EofZ + EofWeight, EofWeight
def updateEofZX(Eoft, EofZX, Eoft2, EofX2, L):
"""
Updates raw 'mixed' moment of Z and X. Required to calculate E(Z)^2.
"""
crossTerm = Eoft * EofZX
X2Term = (Eoft - Eoft2) * EofX2 * L
return crossTerm + X2Term
def updateEofZ2(EofZ2, Eof1mt, EofZX, Eof1mt2, EofX2, L):
"""
Update value of raw 2nd moment of Z based on Lhood value obtained in that NS iteration
"""
crossTerm = 2 * Eof1mt * EofZX * L
X2Term = Eof1mt2 * EofX2 * L**2.
return EofZ2 + X2Term + crossTerm
def updateEofX2(Eoft2, EofX2):
"""
Update value of raw 2nd momement of X
"""
return Eoft2 * EofX2
def updateEofX(Eoft, EofX):
"""
Update value of raw first moment of X
"""
return Eoft * EofX
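#The five updates above implement the recursive NS moment recurrences, which follow from
#X_i = t * X_{i-1} with t independent of the earlier shrinkage factors:
#	E[Z]_i = E[Z]_{i-1} + E[1-t] * E[X]_{i-1} * L_i
#	E[Z^2]_i = E[Z^2]_{i-1} + 2 * E[1-t] * E[ZX]_{i-1} * L_i + E[(1-t)^2] * E[X^2]_{i-1} * L_i^2
#	E[ZX]_i = E[t] * E[ZX]_{i-1} + (E[t] - E[t^2]) * E[X^2]_{i-1} * L_i
#	E[X]_i = E[t] * E[X]_{i-1} and E[X^2]_i = E[t^2] * E[X^2]_{i-1}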
def calcLogEofts(nLive):
"""
	Calculate the log of each expectation from calcEofts, plus log(E[t] - E[t^2]); it is much easier to compute the latter here than later from only log(E[t]) and log(E[t^2])
"""
return np.log(calcEofts(nLive) + (calct(nLive) - calct2(nLive),))
def updateLogZnXMoments(nLive, logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, LLhoodStarOld, LLhoodStar, trapezoidalFlag):
"""
as above but for log space
"""
if trapezoidalFlag:
logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, logEofWeight = updateLogZnXM(nLive, logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, np.log(0.5) + np.logaddexp(LLhoodStarOld, LLhoodStar))
else:
logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, logEofWeight = updateLogZnXM(nLive, logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, LLhoodStar)
return logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, logEofWeight
def updateLogZnXM(nLive, logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, LL):
"""
as above but for log space
"""
logEoft, logEoft2, logEof1mt, logEof1mt2, logEoftmEoft2 = calcLogEofts(nLive)
logEofZ, logEofWeight = updateLogEofZ(logEofZ, logEof1mt, logEofX, LL)
logEofZ2 = updateLogEofZ2(logEofZ2, logEof1mt, logEofZX, logEof1mt2, logEofX2, LL)
logEofZX = updateLogEofZX(logEoft, logEofZX, logEoftmEoft2, logEofX2, LL)
logEofX2 = updateLogEofX2(logEoft2, logEofX2)
logEofX = updateLogEofX(logEoft, logEofX)
return logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, logEofWeight
def updateLogEofZ(logEofZ, logEof1mt, logEofX, LL):
"""
as above but for log space
"""
logEofWeight = logEof1mt + logEofX + LL
return np.logaddexp(logEofWeight, logEofZ), logEofWeight
def updateLogEofZX(logEoft, logEofZX, logEoftmEoft2, logEofX2, LL):
"""
as above but for log space
"""
crossTerm = logEoft + logEofZX
X2Term = logEoftmEoft2 + logEofX2 + LL
return np.logaddexp(crossTerm, X2Term)
def updateLogEofZ2(logEofZ2, logEof1mt, logEofZX, logEof1mt2, logEofX2, LL):
"""
as above but for log space
"""
crossTerm = np.log(2) + logEof1mt + logEofZX + LL
X2Term = logEof1mt2 + logEofX2 + 2. * LL
newTerm = np.logaddexp(crossTerm, X2Term)
return np.logaddexp(logEofZ2, newTerm)
def updateLogEofX2(logEoft2, logEofX2):
"""
as above but for log space
"""
return logEoft2 + logEofX2
def updateLogEofX(logEoft, logEofX):
"""
as above but for log space
"""
return logEoft + logEofX
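#The log-space versions above replace every product with a sum of logs and every sum with
#np.logaddexp, which evaluates log(exp(a) + exp(b)) without leaving log space.
#Micro-example of why this matters: np.logaddexp(-1000., -1001.) ~ -999.69, whereas
#np.log(np.exp(-1000.) + np.exp(-1001.)) underflows to np.log(0.), which np.seterr(all = 'raise') turns into an error.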
def updateZnXMomentsFinal(nFinal, EofZ, EofZ2, EofX, Lhood_im1, Lhood_i, trapezoidalFlag, errorEval):
"""
Wrapper around updateZnXMomentsF taking into account whether trapezium rule is used or not
"""
if trapezoidalFlag:
EofZ, EofZ2, EofWeight = updateZnXMomentsF(nFinal, EofZ, EofZ2, EofX, (Lhood_im1 + Lhood_i) / 2., errorEval)
else:
EofZ, EofZ2, EofWeight = updateZnXMomentsF(nFinal, EofZ, EofZ2, EofX, Lhood_i, errorEval)
return EofZ, EofZ2, EofWeight
def updateZnXMomentsF(nFinal, EofZ, EofZ2, EofX, L, errorEval):
"""
TODO: rewrite docstring
TODO: CONSIDER KEETON NON-RECURSIVE METHOD WHICH EXPLICITLY ACCOUNTS FOR CORRELATION BETWEEN EOFZ AND EOFZLIVE
"""
if errorEval == 'recursive':
EofX = updateEofXFinal(EofX, nFinal)
EofX2 = updateEofX2Final(EofX, nFinal)
EofZ2 = updateEofZ2Final(EofZ2, EofX, EofZ, EofX2, L)
EofZ, EofWeight = updateEofZFinal(EofZ, EofX, L)
return EofZ, EofZ2, EofWeight
def updateEofZ2Final(EofZ2, EofX, EofZ, EofX2, L):
"""
TODO: rewrite docstring
"""
crossTerm = 2 * EofX * EofZ * L
XTerm = EofX2 * L**2.
return EofZ2 + crossTerm + XTerm
def updateEofZFinal(EofZ, EofX, L):
"""
TODO: rewrite docstring
"""
EofWeight = EofX * L
return EofZ + EofWeight, EofWeight
def updateEofXFinal(EofX, nFinal):
"""
	can't be proved mathematically; X is treated deterministically as X / nFinal
"""
return EofX / nFinal
def updateEofX2Final(EofX, nFinal):
"""
can't be proved mathematically, just derived from recurrence relations
"""
return EofX**2. / nFinal**2.
def updateLogZnXMomentsFinal(nFinal, logEofZ, logEofZ2, logEofX, LLhood_im1, LLhood_i, trapezoidalFlag, errorEval):
"""
Wrapper around updateZnXMomentsF taking into account whether trapezium rule is used or not
"""
if trapezoidalFlag:
logEofZ, logEofZ2, logEofWeight = updateLogZnXMomentsF(nFinal, logEofZ, logEofZ2, logEofX, np.log(0.5) + np.logaddexp(LLhood_im1, LLhood_i), errorEval)
else:
logEofZ, logEofZ2, logEofWeight = updateLogZnXMomentsF(nFinal, logEofZ, logEofZ2, logEofX, LLhood_i, errorEval)
return logEofZ, logEofZ2, logEofWeight
def updateLogZnXMomentsF(nFinal, logEofZ, logEofZ2, logEofX, LL, errorEval):
"""
TODO: rewrite docstring
"""
if errorEval == 'recursive':
logEofX = updateLogEofXFinal(logEofX, nFinal)
logEofX2 = updateLogEofX2Final(logEofX, nFinal)
logEofZ2 = updateLogEofZ2Final(logEofZ2, logEofX, logEofZ, logEofX2, LL)
logEofZ, logEofWeight = updateLogEofZFinal(logEofZ, logEofX, LL)
return logEofZ, logEofZ2, logEofWeight
def updateLogEofZ2Final(logEofZ2, logEofX, logEofZ, logEofX2, LL):
"""
TODO: rewrite docstring
"""
crossTerm = np.log(2.) + logEofX + logEofZ + LL
XTerm = logEofX2 + 2. * LL
newTerm = np.logaddexp(crossTerm, XTerm)
return np.logaddexp(logEofZ2, newTerm)
def updateLogEofZFinal(logEofZ, logEofX, LL):
"""
TODO: rewrite docstring
"""
logEofWeight = logEofX + LL
return np.logaddexp(logEofZ, logEofWeight), logEofWeight
def updateLogEofXFinal(logEofX, nFinal):
"""
can't be proved mathematically, just derived from recurrence relations
"""
return logEofX - np.log(nFinal)
def updateLogEofX2Final(logEofX, nFinal):
"""
can't be proved mathematically, just derived from recurrence relations
"""
return 2. * logEofX - 2. * np.log(nFinal)
def updateH(H, weight, ZNew, Lhood, Z):
"""
Same as Skilling's implementation but in linear space
Handles FloatingPointErrors associated with taking np.log(0) (0 * log(0) = 0)
"""
try:
return 1. / ZNew * weight * np.log(Lhood) + Z / ZNew * (H + np.log(Z)) - np.log(ZNew)
except FloatingPointError: #take lim Z->0^+ Z / ZNew * (H + log(Z)) = 0
return 1. / ZNew * weight * np.log(Lhood) - np.log(ZNew)
def updateHLog(H, logWeight, logZNew, LLhood, logZ):
"""
update H using previous value, previous and new log(Z) and latest weight
Isn't a non-log version as H propto log(L).
As given in Skilling's paper
TODO: consider if trapezium rule should lead to different implementation
"""
try:
return np.exp(logWeight - logZNew) * LLhood + np.exp(logZ - logZNew) * (H + logZ) - logZNew
except FloatingPointError: #when logZ is -infinity, np.exp(logZ) * logZ cannot be evaluated. Treat it as zero, ie treat it as lim Z->0^+ exp(logZ) * logZ = 0
return np.exp(logWeight - logZNew) * LLhood - logZNew
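#Explicit form of the Skilling information recurrence implemented above:
#	H_new = (w_i / Z_new) * ln(L_i) + (Z_old / Z_new) * (H_old + ln(Z_old)) - ln(Z_new)
#with the ratios evaluated as exp() of log-differences in updateHLog so everything stays in log space.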
######## calculate/ retrieve final estimates/ errors of Z
def calcVariance(EofX, EofX2):
"""
Calculate second moment of X from first moment and raw second moment
"""
return EofX2 - EofX**2.
def calcVarianceLog(logEofX, logEofX2):
"""
Calc log(var(X)) from log(E[X]) and log(E[X^2])
Does logsubtractexp manually, so doesn't account for possible underflow issues with exponentiating
like np.logaddexp() does, but this shouldn't be an issue for the numbers involved here.
"""
return logsubexp(logEofX2, 2. * logEofX)
def calcVarZSkillingK(EofZ, nLive, H):
"""
Uses definition of error given in Skilling's NS paper, ACCORDING to Keeton.
	Only valid in the limit that Skilling's approximation var[log(Z)] = H / nLive is correct,
	and E[Z]^2 >> var[Z], so that the log(1+x) ~ x approximation is valid.
Also requires that Z is log-normally distributed
I think this is only valid for NS loop contributions, not final part or total
"""
return EofZ**2. * H / nLive
def calcHSkillingK(EofZ, varZ, nLive):
"""
Uses definition of error given in Skilling's NS paper, ACCORDING to Keeton.
	Only valid in the limit that Skilling's approximation var[log(Z)] = H / nLive is correct,
	and E[Z]^2 >> var[Z], so that the log(1+x) ~ x approximation is valid.
Also requires that Z is log-normally distributed
I think this is only valid for NS loop contributions, not final part or total
"""
return varZ * nLive / EofZ**2.
def calcVarLogZ(EofZ, varZ, method):
"""
Uses propagation of uncertainty formula or
relationship between log-normal r.v.s and the normally distributed log of the log-normal r.v.s
to calculate var[logZ] from EofZ and varZ (taken from Wikipedia)
"""
if method == 'uncertainty':
return varZ / EofZ**2.
	elif method == 'log-normal':
		return np.log(1. + varZ / EofZ**2.) #var[log(Z)] = ln(1 + var[Z] / E[Z]^2); the square root would give a standard deviation, not a variance
def calcEofLogZ(EofZ, varZ):
"""
Calc E[log(Z)] from E[Z] and Var[Z]. Assumes Z is log-normally distributed
"""
return np.log(EofZ**2. / (np.sqrt(varZ + EofZ**2.)))
def calcEofZ(EofLogZ, varLogZ):
"""
calc E[Z] from E[logZ] and var[logZ]. Assumes Z is log-normal
"""
return np.exp(EofLogZ + 0.5 * varLogZ)
def calcVarZ(varLogZ, method, EofZ = None, EofLogZ = None):
"""
Uses propagation of uncertainty formula or
relationship between log-normal r.v.s and the normally distributed log of the log-normal r.v.s
to calculate var[Z] from EofZ and varLogZ (taken from Wikipedia)
"""
if method == 'uncertainty':
return varLogZ * EofZ**2.
elif method == 'log-normal':
return np.exp(2. * EofLogZ + varLogZ) * (np.exp(varLogZ) - 1.)
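#Log-normal identities assumed by the conversion functions above: if log(Z) ~ N(mu, s^2) then
#	E[Z] = exp(mu + s^2 / 2) and var[Z] = exp(2 * mu + s^2) * (exp(s^2) - 1),
#which invert to mu = ln(E[Z]^2 / sqrt(var[Z] + E[Z]^2)) and s^2 = ln(1 + var[Z] / E[Z]^2).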
def calcVarLogZSkilling(H, nLive):
"""
Skilling works in log space throughout, including calculating the moments of log(*)
i.e. E[f(log(*))]. Thus he derives a value for the variance of log(Z), through his discussions of
Poisson fluctuations whilst exploring the posterior.
"""
return H / nLive
################Calculate Z moments and H a-posteri using Keeton's methods
#wrappers around E[t] functions for calculating powers of them.
#required for calculating Z moments with Keeton's method.
#E[t]^i
EoftPowi = lambda nLive, i : calct(nLive)**i
#E[t^2]^i
Eoft2Powi = lambda nLive, i : calct2(nLive)**i
#(E[t^2]/E[t])^i
Eoft2OverEoftPowi = lambda nLive, i : (calct2(nLive) / calct(nLive))**i
def calcEofftArr(Eofft, nLive, n):
"""
	Calculates E[f(t)]^i and returns each value with yield.
	yield means that the next time the generator is advanced,
	it picks up from where it last returned,
	with the same variable values as before returning.
	Note the function body isn't executed until the generator it returns is iterated over.
	Yielding from a loop here is faster than filling in a blank array
"""
for i in range(1, n+1):
yield Eofft(nLive, i)
def getEofftArr(Eofft, nLive, nest):
"""
faster than creating array of zeroes and looping over
"""
return np.fromiter(calcEofftArr(Eofft, nLive, nest), dtype = float, count = nest)
def calcZMomentsKeeton(Lhoods, nLive, nest):
"""
	calculate Z moments a posteriori with the full list of Lhoods used in the NS loop,
	using equations given in Keeton
"""
EofZ = calcEofZKeeton(Lhoods, nLive, nest)
EofZ2 = calcEofZ2Keeton(Lhoods, nLive, nest)
return EofZ, EofZ2
def calcEofZKeeton(Lhoods, nLive, nest):
"""
Calculate first moment of Z from main NS loop.
According to paper, this is just E[Z] = 1. / nLive * sum_i^nest L_i * E[t]^i
"""
EoftArr = getEofftArr(EoftPowi, nLive, nest)
LEoft = Lhoods * EoftArr
return 1. / nLive * LEoft.sum()
def calcEofZ2Keeton(Lhoods, nLive, nest):
"""
Calculate second (raw) moment of Z from main NS loop (equation 22 Keeton)
"""
const = 2. / (nLive * (nLive + 1.))
summations = calcSums(Lhoods, nLive, nest)
return const * summations
def calcSums(Lhoods, nLive, nest):
"""
Calculate double summation in equation (22) of Keeton using two generator (yielding) functions.
First one creates array associated with index of inner sum (which is subsequently summed).
Second one creates array of summed inner sums, which is then multiplied by array of
L_k * E[t]^k terms to give outer summation terms.
Outer summation terms are added together to give total of double sum.
"""
EoftArr = getEofftArr(EoftPowi, nLive, nest)
LEoft = Lhoods * EoftArr
innerSums = np.fromiter(calcInnerSums(Lhoods, nLive, nest), dtype = float, count = nest)
outerSums = LEoft * innerSums
return outerSums.sum()
def calcInnerSums(Lhoods, nLive, nest):
"""
Second generator (yielding) function, which returns inner sum for outer index k
"""
for k in range(1, nest + 1):
Eoft2OverEoftArr = getEofftArr(Eoft2OverEoftPowi, nLive, k)
innerTerms = Lhoods[:k] * Eoft2OverEoftArr
innerSum = innerTerms.sum()
yield innerSum
def calcSumsLoop(Lhoods, nLive, nest):
"""
Calculate double summation in equation (22) of Keeton using double for loop (one for each summation).
Inefficient (I think) but easy
"""
total = 0.
for k in range(1, nest + 1):
innerSum = 0.
for i in range(1, k + 1):
innerSum += Lhoods[i-1] * Eoft2OverEoftPowi(nLive, i)
outerSum = Lhoods[k-1] * EoftPowi(nLive, k) * innerSum
total += outerSum
return total
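#Quick equivalence check between the generator and loop versions (an illustrative sketch;
#any positive 1D Lhoods array of length >= nest works):
#	Lhoods = np.exp(-np.arange(1, 11) / 2.)
#	assert np.isclose(calcSums(Lhoods, 50, 10), calcSumsLoop(Lhoods, 50, 10))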
def calcHKeeton(EofZ, Lhoods, nLive, nest):
"""
Calculate H from KL divergence equation transformed to LX space
as given in Keeton.
"""
sumTerms = Lhoods * np.log(Lhoods) * getEofftArr(EoftPowi, nLive, nest)
sumTerm = 1. / nLive * sumTerms.sum()
return 1. / EofZ * sumTerm - np.log(EofZ)
def calcZMomentsKeetonLog(deadPointsLLhood, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZ, logEofZ2
def calcEofZKeetonLog(LLhoods, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZ
def calcEofZ2KeetonLog(LLhoods, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZ2
def calcHKeetonLog(logEofZK, deadPointsLLhood, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return H
def calcZMomentsFinalKeeton(finalLhoods, nLive, nest):
"""
	calculate Z moments a posteriori with the list of final Lhood points (the ones remaining at termination of the main loop),
	using equations given in Keeton
"""
EofZ = calcEofZFinalKeeton(finalLhoods, nLive, nest)
EofZ2 = calcEofZ2FinalKeeton(finalLhoods, nLive, nest)
return EofZ, EofZ2
def calcEofZFinalKeeton(finalLhoods, nLive, nest):
"""
Averages over Lhood, which I don't think is the correct thing to do as it doesn't correspond to a unique parameter vector value.
TODO: consider other ways of getting final contribution from livepoints with Keeton's method
"""
LhoodAv = finalLhoods.mean()
EofFinalX = EoftPowi(nLive, nest)
return EofFinalX * LhoodAv
def calcEofZ2FinalKeeton(finalLhoods, nLive, nest):
"""
Averages over Lhood, which I don't think is the correct thing to do as it doesn't correspond to a unique parameter vector value.
TODO: consider other ways of getting final contribution from livepoints with Keeton's method
"""
LhoodAv = finalLhoods.mean()
EofFinalX2 = Eoft2Powi(nLive, nest)
return LhoodAv**2. * EofFinalX2
def calcEofZZFinalKeeton(Lhoods, finalLhoods, nLive, nest):
"""
Averages over Lhood for contribution from final points,
which I don't think is the correct thing to do as it doesn't correspond to a unique parameter vector value.
TODO: consider other ways of getting final contribution from livepoints with Keeton's method
"""
finalLhoodAv = finalLhoods.mean()
finalTerm = finalLhoodAv / (nLive + 1.) * EoftPowi(nLive, nest)
Eoft2OverEoftArr = getEofftArr(Eoft2OverEoftPowi, nLive, nest)
loopTerms = Lhoods * Eoft2OverEoftArr
loopTerm = loopTerms.sum()
return finalTerm * loopTerm
def calcHTotalKeeton(EofZ, Lhoods, nLive, nest, finalLhoods):
"""
Calculates total value of H based on KL divergence equation transformed to
LX space as given in Keeton.
Uses H function used to calculate loop H value (but with total Z), and adapts
final result to give HTotal
"""
LAv = finalLhoods.mean()
HPartial = calcHKeeton(EofZ, Lhoods, nLive, nest)
return HPartial + 1. / EofZ * LAv * np.log(LAv) * EoftPowi(nLive, nest)
def calcZMomentsFinalKeetonLog(livePointsLLhood, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZFinal, logEofZ2Final
def calcEofZFinalKeetonLog(finalLLhoods, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZFinal
def calcEofZ2FinalKeetonLog(finalLLhoods, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZ2Final
def calcEofZZFinalKeetonLog(deadPointsLLhood, livePointsLLhood, nLive, nest):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZZFinalK
def calcHTotalKeetonLog(logEofZFinalK, deadPointsLLhood, nLive, nest, livePointsLLhood):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return H
#Functions for combining contributions from main NS loop and termination ('final' quantities) for estimate or Z and its error
def getEofZTotalKeeton(EofZ, EofZFinal):
"""
get total from NS loop and final contributions
"""
return EofZ + EofZFinal
def getEofZ2TotalKeeton(EofZ2, EofZ2Final):
"""
get total from NS loop and final contributions
"""
return EofZ2 + EofZ2Final
def getVarTotalKeeton(varZ, varZFinal, EofZ, EofZFinal, EofZZFinal):
"""
Get total variance from NS loop and final contributions.
For recursive method, since E[ZLive] = E[ZTot] etc.,
and assuming that the recurrence relations account for the covariance between
Z and ZFinal, this is just varZFinal.
For Keeton's method, have to explicitly account for correlation as expectations for Z and ZLive are essentially calculated independently
TODO: check if recurrence relations of Z and ZFinal properly account for correlation between two
"""
return varZ + varZFinal + 2. * (EofZZFinal - EofZ * EofZFinal)
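#i.e. var[Z + ZFinal] = var[Z] + var[ZFinal] + 2 * cov[Z, ZFinal], with
#cov[Z, ZFinal] = E[Z * ZFinal] - E[Z] * E[ZFinal] supplied via calcEofZZFinalKeeton.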
def getVarTotalKeetonLog(logVarZ, logVarZFinal, logEofZ, logEofZFinal, logEofZZFinal):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logVarZTotal
def getEofZTotalKeetonLog(logEofZ, logEofZFinal):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZTotal
def getEofZ2TotalKeetonLog(logEofZ2, logEofZ2Final):
"""
TODO
"""
print("not implemented yet. Exiting")
sys.exit(1)
return logEofZ2Total
#############DEPRECATED I THINK
def getLogEofXLogEofw(nLive, X):
"""
	get increment (part of the weight for the posterior and evidence calculations) based on the previous value of X; calculates the latest X using t obtained from either an expected value or sampling. The expected value can be of t (E[t]) or of log(t) (E[log(t)]); these are roughly the same for large nLive.
	Sampling can take two forms: sampling from the pdf, or taking the highest of nLive U[0,1] draws (from which the form of the pdf is derived), so they should in theory be the same.
"""
expectation = 't'
t = calct(nLive, expectation)
XNew = X * t
return np.log(XNew), np.log(X - XNew)
#############DEPRECATED I THINK
def getLogEofWeight(logw, LLhood_im1, LLhood_i, trapezoidalFlag):
"""
calculates logw + log(f(L_im1, L_i)) where f(L_im1, L_i) = L_i for standard quadrature
and f(L_im1, L_i) = (L_im1 + L_i) / 2. for the trapezium rule
"""
if trapezoidalFlag:
return np.log(0.5) + logw + np.logaddexp(LLhood_im1, LLhood_i) #from Will's implementation, Z = sum (X_im1 - X_i) * 0.5 * (L_i + L_im1)
else:
return logw + LLhood_i #weight of deadpoint (for posterior) = prior mass decrement * likelihood
############plotting functions
def plotPhysPosteriorIW(x, unnormalisedSamples, Z, space):
"""
Plots posterior in physical space according to importance weights w(theta)L(theta) / Z. Doesn't use KDE so isn't true shape of posterior.
If inputting logWeights/ logZ then set space == 'log'
"""
if space == 'log':
normalisedSamples = np.exp(unnormalisedSamples - Z)
else:
normalisedSamples = unnormalisedSamples / Z
plt.figure('phys posterior')
plt.scatter(x, normalisedSamples)
plt.show()
plt.close()
def plotXPosterior(X, L, Z, space):
"""
Plots X*L(X)/Z in log X space, not including KDE methods
"""
if space == 'log':
LhoodDivZ = np.exp(L - Z)
X = np.exp(X)
else:
LhoodDivZ = L / Z
LXovrZ = X * LhoodDivZ
plt.figure('posterior')
plt.scatter(X, LXovrZ)
	plt.xscale('log')
plt.show()
plt.close()
def callGetDist(chainsFilePrefix, plotName, nParams):
"""
produces triangular posterior plots using getDist for first nParams
parameters from chains file as labelled in that file and in .paramnames
"""
paramList = ['p' + str(i+1) for i in range(nParams)]
chains = getdist.loadMCSamples(chainsFilePrefix)
g = getdist.plots.getSubplotPlotter()
g.triangle_plot([chains], paramList,
filled_compare=True)
g.export(plotName)
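#Hypothetical usage sketch (paths are placeholders): after a run with setupDict['outputFile'] = './output/test',
#	callGetDist('./output/test', './plots/test.png', 2)
#loads the test.txt / test.paramnames files written by writeOutput and saves a triangle plot.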
###################print output functions
def printUpdate(nest, deadPointPhys, deadPointLhood, EofZ, livePointPhys, livePointLhood, space):
"""
gives update on latest deadpoint and newpoint found to replace it
"""
if space == 'log':
L = 'LLhood'
Z = 'ln(E[Z])'
elif space == 'linear':
L = 'Lhood'
Z = 'E[Z]'
else:
print("invalid space")
sys.exit(1)
print("for deadpoint %i: physical value = %s %s value = %f" %(nest, deadPointPhys, L, deadPointLhood))
print("%s = %s" % (Z, EofZ))
print("new live point obtained: physical value = %s %s has value = %s" %(livePointPhys, L, livePointLhood))
def printBreak():
"""
tell user final contribution to sampling is being calculated
"""
print("adding final contribution from remaining live points")
def printZHValues(EofZ, EofZ2, varZ, H, space, stage, method):
"""
	print values of Z (including various moments and the variance) and H
in either log or linear space, at a given stage and calculated by a given method
"""
if space == 'log':
Z = 'ln(E[Z])'
		Z2 = 'ln(E[Z^2])'
var = 'ln(var[Z])'
elif space == 'linear':
Z = 'E[Z]'
		Z2 = 'E[Z^2]'
var = 'var[Z]'
else:
print("invalid space")
sys.exit(1)
print("%s %s (%s) = %s" %(Z, stage, method, EofZ))
print("%s %s (%s) = %s" %(Z2, stage, method, EofZ2))
print("%s %s (%s) = %s" %(var, stage, method, varZ))
print("H %s (%s) = %s" %(stage, method, H))
def printTheoretical(ZTheor, ZTheorErr, HTheor, HTheorErr):
"""
Outputs values for theoretical values of Z and H (and their errors)
"""
print("ZTheor = %s" %ZTheor)
print("ZTheorErr = %s" %ZTheorErr)
print("HTheor = %s" %HTheor)
print("HTheorErr = %s" %HTheorErr)
def printSampleNum(numSamples):
"""
Print number of samples used in sampling (including final livepoints used for posterior weights)
"""
print("total number of samples = %i" %numSamples)
def printTerminationUpdateInfo(nest, terminator):
"""
Print update on termination status when evaluating by H value
"""
print("current end value is %i. Termination value is %f" %(nest, terminator))
def printTerminationUpdateZ(EofZLive, endValue, terminationFactor, space):
"""
Print update on termination status when evaluating by Z ratio
"""
if space == 'linear':
Z = 'E[ZLive]'
elif space == 'log':
Z = 'log(E[ZLive])'
else:
print("invalid space")
sys.exit(1)
print("%s = %s" %(Z, EofZLive))
print("current end value is %s. Termination value is %s" %(endValue, terminationFactor))
def printFinalLivePoints(i, physValue, Lhood, ZLiveType, space):
"""
print information about final livepoints used to calculate final
contribution to Z/ posterior samples.
"""
if space == 'linear':
L = 'Lhood'
elif space == 'log':
if ZLiveType == 'average Lhood':
L = 'log(average Lhood)'
else:
L = 'LLhood'
else:
print("invalid space")
sys.exit(1)
if ZLiveType == 'average Lhood':
print("'average' physical value = %s (n.b. this has no useful meaning), %s = %s" %(physValue, L, Lhood))
elif ZLiveType == 'average X':
print("remaining livepoint number %i: physical value = %s %s value = %s" %(i, physValue, L, Lhood))
elif ZLiveType == 'max Lhood':
print("maximum %s remaining livepoint number %i: physical value = %s %s value = %s" %(L, i, physValue, L, Lhood))
############nested run functions
def NestedRun(priorParams, LLhoodParams, paramNames, setupDict):
"""
	function which completes a NS run. Parameters of the priors and likelihood need to be specified, as well as a flag indicating the type of prior for each dimension and the pdf for the Lhood.
setupDict contains other setup parameters such as termination type & factor, method of finding new livepoint, details of how weights are calculated, how final Z contribution is added, and directory/file prefix for saved files.
"""
nLive = 50
nDims = len(paramNames)
checkInputParamsShape(priorParams, LLhoodParams, nDims)
livePoints = np.random.rand(nLive, nDims) #initialise livepoints to random values uniformly on [0,1]^D
priorObjs = fitPriors(priorParams)
priorFuncsPdf = getPriorPdfs(priorObjs)
priorFuncsPpf = getPriorPpfs(priorObjs)
livePointsPhys = invPrior(livePoints, priorFuncsPpf) #Convert livepoint values to physical values
LhoodObj = fitLhood(LLhoodParams)
LLhoodFunc = LLhood(LhoodObj)
livePointsLLhood = LLhoodFunc(livePointsPhys) #calculate LLhood values of initial livepoints
# initialise lists for storing values
logEofXArr = []
logEofWeights = []
deadPoints = []
deadPointsPhys = []
deadPointsLLhood = []
# initialise mean and variance of Z variables and other moments
logEofZ = -np.inf
logEofZ2 = -np.inf
logEofZX = -np.inf
logEofX = 0.
logEofX2 = 0.
logX = 0.
# initialise other variables
LLhoodStar = -np.inf
H = 0.
nest = 0
logZLive = np.inf
checkTermination = 100
#begin nested sample loop
while True:
LLhoodStarOld = LLhoodStar
deadIndex = np.argmin(livePointsLLhood) #index of lowest likelihood livepoint (next deadpoint)
LLhoodStar = livePointsLLhood[deadIndex] #LLhood of dead point and new target
#update expected values of moments of X and Z, and get posterior weights
logEofZNew, logEofZ2, logEofZX, logEofX, logEofX2, logEofWeight = updateLogZnXMoments(nLive, logEofZ, logEofZ2, logEofZX, logEofX, logEofX2, LLhoodStarOld, LLhoodStar, setupDict['trapezoidalFlag'])
logEofXArr.append(logEofX)
logEofWeights.append(logEofWeight)
H = updateHLog(H, logEofWeight, logEofZNew, LLhoodStar, logEofZ)
logEofZ = logEofZNew #update evidence part II
		#WARNING: VIEWING A NUMPY SLICE (IE NOT USING NP.COPY) DOES NOT CREATE A COPY, SO A POSTERIORI CHANGES TO THE ARRAY WILL AFFECT THE PREVIOUSLY SLICED ARRAY
		#USE NP.MAY_SHARE_MEMORY(A, B) TO SEE IF ARRAYS SHARE MEMORY; PYTHON'S 'IS' KEYWORD DOESN'T WORK
deadPointPhys = np.copy(livePointsPhys[deadIndex]).reshape(1,-1)
deadPointsPhys.append(deadPointPhys)
deadPointLLhood = LLhoodStar
deadPointsLLhood.append(deadPointLLhood)
#update array where last deadpoint was with new livepoint picked subject to L_new > L*
if setupDict['sampler'] == 'blind':
livePointsPhys[deadIndex], livePointsLLhood[deadIndex] = getNewLiveBlind(priorFuncsPpf, LLhoodFunc, LLhoodStar)
elif setupDict['sampler'] == 'MH':
livePointsPhys[deadIndex], livePointsLLhood[deadIndex] = getNewLiveMH(livePointsPhys, deadIndex, priorFuncsPdf, priorParams, LLhoodFunc, LLhoodStar)
if setupDict['verbose']:
printUpdate(nest, deadPointPhys, deadPointLLhood, logEofZ, livePointsPhys[deadIndex].reshape(1, -1), livePointsLLhood[deadIndex], 'log')
nest += 1
if nest % checkTermination == 0:
breakFlag, liveMaxIndex, liveLLhoodMax, avLLhood, nFinal = tryTerminationLog(setupDict['verbose'], setupDict['terminationType'], setupDict['terminationFactor'], nest, nLive, logEofX, livePointsLLhood, LLhoodStar, setupDict['ZLiveType'], setupDict['trapezoidalFlag'], logEofZ, H)
if breakFlag: #termination condition was reached
break
EofZ = np.exp(logEofZ)
EofZ2 = np.exp(logEofZ2)
varZ = calcVariance(EofZ, EofZ2)
#logVarZ = calcVarianceLog(logEofZ, logEofZ2)
#logEofZK, logEofZ2K = calcZMomentsKeetonLog(np.array(deadPointsLLhood), nLive, nest)
EofZK, EofZ2K = calcZMomentsKeeton(np.exp(np.array(deadPointsLLhood)), nLive, nest)
varZK = calcVariance(EofZK, EofZ2K)
#logVarZK = calcVarianceLog(logEofZK, logEofZ2K)
#HK = calcHKeetonLog(logEofZK, np.array(deadPointsLLhood), nLive, nest)
HK = calcHKeeton(EofZK, np.exp(np.array(deadPointsLLhood)), nLive, nest)
if setupDict['verbose']:
printBreak()
printZHValues(EofZ, EofZ2, varZ, H, 'linear', 'before final', 'recursive')
printZHValues(EofZK, EofZ2K, varZK, HK, 'linear', 'before final', 'Keeton equations')
#printZHValues(logEofZ, logEofZ2, logVarZ, H, 'log', 'before final', 'recursive')
#printZHValues(logEofZK, logEofZ2K, logVarZK, HK, 'log', 'before final', 'Keeton equations')
logEofZTotal, logEofZ2Total, H, livePointsPhysFinal, livePointsLLhoodFinal, logEofXFinalArr = getFinalContributionLog(setupDict['verbose'], setupDict['ZLiveType'], setupDict['trapezoidalFlag'], nFinal, logEofZ, logEofZ2, logEofX, logEofWeights, H, livePointsPhys, livePointsLLhood, avLLhood, liveLLhoodMax, liveMaxIndex, LLhoodStar)
totalPointsPhys, totalPointsLLhood, logEofXArr, logEofWeights = getTotal(deadPointsPhys, livePointsPhysFinal, deadPointsLLhood, livePointsLLhoodFinal, logEofXArr, logEofXFinalArr, logEofWeights)
EofZTotal = np.exp(logEofZTotal)
EofZ2Total = np.exp(logEofZ2Total)
varZ = calcVariance(EofZTotal, EofZ2Total)
#logEofZFinalK, logEofZ2FinalK = calcZMomentsFinalKeetonLog(livePointsLLhood, nLive, nest)
EofZFinalK, EofZ2FinalK = calcZMomentsFinalKeeton(np.exp(livePointsLLhood), nLive, nest)
varZFinalK = calcVariance(EofZFinalK, EofZ2FinalK)
#logVarZFinalK = calcVarianceLog(logEofZFinalK, logEofZ2FinalK)
#logEofZZFinalK = calcEofZZFinalKeetonLog(np.array(deadPointsLLhood), livePointsLLhood, nLive, nest)
EofZZFinalK = calcEofZZFinalKeeton(np.exp(np.array(deadPointsLLhood)), np.exp(livePointsLLhood), nLive, nest)
#logVarZTotalK = getVarTotalKeetonLog(logVarZK, logVarZFinalK, logEofZK, logEofZFinalK, logEofZZFinalK)
varZTotalK = getVarTotalKeeton(varZK, varZFinalK, EofZK, EofZFinalK, EofZZFinalK)
#logEofZTotalK = getEofZTotalKeetonLog(logEofZK, logEofZFinalK)
EofZTotalK = getEofZTotalKeeton(EofZK, EofZFinalK)
#logEofZ2TotalK = getEofZ2TotalKeetonLog(logEofZ2K, logEofZ2FinalK)
EofZ2TotalK = getEofZ2TotalKeeton(EofZ2K, EofZ2FinalK)
#HK = calcHTotalKeetonLog(logEofZTotalK, np.array(deadPointsLLhood), nLive, nest, livePointsLLhood)
HK = calcHTotalKeeton(EofZTotalK, np.exp(np.array(deadPointsLLhood)), nLive, nest, np.exp(livePointsLLhood))
priorFuncsLogPdf = getPriorLogPdfs(priorObjs)
ZTheor, ZTheorErr, priorVolume = calcZTheor(priorParams, priorFuncsLogPdf, LLhoodFunc, nDims)
HTheor, HTheorErr = calcHTheor(priorParams, priorFuncsPdf, LLhoodFunc, nDims, ZTheor, ZTheorErr)
numSamples = len(totalPointsPhys[:,0])
if setupDict['verbose']:
printZHValues(EofZTotal, EofZ2Total, varZ, H, 'linear', 'total', 'recursive')
printZHValues(EofZFinalK, EofZ2FinalK, varZFinalK, 'not calculated', 'linear', 'final contribution', 'Keeton equations')
printZHValues(EofZTotalK, EofZ2TotalK, varZTotalK, HK, 'linear', 'total', 'Keeton equations')
#printZHValues(logEofZTotal, logEofZ2Total, logVarZ, H, 'log', 'total', 'recursive')
#printZHValues(logEofZFinalK, logEofZ2FinalK, logVarZFinalK, 'not calculated', 'log', 'final contribution', 'Keeton equations')
#printZHValues(logEofZTotalK, logEofZ2TotalK, logVarZTotalK, HK, 'log', 'total', 'Keeton equations')
printTheoretical(ZTheor, ZTheorErr, HTheor, HTheorErr)
if setupDict['outputFile']:
		writeOutput(setupDict['outputFile'], totalPointsPhys, totalPointsLLhood, logEofWeights, logEofXArr, paramNames, 'log')
return logEofZ, totalPointsPhys, totalPointsLLhood, logEofWeights, logEofXArr
def NestedRunLinear(priorParams, LhoodParams, paramNames, setupDict):
"""
	function which completes a NS run. Parameters of the priors and likelihood need to be specified, as well as a flag indicating the type of prior for each dimension and the pdf for the Lhood.
setupDict contains other setup parameters such as termination type & factor, method of finding new livepoint, details of how weights are calculated, how final Z contribution is added, and directory/file prefix for saved files.
"""
nLive = 50
nDims = len(paramNames)
checkInputParamsShape(priorParams, LhoodParams, nDims)
livePoints = np.random.rand(nLive, nDims) #initialise livepoints to random values uniformly on [0,1]^D
priorObjs = fitPriors(priorParams)
priorFuncsPdf = getPriorPdfs(priorObjs)
priorFuncsPpf = getPriorPpfs(priorObjs)
livePointsPhys = invPrior(livePoints, priorFuncsPpf) #Convert livepoint values to physical values
LhoodObj = fitLhood(LhoodParams)
LhoodFunc = Lhood(LhoodObj)
livePointsLhood = LhoodFunc(livePointsPhys) #calculate LLhood values of initial livepoints
# initialise lists for storing values
EofXArr = []
EofWeights = []
deadPoints = []
deadPointsPhys = []
deadPointsLhood = []
# initialise mean and variance of Z variables and other moments
EofZ = 0.
EofZ2 = 0.
EofZX = 0.
EofX = 1.
EofX2 = 1.
X = 1.
# initialise other variables
LhoodStar = 0.
H = 0.
nest = 0
ZLive = np.inf
checkTermination = 100
#begin nested sample loop
while True:
LhoodStarOld = LhoodStar
deadIndex = np.argmin(livePointsLhood) #index of lowest likelihood livepoint (next deadpoint)
LhoodStar = livePointsLhood[deadIndex] #LLhood of dead point and new target
#update expected values of moments of X and Z, and get posterior weights
EofZNew, EofZ2, EofZX, EofX, EofX2, EofWeight = updateZnXMoments(nLive, EofZ, EofZ2, EofZX, EofX, EofX2, LhoodStarOld, LhoodStar, setupDict['trapezoidalFlag'])
EofXArr.append(EofX)
EofWeights.append(EofWeight)
H = updateH(H, EofWeight, EofZNew, LhoodStar, EofZ)
EofZ = EofZNew #update evidence part II
		#WARNING: VIEWING A NUMPY SLICE (IE NOT USING NP.COPY) DOES NOT CREATE A COPY, SO A POSTERIORI CHANGES TO THE ARRAY WILL AFFECT THE PREVIOUSLY SLICED ARRAY
		#USE NP.MAY_SHARE_MEMORY(A, B) TO SEE IF ARRAYS SHARE MEMORY; PYTHON'S 'IS' KEYWORD DOESN'T WORK
deadPointPhys = np.copy(livePointsPhys[deadIndex]).reshape(1,-1)
deadPointsPhys.append(deadPointPhys)
deadPointLhood = LhoodStar
deadPointsLhood.append(deadPointLhood)
#update array where last deadpoint was with new livepoint picked subject to L_new > L*
if setupDict['sampler'] == 'blind':
livePointsPhys[deadIndex], livePointsLhood[deadIndex] = getNewLiveBlind(priorFuncsPpf, LhoodFunc, LhoodStar)
elif setupDict['sampler'] == 'MH':
livePointsPhys[deadIndex], livePointsLhood[deadIndex] = getNewLiveMH(livePointsPhys, deadIndex, priorFuncsPdf, priorParams, LhoodFunc, LhoodStar)
if setupDict['verbose']:
printUpdate(nest, deadPointPhys, deadPointLhood, EofZ, livePointsPhys[deadIndex].reshape(1, -1), livePointsLhood[deadIndex], 'linear')
nest += 1
if nest % checkTermination == 0:
breakFlag, liveMaxIndex, liveLhoodMax, avLhood, nFinal = tryTermination(setupDict['verbose'], setupDict['terminationType'], setupDict['terminationFactor'], nest, nLive, EofX, livePointsLhood, LhoodStar, setupDict['ZLiveType'], setupDict['trapezoidalFlag'], EofZ, H)
if breakFlag: #termination condition was reached
break
varZ = calcVariance(EofZ, EofZ2)
EofZK, EofZ2K = calcZMomentsKeeton(np.array(deadPointsLhood), nLive, nest)
varZK = calcVariance(EofZK, EofZ2K)
HK = calcHKeeton(EofZK, np.array(deadPointsLhood), nLive, nest)
if setupDict['verbose']:
printBreak()
printZHValues(EofZ, EofZ2, varZ, H, 'linear', 'before final', 'recursive')
printZHValues(EofZK, EofZ2K, varZK, HK, 'linear', 'before final', 'Keeton equations')
EofZTotal, EofZ2Total, H, livePointsPhysFinal, livePointsLhoodFinal, EofXFinalArr = getFinalContribution(setupDict['verbose'], setupDict['ZLiveType'], setupDict['trapezoidalFlag'], nFinal, EofZ, EofZ2, EofX, EofWeights, H, livePointsPhys, livePointsLhood, avLhood, liveLhoodMax, liveMaxIndex, LhoodStar)
totalPointsPhys, totalPointsLhood, EofXArr, EofWeights = getTotal(deadPointsPhys, livePointsPhysFinal, deadPointsLhood, livePointsLhoodFinal, EofXArr, EofXFinalArr, EofWeights)
varZ = calcVariance(EofZTotal, EofZ2Total)
EofZFinalK, EofZ2FinalK = calcZMomentsFinalKeeton(livePointsLhood, nLive, nest)
varZFinalK = calcVariance(EofZFinalK, EofZ2FinalK)
EofZZFinalK = calcEofZZFinalKeeton(np.array(deadPointsLhood), livePointsLhood, nLive, nest)
varZTotalK = getVarTotalKeeton(varZK, varZFinalK, EofZK, EofZFinalK, EofZZFinalK)
EofZTotalK = getEofZTotalKeeton(EofZK, EofZFinalK)
EofZ2TotalK = getEofZ2TotalKeeton(EofZ2K, EofZ2FinalK)
HK = calcHTotalKeeton(EofZTotalK, np.array(deadPointsLhood), nLive, nest, livePointsLhood)
LLhoodFunc = LLhood(LhoodObj)
priorFuncsLogPdf = getPriorLogPdfs(priorObjs)
ZTheor, ZTheorErr, priorVolume = calcZTheor(priorParams, priorFuncsLogPdf, LLhoodFunc, nDims)
HTheor, HTheorErr = calcHTheor(priorParams, priorFuncsPdf, LLhoodFunc, nDims, ZTheor, ZTheorErr)
numSamples = len(totalPointsPhys[:,0])
if setupDict['verbose']:
printZHValues(EofZTotal, EofZ2Total, varZ, H, 'linear', 'total', 'recursive')
printZHValues(EofZFinalK, EofZ2FinalK, varZFinalK, 'not calculated', 'linear', 'final contribution', 'Keeton equations')
#print "EofZZFinal (keeton) = %s" %EofZZFinalK
printZHValues(EofZTotalK, EofZ2TotalK, varZTotalK, HK, 'linear', 'total', 'Keeton equations')
printTheoretical(ZTheor, ZTheorErr, HTheor, HTheorErr)
if setupDict['outputFile']:
		writeOutput(setupDict['outputFile'], totalPointsPhys, totalPointsLhood, EofWeights, EofXArr, paramNames, 'linear')
return EofZ, totalPointsPhys, totalPointsLhood, EofWeights, EofXArr
###########main function
def main():
#samplingFlag and maxPoints flag are explained in getIncrement(...) function
setupDict = {'verbose':True, 'trapezoidalFlag': False, 'ZLiveType':'average Lhood', 'terminationType':'evidence', 'terminationFactor':0.5, 'sampler':'MH', 'outputFile':'./output/test'}
#priorParams is (3,nDims) shape array. For a given parameter, first value indicates prior type (1 = UNIFORM, 2 = NORMAL)
#for UNIFORM PDF, 2nd value is lower bound, 3rd value is upper bound
#for NORMAL PDF, 2nd value is mean, 3rd value is variance (parameter priors are assumed to be INDEPENDENT)
#priorParams = np.array([[1, -5., 5.], [1, -5., 5.], [1, -5., 5.], [1, -5., 5.]]).T
priorParams = np.array([[1, -5., 5.], [1, -5., 5.]]).T
#LLhoodParams has shape (1, (1, nDims), (nDims, nDims)). First value (scalar) indicates type of likelihood function (2 = NORMAL)
#second element (shape (1, nDims)) is the mean value for the likelihood in each dimension.
#third element (shape (nDims, nDims)) is the covariance matrix for the likelihood
LLhoodParams = [2, np.array([0., 0.]).reshape(1,2), np.array([1., 0., 0., 1.]).reshape(2,2)]
#LLhoodParams = [2, np.array([0., 0., 0., 0.]).reshape(1,4), np.array([1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1.]).reshape(4,4)]
#paramNames = ['\\theta_1', '\\theta_2', '\\theta_3', '\\theta_4']
paramNames = ['\\theta_1', '\\theta_2']
#ensures numpy raises FloatingPointError associated with exp(-inf)*-inf
np.seterr(all = 'raise')
	np.random.seed(0) #fix the seed if you want NS to use the same randomisations each run
	#NestedRun works in log space and returns log quantities; NestedRunLinear is the linear-space equivalent
	logEofZ, totalPointsPhys, totalPointsLLhood, logWeights, Xarr = NestedRun(priorParams, LLhoodParams, paramNames, setupDict)
#callGetDist('./output/MH_gauss_uniform_4D', './plots/MH_gauss_uniform_4D', len(paramNames))
#plotXPosterior(Xarr, totalPointsLLhood, logZ)
#for i in range(len(totalPointsPhys[0,:])):
# plotPhysPosteriorIW(totalPointsPhys[:,i], logWeights, logZ)
if __name__ == '__main__':
main() | PypiClean |
/HCGB-0.5.5.tar.gz/HCGB-0.5.5/README.md | # Description
This is a basic Python module that contains multiple functions used across different projects.
## Contents
HCGB/sampleParser:
* files.py
* merge.py
* samples.py
HCGB/functions:
* aesthetics_functions.py
* fasta_functions.py
* main_functions.py
* time_functions.py
* blast_functions.py
* files_functions.py
* system_call_functions.py
## Copyright & License
MIT License
Copyright (c) 2020-2021 HCGB-IGTP
http://www.germanstrias.org/technology-services/genomica-bioinformatica/
| PypiClean |
/AaronTools-1.0b14.tar.gz/AaronTools-1.0b14/README.md | <a href="https://badge.fury.io/py/AaronTools"><img src="https://badge.fury.io/py/AaronTools.svg" alt="PyPI version" height="18"></a>
# AaronTools.py
AaronTools provides a collection of tools for automating routine tasks encountered when running quantum chemistry computations.
These tools can be used either directly within a Python script using AaronTools objects, or via a series of command-line scripts.
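For example, structures can be loaded and manipulated directly through AaronTools objects. The snippet below is only a minimal sketch; the input file name is a placeholder, and the exact class and method names should be verified against the Wiki:

```python
# Minimal sketch: load a structure from an XYZ file and list its atoms.
from AaronTools.geometry import Geometry

geom = Geometry("benzene.xyz")  # placeholder file name
for atom in geom.atoms:
    print(atom)
```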
See the <a href="https://github.com/QChASM/AaronTools.py/wiki">Wiki</a> for installation and usage.
AaronTools is described in
"QChASM: Quantum Chemistry Automation and Structure Manipulation" <a href="http://dx.doi.org/10.1002/wcms.1510" target="_blank"><i>WIREs Comp. Mol. Sci.</i> <b>11</b>, e1510 (2021)</a>.
A Perl implementation of AaronTools is also <a href="https://github.com/QChASM/AaronTools">available here.</a>
However, users are <em>strongly urged</em> to use the Python version since it has far more powerful features and, unlike the Perl version, will continue to be developed and supported.
## Citation
If you use the Python AaronTools, please cite:
V. M. Ingman, A. J. Schaefer, L. R. Andreola, and S. E. Wheeler "QChASM: Quantum Chemistry Automation and Structure Manipulation" <a href="http://dx.doi.org/10.1002/wcms.1510" target="_blank"><i>WIREs Comp. Mol. Sci.</i> <b>11</b>, e1510 (2021)</a>
## Contact
If you have any questions or would like to discuss bugs or additional needed features, feel free to contact us at qchasm@uga.edu
| PypiClean |
/DJModels-0.0.6-py3-none-any.whl/djmodels/core/management/commands/compilemessages.py | import codecs
import concurrent.futures
import glob
import os
from djmodels.core.management.base import BaseCommand, CommandError
from djmodels.core.management.utils import find_command, popen_wrapper
def has_bom(fn):
with open(fn, 'rb') as f:
sample = f.read(4)
return sample.startswith((codecs.BOM_UTF8, codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE))
def is_writable(path):
# Known side effect: updating file access/modified time to current time if
# it is writable.
try:
with open(path, 'a'):
os.utime(path, None)
except (IOError, OSError):
return False
return True
class Command(BaseCommand):
help = 'Compiles .po files to .mo files for use with builtin gettext support.'
requires_system_checks = False
program = 'msgfmt'
program_options = ['--check-format']
def add_arguments(self, parser):
parser.add_argument(
'--locale', '-l', action='append', default=[],
help='Locale(s) to process (e.g. de_AT). Default is to process all. '
'Can be used multiple times.',
)
parser.add_argument(
'--exclude', '-x', action='append', default=[],
help='Locales to exclude. Default is none. Can be used multiple times.',
)
parser.add_argument(
'--use-fuzzy', '-f', dest='fuzzy', action='store_true',
help='Use fuzzy translations.',
)
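    # Example invocation (hypothetical; assumes a standard manage.py entry
    # point, which is not part of this module, and illustrative locale names):
    #   python manage.py compilemessages -l de_AT -l pt_BR -x fr --use-fuzzy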
def handle(self, **options):
locale = options['locale']
exclude = options['exclude']
self.verbosity = options['verbosity']
if options['fuzzy']:
self.program_options = self.program_options + ['-f']
if find_command(self.program) is None:
raise CommandError("Can't find %s. Make sure you have GNU gettext "
"tools 0.15 or newer installed." % self.program)
basedirs = [os.path.join('conf', 'locale'), 'locale']
if os.environ.get('DJMODELS_SETTINGS_MODULE'):
from djmodels.conf import settings
basedirs.extend(settings.LOCALE_PATHS)
# Walk entire tree, looking for locale directories
for dirpath, dirnames, filenames in os.walk('.', topdown=True):
for dirname in dirnames:
if dirname == 'locale':
basedirs.append(os.path.join(dirpath, dirname))
# Gather existing directories.
basedirs = set(map(os.path.abspath, filter(os.path.isdir, basedirs)))
if not basedirs:
raise CommandError("This script should be run from the Django Git "
"checkout or your project or app tree, or with "
"the settings module specified.")
# Build locale list
all_locales = []
for basedir in basedirs:
locale_dirs = filter(os.path.isdir, glob.glob('%s/*' % basedir))
all_locales.extend(map(os.path.basename, locale_dirs))
# Account for excluded locales
locales = locale or all_locales
locales = set(locales).difference(exclude)
self.has_errors = False
for basedir in basedirs:
if locales:
dirs = [os.path.join(basedir, l, 'LC_MESSAGES') for l in locales]
else:
dirs = [basedir]
locations = []
for ldir in dirs:
for dirpath, dirnames, filenames in os.walk(ldir):
locations.extend((dirpath, f) for f in filenames if f.endswith('.po'))
if locations:
self.compile_messages(locations)
if self.has_errors:
raise CommandError('compilemessages generated one or more errors.')
def compile_messages(self, locations):
"""
Locations is a list of tuples: [(directory, file), ...]
"""
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for i, (dirpath, f) in enumerate(locations):
if self.verbosity > 0:
self.stdout.write('processing file %s in %s\n' % (f, dirpath))
po_path = os.path.join(dirpath, f)
if has_bom(po_path):
self.stderr.write(
'The %s file has a BOM (Byte Order Mark). Django only '
'supports .po files encoded in UTF-8 and without any BOM.' % po_path
)
self.has_errors = True
continue
base_path = os.path.splitext(po_path)[0]
# Check writability on first location
if i == 0 and not is_writable(base_path + '.mo'):
self.stderr.write(
'The po files under %s are in a seemingly not writable location. '
'mo files will not be updated/created.' % dirpath
)
self.has_errors = True
return
args = [self.program] + self.program_options + [
'-o', base_path + '.mo', base_path + '.po'
]
futures.append(executor.submit(popen_wrapper, args))
for future in concurrent.futures.as_completed(futures):
output, errors, status = future.result()
if status:
if self.verbosity > 0:
if errors:
self.stderr.write("Execution of %s failed: %s" % (self.program, errors))
else:
self.stderr.write("Execution of %s failed" % self.program)
self.has_errors = True | PypiClean |
/Ajango-1.0.0.tar.gz/Ajango-1.0.0/ajango/core/factory.py | import importlib
from django.core.management.base import CommandError
from django.core.exceptions import ImproperlyConfigured
from django.utils.termcolors import make_style
from django.core.management.color import supports_color
from django.conf import settings
from abc import ABCMeta
class FactoryBase(object):
""" Klasa bazowa tworzaca fabryke. """
__metaclass__ = ABCMeta
def __init__(self, param=None):
self.class_name = ''
self.base_address = {}
self.object = param
self.str = None
self.init()
def init(self):
"""
        Initialization method.
        @param self: Factory object
"""
pass
def execution(self, fun):
"""
        Execute the object's tasks.
        @param self: Factory object
        @param fun: Function that initializes the object created by the factory
"""
return fun(self.object)
def _get_base_address(self):
"""
        Get the table of initializing classes.
        @param self: Factory object
"""
return self.base_address
def _create_object(self, key):
"""
        Create an object.
        @param self: Factory object
        @param key: Object key
@type key: str
"""
obj = "unKnown"
try:
base_address = self._get_base_address()
obj = base_address[key]
module = importlib.import_module(obj)
fun = getattr(module, self.class_name)
return self.execution(fun)
except KeyError:
raise CommandError("Doesn't know %s type: %r" %
(self.class_name, key))
except ImportError:
raise CommandError("Module %r doesn't exist" % obj)
def get_class_factory(self, key):
"""
        Get an object based on its key.
        @param self: Factory object
        @param key: Object key
@type key: str
"""
if supports_color():
blue = make_style(fg='cyan')
else:
blue = lambda text: text
print("Create '" + blue(key) + "' from '" +
blue(type(self).__name__) + "'")
return self._create_object(key)
def get_from_params(self):
"""
        Get an object based on the factory's own data.
        @param self: Factory object
"""
if supports_color():
blue = make_style(fg='cyan')
else:
blue = lambda text: text
print("Create '" + blue(self.str) + "' from '" +
blue(type(self).__name__) + "'")
return self._create_object(self.str)
def __add_modules(self, modules):
"""
        Add new modules to the factory.
        @param self: Factory object
        @param modules: Set of objects available in the factory
        @type modules: Dictionary in which the B{key} is the key for the factory
        and the B{value} is the address of the module containing the object.
"""
for elem in modules.keys():
if elem in self.base_address:
raise CommandError("Cannot rewrite %r key" % elem)
self.base_address[elem] = modules[elem]
def __set_class_name(self, class_name):
"""
        Set the class name of the object the factory loads from a module.
        The method stores the object's class name and tries to load initial
        data from the settings file.
        @param self: Factory object
        @param class_name: Name of the class located in the module file
@type class_name: str
"""
if self.class_name != "":
raise CommandError("Cannot update class_name for factory [%r]" %
class_name)
self.class_name = class_name
self.__read_items_from_settings()
def set_items(self, class_name, modules):
"""
        Set the factory options.
        This method should be called within the
        L{init(self) <ajango.core.factory.FactoryBase.init>} method of the
        factory class. It may be called only once; calling it again may
        cause undefined errors.
        @param self: Factory object
        @param class_name: Name of the class located in the module file
        @type class_name: str
        @param modules: Set of objects available in the factory
        @type modules: Dictionary in which the B{key} is the key for the factory
        and the B{value} is the address of the module containing the object.
"""
self.__set_class_name(class_name)
self.__add_modules(modules)
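    # Illustrative sketch (the factory and module names below are
    # hypothetical): a factory can also be extended from settings.py by
    # mapping its class_name to extra key -> module-path entries, e.g.
    #
    #   AJANGO_FACTORY = {
    #       'ViewBase': {'my_view': 'myapp.ajango_views'},
    #   }
    #
    # Such entries are merged in by __read_items_from_settings() and become
    # available through get_class_factory().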
def __read_items_from_settings(self):
"""
        Load objects into the factory from the settings.
        More information about adding objects to the factory can be found
        in the description of the L{Factory <ajango.core.factory>} module.
        @param self: Factory object
"""
try:
            if self.class_name not in settings.AJANGO_FACTORY:
                # the factory has no additional elements defined
return
tab = settings.AJANGO_FACTORY[self.class_name]
self.__add_modules(tab)
except AttributeError:
            # No definitions for additional factory objects
return
except ImproperlyConfigured:
            # The settings.py file is not configured correctly
return | PypiClean |
/MAVR-0.93.tar.gz/MAVR-0.93/scripts/sequence/histogram_length.py | __author__ = 'Sergei F. Kliver'
import argparse
"""
from numpy import arange, int32, append
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from RouToolPa.Collections.General import SynDict
"""""
from RouToolPa.Routines import DrawingRoutines
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input_file", action="store", dest="input_file",
help="Input file with sequences")
parser.add_argument("-o", "--output_prefix", action="store", dest="output_prefix",
help="Prefix of output files")
parser.add_argument("-f", "--format", action="store", dest="format", default="fasta",
help="Format of input file. Default - fasta")
parser.add_argument("-b", "--number_of_bins", action="store", dest="number_of_bins", type=int,
help="Number of bins in histogram. Incompatible with -w/--width_of_bins option. Default - 30")
parser.add_argument("-w", "--width_of_bins", action="store", dest="width_of_bins", type=int,
help="Width of bins in histogram. Incompatible with -b/--number_of_bins option. Not set by default")
parser.add_argument("-n", "--min_length", action="store", dest="min_length", type=int, default=1,
help="Minimum length of sequence to count. Default - 1")
parser.add_argument("-x", "--max_length", action="store", dest="max_length", type=int,
help="Maximum length of sequence to count. Default - length of longest sequence")
parser.add_argument("-e", "--extensions", action="store", dest="extensions", type=lambda x: x.split(","),
default=["png", "svg"],
help="Comma-separated list of extensions for histogram files")
parser.add_argument("-l", "--legend_location", action="store", dest="legend_location", default='best',
help="Legend location on histogram. Default - 'best'")
args = parser.parse_args()
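# Example usage (file names are illustrative):
#   python histogram_length.py -i transcripts.fasta -o transcripts_hist \
#       -w 100 -n 200 -x 10000 -e png,svg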
if (args.number_of_bins is not None) and (args.width_of_bins is not None):
raise AttributeError("Options -w/--width_of_bins and -b/--number_of_bins mustn't be set simultaneously")
sequence_dict = DrawingRoutines.parse_seq_file(args.input_file, format=args.format, mode="parse")
DrawingRoutines.draw_length_histogram(sequence_dict, args.output_prefix, number_of_bins=args.number_of_bins,
width_of_bins=args.width_of_bins, min_length=args.min_length,
max_length=args.max_length, extensions=args.extensions,
legend_location=args.legend_location)
"""
for record in sequence_dict:
length_dict[record] = len(sequence_dict[record].seq)
length_dict.write("%s.len" % args.output_prefix)
lengths = length_dict.values()
max_len = max(lengths)
if args.max_length is None:
args.max_length = max_len
if (args.max_length != max_len) and (args.min_length != 1):
filtered = []
for entry in lengths:
if args.min_length <= entry <= args.max_length:
filtered.append(entry)
else:
filtered = lengths
plt.figure(1, figsize=(6, 6))
plt.subplot(1, 1, 1)
if args.number_of_bins:
bins = args.number_of_bins
elif args.width_of_bins:
bins = arange(args.min_length - 1, args.max_length, args.width_of_bins, dtype=int32)
bins[0] += 1
bins = append(bins, [args.max_length])
else:
bins = 30
plt.hist(lengths, bins=bins)
plt.xlim(xmin=args.min_length, xmax=args.max_length)
plt.xlabel("Length")
plt.ylabel("N")
plt.title("Distribution of sequence lengths")
for ext in args.extensions:
plt.savefig("%s.%s" % (args.output_prefix, ext))
os.remove("temp.idx")
""" | PypiClean |
/DDFacet-0.7.2.0.tar.gz/DDFacet-0.7.2.0/SkyModel/Other/ClassCasaImage.py | from __future__ import division, absolute_import, print_function
from pyrap.images import image
import os
from SkyModel.Other import MyPickle
import numpy as np
#from SkyModel.Other import MyLogger
#log=MyLogger.getLogger("ClassCasaImage")
from DDFacet.Other import logger
log=logger.getLogger("ClassCasaImage")
from SkyModel.Other import rad2hmsdms
import astropy.io.fits as pyfits
import pyrap.images
def PutDataInNewImage(ImageNameIn,ImageNameOut,data,CorrT=False):
im=image(ImageNameIn)
F=pyfits.open(ImageNameIn)
F0=F[0]
nx=F0.header["NAXIS1"]
ny=F0.header["NAXIS2"]
npol=F0.header["NAXIS3"]
nch=F0.header["NAXIS4"]
shape=(nch,npol,ny,nx)
Dico=im.coordinates().dict()
cell=abs(Dico["direction0"]["cdelt"][0])*180/np.pi*3600
ra,dec=Dico["direction0"]["crval"]
CasaImage=ClassCasaimage(ImageNameOut,shape,cell,(ra,dec))
CasaImage.setdata(data,CorrT=CorrT)
CasaImage.ToFits()
CasaImage.close()
class ClassCasaimage():
def __init__(self,ImageName,ImShape,Cell,radec,Freqs=None,KeepCasa=False):
self.Cell=Cell
self.radec=radec
self.KeepCasa=KeepCasa
self.Freqs=Freqs
self.ImShape=ImShape
self.nch,self.npol,self.Npix,_=ImShape
self.ImageName=ImageName
#print "image refpix:",rad2hmsdms.rad2hmsdms(radec[0],Type="ra").replace(" ",":"),", ",rad2hmsdms.rad2hmsdms(radec[1],Type="dec").replace(" ",".")
self.createScratch()
def createScratch(self):
ImageName=self.ImageName
#print>>log, " ----> Create casa image %s"%ImageName
#HYPERCAL_DIR=os.environ["HYPERCAL_DIR"]
tmpIm=image(imagename=ImageName,shape=self.ImShape)
c=tmpIm.coordinates()
del(tmpIm)
os.system("rm -Rf %s"%ImageName)
incr=c.get_increment()
incrRad=(self.Cell/60.)#*np.pi/180
incr[-1][0]=incrRad
incr[-1][1]=-incrRad
#RefPix=c.get_referencepixel()
Npix=self.Npix
#RefPix[0][0]=Npix//2
#RefPix[0][1]=Npix//2
#RefPix[0][0]=Npix//2-1
#RefPix[0][1]=Npix//2-1
#RefPix=c.set_referencepixel(RefPix)
RefVal=c.get_referencevalue()
RaDecRad=self.radec
RefVal[-1][1]=RaDecRad[0]*180./np.pi*60
RefVal[-1][0]=RaDecRad[1]*180./np.pi*60
if self.Freqs is not None:
#print RefVal[0]
RefVal[0]=self.Freqs[0]
ich,ipol,xy=c.get_referencepixel()
ich=0
c.set_referencepixel((ich,ipol,xy))
# if self.Freqs.size>1:
# F=self.Freqs
# df=np.mean(self.Freqs[1::]-self.Freqs[0:-1])
# print df
# incr[0]=df
# D=c.__dict__["_csys"]
# fmean=np.mean(self.Freqs)
# D["worldreplace2"]=np.array([fmean])
# D["spectral2"]["restfreq"]=fmean
# D["spectral2"]["restfreqs"]=np.array([fmean])
# D["spectral2"]["tabular"]={'axes': ['Frequency'],
# 'pc': np.array([[ 1.]]),
# 'units': ['Hz']}
# D["spectral2"]["tabular"]["cdelt"]=np.array([df])
# D["spectral2"]["tabular"]["crpix"]=np.array([0])
# D["spectral2"]["tabular"]["crval"]=np.array([fmean])
# D["spectral2"]["tabular"]["pixelvalues"]=np.arange(F.size)
# D["spectral2"]["tabular"]["worldvalues"]=F
# # Out[16]: array([ 1.42040575e+09])
# # In [17]: c.dict()["spectral2"]["tabular"]
# # Out[17]:
# # {'axes': ['Frequency'],
# # 'cdelt': array([ 0.]),
# # 'crpix': array([ 0.]),
# # 'crval': array([ 1.41500000e+09]),
# # 'pc': array([[ 1.]]),
# # 'pixelvalues': array([ 0.]),
# # 'units': ['Hz'],
# # 'worldvalues': array([ 1.41500000e+09])}
c.set_increment(incr)
c.set_referencevalue(RefVal)
#self.im=image(imagename=ImageName,shape=(1,1,Npix,Npix),coordsys=c)
#self.im=image(imagename=ImageName,shape=(Npix,Npix),coordsys=c)
self.im=image(imagename=ImageName,shape=self.ImShape,coordsys=c)
#data=np.random.randn(*self.ImShape)
#self.setdata(data)
def setdata(self,dataIn,CorrT=False):
#print>>log, " ----> put data in casa image %s"%self.ImageName
data=dataIn.copy()
if CorrT:
nch,npol,_,_=dataIn.shape
for ch in range(nch):
for pol in range(npol):
data[ch,pol]=data[ch,pol][::-1].T
self.im.putdata(data)
def ToFits(self):
FileOut=self.ImageName+".fits"
os.system("rm -rf %s"%FileOut)
print(" ----> Save data in casa image as FITS file %s"%FileOut, file=log)
self.im.tofits(FileOut)
def setBeam(self,beam):
bmaj, bmin, PA=beam
FileOut=self.ImageName+".fits"
#print>>log, " ----> Save beam info in FITS file %s"%FileOut
F2=pyfits.open(FileOut)
F2[0].header["BMAJ"]=bmaj
F2[0].header["BMIN"]=bmin
F2[0].header["BPA"]=PA
os.system("rm -rf %s"%FileOut)
F2.writeto(FileOut,clobber=True)
def close(self):
#print>>log, " ----> Closing %s"%self.ImageName
del(self.im)
#print>>log, " ----> Closed %s"%self.ImageName
if self.KeepCasa==False:
#print>>log, " ----> Delete %s"%self.ImageName
os.system("rm -rf %s"%self.ImageName)
# def test():
# ra=15.*(2.+30./60)*np.pi/180
# dec=(40.+30./60)*np.pi/180
# radec=(ra,dec)
# Cell=20.
# imShape=(1, 1, 1029, 1029)
# #name="lala2.psf"
# name,imShape,Cell,radec="lala2.psf", (1, 1, 1029, 1029), 20, (3.7146787856873478, 0.91111035090915093)
# im=ClassCasaimage(name,imShape,Cell,radec)
# im.setdata(np.random.randn(*(imShape)),CorrT=True)
# im.ToFits()
# im.setBeam((0.,0.,0.))
# im.close()
def test():
name,imShape,Cell,radec="lala2.psf", (3, 1, 1029, 1029), 20, (3.7146787856873478, 0.91111035090915093)
im=ClassCasaimage(name,imShape,Cell,radec,Freqs=np.array([100,200,300],dtype=np.float32)*1e6)
im.setdata(np.random.randn(*(imShape)),CorrT=True)
im.ToFits()
#im.setBeam((0.,0.,0.))
im.close()
if __name__=="__main__":
test() | PypiClean |
/CloudFerry-1.55.2.tar.gz/CloudFerry-1.55.2/cloudferry/actions/networking/instance_floatingip_actions.py |
import copy
from cloudferry.lib.base.action import action
from cloudferry.lib.os.network import network_utils
from cloudferry.lib.utils import utils as utl
class AssociateFloatingip(action.Action):
"""Associates previously created VM port with floating IP port
Depends on:
- Identity objects migrated
- Instance ports migrated (see `PrepareNetworks`)
- Network objects migrated, primarily floating IPs
"""
def run(self, info=None, **kwargs):
if self.cfg.migrate.keep_floatingip:
network_resource = self.cloud.resources[utl.NETWORK_RESOURCE]
network_utils.associate_floatingip(info, network_resource)
return {}
class DisassociateFloatingip(action.Action):
"""Disassociates floating IP from VM"""
def run(self, info=None, **kwargs):
if self.cfg.migrate.keep_floatingip:
info_compute = copy.deepcopy(info)
compute_resource = self.cloud.resources[utl.COMPUTE_RESOURCE]
instance = info_compute[utl.INSTANCES_TYPE].values()[0]
networks_info = instance[utl.INSTANCE_BODY][utl.INTERFACES]
old_id = instance[utl.OLD_ID]
for net in networks_info:
if net['floatingip']:
compute_resource.dissociate_floatingip(old_id,
net['floatingip'])
return {}
class DisassociateAllFloatingips(action.Action):
"""Disassociates all floating IPs from VM on source"""
def run(self, info=None, **kwargs):
if self.cfg.migrate.keep_floatingip:
info_compute = copy.deepcopy(info)
compute_resource = self.cloud.resources[utl.COMPUTE_RESOURCE]
instances = info_compute[utl.INSTANCES_TYPE]
for instance in instances.values():
networks_info = instance[utl.INSTANCE_BODY][utl.INTERFACES]
old_id = instance[utl.OLD_ID]
for net in networks_info:
if net['floatingip']:
compute_resource.dissociate_floatingip(
old_id, net['floatingip'])
return {} | PypiClean |
/FlaskCms-0.0.4.tar.gz/FlaskCms-0.0.4/flask_cms/static/js/ace/mode-csharp.js | ace.define("ace/mode/doc_comment_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;
var DocCommentHighlightRules = function() {
this.$rules = {
"start" : [ {
token : "comment.doc.tag",
regex : "@[\\w\\d_]+" // TODO: fix email addresses
}, {
token : "comment.doc.tag",
regex : "\\bTODO\\b"
}, {
defaultToken : "comment.doc"
}]
};
};
oop.inherits(DocCommentHighlightRules, TextHighlightRules);
DocCommentHighlightRules.getStartRule = function(start) {
return {
token : "comment.doc", // doc comment
regex : "\\/\\*(?=\\*)",
next : start
};
};
DocCommentHighlightRules.getEndRule = function (start) {
return {
token : "comment.doc", // closing comment
regex : "\\*\\/",
next : start
};
};
exports.DocCommentHighlightRules = DocCommentHighlightRules;
});
ace.define("ace/mode/csharp_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/doc_comment_highlight_rules","ace/mode/text_highlight_rules"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var DocCommentHighlightRules = require("./doc_comment_highlight_rules").DocCommentHighlightRules;
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;
var CSharpHighlightRules = function() {
var keywordMapper = this.createKeywordMapper({
"variable.language": "this",
"keyword": "abstract|event|new|struct|as|explicit|null|switch|base|extern|object|this|bool|false|operator|throw|break|finally|out|true|byte|fixed|override|try|case|float|params|typeof|catch|for|private|uint|char|foreach|protected|ulong|checked|goto|public|unchecked|class|if|readonly|unsafe|const|implicit|ref|ushort|continue|in|return|using|decimal|int|sbyte|virtual|default|interface|sealed|volatile|delegate|internal|short|void|do|is|sizeof|while|double|lock|stackalloc|else|long|static|enum|namespace|string|var|dynamic",
"constant.language": "null|true|false"
}, "identifier");
this.$rules = {
"start" : [
{
token : "comment",
regex : "\\/\\/.*$"
},
DocCommentHighlightRules.getStartRule("doc-start"),
{
token : "comment", // multi line comment
regex : "\\/\\*",
next : "comment"
}, {
token : "string", // character
regex : /'(?:.|\\(:?u[\da-fA-F]+|x[\da-fA-F]+|[tbrf'"n]))'/
}, {
token : "string", start : '"', end : '"|$', next: [
{token: "constant.language.escape", regex: /\\(:?u[\da-fA-F]+|x[\da-fA-F]+|[tbrf'"n])/},
{token: "invalid", regex: /\\./}
]
}, {
token : "string", start : '@"', end : '"', next:[
{token: "constant.language.escape", regex: '""'}
]
}, {
token : "constant.numeric", // hex
regex : "0[xX][0-9a-fA-F]+\\b"
}, {
token : "constant.numeric", // float
regex : "[+-]?\\d+(?:(?:\\.\\d*)?(?:[eE][+-]?\\d+)?)?\\b"
}, {
token : "constant.language.boolean",
regex : "(?:true|false)\\b"
}, {
token : keywordMapper,
regex : "[a-zA-Z_$][a-zA-Z0-9_$]*\\b"
}, {
token : "keyword.operator",
regex : "!|\\$|%|&|\\*|\\-\\-|\\-|\\+\\+|\\+|~|===|==|=|!=|!==|<=|>=|<<=|>>=|>>>=|<>|<|>|!|&&|\\|\\||\\?\\:|\\*=|%=|\\+=|\\-=|&=|\\^=|\\b(?:in|instanceof|new|delete|typeof|void)"
}, {
token : "keyword",
regex : "^\\s*#(if|else|elif|endif|define|undef|warning|error|line|region|endregion|pragma)"
}, {
token : "punctuation.operator",
regex : "\\?|\\:|\\,|\\;|\\."
}, {
token : "paren.lparen",
regex : "[[({]"
}, {
token : "paren.rparen",
regex : "[\\])}]"
}, {
token : "text",
regex : "\\s+"
}
],
"comment" : [
{
token : "comment", // closing comment
regex : ".*?\\*\\/",
next : "start"
}, {
token : "comment", // comment spanning whole line
regex : ".+"
}
]
};
this.embedRules(DocCommentHighlightRules, "doc-",
[ DocCommentHighlightRules.getEndRule("start") ]);
this.normalizeRules();
};
oop.inherits(CSharpHighlightRules, TextHighlightRules);
exports.CSharpHighlightRules = CSharpHighlightRules;
});
ace.define("ace/mode/matching_brace_outdent",["require","exports","module","ace/range"], function(require, exports, module) {
"use strict";
var Range = require("../range").Range;
var MatchingBraceOutdent = function() {};
(function() {
this.checkOutdent = function(line, input) {
if (! /^\s+$/.test(line))
return false;
return /^\s*\}/.test(input);
};
this.autoOutdent = function(doc, row) {
var line = doc.getLine(row);
var match = line.match(/^(\s*\})/);
if (!match) return 0;
var column = match[1].length;
var openBracePos = doc.findMatchingBracket({row: row, column: column});
if (!openBracePos || openBracePos.row == row) return 0;
var indent = this.$getIndent(doc.getLine(openBracePos.row));
doc.replace(new Range(row, 0, row, column-1), indent);
};
this.$getIndent = function(line) {
return line.match(/^\s*/)[0];
};
}).call(MatchingBraceOutdent.prototype);
exports.MatchingBraceOutdent = MatchingBraceOutdent;
});
ace.define("ace/mode/behaviour/cstyle",["require","exports","module","ace/lib/oop","ace/mode/behaviour","ace/token_iterator","ace/lib/lang"], function(require, exports, module) {
"use strict";
var oop = require("../../lib/oop");
var Behaviour = require("../behaviour").Behaviour;
var TokenIterator = require("../../token_iterator").TokenIterator;
var lang = require("../../lib/lang");
var SAFE_INSERT_IN_TOKENS =
["text", "paren.rparen", "punctuation.operator"];
var SAFE_INSERT_BEFORE_TOKENS =
["text", "paren.rparen", "punctuation.operator", "comment"];
var context;
var contextCache = {}
var initContext = function(editor) {
var id = -1;
if (editor.multiSelect) {
id = editor.selection.id;
if (contextCache.rangeCount != editor.multiSelect.rangeCount)
contextCache = {rangeCount: editor.multiSelect.rangeCount};
}
if (contextCache[id])
return context = contextCache[id];
context = contextCache[id] = {
autoInsertedBrackets: 0,
autoInsertedRow: -1,
autoInsertedLineEnd: "",
maybeInsertedBrackets: 0,
maybeInsertedRow: -1,
maybeInsertedLineStart: "",
maybeInsertedLineEnd: ""
};
};
var CstyleBehaviour = function() {
this.add("braces", "insertion", function(state, action, editor, session, text) {
var cursor = editor.getCursorPosition();
var line = session.doc.getLine(cursor.row);
if (text == '{') {
initContext(editor);
var selection = editor.getSelectionRange();
var selected = session.doc.getTextRange(selection);
if (selected !== "" && selected !== "{" && editor.getWrapBehavioursEnabled()) {
return {
text: '{' + selected + '}',
selection: false
};
} else if (CstyleBehaviour.isSaneInsertion(editor, session)) {
if (/[\]\}\)]/.test(line[cursor.column]) || editor.inMultiSelectMode) {
CstyleBehaviour.recordAutoInsert(editor, session, "}");
return {
text: '{}',
selection: [1, 1]
};
} else {
CstyleBehaviour.recordMaybeInsert(editor, session, "{");
return {
text: '{',
selection: [1, 1]
};
}
}
} else if (text == '}') {
initContext(editor);
var rightChar = line.substring(cursor.column, cursor.column + 1);
if (rightChar == '}') {
var matching = session.$findOpeningBracket('}', {column: cursor.column + 1, row: cursor.row});
if (matching !== null && CstyleBehaviour.isAutoInsertedClosing(cursor, line, text)) {
CstyleBehaviour.popAutoInsertedClosing();
return {
text: '',
selection: [1, 1]
};
}
}
} else if (text == "\n" || text == "\r\n") {
initContext(editor);
var closing = "";
if (CstyleBehaviour.isMaybeInsertedClosing(cursor, line)) {
closing = lang.stringRepeat("}", context.maybeInsertedBrackets);
CstyleBehaviour.clearMaybeInsertedClosing();
}
var rightChar = line.substring(cursor.column, cursor.column + 1);
if (rightChar === '}') {
var openBracePos = session.findMatchingBracket({row: cursor.row, column: cursor.column+1}, '}');
if (!openBracePos)
return null;
var next_indent = this.$getIndent(session.getLine(openBracePos.row));
} else if (closing) {
var next_indent = this.$getIndent(line);
} else {
CstyleBehaviour.clearMaybeInsertedClosing();
return;
}
var indent = next_indent + session.getTabString();
return {
text: '\n' + indent + '\n' + next_indent + closing,
selection: [1, indent.length, 1, indent.length]
};
} else {
CstyleBehaviour.clearMaybeInsertedClosing();
}
});
this.add("braces", "deletion", function(state, action, editor, session, range) {
var selected = session.doc.getTextRange(range);
if (!range.isMultiLine() && selected == '{') {
initContext(editor);
var line = session.doc.getLine(range.start.row);
var rightChar = line.substring(range.end.column, range.end.column + 1);
if (rightChar == '}') {
range.end.column++;
return range;
} else {
context.maybeInsertedBrackets--;
}
}
});
this.add("parens", "insertion", function(state, action, editor, session, text) {
if (text == '(') {
initContext(editor);
var selection = editor.getSelectionRange();
var selected = session.doc.getTextRange(selection);
if (selected !== "" && editor.getWrapBehavioursEnabled()) {
return {
text: '(' + selected + ')',
selection: false
};
} else if (CstyleBehaviour.isSaneInsertion(editor, session)) {
CstyleBehaviour.recordAutoInsert(editor, session, ")");
return {
text: '()',
selection: [1, 1]
};
}
} else if (text == ')') {
initContext(editor);
var cursor = editor.getCursorPosition();
var line = session.doc.getLine(cursor.row);
var rightChar = line.substring(cursor.column, cursor.column + 1);
if (rightChar == ')') {
var matching = session.$findOpeningBracket(')', {column: cursor.column + 1, row: cursor.row});
if (matching !== null && CstyleBehaviour.isAutoInsertedClosing(cursor, line, text)) {
CstyleBehaviour.popAutoInsertedClosing();
return {
text: '',
selection: [1, 1]
};
}
}
}
});
this.add("parens", "deletion", function(state, action, editor, session, range) {
var selected = session.doc.getTextRange(range);
if (!range.isMultiLine() && selected == '(') {
initContext(editor);
var line = session.doc.getLine(range.start.row);
var rightChar = line.substring(range.start.column + 1, range.start.column + 2);
if (rightChar == ')') {
range.end.column++;
return range;
}
}
});
this.add("brackets", "insertion", function(state, action, editor, session, text) {
if (text == '[') {
initContext(editor);
var selection = editor.getSelectionRange();
var selected = session.doc.getTextRange(selection);
if (selected !== "" && editor.getWrapBehavioursEnabled()) {
return {
text: '[' + selected + ']',
selection: false
};
} else if (CstyleBehaviour.isSaneInsertion(editor, session)) {
CstyleBehaviour.recordAutoInsert(editor, session, "]");
return {
text: '[]',
selection: [1, 1]
};
}
} else if (text == ']') {
initContext(editor);
var cursor = editor.getCursorPosition();
var line = session.doc.getLine(cursor.row);
var rightChar = line.substring(cursor.column, cursor.column + 1);
if (rightChar == ']') {
var matching = session.$findOpeningBracket(']', {column: cursor.column + 1, row: cursor.row});
if (matching !== null && CstyleBehaviour.isAutoInsertedClosing(cursor, line, text)) {
CstyleBehaviour.popAutoInsertedClosing();
return {
text: '',
selection: [1, 1]
};
}
}
}
});
this.add("brackets", "deletion", function(state, action, editor, session, range) {
var selected = session.doc.getTextRange(range);
if (!range.isMultiLine() && selected == '[') {
initContext(editor);
var line = session.doc.getLine(range.start.row);
var rightChar = line.substring(range.start.column + 1, range.start.column + 2);
if (rightChar == ']') {
range.end.column++;
return range;
}
}
});
this.add("string_dquotes", "insertion", function(state, action, editor, session, text) {
if (text == '"' || text == "'") {
initContext(editor);
var quote = text;
var selection = editor.getSelectionRange();
var selected = session.doc.getTextRange(selection);
if (selected !== "" && selected !== "'" && selected != '"' && editor.getWrapBehavioursEnabled()) {
return {
text: quote + selected + quote,
selection: false
};
} else {
var cursor = editor.getCursorPosition();
var line = session.doc.getLine(cursor.row);
var leftChar = line.substring(cursor.column-1, cursor.column);
if (leftChar == '\\') {
return null;
}
var tokens = session.getTokens(selection.start.row);
var col = 0, token;
var quotepos = -1; // Track whether we're inside an open quote.
for (var x = 0; x < tokens.length; x++) {
token = tokens[x];
if (token.type == "string") {
quotepos = -1;
} else if (quotepos < 0) {
quotepos = token.value.indexOf(quote);
}
if ((token.value.length + col) > selection.start.column) {
break;
}
col += tokens[x].value.length;
}
if (!token || (quotepos < 0 && token.type !== "comment" && (token.type !== "string" || ((selection.start.column !== token.value.length+col-1) && token.value.lastIndexOf(quote) === token.value.length-1)))) {
if (!CstyleBehaviour.isSaneInsertion(editor, session))
return;
return {
text: quote + quote,
selection: [1,1]
};
} else if (token && token.type === "string") {
var rightChar = line.substring(cursor.column, cursor.column + 1);
if (rightChar == quote) {
return {
text: '',
selection: [1, 1]
};
}
}
}
}
});
this.add("string_dquotes", "deletion", function(state, action, editor, session, range) {
var selected = session.doc.getTextRange(range);
if (!range.isMultiLine() && (selected == '"' || selected == "'")) {
initContext(editor);
var line = session.doc.getLine(range.start.row);
var rightChar = line.substring(range.start.column + 1, range.start.column + 2);
if (rightChar == selected) {
range.end.column++;
return range;
}
}
});
};
CstyleBehaviour.isSaneInsertion = function(editor, session) {
var cursor = editor.getCursorPosition();
var iterator = new TokenIterator(session, cursor.row, cursor.column);
if (!this.$matchTokenType(iterator.getCurrentToken() || "text", SAFE_INSERT_IN_TOKENS)) {
var iterator2 = new TokenIterator(session, cursor.row, cursor.column + 1);
if (!this.$matchTokenType(iterator2.getCurrentToken() || "text", SAFE_INSERT_IN_TOKENS))
return false;
}
iterator.stepForward();
return iterator.getCurrentTokenRow() !== cursor.row ||
this.$matchTokenType(iterator.getCurrentToken() || "text", SAFE_INSERT_BEFORE_TOKENS);
};
CstyleBehaviour.$matchTokenType = function(token, types) {
return types.indexOf(token.type || token) > -1;
};
CstyleBehaviour.recordAutoInsert = function(editor, session, bracket) {
var cursor = editor.getCursorPosition();
var line = session.doc.getLine(cursor.row);
if (!this.isAutoInsertedClosing(cursor, line, context.autoInsertedLineEnd[0]))
context.autoInsertedBrackets = 0;
context.autoInsertedRow = cursor.row;
context.autoInsertedLineEnd = bracket + line.substr(cursor.column);
context.autoInsertedBrackets++;
};
CstyleBehaviour.recordMaybeInsert = function(editor, session, bracket) {
var cursor = editor.getCursorPosition();
var line = session.doc.getLine(cursor.row);
if (!this.isMaybeInsertedClosing(cursor, line))
context.maybeInsertedBrackets = 0;
context.maybeInsertedRow = cursor.row;
context.maybeInsertedLineStart = line.substr(0, cursor.column) + bracket;
context.maybeInsertedLineEnd = line.substr(cursor.column);
context.maybeInsertedBrackets++;
};
CstyleBehaviour.isAutoInsertedClosing = function(cursor, line, bracket) {
return context.autoInsertedBrackets > 0 &&
cursor.row === context.autoInsertedRow &&
bracket === context.autoInsertedLineEnd[0] &&
line.substr(cursor.column) === context.autoInsertedLineEnd;
};
CstyleBehaviour.isMaybeInsertedClosing = function(cursor, line) {
return context.maybeInsertedBrackets > 0 &&
cursor.row === context.maybeInsertedRow &&
line.substr(cursor.column) === context.maybeInsertedLineEnd &&
line.substr(0, cursor.column) == context.maybeInsertedLineStart;
};
CstyleBehaviour.popAutoInsertedClosing = function() {
context.autoInsertedLineEnd = context.autoInsertedLineEnd.substr(1);
context.autoInsertedBrackets--;
};
CstyleBehaviour.clearMaybeInsertedClosing = function() {
if (context) {
context.maybeInsertedBrackets = 0;
context.maybeInsertedRow = -1;
}
};
oop.inherits(CstyleBehaviour, Behaviour);
exports.CstyleBehaviour = CstyleBehaviour;
});
ace.define("ace/mode/folding/cstyle",["require","exports","module","ace/lib/oop","ace/range","ace/mode/folding/fold_mode"], function(require, exports, module) {
"use strict";
var oop = require("../../lib/oop");
var Range = require("../../range").Range;
var BaseFoldMode = require("./fold_mode").FoldMode;
var FoldMode = exports.FoldMode = function(commentRegex) {
if (commentRegex) {
this.foldingStartMarker = new RegExp(
this.foldingStartMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.start)
);
this.foldingStopMarker = new RegExp(
this.foldingStopMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.end)
);
}
};
oop.inherits(FoldMode, BaseFoldMode);
(function() {
this.foldingStartMarker = /(\{|\[)[^\}\]]*$|^\s*(\/\*)/;
this.foldingStopMarker = /^[^\[\{]*(\}|\])|^[\s\*]*(\*\/)/;
this.getFoldWidgetRange = function(session, foldStyle, row, forceMultiline) {
var line = session.getLine(row);
var match = line.match(this.foldingStartMarker);
if (match) {
var i = match.index;
if (match[1])
return this.openingBracketBlock(session, match[1], row, i);
var range = session.getCommentFoldRange(row, i + match[0].length, 1);
if (range && !range.isMultiLine()) {
if (forceMultiline) {
range = this.getSectionRange(session, row);
} else if (foldStyle != "all")
range = null;
}
return range;
}
if (foldStyle === "markbegin")
return;
var match = line.match(this.foldingStopMarker);
if (match) {
var i = match.index + match[0].length;
if (match[1])
return this.closingBracketBlock(session, match[1], row, i);
return session.getCommentFoldRange(row, i, -1);
}
};
this.getSectionRange = function(session, row) {
var line = session.getLine(row);
var startIndent = line.search(/\S/);
var startRow = row;
var startColumn = line.length;
row = row + 1;
var endRow = row;
var maxRow = session.getLength();
while (++row < maxRow) {
line = session.getLine(row);
var indent = line.search(/\S/);
if (indent === -1)
continue;
if (startIndent > indent)
break;
var subRange = this.getFoldWidgetRange(session, "all", row);
if (subRange) {
if (subRange.start.row <= startRow) {
break;
} else if (subRange.isMultiLine()) {
row = subRange.end.row;
} else if (startIndent == indent) {
break;
}
}
endRow = row;
}
return new Range(startRow, startColumn, endRow, session.getLine(endRow).length);
};
}).call(FoldMode.prototype);
});
ace.define("ace/mode/folding/csharp",["require","exports","module","ace/lib/oop","ace/range","ace/mode/folding/cstyle"], function(require, exports, module) {
"use strict";
var oop = require("../../lib/oop");
var Range = require("../../range").Range;
var CFoldMode = require("./cstyle").FoldMode;
var FoldMode = exports.FoldMode = function(commentRegex) {
if (commentRegex) {
this.foldingStartMarker = new RegExp(
this.foldingStartMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.start)
);
this.foldingStopMarker = new RegExp(
this.foldingStopMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.end)
);
}
};
oop.inherits(FoldMode, CFoldMode);
(function() {
this.usingRe = /^\s*using \S/;
this.getFoldWidgetRangeBase = this.getFoldWidgetRange;
this.getFoldWidgetBase = this.getFoldWidget;
this.getFoldWidget = function(session, foldStyle, row) {
var fw = this.getFoldWidgetBase(session, foldStyle, row);
if (!fw) {
var line = session.getLine(row);
if (/^\s*#region\b/.test(line))
return "start";
var usingRe = this.usingRe;
if (usingRe.test(line)) {
var prev = session.getLine(row - 1);
var next = session.getLine(row + 1);
if (!usingRe.test(prev) && usingRe.test(next))
return "start"
}
}
return fw;
};
this.getFoldWidgetRange = function(session, foldStyle, row) {
var range = this.getFoldWidgetRangeBase(session, foldStyle, row);
if (range)
return range;
var line = session.getLine(row);
if (this.usingRe.test(line))
return this.getUsingStatementBlock(session, line, row);
if (/^\s*#region\b/.test(line))
return this.getRegionBlock(session, line, row);
};
this.getUsingStatementBlock = function(session, line, row) {
var startColumn = line.match(this.usingRe)[0].length - 1;
var maxRow = session.getLength();
var startRow = row;
var endRow = row;
while (++row < maxRow) {
line = session.getLine(row);
if (/^\s*$/.test(line))
continue;
if (!this.usingRe.test(line))
break;
endRow = row;
}
if (endRow > startRow) {
var endColumn = session.getLine(endRow).length;
return new Range(startRow, startColumn, endRow, endColumn);
}
};
this.getRegionBlock = function(session, line, row) {
var startColumn = line.search(/\s*$/);
var maxRow = session.getLength();
var startRow = row;
var re = /^\s*#(end)?region\b/
var depth = 1
while (++row < maxRow) {
line = session.getLine(row);
var m = re.exec(line);
if (!m)
continue;
if (m[1])
depth--;
else
depth++;
if (!depth)
break;
}
var endRow = row;
if (endRow > startRow) {
var endColumn = line.search(/\S/);
return new Range(startRow, startColumn, endRow, endColumn);
}
};
}).call(FoldMode.prototype);
});
ace.define("ace/mode/csharp",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/csharp_highlight_rules","ace/mode/matching_brace_outdent","ace/mode/behaviour/cstyle","ace/mode/folding/csharp"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextMode = require("./text").Mode;
var CSharpHighlightRules = require("./csharp_highlight_rules").CSharpHighlightRules;
var MatchingBraceOutdent = require("./matching_brace_outdent").MatchingBraceOutdent;
var CstyleBehaviour = require("./behaviour/cstyle").CstyleBehaviour;
var CStyleFoldMode = require("./folding/csharp").FoldMode;
var Mode = function() {
this.HighlightRules = CSharpHighlightRules;
this.$outdent = new MatchingBraceOutdent();
this.$behaviour = new CstyleBehaviour();
this.foldingRules = new CStyleFoldMode();
};
oop.inherits(Mode, TextMode);
(function() {
this.lineCommentStart = "//";
this.blockComment = {start: "/*", end: "*/"};
this.getNextLineIndent = function(state, line, tab) {
var indent = this.$getIndent(line);
var tokenizedLine = this.getTokenizer().getLineTokens(line, state);
var tokens = tokenizedLine.tokens;
if (tokens.length && tokens[tokens.length-1].type == "comment") {
return indent;
}
if (state == "start") {
var match = line.match(/^.*[\{\(\[]\s*$/);
if (match) {
indent += tab;
}
}
return indent;
};
this.checkOutdent = function(state, line, input) {
return this.$outdent.checkOutdent(line, input);
};
this.autoOutdent = function(state, doc, row) {
this.$outdent.autoOutdent(doc, row);
};
this.createWorker = function(session) {
return null;
};
this.$id = "ace/mode/csharp";
}).call(Mode.prototype);
exports.Mode = Mode;
}); | PypiClean |
/K_AIKO-0.5.2-py3-none-any.whl/kaiko/kerminal.py | import time
import itertools
import functools
import re
import contextlib
import queue
import threading
import signal
import numpy
import pyaudio
import audioread
from . import cfg
from . import datanodes as dn
from . import tui
@contextlib.contextmanager
def nullcontext(value):
yield value
@contextlib.contextmanager
def prepare_pyaudio():
try:
manager = pyaudio.PyAudio()
yield manager
finally:
manager.terminate()
class MixerSettings(cfg.Configurable):
output_device: int = -1
output_samplerate: int = 44100
output_buffer_length: int = 512*4
output_channels: int = 1
output_format: str = 'f4'
sound_delay: float = 0.0
debug_timeit: bool = False
@classmethod
def get(clz, manager, device=-1):
if device == -1:
device = manager.get_default_output_device_info()['index']
info = manager.get_device_info_by_index(device)
settings = clz()
settings.output_device = device
settings.output_samplerate = int(info['defaultSampleRate'])
settings.output_channels = min(2, info['maxOutputChannels'])
return settings
class Mixer:
def __init__(self, effects_scheduler, samplerate, buffer_length, nchannels):
self.effects_scheduler = effects_scheduler
self.samplerate = samplerate
self.buffer_length = buffer_length
self.nchannels = nchannels
@staticmethod
def get_node(scheduler, settings, manager, ref_time):
samplerate = settings.output_samplerate
buffer_length = settings.output_buffer_length
nchannels = settings.output_channels
format = settings.output_format
device = settings.output_device
sound_delay = settings.sound_delay
debug_timeit = settings.debug_timeit
@dn.datanode
def _node():
index = 0
with scheduler:
yield
while True:
time = index * buffer_length / samplerate + sound_delay - ref_time
data = numpy.zeros((buffer_length, nchannels), dtype=numpy.float32)
try:
data = scheduler.send((data, time))
except StopIteration:
return
yield data
index += 1
output_node = _node()
if debug_timeit:
output_node = dn.timeit(output_node, lambda msg: print(" output: " + msg))
return dn.play(manager, output_node,
samplerate=samplerate,
buffer_shape=(buffer_length, nchannels),
format=format,
device=device,
)
@classmethod
def create(clz, settings, manager, ref_time=0.0):
samplerate = settings.output_samplerate
buffer_length = settings.output_buffer_length
nchannels = settings.output_channels
scheduler = dn.Scheduler()
output_node = clz.get_node(scheduler, settings, manager, ref_time)
return output_node, clz(scheduler, samplerate, buffer_length, nchannels)
def add_effect(self, node, time=None, zindex=(0,)):
if time is not None:
node = self.delay(node, time)
return self.effects_scheduler.add_node(node, zindex=zindex)
def remove_effect(self, key):
return self.effects_scheduler.remove_node(key)
@dn.datanode
def delay(self, node, start_time):
node = dn.DataNode.wrap(node)
samplerate = self.samplerate
buffer_length = self.buffer_length
nchannels = self.nchannels
with node:
data, time = yield
offset = round((start_time - time) * samplerate) if start_time is not None else 0
while offset < 0:
length = min(-offset, buffer_length)
dummy = numpy.zeros((length, nchannels), dtype=numpy.float32)
try:
node.send((dummy, time+offset/samplerate))
except StopIteration:
return
offset += length
while 0 < offset:
if data.shape[0] < offset:
offset -= data.shape[0]
else:
data1, data2 = data[:offset], data[offset:]
try:
data2 = node.send((data2, time+offset/samplerate))
except StopIteration:
return
data = numpy.concatenate((data1, data2), axis=0)
offset = 0
data, time = yield data
while True:
try:
data, time = yield node.send((data, time))
except StopIteration:
return
def resample(self, node, samplerate=None, channels=None, volume=0.0, start=None, end=None):
if start is not None or end is not None:
node = dn.tslice(node, samplerate, start, end)
if channels is not None and channels != self.nchannels:
node = dn.pipe(node, dn.rechannel(self.nchannels))
if samplerate is not None and samplerate != self.samplerate:
node = dn.pipe(node, dn.resample(ratio=(self.samplerate, samplerate)))
if volume != 0:
node = dn.pipe(node, lambda s: s * 10**(volume/20))
return node
@functools.lru_cache(maxsize=32)
def load_sound(self, filepath):
return dn.load_sound(filepath, channels=self.nchannels, samplerate=self.samplerate)
def play(self, node, samplerate=None, channels=None, volume=0.0, start=None, end=None, time=None, zindex=(0,)):
if isinstance(node, str):
node = dn.DataNode.wrap(self.load_sound(node))
samplerate = None
channels = None
node = self.resample(node, samplerate, channels, volume, start, end)
node = dn.pipe(lambda a:a[0], dn.attach(node))
return self.add_effect(node, time=time, zindex=zindex)
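    # Minimal usage sketch (the sound file path, volume and start time are
    # illustrative; the returned output node must still be driven, e.g. by a
    # datanode pipeline, for any audio to be produced):
    #   with prepare_pyaudio() as manager:
    #       settings = MixerSettings.get(manager)
    #       output_node, mixer = Mixer.create(settings, manager)
    #       mixer.play('samples/hit.wav', volume=-6.0, time=0.5)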
class DetectorSettings(cfg.Configurable):
# input
input_device: int = -1
input_samplerate: int = 44100
input_buffer_length: int = 512
input_channels: int = 1
input_format: str = 'f4'
detector_time_res: float = 0.0116099773 # hop_length = 512 if samplerate == 44100
detector_freq_res: float = 21.5332031 # win_length = 512*4 if samplerate == 44100
detector_pre_max: float = 0.03
detector_post_max: float = 0.03
detector_pre_avg: float = 0.03
detector_post_avg: float = 0.03
detector_wait: float = 0.03
detector_delta: float = 5.48e-6
knock_delay: float = 0.0
knock_energy: float = 1.0e-3
debug_timeit: bool = False
@classmethod
def get(clz, manager, device=-1):
if device == -1:
device = manager.get_default_input_device_info()['index']
info = manager.get_device_info_by_index(device)
settings = clz()
settings.input_device = device
settings.input_samplerate = int(info['defaultSampleRate'])
settings.input_channels = min(2, info['maxInputChannels'])
        samplerate = settings.input_samplerate
        hop_length = 512
        win_length = hop_length*4
        settings.detector_time_res = hop_length / samplerate
        settings.detector_freq_res = samplerate / win_length
return settings
class Detector:
def __init__(self, listeners_scheduler):
self.listeners_scheduler = listeners_scheduler
@staticmethod
def get_node(scheduler, settings, manager, ref_time):
samplerate = settings.input_samplerate
buffer_length = settings.input_buffer_length
nchannels = settings.input_channels
format = settings.input_format
device = settings.input_device
time_res = settings.detector_time_res
freq_res = settings.detector_freq_res
hop_length = round(samplerate*time_res)
win_length = round(samplerate/freq_res)
pre_max = round(settings.detector_pre_max / time_res)
post_max = round(settings.detector_post_max / time_res)
pre_avg = round(settings.detector_pre_avg / time_res)
post_avg = round(settings.detector_post_avg / time_res)
wait = round(settings.detector_wait / time_res)
delta = settings.detector_delta
knock_delay = settings.knock_delay
knock_energy = settings.knock_energy
debug_timeit = settings.debug_timeit
@dn.datanode
def _node():
prepare = max(post_max, post_avg)
window = dn.get_half_Hann_window(win_length)
onset = dn.pipe(
dn.frame(win_length=win_length, hop_length=hop_length),
dn.power_spectrum(win_length=win_length,
samplerate=samplerate,
windowing=window,
weighting=True),
dn.onset_strength(1))
picker = dn.pick_peak(pre_max, post_max, pre_avg, post_avg, wait, delta)
with scheduler, onset, picker:
data = yield
buffer = [(knock_delay, 0.0)]*prepare
index = 0
while True:
try:
strength = onset.send(data)
detected = picker.send(strength)
except StopIteration:
return
time = index * hop_length / samplerate + knock_delay - ref_time
strength = strength / knock_energy
buffer.append((time, strength))
time, strength = buffer.pop(0)
try:
scheduler.send((None, time, strength, detected))
except StopIteration:
return
data = yield
index += 1
input_node = _node()
if buffer_length != hop_length:
input_node = dn.unchunk(input_node, chunk_shape=(hop_length, nchannels))
if debug_timeit:
input_node = dn.timeit(input_node, lambda msg: print(" input: " + msg))
return dn.record(manager, input_node,
samplerate=samplerate,
buffer_shape=(buffer_length, nchannels),
format=format,
device=device,
)
@classmethod
def create(clz, settings, manager, ref_time=0.0):
scheduler = dn.Scheduler()
input_node = clz.get_node(scheduler, settings, manager, ref_time)
return input_node, clz(scheduler)
def add_listener(self, node):
return self.listeners_scheduler.add_node(node, (0,))
def remove_listener(self, key):
self.listeners_scheduler.remove_node(key)
def on_hit(self, func, time=None, duration=None):
return self.add_listener(self._hit_listener(func, time, duration))
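    # Usage sketch (callback and duration are illustrative; assumes `detector`
    # is the Detector half returned by Detector.create, and that the callback
    # returns a truthy value only when it wants the listener removed):
    #   detector.on_hit(lambda strength: print('hit!', strength), duration=10.0)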
@dn.datanode
@staticmethod
def _hit_listener(func, start_time, duration):
_, time, strength, detected = yield
if start_time is None:
start_time = time
while time < start_time:
_, time, strength, detected = yield
while duration is None or time < start_time + duration:
if detected:
finished = func(strength)
if finished:
return
_, time, strength, detected = yield
class RendererSettings(cfg.Configurable):
display_framerate: float = 160.0 # ~ 2 / detector_time_res
display_delay: float = 0.0
display_columns: int = -1
debug_timeit: bool = True
class Renderer:
def __init__(self, drawers_scheduler, msg_queue):
self.drawers_scheduler = drawers_scheduler
self.msg_queue = msg_queue
@staticmethod
def get_node(scheduler, msg_queue, settings, ref_time):
framerate = settings.display_framerate
display_delay = settings.display_delay
columns = settings.display_columns
debug_timeit = settings.debug_timeit
@dn.datanode
def _node():
width = 0
size_node = dn.terminal_size()
index = 0
with scheduler, size_node:
yield
while True:
try:
size = size_node.send(None)
except StopIteration:
return
width = size.columns if columns == -1 else min(columns, size.columns)
time = index / framerate + display_delay - ref_time
view = tui.newwin1(width)
try:
view = scheduler.send((view, time, width))
except StopIteration:
return
msg = []
while not msg_queue.empty():
msg.append(msg_queue.get())
msg = "\n" + "".join(msg) + "\n" if msg else ""
yield msg + "\r" + "".join(view) + "\r"
index += 1
display_node = _node()
if debug_timeit:
display_node = dn.timeit(display_node, lambda msg: print("display: " + msg))
return dn.show(display_node, 1/framerate, hide_cursor=True)
@classmethod
def create(clz, settings, ref_time=0.0):
scheduler = dn.Scheduler()
msg_queue = queue.Queue()
display_node = clz.get_node(scheduler, msg_queue, settings, ref_time)
return display_node, clz(scheduler, msg_queue)
def message(self, msg):
self.msg_queue.put(msg)
def add_drawer(self, node, zindex=(0,)):
return self.drawers_scheduler.add_node(node, zindex=zindex)
def remove_drawer(self, key):
self.drawers_scheduler.remove_node(key)
def add_text(self, text_node, x=0, xmask=slice(None,None), zindex=(0,)):
return self.add_drawer(self._text_drawer(text_node, x, xmask), zindex)
def add_pad(self, pad_node, xmask=slice(None,None), zindex=(0,)):
return self.add_drawer(self._pad_drawer(pad_node, xmask), zindex)
@staticmethod
@dn.datanode
def _text_drawer(text_node, x=0, xmask=slice(None,None)):
text_node = dn.DataNode.wrap(text_node)
with text_node:
view, time, width = yield
while True:
try:
text = text_node.send((time, range(-x, width-x)[xmask]))
except StopIteration:
return
view, _ = tui.addtext1(view, width, x, text, xmask=xmask)
view, time, width = yield view
@staticmethod
@dn.datanode
def _pad_drawer(pad_node, xmask=slice(None,None)):
pad_node = dn.DataNode.wrap(pad_node)
with pad_node:
view, time, width = yield
while True:
subview, x, subwidth = tui.newpad1(view, width, xmask=xmask)
try:
subview = pad_node.send(((time, subwidth), subview))
except StopIteration:
return
view, xran = tui.addpad1(view, width, x, subview, subwidth)
view, time, width = yield view
class ControllerSettings(cfg.Configurable):
pass
class Controller:
def __init__(self, handlers_scheduler):
self.handlers_scheduler = handlers_scheduler
@staticmethod
def get_node(scheduler, settings, ref_time):
@dn.datanode
def _node():
with scheduler:
while True:
time, key = yield
time_ = time - ref_time
try:
scheduler.send((None, time_, key))
except StopIteration:
return
return dn.input(_node())
@classmethod
def create(clz, settings, ref_time=0.0):
scheduler = dn.Scheduler()
node = clz.get_node(scheduler, settings, ref_time)
return node, clz(scheduler)
def add_handler(self, node, key=None):
if key is None:
return self.handlers_scheduler.add_node(node, (0,))
else:
if isinstance(key, str):
key = re.compile(re.escape(key))
return self.handlers_scheduler.add_node(self._filter_node(node, key), (0,))
def remove_handler(self, key):
self.handlers_scheduler.remove_node(key)
@dn.datanode
def _filter_node(self, node, regex):
node = dn.DataNode.wrap(node)
with node:
while True:
_, t, key = yield
if regex.fullmatch(key):
try:
node.send((None, t, key))
except StopIteration:
return
class ClockSettings(cfg.Configurable):
tickrate: float = 60.0
clock_delay: float = 0.0
class Clock:
def __init__(self, coroutines_scheduler):
self.coroutines_scheduler = coroutines_scheduler
@staticmethod
def get_node(scheduler, settings, ref_time):
tickrate = settings.tickrate
clock_delay = settings.clock_delay
@dn.datanode
def _node():
index = 0
with scheduler:
yield
while True:
time = index / tickrate + clock_delay - ref_time
try:
scheduler.send((None, time))
except StopIteration:
return
yield
index += 1
return dn.interval(consumer=_node(), dt=1/tickrate)
@classmethod
def create(clz, settings, ref_time=0.0):
scheduler = dn.Scheduler()
node = clz.get_node(scheduler, settings, ref_time)
return node, clz(scheduler)
def add_coroutine(self, node, time=None):
if time is not None:
node = self.schedule(node, time)
return self.coroutines_scheduler.add_node(node, (0,))
@dn.datanode
def schedule(self, node, start_time):
node = dn.DataNode.wrap(node)
with node:
_, time = yield
while time < start_time:
_, time = yield
while True:
try:
node.send((None, time))
except StopIteration:
return
_, time = yield
def remove_coroutine(self, key):
self.coroutines_scheduler.remove_node(key) | PypiClean |
/M2CryptoWin64-0.21.1-3.tar.gz/M2CryptoWin64-0.21.1-3/M2Crypto/EVP.py | from M2Crypto import Err, util, BIO, RSA
import m2
class EVPError(Exception): pass
m2.evp_init(EVPError)
def pbkdf2(password, salt, iter, keylen):
"""
Derive a key from password using PBKDF2 algorithm specified in RFC 2898.
@param password: Derive the key from this password.
@type password: str
@param salt: Salt.
@type salt: str
@param iter: Number of iterations to perform.
@type iter: int
@param keylen: Length of key to produce.
@type keylen: int
@return: Key.
@rtype: str
"""
return m2.pkcs5_pbkdf2_hmac_sha1(password, salt, iter, keylen)
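# --- Illustrative usage sketch, added for documentation; not part of the
# --- original M2Crypto API. The password, salt, iteration count and key
# --- length below are arbitrary placeholder choices.
def _example_pbkdf2():
    derived = pbkdf2('secret password', '8bytesalt', 1000, 32)
    assert len(derived) == 32
    return derived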
class MessageDigest:
"""
Message Digest
"""
m2_md_ctx_free = m2.md_ctx_free
def __init__(self, algo):
md = getattr(m2, algo, None)
if md is None:
raise ValueError, ('unknown algorithm', algo)
self.md=md()
self.ctx=m2.md_ctx_new()
m2.digest_init(self.ctx, self.md)
def __del__(self):
if getattr(self, 'ctx', None):
self.m2_md_ctx_free(self.ctx)
def update(self, data):
"""
Add data to be digested.
@return: -1 for Python error, 1 for success, 0 for OpenSSL failure.
"""
return m2.digest_update(self.ctx, data)
def final(self):
return m2.digest_final(self.ctx)
# Deprecated.
digest = final
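# Illustrative sketch (documentation only): one-shot SHA-1 digest of a string.
# Availability of 'sha1' depends on the OpenSSL build linked by m2.
def _example_message_digest():
    md = MessageDigest('sha1')
    md.update('hello world')
    return md.final()  # raw digest bytes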
class HMAC:
m2_hmac_ctx_free = m2.hmac_ctx_free
def __init__(self, key, algo='sha1'):
md = getattr(m2, algo, None)
if md is None:
raise ValueError, ('unknown algorithm', algo)
self.md=md()
self.ctx=m2.hmac_ctx_new()
m2.hmac_init(self.ctx, key, self.md)
def __del__(self):
if getattr(self, 'ctx', None):
self.m2_hmac_ctx_free(self.ctx)
def reset(self, key):
m2.hmac_init(self.ctx, key, self.md)
def update(self, data):
m2.hmac_update(self.ctx, data)
def final(self):
return m2.hmac_final(self.ctx)
digest=final
def hmac(key, data, algo='sha1'):
md = getattr(m2, algo, None)
if md is None:
raise ValueError, ('unknown algorithm', algo)
return m2.hmac(key, data, md())
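# Illustrative sketch (documentation only): HMAC-SHA1 of a message, computed
# both one-shot via hmac() and incrementally via the HMAC class above. The
# key and message are arbitrary placeholders.
def _example_hmac():
    one_shot = hmac('my key', 'my message', algo='sha1')
    h = HMAC('my key', algo='sha1')
    h.update('my message')
    assert one_shot == h.final()
    return one_shot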
class Cipher:
m2_cipher_ctx_free = m2.cipher_ctx_free
def __init__(self, alg, key, iv, op, key_as_bytes=0, d='md5', salt='12345678', i=1, padding=1):
cipher = getattr(m2, alg, None)
if cipher is None:
raise ValueError, ('unknown cipher', alg)
self.cipher=cipher()
if key_as_bytes:
kmd = getattr(m2, d, None)
if kmd is None:
raise ValueError, ('unknown message digest', d)
key = m2.bytes_to_key(self.cipher, kmd(), key, salt, iv, i)
self.ctx=m2.cipher_ctx_new()
m2.cipher_init(self.ctx, self.cipher, key, iv, op)
self.set_padding(padding)
del key
def __del__(self):
if getattr(self, 'ctx', None):
self.m2_cipher_ctx_free(self.ctx)
def update(self, data):
return m2.cipher_update(self.ctx, data)
def final(self):
return m2.cipher_final(self.ctx)
def set_padding(self, padding=1):
return m2.cipher_set_padding(self.ctx, padding)
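# Illustrative sketch (documentation only): AES-128-CBC round trip. The key
# and IV below are dummy placeholders, and op=1 / op=0 follow the usual
# OpenSSL encrypt / decrypt convention.
def _example_cipher_roundtrip():
    key, iv = '0123456789abcdef', 'fedcba9876543210'
    enc = Cipher('aes_128_cbc', key, iv, op=1)
    ct = enc.update('attack at dawn') + enc.final()
    dec = Cipher('aes_128_cbc', key, iv, op=0)
    return dec.update(ct) + dec.final()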
class PKey:
"""
Public Key
"""
m2_pkey_free = m2.pkey_free
m2_md_ctx_free = m2.md_ctx_free
def __init__(self, pkey=None, _pyfree=0, md='sha1'):
if pkey is not None:
self.pkey = pkey
self._pyfree = _pyfree
else:
self.pkey = m2.pkey_new()
self._pyfree = 1
self._set_context(md)
def __del__(self):
if getattr(self, '_pyfree', 0):
self.m2_pkey_free(self.pkey)
if getattr(self, 'ctx', None):
self.m2_md_ctx_free(self.ctx)
def _ptr(self):
return self.pkey
def _set_context(self, md):
mda = getattr(m2, md, None)
if mda is None:
raise ValueError, ('unknown message digest', md)
self.md = mda()
self.ctx = m2.md_ctx_new()
def reset_context(self, md='sha1'):
"""
Reset internal message digest context.
@type md: string
@param md: The message digest algorithm.
"""
self._set_context(md)
def sign_init(self):
"""
Initialise signing operation with self.
"""
m2.sign_init(self.ctx, self.md)
def sign_update(self, data):
"""
Feed data to signing operation.
@type data: string
@param data: Data to be signed.
"""
m2.sign_update(self.ctx, data)
def sign_final(self):
"""
Return signature.
@rtype: string
@return: The signature.
"""
return m2.sign_final(self.ctx, self.pkey)
# Deprecated
update = sign_update
final = sign_final
def verify_init(self):
"""
Initialise signature verification operation with self.
"""
m2.verify_init(self.ctx, self.md)
def verify_update(self, data):
"""
Feed data to verification operation.
@type data: string
@param data: Data to be verified.
@return: -1 on Python error, 1 for success, 0 for OpenSSL error
"""
return m2.verify_update(self.ctx, data)
def verify_final(self, sign):
"""
Return result of verification.
@param sign: Signature to use for verification
@rtype: int
@return: Result of verification: 1 for success, 0 for failure, -1 on
other error.
"""
return m2.verify_final(self.ctx, sign, self.pkey)
def assign_rsa(self, rsa, capture=1):
"""
Assign the RSA key pair to self.
@type rsa: M2Crypto.RSA.RSA
@param rsa: M2Crypto.RSA.RSA object to be assigned to self.
@type capture: boolean
@param capture: If true (default), this PKey object will own the RSA
object, meaning that once the PKey object gets
deleted it is no longer safe to use the RSA object.
@rtype: int
@return: Return 1 for success and 0 for failure.
"""
if capture:
ret = m2.pkey_assign_rsa(self.pkey, rsa.rsa)
if ret:
rsa._pyfree = 0
else:
ret = m2.pkey_set1_rsa(self.pkey, rsa.rsa)
return ret
def get_rsa(self):
"""
Return the underlying RSA key if that is what the EVP
instance is holding.
"""
rsa_ptr = m2.pkey_get1_rsa(self.pkey)
if rsa_ptr is None:
raise ValueError("PKey instance is not holding a RSA key")
rsa = RSA.RSA_pub(rsa_ptr, 1)
return rsa
def save_key(self, file, cipher='aes_128_cbc', callback=util.passphrase_callback):
"""
Save the key pair to a file in PEM format.
@type file: string
@param file: Name of file to save key to.
@type cipher: string
@param cipher: Symmetric cipher to protect the key. The default
cipher is 'aes_128_cbc'. If cipher is None, then the key is saved
in the clear.
@type callback: Python callable
@param callback: A Python callable object that is invoked
to acquire a passphrase with which to protect the key.
The default is util.passphrase_callback.
"""
bio = BIO.openfile(file, 'wb')
return self.save_key_bio(bio, cipher, callback)
def save_key_bio(self, bio, cipher='aes_128_cbc', callback=util.passphrase_callback):
"""
Save the key pair to the M2Crypto.BIO object 'bio' in PEM format.
@type bio: M2Crypto.BIO
@param bio: M2Crypto.BIO object to save key to.
@type cipher: string
@param cipher: Symmetric cipher to protect the key. The default
cipher is 'aes_128_cbc'. If cipher is None, then the key is saved
in the clear.
@type callback: Python callable
@param callback: A Python callable object that is invoked
to acquire a passphrase with which to protect the key.
The default is util.passphrase_callback.
"""
if cipher is None:
return m2.pkey_write_pem_no_cipher(self.pkey, bio._ptr(), callback)
else:
proto = getattr(m2, cipher, None)
if proto is None:
raise ValueError, 'no such cipher %s' % cipher
return m2.pkey_write_pem(self.pkey, bio._ptr(), proto(), callback)
def as_pem(self, cipher='aes_128_cbc', callback=util.passphrase_callback):
"""
Return key in PEM format in a string.
@type cipher: string
@param cipher: Symmetric cipher to protect the key. The default
cipher is 'aes_128_cbc'. If cipher is None, then the key is saved
in the clear.
@type callback: Python callable
@param callback: A Python callable object that is invoked
to acquire a passphrase with which to protect the key.
The default is util.passphrase_callback.
"""
bio = BIO.MemoryBuffer()
self.save_key_bio(bio, cipher, callback)
return bio.read_all()
def as_der(self):
"""
Return key in DER format in a string
"""
buf = m2.pkey_as_der(self.pkey)
bio = BIO.MemoryBuffer(buf)
return bio.read_all()
def size(self):
"""
Return the size of the key in bytes.
"""
return m2.pkey_size(self.pkey)
def get_modulus(self):
"""
Return the modulus in hex format.
"""
return m2.pkey_get_modulus(self.pkey)
def load_key(file, callback=util.passphrase_callback):
"""
Load an M2Crypto.EVP.PKey from file.
@type file: string
@param file: Name of file containing the key in PEM format.
@type callback: Python callable
@param callback: A Python callable object that is invoked
to acquire a passphrase with which to unlock the key.
@rtype: M2Crypto.EVP.PKey
@return: M2Crypto.EVP.PKey object.
"""
bio = m2.bio_new_file(file, 'r')
if bio is None:
raise BIO.BIOError(Err.get_error())
cptr = m2.pkey_read_pem(bio, callback)
m2.bio_free(bio)
if cptr is None:
raise EVPError(Err.get_error())
return PKey(cptr, 1)
def load_key_bio(bio, callback=util.passphrase_callback):
"""
Load an M2Crypto.EVP.PKey from an M2Crypto.BIO object.
@type bio: M2Crypto.BIO
@param bio: M2Crypto.BIO object containing the key in PEM format.
@type callback: Python callable
@param callback: A Python callable object that is invoked
to acquire a passphrase with which to unlock the key.
@rtype: M2Crypto.EVP.PKey
@return: M2Crypto.EVP.PKey object.
"""
cptr = m2.pkey_read_pem(bio._ptr(), callback)
if cptr is None:
raise EVPError(Err.get_error())
return PKey(cptr, 1)
def load_key_string(string, callback=util.passphrase_callback):
"""
Load an M2Crypto.EVP.PKey from a string.
@type string: string
@param string: String containing the key in PEM format.
@type callback: Python callable
@param callback: A Python callable object that is invoked
to acquire a passphrase with which to unlock the key.
@rtype: M2Crypto.EVP.PKey
@return: M2Crypto.EVP.PKey object.
"""
bio = BIO.MemoryBuffer(string)
return load_key_bio( bio, callback) | PypiClean |
/fisinma-0.1.0.tar.gz/fisinma-0.1.0/docs/source/theoretical_overview/theoretical_background/sensitivity_calculation.rst | Sensitivity Calculation
=======================
.. note::
In-depth information about the theoretical underpinnings and the calculation methods will be described in a book chapter to be released in the near future.
The Fisher information matrix (FIM) can be easily calculated via the sensitivity matrix :math:`S`:
.. math::
\begin{alignat}{3}
F = S^T C^{-1} S,
\end{alignat}
where :math:`C` is the covariance matrix of the measurement error.
As an example, the sensitivity matrix for two observables :math:`y = (y_1, y_2)`, two different inputs :math:`u = (u_1, u_2)`, :math:`N` different time points and :math:`N_p` parameters can be built in the following way:
.. math::
S =
\begin{bmatrix}
s_{11} (t_1, u_1) & ... & s_{1 N_p}(t_1, u_1) \\
\vdots & & \vdots \\
s_{11} (t_{N}, u_1) & ... & s_{1 N_p} (t_{N}, u_1)\\
s_{11} (t_1, u_2) & ... & s_{1 N_p}(t_1, u_2) \\
\vdots & & \vdots \\
s_{11} (t_N, u_2) & ... & s_{1 N_p} (t_N, u_2)\\
s_{21} (t_1, u_1) & ... & s_{2 N_p}(t_1, u_1) \\
\vdots & & \vdots \\
s_{21} (t_{N}, u_1) & ... & s_{2 N_p} (t_{N}, u_1)\\
s_{21} (t_1, u_2) & ... & s_{2 N_p}(t_1, u_2) \\
\vdots & & \vdots \\
s_{21} (t_N, u_2) & ... & s_{2 N_p} (t_N, u_2)
\end{bmatrix}
Here the elements of this matrix are the local sensitivity coefficients
.. math::
\begin{alignat}{3}
s_{ij} (t_m, u_n) = \frac{\mathrm{d} y_i}{\mathrm{d} p_j}
\end{alignat}
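
A minimal ``numpy`` sketch of the computation above (illustrative only; the
array shapes and variable names are assumptions, not part of the package API):

.. code-block:: python

   import numpy as np

   S = np.random.rand(8, 3)        # (observations x parameters) sensitivity matrix
   C = 0.1 * np.eye(8)             # measurement error covariance
   F = S.T @ np.linalg.inv(C) @ S  # Fisher information matrix, (3 x 3)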
| PypiClean |
/FreePyBX-1.0-RC1.tar.gz/FreePyBX-1.0-RC1/freepybx/public/js/dojox/charting/DataSeries.js.uncompressed.js | define("dojox/charting/DataSeries", ["dojo/_base/lang", "dojo/_base/declare", "dojo/_base/array", "dojo/_base/connect", "dojox/lang/functional"],
function(Lang, declare, ArrayUtil, Hub, df){
return declare("dojox.charting.DataSeries", null, {
constructor: function(store, kwArgs, value){
// summary:
// Series adapter for dojo.data stores.
// store: Object:
// A dojo.data store object.
// kwArgs: Object:
// A store-specific keyword parameters used for fetching items.
// See dojo.data.api.Read.fetch().
// value: Function|Object|String|Null:
// Function, which takes a store, and an object handle, and
// produces an output possibly inspecting the store's item. Or
// a dictionary object, which tells what names to extract from
// an object and how to map them to an output. Or a string, which
// is a numeric field name to use for plotting. If undefined, null
// or empty string (the default), "value" field is extracted.
this.store = store;
this.kwArgs = kwArgs;
if(value){
if(Lang.isFunction(value)){
this.value = value;
}else if(Lang.isObject(value)){
this.value = Lang.hitch(this, "_dictValue",
df.keys(value), value);
}else{
this.value = Lang.hitch(this, "_fieldValue", value);
}
}else{
this.value = Lang.hitch(this, "_defaultValue");
}
this.data = [];
this._events = [];
if(this.store.getFeatures()["dojo.data.api.Notification"]){
this._events.push(
Hub.connect(this.store, "onNew", this, "_onStoreNew"),
Hub.connect(this.store, "onDelete", this, "_onStoreDelete"),
Hub.connect(this.store, "onSet", this, "_onStoreSet")
);
}
this.fetch();
},
destroy: function(){
// summary:
// Clean up before GC.
ArrayUtil.forEach(this._events, Hub.disconnect);
},
setSeriesObject: function(series){
// summary:
// Sets a dojox.charting.Series object we will be working with.
// series: dojox.charting.Series:
// Our interface to the chart.
this.series = series;
},
// value transformers
_dictValue: function(keys, dict, store, item){
var o = {};
ArrayUtil.forEach(keys, function(key){
o[key] = store.getValue(item, dict[key]);
});
return o;
},
_fieldValue: function(field, store, item){
return store.getValue(item, field);
},
_defaultValue: function(store, item){
return store.getValue(item, "value");
},
// store fetch loop
fetch: function(){
// summary:
// Fetches data from the store and updates a chart.
if(!this._inFlight){
this._inFlight = true;
var kwArgs = Lang.delegate(this.kwArgs);
kwArgs.onComplete = Lang.hitch(this, "_onFetchComplete");
kwArgs.onError = Lang.hitch(this, "onFetchError");
this.store.fetch(kwArgs);
}
},
_onFetchComplete: function(items, request){
this.items = items;
this._buildItemMap();
this.data = ArrayUtil.map(this.items, function(item){
return this.value(this.store, item);
}, this);
this._pushDataChanges();
this._inFlight = false;
},
onFetchError: function(errorData, request){
// summary:
// A stub to process fetch errors. Provided so users can attach to
// it with dojo.connect(). See dojo.data.api.Read fetch() for
// details: onError property.
this._inFlight = false;
},
_buildItemMap: function(){
if(this.store.getFeatures()["dojo.data.api.Identity"]){
var itemMap = {};
ArrayUtil.forEach(this.items, function(item, index){
itemMap[this.store.getIdentity(item)] = index;
}, this);
this.itemMap = itemMap;
}
},
_pushDataChanges: function(){
if(this.series){
this.series.chart.updateSeries(this.series.name, this);
this.series.chart.delayedRender();
}
},
// store notification handlers
_onStoreNew: function(){
// the only thing we can do is to re-fetch items
this.fetch();
},
_onStoreDelete: function(item){
// we cannot do anything with deleted item, the only way is to compare
// items for equality
if(this.items){
var flag = ArrayUtil.some(this.items, function(it, index){
if(it === item){
this.items.splice(index, 1);
this._buildItemMap();
this.data.splice(index, 1);
return true;
}
return false;
}, this);
if(flag){
this._pushDataChanges();
}
}
},
_onStoreSet: function(item){
if(this.itemMap){
// we can use our handy item map, if the store supports Identity
var id = this.store.getIdentity(item), index = this.itemMap[id];
if(typeof index == "number"){
this.data[index] = this.value(this.store, this.items[index]);
this._pushDataChanges();
}
}else{
// otherwise we have to rely on item's equality
if(this.items){
var flag = ArrayUtil.some(this.items, function(it, index){
if(it === item){
this.data[index] = this.value(this.store, it);
return true;
}
return false;
}, this);
if(flag){
this._pushDataChanges();
}
}
}
}
});
}); | PypiClean |
/Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/plugins/standard/NumpyPlugin.py | """ Details see below in class definition.
"""
import os
import re
from nuitka import Options
from nuitka.plugins.PluginBase import NuitkaPluginBase
from nuitka.PythonVersions import getSystemPrefixPath
from nuitka.utils.FileOperations import listDir, listDllFilesFromDirectory
from nuitka.utils.Utils import isMacOS, isWin32Windows
sklearn_mods = [
"sklearn.utils.sparsetools._graph_validation",
"sklearn.utils.sparsetools._graph_tools",
"sklearn.utils.lgamma",
"sklearn.utils.weight_vector",
"sklearn.utils._unittest_backport",
"sklearn.externals.joblib.externals.cloudpickle.dumps",
"sklearn.externals.joblib.externals.loky.backend.managers",
]
if isWin32Windows():
sklearn_mods.extend(
[
"sklearn.externals.joblib.externals.loky.backend.synchronize",
"sklearn.externals.joblib.externals.loky.backend._win_wait",
"sklearn.externals.joblib.externals.loky.backend._win_reduction",
"sklearn.externals.joblib.externals.loky.backend.popen_loky_win32",
]
)
else:
sklearn_mods.extend(
[
"sklearn.externals.joblib.externals.loky.backend.synchronize",
"sklearn.externals.joblib.externals.loky.backend.compat_posix",
"sklearn.externals.joblib.externals.loky.backend._posix_reduction",
"sklearn.externals.joblib.externals.loky.backend.popen_loky_posix",
]
)
class NuitkaPluginNumpy(NuitkaPluginBase):
"""This class represents the main logic of the plugin.
This is a plugin to ensure scripts using numpy, scipy work well in
standalone mode.
While there already are relevant entries in the "ImplicitImports.py" plugin,
this plugin copies any additional binary or data files required by many
installations.
"""
plugin_name = "numpy" # Nuitka knows us by this name
plugin_desc = "Required for numpy."
def __init__(self, include_scipy):
self.include_numpy = True # For consistency
self.include_scipy = include_scipy
@classmethod
def isRelevant(cls):
"""Check whether plugin might be required.
Returns:
True if this is a standalone compilation.
"""
return Options.isStandaloneMode()
@classmethod
def addPluginCommandLineOptions(cls, group):
group.add_option(
"--noinclude-scipy",
action="store_false",
dest="include_scipy",
default=True,
help="""\
Should scipy, sklearn or skimage, when used, not be included with numpy. Default is %default.""",
)
def getExtraDlls(self, module):
"""Copy extra shared libraries or data for this installation.
Args:
module: module object
Yields:
DLL entry point objects
"""
full_name = module.getFullName()
if self.include_numpy and full_name == "numpy":
numpy_binaries = tuple(
self._getNumpyCoreBinaries(numpy_dir=module.getCompileTimeDirectory())
)
for full_path, target_filename in numpy_binaries:
yield self.makeDllEntryPoint(
source_path=full_path,
dest_path=target_filename,
package_name=full_name,
reason="core binary of 'numpy'",
)
self.reportFileCount(full_name, len(numpy_binaries))
if full_name == "scipy" and self.include_scipy and isWin32Windows():
scipy_binaries = tuple(
self._getScipyCoreBinaries(scipy_dir=module.getCompileTimeDirectory())
)
for source_path, target_filename in scipy_binaries:
yield self.makeDllEntryPoint(
source_path=source_path,
dest_path=target_filename,
package_name=full_name,
reason="core binary of 'scipy'",
)
self.reportFileCount(full_name, len(scipy_binaries))
@staticmethod
def _getNumpyCoreBinaries(numpy_dir):
"""Return any binaries in numpy package.
Notes:
This covers the special cases like MKL binaries.
Returns:
tuple of abspaths of binaries.
"""
# First look in numpy folder for binaries, this is for PyPI package.
numpy_lib_dir = os.path.join(numpy_dir, ".libs" if not isMacOS() else ".dylibs")
if os.path.isdir(numpy_lib_dir):
for full_path, filename in listDir(numpy_lib_dir):
yield full_path, filename
# Then look for libraries in numpy.core package path should already
# return the MKL files in ordinary cases
numpy_core_dir = os.path.join(numpy_dir, "core")
if os.path.exists(numpy_core_dir):
for full_path, filename in listDllFilesFromDirectory(numpy_core_dir):
yield full_path, filename
# Also look for MKL libraries in folder "above" numpy.
# This should meet the layout of Anaconda installs.
base_prefix = getSystemPrefixPath()
if isWin32Windows():
lib_dir = os.path.join(base_prefix, "Library", "bin")
else:
lib_dir = os.path.join(base_prefix, "lib")
# TODO: This doesn't actually match many files on macOS and seems not needed
# there, check if it has an impact on Windows, where maybe DLL detection is
# weaker.
if os.path.isdir(lib_dir):
for full_path, filename in listDir(lib_dir):
if isWin32Windows():
if not (
filename.startswith(("libi", "libm", "mkl"))
and filename.endswith(".dll")
):
continue
else:
re_mkllib = re.compile(
r"^(?:lib)?mkl[_\w]+\.(?:dll|so|dylib)", re.IGNORECASE
)
if not re_mkllib.match(filename):
continue
yield full_path, filename
@staticmethod
def _getScipyCoreBinaries(scipy_dir):
"""Return binaries from the extra-dlls folder (Windows only)."""
for dll_dir_name in ("extra_dll", ".libs"):
dll_dir_path = os.path.join(scipy_dir, dll_dir_name)
if os.path.isdir(dll_dir_path):
for source_path, source_filename in listDir(dll_dir_path):
if source_filename.lower().endswith(".dll"):
yield source_path, os.path.join(
"scipy", dll_dir_name, source_filename
)
def onModuleEncounter(self, module_name, module_filename, module_kind):
if not self.include_scipy and module_name.hasOneOfNamespaces(
"scipy", "sklearn", "skimage"
):
return False, "Omit unneeded components"
if module_name in ("cv2", "cv2.cv2", "cv2.data"):
return True, "Needed for OpenCV"
if self.include_scipy and module_name in sklearn_mods:
return True, "Needed by sklearn"
class NuitkaPluginDetectorNumpy(NuitkaPluginBase):
"""Only used if plugin is NOT activated.
Notes:
We are given the chance to issue a warning if we think we may be required.
"""
detector_for = NuitkaPluginNumpy
@classmethod
def isRelevant(cls):
"""Check whether plugin might be required.
Returns:
True if this is a standalone compilation.
"""
return Options.isStandaloneMode()
def onModuleDiscovered(self, module):
"""This method checks whether numpy is required.
Notes:
For this we check whether its first name part is numpy relevant.
Args:
module: the module object
Returns:
None
"""
module_name = module.getFullName()
if module_name == "numpy":
self.warnUnusedPlugin("Numpy may miss DLLs otherwise.") | PypiClean |
/MDP-3.6.tar.gz/MDP-3.6/bimdp/inspection/utils.py | from future import standard_library
standard_library.install_aliases()
from builtins import next
from builtins import object
import os
import pickle as pickle
def robust_pickle(path, filename, obj):
"""Robust pickle function, creates path if it does not exist."""
filename = os.path.join(path, filename)
try:
pickle_file = open(filename, "wb")
except IOError as inst:
error_code = inst.args[0]
if error_code == 2: # path does not exist
os.makedirs(path)
pickle_file = open(filename, "wb")
else:
raise
try:
pickle.dump(obj, pickle_file, -1)
finally:
pickle_file.close()
def robust_write_file(path, filename, content):
"""Create a file with the given content and return the filename.
If the provided path does not exist it will be created.
If the file already exists it will be overwritten.
"""
try:
new_file = open(os.path.join(path, filename), "w")
except IOError as inst:
error_code = inst.args[0]
if error_code == 2: # path does not exist
os.makedirs(path)
new_file = open(os.path.join(path, filename), "w")
else:
raise
new_file.write(content)
return filename
def first_iterable_elem(iterable):
"""Helper function to get the first element of an iterator or iterable.
The return value is a tuple of the first element and the iterable.
If the iterable is actually an iterator then a decorator is used to wrap
it and extract the first element in a non-consuming way.
"""
if iter(iterable) is iterable:
# iterable is actually iterator, have to wrap it
peek_iter = PeekIterator(iterable)
first_elem = peek_iter.peek()
return first_elem, peek_iter
else:
first_elem = next(iter(iterable))
return first_elem, iterable
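# Illustrative sketch (documentation only): for a generator the first element
# is peeked without being consumed, so the returned iterable still yields it.
def _example_first_iterable_elem():
    first, rest = first_iterable_elem(x * x for x in range(5))
    assert first == 0
    assert list(rest) == [0, 1, 4, 9, 16]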
class PeekIterator(object):
"""Look-ahead iterator decorator."""
def __init__(self, iterator):
self.iterator = iterator
# for simplicity we do not use collections.deque
self.cache = []
def peek(self):
"""Return the next element in the iterator without consuming it.
So the returned elements will still be returned by next in the normal
order. If the iterator has no next element then the StopIteration
exception is raised.
"""
next_elem = next(self)
# TODO: use a deque for better efficiency
self.cache = [next_elem] + self.cache
return next_elem
def __next__(self):
if self.cache:
return self.cache.pop()
else:
return next(self.iterator)
def __iter__(self):
return self | PypiClean |
/Faker-19.3.1.tar.gz/Faker-19.3.1/faker/providers/lorem/th_TH/__init__.py | from typing import Dict
from .. import Provider as LoremProvider
class Provider(LoremProvider):
"""Implement lorem provider for ``th_TH`` locale.
Word list is randomly drawn from Thailand's Ministry of Education,
removing compound words and long words, adding common words (like
prepositions) and a few regional words.
Sources:
- http://www.arts.chula.ac.th/~ling/TTC/id-4.html
- https://www.sanook.com/campus/1390689/
- https://www.sanook.com/campus/1397677/
- https://www.sanook.com/campus/1392241/
"""
word_connector = "" # Thai writing has no word divider
sentence_punctuation = " " # single space
word_list = (
"กตัญญู",
"กบ",
"กรดไหลย้อน",
"กรรมการ",
"กระจาย",
"กระถาง",
"กล",
"กล่อง",
"กล้า",
"กลาง",
"กลางคืน",
"กล่าว",
"กว้าง",
"กะเพรา",
"กะละมัง",
"กับ",
"ก้าง",
"กาม",
"การ",
"กำ",
"กำไร",
"กิ่งไม้",
"กิจกรรม",
"กิน",
"กิโลเมตร",
"กีฬา",
"กู",
"กูเกิล",
"เกม",
"เกาหลี",
"แก้ว",
"แกะ",
"แก",
"แก่",
"แก้",
"โก๋แก่",
"โกง",
"ขนม",
"ขนมชั้น",
"ของหวาน",
"ขัด",
"ขันน้ำ",
"ข้าง",
"ขาดเคิ่ง",
"ข้าว",
"ข้าวเจ้า",
"ข้าวหมูแดง",
"ขี่",
"ขี้ไคล",
"ขี้ดิน",
"ขุด",
"เขยิบ",
"เขยื้อน",
"เข้ารหัส",
"แข่งขัน",
"แข็ง",
"แข้ง",
"ไข่",
"คนไข้",
"คนตาย",
"คบ",
"คมนาคม",
"ครอง",
"ครู",
"คลาน",
"ควร",
"ความ",
"คอก",
"คอมมิวนิสต์",
"ค่อย",
"คะแนน",
"คั่ว",
"คาว",
"คำถาม",
"คำสั่ง",
"คู่",
"เคย",
"เครื่องบิน",
"เคเอฟซี",
"เคารพ",
"แคะ",
"โควิด",
"ไค้หัน",
"งม",
"ง่วง",
"เงา",
"โง่",
"จะไปพั่ง",
"จัด",
"จาก",
"จ๋า",
"เจ็บไข้",
"แจ่มใส",
"ใจ",
"ฉีด",
"เฉย",
"ชนิด",
"ชะนี",
"ช้า",
"ชาว",
"ชาวนา",
"ชิง",
"ชุดนอน",
"ชุมนุม",
"ชู",
"เช้า",
"เชื่อม",
"เชื้อโรค",
"เชื่อ",
"ไชโย",
"ซ่อน",
"ซ่อมเบิ่ง",
"ซอย",
"ซี่",
"แซง",
"ด้วย",
"ดอกไม้",
"ดอง",
"ดังนี้",
"ด้าย",
"ดาวเทียม",
"ดำ",
"ดี",
"ดึก",
"ดู",
"เดี่ยว",
"โดย",
"ได้แก่",
"ตกลง",
"ต้น",
"ตรวจ",
"ตลอด",
"ตอก",
"ตอใด",
"ต่อ",
"ตะแกรง",
"ตะปู",
"ตั้งแต่",
"ตับ",
"ตัวเมีย",
"ตัวอย่าง",
"ตำลึง",
"ติด",
"ตีน",
"ตื่น",
"ตู้",
"ตู่",
"เตา",
"เตียน",
"แต่ง",
"แตะ",
"แต่",
"โตย",
"โต",
"ไต้หวัน",
"ไต้",
"ถกเถียง",
"ถาง",
"ถีบ",
"ถึง",
"แถบ",
"ทด",
"ทดลอง",
"ทรัพย์สิน",
"ทวด",
"ทวิตเตอร์",
"ทหาร",
"ท้องฟ้า",
"ทอด",
"ทอดมัน",
"ทั่ว",
"ทาน",
"ทำสวน",
"ที่ดิน",
"ที่",
"ทุกข์",
"ทุ่ม",
"เทเลแกรม",
"แท็กซี่",
"แท็บลอยด์",
"ธนาคาร",
"ธาตุ",
"น้อง",
"นักเรียน",
"นั่ง",
"น้า",
"น้ำเย็น",
"น้ำหวาน",
"นิ่ม",
"นุ่น",
"เนื่องจาก",
"เนื้อ",
"โน่น",
"ใน",
"บริโภค",
"บริษัท",
"บอก",
"บอกใบ้",
"บัดนี้",
"บันได",
"บาด",
"บูชา",
"บูด",
"เบียร์",
"ใบไม้",
"ปกครอง",
"ประชาธิปไตย",
"ประพฤติ",
"ประสบการณ์",
"ปาก",
"ปิ่นโต",
"ปี",
"ปี่",
"ปู",
"เป็น",
"เปลือง",
"เป้า",
"แปรง",
"ผล",
"ผลัด",
"ผลิต",
"ผสม",
"ผ่อ",
"ผัก",
"ผิด",
"ผีก",
"ผู้ร้าย",
"เผื่อ",
"แผนที่",
"โผล่",
"ฝาก",
"พนมมือ",
"พยาธิ",
"พ่อ",
"พักผ่อน",
"พับ",
"พิการ",
"พิพักพิพ่วน",
"เพดาน",
"เพราะ",
"เพลง",
"เพียง",
"แพ้",
"ฟาก",
"เฟซบุ๊ก",
"มลายู",
"มอบ",
"มะเขือเทศ",
"มัสยิด",
"มิตร",
"เมตตา",
"เมล็ด",
"เมาะ",
"แมค",
"แม่มด",
"แมลง",
"แม่",
"แม้",
"ย่อ",
"ยัน",
"ยา",
"ย้ำ",
"ยีราฟ",
"ยึด",
"ยูทูบ",
"เย็น",
"เย็บ",
"เยอะ",
"เยาวชน",
"รถโดยสาร",
"รถถัง",
"รถทัวร์",
"รถบัส",
"ร่มรื่น",
"รสชาติ",
"ร้อน",
"รอ",
"ระเบียง",
"ระยำ",
"รังแก",
"รัฐบาล",
"รัฐประหาร",
"ราก",
"ร่างกาย",
"ร่าง",
"ริม",
"รู้จัก",
"เริ่ม",
"เรียง",
"เรื่อย",
"แรก",
"แรงงาน",
"โรงสี",
"ฤดู",
"ลงมือ",
"ล่อ",
"ลืมคาว",
"ลูกชิ้น",
"ลูกตา",
"ลูก",
"เล่ม",
"เลี้ยว",
"เลือก",
"แลก",
"และ",
"วัง",
"วัฒนธรรม",
"วาด",
"วิกิพีเดีย",
"วิ่ง",
"วิชาชีพ",
"วินโดวส์",
"ศาลากลาง",
"ศาสตร์",
"ศิษย์",
"เศรษฐกิจ",
"เศษอาหาร",
"เศษ",
"สดชื่น",
"สด",
"สถานี",
"สนอง",
"สบาย",
"สมอง",
"สมาคม",
"สม่ำเสมอ",
"สลับ",
"สหกรณ์",
"สหภาพ",
"สัญญา",
"สาธารณรัฐ",
"สารวัตร",
"สำนักงาน",
"สำหรับ",
"สีแดง",
"สีเทา",
"สี",
"สุขภาพ",
"สุดท้าย",
"เสรีนิยม",
"เสรีภาพ",
"เสียบ",
"แสง",
"หน้ากาก",
"หน้าต่าง",
"หน้าที่",
"หนุน",
"หนู",
"หมด",
"ห่มผ้า",
"หมอก",
"หม้อ",
"หมัด",
"หมี",
"หมุน",
"หยอก",
"หยัก",
"หรือ",
"หลง",
"หล่น",
"หลบ",
"หลังคา",
"ห่วงใย",
"หว่าน",
"ห่อข้าว",
"ห้องเรียน",
"หอย",
"ห้าง",
"หาบ",
"หาม้าย",
"หาย",
"หึงสา",
"หุ้ม",
"เหตุ",
"เห็น",
"แหย่",
"ใหม่",
"ไหน",
"องค์",
"อด",
"อธิษฐาน",
"อนุบาล",
"อบอุ่น",
"อวัยวะ",
"ออนซอนเด๊",
"อ่อนหวาน",
"อัศจรรย์",
"อายุ",
"อาสา",
"อาหาร",
"อิฐ",
"อินเทอร์เน็ต",
"อินสตาแกรม",
"อิสลาม",
"อุปโภค",
"เอสซีบี",
"เอิด",
"แอนดรอยด์",
"ไอศกรีม",
"ไอโอเอส",
)
parts_of_speech: Dict[str, tuple] = {} | PypiClean |
/MaterialDjango-0.2.5.tar.gz/MaterialDjango-0.2.5/materialdjango/static/materialdjango/components/bower_components/paper-listbox/.github/ISSUE_TEMPLATE.md | <!-- Instructions: https://github.com/PolymerElements/paper-listbox/CONTRIBUTING.md#filing-issues -->
### Description
<!-- Example: The `paper-foo` element causes the page to turn pink when clicked. -->
### Expected outcome
<!-- Example: The page stays the same color. -->
### Actual outcome
<!-- Example: The page turns pink. -->
### Live Demo
<!-- Example: https://jsbin.com/cagaye/edit?html,output -->
### Steps to reproduce
<!-- Example
1. Put a `paper-foo` element in the page.
2. Open the page in a web browser.
3. Click the `paper-foo` element.
-->
### Browsers Affected
<!-- Check all that apply -->
- [ ] Chrome
- [ ] Firefox
- [ ] Safari 9
- [ ] Safari 8
- [ ] Safari 7
- [ ] Edge
- [ ] IE 11
- [ ] IE 10
| PypiClean |
/AstroCabTools-1.5.1.tar.gz/AstroCabTools-1.5.1/astrocabtools/fit_line/src/models/lineModelCreation.py | import numpy as np
import pandas as pd
from lmfit import Parameters, Model
import sys
import traceback
import io
from collections import deque
from .linePointsData import linePointsData
from astrocabtools.fit_line.src.utils.fitting_model_creation import calculate_intercept, calculate_slope, integrated_flux, line_fitting_function
__all__ = ['lineModel']
class lineModel:
def __init__(self, textLines, typeCont, parent=None):
super().__init__()
self.__lineFitPoints = linePointsData(leftX=0.0, rightX=0.0, leftY=0.0, rightY=0.0)
self.__lineDict = {}
self.__lineDeque = deque()
self.__lines = []
self.__markers = []
self.__textLines = textLines
@property
def lines(self):
return self.__lines
@property
def markers(self):
return self.__markers
@lines.setter
def lines(self, figure):
self.__lines.append(figure)
@markers.setter
def markers(self, marker):
self.__markers.append(marker)
def del_marker(self, marker):
self.__markers.remove(marker)
def del_line(self, line):
self.__lines.remove(line)
def init_data_points(self):
self.__lineDict = self.__lineFitPoints.asdict()
"""
Merge two dicts to simplify the use in the iterative process and
in case of duplicate parameters on both
"""
self.__textLines = iter(self.__textLines)
def add_data_points(self, xdata, ydata):
""" Update specific coordinate values based on order value of counter
:param float xdata: X coordinate
:param float ydata: Y coordinate
"""
self.__lineDeque.append((xdata, ydata))
return self.__textLines
def _generate_initial_line_model(self,wavelengthValues, a, b):
y_values = []
for x in wavelengthValues:
y_values.append(line_fitting_function(x, a, b))
return y_values
def draw_model_fit(self, path, wavelength, flux):
""" Generate the gauss model, draw the model results based on x value range
and update the table that shows the results parameters"""
for i, key in enumerate(self.__lineDict.keys()):
self.__lineDict[key] = self.__lineDeque[i]
#Obtain the wavelength values in the given range
wavelengthValues = wavelength[(wavelength >= self.__lineDict['left'][0]) & (wavelength <= self.__lineDict['right'][0])]
#Obtain the indexes from the initial wavelength array
#based on the min and max values of the slice made previously
index1 = np.where(wavelength == np.amin(wavelengthValues))
index2 = np.where(wavelength == np.amax(wavelengthValues))
#Obtain the flux values between the indexes obtained previously
fluxValues = flux[index1[0][0]:(index2[0][0]+1)]
line_model = Model(line_fitting_function, name= 'model1')
slope = calculate_slope(self.__lineDict['left'][0], self.__lineDict['left'][1],self.__lineDict['right'][0], self.__lineDict['right'][1])
initial_y_values = self._generate_initial_line_model(wavelengthValues, calculate_intercept(slope, self.__lineDict['left'][0], self.__lineDict['left'][1]), slope)
params = line_model.make_params(a=calculate_intercept(slope, self.__lineDict['left'][0], self.__lineDict['left'][1]),
b=slope)
init = line_model.eval(params, x=wavelengthValues)
result = line_model.fit(fluxValues, params, x=wavelengthValues)
#Update table of results parameters
resultText = "Path: {}".format(path)
resultText = resultText + "\n" + \
"Line model: {} + {} * x".format(str(result.params['a'].value), str(result.params['b'].value))
lineFitResultList = [key + " = " + str(result.params[key].value) for key in result.params]
for resultParams in lineFitResultList:
resultText = resultText + "\n" + resultParams
resultText = resultText + "\n" + "Chi-square" + " = " + str(result.chisqr)
return result, resultText, wavelengthValues, fluxValues, initial_y_values, None | PypiClean
/Eureqa-1.76.0.tar.gz/Eureqa-1.76.0/eureqa/analysis_templates/parameters.py |
from parameter import Parameter
from text_parameter import TextParameter
from data_source_parameter import DataSourceParameter
from top_level_model_parameter import TopLevelModelParameter
from variable_parameter import VariableParameter
from data_file_parameter import DataFileParameter
from combo_box_parameter import ComboBoxParameter
from numeric_parameter import NumericParameter
import json
from collections import OrderedDict
class Parameters:
"""Analysis template parameters definition
:param list[Parameter] parameters: The list of parameters for the template, whether text, variable or datasource.
"""
def __init__(self, parameters=None):
"""Initializes a new instance of the ~Parameters class
"""
self.parameters = OrderedDict((p.id, p) for p in parameters) if parameters is not None else parameters
def _to_json(self):
body = {}
if self.parameters is not None:
body['parameters'] = [x._to_json() for x in self.parameters.itervalues()]
return body
def __str__(self):
return json.dumps(self._to_json(), indent=4)
_parameter_map = {
'combo_box': ComboBoxParameter,
'data_file': DataFileParameter,
'datasource': DataSourceParameter,
'numeric': NumericParameter,
'top_level_model': TopLevelModelParameter,
'text': TextParameter,
'text_multiline': TextParameter,
'variable': VariableParameter
}
@classmethod
def _from_json(cls, body):
ret = Parameters()
if not 'parameters' in body: return ret
parameters = []
for element in body['parameters']:
parameter = Parameter._from_json(element)
if not parameter._type in cls._parameter_map: raise Exception('Invalid analysis template parameter type \'%s\' for parameter \'%s\' with label \'%s\'' %(parameter._type, parameter.id, parameter.label))
parameter = cls._parameter_map[parameter._type]._from_json(element)
parameters.append(parameter)
ret.parameters = OrderedDict((p.id, p) for p in parameters)
return ret | PypiClean |
/Image2Dia-0.51.tar.gz/Image2Dia-0.51/image2dia/image2dia.py | import os
import sys
try:
from lxml import etree
import Image
import shutil
import cairo
import rsvg
except:
print """ERROR:
Some dependencies failed"
cairo, rsvg, PIL and lxml needed"
"""
class Image2Dia():
'''
This class inserts an image file into an DIA sheet.
The main function is 'add(filename,sheet)'
'''
# CONSTANTS ...
VERSIO = "0.51"
_SHEET_LINE = """<object name='%s - %s'><description>%s</description></object>"""
_SHEET_NS = "{http://www.lysator.liu.se/~alla/dia/dia-sheet-ns}"
_SHAPE_NS = "{http://www.daa.com.au/~james/dia-shape-ns}"
_SVG_NS = "{http://www.w3.org/2000/svg}"
_XLINK_NS = "{http://www.w3.org/1999/xlink}"
class Image2DiaErrors(Exception):
'''
Class to manage errors generated with Image2dia
0: Shape created without problems
1: Incorrect parameters
2: Image file already in this sheet
3: Name of the shape already in use in the same sheet
4: Image file provided not found
'''
errors = [
"Shape created",
"Incorrect parameters",
"Image file allready in use in this sheet",
"The name of the shape is allready in use in the specified sheet",
"Image file not found",
]
def __init__(self,value):
self.value = value
def __str__(self):
return self.errors[self.value]
def __init__(self):
self.DiaUserDir = "{0:>s}/.dia".format(os.path.expanduser("~"))
def getVersion(self):
'''
returns the version of the module
'''
return self.VERSIO
def _getDiaUserDir(self):
return self.DiaUserDir
def _getSheetsDir(self,p):
'''
Returns the sheets directory
PARAMS
USER: User directory
SYSTEM: System directory (not working in non Debian systems)
'''
SHEETS_DIR = { "USER" : "{0:>s}/sheets".format(self._getDiaUserDir()),
"SYSTEM" : "/usr/share/dia/sheets" }
if p in SHEETS_DIR:
return SHEETS_DIR[p]
else:
return None
def _getShapesDir(self, p):
'''
Returns the shapes dir
PARAMS
USER: User directory
SYSTEM: System directory (not working in non Debian systems)
'''
SHAPES_DIR = { "USER" : "{0:>s}/shapes".format(self._getDiaUserDir()),
"SYSTEM" : "/usr/share/dia/shapes"}
if p in SHAPES_DIR:
return SHAPES_DIR[p]
else:
return None
def listSheets(self, quin):
'''
Gives a list with the actual Sheets in the local installation
PARAMS
quin: Must be USER or SYSTEM
USER: Sheets in the user home folder
SYSTEM: Sheets of the system (only /usr/share/dia )
returns the specified list or an empty list for incorrect parameters
'''
newList = []
directori = self._getSheetsDir(quin)
if directori is not None:
for fitxer in os.listdir(directori):
nom,_ = os.path.splitext(fitxer)
newList.append(nom)
return newList
def listAllSheets(self):
'''
Returns all sheets in the system
'''
newList = []
for value in ("USER","SYSTEM"):
newList = newList + self.listSheets(value)
return newList
def _createDiaSheetFile(self, nom):
"""
Create a new Sheet file
PARAMS
nom: Name of the sheet
"""
sheet = """<?xml version="1.0" encoding="utf-8"?>
<sheet xmlns="http://www.lysator.liu.se/~alla/dia/dia-sheet-ns">
<name>Imatges</name>
<description>Simplement imatges</description>
<contents>
</contents>
</sheet>
"""
root = etree.XML(sheet)
description, _ = os.path.splitext(os.path.basename(nom))
for element in root.iter():
if element.tag == self._SHEET_NS + 'name':
element.text = description
elif element.tag == self._SHEET_NS + 'description':
element.text = description
doc = etree.ElementTree()
doc._setroot(root)
doc.write(nom, encoding="utf-8", xml_declaration=True, method="xml")
def _createDiaShapeFile(self, shapeFile, grup, nomfitxer, mida):
'''
Creates a shapeFile with the name especified
PARAMS
shapefile: Name of the shape File
grup: Sheet name
nom: Name of the shape
mida: List with the dimensions of the image
'''
shape = """<?xml version="1.0" encoding="UTF-8"?>
<shape xmlns="http://www.daa.com.au/~james/dia-shape-ns"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink">
<name>Imatges - Tux</name>
<icon>tux-icon.png</icon>
<connections>
<point x="0" y="0"/>
<point x="0" y="20"/>
<point x="10" y="10"/>
<point x="20" y="0"/>
<point x="20" y="20"/>
</connections>
<aspectratio type="fixed"/>
<svg:svg>
<svg:image x="0" y="0" width="20" height="20" xlink:href="tux.png"/>
</svg:svg>
</shape>
"""
nom, _ = os.path.splitext(nomfitxer)
root = etree.XML(shape)
for element in root.iter():
if element.tag == self._SHAPE_NS + 'name':
element.text = grup + " - " + nom
elif element.tag == self._SHAPE_NS + 'icon':
element.text = '%s-icon.png' % (nom)
elif element.tag == self._SHAPE_NS + 'connections':
x = 0
y = 0
# iterx = midax / (len(element)-1)
# itery = miday / (len(element)-1)
for fill in element.iterchildren():
if element.index(fill) == 0:
x = mida[0] / 2
y = mida[1] / 2
fill.set('x', str(x))
fill.set('y', str(y))
x = 0
y = 0
else:
if x > mida[0]:
x = 0
y = y + mida[1]
fill.set('x', str(x))
fill.set('y', str(y))
x = x + mida[0]
elif element.tag == self._SVG_NS + 'image':
element.set("%shref" % (self._XLINK_NS), nomfitxer)
element.set('width', str(mida[0]))
element.set('height', str(mida[1]))
# Write shape file
'''
1) If I use parse() I can write it to screen
doc.write(sys.stdout)
2) Same but starting from root element
print etree.tostring(root, xml_declaration=True,
encoding="UTF-8",pretty_print=True)
'''
doc = etree.ElementTree()
doc._setroot(root)
doc.write(shapeFile, encoding="UTF-8", xml_declaration=True, method="xml")
def checkFiles(self, shapeFile, sheetFile):
"""
1- Test if the shape file exists (for not overwriting existing shapes)
2- Test if the sheet File already exists
PARAMS
shapeFile: Name of the shape file (full path)
sheetFile: Name of the sheet where the shape will be inserted
(full path)
RETURNS
0: Ok
1: Reserved
2: Shape file already exists
3: Shape already in the sheet
"""
# Check if the shape file exists:
if os.path.exists(shapeFile):
raise self.Image2DiaErrors(2)
# Check if we are adding a shape in an existent sheet
if os.path.exists(sheetFile):
# Name of the shape in the sheet file ("sheet - Name")
shapeName = '%s - %s' % (os.path.basename(shapeFile),os.path.basename(sheetFile))
# Test if the reference file already exists in the sheet - XPath
arrel = etree.parse(sheetFile, parser=None, base_url=None).getroot()
r = arrel.xpath("//d:object/@name",
namespaces={'d': 'http://www.lysator.liu.se/~alla/dia/dia-sheet-ns'})
if shapeName in r:
raise self.Image2DiaErrors(3)
return 0
def _convertSVGImageFile(self, nom):
'''
Convesion from SVG to a PNG with the same name
'''
nomfitxer = "%s.svg" % (nom)
svg = rsvg.Handle(nomfitxer)
midax = svg.get_property('width')
miday = svg.get_property('height')
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, midax, miday)
ctx = cairo.Context(surface)
svg.render_cairo(ctx)
surface.write_to_png(nom + ".png")
surface.finish()
def _createIcon(self, nom, extensio):
''' Creates an Icon for the shape
PARAMETERS
nom : Name of the file without extension
extensio: Extension
RETURNS
Returns (sizex, sizey)
'''
# create an icon 22x22 WITHOUT TRANSPARENCY
try:
img = Image.open(nom + extensio)
except:
# Yes, I need to detect the error type ... maybe in the next version
raise self.Image2DiaErrors(4)
midax, miday = img.size
# Create a white background to skip 'bad transparency effect'
icona = Image.new("RGB",(22,22),(255,255,255))
# Aspect Ratio
novaMida = (30, 30 *miday/midax)
img.thumbnail([22, 22], Image.ANTIALIAS)
elformat = img.format
center = ( (22 - img.size[0])/2, (22-img.size[1])/2 )
icona.paste(img, center, img)
try:
icona.save('{0:>s}-icon.png'.format(nom), elformat, quality=90, optimize=1)
except:
icona.save("{0:>s}-icon.png".format(nom), elformat, quality=90)
# img.close
return novaMida
def addImage(self, nomfitxer, grup):
'''
Inserts the image into the specified sheet. If the sheet
does not exist, it is automatically created.
PARAMS
nomfitxer: Image file name
grup: Sheet to insert the image
RETURN
0: Shape created without problems
RAISES
Image2DiaErrors.
'''
# Split name and extension
nom, extensio = os.path.splitext(nomfitxer)
# Define folders and files...
shapeDir = self._getShapesDir("USER")
sheetDir = self._getSheetsDir("USER")
shapeFile = "{0:>s}/{1:>s}/{2:>s}.shape".format(shapeDir, grup, nom)
sheetFile = "{0:>s}/{1:>s}.sheet".format(sheetDir, grup)
# Test if the shape and sheet files are ok:
resultat = self.checkFiles(shapeFile, sheetFile)
# 1) Icon creation
# -------------------------------------------------------------
# Must convert a SVG file to PNG to create an icon (Can be simplified?)
if extensio == ".svg":
self._convertSVGImageFile(nom)
extensio = ".png"
mida = self._createIcon(nom, extensio)
# 2) Create the XML shape file
# --------------------------------------------------------------
folder = os.path.dirname(shapeFile)
if not os.path.isdir(folder):
os.mkdir(folder, 0750)
# Create the file
self._createDiaShapeFile(shapeFile, grup, nomfitxer,mida)
# 3) Add the new image to sheets file
# -------------------------------------------------
if not os.path.exists(sheetFile):
# Sheet not found, proceed creating one ...
self._createDiaSheetFile(sheetFile)
doc = etree.parse(sheetFile)
node = doc.find("%scontents" % (self._SHEET_NS))
# Insert the new node
noufill = etree.XML(self._SHEET_LINE % (grup, nom, nom))
node.append(noufill)
doc.write(sheetFile, encoding="utf-8", xml_declaration=True, method="xml")
# 4) Copy the images into the shape folder
# -------------------------------------
shapeDestination = "%s/%s" % (shapeDir, grup)
if os.getcwd() != shapeDestination:
shutil.copy2(nomfitxer, shapeDestination)
shutil.move("%s-icon.png" % (nom), shapeDestination)
# Happy End
return resultat
if __name__ == "__main__":
# Always show the version info
i2d = Image2Dia()
print "%s v.%s" % (os.path.basename(sys.argv[0]), i2d.getVersion())
print "-----------------------------------"
nomfitxer = ""
# 0) Check params
# -------------------------------------
if len(sys.argv) == 3:
nomfitxer = sys.argv[1]
grup = sys.argv[2]
resultat = i2d.addImage(nomfitxer,grup)
else:
raise i2d.Image2DiaErrors(1) | PypiClean |
/IPFX-1.0.8.tar.gz/IPFX-1.0.8/docs/installation.rst | .. highlight:: shell
============
Installation
============
The easiest way to install ``ipfx`` is via pip:
.. code-block:: bash
pip install ipfx
We suggest installing ``ipfx`` into a managed Python environment, such as those provided by `anaconda <https://anaconda.org/anaconda/anaconda-project>`_ or `venv <https://docs.python.org/3/library/venv.html>`_. This avoids conflicts between different Python and package versions.
Installing for development
--------------------------
If you wish to make contributions to ``ipfx`` (thank you!), you must clone the `git repository <https://github.com/alleninstitute/ipfx>`_. You will need to install `git-lfs <https://git-lfs.github.com/>`_ before cloning (we store some test data in lfs). Once you have cloned ``ipfx``, simply pip install it into your environment:
.. code-block:: bash
pip install -e path/to/your/ipfx/clone
(-e installs in editable mode, so that you can use your changes right off the bat).
See :doc:`contributing` for more information. | PypiClean |
/HyFetch_testing-1.5.0rc3-py3-none-any.whl/hyfetch/color_util.py | from __future__ import annotations
import colorsys
from typing import NamedTuple, Callable, Optional
from typing_extensions import Literal
from .constants import GLOBAL_CFG
AnsiMode = Literal['default', 'ansi', '8bit', 'rgb']
LightDark = Literal['light', 'dark']
MINECRAFT_COLORS = ["&0/\033[0;30m", "&1/\033[0;34m", "&2/\033[0;32m", "&3/\033[0;36m", "&4/\033[0;31m",
"&5/\033[0;35m", "&6/\033[0;33m", "&7/\033[0;37m", "&8/\033[1;30m", "&9/\033[1;34m",
"&a/\033[1;32m", "&b/\033[1;36m", "&c/\033[1;31m", "&d/\033[1;35m", "&e/\033[1;33m",
"&f/\033[1;37m",
"&r/\033[0m", "&l/\033[1m", "&o/\033[3m", "&n/\033[4m", "&-/\n"]
MINECRAFT_COLORS = [(r[:2], r[3:]) for r in MINECRAFT_COLORS]
def color(msg: str) -> str:
"""
Replace extended minecraft color codes in string
:param msg: Message with minecraft color codes
:return: Message with escape codes
"""
for code, esc in MINECRAFT_COLORS:
msg = msg.replace(code, esc)
while '&gf(' in msg or '&gb(' in msg:
i = msg.index('&gf(') if '&gf(' in msg else msg.index('&gb(')
end = msg.index(')', i)
code = msg[i + 4:end]
fore = msg[i + 2] == 'f'
if code.startswith('#'):
rgb = tuple(int(code.lstrip('#')[i:i+2], 16) for i in (0, 2, 4))
else:
code = code.replace(',', ' ').replace(';', ' ').replace('  ', ' ')
rgb = tuple(int(c) for c in code.split(' '))
msg = msg[:i] + RGB(*rgb).to_ansi(foreground=fore) + msg[end + 1:]
return msg
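# Illustrative sketch (documentation only): '&gf(...)' / '&gb(...)' set the
# foreground / background color from a hex code or an RGB triple, as parsed
# by color() above; the exact escape output depends on GLOBAL_CFG.color_mode.
def _example_color():
    return color('&gf(#ff0000)red&r &gb(0,0,255)on blue&r')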
def printc(msg: str):
"""
Print with color
:param msg: Message with minecraft color codes
"""
print(color(msg + '&r'))
def clear_screen(title: str = ''):
"""
Clear screen using ANSI escape codes
"""
if not GLOBAL_CFG.debug:
print('\033[2J\033[H', end='')
if title:
print()
printc(title)
print()
def redistribute_rgb(r: int, g: int, b: int) -> tuple[int, int, int]:
"""
Redistribute RGB after lightening
Credit: https://stackoverflow.com/a/141943/7346633
"""
threshold = 255.999
m = max(r, g, b)
if m <= threshold:
return int(r), int(g), int(b)
total = r + g + b
if total >= 3 * threshold:
return int(threshold), int(threshold), int(threshold)
x = (3 * threshold - total) / (3 * m - total)
gray = threshold - x * m
return int(gray + x * r), int(gray + x * g), int(gray + x * b)
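# Illustrative sketch (documentation only): an over-bright channel is clipped
# to the 0-255 range while the spare intensity is spread over the others.
def _example_redistribute_rgb():
    # Expected values follow directly from the formula above.
    assert redistribute_rgb(300, 100, 50) == (255, 114, 79)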
class RGB(NamedTuple):
r: int
g: int
b: int
@classmethod
def from_hex(cls, hex: str) -> "RGB":
"""
Create color from hex code
>>> RGB.from_hex('#FFAAB7')
RGB(r=255, g=170, b=183)
:param hex: Hex color code
:return: RGB object
"""
while hex.startswith('#'):
hex = hex[1:]
r = int(hex[0:2], 16)
g = int(hex[2:4], 16)
b = int(hex[4:6], 16)
return cls(r, g, b)
def to_ansi_rgb(self, foreground: bool = True) -> str:
"""
Convert RGB to ANSI TrueColor (RGB) Escape Code.
This uses the 24-bit color encoding (an uint8 for each color value), and supports 16 million
colors. However, not all terminal emulators support this escape code. (For example, IntelliJ
debug console doesn't support it).
Currently, we do not know how to detect whether a terminal environment supports ANSI RGB. If
you have any thoughts, feel free to submit an issue on our Github page!
:param foreground: Whether the color is for foreground text or background color
:return: ANSI RGB escape code like \033[38;2;255;100;0m
"""
c = '38' if foreground else '48'
return f'\033[{c};2;{self.r};{self.g};{self.b}m'
def to_ansi_8bit(self, foreground: bool = True) -> str:
"""
Convert RGB to ANSI 8bit 256 Color Escape Code.
This encoding supports 256 colors in total.
:return: ANSI 256 escape code like \033[38;5;206m'
"""
r, g, b = self.r, self.g, self.b
sep = 42.5
while True:
if r < sep or g < sep or b < sep:
gray = r < sep and g < sep and b < sep
break
sep += 42.5
if gray:
color = 232 + (r + g + b) / 33
else:
color = 16 + int(r / 256. * 6) * 36 + int(g / 256. * 6) * 6 + int(b / 256. * 6)
c = '38' if foreground else '48'
return f'\033[{c};5;{int(color)}m'
def to_ansi_16(self, foreground: bool = True) -> str:
"""
Convert RGB to ANSI 16 Color Escape Code
:return: ANSI 16 escape code
"""
raise NotImplementedError()
def to_ansi(self, mode: AnsiMode | None = None, foreground: bool = True):
if not mode:
mode = GLOBAL_CFG.color_mode
if mode == 'rgb':
return self.to_ansi_rgb(foreground)
if mode == '8bit':
return self.to_ansi_8bit(foreground)
if mode == 'ansi':
return self.to_ansi_16(foreground)
def lighten(self, multiplier: float) -> 'RGB':
"""
Lighten the color by a multiplier
:param multiplier: Multiplier
:return: Lightened color (original isn't modified)
"""
return RGB(*redistribute_rgb(*[v * multiplier for v in self]))
def set_light(self, light: float, at_least: bool | None = None, at_most: bool | None = None) -> 'RGB':
"""
Set HSL lightness value
:param light: Lightness value (0-1)
:param at_least: Set the lightness to at least this value (no change if greater)
:param at_most: Set the lightness to at most this value (no change if lesser)
:return: New color (original isn't modified)
"""
# Convert to HSL
h, l, s = colorsys.rgb_to_hls(*[v / 255.0 for v in self])
# Modify light value
if at_least is None and at_most is None:
l = light
else:
if at_most:
l = min(l, light)
if at_least:
l = max(l, light)
# Convert back to RGB
return RGB(*[round(v * 255.0) for v in colorsys.hls_to_rgb(h, l, s)]) | PypiClean |
/hypergraphanalysistoolbox-1.0.20.tar.gz/hypergraphanalysistoolbox-1.0.20/HAT/HAT.py | import itertools
from itertools import combinations
import numpy as np
import scipy as sp
import scipy.io
import networkx as nx
import os
import HAT.multilinalg
import HAT
def directSimilarity(HG1, HG2, measure='Hamming'):
"""This function computes the direct similarity between two uniform hypergraphs.
:param HG1: Hypergraph 1
:type HG1: *Hypergraph*
:param HG2: Hypergraph 2
:type HG2: *Hypergraph*
:param measure: This specifies which similarity measure to apply. It defaults to
``Hamming``, and ``Spectral-S`` and ``Centrality`` are available as other options
as well.
:type measure: str, optional
:return: Hypergraph similarity
:rtype: *float*
References
----------
.. [1] Amit Surana, Can Chen, and Indika Rajapakse. Hypergraph similarity measures. IEEE Transactions on Network Science and Engineering, pages 1-16, 2022.
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 2, 2022
if measure == 'Hamming':
return HAT.multilinalg.HammingSimilarity(HG1.laplacianTensor(), HG2.laplacianTensor())
elif measure == 'Spectral-S':
return HAT.multilinalg.SpectralHSimilarity(HG1.laplacianTensor(), HG2.laplacianTensor())
# elif measure == 'Spectral-H': # Not implemented until we get a H-eigenvalue solver
elif measure == 'Centrality':
C1 = HG1.centrality()[0]
C2 = HG2.centrality()[0]
C1 /= np.linalg.norm(C1)
C2 /= np.linalg.norm(C2)
m = np.linalg.norm(C1 - C2) / len(C1)
return m
def indirectSimilarity(G1, G2, measure='Hamming', eps=10e-3):
"""This function computes the indirect similarity between two hypergraphs.
:param G1: Hypergraph 1 expansion
:type G1: *nx.Graph* or *ndarray*
:param G2: Hypergraph 2 expansion
:type G2: *nx.Graph* or *ndarray*
:param measure: This specifies which similarity measure to apply. It defaults to ``Hamming`` , and
``Jaccard`` , ``deltaCon`` , ``Spectral`` , and ``Centrality`` are provided as well. When ``Centrality``
is used as the similarity measure, ``G1`` and ``G2`` should *ndarray* s of centrality values; Otherwise
``G1`` and ``G2`` are *nx.Graph*s or *ndarray** s as adjacency matrices.
:type measure: *str*, optional
:param eps: a hyperparameter required for deltaCon similarity, defaults to 10e-3
:type eps: *float*, optional
:return: similarity measure
:rtype: *float*
References
==========
.. [1] Amit Surana, Can Chen, and Indika Rajapakse. Hypergraph similarity measures. IEEE Transactions on Network Science and Engineering, pages 1-16, 2022.
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 2, 2022
if isinstance(G1, nx.classes.graph.Graph):
M1 = nx.adjacency_matrix(G1).todense()
M1 = np.array(M1)
else:
M1 = G1
if isinstance(G2, nx.classes.graph.Graph):
M2 = nx.adjacency_matrix(G2).todense()
M2 = np.array(M2)
else:
M2 = G2
if measure == 'Hamming':
n = len(M1)
s = np.linalg.norm(M1 - M2) / (n*(n-1))
elif measure == 'Jaccard':
M1 = np.matrix.flatten(M1)
M2 = np.matrix.flatten(M2)
M = np.array([M1, M2])
Mmin = np.min(M, axis=0)
Mmax = np.max(M, axis=0)
s = 1 - sum(Mmin)/sum(Mmax)
elif measure == 'deltaCon':
D1 = np.diag(sum(M1))
D2 = np.diag(sum(M2))
I = np.eye(len(D1))
S1 = sp.linalg.inv(I - (eps**2) * D1 - (eps*M1))
S2 = sp.linalg.inv(I - (eps**2) * D2 - (eps*M2))
M = np.square(np.sqrt(S1) - np.sqrt(S2))
s = (1 / (len(M1)**2)) * np.sqrt(np.sum(np.sum(M)))
elif measure == 'Spectral':
v1, _ = np.linalg.eigh(M1)
v2, _ = np.linalg.eigh(M2)
s = (1 / len(v1)) * np.linalg.norm(v1-v2)
elif measure == 'Centrality':
# In this case M1 and M2 are centrality vectors
s = (1 / len(M1)) * np.linalg.norm(M1 - M2)
return s
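# Illustrative sketch (documentation only): Hamming similarity between two
# small pairwise expansions, passed in directly as networkx graphs.
def _exampleIndirectSimilarity():
    G1 = nx.path_graph(5)
    G2 = nx.cycle_graph(5)
    return indirectSimilarity(G1, G2, measure='Hamming')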
def multicorrelations(D, order, mtype='Drezner', idxs=None):
"""This function computes the multicorrelation among pairwise or 2D data.
:param D: 2D or pairwise data
:type D: *ndarray*
:param order: order of the multi-way interactions
:type order: *int*
:param mtype: This specifies which multicorrelation measure to use. It defaults to
``Drezner`` [1], but ``Wang`` [2] and ``Taylor`` [3] are options as well.
:type mtype: *str*
:param idxs: specify which indices of ``D`` to compute multicorrelations of. The default is ``None``, in which case
all combinations of ``order`` indices are computed.
:type idxs: *ndarray*, optional
:return: A vector of the multicorrelation scores computed and a vector of the column indices of
``D`` used to compute each multicorrelation.
:rtype: *(ndarray, ndarray)*
References
----------
.. [1] Zvi Drezner. Multirelation—a correlation among more than two variables. Computational Statistics & Data Analysis, 19(3):283–292, 1995.
.. [2] Jianji Wang and Nanning Zheng. Measures of correlation for multiple variables. arXiv preprint arXiv:1401.4827, 2014.
.. [3] Benjamin M Taylor. A multi-way correlation coefficient. arXiv preprint arXiv:2003.02561, 2020.
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 2, 2022
R = np.corrcoef(D.T)
if idxs == None:
idxs = np.array(list(itertools.combinations(range(len(R)), order)))
M = np.zeros(len(idxs),)
if mtype == 'Taylor':
taylorCoef = 1/np.sqrt(order)
for i in range(len(idxs)):
minor = R[np.ix_(idxs[i,:], idxs[i,:])]
if mtype == 'Drezner':
w, _ = np.linalg.eigh(minor)
M[i] = 1 - w[0]
elif mtype == 'Wang':
M[i] = pow((1 - np.linalg.det(minor)), 0.5)
elif mtype == 'Taylor':
w, _ = np.linalg.eigh(minor)
M[i] = taylorCoef * np.std(w)
return M, idxs
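# Illustrative sketch (documentation only): Drezner multicorrelation of every
# triple of columns in a random 100 x 4 data matrix.
def _exampleMulticorrelations():
    D = np.random.rand(100, 4)
    M, idxs = multicorrelations(D, order=3, mtype='Drezner')
    return M, idxs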
def uniformErdosRenyi(v, e, k):
"""This function generates a uniform, random hypergraph.
:param v: number of vertices
:type v: *int*
:param e: number of edges
:type e: *int*
:param k: order of hypergraph
:type k: *int*
:return: Hypergraph
:rtype: *Hypergraph*
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 2, 2022
IM = np.zeros((v,e))
for i in range(e):
idx = np.random.choice(v, size = k, replace = False)
IM[idx,i] = 1
return HAT.Hypergraph(IM)
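# Illustrative sketch (documentation only): a random 3-uniform hypergraph on
# 10 vertices with 5 hyperedges.
def _exampleUniformErdosRenyi():
    return uniformErdosRenyi(10, 5, 3)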
def load(dataset='Karate'):
"""This function loads built-in datasets. Currently only one dataset is available and we are working to expand this.
:param dataset: sets which dataset to load in, defaults to 'Karate'
:type dataset: str, optional
:return: incidence matrix or graph object
:rtype: *ndarray* or *nx.Graph*
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 2, 2022
current_path = os.path.dirname(os.path.realpath(__file__))
current_path += '/data/'
if dataset == 'Karate':
return nx.karate_club_graph()
elif dataset == 'ArnetMiner Citation':
mat = sp.io.loadmat(current_path + 'aminer_cocitation.mat')
S = mat['S']
return S
elif dataset == 'ArnetMiner Reference':
mat = sp.io.loadmat(current_path + 'aminer_coreference.mat')
S = mat['S']
return S
elif dataset == 'Citeseer Citation':
mat = sp.io.loadmat(current_path + 'citeseer_cocitation.mat')
S = mat['S']
return S
elif dataset == 'Cora Citation':
mat = sp.io.loadmat(current_path + 'cora_coreference.mat')
S = mat['S']
return S
elif dataset == 'DBLP':
mat = sp.io.loadmat(current_path + 'dblp.mat')
S = mat['S']
return S
def hyperedges2IM(edgeSet):
"""This function constructs an incidence matrix from an edge set.
:param edgeSet: a :math:`e \\times k` matrix where each row contains :math:`k` integers that are contained within the same hyperedge
:type edgeSet: *ndarray*
:return: a :math:`n \\times e` incidence matrix where each row of the edge set corresponds to a column of the incidence matrix. :math:`n` is the number of nodes contained in the edge set.
:rtype: *ndarray*
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 2, 2022
n = np.max(edgeSet)
e = len(edgeSet)
IM = np.zeros((n + 1, e))
for edge in range(e):
IM[edgeSet[edge, :], edge] = 1
return IM
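# Usage sketch (illustrative): an edge set of two triangles over vertices 0-3
# becomes a 4 x 2 incidence matrix with three ones per column.
#
# >>> edgeSet = np.array([[0, 1, 2], [1, 2, 3]])
# >>> IM = hyperedges2IM(edgeSet)
# >>> IM.shape
# (4, 2)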
def hyperedgeHomophily(H, HG=None, G=None, method='CN'):
"""This function computes the hyperedge homophily score according to the below methods. The homophily score is the average score based on
structural similarity of the vertices in hypredge `H` in the clique expanded graph `G`. This function is an interface from `HAT` to `networkx`
link prediction algorithms.
:param G: a pairwise hypergraph expansion
:type G: `networkx.Graph`
:param H: hyperedge containing individual vertices within the edge
:type H: `ndarray`
:param method: specifies which structural similarity method to use. This defaults to `CN` common neighbors.
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 6, 2022
pairwise = list(itertools.combinations(H, 2))
# Compute pairwise scores with networkx
if method == 'CN':
pairwiseScores = nx.common_neighbor_centrality(G, pairwise)
elif method == 'RA':
pairwiseScores = nx.resource_allocation_index(G, pairwise)
elif method == 'JC':
pairwiseScores = nx.jaccard_coefficient(G, pairwise)
elif method == 'AA':
pairwiseScores = nx.adamic_adar_index(G, pairwise)
elif method == 'PA':
pairwiseScores = nx.preferential_attachment(G, pairwise)
# networkx link predictors return an iterator of (u, v, score) tuples; keep the scores
pairwiseScores = [score for _, _, score in pairwiseScores]
# Compute average pairwise score
hyperedgeHomophily = sum(pairwiseScores) / len(pairwiseScores)
return hyperedgeHomophily
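# Usage sketch (illustrative): homophily of a candidate hyperedge in a pairwise
# graph, scored with the Jaccard coefficient. `HG` is not used by the methods
# above, so it is left at its default.
#
# >>> G = nx.karate_club_graph()
# >>> score = hyperedgeHomophily([0, 1, 2], G=G, method='JC')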
def edgeRemoval(HG, p, method='Random'):
"""This function randomly removes edges from a hypergraph. In [1], four primary reasons are given for data missing in pairwise networks:
1. random edge removal
2. right censoring
3. snowball effect
4. cold-ends
This method removes edges from a hypergraph according to the multi-way analogue of each of these.
References
----------
.. [1] Yan, Bowen, and Steve Gregory. "Finding missing edges and communities in incomplete networks." Journal of Physics A: Mathematical and Theoretical
44.49 (2011): 495102.
.. [2] Zhu, Yu-Xiao, et al. "Uncovering missing links with cold ends." Physica A: Statistical Mechanics and its Applications 391.22 (2012): 5769-5778.
"""
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 9, 2022
IM = HG.IM
n, e = IM.shape
if method == 'Random':
# Random edge removal
known, unknown = randomRemoval(HG, p)
elif method == 'RC':
# Right censoring
known, unknown = rightCensorRemoval(HG, p)
elif method == 'SB':
# Snowball Effect
known, unknown = snowBallRemoval(HG, p)
elif method == 'CE':
# Cold Ends
known, unknown = coldEndsRemoval(HG, p)
else:
raise ValueError('Enter a valid edge removal method')
# Collapse the extra axis introduced by np.where indexing (3D to 2D)
a, _, c = unknown.shape
unknown = np.reshape(unknown, (a, c))
a, _, c = known.shape
known = np.reshape(known, (a, c))
# Return hypergraph objects
K = HAT.Hypergraph(known)
U = HAT.Hypergraph(unknown)
return K, U
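# Usage sketch (illustrative): split a random 3-uniform hypergraph into a
# "known" part and an "unknown" part, removing roughly 30% of edges at random.
# Together the two parts account for every original hyperedge.
#
# >>> HG = uniformErdosRenyi(20, 40, 3)
# >>> K, U = edgeRemoval(HG, 0.3, method='Random')
# >>> K.IM.shape[1] + U.IM.shape[1]
# 40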
def randomRemoval(HG, p):
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 9, 2022
IM = HG.IM
n, e = IM.shape
knownIdxs = np.random.choice([0, 1], size=(e,), p=[p, 1-p])
known = IM[:, np.where(knownIdxs == 1)]
unknown = IM[:, np.where(knownIdxs == 0)]
return known, unknown
def rightCensorRemoval(HG, p):
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 9, 2022
IM = HG.IM.copy()
n, e = IM.shape
# Determine number of known edges in remaining graph
numknownEdges = sum(np.random.choice([0, 1], size=(e,), p=[p, 1-p]))
# Vertex degree
vxDegree = np.sum(IM, axis=1)
# Iteratively remove edges
removedEdges = 0
knownIdxs = np.ones(e,)
while sum(knownIdxs) > numknownEdges:
# Select vertex with maximum degree
vx = np.argmax(vxDegree)
# Select edges vx participates in
vxEdges = np.where(IM[vx,:] != 0)[0]
# Select single edge to remove
removeEdge = np.random.choice(vxEdges)
# Remove edge from incidence matrix
IM[:, removeEdge] = 0
# Remove it from list of known edges
knownIdxs[removeEdge] = 0
# Decrease degree of vx
vxDegree[vx] -= 1
known = HG.IM[:, np.where(knownIdxs == 1)]
unknown = HG.IM[:, np.where(knownIdxs == 0)]
return known, unknown
def coldEndsRemoval(HG, p):
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 9, 2022
IM = HG.IM.copy()
n, e = IM.shape
# Determine number of known edges in remaining graph
numknownEdges = sum(np.random.choice([0, 1], size=(e,), p=[p, 1-p]))
# Vertex degree
vxDegree = np.sum(IM, axis=1)
# Iteratively remove edges
removedEdges = 0
knownIdxs = np.ones(e,)
while sum(knownIdxs) > numknownEdges:
# Select vertex with maximum degree
vx = np.argmin(vxDegree)
if vxDegree[vx] == 1:
# Do not remove the last edge of a degree-one vertex; its degree is
# set to the maximum so it is skipped and the process continues
vxDegree[vx] = max(vxDegree)
continue
# Select edges vx participates in
vxEdges = np.where(IM[vx,:] != 0)[0]
# Select single edge to remove
removeEdge = np.random.choice(vxEdges)
# Remove edge from incidence matrix
IM[:, removeEdge] = 0
# Remove it from list of known edges
knownIdxs[removeEdge] = 0
# Decrease degree of vx
vxDegree[vx] -= 1
known = HG.IM[:, np.where(knownIdxs == 1)]
unknown = HG.IM[:, np.where(knownIdxs == 0)]
return known, unknown
def snowBallRemoval(HG, p):
# Auth: Joshua Pickard
# jpic@umich.edu
# Date: Dec 9, 2022
n, e = HG.IM.shape
source = np.random.choice(n)
# Clique expand HG
C = HG.cliqueGraph()
# Perform BFS on C. BFSedgeList is an ordered list of tuples containing each edge discovered in BFS
BFSedgeList = list(nx.bfs_tree(C, source).edges())
# List of tuples to list preserving order nodes are visited
orderedVxc = [item for e in BFSedgeList for item in e]
_, idx = np.unique(orderedVxc, return_index=True)
# Ordered list vertices are visited
vxOrder = np.array(orderedVxc)[np.sort(idx)]
# Set which vertices will remain known in the hypergraph
numKnownVxc = sum(np.random.choice([0, 1], size=(n,), p=[p, 1-p]))
knownVxc = vxOrder[0:numKnownVxc]
# Only include edges where every vertex in the edge is known
knownIdxs = np.ones(e,)
for edge in range(e):
edgeVxc = np.where(HG.IM[:, edge] != 0)[0]
recognizedNodes = np.intersect1d(edgeVxc, knownVxc)
if len(recognizedNodes) != len(edgeVxc):
knownIdxs[edge] = 0
known = HG.IM[:, np.where(knownIdxs == 1)]
unknown = HG.IM[:, np.where(knownIdxs == 0)]
return known, unknown
# File: EbookLib-re-0.18.0/ebooklib/epub.py
import zipfile
import six
import logging
import uuid
import warnings
import posixpath as zip_path
import os.path
from collections import OrderedDict
try:
from urllib.parse import unquote
except ImportError:
from urllib import unquote
from lxml import etree
import ebooklib
from ebooklib.utils import parse_string, parse_html_string, guess_type, get_pages_for_items
# Version of EPUB library
VERSION = (0, 17, 1)
NAMESPACES = {'XML': 'http://www.w3.org/XML/1998/namespace',
'EPUB': 'http://www.idpf.org/2007/ops',
'DAISY': 'http://www.daisy.org/z3986/2005/ncx/',
'OPF': 'http://www.idpf.org/2007/opf',
'CONTAINERNS': 'urn:oasis:names:tc:opendocument:xmlns:container',
'DC': 'http://purl.org/dc/elements/1.1/',
'XHTML': 'http://www.w3.org/1999/xhtml'}
# XML Templates
CONTAINER_PATH = 'META-INF/container.xml'
CONTAINER_XML = '''<?xml version="1.0" encoding="utf-8"?>
<container xmlns="urn:oasis:names:tc:opendocument:xmlns:container" version="1.0">
<rootfiles>
<rootfile media-type="application/oebps-package+xml" full-path="%(folder_name)s/content.opf"/>
</rootfiles>
</container>
'''
NCX_XML = six.b('''<!DOCTYPE ncx PUBLIC "-//NISO//DTD ncx 2005-1//EN" "http://www.daisy.org/z3986/2005/ncx-2005-1.dtd">
<ncx xmlns="http://www.daisy.org/z3986/2005/ncx/" version="2005-1" />''')
NAV_XML = six.b('''<?xml version="1.0" encoding="utf-8"?><!DOCTYPE html><html xmlns="http://www.w3.org/1999/xhtml" xmlns:epub="http://www.idpf.org/2007/ops"/>''')
CHAPTER_XML = six.b('''<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE html><html xmlns="http://www.w3.org/1999/xhtml" xmlns:epub="http://www.idpf.org/2007/ops" epub:prefix="z3998: http://www.daisy.org/z3998/2012/vocab/structure/#"></html>''')
COVER_XML = six.b('''<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:epub="http://www.idpf.org/2007/ops" lang="en" xml:lang="en">
<head>
<style>
body { margin: 0em; padding: 0em; }
img { max-width: 100%; max-height: 100%; }
</style>
</head>
<body>
<img src="" alt="" />
</body>
</html>''')
IMAGE_MEDIA_TYPES = ['image/jpeg', 'image/jpg', 'image/png', 'image/svg+xml']
# TOC and navigation elements
class Section(object):
def __init__(self, title, href=''):
self.title = title
self.href = href
class Link(object):
def __init__(self, href, title, uid=None):
self.href = href
self.title = title
self.uid = uid
# Exceptions
class EpubException(Exception):
def __init__(self, code, msg):
self.code = code
self.msg = msg
def __str__(self):
return repr(self.msg)
# Items
class EpubItem(object):
"""
Base class for the items in a book.
"""
def __init__(self, uid=None, file_name='', media_type='', content=six.b(''), manifest=True):
"""
:Args:
- uid: Unique identifier for this item (optional)
- file_name: File name for this item (optional)
- media_type: Media type for this item (optional)
- content: Content for this item (optional)
- manifest: Manifest for this item (optional)
"""
self.id = uid
self.file_name = file_name
self.media_type = media_type
self.content = content
self.is_linear = True
self.manifest = manifest
self.book = None
def get_id(self):
"""
Returns unique identifier for this item.
:Returns:
Returns uid number as string.
"""
return self.id
def get_name(self):
"""
Returns the name of this item. By default this is the file name, but it does not have to be.
:Returns:
Returns file name for this item.
"""
return self.file_name
def get_type(self):
"""
Guess type according to the file extension. Might not be the best way to do it, but it works for now.
Items can be of type:
- ITEM_UNKNOWN = 0
- ITEM_IMAGE = 1
- ITEM_STYLE = 2
- ITEM_SCRIPT = 3
- ITEM_NAVIGATION = 4
- ITEM_VECTOR = 5
- ITEM_FONT = 6
- ITEM_VIDEO = 7
- ITEM_AUDIO = 8
- ITEM_DOCUMENT = 9
- ITEM_COVER = 10
We map type according to the extensions which are defined in ebooklib.EXTENSIONS.
:Returns:
Returns type of the item as number.
"""
_, ext = zip_path.splitext(self.get_name())
ext = ext.lower()
for uid, ext_list in six.iteritems(ebooklib.EXTENSIONS):
if ext in ext_list:
return uid
return ebooklib.ITEM_UNKNOWN
def get_content(self, default=six.b('')):
"""
Returns content of the item. Content should be of type 'str' (Python 2) or 'bytes' (Python 3)
:Args:
- default: Default value for the content if it is not already defined.
:Returns:
Returns content of the item.
"""
return self.content or default
def set_content(self, content):
"""
Sets content value for this item.
:Args:
- content: Content value
"""
self.content = content
def __str__(self):
return '<EpubItem:%s>' % self.id
class EpubNcx(EpubItem):
"Represents Navigation Control File (NCX) in the EPUB."
def __init__(self, uid='ncx', file_name='toc.ncx'):
super(EpubNcx, self).__init__(uid=uid, file_name=file_name, media_type='application/x-dtbncx+xml')
def __str__(self):
return '<EpubNcx:%s>' % self.id
class EpubCover(EpubItem):
"""
Represents Cover image in the EPUB file.
"""
def __init__(self, uid='cover-img', file_name=''):
super(EpubCover, self).__init__(uid=uid, file_name=file_name)
def get_type(self):
return ebooklib.ITEM_COVER
def __str__(self):
return '<EpubCover:%s:%s>' % (self.id, self.file_name)
class EpubHtml(EpubItem):
"""
Represents HTML document in the EPUB file.
"""
_template_name = 'chapter'
def __init__(self, uid=None, file_name='', media_type='', content=None, title='',
lang=None, direction=None, media_overlay=None, media_duration=None):
super(EpubHtml, self).__init__(uid, file_name, media_type, content)
self.title = title
self.lang = lang
self.direction = direction
self.media_overlay = media_overlay
self.media_duration = media_duration
self.links = []
self.properties = []
self.pages = []
def is_chapter(self):
"""
Returns if this document is chapter or not.
:Returns:
Returns book value.
"""
return True
def get_type(self):
"""
Always returns ebooklib.ITEM_DOCUMENT as type of this document.
:Returns:
Always returns ebooklib.ITEM_DOCUMENT
"""
return ebooklib.ITEM_DOCUMENT
def set_language(self, lang):
"""
Sets language for this book item. By default it will use language of the book but it
can be overwritten with this call.
"""
self.lang = lang
def get_language(self):
"""
Get language code for this book item. Language of the book item can be different from
the language settings defined globally for the book.
:Returns:
As string returns language code.
"""
return self.lang
def add_link(self, **kwgs):
"""
Add additional link to the document. Links will be embedded only inside this document.
>>> add_link(href='styles.css', rel='stylesheet', type='text/css')
"""
self.links.append(kwgs)
if kwgs.get('type') == 'text/javascript':
if 'scripted' not in self.properties:
self.properties.append('scripted')
def get_links(self):
"""
Returns list of additional links defined for this document.
:Returns:
Returns the links as a generator.
"""
return (link for link in self.links)
def get_links_of_type(self, link_type):
"""
Returns list of additional links of specific type.
:Returns:
Returns the matching links as a generator.
"""
return (link for link in self.links if link.get('type', '') == link_type)
def add_item(self, item):
"""
Add other item to this document. It will create additional links according to the item type.
:Args:
- item: item we want to add defined as instance of EpubItem
"""
if item.get_type() == ebooklib.ITEM_STYLE:
self.add_link(href=item.get_name(), rel='stylesheet', type='text/css')
if item.get_type() == ebooklib.ITEM_SCRIPT:
self.add_link(src=item.get_name(), type='text/javascript')
def get_body_content(self):
"""
Returns content of BODY element for this HTML document. Content will be of type 'str' (Python 2)
or 'bytes' (Python 3).
:Returns:
Returns content of this document.
"""
try:
html_tree = parse_html_string(self.content)
except Exception:
return ''
html_root = html_tree.getroottree()
if len(html_root.find('body')) != 0:
body = html_tree.find('body')
tree_str = etree.tostring(body, pretty_print=True, encoding='utf-8', xml_declaration=False)
# Strip the wrapping <body>...</body> tags from the serialized tree
if tree_str.startswith(six.b('<body>')):
n = tree_str.rindex(six.b('</body>'))
return tree_str[6:n]
return tree_str
return ''
def get_content(self, default=None):
"""
Returns content for this document as HTML string. Content will be of type 'str' (Python 2)
or 'bytes' (Python 3).
:Args:
- default: Default value for the content if it is not defined.
:Returns:
Returns content of this document.
"""
tree = parse_string(self.book.get_template(self._template_name))
tree_root = tree.getroot()
tree_root.set('lang', self.lang or self.book.language)
tree_root.attrib['{%s}lang' % NAMESPACES['XML']] = self.lang or self.book.language
# add to the head also
# <meta charset="utf-8" />
try:
html_tree = parse_html_string(self.content)
except Exception:
return ''
html_root = html_tree.getroottree()
# create and populate head
_head = etree.SubElement(tree_root, 'head')
if self.title != '':
_title = etree.SubElement(_head, 'title')
_title.text = self.title
for lnk in self.links:
if lnk.get('type') == 'text/javascript':
_lnk = etree.SubElement(_head, 'script', lnk)
# force <script></script>
_lnk.text = ''
else:
_lnk = etree.SubElement(_head, 'link', lnk)
# this should not be like this
# head = html_root.find('head')
# if head is not None:
# for i in head.getchildren():
# if i.tag == 'title' and self.title != '':
# continue
# _head.append(i)
# create and populate body
_body = etree.SubElement(tree_root, 'body')
if self.direction:
_body.set('dir', self.direction)
tree_root.set('dir', self.direction)
body = html_tree.find('body')
if body is not None:
for i in body.getchildren():
_body.append(i)
tree_str = etree.tostring(tree, pretty_print=True, encoding='utf-8', xml_declaration=True)
return tree_str
def __str__(self):
return '<EpubHtml:%s:%s>' % (self.id, self.file_name)
class EpubCoverHtml(EpubHtml):
"""
Represents Cover page in the EPUB file.
"""
def __init__(self, uid='cover', file_name='cover.xhtml', image_name='', title='Cover'):
super(EpubCoverHtml, self).__init__(uid=uid, file_name=file_name, title=title)
self.image_name = image_name
self.is_linear = False
def is_chapter(self):
"""
Returns if this document is chapter or not.
:Returns:
Returns book value.
"""
return False
def get_content(self):
"""
Returns content for cover page as HTML string. Content will be of type 'str' (Python 2) or 'bytes' (Python 3).
:Returns:
Returns content of this document.
"""
self.content = self.book.get_template('cover')
tree = parse_string(super(EpubCoverHtml, self).get_content())
tree_root = tree.getroot()
images = tree_root.xpath('//xhtml:img', namespaces={'xhtml': NAMESPACES['XHTML']})
images[0].set('src', self.image_name)
images[0].set('alt', self.title)
tree_str = etree.tostring(tree, pretty_print=True, encoding='utf-8', xml_declaration=True)
return tree_str
def __str__(self):
return '<EpubCoverHtml:%s:%s>' % (self.id, self.file_name)
class EpubNav(EpubHtml):
"""
Represents Navigation Document in the EPUB file.
"""
def __init__(self, uid='nav', file_name='nav.xhtml', media_type='application/xhtml+xml', title=''):
super(EpubNav, self).__init__(uid=uid, file_name=file_name, media_type=media_type, title=title)
def is_chapter(self):
"""
Returns if this document is chapter or not.
:Returns:
Returns book value.
"""
return False
def __str__(self):
return '<EpubNav:%s:%s>' % (self.id, self.file_name)
class EpubImage(EpubItem):
"""
Represents Image in the EPUB file.
"""
def __init__(self, *args, **kwargs):
super(EpubImage, self).__init__(*args, **kwargs)
def get_type(self):
return ebooklib.ITEM_IMAGE
def __str__(self):
return '<EpubImage:%s:%s>' % (self.id, self.file_name)
class EpubSMIL(EpubItem):
def __init__(self, uid=None, file_name='', content=None):
super(EpubSMIL, self).__init__(uid=uid, file_name=file_name, media_type='application/smil+xml', content=content)
def get_type(self):
return ebooklib.ITEM_SMIL
def __str__(self):
return '<EpubSMIL:%s:%s>' % (self.id, self.file_name)
# EpubBook
class EpubBook(object):
def __init__(self):
self.EPUB_VERSION = None
self.reset()
# we should have options here
def reset(self):
"Initialises all needed variables to default values"
self.metadata = {}
self.items = []
self.spine = []
self.guide = []
self.pages = []
self.toc = []
self.bindings = []
self.IDENTIFIER_ID = 'id'
self.FOLDER_NAME = 'EPUB'
self._id_html = 0
self._id_image = 0
self._id_static = 0
self.title = ''
self.language = 'en'
self.direction = None
self.templates = {
'ncx': NCX_XML,
'nav': NAV_XML,
'chapter': CHAPTER_XML,
'cover': COVER_XML
}
self.add_metadata('OPF', 'generator', '', {
'name': 'generator', 'content': 'Ebook-lib %s' % '.'.join([str(s) for s in VERSION])
})
# default to using a randomly-unique identifier if one is not specified manually
self.set_identifier(str(uuid.uuid4()))
# custom prefixes and namespaces to be set to the content.opf doc
self.prefixes = []
self.namespaces = {}
def set_identifier(self, uid):
"""
Sets unique id for this epub
:Args:
- uid: Value of unique identifier for this book
"""
self.uid = uid
self.set_unique_metadata('DC', 'identifier', self.uid, {'id': self.IDENTIFIER_ID})
def set_title(self, title):
"""
Set title. You can set multiple titles.
:Args:
- title: Title value
"""
self.title = title
self.add_metadata('DC', 'title', self.title)
def set_language(self, lang):
"""
Set language for this epub. You can set multiple languages. Specific items in the book can have
different language settings.
:Args:
- lang: Language code
"""
self.language = lang
self.add_metadata('DC', 'language', lang)
def set_direction(self, direction):
"""
:Args:
- direction: Options are "ltr", "rtl" and "default"
"""
self.direction = direction
def set_cover(self, file_name, content, create_page=True):
"""
Set cover and create cover document if needed.
:Args:
- file_name: file name of the cover page
- content: Content for the cover image
- create_page: Should cover page be defined. Defined as bool value (optional). Default value is True.
"""
# as it is now, it can only be called once
c0 = EpubCover(file_name=file_name)
c0.content = content
self.add_item(c0)
if create_page:
c1 = EpubCoverHtml(image_name=file_name)
self.add_item(c1)
self.add_metadata(None, 'meta', '', OrderedDict([('name', 'cover'), ('content', 'cover-img')]))
def add_author(self, author, file_as=None, role=None, uid='creator'):
"Add author for this document"
self.add_metadata('DC', 'creator', author, {'id': uid})
if file_as:
self.add_metadata(None, 'meta', file_as, {'refines': '#' + uid,
'property': 'file-as',
'scheme': 'marc:relators'})
if role:
self.add_metadata(None, 'meta', role, {'refines': '#' + uid,
'property': 'role',
'scheme': 'marc:relators'})
def add_metadata(self, namespace, name, value, others=None):
"Add metadata"
if namespace in NAMESPACES:
namespace = NAMESPACES[namespace]
if namespace not in self.metadata:
self.metadata[namespace] = {}
if name not in self.metadata[namespace]:
self.metadata[namespace][name] = []
self.metadata[namespace][name].append((value, others))
def get_metadata(self, namespace, name):
"Retrieve metadata"
if namespace in NAMESPACES:
namespace = NAMESPACES[namespace]
return self.metadata[namespace].get(name, [])
def set_unique_metadata(self, namespace, name, value, others=None):
"Add metadata if metadata with this identifier does not already exist, otherwise update existing metadata."
if namespace in NAMESPACES:
namespace = NAMESPACES[namespace]
if namespace in self.metadata and name in self.metadata[namespace]:
self.metadata[namespace][name] = [(value, others)]
else:
self.add_metadata(namespace, name, value, others)
def add_item(self, item):
"""
Add additional item to the book. If not defined, media type and chapter id will be defined
for the item.
:Args:
- item: Item instance
"""
if item.media_type == '':
(has_guessed, media_type) = guess_type(item.get_name().lower())
if has_guessed:
if media_type is not None:
item.media_type = media_type
else:
item.media_type = has_guessed
else:
item.media_type = 'application/octet-stream'
if not item.get_id():
# make chapter_, image_ and static_ configurable
if isinstance(item, EpubHtml):
item.id = 'chapter_%d' % self._id_html
self._id_html += 1
# If there's a page list, append it to the book's page list
self.pages += item.pages
elif isinstance(item, EpubImage):
item.id = 'image_%d' % self._id_image
self._id_image += 1
else:
item.id = 'static_%d' % self._id_static
self._id_static += 1
item.book = self
self.items.append(item)
return item
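# Illustrative note: items added without an explicit uid receive sequential
# ids as implemented above, e.g.
# >>> book = EpubBook()
# >>> book.add_item(EpubImage(file_name='img/cover.png')).id
# 'image_0'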
def get_item_with_id(self, uid):
"""
Returns item for defined UID.
>>> book.get_item_with_id('image_001')
:Args:
- uid: UID for the item
:Returns:
Returns item object. Returns None if nothing was found.
"""
for item in self.get_items():
if item.id == uid:
return item
return None
def get_item_with_href(self, href):
"""
Returns item for defined HREF.
>>> book.get_item_with_href('EPUB/document.xhtml')
:Args:
- href: HREF for the item we are searching for
:Returns:
Returns item object. Returns None if nothing was found.
"""
for item in self.get_items():
if item.get_name() == href:
return item
return None
def get_items(self):
"""
Returns all items attached to this book.
:Returns:
Returns all items as tuple.
"""
return (item for item in self.items)
def get_items_of_type(self, item_type):
"""
Returns all items of specified type.
>>> book.get_items_of_type(epub.ITEM_IMAGE)
:Args:
- item_type: Type for items we are searching for
:Returns:
Returns found items as tuple.
"""
return (item for item in self.items if item.get_type() == item_type)
def get_items_of_media_type(self, media_type):
"""
Returns all items of specified media type.
:Args:
- media_type: Media type for items we are searching for
:Returns:
Returns found items as tuple.
"""
return (item for item in self.items if item.media_type == media_type)
def set_template(self, name, value):
"""
Defines templates which are used to generate certain types of pages. When defining new value for the template
we have to use content of type 'str' (Python 2) or 'bytes' (Python 3).
At the moment we use these templates:
- ncx
- nav
- chapter
- cover
:Args:
- name: Name for the template
- value: Content for the template
"""
self.templates[name] = value
def get_template(self, name):
"""
Returns value for the template.
:Args:
- name: template name
:Returns:
Value of the template.
"""
return self.templates.get(name)
def add_prefix(self, name, uri):
"""
Appends custom prefix to be added to the content.opf document
>>> epub_book.add_prefix('bkterms', 'http://booktype.org/')
:Args:
- name: namespace name
- uri: URI for the namespace
"""
self.prefixes.append('%s: %s' % (name, uri))
class EpubWriter(object):
DEFAULT_OPTIONS = {
'epub2_guide': True,
'epub3_landmark': True,
'epub3_pages': True,
'landmark_title': 'Guide',
'pages_title': 'Pages',
'spine_direction': True,
'package_direction': False,
'play_order': {
'enabled': False,
'start_from': 1
}
}
def __init__(self, name, book, options=None):
self.file_name = name
self.book = book
self.options = dict(self.DEFAULT_OPTIONS)
if options:
self.options.update(options)
self._init_play_order()
def _init_play_order(self):
self._play_order = {
'enabled': False,
'start_from': 1
}
try:
self._play_order['enabled'] = self.options['play_order']['enabled']
self._play_order['start_from'] = self.options['play_order']['start_from']
except KeyError:
pass
def process(self):
# should cache this html parsing so we don't do it for every plugin
for plg in self.options.get('plugins', []):
if hasattr(plg, 'before_write'):
plg.before_write(self.book)
for item in self.book.get_items():
if isinstance(item, EpubHtml):
for plg in self.options.get('plugins', []):
if hasattr(plg, 'html_before_write'):
plg.html_before_write(self.book, item)
def _write_container(self):
container_xml = CONTAINER_XML % {'folder_name': self.book.FOLDER_NAME}
self.out.writestr(CONTAINER_PATH, container_xml)
def _write_opf_metadata(self, root):
# This is really not needed
# problem is uppercase/lowercase
# for ns_name, values in six.iteritems(self.book.metadata):
# if ns_name:
# for n_id, ns_url in six.iteritems(NAMESPACES):
# if ns_name == ns_url:
# nsmap[n_id.lower()] = NAMESPACES[n_id]
nsmap = {'dc': NAMESPACES['DC'], 'opf': NAMESPACES['OPF']}
nsmap.update(self.book.namespaces)
metadata = etree.SubElement(root, 'metadata', nsmap=nsmap)
el = etree.SubElement(metadata, 'meta', {'property': 'dcterms:modified'})
if 'mtime' in self.options:
mtime = self.options['mtime']
else:
import datetime
mtime = datetime.datetime.now()
el.text = mtime.strftime('%Y-%m-%dT%H:%M:%SZ')
for ns_name, values in six.iteritems(self.book.metadata):
if ns_name == NAMESPACES['OPF']:
for values in values.values():
for v in values:
if 'property' in v[1] and v[1]['property'] == 'dcterms:modified':
continue
try:
el = etree.SubElement(metadata, 'meta', v[1])
if v[0]:
el.text = v[0]
except ValueError:
logging.error('Could not create metadata.')
else:
for name, values in six.iteritems(values):
for v in values:
try:
if ns_name:
el = etree.SubElement(metadata, '{%s}%s' % (ns_name, name), v[1])
else:
el = etree.SubElement(metadata, '%s' % name, v[1])
el.text = v[0]
except ValueError:
logging.error('Could not create metadata "{}".'.format(name))
def _write_opf_manifest(self, root):
manifest = etree.SubElement(root, 'manifest')
_ncx_id = None
# mathml, scripted, svg, remote-resources, and switch
# nav
# cover-image
for item in self.book.get_items():
if not item.manifest:
continue
if isinstance(item, EpubNav):
etree.SubElement(manifest, 'item', {'href': item.get_name(),
'id': item.id,
'media-type': item.media_type,
'properties': 'nav'})
elif isinstance(item, EpubNcx):
_ncx_id = item.id
etree.SubElement(manifest, 'item', {'href': item.file_name,
'id': item.id,
'media-type': item.media_type})
elif isinstance(item, EpubCover):
etree.SubElement(manifest, 'item', {'href': item.file_name,
'id': item.id,
'media-type': item.media_type,
'properties': 'cover-image'})
else:
opts = {'href': item.file_name,
'id': item.id,
'media-type': item.media_type}
if hasattr(item, 'properties') and len(item.properties) > 0:
opts['properties'] = ' '.join(item.properties)
if hasattr(item, 'media_overlay') and item.media_overlay is not None:
opts['media-overlay'] = item.media_overlay
if hasattr(item, 'media_duration') and item.media_duration is not None:
opts['duration'] = item.media_duration
etree.SubElement(manifest, 'item', opts)
return _ncx_id
def _write_opf_spine(self, root, ncx_id):
spine_attributes = {'toc': ncx_id or 'ncx'}
if self.book.direction and self.options['spine_direction']:
spine_attributes['page-progression-direction'] = self.book.direction
spine = etree.SubElement(root, 'spine', spine_attributes)
for _item in self.book.spine:
# this is for now
# later we should be able to fetch things from tuple
is_linear = True
if isinstance(_item, tuple):
item = _item[0]
if len(_item) > 1:
if _item[1] == 'no':
is_linear = False
else:
item = _item
if isinstance(item, EpubHtml):
opts = {'idref': item.get_id()}
if not item.is_linear or not is_linear:
opts['linear'] = 'no'
elif isinstance(item, EpubItem):
opts = {'idref': item.get_id()}
if not item.is_linear or not is_linear:
opts['linear'] = 'no'
else:
opts = {'idref': item}
try:
itm = self.book.get_item_with_id(item)
if not itm.is_linear or not is_linear:
opts['linear'] = 'no'
except Exception:
pass
etree.SubElement(spine, 'itemref', opts)
def _write_opf_guide(self, root):
# - http://www.idpf.org/epub/20/spec/OPF_2.0.1_draft.htm#Section2.6
if len(self.book.guide) > 0 and self.options.get('epub2_guide'):
guide = etree.SubElement(root, 'guide', {})
for item in self.book.guide:
if 'item' in item:
chap = item.get('item')
if chap:
_href = chap.file_name
_title = chap.title
else:
_href = item.get('href', '')
_title = item.get('title', '')
if _title is None:
_title = ''
ref = etree.SubElement(guide, 'reference', {'type': item.get('type', ''),
'title': _title,
'href': _href})
def _write_opf_bindings(self, root):
if len(self.book.bindings) > 0:
bindings = etree.SubElement(root, 'bindings', {})
for item in self.book.bindings:
etree.SubElement(bindings, 'mediaType', item)
def _write_opf_file(self, root):
tree_str = etree.tostring(root, pretty_print=True, encoding='utf-8', xml_declaration=True)
self.out.writestr('%s/content.opf' % self.book.FOLDER_NAME, tree_str)
def _write_opf(self):
package_attributes = {'xmlns': NAMESPACES['OPF'],
'unique-identifier': self.book.IDENTIFIER_ID,
'version': '3.0'}
if self.book.direction and self.options['package_direction']:
package_attributes['dir'] = self.book.direction
root = etree.Element('package', package_attributes)
prefixes = ['rendition: http://www.idpf.org/vocab/rendition/#'] + self.book.prefixes
root.attrib['prefix'] = ' '.join(prefixes)
# METADATA
self._write_opf_metadata(root)
# MANIFEST
_ncx_id = self._write_opf_manifest(root)
# SPINE
self._write_opf_spine(root, _ncx_id)
# GUIDE
self._write_opf_guide(root)
# BINDINGS
self._write_opf_bindings(root)
# WRITE FILE
self._write_opf_file(root)
def _get_nav(self, item):
# just a basic navigation for now
nav_xml = parse_string(self.book.get_template('nav'))
root = nav_xml.getroot()
root.set('lang', self.book.language)
root.attrib['{%s}lang' % NAMESPACES['XML']] = self.book.language
nav_dir_name = os.path.dirname(item.file_name)
head = etree.SubElement(root, 'head')
title = etree.SubElement(head, 'title')
title.text = item.title or self.book.title
# for now this just handles css files and ignores others
for _link in item.links:
_lnk = etree.SubElement(head, 'link', {
'href': _link.get('href', ''), 'rel': 'stylesheet', 'type': 'text/css'
})
body = etree.SubElement(root, 'body')
nav = etree.SubElement(body, 'nav', {
'{%s}type' % NAMESPACES['EPUB']: 'toc',
'id': 'id',
'role': 'doc-toc',
})
content_title = etree.SubElement(nav, 'h2')
content_title.text = item.title or self.book.title
def _create_section(itm, items):
ol = etree.SubElement(itm, 'ol')
for item in items:
if isinstance(item, tuple) or isinstance(item, list):
li = etree.SubElement(ol, 'li')
if isinstance(item[0], EpubHtml):
a = etree.SubElement(li, 'a', {'href': os.path.relpath(item[0].file_name, nav_dir_name)})
elif isinstance(item[0], Section) and item[0].href != '':
a = etree.SubElement(li, 'a', {'href': os.path.relpath(item[0].href, nav_dir_name)})
elif isinstance(item[0], Link):
a = etree.SubElement(li, 'a', {'href': os.path.relpath(item[0].href, nav_dir_name)})
else:
a = etree.SubElement(li, 'span')
a.text = item[0].title
_create_section(li, item[1])
elif isinstance(item, Link):
li = etree.SubElement(ol, 'li')
a = etree.SubElement(li, 'a', {'href': os.path.relpath(item.href, nav_dir_name)})
a.text = item.title
elif isinstance(item, EpubHtml):
li = etree.SubElement(ol, 'li')
a = etree.SubElement(li, 'a', {'href': os.path.relpath(item.file_name, nav_dir_name)})
a.text = item.title
_create_section(nav, self.book.toc)
# LANDMARKS / GUIDE
# - http://www.idpf.org/epub/30/spec/epub30-contentdocs.html#sec-xhtml-nav-def-types-landmarks
if len(self.book.guide) > 0 and self.options.get('epub3_landmark'):
# Epub2 guide types do not map completely to epub3 landmark types.
guide_to_landscape_map = {
'notes': 'rearnotes',
'text': 'bodymatter'
}
guide_nav = etree.SubElement(body, 'nav', {'{%s}type' % NAMESPACES['EPUB']: 'landmarks'})
guide_content_title = etree.SubElement(guide_nav, 'h2')
guide_content_title.text = self.options.get('landmark_title', 'Guide')
guide_ol = etree.SubElement(guide_nav, 'ol')
for elem in self.book.guide:
li_item = etree.SubElement(guide_ol, 'li')
if 'item' in elem:
chap = elem.get('item', None)
if chap:
_href = chap.file_name
_title = chap.title
else:
_href = elem.get('href', '')
_title = elem.get('title', '')
guide_type = elem.get('type', '')
a_item = etree.SubElement(li_item, 'a', {
'{%s}type' % NAMESPACES['EPUB']: guide_to_landscape_map.get(guide_type, guide_type),
'href': os.path.relpath(_href, nav_dir_name)
})
a_item.text = _title
# PAGE-LIST
if self.options.get('epub3_pages'):
inserted_pages = get_pages_for_items([item for item in self.book.get_items_of_type(ebooklib.ITEM_DOCUMENT) \
if not isinstance(item, EpubNav)])
if len(inserted_pages) > 0:
pagelist_nav = etree.SubElement(
body,
'nav',
{
'{%s}type' % NAMESPACES['EPUB']: 'page-list',
'id': 'pages',
'hidden': 'hidden',
}
)
pagelist_content_title = etree.SubElement(pagelist_nav, 'h2')
pagelist_content_title.text = self.options.get(
'pages_title', 'Pages'
)
pages_ol = etree.SubElement(pagelist_nav, 'ol')
for filename, pageref, label in inserted_pages:
li_item = etree.SubElement(pages_ol, 'li')
_href = u'{}#{}'.format(filename, pageref)
_title = label
a_item = etree.SubElement(li_item, 'a', {
'href': os.path.relpath(_href, nav_dir_name),
})
a_item.text = _title
tree_str = etree.tostring(nav_xml, pretty_print=True, encoding='utf-8', xml_declaration=True)
return tree_str
def _get_ncx(self):
# we should also be able to set up the language for the NCX
ncx = parse_string(self.book.get_template('ncx'))
root = ncx.getroot()
head = etree.SubElement(root, 'head')
# get this id
uid = etree.SubElement(head, 'meta', {'content': self.book.uid, 'name': 'dtb:uid'})
uid = etree.SubElement(head, 'meta', {'content': '0', 'name': 'dtb:depth'})
uid = etree.SubElement(head, 'meta', {'content': '0', 'name': 'dtb:totalPageCount'})
uid = etree.SubElement(head, 'meta', {'content': '0', 'name': 'dtb:maxPageNumber'})
doc_title = etree.SubElement(root, 'docTitle')
title = etree.SubElement(doc_title, 'text')
title.text = self.book.title
# doc_author = etree.SubElement(root, 'docAuthor')
# author = etree.SubElement(doc_author, 'text')
# author.text = 'Name of the person'
# For now just make a very simple navMap
nav_map = etree.SubElement(root, 'navMap')
def _add_play_order(nav_point):
nav_point.set('playOrder', str(self._play_order['start_from']))
self._play_order['start_from'] += 1
def _create_section(itm, items, uid):
for item in items:
if isinstance(item, tuple) or isinstance(item, list):
section, subsection = item[0], item[1]
np = etree.SubElement(itm, 'navPoint', {
'id': section.get_id() if isinstance(section, EpubHtml) else 'sep_%d' % uid
})
if self._play_order['enabled']:
_add_play_order(np)
nl = etree.SubElement(np, 'navLabel')
nt = etree.SubElement(nl, 'text')
nt.text = section.title
# CAN NOT HAVE EMPTY SRC HERE
href = ''
if isinstance(section, EpubHtml):
href = section.file_name
elif isinstance(section, Section) and section.href != '':
href = section.href
elif isinstance(section, Link):
href = section.href
nc = etree.SubElement(np, 'content', {'src': href})
uid = _create_section(np, subsection, uid + 1)
elif isinstance(item, Link):
_parent = itm
_content = _parent.find('content')
if _content is not None:
if _content.get('src') == '':
_content.set('src', item.href)
np = etree.SubElement(itm, 'navPoint', {'id': item.uid})
if self._play_order['enabled']:
_add_play_order(np)
nl = etree.SubElement(np, 'navLabel')
nt = etree.SubElement(nl, 'text')
nt.text = item.title
nc = etree.SubElement(np, 'content', {'src': item.href})
elif isinstance(item, EpubHtml):
_parent = itm
_content = _parent.find('content')
if _content is not None:
if _content.get('src') == '':
_content.set('src', item.file_name)
np = etree.SubElement(itm, 'navPoint', {'id': item.get_id()})
if self._play_order['enabled']:
_add_play_order(np)
nl = etree.SubElement(np, 'navLabel')
nt = etree.SubElement(nl, 'text')
nt.text = item.title
nc = etree.SubElement(np, 'content', {'src': item.file_name})
return uid
_create_section(nav_map, self.book.toc, 0)
tree_str = etree.tostring(root, pretty_print=True, encoding='utf-8', xml_declaration=True)
return tree_str
def _write_items(self):
for item in self.book.get_items():
if isinstance(item, EpubNcx):
self.out.writestr('%s/%s' % (self.book.FOLDER_NAME, item.file_name), self._get_ncx())
elif isinstance(item, EpubNav):
self.out.writestr('%s/%s' % (self.book.FOLDER_NAME, item.file_name), self._get_nav(item))
elif item.manifest:
self.out.writestr('%s/%s' % (self.book.FOLDER_NAME, item.file_name), item.get_content())
else:
self.out.writestr('%s' % item.file_name, item.get_content())
def write(self):
# check for the option allowZip64
self.out = zipfile.ZipFile(self.file_name, 'w', zipfile.ZIP_DEFLATED)
self.out.writestr('mimetype', 'application/epub+zip', compress_type=zipfile.ZIP_STORED)
self._write_container()
self._write_opf()
self._write_items()
self.out.close()
class EpubReader(object):
DEFAULT_OPTIONS = {
'ignore_ncx': False
}
def __init__(self, epub_file_name, options=None):
self.file_name = epub_file_name
self.book = EpubBook()
self.zf = None
self.opf_file = ''
self.opf_dir = ''
self.options = dict(self.DEFAULT_OPTIONS)
if options:
self.options.update(options)
self._check_deprecated()
def _check_deprecated(self):
if not self.options.get('ignore_ncx'):
warnings.warn('In a future version the default value of ignore_ncx will change to True.')
def process(self):
# should cache this html parsing so we don't do it for every plugin
for plg in self.options.get('plugins', []):
if hasattr(plg, 'after_read'):
plg.after_read(self.book)
for item in self.book.get_items():
if isinstance(item, EpubHtml):
for plg in self.options.get('plugins', []):
if hasattr(plg, 'html_after_read'):
plg.html_after_read(self.book, item)
def load(self):
self._load()
return self.book
def read_file(self, name):
# Raises KeyError
name = zip_path.normpath(name)
return self.zf.read(name)
def _load_container(self):
meta_inf = self.read_file('META-INF/container.xml')
tree = parse_string(meta_inf)
for root_file in tree.findall('//xmlns:rootfile[@media-type]', namespaces={'xmlns': NAMESPACES['CONTAINERNS']}):
if root_file.get('media-type') == 'application/oebps-package+xml':
self.opf_file = root_file.get('full-path')
self.opf_dir = zip_path.dirname(self.opf_file)
def _load_metadata(self):
container_root = self.container.getroot()
# get epub version
self.book.version = container_root.get('version', None)
# get unique-identifier
if container_root.get('unique-identifier', None):
self.book.IDENTIFIER_ID = container_root.get('unique-identifier')
# get xml:lang
# get metadata
metadata = self.container.find('{%s}%s' % (NAMESPACES['OPF'], 'metadata'))
nsmap = metadata.nsmap
nstags = dict((k, '{%s}' % v) for k, v in six.iteritems(nsmap))
default_ns = nstags.get(None, '')
nsdict = dict((v, {}) for v in nsmap.values())
def add_item(ns, tag, value, extra):
if ns not in nsdict:
nsdict[ns] = {}
values = nsdict[ns].setdefault(tag, [])
values.append((value, extra))
for t in metadata:
if not etree.iselement(t) or t.tag is etree.Comment:
continue
if t.tag == default_ns + 'meta':
name = t.get('name')
others = dict((k, v) for k, v in t.items())
if name and ':' in name:
prefix, name = name.split(':', 1)
else:
prefix = None
add_item(t.nsmap.get(prefix, prefix), name, t.text, others)
else:
tag = t.tag[t.tag.rfind('}') + 1:]
if (t.prefix and t.prefix.lower() == 'dc') and tag == 'identifier':
_id = t.get('id', None)
if _id:
self.book.IDENTIFIER_ID = _id
others = dict((k, v) for k, v in t.items())
add_item(t.nsmap[t.prefix], tag, t.text, others)
self.book.metadata = nsdict
titles = self.book.get_metadata('DC', 'title')
if len(titles) > 0:
self.book.title = titles[0][0]
for value, others in self.book.get_metadata('DC', 'identifier'):
if others.get('id') == self.book.IDENTIFIER_ID:
self.book.uid = value
def _load_manifest(self):
for r in self.container.find('{%s}%s' % (NAMESPACES['OPF'], 'manifest')):
if r is not None and r.tag != '{%s}item' % NAMESPACES['OPF']:
continue
media_type = r.get('media-type')
_properties = r.get('properties', '')
if _properties:
properties = _properties.split(' ')
else:
properties = []
# people use wrong content types
if media_type == 'image/jpg':
media_type = 'image/jpeg'
if media_type == 'application/x-dtbncx+xml':
ei = EpubNcx(uid=r.get('id'), file_name=unquote(r.get('href')))
ei.content = self.read_file(zip_path.join(self.opf_dir, ei.file_name))
elif media_type == 'application/smil+xml':
ei = EpubSMIL(uid=r.get('id'), file_name=unquote(r.get('href')))
ei.content = self.read_file(zip_path.join(self.opf_dir, ei.file_name))
elif media_type == 'application/xhtml+xml':
if 'nav' in properties:
ei = EpubNav(uid=r.get('id'), file_name=unquote(r.get('href')))
ei.content = self.read_file(zip_path.join(self.opf_dir, r.get('href')))
elif 'cover' in properties:
ei = EpubCoverHtml()
ei.content = self.read_file(zip_path.join(self.opf_dir, unquote(r.get('href'))))
else:
ei = EpubHtml()
ei.id = r.get('id')
ei.file_name = unquote(r.get('href'))
ei.media_type = media_type
ei.media_overlay = r.get('media-overlay', None)
ei.media_duration = r.get('duration', None)
ei.content = self.read_file(zip_path.join(self.opf_dir, ei.get_name()))
ei.properties = properties
elif media_type in IMAGE_MEDIA_TYPES:
if 'cover-image' in properties:
ei = EpubCover(uid=r.get('id'), file_name=unquote(r.get('href')))
ei.media_type = media_type
ei.content = self.read_file(zip_path.join(self.opf_dir, ei.get_name()))
else:
ei = EpubImage()
ei.id = r.get('id')
ei.file_name = unquote(r.get('href'))
ei.media_type = media_type
ei.content = self.read_file(zip_path.join(self.opf_dir, ei.get_name()))
else:
# different types
ei = EpubItem()
ei.id = r.get('id')
ei.file_name = unquote(r.get('href'))
ei.media_type = media_type
ei.content = self.read_file(zip_path.join(self.opf_dir, ei.get_name()))
self.book.add_item(ei)
def _parse_ncx(self, data):
tree = parse_string(data)
tree_root = tree.getroot()
nav_map = tree_root.find('{%s}navMap' % NAMESPACES['DAISY'])
def _get_children(elems, n, nid):
label, content = '', ''
children = []
for a in elems.getchildren():
if a.tag == '{%s}navLabel' % NAMESPACES['DAISY']:
label = a.getchildren()[0].text
if a.tag == '{%s}content' % NAMESPACES['DAISY']:
content = a.get('src', '')
if a.tag == '{%s}navPoint' % NAMESPACES['DAISY']:
children.append(_get_children(a, n + 1, a.get('id', '')))
if len(children) > 0:
if n == 0:
return children
return (Section(label, href=content),
children)
else:
return Link(content, label, nid)
self.book.toc = _get_children(nav_map, 0, '')
def _parse_nav(self, data, base_path, navtype='toc'):
html_node = parse_html_string(data)
if navtype == 'toc':
# parsing the table of contents
nav_node = html_node.xpath("//nav[@*='toc']")[0]
else:
# parsing the list of pages
_page_list = html_node.xpath("//nav[@*='page-list']")
if len(_page_list) == 0:
return
nav_node = _page_list[0]
def parse_list(list_node):
items = []
for item_node in list_node.findall('li'):
sublist_node = item_node.find('ol')
link_node = item_node.find('a')
if sublist_node is not None:
title = item_node[0].text
children = parse_list(sublist_node)
if link_node is not None:
href = zip_path.normpath(zip_path.join(base_path, link_node.get('href')))
items.append((Section(title, href=href), children))
else:
items.append((Section(title), children))
elif link_node is not None:
title = link_node.text
href = zip_path.normpath(zip_path.join(base_path, link_node.get('href')))
items.append(Link(href, title))
return items
if navtype == 'toc':
self.book.toc = parse_list(nav_node.find('ol'))
elif nav_node is not None:
# generate the pages list if there is one
self.book.pages = parse_list(nav_node.find('ol'))
# generate the per-file pages lists
# because of the order of parsing the files, this can't be done
# when building the EpubHtml objects
htmlfiles = dict()
for htmlfile in self.book.items:
if isinstance(htmlfile, EpubHtml):
htmlfiles[htmlfile.file_name] = htmlfile
for page in self.book.pages:
try:
(filename, idref) = page.href.split('#')
except ValueError:
filename = page.href
if filename in htmlfiles:
htmlfiles[filename].pages.append(page)
def _load_spine(self):
spine = self.container.find('{%s}%s' % (NAMESPACES['OPF'], 'spine'))
self.book.spine = [(t.get('idref'), t.get('linear', 'yes')) for t in spine]
toc = spine.get('toc', '')
self.book.set_direction(spine.get('page-progression-direction', None))
# should read ncx or nav file
nav_item = next((item for item in self.book.items if isinstance(item, EpubNav)), None)
if toc:
if not self.options.get('ignore_ncx') or not nav_item:
try:
ncxFile = self.read_file(zip_path.join(self.opf_dir, self.book.get_item_with_id(toc).get_name()))
except KeyError:
raise EpubException(-1, 'Can not find ncx file.')
self._parse_ncx(ncxFile)
def _load_guide(self):
guide = self.container.find('{%s}%s' % (NAMESPACES['OPF'], 'guide'))
if guide is not None:
self.book.guide = [{'href': t.get('href'), 'title': t.get('title'), 'type': t.get('type')} for t in guide]
def _load_opf_file(self):
try:
s = self.read_file(self.opf_file)
except KeyError:
raise EpubException(-1, 'Can not find container file')
self.container = parse_string(s)
self._load_metadata()
self._load_manifest()
self._load_spine()
self._load_guide()
# read nav file if found
#
nav_item = next((item for item in self.book.items if isinstance(item, EpubNav)), None)
if nav_item:
if self.options.get('ignore_ncx') or not self.book.toc:
self._parse_nav(
nav_item.content,
zip_path.dirname(nav_item.file_name),
navtype='toc'
)
self._parse_nav(
nav_item.content,
zip_path.dirname(nav_item.file_name),
navtype='pages'
)
def _load(self):
if os.path.isdir(self.file_name):
file_name = self.file_name
class Directory:
def read(self, subname):
with open(os.path.join(file_name, subname), 'rb') as fp:
return fp.read()
def close(self):
pass
self.zf = Directory()
else:
try:
self.zf = zipfile.ZipFile(self.file_name, 'r', compression=zipfile.ZIP_DEFLATED, allowZip64=True)
except zipfile.BadZipfile as bz:
raise EpubException(0, 'Bad Zip file')
except zipfile.LargeZipFile as bz:
raise EpubException(1, 'Large Zip file')
# 1st check metadata
self._load_container()
self._load_opf_file()
self.zf.close()
# WRITE
def write_epub(name, book, options=None):
"""
Creates epub file with the content defined in EpubBook.
>>> ebooklib.write_epub('book.epub', book)
:Args:
- name: file name for the output file
- book: instance of EpubBook
- options: extra options as a dictionary (optional)
"""
epub = EpubWriter(name, book, options)
epub.process()
try:
epub.write()
except IOError:
pass
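# Usage sketch (illustrative): building a minimal one-chapter book and writing
# it with the helper above. Only classes defined in this module are used; the
# file names and titles are made up for the example.
#
# >>> book = EpubBook()
# >>> book.set_title('Sample book')
# >>> book.set_language('en')
# >>> c1 = EpubHtml(title='Intro', file_name='chap_01.xhtml')
# >>> c1.set_content('<h1>Introduction</h1><p>Hello.</p>')
# >>> book.add_item(c1)
# >>> book.toc = (Link('chap_01.xhtml', 'Introduction', 'intro'),)
# >>> book.add_item(EpubNcx())
# >>> book.add_item(EpubNav())
# >>> book.spine = ['nav', c1]
# >>> write_epub('sample.epub', book)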
# READ
def read_epub(name, options=None):
"""
Creates new instance of EpubBook with the content defined in the input file.
>>> book = ebooklib.read_epub('book.epub')
:Args:
- name: full path to the input file
- options: extra options as dictionary (optional)
:Returns:
Instance of EpubBook.
"""
reader = EpubReader(name, options)
book = reader.load()
reader.process()
return book
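# Usage sketch (illustrative): reading a book back and walking its document
# items; ITEM_DOCUMENT comes from the top-level ebooklib package imported above.
#
# >>> book = read_epub('sample.epub', options={'ignore_ncx': True})
# >>> for doc in book.get_items_of_type(ebooklib.ITEM_DOCUMENT):
# ...     print(doc.get_name())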
# File: MIAvisual-0.0.6/matplotlib/animation.py
import abc
import base64
import contextlib
from io import BytesIO, TextIOWrapper
import itertools
import logging
from pathlib import Path
import shutil
import subprocess
import sys
from tempfile import TemporaryDirectory
import uuid
import warnings
import numpy as np
from PIL import Image
import matplotlib as mpl
from matplotlib._animation_data import (
DISPLAY_TEMPLATE, INCLUDED_FRAMES, JS_INCLUDE, STYLE_INCLUDE)
from matplotlib import _api, cbook
_log = logging.getLogger(__name__)
# Process creation flag for subprocess to prevent it raising a terminal
# window. See for example:
# https://stackoverflow.com/q/24130623/
if sys.platform == 'win32':
subprocess_creation_flags = CREATE_NO_WINDOW = 0x08000000
else:
# Apparently None won't work here
subprocess_creation_flags = 0
# Other potential writing methods:
# * http://pymedia.org/
# * libming (produces swf) python wrappers: https://github.com/libming/libming
# * Wrap x264 API:
# (https://stackoverflow.com/q/2940671/)
def adjusted_figsize(w, h, dpi, n):
"""
Compute figure size so that pixels are a multiple of n.
Parameters
----------
w, h : float
Size in inches.
dpi : float
The dpi.
n : int
The target multiple.
Returns
-------
wnew, hnew : float
The new figure size in inches.
"""
# this may be simplified if/when we adopt consistent rounding for
# pixel size across the whole library
def correct_roundoff(x, dpi, n):
if int(x*dpi) % n != 0:
if int(np.nextafter(x, np.inf)*dpi) % n == 0:
x = np.nextafter(x, np.inf)
elif int(np.nextafter(x, -np.inf)*dpi) % n == 0:
x = np.nextafter(x, -np.inf)
return x
wnew = int(w * dpi / n) * n / dpi
hnew = int(h * dpi / n) * n / dpi
return correct_roundoff(wnew, dpi, n), correct_roundoff(hnew, dpi, n)
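# Usage sketch (illustrative): a 4.1 x 3.05 inch figure at 100 dpi gives a
# 410 x 305 pixel frame; with n=2 the height is nudged down to 304 pixels so
# encoders such as h264, which require even dimensions, accept the frames.
#
# >>> w, h = adjusted_figsize(4.1, 3.05, 100, 2)
# >>> int(round(w * 100)), int(round(h * 100))
# (410, 304)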
class MovieWriterRegistry:
"""Registry of available writer classes by human readable name."""
def __init__(self):
self._registered = dict()
def register(self, name):
"""
Decorator for registering a class under a name.
Example use::
@registry.register(name)
class Foo:
pass
"""
def wrapper(writer_cls):
self._registered[name] = writer_cls
return writer_cls
return wrapper
def is_available(self, name):
"""
Check if given writer is available by name.
Parameters
----------
name : str
Returns
-------
bool
"""
try:
cls = self._registered[name]
except KeyError:
return False
return cls.isAvailable()
def __iter__(self):
"""Iterate over names of available writer class."""
for name in self._registered:
if self.is_available(name):
yield name
def list(self):
"""Get a list of available MovieWriters."""
return [*self]
def __getitem__(self, name):
"""Get an available writer class from its name."""
if self.is_available(name):
return self._registered[name]
raise RuntimeError(f"Requested MovieWriter ({name}) not available")
writers = MovieWriterRegistry()
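# Usage sketch (illustrative): querying the registry for the writers available
# on this machine and fetching one by name. The names depend on which encoders
# are installed (e.g. 'ffmpeg' or 'pillow'), so the exact list varies.
#
# >>> print(writers.list())
# >>> if writers.is_available('ffmpeg'):
# ...     Writer = writers['ffmpeg']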
class AbstractMovieWriter(abc.ABC):
"""
Abstract base class for writing movies, providing a way to grab frames by
calling `~AbstractMovieWriter.grab_frame`.
`setup` is called to start the process and `finish` is called afterwards.
`saving` is provided as a context manager to facilitate this process as ::
with moviewriter.saving(fig, outfile='myfile.mp4', dpi=100):
# Iterate over frames
moviewriter.grab_frame(**savefig_kwargs)
The use of the context manager ensures that `setup` and `finish` are
performed as necessary.
An instance of a concrete subclass of this class can be given as the
``writer`` argument of `Animation.save()`.
"""
def __init__(self, fps=5, metadata=None, codec=None, bitrate=None):
self.fps = fps
self.metadata = metadata if metadata is not None else {}
self.codec = (
mpl.rcParams['animation.codec'] if codec is None else codec)
self.bitrate = (
mpl.rcParams['animation.bitrate'] if bitrate is None else bitrate)
@abc.abstractmethod
def setup(self, fig, outfile, dpi=None):
"""
Setup for writing the movie file.
Parameters
----------
fig : `~matplotlib.figure.Figure`
The figure object that contains the information for frames.
outfile : str
The filename of the resulting movie file.
dpi : float, default: ``fig.dpi``
The DPI (or resolution) for the file. This controls the size
in pixels of the resulting movie file.
"""
self.outfile = outfile
self.fig = fig
if dpi is None:
dpi = self.fig.dpi
self.dpi = dpi
@property
def frame_size(self):
"""A tuple ``(width, height)`` in pixels of a movie frame."""
w, h = self.fig.get_size_inches()
return int(w * self.dpi), int(h * self.dpi)
@abc.abstractmethod
def grab_frame(self, **savefig_kwargs):
"""
Grab the image information from the figure and save as a movie frame.
All keyword arguments in *savefig_kwargs* are passed on to the
`~.Figure.savefig` call that saves the figure.
"""
@abc.abstractmethod
def finish(self):
"""Finish any processing for writing the movie."""
@contextlib.contextmanager
def saving(self, fig, outfile, dpi, *args, **kwargs):
"""
Context manager to facilitate writing the movie file.
``*args, **kw`` are any parameters that should be passed to `setup`.
"""
# This particular sequence is what contextlib.contextmanager wants
self.setup(fig, outfile, dpi, *args, **kwargs)
try:
yield self
finally:
self.finish()
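# Usage sketch (illustrative): driving a concrete writer through the `saving`
# context manager defined above. Assumes matplotlib.pyplot is imported as plt
# and that an ffmpeg-backed writer is registered later in this module.
#
# >>> fig, ax = plt.subplots()
# >>> writer = writers['ffmpeg'](fps=15)
# >>> with writer.saving(fig, 'demo.mp4', dpi=100):
# ...     for _ in range(30):
# ...         writer.grab_frame()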
class MovieWriter(AbstractMovieWriter):
"""
Base class for writing movies.
This is a base class for MovieWriter subclasses that write a movie frame
data to a pipe. You cannot instantiate this class directly.
See examples for how to use its subclasses.
Attributes
----------
frame_format : str
The format used in writing frame data, defaults to 'rgba'.
fig : `~matplotlib.figure.Figure`
The figure to capture data from.
This must be provided by the sub-classes.
"""
# Builtin writer subclasses additionally define the _exec_key and _args_key
# attributes, which indicate the rcParams entries where the path to the
# executable and additional command-line arguments to the executable are
# stored. Third-party writers cannot meaningfully set these as they cannot
# extend rcParams with new keys.
# Pipe-based writers only support RGBA, but file-based ones support more
# formats.
supported_formats = ["rgba"]
def __init__(self, fps=5, codec=None, bitrate=None, extra_args=None,
metadata=None):
"""
Parameters
----------
fps : int, default: 5
Movie frame rate (per second).
codec : str or None, default: :rc:`animation.codec`
The codec to use.
bitrate : int, default: :rc:`animation.bitrate`
The bitrate of the movie, in kilobits per second. Higher values
means higher quality movies, but increase the file size. A value
of -1 lets the underlying movie encoder select the bitrate.
extra_args : list of str or None, optional
Extra command-line arguments passed to the underlying movie
encoder. The default, None, means to use
:rc:`animation.[name-of-encoder]_args` for the builtin writers.
metadata : dict[str, str], default: {}
A dictionary of keys and values for metadata to include in the
output file. Some keys that may be of use include:
title, artist, genre, subject, copyright, srcform, comment.
"""
if type(self) is MovieWriter:
# TODO MovieWriter is still an abstract class and needs to be
# extended with a mixin. This should be clearer in naming
# and description. For now, just give a reasonable error
# message to users.
raise TypeError(
'MovieWriter cannot be instantiated directly. Please use one '
'of its subclasses.')
super().__init__(fps=fps, metadata=metadata, codec=codec,
bitrate=bitrate)
self.frame_format = self.supported_formats[0]
self.extra_args = extra_args
def _adjust_frame_size(self):
if self.codec == 'h264':
wo, ho = self.fig.get_size_inches()
w, h = adjusted_figsize(wo, ho, self.dpi, 2)
if (wo, ho) != (w, h):
self.fig.set_size_inches(w, h, forward=True)
_log.info('figure size in inches has been adjusted '
'from %s x %s to %s x %s', wo, ho, w, h)
else:
w, h = self.fig.get_size_inches()
_log.debug('frame size in pixels is %s x %s', *self.frame_size)
return w, h
def setup(self, fig, outfile, dpi=None):
# docstring inherited
super().setup(fig, outfile, dpi=dpi)
self._w, self._h = self._adjust_frame_size()
# Run here so that grab_frame() can write the data to a pipe. This
# eliminates the need for temp files.
self._run()
def _run(self):
# Uses subprocess to call the program for assembling frames into a
# movie file. *args* returns the sequence of command line arguments
# from a few configuration options.
command = self._args()
_log.info('MovieWriter._run: running command: %s',
cbook._pformat_subprocess(command))
PIPE = subprocess.PIPE
self._proc = subprocess.Popen(
command, stdin=PIPE, stdout=PIPE, stderr=PIPE,
creationflags=subprocess_creation_flags)
def finish(self):
"""Finish any processing for writing the movie."""
overridden_cleanup = _api.deprecate_method_override(
__class__.cleanup, self, since="3.4", alternative="finish()")
if overridden_cleanup is not None:
overridden_cleanup()
else:
self._cleanup() # Inline _cleanup() once cleanup() is removed.
def grab_frame(self, **savefig_kwargs):
# docstring inherited
_log.debug('MovieWriter.grab_frame: Grabbing frame.')
# Readjust the figure size in case it has been changed by the user.
# All frames must have the same size to save the movie correctly.
self.fig.set_size_inches(self._w, self._h)
# Save the figure data to the sink, using the frame format and dpi.
self.fig.savefig(self._proc.stdin, format=self.frame_format,
dpi=self.dpi, **savefig_kwargs)
def _args(self):
"""Assemble list of encoder-specific command-line arguments."""
        raise NotImplementedError("_args needs to be implemented by subclass.")
def _cleanup(self): # Inline to finish() once cleanup() is removed.
"""Clean-up and collect the process used to write the movie file."""
out, err = self._proc.communicate()
# Use the encoding/errors that universal_newlines would use.
out = TextIOWrapper(BytesIO(out)).read()
err = TextIOWrapper(BytesIO(err)).read()
if out:
_log.log(
logging.WARNING if self._proc.returncode else logging.DEBUG,
"MovieWriter stdout:\n%s", out)
if err:
_log.log(
logging.WARNING if self._proc.returncode else logging.DEBUG,
"MovieWriter stderr:\n%s", err)
if self._proc.returncode:
raise subprocess.CalledProcessError(
self._proc.returncode, self._proc.args, out, err)
@_api.deprecated("3.4")
def cleanup(self):
self._cleanup()
@classmethod
def bin_path(cls):
"""
Return the binary path to the commandline tool used by a specific
subclass. This is a class method so that the tool can be looked for
before making a particular MovieWriter subclass available.
"""
return str(mpl.rcParams[cls._exec_key])
@classmethod
def isAvailable(cls):
"""Return whether a MovieWriter subclass is actually available."""
return shutil.which(cls.bin_path()) is not None
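# Illustrative sketch (not part of the original source): looking up a usable
# writer class through the registry populated by ``@writers.register`` below.
# ``writers.is_available`` and indexing by name are assumed registry APIs.
def _example_pick_writer():  # pragma: no cover - doc sketch
    for name in ('ffmpeg', 'imagemagick', 'pillow'):
        if writers.is_available(name):
            return writers[name]  # returns the registered writer class
    raise RuntimeError("no movie writer available")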
class FileMovieWriter(MovieWriter):
"""
`MovieWriter` for writing to individual files and stitching at the end.
This must be sub-classed to be useful.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.frame_format = mpl.rcParams['animation.frame_format']
def setup(self, fig, outfile, dpi=None, frame_prefix=None):
"""
Setup for writing the movie file.
Parameters
----------
fig : `~matplotlib.figure.Figure`
The figure to grab the rendered frames from.
outfile : str
The filename of the resulting movie file.
dpi : float, default: ``fig.dpi``
The dpi of the output file. This, with the figure size,
controls the size in pixels of the resulting movie file.
frame_prefix : str, optional
The filename prefix to use for temporary files. If *None* (the
default), files are written to a temporary directory which is
deleted by `cleanup`; if not *None*, no temporary files are
deleted.
"""
self.fig = fig
self.outfile = outfile
if dpi is None:
dpi = self.fig.dpi
self.dpi = dpi
self._adjust_frame_size()
if frame_prefix is None:
self._tmpdir = TemporaryDirectory()
self.temp_prefix = str(Path(self._tmpdir.name, 'tmp'))
else:
self._tmpdir = None
self.temp_prefix = frame_prefix
self._frame_counter = 0 # used for generating sequential file names
self._temp_paths = list()
self.fname_format_str = '%s%%07d.%s'
def __del__(self):
if self._tmpdir:
self._tmpdir.cleanup()
@property
def frame_format(self):
"""
Format (png, jpeg, etc.) to use for saving the frames, which can be
decided by the individual subclasses.
"""
return self._frame_format
@frame_format.setter
def frame_format(self, frame_format):
if frame_format in self.supported_formats:
self._frame_format = frame_format
else:
_api.warn_external(
f"Ignoring file format {frame_format!r} which is not "
f"supported by {type(self).__name__}; using "
f"{self.supported_formats[0]} instead.")
self._frame_format = self.supported_formats[0]
def _base_temp_name(self):
# Generates a template name (without number) given the frame format
# for extension and the prefix.
return self.fname_format_str % (self.temp_prefix, self.frame_format)
def grab_frame(self, **savefig_kwargs):
# docstring inherited
# Creates a filename for saving using basename and counter.
path = Path(self._base_temp_name() % self._frame_counter)
self._temp_paths.append(path) # Record the filename for later use.
self._frame_counter += 1 # Ensures each created name is unique.
_log.debug('FileMovieWriter.grab_frame: Grabbing frame %d to path=%s',
self._frame_counter, path)
with open(path, 'wb') as sink: # Save figure to the sink.
self.fig.savefig(sink, format=self.frame_format, dpi=self.dpi,
**savefig_kwargs)
def finish(self):
# Call run here now that all frame grabbing is done. All temp files
# are available to be assembled.
self._run()
super().finish() # Will call clean-up
def _cleanup(self): # Inline to finish() once cleanup() is removed.
super()._cleanup()
if self._tmpdir:
_log.debug('MovieWriter: clearing temporary path=%s', self._tmpdir)
self._tmpdir.cleanup()
@writers.register('pillow')
class PillowWriter(AbstractMovieWriter):
@classmethod
def isAvailable(cls):
return True
def setup(self, fig, outfile, dpi=None):
super().setup(fig, outfile, dpi=dpi)
self._frames = []
def grab_frame(self, **savefig_kwargs):
buf = BytesIO()
self.fig.savefig(
buf, **{**savefig_kwargs, "format": "rgba", "dpi": self.dpi})
self._frames.append(Image.frombuffer(
"RGBA", self.frame_size, buf.getbuffer(), "raw", "RGBA", 0, 1))
def finish(self):
self._frames[0].save(
self.outfile, save_all=True, append_images=self._frames[1:],
duration=int(1000 / self.fps), loop=0)
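# Illustrative sketch (not part of the original source): writing a short GIF
# with PillowWriter. The figure contents and filename are assumptions.
def _example_pillow_gif():  # pragma: no cover - doc sketch
    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    writer = PillowWriter(fps=15)
    with writer.saving(fig, "sketch.gif", dpi=100):
        for i in range(10):
            ax.plot([0, i], [0, 1])
            writer.grab_frame()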
# Base class of ffmpeg information. Has the config keys and the common set
# of arguments that controls the *output* side of things.
class FFMpegBase:
"""
Mixin class for FFMpeg output.
    To be useful this must be combined, via multiple inheritance, with a
    `MovieWriter` subclass.
"""
_exec_key = 'animation.ffmpeg_path'
_args_key = 'animation.ffmpeg_args'
@property
def output_args(self):
args = []
if Path(self.outfile).suffix == '.gif':
self.codec = 'gif'
else:
args.extend(['-vcodec', self.codec])
extra_args = (self.extra_args if self.extra_args is not None
else mpl.rcParams[self._args_key])
# For h264, the default format is yuv444p, which is not compatible
# with quicktime (and others). Specifying yuv420p fixes playback on
# iOS, as well as HTML5 video in firefox and safari (on both Win and
# OSX). Also fixes internet explorer. This is as of 2015/10/29.
if self.codec == 'h264' and '-pix_fmt' not in extra_args:
args.extend(['-pix_fmt', 'yuv420p'])
# For GIF, we're telling FFMPEG to split the video stream, to generate
# a palette, and then use it for encoding.
elif self.codec == 'gif' and '-filter_complex' not in extra_args:
args.extend(['-filter_complex',
'split [a][b];[a] palettegen [p];[b][p] paletteuse'])
if self.bitrate > 0:
args.extend(['-b', '%dk' % self.bitrate]) # %dk: bitrate in kbps.
args.extend(extra_args)
for k, v in self.metadata.items():
args.extend(['-metadata', '%s=%s' % (k, v)])
return args + ['-y', self.outfile]
# Combine FFMpeg options with pipe-based writing
@writers.register('ffmpeg')
class FFMpegWriter(FFMpegBase, MovieWriter):
"""
Pipe-based ffmpeg writer.
Frames are streamed directly to ffmpeg via a pipe and written in a single
pass.
"""
def _args(self):
# Returns the command line parameters for subprocess to use
# ffmpeg to create a movie using a pipe.
args = [self.bin_path(), '-f', 'rawvideo', '-vcodec', 'rawvideo',
'-s', '%dx%d' % self.frame_size, '-pix_fmt', self.frame_format,
'-r', str(self.fps)]
# Logging is quieted because subprocess.PIPE has limited buffer size.
# If you have a lot of frames in your animation and set logging to
# DEBUG, you will have a buffer overrun.
if _log.getEffectiveLevel() > logging.DEBUG:
args += ['-loglevel', 'error']
args += ['-i', 'pipe:'] + self.output_args
return args
# Combine FFMpeg options with temp file-based writing
@writers.register('ffmpeg_file')
class FFMpegFileWriter(FFMpegBase, FileMovieWriter):
"""
File-based ffmpeg writer.
Frames are written to temporary files on disk and then stitched
together at the end.
"""
supported_formats = ['png', 'jpeg', 'tiff', 'raw', 'rgba']
def _args(self):
# Returns the command line parameters for subprocess to use
# ffmpeg to create a movie using a collection of temp images
args = []
# For raw frames, we need to explicitly tell ffmpeg the metadata.
if self.frame_format in {'raw', 'rgba'}:
args += [
'-f', 'image2', '-vcodec', 'rawvideo',
'-video_size', '%dx%d' % self.frame_size,
'-pixel_format', 'rgba',
'-framerate', str(self.fps),
]
args += ['-r', str(self.fps), '-i', self._base_temp_name(),
'-vframes', str(self._frame_counter)]
# Logging is quieted because subprocess.PIPE has limited buffer size.
# If you have a lot of frames in your animation and set logging to
# DEBUG, you will have a buffer overrun.
if _log.getEffectiveLevel() > logging.DEBUG:
args += ['-loglevel', 'error']
return [self.bin_path(), *args, *self.output_args]
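# Illustrative sketch (not part of the original source): configuring the
# pipe-based FFMpegWriter for an existing animation ``anim``. The metadata and
# bitrate values are assumptions for the example.
def _example_ffmpeg_save(anim):  # pragma: no cover - doc sketch
    writer = FFMpegWriter(fps=30, bitrate=1800,
                          metadata={'title': 'sketch', 'artist': 'me'})
    anim.save("sketch.mp4", writer=writer)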
# Base class for animated GIFs with ImageMagick
class ImageMagickBase:
"""
Mixin class for ImageMagick output.
    To be useful this must be combined, via multiple inheritance, with a
    `MovieWriter` subclass.
"""
_exec_key = 'animation.convert_path'
_args_key = 'animation.convert_args'
@property
def delay(self):
return 100. / self.fps
@property
def output_args(self):
extra_args = (self.extra_args if self.extra_args is not None
else mpl.rcParams[self._args_key])
return [*extra_args, self.outfile]
@classmethod
def bin_path(cls):
binpath = super().bin_path()
if binpath == 'convert':
binpath = mpl._get_executable_info('magick').executable
return binpath
@classmethod
def isAvailable(cls):
try:
return super().isAvailable()
except mpl.ExecutableNotFoundError as _enf:
# May be raised by get_executable_info.
_log.debug('ImageMagick unavailable due to: %s', _enf)
return False
# Combine ImageMagick options with pipe-based writing
@writers.register('imagemagick')
class ImageMagickWriter(ImageMagickBase, MovieWriter):
"""
Pipe-based animated gif.
Frames are streamed directly to ImageMagick via a pipe and written
in a single pass.
"""
def _args(self):
return ([self.bin_path(),
'-size', '%ix%i' % self.frame_size, '-depth', '8',
'-delay', str(self.delay), '-loop', '0',
'%s:-' % self.frame_format]
+ self.output_args)
# Combine ImageMagick options with temp file-based writing
@writers.register('imagemagick_file')
class ImageMagickFileWriter(ImageMagickBase, FileMovieWriter):
"""
File-based animated gif writer.
Frames are written to temporary files on disk and then stitched
together at the end.
"""
supported_formats = ['png', 'jpeg', 'tiff', 'raw', 'rgba']
def _args(self):
# Force format: ImageMagick does not recognize 'raw'.
fmt = 'rgba:' if self.frame_format == 'raw' else ''
return ([self.bin_path(),
'-size', '%ix%i' % self.frame_size, '-depth', '8',
'-delay', str(self.delay), '-loop', '0',
'%s%s*.%s' % (fmt, self.temp_prefix, self.frame_format)]
+ self.output_args)
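# Illustrative sketch (not part of the original source): falling back to
# PillowWriter when ImageMagick is not installed, using the ``isAvailable``
# classmethod defined above.
def _example_pick_gif_writer():  # pragma: no cover - doc sketch
    if ImageMagickWriter.isAvailable():
        return ImageMagickWriter(fps=20)
    return PillowWriter(fps=20)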
# Taken directly from jakevdp's JSAnimation package at
# http://github.com/jakevdp/JSAnimation
def _included_frames(paths, frame_format):
"""paths should be a list of Paths"""
return INCLUDED_FRAMES.format(Nframes=len(paths),
frame_dir=paths[0].parent,
frame_format=frame_format)
def _embedded_frames(frame_list, frame_format):
"""frame_list should be a list of base64-encoded png files"""
if frame_format == 'svg':
# Fix MIME type for svg
frame_format = 'svg+xml'
template = ' frames[{0}] = "data:image/{1};base64,{2}"\n'
return "\n" + "".join(
template.format(i, frame_format, frame_data.replace('\n', '\\\n'))
for i, frame_data in enumerate(frame_list))
@writers.register('html')
class HTMLWriter(FileMovieWriter):
"""Writer for JavaScript-based HTML movies."""
supported_formats = ['png', 'jpeg', 'tiff', 'svg']
@classmethod
def isAvailable(cls):
return True
def __init__(self, fps=30, codec=None, bitrate=None, extra_args=None,
metadata=None, embed_frames=False, default_mode='loop',
embed_limit=None):
if extra_args:
_log.warning("HTMLWriter ignores 'extra_args'")
extra_args = () # Don't lookup nonexistent rcParam[args_key].
self.embed_frames = embed_frames
self.default_mode = default_mode.lower()
_api.check_in_list(['loop', 'once', 'reflect'],
default_mode=self.default_mode)
# Save embed limit, which is given in MB
if embed_limit is None:
self._bytes_limit = mpl.rcParams['animation.embed_limit']
else:
self._bytes_limit = embed_limit
# Convert from MB to bytes
self._bytes_limit *= 1024 * 1024
super().__init__(fps, codec, bitrate, extra_args, metadata)
def setup(self, fig, outfile, dpi, frame_dir=None):
outfile = Path(outfile)
_api.check_in_list(['.html', '.htm'], outfile_extension=outfile.suffix)
self._saved_frames = []
self._total_bytes = 0
self._hit_limit = False
if not self.embed_frames:
if frame_dir is None:
frame_dir = outfile.with_name(outfile.stem + '_frames')
frame_dir.mkdir(parents=True, exist_ok=True)
frame_prefix = frame_dir / 'frame'
else:
frame_prefix = None
super().setup(fig, outfile, dpi, frame_prefix)
self._clear_temp = False
def grab_frame(self, **savefig_kwargs):
if self.embed_frames:
# Just stop processing if we hit the limit
if self._hit_limit:
return
f = BytesIO()
self.fig.savefig(f, format=self.frame_format,
dpi=self.dpi, **savefig_kwargs)
imgdata64 = base64.encodebytes(f.getvalue()).decode('ascii')
self._total_bytes += len(imgdata64)
if self._total_bytes >= self._bytes_limit:
_log.warning(
"Animation size has reached %s bytes, exceeding the limit "
"of %s. If you're sure you want a larger animation "
"embedded, set the animation.embed_limit rc parameter to "
"a larger value (in MB). This and further frames will be "
"dropped.", self._total_bytes, self._bytes_limit)
self._hit_limit = True
else:
self._saved_frames.append(imgdata64)
else:
return super().grab_frame(**savefig_kwargs)
def finish(self):
# save the frames to an html file
if self.embed_frames:
fill_frames = _embedded_frames(self._saved_frames,
self.frame_format)
Nframes = len(self._saved_frames)
else:
# temp names is filled by FileMovieWriter
fill_frames = _included_frames(self._temp_paths, self.frame_format)
Nframes = len(self._temp_paths)
mode_dict = dict(once_checked='',
loop_checked='',
reflect_checked='')
mode_dict[self.default_mode + '_checked'] = 'checked'
interval = 1000 // self.fps
with open(self.outfile, 'w') as of:
of.write(JS_INCLUDE + STYLE_INCLUDE)
of.write(DISPLAY_TEMPLATE.format(id=uuid.uuid4().hex,
Nframes=Nframes,
fill_frames=fill_frames,
interval=interval,
**mode_dict))
# duplicate the temporary file clean up logic from
# FileMovieWriter.cleanup. We can not call the inherited
# versions of finish or cleanup because both assume that
# there is a subprocess that we either need to call to merge
# many frames together or that there is a subprocess call that
# we need to clean up.
if self._tmpdir:
_log.debug('MovieWriter: clearing temporary path=%s', self._tmpdir)
self._tmpdir.cleanup()
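# Illustrative sketch (not part of the original source): saving an animation
# ``anim`` as a standalone HTML page with embedded frames. The filename is an
# assumption for the example.
def _example_html_save(anim):  # pragma: no cover - doc sketch
    writer = HTMLWriter(fps=24, embed_frames=True, default_mode='loop')
    anim.save("sketch.html", writer=writer)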
class Animation:
"""
A base class for Animations.
This class is not usable as is, and should be subclassed to provide needed
behavior.
.. note::
You must store the created Animation in a variable that lives as long
as the animation should run. Otherwise, the Animation object will be
garbage-collected and the animation stops.
Parameters
----------
fig : `~matplotlib.figure.Figure`
The figure object used to get needed events, such as draw or resize.
event_source : object, optional
A class that can run a callback when desired events
are generated, as well as be stopped and started.
Examples include timers (see `TimedAnimation`) and file
system notifications.
blit : bool, default: False
Whether blitting is used to optimize drawing.
See Also
--------
FuncAnimation, ArtistAnimation
"""
def __init__(self, fig, event_source=None, blit=False):
self._draw_was_started = False
self._fig = fig
# Disables blitting for backends that don't support it. This
# allows users to request it if available, but still have a
# fallback that works if it is not.
self._blit = blit and fig.canvas.supports_blit
# These are the basics of the animation. The frame sequence represents
# information for each frame of the animation and depends on how the
# drawing is handled by the subclasses. The event source fires events
# that cause the frame sequence to be iterated.
self.frame_seq = self.new_frame_seq()
self.event_source = event_source
# Instead of starting the event source now, we connect to the figure's
# draw_event, so that we only start once the figure has been drawn.
self._first_draw_id = fig.canvas.mpl_connect('draw_event', self._start)
# Connect to the figure's close_event so that we don't continue to
# fire events and try to draw to a deleted figure.
self._close_id = self._fig.canvas.mpl_connect('close_event',
self._stop)
if self._blit:
self._setup_blit()
def __del__(self):
if not getattr(self, '_draw_was_started', True):
warnings.warn(
'Animation was deleted without rendering anything. This is '
'most likely not intended. To prevent deletion, assign the '
'Animation to a variable, e.g. `anim`, that exists until you '
'have outputted the Animation using `plt.show()` or '
'`anim.save()`.'
)
def _start(self, *args):
"""
Starts interactive animation. Adds the draw frame command to the GUI
handler, calls show to start the event loop.
"""
        # Do not start the event source while the figure is being saved via saving().
if self._fig.canvas.is_saving():
return
# First disconnect our draw event handler
self._fig.canvas.mpl_disconnect(self._first_draw_id)
# Now do any initial draw
self._init_draw()
# Add our callback for stepping the animation and
# actually start the event_source.
self.event_source.add_callback(self._step)
self.event_source.start()
def _stop(self, *args):
# On stop we disconnect all of our events.
if self._blit:
self._fig.canvas.mpl_disconnect(self._resize_id)
self._fig.canvas.mpl_disconnect(self._close_id)
self.event_source.remove_callback(self._step)
self.event_source = None
def save(self, filename, writer=None, fps=None, dpi=None, codec=None,
bitrate=None, extra_args=None, metadata=None, extra_anim=None,
savefig_kwargs=None, *, progress_callback=None):
"""
Save the animation as a movie file by drawing every frame.
Parameters
----------
filename : str
The output filename, e.g., :file:`mymovie.mp4`.
writer : `MovieWriter` or str, default: :rc:`animation.writer`
A `MovieWriter` instance to use or a key that identifies a
class to use, such as 'ffmpeg'.
fps : int, optional
            Movie frame rate (per second). If not set, the frame rate is
            inferred from the animation's frame interval.
dpi : float, default: :rc:`savefig.dpi`
Controls the dots per inch for the movie frames. Together with
the figure's size in inches, this controls the size of the movie.
codec : str, default: :rc:`animation.codec`.
The video codec to use. Not all codecs are supported by a given
`MovieWriter`.
bitrate : int, default: :rc:`animation.bitrate`
The bitrate of the movie, in kilobits per second. Higher values
means higher quality movies, but increase the file size. A value
of -1 lets the underlying movie encoder select the bitrate.
extra_args : list of str or None, optional
Extra command-line arguments passed to the underlying movie
encoder. The default, None, means to use
:rc:`animation.[name-of-encoder]_args` for the builtin writers.
metadata : dict[str, str], default: {}
Dictionary of keys and values for metadata to include in
the output file. Some keys that may be of use include:
title, artist, genre, subject, copyright, srcform, comment.
extra_anim : list, default: []
Additional `Animation` objects that should be included
in the saved movie file. These need to be from the same
            `matplotlib.figure.Figure` instance. Also, animation frames will
            simply be combined, so there should be a 1:1 correspondence
between the frames from the different animations.
savefig_kwargs : dict, default: {}
Keyword arguments passed to each `~.Figure.savefig` call used to
save the individual frames.
progress_callback : function, optional
A callback function that will be called for every frame to notify
the saving progress. It must have the signature ::
def func(current_frame: int, total_frames: int) -> Any
where *current_frame* is the current frame number and
*total_frames* is the total number of frames to be saved.
            *total_frames* is set to None if the total number of frames
            cannot be determined. Return values may exist but are ignored.
Example code to write the progress to stdout::
progress_callback =\
lambda i, n: print(f'Saving frame {i} of {n}')
Notes
-----
*fps*, *codec*, *bitrate*, *extra_args* and *metadata* are used to
construct a `.MovieWriter` instance and can only be passed if
*writer* is a string. If they are passed as non-*None* and *writer*
is a `.MovieWriter`, a `RuntimeError` will be raised.
"""
if writer is None:
writer = mpl.rcParams['animation.writer']
elif (not isinstance(writer, str) and
any(arg is not None
for arg in (fps, codec, bitrate, extra_args, metadata))):
raise RuntimeError('Passing in values for arguments '
'fps, codec, bitrate, extra_args, or metadata '
'is not supported when writer is an existing '
'MovieWriter instance. These should instead be '
'passed as arguments when creating the '
'MovieWriter instance.')
if savefig_kwargs is None:
savefig_kwargs = {}
if fps is None and hasattr(self, '_interval'):
# Convert interval in ms to frames per second
fps = 1000. / self._interval
# Re-use the savefig DPI for ours if none is given
if dpi is None:
dpi = mpl.rcParams['savefig.dpi']
if dpi == 'figure':
dpi = self._fig.dpi
writer_kwargs = {}
if codec is not None:
writer_kwargs['codec'] = codec
if bitrate is not None:
writer_kwargs['bitrate'] = bitrate
if extra_args is not None:
writer_kwargs['extra_args'] = extra_args
if metadata is not None:
writer_kwargs['metadata'] = metadata
all_anim = [self]
if extra_anim is not None:
all_anim.extend(anim
for anim
in extra_anim if anim._fig is self._fig)
# If we have the name of a writer, instantiate an instance of the
# registered class.
if isinstance(writer, str):
try:
writer_cls = writers[writer]
except RuntimeError: # Raised if not available.
writer_cls = PillowWriter # Always available.
_log.warning("MovieWriter %s unavailable; using Pillow "
"instead.", writer)
writer = writer_cls(fps, **writer_kwargs)
_log.info('Animation.save using %s', type(writer))
if 'bbox_inches' in savefig_kwargs:
_log.warning("Warning: discarding the 'bbox_inches' argument in "
"'savefig_kwargs' as it may cause frame size "
"to vary, which is inappropriate for animation.")
savefig_kwargs.pop('bbox_inches')
# Create a new sequence of frames for saved data. This is different
# from new_frame_seq() to give the ability to save 'live' generated
# frame information to be saved later.
# TODO: Right now, after closing the figure, saving a movie won't work
# since GUI widgets are gone. Either need to remove extra code to
# allow for this non-existent use case or find a way to make it work.
if mpl.rcParams['savefig.bbox'] == 'tight':
_log.info("Disabling savefig.bbox = 'tight', as it may cause "
"frame size to vary, which is inappropriate for "
"animation.")
# canvas._is_saving = True makes the draw_event animation-starting
# callback a no-op; canvas.manager = None prevents resizing the GUI
# widget (both are likewise done in savefig()).
with mpl.rc_context({'savefig.bbox': None}), \
writer.saving(self._fig, filename, dpi), \
cbook._setattr_cm(self._fig.canvas,
_is_saving=True, manager=None):
for anim in all_anim:
anim._init_draw() # Clear the initial frame
frame_number = 0
# TODO: Currently only FuncAnimation has a save_count
# attribute. Can we generalize this to all Animations?
save_count_list = [getattr(a, 'save_count', None)
for a in all_anim]
if None in save_count_list:
total_frames = None
else:
total_frames = sum(save_count_list)
for data in zip(*[a.new_saved_frame_seq() for a in all_anim]):
for anim, d in zip(all_anim, data):
# TODO: See if turning off blit is really necessary
anim._draw_next_frame(d, blit=False)
if progress_callback is not None:
progress_callback(frame_number, total_frames)
frame_number += 1
writer.grab_frame(**savefig_kwargs)
def _step(self, *args):
"""
Handler for getting events. By default, gets the next frame in the
sequence and hands the data off to be drawn.
"""
# Returns True to indicate that the event source should continue to
# call _step, until the frame sequence reaches the end of iteration,
# at which point False will be returned.
try:
framedata = next(self.frame_seq)
self._draw_next_frame(framedata, self._blit)
return True
except StopIteration:
return False
def new_frame_seq(self):
"""Return a new sequence of frame information."""
# Default implementation is just an iterator over self._framedata
return iter(self._framedata)
def new_saved_frame_seq(self):
"""Return a new sequence of saved/cached frame information."""
# Default is the same as the regular frame sequence
return self.new_frame_seq()
def _draw_next_frame(self, framedata, blit):
# Breaks down the drawing of the next frame into steps of pre- and
# post- draw, as well as the drawing of the frame itself.
self._pre_draw(framedata, blit)
self._draw_frame(framedata)
self._post_draw(framedata, blit)
def _init_draw(self):
# Initial draw to clear the frame. Also used by the blitting code
# when a clean base is required.
self._draw_was_started = True
def _pre_draw(self, framedata, blit):
# Perform any cleaning or whatnot before the drawing of the frame.
# This default implementation allows blit to clear the frame.
if blit:
self._blit_clear(self._drawn_artists)
def _draw_frame(self, framedata):
# Performs actual drawing of the frame.
raise NotImplementedError('Needs to be implemented by subclasses to'
' actually make an animation.')
def _post_draw(self, framedata, blit):
# After the frame is rendered, this handles the actual flushing of
# the draw, which can be a direct draw_idle() or make use of the
# blitting.
if blit and self._drawn_artists:
self._blit_draw(self._drawn_artists)
else:
self._fig.canvas.draw_idle()
# The rest of the code in this class is to facilitate easy blitting
def _blit_draw(self, artists):
# Handles blitted drawing, which renders only the artists given instead
# of the entire figure.
updated_ax = {a.axes for a in artists}
# Enumerate artists to cache axes' backgrounds. We do not draw
# artists yet to not cache foreground from plots with shared axes
for ax in updated_ax:
# If we haven't cached the background for the current view of this
# axes object, do so now. This might not always be reliable, but
# it's an attempt to automate the process.
cur_view = ax._get_view()
view, bg = self._blit_cache.get(ax, (object(), None))
if cur_view != view:
self._blit_cache[ax] = (
cur_view, ax.figure.canvas.copy_from_bbox(ax.bbox))
# Make a separate pass to draw foreground.
for a in artists:
a.axes.draw_artist(a)
# After rendering all the needed artists, blit each axes individually.
for ax in updated_ax:
ax.figure.canvas.blit(ax.bbox)
def _blit_clear(self, artists):
# Get a list of the axes that need clearing from the artists that
# have been drawn. Grab the appropriate saved background from the
# cache and restore.
axes = {a.axes for a in artists}
for ax in axes:
try:
view, bg = self._blit_cache[ax]
except KeyError:
continue
if ax._get_view() == view:
ax.figure.canvas.restore_region(bg)
else:
self._blit_cache.pop(ax)
def _setup_blit(self):
# Setting up the blit requires: a cache of the background for the
# axes
self._blit_cache = dict()
self._drawn_artists = []
self._resize_id = self._fig.canvas.mpl_connect('resize_event',
self._on_resize)
self._post_draw(None, self._blit)
def _on_resize(self, event):
# On resize, we need to disable the resize event handling so we don't
# get too many events. Also stop the animation events, so that
# we're paused. Reset the cache and re-init. Set up an event handler
# to catch once the draw has actually taken place.
self._fig.canvas.mpl_disconnect(self._resize_id)
self.event_source.stop()
self._blit_cache.clear()
self._init_draw()
self._resize_id = self._fig.canvas.mpl_connect('draw_event',
self._end_redraw)
def _end_redraw(self, event):
# Now that the redraw has happened, do the post draw flushing and
# blit handling. Then re-enable all of the original events.
self._post_draw(None, False)
self.event_source.start()
self._fig.canvas.mpl_disconnect(self._resize_id)
self._resize_id = self._fig.canvas.mpl_connect('resize_event',
self._on_resize)
def to_html5_video(self, embed_limit=None):
"""
Convert the animation to an HTML5 ``<video>`` tag.
This saves the animation as an h264 video, encoded in base64
directly into the HTML5 video tag. This respects :rc:`animation.writer`
and :rc:`animation.bitrate`. This also makes use of the
``interval`` to control the speed, and uses the ``repeat``
parameter to decide whether to loop.
Parameters
----------
embed_limit : float, optional
Limit, in MB, of the returned animation. No animation is created
if the limit is exceeded.
Defaults to :rc:`animation.embed_limit` = 20.0.
Returns
-------
str
An HTML5 video tag with the animation embedded as base64 encoded
h264 video.
If the *embed_limit* is exceeded, this returns the string
"Video too large to embed."
"""
VIDEO_TAG = r'''<video {size} {options}>
<source type="video/mp4" src="data:video/mp4;base64,{video}">
Your browser does not support the video tag.
</video>'''
# Cache the rendering of the video as HTML
if not hasattr(self, '_base64_video'):
# Save embed limit, which is given in MB
if embed_limit is None:
embed_limit = mpl.rcParams['animation.embed_limit']
# Convert from MB to bytes
embed_limit *= 1024 * 1024
# Can't open a NamedTemporaryFile twice on Windows, so use a
# TemporaryDirectory instead.
with TemporaryDirectory() as tmpdir:
path = Path(tmpdir, "temp.m4v")
# We create a writer manually so that we can get the
# appropriate size for the tag
Writer = writers[mpl.rcParams['animation.writer']]
writer = Writer(codec='h264',
bitrate=mpl.rcParams['animation.bitrate'],
fps=1000. / self._interval)
self.save(str(path), writer=writer)
# Now open and base64 encode.
vid64 = base64.encodebytes(path.read_bytes())
vid_len = len(vid64)
if vid_len >= embed_limit:
_log.warning(
"Animation movie is %s bytes, exceeding the limit of %s. "
"If you're sure you want a large animation embedded, set "
"the animation.embed_limit rc parameter to a larger value "
"(in MB).", vid_len, embed_limit)
else:
self._base64_video = vid64.decode('ascii')
self._video_size = 'width="{}" height="{}"'.format(
*writer.frame_size)
# If we exceeded the size, this attribute won't exist
if hasattr(self, '_base64_video'):
# Default HTML5 options are to autoplay and display video controls
options = ['controls', 'autoplay']
# If we're set to repeat, make it loop
if hasattr(self, 'repeat') and self.repeat:
options.append('loop')
return VIDEO_TAG.format(video=self._base64_video,
size=self._video_size,
options=' '.join(options))
else:
return 'Video too large to embed.'
def to_jshtml(self, fps=None, embed_frames=True, default_mode=None):
"""
Generate HTML representation of the animation.
Parameters
----------
fps : int, optional
            Movie frame rate (per second). If not set, the frame rate is
            inferred from the animation's frame interval.
        embed_frames : bool, optional
            Whether to embed the frames in the HTML file as base64-encoded
            images instead of writing them to separate files on disk.
default_mode : str, optional
What to do when the animation ends. Must be one of ``{'loop',
'once', 'reflect'}``. Defaults to ``'loop'`` if ``self.repeat``
is True, otherwise ``'once'``.
"""
if fps is None and hasattr(self, '_interval'):
# Convert interval in ms to frames per second
fps = 1000 / self._interval
        # If we're not given a default mode, choose one based on the value of
# the repeat attribute
if default_mode is None:
default_mode = 'loop' if self.repeat else 'once'
if not hasattr(self, "_html_representation"):
# Can't open a NamedTemporaryFile twice on Windows, so use a
# TemporaryDirectory instead.
with TemporaryDirectory() as tmpdir:
path = Path(tmpdir, "temp.html")
writer = HTMLWriter(fps=fps,
embed_frames=embed_frames,
default_mode=default_mode)
self.save(str(path), writer=writer)
self._html_representation = path.read_text()
return self._html_representation
def _repr_html_(self):
"""IPython display hook for rendering."""
fmt = mpl.rcParams['animation.html']
if fmt == 'html5':
return self.to_html5_video()
elif fmt == 'jshtml':
return self.to_jshtml()
def pause(self):
"""Pause the animation."""
self.event_source.stop()
if self._blit:
for artist in self._drawn_artists:
artist.set_animated(False)
def resume(self):
"""Resume the animation."""
self.event_source.start()
if self._blit:
for artist in self._drawn_artists:
artist.set_animated(True)
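# Illustrative sketch (not part of the original source): reporting progress
# while saving an Animation instance ``anim``, per the ``progress_callback``
# contract documented in ``Animation.save`` above.
def _example_save_with_progress(anim):  # pragma: no cover - doc sketch
    anim.save("sketch.mp4",
              progress_callback=lambda i, n: print(f"frame {i} of {n}"))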
class TimedAnimation(Animation):
"""
`Animation` subclass for time-based animation.
A new frame is drawn every *interval* milliseconds.
.. note::
You must store the created Animation in a variable that lives as long
as the animation should run. Otherwise, the Animation object will be
garbage-collected and the animation stops.
Parameters
----------
fig : `~matplotlib.figure.Figure`
The figure object used to get needed events, such as draw or resize.
interval : int, default: 200
Delay between frames in milliseconds.
repeat_delay : int, default: 0
The delay in milliseconds between consecutive animation runs, if
*repeat* is True.
repeat : bool, default: True
Whether the animation repeats when the sequence of frames is completed.
blit : bool, default: False
Whether blitting is used to optimize drawing.
"""
def __init__(self, fig, interval=200, repeat_delay=0, repeat=True,
event_source=None, *args, **kwargs):
self._interval = interval
# Undocumented support for repeat_delay = None as backcompat.
self._repeat_delay = repeat_delay if repeat_delay is not None else 0
self.repeat = repeat
# If we're not given an event source, create a new timer. This permits
# sharing timers between animation objects for syncing animations.
if event_source is None:
event_source = fig.canvas.new_timer(interval=self._interval)
super().__init__(fig, event_source=event_source, *args, **kwargs)
def _step(self, *args):
"""Handler for getting events."""
# Extends the _step() method for the Animation class. If
# Animation._step signals that it reached the end and we want to
# repeat, we refresh the frame sequence and return True. If
# _repeat_delay is set, change the event_source's interval to our loop
# delay and set the callback to one which will then set the interval
# back.
still_going = super()._step(*args)
if not still_going and self.repeat:
self._init_draw()
self.frame_seq = self.new_frame_seq()
self.event_source.interval = self._repeat_delay
return True
else:
self.event_source.interval = self._interval
return still_going
class ArtistAnimation(TimedAnimation):
"""
Animation using a fixed set of `.Artist` objects.
Before creating an instance, all plotting should have taken place
and the relevant artists saved.
.. note::
You must store the created Animation in a variable that lives as long
as the animation should run. Otherwise, the Animation object will be
garbage-collected and the animation stops.
Parameters
----------
fig : `~matplotlib.figure.Figure`
The figure object used to get needed events, such as draw or resize.
artists : list
Each list entry is a collection of `.Artist` objects that are made
visible on the corresponding frame. Other artists are made invisible.
interval : int, default: 200
Delay between frames in milliseconds.
repeat_delay : int, default: 0
The delay in milliseconds between consecutive animation runs, if
*repeat* is True.
repeat : bool, default: True
Whether the animation repeats when the sequence of frames is completed.
blit : bool, default: False
Whether blitting is used to optimize drawing.
"""
def __init__(self, fig, artists, *args, **kwargs):
# Internal list of artists drawn in the most recent frame.
self._drawn_artists = []
# Use the list of artists as the framedata, which will be iterated
# over by the machinery.
self._framedata = artists
super().__init__(fig, *args, **kwargs)
def _init_draw(self):
super()._init_draw()
# Make all the artists involved in *any* frame invisible
figs = set()
for f in self.new_frame_seq():
for artist in f:
artist.set_visible(False)
artist.set_animated(self._blit)
# Assemble a list of unique figures that need flushing
if artist.get_figure() not in figs:
figs.add(artist.get_figure())
# Flush the needed figures
for fig in figs:
fig.canvas.draw_idle()
def _pre_draw(self, framedata, blit):
"""Clears artists from the last frame."""
if blit:
# Let blit handle clearing
self._blit_clear(self._drawn_artists)
else:
# Otherwise, make all the artists from the previous frame invisible
for artist in self._drawn_artists:
artist.set_visible(False)
def _draw_frame(self, artists):
# Save the artists that were passed in as framedata for the other
# steps (esp. blitting) to use.
self._drawn_artists = artists
# Make all the artists from the current frame visible
for artist in artists:
artist.set_visible(True)
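# Illustrative sketch (not part of the original source): building an
# ArtistAnimation from pre-drawn artists. The plotted data is an assumption.
def _example_artist_animation():  # pragma: no cover - doc sketch
    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    frames = []
    for i in range(5):
        line, = ax.plot(range(10), [x * i for x in range(10)], color='b')
        frames.append([line])  # one list of artists per frame
    return ArtistAnimation(fig, frames, interval=200, blit=True)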
class FuncAnimation(TimedAnimation):
"""
Makes an animation by repeatedly calling a function *func*.
.. note::
You must store the created Animation in a variable that lives as long
as the animation should run. Otherwise, the Animation object will be
garbage-collected and the animation stops.
Parameters
----------
fig : `~matplotlib.figure.Figure`
The figure object used to get needed events, such as draw or resize.
func : callable
The function to call at each frame. The first argument will
be the next value in *frames*. Any additional positional
arguments can be supplied via the *fargs* parameter.
The required signature is::
def func(frame, *fargs) -> iterable_of_artists
If ``blit == True``, *func* must return an iterable of all artists
that were modified or created. This information is used by the blitting
algorithm to determine which parts of the figure have to be updated.
The return value is unused if ``blit == False`` and may be omitted in
that case.
frames : iterable, int, generator function, or None, optional
Source of data to pass *func* and each frame of the animation
- If an iterable, then simply use the values provided. If the
iterable has a length, it will override the *save_count* kwarg.
- If an integer, then equivalent to passing ``range(frames)``
- If a generator function, then must have the signature::
def gen_function() -> obj
- If *None*, then equivalent to passing ``itertools.count``.
        In all of these cases, the values in *frames* are simply passed through
to the user-supplied *func* and thus can be of any type.
init_func : callable, optional
A function used to draw a clear frame. If not given, the results of
drawing from the first item in the frames sequence will be used. This
function will be called once before the first frame.
The required signature is::
def init_func() -> iterable_of_artists
If ``blit == True``, *init_func* must return an iterable of artists
to be re-drawn. This information is used by the blitting algorithm to
determine which parts of the figure have to be updated. The return
value is unused if ``blit == False`` and may be omitted in that case.
fargs : tuple or None, optional
Additional arguments to pass to each call to *func*.
save_count : int, default: 100
Fallback for the number of values from *frames* to cache. This is
only used if the number of frames cannot be inferred from *frames*,
i.e. when it's an iterator without length or a generator.
interval : int, default: 200
Delay between frames in milliseconds.
repeat_delay : int, default: 0
The delay in milliseconds between consecutive animation runs, if
*repeat* is True.
repeat : bool, default: True
Whether the animation repeats when the sequence of frames is completed.
blit : bool, default: False
Whether blitting is used to optimize drawing. Note: when using
blitting, any animated artists will be drawn according to their zorder;
however, they will be drawn on top of any previous artists, regardless
of their zorder.
cache_frame_data : bool, default: True
Whether frame data is cached. Disabling cache might be helpful when
frames contain large objects.
"""
def __init__(self, fig, func, frames=None, init_func=None, fargs=None,
save_count=None, *, cache_frame_data=True, **kwargs):
if fargs:
self._args = fargs
else:
self._args = ()
self._func = func
self._init_func = init_func
# Amount of framedata to keep around for saving movies. This is only
        # used if we don't know how many frames there will be: when *frames*
        # is an iterator without a length or a generator function.
self.save_count = save_count
# Set up a function that creates a new iterable when needed. If nothing
# is passed in for frames, just use itertools.count, which will just
# keep counting from 0. A callable passed in for frames is assumed to
# be a generator. An iterable will be used as is, and anything else
# will be treated as a number of frames.
if frames is None:
self._iter_gen = itertools.count
elif callable(frames):
self._iter_gen = frames
elif np.iterable(frames):
if kwargs.get('repeat', True):
self._tee_from = frames
def iter_frames(frames=frames):
this, self._tee_from = itertools.tee(self._tee_from, 2)
yield from this
self._iter_gen = iter_frames
else:
self._iter_gen = lambda: iter(frames)
if hasattr(frames, '__len__'):
self.save_count = len(frames)
else:
self._iter_gen = lambda: iter(range(frames))
self.save_count = frames
if self.save_count is None:
            # If save_count was not provided and could not be inferred, default to 100.
self.save_count = 100
else:
# itertools.islice returns an error when passed a numpy int instead
# of a native python int (https://bugs.python.org/issue30537).
# As a workaround, convert save_count to a native python int.
self.save_count = int(self.save_count)
self._cache_frame_data = cache_frame_data
# Needs to be initialized so the draw functions work without checking
self._save_seq = []
super().__init__(fig, **kwargs)
# Need to reset the saved seq, since right now it will contain data
# for a single frame from init, which is not what we want.
self._save_seq = []
def new_frame_seq(self):
# Use the generating function to generate a new frame sequence
return self._iter_gen()
def new_saved_frame_seq(self):
# Generate an iterator for the sequence of saved data. If there are
# no saved frames, generate a new frame sequence and take the first
# save_count entries in it.
if self._save_seq:
# While iterating we are going to update _save_seq
# so make a copy to safely iterate over
self._old_saved_seq = list(self._save_seq)
return iter(self._old_saved_seq)
else:
if self.save_count is not None:
return itertools.islice(self.new_frame_seq(), self.save_count)
else:
frame_seq = self.new_frame_seq()
def gen():
try:
for _ in range(100):
yield next(frame_seq)
except StopIteration:
pass
else:
_api.warn_deprecated(
"2.2", message="FuncAnimation.save has truncated "
"your animation to 100 frames. In the future, no "
"such truncation will occur; please pass "
"'save_count' accordingly.")
return gen()
def _init_draw(self):
super()._init_draw()
# Initialize the drawing either using the given init_func or by
# calling the draw function with the first item of the frame sequence.
# For blitting, the init_func should return a sequence of modified
# artists.
if self._init_func is None:
try:
frame_data = next(self.new_frame_seq())
except StopIteration:
# we can't start the iteration, it may have already been
# exhausted by a previous save or just be 0 length.
# warn and bail.
warnings.warn(
"Can not start iterating the frames for the initial draw. "
"This can be caused by passing in a 0 length sequence "
"for *frames*.\n\n"
"If you passed *frames* as a generator "
"it may be exhausted due to a previous display or save."
)
return
self._draw_frame(frame_data)
else:
self._drawn_artists = self._init_func()
if self._blit:
if self._drawn_artists is None:
raise RuntimeError('The init_func must return a '
'sequence of Artist objects.')
for a in self._drawn_artists:
a.set_animated(self._blit)
self._save_seq = []
def _draw_frame(self, framedata):
if self._cache_frame_data:
# Save the data for potential saving of movies.
self._save_seq.append(framedata)
# Make sure to respect save_count (keep only the last save_count
# around)
self._save_seq = self._save_seq[-self.save_count:]
# Call the func with framedata and args. If blitting is desired,
# func needs to return a sequence of any artists that were modified.
self._drawn_artists = self._func(framedata, *self._args)
if self._blit:
err = RuntimeError('The animation function must return a sequence '
'of Artist objects.')
try:
# check if a sequence
iter(self._drawn_artists)
except TypeError:
raise err from None
# check each item if it's artist
for i in self._drawn_artists:
if not isinstance(i, mpl.artist.Artist):
raise err
self._drawn_artists = sorted(self._drawn_artists,
key=lambda x: x.get_zorder())
for a in self._drawn_artists:
a.set_animated(self._blit) | PypiClean |
/IsPycharmRun-1.0.tar.gz/IsPycharmRun-1.0/poco/utils/simplerpc/jsonrpc/backend/flask.py | from __future__ import absolute_import
import copy
import json
import logging
import time
from uuid import uuid4
from flask import Blueprint, request, Response
from ..exceptions import JSONRPCInvalidRequestException
from ..jsonrpc import JSONRPCRequest
from ..manager import JSONRPCResponseManager
from ..utils import DatetimeDecimalEncoder
from ..dispatcher import Dispatcher
logger = logging.getLogger(__name__)
class JSONRPCAPI(object):
def __init__(self, dispatcher=None, check_content_type=True):
"""
:param dispatcher: methods dispatcher
:param check_content_type: if True - content-type must be
"application/json"
:return:
"""
self.dispatcher = dispatcher if dispatcher is not None \
else Dispatcher()
self.check_content_type = check_content_type
def as_blueprint(self, name=None):
blueprint = Blueprint(name if name else str(uuid4()), __name__)
blueprint.add_url_rule(
'/', view_func=self.jsonrpc, methods=['POST'])
blueprint.add_url_rule(
'/map', view_func=self.jsonrpc_map, methods=['GET'])
return blueprint
def as_view(self):
return self.jsonrpc
def jsonrpc(self):
request_str = self._get_request_str()
try:
jsonrpc_request = JSONRPCRequest.from_json(request_str)
except (TypeError, ValueError, JSONRPCInvalidRequestException):
response = JSONRPCResponseManager.handle(
request_str, self.dispatcher)
else:
jsonrpc_request.params = jsonrpc_request.params or {}
jsonrpc_request_params = copy.copy(jsonrpc_request.params)
t1 = time.time()
response = JSONRPCResponseManager.handle_request(
jsonrpc_request, self.dispatcher)
t2 = time.time()
logger.info('{0}({1}) {2:.2f} sec'.format(
jsonrpc_request.method, jsonrpc_request_params, t2 - t1))
if response:
response.serialize = self._serialize
response = response.json
return Response(response, content_type="application/json")
def jsonrpc_map(self):
""" Map of json-rpc available calls.
:return str:
"""
result = "<h1>JSON-RPC map</h1><pre>{0}</pre>".format("\n\n".join([
"{0}: {1}".format(fname, f.__doc__)
for fname, f in self.dispatcher.items()
]))
return Response(result)
def _get_request_str(self):
if self.check_content_type or request.data:
return request.data
return list(request.form.keys())[0]
@staticmethod
def _serialize(s):
return json.dumps(s, cls=DatetimeDecimalEncoder)
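# Illustrative sketch (not part of the original source): registering a method
# on the dispatcher and mounting the API on a Flask app. The route prefix and
# method name are assumptions; using ``Dispatcher.add_method`` as a decorator
# follows the json-rpc library convention.
def _example_mount(app):
    rpc = JSONRPCAPI()
    @rpc.dispatcher.add_method
    def ping(**kwargs):
        return 'pong'
    app.register_blueprint(rpc.as_blueprint(), url_prefix='/rpc')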
api = JSONRPCAPI() | PypiClean |
/NURBS-Python-3.8.0.tar.gz/NURBS-Python-3.8.0/geomdl/shapes/curve2d.py | from geomdl import NURBS
# Generates a NURBS circle from 9 control points
def full_circle(radius=1):
""" Generates a NURBS full circle from 9 control points.
:param radius: radius of the circle
:type radius: int, float
:return: a NURBS curve
:rtype: NURBS.Curve
"""
if radius <= 0:
raise ValueError("Curve radius cannot be less than and equal to zero")
# Control points for a unit circle
control_points = [[0.0, -1.0, 1.0], [-0.707, -0.707, 0.707], [-1.0, 0.0, 1.0],
[-0.707, 0.707, 0.707], [0.0, 1.0, 1.0], [0.707, 0.707, 0.707],
[1.0, 0.0, 1.0], [0.707, -0.707, 0.707], [0.0, -1.0, 1.0]]
# Set radius
ctrlpts = []
if radius != 1:
for point in control_points:
npt = [i * radius for i in point[0:2]]
npt.append(point[-1])
ctrlpts.append(npt)
else:
ctrlpts = control_points
# Generate the curve
curve = NURBS.Curve()
curve.degree = 2
curve.ctrlpts = ctrlpts
curve.knotvector = [0, 0, 0, 0.25, 0.25, 0.5, 0.5, 0.75, 0.75, 1, 1, 1]
# Return the generated curve
return curve
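# Illustrative sketch (not part of the original module): generating a circle of
# radius 2 and sampling points along it. ``delta``/``evaluate``/``curvepts``
# follow the geomdl 3.x curve API; treat them as assumptions here.
def _example_sample_circle():
    circle = full_circle(radius=2)
    circle.delta = 0.01  # evaluation step in the parametric domain
    circle.evaluate()
    return circle.curvepts  # list of evaluated (x, y) points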
# Generates a NURBS circle from 7 control points
def full_circle2(radius=1):
""" Generates a NURBS full circle from 7 control points.
:param radius: radius of the circle
:type radius: int, float
:return: a NURBS curve
:rtype: NURBS.Curve
"""
if radius <= 0:
raise ValueError("Curve radius cannot be less than and equal to zero")
# Control points for a unit circle
control_points = [[1.0, 0.5, 1.0], [0.0, 1.0, 0.5], [-1.0, 0.5, 1.0],
[-1.0, -0.5, 0.5], [0.0, -1.0, 1.0], [1.0, -0.5, 0.5],
[1.0, 0.5, 1.0]]
# Set radius
ctrlpts = []
if radius != 1:
for point in control_points:
npt = [i * radius for i in point[0:2]]
npt.append(point[-1])
ctrlpts.append(npt)
else:
ctrlpts = control_points
# Generate the curve
curve = NURBS.Curve()
curve.degree = 2
curve.ctrlpts = ctrlpts
curve.knotvector = [0, 0, 0, 0.33, 0.33, 0.66, 0.66, 1, 1, 1]
# Return the generated curve
return curve | PypiClean |
/Hikka_Pyro-2.0.66-py3-none-any.whl/pyrogram/types/inline_mode/inline_query_result_location.py |
import pyrogram
from pyrogram import raw, types
from .inline_query_result import InlineQueryResult
class InlineQueryResultLocation(InlineQueryResult):
"""A location on a map.
By default, the location will be sent by the user. Alternatively, you can use *input_message_content* to send a
message with the specified content instead of the location.
Parameters:
title (``str``):
Title for the result.
latitude (``float``):
Location latitude in degrees.
longitude (``float``):
Location longitude in degrees.
id (``str``, *optional*):
Unique identifier for this result, 1-64 bytes.
Defaults to a randomly generated UUID4.
horizontal_accuracy (``float``, *optional*)
The radius of uncertainty for the location, measured in meters; 0-1500.
live_period (``int``, *optional*):
Period in seconds for which the location can be updated, should be between 60 and 86400.
heading (``int``, *optional*):
For live locations, a direction in which the user is moving, in degrees.
Must be between 1 and 360 if specified.
proximity_alert_radius (``int``, *optional*):
For live locations, a maximum distance for proximity alerts about approaching another chat member,
in meters. Must be between 1 and 100000 if specified.
reply_markup (:obj:`~pyrogram.types.InlineKeyboardMarkup`, *optional*):
Inline keyboard attached to the message.
input_message_content (:obj:`~pyrogram.types.InputMessageContent`):
Content of the message to be sent instead of the file.
thumb_url (``str``, *optional*):
Url of the thumbnail for the result.
thumb_width (``int``, *optional*):
Thumbnail width.
thumb_height (``int``, *optional*):
Thumbnail height.
"""
def __init__(
self,
title: str,
latitude: float,
longitude: float,
horizontal_accuracy: float = None,
live_period: int = None,
heading: int = None,
proximity_alert_radius: int = None,
id: str = None,
reply_markup: "types.InlineKeyboardMarkup" = None,
input_message_content: "types.InputMessageContent" = None,
thumb_url: str = None,
thumb_width: int = 0,
thumb_height: int = 0
):
super().__init__("location", id, input_message_content, reply_markup)
self.title = title
self.latitude = latitude
self.longitude = longitude
self.horizontal_accuracy = horizontal_accuracy
self.live_period = live_period
self.heading = heading
self.proximity_alert_radius = proximity_alert_radius
self.thumb_url = thumb_url
self.thumb_width = thumb_width
self.thumb_height = thumb_height
async def write(self, client: "pyrogram.Client"):
return raw.types.InputBotInlineResult(
id=self.id,
type=self.type,
title=self.title,
send_message=(
await self.input_message_content.write(client, self.reply_markup)
if self.input_message_content
else raw.types.InputBotInlineMessageMediaGeo(
geo_point=raw.types.InputGeoPoint(
lat=self.latitude,
long=self.longitude
),
heading=self.heading,
period=self.live_period,
proximity_notification_radius=self.proximity_alert_radius,
reply_markup=await self.reply_markup.write(client) if self.reply_markup else None
)
)
) | PypiClean |
/ARS-0.5a2.zip/ARS-0.5a2/ars/model/physics/ode_objects_factories.py | import ode
from ...constants import G_VECTOR
#==============================================================================
# Environment
#==============================================================================
def create_ode_world(gravity=G_VECTOR, ERP=0.8, CFM=1E-10):
"""Create an ODE world object.
:param gravity: gravity acceleration vector
:type gravity: 3-sequence of floats
:param ERP: Error Reduction Parameter
:type ERP: float
:param CFM: Constraint Force Mixing
:type CFM: float
:return: world
:rtype: :class:`ode.World`
"""
world = ode.World()
world.setGravity(gravity)
world.setERP(ERP)
world.setCFM(CFM)
return world
#==============================================================================
# Bodies
#==============================================================================
def create_ode_box(world, size, mass=None, density=None):
"""Create an ODE body with box-like mass parameters.
:param world:
:type world: :class:`ode.World`
:param size:
:type size: 3-sequence of floats
:param mass:
:type mass: float or None
:param density:
:type density: float or None
:return: box body
:rtype: :class:`ode.Body`
"""
body = ode.Body(world)
if mass is not None:
m = ode.Mass()
m.setBoxTotal(mass, *size)
body.setMass(m)
elif density is not None:
m = ode.Mass()
m.setBox(density, *size)
body.setMass(m)
return body
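# Illustrative sketch (not part of the original module): dropping a 1 m box
# under gravity for one simulated second. Step size and initial height are
# assumptions for the example.
def _example_drop_box():
    world = create_ode_world()
    box = create_ode_box(world, (1.0, 1.0, 1.0), mass=5.0)
    box.setPosition((0.0, 0.0, 10.0))
    for _ in range(100):
        world.step(0.01)  # advance the simulation by 10 ms
    return box.getPosition()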
def create_ode_sphere(world, radius, mass=None, density=None):
"""Create an ODE body with sphere-like mass parameters.
:param world:
:type world: :class:`ode.World`
:param radius:
:type radius: float
:param mass:
:type mass: float or None
:param density:
:type density: float or None
:return: sphere body
:rtype: :class:`ode.Body`
"""
body = ode.Body(world)
if mass is not None:
m = ode.Mass()
m.setSphereTotal(mass, radius)
body.setMass(m)
elif density is not None:
m = ode.Mass()
m.setSphere(density, radius)
body.setMass(m)
return body
def create_ode_capsule(world, length, radius, mass=None, density=None):
"""Create an ODE body with capsule-like mass parameters.
:param world:
:type world: :class:`ode.World`
:param length:
:type length: float
:param radius:
:type radius: float
:param mass:
:type mass: float or None
:param density:
:type density: float or None
:return: capsule body
:rtype: :class:`ode.Body`
"""
capsule_direction = 3 # z-axis
body = ode.Body(world)
if mass is not None:
m = ode.Mass()
m.setCapsuleTotal(mass, capsule_direction, radius, length)
body.setMass(m)
elif density is not None:
m = ode.Mass()
m.setCapsule(density, capsule_direction, radius, length)
body.setMass(m)
# set parameters for drawing the body
# TODO: delete this, because it is related to the original implementation
body.shape = "capsule"
body.length = length
body.radius = radius
return body
def create_ode_cylinder(world, length, radius, mass=None, density=None):
"""Create an ODE body with cylinder-like mass parameters.
:param world:
:type world: :class:`ode.World`
:param length:
:type length: float
:param radius:
:type radius: float
:param mass:
:type mass: float or None
:param density:
:type density: float or None
:return: cylinder body
:rtype: :class:`ode.Body`
"""
    cylinder_direction = 3 # z-axis
body = ode.Body(world)
if mass is not None:
m = ode.Mass()
        m.setCylinderTotal(mass, cylinder_direction, radius, length)
body.setMass(m)
elif density is not None:
m = ode.Mass()
m.setCylinder(density, cylinderDirection, radius, length)
body.setMass(m)
return body
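# A minimal usage sketch (requires a working PyODE install; the position,
# size and time step below are arbitrary):
#   world = create_ode_world()
#   box = create_ode_box(world, size=(1.0, 1.0, 1.0), mass=1.0)
#   box.setPosition((0.0, 10.0, 0.0))
#   world.step(0.01)  # advance the simulation by 10 ms
#   print(box.getPosition(), box.getLinearVel())  # gravity pulls the box down | PypiClean |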
/Coveralls-HG-2.0.0.2.tar.gz/Coveralls-HG-2.0.0.2/coveralls_hg/api.py | import json
import hashlib
import requests
from coverage import __version__ as coverage_version
# coverage 3 and 4 have API changes; since I use PyDev, which is still
# stuck on coverage 3 while the rest of the world has moved on, I need to
# support both.
from coverage import coverage as Coverage
BASE = 'https://coveralls.io'
CLIENT = 'coveralls-python-hg'
_API = 'api/v1/jobs'
def _generate_source_files(file_path_name='.coverage', strip_path=None):
"Use the .coverage data file to generate the coverage lines for coveralls."
coverage = Coverage(data_file=file_path_name)
coverage.load()
tmp = list()
for file_name in coverage.data.measured_files():
analysis = coverage._analyze(file_name)  # pylint:disable=protected-access
# pylint:disable=no-member
if coverage_version.startswith('3'): # pragma: no cover
length = len(analysis.parser.lines) + 1
md5 = hashlib.md5(analysis.parser.text.encode('UTF-8')).hexdigest()
else: # pragma: no cover
source = analysis.file_reporter.source()
md5 = hashlib.md5(source.encode('UTF-8')).hexdigest()
length = len(source.split('\n'))
lines = list()
for line_no in range(1, length):
if line_no in analysis.missing:
lines.append(0)
elif line_no in analysis.statements:
lines.append(1)
else:
lines.append(None)
if strip_path is not None:
if file_name.startswith(strip_path):
file_name = file_name[len(strip_path):]
if file_name.startswith('/'):
file_name = file_name[1:]
tmp.append({'name':file_name,
'source_digest':md5,
'coverage':lines})
return tmp
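# Each entry in the returned list has the shape coveralls expects; an
# illustrative (made-up) example:
#   {'name': 'pkg/mod.py',
#    'source_digest': '5eb63bbbe01eeed093cb22bb8f5acdc3',
#    'coverage': [1, 0, None]}  # 1 = hit, 0 = missed, None = not a statement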
def _fetch_json_data(url):
"Return the json data"
got = requests.get(url)
if got.status_code != 200:
got.raise_for_status()
data = got.json()
return data
class API(object):
"""API Front-end to coveralls.io
At the moment only fetching builds and submitting builds is supported.
"""
def __init__(self, user, repo, token, dvcs='bitbucket'):
self.settings = {'USER':user, 'REPO':repo, 'TOKEN':token,
'BASE':BASE, 'DVCS':dvcs, 'FORM':'json',
'UPLOAD':dict()}
self.url_post = '/'.join([BASE, _API])
def _url(self, *keys, add_form=False):
"Build the url using the settings and keys."
#If a key is not in the settings, it will be included as the key itself.
tmp = list()
for key in keys:
if key in self.settings:
tmp.append(self.settings[key])
else:
tmp.append(key)
url = '/'.join(tmp)
if add_form:
url = url + '.' + self.settings['FORM']
return url
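# For example, with the default settings this builds the builds-list URL:
#   self._url('BASE', 'DVCS', 'USER', 'REPO', add_form=True)
#   -> 'https://coveralls.io/bitbucket/<user>/<repo>.json'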
def list_builds(self):
"Yields all builds, newest first."
url = self._url('BASE', 'DVCS', 'USER', 'REPO', add_form=True)
url = url + '?page='
page = 0
while True:
page += 1
data = _fetch_json_data(url + str(page))
for build in data['builds']:
yield build
if len(data['builds']) == 0:
break
def builds(self, commit_sha):
"Return builds (singular for list_builds)."
url = self._url('BASE', 'builds', commit_sha, add_form=True)
data = _fetch_json_data(url)
return data
def _add_values(self, dictionary):
""
for key, value in dictionary.items():
if value is not None:
self.settings['UPLOAD'][key] = value
def set_repo_token(self, token=None):
"Set the coveralls repository token"
if token is None:
token = self.settings['TOKEN']
self.settings['UPLOAD']['repo_token'] = token
def set_service_values(self, name=CLIENT, number=None, job_id=None):
"Set service values."
tmp = {'service_name':name, 'service_number':number,
'service_job_id':job_id}
self._add_values(tmp)
def set_build_values(self, build_url=None, branch=None, pull_request=''):
"Set build values."
if pull_request.strip().lower() == 'false' or len(pull_request) == 0:
pull_request = None
tmp = {'service_build_url':build_url, 'service_branch':branch,
'service_pull_request':pull_request}
self._add_values(tmp)
def set_source_files(self, coverage_file, strip_path=None):
"set the source files"
self.settings['UPLOAD']['source_files'] = \
_generate_source_files(coverage_file, strip_path)
def _assert_git_dict(self):
"Make sure the git headers are set."
if 'git' not in self.settings['UPLOAD']:
self.settings['UPLOAD']['git'] = {'head':dict()}
def set_dvcs_user(self, name_author, email_author,
name_committer=None, email_committer=None):
"Set repository user details."
self._assert_git_dict()
if name_committer is None:
name_committer = name_author
if email_committer is None:
email_committer = email_author
tmp = {'author_name':name_author, 'author_email':email_author,
'committer_name':name_committer, 'committer_email':email_committer}
for key, value in tmp.items():
self.settings['UPLOAD']['git']['head'][key] = value
def set_dvcs_commit(self, commit_id, message, branch, remotes=None):
"set repository commit details."
self._assert_git_dict()
upload = self.settings['UPLOAD']
upload['git']['head']['id'] = commit_id
upload['git']['head']['message'] = message
upload['git']['branch'] = branch
if remotes is not None:
upload['git']['remotes'] = remotes
def _check_upload(self):
"Fill in defaults and raise ValueError when required upload data is missing."
upload = self.settings['UPLOAD']
tmp = ['set_source_files', 'set_dvcs_user', 'set_dvcs_commit']
if 'service_name' not in upload:
self.set_service_values()
if 'repo_token' not in upload:
self.set_repo_token()
if 'source_files' in upload:
tmp.remove('set_source_files')
if 'git' not in upload:
tmp.remove('set_dvcs_user')
tmp.remove('set_dvcs_commit')
else:
if 'author_name' in upload['git']['head']:
tmp.remove('set_dvcs_user')
if 'id' in upload['git']['head']:
tmp.remove('set_dvcs_commit')
if len(tmp) > 0:
text = 'Missing upload data, please set data with: %s' % str(tmp)
raise ValueError(text)
return upload
def upload_coverage(self):
"Upload coverage data."
self._check_upload()
json_file = json.dumps(self.settings['UPLOAD'])
files = {'json_file':json_file}
post = requests.post(self.url_post, files=files)
if post.status_code != 200:
post.raise_for_status()
else:
return True
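# A minimal end-to-end sketch (user, repo, token and paths are placeholders;
# note that upload_coverage() performs a real HTTP POST to coveralls.io):
#   api = API('myuser', 'myrepo', 'secret-repo-token')
#   api.set_source_files('.coverage', strip_path='/home/me/project')
#   api.set_dvcs_user('Jane Doe', '[email protected]')
#   api.set_dvcs_commit('abc123', 'Fix widget rendering', 'default')
#   api.upload_coverage() | PypiClean |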
/Cubane-1.0.11.tar.gz/Cubane-1.0.11/cubane/backend/static/cubane/backend/tinymce/js/tinymce/plugins/image/plugin.min.js | !function(){"use strict";var e=tinymce.util.Tools.resolve("tinymce.PluginManager"),t=tinymce.util.Tools.resolve("tinymce.util.Tools"),n={hasDimensions:function(e){return!1!==e.settings.image_dimensions},hasAdvTab:function(e){return!0===e.settings.image_advtab},getPrependUrl:function(e){return e.getParam("image_prepend_url","")},getClassList:function(e){return e.getParam("image_class_list")},hasDescription:function(e){return!1!==e.settings.image_description},hasImageTitle:function(e){return!0===e.settings.image_title},hasImageCaption:function(e){return!0===e.settings.image_caption},getImageList:function(e){return e.getParam("image_list",!1)},hasUploadUrl:function(e){return e.getParam("images_upload_url",!1)},hasUploadHandler:function(e){return e.getParam("images_upload_handler",!1)},getUploadUrl:function(e){return e.getParam("images_upload_url")},getUploadHandler:function(e){return e.getParam("images_upload_handler")},getUploadBasePath:function(e){return e.getParam("images_upload_base_path")},getUploadCredentials:function(e){return e.getParam("images_upload_credentials")}},a="undefined"!=typeof window?window:Function("return this;")(),i=function(e,t){for(var n=t!==undefined&&null!==t?t:a,i=0;i<e.length&&n!==undefined&&null!==n;++i)n=n[e[i]];return n},r=function(e,t){var n=e.split(".");return i(n,t)},o={getOrDie:function(e,t){var n=r(e,t);if(n===undefined||null===n)throw e+" not available on this browser";return n}};function l(){return new(o.getOrDie("FileReader"))}var s=tinymce.util.Tools.resolve("tinymce.util.Promise"),c=tinymce.util.Tools.resolve("tinymce.util.XHR"),u=function(e,t){return Math.max(parseInt(e,10),parseInt(t,10))},g={getImageSize:function(e,t){var n=document.createElement("img");function a(e,a){n.parentNode&&n.parentNode.removeChild(n),t({width:e,height:a})}n.onload=function(){a(u(n.width,n.clientWidth),u(n.height,n.clientHeight))},n.onerror=function(){a(0,0)};var i=n.style;i.visibility="hidden",i.position="fixed",i.bottom=i.left="0px",i.width=i.height="auto",document.body.appendChild(n),n.src=e},buildListItems:function(e,n,a){return function i(e,a){return a=a||[],t.each(e,function(e){var t={text:e.text||e.title};e.menu?t.menu=i(e.menu):(t.value=e.value,n(t)),a.push(t)}),a}(e,a||[])},removePixelSuffix:function(e){return e&&(e=e.replace(/px$/,"")),e},addPixelSuffix:function(e){return e.length>0&&/^[0-9]+$/.test(e)&&(e+="px"),e},mergeMargins:function(e){if(e.margin){var t=e.margin.split(" ");switch(t.length){case 1:e["margin-top"]=e["margin-top"]||t[0],e["margin-right"]=e["margin-right"]||t[0],e["margin-bottom"]=e["margin-bottom"]||t[0],e["margin-left"]=e["margin-left"]||t[0];break;case 2:e["margin-top"]=e["margin-top"]||t[0],e["margin-right"]=e["margin-right"]||t[1],e["margin-bottom"]=e["margin-bottom"]||t[0],e["margin-left"]=e["margin-left"]||t[1];break;case 3:e["margin-top"]=e["margin-top"]||t[0],e["margin-right"]=e["margin-right"]||t[1],e["margin-bottom"]=e["margin-bottom"]||t[2],e["margin-left"]=e["margin-left"]||t[1];break;case 4:e["margin-top"]=e["margin-top"]||t[0],e["margin-right"]=e["margin-right"]||t[1],e["margin-bottom"]=e["margin-bottom"]||t[2],e["margin-left"]=e["margin-left"]||t[3]}delete e.margin}return e},createImageList:function(e,t){var a=n.getImageList(e);"string"==typeof a?c.send({url:a,success:function(e){t(JSON.parse(e))}}):"function"==typeof a?a(t):t(a)},waitLoadImage:function(e,t,a){function 
i(){a.onload=a.onerror=null,e.selection&&(e.selection.select(a),e.nodeChanged())}a.onload=function(){t.width||t.height||!n.hasDimensions(e)||e.dom.setAttribs(a,{width:a.clientWidth,height:a.clientHeight}),i()},a.onerror=i},blobToDataUri:function(e){return new s(function(t,n){var a=new l;a.onload=function(){t(a.result)},a.onerror=function(){n(l.error.message)},a.readAsDataURL(e)})}},d={makeTab:function(e,t){return{title:"Advanced",type:"form",pack:"start",items:[{label:"Style",name:"style",type:"textbox",onchange:(a=e,function(e){var t=a.dom,i=e.control.rootControl;if(n.hasAdvTab(a)){var r=i.toJSON(),o=t.parseStyle(r.style);i.find("#vspace").value(""),i.find("#hspace").value(""),((o=g.mergeMargins(o))["margin-top"]&&o["margin-bottom"]||o["margin-right"]&&o["margin-left"])&&(o["margin-top"]===o["margin-bottom"]?i.find("#vspace").value(g.removePixelSuffix(o["margin-top"])):i.find("#vspace").value(""),o["margin-right"]===o["margin-left"]?i.find("#hspace").value(g.removePixelSuffix(o["margin-right"])):i.find("#hspace").value("")),o["border-width"]&&i.find("#border").value(g.removePixelSuffix(o["border-width"])),i.find("#style").value(t.serializeStyle(t.parseStyle(t.serializeStyle(o))))}})},{type:"form",layout:"grid",packV:"start",columns:2,padding:0,alignH:["left","right"],defaults:{type:"textbox",maxWidth:50,onchange:function(n){t(e,n.control.rootControl)}},items:[{label:"Vertical space",name:"vspace"},{label:"Horizontal space",name:"hspace"},{label:"Border",name:"border"}]}]};var a}},m=function(e,t){e.state.set("oldVal",e.value()),t.state.set("oldVal",t.value())},f=function(e,t){var n=e.find("#width")[0],a=e.find("#height")[0],i=e.find("#constrain")[0];n&&a&&i&&t(n,a,i.checked())},p=function(e,t,n){var a=e.state.get("oldVal"),i=t.state.get("oldVal"),r=e.value(),o=t.value();n&&a&&i&&r&&o&&(r!==a?(o=Math.round(r/a*o),isNaN(o)||t.value(o)):(r=Math.round(o/i*r),isNaN(r)||e.value(r))),m(e,t)},h=function(e){f(e,p)},b={createUi:function(){var e=function(e){h(e.control.rootControl)};return{type:"container",label:"Dimensions",layout:"flex",align:"center",spacing:5,items:[{name:"width",type:"textbox",maxLength:5,size:5,onchange:e,ariaLabel:"Width"},{type:"label",text:"x"},{name:"height",type:"textbox",maxLength:5,size:5,onchange:e,ariaLabel:"Height"},{name:"constrain",type:"checkbox",checked:!0,text:"Constrain proportions"}]}},syncSize:function(e){f(e,m)},updateSize:h},v=function(e){e.meta=e.control.rootControl.toJSON()},y=function(e,a){var i=[{name:"src",type:"filepicker",filetype:"image",label:"Source",autofocus:!0,onchange:function(a){var i,r,o,l,s,c,u,d,m;r=e,c=(i=a).meta||{},u=i.control,d=u.rootControl,(m=d.find("#image-list")[0])&&m.value(r.convertURL(u.value(),"src")),t.each(c,function(e,t){d.find("#"+t).value(e)}),c.width||c.height||(o=r.convertURL(u.value(),"src"),l=n.getPrependUrl(r),s=new RegExp("^(?:[a-z]+:)?//","i"),l&&!s.test(o)&&o.substring(0,l.length)!==l&&(o=l+o),u.value(o),g.getImageSize(r.documentBaseURI.toAbsolute(u.value()),function(e){e.width&&e.height&&n.hasDimensions(r)&&(d.find("#width").value(e.width),d.find("#height").value(e.height),b.updateSize(d))}))},onbeforecall:v},a];return n.hasDescription(e)&&i.push({name:"alt",type:"textbox",label:"Image description"}),n.hasImageTitle(e)&&i.push({name:"title",type:"textbox",label:"Image Title"}),n.hasDimensions(e)&&i.push(b.createUi()),n.getClassList(e)&&i.push({name:"class",type:"listbox",label:"Class",values:g.buildListItems(n.getClassList(e),function(t){t.value&&(t.textStyle=function(){return 
e.formatter.getCssText({inline:"img",classes:[t.value]})})})}),n.hasImageCaption(e)&&i.push({name:"caption",type:"checkbox",label:"Caption"}),i},x={makeTab:function(e,t){return{title:"General",type:"form",items:y(e,t)}},getGeneralItems:y},w=function(){return o.getOrDie("URL")},S=function(e){return w().createObjectURL(e)},U=function(e){w().revokeObjectURL(e)},T=tinymce.util.Tools.resolve("tinymce.ui.Factory"),C=function(){},I=function(e,t){return e?e.replace(/\/$/,"")+"/"+t.replace(/^\//,""):t};function P(e){var n=function(t,n,a,i){var r,l;(r=new function(){return new(o.getOrDie("XMLHttpRequest"))}).open("POST",e.url),r.withCredentials=e.credentials,r.upload.onprogress=function(e){i(e.loaded/e.total*100)},r.onerror=function(){a("Image upload failed due to a XHR Transport error. Code: "+r.status)},r.onload=function(){var t;r.status<200||r.status>=300?a("HTTP Error: "+r.status):(t=JSON.parse(r.responseText))&&"string"==typeof t.location?n(I(e.basePath,t.location)):a("Invalid JSON: "+r.responseText)},(l=new FormData).append("file",t.blob(),t.filename()),r.send(l)};return e=t.extend({credentials:!1,handler:n},e),{upload:function(t){return e.url||e.handler!==n?(a=t,i=e.handler,new s(function(e,t){try{i(a,e,t,C)}catch(n){t(n.message)}})):s.reject("Upload url missing from the settings.");var a,i}}}var L=function(e){return function(t){var a=T.get("Throbber"),i=t.control.rootControl,r=new a(i.getEl()),o=t.control.value(),l=S(o),s=P({url:n.getUploadUrl(e),basePath:n.getUploadBasePath(e),credentials:n.getUploadCredentials(e),handler:n.getUploadHandler(e)}),c=function(){r.hide(),U(l)};return r.show(),g.blobToDataUri(o).then(function(t){var n=e.editorUpload.blobCache.create({blob:o,blobUri:l,name:o.name?o.name.replace(/\.[^\.]+$/,""):null,base64:t.split(",")[1]});return s.upload(n).then(function(e){var t=i.find("#src");return t.value(e),i.find("tabpanel")[0].activateTab(0),t.fire("change"),c(),e})})["catch"](function(t){e.windowManager.alert(t),c()})}},_=".jpg,.jpeg,.png,.gif",N={makeTab:function(e){return{title:"Upload",type:"form",layout:"flex",direction:"column",align:"stretch",padding:"20 20 20 20",items:[{type:"container",layout:"flex",direction:"column",align:"center",spacing:10,items:[{text:"Browse for an image",type:"browsebutton",accept:_,onchange:L(e)},{text:"OR",type:"label"}]},{text:"Drop an image here",type:"dropzone",accept:_,height:100,onchange:L(e)}]}}};function A(e){var a=function(e,t){if(n.hasAdvTab(e)){var a=e.dom,i=t.toJSON(),r=a.parseStyle(i.style);r=g.mergeMargins(r),i.vspace&&(r["margin-top"]=r["margin-bottom"]=g.addPixelSuffix(i.vspace)),i.hspace&&(r["margin-left"]=r["margin-right"]=g.addPixelSuffix(i.hspace)),i.border&&(r["border-width"]=g.addPixelSuffix(i.border)),t.find("#style").value(a.serializeStyle(a.parseStyle(a.serializeStyle(r))))}};function i(i){var r,o,l,s,c={},u=e.dom;function m(){var n,i;b.updateSize(r),a(e,r),(c=t.extend(c,r.toJSON())).alt||(c.alt=""),c.title||(c.title=""),""===c.width&&(c.width=null),""===c.height&&(c.height=null),c.style||(c.style=null),c={src:c.src,alt:c.alt,title:c.title,width:c.width,height:c.height,style:c.style,caption:c.caption,"class":c["class"]},e.undoManager.transact(function(){if(c.src){if(""===c.title&&(c.title=null),o?u.setAttribs(o,c):(c.id="__mcenew",e.focus(),e.selection.setContent(u.createHTML("img",c)),o=u.get("__mcenew"),u.setAttrib(o,"id",null)),e.editorUpload.uploadImagesAuto(),!1===c.caption&&u.is(o.parentNode,"figure.image")&&(n=o.parentNode,u.insertAfter(o,n),u.remove(n)),!0!==c.caption)g.waitLoadImage(e,c,o);else 
if(!u.is(o.parentNode,"figure.image")){i=o,o=o.cloneNode(!0),(n=u.create("figure",{"class":"image"})).appendChild(o),n.appendChild(u.create("figcaption",{contentEditable:!0},"Caption")),n.contentEditable=!1;var t=u.getParent(i,function(t){return e.schema.getTextBlockElements()[t.nodeName]});t?u.split(t,i,n):u.replace(n,i),e.selection.select(n)}}else if(o){var a=u.is(o.parentNode,"figure.image")?o.parentNode:o;u.remove(a),e.focus(),e.nodeChanged(),u.isEmpty(e.getBody())&&(e.setContent(""),e.selection.setCursorLocation())}})}if(o=e.selection.getNode(),(l=u.getParent(o,"figure.image"))&&(o=u.select("img",l)[0]),o&&("IMG"!==o.nodeName||o.getAttribute("data-mce-object")||o.getAttribute("data-mce-placeholder"))&&(o=null),o&&(c={src:u.getAttrib(o,"src"),alt:u.getAttrib(o,"alt"),title:u.getAttrib(o,"title"),"class":u.getAttrib(o,"class"),width:u.getAttrib(o,"width"),height:u.getAttrib(o,"height"),caption:!!l}),i&&(s={type:"listbox",label:"Image list",name:"image-list",values:g.buildListItems(i,function(t){t.value=e.convertURL(t.value||t.url,"src")},[{text:"None",value:""}]),value:c.src&&e.convertURL(c.src,"src"),onselect:function(e){var t=r.find("#alt");(!t.value()||e.lastControl&&t.value()===e.lastControl.text())&&t.value(e.control.text()),r.find("#src").value(e.control.value()).fire("change")},onPostRender:function(){s=this}}),n.hasAdvTab(e)||n.hasUploadUrl(e)||n.hasUploadHandler(e)){var f=[x.makeTab(e,s)];n.hasAdvTab(e)&&(o&&(o.style.marginLeft&&o.style.marginRight&&o.style.marginLeft===o.style.marginRight&&(c.hspace=g.removePixelSuffix(o.style.marginLeft)),o.style.marginTop&&o.style.marginBottom&&o.style.marginTop===o.style.marginBottom&&(c.vspace=g.removePixelSuffix(o.style.marginTop)),o.style.borderWidth&&(c.border=g.removePixelSuffix(o.style.borderWidth)),c.style=e.dom.serializeStyle(e.dom.parseStyle(e.dom.getAttrib(o,"style")))),f.push(d.makeTab(e,a))),(n.hasUploadUrl(e)||n.hasUploadHandler(e))&&f.push(N.makeTab(e)),r=e.windowManager.open({title:"Insert/edit image",data:c,bodyType:"tabpanel",body:f,onSubmit:m})}else r=e.windowManager.open({title:"Insert/edit image",data:c,body:x.getGeneralItems(e,s),onSubmit:m});b.syncSize(r)}return{open:function(){g.createImageList(e,i)}}}var k=function(e){e.addCommand("mceImage",A(e).open)},z=function(e){return function(n){for(var a,i,r=n.length,o=function(t){t.attr("contenteditable",e?"true":null)};r--;)a=n[r],(i=a.attr("class"))&&/\bimage\b/.test(i)&&(a.attr("contenteditable",e?"false":null),t.each(a.getAll("figcaption"),o))}},R=function(e){e.on("preInit",function(){e.parser.addNodeFilter("figure",z(!0)),e.serializer.addNodeFilter("figure",z(!1))})},D=function(e){e.addButton("image",{icon:"image",tooltip:"Insert/edit image",onclick:A(e).open,stateSelector:"img:not([data-mce-object],[data-mce-placeholder]),figure.image"}),e.addMenuItem("image",{icon:"image",text:"Image",onclick:A(e).open,context:"insert",prependToContext:!0})};e.add("image",function(e){R(e),D(e),k(e)})}(); | PypiClean |
/githubkit-0.10.7-py3-none-any.whl/githubkit/rest/secret_scanning.py | from typing import TYPE_CHECKING, Dict, List, Union, Literal, Optional, overload
from pydantic import BaseModel, parse_obj_as
from githubkit.utils import UNSET, Missing, exclude_unset
from .types import ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBodyType
from .models import (
BasicError,
SecretScanningAlert,
SecretScanningLocation,
OrganizationSecretScanningAlert,
ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBody,
EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
)
if TYPE_CHECKING:
from githubkit import GitHubCore
from githubkit.response import Response
class SecretScanningClient:
_REST_API_VERSION = "2022-11-28"
def __init__(self, github: "GitHubCore"):
self._github = github
def list_alerts_for_enterprise(
self,
enterprise: str,
state: Missing[Literal["open", "resolved"]] = UNSET,
secret_type: Missing[str] = UNSET,
resolution: Missing[str] = UNSET,
sort: Missing[Literal["created", "updated"]] = "created",
direction: Missing[Literal["asc", "desc"]] = "desc",
per_page: Missing[int] = 30,
before: Missing[str] = UNSET,
after: Missing[str] = UNSET,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[OrganizationSecretScanningAlert]]":
url = f"/enterprises/{enterprise}/secret-scanning/alerts"
params = {
"state": state,
"secret_type": secret_type,
"resolution": resolution,
"sort": sort,
"direction": direction,
"per_page": per_page,
"before": before,
"after": after,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return self._github.request(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[OrganizationSecretScanningAlert],
error_models={
"404": BasicError,
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
async def async_list_alerts_for_enterprise(
self,
enterprise: str,
state: Missing[Literal["open", "resolved"]] = UNSET,
secret_type: Missing[str] = UNSET,
resolution: Missing[str] = UNSET,
sort: Missing[Literal["created", "updated"]] = "created",
direction: Missing[Literal["asc", "desc"]] = "desc",
per_page: Missing[int] = 30,
before: Missing[str] = UNSET,
after: Missing[str] = UNSET,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[OrganizationSecretScanningAlert]]":
url = f"/enterprises/{enterprise}/secret-scanning/alerts"
params = {
"state": state,
"secret_type": secret_type,
"resolution": resolution,
"sort": sort,
"direction": direction,
"per_page": per_page,
"before": before,
"after": after,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return await self._github.arequest(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[OrganizationSecretScanningAlert],
error_models={
"404": BasicError,
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
def list_alerts_for_org(
self,
org: str,
state: Missing[Literal["open", "resolved"]] = UNSET,
secret_type: Missing[str] = UNSET,
resolution: Missing[str] = UNSET,
sort: Missing[Literal["created", "updated"]] = "created",
direction: Missing[Literal["asc", "desc"]] = "desc",
page: Missing[int] = 1,
per_page: Missing[int] = 30,
before: Missing[str] = UNSET,
after: Missing[str] = UNSET,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[OrganizationSecretScanningAlert]]":
url = f"/orgs/{org}/secret-scanning/alerts"
params = {
"state": state,
"secret_type": secret_type,
"resolution": resolution,
"sort": sort,
"direction": direction,
"page": page,
"per_page": per_page,
"before": before,
"after": after,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return self._github.request(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[OrganizationSecretScanningAlert],
error_models={
"404": BasicError,
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
async def async_list_alerts_for_org(
self,
org: str,
state: Missing[Literal["open", "resolved"]] = UNSET,
secret_type: Missing[str] = UNSET,
resolution: Missing[str] = UNSET,
sort: Missing[Literal["created", "updated"]] = "created",
direction: Missing[Literal["asc", "desc"]] = "desc",
page: Missing[int] = 1,
per_page: Missing[int] = 30,
before: Missing[str] = UNSET,
after: Missing[str] = UNSET,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[OrganizationSecretScanningAlert]]":
url = f"/orgs/{org}/secret-scanning/alerts"
params = {
"state": state,
"secret_type": secret_type,
"resolution": resolution,
"sort": sort,
"direction": direction,
"page": page,
"per_page": per_page,
"before": before,
"after": after,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return await self._github.arequest(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[OrganizationSecretScanningAlert],
error_models={
"404": BasicError,
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
def list_alerts_for_repo(
self,
owner: str,
repo: str,
state: Missing[Literal["open", "resolved"]] = UNSET,
secret_type: Missing[str] = UNSET,
resolution: Missing[str] = UNSET,
sort: Missing[Literal["created", "updated"]] = "created",
direction: Missing[Literal["asc", "desc"]] = "desc",
page: Missing[int] = 1,
per_page: Missing[int] = 30,
before: Missing[str] = UNSET,
after: Missing[str] = UNSET,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[SecretScanningAlert]]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts"
params = {
"state": state,
"secret_type": secret_type,
"resolution": resolution,
"sort": sort,
"direction": direction,
"page": page,
"per_page": per_page,
"before": before,
"after": after,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return self._github.request(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[SecretScanningAlert],
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
async def async_list_alerts_for_repo(
self,
owner: str,
repo: str,
state: Missing[Literal["open", "resolved"]] = UNSET,
secret_type: Missing[str] = UNSET,
resolution: Missing[str] = UNSET,
sort: Missing[Literal["created", "updated"]] = "created",
direction: Missing[Literal["asc", "desc"]] = "desc",
page: Missing[int] = 1,
per_page: Missing[int] = 30,
before: Missing[str] = UNSET,
after: Missing[str] = UNSET,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[SecretScanningAlert]]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts"
params = {
"state": state,
"secret_type": secret_type,
"resolution": resolution,
"sort": sort,
"direction": direction,
"page": page,
"per_page": per_page,
"before": before,
"after": after,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return await self._github.arequest(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[SecretScanningAlert],
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
def get_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[SecretScanningAlert]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts/{alert_number}"
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return self._github.request(
"GET",
url,
headers=exclude_unset(headers),
response_model=SecretScanningAlert,
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
async def async_get_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[SecretScanningAlert]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts/{alert_number}"
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return await self._github.arequest(
"GET",
url,
headers=exclude_unset(headers),
response_model=SecretScanningAlert,
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
@overload
def update_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
headers: Optional[Dict[str, str]] = None,
data: ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBodyType,
) -> "Response[SecretScanningAlert]":
...
@overload
def update_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
data: Literal[UNSET] = UNSET,
headers: Optional[Dict[str, str]] = None,
state: Literal["open", "resolved"],
resolution: Missing[
Union[
None, Literal["false_positive", "wont_fix", "revoked", "used_in_tests"]
]
] = UNSET,
resolution_comment: Missing[Union[str, None]] = UNSET,
) -> "Response[SecretScanningAlert]":
...
def update_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
headers: Optional[Dict[str, str]] = None,
data: Missing[
ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBodyType
] = UNSET,
**kwargs,
) -> "Response[SecretScanningAlert]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts/{alert_number}"
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
if not kwargs:
kwargs = UNSET
json = kwargs if data is UNSET else data
json = parse_obj_as(
ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBody, json
)
json = json.dict(by_alias=True) if isinstance(json, BaseModel) else json
return self._github.request(
"PATCH",
url,
json=exclude_unset(json),
headers=exclude_unset(headers),
response_model=SecretScanningAlert,
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
@overload
async def async_update_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
headers: Optional[Dict[str, str]] = None,
data: ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBodyType,
) -> "Response[SecretScanningAlert]":
...
@overload
async def async_update_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
data: Literal[UNSET] = UNSET,
headers: Optional[Dict[str, str]] = None,
state: Literal["open", "resolved"],
resolution: Missing[
Union[
None, Literal["false_positive", "wont_fix", "revoked", "used_in_tests"]
]
] = UNSET,
resolution_comment: Missing[Union[str, None]] = UNSET,
) -> "Response[SecretScanningAlert]":
...
async def async_update_alert(
self,
owner: str,
repo: str,
alert_number: int,
*,
headers: Optional[Dict[str, str]] = None,
data: Missing[
ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBodyType
] = UNSET,
**kwargs,
) -> "Response[SecretScanningAlert]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts/{alert_number}"
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
if not kwargs:
kwargs = UNSET
json = kwargs if data is UNSET else data
json = parse_obj_as(
ReposOwnerRepoSecretScanningAlertsAlertNumberPatchBody, json
)
json = json.dict(by_alias=True) if isinstance(json, BaseModel) else json
return await self._github.arequest(
"PATCH",
url,
json=exclude_unset(json),
headers=exclude_unset(headers),
response_model=SecretScanningAlert,
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
def list_locations_for_alert(
self,
owner: str,
repo: str,
alert_number: int,
page: Missing[int] = 1,
per_page: Missing[int] = 30,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[SecretScanningLocation]]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts/{alert_number}/locations"
params = {
"page": page,
"per_page": per_page,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return self._github.request(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[SecretScanningLocation],
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
async def async_list_locations_for_alert(
self,
owner: str,
repo: str,
alert_number: int,
page: Missing[int] = 1,
per_page: Missing[int] = 30,
*,
headers: Optional[Dict[str, str]] = None,
) -> "Response[List[SecretScanningLocation]]":
url = f"/repos/{owner}/{repo}/secret-scanning/alerts/{alert_number}/locations"
params = {
"page": page,
"per_page": per_page,
}
headers = {"X-GitHub-Api-Version": self._REST_API_VERSION, **(headers or {})}
return await self._github.arequest(
"GET",
url,
params=exclude_unset(params),
headers=exclude_unset(headers),
response_model=List[SecretScanningLocation],
error_models={
"503": EnterprisesEnterpriseSecretScanningAlertsGetResponse503,
},
)
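# A minimal usage sketch (owner, repo, alert number and token are
# placeholders; client construction follows githubkit's documented entry
# point, so treat the exact attribute path as an assumption):
#   from githubkit import GitHub
#   github = GitHub("<personal-access-token>")
#   resp = github.rest.secret_scanning.update_alert(
#       "octo-org", "octo-repo", 42,
#       state="resolved", resolution="revoked",
#   )
#   alert = resp.parsed_data | PypiClean |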
/Bravo-2.0.tar.gz/Bravo-2.0/bravo/utilities/geometry.py | def gen_line_simple(point1, point2):
"""
An adaptation of Bresenham's line algorithm in three dimensions.
This function returns an iterable of integer coordinates along the line
from the first point to the second point. No points are omitted.
"""
# XXX should be done with ints instead of floats
tx, ty, tz = point1.x, point1.y, point1.z # t is for temporary
rx, ry, rz = int(tx), int(ty), int(tz) # r is for rounded
ox, oy, oz = point2.x, point2.y, point2.z # o is for objective
dx = ox - tx
dy = oy - ty
dz = oz - tz
largest = float(max(abs(dx), abs(dy), abs(dz)))
dx, dy, dz = dx / largest, dy / largest, dz / largest # Normalize so the largest component has magnitude 1.0
yield rx, ry, rz
while abs(ox - tx) > 1 or abs(oy - ty) > 1 or abs(oz - tz) > 1:
tx += dx
ty += dy
tz += dz
yield int(tx), int(ty), int(tz)
yield ox, oy, oz
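# For example, using the HurpPoint adapter defined below:
#   list(gen_line_simple(HurpPoint((0, 0, 0)), HurpPoint((3, 1, 0))))
#   -> [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 1, 0)]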
class HurpPoint(object):
"Tiny adapter exposing .x, .y and .z attributes from a 3-tuple."
def __init__(self, t):
self.x, self.y, self.z = t
def gen_close_point(point1, point2):
"""
Retrieve the first integer set of coordinates on the line from the first
point to the second point.
The set of coordinates corresponding to the first point will not be
retrieved.
"""
point1 = HurpPoint(point1)
point2 = HurpPoint(point2)
g = gen_line_simple(point1, point2)
next(g)
return next(g)
def gen_line_covered(point1, point2):
"""
This is Bresenham's algorithm with a little twist: *all* the blocks that
intersect with the line are yielded.
"""
tx, ty, tz = point1.x, point1.y, point1.z # t is for temporary
rx, ry, rz = int(tx), int(ty), int(tz) # r is for rounded
ox, oy, oz = point2.x, point2.y, point2.z # o is for objective
dx = ox - tx
dy = oy - ty
dz = oz - tz
largest = float(max(abs(dx), abs(dy), abs(dz)))
dx, dy, dz = dx / largest, dy / largest, dz / largest # Normalize so the largest component has magnitude 1.0
adx, ady, adz = abs(dx), abs(dy), abs(dz)
px, py, pz = rx, ry, rz
while abs(ox - tx) > 1 or abs(oy - ty) > 1 or abs(oz - tz) > 1:
tx += dx
ty += dy
tz += dz
# Stop at the vertical world bounds (0..127).
if (ty < 0 and dy < 0) or (ty >= 127 and dy > 0):
break
rx, ry, rz = int(tx), int(ty), int(tz)
yield rx, ry, rz
# Yield blocks that the line actually intersects
# but that plain Bresenham skipped.
if rx != px and adx != 1:
yield px, ry, rz
if ry != py and ady != 1:
yield px, py, rz
if rz != pz and adz != 1:
yield px, ry, pz
if ry != py and ady != 1:
yield rx, py, rz
if rz != pz and adz != 1:
yield rx, py, pz
if rz != pz and adz != 1:
yield rx, ry, pz
px, py, pz = rx, ry, rz
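# A quick sketch of the covered variant (coordinates are arbitrary):
#   start, end = HurpPoint((0, 0, 0)), HurpPoint((4, 3, 0))
#   blocks = set(gen_line_covered(start, end))
# Unlike gen_line_simple, blocks the segment merely grazes are also
# yielded, so the traced line never cuts corners. | PypiClean |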