| docstring | function | __index_level_0__ |
|---|---|---|
Insert python list of tuples into SQL table
Args:
data (list): List of tuples
table (str): Name of database table
conn (connection object): database connection object
columns (str): String of column names to use; if not assigned, all columns are presumed to be used [Optional]
... | def insert_query_m(data, table, conn, columns=None, db_type='mysql'):
# if the data is very large we need to break it into chunks; insert_query_m is then used recursively until
# all data has been inserted
if len(data) > 10000:
_chunk_query(data, 10000, columns, conn, table, db_type)
... | 900,166 |
Helper for inserting SQL data in chunks of n rows
Args:
l (list): List of tuples
n (int): Number of rows
cn (str): Column names
conn (connection object): Database connection object
table (str): Table name
db_type (str): Either "sqlite" or "mysql" | def _chunk_query(l, n, cn, conn, table, db_type):
# For item i in a range that is a length of l,
[insert_query_m(l[i:i + n], table, conn, cn, db_type) for i in range(0, len(l), n)] | 900,167 |
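The slicing pattern `_chunk_query` relies on can be exercised on its own; this standalone `chunk` helper is an illustration of the same idea, not part of the source.

```python
def chunk(rows, n):
    # Mirror _chunk_query's l[i:i + n] slicing: consecutive batches of
    # at most n rows, with the last batch holding the remainder.
    return [rows[i:i + n] for i in range(0, len(rows), n)]

batches = chunk(list(range(25)), 10)
print([len(b) for b in batches])  # [10, 10, 5]
```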
Convert any python list of lists (or tuples) so that the strings are formatted correctly for insertion into a database
Args:
ll (list): List of lists (or tuples) | def _make_sql_compatible(ll):
new_ll = []
for l in ll:
new_l = ()
for i in l:
if not i:
new_l = new_l + (None,)
else:
if isinstance(i, str):
if sys.version_info < (3, 0):
val = i.decode('u... | 900,168 |
Find completions for current command.
This assumes that we'll handle all completion logic here and that
the shell's automatic file name completion is disabled.
Args:
command_line: Command line
current_token: Token at cursor
position: Current cursor position
shell: Name of s... | def complete(command_line,
current_token,
position,
shell: arg(choices=('bash', 'fish'))):
position = int(position)
tokens = shlex.split(command_line[:position])
all_argv, run_argv, command_argv = run.partition_argv(tokens[1:])
run_args = run.parse_args(run_a... | 900,284 |
Install a given package.
Args:
name (str): The package name to install. This can be any valid
pip package specification.
index (str): The URL for a pypi index to use.
force (bool): Force the reinstall of packages during updates.
update (bool): Updat... | def install_package(self, name, index=None, force=False, update=False):
cmd = 'install'
if force:
cmd = '{0} {1}'.format(cmd, '--force-reinstall')
if update:
cmd = '{0} {1}'.format(cmd, '--upgrade')
if index:
cmd = '{0} {1}'.format(cmd, '-... | 900,298 |
Install packages from a requirements.txt file.
Args:
path (str): The path to the requirements file.
index (str): The URL for a pypi index to use. | def install_requirements(self, path, index=None):
cmd = 'install -r {0}'.format(path)
if index:
cmd = 'install --index-url {0} -r {1}'.format(index, path)
self.pip(cmd) | 900,299 |
Frame counts is the core of all the counting operations. It counts on a per-frame/per-region basis.
Args:
subsets (list): a list of Subset Objects. If not specified, the phenotypes are used.
Returns:
pandas.DataFrame: A dataframe of count data | def frame_counts(self,subsets=None):
mergeon = self.cdf.frame_columns+['region_label']
if subsets is None:
cnts = self.groupby(mergeon+['phenotype_label']).count()[['cell_index']].\
rename(columns={'cell_index':'count'})
mr = self.measured_regions
... | 900,464 |
Wrapper to retrieve the axis of a given histogram.
This can be convenient outside of just projections, so it's made available in the API.
Args:
axis_type: The type of axis to retrieve.
Returns:
Callable to retrieve the specified axis when given a hist. | def hist_axis_func(axis_type: enum.Enum) -> Callable[[Hist], Axis]:
def axis_func(hist: Hist) -> Axis:
# Determine the axis_type value
# Use try here instead of checking for a particular type to protect against type changes
# (say in the enum)
try:
# Try to ... | 900,504 |
Apply the associated range set to the axis of a given hist.
Note:
The min and max values should be bins, not user ranges! For more, see the binning
explanation in ``apply_func_to_find_bin(...)``.
Args:
hist: Histogram to which the axis range restriction should be ap... | def apply_range_set(self, hist: Hist) -> None:
# Do individual assignments to clarify which particular value is causing an error here.
axis = self.axis(hist)
#logger.debug(f"axis: {axis}, axis(): {axis.GetName()}")
# Help out mypy
assert not isinstance(self.min_val, floa... | 900,507 |
Calls the actual projection function for the hist.
Args:
hist: Histogram from which the projections should be performed.
Returns:
The projected histogram. | def call_projection_function(self, hist: Hist) -> Hist:
# Restrict projection axis ranges
for axis in self.projection_axes:
logger.debug(f"Apply projection axes hist range: {axis.name}")
axis.apply_range_set(hist)
projected_hist = None
if hasattr(hist, "... | 900,511 |
Perform the actual THn -> THn or TH1 projection.
This projection could be to 1D, 2D, 3D, or ND.
Args:
hist (ROOT.THnBase): Histogram from which the projections should be performed.
Returns:
ROOT.THnBase or ROOT.TH1: The projected histogram. | def _project_THn(self, hist: Hist) -> Any:
# THnBase projections args are given as a list of axes, followed by any possible options.
projection_axes = [axis.axis_type.value for axis in self.projection_axes]
# Handle ROOT THnBase quirk...
# 2D projection are called as (y, x, opt... | 900,512 |
Perform the actual TH3 -> TH1 projection.
This projection could be to 1D or 2D.
Args:
hist (ROOT.TH3): Histogram from which the projections should be performed.
Returns:
ROOT.TH1: The projected histogram. | def _project_TH3(self, hist: Hist) -> Any:
# Axis length validation
if len(self.projection_axes) < 1 or len(self.projection_axes) > 2:
raise ValueError(len(self.projection_axes), "Invalid number of axes")
# Need to concatenate the names of the axes together
projecti... | 900,513 |
Perform the actual TH2 -> TH1 projection.
This projection can only be to 1D.
Args:
hist (ROOT.TH2): Histogram from which the projections should be performed.
Returns:
ROOT.TH1: The projected histogram. | def _project_TH2(self, hist: Hist) -> Any:
if len(self.projection_axes) != 1:
raise ValueError(len(self.projection_axes), "Invalid number of axes")
#logger.debug(f"self.projection_axes[0].axis: {self.projection_axes[0].axis}, axis range name: {self.projection_axes[0].name}, axis_ty... | 900,514 |
Driver function for projecting and storing a single observable.
Args:
kwargs (dict): Additional named args to be passed to projection_name(...) and output_key_name(...)
Returns:
The projected histogram. The histogram is also stored in the output specified by ``output_observable`... | def _project_single_observable(self, **kwargs: Dict[str, Any]) -> Hist:
# Help out mypy
assert isinstance(self.output_attribute_name, str)
# Run the actual projection.
output_hist, projection_name, projection_name_args, = self._project_observable(
input_key = "singl... | 900,516 |
Driver function for projecting and storing a dictionary of observables.
Args:
kwargs (dict): Additional named args to be passed to projection_name(...) and output_key_name(...)
Returns:
The projected histograms. The projected histograms are also stored in ``output_observable``. | def _project_dict(self, **kwargs: Dict[str, Any]) -> Dict[str, Hist]:
# Setup function arguments with values which don't change per loop.
get_hist_args = copy.deepcopy(kwargs)
projection_name_args = copy.deepcopy(kwargs)
for key, input_observable in self.observable_to_project_fr... | 900,517 |
Perform the requested projection(s).
Note:
All cuts on the original histograms will be reset when this function is completed.
Args:
kwargs (dict): Additional named args to be passed to projection_name(...) and output_key_name(...)
Returns:
The projected hist... | def project(self, **kwargs: Dict[str, Any]) -> Union[Hist, Dict[str, Hist]]:
if self.single_observable_projection:
return self._project_single_observable(**kwargs)
else:
return self._project_dict(**kwargs) | 900,518 |
Cleanup applied cuts by resetting the axis to the full range.
Inspired by: https://github.com/matplo/rootutils/blob/master/python/2.7/THnSparseWrapper.py
Args:
hist: Histogram for which the axes should be reset.
cut_axes: List of axis cuts, which correspond to axes that should ... | def cleanup_cuts(self, hist: Hist, cut_axes: Iterable[HistAxisRange]) -> None:
for axis in cut_axes:
# According to the function TAxis::SetRange(first, last), the widest possible range is
# (1, Nbins). Anything beyond that will be reset to (1, Nbins)
axis.axis(hist).... | 900,519 |
Define the projection name for this projector.
Note:
This function is just a basic placeholder and likely should be overridden.
Args:
kwargs: Projection information dict combined with additional arguments passed to the
projection function.
Returns:
... | def projection_name(self, **kwargs: Dict[str, Any]) -> str:
return self.projection_name_format.format(**kwargs) | 900,520 |
The main method of the Search class. It searches the Theia Landsat API.
Returns a python dictionary
Arguments:
start_date -- date string. format: YYYY-MM-DD
end_date -- date string. format: YYYY-MM-DD
limit -- integer specifying the maximum number of results to return.
... | def search(self,limit,start_date=None,end_date=None,clipper=None):
search_string = self._query_builder(start_date,
end_date,
clipper
)
# Have to manually build t... | 900,763 |
Get commands in namespace.
Args:
namespace (dict|module): Typically a module. If not passed, the
globals from the call site will be used.
level (int): If not called from the global scope, set this
appropriately to account for the call stack.
Returns:
OrderedDict... | def get_commands_in_namespace(namespace=None, level=1):
from ..command import Command # noqa: Avoid circular import
commands = {}
if namespace is None:
frame = inspect.stack()[level][0]
namespace = frame.f_globals
elif inspect.ismodule(namespace):
namespace = vars(namespace... | 900,930 |
Get a dictionary of bundles for requested type.
Args:
type: 'javascript' or 'css' | def _get_bundles_by_type(self, type):
bundles = {}
bundle_definitions = self.config.get(type)
if bundle_definitions is None:
return bundles
# bundle name: common
for bundle_name, paths in bundle_definitions.items():
bundle_files = []
#... | 901,230 |
Orders population members from highest fitness to lowest fitness
Args:
Members (list): list of PyGenetics Member objects
Returns:
list: ordered list of Members, from highest fitness to lowest fitness | def minimize_best_n(Members):
return(list(reversed(sorted(
Members, key=lambda Member: Member.fitness_score
)))) | 901,313 |
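A minimal sketch of the ordering above, with a stand-in `Member` (the real PyGenetics class carries more fields than just a fitness score):

```python
from collections import namedtuple

Member = namedtuple('Member', ['fitness_score'])

def minimize_best_n(members):
    # Sort ascending by fitness, then reverse so the fittest come first.
    return list(reversed(sorted(members, key=lambda m: m.fitness_score)))

ranked = minimize_best_n([Member(0.2), Member(0.9), Member(0.5)])
print([m.fitness_score for m in ranked])  # [0.9, 0.5, 0.2]
```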
Calculate precursor mz based on exact mass and precursor type
Args:
exact_mass (float): exact mass of compound of interest
precursor_type (str): Precursor type (currently only works with '[M-H]-', '[M+H]+' and '[M+H-H2O]+')
Return:
precursor m/z of the compound | def get_precursor_mz(exact_mass, precursor_type):
# these are just taken from what was present in the massbank .msp file for those missing the exact mass
d = {'[M-H]-': -1.007276,
'[M+H]+': 1.007276,
'[M+H-H2O]+': 1.007276 - ((1.007276 * 2) + 15.9949)
}
try:
return... | 901,519 |
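The arithmetic is simply the exact (neutral) mass plus an adduct-specific offset. The offsets below are copied from the dict shown in the source; the function itself is a sketch of the visible logic, since the original's tail is truncated.

```python
ADDUCT_OFFSETS = {
    '[M-H]-': -1.007276,
    '[M+H]+': 1.007276,
    '[M+H-H2O]+': 1.007276 - ((1.007276 * 2) + 15.9949),
}

def precursor_mz(exact_mass, precursor_type):
    # Precursor m/z = neutral exact mass plus the adduct's mass offset.
    try:
        return exact_mass + ADDUCT_OFFSETS[precursor_type]
    except KeyError:
        return None  # unknown precursor type

print(round(precursor_mz(180.0634, '[M+H]+'), 4))  # 181.0707
```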
Get line count of file
Args:
fn (str): Path to file
Return:
Number of lines in file (int) | def line_count(fn):
i = -1  # guard against empty files, where the loop body never runs
with open(fn) as f:
for i, l in enumerate(f):
pass
return i + 1 | 901,520 |
Configure the virtual environment for another path.
Args:
destination (str): The target path of the virtual environment.
Note:
This does not actually move the virtual environment. It only
rewrites the metadata required to support a move. | def relocate(self, destination):
for activate in self.bin.activates:
activate.vpath = destination
for binfile in self.bin.files:
if binfile.shebang and (
'python' in binfile.shebang or 'pypy' in binfile.shebang
):
binfi... | 901,711 |
Reconfigure and move the virtual environment to another path.
Args:
destination (str): The target path of the virtual environment.
Note:
Unlike `relocate`, this method *will* move the virtual to the
given path. | def move(self, destination):
self.relocate(destination)
shutil.move(self.path, destination)
self._path = destination | 901,712 |
Build a lined up markdown table.
Args:
headers (dict): A key -> value pairing of the headers.
rows (list): List of dictionaries that contain all the keys listed in
the headers.
row_keys (list): A sorted list of keys to display
Returns:
A valid Markdown Table as a string... | def build_markdown_table(headers, rows, row_keys=None):
row_maxes = _find_row_maxes(headers, rows)
row_keys = row_keys or [key for key, value in headers.items()]
table = [
_build_row(headers, row_maxes, row_keys),
_build_separator(row_maxes, row_keys)
]
for row in rows:
... | 901,814 |
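The helpers `_find_row_maxes`, `_build_row`, and `_build_separator` are not shown, so this is a compact sketch of the same column-padding idea under hypothetical names, not the source implementation:

```python
def markdown_table(headers, rows, keys):
    # Pad every cell to the widest entry in its column so the pipes align.
    widths = {k: max([len(str(headers[k]))] + [len(str(r[k])) for r in rows])
              for k in keys}

    def build_row(cells):
        return '| ' + ' | '.join(str(cells[k]).ljust(widths[k]) for k in keys) + ' |'

    separator = '| ' + ' | '.join('-' * widths[k] for k in keys) + ' |'
    return '\n'.join([build_row(headers), separator] + [build_row(r) for r in rows])

table = markdown_table({'name': 'Name', 'age': 'Age'},
                       [{'name': 'Ada', 'age': 36}], ['name', 'age'])
print(table)
```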
Generate Markdown Documentation for the given spec/app name.
Args:
app_name (str): The name of the application.
spec (YapconfSpec): A yapconf specification with sources loaded.
Returns (str):
A valid, markdown string representation of the documentation for
the given specificati... | def generate_markdown_doc(app_name, spec):
# Apply standard headers.
sections = [
HEADER.format(app_name=app_name),
SOURCES_HEADER.format(app_name=app_name)
]
# Generate the sources section of the documentation
sorted_labels = sorted(list(spec.sources))
for label in sorted... | 901,823 |
Output the data in the dataframe's 'image' column to a directory structured by project->sample and named by frame
Args:
path (str): Where to write the directory of images
suffix (str): for labeling the images you write
format (str): default 'png' format to write the file
... | def write_to_path(self,path,suffix='',format='png',overwrite=False):
if os.path.exists(path) and overwrite is False: raise ValueError("Error: use overwrite=True to overwrite images")
if not os.path.exists(path): os.makedirs(path)
for i,r in self.iterrows():
spath = os.path.jo... | 901,829 |
Parameter object
Args:
name (str): name of the parameter
min_val (int or float): minimum allowed value for the parameter
max_val (int or float): maximum allowed value for the parameter | def __init__(self, name, min_val, max_val):
self.name = name
self.min_val = min_val
self.max_val = max_val
if type(min_val) != type(max_val):
raise ValueError('Supplied min_val is not the same type as\
supplied max_val: {}, {}'.format(
... | 901,928 |
Member object
Args:
parameters (dictionary): dictionary of parameter names and values
cost_fn_val (float): value returned by cost function using params | def __init__(self, parameters, cost_fn_val):
self.parameters = parameters
self.cost_fn_val = cost_fn_val
self.fitness_score = self.__calc_fitness_score(cost_fn_val) | 901,929 |
Adds a parameter to the Population
Args:
name (str): name of the parameter
min_val (int or float): minimum value for the parameter
max_val (int or float): maximum value for the parameter | def add_parameter(self, name, min_val, max_val):
self.__parameters.append(Parameter(name, min_val, max_val)) | 901,936 |
Private, static method: mutates parameter
Args:
value (int or float): current value for Member's parameter
param (Parameter): parameter object
mut_rate (float): mutation rate of the value
max_mut_amt (float): maximum mutation amount of the value
Returns:... | def __mutate_parameter(value, param, mut_rate, max_mut_amt):
if uniform(0, 1) < mut_rate:
mut_amt = uniform(0, max_mut_amt)
op = choice((add, sub))
new_val = op(value, param.dtype(
(param.max_val - param.min_val) * mut_amt
))
... | 901,939 |
Get the path of a command in the virtual if it exists.
Args:
cmd (str): The command to look for.
Returns:
str: The full path to the command.
Raises:
ValueError: If the command is not present. | def cmd_path(self, cmd):
for binscript in self.bin.files:
if binscript.path.endswith('/{0}'.format(cmd)):
return binscript.path
raise ValueError('The command {0} was not found.'.format(cmd)) | 902,202 |
Handle the new configuration.
Args:
new_config (dict): The new configuration | def handle_config_change(self, new_config):
if self.user_handler:
self.user_handler(self.current_config, new_config)
self._call_spec_handlers(new_config)
self.current_config = copy.deepcopy(new_config) | 902,330 |
Sets up serial port by connecting to physical or software port.
Depending on command line options, this function will either connect to a
SerialTestClass() port for loopback testing or to the specified port from
the command line option. If loopback is True it overrides the physical port
specification.
... | def setupSerialPort(loopback, port):
if loopback:
# Implement loopback software serial port
testSerial = SerialTestClass()
serialPort = testSerial.serialPort
else:
# TODO enable serial port command line options (keep simple for user!)
serialPort = serial.Serial(port,... | 902,897 |
Rewrite a single line in the file.
Args:
line (str): The new text to write to the file.
line_number (int): The line of the file to rewrite. Numbering
starts at 0. | def writeline(self, line, line_number):
tmp_file = tempfile.TemporaryFile('w+')
if not line.endswith(os.linesep):
line += os.linesep
try:
with open(self.path, 'r') as file_handle:
for count, new_line in enumerate(file_handle):
... | 903,085 |
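The temp-file streaming approach can be sketched end to end. This standalone version rewrites one 0-indexed line; the final `os.replace` swap is an assumption, since the original's tail is truncated.

```python
import os
import tempfile

def writeline(path, line, line_number):
    # Stream the file through a temp copy, swapping in the new line,
    # then replace the original with the rewritten copy.
    if not line.endswith(os.linesep):
        line += os.linesep
    with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
        with open(path) as src:
            for count, old_line in enumerate(src):
                tmp.write(line if count == line_number else old_line)
    os.replace(tmp.name, path)
```

Writing 'B' at line_number=1 of a file containing 'a', 'b', 'c' leaves 'a', 'B', 'c'.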
Load an object.
Args:
obj (str|object): Load the indicated object if this is a string;
otherwise, return the object as is.
To load a module, pass a dotted path like 'package.module';
to load an object from a module pass a path like
'package.module:name'.
... | def load_object(obj) -> object:
if isinstance(obj, str):
if ':' in obj:
module_name, obj_name = obj.split(':')
if not module_name:
module_name = '.'
else:
module_name = obj
obj = importlib.import_module(module_name)
if obj_name... | 903,116 |
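A runnable sketch of the 'package.module:name' convention described above; the truncated source may differ in details such as relative-import handling.

```python
import importlib

def load_object(obj):
    # Strings are treated as import paths; anything else passes through.
    if not isinstance(obj, str):
        return obj
    if ':' in obj:
        # 'package.module:name' -> load `name` from the module.
        module_name, obj_name = obj.split(':')
        module = importlib.import_module(module_name or '.')
        return getattr(module, obj_name)
    # Plain dotted path -> load the module itself.
    return importlib.import_module(obj)

print(load_object('os.path:join').__name__)  # join
```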
Create a YAML object for loading a YAML configuration.
Args:
modules_to_register: Modules containing classes to be registered with the YAML object. Default: None.
classes_to_register: Classes to be registered with the YAML object. Default: None.
Returns:
A newly creating YAML object, co... | def yaml(modules_to_register: Iterable[Any] = None, classes_to_register: Iterable[Any] = None) -> ruamel.yaml.YAML:
# Define a round-trip yaml object for us to work with. This object should be imported by other modules
# NOTE: "typ" is a not a typo. It stands for "type"
yaml = ruamel.yaml.YAML(typ = "r... | 903,446 |
Update the library meta data from the current line being parsed
Args:
line (str): The current line of the of the file being parsed | def _update_libdata(self, line):
####################################################
# parse MONA Comments line
####################################################
# The mona msp files contain a "comments" line that contains lots of other information normally separated
... | 903,477 |
Parse and extract any other names that might be recorded for the compound
Args:
line (str): line of the msp file | def _get_other_names(self, line):
m = re.search(self.compound_regex['other_names'][0], line, re.IGNORECASE)
if m:
self.other_names.append(m.group(1).strip()) | 903,484 |
Parse and extract all meta data by looping through the dictionary of meta_info regexs
updates self.meta_info
Args:
line (str): line of the msp file | def _parse_meta_info(self, line):
if self.mslevel:
self.meta_info['ms_level'] = self.mslevel
if self.polarity:
self.meta_info['polarity'] = self.polarity
for k, regexes in six.iteritems(self.meta_regex):
for reg in regexes:
m = re.s... | 903,485 |
Parse and extract all compound data by looping through the dictionary of compound_info regexs
updates self.compound_info
Args:
line (str): line of the msp file | def _parse_compound_info(self, line):
for k, regexes in six.iteritems(self.compound_regex):
for reg in regexes:
if self.compound_info[k]:
continue
m = re.search(reg, line, re.IGNORECASE)
if m:
self.compo... | 903,486 |
Insert data stored in the current chunk of parsing into the selected database
Args:
remove_data (boolean): Remove the data stored within the LibraryData object for the current chunk of
processing
db_type (str): The type of database to submit to
... | def insert_data(self, remove_data=False, db_type='sqlite'):
if self.update_source:
# print "insert ref id"
import msp2db
self.c.execute(
"INSERT INTO library_spectra_source (id, name, parsing_software) VALUES"
" ({a}, '{b}', 'msp2db-v{... | 903,487 |
Checks if given hook module has been loaded
Args:
name (str): The name of the module to check
Returns:
bool. The return code::
True -- Loaded
False -- Not Loaded | def isloaded(self, name):
if name is None:
return True
if isinstance(name, str):
return (name in [x.__module__ for x in self])
if isinstance(name, Iterable):
return set(name).issubset([x.__module__ for x in self])
return False | 903,735 |
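The same membership logic, sketched over a plain list of module names instead of the hook objects the source iterates:

```python
def isloaded(loaded_modules, name):
    # None means "no requirement"; a string checks one module; any other
    # iterable checks that every named module is loaded.
    if name is None:
        return True
    if isinstance(name, str):
        return name in loaded_modules
    return set(name).issubset(loaded_modules)

mods = ['hooks.audit', 'hooks.log']
print(isloaded(mods, 'hooks.log'))           # True
print(isloaded(mods, ['hooks.log', 'x.y']))  # False
```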
Object Model for CSH LDAP groups.
Arguments:
lib -- handle to a CSHLDAP instance
search_val -- the cn of the LDAP group to bind to | def __init__(self, lib, search_val):
self.__dict__['__lib__'] = lib
self.__dict__['__con__'] = lib.get_con()
res = self.__con__.search_s(
self.__ldap_group_ou__,
ldap.SCOPE_SUBTREE,
"(cn=%s)" % search_val,
['cn'])
... | 904,128 |
Check if a Member is in the bound group.
Arguments:
member -- the CSHMember object (or distinguished name) of the member to
check against
Keyword arguments:
dn -- whether or not member is a distinguished name | def check_member(self, member, dn=False):
if dn:
res = self.__con__.search_s(
self.__dn__,
ldap.SCOPE_BASE,
"(member=%s)" % dn,
['ipaUniqueID'])
else:
res = self.__con__.search_s(
... | 904,130 |
Add a member to the bound group
Arguments:
member -- the CSHMember object (or distinguished name) of the member
Keyword arguments:
dn -- whether or not member is a distinguished name | def add_member(self, member, dn=False):
if dn:
if self.check_member(member, dn=True):
return
mod = (ldap.MOD_ADD, 'member', member.encode('ascii'))
else:
if self.check_member(member):
return
mod = (ldap.MOD_ADD, 'm... | 904,131 |
Creates a subscriber binding to the given address and
subscribing to the given topics.
The callback is invoked for every message received.
Args:
- address: the address to bind the PUB socket to.
- topics: the topics to subscribe
- callback: the callback to invoke for eve... | def subscriber(address,topics,callback,message_type):
return Subscriber(address,topics,callback,message_type) | 904,324 |
Send the message on the socket.
Args:
- message: the message to publish
- message_type: the type of message being sent
- topic: the topic on which to send the message. Defaults to ''. | def send(self,message,message_type,topic=''):
if message_type == RAW:
self._sock.send(message)
elif message_type == PYOBJ:
self._sock.send_pyobj(message)
elif message_type == JSON:
self._sock.send_json(message)
elif message_type == MULTIPART:
... | 904,360 |
Receive the message of the specified type and return
Args:
- message_type: the type of the message to receive
Returns:
- the topic of the message
- the message received from the socket | def receive(self,message_type):
topic = None
message = None
if message_type == RAW:
message = self._sock.recv(flags=zmq.NOBLOCK)
elif message_type == PYOBJ:
message = self._sock.recv_pyobj(flags=zmq.NOBLOCK)
elif message_type == JSON:
... | 904,361 |
Check if the observation table is closed.
Args:
None
Returns:
tuple (bool, str): True if the observation table is closed, False otherwise.
If the table is not closed the escaping string is returned. | def is_closed(self):
old_training_data = self.training_data
self.training_data = {x: [] for x in self.sm_vector}
for t in self.smi_vector:
src_state = t[:-1]
symbol = t[-1:]
found = False
for dst_state in self.sm_vector:
if... | 904,764 |
Fill an entry of the observation table.
Args:
row (str): The row of the observation table
col (str): The column of the observation table
Returns:
None | def _fill_table_entry(self, row, col):
self.observation_table[row, col] = self._membership_query(row + col) | 904,767 |
Run the string in the hypothesis automaton for index steps and then
return the access string for the state reached concatenated with the
rest of the string w.
Args:
mma (DFA): The hypothesis automaton
w_string (str): The examined string to be consumed
index (i... | def _run_in_hypothesis(self, mma, w_string, index):
state = mma.states[0]
s_index = 0
for i in range(index):
for arc in state:
if arc.guard.is_sat(w_string[i]):
state = mma.states[arc.dst_state]
s_index = arc.dst_state
... | 904,768 |
Process a counterexample in the Rivest-Schapire way.
Args:
mma (DFA): The hypothesis automaton
w_string (str): The examined string to be consumed
Return:
None | def _process_counter_example(self, mma, w_string):
if len(w_string) == 1:
self.observation_table.smi_vector.append(w_string)
for exp in self.observation_table.em_vector:
self._fill_table_entry(w_string, exp)
diff = len(w_string)
same = 0
... | 904,769 |
Utilize the observation table to construct a Mealy Machine.
The library used for representing the Mealy Machine is the python
bindings of the openFST library (pyFST).
Args:
None
Returns:
MealyMachine: A mealy machine build based on a closed and consistent
... | def get_sfa_conjecture(self):
sfa = SFA(self.alphabet)
for s in self.observation_table.sm_vector:
transitions = self._get_predicate_guards(
s, self.observation_table.training_data[s])
for (t, pred) in transitions:
src_id = self.observation... | 904,771 |
Initializes table from a DFA
Args:
mma: The input automaton
Returns:
None | def _init_table_from_dfa(self, mma):
observation_table_init = ObservationTableInit(self.epsilon, self.alphabet)
sm_vector, smi_vector, em_vector = observation_table_init.initialize(mma, True)
self.observation_table.sm_vector = sm_vector
self.observation_table.smi_vector = smi_ve... | 904,773 |
Implements the high level loop of the algorithm for learning a
Mealy machine.
Args:
mma: The input automaton used to initialize the observation table, if provided.
Returns:
MealyMachine: A model for the Mealy machine to be learned. | def learn_sfa(self, mma=None):
logging.info('Initializing learning procedure.')
if mma:
self._init_table_from_dfa(mma)
else:
self._init_table()
logging.info('Generating a closed and consistent observation table.')
while True:
closed ... | 904,774 |
Run the excel_to_html function from the
command-line.
Args:
-p path to file
-s name of the sheet to convert
-css classes to apply
-m attempt to combine merged cells
-c caption for accessibility
-su summary for accessibility
-d details for accessibility
... | def run_excel_to_html():
# Capture commandline arguments. prog='' argument must
# match the command name in setup.py entry_points
parser = argparse.ArgumentParser(prog='excel_to_html')
parser.add_argument('-p', nargs='?', help='Path to an excel file for conversion.')
parser.add_argument(
... | 905,390 |
Checks whether a point is on the curve.
Args:
point (AffinePoint): Point to be checked.
Returns:
bool: True if point is on the curve, False otherwise. | def is_on_curve(self, point):
X, Y = point.X, point.Y
return (
pow(Y, 2, self.P) - pow(X, 3, self.P) - self.a * X - self.b
) % self.P == 0 | 905,413 |
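The congruence being tested is y² ≡ x³ + a·x + b (mod P). A free-standing version over a toy prime field (parameters chosen purely for illustration, not the curve the source class uses):

```python
def is_on_curve(x, y, a, b, p):
    # Short Weierstrass membership test: y^2 == x^3 + a*x + b (mod p).
    return (pow(y, 2, p) - pow(x, 3, p) - a * x - b) % p == 0

# Toy curve y^2 = x^3 + 7 over F_17.
a, b, p = 0, 7, 17
print(is_on_curve(3, 0, a, b, p))  # True: 3^3 + 7 = 34 ≡ 0 (mod 17)
print(is_on_curve(2, 1, a, b, p))  # False
```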
Generates a private key from random data.
SHA-256 is a member of the SHA-2 cryptographic hash functions designed by
the NSA. SHA stands for Secure Hash Algorithm. A random string is converted
to bytes and hashed with SHA-256. The binary output is converted to a hex
representation.
... | def generate_private_key(self):
random_string = base64.b64encode(os.urandom(4096)).decode('utf-8')
binary_data = bytes(random_string, 'utf-8')
hash_object = hashlib.sha256(binary_data)
message_digest_bin = hash_object.digest()
message_digest_hex = binascii.hexlify(messag... | 905,415 |
Determines the slope between this point and another point.
Args:
other (AffinePoint): The second point.
Returns:
int: Slope between self and other. | def slope(self, other):
X1, Y1, X2, Y2 = self.X, self.Y, other.X, other.Y
Y3 = Y1 - Y2
X3 = X1 - X2
return (Y3 * self.inverse(X3)) % self.P | 905,429 |
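`self.inverse` is not shown; assuming it computes a modular inverse (Fermat's little theorem works when P is prime), the chord-slope computation looks like:

```python
def inverse(x, p):
    # Modular inverse for prime p via Fermat's little theorem: x^(p-2) mod p.
    return pow(x, p - 2, p)

def slope(x1, y1, x2, y2, p):
    # Slope of the line through two distinct affine points, reduced mod p.
    return ((y1 - y2) * inverse(x1 - x2, p)) % p

print(slope(5, 1, 6, 3, 17))  # 2
```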
Publish the message on the PUB socket with the given topic name.
Args:
- message: the message to publish
- message_type: the type of message being sent
- topic: the topic on which to send the message. Defaults to ''. | def publish(self,message,message_type,topic=''):
if message_type == MULTIPART:
raise Exception("Unsupported request type")
super(Publisher,self).send(message,message_type,topic) | 905,469 |
Create the trie for betacode conversion.
Args:
strict: Flag to allow for flexible diacritic order on input.
Returns:
The trie for conversion. | def _create_conversion_trie(strict):
t = pygtrie.CharTrie()
for beta, uni in _map.BETACODE_MAP.items():
if strict:
t[beta] = uni
else:
# The order of accents is very strict and weak. Allow for many orders of
# accents between asterisk and letter or after... | 905,656 |
Converts the given text from betacode to unicode.
Args:
text: The beta code text to convert. All of this text must be betacode.
strict: Flag to allow for flexible diacritic order on input.
Returns:
The converted text. | def beta_to_uni(text, strict=False):
# Check if the requested configuration for conversion already has a trie
# stored otherwise convert it.
param_key = (strict,)
try:
t = _BETA_CONVERSION_TRIES[param_key]
except KeyError:
t = _create_conversion_trie(*param_key)
_BETA_CON... | 905,659 |
Convert unicode text to a betacode equivalent.
This method can handle tónos or oxeîa characters in the input.
Args:
text: The text to convert to betacode. This text does not have to all be
Greek polytonic text, and only Greek characters will be converted. Note
that in this case, you cannot... | def uni_to_beta(text):
u = _UNICODE_MAP
transform = []
for ch in text:
try:
conv = u[ch]
except KeyError:
conv = ch
transform.append(conv)
converted = ''.join(transform)
return converted | 905,660 |
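The per-character try/except lookup is equivalent to `dict.get` with a pass-through default. A compact sketch with a toy mapping (the real `_UNICODE_MAP` is far larger):

```python
def transform_text(text, table):
    # Convert mapped characters, pass everything else through unchanged.
    return ''.join(table.get(ch, ch) for ch in text)

toy_map = {'α': 'a', 'β': 'b'}
print(transform_text('αβγ', toy_map))  # abγ
```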
Gets the return string for a language that's supported by python.
Used in cases when python provides support for the conversion.
Args:
language: string the language to return for.
level: integer, the indentation level.
data: python data structure being converted... | def get_built_in(self, language, level, data):
# Language is python
pp = pprint.PrettyPrinter(indent=level)
lookup = {'python' : pp.pformat(data),
'json' : str(json.dumps(data, sort_keys=True, indent=level, separators=(',', ': ')))}
self.data_structure = look... | 905,669 |
Helper function that tries to load a filepath (or python module notation)
as a python module and on failure `exec` it.
Args:
path (str): Path or module to load
The function tries to import `example.module` when either `example.module`,
`example/module` or `example/module.py` is given. | def load(path):
importpath = path.replace("/", ".").replace("\\", ".")
if importpath[-3:] == ".py":
importpath = importpath[:-3]
try:
importlib.import_module(importpath)
except (ModuleNotFoundError, TypeError):
exec(open(path).read()) | 905,778 |
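The import-then-exec fallback can be demonstrated with a stdlib module path. This sketch returns the imported module (the original returns nothing) so the behavior is observable:

```python
import importlib

def load(path):
    # Turn 'pkg/mod.py' or 'pkg.mod' into dotted notation and import it;
    # if the import fails, fall back to executing the file's source.
    importpath = path.replace('/', '.').replace('\\', '.')
    if importpath.endswith('.py'):
        importpath = importpath[:-3]
    try:
        return importlib.import_module(importpath)
    except (ModuleNotFoundError, TypeError):
        exec(open(path).read())

print(load('json/decoder.py').__name__)  # json.decoder
```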
Parse a GPX file into a GpxModel.
Args:
xml: A file-like-object opened in binary mode - that is containing
bytes rather than characters. The root element of the XML should
be a <gpx> element containing a version attribute. GPX versions
1.0 is supported.
Returns:
... | def parse_gpx(gpx_element, gpxns=None):
gpxns = gpxns if gpxns is not None else determine_gpx_namespace(gpx_element)
if gpx_element.tag != gpxns+'gpx':
raise ValueError("No gpx root element")
get_text = lambda tag: optional_text(gpx_element, gpxns+tag)
version = gpx_element.attrib['versi... | 906,051 |
Send a request message of the given type
Args:
- message: the message to publish
- message_type: the type of message being sent | def request(self,message,message_type):
if message_type == MULTIPART:
raise Exception("Unsupported request type")
super(Requestor,self).send(message,message_type) | 906,255 |
Construct an HTTP request.
Args:
uri: The full path or partial path as a Uri object or a string.
method: The HTTP method for the request, examples include 'GET', 'POST',
etc.
headers: dict of strings The HTTP headers to include in the request. | def __init__(self, uri=None, method=None, headers=None):
self.headers = headers or {}
self._body_parts = []
if method is not None:
self.method = method
if isinstance(uri, (str, unicode)):
uri = Uri.parse_uri(uri)
self.uri = uri or Uri()
self.headers['MIME-version'] = '1.0'
s... | 906,350 |
Opens a socket connection to the server to set up an HTTP request.
Args:
uri: The full URL for the request as a Uri object.
headers: A dict of string pairs containing the HTTP headers for the
request. | def _get_connection(self, uri, headers=None):
connection = None
if uri.scheme == 'https':
if not uri.port:
connection = httplib.HTTPSConnection(uri.host)
else:
connection = httplib.HTTPSConnection(uri.host, int(uri.port))
else:
if not uri.port:
connection = htt... | 906,364 |
Makes an HTTP request using httplib.
Args:
method: str example: 'GET', 'POST', 'PUT', 'DELETE', etc.
uri: str or atom.http_core.Uri
headers: dict of strings mapping to strings which will be sent as HTTP
headers in the request.
body_parts: list of strings, objects with a r... | def _http_request(self, method, uri, headers=None, body_parts=None):
if isinstance(uri, (str, unicode)):
uri = Uri.parse_uri(uri)
connection = self._get_connection(uri, headers=headers)
if self.debug:
connection.debuglevel = 1
if connection.host != uri.host:
connection.putrequest... | 906,365 |
Creates a new event. `event` may be iterable or string
Args:
event (str): Name of event to declare
Kwargs:
help (str): Help string for the event
Raises:
TypeError
**Please** describe the event and its calling arguments in the help
string. | def append(self, event, help=""):
if isinstance(event, str):
self._events[event] = HookList(is_waterfall=self.is_waterfall)
self._help[event] = (help, getframeinfo(stack()[1][0]))
if not help:
logger.warning("Great, don't say anything about your hook... | 906,403 |
Object Model for CSH LDAP users.
Arguments:
lib -- handle to a CSHLDAP instance
search_val -- the uuid (or uid) of the member to bind to
uid -- whether or not search_val is a uid | def __init__(self, lib, search_val, uid):
self.__dict__['__lib__'] = lib
self.__dict__['__con__'] = lib.get_con()
res = None
if uid:
res = self.__con__.search_s(
self.__ldap_user_ou__,
ldap.SCOPE_SUBTREE,
... | 906,593 |
Get whether or not the bound CSH LDAP member object is part of a
group.
Arguments:
group -- the CSHGroup object (or distinguished name) of the group to
check membership for
dn -- whether group is a distinguished name rather than a group object | def in_group(self, group, dn=False):
if dn:
return group in self.groups()
return group.check_member(self) | 906,595 |
Get a CSHMember object.
Arguments:
val -- the iButton ID of the member
Returns:
None if the iButton supplied does not correspond to a CSH Member | def get_member_ibutton(self, val):
members = self.__con__.search_s(
CSHMember.__ldap_user_ou__,
ldap.SCOPE_SUBTREE,
"(ibutton=%s)" % val,
['ipaUniqueID'])
if members:
return CSHMember(
self,
memb... | 907,020 |
Get a CSHMember object.
Arguments:
slack -- the Slack UID of the member
Returns:
None if the Slack UID provided does not correspond to a CSH Member | def get_member_slackuid(self, slack):
members = self.__con__.search_s(
CSHMember.__ldap_user_ou__,
ldap.SCOPE_SUBTREE,
"(slackuid=%s)" % slack,
['ipaUniqueID'])
if members:
return CSHMember(
self,
... | 907,021 |
Get the heads of a directorship
Arguments:
val -- the cn of the directorship | def get_directorship_heads(self, val):
__ldap_group_ou__ = "cn=groups,cn=accounts,dc=csh,dc=rit,dc=edu"
res = self.__con__.search_s(
__ldap_group_ou__,
ldap.SCOPE_SUBTREE,
"(cn=eboard-%s)" % val,
['member'])
ret = []
... | 907,022 |
Enqueue a LDAP modification.
Arguments:
dn -- the distinguished name of the object to modify
mod -- an LDAP modification entry to enqueue | def enqueue_mod(self, dn, mod):
# mark for update
if dn not in self.__pending_mod_dn__:
self.__pending_mod_dn__.append(dn)
self.__mod_queue__[dn] = []
self.__mod_queue__[dn].append(mod) | 907,023 |
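The queueing pattern above groups modifications per DN so they can be flushed later in one call per object. A standalone sketch (free-function form with explicit `queue`/`pending` arguments, an assumption for illustration):

```python
def enqueue_mod(queue, pending, dn, mod):
    # Group modifications per DN; `pending` preserves the order in
    # which DNs were first touched, so flushes happen in that order.
    if dn not in pending:
        pending.append(dn)
        queue[dn] = []
    queue[dn].append(mod)
```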
Check if the observation table is closed.
Args:
None
Returns:
tuple (bool, str): True if the observation table is
closed and False otherwise. If the table is not closed
the escaping string is returned. | def is_closed(self):
for t in self.smi_vector:
found = False
for s in self.sm_vector:
if self.observation_table[s] == self.observation_table[t]:
self.equiv_classes[t] = s
found = True
break
i... | 907,098 |
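The closedness test above checks that every extended row has an equivalent row among the access strings. A minimal sketch of the same check, with the table passed explicitly (argument names are assumptions, not the class's attributes):

```python
def is_closed(sm_vector, smi_vector, table):
    # Closed means: every row in smi_vector has a row in sm_vector
    # with identical table entries. Return the first escaping row
    # (and False) if one exists.
    for t in smi_vector:
        if not any(table[s] == table[t] for s in sm_vector):
            return False, t
    return True, None
```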
Fill an entry of the observation table.
Args:
row (str): The row of the observation table
col (str): The column of the observation table
Returns:
None | def _fill_table_entry(self, row, col):
prefix = self._membership_query(row)
full_output = self._membership_query(row + col)
length = len(commonprefix([prefix, full_output]))
self.observation_table[row, col] = full_output[length:] | 907,099 |
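The entry stored above is the part of the full output not already produced by the row prefix alone, extracted via the common prefix of the two query results. As a standalone sketch (helper name `output_suffix` is illustrative):

```python
from os.path import commonprefix

def output_suffix(prefix_out, full_out):
    # Keep only the portion of the full output that appears after
    # the longest common prefix with the prefix-only output.
    length = len(commonprefix([prefix_out, full_out]))
    return full_out[length:]
```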
Run the string in the hypothesis automaton for index steps and then
return the access string for the state reached concatenated with the
rest of the string w.
Args:
mma (DFA): The hypothesis automaton
w_string (str): The examined string to be consumed
index (i... | def _run_in_hypothesis(self, mma, w_string, index):
state = mma[0]
for i in range(index):
for arc in state:
if mma.isyms.find(arc.ilabel) == w_string[i]:
state = mma[arc.nextstate]
s_index = arc.nextstate
# The id of t... | 907,100 |
Checks if access string suffix matches with the examined string suffix
Args:
w_string (str): The examined string to be consumed
access_string (str): The access string for the state
index (int): The index value for selecting the prefix of w
Returns:
bool: A... | def _check_suffix(self, w_string, access_string, index):
prefix_as = self._membership_query(access_string)
full_as = self._membership_query(access_string + w_string[index:])
prefix_w = self._membership_query(w_string[:index])
full_w = self._membership_query(w_string)
l... | 907,101 |
Checks for bad DFA transitions using the examined string
Args:
mma (DFA): The hypothesis automaton
w_string (str): The examined string to be consumed
Returns:
str: The prefix of the examined string that matches | def _find_bad_transition(self, mma, w_string):
conj_out = mma.consume_input(w_string)
targ_out = self._membership_query(w_string)
# TODO: handle different length outputs from conjecture and target
# hypothesis.
length = min(len(conj_out), len(targ_out))
diff = [i... | 907,102 |
Process a counterexample in the Rivest-Schapire way.
Args:
mma (DFA): The hypothesis automaton
w_string (str): The examined string to be consumed
Returns:
None | def _process_counter_example(self, mma, w_string):
w_string = self._find_bad_transition(mma, w_string)
diff = len(w_string)
same = 0
while True:
            i = (same + diff) // 2  # integer division; "/" would yield a float index on Python 3
access_string = self._run_in_hypothesis(mma, w_string, i)
is_diff = self._c... | 907,103 |
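The loop above is a binary search for the first index at which the hypothesis and target disagree. A self-contained sketch of that search, where `same_outputs(i)` stands in for the suffix-agreement query (a hypothetical callback, not the class's API):

```python
def find_breakpoint(w, same_outputs):
    # Binary-search for the breakpoint index in counterexample w.
    # same_outputs(i) is assumed True while replacing the first i
    # symbols by the reached access string leaves the output unchanged.
    same, diff = 0, len(w)
    while diff - same > 1:
        i = (same + diff) // 2  # integer division, unlike the py2-style "/"
        if same_outputs(i):
            same = i
        else:
            diff = i
    return diff
```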
Given a state access_string in Smi that is not equivalent with any state in Sm
this method will move that state in Sm create a corresponding Smi
state and fill the corresponding entries in the table.
Args:
access_string (str): State access string
Returns:
None | def _ot_make_closed(self, access_string):
self.observation_table.sm_vector.append(access_string)
for i in self.alphabet:
self.observation_table.smi_vector.append(access_string + i)
for e in self.observation_table.em_vector:
self._fill_table_entry(access_s... | 907,104 |
Utilize the observation table to construct a Mealy Machine.
The library used for representing the Mealy Machine is the python
bindings of the openFST library (pyFST).
Args:
None
Returns:
MealyMachine: A Mealy machine built based on a closed and consistent
... | def get_mealy_conjecture(self):
mma = MealyMachine()
for s in self.observation_table.sm_vector:
for i in self.alphabet:
dst = self.observation_table.equiv_classes[s + i]
# If dst == None then the table is not closed.
if dst is None:
... | 907,105 |
Implements the high level loop of the algorithm for learning a
Mealy machine.
Args:
None
Returns:
MealyMachine: The learned Mealy machine | def learn_mealy_machine(self):
logging.info('Initializing learning procedure.')
self._init_table()
logging.info('Generating a closed and consistent observation table.')
while True:
closed = False
# Make sure that the table is closed and consistent
... | 907,107 |
Imports the module indicated in name
Args:
module_path: string representing a module path such as
'app.config' or 'app.extras.my_module'
Returns:
the module matching name of the last component, ie: for
'app.extras.my_module' it returns a
reference to my_module
Raises... | def module_import(module_path):
try:
# Import whole module path.
module = __import__(module_path)
# Split into components: ['contour',
# 'extras','appengine','ndb_persistence'].
components = module_path.split('.')
# Starting at the second component, set module ... | 907,295 |
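The comment above points at a subtlety of `__import__`: called with a dotted path, it returns the *top-level* package, so the remaining components must be walked with `getattr`. A runnable sketch:

```python
def module_import(module_path):
    # __import__("a.b.c") returns the top-level package "a";
    # traverse the remaining components to reach the leaf module.
    module = __import__(module_path)
    for component in module_path.split(".")[1:]:
        module = getattr(module, component)
    return module
```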
Traverse directory trees to find a contour.yaml file
Begins with the location of this file then checks the
working directory if not found
Args:
config_file: location of this file, override for
testing
Returns:
the path of contour.yaml or None if not found | def find_contour_yaml(config_file=__file__, names=None):
checked = set()
contour_yaml = _find_countour_yaml(os.path.dirname(config_file), checked,
names=names)
if not contour_yaml:
contour_yaml = _find_countour_yaml(os.getcwd(), checked, names=names)
... | 907,296 |
Traverse the directory tree identified by start
until a directory already in checked is encountered or the path
of contour.yaml is found.
Checked is present both to make the loop termination easy
to reason about and so the same directories do not get
rechecked
Args:
start: the path to... | def _find_countour_yaml(start, checked, names=None):
extensions = []
if names:
for name in names:
if not os.path.splitext(name)[1]:
extensions.append(name + ".yaml")
extensions.append(name + ".yml")
yaml_names = (names or []) + CONTOUR_YAML_NAMES + ... | 907,297 |
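The upward traversal described above (walk toward the root, with a `checked` set guaranteeing termination) can be sketched as follows; `find_upwards` is an illustrative name, not the module's function:

```python
import os

def find_upwards(start, filenames):
    # Walk from `start` toward the filesystem root, returning the
    # path of the first matching file found, else None. `checked`
    # makes loop termination easy to reason about, as in the source.
    directory = os.path.abspath(start)
    checked = set()
    while directory not in checked:
        checked.add(directory)
        for name in filenames:
            candidate = os.path.join(directory, name)
            if os.path.isfile(candidate):
                return candidate
        parent = os.path.dirname(directory)
        if parent == directory:  # reached the root
            break
        directory = parent
    return None
```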
Process a counterexample in the Rivest-Schapire way.
Args:
mma (DFA): The hypothesis automaton
w_string (str): The examined string to be consumed
Returns:
None | def _process_counter_example(self, mma, w_string):
diff = len(w_string)
same = 0
membership_answer = self._membership_query(w_string)
while True:
            i = (same + diff) // 2  # integer division; "/" would yield a float index on Python 3
access_string = self._run_in_hypothesis(mma, w_string, i)
if membership_ans... | 907,684 |
Utilize the observation table to construct a DFA.
The library used for representing the DFA is the python
bindings of the openFST library (pyFST).
Args:
None
Returns:
DFA: A DFA built based on a closed and consistent
... | def get_dfa_conjecture(self):
dfa = DFA(self.alphabet)
for s in self.observation_table.sm_vector:
for i in self.alphabet:
dst = self.observation_table.equiv_classes[s + i]
# If dst == None then the table is not closed.
            if dst is None:
... | 907,685 |
Implements the high level loop of the algorithm for learning a
DFA.
Args:
mma (DFA): The input automaton
Returns:
DFA: The learned DFA. | def learn_dfa(self, mma=None):
logging.info('Initializing learning procedure.')
if mma:
self._init_table_from_dfa(mma)
else:
self._init_table()
logging.info('Generating a closed and consistent observation table.')
while True:
closed ... | 907,687 |
This function allows an entity to publish data to the middleware.
Args:
data (string): contents to be published by this entity. | def publish(self, data):
if self.entity_api_key == "":
return {'status': 'failure', 'response': 'No API key found in request'}
publish_url = self.base_url + "api/0.1.0/publish"
publish_headers = {"apikey": self.entity_api_key}
publish_data = {
"exchange":... | 907,738 |
This function allows an entity to access the historic data.
Args:
entity (string): Name of the device to listen to
query_filters (string): Elastic search response format string
example, "pretty=true&size=10" | def db(self, entity, query_filters="size=10"):
if self.entity_api_key == "":
return {'status': 'failure', 'response': 'No API key found in request'}
historic_url = self.base_url + "api/0.1.0/historicData?" + query_filters
historic_headers = {
"apikey": self.enti... | 907,739 |
This function allows an entity to list the devices to subscribe for data. This function must be called
at least once, before doing a subscribe. Subscribe function will listen to devices that are bound here.
Args:
devices_to_bind (list): an array of devices to listen to.
... | def bind(self, devices_to_bind):
if self.entity_api_key == "":
return {'status': 'failure', 'response': 'No API key found in request'}
url = self.base_url + "api/0.1.0/subscribe/bind"
headers = {"apikey": self.entity_api_key}
data = {
"exchange": "amq.top... | 907,740 |
This function allows an entity to unbind devices that are already bound.
Args:
devices_to_unbind (list): an array of devices that are to be unbound ( stop listening)
Example unbind(["test10","testDemo105"]) | def unbind(self, devices_to_unbind):
if self.entity_api_key == "":
return {'status': 'failure', 'response': 'No API key found in request'}
url = self.base_url + "api/0.1.0/subscribe/unbind"
headers = {"apikey": self.entity_api_key}
data = {
"exchange": "a... | 907,741 |
This function allows an entity to subscribe for data from the devices specified in the bind operation. It
creates a thread with an event loop to manage the tasks created in start_subscribe_worker.
Args:
devices_to_bind (list): an array of devices to listen to | def subscribe(self, devices_to_bind=None):
        if self.entity_api_key == "":
            return {'status': 'failure', 'response': 'No API key found in request'}
        self.bind(devices_to_bind or [])
loop = asyncio.new_event_loop()
t1 = threading.Thread(target=self.start_subscribe_worker, args=(l... | 907,742 |