code | docstring | text
|---|---|---|
def _updateEmissionProbabilities(self):
"""Sample a new set of emission probabilites from the conditional distribution P(E | S, O)
"""
observations_by_state = [self.model.collect_observations_in_state(self.observations, state)
for state in range(self.model.nstates)]
self.model.output_model.sample(observations_by_state) | Sample a new set of emission probabilities from the conditional distribution P(E | S, O) | Below is the instruction that describes the task:
### Input:
Sample a new set of emission probabilities from the conditional distribution P(E | S, O)
### Response:
def _updateEmissionProbabilities(self):
"""Sample a new set of emission probabilites from the conditional distribution P(E | S, O)
"""
observations_by_state = [self.model.collect_observations_in_state(self.observations, state)
for state in range(self.model.nstates)]
self.model.output_model.sample(observations_by_state) |
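The collect-then-sample pattern above can be sketched for a discrete-output HMM. The helper names, the Dirichlet prior, and the categorical observation model below are illustrative assumptions, not the original `output_model` API:

```python
import random
from collections import Counter

def sample_dirichlet(alphas):
    # A Dirichlet draw is a vector of Gamma draws normalized to sum to 1.
    draws = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def update_emissions(observations, state_path, nstates, nsymbols, prior=1.0):
    # Group observations by the hidden state that emitted them, then sample
    # each state's emission row from its Dirichlet posterior P(E | S, O).
    rows = []
    for state in range(nstates):
        counts = Counter(o for o, s in zip(observations, state_path) if s == state)
        rows.append(sample_dirichlet([prior + counts[k] for k in range(nsymbols)]))
    return rows
```

Each returned row is a valid probability distribution over the observation symbols for one hidden state.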
def extend_substation(grid, critical_stations, grid_level):
"""
Reinforce MV or LV substation by exchanging the existing trafo and
installing a parallel one if necessary.
First, all available transformers in `critical_stations` are extended to
maximum power. If this does not solve all present issues, additional
transformers are built.
Parameters
----------
grid: GridDing0
Ding0 grid container
critical_stations : :any:`list`
List of stations with overloading
grid_level : str
Either "LV" or "MV". Basis to select right equipment.
Notes
-----
Currently implemented straightforwardly for LV stations
Returns
-------
type
#TODO: Description of return. Change type in the previous line accordingly
"""
load_factor_lv_trans_lc_normal = cfg_ding0.get(
'assumptions',
'load_factor_lv_trans_lc_normal')
load_factor_lv_trans_fc_normal = cfg_ding0.get(
'assumptions',
'load_factor_lv_trans_fc_normal')
trafo_params = grid.network._static_data['{grid_level}_trafos'.format(
grid_level=grid_level)]
trafo_s_max_max = max(trafo_params['S_nom'])
for station in critical_stations:
# determine if load or generation case and apply load factor
if station['s_max'][0] > station['s_max'][1]:
case = 'load'
lf_lv_trans_normal = load_factor_lv_trans_lc_normal
else:
case = 'gen'
lf_lv_trans_normal = load_factor_lv_trans_fc_normal
# cumulative maximum power of transformers installed
s_max_trafos = sum([_.s_max_a
for _ in station['station']._transformers])
# determine missing trafo power to solve overloading issue
s_trafo_missing = max(station['s_max']) - (
s_max_trafos * lf_lv_trans_normal)
# list of trafos with rated apparent power below `trafo_s_max_max`
extendable_trafos = [_ for _ in station['station']._transformers
if _.s_max_a < trafo_s_max_max]
# try to extend power of existing trafos
while (s_trafo_missing > 0) and extendable_trafos:
# only work with first of potentially multiple trafos
trafo = extendable_trafos[0]
trafo_s_max_a_before = trafo.s_max_a
# extend power of first trafo to next higher size available
extend_trafo_power(extendable_trafos, trafo_params)
# diminish missing trafo power by extended trafo power and update
# extendable trafos list
s_trafo_missing -= ((trafo.s_max_a * lf_lv_trans_normal) -
trafo_s_max_a_before)
extendable_trafos = [_ for _ in station['station']._transformers
if _.s_max_a < trafo_s_max_max]
# build new trafos inside station until the overloading issue is solved
if s_trafo_missing > 0:
trafo_type, trafo_cnt = select_transformers(grid, s_max={
's_max': s_trafo_missing,
'case': case
})
# create transformers and add them to station of LVGD
for t in range(0, trafo_cnt):
lv_transformer = TransformerDing0(
grid=grid,
id_db=id,
v_level=0.4,
s_max_longterm=trafo_type['S_nom'],
r=trafo_type['R'],
x=trafo_type['X'])
# add each transformer to its station
grid._station.add_transformer(lv_transformer)
logger.info("{stations_cnt} have been reinforced due to overloading "
"issues.".format(stations_cnt=len(critical_stations))) | Reinforce MV or LV substation by exchanging the existing trafo and
installing a parallel one if necessary.
First, all available transformers in a `critical_stations` are extended to
maximum power. If this does not solve all present issues, additional
transformers are build.
Parameters
----------
grid: GridDing0
Ding0 grid container
critical_stations : :any:`list`
List of stations with overloading
grid_level : str
Either "LV" or "MV". Basis to select right equipment.
Notes
-----
Curently straight forward implemented for LV stations
Returns
-------
type
#TODO: Description of return. Change type in the previous line accordingly | Below is the the instruction that describes the task:
### Input:
Reinforce MV or LV substation by exchanging the existing trafo and
installing a parallel one if necessary.
First, all available transformers in a `critical_stations` are extended to
maximum power. If this does not solve all present issues, additional
transformers are build.
Parameters
----------
grid: GridDing0
Ding0 grid container
critical_stations : :any:`list`
List of stations with overloading
grid_level : str
Either "LV" or "MV". Basis to select right equipment.
Notes
-----
Curently straight forward implemented for LV stations
Returns
-------
type
#TODO: Description of return. Change type in the previous line accordingly
### Response:
def extend_substation(grid, critical_stations, grid_level):
"""
Reinforce MV or LV substation by exchanging the existing trafo and
installing a parallel one if necessary.
First, all available transformers in a `critical_stations` are extended to
maximum power. If this does not solve all present issues, additional
transformers are build.
Parameters
----------
grid: GridDing0
Ding0 grid container
critical_stations : :any:`list`
List of stations with overloading
grid_level : str
Either "LV" or "MV". Basis to select right equipment.
Notes
-----
Curently straight forward implemented for LV stations
Returns
-------
type
#TODO: Description of return. Change type in the previous line accordingly
"""
load_factor_lv_trans_lc_normal = cfg_ding0.get(
'assumptions',
'load_factor_lv_trans_lc_normal')
load_factor_lv_trans_fc_normal = cfg_ding0.get(
'assumptions',
'load_factor_lv_trans_fc_normal')
trafo_params = grid.network._static_data['{grid_level}_trafos'.format(
grid_level=grid_level)]
trafo_s_max_max = max(trafo_params['S_nom'])
for station in critical_stations:
# determine if load or generation case and apply load factor
if station['s_max'][0] > station['s_max'][1]:
case = 'load'
lf_lv_trans_normal = load_factor_lv_trans_lc_normal
else:
case = 'gen'
lf_lv_trans_normal = load_factor_lv_trans_fc_normal
# cumulative maximum power of transformers installed
s_max_trafos = sum([_.s_max_a
for _ in station['station']._transformers])
# determine missing trafo power to solve overloading issue
s_trafo_missing = max(station['s_max']) - (
s_max_trafos * lf_lv_trans_normal)
# list of trafos with rated apparent power below `trafo_s_max_max`
extendable_trafos = [_ for _ in station['station']._transformers
if _.s_max_a < trafo_s_max_max]
# try to extend power of existing trafos
while (s_trafo_missing > 0) and extendable_trafos:
# only work with first of potentially multiple trafos
trafo = extendable_trafos[0]
trafo_s_max_a_before = trafo.s_max_a
# extend power of first trafo to next higher size available
extend_trafo_power(extendable_trafos, trafo_params)
# diminish missing trafo power by extended trafo power and update
# extendable trafos list
s_trafo_missing -= ((trafo.s_max_a * lf_lv_trans_normal) -
trafo_s_max_a_before)
extendable_trafos = [_ for _ in station['station']._transformers
if _.s_max_a < trafo_s_max_max]
# build new trafos inside station until
if s_trafo_missing > 0:
trafo_type, trafo_cnt = select_transformers(grid, s_max={
's_max': s_trafo_missing,
'case': case
})
# create transformers and add them to station of LVGD
for t in range(0, trafo_cnt):
lv_transformer = TransformerDing0(
grid=grid,
id_db=id,
v_level=0.4,
s_max_longterm=trafo_type['S_nom'],
r=trafo_type['R'],
x=trafo_type['X'])
# add each transformer to its station
grid._station.add_transformer(lv_transformer)
logger.info("{stations_cnt} have been reinforced due to overloading "
"issues.".format(stations_cnt=len(critical_stations))) |
def format_servers(servers):
"""
:param servers: server list; each element can take either of two formats.
- dict
servers = [
{'name':'node1','host':'127.0.0.1','port':10000,'db':0},
{'name':'node2','host':'127.0.0.1','port':11000,'db':0},
{'name':'node3','host':'127.0.0.1','port':12000,'db':0},
]
- url_schema
servers = ['redis://127.0.0.1:10000/0?name=node1',
'redis://127.0.0.1:11000/0?name=node2',
'redis://127.0.0.1:12000/0?name=node3'
]
"""
configs = []
if not isinstance(servers, list):
raise ValueError("server's config must be list")
_type = type(servers[0])
if _type == dict:
return servers
if (sys.version_info[0] == 3 and _type in [str, bytes]) \
or (sys.version_info[0] == 2 and _type in [str, unicode]):
for config in servers:
configs.append(parse_url(config))
else:
raise ValueError("invalid server config")
return configs
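`parse_url` is not shown in the snippet; one plausible stdlib-only implementation for the `redis://host:port/db?name=x` scheme might look like this (the exact key set is an assumption matching the dict format above):

```python
from urllib.parse import urlparse, parse_qs

def parse_url(url):
    # Convert 'redis://host:port/db?name=x' into the dict form shown above.
    parsed = urlparse(url)
    name = parse_qs(parsed.query).get('name', [None])[0]
    return {
        'name': name,
        'host': parsed.hostname,
        'port': parsed.port,
        'db': int(parsed.path.lstrip('/') or 0),
    }
```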
def tree2text(tree_obj, indent=4):
# type: (TreeInfo, int) -> str
"""
Return text representation of a decision tree.
"""
parts = []
def _format_node(node, depth=0):
# type: (NodeInfo, int) -> None
def p(*args):
# type: (*str) -> None
parts.append(" " * depth * indent)
parts.extend(args)
if node.is_leaf:
value_repr = _format_leaf_value(tree_obj, node)
parts.append(" ---> {}".format(value_repr))
else:
assert node.left is not None
assert node.right is not None
feat_name = node.feature_name
if depth > 0:
parts.append("\n")
left_samples = node.left.sample_ratio
p("{feat_name} <= {threshold:0.3f} ({left_samples:0.1%})".format(
left_samples=left_samples,
feat_name=feat_name,
threshold=node.threshold,
))
_format_node(node.left, depth=depth + 1)
parts.append("\n")
right_samples = node.right.sample_ratio
p("{feat_name} > {threshold:0.3f} ({right_samples:0.1%})".format(
right_samples=right_samples,
feat_name=feat_name,
threshold=node.threshold,
))
_format_node(node.right, depth=depth + 1)
_format_node(tree_obj.tree)
return "".join(parts) | Return text representation of a decision tree. | Below is the the instruction that describes the task:
### Input:
Return text representation of a decision tree.
### Response:
def tree2text(tree_obj, indent=4):
# type: (TreeInfo, int) -> str
"""
Return text representation of a decision tree.
"""
parts = []
def _format_node(node, depth=0):
# type: (NodeInfo, int) -> None
def p(*args):
# type: (*str) -> None
parts.append(" " * depth * indent)
parts.extend(args)
if node.is_leaf:
value_repr = _format_leaf_value(tree_obj, node)
parts.append(" ---> {}".format(value_repr))
else:
assert node.left is not None
assert node.right is not None
feat_name = node.feature_name
if depth > 0:
parts.append("\n")
left_samples = node.left.sample_ratio
p("{feat_name} <= {threshold:0.3f} ({left_samples:0.1%})".format(
left_samples=left_samples,
feat_name=feat_name,
threshold=node.threshold,
))
_format_node(node.left, depth=depth + 1)
parts.append("\n")
right_samples = node.right.sample_ratio
p("{feat_name} > {threshold:0.3f} ({right_samples:0.1%})".format(
right_samples=right_samples,
feat_name=feat_name,
threshold=node.threshold,
))
_format_node(node.right, depth=depth + 1)
_format_node(tree_obj.tree)
return "".join(parts) |
def _aload16(ins):
''' Loads a 16 bit value from a memory address
If the 2nd argument starts with '*', it is always treated as
an indirect value.
'''
output = _addr(ins.quad[2])
output.append('ld e, (hl)')
output.append('inc hl')
output.append('ld d, (hl)')
output.append('ex de, hl')
output.append('push hl')
return output
def do_clustering(types, max_clust):
"""
Helper method for clustering that takes a list of all of the things being
clustered (which are assumed to be binary numbers represented as strings),
and an int representing the maximum number of clusters that are allowed.
Returns: A dictionary mapping cluster ids to lists of numbers that are part
of that cluster.
"""
# Fill in leading zeros to make all numbers same length.
ls = [list(t[t.find("b")+1:]) for t in types]
prepend_zeros_to_lists(ls)
dist_matrix = pdist(ls, weighted_hamming)
clusters = hierarchicalcluster.complete(dist_matrix)
clusters = hierarchicalcluster.fcluster(clusters, max_clust,
criterion="maxclust")
# Group members of each cluster together
cluster_dict = dict((c, []) for c in set(clusters))
for i in range(len(types)):
cluster_dict[clusters[i]].append(types[i])
return cluster_dict
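`prepend_zeros_to_lists` is referenced but not shown; a stdlib-only version consistent with its use here (equalizing the lengths of the bit lists in place) might be:

```python
def prepend_zeros_to_lists(ls):
    # Pad each bit list in place with leading '0' characters so that all
    # lists reach the length of the longest one.
    width = max(len(bits) for bits in ls)
    for bits in ls:
        bits[:0] = ['0'] * (width - len(bits))
```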
def numberofnetworks(self):
"""The number of distinct networks defined by the|Node| and
|Element| objects currently handled by the |HydPy| object."""
sels1 = selectiontools.Selections()
sels2 = selectiontools.Selections()
complete = selectiontools.Selection('complete',
self.nodes, self.elements)
for node in self.endnodes:
sel = complete.copy(node.name).select_upstream(node)
sels1 += sel
sels2 += sel.copy(node.name)
for sel1 in sels1:
for sel2 in sels2:
if sel1.name != sel2.name:
sel1 -= sel2
for name in list(sels1.names):
if not sels1[name].elements:
del sels1[name]
return sels1
def check_arguments_compatibility(the_callable, argd):
"""
Check if calling the_callable with the given arguments would be correct
or not.
>>> def foo(arg1, arg2, arg3='val1', arg4='val2', *args, **argd):
... pass
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla', 'arg2': 'blo'})
... except ValueError as err: print 'failed'
... else: print 'ok'
ok
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla'})
... except ValueError as err: print 'failed'
... else: print 'ok'
failed
Basically this function is simulating the call:
>>> the_callable(**argd)
but it only checks for the correctness of the arguments, without
actually calling the_callable.
:param the_callable: the callable to be analyzed.
:type the_callable: function/callable
:param argd: the arguments to be passed.
:type argd: dict
:raise ValueError: in case of incompatibility
"""
if not argd:
argd = {}
args, dummy, varkw, defaults = inspect.getargspec(the_callable)
tmp_args = list(args)
optional_args = []
args_dict = {}
if defaults:
defaults = list(defaults)
else:
defaults = []
while defaults:
arg = tmp_args.pop()
optional_args.append(arg)
args_dict[arg] = defaults.pop()
while tmp_args:
args_dict[tmp_args.pop()] = None
for arg, dummy_value in iteritems(argd):
if arg in args_dict:
del args_dict[arg]
elif not varkw:
raise ValueError(
'Argument %s not expected when calling callable '
'"%s" with arguments %s' % (
arg, get_callable_signature_as_string(the_callable), argd))
for arg in args_dict.keys():
if arg in optional_args:
del args_dict[arg]
if args_dict:
raise ValueError(
'Arguments %s not specified when calling callable '
'"%s" with arguments %s' % (
', '.join(args_dict.keys()),
get_callable_signature_as_string(the_callable),
argd)) | Check if calling the_callable with the given arguments would be correct
or not.
>>> def foo(arg1, arg2, arg3='val1', arg4='val2', *args, **argd):
... pass
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla', 'arg2': 'blo'})
... except ValueError as err: print 'failed'
... else: print 'ok'
ok
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla'})
... except ValueError as err: print 'failed'
... else: print 'ok'
failed
Basically this function is simulating the call:
>>> the_callable(**argd)
but it only checks for the correctness of the arguments, without
actually calling the_callable.
:param the_callable: the callable to be analyzed.
:type the_callable: function/callable
:param argd: the arguments to be passed.
:type argd: dict
:raise ValueError: in case of uncompatibility | Below is the the instruction that describes the task:
### Input:
Check if calling the_callable with the given arguments would be correct
or not.
>>> def foo(arg1, arg2, arg3='val1', arg4='val2', *args, **argd):
... pass
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla', 'arg2': 'blo'})
... except ValueError as err: print 'failed'
... else: print 'ok'
ok
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla'})
... except ValueError as err: print 'failed'
... else: print 'ok'
failed
Basically this function is simulating the call:
>>> the_callable(**argd)
but it only checks for the correctness of the arguments, without
actually calling the_callable.
:param the_callable: the callable to be analyzed.
:type the_callable: function/callable
:param argd: the arguments to be passed.
:type argd: dict
:raise ValueError: in case of uncompatibility
### Response:
def check_arguments_compatibility(the_callable, argd):
"""
Check if calling the_callable with the given arguments would be correct
or not.
>>> def foo(arg1, arg2, arg3='val1', arg4='val2', *args, **argd):
... pass
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla', 'arg2': 'blo'})
... except ValueError as err: print 'failed'
... else: print 'ok'
ok
>>> try: check_arguments_compatibility(foo, {'arg1': 'bla'})
... except ValueError as err: print 'failed'
... else: print 'ok'
failed
Basically this function is simulating the call:
>>> the_callable(**argd)
but it only checks for the correctness of the arguments, without
actually calling the_callable.
:param the_callable: the callable to be analyzed.
:type the_callable: function/callable
:param argd: the arguments to be passed.
:type argd: dict
:raise ValueError: in case of uncompatibility
"""
if not argd:
argd = {}
args, dummy, varkw, defaults = inspect.getargspec(the_callable)
tmp_args = list(args)
optional_args = []
args_dict = {}
if defaults:
defaults = list(defaults)
else:
defaults = []
while defaults:
arg = tmp_args.pop()
optional_args.append(arg)
args_dict[arg] = defaults.pop()
while tmp_args:
args_dict[tmp_args.pop()] = None
for arg, dummy_value in iteritems(argd):
if arg in args_dict:
del args_dict[arg]
elif not varkw:
raise ValueError(
'Argument %s not expected when calling callable '
'"%s" with arguments %s' % (
arg, get_callable_signature_as_string(the_callable), argd))
for arg in args_dict.keys():
if arg in optional_args:
del args_dict[arg]
if args_dict:
raise ValueError(
'Arguments %s not specified when calling callable '
'"%s" with arguments %s' % (
', '.join(args_dict.keys()),
get_callable_signature_as_string(the_callable),
argd)) |
def keys_delete(gandi, fqdn, key, force):
"""Delete a key for a domain."""
if not force:
proceed = click.confirm('Are you sure you want to delete key %s on '
'domain %s?' % (key, fqdn))
if not proceed:
return
result = gandi.dns.keys_delete(fqdn, key)
gandi.echo('Delete successful.')
return result
async def spawn(self, agent_cls, *args, addr=None, **kwargs):
"""Spawn a new agent in a slave environment.
:param str agent_cls:
``qualname`` of the agent class.
That is, the name should be in the form ``pkg.mod:cls``, e.g.
``creamas.core.agent:CreativeAgent``.
:param str addr:
Optional. Address for the slave environment's manager.
If :attr:`addr` is None, spawns the agent in the slave environment
with currently smallest number of agents.
:returns: :class:`aiomas.rpc.Proxy` and address for the created agent.
The ``*args`` and ``**kwargs`` are passed down to the agent's
:meth:`__init__`.
.. note::
Use :meth:`~creamas.mp.MultiEnvironment.spawn_n` to spawn large
number of agents with identical initialization parameters.
"""
if addr is None:
addr = await self._get_smallest_env()
r_manager = await self.env.connect(addr)
return await r_manager.spawn(agent_cls, *args, **kwargs)
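The ``pkg.mod:cls`` qualname format from the docstring can be resolved with a small helper (the helper name is hypothetical; stdlib only):

```python
import importlib

def resolve_qualname(qualname):
    # 'pkg.mod:Cls' -> the class object, mirroring the addressing scheme
    # used by spawn()'s agent_cls argument.
    mod_name, _, cls_name = qualname.partition(':')
    return getattr(importlib.import_module(mod_name), cls_name)
```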
def _contained_parameters(expression):
"""
Determine which parameters are contained in this expression.
:param Expression expression: expression involving parameters
:return: set of parameters contained in this expression
:rtype: set
"""
if isinstance(expression, BinaryExp):
return _contained_parameters(expression.op1) | _contained_parameters(expression.op2)
elif isinstance(expression, Function):
return _contained_parameters(expression.expression)
elif isinstance(expression, Parameter):
return {expression}
else:
return set()
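A self-contained version with stub expression classes shows the recursion; the stubs stand in for the real expression classes and omit the `Function` case:

```python
class Parameter:
    def __init__(self, name):
        self.name = name

class BinaryExp:
    def __init__(self, op1, op2):
        self.op1, self.op2 = op1, op2

def contained_parameters(expression):
    # Recurse into sub-expressions, collecting Parameter leaves into a set.
    if isinstance(expression, BinaryExp):
        return contained_parameters(expression.op1) | contained_parameters(expression.op2)
    if isinstance(expression, Parameter):
        return {expression}
    return set()
```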
def bulk_get_or_create(self, data_list):
"""
data_list is the data to get or create
We generate the query and set all the record keys based on the passed-in queryset.
Then we loop over each item in data_list, which already has the keys, so there is no need to generate them; this should save a lot of time.
Use values instead of whole objects; it is much faster.
Args:
data_list:
Returns:
"""
items_to_create = dict()
for record_key, record_config in data_list.items():
if record_key not in items_to_create:
record = self.get_instance(record_key)
if not record:
items_to_create[record_key] = self.model_cls(**record_config)
if items_to_create:
"""
TODO. I think we can optimize this. Switch to values, get the primary id
Query set is just select the model with that ID. Return the model object without running the full queryset again. Should be a lot faster.
"""
self.model_cls.objects.bulk_create(items_to_create.values())
self.set_record_lookup(True)
        return self.record_lookup
def ecdsa_sign_compact(msg32, seckey):
"""
Takes the same message and seckey as _ecdsa_sign_recoverable
Returns an unsigned char array of length 65 containing the signed message
"""
# Assign 65 bytes to output
output64 = ffi.new("unsigned char[65]")
# ffi definition of recid
recid = ffi.new("int *")
lib.secp256k1_ecdsa_recoverable_signature_serialize_compact(
ctx,
output64,
recid,
_ecdsa_sign_recoverable(msg32, seckey)
)
# Assign recid to the last byte in the output array
r = ffi.buffer(output64)[:64] + struct.pack("B", recid[0])
assert len(r) == 65, len(r)
return r | Takes the same message and seckey as _ecdsa_sign_recoverable
Returns an unsigned char array of length 65 containing the signed message | Below is the the instruction that describes the task:
### Input:
Takes the same message and seckey as _ecdsa_sign_recoverable
Returns an unsigned char array of length 65 containing the signed message
### Response:
def ecdsa_sign_compact(msg32, seckey):
"""
Takes the same message and seckey as _ecdsa_sign_recoverable
Returns an unsigned char array of length 65 containing the signed message
"""
# Assign 65 bytes to output
output64 = ffi.new("unsigned char[65]")
# ffi definition of recid
recid = ffi.new("int *")
lib.secp256k1_ecdsa_recoverable_signature_serialize_compact(
ctx,
output64,
recid,
_ecdsa_sign_recoverable(msg32, seckey)
)
# Assign recid to the last byte in the output array
r = ffi.buffer(output64)[:64] + struct.pack("B", recid[0])
assert len(r) == 65, len(r)
return r |
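The trailing-byte layout used above — 64 signature bytes followed by a one-byte recovery id — can be shown in isolation; `sig64` here is a placeholder for the serialized compact signature, not real secp256k1 output:

```python
import struct

def append_recid(sig64: bytes, recid: int) -> bytes:
    """Append the recovery id as a single trailing byte, as the
    libsecp256k1 wrapper above does with struct.pack("B", recid[0])."""
    assert len(sig64) == 64
    return sig64[:64] + struct.pack("B", recid)

compact = append_recid(b"\x00" * 64, 1)
```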
def _stderr_raw(self, s):
"""Writes the string to stderr"""
print(s, end='', file=sys.stderr)
sys.stderr.flush() | Writes the string to stderr | Below is the instruction that describes the task:
### Input:
Writes the string to stderr
### Response:
def _stderr_raw(self, s):
"""Writes the string to stderr"""
print(s, end='', file=sys.stderr)
sys.stderr.flush() |
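The same write-and-flush pattern, with the stream made injectable so it can be exercised against an `io.StringIO` instead of the real `sys.stderr`:

```python
import io
import sys

def stderr_raw(s, stream=None):
    """Write s to the given stream (sys.stderr by default) without a
    trailing newline, flushing immediately so output appears in order."""
    stream = stream if stream is not None else sys.stderr
    print(s, end='', file=stream)
    stream.flush()

buf = io.StringIO()
stderr_raw("progress...", stream=buf)
captured = buf.getvalue()
```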
def create_description(self, complib=None, complevel=None,
fletcher32=False, expectedrows=None):
""" create the description of the table from the axes & values """
# use the provided expected rows if it's passed
if expectedrows is None:
expectedrows = max(self.nrows_expected, 10000)
d = dict(name='table', expectedrows=expectedrows)
# description from the axes & values
d['description'] = {a.cname: a.typ for a in self.axes}
if complib:
if complevel is None:
complevel = self._complevel or 9
filters = _tables().Filters(
complevel=complevel, complib=complib,
fletcher32=fletcher32 or self._fletcher32)
d['filters'] = filters
elif self._filters is not None:
d['filters'] = self._filters
return d | create the description of the table from the axes & values | Below is the instruction that describes the task:
### Input:
create the description of the table from the axes & values
### Response:
def create_description(self, complib=None, complevel=None,
fletcher32=False, expectedrows=None):
""" create the description of the table from the axes & values """
# use the provided expected rows if it's passed
if expectedrows is None:
expectedrows = max(self.nrows_expected, 10000)
d = dict(name='table', expectedrows=expectedrows)
# description from the axes & values
d['description'] = {a.cname: a.typ for a in self.axes}
if complib:
if complevel is None:
complevel = self._complevel or 9
filters = _tables().Filters(
complevel=complevel, complib=complib,
fletcher32=fletcher32 or self._fletcher32)
d['filters'] = filters
elif self._filters is not None:
d['filters'] = self._filters
return d |
def propagate_event_to_delegate(self, event, eventhandler):
"""Propagate the given Mouse event to the widgetdelegate
Enter edit mode, get the editor widget and issue an event on that widget.
:param event: the mouse event
:type event: :class:`QtGui.QMouseEvent`
:param eventhandler: the eventhandler to use. E.g. ``"mousePressEvent"``
:type eventhandler: str
:returns: None
:rtype: None
:raises: None
"""
# if we are recursing because we sent a click event, and it got propagated to the parents
# and we receive it again, terminate
if self.__recursing:
return
# find index at mouse position
i = self.index_at_event(event)
# if the index is not valid, we don't care
# handle it the default way
if not i.isValid():
return getattr(super(WidgetDelegateViewMixin, self), eventhandler)(event)
# get the widget delegate. if there is None, handle it the default way
delegate = self.itemDelegate(i)
if not isinstance(delegate, WidgetDelegate):
return getattr(super(WidgetDelegateViewMixin, self), eventhandler)(event)
# see if there is already an editor
widget = delegate.edit_widget(i)
if not widget:
# close all editors, then start editing
delegate.close_editors()
# Force editing. If in editing state, view will refuse editing.
if self.state() == self.EditingState:
self.setState(self.NoState)
self.edit(i)
# get the editor widget. if there is None, there is nothing to do so return
widget = delegate.edit_widget(i)
if not widget:
return getattr(super(WidgetDelegateViewMixin, self), eventhandler)(event)
# try to find the relative position to the widget
pid = self.get_pos_in_delegate(i, event.globalPos())
widgetatpos = widget.childAt(pid)
if widgetatpos:
widgettoclick = widgetatpos
g = widget.mapToGlobal(pid)
clickpos = widgettoclick.mapFromGlobal(g)
else:
widgettoclick = widget
clickpos = pid
# create a new event for the editor widget.
e = QtGui.QMouseEvent(event.type(),
clickpos,
event.button(),
event.buttons(),
event.modifiers())
# before we send, make sure, we cannot recurse
self.__recursing = True
try:
r = QtGui.QApplication.sendEvent(widgettoclick, e)
finally:
self.__recursing = False # out of the recursion. now we can accept click events again
return r | Propagate the given Mouse event to the widgetdelegate
Enter edit mode, get the editor widget and issue an event on that widget.
:param event: the mouse event
:type event: :class:`QtGui.QMouseEvent`
:param eventhandler: the eventhandler to use. E.g. ``"mousePressEvent"``
:type eventhandler: str
:returns: None
:rtype: None
:raises: None | Below is the instruction that describes the task:
### Input:
Propagate the given Mouse event to the widgetdelegate
Enter edit mode, get the editor widget and issue an event on that widget.
:param event: the mouse event
:type event: :class:`QtGui.QMouseEvent`
:param eventhandler: the eventhandler to use. E.g. ``"mousePressEvent"``
:type eventhandler: str
:returns: None
:rtype: None
:raises: None
### Response:
def propagate_event_to_delegate(self, event, eventhandler):
"""Propagate the given Mouse event to the widgetdelegate
Enter edit mode, get the editor widget and issue an event on that widget.
:param event: the mouse event
:type event: :class:`QtGui.QMouseEvent`
:param eventhandler: the eventhandler to use. E.g. ``"mousePressEvent"``
:type eventhandler: str
:returns: None
:rtype: None
:raises: None
"""
# if we are recursing because we sent a click event, and it got propagated to the parents
# and we receive it again, terminate
if self.__recursing:
return
# find index at mouse position
i = self.index_at_event(event)
# if the index is not valid, we don't care
# handle it the default way
if not i.isValid():
return getattr(super(WidgetDelegateViewMixin, self), eventhandler)(event)
# get the widget delegate. if there is None, handle it the default way
delegate = self.itemDelegate(i)
if not isinstance(delegate, WidgetDelegate):
return getattr(super(WidgetDelegateViewMixin, self), eventhandler)(event)
# see if there is already an editor
widget = delegate.edit_widget(i)
if not widget:
# close all editors, then start editing
delegate.close_editors()
# Force editing. If in editing state, view will refuse editing.
if self.state() == self.EditingState:
self.setState(self.NoState)
self.edit(i)
# get the editor widget. if there is None, there is nothing to do so return
widget = delegate.edit_widget(i)
if not widget:
return getattr(super(WidgetDelegateViewMixin, self), eventhandler)(event)
# try to find the relative position to the widget
pid = self.get_pos_in_delegate(i, event.globalPos())
widgetatpos = widget.childAt(pid)
if widgetatpos:
widgettoclick = widgetatpos
g = widget.mapToGlobal(pid)
clickpos = widgettoclick.mapFromGlobal(g)
else:
widgettoclick = widget
clickpos = pid
# create a new event for the editor widget.
e = QtGui.QMouseEvent(event.type(),
clickpos,
event.button(),
event.buttons(),
event.modifiers())
# before we send, make sure, we cannot recurse
self.__recursing = True
try:
r = QtGui.QApplication.sendEvent(widgettoclick, e)
finally:
self.__recursing = False # out of the recursion. now we can accept click events again
return r |
def get_sgburst_waveform(template=None, **kwargs):
"""Return the plus and cross polarizations of a time domain
sine-Gaussian burst waveform.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
approximant : string
A string that indicates the chosen approximant. See `td_approximants`
for available options.
q : float
The quality factor of a sine-Gaussian burst
frequency : float
The centre-frequency of a sine-Gaussian burst
delta_t : float
The time step used to generate the waveform
hrss : float
The strain rss
amplitude: float
The strain amplitude
Returns
-------
hplus: TimeSeries
The plus polarization of the waveform.
hcross: TimeSeries
The cross polarization of the waveform.
"""
input_params = props_sgburst(template,**kwargs)
for arg in sgburst_required_args:
if arg not in input_params:
raise ValueError("Please provide " + str(arg))
return _lalsim_sgburst_waveform(**input_params) | Return the plus and cross polarizations of a time domain
sine-Gaussian burst waveform.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
approximant : string
A string that indicates the chosen approximant. See `td_approximants`
for available options.
q : float
The quality factor of a sine-Gaussian burst
frequency : float
The centre-frequency of a sine-Gaussian burst
delta_t : float
The time step used to generate the waveform
hrss : float
The strain rss
amplitude: float
The strain amplitude
Returns
-------
hplus: TimeSeries
The plus polarization of the waveform.
hcross: TimeSeries
The cross polarization of the waveform. | Below is the instruction that describes the task:
### Input:
Return the plus and cross polarizations of a time domain
sine-Gaussian burst waveform.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
approximant : string
A string that indicates the chosen approximant. See `td_approximants`
for available options.
q : float
The quality factor of a sine-Gaussian burst
frequency : float
The centre-frequency of a sine-Gaussian burst
delta_t : float
The time step used to generate the waveform
hrss : float
The strain rss
amplitude: float
The strain amplitude
Returns
-------
hplus: TimeSeries
The plus polarization of the waveform.
hcross: TimeSeries
The cross polarization of the waveform.
### Response:
def get_sgburst_waveform(template=None, **kwargs):
"""Return the plus and cross polarizations of a time domain
sine-Gaussian burst waveform.
Parameters
----------
template: object
An object that has attached properties. This can be used to substitute
for keyword arguments. A common example would be a row in an xml table.
approximant : string
A string that indicates the chosen approximant. See `td_approximants`
for available options.
q : float
The quality factor of a sine-Gaussian burst
frequency : float
The centre-frequency of a sine-Gaussian burst
delta_t : float
The time step used to generate the waveform
hrss : float
The strain rss
amplitude: float
The strain amplitude
Returns
-------
hplus: TimeSeries
The plus polarization of the waveform.
hcross: TimeSeries
The cross polarization of the waveform.
"""
input_params = props_sgburst(template,**kwargs)
for arg in sgburst_required_args:
if arg not in input_params:
raise ValueError("Please provide " + str(arg))
return _lalsim_sgburst_waveform(**input_params) |
def conspectus_api():
"""
Virtual ``conspectus.py`` file for brython. This file contains following
variables:
Attributes:
consp_dict (dict): Dictionary with conspects.
cosp_id_pairs (list): List of tuples ``(name, id)`` for conspectus.
subs_by_mdt (dict): Dictionary containing ``mdt: sub_dict``.
mdt_by_name (dict): ``name: mdt`` mapping.
Note:
Example of the `cons_dict` format::
{
"1": {
"id": "1",
"name": "Antropologie, etnografie",
"subconspects": {
"304": {
"conspect_id": "1",
"name": "Kulturn\u00ed politika",
"en_name": "Culture and institutions",
"mdt": "304",
"ddc": "306.2",
},
...
}
},
...
}
Values are stored in json and unpacked after load. This is used because of
performance issues with parsing large brython files.
"""
def to_json(data):
"""
JSON conversion is used, because brython has BIG performance issues
when parsing large python sources. It is actually cheaper to just load
it as JSON objects.
Load times:
Brython: 24s firefox, 54s chromium
JSON: 14s firefox, 9s chromium
"""
return repr(json.dumps(data))
conspectus_dict = json.loads(read_template("full_conspectus.json"))
# raw conspectus dictionary
out = PY_HEADER
out += "import json\n"
out += "consp_dict = json.loads(%s)\n\n" % to_json(conspectus_dict)
# (consp, id) pairs
cosp_id_pairs = sorted(
(x["name"], x["id"])
for x in conspectus_dict.values()
)
out += "cosp_id_pairs = json.loads(%s)\n\n" % to_json(cosp_id_pairs)
# mdt -> subconspect mapping
subs_by_mdt = [
x["subconspects"].values()
for x in conspectus_dict.values()
]
subs_by_mdt = {d["mdt"]: d for d in sum(subs_by_mdt, [])}
out += "subs_by_mdt = json.loads(%s)\n\n" % to_json(subs_by_mdt)
# subconspect_name -> subconspect mapping
out += "mdt_by_name = json.loads(%s)\n\n" % to_json({
d["name"]: d["mdt"]
for d in subs_by_mdt.values()
})
return out | Virtual ``conspectus.py`` file for brython. This file contains following
variables:
Attributes:
consp_dict (dict): Dictionary with conspects.
cosp_id_pairs (list): List of tuples ``(name, id)`` for conspectus.
subs_by_mdt (dict): Dictionary containing ``mdt: sub_dict``.
mdt_by_name (dict): ``name: mdt`` mapping.
Note:
Example of the `cons_dict` format::
{
"1": {
"id": "1",
"name": "Antropologie, etnografie",
"subconspects": {
"304": {
"conspect_id": "1",
"name": "Kulturn\u00ed politika",
"en_name": "Culture and institutions",
"mdt": "304",
"ddc": "306.2",
},
...
}
},
...
}
Values are stored in json and unpacked after load. This is used because of
performance issues with parsing large brython files. | Below is the instruction that describes the task:
### Input:
Virtual ``conspectus.py`` file for brython. This file contains following
variables:
Attributes:
consp_dict (dict): Dictionary with conspects.
cosp_id_pairs (list): List of tuples ``(name, id)`` for conspectus.
subs_by_mdt (dict): Dictionary containing ``mdt: sub_dict``.
mdt_by_name (dict): ``name: mdt`` mapping.
Note:
Example of the `cons_dict` format::
{
"1": {
"id": "1",
"name": "Antropologie, etnografie",
"subconspects": {
"304": {
"conspect_id": "1",
"name": "Kulturn\u00ed politika",
"en_name": "Culture and institutions",
"mdt": "304",
"ddc": "306.2",
},
...
}
},
...
}
Values are stored in json and unpacked after load. This is used because of
performance issues with parsing large brython files.
### Response:
def conspectus_api():
"""
Virtual ``conspectus.py`` file for brython. This file contains following
variables:
Attributes:
consp_dict (dict): Dictionary with conspects.
cosp_id_pairs (list): List of tuples ``(name, id)`` for conspectus.
subs_by_mdt (dict): Dictionary containing ``mdt: sub_dict``.
mdt_by_name (dict): ``name: mdt`` mapping.
Note:
Example of the `cons_dict` format::
{
"1": {
"id": "1",
"name": "Antropologie, etnografie",
"subconspects": {
"304": {
"conspect_id": "1",
"name": "Kulturn\u00ed politika",
"en_name": "Culture and institutions",
"mdt": "304",
"ddc": "306.2",
},
...
}
},
...
}
Values are stored in json and unpacked after load. This is used because of
performance issues with parsing large brython files.
"""
def to_json(data):
"""
JSON conversion is used, because brython has BIG performance issues
when parsing large python sources. It is actually cheaper to just load
it as JSON objects.
Load times:
Brython: 24s firefox, 54s chromium
JSON: 14s firefox, 9s chromium
"""
return repr(json.dumps(data))
conspectus_dict = json.loads(read_template("full_conspectus.json"))
# raw conspectus dictionary
out = PY_HEADER
out += "import json\n"
out += "consp_dict = json.loads(%s)\n\n" % to_json(conspectus_dict)
# (consp, id) pairs
cosp_id_pairs = sorted(
(x["name"], x["id"])
for x in conspectus_dict.values()
)
out += "cosp_id_pairs = json.loads(%s)\n\n" % to_json(cosp_id_pairs)
# mdt -> subconspect mapping
subs_by_mdt = [
x["subconspects"].values()
for x in conspectus_dict.values()
]
subs_by_mdt = {d["mdt"]: d for d in sum(subs_by_mdt, [])}
out += "subs_by_mdt = json.loads(%s)\n\n" % to_json(subs_by_mdt)
# subconspect_name -> subconspect mapping
out += "mdt_by_name = json.loads(%s)\n\n" % to_json({
d["name"]: d["mdt"]
for d in subs_by_mdt.values()
})
return out |
def start(self, n):
"""Start n copies of the process using a batch system."""
self.log.debug("Starting %s: %r", self.__class__.__name__, self.args)
# Here we save profile_dir in the context so they
# can be used in the batch script template as {profile_dir}
self.write_batch_script(n)
output = check_output(self.args, env=os.environ)
job_id = self.parse_job_id(output)
self.notify_start(job_id)
return job_id | Start n copies of the process using a batch system. | Below is the instruction that describes the task:
### Input:
Start n copies of the process using a batch system.
### Response:
def start(self, n):
"""Start n copies of the process using a batch system."""
self.log.debug("Starting %s: %r", self.__class__.__name__, self.args)
# Here we save profile_dir in the context so they
# can be used in the batch script template as {profile_dir}
self.write_batch_script(n)
output = check_output(self.args, env=os.environ)
job_id = self.parse_job_id(output)
self.notify_start(job_id)
return job_id |
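`parse_job_id` is not defined in this excerpt; a hypothetical parser for sbatch-style submission output might look like the sketch below. The regex and the sample output line are assumptions — real launchers use per-backend patterns.

```python
import re

def parse_job_id(output: bytes) -> str:
    """Extract the first run of digits from a submission banner such as
    b'Submitted batch job 12345\\n' and return it as a string."""
    m = re.search(rb"(\d+)", output)
    if m is None:
        raise ValueError("could not find a job id in %r" % (output,))
    return m.group(1).decode()

job_id = parse_job_id(b"Submitted batch job 12345\n")
```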
def iter_penalty_model_from_specification(cur, specification):
"""Iterate through all penalty models in the cache matching the
given specification.
Args:
cur (:class:`sqlite3.Cursor`): An sqlite3 cursor. This function
is meant to be run within a :obj:`with` statement.
specification (:class:`penaltymodel.Specification`): A specification
for a penalty model.
Yields:
:class:`penaltymodel.PenaltyModel`
"""
encoded_data = {}
nodelist = sorted(specification.graph)
edgelist = sorted(sorted(edge) for edge in specification.graph.edges)
encoded_data['num_nodes'] = len(nodelist)
encoded_data['num_edges'] = len(edgelist)
encoded_data['edges'] = json.dumps(edgelist, separators=(',', ':'))
encoded_data['num_variables'] = len(next(iter(specification.feasible_configurations)))
encoded_data['num_feasible_configurations'] = len(specification.feasible_configurations)
encoded = {_serialize_config(config): en for config, en in specification.feasible_configurations.items()}
configs, energies = zip(*sorted(encoded.items()))
encoded_data['feasible_configurations'] = json.dumps(configs, separators=(',', ':'))
encoded_data['energies'] = json.dumps(energies, separators=(',', ':'))
encoded_data['decision_variables'] = json.dumps(specification.decision_variables, separators=(',', ':'))
encoded_data['classical_gap'] = json.dumps(specification.min_classical_gap, separators=(',', ':'))
select = \
"""
SELECT
linear_biases,
quadratic_biases,
offset,
decision_variables,
classical_gap,
ground_energy
FROM penalty_model_view
WHERE
-- graph:
num_nodes = :num_nodes AND
num_edges = :num_edges AND
edges = :edges AND
-- feasible_configurations:
num_variables = :num_variables AND
num_feasible_configurations = :num_feasible_configurations AND
feasible_configurations = :feasible_configurations AND
energies = :energies AND
-- decision variables:
decision_variables = :decision_variables AND
-- we could apply filters based on the energy ranges but in practice this seems slower
classical_gap >= :classical_gap
ORDER BY classical_gap DESC;
"""
for row in cur.execute(select, encoded_data):
# we need to build the model
linear = _decode_linear_biases(row['linear_biases'], nodelist)
quadratic = _decode_quadratic_biases(row['quadratic_biases'], edgelist)
model = dimod.BinaryQuadraticModel(linear, quadratic, row['offset'], dimod.SPIN) # always spin
yield pm.PenaltyModel.from_specification(specification, model, row['classical_gap'], row['ground_energy']) | Iterate through all penalty models in the cache matching the
given specification.
Args:
cur (:class:`sqlite3.Cursor`): An sqlite3 cursor. This function
is meant to be run within a :obj:`with` statement.
specification (:class:`penaltymodel.Specification`): A specification
for a penalty model.
Yields:
:class:`penaltymodel.PenaltyModel` | Below is the instruction that describes the task:
### Input:
Iterate through all penalty models in the cache matching the
given specification.
Args:
cur (:class:`sqlite3.Cursor`): An sqlite3 cursor. This function
is meant to be run within a :obj:`with` statement.
specification (:class:`penaltymodel.Specification`): A specification
for a penalty model.
Yields:
:class:`penaltymodel.PenaltyModel`
### Response:
def iter_penalty_model_from_specification(cur, specification):
"""Iterate through all penalty models in the cache matching the
given specification.
Args:
cur (:class:`sqlite3.Cursor`): An sqlite3 cursor. This function
is meant to be run within a :obj:`with` statement.
specification (:class:`penaltymodel.Specification`): A specification
for a penalty model.
Yields:
:class:`penaltymodel.PenaltyModel`
"""
encoded_data = {}
nodelist = sorted(specification.graph)
edgelist = sorted(sorted(edge) for edge in specification.graph.edges)
encoded_data['num_nodes'] = len(nodelist)
encoded_data['num_edges'] = len(edgelist)
encoded_data['edges'] = json.dumps(edgelist, separators=(',', ':'))
encoded_data['num_variables'] = len(next(iter(specification.feasible_configurations)))
encoded_data['num_feasible_configurations'] = len(specification.feasible_configurations)
encoded = {_serialize_config(config): en for config, en in specification.feasible_configurations.items()}
configs, energies = zip(*sorted(encoded.items()))
encoded_data['feasible_configurations'] = json.dumps(configs, separators=(',', ':'))
encoded_data['energies'] = json.dumps(energies, separators=(',', ':'))
encoded_data['decision_variables'] = json.dumps(specification.decision_variables, separators=(',', ':'))
encoded_data['classical_gap'] = json.dumps(specification.min_classical_gap, separators=(',', ':'))
select = \
"""
SELECT
linear_biases,
quadratic_biases,
offset,
decision_variables,
classical_gap,
ground_energy
FROM penalty_model_view
WHERE
-- graph:
num_nodes = :num_nodes AND
num_edges = :num_edges AND
edges = :edges AND
-- feasible_configurations:
num_variables = :num_variables AND
num_feasible_configurations = :num_feasible_configurations AND
feasible_configurations = :feasible_configurations AND
energies = :energies AND
-- decision variables:
decision_variables = :decision_variables AND
-- we could apply filters based on the energy ranges but in practice this seems slower
classical_gap >= :classical_gap
ORDER BY classical_gap DESC;
"""
for row in cur.execute(select, encoded_data):
# we need to build the model
linear = _decode_linear_biases(row['linear_biases'], nodelist)
quadratic = _decode_quadratic_biases(row['quadratic_biases'], edgelist)
model = dimod.BinaryQuadraticModel(linear, quadratic, row['offset'], dimod.SPIN) # always spin
yield pm.PenaltyModel.from_specification(specification, model, row['classical_gap'], row['ground_energy']) |
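The exact-match SQL lookup above relies on a canonical encoding: each edge is sorted, the edge list is sorted, and `json.dumps` uses compact separators, so equal graphs always produce byte-identical key strings regardless of input order. A minimal sketch of that step:

```python
import json

def encode_edges(edges):
    """Canonical, compact encoding suitable as an exact-match cache key:
    sort within each edge, sort the edge list, then serialize with
    compact separators so formatting cannot differ between runs."""
    edgelist = sorted(sorted(edge) for edge in edges)
    return json.dumps(edgelist, separators=(',', ':'))

a = encode_edges([(2, 1), (0, 2)])   # tuples, arbitrary order
b = encode_edges([[0, 2], [1, 2]])   # lists, already sorted
```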
def build_query(self, **filters):
"""
Build queries for geo spatial filtering.
Expected query parameters are:
- a `unit=value` parameter where the unit is a valid UNIT in the
`django.contrib.gis.measure.Distance` class.
- `from` which must be a comma separated latitude and longitude.
Example query:
/api/v1/search/?km=10&from=59.744076,10.152045
Will perform a `dwithin` query within 10 km from the point
with latitude 59.744076 and longitude 10.152045.
"""
applicable_filters = None
filters = dict((k, filters[k]) for k in chain(self.D.UNITS.keys(),
[constants.DRF_HAYSTACK_SPATIAL_QUERY_PARAM]) if k in filters)
distance = dict((k, v) for k, v in filters.items() if k in self.D.UNITS.keys())
try:
latitude, longitude = map(float, self.tokenize(filters[constants.DRF_HAYSTACK_SPATIAL_QUERY_PARAM],
self.view.lookup_sep))
point = self.Point(longitude, latitude, srid=constants.GEO_SRID)
except ValueError:
raise ValueError("Cannot convert `from=latitude,longitude` query parameter to "
"float values. Make sure to provide numerical values only!")
except KeyError:
# If the user has not provided any `from` query string parameter,
# just return.
pass
else:
for unit in distance.keys():
if not len(distance[unit]) == 1:
raise ValueError("Each unit must have exactly one value.")
distance[unit] = float(distance[unit][0])
if point and distance:
applicable_filters = {
"dwithin": {
"field": self.backend.point_field,
"point": point,
"distance": self.D(**distance)
},
"distance": {
"field": self.backend.point_field,
"point": point
}
}
return applicable_filters | Build queries for geo spatial filtering.
Expected query parameters are:
- a `unit=value` parameter where the unit is a valid UNIT in the
`django.contrib.gis.measure.Distance` class.
- `from` which must be a comma separated latitude and longitude.
Example query:
/api/v1/search/?km=10&from=59.744076,10.152045
Will perform a `dwithin` query within 10 km from the point
with latitude 59.744076 and longitude 10.152045. | Below is the instruction that describes the task:
### Input:
Build queries for geo spatial filtering.
Expected query parameters are:
- a `unit=value` parameter where the unit is a valid UNIT in the
`django.contrib.gis.measure.Distance` class.
- `from` which must be a comma separated latitude and longitude.
Example query:
/api/v1/search/?km=10&from=59.744076,10.152045
Will perform a `dwithin` query within 10 km from the point
with latitude 59.744076 and longitude 10.152045.
### Response:
def build_query(self, **filters):
"""
Build queries for geo spatial filtering.
Expected query parameters are:
- a `unit=value` parameter where the unit is a valid UNIT in the
`django.contrib.gis.measure.Distance` class.
- `from` which must be a comma separated latitude and longitude.
Example query:
/api/v1/search/?km=10&from=59.744076,10.152045
Will perform a `dwithin` query within 10 km from the point
with latitude 59.744076 and longitude 10.152045.
"""
applicable_filters = None
filters = dict((k, filters[k]) for k in chain(self.D.UNITS.keys(),
[constants.DRF_HAYSTACK_SPATIAL_QUERY_PARAM]) if k in filters)
distance = dict((k, v) for k, v in filters.items() if k in self.D.UNITS.keys())
try:
latitude, longitude = map(float, self.tokenize(filters[constants.DRF_HAYSTACK_SPATIAL_QUERY_PARAM],
self.view.lookup_sep))
point = self.Point(longitude, latitude, srid=constants.GEO_SRID)
except ValueError:
raise ValueError("Cannot convert `from=latitude,longitude` query parameter to "
"float values. Make sure to provide numerical values only!")
except KeyError:
# If the user has not provided any `from` query string parameter,
# just return.
pass
else:
for unit in distance.keys():
if not len(distance[unit]) == 1:
raise ValueError("Each unit must have exactly one value.")
distance[unit] = float(distance[unit][0])
if point and distance:
applicable_filters = {
"dwithin": {
"field": self.backend.point_field,
"point": point,
"distance": self.D(**distance)
},
"distance": {
"field": self.backend.point_field,
"point": point
}
}
return applicable_filters |
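The `from=latitude,longitude` tokenizing step can be isolated as a small helper; this sketch mirrors the error handling in the method above:

```python
def parse_point(value, sep=','):
    """Parse a 'latitude,longitude' query value into two floats.
    Raises ValueError on non-numeric or malformed input."""
    try:
        latitude, longitude = map(float, value.split(sep))
    except ValueError:
        raise ValueError(
            "Cannot convert `from=latitude,longitude` query parameter to "
            "float values. Make sure to provide numerical values only!")
    return latitude, longitude

lat, lon = parse_point("59.744076,10.152045")
```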
def encryptfile(filename, key=None, outfile=None, chunk=64 * 1024):
"""
Encrypts a file using AES (CBC mode) with the given key. If no
file is supplied, then the inputted file will be modified in place.
The chunk value will be the size with which the function uses to
read and encrypt the file. Larger chunks can be faster for some files
and machines. The chunk MUST be divisible by 16.
:param text | <str>
key | <str>
outfile | <str> || None
chunk | <int>
"""
if key is None:
key = ENCRYPT_KEY
if not outfile:
outfile = filename + '.enc'
iv = Random.new().read(16)
cipher = AES.new(key, AES.MODE_CBC, iv)
filesize = os.path.getsize(filename)
with open(filename, 'rb') as input:
with open(outfile, 'wb') as output:
output.write(struct.pack('<Q', filesize))
output.write(iv)
while True:
data = input.read(chunk)
if len(data) == 0:
break
data = pad(data, len(key))
output.write(cipher.encrypt(data)) | Encrypts a file using AES (CBC mode) with the given key. If no
file is supplied, then the inputted file will be modified in place.
The chunk value will be the size with which the function uses to
read and encrypt the file. Larger chunks can be faster for some files
and machines. The chunk MUST be divisible by 16.
:param text | <str>
key | <str>
outfile | <str> || None
chunk | <int> | Below is the instruction that describes the task:
### Input:
Encrypts a file using AES (CBC mode) with the given key. If no
file is supplied, then the inputted file will be modified in place.
The chunk value will be the size with which the function uses to
read and encrypt the file. Larger chunks can be faster for some files
and machines. The chunk MUST be divisible by 16.
:param text | <str>
key | <str>
outfile | <str> || None
chunk | <int>
### Response:
def encryptfile(filename, key=None, outfile=None, chunk=64 * 1024):
"""
Encrypts a file using AES (CBC mode) with the given key. If no
file is supplied, then the inputted file will be modified in place.
The chunk value will be the size with which the function uses to
read and encrypt the file. Larger chunks can be faster for some files
and machines. The chunk MUST be divisible by 16.
:param text | <str>
key | <str>
outfile | <str> || None
chunk | <int>
"""
if key is None:
key = ENCRYPT_KEY
if not outfile:
outfile = filename + '.enc'
iv = Random.new().read(16)
cipher = AES.new(key, AES.MODE_CBC, iv)
filesize = os.path.getsize(filename)
with open(filename, 'rb') as input:
with open(outfile, 'wb') as output:
output.write(struct.pack('<Q', filesize))
output.write(iv)
while True:
data = input.read(chunk)
if len(data) == 0:
break
data = pad(data, len(key))
output.write(cipher.encrypt(data)) |
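The `pad` helper called above is not defined in this excerpt; one plausible PKCS#7-style implementation is sketched below. This is an assumption — the original may pad differently (e.g. zero-padding), and note the excerpt passes `len(key)` rather than the AES block size of 16.

```python
def pad(data: bytes, block_size: int = 16) -> bytes:
    """PKCS#7-style padding: append n bytes of value n so the result is a
    multiple of block_size (AES CBC requires full 16-byte blocks)."""
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

padded = pad(b"hello world", 16)  # 11 bytes -> padded to 16
```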
def __assert_less(expected, returned):
'''
Test if a value is less than the returned value
'''
result = "Pass"
try:
assert (expected < returned), "{0} not False".format(returned)
except AssertionError as err:
result = "Fail: " + six.text_type(err)
return result | Test if a value is less than the returned value | Below is the instruction that describes the task:
### Input:
Test if a value is less than the returned value
### Response:
def __assert_less(expected, returned):
'''
Test if a value is less than the returned value
'''
result = "Pass"
try:
assert (expected < returned), "{0} not False".format(returned)
except AssertionError as err:
result = "Fail: " + six.text_type(err)
return result |
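The same try/except-to-string pattern, minus the `six` dependency, as a standalone sketch:

```python
def assert_less(expected, returned):
    """Return 'Pass' when expected < returned, otherwise a 'Fail: ...'
    string built from the AssertionError message, as __assert_less does."""
    try:
        assert expected < returned, "{0} not False".format(returned)
        return "Pass"
    except AssertionError as err:
        return "Fail: " + str(err)

ok = assert_less(1, 2)
bad = assert_less(2, 1)
```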
def params(self):
""" Read self params from configuration. """
parser = JinjaInterpolationNamespace()
parser.read(self.configuration)
return dict(parser['params'] or {}) | Read self params from configuration. | Below is the instruction that describes the task:
### Input:
Read self params from configuration.
### Response:
def params(self):
""" Read self params from configuration. """
parser = JinjaInterpolationNamespace()
parser.read(self.configuration)
return dict(parser['params'] or {}) |
def serialize_b58(self, private=True):
"""Encode the serialized node in base58."""
return ensure_str(
base58.b58encode_check(unhexlify(self.serialize(private)))) | Encode the serialized node in base58. | Below is the instruction that describes the task:
### Input:
Encode the serialized node in base58.
### Response:
def serialize_b58(self, private=True):
"""Encode the serialized node in base58."""
return ensure_str(
base58.b58encode_check(unhexlify(self.serialize(private)))) |
def all_as_list():
'''
returns a list of all defined containers
'''
as_dict = all_as_dict()
containers = as_dict['Running'] + as_dict['Frozen'] + as_dict['Stopped']
containers_list = []
for i in containers:
i = i.replace(' (auto)', '')
containers_list.append(i)
return containers_list | returns a list of all defined containers | Below is the instruction that describes the task:
### Input:
returns a list of all defined containers
### Response:
def all_as_list():
'''
returns a list of all defined containers
'''
as_dict = all_as_dict()
containers = as_dict['Running'] + as_dict['Frozen'] + as_dict['Stopped']
containers_list = []
for i in containers:
i = i.replace(' (auto)', '')
containers_list.append(i)
return containers_list |
def _confirm_dialog(self, prompt):
''' Prompts for a 'yes' or 'no' to given prompt. '''
response = raw_input(prompt).strip().lower()
valid = {'y': True, 'ye': True, 'yes': True, 'n': False, 'no': False}
while True:
try:
return valid[response]
        except KeyError:
            response = raw_input("Please respond 'y' or 'n': ").strip().lower() | Prompts for a 'yes' or 'no' to given prompt. | Below is the instruction that describes the task:
### Input:
Prompts for a 'yes' or 'no' to given prompt.
### Response:
def _confirm_dialog(self, prompt):
''' Prompts for a 'yes' or 'no' to given prompt. '''
response = raw_input(prompt).strip().lower()
valid = {'y': True, 'ye': True, 'yes': True, 'n': False, 'no': False}
while True:
try:
return valid[response]
        except KeyError:
response = raw_input("Please respond 'y' or 'n': ").strip().lower() |
def remote_write(self, data):
"""
Called from remote worker to write L{data} to L{fp} within boundaries
of L{maxsize}
@type data: C{string}
@param data: String of data to write
"""
data = unicode2bytes(data)
if self.remaining is not None:
if len(data) > self.remaining:
data = data[:self.remaining]
self.fp.write(data)
self.remaining = self.remaining - len(data)
else:
self.fp.write(data) | Called from remote worker to write L{data} to L{fp} within boundaries
of L{maxsize}
@type data: C{string}
    @param data: String of data to write | Below is the instruction that describes the task:
### Input:
Called from remote worker to write L{data} to L{fp} within boundaries
of L{maxsize}
@type data: C{string}
@param data: String of data to write
### Response:
def remote_write(self, data):
"""
Called from remote worker to write L{data} to L{fp} within boundaries
of L{maxsize}
@type data: C{string}
@param data: String of data to write
"""
data = unicode2bytes(data)
if self.remaining is not None:
if len(data) > self.remaining:
data = data[:self.remaining]
self.fp.write(data)
self.remaining = self.remaining - len(data)
else:
self.fp.write(data) |
def set_trace():
"""Call pdb.set_trace in the calling frame, first restoring
sys.stdout to the real output stream. Note that sys.stdout is NOT
reset to whatever it was before the call once pdb is done!
"""
import pdb
import sys
stdout = sys.stdout
sys.stdout = sys.__stdout__
pdb.Pdb().set_trace(sys._getframe().f_back) | Call pdb.set_trace in the calling frame, first restoring
sys.stdout to the real output stream. Note that sys.stdout is NOT
    reset to whatever it was before the call once pdb is done! | Below is the instruction that describes the task:
### Input:
Call pdb.set_trace in the calling frame, first restoring
sys.stdout to the real output stream. Note that sys.stdout is NOT
reset to whatever it was before the call once pdb is done!
### Response:
def set_trace():
"""Call pdb.set_trace in the calling frame, first restoring
sys.stdout to the real output stream. Note that sys.stdout is NOT
reset to whatever it was before the call once pdb is done!
"""
import pdb
import sys
stdout = sys.stdout
sys.stdout = sys.__stdout__
pdb.Pdb().set_trace(sys._getframe().f_back) |
def stop_service(self, instance, service):
"""
Stops a single service.
:param str instance: A Yamcs instance name.
:param str service: The name of the service.
"""
req = rest_pb2.EditServiceRequest()
req.state = 'stopped'
url = '/services/{}/{}'.format(instance, service)
self.patch_proto(url, data=req.SerializeToString()) | Stops a single service.
:param str instance: A Yamcs instance name.
    :param str service: The name of the service. | Below is the instruction that describes the task:
### Input:
Stops a single service.
:param str instance: A Yamcs instance name.
:param str service: The name of the service.
### Response:
def stop_service(self, instance, service):
"""
Stops a single service.
:param str instance: A Yamcs instance name.
:param str service: The name of the service.
"""
req = rest_pb2.EditServiceRequest()
req.state = 'stopped'
url = '/services/{}/{}'.format(instance, service)
self.patch_proto(url, data=req.SerializeToString()) |
def plot_isotherm(self, T, zs, ws, Pmin=None, Pmax=None, methods=[], pts=50,
only_valid=True): # pragma: no cover
r'''Method to create a plot of the property vs pressure at a specified
temperature and composition according to either a specified list of
methods, or the user methods (if set), or all methods. User-selectable
number of points, and pressure range. If only_valid is set,
`test_method_validity` will be used to check if each condition in
the specified range is valid, and `test_property_validity` will be used
to test the answer, and the method is allowed to fail; only the valid
points will be plotted. Otherwise, the result will be calculated and
    displayed as-is. This will not succeed if the method fails.
Parameters
----------
T : float
Temperature at which to create the plot, [K]
zs : list[float]
Mole fractions of all species in the mixture, [-]
ws : list[float]
Weight fractions of all species in the mixture, [-]
Pmin : float
Minimum pressure, to begin calculating the property, [Pa]
Pmax : float
Maximum pressure, to stop calculating the property, [Pa]
methods : list, optional
List of methods to consider
pts : int, optional
A list of points to calculate the property at; if Pmin to Pmax
covers a wide range of method validities, only a few points may end
up calculated for a given method so this may need to be large
only_valid : bool
If True, only plot successful methods and calculated properties,
and handle errors; if False, attempt calculation without any
checking and use methods outside their bounds
'''
# This function cannot be tested
if not has_matplotlib:
raise Exception('Optional dependency matplotlib is required for plotting')
if Pmin is None:
if self.Pmin is not None:
Pmin = self.Pmin
else:
raise Exception('Minimum pressure could not be auto-detected; please provide it')
if Pmax is None:
if self.Pmax is not None:
Pmax = self.Pmax
else:
raise Exception('Maximum pressure could not be auto-detected; please provide it')
if not methods:
if self.user_methods:
methods = self.user_methods
else:
methods = self.all_methods
Ps = np.linspace(Pmin, Pmax, pts)
for method in methods:
if only_valid:
properties, Ps2 = [], []
for P in Ps:
if self.test_method_validity(T, P, zs, ws, method):
try:
p = self.calculate(T, P, zs, ws, method)
if self.test_property_validity(p):
properties.append(p)
Ps2.append(P)
                    except Exception:
pass
plt.plot(Ps2, properties, label=method)
else:
properties = [self.calculate(T, P, zs, ws, method) for P in Ps]
plt.plot(Ps, properties, label=method)
plt.legend(loc='best')
plt.ylabel(self.name + ', ' + self.units)
plt.xlabel('Pressure, Pa')
plt.title(self.name + ' of a mixture of ' + ', '.join(self.CASs)
+ ' at mole fractions of ' + ', '.join(str(round(i, 4)) for i in zs) + '.')
plt.show() | r'''Method to create a plot of the property vs pressure at a specified
temperature and composition according to either a specified list of
methods, or the user methods (if set), or all methods. User-selectable
number of points, and pressure range. If only_valid is set,
`test_method_validity` will be used to check if each condition in
the specified range is valid, and `test_property_validity` will be used
to test the answer, and the method is allowed to fail; only the valid
points will be plotted. Otherwise, the result will be calculated and
displayed as-is. This will not succeed if the method fails.
Parameters
----------
T : float
Temperature at which to create the plot, [K]
zs : list[float]
Mole fractions of all species in the mixture, [-]
ws : list[float]
Weight fractions of all species in the mixture, [-]
Pmin : float
Minimum pressure, to begin calculating the property, [Pa]
Pmax : float
Maximum pressure, to stop calculating the property, [Pa]
methods : list, optional
List of methods to consider
pts : int, optional
A list of points to calculate the property at; if Pmin to Pmax
covers a wide range of method validities, only a few points may end
up calculated for a given method so this may need to be large
only_valid : bool
If True, only plot successful methods and calculated properties,
and handle errors; if False, attempt calculation without any
        checking and use methods outside their bounds | Below is the instruction that describes the task:
### Input:
r'''Method to create a plot of the property vs pressure at a specified
temperature and composition according to either a specified list of
methods, or the user methods (if set), or all methods. User-selectable
number of points, and pressure range. If only_valid is set,
`test_method_validity` will be used to check if each condition in
the specified range is valid, and `test_property_validity` will be used
to test the answer, and the method is allowed to fail; only the valid
points will be plotted. Otherwise, the result will be calculated and
displayed as-is. This will not succeed if the method fails.
Parameters
----------
T : float
Temperature at which to create the plot, [K]
zs : list[float]
Mole fractions of all species in the mixture, [-]
ws : list[float]
Weight fractions of all species in the mixture, [-]
Pmin : float
Minimum pressure, to begin calculating the property, [Pa]
Pmax : float
Maximum pressure, to stop calculating the property, [Pa]
methods : list, optional
List of methods to consider
pts : int, optional
A list of points to calculate the property at; if Pmin to Pmax
covers a wide range of method validities, only a few points may end
up calculated for a given method so this may need to be large
only_valid : bool
If True, only plot successful methods and calculated properties,
and handle errors; if False, attempt calculation without any
checking and use methods outside their bounds
### Response:
def plot_isotherm(self, T, zs, ws, Pmin=None, Pmax=None, methods=[], pts=50,
only_valid=True): # pragma: no cover
r'''Method to create a plot of the property vs pressure at a specified
temperature and composition according to either a specified list of
methods, or the user methods (if set), or all methods. User-selectable
number of points, and pressure range. If only_valid is set,
`test_method_validity` will be used to check if each condition in
the specified range is valid, and `test_property_validity` will be used
to test the answer, and the method is allowed to fail; only the valid
points will be plotted. Otherwise, the result will be calculated and
    displayed as-is. This will not succeed if the method fails.
Parameters
----------
T : float
Temperature at which to create the plot, [K]
zs : list[float]
Mole fractions of all species in the mixture, [-]
ws : list[float]
Weight fractions of all species in the mixture, [-]
Pmin : float
Minimum pressure, to begin calculating the property, [Pa]
Pmax : float
Maximum pressure, to stop calculating the property, [Pa]
methods : list, optional
List of methods to consider
pts : int, optional
A list of points to calculate the property at; if Pmin to Pmax
covers a wide range of method validities, only a few points may end
up calculated for a given method so this may need to be large
only_valid : bool
If True, only plot successful methods and calculated properties,
and handle errors; if False, attempt calculation without any
checking and use methods outside their bounds
'''
# This function cannot be tested
if not has_matplotlib:
raise Exception('Optional dependency matplotlib is required for plotting')
if Pmin is None:
if self.Pmin is not None:
Pmin = self.Pmin
else:
raise Exception('Minimum pressure could not be auto-detected; please provide it')
if Pmax is None:
if self.Pmax is not None:
Pmax = self.Pmax
else:
raise Exception('Maximum pressure could not be auto-detected; please provide it')
if not methods:
if self.user_methods:
methods = self.user_methods
else:
methods = self.all_methods
Ps = np.linspace(Pmin, Pmax, pts)
for method in methods:
if only_valid:
properties, Ps2 = [], []
for P in Ps:
if self.test_method_validity(T, P, zs, ws, method):
try:
p = self.calculate(T, P, zs, ws, method)
if self.test_property_validity(p):
properties.append(p)
Ps2.append(P)
                    except Exception:
pass
plt.plot(Ps2, properties, label=method)
else:
properties = [self.calculate(T, P, zs, ws, method) for P in Ps]
plt.plot(Ps, properties, label=method)
plt.legend(loc='best')
plt.ylabel(self.name + ', ' + self.units)
plt.xlabel('Pressure, Pa')
plt.title(self.name + ' of a mixture of ' + ', '.join(self.CASs)
+ ' at mole fractions of ' + ', '.join(str(round(i, 4)) for i in zs) + '.')
plt.show() |
def is_tainted(variable, context, only_unprotected=False, ignore_generic_taint=False):
'''
Args:
variable
context (Contract|Function)
only_unprotected (bool): True only unprotected function are considered
Returns:
bool
'''
assert isinstance(context, (Contract, Function))
assert isinstance(only_unprotected, bool)
if isinstance(variable, Constant):
return False
slither = context.slither
taints = slither.context[KEY_INPUT]
if not ignore_generic_taint:
taints |= GENERIC_TAINT
return variable in taints or any(is_dependent(variable, t, context, only_unprotected) for t in taints) | Args:
variable
context (Contract|Function)
only_unprotected (bool): True only unprotected function are considered
Returns:
        bool | Below is the instruction that describes the task:
### Input:
Args:
variable
context (Contract|Function)
only_unprotected (bool): True only unprotected function are considered
Returns:
bool
### Response:
def is_tainted(variable, context, only_unprotected=False, ignore_generic_taint=False):
'''
Args:
variable
context (Contract|Function)
only_unprotected (bool): True only unprotected function are considered
Returns:
bool
'''
assert isinstance(context, (Contract, Function))
assert isinstance(only_unprotected, bool)
if isinstance(variable, Constant):
return False
slither = context.slither
taints = slither.context[KEY_INPUT]
if not ignore_generic_taint:
taints |= GENERIC_TAINT
return variable in taints or any(is_dependent(variable, t, context, only_unprotected) for t in taints) |
def parse_ls_date(self, s, *, now=None):
"""
Parsing dates from the ls unix utility. For example,
"Nov 18 1958" and "Nov 18 12:29".
:param s: ls date
:type s: :py:class:`str`
:rtype: :py:class:`str`
"""
with setlocale("C"):
try:
if now is None:
now = datetime.datetime.now()
d = datetime.datetime.strptime(s, "%b %d %H:%M")
d = d.replace(year=now.year)
diff = (now - d).total_seconds()
if diff > HALF_OF_YEAR_IN_SECONDS:
d = d.replace(year=now.year + 1)
elif diff < -HALF_OF_YEAR_IN_SECONDS:
d = d.replace(year=now.year - 1)
except ValueError:
d = datetime.datetime.strptime(s, "%b %d %Y")
return self.format_date_time(d) | Parsing dates from the ls unix utility. For example,
"Nov 18 1958" and "Nov 18 12:29".
:param s: ls date
:type s: :py:class:`str`
    :rtype: :py:class:`str` | Below is the instruction that describes the task:
### Input:
Parsing dates from the ls unix utility. For example,
"Nov 18 1958" and "Nov 18 12:29".
:param s: ls date
:type s: :py:class:`str`
:rtype: :py:class:`str`
### Response:
def parse_ls_date(self, s, *, now=None):
"""
Parsing dates from the ls unix utility. For example,
"Nov 18 1958" and "Nov 18 12:29".
:param s: ls date
:type s: :py:class:`str`
:rtype: :py:class:`str`
"""
with setlocale("C"):
try:
if now is None:
now = datetime.datetime.now()
d = datetime.datetime.strptime(s, "%b %d %H:%M")
d = d.replace(year=now.year)
diff = (now - d).total_seconds()
if diff > HALF_OF_YEAR_IN_SECONDS:
d = d.replace(year=now.year + 1)
elif diff < -HALF_OF_YEAR_IN_SECONDS:
d = d.replace(year=now.year - 1)
except ValueError:
d = datetime.datetime.strptime(s, "%b %d %Y")
return self.format_date_time(d) |
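The year-inference rule above (assume the current year, then shift by one year if the parsed date lands more than half a year from now) can be sketched standalone. `HALF_OF_YEAR_IN_SECONDS` is assumed here to be half of 365.25 days, the `setlocale("C")` pinning is omitted (so `%b` parsing relies on an English locale), and the sketch returns the datetime itself rather than the class's formatted string:

```python
import datetime

# Assumed value for the module constant used in the record above.
HALF_OF_YEAR_IN_SECONDS = 365.25 * 24 * 3600 / 2

def parse_ls_date(s, now=None):
    """Resolve ls-style dates such as "Nov 18 12:29" or "Nov 18 1958"."""
    if now is None:
        now = datetime.datetime.now()
    try:
        # Recent entries show a time but no year: assume the current year,
        # then shift by one if that lands more than half a year from now.
        d = datetime.datetime.strptime(s, "%b %d %H:%M").replace(year=now.year)
        diff = (now - d).total_seconds()
        if diff > HALF_OF_YEAR_IN_SECONDS:
            d = d.replace(year=now.year + 1)
        elif diff < -HALF_OF_YEAR_IN_SECONDS:
            d = d.replace(year=now.year - 1)
    except ValueError:
        # Older entries carry an explicit year instead of a time.
        d = datetime.datetime.strptime(s, "%b %d %Y")
    return d
```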
def gen_random_string(str_len):
""" generate random string with specified length
"""
return ''.join(
        random.choice(string.ascii_letters + string.digits) for _ in range(str_len)) | generate random string with specified length | Below is the instruction that describes the task:
### Input:
generate random string with specified length
### Response:
def gen_random_string(str_len):
""" generate random string with specified length
"""
return ''.join(
random.choice(string.ascii_letters + string.digits) for _ in range(str_len)) |
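A quick self-contained check of the helper above. Note it draws from `random`, so it is not suitable for secrets; Python's `secrets` module would be the hardened choice for tokens:

```python
import random
import string

def gen_random_string(str_len):
    # One uniformly chosen letter-or-digit per output position.
    return ''.join(
        random.choice(string.ascii_letters + string.digits)
        for _ in range(str_len))

token = gen_random_string(16)
```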
def _initEphemerals(self):
"""
Initialize all ephemeral members after being restored to a pickled state.
"""
BacktrackingTM._initEphemerals(self)
#---------------------------------------------------------------------------------
# cells4 specific initialization
# If True, let C++ allocate memory for activeState, predictedState, and
# learnState. In this case we can retrieve copies of these states but can't
# set them directly from Python. If False, Python can allocate them as
# numpy arrays and we can pass pointers to the C++ using setStatePointers
self.allocateStatesInCPP = False
# Set this to true for debugging or accessing learning states
self.retrieveLearningStates = False
if self.makeCells4Ephemeral:
        self._initCells4() | Initialize all ephemeral members after being restored to a pickled state. | Below is the instruction that describes the task:
### Input:
Initialize all ephemeral members after being restored to a pickled state.
### Response:
def _initEphemerals(self):
"""
Initialize all ephemeral members after being restored to a pickled state.
"""
BacktrackingTM._initEphemerals(self)
#---------------------------------------------------------------------------------
# cells4 specific initialization
# If True, let C++ allocate memory for activeState, predictedState, and
# learnState. In this case we can retrieve copies of these states but can't
# set them directly from Python. If False, Python can allocate them as
# numpy arrays and we can pass pointers to the C++ using setStatePointers
self.allocateStatesInCPP = False
# Set this to true for debugging or accessing learning states
self.retrieveLearningStates = False
if self.makeCells4Ephemeral:
self._initCells4() |
def alignTwoAlignments(aln1,aln2,outfile,WorkingDir=None,SuppressStderr=None,\
SuppressStdout=None):
"""Aligns two alignments. Individual sequences are not realigned
aln1: string, name of file containing the first alignment
aln2: string, name of file containing the second alignment
outfile: you're forced to specify an outfile name, because if you don't
aln1 will be overwritten. So, if you want aln1 to be overwritten, you
should specify the same filename.
WARNING: a .dnd file is created with the same prefix as aln1. So an
existing dendrogram might get overwritten.
"""
app = Clustalw({'-profile':None,'-profile1':aln1,\
'-profile2':aln2,'-outfile':outfile},SuppressStderr=\
SuppressStderr,WorkingDir=WorkingDir,SuppressStdout=SuppressStdout)
app.Parameters['-align'].off()
return app() | Aligns two alignments. Individual sequences are not realigned
aln1: string, name of file containing the first alignment
aln2: string, name of file containing the second alignment
outfile: you're forced to specify an outfile name, because if you don't
aln1 will be overwritten. So, if you want aln1 to be overwritten, you
should specify the same filename.
WARNING: a .dnd file is created with the same prefix as aln1. So an
    existing dendrogram might get overwritten. | Below is the instruction that describes the task:
### Input:
Aligns two alignments. Individual sequences are not realigned
aln1: string, name of file containing the first alignment
aln2: string, name of file containing the second alignment
outfile: you're forced to specify an outfile name, because if you don't
aln1 will be overwritten. So, if you want aln1 to be overwritten, you
should specify the same filename.
WARNING: a .dnd file is created with the same prefix as aln1. So an
existing dendrogram might get overwritten.
### Response:
def alignTwoAlignments(aln1,aln2,outfile,WorkingDir=None,SuppressStderr=None,\
SuppressStdout=None):
"""Aligns two alignments. Individual sequences are not realigned
aln1: string, name of file containing the first alignment
aln2: string, name of file containing the second alignment
outfile: you're forced to specify an outfile name, because if you don't
aln1 will be overwritten. So, if you want aln1 to be overwritten, you
should specify the same filename.
WARNING: a .dnd file is created with the same prefix as aln1. So an
existing dendrogram might get overwritten.
"""
app = Clustalw({'-profile':None,'-profile1':aln1,\
'-profile2':aln2,'-outfile':outfile},SuppressStderr=\
SuppressStderr,WorkingDir=WorkingDir,SuppressStdout=SuppressStdout)
app.Parameters['-align'].off()
return app() |
def animate_2Dscatter(x, y, NumAnimatedPoints=50, NTrailPoints=20,
xlabel="", ylabel="",
xlims=None, ylims=None, filename="testAnim.mp4",
bitrate=1e5, dpi=5e2, fps=30, figsize = [6, 6]):
"""
Animates x and y - where x and y are 1d arrays of x and y
positions and it plots x[i:i+NTrailPoints] and y[i:i+NTrailPoints]
against each other and iterates through i.
"""
fig, ax = _plt.subplots(figsize = figsize)
alphas = _np.linspace(0.1, 1, NTrailPoints)
rgba_colors = _np.zeros((NTrailPoints,4))
# for red the first column needs to be one
rgba_colors[:,0] = 1.0
# the fourth column needs to be your alphas
rgba_colors[:, 3] = alphas
scatter = ax.scatter(x[0:NTrailPoints], y[0:NTrailPoints], color=rgba_colors)
    if xlims is None:
xlims = (min(x), max(x))
    if ylims is None:
ylims = (min(y), max(y))
ax.set_xlim(xlims)
ax.set_ylim(ylims)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
def animate(i, scatter):
scatter.axes.clear() # clear old scatter object
scatter = ax.scatter(x[i:i+NTrailPoints], y[i:i+NTrailPoints], color=rgba_colors, animated=True)
# create new scatter with updated data
ax.set_xlim(xlims)
ax.set_ylim(ylims)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
return scatter,
ani = _animation.FuncAnimation(fig, animate, _np.arange(1, NumAnimatedPoints),
interval=25, blit=True, fargs=[scatter])
ani.save(filename, bitrate=bitrate, dpi=dpi, fps=fps)
return None | Animates x and y - where x and y are 1d arrays of x and y
positions and it plots x[i:i+NTrailPoints] and y[i:i+NTrailPoints]
    against each other and iterates through i. | Below is the instruction that describes the task:
### Input:
Animates x and y - where x and y are 1d arrays of x and y
positions and it plots x[i:i+NTrailPoints] and y[i:i+NTrailPoints]
against each other and iterates through i.
### Response:
def animate_2Dscatter(x, y, NumAnimatedPoints=50, NTrailPoints=20,
xlabel="", ylabel="",
xlims=None, ylims=None, filename="testAnim.mp4",
bitrate=1e5, dpi=5e2, fps=30, figsize = [6, 6]):
"""
Animates x and y - where x and y are 1d arrays of x and y
positions and it plots x[i:i+NTrailPoints] and y[i:i+NTrailPoints]
against each other and iterates through i.
"""
fig, ax = _plt.subplots(figsize = figsize)
alphas = _np.linspace(0.1, 1, NTrailPoints)
rgba_colors = _np.zeros((NTrailPoints,4))
# for red the first column needs to be one
rgba_colors[:,0] = 1.0
# the fourth column needs to be your alphas
rgba_colors[:, 3] = alphas
scatter = ax.scatter(x[0:NTrailPoints], y[0:NTrailPoints], color=rgba_colors)
    if xlims is None:
xlims = (min(x), max(x))
    if ylims is None:
ylims = (min(y), max(y))
ax.set_xlim(xlims)
ax.set_ylim(ylims)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
def animate(i, scatter):
scatter.axes.clear() # clear old scatter object
scatter = ax.scatter(x[i:i+NTrailPoints], y[i:i+NTrailPoints], color=rgba_colors, animated=True)
# create new scatter with updated data
ax.set_xlim(xlims)
ax.set_ylim(ylims)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
return scatter,
ani = _animation.FuncAnimation(fig, animate, _np.arange(1, NumAnimatedPoints),
interval=25, blit=True, fargs=[scatter])
ani.save(filename, bitrate=bitrate, dpi=dpi, fps=fps)
return None |
def __query(p, k, v, accepted_keys=None, required_values=None, path=None, exact=True):
"""
Query function given to visit method
:param p: visited path in tuple form
:param k: visited key
:param v: visited value
:param accepted_keys: list of keys where one must match k to satisfy query.
:param required_values: list of values where one must match v to satisfy query
:param path: exact path in tuple form that must match p to satisfy query
:param exact: if True then key and value match uses contains function instead of ==
:return: True if all criteria are satisfied, otherwise False
"""
# if not k:
# print '__query p k:', p, k
# print p, k, accepted_keys, required_values, path, exact
def as_values_iterable(v):
if isinstance(v, dict):
return v.values()
elif isinstance(v, six.string_types):
return [v]
else:
# assume is already some iterable type
return v
if path and path != p:
return False
if accepted_keys:
if isinstance(accepted_keys, six.string_types):
accepted_keys = [accepted_keys]
if len([akey for akey in accepted_keys if akey == k or (not exact and akey in k)]) == 0:
return False
if required_values:
if isinstance(required_values, six.string_types):
required_values = [required_values]
# Find all terms in the vfilter that have a match somewhere in the values of the v dict. If the
# list is shorter than vfilter then some terms did not match and this v fails the test.
if len(required_values) > len([term for term in required_values for nv in as_values_iterable(v) if term == nv or (not exact and term in nv)]):
return False
return True | Query function given to visit method
:param p: visited path in tuple form
:param k: visited key
:param v: visited value
:param accepted_keys: list of keys where one must match k to satisfy query.
:param required_values: list of values where one must match v to satisfy query
:param path: exact path in tuple form that must match p to satisfy query
:param exact: if True then key and value match uses contains function instead of ==
    :return: True if all criteria are satisfied, otherwise False | Below is the instruction that describes the task:
### Input:
Query function given to visit method
:param p: visited path in tuple form
:param k: visited key
:param v: visited value
:param accepted_keys: list of keys where one must match k to satisfy query.
:param required_values: list of values where one must match v to satisfy query
:param path: exact path in tuple form that must match p to satisfy query
:param exact: if True then key and value match uses contains function instead of ==
:return: True if all criteria are satisfied, otherwise False
### Response:
def __query(p, k, v, accepted_keys=None, required_values=None, path=None, exact=True):
"""
Query function given to visit method
:param p: visited path in tuple form
:param k: visited key
:param v: visited value
:param accepted_keys: list of keys where one must match k to satisfy query.
:param required_values: list of values where one must match v to satisfy query
:param path: exact path in tuple form that must match p to satisfy query
:param exact: if True then key and value match uses contains function instead of ==
:return: True if all criteria are satisfied, otherwise False
"""
# if not k:
# print '__query p k:', p, k
# print p, k, accepted_keys, required_values, path, exact
def as_values_iterable(v):
if isinstance(v, dict):
return v.values()
elif isinstance(v, six.string_types):
return [v]
else:
# assume is already some iterable type
return v
if path and path != p:
return False
if accepted_keys:
if isinstance(accepted_keys, six.string_types):
accepted_keys = [accepted_keys]
if len([akey for akey in accepted_keys if akey == k or (not exact and akey in k)]) == 0:
return False
if required_values:
if isinstance(required_values, six.string_types):
required_values = [required_values]
# Find all terms in the vfilter that have a match somewhere in the values of the v dict. If the
# list is shorter than vfilter then some terms did not match and this v fails the test.
if len(required_values) > len([term for term in required_values for nv in as_values_iterable(v) if term == nv or (not exact and term in nv)]):
return False
return True |
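The matching rules above can be exercised with a small standalone copy (paths and keys below are invented for the demonstration). One deliberate divergence: the original counts matching (term, value) pairs, which can let one term matching twice mask another term matching nothing; this sketch tightens that to require every term to match at least once:

```python
def query(p, k, v, accepted_keys=None, required_values=None, path=None, exact=True):
    # Normalize scalars to lists so callers can pass a bare string.
    def as_values(v):
        if isinstance(v, dict):
            return list(v.values())
        if isinstance(v, str):
            return [v]
        return v
    if path and path != p:
        return False
    if accepted_keys:
        if isinstance(accepted_keys, str):
            accepted_keys = [accepted_keys]
        if not any(a == k or (not exact and a in k) for a in accepted_keys):
            return False
    if required_values:
        if isinstance(required_values, str):
            required_values = [required_values]
        # Every required term must match at least one of the values.
        if not all(any(t == nv or (not exact and t in nv) for nv in as_values(v))
                   for t in required_values):
            return False
    return True
```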
def available(self,
skip_hidden=True,
skip_installed=True,
skip_mandatory=False,
skip_reboot=False,
software=True,
drivers=True,
categories=None,
severities=None):
'''
Gets a list of all updates available on the system that match the passed
criteria.
Args:
skip_hidden (bool): Skip hidden updates. Default is True
skip_installed (bool): Skip installed updates. Default is True
skip_mandatory (bool): Skip mandatory updates. Default is False
skip_reboot (bool): Skip updates that can or do require reboot.
Default is False
software (bool): Include software updates. Default is True
drivers (bool): Include driver updates. Default is True
categories (list): Include updates that have these categories.
Default is none (all categories).
Categories include the following:
* Critical Updates
* Definition Updates
* Drivers (make sure you set drivers=True)
* Feature Packs
* Security Updates
* Update Rollups
* Updates
* Windows 7
* Windows 8.1
* Windows 8.1 drivers
* Windows 8.1 and later drivers
* Windows Defender
severities (list): Include updates that have these severities.
Default is none (all severities).
Severities include the following:
* Critical
* Important
.. note:: All updates are either software or driver updates. If both
``software`` and ``drivers`` is False, nothing will be returned.
Returns:
Updates: An instance of Updates with the results of the search.
Code Example:
.. code-block:: python
import salt.utils.win_update
wua = salt.utils.win_update.WindowsUpdateAgent()
# Gets all updates and shows a summary
        updates = wua.available()
updates.summary()
# Get a list of Critical updates
updates = wua.available(categories=['Critical Updates'])
updates.list()
'''
# https://msdn.microsoft.com/en-us/library/windows/desktop/aa386099(v=vs.85).aspx
updates = Updates()
found = updates.updates
for update in self._updates:
if salt.utils.data.is_true(update.IsHidden) and skip_hidden:
continue
if salt.utils.data.is_true(update.IsInstalled) and skip_installed:
continue
if salt.utils.data.is_true(update.IsMandatory) and skip_mandatory:
continue
if salt.utils.data.is_true(
update.InstallationBehavior.RebootBehavior) and skip_reboot:
continue
if not software and update.Type == 1:
continue
if not drivers and update.Type == 2:
continue
if categories is not None:
match = False
for category in update.Categories:
if category.Name in categories:
match = True
if not match:
continue
if severities is not None:
if update.MsrcSeverity not in severities:
continue
found.Add(update)
return updates | Gets a list of all updates available on the system that match the passed
criteria.
Args:
skip_hidden (bool): Skip hidden updates. Default is True
skip_installed (bool): Skip installed updates. Default is True
skip_mandatory (bool): Skip mandatory updates. Default is False
skip_reboot (bool): Skip updates that can or do require reboot.
Default is False
software (bool): Include software updates. Default is True
drivers (bool): Include driver updates. Default is True
categories (list): Include updates that have these categories.
Default is none (all categories).
Categories include the following:
* Critical Updates
* Definition Updates
* Drivers (make sure you set drivers=True)
* Feature Packs
* Security Updates
* Update Rollups
* Updates
* Windows 7
* Windows 8.1
* Windows 8.1 drivers
* Windows 8.1 and later drivers
* Windows Defender
severities (list): Include updates that have these severities.
Default is none (all severities).
Severities include the following:
* Critical
* Important
.. note:: All updates are either software or driver updates. If both
``software`` and ``drivers`` is False, nothing will be returned.
Returns:
Updates: An instance of Updates with the results of the search.
Code Example:
.. code-block:: python
import salt.utils.win_update
wua = salt.utils.win_update.WindowsUpdateAgent()
# Gets all updates and shows a summary
        updates = wua.available()
updates.summary()
# Get a list of Critical updates
updates = wua.available(categories=['Critical Updates'])
        updates.list() | Below is the instruction that describes the task:
### Input:
Gets a list of all updates available on the system that match the passed
criteria.
Args:
skip_hidden (bool): Skip hidden updates. Default is True
skip_installed (bool): Skip installed updates. Default is True
skip_mandatory (bool): Skip mandatory updates. Default is False
skip_reboot (bool): Skip updates that can or do require reboot.
Default is False
software (bool): Include software updates. Default is True
drivers (bool): Include driver updates. Default is True
categories (list): Include updates that have these categories.
Default is none (all categories).
Categories include the following:
* Critical Updates
* Definition Updates
* Drivers (make sure you set drivers=True)
* Feature Packs
* Security Updates
* Update Rollups
* Updates
* Windows 7
* Windows 8.1
* Windows 8.1 drivers
* Windows 8.1 and later drivers
* Windows Defender
severities (list): Include updates that have these severities.
Default is none (all severities).
Severities include the following:
* Critical
* Important
.. note:: All updates are either software or driver updates. If both
        ``software`` and ``drivers`` are False, nothing will be returned.
Returns:
Updates: An instance of Updates with the results of the search.
Code Example:
.. code-block:: python
import salt.utils.win_update
wua = salt.utils.win_update.WindowsUpdateAgent()
# Gets all updates and shows a summary
        updates = wua.available()
updates.summary()
# Get a list of Critical updates
updates = wua.available(categories=['Critical Updates'])
updates.list()
### Response:
def available(self,
skip_hidden=True,
skip_installed=True,
skip_mandatory=False,
skip_reboot=False,
software=True,
drivers=True,
categories=None,
severities=None):
'''
Gets a list of all updates available on the system that match the passed
criteria.
Args:
skip_hidden (bool): Skip hidden updates. Default is True
skip_installed (bool): Skip installed updates. Default is True
skip_mandatory (bool): Skip mandatory updates. Default is False
skip_reboot (bool): Skip updates that can or do require reboot.
Default is False
software (bool): Include software updates. Default is True
drivers (bool): Include driver updates. Default is True
categories (list): Include updates that have these categories.
Default is none (all categories).
Categories include the following:
* Critical Updates
* Definition Updates
* Drivers (make sure you set drivers=True)
* Feature Packs
* Security Updates
* Update Rollups
* Updates
* Windows 7
* Windows 8.1
* Windows 8.1 drivers
* Windows 8.1 and later drivers
* Windows Defender
severities (list): Include updates that have these severities.
Default is none (all severities).
Severities include the following:
* Critical
* Important
.. note:: All updates are either software or driver updates. If both
        ``software`` and ``drivers`` are False, nothing will be returned.
Returns:
Updates: An instance of Updates with the results of the search.
Code Example:
.. code-block:: python
import salt.utils.win_update
wua = salt.utils.win_update.WindowsUpdateAgent()
# Gets all updates and shows a summary
        updates = wua.available()
updates.summary()
# Get a list of Critical updates
updates = wua.available(categories=['Critical Updates'])
updates.list()
'''
# https://msdn.microsoft.com/en-us/library/windows/desktop/aa386099(v=vs.85).aspx
updates = Updates()
found = updates.updates
for update in self._updates:
if salt.utils.data.is_true(update.IsHidden) and skip_hidden:
continue
if salt.utils.data.is_true(update.IsInstalled) and skip_installed:
continue
if salt.utils.data.is_true(update.IsMandatory) and skip_mandatory:
continue
if salt.utils.data.is_true(
update.InstallationBehavior.RebootBehavior) and skip_reboot:
continue
if not software and update.Type == 1:
continue
if not drivers and update.Type == 2:
continue
if categories is not None:
match = False
for category in update.Categories:
if category.Name in categories:
match = True
if not match:
continue
if severities is not None:
if update.MsrcSeverity not in severities:
continue
found.Add(update)
return updates |
def earliest_possible_date():
"""
The earliest date for which we can load data from this module.
"""
today = pd.Timestamp('now', tz='UTC').normalize()
# Bank of Canada only has the last 10 years of data at any given time.
    return today.replace(year=today.year - 10) | The earliest date for which we can load data from this module. | Below is the instruction that describes the task:
### Input:
The earliest date for which we can load data from this module.
### Response:
def earliest_possible_date():
"""
The earliest date for which we can load data from this module.
"""
today = pd.Timestamp('now', tz='UTC').normalize()
# Bank of Canada only has the last 10 years of data at any given time.
return today.replace(year=today.year - 10) |
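The same ten-years-back cutoff can be sketched with only the standard library (no pandas); `earliest_possible_date_sketch` is an illustrative name, not part of the original module:

```python
from datetime import datetime, timezone

def earliest_possible_date_sketch(today=None):
    # Midnight UTC today (the normalize() step above), shifted back ten
    # years -- mirroring the Bank of Canada data-retention window.
    if today is None:
        today = datetime.now(timezone.utc)
    midnight = today.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight.replace(year=midnight.year - 10)

cutoff = earliest_possible_date_sketch(
    datetime(2024, 5, 17, 13, 45, tzinfo=timezone.utc))
```

Note that `replace(year=...)` raises `ValueError` when called on 29 February of a leap year whose target year is not a leap year, a corner case the pandas version likely shares.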
def findfileindirs(filename, dirs=None, use_pythonpath=True, use_searchpath=True, notfound_is_fatal=True, notfound_val=None):
"""Find file in multiple directories.
Inputs:
filename: the file name to be searched for.
dirs: list of folders or None
use_pythonpath: use the Python module search path
use_searchpath: use the sastool search path.
notfound_is_fatal: if an exception is to be raised if the file cannot be
found.
notfound_val: the value which should be returned if the file is not
found (only relevant if notfound_is_fatal is False)
Outputs: the full path of the file.
Notes:
    if filename is an absolute path by itself, folders in 'dirs' won't be
checked, only the existence of the file will be verified.
"""
if os.path.isabs(filename):
if os.path.exists(filename):
return filename
elif notfound_is_fatal:
raise IOError('File ' + filename + ' not found.')
else:
return notfound_val
if dirs is None:
dirs = []
dirs = normalize_listargument(dirs)
if not dirs: # dirs is empty
dirs = ['.']
if use_pythonpath:
dirs.extend(sys.path)
if use_searchpath:
dirs.extend(sastool_search_path)
# expand ~ and ~user constructs
dirs = [os.path.expanduser(d) for d in dirs]
logger.debug('Searching for file %s in several folders: %s' % (filename, ', '.join(dirs)))
for d in dirs:
if os.path.exists(os.path.join(d, filename)):
logger.debug('Found file %s in folder %s.' % (filename, d))
return os.path.join(d, filename)
logger.debug('Not found file %s in any folders.' % filename)
if notfound_is_fatal:
raise IOError('File %s not found in any of the directories.' % filename)
else:
return notfound_val | Find file in multiple directories.
Inputs:
filename: the file name to be searched for.
dirs: list of folders or None
use_pythonpath: use the Python module search path
use_searchpath: use the sastool search path.
notfound_is_fatal: if an exception is to be raised if the file cannot be
found.
notfound_val: the value which should be returned if the file is not
found (only relevant if notfound_is_fatal is False)
Outputs: the full path of the file.
Notes:
    if filename is an absolute path by itself, folders in 'dirs' won't be
    checked, only the existence of the file will be verified. | Below is the instruction that describes the task:
### Input:
Find file in multiple directories.
Inputs:
filename: the file name to be searched for.
dirs: list of folders or None
use_pythonpath: use the Python module search path
use_searchpath: use the sastool search path.
notfound_is_fatal: if an exception is to be raised if the file cannot be
found.
notfound_val: the value which should be returned if the file is not
found (only relevant if notfound_is_fatal is False)
Outputs: the full path of the file.
Notes:
    if filename is an absolute path by itself, folders in 'dirs' won't be
checked, only the existence of the file will be verified.
### Response:
def findfileindirs(filename, dirs=None, use_pythonpath=True, use_searchpath=True, notfound_is_fatal=True, notfound_val=None):
"""Find file in multiple directories.
Inputs:
filename: the file name to be searched for.
dirs: list of folders or None
use_pythonpath: use the Python module search path
use_searchpath: use the sastool search path.
notfound_is_fatal: if an exception is to be raised if the file cannot be
found.
notfound_val: the value which should be returned if the file is not
found (only relevant if notfound_is_fatal is False)
Outputs: the full path of the file.
Notes:
    if filename is an absolute path by itself, folders in 'dirs' won't be
checked, only the existence of the file will be verified.
"""
if os.path.isabs(filename):
if os.path.exists(filename):
return filename
elif notfound_is_fatal:
raise IOError('File ' + filename + ' not found.')
else:
return notfound_val
if dirs is None:
dirs = []
dirs = normalize_listargument(dirs)
if not dirs: # dirs is empty
dirs = ['.']
if use_pythonpath:
dirs.extend(sys.path)
if use_searchpath:
dirs.extend(sastool_search_path)
# expand ~ and ~user constructs
dirs = [os.path.expanduser(d) for d in dirs]
logger.debug('Searching for file %s in several folders: %s' % (filename, ', '.join(dirs)))
for d in dirs:
if os.path.exists(os.path.join(d, filename)):
logger.debug('Found file %s in folder %s.' % (filename, d))
return os.path.join(d, filename)
logger.debug('Not found file %s in any folders.' % filename)
if notfound_is_fatal:
raise IOError('File %s not found in any of the directories.' % filename)
else:
return notfound_val |
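A stripped-down, runnable sketch of the same lookup loop (stdlib only; `find_in_dirs` and its simplified signature are illustrative, not the sastool API):

```python
import os
import tempfile

def find_in_dirs(filename, dirs, notfound_val=None):
    # Absolute paths are only existence-checked; relative names are tried
    # against each folder in order, after ~ expansion, as in the loop above.
    if os.path.isabs(filename):
        return filename if os.path.exists(filename) else notfound_val
    for d in (os.path.expanduser(d) for d in dirs):
        candidate = os.path.join(d, filename)
        if os.path.exists(candidate):
            return candidate
    return notfound_val

tmpdir = tempfile.mkdtemp()
open(os.path.join(tmpdir, 'data.txt'), 'w').close()
hit = find_in_dirs('data.txt', [tmpdir])
miss = find_in_dirs('missing.txt', [tmpdir])
```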
def validate(reference_tempi, reference_weight, estimated_tempi):
"""Checks that the input annotations to a metric look like valid tempo
annotations.
Parameters
----------
reference_tempi : np.ndarray
reference tempo values, in bpm
reference_weight : float
perceptual weight of slow vs fast in reference
estimated_tempi : np.ndarray
estimated tempo values, in bpm
"""
validate_tempi(reference_tempi, reference=True)
validate_tempi(estimated_tempi, reference=False)
if reference_weight < 0 or reference_weight > 1:
raise ValueError('Reference weight must lie in range [0, 1]') | Checks that the input annotations to a metric look like valid tempo
annotations.
Parameters
----------
reference_tempi : np.ndarray
reference tempo values, in bpm
reference_weight : float
perceptual weight of slow vs fast in reference
estimated_tempi : np.ndarray
        estimated tempo values, in bpm | Below is the instruction that describes the task:
### Input:
Checks that the input annotations to a metric look like valid tempo
annotations.
Parameters
----------
reference_tempi : np.ndarray
reference tempo values, in bpm
reference_weight : float
perceptual weight of slow vs fast in reference
estimated_tempi : np.ndarray
estimated tempo values, in bpm
### Response:
def validate(reference_tempi, reference_weight, estimated_tempi):
"""Checks that the input annotations to a metric look like valid tempo
annotations.
Parameters
----------
reference_tempi : np.ndarray
reference tempo values, in bpm
reference_weight : float
perceptual weight of slow vs fast in reference
estimated_tempi : np.ndarray
estimated tempo values, in bpm
"""
validate_tempi(reference_tempi, reference=True)
validate_tempi(estimated_tempi, reference=False)
if reference_weight < 0 or reference_weight > 1:
raise ValueError('Reference weight must lie in range [0, 1]') |
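The weight check at the end is self-contained enough to demonstrate on its own; this sketch reproduces just that branch (the tempo-array validation is omitted because `validate_tempi` is not shown here):

```python
def check_reference_weight(reference_weight):
    # Reproduces the final branch above: the perceptual weight is a
    # probability-like value and must lie in the closed interval [0, 1].
    if reference_weight < 0 or reference_weight > 1:
        raise ValueError('Reference weight must lie in range [0, 1]')
    return reference_weight

accepted = check_reference_weight(0.7)

rejected = False
try:
    check_reference_weight(1.5)
except ValueError:
    rejected = True
```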
def get_extr_lics_xref(self, extr_lic):
"""
Return a list of cross references.
"""
xrefs = list(self.graph.triples((extr_lic, RDFS.seeAlso, None)))
        return map(lambda xref_triple: xref_triple[2], xrefs) | Return a list of cross references. | Below is the instruction that describes the task:
### Input:
Return a list of cross references.
### Response:
def get_extr_lics_xref(self, extr_lic):
"""
Return a list of cross references.
"""
xrefs = list(self.graph.triples((extr_lic, RDFS.seeAlso, None)))
return map(lambda xref_triple: xref_triple[2], xrefs) |
def configure_s3_resources(self, config):
"""
Extract the S3 operations from the configuration and execute them.
:param config: config of SageMaker operation
:type config: dict
:rtype: dict
"""
s3_operations = config.pop('S3Operations', None)
if s3_operations is not None:
create_bucket_ops = s3_operations.get('S3CreateBucket', [])
upload_ops = s3_operations.get('S3Upload', [])
for op in create_bucket_ops:
self.s3_hook.create_bucket(bucket_name=op['Bucket'])
for op in upload_ops:
if op['Tar']:
self.tar_and_s3_upload(op['Path'], op['Key'],
op['Bucket'])
else:
self.s3_hook.load_file(op['Path'], op['Key'],
op['Bucket']) | Extract the S3 operations from the configuration and execute them.
:param config: config of SageMaker operation
:type config: dict
        :rtype: dict | Below is the instruction that describes the task:
### Input:
Extract the S3 operations from the configuration and execute them.
:param config: config of SageMaker operation
:type config: dict
:rtype: dict
### Response:
def configure_s3_resources(self, config):
"""
Extract the S3 operations from the configuration and execute them.
:param config: config of SageMaker operation
:type config: dict
:rtype: dict
"""
s3_operations = config.pop('S3Operations', None)
if s3_operations is not None:
create_bucket_ops = s3_operations.get('S3CreateBucket', [])
upload_ops = s3_operations.get('S3Upload', [])
for op in create_bucket_ops:
self.s3_hook.create_bucket(bucket_name=op['Bucket'])
for op in upload_ops:
if op['Tar']:
self.tar_and_s3_upload(op['Path'], op['Key'],
op['Bucket'])
else:
self.s3_hook.load_file(op['Path'], op['Key'],
op['Bucket']) |
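Stripped of the boto-backed hooks, the control flow above is just "pop the section, dispatch each op"; in this sketch the hook calls are replaced by list appends so it runs standalone (the config keys are illustrative):

```python
def extract_s3_operations(config, created, uploaded):
    # Pop S3Operations out of the config (so downstream SageMaker calls
    # never see it) and record what each hook call would have done.
    s3_operations = config.pop('S3Operations', None)
    if s3_operations is not None:
        for op in s3_operations.get('S3CreateBucket', []):
            created.append(op['Bucket'])
        for op in s3_operations.get('S3Upload', []):
            uploaded.append((op['Path'], op['Key'], op['Bucket'], op['Tar']))
    return config

cfg = {
    'TrainingJobName': 'job-1',
    'S3Operations': {
        'S3CreateBucket': [{'Bucket': 'my-bucket'}],
        'S3Upload': [{'Path': 'model.tar.gz', 'Key': 'k',
                      'Bucket': 'my-bucket', 'Tar': True}],
    },
}
created, uploaded = [], []
cfg = extract_s3_operations(cfg, created, uploaded)
```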
def set_training(train_mode): #pylint: disable=redefined-outer-name
"""Set status to training/predicting. This affects ctx.is_train in operator
running context. For example, Dropout will drop inputs randomly when
train_mode=True while simply passing through if train_mode=False.
Parameters
----------
train_mode: bool
Returns
-------
previous state before this set.
"""
prev = ctypes.c_int()
check_call(_LIB.MXAutogradSetIsTraining(
ctypes.c_int(train_mode), ctypes.byref(prev)))
return bool(prev.value) | Set status to training/predicting. This affects ctx.is_train in operator
running context. For example, Dropout will drop inputs randomly when
train_mode=True while simply passing through if train_mode=False.
Parameters
----------
train_mode: bool
Returns
-------
    previous state before this set. | Below is the instruction that describes the task:
### Input:
Set status to training/predicting. This affects ctx.is_train in operator
running context. For example, Dropout will drop inputs randomly when
train_mode=True while simply passing through if train_mode=False.
Parameters
----------
train_mode: bool
Returns
-------
previous state before this set.
### Response:
def set_training(train_mode): #pylint: disable=redefined-outer-name
"""Set status to training/predicting. This affects ctx.is_train in operator
running context. For example, Dropout will drop inputs randomly when
train_mode=True while simply passing through if train_mode=False.
Parameters
----------
train_mode: bool
Returns
-------
previous state before this set.
"""
prev = ctypes.c_int()
check_call(_LIB.MXAutogradSetIsTraining(
ctypes.c_int(train_mode), ctypes.byref(prev)))
return bool(prev.value) |
def writeString(self, str):
"""Write the content of the string in the output I/O buffer
        This routine handles the I18N transcoding from internal
UTF-8 The buffer is lossless, i.e. will store in case of
partial or delayed writes. """
ret = libxml2mod.xmlOutputBufferWriteString(self._o, str)
return ret | Write the content of the string in the output I/O buffer
    This routine handles the I18N transcoding from internal
UTF-8 The buffer is lossless, i.e. will store in case of
    partial or delayed writes. | Below is the instruction that describes the task:
### Input:
Write the content of the string in the output I/O buffer
    This routine handles the I18N transcoding from internal
UTF-8 The buffer is lossless, i.e. will store in case of
partial or delayed writes.
### Response:
def writeString(self, str):
"""Write the content of the string in the output I/O buffer
        This routine handles the I18N transcoding from internal
UTF-8 The buffer is lossless, i.e. will store in case of
partial or delayed writes. """
ret = libxml2mod.xmlOutputBufferWriteString(self._o, str)
return ret |
def setup_signing_encode(self, target_system, target_component, secret_key, initial_timestamp):
'''
                Set up a MAVLink2 signing key. If called with a secret_key of all zeros
                and a zero initial_timestamp, signing will be disabled
target_system : system id of the target (uint8_t)
target_component : component ID of the target (uint8_t)
secret_key : signing key (uint8_t)
initial_timestamp : initial timestamp (uint64_t)
'''
        return MAVLink_setup_signing_message(target_system, target_component, secret_key, initial_timestamp) | Set up a MAVLink2 signing key. If called with a secret_key of all zeros
        and a zero initial_timestamp, signing will be disabled
target_system : system id of the target (uint8_t)
target_component : component ID of the target (uint8_t)
secret_key : signing key (uint8_t)
                initial_timestamp : initial timestamp (uint64_t) | Below is the instruction that describes the task:
### Input:
    Set up a MAVLink2 signing key. If called with a secret_key of all zeros
    and a zero initial_timestamp, signing will be disabled
target_system : system id of the target (uint8_t)
target_component : component ID of the target (uint8_t)
secret_key : signing key (uint8_t)
initial_timestamp : initial timestamp (uint64_t)
### Response:
def setup_signing_encode(self, target_system, target_component, secret_key, initial_timestamp):
'''
                Set up a MAVLink2 signing key. If called with a secret_key of all zeros
                and a zero initial_timestamp, signing will be disabled
target_system : system id of the target (uint8_t)
target_component : component ID of the target (uint8_t)
secret_key : signing key (uint8_t)
initial_timestamp : initial timestamp (uint64_t)
'''
return MAVLink_setup_signing_message(target_system, target_component, secret_key, initial_timestamp) |
def close(self):
"""Cleanup client resources and disconnect from MongoDB.
On MongoDB >= 3.6, end all server sessions created by this client by
sending one or more endSessions commands.
Close all sockets in the connection pools and stop the monitor threads.
If this instance is used again it will be automatically re-opened and
the threads restarted.
.. versionchanged:: 3.6
End all server sessions created by this client.
"""
session_ids = self._topology.pop_all_sessions()
if session_ids:
self._end_sessions(session_ids)
# Stop the periodic task thread and then run _process_periodic_tasks
# to send pending killCursor requests before closing the topology.
self._kill_cursors_executor.close()
self._process_periodic_tasks()
self._topology.close() | Cleanup client resources and disconnect from MongoDB.
On MongoDB >= 3.6, end all server sessions created by this client by
sending one or more endSessions commands.
Close all sockets in the connection pools and stop the monitor threads.
If this instance is used again it will be automatically re-opened and
the threads restarted.
.. versionchanged:: 3.6
        End all server sessions created by this client. | Below is the instruction that describes the task:
### Input:
Cleanup client resources and disconnect from MongoDB.
On MongoDB >= 3.6, end all server sessions created by this client by
sending one or more endSessions commands.
Close all sockets in the connection pools and stop the monitor threads.
If this instance is used again it will be automatically re-opened and
the threads restarted.
.. versionchanged:: 3.6
End all server sessions created by this client.
### Response:
def close(self):
"""Cleanup client resources and disconnect from MongoDB.
On MongoDB >= 3.6, end all server sessions created by this client by
sending one or more endSessions commands.
Close all sockets in the connection pools and stop the monitor threads.
If this instance is used again it will be automatically re-opened and
the threads restarted.
.. versionchanged:: 3.6
End all server sessions created by this client.
"""
session_ids = self._topology.pop_all_sessions()
if session_ids:
self._end_sessions(session_ids)
# Stop the periodic task thread and then run _process_periodic_tasks
# to send pending killCursor requests before closing the topology.
self._kill_cursors_executor.close()
self._process_periodic_tasks()
self._topology.close() |
def rename_item_list(self, item_list_url, new_name):
""" Rename an Item List on the server
:type item_list_url: String or ItemList
:param item_list_url: the URL of the list to which to add the items,
or an ItemList object
:type new_name: String
:param new_name: the new name to give the Item List
:rtype: ItemList
:returns: the item list, if successful
:raises: APIError if the request was not successful
"""
data = json.dumps({'name': new_name})
resp = self.api_request(str(item_list_url), data, method="PUT")
try:
return ItemList(resp['items'], self, item_list_url, resp['name'])
except KeyError:
try:
raise APIError('200', 'Rename operation failed', resp['error'])
except KeyError:
raise APIError('200', 'Rename operation failed', resp) | Rename an Item List on the server
:type item_list_url: String or ItemList
:param item_list_url: the URL of the list to which to add the items,
or an ItemList object
:type new_name: String
:param new_name: the new name to give the Item List
:rtype: ItemList
:returns: the item list, if successful
        :raises: APIError if the request was not successful | Below is the instruction that describes the task:
### Input:
Rename an Item List on the server
:type item_list_url: String or ItemList
:param item_list_url: the URL of the list to which to add the items,
or an ItemList object
:type new_name: String
:param new_name: the new name to give the Item List
:rtype: ItemList
:returns: the item list, if successful
:raises: APIError if the request was not successful
### Response:
def rename_item_list(self, item_list_url, new_name):
""" Rename an Item List on the server
:type item_list_url: String or ItemList
:param item_list_url: the URL of the list to which to add the items,
or an ItemList object
:type new_name: String
:param new_name: the new name to give the Item List
:rtype: ItemList
:returns: the item list, if successful
:raises: APIError if the request was not successful
"""
data = json.dumps({'name': new_name})
resp = self.api_request(str(item_list_url), data, method="PUT")
try:
return ItemList(resp['items'], self, item_list_url, resp['name'])
except KeyError:
try:
raise APIError('200', 'Rename operation failed', resp['error'])
except KeyError:
raise APIError('200', 'Rename operation failed', resp) |
def update_mute(self, data):
"""Update mute."""
self._group['muted'] = data['mute']
self.callback()
        _LOGGER.info('updated mute on %s', self.friendly_name) | Update mute. | Below is the instruction that describes the task:
### Input:
Update mute.
### Response:
def update_mute(self, data):
"""Update mute."""
self._group['muted'] = data['mute']
self.callback()
_LOGGER.info('updated mute on %s', self.friendly_name) |
def project(self, point_cloud, round_px=True):
"""Projects a point cloud onto the camera image plane.
Parameters
----------
point_cloud : :obj:`autolab_core.PointCloud` or :obj:`autolab_core.Point`
A PointCloud or Point to project onto the camera image plane.
round_px : bool
If True, projections are rounded to the nearest pixel.
Returns
-------
:obj:`autolab_core.ImageCoords` or :obj:`autolab_core.Point`
A corresponding set of image coordinates representing the given
PointCloud's projections onto the camera image plane. If the input
was a single Point, returns a 2D Point in the camera plane.
Raises
------
ValueError
If the input is not a PointCloud or Point in the same reference
frame as the camera.
"""
if not isinstance(point_cloud, PointCloud) and not (isinstance(point_cloud, Point) and point_cloud.dim == 3):
raise ValueError('Must provide PointCloud or 3D Point object for projection')
if point_cloud.frame != self._frame:
raise ValueError('Cannot project points in frame %s into camera with frame %s' %(point_cloud.frame, self._frame))
points_proj = self._K.dot(point_cloud.data)
if len(points_proj.shape) == 1:
points_proj = points_proj[:, np.newaxis]
point_depths = np.tile(points_proj[2,:], [3, 1])
points_proj = np.divide(points_proj, point_depths)
if round_px:
points_proj = np.round(points_proj)
if isinstance(point_cloud, Point):
return Point(data=points_proj[:2,:].astype(np.int16), frame=self._frame)
return ImageCoords(data=points_proj[:2,:].astype(np.int16), frame=self._frame) | Projects a point cloud onto the camera image plane.
Parameters
----------
point_cloud : :obj:`autolab_core.PointCloud` or :obj:`autolab_core.Point`
A PointCloud or Point to project onto the camera image plane.
round_px : bool
If True, projections are rounded to the nearest pixel.
Returns
-------
:obj:`autolab_core.ImageCoords` or :obj:`autolab_core.Point`
A corresponding set of image coordinates representing the given
PointCloud's projections onto the camera image plane. If the input
was a single Point, returns a 2D Point in the camera plane.
Raises
------
ValueError
If the input is not a PointCloud or Point in the same reference
        frame as the camera. | Below is the instruction that describes the task:
### Input:
Projects a point cloud onto the camera image plane.
Parameters
----------
point_cloud : :obj:`autolab_core.PointCloud` or :obj:`autolab_core.Point`
A PointCloud or Point to project onto the camera image plane.
round_px : bool
If True, projections are rounded to the nearest pixel.
Returns
-------
:obj:`autolab_core.ImageCoords` or :obj:`autolab_core.Point`
A corresponding set of image coordinates representing the given
PointCloud's projections onto the camera image plane. If the input
was a single Point, returns a 2D Point in the camera plane.
Raises
------
ValueError
If the input is not a PointCloud or Point in the same reference
frame as the camera.
### Response:
def project(self, point_cloud, round_px=True):
"""Projects a point cloud onto the camera image plane.
Parameters
----------
point_cloud : :obj:`autolab_core.PointCloud` or :obj:`autolab_core.Point`
A PointCloud or Point to project onto the camera image plane.
round_px : bool
If True, projections are rounded to the nearest pixel.
Returns
-------
:obj:`autolab_core.ImageCoords` or :obj:`autolab_core.Point`
A corresponding set of image coordinates representing the given
PointCloud's projections onto the camera image plane. If the input
was a single Point, returns a 2D Point in the camera plane.
Raises
------
ValueError
If the input is not a PointCloud or Point in the same reference
frame as the camera.
"""
if not isinstance(point_cloud, PointCloud) and not (isinstance(point_cloud, Point) and point_cloud.dim == 3):
raise ValueError('Must provide PointCloud or 3D Point object for projection')
if point_cloud.frame != self._frame:
raise ValueError('Cannot project points in frame %s into camera with frame %s' %(point_cloud.frame, self._frame))
points_proj = self._K.dot(point_cloud.data)
if len(points_proj.shape) == 1:
points_proj = points_proj[:, np.newaxis]
point_depths = np.tile(points_proj[2,:], [3, 1])
points_proj = np.divide(points_proj, point_depths)
if round_px:
points_proj = np.round(points_proj)
if isinstance(point_cloud, Point):
return Point(data=points_proj[:2,:].astype(np.int16), frame=self._frame)
return ImageCoords(data=points_proj[:2,:].astype(np.int16), frame=self._frame) |
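The projection math above reduces to a 3x3 intrinsics multiply followed by a divide-by-depth; a dependency-free sketch (the intrinsics values below are made up for illustration):

```python
def project_points(K, points, round_px=True):
    # u_h = K @ p for each 3D point p, then divide by the depth (third
    # homogeneous coordinate) and optionally round to whole pixels --
    # the same steps as the matrix form above.
    pixels = []
    for x, y, z in points:
        u = K[0][0] * x + K[0][1] * y + K[0][2] * z
        v = K[1][0] * x + K[1][1] * y + K[1][2] * z
        w = K[2][0] * x + K[2][1] * y + K[2][2] * z
        u, v = u / w, v / w
        if round_px:
            u, v = round(u), round(v)
        pixels.append((u, v))
    return pixels

K = [[500.0,   0.0, 320.0],   # fx, skew, cx (illustrative values)
     [  0.0, 500.0, 240.0],   # fy, cy
     [  0.0,   0.0,   1.0]]
px = project_points(K, [(0.1, -0.2, 2.0)])
```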
def create(self, parameters={}, create_keys=True, **kwargs):
"""
Create the service.
"""
# Create the service
cs = self._create_service(parameters=parameters, **kwargs)
# Create the service key to get config details and
# store in local cache file.
if create_keys:
cfg = parameters
cfg.update(self._get_service_config())
            self.settings.save(cfg) | Create the service. | Below is the instruction that describes the task:
### Input:
Create the service.
### Response:
def create(self, parameters={}, create_keys=True, **kwargs):
"""
Create the service.
"""
# Create the service
cs = self._create_service(parameters=parameters, **kwargs)
# Create the service key to get config details and
# store in local cache file.
if create_keys:
cfg = parameters
cfg.update(self._get_service_config())
self.settings.save(cfg) |
def action(self, target, message, formatted=True, tags=None):
"""Send an action to the given target."""
if formatted:
message = unescape(message)
        self.ctcp(target, 'ACTION', message) | Send an action to the given target. | Below is the instruction that describes the task:
### Input:
Send an action to the given target.
### Response:
def action(self, target, message, formatted=True, tags=None):
"""Send an action to the given target."""
if formatted:
message = unescape(message)
self.ctcp(target, 'ACTION', message) |
def generate_reset_password_token(user):
# type: (User) -> Any
"""Generate a unique reset password token for the specified user.
:param user: The user to work with
"""
data = [str(user.id), md5(user.password)]
return get_serializer("reset").dumps(data) | Generate a unique reset password token for the specified user.
    :param user: The user to work with | Below is the instruction that describes the task:
### Input:
Generate a unique reset password token for the specified user.
:param user: The user to work with
### Response:
def generate_reset_password_token(user):
# type: (User) -> Any
"""Generate a unique reset password token for the specified user.
:param user: The user to work with
"""
data = [str(user.id), md5(user.password)]
return get_serializer("reset").dumps(data) |
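`get_serializer` comes from the surrounding application's helpers and is not shown here; to illustrate the idea, a hypothetical stand-in built on stdlib `hmac` -- the `dumps`/`loads` names and the `SECRET` value are assumptions, not the original API:

```python
import hashlib
import hmac
import json

SECRET = b'app-secret-key'  # hypothetical; real apps load this from config

def dumps(data, secret=SECRET):
    # Serialize the [user id, password-hash] payload and append an HMAC,
    # so the reset link cannot be forged or retargeted at another user.
    payload = json.dumps(data).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return payload.hex() + '.' + sig

def loads(token, secret=SECRET):
    payload_hex, sig = token.split('.')
    payload = bytes.fromhex(payload_hex)
    good = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        raise ValueError('bad signature')
    return json.loads(payload)

pw_hash = hashlib.md5(b'stored-password-hash').hexdigest()
token = dumps(['42', pw_hash])
round_trip = loads(token)
```

Binding the stored password's hash into the payload means the token stops validating once the password changes, a common way to make reset tokens effectively single-use.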
def next(self):
"""
Returns new CharacterDataWrapper
TODO: Don't raise offset past count - limit
"""
self.params['offset'] = str(int(self.params['offset']) + int(self.params['limit']))
return self.marvel.get_characters(self.marvel, (), **self.params) | Returns new CharacterDataWrapper
    TODO: Don't raise offset past count - limit | Below is the instruction that describes the task:
### Input:
Returns new CharacterDataWrapper
TODO: Don't raise offset past count - limit
### Response:
def next(self):
"""
Returns new CharacterDataWrapper
TODO: Don't raise offset past count - limit
"""
self.params['offset'] = str(int(self.params['offset']) + int(self.params['limit']))
return self.marvel.get_characters(self.marvel, (), **self.params) |
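The offset arithmetic above is easy to check in isolation; a sketch of the pagination step (the clamping mentioned in the TODO is still left out):

```python
def next_page_params(params):
    # Advance offset by one page (limit) and keep the rest of the query
    # unchanged; copying avoids mutating the caller's dict, unlike the
    # in-place update above.
    params = dict(params)
    params['offset'] = str(int(params['offset']) + int(params['limit']))
    return params

page1 = {'offset': '0', 'limit': '20'}
page2 = next_page_params(page1)
page3 = next_page_params(page2)
```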
def get_asset_content_lookup_session_for_repository(self, repository_id):
"""Pass through to provider get_asset_content_lookup_session_for_repository"""
return getattr(sessions, 'AssetContentLookupSession')(
provider_session=self._provider_manager.get_asset_content_lookup_session_for_repository(repository_id),
            authz_session=self._authz_session) | Pass through to provider get_asset_content_lookup_session_for_repository | Below is the instruction that describes the task:
### Input:
Pass through to provider get_asset_content_lookup_session_for_repository
### Response:
def get_asset_content_lookup_session_for_repository(self, repository_id):
"""Pass through to provider get_asset_content_lookup_session_for_repository"""
return getattr(sessions, 'AssetContentLookupSession')(
provider_session=self._provider_manager.get_asset_content_lookup_session_for_repository(repository_id),
authz_session=self._authz_session) |
def within_set(df, items=None):
"""
Assert that df is a subset of items
Parameters
==========
df : DataFrame
items : dict
mapping of columns (k) to array-like of values (v) that
``df[k]`` is expected to be a subset of
"""
for k, v in items.items():
if not df[k].isin(v).all():
raise AssertionError
return df | Assert that df is a subset of items
Parameters
==========
df : DataFrame
items : dict
mapping of columns (k) to array-like of values (v) that
        ``df[k]`` is expected to be a subset of | Below is the instruction that describes the task:
### Input:
Assert that df is a subset of items
Parameters
==========
df : DataFrame
items : dict
mapping of columns (k) to array-like of values (v) that
``df[k]`` is expected to be a subset of
### Response:
def within_set(df, items=None):
"""
Assert that df is a subset of items
Parameters
==========
df : DataFrame
items : dict
mapping of columns (k) to array-like of values (v) that
``df[k]`` is expected to be a subset of
"""
for k, v in items.items():
if not df[k].isin(v).all():
raise AssertionError
return df |
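The same membership assertion can be sketched without pandas, on a plain dict-of-columns table (`within_set_plain` is an illustrative name):

```python
def within_set_plain(table, items):
    # For each constrained column, every value must belong to the allowed
    # set -- the plain-Python equivalent of df[k].isin(v).all().
    for col, allowed in items.items():
        allowed = set(allowed)
        if not all(value in allowed for value in table[col]):
            raise AssertionError
    return table

table = {'color': ['red', 'green', 'red'], 'size': [1, 2, 2]}
checked = within_set_plain(table, {'color': ['red', 'green', 'blue']})

failed = False
try:
    within_set_plain(table, {'size': [1]})
except AssertionError:
    failed = True
```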
def _get_args_name_from_parser(parser):
"""Retrieve the name of the function argument linked to the given parser.
Args:
parser: a function parser
"""
# Retrieve the 'action' destination of the method parser i.e. its
# argument name. The HelpAction is ignored.
return [action.dest for action in parser._actions if not
isinstance(action, argparse._HelpAction)] | Retrieve the name of the function argument linked to the given parser.
Args:
        parser: a function parser | Below is the instruction that describes the task:
### Input:
Retrieve the name of the function argument linked to the given parser.
Args:
parser: a function parser
### Response:
def _get_args_name_from_parser(parser):
"""Retrieve the name of the function argument linked to the given parser.
Args:
parser: a function parser
"""
# Retrieve the 'action' destination of the method parser i.e. its
# argument name. The HelpAction is ignored.
return [action.dest for action in parser._actions if not
isinstance(action, argparse._HelpAction)] |
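The lookup above relies on argparse internals (`_actions` and `_HelpAction` are private attributes, so subject to change between Python versions); a minimal sketch of the same idea:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("name")
parser.add_argument("--count")

# Mirror the function above: every action destination except the implicit help action.
dests = [action.dest for action in parser._actions
         if not isinstance(action, argparse._HelpAction)]
print(dests)  # -> ['name', 'count']
```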
def __install_eggs(self, config):
""" Install eggs for a particular configuration """
egg_carton = (self.directory.install_directory(self.feature_name),
'requirements.txt')
eggs = self.__gather_eggs(config)
self.logger.debug("Installing eggs %s..." % eggs)
self.__load_carton(egg_carton, eggs)
        self.__prepare_eggs(egg_carton, config) | Install eggs for a particular configuration | Below is the instruction that describes the task:
### Input:
Install eggs for a particular configuration
### Response:
def __install_eggs(self, config):
""" Install eggs for a particular configuration """
egg_carton = (self.directory.install_directory(self.feature_name),
'requirements.txt')
eggs = self.__gather_eggs(config)
self.logger.debug("Installing eggs %s..." % eggs)
self.__load_carton(egg_carton, eggs)
self.__prepare_eggs(egg_carton, config) |
def V_horiz_guppy(D, L, a, h, headonly=False):
r'''Calculates volume of a tank with guppy heads, according to [1]_.
.. math::
V_f = A_fL + \frac{2aR^2}{3}\cos^{-1}\left(1 - \frac{h}{R}\right)
+\frac{2a}{9R}\sqrt{2Rh - h^2}(2h-3R)(h+R)
.. math::
Af = R^2\cos^{-1}\frac{R-h}{R} - (R-h)\sqrt{2Rh - h^2}
Parameters
----------
D : float
Diameter of the main cylindrical section, [m]
L : float
Length of the main cylindrical section, [m]
a : float
Distance the guppy head extends on one side, [m]
h : float
Height, as measured up to where the fluid ends, [m]
headonly : bool, optional
Function returns only the volume of a single head side if True
Returns
-------
V : float
Volume [m^3]
Examples
--------
Matching example from [1]_, with inputs in inches and volume in gallons.
>>> V_horiz_guppy(D=108., L=156., a=42., h=36)/231.
1931.7208029476762
References
----------
.. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015.
http://www.webcalc.com.br/blog/Tank_Volume.PDF'''
R = 0.5*D
Af = R*R*acos((R-h)/R) - (R-h)*(2.*R*h - h*h)**0.5
Vf = 2.*a*R*R/3.*acos(1. - h/R) + 2.*a/9./R*(2*R*h - h**2)**0.5*(2*h - 3*R)*(h + R)
if headonly:
Vf = Vf/2.
else:
Vf += Af*L
return Vf | r'''Calculates volume of a tank with guppy heads, according to [1]_.
.. math::
V_f = A_fL + \frac{2aR^2}{3}\cos^{-1}\left(1 - \frac{h}{R}\right)
+\frac{2a}{9R}\sqrt{2Rh - h^2}(2h-3R)(h+R)
.. math::
Af = R^2\cos^{-1}\frac{R-h}{R} - (R-h)\sqrt{2Rh - h^2}
Parameters
----------
D : float
Diameter of the main cylindrical section, [m]
L : float
Length of the main cylindrical section, [m]
a : float
Distance the guppy head extends on one side, [m]
h : float
Height, as measured up to where the fluid ends, [m]
headonly : bool, optional
Function returns only the volume of a single head side if True
Returns
-------
V : float
Volume [m^3]
Examples
--------
Matching example from [1]_, with inputs in inches and volume in gallons.
>>> V_horiz_guppy(D=108., L=156., a=42., h=36)/231.
1931.7208029476762
References
----------
.. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015.
       http://www.webcalc.com.br/blog/Tank_Volume.PDF | Below is the instruction that describes the task:
### Input:
r'''Calculates volume of a tank with guppy heads, according to [1]_.
.. math::
V_f = A_fL + \frac{2aR^2}{3}\cos^{-1}\left(1 - \frac{h}{R}\right)
+\frac{2a}{9R}\sqrt{2Rh - h^2}(2h-3R)(h+R)
.. math::
Af = R^2\cos^{-1}\frac{R-h}{R} - (R-h)\sqrt{2Rh - h^2}
Parameters
----------
D : float
Diameter of the main cylindrical section, [m]
L : float
Length of the main cylindrical section, [m]
a : float
Distance the guppy head extends on one side, [m]
h : float
Height, as measured up to where the fluid ends, [m]
headonly : bool, optional
Function returns only the volume of a single head side if True
Returns
-------
V : float
Volume [m^3]
Examples
--------
Matching example from [1]_, with inputs in inches and volume in gallons.
>>> V_horiz_guppy(D=108., L=156., a=42., h=36)/231.
1931.7208029476762
References
----------
.. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015.
http://www.webcalc.com.br/blog/Tank_Volume.PDF
### Response:
def V_horiz_guppy(D, L, a, h, headonly=False):
r'''Calculates volume of a tank with guppy heads, according to [1]_.
.. math::
V_f = A_fL + \frac{2aR^2}{3}\cos^{-1}\left(1 - \frac{h}{R}\right)
+\frac{2a}{9R}\sqrt{2Rh - h^2}(2h-3R)(h+R)
.. math::
Af = R^2\cos^{-1}\frac{R-h}{R} - (R-h)\sqrt{2Rh - h^2}
Parameters
----------
D : float
Diameter of the main cylindrical section, [m]
L : float
Length of the main cylindrical section, [m]
a : float
Distance the guppy head extends on one side, [m]
h : float
Height, as measured up to where the fluid ends, [m]
headonly : bool, optional
Function returns only the volume of a single head side if True
Returns
-------
V : float
Volume [m^3]
Examples
--------
Matching example from [1]_, with inputs in inches and volume in gallons.
>>> V_horiz_guppy(D=108., L=156., a=42., h=36)/231.
1931.7208029476762
References
----------
.. [1] Jones, D. "Calculating Tank Volume." Text. Accessed December 22, 2015.
http://www.webcalc.com.br/blog/Tank_Volume.PDF'''
R = 0.5*D
Af = R*R*acos((R-h)/R) - (R-h)*(2.*R*h - h*h)**0.5
Vf = 2.*a*R*R/3.*acos(1. - h/R) + 2.*a/9./R*(2*R*h - h**2)**0.5*(2*h - 3*R)*(h + R)
if headonly:
Vf = Vf/2.
else:
Vf += Af*L
return Vf |
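The guppy-head formulas can be sanity-checked against the docstring example by re-deriving them with `math.acos`; `guppy_head_plus_cylinder` is an illustrative stand-alone version of the computation above:

```python
from math import acos

def guppy_head_plus_cylinder(D, L, a, h):
    # Re-derivation of the Af and Vf formulas above, heads plus cylinder.
    R = 0.5 * D
    Af = R * R * acos((R - h) / R) - (R - h) * (2.0 * R * h - h * h) ** 0.5
    Vf = (2.0 * a * R * R / 3.0 * acos(1.0 - h / R)
          + 2.0 * a / 9.0 / R * (2 * R * h - h ** 2) ** 0.5 * (2 * h - 3 * R) * (h + R))
    return Vf + Af * L

v = guppy_head_plus_cylinder(D=108.0, L=156.0, a=42.0, h=36.0)
print(v / 231.0)  # -> approximately 1931.72 gallons, matching the docstring example
```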
def compute_position_log(self,
td=None,
method='mc',
update_deviation=True):
"""
Args:
deviation (ndarray): A deviation survey with rows like MD, INC, AZI
td (Number): The TD of the well, if not the end of the deviation
survey you're passing.
method (str):
'aa': average angle
'bt': balanced tangential
'mc': minimum curvature
update_deviation: This function makes some adjustments to the dev-
iation survey, to account for the surface and TD. If you do not
want to change the stored deviation survey, set to False.
Returns:
ndarray. A position log with rows like X-offset, Y-offset, Z-offset
"""
deviation = np.copy(self.deviation)
# Adjust to TD.
if td is not None:
last_row = np.copy(deviation[-1, :])
last_row[0] = td
deviation = np.vstack([deviation, last_row])
# Adjust to surface if necessary.
if deviation[0, 0] > 0:
deviation = np.vstack([np.array([0, 0, 0]), deviation])
last = deviation[:-1]
this = deviation[1:]
diff = this[:, 0] - last[:, 0]
Ia, Aa = np.radians(last[:, 1]), np.radians(last[:, 2])
Ib, Ab = np.radians(this[:, 1]), np.radians(this[:, 2])
if method == 'aa':
Iavg = (Ia + Ib) / 2
Aavg = (Aa + Ab) / 2
delta_N = diff * np.sin(Iavg) * np.cos(Aavg)
delta_E = diff * np.sin(Iavg) * np.sin(Aavg)
delta_V = diff * np.cos(Iavg)
elif method in ('bt', 'mc'):
delta_N = 0.5 * diff * np.sin(Ia) * np.cos(Aa)
delta_N += 0.5 * diff * np.sin(Ib) * np.cos(Ab)
delta_E = 0.5 * diff * np.sin(Ia) * np.sin(Aa)
delta_E += 0.5 * diff * np.sin(Ib) * np.sin(Ab)
delta_V = 0.5 * diff * np.cos(Ia)
delta_V += 0.5 * diff * np.cos(Ib)
else:
raise Exception("Method must be one of 'aa', 'bt', 'mc'")
if method == 'mc':
_x = np.sin(Ib) * (1 - np.cos(Ab - Aa))
dogleg = np.arccos(np.cos(Ib - Ia) - np.sin(Ia) * _x)
dogleg[dogleg == 0] = 1e-9
rf = 2 / dogleg * np.tan(dogleg / 2) # ratio factor
rf[np.isnan(rf)] = 1 # Adjust for NaN.
delta_N *= rf
delta_E *= rf
delta_V *= rf
# Prepare the output array.
        result = np.zeros_like(deviation, dtype=float)
# Stack the results, add the surface.
_offsets = np.squeeze(np.dstack([delta_N, delta_E, delta_V]))
_offsets = np.vstack([np.array([0, 0, 0]), _offsets])
result += _offsets.cumsum(axis=0)
if update_deviation:
self.deviation = deviation
self.position = result
return | Args:
deviation (ndarray): A deviation survey with rows like MD, INC, AZI
td (Number): The TD of the well, if not the end of the deviation
survey you're passing.
method (str):
'aa': average angle
'bt': balanced tangential
'mc': minimum curvature
update_deviation: This function makes some adjustments to the dev-
iation survey, to account for the surface and TD. If you do not
want to change the stored deviation survey, set to False.
Returns:
        ndarray. A position log with rows like X-offset, Y-offset, Z-offset | Below is the instruction that describes the task:
### Input:
Args:
deviation (ndarray): A deviation survey with rows like MD, INC, AZI
td (Number): The TD of the well, if not the end of the deviation
survey you're passing.
method (str):
'aa': average angle
'bt': balanced tangential
'mc': minimum curvature
update_deviation: This function makes some adjustments to the dev-
iation survey, to account for the surface and TD. If you do not
want to change the stored deviation survey, set to False.
Returns:
ndarray. A position log with rows like X-offset, Y-offset, Z-offset
### Response:
def compute_position_log(self,
td=None,
method='mc',
update_deviation=True):
"""
Args:
deviation (ndarray): A deviation survey with rows like MD, INC, AZI
td (Number): The TD of the well, if not the end of the deviation
survey you're passing.
method (str):
'aa': average angle
'bt': balanced tangential
'mc': minimum curvature
update_deviation: This function makes some adjustments to the dev-
iation survey, to account for the surface and TD. If you do not
want to change the stored deviation survey, set to False.
Returns:
ndarray. A position log with rows like X-offset, Y-offset, Z-offset
"""
deviation = np.copy(self.deviation)
# Adjust to TD.
if td is not None:
last_row = np.copy(deviation[-1, :])
last_row[0] = td
deviation = np.vstack([deviation, last_row])
# Adjust to surface if necessary.
if deviation[0, 0] > 0:
deviation = np.vstack([np.array([0, 0, 0]), deviation])
last = deviation[:-1]
this = deviation[1:]
diff = this[:, 0] - last[:, 0]
Ia, Aa = np.radians(last[:, 1]), np.radians(last[:, 2])
Ib, Ab = np.radians(this[:, 1]), np.radians(this[:, 2])
if method == 'aa':
Iavg = (Ia + Ib) / 2
Aavg = (Aa + Ab) / 2
delta_N = diff * np.sin(Iavg) * np.cos(Aavg)
delta_E = diff * np.sin(Iavg) * np.sin(Aavg)
delta_V = diff * np.cos(Iavg)
elif method in ('bt', 'mc'):
delta_N = 0.5 * diff * np.sin(Ia) * np.cos(Aa)
delta_N += 0.5 * diff * np.sin(Ib) * np.cos(Ab)
delta_E = 0.5 * diff * np.sin(Ia) * np.sin(Aa)
delta_E += 0.5 * diff * np.sin(Ib) * np.sin(Ab)
delta_V = 0.5 * diff * np.cos(Ia)
delta_V += 0.5 * diff * np.cos(Ib)
else:
raise Exception("Method must be one of 'aa', 'bt', 'mc'")
if method == 'mc':
_x = np.sin(Ib) * (1 - np.cos(Ab - Aa))
dogleg = np.arccos(np.cos(Ib - Ia) - np.sin(Ia) * _x)
dogleg[dogleg == 0] = 1e-9
rf = 2 / dogleg * np.tan(dogleg / 2) # ratio factor
rf[np.isnan(rf)] = 1 # Adjust for NaN.
delta_N *= rf
delta_E *= rf
delta_V *= rf
# Prepare the output array.
        result = np.zeros_like(deviation, dtype=float)
# Stack the results, add the surface.
_offsets = np.squeeze(np.dstack([delta_N, delta_E, delta_V]))
_offsets = np.vstack([np.array([0, 0, 0]), _offsets])
result += _offsets.cumsum(axis=0)
if update_deviation:
self.deviation = deviation
self.position = result
return |
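A single minimum-curvature step between two stations can be sketched scalar-style with the stdlib `math` module (angles in degrees); `mc_step` is an illustrative name for this simplified version of the vectorised maths above:

```python
from math import radians, sin, cos, tan, acos

def mc_step(md1, inc1, azi1, md2, inc2, azi2):
    # One minimum-curvature step: balanced-tangential deltas scaled by the
    # ratio factor derived from the dogleg angle.
    dm = md2 - md1
    Ia, Aa, Ib, Ab = radians(inc1), radians(azi1), radians(inc2), radians(azi2)
    dN = 0.5 * dm * (sin(Ia) * cos(Aa) + sin(Ib) * cos(Ab))
    dE = 0.5 * dm * (sin(Ia) * sin(Aa) + sin(Ib) * sin(Ab))
    dV = 0.5 * dm * (cos(Ia) + cos(Ib))
    dogleg = acos(cos(Ib - Ia) - sin(Ia) * sin(Ib) * (1 - cos(Ab - Aa)))
    rf = 1.0 if dogleg == 0 else 2 / dogleg * tan(dogleg / 2)  # ratio factor
    return dN * rf, dE * rf, dV * rf

# A perfectly vertical 100-unit interval: all displacement is vertical.
print(mc_step(0, 0, 0, 100, 0, 0))  # -> (0.0, 0.0, 100.0)
```

For a quarter-circle build from vertical to horizontal over 100 units of MD, the northing and TVD each come out as 200/π, which is the radius of that arc.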
def format_table(rows, sep=' '):
"""Format table
:param sep: separator between columns
:type sep: unicode on python2 | str on python3
Given the table::
table = [
['foo', 'bar', 'foo'],
[1, 2, 3],
['54a5a05d-c83b-4bb5-bd95-d90d6ea4a878'],
['foo', 45, 'bar', 2345]
]
`format_table` will return::
foo bar foo
1 2 3
54a5a05d-c83b-4bb5-bd95-d90d6ea4a878
foo 45 bar 2345
"""
max_col_length = [0] * 100
# calculate max length for each col
for row in rows:
for index, (col, length) in enumerate(zip(row, max_col_length)):
if len(text_type(col)) > length:
max_col_length[index] = len(text_type(col))
formated_rows = []
for row in rows:
format_str = sep.join([
'{:<%s}' % l if i < (len(row) - 1) else '{}'
for i, (c, l) in enumerate(zip(row, max_col_length))
])
formated_rows.append(format_str.format(*row))
return '\n'.join(formated_rows) | Format table
:param sep: separator between columns
:type sep: unicode on python2 | str on python3
Given the table::
table = [
['foo', 'bar', 'foo'],
[1, 2, 3],
['54a5a05d-c83b-4bb5-bd95-d90d6ea4a878'],
['foo', 45, 'bar', 2345]
]
`format_table` will return::
foo bar foo
1 2 3
54a5a05d-c83b-4bb5-bd95-d90d6ea4a878
    foo 45 bar 2345 | Below is the instruction that describes the task:
### Input:
Format table
:param sep: separator between columns
:type sep: unicode on python2 | str on python3
Given the table::
table = [
['foo', 'bar', 'foo'],
[1, 2, 3],
['54a5a05d-c83b-4bb5-bd95-d90d6ea4a878'],
['foo', 45, 'bar', 2345]
]
`format_table` will return::
foo bar foo
1 2 3
54a5a05d-c83b-4bb5-bd95-d90d6ea4a878
foo 45 bar 2345
### Response:
def format_table(rows, sep=' '):
"""Format table
:param sep: separator between columns
:type sep: unicode on python2 | str on python3
Given the table::
table = [
['foo', 'bar', 'foo'],
[1, 2, 3],
['54a5a05d-c83b-4bb5-bd95-d90d6ea4a878'],
['foo', 45, 'bar', 2345]
]
`format_table` will return::
foo bar foo
1 2 3
54a5a05d-c83b-4bb5-bd95-d90d6ea4a878
foo 45 bar 2345
"""
max_col_length = [0] * 100
# calculate max length for each col
for row in rows:
for index, (col, length) in enumerate(zip(row, max_col_length)):
if len(text_type(col)) > length:
max_col_length[index] = len(text_type(col))
formated_rows = []
for row in rows:
format_str = sep.join([
'{:<%s}' % l if i < (len(row) - 1) else '{}'
for i, (c, l) in enumerate(zip(row, max_col_length))
])
formated_rows.append(format_str.format(*row))
return '\n'.join(formated_rows) |
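The column-sizing idea can be sketched with `str.ljust` and widths computed per table instead of a fixed 100-entry list; `format_table_plain` is an illustrative name:

```python
def format_table_plain(rows, sep=' '):
    # First pass: widest cell per column index.
    widths = {}
    for row in rows:
        for i, col in enumerate(row):
            widths[i] = max(widths.get(i, 0), len(str(col)))
    # Second pass: pad every cell but the last in each row.
    out = []
    for row in rows:
        cells = [str(c).ljust(widths[i]) if i < len(row) - 1 else str(c)
                 for i, c in enumerate(row)]
        out.append(sep.join(cells))
    return '\n'.join(out)

print(format_table_plain([['foo', 'bar'], [1, 23456, 7]]))
```

Leaving the final cell of each row unpadded avoids trailing whitespace, matching the behaviour of the function above.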
def get_publication_date(self, xml_doc):
"""Return the best effort start_date."""
start_date = get_value_in_tag(xml_doc, "prism:coverDate")
if not start_date:
start_date = get_value_in_tag(xml_doc, "prism:coverDisplayDate")
if not start_date:
start_date = get_value_in_tag(xml_doc, 'oa:openAccessEffective')
        if start_date:
            try:
                start_date = datetime.datetime.strptime(
                    start_date, "%Y-%m-%dT%H:%M:%SZ"
                )
                return start_date.strftime("%Y-%m-%d")
            except ValueError:
                pass
import dateutil.parser
            # dateutil.parser.parse can't process dates like April-June 2016
            start_date = re.sub(r'([A-Z][a-z]+)[\s\-][A-Z][a-z]+ (\d{4})',
                                r'\1 \2', start_date)
try:
date = dateutil.parser.parse(start_date)
except ValueError:
return ''
            # Special case where we ignore the deduced day from dateutil
# in case it was not given in the first place.
if len(start_date.split(" ")) == 3:
return date.strftime("%Y-%m-%d")
else:
return date.strftime("%Y-%m")
        else:
            if len(start_date) == 8:
                start_date = time.strftime(
                    '%Y-%m-%d', time.strptime(start_date, '%Y%m%d'))
            elif len(start_date) == 6:
                start_date = time.strftime(
                    '%Y-%m', time.strptime(start_date, '%Y%m'))
        return start_date | Return the best effort start_date. | Below is the instruction that describes the task:
### Input:
Return the best effort start_date.
### Response:
def get_publication_date(self, xml_doc):
"""Return the best effort start_date."""
start_date = get_value_in_tag(xml_doc, "prism:coverDate")
if not start_date:
start_date = get_value_in_tag(xml_doc, "prism:coverDisplayDate")
if not start_date:
start_date = get_value_in_tag(xml_doc, 'oa:openAccessEffective')
        if start_date:
            try:
                start_date = datetime.datetime.strptime(
                    start_date, "%Y-%m-%dT%H:%M:%SZ"
                )
                return start_date.strftime("%Y-%m-%d")
            except ValueError:
                pass
import dateutil.parser
            # dateutil.parser.parse can't process dates like April-June 2016
            start_date = re.sub(r'([A-Z][a-z]+)[\s\-][A-Z][a-z]+ (\d{4})',
                                r'\1 \2', start_date)
try:
date = dateutil.parser.parse(start_date)
except ValueError:
return ''
            # Special case where we ignore the deduced day from dateutil
# in case it was not given in the first place.
if len(start_date.split(" ")) == 3:
return date.strftime("%Y-%m-%d")
else:
return date.strftime("%Y-%m")
        else:
            if len(start_date) == 8:
                start_date = time.strftime(
                    '%Y-%m-%d', time.strptime(start_date, '%Y%m%d'))
            elif len(start_date) == 6:
                start_date = time.strftime(
                    '%Y-%m', time.strptime(start_date, '%Y%m'))
return start_date |
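The fallback chain (ISO timestamp, then bare `YYYYMMDD`/`YYYYMM`) can be sketched with the stdlib alone, leaving out the dateutil branch; `best_effort_date` is an illustrative name:

```python
import datetime
import time

def best_effort_date(raw):
    # Try the full ISO timestamp first, then the compact numeric forms.
    try:
        return datetime.datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ").strftime("%Y-%m-%d")
    except ValueError:
        pass
    if len(raw) == 8:
        return time.strftime('%Y-%m-%d', time.strptime(raw, '%Y%m%d'))
    if len(raw) == 6:
        return time.strftime('%Y-%m', time.strptime(raw, '%Y%m'))
    return raw

print(best_effort_date("2016-04-01T00:00:00Z"))  # -> 2016-04-01
print(best_effort_date("20160401"))              # -> 2016-04-01
print(best_effort_date("201604"))                # -> 2016-04
```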
def _is_raising(body: typing.List) -> bool:
"""Return true if the given statement node raise an exception"""
for node in body:
if isinstance(node, astroid.Raise):
return True
    return False | Return true if the given statement node raises an exception | Below is the instruction that describes the task:
### Input:
Return true if the given statement node raises an exception
### Response:
def _is_raising(body: typing.List) -> bool:
"""Return true if the given statement node raise an exception"""
for node in body:
if isinstance(node, astroid.Raise):
return True
return False |
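The stdlib `ast` module exposes the same node check as astroid for this purpose; a minimal sketch (assuming the source holds a single top-level function):

```python
import ast

def body_raises(source):
    # Parse the source and inspect the top-level function's direct body
    # for a Raise statement, mirroring the astroid check above.
    func = ast.parse(source).body[0]
    return any(isinstance(node, ast.Raise) for node in func.body)

print(body_raises("def f():\n    raise ValueError('boom')"))  # -> True
print(body_raises("def f():\n    return 1"))                  # -> False
```

As in the original, only direct children of the body are inspected, so a `raise` nested inside an `if` block would not be detected.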
def get_index_text(self, modname, name_cls):
"""Return text for index entry based on object type."""
if self.objtype.endswith('function'):
if not modname:
return _('%s() (built-in %s)') % \
(name_cls[0], self.chpl_type_name)
return _('%s() (in module %s)') % (name_cls[0], modname)
elif self.objtype in ('data', 'type', 'enum'):
if not modname:
type_name = self.objtype
if type_name == 'data':
type_name = 'variable'
return _('%s (built-in %s)') % (name_cls[0], type_name)
return _('%s (in module %s)') % (name_cls[0], modname)
else:
        return '' | Return text for index entry based on object type. | Below is the instruction that describes the task:
### Input:
Return text for index entry based on object type.
### Response:
def get_index_text(self, modname, name_cls):
"""Return text for index entry based on object type."""
if self.objtype.endswith('function'):
if not modname:
return _('%s() (built-in %s)') % \
(name_cls[0], self.chpl_type_name)
return _('%s() (in module %s)') % (name_cls[0], modname)
elif self.objtype in ('data', 'type', 'enum'):
if not modname:
type_name = self.objtype
if type_name == 'data':
type_name = 'variable'
return _('%s (built-in %s)') % (name_cls[0], type_name)
return _('%s (in module %s)') % (name_cls[0], modname)
else:
return '' |
def get_weather_code(self, ip):
''' Get weather_code '''
rec = self.get_all(ip)
    return rec and rec.weather_code | Get weather_code | Below is the instruction that describes the task:
### Input:
Get weather_code
### Response:
def get_weather_code(self, ip):
''' Get weather_code '''
rec = self.get_all(ip)
return rec and rec.weather_code |
def get_nodes(api_url=None, verify=False, cert=list()):
"""
Returns info for all Nodes
:param api_url: Base PuppetDB API url
"""
return utils._make_api_request(api_url, '/nodes', verify, cert) | Returns info for all Nodes
    :param api_url: Base PuppetDB API url | Below is the instruction that describes the task:
### Input:
Returns info for all Nodes
:param api_url: Base PuppetDB API url
### Response:
def get_nodes(api_url=None, verify=False, cert=list()):
"""
Returns info for all Nodes
:param api_url: Base PuppetDB API url
"""
return utils._make_api_request(api_url, '/nodes', verify, cert) |
def init_session(self, get_token=True):
"""
init a new oauth2 session that is required to access the cloud
:param bool get_token: if True, a token will be obtained, after
the session has been created
"""
if (self._client_id is None) or (self._client_secret is None):
sys.exit(
"Please make sure to set the client id and client secret "
"via the constructor, the environment variables or the config "
"file; otherwise, the LaMetric cloud cannot be accessed. "
"Abort!"
)
self._session = OAuth2Session(
client=BackendApplicationClient(client_id=self._client_id)
)
if get_token is True:
# get oauth token
self.get_token() | init a new oauth2 session that is required to access the cloud
:param bool get_token: if True, a token will be obtained, after
        the session has been created | Below is the instruction that describes the task:
### Input:
init a new oauth2 session that is required to access the cloud
:param bool get_token: if True, a token will be obtained, after
the session has been created
### Response:
def init_session(self, get_token=True):
"""
init a new oauth2 session that is required to access the cloud
:param bool get_token: if True, a token will be obtained, after
the session has been created
"""
if (self._client_id is None) or (self._client_secret is None):
sys.exit(
"Please make sure to set the client id and client secret "
"via the constructor, the environment variables or the config "
"file; otherwise, the LaMetric cloud cannot be accessed. "
"Abort!"
)
self._session = OAuth2Session(
client=BackendApplicationClient(client_id=self._client_id)
)
if get_token is True:
# get oauth token
self.get_token() |
def trocar_codigo_de_ativacao(self, novo_codigo_ativacao,
opcao=constantes.CODIGO_ATIVACAO_REGULAR,
codigo_emergencia=None):
"""Sobrepõe :meth:`~satcfe.base.FuncoesSAT.trocar_codigo_de_ativacao`.
:return: Uma resposta SAT padrão.
:rtype: satcfe.resposta.padrao.RespostaSAT
"""
resp = self._http_post('trocarcodigodeativacao',
novo_codigo_ativacao=novo_codigo_ativacao,
opcao=opcao,
codigo_emergencia=codigo_emergencia)
conteudo = resp.json()
        return RespostaSAT.trocar_codigo_de_ativacao(conteudo.get('retorno')) | Overrides :meth:`~satcfe.base.FuncoesSAT.trocar_codigo_de_ativacao`.
    :return: A standard SAT response.
    :rtype: satcfe.resposta.padrao.RespostaSAT | Below is the instruction that describes the task:
### Input:
Overrides :meth:`~satcfe.base.FuncoesSAT.trocar_codigo_de_ativacao`.
    :return: A standard SAT response.
:rtype: satcfe.resposta.padrao.RespostaSAT
### Response:
def trocar_codigo_de_ativacao(self, novo_codigo_ativacao,
opcao=constantes.CODIGO_ATIVACAO_REGULAR,
codigo_emergencia=None):
"""Sobrepõe :meth:`~satcfe.base.FuncoesSAT.trocar_codigo_de_ativacao`.
:return: Uma resposta SAT padrão.
:rtype: satcfe.resposta.padrao.RespostaSAT
"""
resp = self._http_post('trocarcodigodeativacao',
novo_codigo_ativacao=novo_codigo_ativacao,
opcao=opcao,
codigo_emergencia=codigo_emergencia)
conteudo = resp.json()
return RespostaSAT.trocar_codigo_de_ativacao(conteudo.get('retorno')) |
def check(self, version):
"""Check that a version is inside this SemanticVersionRange
Args:
version (SemanticVersion): The version to check
Returns:
bool: True if the version is included in the range, False if not
"""
for disjunct in self._disjuncts:
if self._check_insersection(version, disjunct):
return True
return False | Check that a version is inside this SemanticVersionRange
Args:
version (SemanticVersion): The version to check
Returns:
        bool: True if the version is included in the range, False if not | Below is the instruction that describes the task:
### Input:
Check that a version is inside this SemanticVersionRange
Args:
version (SemanticVersion): The version to check
Returns:
bool: True if the version is included in the range, False if not
### Response:
def check(self, version):
"""Check that a version is inside this SemanticVersionRange
Args:
version (SemanticVersion): The version to check
Returns:
bool: True if the version is included in the range, False if not
"""
for disjunct in self._disjuncts:
if self._check_insersection(version, disjunct):
return True
return False |
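A stripped-down sketch of the disjunct loop, with tuples of ints standing in for `SemanticVersion` and half-open `(low, high)` pairs standing in for disjuncts (all names illustrative):

```python
def parse_version(text):
    # Illustrative stand-in for SemanticVersion: a comparable (major, minor, patch) tuple.
    return tuple(int(part) for part in text.split("."))

def in_disjunct(version, low, high):
    # One conjunct of the intersection check: low <= version < high.
    return parse_version(low) <= parse_version(version) < parse_version(high)

def check(version, disjuncts):
    # Mirrors the loop above: the range matches if any disjunct does.
    return any(in_disjunct(version, lo, hi) for lo, hi in disjuncts)

rng = [("1.0.0", "2.0.0"), ("3.0.0", "3.1.0")]
print(check("1.4.2", rng))  # -> True
print(check("2.5.0", rng))  # -> False
```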
def robust_std(x, debug=False):
"""Compute a robust estimator of the standard deviation
    See Eq. 3.36 (page 84) in Statistics, Data Mining, and Machine Learning
in Astronomy, by Ivezic, Connolly, VanderPlas & Gray
Parameters
----------
x : 1d numpy array, float
Array of input values which standard deviation is requested.
debug : bool
If True prints computed values
Returns
-------
sigmag : float
        Robust estimator of the standard deviation
"""
x = numpy.asarray(x)
# compute percentiles and robust estimator
q25 = numpy.percentile(x, 25)
q75 = numpy.percentile(x, 75)
sigmag = 0.7413 * (q75 - q25)
if debug:
print('debug|sigmag -> q25......................:', q25)
print('debug|sigmag -> q75......................:', q75)
print('debug|sigmag -> Robust standard deviation:', sigmag)
return sigmag | Compute a robust estimator of the standard deviation
    See Eq. 3.36 (page 84) in Statistics, Data Mining, and Machine Learning
in Astronomy, by Ivezic, Connolly, VanderPlas & Gray
Parameters
----------
x : 1d numpy array, float
Array of input values which standard deviation is requested.
debug : bool
If True prints computed values
Returns
-------
sigmag : float
        Robust estimator of the standard deviation | Below is the instruction that describes the task:
### Input:
Compute a robust estimator of the standard deviation
    See Eq. 3.36 (page 84) in Statistics, Data Mining, and Machine Learning
in Astronomy, by Ivezic, Connolly, VanderPlas & Gray
Parameters
----------
x : 1d numpy array, float
Array of input values which standard deviation is requested.
debug : bool
If True prints computed values
Returns
-------
sigmag : float
        Robust estimator of the standard deviation
### Response:
def robust_std(x, debug=False):
"""Compute a robust estimator of the standard deviation
    See Eq. 3.36 (page 84) in Statistics, Data Mining, and Machine Learning
in Astronomy, by Ivezic, Connolly, VanderPlas & Gray
Parameters
----------
x : 1d numpy array, float
Array of input values which standard deviation is requested.
debug : bool
If True prints computed values
Returns
-------
sigmag : float
        Robust estimator of the standard deviation
"""
x = numpy.asarray(x)
# compute percentiles and robust estimator
q25 = numpy.percentile(x, 25)
q75 = numpy.percentile(x, 75)
sigmag = 0.7413 * (q75 - q25)
if debug:
print('debug|sigmag -> q25......................:', q25)
print('debug|sigmag -> q75......................:', q75)
print('debug|sigmag -> Robust standard deviation:', sigmag)
return sigmag |
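The sigma_G = 0.7413 * (q75 - q25) estimator can be reproduced without numpy by interpolating percentiles by hand (matching numpy's default linear interpolation); `robust_std_plain` is an illustrative name:

```python
def robust_std_plain(values):
    # Sort once, then interpolate each percentile at rank (n - 1) * p / 100,
    # which is numpy.percentile's default linear method.
    xs = sorted(values)

    def percentile(p):
        rank = (len(xs) - 1) * p / 100.0
        lo = int(rank)
        frac = rank - lo
        return xs[lo] if frac == 0 else xs[lo] + frac * (xs[lo + 1] - xs[lo])

    return 0.7413 * (percentile(75) - percentile(25))

print(robust_std_plain([1, 2, 3, 4, 5]))  # -> 0.7413 * (4 - 2) = 1.4826
```

Because it depends only on the quartiles, the estimate is unchanged by a handful of extreme outliers, unlike the ordinary standard deviation.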
def _find_im_paths(self, subj_mp, obs_name, target_polarity,
max_paths=1, max_path_length=5):
"""Check for a source/target path in the influence map.
Parameters
----------
subj_mp : pysb.MonomerPattern
MonomerPattern corresponding to the subject of the Statement
being checked.
obs_name : str
Name of the PySB model Observable corresponding to the
object/target of the Statement being checked.
target_polarity : int
Whether the influence in the Statement is positive (1) or negative
(-1).
Returns
-------
PathResult
PathResult object indicating the results of the attempt to find
a path.
"""
logger.info(('Running path finding with max_paths=%d,'
' max_path_length=%d') % (max_paths, max_path_length))
# Find rules in the model corresponding to the input
if subj_mp is None:
input_rule_set = None
else:
input_rule_set = self._get_input_rules(subj_mp)
if not input_rule_set:
return PathResult(False, 'INPUT_RULES_NOT_FOUND',
max_paths, max_path_length)
logger.info('Checking path metrics between %s and %s with polarity %s' %
(subj_mp, obs_name, target_polarity))
# -- Route to the path sampling function --
if self.do_sampling:
if not has_pg:
raise Exception('The paths_graph package could not be '
'imported.')
return self._sample_paths(input_rule_set, obs_name, target_polarity,
max_paths, max_path_length)
# -- Do Breadth-First Enumeration --
# Generate the predecessors to our observable and count the paths
path_lengths = []
path_metrics = []
for source, polarity, path_length in \
_find_sources(self.get_im(), obs_name, input_rule_set,
target_polarity):
pm = PathMetric(source, obs_name, polarity, path_length)
path_metrics.append(pm)
path_lengths.append(path_length)
logger.info('Finding paths between %s and %s with polarity %s' %
(subj_mp, obs_name, target_polarity))
# Now, look for paths
paths = []
if path_metrics and max_paths == 0:
pr = PathResult(True, 'MAX_PATHS_ZERO',
max_paths, max_path_length)
pr.path_metrics = path_metrics
return pr
elif path_metrics:
if min(path_lengths) <= max_path_length:
pr = PathResult(True, 'PATHS_FOUND', max_paths, max_path_length)
pr.path_metrics = path_metrics
# Get the first path
path_iter = enumerate(_find_sources_with_paths(
self.get_im(), obs_name,
input_rule_set, target_polarity))
for path_ix, path in path_iter:
flipped = _flip(self.get_im(), path)
pr.add_path(flipped)
if len(pr.paths) >= max_paths:
break
return pr
# There are no paths shorter than the max path length, so we
# don't bother trying to get them
else:
pr = PathResult(True, 'MAX_PATH_LENGTH_EXCEEDED',
max_paths, max_path_length)
pr.path_metrics = path_metrics
return pr
else:
return PathResult(False, 'NO_PATHS_FOUND',
max_paths, max_path_length) | Check for a source/target path in the influence map.
Parameters
----------
subj_mp : pysb.MonomerPattern
MonomerPattern corresponding to the subject of the Statement
being checked.
obs_name : str
Name of the PySB model Observable corresponding to the
object/target of the Statement being checked.
target_polarity : int
Whether the influence in the Statement is positive (1) or negative
(-1).
Returns
-------
PathResult
PathResult object indicating the results of the attempt to find
        a path. | Below is the instruction that describes the task:
### Input:
Check for a source/target path in the influence map.
Parameters
----------
subj_mp : pysb.MonomerPattern
MonomerPattern corresponding to the subject of the Statement
being checked.
obs_name : str
Name of the PySB model Observable corresponding to the
object/target of the Statement being checked.
target_polarity : int
Whether the influence in the Statement is positive (1) or negative
(-1).
Returns
-------
PathResult
PathResult object indicating the results of the attempt to find
a path.
### Response:
def _find_im_paths(self, subj_mp, obs_name, target_polarity,
max_paths=1, max_path_length=5):
"""Check for a source/target path in the influence map.
Parameters
----------
subj_mp : pysb.MonomerPattern
MonomerPattern corresponding to the subject of the Statement
being checked.
obs_name : str
Name of the PySB model Observable corresponding to the
object/target of the Statement being checked.
target_polarity : int
Whether the influence in the Statement is positive (1) or negative
(-1).
Returns
-------
PathResult
PathResult object indicating the results of the attempt to find
a path.
"""
logger.info(('Running path finding with max_paths=%d,'
' max_path_length=%d') % (max_paths, max_path_length))
# Find rules in the model corresponding to the input
if subj_mp is None:
input_rule_set = None
else:
input_rule_set = self._get_input_rules(subj_mp)
if not input_rule_set:
return PathResult(False, 'INPUT_RULES_NOT_FOUND',
max_paths, max_path_length)
logger.info('Checking path metrics between %s and %s with polarity %s' %
(subj_mp, obs_name, target_polarity))
# -- Route to the path sampling function --
if self.do_sampling:
if not has_pg:
raise Exception('The paths_graph package could not be '
'imported.')
return self._sample_paths(input_rule_set, obs_name, target_polarity,
max_paths, max_path_length)
# -- Do Breadth-First Enumeration --
# Generate the predecessors to our observable and count the paths
path_lengths = []
path_metrics = []
for source, polarity, path_length in \
_find_sources(self.get_im(), obs_name, input_rule_set,
target_polarity):
pm = PathMetric(source, obs_name, polarity, path_length)
path_metrics.append(pm)
path_lengths.append(path_length)
logger.info('Finding paths between %s and %s with polarity %s' %
(subj_mp, obs_name, target_polarity))
# Now, look for paths
paths = []
if path_metrics and max_paths == 0:
pr = PathResult(True, 'MAX_PATHS_ZERO',
max_paths, max_path_length)
pr.path_metrics = path_metrics
return pr
elif path_metrics:
if min(path_lengths) <= max_path_length:
pr = PathResult(True, 'PATHS_FOUND', max_paths, max_path_length)
pr.path_metrics = path_metrics
# Get the first path
path_iter = enumerate(_find_sources_with_paths(
self.get_im(), obs_name,
input_rule_set, target_polarity))
for path_ix, path in path_iter:
flipped = _flip(self.get_im(), path)
pr.add_path(flipped)
if len(pr.paths) >= max_paths:
break
return pr
# There are no paths shorter than the max path length, so we
# don't bother trying to get them
else:
pr = PathResult(True, 'MAX_PATH_LENGTH_EXCEEDED',
max_paths, max_path_length)
pr.path_metrics = path_metrics
return pr
else:
return PathResult(False, 'NO_PATHS_FOUND',
max_paths, max_path_length) |
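The record above enumerates paths between a source rule and an observable while tracking the sign (polarity) of each path. A minimal standalone sketch of that polarity-tracking breadth-first search — function and variable names here are mine, not from the record:

```python
from collections import deque

def shortest_signed_paths(edges, source, target):
    """Breadth-first search over a signed influence graph, tracking the
    cumulative polarity (product of edge signs) along each path.

    `edges` maps a node to a list of (successor, sign) pairs with sign
    in {+1, -1}.  Returns {polarity: shortest_path_length} for every
    polarity with which `target` is reachable from `source`.
    """
    best = {}
    seen = set()
    queue = deque([(source, 1, 0)])  # (node, polarity so far, depth)
    while queue:
        node, pol, depth = queue.popleft()
        if (node, pol) in seen:
            continue
        seen.add((node, pol))
        if node == target and depth > 0 and pol not in best:
            best[pol] = depth
        for succ, sign in edges.get(node, []):
            queue.append((succ, pol * sign, depth + 1))
    return best

# A -> B activates, B -> C inhibits, A -> C inhibits directly.
edges = {"A": [("B", 1), ("C", -1)], "B": [("C", -1)]}
print(shortest_signed_paths(edges, "A", "C"))  # {-1: 1}
```

Because BFS visits nodes in order of depth, the first time a (target, polarity) pair is dequeued is guaranteed to be the shortest path with that polarity.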
def _add_spanning_relation(self, source, target):
"""add a spanning relation to this docgraph"""
self.add_edge(source, target, layers={self.ns, self.ns+':unit'},
                      edge_type=EdgeTypes.spanning_relation) | add a spanning relation to this docgraph | Below is the instruction that describes the task:
### Input:
add a spanning relation to this docgraph
### Response:
def _add_spanning_relation(self, source, target):
"""add a spanning relation to this docgraph"""
self.add_edge(source, target, layers={self.ns, self.ns+':unit'},
edge_type=EdgeTypes.spanning_relation) |
def Prl(self):
r'''Prandtl number of the liquid phase of the chemical at its
current temperature and pressure, [dimensionless].
.. math::
Pr = \frac{C_p \mu}{k}
Utilizes the temperature and pressure dependent object oriented
interfaces :obj:`thermo.viscosity.ViscosityLiquid`,
:obj:`thermo.thermal_conductivity.ThermalConductivityLiquid`,
and :obj:`thermo.heat_capacity.HeatCapacityLiquid` to calculate the
actual properties.
Examples
--------
>>> Chemical('nitrogen', T=70).Prl
2.7828214501488886
'''
Cpl, mul, kl = self.Cpl, self.mul, self.kl
if all([Cpl, mul, kl]):
return Prandtl(Cp=Cpl, mu=mul, k=kl)
return None | r'''Prandtl number of the liquid phase of the chemical at its
current temperature and pressure, [dimensionless].
.. math::
Pr = \frac{C_p \mu}{k}
Utilizes the temperature and pressure dependent object oriented
interfaces :obj:`thermo.viscosity.ViscosityLiquid`,
:obj:`thermo.thermal_conductivity.ThermalConductivityLiquid`,
and :obj:`thermo.heat_capacity.HeatCapacityLiquid` to calculate the
actual properties.
Examples
--------
>>> Chemical('nitrogen', T=70).Prl
    2.7828214501488886 | Below is the instruction that describes the task:
### Input:
r'''Prandtl number of the liquid phase of the chemical at its
current temperature and pressure, [dimensionless].
.. math::
Pr = \frac{C_p \mu}{k}
Utilizes the temperature and pressure dependent object oriented
interfaces :obj:`thermo.viscosity.ViscosityLiquid`,
:obj:`thermo.thermal_conductivity.ThermalConductivityLiquid`,
and :obj:`thermo.heat_capacity.HeatCapacityLiquid` to calculate the
actual properties.
Examples
--------
>>> Chemical('nitrogen', T=70).Prl
2.7828214501488886
### Response:
def Prl(self):
r'''Prandtl number of the liquid phase of the chemical at its
current temperature and pressure, [dimensionless].
.. math::
Pr = \frac{C_p \mu}{k}
Utilizes the temperature and pressure dependent object oriented
interfaces :obj:`thermo.viscosity.ViscosityLiquid`,
:obj:`thermo.thermal_conductivity.ThermalConductivityLiquid`,
and :obj:`thermo.heat_capacity.HeatCapacityLiquid` to calculate the
actual properties.
Examples
--------
>>> Chemical('nitrogen', T=70).Prl
2.7828214501488886
'''
Cpl, mul, kl = self.Cpl, self.mul, self.kl
if all([Cpl, mul, kl]):
return Prandtl(Cp=Cpl, mu=mul, k=kl)
return None |
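The `Prl` property above combines heat capacity, viscosity, and thermal conductivity via Pr = C_p μ / k. A minimal standalone version of that formula — the function name and the sample property values are illustrative, not taken from the thermo library:

```python
def prandtl(Cp, mu, k):
    """Prandtl number Pr = Cp * mu / k, the ratio of momentum
    diffusivity to thermal diffusivity (inputs in consistent SI units)."""
    return Cp * mu / k

# Illustrative liquid-water-like values near room temperature:
# Cp ~ 4180 J/(kg*K), mu ~ 1.0e-3 Pa*s, k ~ 0.6 W/(m*K)
Pr = prandtl(Cp=4180.0, mu=1.0e-3, k=0.6)
print(round(Pr, 2))  # 6.97
```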
def process_nxml_str(nxml_str, citation=None, offline=False,
output_fname=default_output_fname):
"""Return a ReachProcessor by processing the given NXML string.
NXML is the format used by PubmedCentral for papers in the open
access subset.
Parameters
----------
nxml_str : str
The NXML string to be processed.
citation : Optional[str]
A PubMed ID passed to be used in the evidence for the extracted INDRA
Statements. Default: None
offline : Optional[bool]
        If set to True, the REACH system is run offline. Otherwise (by default)
the web service is called. Default: False
output_fname : Optional[str]
The file to output the REACH JSON output to.
Defaults to reach_output.json in current working directory.
Returns
-------
rp : ReachProcessor
A ReachProcessor containing the extracted INDRA Statements
in rp.statements.
"""
if offline:
if not try_offline:
logger.error('Offline reading is not available.')
return None
try:
api_ruler = reach_reader.get_api_ruler()
except ReachOfflineReadingError as e:
logger.error(e)
logger.error('Cannot read offline because the REACH ApiRuler '
'could not be instantiated.')
return None
try:
result_map = api_ruler.annotateNxml(nxml_str, 'fries')
except JavaException as e:
logger.error('Could not process NXML.')
logger.error(e)
return None
# REACH version < 1.3.3
json_str = result_map.get('resultJson')
if not json_str:
# REACH version >= 1.3.3
json_str = result_map.get('result')
if json_str is None:
logger.warning('No results retrieved')
return None
if isinstance(json_str, bytes):
json_str = json_str.decode('utf-8')
return process_json_str(json_str, citation)
else:
data = {'nxml': nxml_str}
try:
res = requests.post(reach_nxml_url, data)
except requests.exceptions.RequestException as e:
logger.error('Could not connect to REACH service:')
logger.error(e)
return None
if res.status_code != 200:
            logger.error('Could not process NXML via REACH service. '
                + 'Status code: %d' % res.status_code)
return None
json_str = res.text
with open(output_fname, 'wb') as fh:
fh.write(json_str.encode('utf-8'))
return process_json_str(json_str, citation) | Return a ReachProcessor by processing the given NXML string.
NXML is the format used by PubmedCentral for papers in the open
access subset.
Parameters
----------
nxml_str : str
The NXML string to be processed.
citation : Optional[str]
A PubMed ID passed to be used in the evidence for the extracted INDRA
Statements. Default: None
offline : Optional[bool]
    If set to True, the REACH system is run offline. Otherwise (by default)
the web service is called. Default: False
output_fname : Optional[str]
The file to output the REACH JSON output to.
Defaults to reach_output.json in current working directory.
Returns
-------
rp : ReachProcessor
A ReachProcessor containing the extracted INDRA Statements
        in rp.statements. | Below is the instruction that describes the task:
### Input:
Return a ReachProcessor by processing the given NXML string.
NXML is the format used by PubmedCentral for papers in the open
access subset.
Parameters
----------
nxml_str : str
The NXML string to be processed.
citation : Optional[str]
A PubMed ID passed to be used in the evidence for the extracted INDRA
Statements. Default: None
offline : Optional[bool]
    If set to True, the REACH system is run offline. Otherwise (by default)
the web service is called. Default: False
output_fname : Optional[str]
The file to output the REACH JSON output to.
Defaults to reach_output.json in current working directory.
Returns
-------
rp : ReachProcessor
A ReachProcessor containing the extracted INDRA Statements
in rp.statements.
### Response:
def process_nxml_str(nxml_str, citation=None, offline=False,
output_fname=default_output_fname):
"""Return a ReachProcessor by processing the given NXML string.
NXML is the format used by PubmedCentral for papers in the open
access subset.
Parameters
----------
nxml_str : str
The NXML string to be processed.
citation : Optional[str]
A PubMed ID passed to be used in the evidence for the extracted INDRA
Statements. Default: None
offline : Optional[bool]
        If set to True, the REACH system is run offline. Otherwise (by default)
the web service is called. Default: False
output_fname : Optional[str]
The file to output the REACH JSON output to.
Defaults to reach_output.json in current working directory.
Returns
-------
rp : ReachProcessor
A ReachProcessor containing the extracted INDRA Statements
in rp.statements.
"""
if offline:
if not try_offline:
logger.error('Offline reading is not available.')
return None
try:
api_ruler = reach_reader.get_api_ruler()
except ReachOfflineReadingError as e:
logger.error(e)
logger.error('Cannot read offline because the REACH ApiRuler '
'could not be instantiated.')
return None
try:
result_map = api_ruler.annotateNxml(nxml_str, 'fries')
except JavaException as e:
logger.error('Could not process NXML.')
logger.error(e)
return None
# REACH version < 1.3.3
json_str = result_map.get('resultJson')
if not json_str:
# REACH version >= 1.3.3
json_str = result_map.get('result')
if json_str is None:
logger.warning('No results retrieved')
return None
if isinstance(json_str, bytes):
json_str = json_str.decode('utf-8')
return process_json_str(json_str, citation)
else:
data = {'nxml': nxml_str}
try:
res = requests.post(reach_nxml_url, data)
except requests.exceptions.RequestException as e:
logger.error('Could not connect to REACH service:')
logger.error(e)
return None
if res.status_code != 200:
logger.error('Could not process NXML via REACH service.'
+ 'Status code: %d' % res.status_code)
return None
json_str = res.text
with open(output_fname, 'wb') as fh:
fh.write(json_str.encode('utf-8'))
return process_json_str(json_str, citation) |
def connect(self, app_id, wait=True, security=None):
"""Connect to a running application.
Parameters
----------
app_id : str
The id of the application.
wait : bool, optional
If true [default], blocks until the application starts. If False,
will raise a ``ApplicationNotRunningError`` immediately if the
application isn't running.
security : Security, optional
The security configuration to use to communicate with the
application master. Defaults to the global configuration.
Returns
-------
app_client : ApplicationClient
Raises
------
ApplicationNotRunningError
If the application isn't running.
"""
if wait:
resp = self._call('waitForStart', proto.Application(id=app_id))
else:
resp = self._call('getStatus', proto.Application(id=app_id))
report = ApplicationReport.from_protobuf(resp)
if report.state is not ApplicationState.RUNNING:
raise ApplicationNotRunningError(
"%s is not running. Application state: "
"%s" % (app_id, report.state))
if security is None:
security = self.security
return ApplicationClient('%s:%d' % (report.host, report.port),
app_id,
security=security) | Connect to a running application.
Parameters
----------
app_id : str
The id of the application.
wait : bool, optional
If true [default], blocks until the application starts. If False,
will raise a ``ApplicationNotRunningError`` immediately if the
application isn't running.
security : Security, optional
The security configuration to use to communicate with the
application master. Defaults to the global configuration.
Returns
-------
app_client : ApplicationClient
Raises
------
ApplicationNotRunningError
        If the application isn't running. | Below is the instruction that describes the task:
### Input:
Connect to a running application.
Parameters
----------
app_id : str
The id of the application.
wait : bool, optional
If true [default], blocks until the application starts. If False,
will raise a ``ApplicationNotRunningError`` immediately if the
application isn't running.
security : Security, optional
The security configuration to use to communicate with the
application master. Defaults to the global configuration.
Returns
-------
app_client : ApplicationClient
Raises
------
ApplicationNotRunningError
If the application isn't running.
### Response:
def connect(self, app_id, wait=True, security=None):
"""Connect to a running application.
Parameters
----------
app_id : str
The id of the application.
wait : bool, optional
If true [default], blocks until the application starts. If False,
will raise a ``ApplicationNotRunningError`` immediately if the
application isn't running.
security : Security, optional
The security configuration to use to communicate with the
application master. Defaults to the global configuration.
Returns
-------
app_client : ApplicationClient
Raises
------
ApplicationNotRunningError
If the application isn't running.
"""
if wait:
resp = self._call('waitForStart', proto.Application(id=app_id))
else:
resp = self._call('getStatus', proto.Application(id=app_id))
report = ApplicationReport.from_protobuf(resp)
if report.state is not ApplicationState.RUNNING:
raise ApplicationNotRunningError(
"%s is not running. Application state: "
"%s" % (app_id, report.state))
if security is None:
security = self.security
return ApplicationClient('%s:%d' % (report.host, report.port),
app_id,
security=security) |
def save(self):
"""Update the infoblox with new values for the specified object, or add
the values if it's a new object all together.
:raises: AssertionError
:raises: infoblox.exceptions.ProtocolError
"""
if 'save' not in self._supports:
raise AssertionError('Can not save this object type')
values = {}
for key in [key for key in self.keys() if key not in self._save_ignore]:
if not getattr(self, key) and getattr(self, key) != False:
continue
if isinstance(getattr(self, key, None), list):
value = list()
for item in getattr(self, key):
if isinstance(item, dict):
value.append(item)
elif hasattr(item, '_save_as'):
value.append(item._save_as())
elif hasattr(item, '_ref') and getattr(item, '_ref'):
value.append(getattr(item, '_ref'))
else:
LOGGER.warning('Cant assign %r', item)
values[key] = value
elif getattr(self, key, None):
values[key] = getattr(self, key)
if not self._ref:
response = self._session.post(self._path, values)
else:
values['_ref'] = self._ref
response = self._session.put(self._path, values)
LOGGER.debug('Response: %r, %r', response.status_code, response.content)
if 200 <= response.status_code <= 201:
self.fetch()
return True
else:
try:
error = response.json()
raise exceptions.ProtocolError(error['text'])
except ValueError:
raise exceptions.ProtocolError(response.content) | Update the infoblox with new values for the specified object, or add
the values if it's a new object all together.
:raises: AssertionError
    :raises: infoblox.exceptions.ProtocolError | Below is the instruction that describes the task:
### Input:
Update the infoblox with new values for the specified object, or add
the values if it's a new object all together.
:raises: AssertionError
:raises: infoblox.exceptions.ProtocolError
### Response:
def save(self):
"""Update the infoblox with new values for the specified object, or add
the values if it's a new object all together.
:raises: AssertionError
:raises: infoblox.exceptions.ProtocolError
"""
if 'save' not in self._supports:
raise AssertionError('Can not save this object type')
values = {}
for key in [key for key in self.keys() if key not in self._save_ignore]:
if not getattr(self, key) and getattr(self, key) != False:
continue
if isinstance(getattr(self, key, None), list):
value = list()
for item in getattr(self, key):
if isinstance(item, dict):
value.append(item)
elif hasattr(item, '_save_as'):
value.append(item._save_as())
elif hasattr(item, '_ref') and getattr(item, '_ref'):
value.append(getattr(item, '_ref'))
else:
LOGGER.warning('Cant assign %r', item)
values[key] = value
elif getattr(self, key, None):
values[key] = getattr(self, key)
if not self._ref:
response = self._session.post(self._path, values)
else:
values['_ref'] = self._ref
response = self._session.put(self._path, values)
LOGGER.debug('Response: %r, %r', response.status_code, response.content)
if 200 <= response.status_code <= 201:
self.fetch()
return True
else:
try:
error = response.json()
raise exceptions.ProtocolError(error['text'])
except ValueError:
raise exceptions.ProtocolError(response.content) |
def bank_account_query(self, number, date, account_type, bank_id):
"""Bank account statement request"""
return self.authenticated_query(
self._bareq(number, date, account_type, bank_id)
    ) | Bank account statement request | Below is the instruction that describes the task:
### Input:
Bank account statement request
### Response:
def bank_account_query(self, number, date, account_type, bank_id):
"""Bank account statement request"""
return self.authenticated_query(
self._bareq(number, date, account_type, bank_id)
) |
def _download_progress_cb(blocknum, blocksize, totalsize):
"""Banana Banana"""
readsofar = blocknum * blocksize
if totalsize > 0:
percent = readsofar * 1e2 / totalsize
msg = "\r%5.1f%% %*d / %d" % (
percent, len(str(totalsize)), readsofar, totalsize)
print(msg)
if readsofar >= totalsize: # near the end
print("\n")
else: # total size is unknown
        print("read %d\n" % (readsofar,)) | Banana Banana | Below is the instruction that describes the task:
### Input:
Banana Banana
### Response:
def _download_progress_cb(blocknum, blocksize, totalsize):
"""Banana Banana"""
readsofar = blocknum * blocksize
if totalsize > 0:
percent = readsofar * 1e2 / totalsize
msg = "\r%5.1f%% %*d / %d" % (
percent, len(str(totalsize)), readsofar, totalsize)
print(msg)
if readsofar >= totalsize: # near the end
print("\n")
else: # total size is unknown
print("read %d\n" % (readsofar,)) |
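The callback above computes a percentage from block counts and pads the byte count to the width of the total. The same arithmetic as a pure function that returns the status line instead of printing it (the function name is mine):

```python
def progress_line(blocknum, blocksize, totalsize):
    """Return the status line the download callback would print, or a
    plain byte count when the total size is unknown (totalsize <= 0)."""
    readsofar = blocknum * blocksize
    if totalsize > 0:
        percent = readsofar * 1e2 / totalsize
        # %*d pads readsofar to the width of the printed total size
        return "%5.1f%% %*d / %d" % (
            percent, len(str(totalsize)), readsofar, totalsize)
    return "read %d" % readsofar

print(progress_line(5, 1024, 10240))  # ' 50.0%  5120 / 10240'
print(progress_line(3, 1024, 0))      # 'read 3072'
```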
def _Fgamma(self, x, Ep):
"""
KAB06 Eq.58
Note: Quantities are not used in this function
Parameters
----------
x : float
Egamma/Eprot
Ep : float
Eprot [TeV]
"""
L = np.log(Ep)
B = 1.30 + 0.14 * L + 0.011 * L ** 2 # Eq59
beta = (1.79 + 0.11 * L + 0.008 * L ** 2) ** -1 # Eq60
k = (0.801 + 0.049 * L + 0.014 * L ** 2) ** -1 # Eq61
xb = x ** beta
F1 = B * (np.log(x) / x) * ((1 - xb) / (1 + k * xb * (1 - xb))) ** 4
F2 = (
1.0 / np.log(x)
- (4 * beta * xb) / (1 - xb)
- (4 * k * beta * xb * (1 - 2 * xb)) / (1 + k * xb * (1 - xb))
)
return F1 * F2 | KAB06 Eq.58
Note: Quantities are not used in this function
Parameters
----------
x : float
Egamma/Eprot
Ep : float
        Eprot [TeV] | Below is the instruction that describes the task:
### Input:
KAB06 Eq.58
Note: Quantities are not used in this function
Parameters
----------
x : float
Egamma/Eprot
Ep : float
Eprot [TeV]
### Response:
def _Fgamma(self, x, Ep):
"""
KAB06 Eq.58
Note: Quantities are not used in this function
Parameters
----------
x : float
Egamma/Eprot
Ep : float
Eprot [TeV]
"""
L = np.log(Ep)
B = 1.30 + 0.14 * L + 0.011 * L ** 2 # Eq59
beta = (1.79 + 0.11 * L + 0.008 * L ** 2) ** -1 # Eq60
k = (0.801 + 0.049 * L + 0.014 * L ** 2) ** -1 # Eq61
xb = x ** beta
F1 = B * (np.log(x) / x) * ((1 - xb) / (1 + k * xb * (1 - xb))) ** 4
F2 = (
1.0 / np.log(x)
- (4 * beta * xb) / (1 - xb)
- (4 * k * beta * xb * (1 - 2 * xb)) / (1 + k * xb * (1 - xb))
)
return F1 * F2 |
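The record above implements Eqs. 58-61 of Kelner, Aharonian & Bugayov (2006). A plain-`math` transcription of the same parametrization, for illustration only (no NumPy dependence; the function name is mine):

```python
import math

def F_gamma(x, Ep):
    """KAB06 Eqs. 58-61: photon spectrum shape F_gamma(x, Ep), with
    x = Egamma/Ep in (0, 1) and the proton energy Ep in TeV."""
    L = math.log(Ep)
    B = 1.30 + 0.14 * L + 0.011 * L ** 2                 # Eq. 59
    beta = (1.79 + 0.11 * L + 0.008 * L ** 2) ** -1      # Eq. 60
    k = (0.801 + 0.049 * L + 0.014 * L ** 2) ** -1       # Eq. 61
    xb = x ** beta
    F1 = B * (math.log(x) / x) * ((1 - xb) / (1 + k * xb * (1 - xb))) ** 4
    F2 = (1.0 / math.log(x)
          - (4 * beta * xb) / (1 - xb)
          - (4 * k * beta * xb * (1 - 2 * xb)) / (1 + k * xb * (1 - xb)))
    return F1 * F2

# Both factors are negative for 0 < x < 1, so the product is positive:
print(F_gamma(0.1, 100.0) > 0)  # True
```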
def get_extra_functions(self) -> Dict[str, Callable]:
"""Get a list of additional features
Returns:
Dict[str, Callable]: A dict of methods marked as additional features.
Method can be called with ``get_extra_functions()["methodName"]()``.
"""
if self.channel_type == ChannelType.Master:
raise NameError("get_extra_function is not available on master channels.")
methods = {}
for mName in dir(self):
m = getattr(self, mName)
if callable(m) and getattr(m, "extra_fn", False):
methods[mName] = m
return methods | Get a list of additional features
Returns:
Dict[str, Callable]: A dict of methods marked as additional features.
            Method can be called with ``get_extra_functions()["methodName"]()``. | Below is the instruction that describes the task:
### Input:
Get a list of additional features
Returns:
Dict[str, Callable]: A dict of methods marked as additional features.
Method can be called with ``get_extra_functions()["methodName"]()``.
### Response:
def get_extra_functions(self) -> Dict[str, Callable]:
"""Get a list of additional features
Returns:
Dict[str, Callable]: A dict of methods marked as additional features.
Method can be called with ``get_extra_functions()["methodName"]()``.
"""
if self.channel_type == ChannelType.Master:
raise NameError("get_extra_function is not available on master channels.")
methods = {}
for mName in dir(self):
m = getattr(self, mName)
if callable(m) and getattr(m, "extra_fn", False):
methods[mName] = m
return methods |
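The record above discovers methods by scanning `dir(self)` for callables carrying an `extra_fn` attribute. A minimal self-contained sketch of that flag-and-reflect pattern, including a decorator that sets the flag (class and method names are mine):

```python
def extra(fn):
    """Mark a method as an 'extra function' discoverable by reflection."""
    fn.extra_fn = True
    return fn

class Channel:
    @extra
    def export_chat(self):
        return "exported"

    def send(self):  # not marked, so it is not discovered
        return "sent"

    def get_extra_functions(self):
        methods = {}
        for name in dir(self):
            member = getattr(self, name)
            # Bound methods proxy attribute reads to the underlying
            # function, so the decorator's flag is visible here.
            if callable(member) and getattr(member, "extra_fn", False):
                methods[name] = member
        return methods

ch = Channel()
extras = ch.get_extra_functions()
print(sorted(extras))           # ['export_chat']
print(extras["export_chat"]())  # exported
```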
def extract(self, obj, bypass_ref=False):
"""
Extract parent of obj, according to current token.
:param obj: the object source
:param bypass_ref: not used
"""
for i in range(0, self.stages):
try:
obj = obj.parent_obj
except AttributeError:
raise UnstagedError(obj, '{!r} must be staged before '
'exploring its parents'.format(obj))
if self.member:
return obj.parent_member
return obj | Extract parent of obj, according to current token.
:param obj: the object source
    :param bypass_ref: not used | Below is the instruction that describes the task:
### Input:
Extract parent of obj, according to current token.
:param obj: the object source
:param bypass_ref: not used
### Response:
def extract(self, obj, bypass_ref=False):
"""
Extract parent of obj, according to current token.
:param obj: the object source
:param bypass_ref: not used
"""
for i in range(0, self.stages):
try:
obj = obj.parent_obj
except AttributeError:
raise UnstagedError(obj, '{!r} must be staged before '
'exploring its parents'.format(obj))
if self.member:
return obj.parent_member
return obj |
def get_package_formats():
"""Get the list of available package formats and parameters."""
# pylint: disable=fixme
# HACK: This obviously isn't great, and it is subject to change as
# the API changes, but it'll do for now as a interim method of
# introspection to get the parameters we need.
def get_parameters(cls):
"""Build parameters for a package format."""
params = {}
# Create a dummy instance so we can check if a parameter is required.
# As with the rest of this function, this is obviously hacky. We'll
# figure out a way to pull this information in from the API later.
dummy_kwargs = {k: "dummy" for k in cls.swagger_types}
instance = cls(**dummy_kwargs)
for k, v in six.iteritems(cls.swagger_types):
attr = getattr(cls, k)
docs = attr.__doc__.strip().split("\n")
doc = (docs[1] if docs[1] else docs[0]).strip()
try:
setattr(instance, k, None)
required = False
except ValueError:
required = True
params[cls.attribute_map.get(k)] = {
"type": v,
"help": doc,
"required": required,
}
return params
return {
key.replace("PackagesUpload", "").lower(): get_parameters(cls)
for key, cls in inspect.getmembers(cloudsmith_api.models)
if key.startswith("PackagesUpload")
    } | Get the list of available package formats and parameters. | Below is the instruction that describes the task:
### Input:
Get the list of available package formats and parameters.
### Response:
def get_package_formats():
"""Get the list of available package formats and parameters."""
# pylint: disable=fixme
# HACK: This obviously isn't great, and it is subject to change as
# the API changes, but it'll do for now as a interim method of
# introspection to get the parameters we need.
def get_parameters(cls):
"""Build parameters for a package format."""
params = {}
# Create a dummy instance so we can check if a parameter is required.
# As with the rest of this function, this is obviously hacky. We'll
# figure out a way to pull this information in from the API later.
dummy_kwargs = {k: "dummy" for k in cls.swagger_types}
instance = cls(**dummy_kwargs)
for k, v in six.iteritems(cls.swagger_types):
attr = getattr(cls, k)
docs = attr.__doc__.strip().split("\n")
doc = (docs[1] if docs[1] else docs[0]).strip()
try:
setattr(instance, k, None)
required = False
except ValueError:
required = True
params[cls.attribute_map.get(k)] = {
"type": v,
"help": doc,
"required": required,
}
return params
return {
key.replace("PackagesUpload", "").lower(): get_parameters(cls)
for key, cls in inspect.getmembers(cloudsmith_api.models)
if key.startswith("PackagesUpload")
} |
def auto_convert_string_cell(flagable, cell_str, position, worksheet, flags,
units, parens_as_neg=True):
'''
Handles the string case of cell and attempts auto-conversion
for auto_convert_cell.
Args:
parens_as_neg: Converts numerics surrounded by parens to negative values
'''
conversion = cell_str.strip()
# Wrapped?
if re.search(allregex.control_wrapping_regex, cell_str):
# Drop the wrapping characters
stripped_cell = cell_str.strip()
mod_cell_str = stripped_cell[1:][:-1].strip()
neg_mult = False
# If the wrapping characters are '(' and ')' and the interior is a number,
# then the number should be interpreted as a negative value
if (stripped_cell[0] == '(' and stripped_cell[-1] == ')' and
re.search(allregex.contains_numerical_regex, mod_cell_str)):
# Flag for conversion to negative
neg_mult = True
flagable.flag_change(flags, 'interpreted', position, worksheet,
flagable.FLAGS['removed-wrapping'])
# Try again without wrapping
converted_value = auto_convert_cell(flagable, mod_cell_str, position,
worksheet, flags, units)
neg_mult = neg_mult and check_cell_type(converted_value, get_cell_type(0))
if neg_mult and parens_as_neg:
flagable.flag_change(flags, 'interpreted', position, worksheet,
flagable.FLAGS['converted-wrapping-to-neg'])
return -converted_value if neg_mult else converted_value
# Is a string containing numbers?
elif re.search(allregex.contains_numerical_regex, cell_str):
conversion = auto_convert_numeric_string_cell(flagable, conversion, position,
worksheet, flags, units)
elif re.search(allregex.bool_regex, cell_str):
flagable.flag_change(flags, 'interpreted', position, worksheet,
flagable.FLAGS['bool-to-int'])
conversion = 1 if re.search(allregex.true_bool_regex, cell_str) else 0
return conversion | Handles the string case of cell and attempts auto-conversion
for auto_convert_cell.
Args:
        parens_as_neg: Converts numerics surrounded by parens to negative values | Below is the instruction that describes the task:
### Input:
Handles the string case of cell and attempts auto-conversion
for auto_convert_cell.
Args:
parens_as_neg: Converts numerics surrounded by parens to negative values
### Response:
def auto_convert_string_cell(flagable, cell_str, position, worksheet, flags,
units, parens_as_neg=True):
'''
Handles the string case of cell and attempts auto-conversion
for auto_convert_cell.
Args:
parens_as_neg: Converts numerics surrounded by parens to negative values
'''
conversion = cell_str.strip()
# Wrapped?
if re.search(allregex.control_wrapping_regex, cell_str):
# Drop the wrapping characters
stripped_cell = cell_str.strip()
mod_cell_str = stripped_cell[1:][:-1].strip()
neg_mult = False
# If the wrapping characters are '(' and ')' and the interior is a number,
# then the number should be interpreted as a negative value
if (stripped_cell[0] == '(' and stripped_cell[-1] == ')' and
re.search(allregex.contains_numerical_regex, mod_cell_str)):
# Flag for conversion to negative
neg_mult = True
flagable.flag_change(flags, 'interpreted', position, worksheet,
flagable.FLAGS['removed-wrapping'])
# Try again without wrapping
converted_value = auto_convert_cell(flagable, mod_cell_str, position,
worksheet, flags, units)
neg_mult = neg_mult and check_cell_type(converted_value, get_cell_type(0))
if neg_mult and parens_as_neg:
flagable.flag_change(flags, 'interpreted', position, worksheet,
flagable.FLAGS['converted-wrapping-to-neg'])
return -converted_value if neg_mult else converted_value
# Is a string containing numbers?
elif re.search(allregex.contains_numerical_regex, cell_str):
conversion = auto_convert_numeric_string_cell(flagable, conversion, position,
worksheet, flags, units)
elif re.search(allregex.bool_regex, cell_str):
flagable.flag_change(flags, 'interpreted', position, worksheet,
flagable.FLAGS['bool-to-int'])
conversion = 1 if re.search(allregex.true_bool_regex, cell_str) else 0
return conversion |
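The record above treats a number wrapped in parentheses as a negative value (the accountant's convention). A compact standalone sketch of just that rule, without the worksheet/flag machinery (the function name is mine):

```python
def convert_cell(text, parens_as_neg=True):
    """Convert a spreadsheet-style cell string to a float where
    possible; '(123)' is read as -123.0 when parens_as_neg is set.
    Non-numeric cells are returned unchanged."""
    s = text.strip()
    wrapped = len(s) >= 2 and s[0] == "(" and s[-1] == ")"
    if wrapped:
        s = s[1:-1].strip()
    try:
        value = float(s.replace(",", ""))  # tolerate thousands separators
    except ValueError:
        return text
    if wrapped and parens_as_neg:
        return -value
    return value

print(convert_cell("(1,234.5)"))  # -1234.5
print(convert_cell("42"))         # 42.0
print(convert_cell("n/a"))        # n/a
```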
def url_to_query(url):
"""
Given a big huge bugzilla query URL, returns a query dict that can
be passed along to the Bugzilla.query() method.
"""
q = {}
# pylint: disable=unpacking-non-sequence
(ignore, ignore, path,
ignore, query, ignore) = urlparse(url)
base = os.path.basename(path)
if base not in ('buglist.cgi', 'query.cgi'):
return {}
for (k, v) in parse_qsl(query):
if k not in q:
q[k] = v
elif isinstance(q[k], list):
q[k].append(v)
else:
oldv = q[k]
q[k] = [oldv, v]
# Handle saved searches
if base == "buglist.cgi" and "namedcmd" in q and "sharer_id" in q:
q = {
"sharer_id": q["sharer_id"],
"savedsearch": q["namedcmd"],
}
return q | Given a big huge bugzilla query URL, returns a query dict that can
    be passed along to the Bugzilla.query() method. | Below is the instruction that describes the task:
### Input:
Given a big huge bugzilla query URL, returns a query dict that can
be passed along to the Bugzilla.query() method.
### Response:
def url_to_query(url):
"""
Given a big huge bugzilla query URL, returns a query dict that can
be passed along to the Bugzilla.query() method.
"""
q = {}
# pylint: disable=unpacking-non-sequence
(ignore, ignore, path,
ignore, query, ignore) = urlparse(url)
base = os.path.basename(path)
if base not in ('buglist.cgi', 'query.cgi'):
return {}
for (k, v) in parse_qsl(query):
if k not in q:
q[k] = v
elif isinstance(q[k], list):
q[k].append(v)
else:
oldv = q[k]
q[k] = [oldv, v]
# Handle saved searches
if base == "buglist.cgi" and "namedcmd" in q and "sharer_id" in q:
q = {
"sharer_id": q["sharer_id"],
"savedsearch": q["namedcmd"],
}
return q |
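The record above accumulates query parameters so that a key appearing once maps to a string and a repeated key maps to a list. The same merge pattern isolated with the standard library (the function name and example URL are mine):

```python
from urllib.parse import urlparse, parse_qsl

def query_to_dict(url):
    """Collapse a URL's query string into a dict, merging repeated
    keys into lists -- the accumulation pattern url_to_query uses."""
    q = {}
    for k, v in parse_qsl(urlparse(url).query):
        if k not in q:
            q[k] = v
        elif isinstance(q[k], list):
            q[k].append(v)
        else:
            q[k] = [q[k], v]
    return q

url = ("https://bugzilla.example.com/buglist.cgi"
       "?product=Foo&bug_status=NEW&bug_status=ASSIGNED")
print(query_to_dict(url))
# {'product': 'Foo', 'bug_status': ['NEW', 'ASSIGNED']}
```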
def _componentSortKey(componentAndType):
"""Sort SET components by tag
Sort regardless of the Choice value (static sort)
"""
component, asn1Spec = componentAndType
if asn1Spec is None:
asn1Spec = component
if asn1Spec.typeId == univ.Choice.typeId and not asn1Spec.tagSet:
if asn1Spec.tagSet:
return asn1Spec.tagSet
else:
return asn1Spec.componentType.minTagSet
else:
return asn1Spec.tagSet | Sort SET components by tag
    Sort regardless of the Choice value (static sort) | Below is the instruction that describes the task:
### Input:
Sort SET components by tag
Sort regardless of the Choice value (static sort)
### Response:
def _componentSortKey(componentAndType):
"""Sort SET components by tag
Sort regardless of the Choice value (static sort)
"""
component, asn1Spec = componentAndType
if asn1Spec is None:
asn1Spec = component
if asn1Spec.typeId == univ.Choice.typeId and not asn1Spec.tagSet:
if asn1Spec.tagSet:
return asn1Spec.tagSet
else:
return asn1Spec.componentType.minTagSet
else:
return asn1Spec.tagSet |
def get_string_list(self, key):
"""Get a list of strings."""
strings = []
size = self.beginReadArray(key)
for i in range(size):
self.setArrayIndex(i)
entry = str(self._value("entry"))
strings.append(entry)
self.endArray()
        return strings | Get a list of strings. | Below is the instruction that describes the task:
### Input:
Get a list of strings.
### Response:
def get_string_list(self, key):
"""Get a list of strings."""
strings = []
size = self.beginReadArray(key)
for i in range(size):
self.setArrayIndex(i)
entry = str(self._value("entry"))
strings.append(entry)
self.endArray()
return strings |
def scoped_session_decorator(func):
"""Manage contexts and add debugging to db sessions."""
@wraps(func)
def wrapper(*args, **kwargs):
with sessions_scope(session):
# The session used in func comes from the funcs globals, but
# it will be a proxied thread local var from the session
# registry, and will therefore be identical to the one returned
# by the context manager above.
logger.debug("Running worker %s in scoped DB session", func.__name__)
return func(*args, **kwargs)
    return wrapper | Manage contexts and add debugging to db sessions. | Below is the instruction that describes the task:
### Input:
Manage contexts and add debugging to db sessions.
### Response:
def scoped_session_decorator(func):
"""Manage contexts and add debugging to db sessions."""
@wraps(func)
def wrapper(*args, **kwargs):
with sessions_scope(session):
# The session used in func comes from the funcs globals, but
# it will be a proxied thread local var from the session
# registry, and will therefore be identical to the one returned
# by the context manager above.
logger.debug("Running worker %s in scoped DB session", func.__name__)
return func(*args, **kwargs)
return wrapper |
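The record above wraps a worker function so that it always runs inside a scoped database session. A self-contained sketch of that decorator-plus-context-manager shape, using an event list in place of a real SQLAlchemy session (all names here are mine):

```python
from contextlib import contextmanager
from functools import wraps

events = []

@contextmanager
def session_scope():
    """Stand-in for a commit/rollback session context manager."""
    events.append("begin")
    try:
        yield
        events.append("commit")
    except Exception:
        events.append("rollback")
        raise

def scoped(func):
    """Run the wrapped function inside a fresh session scope."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        with session_scope():
            return func(*args, **kwargs)
    return wrapper

@scoped
def do_work():
    events.append("work")
    return "done"

print(do_work())  # done
print(events)     # ['begin', 'work', 'commit']
```

On an exception inside the wrapped function, the context manager records the rollback and re-raises, mirroring the usual commit-or-rollback session contract.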
def FindRegex(self, regex, data):
"""Search the data for a hit."""
for match in re.finditer(regex, data, flags=re.I | re.S | re.M):
        yield (match.start(), match.end()) | Search the data for a hit. | Below is the instruction that describes the task:
### Input:
Search the data for a hit.
### Response:
def FindRegex(self, regex, data):
"""Search the data for a hit."""
for match in re.finditer(regex, data, flags=re.I | re.S | re.M):
yield (match.start(), match.end()) |
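The record above yields byte offsets for every case-insensitive match via `re.finditer`. The same generator as a free function, with a small usage example (the function name is mine):

```python
import re

def find_spans(pattern, data):
    """Yield (start, end) offsets of every match of `pattern` in
    `data`, case-insensitive and across line boundaries, mirroring
    the FindRegex generator above."""
    for match in re.finditer(pattern, data, flags=re.I | re.S | re.M):
        yield (match.start(), match.end())

hits = list(find_spans("ab", "xxAByyab"))
print(hits)  # [(2, 4), (6, 8)]
```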
def Clift(Re):
r'''Calculates drag coefficient of a smooth sphere using the method in
[1]_ as described in [2]_.
.. math::
C_D = \left\{ \begin{array}{ll}
\frac{24}{Re} + \frac{3}{16} & \mbox{if $Re < 0.01$}\\
\frac{24}{Re}(1 + 0.1315Re^{0.82 - 0.05\log Re}) & \mbox{if $0.01 < Re < 20$}\\
\frac{24}{Re}(1 + 0.1935Re^{0.6305}) & \mbox{if $20 < Re < 260$}\\
10^{1.6435 - 1.1242\log Re + 0.1558[\log Re]^2} & \mbox{if $260 < Re < 1500$}\\
10^{-2.4571 + 2.5558\log Re - 0.9295[\log Re]^2 + 0.1049[\log Re]^3} & \mbox{if $1500 < Re < 12000$}\\
10^{-1.9181 + 0.6370\log Re - 0.0636[\log Re]^2} & \mbox{if $12000 < Re < 44000$}\\
10^{-4.3390 + 1.5809\log Re - 0.1546[\log Re]^2} & \mbox{if $44000 < Re < 338000$}\\
29.78 - 5.3\log Re & \mbox{if $338000 < Re < 400000$}\\
0.19\log Re - 0.49 & \mbox{if $400000 < Re < 1000000$}\end{array} \right.
Parameters
----------
Re : float
Reynolds number of the sphere, [-]
Returns
-------
Cd : float
Drag coefficient [-]
Notes
-----
Range is Re <= 1E6.
Examples
--------
>>> Clift(200)
0.7756342422322543
References
----------
.. [1] R. Clift, J.R. Grace, M.E. Weber, Bubbles, Drops, and Particles,
Academic, New York, 1978.
.. [2] Barati, Reza, Seyed Ali Akbar Salehi Neyshabouri, and Goodarz
Ahmadi. "Development of Empirical Models with High Accuracy for
Estimation of Drag Coefficient of Flow around a Smooth Sphere: An
Evolutionary Approach." Powder Technology 257 (May 2014): 11-19.
doi:10.1016/j.powtec.2014.02.045.
'''
if Re < 0.01:
Cd = 24./Re + 3/16.
elif Re < 20:
Cd = 24./Re*(1 + 0.1315*Re**(0.82 - 0.05*log10(Re)))
elif Re < 260:
Cd = 24./Re*(1 + 0.1935*Re**(0.6305))
elif Re < 1500:
Cd = 10**(1.6435 - 1.1242*log10(Re) + 0.1558*(log10(Re))**2)
elif Re < 12000:
Cd = 10**(-2.4571 + 2.5558*log10(Re) - 0.9295*(log10(Re))**2 + 0.1049*log10(Re)**3)
elif Re < 44000:
Cd = 10**(-1.9181 + 0.6370*log10(Re) - 0.0636*(log10(Re))**2)
elif Re < 338000:
Cd = 10**(-4.3390 + 1.5809*log10(Re) - 0.1546*(log10(Re))**2)
elif Re < 400000:
Cd = 29.78 - 5.3*log10(Re)
else:
Cd = 0.19*log10(Re) - 0.49
return Cd | r'''Calculates drag coefficient of a smooth sphere using the method in
[1]_ as described in [2]_.
.. math::
C_D = \left\{ \begin{array}{ll}
\frac{24}{Re} + \frac{3}{16} & \mbox{if $Re < 0.01$}\\
\frac{24}{Re}(1 + 0.1315Re^{0.82 - 0.05\log Re}) & \mbox{if $0.01 < Re < 20$}\\
\frac{24}{Re}(1 + 0.1935Re^{0.6305}) & \mbox{if $20 < Re < 260$}\\
10^{1.6435 - 1.1242\log Re + 0.1558[\log Re]^2} & \mbox{if $260 < Re < 1500$}\\
10^{-2.4571 + 2.5558\log Re - 0.9295[\log Re]^2 + 0.1049[\log Re]^3} & \mbox{if $1500 < Re < 12000$}\\
10^{-1.9181 + 0.6370\log Re - 0.0636[\log Re]^2} & \mbox{if $12000 < Re < 44000$}\\
10^{-4.3390 + 1.5809\log Re - 0.1546[\log Re]^2} & \mbox{if $44000 < Re < 338000$}\\
29.78 - 5.3\log Re & \mbox{if $338000 < Re < 400000$}\\
0.19\log Re - 0.49 & \mbox{if $400000 < Re < 1000000$}\end{array} \right.
Parameters
----------
Re : float
Reynolds number of the sphere, [-]
Returns
-------
Cd : float
Drag coefficient [-]
Notes
-----
Range is Re <= 1E6.
Examples
--------
>>> Clift(200)
0.7756342422322543
References
----------
.. [1] R. Clift, J.R. Grace, M.E. Weber, Bubbles, Drops, and Particles,
Academic, New York, 1978.
.. [2] Barati, Reza, Seyed Ali Akbar Salehi Neyshabouri, and Goodarz
Ahmadi. "Development of Empirical Models with High Accuracy for
Estimation of Drag Coefficient of Flow around a Smooth Sphere: An
Evolutionary Approach." Powder Technology 257 (May 2014): 11-19.
doi:10.1016/j.powtec.2014.02.045. | Below is the instruction that describes the task:
### Input:
r'''Calculates drag coefficient of a smooth sphere using the method in
[1]_ as described in [2]_.
.. math::
C_D = \left\{ \begin{array}{ll}
\frac{24}{Re} + \frac{3}{16} & \mbox{if $Re < 0.01$}\\
\frac{24}{Re}(1 + 0.1315Re^{0.82 - 0.05\log Re}) & \mbox{if $0.01 < Re < 20$}\\
\frac{24}{Re}(1 + 0.1935Re^{0.6305}) & \mbox{if $20 < Re < 260$}\\
10^{1.6435 - 1.1242\log Re + 0.1558[\log Re]^2} & \mbox{if $260 < Re < 1500$}\\
10^{-2.4571 + 2.5558\log Re - 0.9295[\log Re]^2 + 0.1049[\log Re]^3} & \mbox{if $1500 < Re < 12000$}\\
10^{-1.9181 + 0.6370\log Re - 0.0636[\log Re]^2} & \mbox{if $12000 < Re < 44000$}\\
10^{-4.3390 + 1.5809\log Re - 0.1546[\log Re]^2} & \mbox{if $44000 < Re < 338000$}\\
29.78 - 5.3\log Re & \mbox{if $338000 < Re < 400000$}\\
0.19\log Re - 0.49 & \mbox{if $400000 < Re < 1000000$}\end{array} \right.
Parameters
----------
Re : float
Reynolds number of the sphere, [-]
Returns
-------
Cd : float
Drag coefficient [-]
Notes
-----
Range is Re <= 1E6.
Examples
--------
>>> Clift(200)
0.7756342422322543
References
----------
.. [1] R. Clift, J.R. Grace, M.E. Weber, Bubbles, Drops, and Particles,
Academic, New York, 1978.
.. [2] Barati, Reza, Seyed Ali Akbar Salehi Neyshabouri, and Goodarz
Ahmadi. "Development of Empirical Models with High Accuracy for
Estimation of Drag Coefficient of Flow around a Smooth Sphere: An
Evolutionary Approach." Powder Technology 257 (May 2014): 11-19.
doi:10.1016/j.powtec.2014.02.045.
### Response:
def Clift(Re):
r'''Calculates drag coefficient of a smooth sphere using the method in
[1]_ as described in [2]_.
.. math::
C_D = \left\{ \begin{array}{ll}
\frac{24}{Re} + \frac{3}{16} & \mbox{if $Re < 0.01$}\\
\frac{24}{Re}(1 + 0.1315Re^{0.82 - 0.05\log Re}) & \mbox{if $0.01 < Re < 20$}\\
\frac{24}{Re}(1 + 0.1935Re^{0.6305}) & \mbox{if $20 < Re < 260$}\\
10^{1.6435 - 1.1242\log Re + 0.1558[\log Re]^2} & \mbox{if $260 < Re < 1500$}\\
10^{-2.4571 + 2.5558\log Re - 0.9295[\log Re]^2 + 0.1049[\log Re]^3} & \mbox{if $1500 < Re < 12000$}\\
10^{-1.9181 + 0.6370\log Re - 0.0636[\log Re]^2} & \mbox{if $12000 < Re < 44000$}\\
10^{-4.3390 + 1.5809\log Re - 0.1546[\log Re]^2} & \mbox{if $44000 < Re < 338000$}\\
29.78 - 5.3\log Re & \mbox{if $338000 < Re < 400000$}\\
0.19\log Re - 0.49 & \mbox{if $400000 < Re < 1000000$}\end{array} \right.
Parameters
----------
Re : float
Reynolds number of the sphere, [-]
Returns
-------
Cd : float
Drag coefficient [-]
Notes
-----
Range is Re <= 1E6.
Examples
--------
>>> Clift(200)
0.7756342422322543
References
----------
.. [1] R. Clift, J.R. Grace, M.E. Weber, Bubbles, Drops, and Particles,
Academic, New York, 1978.
.. [2] Barati, Reza, Seyed Ali Akbar Salehi Neyshabouri, and Goodarz
Ahmadi. "Development of Empirical Models with High Accuracy for
Estimation of Drag Coefficient of Flow around a Smooth Sphere: An
Evolutionary Approach." Powder Technology 257 (May 2014): 11-19.
doi:10.1016/j.powtec.2014.02.045.
'''
if Re < 0.01:
Cd = 24./Re + 3/16.
elif Re < 20:
Cd = 24./Re*(1 + 0.1315*Re**(0.82 - 0.05*log10(Re)))
elif Re < 260:
Cd = 24./Re*(1 + 0.1935*Re**(0.6305))
elif Re < 1500:
Cd = 10**(1.6435 - 1.1242*log10(Re) + 0.1558*(log10(Re))**2)
elif Re < 12000:
Cd = 10**(-2.4571 + 2.5558*log10(Re) - 0.9295*(log10(Re))**2 + 0.1049*log10(Re)**3)
elif Re < 44000:
Cd = 10**(-1.9181 + 0.6370*log10(Re) - 0.0636*(log10(Re))**2)
elif Re < 338000:
Cd = 10**(-4.3390 + 1.5809*log10(Re) - 0.1546*(log10(Re))**2)
elif Re < 400000:
Cd = 29.78 - 5.3*log10(Re)
else:
Cd = 0.19*log10(Re) - 0.49
return Cd |
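The doctest value in the Clift row can be checked by evaluating just the 20 < Re < 260 branch of the piecewise correlation above:

```python
def clift_intermediate(Re):
    # 20 < Re < 260 branch of the Clift smooth-sphere drag correlation.
    return 24.0 / Re * (1 + 0.1935 * Re ** 0.6305)

cd = clift_intermediate(200)
```

At Re = 200 this reproduces the documented drag coefficient of about 0.7756.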
def _update_limits_from_api(self):
"""
Call :py:meth:`~.connect` and then check what region we're running in;
adjust default limits as required for regions that differ (us-east-1).
"""
region_limits = {
'us-east-1': 70
}
self.connect()
rname = self.conn._client_config.region_name
if rname in region_limits:
self.limits['File systems'].default_limit = region_limits[rname]
logger.debug(
'Running in region %s; setting EFS "File systems" default '
'limit value to: %d', rname, region_limits[rname]
) | Call :py:meth:`~.connect` and then check what region we're running in;
adjust default limits as required for regions that differ (us-east-1). | Below is the instruction that describes the task:
### Input:
Call :py:meth:`~.connect` and then check what region we're running in;
adjust default limits as required for regions that differ (us-east-1).
### Response:
def _update_limits_from_api(self):
"""
Call :py:meth:`~.connect` and then check what region we're running in;
adjust default limits as required for regions that differ (us-east-1).
"""
region_limits = {
'us-east-1': 70
}
self.connect()
rname = self.conn._client_config.region_name
if rname in region_limits:
self.limits['File systems'].default_limit = region_limits[rname]
logger.debug(
'Running in region %s; setting EFS "File systems" default '
'limit value to: %d', rname, region_limits[rname]
) |
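Stripped of the AWS client plumbing, the pattern in this row reduces to a dictionary lookup that overrides a default when the region is listed. The sketch below uses a plain dict in place of awslimitchecker's limit objects; all names are illustrative:

```python
def apply_region_overrides(limits, region_name, overrides):
    # Overwrite the default limit for any region listed in `overrides`;
    # all other regions keep the stock default.
    if region_name in overrides:
        limits["File systems"] = overrides[region_name]
    return limits

limits = {"File systems": 1000}
apply_region_overrides(limits, "us-east-1", {"us-east-1": 70})
```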
def igphyml(input_file=None, tree_file=None, root=None, verbose=False):
'''
Computes a phylogenetic tree using IgPhyML.
.. note::
IgPhyML must be installed. It can be downloaded from https://github.com/kbhoehn/IgPhyML.
Args:
input_file (str): Path to a Phylip-formatted multiple sequence alignment. Required.
tree_file (str): Path to the output tree file.
root (str): Name of the root sequence. Required.
verbose (bool): If `True`, prints the standard output and standard error for each IgPhyML run.
Default is `False`.
'''
if shutil.which('igphyml') is None:
raise RuntimeError('It appears that IgPhyML is not installed.\nPlease install and try again.')
# first, tree topology is estimated with the M0/GY94 model
igphyml_cmd1 = 'igphyml -i {} -m GY -w M0 -t e --run_id gy94'.format(input_file)
p1 = sp.Popen(igphyml_cmd1.split(), stdout=sp.PIPE, stderr=sp.PIPE)
stdout1, stderr1 = p1.communicate()
if verbose:
print(stdout1.decode() + '\n')
print(stderr1.decode() + '\n\n')
intermediate = input_file + '_igphyml_tree.txt_gy94'
# now we fit the HLP17 model once the tree topology is fixed
igphyml_cmd2 = 'igphyml -i {} -m HLP17 --root {} -o lr -u {} -o {}'.format(input_file,
root,
intermediate,
tree_file)
p2 = sp.Popen(igphyml_cmd2.split(), stdout=sp.PIPE, stderr=sp.PIPE)
stdout2, stderr2 = p2.communicate()
if verbose:
print(stdout2.decode() + '\n')
print(stderr2.decode() + '\n')
return tree_file + '_igphyml_tree.txt' | Computes a phylogenetic tree using IgPhyML.
.. note::
IgPhyML must be installed. It can be downloaded from https://github.com/kbhoehn/IgPhyML.
Args:
input_file (str): Path to a Phylip-formatted multiple sequence alignment. Required.
tree_file (str): Path to the output tree file.
root (str): Name of the root sequence. Required.
verbose (bool): If `True`, prints the standard output and standard error for each IgPhyML run.
Default is `False`. | Below is the instruction that describes the task:
### Input:
Computes a phylogenetic tree using IgPhyML.
.. note::
IgPhyML must be installed. It can be downloaded from https://github.com/kbhoehn/IgPhyML.
Args:
input_file (str): Path to a Phylip-formatted multiple sequence alignment. Required.
tree_file (str): Path to the output tree file.
root (str): Name of the root sequence. Required.
verbose (bool): If `True`, prints the standard output and standard error for each IgPhyML run.
Default is `False`.
### Response:
def igphyml(input_file=None, tree_file=None, root=None, verbose=False):
'''
Computes a phylogenetic tree using IgPhyML.
.. note::
IgPhyML must be installed. It can be downloaded from https://github.com/kbhoehn/IgPhyML.
Args:
input_file (str): Path to a Phylip-formatted multiple sequence alignment. Required.
tree_file (str): Path to the output tree file.
root (str): Name of the root sequence. Required.
verbose (bool): If `True`, prints the standard output and standard error for each IgPhyML run.
Default is `False`.
'''
if shutil.which('igphyml') is None:
raise RuntimeError('It appears that IgPhyML is not installed.\nPlease install and try again.')
# first, tree topology is estimated with the M0/GY94 model
igphyml_cmd1 = 'igphyml -i {} -m GY -w M0 -t e --run_id gy94'.format(input_file)
p1 = sp.Popen(igphyml_cmd1.split(), stdout=sp.PIPE, stderr=sp.PIPE)
stdout1, stderr1 = p1.communicate()
if verbose:
print(stdout1.decode() + '\n')
print(stderr1.decode() + '\n\n')
intermediate = input_file + '_igphyml_tree.txt_gy94'
# now we fit the HLP17 model once the tree topology is fixed
igphyml_cmd2 = 'igphyml -i {} -m HLP17 --root {} -o lr -u {} -o {}'.format(input_file,
root,
intermediate,
tree_file)
p2 = sp.Popen(igphyml_cmd2.split(), stdout=sp.PIPE, stderr=sp.PIPE)
stdout2, stderr2 = p2.communicate()
if verbose:
print(stdout2.decode() + '\n')
print(stderr2.decode() + '\n')
return tree_file + '_igphyml_tree.txt' |
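The two-stage IgPhyML invocation is easier to test when command construction is separated from process handling: argument lists avoid shell-quoting pitfalls with `subprocess`. The flags mirror the row above; the helper name is made up for this sketch:

```python
def igphyml_commands(input_file, tree_file, root):
    # Stage 1: estimate the tree topology under the M0/GY94 model.
    cmd1 = ["igphyml", "-i", input_file, "-m", "GY", "-w", "M0",
            "-t", "e", "--run_id", "gy94"]
    # Stage 2: fit HLP17 with the stage-1 topology held fixed.
    intermediate = input_file + "_igphyml_tree.txt_gy94"
    cmd2 = ["igphyml", "-i", input_file, "-m", "HLP17", "--root", root,
            "-o", "lr", "-u", intermediate, "-o", tree_file]
    return cmd1, cmd2
```

Each list can be passed directly to `subprocess.Popen` without `shell=True`.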
def project_run_path(cls, project, transfer_config, run):
"""Return a fully-qualified project_run string."""
return google.api_core.path_template.expand(
"projects/{project}/transferConfigs/{transfer_config}/runs/{run}",
project=project,
transfer_config=transfer_config,
run=run,
) | Return a fully-qualified project_run string. | Below is the instruction that describes the task:
### Input:
Return a fully-qualified project_run string.
### Response:
def project_run_path(cls, project, transfer_config, run):
"""Return a fully-qualified project_run string."""
return google.api_core.path_template.expand(
"projects/{project}/transferConfigs/{transfer_config}/runs/{run}",
project=project,
transfer_config=transfer_config,
run=run,
) |
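With all fields supplied as keywords, `path_template.expand` behaves like plain `str.format`, so a dependency-free stand-in for the row above is:

```python
def project_run_path(project, transfer_config, run):
    # Simplified stand-in for google.api_core.path_template.expand.
    template = "projects/{project}/transferConfigs/{transfer_config}/runs/{run}"
    return template.format(project=project, transfer_config=transfer_config, run=run)
```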
def node_from_elem(elem, nodefactory=Node, lazy=()):
"""
Convert (recursively) an ElementTree object into a Node object.
"""
children = list(elem)
lineno = getattr(elem, 'lineno', None)
if not children:
return nodefactory(elem.tag, dict(elem.attrib), elem.text,
lineno=lineno)
if striptag(elem.tag) in lazy:
nodes = (node_from_elem(ch, nodefactory, lazy) for ch in children)
else:
nodes = [node_from_elem(ch, nodefactory, lazy) for ch in children]
return nodefactory(elem.tag, dict(elem.attrib), nodes=nodes, lineno=lineno) | Convert (recursively) an ElementTree object into a Node object. | Below is the instruction that describes the task:
### Input:
Convert (recursively) an ElementTree object into a Node object.
### Response:
def node_from_elem(elem, nodefactory=Node, lazy=()):
"""
Convert (recursively) an ElementTree object into a Node object.
"""
children = list(elem)
lineno = getattr(elem, 'lineno', None)
if not children:
return nodefactory(elem.tag, dict(elem.attrib), elem.text,
lineno=lineno)
if striptag(elem.tag) in lazy:
nodes = (node_from_elem(ch, nodefactory, lazy) for ch in children)
else:
nodes = [node_from_elem(ch, nodefactory, lazy) for ch in children]
return nodefactory(elem.tag, dict(elem.attrib), nodes=nodes, lineno=lineno) |
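With a minimal `Node` class supplied and the namespace-stripping `striptag` helper dropped (the example has no namespaces), the recursive conversion runs as-is against `xml.etree`:

```python
import xml.etree.ElementTree as ET

class Node:
    # Minimal stand-in for the node type expected by nodefactory.
    def __init__(self, tag, attrib, text=None, nodes=None, lineno=None):
        self.tag, self.attrib, self.text = tag, attrib, text
        self.nodes, self.lineno = nodes, lineno

def node_from_elem(elem, nodefactory=Node, lazy=()):
    children = list(elem)
    lineno = getattr(elem, "lineno", None)
    if not children:
        return nodefactory(elem.tag, dict(elem.attrib), elem.text, lineno=lineno)
    if elem.tag in lazy:  # the original strips namespace prefixes via striptag()
        nodes = (node_from_elem(ch, nodefactory, lazy) for ch in children)
    else:
        nodes = [node_from_elem(ch, nodefactory, lazy) for ch in children]
    return nodefactory(elem.tag, dict(elem.attrib), nodes=nodes, lineno=lineno)

root = node_from_elem(ET.fromstring("<a x='1'><b>hi</b></a>"))
```

Tags listed in `lazy` get a generator of children instead of a list, deferring conversion of large subtrees.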
def _formatters_default(self):
"""Activate the default formatters."""
formatter_classes = [
PlainTextFormatter,
HTMLFormatter,
SVGFormatter,
PNGFormatter,
JPEGFormatter,
LatexFormatter,
JSONFormatter,
JavascriptFormatter
]
d = {}
for cls in formatter_classes:
f = cls(config=self.config)
d[f.format_type] = f
return d | Activate the default formatters. | Below is the instruction that describes the task:
### Input:
Activate the default formatters.
### Response:
def _formatters_default(self):
"""Activate the default formatters."""
formatter_classes = [
PlainTextFormatter,
HTMLFormatter,
SVGFormatter,
PNGFormatter,
JPEGFormatter,
LatexFormatter,
JSONFormatter,
JavascriptFormatter
]
d = {}
for cls in formatter_classes:
f = cls(config=self.config)
d[f.format_type] = f
return d |
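The registry construction is independent of IPython itself; two stand-in formatter classes show the shape of the mapping from `format_type` (a MIME type) to a configured instance:

```python
class PlainTextFormatter:
    format_type = "text/plain"
    def __init__(self, config=None):
        self.config = config

class HTMLFormatter:
    format_type = "text/html"
    def __init__(self, config=None):
        self.config = config

def formatters_default(config=None):
    # Instantiate each formatter and key the registry by its MIME type.
    d = {}
    for cls in (PlainTextFormatter, HTMLFormatter):
        f = cls(config=config)
        d[f.format_type] = f
    return d
```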
def get_uri_obj(uri, storage_args={}):
"""
Retrieve the underlying storage object based on the URI (i.e., scheme).
:param str uri: URI to get storage object for
:param dict storage_args: Keyword arguments to pass to the underlying storage object
"""
if isinstance(uri, BaseURI): return uri
uri_obj = None
o = urlparse(uri)
for storage in STORAGES:
uri_obj = storage.parse_uri(o, storage_args=storage_args)
if uri_obj is not None:
break
#end for
if uri_obj is None:
raise TypeError('<{}> is an unsupported URI.'.format(uri))
return uri_obj | Retrieve the underlying storage object based on the URI (i.e., scheme).
:param str uri: URI to get storage object for
:param dict storage_args: Keyword arguments to pass to the underlying storage object | Below is the instruction that describes the task:
### Input:
Retrieve the underlying storage object based on the URI (i.e., scheme).
:param str uri: URI to get storage object for
:param dict storage_args: Keyword arguments to pass to the underlying storage object
### Response:
def get_uri_obj(uri, storage_args={}):
"""
Retrieve the underlying storage object based on the URI (i.e., scheme).
:param str uri: URI to get storage object for
:param dict storage_args: Keyword arguments to pass to the underlying storage object
"""
if isinstance(uri, BaseURI): return uri
uri_obj = None
o = urlparse(uri)
for storage in STORAGES:
uri_obj = storage.parse_uri(o, storage_args=storage_args)
if uri_obj is not None:
break
#end for
if uri_obj is None:
raise TypeError('<{}> is an unsupported URI.'.format(uri))
return uri_obj |
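The scheme-based dispatch above can be demonstrated end to end with two toy storage classes in place of the library's real `STORAGES` backends (class names and behavior here are illustrative):

```python
from urllib.parse import urlparse

class S3URI:
    def __init__(self, bucket, key):
        self.bucket, self.key = bucket, key
    @staticmethod
    def parse_uri(o, storage_args=None):
        # Claim the URI only if the scheme matches; otherwise defer.
        if o.scheme != "s3":
            return None
        return S3URI(o.netloc, o.path.lstrip("/"))

class LocalURI:
    def __init__(self, path):
        self.path = path
    @staticmethod
    def parse_uri(o, storage_args=None):
        if o.scheme not in ("", "file"):
            return None
        return LocalURI(o.path)

STORAGES = [S3URI, LocalURI]

def get_uri_obj(uri, storage_args=None):
    o = urlparse(uri)
    for storage in STORAGES:
        uri_obj = storage.parse_uri(o, storage_args=storage_args)
        if uri_obj is not None:
            return uri_obj
    raise TypeError("<{}> is an unsupported URI.".format(uri))
```

Each backend returns `None` for schemes it does not handle, so the first match in `STORAGES` wins.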
def author(self, value):
"""
Setter for **self.__author** attribute.
:param value: Attribute value.
:type value: unicode
"""
if value is not None:
assert type(value) is unicode, "'{0}' attribute: '{1}' type is not 'unicode'!".format(
"author", value)
self.__author = value | Setter for **self.__author** attribute.
:param value: Attribute value.
:type value: unicode | Below is the instruction that describes the task:
### Input:
Setter for **self.__author** attribute.
:param value: Attribute value.
:type value: unicode
### Response:
def author(self, value):
"""
Setter for **self.__author** attribute.
:param value: Attribute value.
:type value: unicode
"""
if value is not None:
assert type(value) is unicode, "'{0}' attribute: '{1}' type is not 'unicode'!".format(
"author", value)
self.__author = value |
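A Python 3 analogue of the guarded setter, with `str` standing in for Python 2's `unicode` and a minimal property pair around it:

```python
class Described:
    def __init__(self):
        self.__author = None

    @property
    def author(self):
        return self.__author

    @author.setter
    def author(self, value):
        # Python 3 analogue of the 'unicode' type guard in the row above.
        if value is not None:
            assert isinstance(value, str), \
                "'author' attribute: '{0}' type is not 'str'!".format(value)
        self.__author = value
```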
def _collect_args(args) -> ISeq:
"""Collect Python starred arguments into a Basilisp list."""
if isinstance(args, tuple):
return llist.list(args)
raise TypeError("Python variadic arguments should always be a tuple") | Collect Python starred arguments into a Basilisp list. | Below is the instruction that describes the task:
### Input:
Collect Python starred arguments into a Basilisp list.
### Response:
def _collect_args(args) -> ISeq:
"""Collect Python starred arguments into a Basilisp list."""
if isinstance(args, tuple):
return llist.list(args)
raise TypeError("Python variadic arguments should always be a tuple") |
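The same guard works with a plain Python list standing in for Basilisp's persistent list type:

```python
def collect_args(args):
    # Python packs *args into a tuple; anything else signals a caller error.
    if isinstance(args, tuple):
        return list(args)
    raise TypeError("Python variadic arguments should always be a tuple")
```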
def get_descendants(self):
"""
:returns: A queryset of all the node's descendants as DFS, doesn't
include the node itself
"""
return self.__class__.get_tree(self).exclude(pk=self.pk) | :returns: A queryset of all the node's descendants as DFS, doesn't
include the node itself | Below is the instruction that describes the task:
### Input:
:returns: A queryset of all the node's descendants as DFS, doesn't
include the node itself
### Response:
def get_descendants(self):
"""
:returns: A queryset of all the node's descendants as DFS, doesn't
include the node itself
"""
return self.__class__.get_tree(self).exclude(pk=self.pk) |
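Outside Django, the same contract (depth-first descendants, excluding the node itself) can be sketched over a simple dict-of-children tree instead of treebeard's queryset API:

```python
def get_descendants(tree, node):
    # Depth-first walk collecting every node below `node`, but not `node`.
    out = []
    def dfs(n):
        for child in tree.get(n, []):
            out.append(child)
            dfs(child)
    dfs(node)
    return out
```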
def moments(self):
"""The first two time delay weighted statistical moments of the
ARMA response."""
timepoints = self.ma.delays
response = self.response
moment1 = statstools.calc_mean_time(timepoints, response)
moment2 = statstools.calc_mean_time_deviation(
timepoints, response, moment1)
return numpy.array([moment1, moment2]) | The first two time delay weighted statistical moments of the
ARMA response. | Below is the instruction that describes the task:
### Input:
The first two time delay weighted statistical moments of the
ARMA response.
### Response:
def moments(self):
"""The first two time delay weighted statistical moments of the
ARMA response."""
timepoints = self.ma.delays
response = self.response
moment1 = statstools.calc_mean_time(timepoints, response)
moment2 = statstools.calc_mean_time_deviation(
timepoints, response, moment1)
return numpy.array([moment1, moment2]) |
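The moments row depends on hydpy's `statstools`; the sketch below assumes those helpers compute the response-weighted mean time and the weighted standard deviation around it (an assumption about their behavior, not a verified reimplementation), using plain Python instead of numpy:

```python
def calc_mean_time(timepoints, weights):
    # Response-weighted first moment (assumed statstools semantics).
    total = sum(weights)
    return sum(t * w for t, w in zip(timepoints, weights)) / total

def calc_mean_time_deviation(timepoints, weights, mean):
    # Response-weighted standard deviation around `mean` (assumed semantics).
    total = sum(weights)
    return (sum(w * (t - mean) ** 2
                for t, w in zip(timepoints, weights)) / total) ** 0.5

def moments(timepoints, response):
    m1 = calc_mean_time(timepoints, response)
    m2 = calc_mean_time_deviation(timepoints, response, m1)
    return (m1, m2)
```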