def debug(self, msg, indent=0, **kwargs):
    """invoke ``self.logger.debug``"""
    return self.logger.debug(self._indent(msg, indent), **kwargs)
def parse(to_parse, ignore_whitespace_text_nodes=True, adapter=None):
    """
    Parse an XML document into an *xml4h*-wrapped DOM representation
    using an underlying XML library implementation.

    :param to_parse: an XML document file, document string, or the
        path to an XML file. If a string value is given that contains
        a ``<`` character it is treated as literal XML data, otherwise
        a string value is treated as a file path.
    :type to_parse: a file-like object or string
    :param bool ignore_whitespace_text_nodes: if ``True`` pure whitespace
        nodes are stripped from the parsed document, since these are
        usually noise introduced by XML docs serialized to be
        human-friendly.
    :param adapter: the *xml4h* implementation adapter class used to parse
        the document and to interact with the resulting nodes.
        If None, :attr:`best_adapter` will be used.
    :type adapter: adapter class or None
    :return: an :class:`xml4h.nodes.Document` node representing the
        parsed document.

    Delegates to an adapter's :meth:`~xml4h.impls.interface.parse_string`
    or :meth:`~xml4h.impls.interface.parse_file` implementation.
    """
    if adapter is None:
        adapter = best_adapter
    if isinstance(to_parse, basestring) and '<' in to_parse:
        return adapter.parse_string(to_parse, ignore_whitespace_text_nodes)
    else:
        return adapter.parse_file(to_parse, ignore_whitespace_text_nodes)
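The string-vs-file dispatch can be exercised without a real XML backend. A minimal sketch, assuming a hypothetical stand-in adapter (`FakeAdapter` is not part of xml4h):

```python
# Minimal sketch of parse()'s dispatch rule: a string containing '<'
# is treated as literal XML, anything else as a file path/file object.
# FakeAdapter is a hypothetical stand-in, not a real xml4h adapter.
class FakeAdapter(object):
    @staticmethod
    def parse_string(text, ignore_ws):
        return ('string', text)

    @staticmethod
    def parse_file(source, ignore_ws):
        return ('file', source)

def dispatch(to_parse, adapter=FakeAdapter):
    # Mirrors parse(): basestring on Python 2 becomes str on Python 3.
    if isinstance(to_parse, str) and '<' in to_parse:
        return adapter.parse_string(to_parse, True)
    return adapter.parse_file(to_parse, True)

print(dispatch('<doc/>'))   # literal XML -> parse_string
print(dispatch('doc.xml'))  # path -> parse_file
```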
def select_rows(cols, rows, mode='list', cast=True):
    """
    Yield data selected from rows.

    It is sometimes useful to select a subset of data from a profile.
    This function selects the data in *cols* from *rows* and yields it
    in a form specified by *mode*. Possible values of *mode* are:

    ================== ================= ==========================
    mode               description       example `['i-id', 'i-wf']`
    ================== ================= ==========================
    `'list'` (default) a list of values  `[10, 1]`
    `'dict'`           col to value map  `{'i-id': 10, 'i-wf': 1}`
    `'row'`            [incr tsdb()] row `'10@1'`
    ================== ================= ==========================

    Args:
        cols: an iterable of column names to select data for
        rows: the rows to select column data from
        mode: the form yielded data should take
        cast: if `True`, cast column values to their datatype
            (requires *rows* to be :class:`Record` objects)
    Yields:
        Selected data in the form specified by *mode*.
    """
    mode = mode.lower()
    if mode == 'list':
        modecast = lambda cols, data: data
    elif mode == 'dict':
        modecast = lambda cols, data: dict(zip(cols, data))
    elif mode == 'row':
        modecast = lambda cols, data: encode_row(data)
    else:
        raise ItsdbError('Invalid mode for select operation: {}\n'
                         '  Valid options include: list, dict, row'
                         .format(mode))
    for row in rows:
        try:
            data = [row.get(c, cast=cast) for c in cols]
        except TypeError:
            data = [row.get(c) for c in cols]
        yield modecast(cols, data)
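The three modes reduce to three small casting functions. A self-contained sketch over plain dict rows, with a simplified `encode_row` (the real [incr tsdb()] encoder also handles escaping) and without the `Record` casting:

```python
def encode_row(values):
    # Simplified [incr tsdb()] row encoder: fields joined with '@'.
    # The real encoder also escapes special characters.
    return '@'.join(str(v) for v in values)

def select_rows_demo(cols, rows, mode='list'):
    # Same mode dispatch as select_rows(), minus Record casting.
    modecast = {
        'list': lambda cols, data: data,
        'dict': lambda cols, data: dict(zip(cols, data)),
        'row': lambda cols, data: encode_row(data),
    }[mode.lower()]
    for row in rows:
        data = [row.get(c) for c in cols]
        yield modecast(cols, data)

rows = [{'i-id': 10, 'i-wf': 1}]
assert list(select_rows_demo(['i-id', 'i-wf'], rows)) == [[10, 1]]
assert list(select_rows_demo(['i-id', 'i-wf'], rows, mode='row')) == ['10@1']
```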
def _fsic_queuing_calc(fsic1, fsic2):
    """
    We set the lower counter between two same instance ids.
    If an instance_id exists in one fsic but not the other, we want to
    give that counter a value of 0.

    :param fsic1: dictionary containing (instance_id, counter) pairs
    :param fsic2: dictionary containing (instance_id, counter) pairs
    :return: ``dict`` of fsics to be used in queueing the correct records
        to the buffer
    """
    return {instance: fsic2.get(instance, 0)
            for instance, counter in six.iteritems(fsic1)
            if fsic2.get(instance, 0) < counter}
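On Python 3 the same comprehension works with `dict.items()` in place of `six.iteritems`. A quick check of the two documented cases, lower counter wins and a missing instance counts as 0:

```python
def fsic_queuing_calc(fsic1, fsic2):
    # For every instance whose counter in fsic2 is lower than in fsic1
    # (absent counts as 0), report fsic2's counter.
    return {inst: fsic2.get(inst, 0)
            for inst, counter in fsic1.items()
            if fsic2.get(inst, 0) < counter}

# 'a' is behind in fsic2; 'b' is missing there; 'c' is up to date.
assert fsic_queuing_calc({'a': 5, 'b': 3, 'c': 1},
                         {'a': 2, 'c': 1}) == {'a': 2, 'b': 0}
```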
def get_service_categories(self, restricted=True):
    """Return all service categories in the right order

    :param restricted: Client settings restrict categories
    :type restricted: bool
    :returns: Category catalog results
    :rtype: brains
    """
    bsc = api.get_tool("bika_setup_catalog")
    query = {
        "portal_type": "AnalysisCategory",
        "is_active": True,
        "sort_on": "sortable_title",
    }
    categories = bsc(query)
    client = self.get_client()
    if client and restricted:
        restricted_categories = client.getRestrictedCategories()
        restricted_category_ids = map(
            lambda c: c.getId(), restricted_categories)
        # keep correct order of categories
        if restricted_category_ids:
            categories = filter(
                lambda c: c.getId() in restricted_category_ids, categories)
    return categories
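The restriction step is an order-preserving filter of the already-sorted catalog results by a set of allowed ids. A framework-free sketch with a hypothetical `Category` stand-in (the real code operates on catalog brains):

```python
# Category is a hypothetical stand-in for a catalog brain.
class Category(object):
    def __init__(self, cid):
        self.cid = cid

    def getId(self):
        return self.cid

def restrict(categories, allowed_ids):
    # Filter the already-sorted categories, keeping their order.
    # Note the call c.getId() -- comparing the bound method object
    # itself against the id list would silently match nothing.
    if not allowed_ids:
        return categories
    return [c for c in categories if c.getId() in allowed_ids]

cats = [Category('metals'), Category('microbiology'), Category('water')]
kept = restrict(cats, ['water', 'metals'])
assert [c.getId() for c in kept] == ['metals', 'water']
```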
def registerSimulator(self, name=None, hdl=None, analyze_cmd=None,
                      elaborate_cmd=None, simulate_cmd=None):
    '''Registers an HDL _simulator

    name - str, user defined name, used to identify this _simulator record
    hdl - str, case insensitive, (verilog, vhdl), the HDL to which the
        simulated MyHDL code will be converted
    analyze_cmd - str, system command that will be run to analyze the
        generated HDL
    elaborate_cmd - str, optional, system command that will be run after
        the analyze phase
    simulate_cmd - str, system command that will be run to simulate the
        analyzed and elaborated design

    Before execution of a command string the following substitutions
    take place:
        {topname} is substituted with the name of the simulated MyHDL
        function
    '''
    if not isinstance(name, str) or (name.strip() == ""):
        raise ValueError("Invalid _simulator name")
    if hdl.lower() not in ("vhdl", "verilog"):
        raise ValueError("Invalid hdl {}".format(hdl))
    if not isinstance(analyze_cmd, str) or (analyze_cmd.strip() == ""):
        raise ValueError("Invalid analyzer command")
    if elaborate_cmd is not None:
        if not isinstance(elaborate_cmd, str) or (elaborate_cmd.strip() == ""):
            raise ValueError("Invalid elaborate_cmd command")
    if not isinstance(simulate_cmd, str) or (simulate_cmd.strip() == ""):
        raise ValueError("Invalid _simulator command")
    self.sim_reg[name] = (hdl.lower(), analyze_cmd, elaborate_cmd,
                          simulate_cmd)
def suck_out_variations_only(reporters):
    """Builds a dictionary of variations to canonical reporters.

    The dictionary takes the form of:

        {
            "A. 2d": ["A.2d"],
            ...
            "P.R.": ["Pen. & W.", "P.R.R.", "P."],
        }

    In other words, it's a dictionary that maps each variation to a list
    of reporters that it could possibly be referring to.
    """
    variations_out = {}
    for reporter_key, data_list in reporters.items():
        # For each reporter key...
        for data in data_list:
            # For each book it maps to...
            for variation_key, variation_value in data["variations"].items():
                try:
                    variations_list = variations_out[variation_key]
                    if variation_value not in variations_list:
                        variations_list.append(variation_value)
                except KeyError:
                    # The item wasn't there; add it.
                    variations_out[variation_key] = [variation_value]
    return variations_out
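A small worked input makes the inversion clearer. An equivalent sketch (using `setdefault` in place of the try/except) run on a toy reporters mapping whose values are hypothetical:

```python
def invert_variations(reporters):
    # Equivalent to suck_out_variations_only(), with setdefault
    # replacing the try/except KeyError dance.
    variations_out = {}
    for reporter_key, data_list in reporters.items():
        for data in data_list:
            for var_key, var_value in data["variations"].items():
                bucket = variations_out.setdefault(var_key, [])
                if var_value not in bucket:     # de-duplicate
                    bucket.append(var_value)
    return variations_out

# Toy input: each "variations" dict maps variation -> canonical edition.
reporters = {
    "P.R.": [
        {"variations": {"Pen. & W.": "P.R.", "P.": "P.R."}},
        {"variations": {"P.": "P.2d"}},
    ],
}
assert invert_variations(reporters) == {
    "Pen. & W.": ["P.R."],
    "P.": ["P.R.", "P.2d"],
}
```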
def file_response(data: Union[bytes, str],  # HttpResponse encodes str if req'd
                  content_type: str,
                  filename: str) -> HttpResponse:
    """
    Returns an ``HttpResponse`` serving the specified data as an
    attachment with the specified filename.
    """
    response = HttpResponse(data, content_type=content_type)
    add_download_filename(response, filename)
    return response
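A helper like `add_download_filename` ultimately sets a `Content-Disposition` header on the response. A framework-free sketch of the headers involved; the exact header formatting is an assumption here, and a real helper would also escape or encode the filename:

```python
def attachment_headers(content_type, filename):
    # The essence of an attachment response: a content type plus a
    # Content-Disposition header carrying the download filename.
    # NOTE: filename is used unescaped here for illustration only.
    return {
        'Content-Type': content_type,
        'Content-Disposition': 'attachment; filename="{}"'.format(filename),
    }

headers = attachment_headers('text/csv', 'report.csv')
assert headers['Content-Disposition'] == 'attachment; filename="report.csv"'
```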
def advReplace(document, search, replace, bs=3):
    """
    Replace all occurrences of a string with a different string, and
    return the updated document.

    This is a modified version of python-docx.replace() that takes into
    account blocks of <bs> elements at a time. The replace element can
    also be a string or an xml etree element.

    What it does:
    It searches the entire document body for text blocks.
    Then it scans those text blocks for *search*.
    Since the text to search for could be spanned across multiple text
    blocks, we need to adopt some sort of algorithm to handle this
    situation. The smaller matching group of blocks (up to bs) is then
    adopted.
    If the matching group has more than one block, blocks other than the
    first are cleared and all the replacement text is put on the first
    block.

    Examples:
    original text blocks : [ 'Hel', 'lo,', ' world!' ]
    search / replace: 'Hello,' / 'Hi!'
    output blocks : [ 'Hi!', '', ' world!' ]

    original text blocks : [ 'Hel', 'lo,', ' world!' ]
    search / replace: 'Hello, world' / 'Hi!'
    output blocks : [ 'Hi!!', '', '' ]

    original text blocks : [ 'Hel', 'lo,', ' world!' ]
    search / replace: 'Hel' / 'Hal'
    output blocks : [ 'Hal', 'lo,', ' world!' ]

    @param instance document: The original document
    @param str search: The text to search for (regexp)
    @param mixed replace: The replacement text or lxml.etree element to
        append, or a list of etree elements
    @param int bs: See above
    @return instance The document with replacement applied
    """
    # Enables debug output
    DEBUG = False
    newdocument = document
    # Compile the search regexp
    searchre = re.compile(search)
    # Will match against searchels. Searchels is a list that contains
    # the last n text elements found in the document. 1 < n < bs
    searchels = []
    for element in newdocument.iter():
        if element.tag == '{%s}t' % nsprefixes['w']:  # t (text) elements
            if element.text:
                # Add this element to searchels
                searchels.append(element)
                if len(searchels) > bs:
                    # If searchels is too long, remove the first element
                    searchels.pop(0)
                # Search all combinations of searchels, starting from
                # smaller windows up to bigger ones
                # l = search length
                # s = search start
                # e = element IDs to merge
                found = False
                for l in range(1, len(searchels) + 1):
                    if found:
                        break
                    for s in range(len(searchels)):
                        if found:
                            break
                        if s + l <= len(searchels):
                            e = range(s, s + l)
                            txtsearch = ''
                            for k in e:
                                txtsearch += searchels[k].text
                            # Searches for the text in the whole txtsearch
                            match = searchre.search(txtsearch)
                            if match:
                                found = True
                                # I've found something :)
                                if DEBUG:
                                    log.debug("Found element!")
                                    log.debug("Search regexp: %s",
                                              searchre.pattern)
                                    log.debug("Requested replacement: %s",
                                              replace)
                                    log.debug("Matched text: %s", txtsearch)
                                    log.debug("Matched text (split): %s",
                                              map(lambda i: i.text,
                                                  searchels))
                                    log.debug("Matched at position: %s",
                                              match.start())
                                    log.debug("Matched in elements: %s", e)
                                    if isinstance(replace, etree._Element):
                                        log.debug("Will replace with XML"
                                                  " CODE")
                                    elif isinstance(replace, (list, tuple)):
                                        log.debug("Will replace with LIST OF"
                                                  " ELEMENTS")
                                    else:
                                        log.debug("Will replace with: %s",
                                                  re.sub(search, replace,
                                                         txtsearch))
                                curlen = 0
                                replaced = False
                                for i in e:
                                    curlen += len(searchels[i].text)
                                    if curlen > match.start() and not replaced:
                                        # The match occurred in THIS element.
                                        # Put in the whole replaced text
                                        if isinstance(replace,
                                                      etree._Element):
                                            # Convert to a list and process
                                            # it later
                                            replace = [replace]
                                        if isinstance(replace, (list, tuple)):
                                            # I'm replacing with a list of
                                            # etree elements: clear the text
                                            # in the tag and append the
                                            # elements after the parent
                                            # paragraph (because t elements
                                            # cannot have children)
                                            p = findTypeParent(
                                                searchels[i],
                                                '{%s}p' % nsprefixes['w'])
                                            searchels[i].text = re.sub(
                                                search, '', txtsearch)
                                            insindex = \
                                                p.getparent().index(p) + 1
                                            for r in replace:
                                                p.getparent().insert(
                                                    insindex, r)
                                                insindex += 1
                                        else:
                                            # Replacing with pure text
                                            searchels[i].text = re.sub(
                                                search, replace, txtsearch)
                                        replaced = True
                                        log.debug(
                                            "Replacing in element #: %s", i)
                                    else:
                                        # Clear the other text elements
                                        searchels[i].text = ''
    return newdocument
document
This is a modified version of python-docx.replace() that takes into
account blocks of <bs> elements at a time. The replace element can also
be a string or an xml etree element.
What it does:
It searches the entire document body for text blocks.
Then scan thos text blocks for replace.
Since the text to search could be spawned across multiple text blocks,
we need to adopt some sort of algorithm to handle this situation.
The smaller matching group of blocks (up to bs) is then adopted.
If the matching group has more than one block, blocks other than first
are cleared and all the replacement text is put on first block.
Examples:
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hello,' / 'Hi!'
output blocks : [ 'Hi!', '', ' world!' ]
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hello, world' / 'Hi!'
output blocks : [ 'Hi!!', '', '' ]
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hel' / 'Hal'
output blocks : [ 'Hal', 'lo,', ' world!' ]
@param instance document: The original document
@param str search: The text to search for (regexp)
@param mixed replace: The replacement text or lxml.etree element to
append, or a list of etree elements
@param int bs: See above
@return instance The document with replacement applied | Below is the the instruction that describes the task:
### Input:
Replace all occurences of string with a different string, return updated
document
This is a modified version of python-docx.replace() that takes into
account blocks of <bs> elements at a time. The replace element can also
be a string or an xml etree element.
What it does:
It searches the entire document body for text blocks.
Then scan thos text blocks for replace.
Since the text to search could be spawned across multiple text blocks,
we need to adopt some sort of algorithm to handle this situation.
The smaller matching group of blocks (up to bs) is then adopted.
If the matching group has more than one block, blocks other than first
are cleared and all the replacement text is put on first block.
Examples:
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hello,' / 'Hi!'
output blocks : [ 'Hi!', '', ' world!' ]
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hello, world' / 'Hi!'
output blocks : [ 'Hi!!', '', '' ]
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hel' / 'Hal'
output blocks : [ 'Hal', 'lo,', ' world!' ]
@param instance document: The original document
@param str search: The text to search for (regexp)
@param mixed replace: The replacement text or lxml.etree element to
append, or a list of etree elements
@param int bs: See above
@return instance The document with replacement applied
### Response:
def advReplace(document, search, replace, bs=3):
"""
Replace all occurences of string with a different string, return updated
document
This is a modified version of python-docx.replace() that takes into
account blocks of <bs> elements at a time. The replace element can also
be a string or an xml etree element.
What it does:
It searches the entire document body for text blocks.
Then scan thos text blocks for replace.
Since the text to search could be spawned across multiple text blocks,
we need to adopt some sort of algorithm to handle this situation.
The smaller matching group of blocks (up to bs) is then adopted.
If the matching group has more than one block, blocks other than first
are cleared and all the replacement text is put on first block.
Examples:
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hello,' / 'Hi!'
output blocks : [ 'Hi!', '', ' world!' ]
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hello, world' / 'Hi!'
output blocks : [ 'Hi!!', '', '' ]
original text blocks : [ 'Hel', 'lo,', ' world!' ]
search / replace: 'Hel' / 'Hal'
output blocks : [ 'Hal', 'lo,', ' world!' ]
@param instance document: The original document
@param str search: The text to search for (regexp)
@param mixed replace: The replacement text or lxml.etree element to
append, or a list of etree elements
@param int bs: See above
@return instance The document with replacement applied
"""
# Enables debug output
DEBUG = False
newdocument = document
# Compile the search regexp
searchre = re.compile(search)
# Will match against searchels. Searchels is a list that contains last
# n text elements found in the document. 1 < n < bs
searchels = []
for element in newdocument.iter():
if element.tag == '{%s}t' % nsprefixes['w']: # t (text) elements
if element.text:
# Add this element to searchels
searchels.append(element)
if len(searchels) > bs:
# Is searchels is too long, remove first elements
searchels.pop(0)
# Search all combinations, of searchels, starting from
# smaller up to bigger ones
# l = search lenght
# s = search start
# e = element IDs to merge
found = False
for l in range(1, len(searchels)+1):
if found:
break
#print "slen:", l
for s in range(len(searchels)):
if found:
break
if s+l <= len(searchels):
e = range(s, s+l)
#print "elems:", e
txtsearch = ''
for k in e:
txtsearch += searchels[k].text
# Searcs for the text in the whole txtsearch
match = searchre.search(txtsearch)
if match:
found = True
# I've found something :)
if DEBUG:
log.debug("Found element!")
log.debug("Search regexp: %s",
searchre.pattern)
log.debug("Requested replacement: %s",
replace)
log.debug("Matched text: %s", txtsearch)
log.debug("Matched text (splitted): %s",
map(lambda i: i.text, searchels))
log.debug("Matched at position: %s",
match.start())
log.debug("matched in elements: %s", e)
if isinstance(replace, etree._Element):
log.debug("Will replace with XML CODE")
elif isinstance(replace(list, tuple)):
log.debug("Will replace with LIST OF"
" ELEMENTS")
else:
log.debug("Will replace with:",
re.sub(search, replace,
txtsearch))
curlen = 0
replaced = False
for i in e:
curlen += len(searchels[i].text)
if curlen > match.start() and not replaced:
# The match occurred in THIS element.
# Puth in the whole replaced text
if isinstance(replace, etree._Element):
# Convert to a list and process
# it later
replace = [replace]
if isinstance(replace, (list, tuple)):
# I'm replacing with a list of
# etree elements
# clear the text in the tag and
# append the element after the
# parent paragraph
# (because t elements cannot have
# childs)
p = findTypeParent(
searchels[i],
'{%s}p' % nsprefixes['w'])
searchels[i].text = re.sub(
search, '', txtsearch)
insindex = p.getparent().index(p)+1
for r in replace:
p.getparent().insert(
insindex, r)
insindex += 1
else:
# Replacing with pure text
searchels[i].text = re.sub(
search, replace, txtsearch)
replaced = True
log.debug(
"Replacing in element #: %s", i)
else:
# Clears the other text elements
searchels[i].text = ''
return newdocument |
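The search/replace walk above — growing runs of adjacent text elements until the regex matches across their concatenated text, then substituting in the element where the match starts and clearing the rest — can be sketched on plain strings. The name `replace_across_fragments` is illustrative, not part of the original module:

```python
import re

def replace_across_fragments(fragments, pattern, replacement):
    """Find the smallest contiguous run of fragments whose concatenated
    text matches `pattern`; substitute over the whole run's text in the
    fragment holding the match start, and blank out the rest of the run."""
    searchre = re.compile(pattern)
    for length in range(1, len(fragments) + 1):        # smaller runs first
        for start in range(len(fragments) - length + 1):
            ids = range(start, start + length)
            joined = ''.join(fragments[k] for k in ids)
            match = searchre.search(joined)
            if not match:
                continue
            curlen = 0
            replaced = False
            for i in ids:
                curlen += len(fragments[i])
                if curlen > match.start() and not replaced:
                    # The match begins in THIS fragment: put the whole
                    # substituted run text here ...
                    fragments[i] = searchre.sub(replacement, joined)
                    replaced = True
                else:
                    # ... and clear the other fragments of the run, since
                    # their text is already included in `joined`.
                    fragments[i] = ''
            return fragments
    return fragments
```

Trying shorter runs first keeps the edit as local as possible, just as the original loops over `l` before `s`.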
def _construct_update(self, outgoing_route):
"""Construct update message with Outgoing-routes path attribute
appropriately cloned/copied/updated.
"""
update = None
path = outgoing_route.path
# Get copy of path's path attributes.
pathattr_map = path.pathattr_map
new_pathattr = []
if path.is_withdraw:
if isinstance(path, Ipv4Path):
update = BGPUpdate(withdrawn_routes=[path.nlri])
return update
else:
mpunreach_attr = BGPPathAttributeMpUnreachNLRI(
path.route_family.afi, path.route_family.safi, [path.nlri]
)
new_pathattr.append(mpunreach_attr)
elif self.is_route_server_client:
nlri_list = [path.nlri]
new_pathattr.extend(pathattr_map.values())
else:
if self.is_route_reflector_client:
                # Append ORIGINATOR_ID attribute if it does not already exist.
if BGP_ATTR_TYPE_ORIGINATOR_ID not in pathattr_map:
originator_id = path.source
if originator_id is None:
originator_id = self._common_conf.router_id
elif isinstance(path.source, Peer):
originator_id = path.source.ip_address
new_pathattr.append(
BGPPathAttributeOriginatorId(value=originator_id))
                # Prepend own CLUSTER_ID to the CLUSTER_LIST attribute if it exists.
# Otherwise append CLUSTER_LIST attribute.
cluster_lst_attr = pathattr_map.get(BGP_ATTR_TYPE_CLUSTER_LIST)
if cluster_lst_attr:
cluster_list = list(cluster_lst_attr.value)
if self._common_conf.cluster_id not in cluster_list:
cluster_list.insert(0, self._common_conf.cluster_id)
new_pathattr.append(
BGPPathAttributeClusterList(cluster_list))
else:
new_pathattr.append(
BGPPathAttributeClusterList(
[self._common_conf.cluster_id]))
# Supported and un-supported/unknown attributes.
origin_attr = None
nexthop_attr = None
as_path_attr = None
as4_path_attr = None
aggregator_attr = None
as4_aggregator_attr = None
extcomm_attr = None
community_attr = None
localpref_attr = None
pmsi_tunnel_attr = None
unknown_opttrans_attrs = None
nlri_list = [path.nlri]
if path.route_family.safi in (subaddr_family.IP_FLOWSPEC,
subaddr_family.VPN_FLOWSPEC):
# Flow Specification does not have next_hop.
next_hop = []
elif self.is_ebgp_peer():
next_hop = self._session_next_hop(path)
if path.is_local() and path.has_nexthop():
next_hop = path.nexthop
else:
next_hop = path.nexthop
# RFC 4271 allows us to change next_hop
# if configured to announce its own ip address.
# Also if the BGP route is configured without next_hop,
# we use path._session_next_hop() as next_hop.
if (self._neigh_conf.is_next_hop_self
or (path.is_local() and not path.has_nexthop())):
next_hop = self._session_next_hop(path)
LOG.debug('using %s as a next_hop address instead'
' of path.nexthop %s', next_hop, path.nexthop)
nexthop_attr = BGPPathAttributeNextHop(next_hop)
assert nexthop_attr, 'Missing NEXTHOP mandatory attribute.'
if not isinstance(path, Ipv4Path):
# We construct mpreach-nlri attribute.
mpnlri_attr = BGPPathAttributeMpReachNLRI(
path.route_family.afi,
path.route_family.safi,
next_hop,
nlri_list
)
# ORIGIN Attribute.
# According to RFC this attribute value SHOULD NOT be changed by
# any other speaker.
origin_attr = pathattr_map.get(BGP_ATTR_TYPE_ORIGIN)
assert origin_attr, 'Missing ORIGIN mandatory attribute.'
# AS_PATH Attribute.
# Construct AS-path-attr using paths AS_PATH attr. with local AS as
# first item.
path_aspath = pathattr_map.get(BGP_ATTR_TYPE_AS_PATH)
assert path_aspath, 'Missing AS_PATH mandatory attribute.'
# Deep copy AS_PATH attr value
as_path_list = path_aspath.path_seg_list
            # If this is an iBGP peer.
if not self.is_ebgp_peer():
# When a given BGP speaker advertises the route to an internal
# peer, the advertising speaker SHALL NOT modify the AS_PATH
# attribute associated with the route.
pass
else:
# When a given BGP speaker advertises the route to an external
# peer, the advertising speaker updates the AS_PATH attribute
# as follows:
# 1) if the first path segment of the AS_PATH is of type
# AS_SEQUENCE, the local system prepends its own AS num as
# the last element of the sequence (put it in the left-most
# position with respect to the position of octets in the
# protocol message). If the act of prepending will cause an
# overflow in the AS_PATH segment (i.e., more than 255
# ASes), it SHOULD prepend a new segment of type AS_SEQUENCE
# and prepend its own AS number to this new segment.
#
# 2) if the first path segment of the AS_PATH is of type AS_SET
# , the local system prepends a new path segment of type
# AS_SEQUENCE to the AS_PATH, including its own AS number in
# that segment.
#
# 3) if the AS_PATH is empty, the local system creates a path
# segment of type AS_SEQUENCE, places its own AS into that
# segment, and places that segment into the AS_PATH.
if (len(as_path_list) > 0 and
isinstance(as_path_list[0], list) and
len(as_path_list[0]) < 255):
as_path_list[0].insert(0, self.local_as)
else:
as_path_list.insert(0, [self.local_as])
# Construct AS4_PATH list from AS_PATH list and swap
# non-mappable AS number with AS_TRANS in AS_PATH.
as_path_list, as4_path_list = self._trans_as_path(
as_path_list)
# If the neighbor supports Four-Octet AS number, send AS_PATH
# in Four-Octet.
if self.is_four_octet_as_number_cap_valid():
as_path_attr = BGPPathAttributeAsPath(
as_path_list, as_pack_str='!I') # specify Four-Octet.
# Otherwise, send AS_PATH in Two-Octet.
else:
as_path_attr = BGPPathAttributeAsPath(as_path_list)
# If needed, send AS4_PATH attribute.
if as4_path_list:
as4_path_attr = BGPPathAttributeAs4Path(as4_path_list)
# AGGREGATOR Attribute.
aggregator_attr = pathattr_map.get(BGP_ATTR_TYPE_AGGREGATOR)
# If the neighbor does not support Four-Octet AS number,
# swap non-mappable AS number with AS_TRANS.
if (aggregator_attr and
not self.is_four_octet_as_number_cap_valid()):
# If AS number of AGGREGATOR is Four-Octet AS number,
# swap with AS_TRANS, else do not.
aggregator_as_number = aggregator_attr.as_number
if not is_valid_old_asn(aggregator_as_number):
aggregator_attr = bgp.BGPPathAttributeAggregator(
bgp.AS_TRANS, aggregator_attr.addr)
as4_aggregator_attr = bgp.BGPPathAttributeAs4Aggregator(
aggregator_as_number, aggregator_attr.addr)
# MULTI_EXIT_DISC Attribute.
# For eBGP session we can send multi-exit-disc if configured.
multi_exit_disc = None
if self.is_ebgp_peer():
if self._neigh_conf.multi_exit_disc:
multi_exit_disc = BGPPathAttributeMultiExitDisc(
self._neigh_conf.multi_exit_disc
)
else:
pass
if not self.is_ebgp_peer():
multi_exit_disc = pathattr_map.get(
BGP_ATTR_TYPE_MULTI_EXIT_DISC)
# LOCAL_PREF Attribute.
if not self.is_ebgp_peer():
# For iBGP peers we are required to send local-pref attribute
# for connected or local prefixes. We check if the path matches
# attribute_maps and set local-pref value.
# If the path doesn't match, we set default local-pref given
# from the user. The default value is 100.
localpref_attr = BGPPathAttributeLocalPref(
self._common_conf.local_pref)
key = const.ATTR_MAPS_LABEL_DEFAULT
if isinstance(path, (Vpnv4Path, Vpnv6Path)):
nlri = nlri_list[0]
rf = VRF_RF_IPV4 if isinstance(path, Vpnv4Path)\
else VRF_RF_IPV6
key = ':'.join([nlri.route_dist, rf])
attr_type = AttributeMap.ATTR_LOCAL_PREF
at_maps = self._attribute_maps.get(key, {})
result = self._lookup_attribute_map(at_maps, attr_type, path)
if result:
localpref_attr = result
# COMMUNITY Attribute.
community_attr = pathattr_map.get(BGP_ATTR_TYPE_COMMUNITIES)
# EXTENDED COMMUNITY Attribute.
# Construct ExtCommunity path-attr based on given.
path_extcomm_attr = pathattr_map.get(
BGP_ATTR_TYPE_EXTENDED_COMMUNITIES
)
if path_extcomm_attr:
# SOO list can be configured per VRF and/or per Neighbor.
                # If NeighborConf has this setting, we add it to the existing list.
communities = path_extcomm_attr.communities
if self._neigh_conf.soo_list:
# construct extended community
soo_list = self._neigh_conf.soo_list
subtype = 0x03
for soo in soo_list:
first, second = soo.split(':')
if '.' in first:
c = BGPIPv4AddressSpecificExtendedCommunity(
subtype=subtype,
ipv4_address=first,
local_administrator=int(second))
else:
c = BGPTwoOctetAsSpecificExtendedCommunity(
subtype=subtype,
as_number=int(first),
local_administrator=int(second))
communities.append(c)
extcomm_attr = BGPPathAttributeExtendedCommunities(
communities=communities
)
pmsi_tunnel_attr = pathattr_map.get(
BGP_ATTR_TYEP_PMSI_TUNNEL_ATTRIBUTE
)
# UNKNOWN Attributes.
# Get optional transitive path attributes
unknown_opttrans_attrs = bgp_utils.get_unknown_opttrans_attr(path)
# Ordering path attributes according to type as RFC says. We set
# MPReachNLRI first as advised by experts as a new trend in BGP
# implementation.
if isinstance(path, Ipv4Path):
new_pathattr.append(nexthop_attr)
else:
new_pathattr.append(mpnlri_attr)
new_pathattr.append(origin_attr)
new_pathattr.append(as_path_attr)
if as4_path_attr:
new_pathattr.append(as4_path_attr)
if aggregator_attr:
new_pathattr.append(aggregator_attr)
if as4_aggregator_attr:
new_pathattr.append(as4_aggregator_attr)
if multi_exit_disc:
new_pathattr.append(multi_exit_disc)
if localpref_attr:
new_pathattr.append(localpref_attr)
if community_attr:
new_pathattr.append(community_attr)
if extcomm_attr:
new_pathattr.append(extcomm_attr)
if pmsi_tunnel_attr:
new_pathattr.append(pmsi_tunnel_attr)
if unknown_opttrans_attrs:
new_pathattr.extend(unknown_opttrans_attrs.values())
if isinstance(path, Ipv4Path):
update = BGPUpdate(path_attributes=new_pathattr,
nlri=nlri_list)
else:
update = BGPUpdate(path_attributes=new_pathattr)
return update | Construct update message with Outgoing-routes path attribute
appropriately cloned/copied/updated. | Below is the instruction that describes the task:
### Input:
Construct update message with Outgoing-routes path attribute
appropriately cloned/copied/updated.
### Response:
def _construct_update(self, outgoing_route):
"""Construct update message with Outgoing-routes path attribute
appropriately cloned/copied/updated.
"""
update = None
path = outgoing_route.path
# Get copy of path's path attributes.
pathattr_map = path.pathattr_map
new_pathattr = []
if path.is_withdraw:
if isinstance(path, Ipv4Path):
update = BGPUpdate(withdrawn_routes=[path.nlri])
return update
else:
mpunreach_attr = BGPPathAttributeMpUnreachNLRI(
path.route_family.afi, path.route_family.safi, [path.nlri]
)
new_pathattr.append(mpunreach_attr)
elif self.is_route_server_client:
nlri_list = [path.nlri]
new_pathattr.extend(pathattr_map.values())
else:
if self.is_route_reflector_client:
                # Append ORIGINATOR_ID attribute if it does not already exist.
if BGP_ATTR_TYPE_ORIGINATOR_ID not in pathattr_map:
originator_id = path.source
if originator_id is None:
originator_id = self._common_conf.router_id
elif isinstance(path.source, Peer):
originator_id = path.source.ip_address
new_pathattr.append(
BGPPathAttributeOriginatorId(value=originator_id))
                # Prepend own CLUSTER_ID to the CLUSTER_LIST attribute if it exists.
# Otherwise append CLUSTER_LIST attribute.
cluster_lst_attr = pathattr_map.get(BGP_ATTR_TYPE_CLUSTER_LIST)
if cluster_lst_attr:
cluster_list = list(cluster_lst_attr.value)
if self._common_conf.cluster_id not in cluster_list:
cluster_list.insert(0, self._common_conf.cluster_id)
new_pathattr.append(
BGPPathAttributeClusterList(cluster_list))
else:
new_pathattr.append(
BGPPathAttributeClusterList(
[self._common_conf.cluster_id]))
# Supported and un-supported/unknown attributes.
origin_attr = None
nexthop_attr = None
as_path_attr = None
as4_path_attr = None
aggregator_attr = None
as4_aggregator_attr = None
extcomm_attr = None
community_attr = None
localpref_attr = None
pmsi_tunnel_attr = None
unknown_opttrans_attrs = None
nlri_list = [path.nlri]
if path.route_family.safi in (subaddr_family.IP_FLOWSPEC,
subaddr_family.VPN_FLOWSPEC):
# Flow Specification does not have next_hop.
next_hop = []
elif self.is_ebgp_peer():
next_hop = self._session_next_hop(path)
if path.is_local() and path.has_nexthop():
next_hop = path.nexthop
else:
next_hop = path.nexthop
# RFC 4271 allows us to change next_hop
# if configured to announce its own ip address.
# Also if the BGP route is configured without next_hop,
# we use path._session_next_hop() as next_hop.
if (self._neigh_conf.is_next_hop_self
or (path.is_local() and not path.has_nexthop())):
next_hop = self._session_next_hop(path)
LOG.debug('using %s as a next_hop address instead'
' of path.nexthop %s', next_hop, path.nexthop)
nexthop_attr = BGPPathAttributeNextHop(next_hop)
assert nexthop_attr, 'Missing NEXTHOP mandatory attribute.'
if not isinstance(path, Ipv4Path):
# We construct mpreach-nlri attribute.
mpnlri_attr = BGPPathAttributeMpReachNLRI(
path.route_family.afi,
path.route_family.safi,
next_hop,
nlri_list
)
# ORIGIN Attribute.
# According to RFC this attribute value SHOULD NOT be changed by
# any other speaker.
origin_attr = pathattr_map.get(BGP_ATTR_TYPE_ORIGIN)
assert origin_attr, 'Missing ORIGIN mandatory attribute.'
# AS_PATH Attribute.
# Construct AS-path-attr using paths AS_PATH attr. with local AS as
# first item.
path_aspath = pathattr_map.get(BGP_ATTR_TYPE_AS_PATH)
assert path_aspath, 'Missing AS_PATH mandatory attribute.'
# Deep copy AS_PATH attr value
as_path_list = path_aspath.path_seg_list
            # If this is an iBGP peer.
if not self.is_ebgp_peer():
# When a given BGP speaker advertises the route to an internal
# peer, the advertising speaker SHALL NOT modify the AS_PATH
# attribute associated with the route.
pass
else:
# When a given BGP speaker advertises the route to an external
# peer, the advertising speaker updates the AS_PATH attribute
# as follows:
# 1) if the first path segment of the AS_PATH is of type
# AS_SEQUENCE, the local system prepends its own AS num as
# the last element of the sequence (put it in the left-most
# position with respect to the position of octets in the
# protocol message). If the act of prepending will cause an
# overflow in the AS_PATH segment (i.e., more than 255
# ASes), it SHOULD prepend a new segment of type AS_SEQUENCE
# and prepend its own AS number to this new segment.
#
# 2) if the first path segment of the AS_PATH is of type AS_SET
# , the local system prepends a new path segment of type
# AS_SEQUENCE to the AS_PATH, including its own AS number in
# that segment.
#
# 3) if the AS_PATH is empty, the local system creates a path
# segment of type AS_SEQUENCE, places its own AS into that
# segment, and places that segment into the AS_PATH.
if (len(as_path_list) > 0 and
isinstance(as_path_list[0], list) and
len(as_path_list[0]) < 255):
as_path_list[0].insert(0, self.local_as)
else:
as_path_list.insert(0, [self.local_as])
# Construct AS4_PATH list from AS_PATH list and swap
# non-mappable AS number with AS_TRANS in AS_PATH.
as_path_list, as4_path_list = self._trans_as_path(
as_path_list)
# If the neighbor supports Four-Octet AS number, send AS_PATH
# in Four-Octet.
if self.is_four_octet_as_number_cap_valid():
as_path_attr = BGPPathAttributeAsPath(
as_path_list, as_pack_str='!I') # specify Four-Octet.
# Otherwise, send AS_PATH in Two-Octet.
else:
as_path_attr = BGPPathAttributeAsPath(as_path_list)
# If needed, send AS4_PATH attribute.
if as4_path_list:
as4_path_attr = BGPPathAttributeAs4Path(as4_path_list)
# AGGREGATOR Attribute.
aggregator_attr = pathattr_map.get(BGP_ATTR_TYPE_AGGREGATOR)
# If the neighbor does not support Four-Octet AS number,
# swap non-mappable AS number with AS_TRANS.
if (aggregator_attr and
not self.is_four_octet_as_number_cap_valid()):
# If AS number of AGGREGATOR is Four-Octet AS number,
# swap with AS_TRANS, else do not.
aggregator_as_number = aggregator_attr.as_number
if not is_valid_old_asn(aggregator_as_number):
aggregator_attr = bgp.BGPPathAttributeAggregator(
bgp.AS_TRANS, aggregator_attr.addr)
as4_aggregator_attr = bgp.BGPPathAttributeAs4Aggregator(
aggregator_as_number, aggregator_attr.addr)
# MULTI_EXIT_DISC Attribute.
# For eBGP session we can send multi-exit-disc if configured.
multi_exit_disc = None
if self.is_ebgp_peer():
if self._neigh_conf.multi_exit_disc:
multi_exit_disc = BGPPathAttributeMultiExitDisc(
self._neigh_conf.multi_exit_disc
)
else:
pass
if not self.is_ebgp_peer():
multi_exit_disc = pathattr_map.get(
BGP_ATTR_TYPE_MULTI_EXIT_DISC)
# LOCAL_PREF Attribute.
if not self.is_ebgp_peer():
# For iBGP peers we are required to send local-pref attribute
# for connected or local prefixes. We check if the path matches
# attribute_maps and set local-pref value.
# If the path doesn't match, we set default local-pref given
# from the user. The default value is 100.
localpref_attr = BGPPathAttributeLocalPref(
self._common_conf.local_pref)
key = const.ATTR_MAPS_LABEL_DEFAULT
if isinstance(path, (Vpnv4Path, Vpnv6Path)):
nlri = nlri_list[0]
rf = VRF_RF_IPV4 if isinstance(path, Vpnv4Path)\
else VRF_RF_IPV6
key = ':'.join([nlri.route_dist, rf])
attr_type = AttributeMap.ATTR_LOCAL_PREF
at_maps = self._attribute_maps.get(key, {})
result = self._lookup_attribute_map(at_maps, attr_type, path)
if result:
localpref_attr = result
# COMMUNITY Attribute.
community_attr = pathattr_map.get(BGP_ATTR_TYPE_COMMUNITIES)
# EXTENDED COMMUNITY Attribute.
# Construct ExtCommunity path-attr based on given.
path_extcomm_attr = pathattr_map.get(
BGP_ATTR_TYPE_EXTENDED_COMMUNITIES
)
if path_extcomm_attr:
# SOO list can be configured per VRF and/or per Neighbor.
                # If NeighborConf has this setting, we add it to the existing list.
communities = path_extcomm_attr.communities
if self._neigh_conf.soo_list:
# construct extended community
soo_list = self._neigh_conf.soo_list
subtype = 0x03
for soo in soo_list:
first, second = soo.split(':')
if '.' in first:
c = BGPIPv4AddressSpecificExtendedCommunity(
subtype=subtype,
ipv4_address=first,
local_administrator=int(second))
else:
c = BGPTwoOctetAsSpecificExtendedCommunity(
subtype=subtype,
as_number=int(first),
local_administrator=int(second))
communities.append(c)
extcomm_attr = BGPPathAttributeExtendedCommunities(
communities=communities
)
pmsi_tunnel_attr = pathattr_map.get(
BGP_ATTR_TYEP_PMSI_TUNNEL_ATTRIBUTE
)
# UNKNOWN Attributes.
# Get optional transitive path attributes
unknown_opttrans_attrs = bgp_utils.get_unknown_opttrans_attr(path)
# Ordering path attributes according to type as RFC says. We set
# MPReachNLRI first as advised by experts as a new trend in BGP
# implementation.
if isinstance(path, Ipv4Path):
new_pathattr.append(nexthop_attr)
else:
new_pathattr.append(mpnlri_attr)
new_pathattr.append(origin_attr)
new_pathattr.append(as_path_attr)
if as4_path_attr:
new_pathattr.append(as4_path_attr)
if aggregator_attr:
new_pathattr.append(aggregator_attr)
if as4_aggregator_attr:
new_pathattr.append(as4_aggregator_attr)
if multi_exit_disc:
new_pathattr.append(multi_exit_disc)
if localpref_attr:
new_pathattr.append(localpref_attr)
if community_attr:
new_pathattr.append(community_attr)
if extcomm_attr:
new_pathattr.append(extcomm_attr)
if pmsi_tunnel_attr:
new_pathattr.append(pmsi_tunnel_attr)
if unknown_opttrans_attrs:
new_pathattr.extend(unknown_opttrans_attrs.values())
if isinstance(path, Ipv4Path):
update = BGPUpdate(path_attributes=new_pathattr,
nlri=nlri_list)
else:
update = BGPUpdate(path_attributes=new_pathattr)
return update |
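The AS_PATH prepending rules quoted from RFC 4271 (Section 5.1.2) reduce to a small helper. This standalone sketch mirrors rules 1–3 above, representing AS_SEQUENCE segments as lists and AS_SET segments as sets; the name `prepend_local_as` is hypothetical:

```python
def prepend_local_as(as_path_list, local_as):
    """Prepend the local AS number to an AS_PATH segment list when
    advertising to an external peer (RFC 4271, Sec. 5.1.2 sketch)."""
    if (len(as_path_list) > 0 and
            isinstance(as_path_list[0], list) and
            len(as_path_list[0]) < 255):
        # Rule 1: first segment is an AS_SEQUENCE with room left,
        # so prepend into the existing segment.
        as_path_list[0].insert(0, local_as)
    else:
        # Rules 2 and 3: first segment is an AS_SET, is full, or the
        # AS_PATH is empty -- start a new AS_SEQUENCE segment.
        as_path_list.insert(0, [local_as])
    return as_path_list
```

This is the same branch structure as the `as_path_list` manipulation in the method above, isolated for clarity.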
def mtx_refl(nv, reps=1):
""" Generate block-diagonal reflection matrix about nv.
reps must be >=1 and indicates the number of times the reflection
matrix should be repeated along the block diagonal. Typically this
will be the number of atoms in a geometry.
.. todo:: Complete mtx_refl docstring
"""
# Imports
import numpy as np
from scipy import linalg as spla
from ..const import PRM
# Ensure |nv| is large enough for confident directionality
if spla.norm(nv) < PRM.ZERO_VEC_TOL:
raise ValueError("Norm of 'nv' is too small.")
## end if
# Ensure nv is a normalized np.float64 3-vector
nv = make_nd_vec(nv, nd=3, t=np.float64, norm=True)
# Ensure reps is a positive scalar integer
if not np.isscalar(reps):
raise ValueError("'reps' must be scalar.")
## end if
if not np.issubdtype(type(reps), int):
raise ValueError("'reps' must be an integer.")
## end if
if not reps > 0:
raise ValueError("'reps' must be a positive integer.")
## end if
# Initialize the single-point reflection transform matrix
base_mtx = np.zeros(shape=(3,3), dtype=np.float64)
# Construct the single-point transform matrix
for i in range(3):
for j in range(i,3):
if i==j:
base_mtx[i,j] = 1 - 2*nv[i]**2
else:
base_mtx[i,j] = base_mtx[j,i] = -2*nv[i]*nv[j]
## end if
## next j
## next i
# Construct the block-diagonal replicated reflection matrix
    refl_mtx = spla.block_diag(*[base_mtx for i in range(reps)])
# Return the result
return refl_mtx | Generate block-diagonal reflection matrix about nv.
reps must be >=1 and indicates the number of times the reflection
matrix should be repeated along the block diagonal. Typically this
will be the number of atoms in a geometry.
.. todo:: Complete mtx_refl docstring | Below is the instruction that describes the task:
### Input:
Generate block-diagonal reflection matrix about nv.
reps must be >=1 and indicates the number of times the reflection
matrix should be repeated along the block diagonal. Typically this
will be the number of atoms in a geometry.
.. todo:: Complete mtx_refl docstring
### Response:
def mtx_refl(nv, reps=1):
""" Generate block-diagonal reflection matrix about nv.
reps must be >=1 and indicates the number of times the reflection
matrix should be repeated along the block diagonal. Typically this
will be the number of atoms in a geometry.
.. todo:: Complete mtx_refl docstring
"""
# Imports
import numpy as np
from scipy import linalg as spla
from ..const import PRM
# Ensure |nv| is large enough for confident directionality
if spla.norm(nv) < PRM.ZERO_VEC_TOL:
raise ValueError("Norm of 'nv' is too small.")
## end if
# Ensure nv is a normalized np.float64 3-vector
nv = make_nd_vec(nv, nd=3, t=np.float64, norm=True)
# Ensure reps is a positive scalar integer
if not np.isscalar(reps):
raise ValueError("'reps' must be scalar.")
## end if
if not np.issubdtype(type(reps), int):
raise ValueError("'reps' must be an integer.")
## end if
if not reps > 0:
raise ValueError("'reps' must be a positive integer.")
## end if
# Initialize the single-point reflection transform matrix
base_mtx = np.zeros(shape=(3,3), dtype=np.float64)
# Construct the single-point transform matrix
for i in range(3):
for j in range(i,3):
if i==j:
base_mtx[i,j] = 1 - 2*nv[i]**2
else:
base_mtx[i,j] = base_mtx[j,i] = -2*nv[i]*nv[j]
## end if
## next j
## next i
# Construct the block-diagonal replicated reflection matrix
refl_mtx= spla.block_diag(*[base_mtx for i in range(reps)])
# Return the result
return refl_mtx |
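The single-point transform built entrywise in the loop above is the Householder reflection R = I − 2·n·nᵀ about the plane with unit normal n. A dependency-free sketch of that 3×3 block, using plain lists instead of NumPy arrays (the name `reflection_matrix` is assumed):

```python
def reflection_matrix(nv, tol=1e-10):
    """Build the 3x3 Householder reflection R = I - 2 * n n^T, where n
    is the normalized input 3-vector nv."""
    norm = sum(c * c for c in nv) ** 0.5
    if norm < tol:
        raise ValueError("Norm of 'nv' is too small.")
    n = [c / norm for c in nv]
    # Diagonal entries are 1 - 2*n_i^2; off-diagonals are -2*n_i*n_j,
    # matching the symmetric fill in the loop above.
    return [[(1.0 if i == j else 0.0) - 2.0 * n[i] * n[j]
             for j in range(3)]
            for i in range(3)]
```

Repeating this block along the diagonal (as `spla.block_diag` does above) applies the same reflection independently to each atom's coordinates.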
async def _update_firmware(filename, loop):
"""
Currently uses the robot singleton from the API server to connect to
Smoothie. Those calls should be separated out from the singleton so it can
be used directly without requiring a full initialization of the API robot.
"""
try:
from opentrons import robot
except ModuleNotFoundError:
res = "Unable to find module `opentrons`--not updating firmware"
rc = 1
log.error(res)
else:
# ensure there is a reference to the port
if not robot.is_connected():
robot.connect()
# get port name
port = str(robot._driver.port)
# set smoothieware into programming mode
robot._driver._smoothie_programming_mode()
# close the port so other application can access it
robot._driver._connection.close()
# run lpc21isp, THIS WILL TAKE AROUND 1 MINUTE TO COMPLETE
update_cmd = 'lpc21isp -wipe -donotstart {0} {1} {2} 12000'.format(
filename, port, robot.config.serial_speed)
proc = await asyncio.create_subprocess_shell(
update_cmd,
stdout=asyncio.subprocess.PIPE,
loop=loop)
rd = await proc.stdout.read()
res = rd.decode().strip()
await proc.communicate()
rc = proc.returncode
if rc == 0:
# re-open the port
robot._driver._connection.open()
# reset smoothieware
robot._driver._smoothie_reset()
# run setup gcodes
robot._driver._setup()
return res, rc | Currently uses the robot singleton from the API server to connect to
Smoothie. Those calls should be separated out from the singleton so it can
be used directly without requiring a full initialization of the API robot. | Below is the instruction that describes the task:
### Input:
Currently uses the robot singleton from the API server to connect to
Smoothie. Those calls should be separated out from the singleton so it can
be used directly without requiring a full initialization of the API robot.
### Response:
async def _update_firmware(filename, loop):
"""
Currently uses the robot singleton from the API server to connect to
Smoothie. Those calls should be separated out from the singleton so it can
be used directly without requiring a full initialization of the API robot.
"""
try:
from opentrons import robot
except ModuleNotFoundError:
res = "Unable to find module `opentrons`--not updating firmware"
rc = 1
log.error(res)
else:
# ensure there is a reference to the port
if not robot.is_connected():
robot.connect()
# get port name
port = str(robot._driver.port)
# set smoothieware into programming mode
robot._driver._smoothie_programming_mode()
# close the port so other application can access it
robot._driver._connection.close()
# run lpc21isp, THIS WILL TAKE AROUND 1 MINUTE TO COMPLETE
update_cmd = 'lpc21isp -wipe -donotstart {0} {1} {2} 12000'.format(
filename, port, robot.config.serial_speed)
proc = await asyncio.create_subprocess_shell(
update_cmd,
stdout=asyncio.subprocess.PIPE,
loop=loop)
rd = await proc.stdout.read()
res = rd.decode().strip()
await proc.communicate()
rc = proc.returncode
if rc == 0:
# re-open the port
robot._driver._connection.open()
# reset smoothieware
robot._driver._smoothie_reset()
# run setup gcodes
robot._driver._setup()
return res, rc |
def revert_unordered_batches(self):
"""
Revert changes to ledger (uncommitted) and state made by any requests
that have not been ordered.
"""
i = 0
for key in sorted(self.batches.keys(), reverse=True):
if compare_3PC_keys(self.last_ordered_3pc, key) > 0:
ledger_id, discarded, _, prevStateRoot, len_reqIdr = self.batches.pop(key)
discarded = invalid_index_serializer.deserialize(discarded)
self.logger.debug('{} reverting 3PC key {}'.format(self, key))
self.revert(ledger_id, prevStateRoot, len_reqIdr - len(discarded))
i += 1
else:
break
return i | Revert changes to ledger (uncommitted) and state made by any requests
that have not been ordered. | Below is the instruction that describes the task:
### Input:
Revert changes to ledger (uncommitted) and state made by any requests
that have not been ordered.
### Response:
def revert_unordered_batches(self):
"""
Revert changes to ledger (uncommitted) and state made by any requests
that have not been ordered.
"""
i = 0
for key in sorted(self.batches.keys(), reverse=True):
if compare_3PC_keys(self.last_ordered_3pc, key) > 0:
ledger_id, discarded, _, prevStateRoot, len_reqIdr = self.batches.pop(key)
discarded = invalid_index_serializer.deserialize(discarded)
self.logger.debug('{} reverting 3PC key {}'.format(self, key))
self.revert(ledger_id, prevStateRoot, len_reqIdr - len(discarded))
i += 1
else:
break
return i |
def _get_queries(self, migration, method):
"""
Get all of the queries that would be run for a migration.
:param migration: The migration
:type migration: orator.migrations.migration.Migration
:param method: The method to execute
:type method: str
:rtype: list
"""
connection = migration.get_connection()
db = connection
with db.pretend():
getattr(migration, method)()
return db.get_logged_queries() | Get all of the queries that would be run for a migration.
:param migration: The migration
:type migration: orator.migrations.migration.Migration
:param method: The method to execute
:type method: str
:rtype: list | Below is the instruction that describes the task:
### Input:
Get all of the queries that would be run for a migration.
:param migration: The migration
:type migration: orator.migrations.migration.Migration
:param method: The method to execute
:type method: str
:rtype: list
### Response:
def _get_queries(self, migration, method):
"""
Get all of the queries that would be run for a migration.
:param migration: The migration
:type migration: orator.migrations.migration.Migration
:param method: The method to execute
:type method: str
:rtype: list
"""
connection = migration.get_connection()
db = connection
with db.pretend():
getattr(migration, method)()
return db.get_logged_queries() |
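The `pretend()` idiom above — run the migration method, but record the SQL instead of executing it — can be illustrated with a minimal stand-in. The `FakeConnection` class here is illustrative only, not Orator's actual connection API:

```python
from contextlib import contextmanager

class FakeConnection:
    """Minimal sketch of the 'pretend' pattern: inside the context,
    statements are logged rather than executed."""

    def __init__(self):
        self._logged = []
        self._pretending = False

    @contextmanager
    def pretend(self):
        # Enter dry-run mode, collect statements, always restore state.
        self._pretending = True
        self._logged = []
        try:
            yield self
        finally:
            self._pretending = False

    def statement(self, sql):
        if self._pretending:
            self._logged.append(sql)
        else:
            raise RuntimeError("would execute: " + sql)

    def get_logged_queries(self):
        return list(self._logged)
```

With this in place, calling a migration's `up()`/`down()` inside `with db.pretend():` yields the queries it *would* run, which is exactly what `_get_queries` returns.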
def encode(self, source_path, target_path, params): # NOQA: C901
"""
Encodes a video to a specified file. All encoder specific options
are passed in using `params`.
"""
total_time = self.get_media_info(source_path)['duration']
cmds = [self.ffmpeg_path, '-i', source_path]
cmds.extend(self.params)
cmds.extend(params)
cmds.extend([target_path])
process = self._spawn(cmds)
buf = output = ''
# update progress
while True:
# any more data?
out = process.stderr.read(10)
if not out:
break
out = out.decode(console_encoding)
output += out
buf += out
try:
line, buf = buf.split('\r', 1)
except ValueError:
continue
try:
time_str = RE_TIMECODE.findall(line)[0]
except IndexError:
continue
# convert progress to percent
time = 0
for part in time_str.split(':'):
time = 60 * time + float(part)
percent = time / total_time
logger.debug('yield {}%'.format(percent))
yield percent
if os.path.getsize(target_path) == 0:
raise exceptions.FFmpegError("File size of generated file is 0")
# wait for process to exit
self._check_returncode(process)
logger.debug(output)
if not output:
raise exceptions.FFmpegError("No output from FFmpeg.")
yield 100 | Encodes a video to a specified file. All encoder specific options
are passed in using `params`. | Below is the instruction that describes the task:
### Input:
Encodes a video to a specified file. All encoder specific options
are passed in using `params`.
### Response:
def encode(self, source_path, target_path, params): # NOQA: C901
"""
Encodes a video to a specified file. All encoder specific options
are passed in using `params`.
"""
total_time = self.get_media_info(source_path)['duration']
cmds = [self.ffmpeg_path, '-i', source_path]
cmds.extend(self.params)
cmds.extend(params)
cmds.extend([target_path])
process = self._spawn(cmds)
buf = output = ''
# update progress
while True:
# any more data?
out = process.stderr.read(10)
if not out:
break
out = out.decode(console_encoding)
output += out
buf += out
try:
line, buf = buf.split('\r', 1)
except ValueError:
continue
try:
time_str = RE_TIMECODE.findall(line)[0]
except IndexError:
continue
# convert progress to percent
time = 0
for part in time_str.split(':'):
time = 60 * time + float(part)
percent = time / total_time
logger.debug('yield {}%'.format(percent))
yield percent
if os.path.getsize(target_path) == 0:
raise exceptions.FFmpegError("File size of generated file is 0")
# wait for process to exit
self._check_returncode(process)
logger.debug(output)
if not output:
raise exceptions.FFmpegError("No output from FFmpeg.")
yield 100 |
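The progress loop above converts each matched `HH:MM:SS.ss` timecode to seconds by folding the colon-separated parts with `time = 60 * time + float(part)`. A standalone sketch of that conversion; the regex is an assumed approximation of the module's `RE_TIMECODE`, not its exact definition:

```python
import re

# Assumed pattern for ffmpeg's "time=HH:MM:SS.ss" progress lines.
RE_TIMECODE = re.compile(r'time=(\d+:\d+:\d+\.\d+)')

def timecode_to_seconds(time_str):
    """Fold 'HH:MM:SS.ss' into seconds, one colon-separated part at a time."""
    seconds = 0.0
    for part in time_str.split(':'):
        seconds = 60 * seconds + float(part)
    return seconds

def progress_percent(line, total_time):
    """Return fractional progress for one ffmpeg stderr line, or None
    if the line carries no timecode."""
    m = RE_TIMECODE.search(line)
    if not m:
        return None
    return timecode_to_seconds(m.group(1)) / total_time
```

The fold works because each colon shifts the accumulated value up by one base-60 digit: hours become minutes, minutes become seconds.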
def _make_connection(self, bind_user=None, bind_password=None,
contextualise=True, **kwargs):
"""
Make a connection.
Args:
bind_user (str): User to bind with. If `None`, AUTH_ANONYMOUS is
used, otherwise authentication specified with
config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.
bind_password (str): Password to bind to the directory with
contextualise (bool): If true (default), will add this connection to the
appcontext so it can be unbound upon app_teardown.
Returns:
ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions
upon bind if you use this internal method.
"""
authentication = ldap3.ANONYMOUS
if bind_user:
authentication = getattr(ldap3, self.config.get(
'LDAP_BIND_AUTHENTICATION_TYPE'))
log.debug("Opening connection with bind user '{0}'".format(
bind_user or 'Anonymous'))
connection = ldap3.Connection(
server=self._server_pool,
read_only=self.config.get('LDAP_READONLY'),
user=bind_user,
password=bind_password,
client_strategy=ldap3.SYNC,
authentication=authentication,
check_names=self.config['LDAP_CHECK_NAMES'],
raise_exceptions=True,
**kwargs
)
if contextualise:
self._contextualise_connection(connection)
return connection | Make a connection.
Args:
bind_user (str): User to bind with. If `None`, AUTH_ANONYMOUS is
used, otherwise authentication specified with
config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.
bind_password (str): Password to bind to the directory with
contextualise (bool): If true (default), will add this connection to the
appcontext so it can be unbound upon app_teardown.
Returns:
ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions
            upon bind if you use this internal method. | Below is the instruction that describes the task:
### Input:
Make a connection.
Args:
bind_user (str): User to bind with. If `None`, AUTH_ANONYMOUS is
used, otherwise authentication specified with
config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.
bind_password (str): Password to bind to the directory with
contextualise (bool): If true (default), will add this connection to the
appcontext so it can be unbound upon app_teardown.
Returns:
ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions
upon bind if you use this internal method.
### Response:
def _make_connection(self, bind_user=None, bind_password=None,
contextualise=True, **kwargs):
"""
Make a connection.
Args:
bind_user (str): User to bind with. If `None`, AUTH_ANONYMOUS is
used, otherwise authentication specified with
config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.
bind_password (str): Password to bind to the directory with
contextualise (bool): If true (default), will add this connection to the
appcontext so it can be unbound upon app_teardown.
Returns:
ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions
upon bind if you use this internal method.
"""
authentication = ldap3.ANONYMOUS
if bind_user:
authentication = getattr(ldap3, self.config.get(
'LDAP_BIND_AUTHENTICATION_TYPE'))
log.debug("Opening connection with bind user '{0}'".format(
bind_user or 'Anonymous'))
connection = ldap3.Connection(
server=self._server_pool,
read_only=self.config.get('LDAP_READONLY'),
user=bind_user,
password=bind_password,
client_strategy=ldap3.SYNC,
authentication=authentication,
check_names=self.config['LDAP_CHECK_NAMES'],
raise_exceptions=True,
**kwargs
)
if contextualise:
self._contextualise_connection(connection)
return connection |
def lines(self, lines):
"""
Fill Dockerfile content with specified lines
:param lines: list of lines to be written to Dockerfile
"""
if self.cache_content:
self.cached_content = ''.join([b2u(l) for l in lines])
try:
with self._open_dockerfile('wb') as dockerfile:
dockerfile.writelines([u2b(l) for l in lines])
except (IOError, OSError) as ex:
logger.error("Couldn't write lines to dockerfile: %r", ex)
raise | Fill Dockerfile content with specified lines
:param lines: list of lines to be written to Dockerfile | Below is the instruction that describes the task:
### Input:
Fill Dockerfile content with specified lines
:param lines: list of lines to be written to Dockerfile
### Response:
def lines(self, lines):
"""
Fill Dockerfile content with specified lines
:param lines: list of lines to be written to Dockerfile
"""
if self.cache_content:
self.cached_content = ''.join([b2u(l) for l in lines])
try:
with self._open_dockerfile('wb') as dockerfile:
dockerfile.writelines([u2b(l) for l in lines])
except (IOError, OSError) as ex:
logger.error("Couldn't write lines to dockerfile: %r", ex)
raise |
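The `lines` setter in the row above round-trips text through a `u2b` helper before writing bytes; a self-contained sketch of that encode-on-write pattern (the `u2b` definition here is an assumption about what the imported helper does, not the dockerfile-parse implementation):

```python
import os
import tempfile

def u2b(value):
    # Assumed behaviour of the helper: text -> UTF-8 bytes, bytes pass through.
    return value.encode("utf-8") if isinstance(value, str) else value

lines = ["FROM fedora\n", "RUN dnf install -y git\n"]
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as dockerfile:
    # Encode each line individually, mirroring the writelines call above.
    dockerfile.writelines(u2b(l) for l in lines)
with open(path, encoding="utf-8") as f:
    content = f.read()
os.unlink(path)
print(content)
```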
def source_group_receiver(self, sender, source, signal, **kwargs):
"""
Relay source group signals to the appropriate spec strategy.
"""
from imagekit.cachefiles import ImageCacheFile
source_group = sender
instance = kwargs['instance']
# Ignore signals from unregistered groups.
if source_group not in self._source_groups:
return
#HOOK -- update source to point to image file.
for id in self._source_groups[source_group]:
spec_to_update = generator_registry.get(id, source=source, instance=instance, field=hack_spec_field_hash[id])
specs = [generator_registry.get(id, source=source, instance=instance, field=hack_spec_field_hash[id]) for id in
self._source_groups[source_group]]
callback_name = self._signals[signal]
# print 'callback_name? %s'%(callback_name)
for spec in specs:
file = ImageCacheFile(spec)
# print 'SEPC %s file %s'%(spec, file)
            call_strategy_method(file, callback_name) | Relay source group signals to the appropriate spec strategy. | Below is the instruction that describes the task:
### Input:
Relay source group signals to the appropriate spec strategy.
### Response:
def source_group_receiver(self, sender, source, signal, **kwargs):
"""
Relay source group signals to the appropriate spec strategy.
"""
from imagekit.cachefiles import ImageCacheFile
source_group = sender
instance = kwargs['instance']
# Ignore signals from unregistered groups.
if source_group not in self._source_groups:
return
#HOOK -- update source to point to image file.
for id in self._source_groups[source_group]:
spec_to_update = generator_registry.get(id, source=source, instance=instance, field=hack_spec_field_hash[id])
specs = [generator_registry.get(id, source=source, instance=instance, field=hack_spec_field_hash[id]) for id in
self._source_groups[source_group]]
callback_name = self._signals[signal]
# print 'callback_name? %s'%(callback_name)
for spec in specs:
file = ImageCacheFile(spec)
# print 'SEPC %s file %s'%(spec, file)
call_strategy_method(file, callback_name) |
def _get_resource_access_state(self, request):
"""
Returns the FeinCMS resource's access_state, following any INHERITed values.
Will return None if the resource has an access state that should never be
protected. It should not be possible to protect a resource with an access_state
of STATE_ALL_ALLOWED, or an access_state of STATE_INHERIT and no parent.
Will also return None if the accessed URL doesn't contain a Page.
"""
feincms_page = self._get_page_from_path(request.path_info.lstrip('/'))
if not feincms_page:
return None
# Chase inherited values up the tree of inheritance.
INHERIT = AccessState.STATE_INHERIT
while feincms_page.access_state == INHERIT and feincms_page.parent:
feincms_page = feincms_page.parent
# Resources with STATE_ALL_ALLOWED or STATE_INHERIT and no parent should never be
# access-restricted. This code is here rather than in is_resource_protected to
# emphasise its importance and help avoid accidentally overriding it.
never_restricted = (INHERIT, AccessState.STATE_ALL_ALLOWED)
if feincms_page.access_state in never_restricted:
return None
# Return the found value.
return feincms_page.access_state | Returns the FeinCMS resource's access_state, following any INHERITed values.
Will return None if the resource has an access state that should never be
protected. It should not be possible to protect a resource with an access_state
of STATE_ALL_ALLOWED, or an access_state of STATE_INHERIT and no parent.
Will also return None if the accessed URL doesn't contain a Page. | Below is the instruction that describes the task:
### Input:
Returns the FeinCMS resource's access_state, following any INHERITed values.
Will return None if the resource has an access state that should never be
protected. It should not be possible to protect a resource with an access_state
of STATE_ALL_ALLOWED, or an access_state of STATE_INHERIT and no parent.
Will also return None if the accessed URL doesn't contain a Page.
### Response:
def _get_resource_access_state(self, request):
"""
Returns the FeinCMS resource's access_state, following any INHERITed values.
Will return None if the resource has an access state that should never be
protected. It should not be possible to protect a resource with an access_state
of STATE_ALL_ALLOWED, or an access_state of STATE_INHERIT and no parent.
Will also return None if the accessed URL doesn't contain a Page.
"""
feincms_page = self._get_page_from_path(request.path_info.lstrip('/'))
if not feincms_page:
return None
# Chase inherited values up the tree of inheritance.
INHERIT = AccessState.STATE_INHERIT
while feincms_page.access_state == INHERIT and feincms_page.parent:
feincms_page = feincms_page.parent
# Resources with STATE_ALL_ALLOWED or STATE_INHERIT and no parent should never be
# access-restricted. This code is here rather than in is_resource_protected to
# emphasise its importance and help avoid accidentally overriding it.
never_restricted = (INHERIT, AccessState.STATE_ALL_ALLOWED)
if feincms_page.access_state in never_restricted:
return None
# Return the found value.
return feincms_page.access_state |
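The INHERIT-chasing loop in the `_get_resource_access_state` row above can be exercised with a toy stand-in for the page tree (the `Page` class below is hypothetical, not the FeinCMS model):

```python
class Page:
    STATE_INHERIT = "inherit"
    STATE_ALL_ALLOWED = "all_allowed"

    def __init__(self, access_state, parent=None):
        self.access_state = access_state
        self.parent = parent

def resolve_access_state(page):
    # Chase inherited values up the tree, as the middleware does.
    while page.access_state == Page.STATE_INHERIT and page.parent:
        page = page.parent
    # STATE_ALL_ALLOWED, or STATE_INHERIT with no parent, is never restricted.
    if page.access_state in (Page.STATE_INHERIT, Page.STATE_ALL_ALLOWED):
        return None
    return page.access_state

root = Page("logged_in_users")
leaf = Page(Page.STATE_INHERIT, parent=Page(Page.STATE_INHERIT, parent=root))
print(resolve_access_state(leaf))  # inherits through two levels
```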
def trim_variant_sequences(variant_sequences, min_variant_sequence_coverage):
"""
Trim VariantSequences to desired coverage and then combine any
subsequences which get generated.
"""
n_total = len(variant_sequences)
trimmed_variant_sequences = [
variant_sequence.trim_by_coverage(min_variant_sequence_coverage)
for variant_sequence in variant_sequences
]
collapsed_variant_sequences = collapse_substrings(trimmed_variant_sequences)
n_after_trimming = len(collapsed_variant_sequences)
logger.info(
"Kept %d/%d variant sequences after read coverage trimming to >=%dx",
n_after_trimming,
n_total,
min_variant_sequence_coverage)
return collapsed_variant_sequences | Trim VariantSequences to desired coverage and then combine any
subsequences which get generated. | Below is the instruction that describes the task:
### Input:
Trim VariantSequences to desired coverage and then combine any
subsequences which get generated.
### Response:
def trim_variant_sequences(variant_sequences, min_variant_sequence_coverage):
"""
Trim VariantSequences to desired coverage and then combine any
subsequences which get generated.
"""
n_total = len(variant_sequences)
trimmed_variant_sequences = [
variant_sequence.trim_by_coverage(min_variant_sequence_coverage)
for variant_sequence in variant_sequences
]
collapsed_variant_sequences = collapse_substrings(trimmed_variant_sequences)
n_after_trimming = len(collapsed_variant_sequences)
logger.info(
"Kept %d/%d variant sequences after read coverage trimming to >=%dx",
n_after_trimming,
n_total,
min_variant_sequence_coverage)
return collapsed_variant_sequences |
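`collapse_substrings` is referenced but not defined in this chunk; a plausible sketch of what such a helper must do (drop any sequence fully contained in a longer one), using plain strings in place of `VariantSequence` objects:

```python
def collapse_substrings(sequences):
    # Sort longest-first so each candidate is only checked against
    # the longer sequences that could contain it.
    ordered = sorted(sequences, key=len, reverse=True)
    kept = []
    for seq in ordered:
        if not any(seq in longer for longer in kept):
            kept.append(seq)
    return kept

print(collapse_substrings(["ACGTAC", "CGTA", "TTAGG"]))
```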
def start(controller_class):
"""Start the Helper controller either in the foreground or as a daemon
process.
:param controller_class: The controller class handle to create and run
:type controller_class: callable
"""
args = parser.parse()
obj = controller_class(args, platform.operating_system())
if args.foreground:
try:
obj.start()
except KeyboardInterrupt:
obj.stop()
else:
try:
with platform.Daemon(obj) as daemon:
daemon.start()
except (OSError, ValueError) as error:
sys.stderr.write('\nError starting %s: %s\n\n' %
(sys.argv[0], error))
sys.exit(1) | Start the Helper controller either in the foreground or as a daemon
process.
:param controller_class: The controller class handle to create and run
:type controller_class: callable | Below is the instruction that describes the task:
### Input:
Start the Helper controller either in the foreground or as a daemon
process.
:param controller_class: The controller class handle to create and run
:type controller_class: callable
### Response:
def start(controller_class):
"""Start the Helper controller either in the foreground or as a daemon
process.
:param controller_class: The controller class handle to create and run
:type controller_class: callable
"""
args = parser.parse()
obj = controller_class(args, platform.operating_system())
if args.foreground:
try:
obj.start()
except KeyboardInterrupt:
obj.stop()
else:
try:
with platform.Daemon(obj) as daemon:
daemon.start()
except (OSError, ValueError) as error:
sys.stderr.write('\nError starting %s: %s\n\n' %
(sys.argv[0], error))
sys.exit(1) |
def show_diff(original, modified, prefix='', suffix='',
prefix_unchanged=' ',
suffix_unchanged='',
prefix_removed='-',
suffix_removed='',
prefix_added='+',
suffix_added=''):
"""Return the diff view between original and modified strings.
Function checks both arguments line by line and returns a string
with a:
- prefix_unchanged when line is common to both sequences
- prefix_removed when line is unique to sequence 1
- prefix_added when line is unique to sequence 2
and a corresponding suffix in each line
:param original: base string
:param modified: changed string
:param prefix: prefix of the output string
:param suffix: suffix of the output string
:param prefix_unchanged: prefix of the unchanged line
:param suffix_unchanged: suffix of the unchanged line
:param prefix_removed: prefix of the removed line
:param suffix_removed: suffix of the removed line
:param prefix_added: prefix of the added line
:param suffix_added: suffix of the added line
:return: string with the comparison of the records
:rtype: string
"""
import difflib
differ = difflib.Differ()
result = [prefix]
for line in differ.compare(modified.splitlines(), original.splitlines()):
if line[0] == ' ':
# Mark as unchanged
result.append(
prefix_unchanged + line[2:].strip() + suffix_unchanged)
elif line[0] == '-':
# Mark as removed
result.append(prefix_removed + line[2:].strip() + suffix_removed)
elif line[0] == '+':
# Mark as added/modified
result.append(prefix_added + line[2:].strip() + suffix_added)
result.append(suffix)
return '\n'.join(result) | Return the diff view between original and modified strings.
Function checks both arguments line by line and returns a string
with a:
- prefix_unchanged when line is common to both sequences
- prefix_removed when line is unique to sequence 1
- prefix_added when line is unique to sequence 2
and a corresponding suffix in each line
:param original: base string
:param modified: changed string
:param prefix: prefix of the output string
:param suffix: suffix of the output string
:param prefix_unchanged: prefix of the unchanged line
:param suffix_unchanged: suffix of the unchanged line
:param prefix_removed: prefix of the removed line
:param suffix_removed: suffix of the removed line
:param prefix_added: prefix of the added line
:param suffix_added: suffix of the added line
:return: string with the comparison of the records
:rtype: string | Below is the instruction that describes the task:
### Input:
Return the diff view between original and modified strings.
Function checks both arguments line by line and returns a string
with a:
- prefix_unchanged when line is common to both sequences
- prefix_removed when line is unique to sequence 1
- prefix_added when line is unique to sequence 2
and a corresponding suffix in each line
:param original: base string
:param modified: changed string
:param prefix: prefix of the output string
:param suffix: suffix of the output string
:param prefix_unchanged: prefix of the unchanged line
:param suffix_unchanged: suffix of the unchanged line
:param prefix_removed: prefix of the removed line
:param suffix_removed: suffix of the removed line
:param prefix_added: prefix of the added line
:param suffix_added: suffix of the added line
:return: string with the comparison of the records
:rtype: string
### Response:
def show_diff(original, modified, prefix='', suffix='',
prefix_unchanged=' ',
suffix_unchanged='',
prefix_removed='-',
suffix_removed='',
prefix_added='+',
suffix_added=''):
"""Return the diff view between original and modified strings.
Function checks both arguments line by line and returns a string
with a:
- prefix_unchanged when line is common to both sequences
- prefix_removed when line is unique to sequence 1
- prefix_added when line is unique to sequence 2
and a corresponding suffix in each line
:param original: base string
:param modified: changed string
:param prefix: prefix of the output string
:param suffix: suffix of the output string
:param prefix_unchanged: prefix of the unchanged line
:param suffix_unchanged: suffix of the unchanged line
:param prefix_removed: prefix of the removed line
:param suffix_removed: suffix of the removed line
:param prefix_added: prefix of the added line
:param suffix_added: suffix of the added line
:return: string with the comparison of the records
:rtype: string
"""
import difflib
differ = difflib.Differ()
result = [prefix]
for line in differ.compare(modified.splitlines(), original.splitlines()):
if line[0] == ' ':
# Mark as unchanged
result.append(
prefix_unchanged + line[2:].strip() + suffix_unchanged)
elif line[0] == '-':
# Mark as removed
result.append(prefix_removed + line[2:].strip() + suffix_removed)
elif line[0] == '+':
# Mark as added/modified
result.append(prefix_added + line[2:].strip() + suffix_added)
result.append(suffix)
return '\n'.join(result) |
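The `show_diff` row above is a thin wrapper over `difflib.Differ`; a minimal standalone sketch of the same prefix-tagging idea (illustrative only, with the decoration parameters dropped):

```python
import difflib

original = "alpha\nbeta\ngamma"
modified = "alpha\nBETA\ngamma"

result = []
# Differ yields lines tagged '  ', '- ', '+ ', or '? ' (hint lines,
# silently ignored here, as in show_diff's if/elif chain).
for line in difflib.Differ().compare(modified.splitlines(),
                                     original.splitlines()):
    if line[0] == ' ':
        result.append(' ' + line[2:].strip())
    elif line[0] == '-':
        result.append('-' + line[2:].strip())
    elif line[0] == '+':
        result.append('+' + line[2:].strip())
print('\n'.join(result))
```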
def main():
"""
Program main.
"""
options = parse_options()
output, code = create_output(get_status(options), options)
sys.stdout.write(output)
    sys.exit(code) | Program main. | Below is the instruction that describes the task:
### Input:
Program main.
### Response:
def main():
"""
Program main.
"""
options = parse_options()
output, code = create_output(get_status(options), options)
sys.stdout.write(output)
sys.exit(code) |
def upload(training_dir, algorithm_id=None, writeup=None, api_key=None, ignore_open_monitors=False):
"""Upload the results of training (as automatically recorded by your
env's monitor) to OpenAI Gym.
Args:
training_dir (Optional[str]): A directory containing the results of a training run.
algorithm_id (Optional[str]): An algorithm id indicating the particular version of the algorithm (including choices of parameters) you are running (visit https://gym.openai.com/algorithms to create an id)
writeup (Optional[str]): A Gist URL (of the form https://gist.github.com/<user>/<id>) containing your writeup for this evaluation.
api_key (Optional[str]): Your OpenAI API key. Can also be provided as an environment variable (OPENAI_GYM_API_KEY).
"""
if not ignore_open_monitors:
open_monitors = monitoring._open_monitors()
if len(open_monitors) > 0:
envs = [m.env.spec.id if m.env.spec else '(unknown)' for m in open_monitors]
raise error.Error("Still have an open monitor on {}. You must run 'env.monitor.close()' before uploading.".format(', '.join(envs)))
env_info, training_episode_batch, training_video = upload_training_data(training_dir, api_key=api_key)
env_id = env_info['env_id']
training_episode_batch_id = training_video_id = None
if training_episode_batch:
training_episode_batch_id = training_episode_batch.id
if training_video:
training_video_id = training_video.id
if logger.level <= logging.INFO:
if training_episode_batch_id is not None and training_video_id is not None:
logger.info('[%s] Creating evaluation object from %s with learning curve and training video', env_id, training_dir)
elif training_episode_batch_id is not None:
logger.info('[%s] Creating evaluation object from %s with learning curve', env_id, training_dir)
elif training_video_id is not None:
logger.info('[%s] Creating evaluation object from %s with training video', env_id, training_dir)
else:
raise error.Error("[%s] You didn't have any recorded training data in {}. Once you've used 'env.monitor.start(training_dir)' to start recording, you need to actually run some rollouts. Please join the community chat on https://gym.openai.com if you have any issues.".format(env_id, training_dir))
evaluation = resource.Evaluation.create(
training_episode_batch=training_episode_batch_id,
training_video=training_video_id,
env=env_info['env_id'],
algorithm={
'id': algorithm_id,
},
writeup=writeup,
gym_version=env_info['gym_version'],
api_key=api_key,
# >>>>>>>>> START changes >>>>>>>>>>>>>>>>>>>>>>>>
env_info=env_info,
# <<<<<<<<< END changes <<<<<<<<<<<<<<<<<<<<<<<<<<
)
logger.info(
"""
****************************************************
You successfully uploaded your evaluation on %s to
OpenAI Gym! You can find it at:
%s
****************************************************
""".rstrip(), env_id, evaluation.web_url())
return evaluation | Upload the results of training (as automatically recorded by your
env's monitor) to OpenAI Gym.
Args:
training_dir (Optional[str]): A directory containing the results of a training run.
algorithm_id (Optional[str]): An algorithm id indicating the particular version of the algorithm (including choices of parameters) you are running (visit https://gym.openai.com/algorithms to create an id)
writeup (Optional[str]): A Gist URL (of the form https://gist.github.com/<user>/<id>) containing your writeup for this evaluation.
        api_key (Optional[str]): Your OpenAI API key. Can also be provided as an environment variable (OPENAI_GYM_API_KEY). | Below is the instruction that describes the task:
### Input:
Upload the results of training (as automatically recorded by your
env's monitor) to OpenAI Gym.
Args:
training_dir (Optional[str]): A directory containing the results of a training run.
algorithm_id (Optional[str]): An algorithm id indicating the particular version of the algorithm (including choices of parameters) you are running (visit https://gym.openai.com/algorithms to create an id)
writeup (Optional[str]): A Gist URL (of the form https://gist.github.com/<user>/<id>) containing your writeup for this evaluation.
api_key (Optional[str]): Your OpenAI API key. Can also be provided as an environment variable (OPENAI_GYM_API_KEY).
### Response:
def upload(training_dir, algorithm_id=None, writeup=None, api_key=None, ignore_open_monitors=False):
"""Upload the results of training (as automatically recorded by your
env's monitor) to OpenAI Gym.
Args:
training_dir (Optional[str]): A directory containing the results of a training run.
algorithm_id (Optional[str]): An algorithm id indicating the particular version of the algorithm (including choices of parameters) you are running (visit https://gym.openai.com/algorithms to create an id)
writeup (Optional[str]): A Gist URL (of the form https://gist.github.com/<user>/<id>) containing your writeup for this evaluation.
api_key (Optional[str]): Your OpenAI API key. Can also be provided as an environment variable (OPENAI_GYM_API_KEY).
"""
if not ignore_open_monitors:
open_monitors = monitoring._open_monitors()
if len(open_monitors) > 0:
envs = [m.env.spec.id if m.env.spec else '(unknown)' for m in open_monitors]
raise error.Error("Still have an open monitor on {}. You must run 'env.monitor.close()' before uploading.".format(', '.join(envs)))
env_info, training_episode_batch, training_video = upload_training_data(training_dir, api_key=api_key)
env_id = env_info['env_id']
training_episode_batch_id = training_video_id = None
if training_episode_batch:
training_episode_batch_id = training_episode_batch.id
if training_video:
training_video_id = training_video.id
if logger.level <= logging.INFO:
if training_episode_batch_id is not None and training_video_id is not None:
logger.info('[%s] Creating evaluation object from %s with learning curve and training video', env_id, training_dir)
elif training_episode_batch_id is not None:
logger.info('[%s] Creating evaluation object from %s with learning curve', env_id, training_dir)
elif training_video_id is not None:
logger.info('[%s] Creating evaluation object from %s with training video', env_id, training_dir)
else:
raise error.Error("[%s] You didn't have any recorded training data in {}. Once you've used 'env.monitor.start(training_dir)' to start recording, you need to actually run some rollouts. Please join the community chat on https://gym.openai.com if you have any issues.".format(env_id, training_dir))
evaluation = resource.Evaluation.create(
training_episode_batch=training_episode_batch_id,
training_video=training_video_id,
env=env_info['env_id'],
algorithm={
'id': algorithm_id,
},
writeup=writeup,
gym_version=env_info['gym_version'],
api_key=api_key,
# >>>>>>>>> START changes >>>>>>>>>>>>>>>>>>>>>>>>
env_info=env_info,
# <<<<<<<<< END changes <<<<<<<<<<<<<<<<<<<<<<<<<<
)
logger.info(
"""
****************************************************
You successfully uploaded your evaluation on %s to
OpenAI Gym! You can find it at:
%s
****************************************************
""".rstrip(), env_id, evaluation.web_url())
return evaluation |
def set_permissions(obj_name,
principal,
permissions,
access_mode='grant',
applies_to=None,
obj_type='file',
reset_perms=False,
protected=None):
'''
Set the permissions of an object. This can be a file, folder, registry key,
printer, service, etc...
Args:
obj_name (str):
The object for which to set permissions. This can be the path to a
file or folder, a registry key, printer, etc. For more information
about how to format the name see:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa379593(v=vs.85).aspx
principal (str):
The name of the user or group for which to set permissions. Can also
pass a SID.
permissions (str, list):
The type of permissions to grant/deny the user. Can be one of the
basic permissions, or a list of advanced permissions.
access_mode (Optional[str]):
Whether to grant or deny user the access. Valid options are:
- grant (default): Grants the user access
- deny: Denies the user access
applies_to (Optional[str]):
The objects to which these permissions will apply. Not all these
options apply to all object types. Defaults to
'this_folder_subfolders_files'
obj_type (Optional[str]):
The type of object for which to set permissions. Default is 'file'
reset_perms (Optional[bool]):
True will overwrite the permissions on the specified object. False
will append the permissions. Default is False
protected (Optional[bool]):
True will disable inheritance for the object. False will enable
inheritance. None will make no change. Default is None.
Returns:
bool: True if successful, raises an error otherwise
Usage:
.. code-block:: python
salt.utils.win_dacl.set_permissions(
'C:\\Temp', 'jsnuffy', 'full_control', 'grant')
'''
# Set up applies_to defaults used by registry and file types
if applies_to is None:
if 'registry' in obj_type.lower():
applies_to = 'this_key_subkeys'
elif obj_type.lower() == 'file':
applies_to = 'this_folder_subfolders_files'
# If you don't pass `obj_name` it will create a blank DACL
# Otherwise, it will grab the existing DACL and add to it
if reset_perms:
obj_dacl = dacl(obj_type=obj_type)
else:
obj_dacl = dacl(obj_name, obj_type)
obj_dacl.rm_ace(principal, access_mode)
obj_dacl.add_ace(principal, access_mode, permissions, applies_to)
obj_dacl.order_acl()
obj_dacl.save(obj_name, protected)
return True | Set the permissions of an object. This can be a file, folder, registry key,
printer, service, etc...
Args:
obj_name (str):
The object for which to set permissions. This can be the path to a
file or folder, a registry key, printer, etc. For more information
about how to format the name see:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa379593(v=vs.85).aspx
principal (str):
The name of the user or group for which to set permissions. Can also
pass a SID.
permissions (str, list):
The type of permissions to grant/deny the user. Can be one of the
basic permissions, or a list of advanced permissions.
access_mode (Optional[str]):
Whether to grant or deny user the access. Valid options are:
- grant (default): Grants the user access
- deny: Denies the user access
applies_to (Optional[str]):
The objects to which these permissions will apply. Not all these
options apply to all object types. Defaults to
'this_folder_subfolders_files'
obj_type (Optional[str]):
The type of object for which to set permissions. Default is 'file'
reset_perms (Optional[bool]):
True will overwrite the permissions on the specified object. False
will append the permissions. Default is False
protected (Optional[bool]):
True will disable inheritance for the object. False will enable
inheritance. None will make no change. Default is None.
Returns:
bool: True if successful, raises an error otherwise
Usage:
.. code-block:: python
salt.utils.win_dacl.set_permissions(
            'C:\\Temp', 'jsnuffy', 'full_control', 'grant') | Below is the instruction that describes the task:
### Input:
Set the permissions of an object. This can be a file, folder, registry key,
printer, service, etc...
Args:
obj_name (str):
The object for which to set permissions. This can be the path to a
file or folder, a registry key, printer, etc. For more information
about how to format the name see:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa379593(v=vs.85).aspx
principal (str):
The name of the user or group for which to set permissions. Can also
pass a SID.
permissions (str, list):
The type of permissions to grant/deny the user. Can be one of the
basic permissions, or a list of advanced permissions.
access_mode (Optional[str]):
Whether to grant or deny user the access. Valid options are:
- grant (default): Grants the user access
- deny: Denies the user access
applies_to (Optional[str]):
The objects to which these permissions will apply. Not all these
options apply to all object types. Defaults to
'this_folder_subfolders_files'
obj_type (Optional[str]):
The type of object for which to set permissions. Default is 'file'
reset_perms (Optional[bool]):
True will overwrite the permissions on the specified object. False
will append the permissions. Default is False
protected (Optional[bool]):
True will disable inheritance for the object. False will enable
inheritance. None will make no change. Default is None.
Returns:
bool: True if successful, raises an error otherwise
Usage:
.. code-block:: python
salt.utils.win_dacl.set_permissions(
'C:\\Temp', 'jsnuffy', 'full_control', 'grant')
### Response:
def set_permissions(obj_name,
principal,
permissions,
access_mode='grant',
applies_to=None,
obj_type='file',
reset_perms=False,
protected=None):
'''
Set the permissions of an object. This can be a file, folder, registry key,
printer, service, etc...
Args:
obj_name (str):
The object for which to set permissions. This can be the path to a
file or folder, a registry key, printer, etc. For more information
about how to format the name see:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa379593(v=vs.85).aspx
principal (str):
The name of the user or group for which to set permissions. Can also
pass a SID.
permissions (str, list):
The type of permissions to grant/deny the user. Can be one of the
basic permissions, or a list of advanced permissions.
access_mode (Optional[str]):
Whether to grant or deny user the access. Valid options are:
- grant (default): Grants the user access
- deny: Denies the user access
applies_to (Optional[str]):
The objects to which these permissions will apply. Not all these
options apply to all object types. Defaults to
'this_folder_subfolders_files'
obj_type (Optional[str]):
The type of object for which to set permissions. Default is 'file'
reset_perms (Optional[bool]):
True will overwrite the permissions on the specified object. False
will append the permissions. Default is False
protected (Optional[bool]):
True will disable inheritance for the object. False will enable
inheritance. None will make no change. Default is None.
Returns:
bool: True if successful, raises an error otherwise
Usage:
.. code-block:: python
salt.utils.win_dacl.set_permissions(
'C:\\Temp', 'jsnuffy', 'full_control', 'grant')
'''
# Set up applies_to defaults used by registry and file types
if applies_to is None:
if 'registry' in obj_type.lower():
applies_to = 'this_key_subkeys'
elif obj_type.lower() == 'file':
applies_to = 'this_folder_subfolders_files'
# If you don't pass `obj_name` it will create a blank DACL
# Otherwise, it will grab the existing DACL and add to it
if reset_perms:
obj_dacl = dacl(obj_type=obj_type)
else:
obj_dacl = dacl(obj_name, obj_type)
obj_dacl.rm_ace(principal, access_mode)
obj_dacl.add_ace(principal, access_mode, permissions, applies_to)
obj_dacl.order_acl()
obj_dacl.save(obj_name, protected)
return True |
def split_command_line(command_line):
'''This splits a command line into a list of arguments. It splits arguments
on spaces, but handles embedded quotes, doublequotes, and escaped
characters. It's impossible to do this with a regular expression, so I
wrote a little state machine to parse the command line. '''
arg_list = []
arg = ''
# Constants to name the states we can be in.
state_basic = 0
state_esc = 1
state_singlequote = 2
state_doublequote = 3
# The state when consuming whitespace between commands.
state_whitespace = 4
state = state_basic
for c in command_line:
if state == state_basic or state == state_whitespace:
if c == '\\':
# Escape the next character
state = state_esc
elif c == r"'":
# Handle single quote
state = state_singlequote
elif c == r'"':
# Handle double quote
state = state_doublequote
elif c.isspace():
# Add arg to arg_list if we aren't in the middle of whitespace.
if state == state_whitespace:
# Do nothing.
None
else:
arg_list.append(arg)
arg = ''
state = state_whitespace
else:
arg = arg + c
state = state_basic
elif state == state_esc:
arg = arg + c
state = state_basic
elif state == state_singlequote:
if c == r"'":
state = state_basic
else:
arg = arg + c
elif state == state_doublequote:
if c == r'"':
state = state_basic
else:
arg = arg + c
if arg != '':
arg_list.append(arg)
return arg_list | This splits a command line into a list of arguments. It splits arguments
on spaces, but handles embedded quotes, doublequotes, and escaped
characters. It's impossible to do this with a regular expression, so I
wrote a little state machine to parse the command line. | Below is the instruction that describes the task:
### Input:
This splits a command line into a list of arguments. It splits arguments
on spaces, but handles embedded quotes, doublequotes, and escaped
characters. It's impossible to do this with a regular expression, so I
wrote a little state machine to parse the command line.
### Response:
def split_command_line(command_line):
'''This splits a command line into a list of arguments. It splits arguments
on spaces, but handles embedded quotes, doublequotes, and escaped
characters. It's impossible to do this with a regular expression, so I
wrote a little state machine to parse the command line. '''
arg_list = []
arg = ''
# Constants to name the states we can be in.
state_basic = 0
state_esc = 1
state_singlequote = 2
state_doublequote = 3
# The state when consuming whitespace between commands.
state_whitespace = 4
state = state_basic
for c in command_line:
if state == state_basic or state == state_whitespace:
if c == '\\':
# Escape the next character
state = state_esc
elif c == r"'":
# Handle single quote
state = state_singlequote
elif c == r'"':
# Handle double quote
state = state_doublequote
elif c.isspace():
# Add arg to arg_list if we aren't in the middle of whitespace.
if state == state_whitespace:
# Do nothing.
None
else:
arg_list.append(arg)
arg = ''
state = state_whitespace
else:
arg = arg + c
state = state_basic
elif state == state_esc:
arg = arg + c
state = state_basic
elif state == state_singlequote:
if c == r"'":
state = state_basic
else:
arg = arg + c
elif state == state_doublequote:
if c == r'"':
state = state_basic
else:
arg = arg + c
if arg != '':
arg_list.append(arg)
return arg_list |
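The four-state parser above can be exercised directly. The sketch below condenses the same states (basic, escape, single-quote, double-quote, whitespace) into one self-contained function; on simple inputs its behavior matches the original, including dropping a trailing empty argument.

```python
def split_command_line(command_line):
    # Condensed re-statement of the state machine above, for illustration.
    args, arg = [], ''
    BASIC, ESC, SQUOTE, DQUOTE, WS = range(5)
    state = BASIC
    for c in command_line:
        if state in (BASIC, WS):
            if c == '\\':
                state = ESC              # escape the next character
            elif c == "'":
                state = SQUOTE
            elif c == '"':
                state = DQUOTE
            elif c.isspace():
                if state != WS:          # close the current argument
                    args.append(arg)
                    arg = ''
                    state = WS
            else:
                arg += c
                state = BASIC
        elif state == ESC:
            arg += c
            state = BASIC
        elif state == SQUOTE:
            if c == "'":
                state = BASIC
            else:
                arg += c
        elif state == DQUOTE:
            if c == '"':
                state = BASIC
            else:
                arg += c
    if arg:
        args.append(arg)
    return args

print(split_command_line(r'a "b c" d\ e'))  # ['a', 'b c', 'd e']
```

Note one quirk shared with the original: input that begins with whitespace yields an empty first argument, because the first space is seen while still in the basic state.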
def setData(self, wordBeforeCursor, wholeWord):
"""Set model information
"""
self._typedText = wordBeforeCursor
self.words = self._makeListOfCompletions(wordBeforeCursor, wholeWord)
commonStart = self._commonWordStart(self.words)
self.canCompleteText = commonStart[len(wordBeforeCursor):]
    self.layoutChanged.emit() | Set model information | Below is the instruction that describes the task:
### Input:
Set model information
### Response:
def setData(self, wordBeforeCursor, wholeWord):
"""Set model information
"""
self._typedText = wordBeforeCursor
self.words = self._makeListOfCompletions(wordBeforeCursor, wholeWord)
commonStart = self._commonWordStart(self.words)
self.canCompleteText = commonStart[len(wordBeforeCursor):]
self.layoutChanged.emit() |
def write_async(name, values, tags={}, timestamp=None, database=None):
""" write metrics """
thread = Thread(target=write,
args=(name, values, tags, timestamp, database))
    thread.start() | write metrics | Below is the instruction that describes the task:
### Input:
write metrics
### Response:
def write_async(name, values, tags={}, timestamp=None, database=None):
""" write metrics """
thread = Thread(target=write,
args=(name, values, tags, timestamp, database))
thread.start() |
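`write_async` is a fire-and-forget wrapper: nothing reports failures back to the caller, and the thread handle is discarded. A minimal runnable sketch of the same pattern is below; the `write` stand-in is hypothetical (the real one presumably sends metrics to a time-series store), and returning the thread is an addition the original does not make.

```python
from threading import Thread

results = []

def write(name, values, tags={}, timestamp=None, database=None):
    # Hypothetical stand-in for the real metric writer.
    results.append((name, values))

def write_async(name, values, tags={}, timestamp=None, database=None):
    # The mutable default `tags={}` mirrors the original signature.
    thread = Thread(target=write,
                    args=(name, values, tags, timestamp, database))
    thread.start()
    return thread  # not in the original; lets callers join() when they need a barrier

t = write_async("cpu", {"load": 0.5})
t.join()
print(results)  # [('cpu', {'load': 0.5})]
```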
def create_zipfile(context):
"""This is the actual zest.releaser entry point
Relevant items in the context dict:
name
Name of the project being released
tagdir
Directory where the tag checkout is placed (*if* a tag
checkout has been made)
version
Version we're releasing
workingdir
Original working directory
"""
if not prerequisites_ok():
return
# Create a zipfile.
subprocess.call(['make', 'zip'])
for zipfile in glob.glob('*.zip'):
first_part = zipfile.split('.')[0]
new_name = "%s.%s.zip" % (first_part, context['version'])
target = os.path.join(context['workingdir'], new_name)
shutil.copy(zipfile, target)
print("Copied %s to %s" % (zipfile, target)) | This is the actual zest.releaser entry point
Relevant items in the context dict:
name
Name of the project being released
tagdir
Directory where the tag checkout is placed (*if* a tag
checkout has been made)
version
Version we're releasing
workingdir
Original working directory | Below is the instruction that describes the task:
### Input:
This is the actual zest.releaser entry point
Relevant items in the context dict:
name
Name of the project being released
tagdir
Directory where the tag checkout is placed (*if* a tag
checkout has been made)
version
Version we're releasing
workingdir
Original working directory
### Response:
def create_zipfile(context):
"""This is the actual zest.releaser entry point
Relevant items in the context dict:
name
Name of the project being released
tagdir
Directory where the tag checkout is placed (*if* a tag
checkout has been made)
version
Version we're releasing
workingdir
Original working directory
"""
if not prerequisites_ok():
return
# Create a zipfile.
subprocess.call(['make', 'zip'])
for zipfile in glob.glob('*.zip'):
first_part = zipfile.split('.')[0]
new_name = "%s.%s.zip" % (first_part, context['version'])
target = os.path.join(context['workingdir'], new_name)
shutil.copy(zipfile, target)
print("Copied %s to %s" % (zipfile, target)) |
def stop(self, container, **kwargs):
"""
Identical to :meth:`dockermap.client.base.DockerClientWrapper.stop` with additional logging.
"""
self.push_log("Stopping container '{0}'.".format(container))
    super(DockerFabricClient, self).stop(container, **kwargs) | Identical to :meth:`dockermap.client.base.DockerClientWrapper.stop` with additional logging. | Below is the instruction that describes the task:
### Input:
Identical to :meth:`dockermap.client.base.DockerClientWrapper.stop` with additional logging.
### Response:
def stop(self, container, **kwargs):
"""
Identical to :meth:`dockermap.client.base.DockerClientWrapper.stop` with additional logging.
"""
self.push_log("Stopping container '{0}'.".format(container))
super(DockerFabricClient, self).stop(container, **kwargs) |
def dump(val, humanread = True, dumpextra = False, typeinfo = DUMPTYPE_FLAT, ordered=True,
tostr=False, encoding='utf-8'):
'''
Convert a parsed NamedStruct (probably with additional NamedStruct as fields) into a
JSON-friendly format, with only Python primitives (dictionaries, lists, bytes, integers etc.)
Then you may use json.dumps, or pprint to further process the result.
:param val: parsed result, may contain NamedStruct
:param humanread: if True (default), convert raw data into readable format with type-defined formatters.
For example, enumerators are converted into names, IP addresses are converted into dotted formats, etc.
:param dumpextra: if True, dump "extra" data in '_extra' field. False (default) to ignore them.
:param typeinfo: Add struct type information in the dump result. May be the following values:
DUMPTYPE_FLAT ('flat')
add a field '_type' for the type information (default)
DUMPTYPE_KEY ('key')
convert the value to dictionary like: {'<struc_type>': value}
DUMPTYPE_NONE ('none')
do not add type information
:param tostr: if True, convert all bytes to str
:param encoding: if tostr=`True`, first try to decode bytes in `encoding`. If failed, use `repr()` instead.
:returns: "dump" format of val, suitable for JSON-encode or print.
'''
dumped = _dump(val, humanread, dumpextra, typeinfo, ordered)
if tostr:
dumped = _to_str(dumped, encoding, ordered)
return dumped | Convert a parsed NamedStruct (probably with additional NamedStruct as fields) into a
JSON-friendly format, with only Python primitives (dictionaries, lists, bytes, integers etc.)
Then you may use json.dumps, or pprint to further process the result.
:param val: parsed result, may contain NamedStruct
:param humanread: if True (default), convert raw data into readable format with type-defined formatters.
For example, enumerators are converted into names, IP addresses are converted into dotted formats, etc.
:param dumpextra: if True, dump "extra" data in '_extra' field. False (default) to ignore them.
:param typeinfo: Add struct type information in the dump result. May be the following values:
DUMPTYPE_FLAT ('flat')
add a field '_type' for the type information (default)
DUMPTYPE_KEY ('key')
convert the value to dictionary like: {'<struc_type>': value}
DUMPTYPE_NONE ('none')
do not add type information
:param tostr: if True, convert all bytes to str
:param encoding: if tostr=`True`, first try to decode bytes in `encoding`. If failed, use `repr()` instead.
:returns: "dump" format of val, suitable for JSON-encode or print. | Below is the instruction that describes the task:
### Input:
Convert a parsed NamedStruct (probably with additional NamedStruct as fields) into a
JSON-friendly format, with only Python primitives (dictionaries, lists, bytes, integers etc.)
Then you may use json.dumps, or pprint to further process the result.
:param val: parsed result, may contain NamedStruct
:param humanread: if True (default), convert raw data into readable format with type-defined formatters.
For example, enumerators are converted into names, IP addresses are converted into dotted formats, etc.
:param dumpextra: if True, dump "extra" data in '_extra' field. False (default) to ignore them.
:param typeinfo: Add struct type information in the dump result. May be the following values:
DUMPTYPE_FLAT ('flat')
add a field '_type' for the type information (default)
DUMPTYPE_KEY ('key')
convert the value to dictionary like: {'<struc_type>': value}
DUMPTYPE_NONE ('none')
do not add type information
:param tostr: if True, convert all bytes to str
:param encoding: if tostr=`True`, first try to decode bytes in `encoding`. If failed, use `repr()` instead.
:returns: "dump" format of val, suitable for JSON-encode or print.
### Response:
def dump(val, humanread = True, dumpextra = False, typeinfo = DUMPTYPE_FLAT, ordered=True,
tostr=False, encoding='utf-8'):
'''
Convert a parsed NamedStruct (probably with additional NamedStruct as fields) into a
JSON-friendly format, with only Python primitives (dictionaries, lists, bytes, integers etc.)
Then you may use json.dumps, or pprint to further process the result.
:param val: parsed result, may contain NamedStruct
:param humanread: if True (default), convert raw data into readable format with type-defined formatters.
For example, enumerators are converted into names, IP addresses are converted into dotted formats, etc.
:param dumpextra: if True, dump "extra" data in '_extra' field. False (default) to ignore them.
:param typeinfo: Add struct type information in the dump result. May be the following values:
DUMPTYPE_FLAT ('flat')
add a field '_type' for the type information (default)
DUMPTYPE_KEY ('key')
convert the value to dictionary like: {'<struc_type>': value}
DUMPTYPE_NONE ('none')
do not add type information
:param tostr: if True, convert all bytes to str
:param encoding: if tostr=`True`, first try to decode bytes in `encoding`. If failed, use `repr()` instead.
:returns: "dump" format of val, suitable for JSON-encode or print.
'''
dumped = _dump(val, humanread, dumpextra, typeinfo, ordered)
if tostr:
dumped = _to_str(dumped, encoding, ordered)
return dumped |
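The three `typeinfo` modes differ only in how the struct type name is attached to the dumped value. The illustration below uses a plain dict in place of a real NamedStruct (so it runs without the namedstruct library), with the strings `'flat'`/`'key'` standing in for the `DUMPTYPE_*` constants:

```python
def attach_type(value, struct_type, typeinfo):
    # Illustration only: the shape each DUMPTYPE_* mode produces for an
    # already-dumped dict; the real dump() walks NamedStructs recursively.
    if typeinfo == "flat":          # DUMPTYPE_FLAT: add a '_type' field
        out = dict(value)
        out["_type"] = struct_type
        return out
    if typeinfo == "key":           # DUMPTYPE_KEY: wrap as {type: value}
        return {struct_type: value}
    return value                    # DUMPTYPE_NONE: no type information

v = {"src": "10.0.0.1"}
print(attach_type(v, "ip_header", "flat"))  # {'src': '10.0.0.1', '_type': 'ip_header'}
print(attach_type(v, "ip_header", "key"))   # {'ip_header': {'src': '10.0.0.1'}}
```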
def close(self):
"""Close all active poll instances and remove all callbacks."""
if self._mpoll is None:
return
for mpoll in self._mpoll.values():
mpoll.close()
self._mpoll.clear()
    self._mpoll = None | Close all active poll instances and remove all callbacks. | Below is the instruction that describes the task:
### Input:
Close all active poll instances and remove all callbacks.
### Response:
def close(self):
"""Close all active poll instances and remove all callbacks."""
if self._mpoll is None:
return
for mpoll in self._mpoll.values():
mpoll.close()
self._mpoll.clear()
self._mpoll = None |
def set_bug(self, bug_number):
"""
Set the bug number of this Classified Failure
If an existing ClassifiedFailure exists with the same bug number
replace this instance with the existing one.
"""
if bug_number == self.bug_number:
return self
other = ClassifiedFailure.objects.filter(bug_number=bug_number).first()
if not other:
self.bug_number = bug_number
self.save(update_fields=['bug_number'])
return self
self.replace_with(other)
return other | Set the bug number of this Classified Failure
If an existing ClassifiedFailure exists with the same bug number
replace this instance with the existing one. | Below is the instruction that describes the task:
### Input:
Set the bug number of this Classified Failure
If an existing ClassifiedFailure exists with the same bug number
replace this instance with the existing one.
### Response:
def set_bug(self, bug_number):
"""
Set the bug number of this Classified Failure
If an existing ClassifiedFailure exists with the same bug number
replace this instance with the existing one.
"""
if bug_number == self.bug_number:
return self
other = ClassifiedFailure.objects.filter(bug_number=bug_number).first()
if not other:
self.bug_number = bug_number
self.save(update_fields=['bug_number'])
return self
self.replace_with(other)
return other |
def set_power(self, power):
"""Send Power command."""
req_url = ENDPOINTS["setPower"].format(self.ip_address, self.zone_id)
params = {"power": "on" if power else "standby"}
    return request(req_url, params=params) | Send Power command. | Below is the instruction that describes the task:
### Input:
Send Power command.
### Response:
def set_power(self, power):
"""Send Power command."""
req_url = ENDPOINTS["setPower"].format(self.ip_address, self.zone_id)
params = {"power": "on" if power else "standby"}
return request(req_url, params=params) |
def chunks(l,n):
'''chunk l in n sized bits'''
#http://stackoverflow.com/a/3226719
#...not that this is hard to understand.
    return [l[x:x+n] for x in range(0, len(l), n)]; | chunk l in n sized bits | Below is the instruction that describes the task:
### Input:
chunk l in n sized bits
### Response:
def chunks(l,n):
'''chunk l in n sized bits'''
#http://stackoverflow.com/a/3226719
#...not that this is hard to understand.
return [l[x:x+n] for x in range(0, len(l), n)]; |
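Since `chunks` relies only on slicing, it works on any sliceable sequence, not just lists, and the last chunk may be shorter than `n`:

```python
def chunks(l, n):
    '''chunk l in n sized bits'''
    return [l[x:x + n] for x in range(0, len(l), n)]

print(chunks([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
print(chunks("abcdef", 3))         # ['abc', 'def']
```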
def change_password():
"""
Changes the standard password from neo4j to testing to be able to run the test suite.
"""
basic_auth = '%s:%s' % (DEFAULT_USERNAME, DEFAULT_PASSWORD)
try: # Python 2
auth = base64.encodestring(basic_auth)
except TypeError: # Python 3
auth = base64.encodestring(bytes(basic_auth, 'utf-8')).decode()
headers = {
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": "Basic %s" % auth.strip()
}
response = None
retry = 0
while not response: # Retry if the server is not ready yet
sleep(1)
con = http.HTTPConnection('localhost:7474', timeout=10)
try:
con.request('GET', 'http://localhost:7474/user/neo4j', headers=headers)
response = json.loads(con.getresponse().read().decode('utf-8'))
except ValueError:
con.close()
retry += 1
if retry > 10:
print("Could not change password for user neo4j")
break
if response and response.get('password_change_required', None):
payload = json.dumps({'password': 'testing'})
con.request('POST', 'http://localhost:7474/user/neo4j/password', payload, headers)
print("Password changed for user neo4j")
        con.close() | Changes the standard password from neo4j to testing to be able to run the test suite. | Below is the instruction that describes the task:
### Input:
Changes the standard password from neo4j to testing to be able to run the test suite.
### Response:
def change_password():
"""
Changes the standard password from neo4j to testing to be able to run the test suite.
"""
basic_auth = '%s:%s' % (DEFAULT_USERNAME, DEFAULT_PASSWORD)
try: # Python 2
auth = base64.encodestring(basic_auth)
except TypeError: # Python 3
auth = base64.encodestring(bytes(basic_auth, 'utf-8')).decode()
headers = {
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": "Basic %s" % auth.strip()
}
response = None
retry = 0
while not response: # Retry if the server is not ready yet
sleep(1)
con = http.HTTPConnection('localhost:7474', timeout=10)
try:
con.request('GET', 'http://localhost:7474/user/neo4j', headers=headers)
response = json.loads(con.getresponse().read().decode('utf-8'))
except ValueError:
con.close()
retry += 1
if retry > 10:
print("Could not change password for user neo4j")
break
if response and response.get('password_change_required', None):
payload = json.dumps({'password': 'testing'})
con.request('POST', 'http://localhost:7474/user/neo4j/password', payload, headers)
print("Password changed for user neo4j")
con.close() |
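The Python 2/3 branching around `base64.encodestring` can be avoided entirely: `encodestring` was deprecated and removed in Python 3.9, while `b64encode` behaves identically on both. It also emits no trailing newline, so the later `.strip()` becomes unnecessary. A sketch of building the same `Basic` header:

```python
import base64

basic_auth = "%s:%s" % ("neo4j", "neo4j")   # DEFAULT_USERNAME:DEFAULT_PASSWORD
auth = base64.b64encode(basic_auth.encode("utf-8")).decode("ascii")
header_value = "Basic %s" % auth
print(header_value)  # Basic bmVvNGo6bmVvNGo=
```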
def set_dependencies(ctx, archive_name, dependency=None):
'''
Set the dependencies of an archive
'''
_generate_api(ctx)
kwargs = _parse_dependencies(dependency)
var = ctx.obj.api.get_archive(archive_name)
    var.set_dependencies(dependencies=kwargs) | Set the dependencies of an archive | Below is the instruction that describes the task:
### Input:
Set the dependencies of an archive
### Response:
def set_dependencies(ctx, archive_name, dependency=None):
'''
Set the dependencies of an archive
'''
_generate_api(ctx)
kwargs = _parse_dependencies(dependency)
var = ctx.obj.api.get_archive(archive_name)
var.set_dependencies(dependencies=kwargs) |
def add(name, gid=None, system=False, root=None):
'''
Add the specified group
name
Name of the new group
gid
Use GID for the new group
system
Create a system account
root
Directory to chroot into
CLI Example:
.. code-block:: bash
salt '*' group.add foo 3456
'''
cmd = ['groupadd']
if gid:
cmd.append('-g {0}'.format(gid))
if system and __grains__['kernel'] != 'OpenBSD':
cmd.append('-r')
if root is not None:
cmd.extend(('-R', root))
cmd.append(name)
ret = __salt__['cmd.run_all'](cmd, python_shell=False)
return not ret['retcode'] | Add the specified group
name
Name of the new group
gid
Use GID for the new group
system
Create a system account
root
Directory to chroot into
CLI Example:
.. code-block:: bash
salt '*' group.add foo 3456 | Below is the instruction that describes the task:
### Input:
Add the specified group
name
Name of the new group
gid
Use GID for the new group
system
Create a system account
root
Directory to chroot into
CLI Example:
.. code-block:: bash
salt '*' group.add foo 3456
### Response:
def add(name, gid=None, system=False, root=None):
'''
Add the specified group
name
Name of the new group
gid
Use GID for the new group
system
Create a system account
root
Directory to chroot into
CLI Example:
.. code-block:: bash
salt '*' group.add foo 3456
'''
cmd = ['groupadd']
if gid:
cmd.append('-g {0}'.format(gid))
if system and __grains__['kernel'] != 'OpenBSD':
cmd.append('-r')
if root is not None:
cmd.extend(('-R', root))
cmd.append(name)
ret = __salt__['cmd.run_all'](cmd, python_shell=False)
return not ret['retcode'] |
def output_file_name(self):
"""Name of the file where plugin's output should be written to."""
safe_path = re.sub(r":|/", "_", self.source_urn.Path().lstrip("/"))
return "results_%s%s" % (safe_path, self.output_file_extension) | Name of the file where plugin's output should be written to. | Below is the the instruction that describes the task:
### Input:
Name of the file where plugin's output should be written to.
### Response:
def output_file_name(self):
"""Name of the file where plugin's output should be written to."""
safe_path = re.sub(r":|/", "_", self.source_urn.Path().lstrip("/"))
return "results_%s%s" % (safe_path, self.output_file_extension) |
def make_function_value_private(self, value, value_type, function):
"""
Wraps converted value so that it is hidden in logs etc.
Note this is not secure just reduces leaking info
Allows base 64 encode stuff using base64() or plain hide() in the
config
"""
# remove quotes
value = self.remove_quotes(value)
if function == "base64":
try:
import base64
value = base64.b64decode(value).decode("utf-8")
except TypeError as e:
self.notify_user("base64(..) error %s" % str(e))
# check we are in a module definition etc
if not self.current_module:
self.notify_user("%s(..) used outside of module or section" % function)
return None
module = self.current_module[-1].split()[0]
if module in CONFIG_FILE_SPECIAL_SECTIONS + I3S_MODULE_NAMES:
self.notify_user(
"%s(..) cannot be used outside of py3status module "
"configuration" % function
)
return None
value = self.value_convert(value, value_type)
module_name = self.current_module[-1]
return PrivateHide(value, module_name) | Wraps converted value so that it is hidden in logs etc.
Note this is not secure just reduces leaking info
Allows base 64 encode stuff using base64() or plain hide() in the
config | Below is the instruction that describes the task:
### Input:
Wraps converted value so that it is hidden in logs etc.
Note this is not secure just reduces leaking info
Allows base 64 encode stuff using base64() or plain hide() in the
config
### Response:
def make_function_value_private(self, value, value_type, function):
"""
Wraps converted value so that it is hidden in logs etc.
Note this is not secure just reduces leaking info
Allows base 64 encode stuff using base64() or plain hide() in the
config
"""
# remove quotes
value = self.remove_quotes(value)
if function == "base64":
try:
import base64
value = base64.b64decode(value).decode("utf-8")
except TypeError as e:
self.notify_user("base64(..) error %s" % str(e))
# check we are in a module definition etc
if not self.current_module:
self.notify_user("%s(..) used outside of module or section" % function)
return None
module = self.current_module[-1].split()[0]
if module in CONFIG_FILE_SPECIAL_SECTIONS + I3S_MODULE_NAMES:
self.notify_user(
"%s(..) cannot be used outside of py3status module "
"configuration" % function
)
return None
value = self.value_convert(value, value_type)
module_name = self.current_module[-1]
return PrivateHide(value, module_name) |
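As the docstring warns, `base64(...)` in the config only reduces casual leaking; it is trivially reversible. The decode branch reduces to a single call (the value below is a hypothetical config entry):

```python
import base64

raw = "aGVsbG8="   # hypothetical value written as base64(aGVsbG8=) in the config
decoded = base64.b64decode(raw).decode("utf-8")
print(decoded)  # hello
```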
def get_sqlproxy_runner(self):
"""
Retrieve Cloud SQL Proxy runner. It is used to manage the proxy
lifecycle per task.
:return: The Cloud SQL Proxy runner.
:rtype: CloudSqlProxyRunner
"""
if not self.use_proxy:
raise AirflowException("Proxy runner can only be retrieved in case of use_proxy = True")
return CloudSqlProxyRunner(
path_prefix=self.sql_proxy_unique_path,
instance_specification=self._get_sqlproxy_instance_specification(),
project_id=self.project_id,
sql_proxy_version=self.sql_proxy_version,
sql_proxy_binary_path=self.sql_proxy_binary_path
) | Retrieve Cloud SQL Proxy runner. It is used to manage the proxy
lifecycle per task.
:return: The Cloud SQL Proxy runner.
:rtype: CloudSqlProxyRunner | Below is the instruction that describes the task:
### Input:
Retrieve Cloud SQL Proxy runner. It is used to manage the proxy
lifecycle per task.
:return: The Cloud SQL Proxy runner.
:rtype: CloudSqlProxyRunner
### Response:
def get_sqlproxy_runner(self):
"""
Retrieve Cloud SQL Proxy runner. It is used to manage the proxy
lifecycle per task.
:return: The Cloud SQL Proxy runner.
:rtype: CloudSqlProxyRunner
"""
if not self.use_proxy:
raise AirflowException("Proxy runner can only be retrieved in case of use_proxy = True")
return CloudSqlProxyRunner(
path_prefix=self.sql_proxy_unique_path,
instance_specification=self._get_sqlproxy_instance_specification(),
project_id=self.project_id,
sql_proxy_version=self.sql_proxy_version,
sql_proxy_binary_path=self.sql_proxy_binary_path
) |
def volume_detach(self,
name,
timeout=300):
'''
Detach a block device
'''
try:
volume = self.volume_show(name)
except KeyError as exc:
raise SaltCloudSystemExit('Unable to find {0} volume: {1}'.format(name, exc))
if not volume['attachments']:
return True
response = self.compute_conn.volumes.delete_server_volume(
volume['attachments'][0]['server_id'],
volume['attachments'][0]['id']
)
trycount = 0
start = time.time()
while True:
trycount += 1
try:
response = self._volume_get(volume['id'])
if response['status'] == 'available':
return response
except Exception as exc:
log.debug('Volume is detaching: %s', name)
time.sleep(1)
if time.time() - start > timeout:
log.error('Timed out after %d seconds '
'while waiting for data', timeout)
return False
log.debug(
'Retrying volume_show() (try %d)', trycount
        ) | Detach a block device | Below is the instruction that describes the task:
### Input:
Detach a block device
### Response:
def volume_detach(self,
name,
timeout=300):
'''
Detach a block device
'''
try:
volume = self.volume_show(name)
except KeyError as exc:
raise SaltCloudSystemExit('Unable to find {0} volume: {1}'.format(name, exc))
if not volume['attachments']:
return True
response = self.compute_conn.volumes.delete_server_volume(
volume['attachments'][0]['server_id'],
volume['attachments'][0]['id']
)
trycount = 0
start = time.time()
while True:
trycount += 1
try:
response = self._volume_get(volume['id'])
if response['status'] == 'available':
return response
except Exception as exc:
log.debug('Volume is detaching: %s', name)
time.sleep(1)
if time.time() - start > timeout:
log.error('Timed out after %d seconds '
'while waiting for data', timeout)
return False
log.debug(
'Retrying volume_show() (try %d)', trycount
) |
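The detach loop is a standard poll-until-ready pattern: retry, sleep, and bail out after a timeout. A generic, runnable sketch of the same shape (all names here are illustrative, not part of the API above):

```python
import time

def wait_for(poll, timeout=5.0, interval=0.01):
    # Same shape as the loop above: keep polling until the callable
    # returns a value, or give up after `timeout` seconds.
    start = time.time()
    while True:
        result = poll()
        if result is not None:
            return result
        if time.time() - start > timeout:
            return False
        time.sleep(interval)

calls = {"n": 0}

def fake_status():
    # Pretends the volume becomes 'available' on the third poll.
    calls["n"] += 1
    return "available" if calls["n"] >= 3 else None

status = wait_for(fake_status, timeout=2.0)
print(status)  # available
```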
def retrieve(func):
"""
Decorator for Zotero read API methods; calls _retrieve_data() and passes
the result to the correct processor, based on a lookup
"""
def wrapped_f(self, *args, **kwargs):
"""
Returns result of _retrieve_data()
func's return value is part of a URI, and it's this
which is intercepted and passed to _retrieve_data:
'/users/123/items?key=abc123'
"""
if kwargs:
self.add_parameters(**kwargs)
retrieved = self._retrieve_data(func(self, *args))
# we now always have links in the header response
self.links = self._extract_links()
# determine content and format, based on url params
content = (
self.content.search(self.request.url)
and self.content.search(self.request.url).group(0)
or "bib"
)
# JSON by default
formats = {
"application/atom+xml": "atom",
"application/x-bibtex": "bibtex",
"application/json": "json",
"text/html": "snapshot",
"text/plain": "plain",
"application/pdf; charset=utf-8": "pdf",
"application/pdf": "pdf",
"application/msword": "doc",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": "xlsx",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": "docx",
"application/zip": "zip",
"application/epub+zip": "zip",
"audio/mpeg": "mp3",
"video/mp4": "mp4",
"audio/x-wav": "wav",
"video/x-msvideo": "avi",
"application/octet-stream": "octet",
"application/x-tex": "tex",
"application/x-texinfo": "texinfo",
"image/jpeg": "jpeg",
"image/png": "png",
"image/gif": "gif",
"image/tiff": "tiff",
"application/postscript": "postscript",
"application/rtf": "rtf",
}
# select format, or assume JSON
content_type_header = self.request.headers["Content-Type"].lower() + ";"
re.compile("\s+")
fmt = formats.get(
# strip "; charset=..." segment
content_type_header[0: content_type_header.index(";")],
"json",
)
# clear all query parameters
self.url_params = None
# check to see whether it's tag data
if "tags" in self.request.url:
self.tag_data = False
return self._tags_data(retrieved.json())
if fmt == "atom":
parsed = feedparser.parse(retrieved.text)
# select the correct processor
processor = self.processors.get(content)
# process the content correctly with a custom rule
return processor(parsed)
if fmt == "snapshot":
# we need to dump as a zip!
self.snapshot = True
if fmt == "bibtex":
parser = bibtexparser.bparser.BibTexParser(common_strings=True)
return parser.parse(retrieved.text)
# it's binary, so return raw content
elif fmt != "json":
return retrieved.content
# no need to do anything special, return JSON
else:
return retrieved.json()
return wrapped_f | Decorator for Zotero read API methods; calls _retrieve_data() and passes
the result to the correct processor, based on a lookup | Below is the instruction that describes the task:
### Input:
Decorator for Zotero read API methods; calls _retrieve_data() and passes
the result to the correct processor, based on a lookup
### Response:
def retrieve(func):
"""
Decorator for Zotero read API methods; calls _retrieve_data() and passes
the result to the correct processor, based on a lookup
"""
def wrapped_f(self, *args, **kwargs):
"""
Returns result of _retrieve_data()
func's return value is part of a URI, and it's this
which is intercepted and passed to _retrieve_data:
'/users/123/items?key=abc123'
"""
if kwargs:
self.add_parameters(**kwargs)
retrieved = self._retrieve_data(func(self, *args))
# we now always have links in the header response
self.links = self._extract_links()
# determine content and format, based on url params
content = (
self.content.search(self.request.url)
and self.content.search(self.request.url).group(0)
or "bib"
)
# JSON by default
formats = {
"application/atom+xml": "atom",
"application/x-bibtex": "bibtex",
"application/json": "json",
"text/html": "snapshot",
"text/plain": "plain",
"application/pdf; charset=utf-8": "pdf",
"application/pdf": "pdf",
"application/msword": "doc",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": "xlsx",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": "docx",
"application/zip": "zip",
"application/epub+zip": "zip",
"audio/mpeg": "mp3",
"video/mp4": "mp4",
"audio/x-wav": "wav",
"video/x-msvideo": "avi",
"application/octet-stream": "octet",
"application/x-tex": "tex",
"application/x-texinfo": "texinfo",
"image/jpeg": "jpeg",
"image/png": "png",
"image/gif": "gif",
"image/tiff": "tiff",
"application/postscript": "postscript",
"application/rtf": "rtf",
}
# select format, or assume JSON
content_type_header = self.request.headers["Content-Type"].lower() + ";"
re.compile("\s+")
fmt = formats.get(
# strip "; charset=..." segment
content_type_header[0: content_type_header.index(";")],
"json",
)
# clear all query parameters
self.url_params = None
# check to see whether it's tag data
if "tags" in self.request.url:
self.tag_data = False
return self._tags_data(retrieved.json())
if fmt == "atom":
parsed = feedparser.parse(retrieved.text)
# select the correct processor
processor = self.processors.get(content)
# process the content correctly with a custom rule
return processor(parsed)
if fmt == "snapshot":
# we need to dump as a zip!
self.snapshot = True
if fmt == "bibtex":
parser = bibtexparser.bparser.BibTexParser(common_strings=True)
return parser.parse(retrieved.text)
# it's binary, so return raw content
elif fmt != "json":
return retrieved.content
# no need to do anything special, return JSON
else:
return retrieved.json()
return wrapped_f |
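The format-selection trick above — appending `";"` so `str.index(";")` always succeeds, then slicing off any `"; charset=..."` suffix — can be isolated and tested on its own. The table below is a trimmed copy of the one in the decorator:

```python
formats = {
    "application/json": "json",
    "application/pdf": "pdf",
    "application/atom+xml": "atom",
}

def detect_format(content_type_header, default="json"):
    # Append ";" so index(";") never raises, then keep only the media type.
    header = content_type_header.lower() + ";"
    return formats.get(header[: header.index(";")], default)

print(detect_format("application/pdf; charset=utf-8"))  # pdf
print(detect_format("text/csv"))                        # json (fallback)
```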
def get_service_resources(cls, model):
""" Get resource models by service model """
key = cls.get_model_key(model)
    return cls.get_service_name_resources(key) | Get resource models by service model | Below is the instruction that describes the task:
### Input:
Get resource models by service model
### Response:
def get_service_resources(cls, model):
""" Get resource models by service model """
key = cls.get_model_key(model)
return cls.get_service_name_resources(key) |
def job_data(cls, job_id=dummy_job_id, status=None, code=None, error=None, **kwargs):
"""
Returns a dictionary containing default job status information such as the *job_id*, a job
*status* string, a job return code, and an *error* message.
"""
return dict(job_id=job_id, status=status, code=code, error=error) | Returns a dictionary containing default job status information such as the *job_id*, a job
*status* string, a job return code, and an *error* message. | Below is the instruction that describes the task:
### Input:
Returns a dictionary containing default job status information such as the *job_id*, a job
*status* string, a job return code, and an *error* message.
### Response:
def job_data(cls, job_id=dummy_job_id, status=None, code=None, error=None, **kwargs):
"""
Returns a dictionary containing default job status information such as the *job_id*, a job
*status* string, a job return code, and an *error* message.
"""
return dict(job_id=job_id, status=status, code=code, error=error) |
def do_put(endpoint, body, access_token):
'''Do an HTTP PUT request and return JSON.
Args:
endpoint (str): Azure Resource Manager management endpoint.
body (str): JSON body of information to put.
access_token (str): A valid Azure authentication token.
Returns:
HTTP response. JSON body.
'''
headers = {"content-type": "application/json", "Authorization": 'Bearer ' + access_token}
headers['User-Agent'] = get_user_agent()
return requests.put(endpoint, data=body, headers=headers) | Do an HTTP PUT request and return JSON.
Args:
endpoint (str): Azure Resource Manager management endpoint.
body (str): JSON body of information to put.
access_token (str): A valid Azure authentication token.
Returns:
HTTP response. JSON body. | Below is the instruction that describes the task:
### Input:
Do an HTTP PUT request and return JSON.
Args:
endpoint (str): Azure Resource Manager management endpoint.
body (str): JSON body of information to put.
access_token (str): A valid Azure authentication token.
Returns:
HTTP response. JSON body.
### Response:
def do_put(endpoint, body, access_token):
'''Do an HTTP PUT request and return JSON.
Args:
endpoint (str): Azure Resource Manager management endpoint.
body (str): JSON body of information to put.
access_token (str): A valid Azure authentication token.
Returns:
HTTP response. JSON body.
'''
headers = {"content-type": "application/json", "Authorization": 'Bearer ' + access_token}
headers['User-Agent'] = get_user_agent()
return requests.put(endpoint, data=body, headers=headers) |
def GetNetworks(alias=None,location=None):
"""Gets the list of Networks mapped to the account in the specified datacenter.
https://t3n.zendesk.com/entries/21024721-Get-Networks
:param alias: short code for a particular account. If none will use account's default alias
:param location: datacenter where group resides. If none will use account's primary datacenter
"""
if alias is None: alias = clc.v1.Account.GetAlias()
if location is None: location = clc.v1.Account.GetLocation()
r = clc.v1.API.Call('post','Network/GetAccountNetworks', { 'AccountAlias': alias, 'Location': location })
if int(r['StatusCode']) == 0: return(r['Networks']) | Gets the list of Networks mapped to the account in the specified datacenter.
https://t3n.zendesk.com/entries/21024721-Get-Networks
:param alias: short code for a particular account. If none will use account's default alias
:param location: datacenter where group resides. If none will use account's primary datacenter | Below is the instruction that describes the task:
### Input:
Gets the list of Networks mapped to the account in the specified datacenter.
https://t3n.zendesk.com/entries/21024721-Get-Networks
:param alias: short code for a particular account. If none will use account's default alias
:param location: datacenter where group resides. If none will use account's primary datacenter
### Response:
def GetNetworks(alias=None,location=None):
"""Gets the list of Networks mapped to the account in the specified datacenter.
https://t3n.zendesk.com/entries/21024721-Get-Networks
:param alias: short code for a particular account. If none will use account's default alias
:param location: datacenter where group resides. If none will use account's primary datacenter
"""
if alias is None: alias = clc.v1.Account.GetAlias()
if location is None: location = clc.v1.Account.GetLocation()
r = clc.v1.API.Call('post','Network/GetAccountNetworks', { 'AccountAlias': alias, 'Location': location })
if int(r['StatusCode']) == 0: return(r['Networks']) |
def init(ctx, client, directory, name, force, use_external_storage):
"""Initialize a project."""
if not client.use_external_storage:
use_external_storage = False
ctx.obj = client = attr.evolve(
client,
path=directory,
use_external_storage=use_external_storage,
)
msg = 'Initialized empty project in {path}'
branch_name = None
stack = contextlib.ExitStack()
if force and client.repo:
msg = 'Initialized project in {path} (branch {branch_name})'
merge_args = ['--no-ff', '-s', 'recursive', '-X', 'ours']
try:
commit = client.find_previous_commit(
str(client.renku_metadata_path),
)
branch_name = 'renku/init/' + str(commit)
except KeyError:
from git import NULL_TREE
commit = NULL_TREE
branch_name = 'renku/init/root'
merge_args.append('--allow-unrelated-histories')
ctx.obj = client = stack.enter_context(
client.worktree(
branch_name=branch_name,
commit=commit,
merge_args=merge_args,
)
)
try:
with client.lock:
path = client.init_repository(name=name, force=force)
except FileExistsError:
raise click.UsageError(
'Renku repository is not empty. '
'Please use --force flag to use the directory as Renku '
'repository.'
)
stack.enter_context(client.commit())
with stack:
# Install Git hooks.
from .githooks import install
ctx.invoke(install, force=force)
# Create all necessary template files.
from .runner import template
ctx.invoke(template, force=force)
click.echo(msg.format(path=path, branch_name=branch_name)) | Initialize a project. | Below is the instruction that describes the task:
### Input:
Initialize a project.
### Response:
def init(ctx, client, directory, name, force, use_external_storage):
"""Initialize a project."""
if not client.use_external_storage:
use_external_storage = False
ctx.obj = client = attr.evolve(
client,
path=directory,
use_external_storage=use_external_storage,
)
msg = 'Initialized empty project in {path}'
branch_name = None
stack = contextlib.ExitStack()
if force and client.repo:
msg = 'Initialized project in {path} (branch {branch_name})'
merge_args = ['--no-ff', '-s', 'recursive', '-X', 'ours']
try:
commit = client.find_previous_commit(
str(client.renku_metadata_path),
)
branch_name = 'renku/init/' + str(commit)
except KeyError:
from git import NULL_TREE
commit = NULL_TREE
branch_name = 'renku/init/root'
merge_args.append('--allow-unrelated-histories')
ctx.obj = client = stack.enter_context(
client.worktree(
branch_name=branch_name,
commit=commit,
merge_args=merge_args,
)
)
try:
with client.lock:
path = client.init_repository(name=name, force=force)
except FileExistsError:
raise click.UsageError(
'Renku repository is not empty. '
'Please use --force flag to use the directory as Renku '
'repository.'
)
stack.enter_context(client.commit())
with stack:
# Install Git hooks.
from .githooks import install
ctx.invoke(install, force=force)
# Create all necessary template files.
from .runner import template
ctx.invoke(template, force=force)
click.echo(msg.format(path=path, branch_name=branch_name)) |
def create_exchange(self):
"""
Creates user's private exchange
Actually, the user's private channel only needs to be defined once,
and this should happen when the user is first created.
But since this has a little performance cost,
to be safe we always call it before binding to the channel we currently subscribe
"""
channel = self._connect_mq()
channel.exchange_declare(exchange=self.user.prv_exchange,
exchange_type='fanout',
durable=True) | Creates user's private exchange
Actually, the user's private channel only needs to be defined once,
and this should happen when the user is first created.
But since this has a little performance cost,
to be safe we always call it before binding to the channel we currently subscribe | Below is the instruction that describes the task:
### Input:
Creates user's private exchange
Actually, the user's private channel only needs to be defined once,
and this should happen when the user is first created.
But since this has a little performance cost,
to be safe we always call it before binding to the channel we currently subscribe
### Response:
def create_exchange(self):
"""
Creates user's private exchange
Actually, the user's private channel only needs to be defined once,
and this should happen when the user is first created.
But since this has a little performance cost,
to be safe we always call it before binding to the channel we currently subscribe
"""
channel = self._connect_mq()
channel.exchange_declare(exchange=self.user.prv_exchange,
exchange_type='fanout',
durable=True) |
def make_signed_jwt(signer, payload, key_id=None):
"""Make a signed JWT.
See http://self-issued.info/docs/draft-jones-json-web-token.html.
Args:
signer: crypt.Signer, Cryptographic signer.
payload: dict, Dictionary of data to convert to JSON and then sign.
key_id: string, (Optional) Key ID header.
Returns:
string, The JWT for the payload.
"""
header = {'typ': 'JWT', 'alg': 'RS256'}
if key_id is not None:
header['kid'] = key_id
segments = [
_helpers._urlsafe_b64encode(_helpers._json_encode(header)),
_helpers._urlsafe_b64encode(_helpers._json_encode(payload)),
]
signing_input = b'.'.join(segments)
signature = signer.sign(signing_input)
segments.append(_helpers._urlsafe_b64encode(signature))
logger.debug(str(segments))
return b'.'.join(segments) | Make a signed JWT.
See http://self-issued.info/docs/draft-jones-json-web-token.html.
Args:
signer: crypt.Signer, Cryptographic signer.
payload: dict, Dictionary of data to convert to JSON and then sign.
key_id: string, (Optional) Key ID header.
Returns:
string, The JWT for the payload. | Below is the instruction that describes the task:
### Input:
Make a signed JWT.
See http://self-issued.info/docs/draft-jones-json-web-token.html.
Args:
signer: crypt.Signer, Cryptographic signer.
payload: dict, Dictionary of data to convert to JSON and then sign.
key_id: string, (Optional) Key ID header.
Returns:
string, The JWT for the payload.
### Response:
def make_signed_jwt(signer, payload, key_id=None):
"""Make a signed JWT.
See http://self-issued.info/docs/draft-jones-json-web-token.html.
Args:
signer: crypt.Signer, Cryptographic signer.
payload: dict, Dictionary of data to convert to JSON and then sign.
key_id: string, (Optional) Key ID header.
Returns:
string, The JWT for the payload.
"""
header = {'typ': 'JWT', 'alg': 'RS256'}
if key_id is not None:
header['kid'] = key_id
segments = [
_helpers._urlsafe_b64encode(_helpers._json_encode(header)),
_helpers._urlsafe_b64encode(_helpers._json_encode(payload)),
]
signing_input = b'.'.join(segments)
signature = signer.sign(signing_input)
segments.append(_helpers._urlsafe_b64encode(signature))
logger.debug(str(segments))
return b'.'.join(segments) |
def do_for_dir(inws, begin):
'''
do something in the directory.
'''
inws = os.path.abspath(inws)
for wroot, wdirs, wfiles in os.walk(inws):
for wfile in wfiles:
if wfile.endswith('.html'):
if 'autogen' in wroot:
continue
check_html(os.path.abspath(os.path.join(wroot, wfile)), begin) | do something in the directory. | Below is the instruction that describes the task:
### Input:
do something in the directory.
### Response:
def do_for_dir(inws, begin):
'''
do something in the directory.
'''
inws = os.path.abspath(inws)
for wroot, wdirs, wfiles in os.walk(inws):
for wfile in wfiles:
if wfile.endswith('.html'):
if 'autogen' in wroot:
continue
check_html(os.path.abspath(os.path.join(wroot, wfile)), begin) |
def get_meta(catid, sig):
'''
Get metadata of dataset via ID.
'''
meta_base = './static/dataset_list'
if os.path.exists(meta_base):
pass
else:
return False
pp_data = {'logo': '', 'kind': '9'}
for wroot, wdirs, wfiles in os.walk(meta_base):
for wdir in wdirs:
if wdir.lower().endswith(sig):
# Got the dataset of certain ID.
ds_base = pathlib.Path(os.path.join(wroot, wdir))
for uu in ds_base.iterdir():
if uu.name.endswith('.xlsx'):
meta_dic = chuli_meta('u' + sig[2:], uu)
pp_data['title'] = meta_dic['title']
pp_data['cnt_md'] = meta_dic['anytext']
pp_data['user_name'] = 'admin'
pp_data['def_cat_uid'] = catid
pp_data['gcat0'] = catid
pp_data['def_cat_pid'] = catid[:2] + '00'
pp_data['extinfo'] = {}
elif uu.name.startswith('thumbnail_'):
pp_data['logo'] = os.path.join(wroot, wdir, uu.name).strip('.')
return pp_data | Get metadata of dataset via ID. | Below is the instruction that describes the task:
### Input:
Get metadata of dataset via ID.
### Response:
def get_meta(catid, sig):
'''
Get metadata of dataset via ID.
'''
meta_base = './static/dataset_list'
if os.path.exists(meta_base):
pass
else:
return False
pp_data = {'logo': '', 'kind': '9'}
for wroot, wdirs, wfiles in os.walk(meta_base):
for wdir in wdirs:
if wdir.lower().endswith(sig):
# Got the dataset of certain ID.
ds_base = pathlib.Path(os.path.join(wroot, wdir))
for uu in ds_base.iterdir():
if uu.name.endswith('.xlsx'):
meta_dic = chuli_meta('u' + sig[2:], uu)
pp_data['title'] = meta_dic['title']
pp_data['cnt_md'] = meta_dic['anytext']
pp_data['user_name'] = 'admin'
pp_data['def_cat_uid'] = catid
pp_data['gcat0'] = catid
pp_data['def_cat_pid'] = catid[:2] + '00'
pp_data['extinfo'] = {}
elif uu.name.startswith('thumbnail_'):
pp_data['logo'] = os.path.join(wroot, wdir, uu.name).strip('.')
return pp_data |
def index(request, obj_id=None):
"""Handles a request based on method and calls the appropriate function"""
if request.method == 'GET':
return get(request, obj_id)
elif request.method == 'POST':
return post(request)
elif request.method == 'PUT':
getPutData(request)
return put(request, obj_id)
elif request.method == 'DELETE':
getPutData(request)
return delete(request, obj_id) | Handles a request based on method and calls the appropriate function | Below is the instruction that describes the task:
### Input:
Handles a request based on method and calls the appropriate function
### Response:
def index(request, obj_id=None):
"""Handles a request based on method and calls the appropriate function"""
if request.method == 'GET':
return get(request, obj_id)
elif request.method == 'POST':
return post(request)
elif request.method == 'PUT':
getPutData(request)
return put(request, obj_id)
elif request.method == 'DELETE':
getPutData(request)
return delete(request, obj_id) |
def set_cache_dir(directory):
"""Set the directory to cache JSON responses from most API endpoints.
"""
global cache_dir
if directory is None:
cache_dir = None
return
if not os.path.exists(directory):
os.makedirs(directory)
if not os.path.isdir(directory):
raise ValueError("not a directory")
cache_dir = directory | Set the directory to cache JSON responses from most API endpoints. | Below is the instruction that describes the task:
### Input:
Set the directory to cache JSON responses from most API endpoints.
### Response:
def set_cache_dir(directory):
"""Set the directory to cache JSON responses from most API endpoints.
"""
global cache_dir
if directory is None:
cache_dir = None
return
if not os.path.exists(directory):
os.makedirs(directory)
if not os.path.isdir(directory):
raise ValueError("not a directory")
cache_dir = directory |
def _survives_exclude(self, matchstr, match_type):
''' Returns True if *matchstr* does not match patterns
``self.package_name`` removed from front of string if present
Examples
--------
>>> dw = ApiDocWriter('sphinx')
>>> dw._survives_exclude('sphinx.okpkg', 'package')
True
>>> dw.package_skip_patterns.append('^\\.badpkg$')
>>> dw._survives_exclude('sphinx.badpkg', 'package')
False
>>> dw._survives_exclude('sphinx.badpkg', 'module')
True
>>> dw._survives_exclude('sphinx.badmod', 'module')
True
>>> dw.module_skip_patterns.append('^\\.badmod$')
>>> dw._survives_exclude('sphinx.badmod', 'module')
False
'''
if match_type == 'module':
patterns = self.module_skip_patterns
elif match_type == 'package':
patterns = self.package_skip_patterns
else:
raise ValueError('Cannot interpret match type "%s"'
% match_type)
# Match to URI without package name
L = len(self.package_name)
if matchstr[:L] == self.package_name:
matchstr = matchstr[L:]
for pat in patterns:
try:
pat.search
except AttributeError:
pat = re.compile(pat)
if pat.search(matchstr):
return False
return True | Returns True if *matchstr* does not match patterns
``self.package_name`` removed from front of string if present
Examples
--------
>>> dw = ApiDocWriter('sphinx')
>>> dw._survives_exclude('sphinx.okpkg', 'package')
True
>>> dw.package_skip_patterns.append('^\\.badpkg$')
>>> dw._survives_exclude('sphinx.badpkg', 'package')
False
>>> dw._survives_exclude('sphinx.badpkg', 'module')
True
>>> dw._survives_exclude('sphinx.badmod', 'module')
True
>>> dw.module_skip_patterns.append('^\\.badmod$')
>>> dw._survives_exclude('sphinx.badmod', 'module')
False | Below is the instruction that describes the task:
### Input:
Returns True if *matchstr* does not match patterns
``self.package_name`` removed from front of string if present
Examples
--------
>>> dw = ApiDocWriter('sphinx')
>>> dw._survives_exclude('sphinx.okpkg', 'package')
True
>>> dw.package_skip_patterns.append('^\\.badpkg$')
>>> dw._survives_exclude('sphinx.badpkg', 'package')
False
>>> dw._survives_exclude('sphinx.badpkg', 'module')
True
>>> dw._survives_exclude('sphinx.badmod', 'module')
True
>>> dw.module_skip_patterns.append('^\\.badmod$')
>>> dw._survives_exclude('sphinx.badmod', 'module')
False
### Response:
def _survives_exclude(self, matchstr, match_type):
''' Returns True if *matchstr* does not match patterns
``self.package_name`` removed from front of string if present
Examples
--------
>>> dw = ApiDocWriter('sphinx')
>>> dw._survives_exclude('sphinx.okpkg', 'package')
True
>>> dw.package_skip_patterns.append('^\\.badpkg$')
>>> dw._survives_exclude('sphinx.badpkg', 'package')
False
>>> dw._survives_exclude('sphinx.badpkg', 'module')
True
>>> dw._survives_exclude('sphinx.badmod', 'module')
True
>>> dw.module_skip_patterns.append('^\\.badmod$')
>>> dw._survives_exclude('sphinx.badmod', 'module')
False
'''
if match_type == 'module':
patterns = self.module_skip_patterns
elif match_type == 'package':
patterns = self.package_skip_patterns
else:
raise ValueError('Cannot interpret match type "%s"'
% match_type)
# Match to URI without package name
L = len(self.package_name)
if matchstr[:L] == self.package_name:
matchstr = matchstr[L:]
for pat in patterns:
try:
pat.search
except AttributeError:
pat = re.compile(pat)
if pat.search(matchstr):
return False
return True |
def compat_string(value):
"""
Provide a python2/3 compatible string representation of the value
:type value:
:rtype :
"""
if isinstance(value, bytes):
return value.decode(encoding='utf-8')
return str(value) | Provide a python2/3 compatible string representation of the value
:type value:
:rtype : | Below is the instruction that describes the task:
### Input:
Provide a python2/3 compatible string representation of the value
:type value:
:rtype :
### Response:
def compat_string(value):
"""
Provide a python2/3 compatible string representation of the value
:type value:
:rtype :
"""
if isinstance(value, bytes):
return value.decode(encoding='utf-8')
return str(value) |
def __method_descriptor(self, service, method_info,
rosy_method, protorpc_method_info):
"""Describes a method.
Args:
service: endpoints.Service, Implementation of the API as a service.
method_info: _MethodInfo, Configuration for the method.
rosy_method: string, ProtoRPC method name prefixed with the
name of the service.
protorpc_method_info: protorpc.remote._RemoteMethodInfo, ProtoRPC
description of the method.
Returns:
Dictionary describing the method.
"""
descriptor = {}
request_message_type = (resource_container.ResourceContainer.
get_request_message(protorpc_method_info.remote))
request_kind = self.__get_request_kind(method_info)
remote_method = protorpc_method_info.remote
descriptor['path'] = method_info.get_path(service.api_info)
descriptor['httpMethod'] = method_info.http_method
descriptor['rosyMethod'] = rosy_method
descriptor['request'] = self.__request_message_descriptor(
request_kind, request_message_type,
method_info.method_id(service.api_info),
descriptor['path'])
descriptor['response'] = self.__response_message_descriptor(
remote_method.response_type(), method_info.method_id(service.api_info))
# Audiences, scopes, allowed_client_ids and auth_level could be set at
# either the method level or the API level. Allow an empty list at the
# method level to override the setting at the API level.
scopes = (method_info.scopes
if method_info.scopes is not None
else service.api_info.scopes)
if scopes:
descriptor['scopes'] = scopes
audiences = (method_info.audiences
if method_info.audiences is not None
else service.api_info.audiences)
if audiences:
descriptor['audiences'] = audiences
allowed_client_ids = (method_info.allowed_client_ids
if method_info.allowed_client_ids is not None
else service.api_info.allowed_client_ids)
if allowed_client_ids:
descriptor['clientIds'] = allowed_client_ids
if remote_method.method.__doc__:
descriptor['description'] = remote_method.method.__doc__
auth_level = (method_info.auth_level
if method_info.auth_level is not None
else service.api_info.auth_level)
if auth_level is not None:
descriptor['authLevel'] = AUTH_LEVEL.reverse_mapping[auth_level]
descriptor['useRequestUri'] = method_info.use_request_uri(service.api_info)
return descriptor | Describes a method.
Args:
service: endpoints.Service, Implementation of the API as a service.
method_info: _MethodInfo, Configuration for the method.
rosy_method: string, ProtoRPC method name prefixed with the
name of the service.
protorpc_method_info: protorpc.remote._RemoteMethodInfo, ProtoRPC
description of the method.
Returns:
Dictionary describing the method. | Below is the instruction that describes the task:
### Input:
Describes a method.
Args:
service: endpoints.Service, Implementation of the API as a service.
method_info: _MethodInfo, Configuration for the method.
rosy_method: string, ProtoRPC method name prefixed with the
name of the service.
protorpc_method_info: protorpc.remote._RemoteMethodInfo, ProtoRPC
description of the method.
Returns:
Dictionary describing the method.
### Response:
def __method_descriptor(self, service, method_info,
rosy_method, protorpc_method_info):
"""Describes a method.
Args:
service: endpoints.Service, Implementation of the API as a service.
method_info: _MethodInfo, Configuration for the method.
rosy_method: string, ProtoRPC method name prefixed with the
name of the service.
protorpc_method_info: protorpc.remote._RemoteMethodInfo, ProtoRPC
description of the method.
Returns:
Dictionary describing the method.
"""
descriptor = {}
request_message_type = (resource_container.ResourceContainer.
get_request_message(protorpc_method_info.remote))
request_kind = self.__get_request_kind(method_info)
remote_method = protorpc_method_info.remote
descriptor['path'] = method_info.get_path(service.api_info)
descriptor['httpMethod'] = method_info.http_method
descriptor['rosyMethod'] = rosy_method
descriptor['request'] = self.__request_message_descriptor(
request_kind, request_message_type,
method_info.method_id(service.api_info),
descriptor['path'])
descriptor['response'] = self.__response_message_descriptor(
remote_method.response_type(), method_info.method_id(service.api_info))
# Audiences, scopes, allowed_client_ids and auth_level could be set at
# either the method level or the API level. Allow an empty list at the
# method level to override the setting at the API level.
scopes = (method_info.scopes
if method_info.scopes is not None
else service.api_info.scopes)
if scopes:
descriptor['scopes'] = scopes
audiences = (method_info.audiences
if method_info.audiences is not None
else service.api_info.audiences)
if audiences:
descriptor['audiences'] = audiences
allowed_client_ids = (method_info.allowed_client_ids
if method_info.allowed_client_ids is not None
else service.api_info.allowed_client_ids)
if allowed_client_ids:
descriptor['clientIds'] = allowed_client_ids
if remote_method.method.__doc__:
descriptor['description'] = remote_method.method.__doc__
auth_level = (method_info.auth_level
if method_info.auth_level is not None
else service.api_info.auth_level)
if auth_level is not None:
descriptor['authLevel'] = AUTH_LEVEL.reverse_mapping[auth_level]
descriptor['useRequestUri'] = method_info.use_request_uri(service.api_info)
return descriptor |
def numberOfConnectedProximalSynapses(self, cells=None):
"""
Returns the number of proximal connected synapses on these cells.
Parameters:
----------------------------
@param cells (iterable)
Indices of the cells. If None return count for all cells.
"""
if cells is None:
cells = xrange(self.numberOfCells())
return _countWhereGreaterEqualInRows(self.proximalPermanences, cells,
self.connectedPermanenceProximal) | Returns the number of proximal connected synapses on these cells.
Parameters:
----------------------------
@param cells (iterable)
Indices of the cells. If None return count for all cells. | Below is the instruction that describes the task:
### Input:
Returns the number of proximal connected synapses on these cells.
Parameters:
----------------------------
@param cells (iterable)
Indices of the cells. If None return count for all cells.
### Response:
def numberOfConnectedProximalSynapses(self, cells=None):
"""
Returns the number of proximal connected synapses on these cells.
Parameters:
----------------------------
@param cells (iterable)
Indices of the cells. If None return count for all cells.
"""
if cells is None:
cells = xrange(self.numberOfCells())
return _countWhereGreaterEqualInRows(self.proximalPermanences, cells,
self.connectedPermanenceProximal) |
def respond(self, output):
"""Generates server response."""
response = {'exit_code': output.code,
'command_output': output.log}
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
self.wfile.write(bytes(json.dumps(response), "utf8")) | Generates server response. | Below is the instruction that describes the task:
### Input:
Generates server response.
### Response:
def respond(self, output):
"""Generates server response."""
response = {'exit_code': output.code,
'command_output': output.log}
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
self.wfile.write(bytes(json.dumps(response), "utf8")) |
def _parse_request(self, xml):
""" Parse a request with metadata information
:param xml: LXML Object
:type xml: Union[lxml.etree._Element]
"""
for node in xml.xpath(".//ti:groupname", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.groupname, lang=lang, value=node.text)
self.set_creator(node.text, lang)
for node in xml.xpath(".//ti:title", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.title, lang=lang, value=node.text)
self.set_title(node.text, lang)
for node in xml.xpath(".//ti:label", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.label, lang=lang, value=node.text)
self.set_subject(node.text, lang)
for node in xml.xpath(".//ti:description", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.description, lang=lang, value=node.text)
self.set_description(node.text, lang)
# Need to code that p
if not self.citation.is_set() and xml.xpath("//ti:citation", namespaces=XPATH_NAMESPACES):
self.citation = CtsCollection.XmlCtsCitation.ingest(
xml,
xpath=".//ti:citation[not(ancestor::ti:citation)]"
) | Parse a request with metadata information
:param xml: LXML Object
:type xml: Union[lxml.etree._Element] | Below is the instruction that describes the task:
### Input:
Parse a request with metadata information
:param xml: LXML Object
:type xml: Union[lxml.etree._Element]
### Response:
def _parse_request(self, xml):
""" Parse a request with metadata information
:param xml: LXML Object
:type xml: Union[lxml.etree._Element]
"""
for node in xml.xpath(".//ti:groupname", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.groupname, lang=lang, value=node.text)
self.set_creator(node.text, lang)
for node in xml.xpath(".//ti:title", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.title, lang=lang, value=node.text)
self.set_title(node.text, lang)
for node in xml.xpath(".//ti:label", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.label, lang=lang, value=node.text)
self.set_subject(node.text, lang)
for node in xml.xpath(".//ti:description", namespaces=XPATH_NAMESPACES):
lang = node.get("xml:lang") or CtsText.DEFAULT_LANG
self.metadata.add(RDF_NAMESPACES.CTS.description, lang=lang, value=node.text)
self.set_description(node.text, lang)
# Need to code that p
if not self.citation.is_set() and xml.xpath("//ti:citation", namespaces=XPATH_NAMESPACES):
self.citation = CtsCollection.XmlCtsCitation.ingest(
xml,
xpath=".//ti:citation[not(ancestor::ti:citation)]"
) |
def get_heap(self, *args, **kwargs):
"""Return a new heap which contains all the new items and item
descriptors since the last call. This is a convenience wrapper
around :meth:`add_to_heap`.
"""
heap = Heap(self._flavour)
self.add_to_heap(heap, *args, **kwargs)
return heap | Return a new heap which contains all the new items and item
descriptors since the last call. This is a convenience wrapper
around :meth:`add_to_heap`. | Below is the instruction that describes the task:
### Input:
Return a new heap which contains all the new items and item
descriptors since the last call. This is a convenience wrapper
around :meth:`add_to_heap`.
### Response:
def get_heap(self, *args, **kwargs):
"""Return a new heap which contains all the new items and item
descriptors since the last call. This is a convenience wrapper
around :meth:`add_to_heap`.
"""
heap = Heap(self._flavour)
self.add_to_heap(heap, *args, **kwargs)
return heap |
def check_event_coverage(patterns, event_list):
"""Calculate the ratio of patterns that were extracted."""
proportions = []
for pattern_list in patterns:
proportion = 0
for pattern in pattern_list:
for node in pattern.nodes():
if node in event_list:
proportion += 1.0 / len(pattern_list)
break
proportions.append(proportion)
return proportions | Calculate the ratio of patterns that were extracted. | Below is the instruction that describes the task:
### Input:
Calculate the ratio of patterns that were extracted.
### Response:
def check_event_coverage(patterns, event_list):
"""Calculate the ratio of patterns that were extracted."""
proportions = []
for pattern_list in patterns:
proportion = 0
for pattern in pattern_list:
for node in pattern.nodes():
if node in event_list:
proportion += 1.0 / len(pattern_list)
break
proportions.append(proportion)
return proportions |
def get_coords(x, y, params):
"""
Transforms the given coordinates from plane-space to Mandelbrot-space (real and imaginary).
:param x: X coordinate on the plane.
:param y: Y coordinate on the plane.
:param params: Current application parameters.
:type params: params.Params
:return: Tuple containing the re-mapped coordinates in Mandelbrot-space.
"""
n_x = x * 2.0 / params.plane_w * params.plane_ratio - 1.0
n_y = y * 2.0 / params.plane_h - 1.0
mb_x = params.zoom * n_x
mb_y = params.zoom * n_y
return mb_x, mb_y | Transforms the given coordinates from plane-space to Mandelbrot-space (real and imaginary).
:param x: X coordinate on the plane.
:param y: Y coordinate on the plane.
:param params: Current application parameters.
:type params: params.Params
:return: Tuple containing the re-mapped coordinates in Mandelbrot-space. | Below is the instruction that describes the task:
### Input:
Transforms the given coordinates from plane-space to Mandelbrot-space (real and imaginary).
:param x: X coordinate on the plane.
:param y: Y coordinate on the plane.
:param params: Current application parameters.
:type params: params.Params
:return: Tuple containing the re-mapped coordinates in Mandelbrot-space.
### Response:
def get_coords(x, y, params):
"""
Transforms the given coordinates from plane-space to Mandelbrot-space (real and imaginary).
:param x: X coordinate on the plane.
:param y: Y coordinate on the plane.
:param params: Current application parameters.
:type params: params.Params
:return: Tuple containing the re-mapped coordinates in Mandelbrot-space.
"""
n_x = x * 2.0 / params.plane_w * params.plane_ratio - 1.0
n_y = y * 2.0 / params.plane_h - 1.0
mb_x = params.zoom * n_x
mb_y = params.zoom * n_y
return mb_x, mb_y |
def check_each_direction(n,angs,ifprint=True):
""" returns a list of the index of elements of n which do not have adequate
toy angle coverage. The criterion is that we must have at least one sample
in each Nyquist box when we project the toy angles along the vector n """
checks = np.array([])
P = np.array([])
if(ifprint):
print("\nChecking modes:\n====")
for k,i in enumerate(n):
N_matrix = np.linalg.norm(i)
X = np.dot(angs,i)
if(np.abs(np.max(X)-np.min(X))<2.*np.pi):
if(ifprint):
print("Need a longer integration window for mode ", i)
checks=np.append(checks,i)
P = np.append(P,(2.*np.pi-np.abs(np.max(X)-np.min(X))))
elif(np.abs(np.max(X)-np.min(X))/len(X)>np.pi):
if(ifprint):
print("Need a finer sampling for mode ", i)
checks=np.append(checks,i)
P = np.append(P,(2.*np.pi-np.abs(np.max(X)-np.min(X))))
if(ifprint):
print("====\n")
return checks,P | returns a list of the index of elements of n which do not have adequate
toy angle coverage. The criterion is that we must have at least one sample
in each Nyquist box when we project the toy angles along the vector n | Below is the instruction that describes the task:
### Input:
returns a list of the index of elements of n which do not have adequate
toy angle coverage. The criterion is that we must have at least one sample
in each Nyquist box when we project the toy angles along the vector n
### Response:
def check_each_direction(n,angs,ifprint=True):
""" returns a list of the index of elements of n which do not have adequate
toy angle coverage. The criterion is that we must have at least one sample
in each Nyquist box when we project the toy angles along the vector n """
checks = np.array([])
P = np.array([])
if(ifprint):
print("\nChecking modes:\n====")
for k,i in enumerate(n):
N_matrix = np.linalg.norm(i)
X = np.dot(angs,i)
if(np.abs(np.max(X)-np.min(X))<2.*np.pi):
if(ifprint):
print("Need a longer integration window for mode ", i)
checks=np.append(checks,i)
P = np.append(P,(2.*np.pi-np.abs(np.max(X)-np.min(X))))
elif(np.abs(np.max(X)-np.min(X))/len(X)>np.pi):
if(ifprint):
print("Need a finer sampling for mode ", i)
checks=np.append(checks,i)
P = np.append(P,(2.*np.pi-np.abs(np.max(X)-np.min(X))))
if(ifprint):
print("====\n")
return checks,P |
def node_vectors(node_id):
"""Get the vectors of a node.
You must specify the node id in the url.
You can pass direction (incoming/outgoing/all) and failed
(True/False/all).
"""
exp = Experiment(session)
# get the parameters
direction = request_parameter(parameter="direction", default="all")
failed = request_parameter(parameter="failed", parameter_type="bool", default=False)
for x in [direction, failed]:
if type(x) == Response:
return x
# execute the request
node = models.Node.query.get(node_id)
if node is None:
return error_response(error_type="/node/vectors, node does not exist")
try:
vectors = node.vectors(direction=direction, failed=failed)
exp.vector_get_request(node=node, vectors=vectors)
session.commit()
except Exception:
return error_response(
error_type="/node/vectors GET server error",
status=403,
participant=node.participant,
)
# return the data
return success_response(vectors=[v.__json__() for v in vectors]) | Get the vectors of a node.
You must specify the node id in the url.
You can pass direction (incoming/outgoing/all) and failed
(True/False/all). | Below is the instruction that describes the task:
### Input:
Get the vectors of a node.
You must specify the node id in the url.
You can pass direction (incoming/outgoing/all) and failed
(True/False/all).
### Response:
def node_vectors(node_id):
"""Get the vectors of a node.
You must specify the node id in the url.
You can pass direction (incoming/outgoing/all) and failed
(True/False/all).
"""
exp = Experiment(session)
# get the parameters
direction = request_parameter(parameter="direction", default="all")
failed = request_parameter(parameter="failed", parameter_type="bool", default=False)
for x in [direction, failed]:
if type(x) == Response:
return x
# execute the request
node = models.Node.query.get(node_id)
if node is None:
return error_response(error_type="/node/vectors, node does not exist")
try:
vectors = node.vectors(direction=direction, failed=failed)
exp.vector_get_request(node=node, vectors=vectors)
session.commit()
except Exception:
return error_response(
error_type="/node/vectors GET server error",
status=403,
participant=node.participant,
)
# return the data
return success_response(vectors=[v.__json__() for v in vectors]) |
def populate_keys_tree(self):
"""Reads the HOTKEYS global variable and insert all data in
the TreeStore used by the preferences window treeview.
"""
for group in HOTKEYS:
parent = self.store.append(None, [None, group['label'], None, None])
for item in group['keys']:
if item['key'] == "show-hide" or item['key'] == "show-focus":
accel = self.settings.keybindingsGlobal.get_string(item['key'])
else:
accel = self.settings.keybindingsLocal.get_string(item['key'])
gsettings_path = item['key']
keycode, mask = Gtk.accelerator_parse(accel)
keylabel = Gtk.accelerator_get_label(keycode, mask)
self.store.append(parent, [gsettings_path, item['label'], keylabel, accel])
self.get_widget('treeview-keys').expand_all() | Reads the HOTKEYS global variable and insert all data in
the TreeStore used by the preferences window treeview. | Below is the instruction that describes the task:
### Input:
Reads the HOTKEYS global variable and insert all data in
the TreeStore used by the preferences window treeview.
### Response:
def populate_keys_tree(self):
"""Reads the HOTKEYS global variable and insert all data in
the TreeStore used by the preferences window treeview.
"""
for group in HOTKEYS:
parent = self.store.append(None, [None, group['label'], None, None])
for item in group['keys']:
if item['key'] == "show-hide" or item['key'] == "show-focus":
accel = self.settings.keybindingsGlobal.get_string(item['key'])
else:
accel = self.settings.keybindingsLocal.get_string(item['key'])
gsettings_path = item['key']
keycode, mask = Gtk.accelerator_parse(accel)
keylabel = Gtk.accelerator_get_label(keycode, mask)
self.store.append(parent, [gsettings_path, item['label'], keylabel, accel])
self.get_widget('treeview-keys').expand_all() |
def p_file_contrib_1(self, p):
"""file_contrib : FILE_CONTRIB LINE"""
try:
if six.PY2:
value = p[2].decode(encoding='utf-8')
else:
value = p[2]
self.builder.add_file_contribution(self.document, value)
except OrderError:
self.order_error('FileContributor', 'FileName', p.lineno(1)) | file_contrib : FILE_CONTRIB LINE | Below is the instruction that describes the task:
### Input:
file_contrib : FILE_CONTRIB LINE
### Response:
def p_file_contrib_1(self, p):
"""file_contrib : FILE_CONTRIB LINE"""
try:
if six.PY2:
value = p[2].decode(encoding='utf-8')
else:
value = p[2]
self.builder.add_file_contribution(self.document, value)
except OrderError:
self.order_error('FileContributor', 'FileName', p.lineno(1)) |
def _json_request(self, req_type, url, **kwargs):
"""
Make a request of the specified type and expect a JSON object in
response.
If the result has an 'error' value, raise a LuminosoAPIError with
its contents. Otherwise, return the contents of the 'result' value.
"""
response = self._request(req_type, url, **kwargs)
try:
json_response = response.json()
except ValueError:
logger.error("Received response with no JSON: %s %s" %
(response, response.content))
raise LuminosoError('Response body contained no JSON. '
'Perhaps you meant to use get_raw?')
if json_response.get('error'):
raise LuminosoAPIError(json_response.get('error'))
return json_response['result'] | Make a request of the specified type and expect a JSON object in
response.
If the result has an 'error' value, raise a LuminosoAPIError with
its contents. Otherwise, return the contents of the 'result' value. | Below is the instruction that describes the task:
### Input:
Make a request of the specified type and expect a JSON object in
response.
If the result has an 'error' value, raise a LuminosoAPIError with
its contents. Otherwise, return the contents of the 'result' value.
### Response:
def _json_request(self, req_type, url, **kwargs):
"""
Make a request of the specified type and expect a JSON object in
response.
If the result has an 'error' value, raise a LuminosoAPIError with
its contents. Otherwise, return the contents of the 'result' value.
"""
response = self._request(req_type, url, **kwargs)
try:
json_response = response.json()
except ValueError:
logger.error("Received response with no JSON: %s %s" %
(response, response.content))
raise LuminosoError('Response body contained no JSON. '
'Perhaps you meant to use get_raw?')
if json_response.get('error'):
raise LuminosoAPIError(json_response.get('error'))
return json_response['result'] |
def CopyToString(self):
"""Copies the identifier to a string representation.
Returns:
str: unique identifier or None.
"""
if self.name is not None and self.row_identifier is not None:
return '{0:s}.{1:d}'.format(self.name, self.row_identifier)
return None | Copies the identifier to a string representation.
Returns:
str: unique identifier or None. | Below is the instruction that describes the task:
### Input:
Copies the identifier to a string representation.
Returns:
str: unique identifier or None.
### Response:
def CopyToString(self):
"""Copies the identifier to a string representation.
Returns:
str: unique identifier or None.
"""
if self.name is not None and self.row_identifier is not None:
return '{0:s}.{1:d}'.format(self.name, self.row_identifier)
return None |
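The formatting contract of `CopyToString` can be sketched with a hypothetical minimal class carrying the same two attributes (the real class in the source library has more context than shown here):

```python
class TableRowIdentifier:
    """Hypothetical minimal stand-in for the identifier class above."""
    def __init__(self, name=None, row_identifier=None):
        self.name = name
        self.row_identifier = row_identifier

    def CopyToString(self):
        """Copies the identifier to a string representation."""
        if self.name is not None and self.row_identifier is not None:
            # '{0:s}.{1:d}' -> "<name>.<row number>"
            return '{0:s}.{1:d}'.format(self.name, self.row_identifier)
        return None

print(TableRowIdentifier('MyTable', 42).CopyToString())  # MyTable.42
print(TableRowIdentifier('MyTable').CopyToString())      # None
```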
def make_create_payload(**kwargs):
"""Create payload for upload/check-upload operations."""
payload = {}
# Add non-empty arguments
for k, v in six.iteritems(kwargs):
if v is not None:
payload[k] = v
return payload | Create payload for upload/check-upload operations. | Below is the instruction that describes the task:
### Input:
Create payload for upload/check-upload operations.
### Response:
def make_create_payload(**kwargs):
"""Create payload for upload/check-upload operations."""
payload = {}
# Add non-empty arguments
for k, v in six.iteritems(kwargs):
if v is not None:
payload[k] = v
return payload |
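The None-filtering behavior above reduces to a one-line dict comprehension; this sketch swaps `six.iteritems` for plain `items()` so it runs on Python 3 without the `six` dependency:

```python
def make_create_payload(**kwargs):
    """Create payload for upload/check-upload operations."""
    # keep only arguments that were actually supplied (i.e. not None)
    return {k: v for k, v in kwargs.items() if v is not None}

print(make_create_payload(name="data.csv", size=1024, md5=None))
# {'name': 'data.csv', 'size': 1024}
```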
def graph_easy(self):
"""Draw ascii diagram. graph-easy perl module require
"""
if not os.path.isfile("/usr/bin/graph-easy"):
print("Require 'graph-easy': Install with 'slpkg -s sbo "
"graph-easy'")
self.remove_dot()
raise SystemExit()
subprocess.call("graph-easy {0}.dot".format(self.image), shell=True)
self.remove_dot()
raise SystemExit() | Draw ascii diagram. graph-easy perl module require | Below is the instruction that describes the task:
### Input:
Draw ascii diagram. graph-easy perl module require
### Response:
def graph_easy(self):
"""Draw ascii diagram. graph-easy perl module require
"""
if not os.path.isfile("/usr/bin/graph-easy"):
print("Require 'graph-easy': Install with 'slpkg -s sbo "
"graph-easy'")
self.remove_dot()
raise SystemExit()
subprocess.call("graph-easy {0}.dot".format(self.image), shell=True)
self.remove_dot()
raise SystemExit() |
def find_plugins():
"""Locate and initialize all available plugins.
"""
plugin_dir = os.path.dirname(os.path.realpath(__file__))
plugin_dir = os.path.join(plugin_dir, "plugins")
plugin_files = [x[:-3] for x in os.listdir(plugin_dir) if x.endswith(".py")]
sys.path.insert(0, plugin_dir)
for plugin in plugin_files:
__import__(plugin) | Locate and initialize all available plugins. | Below is the instruction that describes the task:
### Input:
Locate and initialize all available plugins.
### Response:
def find_plugins():
"""Locate and initialize all available plugins.
"""
plugin_dir = os.path.dirname(os.path.realpath(__file__))
plugin_dir = os.path.join(plugin_dir, "plugins")
plugin_files = [x[:-3] for x in os.listdir(plugin_dir) if x.endswith(".py")]
sys.path.insert(0, plugin_dir)
for plugin in plugin_files:
__import__(plugin) |
def parse_column_filters(*definitions):
"""Parse multiple compound column filter definitions
Examples
--------
>>> parse_column_filters('snr > 10', 'frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
>>> parse_column_filters('snr > 10 && frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
""" # noqa: E501
fltrs = []
for def_ in _flatten(definitions):
if is_filter_tuple(def_):
fltrs.append(def_)
else:
for splitdef in DELIM_REGEX.split(def_)[::2]:
fltrs.extend(parse_column_filter(splitdef))
return fltrs | Parse multiple compound column filter definitions
Examples
--------
>>> parse_column_filters('snr > 10', 'frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
>>> parse_column_filters('snr > 10 && frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)] | Below is the instruction that describes the task:
### Input:
Parse multiple compound column filter definitions
Examples
--------
>>> parse_column_filters('snr > 10', 'frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
>>> parse_column_filters('snr > 10 && frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
### Response:
def parse_column_filters(*definitions):
"""Parse multiple compound column filter definitions
Examples
--------
>>> parse_column_filters('snr > 10', 'frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
>>> parse_column_filters('snr > 10 && frequency < 1000')
[('snr', <function operator.gt>, 10.), ('frequency', <function operator.lt>, 1000.)]
""" # noqa: E501
fltrs = []
for def_ in _flatten(definitions):
if is_filter_tuple(def_):
fltrs.append(def_)
else:
for splitdef in DELIM_REGEX.split(def_)[::2]:
fltrs.extend(parse_column_filter(splitdef))
return fltrs |
def get_nodes_with_recipe(recipe_name, environment=None):
"""Get all nodes which include a given recipe,
prefix-searches are also supported
"""
prefix_search = recipe_name.endswith("*")
if prefix_search:
recipe_name = recipe_name.rstrip("*")
for n in get_nodes(environment):
recipes = get_recipes_in_node(n)
for role in get_roles_in_node(n, recursive=True):
recipes.extend(get_recipes_in_role(role))
if prefix_search:
if any(recipe.startswith(recipe_name) for recipe in recipes):
yield n
else:
if recipe_name in recipes:
yield n | Get all nodes which include a given recipe,
prefix-searches are also supported | Below is the the instruction that describes the task:
### Input:
Get all nodes which include a given recipe,
prefix-searches are also supported
### Response:
def get_nodes_with_recipe(recipe_name, environment=None):
"""Get all nodes which include a given recipe,
prefix-searches are also supported
"""
prefix_search = recipe_name.endswith("*")
if prefix_search:
recipe_name = recipe_name.rstrip("*")
for n in get_nodes(environment):
recipes = get_recipes_in_node(n)
for role in get_roles_in_node(n, recursive=True):
recipes.extend(get_recipes_in_role(role))
if prefix_search:
if any(recipe.startswith(recipe_name) for recipe in recipes):
yield n
else:
if recipe_name in recipes:
yield n |
def path(self):
"""
Returns the path to this table in HDFS.
"""
location = self.client.table_location(self.table, self.database)
if not location:
raise Exception("Couldn't find location for table: {0}".format(str(self)))
return location | Returns the path to this table in HDFS. | Below is the the instruction that describes the task:
### Input:
Returns the path to this table in HDFS.
### Response:
def path(self):
"""
Returns the path to this table in HDFS.
"""
location = self.client.table_location(self.table, self.database)
if not location:
raise Exception("Couldn't find location for table: {0}".format(str(self)))
return location |
def fit(self, conver=DEFAULT_CONVERGENCE, minit=DEFAULT_MINIT,
maxit=DEFAULT_MAXIT, fflag=DEFAULT_FFLAG, maxgerr=DEFAULT_MAXGERR,
going_inwards=False):
"""
Fit an elliptical isophote.
Parameters
----------
conver : float, optional
The main convergence criterion. Iterations stop when the
largest harmonic amplitude becomes smaller (in absolute
value) than ``conver`` times the harmonic fit rms. The
default is 0.05.
minit : int, optional
The minimum number of iterations to perform. A minimum of 10
(the default) iterations guarantees that, on average, 2
iterations will be available for fitting each independent
parameter (the four harmonic amplitudes and the intensity
level). For the first isophote, the minimum number of
iterations is 2 * ``minit`` to ensure that, even departing
from not-so-good initial values, the algorithm has a better
chance to converge to a sensible solution.
maxit : int, optional
The maximum number of iterations to perform. The default is
50.
fflag : float, optional
The acceptable fraction of flagged data points in the
sample. If the actual fraction of valid data points is
smaller than this, the iterations will stop and the current
`~photutils.isophote.Isophote` will be returned. Flagged
data points are points that either lie outside the image
frame, are masked, or were rejected by sigma-clipping. The
default is 0.7.
maxgerr : float, optional
The maximum acceptable relative error in the local radial
intensity gradient. This is the main control for preventing
ellipses to grow to regions of too low signal-to-noise
ratio. It specifies the maximum acceptable relative error
in the local radial intensity gradient. `Busko (1996; ASPC
101, 139)
<http://adsabs.harvard.edu/abs/1996ASPC..101..139B>`_ showed
that the fitting precision relates to that relative error.
The usual behavior of the gradient relative error is to
increase with semimajor axis, being larger in outer, fainter
regions of a galaxy image. In the current implementation,
the ``maxgerr`` criterion is triggered only when two
consecutive isophotes exceed the value specified by the
parameter. This prevents premature stopping caused by
contamination such as stars and HII regions.
A number of actions may happen when the gradient error
exceeds ``maxgerr`` (or becomes non-significant and is set
to `None`). If the maximum semimajor axis specified by
``maxsma`` is set to `None`, semimajor axis growth is
stopped and the algorithm proceeds inwards to the galaxy
center. If ``maxsma`` is set to some finite value, and this
value is larger than the current semimajor axis length, the
algorithm enters non-iterative mode and proceeds outwards
until reaching ``maxsma``. The default is 0.5.
going_inwards : bool, optional
Parameter to define the sense of SMA growth. When fitting
just one isophote, this parameter is used only by the code
that defines the details of how elliptical arc segments
("sectors") are extracted from the image, when using area
extraction modes (see the ``integrmode`` parameter in the
`~photutils.isophote.EllipseSample` class). The default is
`False`.
Returns
-------
result : `~photutils.isophote.Isophote` instance
The fitted isophote, which also contains fit status
information.
Examples
--------
>>> from photutils.isophote import EllipseSample, EllipseFitter
>>> sample = EllipseSample(data, sma=10.)
>>> fitter = EllipseFitter(sample)
>>> isophote = fitter.fit()
"""
sample = self._sample
# this flag signals that limiting gradient error (`maxgerr`)
# wasn't exceeded yet.
lexceed = False
# here we keep track of the sample that caused the minimum harmonic
# amplitude(in absolute value). This will eventually be used to
# build the resulting Isophote in cases where iterations run to
# the maximum allowed (maxit), or the maximum number of flagged
# data points (fflag) is reached.
minimum_amplitude_value = np.Inf
minimum_amplitude_sample = None
for iter in range(maxit):
# Force the sample to compute its gradient and associated values.
sample.update()
# The extract() method returns sampled values as a 2-d numpy array
# with the following structure:
# values[0] = 1-d array with angles
# values[1] = 1-d array with radii
# values[2] = 1-d array with intensity
values = sample.extract()
# Fit harmonic coefficients. Failure in fitting is
# a fatal error; terminate immediately with sample
# marked as invalid.
try:
coeffs = fit_first_and_second_harmonics(values[0], values[2])
except Exception as e:
log.info(e)
return Isophote(sample, iter+1, False, 3)
coeffs = coeffs[0]
# largest harmonic in absolute value drives the correction.
largest_harmonic_index = np.argmax(np.abs(coeffs[1:]))
largest_harmonic = coeffs[1:][largest_harmonic_index]
# see if the amplitude decreased; if yes, keep the
# corresponding sample for eventual later use.
if abs(largest_harmonic) < minimum_amplitude_value:
minimum_amplitude_value = abs(largest_harmonic)
minimum_amplitude_sample = sample
# check if converged
model = first_and_second_harmonic_function(values[0], coeffs)
residual = values[2] - model
if ((conver * sample.sector_area * np.std(residual))
> np.abs(largest_harmonic)):
# Got a valid solution. But before returning, ensure
# that a minimum of iterations has run.
if iter >= minit-1:
sample.update()
return Isophote(sample, iter+1, True, 0)
# it may not have converged yet, but the sample contains too
# many invalid data points: return.
if sample.actual_points < (sample.total_points * fflag):
# when too many data points were flagged, return the
# best fit sample instead of the current one.
minimum_amplitude_sample.update()
return Isophote(minimum_amplitude_sample, iter+1, True, 1)
# pick appropriate corrector code.
corrector = _correctors[largest_harmonic_index]
# generate *NEW* EllipseSample instance with corrected
# parameter. Note that this instance is still devoid of other
# information besides its geometry. It needs to be explicitly
# updated for computations to proceed. We have to build a new
# EllipseSample instance every time because of the lazy
# extraction process used by EllipseSample code. To minimize
# the number of calls to the area integrators, we pay a
# (hopefully smaller) price here, by having multiple calls to
# the EllipseSample constructor.
sample = corrector.correct(sample, largest_harmonic)
sample.update()
# see if any abnormal (or unusual) conditions warrant
# the change to non-iterative mode, or go-inwards mode.
proceed, lexceed = self._check_conditions(
sample, maxgerr, going_inwards, lexceed)
if not proceed:
sample.update()
return Isophote(sample, iter+1, True, -1)
# Got to the maximum number of iterations. Return with
# code 2, and handle it as a valid isophote. Use the
# best fit sample instead of the current one.
minimum_amplitude_sample.update()
return Isophote(minimum_amplitude_sample, maxit, True, 2) | Fit an elliptical isophote.
Parameters
----------
conver : float, optional
The main convergence criterion. Iterations stop when the
largest harmonic amplitude becomes smaller (in absolute
value) than ``conver`` times the harmonic fit rms. The
default is 0.05.
minit : int, optional
The minimum number of iterations to perform. A minimum of 10
(the default) iterations guarantees that, on average, 2
iterations will be available for fitting each independent
parameter (the four harmonic amplitudes and the intensity
level). For the first isophote, the minimum number of
iterations is 2 * ``minit`` to ensure that, even departing
from not-so-good initial values, the algorithm has a better
chance to converge to a sensible solution.
maxit : int, optional
The maximum number of iterations to perform. The default is
50.
fflag : float, optional
The acceptable fraction of flagged data points in the
sample. If the actual fraction of valid data points is
smaller than this, the iterations will stop and the current
`~photutils.isophote.Isophote` will be returned. Flagged
data points are points that either lie outside the image
frame, are masked, or were rejected by sigma-clipping. The
default is 0.7.
maxgerr : float, optional
The maximum acceptable relative error in the local radial
intensity gradient. This is the main control for preventing
ellipses to grow to regions of too low signal-to-noise
ratio. It specifies the maximum acceptable relative error
in the local radial intensity gradient. `Busko (1996; ASPC
101, 139)
<http://adsabs.harvard.edu/abs/1996ASPC..101..139B>`_ showed
that the fitting precision relates to that relative error.
The usual behavior of the gradient relative error is to
increase with semimajor axis, being larger in outer, fainter
regions of a galaxy image. In the current implementation,
the ``maxgerr`` criterion is triggered only when two
consecutive isophotes exceed the value specified by the
parameter. This prevents premature stopping caused by
contamination such as stars and HII regions.
A number of actions may happen when the gradient error
exceeds ``maxgerr`` (or becomes non-significant and is set
to `None`). If the maximum semimajor axis specified by
``maxsma`` is set to `None`, semimajor axis growth is
stopped and the algorithm proceeds inwards to the galaxy
center. If ``maxsma`` is set to some finite value, and this
value is larger than the current semimajor axis length, the
algorithm enters non-iterative mode and proceeds outwards
until reaching ``maxsma``. The default is 0.5.
going_inwards : bool, optional
Parameter to define the sense of SMA growth. When fitting
just one isophote, this parameter is used only by the code
that defines the details of how elliptical arc segments
("sectors") are extracted from the image, when using area
extraction modes (see the ``integrmode`` parameter in the
`~photutils.isophote.EllipseSample` class). The default is
`False`.
Returns
-------
result : `~photutils.isophote.Isophote` instance
The fitted isophote, which also contains fit status
information.
Examples
--------
>>> from photutils.isophote import EllipseSample, EllipseFitter
>>> sample = EllipseSample(data, sma=10.)
>>> fitter = EllipseFitter(sample)
>>> isophote = fitter.fit() | Below is the instruction that describes the task:
### Input:
Fit an elliptical isophote.
Parameters
----------
conver : float, optional
The main convergence criterion. Iterations stop when the
largest harmonic amplitude becomes smaller (in absolute
value) than ``conver`` times the harmonic fit rms. The
default is 0.05.
minit : int, optional
The minimum number of iterations to perform. A minimum of 10
(the default) iterations guarantees that, on average, 2
iterations will be available for fitting each independent
parameter (the four harmonic amplitudes and the intensity
level). For the first isophote, the minimum number of
iterations is 2 * ``minit`` to ensure that, even departing
from not-so-good initial values, the algorithm has a better
chance to converge to a sensible solution.
maxit : int, optional
The maximum number of iterations to perform. The default is
50.
fflag : float, optional
The acceptable fraction of flagged data points in the
sample. If the actual fraction of valid data points is
smaller than this, the iterations will stop and the current
`~photutils.isophote.Isophote` will be returned. Flagged
data points are points that either lie outside the image
frame, are masked, or were rejected by sigma-clipping. The
default is 0.7.
maxgerr : float, optional
The maximum acceptable relative error in the local radial
intensity gradient. This is the main control for preventing
ellipses to grow to regions of too low signal-to-noise
ratio. It specifies the maximum acceptable relative error
in the local radial intensity gradient. `Busko (1996; ASPC
101, 139)
<http://adsabs.harvard.edu/abs/1996ASPC..101..139B>`_ showed
that the fitting precision relates to that relative error.
The usual behavior of the gradient relative error is to
increase with semimajor axis, being larger in outer, fainter
regions of a galaxy image. In the current implementation,
the ``maxgerr`` criterion is triggered only when two
consecutive isophotes exceed the value specified by the
parameter. This prevents premature stopping caused by
contamination such as stars and HII regions.
A number of actions may happen when the gradient error
exceeds ``maxgerr`` (or becomes non-significant and is set
to `None`). If the maximum semimajor axis specified by
``maxsma`` is set to `None`, semimajor axis growth is
stopped and the algorithm proceeds inwards to the galaxy
center. If ``maxsma`` is set to some finite value, and this
value is larger than the current semimajor axis length, the
algorithm enters non-iterative mode and proceeds outwards
until reaching ``maxsma``. The default is 0.5.
going_inwards : bool, optional
Parameter to define the sense of SMA growth. When fitting
just one isophote, this parameter is used only by the code
that defines the details of how elliptical arc segments
("sectors") are extracted from the image, when using area
extraction modes (see the ``integrmode`` parameter in the
`~photutils.isophote.EllipseSample` class). The default is
`False`.
Returns
-------
result : `~photutils.isophote.Isophote` instance
The fitted isophote, which also contains fit status
information.
Examples
--------
>>> from photutils.isophote import EllipseSample, EllipseFitter
>>> sample = EllipseSample(data, sma=10.)
>>> fitter = EllipseFitter(sample)
>>> isophote = fitter.fit()
### Response:
def fit(self, conver=DEFAULT_CONVERGENCE, minit=DEFAULT_MINIT,
maxit=DEFAULT_MAXIT, fflag=DEFAULT_FFLAG, maxgerr=DEFAULT_MAXGERR,
going_inwards=False):
"""
Fit an elliptical isophote.
Parameters
----------
conver : float, optional
The main convergence criterion. Iterations stop when the
largest harmonic amplitude becomes smaller (in absolute
value) than ``conver`` times the harmonic fit rms. The
default is 0.05.
minit : int, optional
The minimum number of iterations to perform. A minimum of 10
(the default) iterations guarantees that, on average, 2
iterations will be available for fitting each independent
parameter (the four harmonic amplitudes and the intensity
level). For the first isophote, the minimum number of
iterations is 2 * ``minit`` to ensure that, even departing
from not-so-good initial values, the algorithm has a better
chance to converge to a sensible solution.
maxit : int, optional
The maximum number of iterations to perform. The default is
50.
fflag : float, optional
The acceptable fraction of flagged data points in the
sample. If the actual fraction of valid data points is
smaller than this, the iterations will stop and the current
`~photutils.isophote.Isophote` will be returned. Flagged
data points are points that either lie outside the image
frame, are masked, or were rejected by sigma-clipping. The
default is 0.7.
maxgerr : float, optional
The maximum acceptable relative error in the local radial
intensity gradient. This is the main control for preventing
ellipses to grow to regions of too low signal-to-noise
ratio. It specifies the maximum acceptable relative error
in the local radial intensity gradient. `Busko (1996; ASPC
101, 139)
<http://adsabs.harvard.edu/abs/1996ASPC..101..139B>`_ showed
that the fitting precision relates to that relative error.
The usual behavior of the gradient relative error is to
increase with semimajor axis, being larger in outer, fainter
regions of a galaxy image. In the current implementation,
the ``maxgerr`` criterion is triggered only when two
consecutive isophotes exceed the value specified by the
parameter. This prevents premature stopping caused by
contamination such as stars and HII regions.
A number of actions may happen when the gradient error
exceeds ``maxgerr`` (or becomes non-significant and is set
to `None`). If the maximum semimajor axis specified by
``maxsma`` is set to `None`, semimajor axis growth is
stopped and the algorithm proceeds inwards to the galaxy
center. If ``maxsma`` is set to some finite value, and this
value is larger than the current semimajor axis length, the
algorithm enters non-iterative mode and proceeds outwards
until reaching ``maxsma``. The default is 0.5.
going_inwards : bool, optional
Parameter to define the sense of SMA growth. When fitting
just one isophote, this parameter is used only by the code
that defines the details of how elliptical arc segments
("sectors") are extracted from the image, when using area
extraction modes (see the ``integrmode`` parameter in the
`~photutils.isophote.EllipseSample` class). The default is
`False`.
Returns
-------
result : `~photutils.isophote.Isophote` instance
The fitted isophote, which also contains fit status
information.
Examples
--------
>>> from photutils.isophote import EllipseSample, EllipseFitter
>>> sample = EllipseSample(data, sma=10.)
>>> fitter = EllipseFitter(sample)
>>> isophote = fitter.fit()
"""
sample = self._sample
# this flag signals that limiting gradient error (`maxgerr`)
# wasn't exceeded yet.
lexceed = False
# here we keep track of the sample that caused the minimum harmonic
# amplitude (in absolute value). This will eventually be used to
# build the resulting Isophote in cases where iterations run to
# the maximum allowed (maxit), or the maximum number of flagged
# data points (fflag) is reached.
minimum_amplitude_value = np.Inf
minimum_amplitude_sample = None
for iter in range(maxit):
# Force the sample to compute its gradient and associated values.
sample.update()
# The extract() method returns sampled values as a 2-d numpy array
# with the following structure:
# values[0] = 1-d array with angles
# values[1] = 1-d array with radii
# values[2] = 1-d array with intensity
values = sample.extract()
# Fit harmonic coefficients. Failure in fitting is
# a fatal error; terminate immediately with sample
# marked as invalid.
try:
coeffs = fit_first_and_second_harmonics(values[0], values[2])
except Exception as e:
log.info(e)
return Isophote(sample, iter+1, False, 3)
coeffs = coeffs[0]
# largest harmonic in absolute value drives the correction.
largest_harmonic_index = np.argmax(np.abs(coeffs[1:]))
largest_harmonic = coeffs[1:][largest_harmonic_index]
# see if the amplitude decreased; if yes, keep the
# corresponding sample for eventual later use.
if abs(largest_harmonic) < minimum_amplitude_value:
minimum_amplitude_value = abs(largest_harmonic)
minimum_amplitude_sample = sample
# check if converged
model = first_and_second_harmonic_function(values[0], coeffs)
residual = values[2] - model
if ((conver * sample.sector_area * np.std(residual))
> np.abs(largest_harmonic)):
# Got a valid solution. But before returning, ensure
# that a minimum of iterations has run.
if iter >= minit-1:
sample.update()
return Isophote(sample, iter+1, True, 0)
# it may not have converged yet, but the sample contains too
# many invalid data points: return.
if sample.actual_points < (sample.total_points * fflag):
# when too many data points were flagged, return the
# best fit sample instead of the current one.
minimum_amplitude_sample.update()
return Isophote(minimum_amplitude_sample, iter+1, True, 1)
# pick appropriate corrector code.
corrector = _correctors[largest_harmonic_index]
# generate *NEW* EllipseSample instance with corrected
# parameter. Note that this instance is still devoid of other
# information besides its geometry. It needs to be explicitly
# updated for computations to proceed. We have to build a new
# EllipseSample instance every time because of the lazy
# extraction process used by EllipseSample code. To minimize
# the number of calls to the area integrators, we pay a
# (hopefully smaller) price here, by having multiple calls to
# the EllipseSample constructor.
sample = corrector.correct(sample, largest_harmonic)
sample.update()
# see if any abnormal (or unusual) conditions warrant
# the change to non-iterative mode, or go-inwards mode.
proceed, lexceed = self._check_conditions(
sample, maxgerr, going_inwards, lexceed)
if not proceed:
sample.update()
return Isophote(sample, iter+1, True, -1)
# Got to the maximum number of iterations. Return with
# code 2, and handle it as a valid isophote. Use the
# best fit sample instead of the current one.
minimum_amplitude_sample.update()
return Isophote(minimum_amplitude_sample, maxit, True, 2) |
def get_context_data(self, **kwargs):
"""This add in the context of list_type and returns this as whatever the crosstype was."""
context = super(CrossTypeAnimalList, self).get_context_data(**kwargs)
context['list_type'] = self.kwargs['breeding_type']
return context | This add in the context of list_type and returns this as whatever the crosstype was. | Below is the instruction that describes the task:
### Input:
This add in the context of list_type and returns this as whatever the crosstype was.
### Response:
def get_context_data(self, **kwargs):
"""This add in the context of list_type and returns this as whatever the crosstype was."""
context = super(CrossTypeAnimalList, self).get_context_data(**kwargs)
context['list_type'] = self.kwargs['breeding_type']
return context |
def __create_list_item_widget(self, ui, calibration_observable):
"""Called when an item (calibration_observable) is inserted into the list widget. Returns a widget."""
calibration_row = make_calibration_row_widget(ui, calibration_observable)
column = ui.create_column_widget()
column.add_spacing(4)
column.add(calibration_row)
return column | Called when an item (calibration_observable) is inserted into the list widget. Returns a widget. | Below is the instruction that describes the task:
### Input:
Called when an item (calibration_observable) is inserted into the list widget. Returns a widget.
### Response:
def __create_list_item_widget(self, ui, calibration_observable):
"""Called when an item (calibration_observable) is inserted into the list widget. Returns a widget."""
calibration_row = make_calibration_row_widget(ui, calibration_observable)
column = ui.create_column_widget()
column.add_spacing(4)
column.add(calibration_row)
return column |
def customize_form_field(self, name, field):
"""
Allows views to customize their form fields. By default, Smartmin replaces the plain textbox
date input with it's own DatePicker implementation.
"""
if isinstance(field, forms.fields.DateField) and isinstance(field.widget, forms.widgets.DateInput):
field.widget = widgets.DatePickerWidget()
field.input_formats = [field.widget.input_format[1]] + list(field.input_formats)
if isinstance(field, forms.fields.ImageField) and isinstance(field.widget, forms.widgets.ClearableFileInput):
field.widget = widgets.ImageThumbnailWidget()
return field | Allows views to customize their form fields. By default, Smartmin replaces the plain textbox
date input with it's own DatePicker implementation. | Below is the instruction that describes the task:
### Input:
Allows views to customize their form fields. By default, Smartmin replaces the plain textbox
date input with it's own DatePicker implementation.
### Response:
def customize_form_field(self, name, field):
"""
Allows views to customize their form fields. By default, Smartmin replaces the plain textbox
date input with it's own DatePicker implementation.
"""
if isinstance(field, forms.fields.DateField) and isinstance(field.widget, forms.widgets.DateInput):
field.widget = widgets.DatePickerWidget()
field.input_formats = [field.widget.input_format[1]] + list(field.input_formats)
if isinstance(field, forms.fields.ImageField) and isinstance(field.widget, forms.widgets.ClearableFileInput):
field.widget = widgets.ImageThumbnailWidget()
return field |
def transformer_revnet_encoder(encoder_input,
encoder_self_attention_bias,
hparams,
name="encoder"):
"""A stack of transformer layers.
Args:
encoder_input: a Tensor
encoder_self_attention_bias: bias Tensor for self-attention
(see common_attention.attention_bias())
hparams: hyperparameters for model
name: a string
Returns:
y: a Tensors
"""
def f(x, side_input):
"""f(x) for reversible layer, self-attention layer."""
encoder_self_attention_bias = side_input[0]
old_hid_size = hparams.hidden_size
hparams.hidden_size = old_hid_size // 2
with tf.variable_scope("self_attention"):
y = common_attention.multihead_attention(
common_layers.layer_preprocess(
x, hparams), None, encoder_self_attention_bias,
hparams.attention_key_channels or hparams.hidden_size,
hparams.attention_value_channels or hparams.hidden_size,
hparams.hidden_size, hparams.num_heads, hparams.attention_dropout)
y = common_layers.layer_postprocess(x, y, hparams)
hparams.hidden_size = old_hid_size
return y
def g(x):
"""g(x) for reversible layer, feed-forward layer."""
old_hid_size = hparams.hidden_size
hparams.hidden_size = old_hid_size // 2
with tf.variable_scope("ffn"):
y = transformer.transformer_ffn_layer(
common_layers.layer_preprocess(x, hparams), hparams)
y = common_layers.layer_postprocess(x, y, hparams)
hparams.hidden_size = old_hid_size
return y
x1, x2 = tf.split(encoder_input, 2, axis=-1)
with tf.variable_scope(name):
y1, y2 = tf.contrib.layers.rev_block(
x1,
x2,
f,
g,
num_layers=hparams.num_hidden_layers,
f_side_input=[encoder_self_attention_bias],
is_training=hparams.mode == tf.estimator.ModeKeys.TRAIN)
y = tf.concat([y1, y2], axis=-1)
return common_layers.layer_preprocess(y, hparams) | A stack of transformer layers.
Args:
encoder_input: a Tensor
encoder_self_attention_bias: bias Tensor for self-attention
(see common_attention.attention_bias())
hparams: hyperparameters for model
name: a string
Returns:
y: a Tensors | Below is the instruction that describes the task:
### Input:
A stack of transformer layers.
Args:
encoder_input: a Tensor
encoder_self_attention_bias: bias Tensor for self-attention
(see common_attention.attention_bias())
hparams: hyperparameters for model
name: a string
Returns:
y: a Tensors
### Response:
def transformer_revnet_encoder(encoder_input,
encoder_self_attention_bias,
hparams,
name="encoder"):
"""A stack of transformer layers.
Args:
encoder_input: a Tensor
encoder_self_attention_bias: bias Tensor for self-attention
(see common_attention.attention_bias())
hparams: hyperparameters for model
name: a string
Returns:
y: a Tensors
"""
def f(x, side_input):
"""f(x) for reversible layer, self-attention layer."""
encoder_self_attention_bias = side_input[0]
old_hid_size = hparams.hidden_size
hparams.hidden_size = old_hid_size // 2
with tf.variable_scope("self_attention"):
y = common_attention.multihead_attention(
common_layers.layer_preprocess(
x, hparams), None, encoder_self_attention_bias,
hparams.attention_key_channels or hparams.hidden_size,
hparams.attention_value_channels or hparams.hidden_size,
hparams.hidden_size, hparams.num_heads, hparams.attention_dropout)
y = common_layers.layer_postprocess(x, y, hparams)
hparams.hidden_size = old_hid_size
return y
def g(x):
"""g(x) for reversible layer, feed-forward layer."""
old_hid_size = hparams.hidden_size
hparams.hidden_size = old_hid_size // 2
with tf.variable_scope("ffn"):
y = transformer.transformer_ffn_layer(
common_layers.layer_preprocess(x, hparams), hparams)
y = common_layers.layer_postprocess(x, y, hparams)
hparams.hidden_size = old_hid_size
return y
x1, x2 = tf.split(encoder_input, 2, axis=-1)
with tf.variable_scope(name):
y1, y2 = tf.contrib.layers.rev_block(
x1,
x2,
f,
g,
num_layers=hparams.num_hidden_layers,
f_side_input=[encoder_self_attention_bias],
is_training=hparams.mode == tf.estimator.ModeKeys.TRAIN)
y = tf.concat([y1, y2], axis=-1)
return common_layers.layer_preprocess(y, hparams) |
def is_connected(C, directed=True):
r"""Return true, if the input count matrix is completely connected.
Effectively checking if the number of connected components equals one.
Parameters
----------
C : scipy.sparse matrix or numpy ndarray
Count matrix specifying edge weights.
directed : bool, optional
Whether to compute connected components for a directed or
undirected graph. Default is True.
Returns
-------
connected : boolean, returning true only if C is connected.
"""
nc = csgraph.connected_components(C, directed=directed, connection='strong', \
return_labels=False)
return nc == 1 | r"""Return true, if the input count matrix is completely connected.
Effectively checking if the number of connected components equals one.
Parameters
----------
C : scipy.sparse matrix or numpy ndarray
Count matrix specifying edge weights.
directed : bool, optional
Whether to compute connected components for a directed or
undirected graph. Default is True.
Returns
-------
connected : boolean, returning true only if C is connected. | Below is the instruction that describes the task:
### Input:
r"""Return true, if the input count matrix is completely connected.
Effectively checking if the number of connected components equals one.
Parameters
----------
C : scipy.sparse matrix or numpy ndarray
Count matrix specifying edge weights.
directed : bool, optional
Whether to compute connected components for a directed or
undirected graph. Default is True.
Returns
-------
connected : boolean, returning true only if C is connected.
### Response:
def is_connected(C, directed=True):
r"""Return true, if the input count matrix is completely connected.
Effectively checking if the number of connected components equals one.
Parameters
----------
C : scipy.sparse matrix or numpy ndarray
Count matrix specifying edge weights.
directed : bool, optional
Whether to compute connected components for a directed or
undirected graph. Default is True.
Returns
-------
connected : boolean, returning true only if C is connected.
"""
nc = csgraph.connected_components(C, directed=directed, connection='strong', \
return_labels=False)
return nc == 1 |
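The record above leans on `scipy.sparse.csgraph`; a dependency-free analogue of the same strong-connectivity check (hypothetical helper name): a directed graph is strongly connected iff every node is reachable from an arbitrary start node in both the graph and its reverse. All nodes are assumed to appear as adjacency-dict keys.

```python
from collections import deque

def is_strongly_connected(adj):
    """adj: dict mapping each node to an iterable of successor nodes.
    Every node is assumed to appear as a key."""
    nodes = set(adj)
    if not nodes:
        return True
    def reachable(graph, start):
        # plain BFS over the adjacency dict
        seen = {start}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen
    # build the reversed graph
    rev = {u: [] for u in nodes}
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)
    start = next(iter(nodes))
    return reachable(adj, start) == nodes and reachable(rev, start) == nodes
```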
def _route(self):
"""Define route."""
# REST API
self._app.route('/api/%s/config' % self.API_VERSION, method="GET",
callback=self._api_config)
self._app.route('/api/%s/config/<item>' % self.API_VERSION, method="GET",
callback=self._api_config_item)
self._app.route('/api/%s/args' % self.API_VERSION, method="GET",
callback=self._api_args)
self._app.route('/api/%s/args/<item>' % self.API_VERSION, method="GET",
callback=self._api_args_item)
self._app.route('/api/%s/help' % self.API_VERSION, method="GET",
callback=self._api_help)
self._app.route('/api/%s/pluginslist' % self.API_VERSION, method="GET",
callback=self._api_plugins)
self._app.route('/api/%s/all' % self.API_VERSION, method="GET",
callback=self._api_all)
self._app.route('/api/%s/all/limits' % self.API_VERSION, method="GET",
callback=self._api_all_limits)
self._app.route('/api/%s/all/views' % self.API_VERSION, method="GET",
callback=self._api_all_views)
self._app.route('/api/%s/<plugin>' % self.API_VERSION, method="GET",
callback=self._api)
self._app.route('/api/%s/<plugin>/history' % self.API_VERSION, method="GET",
callback=self._api_history)
self._app.route('/api/%s/<plugin>/history/<nb:int>' % self.API_VERSION, method="GET",
callback=self._api_history)
self._app.route('/api/%s/<plugin>/limits' % self.API_VERSION, method="GET",
callback=self._api_limits)
self._app.route('/api/%s/<plugin>/views' % self.API_VERSION, method="GET",
callback=self._api_views)
self._app.route('/api/%s/<plugin>/<item>' % self.API_VERSION, method="GET",
callback=self._api_item)
self._app.route('/api/%s/<plugin>/<item>/history' % self.API_VERSION, method="GET",
callback=self._api_item_history)
self._app.route('/api/%s/<plugin>/<item>/history/<nb:int>' % self.API_VERSION, method="GET",
callback=self._api_item_history)
self._app.route('/api/%s/<plugin>/<item>/<value>' % self.API_VERSION, method="GET",
callback=self._api_value)
bindmsg = 'Glances RESTful API Server started on {}api/{}/'.format(self.bind_url,
self.API_VERSION)
logger.info(bindmsg)
# WEB UI
if not self.args.disable_webui:
self._app.route('/', method="GET", callback=self._index)
self._app.route('/<refresh_time:int>', method=["GET"], callback=self._index)
self._app.route('/<filepath:path>', method="GET", callback=self._resource)
bindmsg = 'Glances Web User Interface started on {}'.format(self.bind_url)
logger.info(bindmsg)
else:
logger.info('The WebUI is disable (--disable-webui)')
print(bindmsg) | Define route. | Below is the instruction that describes the task:
### Input:
Define route.
### Response:
def _route(self):
"""Define route."""
# REST API
self._app.route('/api/%s/config' % self.API_VERSION, method="GET",
callback=self._api_config)
self._app.route('/api/%s/config/<item>' % self.API_VERSION, method="GET",
callback=self._api_config_item)
self._app.route('/api/%s/args' % self.API_VERSION, method="GET",
callback=self._api_args)
self._app.route('/api/%s/args/<item>' % self.API_VERSION, method="GET",
callback=self._api_args_item)
self._app.route('/api/%s/help' % self.API_VERSION, method="GET",
callback=self._api_help)
self._app.route('/api/%s/pluginslist' % self.API_VERSION, method="GET",
callback=self._api_plugins)
self._app.route('/api/%s/all' % self.API_VERSION, method="GET",
callback=self._api_all)
self._app.route('/api/%s/all/limits' % self.API_VERSION, method="GET",
callback=self._api_all_limits)
self._app.route('/api/%s/all/views' % self.API_VERSION, method="GET",
callback=self._api_all_views)
self._app.route('/api/%s/<plugin>' % self.API_VERSION, method="GET",
callback=self._api)
self._app.route('/api/%s/<plugin>/history' % self.API_VERSION, method="GET",
callback=self._api_history)
self._app.route('/api/%s/<plugin>/history/<nb:int>' % self.API_VERSION, method="GET",
callback=self._api_history)
self._app.route('/api/%s/<plugin>/limits' % self.API_VERSION, method="GET",
callback=self._api_limits)
self._app.route('/api/%s/<plugin>/views' % self.API_VERSION, method="GET",
callback=self._api_views)
self._app.route('/api/%s/<plugin>/<item>' % self.API_VERSION, method="GET",
callback=self._api_item)
self._app.route('/api/%s/<plugin>/<item>/history' % self.API_VERSION, method="GET",
callback=self._api_item_history)
self._app.route('/api/%s/<plugin>/<item>/history/<nb:int>' % self.API_VERSION, method="GET",
callback=self._api_item_history)
self._app.route('/api/%s/<plugin>/<item>/<value>' % self.API_VERSION, method="GET",
callback=self._api_value)
bindmsg = 'Glances RESTful API Server started on {}api/{}/'.format(self.bind_url,
self.API_VERSION)
logger.info(bindmsg)
# WEB UI
if not self.args.disable_webui:
self._app.route('/', method="GET", callback=self._index)
self._app.route('/<refresh_time:int>', method=["GET"], callback=self._index)
self._app.route('/<filepath:path>', method="GET", callback=self._resource)
bindmsg = 'Glances Web User Interface started on {}'.format(self.bind_url)
logger.info(bindmsg)
else:
logger.info('The WebUI is disable (--disable-webui)')
print(bindmsg) |
def add(from_user, from_id, to_user, to_id, type):
"adds a relation to the graph"
if options.users and to_user:
G.add_node(from_user, screen_name=from_user)
G.add_node(to_user, screen_name=to_user)
if G.has_edge(from_user, to_user):
weight = G[from_user][to_user]['weight'] + 1
else:
weight = 1
G.add_edge(from_user, to_user, type=type, weight=weight)
elif not options.users and to_id:
G.add_node(from_id, screen_name=from_user, type=type)
if to_user:
G.add_node(to_id, screen_name=to_user)
else:
G.add_node(to_id)
G.add_edge(from_id, to_id, type=type) | adds a relation to the graph | Below is the instruction that describes the task:
### Input:
adds a relation to the graph
### Response:
def add(from_user, from_id, to_user, to_id, type):
"adds a relation to the graph"
if options.users and to_user:
G.add_node(from_user, screen_name=from_user)
G.add_node(to_user, screen_name=to_user)
if G.has_edge(from_user, to_user):
weight = G[from_user][to_user]['weight'] + 1
else:
weight = 1
G.add_edge(from_user, to_user, type=type, weight=weight)
elif not options.users and to_id:
G.add_node(from_id, screen_name=from_user, type=type)
if to_user:
G.add_node(to_id, screen_name=to_user)
else:
G.add_node(to_id)
G.add_edge(from_id, to_id, type=type) |
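The weight bookkeeping in the record above relies on networkx's `has_edge`/`add_edge`; the same increment-on-repeat pattern with a plain dict-of-dicts (hypothetical helper name):

```python
def add_weighted_edge(graph, u, v, type=None):
    # graph: dict u -> dict v -> attribute dict; a repeated edge bumps
    # its weight, matching the has_edge / weight + 1 pattern above.
    edges = graph.setdefault(u, {})
    if v in edges:
        edges[v]["weight"] += 1
    else:
        edges[v] = {"weight": 1, "type": type}
    graph.setdefault(v, {})  # ensure the target node exists as well
```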
def clear(self, session):
"""Clears a device.
Corresponds to viClear function of the VISA library.
:param session: Unique logical identifier to a session.
:return: return value of the library call.
:rtype: :class:`pyvisa.constants.StatusCode`
"""
try:
sess = self.sessions[session]
except KeyError:
return constants.StatusCode.error_invalid_object
return sess.clear() | Clears a device.
Corresponds to viClear function of the VISA library.
:param session: Unique logical identifier to a session.
:return: return value of the library call.
:rtype: :class:`pyvisa.constants.StatusCode` | Below is the instruction that describes the task:
### Input:
Clears a device.
Corresponds to viClear function of the VISA library.
:param session: Unique logical identifier to a session.
:return: return value of the library call.
:rtype: :class:`pyvisa.constants.StatusCode`
### Response:
def clear(self, session):
"""Clears a device.
Corresponds to viClear function of the VISA library.
:param session: Unique logical identifier to a session.
:return: return value of the library call.
:rtype: :class:`pyvisa.constants.StatusCode`
"""
try:
sess = self.sessions[session]
except KeyError:
return constants.StatusCode.error_invalid_object
return sess.clear() |
def manage_async(self, command='', name='process', site=ALL, exclude_sites='', end_message='', recipients=''):
"""
Starts a Django management command in a screen.
Parameters:
command :- all arguments passed to `./manage` as a single string
site :- the site to run the command for (default is all)
Designed to be ran like:
fab <role> dj.manage_async:"some_management_command --force"
"""
exclude_sites = exclude_sites.split(':')
r = self.local_renderer
for _site, site_data in self.iter_sites(site=site, no_secure=True):
if _site in exclude_sites:
continue
r.env.SITE = _site
r.env.command = command
r.env.end_email_command = ''
r.env.recipients = recipients or ''
r.env.end_email_command = ''
if end_message:
end_message = end_message + ' for ' + _site
end_message = end_message.replace(' ', '_')
r.env.end_message = end_message
r.env.end_email_command = r.format('{manage_cmd} send_mail --subject={end_message} --recipients={recipients}')
r.env.name = name.format(**r.genv)
r.run(
'screen -dmS {name} bash -c "export SITE={SITE}; '\
'export ROLE={ROLE}; cd {project_dir}; '\
'{manage_cmd} {command} --traceback; {end_email_command}"; sleep 3;') | Starts a Django management command in a screen.
Parameters:
command :- all arguments passed to `./manage` as a single string
site :- the site to run the command for (default is all)
Designed to be ran like:
fab <role> dj.manage_async:"some_management_command --force" | Below is the instruction that describes the task:
### Input:
Starts a Django management command in a screen.
Parameters:
command :- all arguments passed to `./manage` as a single string
site :- the site to run the command for (default is all)
Designed to be ran like:
fab <role> dj.manage_async:"some_management_command --force"
### Response:
def manage_async(self, command='', name='process', site=ALL, exclude_sites='', end_message='', recipients=''):
"""
Starts a Django management command in a screen.
Parameters:
command :- all arguments passed to `./manage` as a single string
site :- the site to run the command for (default is all)
Designed to be ran like:
fab <role> dj.manage_async:"some_management_command --force"
"""
exclude_sites = exclude_sites.split(':')
r = self.local_renderer
for _site, site_data in self.iter_sites(site=site, no_secure=True):
if _site in exclude_sites:
continue
r.env.SITE = _site
r.env.command = command
r.env.end_email_command = ''
r.env.recipients = recipients or ''
r.env.end_email_command = ''
if end_message:
end_message = end_message + ' for ' + _site
end_message = end_message.replace(' ', '_')
r.env.end_message = end_message
r.env.end_email_command = r.format('{manage_cmd} send_mail --subject={end_message} --recipients={recipients}')
r.env.name = name.format(**r.genv)
r.run(
'screen -dmS {name} bash -c "export SITE={SITE}; '\
'export ROLE={ROLE}; cd {project_dir}; '\
'{manage_cmd} {command} --traceback; {end_email_command}"; sleep 3;') |
def topology(struct=None, protein='protein',
top='system.top', dirname='top',
posres="posres.itp",
ff="oplsaa", water="tip4p",
**pdb2gmx_args):
"""Build Gromacs topology files from pdb.
:Keywords:
*struct*
input structure (**required**)
*protein*
name of the output files
*top*
name of the topology file
*dirname*
directory in which the new topology will be stored
*ff*
force field (string understood by ``pdb2gmx``); default
"oplsaa"
*water*
water model (string), default "tip4p"
*pdb2gmxargs*
other arguments for ``pdb2gmx``
.. note::
At the moment this function simply runs ``pdb2gmx`` and uses
the resulting topology file directly. If you want to create
more complicated topologies and maybe also use additional itp
files or make a protein itp file then you will have to do this
manually.
"""
structure = realpath(struct)
new_struct = protein + '.pdb'
if posres is None:
posres = protein + '_posres.itp'
pdb2gmx_args.update({'f': structure, 'o': new_struct, 'p': top, 'i': posres,
'ff': ff, 'water': water})
with in_dir(dirname):
logger.info("[{dirname!s}] Building topology {top!r} from struct = {struct!r}".format(**vars()))
# perhaps parse output from pdb2gmx 4.5.x to get the names of the chain itp files?
gromacs.pdb2gmx(**pdb2gmx_args)
return { \
'top': realpath(dirname, top), \
'struct': realpath(dirname, new_struct), \
'posres' : realpath(dirname, posres) } | Build Gromacs topology files from pdb.
:Keywords:
*struct*
input structure (**required**)
*protein*
name of the output files
*top*
name of the topology file
*dirname*
directory in which the new topology will be stored
*ff*
force field (string understood by ``pdb2gmx``); default
"oplsaa"
*water*
water model (string), default "tip4p"
*pdb2gmxargs*
other arguments for ``pdb2gmx``
.. note::
At the moment this function simply runs ``pdb2gmx`` and uses
the resulting topology file directly. If you want to create
more complicated topologies and maybe also use additional itp
files or make a protein itp file then you will have to do this
manually. | Below is the instruction that describes the task:
### Input:
Build Gromacs topology files from pdb.
:Keywords:
*struct*
input structure (**required**)
*protein*
name of the output files
*top*
name of the topology file
*dirname*
directory in which the new topology will be stored
*ff*
force field (string understood by ``pdb2gmx``); default
"oplsaa"
*water*
water model (string), default "tip4p"
*pdb2gmxargs*
other arguments for ``pdb2gmx``
.. note::
At the moment this function simply runs ``pdb2gmx`` and uses
the resulting topology file directly. If you want to create
more complicated topologies and maybe also use additional itp
files or make a protein itp file then you will have to do this
manually.
### Response:
def topology(struct=None, protein='protein',
top='system.top', dirname='top',
posres="posres.itp",
ff="oplsaa", water="tip4p",
**pdb2gmx_args):
"""Build Gromacs topology files from pdb.
:Keywords:
*struct*
input structure (**required**)
*protein*
name of the output files
*top*
name of the topology file
*dirname*
directory in which the new topology will be stored
*ff*
force field (string understood by ``pdb2gmx``); default
"oplsaa"
*water*
water model (string), default "tip4p"
*pdb2gmxargs*
other arguments for ``pdb2gmx``
.. note::
At the moment this function simply runs ``pdb2gmx`` and uses
the resulting topology file directly. If you want to create
more complicated topologies and maybe also use additional itp
files or make a protein itp file then you will have to do this
manually.
"""
structure = realpath(struct)
new_struct = protein + '.pdb'
if posres is None:
posres = protein + '_posres.itp'
pdb2gmx_args.update({'f': structure, 'o': new_struct, 'p': top, 'i': posres,
'ff': ff, 'water': water})
with in_dir(dirname):
logger.info("[{dirname!s}] Building topology {top!r} from struct = {struct!r}".format(**vars()))
# perhaps parse output from pdb2gmx 4.5.x to get the names of the chain itp files?
gromacs.pdb2gmx(**pdb2gmx_args)
return { \
'top': realpath(dirname, top), \
'struct': realpath(dirname, new_struct), \
'posres' : realpath(dirname, posres) } |
def _get_list_axis(self, key, axis=None):
"""
Return Series values by list or array of integers
Parameters
----------
key : list-like positional indexer
axis : int (can only be zero)
Returns
-------
Series object
"""
if axis is None:
axis = self.axis or 0
try:
return self.obj._take(key, axis=axis)
except IndexError:
# re-raise with different error message
raise IndexError("positional indexers are out-of-bounds") | Return Series values by list or array of integers
Parameters
----------
key : list-like positional indexer
axis : int (can only be zero)
Returns
-------
Series object | Below is the instruction that describes the task:
### Input:
Return Series values by list or array of integers
Parameters
----------
key : list-like positional indexer
axis : int (can only be zero)
Returns
-------
Series object
### Response:
def _get_list_axis(self, key, axis=None):
"""
Return Series values by list or array of integers
Parameters
----------
key : list-like positional indexer
axis : int (can only be zero)
Returns
-------
Series object
"""
if axis is None:
axis = self.axis or 0
try:
return self.obj._take(key, axis=axis)
except IndexError:
# re-raise with different error message
raise IndexError("positional indexers are out-of-bounds") |
def pause(profile_process='worker'):
"""Pause profiling.
Parameters
----------
profile_process : string
whether to profile kvstore `server` or `worker`.
server can only be profiled when kvstore is of type dist.
if this is not passed, defaults to `worker`
"""
profile_process2int = {'worker': 0, 'server': 1}
check_call(_LIB.MXProcessProfilePause(int(1),
profile_process2int[profile_process],
profiler_kvstore_handle)) | Pause profiling.
Parameters
----------
profile_process : string
whether to profile kvstore `server` or `worker`.
server can only be profiled when kvstore is of type dist.
if this is not passed, defaults to `worker` | Below is the instruction that describes the task:
### Input:
Pause profiling.
Parameters
----------
profile_process : string
whether to profile kvstore `server` or `worker`.
server can only be profiled when kvstore is of type dist.
if this is not passed, defaults to `worker`
### Response:
def pause(profile_process='worker'):
"""Pause profiling.
Parameters
----------
profile_process : string
whether to profile kvstore `server` or `worker`.
server can only be profiled when kvstore is of type dist.
if this is not passed, defaults to `worker`
"""
profile_process2int = {'worker': 0, 'server': 1}
check_call(_LIB.MXProcessProfilePause(int(1),
profile_process2int[profile_process],
profiler_kvstore_handle)) |
def decode_signature(sigb64):
"""
Decode a signature into r, s
"""
sig_bin = base64.b64decode(sigb64)
if len(sig_bin) != 64:
raise ValueError("Invalid base64 signature")
sig_hex = sig_bin.encode('hex')
sig_r = int(sig_hex[:64], 16)
sig_s = int(sig_hex[64:], 16)
return sig_r, sig_s | Decode a signature into r, s | Below is the instruction that describes the task:
### Input:
Decode a signature into r, s
### Response:
def decode_signature(sigb64):
"""
Decode a signature into r, s
"""
sig_bin = base64.b64decode(sigb64)
if len(sig_bin) != 64:
raise ValueError("Invalid base64 signature")
sig_hex = sig_bin.encode('hex')
sig_r = int(sig_hex[:64], 16)
sig_s = int(sig_hex[64:], 16)
return sig_r, sig_s |
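The `decode_signature` record above is Python 2 (`bytes.encode('hex')` no longer exists in Python 3); the same r/s split rendered for Python 3 with only the standard library:

```python
import base64

def decode_signature_py3(sigb64):
    """Split a base64-encoded 64-byte signature into (r, s) integers."""
    sig_bin = base64.b64decode(sigb64)
    if len(sig_bin) != 64:
        raise ValueError("Invalid base64 signature")
    # First 32 bytes hold r, last 32 hold s, both big-endian.
    sig_r = int.from_bytes(sig_bin[:32], "big")
    sig_s = int.from_bytes(sig_bin[32:], "big")
    return sig_r, sig_s
```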
def path_only_contains_dirs(self, path):
"""Return boolean on whether a path only contains directories."""
pathlistdir = os.listdir(path)
if pathlistdir == []:
return True
if any(os.path.isfile(os.path.join(path, i)) for i in pathlistdir):
return False
return all(self.path_only_contains_dirs(os.path.join(path, i)) for i in pathlistdir) | Return boolean on whether a path only contains directories. | Below is the instruction that describes the task:
### Input:
Return boolean on whether a path only contains directories.
### Response:
def path_only_contains_dirs(self, path):
"""Return boolean on whether a path only contains directories."""
pathlistdir = os.listdir(path)
if pathlistdir == []:
return True
if any(os.path.isfile(os.path.join(path, i)) for i in pathlistdir):
return False
return all(self.path_only_contains_dirs(os.path.join(path, i)) for i in pathlistdir) |
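The recursion above returns True only when no file exists at any depth below `path`. A standalone sketch (dropping the `self` the original method carries) with a quick check against a temporary tree:

```python
import os
import tempfile

def path_only_contains_dirs(path):
    """True iff `path` contains no files at any depth (standalone sketch)."""
    entries = os.listdir(path)
    if entries == []:
        return True
    if any(os.path.isfile(os.path.join(path, e)) for e in entries):
        return False
    return all(path_only_contains_dirs(os.path.join(path, e)) for e in entries)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
print(path_only_contains_dirs(root))           # True: only nested empty dirs
open(os.path.join(root, "a", "f.txt"), "w").close()
print(path_only_contains_dirs(root))           # False: a file now exists below root
```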
def arc_by_radian(x, y, height, radian_range, thickness, gaussian_width):
"""
Radial arc with Gaussian fall-off after the solid ring-shaped
region with the given thickness, with shape specified by the
(start,end) radian_range.
"""
# Create a circular ring (copied from the ring function)
radius = height/2.0
half_thickness = thickness/2.0
distance_from_origin = np.sqrt(x**2+y**2)
distance_outside_outer_disk = distance_from_origin - radius - half_thickness
distance_inside_inner_disk = radius - half_thickness - distance_from_origin
ring = 1.0-np.bitwise_xor(np.greater_equal(distance_inside_inner_disk,0.0),
np.greater_equal(distance_outside_outer_disk,0.0))
sigmasq = gaussian_width*gaussian_width
if sigmasq==0.0:
inner_falloff = x*0.0
outer_falloff = x*0.0
else:
with float_error_ignore():
inner_falloff = np.exp(np.divide(-distance_inside_inner_disk*distance_inside_inner_disk, 2.0*sigmasq))
outer_falloff = np.exp(np.divide(-distance_outside_outer_disk*distance_outside_outer_disk, 2.0*sigmasq))
output_ring = np.maximum(inner_falloff,np.maximum(outer_falloff,ring))
    # Calculate radians (in 4 phases) and cut according to the set range
# RZHACKALERT:
# Function float_error_ignore() cannot catch the exception when
    # both dividend and divisor are 0.0, and when only divisor is 0.0
# it returns 'Inf' rather than 0.0. In x, y and
# distance_from_origin, only one point in distance_from_origin can
# be 0.0 (circle center) and in this point x and y must be 0.0 as
# well. So here is a hack to avoid the 'invalid value encountered
# in divide' error by turning 0.0 to 1e-5 in distance_from_origin.
distance_from_origin += np.where(distance_from_origin == 0.0, 1e-5, 0)
with float_error_ignore():
sines = np.divide(y, distance_from_origin)
cosines = np.divide(x, distance_from_origin)
arcsines = np.arcsin(sines)
phase_1 = np.where(np.logical_and(sines >= 0, cosines >= 0), 2*pi-arcsines, 0)
phase_2 = np.where(np.logical_and(sines >= 0, cosines < 0), pi+arcsines, 0)
phase_3 = np.where(np.logical_and(sines < 0, cosines < 0), pi+arcsines, 0)
phase_4 = np.where(np.logical_and(sines < 0, cosines >= 0), -arcsines, 0)
arcsines = phase_1 + phase_2 + phase_3 + phase_4
if radian_range[0] <= radian_range[1]:
return np.where(np.logical_and(arcsines >= radian_range[0], arcsines <= radian_range[1]),
output_ring, 0.0)
else:
return np.where(np.logical_or(arcsines >= radian_range[0], arcsines <= radian_range[1]),
output_ring, 0.0) | Radial arc with Gaussian fall-off after the solid ring-shaped
region with the given thickness, with shape specified by the
(start,end) radian_range. | Below is the instruction that describes the task:
### Input:
Radial arc with Gaussian fall-off after the solid ring-shaped
region with the given thickness, with shape specified by the
(start,end) radian_range.
### Response:
def arc_by_radian(x, y, height, radian_range, thickness, gaussian_width):
"""
Radial arc with Gaussian fall-off after the solid ring-shaped
region with the given thickness, with shape specified by the
(start,end) radian_range.
"""
# Create a circular ring (copied from the ring function)
radius = height/2.0
half_thickness = thickness/2.0
distance_from_origin = np.sqrt(x**2+y**2)
distance_outside_outer_disk = distance_from_origin - radius - half_thickness
distance_inside_inner_disk = radius - half_thickness - distance_from_origin
ring = 1.0-np.bitwise_xor(np.greater_equal(distance_inside_inner_disk,0.0),
np.greater_equal(distance_outside_outer_disk,0.0))
sigmasq = gaussian_width*gaussian_width
if sigmasq==0.0:
inner_falloff = x*0.0
outer_falloff = x*0.0
else:
with float_error_ignore():
inner_falloff = np.exp(np.divide(-distance_inside_inner_disk*distance_inside_inner_disk, 2.0*sigmasq))
outer_falloff = np.exp(np.divide(-distance_outside_outer_disk*distance_outside_outer_disk, 2.0*sigmasq))
output_ring = np.maximum(inner_falloff,np.maximum(outer_falloff,ring))
    # Calculate radians (in 4 phases) and cut according to the set range
# RZHACKALERT:
# Function float_error_ignore() cannot catch the exception when
    # both dividend and divisor are 0.0, and when only divisor is 0.0
# it returns 'Inf' rather than 0.0. In x, y and
# distance_from_origin, only one point in distance_from_origin can
# be 0.0 (circle center) and in this point x and y must be 0.0 as
# well. So here is a hack to avoid the 'invalid value encountered
# in divide' error by turning 0.0 to 1e-5 in distance_from_origin.
distance_from_origin += np.where(distance_from_origin == 0.0, 1e-5, 0)
with float_error_ignore():
sines = np.divide(y, distance_from_origin)
cosines = np.divide(x, distance_from_origin)
arcsines = np.arcsin(sines)
phase_1 = np.where(np.logical_and(sines >= 0, cosines >= 0), 2*pi-arcsines, 0)
phase_2 = np.where(np.logical_and(sines >= 0, cosines < 0), pi+arcsines, 0)
phase_3 = np.where(np.logical_and(sines < 0, cosines < 0), pi+arcsines, 0)
phase_4 = np.where(np.logical_and(sines < 0, cosines >= 0), -arcsines, 0)
arcsines = phase_1 + phase_2 + phase_3 + phase_4
if radian_range[0] <= radian_range[1]:
return np.where(np.logical_and(arcsines >= radian_range[0], arcsines <= radian_range[1]),
output_ring, 0.0)
else:
return np.where(np.logical_or(arcsines >= radian_range[0], arcsines <= radian_range[1]),
output_ring, 0.0) |
def log(arg1, arg2=None):
"""Returns the first argument-based logarithm of the second argument.
If there is only one argument, then this takes the natural logarithm of the argument.
>>> df.select(log(10.0, df.age).alias('ten')).rdd.map(lambda l: str(l.ten)[:7]).collect()
['0.30102', '0.69897']
>>> df.select(log(df.age).alias('e')).rdd.map(lambda l: str(l.e)[:7]).collect()
['0.69314', '1.60943']
"""
sc = SparkContext._active_spark_context
if arg2 is None:
jc = sc._jvm.functions.log(_to_java_column(arg1))
else:
jc = sc._jvm.functions.log(arg1, _to_java_column(arg2))
return Column(jc) | Returns the first argument-based logarithm of the second argument.
If there is only one argument, then this takes the natural logarithm of the argument.
>>> df.select(log(10.0, df.age).alias('ten')).rdd.map(lambda l: str(l.ten)[:7]).collect()
['0.30102', '0.69897']
>>> df.select(log(df.age).alias('e')).rdd.map(lambda l: str(l.e)[:7]).collect()
['0.69314', '1.60943'] | Below is the instruction that describes the task:
### Input:
Returns the first argument-based logarithm of the second argument.
If there is only one argument, then this takes the natural logarithm of the argument.
>>> df.select(log(10.0, df.age).alias('ten')).rdd.map(lambda l: str(l.ten)[:7]).collect()
['0.30102', '0.69897']
>>> df.select(log(df.age).alias('e')).rdd.map(lambda l: str(l.e)[:7]).collect()
['0.69314', '1.60943']
### Response:
def log(arg1, arg2=None):
"""Returns the first argument-based logarithm of the second argument.
If there is only one argument, then this takes the natural logarithm of the argument.
>>> df.select(log(10.0, df.age).alias('ten')).rdd.map(lambda l: str(l.ten)[:7]).collect()
['0.30102', '0.69897']
>>> df.select(log(df.age).alias('e')).rdd.map(lambda l: str(l.e)[:7]).collect()
['0.69314', '1.60943']
"""
sc = SparkContext._active_spark_context
if arg2 is None:
jc = sc._jvm.functions.log(_to_java_column(arg1))
else:
jc = sc._jvm.functions.log(arg1, _to_java_column(arg2))
return Column(jc) |
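The Spark `log` row above dispatches on argument count: one argument means natural log, two means the first argument is the base. A plain-Python sketch of the same behaviour using the standard `math` module (not Spark itself), reproducing the docstring's expected digits:

```python
import math

def log(arg1, arg2=None):
    """One argument: natural log of arg1. Two arguments: log of arg2 in base arg1."""
    if arg2 is None:
        return math.log(arg1)       # natural logarithm
    return math.log(arg2, arg1)     # arg1 is the base, arg2 the value

print(str(log(10.0, 2.0))[:7])  # 0.30102  (log base 10 of 2)
print(str(log(2.0))[:7])        # 0.69314  (ln 2)
```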
def get_concept(self, conceptId, lang='en'):
""" Fetch the concept from the Knowledge base
Args:
id (str): The concept id to be fetched, it can be Wikipedia
page id or Wikidata id.
Returns:
dict, int: A dict containing the concept information; an integer
representing the response code.
"""
url = urljoin(self.concept_service + '/', conceptId)
res, status_code = self.get(url, params={'lang': lang})
if status_code != 200:
logger.debug('Fetch concept failed.')
return self.decode(res), status_code | Fetch the concept from the Knowledge base
Args:
id (str): The concept id to be fetched, it can be Wikipedia
page id or Wikidata id.
Returns:
dict, int: A dict containing the concept information; an integer
representing the response code. | Below is the instruction that describes the task:
### Input:
Fetch the concept from the Knowledge base
Args:
id (str): The concept id to be fetched, it can be Wikipedia
page id or Wikidata id.
Returns:
dict, int: A dict containing the concept information; an integer
representing the response code.
### Response:
def get_concept(self, conceptId, lang='en'):
""" Fetch the concept from the Knowledge base
Args:
id (str): The concept id to be fetched, it can be Wikipedia
page id or Wikidata id.
Returns:
dict, int: A dict containing the concept information; an integer
representing the response code.
"""
url = urljoin(self.concept_service + '/', conceptId)
res, status_code = self.get(url, params={'lang': lang})
if status_code != 200:
logger.debug('Fetch concept failed.')
return self.decode(res), status_code |
def _attr_sort_func(model, iter1, iter2, attribute):
"""Internal helper
"""
attr1 = getattr(model[iter1][0], attribute, None)
attr2 = getattr(model[iter2][0], attribute, None)
return cmp(attr1, attr2) | Internal helper | Below is the instruction that describes the task:
### Input:
Internal helper
### Response:
def _attr_sort_func(model, iter1, iter2, attribute):
"""Internal helper
"""
attr1 = getattr(model[iter1][0], attribute, None)
attr2 = getattr(model[iter2][0], attribute, None)
return cmp(attr1, attr2) |
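The sort helper above relies on the Python 2 builtin `cmp`, which was removed in Python 3. The same three-way comparison can be expressed with the standard `(a > b) - (a < b)` idiom; a standalone sketch (the `Row` class is a stand-in for the GTK model rows the original operates on):

```python
def cmp(a, b):
    """Python 3 replacement for the removed builtin cmp: -1, 0, or 1."""
    return (a > b) - (a < b)

class Row:
    def __init__(self, name):
        self.name = name

def attr_sort(obj1, obj2, attribute):
    """Compare two objects by a named attribute, as in the helper above."""
    attr1 = getattr(obj1, attribute, None)
    attr2 = getattr(obj2, attribute, None)
    return cmp(attr1, attr2)

print(attr_sort(Row("apple"), Row("banana"), "name"))  # -1
print(attr_sort(Row("pear"), Row("pear"), "name"))     # 0
```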
def min(self, key=None):
"""
Find the minimum item in this RDD.
:param key: A function used to generate key for comparing
>>> rdd = sc.parallelize([2.0, 5.0, 43.0, 10.0])
>>> rdd.min()
2.0
>>> rdd.min(key=str)
10.0
"""
if key is None:
return self.reduce(min)
return self.reduce(lambda a, b: min(a, b, key=key)) | Find the minimum item in this RDD.
:param key: A function used to generate key for comparing
>>> rdd = sc.parallelize([2.0, 5.0, 43.0, 10.0])
>>> rdd.min()
2.0
>>> rdd.min(key=str)
10.0 | Below is the instruction that describes the task:
### Input:
Find the minimum item in this RDD.
:param key: A function used to generate key for comparing
>>> rdd = sc.parallelize([2.0, 5.0, 43.0, 10.0])
>>> rdd.min()
2.0
>>> rdd.min(key=str)
10.0
### Response:
def min(self, key=None):
"""
Find the minimum item in this RDD.
:param key: A function used to generate key for comparing
>>> rdd = sc.parallelize([2.0, 5.0, 43.0, 10.0])
>>> rdd.min()
2.0
>>> rdd.min(key=str)
10.0
"""
if key is None:
return self.reduce(min)
return self.reduce(lambda a, b: min(a, b, key=key)) |
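The RDD `min` above is just a distributed reduce over `min`; the `key=str` doctest gives 10.0 because `'10.0'` sorts before `'2.0'` lexicographically. A plain-Python sketch of the same logic over a list (no Spark required):

```python
from functools import reduce

def rdd_min(items, key=None):
    """Plain-Python sketch of the RDD.min reduce logic above."""
    if key is None:
        return reduce(min, items)
    return reduce(lambda a, b: min(a, b, key=key), items)

data = [2.0, 5.0, 43.0, 10.0]
print(rdd_min(data))           # 2.0  (numeric minimum)
print(rdd_min(data, key=str))  # 10.0 ('10.0' sorts before '2.0' as a string)
```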
def delete_state(self, state):
"""Delete a specified state from the LRS
:param state: State document to be deleted
:type state: :class:`tincan.documents.state_document.StateDocument`
:return: LRS Response object
:rtype: :class:`tincan.lrs_response.LRSResponse`
"""
return self._delete_state(
activity=state.activity,
agent=state.agent,
state_id=state.id,
etag=state.etag
) | Delete a specified state from the LRS
:param state: State document to be deleted
:type state: :class:`tincan.documents.state_document.StateDocument`
:return: LRS Response object
:rtype: :class:`tincan.lrs_response.LRSResponse` | Below is the instruction that describes the task:
### Input:
Delete a specified state from the LRS
:param state: State document to be deleted
:type state: :class:`tincan.documents.state_document.StateDocument`
:return: LRS Response object
:rtype: :class:`tincan.lrs_response.LRSResponse`
### Response:
def delete_state(self, state):
"""Delete a specified state from the LRS
:param state: State document to be deleted
:type state: :class:`tincan.documents.state_document.StateDocument`
:return: LRS Response object
:rtype: :class:`tincan.lrs_response.LRSResponse`
"""
return self._delete_state(
activity=state.activity,
agent=state.agent,
state_id=state.id,
etag=state.etag
) |
def runs(self):
"""
Immutable sequence of |_Run| objects corresponding to the runs in
this paragraph.
"""
return tuple(_Run(r, self) for r in self._element.r_lst) | Immutable sequence of |_Run| objects corresponding to the runs in
this paragraph. | Below is the instruction that describes the task:
### Input:
Immutable sequence of |_Run| objects corresponding to the runs in
this paragraph.
### Response:
def runs(self):
"""
Immutable sequence of |_Run| objects corresponding to the runs in
this paragraph.
"""
return tuple(_Run(r, self) for r in self._element.r_lst) |
def _serialize(self, value, attr, obj):
"""
Serialize value as a timestamp, either as a Unix timestamp (in float second) or a UTC isoformat string.
"""
if value is None:
return None
if self.use_isoformat:
return datetime.utcfromtimestamp(value).isoformat()
else:
return value | Serialize value as a timestamp, either as a Unix timestamp (in float second) or a UTC isoformat string. | Below is the instruction that describes the task:
### Input:
Serialize value as a timestamp, either as a Unix timestamp (in float second) or a UTC isoformat string.
### Response:
def _serialize(self, value, attr, obj):
"""
Serialize value as a timestamp, either as a Unix timestamp (in float second) or a UTC isoformat string.
"""
if value is None:
return None
if self.use_isoformat:
return datetime.utcfromtimestamp(value).isoformat()
else:
return value |
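The serializer row above either passes the Unix timestamp through or converts it to a UTC ISO-8601 string via `datetime.utcfromtimestamp`. A standalone sketch of that branch logic (a plain function rather than a marshmallow-style field):

```python
from datetime import datetime

def serialize_timestamp(value, use_isoformat):
    """Sketch of the serializer above: Unix seconds -> float passthrough or UTC ISO string."""
    if value is None:
        return None
    if use_isoformat:
        return datetime.utcfromtimestamp(value).isoformat()
    return value

ts = 0  # the Unix epoch
print(serialize_timestamp(ts, use_isoformat=False))  # 0
print(serialize_timestamp(ts, use_isoformat=True))   # 1970-01-01T00:00:00
```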
def _add_parameterized_validator_internal(param_validator, base_tag):
"with builtin tag prefixing"
add_parameterized_validator(param_validator, base_tag, tag_prefix=u'!~~%s(' % param_validator.__name__) | with builtin tag prefixing | Below is the instruction that describes the task:
### Input:
with builtin tag prefixing
### Response:
def _add_parameterized_validator_internal(param_validator, base_tag):
"with builtin tag prefixing"
add_parameterized_validator(param_validator, base_tag, tag_prefix=u'!~~%s(' % param_validator.__name__) |
def format_datetime(self, value, format_):
"""
Format the datetime using Babel
"""
date_ = make_datetime(value)
return dates.format_datetime(date_, format_, locale=self.lang) | Format the datetime using Babel | Below is the instruction that describes the task:
### Input:
Format the datetime using Babel
### Response:
def format_datetime(self, value, format_):
"""
Format the datetime using Babel
"""
date_ = make_datetime(value)
return dates.format_datetime(date_, format_, locale=self.lang) |