def _wrap_field(field):
    """Improve Flask-RESTful's original field type"""
    class WrappedField(field):
        def output(self, key, obj):
            value = _fields.get_value(key if self.attribute is None else self.attribute, obj)
            # For all fields, when the value is null (None), return null directly
            # instead of the field type's default value (e.g. the int type's default of 0),
            # because sometimes the client **needs** to know whether a field of the
            # model was empty in order to decide its behavior.
            return None if value is None else self.format(value)
    return WrappedField
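The None-passthrough behavior can be sketched with a minimal stand-in field class (no Flask-RESTful dependency; `IntField` and the dict-based `output` lookup are illustrative, not the library's API):

```python
class IntField:
    """Stand-in for a field type whose format() coerces to int."""
    def format(self, value):
        return int(value)

class WrappedIntField(IntField):
    def output(self, key, obj):
        value = obj.get(key)
        # Return None directly instead of the type default (0 for int),
        # so the client can tell an empty field from a zero value.
        return None if value is None else self.format(value)

f = WrappedIntField()
print(f.output('age', {'age': '7'}))   # 7
print(f.output('age', {'age': None}))  # None
```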
def oeip(cn, ns=None, fl=None, fs=None, ot=None, coe=None, moc=None):
    # pylint: disable=too-many-arguments
    """
    This function is a wrapper for
    :meth:`~pywbem.WBEMConnection.OpenEnumerateInstancePaths`.

    Open an enumeration session to enumerate the instance paths of instances of
    a class (including instances of its subclasses) in a namespace.

    Use the :func:`~wbemcli.pip` function to retrieve the next set of
    instance paths or the :func:`~wbemcli.ce` function to close the enumeration
    session before it is complete.

    Parameters:

      cn (:term:`string` or :class:`~pywbem.CIMClassName`):
          Name of the class to be enumerated (case independent).
          If specified as a `CIMClassName` object, its `host` attribute will be
          ignored.

      ns (:term:`string`):
          Name of the CIM namespace to be used (case independent).
          If `None`, defaults to the namespace of the `cn` parameter if
          specified as a `CIMClassName`, or to the default namespace of the
          connection.

      fl (:term:`string`):
          Filter query language to be used for the filter defined in the `fs`
          parameter. The DMTF-defined Filter Query Language
          (see :term:`DSP0212`) is specified as "DMTF:FQL".
          `None` means that no such filtering is performed.

      fs (:term:`string`):
          Filter to apply to objects to be returned. Based on the filter query
          language defined by the `fl` parameter.
          `None` means that no such filtering is performed.

      ot (:class:`~pywbem.Uint32`):
          Operation timeout in seconds. This is the minimum time the WBEM server
          must keep the enumeration session open between requests on that
          session.
          A value of 0 indicates that the server should never time out.
          The server may reject the proposed value.
          `None` will cause the server to use its default timeout.

      coe (:class:`py:bool`):
          Continue on error flag.
          `None` will cause the server to use its default of `False`.

      moc (:class:`~pywbem.Uint32`):
          Maximum number of objects to return for this operation.
          `None` will cause the server to use its default of 0.

    Returns:

      A :func:`~py:collections.namedtuple` object containing the following
      named items:

      * **paths** (list of :class:`~pywbem.CIMInstanceName`):
        The retrieved instance paths.

      * **eos** (:class:`py:bool`):
        `True` if the enumeration session is exhausted after this operation.
        Otherwise `eos` is `False` and the `context` item is the context
        object for the next operation on the enumeration session.

      * **context** (:func:`py:tuple` of server_context, namespace):
        A context object identifying the open enumeration session, including
        its current enumeration state, and the namespace. This object must be
        supplied with the next pull or close operation for this enumeration
        session.
    """
    return CONN.OpenEnumerateInstancePaths(cn, ns,
                                           FilterQueryLanguage=fl,
                                           FilterQuery=fs,
                                           OperationTimeout=ot,
                                           ContinueOnError=coe,
                                           MaxObjectCount=moc)
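The open/pull/until-`eos` session pattern the docstring describes can be sketched with a stand-in result type (the real code uses pywbem and a live `CONN`; the simulated responses below are illustrative):

```python
from collections import namedtuple

# Shape of the value returned by the open and pull operations
Result = namedtuple('Result', ['paths', 'eos', 'context'])

def pull_all(open_result, pull):
    """Accumulate instance paths across an enumeration session until eos is True."""
    paths = list(open_result.paths)
    result = open_result
    while not result.eos:
        # Each pull must be given the context from the previous response
        result = pull(result.context)
        paths.extend(result.paths)
    return paths

# Simulated server: two pulls after the open, then the session is exhausted
responses = iter([Result(['p2'], False, 'ctx2'), Result(['p3'], True, None)])
all_paths = pull_all(Result(['p1'], False, 'ctx1'), lambda ctx: next(responses))
print(all_paths)  # ['p1', 'p2', 'p3']
```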
def new_empty(self, name):
    """Make a new rule with no actions or anything, and return it."""
    if name in self:
        raise KeyError("Already have rule {}".format(name))
    new = Rule(self.engine, name)
    self._cache[name] = new
    self.send(self, rule=new, active=True)
    return new
def texkeys2marc(self, key, value):
    """Populate the ``035`` MARC field."""
    result = []
    values = force_list(value)
    if values:
        value = values[0]
        result.append({
            '9': 'INSPIRETeX',
            'a': value,
        })
    for value in values[1:]:
        result.append({
            '9': 'INSPIRETeX',
            'z': value,
        })
    return result
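The first/rest split — primary TeX key into subfield `a`, earlier variants into `z` — can be shown standalone (the helper name and sample keys are illustrative):

```python
def texkeys_subfields(values):
    # First TeX key goes to subfield 'a', later ones to 'z' (mirrors texkeys2marc)
    result = []
    if values:
        result.append({'9': 'INSPIRETeX', 'a': values[0]})
    for v in values[1:]:
        result.append({'9': 'INSPIRETeX', 'z': v})
    return result

fields = texkeys_subfields(['Smith:2019abc', 'Smith:2019xyz'])
print(fields)
# [{'9': 'INSPIRETeX', 'a': 'Smith:2019abc'}, {'9': 'INSPIRETeX', 'z': 'Smith:2019xyz'}]
```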
def find_nearest(x, x0) -> Tuple[int, Any]:
    """
    This find_nearest function does NOT assume sorted input

    inputs:
    x: array (float, int, datetime, h5py.Dataset) within which to search for x0
    x0: singleton or array of values to search for in x

    outputs:
    idx: index of flattened x nearest to x0 (i.e. works with higher than 1-D arrays also)
    xidx: x[idx]

    Observe how bisect.bisect() gives the incorrect result!

    idea based on:
    http://stackoverflow.com/questions/2566412/find-nearest-value-in-numpy-array
    """
    x = np.asanyarray(x)  # for indexing upon return
    x0 = np.atleast_1d(x0)
    # %%
    if x.size == 0 or x0.size == 0:
        raise ValueError('empty input(s)')
    if x0.ndim not in (0, 1):
        raise ValueError('2-D x0 not handled yet')
    # %%
    ind = np.empty_like(x0, dtype=int)
    # NOTE: not trapping IndexError (all-nan) because returning None can surprise with slice indexing
    for i, xi in enumerate(x0):
        if xi is not None and (isinstance(xi, (datetime.datetime, datetime.date, np.datetime64)) or np.isfinite(xi)):
            ind[i] = np.nanargmin(abs(x - xi))
        else:
            raise ValueError('x0 must NOT be None or NaN to avoid surprising None return value')

    return ind.squeeze()[()], x[ind].squeeze()[()]
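The core of the search — no sorting assumed, just an argmin over absolute distances — can be shown on a small unsorted array:

```python
import numpy as np

x = np.array([5.0, 1.0, 3.5, 2.0])  # unsorted, as find_nearest allows
x0 = 3.0
# Same core operation as find_nearest: index of the smallest |x - x0|
idx = int(np.nanargmin(np.abs(x - x0)))
print(idx, x[idx])  # 2 3.5
```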
def pandas(self, row_attr: str = None, selector: Union[List, Tuple, np.ndarray, slice] = None, columns: List[str] = None) -> pd.DataFrame:
    """
    Create a Pandas DataFrame corresponding to (selected parts of) the Loom file.

    Args:
        row_attr: Name of the row attribute to use for selecting rows to include (or None to omit row data)
        selector: A list, a tuple, a numpy.ndarray or a slice; used to select rows (or None to include all rows)
        columns: A list of column attributes to include, or None to include all

    Returns:
        Pandas DataFrame

    Remarks:
        The method returns a Pandas DataFrame with one column per row of the Loom file (i.e. transposed), which is usually
        what is required for plotting and statistical analysis. By default, all column attributes and no rows are included.
        To include row data, provide a ``row_attr`` and a ``selector``. The selector is matched against values of the given
        row attribute, and matching rows are included.

    Examples:

    .. highlight:: python
    .. code-block:: python

        import loompy
        with loompy.connect("mydata.loom") as ds:
            # Include all column attributes, and rows where attribute "Gene" matches one of the given genes
            df1 = ds.pandas("Gene", ["Actb", "Npy", "Vip", "Pvalb"])
            # Include the top 100 rows and name them after values of the "Gene" attribute
            df2 = ds.pandas("Gene", slice(0, 100))
            # Include the entire dataset, and name the rows after values of the "Accession" attribute
            df3 = ds.pandas("Accession")
    """
    if columns is None:
        columns = [x for x in self.ca.keys()]
    data: Dict[str, np.ndarray] = {}
    for col in columns:
        vals = self.ca[col]
        if vals.ndim >= 2:
            # Multi-dimensional column attribute: one DataFrame column per component
            for i in range(vals.shape[1]):
                data[col + f".{i + 1}"] = vals[:, i]
        else:
            data[col] = self.ca[col]
    if row_attr is not None:  # Pick out some rows (genes)
        if selector is None:  # Actually, pick all the rows
            names = self.ra[row_attr]
            vals = self[:, :]
            for ix, name in enumerate(names):
                data[name] = vals[ix, :]
        else:  # Pick some specific rows
            if type(selector) is slice:  # Based on a slice
                names = self.ra[row_attr][selector]
                vals = self[selector, :]
                for ix, name in enumerate(names):
                    data[name] = vals[ix, :]
            # Based on specific string values
            elif all([type(s) is str for s in selector]):  # type: ignore
                names = self.ra[row_attr][np.in1d(self.ra[row_attr], selector)]
                for name in names:
                    vals = self[self.ra[row_attr] == name, :][0]
                    data[name] = vals
            else:  # Give up
                raise ValueError("Invalid selector")
    return pd.DataFrame(data)
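The string-selector branch matches row-attribute values against the selector with ``np.in1d``; the matching semantics can be shown in isolation (gene names are illustrative):

```python
import numpy as np

genes = np.array(['Actb', 'Npy', 'Gapdh'])   # values of a row attribute
selector = ['Actb', 'Npy', 'Vip']            # 'Vip' is absent from the file
mask = np.in1d(genes, selector)              # which rows match the selector
print(mask.tolist())  # [True, True, False]
```

Note that rows absent from the file are silently dropped rather than raising, which matches the branch above.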
def manipulate(self, stored_instance, component_instance):
    """
    Called by iPOPO right after the instantiation of the component.
    This is the last chance to manipulate the component before the other
    handlers start.

    :param stored_instance: The iPOPO component StoredInstance
    :param component_instance: The component instance
    """
    # Create the logger for this component instance
    self._logger = logging.getLogger(self._name)
    # Inject it
    setattr(component_instance, self._field, self._logger)
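The injection itself is plain attribute assignment; a standalone sketch (the component class, field name, and logger name are illustrative, not iPOPO's API):

```python
import logging

class Component:
    """Stand-in for a freshly instantiated component."""

comp = Component()
field_name = "_logger"                      # hypothetical injected field name
logger = logging.getLogger("sample.component")
setattr(comp, field_name, logger)           # same injection as manipulate() above
comp._logger.debug("handler wired")         # the component can now log immediately
```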
def _get_containers(self):
    """Return available containers."""
    def full_fn(path):
        return os.path.join(self.abs_root, path)
    return [self.cont_cls.from_path(self, d)
            for d in os.listdir(self.abs_root) if is_dir(full_fn(d))]
def parse_section(self, section_options):
    """Parses configuration file section.

    :param dict section_options:
    """
    for (name, (_, value)) in section_options.items():
        try:
            self[name] = value
        except KeyError:
            pass
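The loop's tuple unpacking assumes each option maps to a ``(source, value)`` pair, setuptools-style; a standalone sketch with illustrative option names:

```python
section_options = {
    'name': ('setup.cfg', 'mypkg'),     # option -> (where it was defined, value)
    'version': ('setup.cfg', '1.0'),
}
parsed = {}
for name, (_, value) in section_options.items():
    parsed[name] = value                # the source filename is discarded
print(parsed)  # {'name': 'mypkg', 'version': '1.0'}
```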
def get_stopbits():
    """
    Returns supported stop bit lengths as a Django-like choices tuple.
    """
    stopbits = []
    s = pyserial.Serial()
    for name, value in s.getSupportedStopbits():
        stopbits.append((value, name,))
    return tuple(stopbits)
def _ext_link_shadow(self):
    """Replace the invalid chars of SPAN_PARSER_TYPES with b'_'.

    For comments, all characters are replaced, but for ('Template',
    'ParserFunction', 'Parameter') only invalid characters are replaced.
    """
    ss, se = self._span
    string = self._lststr[0][ss:se]
    byte_array = bytearray(string, 'ascii', 'replace')
    subspans = self._subspans
    for type_ in 'Template', 'ParserFunction', 'Parameter':
        for s, e in subspans(type_):
            byte_array[s:e] = b' ' + INVALID_EXT_CHARS_SUB(
                b' ', byte_array[s + 2:e - 2]) + b' '
    for s, e in subspans('Comment'):
        byte_array[s:e] = (e - s) * b'_'
    return byte_array
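The comment branch overwrites a span in place while preserving its length, so later offsets stay valid; a minimal standalone demonstration (the input string and span are illustrative):

```python
text = "a<!--x-->b"
ba = bytearray(text, 'ascii', 'replace')
s, e = 1, 9                      # span of the comment within the text
ba[s:e] = (e - s) * b'_'         # same in-place masking as the Comment branch
print(ba.decode())  # a________b
```

Because the replacement has exactly `e - s` bytes, the bytearray's length (and every other span's offsets) is unchanged.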
def remove_variable(self, name):
    """Remove a variable from the problem."""
    index = self._get_var_index(name)
    # Remove from matrix
    self._A = np.delete(self.A, index, 1)
    # Remove from bounds
    del self.bounds[name]
    # Remove from var list
    del self._variables[name]
    self._update_variable_indices()
    self._reset_solution()
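Dropping a variable means deleting its column from the constraint matrix; the ``np.delete(..., axis=1)`` call can be shown on a small matrix:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
A2 = np.delete(A, 1, 1)          # drop column index 1 (axis=1), as in remove_variable
print(A2.tolist())  # [[0, 2], [3, 5]]
```

``np.delete`` returns a new array rather than mutating ``A``, which is why the method reassigns ``self._A``.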
def create(self, body=None, raise_exc=True, headers=None, **kwargs):
    '''Performs an HTTP POST to the server, to create a
    subordinate resource. Returns a new HALNavigator representing
    that resource.

    `body` may either be a string or a dictionary representing json
    `headers` are additional headers to send in the request
    '''
    return self._request(POST, body, raise_exc, headers, **kwargs)
def eval_valid(self, feval=None):
    """Evaluate for validation data.

    Parameters
    ----------
    feval : callable or None, optional (default=None)
        Customized evaluation function.
        Should accept two parameters: preds, train_data,
        and return (eval_name, eval_result, is_higher_better) or a list of such tuples.
        For multi-class task, the preds are grouped by class_id first, then by row_id.
        If you want to get the i-th row preds in the j-th class, the access way is preds[j * num_data + i].

    Returns
    -------
    result : list
        List with evaluation results.
    """
    return [item for i in range_(1, self.__num_dataset)
            for item in self.__inner_eval(self.name_valid_sets[i - 1], i, feval)]
Parameters
----------
feval : callable or None, optional (default=None)
Customized evaluation function.
Should accept two parameters: preds, train_data,
and return (eval_name, eval_result, is_higher_better) or list of such tuples.
For multi-class task, the preds are grouped by class_id first, then grouped by row_id.
If you want to get the i-th row preds in the j-th class, access them via preds[j * num_data + i].
Returns
-------
result : list
List with evaluation results. | Below is the instruction that describes the task:
### Input:
Evaluate for validation data.
Parameters
----------
feval : callable or None, optional (default=None)
Customized evaluation function.
Should accept two parameters: preds, train_data,
and return (eval_name, eval_result, is_higher_better) or list of such tuples.
For multi-class task, the preds are grouped by class_id first, then grouped by row_id.
If you want to get the i-th row preds in the j-th class, access them via preds[j * num_data + i].
Returns
-------
result : list
List with evaluation results.
### Response:
def eval_valid(self, feval=None):
"""Evaluate for validation data.
Parameters
----------
feval : callable or None, optional (default=None)
Customized evaluation function.
Should accept two parameters: preds, train_data,
and return (eval_name, eval_result, is_higher_better) or list of such tuples.
For multi-class task, the preds are grouped by class_id first, then grouped by row_id.
If you want to get the i-th row preds in the j-th class, access them via preds[j * num_data + i].
Returns
-------
result : list
List with evaluation results.
"""
return [item for i in range_(1, self.__num_dataset)
for item in self.__inner_eval(self.name_valid_sets[i - 1], i, feval)] |
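The nested list comprehension in `eval_valid` flattens per-validation-set results into one list. A toy sketch of that aggregation follows; the `_inner_eval` stub and its fixed return value are invented for illustration and are not LightGBM's implementation.

```python
def _inner_eval(name, idx):
    # A real booster would compute metrics here; return a fixed tuple instead.
    return [(name, "l2", float(idx), False)]

name_valid_sets = ["valid_0", "valid_1"]
num_dataset = len(name_valid_sets) + 1  # the training set counts as dataset 0

# Flatten the per-set result lists into one list, as eval_valid does.
results = [item for i in range(1, num_dataset)
           for item in _inner_eval(name_valid_sets[i - 1], i)]
```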
def cmd_lockup_autopilot(self, args):
'''lockup autopilot for watchdog testing'''
if len(args) > 0 and args[0] == 'IREALLYMEANIT':
print("Sending lockup command")
self.master.mav.command_long_send(self.settings.target_system, self.settings.target_component,
mavutil.mavlink.MAV_CMD_PREFLIGHT_REBOOT_SHUTDOWN, 0,
42, 24, 71, 93, 0, 0, 0)
else:
        print("Invalid lockup command") | lockup autopilot for watchdog testing | Below is the instruction that describes the task:
### Input:
lockup autopilot for watchdog testing
### Response:
def cmd_lockup_autopilot(self, args):
'''lockup autopilot for watchdog testing'''
if len(args) > 0 and args[0] == 'IREALLYMEANIT':
print("Sending lockup command")
self.master.mav.command_long_send(self.settings.target_system, self.settings.target_component,
mavutil.mavlink.MAV_CMD_PREFLIGHT_REBOOT_SHUTDOWN, 0,
42, 24, 71, 93, 0, 0, 0)
else:
print("Invalid lockup command") |
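The `IREALLYMEANIT` check above is a guard-phrase pattern for destructive commands. A dependency-free restatement of just that check (the helper name is ours):

```python
def requires_confirmation(args, phrase="IREALLYMEANIT"):
    # The destructive command only fires when the exact guard phrase is given
    # as the first argument; anything else is rejected.
    return len(args) > 0 and args[0] == phrase
```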
def _parse_for_element_meta_data(self, meta_data):
"""Load meta data for state elements
The meta data of the state meta data file also contains the meta data for state elements (data ports,
outcomes, etc). This method parses the loaded meta data for each state element model. The meta data of the
elements is removed from the passed dictionary.
:param meta_data: Dictionary of loaded meta data
"""
# print("_parse meta data", meta_data)
for data_port_m in self.input_data_ports:
self._copy_element_meta_data_from_meta_file_data(meta_data, data_port_m, "input_data_port",
data_port_m.data_port.data_port_id)
for data_port_m in self.output_data_ports:
self._copy_element_meta_data_from_meta_file_data(meta_data, data_port_m, "output_data_port",
data_port_m.data_port.data_port_id)
for outcome_m in self.outcomes:
self._copy_element_meta_data_from_meta_file_data(meta_data, outcome_m, "outcome",
outcome_m.outcome.outcome_id)
if "income" in meta_data:
if "gui" in meta_data and "editor_gaphas" in meta_data["gui"] and \
"income" in meta_data["gui"]["editor_gaphas"]: # chain necessary to prevent key generation
del meta_data["gui"]["editor_gaphas"]["income"]
elif "gui" in meta_data and "editor_gaphas" in meta_data["gui"] and \
"income" in meta_data["gui"]["editor_gaphas"]: # chain necessary to prevent key generation in meta data
meta_data["income"]["gui"]["editor_gaphas"] = meta_data["gui"]["editor_gaphas"]["income"]
del meta_data["gui"]["editor_gaphas"]["income"]
self._copy_element_meta_data_from_meta_file_data(meta_data, self.income, "income", "") | Load meta data for state elements
The meta data of the state meta data file also contains the meta data for state elements (data ports,
outcomes, etc). This method parses the loaded meta data for each state element model. The meta data of the
elements is removed from the passed dictionary.
:param meta_data: Dictionary of loaded meta data | Below is the instruction that describes the task:
### Input:
Load meta data for state elements
The meta data of the state meta data file also contains the meta data for state elements (data ports,
outcomes, etc). This method parses the loaded meta data for each state element model. The meta data of the
elements is removed from the passed dictionary.
:param meta_data: Dictionary of loaded meta data
### Response:
def _parse_for_element_meta_data(self, meta_data):
"""Load meta data for state elements
The meta data of the state meta data file also contains the meta data for state elements (data ports,
outcomes, etc). This method parses the loaded meta data for each state element model. The meta data of the
elements is removed from the passed dictionary.
:param meta_data: Dictionary of loaded meta data
"""
# print("_parse meta data", meta_data)
for data_port_m in self.input_data_ports:
self._copy_element_meta_data_from_meta_file_data(meta_data, data_port_m, "input_data_port",
data_port_m.data_port.data_port_id)
for data_port_m in self.output_data_ports:
self._copy_element_meta_data_from_meta_file_data(meta_data, data_port_m, "output_data_port",
data_port_m.data_port.data_port_id)
for outcome_m in self.outcomes:
self._copy_element_meta_data_from_meta_file_data(meta_data, outcome_m, "outcome",
outcome_m.outcome.outcome_id)
if "income" in meta_data:
if "gui" in meta_data and "editor_gaphas" in meta_data["gui"] and \
"income" in meta_data["gui"]["editor_gaphas"]: # chain necessary to prevent key generation
del meta_data["gui"]["editor_gaphas"]["income"]
elif "gui" in meta_data and "editor_gaphas" in meta_data["gui"] and \
"income" in meta_data["gui"]["editor_gaphas"]: # chain necessary to prevent key generation in meta data
meta_data["income"]["gui"]["editor_gaphas"] = meta_data["gui"]["editor_gaphas"]["income"]
del meta_data["gui"]["editor_gaphas"]["income"]
self._copy_element_meta_data_from_meta_file_data(meta_data, self.income, "income", "") |
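The chained `in` checks in the entry above exist to move a nested "income" entry without accidentally creating intermediate keys. The same pattern on a toy dict (keys and values here are illustrative):

```python
meta = {"gui": {"editor_gaphas": {"income": {"rel_pos": (1, 2)}, "size": (3, 4)}}}

# Chained membership tests avoid KeyError and avoid creating missing keys.
if "gui" in meta and "editor_gaphas" in meta["gui"] and \
        "income" in meta["gui"]["editor_gaphas"]:
    income_meta = meta["gui"]["editor_gaphas"].pop("income")
else:
    income_meta = {}
```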
def set_fog_density(self, density):
"""Queue up a change fog density command. It will be applied when `tick` or `step` is called next.
By the next tick, the exponential height fog in the world will have the new density. If there is no fog in the
world, it will be automatically created with the given density.
Args:
density (float): The new density value, between 0 and 1. The command will not be sent if the given
density is invalid.
"""
if density < 0 or density > 1:
raise HolodeckException("Fog density should be between 0 and 1")
self._should_write_to_command_buffer = True
command_to_send = ChangeFogDensityCommand(density)
self._commands.add_command(command_to_send) | Queue up a change fog density command. It will be applied when `tick` or `step` is called next.
By the next tick, the exponential height fog in the world will have the new density. If there is no fog in the
world, it will be automatically created with the given density.
Args:
density (float): The new density value, between 0 and 1. The command will not be sent if the given
density is invalid. | Below is the instruction that describes the task:
### Input:
Queue up a change fog density command. It will be applied when `tick` or `step` is called next.
By the next tick, the exponential height fog in the world will have the new density. If there is no fog in the
world, it will be automatically created with the given density.
Args:
density (float): The new density value, between 0 and 1. The command will not be sent if the given
density is invalid.
### Response:
def set_fog_density(self, density):
"""Queue up a change fog density command. It will be applied when `tick` or `step` is called next.
By the next tick, the exponential height fog in the world will have the new density. If there is no fog in the
world, it will be automatically created with the given density.
Args:
density (float): The new density value, between 0 and 1. The command will not be sent if the given
density is invalid.
"""
if density < 0 or density > 1:
raise HolodeckException("Fog density should be between 0 and 1")
self._should_write_to_command_buffer = True
command_to_send = ChangeFogDensityCommand(density)
self._commands.add_command(command_to_send) |
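The validate-then-queue flow above can be sketched with a stand-in queue; `CommandQueue` and the tuple command format below are illustrative, not Holodeck's actual API.

```python
class CommandQueue:
    def __init__(self):
        self.commands = []

    def add_command(self, cmd):
        self.commands.append(cmd)

def set_fog_density(queue, density):
    # Mirror the 0..1 range check from the entry above: reject out-of-range
    # values before anything reaches the command buffer.
    if density < 0 or density > 1:
        raise ValueError("Fog density should be between 0 and 1")
    queue.add_command(("ChangeFogDensity", density))

queue = CommandQueue()
set_fog_density(queue, 0.5)
```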
def do_startInstance(self,args):
"""Start specified instance"""
parser = CommandArgumentParser("startInstance")
parser.add_argument(dest='instance',help='instance index or name');
args = vars(parser.parse_args(args))
instanceId = args['instance']
try:
        index = int(instanceId)
        instances = self.scalingGroupDescription['AutoScalingGroups'][0]['Instances']
        instanceId = instances[index]['InstanceId']
    except ValueError:
        pass
    client = AwsConnectionFactory.getEc2Client()
    client.start_instances(InstanceIds=[instanceId]) | Start specified instance | Below is the instruction that describes the task:
### Input:
Start specified instance
### Response:
def do_startInstance(self,args):
"""Start specified instance"""
parser = CommandArgumentParser("startInstance")
parser.add_argument(dest='instance',help='instance index or name');
args = vars(parser.parse_args(args))
instanceId = args['instance']
try:
        index = int(instanceId)
        instances = self.scalingGroupDescription['AutoScalingGroups'][0]['Instances']
        instanceId = instances[index]['InstanceId']
    except ValueError:
        pass
    client = AwsConnectionFactory.getEc2Client()
    client.start_instances(InstanceIds=[instanceId])
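The try/`int()` conversion above resolves an argument that may be either a list index or a literal instance ID. Isolated as a helper (the function name is ours, and the record shape mirrors the EC2 response above):

```python
def resolve_instance_id(arg, instances):
    """Accept a numeric index into `instances` or pass a literal ID through."""
    try:
        # A numeric argument selects an instance record by position.
        return instances[int(arg)]["InstanceId"]
    except ValueError:
        # Non-numeric arguments are treated as instance IDs themselves.
        return arg
```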
def centroid(X):
"""
Calculate the centroid from a matrix X
"""
C = np.sum(X, axis=0) / len(X)
    return C | Calculate the centroid from a matrix X | Below is the instruction that describes the task:
### Input:
Calculate the centroid from a matrix X
### Response:
def centroid(X):
"""
Calculate the centroid from a matrix X
"""
C = np.sum(X, axis=0) / len(X)
return C |
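A dependency-free restatement of the same column-wise mean (the entry above uses NumPy's `np.sum(X, axis=0) / len(X)`; this sketch uses plain lists to keep it self-contained):

```python
def centroid(points):
    # Sum each coordinate column, then divide by the number of rows.
    n = len(points)
    return [sum(col) / n for col in zip(*points)]

C = centroid([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
```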
def agent_texts_with_grounding(stmts):
"""Return agent text groundings in a list of statements with their counts
Parameters
----------
stmts: list of :py:class:`indra.statements.Statement`
Returns
-------
list of tuple
List of tuples of the form
(text: str, ((name_space: str, ID: str, count: int)...),
total_count: int)
Where the counts within the tuple of groundings give the number of
times an agent with the given agent_text appears grounded with the
particular name space and ID. The total_count gives the total number
of times an agent with text appears in the list of statements.
"""
allag = all_agents(stmts)
# Convert PFAM-DEF lists into tuples so that they are hashable and can
# be tabulated with a Counter
for ag in allag:
pfam_def = ag.db_refs.get('PFAM-DEF')
if pfam_def is not None:
ag.db_refs['PFAM-DEF'] = tuple(pfam_def)
refs = [tuple(ag.db_refs.items()) for ag in allag]
refs_counter = Counter(refs)
refs_counter_dict = [(dict(entry[0]), entry[1])
for entry in refs_counter.items()]
# First, sort by text so that we can do a groupby
refs_counter_dict.sort(key=lambda x: x[0].get('TEXT'))
# Then group by text
grouped_by_text = []
for k, g in groupby(refs_counter_dict, key=lambda x: x[0].get('TEXT')):
# Total occurrences of this agent text
total = 0
entry = [k]
db_ref_list = []
for db_refs, count in g:
# Check if TEXT is our only key, indicating no grounding
if list(db_refs.keys()) == ['TEXT']:
db_ref_list.append((None, None, count))
# Add any other db_refs (not TEXT)
for db, db_id in db_refs.items():
if db == 'TEXT':
continue
else:
db_ref_list.append((db, db_id, count))
total += count
# Sort the db_ref_list by the occurrences of each grounding
entry.append(tuple(sorted(db_ref_list, key=lambda x: x[2],
reverse=True)))
# Now add the total frequency to the entry
entry.append(total)
# And add the entry to the overall list
grouped_by_text.append(tuple(entry))
# Sort the list by the total number of occurrences of each unique key
grouped_by_text.sort(key=lambda x: x[2], reverse=True)
return grouped_by_text | Return agent text groundings in a list of statements with their counts
Parameters
----------
stmts: list of :py:class:`indra.statements.Statement`
Returns
-------
list of tuple
List of tuples of the form
(text: str, ((name_space: str, ID: str, count: int)...),
total_count: int)
Where the counts within the tuple of groundings give the number of
times an agent with the given agent_text appears grounded with the
particular name space and ID. The total_count gives the total number
of times an agent with text appears in the list of statements. | Below is the instruction that describes the task:
### Input:
Return agent text groundings in a list of statements with their counts
Parameters
----------
stmts: list of :py:class:`indra.statements.Statement`
Returns
-------
list of tuple
List of tuples of the form
(text: str, ((name_space: str, ID: str, count: int)...),
total_count: int)
Where the counts within the tuple of groundings give the number of
times an agent with the given agent_text appears grounded with the
particular name space and ID. The total_count gives the total number
of times an agent with text appears in the list of statements.
### Response:
def agent_texts_with_grounding(stmts):
"""Return agent text groundings in a list of statements with their counts
Parameters
----------
stmts: list of :py:class:`indra.statements.Statement`
Returns
-------
list of tuple
List of tuples of the form
(text: str, ((name_space: str, ID: str, count: int)...),
total_count: int)
Where the counts within the tuple of groundings give the number of
times an agent with the given agent_text appears grounded with the
particular name space and ID. The total_count gives the total number
of times an agent with text appears in the list of statements.
"""
allag = all_agents(stmts)
# Convert PFAM-DEF lists into tuples so that they are hashable and can
# be tabulated with a Counter
for ag in allag:
pfam_def = ag.db_refs.get('PFAM-DEF')
if pfam_def is not None:
ag.db_refs['PFAM-DEF'] = tuple(pfam_def)
refs = [tuple(ag.db_refs.items()) for ag in allag]
refs_counter = Counter(refs)
refs_counter_dict = [(dict(entry[0]), entry[1])
for entry in refs_counter.items()]
# First, sort by text so that we can do a groupby
refs_counter_dict.sort(key=lambda x: x[0].get('TEXT'))
# Then group by text
grouped_by_text = []
for k, g in groupby(refs_counter_dict, key=lambda x: x[0].get('TEXT')):
# Total occurrences of this agent text
total = 0
entry = [k]
db_ref_list = []
for db_refs, count in g:
# Check if TEXT is our only key, indicating no grounding
if list(db_refs.keys()) == ['TEXT']:
db_ref_list.append((None, None, count))
# Add any other db_refs (not TEXT)
for db, db_id in db_refs.items():
if db == 'TEXT':
continue
else:
db_ref_list.append((db, db_id, count))
total += count
# Sort the db_ref_list by the occurrences of each grounding
entry.append(tuple(sorted(db_ref_list, key=lambda x: x[2],
reverse=True)))
# Now add the total frequency to the entry
entry.append(total)
# And add the entry to the overall list
grouped_by_text.append(tuple(entry))
# Sort the list by the total number of occurrences of each unique key
grouped_by_text.sort(key=lambda x: x[2], reverse=True)
return grouped_by_text |
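The tabulation above hinges on three steps: hash each `db_refs` dict as a sorted tuple of items, count with `Counter`, then group by agent text. A toy version of that pipeline (the agent texts and database IDs are invented):

```python
from collections import Counter
from itertools import groupby

# Toy grounding records standing in for agent db_refs dicts.
records = [{"TEXT": "ERK", "HGNC": "6871"},
           {"TEXT": "ERK", "HGNC": "6871"},
           {"TEXT": "ERK"},
           {"TEXT": "MEK", "HGNC": "6840"}]

# Hash each record as a sorted tuple of items so Counter can tabulate it.
counts = Counter(tuple(sorted(r.items())) for r in records)

# groupby only merges adjacent keys, so sort by TEXT first.
pairs = sorted(((dict(k), v) for k, v in counts.items()),
               key=lambda kv: kv[0]["TEXT"])
grouped = {}
for text, group in groupby(pairs, key=lambda kv: kv[0]["TEXT"]):
    grouped[text] = sum(v for _, v in group)
```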
def _add_potential(self, potential, parent_tag):
"""
Adds Potentials to the ProbModelXML.
Parameters
----------
potential: dict
Dictionary containing Potential data.
For example: {'role': 'Utility',
'Variables': ['D0', 'D1', 'C0', 'C1'],
'type': 'Tree/ADD',
'UtilityVariable': 'U1'}
parent_tag: etree Element
etree element which would contain potential tag
For example: <Element Potentials at 0x7f315fc44b08>
<Element Branch at 0x7f315fc44c88>
<Element Branch at 0x7f315fc44d88>
<Element Subpotentials at 0x7f315fc44e48>
Examples
--------
>>> writer = ProbModelXMLWriter(model)
>>> writer._add_potential(potential, parent_tag)
"""
potential_type = potential['type']
try:
potential_tag = etree.SubElement(parent_tag, 'Potential', attrib={
'type': potential['type'], 'role': potential['role']})
except KeyError:
potential_tag = etree.SubElement(parent_tag, 'Potential', attrib={
'type': potential['type']})
self._add_element(potential, 'Comment', potential_tag)
if 'AdditionalProperties' in potential:
self._add_additional_properties(potential_tag, potential['AdditionalProperties'])
if potential_type == "delta":
etree.SubElement(potential_tag, 'Variable', attrib={'name': potential['Variable']})
self._add_element(potential, 'State', potential_tag)
self._add_element(potential, 'StateIndex', potential_tag)
self._add_element(potential, 'NumericValue', potential_tag)
else:
if 'UtilityVariable' in potential:
etree.SubElement(potential_tag, 'UtilityVariable', attrib={
'name': potential['UtilityVariable']})
if 'Variables' in potential:
variable_tag = etree.SubElement(potential_tag, 'Variables')
for var in sorted(potential['Variables']):
etree.SubElement(variable_tag, 'Variable', attrib={'name': var})
for child in sorted(potential['Variables'][var]):
etree.SubElement(variable_tag, 'Variable', attrib={'name': child})
self._add_element(potential, 'Values', potential_tag)
if 'UncertainValues' in potential:
value_tag = etree.SubElement(potential_tag, 'UncertainValues', attrib={})
for value in sorted(potential['UncertainValues']):
try:
etree.SubElement(value_tag, 'Value', attrib={
'distribution': value['distribution'],
'name': value['name']}).text = value['value']
except KeyError:
etree.SubElement(value_tag, 'Value', attrib={
'distribution': value['distribution']}).text = value['value']
if 'TopVariable' in potential:
etree.SubElement(potential_tag, 'TopVariable', attrib={'name': potential['TopVariable']})
if 'Branches' in potential:
branches_tag = etree.SubElement(potential_tag, 'Branches')
for branch in potential['Branches']:
branch_tag = etree.SubElement(branches_tag, 'Branch')
if 'States' in branch:
states_tag = etree.SubElement(branch_tag, 'States')
for state in sorted(branch['States']):
etree.SubElement(states_tag, 'State', attrib={'name': state['name']})
if 'Potential' in branch:
self._add_potential(branch['Potential'], branch_tag)
self._add_element(potential, 'Label', potential_tag)
self._add_element(potential, 'Reference', potential_tag)
if 'Thresholds' in branch:
thresholds_tag = etree.SubElement(branch_tag, 'Thresholds')
for threshold in branch['Thresholds']:
try:
etree.SubElement(thresholds_tag, 'Threshold', attrib={
'value': threshold['value'], 'belongsTo': threshold['belongsTo']})
except KeyError:
etree.SubElement(thresholds_tag, 'Threshold', attrib={
'value': threshold['value']})
self._add_element(potential, 'Model', potential_tag)
self._add_element(potential, 'Coefficients', potential_tag)
self._add_element(potential, 'CovarianceMatrix', potential_tag)
if 'Subpotentials' in potential:
subpotentials = etree.SubElement(potential_tag, 'Subpotentials')
for subpotential in potential['Subpotentials']:
self._add_potential(subpotential, subpotentials)
if 'Potential' in potential:
self._add_potential(potential['Potential'], potential_tag)
if 'NumericVariables' in potential:
numvar_tag = etree.SubElement(potential_tag, 'NumericVariables')
for var in sorted(potential['NumericVariables']):
etree.SubElement(numvar_tag, 'Variable', attrib={'name': var}) | Adds Potentials to the ProbModelXML.
Parameters
----------
potential: dict
Dictionary containing Potential data.
For example: {'role': 'Utility',
'Variables': ['D0', 'D1', 'C0', 'C1'],
'type': 'Tree/ADD',
'UtilityVariable': 'U1'}
parent_tag: etree Element
etree element which would contain potential tag
For example: <Element Potentials at 0x7f315fc44b08>
<Element Branch at 0x7f315fc44c88>
<Element Branch at 0x7f315fc44d88>
<Element Subpotentials at 0x7f315fc44e48>
Examples
--------
>>> writer = ProbModelXMLWriter(model)
>>> writer._add_potential(potential, parent_tag) | Below is the instruction that describes the task:
### Input:
Adds Potentials to the ProbModelXML.
Parameters
----------
potential: dict
Dictionary containing Potential data.
For example: {'role': 'Utility',
'Variables': ['D0', 'D1', 'C0', 'C1'],
'type': 'Tree/ADD',
'UtilityVariable': 'U1'}
parent_tag: etree Element
etree element which would contain potential tag
For example: <Element Potentials at 0x7f315fc44b08>
<Element Branch at 0x7f315fc44c88>
<Element Branch at 0x7f315fc44d88>
<Element Subpotentials at 0x7f315fc44e48>
Examples
--------
>>> writer = ProbModelXMLWriter(model)
>>> writer._add_potential(potential, parent_tag)
### Response:
def _add_potential(self, potential, parent_tag):
"""
Adds Potentials to the ProbModelXML.
Parameters
----------
potential: dict
Dictionary containing Potential data.
For example: {'role': 'Utility',
'Variables': ['D0', 'D1', 'C0', 'C1'],
'type': 'Tree/ADD',
'UtilityVariable': 'U1'}
parent_tag: etree Element
etree element which would contain potential tag
For example: <Element Potentials at 0x7f315fc44b08>
<Element Branch at 0x7f315fc44c88>
<Element Branch at 0x7f315fc44d88>
<Element Subpotentials at 0x7f315fc44e48>
Examples
--------
>>> writer = ProbModelXMLWriter(model)
>>> writer._add_potential(potential, parent_tag)
"""
potential_type = potential['type']
try:
potential_tag = etree.SubElement(parent_tag, 'Potential', attrib={
'type': potential['type'], 'role': potential['role']})
except KeyError:
potential_tag = etree.SubElement(parent_tag, 'Potential', attrib={
'type': potential['type']})
self._add_element(potential, 'Comment', potential_tag)
if 'AdditionalProperties' in potential:
self._add_additional_properties(potential_tag, potential['AdditionalProperties'])
if potential_type == "delta":
etree.SubElement(potential_tag, 'Variable', attrib={'name': potential['Variable']})
self._add_element(potential, 'State', potential_tag)
self._add_element(potential, 'StateIndex', potential_tag)
self._add_element(potential, 'NumericValue', potential_tag)
else:
if 'UtilityVariable' in potential:
etree.SubElement(potential_tag, 'UtilityVariable', attrib={
'name': potential['UtilityVariable']})
if 'Variables' in potential:
variable_tag = etree.SubElement(potential_tag, 'Variables')
for var in sorted(potential['Variables']):
etree.SubElement(variable_tag, 'Variable', attrib={'name': var})
for child in sorted(potential['Variables'][var]):
etree.SubElement(variable_tag, 'Variable', attrib={'name': child})
self._add_element(potential, 'Values', potential_tag)
if 'UncertainValues' in potential:
value_tag = etree.SubElement(potential_tag, 'UncertainValues', attrib={})
for value in sorted(potential['UncertainValues']):
try:
etree.SubElement(value_tag, 'Value', attrib={
'distribution': value['distribution'],
'name': value['name']}).text = value['value']
except KeyError:
etree.SubElement(value_tag, 'Value', attrib={
'distribution': value['distribution']}).text = value['value']
if 'TopVariable' in potential:
etree.SubElement(potential_tag, 'TopVariable', attrib={'name': potential['TopVariable']})
if 'Branches' in potential:
branches_tag = etree.SubElement(potential_tag, 'Branches')
for branch in potential['Branches']:
branch_tag = etree.SubElement(branches_tag, 'Branch')
if 'States' in branch:
states_tag = etree.SubElement(branch_tag, 'States')
for state in sorted(branch['States']):
etree.SubElement(states_tag, 'State', attrib={'name': state['name']})
if 'Potential' in branch:
self._add_potential(branch['Potential'], branch_tag)
self._add_element(potential, 'Label', potential_tag)
self._add_element(potential, 'Reference', potential_tag)
if 'Thresholds' in branch:
thresholds_tag = etree.SubElement(branch_tag, 'Thresholds')
for threshold in branch['Thresholds']:
try:
etree.SubElement(thresholds_tag, 'Threshold', attrib={
'value': threshold['value'], 'belongsTo': threshold['belongsTo']})
except KeyError:
etree.SubElement(thresholds_tag, 'Threshold', attrib={
'value': threshold['value']})
self._add_element(potential, 'Model', potential_tag)
self._add_element(potential, 'Coefficients', potential_tag)
self._add_element(potential, 'CovarianceMatrix', potential_tag)
if 'Subpotentials' in potential:
subpotentials = etree.SubElement(potential_tag, 'Subpotentials')
for subpotential in potential['Subpotentials']:
self._add_potential(subpotential, subpotentials)
if 'Potential' in potential:
self._add_potential(potential['Potential'], potential_tag)
if 'NumericVariables' in potential:
numvar_tag = etree.SubElement(potential_tag, 'NumericVariables')
for var in sorted(potential['NumericVariables']):
etree.SubElement(numvar_tag, 'Variable', attrib={'name': var}) |
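The writer above is built almost entirely from `etree.SubElement` calls. A minimal standalone sketch of that pattern (element names follow the entry; the attribute values are invented):

```python
import xml.etree.ElementTree as etree

# Build a tiny Potentials tree the same way the writer does.
root = etree.Element("Potentials")
pot = etree.SubElement(root, "Potential",
                      attrib={"type": "Table", "role": "conditionalProbability"})
variables = etree.SubElement(pot, "Variables")
for name in ["A", "B"]:
    etree.SubElement(variables, "Variable", attrib={"name": name})

xml_text = etree.tostring(root, encoding="unicode")
```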
def _style_sheet_changed(self):
""" Set the style sheets of the underlying widgets.
"""
self.setStyleSheet(self.style_sheet)
if self._control is not None:
self._control.document().setDefaultStyleSheet(self.style_sheet)
bg_color = self._control.palette().window().color()
self._ansi_processor.set_background_color(bg_color)
if self._page_control is not None:
        self._page_control.document().setDefaultStyleSheet(self.style_sheet) | Set the style sheets of the underlying widgets. | Below is the instruction that describes the task:
### Input:
Set the style sheets of the underlying widgets.
### Response:
def _style_sheet_changed(self):
""" Set the style sheets of the underlying widgets.
"""
self.setStyleSheet(self.style_sheet)
if self._control is not None:
self._control.document().setDefaultStyleSheet(self.style_sheet)
bg_color = self._control.palette().window().color()
self._ansi_processor.set_background_color(bg_color)
if self._page_control is not None:
self._page_control.document().setDefaultStyleSheet(self.style_sheet) |
def set_patch(self, pre_release_tag=''):
"""
Increment the patch number of project
:var release_tag describes the tag ('a', 'b', 'rc', ...)
:var release_tag_version describes the number behind the 'a', 'b' or 'rc'
For e.g.:
"""
current_version = self.get_version()
current_patch = self.get_patch_version(current_version)
current_pre_release_tag = self.get_current_pre_release_tag(current_patch)
current_RELEASE_SEPARATOR = self.get_current_RELEASE_SEPARATOR(current_patch)
new_patch = ''
# The new patch should get a release tag
if pre_release_tag:
# Check, if the current patch already contains a pre_release_tag.
if current_pre_release_tag:
new_patch = str(current_patch.split(current_pre_release_tag, 2)[0]) + pre_release_tag
if pre_release_tag == current_pre_release_tag:
new_patch += str(int(current_patch.split(current_pre_release_tag, 2)[1])+1)
else:
new_patch += '0'
        # The current patch does not contain a pre_release_tag.
else:
new_patch = str(int(current_patch)+1) + \
APISettings.RELEASE_SEPARATOR + \
pre_release_tag + \
'0'
# The new patch should not contain any tag. So just increase it.
else:
if current_RELEASE_SEPARATOR:
new_patch = str(int(current_patch.split(current_RELEASE_SEPARATOR, 2)[0])+1)
elif current_pre_release_tag:
new_patch = str(int(current_patch.split(current_pre_release_tag, 2)[0])+1)
else:
new_patch = str(int(current_patch)+1)
new_version = str(int(current_version.split('.', 5)[0])) + '.' + \
str(int(current_version.split('.', 5)[1])) + '.' + \
str(new_patch)
self.set_version(current_version, new_version) | Increment the patch number of project
:var release_tag describes the tag ('a', 'b', 'rc', ...)
:var release_tag_version describes the number behind the 'a', 'b' or 'rc'
For e.g.: | Below is the instruction that describes the task:
### Input:
Increment the patch number of project
:var release_tag describes the tag ('a', 'b', 'rc', ...)
:var release_tag_version describes the number behind the 'a', 'b' or 'rc'
For e.g.:
### Response:
def set_patch(self, pre_release_tag=''):
"""
Increment the patch number of project
:var release_tag describes the tag ('a', 'b', 'rc', ...)
:var release_tag_version describes the number behind the 'a', 'b' or 'rc'
For e.g.:
"""
current_version = self.get_version()
current_patch = self.get_patch_version(current_version)
current_pre_release_tag = self.get_current_pre_release_tag(current_patch)
current_RELEASE_SEPARATOR = self.get_current_RELEASE_SEPARATOR(current_patch)
new_patch = ''
# The new patch should get a release tag
if pre_release_tag:
# Check, if the current patch already contains a pre_release_tag.
if current_pre_release_tag:
new_patch = str(current_patch.split(current_pre_release_tag, 2)[0]) + pre_release_tag
if pre_release_tag == current_pre_release_tag:
new_patch += str(int(current_patch.split(current_pre_release_tag, 2)[1])+1)
else:
new_patch += '0'
        # The current patch does not contain a pre_release_tag.
else:
new_patch = str(int(current_patch)+1) + \
APISettings.RELEASE_SEPARATOR + \
pre_release_tag + \
'0'
# The new patch should not contain any tag. So just increase it.
else:
if current_RELEASE_SEPARATOR:
new_patch = str(int(current_patch.split(current_RELEASE_SEPARATOR, 2)[0])+1)
elif current_pre_release_tag:
new_patch = str(int(current_patch.split(current_pre_release_tag, 2)[0])+1)
else:
new_patch = str(int(current_patch)+1)
new_version = str(int(current_version.split('.', 5)[0])) + '.' + \
str(int(current_version.split('.', 5)[1])) + '.' + \
str(new_patch)
self.set_version(current_version, new_version) |
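A simplified sketch of the patch-bump rules in the entry above: split off a pre-release tag ("a", "b", "rc") and either increment its number or the plain patch. Separator handling is omitted and `bump_patch` is our name, so this is an approximation, not the project's implementation.

```python
def bump_patch(patch, tag=""):
    # Look for a known pre-release tag already embedded in the patch string.
    for known in ("rc", "a", "b"):
        if known in patch:
            head, num = patch.split(known, 1)
            if tag == known:
                # Same tag requested: bump the pre-release number.
                return head + known + str(int(num) + 1)
            if tag:
                # Different tag requested: restart its counter at 0.
                return head + tag + "0"
            # No tag requested: drop the pre-release part, bump the patch.
            return str(int(head) + 1)
    # Plain patch: bump it, appending a fresh tag counter when requested.
    return str(int(patch) + 1) + (tag + "0" if tag else "")
```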
def do_request(self, method, url, callback_url = None, get = None, post = None, files = None, stream = False, is_json = True):
if files == {}:
files = None
self._multipart = files is not None
header = self.get_oauth_header(method, url, callback_url, get, post)
if get:
full_url = url + "?" + urllib.urlencode(get)
else:
full_url = url
"""# DEBUG
info = "=" * 50 + "\n"
info += "Method: %s\n" % method
info += "URL: %s\n" % full_url
info += "Headers: %s\n" % str(header)
info += "GET data: %s\n" % str(get)
info += "POST data: %s\n" % str(post)
info += "Files: %s\n" % str(files)
info += "Streaming: %s\n" % str(stream)
info += "JSON: %s\n" % str(is_json)
info += "=" * 50
print info
# END DEBUG"""
if method.upper() == "POST":
response = requests.post(full_url, data = post, files = files, headers = header, stream = stream, timeout = self.timeout)
else:
response = requests.get(full_url, data = post, files = files, headers = header, stream = stream, timeout = self.timeout)
"""# DEBUG
print ("\nResponse: %s\n" % response.text) + "=" * 50
# END DEBUG"""
if response.status_code != 200:
try:
data = response.json()
try:
raise APIError(code = data['errors'][0]['code'], description = data['errors'][0]['message'], body = response.text or None)
except TypeError:
raise APIError(code = None, description = data['errors'])
except APIError:
raise
except:
description = " ".join(response.headers['status'].split()[1:]) if response.headers.get('status', None) else "Unknown Error"
raise APIError(code = response.status_code, description = description, body = response.text or None)
if stream:
return response
if is_json:
try:
return response.json()
except:
return response.text
else:
return response.text | # DEBUG
info = "=" * 50 + "\n"
info += "Method: %s\n" % method
info += "URL: %s\n" % full_url
info += "Headers: %s\n" % str(header)
info += "GET data: %s\n" % str(get)
info += "POST data: %s\n" % str(post)
info += "Files: %s\n" % str(files)
info += "Streaming: %s\n" % str(stream)
info += "JSON: %s\n" % str(is_json)
info += "=" * 50
print info
# END DEBUG | Below is the instruction that describes the task:
### Input:
# DEBUG
info = "=" * 50 + "\n"
info += "Method: %s\n" % method
info += "URL: %s\n" % full_url
info += "Headers: %s\n" % str(header)
info += "GET data: %s\n" % str(get)
info += "POST data: %s\n" % str(post)
info += "Files: %s\n" % str(files)
info += "Streaming: %s\n" % str(stream)
info += "JSON: %s\n" % str(is_json)
info += "=" * 50
print info
# END DEBUG
### Response:
def do_request(self, method, url, callback_url = None, get = None, post = None, files = None, stream = False, is_json = True):
if files == {}:
files = None
self._multipart = files is not None
header = self.get_oauth_header(method, url, callback_url, get, post)
if get:
full_url = url + "?" + urllib.urlencode(get)
else:
full_url = url
"""# DEBUG
info = "=" * 50 + "\n"
info += "Method: %s\n" % method
info += "URL: %s\n" % full_url
info += "Headers: %s\n" % str(header)
info += "GET data: %s\n" % str(get)
info += "POST data: %s\n" % str(post)
info += "Files: %s\n" % str(files)
info += "Streaming: %s\n" % str(stream)
info += "JSON: %s\n" % str(is_json)
info += "=" * 50
print info
# END DEBUG"""
if method.upper() == "POST":
response = requests.post(full_url, data = post, files = files, headers = header, stream = stream, timeout = self.timeout)
else:
response = requests.get(full_url, data = post, files = files, headers = header, stream = stream, timeout = self.timeout)
"""# DEBUG
print ("\nResponse: %s\n" % response.text) + "=" * 50
# END DEBUG"""
if response.status_code != 200:
try:
data = response.json()
try:
raise APIError(code = data['errors'][0]['code'], description = data['errors'][0]['message'], body = response.text or None)
except TypeError:
raise APIError(code = None, description = data['errors'])
except APIError:
raise
except:
description = " ".join(response.headers['status'].split()[1:]) if response.headers.get('status', None) else "Unknown Error"
raise APIError(code = response.status_code, description = description, body = response.text or None)
if stream:
return response
if is_json:
try:
return response.json()
except:
return response.text
else:
return response.text |
def _CreateAdGroup(client, campaign_id):
"""Creates an ad group.
Args:
client: an AdWordsClient instance.
campaign_id: an integer campaign ID.
Returns:
An integer ad group ID.
"""
ad_group_service = client.GetService('AdGroupService')
operations = [{
'operator': 'ADD',
'operand': {
'campaignId': campaign_id,
'adGroupType': 'SEARCH_DYNAMIC_ADS',
'name': 'Earth to Mars Cruises #%d' % uuid.uuid4(),
'status': 'PAUSED',
'biddingStrategyConfiguration': {
'bids': [{
'xsi_type': 'CpcBid',
'bid': {
'microAmount': '3000000'
},
}]
}
}
}]
ad_group = ad_group_service.mutate(operations)['value'][0]
ad_group_id = ad_group['id']
print 'Ad group with ID "%d" and name "%s" was created.' % (
ad_group_id, ad_group['name'])
return ad_group_id | Creates an ad group.
Args:
client: an AdWordsClient instance.
campaign_id: an integer campaign ID.
Returns:
An integer ad group ID. | Below is the instruction that describes the task:
### Input:
Creates an ad group.
Args:
client: an AdWordsClient instance.
campaign_id: an integer campaign ID.
Returns:
An integer ad group ID.
### Response:
def _CreateAdGroup(client, campaign_id):
"""Creates an ad group.
Args:
client: an AdWordsClient instance.
campaign_id: an integer campaign ID.
Returns:
An integer ad group ID.
"""
ad_group_service = client.GetService('AdGroupService')
operations = [{
'operator': 'ADD',
'operand': {
'campaignId': campaign_id,
'adGroupType': 'SEARCH_DYNAMIC_ADS',
'name': 'Earth to Mars Cruises #%d' % uuid.uuid4(),
'status': 'PAUSED',
'biddingStrategyConfiguration': {
'bids': [{
'xsi_type': 'CpcBid',
'bid': {
'microAmount': '3000000'
},
}]
}
}
}]
ad_group = ad_group_service.mutate(operations)['value'][0]
ad_group_id = ad_group['id']
print 'Ad group with ID "%d" and name "%s" was created.' % (
ad_group_id, ad_group['name'])
return ad_group_id |
def random_sample(list_, nSample, strict=False, rng=None, seed=None):
"""
Grabs data randomly
Args:
list_ (list):
nSample (?):
strict (bool): (default = False)
rng (module): random number generator(default = numpy.random)
seed (None): (default = None)
Returns:
list: sample_list
CommandLine:
python -m utool.util_numpy --exec-random_sample
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_numpy import * # NOQA
>>> list_ = np.arange(10)
>>> nSample = 4
>>> strict = False
>>> rng = np.random.RandomState(0)
>>> seed = None
>>> sample_list = random_sample(list_, nSample, strict, rng, seed)
>>> result = ('sample_list = %s' % (str(sample_list),))
>>> print(result)
"""
rng = ensure_rng(seed if rng is None else rng)
if isinstance(list_, list):
list2_ = list_[:]
else:
list2_ = np.copy(list_)
if len(list2_) == 0 and not strict:
return list2_
rng.shuffle(list2_)
if nSample is None and strict is False:
return list2_
if not strict:
nSample = min(max(0, nSample), len(list2_))
sample_list = list2_[:nSample]
return sample_list | Grabs data randomly
Args:
list_ (list):
nSample (?):
strict (bool): (default = False)
rng (module): random number generator(default = numpy.random)
seed (None): (default = None)
Returns:
list: sample_list
CommandLine:
python -m utool.util_numpy --exec-random_sample
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_numpy import * # NOQA
>>> list_ = np.arange(10)
>>> nSample = 4
>>> strict = False
>>> rng = np.random.RandomState(0)
>>> seed = None
>>> sample_list = random_sample(list_, nSample, strict, rng, seed)
>>> result = ('sample_list = %s' % (str(sample_list),))
>>> print(result) | Below is the instruction that describes the task:
### Input:
Grabs data randomly
Args:
list_ (list):
nSample (?):
strict (bool): (default = False)
rng (module): random number generator(default = numpy.random)
seed (None): (default = None)
Returns:
list: sample_list
CommandLine:
python -m utool.util_numpy --exec-random_sample
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_numpy import * # NOQA
>>> list_ = np.arange(10)
>>> nSample = 4
>>> strict = False
>>> rng = np.random.RandomState(0)
>>> seed = None
>>> sample_list = random_sample(list_, nSample, strict, rng, seed)
>>> result = ('sample_list = %s' % (str(sample_list),))
>>> print(result)
### Response:
def random_sample(list_, nSample, strict=False, rng=None, seed=None):
"""
Grabs data randomly
Args:
list_ (list):
nSample (?):
strict (bool): (default = False)
rng (module): random number generator(default = numpy.random)
seed (None): (default = None)
Returns:
list: sample_list
CommandLine:
python -m utool.util_numpy --exec-random_sample
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_numpy import * # NOQA
>>> list_ = np.arange(10)
>>> nSample = 4
>>> strict = False
>>> rng = np.random.RandomState(0)
>>> seed = None
>>> sample_list = random_sample(list_, nSample, strict, rng, seed)
>>> result = ('sample_list = %s' % (str(sample_list),))
>>> print(result)
"""
rng = ensure_rng(seed if rng is None else rng)
if isinstance(list_, list):
list2_ = list_[:]
else:
list2_ = np.copy(list_)
if len(list2_) == 0 and not strict:
return list2_
rng.shuffle(list2_)
if nSample is None and strict is False:
return list2_
if not strict:
nSample = min(max(0, nSample), len(list2_))
sample_list = list2_[:nSample]
return sample_list |
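For contexts without numpy, the same sampling behavior can be sketched with only the standard library (a hedged equivalent, not part of utool; the name `random_sample_stdlib` is invented for this illustration):

```python
import random

def random_sample_stdlib(items, n_sample, strict=False, seed=None):
    """Shuffle a copy of items and keep the first n_sample elements.

    Mirrors the utool helper above, using only the standard library.
    """
    rng = random.Random(seed)
    pool = list(items)
    if not pool and not strict:
        return pool
    rng.shuffle(pool)
    if n_sample is None and not strict:
        return pool
    if not strict:
        n_sample = min(max(0, n_sample), len(pool))
    return pool[:n_sample]
```

With a fixed seed the result is reproducible, which is the role the `rng`/`seed` arguments play in the original.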
def gradient_line(xs, ys, colormap_name='jet', ax=None):
'''Plot a 2-d line with a gradient representing ordering.
See http://stackoverflow.com/q/8500700/10601 for details.'''
if ax is None:
ax = plt.gca()
cm = plt.get_cmap(colormap_name)
npts = len(xs)-1
colors = cm(np.linspace(0, 1, num=npts))
if hasattr(ax, 'set_prop_cycle'):
ax.set_prop_cycle(color=colors)
else:
ax.set_color_cycle(colors)
for i in range(npts):
ax.plot(xs[i:i+2],ys[i:i+2])
return plt.show | Plot a 2-d line with a gradient representing ordering.
See http://stackoverflow.com/q/8500700/10601 for details. | Below is the instruction that describes the task:
### Input:
Plot a 2-d line with a gradient representing ordering.
See http://stackoverflow.com/q/8500700/10601 for details.
### Response:
def gradient_line(xs, ys, colormap_name='jet', ax=None):
'''Plot a 2-d line with a gradient representing ordering.
See http://stackoverflow.com/q/8500700/10601 for details.'''
if ax is None:
ax = plt.gca()
cm = plt.get_cmap(colormap_name)
npts = len(xs)-1
colors = cm(np.linspace(0, 1, num=npts))
if hasattr(ax, 'set_prop_cycle'):
ax.set_prop_cycle(color=colors)
else:
ax.set_color_cycle(colors)
for i in range(npts):
ax.plot(xs[i:i+2],ys[i:i+2])
return plt.show |
def invert(self):
''' Return inverse mapping of dictionary with sorted values.
USAGE
>>> # Switch the keys and values
>>> adv_dict({
... 'A': [1, 2, 3],
... 'B': [4, 2],
... 'C': [1, 4],
... }).invert()
{1: ['A', 'C'], 2: ['A', 'B'], 3: ['A'], 4: ['B', 'C']}
'''
inv_map = {}
for k, v in self.items():
if sys.version_info < (3, 0):
acceptable_v_instance = isinstance(v, (str, int, float, long))
else:
acceptable_v_instance = isinstance(v, (str, int, float))
if acceptable_v_instance: v = [v]
elif not isinstance(v, list):
raise Exception('Error: Non supported value format! Values may only'
' be numerical, strings, or lists of numbers and '
'strings.')
for val in v:
inv_map[val] = inv_map.get(val, [])
inv_map[val].append(k)
inv_map[val].sort()
return inv_map | Return inverse mapping of dictionary with sorted values.
USAGE
>>> # Switch the keys and values
>>> adv_dict({
... 'A': [1, 2, 3],
... 'B': [4, 2],
... 'C': [1, 4],
... }).invert()
{1: ['A', 'C'], 2: ['A', 'B'], 3: ['A'], 4: ['B', 'C']} | Below is the instruction that describes the task:
### Input:
Return inverse mapping of dictionary with sorted values.
USAGE
>>> # Switch the keys and values
>>> adv_dict({
... 'A': [1, 2, 3],
... 'B': [4, 2],
... 'C': [1, 4],
... }).invert()
{1: ['A', 'C'], 2: ['A', 'B'], 3: ['A'], 4: ['B', 'C']}
### Response:
def invert(self):
''' Return inverse mapping of dictionary with sorted values.
USAGE
>>> # Switch the keys and values
>>> adv_dict({
... 'A': [1, 2, 3],
... 'B': [4, 2],
... 'C': [1, 4],
... }).invert()
{1: ['A', 'C'], 2: ['A', 'B'], 3: ['A'], 4: ['B', 'C']}
'''
inv_map = {}
for k, v in self.items():
if sys.version_info < (3, 0):
acceptable_v_instance = isinstance(v, (str, int, float, long))
else:
acceptable_v_instance = isinstance(v, (str, int, float))
if acceptable_v_instance: v = [v]
elif not isinstance(v, list):
raise Exception('Error: Non supported value format! Values may only'
' be numerical, strings, or lists of numbers and '
'strings.')
for val in v:
inv_map[val] = inv_map.get(val, [])
inv_map[val].append(k)
inv_map[val].sort()
return inv_map |
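The same inversion can be written as a standalone function over a plain dict (a sketch; `invert_mapping` is a name invented here, and the Python 2 `long` branch is dropped):

```python
def invert_mapping(d):
    """Invert a dict whose values are scalars or lists of scalars,
    keeping each result list sorted."""
    inv = {}
    for k, v in d.items():
        values = v if isinstance(v, list) else [v]
        for val in values:
            inv.setdefault(val, []).append(k)
    for val in inv:
        inv[val].sort()  # sort once per key instead of after every append
    return inv
```

`invert_mapping({'A': [1, 2, 3], 'B': [4, 2], 'C': [1, 4]})` reproduces the doctest result shown in the docstring.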
def update_options(cls, options, items):
"""
Switch default options and backend if new backend is supplied in
items.
"""
# Get new backend
backend_spec = items.get('backend', Store.current_backend)
split = backend_spec.split(':')
backend, mode = split if len(split)==2 else (split[0], 'default')
if ':' not in backend_spec:
backend_spec += ':default'
if 'max_branches' in items:
print('Warning: The max_branches option is now deprecated. Ignoring.')
del items['max_branches']
# Get previous backend
prev_backend = Store.current_backend
renderer = Store.renderers[prev_backend]
prev_backend_spec = prev_backend+':'+renderer.mode
# Update allowed formats
for p in ['fig', 'holomap']:
cls.allowed[p] = list_formats(p, backend_spec)
# Return if backend invalid and let validation error
if backend not in Store.renderers:
options['backend'] = backend_spec
return options
# Get backend specific options
backend_options = dict(cls._backend_options[backend_spec])
cls._backend_options[prev_backend_spec] = {k: v for k, v in cls.options.items()
if k in cls.remembered}
# Fill in remembered options with defaults
for opt in cls.remembered:
if opt not in backend_options:
backend_options[opt] = cls.defaults[opt]
# Switch format if mode does not allow it
for p in ['fig', 'holomap']:
if backend_options.get(p) not in cls.allowed[p]:
backend_options[p] = cls.allowed[p][0]
# Ensure backend and mode are set
backend_options['backend'] = backend_spec
backend_options['mode'] = mode
return backend_options | Switch default options and backend if new backend is supplied in
items. | Below is the instruction that describes the task:
### Input:
Switch default options and backend if new backend is supplied in
items.
### Response:
def update_options(cls, options, items):
"""
Switch default options and backend if new backend is supplied in
items.
"""
# Get new backend
backend_spec = items.get('backend', Store.current_backend)
split = backend_spec.split(':')
backend, mode = split if len(split)==2 else (split[0], 'default')
if ':' not in backend_spec:
backend_spec += ':default'
if 'max_branches' in items:
print('Warning: The max_branches option is now deprecated. Ignoring.')
del items['max_branches']
# Get previous backend
prev_backend = Store.current_backend
renderer = Store.renderers[prev_backend]
prev_backend_spec = prev_backend+':'+renderer.mode
# Update allowed formats
for p in ['fig', 'holomap']:
cls.allowed[p] = list_formats(p, backend_spec)
# Return if backend invalid and let validation error
if backend not in Store.renderers:
options['backend'] = backend_spec
return options
# Get backend specific options
backend_options = dict(cls._backend_options[backend_spec])
cls._backend_options[prev_backend_spec] = {k: v for k, v in cls.options.items()
if k in cls.remembered}
# Fill in remembered options with defaults
for opt in cls.remembered:
if opt not in backend_options:
backend_options[opt] = cls.defaults[opt]
# Switch format if mode does not allow it
for p in ['fig', 'holomap']:
if backend_options.get(p) not in cls.allowed[p]:
backend_options[p] = cls.allowed[p][0]
# Ensure backend and mode are set
backend_options['backend'] = backend_spec
backend_options['mode'] = mode
return backend_options |
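The `backend:mode` spec handling at the top of `update_options` can be isolated into a small helper (a sketch for illustration; `parse_backend_spec` is not part of the actual API):

```python
def parse_backend_spec(spec, default_mode='default'):
    """Split a 'backend' or 'backend:mode' spec and normalize it
    so the returned spec always carries an explicit mode."""
    split = spec.split(':')
    backend, mode = split if len(split) == 2 else (split[0], default_mode)
    if ':' not in spec:
        spec += ':' + default_mode
    return backend, mode, spec
```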
def identify_pycbc_live(origin, filepath, fileobj, *args, **kwargs):
"""Identify a PyCBC Live file as an HDF5 with the correct name
"""
if identify_hdf5(origin, filepath, fileobj, *args, **kwargs) and (
filepath is not None and PYCBC_FILENAME.match(basename(filepath))):
return True
return False | Identify a PyCBC Live file as an HDF5 with the correct name | Below is the instruction that describes the task:
### Input:
Identify a PyCBC Live file as an HDF5 with the correct name
### Response:
def identify_pycbc_live(origin, filepath, fileobj, *args, **kwargs):
"""Identify a PyCBC Live file as an HDF5 with the correct name
"""
if identify_hdf5(origin, filepath, fileobj, *args, **kwargs) and (
filepath is not None and PYCBC_FILENAME.match(basename(filepath))):
return True
return False |
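The identification idea above — accept a file only when its *basename* matches an expected pattern — can be sketched generically. The real `PYCBC_FILENAME` regex is defined elsewhere in the library and its exact form is not shown here, so the pattern below is a stand-in:

```python
import re
from os.path import basename

# Stand-in pattern; the real PYCBC_FILENAME regex lives elsewhere in the library.
FILENAME_PATTERN = re.compile(r'\AH1-Live-[0-9.]+-[0-9.]+\.(h5|hdf)\Z')

def matches_filename(filepath, pattern=FILENAME_PATTERN):
    """Return True when filepath is set and its basename matches pattern."""
    return filepath is not None and pattern.match(basename(filepath)) is not None
```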
def dupstack(newtask):
'''
Duplicate the current provenance stack onto another task
'''
stack = s_task.varget('provstack')
s_task.varset('provstack', stack.copy(), newtask) | Duplicate the current provenance stack onto another task | Below is the instruction that describes the task:
### Input:
Duplicate the current provenance stack onto another task
### Response:
def dupstack(newtask):
'''
Duplicate the current provenance stack onto another task
'''
stack = s_task.varget('provstack')
s_task.varset('provstack', stack.copy(), newtask) |
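The `.copy()` call is the whole point of `dupstack`: without it the new task would share the parent's list and see every later mutation. A stdlib-only illustration of the difference:

```python
parent_stack = [('init',), ('storm', 'query-1')]

aliased = parent_stack            # no copy: both names point at one list
duplicated = parent_stack.copy()  # what dupstack does for the new task

parent_stack.append(('storm', 'query-2'))
# The alias tracks the parent; the duplicate is frozen at copy time.
```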
def patch(save=True, tensorboardX=tensorboardX_loaded):
"""Monkeypatches tensorboard or tensorboardX so that all events are logged to tfevents files and wandb.
We save the tfevents files and graphs to wandb by default.
Arguments:
save, default: True - Passing False will skip sending events.
tensorboardX, default: True if module can be imported - You can override this when calling patch
"""
global Summary, Event
if tensorboardX:
tensorboard_module = "tensorboardX.writer"
if tensorflow_loaded:
wandb.termlog(
"Found TensorboardX and tensorflow, pass tensorboardX=False to patch regular tensorboard.")
from tensorboardX.proto.summary_pb2 import Summary
from tensorboardX.proto.event_pb2 import Event
else:
tensorboard_module = "tensorflow.python.summary.writer.writer"
from tensorflow.summary import Summary, Event
writers = set()
def _add_event(self, event, step, walltime=None):
event.wall_time = time.time() if walltime is None else walltime
if step is not None:
event.step = int(step)
try:
# TensorboardX uses _file_name
if hasattr(self.event_writer._ev_writer, "_file_name"):
name = self.event_writer._ev_writer._file_name
else:
name = self.event_writer._ev_writer.FileName().decode("utf-8")
writers.add(name)
# This is a little hacky, there is a case where the log_dir changes.
# Because the events files will have the same names in sub directories
# we simply overwrite the previous symlink in wandb.save if the log_dir
# changes.
log_dir = os.path.dirname(os.path.commonprefix(list(writers)))
filename = os.path.basename(name)
# Tensorboard loads all tfevents files in a directory and prepends
# their values with the path. Passing namespace to log allows us
# to nest the values in wandb
namespace = name.replace(filename, "").replace(
log_dir, "").strip(os.sep)
if save:
wandb.save(name, base_path=log_dir)
wandb.save(os.path.join(log_dir, "*.pbtxt"),
base_path=log_dir)
log(event, namespace=namespace, step=event.step)
except Exception as e:
wandb.termerror("Unable to log event %s" % e)
# six.reraise(type(e), e, sys.exc_info()[2])
self.event_writer.add_event(event)
writer = wandb.util.get_module(tensorboard_module)
writer.SummaryToEventTransformer._add_event = _add_event | Monkeypatches tensorboard or tensorboardX so that all events are logged to tfevents files and wandb.
We save the tfevents files and graphs to wandb by default.
Arguments:
save, default: True - Passing False will skip sending events.
tensorboardX, default: True if module can be imported - You can override this when calling patch | Below is the instruction that describes the task:
### Input:
Monkeypatches tensorboard or tensorboardX so that all events are logged to tfevents files and wandb.
We save the tfevents files and graphs to wandb by default.
Arguments:
save, default: True - Passing False will skip sending events.
tensorboardX, default: True if module can be imported - You can override this when calling patch
### Response:
def patch(save=True, tensorboardX=tensorboardX_loaded):
"""Monkeypatches tensorboard or tensorboardX so that all events are logged to tfevents files and wandb.
We save the tfevents files and graphs to wandb by default.
Arguments:
save, default: True - Passing False will skip sending events.
tensorboardX, default: True if module can be imported - You can override this when calling patch
"""
global Summary, Event
if tensorboardX:
tensorboard_module = "tensorboardX.writer"
if tensorflow_loaded:
wandb.termlog(
"Found TensorboardX and tensorflow, pass tensorboardX=False to patch regular tensorboard.")
from tensorboardX.proto.summary_pb2 import Summary
from tensorboardX.proto.event_pb2 import Event
else:
tensorboard_module = "tensorflow.python.summary.writer.writer"
from tensorflow.summary import Summary, Event
writers = set()
def _add_event(self, event, step, walltime=None):
event.wall_time = time.time() if walltime is None else walltime
if step is not None:
event.step = int(step)
try:
# TensorboardX uses _file_name
if hasattr(self.event_writer._ev_writer, "_file_name"):
name = self.event_writer._ev_writer._file_name
else:
name = self.event_writer._ev_writer.FileName().decode("utf-8")
writers.add(name)
# This is a little hacky, there is a case where the log_dir changes.
# Because the events files will have the same names in sub directories
# we simply overwrite the previous symlink in wandb.save if the log_dir
# changes.
log_dir = os.path.dirname(os.path.commonprefix(list(writers)))
filename = os.path.basename(name)
# Tensorboard loads all tfevents files in a directory and prepends
# their values with the path. Passing namespace to log allows us
# to nest the values in wandb
namespace = name.replace(filename, "").replace(
log_dir, "").strip(os.sep)
if save:
wandb.save(name, base_path=log_dir)
wandb.save(os.path.join(log_dir, "*.pbtxt"),
base_path=log_dir)
log(event, namespace=namespace, step=event.step)
except Exception as e:
wandb.termerror("Unable to log event %s" % e)
# six.reraise(type(e), e, sys.exc_info()[2])
self.event_writer.add_event(event)
writer = wandb.util.get_module(tensorboard_module)
writer.SummaryToEventTransformer._add_event = _add_event |
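Stripped of the tensorboard details, the core move in `patch` is replacing a method on a third-party class at runtime so every call is observed before being delegated. A minimal, generic sketch (none of these names come from wandb):

```python
class EventWriter:
    """Stand-in for a class owned by another library."""
    def add_event(self, event):
        return ('written', event)

captured = []
_original_add_event = EventWriter.add_event

def _add_event(self, event):
    captured.append(event)                   # side channel, like the wandb log call
    return _original_add_event(self, event)  # then delegate to the original method

EventWriter.add_event = _add_event  # the monkeypatch

result = EventWriter().add_event('step-1')
```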
def install_from_zip(url):
"""Download and unzip from url."""
fname = 'tmp.zip'
downlad_file(url, fname)
unzip_file(fname)
print("Removing {}".format(fname))
os.unlink(fname) | Download and unzip from url. | Below is the instruction that describes the task:
### Input:
Download and unzip from url.
### Response:
def install_from_zip(url):
"""Download and unzip from url."""
fname = 'tmp.zip'
downlad_file(url, fname)
unzip_file(fname)
print("Removing {}".format(fname))
os.unlink(fname) |
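A stdlib-only sketch of the unzip-and-clean-up half using `zipfile` (the download step is left out; `extract_zip` is a hypothetical name, not the `downlad_file`/`unzip_file` helpers used above):

```python
import os
import zipfile

def extract_zip(zip_path, dest_dir='.', remove=True):
    """Extract an archive into dest_dir, then optionally delete the archive."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    if remove:
        print("Removing {}".format(zip_path))
        os.unlink(zip_path)
```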
def getcomponentdetails(self, product, component, force_refresh=False):
"""
Helper for accessing a single component's info. This is a wrapper
around getcomponentsdetails, see that for explanation
"""
d = self.getcomponentsdetails(product, force_refresh)
return d[component] | Helper for accessing a single component's info. This is a wrapper
around getcomponentsdetails, see that for explanation | Below is the instruction that describes the task:
### Input:
Helper for accessing a single component's info. This is a wrapper
around getcomponentsdetails, see that for explanation
### Response:
def getcomponentdetails(self, product, component, force_refresh=False):
"""
Helper for accessing a single component's info. This is a wrapper
around getcomponentsdetails, see that for explanation
"""
d = self.getcomponentsdetails(product, force_refresh)
return d[component] |
def importPeptideFeatures(fiContainer, filelocation, specfile):
""" Import peptide features from a featureXml file, as generated for example
by the OpenMS node featureFinderCentroided, or a features.tsv file by the
Dinosaur command line tool.
:param fiContainer: imported features are added to this instance of
:class:`FeatureContainer <maspy.core.FeatureContainer>`.
:param filelocation: Actual file path
:param specfile: Keyword (filename) to represent file in the
:class:`FeatureContainer`. Each filename can only occur once, therefore
importing the same filename again is prevented.
"""
if not os.path.isfile(filelocation):
warnings.warn('The specified file does not exist %s' %(filelocation, ))
return None
elif (not filelocation.lower().endswith('.featurexml') and
not filelocation.lower().endswith('.features.tsv')
):
#TODO: this is deprecated as importPeptideFeatures
#is no longer used solely for featurexml
print('Wrong file extension, %s' %(filelocation, ))
elif specfile in fiContainer.info:
print('%s is already present in the SiContainer, import interrupted.'
%(specfile, )
)
return None
#Prepare the file container for the import
fiContainer.addSpecfile(specfile, os.path.dirname(filelocation))
#import featurexml file
if filelocation.lower().endswith('.featurexml'):
featureDict = _importFeatureXml(filelocation)
for featureId, featureEntryDict in viewitems(featureDict):
rtArea = set()
for convexHullEntry in featureEntryDict['convexHullDict']['0']:
rtArea.update([convexHullEntry[0]])
fi = maspy.core.Fi(featureId, specfile)
fi.rt = featureEntryDict['rt']
fi.rtArea = max(rtArea) - min(rtArea)
fi.rtLow = min(rtArea)
fi.rtHigh = max(rtArea)
fi.charge = featureEntryDict['charge']
fi.mz = featureEntryDict['mz']
fi.mh = maspy.peptidemethods.calcMhFromMz(featureEntryDict['mz'],
featureEntryDict['charge'])
fi.intensity = featureEntryDict['intensity']
fi.quality = featureEntryDict['overallquality']
fi.isMatched = False
fi.isAnnotated = False
fi.isValid = True
fiContainer.container[specfile][featureId] = fi
#import dinosaur tsv file
elif filelocation.lower().endswith('.features.tsv'):
featureDict = _importDinosaurTsv(filelocation)
for featureId, featureEntryDict in viewitems(featureDict):
fi = maspy.core.Fi(featureId, specfile)
fi.rt = featureEntryDict['rtApex']
fi.rtArea = featureEntryDict['rtEnd'] - featureEntryDict['rtStart']
fi.rtFwhm = featureEntryDict['fwhm']
fi.rtLow = featureEntryDict['rtStart']
fi.rtHigh = featureEntryDict['rtEnd']
fi.charge = featureEntryDict['charge']
fi.numScans = featureEntryDict['nScans']
fi.mz = featureEntryDict['mz']
fi.mh = maspy.peptidemethods.calcMhFromMz(featureEntryDict['mz'],
featureEntryDict['charge'])
fi.intensity = featureEntryDict['intensitySum']
fi.intensityApex = featureEntryDict['intensityApex']
#Note: not used keys:
#mostAbundantMz nIsotopes nScans averagineCorr mass massCalib
fi.isMatched = False
fi.isAnnotated = False
fi.isValid = True
fiContainer.container[specfile][featureId] = fi | Import peptide features from a featureXml file, as generated for example
by the OpenMS node featureFinderCentroided, or a features.tsv file by the
Dinosaur command line tool.
:param fiContainer: imported features are added to this instance of
:class:`FeatureContainer <maspy.core.FeatureContainer>`.
:param filelocation: Actual file path
:param specfile: Keyword (filename) to represent file in the
:class:`FeatureContainer`. Each filename can only occur once, therefore
importing the same filename again is prevented. | Below is the instruction that describes the task:
### Input:
Import peptide features from a featureXml file, as generated for example
by the OpenMS node featureFinderCentroided, or a features.tsv file by the
Dinosaur command line tool.
:param fiContainer: imported features are added to this instance of
:class:`FeatureContainer <maspy.core.FeatureContainer>`.
:param filelocation: Actual file path
:param specfile: Keyword (filename) to represent file in the
:class:`FeatureContainer`. Each filename can only occur once, therefore
importing the same filename again is prevented.
### Response:
def importPeptideFeatures(fiContainer, filelocation, specfile):
""" Import peptide features from a featureXml file, as generated for example
by the OpenMS node featureFinderCentroided, or a features.tsv file by the
Dinosaur command line tool.
:param fiContainer: imported features are added to this instance of
:class:`FeatureContainer <maspy.core.FeatureContainer>`.
:param filelocation: Actual file path
:param specfile: Keyword (filename) to represent file in the
:class:`FeatureContainer`. Each filename can only occur once, therefore
importing the same filename again is prevented.
"""
if not os.path.isfile(filelocation):
warnings.warn('The specified file does not exist %s' %(filelocation, ))
return None
elif (not filelocation.lower().endswith('.featurexml') and
not filelocation.lower().endswith('.features.tsv')
):
#TODO: this is deprecated as importPeptideFeatures
#is no longer used solely for featurexml
print('Wrong file extension, %s' %(filelocation, ))
elif specfile in fiContainer.info:
print('%s is already present in the SiContainer, import interrupted.'
%(specfile, )
)
return None
#Prepare the file container for the import
fiContainer.addSpecfile(specfile, os.path.dirname(filelocation))
#import featurexml file
if filelocation.lower().endswith('.featurexml'):
featureDict = _importFeatureXml(filelocation)
for featureId, featureEntryDict in viewitems(featureDict):
rtArea = set()
for convexHullEntry in featureEntryDict['convexHullDict']['0']:
rtArea.update([convexHullEntry[0]])
fi = maspy.core.Fi(featureId, specfile)
fi.rt = featureEntryDict['rt']
fi.rtArea = max(rtArea) - min(rtArea)
fi.rtLow = min(rtArea)
fi.rtHigh = max(rtArea)
fi.charge = featureEntryDict['charge']
fi.mz = featureEntryDict['mz']
fi.mh = maspy.peptidemethods.calcMhFromMz(featureEntryDict['mz'],
featureEntryDict['charge'])
fi.intensity = featureEntryDict['intensity']
fi.quality = featureEntryDict['overallquality']
fi.isMatched = False
fi.isAnnotated = False
fi.isValid = True
fiContainer.container[specfile][featureId] = fi
#import dinosaur tsv file
elif filelocation.lower().endswith('.features.tsv'):
featureDict = _importDinosaurTsv(filelocation)
for featureId, featureEntryDict in viewitems(featureDict):
fi = maspy.core.Fi(featureId, specfile)
fi.rt = featureEntryDict['rtApex']
fi.rtArea = featureEntryDict['rtEnd'] - featureEntryDict['rtStart']
fi.rtFwhm = featureEntryDict['fwhm']
fi.rtLow = featureEntryDict['rtStart']
fi.rtHigh = featureEntryDict['rtEnd']
fi.charge = featureEntryDict['charge']
fi.numScans = featureEntryDict['nScans']
fi.mz = featureEntryDict['mz']
fi.mh = maspy.peptidemethods.calcMhFromMz(featureEntryDict['mz'],
featureEntryDict['charge'])
fi.intensity = featureEntryDict['intensitySum']
fi.intensityApex = featureEntryDict['intensityApex']
#Note: not used keys:
#mostAbundantMz nIsotopes nScans averagineCorr mass massCalib
fi.isMatched = False
fi.isAnnotated = False
fi.isValid = True
fiContainer.container[specfile][featureId] = fi |
def _get_job(self, project_id, job_id):
"""
Gets a MLEngine job based on the job name.
:return: MLEngine job object if succeed.
:rtype: dict
Raises:
googleapiclient.errors.HttpError: if HTTP error is returned from server
"""
job_name = 'projects/{}/jobs/{}'.format(project_id, job_id)
request = self._mlengine.projects().jobs().get(name=job_name)
while True:
try:
return request.execute()
except HttpError as e:
if e.resp.status == 429:
# polling after 30 seconds when quota failure occurs
time.sleep(30)
else:
self.log.error('Failed to get MLEngine job: {}'.format(e))
raise | Gets a MLEngine job based on the job name.
:return: MLEngine job object if succeed.
:rtype: dict
Raises:
googleapiclient.errors.HttpError: if HTTP error is returned from server | Below is the instruction that describes the task:
### Input:
Gets a MLEngine job based on the job name.
:return: MLEngine job object if succeed.
:rtype: dict
Raises:
googleapiclient.errors.HttpError: if HTTP error is returned from server
### Response:
def _get_job(self, project_id, job_id):
"""
Gets a MLEngine job based on the job name.
:return: MLEngine job object if succeed.
:rtype: dict
Raises:
googleapiclient.errors.HttpError: if HTTP error is returned from server
"""
job_name = 'projects/{}/jobs/{}'.format(project_id, job_id)
request = self._mlengine.projects().jobs().get(name=job_name)
while True:
try:
return request.execute()
except HttpError as e:
if e.resp.status == 429:
# polling after 30 seconds when quota failure occurs
time.sleep(30)
else:
self.log.error('Failed to get MLEngine job: {}'.format(e))
raise |
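The loop in `_get_job` is a retry-on-quota pattern: retry only the rate-limit error, re-raise everything else. Generically, with the sleep function injected so the sketch stays testable (hypothetical names, not the googleapiclient API):

```python
import time

class QuotaError(Exception):
    """Stand-in for an HTTP 429 (quota/rate-limit) response."""

def execute_with_retry(request, delay=30, sleep=time.sleep):
    """Call request() until it stops raising QuotaError; other errors propagate."""
    while True:
        try:
            return request()
        except QuotaError:
            sleep(delay)  # back off before polling again
```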
def _selftoken_expired():
'''
Validate the current token exists and is still valid
'''
try:
verify = __opts__['vault'].get('verify', None)
url = '{0}/v1/auth/token/lookup-self'.format(__opts__['vault']['url'])
if 'token' not in __opts__['vault']['auth']:
return True
headers = {'X-Vault-Token': __opts__['vault']['auth']['token']}
response = requests.get(url, headers=headers, verify=verify)
if response.status_code != 200:
return True
return False
except Exception as e:
raise salt.exceptions.CommandExecutionError(
'Error while looking up self token : {0}'.format(six.text_type(e))
) | Validate the current token exists and is still valid | Below is the instruction that describes the task:
### Input:
Validate the current token exists and is still valid
### Response:
def _selftoken_expired():
'''
Validate the current token exists and is still valid
'''
try:
verify = __opts__['vault'].get('verify', None)
url = '{0}/v1/auth/token/lookup-self'.format(__opts__['vault']['url'])
if 'token' not in __opts__['vault']['auth']:
return True
headers = {'X-Vault-Token': __opts__['vault']['auth']['token']}
response = requests.get(url, headers=headers, verify=verify)
if response.status_code != 200:
return True
return False
except Exception as e:
raise salt.exceptions.CommandExecutionError(
'Error while looking up self token : {0}'.format(six.text_type(e))
) |
def send_keys(self, keys, wait=True):
"""
Send a raw key sequence to *Vim*.
.. note:: *Vim* style key sequence notation (like ``<Esc>``)
is not recognized.
Use escaped characters (like ``'\033'``) instead.
Example:
>>> import headlessvim
>>> with headlessvim.open() as vim:
... vim.send_keys('ispam\033')
... str(vim.display_lines()[0].strip())
...
'spam'
:param string keys: key sequence to send
:param boolean wait: whether to wait for a response
"""
self._process.stdin.write(bytearray(keys, self._encoding))
self._process.stdin.flush()
if wait:
self.wait() | Send a raw key sequence to *Vim*.
.. note:: *Vim* style key sequence notation (like ``<Esc>``)
is not recognized.
Use escaped characters (like ``'\033'``) instead.
Example:
>>> import headlessvim
>>> with headlessvim.open() as vim:
... vim.send_keys('ispam\033')
... str(vim.display_lines()[0].strip())
...
'spam'
:param string keys: key sequence to send
:param boolean wait: whether to wait for a response | Below is the instruction that describes the task:
### Input:
Send a raw key sequence to *Vim*.
.. note:: *Vim* style key sequence notation (like ``<Esc>``)
is not recognized.
Use escaped characters (like ``'\033'``) instead.
Example:
>>> import headlessvim
>>> with headlessvim.open() as vim:
... vim.send_keys('ispam\033')
... str(vim.display_lines()[0].strip())
...
'spam'
:param string keys: key sequence to send
:param boolean wait: whether to wait for a response
### Response:
def send_keys(self, keys, wait=True):
"""
Send a raw key sequence to *Vim*.
.. note:: *Vim* style key sequence notation (like ``<Esc>``)
is not recognized.
Use escaped characters (like ``'\033'``) instead.
Example:
>>> import headlessvim
>>> with headlessvim.open() as vim:
... vim.send_keys('ispam\033')
... str(vim.display_lines()[0].strip())
...
'spam'
:param string keys: key sequence to send
:param boolean wait: whether to wait for a response
"""
self._process.stdin.write(bytearray(keys, self._encoding))
self._process.stdin.flush()
if wait:
self.wait() |
def extract_subject_from_dn(cert_obj):
"""Serialize a DN to a DataONE subject string.
Args:
cert_obj: cryptography.Certificate
Returns:
str:
Primary subject extracted from the certificate DN.
The certificate DN (DistinguishedName) is a sequence of RDNs
(RelativeDistinguishedName). Each RDN is a set of AVAs (AttributeValueAssertion /
AttributeTypeAndValue). A DataONE subject is a plain string. As there is no single
standard specifying how to create a string representation of a DN, DataONE selected
one of the most common ways, which yield strings such as:
CN=Some Name A123,O=Some Organization,C=US,DC=Some Domain,DC=org
In particular, the sequence of RDNs is reversed. Attribute values are escaped,
attribute type and value pairs are separated by "=", and AVAs are joined together
with ",". If an RDN contains an unknown OID, the OID is serialized as a dotted
string.
As all the information in the DN is preserved, it is not possible to create the
same subject with two different DNs, and the DN can be recreated from the subject.
"""
return ",".join(
"{}={}".format(
OID_TO_SHORT_NAME_DICT.get(v.oid.dotted_string, v.oid.dotted_string),
rdn_escape(v.value),
)
for v in reversed(list(cert_obj.subject))
) | Serialize a DN to a DataONE subject string.
Args:
cert_obj: cryptography.Certificate
Returns:
str:
Primary subject extracted from the certificate DN.
The certificate DN (DistinguishedName) is a sequence of RDNs
(RelativeDistinguishedName). Each RDN is a set of AVAs (AttributeValueAssertion /
AttributeTypeAndValue). A DataONE subject is a plain string. As there is no single
standard specifying how to create a string representation of a DN, DataONE selected
one of the most common ways, which yield strings such as:
CN=Some Name A123,O=Some Organization,C=US,DC=Some Domain,DC=org
In particular, the sequence of RDNs is reversed. Attribute values are escaped,
attribute type and value pairs are separated by "=", and AVAs are joined together
with ",". If an RDN contains an unknown OID, the OID is serialized as a dotted
string.
As all the information in the DN is preserved, it is not possible to create the
same subject with two different DNs, and the DN can be recreated from the subject. | Below is the instruction that describes the task:
### Input:
Serialize a DN to a DataONE subject string.
Args:
cert_obj: cryptography.Certificate
Returns:
str:
Primary subject extracted from the certificate DN.
The certificate DN (DistinguishedName) is a sequence of RDNs
(RelativeDistinguishedName). Each RDN is a set of AVAs (AttributeValueAssertion /
AttributeTypeAndValue). A DataONE subject is a plain string. As there is no single
standard specifying how to create a string representation of a DN, DataONE selected
one of the most common ways, which yield strings such as:
CN=Some Name A123,O=Some Organization,C=US,DC=Some Domain,DC=org
In particular, the sequence of RDNs is reversed. Attribute values are escaped,
attribute type and value pairs are separated by "=", and AVAs are joined together
with ",". If an RDN contains an unknown OID, the OID is serialized as a dotted
string.
As all the information in the DN is preserved, it is not possible to create the
same subject with two different DNs, and the DN can be recreated from the subject.
### Response:
def extract_subject_from_dn(cert_obj):
"""Serialize a DN to a DataONE subject string.
Args:
cert_obj: cryptography.Certificate
Returns:
str:
Primary subject extracted from the certificate DN.
The certificate DN (DistinguishedName) is a sequence of RDNs
(RelativeDistinguishedName). Each RDN is a set of AVAs (AttributeValueAssertion /
AttributeTypeAndValue). A DataONE subject is a plain string. As there is no single
standard specifying how to create a string representation of a DN, DataONE selected
one of the most common ways, which yield strings such as:
CN=Some Name A123,O=Some Organization,C=US,DC=Some Domain,DC=org
In particular, the sequence of RDNs is reversed. Attribute values are escaped,
attribute type and value pairs are separated by "=", and AVAs are joined together
with ",". If an RDN contains an unknown OID, the OID is serialized as a dotted
string.
As all the information in the DN is preserved, it is not possible to create the
same subject with two different DNs, and the DN can be recreated from the subject.
"""
return ",".join(
"{}={}".format(
OID_TO_SHORT_NAME_DICT.get(v.oid.dotted_string, v.oid.dotted_string),
rdn_escape(v.value),
)
for v in reversed(list(cert_obj.subject))
) |
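The DN-to-subject serialization above can be illustrated without the `cryptography` objects: reverse the RDN sequence, map each OID to its short name (falling back to the dotted string), escape the value, and join with commas. The OID table and escape rules below are a simplified sketch, not DataONE's full implementation:

```python
OID_TO_SHORT_NAME = {
    '2.5.4.3': 'CN',                     # commonName
    '2.5.4.10': 'O',                     # organizationName
    '0.9.2342.19200300.100.1.25': 'DC',  # domainComponent
}

def rdn_escape(value):
    # Backslash-escape separator characters; escape the backslash itself first
    # so already-inserted escapes are not doubled.
    for ch in '\\,=+<>#;':
        value = value.replace(ch, '\\' + ch)
    return value

def subject_from_avas(avas):
    # avas: (dotted_oid, value) pairs in certificate order; reversed for output.
    return ','.join(
        '{}={}'.format(OID_TO_SHORT_NAME.get(oid, oid), rdn_escape(value))
        for oid, value in reversed(avas)
    )

avas = [('0.9.2342.19200300.100.1.25', 'org'),
        ('2.5.4.10', 'Some Organization'),
        ('2.5.4.3', 'Some Name A123')]
print(subject_from_avas(avas))  # CN=Some Name A123,O=Some Organization,DC=org
```

An unknown OID simply appears as its dotted string, so no information is lost and the DN can be rebuilt from the subject, as the docstring notes.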
def remove_transcript(self,tx_id):
"""Remove a transcript from the locus by its id
:param tx_id:
:type tx_id: string
"""
txs = self.get_transcripts()
if tx_id not in [x.id for x in txs]:
return
tx = [x for x in txs if x.id==tx_id][0]
for n in [x for x in self.g.get_nodes()]:
if tx_id not in [y.id for y in n.payload]:
continue
n.payload.remove(tx)
if len(n.payload)==0:
self.g.remove_node(n) | Remove a transcript from the locus by its id
:param tx_id:
:type tx_id: string | Below is the instruction that describes the task:
### Input:
Remove a transcript from the locus by its id
:param tx_id:
:type tx_id: string
### Response:
def remove_transcript(self,tx_id):
"""Remove a transcript from the locus by its id
:param tx_id:
:type tx_id: string
"""
txs = self.get_transcripts()
if tx_id not in [x.id for x in txs]:
return
tx = [x for x in txs if x.id==tx_id][0]
for n in [x for x in self.g.get_nodes()]:
if tx_id not in [y.id for y in n.payload]:
continue
n.payload.remove(tx)
if len(n.payload)==0:
self.g.remove_node(n) |
def pipe_rssitembuilder(context=None, _INPUT=None, conf=None, **kwargs):
"""A source that builds an rss item. Loopable.
Parameters
----------
context : pipe2py.Context object
_INPUT : pipeforever asyncPipe or an iterable of items or fields
conf : {
'mediaContentType': {'type': 'text', 'value': ''},
'mediaContentHeight': {'type': 'text', 'value': ''},
'mediaContentWidth': {'type': 'text', 'value': ''},
'mediaContentURL': {'type': 'text', 'value': 'url'},
'mediaThumbHeight': {'type': 'text', 'value': ''},
'mediaThumbWidth': {'type': 'text', 'value': ''},
'mediaThumbURL': {'type': 'text', 'value': 'url'},
'description': {'type': 'text', 'value': 'description'},
'pubdate': {'type': 'text', 'value': 'pubdate'},
'author': {'type': 'text', 'value': 'author'},
'title': {'type': 'text', 'value': 'title'},
'link': {'type': 'text', 'value': 'url'},
'guid': {'type': 'text', 'value': 'guid'},
}
Yields
------
_OUTPUT : items
"""
get_value = partial(utils.get_value, **kwargs)
pkwargs = utils.combine_dicts({'parse_func': get_value}, kwargs)
parse_conf = partial(utils.parse_conf, DotDict(conf), **pkwargs)
get_RSS = lambda key, value: (RSS.get(key, key), value)
get_YAHOO = lambda key, value: (YAHOO.get(key), value)
make_dict = lambda func, conf: dict(starmap(func, conf.iteritems()))
clean_dict = lambda d: dict(i for i in d.items() if all(i))
funcs = [partial(make_dict, get_RSS), partial(make_dict, get_YAHOO)]
finite = utils.finitize(_INPUT)
inputs = imap(DotDict, finite)
confs = imap(parse_conf, inputs)
splits = utils.broadcast(confs, *funcs)
combined = starmap(utils.combine_dicts, splits)
result = imap(clean_dict, combined)
_OUTPUT = imap(DotDict, result)
return _OUTPUT | A source that builds an rss item. Loopable.
Parameters
----------
context : pipe2py.Context object
_INPUT : pipeforever asyncPipe or an iterable of items or fields
conf : {
'mediaContentType': {'type': 'text', 'value': ''},
'mediaContentHeight': {'type': 'text', 'value': ''},
'mediaContentWidth': {'type': 'text', 'value': ''},
'mediaContentURL': {'type': 'text', 'value': 'url'},
'mediaThumbHeight': {'type': 'text', 'value': ''},
'mediaThumbWidth': {'type': 'text', 'value': ''},
'mediaThumbURL': {'type': 'text', 'value': 'url'},
'description': {'type': 'text', 'value': 'description'},
'pubdate': {'type': 'text', 'value': 'pubdate'},
'author': {'type': 'text', 'value': 'author'},
'title': {'type': 'text', 'value': 'title'},
'link': {'type': 'text', 'value': 'url'},
'guid': {'type': 'text', 'value': 'guid'},
}
Yields
------
_OUTPUT : items | Below is the instruction that describes the task:
### Input:
A source that builds an rss item. Loopable.
Parameters
----------
context : pipe2py.Context object
_INPUT : pipeforever asyncPipe or an iterable of items or fields
conf : {
'mediaContentType': {'type': 'text', 'value': ''},
'mediaContentHeight': {'type': 'text', 'value': ''},
'mediaContentWidth': {'type': 'text', 'value': ''},
'mediaContentURL': {'type': 'text', 'value': 'url'},
'mediaThumbHeight': {'type': 'text', 'value': ''},
'mediaThumbWidth': {'type': 'text', 'value': ''},
'mediaThumbURL': {'type': 'text', 'value': 'url'},
'description': {'type': 'text', 'value': 'description'},
'pubdate': {'type': 'text', 'value': 'pubdate'},
'author': {'type': 'text', 'value': 'author'},
'title': {'type': 'text', 'value': 'title'},
'link': {'type': 'text', 'value': 'url'},
'guid': {'type': 'text', 'value': 'guid'},
}
Yields
------
_OUTPUT : items
### Response:
def pipe_rssitembuilder(context=None, _INPUT=None, conf=None, **kwargs):
"""A source that builds an rss item. Loopable.
Parameters
----------
context : pipe2py.Context object
_INPUT : pipeforever asyncPipe or an iterable of items or fields
conf : {
'mediaContentType': {'type': 'text', 'value': ''},
'mediaContentHeight': {'type': 'text', 'value': ''},
'mediaContentWidth': {'type': 'text', 'value': ''},
'mediaContentURL': {'type': 'text', 'value': 'url'},
'mediaThumbHeight': {'type': 'text', 'value': ''},
'mediaThumbWidth': {'type': 'text', 'value': ''},
'mediaThumbURL': {'type': 'text', 'value': 'url'},
'description': {'type': 'text', 'value': 'description'},
'pubdate': {'type': 'text', 'value': 'pubdate'},
'author': {'type': 'text', 'value': 'author'},
'title': {'type': 'text', 'value': 'title'},
'link': {'type': 'text', 'value': 'url'},
'guid': {'type': 'text', 'value': 'guid'},
}
Yields
------
_OUTPUT : items
"""
get_value = partial(utils.get_value, **kwargs)
pkwargs = utils.combine_dicts({'parse_func': get_value}, kwargs)
parse_conf = partial(utils.parse_conf, DotDict(conf), **pkwargs)
get_RSS = lambda key, value: (RSS.get(key, key), value)
get_YAHOO = lambda key, value: (YAHOO.get(key), value)
make_dict = lambda func, conf: dict(starmap(func, conf.iteritems()))
clean_dict = lambda d: dict(i for i in d.items() if all(i))
funcs = [partial(make_dict, get_RSS), partial(make_dict, get_YAHOO)]
finite = utils.finitize(_INPUT)
inputs = imap(DotDict, finite)
confs = imap(parse_conf, inputs)
splits = utils.broadcast(confs, *funcs)
combined = starmap(utils.combine_dicts, splits)
result = imap(clean_dict, combined)
_OUTPUT = imap(DotDict, result)
return _OUTPUT |
def getMiniHTML(self):
'''
getMiniHTML - Gets the HTML representation of this document without any pretty formatting
and disregarding original whitespace beyond the functional.
@return <str> - HTML with only functional whitespace present
'''
from .Formatter import AdvancedHTMLMiniFormatter
html = self.getHTML()
formatter = AdvancedHTMLMiniFormatter(None) # Do not double-encode
formatter.feed(html)
return formatter.getHTML() | getMiniHTML - Gets the HTML representation of this document without any pretty formatting
and disregarding original whitespace beyond the functional.
@return <str> - HTML with only functional whitespace present | Below is the instruction that describes the task:
### Input:
getMiniHTML - Gets the HTML representation of this document without any pretty formatting
and disregarding original whitespace beyond the functional.
@return <str> - HTML with only functional whitespace present
### Response:
def getMiniHTML(self):
'''
getMiniHTML - Gets the HTML representation of this document without any pretty formatting
and disregarding original whitespace beyond the functional.
@return <str> - HTML with only functional whitespace present
'''
from .Formatter import AdvancedHTMLMiniFormatter
html = self.getHTML()
formatter = AdvancedHTMLMiniFormatter(None) # Do not double-encode
formatter.feed(html)
return formatter.getHTML() |
def lsof(name):
'''
Retrieve the lsof information of the given process name.
CLI Example:
.. code-block:: bash
salt '*' ps.lsof apache2
'''
sanitize_name = six.text_type(name)
lsof_infos = __salt__['cmd.run']("lsof -c " + sanitize_name)
ret = []
ret.extend([sanitize_name, lsof_infos])
return ret | Retrieve the lsof information of the given process name.
CLI Example:
.. code-block:: bash
salt '*' ps.lsof apache2 | Below is the instruction that describes the task:
### Input:
Retrieve the lsof information of the given process name.
CLI Example:
.. code-block:: bash
salt '*' ps.lsof apache2
### Response:
def lsof(name):
'''
Retrieve the lsof information of the given process name.
CLI Example:
.. code-block:: bash
salt '*' ps.lsof apache2
'''
sanitize_name = six.text_type(name)
lsof_infos = __salt__['cmd.run']("lsof -c " + sanitize_name)
ret = []
ret.extend([sanitize_name, lsof_infos])
return ret |
def media_upload(self, media, additional_owners=None):
"""
Uploads an image to Twitter for later embedding in tweets.
https://dev.twitter.com/rest/reference/post/media/upload
:param file media:
The image file to upload (see the API docs for limitations).
:param list additional_owners:
A list of Twitter users that will be able to access the uploaded
file and embed it in their tweets (maximum 100 users).
:returns:
A dict containing information about the file uploaded. (Contains
the media id needed to embed the image in the ``media_id`` field).
"""
params = {}
set_list_param(
params, 'additional_owners', additional_owners, max_len=100)
return self._upload_media('media/upload.json', media, params) | Uploads an image to Twitter for later embedding in tweets.
https://dev.twitter.com/rest/reference/post/media/upload
:param file media:
The image file to upload (see the API docs for limitations).
:param list additional_owners:
A list of Twitter users that will be able to access the uploaded
file and embed it in their tweets (maximum 100 users).
:returns:
A dict containing information about the file uploaded. (Contains
the media id needed to embed the image in the ``media_id`` field). | Below is the instruction that describes the task:
### Input:
Uploads an image to Twitter for later embedding in tweets.
https://dev.twitter.com/rest/reference/post/media/upload
:param file media:
The image file to upload (see the API docs for limitations).
:param list additional_owners:
A list of Twitter users that will be able to access the uploaded
file and embed it in their tweets (maximum 100 users).
:returns:
A dict containing information about the file uploaded. (Contains
the media id needed to embed the image in the ``media_id`` field).
### Response:
def media_upload(self, media, additional_owners=None):
"""
Uploads an image to Twitter for later embedding in tweets.
https://dev.twitter.com/rest/reference/post/media/upload
:param file media:
The image file to upload (see the API docs for limitations).
:param list additional_owners:
A list of Twitter users that will be able to access the uploaded
file and embed it in their tweets (maximum 100 users).
:returns:
A dict containing information about the file uploaded. (Contains
the media id needed to embed the image in the ``media_id`` field).
"""
params = {}
set_list_param(
params, 'additional_owners', additional_owners, max_len=100)
return self._upload_media('media/upload.json', media, params) |
def Reset(self):
"""Reset the lexer to process a new data feed."""
# The first state
self.state = "INITIAL"
self.state_stack = []
# The buffer we are parsing now
self.buffer = ""
self.error = 0
self.verbose = 0
# The index into the buffer where we are currently pointing
self.processed = 0
self.processed_buffer = "" | Reset the lexer to process a new data feed. | Below is the instruction that describes the task:
### Input:
Reset the lexer to process a new data feed.
### Response:
def Reset(self):
"""Reset the lexer to process a new data feed."""
# The first state
self.state = "INITIAL"
self.state_stack = []
# The buffer we are parsing now
self.buffer = ""
self.error = 0
self.verbose = 0
# The index into the buffer where we are currently pointing
self.processed = 0
self.processed_buffer = "" |
def recv(stream):
"""Read an Erlang term from an input stream."""
header = stream.read(4)
if len(header) != 4:
return None # EOF
(length,) = struct.unpack('!I', header)
payload = stream.read(length)
if len(payload) != length:
return None
term = erlang.binary_to_term(payload)
return term | Read an Erlang term from an input stream. | Below is the instruction that describes the task:
### Input:
Read an Erlang term from an input stream.
### Response:
def recv(stream):
"""Read an Erlang term from an input stream."""
header = stream.read(4)
if len(header) != 4:
return None # EOF
(length,) = struct.unpack('!I', header)
payload = stream.read(length)
if len(payload) != length:
return None
term = erlang.binary_to_term(payload)
return term |
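The framing in `recv` is the standard 4-byte big-endian length prefix. The sketch below keeps the framing but returns the raw payload instead of decoding it with `erlang.binary_to_term`, so it runs without the Erlang term library:

```python
import io
import struct

def recv_frame(stream):
    # Read one length-prefixed frame: 4-byte big-endian length, then payload.
    header = stream.read(4)
    if len(header) != 4:
        return None  # EOF
    (length,) = struct.unpack('!I', header)
    payload = stream.read(length)
    if len(payload) != length:
        return None  # truncated frame
    return payload

buf = io.BytesIO(struct.pack('!I', 5) + b'hello' + struct.pack('!I', 2) + b'ok')
print(recv_frame(buf))  # b'hello'
print(recv_frame(buf))  # b'ok'
print(recv_frame(buf))  # None (EOF)
```

Both the short-header and short-payload checks matter: a stream that ends mid-frame yields `None` rather than a partial term.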
def get_sequence_value(node):
"""Convert an element with DataType Sequence to a DataFrame.
Note this may be a naive implementation as I assume that bulk data is always a table
"""
assert node.Datatype == 15
data = defaultdict(list)
cols = []
for i in range(node.NumValues):
row = node.GetValue(i)
if i == 0: # Get the ordered cols and assume they are constant
cols = [str(row.GetElement(_).Name) for _ in range(row.NumElements)]
for cidx in range(row.NumElements):
col = row.GetElement(cidx)
data[str(col.Name)].append(XmlHelper.as_value(col))
return DataFrame(data, columns=cols) | Convert an element with DataType Sequence to a DataFrame.
Note this may be a naive implementation as I assume that bulk data is always a table | Below is the instruction that describes the task:
### Input:
Convert an element with DataType Sequence to a DataFrame.
Note this may be a naive implementation as I assume that bulk data is always a table
### Response:
def get_sequence_value(node):
"""Convert an element with DataType Sequence to a DataFrame.
Note this may be a naive implementation as I assume that bulk data is always a table
"""
assert node.Datatype == 15
data = defaultdict(list)
cols = []
for i in range(node.NumValues):
row = node.GetValue(i)
if i == 0: # Get the ordered cols and assume they are constant
cols = [str(row.GetElement(_).Name) for _ in range(row.NumElements)]
for cidx in range(row.NumElements):
col = row.GetElement(cidx)
data[str(col.Name)].append(XmlHelper.as_value(col))
return DataFrame(data, columns=cols) |
def run(self, tag=None, output=None, **kwargs):
"""
runs the extractor
Args:
-----
output: ['filepath', None]
"""
start = datetime.datetime.now()
count = 0
if tag:
tag = Uri(tag)
xml_generator = etree.iterparse(self.source,
#events=("start", "end"),
tag=tag.etree)
else:
xml_generator = etree.iterparse(self.source) #,
#events=("start", "end"))
i = 0
for event, element in xml_generator:
type_tags = element.findall(_RDF_TYPE_TAG)
rdf_types = [el.get(_RES_TAG)
for el in type_tags
if el.get(_RES_TAG)]
# print(rdf_types)
if str(self.filter_val) in rdf_types:
pdb.set_trace()
# print("%s - %s - %s - %s" % (event,
# element.tag,
# element.attrib,
# element.text))
count += 1
# if i == 100:
# break
i += 1
element.clear()
print("Found '{}' items in {}".format(count,
(datetime.datetime.now() - start))) | runs the extractor
Args:
-----
output: ['filepath', None] | Below is the instruction that describes the task:
### Input:
runs the extractor
Args:
-----
output: ['filepath', None]
### Response:
def run(self, tag=None, output=None, **kwargs):
"""
runs the extractor
Args:
-----
output: ['filepath', None]
"""
start = datetime.datetime.now()
count = 0
if tag:
tag = Uri(tag)
xml_generator = etree.iterparse(self.source,
#events=("start", "end"),
tag=tag.etree)
else:
xml_generator = etree.iterparse(self.source) #,
#events=("start", "end"))
i = 0
for event, element in xml_generator:
type_tags = element.findall(_RDF_TYPE_TAG)
rdf_types = [el.get(_RES_TAG)
for el in type_tags
if el.get(_RES_TAG)]
# print(rdf_types)
if str(self.filter_val) in rdf_types:
pdb.set_trace()
# print("%s - %s - %s - %s" % (event,
# element.tag,
# element.attrib,
# element.text))
count += 1
# if i == 100:
# break
i += 1
element.clear()
print("Found '{}' items in {}".format(count,
(datetime.datetime.now() - start))) |
def __pop_frames_above(self, frame):
"""Pops all the frames above, but not including the given frame."""
while self.__stack[-1] is not frame:
self.__pop_top_frame()
assert self.__stack | Pops all the frames above, but not including the given frame. | Below is the instruction that describes the task:
### Input:
Pops all the frames above, but not including the given frame.
### Response:
def __pop_frames_above(self, frame):
"""Pops all the frames above, but not including the given frame."""
while self.__stack[-1] is not frame:
self.__pop_top_frame()
assert self.__stack |
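The invariant in `__pop_frames_above` — pop until the target frame is on top, comparing by identity (`is`) rather than equality — can be seen with plain lists and sentinel objects (a standalone sketch, not the original class):

```python
def pop_frames_above(stack, frame):
    # Pop everything above `frame`; `frame` itself stays on the stack.
    while stack[-1] is not frame:
        stack.pop()
    assert stack  # the target frame must still be present

a, b, c = object(), object(), object()
stack = [a, b, c]
pop_frames_above(stack, a)
print(stack == [a])  # True
```

If `frame` is already on top, the loop body never runs, so the call is a no-op.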
def _init_map(self):
"""stub"""
super(TextsAnswerFormRecord, self)._init_map()
self.my_osid_object_form._my_map['minStringLength'] = \
self._min_string_length_metadata['default_cardinal_values'][0]
self.my_osid_object_form._my_map['maxStringLength'] = \
self._max_string_length_metadata['default_cardinal_values'][0] | stub | Below is the instruction that describes the task:
### Input:
stub
### Response:
def _init_map(self):
"""stub"""
super(TextsAnswerFormRecord, self)._init_map()
self.my_osid_object_form._my_map['minStringLength'] = \
self._min_string_length_metadata['default_cardinal_values'][0]
self.my_osid_object_form._my_map['maxStringLength'] = \
self._max_string_length_metadata['default_cardinal_values'][0] |
def get_variants(self, arch=None, types=None, recursive=False):
"""
Return all variants of given arch and types.
Supported variant types:
self - include the top-level ("self") variant as well
addon
variant
optional
"""
types = types or []
result = []
if "self" in types:
result.append(self)
for variant in six.itervalues(self.variants):
if types and variant.type not in types:
continue
if arch and arch not in variant.arches.union(["src"]):
continue
result.append(variant)
if recursive:
result.extend(variant.get_variants(types=[i for i in types if i != "self"], recursive=True))
result.sort(key=lambda x: x.uid)
return result | Return all variants of given arch and types.
Supported variant types:
self - include the top-level ("self") variant as well
addon
variant
optional | Below is the instruction that describes the task:
### Input:
Return all variants of given arch and types.
Supported variant types:
self - include the top-level ("self") variant as well
addon
variant
optional
### Response:
def get_variants(self, arch=None, types=None, recursive=False):
"""
Return all variants of given arch and types.
Supported variant types:
self - include the top-level ("self") variant as well
addon
variant
optional
"""
types = types or []
result = []
if "self" in types:
result.append(self)
for variant in six.itervalues(self.variants):
if types and variant.type not in types:
continue
if arch and arch not in variant.arches.union(["src"]):
continue
result.append(variant)
if recursive:
result.extend(variant.get_variants(types=[i for i in types if i != "self"], recursive=True))
result.sort(key=lambda x: x.uid)
return result |
def pivot_query(facet=None, facet_pivot_fields=None, **kwargs):
"""
Pivot query
"""
if facet_pivot_fields is None:
facet_pivot_fields = []
results = search_associations(rows=0,
facet_fields=[facet],
#facet_pivot_fields=facet_pivot_fields + [facet],
facet_pivot_fields=facet_pivot_fields,
**kwargs)
return results | Pivot query | Below is the instruction that describes the task:
### Input:
Pivot query
### Response:
def pivot_query(facet=None, facet_pivot_fields=None, **kwargs):
"""
Pivot query
"""
if facet_pivot_fields is None:
facet_pivot_fields = []
results = search_associations(rows=0,
facet_fields=[facet],
#facet_pivot_fields=facet_pivot_fields + [facet],
facet_pivot_fields=facet_pivot_fields,
**kwargs)
return results |
def set_autocommit(self, conn, autocommit):
"""
Sets the autocommit flag on the connection
"""
if not self.supports_autocommit and autocommit:
self.log.warn(
("%s connection doesn't support "
"autocommit but autocommit activated."),
getattr(self, self.conn_name_attr))
conn.autocommit = autocommit | Sets the autocommit flag on the connection | Below is the instruction that describes the task:
### Input:
Sets the autocommit flag on the connection
### Response:
def set_autocommit(self, conn, autocommit):
"""
Sets the autocommit flag on the connection
"""
if not self.supports_autocommit and autocommit:
self.log.warn(
("%s connection doesn't support "
"autocommit but autocommit activated."),
getattr(self, self.conn_name_attr))
conn.autocommit = autocommit |
def wrap(self, wrapper):
"""
Returns the first function passed as an argument to the second,
allowing you to adjust arguments, run code before and after, and
conditionally execute the original function.
"""
def wrapped(*args, **kwargs):
if kwargs:
kwargs["object"] = self.obj
else:
args = list(args)
args.insert(0, self.obj)
return wrapper(*args, **kwargs)
return self._wrap(wrapped) | Returns the first function passed as an argument to the second,
allowing you to adjust arguments, run code before and after, and
conditionally execute the original function. | Below is the instruction that describes the task:
### Input:
Returns the first function passed as an argument to the second,
allowing you to adjust arguments, run code before and after, and
conditionally execute the original function.
### Response:
def wrap(self, wrapper):
"""
Returns the first function passed as an argument to the second,
allowing you to adjust arguments, run code before and after, and
conditionally execute the original function.
"""
def wrapped(*args, **kwargs):
if kwargs:
kwargs["object"] = self.obj
else:
args = list(args)
args.insert(0, self.obj)
return wrapper(*args, **kwargs)
return self._wrap(wrapped) |
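The `wrap` method is the underscore.js-style wrapper: the wrapped object is handed to `wrapper` as the first positional argument, or as an `object` keyword when the call uses keyword arguments. A standalone function version of the same idea:

```python
def wrap(obj, wrapper):
    # Build a function that hands `obj` to `wrapper` along with the call's args.
    def wrapped(*args, **kwargs):
        if kwargs:
            kwargs['object'] = obj   # keyword call: pass obj as 'object'
        else:
            args = (obj,) + args     # positional call: prepend obj
        return wrapper(*args, **kwargs)
    return wrapped

greet = wrap('hello', lambda greeting, name: '{}, {}!'.format(greeting, name))
print(greet('world'))  # hello, world!
```

This lets the wrapper run code before and after, adjust arguments, or skip calling anything at all, exactly as the docstring describes.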
def add_filter(self, filter_values):
"""
Construct a filter.
"""
if not filter_values:
return
f = self.get_value_filter(filter_values[0])
for v in filter_values[1:]:
f |= self.get_value_filter(v)
return f | Construct a filter. | Below is the instruction that describes the task:
### Input:
Construct a filter.
### Response:
def add_filter(self, filter_values):
"""
Construct a filter.
"""
if not filter_values:
return
f = self.get_value_filter(filter_values[0])
for v in filter_values[1:]:
f |= self.get_value_filter(v)
return f |
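`add_filter` folds per-value filters together with `|=`, so the result matches when *any* value filter matches, and an empty value list yields no filter at all. The same shape with plain predicates standing in for the filter objects (a sketch; `get_value_filter` is replaced by a caller-supplied factory):

```python
def any_of(values, make_pred):
    # OR-combine one predicate per value; return None for an empty value list,
    # mirroring the early return in add_filter.
    if not values:
        return None
    preds = [make_pred(v) for v in values]
    return lambda x: any(p(x) for p in preds)

is_primary = any_of(['red', 'green', 'blue'], lambda v: lambda x: x == v)
print(is_primary('green'))  # True
print(is_primary('pink'))   # False
print(any_of([], lambda v: lambda x: False))  # None
```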
def key_value(minion_id,
pillar, # pylint: disable=W0613
pillar_key='redis_pillar'):
'''
Looks for key in redis matching minion_id, returns a structure based on the
data type of the redis key. String for string type, dict for hash type and
lists for lists, sets and sorted sets.
pillar_key
Pillar key to return data into
'''
# Identify key type and process as needed based on that type
key_type = __salt__['redis.key_type'](minion_id)
if key_type == 'string':
return {pillar_key: __salt__['redis.get_key'](minion_id)}
elif key_type == 'hash':
return {pillar_key: __salt__['redis.hgetall'](minion_id)}
elif key_type == 'list':
list_size = __salt__['redis.llen'](minion_id)
if not list_size:
return {}
return {pillar_key: __salt__['redis.lrange'](minion_id, 0,
list_size - 1)}
elif key_type == 'set':
return {pillar_key: __salt__['redis.smembers'](minion_id)}
elif key_type == 'zset':
set_size = __salt__['redis.zcard'](minion_id)
if not set_size:
return {}
return {pillar_key: __salt__['redis.zrange'](minion_id, 0,
set_size - 1)}
# Return nothing for unhandled types
return {} | Looks for key in redis matching minion_id, returns a structure based on the
data type of the redis key. String for string type, dict for hash type and
lists for lists, sets and sorted sets.
pillar_key
Pillar key to return data into | Below is the instruction that describes the task:
### Input:
Looks for key in redis matching minion_id, returns a structure based on the
data type of the redis key. String for string type, dict for hash type and
lists for lists, sets and sorted sets.
pillar_key
Pillar key to return data into
### Response:
def key_value(minion_id,
pillar, # pylint: disable=W0613
pillar_key='redis_pillar'):
'''
Looks for key in redis matching minion_id, returns a structure based on the
data type of the redis key. String for string type, dict for hash type and
lists for lists, sets and sorted sets.
pillar_key
Pillar key to return data into
'''
# Identify key type and process as needed based on that type
key_type = __salt__['redis.key_type'](minion_id)
if key_type == 'string':
return {pillar_key: __salt__['redis.get_key'](minion_id)}
elif key_type == 'hash':
return {pillar_key: __salt__['redis.hgetall'](minion_id)}
elif key_type == 'list':
list_size = __salt__['redis.llen'](minion_id)
if not list_size:
return {}
return {pillar_key: __salt__['redis.lrange'](minion_id, 0,
list_size - 1)}
elif key_type == 'set':
return {pillar_key: __salt__['redis.smembers'](minion_id)}
elif key_type == 'zset':
set_size = __salt__['redis.zcard'](minion_id)
if not set_size:
return {}
return {pillar_key: __salt__['redis.zrange'](minion_id, 0,
set_size - 1)}
# Return nothing for unhandled types
return {} |
def generate_corpus(self, text):
"""
Given a text string, returns a list of lists; that is, a list of
"sentences," each of which is a list of words. Before splitting into
words, the sentences are filtered through `self.test_sentence_input`
"""
if isinstance(text, str):
sentences = self.sentence_split(text)
else:
sentences = []
for line in text:
sentences += self.sentence_split(line)
passing = filter(self.test_sentence_input, sentences)
runs = map(self.word_split, passing)
return runs | Given a text string, returns a list of lists; that is, a list of
"sentences," each of which is a list of words. Before splitting into
words, the sentences are filtered through `self.test_sentence_input` | Below is the instruction that describes the task:
### Input:
Given a text string, returns a list of lists; that is, a list of
"sentences," each of which is a list of words. Before splitting into
words, the sentences are filtered through `self.test_sentence_input`
### Response:
def generate_corpus(self, text):
"""
Given a text string, returns a list of lists; that is, a list of
"sentences," each of which is a list of words. Before splitting into
words, the sentences are filtered through `self.test_sentence_input`
"""
if isinstance(text, str):
sentences = self.sentence_split(text)
else:
sentences = []
for line in text:
sentences += self.sentence_split(line)
passing = filter(self.test_sentence_input, sentences)
runs = map(self.word_split, passing)
return runs |
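The corpus pipeline above is split → filter → word-split. With a naive period-based sentence splitter standing in for `self.sentence_split` (an illustrative stub, not the class's real splitter), the flow looks like:

```python
def sentence_split(text):
    # Naive stand-in: split on periods and drop empty fragments.
    return [s.strip() for s in text.split('.') if s.strip()]

def generate_corpus(text, test_sentence=lambda s: len(s.split()) > 1):
    if isinstance(text, str):
        sentences = sentence_split(text)
    else:
        sentences = [s for line in text for s in sentence_split(line)]
    passing = filter(test_sentence, sentences)  # drop rejected sentences
    return [s.split() for s in passing]         # each sentence -> list of words

print(generate_corpus('Spam and eggs. Ham. More spam today.'))
# [['Spam', 'and', 'eggs'], ['More', 'spam', 'today']]
```

The single-word sentence "Ham" is rejected by the filter, so only the two multi-word sentences survive as word lists.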
def hjorth(X, D=None):
""" Compute Hjorth mobility and complexity of a time series from either two
cases below:
1. X, the time series of type list (default)
2. D, a first order differential sequence of X (if D is provided,
recommended to speed up)
In case 1, D is computed using Numpy's Difference function.
Notes
-----
To speed up, it is recommended to compute D before calling this function
because D may also be used by other functions whereas computing it here
again will slow down.
Parameters
----------
X
list
a time series
D
list
first order differential sequence of a time series
Returns
-------
As indicated in return line
Hjorth mobility and complexity
"""
if D is None:
D = numpy.diff(X)
D = D.tolist()
D.insert(0, X[0]) # pad the first difference
D = numpy.array(D)
n = len(X)
M2 = float(sum(D ** 2)) / n
TP = sum(numpy.array(X) ** 2)
M4 = 0
for i in range(1, len(D)):
M4 += (D[i] - D[i - 1]) ** 2
M4 = M4 / n
return numpy.sqrt(M2 / TP), numpy.sqrt(
float(M4) * TP / M2 / M2
) | Compute Hjorth mobility and complexity of a time series from either two
cases below:
1. X, the time series of type list (default)
2. D, a first order differential sequence of X (if D is provided,
recommended to speed up)
In case 1, D is computed using Numpy's Difference function.
Notes
-----
To speed up, it is recommended to compute D before calling this function
because D may also be used by other functions whereas computing it here
again will slow down.
Parameters
----------
X
list
a time series
D
list
first order differential sequence of a time series
Returns
-------
As indicated in return line
Hjorth mobility and complexity | Below is the instruction that describes the task:
### Input:
Compute Hjorth mobility and complexity of a time series from either two
cases below:
1. X, the time series of type list (default)
2. D, a first order differential sequence of X (if D is provided,
recommended to speed up)
In case 1, D is computed using Numpy's Difference function.
Notes
-----
To speed up, it is recommended to compute D before calling this function
because D may also be used by other functions whereas computing it here
again will slow down.
Parameters
----------
X
list
a time series
D
list
first order differential sequence of a time series
Returns
-------
As indicated in return line
Hjorth mobility and complexity
### Response:
def hjorth(X, D=None):
""" Compute Hjorth mobility and complexity of a time series from either two
cases below:
1. X, the time series of type list (default)
2. D, a first order differential sequence of X (if D is provided,
recommended to speed up)
In case 1, D is computed using Numpy's Difference function.
Notes
-----
To speed up, it is recommended to compute D before calling this function
because D may also be used by other functions whereas computing it here
again will slow down.
Parameters
----------
X
list
a time series
D
list
first order differential sequence of a time series
Returns
-------
As indicated in return line
Hjorth mobility and complexity
"""
if D is None:
D = numpy.diff(X)
D = D.tolist()
D.insert(0, X[0]) # pad the first difference
D = numpy.array(D)
n = len(X)
M2 = float(sum(D ** 2)) / n
TP = sum(numpy.array(X) ** 2)
M4 = 0
for i in range(1, len(D)):
M4 += (D[i] - D[i - 1]) ** 2
M4 = M4 / n
return numpy.sqrt(M2 / TP), numpy.sqrt(
float(M4) * TP / M2 / M2
) |
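The `hjorth` record above can be exercised without NumPy; this dependency-free sketch reproduces the same arithmetic (first difference padded with `X[0]`, mobility `sqrt(M2/TP)`, complexity `sqrt(M4*TP/M2^2)`):

```python
import math

def hjorth(X):
    # First-order difference, padded with X[0] as in the record above.
    D = [X[0]] + [X[i] - X[i - 1] for i in range(1, len(X))]
    n = len(X)
    M2 = sum(d * d for d in D) / n   # mean power of the derivative
    TP = sum(x * x for x in X)       # total power of the signal
    M4 = sum((D[i] - D[i - 1]) ** 2 for i in range(1, len(D))) / n
    mobility = math.sqrt(M2 / TP)
    complexity = math.sqrt(M4 * TP / M2 / M2)
    return mobility, complexity
```

For `X = [1, 2, 3]` the padded difference is constant, so `M4` is zero and the complexity comes out as exactly 0.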
def _date_based_where(self, type, query, where):
"""
Compile a date-based where clause
:param type: The date type
:type type: str
:param query: A QueryBuilder instance
:type query: QueryBuilder
:param where: The condition
:type where: dict
:return: The compiled clause
:rtype: str
"""
value = str(where['value']).zfill(2)
value = self.parameter(value)
return 'strftime(\'%s\', %s) %s %s'\
% (type, self.wrap(where['column']),
where['operator'], value) | Compile a date-based where clause
:param type: The date type
:type type: str
:param query: A QueryBuilder instance
:type query: QueryBuilder
:param where: The condition
:type where: dict
:return: The compiled clause
:rtype: str | Below is the instruction that describes the task:
### Input:
Compile a date-based where clause
:param type: The date type
:type type: str
:param query: A QueryBuilder instance
:type query: QueryBuilder
:param where: The condition
:type where: dict
:return: The compiled clause
:rtype: str
### Response:
def _date_based_where(self, type, query, where):
"""
Compile a date-based where clause
:param type: The date type
:type type: str
:param query: A QueryBuilder instance
:type query: QueryBuilder
:param where: The condition
:type where: dict
:return: The compiled clause
:rtype: str
"""
value = str(where['value']).zfill(2)
value = self.parameter(value)
return 'strftime(\'%s\', %s) %s %s'\
% (type, self.wrap(where['column']),
where['operator'], value) |
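The `_date_based_where` record zero-pads the value and wraps the column in a SQLite `strftime()` comparison. A standalone sketch of the clause building (quoting is simplified here: the original binds the value as a query parameter and wraps the column via the grammar's `wrap`, both assumptions in this version):

```python
def date_based_where(date_type, column, operator, value):
    # Zero-pad the value as the record does (e.g. month 5 -> "05"),
    # then build a SQLite strftime() comparison clause.
    padded = str(value).zfill(2)
    return "strftime('%s', \"%s\") %s '%s'" % (date_type, column, operator, padded)
```

In real query builders the value should be a bound parameter, not an inlined literal, to avoid SQL injection.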
def make_ro(obj: Any, forgive_type=False):
"""
Make a json-serializable type recursively read-only
:param obj: Any json-serializable type
:param forgive_type: If you can forgive a type to be unknown (instead of
raising an exception)
"""
if isinstance(obj, (str, bytes, int, float, bool, RoDict, RoList)) \
or obj is None:
return obj
elif isinstance(obj, Mapping):
return RoDict(obj, forgive_type)
elif isinstance(obj, Sequence):
return RoList(obj, forgive_type)
elif forgive_type:
return obj
else:
raise ValueError('Trying to make read-only an object of type "{}"'
.format(obj.__class__.__name__)) | Make a json-serializable type recursively read-only
:param obj: Any json-serializable type
:param forgive_type: If you can forgive a type to be unknown (instead of
raising an exception) | Below is the instruction that describes the task:
### Input:
Make a json-serializable type recursively read-only
:param obj: Any json-serializable type
:param forgive_type: If you can forgive a type to be unknown (instead of
raising an exception)
### Response:
def make_ro(obj: Any, forgive_type=False):
"""
Make a json-serializable type recursively read-only
:param obj: Any json-serializable type
:param forgive_type: If you can forgive a type to be unknown (instead of
raising an exception)
"""
if isinstance(obj, (str, bytes, int, float, bool, RoDict, RoList)) \
or obj is None:
return obj
elif isinstance(obj, Mapping):
return RoDict(obj, forgive_type)
elif isinstance(obj, Sequence):
return RoList(obj, forgive_type)
elif forgive_type:
return obj
else:
raise ValueError('Trying to make read-only an object of type "{}"'
.format(obj.__class__.__name__)) |
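The `make_ro` record relies on project-specific `RoDict`/`RoList` wrappers. A stdlib-only sketch of the same idea, assuming read-only dict proxies and tuples are acceptable substitutes for those wrapper classes:

```python
from types import MappingProxyType
from collections.abc import Mapping, Sequence

def make_ro(obj):
    # Recursively freeze JSON-like data: dicts become read-only proxies,
    # lists become tuples, scalars pass through unchanged.
    if isinstance(obj, Mapping):
        return MappingProxyType({k: make_ro(v) for k, v in obj.items()})
    if isinstance(obj, Sequence) and not isinstance(obj, (str, bytes)):
        return tuple(make_ro(v) for v in obj)
    return obj
```

Attempting to assign into the result raises `TypeError`, which is the behavior the original wrappers enforce.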
def get_rows(self, indexes, as_list=False):
"""
For a list of indexes return the values of the indexes in that column.
:param indexes: either a list of index values or a list of booleans with same length as all indexes
:param as_list: if True return a list, if False return Series
:return: Series if as_list is False, a list if as_list is True
"""
if all([isinstance(i, bool) for i in indexes]): # boolean list
if len(indexes) != len(self._index):
raise ValueError('boolean index list must be same size of existing index')
if all(indexes): # the entire column
data = self._data
index = self._index
else:
data = list(compress(self._data, indexes))
index = list(compress(self._index, indexes))
else: # index values list
locations = [sorted_index(self._index, x) for x in indexes] if self._sort \
else [self._index.index(x) for x in indexes]
data = [self._data[i] for i in locations]
index = [self._index[i] for i in locations]
return data if as_list else Series(data=data, index=index, data_name=self._data_name,
index_name=self._index_name, sort=self._sort) | For a list of indexes return the values of the indexes in that column.
:param indexes: either a list of index values or a list of booleans with same length as all indexes
:param as_list: if True return a list, if False return Series
:return: Series if as_list is False, a list if as_list is True | Below is the instruction that describes the task:
### Input:
For a list of indexes return the values of the indexes in that column.
:param indexes: either a list of index values or a list of booleans with same length as all indexes
:param as_list: if True return a list, if False return Series
:return: Series if as_list is False, a list if as_list is True
### Response:
def get_rows(self, indexes, as_list=False):
"""
For a list of indexes return the values of the indexes in that column.
:param indexes: either a list of index values or a list of booleans with same length as all indexes
:param as_list: if True return a list, if False return Series
:return: Series if as_list is False, a list if as_list is True
"""
if all([isinstance(i, bool) for i in indexes]): # boolean list
if len(indexes) != len(self._index):
raise ValueError('boolean index list must be same size of existing index')
if all(indexes): # the entire column
data = self._data
index = self._index
else:
data = list(compress(self._data, indexes))
index = list(compress(self._index, indexes))
else: # index values list
locations = [sorted_index(self._index, x) for x in indexes] if self._sort \
else [self._index.index(x) for x in indexes]
data = [self._data[i] for i in locations]
index = [self._index[i] for i in locations]
return data if as_list else Series(data=data, index=index, data_name=self._data_name,
index_name=self._index_name, sort=self._sort) |
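The `get_rows` record dispatches on whether the selector is a boolean mask or a list of index values, using `itertools.compress` for the mask case. A minimal function-level sketch of that dispatch (the `Series` wrapper and sorted-index fast path are dropped here as simplifying assumptions):

```python
from itertools import compress

def get_rows(index, data, selector):
    # Boolean mask: must match the index length; otherwise treat the
    # selector as a list of index values, as the record above does.
    if all(isinstance(i, bool) for i in selector):
        if len(selector) != len(index):
            raise ValueError("boolean index list must be same size as index")
        return list(compress(index, selector)), list(compress(data, selector))
    locations = [index.index(x) for x in selector]
    return [index[i] for i in locations], [data[i] for i in locations]
```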
def get_matching_resource_types(resource_type, resource_id,**kwargs):
"""
Get the possible types of a resource by checking its attributes
against all available types.
@returns A list of TypeSummary objects.
"""
resource_i = None
if resource_type == 'NETWORK':
resource_i = db.DBSession.query(Network).filter(Network.id==resource_id).one()
elif resource_type == 'NODE':
resource_i = db.DBSession.query(Node).filter(Node.id==resource_id).one()
elif resource_type == 'LINK':
resource_i = db.DBSession.query(Link).filter(Link.id==resource_id).one()
elif resource_type == 'GROUP':
resource_i = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id==resource_id).one()
matching_types = get_types_by_attr(resource_i)
return matching_types | Get the possible types of a resource by checking its attributes
against all available types.
@returns A list of TypeSummary objects. | Below is the instruction that describes the task:
### Input:
Get the possible types of a resource by checking its attributes
against all available types.
@returns A list of TypeSummary objects.
### Response:
def get_matching_resource_types(resource_type, resource_id,**kwargs):
"""
Get the possible types of a resource by checking its attributes
against all available types.
@returns A list of TypeSummary objects.
"""
resource_i = None
if resource_type == 'NETWORK':
resource_i = db.DBSession.query(Network).filter(Network.id==resource_id).one()
elif resource_type == 'NODE':
resource_i = db.DBSession.query(Node).filter(Node.id==resource_id).one()
elif resource_type == 'LINK':
resource_i = db.DBSession.query(Link).filter(Link.id==resource_id).one()
elif resource_type == 'GROUP':
resource_i = db.DBSession.query(ResourceGroup).filter(ResourceGroup.id==resource_id).one()
matching_types = get_types_by_attr(resource_i)
return matching_types |
def offer(self, item, timeout=0):
"""
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity
restrictions. Returns ``true`` upon success. If there is no space currently available:
* If a timeout is provided, it waits until this timeout elapses and returns the result.
* If a timeout is not provided, returns ``false`` immediately.
:param item: (object), the item to be added.
:param timeout: (long), maximum time in seconds to wait for addition (optional).
:return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise.
"""
check_not_none(item, "Value can't be None")
element_data = self._to_data(item)
return self._encode_invoke(queue_offer_codec, value=element_data, timeout_millis=to_millis(timeout)) | Inserts the specified element into this queue if it is possible to do so immediately without violating capacity
restrictions. Returns ``true`` upon success. If there is no space currently available:
* If a timeout is provided, it waits until this timeout elapses and returns the result.
* If a timeout is not provided, returns ``false`` immediately.
:param item: (object), the item to be added.
:param timeout: (long), maximum time in seconds to wait for addition (optional).
:return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise. | Below is the instruction that describes the task:
### Input:
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity
restrictions. Returns ``true`` upon success. If there is no space currently available:
* If a timeout is provided, it waits until this timeout elapses and returns the result.
* If a timeout is not provided, returns ``false`` immediately.
:param item: (object), the item to be added.
:param timeout: (long), maximum time in seconds to wait for addition (optional).
:return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise.
### Response:
def offer(self, item, timeout=0):
"""
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity
restrictions. Returns ``true`` upon success. If there is no space currently available:
* If a timeout is provided, it waits until this timeout elapses and returns the result.
* If a timeout is not provided, returns ``false`` immediately.
:param item: (object), the item to be added.
:param timeout: (long), maximum time in seconds to wait for addition (optional).
:return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise.
"""
check_not_none(item, "Value can't be None")
element_data = self._to_data(item)
return self._encode_invoke(queue_offer_codec, value=element_data, timeout_millis=to_millis(timeout)) |
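The Hazelcast `offer` record above has the same contract as a timed put on a bounded queue: succeed immediately, or wait up to the timeout and report failure as `False`. A local sketch of that contract using the stdlib `queue.Queue` (a stand-in, not the distributed implementation):

```python
import queue

def offer(q, item, timeout=0):
    # Try to enqueue; block up to `timeout` seconds when the queue is full,
    # returning False instead of raising, like the record's offer().
    if item is None:
        raise AssertionError("Value can't be None")
    try:
        if timeout:
            q.put(item, block=True, timeout=timeout)
        else:
            q.put_nowait(item)
        return True
    except queue.Full:
        return False
```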
def find_matching_files(self, includes):
"""
For various actions we need files that match patterns
"""
if len(includes) == 0:
return []
files = [f['relativepath'] for f in self.package['resources']]
includes = r'|'.join([fnmatch.translate(x) for x in includes])
# Match both the file name as well the path..
files = [f for f in files if re.match(includes, os.path.basename(f))] + \
[f for f in files if re.match(includes, f)]
files = list(set(files))
return files | For various actions we need files that match patterns | Below is the the instruction that describes the task:
### Input:
For various actions we need files that match patterns
### Response:
def find_matching_files(self, includes):
"""
For various actions we need files that match patterns
"""
if len(includes) == 0:
return []
files = [f['relativepath'] for f in self.package['resources']]
includes = r'|'.join([fnmatch.translate(x) for x in includes])
# Match both the file name as well the path..
files = [f for f in files if re.match(includes, os.path.basename(f))] + \
[f for f in files if re.match(includes, f)]
files = list(set(files))
return files |
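The `find_matching_files` record turns glob patterns into one alternation regex with `fnmatch.translate` and matches both the basename and the full path. A self-contained sketch over a plain list of paths (the `self.package['resources']` lookup is replaced by an explicit argument):

```python
import fnmatch
import os
import re

def find_matching_files(paths, includes):
    # Join the glob patterns into one alternation regex, then match each
    # file by basename as well as by full relative path.
    if not includes:
        return []
    pattern = r"|".join(fnmatch.translate(x) for x in includes)
    hits = [f for f in paths if re.match(pattern, os.path.basename(f))]
    hits += [f for f in paths if re.match(pattern, f)]
    return sorted(set(hits))
```

Because `fnmatch.translate` produces an anchored `(?s:...)\\Z` pattern, `*` in a pattern will also cross `/` when matched against a full path, which is why the record matches basenames separately as well.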
def create_dir(self, jbfile):
"""Create a dir for the given dirfile and display an error message, if it fails.
:param jbfile: the jb file to make the directory for
:type jbfile: class:`JB_File`
:returns: None
:rtype: None
:raises: None
"""
try:
jbfile.create_directory()
except os.error:
self.statusbar.showMessage('Could not create path: %s' % jbfile.get_path()) | Create a dir for the given dirfile and display an error message, if it fails.
:param jbfile: the jb file to make the directory for
:type jbfile: class:`JB_File`
:returns: None
:rtype: None
:raises: None | Below is the instruction that describes the task:
### Input:
Create a dir for the given dirfile and display an error message, if it fails.
:param jbfile: the jb file to make the directory for
:type jbfile: class:`JB_File`
:returns: None
:rtype: None
:raises: None
### Response:
def create_dir(self, jbfile):
"""Create a dir for the given dirfile and display an error message, if it fails.
:param jbfile: the jb file to make the directory for
:type jbfile: class:`JB_File`
:returns: None
:rtype: None
:raises: None
"""
try:
jbfile.create_directory()
except os.error:
self.statusbar.showMessage('Could not create path: %s' % jbfile.get_path()) |
def p_NonAnyType_interface(p):
"""NonAnyType : IDENTIFIER TypeSuffix"""
p[0] = helper.unwrapTypeSuffix(model.InterfaceType(name=p[1]), p[2]) | NonAnyType : IDENTIFIER TypeSuffix | Below is the the instruction that describes the task:
### Input:
NonAnyType : IDENTIFIER TypeSuffix
### Response:
def p_NonAnyType_interface(p):
"""NonAnyType : IDENTIFIER TypeSuffix"""
p[0] = helper.unwrapTypeSuffix(model.InterfaceType(name=p[1]), p[2]) |
def get_path_modified_time(path):
"""
Returns given path modification time.
:param path: Path.
:type path: unicode
:return: Modification time.
:rtype: int
"""
return float(foundations.common.get_first_item(str(os.path.getmtime(path)).split("."))) | Returns given path modification time.
:param path: Path.
:type path: unicode
:return: Modification time.
:rtype: int | Below is the instruction that describes the task:
### Input:
Returns given path modification time.
:param path: Path.
:type path: unicode
:return: Modification time.
:rtype: int
### Response:
def get_path_modified_time(path):
"""
Returns given path modification time.
:param path: Path.
:type path: unicode
:return: Modification time.
:rtype: int
"""
return float(foundations.common.get_first_item(str(os.path.getmtime(path)).split("."))) |
def spawn_i3status(self):
"""
Spawn i3status using a self generated config file and poll its output.
"""
try:
with NamedTemporaryFile(prefix="py3status_") as tmpfile:
self.write_tmp_i3status_config(tmpfile)
i3status_pipe = Popen(
[self.i3status_path, "-c", tmpfile.name],
stdout=PIPE,
stderr=PIPE,
# Ignore the SIGTSTP signal for this subprocess
preexec_fn=lambda: signal(SIGTSTP, SIG_IGN),
)
self.py3_wrapper.log(
"i3status spawned using config file {}".format(tmpfile.name)
)
self.poller_inp = IOPoller(i3status_pipe.stdout)
self.poller_err = IOPoller(i3status_pipe.stderr)
self.tmpfile_path = tmpfile.name
# Store the pipe so we can signal it
self.i3status_pipe = i3status_pipe
try:
# loop on i3status output
while self.py3_wrapper.running:
line = self.poller_inp.readline()
if line:
# remove leading comma if present
if line[0] == ",":
line = line[1:]
if line.startswith("[{"):
json_list = loads(line)
self.last_output = json_list
self.set_responses(json_list)
self.ready = True
else:
err = self.poller_err.readline()
code = i3status_pipe.poll()
if code is not None:
msg = "i3status died"
if err:
msg += " and said: {}".format(err)
else:
msg += " with code {}".format(code)
raise IOError(msg)
except IOError:
err = sys.exc_info()[1]
self.error = err
self.py3_wrapper.log(err, "error")
except OSError:
self.error = "Problem starting i3status maybe it is not installed"
except Exception:
self.py3_wrapper.report_exception("", notify_user=True)
self.i3status_pipe = None | Spawn i3status using a self generated config file and poll its output. | Below is the the instruction that describes the task:
### Input:
Spawn i3status using a self generated config file and poll its output.
### Response:
def spawn_i3status(self):
"""
Spawn i3status using a self generated config file and poll its output.
"""
try:
with NamedTemporaryFile(prefix="py3status_") as tmpfile:
self.write_tmp_i3status_config(tmpfile)
i3status_pipe = Popen(
[self.i3status_path, "-c", tmpfile.name],
stdout=PIPE,
stderr=PIPE,
# Ignore the SIGTSTP signal for this subprocess
preexec_fn=lambda: signal(SIGTSTP, SIG_IGN),
)
self.py3_wrapper.log(
"i3status spawned using config file {}".format(tmpfile.name)
)
self.poller_inp = IOPoller(i3status_pipe.stdout)
self.poller_err = IOPoller(i3status_pipe.stderr)
self.tmpfile_path = tmpfile.name
# Store the pipe so we can signal it
self.i3status_pipe = i3status_pipe
try:
# loop on i3status output
while self.py3_wrapper.running:
line = self.poller_inp.readline()
if line:
# remove leading comma if present
if line[0] == ",":
line = line[1:]
if line.startswith("[{"):
json_list = loads(line)
self.last_output = json_list
self.set_responses(json_list)
self.ready = True
else:
err = self.poller_err.readline()
code = i3status_pipe.poll()
if code is not None:
msg = "i3status died"
if err:
msg += " and said: {}".format(err)
else:
msg += " with code {}".format(code)
raise IOError(msg)
except IOError:
err = sys.exc_info()[1]
self.error = err
self.py3_wrapper.log(err, "error")
except OSError:
self.error = "Problem starting i3status maybe it is not installed"
except Exception:
self.py3_wrapper.report_exception("", notify_user=True)
self.i3status_pipe = None |
def highlight_line(self, payload):
"""
:type payload: str
:param payload: string to highlight, on chosen line
"""
index_of_payload = self.target_line.lower().index(payload.lower())
end_of_payload = index_of_payload + len(payload)
self.target_line = u'{}{}{}'.format(
self.target_line[:index_of_payload],
self.apply_highlight(self.target_line[index_of_payload:end_of_payload]),
self.target_line[end_of_payload:],
)
return self | :type payload: str
:param payload: string to highlight, on chosen line | Below is the instruction that describes the task:
### Input:
:type payload: str
:param payload: string to highlight, on chosen line
### Response:
def highlight_line(self, payload):
"""
:type payload: str
:param payload: string to highlight, on chosen line
"""
index_of_payload = self.target_line.lower().index(payload.lower())
end_of_payload = index_of_payload + len(payload)
self.target_line = u'{}{}{}'.format(
self.target_line[:index_of_payload],
self.apply_highlight(self.target_line[index_of_payload:end_of_payload]),
self.target_line[end_of_payload:],
)
return self |
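The `highlight_line` record finds the payload case-insensitively but keeps the original casing of the highlighted slice. A standalone sketch of that slicing logic (the ANSI-bold default markers are an assumption; the original delegates to `apply_highlight`):

```python
def highlight(line, payload, start="\x1b[1m", end="\x1b[0m"):
    # Locate the payload case-insensitively, then wrap the original-cased
    # slice in the given markers, as the record's highlight_line does.
    i = line.lower().index(payload.lower())
    j = i + len(payload)
    return line[:i] + start + line[i:j] + end + line[j:]
```

Like the original, this raises `ValueError` when the payload is absent, since `str.index` is used rather than `str.find`.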
def revoke_permission_from_user_groups(self, permission, **kwargs): # noqa: E501
"""Revokes a single permission from user group(s) # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.revoke_permission_from_user_groups(permission, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str permission: Permission to revoke from user group(s). (required)
:param list[str] body: List of user groups.
:return: ResponseContainerUserGroup
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.revoke_permission_from_user_groups_with_http_info(permission, **kwargs) # noqa: E501
else:
(data) = self.revoke_permission_from_user_groups_with_http_info(permission, **kwargs) # noqa: E501
return data | Revokes a single permission from user group(s) # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.revoke_permission_from_user_groups(permission, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str permission: Permission to revoke from user group(s). (required)
:param list[str] body: List of user groups.
:return: ResponseContainerUserGroup
If the method is called asynchronously,
returns the request thread. | Below is the instruction that describes the task:
### Input:
Revokes a single permission from user group(s) # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.revoke_permission_from_user_groups(permission, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str permission: Permission to revoke from user group(s). (required)
:param list[str] body: List of user groups.
:return: ResponseContainerUserGroup
If the method is called asynchronously,
returns the request thread.
### Response:
def revoke_permission_from_user_groups(self, permission, **kwargs): # noqa: E501
"""Revokes a single permission from user group(s) # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.revoke_permission_from_user_groups(permission, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str permission: Permission to revoke from user group(s). (required)
:param list[str] body: List of user groups.
:return: ResponseContainerUserGroup
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.revoke_permission_from_user_groups_with_http_info(permission, **kwargs) # noqa: E501
else:
(data) = self.revoke_permission_from_user_groups_with_http_info(permission, **kwargs) # noqa: E501
return data |
def _convert_hex_str_to_int(val):
"""Convert hexadecimal formatted ids to signed int64"""
if val is None:
return None
hex_num = int(val, 16)
# ensure it fits into 64-bit
if hex_num > 0x7FFFFFFFFFFFFFFF:
hex_num -= 0x10000000000000000
assert -9223372036854775808 <= hex_num <= 9223372036854775807
return hex_num | Convert hexadecimal formatted ids to signed int64 | Below is the the instruction that describes the task:
### Input:
Convert hexadecimal formatted ids to signed int64
### Response:
def _convert_hex_str_to_int(val):
"""Convert hexadecimal formatted ids to signed int64"""
if val is None:
return None
hex_num = int(val, 16)
# ensure it fits into 64-bit
if hex_num > 0x7FFFFFFFFFFFFFFF:
hex_num -= 0x10000000000000000
assert -9223372036854775808 <= hex_num <= 9223372036854775807
return hex_num |
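The `_convert_hex_str_to_int` record reinterprets a hex id as a signed 64-bit integer using two's complement: values with the high bit set are shifted down by 2^64. A standalone version of the same conversion:

```python
def hex_to_int64(val):
    # Parse a hex trace/span id and reinterpret it as a signed 64-bit int
    # (two's complement), returning None for a missing id.
    if val is None:
        return None
    num = int(val, 16)
    if num > 0x7FFFFFFFFFFFFFFF:     # high bit set -> negative in int64
        num -= 0x10000000000000000   # subtract 2**64
    return num
```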
def get_eids(rup_array, samples_by_grp, num_rlzs_by_grp):
"""
:param rup_array: a composite array with fields serial, n_occ and grp_id
:param samples_by_grp: a dictionary grp_id -> samples
:param num_rlzs_by_grp: a dictionary grp_id -> num_rlzs
"""
all_eids = []
for rup in rup_array:
grp_id = rup['grp_id']
samples = samples_by_grp[grp_id]
num_rlzs = num_rlzs_by_grp[grp_id]
num_events = rup['n_occ'] if samples > 1 else rup['n_occ'] * num_rlzs
eids = TWO32 * U64(rup['serial']) + numpy.arange(num_events, dtype=U64)
all_eids.append(eids)
return numpy.concatenate(all_eids) | :param rup_array: a composite array with fields serial, n_occ and grp_id
:param samples_by_grp: a dictionary grp_id -> samples
:param num_rlzs_by_grp: a dictionary grp_id -> num_rlzs | Below is the instruction that describes the task:
### Input:
:param rup_array: a composite array with fields serial, n_occ and grp_id
:param samples_by_grp: a dictionary grp_id -> samples
:param num_rlzs_by_grp: a dictionary grp_id -> num_rlzs
### Response:
def get_eids(rup_array, samples_by_grp, num_rlzs_by_grp):
"""
:param rup_array: a composite array with fields serial, n_occ and grp_id
:param samples_by_grp: a dictionary grp_id -> samples
:param num_rlzs_by_grp: a dictionary grp_id -> num_rlzs
"""
all_eids = []
for rup in rup_array:
grp_id = rup['grp_id']
samples = samples_by_grp[grp_id]
num_rlzs = num_rlzs_by_grp[grp_id]
num_events = rup['n_occ'] if samples > 1 else rup['n_occ'] * num_rlzs
eids = TWO32 * U64(rup['serial']) + numpy.arange(num_events, dtype=U64)
all_eids.append(eids)
return numpy.concatenate(all_eids) |
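The event-id scheme in `get_eids` packs the rupture serial into the high 32 bits (`TWO32 * serial`) and a running event counter into the low bits. The core arithmetic without NumPy, for a single rupture:

```python
def make_eids(serial, num_events):
    # Event ids are the rupture serial shifted into the high 32 bits,
    # plus a running counter in the low bits (TWO32 * serial + i).
    base = 2 ** 32 * serial
    return [base + i for i in range(num_events)]
```

This makes every event id globally unique as long as a rupture produces fewer than 2^32 events.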
def on_use_runtime_value_toggled(self, widget, path):
"""Try to set the use runtime value flag to the newly entered one
"""
try:
data_port_id = self.list_store[path][self.ID_STORAGE_ID]
self.toggle_runtime_value_usage(data_port_id)
except TypeError as e:
logger.exception("Error while trying to change the use_runtime_value flag") | Try to set the use runtime value flag to the newly entered one | Below is the the instruction that describes the task:
### Input:
Try to set the use runtime value flag to the newly entered one
### Response:
def on_use_runtime_value_toggled(self, widget, path):
"""Try to set the use runtime value flag to the newly entered one
"""
try:
data_port_id = self.list_store[path][self.ID_STORAGE_ID]
self.toggle_runtime_value_usage(data_port_id)
except TypeError as e:
logger.exception("Error while trying to change the use_runtime_value flag") |
def run(self, **kwargs):
'''
Run all benchmarks.
Extras kwargs are passed to benchmarks constructors.
'''
self.report_start()
for bench in self.benchmarks:
bench = bench(before=self.report_before_method,
after=self.report_after_method,
after_each=self.report_progress,
debug=self.debug,
**kwargs)
self.report_before_class(bench)
bench.run()
self.report_after_class(bench)
self.runned.append(bench)
self.report_end() | Run all benchmarks.
Extras kwargs are passed to benchmarks constructors. | Below is the instruction that describes the task:
### Input:
Run all benchmarks.
Extras kwargs are passed to benchmarks constructors.
### Response:
def run(self, **kwargs):
'''
Run all benchmarks.
Extras kwargs are passed to benchmarks constructors.
'''
self.report_start()
for bench in self.benchmarks:
bench = bench(before=self.report_before_method,
after=self.report_after_method,
after_each=self.report_progress,
debug=self.debug,
**kwargs)
self.report_before_class(bench)
bench.run()
self.report_after_class(bench)
self.runned.append(bench)
self.report_end() |
def decimal_default(obj):
"""Properly parse out the Decimal datatypes into proper int/float types."""
if isinstance(obj, decimal.Decimal):
if obj % 1:
return float(obj)
return int(obj)
raise TypeError | Properly parse out the Decimal datatypes into proper int/float types. | Below is the the instruction that describes the task:
### Input:
Properly parse out the Decimal datatypes into proper int/float types.
### Response:
def decimal_default(obj):
"""Properly parse out the Decimal datatypes into proper int/float types."""
if isinstance(obj, decimal.Decimal):
if obj % 1:
return float(obj)
return int(obj)
raise TypeError |
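The `decimal_default` record is designed to be passed as the `default=` hook of `json.dumps`, which calls it for any object the encoder cannot serialize. A runnable demonstration of that wiring:

```python
import decimal
import json

def decimal_default(obj):
    # json.dumps calls this for types it cannot serialize; map Decimals
    # to int when whole, float otherwise, and reject everything else.
    if isinstance(obj, decimal.Decimal):
        return float(obj) if obj % 1 else int(obj)
    raise TypeError

payload = {"price": decimal.Decimal("1.5"), "qty": decimal.Decimal("2")}
encoded = json.dumps(payload, default=decimal_default)
```

Whole-valued Decimals serialize without a trailing `.0`, which is the point of the `obj % 1` check.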
def acquire(self, timeout=None):
"""Acquires the lock if in the unlocked state otherwise switch
back to the parent coroutine.
"""
green = getcurrent()
parent = green.parent
if parent is None:
raise MustBeInChildGreenlet('GreenLock.acquire in main greenlet')
if self._local.locked:
future = create_future(self._loop)
self._queue.append(future)
parent.switch(future)
self._local.locked = green
return self.locked() | Acquires the lock if in the unlocked state otherwise switch
back to the parent coroutine. | Below is the instruction that describes the task:
### Input:
Acquires the lock if in the unlocked state otherwise switch
back to the parent coroutine.
### Response:
def acquire(self, timeout=None):
"""Acquires the lock if in the unlocked state otherwise switch
back to the parent coroutine.
"""
green = getcurrent()
parent = green.parent
if parent is None:
raise MustBeInChildGreenlet('GreenLock.acquire in main greenlet')
if self._local.locked:
future = create_future(self._loop)
self._queue.append(future)
parent.switch(future)
self._local.locked = green
return self.locked() |
def load(self, arguments):
"Load the values from a ServerConnection arguments"
features = arguments[1:-1]
list(map(self.load_feature, features)) | Load the values from the a ServerConnection arguments | Below is the the instruction that describes the task:
### Input:
Load the values from a ServerConnection arguments
### Response:
def load(self, arguments):
"Load the values from a ServerConnection arguments"
features = arguments[1:-1]
list(map(self.load_feature, features)) |
def _run_cnvkit_shared(inputs, backgrounds):
"""Shared functionality to run CNVkit, parallelizing over multiple BAM files.
Handles new style cases where we have pre-normalized inputs and
old cases where we run CNVkit individually.
"""
if tz.get_in(["depth", "bins", "normalized"], inputs[0]):
ckouts = []
for data in inputs:
cnr_file = tz.get_in(["depth", "bins", "normalized"], data)
cns_file = os.path.join(_sv_workdir(data), "%s.cns" % dd.get_sample_name(data))
cns_file = _cnvkit_segment(cnr_file, dd.get_coverage_interval(data), data,
inputs + backgrounds, cns_file)
ckouts.append({"cnr": cnr_file, "cns": cns_file,
"background": tz.get_in(["depth", "bins", "background"], data)})
return ckouts
else:
return _run_cnvkit_shared_orig(inputs, backgrounds) | Shared functionality to run CNVkit, parallelizing over multiple BAM files.
Handles new style cases where we have pre-normalized inputs and
old cases where we run CNVkit individually. | Below is the instruction that describes the task:
### Input:
Shared functionality to run CNVkit, parallelizing over multiple BAM files.
Handles new style cases where we have pre-normalized inputs and
old cases where we run CNVkit individually.
### Response:
def _run_cnvkit_shared(inputs, backgrounds):
"""Shared functionality to run CNVkit, parallelizing over multiple BAM files.
Handles new style cases where we have pre-normalized inputs and
old cases where we run CNVkit individually.
"""
if tz.get_in(["depth", "bins", "normalized"], inputs[0]):
ckouts = []
for data in inputs:
cnr_file = tz.get_in(["depth", "bins", "normalized"], data)
cns_file = os.path.join(_sv_workdir(data), "%s.cns" % dd.get_sample_name(data))
cns_file = _cnvkit_segment(cnr_file, dd.get_coverage_interval(data), data,
inputs + backgrounds, cns_file)
ckouts.append({"cnr": cnr_file, "cns": cns_file,
"background": tz.get_in(["depth", "bins", "background"], data)})
return ckouts
else:
return _run_cnvkit_shared_orig(inputs, backgrounds) |
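The row above leans on `tz.get_in` (from the toolz library) for safe nested-dictionary access. As a rough illustration of what that helper does, here is a minimal stdlib-only sketch — the name `get_in` and its `default` parameter mirror toolz, but this is not the real implementation:

```python
from functools import reduce

def get_in(keys, coll, default=None):
    """Minimal stand-in for toolz.get_in: walk nested dicts along a key path."""
    try:
        return reduce(lambda d, k: d[k], keys, coll)
    except (KeyError, TypeError, IndexError):
        return default

data = {"depth": {"bins": {"normalized": "sample1.cnr"}}}
print(get_in(["depth", "bins", "normalized"], data))   # sample1.cnr
print(get_in(["depth", "bins", "background"], data))   # None
```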
def build_command(command, parameter_map):
"""
Build command line(s) using the given parameter map.
Even if passed a single `command`, this function will return a list
of shell commands. It is the caller's responsibility to concatenate them,
likely using the semicolon or double ampersands.
:param command: The command to interpolate params into.
:type command: str|list[str]
:param parameter_map: A ParameterMap object containing parameter knowledge.
:type parameter_map: valohai_yaml.objs.parameter_map.ParameterMap
:return: list of commands
:rtype: list[str]
"""
if isinstance(parameter_map, list): # Partially emulate old (pre-0.7) API for this function.
parameter_map = LegacyParameterMap(parameter_map)
out_commands = []
for command in listify(command):
# Only attempt formatting if the string smells like it should be formatted.
# This allows the user to include shell syntax in the commands, if required.
# (There's still naturally the chance for false-positives, so guard against
# those value errors and warn about them.)
if interpolable_re.search(command):
try:
command = interpolable_re.sub(
lambda match: _replace_interpolation(parameter_map, match),
command,
)
except ValueError as exc: # pragma: no cover
warnings.warn(
'failed to interpolate into %r: %s' % (command, exc),
CommandInterpolationWarning
)
out_commands.append(command.strip())
return out_commands | Build command line(s) using the given parameter map.
Even if passed a single `command`, this function will return a list
of shell commands. It is the caller's responsibility to concatenate them,
likely using the semicolon or double ampersands.
:param command: The command to interpolate params into.
:type command: str|list[str]
:param parameter_map: A ParameterMap object containing parameter knowledge.
:type parameter_map: valohai_yaml.objs.parameter_map.ParameterMap
:return: list of commands
:rtype: list[str] | Below is the instruction that describes the task:
### Input:
Build command line(s) using the given parameter map.
Even if passed a single `command`, this function will return a list
of shell commands. It is the caller's responsibility to concatenate them,
likely using the semicolon or double ampersands.
:param command: The command to interpolate params into.
:type command: str|list[str]
:param parameter_map: A ParameterMap object containing parameter knowledge.
:type parameter_map: valohai_yaml.objs.parameter_map.ParameterMap
:return: list of commands
:rtype: list[str]
### Response:
def build_command(command, parameter_map):
"""
Build command line(s) using the given parameter map.
Even if passed a single `command`, this function will return a list
of shell commands. It is the caller's responsibility to concatenate them,
likely using the semicolon or double ampersands.
:param command: The command to interpolate params into.
:type command: str|list[str]
:param parameter_map: A ParameterMap object containing parameter knowledge.
:type parameter_map: valohai_yaml.objs.parameter_map.ParameterMap
:return: list of commands
:rtype: list[str]
"""
if isinstance(parameter_map, list): # Partially emulate old (pre-0.7) API for this function.
parameter_map = LegacyParameterMap(parameter_map)
out_commands = []
for command in listify(command):
# Only attempt formatting if the string smells like it should be formatted.
# This allows the user to include shell syntax in the commands, if required.
# (There's still naturally the chance for false-positives, so guard against
# those value errors and warn about them.)
if interpolable_re.search(command):
try:
command = interpolable_re.sub(
lambda match: _replace_interpolation(parameter_map, match),
command,
)
except ValueError as exc: # pragma: no cover
warnings.warn(
'failed to interpolate into %r: %s' % (command, exc),
CommandInterpolationWarning
)
out_commands.append(command.strip())
return out_commands |
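The interpolation step depends on `interpolable_re` and a `ParameterMap`, both defined elsewhere in valohai-yaml. A simplified, self-contained sketch of the same idea — with an assumed `{name}` placeholder syntax and a plain dict standing in for the parameter map — looks like this:

```python
import re

# assumed placeholder syntax; the real valohai-yaml pattern differs
interpolable_re = re.compile(r"\{(\w+)\}")

def build_command(command, parameters):
    """Interpolate {name} placeholders into one or more shell commands."""
    commands = [command] if isinstance(command, str) else list(command)
    out_commands = []
    for cmd in commands:
        # only attempt formatting if the string smells like it should be formatted
        if interpolable_re.search(cmd):
            cmd = interpolable_re.sub(
                lambda m: str(parameters.get(m.group(1), m.group(0))), cmd)
        out_commands.append(cmd.strip())
    return out_commands

print(build_command("python train.py --lr {lr}", {"lr": 0.01}))
# ['python train.py --lr 0.01']
```

Unknown placeholders are left untouched rather than raising, which is one reasonable choice for a sketch; the real library warns on interpolation failures instead.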
def getOutput(self, command, env={}, path=None,
uid=None, gid=None, usePTY=0, childFDs=None):
"""Execute a command and get the output of the finished process.
"""
deferred = defer.Deferred()
processProtocol = _SummaryProcessProtocol(deferred)
self.execute(processProtocol, command, env,
path, uid, gid, usePTY, childFDs)
@deferred.addCallback
def getStdOut(tuple_):
stdout, _stderr, _returnCode = tuple_
return stdout
return deferred | Execute a command and get the output of the finished process. | Below is the instruction that describes the task:
### Input:
Execute a command and get the output of the finished process.
### Response:
def getOutput(self, command, env={}, path=None,
uid=None, gid=None, usePTY=0, childFDs=None):
"""Execute a command and get the output of the finished process.
"""
deferred = defer.Deferred()
processProtocol = _SummaryProcessProtocol(deferred)
self.execute(processProtocol, command, env,
path, uid, gid, usePTY, childFDs)
@deferred.addCallback
def getStdOut(tuple_):
stdout, _stderr, _returnCode = tuple_
return stdout
return deferred |
def figure(bgcolor=(1,1,1), size=(1000,1000)):
"""Create a blank figure.
Parameters
----------
bgcolor : (3,) float
Color of the background with values in [0,1].
size : (2,) int
Width and height of the figure in pixels.
"""
Visualizer3D._scene = Scene(background_color=np.array(bgcolor))
Visualizer3D._scene.ambient_light = AmbientLight(color=[1.0, 1.0, 1.0], strength=1.0)
Visualizer3D._init_size = np.array(size) | Create a blank figure.
Parameters
----------
bgcolor : (3,) float
Color of the background with values in [0,1].
size : (2,) int
Width and height of the figure in pixels. | Below is the instruction that describes the task:
### Input:
Create a blank figure.
Parameters
----------
bgcolor : (3,) float
Color of the background with values in [0,1].
size : (2,) int
Width and height of the figure in pixels.
### Response:
def figure(bgcolor=(1,1,1), size=(1000,1000)):
"""Create a blank figure.
Parameters
----------
bgcolor : (3,) float
Color of the background with values in [0,1].
size : (2,) int
Width and height of the figure in pixels.
"""
Visualizer3D._scene = Scene(background_color=np.array(bgcolor))
Visualizer3D._scene.ambient_light = AmbientLight(color=[1.0, 1.0, 1.0], strength=1.0)
Visualizer3D._init_size = np.array(size) |
def _ReadStructureFromByteStream(
self, byte_stream, file_offset, data_type_map, description, context=None):
"""Reads a structure from a byte stream.
Args:
byte_stream (bytes): byte stream.
file_offset (int): offset of the data relative from the start of
the file-like object.
data_type_map (dtfabric.DataTypeMap): data type map of the structure.
description (str): description of the structure.
context (Optional[dtfabric.DataTypeMapContext]): data type map context.
Returns:
object: structure values object.
Raises:
FileFormatError: if the structure cannot be read.
ValueError: if file-like object or data type map are invalid.
"""
if not byte_stream:
raise ValueError('Invalid byte stream.')
if not data_type_map:
raise ValueError('Invalid data type map.')
try:
return data_type_map.MapByteStream(byte_stream, context=context)
except dtfabric_errors.MappingError as exception:
raise errors.FileFormatError((
'Unable to map {0:s} data at offset: 0x{1:08x} with error: '
'{2!s}').format(description, file_offset, exception)) | Reads a structure from a byte stream.
Args:
byte_stream (bytes): byte stream.
file_offset (int): offset of the data relative from the start of
the file-like object.
data_type_map (dtfabric.DataTypeMap): data type map of the structure.
description (str): description of the structure.
context (Optional[dtfabric.DataTypeMapContext]): data type map context.
Returns:
object: structure values object.
Raises:
FileFormatError: if the structure cannot be read.
ValueError: if file-like object or data type map are invalid. | Below is the instruction that describes the task:
### Input:
Reads a structure from a byte stream.
Args:
byte_stream (bytes): byte stream.
file_offset (int): offset of the data relative from the start of
the file-like object.
data_type_map (dtfabric.DataTypeMap): data type map of the structure.
description (str): description of the structure.
context (Optional[dtfabric.DataTypeMapContext]): data type map context.
Returns:
object: structure values object.
Raises:
FileFormatError: if the structure cannot be read.
ValueError: if file-like object or data type map are invalid.
### Response:
def _ReadStructureFromByteStream(
self, byte_stream, file_offset, data_type_map, description, context=None):
"""Reads a structure from a byte stream.
Args:
byte_stream (bytes): byte stream.
file_offset (int): offset of the data relative from the start of
the file-like object.
data_type_map (dtfabric.DataTypeMap): data type map of the structure.
description (str): description of the structure.
context (Optional[dtfabric.DataTypeMapContext]): data type map context.
Returns:
object: structure values object.
Raises:
FileFormatError: if the structure cannot be read.
ValueError: if file-like object or data type map are invalid.
"""
if not byte_stream:
raise ValueError('Invalid byte stream.')
if not data_type_map:
raise ValueError('Invalid data type map.')
try:
return data_type_map.MapByteStream(byte_stream, context=context)
except dtfabric_errors.MappingError as exception:
raise errors.FileFormatError((
'Unable to map {0:s} data at offset: 0x{1:08x} with error: '
'{2!s}').format(description, file_offset, exception)) |
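dtfabric data type maps are essentially declarative wrappers over binary unpacking. A minimal analogue using only the standard-library `struct` module — with a format string standing in for a real `DataTypeMap` — illustrates the same read-and-wrap-errors pattern:

```python
import struct

def read_structure(byte_stream, file_offset, fmt, description):
    """Unpack a binary structure, re-raising failures with file context."""
    if not byte_stream:
        raise ValueError('Invalid byte stream.')
    try:
        return struct.unpack_from(fmt, byte_stream, 0)
    except struct.error as exception:
        raise RuntimeError(
            'Unable to map {0:s} data at offset: 0x{1:08x} with error: '
            '{2!s}'.format(description, file_offset, exception))

# a little-endian uint32 followed by a uint16
header = struct.pack('<IH', 0xCAFEBABE, 7)
print(read_structure(header, 0, '<IH', 'file header'))
# (3405691582, 7)
```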
def work_set(self, wallet, account, work):
"""
Set **work** for **account** in **wallet**
.. enable_control required
.. version 8.0 required
:param wallet: Wallet to set work for account for
:type wallet: str
:param account: Account to set work for
:type account: str
:param work: Work to set for account in wallet
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.work_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_1111111111111111111111111111111111111111111111111111hifc8npp",
... work="0000000000000000"
... )
True
"""
wallet = self._process_value(wallet, 'wallet')
account = self._process_value(account, 'account')
work = self._process_value(work, 'work')
payload = {"wallet": wallet, "account": account, "work": work}
resp = self.call('work_set', payload)
return 'success' in resp | Set **work** for **account** in **wallet**
.. enable_control required
.. version 8.0 required
:param wallet: Wallet to set work for account for
:type wallet: str
:param account: Account to set work for
:type account: str
:param work: Work to set for account in wallet
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.work_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_1111111111111111111111111111111111111111111111111111hifc8npp",
... work="0000000000000000"
... )
True | Below is the instruction that describes the task:
### Input:
Set **work** for **account** in **wallet**
.. enable_control required
.. version 8.0 required
:param wallet: Wallet to set work for account for
:type wallet: str
:param account: Account to set work for
:type account: str
:param work: Work to set for account in wallet
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.work_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_1111111111111111111111111111111111111111111111111111hifc8npp",
... work="0000000000000000"
... )
True
### Response:
def work_set(self, wallet, account, work):
"""
Set **work** for **account** in **wallet**
.. enable_control required
.. version 8.0 required
:param wallet: Wallet to set work for account for
:type wallet: str
:param account: Account to set work for
:type account: str
:param work: Work to set for account in wallet
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.work_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_1111111111111111111111111111111111111111111111111111hifc8npp",
... work="0000000000000000"
... )
True
"""
wallet = self._process_value(wallet, 'wallet')
account = self._process_value(account, 'account')
work = self._process_value(work, 'work')
payload = {"wallet": wallet, "account": account, "work": work}
resp = self.call('work_set', payload)
return 'success' in resp |
def str_to_application_class(self, an_app_key):
"""a configman compatible str_to_* converter"""
try:
app_class = str_to_python_object(self.apps[an_app_key])
except KeyError:
app_class = str_to_python_object(an_app_key)
try:
self.application_defaults = DotDict(
app_class.get_application_defaults()
)
except AttributeError:
# no get_application_defaults, skip this step
pass
return app_class | a configman compatible str_to_* converter | Below is the instruction that describes the task:
### Input:
a configman compatible str_to_* converter
### Response:
def str_to_application_class(self, an_app_key):
"""a configman compatible str_to_* converter"""
try:
app_class = str_to_python_object(self.apps[an_app_key])
except KeyError:
app_class = str_to_python_object(an_app_key)
try:
self.application_defaults = DotDict(
app_class.get_application_defaults()
)
except AttributeError:
# no get_application_defaults, skip this step
pass
return app_class |
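The converter above delegates to configman's `str_to_python_object`. A plausible stdlib sketch of such a dotted-path resolver (an assumption for illustration, not configman's actual code) is:

```python
import importlib

def str_to_python_object(path):
    """Resolve a dotted path like 'json.JSONDecoder' to the named object."""
    module_name, _, attr = path.rpartition('.')
    return getattr(importlib.import_module(module_name), attr)

print(str_to_python_object('collections.OrderedDict').__name__)
# OrderedDict
```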
def get_api_envs():
"""Get required API keys from environment variables."""
client_id = os.environ.get('CLIENT_ID')
user_id = os.environ.get('USER_ID')
if not client_id or not user_id:
raise ValueError('API keys are not found in the environment')
return client_id, user_id | Get required API keys from environment variables. | Below is the instruction that describes the task:
### Input:
Get required API keys from environment variables.
### Response:
def get_api_envs():
"""Get required API keys from environment variables."""
client_id = os.environ.get('CLIENT_ID')
user_id = os.environ.get('USER_ID')
if not client_id or not user_id:
raise ValueError('API keys are not found in the environment')
return client_id, user_id |
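A quick usage sketch of the pattern above; the variable values here are made up:

```python
import os

os.environ['CLIENT_ID'] = 'demo-client'
os.environ['USER_ID'] = 'demo-user'

def get_api_envs():
    """Fetch both required API keys, failing loudly if either is missing."""
    client_id = os.environ.get('CLIENT_ID')
    user_id = os.environ.get('USER_ID')
    if not client_id or not user_id:
        raise ValueError('API keys are not found in the environment')
    return client_id, user_id

print(get_api_envs())  # ('demo-client', 'demo-user')

# a missing key fails loudly rather than returning a partial tuple
del os.environ['USER_ID']
try:
    get_api_envs()
except ValueError as error:
    print(error)  # API keys are not found in the environment
```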
def _updateCanvasDraw(self):
""" Overload of the draw function that update
axes position before each draw"""
fn = self.canvas.draw
def draw2(*a,**k):
self._updateGridSpec()
return fn(*a,**k)
self.canvas.draw = draw2 | Overload of the draw function that updates
axes position before each draw | Below is the instruction that describes the task:
### Input:
Overload of the draw function that updates
axes position before each draw
### Response:
def _updateCanvasDraw(self):
""" Overload of the draw function that update
axes position before each draw"""
fn = self.canvas.draw
def draw2(*a,**k):
self._updateGridSpec()
return fn(*a,**k)
self.canvas.draw = draw2 |
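The draw-wrapping trick above is a small monkey-patch: capture the original bound method, then install a closure that runs a hook first. Stripped of matplotlib, the pattern looks like this (class and method names here are illustrative):

```python
class Canvas:
    def draw(self):
        return "drawn"

class Plot:
    def __init__(self):
        self.canvas = Canvas()
        self.updates = 0
        self._wrap_canvas_draw()

    def _update_grid_spec(self):
        self.updates += 1

    def _wrap_canvas_draw(self):
        fn = self.canvas.draw          # capture the original bound method
        def draw2(*a, **k):
            self._update_grid_spec()   # run the hook before every draw
            return fn(*a, **k)
        self.canvas.draw = draw2       # shadow it on the instance

p = Plot()
p.canvas.draw()
p.canvas.draw()
print(p.updates)  # 2
```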
def load(template):
"""
Try to guess the input format
"""
try:
data = load_json(template)
return data, "json"
except ValueError as e:
try:
data = load_yaml(template)
return data, "yaml"
except Exception:
raise e | Try to guess the input format | Below is the instruction that describes the task:
### Input:
Try to guess the input format
### Response:
def load(template):
"""
Try to guess the input format
"""
try:
data = load_json(template)
return data, "json"
except ValueError as e:
try:
data = load_yaml(template)
return data, "yaml"
except Exception:
raise e |
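A self-contained sketch of the try-JSON-then-fall-back pattern above; the real function uses PyYAML for the second attempt, so the toy `key: value` parser below is only a stand-in:

```python
import json

def load(template):
    """Guess the input format: try JSON first, then a fallback parser."""
    try:
        return json.loads(template), "json"
    except ValueError as e:
        try:
            # stand-in for yaml.safe_load: parse simple 'key: value' lines
            return dict(line.split(": ", 1) for line in template.splitlines()), "yaml"
        except Exception:
            # surface the original JSON error, as in the row above
            raise e

print(load('{"a": 1}'))      # ({'a': 1}, 'json')
print(load('a: 1\nb: two'))  # ({'a': '1', 'b': 'two'}, 'yaml')
```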
def from_context(cls, context, shell_type=None, shell_name=None):
"""Create an instance of TrafficGeneratorVBladeResource from the given context
:param cloudshell.shell.core.driver_context.ResourceCommandContext context:
:param str shell_type: shell type
:param str shell_name: shell name
:rtype: TrafficGeneratorVChassisResource
"""
return cls(address=context.resource.address,
family=context.resource.family,
shell_type=shell_type,
shell_name=shell_name,
fullname=context.resource.fullname,
attributes=dict(context.resource.attributes),
name=context.resource.name) | Create an instance of TrafficGeneratorVBladeResource from the given context
:param cloudshell.shell.core.driver_context.ResourceCommandContext context:
:param str shell_type: shell type
:param str shell_name: shell name
:rtype: TrafficGeneratorVChassisResource | Below is the instruction that describes the task:
### Input:
Create an instance of TrafficGeneratorVBladeResource from the given context
:param cloudshell.shell.core.driver_context.ResourceCommandContext context:
:param str shell_type: shell type
:param str shell_name: shell name
:rtype: TrafficGeneratorVChassisResource
### Response:
def from_context(cls, context, shell_type=None, shell_name=None):
"""Create an instance of TrafficGeneratorVBladeResource from the given context
:param cloudshell.shell.core.driver_context.ResourceCommandContext context:
:param str shell_type: shell type
:param str shell_name: shell name
:rtype: TrafficGeneratorVChassisResource
"""
return cls(address=context.resource.address,
family=context.resource.family,
shell_type=shell_type,
shell_name=shell_name,
fullname=context.resource.fullname,
attributes=dict(context.resource.attributes),
name=context.resource.name) |
def renamenx(self, key, newkey):
"""Renames key to newkey only if newkey does not exist.
:raises ValueError: if key == newkey
"""
if key == newkey:
raise ValueError("key and newkey are the same")
fut = self.execute(b'RENAMENX', key, newkey)
return wait_convert(fut, bool) | Renames key to newkey only if newkey does not exist.
:raises ValueError: if key == newkey | Below is the instruction that describes the task:
### Input:
Renames key to newkey only if newkey does not exist.
:raises ValueError: if key == newkey
### Response:
def renamenx(self, key, newkey):
"""Renames key to newkey only if newkey does not exist.
:raises ValueError: if key == newkey
"""
if key == newkey:
raise ValueError("key and newkey are the same")
fut = self.execute(b'RENAMENX', key, newkey)
return wait_convert(fut, bool) |
def _extract_next_page_link(self):
""" Try to get next page link. """
# HEADS UP: we do not abort if next_page_link is already set:
# we try to find next (eg. find 3 if already at page 2).
for pattern in self.config.next_page_link:
items = self.parsed_tree.xpath(pattern)
if not items:
continue
if len(items) == 1:
item = items[0]
if 'href' in item.keys():
self.next_page_link = item.get('href')
else:
self.next_page_link = item.text.strip()
LOGGER.info(u'Found next page link: %s.',
self.next_page_link)
# First found link is the good one.
break
else:
LOGGER.warning(u'%s items for next-page link %s',
items, pattern,
extra={'siteconfig': self.config.host}) | Try to get next page link. | Below is the instruction that describes the task:
### Input:
Try to get next page link.
### Response:
def _extract_next_page_link(self):
""" Try to get next page link. """
# HEADS UP: we do not abort if next_page_link is already set:
# we try to find next (eg. find 3 if already at page 2).
for pattern in self.config.next_page_link:
items = self.parsed_tree.xpath(pattern)
if not items:
continue
if len(items) == 1:
item = items[0]
if 'href' in item.keys():
self.next_page_link = item.get('href')
else:
self.next_page_link = item.text.strip()
LOGGER.info(u'Found next page link: %s.',
self.next_page_link)
# First found link is the good one.
break
else:
LOGGER.warning(u'%s items for next-page link %s',
items, pattern,
extra={'siteconfig': self.config.host}) |
def format_doc(hit, schema, dates):
"""Format given doc to match given schema."""
doc = hit.get('_source', {})
doc.setdefault(config.ID_FIELD, hit.get('_id'))
doc.setdefault('_type', hit.get('_type'))
if hit.get('highlight'):
doc['es_highlight'] = hit.get('highlight')
if hit.get('inner_hits'):
doc['_inner_hits'] = {}
for key, value in hit.get('inner_hits').items():
doc['_inner_hits'][key] = []
for item in value.get('hits', {}).get('hits', []):
doc['_inner_hits'][key].append(item.get('_source', {}))
for key in dates:
if key in doc:
doc[key] = parse_date(doc[key])
return doc | Format given doc to match given schema. | Below is the instruction that describes the task:
### Input:
Format given doc to match given schema.
### Response:
def format_doc(hit, schema, dates):
"""Format given doc to match given schema."""
doc = hit.get('_source', {})
doc.setdefault(config.ID_FIELD, hit.get('_id'))
doc.setdefault('_type', hit.get('_type'))
if hit.get('highlight'):
doc['es_highlight'] = hit.get('highlight')
if hit.get('inner_hits'):
doc['_inner_hits'] = {}
for key, value in hit.get('inner_hits').items():
doc['_inner_hits'][key] = []
for item in value.get('hits', {}).get('hits', []):
doc['_inner_hits'][key].append(item.get('_source', {}))
for key in dates:
if key in doc:
doc[key] = parse_date(doc[key])
return doc |
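Applied to a mock Elasticsearch hit, the flattening and date-parsing steps look like this (simplified: `parse_date` is replaced by `datetime.strptime` and the highlight/inner-hits handling is dropped):

```python
from datetime import datetime

def format_doc(hit, dates):
    """Flatten an Elasticsearch-style hit into a plain document dict."""
    doc = hit.get('_source', {})
    doc.setdefault('_id', hit.get('_id'))
    for key in dates:
        if key in doc:
            doc[key] = datetime.strptime(doc[key], '%Y-%m-%d')
    return doc

hit = {'_id': '42', '_source': {'title': 'hello', 'created': '2024-01-31'}}
doc = format_doc(hit, dates=['created'])
print(doc['_id'], doc['created'].year)  # 42 2024
```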
def update_sums(self, r, i, j, data1, data2, sum1, sum2, N=None, centers_sum=None):
"""
The main function that digitizes the pair counts,
calls bincount for the appropriate `sum1` and `sum2`
values, and adds them to the input arrays;
modifies sum1, sum2, N, and centers_sum in place.
"""
# the summation values for this (r,i,j)
sum1_ij, sum2_ij = compute_sum_values(i, j, data1, data2)
# digitize
digr = self.digitize(r, i, j, data1, data2)
if len(digr) == 3 and isinstance(digr[1], dict):
dig, paircoords, weights = digr
elif len(digr) == 2 and isinstance(digr[1], dict):
dig, paircoords = digr
weights = None
else:
dig = digr
paircoords = None
weights = None
# sum 1
def add_one_channel(sum1c, sum1_ijc):
if numpy.isscalar(sum1_ijc) or sum1_ijc.ndim == 1:
sum1c.flat[:] += utils.bincount(dig, sum1_ijc, minlength=sum1c.size)
else:
for d in range(sum1c.shape[0]):
sum1c[d].flat[:] += utils.bincount(dig, sum1_ijc[...,d], minlength=sum1c[d].size)
if self.channels:
if weights is None:
raise RuntimeError("`digitize` of multi channel paircount did not return a weight array for the channels")
sum1_ij = weights * sum1_ij
# sum1_ij[ichannel, dig, dim]
for ichannel in range(len(self.channels)):
add_one_channel(sum1[ichannel], sum1_ij[ichannel])
else:
# sum1_ij[dig, dim]
add_one_channel(sum1, sum1_ij)
# sum 2, if both data are not points
if not numpy.isscalar(sum2):
sum2.flat[:] += utils.bincount(dig, sum2_ij, minlength=sum2.size)
if N is not None:
if not paircoords:
raise RuntimeError("Bin center is requested but not returned by digitize")
# update the mean coords
self._update_mean_coords(dig, N, centers_sum, **paircoords) | The main function that digitizes the pair counts,
calls bincount for the appropriate `sum1` and `sum2`
values, and adds them to the input arrays;
modifies sum1, sum2, N, and centers_sum in place. | Below is the instruction that describes the task:
### Input:
The main function that digitizes the pair counts,
calls bincount for the appropriate `sum1` and `sum2`
values, and adds them to the input arrays;
modifies sum1, sum2, N, and centers_sum in place.
### Response:
def update_sums(self, r, i, j, data1, data2, sum1, sum2, N=None, centers_sum=None):
"""
The main function that digitizes the pair counts,
calls bincount for the appropriate `sum1` and `sum2`
values, and adds them to the input arrays;
modifies sum1, sum2, N, and centers_sum in place.
"""
# the summation values for this (r,i,j)
sum1_ij, sum2_ij = compute_sum_values(i, j, data1, data2)
# digitize
digr = self.digitize(r, i, j, data1, data2)
if len(digr) == 3 and isinstance(digr[1], dict):
dig, paircoords, weights = digr
elif len(digr) == 2 and isinstance(digr[1], dict):
dig, paircoords = digr
weights = None
else:
dig = digr
paircoords = None
weights = None
# sum 1
def add_one_channel(sum1c, sum1_ijc):
if numpy.isscalar(sum1_ijc) or sum1_ijc.ndim == 1:
sum1c.flat[:] += utils.bincount(dig, sum1_ijc, minlength=sum1c.size)
else:
for d in range(sum1c.shape[0]):
sum1c[d].flat[:] += utils.bincount(dig, sum1_ijc[...,d], minlength=sum1c[d].size)
if self.channels:
if weights is None:
raise RuntimeError("`digitize` of multi channel paircount did not return a weight array for the channels")
sum1_ij = weights * sum1_ij
# sum1_ij[ichannel, dig, dim]
for ichannel in range(len(self.channels)):
add_one_channel(sum1[ichannel], sum1_ij[ichannel])
else:
# sum1_ij[dig, dim]
add_one_channel(sum1, sum1_ij)
# sum 2, if both data are not points
if not numpy.isscalar(sum2):
sum2.flat[:] += utils.bincount(dig, sum2_ij, minlength=sum2.size)
if N is not None:
if not paircoords:
raise RuntimeError("Bin center is requested but not returned by digitize")
# update the mean coords
self._update_mean_coords(dig, N, centers_sum, **paircoords) |
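The workhorse call in `update_sums` is `utils.bincount`, which accumulates per-pair weights into histogram bins. A pure-Python stand-in makes the mechanics concrete:

```python
def bincount(indices, weights, minlength):
    """Sum `weights` into bins addressed by `indices` (numpy.bincount analogue)."""
    size = max(minlength, (max(indices) + 1) if indices else 0)
    out = [0.0] * size
    for i, w in zip(indices, weights):
        out[i] += w
    return out

# four pairs falling into radial bins 0, 2, 2 and 1
print(bincount([0, 2, 2, 1], [1.0, 0.5, 0.5, 2.0], minlength=4))
# [1.0, 2.0, 1.0, 0.0]
```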
def display(self, typ, data):
""" display section of typ with data """
if hasattr(self, 'print_' + typ):
getattr(self, 'print_' + typ)(data)
elif not data:
self._print("%s: %s" % (typ, data))
elif isinstance(data, collections.Mapping):
self._print("\n", typ)
for k, v in data.items():
self.print(k, v)
elif isinstance(data, (list, tuple)):
# tabular data layout for lists of dicts
if isinstance(data[0], collections.Mapping):
self.display_set(typ, data, self._get_columns(data[0]))
else:
for each in data:
self.print(typ, each)
else:
self._print("%s: %s" % (typ, data))
self.fobj.flush() | display section of typ with data | Below is the instruction that describes the task:
### Input:
display section of typ with data
### Response:
def display(self, typ, data):
""" display section of typ with data """
if hasattr(self, 'print_' + typ):
getattr(self, 'print_' + typ)(data)
elif not data:
self._print("%s: %s" % (typ, data))
elif isinstance(data, collections.Mapping):
self._print("\n", typ)
for k, v in data.items():
self.print(k, v)
elif isinstance(data, (list, tuple)):
# tabular data layout for lists of dicts
if isinstance(data[0], collections.Mapping):
self.display_set(typ, data, self._get_columns(data[0]))
else:
for each in data:
self.print(typ, each)
else:
self._print("%s: %s" % (typ, data))
self.fobj.flush() |
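The dispatch above (mapping → recurse per key, sequence → recurse per item, scalar → print) can be condensed into a small stdlib sketch that collects lines instead of printing:

```python
from collections.abc import Mapping

def display(typ, data, out=None):
    """Walk nested mappings/sequences, emitting one `key: value` line per leaf."""
    out = out if out is not None else []
    if isinstance(data, Mapping):
        out.append(typ)
        for k, v in data.items():
            display(k, v, out)
    elif isinstance(data, (list, tuple)) and data:
        for each in data:
            display(typ, each, out)
    else:
        out.append("%s: %s" % (typ, data))
    return out

print(display("server", {"host": "localhost", "ports": [80, 443]}))
# ['server', 'host: localhost', 'ports: 80', 'ports: 443']
```

Note that modern Python requires `collections.abc.Mapping`; the `collections.Mapping` alias used in the original row was removed in Python 3.10.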
def train_language_model(self,
customization_id,
word_type_to_add=None,
customization_weight=None,
**kwargs):
"""
Train a custom language model.
Initiates the training of a custom language model with new resources such as
corpora, grammars, and custom words. After adding, modifying, or deleting
resources for a custom language model, use this method to begin the actual
training of the model on the latest data. You can specify whether the custom
language model is to be trained with all words from its words resource or only
with words that were added or modified by the user directly. You must use
credentials for the instance of the service that owns a model to train it.
The training method is asynchronous. It can take on the order of minutes to
complete depending on the amount of data on which the service is being trained and
the current load on the service. The method returns an HTTP 200 response code to
indicate that the training process has begun.
You can monitor the status of the training by using the **Get a custom language
model** method to poll the model's status. Use a loop to check the status every 10
seconds. The method returns a `LanguageModel` object that includes `status` and
`progress` fields. A status of `available` means that the custom model is trained
and ready to use. The service cannot accept subsequent training requests or
requests to add new resources until the existing request completes.
Training can fail to start for the following reasons:
* The service is currently handling another request for the custom model, such as
another training request or a request to add a corpus or grammar to the model.
* No training data have been added to the custom model.
* One or more words that were added to the custom model have invalid sounds-like
pronunciations that you must fix.
**See also:** [Train the custom language
model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#trainModel-language).
:param str customization_id: The customization ID (GUID) of the custom language
model that is to be used for the request. You must make the request with
credentials for the instance of the service that owns the custom model.
:param str word_type_to_add: The type of words from the custom language model's
words resource on which to train the model:
* `all` (the default) trains the model on all new words, regardless of whether
they were extracted from corpora or grammars or were added or modified by the
user.
* `user` trains the model only on new words that were added or modified by the
user directly. The model is not trained on new words extracted from corpora or
grammars.
:param float customization_weight: Specifies a customization weight for the custom
language model. The customization weight tells the service how much weight to give
to words from the custom language model compared to those from the base model for
speech recognition. Specify a value between 0.0 and 1.0; the default is 0.3.
The default value yields the best performance in general. Assign a higher value if
your audio makes frequent use of OOV words from the custom model. Use caution when
setting the weight: a higher value can improve the accuracy of phrases from the
custom model's domain, but it can negatively affect performance on non-domain
phrases.
The value that you assign is used for all recognition requests that use the model.
You can override it for any recognition request by specifying a customization
weight for that request.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
"""
if customization_id is None:
raise ValueError('customization_id must be provided')
headers = {}
if 'headers' in kwargs:
headers.update(kwargs.get('headers'))
sdk_headers = get_sdk_headers('speech_to_text', 'V1',
'train_language_model')
headers.update(sdk_headers)
params = {
'word_type_to_add': word_type_to_add,
'customization_weight': customization_weight
}
url = '/v1/customizations/{0}/train'.format(
*self._encode_path_vars(customization_id))
response = self.request(
method='POST',
url=url,
headers=headers,
params=params,
accept_json=True)
return response | Train a custom language model.
Initiates the training of a custom language model with new resources such as
corpora, grammars, and custom words. After adding, modifying, or deleting
resources for a custom language model, use this method to begin the actual
training of the model on the latest data. You can specify whether the custom
language model is to be trained with all words from its words resource or only
with words that were added or modified by the user directly. You must use
credentials for the instance of the service that owns a model to train it.
The training method is asynchronous. It can take on the order of minutes to
complete depending on the amount of data on which the service is being trained and
the current load on the service. The method returns an HTTP 200 response code to
indicate that the training process has begun.
You can monitor the status of the training by using the **Get a custom language
model** method to poll the model's status. Use a loop to check the status every 10
seconds. The method returns a `LanguageModel` object that includes `status` and
`progress` fields. A status of `available` means that the custom model is trained
and ready to use. The service cannot accept subsequent training requests or
requests to add new resources until the existing request completes.
Training can fail to start for the following reasons:
* The service is currently handling another request for the custom model, such as
another training request or a request to add a corpus or grammar to the model.
* No training data have been added to the custom model.
* One or more words that were added to the custom model have invalid sounds-like
pronunciations that you must fix.
**See also:** [Train the custom language
model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#trainModel-language).
:param str customization_id: The customization ID (GUID) of the custom language
model that is to be used for the request. You must make the request with
credentials for the instance of the service that owns the custom model.
:param str word_type_to_add: The type of words from the custom language model's
words resource on which to train the model:
* `all` (the default) trains the model on all new words, regardless of whether
they were extracted from corpora or grammars or were added or modified by the
user.
* `user` trains the model only on new words that were added or modified by the
user directly. The model is not trained on new words extracted from corpora or
grammars.
:param float customization_weight: Specifies a customization weight for the custom
language model. The customization weight tells the service how much weight to give
to words from the custom language model compared to those from the base model for
speech recognition. Specify a value between 0.0 and 1.0; the default is 0.3.
The default value yields the best performance in general. Assign a higher value if
your audio makes frequent use of OOV words from the custom model. Use caution when
setting the weight: a higher value can improve the accuracy of phrases from the
custom model's domain, but it can negatively affect performance on non-domain
phrases.
The value that you assign is used for all recognition requests that use the model.
You can override it for any recognition request by specifying a customization
weight for that request.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse | Below is the instruction that describes the task:
### Input:
Train a custom language model.
Initiates the training of a custom language model with new resources such as
corpora, grammars, and custom words. After adding, modifying, or deleting
resources for a custom language model, use this method to begin the actual
training of the model on the latest data. You can specify whether the custom
language model is to be trained with all words from its words resource or only
with words that were added or modified by the user directly. You must use
credentials for the instance of the service that owns a model to train it.
The training method is asynchronous. It can take on the order of minutes to
complete depending on the amount of data on which the service is being trained and
the current load on the service. The method returns an HTTP 200 response code to
indicate that the training process has begun.
You can monitor the status of the training by using the **Get a custom language
model** method to poll the model's status. Use a loop to check the status every 10
seconds. The method returns a `LanguageModel` object that includes `status` and
`progress` fields. A status of `available` means that the custom model is trained
and ready to use. The service cannot accept subsequent training requests or
requests to add new resources until the existing request completes.
Training can fail to start for the following reasons:
* The service is currently handling another request for the custom model, such as
another training request or a request to add a corpus or grammar to the model.
* No training data have been added to the custom model.
* One or more words that were added to the custom model have invalid sounds-like
pronunciations that you must fix.
**See also:** [Train the custom language
model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#trainModel-language).
:param str customization_id: The customization ID (GUID) of the custom language
model that is to be used for the request. You must make the request with
credentials for the instance of the service that owns the custom model.
:param str word_type_to_add: The type of words from the custom language model's
words resource on which to train the model:
* `all` (the default) trains the model on all new words, regardless of whether
they were extracted from corpora or grammars or were added or modified by the
user.
* `user` trains the model only on new words that were added or modified by the
user directly. The model is not trained on new words extracted from corpora or
grammars.
:param float customization_weight: Specifies a customization weight for the custom
language model. The customization weight tells the service how much weight to give
to words from the custom language model compared to those from the base model for
speech recognition. Specify a value between 0.0 and 1.0; the default is 0.3.
The default value yields the best performance in general. Assign a higher value if
your audio makes frequent use of OOV words from the custom model. Use caution when
setting the weight: a higher value can improve the accuracy of phrases from the
custom model's domain, but it can negatively affect performance on non-domain
phrases.
The value that you assign is used for all recognition requests that use the model.
You can override it for any recognition request by specifying a customization
weight for that request.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
### Response:
def train_language_model(self,
customization_id,
word_type_to_add=None,
customization_weight=None,
**kwargs):
"""
Train a custom language model.
Initiates the training of a custom language model with new resources such as
corpora, grammars, and custom words. After adding, modifying, or deleting
resources for a custom language model, use this method to begin the actual
training of the model on the latest data. You can specify whether the custom
language model is to be trained with all words from its words resource or only
with words that were added or modified by the user directly. You must use
credentials for the instance of the service that owns a model to train it.
The training method is asynchronous. It can take on the order of minutes to
complete depending on the amount of data on which the service is being trained and
the current load on the service. The method returns an HTTP 200 response code to
indicate that the training process has begun.
You can monitor the status of the training by using the **Get a custom language
model** method to poll the model's status. Use a loop to check the status every 10
seconds. The method returns a `LanguageModel` object that includes `status` and
`progress` fields. A status of `available` means that the custom model is trained
and ready to use. The service cannot accept subsequent training requests or
requests to add new resources until the existing request completes.
Training can fail to start for the following reasons:
* The service is currently handling another request for the custom model, such as
another training request or a request to add a corpus or grammar to the model.
* No training data have been added to the custom model.
* One or more words that were added to the custom model have invalid sounds-like
pronunciations that you must fix.
**See also:** [Train the custom language
model](https://cloud.ibm.com/docs/services/speech-to-text/language-create.html#trainModel-language).
:param str customization_id: The customization ID (GUID) of the custom language
model that is to be used for the request. You must make the request with
credentials for the instance of the service that owns the custom model.
:param str word_type_to_add: The type of words from the custom language model's
words resource on which to train the model:
* `all` (the default) trains the model on all new words, regardless of whether
they were extracted from corpora or grammars or were added or modified by the
user.
* `user` trains the model only on new words that were added or modified by the
user directly. The model is not trained on new words extracted from corpora or
grammars.
:param float customization_weight: Specifies a customization weight for the custom
language model. The customization weight tells the service how much weight to give
to words from the custom language model compared to those from the base model for
speech recognition. Specify a value between 0.0 and 1.0; the default is 0.3.
The default value yields the best performance in general. Assign a higher value if
your audio makes frequent use of OOV words from the custom model. Use caution when
setting the weight: a higher value can improve the accuracy of phrases from the
custom model's domain, but it can negatively affect performance on non-domain
phrases.
The value that you assign is used for all recognition requests that use the model.
You can override it for any recognition request by specifying a customization
weight for that request.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
"""
if customization_id is None:
raise ValueError('customization_id must be provided')
headers = {}
if 'headers' in kwargs:
headers.update(kwargs.get('headers'))
sdk_headers = get_sdk_headers('speech_to_text', 'V1',
'train_language_model')
headers.update(sdk_headers)
params = {
'word_type_to_add': word_type_to_add,
'customization_weight': customization_weight
}
url = '/v1/customizations/{0}/train'.format(
*self._encode_path_vars(customization_id))
response = self.request(
method='POST',
url=url,
headers=headers,
params=params,
accept_json=True)
return response |
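The docstring above tells clients to poll the model status every 10 seconds until it reads `available`. A minimal sketch of that polling loop, with the SDK's status call replaced by a stand-in callable (the helper name and the simulated status sequence are assumptions, not part of the Watson SDK):

```python
import time

def wait_until_trained(get_status, poll_interval=10, max_polls=60):
    """Poll a status callable until the custom model reports 'available'.

    get_status stands in for calling the service's 'Get a custom
    language model' endpoint and reading the 'status' field.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == 'available':
            return status
        if status == 'failed':
            raise RuntimeError('training failed')
        time.sleep(poll_interval)
    raise TimeoutError('model did not become available in time')

# Simulated status sequence standing in for real service responses.
statuses = iter(['pending', 'training', 'training', 'available'])
print(wait_until_trained(lambda: next(statuses), poll_interval=0))  # prints "available"
```

In real use the callable would wrap the SDK's get-model request; the loop shape and terminal statuses are what the docstring prescribes.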
def find_record(self, domain, record_type, name=None, data=None):
"""
Returns a single record for this domain that matches the supplied
search criteria.
If no record matches, a DomainRecordNotFound exception will be raised.
If more than one matches, a DomainRecordNotUnique exception will
be raised.
"""
return domain.find_record(record_type=record_type,
name=name, data=data) | Returns a single record for this domain that matches the supplied
search criteria.
If no record matches, a DomainRecordNotFound exception will be raised.
If more than one matches, a DomainRecordNotUnique exception will
be raised. | Below is the instruction that describes the task:
### Input:
Returns a single record for this domain that matches the supplied
search criteria.
If no record matches, a DomainRecordNotFound exception will be raised.
If more than one matches, a DomainRecordNotUnique exception will
be raised.
### Response:
def find_record(self, domain, record_type, name=None, data=None):
"""
Returns a single record for this domain that matches the supplied
search criteria.
If no record matches, a DomainRecordNotFound exception will be raised.
If more than one matches, a DomainRecordNotUnique exception will
be raised.
"""
return domain.find_record(record_type=record_type,
name=name, data=data) |
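`find_record` delegates to the domain object, which enforces exactly-one-match semantics. A self-contained sketch of that behavior over a plain list of record dicts (the record shape and this implementation are assumptions for illustration; only the exception names come from the docstring):

```python
class DomainRecordNotFound(Exception):
    pass

class DomainRecordNotUnique(Exception):
    pass

def find_record(records, record_type, name=None, data=None):
    """Return the single record matching all supplied criteria."""
    matches = [
        rec for rec in records
        if rec['type'] == record_type
        and (name is None or rec['name'] == name)
        and (data is None or rec['data'] == data)
    ]
    if not matches:
        raise DomainRecordNotFound(record_type)
    if len(matches) > 1:
        raise DomainRecordNotUnique(record_type)
    return matches[0]

records = [
    {'type': 'A', 'name': 'www', 'data': '203.0.113.10'},
    {'type': 'A', 'name': 'api', 'data': '203.0.113.11'},
    {'type': 'MX', 'name': '@', 'data': 'mail.example.com'},
]
print(find_record(records, 'A', name='www')['data'])  # prints "203.0.113.10"
```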
def score_models(clf, X, y, encoder, runs=1):
"""
Takes in a classifier that supports multiclass classification, and X and a y, and returns a cross validation score.
"""
scores = []
X_test = None
for _ in range(runs):
X_test = encoder().fit_transform(X, y)
# Some models, like logistic regression, like normalized features otherwise they underperform and/or take a long time to converge.
# To be rigorous, we should have trained the normalization on each fold individually via pipelines.
# See grid_search_example to learn how to do it.
X_test = StandardScaler().fit_transform(X_test)
scores.append(cross_validate(clf, X_test, y, n_jobs=1, cv=5)['test_score'])
gc.collect()
scores = [y for z in scores for y in z]
return float(np.mean(scores)), float(np.std(scores)), scores, X_test.shape[1] | Takes in a classifier that supports multiclass classification, and X and a y, and returns a cross validation score. | Below is the instruction that describes the task:
### Input:
Takes in a classifier that supports multiclass classification, and X and a y, and returns a cross validation score.
### Response:
def score_models(clf, X, y, encoder, runs=1):
"""
Takes in a classifier that supports multiclass classification, and X and a y, and returns a cross validation score.
"""
scores = []
X_test = None
for _ in range(runs):
X_test = encoder().fit_transform(X, y)
# Some models, like logistic regression, like normalized features otherwise they underperform and/or take a long time to converge.
# To be rigorous, we should have trained the normalization on each fold individually via pipelines.
# See grid_search_example to learn how to do it.
X_test = StandardScaler().fit_transform(X_test)
scores.append(cross_validate(clf, X_test, y, n_jobs=1, cv=5)['test_score'])
gc.collect()
scores = [y for z in scores for y in z]
return float(np.mean(scores)), float(np.std(scores)), scores, X_test.shape[1] |
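The last two lines of `score_models` flatten the per-run, per-fold scores and summarize them. The same aggregation in isolation, with made-up scores and stdlib `statistics` in place of NumPy:

```python
import statistics

# Two runs of 5-fold CV, standing in for the cross_validate results above.
scores = [[0.90, 0.92, 0.88, 0.91, 0.89],
          [0.93, 0.90, 0.87, 0.92, 0.90]]

flat = [s for fold_scores in scores for s in fold_scores]
mean = statistics.mean(flat)
std = statistics.pstdev(flat)  # np.std defaults to population std dev
print(len(flat), round(mean, 3))  # prints "10 0.902"
```

Note `pstdev` rather than `stdev`: NumPy's `np.std` uses the population formula (`ddof=0`) by default, so this matches the original.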
def blockdiag_parser(preprocessor, tag, markup):
""" Blockdiag parser """
m = DOT_BLOCK_RE.search(markup)
if m:
# Get diagram type and code
diagram = m.group('diagram').strip()
code = markup
# Run command
output = diag(code, diagram)
if output:
# Return Base64 encoded image
return '<span class="blockdiag" style="align: center;"><img src="data:image/png;base64,%s"></span>' % base64.b64encode(output)
else:
raise ValueError('Error processing input. '
'Expected syntax: {0}'.format(SYNTAX)) | Blockdiag parser | Below is the instruction that describes the task:
### Input:
Blockdiag parser
### Response:
def blockdiag_parser(preprocessor, tag, markup):
""" Blockdiag parser """
m = DOT_BLOCK_RE.search(markup)
if m:
# Get diagram type and code
diagram = m.group('diagram').strip()
code = markup
# Run command
output = diag(code, diagram)
if output:
# Return Base64 encoded image
return '<span class="blockdiag" style="align: center;"><img src="data:image/png;base64,%s"></span>' % base64.b64encode(output)
else:
raise ValueError('Error processing input. '
'Expected syntax: {0}'.format(SYNTAX)) |
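The data-URI embedding used in `blockdiag_parser` can be reproduced on its own; note that under Python 3, `base64.b64encode` returns bytes and needs a `.decode()` before string interpolation. This sketch, including the fake PNG bytes, is illustrative only:

```python
import base64

def embed_png(png_bytes):
    """Wrap raw PNG bytes in an inline data-URI <img> tag."""
    encoded = base64.b64encode(png_bytes).decode('ascii')
    return ('<span class="blockdiag" style="align: center;">'
            '<img src="data:image/png;base64,%s"></span>' % encoded)

# The 8-byte PNG signature as a stand-in for real blockdiag output.
fake_png = b'\x89PNG\r\n\x1a\n'
html = embed_png(fake_png)
print(html.startswith('<span class="blockdiag"'))  # prints "True"
```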
def queue_exists(self, queue):
"""Check if a queue has been declared.
:rtype bool:
"""
try:
self.channel.queue_declare(queue=queue, passive=True)
except AMQPChannelException as e:
if e.amqp_reply_code == 404:
return False
raise e
else:
return True | Check if a queue has been declared.
:rtype bool: | Below is the the instruction that describes the task:
### Input:
Check if a queue has been declared.
:rtype: bool
### Response:
def queue_exists(self, queue):
"""Check if a queue has been declared.
:rtype: bool
"""
try:
self.channel.queue_declare(queue=queue, passive=True)
except AMQPChannelException as e:
if e.amqp_reply_code == 404:
return False
raise e
else:
return True |
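`queue_exists` probes with a passive declare, maps AMQP reply code 404 to `False`, and re-raises anything else. The same pattern against a stub channel (the stub and the exception class here are illustrative stand-ins for a real AMQP client):

```python
class AMQPChannelException(Exception):
    def __init__(self, amqp_reply_code):
        super().__init__(amqp_reply_code)
        self.amqp_reply_code = amqp_reply_code

class StubChannel:
    def __init__(self, known_queues):
        self.known_queues = set(known_queues)

    def queue_declare(self, queue, passive=False):
        # passive=True only checks existence; it never creates the queue
        if passive and queue not in self.known_queues:
            raise AMQPChannelException(404)

def queue_exists(channel, queue):
    try:
        channel.queue_declare(queue=queue, passive=True)
    except AMQPChannelException as e:
        if e.amqp_reply_code == 404:
            return False
        raise
    return True

channel = StubChannel(['tasks'])
print(queue_exists(channel, 'tasks'), queue_exists(channel, 'missing'))  # prints "True False"
```

Any reply code other than 404 (say, a 403 access error) propagates to the caller unchanged.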
def establish_connection(self, width=None, height=None):
"""Establish SSH connection to the network device
Timeout will generate a NetMikoTimeoutException
Authentication failure will generate a NetMikoAuthenticationException
width and height are needed for Fortinet paging setting.
:param width: Specified width of the VT100 terminal window
:type width: int
:param height: Specified height of the VT100 terminal window
:type height: int
"""
if self.protocol == "telnet":
self.remote_conn = telnetlib.Telnet(
self.host, port=self.port, timeout=self.timeout
)
self.telnet_login()
elif self.protocol == "serial":
self.remote_conn = serial.Serial(**self.serial_settings)
self.serial_login()
elif self.protocol == "ssh":
ssh_connect_params = self._connect_params_dict()
self.remote_conn_pre = self._build_ssh_client()
# initiate SSH connection
try:
self.remote_conn_pre.connect(**ssh_connect_params)
except socket.error:
self.paramiko_cleanup()
msg = "Connection to device timed-out: {device_type} {ip}:{port}".format(
device_type=self.device_type, ip=self.host, port=self.port
)
raise NetMikoTimeoutException(msg)
except paramiko.ssh_exception.AuthenticationException as auth_err:
self.paramiko_cleanup()
msg = "Authentication failure: unable to connect {device_type} {ip}:{port}".format(
device_type=self.device_type, ip=self.host, port=self.port
)
msg += self.RETURN + text_type(auth_err)
raise NetMikoAuthenticationException(msg)
if self.verbose:
print(
"SSH connection established to {}:{}".format(self.host, self.port)
)
# Use invoke_shell to establish an 'interactive session'
if width and height:
self.remote_conn = self.remote_conn_pre.invoke_shell(
term="vt100", width=width, height=height
)
else:
self.remote_conn = self.remote_conn_pre.invoke_shell()
self.remote_conn.settimeout(self.blocking_timeout)
if self.keepalive:
self.remote_conn.transport.set_keepalive(self.keepalive)
self.special_login_handler()
if self.verbose:
print("Interactive SSH session established")
return "" | Establish SSH connection to the network device
Timeout will generate a NetMikoTimeoutException
Authentication failure will generate a NetMikoAuthenticationException
width and height are needed for Fortinet paging setting.
:param width: Specified width of the VT100 terminal window
:type width: int
:param height: Specified height of the VT100 terminal window
:type height: int | Below is the instruction that describes the task:
### Input:
Establish SSH connection to the network device
Timeout will generate a NetMikoTimeoutException
Authentication failure will generate a NetMikoAuthenticationException
width and height are needed for Fortinet paging setting.
:param width: Specified width of the VT100 terminal window
:type width: int
:param height: Specified height of the VT100 terminal window
:type height: int
### Response:
def establish_connection(self, width=None, height=None):
"""Establish SSH connection to the network device
Timeout will generate a NetMikoTimeoutException
Authentication failure will generate a NetMikoAuthenticationException
width and height are needed for Fortinet paging setting.
:param width: Specified width of the VT100 terminal window
:type width: int
:param height: Specified height of the VT100 terminal window
:type height: int
"""
if self.protocol == "telnet":
self.remote_conn = telnetlib.Telnet(
self.host, port=self.port, timeout=self.timeout
)
self.telnet_login()
elif self.protocol == "serial":
self.remote_conn = serial.Serial(**self.serial_settings)
self.serial_login()
elif self.protocol == "ssh":
ssh_connect_params = self._connect_params_dict()
self.remote_conn_pre = self._build_ssh_client()
# initiate SSH connection
try:
self.remote_conn_pre.connect(**ssh_connect_params)
except socket.error:
self.paramiko_cleanup()
msg = "Connection to device timed-out: {device_type} {ip}:{port}".format(
device_type=self.device_type, ip=self.host, port=self.port
)
raise NetMikoTimeoutException(msg)
except paramiko.ssh_exception.AuthenticationException as auth_err:
self.paramiko_cleanup()
msg = "Authentication failure: unable to connect {device_type} {ip}:{port}".format(
device_type=self.device_type, ip=self.host, port=self.port
)
msg += self.RETURN + text_type(auth_err)
raise NetMikoAuthenticationException(msg)
if self.verbose:
print(
"SSH connection established to {}:{}".format(self.host, self.port)
)
# Use invoke_shell to establish an 'interactive session'
if width and height:
self.remote_conn = self.remote_conn_pre.invoke_shell(
term="vt100", width=width, height=height
)
else:
self.remote_conn = self.remote_conn_pre.invoke_shell()
self.remote_conn.settimeout(self.blocking_timeout)
if self.keepalive:
self.remote_conn.transport.set_keepalive(self.keepalive)
self.special_login_handler()
if self.verbose:
print("Interactive SSH session established")
return "" |
def arbitrary(arg, where=None, how=None):
"""
Selects the first / last non-null value in a column
Parameters
----------
arg : array expression
where: bool, default None
how : {'first', 'last', 'heavy'}, default 'first'
Heavy selects a frequently occurring value using the heavy hitters
algorithm. Heavy is only supported by Clickhouse backend.
Returns
-------
arbitrary element : scalar type of caller
"""
return ops.Arbitrary(arg, how, where).to_expr() | Selects the first / last non-null value in a column
Parameters
----------
arg : array expression
where: bool, default None
how : {'first', 'last', 'heavy'}, default 'first'
Heavy selects a frequently occurring value using the heavy hitters
algorithm. Heavy is only supported by Clickhouse backend.
Returns
-------
arbitrary element : scalar type of caller | Below is the instruction that describes the task:
### Input:
Selects the first / last non-null value in a column
Parameters
----------
arg : array expression
where: bool, default None
how : {'first', 'last', 'heavy'}, default 'first'
Heavy selects a frequently occurring value using the heavy hitters
algorithm. Heavy is only supported by Clickhouse backend.
Returns
-------
arbitrary element : scalar type of caller
### Response:
def arbitrary(arg, where=None, how=None):
"""
Selects the first / last non-null value in a column
Parameters
----------
arg : array expression
where: bool, default None
how : {'first', 'last', 'heavy'}, default 'first'
Heavy selects a frequently occurring value using the heavy hitters
algorithm. Heavy is only supported by Clickhouse backend.
Returns
-------
arbitrary element : scalar type of caller
"""
return ops.Arbitrary(arg, how, where).to_expr() |
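The ibis expression above compiles to backend SQL, but the 'first'/'last' non-null selection it expresses can be sketched in plain Python (illustrative only; `heavy` is omitted since it needs the Clickhouse backend):

```python
def arbitrary(values, how='first'):
    """Select the first or last non-null value from a column of values."""
    non_null = [v for v in values if v is not None]
    if not non_null:
        return None
    if how == 'first':
        return non_null[0]
    if how == 'last':
        return non_null[-1]
    raise ValueError("how must be 'first' or 'last' ('heavy' needs Clickhouse)")

col = [None, 'a', 'b', None, 'c']
print(arbitrary(col), arbitrary(col, how='last'))  # prints "a c"
```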