def stop(self, wait_for_completion=True, operation_timeout=None,
status_timeout=None, allow_status_exceptions=False):
"""
Stop this LPAR, using the HMC operation "Stop Logical
Partition". The stop operation stops the processors from
processing instructions.
This HMC operation has deferred status behavior: Even after the
asynchronous job on the HMC is complete, it can take a few seconds until
the LPAR status reaches the desired value. If `wait_for_completion=True`,
this method repeatedly checks the status of the LPAR after the HMC
operation has completed, and waits until the status is the desired
state "not-operating" or, if `allow_status_exceptions` was set,
additionally the state "exceptions".
Authorization requirements:
* Object-access permission to the CPC containing this LPAR.
* Object-access permission to this LPAR.
* Task permission for the "Stop" task.
Parameters:
wait_for_completion (bool):
Boolean controlling whether this method should wait for completion
of the requested asynchronous HMC operation, as follows:
* If `True`, this method will wait for completion of the
asynchronous job performing the operation, and for the status
becoming "not-operating" (or additionally "exceptions", if
`allow_status_exceptions` was set).
* If `False`, this method will return immediately once the HMC has
accepted the request to perform the operation.
operation_timeout (:term:`number`):
Timeout in seconds, for waiting for completion of the asynchronous
job performing the operation. The special value 0 means that no
timeout is set. `None` means that the default async operation
timeout of the session is used. If the timeout expires when
`wait_for_completion=True`, a
:exc:`~zhmcclient.OperationTimeout` is raised.
status_timeout (:term:`number`):
Timeout in seconds, for waiting until the status of the LPAR has
reached the desired status, after the HMC operation has completed.
The special value 0 means that no timeout is set. `None` means that
the default status timeout of the session is used.
If the timeout expires when `wait_for_completion=True`, a
:exc:`~zhmcclient.StatusTimeout` is raised.
allow_status_exceptions (bool):
Boolean controlling whether LPAR status "exceptions" is considered
an additional acceptable end status when `wait_for_completion` is
set.
Returns:
`None` or :class:`~zhmcclient.Job`:
If `wait_for_completion` is `True`, returns `None`.
If `wait_for_completion` is `False`, returns a
:class:`~zhmcclient.Job` object representing the asynchronously
executing job on the HMC.
Raises:
:exc:`~zhmcclient.HTTPError`
:exc:`~zhmcclient.ParseError`
:exc:`~zhmcclient.AuthError`
:exc:`~zhmcclient.ConnectionError`
:exc:`~zhmcclient.OperationTimeout`: The timeout expired while
waiting for completion of the operation.
:exc:`~zhmcclient.StatusTimeout`: The timeout expired while
waiting for the desired LPAR status.
"""
body = {}
result = self.manager.session.post(
self.uri + '/operations/stop',
body,
wait_for_completion=wait_for_completion,
operation_timeout=operation_timeout)
if wait_for_completion:
statuses = ["not-operating"]
if allow_status_exceptions:
statuses.append("exceptions")
self.wait_for_status(statuses, status_timeout)
return result
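The deferred-status wait that `stop` performs after the HMC job completes can be sketched in isolation. This is a simplified stand-in, not the zhmcclient implementation: the `get_status` callable and the poll interval are assumptions for illustration.

```python
import time

class StatusTimeout(Exception):
    """Raised when the desired status is not reached within the timeout."""

def wait_for_status(get_status, statuses, status_timeout, poll_interval=1.0):
    # Poll get_status() until it returns one of the acceptable values.
    # Mirroring the docstring's special value: a timeout of 0 means no
    # deadline at all.
    deadline = (time.monotonic() + status_timeout) if status_timeout else None
    while True:
        current = get_status()
        if current in statuses:
            return current
        if deadline is not None and time.monotonic() >= deadline:
            raise StatusTimeout(
                "Status %r not in %r after %s s"
                % (current, statuses, status_timeout))
        time.sleep(poll_interval)
```

With `allow_status_exceptions`, the caller would simply pass `["not-operating", "exceptions"]` as the acceptable list.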
def make_srcmap(psf, exp, spatial_model, sigma, npix=500, xpix=0.0, ypix=0.0,
cdelt=0.01, psf_scale_fn=None, klims=None, sparse=False):
"""Compute the source map for a given spatial model.
Parameters
----------
psf : `~fermipy.irfs.PSFModel`
exp : `~numpy.ndarray`
Array of exposures.
spatial_model : str
Spatial model.
sigma : float
Spatial size parameter for extended models.
npix : int
Number of pixels in each spatial dimension of the source map.
cdelt : float
Pixel size in degrees.
xpix : float
Source position in pixel coordinates in X dimension.
ypix : float
Source position in pixel coordinates in Y dimension.
psf_scale_fn : callable
Function that evaluates the PSF scaling function.
Argument is energy in MeV.
klims : tuple
Indices of lower and upper range of energy.
sparse : bool
Skip pixels in which the source amplitude is small.
"""
if spatial_model == 'RadialGaussian':
k = utils.make_radial_kernel(psf, utils.convolve2d_gauss,
sigma / 1.5095921854516636, npix, cdelt,
xpix, ypix, psf_scale_fn, klims=klims,
sparse=sparse)
elif spatial_model == 'RadialDisk':
k = utils.make_radial_kernel(psf, utils.convolve2d_disk,
sigma / 0.8246211251235321, npix, cdelt,
xpix, ypix, psf_scale_fn, klims=klims,
sparse=sparse)
elif spatial_model == 'PointSource':
k = utils.make_radial_kernel(psf, None, None, npix, cdelt,
xpix, ypix, psf_scale_fn, klims=klims,
sparse=sparse)
else:
raise Exception('Unsupported spatial model: %s' % spatial_model)
if klims is not None:
exp = exp[klims[0]:klims[1] + 1, ...]
k *= exp[:, np.newaxis, np.newaxis] * np.radians(cdelt) ** 2
return k
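The two magic numbers in `make_srcmap` look like conversion factors from a 68% containment radius to the natural size parameter of each model (Gaussian width, disk radius). That reading is an assumption here, but it can be checked numerically:

```python
import math

# 2D Gaussian: the radius enclosing a fraction f of the flux is
# r(f) = sigma * sqrt(-2 * ln(1 - f)), so R68 / sigma = sqrt(-2 ln 0.32).
gauss_r68_over_sigma = math.sqrt(-2.0 * math.log(1.0 - 0.68))

# Uniform disk of radius R: the enclosed fraction inside r is (r / R)**2,
# so R68 / R = sqrt(0.68).
disk_r68_over_radius = math.sqrt(0.68)

print(gauss_r68_over_sigma)  # ~1.5095921854516636
print(disk_r68_over_radius)  # ~0.8246211251235321
```

Dividing `sigma` by these factors, as the code does, therefore converts an R68 input into the model's native width before building the kernel.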
def register_saver_ops(self):
"""
Registers the saver operations to the graph in context.
"""
variables = self.get_savable_variables()
if variables is None or len(variables) == 0:
self._saver = None
return
base_scope = self._get_base_variable_scope()
variables_map = {strip_name_scope(v.name, base_scope): v for v in variables}
self._saver = tf.train.Saver(
var_list=variables_map,
reshape=False,
sharded=False,
max_to_keep=5,
keep_checkpoint_every_n_hours=10000.0,
name=None,
restore_sequentially=False,
saver_def=None,
builder=None,
defer_build=False,
allow_empty=True,
write_version=tf.train.SaverDef.V2,
pad_step_number=False,
save_relative_paths=True
)
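`strip_name_scope` is a helper from the surrounding framework; its assumed job is to turn absolute variable names into scope-relative checkpoint keys, so checkpoints survive a rename of the enclosing scope. A minimal sketch of that behavior:

```python
def strip_name_scope(name, base_scope):
    # Drop the leading base scope (e.g. "agent/") from a variable name,
    # leaving names outside that scope untouched.
    if base_scope and name.startswith(base_scope):
        return name[len(base_scope):]
    return name

print(strip_name_scope("agent/layer0/W:0", "agent/"))  # layer0/W:0
```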
def vrrpe_spf_basic(self, **kwargs):
"""Set vrrpe short path forwarding to default.
Args:
int_type (str): Type of interface. (gigabitethernet,
tengigabitethernet, etc).
name (str): Name of interface. (1/0/5, 1/0/10, etc).
enable (bool): If VRRPe short-path forwarding should be enabled
or disabled. Default: ``True``.
get (bool) : Get config instead of editing config. (True, False)
vrid (str): vrrpe router ID.
rbridge_id (str): rbridge-id for device. Only required when type is
`ve`.
callback (function): A function executed upon completion of the
method. The only parameter passed to `callback` will be the
``ElementTree`` `config`.
Returns:
Return value of `callback`.
Raises:
KeyError: if `int_type`, `name`, `vrid` is not passed.
ValueError: if `int_type`, `name`, `vrid` is invalid.
Examples:
>>> import pynos.device
>>> switches = ['10.24.39.211', '10.24.39.203']
>>> auth = ('admin', 'password')
>>> for switch in switches:
... conn = (switch, '22')
... with pynos.device.Device(conn=conn, auth=auth) as dev:
... output = dev.services.vrrpe(ip_version='6',
... enable=True, rbridge_id='225')
... output = dev.interface.vrrpe_vip(int_type='ve',
... name='89', vrid='1',
... vip='2002:4818:f000:1ab:cafe:beef:1000:1/64',
... rbridge_id='225')
... output = dev.services.vrrpe(enable=False,
... rbridge_id='225')
... output = dev.interface.vrrpe_spf_basic(int_type='ve',
... name='89', vrid='1', rbridge_id='1')
"""
int_type = kwargs.pop('int_type').lower()
name = kwargs.pop('name')
vrid = kwargs.pop('vrid')
enable = kwargs.pop('enable', True)
get = kwargs.pop('get', False)
rbridge_id = kwargs.pop('rbridge_id', '1')
callback = kwargs.pop('callback', self._callback)
valid_int_types = ['gigabitethernet', 'tengigabitethernet',
'fortygigabitethernet', 'hundredgigabitethernet',
'port_channel', 've']
vrrpe_args = dict(name=name, vrid=vrid)
method_class = self._interface
if get:
enable = None
if int_type not in valid_int_types:
raise ValueError('`int_type` must be one of: %s' %
repr(valid_int_types))
method_name = 'interface_%s_vrrpe_short_path_forwarding_basic' % \
int_type
if int_type == 've':
method_name = 'rbridge_id_%s' % method_name
method_class = self._rbridge
vrrpe_args['rbridge_id'] = rbridge_id
if not pynos.utilities.valid_vlan_id(name):
raise InvalidVlanId("`name` must be between `1` and `8191`")
elif not pynos.utilities.valid_interface(int_type, name):
raise ValueError('`name` must be in the format of x/y/z for '
'physical interfaces or x for port channel.')
vrrpe_spf_basic = getattr(method_class, method_name)
config = vrrpe_spf_basic(**vrrpe_args)
if get:
return callback(config, handler='get_config')
if not enable:
config.find('.//*short-path-forwarding').set('operation', 'delete')
return callback(config)
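`vrrpe_spf_basic` dispatches by building a method name from the interface type and prefixing `rbridge_id_` for `ve` interfaces. The shape of that `getattr` dispatch can be shown with stand-in classes; the fakes below are illustrative, not the generated pynos bindings:

```python
class FakeInterfaceApi:
    # Stand-in for one auto-generated physical-interface binding.
    def interface_tengigabitethernet_vrrpe_short_path_forwarding_basic(
            self, **kwargs):
        return ('interface', kwargs)

class FakeRbridgeApi:
    # Stand-in for the rbridge-scoped binding used for 've' interfaces.
    def rbridge_id_interface_ve_vrrpe_short_path_forwarding_basic(
            self, **kwargs):
        return ('rbridge', kwargs)

def resolve(int_type, interface_api, rbridge_api):
    # Mirror the name-building logic from vrrpe_spf_basic.
    method_name = ('interface_%s_vrrpe_short_path_forwarding_basic'
                   % int_type)
    target = interface_api
    if int_type == 've':
        method_name = 'rbridge_id_%s' % method_name
        target = rbridge_api
    return getattr(target, method_name)
```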
def mkdir(name, path):
'''Create an empty directory in the virtual folder.
\b
NAME: Name of a virtual folder.
PATH: The name or path of directory. Parent directories are created automatically
if they do not exist.
'''
with Session() as session:
try:
session.VFolder(name).mkdir(path)
print_done('Done.')
except Exception as e:
print_error(e)
sys.exit(1)
def get_HDX_code_from_location(location, locations=None, configuration=None):
# type: (str, Optional[List[Dict]], Optional[Configuration]) -> Optional[str]
"""Get HDX code for location
Args:
location (str): Location for which to get HDX code
locations (Optional[List[Dict]]): Valid locations list. Defaults to list downloaded from HDX.
configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration.
Returns:
Optional[str]: HDX code or None
"""
if locations is None:
locations = Locations.validlocations(configuration)
locationupper = location.upper()
for locdict in locations:
locationcode = locdict['name'].upper()
if locationupper == locationcode:
return locationcode
for locdict in locations:
if locationupper == locdict['title'].upper():
return locdict['name'].upper()
return None
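The two-pass, case-insensitive lookup above — first against the `name` code, then against the human-readable `title` — can be exercised standalone with a hand-made locations list (the entries below are examples, not downloaded HDX data):

```python
def hdx_code(location, locations):
    # Pass 1: match against the ISO-style code in 'name'.
    loc_upper = location.upper()
    for locdict in locations:
        code = locdict['name'].upper()
        if loc_upper == code:
            return code
    # Pass 2: match against the human-readable 'title'.
    for locdict in locations:
        if loc_upper == locdict['title'].upper():
            return locdict['name'].upper()
    return None

locations = [{'name': 'afg', 'title': 'Afghanistan'},
             {'name': 'alb', 'title': 'Albania'}]
print(hdx_code('albania', locations))  # ALB
```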
def fn_with_custom_grad(grad_fn, use_global_vars=False):
"""Decorator to create a subgraph with a custom gradient function.
The subgraph created by the decorated function is NOT put in a Defun and so
does not suffer from the limitations of the Defun (all subgraph ops on the
same device, no summaries).
Args:
grad_fn: function with signature
(inputs, variables, outputs, output_grads) -> (grad_inputs, grad_vars),
all of which are lists of Tensors.
use_global_vars: if True, variables will be the global variables created.
If False, will be the trainable variables.
Returns:
Decorator for function such that the gradient is defined by grad_fn.
"""
def dec(fn):
@functools.wraps(fn)
def wrapped(*args):
return _fn_with_custom_grad(
fn, args, grad_fn, use_global_vars=use_global_vars)
return wrapped
    return dec
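The `_fn_with_custom_grad` helper above is TensorFlow-specific and not shown here, but the decorator-factory shape itself can be sketched in pure Python. The `with_custom_call` name and the doubling `call_fn` below are illustrative stand-ins, not part of the original API:

```python
import functools

def with_custom_call(call_fn):
    """Minimal sketch of the decorator-factory pattern used by
    fn_with_custom_grad: the outer function captures configuration,
    `dec` receives the target function, and `wrapped` delegates to a
    helper -- just as fn_with_custom_grad delegates to _fn_with_custom_grad."""
    def dec(fn):
        @functools.wraps(fn)  # preserves __name__ and __doc__ of fn
        def wrapped(*args):
            return call_fn(fn, args)
        return wrapped
    return dec

@with_custom_call(lambda fn, args: fn(*args) * 2)
def add(a, b):
    """Add two numbers."""
    return a + b
```

Because of `functools.wraps`, the decorated `add` still reports its original name and docstring while its call goes through the configured helper.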
def delete_account_metadata(self, prefix=None):
    """
    Removes all metadata matching the specified prefix from the account.

    By default, the standard account metadata prefix ('X-Account-Meta-') is
    prepended to the header name if it isn't present. For non-standard
    headers, you must include a non-None prefix, such as an empty string.
    """
    # Add the metadata prefix, if needed.
    if prefix is None:
        prefix = ACCOUNT_META_PREFIX
    curr_meta = self.get_account_metadata(prefix=prefix)
    for ckey in curr_meta:
        curr_meta[ckey] = ""
    new_meta = _massage_metakeys(curr_meta, prefix)
    uri = "/"
    resp, resp_body = self.api.method_post(uri, headers=new_meta)
    return 200 <= resp.status_code <= 299
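Neither `_massage_metakeys` nor the API client is shown here, so the sketch below reconstructs only the key-prefixing step implied by the docstring: each existing metadata key is posted back with an empty value, and keys gain the prefix if they don't already carry it. `massage_metakeys` is a hypothetical stand-in for the real helper:

```python
ACCOUNT_META_PREFIX = "X-Account-Meta-"

def massage_metakeys(dct, prefix):
    """Prepend `prefix` to each key that doesn't already start with it
    (case-insensitively) -- the behavior _massage_metakeys is assumed to
    implement, based on the docstring above."""
    massaged = {}
    for key, val in dct.items():
        if not key.lower().startswith(prefix.lower()):
            key = prefix + key
        massaged[key] = val
    return massaged

# Clearing metadata = POSTing each existing key with an empty value.
current = {"color": "blue", "X-Account-Meta-Size": "large"}
cleared = massage_metakeys({k: "" for k in current}, ACCOUNT_META_PREFIX)
```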
def _load_unicode_block_info(self):
    """
    Function for parsing the Unicode block info from the Unicode Character
    Database (UCD) and generating a lookup table. For more info on the UCD,
    see the following website: https://www.unicode.org/ucd/
    """
    filename = "Blocks.txt"
    current_dir = os.path.abspath(os.path.dirname(__file__))
    with codecs.open(os.path.join(current_dir, filename), mode="r", encoding="utf-8") as fp:
        for line in fp:
            if not line.strip() or line.startswith("#"):
                continue  # Skip empty lines and comment lines (comments start with '#')
            # Format: Start Code..End Code; Block Name
            block_range, block_name = line.strip().split(";")
            start_range, end_range = block_range.strip().split("..")
            self._unicode_blocks[six.moves.range(int(start_range, 16), int(end_range, 16) + 1)] = block_name.strip()
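The parsing loop can be exercised standalone on a couple of lines in the Blocks.txt format. Note that the loader keys the lookup table by `range` objects (which are hashable) spanning each block inclusively:

```python
# A few lines in the format used by the UCD Blocks.txt file.
sample = """\
# Blocks-12.0.0.txt
0000..007F; Basic Latin
0080..00FF; Latin-1 Supplement
"""

blocks = {}
for line in sample.splitlines():
    if not line.strip() or line.startswith("#"):
        continue  # skip blank lines and comments, as the loader does
    block_range, block_name = line.strip().split(";")
    start_range, end_range = block_range.strip().split("..")
    # The end code point is inclusive, hence the +1 on the range stop.
    blocks[range(int(start_range, 16), int(end_range, 16) + 1)] = block_name.strip()
```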
def absent(name,
           vhost='/',
           runas=None):
    '''
    Ensure the named policy is absent

    Reference: http://www.rabbitmq.com/ha.html

    name
        The name of the policy to remove
    runas
        Name of the user to run the command as
    '''
    ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}

    policy_exists = __salt__['rabbitmq.policy_exists'](
        vhost, name, runas=runas)
    if not policy_exists:
        ret['comment'] = 'Policy \'{0} {1}\' is not present.'.format(vhost, name)
        return ret

    if not __opts__['test']:
        result = __salt__['rabbitmq.delete_policy'](vhost, name, runas=runas)
        if 'Error' in result:
            ret['result'] = False
            ret['comment'] = result['Error']
            return ret
        elif 'Deleted' in result:
            ret['comment'] = 'Deleted'

    # If we've reached this far before returning, we have changes.
    ret['changes'] = {'new': '', 'old': name}

    if __opts__['test']:
        ret['result'] = None
        ret['comment'] = 'Policy \'{0} {1}\' will be removed.'.format(vhost, name)
    return ret
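The `__salt__` and `__opts__` dunders are injected by Salt at runtime, so the state cannot run standalone. The control flow it implements — a result dict that is `True` on success, `False` on error, and `None` in test mode — can be sketched with the plumbing replaced by plain arguments (`absent_sketch`, `policy_exists`, and `test` below are illustrative, not Salt API):

```python
def absent_sketch(name, vhost='/', policy_exists=True, test=False):
    """Pure-Python sketch of the state's three-outcome contract, with
    the __salt__ lookups and __opts__['test'] replaced by arguments."""
    ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
    if not policy_exists:
        ret['comment'] = "Policy '{0} {1}' is not present.".format(vhost, name)
        return ret
    if not test:
        ret['comment'] = 'Deleted'
    # Reaching this point means something changed (or would change).
    ret['changes'] = {'new': '', 'old': name}
    if test:
        ret['result'] = None  # test mode: report what would happen
        ret['comment'] = "Policy '{0} {1}' will be removed.".format(vhost, name)
    return ret
```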
def get_num_words(text):
    """
    Counts and returns the number of words found in a given text.

    :param text: the text to scan for words
    :return: the number of words found, or 0 if ``text`` is not a string
    """
    try:
        word_regexp_pattern = re.compile(r"[a-zA-Záéíóúñ]+")
        num_words = re.findall(word_regexp_pattern, text)
        return len(num_words)
    except TypeError:
        return 0
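The function is self-contained, so its behavior — counting runs of ASCII letters plus Spanish accented vowels and ñ, with non-string input falling through to 0 — can be demonstrated directly (reproduced here standalone with its own import):

```python
import re

def get_num_words(text):
    """Count words: runs of ASCII letters and Spanish accented
    characters count as one word each; non-string input yields 0."""
    try:
        pattern = re.compile(r"[a-zA-Záéíóúñ]+")
        return len(re.findall(pattern, text))
    except TypeError:
        # re.findall raises TypeError for None and other non-strings.
        return 0
```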
def open(self, flag="c"):
    """Open handle

    set protocol=2 to fix python3

    .. versionadded:: 1.3.1
    """
    return shelve.open(os.path.join(gettempdir(), self.index), flag=flag, protocol=2)
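The same `shelve.open` call can be exercised directly; here `gettempdir()` and `self.index` are replaced with a throwaway directory and a fixed name, purely for illustration:

```python
import os
import shelve
import tempfile

# Open a shelf the way the method above does, then round-trip a value.
index = "example_index"
path = os.path.join(tempfile.mkdtemp(), index)
with shelve.open(path, flag="c", protocol=2) as handle:
    handle["key"] = {"answer": 42}
# Reopen read-only and fetch the stored value back.
with shelve.open(path, flag="r", protocol=2) as handle:
    value = handle["key"]
```

`flag="c"` creates the database if needed, and `protocol=2` pins the pickle protocol so shelves written under Python 3 stay loadable in mixed environments.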
def require_fresh_games(self, number_fresh):
    """Require a given number of fresh games to be played.

    Args:
        number_fresh: integer, number of new fresh games needed

    Increments the cell `table_state=metadata:wait_for_game_number`
    by the given number of games. This will cause
    `self.wait_for_fresh_games()` to block until the game
    counter has reached this number.
    """
    latest = self.latest_game_number
    table_state = self.bt_table.row(TABLE_STATE)
    table_state.set_cell(METADATA, WAIT_CELL, int(latest + number_fresh))
    table_state.commit()
    print("== Setting wait cell to ", int(latest + number_fresh), flush=True)
def with_name(self, name):
    """Sets the name scope for future operations."""
    self._head = self._head.with_name(name)
    return self
def delete(self):
    r"""Delete this node from the parse tree.

    Where applicable, this will remove all descendants of this node from
    the parse tree.

    >>> from TexSoup import TexSoup
    >>> soup = TexSoup(r'''\textit{\color{blue}{Silly}}\textit{keep me!}''')
    >>> soup.textit.color.delete()
    >>> soup
    \textit{}\textit{keep me!}
    >>> soup.textit.delete()
    >>> soup
    \textit{keep me!}
    """
    # TODO: needs better abstraction for supports contents
    parent = self.parent
    if parent.expr._supports_contents():
        parent.remove(self)
        return

    # TODO: needs abstraction for removing from arg
    for arg in parent.args:
        if self.expr in arg.contents:
            arg.contents.remove(self.expr)
def upload(self, fp, token=None, target_name=None, content_type=None):
    """
    Upload a file to Zendesk.

    :param fp: file object, StringIO instance, content, or file path to be
        uploaded
    :param token: upload token for uploading multiple files
    :param target_name: name of the file inside Zendesk
    :return: :class:`Upload` object containing a token and other information; see
        the Zendesk API `Reference <https://developer.zendesk.com/rest_api/docs/core/attachments#uploading-files>`__.
    """
    return UploadRequest(self).post(fp, token=token, target_name=target_name, content_type=content_type)
def consume(consumer_id):
    """Given an existing consumer ID, return any new lines from the
    log since the last time the consumer was consumed."""
    global _consumers
    consumer = _consumers[consumer_id]
    client = get_docker_client()
    try:
        status = client.inspect_container(consumer.container_id)['State']['Status']
    except Exception:
        status = 'unknown'
    new_logs = client.logs(consumer.container_id,
                           stdout=True,
                           stderr=True,
                           stream=False,
                           timestamps=False,
                           since=calendar.timegm(consumer.offset.timetuple()))
    updated_consumer = Consumer(consumer.container_id, datetime.utcnow())
    _consumers[str(consumer_id)] = updated_consumer
    response = jsonify({'logs': new_logs, 'status': status})
    response.headers['Access-Control-Allow-Origin'] = '*'
    response.headers['Access-Control-Allow-Methods'] = 'GET, POST'
    return response
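The `since` argument passed to `client.logs` deserves a note: `calendar.timegm` interprets the consumer's offset time tuple as UTC and returns a Unix timestamp, which is what the Docker API expects. The conversion in isolation:

```python
import calendar
from datetime import datetime

# The consumer offset (a naive UTC datetime, as produced by
# datetime.utcnow() above) becomes a Unix timestamp for Docker's
# `since` parameter. timegm treats the tuple as UTC -- unlike
# time.mktime, which would apply the local timezone.
offset = datetime(2020, 1, 1, 0, 0, 0)
since = calendar.timegm(offset.timetuple())
```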
def cookie_set(self, name, value):
    """
    Set the value of a client cookie. This can only be called while
    headers can be sent.

    :param str name: The name of the cookie value to set.
    :param str value: The value of the cookie to set.
    """
    if not self.headers_active:
        raise RuntimeError('headers have already been ended')
    cookie = "{0}={1}; Path=/; HttpOnly".format(name, value)
    self.send_header('Set-Cookie', cookie)
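The header value itself is a plain formatted string; `Path=/` scopes the cookie to the whole site and `HttpOnly` hides it from client-side scripts. Assembled in isolation:

```python
# Build the Set-Cookie header value exactly as the method above does.
name, value = "session", "abc123"
cookie = "{0}={1}; Path=/; HttpOnly".format(name, value)
```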
def add_hours(self, datetimestr, n):
    """Returns the time that is n hours after the given time.

    :param datetimestr: a datetime object or a datetime str
    :param n: number of hours, value can be negative
    """
    a_datetime = self.parse_datetime(datetimestr)
    return a_datetime + timedelta(seconds=3600 * n)
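Expressing the offset as `timedelta(seconds=3600 * n)` rather than `timedelta(hours=n)` has one practical effect worth noting: it works identically for fractional and negative `n`. The arithmetic in isolation:

```python
from datetime import datetime, timedelta

# n may be negative or fractional; the offset is built from seconds.
start = datetime(2024, 1, 1, 12, 0, 0)
plus_two = start + timedelta(seconds=3600 * 2)      # two hours later
minus_half = start + timedelta(seconds=3600 * -0.5)  # half an hour earlier
```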
def create_hpx(cls, nside, nest, coordsys='CEL', order=-1, ebins=None,
               region=None, conv=HPX_Conv('FGST_CCUBE'), pixels=None):
    """Create a HPX object.

    Parameters
    ----------
    nside : int
        HEALPix nside parameter
    nest : bool
        True for HEALPix "NESTED" indexing scheme, False for "RING" scheme.
    coordsys : str
        "CEL" or "GAL"
    order : int
        nside = 2**order
    ebins : `~numpy.ndarray`
        Energy bin edges
    region : str
        Allows for partial-sky mappings
    conv : `HPX_Conv`
        Object defining the convention for column names and the like
    pixels : `np.array` or `None`
        For use with 'EXPLICIT' region string
    """
    return cls(nside, nest, coordsys, order, ebins,
               region=region, conv=conv, pixels=pixels)
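The docstring's `nside = 2**order` relation, with `-1` acting as the "not set" sentinel per the default value, can be captured in a small helper. `nside_from_order` is a hypothetical illustration, not part of the HPX API:

```python
def nside_from_order(order):
    """Hypothetical helper illustrating the nside/order relation from
    the docstring: nside = 2**order, with order = -1 meaning "unset"."""
    return 2 ** order if order >= 0 else None
```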
def _endless_page(self, number, label=None):
    """Factory function that returns an *EndlessPage* instance.

    This method works just like a partial constructor.
    """
    return EndlessPage(
        self._request,
        number,
        self._page.number,
        len(self),
        self._querystring_key,
        label=label,
        default_number=self._default_number,
        override_path=self._override_path,
    )
def next(self):
    """Go one token ahead and return the old one."""
    rv = self.current
    if self._pushed:
        self.current = self._pushed.popleft()
    elif self.current.type is not TOKEN_EOF:
        try:
            self.current = self._next()
        except StopIteration:
            self.close()
    return rv
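The method relies on its host class keeping a one-token lookahead (`self.current`) and a push-back queue (`self._pushed`). A minimal sketch of that stream — with tokens simplified to plain strings and identity comparison against an EOF sentinel, rather than the original's `Token` objects — makes the interplay visible:

```python
from collections import deque

TOKEN_EOF = "eof"  # sentinel; the original compares token *types* instead

class TokenStream:
    """Sketch of a one-token-lookahead stream like the one `next` above
    operates on: `current` is the lookahead, `_pushed` holds push-backs."""
    def __init__(self, tokens):
        self._iter = iter(tokens)
        self._pushed = deque()
        self.current = next(self._iter, TOKEN_EOF)

    def push(self, token):
        # Pushed-back tokens are served before the underlying iterator.
        self._pushed.append(token)

    def next(self):
        rv = self.current
        if self._pushed:
            self.current = self._pushed.popleft()
        elif self.current is not TOKEN_EOF:
            self.current = next(self._iter, TOKEN_EOF)
        return rv

stream = TokenStream(["a", "b"])
first = stream.next()   # returns "a", lookahead advances to "b"
stream.push("x")
second = stream.next()  # returns "b"; lookahead is now the pushed "x"
```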
def plot(self, sizescale=10, color=None, alpha=0.5, label=None, edgecolor='none', **kw):
    '''
    Plot the ra and dec of the coordinates,
    at a given epoch, scaled by their magnitude.
    (This does *not* create a new empty figure.)

    Parameters
    ----------
    sizescale : (optional) float
        The marker size for scatter for a star at the magnitudelimit.
    color : (optional) any valid color
        The color to plot (but there is a default for this catalog.)
    **kw : dict
        Additional keywords will be passed on to plt.scatter.

    Returns
    -------
    plotted : outputs from the plots
    '''
    # calculate the sizes of the stars (logarithmic with brightness?)
    size = np.maximum(sizescale*(1 + self.magnitudelimit - self.magnitude), 1)

    # make a scatter plot of the RA + Dec
    scatter = plt.scatter(self.ra, self.dec,
                          s=size,
                          color=color or self.color,
                          label=label or '{} ({:.1f})'.format(self.name, self.epoch),
                          alpha=alpha,
                          edgecolor=edgecolor,
                          **kw)
    return scatter
def _get_updated_values(before_values, after_values):
    """Get updated values from 2 dicts of values.

    Args:
        before_values (dict): values before update
        after_values (dict): values after update

    Returns:
        dict: a diff dict with key is field key, value is tuple of
        (before_value, after_value)
    """
    assert before_values.keys() == after_values.keys()
    return dict([(k, [before_values[k], after_values[k]])
                 for k in before_values.keys()
                 if before_values[k] != after_values[k]])
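The function is pure, so its behavior is easy to pin down with a concrete example (reproduced standalone): only keys whose value actually changed appear in the result, mapped to `[before, after]` pairs:

```python
def get_updated_values(before_values, after_values):
    """Keep only keys whose value changed, each mapped to a
    [before, after] pair. Both dicts must share the same key set."""
    assert before_values.keys() == after_values.keys()
    return dict([(k, [before_values[k], after_values[k]])
                 for k in before_values.keys()
                 if before_values[k] != after_values[k]])

diff = get_updated_values({"a": 1, "b": 2}, {"a": 1, "b": 3})
```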
def open(self):
    '''Opens the stream for reading.'''
    options = copy(self.__options)

    # Get scheme and format if not already given
    compression = None
    if self.__scheme is None or self.__format is None:
        detected_scheme, detected_format = helpers.detect_scheme_and_format(self.__source)
        scheme = self.__scheme or detected_scheme
        format = self.__format or detected_format
        # Get compression
        for type in config.SUPPORTED_COMPRESSION:
            if self.__compression == type or detected_format == type:
                compression = type
    else:
        scheme = self.__scheme
        format = self.__format

    # Initiate loader
    self.__loader = None
    if scheme is not None:
        loader_class = self.__custom_loaders.get(scheme)
        if loader_class is None:
            if scheme not in config.LOADERS:
                message = 'Scheme "%s" is not supported' % scheme
                raise exceptions.SchemeError(message)
            loader_path = config.LOADERS[scheme]
            if loader_path:
                loader_class = helpers.import_attribute(loader_path)
        if loader_class is not None:
            loader_options = helpers.extract_options(options, loader_class.options)
            if compression and 'http_stream' in loader_class.options:
                loader_options['http_stream'] = False
            self.__loader = loader_class(
                bytes_sample_size=self.__bytes_sample_size,
                **loader_options)

    # Zip compression
    if compression == 'zip' and six.PY3:
        source = self.__loader.load(self.__source, mode='b')
        with zipfile.ZipFile(source) as archive:
            name = archive.namelist()[0]
            if 'filename' in options.keys():
                name = options['filename']
                del options['filename']
            with archive.open(name) as file:
                source = tempfile.NamedTemporaryFile(suffix='.' + name)
                for line in file:
                    source.write(line)
                source.seek(0)
        self.__source = source
        self.__loader = StreamLoader(bytes_sample_size=self.__bytes_sample_size)
        format = self.__format or helpers.detect_scheme_and_format(source.name)[1]
        scheme = 'stream'

    # Gzip compression
    elif compression == 'gz' and six.PY3:
        name = self.__source.replace('.gz', '')
        self.__source = gzip.open(self.__loader.load(self.__source, mode='b'))
        self.__loader = StreamLoader(bytes_sample_size=self.__bytes_sample_size)
        format = self.__format or helpers.detect_scheme_and_format(name)[1]
        scheme = 'stream'

    # Not supported compression
    elif compression:
        message = 'Compression "%s" is not supported for your Python version'
        raise exceptions.TabulatorException(message % compression)

    # Initiate parser
    parser_class = self.__custom_parsers.get(format)
    if parser_class is None:
        if format not in config.PARSERS:
            message = 'Format "%s" is not supported' % format
            raise exceptions.FormatError(message)
        parser_class = helpers.import_attribute(config.PARSERS[format])
    parser_options = helpers.extract_options(options, parser_class.options)
    self.__parser = parser_class(self.__loader,
                                 force_parse=self.__force_parse,
                                 **parser_options)

    # Bad options
    if options:
        message = 'Not supported option(s) "%s" for scheme "%s" and format "%s"'
        message = message % (', '.join(options), scheme, format)
        warnings.warn(message, UserWarning)

    # Open and setup
    self.__parser.open(self.__source, encoding=self.__encoding)
    self.__extract_sample()
    self.__extract_headers()
    if not self.__allow_html:
        self.__detect_html()

    # Set scheme/format/encoding
    self.__actual_scheme = scheme
    self.__actual_format = format
    self.__actual_encoding = self.__parser.encoding
    return self
self.__parser.open(self.__source, encoding=self.__encoding)
self.__extract_sample()
self.__extract_headers()
if not self.__allow_html:
self.__detect_html()
# Set scheme/format/encoding
self.__actual_scheme = scheme
self.__actual_format = format
self.__actual_encoding = self.__parser.encoding
return self |
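The scheme/format/compression detection at the top of `open` can be sketched in isolation. This is an illustrative simplification under stated assumptions: the `detect` helper and its suffix heuristics are invented for this sketch and are not the tabulator API.

```python
SUPPORTED_COMPRESSION = ['zip', 'gz']

def detect(source, scheme=None, fmt=None):
    # Infer scheme and format from the source string only when the
    # caller did not pass them explicitly, mirroring `open` above.
    detected_scheme = 'http' if source.startswith('http') else 'file'
    detected_format = source.rsplit('.', 1)[-1]
    scheme = scheme or detected_scheme
    fmt = fmt or detected_format
    # A "format" that is actually a compression suffix triggers the
    # decompression branch instead of a parser lookup.
    compression = fmt if fmt in SUPPORTED_COMPRESSION else None
    return scheme, fmt, compression

print(detect('http://example.com/table.csv'))  # ('http', 'csv', None)
print(detect('data/table.csv.gz'))             # ('file', 'gz', 'gz')
```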
def initialize(
self, M_c, M_r, T, seed, initialization=b'from_the_prior',
row_initialization=-1, n_chains=1,
ROW_CRP_ALPHA_GRID=(), COLUMN_CRP_ALPHA_GRID=(),
S_GRID=(), MU_GRID=(), N_GRID=31,):
"""Sample a latent state from prior.
T, list of lists:
The data table in mapped representation (all floats, generated
by data_utils.read_data_objects)
:returns: X_L, X_D -- the latent state
"""
# FIXME: why is M_r passed?
arg_tuples = self.get_initialize_arg_tuples(
M_c, M_r, T, initialization, row_initialization, n_chains,
ROW_CRP_ALPHA_GRID, COLUMN_CRP_ALPHA_GRID, S_GRID, MU_GRID, N_GRID,
make_get_next_seed(seed),)
chain_tuples = self.mapper(self.do_initialize, arg_tuples)
X_L_list, X_D_list = zip(*chain_tuples)
if n_chains == 1:
X_L_list, X_D_list = X_L_list[0], X_D_list[0]
return X_L_list, X_D_list | Sample a latent state from prior.
T, list of lists:
The data table in mapped representation (all floats, generated
by data_utils.read_data_objects)
:returns: X_L, X_D -- the latent state | Below is the instruction that describes the task:
### Input:
Sample a latent state from prior.
T, list of lists:
The data table in mapped representation (all floats, generated
by data_utils.read_data_objects)
:returns: X_L, X_D -- the latent state
### Response:
def initialize(
self, M_c, M_r, T, seed, initialization=b'from_the_prior',
row_initialization=-1, n_chains=1,
ROW_CRP_ALPHA_GRID=(), COLUMN_CRP_ALPHA_GRID=(),
S_GRID=(), MU_GRID=(), N_GRID=31,):
"""Sample a latent state from prior.
T, list of lists:
The data table in mapped representation (all floats, generated
by data_utils.read_data_objects)
:returns: X_L, X_D -- the latent state
"""
# FIXME: why is M_r passed?
arg_tuples = self.get_initialize_arg_tuples(
M_c, M_r, T, initialization, row_initialization, n_chains,
ROW_CRP_ALPHA_GRID, COLUMN_CRP_ALPHA_GRID, S_GRID, MU_GRID, N_GRID,
make_get_next_seed(seed),)
chain_tuples = self.mapper(self.do_initialize, arg_tuples)
X_L_list, X_D_list = zip(*chain_tuples)
if n_chains == 1:
X_L_list, X_D_list = X_L_list[0], X_D_list[0]
return X_L_list, X_D_list |
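The chain-collection step at the end of `initialize` can be shown on its own. A minimal sketch; the `unzip_chains` name is an assumption for illustration:

```python
def unzip_chains(chain_tuples, n_chains):
    # Each worker returns one (X_L, X_D) pair; zip(*...) regroups them
    # into parallel sequences, collapsed to scalars for a single chain.
    X_L_list, X_D_list = zip(*chain_tuples)
    if n_chains == 1:
        X_L_list, X_D_list = X_L_list[0], X_D_list[0]
    return X_L_list, X_D_list

print(unzip_chains([({'cols': 2}, [0, 1])], n_chains=1))
# ({'cols': 2}, [0, 1])
```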
def reindex_variables(
variables: Mapping[Any, Variable],
sizes: Mapping[Any, int],
indexes: Mapping[Any, pd.Index],
indexers: Mapping,
method: Optional[str] = None,
tolerance: Any = None,
copy: bool = True,
) -> 'Tuple[OrderedDict[Any, Variable], OrderedDict[Any, pd.Index]]':
"""Conform a dictionary of aligned variables onto a new set of variables,
filling in missing values with NaN.
Not public API.
Parameters
----------
variables : dict-like
Dictionary of xarray.Variable objects.
sizes : dict-like
Dictionary from dimension names to integer sizes.
indexes : dict-like
Dictionary of indexes associated with variables.
indexers : dict
Dictionary with keys given by dimension names and values given by
arrays of coordinates tick labels. Any mis-matched coordinate values
will be filled in with NaN, and any mis-matched dimension names will
simply be ignored.
method : {None, 'nearest', 'pad'/'ffill', 'backfill'/'bfill'}, optional
Method to use for filling index values in ``indexers`` not found in
this dataset:
* None (default): don't fill gaps
* pad / ffill: propagate last valid index value forward
* backfill / bfill: propagate next valid index value backward
* nearest: use nearest valid index value
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the
equation ``abs(index[indexer] - target) <= tolerance``.
copy : bool, optional
If ``copy=True``, data in the return values is always copied. If
``copy=False`` and reindexing is unnecessary, or can be performed
with only slice operations, then the output may share memory with
the input. In either case, new xarray objects are always returned.
Returns
-------
reindexed : OrderedDict
Dict of reindexed variables.
new_indexes : OrderedDict
Dict of indexes associated with the reindexed variables.
"""
from .dataarray import DataArray
# create variables for the new dataset
reindexed = OrderedDict() # type: OrderedDict[Any, Variable]
# build up indexers for assignment along each dimension
int_indexers = {}
new_indexes = OrderedDict(indexes)
masked_dims = set()
unchanged_dims = set()
for dim, indexer in indexers.items():
if isinstance(indexer, DataArray) and indexer.dims != (dim,):
warnings.warn(
"Indexer has dimensions {0:s} that are different "
"from that to be indexed along {1:s}. "
"This will behave differently in the future.".format(
str(indexer.dims), dim),
FutureWarning, stacklevel=3)
target = new_indexes[dim] = utils.safe_cast_to_index(indexers[dim])
if dim in indexes:
index = indexes[dim]
if not index.is_unique:
raise ValueError(
'cannot reindex or align along dimension %r because the '
'index has duplicate values' % dim)
int_indexer = get_indexer_nd(index, target, method, tolerance)
            # We use negative values from get_indexer_nd to signify
# values that are missing in the index.
if (int_indexer < 0).any():
masked_dims.add(dim)
elif np.array_equal(int_indexer, np.arange(len(index))):
unchanged_dims.add(dim)
int_indexers[dim] = int_indexer
if dim in variables:
var = variables[dim]
args = (var.attrs, var.encoding) # type: tuple
else:
args = ()
reindexed[dim] = IndexVariable((dim,), target, *args)
for dim in sizes:
if dim not in indexes and dim in indexers:
existing_size = sizes[dim]
new_size = indexers[dim].size
if existing_size != new_size:
raise ValueError(
'cannot reindex or align along dimension %r without an '
'index because its size %r is different from the size of '
'the new index %r' % (dim, existing_size, new_size))
for name, var in variables.items():
if name not in indexers:
key = tuple(slice(None)
if d in unchanged_dims
else int_indexers.get(d, slice(None))
for d in var.dims)
needs_masking = any(d in masked_dims for d in var.dims)
if needs_masking:
new_var = var._getitem_with_mask(key)
elif all(is_full_slice(k) for k in key):
# no reindexing necessary
# here we need to manually deal with copying data, since
# we neither created a new ndarray nor used fancy indexing
new_var = var.copy(deep=copy)
else:
new_var = var[key]
reindexed[name] = new_var
return reindexed, new_indexes | Conform a dictionary of aligned variables onto a new set of variables,
filling in missing values with NaN.
Not public API.
Parameters
----------
variables : dict-like
Dictionary of xarray.Variable objects.
sizes : dict-like
Dictionary from dimension names to integer sizes.
indexes : dict-like
Dictionary of indexes associated with variables.
indexers : dict
Dictionary with keys given by dimension names and values given by
arrays of coordinates tick labels. Any mis-matched coordinate values
will be filled in with NaN, and any mis-matched dimension names will
simply be ignored.
method : {None, 'nearest', 'pad'/'ffill', 'backfill'/'bfill'}, optional
Method to use for filling index values in ``indexers`` not found in
this dataset:
* None (default): don't fill gaps
* pad / ffill: propagate last valid index value forward
* backfill / bfill: propagate next valid index value backward
* nearest: use nearest valid index value
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the
equation ``abs(index[indexer] - target) <= tolerance``.
copy : bool, optional
If ``copy=True``, data in the return values is always copied. If
``copy=False`` and reindexing is unnecessary, or can be performed
with only slice operations, then the output may share memory with
the input. In either case, new xarray objects are always returned.
Returns
-------
reindexed : OrderedDict
Dict of reindexed variables.
new_indexes : OrderedDict
Dict of indexes associated with the reindexed variables. | Below is the instruction that describes the task:
### Input:
Conform a dictionary of aligned variables onto a new set of variables,
filling in missing values with NaN.
Not public API.
Parameters
----------
variables : dict-like
Dictionary of xarray.Variable objects.
sizes : dict-like
Dictionary from dimension names to integer sizes.
indexes : dict-like
Dictionary of indexes associated with variables.
indexers : dict
Dictionary with keys given by dimension names and values given by
arrays of coordinates tick labels. Any mis-matched coordinate values
will be filled in with NaN, and any mis-matched dimension names will
simply be ignored.
method : {None, 'nearest', 'pad'/'ffill', 'backfill'/'bfill'}, optional
Method to use for filling index values in ``indexers`` not found in
this dataset:
* None (default): don't fill gaps
* pad / ffill: propagate last valid index value forward
* backfill / bfill: propagate next valid index value backward
* nearest: use nearest valid index value
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the
equation ``abs(index[indexer] - target) <= tolerance``.
copy : bool, optional
If ``copy=True``, data in the return values is always copied. If
``copy=False`` and reindexing is unnecessary, or can be performed
with only slice operations, then the output may share memory with
the input. In either case, new xarray objects are always returned.
Returns
-------
reindexed : OrderedDict
Dict of reindexed variables.
new_indexes : OrderedDict
Dict of indexes associated with the reindexed variables.
### Response:
def reindex_variables(
variables: Mapping[Any, Variable],
sizes: Mapping[Any, int],
indexes: Mapping[Any, pd.Index],
indexers: Mapping,
method: Optional[str] = None,
tolerance: Any = None,
copy: bool = True,
) -> 'Tuple[OrderedDict[Any, Variable], OrderedDict[Any, pd.Index]]':
"""Conform a dictionary of aligned variables onto a new set of variables,
filling in missing values with NaN.
Not public API.
Parameters
----------
variables : dict-like
Dictionary of xarray.Variable objects.
sizes : dict-like
Dictionary from dimension names to integer sizes.
indexes : dict-like
Dictionary of indexes associated with variables.
indexers : dict
Dictionary with keys given by dimension names and values given by
arrays of coordinates tick labels. Any mis-matched coordinate values
will be filled in with NaN, and any mis-matched dimension names will
simply be ignored.
method : {None, 'nearest', 'pad'/'ffill', 'backfill'/'bfill'}, optional
Method to use for filling index values in ``indexers`` not found in
this dataset:
* None (default): don't fill gaps
* pad / ffill: propagate last valid index value forward
* backfill / bfill: propagate next valid index value backward
* nearest: use nearest valid index value
tolerance : optional
Maximum distance between original and new labels for inexact matches.
The values of the index at the matching locations must satisfy the
equation ``abs(index[indexer] - target) <= tolerance``.
copy : bool, optional
If ``copy=True``, data in the return values is always copied. If
``copy=False`` and reindexing is unnecessary, or can be performed
with only slice operations, then the output may share memory with
the input. In either case, new xarray objects are always returned.
Returns
-------
reindexed : OrderedDict
Dict of reindexed variables.
new_indexes : OrderedDict
Dict of indexes associated with the reindexed variables.
"""
from .dataarray import DataArray
# create variables for the new dataset
reindexed = OrderedDict() # type: OrderedDict[Any, Variable]
# build up indexers for assignment along each dimension
int_indexers = {}
new_indexes = OrderedDict(indexes)
masked_dims = set()
unchanged_dims = set()
for dim, indexer in indexers.items():
if isinstance(indexer, DataArray) and indexer.dims != (dim,):
warnings.warn(
"Indexer has dimensions {0:s} that are different "
"from that to be indexed along {1:s}. "
"This will behave differently in the future.".format(
str(indexer.dims), dim),
FutureWarning, stacklevel=3)
target = new_indexes[dim] = utils.safe_cast_to_index(indexers[dim])
if dim in indexes:
index = indexes[dim]
if not index.is_unique:
raise ValueError(
'cannot reindex or align along dimension %r because the '
'index has duplicate values' % dim)
int_indexer = get_indexer_nd(index, target, method, tolerance)
            # We use negative values from get_indexer_nd to signify
# values that are missing in the index.
if (int_indexer < 0).any():
masked_dims.add(dim)
elif np.array_equal(int_indexer, np.arange(len(index))):
unchanged_dims.add(dim)
int_indexers[dim] = int_indexer
if dim in variables:
var = variables[dim]
args = (var.attrs, var.encoding) # type: tuple
else:
args = ()
reindexed[dim] = IndexVariable((dim,), target, *args)
for dim in sizes:
if dim not in indexes and dim in indexers:
existing_size = sizes[dim]
new_size = indexers[dim].size
if existing_size != new_size:
raise ValueError(
'cannot reindex or align along dimension %r without an '
'index because its size %r is different from the size of '
'the new index %r' % (dim, existing_size, new_size))
for name, var in variables.items():
if name not in indexers:
key = tuple(slice(None)
if d in unchanged_dims
else int_indexers.get(d, slice(None))
for d in var.dims)
needs_masking = any(d in masked_dims for d in var.dims)
if needs_masking:
new_var = var._getitem_with_mask(key)
elif all(is_full_slice(k) for k in key):
# no reindexing necessary
# here we need to manually deal with copying data, since
# we neither created a new ndarray nor used fancy indexing
new_var = var.copy(deep=copy)
else:
new_var = var[key]
reindexed[name] = new_var
return reindexed, new_indexes |
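The masking behaviour that `reindex_variables` relies on (negative positions from `get_indexer_nd` mark missing labels, which become NaN) can be sketched for one dimension with plain NumPy. `reindex_1d` is a hypothetical helper, not xarray API:

```python
import numpy as np

def reindex_1d(values, index, target):
    # Position of each target label in the existing index; -1 marks
    # labels that are missing and must be filled with NaN.
    lookup = {label: i for i, label in enumerate(index)}
    int_indexer = np.array([lookup.get(label, -1) for label in target])
    out = values.astype(float)[int_indexer]
    out[int_indexer < 0] = np.nan
    return out

print(reindex_1d(np.array([10, 20, 30]), ['a', 'b', 'c'], ['b', 'd']))
# [20. nan]
```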
def deserialize_json(cls, serialized_json):
'''Return a macaroon deserialized from a string
@param serialized_json The string to decode {str}
@return {Macaroon}
'''
serialized = json.loads(serialized_json)
return Macaroon.from_dict(serialized) | Return a macaroon deserialized from a string
@param serialized_json The string to decode {str}
@return {Macaroon} | Below is the instruction that describes the task:
### Input:
Return a macaroon deserialized from a string
@param serialized_json The string to decode {str}
@return {Macaroon}
### Response:
def deserialize_json(cls, serialized_json):
'''Return a macaroon deserialized from a string
@param serialized_json The string to decode {str}
@return {Macaroon}
'''
serialized = json.loads(serialized_json)
return Macaroon.from_dict(serialized) |
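The pattern `deserialize_json` follows — parse JSON, then delegate to `from_dict` — works for any class with that constructor. A self-contained sketch using a stand-in `Token` class (not the pymacaroons API):

```python
import json

class Token(object):
    """Stand-in for Macaroon: any class exposing from_dict()."""
    def __init__(self, data):
        self.data = data

    @classmethod
    def from_dict(cls, d):
        return cls(d)

    @classmethod
    def deserialize_json(cls, serialized_json):
        # Same two steps as Macaroon.deserialize_json above.
        return cls.from_dict(json.loads(serialized_json))

tok = Token.deserialize_json('{"id": "abc", "caveats": []}')
print(tok.data)  # {'id': 'abc', 'caveats': []}
```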
def useradd(pwfile, user, password, opts='', runas=None):
'''
Add a user to htpasswd file using the htpasswd command. If the htpasswd
file does not exist, it will be created.
pwfile
Path to htpasswd file
user
User name
password
User password
opts
Valid options that can be passed are:
- `n` Don't update file; display results on stdout.
- `m` Force MD5 encryption of the password (default).
- `d` Force CRYPT encryption of the password.
- `p` Do not encrypt the password (plaintext).
- `s` Force SHA encryption of the password.
runas
The system user to run htpasswd command with
CLI Examples:
.. code-block:: bash
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpassword
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpass opts=ns
'''
if not os.path.exists(pwfile):
opts += 'c'
cmd = ['htpasswd', '-b{0}'.format(opts), pwfile, user, password]
return __salt__['cmd.run_all'](cmd, runas=runas, python_shell=False) | Add a user to htpasswd file using the htpasswd command. If the htpasswd
file does not exist, it will be created.
pwfile
Path to htpasswd file
user
User name
password
User password
opts
Valid options that can be passed are:
- `n` Don't update file; display results on stdout.
- `m` Force MD5 encryption of the password (default).
- `d` Force CRYPT encryption of the password.
- `p` Do not encrypt the password (plaintext).
- `s` Force SHA encryption of the password.
runas
The system user to run htpasswd command with
CLI Examples:
.. code-block:: bash
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpassword
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpass opts=ns | Below is the instruction that describes the task:
### Input:
Add a user to htpasswd file using the htpasswd command. If the htpasswd
file does not exist, it will be created.
pwfile
Path to htpasswd file
user
User name
password
User password
opts
Valid options that can be passed are:
- `n` Don't update file; display results on stdout.
- `m` Force MD5 encryption of the password (default).
- `d` Force CRYPT encryption of the password.
- `p` Do not encrypt the password (plaintext).
- `s` Force SHA encryption of the password.
runas
The system user to run htpasswd command with
CLI Examples:
.. code-block:: bash
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpassword
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpass opts=ns
### Response:
def useradd(pwfile, user, password, opts='', runas=None):
'''
Add a user to htpasswd file using the htpasswd command. If the htpasswd
file does not exist, it will be created.
pwfile
Path to htpasswd file
user
User name
password
User password
opts
Valid options that can be passed are:
- `n` Don't update file; display results on stdout.
- `m` Force MD5 encryption of the password (default).
- `d` Force CRYPT encryption of the password.
- `p` Do not encrypt the password (plaintext).
- `s` Force SHA encryption of the password.
runas
The system user to run htpasswd command with
CLI Examples:
.. code-block:: bash
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpassword
salt '*' webutil.useradd /etc/httpd/htpasswd larry badpass opts=ns
'''
if not os.path.exists(pwfile):
opts += 'c'
cmd = ['htpasswd', '-b{0}'.format(opts), pwfile, user, password]
return __salt__['cmd.run_all'](cmd, runas=runas, python_shell=False) |
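The command `useradd` assembles can be checked without Salt or htpasswd installed. A sketch of just the argument assembly; `build_htpasswd_cmd` is an illustrative name:

```python
def build_htpasswd_cmd(pwfile, user, password, opts='', file_exists=True):
    # -b reads the password from the command line; appending 'c' tells
    # htpasswd to create the file when it does not exist yet.
    if not file_exists:
        opts += 'c'
    return ['htpasswd', '-b{0}'.format(opts), pwfile, user, password]

print(build_htpasswd_cmd('/etc/httpd/htpasswd', 'larry', 'badpass',
                         opts='ns', file_exists=False))
# ['htpasswd', '-bnsc', '/etc/httpd/htpasswd', 'larry', 'badpass']
```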
def _validate_enum(self, item: Any, enum: Any) -> Any:
"""Validate enum parameter of method in subclasses of BaseProvider.
:param item: Item of enum object.
:param enum: Enum object.
:return: Value of item.
:raises NonEnumerableError: if ``item`` not in ``enum``.
"""
if item is None:
result = get_random_item(enum, self.random)
elif item and isinstance(item, enum):
result = item
else:
raise NonEnumerableError(enum)
return result.value | Validate enum parameter of method in subclasses of BaseProvider.
:param item: Item of enum object.
:param enum: Enum object.
:return: Value of item.
:raises NonEnumerableError: if ``item`` not in ``enum``. | Below is the instruction that describes the task:
### Input:
Validate enum parameter of method in subclasses of BaseProvider.
:param item: Item of enum object.
:param enum: Enum object.
:return: Value of item.
:raises NonEnumerableError: if ``item`` not in ``enum``.
### Response:
def _validate_enum(self, item: Any, enum: Any) -> Any:
"""Validate enum parameter of method in subclasses of BaseProvider.
:param item: Item of enum object.
:param enum: Enum object.
:return: Value of item.
:raises NonEnumerableError: if ``item`` not in ``enum``.
"""
if item is None:
result = get_random_item(enum, self.random)
elif item and isinstance(item, enum):
result = item
else:
raise NonEnumerableError(enum)
return result.value |
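The three branches of `_validate_enum` can be exercised with a throwaway enum. A standalone sketch under stated assumptions: the names are illustrative, and the library's `NonEnumerableError` is replaced with `ValueError` here:

```python
import enum
import random

class Gender(enum.Enum):
    MALE = 'male'
    FEMALE = 'female'

def validate_enum(item, enum_cls):
    # None -> random member; a member -> pass through; anything else -> error.
    if item is None:
        result = random.choice(list(enum_cls))
    elif isinstance(item, enum_cls):
        result = item
    else:
        raise ValueError('%r is not a member of %s' % (item, enum_cls))
    return result.value

print(validate_enum(Gender.MALE, Gender))  # male
print(validate_enum(None, Gender) in ('male', 'female'))  # True
```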
def _escape_xref(xref_match):
"""Escape things that need to be escaped if they're in a cross-reference.
"""
xref = xref_match.group()
xref = xref.replace('/', '%2F')
xref = xref.replace('?', '%3F')
xref = xref.replace('#', '%23')
return xref | Escape things that need to be escaped if they're in a cross-reference. | Below is the instruction that describes the task:
### Input:
Escape things that need to be escaped if they're in a cross-reference.
### Response:
def _escape_xref(xref_match):
"""Escape things that need to be escaped if they're in a cross-reference.
"""
xref = xref_match.group()
xref = xref.replace('/', '%2F')
xref = xref.replace('?', '%3F')
xref = xref.replace('#', '%23')
return xref |
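`_escape_xref` is written as a `re.sub` callback. A runnable sketch with a hypothetical `[[...]]` cross-reference pattern (the pattern is an assumption for illustration):

```python
import re

def escape_xref(xref_match):
    # Percent-encode the characters that would break a cross-reference URL.
    xref = xref_match.group()
    for char, code in (('/', '%2F'), ('?', '%3F'), ('#', '%23')):
        xref = xref.replace(char, code)
    return xref

text = 'see [[a/b?c#d]] for details'
print(re.sub(r'\[\[.*?\]\]', escape_xref, text))
# see [[a%2Fb%3Fc%23d]] for details
```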
def extended_path(self):
"""
        Add prefix \\?\ to every absolute path, so that it's an "extended-length"
        path, which can be longer than 259 characters (known as "MAX_PATH")
see:
https://msdn.microsoft.com/en-us/library/aa365247.aspx#maxpath
"""
if self.is_absolute() and not self.path.startswith("\\\\"):
return "\\\\?\\%s" % self.path
return self.path | Add prefix \\?\ to every absolute path, so that it's an "extended-length"
path, which can be longer than 259 characters (known as "MAX_PATH")
see:
https://msdn.microsoft.com/en-us/library/aa365247.aspx#maxpath | Below is the instruction that describes the task:
### Input:
Add prefix \\?\ to every absolute path, so that it's an "extended-length"
path, which can be longer than 259 characters (known as "MAX_PATH")
see:
https://msdn.microsoft.com/en-us/library/aa365247.aspx#maxpath
### Response:
def extended_path(self):
"""
        Add prefix \\?\ to every absolute path, so that it's an "extended-length"
        path, which can be longer than 259 characters (known as "MAX_PATH")
see:
https://msdn.microsoft.com/en-us/library/aa365247.aspx#maxpath
"""
if self.is_absolute() and not self.path.startswith("\\\\"):
return "\\\\?\\%s" % self.path
return self.path |
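The prefix rule in `extended_path` can be demonstrated as a free function. A sketch, assuming absoluteness is passed in rather than computed from the path:

```python
def extended_path(path, is_absolute):
    # Prefix absolute, non-UNC paths with \\?\ so Windows accepts paths
    # longer than the 259-character MAX_PATH limit.
    if is_absolute and not path.startswith('\\\\'):
        return '\\\\?\\%s' % path
    return path

print(extended_path('C:\\temp\\file.txt', True))  # \\?\C:\temp\file.txt
print(extended_path('\\\\server\\share', True))   # \\server\share (UNC, unchanged)
```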
def ClosureTable(model_class, foreign_key=None, referencing_class=None,
referencing_key=None):
"""Model factory for the transitive closure extension."""
if referencing_class is None:
referencing_class = model_class
if foreign_key is None:
for field_obj in model_class._meta.refs:
if field_obj.rel_model is model_class:
foreign_key = field_obj
break
else:
raise ValueError('Unable to find self-referential foreign key.')
source_key = model_class._meta.primary_key
if referencing_key is None:
referencing_key = source_key
class BaseClosureTable(VirtualModel):
depth = VirtualField(IntegerField)
id = VirtualField(IntegerField)
idcolumn = VirtualField(TextField)
parentcolumn = VirtualField(TextField)
root = VirtualField(IntegerField)
tablename = VirtualField(TextField)
class Meta:
extension_module = 'transitive_closure'
@classmethod
def descendants(cls, node, depth=None, include_node=False):
query = (model_class
.select(model_class, cls.depth.alias('depth'))
.join(cls, on=(source_key == cls.id))
.where(cls.root == node)
.objects())
if depth is not None:
query = query.where(cls.depth == depth)
elif not include_node:
query = query.where(cls.depth > 0)
return query
@classmethod
def ancestors(cls, node, depth=None, include_node=False):
query = (model_class
.select(model_class, cls.depth.alias('depth'))
.join(cls, on=(source_key == cls.root))
.where(cls.id == node)
.objects())
if depth:
query = query.where(cls.depth == depth)
elif not include_node:
query = query.where(cls.depth > 0)
return query
@classmethod
def siblings(cls, node, include_node=False):
if referencing_class is model_class:
# self-join
fk_value = node.__data__.get(foreign_key.name)
query = model_class.select().where(foreign_key == fk_value)
else:
            # siblings as given in referencing_class
siblings = (referencing_class
.select(referencing_key)
.join(cls, on=(foreign_key == cls.root))
.where((cls.id == node) & (cls.depth == 1)))
            # the corresponding models
query = (model_class
.select()
.where(source_key << siblings)
.objects())
if not include_node:
query = query.where(source_key != node)
return query
class Meta:
database = referencing_class._meta.database
options = {
'tablename': referencing_class._meta.table_name,
'idcolumn': referencing_key.column_name,
'parentcolumn': foreign_key.column_name}
primary_key = False
name = '%sClosure' % model_class.__name__
return type(name, (BaseClosureTable,), {'Meta': Meta}) | Model factory for the transitive closure extension. | Below is the instruction that describes the task:
### Input:
Model factory for the transitive closure extension.
### Response:
def ClosureTable(model_class, foreign_key=None, referencing_class=None,
referencing_key=None):
"""Model factory for the transitive closure extension."""
if referencing_class is None:
referencing_class = model_class
if foreign_key is None:
for field_obj in model_class._meta.refs:
if field_obj.rel_model is model_class:
foreign_key = field_obj
break
else:
raise ValueError('Unable to find self-referential foreign key.')
source_key = model_class._meta.primary_key
if referencing_key is None:
referencing_key = source_key
class BaseClosureTable(VirtualModel):
depth = VirtualField(IntegerField)
id = VirtualField(IntegerField)
idcolumn = VirtualField(TextField)
parentcolumn = VirtualField(TextField)
root = VirtualField(IntegerField)
tablename = VirtualField(TextField)
class Meta:
extension_module = 'transitive_closure'
@classmethod
def descendants(cls, node, depth=None, include_node=False):
query = (model_class
.select(model_class, cls.depth.alias('depth'))
.join(cls, on=(source_key == cls.id))
.where(cls.root == node)
.objects())
if depth is not None:
query = query.where(cls.depth == depth)
elif not include_node:
query = query.where(cls.depth > 0)
return query
@classmethod
def ancestors(cls, node, depth=None, include_node=False):
query = (model_class
.select(model_class, cls.depth.alias('depth'))
.join(cls, on=(source_key == cls.root))
.where(cls.id == node)
.objects())
if depth:
query = query.where(cls.depth == depth)
elif not include_node:
query = query.where(cls.depth > 0)
return query
@classmethod
def siblings(cls, node, include_node=False):
if referencing_class is model_class:
# self-join
fk_value = node.__data__.get(foreign_key.name)
query = model_class.select().where(foreign_key == fk_value)
else:
            # siblings as given in referencing_class
siblings = (referencing_class
.select(referencing_key)
.join(cls, on=(foreign_key == cls.root))
.where((cls.id == node) & (cls.depth == 1)))
            # the corresponding models
query = (model_class
.select()
.where(source_key << siblings)
.objects())
if not include_node:
query = query.where(source_key != node)
return query
class Meta:
database = referencing_class._meta.database
options = {
'tablename': referencing_class._meta.table_name,
'idcolumn': referencing_key.column_name,
'parentcolumn': foreign_key.column_name}
primary_key = False
name = '%sClosure' % model_class.__name__
return type(name, (BaseClosureTable,), {'Meta': Meta}) |
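The queries `ClosureTable` generates are easier to see against a tiny in-memory closure relation. A sketch of the `descendants` semantics only — no peewee, and the data is invented for illustration:

```python
# Each row (root, id, depth) says `id` sits `depth` steps below `root`.
# Tree: 1 -> 2 -> 3, so the closure holds every ancestor/descendant pair.
closure = [
    (1, 1, 0), (1, 2, 1), (1, 3, 2),
    (2, 2, 0), (2, 3, 1),
    (3, 3, 0),
]

def descendants(node, depth=None, include_node=False):
    rows = [(i, d) for (r, i, d) in closure if r == node]
    if depth is not None:
        rows = [(i, d) for (i, d) in rows if d == depth]
    elif not include_node:
        # depth 0 is the node itself, mirroring `cls.depth > 0` above.
        rows = [(i, d) for (i, d) in rows if d > 0]
    return rows

print(descendants(1))           # [(2, 1), (3, 2)]
print(descendants(1, depth=1))  # [(2, 1)]
```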
def getMorphParameters(fromTGFN, toTGFN, tierName,
filterFunc=None, useBlanks=False):
'''
Get intervals for source and target audio files
Use this information to find out how much to stretch/shrink each source
interval.
The target values are based on the contents of toTGFN.
'''
if filterFunc is None:
filterFunc = lambda entry: True # Everything is accepted
fromEntryList = utils.getIntervals(fromTGFN, tierName,
includeUnlabeledRegions=useBlanks)
toEntryList = utils.getIntervals(toTGFN, tierName,
includeUnlabeledRegions=useBlanks)
fromEntryList = [entry for entry in fromEntryList if filterFunc(entry)]
toEntryList = [entry for entry in toEntryList if filterFunc(entry)]
assert(len(fromEntryList) == len(toEntryList))
durationParameters = []
for fromEntry, toEntry in zip(fromEntryList, toEntryList):
fromStart, fromEnd = fromEntry[:2]
toStart, toEnd = toEntry[:2]
# Praat will ignore a second value appearing at the same time as
# another so we give each start a tiny offset to distinguish intervals
# that start and end at the same point
toStart += PRAAT_TIME_DIFF
fromStart += PRAAT_TIME_DIFF
ratio = (toEnd - toStart) / float((fromEnd - fromStart))
durationParameters.append((fromStart, fromEnd, ratio))
return durationParameters | Get intervals for source and target audio files
Use this information to find out how much to stretch/shrink each source
interval.
The target values are based on the contents of toTGFN. | Below is the instruction that describes the task:
### Input:
Get intervals for source and target audio files
Use this information to find out how much to stretch/shrink each source
interval.
The target values are based on the contents of toTGFN.
### Response:
def getMorphParameters(fromTGFN, toTGFN, tierName,
filterFunc=None, useBlanks=False):
'''
Get intervals for source and target audio files
Use this information to find out how much to stretch/shrink each source
interval.
The target values are based on the contents of toTGFN.
'''
if filterFunc is None:
filterFunc = lambda entry: True # Everything is accepted
fromEntryList = utils.getIntervals(fromTGFN, tierName,
includeUnlabeledRegions=useBlanks)
toEntryList = utils.getIntervals(toTGFN, tierName,
includeUnlabeledRegions=useBlanks)
fromEntryList = [entry for entry in fromEntryList if filterFunc(entry)]
toEntryList = [entry for entry in toEntryList if filterFunc(entry)]
assert(len(fromEntryList) == len(toEntryList))
durationParameters = []
for fromEntry, toEntry in zip(fromEntryList, toEntryList):
fromStart, fromEnd = fromEntry[:2]
toStart, toEnd = toEntry[:2]
# Praat will ignore a second value appearing at the same time as
# another so we give each start a tiny offset to distinguish intervals
# that start and end at the same point
toStart += PRAAT_TIME_DIFF
fromStart += PRAAT_TIME_DIFF
ratio = (toEnd - toStart) / float((fromEnd - fromStart))
durationParameters.append((fromStart, fromEnd, ratio))
return durationParameters |
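The ratio computation in `getMorphParameters` can be isolated from the TextGrid I/O. A sketch operating on plain `(start, end, label)` tuples, with the same tiny Praat offset applied to each start time:

```python
PRAAT_TIME_DIFF = 1e-5  # tiny offset so coincident boundary points stay distinct

def duration_ratios(from_entries, to_entries):
    # ratio > 1 means the source interval must be stretched,
    # ratio < 1 means it must be shrunk.
    assert len(from_entries) == len(to_entries)
    params = []
    for (f_start, f_end, _), (t_start, t_end, _) in zip(from_entries, to_entries):
        f_start += PRAAT_TIME_DIFF
        t_start += PRAAT_TIME_DIFF
        ratio = (t_end - t_start) / (f_end - f_start)
        params.append((f_start, f_end, ratio))
    return params

params = duration_ratios([(0.0, 1.0, 'a')], [(0.0, 2.0, 'a')])
print(round(params[0][2], 3))  # 2.0
```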
def getallkeys(self, key, failobj=None):
"""Returns a list of the full key names (not the items)
for all the matching values for key. The list will
contain a single entry for unambiguous matches and
multiple entries for ambiguous matches."""
if self.mmkeys is None: self._mmInit()
return self.mmkeys.get(key, failobj) | Returns a list of the full key names (not the items)
for all the matching values for key. The list will
contain a single entry for unambiguous matches and
multiple entries for ambiguous matches. | Below is the instruction that describes the task:
### Input:
Returns a list of the full key names (not the items)
for all the matching values for key. The list will
contain a single entry for unambiguous matches and
multiple entries for ambiguous matches.
### Response:
def getallkeys(self, key, failobj=None):
"""Returns a list of the full key names (not the items)
for all the matching values for key. The list will
contain a single entry for unambiguous matches and
multiple entries for ambiguous matches."""
if self.mmkeys is None: self._mmInit()
return self.mmkeys.get(key, failobj) |
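`getallkeys` depends on `self.mmkeys`, a minimum-match index built by `_mmInit`. This toy builder is an assumption about the shape of that index, not the library's actual code; it shows why an abbreviated key can map to one or several full names.

```python
def build_mmkeys(full_keys):
    """Map every prefix of every key to the list of full key names
    it could expand to (ambiguous prefixes map to several names)."""
    mmkeys = {}
    for key in full_keys:
        for i in range(1, len(key) + 1):
            mmkeys.setdefault(key[:i], []).append(key)
    return mmkeys

mm = build_mmkeys(["verbose", "version"])
print(mm["verb"])  # unambiguous: ['verbose']
print(mm["ver"])   # ambiguous: ['verbose', 'version']
```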
def submit_audit(self, item_list):
"""
Submit a code package uploaded by the third party for review.
For details, see
https://open.weixin.qq.com/cgi-bin/showdocument?action=dir_list&id=open1489140610_Uavc4
:param item_list: list of items to submit for review (at least 1, at most 5)
:type item_list: list[dict]
:return: audit ID
:rtype: int
"""
return self._post(
'wxa/submit_audit',
data={
'item_list': item_list,
},
result_processor=lambda x: x['auditid'],
)
def hydrate_sources(sources_field, glob_match_error_behavior):
"""Given a SourcesField, request a Snapshot for its path_globs and create an EagerFilesetWithSpec.
"""
# TODO(#5864): merge the target's selection of --glob-expansion-failure (which doesn't exist yet)
# with the global default!
path_globs = sources_field.path_globs.copy(glob_match_error_behavior=glob_match_error_behavior)
snapshot = yield Get(Snapshot, PathGlobs, path_globs)
fileset_with_spec = _eager_fileset_with_spec(
sources_field.address.spec_path,
sources_field.filespecs,
snapshot)
sources_field.validate_fn(fileset_with_spec)
yield HydratedField(sources_field.arg, fileset_with_spec)
def list(self, prefix='', delimiter='', filter_function=None, max_results=1, reverse_search=True, previous_key=''):
'''
a method to list keys in the collection
:param prefix: string with prefix value to filter results
:param delimiter: string with value which results must not contain (after prefix)
:param filter_function: (positional arguments) function used to filter results
:param max_results: integer with maximum number of results to return
:param reverse_search: boolean to search keys in reverse alphanumeric order
:param previous_key: string with key in collection to begin search after
:return: list of key strings
NOTE: each key string can be divided into one or more segments
based upon the / characters which occur in the key string as
well as its file extension type. if the key string represents
a file path, then each directory in the path, the file name
and the file extension are all separate indexed values.
eg. lab/unittests/1473719695.2165067.json is indexed:
[ 'lab', 'unittests', '1473719695.2165067', '.json' ]
it is possible to filter the records in the collection according
to one or more of these path segments using a filter_function.
NOTE: the filter_function must be able to accept an array of positional
arguments and return a value that can evaluate to true or false.
while searching the records, list produces an array of strings
which represent the directory structure in relative path of each
key string. if a filter_function is provided, this list of strings
is fed to the filter function. if the function evaluates this input
and returns a true value the file will be included in the list
results.
'''
title = '%s.list' % self.__class__.__name__
# validate input
input_fields = {
'prefix': prefix,
'delimiter': delimiter,
'max_results': max_results,
'record_key': previous_key
}
for key, value in input_fields.items():
if value:
object_title = '%s(%s=%s)' % (title, key, str(value))
self.fields.validate(value, '.%s' % key, object_title)
# validate filter function
if filter_function:
try:
path_segments = [ 'lab', 'unittests', '1473719695.2165067', '.json' ]
filter_function(*path_segments)
except Exception:
err_msg = '%s(filter_function=%s)' % (title, filter_function.__class__.__name__)
raise TypeError('%s must accept positional arguments.' % err_msg)
# construct empty results list
results_list = []
root_segments = self.collection_folder.split(os.sep)
if previous_key:
previous_key = os.path.join(self.collection_folder, previous_key)
# determine root path
root_path = self.collection_folder
if prefix:
from os import path
file_root, file_name = path.split(prefix)
root_path = path.join(root_path, file_root)
# walk collection folder to find files
for file_path in self.localhost.walk(root_path, reverse_search, previous_key):
path_segments = file_path.split(os.sep)
for i in range(len(root_segments)):
del path_segments[0]
record_key = os.path.join(*path_segments)
record_key = record_key.replace('\\','/')
# apply prefix filter
partial_key = record_key
if prefix:
if record_key.find(prefix) == 0:
partial_key = record_key[len(prefix):]
else:
continue
# apply delimiter filter
if delimiter:
if partial_key.find(delimiter) > -1:
continue
# apply filter function
if filter_function:
if filter_function(*path_segments):
results_list.append(record_key)
else:
results_list.append(record_key)
# return results list, capped at max_results
return results_list[:max_results]
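A `filter_function` for `list()` receives the key's path segments as positional arguments, e.g. `['lab', 'unittests', '1473719695.2165067', '.json']`. A hypothetical filter following that contract:

```python
def json_in_unittests(*segments):
    """Accept only .json keys that live under a 'unittests' directory.
    Each path segment (directories, file stem, extension) arrives as a
    separate positional argument."""
    return '.json' in segments and 'unittests' in segments

print(json_in_unittests('lab', 'unittests', '1473719695.2165067', '.json'))  # True
print(json_in_unittests('lab', 'prod', 'report', '.csv'))                    # False
```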
def vertical_padding(self, padding=None):
"""Returns or sets (if a value is provided) the chart's vertical
padding. This determines how much space will be above and below the
display area, as a proportion of overall height, and should be a value
between 0 and 0.5
:param float padding: If given, the chart's vertical_padding\
will be set to this.
:raises ValueError: if a value outside of 0 < n < 0.5 is given.
:rtype: float"""
if padding is None:
return self._vertical_padding
else:
if not isinstance(padding, float):
raise TypeError("padding must be float, not '%s'" % str(padding))
if not 0 < padding < 0.5:
raise ValueError(
"padding must be between 0 and 0.5 (not inclusive), not '%s'" % str(padding)
)
self._vertical_padding = padding
def add_result(self, code, message=None):
"""
add a result to the internal result list
arguments:
same arguments as for Result()
"""
self._results.append(Result(code, message))
def _compute_distance(self, rup, dists, C):
"""
Compute the distance function, equation (9):
"""
mref = 3.6
rref = 1.0
rval = np.sqrt(dists.rhypo ** 2 + C['h'] ** 2)
return (C['c1'] + C['c2'] * (rup.mag - mref)) *\
np.log10(rval / rref) + C['c3'] * (rval - rref)
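A quick numeric check of the distance term (equation 9), extracted from the method above. The coefficient values below are illustrative placeholders, not entries from any published coefficient table.

```python
import numpy as np

def distance_term(mag, rhypo, C, mref=3.6, rref=1.0):
    """Geometric spreading plus anelastic attenuation: the slope on
    log10(distance) grows with magnitude relative to mref."""
    rval = np.sqrt(rhypo ** 2 + C['h'] ** 2)
    return (C['c1'] + C['c2'] * (mag - mref)) * np.log10(rval / rref) \
        + C['c3'] * (rval - rref)

# Placeholder coefficients: c1/c2 control the log-distance slope,
# c3 the linear attenuation, h the pseudo-depth term
C = {'c1': -1.5, 'c2': 0.2, 'c3': -0.002, 'h': 5.0}
print(distance_term(5.0, np.array([10.0, 50.0]), C))
```

Motion decays with distance, so the term at 50 km should be more negative than at 10 km.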
def process_update(self, update):
"""Process an incoming update from a remote NetworkTables"""
data = json.loads(update)
NetworkTables.getEntry(data["k"]).setValue(data["v"])
def _dash_f_e_to_dict(self, info_filename, tree_filename):
"""
Raxml provides an option to fit model params to a tree,
selected with -f e.
The output is different and needs a different parser.
"""
with open(info_filename) as fl:
models, likelihood, partition_params = self._dash_f_e_parser.parseFile(fl).asList()
with open(tree_filename) as fl:
tree = fl.read()
d = {'likelihood': likelihood, 'ml_tree': tree, 'partitions': {}}
for model, params in zip(models, partition_params):
subdict = {}
index, name, _, alpha, rates, freqs = params
subdict['alpha'] = alpha
subdict['name'] = name
subdict['rates'] = rates
subdict['frequencies'] = freqs
subdict['model'] = model
d['partitions'][index] = subdict
return d
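The dictionary assembly at the end of `_dash_f_e_to_dict` can be traced with stand-in parser output. The tuples below are made up, mimicking the `(index, name, _, alpha, rates, freqs)` shape the parser yields:

```python
# Hypothetical parsed values standing in for the RAxML info-file parser output
models = ['GTR', 'GTR']
partition_params = [
    (0, 'part1', None, 0.5, [1.0], [0.25, 0.25, 0.25, 0.25]),
    (1, 'part2', None, 0.8, [1.2], [0.30, 0.20, 0.30, 0.20]),
]

d = {'likelihood': -1234.5, 'ml_tree': '(A,B);', 'partitions': {}}
for model, (index, name, _, alpha, rates, freqs) in zip(models, partition_params):
    d['partitions'][index] = {
        'alpha': alpha,
        'name': name,
        'rates': rates,
        'frequencies': freqs,
        'model': model,
    }

print(sorted(d['partitions']))  # [0, 1]
```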
def to_kraus(self):
"""
Compute the Kraus operator representation of the estimated process.
:return: The process as a list of Kraus operators.
:rtype: List[np.array]
"""
return [k.data.toarray() for k in qt.to_kraus(self.sop)]
def create_discount_coupon(cls, discount_coupon, **kwargs):
"""Create DiscountCoupon
Create a new DiscountCoupon
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_discount_coupon(discount_coupon, async=True)
>>> result = thread.get()
:param async bool
:param DiscountCoupon discount_coupon: Attributes of discountCoupon to create (required)
:return: DiscountCoupon
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return cls._create_discount_coupon_with_http_info(discount_coupon, **kwargs)
else:
(data) = cls._create_discount_coupon_with_http_info(discount_coupon, **kwargs)
return data
def set_style(style=None, rc=None):
"""
Set the aesthetic style of the plots.
This affects things like the color of the axes, whether a grid is
enabled by default, and other aesthetic elements.
Parameters
----------
style : dict, None, or one of {darkgrid, whitegrid, dark, white, ticks}
A dictionary of parameters or the name of a preconfigured set.
rc : dict, optional
Parameter mappings to override the values in the preset seaborn
style dictionaries. This only updates parameters that are
considered part of the style definition.
"""
style_object = _axes_style(style, rc)
mpl.rcParams.update(style_object)
def api_version(profile=None, **connection_args):
'''
Returns the API version derived from endpoint's response.
CLI Example:
.. code-block:: bash
salt '*' keystone.api_version
'''
kwargs = _get_kwargs(profile=profile, **connection_args)
auth_url = kwargs.get('auth_url', kwargs.get('endpoint', None))
try:
return salt.utils.http.query(auth_url, decode=True, decode_type='json',
verify_ssl=False)['dict']['version']['id']
except KeyError:
return None
def csview(self, view=False):
"""View chemical shift values organized by amino acid residue.
:param view: Open in default image viewer or save file in current working directory quietly.
:type view: :py:obj:`True` or :py:obj:`False`
:return: None
:rtype: :py:obj:`None`
"""
for starfile in fileio.read_files(self.from_path):
chains = starfile.chem_shifts_by_residue(amino_acids=self.amino_acids,
atoms=self.atoms,
amino_acids_and_atoms=self.amino_acids_and_atoms,
nmrstar_version=self.nmrstar_version)
for idx, chemshifts_dict in enumerate(chains):
nodes = []
edges = []
for seq_id in chemshifts_dict:
aaname = "{}_{}".format(chemshifts_dict[seq_id]["AA3Code"], seq_id)
label = '"{{{}|{}}}"'.format(seq_id, chemshifts_dict[seq_id]["AA3Code"])
color = 8
aanode_entry = " {} [label={}, fillcolor={}]".format(aaname, label, color)
nodes.append(aanode_entry)
currnodename = aaname
for atom_type in chemshifts_dict[seq_id]:
if atom_type in ["AA3Code", "Seq_ID"]:
continue
else:
atname = "{}_{}".format(aaname, atom_type)
label = '"{{{}|{}}}"'.format(atom_type, chemshifts_dict[seq_id][atom_type])
if atom_type.startswith("H"):
color = 4
elif atom_type.startswith("C"):
color = 6
elif atom_type.startswith("N"):
color = 10
else:
color = 8
atnode_entry = "{} [label={}, fillcolor={}]".format(atname, label, color)
nextnodename = atname
nodes.append(atnode_entry)
edges.append("{} -> {}".format(currnodename, nextnodename))
currnodename = nextnodename
if self.filename is None:
filename = "{}_{}".format(starfile.id, idx)
else:
filename = "{}_{}".format(self.filename, idx)
src = Source(self.dot_template.format("\n".join(nodes), "\n".join(edges)), format=self.csview_format)
src.render(filename=filename, view=view)
def _get_file_from_s3(metadata, saltenv, bucket_name, path, cached_file_path):
'''
Checks the local cache for the file, if it's old or missing go grab the
file from S3 and update the cache
'''
key, keyid, service_url, verify_ssl, kms_keyid, location, path_style, https_enable = _get_s3_key()
# check the local cache...
if os.path.isfile(cached_file_path):
file_meta = _find_file_meta(metadata, bucket_name, saltenv, path)
if file_meta:
file_etag = file_meta['ETag']
if file_etag.find('-') == -1:
file_md5 = file_etag
cached_md5 = salt.utils.hashutils.get_hash(cached_file_path, 'md5')
# hashes match we have a cache hit
if cached_md5 == file_md5:
return
else:
cached_file_stat = os.stat(cached_file_path)
cached_file_size = cached_file_stat.st_size
cached_file_mtime = datetime.datetime.fromtimestamp(
cached_file_stat.st_mtime)
cached_file_lastmod = datetime.datetime.strptime(
file_meta['LastModified'], '%Y-%m-%dT%H:%M:%S.%fZ')
if (cached_file_size == int(file_meta['Size']) and
cached_file_mtime > cached_file_lastmod):
log.debug('cached file size equal to metadata size and '
'cached file mtime later than metadata last '
'modification time.')
ret = __utils__['s3.query'](
key=key,
keyid=keyid,
kms_keyid=keyid,
method='HEAD',
bucket=bucket_name,
service_url=service_url,
verify_ssl=verify_ssl,
location=location,
path=_quote(path),
local_file=cached_file_path,
full_headers=True,
path_style=path_style,
https_enable=https_enable
)
if ret is not None:
for header_name, header_value in ret['headers'].items():
name = header_name.strip()
value = header_value.strip()
if six.text_type(name).lower() == 'last-modified':
s3_file_mtime = datetime.datetime.strptime(
value, '%a, %d %b %Y %H:%M:%S %Z')
elif six.text_type(name).lower() == 'content-length':
s3_file_size = int(value)
if (cached_file_size == s3_file_size and
cached_file_mtime > s3_file_mtime):
log.info(
'%s - %s : %s skipped download since cached file size '
'equal to and mtime after s3 values',
bucket_name, saltenv, path
)
return
# ... or get the file from S3
__utils__['s3.query'](
key=key,
keyid=keyid,
kms_keyid=keyid,
bucket=bucket_name,
service_url=service_url,
verify_ssl=verify_ssl,
location=location,
path=_quote(path),
local_file=cached_file_path,
path_style=path_style,
https_enable=https_enable,
) | Checks the local cache for the file, if it's old or missing go grab the
file from S3 and update the cache | Below is the instruction that describes the task:
### Input:
Checks the local cache for the file, if it's old or missing go grab the
file from S3 and update the cache
### Response:
def _get_file_from_s3(metadata, saltenv, bucket_name, path, cached_file_path):
'''
Checks the local cache for the file, if it's old or missing go grab the
file from S3 and update the cache
'''
key, keyid, service_url, verify_ssl, kms_keyid, location, path_style, https_enable = _get_s3_key()
# check the local cache...
if os.path.isfile(cached_file_path):
file_meta = _find_file_meta(metadata, bucket_name, saltenv, path)
if file_meta:
file_etag = file_meta['ETag']
if file_etag.find('-') == -1:
file_md5 = file_etag
cached_md5 = salt.utils.hashutils.get_hash(cached_file_path, 'md5')
# hashes match we have a cache hit
if cached_md5 == file_md5:
return
else:
cached_file_stat = os.stat(cached_file_path)
cached_file_size = cached_file_stat.st_size
cached_file_mtime = datetime.datetime.fromtimestamp(
cached_file_stat.st_mtime)
cached_file_lastmod = datetime.datetime.strptime(
file_meta['LastModified'], '%Y-%m-%dT%H:%M:%S.%fZ')
if (cached_file_size == int(file_meta['Size']) and
cached_file_mtime > cached_file_lastmod):
log.debug('cached file size equal to metadata size and '
'cached file mtime later than metadata last '
'modification time.')
ret = __utils__['s3.query'](
key=key,
keyid=keyid,
                    kms_keyid=kms_keyid,
method='HEAD',
bucket=bucket_name,
service_url=service_url,
verify_ssl=verify_ssl,
location=location,
path=_quote(path),
local_file=cached_file_path,
full_headers=True,
path_style=path_style,
https_enable=https_enable
)
if ret is not None:
for header_name, header_value in ret['headers'].items():
name = header_name.strip()
value = header_value.strip()
if six.text_type(name).lower() == 'last-modified':
s3_file_mtime = datetime.datetime.strptime(
value, '%a, %d %b %Y %H:%M:%S %Z')
elif six.text_type(name).lower() == 'content-length':
s3_file_size = int(value)
if (cached_file_size == s3_file_size and
cached_file_mtime > s3_file_mtime):
log.info(
'%s - %s : %s skipped download since cached file size '
'equal to and mtime after s3 values',
bucket_name, saltenv, path
)
return
# ... or get the file from S3
__utils__['s3.query'](
key=key,
keyid=keyid,
        kms_keyid=kms_keyid,
bucket=bucket_name,
service_url=service_url,
verify_ssl=verify_ssl,
location=location,
path=_quote(path),
local_file=cached_file_path,
path_style=path_style,
https_enable=https_enable,
) |
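The freshness test used twice above (cached size equals the remote size and the cached mtime is newer than the remote last-modified time) can be isolated into a small predicate. `is_cache_fresh` is a hypothetical helper for illustration, not part of Salt's API:

```python
import datetime

def is_cache_fresh(cached_size, cached_mtime, remote_size, remote_mtime):
    # A cached copy is considered fresh when it matches the remote size
    # and was written after the remote object's last modification.
    return cached_size == remote_size and cached_mtime > remote_mtime

now = datetime.datetime(2024, 1, 2)
earlier = datetime.datetime(2024, 1, 1)
print(is_cache_fresh(100, now, 100, earlier))   # True: same size, cached copy newer
print(is_cache_fresh(100, earlier, 100, now))   # False: cached copy is older
print(is_cache_fresh(100, now, 200, earlier))   # False: size mismatch
```

Note the predicate cannot detect an overwrite that kept the same size but happened after the cache write with a skewed clock; the real code therefore also compares MD5 hashes when the ETag is not multipart.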
def get_preinit_encoders(encoders: List[encoder.Encoder]) -> List[Tuple[str, mx.init.Initializer]]:
"""
Get initializers from encoders. Some encoders might be initialized from pretrained models.
:param encoders: List of encoders
:return: The list of initializers
"""
init = [] # type: List[Tuple[str, mx.init.Initializer]]
for enc in encoders:
if hasattr(enc, "get_initializers"):
enc = cast(encoder_image.ImageLoadedCnnEncoder, enc)
init.extend(enc.get_initializers())
return init | Get initializers from encoders. Some encoders might be initialized from pretrained models.
:param encoders: List of encoders
    :return: The list of initializers | Below is the instruction that describes the task:
### Input:
Get initializers from encoders. Some encoders might be initialized from pretrained models.
:param encoders: List of encoders
:return: The list of initializers
### Response:
def get_preinit_encoders(encoders: List[encoder.Encoder]) -> List[Tuple[str, mx.init.Initializer]]:
"""
Get initializers from encoders. Some encoders might be initialized from pretrained models.
:param encoders: List of encoders
:return: The list of initializers
"""
init = [] # type: List[Tuple[str, mx.init.Initializer]]
for enc in encoders:
if hasattr(enc, "get_initializers"):
enc = cast(encoder_image.ImageLoadedCnnEncoder, enc)
init.extend(enc.get_initializers())
return init |
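The `hasattr`-based collection above can be exercised without MXNet. `PlainEncoder` and `PretrainedEncoder` below are hypothetical stand-ins for the real encoder classes, with strings in place of `mx.init.Initializer` objects:

```python
from typing import List, Tuple

class PlainEncoder:
    pass

class PretrainedEncoder:
    def get_initializers(self) -> List[Tuple[str, str]]:
        # Stand-in for (parameter-name, mx.init.Initializer) pairs.
        return [("conv0_weight", "pretrained")]

def get_preinit(encoders) -> List[Tuple[str, str]]:
    init = []
    for enc in encoders:
        # Only encoders that expose get_initializers() contribute.
        if hasattr(enc, "get_initializers"):
            init.extend(enc.get_initializers())
    return init

print(get_preinit([PlainEncoder(), PretrainedEncoder()]))
# [('conv0_weight', 'pretrained')]
```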
def neighbors_iter(self, n, t=None):
"""Return an iterator over all neighbors of node n at time t.
Parameters
----------
n : node
A node in the graph
t : snapshot id (default=None)
        If None, an iterator over the neighbors of the node on the flattened graph is returned.
Examples
--------
>>> G = dn.DynGraph()
>>> G.add_path([0,1,2,3], t=0)
>>> [n for n in G.neighbors_iter(0, t=0)]
[1]
"""
try:
if t is None:
return iter(self._adj[n])
else:
return iter([i for i in self._adj[n] if self.__presence_test(n, i, t)])
except KeyError:
raise nx.NetworkXError("The node %s is not in the graph." % (n,)) | Return an iterator over all neighbors of node n at time t.
Parameters
----------
n : node
A node in the graph
t : snapshot id (default=None)
If None, an iterator over the neighbors of the node on the flattened graph is returned.
Examples
--------
>>> G = dn.DynGraph()
>>> G.add_path([0,1,2,3], t=0)
>>> [n for n in G.neighbors_iter(0, t=0)]
[1] | Below is the instruction that describes the task:
### Input:
Return an iterator over all neighbors of node n at time t.
Parameters
----------
n : node
A node in the graph
t : snapshot id (default=None)
If None, an iterator over the neighbors of the node on the flattened graph is returned.
Examples
--------
>>> G = dn.DynGraph()
>>> G.add_path([0,1,2,3], t=0)
>>> [n for n in G.neighbors_iter(0, t=0)]
[1]
### Response:
def neighbors_iter(self, n, t=None):
"""Return an iterator over all neighbors of node n at time t.
Parameters
----------
n : node
A node in the graph
t : snapshot id (default=None)
        If None, an iterator over the neighbors of the node on the flattened graph is returned.
Examples
--------
>>> G = dn.DynGraph()
>>> G.add_path([0,1,2,3], t=0)
>>> [n for n in G.neighbors_iter(0, t=0)]
[1]
"""
try:
if t is None:
return iter(self._adj[n])
else:
return iter([i for i in self._adj[n] if self.__presence_test(n, i, t)])
except KeyError:
raise nx.NetworkXError("The node %s is not in the graph." % (n,)) |
def get_project(self, project_id):
""" http://confluence.jetbrains.net/display/YTD2/GET+project
"""
    return youtrack.Project(self._get("/admin/project/" + urlquote(project_id)), self) | http://confluence.jetbrains.net/display/YTD2/GET+project | Below is the instruction that describes the task:
### Input:
http://confluence.jetbrains.net/display/YTD2/GET+project
### Response:
def get_project(self, project_id):
""" http://confluence.jetbrains.net/display/YTD2/GET+project
"""
return youtrack.Project(self._get("/admin/project/" + urlquote(project_id)), self) |
def handle_connection_exec(client):
"""
Alternate connection handler. No output redirection.
"""
class ExitExecLoop(Exception):
pass
def exit():
raise ExitExecLoop()
client.settimeout(None)
fh = os.fdopen(client.detach() if hasattr(client, 'detach') else client.fileno())
with closing(client):
with closing(fh):
try:
payload = fh.readline()
while payload:
_LOG("Running: %r." % payload)
eval(compile(payload, '<manhole>', 'exec'), {'exit': exit}, _MANHOLE.locals)
payload = fh.readline()
except ExitExecLoop:
                _LOG("Exiting exec loop.") | Alternate connection handler. No output redirection. | Below is the instruction that describes the task:
### Input:
Alternate connection handler. No output redirection.
### Response:
def handle_connection_exec(client):
"""
Alternate connection handler. No output redirection.
"""
class ExitExecLoop(Exception):
pass
def exit():
raise ExitExecLoop()
client.settimeout(None)
fh = os.fdopen(client.detach() if hasattr(client, 'detach') else client.fileno())
with closing(client):
with closing(fh):
try:
payload = fh.readline()
while payload:
_LOG("Running: %r." % payload)
eval(compile(payload, '<manhole>', 'exec'), {'exit': exit}, _MANHOLE.locals)
payload = fh.readline()
except ExitExecLoop:
_LOG("Exiting exec loop.") |
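The read–compile–eval loop above can be demonstrated in isolation by feeding payloads from a list instead of a socket file handle; `run_payloads` is a simplified sketch, not part of manhole's API:

```python
def run_payloads(lines, namespace):
    # Mirrors the loop above: compile each payload in exec mode and
    # evaluate it against a shared locals namespace.
    for payload in lines:
        eval(compile(payload, '<manhole>', 'exec'), {}, namespace)

ns = {}
run_payloads(["x = 1 + 1", "y = x * 3"], ns)
print(ns)  # {'x': 2, 'y': 6}
```

Because each payload is executed against the same `namespace` dict, later payloads can read names bound by earlier ones, just as consecutive lines sent to the manhole can.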
def _register_notification_callback(self, connection_handle, attribute_handle, callback, once=False):
"""Register a callback as a notification callback. It will be called if a notification with the matching
connection_handle and attribute_handle is received.
Args:
connection_handle (int): The connection handle to watch
attribute_handle (int): The attribute handle to watch
callback (func): The callback function to call once the notification has been received
once (bool): Should the callback only be called once (and then removed from the notification callbacks)
"""
notification_id = (connection_handle, attribute_handle)
with self.notification_callbacks_lock:
self.notification_callbacks[notification_id] = (callback, once) | Register a callback as a notification callback. It will be called if a notification with the matching
connection_handle and attribute_handle is received.
Args:
connection_handle (int): The connection handle to watch
attribute_handle (int): The attribute handle to watch
callback (func): The callback function to call once the notification has been received
        once (bool): Should the callback only be called once (and then removed from the notification callbacks) | Below is the instruction that describes the task:
### Input:
Register a callback as a notification callback. It will be called if a notification with the matching
connection_handle and attribute_handle is received.
Args:
connection_handle (int): The connection handle to watch
attribute_handle (int): The attribute handle to watch
callback (func): The callback function to call once the notification has been received
once (bool): Should the callback only be called once (and then removed from the notification callbacks)
### Response:
def _register_notification_callback(self, connection_handle, attribute_handle, callback, once=False):
"""Register a callback as a notification callback. It will be called if a notification with the matching
connection_handle and attribute_handle is received.
Args:
connection_handle (int): The connection handle to watch
attribute_handle (int): The attribute handle to watch
callback (func): The callback function to call once the notification has been received
once (bool): Should the callback only be called once (and then removed from the notification callbacks)
"""
notification_id = (connection_handle, attribute_handle)
with self.notification_callbacks_lock:
self.notification_callbacks[notification_id] = (callback, once) |
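The registration pattern above — a lock-guarded dict keyed on `(connection_handle, attribute_handle)` — can be sketched together with its matching dispatch side. The class below is a hypothetical simplification of the surrounding adapter, not its real interface:

```python
import threading

class NotificationRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        # (connection_handle, attribute_handle) -> (callback, once)
        self._callbacks = {}

    def register(self, conn_handle, attr_handle, callback, once=False):
        with self._lock:
            self._callbacks[(conn_handle, attr_handle)] = (callback, once)

    def dispatch(self, conn_handle, attr_handle, value):
        with self._lock:
            entry = self._callbacks.get((conn_handle, attr_handle))
            if entry and entry[1]:
                # One-shot callback: remove it while still holding the lock.
                del self._callbacks[(conn_handle, attr_handle)]
        if entry:
            entry[0](value)  # call outside the lock to avoid deadlocks

reg = NotificationRegistry()
seen = []
reg.register(1, 0x2A, seen.append, once=True)
reg.dispatch(1, 0x2A, b"\x01")
reg.dispatch(1, 0x2A, b"\x02")  # no-op: one-shot callback already consumed
print(seen)  # [b'\x01']
```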
def analyze(self, count):
""" Analyze count data from :meth:`PDFHistogram.count`.
Turns an array of counts (see :meth:`PDFHistogram.count`) into a
histogram of probabilities, and estimates the mean, standard
deviation, and other statistical characteristics of the corresponding
probability distribution.
Args:
count (array): Array of length ``nbin+2`` containing histogram
data where ``count[0]`` is the count for values that are
below the range of the histogram, ``count[-1]`` is the count
for values above the range, and ``count[i]`` is the count
for the ``i``-th bin where ``i=1...nbin``.
Returns a named tuple containing the following information (in order):
*bins*: Array of bin edges for histogram (length ``nbin+1``)
*prob*: Array of probabilities for each bin.
*stats*: Statistical data about histogram. See :class:`PDFStatistics`.
*norm*: Convert counts into probabilities by dividing by ``norm``.
"""
if numpy.ndim(count) != 1:
raise ValueError('count must have dimension 1')
if len(count) == len(self.midpoints) + 2:
norm = numpy.sum(count)
data = numpy.asarray(count[1:-1]) / norm
elif len(count) != len(self.midpoints):
raise ValueError(
'wrong data length: %s != %s'
% (len(count), len(self.midpoints))
)
else:
data = count
norm = 1.
mid = self.midpoints
stats = PDFStatistics(histogram=(self.bins, count))
return PDFHistogram.Histogram(self.bins, data, stats, norm) | Analyze count data from :meth:`PDFHistogram.count`.
Turns an array of counts (see :meth:`PDFHistogram.count`) into a
histogram of probabilities, and estimates the mean, standard
deviation, and other statistical characteristics of the corresponding
probability distribution.
Args:
count (array): Array of length ``nbin+2`` containing histogram
data where ``count[0]`` is the count for values that are
below the range of the histogram, ``count[-1]`` is the count
for values above the range, and ``count[i]`` is the count
for the ``i``-th bin where ``i=1...nbin``.
Returns a named tuple containing the following information (in order):
*bins*: Array of bin edges for histogram (length ``nbin+1``)
*prob*: Array of probabilities for each bin.
*stats*: Statistical data about histogram. See :class:`PDFStatistics`.
    *norm*: Convert counts into probabilities by dividing by ``norm``. | Below is the instruction that describes the task:
### Input:
Analyze count data from :meth:`PDFHistogram.count`.
Turns an array of counts (see :meth:`PDFHistogram.count`) into a
histogram of probabilities, and estimates the mean, standard
deviation, and other statistical characteristics of the corresponding
probability distribution.
Args:
count (array): Array of length ``nbin+2`` containing histogram
data where ``count[0]`` is the count for values that are
below the range of the histogram, ``count[-1]`` is the count
for values above the range, and ``count[i]`` is the count
for the ``i``-th bin where ``i=1...nbin``.
Returns a named tuple containing the following information (in order):
*bins*: Array of bin edges for histogram (length ``nbin+1``)
*prob*: Array of probabilities for each bin.
*stats*: Statistical data about histogram. See :class:`PDFStatistics`.
*norm*: Convert counts into probabilities by dividing by ``norm``.
### Response:
def analyze(self, count):
""" Analyze count data from :meth:`PDFHistogram.count`.
Turns an array of counts (see :meth:`PDFHistogram.count`) into a
histogram of probabilities, and estimates the mean, standard
deviation, and other statistical characteristics of the corresponding
probability distribution.
Args:
count (array): Array of length ``nbin+2`` containing histogram
data where ``count[0]`` is the count for values that are
below the range of the histogram, ``count[-1]`` is the count
for values above the range, and ``count[i]`` is the count
for the ``i``-th bin where ``i=1...nbin``.
Returns a named tuple containing the following information (in order):
*bins*: Array of bin edges for histogram (length ``nbin+1``)
*prob*: Array of probabilities for each bin.
*stats*: Statistical data about histogram. See :class:`PDFStatistics`.
*norm*: Convert counts into probabilities by dividing by ``norm``.
"""
if numpy.ndim(count) != 1:
raise ValueError('count must have dimension 1')
if len(count) == len(self.midpoints) + 2:
norm = numpy.sum(count)
data = numpy.asarray(count[1:-1]) / norm
elif len(count) != len(self.midpoints):
raise ValueError(
'wrong data length: %s != %s'
% (len(count), len(self.midpoints))
)
else:
data = count
norm = 1.
mid = self.midpoints
stats = PDFStatistics(histogram=(self.bins, count))
return PDFHistogram.Histogram(self.bins, data, stats, norm) |
def distribute_ready(self):
'''Distribute the ready state across all of the connections'''
connections = [c for c in self.connections() if c.alive()]
if len(connections) > self._max_in_flight:
raise NotImplementedError(
'Max in flight must be greater than number of connections')
else:
# Distribute the ready count evenly among the connections
for count, conn in distribute(self._max_in_flight, connections):
# We cannot exceed the maximum RDY count for a connection
if count > conn.max_rdy_count:
logger.info(
'Using max_rdy_count (%i) instead of %i for %s RDY',
conn.max_rdy_count, count, conn)
count = conn.max_rdy_count
logger.info('Sending RDY %i to %s', count, conn)
            conn.rdy(count) | Distribute the ready state across all of the connections | Below is the instruction that describes the task:
### Input:
Distribute the ready state across all of the connections
### Response:
def distribute_ready(self):
'''Distribute the ready state across all of the connections'''
connections = [c for c in self.connections() if c.alive()]
if len(connections) > self._max_in_flight:
raise NotImplementedError(
'Max in flight must be greater than number of connections')
else:
# Distribute the ready count evenly among the connections
for count, conn in distribute(self._max_in_flight, connections):
# We cannot exceed the maximum RDY count for a connection
if count > conn.max_rdy_count:
logger.info(
'Using max_rdy_count (%i) instead of %i for %s RDY',
conn.max_rdy_count, count, conn)
count = conn.max_rdy_count
logger.info('Sending RDY %i to %s', count, conn)
conn.rdy(count) |
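The `distribute()` helper relied on above splits the in-flight budget as evenly as possible across connections. The version below is an assumed pure-Python equivalent for illustration, not necessarily the library's actual implementation:

```python
def distribute(total, items):
    """Yield (count, item) pairs splitting ``total`` as evenly as possible."""
    n = len(items)
    base, extra = divmod(total, n)
    for i, item in enumerate(items):
        # The first `extra` items receive one more than the base share,
        # so the counts always sum back to `total`.
        yield base + (1 if i < extra else 0), item

print(list(distribute(10, ["a", "b", "c"])))
# [(4, 'a'), (3, 'b'), (3, 'c')]
```

Each per-connection count is then clamped to `conn.max_rdy_count` in the loop above, which is why the method first rejects the case where `max_in_flight` is smaller than the number of connections.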
def train(ctx, output, corpus, clusters):
"""Train POS Tagger."""
click.echo('chemdataextractor.pos.train')
click.echo('Output: %s' % output)
click.echo('Corpus: %s' % corpus)
click.echo('Clusters: %s' % clusters)
wsj_sents = []
genia_sents = []
if corpus == 'wsj' or corpus == 'wsj+genia':
wsj_sents = list(wsj_training.tagged_sents())
# For WSJ, remove all tokens with -NONE- tag
for i, wsj_sent in enumerate(wsj_sents):
wsj_sents[i] = [t for t in wsj_sent if not t[1] == '-NONE-']
if corpus == 'genia' or corpus == 'wsj+genia':
genia_sents = list(genia_training.tagged_sents())
# Translate GENIA
for i, genia_sent in enumerate(genia_sents):
for j, (token, tag) in enumerate(genia_sent):
if tag == '(':
                    genia_sents[i][j] = (token, '-LRB-')  # ( to -LRB- (also do for evaluation)
elif tag == ')':
genia_sents[i][j] = (token, '-RRB-') # ) to -RRB- (also do for evaluation)
elif tag == 'CT':
genia_sents[i][j] = (token, 'DT') # Typo?
elif tag == 'XT':
genia_sents[i][j] = (token, 'DT') # Typo?
elif tag == '-':
genia_sents[i][j] = (token, ':') # Single hyphen character for dash
elif tag == 'N':
genia_sents[i][j] = (token, 'NN') # Typo?
elif tag == 'PP':
genia_sents[i][j] = (token, 'PRP') # Typo?
elif tag == '' and token == ')':
genia_sents[i][j] = (token, '-RRB-') # Typo?
elif tag == '' and token == 'IFN-gamma':
genia_sents[i][j] = (token, 'NN') # Typo?
elif '|' in tag:
genia_sents[i][j] = (token, tag.split('|')[0]) # If contains |, choose first part
# Filter any tags not in the allowed tagset (Shouldn't be any left anyway)
genia_sents[i] = [t for t in genia_sent if t[1] in TAGS]
if corpus == 'wsj':
training_corpus = wsj_sents
elif corpus == 'genia':
training_corpus = genia_sents
elif corpus == 'wsj+genia':
training_corpus = wsj_sents + genia_sents
else:
raise click.ClickException('Invalid corpus')
tagger = ChemCrfPosTagger(clusters=clusters)
    tagger.train(training_corpus, output) | Train POS Tagger. | Below is the instruction that describes the task:
### Input:
Train POS Tagger.
### Response:
def train(ctx, output, corpus, clusters):
"""Train POS Tagger."""
click.echo('chemdataextractor.pos.train')
click.echo('Output: %s' % output)
click.echo('Corpus: %s' % corpus)
click.echo('Clusters: %s' % clusters)
wsj_sents = []
genia_sents = []
if corpus == 'wsj' or corpus == 'wsj+genia':
wsj_sents = list(wsj_training.tagged_sents())
# For WSJ, remove all tokens with -NONE- tag
for i, wsj_sent in enumerate(wsj_sents):
wsj_sents[i] = [t for t in wsj_sent if not t[1] == '-NONE-']
if corpus == 'genia' or corpus == 'wsj+genia':
genia_sents = list(genia_training.tagged_sents())
# Translate GENIA
for i, genia_sent in enumerate(genia_sents):
for j, (token, tag) in enumerate(genia_sent):
if tag == '(':
                    genia_sents[i][j] = (token, '-LRB-')  # ( to -LRB- (also do for evaluation)
elif tag == ')':
genia_sents[i][j] = (token, '-RRB-') # ) to -RRB- (also do for evaluation)
elif tag == 'CT':
genia_sents[i][j] = (token, 'DT') # Typo?
elif tag == 'XT':
genia_sents[i][j] = (token, 'DT') # Typo?
elif tag == '-':
genia_sents[i][j] = (token, ':') # Single hyphen character for dash
elif tag == 'N':
genia_sents[i][j] = (token, 'NN') # Typo?
elif tag == 'PP':
genia_sents[i][j] = (token, 'PRP') # Typo?
elif tag == '' and token == ')':
genia_sents[i][j] = (token, '-RRB-') # Typo?
elif tag == '' and token == 'IFN-gamma':
genia_sents[i][j] = (token, 'NN') # Typo?
elif '|' in tag:
genia_sents[i][j] = (token, tag.split('|')[0]) # If contains |, choose first part
# Filter any tags not in the allowed tagset (Shouldn't be any left anyway)
genia_sents[i] = [t for t in genia_sent if t[1] in TAGS]
if corpus == 'wsj':
training_corpus = wsj_sents
elif corpus == 'genia':
training_corpus = genia_sents
elif corpus == 'wsj+genia':
training_corpus = wsj_sents + genia_sents
else:
raise click.ClickException('Invalid corpus')
tagger = ChemCrfPosTagger(clusters=clusters)
tagger.train(training_corpus, output) |
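The long `elif` chain of GENIA tag fixes above can be restated as a lookup table. `normalize_tag` below is a sketch using the same mappings (with the empty-tag cases slightly generalized), not chemdataextractor's API:

```python
# Data-driven version of the GENIA tag corrections: known bad tags map
# directly to their Penn Treebank replacements.
TAG_FIXES = {
    '(': '-LRB-', ')': '-RRB-', 'CT': 'DT', 'XT': 'DT',
    '-': ':', 'N': 'NN', 'PP': 'PRP',
}

def normalize_tag(token, tag):
    if tag in TAG_FIXES:
        return TAG_FIXES[tag]
    if '|' in tag:
        return tag.split('|')[0]   # ambiguous tag: keep the first alternative
    if tag == '':
        # Corpus typos with an empty tag: ')' and token-level oddities.
        return '-RRB-' if token == ')' else 'NN'
    return tag

print(normalize_tag('interleukin', 'NN|JJ'))  # NN
print(normalize_tag(')', ''))                 # -RRB-
```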
def rownumbers(self, table=None):
"""Return a list containing the row numbers of this table.
This method can be useful after a selection or a sort.
It returns the row numbers of the rows in this table with respect
to the given table. If no table is given, the original table is used.
For example::
t = table('W53.MS')
t1 = t.selectrows([1,3,5,7,9]) # select a few rows
t1.rownumbers(t)
# [1 3 5 7 9]
t2 = t1.selectrows([2,5]) # select rows from the selection
t2.rownumbers(t1)
# [2 5] # rownrs of t2 in table t1
t2.rownumbers(t)
# [3 9] # rownrs of t2 in t
t2.rownumbers()
# [3 9]
The last statements show that the method returns the row numbers
referring to the given table. Table t2 contains rows 2 and 5 in
table t1, which are rows 3 and 9 in table t.
"""
if table is None:
return self._rownumbers(Table())
return self._rownumbers(table) | Return a list containing the row numbers of this table.
This method can be useful after a selection or a sort.
It returns the row numbers of the rows in this table with respect
to the given table. If no table is given, the original table is used.
For example::
t = table('W53.MS')
t1 = t.selectrows([1,3,5,7,9]) # select a few rows
t1.rownumbers(t)
# [1 3 5 7 9]
t2 = t1.selectrows([2,5]) # select rows from the selection
t2.rownumbers(t1)
# [2 5] # rownrs of t2 in table t1
t2.rownumbers(t)
# [3 9] # rownrs of t2 in t
t2.rownumbers()
# [3 9]
The last statements show that the method returns the row numbers
referring to the given table. Table t2 contains rows 2 and 5 in
    table t1, which are rows 3 and 9 in table t. | Below is the instruction that describes the task:
### Input:
Return a list containing the row numbers of this table.
This method can be useful after a selection or a sort.
It returns the row numbers of the rows in this table with respect
to the given table. If no table is given, the original table is used.
For example::
t = table('W53.MS')
t1 = t.selectrows([1,3,5,7,9]) # select a few rows
t1.rownumbers(t)
# [1 3 5 7 9]
t2 = t1.selectrows([2,5]) # select rows from the selection
t2.rownumbers(t1)
# [2 5] # rownrs of t2 in table t1
t2.rownumbers(t)
# [3 9] # rownrs of t2 in t
t2.rownumbers()
# [3 9]
The last statements show that the method returns the row numbers
referring to the given table. Table t2 contains rows 2 and 5 in
table t1, which are rows 3 and 9 in table t.
### Response:
def rownumbers(self, table=None):
"""Return a list containing the row numbers of this table.
This method can be useful after a selection or a sort.
It returns the row numbers of the rows in this table with respect
to the given table. If no table is given, the original table is used.
For example::
t = table('W53.MS')
t1 = t.selectrows([1,3,5,7,9]) # select a few rows
t1.rownumbers(t)
# [1 3 5 7 9]
t2 = t1.selectrows([2,5]) # select rows from the selection
t2.rownumbers(t1)
# [2 5] # rownrs of t2 in table t1
t2.rownumbers(t)
# [3 9] # rownrs of t2 in t
t2.rownumbers()
# [3 9]
The last statements show that the method returns the row numbers
referring to the given table. Table t2 contains rows 2 and 5 in
table t1, which are rows 3 and 9 in table t.
"""
if table is None:
return self._rownumbers(Table())
return self._rownumbers(table) |
def arsc(input_,
file_,
output,
package,
locale,
type_,
id_,
list_packages,
list_locales,
list_types):
"""
Decode resources.arsc either directly from a given file or from an APK.
Example:
\b
$ androguard arsc app.apk
"""
from androguard.core import androconf
from androguard.core.bytecodes import apk
if file_ and input_:
print("Can not give --input and positional argument! "
"Please use only one of them!",
file=sys.stderr)
sys.exit(1)
if not input_ and not file_:
print("Give one file to decode!", file=sys.stderr)
sys.exit(1)
if input_:
fname = input_
else:
fname = file_
ret_type = androconf.is_android(fname)
if ret_type == "APK":
a = apk.APK(fname)
arscobj = a.get_android_resources()
if not arscobj:
print("The APK does not contain a resources file!", file=sys.stderr)
sys.exit(0)
elif ret_type == "ARSC":
with open(fname, 'rb') as fp:
arscobj = apk.ARSCParser(fp.read())
if not arscobj:
print("The resources file seems to be invalid!", file=sys.stderr)
sys.exit(1)
else:
print("Unknown file type!", file=sys.stderr)
sys.exit(1)
if id_:
# Strip the @, if any
if id_[0] == "@":
id_ = id_[1:]
try:
i_id = int(id_, 16)
except ValueError:
print("ID '{}' could not be parsed! have you supplied the correct hex ID?".format(id_))
sys.exit(1)
name = arscobj.get_resource_xml_name(i_id)
if not name:
print("Specified resource was not found!")
sys.exit(1)
print("@{:08x} resolves to '{}'".format(i_id, name))
print()
# All the information is in the config.
# we simply need to get the actual value of the entry
for config, entry in arscobj.get_resolved_res_configs(i_id):
print("{} = '{}'".format(config.get_qualifier() if not config.is_default() else "<default>", entry))
sys.exit(0)
if list_packages:
print("\n".join(arscobj.get_packages_names()))
sys.exit(0)
if list_locales:
for p in arscobj.get_packages_names():
print("In Package:", p)
print("\n".join(map(lambda x: " \\x00\\x00"
if x == "\x00\x00"
else " {}".format(x),
sorted(arscobj.get_locales(p)))))
sys.exit(0)
if list_types:
for p in arscobj.get_packages_names():
print("In Package:", p)
for locale in sorted(arscobj.get_locales(p)):
print(" In Locale: {}".format("\\x00\\x00"
if locale == "\x00\x00" else locale))
print("\n".join(map(" {}".format,
sorted(arscobj.get_types(p, locale)))))
sys.exit(0)
androarsc_main(arscobj,
outp=output,
package=package,
typ=type_,
locale=locale) | Decode resources.arsc either directly from a given file or from an APK.
Example:
\b
    $ androguard arsc app.apk | Below is the instruction that describes the task:
### Input:
Decode resources.arsc either directly from a given file or from an APK.
Example:
\b
$ androguard arsc app.apk
### Response:
def arsc(input_,
file_,
output,
package,
locale,
type_,
id_,
list_packages,
list_locales,
list_types):
"""
Decode resources.arsc either directly from a given file or from an APK.
Example:
\b
$ androguard arsc app.apk
"""
from androguard.core import androconf
from androguard.core.bytecodes import apk
if file_ and input_:
print("Can not give --input and positional argument! "
"Please use only one of them!",
file=sys.stderr)
sys.exit(1)
if not input_ and not file_:
print("Give one file to decode!", file=sys.stderr)
sys.exit(1)
if input_:
fname = input_
else:
fname = file_
ret_type = androconf.is_android(fname)
if ret_type == "APK":
a = apk.APK(fname)
arscobj = a.get_android_resources()
if not arscobj:
print("The APK does not contain a resources file!", file=sys.stderr)
sys.exit(0)
elif ret_type == "ARSC":
with open(fname, 'rb') as fp:
arscobj = apk.ARSCParser(fp.read())
if not arscobj:
print("The resources file seems to be invalid!", file=sys.stderr)
sys.exit(1)
else:
print("Unknown file type!", file=sys.stderr)
sys.exit(1)
if id_:
# Strip the @, if any
if id_[0] == "@":
id_ = id_[1:]
try:
i_id = int(id_, 16)
except ValueError:
print("ID '{}' could not be parsed! have you supplied the correct hex ID?".format(id_))
sys.exit(1)
name = arscobj.get_resource_xml_name(i_id)
if not name:
print("Specified resource was not found!")
sys.exit(1)
print("@{:08x} resolves to '{}'".format(i_id, name))
print()
# All the information is in the config.
# we simply need to get the actual value of the entry
for config, entry in arscobj.get_resolved_res_configs(i_id):
print("{} = '{}'".format(config.get_qualifier() if not config.is_default() else "<default>", entry))
sys.exit(0)
if list_packages:
print("\n".join(arscobj.get_packages_names()))
sys.exit(0)
if list_locales:
for p in arscobj.get_packages_names():
print("In Package:", p)
print("\n".join(map(lambda x: " \\x00\\x00"
if x == "\x00\x00"
else " {}".format(x),
sorted(arscobj.get_locales(p)))))
sys.exit(0)
if list_types:
for p in arscobj.get_packages_names():
print("In Package:", p)
for locale in sorted(arscobj.get_locales(p)):
print(" In Locale: {}".format("\\x00\\x00"
if locale == "\x00\x00" else locale))
print("\n".join(map(" {}".format,
sorted(arscobj.get_types(p, locale)))))
sys.exit(0)
androarsc_main(arscobj,
outp=output,
package=package,
typ=type_,
locale=locale) |
def to(self, new_unit):
"""
Conversion to a new_unit. Right now, only supports 1 to 1 mapping of
units of each type.
Args:
new_unit: New unit type.
Returns:
A FloatWithUnit object in the new units.
Example usage:
>>> e = Energy(1.1, "eV")
>>> e = Energy(1.1, "Ha")
>>> e.to("eV")
29.932522246 eV
"""
return FloatWithUnit(
self * self.unit.get_conversion_factor(new_unit),
unit_type=self._unit_type,
unit=new_unit) | Conversion to a new_unit. Right now, only supports 1 to 1 mapping of
units of each type.
Args:
new_unit: New unit type.
Returns:
A FloatWithUnit object in the new units.
Example usage:
>>> e = Energy(1.1, "eV")
>>> e = Energy(1.1, "Ha")
>>> e.to("eV")
    29.932522246 eV | Below is the instruction that describes the task:
### Input:
Conversion to a new_unit. Right now, only supports 1 to 1 mapping of
units of each type.
Args:
new_unit: New unit type.
Returns:
A FloatWithUnit object in the new units.
Example usage:
>>> e = Energy(1.1, "eV")
>>> e = Energy(1.1, "Ha")
>>> e.to("eV")
29.932522246 eV
### Response:
def to(self, new_unit):
"""
Conversion to a new_unit. Right now, only supports 1 to 1 mapping of
units of each type.
Args:
new_unit: New unit type.
Returns:
A FloatWithUnit object in the new units.
Example usage:
>>> e = Energy(1.1, "eV")
>>> e = Energy(1.1, "Ha")
>>> e.to("eV")
29.932522246 eV
"""
return FloatWithUnit(
self * self.unit.get_conversion_factor(new_unit),
unit_type=self._unit_type,
unit=new_unit) |
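The conversion above multiplies by a pairwise factor between units. A minimal self-contained sketch of the same idea (the `EV_PER_HA` constant is an assumed CODATA value; the real factor machinery lives in `FloatWithUnit`):

```python
EV_PER_HA = 27.211386245988  # assumed eV-per-Hartree conversion constant

class Energy(float):
    def __new__(cls, value, unit):
        obj = super().__new__(cls, value)
        obj.unit = unit
        return obj

    def to(self, new_unit):
        # 1-to-1 mapping between unit pairs, mirroring the docstring above.
        factor = {("Ha", "eV"): EV_PER_HA, ("eV", "Ha"): 1 / EV_PER_HA}
        if self.unit == new_unit:
            return Energy(float(self), new_unit)
        return Energy(float(self) * factor[(self.unit, new_unit)], new_unit)

e = Energy(1.1, "Ha")
print(e.to("eV"))  # ≈ 29.9325 eV, close to the docstring's 29.932522246 eV
```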
def make_order_string(cls, order_specification):
"""
Converts the given order specification to a CQL order expression.
"""
registry = get_current_registry()
visitor_cls = registry.getUtility(IOrderSpecificationVisitor,
name=EXPRESSION_KINDS.CQL)
visitor = visitor_cls()
order_specification.accept(visitor)
        return str(visitor.expression) | Converts the given order specification to a CQL order expression. | Below is the instruction that describes the task:
### Input:
Converts the given order specification to a CQL order expression.
### Response:
def make_order_string(cls, order_specification):
"""
Converts the given order specification to a CQL order expression.
"""
registry = get_current_registry()
visitor_cls = registry.getUtility(IOrderSpecificationVisitor,
name=EXPRESSION_KINDS.CQL)
visitor = visitor_cls()
order_specification.accept(visitor)
return str(visitor.expression) |
def serialize(self, subject, *objects_or_combinators):
""" object_combinators may also be URIRefs or Literals """
ec_s = rdflib.BNode()
if self.operator is not None:
if subject is not None:
yield subject, self.predicate, ec_s
yield from oc(ec_s)
yield from self._list.serialize(ec_s, self.operator, *objects_or_combinators)
else:
for thing in objects_or_combinators:
if isinstance(thing, Combinator):
object = rdflib.BNode()
#anything = list(thing(object))
#if anything:
#[print(_) for _ in anything]
hasType = False
for t in thing(object):
if t[1] == rdf.type:
hasType = True
yield t
if not hasType:
yield object, rdf.type, owl.Class
else:
object = thing
                yield subject, self.predicate, object | object_combinators may also be URIRefs or Literals | Below is the instruction that describes the task:
### Input:
object_combinators may also be URIRefs or Literals
### Response:
def serialize(self, subject, *objects_or_combinators):
""" object_combinators may also be URIRefs or Literals """
ec_s = rdflib.BNode()
if self.operator is not None:
if subject is not None:
yield subject, self.predicate, ec_s
yield from oc(ec_s)
yield from self._list.serialize(ec_s, self.operator, *objects_or_combinators)
else:
for thing in objects_or_combinators:
if isinstance(thing, Combinator):
object = rdflib.BNode()
#anything = list(thing(object))
#if anything:
#[print(_) for _ in anything]
hasType = False
for t in thing(object):
if t[1] == rdf.type:
hasType = True
yield t
if not hasType:
yield object, rdf.type, owl.Class
else:
object = thing
yield subject, self.predicate, object |
def postinit(
self,
args,
body,
decorators=None,
returns=None,
type_comment_returns=None,
type_comment_args=None,
):
"""Do some setup after initialisation.
:param args: The arguments that the function takes.
:type args: Arguments or list
:param body: The contents of the function body.
:type body: list(NodeNG)
:param decorators: The decorators that are applied to this
method or function.
:type decorators: Decorators or None
:params type_comment_returns:
The return type annotation passed via a type comment.
:params type_comment_args:
The args type annotation passed via a type comment.
"""
self.args = args
self.body = body
self.decorators = decorators
self.returns = returns
self.type_comment_returns = type_comment_returns
self.type_comment_args = type_comment_args
if isinstance(self.parent.frame(), ClassDef):
self.set_local("__class__", self.parent.frame()) | Do some setup after initialisation.
:param args: The arguments that the function takes.
:type args: Arguments or list
:param body: The contents of the function body.
:type body: list(NodeNG)
:param decorators: The decorators that are applied to this
method or function.
:type decorators: Decorators or None
:params type_comment_returns:
The return type annotation passed via a type comment.
:params type_comment_args:
The args type annotation passed via a type comment. |
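The `postinit` row illustrates astroid's two-phase construction: the node object exists before its children do, and `postinit()` attaches them afterwards. A minimal stand-alone sketch of the pattern (class names simplified, not astroid's real node types):

```python
# Two-phase construction: create the node first, then attach children
# via postinit() once they have been built, avoiding circular
# construction-order problems between parent and child nodes.
class FunctionDef:
    def __init__(self, name):
        self.name = name
        self.args = None
        self.body = None
        self.decorators = None

    def postinit(self, args, body, decorators=None):
        self.args = args
        self.body = body
        self.decorators = decorators

node = FunctionDef('f')
# children are built elsewhere, then wired in after the fact
node.postinit(args=['x'], body=['return x'])
```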
def add_subscription(self, channel, callback_function):
"""
Add a channel to subscribe to and a callback function to
run when the channel receives an update.
If channel already exists, create a new "subscription"
and append another callback function.
Args:
channel (str): The channel to add a subscription to.
callback_function (func): The function to run on an
update to the passed in channel.
"""
if channel not in CHANNELS:
CHANNELS.append(channel)
SUBSCRIPTIONS[channel] = [callback_function]
else:
SUBSCRIPTIONS[channel].append(callback_function)
# If a channel gets added after subscription has already been called
# call subscribe on the individual channel, here.
if self._subscribed:
_LOGGER.info("New channel added after main subscribe call.")
self._pubnub.subscribe().channels(channel).execute() | Add a channel to subscribe to and a callback function to
run when the channel receives an update.
If channel already exists, create a new "subscription"
and append another callback function.
Args:
channel (str): The channel to add a subscription to.
callback_function (func): The function to run on an
update to the passed in channel. |
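The channel/callback bookkeeping in `add_subscription` can be exercised without PubNub at all. This sketch keeps the same create-or-append logic but uses local names instead of the module-level `CHANNELS`/`SUBSCRIPTIONS` globals, and adds a toy `publish` to show the fan-out:

```python
# Minimal pub/sub registry mirroring add_subscription's bookkeeping.
CHANNELS = []
SUBSCRIPTIONS = {}

def add_subscription(channel, callback):
    if channel not in CHANNELS:
        # first subscriber creates the channel entry
        CHANNELS.append(channel)
        SUBSCRIPTIONS[channel] = [callback]
    else:
        # later subscribers append to the existing list
        SUBSCRIPTIONS[channel].append(callback)

def publish(channel, update):
    # fan the update out to every callback registered on the channel
    for callback in SUBSCRIPTIONS.get(channel, []):
        callback(update)

received = []
add_subscription('camera', received.append)
add_subscription('camera', received.append)
publish('camera', 'motion')
```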
def extend(self, key, array, **attrs):
"""
Extend the dataset associated to the given key; create it if needed
:param key: name of the dataset
:param array: array to store
:param attrs: a dictionary of attributes
"""
try:
dset = self.hdf5[key]
except KeyError:
dset = hdf5.create(self.hdf5, key, array.dtype,
shape=(None,) + array.shape[1:])
hdf5.extend(dset, array)
for k, v in attrs.items():
dset.attrs[k] = v
return dset | Extend the dataset associated to the given key; create it if needed
:param key: name of the dataset
:param array: array to store
:param attrs: a dictionary of attributes |
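The create-or-extend semantics of `extend()` come from h5py's resizable datasets; the try/except creates the dataset only on the first call. A pure-Python emulation of the same contract (lists standing in for the HDF5 dataset, a dict for its attributes):

```python
# Emulation of the create-or-extend pattern in extend(); the real
# code grows a resizable HDF5 dataset via hdf5.create/hdf5.extend.
class Store:
    def __init__(self):
        self._data = {}
        self._attrs = {}

    def extend(self, key, rows, **attrs):
        # create the dataset on first use, then append rows to it
        dset = self._data.setdefault(key, [])
        dset.extend(rows)
        self._attrs.setdefault(key, {}).update(attrs)
        return dset

store = Store()
store.extend('gmf', [1, 2], investigation_time=50)
grown = store.extend('gmf', [3])
```

In the real h5py version the same effect needs `maxshape=(None, ...)` at creation time so the dataset can later be resized.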
def confirm_destructive_query(queries):
"""Check if the query is destructive and prompts the user to confirm.
Returns:
* None if the query is non-destructive or we can't prompt the user.
* True if the query is destructive and the user wants to proceed.
* False if the query is destructive and the user doesn't want to proceed.
"""
prompt_text = ("You're about to run a destructive command.\n"
"Do you want to proceed? (y/n)")
if is_destructive(queries) and sys.stdin.isatty():
return prompt(prompt_text, type=bool) | Check if the query is destructive and prompts the user to confirm.
Returns:
* None if the query is non-destructive or we can't prompt the user.
* True if the query is destructive and the user wants to proceed.
* False if the query is destructive and the user doesn't want to proceed. |
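`confirm_destructive_query` delegates to an `is_destructive` predicate defined elsewhere in the package. A plausible sketch of such a predicate (the keyword list and the semicolon splitting are assumptions, not the package's actual rules):

```python
# Hypothetical is_destructive(): flag a statement batch if any
# statement starts with a known destructive keyword.
DESTRUCTIVE_KEYWORDS = ('drop', 'delete', 'truncate', 'shutdown', 'alter')

def is_destructive(queries):
    for query in queries.split(';'):
        stripped = query.strip()
        if not stripped:
            continue
        first_word = stripped.split(' ', 1)[0].lower()
        if first_word in DESTRUCTIVE_KEYWORDS:
            return True
    return False
```

Checking only the first word of each statement keeps `select * from dropped_items` from triggering a false positive.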
def getInfo(self, CorpNum, MgtKeyType, MgtKey):
""" Check status information.
args
CorpNum : member business registration number
MgtKeyType : management key type, one of ['SELL','BUY','TRUSTEE']
MgtKey : partner management key
return
processing result. consist of code and message
raise
PopbillException
"""
if MgtKeyType not in self.__MgtKeyTypes:
raise PopbillException(-99999999, "관리번호 형태가 올바르지 않습니다.")
if MgtKey == None or MgtKey == "":
raise PopbillException(-99999999, "관리번호가 입력되지 않았습니다.")
return self._httpget('/Taxinvoice/' + MgtKeyType + '/' + MgtKey, CorpNum) | Check status information.
args
CorpNum : member business registration number
MgtKeyType : management key type, one of ['SELL','BUY','TRUSTEE']
MgtKey : partner management key
return
processing result. consist of code and message
raise
PopbillException |
def randindex(lo, hi, n = 1.):
"""
Yields integers in the range [lo, hi) where 0 <= lo < hi. Each
return value is a two-element tuple. The first element is the
random integer, the second is the natural logarithm of the
probability with which that integer will be chosen.
The CDF for the distribution from which the integers are drawn goes
as [integer]^{n}, where n > 0. Specifically, it's
CDF(x) = (x^{n} - lo^{n}) / (hi^{n} - lo^{n})
n = 1 yields a uniform distribution; n > 1 favours larger
integers, n < 1 favours smaller integers.
"""
if not 0 <= lo < hi:
raise ValueError("require 0 <= lo < hi: lo = %d, hi = %d" % (lo, hi))
if n <= 0.:
raise ValueError("n <= 0: %g" % n)
elif n == 1.:
# special case for uniform distribution
try:
lnP = math.log(1. / (hi - lo))
except ValueError:
raise ValueError("[lo, hi) domain error")
hi -= 1
rnd = random.randint
while 1:
yield rnd(lo, hi), lnP
# CDF evaluated at index boundaries
lnP = numpy.arange(lo, hi + 1, dtype = "double")**n
lnP -= lnP[0]
lnP /= lnP[-1]
# differences give probabilities
lnP = tuple(numpy.log(lnP[1:] - lnP[:-1]))
if numpy.isinf(lnP).any():
raise ValueError("[lo, hi) domain error")
beta = lo**n / (hi**n - lo**n)
n = 1. / n
alpha = hi / (1. + beta)**n
flr = math.floor
rnd = random.random
while 1:
index = int(flr(alpha * (rnd() + beta)**n))
# the tuple look-up provides the second part of the
# range safety check on index
assert index >= lo
yield index, lnP[index - lo] | Yields integers in the range [lo, hi) where 0 <= lo < hi. Each
return value is a two-element tuple. The first element is the
random integer, the second is the natural logarithm of the
probability with which that integer will be chosen.
The CDF for the distribution from which the integers are drawn goes
as [integer]^{n}, where n > 0. Specifically, it's
CDF(x) = (x^{n} - lo^{n}) / (hi^{n} - lo^{n})
n = 1 yields a uniform distribution; n > 1 favours larger
integers, n < 1 favours smaller integers. |
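The `n != 1` branch of `randindex` is inverse-CDF sampling: inverting u = CDF(x) = (x^n - lo^n) / (hi^n - lo^n) gives x = ((hi^n - lo^n)u + lo^n)^(1/n), and the generator's `alpha * (u + beta) ** (1/n)` is exactly that expression with the constants factored out. A small check that the two closed forms agree:

```python
# Verify that randindex's alpha/beta form matches the direct
# inversion of CDF(x) = (x**n - lo**n) / (hi**n - lo**n).
def randindex_value(lo, hi, n, u):
    # the generator's closed form
    beta = lo**n / (hi**n - lo**n)
    alpha = hi / (1.0 + beta) ** (1.0 / n)
    return alpha * (u + beta) ** (1.0 / n)

def inverse_cdf(lo, hi, n, u):
    # direct algebraic inversion of the CDF
    return ((hi**n - lo**n) * u + lo**n) ** (1.0 / n)
```

Flooring either expression then yields the sampled integer, and the precomputed `lnP` table supplies its log-probability.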
def publish_message_to_centrifugo(sender, instance, created, **kwargs):
""" Publishes each saved message to Centrifugo. """
if created is True:
client = Client("{0}api/".format(getattr(settings, "CENTRIFUGE_ADDRESS")), getattr(settings, "CENTRIFUGE_SECRET"))
# we ensure the client is still in the thread (he may have left or have been removed)
active_participants = [participation.participant.id for participation in Participation.objects.filter(thread=instance.thread, date_left__isnull=True).select_related('participant')]
client.publish(
build_channel(settings.CENTRIFUGO_MESSAGE_NAMESPACE, instance.thread.id, active_participants),
{
"id": instance.id,
"body": instance.body,
"sender": instance.sender.id,
"thread": instance.thread.id,
"sent_at": str(instance.sent_at),
"is_notification": True, # ATTENTION: check against sender too to be sure to not notify him his message
}
) | Publishes each saved message to Centrifugo. |
def checkout(url, version=None):
"""
Checks out latest version of item or repository.
:param url: URL of repo or item to check out.
:param version: Version number to check out.
"""
from grit import Repo
r = Repo(url)
def _write(item):
log.debug('writing: %s' % item.name)
if item.type != 'blob':
return
if r.type in ['repo', 'proxy', 'local']:
path = os.path.join(r.name, item.path)
pdir = os.path.dirname(path)
if not os.path.isdir(pdir):
os.makedirs(pdir)
else:
path = item.name
f = open(path, 'w')
f.write(item.data())
f.close()
if r.type == 'blob':
_write(r)
else:
items = r.items()
count = 1
total = len(items)
while count <= total:
print '[%s/%s] %0.2f%%' %(count, total, (float(count) / total) * 100), '*'*count, '\r',
_write(items[count-1])
count += 1
sys.stdout.flush()
print | Checks out latest version of item or repository.
:param url: URL of repo or item to check out.
:param version: Version number to check out. |
def _GetStringValue(self, data_dict, name, default_value=None):
"""Retrieves a specific string value from the data dict.
Args:
data_dict (dict[str, list[str]): values per name.
name (str): name of the value to retrieve.
default_value (Optional[object]): value to return if the name has no value
set in data_dict.
Returns:
str: value represented as a string.
"""
values = data_dict.get(name, None)
if not values:
return default_value
for index, value in enumerate(values):
if ',' in value:
values[index] = '"{0:s}"'.format(value)
return ', '.join(values) | Retrieves a specific string value from the data dict.
Args:
data_dict (dict[str, list[str]): values per name.
name (str): name of the value to retrieve.
default_value (Optional[object]): value to return if the name has no value
set in data_dict.
Returns:
str: value represented as a string. |
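The core of `_GetStringValue` is the comma-quoting join: any value that itself contains a comma is wrapped in double quotes so the joined string stays unambiguous. A stand-alone version (also copying the list first, since the original mutates the caller's list in place):

```python
# Stand-alone variant of _GetStringValue's comma-quoting join.
def get_string_value(data_dict, name, default_value=None):
    values = data_dict.get(name, None)
    if not values:
        return default_value
    values = list(values)  # avoid mutating the caller's list
    for index, value in enumerate(values):
        if ',' in value:
            # quote values that contain the join separator
            values[index] = '"{0:s}"'.format(value)
    return ', '.join(values)
```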
def like_num(text):
"""
check if text resembles a number
"""
text = (
text.replace(",", "")
.replace(".", "")
.replace("،", "")
.replace("٫", "")
.replace("/", "")
)
if text.isdigit():
return True
if text in _num_words:
return True
if text in _ordinal_words:
return True
return False | check if text resembles a number |
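`like_num` strips Western and Arabic separators before the digit check, then falls back to word lookup tables. A self-contained variant with tiny stand-in tables (the real `_num_words`/`_ordinal_words` are far larger):

```python
# Self-contained like_num with small stand-in lookup tables.
_num_words = {'one', 'two', 'ten', 'hundred'}
_ordinal_words = {'first', 'second', 'tenth'}

def like_num(text):
    # strip thousands/decimal separators (Western and Arabic) first
    for ch in (',', '.', '،', '٫', '/'):
        text = text.replace(ch, '')
    return text.isdigit() or text in _num_words or text in _ordinal_words
```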
def datasource_process(self, datasource_id):
"""
deprecated
Runs the configured processing jobs in the feed
:param datasource_id: uuid
"""
# TODO remove this later; the class is used for something else now
# TODO entity fields are not selected without applicationId; consider how abnormal this is
response = self.__app.native_api_call('feed', 'datasource/' + datasource_id + '/process?applicationId=1', {},
self.__options, False, None, False, http_method="POST")
return json.loads(response.text) | deprecated
Runs the configured processing jobs in the feed
:param datasource_id: uuid |
def version(verbose):
"""Prints the current version number"""
print(Fore.BLUE + '-=' * 15)
print(Fore.YELLOW + 'Superset ' + Fore.CYAN + '{version}'.format(
version=config.get('VERSION_STRING')))
print(Fore.BLUE + '-=' * 15)
if verbose:
print('[DB] : ' + '{}'.format(db.engine))
print(Style.RESET_ALL) | Prints the current version number |
def usage_example(phrase, format='json'):
"""Takes the source phrase and queries it to the urbandictionary API
:params phrase: word for which usage_example is to be found
:param format: response structure type. Defaults to: "json"
:returns: returns a json object as str, False if invalid phrase
"""
base_url = Vocabulary.__get_api_link("urbandict")
url = base_url.format(action="define", word=phrase)
word_examples = {}
json_obj = Vocabulary.__return_json(url)
if json_obj:
examples_list = json_obj["list"]
for i, example in enumerate(examples_list):
if example["thumbs_up"] > example["thumbs_down"]:
word_examples[i] = example["example"].replace("\r", "").replace("\n", "")
if word_examples:
# reformatting "word_examples" using "__clean_dict()"
# return json.dumps(Vocabulary.__clean_dict(word_examples))
# return Vocabulary.__clean_dict(word_examples)
return Response().respond(Vocabulary.__clean_dict(word_examples), format)
else:
return False
else:
return False | Takes the source phrase and queries it to the urbandictionary API
:params phrase: word for which usage_example is to be found
:param format: response structure type. Defaults to: "json"
:returns: returns a json object as str, False if invalid phrase |
def forecast(self, throughputs, backlog_size, num_simulations=10000, max_periods=10000, seed=None):
"""Forecasts how long a backlog will take to complete given the historical values provided.
Arguments:
throughputs(List[int]): Number of units completed per unit of time (stories per week, story points per month, etc.)
backlog_size(int): Units in the backlog (stories, points, etc.)
Returns:
results
Exceptions:
ValueError: If there aren't any positive throughputs, or the simulation takes too long.
"""
self._check_throughputs(throughputs)
results = []
if seed is not None:
random.seed(seed)
for i in range(0, num_simulations):
simulated_backlog = backlog_size
time_unit_count = 0
while simulated_backlog > 0:
simulated_backlog -= random.choice(throughputs)
time_unit_count += 1
if time_unit_count > max_periods:
raise ValueError("More than {} periods calculated".format(max_periods))
results.append(time_unit_count)
return Results(results) | Forecasts how long a backlog will take to complete given the historical values provided.
Arguments:
throughputs(List[int]): Number of units completed per unit of time (stories per week, story points per month, etc.)
backlog_size(int): Units in the backlog (stories, points, etc.)
Returns:
results
Exceptions:
ValueError: If there aren't any positive throughputs, or the simulation takes too long. |
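The Monte Carlo loop in `forecast` is simple to reproduce: each simulation repeatedly draws a historical throughput at random until the backlog is exhausted, and the number of draws is one simulated completion time. This compact sketch returns a plain list instead of the `Results` wrapper, and uses a local `random.Random` rather than reseeding the module-level RNG:

```python
import random

# Compact re-implementation of forecast()'s Monte Carlo loop.
def forecast(throughputs, backlog_size, num_simulations=1000, seed=None):
    rng = random.Random(seed)  # local RNG instead of random.seed()
    results = []
    for _ in range(num_simulations):
        remaining = backlog_size
        periods = 0
        while remaining > 0:
            # draw one period's throughput from the historical sample
            remaining -= rng.choice(throughputs)
            periods += 1
        results.append(periods)
    return results

runs = forecast([2, 3, 5], backlog_size=20, num_simulations=200, seed=42)
```

With throughputs between 2 and 5 per period, a backlog of 20 must finish in between 4 and 10 periods, which bounds every simulated run.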
def fuzzy_match(self, proc):
"""
Are there any commands that contain the given text?
Returns:
boolean: ``True`` if the word ``proc`` appears in the command column.
.. note::
'proc' can match anywhere in the command path, name or arguments.
"""
return any(proc in row[self.command_name] for row in self.data) | Are there any commands that contain the given text?
Returns:
boolean: ``True`` if the word ``proc`` appears in the command column.
.. note::
'proc' can match anywhere in the command path, name or arguments. |
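The one-liner in `fuzzy_match` is a short-circuiting substring scan over the command column. Flattened into plain data (no parser object, the row dicts are illustrative):

```python
# The substring scan from fuzzy_match, reduced to plain dicts:
# any() stops at the first command cell containing the search term.
def fuzzy_match(rows, command_key, proc):
    return any(proc in row[command_key] for row in rows)

rows = [
    {'COMMAND': '/usr/sbin/sshd -D'},
    {'COMMAND': 'python manage.py runserver'},
]
```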
def add_direction(value, arg=u"rtl_only"):
"""Adds direction to the element
:arguments:
arg
* rtl_only: Add the direction only in case of a
right-to-left language (default)
* both: add the direction in both case
* ltr_only: Add the direction only in case of a
left-to-right language
{{image_name|add_direction}} when image_name is 'start_arrow.png'
results in 'start_arrow_rtl.png' in case of RTL language, and
'start_arrow.png' or 'start_arrow_ltr.png' depends on `arg` value.
"""
if arg == u'rtl_only':
directions = (u'', u'_rtl')
elif arg == u'both':
directions = (u'_ltr', u'_rtl')
elif arg == u'ltr_only':
directions = (u'_ltr', u'')
else:
raise template.TemplateSyntaxError('add_direction can use arg with one of ["rtl_only", "both", "ltr_only"]')
parts = value.rsplit('.', 1)
if not len(parts):
return value
elif len(parts) == 1:
return value + directions[translation.get_language_bidi()]
else:
return '.'.join((parts[0]+directions[translation.get_language_bidi()],parts[1])) | Adds direction to the element
:arguments:
arg
* rtl_only: Add the direction only in case of a
right-to-left language (default)
* both: add the direction in both case
* ltr_only: Add the direction only in case of a
left-to-right language
{{image_name|add_direction}} when image_name is 'start_arrow.png'
results in 'start_arrow_rtl.png' in case of RTL language, and
'start_arrow.png' or 'start_arrow_ltr.png' depends on `arg` value. |
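The suffix logic in `add_direction` can be tested without Django by passing the bidi flag explicitly instead of reading it from `django.utils.translation.get_language_bidi()`. Note how the boolean indexes the two-element `directions` tuple:

```python
# Django-free version of add_direction: the bidi flag is an argument.
def add_direction(value, is_bidi, arg='rtl_only'):
    if arg == 'rtl_only':
        directions = ('', '_rtl')
    elif arg == 'both':
        directions = ('_ltr', '_rtl')
    elif arg == 'ltr_only':
        directions = ('_ltr', '')
    else:
        raise ValueError('arg must be one of rtl_only, both, ltr_only')
    parts = value.rsplit('.', 1)
    if len(parts) == 1:
        # no extension: append the suffix directly
        return value + directions[is_bidi]
    # insert the suffix before the file extension
    return '.'.join((parts[0] + directions[is_bidi], parts[1]))
```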
def main():
"""Writes out newsfile if significant version bump"""
last_known = '0'
if os.path.isfile(metafile):
with open(metafile) as fh:
last_known = fh.read()
import mbed_cloud
current = mbed_cloud.__version__
# how significant a change in version scheme should trigger a new changelog entry
# (api major, api minor, sdk major, sdk minor, sdk patch)
sigfigs = 4
current_version = LooseVersion(current).version
last_known_version = LooseVersion(last_known).version
should_towncrier = current_version[:sigfigs] != last_known_version[:sigfigs]
print('%s -- %s :: current vs previous changelog build' % (current, last_known))
if should_towncrier:
print('%s >> %s :: running changelog build' % (current, last_known))
subprocess.check_call(
['towncrier', '--yes'],
cwd=os.path.join(PROJECT_ROOT, 'docs', 'changelog')
)
with open(metafile, 'w') as fh:
fh.write(current) | Writes out newsfile if significant version bump | Below is the instruction that describes the task:
### Input:
Writes out newsfile if significant version bump
### Response:
def main():
"""Writes out newsfile if significant version bump"""
last_known = '0'
if os.path.isfile(metafile):
with open(metafile) as fh:
last_known = fh.read()
import mbed_cloud
current = mbed_cloud.__version__
# how significant a change in version scheme should trigger a new changelog entry
# (api major, api minor, sdk major, sdk minor, sdk patch)
sigfigs = 4
current_version = LooseVersion(current).version
last_known_version = LooseVersion(last_known).version
should_towncrier = current_version[:sigfigs] != last_known_version[:sigfigs]
print('%s -- %s :: current vs previous changelog build' % (current, last_known))
if should_towncrier:
print('%s >> %s :: running changelog build' % (current, last_known))
subprocess.check_call(
['towncrier', '--yes'],
cwd=os.path.join(PROJECT_ROOT, 'docs', 'changelog')
)
with open(metafile, 'w') as fh:
fh.write(current) |
def find_by_id(self, submission_id):
"""Finds submission by ID.
Args:
submission_id: ID of the submission
Returns:
SubmissionDescriptor with information about submission or None if
submission is not found.
"""
return self._attacks.get(
submission_id,
self._defenses.get(
submission_id,
self._targeted_attacks.get(submission_id, None))) | Finds submission by ID.
Args:
submission_id: ID of the submission
Returns:
SubmissionDescriptor with information about submission or None if
submission is not found. | Below is the instruction that describes the task:
### Input:
Finds submission by ID.
Args:
submission_id: ID of the submission
Returns:
SubmissionDescriptor with information about submission or None if
submission is not found.
### Response:
def find_by_id(self, submission_id):
"""Finds submission by ID.
Args:
submission_id: ID of the submission
Returns:
SubmissionDescriptor with information about submission or None if
submission is not found.
"""
return self._attacks.get(
submission_id,
self._defenses.get(
submission_id,
self._targeted_attacks.get(submission_id, None))) |
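The nested `dict.get(...)` fallback in `find_by_id` above generalizes to any number of mappings; a small sketch of that lookup pattern:

```python
def chained_get(key, *mappings, default=None):
    """Look up key in each mapping in turn, returning the first hit.

    Generalizes the nested dict.get(...) chain used in find_by_id
    above (attacks, then defenses, then targeted attacks).
    """
    for mapping in mappings:
        if key in mapping:
            return mapping[key]
    return default

attacks = {"sub1": "attack descriptor"}
defenses = {"sub2": "defense descriptor"}
# chained_get("sub2", attacks, defenses) -> "defense descriptor"
```

Unlike the nested `get` calls, this version does not evaluate every fallback eagerly, which matters only if the defaults are expensive to build.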
def set_value(self, value):
"""
Sets the user value (mode) of the choice. Like for Symbol.set_value(),
the visibility might truncate the value. Choices without the 'optional'
attribute (is_optional) can never be in n mode, but 0/"n" is still
accepted since it's not a malformed value (though it will have no
effect).
Returns True if the value is valid for the type of the choice, and
False otherwise. This only looks at the form of the value. Check the
Choice.assignable attribute to see what values are currently in range
and would actually be reflected in the mode of the choice.
"""
if value == self.user_value:
# We know the value must be valid if it was successfully set
# previously
self._was_set = True
return True
if not ((self.orig_type is BOOL and value in (2, 0, "y", "n") ) or
(self.orig_type is TRISTATE and value in (2, 1, 0, "y", "m", "n"))):
# Display tristate values as n, m, y in the warning
self.kconfig._warn(
"the value {} is invalid for {}, which has type {} -- "
"assignment ignored"
.format(TRI_TO_STR[value] if value in (0, 1, 2) else
"'{}'".format(value),
_name_and_loc(self),
TYPE_TO_STR[self.orig_type]))
return False
if value in ("y", "m", "n"):
value = STR_TO_TRI[value]
self.user_value = value
self._was_set = True
self._rec_invalidate()
return True | Sets the user value (mode) of the choice. Like for Symbol.set_value(),
the visibility might truncate the value. Choices without the 'optional'
attribute (is_optional) can never be in n mode, but 0/"n" is still
accepted since it's not a malformed value (though it will have no
effect).
Returns True if the value is valid for the type of the choice, and
False otherwise. This only looks at the form of the value. Check the
Choice.assignable attribute to see what values are currently in range
and would actually be reflected in the mode of the choice. | Below is the instruction that describes the task:
### Input:
Sets the user value (mode) of the choice. Like for Symbol.set_value(),
the visibility might truncate the value. Choices without the 'optional'
attribute (is_optional) can never be in n mode, but 0/"n" is still
accepted since it's not a malformed value (though it will have no
effect).
Returns True if the value is valid for the type of the choice, and
False otherwise. This only looks at the form of the value. Check the
Choice.assignable attribute to see what values are currently in range
and would actually be reflected in the mode of the choice.
### Response:
def set_value(self, value):
"""
Sets the user value (mode) of the choice. Like for Symbol.set_value(),
the visibility might truncate the value. Choices without the 'optional'
attribute (is_optional) can never be in n mode, but 0/"n" is still
accepted since it's not a malformed value (though it will have no
effect).
Returns True if the value is valid for the type of the choice, and
False otherwise. This only looks at the form of the value. Check the
Choice.assignable attribute to see what values are currently in range
and would actually be reflected in the mode of the choice.
"""
if value == self.user_value:
# We know the value must be valid if it was successfully set
# previously
self._was_set = True
return True
if not ((self.orig_type is BOOL and value in (2, 0, "y", "n") ) or
(self.orig_type is TRISTATE and value in (2, 1, 0, "y", "m", "n"))):
# Display tristate values as n, m, y in the warning
self.kconfig._warn(
"the value {} is invalid for {}, which has type {} -- "
"assignment ignored"
.format(TRI_TO_STR[value] if value in (0, 1, 2) else
"'{}'".format(value),
_name_and_loc(self),
TYPE_TO_STR[self.orig_type]))
return False
if value in ("y", "m", "n"):
value = STR_TO_TRI[value]
self.user_value = value
self._was_set = True
self._rec_invalidate()
return True |
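The validation step in `Choice.set_value` above boils down to checking the value against the allowed forms for the type and normalizing strings to numeric tri-state values. A minimal sketch, with `STR_TO_TRI` reproduced as a plain dict (in Kconfiglib these are module-level constants):

```python
STR_TO_TRI = {"n": 0, "m": 1, "y": 2}

def normalize_tristate(value, tristate=True):
    """Validate a user value as Choice.set_value above does.

    Returns the numeric tri-state value (0, 1, or 2), or None if the
    value is invalid for the type (bool types reject 1/"m").
    """
    allowed = (2, 1, 0, "y", "m", "n") if tristate else (2, 0, "y", "n")
    if value not in allowed:
        return None
    # strings map through the table; ints pass through unchanged
    return STR_TO_TRI.get(value, value)
```

This only checks the *form* of the value, exactly as the docstring notes; whether the mode is actually assignable is a separate visibility question.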
def makeService(cls, options):
"""
Create an L{IService} for the database specified by the given
configuration.
"""
from axiom.store import Store
jm = options['journal-mode']
if jm is not None:
jm = jm.decode('ascii')
store = Store(options['dbdir'], debug=options['debug'], journalMode=jm)
service = IService(store)
_CheckSystemVersion(store).setServiceParent(service)
return service | Create an L{IService} for the database specified by the given
configuration. | Below is the instruction that describes the task:
### Input:
Create an L{IService} for the database specified by the given
configuration.
### Response:
def makeService(cls, options):
"""
Create an L{IService} for the database specified by the given
configuration.
"""
from axiom.store import Store
jm = options['journal-mode']
if jm is not None:
jm = jm.decode('ascii')
store = Store(options['dbdir'], debug=options['debug'], journalMode=jm)
service = IService(store)
_CheckSystemVersion(store).setServiceParent(service)
return service |
def text2text_generate_encoded(sample_generator,
vocab,
targets_vocab=None,
has_inputs=True,
inputs_prefix="",
targets_prefix=""):
"""Encode Text2Text samples from the generator with the vocab."""
targets_vocab = targets_vocab or vocab
for sample in sample_generator:
if has_inputs:
sample["inputs"] = vocab.encode(inputs_prefix + sample["inputs"])
sample["inputs"].append(text_encoder.EOS_ID)
sample["targets"] = targets_vocab.encode(targets_prefix + sample["targets"])
sample["targets"].append(text_encoder.EOS_ID)
yield sample | Encode Text2Text samples from the generator with the vocab. | Below is the instruction that describes the task:
### Input:
Encode Text2Text samples from the generator with the vocab.
### Response:
def text2text_generate_encoded(sample_generator,
vocab,
targets_vocab=None,
has_inputs=True,
inputs_prefix="",
targets_prefix=""):
"""Encode Text2Text samples from the generator with the vocab."""
targets_vocab = targets_vocab or vocab
for sample in sample_generator:
if has_inputs:
sample["inputs"] = vocab.encode(inputs_prefix + sample["inputs"])
sample["inputs"].append(text_encoder.EOS_ID)
sample["targets"] = targets_vocab.encode(targets_prefix + sample["targets"])
sample["targets"].append(text_encoder.EOS_ID)
yield sample |
def get_list(self, section, option):
"""
This allows for loading of Pyramid list style configuration
options:
[foo]
bar =
baz
qux
zap
``get_list('foo', 'bar')`` returns ``['baz', 'qux', 'zap']``
:param str section:
The section to read.
:param str option:
The option to read from the section.
:returns: list
"""
value = self.get(section, option)
return list(filter(None, (x.strip() for x in value.splitlines()))) | This allows for loading of Pyramid list style configuration
options:
[foo]
bar =
baz
qux
zap
``get_list('foo', 'bar')`` returns ``['baz', 'qux', 'zap']``
:param str section:
The section to read.
:param str option:
The option to read from the section.
:returns: list | Below is the instruction that describes the task:
### Input:
This allows for loading of Pyramid list style configuration
options:
[foo]
bar =
baz
qux
zap
``get_list('foo', 'bar')`` returns ``['baz', 'qux', 'zap']``
:param str section:
The section to read.
:param str option:
The option to read from the section.
:returns: list
### Response:
def get_list(self, section, option):
"""
This allows for loading of Pyramid list style configuration
options:
[foo]
bar =
baz
qux
zap
``get_list('foo', 'bar')`` returns ``['baz', 'qux', 'zap']``
:param str section:
The section to read.
:param str option:
The option to read from the section.
:returns: list
"""
value = self.get(section, option)
return list(filter(None, (x.strip() for x in value.splitlines()))) |
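The `filter`/`splitlines` combination in `get_list` above can be written as a plain comprehension; a self-contained sketch of just the parsing step, without the ConfigParser lookup:

```python
def parse_list_option(value):
    """Split a Pyramid-style multiline option value into a list,
    dropping blank lines, as get_list above does."""
    return [line.strip() for line in value.splitlines() if line.strip()]

raw = "\nbaz\nqux\nzap\n"
# parse_list_option(raw) -> ['baz', 'qux', 'zap']
```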
def n_subfile(self):
"""
Count how many files in this directory (doesn't include files in
sub folders).
"""
self.assert_is_dir_and_exists()
n = 0
for _ in self.select_file(recursive=False):
n += 1
return n | Count how many files in this directory (doesn't include files in
sub folders). | Below is the instruction that describes the task:
### Input:
Count how many files in this directory (doesn't include files in
sub folders).
### Response:
def n_subfile(self):
"""
Count how many files in this directory (doesn't include files in
sub folders).
"""
self.assert_is_dir_and_exists()
n = 0
for _ in self.select_file(recursive=False):
n += 1
return n |
def write_percolator_xml(staticxml, feats, fn):
"""Given the static percolator xml root and process info nodes, and all
psms and peptides as iterators in a dict {'peptide': pep_iterator, 'psm':
psm_iterator}, this generates percolator out data into a file."""
# First get xml until psms opening element is found.
etree.SubElement(staticxml, 'psms').text = '***psms***'
root = etree.tostring(staticxml, pretty_print=True,
xml_declaration=True, encoding='UTF-8')
root = root.decode('utf-8')
root = root[:root.find('***psms***')]
# Write opening xml
with open(fn, 'w') as fp:
fp.write(root)
fp.write('\n')
# Then write features
with open(fn, 'a') as fp:
psmcount = 0
for psm in feats['psm']:
psmcount += 1
fp.write(psm)
fp.write('\n')
fp.write('</psms><peptides>\n')
peptidecount = 0
for pep in feats['peptide']:
peptidecount += 1
fp.write(pep)
fp.write('\n')
fp.write('</peptides></percolator_output>')
print('Wrote {0} psms, {1} peptides to file {2}'.format(psmcount,
peptidecount, fn)) | Given the static percolator xml root and process info nodes, and all
psms and peptides as iterators in a dict {'peptide': pep_iterator, 'psm':
psm_iterator}, this generates percolator out data into a file. | Below is the instruction that describes the task:
### Input:
Given the static percolator xml root and process info nodes, and all
psms and peptides as iterators in a dict {'peptide': pep_iterator, 'psm':
psm_iterator}, this generates percolator out data into a file.
### Response:
def write_percolator_xml(staticxml, feats, fn):
"""Given the static percolator xml root and process info nodes, and all
psms and peptides as iterators in a dict {'peptide': pep_iterator, 'psm':
psm_iterator}, this generates percolator out data into a file."""
# First get xml until psms opening element is found.
etree.SubElement(staticxml, 'psms').text = '***psms***'
root = etree.tostring(staticxml, pretty_print=True,
xml_declaration=True, encoding='UTF-8')
root = root.decode('utf-8')
root = root[:root.find('***psms***')]
# Write opening xml
with open(fn, 'w') as fp:
fp.write(root)
fp.write('\n')
# Then write features
with open(fn, 'a') as fp:
psmcount = 0
for psm in feats['psm']:
psmcount += 1
fp.write(psm)
fp.write('\n')
fp.write('</psms><peptides>\n')
peptidecount = 0
for pep in feats['peptide']:
peptidecount += 1
fp.write(pep)
fp.write('\n')
fp.write('</peptides></percolator_output>')
print('Wrote {0} psms, {1} peptides to file {2}'.format(psmcount,
peptidecount, fn)) |
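The `'***psms***'` placeholder trick in `write_percolator_xml` above, serializing the static tree once and cutting it at a marker so the psms can be streamed afterwards, can be isolated into a tiny helper:

```python
def split_at_placeholder(serialized, placeholder="***psms***"):
    """Return the serialized XML header up to (but excluding) the
    placeholder, the trick write_percolator_xml above uses to write
    a static opening before streaming psm elements."""
    return serialized[:serialized.find(placeholder)]

doc = "<percolator_output><psms>***psms***"
# split_at_placeholder(doc) -> "<percolator_output><psms>"
```

This avoids holding every psm in memory while still letting `etree` produce the declaration and opening elements.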
def get_batch(self, user_list):
"""
批量获取用户基本信息
开发者可通过该接口来批量获取用户基本信息。最多支持一次拉取100条。
详情请参考
https://mp.weixin.qq.com/wiki?t=resource/res_main&id=mp1421140839
:param user_list: user_list,支持“使用示例”中两种输入格式
:return: 用户信息的 list
使用示例::
from wechatpy import WeChatClient
client = WeChatClient('appid', 'secret')
users = client.user.get_batch(['openid1', 'openid2'])
users = client.user.get_batch([
{'openid': 'openid1', 'lang': 'zh-CN'},
{'openid': 'openid2', 'lang': 'en'},
])
"""
if all((isinstance(x, six.string_types) for x in user_list)):
user_list = [{'openid': oid} for oid in user_list]
res = self._post(
'user/info/batchget',
data={'user_list': user_list},
result_processor=lambda x: x['user_info_list']
)
return res | 批量获取用户基本信息
开发者可通过该接口来批量获取用户基本信息。最多支持一次拉取100条。
详情请参考
https://mp.weixin.qq.com/wiki?t=resource/res_main&id=mp1421140839
:param user_list: user_list,支持“使用示例”中两种输入格式
:return: 用户信息的 list
使用示例::
from wechatpy import WeChatClient
client = WeChatClient('appid', 'secret')
users = client.user.get_batch(['openid1', 'openid2'])
users = client.user.get_batch([
{'openid': 'openid1', 'lang': 'zh-CN'},
{'openid': 'openid2', 'lang': 'en'},
]) | Below is the instruction that describes the task:
### Input:
批量获取用户基本信息
开发者可通过该接口来批量获取用户基本信息。最多支持一次拉取100条。
详情请参考
https://mp.weixin.qq.com/wiki?t=resource/res_main&id=mp1421140839
:param user_list: user_list,支持“使用示例”中两种输入格式
:return: 用户信息的 list
使用示例::
from wechatpy import WeChatClient
client = WeChatClient('appid', 'secret')
users = client.user.get_batch(['openid1', 'openid2'])
users = client.user.get_batch([
{'openid': 'openid1', 'lang': 'zh-CN'},
{'openid': 'openid2', 'lang': 'en'},
])
### Response:
def get_batch(self, user_list):
"""
批量获取用户基本信息
开发者可通过该接口来批量获取用户基本信息。最多支持一次拉取100条。
详情请参考
https://mp.weixin.qq.com/wiki?t=resource/res_main&id=mp1421140839
:param user_list: user_list,支持“使用示例”中两种输入格式
:return: 用户信息的 list
使用示例::
from wechatpy import WeChatClient
client = WeChatClient('appid', 'secret')
users = client.user.get_batch(['openid1', 'openid2'])
users = client.user.get_batch([
{'openid': 'openid1', 'lang': 'zh-CN'},
{'openid': 'openid2', 'lang': 'en'},
])
"""
if all((isinstance(x, six.string_types) for x in user_list)):
user_list = [{'openid': oid} for oid in user_list]
res = self._post(
'user/info/batchget',
data={'user_list': user_list},
result_processor=lambda x: x['user_info_list']
)
return res |
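The input normalization in `get_batch` above accepts either a list of openid strings or a list of dicts; a minimal sketch of that coercion step, with Python 3's `str` standing in for `six.string_types`:

```python
def normalize_user_list(user_list):
    """Coerce a list of openid strings into the dict form the batch
    API expects, as get_batch above does before posting."""
    if all(isinstance(x, str) for x in user_list):
        return [{"openid": oid} for oid in user_list]
    # already in dict form (possibly with per-user 'lang'): pass through
    return user_list
```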
def add_pk_if_required(db, table, name):
"""Return a class deriving from our Model class as well as the SQLAlchemy
model.
:param `sqlalchemy.schema.Table` table: table to create primary key for
:param table: table to create primary key for
"""
db.metadata.reflect(bind=db.engine)
cls_dict = {'__tablename__': name}
if not table.primary_key:
for column in table.columns:
column.primary_key = True
Table(name, db.metadata, *table.columns, extend_existing=True)
cls_dict['__table__'] = table
db.metadata.create_all(bind=db.engine)
return type(str(name), (sandman_model, db.Model), cls_dict) | Return a class deriving from our Model class as well as the SQLAlchemy
model.
:param `sqlalchemy.schema.Table` table: table to create primary key for
:param table: table to create primary key for | Below is the instruction that describes the task:
### Input:
Return a class deriving from our Model class as well as the SQLAlchemy
model.
:param `sqlalchemy.schema.Table` table: table to create primary key for
:param table: table to create primary key for
### Response:
def add_pk_if_required(db, table, name):
"""Return a class deriving from our Model class as well as the SQLAlchemy
model.
:param `sqlalchemy.schema.Table` table: table to create primary key for
:param table: table to create primary key for
"""
db.metadata.reflect(bind=db.engine)
cls_dict = {'__tablename__': name}
if not table.primary_key:
for column in table.columns:
column.primary_key = True
Table(name, db.metadata, *table.columns, extend_existing=True)
cls_dict['__table__'] = table
db.metadata.create_all(bind=db.engine)
return type(str(name), (sandman_model, db.Model), cls_dict) |
def _reregister_types(self):
"""Registers existing types for a new connection"""
for _type in self._register_types:
psycopg2.extensions.register_type(psycopg2.extensions.new_type(*_type)) | Registers existing types for a new connection | Below is the instruction that describes the task:
### Input:
Registers existing types for a new connection
### Response:
def _reregister_types(self):
"""Registers existing types for a new connection"""
for _type in self._register_types:
psycopg2.extensions.register_type(psycopg2.extensions.new_type(*_type)) |
def reduce_lists(d):
"""Replace single item lists in a dictionary with the single item."""
for field in d:
old_data = d[field]
if len(old_data) == 1:
d[field] = old_data[0] | Replace single item lists in a dictionary with the single item. | Below is the instruction that describes the task:
### Input:
Replace single item lists in a dictionary with the single item.
### Response:
def reduce_lists(d):
"""Replace single item lists in a dictionary with the single item."""
for field in d:
old_data = d[field]
if len(old_data) == 1:
d[field] = old_data[0] |
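A quick usage sketch of `reduce_lists` (the function redefined here so the example is self-contained):

```python
def reduce_lists(d):
    """Replace single-item lists in a dictionary with the single item."""
    for field in d:
        old_data = d[field]
        if len(old_data) == 1:
            d[field] = old_data[0]

record = {"title": ["Only one"], "tags": ["a", "b"]}
reduce_lists(record)
# record is now {"title": "Only one", "tags": ["a", "b"]}
```

Note that the mutation is in place: multi-item lists are left untouched, and the function returns nothing.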
def batch_face_locations(images, number_of_times_to_upsample=1, batch_size=128):
"""
Returns a 2d array of bounding boxes of human faces in an image using the cnn face detector
If you are using a GPU, this can give you much faster results since the GPU
can process batches of images at once. If you aren't using a GPU, you don't need this function.
:param img: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param batch_size: How many images to include in each GPU processing batch.
:return: A list of tuples of found face locations in css (top, right, bottom, left) order
"""
def convert_cnn_detections_to_css(detections):
return [_trim_css_to_bounds(_rect_to_css(face.rect), images[0].shape) for face in detections]
raw_detections_batched = _raw_face_locations_batched(images, number_of_times_to_upsample, batch_size)
return list(map(convert_cnn_detections_to_css, raw_detections_batched)) | Returns a 2d array of bounding boxes of human faces in an image using the cnn face detector
If you are using a GPU, this can give you much faster results since the GPU
can process batches of images at once. If you aren't using a GPU, you don't need this function.
:param img: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param batch_size: How many images to include in each GPU processing batch.
:return: A list of tuples of found face locations in css (top, right, bottom, left) order | Below is the instruction that describes the task:
### Input:
Returns a 2d array of bounding boxes of human faces in an image using the cnn face detector
If you are using a GPU, this can give you much faster results since the GPU
can process batches of images at once. If you aren't using a GPU, you don't need this function.
:param img: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param batch_size: How many images to include in each GPU processing batch.
:return: A list of tuples of found face locations in css (top, right, bottom, left) order
### Response:
def batch_face_locations(images, number_of_times_to_upsample=1, batch_size=128):
"""
Returns a 2d array of bounding boxes of human faces in an image using the cnn face detector
If you are using a GPU, this can give you much faster results since the GPU
can process batches of images at once. If you aren't using a GPU, you don't need this function.
:param img: A list of images (each as a numpy array)
:param number_of_times_to_upsample: How many times to upsample the image looking for faces. Higher numbers find smaller faces.
:param batch_size: How many images to include in each GPU processing batch.
:return: A list of tuples of found face locations in css (top, right, bottom, left) order
"""
def convert_cnn_detections_to_css(detections):
return [_trim_css_to_bounds(_rect_to_css(face.rect), images[0].shape) for face in detections]
raw_detections_batched = _raw_face_locations_batched(images, number_of_times_to_upsample, batch_size)
return list(map(convert_cnn_detections_to_css, raw_detections_batched)) |
def configure_attributes(self, json_data):
"""Configure load balancer attributes such as idle timeout, connection draining, etc
Args:
json_data (json): return data from ELB upsert
"""
env = boto3.session.Session(profile_name=self.env, region_name=self.region)
elbclient = env.client('elb')
elb_settings = self.properties['elb']
LOG.debug('Block ELB Settings Pre Configure Load Balancer Attributes:\n%s', pformat(elb_settings))
# FIXME: Determine why 'job' is not being used
# pylint: disable=unused-variable
for job in json.loads(json_data)['job']:
load_balancer_attributes = {
'CrossZoneLoadBalancing': {
'Enabled': True
},
'AccessLog': {
'Enabled': False,
},
'ConnectionDraining': {
'Enabled': False,
},
'ConnectionSettings': {
'IdleTimeout': 60
}
}
if elb_settings.get('connection_draining_timeout'):
connection_draining_timeout = int(elb_settings['connection_draining_timeout'])
LOG.info('Applying Custom Load Balancer Connection Draining Timeout: %d', connection_draining_timeout)
load_balancer_attributes['ConnectionDraining'] = {
'Enabled': True,
'Timeout': connection_draining_timeout
}
if elb_settings.get('idle_timeout'):
idle_timeout = int(elb_settings['idle_timeout'])
LOG.info('Applying Custom Load Balancer Idle Timeout: %d', idle_timeout)
load_balancer_attributes['ConnectionSettings'] = {'IdleTimeout': idle_timeout}
if elb_settings.get('access_log'):
access_log_bucket_name = elb_settings['access_log']['bucket_name']
access_log_bucket_prefix = elb_settings['access_log']['bucket_prefix']
access_log_emit_interval = int(elb_settings['access_log']['emit_interval'])
LOG.info('Applying Custom Load Balancer Access Log: %s/%s every %d minutes', access_log_bucket_name,
access_log_bucket_prefix, access_log_emit_interval)
load_balancer_attributes['AccessLog'] = {
'Enabled': True,
'S3BucketName': access_log_bucket_name,
'EmitInterval': access_log_emit_interval,
'S3BucketPrefix': access_log_bucket_prefix
}
LOG.info('Applying Load Balancer Attributes')
LOG.debug('Load Balancer Attributes:\n%s', pformat(load_balancer_attributes))
elbclient.modify_load_balancer_attributes(
LoadBalancerName=self.app, LoadBalancerAttributes=load_balancer_attributes) | Configure load balancer attributes such as idle timeout, connection draining, etc
Args:
json_data (json): return data from ELB upsert | Below is the instruction that describes the task:
### Input:
Configure load balancer attributes such as idle timeout, connection draining, etc
Args:
json_data (json): return data from ELB upsert
### Response:
def configure_attributes(self, json_data):
"""Configure load balancer attributes such as idle timeout, connection draining, etc
Args:
json_data (json): return data from ELB upsert
"""
env = boto3.session.Session(profile_name=self.env, region_name=self.region)
elbclient = env.client('elb')
elb_settings = self.properties['elb']
LOG.debug('Block ELB Settings Pre Configure Load Balancer Attributes:\n%s', pformat(elb_settings))
# FIXME: Determine why 'job' is not being used
# pylint: disable=unused-variable
for job in json.loads(json_data)['job']:
load_balancer_attributes = {
'CrossZoneLoadBalancing': {
'Enabled': True
},
'AccessLog': {
'Enabled': False,
},
'ConnectionDraining': {
'Enabled': False,
},
'ConnectionSettings': {
'IdleTimeout': 60
}
}
if elb_settings.get('connection_draining_timeout'):
connection_draining_timeout = int(elb_settings['connection_draining_timeout'])
LOG.info('Applying Custom Load Balancer Connection Draining Timeout: %d', connection_draining_timeout)
load_balancer_attributes['ConnectionDraining'] = {
'Enabled': True,
'Timeout': connection_draining_timeout
}
if elb_settings.get('idle_timeout'):
idle_timeout = int(elb_settings['idle_timeout'])
LOG.info('Applying Custom Load Balancer Idle Timeout: %d', idle_timeout)
load_balancer_attributes['ConnectionSettings'] = {'IdleTimeout': idle_timeout}
if elb_settings.get('access_log'):
access_log_bucket_name = elb_settings['access_log']['bucket_name']
access_log_bucket_prefix = elb_settings['access_log']['bucket_prefix']
access_log_emit_interval = int(elb_settings['access_log']['emit_interval'])
LOG.info('Applying Custom Load Balancer Access Log: %s/%s every %d minutes', access_log_bucket_name,
access_log_bucket_prefix, access_log_emit_interval)
load_balancer_attributes['AccessLog'] = {
'Enabled': True,
'S3BucketName': access_log_bucket_name,
'EmitInterval': access_log_emit_interval,
'S3BucketPrefix': access_log_bucket_prefix
}
LOG.info('Applying Load Balancer Attributes')
LOG.debug('Load Balancer Attributes:\n%s', pformat(load_balancer_attributes))
elbclient.modify_load_balancer_attributes(
LoadBalancerName=self.app, LoadBalancerAttributes=load_balancer_attributes) |
def extract_file_config(content):
"""
Pull out the file-specific config specified in the docstring.
"""
prop_pat = re.compile(
r"^\s*#\s*sphinx_gallery_([A-Za-z0-9_]+)\s*=\s*(.+)\s*$",
re.MULTILINE)
file_conf = {}
for match in re.finditer(prop_pat, content):
name = match.group(1)
value = match.group(2)
try:
value = ast.literal_eval(value)
except (SyntaxError, ValueError):
logger.warning(
'Sphinx-gallery option %s was passed invalid value %s',
name, value)
else:
file_conf[name] = value
return file_conf | Pull out the file-specific config specified in the docstring. | Below is the instruction that describes the task:
### Input:
Pull out the file-specific config specified in the docstring.
### Response:
def extract_file_config(content):
"""
Pull out the file-specific config specified in the docstring.
"""
prop_pat = re.compile(
r"^\s*#\s*sphinx_gallery_([A-Za-z0-9_]+)\s*=\s*(.+)\s*$",
re.MULTILINE)
file_conf = {}
for match in re.finditer(prop_pat, content):
name = match.group(1)
value = match.group(2)
try:
value = ast.literal_eval(value)
except (SyntaxError, ValueError):
logger.warning(
'Sphinx-gallery option %s was passed invalid value %s',
name, value)
else:
file_conf[name] = value
return file_conf |
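`extract_file_config` above depends on a module-level `logger`; a self-contained sketch of the same regex-plus-`ast.literal_eval` parsing, with the warning on invalid values dropped for brevity:

```python
import ast
import re

# matches comment lines like: # sphinx_gallery_thumbnail_number = 2
PROP_PAT = re.compile(
    r"^\s*#\s*sphinx_gallery_([A-Za-z0-9_]+)\s*=\s*(.+)\s*$",
    re.MULTILINE)

def extract_file_config(content):
    """Pull sphinx_gallery_* options out of a script's comments."""
    file_conf = {}
    for match in PROP_PAT.finditer(content):
        name, value = match.group(1), match.group(2)
        try:
            # literal_eval safely parses ints, strings, lists, etc.
            file_conf[name] = ast.literal_eval(value)
        except (SyntaxError, ValueError):
            pass  # invalid value: skipped (the original logs a warning)
    return file_conf

src = "# sphinx_gallery_thumbnail_number = 2\nprint('hi')\n"
# extract_file_config(src) -> {'thumbnail_number': 2}
```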
def check(jail=None,
chroot=None,
root=None,
depends=False,
recompute=False,
checksum=False):
'''
Sanity checks installed packages
jail
Perform the sanity check in the specified jail
CLI Example:
.. code-block:: bash
salt '*' pkg.check jail=<jail name or id>
chroot
Perform the sanity check in the specified chroot (ignored if ``jail``
is specified)
root
Perform the sanity check in the specified root (ignored if ``jail``
is specified)
CLI Example:
.. code-block:: bash
salt '*' pkg.check chroot=/path/to/chroot
Of the below, at least one must be set to ``True``.
depends
Check for and install missing dependencies.
CLI Example:
.. code-block:: bash
salt '*' pkg.check recompute=True
recompute
Recompute sizes and checksums of installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check depends=True
checksum
Find invalid checksums for installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check checksum=True
'''
if not any((depends, recompute, checksum)):
return 'One of depends, recompute, or checksum must be set to True'
opts = ''
if depends:
opts += 'dy'
if recompute:
opts += 'r'
if checksum:
opts += 's'
cmd = _pkg(jail, chroot, root)
cmd.append('check')
if opts:
cmd.append('-' + opts)
return __salt__['cmd.run'](
cmd,
output_loglevel='trace',
python_shell=False
) | Sanity checks installed packages
jail
Perform the sanity check in the specified jail
CLI Example:
.. code-block:: bash
salt '*' pkg.check jail=<jail name or id>
chroot
Perform the sanity check in the specified chroot (ignored if ``jail``
is specified)
root
Perform the sanity check in the specified root (ignored if ``jail``
is specified)
CLI Example:
.. code-block:: bash
salt '*' pkg.check chroot=/path/to/chroot
Of the below, at least one must be set to ``True``.
depends
Check for and install missing dependencies.
CLI Example:
.. code-block:: bash
salt '*' pkg.check recompute=True
recompute
Recompute sizes and checksums of installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check depends=True
checksum
Find invalid checksums for installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check checksum=True | Below is the instruction that describes the task:
### Input:
Sanity checks installed packages
jail
Perform the sanity check in the specified jail
CLI Example:
.. code-block:: bash
salt '*' pkg.check jail=<jail name or id>
chroot
Perform the sanity check in the specified chroot (ignored if ``jail``
is specified)
root
Perform the sanity check in the specified root (ignored if ``jail``
is specified)
CLI Example:
.. code-block:: bash
salt '*' pkg.check chroot=/path/to/chroot
Of the below, at least one must be set to ``True``.
depends
Check for and install missing dependencies.
CLI Example:
.. code-block:: bash
salt '*' pkg.check recompute=True
recompute
Recompute sizes and checksums of installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check depends=True
checksum
Find invalid checksums for installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check checksum=True
### Response:
def check(jail=None,
chroot=None,
root=None,
depends=False,
recompute=False,
checksum=False):
'''
Sanity checks installed packages
jail
Perform the sanity check in the specified jail
CLI Example:
.. code-block:: bash
salt '*' pkg.check jail=<jail name or id>
chroot
Perform the sanity check in the specified chroot (ignored if ``jail``
is specified)
root
Perform the sanity check in the specified root (ignored if ``jail``
is specified)
CLI Example:
.. code-block:: bash
salt '*' pkg.check chroot=/path/to/chroot
Of the below, at least one must be set to ``True``.
depends
Check for and install missing dependencies.
CLI Example:
.. code-block:: bash
salt '*' pkg.check recompute=True
recompute
Recompute sizes and checksums of installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check depends=True
checksum
Find invalid checksums for installed packages.
CLI Example:
.. code-block:: bash
salt '*' pkg.check checksum=True
'''
if not any((depends, recompute, checksum)):
return 'One of depends, recompute, or checksum must be set to True'
opts = ''
if depends:
opts += 'dy'
if recompute:
opts += 'r'
if checksum:
opts += 's'
cmd = _pkg(jail, chroot, root)
cmd.append('check')
if opts:
cmd.append('-' + opts)
return __salt__['cmd.run'](
cmd,
output_loglevel='trace',
python_shell=False
) |
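The flag-string assembly in `check` above is easy to isolate; a minimal sketch of just that step (the `_pkg`/`cmd.run` plumbing is Salt-specific and omitted):

```python
def build_check_opts(depends=False, recompute=False, checksum=False):
    """Build the `pkg check` flag string as the function above does.

    Returns e.g. '-dys', or '' when no option is requested.
    """
    opts = ""
    if depends:
        opts += "dy"   # -d check deps, -y assume yes when installing
    if recompute:
        opts += "r"
    if checksum:
        opts += "s"
    return "-" + opts if opts else ""
```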
def locked_put(self, credentials):
"""Write a credentials to the SQLAlchemy datastore.
Args:
credentials: :class:`oauth2client.Credentials`
"""
filters = {self.key_name: self.key_value}
query = self.session.query(self.model_class).filter_by(**filters)
entity = query.first()
if not entity:
entity = self.model_class(**filters)
setattr(entity, self.property_name, credentials)
self.session.add(entity) | Write credentials to the SQLAlchemy datastore.
Args:
credentials: :class:`oauth2client.Credentials` | Below is the instruction that describes the task:
### Input:
Write credentials to the SQLAlchemy datastore.
Args:
credentials: :class:`oauth2client.Credentials`
### Response:
def locked_put(self, credentials):
"""Write a credentials to the SQLAlchemy datastore.
Args:
credentials: :class:`oauth2client.Credentials`
"""
filters = {self.key_name: self.key_value}
query = self.session.query(self.model_class).filter_by(**filters)
entity = query.first()
if not entity:
entity = self.model_class(**filters)
setattr(entity, self.property_name, credentials)
self.session.add(entity) |
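The query-then-create pattern in `locked_put` is a read-modify-write upsert: look the row up by key, create it if absent, then overwrite the credentials attribute. A minimal in-memory sketch of the same control flow — `FakeSession` and `Entity` are stand-ins for a real SQLAlchemy session and model class, not part of oauth2client:

```python
class Entity:
    """Stand-in for the SQLAlchemy model class."""
    def __init__(self, **kw):
        self.__dict__.update(kw)
        self.credentials = None

class FakeSession:
    """Tiny stand-in for a SQLAlchemy session: stores entities in a list."""
    def __init__(self):
        self.rows = []

    def first_by(self, **filters):
        # plays the role of query.filter_by(**filters).first()
        for row in self.rows:
            if all(getattr(row, k, None) == v for k, v in filters.items()):
                return row
        return None

    def add(self, entity):
        if entity not in self.rows:
            self.rows.append(entity)

def locked_put(session, key_name, key_value, credentials):
    filters = {key_name: key_value}
    entity = session.first_by(**filters)
    if not entity:
        entity = Entity(**filters)        # create the row on first write
    entity.credentials = credentials      # setattr(entity, property_name, ...)
    session.add(entity)
```

Writing twice for the same key updates the single existing row rather than inserting a duplicate, which is the point of the upsert.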
def reverse(
self,
query,
exactly_one=DEFAULT_SENTINEL,
timeout=DEFAULT_SENTINEL,
feature_code=None,
lang=None,
find_nearby_type='findNearbyPlaceName',
):
"""
Return an address by location point.
.. versionadded:: 1.2.0
:param query: The coordinates for which you wish to obtain the
closest human-readable addresses.
:type query: :class:`geopy.point.Point`, list or tuple of ``(latitude,
longitude)``, or string as ``"%(latitude)s, %(longitude)s"``.
:param bool exactly_one: Return one result or a list of results, if
available.
.. versionchanged:: 1.14.0
Default value for ``exactly_one`` was ``False``, which differs
from the conventional default across geopy. Please always pass
this argument explicitly, otherwise you would get a warning.
In geopy 2.0 the default value will become ``True``.
:param int timeout: Time, in seconds, to wait for the geocoding service
to respond before raising a :class:`geopy.exc.GeocoderTimedOut`
exception. Set this only if you wish to override, on this call
only, the value set during the geocoder's initialization.
:param str feature_code: A GeoNames feature code
.. versionadded:: 1.18.0
:param str lang: language of the returned ``name`` element (the pseudo
language code 'local' will return it in local language)
Full list of supported languages can be found here:
https://www.geonames.org/countries/
.. versionadded:: 1.18.0
:param str find_nearby_type: A flag to switch between different
GeoNames API endpoints. The default value is ``findNearbyPlaceName``
which returns the closest populated place. Another currently
implemented option is ``findNearby`` which returns
the closest toponym for the lat/lng query.
.. versionadded:: 1.18.0
:rtype: ``None``, :class:`geopy.location.Location` or a list of them, if
``exactly_one=False``.
"""
if exactly_one is DEFAULT_SENTINEL:
warnings.warn('%s.reverse: default value for `exactly_one` '
'argument will become True in geopy 2.0. '
'Specify `exactly_one=False` as the argument '
'explicitly to get rid of this warning.' % type(self).__name__,
DeprecationWarning, stacklevel=2)
exactly_one = False
try:
lat, lng = self._coerce_point_to_string(query).split(',')
except ValueError:
raise ValueError("Must be a coordinate pair or Point")
if find_nearby_type == 'findNearbyPlaceName': # default
if feature_code:
raise ValueError(
"find_nearby_type=findNearbyPlaceName doesn't support "
"the `feature_code` param"
)
params = self._reverse_find_nearby_place_name_params(
lat=lat,
lng=lng,
lang=lang,
)
url = "?".join((self.api_reverse, urlencode(params)))
elif find_nearby_type == 'findNearby':
if lang:
raise ValueError(
"find_nearby_type=findNearby doesn't support the `lang` param"
)
params = self._reverse_find_nearby_params(
lat=lat,
lng=lng,
feature_code=feature_code,
)
url = "?".join((self.api_reverse_nearby, urlencode(params)))
else:
raise GeocoderQueryError(
'`%s` find_nearby_type is not supported by geopy' % find_nearby_type
)
logger.debug("%s.reverse: %s", self.__class__.__name__, url)
return self._parse_json(
self._call_geocoder(url, timeout=timeout),
exactly_one
) | Return an address by location point.
.. versionadded:: 1.2.0
:param query: The coordinates for which you wish to obtain the
closest human-readable addresses.
:type query: :class:`geopy.point.Point`, list or tuple of ``(latitude,
longitude)``, or string as ``"%(latitude)s, %(longitude)s"``.
:param bool exactly_one: Return one result or a list of results, if
available.
.. versionchanged:: 1.14.0
Default value for ``exactly_one`` was ``False``, which differs
from the conventional default across geopy. Please always pass
this argument explicitly, otherwise you would get a warning.
In geopy 2.0 the default value will become ``True``.
:param int timeout: Time, in seconds, to wait for the geocoding service
to respond before raising a :class:`geopy.exc.GeocoderTimedOut`
exception. Set this only if you wish to override, on this call
only, the value set during the geocoder's initialization.
:param str feature_code: A GeoNames feature code
.. versionadded:: 1.18.0
:param str lang: language of the returned ``name`` element (the pseudo
language code 'local' will return it in local language)
Full list of supported languages can be found here:
https://www.geonames.org/countries/
.. versionadded:: 1.18.0
:param str find_nearby_type: A flag to switch between different
GeoNames API endpoints. The default value is ``findNearbyPlaceName``
which returns the closest populated place. Another currently
implemented option is ``findNearby`` which returns
the closest toponym for the lat/lng query.
.. versionadded:: 1.18.0
:rtype: ``None``, :class:`geopy.location.Location` or a list of them, if
``exactly_one=False``. | Below is the instruction that describes the task:
### Input:
Return an address by location point.
.. versionadded:: 1.2.0
:param query: The coordinates for which you wish to obtain the
closest human-readable addresses.
:type query: :class:`geopy.point.Point`, list or tuple of ``(latitude,
longitude)``, or string as ``"%(latitude)s, %(longitude)s"``.
:param bool exactly_one: Return one result or a list of results, if
available.
.. versionchanged:: 1.14.0
Default value for ``exactly_one`` was ``False``, which differs
from the conventional default across geopy. Please always pass
this argument explicitly, otherwise you would get a warning.
In geopy 2.0 the default value will become ``True``.
:param int timeout: Time, in seconds, to wait for the geocoding service
to respond before raising a :class:`geopy.exc.GeocoderTimedOut`
exception. Set this only if you wish to override, on this call
only, the value set during the geocoder's initialization.
:param str feature_code: A GeoNames feature code
.. versionadded:: 1.18.0
:param str lang: language of the returned ``name`` element (the pseudo
language code 'local' will return it in local language)
Full list of supported languages can be found here:
https://www.geonames.org/countries/
.. versionadded:: 1.18.0
:param str find_nearby_type: A flag to switch between different
GeoNames API endpoints. The default value is ``findNearbyPlaceName``
which returns the closest populated place. Another currently
implemented option is ``findNearby`` which returns
the closest toponym for the lat/lng query.
.. versionadded:: 1.18.0
:rtype: ``None``, :class:`geopy.location.Location` or a list of them, if
``exactly_one=False``.
### Response:
def reverse(
self,
query,
exactly_one=DEFAULT_SENTINEL,
timeout=DEFAULT_SENTINEL,
feature_code=None,
lang=None,
find_nearby_type='findNearbyPlaceName',
):
"""
Return an address by location point.
.. versionadded:: 1.2.0
:param query: The coordinates for which you wish to obtain the
closest human-readable addresses.
:type query: :class:`geopy.point.Point`, list or tuple of ``(latitude,
longitude)``, or string as ``"%(latitude)s, %(longitude)s"``.
:param bool exactly_one: Return one result or a list of results, if
available.
.. versionchanged:: 1.14.0
Default value for ``exactly_one`` was ``False``, which differs
from the conventional default across geopy. Please always pass
this argument explicitly, otherwise you would get a warning.
In geopy 2.0 the default value will become ``True``.
:param int timeout: Time, in seconds, to wait for the geocoding service
to respond before raising a :class:`geopy.exc.GeocoderTimedOut`
exception. Set this only if you wish to override, on this call
only, the value set during the geocoder's initialization.
:param str feature_code: A GeoNames feature code
.. versionadded:: 1.18.0
:param str lang: language of the returned ``name`` element (the pseudo
language code 'local' will return it in local language)
Full list of supported languages can be found here:
https://www.geonames.org/countries/
.. versionadded:: 1.18.0
:param str find_nearby_type: A flag to switch between different
GeoNames API endpoints. The default value is ``findNearbyPlaceName``
which returns the closest populated place. Another currently
implemented option is ``findNearby`` which returns
the closest toponym for the lat/lng query.
.. versionadded:: 1.18.0
:rtype: ``None``, :class:`geopy.location.Location` or a list of them, if
``exactly_one=False``.
"""
if exactly_one is DEFAULT_SENTINEL:
warnings.warn('%s.reverse: default value for `exactly_one` '
'argument will become True in geopy 2.0. '
'Specify `exactly_one=False` as the argument '
'explicitly to get rid of this warning.' % type(self).__name__,
DeprecationWarning, stacklevel=2)
exactly_one = False
try:
lat, lng = self._coerce_point_to_string(query).split(',')
except ValueError:
raise ValueError("Must be a coordinate pair or Point")
if find_nearby_type == 'findNearbyPlaceName': # default
if feature_code:
raise ValueError(
"find_nearby_type=findNearbyPlaceName doesn't support "
"the `feature_code` param"
)
params = self._reverse_find_nearby_place_name_params(
lat=lat,
lng=lng,
lang=lang,
)
url = "?".join((self.api_reverse, urlencode(params)))
elif find_nearby_type == 'findNearby':
if lang:
raise ValueError(
"find_nearby_type=findNearby doesn't support the `lang` param"
)
params = self._reverse_find_nearby_params(
lat=lat,
lng=lng,
feature_code=feature_code,
)
url = "?".join((self.api_reverse_nearby, urlencode(params)))
else:
raise GeocoderQueryError(
'`%s` find_nearby_type is not supported by geopy' % find_nearby_type
)
logger.debug("%s.reverse: %s", self.__class__.__name__, url)
return self._parse_json(
self._call_geocoder(url, timeout=timeout),
exactly_one
) |
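The `self._coerce_point_to_string(query).split(',')` step in `reverse` expects a `"lat,lng"` string and raises `ValueError` when the input cannot be split into exactly two parts. A hedged sketch of that coercion for tuples and strings — the real geopy helper also accepts `geopy.point.Point` objects, which this illustration omits:

```python
def coerce_point_to_string(query):
    """Return "lat,lng" for a (lat, lng) pair or a "lat, lng" string."""
    if isinstance(query, (tuple, list)):
        return '%s,%s' % (query[0], query[1])
    return ','.join(part.strip() for part in str(query).split(','))

def split_query(query):
    """Split a coordinate query into (lat, lng), as reverse() does."""
    try:
        lat, lng = coerce_point_to_string(query).split(',')
    except ValueError:
        # raised when unpacking fails, i.e. not exactly two components
        raise ValueError("Must be a coordinate pair or Point")
    return lat, lng
```

A lone value such as `"52.5"` yields only one component, so the tuple unpack fails and the friendlier `ValueError` message is raised instead.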
def check_auth(self):
"""Check authentication/authorization of client"""
# access permissions
if self.auth is not None:
return self.auth(self.request)
return self.public_readble, self.public_writable | Check authentication/authorization of client | Below is the instruction that describes the task:
### Input:
Check authentication/authorization of client
### Response:
def check_auth(self):
"""Check authentication/authorization of client"""
# access permissions
if self.auth is not None:
return self.auth(self.request)
return self.public_readble, self.public_writable |
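`check_auth` returns a `(readable, writable)` pair, delegating to a pluggable `auth` callback when one is configured and falling back to static defaults otherwise. A minimal sketch of that pattern — the `Handler` class, its attribute names, and defaults are illustrative assumptions, not the original class:

```python
class Handler:
    """Illustrative holder for an optional auth callback and static defaults."""
    def __init__(self, auth=None, public_readable=True, public_writable=False):
        self.auth = auth                       # callable(request) -> (r, w), or None
        self.public_readable = public_readable
        self.public_writable = public_writable
        self.request = None                    # would be the current request

    def check_auth(self):
        """Check authentication/authorization of client."""
        if self.auth is not None:
            return self.auth(self.request)     # delegate to the callback
        return self.public_readable, self.public_writable
```

The callback form lets callers plug in per-request logic (tokens, ACLs) without subclassing, while the tuple fallback keeps the common anonymous case declarative.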
def displayText(self, value, blank='', joiner=', '):
"""
Returns the display text for the value associated with
the inputted text. This will result in a comma separated
list of labels for the value, or the blank text provided if
no text is found.
:param value | <variant>
blank | <str>
joiner | <str>
:return <str>
"""
if value is None:
return ''
labels = []
for key, my_value in sorted(self.items(), key=lambda x: x[1]):
if value & my_value:
labels.append(self._labels.get(my_value, text.pretty(key)))
return joiner.join(labels) or blank | Returns the display text for the value associated with
the inputted text. This will result in a comma separated
list of labels for the value, or the blank text provided if
no text is found.
:param value | <variant>
blank | <str>
joiner | <str>
:return <str> | Below is the instruction that describes the task:
### Input:
Returns the display text for the value associated with
the inputted text. This will result in a comma separated
list of labels for the value, or the blank text provided if
no text is found.
:param value | <variant>
blank | <str>
joiner | <str>
:return <str>
### Response:
def displayText(self, value, blank='', joiner=', '):
"""
Returns the display text for the value associated with
the inputted text. This will result in a comma separated
list of labels for the value, or the blank text provided if
no text is found.
:param value | <variant>
blank | <str>
joiner | <str>
:return <str>
"""
if value is None:
return ''
labels = []
for key, my_value in sorted(self.items(), key=lambda x: x[1]):
if value & my_value:
labels.append(self._labels.get(my_value, text.pretty(key)))
return joiner.join(labels) or blank |
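`displayText` treats the stored values as a bitmask: it iterates the flags in ascending value order and collects a label for every flag whose bit is set in `value`. A standalone sketch with a plain dict in place of the enum-like container, and `str.title()` standing in for the `text.pretty` helper:

```python
def display_text(flags, value, labels=None, blank='', joiner=', '):
    """Join labels for every flag whose bit is set in `value`.

    flags: mapping of name -> power-of-two value.
    labels: optional mapping of flag value -> custom label.
    """
    if value is None:
        return ''
    labels = labels or {}
    out = []
    for key, flag in sorted(flags.items(), key=lambda kv: kv[1]):
        if value & flag:                          # bitwise test, as in the original
            out.append(labels.get(flag, key.title()))
    return joiner.join(out) or blank
```

With `flags = {'read': 1, 'write': 2, 'exec': 4}`, a value of `3` sets the first two bits, so both labels appear; a value of `0` matches nothing and falls through to `blank`.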
def _generate_html(data, out):
'''
Generate report data as HTML
'''
print('<html>', file=out)
print('<body>', file=out)
_generate_html_table(data, out, 0)
print('</body>', file=out)
print('</html>', file=out) | Generate report data as HTML | Below is the instruction that describes the task:
### Input:
Generate report data as HTML
### Response:
def _generate_html(data, out):
'''
Generate report data as HTML
'''
print('<html>', file=out)
print('<body>', file=out)
_generate_html_table(data, out, 0)
print('</body>', file=out)
print('</html>', file=out) |
def device_add_rule(self, direction, action, src, dst, target=None):
"""Adds a tuntap device rule.
To be used in a vassal.
:param str|unicode direction: Direction:
* in
* out.
:param str|unicode action: Action:
* allow
* deny
* route
* gateway.
:param str|unicode src: Source/mask.
:param str|unicode dst: Destination/mask.
:param str|unicode target: Depends on action.
* Route / Gateway: Accept addr:port
"""
value = [direction, src, dst, action]
if target:
value.append(target)
self._set_aliased('device-rule', ' '.join(value), multi=True)
return self | Adds a tuntap device rule.
To be used in a vassal.
:param str|unicode direction: Direction:
* in
* out.
:param str|unicode action: Action:
* allow
* deny
* route
* gateway.
:param str|unicode src: Source/mask.
:param str|unicode dst: Destination/mask.
:param str|unicode target: Depends on action.
* Route / Gateway: Accept addr:port | Below is the instruction that describes the task:
### Input:
Adds a tuntap device rule.
To be used in a vassal.
:param str|unicode direction: Direction:
* in
* out.
:param str|unicode action: Action:
* allow
* deny
* route
* gateway.
:param str|unicode src: Source/mask.
:param str|unicode dst: Destination/mask.
:param str|unicode target: Depends on action.
* Route / Gateway: Accept addr:port
### Response:
def device_add_rule(self, direction, action, src, dst, target=None):
"""Adds a tuntap device rule.
To be used in a vassal.
:param str|unicode direction: Direction:
* in
* out.
:param str|unicode action: Action:
* allow
* deny
* route
* gateway.
:param str|unicode src: Source/mask.
:param str|unicode dst: Destination/mask.
:param str|unicode target: Depends on action.
* Route / Gateway: Accept addr:port
"""
value = [direction, src, dst, action]
if target:
value.append(target)
self._set_aliased('device-rule', ' '.join(value), multi=True)
return self |
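The rule string in `device_add_rule` is assembled in a fixed `direction src dst action [target]` order before being handed to the `device-rule` option. A standalone sketch of just the string assembly, without the uWSGI option plumbing:

```python
def build_device_rule(direction, action, src, dst, target=None):
    """Return the space-joined tuntap rule string used by device-rule."""
    value = [direction, src, dst, action]
    if target:
        value.append(target)   # only route/gateway actions carry a target
    return ' '.join(value)
```

Note the argument order of the function (`direction, action, src, dst`) differs from the order in the emitted string (`direction src dst action`), which is easy to trip over when reading the rule back.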