def lock_machine(self, session, lock_type):
"""Locks the machine for the given session to enable the caller
to make changes to the machine or start the VM or control
VM execution.
There are two ways to lock a machine for such uses:
If you want to make changes to the machine settings,
you must obtain an exclusive write lock on the machine
by setting @a lockType to @c Write.
This will only succeed if no other process has locked
the machine to prevent conflicting changes. Only after
    an exclusive write lock has been obtained using this method can one
    change all VM settings or execute the VM in the process
    space of the session object. (Note that the latter is only of
interest if you actually want to write a new front-end for
virtual machines; but this API gets called internally by
the existing front-ends such as VBoxHeadless and the VirtualBox
GUI to acquire a write lock on the machine that they are running.)
On success, write-locking the machine for a session creates
a second copy of the IMachine object. It is this second object
upon which changes can be made; in VirtualBox terminology, the
second copy is "mutable". It is only this second, mutable machine
object upon which you can call methods that change the
machine state. After having called this method, you can
obtain this second, mutable machine object using the
:py:func:`ISession.machine` attribute.
If you only want to check the machine state or control
machine execution without actually changing machine
settings (e.g. to get access to VM statistics or take
a snapshot or save the machine state), then set the
@a lockType argument to @c Shared.
If no other session has obtained a lock, you will obtain an
exclusive write lock as described above. However, if another
session has already obtained such a lock, then a link to that
existing session will be established which allows you
to control that existing session.
To find out which type of lock was obtained, you can
    inspect :py:func:`ISession.type_p`, which will have been
set to either @c WriteLock or @c Shared.
In either case, you can get access to the :py:class:`IConsole`
object which controls VM execution.
Also in all of the above cases, one must always call
:py:func:`ISession.unlock_machine` to release the lock on the machine, or
the machine's state will eventually be set to "Aborted".
    To change settings on a machine, the following sequence is typically
    performed:
    1. Call this method to obtain an exclusive write lock for the current session.
    2. Obtain a mutable IMachine object from :py:func:`ISession.machine`.
    3. Change the settings of the machine by invoking IMachine methods.
    4. Call :py:func:`IMachine.save_settings`.
    5. Release the write lock by calling :py:func:`ISession.unlock_machine`.
in session of type :class:`ISession`
Session object for which the machine will be locked.
in lock_type of type :class:`LockType`
If set to @c Write, then attempt to acquire an exclusive write lock or fail.
If set to @c Shared, then either acquire an exclusive write lock or establish
a link to an existing session.
raises :class:`OleErrorUnexpected`
Virtual machine not registered.
raises :class:`OleErrorAccessdenied`
Process not started by
raises :class:`VBoxErrorInvalidObjectState`
Session already open or being opened.
raises :class:`VBoxErrorVmError`
Failed to assign machine to session.
"""
if not isinstance(session, ISession):
raise TypeError("session can only be an instance of type ISession")
if not isinstance(lock_type, LockType):
raise TypeError("lock_type can only be an instance of type LockType")
self._call("lockMachine",
in_p=[session, lock_type])
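def _example_change_vm_settings():
    # A minimal usage sketch, assuming the pyvbox bindings (the "virtualbox"
    # package) are installed and a VM named "my_vm" exists; both are assumptions.
    import virtualbox
    vbox = virtualbox.VirtualBox()
    machine = vbox.find_machine("my_vm")
    session = virtualbox.Session()
    machine.lock_machine(session, virtualbox.library.LockType.write)
    try:
        mutable = session.machine  # the second, mutable IMachine copy
        mutable.save_settings()    # persist settings changed through `mutable`
    finally:
        session.unlock_machine()   # otherwise the machine ends up "Aborted"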
def _open(self, archive):
"""Open RAR archive file."""
try:
handle = unrarlib.RAROpenArchiveEx(ctypes.byref(archive))
except unrarlib.UnrarException:
raise BadRarFile("Invalid RAR file.")
return handle
def split_transfer_kwargs(kwargs, skip=None):
"""
Takes keyword arguments *kwargs*, splits them into two separate dictionaries depending on their
content, and returns them in a tuple. The first one will contain arguments related to potential
file transfer operations (e.g. ``"cache"`` or ``"retries"``), while the second one will contain
all remaining arguments. This function is used internally to decide which arguments to pass to
target formatters. *skip* can be a list of argument keys that are ignored.
"""
skip = make_list(skip) if skip else []
transfer_kwargs = {
name: kwargs.pop(name)
for name in ["cache", "prefer_cache", "retries", "retry_delay"]
if name in kwargs and name not in skip
}
return transfer_kwargs, kwargs
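def _example_split_transfer_kwargs():
    # Short usage sketch. Note that the input dict is mutated in place via pop(),
    # so the second element of the returned tuple is the same dict object.
    kwargs = {"cache": True, "retries": 2, "fmt": "root"}
    transfer_kwargs, remaining = split_transfer_kwargs(kwargs)
    # transfer_kwargs == {"cache": True, "retries": 2}; remaining == {"fmt": "root"}
    return transfer_kwargs, remaining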
def hrv_parameters(data, sample_rate, signal=False, in_seconds=False):
"""
-----
Brief
-----
Function for extracting HRV parameters from time and frequency domains.
-----------
Description
-----------
    ECG signals require specific processing due to their cyclic nature. For example, under
    similar conditions the RR intervals are expected to be similar, which would mean that the
    heart rate variability (HRV) is constant.
    In this function the tachogram of the input ECG signal is computed, which describes the
    variability of the heart rate as the time difference between consecutive R peaks. Different
    features may then be extracted from the tachogram, such as the maximum, minimum and average RR interval.
This function extracts a wide range of features related to the HRV and returns them as a dictionary.
----------
Parameters
----------
data : list
ECG signal or R peak list. When the input is a raw signal the input flag signal should be
True.
sample_rate : int
Sampling frequency.
signal : boolean
If True, then the data argument contains the set of the ECG acquired samples.
in_seconds : boolean
If the R peaks list defined as the input argument "data" contains the sample numbers where
the R peaks occur, then in_seconds needs to be False.
Returns
-------
out : dict
Dictionary with HRV parameters values, with keys:
MaxRR : Maximum RR interval
MinRR : Minimum RR interval
AvgRR : Average RR interval
MaxBPM : Maximum RR interval in BPM
MinBPM : Minimum RR interval in BPM
AvgBPM : Average RR interval in BPM
SDNN : Standard deviation of the tachogram
    SD1 : Square root of half of the squared standard deviation of the differentiated tachogram
    SD2 : Square root of twice the squared SDNN minus the squared SD1
    SD1/SD2 : Quotient between SD1 and SD2
NN20 : Number of consecutive heartbeats with a difference larger than 20 ms
pNN20 : Relative number of consecutive heartbeats with a difference larger than 20 ms
NN50 : Number of consecutive heartbeats with a difference larger than 50 ms
pNN50 : Relative number of consecutive heartbeats with a difference larger than 50 ms
ULF_Power : Power of the spectrum between 0 and 0.003 Hz
VLF_Power : Power of the spectrum between 0.003 and 0.04 Hz
LF_Power : Power of the spectrum between 0.04 and 0.15 Hz
HF_Power : Power of the spectrum between 0.15 and 0.40 Hz
LF_HF_Ratio : Quotient between the values of LF_Power and HF_Power
Total_Power : Power of the whole spectrum
"""
out_dict = {}
# Generation of tachogram.
tachogram_data, tachogram_time = tachogram(data, sample_rate, signal=signal,
in_seconds=in_seconds, out_seconds=True)
# Ectopy Removal.
tachogram_data_nn = remove_ectopy(tachogram_data, tachogram_time)[0]
# Determination of heart rate in BPM.
# bpm_data = (1 / numpy.array(tachogram_data_nn)) * 60
# ================================== Time Parameters ==========================================
# Maximum, Minimum and Average RR Interval.
out_dict["MaxRR"] = numpy.max(tachogram_data_nn)
out_dict["MinRR"] = numpy.min(tachogram_data_nn)
out_dict["AvgRR"] = numpy.average(tachogram_data_nn)
# Maximum, Minimum and Average Heart Rate.
max_hr = 1 / out_dict["MinRR"] # Cycles per second.
out_dict["MaxBPM"] = max_hr * 60 # BPM
min_hr = 1 / out_dict["MaxRR"] # Cycles per second.
out_dict["MinBPM"] = min_hr * 60 # BPM
    avg_hr = 1 / out_dict["AvgRR"]  # Cycles per second.
out_dict["AvgBPM"] = avg_hr * 60 # BPM
# SDNN.
out_dict["SDNN"] = numpy.std(tachogram_data_nn)
# ================================ Poincaré Parameters ========================================
# Auxiliary Structures.
tachogram_diff = numpy.diff(tachogram_data)
sdsd = numpy.std(tachogram_diff)
# Poincaré Parameters.
out_dict["SD1"] = numpy.sqrt(0.5 * numpy.power(sdsd, 2))
out_dict["SD2"] = numpy.sqrt(2 * numpy.power(out_dict["SDNN"], 2) -
numpy.power(out_dict["SD1"], 2))
out_dict["SD1/SD2"] = out_dict["SD1"] / out_dict["SD2"]
# ============================= Additional Parameters =========================================
tachogram_diff_abs = numpy.fabs(tachogram_diff)
# Number of RR intervals that have a difference in duration, from the previous one, of at least
# 20 ms.
out_dict["NN20"] = sum(1 for i in tachogram_diff_abs if i > 0.02)
out_dict["pNN20"] = int(float(out_dict["NN20"]) / len(tachogram_diff_abs) * 100) # % value.
# Number of RR intervals that have a difference in duration, from the previous one, of at least
# 50 ms.
out_dict["NN50"] = sum(1 for i in tachogram_diff_abs if i > 0.05)
out_dict["pNN50"] = int(float(out_dict["NN50"]) / len(tachogram_diff_abs) * 100) # % value.
# =============================== Frequency Parameters ========================================
# Auxiliary Structures.
freqs, power_spect = psd(tachogram_time, tachogram_data) # Power spectrum.
# Frequency Parameters.
freq_bands = {"ulf_band": [0.00, 0.003], "vlf_band": [0.003, 0.04], "lf_band": [0.04, 0.15],
"hf_band": [0.15, 0.40]}
power_band = {}
total_power = 0
band_keys = freq_bands.keys()
for band in band_keys:
freq_band = freq_bands[band]
freq_samples_inside_band = [freq for freq in freqs if freq_band[0] <= freq <= freq_band[1]]
power_samples_inside_band = [power_val for power_val, freq in zip(power_spect, freqs) if
freq_band[0] <= freq <= freq_band[1]]
power = numpy.round(integr.simps(power_samples_inside_band, freq_samples_inside_band), 5)
# Storage of power inside band.
power_band[band] = {}
power_band[band]["Power Band"] = power
power_band[band]["Freqs"] = freq_samples_inside_band
power_band[band]["Power"] = power_samples_inside_band
# Total power update.
total_power = total_power + power
out_dict["ULF_Power"] = power_band["ulf_band"]["Power Band"]
out_dict["VLF_Power"] = power_band["vlf_band"]["Power Band"]
out_dict["LF_Power"] = power_band["lf_band"]["Power Band"]
out_dict["HF_Power"] = power_band["hf_band"]["Power Band"]
out_dict["LF_HF_Ratio"] = power_band["lf_band"]["Power Band"] / power_band["hf_band"]["Power Band"]
out_dict["Total_Power"] = total_power
return out_dict
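def _example_hrv_parameters():
    # Hedged usage sketch; `tachogram`, `remove_ectopy` and `psd` are assumed to be
    # this module's helpers. The R-peak instants (in seconds) below are hypothetical.
    r_peaks = [0.80, 1.62, 2.41, 3.23, 4.02, 4.84, 5.61, 6.40, 7.22, 8.01]
    params = hrv_parameters(r_peaks, sample_rate=1000, signal=False, in_seconds=True)
    print(params["AvgBPM"], params["SDNN"], params["LF_HF_Ratio"])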
def set_module_options(self, module):
"""
Set universal module options to be interpreted by i3bar
https://i3wm.org/i3status/manpage.html#_universal_module_options
"""
self.i3bar_module_options = {}
self.i3bar_gaps_module_options = {}
self.py3status_module_options = {}
fn = self._py3_wrapper.get_config_attribute
def make_quotes(options):
x = ["`{}`".format(x) for x in options]
if len(x) > 2:
x = [", ".join(x[:-1]), x[-1]]
return " or ".join(x)
# i3bar
min_width = fn(self.module_full_name, "min_width")
if not hasattr(min_width, "none_setting"):
if not isinstance(min_width, int):
err = "Invalid `min_width` attribute, should be an int. "
err += "Got `{}`.".format(min_width)
raise TypeError(err)
self.i3bar_module_options["min_width"] = min_width
align = fn(self.module_full_name, "align")
if not hasattr(align, "none_setting"):
if align not in POSITIONS:
err = "Invalid `align` attribute, should be "
err += make_quotes(POSITIONS)
err += ". Got `{}`.".format(align)
raise ValueError(err)
self.i3bar_module_options["align"] = align
separator = fn(self.module_full_name, "separator")
if not hasattr(separator, "none_setting"):
if not isinstance(separator, bool):
err = "Invalid `separator` attribute, should be a boolean. "
err += "Got `{}`.".format(separator)
raise TypeError(err)
self.i3bar_module_options["separator"] = separator
separator_block_width = fn(self.module_full_name, "separator_block_width")
if not hasattr(separator_block_width, "none_setting"):
if not isinstance(separator_block_width, int):
err = "Invalid `separator_block_width` attribute, "
err += "should be an int. "
err += "Got `{}`.".format(separator_block_width)
raise TypeError(err)
self.i3bar_module_options["separator_block_width"] = separator_block_width
# i3bar_gaps
background = fn(self.module_full_name, "background")
if not hasattr(background, "none_setting"):
color = self.module_class.py3._get_color(background)
if not color:
err = "Invalid `background` attribute should be a color. "
err += "Got `{}`.".format(background)
raise ValueError(err)
self.i3bar_gaps_module_options["background"] = color
border = fn(self.module_full_name, "border")
if not hasattr(border, "none_setting"):
color = self.module_class.py3._get_color(border)
if not color:
err = "Invalid `border` attribute, should be a color. "
err += "Got `{}`.".format(border)
raise ValueError(err)
self.i3bar_gaps_module_options["border"] = color
borders = ["top", "right", "bottom", "left"]
for name in ["border_" + x for x in borders]:
param = fn(self.module_full_name, name)
if hasattr(param, "none_setting"):
param = 1
elif not isinstance(param, int):
err = "Invalid `{}` attribute, ".format(name)
err += "should be an int. "
err += "Got `{}`.".format(param)
raise TypeError(err)
self.i3bar_gaps_module_options[name] = param
# py3status
min_length = fn(self.module_full_name, "min_length")
if not hasattr(min_length, "none_setting"):
if not isinstance(min_length, int):
err = "Invalid `min_length` attribute, should be an int. "
err += "Got `{}`.".format(min_length)
raise TypeError(err)
self.py3status_module_options["min_length"] = min_length
position = fn(self.module_full_name, "position")
if not hasattr(position, "none_setting"):
if position not in POSITIONS:
err = "Invalid `position` attribute, should be "
err += make_quotes(POSITIONS)
err += ". Got `{}`.".format(position)
raise ValueError(err)
self.py3status_module_options["position"] = position
# i3bar, py3status
markup = fn(self.module_full_name, "markup")
if not hasattr(markup, "none_setting"):
if markup not in MARKUP_LANGUAGES:
err = "Invalid `markup` attribute, should be "
err += make_quotes(MARKUP_LANGUAGES)
err += ". Got `{}`.".format(markup)
raise ValueError(err)
self.i3bar_module_options["markup"] = markup
self.py3status_module_options["markup"] = markup
def create_buffer(self, ignore_unsupported=False):
"""
Create this tree's TreeBuffer
"""
bufferdict = OrderedDict()
for branch in self.iterbranches():
# only include activated branches
if not self.GetBranchStatus(branch.GetName()):
continue
if not BaseTree.branch_is_supported(branch):
log.warning(
"ignore unsupported branch `{0}`".format(branch.GetName()))
continue
bufferdict[branch.GetName()] = Tree.branch_type(branch)
self.set_buffer(TreeBuffer(
bufferdict,
ignore_unsupported=ignore_unsupported))
def present(name, mediatype, **kwargs):
'''
    Creates a new mediatype.
NOTE: This function accepts all standard mediatype properties: keyword argument names differ depending on your
zabbix version, see:
https://www.zabbix.com/documentation/3.0/manual/api/reference/host/object#host_inventory
:param name: name of the mediatype
:param _connection_user: Optional - zabbix user (can also be set in opts or pillar, see module's docstring)
:param _connection_password: Optional - zabbix password (can also be set in opts or pillar, see module's docstring)
:param _connection_url: Optional - url of zabbix frontend (can also be set in opts, pillar, see module's docstring)
.. code-block:: yaml
make_new_mediatype:
zabbix_mediatype.present:
- name: 'Email'
- mediatype: 0
- smtp_server: smtp.example.com
            - smtp_helo: zabbix.example.com
- smtp_email: zabbix@example.com
'''
connection_args = {}
if '_connection_user' in kwargs:
connection_args['_connection_user'] = kwargs['_connection_user']
if '_connection_password' in kwargs:
connection_args['_connection_password'] = kwargs['_connection_password']
if '_connection_url' in kwargs:
connection_args['_connection_url'] = kwargs['_connection_url']
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
# Comment and change messages
comment_mediatype_created = 'Mediatype {0} created.'.format(name)
comment_mediatype_updated = 'Mediatype {0} updated.'.format(name)
comment_mediatype_notcreated = 'Unable to create mediatype: {0}. '.format(name)
comment_mediatype_exists = 'Mediatype {0} already exists.'.format(name)
changes_mediatype_created = {name: {'old': 'Mediatype {0} does not exist.'.format(name),
'new': 'Mediatype {0} created.'.format(name),
}
}
    # Zabbix API expects script parameters as a string of arguments separated by newline characters
if 'exec_params' in kwargs:
if isinstance(kwargs['exec_params'], list):
kwargs['exec_params'] = '\n'.join(kwargs['exec_params'])+'\n'
else:
kwargs['exec_params'] = six.text_type(kwargs['exec_params'])+'\n'
mediatype_exists = __salt__['zabbix.mediatype_get'](name, **connection_args)
if mediatype_exists:
mediatypeobj = mediatype_exists[0]
mediatypeid = int(mediatypeobj['mediatypeid'])
update_email = False
update_email_port = False
update_email_security = False
update_email_verify_peer = False
update_email_verify_host = False
update_email_auth = False
update_script = False
update_script_params = False
update_sms = False
update_jabber = False
update_eztext = False
update_status = False
if int(mediatype) == 0 and 'smtp_server' in kwargs and 'smtp_helo' in kwargs and 'smtp_email' in kwargs:
if (int(mediatype) != int(mediatypeobj['type']) or
kwargs['smtp_server'] != mediatypeobj['smtp_server'] or
kwargs['smtp_email'] != mediatypeobj['smtp_email'] or
kwargs['smtp_helo'] != mediatypeobj['smtp_helo']):
update_email = True
if int(mediatype) == 0 and 'smtp_port' in kwargs:
if int(kwargs['smtp_port']) != int(mediatypeobj['smtp_port']):
update_email_port = True
if int(mediatype) == 0 and 'smtp_security' in kwargs:
if int(kwargs['smtp_security']) != int(mediatypeobj['smtp_security']):
update_email_security = True
if int(mediatype) == 0 and 'smtp_verify_peer' in kwargs:
if int(kwargs['smtp_verify_peer']) != int(mediatypeobj['smtp_verify_peer']):
update_email_verify_peer = True
if int(mediatype) == 0 and 'smtp_verify_host' in kwargs:
if int(kwargs['smtp_verify_host']) != int(mediatypeobj['smtp_verify_host']):
update_email_verify_host = True
if int(mediatype) == 0 and 'smtp_authentication' in kwargs and 'username' in kwargs and 'passwd' in kwargs:
if (int(kwargs['smtp_authentication']) != int(mediatypeobj['smtp_authentication']) or
kwargs['username'] != mediatypeobj['username'] or
kwargs['passwd'] != mediatypeobj['passwd']):
update_email_auth = True
if int(mediatype) == 1 and 'exec_path' in kwargs:
if (int(mediatype) != int(mediatypeobj['type']) or
kwargs['exec_path'] != mediatypeobj['exec_path']):
update_script = True
if int(mediatype) == 1 and 'exec_params' in kwargs:
if kwargs['exec_params'] != mediatypeobj['exec_params']:
update_script_params = True
if int(mediatype) == 2 and 'gsm_modem' in kwargs:
if (int(mediatype) != int(mediatypeobj['type']) or
kwargs['gsm_modem'] != mediatypeobj['gsm_modem']):
update_sms = True
if int(mediatype) == 3 and 'username' in kwargs and 'passwd' in kwargs:
if (int(mediatype) != int(mediatypeobj['type']) or
kwargs['username'] != mediatypeobj['username'] or
kwargs['passwd'] != mediatypeobj['passwd']):
update_jabber = True
if int(mediatype) == 100 and 'username' in kwargs and 'passwd' in kwargs and 'exec_path' in kwargs:
if (int(mediatype) != int(mediatypeobj['type']) or
kwargs['username'] != mediatypeobj['username'] or
kwargs['passwd'] != mediatypeobj['passwd'] or
kwargs['exec_path'] != mediatypeobj['exec_path']):
update_eztext = True
if 'status' in kwargs:
if int(kwargs['status']) != int(mediatypeobj['status']):
update_status = True
# Dry run, test=true mode
if __opts__['test']:
if mediatype_exists:
if update_status:
ret['result'] = None
ret['comment'] = comment_mediatype_updated
else:
ret['result'] = True
ret['comment'] = comment_mediatype_exists
else:
ret['result'] = None
ret['comment'] = comment_mediatype_created
return ret
error = []
if mediatype_exists:
if (update_email or update_email_port or update_email_security or
update_email_verify_peer or update_email_verify_host or update_email_auth or
update_script or update_script_params or update_sms or
update_jabber or update_eztext or update_status):
ret['result'] = True
ret['comment'] = comment_mediatype_updated
if update_email:
updated_email = __salt__['zabbix.mediatype_update'](mediatypeid,
type=mediatype,
smtp_server=kwargs['smtp_server'],
smtp_helo=kwargs['smtp_helo'],
smtp_email=kwargs['smtp_email'],
**connection_args)
if 'error' in updated_email:
error.append(updated_email['error'])
else:
ret['changes']['smtp_server'] = kwargs['smtp_server']
ret['changes']['smtp_helo'] = kwargs['smtp_helo']
ret['changes']['smtp_email'] = kwargs['smtp_email']
if update_email_port:
updated_email_port = __salt__['zabbix.mediatype_update'](mediatypeid,
smtp_port=kwargs['smtp_port'],
**connection_args)
if 'error' in updated_email_port:
error.append(updated_email_port['error'])
else:
ret['changes']['smtp_port'] = kwargs['smtp_port']
if update_email_security:
updated_email_security = __salt__['zabbix.mediatype_update'](mediatypeid,
smtp_security=kwargs['smtp_security'],
**connection_args)
if 'error' in updated_email_security:
error.append(updated_email_security['error'])
else:
ret['changes']['smtp_security'] = kwargs['smtp_security']
if update_email_verify_peer:
updated_email_verify_peer = __salt__['zabbix.mediatype_update'](mediatypeid,
smtp_verify_peer=kwargs['smtp_verify_peer'],
**connection_args)
if 'error' in updated_email_verify_peer:
error.append(updated_email_verify_peer['error'])
else:
ret['changes']['smtp_verify_peer'] = kwargs['smtp_verify_peer']
if update_email_verify_host:
updated_email_verify_host = __salt__['zabbix.mediatype_update'](mediatypeid,
smtp_verify_host=kwargs['smtp_verify_host'],
**connection_args)
if 'error' in updated_email_verify_host:
error.append(updated_email_verify_host['error'])
else:
ret['changes']['smtp_verify_host'] = kwargs['smtp_verify_host']
if update_email_auth:
updated_email_auth = __salt__['zabbix.mediatype_update'](mediatypeid,
username=kwargs['username'],
passwd=kwargs['passwd'],
smtp_authentication=kwargs['smtp_authentication'],
**connection_args)
if 'error' in updated_email_auth:
error.append(updated_email_auth['error'])
else:
ret['changes']['smtp_authentication'] = kwargs['smtp_authentication']
ret['changes']['username'] = kwargs['username']
if update_script:
updated_script = __salt__['zabbix.mediatype_update'](mediatypeid,
type=mediatype,
exec_path=kwargs['exec_path'],
**connection_args)
if 'error' in updated_script:
error.append(updated_script['error'])
else:
ret['changes']['exec_path'] = kwargs['exec_path']
if update_script_params:
updated_script_params = __salt__['zabbix.mediatype_update'](mediatypeid,
exec_params=kwargs['exec_params'],
**connection_args)
if 'error' in updated_script_params:
                    error.append(updated_script_params['error'])
else:
ret['changes']['exec_params'] = kwargs['exec_params']
if update_sms:
updated_sms = __salt__['zabbix.mediatype_update'](mediatypeid,
type=mediatype,
gsm_modem=kwargs['gsm_modem'],
**connection_args)
if 'error' in updated_sms:
error.append(updated_sms['error'])
else:
ret['changes']['gsm_modem'] = kwargs['gsm_modem']
if update_jabber:
updated_jabber = __salt__['zabbix.mediatype_update'](mediatypeid,
type=mediatype,
username=kwargs['username'],
passwd=kwargs['passwd'],
**connection_args)
if 'error' in updated_jabber:
error.append(updated_jabber['error'])
else:
ret['changes']['username'] = kwargs['username']
if update_eztext:
updated_eztext = __salt__['zabbix.mediatype_update'](mediatypeid,
type=mediatype,
username=kwargs['username'],
passwd=kwargs['passwd'],
exec_path=kwargs['exec_path'],
**connection_args)
if 'error' in updated_eztext:
error.append(updated_eztext['error'])
else:
ret['changes']['username'] = kwargs['username']
ret['changes']['exec_path'] = kwargs['exec_path']
if update_status:
updated_status = __salt__['zabbix.mediatype_update'](mediatypeid,
status=kwargs['status'],
**connection_args)
if 'error' in updated_status:
error.append(updated_status['error'])
else:
ret['changes']['status'] = kwargs['status']
else:
ret['result'] = True
ret['comment'] = comment_mediatype_exists
else:
mediatype_create = __salt__['zabbix.mediatype_create'](name, mediatype, **kwargs)
if 'error' not in mediatype_create:
ret['result'] = True
ret['comment'] = comment_mediatype_created
ret['changes'] = changes_mediatype_created
else:
ret['result'] = False
ret['comment'] = comment_mediatype_notcreated + six.text_type(mediatype_create['error'])
# error detected
if error:
ret['changes'] = {}
ret['result'] = False
ret['comment'] = six.text_type(error)
return ret
def _get_block(self):
"""Just read a single block from your current location in _fh"""
    b = self._fh.read(4)  # get the block-size bytes
    if not b:
        raise StopIteration
    block_size = struct.unpack('<i', b)[0]
    return self._fh.read(block_size)
def _process_comment(self, comment: praw.models.Comment):
"""
Process a reddit comment. Calls `func_comment(*func_comment_args)`.
:param comment: Comment to process
"""
self._func_comment(comment, *self._func_comment_args)
def register_pickle():
    """The fastest serialization method, but restricts
    you to python clients."""
    try:
        import cPickle as pickle  # Python 2
    except ImportError:
        import pickle  # Python 3 has no cPickle
    registry.register('pickle', pickle.dumps, pickle.loads,
                      content_type='application/x-python-serialize',
                      content_encoding='binary')
def revoke_token(access_token):
"""
Instructs the API to delete this access token and associated refresh token
"""
response = requests.post(
get_revoke_token_url(),
data={
'token': access_token,
'client_id': settings.API_CLIENT_ID,
'client_secret': settings.API_CLIENT_SECRET,
},
timeout=15
)
return response.status_code == 200
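def _example_revoke_token():
    # Hedged usage sketch; settings.API_CLIENT_ID/API_CLIENT_SECRET and
    # get_revoke_token_url() are assumed to be configured elsewhere in this
    # module, and the token value is hypothetical.
    if revoke_token("0123456789abcdef"):
        print("Access token and associated refresh token revoked.")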
def iter_paths(self, pathnames=None, mapfunc=None):
"""
    Special iteration on paths. Yields 2-tuples of path and items. If an expanded
    path doesn't match any files, a 2-tuple of the path and `None` is yielded.
    :param pathnames: Iterable with a set of pathnames. If `None`, all the stored \
    pathnames are used.
    :param mapfunc: A mapping function for building the effective path from various \
    wildcards (eg. time spec wildcards).
    :return: Yields 2-tuples.
"""
pathnames = pathnames or self._pathnames
if self.recursive and not pathnames:
pathnames = ['.']
elif not pathnames:
yield []
if mapfunc is not None:
for mapped_paths in map(mapfunc, pathnames):
for path in mapped_paths:
if self.recursive and (os.path.isdir(path) or os.path.islink(path)):
for t in os.walk(path, followlinks=self.follow_symlinks):
for filename, values in self.iglob(os.path.join(t[0], '*')):
yield filename, values
else:
empty_glob = True
for filename, values in self.iglob(path):
yield filename, values
empty_glob = False
if empty_glob:
yield path, None
else:
for path in pathnames:
if self.recursive and (os.path.isdir(path) or os.path.islink(path)):
for t in os.walk(path, followlinks=self.follow_symlinks):
for filename, values in self.iglob(os.path.join(t[0], '*')):
yield filename, values
else:
empty_glob = True
for filename, values in self.iglob(path):
yield filename, values
empty_glob = False
if empty_glob:
yield path, None
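def _example_iter_paths(walker):
    # Hedged usage sketch; `walker` is assumed to be an instance of the class this
    # method belongs to, with `recursive`, `follow_symlinks` and `iglob` configured.
    for path, items in walker.iter_paths(["logs/*.log"]):
        if items is None:
            print("no files matched:", path)
        else:
            print(path, items)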
def set_value(self, eid, val, idx='*'):
"""
Set the content of an xml element marked with the matching eid attribute.
"""
if eid in self.__element_ids:
elems = self.__element_ids[eid]
if type(val) in SEQ_TYPES:
idx = 0
if idx == '*':
for elem in elems:
self.__set_value(eid, elem, val, idx)
elif idx < len(elems):
self.__set_value(eid, elems[idx], val, idx)
def search_users(self, user_name):
"""Searches for users via provisioning API.
If you get back an error 999, then the provisioning API is not enabled.
:param user_name: name of user to be searched for
:returns: list of usernames that contain user_name as substring
:raises: HTTPResponseError in case an HTTP error status was returned
"""
action_path = 'users'
if user_name:
action_path += '?search={}'.format(user_name)
res = self._make_ocs_request(
'GET',
self.OCS_SERVICE_CLOUD,
action_path
)
if res.status_code == 200:
tree = ET.fromstring(res.content)
users = [x.text for x in tree.findall('data/users/element')]
return users
raise HTTPResponseError(res)
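def _example_search_users(client):
    # Hedged usage sketch; `client` is assumed to be an authenticated instance of
    # this class, already logged in to the server.
    return client.search_users("ali")  # e.g. ["alice", "aliyah"]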
def get_ssh_dir(config, username):
"""Get the users ssh dir"""
sshdir = config.get('ssh_config_dir')
if not sshdir:
sshdir = os.path.expanduser('~/.ssh')
if not os.path.isdir(sshdir):
pwentry = getpwnam(username)
sshdir = os.path.join(pwentry.pw_dir, '.ssh')
if not os.path.isdir(sshdir):
sshdir = None
return sshdir
def get_tab_tip(self, filename, is_modified=None, is_readonly=None):
"""Return tab menu title"""
text = u"%s — %s"
text = self.__modified_readonly_title(text,
is_modified, is_readonly)
if self.tempfile_path is not None\
and filename == encoding.to_unicode_from_fs(self.tempfile_path):
temp_file_str = to_text_string(_("Temporary file"))
return text % (temp_file_str, self.tempfile_path)
else:
return text % (osp.basename(filename), osp.dirname(filename))
def match(self, url):
'''
Try to find if url matches against any of the schemes within this
endpoint.
Args:
url: The url to match against each scheme
Returns:
True if a matching scheme was found for the url, False otherwise
'''
try:
urlSchemes = self._urlSchemes.itervalues() # Python 2
except AttributeError:
urlSchemes = self._urlSchemes.values() # Python 3
for urlScheme in urlSchemes:
if urlScheme.match(url):
return True
return False
def store_sample(self, sample_bytes, filename, type_tag):
"""Store a sample into the datastore.
Args:
filename: Name of the file.
sample_bytes: Actual bytes of sample.
type_tag: Type of sample ('exe','pcap','pdf','json','swf', or ...)
Returns:
md5 digest of the sample.
"""
# Temp sanity check for old clients
if len(filename) > 1000:
        print('switched bytes/filename... %s %s' % (sample_bytes[:100], filename[:100]))
exit(1)
sample_info = {}
# Compute the MD5 hash
sample_info['md5'] = hashlib.md5(sample_bytes).hexdigest()
# Check if sample already exists
if self.has_sample(sample_info['md5']):
return sample_info['md5']
# Run the periodic operations
self.periodic_ops()
# Check if we need to expire anything
self.expire_data()
# Okay start populating the sample for adding to the data store
# Filename, length, import time and type_tag
sample_info['filename'] = filename
sample_info['length'] = len(sample_bytes)
sample_info['import_time'] = datetime.datetime.utcnow()
sample_info['type_tag'] = type_tag
# Random customer for now
import random
sample_info['customer'] = random.choice(['Mega Corp', 'Huge Inc', 'BearTron', 'Dorseys Mom'])
# Push the file into the MongoDB GridFS
sample_info['__grid_fs'] = self.gridfs_handle.put(sample_bytes)
self.database[self.sample_collection].insert(sample_info)
# Print info
    print('Sample Storage: %.2f out of %.2f MB' % (self.sample_storage_size(), self.samples_cap))
# Return the sample md5
return sample_info['md5']
def kill_given_tasks(self, task_ids, scale=False, force=None):
"""Kill a list of given tasks.
:param list[str] task_ids: tasks to kill
:param bool scale: if true, scale down the app by the number of tasks killed
:param bool force: if true, ignore any current running deployments
:return: True on success
:rtype: bool
"""
params = {'scale': scale}
if force is not None:
params['force'] = force
data = json.dumps({"ids": task_ids})
response = self._do_request(
'POST', '/v2/tasks/delete', params=params, data=data)
return response == 200
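def _example_kill_given_tasks(client):
    # Hedged usage sketch; `client` is assumed to be an instance of this Marathon
    # client class and the task id below is hypothetical.
    return client.kill_given_tasks(
        ["myapp.6b1e8ab9-5ff5-11e5-b2a7-0242ac110009"],
        scale=True,  # shrink the app by the number of tasks killed
        force=True,  # ignore any deployment currently in flight
    )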
def get_member(thing_obj, member_string):
"""Get a member from an object by (string) name"""
mems = {x[0]: x[1] for x in inspect.getmembers(thing_obj)}
if member_string in mems:
return mems[member_string]
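def _example_get_member():
    # Runnable sketch: look up a method by name; a miss implicitly returns None.
    class Greeter:
        def hello(self):
            return "hi"
    assert get_member(Greeter(), "hello")() == "hi"
    assert get_member(Greeter(), "missing") is None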
def _as_rescale(self, get, targetbitdepth):
"""Helper used by :meth:`asRGB8` and :meth:`asRGBA8`."""
    width, height, pixels, meta = get()
maxval = 2**meta['bitdepth'] - 1
targetmaxval = 2**targetbitdepth - 1
factor = float(targetmaxval) / float(maxval)
meta['bitdepth'] = targetbitdepth
def iterscale():
for row in pixels:
yield map(lambda x: int(round(x*factor)), row)
if maxval == targetmaxval:
return width, height, pixels, meta
else:
return width, height, iterscale(), meta
async def handle_client_hello(self, client_addr, _: ClientHello):
""" Handle an ClientHello message. Send available containers to the client """
self._logger.info("New client connected %s", client_addr)
self._registered_clients.add(client_addr)
await self.send_container_update_to_client([client_addr])
def confirm_delete_view(self, request, object_id):
"""
Instantiates a class-based view to provide 'delete confirmation'
functionality for the assigned model, or redirect to Wagtail's delete
confirmation view if the assigned model extends 'Page'. The view class
used can be overridden by changing the 'confirm_delete_view_class'
attribute.
"""
kwargs = {'model_admin': self, 'object_id': object_id}
view_class = self.confirm_delete_view_class
return view_class.as_view(**kwargs)(request)
def get_resource_allocation(self):
"""Get the :py:class:`ResourceAllocation` element tance.
Returns:
ResourceAllocation: Resource allocation used to access information about the resource where this PE is running.
.. versionadded:: 1.9
"""
if hasattr(self, 'resourceAllocation'):
return ResourceAllocation(self.rest_client.make_request(self.resourceAllocation), self.rest_client)
def check_df(
state, index, missing_msg=None, not_instance_msg=None, expand_msg=None
):
"""Check whether a DataFrame was defined and it is the right type
``check_df()`` is a combo of ``check_object()`` and ``is_instance()`` that checks whether the specified object exists
and whether the specified object is pandas DataFrame.
You can continue checking the data frame with ``check_keys()`` function to 'zoom in' on a particular column in the pandas DataFrame:
Args:
index (str): Name of the data frame to zoom in on.
missing_msg (str): See ``check_object()``.
not_instance_msg (str): See ``is_instance()``.
expand_msg (str): If specified, this overrides any messages that are prepended by previous SCT chains.
:Example:
Suppose you want the student to create a DataFrame ``my_df`` with two columns.
The column ``a`` should contain the numbers 1 to 3,
while the contents of column ``b`` can be anything: ::
import pandas as pd
my_df = pd.DataFrame({"a": [1, 2, 3], "b": ["a", "n", "y"]})
The following SCT would robustly check that: ::
Ex().check_df("my_df").multi(
check_keys("a").has_equal_value(),
check_keys("b")
)
- ``check_df()`` checks if ``my_df`` exists (``check_object()`` behind the scenes) and is a DataFrame (``is_instance()``)
- ``check_keys("a")`` zooms in on the column ``a`` of the data frame, and ``has_equal_value()`` checks if the columns correspond between student and solution process.
- ``check_keys("b")`` zooms in on hte column ``b`` of the data frame, but there's no 'equality checking' happening
The following submissions would pass the SCT above: ::
my_df = pd.DataFrame({"a": [1, 1 + 1, 3], "b": ["a", "l", "l"]})
my_df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
"""
child = check_object(
state,
index,
missing_msg=missing_msg,
expand_msg=expand_msg,
typestr="pandas DataFrame",
)
is_instance(child, pd.DataFrame, not_instance_msg=not_instance_msg)
return child
def unpack_rsp(cls, rsp_pb):
"""Convert from PLS response to user response"""
if rsp_pb.retType != RET_OK:
return RET_ERROR, rsp_pb.retMsg, None
raw_position_list = rsp_pb.s2c.positionList
position_list = [{
"code": merge_trd_mkt_stock_str(rsp_pb.s2c.header.trdMarket, position.code),
"stock_name": position.name,
"qty": position.qty,
"can_sell_qty": position.canSellQty,
"cost_price": position.costPrice if position.HasField('costPrice') else 0,
"cost_price_valid": 1 if position.HasField('costPrice') else 0,
"market_val": position.val,
"nominal_price": position.price,
"pl_ratio": 100 * position.plRatio if position.HasField('plRatio') else 0,
"pl_ratio_valid": 1 if position.HasField('plRatio') else 0,
"pl_val": position.plVal if position.HasField('plVal') else 0,
"pl_val_valid": 1 if position.HasField('plVal') else 0,
"today_buy_qty": position.td_buyQty if position.HasField('td_buyQty') else 0,
"today_buy_val": position.td_buyVal if position.HasField('td_buyVal') else 0,
"today_pl_val": position.td_plVal if position.HasField('td_plVal') else 0,
"today_sell_qty": position.td_sellQty if position.HasField('td_sellQty') else 0,
"today_sell_val": position.td_sellVal if position.HasField('td_sellVal') else 0,
"position_side": TRADE.REV_POSITION_SIDE_MAP[position.positionSide]
if position.positionSide in TRADE.REV_POSITION_SIDE_MAP else PositionSide.NONE,
} for position in raw_position_list]
return RET_OK, "", position_list
def norm_coefs(self):
"""Multiply all coefficients by the same factor, so that their sum
becomes one."""
sum_coefs = self.sum_coefs
self.ar_coefs /= sum_coefs
self.ma_coefs /= sum_coefs
def apply_shortcuts(self):
"""Apply shortcuts settings to all widgets/plugins"""
toberemoved = []
for index, (qobject, context, name,
add_sc_to_tip) in enumerate(self.shortcut_data):
        keyseq = QKeySequence(get_shortcut(context, name))
try:
if isinstance(qobject, QAction):
if sys.platform == 'darwin' and \
qobject._shown_shortcut == 'missing':
qobject._shown_shortcut = keyseq
else:
qobject.setShortcut(keyseq)
if add_sc_to_tip:
add_shortcut_to_tooltip(qobject, context, name)
elif isinstance(qobject, QShortcut):
qobject.setKey(keyseq)
except RuntimeError:
# Object has been deleted
toberemoved.append(index)
for index in sorted(toberemoved, reverse=True):
self.shortcut_data.pop(index)
def to_dict(self):
"""
Encode the token as a dictionary suitable for JSON serialization.
"""
d = {
'id': self.id,
'start': self.start,
'end': self.end,
'form': self.form
}
if self.lnk is not None:
cfrom, cto = self.lnk.data
d['from'] = cfrom
d['to'] = cto
# d['paths'] = self.paths
if self.surface is not None:
d['surface'] = self.surface
# d['ipos'] = self.ipos
# d['lrules'] = self.lrules
if self.pos:
d['tags'] = [ps[0] for ps in self.pos]
d['probabilities'] = [ps[1] for ps in self.pos]
return d
def tobinary(series, path, prefix='series', overwrite=False, credentials=None):
"""
Writes out data to binary format.
Parameters
----------
series : Series
The data to write
path : string path or URI to directory to be created
Output files will be written underneath path.
Directory will be created as a result of this call.
prefix : str, optional, default = 'series'
String prefix for files.
overwrite : bool
If true, path and all its contents will be deleted and
        recreated as part of this call.
"""
from six import BytesIO
from thunder.utils import check_path
from thunder.writers import get_parallel_writer
if not overwrite:
check_path(path, credentials=credentials)
overwrite = True
def tobuffer(kv):
firstkey = None
buf = BytesIO()
for k, v in kv:
if firstkey is None:
firstkey = k
buf.write(v.tostring())
val = buf.getvalue()
buf.close()
if firstkey is None:
return iter([])
else:
label = prefix + '-' + getlabel(firstkey) + ".bin"
return iter([(label, val)])
writer = get_parallel_writer(path)(path, overwrite=overwrite, credentials=credentials)
if series.mode == 'spark':
binary = series.values.tordd().sortByKey().mapPartitions(tobuffer)
binary.foreach(writer.write)
else:
basedims = [series.shape[d] for d in series.baseaxes]
def split(k):
ind = unravel_index(k, basedims)
return ind, series.values[ind]
buf = tobuffer([split(i) for i in range(prod(basedims))])
[writer.write(b) for b in buf]
shape = series.shape
dtype = series.dtype
write_config(path, shape=shape, dtype=dtype, overwrite=overwrite, credentials=credentials)
def build_encryption_materials_cache_key(partition, request):
"""Generates a cache key for an encrypt request.
:param bytes partition: Partition name for which to generate key
:param request: Request for which to generate key
:type request: aws_encryption_sdk.materials_managers.EncryptionMaterialsRequest
:returns: cache key
:rtype: bytes
"""
if request.algorithm is None:
_algorithm_info = b"\x00"
else:
_algorithm_info = b"\x01" + request.algorithm.id_as_bytes()
hasher = _new_cache_key_hasher()
_partition_hash = _partition_name_hash(hasher=hasher.copy(), partition_name=partition)
_ec_hash = _encryption_context_hash(hasher=hasher.copy(), encryption_context=request.encryption_context)
hasher.update(_partition_hash)
hasher.update(_algorithm_info)
hasher.update(_ec_hash)
return hasher.finalize()
def mod9710(iban):
"""
Calculates the MOD 97 10 of the passed IBAN as specified in ISO7064.
@method mod9710
@param {String} iban
@returns {Number}
"""
remainder = iban
block = None
while len(remainder) > 2:
block = remainder[:9]
remainder = str(int(block) % 97) + remainder[len(block):]
return int(remainder) % 97
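def _example_mod9710():
    # Worked check with the well-known sample IBAN "GB82 WEST 1234 5698 7654 32":
    # the first four characters are moved to the end and letters are mapped to
    # numbers (A=10 ... Z=35) before calling mod9710, as ISO 7064 prescribes.
    rearranged = "3214282912345698765432161182"
    return mod9710(rearranged)  # -> 1, i.e. the checksum marks the IBAN as valid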
def _evaluate(self,R,phi=0.,t=0.):
"""
NAME:
_evaluate
PURPOSE:
evaluate the potential at R,phi,t
INPUT:
R - Galactocentric cylindrical radius
phi - azimuth
t - time
OUTPUT:
Phi(R,phi,t)
HISTORY:
2017-10-16 - Written - Bovy (UofT)
"""
return 0.5*R*R*(1.+2./3.*R*numpy.sin(3.*phi))
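def _example_evaluate_check():
    # Quick numeric sanity check of Phi(R, phi) = 0.5*R^2*(1 + (2/3)*R*sin(3*phi)),
    # evaluated directly with NumPy (no Potential instance needed).
    import numpy
    R, phi = 1.0, numpy.pi / 6.0  # sin(3 * phi) == 1
    return 0.5 * R * R * (1. + 2. / 3. * R * numpy.sin(3. * phi))  # -> 5/6 ~= 0.8333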
def _drain(writer, ion_event):
"""Drain the writer of its pending write events.
Args:
writer (Coroutine): A writer co-routine.
ion_event (amazon.ion.core.IonEvent): The first event to apply to the writer.
Yields:
DataEvent: Yields each pending data event.
"""
result_event = _WRITE_EVENT_HAS_PENDING_EMPTY
while result_event.type is WriteEventType.HAS_PENDING:
result_event = writer.send(ion_event)
ion_event = None
yield result_event
def build_colormap(palette, info):
"""Create the colormap from the `raw_palette` and the valid_range."""
from trollimage.colormap import Colormap
if 'palette_meanings' in palette.attrs:
palette_indices = palette.attrs['palette_meanings']
else:
palette_indices = range(len(palette))
sqpalette = np.asanyarray(palette).squeeze() / 255.0
tups = [(val, tuple(tup))
for (val, tup) in zip(palette_indices, sqpalette)]
colormap = Colormap(*tups)
if 'palette_meanings' not in palette.attrs:
sf = info.get('scale_factor', np.array(1))
colormap.set_range(
*(np.array(info['valid_range']) * sf + info.get('add_offset', 0)))
return colormap, sqpalette
def nvrtcAddNameExpression(self, prog, name_expression):
"""
Notes the given name expression denoting a __global__ function or
function template instantiation.
"""
code = self._lib.nvrtcAddNameExpression(prog,
c_char_p(encode_str(name_expression)))
self._throw_on_error(code)
return
def get_post_agg(mconf):
"""
For a metric specified as `postagg` returns the
kind of post aggregation for pydruid.
"""
if mconf.get('type') == 'javascript':
return JavascriptPostAggregator(
name=mconf.get('name', ''),
field_names=mconf.get('fieldNames', []),
function=mconf.get('function', ''))
elif mconf.get('type') == 'quantile':
return Quantile(
mconf.get('name', ''),
mconf.get('probability', ''),
)
elif mconf.get('type') == 'quantiles':
return Quantiles(
mconf.get('name', ''),
mconf.get('probabilities', ''),
)
elif mconf.get('type') == 'fieldAccess':
return Field(mconf.get('name'))
elif mconf.get('type') == 'constant':
return Const(
mconf.get('value'),
output_name=mconf.get('name', ''),
)
elif mconf.get('type') == 'hyperUniqueCardinality':
return HyperUniqueCardinality(
mconf.get('name'),
)
elif mconf.get('type') == 'arithmetic':
return Postaggregator(
mconf.get('fn', '/'),
mconf.get('fields', []),
mconf.get('name', ''))
else:
return CustomPostAggregator(
mconf.get('name', ''),
mconf)
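def _example_get_post_agg():
    # Hedged illustration of a metric config this function accepts; the field
    # names are hypothetical. Returns a pydruid post-aggregator dividing two fields.
    mconf = {
        "type": "arithmetic",
        "name": "avg_price",
        "fn": "/",
        "fields": [
            {"type": "fieldAccess", "fieldName": "total_price"},
            {"type": "fieldAccess", "fieldName": "row_count"},
        ],
    }
    return get_post_agg(mconf)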
def stripped_photo_to_jpg(stripped):
"""
Adds the JPG header and footer to a stripped image.
Ported from https://github.com/telegramdesktop/tdesktop/blob/bec39d89e19670eb436dc794a8f20b657cb87c71/Telegram/SourceFiles/ui/image/image.cpp#L225
"""
if len(stripped) < 3 or stripped[0] != 1:
return stripped
header = bytearray(b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00\xff\xdb\x00C\x00(\x1c\x1e#\x1e\x19(#!#-+(0<dA<77<{X]Id\x91\x80\x99\x96\x8f\x80\x8c\x8a\xa0\xb4\xe6\xc3\xa0\xaa\xda\xad\x8a\x8c\xc8\xff\xcb\xda\xee\xf5\xff\xff\xff\x9b\xc1\xff\xff\xff\xfa\xff\xe6\xfd\xff\xf8\xff\xdb\x00C\x01+--<5<vAAv\xf8\xa5\x8c\xa5\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xf8\xff\xc0\x00\x11\x08\x00\x00\x00\x00\x03\x01"\x00\x02\x11\x01\x03\x11\x01\xff\xc4\x00\x1f\x00\x00\x01\x05\x01\x01\x01\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\xff\xc4\x00\xb5\x10\x00\x02\x01\x03\x03\x02\x04\x03\x05\x05\x04\x04\x00\x00\x01}\x01\x02\x03\x00\x04\x11\x05\x12!1A\x06\x13Qa\x07"q\x142\x81\x91\xa1\x08#B\xb1\xc1\x15R\xd1\xf0$3br\x82\t\n\x16\x17\x18\x19\x1a%&\'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz\x83\x84\x85\x86\x87\x88\x89\x8a\x92\x93\x94\x95\x96\x97\x98\x99\x9a\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xff\xc4\x00\x1f\x01\x00\x03\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\xff\xc4\x00\xb5\x11\x00\x02\x01\x02\x04\x04\x03\x04\x07\x05\x04\x04\x00\x01\x02w\x00\x01\x02\x03\x11\x04\x05!1\x06\x12AQ\x07aq\x13"2\x81\x08\x14B\x91\xa1\xb1\xc1\t#3R\xf0\x15br\xd1\n\x16$4\xe1%\xf1\x17\x18\x19\x1a&\'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x92\x93\x94\x95\x96\x97\x98\x99\x9a\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xff\xda\x00\x0c\x03\x01\x00\x02\x11\x03\x11\x00?\x00')
footer = b"\xff\xd9"
header[164] = stripped[1]
header[166] = stripped[2]
return bytes(header) + stripped[3:] + footer
|
Adds the JPG header and footer to a stripped image.
Ported from https://github.com/telegramdesktop/tdesktop/blob/bec39d89e19670eb436dc794a8f20b657cb87c71/Telegram/SourceFiles/ui/image/image.cpp#L225
|
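To see stripped_photo_to_jpg in action, a minimal sketch with a fabricated "stripped" payload: byte 0 is the format marker (must be 1), and bytes 1 and 2 are patched into the JFIF header as the image dimensions.

stripped = bytes([1, 40, 40]) + b'\x00' * 8  # marker byte, 40x40 dims, made-up body
jpg = stripped_photo_to_jpg(stripped)
assert jpg.startswith(b'\xff\xd8') and jpg.endswith(b'\xff\xd9')  # JPEG SOI/EOI markers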
def _conj(self, f):
"""Function returning the complex conjugate of a result."""
def f_conj(x, **kwargs):
result = np.asarray(f(x, **kwargs),
dtype=self.scalar_out_dtype)
return result.conj()
if is_real_dtype(self.out_dtype):
return f
else:
return self.element(f_conj)
|
Function returning the complex conjugate of a result.
|
def simulate_experiment(self, modelparams, expparams, repeat=1):
"""
Produces data according to the given model parameters and experimental
parameters, structured as a NumPy array.
:param np.ndarray modelparams: A shape ``(n_models, n_modelparams)``
array of model parameter vectors describing the hypotheses under
which data should be simulated.
:param np.ndarray expparams: A shape ``(n_experiments, )`` array of
experimental control settings, with ``dtype`` given by
:attr:`~qinfer.Model.expparams_dtype`, describing the
experiments whose outcomes should be simulated.
:param int repeat: How many times the specified experiment should
be repeated.
:rtype: np.ndarray
:return: A three-index tensor ``data[i, j, k]``, where ``i`` is the repetition,
``j`` indexes which vector of model parameters was used, and where
        ``k`` indexes which experimental parameters were used. If ``repeat == 1``,
``len(modelparams) == 1`` and ``len(expparams) == 1``, then a scalar
datum is returned instead.
"""
self._sim_count += modelparams.shape[0] * expparams.shape[0] * repeat
assert(self.are_expparam_dtypes_consistent(expparams))
|
Produces data according to the given model parameters and experimental
parameters, structured as a NumPy array.
:param np.ndarray modelparams: A shape ``(n_models, n_modelparams)``
array of model parameter vectors describing the hypotheses under
which data should be simulated.
:param np.ndarray expparams: A shape ``(n_experiments, )`` array of
experimental control settings, with ``dtype`` given by
:attr:`~qinfer.Model.expparams_dtype`, describing the
experiments whose outcomes should be simulated.
:param int repeat: How many times the specified experiment should
be repeated.
:rtype: np.ndarray
:return: A three-index tensor ``data[i, j, k]``, where ``i`` is the repetition,
``j`` indexes which vector of model parameters was used, and where
``k`` indexes which experimental parameters were used. If ``repeat == 1``,
``len(modelparams) == 1`` and ``len(expparams) == 1``, then a scalar
datum is returned instead.
|
def update_items(portal_type=None, uid=None, endpoint=None, **kw):
""" update items
1. If the uid is given, the user wants to update the object with the data
given in request body
2. If no uid is given, the user wants to update a bunch of objects.
       -> each record contains either a UID, path or parent_path + id
"""
# disable CSRF
req.disable_csrf_protection()
# the data to update
records = req.get_request_data()
# we have an uid -> try to get an object for it
obj = get_object_by_uid(uid)
if obj:
record = records[0] # ignore other records if we got an uid
obj = update_object_with_data(obj, record)
return make_items_for([obj], endpoint=endpoint)
# no uid -> go through the record items
results = []
for record in records:
obj = get_object_by_record(record)
# no object found for this record
if obj is None:
continue
# update the object with the given record data
obj = update_object_with_data(obj, record)
results.append(obj)
if not results:
fail(400, "No Objects could be updated")
return make_items_for(results, endpoint=endpoint)
|
update items
1. If the uid is given, the user wants to update the object with the data
given in request body
2. If no uid is given, the user wants to update a bunch of objects.
-> each record contains either a UID, path or parent_path + id
|
def get_pint_to_fortran_safe_units_mapping(inverse=False):
"""Get the mappings from Pint to Fortran safe units.
Fortran can't handle special characters like "^" or "/" in names, but we need
these in Pint. Conversely, Pint stores variables with spaces by default e.g. "Mt
CO2 / yr" but we don't want these in the input files as Fortran is likely to think
the whitespace is a delimiter.
Parameters
----------
inverse : bool
If True, return the inverse mappings i.e. Fortran safe to Pint mappings
Returns
-------
dict
Dictionary of mappings
"""
replacements = {"^": "super", "/": "per", " ": ""}
if inverse:
replacements = {v: k for k, v in replacements.items()}
# mapping nothing to something is obviously not going to work in the inverse
# hence remove
replacements.pop("")
return replacements
|
Get the mappings from Pint to Fortran safe units.
Fortran can't handle special characters like "^" or "/" in names, but we need
these in Pint. Conversely, Pint stores variables with spaces by default e.g. "Mt
CO2 / yr" but we don't want these in the input files as Fortran is likely to think
the whitespace is a delimiter.
Parameters
----------
inverse : bool
If True, return the inverse mappings i.e. Fortran safe to Pint mappings
Returns
-------
dict
Dictionary of mappings
|
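A short usage sketch: applying the returned mapping to sanitise a Pint unit string before writing it to a Fortran input file (the unit string is only an example).

replacements = get_pint_to_fortran_safe_units_mapping()
unit = 'Mt CO2 / yr'
for pint_chars, safe_chars in replacements.items():
    unit = unit.replace(pint_chars, safe_chars)
print(unit)  # -> 'MtCO2peryr'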
def simxEraseFile(clientID, fileName_serverSide, operationMode):
'''
Please have a look at the function description/documentation in the V-REP user manual
'''
if (sys.version_info[0] == 3) and (type(fileName_serverSide) is str):
fileName_serverSide=fileName_serverSide.encode('utf-8')
return c_EraseFile(clientID, fileName_serverSide, operationMode)
|
Please have a look at the function description/documentation in the V-REP user manual
|
def parse_analyzer_arguments(arguments):
"""
Parse string in format `function_1:param1=value:param2 function_2:param` into array of FunctionArguments
"""
rets = []
for argument in arguments:
args = argument.split(argument_splitter)
# The first one is the function name
func_name = args[0]
# The rest is the args
func_args = {}
for arg in args[1:]:
key, value = parse_arg(arg)
func_args[key] = value
rets.append(FunctionArguments(function=func_name, arguments=func_args))
return rets
|
Parse string in format `function_1:param1=value:param2 function_2:param` into array of FunctionArguments
|
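A hedged sketch of the expected behaviour, assuming argument_splitter is ':' and parse_arg splits 'key=value' strings; both are module-level helpers not shown here.

# Hypothetical input matching the documented format.
parsed = parse_analyzer_arguments(['function_1:param1=value'])
# -> [FunctionArguments(function='function_1', arguments={'param1': 'value'})]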
def update_resource(self, resource, underlined=None):
"""Update the cache for global names in `resource`"""
try:
pymodule = self.project.get_pymodule(resource)
modname = self._module_name(resource)
self._add_names(pymodule, modname, underlined)
except exceptions.ModuleSyntaxError:
pass
|
Update the cache for global names in `resource`
|
def trailing_input(self):
"""
Get the `MatchVariable` instance, representing trailing input, if there is any.
"Trailing input" is input at the end that does not match the grammar anymore, but
when this is removed from the end of the input, the input would be a valid string.
"""
slices = []
        # Find all regex groups for the name _INVALID_TRAILING_INPUT.
for r, re_match in self._re_matches:
for group_name, group_index in r.groupindex.items():
if group_name == _INVALID_TRAILING_INPUT:
slices.append(re_match.regs[group_index])
# Take the smallest part. (Smaller trailing text means that a larger input has
# been matched, so that is better.)
if slices:
slice = [max(i[0] for i in slices), max(i[1] for i in slices)]
value = self.string[slice[0]:slice[1]]
return MatchVariable('<trailing_input>', value, slice)
|
Get the `MatchVariable` instance, representing trailing input, if there is any.
"Trailing input" is input at the end that does not match the grammar anymore, but
when this is removed from the end of the input, the input would be a valid string.
|
def upload_async(self, remote_path, local_path, callback=None):
"""Uploads resource to remote path on WebDAV server asynchronously.
In case resource is directory it will upload all nested files and directories.
:param remote_path: the path for uploading resources on WebDAV server. Can be file and directory.
:param local_path: the path to local resource for uploading.
        :param callback: the callback which will be invoked when uploading is complete.
"""
target = (lambda: self.upload_sync(local_path=local_path, remote_path=remote_path, callback=callback))
threading.Thread(target=target).start()
|
Uploads resource to remote path on WebDAV server asynchronously.
In case resource is directory it will upload all nested files and directories.
:param remote_path: the path for uploading resources on WebDAV server. Can be file and directory.
:param local_path: the path to local resource for uploading.
:param callback: the callback which will be invoked when uploading is complete.
|
def annual_heating_design_day_996(self):
"""A design day object representing the annual 99.6% heating design day."""
        if self._winter_des_day_dict:
return DesignDay.from_ashrae_dict_heating(
self._winter_des_day_dict, self.location, False,
self._stand_press_at_elev)
else:
return None
|
A design day object representing the annual 99.6% heating design day.
|
def get_request_data():
"""
Get request data based on request.method
If method is GET or DELETE, get data from request.args
If method is POST, PATCH or PUT, get data from request.form or request.json
"""
method = request.method.lower()
if method in ["get", "delete"]:
return request.args
elif method in ["post", "put", "patch"]:
if request.mimetype == 'application/json':
try:
return request.get_json()
            except Exception:
abort(400, "InvalidData", "invalid json content")
else:
return request.form
else:
return None
|
Get request data based on request.method
If method is GET or DELETE, get data from request.args
If method is POST, PATCH or PUT, get data from request.form or request.json
|
def xstep_check(self, b):
r"""Check the minimisation of the Augmented Lagrangian with
respect to :math:`\mathbf{x}` by method `xstep` defined in
derived classes. This method should be called at the end of any
`xstep` method.
"""
if self.opt['LinSolveCheck']:
Zop = lambda x: sl.inner(self.Zf, x, axis=self.cri.axisM)
ZHop = lambda x: sl.inner(np.conj(self.Zf), x,
axis=self.cri.axisK)
ax = ZHop(Zop(self.Xf)) + self.Xf
self.xrrs = sl.rrs(ax, b)
else:
self.xrrs = None
|
r"""Check the minimisation of the Augmented Lagrangian with
respect to :math:`\mathbf{x}` by method `xstep` defined in
derived classes. This method should be called at the end of any
`xstep` method.
|
def declare_base(errorName=True):
    """Create an Exception class with a default message.
    :param errorName: boolean, True if you want the Exception name in the
    error message body.
    """
    if errorName:
class Base(Exception):
def __str__(self):
if len(self.args):
return "%s: %s" % (self.__class__.__name__, self.args[0])
else:
return "%s: %s" % (self.__class__.__name__, self.default)
else:
class Base(Exception):
def __str__(self):
if len(self.args):
return "%s" % self.args[0]
else:
return "%s" % self.default
return Base
|
Create an Exception class with a default message.
:param errorName: boolean, True if you want the Exception name in the
error message body.
|
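A usage sketch for declare_base: derive a concrete exception that sets a default message, then raise it without arguments.

NamedBase = declare_base(errorName=True)

class ConfigError(NamedBase):
    default = 'bad configuration'

try:
    raise ConfigError()
except ConfigError as exc:
    print(exc)  # -> 'ConfigError: bad configuration'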
def replaceWith(self, el):
"""
        Replace the values in this element with values from `el`.
        This is useful when you don't want to change all references to the object.
Args:
el (obj): :class:`HTMLElement` instance.
"""
self.childs = el.childs
self.params = el.params
self.endtag = el.endtag
self.openertag = el.openertag
self._tagname = el.getTagName()
self._element = el.tagToString()
self._istag = el.isTag()
self._isendtag = el.isEndTag()
self._iscomment = el.isComment()
self._isnonpairtag = el.isNonPairTag()
|
Replace the values in this element with values from `el`.
This is useful when you don't want to change all references to the object.
Args:
el (obj): :class:`HTMLElement` instance.
|
async def sysinfo(dev: Device):
"""Print out system information (version, MAC addrs)."""
click.echo(await dev.get_system_info())
click.echo(await dev.get_interface_information())
|
Print out system information (version, MAC addrs).
|
def _actuator_on_off(self, on_off, service_location_id, actuator_id,
duration=None):
"""
Turn actuator on or off
Parameters
----------
on_off : str
'on' or 'off'
service_location_id : int
actuator_id : int
duration : int, optional
300,900,1800 or 3600 , specifying the time in seconds the actuator
should be turned on. Any other value results in turning on for an
undetermined period of time.
Returns
-------
requests.Response
"""
url = urljoin(URLS['servicelocation'], service_location_id,
"actuator", actuator_id, on_off)
headers = {"Authorization": "Bearer {}".format(self.access_token)}
if duration is not None:
data = {"duration": duration}
else:
data = {}
r = requests.post(url, headers=headers, json=data)
r.raise_for_status()
return r
|
Turn actuator on or off
Parameters
----------
on_off : str
'on' or 'off'
service_location_id : int
actuator_id : int
duration : int, optional
300,900,1800 or 3600 , specifying the time in seconds the actuator
should be turned on. Any other value results in turning on for an
undetermined period of time.
Returns
-------
requests.Response
|
def fleets(self):
"""
:rtype: twilio.rest.preview.deployed_devices.fleet.FleetList
"""
if self._fleets is None:
self._fleets = FleetList(self)
return self._fleets
|
:rtype: twilio.rest.preview.deployed_devices.fleet.FleetList
|
def get_account_details(self, session=None, lightweight=None):
"""
        Returns the details relating to your account, including your discount
rate and Betfair point balance.
:param requests.session session: Requests session object
:param bool lightweight: If True will return dict not a resource
:rtype: resources.AccountDetails
"""
params = clean_locals(locals())
method = '%s%s' % (self.URI, 'getAccountDetails')
(response, elapsed_time) = self.request(method, params, session)
return self.process_response(response, resources.AccountDetails, elapsed_time, lightweight)
|
Returns the details relating to your account, including your discount
rate and Betfair point balance.
:param requests.session session: Requests session object
:param bool lightweight: If True will return dict not a resource
:rtype: resources.AccountDetails
|
def isTemporal(inferenceType):
""" Returns True if the inference type is 'temporal', i.e. requires a
temporal memory in the network.
"""
if InferenceType.__temporalInferenceTypes is None:
InferenceType.__temporalInferenceTypes = \
set([InferenceType.TemporalNextStep,
InferenceType.TemporalClassification,
InferenceType.TemporalAnomaly,
InferenceType.TemporalMultiStep,
InferenceType.NontemporalMultiStep])
return inferenceType in InferenceType.__temporalInferenceTypes
|
Returns True if the inference type is 'temporal', i.e. requires a
temporal memory in the network.
|
def add_reviewer(self, doc, reviewer):
"""Adds a reviewer to the SPDX Document.
        Reviewer is an entity created by an EntityBuilder.
Raises SPDXValueError if not a valid reviewer type.
"""
# Each reviewer marks the start of a new review object.
# FIXME: this state does not make sense
self.reset_reviews()
if validations.validate_reviewer(reviewer):
doc.add_review(review.Review(reviewer=reviewer))
return True
else:
raise SPDXValueError('Review::Reviewer')
|
Adds a reviewer to the SPDX Document.
Reviewer is an entity created by an EntityBuilder.
Raises SPDXValueError if not a valid reviewer type.
|
def many_until(these, term):
"""Consumes as many of these as it can until it term is encountered.
Returns a tuple of the list of these results and the term result
"""
results = []
while True:
stop, result = choice(_tag(True, term),
_tag(False, these))
if stop:
return results, result
else:
results.append(result)
|
Consumes as many of these as it can until term is encountered.
Returns a tuple of the list of these results and the term result
|
async def fetching_data(self, *_):
"""Get the latest data from met.no."""
try:
with async_timeout.timeout(10):
resp = await self._websession.get(self._api_url, params=self._urlparams)
if resp.status != 200:
_LOGGER.error('%s returned %s', self._api_url, resp.status)
return False
text = await resp.text()
except (asyncio.TimeoutError, aiohttp.ClientError) as err:
_LOGGER.error('%s returned %s', self._api_url, err)
return False
try:
self.data = xmltodict.parse(text)['weatherdata']
except (ExpatError, IndexError) as err:
_LOGGER.error('%s returned %s', resp.url, err)
return False
return True
|
Get the latest data from met.no.
|
def delete_old_tickets(**kwargs):
"""
Delete tickets if they are over 2 days old
kwargs = ['raw', 'signal', 'instance', 'sender', 'created']
"""
sender = kwargs.get('sender', None)
    now = datetime.now()
    # Subtract via timedelta so month/year boundaries are handled correctly;
    # datetime(now.year, now.month, now.day - 2) raises ValueError on the 1st or 2nd.
    # Assumes timedelta is imported from datetime alongside datetime.
    expire = datetime(now.year, now.month, now.day) - timedelta(days=2)
sender.objects.filter(created__lt=expire).delete()
|
Delete tickets if they are over 2 days old
kwargs = ['raw', 'signal', 'instance', 'sender', 'created']
|
def send_status_message(self, object_id, status):
"""Send a message to the `status_queue` to update a job's status.
Returns `True` if the message was sent, else `False`
Args:
object_id (`str`): ID of the job that was executed
status (:obj:`SchedulerStatus`): Status of the job
Returns:
`bool`
"""
try:
body = json.dumps({
'id': object_id,
'status': status
})
self.status_queue.send_message(
MessageBody=body,
MessageGroupId='job_status',
MessageDeduplicationId=get_hash((object_id, status))
)
return True
except Exception as ex:
print(ex)
return False
|
Send a message to the `status_queue` to update a job's status.
Returns `True` if the message was sent, else `False`
Args:
object_id (`str`): ID of the job that was executed
status (:obj:`SchedulerStatus`): Status of the job
Returns:
`bool`
|
def create_class(self, data, options=None, **kwargs):
"""Return instance of class based on Go data
Data keys handled here:
_type
Set the object class
consts, types, vars, funcs
Recurse into :py:meth:`create_class` to create child object
instances
:param data: dictionary data from godocjson output
"""
_type = kwargs.get("_type")
obj_map = dict((cls.type, cls) for cls in ALL_CLASSES)
try:
# Contextual type data from children recursion
if _type:
LOGGER.debug("Forcing Go Type %s" % _type)
cls = obj_map[_type]
else:
cls = obj_map[data["type"]]
except KeyError:
LOGGER.warning("Unknown Type: %s" % data)
else:
if cls.inverted_names and "names" in data:
# Handle types that have reversed names parameter
for name in data["names"]:
data_inv = {}
data_inv.update(data)
data_inv["name"] = name
if "names" in data_inv:
del data_inv["names"]
for obj in self.create_class(data_inv):
yield obj
else:
# Recurse for children
obj = cls(data, jinja_env=self.jinja_env)
for child_type in ["consts", "types", "vars", "funcs"]:
for child_data in data.get(child_type, []):
obj.children += list(
self.create_class(
child_data,
_type=child_type.replace("consts", "const")
.replace("types", "type")
.replace("vars", "variable")
.replace("funcs", "func"),
)
)
yield obj
|
Return instance of class based on Go data
Data keys handled here:
_type
Set the object class
consts, types, vars, funcs
Recurse into :py:meth:`create_class` to create child object
instances
:param data: dictionary data from godocjson output
|
def _update_tokens(self, write_token=True, override=None,
patch_skip=False):
"""Update tokens to the next available values."""
next_token = next(self.tokens)
patch_value = ''
patch_tokens = ''
if self.pfile and write_token:
token = override if override else self.token
patch_value += token
while next_token[0] in self.comment_tokens + whitespace:
if self.pfile:
if next_token[0] in self.comment_tokens:
while not next_token == '\n':
patch_tokens += next_token
next_token = next(self.tokens)
patch_tokens += next_token
# Several sections rely on StopIteration to terminate token search
# If that occurs, dump the patched tokens immediately
try:
next_token = next(self.tokens)
except StopIteration:
if not patch_skip or next_token in ('=', '(', '%'):
patch_tokens = patch_value + patch_tokens
if self.pfile:
self.pfile.write(patch_tokens)
raise
# Write patched values and whitespace + comments to file
if not patch_skip or next_token in ('=', '(', '%'):
patch_tokens = patch_value + patch_tokens
if self.pfile:
self.pfile.write(patch_tokens)
# Update tokens, ignoring padding
self.token, self.prior_token = next_token, self.token
|
Update tokens to the next available values.
|
def _upsert_persons(cursor, person_ids, lookup_func):
    Upserts user info into the database.
The model contains the user info as part of the role values.
"""
person_ids = list(set(person_ids)) # cleanse data
# Check for existing records to update.
cursor.execute("SELECT personid from persons where personid = ANY (%s)",
(person_ids,))
existing_person_ids = [x[0] for x in cursor.fetchall()]
new_person_ids = [p for p in person_ids if p not in existing_person_ids]
# Update existing records.
for person_id in existing_person_ids:
# TODO only update based on a delta against the 'updated' column.
person_info = lookup_func(person_id)
cursor.execute("""\
UPDATE persons
SET (personid, firstname, surname, fullname) =
( %(username)s, %(first_name)s, %(last_name)s,
%(full_name)s)
WHERE personid = %(username)s""", person_info)
# Insert new records.
# Email is an empty string because
# accounts no longer gives out user
# email info but a string datatype
# is still needed for legacy to
# properly process the persons table
for person_id in new_person_ids:
person_info = lookup_func(person_id)
cursor.execute("""\
INSERT INTO persons
(personid, firstname, surname, fullname, email)
VALUES
(%(username)s, %(first_name)s,
%(last_name)s, %(full_name)s, '')""", person_info)
|
Upserts user info into the database.
The model contains the user info as part of the role values.
|
def filter(self, cls, recursive=False):
"""Retrieves all descendants (including self) that are instances
of a given class.
Args:
cls (class): The class to use as a filter.
Kwargs:
recursive (bool): Whether to descend recursively down the tree.
"""
source = self.walk_preorder if recursive else self._children
return [
codeobj
for codeobj in source()
if isinstance(codeobj, cls)
]
|
Retrieves all descendants (including self) that are instances
of a given class.
Args:
cls (class): The class to use as a filter.
Kwargs:
recursive (bool): Whether to descend recursively down the tree.
|
def extract_labels(filename):
"""Extract the labels into a 1D uint8 numpy array [index]."""
with gzip.open(filename) as bytestream:
magic = _read32(bytestream)
if magic != 2049:
raise ValueError(
'Invalid magic number %d in MNIST label file: %s' %
(magic, filename))
num_items = _read32(bytestream)
buf = bytestream.read(num_items)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
return labels
|
Extract the labels into a 1D uint8 numpy array [index].
|
def doDup(self, WHAT={}, **params):
"""This function will perform the command -dup."""
if hasattr(WHAT, '_modified'):
for key, value in WHAT._modified():
            if key in WHAT.__new2old__:
self._addDBParam(WHAT.__new2old__[key].encode('utf-8'), value)
else:
self._addDBParam(key, value)
self._addDBParam('RECORDID', WHAT.RECORDID)
self._addDBParam('MODID', WHAT.MODID)
elif type(WHAT) == dict:
for key in WHAT:
self._addDBParam(key, WHAT[key])
else:
        raise FMError('Python Runtime: Object type (%s) given to function doDup as argument WHAT cannot be used.' % type(WHAT))
if self._layout == '':
        raise FMError('No layout was selected')
for key in params:
self._addDBParam(key, params[key])
if self._checkRecordID() == 0:
        raise FMError('RecordID is missing')
return self._doAction('-dup')
|
This function will perform the command -dup.
|
def whoami(*args, **kwargs):
"""
Prints information about the current user.
Assumes the user is already logged-in.
"""
user = client.whoami()
if user:
print_user(user)
else:
print('You are not logged-in.')
|
Prints information about the current user.
Assumes the user is already logged-in.
|
def write(self, pos, size, **kwargs):
"""
Writes some data, loaded from the state, into the file.
:param pos: The address to read the data to write from in memory
:param size: The requested size of the write
:return: The real length of the write
"""
if type(pos) is str:
raise TypeError("SimFileDescriptor.write takes an address and size. Did you mean write_data?")
# Find a reasonable concrete size for the load since we don't want to concretize anything
# This is copied from SimFile.read
# TODO: refactor into a generic concretization strategy?
if self.state.solver.symbolic(size):
try:
passed_max_size = self.state.solver.max(size, extra_constraints=(size < self.state.libc.max_packet_size,))
except SimSolverError:
passed_max_size = self.state.solver.min(size)
l.warning("Symbolic write size is too large for threshold - concretizing to min (%d)", passed_max_size)
self.state.solver.add(size == passed_max_size)
else:
passed_max_size = self.state.solver.eval(size)
if passed_max_size > 2**13:
l.warning("Program performing extremely large write")
data = self.state.memory.load(pos, passed_max_size)
return self.write_data(data, size, **kwargs)
|
Writes some data, loaded from the state, into the file.
:param pos: The address to read the data to write from in memory
:param size: The requested size of the write
:return: The real length of the write
|
def update(self):
'''
        Updates IRData.
'''
if self.hasproxy():
irD = IRData()
received = 0
data = self.proxy.getIRData()
irD.received = data.received
self.lock.acquire()
self.ir = irD
self.lock.release()
|
Updates IRData.
|
def _vmomentsurfaceMCIntegrand(vz,vR,vT,R,z,df,sigmaR1,gamma,sigmaz1,mvT,n,m,o):
"""Internal function that is the integrand for the vmomentsurface mass integration"""
return vR**n*vT**m*vz**o*df(R,vR*sigmaR1,vT*sigmaR1*gamma,z,vz*sigmaz1,use_physical=False)*numpy.exp(vR**2./2.+(vT-mvT)**2./2.+vz**2./2.)
|
Internal function that is the integrand for the vmomentsurface mass integration
|
def f_get_from_runs(self, name, include_default_run=True, use_indices=False,
fast_access=False, with_links = True,
shortcuts=True, max_depth=None, auto_load=False):
"""Searches for all occurrences of `name` in each run.
Generates an ordered dictionary with the run names or indices as keys and
found items as values.
Example:
>>> traj.f_get_from_runs(self, 'deep.universal_answer', use_indices=True, fast_access=True)
        OrderedDict([(0, 42), (1, 42), (2, 'fortytwo'), (3, 43)])
:param name:
String description of the item(s) to find.
Cannot be full names but the part of the names that are below
a `run_XXXXXXXXX` group.
:param include_default_run:
If results found under ``run_ALL`` should be accounted for every run or simply be
ignored.
:param use_indices:
If `True` the keys of the resulting dictionary are the run indices
(e.g. 0,1,2,3), otherwise the keys are run names (e.g. `run_00000000`,
`run_000000001`)
:param fast_access:
Whether to return parameter or result instances or the values handled by these.
:param with_links:
If links should be considered
:param shortcuts:
If shortcuts are allowed and the trajectory can *hop* over nodes in the
path.
:param max_depth:
        Maximum depth (relative to the start node) to which the search should progress in the tree.
`None` means no depth limit. Only relevant if `shortcuts` are allowed.
:param auto_load:
If data should be loaded from the storage service if it cannot be found in the
current trajectory tree. Auto-loading will load group and leaf nodes currently
not in memory and it will load data into empty leaves. Be aware that auto-loading
does not work with shortcuts.
:return:
Ordered dictionary with run names or indices as keys and found items as values.
Will only include runs where an item was actually found.
"""
result_dict = OrderedDict()
old_crun = self.v_crun
try:
if len(self._run_parent_groups) > 0:
for run_name in self.f_iter_runs():
# Iterate over all runs
value = None
already_found = False
for run_parent_group in self._run_parent_groups.values():
if run_name not in run_parent_group._children:
continue
try:
value = run_parent_group.f_get(run_name + '.' + name,
fast_access=False,
with_links=with_links,
shortcuts=shortcuts,
max_depth=max_depth,
auto_load=auto_load)
if already_found:
raise pex.NotUniqueNodeError('`%s` has been found several times '
'in one run.' % name)
else:
already_found = True
except (AttributeError, pex.DataNotInStorageError):
pass
if value is None and include_default_run:
for run_parent_group in self._run_parent_groups.values():
try:
value = run_parent_group.f_get(self.f_wildcard('$', -1) +
'.' + name,
fast_access=False,
with_links=with_links,
shortcuts=shortcuts,
max_depth=max_depth,
auto_load=auto_load)
if already_found:
raise pex.NotUniqueNodeError('`%s` has been found several '
'times in one run.' % name)
else:
already_found = True
except (AttributeError, pex.DataNotInStorageError):
pass
if value is not None:
if value.v_is_leaf:
value = self._nn_interface._apply_fast_access(value, fast_access)
if use_indices:
key = self.f_idx_to_run(run_name)
else:
key = run_name
result_dict[key] = value
return result_dict
finally:
self.v_crun = old_crun
|
Searches for all occurrences of `name` in each run.
Generates an ordered dictionary with the run names or indices as keys and
found items as values.
Example:
>>> traj.f_get_from_runs(self, 'deep.universal_answer', use_indices=True, fast_access=True)
OrderedDict([(0, 42), (1, 42), (2, 'fortytwo'), (3, 43)])
:param name:
String description of the item(s) to find.
Cannot be full names but the part of the names that are below
a `run_XXXXXXXXX` group.
:param include_default_run:
If results found under ``run_ALL`` should be accounted for every run or simply be
ignored.
:param use_indices:
If `True` the keys of the resulting dictionary are the run indices
(e.g. 0,1,2,3), otherwise the keys are run names (e.g. `run_00000000`,
`run_000000001`)
:param fast_access:
Whether to return parameter or result instances or the values handled by these.
:param with_links:
If links should be considered
:param shortcuts:
If shortcuts are allowed and the trajectory can *hop* over nodes in the
path.
:param max_depth:
Maximum depth (relative to the start node) to which the search should progress in the tree.
`None` means no depth limit. Only relevant if `shortcuts` are allowed.
:param auto_load:
If data should be loaded from the storage service if it cannot be found in the
current trajectory tree. Auto-loading will load group and leaf nodes currently
not in memory and it will load data into empty leaves. Be aware that auto-loading
does not work with shortcuts.
:return:
Ordered dictionary with run names or indices as keys and found items as values.
Will only include runs where an item was actually found.
|
def inspect(self, **kwargs):
"""
Plot the evolution of the structural relaxation with matplotlib.
Args:
what: Either "hist" or "scf". The first option (default) extracts data
from the HIST file and plot the evolution of the structural
parameters, forces, pressures and energies.
The second option, extracts data from the main output file and
plot the evolution of the SCF cycles (etotal, residuals, etc).
Returns:
`matplotlib` figure, None if some error occurred.
"""
what = kwargs.pop("what", "hist")
if what == "hist":
# Read the hist file to get access to the structure.
with self.open_hist() as hist:
return hist.plot(**kwargs) if hist else None
elif what == "scf":
# Get info on the different SCF cycles
relaxation = abiinspect.Relaxation.from_file(self.output_file.path)
if "title" not in kwargs: kwargs["title"] = str(self)
return relaxation.plot(**kwargs) if relaxation is not None else None
else:
raise ValueError("Wrong value for what %s" % what)
|
Plot the evolution of the structural relaxation with matplotlib.
Args:
what: Either "hist" or "scf". The first option (default) extracts data
from the HIST file and plot the evolution of the structural
parameters, forces, pressures and energies.
The second option, extracts data from the main output file and
plot the evolution of the SCF cycles (etotal, residuals, etc).
Returns:
`matplotlib` figure, None if some error occurred.
|
def _handleInvalid(invalidDefault):
'''
_handleInvalid - Common code for raising / returning an invalid value
@param invalidDefault <None/str/Exception> - The value to return if "val" is not empty string/None
and "val" is not in #possibleValues
If instantiated Exception (like ValueError('blah')): Raise this exception
If an Exception type ( like ValueError ) - Instantiate and raise this exception type
Otherwise, use this raw value
'''
        # If an instantiated Exception was passed, raise that exception
try:
isInstantiatedException = bool( issubclass(invalidDefault.__class__, Exception) )
    except Exception:
isInstantiatedException = False
if isInstantiatedException:
raise invalidDefault
else:
try:
isExceptionType = bool( issubclass( invalidDefault, Exception) )
except TypeError:
isExceptionType = False
# If an Exception type, instantiate and raise
if isExceptionType:
raise invalidDefault()
else:
# Otherwise, just return invalidDefault itself
return invalidDefault
|
_handleInvalid - Common code for raising / returning an invalid value
@param invalidDefault <None/str/Exception> - The value to return if "val" is not empty string/None
and "val" is not in #possibleValues
If instantiated Exception (like ValueError('blah')): Raise this exception
If an Exception type ( like ValueError ) - Instantiate and raise this exception type
Otherwise, use this raw value
|
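The three dispatch modes of _handleInvalid, as a small sketch (the values are illustrative):

print(_handleInvalid('fallback'))            # plain value -> returned as-is
try:
    _handleInvalid(ValueError)               # exception type -> instantiated and raised
except ValueError:
    pass
try:
    _handleInvalid(ValueError('bad input'))  # instance -> raised directly
except ValueError as exc:
    assert str(exc) == 'bad input'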
def verify_signature(public_key, signature, hash, hash_algo):
"""Verify the given signature is correct for the given hash and public key.
Args:
public_key (str): PEM encoded public key
signature (bytes): signature to verify
hash (bytes): hash of data
hash_algo (str): hash algorithm used
Returns:
True if the signature is valid, False otherwise
"""
hash_algo = _hash_algorithms[hash_algo]
try:
return get_publickey(public_key).verify(
signature,
hash,
padding.PKCS1v15(),
utils.Prehashed(hash_algo),
) is None
except InvalidSignature:
return False
|
Verify the given signature is correct for the given hash and public key.
Args:
public_key (str): PEM encoded public key
signature (bytes): signature to verify
hash (bytes): hash of data
hash_algo (str): hash algorithm used
Returns:
True if the signature is valid, False otherwise
|
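A round-trip sketch using the cryptography package directly. It assumes _hash_algorithms maps 'sha256' to a cryptography hash object and that get_publickey loads a PEM-encoded key; both are assumptions about the surrounding module.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding, utils

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
digest = hashes.Hash(hashes.SHA256())
digest.update(b'payload')
prehashed = digest.finalize()
signature = private_key.sign(prehashed, padding.PKCS1v15(), utils.Prehashed(hashes.SHA256()))
pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
assert verify_signature(pem, signature, prehashed, 'sha256')  # relies on the assumed wiring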
def ParseOptions(cls, options, configuration_object):
"""Parses and validates options.
Args:
options (argparse.Namespace): parser options.
configuration_object (CLITool): object to be configured by the argument
helper.
Raises:
BadConfigObject: when the configuration object is of the wrong type.
"""
if not isinstance(configuration_object, tools.CLITool):
raise errors.BadConfigObject(
'Configuration object is not an instance of CLITool')
profilers = cls._ParseStringOption(options, 'profilers')
if not profilers:
profilers = set()
elif profilers.lower() != 'list':
profilers = set(profilers.split(','))
supported_profilers = set(cls.PROFILERS_INFORMATION.keys())
unsupported_profilers = profilers.difference(supported_profilers)
if unsupported_profilers:
unsupported_profilers = ', '.join(unsupported_profilers)
raise errors.BadConfigOption(
'Unsupported profilers: {0:s}'.format(unsupported_profilers))
profiling_directory = getattr(options, 'profiling_directory', None)
if profiling_directory and not os.path.isdir(profiling_directory):
raise errors.BadConfigOption(
'No such profiling directory: {0:s}'.format(profiling_directory))
profiling_sample_rate = getattr(options, 'profiling_sample_rate', None)
if not profiling_sample_rate:
profiling_sample_rate = cls.DEFAULT_PROFILING_SAMPLE_RATE
else:
try:
profiling_sample_rate = int(profiling_sample_rate, 10)
except (TypeError, ValueError):
raise errors.BadConfigOption(
'Invalid profile sample rate: {0!s}.'.format(profiling_sample_rate))
setattr(configuration_object, '_profilers', profilers)
setattr(configuration_object, '_profiling_directory', profiling_directory)
setattr(
configuration_object, '_profiling_sample_rate', profiling_sample_rate)
|
Parses and validates options.
Args:
options (argparse.Namespace): parser options.
configuration_object (CLITool): object to be configured by the argument
helper.
Raises:
BadConfigObject: when the configuration object is of the wrong type.
|
def list_product_versions(page_size=200, page_index=0, sort="", q=""):
"""
List all ProductVersions
"""
content = list_product_versions_raw(page_size, page_index, sort, q)
if content:
return utils.format_json_list(content)
|
List all ProductVersions
|
def ip_v6_network_validator(v: Any) -> IPv6Network:
"""
    Assume IPv6Network is initialised with the default ``strict`` argument.
See more:
https://docs.python.org/library/ipaddress.html#ipaddress.IPv6Network
"""
if isinstance(v, IPv6Network):
return v
with change_exception(errors.IPv6NetworkError, ValueError):
return IPv6Network(v)
|
Assume IPv6Network is initialised with the default ``strict`` argument.
See more:
https://docs.python.org/library/ipaddress.html#ipaddress.IPv6Network
|
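Usage sketch: valid strings are parsed, and an existing IPv6Network instance passes through untouched (assumes the validator's helpers are in scope).

from ipaddress import IPv6Network

net = ip_v6_network_validator('2001:db8::/32')
assert isinstance(net, IPv6Network)
assert ip_v6_network_validator(net) is net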
def words_to_word_ids(data=None, word_to_id=None, unk_key='UNK'):
"""Convert a list of string (words) to IDs.
Parameters
----------
data : list of string or byte
The context in list format
word_to_id : a dictionary
that maps word to ID.
unk_key : str
Represent the unknown words.
Returns
--------
list of int
A list of IDs to represent the context.
Examples
--------
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> context = [b'hello', b'how', b'are', b'you']
>>> ids = tl.nlp.words_to_word_ids(words, dictionary)
>>> context = tl.nlp.word_ids_to_words(ids, reverse_dictionary)
>>> print(ids)
[6434, 311, 26, 207]
>>> print(context)
[b'hello', b'how', b'are', b'you']
References
---------------
- `tensorflow.models.rnn.ptb.reader <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/rnn/ptb>`__
"""
    if data is None:
        raise Exception("data must be provided: a list of strings or bytes")
    if word_to_id is None:
        raise Exception("word_to_id must be provided: a dictionary mapping word to ID")
# if isinstance(data[0], six.string_types):
# tl.logging.info(type(data[0]))
# # exit()
# tl.logging.info(data[0])
# tl.logging.info(word_to_id)
# return [word_to_id[str(word)] for word in data]
# else:
word_ids = []
for word in data:
if word_to_id.get(word) is not None:
word_ids.append(word_to_id[word])
else:
word_ids.append(word_to_id[unk_key])
return word_ids
|
Convert a list of string (words) to IDs.
Parameters
----------
data : list of string or byte
The context in list format
word_to_id : a dictionary
that maps word to ID.
unk_key : str
Represent the unknown words.
Returns
--------
list of int
A list of IDs to represent the context.
Examples
--------
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> context = [b'hello', b'how', b'are', b'you']
>>> ids = tl.nlp.words_to_word_ids(words, dictionary)
>>> context = tl.nlp.word_ids_to_words(ids, reverse_dictionary)
>>> print(ids)
[6434, 311, 26, 207]
>>> print(context)
[b'hello', b'how', b'are', b'you']
References
---------------
- `tensorflow.models.rnn.ptb.reader <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/rnn/ptb>`__
|
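A tiny self-contained sketch of the ID lookup with the unknown-word fallback:

word_to_id = {'hello': 0, 'world': 1, 'UNK': 2}
print(words_to_word_ids(['hello', 'there', 'world'], word_to_id))  # -> [0, 2, 1]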
def nf_io_to_process(ios, def_ios, scatter_ios=None):
"""Convert CWL input/output into a nextflow process definition.
Needs to handle scattered/parallel variables.
"""
scatter_names = {k: v for k, v in scatter_ios} if scatter_ios else {}
var_types = {}
for def_io in def_ios:
var_types[def_io["name"]] = nf_type(def_io["variable_type"])
out = []
for io in ios:
cur_id = io["id"]
if scatter_names:
input_id = scatter_names[io["value"]]
else:
input_id = io.get("value")
vtype = var_types[cur_id]
if input_id and cur_id != input_id:
out.append("%s %s as %s" % (vtype, input_id, cur_id))
else:
out.append("%s %s" % (vtype, cur_id))
return out
|
Convert CWL input/output into a nextflow process definition.
Needs to handle scattered/parallel variables.
|
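A worked sketch of the input rendering. nf_type is a module helper not shown here, so the 'val' type below is an assumption about its output.

def_ios = [{'name': 'in_file', 'variable_type': 'File'}]
ios = [{'id': 'in_file', 'value': 'raw_file'}]
# Assuming nf_type('File') returns e.g. 'val', the differing ids yield an alias:
# nf_io_to_process(ios, def_ios) -> ['val raw_file as in_file']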
def mimeData(self, indexes):
"""
Reimplements the :meth:`QAbstractItemModel.mimeData` method.
:param indexes: Indexes.
:type indexes: QModelIndexList
:return: MimeData.
:rtype: QMimeData
"""
byte_stream = pickle.dumps([self.get_node(index) for index in indexes], pickle.HIGHEST_PROTOCOL)
mime_data = QMimeData()
mime_data.setData("application/x-umbragraphmodeldatalist", byte_stream)
return mime_data
|
Reimplements the :meth:`QAbstractItemModel.mimeData` method.
:param indexes: Indexes.
:type indexes: QModelIndexList
:return: MimeData.
:rtype: QMimeData
|
def _handle_app_result_build_failure(self,out,err,exit_status,result_paths):
""" Catch the error when files are not produced """
        try:
            raise ApplicationError(
                'RAxML failed to produce an output file due to the following error: \n\n%s '
                % err.read())
        except Exception:
            raise ApplicationError('RAxML failed to run properly.')
|
Catch the error when files are not produced
|
def beta_pdf(x, a, b):
"""Beta distirbution probability density function."""
bc = 1 / beta(a, b)
fc = x ** (a - 1)
sc = (1 - x) ** (b - 1)
return bc * fc * sc
|
Beta distribution probability density function.
|
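A quick numeric check, assuming `beta` above is scipy.special.beta: Beta(2, 2) has density 6x(1 - x), so the value at x = 0.5 should be 1.5.

from scipy.special import beta

print(beta_pdf(0.5, 2, 2))  # -> 1.5, since 1 / beta(2, 2) == 6 and 6 * 0.5 * 0.5 == 1.5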
def output(self):
""" Generates output from data array
:returns Pythoned file
:rtype str or unicode
"""
if len(self.files) < 1:
raise Exception('Converter#output: No files to convert')
return self.template.render(self.files)
|
Generates output from the data array
:returns: Pythoned file
:rtype: str or unicode
|
def create(self, phone_number, sms_capability, account_sid=values.unset,
friendly_name=values.unset, unique_name=values.unset,
cc_emails=values.unset, sms_url=values.unset,
sms_method=values.unset, sms_fallback_url=values.unset,
sms_fallback_method=values.unset, status_callback_url=values.unset,
status_callback_method=values.unset,
sms_application_sid=values.unset, address_sid=values.unset,
email=values.unset, verification_type=values.unset,
verification_document_sid=values.unset):
"""
Create a new HostedNumberOrderInstance
:param unicode phone_number: An E164 formatted phone number.
:param bool sms_capability: Specify SMS capability to host.
:param unicode account_sid: Account Sid.
:param unicode friendly_name: A human readable description of this resource.
:param unicode unique_name: A unique, developer assigned name of this HostedNumberOrder.
:param unicode cc_emails: A list of emails.
:param unicode sms_url: SMS URL.
:param unicode sms_method: SMS Method.
:param unicode sms_fallback_url: SMS Fallback URL.
:param unicode sms_fallback_method: SMS Fallback Method.
:param unicode status_callback_url: Status Callback URL.
:param unicode status_callback_method: Status Callback Method.
:param unicode sms_application_sid: SMS Application Sid.
:param unicode address_sid: Address sid.
:param unicode email: Email.
:param HostedNumberOrderInstance.VerificationType verification_type: Verification Type.
:param unicode verification_document_sid: Verification Document Sid
:returns: Newly created HostedNumberOrderInstance
:rtype: twilio.rest.preview.hosted_numbers.hosted_number_order.HostedNumberOrderInstance
"""
data = values.of({
'PhoneNumber': phone_number,
'SmsCapability': sms_capability,
'AccountSid': account_sid,
'FriendlyName': friendly_name,
'UniqueName': unique_name,
'CcEmails': serialize.map(cc_emails, lambda e: e),
'SmsUrl': sms_url,
'SmsMethod': sms_method,
'SmsFallbackUrl': sms_fallback_url,
'SmsFallbackMethod': sms_fallback_method,
'StatusCallbackUrl': status_callback_url,
'StatusCallbackMethod': status_callback_method,
'SmsApplicationSid': sms_application_sid,
'AddressSid': address_sid,
'Email': email,
'VerificationType': verification_type,
'VerificationDocumentSid': verification_document_sid,
})
payload = self._version.create(
'POST',
self._uri,
data=data,
)
return HostedNumberOrderInstance(self._version, payload, )
|
Create a new HostedNumberOrderInstance
:param unicode phone_number: An E164 formatted phone number.
:param bool sms_capability: Specify SMS capability to host.
:param unicode account_sid: Account Sid.
:param unicode friendly_name: A human readable description of this resource.
:param unicode unique_name: A unique, developer assigned name of this HostedNumberOrder.
:param unicode cc_emails: A list of emails.
:param unicode sms_url: SMS URL.
:param unicode sms_method: SMS Method.
:param unicode sms_fallback_url: SMS Fallback URL.
:param unicode sms_fallback_method: SMS Fallback Method.
:param unicode status_callback_url: Status Callback URL.
:param unicode status_callback_method: Status Callback Method.
:param unicode sms_application_sid: SMS Application Sid.
:param unicode address_sid: Address sid.
:param unicode email: Email.
:param HostedNumberOrderInstance.VerificationType verification_type: Verification Type.
:param unicode verification_document_sid: Verification Document Sid
:returns: Newly created HostedNumberOrderInstance
:rtype: twilio.rest.preview.hosted_numbers.hosted_number_order.HostedNumberOrderInstance
|
def _read_function(schema):
"""Add a write method for named schema to a class.
"""
def func(
filename=None,
data=None,
add_node_labels=True,
use_uids=True,
**kwargs):
        # Use the generic read helper to load the data.
return _read(
filename=filename,
data=data,
schema=schema,
add_node_labels=add_node_labels,
use_uids=use_uids,
**kwargs
)
# Update docs
func.__doc__ = _read_doc_template(schema)
return func
|
Add a read method for the named schema to a class.
|
def check_libcloud_version(reqver=LIBCLOUD_MINIMAL_VERSION, why=None):
'''
Compare different libcloud versions
'''
if not HAS_LIBCLOUD:
return False
if not isinstance(reqver, (list, tuple)):
raise RuntimeError(
'\'reqver\' needs to passed as a tuple or list, i.e., (0, 14, 0)'
)
try:
import libcloud # pylint: disable=redefined-outer-name
except ImportError:
raise ImportError(
'salt-cloud requires >= libcloud {0} which is not installed'.format(
'.'.join([six.text_type(num) for num in reqver])
)
)
if LIBCLOUD_VERSION_INFO >= reqver:
return libcloud.__version__
errormsg = 'Your version of libcloud is {0}. '.format(libcloud.__version__)
errormsg += 'salt-cloud requires >= libcloud {0}'.format(
'.'.join([six.text_type(num) for num in reqver])
)
if why:
errormsg += ' for {0}'.format(why)
errormsg += '. Please upgrade.'
raise ImportError(errormsg)
|
Compare different libcloud versions
|
def leaves_are_consistent(self):
"""
Return ``True`` if the sync map fragments
which are the leaves of the sync map tree
(except for HEAD and TAIL leaves)
are all consistent, that is,
their intervals do not overlap in forbidden ways.
:rtype: bool
.. versionadded:: 1.7.0
"""
self.log(u"Checking if leaves are consistent")
leaves = self.leaves()
if len(leaves) < 1:
self.log(u"Empty leaves => return True")
return True
min_time = min([l.interval.begin for l in leaves])
self.log([u" Min time: %.3f", min_time])
max_time = max([l.interval.end for l in leaves])
self.log([u" Max time: %.3f", max_time])
self.log(u" Creating SyncMapFragmentList...")
smf = SyncMapFragmentList(
begin=min_time,
end=max_time,
rconf=self.rconf,
logger=self.logger
)
self.log(u" Creating SyncMapFragmentList... done")
self.log(u" Sorting SyncMapFragmentList...")
result = True
not_head_tail = [l for l in leaves if not l.is_head_or_tail]
for l in not_head_tail:
smf.add(l, sort=False)
try:
smf.sort()
self.log(u" Sorting completed => return True")
except ValueError:
self.log(u" Exception while sorting => return False")
result = False
self.log(u" Sorting SyncMapFragmentList... done")
return result
|
Return ``True`` if the sync map fragments
which are the leaves of the sync map tree
(except for HEAD and TAIL leaves)
are all consistent, that is,
their intervals do not overlap in forbidden ways.
:rtype: bool
.. versionadded:: 1.7.0
|
def label_tree(n,lookup):
'''label tree will again recursively label the tree
:param n: the root node, usually d3['children'][0]
:param lookup: the node/id lookup
'''
if len(n["children"]) == 0:
leaves = [lookup[n["node_id"]]]
else:
leaves = reduce(lambda ls, c: ls + label_tree(c,lookup), n["children"], [])
del n["node_id"]
n["name"] = name = "|||".join(sorted(map(str, leaves)))
return leaves
|
label tree will again recursively label the tree
:param n: the root node, usually d3['children'][0]
:param lookup: the node/id lookup
|
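A sketch on a two-leaf d3-style tree; lookup maps node ids to sample labels, and on Python 3 reduce must be imported from functools for label_tree itself to run.

from functools import reduce

tree = {'node_id': 0, 'children': [
    {'node_id': 1, 'children': []},
    {'node_id': 2, 'children': []},
]}
lookup = {1: 'a', 2: 'b'}
print(label_tree(tree, lookup))  # -> ['a', 'b']; tree['name'] becomes 'a|||b'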
def merge_coords(objs, compat='minimal', join='outer', priority_arg=None,
indexes=None):
"""Merge coordinate variables.
    See merge_core below for argument descriptions. This works similarly to
    merge_core, except we don't worry about whether variables are
    coordinates or not.
"""
_assert_compat_valid(compat)
coerced = coerce_pandas_values(objs)
aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
expanded = expand_variable_dicts(aligned)
priority_vars = _get_priority_vars(aligned, priority_arg, compat=compat)
variables = merge_variables(expanded, priority_vars, compat=compat)
assert_unique_multiindex_level_names(variables)
return variables
|
Merge coordinate variables.
See merge_core below for argument descriptions. This works similarly to
merge_core, except we don't worry about whether variables are
coordinates or not.
|
def run(self, parent=None):
"""Start the configeditor
:returns: None
:rtype: None
:raises: None
"""
self.gw = GuerillaMGMTWin(parent=parent)
self.gw.show()
|
Start the configeditor
:returns: None
:rtype: None
:raises: None
|
def _parse_string_el(el):
"""read a string element, maybe encoded in base64"""
value = str(el)
el_type = el.attributes().get('xsi:type')
if el_type and el_type.value == 'xsd:base64Binary':
value = base64.b64decode(value)
if not PY2:
value = value.decode('utf-8', errors='replace')
value = _uc(value)
return value
|
read a string element, maybe encoded in base64
|
def find(cls, session, resource_id, include=None):
"""Retrieve a single resource.
This should only be called from sub-classes.
Args:
session(Session): The session to find the resource in
resource_id: The ``id`` for the resource to look up
Keyword Args:
include: Resource classes to include
Returns:
Resource: An instance of a resource, or throws a
:class:`NotFoundError` if the resource can not be found.
"""
url = session._build_url(cls._resource_path(), resource_id)
params = build_request_include(include, None)
process = cls._mk_one(session, include=include)
return session.get(url, CB.json(200, process), params=params)
|
Retrieve a single resource.
This should only be called from sub-classes.
Args:
session(Session): The session to find the resource in
resource_id: The ``id`` for the resource to look up
Keyword Args:
include: Resource classes to include
Returns:
Resource: An instance of a resource, or throws a
:class:`NotFoundError` if the resource can not be found.
|
def compute(self, nodes):
"""Helper function to find edges of the overlapping clusters.
Parameters
----------
nodes:
        A dictionary with entries `{node id}:{list of ids in node}`
Returns
-------
edges:
A 1-skeleton of the nerve (intersecting nodes)
        simplices:
Complete list of simplices
"""
result = defaultdict(list)
# Create links when clusters from different hypercubes have members with the same sample id.
candidates = itertools.combinations(nodes.keys(), 2)
for candidate in candidates:
# if there are non-unique members in the union
if (
len(set(nodes[candidate[0]]).intersection(nodes[candidate[1]]))
>= self.min_intersection
):
result[candidate[0]].append(candidate[1])
edges = [[x, end] for x in result for end in result[x]]
simplices = [[n] for n in nodes] + edges
return result, simplices
|
Helper function to find edges of the overlapping clusters.
Parameters
----------
nodes:
A dictionary with entries `{node id}:{list of ids in node}`
Returns
-------
edges:
A 1-skeleton of the nerve (intersecting nodes)
simplices:
Complete list of simplices
|
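A worked example of the nerve computation, assuming the instance's min_intersection is 1 (the node ids are illustrative):

nodes = {'cube0_cluster0': [0, 1, 2], 'cube1_cluster0': [2, 3]}
# The shared member 2 links the two clusters, so with min_intersection == 1:
#   result    -> {'cube0_cluster0': ['cube1_cluster0']}
#   simplices -> [['cube0_cluster0'], ['cube1_cluster0'],
#                 ['cube0_cluster0', 'cube1_cluster0']]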
def close(self):
"""
Commit and close the connection.
.. seealso:: :py:meth:`sqlite3.Connection.close`
"""
if self.__delayed_connection_path and self.__connection is None:
self.__initialize_connection()
return
try:
self.check_connection()
except (SystemError, NullDatabaseConnectionError):
return
logger.debug("close connection to a SQLite database: path='{}'".format(self.database_path))
self.commit()
self.connection.close()
self.__initialize_connection()
|
Commit and close the connection.
.. seealso:: :py:meth:`sqlite3.Connection.close`
|
def predict(self, features, batch_size = -1):
"""
        Model inference based on the given data.
        :param features: it can be an ndarray or a list of ndarrays for local inference,
        or RDD[Sample] for running in distributed fashion
        :param batch_size: total batch size of prediction.
        :return: ndarray or RDD[Sample], depending on the type of features.
"""
if isinstance(features, RDD):
return self.predict_distributed(features, batch_size)
else:
return self.predict_local(features, batch_size)
|
Model inference based on the given data.
:param features: it can be an ndarray or a list of ndarrays for local inference,
or RDD[Sample] for running in distributed fashion
:param batch_size: total batch size of prediction.
:return: ndarray or RDD[Sample], depending on the type of features.
|
def format_stack_frame_json(self):
"""Convert StackFrame object to json format."""
stack_frame_json = {}
stack_frame_json['function_name'] = get_truncatable_str(
self.func_name)
stack_frame_json['original_function_name'] = get_truncatable_str(
self.original_func_name)
stack_frame_json['file_name'] = get_truncatable_str(self.file_name)
stack_frame_json['line_number'] = self.line_num
stack_frame_json['column_number'] = self.col_num
stack_frame_json['load_module'] = {
'module': get_truncatable_str(self.load_module),
'build_id': get_truncatable_str(self.build_id),
}
stack_frame_json['source_version'] = get_truncatable_str(
self.source_version)
return stack_frame_json
|
Convert StackFrame object to json format.
|
def start(self):
""" Starts the clock from 0.
Uses a separate thread to handle the timing functionalities. """
if not hasattr(self,"thread") or not self.thread.isAlive():
self.thread = threading.Thread(target=self.__run)
self.status = RUNNING
self.reset()
self.thread.start()
else:
print("Clock already running!")
|
Starts the clock from 0.
Uses a separate thread to handle the timing functionalities.
|
def univprop(self):
'''
.foo
'''
self.ignore(whitespace)
if not self.nextstr('.'):
self._raiseSyntaxError('universal property expected .')
name = self.noms(varset)
if not name:
            mesg = 'Expected a universal property name.'
self._raiseSyntaxError(mesg=mesg)
if not isUnivName(name):
self._raiseSyntaxError(f'no such universal property: {name!r}')
return s_ast.UnivProp(name)
|
.foo
|