def configure_profile(msg_type, profile_name, data, auth):
"""
Create the profile entry.
Args:
        :msg_type: (str) message type for which to create the config entry
:profile_name: (str) name of the profile entry
:data: (dict) dict values for the 'settings'
:auth: (dict) auth parameters
"""
with jsonconfig.Config("messages", indent=4) as cfg:
write_data(msg_type, profile_name, data, cfg)
write_auth(msg_type, profile_name, auth, cfg)
print("[+] Configuration entry for <" + profile_name + "> created.")
        print("[+] Configuration file location: " + cfg.filename)
def _convert_soap_method_args(*args):
"""Convert arguments to be consumed by a SoapClient method
    The SOAP client requires a list of named arguments:
>>> _convert_soap_method_args('a', 1)
[('arg0', 'a'), ('arg1', 1)]
"""
soap_args = []
for arg_n, arg in enumerate(args):
soap_args.append(('arg' + str(arg_n), arg))
    return soap_args
def set_hybrid_parameters(self, s_C, s_WH, do_renorm=True):
"""Set the hybrid/renormalization control parameters.
**Call signature**
*s_C*
The harmonic number above which the continuous approximation
is used (with special behavior; see below).
*s_WH*
            The harmonic number above which the Wild-Hill Bessel function
approximations are used.
*do_renorm* (default True)
Whether to do any renormalization at all.
Returns
*self* for convenience in chaining.
FK10 uses frequency parameters f^C_cr and f^WH_cr to control some of
its optimizations. This function sets these parameters as multiples of
the electron cyclotron frequency (f_Be in FK10 notation): e.g.,
``f^C_cr = s_C * f_Be``.
At frequencies above f^C_cr, the "continuum" approximation is
introduced, replacing the "exact" sum with an integral. At frequencies
    above f^WH_cr, the Wild-Hill approximations to the Bessel functions
are used. In both cases, the activation of the optimizations can
result in normalization shifts in the calculations. "Renormalization"
computes these shifts (by doing both kinds of calculations at the
transition frequencies) and attempts to correct them. (Some of the
FK10 documentation seems to refer to renormalization as
"R-optimization".)
If f^C_cr is below the lowest frequency integrated, all calculations
will be done in continuum mode. In this case, the sign of *s_C* sets
whether Wild-Hill renormalization is applied. If *s_C* is negative and
    f^WH_cr is above the lowest frequency integrated, renormalization is
done. Otherwise, it is not.
The documentation regarding f^WH_cr is confusing. It states that
f^WH_cr only matters if (1) s_WH < s_C or (2) s_C < 0 and f^WH_cr >
f_0. It is not obvious to me why s_WH > s_C should only matter if s_C
< 0, but that's what's implied.
In most examples in FK10, both of these parameters are set to 12.
"""
self.in_vals[IN_VAL_FCCR] = s_C
self.in_vals[IN_VAL_FWHCR] = s_WH
self.in_vals[IN_VAL_RENORMFLAG] = 1 if do_renorm else 0
    return self
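The method above only records the two harmonic-number thresholds and the renormalization flag in the `in_vals` array and returns `self` so calls can be chained. A minimal standalone sketch of that pattern, with stand-in index constants (the real `IN_VAL_*` indices are defined elsewhere in the module):

```python
# Stand-in indices; the real IN_VAL_* constants live elsewhere in the module.
IN_VAL_FCCR, IN_VAL_FWHCR, IN_VAL_RENORMFLAG = 0, 1, 2

class HybridParams:
    """Minimal mock of the in_vals pattern used by set_hybrid_parameters."""
    def __init__(self):
        self.in_vals = [0] * 3

    def set_hybrid_parameters(self, s_C, s_WH, do_renorm=True):
        self.in_vals[IN_VAL_FCCR] = s_C
        self.in_vals[IN_VAL_FWHCR] = s_WH
        self.in_vals[IN_VAL_RENORMFLAG] = 1 if do_renorm else 0
        return self  # returning self is what enables chaining

# The common FK10 setting: both thresholds at 12, renormalization on.
params = HybridParams().set_hybrid_parameters(12, 12)
```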
def send(self, *args, **kwargs):
"""Sends the envelope using a freshly created SMTP connection. *args*
and *kwargs* are passed directly to :py:class:`envelopes.conn.SMTP`
constructor.
Returns a tuple of SMTP object and whatever its send method returns."""
conn = SMTP(*args, **kwargs)
send_result = conn.send(self)
    return conn, send_result
def value_validate(self, value):
"""
Validates value and throws ValidationError. Subclasses should override
this to provide validation logic.
"""
if not isinstance(value, six.string_types):
        raise tldap.exceptions.ValidationError("should be a string")
def get_grade_entry_admin_session_for_gradebook(self, gradebook_id, proxy):
"""Gets the ``OsidSession`` associated with the grade entry admin service for the given gradebook.
arg: gradebook_id (osid.id.Id): the ``Id`` of the gradebook
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.grading.GradeEntryAdminSession) - ``a
GradeEntryAdminSession``
raise: NotFound - ``gradebook_id`` not found
raise: NullArgument - ``gradebook_id`` or ``proxy`` is ``null``
raise: OperationFailed - ``unable to complete request``
raise: Unimplemented - ``supports_grade_entry_admin()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_grade_entry_admin()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_grade_entry_admin():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
    return sessions.GradeEntryAdminSession(gradebook_id, proxy, self._runtime)
def _create_related_chart(self, data, work, output_dir):
"""Generates and writes to a file in `output_dir` the data used to
display a grouped bar chart.
This data gives, for each "maybe" work, the percentage of it
that is shared with `work`, and the percentage of `work` that
is shared with the "maybe" work.
:param data: data to derive the chart data from
:type data: `pandas.DataFrame`
    :param work: work to show related data for
    :type work: `str`
:param output_dir: directory to output data file to
:type output_dir: `str`
"""
chart_data = data[work].dropna().sort_values(by=SHARED_RELATED_WORK,
ascending=False)
csv_path = os.path.join(output_dir, 'related_{}.csv'.format(work))
    chart_data.to_csv(csv_path)
def filter_that(self, criteria, data):
    '''
    this method uses the module 're' to check whether the data matches
    the given criteria. Note that re.match only matches at the beginning
    of the string, so this is a prefix match rather than a containment test.
    '''
    import re
    prog = re.compile(criteria)
    return bool(prog.match(data))
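Because `filter_that` relies on `re.match`, it only succeeds when the pattern matches at the start of the data; `re.search` would be needed for true containment. A quick illustration of the difference:

```python
import re

# match() anchors at the beginning of the string...
assert re.match('find', 'find me') is not None  # pattern at the start: hit
assert re.match('me', 'find me') is None        # pattern only later: miss
# ...while search() scans the whole string.
assert re.search('me', 'find me') is not None
```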
def get_form_kwargs(self):
"""
    initialize default values that won't be displayed in the form
    :return: the form keyword arguments with initial values set
"""
kwargs = super(UserServiceUpdateView, self).get_form_kwargs()
kwargs['initial']['user'] = self.request.user
kwargs['initial']['name'] = self.object.name
    return kwargs
def id(self, id):
"""
Sets the id of this ServicePackageQuotaHistoryReservation.
Reservation ID.
:param id: The id of this ServicePackageQuotaHistoryReservation.
:type: str
"""
if id is None:
raise ValueError("Invalid value for `id`, must not be `None`")
if id is not None and len(id) > 250:
raise ValueError("Invalid value for `id`, length must be less than or equal to `250`")
if id is not None and len(id) < 1:
raise ValueError("Invalid value for `id`, length must be greater than or equal to `1`")
    self._id = id
def get_normalized_grid(self):
"""
Analyzes subcell structure
"""
log = logging.getLogger(__name__)
# Resolve multirow mentions, TODO: validate against all PDFs
# subcol_count = 0
mega_rows = []
for row_id, row in enumerate(self._grid):
# maps yc_grid -> [mentions]
subrow_across_cell = defaultdict(list)
for col_id, cell in enumerate(row):
# Keep cell text in reading order
cell.texts.sort(key=cmp_to_key(reading_order))
log.debug("=" * 50)
for m in cell.texts:
subrow_across_cell[m.yc_grid].append(m)
# prev = m
log.debug(pformat(dict(subrow_across_cell)))
mega_rows.append(subrow_across_cell)
# Multiline paragraph check
# Subrow/Subcolumn
    return mega_rows
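The inner loop's grouping of mentions by `yc_grid` is the standard `defaultdict(list)` bucketing idiom. A standalone sketch with made-up `(text, yc_grid)` pairs standing in for the cell mentions:

```python
from collections import defaultdict

# Hypothetical (text, yc_grid) pairs standing in for the cell mentions.
mentions = [('Alpha', 0), ('Beta', 0), ('Gamma', 1)]

# Mentions sharing a grid y-coordinate end up in the same subrow bucket.
subrow_across_cell = defaultdict(list)
for text, yc_grid in mentions:
    subrow_across_cell[yc_grid].append(text)
```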
def _aggregate(data, norm=True, sort_by='value', keys=None):
'''
    Counts the number of occurrences of each item in 'data'.
    Inputs
    data: a list of values.
    norm: normalize the resulting counts (as percent)
    sort_by: how to sort the returned data. Options are 'value' and 'count'.
    Output
    a non-redundant list of values (from 'data') and a list of counts.
'''
if keys:
vdict = {k: 0 for k in keys}
for d in data:
if d in keys:
vdict[d] += 1
else:
vdict = {}
for d in data:
vdict[d] = vdict[d] + 1 if d in vdict else 1
vals = [(k, v) for k, v in vdict.items()]
if sort_by == 'value':
vals.sort(key=lambda x: x[0])
else:
vals.sort(key=lambda x: x[1])
xs = [v[0] for v in vals]
if norm:
raw_y = [v[1] for v in vals]
total_y = sum(raw_y)
ys = [100. * y / total_y for y in raw_y]
else:
ys = [v[1] for v in vals]
    return xs, ys
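The counting branch of `_aggregate` is equivalent to `collections.Counter`; the sketch below mirrors the `norm=True, sort_by='value'` path on sample data, rather than calling the function itself:

```python
from collections import Counter

data = ['a', 'b', 'a', 'a']
counts = Counter(data)          # tally of occurrences per item

xs = sorted(counts)             # sort_by='value' orders by the item itself
total = sum(counts.values())
ys = [100.0 * counts[x] / total for x in xs]   # norm=True: percentages
```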
def set_string(_bytearray, byte_index, value, max_size):
"""
Set string value
    :param value: string data
    :param max_size: maximum possible string size
"""
if six.PY2:
assert isinstance(value, (str, unicode))
else:
assert isinstance(value, str)
size = len(value)
# FAIL HARD WHEN trying to write too much data into PLC
if size > max_size:
raise ValueError('size %s > max_size %s %s' % (size, max_size, value))
# set len count on first position
_bytearray[byte_index + 1] = len(value)
i = 0
    # fill array with the ord() values of the characters
for i, c in enumerate(value):
_bytearray[byte_index + 2 + i] = ord(c)
# fill the rest with empty space
for r in range(i + 1, _bytearray[byte_index]):
        _bytearray[byte_index + 2 + r] = ord(' ')
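The function writes the PLC string layout in which the byte at `byte_index` holds the maximum length, `byte_index + 1` the actual length, and the characters follow. A standalone sketch of that layout (not using the function itself):

```python
# String layout: [max_len][actual_len][chars...]
max_size = 10
buf = bytearray([max_size, 0]) + bytearray(max_size)

value = 'hi'
buf[1] = len(value)         # actual length lives at byte_index + 1
for i, c in enumerate(value):
    buf[2 + i] = ord(c)     # characters start at byte_index + 2
```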
def Checksum(params, ctxt, scope, stream, coord):
"""
    Runs a simple checksum on a file and returns the result as an int64. The
algorithm can be one of the following constants:
CHECKSUM_BYTE - Treats the file as a set of unsigned bytes
CHECKSUM_SHORT_LE - Treats the file as a set of unsigned little-endian shorts
CHECKSUM_SHORT_BE - Treats the file as a set of unsigned big-endian shorts
CHECKSUM_INT_LE - Treats the file as a set of unsigned little-endian ints
CHECKSUM_INT_BE - Treats the file as a set of unsigned big-endian ints
CHECKSUM_INT64_LE - Treats the file as a set of unsigned little-endian int64s
CHECKSUM_INT64_BE - Treats the file as a set of unsigned big-endian int64s
CHECKSUM_SUM8 - Same as CHECKSUM_BYTE except result output as 8-bits
CHECKSUM_SUM16 - Same as CHECKSUM_BYTE except result output as 16-bits
CHECKSUM_SUM32 - Same as CHECKSUM_BYTE except result output as 32-bits
CHECKSUM_SUM64 - Same as CHECKSUM_BYTE
CHECKSUM_CRC16
CHECKSUM_CRCCCITT
CHECKSUM_CRC32
CHECKSUM_ADLER32
If start and size are zero, the algorithm is run on the whole file. If
they are not zero then the algorithm is run on size bytes starting at
address start. See the ChecksumAlgBytes and ChecksumAlgStr functions
to run more complex algorithms. crcPolynomial and crcInitValue
can be used to set a custom polynomial and initial value for the
CRC functions. A value of -1 for these parameters uses the default
values as described in the Check Sum/Hash Algorithms topic. A negative
number is returned on error.
"""
checksum_types = {
0: "CHECKSUM_BYTE", # Treats the file as a set of unsigned bytes
1: "CHECKSUM_SHORT_LE", # Treats the file as a set of unsigned little-endian shorts
2: "CHECKSUM_SHORT_BE", # Treats the file as a set of unsigned big-endian shorts
3: "CHECKSUM_INT_LE", # Treats the file as a set of unsigned little-endian ints
4: "CHECKSUM_INT_BE", # Treats the file as a set of unsigned big-endian ints
5: "CHECKSUM_INT64_LE", # Treats the file as a set of unsigned little-endian int64s
6: "CHECKSUM_INT64_BE", # Treats the file as a set of unsigned big-endian int64s
7: "CHECKSUM_SUM8", # Same as CHECKSUM_BYTE except result output as 8-bits
8: "CHECKSUM_SUM16", # Same as CHECKSUM_BYTE except result output as 16-bits
9: "CHECKSUM_SUM32", # Same as CHECKSUM_BYTE except result output as 32-bits
10: "CHECKSUM_SUM64", # Same as CHECKSUM_BYTE
11: "CHECKSUM_CRC16",
12: "CHECKSUM_CRCCCITT",
13: _crc32,
        14: _checksum_Adler32
    }
    # Note: only CHECKSUM_CRC32 (13) and CHECKSUM_ADLER32 (14) map to callable
    # implementations here; selecting any other algorithm id will fail when its
    # string placeholder is invoked below.
if len(params) < 1:
raise errors.InvalidArguments(coord, "at least 1 argument", "{} args".format(len(params)))
alg = PYVAL(params[0])
if alg not in checksum_types:
raise errors.InvalidArguments(coord, "checksum alg must be one of (0-14)", "{}".format(alg))
start = 0
if len(params) > 1:
start = PYVAL(params[1])
size = 0
if len(params) > 2:
size = PYVAL(params[2])
crc_poly = -1
if len(params) > 3:
crc_poly = PYVAL(params[3])
crc_init = -1
if len(params) > 4:
crc_init = PYVAL(params[4])
stream_pos = stream.tell()
if start + size == 0:
stream.seek(0, 0)
data = stream.read()
else:
stream.seek(start, 0)
data = stream.read(size)
try:
return checksum_types[alg](data, crc_init, crc_poly)
finally:
# yes, this does execute even though a return statement
# exists within the try
        stream.seek(stream_pos, 0)
algorithm can be one of the following constants:
CHECKSUM_BYTE - Treats the file as a set of unsigned bytes
CHECKSUM_SHORT_LE - Treats the file as a set of unsigned little-endian shorts
CHECKSUM_SHORT_BE - Treats the file as a set of unsigned big-endian shorts
CHECKSUM_INT_LE - Treats the file as a set of unsigned little-endian ints
CHECKSUM_INT_BE - Treats the file as a set of unsigned big-endian ints
CHECKSUM_INT64_LE - Treats the file as a set of unsigned little-endian int64s
CHECKSUM_INT64_BE - Treats the file as a set of unsigned big-endian int64s
CHECKSUM_SUM8 - Same as CHECKSUM_BYTE except result output as 8-bits
CHECKSUM_SUM16 - Same as CHECKSUM_BYTE except result output as 16-bits
CHECKSUM_SUM32 - Same as CHECKSUM_BYTE except result output as 32-bits
CHECKSUM_SUM64 - Same as CHECKSUM_BYTE
CHECKSUM_CRC16
CHECKSUM_CRCCCITT
CHECKSUM_CRC32
CHECKSUM_ADLER32
If start and size are zero, the algorithm is run on the whole file. If
they are not zero then the algorithm is run on size bytes starting at
address start. See the ChecksumAlgBytes and ChecksumAlgStr functions
to run more complex algorithms. crcPolynomial and crcInitValue
can be used to set a custom polynomial and initial value for the
CRC functions. A value of -1 for these parameters uses the default
values as described in the Check Sum/Hash Algorithms topic. A negative
number is returned on error. | Below is the the instruction that describes the task:
### Input:
Runs a simple checksum on a file and returns the result as a int64. The
algorithm can be one of the following constants:
CHECKSUM_BYTE - Treats the file as a set of unsigned bytes
CHECKSUM_SHORT_LE - Treats the file as a set of unsigned little-endian shorts
CHECKSUM_SHORT_BE - Treats the file as a set of unsigned big-endian shorts
CHECKSUM_INT_LE - Treats the file as a set of unsigned little-endian ints
CHECKSUM_INT_BE - Treats the file as a set of unsigned big-endian ints
CHECKSUM_INT64_LE - Treats the file as a set of unsigned little-endian int64s
CHECKSUM_INT64_BE - Treats the file as a set of unsigned big-endian int64s
CHECKSUM_SUM8 - Same as CHECKSUM_BYTE except result output as 8-bits
CHECKSUM_SUM16 - Same as CHECKSUM_BYTE except result output as 16-bits
CHECKSUM_SUM32 - Same as CHECKSUM_BYTE except result output as 32-bits
CHECKSUM_SUM64 - Same as CHECKSUM_BYTE
CHECKSUM_CRC16
CHECKSUM_CRCCCITT
CHECKSUM_CRC32
CHECKSUM_ADLER32
If start and size are zero, the algorithm is run on the whole file. If
they are not zero then the algorithm is run on size bytes starting at
address start. See the ChecksumAlgBytes and ChecksumAlgStr functions
to run more complex algorithms. crcPolynomial and crcInitValue
can be used to set a custom polynomial and initial value for the
CRC functions. A value of -1 for these parameters uses the default
values as described in the Check Sum/Hash Algorithms topic. A negative
number is returned on error.
### Response:
def Checksum(params, ctxt, scope, stream, coord):
"""
Runs a simple checksum on a file and returns the result as an int64. The
algorithm can be one of the following constants:
CHECKSUM_BYTE - Treats the file as a set of unsigned bytes
CHECKSUM_SHORT_LE - Treats the file as a set of unsigned little-endian shorts
CHECKSUM_SHORT_BE - Treats the file as a set of unsigned big-endian shorts
CHECKSUM_INT_LE - Treats the file as a set of unsigned little-endian ints
CHECKSUM_INT_BE - Treats the file as a set of unsigned big-endian ints
CHECKSUM_INT64_LE - Treats the file as a set of unsigned little-endian int64s
CHECKSUM_INT64_BE - Treats the file as a set of unsigned big-endian int64s
CHECKSUM_SUM8 - Same as CHECKSUM_BYTE except result output as 8-bits
CHECKSUM_SUM16 - Same as CHECKSUM_BYTE except result output as 16-bits
CHECKSUM_SUM32 - Same as CHECKSUM_BYTE except result output as 32-bits
CHECKSUM_SUM64 - Same as CHECKSUM_BYTE
CHECKSUM_CRC16
CHECKSUM_CRCCCITT
CHECKSUM_CRC32
CHECKSUM_ADLER32
If start and size are zero, the algorithm is run on the whole file. If
they are not zero then the algorithm is run on size bytes starting at
address start. See the ChecksumAlgBytes and ChecksumAlgStr functions
to run more complex algorithms. crcPolynomial and crcInitValue
can be used to set a custom polynomial and initial value for the
CRC functions. A value of -1 for these parameters uses the default
values as described in the Check Sum/Hash Algorithms topic. A negative
number is returned on error.
"""
checksum_types = {
0: "CHECKSUM_BYTE", # Treats the file as a set of unsigned bytes
1: "CHECKSUM_SHORT_LE", # Treats the file as a set of unsigned little-endian shorts
2: "CHECKSUM_SHORT_BE", # Treats the file as a set of unsigned big-endian shorts
3: "CHECKSUM_INT_LE", # Treats the file as a set of unsigned little-endian ints
4: "CHECKSUM_INT_BE", # Treats the file as a set of unsigned big-endian ints
5: "CHECKSUM_INT64_LE", # Treats the file as a set of unsigned little-endian int64s
6: "CHECKSUM_INT64_BE", # Treats the file as a set of unsigned big-endian int64s
7: "CHECKSUM_SUM8", # Same as CHECKSUM_BYTE except result output as 8-bits
8: "CHECKSUM_SUM16", # Same as CHECKSUM_BYTE except result output as 16-bits
9: "CHECKSUM_SUM32", # Same as CHECKSUM_BYTE except result output as 32-bits
10: "CHECKSUM_SUM64", # Same as CHECKSUM_BYTE
11: "CHECKSUM_CRC16",
12: "CHECKSUM_CRCCCITT",
13: _crc32,
14: _checksum_Adler32
}
if len(params) < 1:
raise errors.InvalidArguments(coord, "at least 1 argument", "{} args".format(len(params)))
alg = PYVAL(params[0])
if alg not in checksum_types:
raise errors.InvalidArguments(coord, "checksum alg must be one of (0-14)", "{}".format(alg))
start = 0
if len(params) > 1:
start = PYVAL(params[1])
size = 0
if len(params) > 2:
size = PYVAL(params[2])
crc_poly = -1
if len(params) > 3:
crc_poly = PYVAL(params[3])
crc_init = -1
if len(params) > 4:
crc_init = PYVAL(params[4])
stream_pos = stream.tell()
if start + size == 0:
stream.seek(0, 0)
data = stream.read()
else:
stream.seek(start, 0)
data = stream.read(size)
try:
return checksum_types[alg](data, crc_init, crc_poly)
finally:
# yes, this does execute even though a return statement
# exists within the try
stream.seek(stream_pos, 0) |
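As a standalone illustration of the simplest of these algorithms, here is a hedged sketch of CHECKSUM_BYTE- and CHECKSUM_SHORT_LE-style sums over a byte string. The function names and the 64-bit masking are assumptions for the example; this is not the library's own implementation:

```python
import struct

def checksum_byte(data):
    # CHECKSUM_BYTE-style: sum the input as unsigned bytes, kept to 64 bits.
    return sum(data) & 0xFFFFFFFFFFFFFFFF

def checksum_short_le(data):
    # CHECKSUM_SHORT_LE-style: sum unsigned little-endian shorts.
    # Assumes an even-length input for simplicity.
    shorts = struct.unpack("<{}H".format(len(data) // 2), data)
    return sum(shorts) & 0xFFFFFFFFFFFFFFFF

data = b"\x01\x02\x03\x04"
print(checksum_byte(data))      # 10
print(checksum_short_le(data))  # 1540 (0x0201 + 0x0403)
```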
def _parse_args(self, args, known_only):
"""Helper function to do the main argument parsing.
This function goes through args and does the bulk of the flag parsing.
It will find the corresponding flag in our flag dictionary, and call its
.parse() method on the flag value.
Args:
args: [str], a list of strings with the arguments to parse.
known_only: bool, if True, parse and remove known flags; return the rest
untouched. Unknown flags specified by --undefok are not returned.
Returns:
A tuple with the following:
unknown_flags: List of (flag name, arg) for flags we don't know about.
unparsed_args: List of arguments we did not parse.
Raises:
Error: Raised on any parsing error.
ValueError: Raised on flag value parsing error.
"""
unparsed_names_and_args = [] # A list of (flag name or None, arg).
undefok = set()
retired_flag_func = self.__dict__['__is_retired_flag_func']
flag_dict = self._flags()
args = iter(args)
for arg in args:
value = None
def get_value():
# pylint: disable=cell-var-from-loop
try:
return next(args) if value is None else value
except StopIteration:
raise _exceptions.Error('Missing value for flag ' + arg) # pylint: disable=undefined-loop-variable
if not arg.startswith('-'):
# A non-argument: default is break, GNU is skip.
unparsed_names_and_args.append((None, arg))
if self.is_gnu_getopt():
continue
else:
break
if arg == '--':
if known_only:
unparsed_names_and_args.append((None, arg))
break
# At this point, arg must start with '-'.
if arg.startswith('--'):
arg_without_dashes = arg[2:]
else:
arg_without_dashes = arg[1:]
if '=' in arg_without_dashes:
name, value = arg_without_dashes.split('=', 1)
else:
name, value = arg_without_dashes, None
if not name:
# The argument is all dashes (including one dash).
unparsed_names_and_args.append((None, arg))
if self.is_gnu_getopt():
continue
else:
break
# --undefok is a special case.
if name == 'undefok':
value = get_value()
undefok.update(v.strip() for v in value.split(','))
undefok.update('no' + v.strip() for v in value.split(','))
continue
flag = flag_dict.get(name)
if flag:
if flag.boolean and value is None:
value = 'true'
else:
value = get_value()
elif name.startswith('no') and len(name) > 2:
# Boolean flags can take the form of --noflag, with no value.
noflag = flag_dict.get(name[2:])
if noflag and noflag.boolean:
if value is not None:
raise ValueError(arg + ' does not take an argument')
flag = noflag
value = 'false'
if retired_flag_func and not flag:
is_retired, is_bool = retired_flag_func(name)
# If we didn't recognize that flag, but it starts with
# "no" then maybe it was a boolean flag specified in the
# --nofoo form.
if not is_retired and name.startswith('no'):
is_retired, is_bool = retired_flag_func(name[2:])
is_retired = is_retired and is_bool
if is_retired:
if not is_bool and value is None:
# This happens when a non-bool retired flag is specified
# in format of "--flag value".
get_value()
logging.error('Flag "%s" is retired and should no longer '
'be specified. See go/totw/90.', name)
continue
if flag:
flag.parse(value)
flag.using_default_value = False
else:
unparsed_names_and_args.append((name, arg))
unknown_flags = []
unparsed_args = []
for name, arg in unparsed_names_and_args:
if name is None:
# Positional arguments.
unparsed_args.append(arg)
elif name in undefok:
# Remove undefok flags.
continue
else:
# This is an unknown flag.
if known_only:
unparsed_args.append(arg)
else:
unknown_flags.append((name, arg))
unparsed_args.extend(list(args))
return unknown_flags, unparsed_args | Helper function to do the main argument parsing.
This function goes through args and does the bulk of the flag parsing.
It will find the corresponding flag in our flag dictionary, and call its
.parse() method on the flag value.
Args:
args: [str], a list of strings with the arguments to parse.
known_only: bool, if True, parse and remove known flags; return the rest
untouched. Unknown flags specified by --undefok are not returned.
Returns:
A tuple with the following:
unknown_flags: List of (flag name, arg) for flags we don't know about.
unparsed_args: List of arguments we did not parse.
Raises:
Error: Raised on any parsing error.
ValueError: Raised on flag value parsing error. | Below is the instruction that describes the task:
### Input:
Helper function to do the main argument parsing.
This function goes through args and does the bulk of the flag parsing.
It will find the corresponding flag in our flag dictionary, and call its
.parse() method on the flag value.
Args:
args: [str], a list of strings with the arguments to parse.
known_only: bool, if True, parse and remove known flags; return the rest
untouched. Unknown flags specified by --undefok are not returned.
Returns:
A tuple with the following:
unknown_flags: List of (flag name, arg) for flags we don't know about.
unparsed_args: List of arguments we did not parse.
Raises:
Error: Raised on any parsing error.
ValueError: Raised on flag value parsing error.
### Response:
def _parse_args(self, args, known_only):
"""Helper function to do the main argument parsing.
This function goes through args and does the bulk of the flag parsing.
It will find the corresponding flag in our flag dictionary, and call its
.parse() method on the flag value.
Args:
args: [str], a list of strings with the arguments to parse.
known_only: bool, if True, parse and remove known flags; return the rest
untouched. Unknown flags specified by --undefok are not returned.
Returns:
A tuple with the following:
unknown_flags: List of (flag name, arg) for flags we don't know about.
unparsed_args: List of arguments we did not parse.
Raises:
Error: Raised on any parsing error.
ValueError: Raised on flag value parsing error.
"""
unparsed_names_and_args = [] # A list of (flag name or None, arg).
undefok = set()
retired_flag_func = self.__dict__['__is_retired_flag_func']
flag_dict = self._flags()
args = iter(args)
for arg in args:
value = None
def get_value():
# pylint: disable=cell-var-from-loop
try:
return next(args) if value is None else value
except StopIteration:
raise _exceptions.Error('Missing value for flag ' + arg) # pylint: disable=undefined-loop-variable
if not arg.startswith('-'):
# A non-argument: default is break, GNU is skip.
unparsed_names_and_args.append((None, arg))
if self.is_gnu_getopt():
continue
else:
break
if arg == '--':
if known_only:
unparsed_names_and_args.append((None, arg))
break
# At this point, arg must start with '-'.
if arg.startswith('--'):
arg_without_dashes = arg[2:]
else:
arg_without_dashes = arg[1:]
if '=' in arg_without_dashes:
name, value = arg_without_dashes.split('=', 1)
else:
name, value = arg_without_dashes, None
if not name:
# The argument is all dashes (including one dash).
unparsed_names_and_args.append((None, arg))
if self.is_gnu_getopt():
continue
else:
break
# --undefok is a special case.
if name == 'undefok':
value = get_value()
undefok.update(v.strip() for v in value.split(','))
undefok.update('no' + v.strip() for v in value.split(','))
continue
flag = flag_dict.get(name)
if flag:
if flag.boolean and value is None:
value = 'true'
else:
value = get_value()
elif name.startswith('no') and len(name) > 2:
# Boolean flags can take the form of --noflag, with no value.
noflag = flag_dict.get(name[2:])
if noflag and noflag.boolean:
if value is not None:
raise ValueError(arg + ' does not take an argument')
flag = noflag
value = 'false'
if retired_flag_func and not flag:
is_retired, is_bool = retired_flag_func(name)
# If we didn't recognize that flag, but it starts with
# "no" then maybe it was a boolean flag specified in the
# --nofoo form.
if not is_retired and name.startswith('no'):
is_retired, is_bool = retired_flag_func(name[2:])
is_retired = is_retired and is_bool
if is_retired:
if not is_bool and value is None:
# This happens when a non-bool retired flag is specified
# in format of "--flag value".
get_value()
logging.error('Flag "%s" is retired and should no longer '
'be specified. See go/totw/90.', name)
continue
if flag:
flag.parse(value)
flag.using_default_value = False
else:
unparsed_names_and_args.append((name, arg))
unknown_flags = []
unparsed_args = []
for name, arg in unparsed_names_and_args:
if name is None:
# Positional arguments.
unparsed_args.append(arg)
elif name in undefok:
# Remove undefok flags.
continue
else:
# This is an unknown flag.
if known_only:
unparsed_args.append(arg)
else:
unknown_flags.append((name, arg))
unparsed_args.extend(list(args))
return unknown_flags, unparsed_args |
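The core token handling above (dash stripping, then splitting on the first `=`) can be exercised in isolation. This is a minimal sketch of that one step, not the real flags library:

```python
def split_flag_token(arg):
    # Mirror the name/value split in _parse_args: drop '--' or '-',
    # then split on the first '=' if present.
    body = arg[2:] if arg.startswith('--') else arg[1:]
    if '=' in body:
        name, value = body.split('=', 1)
    else:
        name, value = body, None
    return name, value

print(split_flag_token('--color=auto'))  # ('color', 'auto')
print(split_flag_token('-v'))            # ('v', None)
```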
def save(self, out_path):
"""Save an ascii representation of this simulation trace.
Args:
out_path (str): The output path to save this simulation trace.
"""
out = {
'selectors': [str(x) for x in self.selectors],
'trace': [{'stream': str(DataStream.FromEncoded(x.stream)), 'time': x.raw_time, 'value': x.value, 'reading_id': x.reading_id} for x in self]
}
with open(out_path, "w") as outfile:
json.dump(out, outfile, indent=4) | Save an ascii representation of this simulation trace.
Args:
out_path (str): The output path to save this simulation trace. | Below is the instruction that describes the task:
### Input:
Save an ascii representation of this simulation trace.
Args:
out_path (str): The output path to save this simulation trace.
### Response:
def save(self, out_path):
"""Save an ascii representation of this simulation trace.
Args:
out_path (str): The output path to save this simulation trace.
"""
out = {
'selectors': [str(x) for x in self.selectors],
'trace': [{'stream': str(DataStream.FromEncoded(x.stream)), 'time': x.raw_time, 'value': x.value, 'reading_id': x.reading_id} for x in self]
}
with open(out_path, "w") as outfile:
json.dump(out, outfile, indent=4) |
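A quick, self-contained check of the serialization step: `json.dump` expects a text-mode file handle on Python 3, and a trace-style dict round-trips cleanly through it. The field values here are made up for the example; only the key names are taken from the snippet above:

```python
import json
import os
import tempfile

out = {
    'selectors': ['stream a'],
    'trace': [{'stream': 'stream a', 'time': 10, 'value': 42, 'reading_id': 1}],
}
path = os.path.join(tempfile.mkdtemp(), 'trace.json')
with open(path, 'w') as outfile:      # text mode: json.dump writes str
    json.dump(out, outfile, indent=4)
with open(path) as infile:
    assert json.load(infile) == out
```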
def update_templates(self, body):
"""Update enrollment and verification SMS templates.
Useful to send custom messages on sms enrollment and verification
Args:
body (dict): Attributes to modify.
See: https://auth0.com/docs/api/management/v2#!/Guardian/put_templates
"""
return self.client.put(self._url('factors/sms/templates'), data=body) | Update enrollment and verification SMS templates.
Useful to send custom messages on sms enrollment and verification
Args:
body (dict): Attributes to modify.
See: https://auth0.com/docs/api/management/v2#!/Guardian/put_templates | Below is the instruction that describes the task:
### Input:
Update enrollment and verification SMS templates.
Useful to send custom messages on sms enrollment and verification
Args:
body (dict): Attributes to modify.
See: https://auth0.com/docs/api/management/v2#!/Guardian/put_templates
### Response:
def update_templates(self, body):
"""Update enrollment and verification SMS templates.
Useful to send custom messages on sms enrollment and verification
Args:
body (dict): Attributes to modify.
See: https://auth0.com/docs/api/management/v2#!/Guardian/put_templates
"""
return self.client.put(self._url('factors/sms/templates'), data=body) |
def _process_oauth_response(self, response):
"Extracts the fields from an oauth response"
if response.status_code == 200:
credentials = parse_qs(response.text)
# Initialize the oauth credentials
self._init_oauth(
credentials.get('oauth_token')[0],
credentials.get('oauth_token_secret')[0]
)
# If tokens are refreshable, we'll get a session handle
self.oauth_session_handle = credentials.get(
'oauth_session_handle', [None])[0]
# Calculate token/auth expiry
oauth_expires_in = credentials.get(
'oauth_expires_in',
[OAUTH_EXPIRY_SECONDS])[0]
oauth_authorisation_expires_in = credentials.get(
'oauth_authorization_expires_in',
[OAUTH_EXPIRY_SECONDS])[0]
self.oauth_expires_at = datetime.datetime.now() + \
datetime.timedelta(seconds=int(
oauth_expires_in))
self.oauth_authorization_expires_at = \
datetime.datetime.now() + \
datetime.timedelta(seconds=int(
oauth_authorisation_expires_in))
else:
self._handle_error_response(response) | Extracts the fields from an oauth response | Below is the instruction that describes the task:
### Input:
Extracts the fields from an oauth response
### Response:
def _process_oauth_response(self, response):
"Extracts the fields from an oauth response"
if response.status_code == 200:
credentials = parse_qs(response.text)
# Initialize the oauth credentials
self._init_oauth(
credentials.get('oauth_token')[0],
credentials.get('oauth_token_secret')[0]
)
# If tokens are refreshable, we'll get a session handle
self.oauth_session_handle = credentials.get(
'oauth_session_handle', [None])[0]
# Calculate token/auth expiry
oauth_expires_in = credentials.get(
'oauth_expires_in',
[OAUTH_EXPIRY_SECONDS])[0]
oauth_authorisation_expires_in = credentials.get(
'oauth_authorization_expires_in',
[OAUTH_EXPIRY_SECONDS])[0]
self.oauth_expires_at = datetime.datetime.now() + \
datetime.timedelta(seconds=int(
oauth_expires_in))
self.oauth_authorization_expires_at = \
datetime.datetime.now() + \
datetime.timedelta(seconds=int(
oauth_authorisation_expires_in))
else:
self._handle_error_response(response) |
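The field extraction itself is plain `parse_qs` work over the response body. A hedged, self-contained sketch of that step (the body string and token values are made up):

```python
import datetime
from urllib.parse import parse_qs

OAUTH_EXPIRY_SECONDS = 3600  # stand-in for the module constant

body = 'oauth_token=tok123&oauth_token_secret=sec456&oauth_expires_in=7200'
credentials = parse_qs(body)
token = credentials.get('oauth_token')[0]
secret = credentials.get('oauth_token_secret')[0]
# Missing keys fall back to a default, as in the method above.
expires_in = int(credentials.get('oauth_expires_in', [OAUTH_EXPIRY_SECONDS])[0])
expires_at = datetime.datetime.now() + datetime.timedelta(seconds=expires_in)
print(token, secret, expires_in)  # tok123 sec456 7200
```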
def select_segment(self, segs, segs_tips, segs_undecided) -> Tuple[int, int]:
"""Out of a list of line segments, choose segment that has the most
distant second data point.
Assume the distance matrix Ddiff is sorted according to seg_idcs.
Compute all the distances.
Returns
-------
iseg : int
Index identifying the position within the list of line segments.
tips3 : int
Positions of tips within chosen segment.
"""
scores_tips = np.zeros((len(segs), 4))
allindices = np.arange(self._adata.shape[0], dtype=int)
for iseg, seg in enumerate(segs):
# do not consider too small segments
if segs_tips[iseg][0] == -1: continue
# restrict distance matrix to points in segment
if not isinstance(self.distances_dpt, OnFlySymMatrix):
Dseg = self.distances_dpt[np.ix_(seg, seg)]
else:
Dseg = self.distances_dpt.restrict(seg)
third_maximizer = None
if segs_undecided[iseg]:
# check that none of our tips "connects" with a tip of the
# other segments
for jseg in range(len(segs)):
if jseg != iseg:
# take the inner tip, the "second tip" of the segment
for itip in range(2):
if (self.distances_dpt[segs_tips[jseg][1], segs_tips[iseg][itip]]
< 0.5 * self.distances_dpt[segs_tips[iseg][~itip], segs_tips[iseg][itip]]):
# logg.m(' group', iseg, 'with tip', segs_tips[iseg][itip],
# 'connects with', jseg, 'with tip', segs_tips[jseg][1], v=4)
# logg.m(' do not use the tip for "triangulation"', v=4)
third_maximizer = itip
# map the global position to the position within the segment
tips = [np.where(allindices[seg] == tip)[0][0]
for tip in segs_tips[iseg]]
# find the third point on the segment that has maximal
# added distance from the two tip points
dseg = Dseg[tips[0]] + Dseg[tips[1]]
if not np.isfinite(dseg).any():
continue
# add this point to tips, it's a third tip, we store it at the first
# position in an array called tips3
third_tip = np.argmax(dseg)
if third_maximizer is not None:
# find a fourth point that has maximal distance to all three
dseg += Dseg[third_tip]
fourth_tip = np.argmax(dseg)
if fourth_tip != tips[0] and fourth_tip != third_tip:
tips[1] = fourth_tip
dseg -= Dseg[tips[1]]
else:
dseg -= Dseg[third_tip]
tips3 = np.append(tips, third_tip)
# compute the score as ratio of the added distance to the third tip,
# to what it would be if it were on the straight line between the
# two first tips, given by Dseg[tips[:2]]
# if we did not normalize, there would be a danger of simply
# assigning the highest score to the longest segment
score = dseg[tips3[2]] / Dseg[tips3[0], tips3[1]]
score = len(seg) if self.choose_largest_segment else score # simply the number of points
logg.m(' group', iseg, 'score', score, 'n_points', len(seg),
'(too small)' if len(seg) < self.min_group_size else '', v=4)
if len(seg) <= self.min_group_size: score = 0
# write result
scores_tips[iseg, 0] = score
scores_tips[iseg, 1:] = tips3
iseg = np.argmax(scores_tips[:, 0])
if scores_tips[iseg, 0] == 0: return -1, None
tips3 = scores_tips[iseg, 1:].astype(int)
return iseg, tips3 | Out of a list of line segments, choose segment that has the most
distant second data point.
Assume the distance matrix Ddiff is sorted according to seg_idcs.
Compute all the distances.
Returns
-------
iseg : int
Index identifying the position within the list of line segments.
tips3 : int
Positions of tips within chosen segment. | Below is the instruction that describes the task:
### Input:
Out of a list of line segments, choose segment that has the most
distant second data point.
Assume the distance matrix Ddiff is sorted according to seg_idcs.
Compute all the distances.
Returns
-------
iseg : int
Index identifying the position within the list of line segments.
tips3 : int
Positions of tips within chosen segment.
### Response:
def select_segment(self, segs, segs_tips, segs_undecided) -> Tuple[int, int]:
"""Out of a list of line segments, choose segment that has the most
distant second data point.
Assume the distance matrix Ddiff is sorted according to seg_idcs.
Compute all the distances.
Returns
-------
iseg : int
Index identifying the position within the list of line segments.
tips3 : int
Positions of tips within chosen segment.
"""
scores_tips = np.zeros((len(segs), 4))
allindices = np.arange(self._adata.shape[0], dtype=int)
for iseg, seg in enumerate(segs):
# do not consider too small segments
if segs_tips[iseg][0] == -1: continue
# restrict distance matrix to points in segment
if not isinstance(self.distances_dpt, OnFlySymMatrix):
Dseg = self.distances_dpt[np.ix_(seg, seg)]
else:
Dseg = self.distances_dpt.restrict(seg)
third_maximizer = None
if segs_undecided[iseg]:
# check that none of our tips "connects" with a tip of the
# other segments
for jseg in range(len(segs)):
if jseg != iseg:
# take the inner tip, the "second tip" of the segment
for itip in range(2):
if (self.distances_dpt[segs_tips[jseg][1], segs_tips[iseg][itip]]
< 0.5 * self.distances_dpt[segs_tips[iseg][~itip], segs_tips[iseg][itip]]):
# logg.m(' group', iseg, 'with tip', segs_tips[iseg][itip],
# 'connects with', jseg, 'with tip', segs_tips[jseg][1], v=4)
# logg.m(' do not use the tip for "triangulation"', v=4)
third_maximizer = itip
# map the global position to the position within the segment
tips = [np.where(allindices[seg] == tip)[0][0]
for tip in segs_tips[iseg]]
# find the third point on the segment that has maximal
# added distance from the two tip points
dseg = Dseg[tips[0]] + Dseg[tips[1]]
if not np.isfinite(dseg).any():
continue
# add this point to tips, it's a third tip, we store it at the first
# position in an array called tips3
third_tip = np.argmax(dseg)
if third_maximizer is not None:
# find a fourth point that has maximal distance to all three
dseg += Dseg[third_tip]
fourth_tip = np.argmax(dseg)
if fourth_tip != tips[0] and fourth_tip != third_tip:
tips[1] = fourth_tip
dseg -= Dseg[tips[1]]
else:
dseg -= Dseg[third_tip]
tips3 = np.append(tips, third_tip)
# compute the score as ratio of the added distance to the third tip,
# to what it would be if it were on the straight line between the
# two first tips, given by Dseg[tips[:2]]
# if we did not normalize, there would be a danger of simply
# assigning the highest score to the longest segment
score = dseg[tips3[2]] / Dseg[tips3[0], tips3[1]]
score = len(seg) if self.choose_largest_segment else score # simply the number of points
logg.m(' group', iseg, 'score', score, 'n_points', len(seg),
'(too small)' if len(seg) < self.min_group_size else '', v=4)
if len(seg) <= self.min_group_size: score = 0
# write result
scores_tips[iseg, 0] = score
scores_tips[iseg, 1:] = tips3
iseg = np.argmax(scores_tips[:, 0])
if scores_tips[iseg, 0] == 0: return -1, None
tips3 = scores_tips[iseg, 1:].astype(int)
return iseg, tips3 |
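The "third tip" scoring step can be illustrated on a toy distance matrix. This sketch of the scoring logic assumes a 3-point segment with tips at points 0 and 1; it is not the library code itself:

```python
# Distances within a 3-point segment; tips are points 0 and 1.
Dseg = [[0.0, 4.0, 2.0],
        [4.0, 0.0, 3.0],
        [2.0, 3.0, 0.0]]
tips = (0, 1)
# Added distance of every point from the two tips.
dseg = [a + b for a, b in zip(Dseg[tips[0]], Dseg[tips[1]])]
third_tip = max(range(len(dseg)), key=dseg.__getitem__)
# Normalize by the tip-to-tip distance so long segments are not
# automatically favored, as in the method above.
score = dseg[third_tip] / Dseg[tips[0]][tips[1]]
print(third_tip, score)  # 2 1.25
```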
def rsh(self, num, cin=None):
"""Right shift the farray by *num* places.
The *num* argument must be a non-negative ``int``.
If the *cin* farray is provided, it will be shifted in.
Otherwise, the carry-in is zero.
Returns a two-tuple (farray fs, farray cout),
where *fs* is the shifted vector, and *cout* is the "carry out".
Returns a new farray.
"""
if num < 0 or num > self.size:
raise ValueError("expected 0 <= num <= {0.size}".format(self))
if cin is None:
items = [self.ftype.box(0) for _ in range(num)]
cin = self.__class__(items, ftype=self.ftype)
else:
if len(cin) != num:
raise ValueError("expected length of cin to be equal to num")
if num == 0:
return self, self.__class__([], ftype=self.ftype)
else:
fs = self.__class__(self._items[num:] + cin._items,
ftype=self.ftype)
cout = self.__class__(self._items[:num], ftype=self.ftype)
return fs, cout | Right shift the farray by *num* places.
The *num* argument must be a non-negative ``int``.
If the *cin* farray is provided, it will be shifted in.
Otherwise, the carry-in is zero.
Returns a two-tuple (farray fs, farray cout),
where *fs* is the shifted vector, and *cout* is the "carry out".
Returns a new farray. | Below is the instruction that describes the task:
### Input:
Right shift the farray by *num* places.
The *num* argument must be a non-negative ``int``.
If the *cin* farray is provided, it will be shifted in.
Otherwise, the carry-in is zero.
Returns a two-tuple (farray fs, farray cout),
where *fs* is the shifted vector, and *cout* is the "carry out".
Returns a new farray.
### Response:
def rsh(self, num, cin=None):
"""Right shift the farray by *num* places.
The *num* argument must be a non-negative ``int``.
If the *cin* farray is provided, it will be shifted in.
Otherwise, the carry-in is zero.
Returns a two-tuple (farray fs, farray cout),
where *fs* is the shifted vector, and *cout* is the "carry out".
Returns a new farray.
"""
if num < 0 or num > self.size:
raise ValueError("expected 0 <= num <= {0.size}".format(self))
if cin is None:
items = [self.ftype.box(0) for _ in range(num)]
cin = self.__class__(items, ftype=self.ftype)
else:
if len(cin) != num:
raise ValueError("expected length of cin to be equal to num")
if num == 0:
return self, self.__class__([], ftype=self.ftype)
else:
fs = self.__class__(self._items[num:] + cin._items,
ftype=self.ftype)
cout = self.__class__(self._items[:num], ftype=self.ftype)
return fs, cout |
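The shift bookkeeping is easy to mirror on plain lists. A sketch of the semantics only, not the farray type:

```python
def rsh_list(items, num, cin=None):
    # farray.rsh semantics on lists: the first `num` items become the
    # carry-out, and `cin` (zeros by default) is appended on the right.
    if num < 0 or num > len(items):
        raise ValueError("expected 0 <= num <= {}".format(len(items)))
    if cin is None:
        cin = [0] * num
    elif len(cin) != num:
        raise ValueError("expected length of cin to be equal to num")
    if num == 0:
        return items, []
    return items[num:] + cin, items[:num]

print(rsh_list([1, 1, 0, 1], 2))  # ([0, 1, 0, 0], [1, 1])
```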
def getrepositorytree(self, project_id, **kwargs):
"""
Get a list of repository files and directories in a project.
:param project_id: The ID of a project
:param path: The path inside repository. Used to get contents of subdirectories
:param ref_name: The name of a repository branch or tag or if not given the default branch
:return: dict with the tree
"""
data = {}
if kwargs:
data.update(kwargs)
request = requests.get(
'{0}/{1}/repository/tree'.format(self.projects_url, project_id), params=data,
verify=self.verify_ssl, auth=self.auth, headers=self.headers, timeout=self.timeout)
if request.status_code == 200:
return request.json()
else:
return False | Get a list of repository files and directories in a project.
:param project_id: The ID of a project
:param path: The path inside repository. Used to get contents of subdirectories
:param ref_name: The name of a repository branch or tag or if not given the default branch
:return: dict with the tree | Below is the instruction that describes the task:
### Input:
Get a list of repository files and directories in a project.
:param project_id: The ID of a project
:param path: The path inside repository. Used to get contents of subdirectories
:param ref_name: The name of a repository branch or tag or if not given the default branch
:return: dict with the tree
### Response:
def getrepositorytree(self, project_id, **kwargs):
"""
Get a list of repository files and directories in a project.
:param project_id: The ID of a project
:param path: The path inside repository. Used to get contents of subdirectories
:param ref_name: The name of a repository branch or tag or if not given the default branch
:return: dict with the tree
"""
data = {}
if kwargs:
data.update(kwargs)
request = requests.get(
'{0}/{1}/repository/tree'.format(self.projects_url, project_id), params=data,
verify=self.verify_ssl, auth=self.auth, headers=self.headers, timeout=self.timeout)
if request.status_code == 200:
return request.json()
else:
return False |
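A minimal sketch of the URL and query-parameter assembly performed above; the base URL here is hypothetical:

```python
def build_tree_request(projects_url, project_id, **kwargs):
    # Same assembly as getrepositorytree: optional kwargs become the
    # query string for GET /projects/:id/repository/tree.
    data = {}
    if kwargs:
        data.update(kwargs)
    url = '{0}/{1}/repository/tree'.format(projects_url, project_id)
    return url, data

url, params = build_tree_request(
    'https://gitlab.example.com/api/v4/projects',  # hypothetical base URL
    42, path='src', ref_name='main')
print(url)     # https://gitlab.example.com/api/v4/projects/42/repository/tree
print(params)  # {'path': 'src', 'ref_name': 'main'}
```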
def register_with_context(self, myname, context):
""" registers this build target (exclusively) with a given context """
if self.context is not None:
raise Exception("attempted to register BuildTarget with multiple "
"BuildContexts")
context.register_task(myname, self)
self._name = myname
self.context = context
for key in self.data_dependencies:
if type(self.data_dependencies[key]) is DeferredDependency:
self.data_dependencies[key].parent = myname
self.data_dependencies[key].context = context
for tname in self.data_dependencies[key].target_names:
context.register_dependency(tname, myname) | registers this build target (exclusively) with a given context | Below is the instruction that describes the task:
### Input:
registers this build target (exclusively) with a given context
### Response:
def register_with_context(self, myname, context):
""" registers this build target (exclusively) with a given context """
if self.context is not None:
raise Exception("attempted to register BuildTarget with multiple "
"BuildContexts")
context.register_task(myname, self)
self._name = myname
self.context = context
for key in self.data_dependencies:
if type(self.data_dependencies[key]) is DeferredDependency:
self.data_dependencies[key].parent = myname
self.data_dependencies[key].context = context
for tnmame in self.data_dependencies[key].target_names:
context.register_dependency(tnmame, myname) |
def post(self, *args, **kwargs):
"""Handle creation of an item.
:param args:
:param kwargs:
"""
self.initialize_post()
# Don't allow the post if the poster does not have permission
if not self.has_create_permission():
LOGGER.debug('Does not have write_permission')
self.set_status(403, self.status_message('Creation Forbidden'))
self.finish()
return
result = yield self.model.save()
if result:
self.set_status(201, self.status_message('Created'))
self.add_headers()
self.finish(self.model.as_dict())
else:
self.set_status(507, self.status_message('Creation Failed'))
self.finish() | Handle creation of an item.
:param args:
:param kwargs: | Below is the instruction that describes the task:
### Input:
Handle creation of an item.
:param args:
:param kwargs:
### Response:
def post(self, *args, **kwargs):
"""Handle creation of an item.
:param args:
:param kwargs:
"""
self.initialize_post()
# Don't allow the post if the poster does not have permission
if not self.has_create_permission():
LOGGER.debug('Does not have write_permission')
self.set_status(403, self.status_message('Creation Forbidden'))
self.finish()
return
result = yield self.model.save()
if result:
self.set_status(201, self.status_message('Created'))
self.add_headers()
self.finish(self.model.as_dict())
else:
self.set_status(507, self.status_message('Creation Failed'))
self.finish() |
def timeout_message(self, message):
'''
Handle a message timeout by removing it from the sending queue
and informing the caller
:raises: SaltReqTimeoutError
'''
future = self.send_future_map.pop(message, None)
# In a race condition the message might have been sent by the time
# we're timing it out. Make sure the future is not None
if future is not None:
del self.send_timeout_map[message]
if future.attempts < future.tries:
future.attempts += 1
log.debug('SaltReqTimeoutError, retrying. (%s/%s)', future.attempts, future.tries)
self.send(
message,
timeout=future.timeout,
tries=future.tries,
future=future,
)
else:
future.set_exception(SaltReqTimeoutError('Message timed out')) | Handle a message timeout by removing it from the sending queue
and informing the caller
:raises: SaltReqTimeoutError | Below is the instruction that describes the task:
### Input:
Handle a message timeout by removing it from the sending queue
and informing the caller
:raises: SaltReqTimeoutError
### Response:
def timeout_message(self, message):
'''
Handle a message timeout by removing it from the sending queue
and informing the caller
:raises: SaltReqTimeoutError
'''
future = self.send_future_map.pop(message, None)
# In a race condition the message might have been sent by the time
# we're timing it out. Make sure the future is not None
if future is not None:
del self.send_timeout_map[message]
if future.attempts < future.tries:
future.attempts += 1
log.debug('SaltReqTimeoutError, retrying. (%s/%s)', future.attempts, future.tries)
self.send(
message,
timeout=future.timeout,
tries=future.tries,
future=future,
)
else:
future.set_exception(SaltReqTimeoutError('Message timed out')) |
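A self-contained sketch of the retry bookkeeping in `timeout_message` above, with the transport and the timeout map replaced by a plain dict and a `resends` list (all names here are illustrative stand-ins, not Salt's real classes):

```python
class SaltReqTimeoutError(Exception):
    pass

class Future:
    """Minimal stand-in tracking retry state."""
    def __init__(self, tries):
        self.tries = tries
        self.attempts = 0
        self.exception = None
    def set_exception(self, exc):
        self.exception = exc

send_future_map = {}
resends = []  # stand-in for self.send(...)

def timeout_message(message):
    future = send_future_map.pop(message, None)
    if future is not None:
        if future.attempts < future.tries:
            future.attempts += 1
            resends.append(message)
            send_future_map[message] = future  # re-registered for the retry
        else:
            future.set_exception(SaltReqTimeoutError('Message timed out'))

fut = Future(tries=2)
send_future_map['ping'] = fut
timeout_message('ping')  # first timeout: retry
timeout_message('ping')  # second timeout: retry
timeout_message('ping')  # out of tries: exception is set on the future
```

Each timeout pops the future; it is only put back if a retry is still allowed, so a final timeout leaves the map empty and the caller sees the exception.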
def expose(url="/", methods=("GET",)):
"""
Use this decorator to expose API endpoints on your API classes.
:param url:
Relative URL for the endpoint
:param methods:
Allowed HTTP methods. By default only GET is allowed.
"""
def wrap(f):
if not hasattr(f, "_urls"):
f._urls = []
f._urls.append((url, methods))
return f
return wrap | Use this decorator to expose API endpoints on your API classes.
:param url:
Relative URL for the endpoint
:param methods:
Allowed HTTP methods. By default only GET is allowed. | Below is the instruction that describes the task:
### Input:
Use this decorator to expose API endpoints on your API classes.
:param url:
Relative URL for the endpoint
:param methods:
Allowed HTTP methods. By default only GET is allowed.
### Response:
def expose(url="/", methods=("GET",)):
"""
Use this decorator to expose API endpoints on your API classes.
:param url:
Relative URL for the endpoint
:param methods:
Allowed HTTP methods. By default only GET is allowed.
"""
def wrap(f):
if not hasattr(f, "_urls"):
f._urls = []
f._urls.append((url, methods))
return f
return wrap |
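The decorator above only records routing metadata on the function; a framework later reads `_urls` to build its URL map. A runnable usage sketch (the `ItemView` class and its routes are hypothetical):

```python
def expose(url="/", methods=("GET",)):
    def wrap(f):
        if not hasattr(f, "_urls"):
            f._urls = []
        f._urls.append((url, methods))
        return f
    return wrap

class ItemView:
    @expose("/items")                         # applied last
    @expose("/items/new", methods=("POST",))  # applied first
    def items(self):
        return "items"

# Stacking accumulates routes; the innermost decorator runs first.
routes = ItemView.items._urls
```

Because each application appends to the same `_urls` list, one handler can serve several URL/method combinations.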
def organization_subscription_create(self, data, **kwargs):
"https://developer.zendesk.com/rest_api/docs/core/organization_subscriptions#create-organization-subscription"
api_path = "/api/v2/organization_subscriptions.json"
return self.call(api_path, method="POST", data=data, **kwargs) | https://developer.zendesk.com/rest_api/docs/core/organization_subscriptions#create-organization-subscription | Below is the the instruction that describes the task:
### Input:
https://developer.zendesk.com/rest_api/docs/core/organization_subscriptions#create-organization-subscription
### Response:
def organization_subscription_create(self, data, **kwargs):
"https://developer.zendesk.com/rest_api/docs/core/organization_subscriptions#create-organization-subscription"
api_path = "/api/v2/organization_subscriptions.json"
return self.call(api_path, method="POST", data=data, **kwargs) |
def run(self):
"""Collects memory stats for specified Python program."""
existing_objects = _get_in_memory_objects()
prof, result = self.profile()
new_objects = _get_in_memory_objects()
new_obj_count = _get_obj_count_difference(new_objects, existing_objects)
result_obj_count = new_obj_count - prof.obj_overhead
# existing_objects list is also profiler overhead
result_obj_count[list] -= 1
pretty_obj_count = _format_obj_count(result_obj_count)
return {
'objectName': self._object_name,
'codeEvents': prof.code_events,
'totalEvents': len(prof.code_events),
'objectsCount': pretty_obj_count,
'result': result,
'timestamp': int(time.time())
    } | Collects memory stats for specified Python program. | Below is the instruction that describes the task:
### Input:
Collects memory stats for specified Python program.
### Response:
def run(self):
"""Collects memory stats for specified Python program."""
existing_objects = _get_in_memory_objects()
prof, result = self.profile()
new_objects = _get_in_memory_objects()
new_obj_count = _get_obj_count_difference(new_objects, existing_objects)
result_obj_count = new_obj_count - prof.obj_overhead
# existing_objects list is also profiler overhead
result_obj_count[list] -= 1
pretty_obj_count = _format_obj_count(result_obj_count)
return {
'objectName': self._object_name,
'codeEvents': prof.code_events,
'totalEvents': len(prof.code_events),
'objectsCount': pretty_obj_count,
'result': result,
'timestamp': int(time.time())
} |
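`_get_obj_count_difference` is internal to the profiler; a plausible minimal version using `collections.Counter` is sketched below. This is an assumption about its behavior (per-type counts of newly appeared objects), not the library's actual implementation:

```python
from collections import Counter

def get_obj_count_difference(new_objects, old_objects):
    """Per-type counts of objects present in new_objects beyond old_objects."""
    # Counter subtraction keeps only positive counts.
    return (Counter(type(o) for o in new_objects)
            - Counter(type(o) for o in old_objects))

before = [1, 'a']
after = [1, 'a', 'b', [1], [2]]
diff = get_obj_count_difference(after, before)  # one extra str, two extra lists
```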
def disconnect(receiver, signal=Any, sender=Any, weak=True):
"""Disconnect receiver from sender for signal
receiver -- the registered receiver to disconnect
signal -- the registered signal to disconnect
sender -- the registered sender to disconnect
weak -- the weakref state to disconnect
disconnect reverses the process of connect,
the semantics for the individual elements are
logically equivalent to a tuple of
(receiver, signal, sender, weak) used as a key
to be deleted from the internal routing tables.
(The actual process is slightly more complex
but the semantics are basically the same).
Note:
Using disconnect is not required to cleanup
routing when an object is deleted, the framework
will remove routes for deleted objects
automatically. It's only necessary to disconnect
if you want to stop routing to a live object.
returns None, may raise DispatcherTypeError or
DispatcherKeyError
"""
if signal is None:
raise errors.DispatcherTypeError(
'Signal cannot be None (receiver=%r sender=%r)'%( receiver,sender)
)
if weak: receiver = saferef.safeRef(receiver)
senderkey = id(sender)
try:
signals = connections[senderkey]
receivers = signals[signal]
except KeyError:
raise errors.DispatcherKeyError(
"""No receivers found for signal %r from sender %r""" %(
signal,
sender
)
)
try:
# also removes from receivers
_removeOldBackRefs(senderkey, signal, receiver, receivers)
except ValueError:
raise errors.DispatcherKeyError(
"""No connection to receiver %s for signal %s from sender %s""" %(
receiver,
signal,
sender
)
)
_cleanupConnections(senderkey, signal) | Disconnect receiver from sender for signal
receiver -- the registered receiver to disconnect
signal -- the registered signal to disconnect
sender -- the registered sender to disconnect
weak -- the weakref state to disconnect
disconnect reverses the process of connect,
the semantics for the individual elements are
logically equivalent to a tuple of
(receiver, signal, sender, weak) used as a key
to be deleted from the internal routing tables.
(The actual process is slightly more complex
but the semantics are basically the same).
Note:
Using disconnect is not required to cleanup
routing when an object is deleted, the framework
will remove routes for deleted objects
automatically. It's only necessary to disconnect
if you want to stop routing to a live object.
returns None, may raise DispatcherTypeError or
DispatcherKeyError | Below is the instruction that describes the task:
### Input:
Disconnect receiver from sender for signal
receiver -- the registered receiver to disconnect
signal -- the registered signal to disconnect
sender -- the registered sender to disconnect
weak -- the weakref state to disconnect
disconnect reverses the process of connect,
the semantics for the individual elements are
logically equivalent to a tuple of
(receiver, signal, sender, weak) used as a key
to be deleted from the internal routing tables.
(The actual process is slightly more complex
but the semantics are basically the same).
Note:
Using disconnect is not required to cleanup
routing when an object is deleted, the framework
will remove routes for deleted objects
automatically. It's only necessary to disconnect
if you want to stop routing to a live object.
returns None, may raise DispatcherTypeError or
DispatcherKeyError
### Response:
def disconnect(receiver, signal=Any, sender=Any, weak=True):
"""Disconnect receiver from sender for signal
receiver -- the registered receiver to disconnect
signal -- the registered signal to disconnect
sender -- the registered sender to disconnect
weak -- the weakref state to disconnect
disconnect reverses the process of connect,
the semantics for the individual elements are
logically equivalent to a tuple of
(receiver, signal, sender, weak) used as a key
to be deleted from the internal routing tables.
(The actual process is slightly more complex
but the semantics are basically the same).
Note:
Using disconnect is not required to cleanup
routing when an object is deleted, the framework
will remove routes for deleted objects
automatically. It's only necessary to disconnect
if you want to stop routing to a live object.
returns None, may raise DispatcherTypeError or
DispatcherKeyError
"""
if signal is None:
raise errors.DispatcherTypeError(
'Signal cannot be None (receiver=%r sender=%r)'%( receiver,sender)
)
if weak: receiver = saferef.safeRef(receiver)
senderkey = id(sender)
try:
signals = connections[senderkey]
receivers = signals[signal]
except KeyError:
raise errors.DispatcherKeyError(
"""No receivers found for signal %r from sender %r""" %(
signal,
sender
)
)
try:
# also removes from receivers
_removeOldBackRefs(senderkey, signal, receiver, receivers)
except ValueError:
raise errors.DispatcherKeyError(
"""No connection to receiver %s for signal %s from sender %s""" %(
receiver,
signal,
sender
)
)
_cleanupConnections(senderkey, signal) |
def brightness(sequence_number, brightness):
"""Create a brightness message"""
    return MessageWriter().string("brightness").uint64(sequence_number).uint8(int(brightness*255)).get() | Create a brightness message | Below is the instruction that describes the task:
### Input:
Create a brightness message
### Response:
def brightness(sequence_number, brightness):
"""Create a brightness message"""
return MessageWriter().string("brightness").uint64(sequence_number).uint8(int(brightness*255)).get() |
def destroy(self):
""" A reimplemented destructor.
This destructor will clear the reference to the toolkit widget
and set its parent to None.
"""
widget = self.widget
if widget is not None:
del self.widget
super(UiKitToolkitObject, self).destroy() | A reimplemented destructor.
This destructor will clear the reference to the toolkit widget
and set its parent to None. | Below is the instruction that describes the task:
### Input:
A reimplemented destructor.
This destructor will clear the reference to the toolkit widget
and set its parent to None.
### Response:
def destroy(self):
""" A reimplemented destructor.
This destructor will clear the reference to the toolkit widget
and set its parent to None.
"""
widget = self.widget
if widget is not None:
del self.widget
super(UiKitToolkitObject, self).destroy() |
def length(self):
"""
:return: Length of the ``data``.
:rtype: int
"""
if not self.__length:
self.__length = self.__get_length()
return self.__length | :return: Length of the ``data``.
:rtype: int | Below is the instruction that describes the task:
### Input:
:return: Length of the ``data``.
:rtype: int
### Response:
def length(self):
"""
:return: Length of the ``data``.
:rtype: int
"""
if not self.__length:
self.__length = self.__get_length()
return self.__length |
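A minimal self-contained version of the caching pattern in the `length` property above. One design caveat worth noting: the truthiness check means a cached value of `0` is indistinguishable from "not computed yet", so an empty payload recomputes on every access; comparing against `None` explicitly would avoid that.

```python
class Payload:
    def __init__(self, data):
        self.__data = data
        self.__length = None

    def __get_length(self):
        # Imagine this being expensive (e.g. streaming the data once).
        return len(self.__data)

    @property
    def length(self):
        if not self.__length:          # falls through for None *and* 0
            self.__length = self.__get_length()
        return self.__length

p = Payload(b"hello")
first = p.length   # computed on first access
second = p.length  # served from the cached value
```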
def normalize_weight(self, samples):
"""normalize weight
Parameters
----------
samples: list
a collection of sample, it's a (NUM_OF_INSTANCE * NUM_OF_FUNCTIONS) matrix,
representing{{w11, w12, ..., w1k}, {w21, w22, ... w2k}, ...{wk1, wk2,..., wkk}}
Returns
-------
list
samples after normalize weight
"""
for i in range(NUM_OF_INSTANCE):
total = 0
for j in range(self.effective_model_num):
total += samples[i][j]
for j in range(self.effective_model_num):
samples[i][j] /= total
return samples | normalize weight
Parameters
----------
samples: list
a collection of sample, it's a (NUM_OF_INSTANCE * NUM_OF_FUNCTIONS) matrix,
representing{{w11, w12, ..., w1k}, {w21, w22, ... w2k}, ...{wk1, wk2,..., wkk}}
Returns
-------
list
samples after normalize weight | Below is the instruction that describes the task:
### Input:
normalize weight
Parameters
----------
samples: list
a collection of sample, it's a (NUM_OF_INSTANCE * NUM_OF_FUNCTIONS) matrix,
representing{{w11, w12, ..., w1k}, {w21, w22, ... w2k}, ...{wk1, wk2,..., wkk}}
Returns
-------
list
samples after normalize weight
### Response:
def normalize_weight(self, samples):
"""normalize weight
Parameters
----------
samples: list
a collection of sample, it's a (NUM_OF_INSTANCE * NUM_OF_FUNCTIONS) matrix,
representing{{w11, w12, ..., w1k}, {w21, w22, ... w2k}, ...{wk1, wk2,..., wkk}}
Returns
-------
list
samples after normalize weight
"""
for i in range(NUM_OF_INSTANCE):
total = 0
for j in range(self.effective_model_num):
total += samples[i][j]
for j in range(self.effective_model_num):
samples[i][j] /= total
return samples |
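Stripped of the module constants (`NUM_OF_INSTANCE`, `effective_model_num`), the normalization above is just dividing each row by its sum so every row sums to 1; a self-contained version:

```python
def normalize_rows(samples):
    """Scale every row in-place so its entries sum to 1."""
    for row in samples:
        total = sum(row)
        for j in range(len(row)):
            row[j] /= total
    return samples

weights = normalize_rows([[1.0, 3.0], [2.0, 2.0]])
```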
def add_cats(self, axis, cat_data):
'''
Add categories to rows or columns using cat_data array of objects. Each object in cat_data is a dictionary with one key (category title) and value (rows/column names) that have this category. Categories will be added onto the existing categories and will be added in the order of the objects in the array.
Example ``cat_data``::
[
{
"title": "First Category",
"cats": {
"true": [
"ROS1",
"AAK1"
]
}
},
{
"title": "Second Category",
"cats": {
"something": [
"PDK4"
]
}
}
]
'''
for inst_data in cat_data:
categories.add_cats(self, axis, inst_data) | Add categories to rows or columns using cat_data array of objects. Each object in cat_data is a dictionary with one key (category title) and value (rows/column names) that have this category. Categories will be added onto the existing categories and will be added in the order of the objects in the array.
Example ``cat_data``::
[
{
"title": "First Category",
"cats": {
"true": [
"ROS1",
"AAK1"
]
}
},
{
"title": "Second Category",
"cats": {
"something": [
"PDK4"
]
}
}
] | Below is the instruction that describes the task:
### Input:
Add categories to rows or columns using cat_data array of objects. Each object in cat_data is a dictionary with one key (category title) and value (rows/column names) that have this category. Categories will be added onto the existing categories and will be added in the order of the objects in the array.
Example ``cat_data``::
[
{
"title": "First Category",
"cats": {
"true": [
"ROS1",
"AAK1"
]
}
},
{
"title": "Second Category",
"cats": {
"something": [
"PDK4"
]
}
}
]
### Response:
def add_cats(self, axis, cat_data):
'''
Add categories to rows or columns using cat_data array of objects. Each object in cat_data is a dictionary with one key (category title) and value (rows/column names) that have this category. Categories will be added onto the existing categories and will be added in the order of the objects in the array.
Example ``cat_data``::
[
{
"title": "First Category",
"cats": {
"true": [
"ROS1",
"AAK1"
]
}
},
{
"title": "Second Category",
"cats": {
"something": [
"PDK4"
]
}
}
]
'''
for inst_data in cat_data:
categories.add_cats(self, axis, inst_data) |
def rewire_inputs(data_list):
"""Rewire inputs of provided data objects.
Input parameter is a list of original and copied data object model
instances: ``[{'original': original, 'copy': copy}]``. This
function finds which objects reference other objects (in the list)
on the input and replaces original objects with the copies (mutates
copies' inputs).
"""
if len(data_list) < 2:
return data_list
mapped_ids = {bundle['original'].id: bundle['copy'].id for bundle in data_list}
for bundle in data_list:
updated = False
copy = bundle['copy']
for field_schema, fields in iterate_fields(copy.input, copy.process.input_schema):
name = field_schema['name']
value = fields[name]
if field_schema['type'].startswith('data:') and value in mapped_ids:
fields[name] = mapped_ids[value]
updated = True
elif field_schema['type'].startswith('list:data:') and any([id_ in mapped_ids for id_ in value]):
fields[name] = [mapped_ids[id_] if id_ in mapped_ids else id_ for id_ in value]
updated = True
if updated:
copy.save()
return data_list | Rewire inputs of provided data objects.
Input parameter is a list of original and copied data object model
instances: ``[{'original': original, 'copy': copy}]``. This
function finds which objects reference other objects (in the list)
on the input and replaces original objects with the copies (mutates
copies' inputs). | Below is the instruction that describes the task:
### Input:
Rewire inputs of provided data objects.
Input parameter is a list of original and copied data object model
instances: ``[{'original': original, 'copy': copy}]``. This
function finds which objects reference other objects (in the list)
on the input and replaces original objects with the copies (mutates
copies' inputs).
### Response:
def rewire_inputs(data_list):
"""Rewire inputs of provided data objects.
Input parameter is a list of original and copied data object model
instances: ``[{'original': original, 'copy': copy}]``. This
function finds which objects reference other objects (in the list)
on the input and replaces original objects with the copies (mutates
copies' inputs).
"""
if len(data_list) < 2:
return data_list
mapped_ids = {bundle['original'].id: bundle['copy'].id for bundle in data_list}
for bundle in data_list:
updated = False
copy = bundle['copy']
for field_schema, fields in iterate_fields(copy.input, copy.process.input_schema):
name = field_schema['name']
value = fields[name]
if field_schema['type'].startswith('data:') and value in mapped_ids:
fields[name] = mapped_ids[value]
updated = True
elif field_schema['type'].startswith('list:data:') and any([id_ in mapped_ids for id_ in value]):
fields[name] = [mapped_ids[id_] if id_ in mapped_ids else id_ for id_ in value]
updated = True
if updated:
copy.save()
return data_list |
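The core of the rewiring is an id substitution over each copy's input fields; a schema-free sketch follows. The flat `{name: id or [ids]}` dict here is a simplified stand-in for the real `input_schema` iteration:

```python
def remap_inputs(inputs, mapped_ids):
    """Swap original data ids for their copies; return True if anything changed."""
    updated = False
    for name, value in inputs.items():
        if isinstance(value, list):     # analogue of 'list:data:' fields
            if any(id_ in mapped_ids for id_ in value):
                inputs[name] = [mapped_ids.get(id_, id_) for id_ in value]
                updated = True
        elif value in mapped_ids:       # analogue of 'data:' fields
            inputs[name] = mapped_ids[value]
            updated = True
    return updated

mapping = {1: 11, 2: 12}                # original id -> copied id
inputs = {'reads': 1, 'genomes': [2, 3], 'threads': 8}
changed = remap_inputs(inputs, mapping)
```

Ids outside the mapping (here `3` and `8`) pass through untouched, matching how the original only rewires references to objects that were actually copied.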
def get_resources(domain, token):
"""
Returns a list of resources (data endpoints) on a Socrata domain.
The catalog API and JSON endpoint both return useful information, but the information that they return is useful
in slightly different ways. The JSON endpoint provides less information about the resource in question,
including lacking a field for what *type* of resources the entity in question is, but has the advantage of
returning only data resources (endpoints of other things, like charts and filters, are excluded). The catalog API
provides more information, and does so for all endpoints, but provides no way of filtering that set down to
resources only because of issues with its categorization of "map" entities.
Hence, to capture the actual data resources on the portal, we match the APIs against one another.
Note that it is technically possible for a resource to be published as a filter or view of a private endpoint.
This method does not capture resources published in this (highly discouraged, but nevertheless occasionally
practiced) manner.
Also note that this method does not filter out resources with a community provenance. You can filter these out
yourself downstream using the `provenance` metadata field.
Parameters
----------
domain: str
A Socrata data portal domain. "data.seattle.gov" or "data.cityofnewyork.us" for example.
token: str
A Socrata application token. Application tokens can be registered by going onto the Socrata portal in
question, creating an account, logging in, going to developer tools, and spawning a token.
Returns
-------
A list of metadata stores for all data resources on the domain.
"""
json_endpoints = get_endpoints_using_raw_json_emission(domain)
catalog_api_output = get_endpoints_using_catalog_api(domain, token)
catalog_endpoints = [d['permalink'].split("/")[-1] for d in catalog_api_output]
json_endpoints = [d['landingPage'].split("/")[-1] for d in json_endpoints['dataset']]
resources = []
for i, endpoint in enumerate(json_endpoints):
try:
catalog_ind = catalog_endpoints.index(json_endpoints[i])
except ValueError: # The catalog does not contain this dataset. Skip it.
pass
else:
resources.append(catalog_api_output[catalog_ind])
# Exclude stories, which are remixed, not published, data.
resources = [d for d in resources if d['resource']['type'] != 'story']
return resources | Returns a list of resources (data endpoints) on a Socrata domain.
The catalog API and JSON endpoint both return useful information, but the information that they return is useful
in slightly different ways. The JSON endpoint provides less information about the resource in question,
including lacking a field for what *type* of resources the entity in question is, but has the advantage of
returning only data resources (endpoints of other things, like charts and filters, are excluded). The catalog API
provides more information, and does so for all endpoints, but provides no way of filtering that set down to
resources only because of issues with its categorization of "map" entities.
Hence, to capture the actual data resources on the portal, we match the APIs against one another.
Note that it is technically possible for a resource to be published as a filter or view of a private endpoint.
This method does not capture resources published in this (highly discouraged, but nevertheless occasionally
practiced) manner.
Also note that this method does not filter out resources with a community provenance. You can filter these out
yourself downstream using the `provenance` metadata field.
Parameters
----------
domain: str
A Socrata data portal domain. "data.seattle.gov" or "data.cityofnewyork.us" for example.
token: str
A Socrata application token. Application tokens can be registered by going onto the Socrata portal in
question, creating an account, logging in, going to developer tools, and spawning a token.
Returns
-------
A list of metadata stores for all data resources on the domain. | Below is the instruction that describes the task:
### Input:
Returns a list of resources (data endpoints) on a Socrata domain.
The catalog API and JSON endpoint both return useful information, but the information that they return is useful
in slightly different ways. The JSON endpoint provides less information about the resource in question,
including lacking a field for what *type* of resources the entity in question is, but has the advantage of
returning only data resources (endpoints of other things, like charts and filters, are excluded). The catalog API
provides more information, and does so for all endpoints, but provides no way of filtering that set down to
resources only because of issues with its categorization of "map" entities.
Hence, to capture the actual data resources on the portal, we match the APIs against one another.
Note that it is technically possible for a resource to be published as a filter or view of a private endpoint.
This method does not capture resources published in this (highly discouraged, but nevertheless occasionally
practiced) manner.
Also note that this method does not filter out resources with a community provenance. You can filter these out
yourself downstream using the `provenance` metadata field.
Parameters
----------
domain: str
A Socrata data portal domain. "data.seattle.gov" or "data.cityofnewyork.us" for example.
token: str
A Socrata application token. Application tokens can be registered by going onto the Socrata portal in
question, creating an account, logging in, going to developer tools, and spawning a token.
Returns
-------
A list of metadata stores for all data resources on the domain.
### Response:
def get_resources(domain, token):
"""
Returns a list of resources (data endpoints) on a Socrata domain.
The catalog API and JSON endpoint both return useful information, but the information that they return is useful
in slightly different ways. The JSON endpoint provides less information about the resource in question,
including lacking a field for what *type* of resources the entity in question is, but has the advantage of
returning only data resources (endpoints of other things, like charts and filters, are excluded). The catalog API
provides more information, and does so for all endpoints, but provides no way of filtering that set down to
resources only because of issues with its categorization of "map" entities.
Hence, to capture the actual data resources on the portal, we match the APIs against one another.
Note that it is technically possible for a resource to be published as a filter or view of a private endpoint.
This method does not capture resources published in this (highly discouraged, but nevertheless occasionally
practiced) manner.
Also note that this method does not filter out resources with a community provenance. You can filter these out
yourself downstream using the `provenance` metadata field.
Parameters
----------
domain: str
A Socrata data portal domain. "data.seattle.gov" or "data.cityofnewyork.us" for example.
token: str
A Socrata application token. Application tokens can be registered by going onto the Socrata portal in
question, creating an account, logging in, going to developer tools, and spawning a token.
Returns
-------
A list of metadata stores for all data resources on the domain.
"""
json_endpoints = get_endpoints_using_raw_json_emission(domain)
catalog_api_output = get_endpoints_using_catalog_api(domain, token)
catalog_endpoints = [d['permalink'].split("/")[-1] for d in catalog_api_output]
json_endpoints = [d['landingPage'].split("/")[-1] for d in json_endpoints['dataset']]
resources = []
for i, endpoint in enumerate(json_endpoints):
try:
catalog_ind = catalog_endpoints.index(json_endpoints[i])
except ValueError: # The catalog does not contain this dataset. Skip it.
pass
else:
resources.append(catalog_api_output[catalog_ind])
# Exclude stories, which are remixed, not published, data.
resources = [d for d in resources if d['resource']['type'] != 'story']
return resources |
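The matching step above reduces to intersecting the trailing endpoint ids of the two listings, then dropping stories. A runnable sketch with fabricated sample records (real Socrata API responses carry many more fields):

```python
def match_resources(json_datasets, catalog_api_output):
    """Keep catalog entries whose endpoint id also appears in the JSON feed,
    then drop 'story' entries, mirroring the filtering above."""
    catalog_ids = [d['permalink'].split('/')[-1] for d in catalog_api_output]
    json_ids = [d['landingPage'].split('/')[-1] for d in json_datasets]
    resources = []
    for endpoint in json_ids:
        try:
            resources.append(catalog_api_output[catalog_ids.index(endpoint)])
        except ValueError:
            pass  # not in the catalog; skip
    return [d for d in resources if d['resource']['type'] != 'story']

catalog = [
    {'permalink': 'https://example.gov/d/abcd-1234',
     'resource': {'type': 'dataset'}},
    {'permalink': 'https://example.gov/d/wxyz-0000',
     'resource': {'type': 'story'}},
]
feed = [{'landingPage': 'https://example.gov/d/abcd-1234'},
        {'landingPage': 'https://example.gov/d/wxyz-0000'},
        {'landingPage': 'https://example.gov/d/none-9999'}]
kept = match_resources(feed, catalog)
```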
def random_sparse(strategy, prob, obj_reaction, flux_threshold):
"""Find a random minimal network of model reactions.
Given a reaction to optimize and a threshold, delete entities randomly
until the flux of the reaction to optimize falls under the threshold.
Keep deleting until no more entities can be deleted. It works
with two strategies: deleting reactions or deleting genes (reactions
related to certain genes).
Args:
strategy: :class:`.ReactionDeletionStrategy` or
:class:`.GeneDeletionStrategy`.
prob: :class:`psamm.fluxanalysis.FluxBalanceProblem`.
obj_reaction: objective reactions to optimize.
flux_threshold: threshold of max reaction flux.
"""
essential = set()
deleted = set()
for entity, deleted_reactions in strategy.iter_tests():
if obj_reaction in deleted_reactions:
logger.info(
'Marking entity {} as essential because the objective'
' reaction depends on this entity...'.format(entity))
essential.add(entity)
continue
if len(deleted_reactions) == 0:
logger.info(
'No reactions were removed when entity {}'
' was deleted'.format(entity))
deleted.add(entity)
strategy.delete(entity, deleted_reactions)
continue
logger.info('Deleted reactions: {}'.format(
', '.join(deleted_reactions)))
constr = []
for r in deleted_reactions:
flux_var = prob.get_flux_var(r)
c, = prob.prob.add_linear_constraints(flux_var == 0)
constr.append(c)
logger.info('Trying FBA without reactions {}...'.format(
', '.join(deleted_reactions)))
try:
prob.maximize(obj_reaction)
except fluxanalysis.FluxBalanceError:
logger.info(
'FBA is infeasible, marking {} as essential'.format(
entity))
for c in constr:
c.delete()
essential.add(entity)
continue
logger.debug('Reaction {} has flux {}'.format(
obj_reaction, prob.get_flux(obj_reaction)))
if prob.get_flux(obj_reaction) < flux_threshold:
for c in constr:
c.delete()
essential.add(entity)
logger.info('Entity {} was essential'.format(
entity))
else:
deleted.add(entity)
strategy.delete(entity, deleted_reactions)
logger.info('Entity {} was deleted'.format(entity))
return essential, deleted | Find a random minimal network of model reactions.
Given a reaction to optimize and a threshold, delete entities randomly
until the flux of the reaction to optimize falls under the threshold.
Keep deleting until no more entities can be deleted. It works
with two strategies: deleting reactions or deleting genes (reactions
related to certain genes).
Args:
strategy: :class:`.ReactionDeletionStrategy` or
:class:`.GeneDeletionStrategy`.
prob: :class:`psamm.fluxanalysis.FluxBalanceProblem`.
obj_reaction: objective reactions to optimize.
flux_threshold: threshold of max reaction flux. | Below is the instruction that describes the task:
### Input:
Find a random minimal network of model reactions.
Given a reaction to optimize and a threshold, delete entities randomly
until the flux of the reaction to optimize falls under the threshold.
Keep deleting until no more entities can be deleted. It works
with two strategies: deleting reactions or deleting genes (reactions
related to certain genes).
Args:
strategy: :class:`.ReactionDeletionStrategy` or
:class:`.GeneDeletionStrategy`.
prob: :class:`psamm.fluxanalysis.FluxBalanceProblem`.
obj_reaction: objective reactions to optimize.
flux_threshold: threshold of max reaction flux.
### Response:
def random_sparse(strategy, prob, obj_reaction, flux_threshold):
"""Find a random minimal network of model reactions.
Given a reaction to optimize and a threshold, delete entities randomly
until the flux of the reaction to optimize falls under the threshold.
Keep deleting until no more entities can be deleted. It works
with two strategies: deleting reactions or deleting genes (reactions
related to certain genes).
Args:
strategy: :class:`.ReactionDeletionStrategy` or
:class:`.GeneDeletionStrategy`.
prob: :class:`psamm.fluxanalysis.FluxBalanceProblem`.
obj_reaction: objective reactions to optimize.
flux_threshold: threshold of max reaction flux.
"""
essential = set()
deleted = set()
for entity, deleted_reactions in strategy.iter_tests():
if obj_reaction in deleted_reactions:
logger.info(
'Marking entity {} as essential because the objective'
' reaction depends on this entity...'.format(entity))
essential.add(entity)
continue
if len(deleted_reactions) == 0:
logger.info(
'No reactions were removed when entity {}'
' was deleted'.format(entity))
deleted.add(entity)
strategy.delete(entity, deleted_reactions)
continue
logger.info('Deleted reactions: {}'.format(
', '.join(deleted_reactions)))
constr = []
for r in deleted_reactions:
flux_var = prob.get_flux_var(r)
c, = prob.prob.add_linear_constraints(flux_var == 0)
constr.append(c)
logger.info('Trying FBA without reactions {}...'.format(
', '.join(deleted_reactions)))
try:
prob.maximize(obj_reaction)
except fluxanalysis.FluxBalanceError:
logger.info(
'FBA is infeasible, marking {} as essential'.format(
entity))
for c in constr:
c.delete()
essential.add(entity)
continue
logger.debug('Reaction {} has flux {}'.format(
obj_reaction, prob.get_flux(obj_reaction)))
if prob.get_flux(obj_reaction) < flux_threshold:
for c in constr:
c.delete()
essential.add(entity)
logger.info('Entity {} was essential'.format(
entity))
else:
deleted.add(entity)
strategy.delete(entity, deleted_reactions)
logger.info('Entity {} was deleted'.format(entity))
return essential, deleted |
def make(self):
""" turn fetched files into a local repo, make auxiliary files
"""
logger.debug("preparing to add all git files")
num_added = self.local_repo.add_all_files()
if num_added:
self.local_repo.commit("Initial import from Project Gutenberg")
file_handler = NewFilesHandler(self)
file_handler.add_new_files()
num_added = self.local_repo.add_all_files()
if num_added:
self.local_repo.commit(
"Updates Readme, contributing, license files, cover, metadata."
        ) | turn fetched files into a local repo, make auxiliary files | Below is the instruction that describes the task:
### Input:
turn fetched files into a local repo, make auxiliary files
### Response:
def make(self):
""" turn fetched files into a local repo, make auxiliary files
"""
logger.debug("preparing to add all git files")
num_added = self.local_repo.add_all_files()
if num_added:
self.local_repo.commit("Initial import from Project Gutenberg")
file_handler = NewFilesHandler(self)
file_handler.add_new_files()
num_added = self.local_repo.add_all_files()
if num_added:
self.local_repo.commit(
"Updates Readme, contributing, license files, cover, metadata."
) |
def _copy_required(lib_path, copy_filt_func, copied_libs):
""" Copy libraries required for files in `lib_path` to `lib_path`
Augment `copied_libs` dictionary with any newly copied libraries, modifying
`copied_libs` in-place - see Notes.
This is one pass of ``copy_recurse``
Parameters
----------
lib_path : str
Directory containing libraries
copy_filt_func : None or callable, optional
If None, copy any library that found libraries depend on. If callable,
called on each library name; copy where ``copy_filt_func(libname)`` is
True, don't copy otherwise
copied_libs : dict
See :func:`copy_recurse` for definition.
Notes
-----
If we need to copy another library, add that (``depended_lib_path``,
``dependings_dict``) to `copied_libs`. ``dependings_dict`` has (key,
value) pairs of (``depending_lib_path``, ``install_name``).
``depending_lib_path`` will be the original (canonical) library name, not
the copy in ``lib_path``.
    Sometimes we copy a library that further depends on a library we have
    already copied. In this case update ``copied_libs[depended_lib]`` with the
    extra dependency (as well as fixing up the install names for the depending
    library).
    For example, imagine we start with a lib path like this::
        my_lib_path/
            libA.dylib
            libB.dylib
    Our input `copied_libs` has keys ``/sys/libA.dylib``, ``/sys/libB.dylib``,
    telling us we previously copied those libraries from the ``/sys`` folder.
    On a first pass, we discover that ``libA.dylib`` depends on
    ``/sys/libC.dylib``, so we copy that.
    On a second pass, we discover that ``libC.dylib`` also depends on
    ``/sys/libB.dylib``. `copied_libs` tells us that we already have a copy of
    ``/sys/libB.dylib``, so we fix our copy of ``libC.dylib`` to point to
    ``my_lib_path/libB.dylib`` and add ``/sys/libC.dylib`` as a
    ``dependings_dict`` entry for ``copied_libs['/sys/libB.dylib']``
"""
# Paths will be prepended with `lib_path`
lib_dict = tree_libs(lib_path)
# Map library paths after copy ('copied') to path before copy ('orig')
rp_lp = realpath(lib_path)
copied2orig = dict((pjoin(rp_lp, basename(c)), c) for c in copied_libs)
for required, requirings in lib_dict.items():
        if copy_filt_func is not None and not copy_filt_func(required):
continue
if required.startswith('@'):
# May have been processed by us, or have some rpath, loader_path of
# its own. Either way, leave alone
continue
# Requiring names may well be the copies in lib_path. Replace the copy
# names with the original names for entry into `copied_libs`
procd_requirings = {}
# Set requiring lib install names to point to local copy
for requiring, orig_install_name in requirings.items():
set_install_name(requiring,
orig_install_name,
'@loader_path/' + basename(required))
# Make processed version of ``dependings_dict``
mapped_requiring = copied2orig.get(requiring, requiring)
procd_requirings[mapped_requiring] = orig_install_name
if required in copied_libs:
# Have copied this already, add any new requirings
copied_libs[required].update(procd_requirings)
continue
        # Haven't seen this one before, add entry to copied_libs
out_path = pjoin(lib_path, basename(required))
if exists(out_path):
raise DelocationError(out_path + ' already exists')
shutil.copy(required, lib_path)
copied2orig[out_path] = required
copied_libs[required] = procd_requirings | Copy libraries required for files in `lib_path` to `lib_path`
Augment `copied_libs` dictionary with any newly copied libraries, modifying
`copied_libs` in-place - see Notes.
This is one pass of ``copy_recurse``
Parameters
----------
lib_path : str
Directory containing libraries
copy_filt_func : None or callable, optional
If None, copy any library that found libraries depend on. If callable,
called on each library name; copy where ``copy_filt_func(libname)`` is
True, don't copy otherwise
copied_libs : dict
See :func:`copy_recurse` for definition.
Notes
-----
If we need to copy another library, add that (``depended_lib_path``,
``dependings_dict``) to `copied_libs`. ``dependings_dict`` has (key,
value) pairs of (``depending_lib_path``, ``install_name``).
``depending_lib_path`` will be the original (canonical) library name, not
the copy in ``lib_path``.
Sometimes we copy a library that further depends on a library we have
already copied. In this case update ``copied_libs[depended_lib]`` with the
extra dependency (as well as fixing up the install names for the depending
library).
For example, imagine we start with a lib path like this::
    my_lib_path/
        libA.dylib
        libB.dylib
Our input `copied_libs` has keys ``/sys/libA.dylib``, ``/sys/libB.dylib``,
telling us we previously copied those libraries from the ``/sys`` folder.
On a first pass, we discover that ``libA.dylib`` depends on
``/sys/libC.dylib``, so we copy that.
On a second pass, we discover that ``libC.dylib`` also depends on
``/sys/libB.dylib``. `copied_libs` tells us that we already have a copy of
``/sys/libB.dylib``, so we fix our copy of ``libC.dylib`` to point to
``my_lib_path/libB.dylib`` and add ``/sys/libC.dylib`` as a
``dependings_dict`` entry for ``copied_libs['/sys/libB.dylib']`` | Below is the instruction that describes the task:
### Input:
Copy libraries required for files in `lib_path` to `lib_path`
Augment `copied_libs` dictionary with any newly copied libraries, modifying
`copied_libs` in-place - see Notes.
This is one pass of ``copy_recurse``
Parameters
----------
lib_path : str
Directory containing libraries
copy_filt_func : None or callable, optional
If None, copy any library that found libraries depend on. If callable,
called on each library name; copy where ``copy_filt_func(libname)`` is
True, don't copy otherwise
copied_libs : dict
See :func:`copy_recurse` for definition.
Notes
-----
If we need to copy another library, add that (``depended_lib_path``,
``dependings_dict``) to `copied_libs`. ``dependings_dict`` has (key,
value) pairs of (``depending_lib_path``, ``install_name``).
``depending_lib_path`` will be the original (canonical) library name, not
the copy in ``lib_path``.
Sometimes we copy a library that further depends on a library we have
already copied. In this case update ``copied_libs[depended_lib]`` with the
extra dependency (as well as fixing up the install names for the depending
library).
For example, imagine we start with a lib path like this::
    my_lib_path/
        libA.dylib
        libB.dylib
Our input `copied_libs` has keys ``/sys/libA.dylib``, ``/sys/libB.dylib``,
telling us we previously copied those libraries from the ``/sys`` folder.
On a first pass, we discover that ``libA.dylib`` depends on
``/sys/libC.dylib``, so we copy that.
On a second pass, we discover that ``libC.dylib`` also depends on
``/sys/libB.dylib``. `copied_libs` tells us that we already have a copy of
``/sys/libB.dylib``, so we fix our copy of ``libC.dylib`` to point to
``my_lib_path/libB.dylib`` and add ``/sys/libC.dylib`` as a
``dependings_dict`` entry for ``copied_libs['/sys/libB.dylib']``
### Response:
def _copy_required(lib_path, copy_filt_func, copied_libs):
""" Copy libraries required for files in `lib_path` to `lib_path`
Augment `copied_libs` dictionary with any newly copied libraries, modifying
`copied_libs` in-place - see Notes.
This is one pass of ``copy_recurse``
Parameters
----------
lib_path : str
Directory containing libraries
copy_filt_func : None or callable, optional
If None, copy any library that found libraries depend on. If callable,
called on each library name; copy where ``copy_filt_func(libname)`` is
True, don't copy otherwise
copied_libs : dict
See :func:`copy_recurse` for definition.
Notes
-----
If we need to copy another library, add that (``depended_lib_path``,
``dependings_dict``) to `copied_libs`. ``dependings_dict`` has (key,
value) pairs of (``depending_lib_path``, ``install_name``).
``depending_lib_path`` will be the original (canonical) library name, not
the copy in ``lib_path``.
    Sometimes we copy a library that further depends on a library we have
    already copied. In this case update ``copied_libs[depended_lib]`` with the
    extra dependency (as well as fixing up the install names for the depending
    library).
    For example, imagine we start with a lib path like this::
        my_lib_path/
            libA.dylib
            libB.dylib
    Our input `copied_libs` has keys ``/sys/libA.dylib``, ``/sys/libB.dylib``,
    telling us we previously copied those libraries from the ``/sys`` folder.
    On a first pass, we discover that ``libA.dylib`` depends on
    ``/sys/libC.dylib``, so we copy that.
    On a second pass, we discover that ``libC.dylib`` also depends on
    ``/sys/libB.dylib``. `copied_libs` tells us that we already have a copy of
    ``/sys/libB.dylib``, so we fix our copy of ``libC.dylib`` to point to
    ``my_lib_path/libB.dylib`` and add ``/sys/libC.dylib`` as a
    ``dependings_dict`` entry for ``copied_libs['/sys/libB.dylib']``
"""
# Paths will be prepended with `lib_path`
lib_dict = tree_libs(lib_path)
# Map library paths after copy ('copied') to path before copy ('orig')
rp_lp = realpath(lib_path)
copied2orig = dict((pjoin(rp_lp, basename(c)), c) for c in copied_libs)
for required, requirings in lib_dict.items():
        if copy_filt_func is not None and not copy_filt_func(required):
continue
if required.startswith('@'):
# May have been processed by us, or have some rpath, loader_path of
# its own. Either way, leave alone
continue
# Requiring names may well be the copies in lib_path. Replace the copy
# names with the original names for entry into `copied_libs`
procd_requirings = {}
# Set requiring lib install names to point to local copy
for requiring, orig_install_name in requirings.items():
set_install_name(requiring,
orig_install_name,
'@loader_path/' + basename(required))
# Make processed version of ``dependings_dict``
mapped_requiring = copied2orig.get(requiring, requiring)
procd_requirings[mapped_requiring] = orig_install_name
if required in copied_libs:
# Have copied this already, add any new requirings
copied_libs[required].update(procd_requirings)
continue
        # Haven't seen this one before, add entry to copied_libs
out_path = pjoin(lib_path, basename(required))
if exists(out_path):
raise DelocationError(out_path + ' already exists')
shutil.copy(required, lib_path)
copied2orig[out_path] = required
copied_libs[required] = procd_requirings |
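The copied-to-original path bookkeeping that `_copy_required` relies on can be exercised on its own; the paths below are made up for illustration:

```python
from os.path import basename, join

def map_copies_to_originals(lib_path, copied_libs):
    """Map each copy's path inside `lib_path` back to its canonical source path,
    mirroring the `copied2orig` dict built in the function above."""
    return {join(lib_path, basename(orig)): orig for orig in copied_libs}

copied_libs = {"/sys/libA.dylib": {}, "/sys/libB.dylib": {}}
copied2orig = map_copies_to_originals("/pkg/.dylibs", copied_libs)

# A requiring path that is itself a copy resolves back to its canonical name;
# anything else falls through unchanged, as in `copied2orig.get(requiring, requiring)`.
canonical = copied2orig.get("/pkg/.dylibs/libA.dylib", "/pkg/.dylibs/libA.dylib")
untouched = copied2orig.get("/app/libZ.dylib", "/app/libZ.dylib")
```

This translation is what keeps `copied_libs` keyed on canonical library paths even when the depending files are themselves copies sitting in `lib_path`.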
def var(self):
"""Returns a symbol representing this parameter."""
if self._var is None:
self._var = symbol.var(self.name, shape=self.shape, dtype=self.dtype,
lr_mult=self.lr_mult, wd_mult=self.wd_mult,
init=self.init, stype=self._stype)
        return self._var | Returns a symbol representing this parameter. | Below is the instruction that describes the task:
### Input:
Returns a symbol representing this parameter.
### Response:
def var(self):
"""Returns a symbol representing this parameter."""
if self._var is None:
self._var = symbol.var(self.name, shape=self.shape, dtype=self.dtype,
lr_mult=self.lr_mult, wd_mult=self.wd_mult,
init=self.init, stype=self._stype)
return self._var |
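The lazy build-once-and-cache shape of `var()` can be shown without MXNet; the `Parameter` class below is a schematic stand-in:

```python
class Parameter:
    """Builds an expensive handle on first access, then reuses the cached one."""

    def __init__(self, name):
        self.name = name
        self._var = None
        self.builds = 0     # counts how often the expensive path runs

    def var(self):
        if self._var is None:
            self.builds += 1
            self._var = ("var", self.name)   # stand-in for symbol.var(...)
        return self._var


p = Parameter("weight")
first = p.var()
second = p.var()
```

Callers always get the same underlying object, so identity comparisons and downstream caching stay consistent.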
def _fullqualname_function_py3(obj):
"""Fully qualified name for 'function' objects in Python 3.
"""
if hasattr(obj, "__wrapped__"):
# Required for decorator.__version__ <= 4.0.0.
qualname = obj.__wrapped__.__qualname__
else:
qualname = obj.__qualname__
    return obj.__module__ + '.' + qualname | Fully qualified name for 'function' objects in Python 3. | Below is the instruction that describes the task:
### Input:
Fully qualified name for 'function' objects in Python 3.
### Response:
def _fullqualname_function_py3(obj):
"""Fully qualified name for 'function' objects in Python 3.
"""
if hasattr(obj, "__wrapped__"):
# Required for decorator.__version__ <= 4.0.0.
qualname = obj.__wrapped__.__qualname__
else:
qualname = obj.__qualname__
return obj.__module__ + '.' + qualname |
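The `__wrapped__`-aware lookup can be exercised end to end with `functools.wraps`, which sets `__wrapped__` on the wrapper automatically in Python 3:

```python
import functools

def fullqualname(obj):
    """Fully qualified name, preferring the wrapped function's __qualname__."""
    if hasattr(obj, "__wrapped__"):
        qualname = obj.__wrapped__.__qualname__
    else:
        qualname = obj.__qualname__
    return obj.__module__ + "." + qualname

def traced(fn):
    @functools.wraps(fn)            # copies metadata and sets wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@traced
def greet():
    return "hi"

name = fullqualname(greet)
```

Without the `__wrapped__` branch, `wrapper` decorated by older tooling could report a misleading qualified name such as `traced.<locals>.wrapper`.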
def _write(self, session, openFile, replaceParamFile):
"""
Channel Input File Write to File Method
"""
# Write lines
openFile.write('GSSHA_CHAN\n')
alpha = vwp(self.alpha, replaceParamFile)
try:
openFile.write('ALPHA%s%.6f\n' % (' ' * 7, alpha))
except:
openFile.write('ALPHA%s%s\n' % (' ' * 7, alpha))
beta = vwp(self.beta, replaceParamFile)
try:
openFile.write('BETA%s%.6f\n' % (' ' * 8, beta))
except:
openFile.write('BETA%s%s\n' % (' ' * 8, beta))
theta = vwp(self.theta, replaceParamFile)
try:
openFile.write('THETA%s%.6f\n' % (' ' * 7, theta))
except:
openFile.write('THETA%s%s\n' % (' ' * 7, theta))
openFile.write('LINKS%s%s\n' % (' ' * 7, self.links))
openFile.write('MAXNODES%s%s\n' % (' ' * 4, self.maxNodes))
# Retrieve StreamLinks
links = self.getOrderedLinks(session)
self._writeConnectivity(links=links,
fileObject=openFile)
self._writeLinks(links=links,
fileObject=openFile,
                         replaceParamFile=replaceParamFile) | Channel Input File Write to File Method | Below is the instruction that describes the task:
### Input:
Channel Input File Write to File Method
### Response:
def _write(self, session, openFile, replaceParamFile):
"""
Channel Input File Write to File Method
"""
# Write lines
openFile.write('GSSHA_CHAN\n')
alpha = vwp(self.alpha, replaceParamFile)
try:
openFile.write('ALPHA%s%.6f\n' % (' ' * 7, alpha))
except:
openFile.write('ALPHA%s%s\n' % (' ' * 7, alpha))
beta = vwp(self.beta, replaceParamFile)
try:
openFile.write('BETA%s%.6f\n' % (' ' * 8, beta))
except:
openFile.write('BETA%s%s\n' % (' ' * 8, beta))
theta = vwp(self.theta, replaceParamFile)
try:
openFile.write('THETA%s%.6f\n' % (' ' * 7, theta))
except:
openFile.write('THETA%s%s\n' % (' ' * 7, theta))
openFile.write('LINKS%s%s\n' % (' ' * 7, self.links))
openFile.write('MAXNODES%s%s\n' % (' ' * 4, self.maxNodes))
# Retrieve StreamLinks
links = self.getOrderedLinks(session)
self._writeConnectivity(links=links,
fileObject=openFile)
self._writeLinks(links=links,
fileObject=openFile,
replaceParamFile=replaceParamFile) |
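The repeated `try`/`except` around `'%.6f'` exists because a card value may be either a number or a `[PARAM]` replacement token. Factored into a helper (a sketch, not part of the original API), the intent is easier to see:

```python
def format_card(keyword, value, pad):
    """Format a card line: numeric values as %.6f, placeholder strings verbatim."""
    try:
        return '%s%s%.6f\n' % (keyword, ' ' * pad, value)
    except TypeError:               # value is a replacement-parameter string
        return '%s%s%s\n' % (keyword, ' ' * pad, value)

numeric = format_card('ALPHA', 0.5, 7)
param = format_card('ALPHA', '[ALPHA_P]', 7)
```

Catching `TypeError` explicitly, rather than the bare `except:` used above, avoids silently swallowing unrelated failures.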
def _send_email(name, email):
    "send an email to inform user of account creation"
config = __salt__['config.option']('splunk')
email_object = config.get('email')
if email_object:
cc = email_object.get('cc')
subject = email_object.get('subject')
message = email_object.get('message').format(name, name, _generate_password(email), name)
try:
mail_process = subprocess.Popen(['mail', '-s', subject, '-c', cc, email], stdin=subprocess.PIPE)
        except Exception as e:
            log.error("unable to send email to %s: %s", email, e)
            return
        mail_process.communicate(message)
    log.info("sent account creation email to %s", email) | send an email to inform user of account creation | Below is the instruction that describes the task:
### Input:
send an email to inform user of account creation
### Response:
def _send_email(name, email):
    "send an email to inform user of account creation"
config = __salt__['config.option']('splunk')
email_object = config.get('email')
if email_object:
cc = email_object.get('cc')
subject = email_object.get('subject')
message = email_object.get('message').format(name, name, _generate_password(email), name)
try:
mail_process = subprocess.Popen(['mail', '-s', subject, '-c', cc, email], stdin=subprocess.PIPE)
        except Exception as e:
            log.error("unable to send email to %s: %s", email, e)
            return
        mail_process.communicate(message)
log.info("sent account creation email to %s", email) |
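Feeding a message to a child process over stdin, as `mail_process.communicate(message)` does above, can be demonstrated with a portable child — the `mail` binary is replaced here by a tiny Python one-liner:

```python
import subprocess
import sys

def pipe_message(message):
    """Send `message` to a child process on stdin and return its stdout."""
    child = subprocess.Popen(
        [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    out, _ = child.communicate(message)
    return out

result = pipe_message("account created")
```

`communicate` writes the message, closes stdin, and waits for the child, which avoids the deadlocks that manual `stdin.write`/`wait` sequencing can cause.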
def congestionControl(Cause_presence=0):
"""CONGESTION CONTROL Section 9.3.4"""
a = TpPd(pd=0x3)
b = MessageType(mesType=0x39) # 00111001
c = CongestionLevelAndSpareHalfOctets()
packet = a / b / c
    if Cause_presence == 1:
e = CauseHdr(ieiC=0x08, eightBitC=0x0)
packet = packet / e
    return packet | CONGESTION CONTROL Section 9.3.4 | Below is the instruction that describes the task:
### Input:
CONGESTION CONTROL Section 9.3.4
### Response:
def congestionControl(Cause_presence=0):
"""CONGESTION CONTROL Section 9.3.4"""
a = TpPd(pd=0x3)
b = MessageType(mesType=0x39) # 00111001
c = CongestionLevelAndSpareHalfOctets()
packet = a / b / c
    if Cause_presence == 1:
e = CauseHdr(ieiC=0x08, eightBitC=0x0)
packet = packet / e
return packet |
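The optional-IE pattern — append the Cause information element only when requested — can be sketched with plain bytes instead of Scapy layers (the field values below are illustrative, not a full 9.3.4 encoding):

```python
def build_congestion_control(cause=None):
    """Assemble a minimal message: PD + message type, plus an optional Cause IE."""
    packet = bytes([0x03, 0x39])            # protocol discriminator, message type
    if cause is not None:
        packet += bytes([0x08, cause])      # Cause IE: IEI 0x08 followed by value
    return packet

plain = build_congestion_control()
with_cause = build_congestion_control(cause=0x00)
```

Keeping the mandatory part fixed and conditionally concatenating optional elements mirrors how the Scapy `/` operator stacks layers above.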
def ParseHeader(cls, script_data):
"""Parse a script integrity header.
This function makes sure any integrity hashes are correctly parsed and
returns a ScriptHeader structure containing the information that it
was able to parse out.
Args:
script_data (bytearray): The script that we should parse.
Raises:
ArgumentError: If the script contains malformed data that
cannot be parsed.
Returns:
ScriptHeader: The parsed script header information
"""
if len(script_data) < UpdateScript.SCRIPT_HEADER_LENGTH:
raise ArgumentError("Script is too short to contain a script header",
length=len(script_data), header_length=UpdateScript.SCRIPT_HEADER_LENGTH)
embedded_hash, magic, total_length = struct.unpack_from("<16sLL", script_data)
if magic != UpdateScript.SCRIPT_MAGIC:
raise ArgumentError("Script has invalid magic value", expected=UpdateScript.SCRIPT_MAGIC, found=magic)
if total_length != len(script_data):
raise ArgumentError("Script length does not match embedded length",
embedded_length=total_length, length=len(script_data))
hashed_data = script_data[16:]
sha = hashlib.sha256()
sha.update(hashed_data)
hash_value = sha.digest()[:16]
if not compare_digest(embedded_hash, hash_value):
raise ArgumentError("Script has invalid embedded hash", embedded_hash=hexlify(embedded_hash),
calculated_hash=hexlify(hash_value))
return ScriptHeader(UpdateScript.SCRIPT_HEADER_LENGTH, False, True, False) | Parse a script integrity header.
This function makes sure any integrity hashes are correctly parsed and
returns a ScriptHeader structure containing the information that it
was able to parse out.
Args:
script_data (bytearray): The script that we should parse.
Raises:
ArgumentError: If the script contains malformed data that
cannot be parsed.
Returns:
        ScriptHeader: The parsed script header information | Below is the instruction that describes the task:
### Input:
Parse a script integrity header.
This function makes sure any integrity hashes are correctly parsed and
returns a ScriptHeader structure containing the information that it
was able to parse out.
Args:
script_data (bytearray): The script that we should parse.
Raises:
ArgumentError: If the script contains malformed data that
cannot be parsed.
Returns:
ScriptHeader: The parsed script header information
### Response:
def ParseHeader(cls, script_data):
"""Parse a script integrity header.
This function makes sure any integrity hashes are correctly parsed and
returns a ScriptHeader structure containing the information that it
was able to parse out.
Args:
script_data (bytearray): The script that we should parse.
Raises:
ArgumentError: If the script contains malformed data that
cannot be parsed.
Returns:
ScriptHeader: The parsed script header information
"""
if len(script_data) < UpdateScript.SCRIPT_HEADER_LENGTH:
raise ArgumentError("Script is too short to contain a script header",
length=len(script_data), header_length=UpdateScript.SCRIPT_HEADER_LENGTH)
embedded_hash, magic, total_length = struct.unpack_from("<16sLL", script_data)
if magic != UpdateScript.SCRIPT_MAGIC:
raise ArgumentError("Script has invalid magic value", expected=UpdateScript.SCRIPT_MAGIC, found=magic)
if total_length != len(script_data):
raise ArgumentError("Script length does not match embedded length",
embedded_length=total_length, length=len(script_data))
hashed_data = script_data[16:]
sha = hashlib.sha256()
sha.update(hashed_data)
hash_value = sha.digest()[:16]
if not compare_digest(embedded_hash, hash_value):
raise ArgumentError("Script has invalid embedded hash", embedded_hash=hexlify(embedded_hash),
calculated_hash=hexlify(hash_value))
return ScriptHeader(UpdateScript.SCRIPT_HEADER_LENGTH, False, True, False) |
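The header layout checked above — a 16-byte truncated SHA-256 followed by a magic word and a total length, with the hash computed over everything after the digest — can be round-tripped in isolation. The magic constant below is made up, not the real `SCRIPT_MAGIC`:

```python
import hashlib
import struct
from hmac import compare_digest

MAGIC = 0x1F2E3D4C          # illustrative stand-in for SCRIPT_MAGIC

def build_script(payload):
    """Prefix payload with magic/length, then a truncated hash over all of it."""
    body = struct.pack("<LL", MAGIC, 24 + len(payload)) + payload
    return hashlib.sha256(body).digest()[:16] + body

def verify_script(script):
    embedded, magic, total = struct.unpack_from("<16sLL", script)
    if magic != MAGIC or total != len(script):
        return False
    actual = hashlib.sha256(script[16:]).digest()[:16]
    return compare_digest(embedded, actual)     # constant-time comparison

script = build_script(b"sample payload")
```

Using `compare_digest` instead of `==` matters whenever the hash doubles as an authenticity check, since it does not leak how many leading bytes matched.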
def list_tags(self, image_name):
# type: (str) -> Iterator[str]
""" List all tags for the given image stored in the registry.
Args:
image_name (str):
The name of the image to query. The image must be present on the
registry for this call to return any values.
Returns:
list[str]: List of tags for that image.
"""
tags_url = self.registry_url + '/v2/{}/tags/list'
r = self.get(tags_url.format(image_name), auth=self.auth)
data = r.json()
if 'tags' in data:
return reversed(sorted(data['tags']))
return [] | List all tags for the given image stored in the registry.
Args:
image_name (str):
The name of the image to query. The image must be present on the
registry for this call to return any values.
Returns:
        list[str]: List of tags for that image. | Below is the instruction that describes the task:
### Input:
List all tags for the given image stored in the registry.
Args:
image_name (str):
The name of the image to query. The image must be present on the
registry for this call to return any values.
Returns:
list[str]: List of tags for that image.
### Response:
def list_tags(self, image_name):
# type: (str) -> Iterator[str]
""" List all tags for the given image stored in the registry.
Args:
image_name (str):
The name of the image to query. The image must be present on the
registry for this call to return any values.
Returns:
list[str]: List of tags for that image.
"""
tags_url = self.registry_url + '/v2/{}/tags/list'
r = self.get(tags_url.format(image_name), auth=self.auth)
data = r.json()
if 'tags' in data:
return reversed(sorted(data['tags']))
return [] |
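The descending sort in `list_tags` is lexicographic, which is worth knowing: `v1.10` sorts before `v1.2`. A stand-alone version of the tag-ordering step, operating on an already-fetched payload, makes that visible:

```python
def newest_first(payload):
    """Sort a registry tag payload descending; a missing 'tags' key yields []."""
    tags = payload.get('tags') or []
    return sorted(tags, reverse=True)       # a list, unlike reversed(sorted(...))

tags = newest_first({'tags': ['v1.0', 'v1.2', 'v1.10', 'latest']})
empty = newest_first({})
```

For true semantic-version ordering you would pass a parsing `key=` to `sorted` instead of relying on plain string comparison.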
def remove_provider(self, id):
'''
Remove the provider with the given id or :term:`URI`.
:param str id: The identifier for the provider.
:returns: A :class:`skosprovider.providers.VocabularyProvider` or
`False` if the id is unknown.
'''
if id in self.providers:
p = self.providers.get(id, False)
del self.providers[id]
del self.concept_scheme_uri_map[p.concept_scheme.uri]
return p
elif id in self.concept_scheme_uri_map:
id = self.concept_scheme_uri_map[id]
return self.remove_provider(id)
else:
return False | Remove the provider with the given id or :term:`URI`.
:param str id: The identifier for the provider.
:returns: A :class:`skosprovider.providers.VocabularyProvider` or
        `False` if the id is unknown. | Below is the instruction that describes the task:
### Input:
Remove the provider with the given id or :term:`URI`.
:param str id: The identifier for the provider.
:returns: A :class:`skosprovider.providers.VocabularyProvider` or
`False` if the id is unknown.
### Response:
def remove_provider(self, id):
'''
Remove the provider with the given id or :term:`URI`.
:param str id: The identifier for the provider.
:returns: A :class:`skosprovider.providers.VocabularyProvider` or
`False` if the id is unknown.
'''
if id in self.providers:
p = self.providers.get(id, False)
del self.providers[id]
del self.concept_scheme_uri_map[p.concept_scheme.uri]
return p
elif id in self.concept_scheme_uri_map:
id = self.concept_scheme_uri_map[id]
return self.remove_provider(id)
else:
return False |
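The id-or-URI resolution above amounts to a two-level lookup: try the id map first, then translate a URI to an id and recurse. A self-contained sketch of that registry shape (not the skosprovider API itself):

```python
class Registry:
    """Providers addressable either by identifier or by concept-scheme URI."""

    def __init__(self):
        self.providers = {}     # id -> provider
        self.uri_map = {}       # concept-scheme URI -> id

    def add(self, pid, uri, provider):
        self.providers[pid] = provider
        self.uri_map[uri] = pid

    def remove(self, key):
        if key in self.providers:
            provider = self.providers.pop(key)
            # drop every URI alias that pointed at this id
            self.uri_map = {u: p for u, p in self.uri_map.items() if p != key}
            return provider
        if key in self.uri_map:
            return self.remove(self.uri_map[key])   # URI -> id, then retry
        return False

reg = Registry()
reg.add('TREES', 'urn:x-skosprovider:trees', {'name': 'Trees'})
removed = reg.remove('urn:x-skosprovider:trees')    # removal by URI
```

The recursion bottoms out after at most one hop, because the URI map only ever points at ids.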
def cpp_best_split_full_model(X, Uy, C, S, U, noderange, delta,
save_memory=False):
    """wrapper calling cpp splitting function"""
    return CSP.best_split_full_model(X, Uy, C, S, U, noderange, delta) | wrapper calling cpp splitting function | Below is the instruction that describes the task:
### Input:
wrapper calling cpp splitting function
### Response:
def cpp_best_split_full_model(X, Uy, C, S, U, noderange, delta,
save_memory=False):
    """wrapper calling cpp splitting function"""
return CSP.best_split_full_model(X, Uy, C, S, U, noderange, delta) |
def collectTriggers(self, rgx, code):
"""Return a dictionary of triggers and their corresponding matches
from the code.
"""
return {m.group(0): m for m in re.finditer(rgx, code)} | Return a dictionary of triggers and their corresponding matches
from the code. | Below is the the instruction that describes the task:
### Input:
Return a dictionary of triggers and their corresponding matches
from the code.
### Response:
def collectTriggers(self, rgx, code):
"""Return a dictionary of triggers and their corresponding matches
from the code.
"""
return {m.group(0): m for m in re.finditer(rgx, code)} |
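A runnable version of the trigger-collection comprehension; note that when the same trigger text matches more than once, the dict keeps only the last match object:

```python
import re

def collect_triggers(rgx, code):
    """Map each regex match's text to its re.Match object."""
    return {m.group(0): m for m in re.finditer(rgx, code)}

triggers = collect_triggers(r'@\w+', 'use @alpha then @beta and @alpha again')
```

If every occurrence matters, accumulate into a `dict` of lists instead of letting later matches overwrite earlier ones.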
def f_remove_child(self, name, recursive=False, predicate=None):
"""Removes a child of the group.
Note that groups and leaves are only removed from the current trajectory in RAM.
If the trajectory is stored to disk, this data is not affected. Thus, removing children
        can only be used to free RAM memory!
If you want to free memory on disk via your storage service,
use :func:`~pypet.trajectory.Trajectory.f_delete_items` of your trajectory.
:param name:
Name of child, naming by grouping is NOT allowed ('groupA.groupB.childC'),
child must be direct successor of current node.
:param recursive:
Must be true if child is a group that has children. Will remove
the whole subtree in this case. Otherwise a Type Error is thrown.
:param predicate:
Predicate which can evaluate for each node to ``True`` in order to remove the node or
``False`` if the node should be kept. Leave ``None`` if you want to remove all nodes.
:raises:
TypeError if recursive is false but there are children below the node.
ValueError if child does not exist.
"""
if name not in self._children:
raise ValueError('Your group `%s` does not contain the child `%s`.' %
(self.v_full_name, name))
else:
child = self._children[name]
if (name not in self._links and
not child.v_is_leaf and
child.f_has_children() and
not recursive):
raise TypeError('Cannot remove child. It is a group with children. Use'
' f_remove with ``recursive = True``')
else:
self._nn_interface._remove_subtree(self, name, predicate) | Removes a child of the group.
Note that groups and leaves are only removed from the current trajectory in RAM.
If the trajectory is stored to disk, this data is not affected. Thus, removing children
can only be used to free RAM memory!
If you want to free memory on disk via your storage service,
use :func:`~pypet.trajectory.Trajectory.f_delete_items` of your trajectory.
:param name:
Name of child, naming by grouping is NOT allowed ('groupA.groupB.childC'),
child must be direct successor of current node.
:param recursive:
Must be true if child is a group that has children. Will remove
the whole subtree in this case. Otherwise a Type Error is thrown.
:param predicate:
Predicate which can evaluate for each node to ``True`` in order to remove the node or
``False`` if the node should be kept. Leave ``None`` if you want to remove all nodes.
:raises:
TypeError if recursive is false but there are children below the node.
        ValueError if child does not exist. | Below is the instruction that describes the task:
### Input:
Removes a child of the group.
Note that groups and leaves are only removed from the current trajectory in RAM.
If the trajectory is stored to disk, this data is not affected. Thus, removing children
can only be used to free RAM memory!
If you want to free memory on disk via your storage service,
use :func:`~pypet.trajectory.Trajectory.f_delete_items` of your trajectory.
:param name:
Name of child, naming by grouping is NOT allowed ('groupA.groupB.childC'),
child must be direct successor of current node.
:param recursive:
Must be true if child is a group that has children. Will remove
the whole subtree in this case. Otherwise a Type Error is thrown.
:param predicate:
Predicate which can evaluate for each node to ``True`` in order to remove the node or
``False`` if the node should be kept. Leave ``None`` if you want to remove all nodes.
:raises:
TypeError if recursive is false but there are children below the node.
ValueError if child does not exist.
### Response:
def f_remove_child(self, name, recursive=False, predicate=None):
"""Removes a child of the group.
Note that groups and leaves are only removed from the current trajectory in RAM.
If the trajectory is stored to disk, this data is not affected. Thus, removing children
        can only be used to free RAM memory!
If you want to free memory on disk via your storage service,
use :func:`~pypet.trajectory.Trajectory.f_delete_items` of your trajectory.
:param name:
Name of child, naming by grouping is NOT allowed ('groupA.groupB.childC'),
child must be direct successor of current node.
:param recursive:
Must be true if child is a group that has children. Will remove
the whole subtree in this case. Otherwise a Type Error is thrown.
:param predicate:
Predicate which can evaluate for each node to ``True`` in order to remove the node or
``False`` if the node should be kept. Leave ``None`` if you want to remove all nodes.
:raises:
TypeError if recursive is false but there are children below the node.
ValueError if child does not exist.
"""
if name not in self._children:
raise ValueError('Your group `%s` does not contain the child `%s`.' %
(self.v_full_name, name))
else:
child = self._children[name]
if (name not in self._links and
not child.v_is_leaf and
child.f_has_children() and
not recursive):
raise TypeError('Cannot remove child. It is a group with children. Use'
' f_remove with ``recursive = True``')
else:
self._nn_interface._remove_subtree(self, name, predicate) |
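The guard logic — refuse to remove a non-empty group unless `recursive` is set — can be modeled on a tiny tree (the `Node` class is a schematic, not the pypet API):

```python
class Node:
    """Minimal group/leaf node mirroring the removal rules above."""

    def __init__(self, name, is_leaf=False):
        self.name = name
        self.is_leaf = is_leaf
        self.children = {}

    def remove_child(self, name, recursive=False):
        if name not in self.children:
            raise ValueError('no child named %r' % name)
        child = self.children[name]
        if not child.is_leaf and child.children and not recursive:
            raise TypeError('%r is a group with children; pass recursive=True'
                            % name)
        del self.children[name]

root = Node('root')
group = Node('results')
group.children['run_1'] = Node('run_1', is_leaf=True)
root.children['results'] = group

try:
    root.remove_child('results')            # refused: group still has children
    refused = False
except TypeError:
    refused = True
root.remove_child('results', recursive=True)
```

Raising instead of silently cascading forces callers to be explicit before an entire subtree disappears.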
def clean_restructuredtext(form_instance, content):
"""
RST syntax validation
"""
if content:
errors = SourceReporter(content)
if errors:
raise ValidationError(map(map_parsing_errors, errors))
    return content | RST syntax validation | Below is the instruction that describes the task:
### Input:
RST syntax validation
### Response:
def clean_restructuredtext(form_instance, content):
"""
RST syntax validation
"""
if content:
errors = SourceReporter(content)
if errors:
raise ValidationError(map(map_parsing_errors, errors))
return content |
def _get_new_alive_state(self, new_seq, new_log_probs, new_cache):
"""Gather the top k sequences that are still alive.
Args:
new_seq: New sequences generated by growing the current alive sequences
int32 tensor with shape [batch_size, 2 * beam_size, cur_index + 1]
new_log_probs: Log probabilities of new sequences
float32 tensor with shape [batch_size, beam_size]
new_cache: Dict of cached values for each sequence.
Returns:
Dictionary with alive keys from _StateKeys:
{Top beam_size sequences that are still alive (don't end with eos_id)
Log probabilities of top alive sequences
Dict cache storing decoder states for top alive sequences}
"""
# To prevent finished sequences from being considered, set log probs to -INF
new_finished_flags = tf.equal(new_seq[:, :, -1], self.eos_id)
new_log_probs += tf.to_float(new_finished_flags) * -INF
top_alive_seq, top_alive_log_probs, top_alive_cache = _gather_topk_beams(
[new_seq, new_log_probs, new_cache], new_log_probs, self.batch_size,
self.beam_size)
return {
_StateKeys.ALIVE_SEQ: top_alive_seq,
_StateKeys.ALIVE_LOG_PROBS: top_alive_log_probs,
_StateKeys.ALIVE_CACHE: top_alive_cache
} | Gather the top k sequences that are still alive.
Args:
new_seq: New sequences generated by growing the current alive sequences
int32 tensor with shape [batch_size, 2 * beam_size, cur_index + 1]
new_log_probs: Log probabilities of new sequences
float32 tensor with shape [batch_size, beam_size]
new_cache: Dict of cached values for each sequence.
Returns:
Dictionary with alive keys from _StateKeys:
{Top beam_size sequences that are still alive (don't end with eos_id)
Log probabilities of top alive sequences
      Dict cache storing decoder states for top alive sequences} | Below is the instruction that describes the task:
### Input:
Gather the top k sequences that are still alive.
Args:
new_seq: New sequences generated by growing the current alive sequences
int32 tensor with shape [batch_size, 2 * beam_size, cur_index + 1]
new_log_probs: Log probabilities of new sequences
float32 tensor with shape [batch_size, beam_size]
new_cache: Dict of cached values for each sequence.
Returns:
Dictionary with alive keys from _StateKeys:
{Top beam_size sequences that are still alive (don't end with eos_id)
Log probabilities of top alive sequences
Dict cache storing decoder states for top alive sequences}
### Response:
def _get_new_alive_state(self, new_seq, new_log_probs, new_cache):
"""Gather the top k sequences that are still alive.
Args:
new_seq: New sequences generated by growing the current alive sequences
int32 tensor with shape [batch_size, 2 * beam_size, cur_index + 1]
new_log_probs: Log probabilities of new sequences
float32 tensor with shape [batch_size, beam_size]
new_cache: Dict of cached values for each sequence.
Returns:
Dictionary with alive keys from _StateKeys:
{Top beam_size sequences that are still alive (don't end with eos_id)
Log probabilities of top alive sequences
Dict cache storing decoder states for top alive sequences}
"""
# To prevent finished sequences from being considered, set log probs to -INF
new_finished_flags = tf.equal(new_seq[:, :, -1], self.eos_id)
new_log_probs += tf.to_float(new_finished_flags) * -INF
top_alive_seq, top_alive_log_probs, top_alive_cache = _gather_topk_beams(
[new_seq, new_log_probs, new_cache], new_log_probs, self.batch_size,
self.beam_size)
return {
_StateKeys.ALIVE_SEQ: top_alive_seq,
_StateKeys.ALIVE_LOG_PROBS: top_alive_log_probs,
_StateKeys.ALIVE_CACHE: top_alive_cache
} |
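The gather above masks finished candidates with `-INF` before keeping the top `beam_size` survivors. A minimal pure-Python sketch of that mask-then-top-k step, using lists as stand-ins for the tensors (the `top_alive` name and flat shapes are hypothetical simplifications):

```python
import math

def top_alive(seqs, log_probs, eos_id, k):
    # Sequences ending in eos_id are finished: mask their scores with
    # -inf so they cannot be selected, then keep the k best survivors.
    masked = [lp if seq[-1] != eos_id else -math.inf
              for seq, lp in zip(seqs, log_probs)]
    order = sorted(range(len(seqs)), key=lambda i: masked[i],
                   reverse=True)[:k]
    return [seqs[i] for i in order], [masked[i] for i in order]

seqs = [[1, 2, 0], [1, 3, 4], [1, 5, 6]]        # eos_id = 0, first is finished
alive_seqs, alive_probs = top_alive(seqs, [-0.1, -0.5, -0.9], eos_id=0, k=2)
```

The finished sequence loses even though it had the best raw score, which is exactly why the masking happens before the top-k gather.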
def load_plugins(self, plugin_dirs=None, quiet=True):
"""
Load plugins in `sys.path` and :attr:`plugin_dirs`
Parameters
----------
plugin_dirs : list or tuple of string, optional
A list or tuple of plugin directory path
quiet : bool, optional
If True, suppress error messages
"""
from pkg_resources import working_set
from pkg_resources import iter_entry_points
from pkg_resources import Environment
if plugin_dirs is None:
plugin_dirs = []
plugin_dirs.append(environment.get_system_plugins_directory())
plugin_dirs.append(environment.get_user_plugins_directory())
distributions, errors = working_set.find_plugins(
Environment(plugin_dirs)
)
map(working_set.add, distributions)
if not quiet:
# display error info
for distribution, error in errors:
print distribution, error
for entry_point in iter_entry_points(self.ENTRY_POINT):
# load entry point
plugin = entry_point.load()
# if plugin is callable and `manually` is True, initialize manually
if callable(plugin) and getattr(plugin, 'manually', False):
# manually initialize plugin
plugin(self)
else:
# automatically initialize plugin
self.register(entry_point.name, plugin) | Load plugins in `sys.path` and :attr:`plugin_dirs`
Parameters
----------
plugin_dirs : list or tuple of string, optional
A list or tuple of plugin directory path
quiet : bool, optional
If True, suppress error messages | Below is the instruction that describes the task:
### Input:
Load plugins in `sys.path` and :attr:`plugin_dirs`
Parameters
----------
plugin_dirs : list or tuple of string, optional
A list or tuple of plugin directory path
quiet : bool, optional
If True, suppress error messages
### Response:
def load_plugins(self, plugin_dirs=None, quiet=True):
"""
Load plugins in `sys.path` and :attr:`plugin_dirs`
Parameters
----------
plugin_dirs : list or tuple of string, optional
A list or tuple of plugin directory path
quiet : bool, optional
If True, suppress error messages
"""
from pkg_resources import working_set
from pkg_resources import iter_entry_points
from pkg_resources import Environment
if plugin_dirs is None:
plugin_dirs = []
plugin_dirs.append(environment.get_system_plugins_directory())
plugin_dirs.append(environment.get_user_plugins_directory())
distributions, errors = working_set.find_plugins(
Environment(plugin_dirs)
)
map(working_set.add, distributions)
if not quiet:
# display error info
for distribution, error in errors:
print distribution, error
for entry_point in iter_entry_points(self.ENTRY_POINT):
# load entry point
plugin = entry_point.load()
# if plugin is callable and `manually` is True, initialize manually
if callable(plugin) and getattr(plugin, 'manually', False):
# manually initialize plugin
plugin(self)
else:
# automatically initialize plugin
self.register(entry_point.name, plugin) |
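The dispatch at the end of `load_plugins` — call a plugin flagged `manually = True`, register everything else — can be sketched without `pkg_resources`. The dict registry and names below are hypothetical stand-ins for the entry points and for `self.register`:

```python
def initialize_plugins(entry_points, registry):
    # Callables flagged with ``manually = True`` are invoked directly
    # (here they receive the registry instead of the application object);
    # everything else is registered under its entry-point name.
    for name, plugin in entry_points:
        if callable(plugin) and getattr(plugin, 'manually', False):
            plugin(registry)
        else:
            registry[name] = plugin

def manual_plugin(reg):
    reg['manual'] = 'initialized'
manual_plugin.manually = True

registry = {}
initialize_plugins([('auto', 42), ('manual', manual_plugin)], registry)
```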
def set_state_from_file(self, filename):
"""Sets the state of the sampler back to the instance saved in a file.
"""
with self.io(filename, 'r') as fp:
rstate = fp.read_random_state()
# set the numpy random state
numpy.random.set_state(rstate)
# set emcee's generator to the same state
self._sampler.random_state = rstate | Sets the state of the sampler back to the instance saved in a file. | Below is the instruction that describes the task:
### Input:
Sets the state of the sampler back to the instance saved in a file.
### Response:
def set_state_from_file(self, filename):
"""Sets the state of the sampler back to the instance saved in a file.
"""
with self.io(filename, 'r') as fp:
rstate = fp.read_random_state()
# set the numpy random state
numpy.random.set_state(rstate)
# set emcee's generator to the same state
self._sampler.random_state = rstate |
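The save/restore contract used here — capture a generator state, later feed it back to reproduce the same draws — follows the same pattern with the standard library's `random` module (`numpy.random.get_state`/`set_state` behave analogously for NumPy's generator):

```python
import random

state = random.getstate()                     # snapshot the generator state
first = [random.random() for _ in range(3)]   # draw some numbers
random.setstate(state)                        # rewind to the snapshot
replay = [random.random() for _ in range(3)]  # identical sequence again
```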
def getOutput(self):
"""
Returns the combined output of stdout and stderr
"""
output = self.stdout
if self.stdout:
output += '\r\n'
output += self.stderr
return output | Returns the combined output of stdout and stderr | Below is the instruction that describes the task:
### Input:
Returns the combined output of stdout and stderr
### Response:
def getOutput(self):
"""
Returns the combined output of stdout and stderr
"""
output = self.stdout
if self.stdout:
output += '\r\n'
output += self.stderr
return output |
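A runnable sketch of the method on a minimal holder class (the `CommandResult` name is a hypothetical stand-in for whatever object captured the streams) — note the `\r\n` separator is only inserted when stdout is non-empty:

```python
class CommandResult:
    # Minimal stand-in exposing the two attributes the method reads.
    def __init__(self, stdout, stderr):
        self.stdout = stdout
        self.stderr = stderr

    def getOutput(self):
        output = self.stdout
        if self.stdout:          # only add a separator when stdout has content
            output += '\r\n'
        output += self.stderr
        return output

combined = CommandResult('ok', 'warn').getOutput()   # 'ok\r\nwarn'
stderr_only = CommandResult('', 'warn').getOutput()  # 'warn', no separator
```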
def get(self, request, *args, **kwargs):
"""
method called on GET request on this view
:param django.http.HttpRequest request: The current request object
"""
logger.info("logout requested")
# initialize the class attributes
self.init_get(request)
# if CAS federation mode is enabled, back up the provider before flushing the sessions
if settings.CAS_FEDERATE:
try:
user = FederatedUser.get_from_federated_username(
self.request.session.get("username")
)
auth = CASFederateValidateUser(user.provider, service_url="")
except FederatedUser.DoesNotExist:
auth = None
session_nb = self.logout(self.request.GET.get("all"))
# if CAS federation mode is enabled, redirect to user CAS logout page, appending the
# current querystring
if settings.CAS_FEDERATE:
if auth is not None:
params = utils.copy_params(request.GET, ignore={"forget_provider"})
url = auth.get_logout_url()
response = HttpResponseRedirect(utils.update_url(url, params))
if request.GET.get("forget_provider"):
response.delete_cookie("remember_provider")
return response
# if service is set, redirect to service after logout
if self.service:
list(messages.get_messages(request)) # clean messages before leaving the django app
return HttpResponseRedirect(self.service)
# if service is not set but url is set, redirect to url after logout
elif self.url:
list(messages.get_messages(request)) # clean messages before leaving the django app
return HttpResponseRedirect(self.url)
else:
# build logout message depending on the number of sessions the user logs out of
if session_nb == 1:
logout_msg = mark_safe(_(
"<h3>Logout successful</h3>"
"You have successfully logged out from the Central Authentication Service. "
"For security reasons, close your web browser."
))
elif session_nb > 1:
logout_msg = mark_safe(_(
"<h3>Logout successful</h3>"
"You have successfully logged out from %d sessions of the Central "
"Authentication Service. "
"For security reasons, close your web browser."
) % session_nb)
else:
logout_msg = mark_safe(_(
"<h3>Logout successful</h3>"
"You were already logged out from the Central Authentication Service. "
"For security reasons, close your web browser."
))
# depending on settings, redirect to the login page with a logout message or display
# the logout page. The default is to display the logout page.
if settings.CAS_REDIRECT_TO_LOGIN_AFTER_LOGOUT:
messages.add_message(request, messages.SUCCESS, logout_msg)
if self.ajax:
url = reverse("cas_server:login")
data = {
'status': 'success',
'detail': 'logout',
'url': url,
'session_nb': session_nb
}
return json_response(request, data)
else:
return redirect("cas_server:login")
else:
if self.ajax:
data = {'status': 'success', 'detail': 'logout', 'session_nb': session_nb}
return json_response(request, data)
else:
return render(
request,
settings.CAS_LOGOUT_TEMPLATE,
utils.context({'logout_msg': logout_msg})
) | method called on GET request on this view
:param django.http.HttpRequest request: The current request object | Below is the instruction that describes the task:
### Input:
method called on GET request on this view
:param django.http.HttpRequest request: The current request object
### Response:
def get(self, request, *args, **kwargs):
"""
method called on GET request on this view
:param django.http.HttpRequest request: The current request object
"""
logger.info("logout requested")
# initialize the class attributes
self.init_get(request)
# if CAS federation mode is enabled, back up the provider before flushing the sessions
if settings.CAS_FEDERATE:
try:
user = FederatedUser.get_from_federated_username(
self.request.session.get("username")
)
auth = CASFederateValidateUser(user.provider, service_url="")
except FederatedUser.DoesNotExist:
auth = None
session_nb = self.logout(self.request.GET.get("all"))
# if CAS federation mode is enabled, redirect to user CAS logout page, appending the
# current querystring
if settings.CAS_FEDERATE:
if auth is not None:
params = utils.copy_params(request.GET, ignore={"forget_provider"})
url = auth.get_logout_url()
response = HttpResponseRedirect(utils.update_url(url, params))
if request.GET.get("forget_provider"):
response.delete_cookie("remember_provider")
return response
# if service is set, redirect to service after logout
if self.service:
list(messages.get_messages(request)) # clean messages before leaving the django app
return HttpResponseRedirect(self.service)
# if service is not set but url is set, redirect to url after logout
elif self.url:
list(messages.get_messages(request)) # clean messages before leaving the django app
return HttpResponseRedirect(self.url)
else:
# build logout message depending on the number of sessions the user logs out of
if session_nb == 1:
logout_msg = mark_safe(_(
"<h3>Logout successful</h3>"
"You have successfully logged out from the Central Authentication Service. "
"For security reasons, close your web browser."
))
elif session_nb > 1:
logout_msg = mark_safe(_(
"<h3>Logout successful</h3>"
"You have successfully logged out from %d sessions of the Central "
"Authentication Service. "
"For security reasons, close your web browser."
) % session_nb)
else:
logout_msg = mark_safe(_(
"<h3>Logout successful</h3>"
"You were already logged out from the Central Authentication Service. "
"For security reasons, close your web browser."
))
# depending on settings, redirect to the login page with a logout message or display
# the logout page. The default is to display the logout page.
if settings.CAS_REDIRECT_TO_LOGIN_AFTER_LOGOUT:
messages.add_message(request, messages.SUCCESS, logout_msg)
if self.ajax:
url = reverse("cas_server:login")
data = {
'status': 'success',
'detail': 'logout',
'url': url,
'session_nb': session_nb
}
return json_response(request, data)
else:
return redirect("cas_server:login")
else:
if self.ajax:
data = {'status': 'success', 'detail': 'logout', 'session_nb': session_nb}
return json_response(request, data)
else:
return render(
request,
settings.CAS_LOGOUT_TEMPLATE,
utils.context({'logout_msg': logout_msg})
) |
def get_app(system_version_file: str = None,
config_file_override: str = None,
name_override: str = None,
loop: asyncio.AbstractEventLoop = None) -> web.Application:
""" Build and return the aiohttp.web.Application that runs the server
The params can be overridden for testing.
"""
if not system_version_file:
system_version_file = BR_BUILTIN_VERSION_FILE
version = get_version(system_version_file)
name = name_override or name_management.get_name()
config_obj = config.load(config_file_override)
LOG.info("Setup: " + '\n\t'.join([
f'Device name: {name}',
f'Buildroot version: '
f'{version.get("buildroot_version", "unknown")}',
f'\t(from git sha '
f'{version.get("buildroot_sha", "unknown")}',
f'API version: '
f'{version.get("opentrons_api_version", "unknown")}',
f'\t(from git sha '
f'{version.get("opentrons_api_sha", "unknown")}',
f'Update server version: '
f'{version.get("update_server_version", "unknown")}',
f'\t(from git sha '
f'{version.get("update_server_sha", "unknown")}',
f'Smoothie firmware version: TODO'
]))
if not loop:
loop = asyncio.get_event_loop()
app = web.Application(loop=loop, middlewares=[log_error_middleware])
app[config.CONFIG_VARNAME] = config_obj
app[constants.RESTART_LOCK_NAME] = asyncio.Lock()
app[constants.DEVICE_NAME_VARNAME] = name
app.router.add_routes([
web.get('/server/update/health',
control.build_health_endpoint(version)),
web.post('/server/update/begin', update.begin),
web.post('/server/update/cancel', update.cancel),
web.get('/server/update/{session}/status', update.status),
web.post('/server/update/{session}/file', update.file_upload),
web.post('/server/update/{session}/commit', update.commit),
web.post('/server/restart', control.restart),
web.get('/server/ssh_keys', ssh_key_management.list_keys),
web.post('/server/ssh_keys', ssh_key_management.add),
web.delete('/server/ssh_keys/{key_md5}', ssh_key_management.remove),
web.post('/server/name', name_management.set_name_endpoint),
web.get('/server/name', name_management.get_name_endpoint),
])
return app | Build and return the aiohttp.web.Application that runs the server
The params can be overridden for testing. | Below is the instruction that describes the task:
### Input:
Build and return the aiohttp.web.Application that runs the server
The params can be overridden for testing.
### Response:
def get_app(system_version_file: str = None,
config_file_override: str = None,
name_override: str = None,
loop: asyncio.AbstractEventLoop = None) -> web.Application:
""" Build and return the aiohttp.web.Application that runs the server
The params can be overridden for testing.
"""
if not system_version_file:
system_version_file = BR_BUILTIN_VERSION_FILE
version = get_version(system_version_file)
name = name_override or name_management.get_name()
config_obj = config.load(config_file_override)
LOG.info("Setup: " + '\n\t'.join([
f'Device name: {name}',
f'Buildroot version: '
f'{version.get("buildroot_version", "unknown")}',
f'\t(from git sha '
f'{version.get("buildroot_sha", "unknown")}',
f'API version: '
f'{version.get("opentrons_api_version", "unknown")}',
f'\t(from git sha '
f'{version.get("opentrons_api_sha", "unknown")}',
f'Update server version: '
f'{version.get("update_server_version", "unknown")}',
f'\t(from git sha '
f'{version.get("update_server_sha", "unknown")}',
f'Smoothie firmware version: TODO'
]))
if not loop:
loop = asyncio.get_event_loop()
app = web.Application(loop=loop, middlewares=[log_error_middleware])
app[config.CONFIG_VARNAME] = config_obj
app[constants.RESTART_LOCK_NAME] = asyncio.Lock()
app[constants.DEVICE_NAME_VARNAME] = name
app.router.add_routes([
web.get('/server/update/health',
control.build_health_endpoint(version)),
web.post('/server/update/begin', update.begin),
web.post('/server/update/cancel', update.cancel),
web.get('/server/update/{session}/status', update.status),
web.post('/server/update/{session}/file', update.file_upload),
web.post('/server/update/{session}/commit', update.commit),
web.post('/server/restart', control.restart),
web.get('/server/ssh_keys', ssh_key_management.list_keys),
web.post('/server/ssh_keys', ssh_key_management.add),
web.delete('/server/ssh_keys/{key_md5}', ssh_key_management.remove),
web.post('/server/name', name_management.set_name_endpoint),
web.get('/server/name', name_management.get_name_endpoint),
])
return app |
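The setup banner in `get_app` tolerates missing version fields by looking each one up with `dict.get(..., "unknown")`. A small sketch of that graceful degradation (field names taken from the code above, the `setup_msg` variable is illustrative):

```python
version = {'buildroot_version': '1.2.3'}   # a partially populated version dict

# Present keys print their value; absent ones fall back to "unknown"
# instead of raising KeyError.
lines = [
    f'Buildroot version: {version.get("buildroot_version", "unknown")}',
    f'API version: {version.get("opentrons_api_version", "unknown")}',
]
setup_msg = 'Setup: ' + '\n\t'.join(lines)
```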
def sign(self):
"""Generates a signature"""
payload = self._payload()
sigin = b'.'.join([self.protected.encode('utf-8'), payload])
signature = self.engine.sign(self.key, sigin)
return {'protected': self.protected,
'payload': payload,
'signature': base64url_encode(signature)} | Generates a signature | Below is the instruction that describes the task:
### Input:
Generates a signature
### Response:
def sign(self):
"""Generates a signature"""
payload = self._payload()
sigin = b'.'.join([self.protected.encode('utf-8'), payload])
signature = self.engine.sign(self.key, sigin)
return {'protected': self.protected,
'payload': payload,
'signature': base64url_encode(signature)} |
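A self-contained sketch of the same assembly — join `protected` and `payload` with a dot, sign the result, base64url-encode the signature. The padding-stripping `b64url` helper and the HMAC-SHA256 signer are assumptions standing in for `base64url_encode` and `self.engine`, which are defined elsewhere:

```python
import base64
import hashlib
import hmac

def b64url(data: bytes) -> bytes:
    # URL-safe base64 with the trailing '=' padding stripped,
    # the usual JWS-style encoding.
    return base64.urlsafe_b64encode(data).rstrip(b'=')

protected = b64url(b'{"alg":"HS256"}')
payload = b64url(b'{"msg":"hi"}')
sigin = b'.'.join([protected, payload])          # the signing input
signature = hmac.new(b'secret', sigin, hashlib.sha256).digest()
token = {'protected': protected,
         'payload': payload,
         'signature': b64url(signature)}
```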
def main(argv=None):
'''Command line options.'''
program_name = os.path.basename(sys.argv[0])
program_version = version
program_build_date = "%s" % __updated__
program_version_string = '%%prog %s (%s)' % (program_version, program_build_date)
#program_usage = '''usage: spam two eggs''' # optional - will be autogenerated by optparse
program_longdesc = '''''' # optional - give further explanation about what the program does
program_license = "Copyright 2015-2018 Mohamed El Amine SEHILI \
Licensed under the General Public License (GPL) Version 3 \nhttp://www.gnu.org/licenses/"
if argv is None:
argv = sys.argv[1:]
try:
# setup option parser
parser = OptionParser(version=program_version_string, epilog=program_longdesc, description=program_license)
group = OptionGroup(parser, "[Input-Output options]")
group.add_option("-i", "--input", dest="input", help="Input audio or video file. Use - for stdin [default: read from microphone using pyaudio]", metavar="FILE")
group.add_option("-t", "--input-type", dest="input_type", help="Input audio file type. Mandatory if file name has no extension [default: %default]", type=str, default=None, metavar="String")
group.add_option("-M", "--max_time", dest="max_time", help="Max data (in seconds) to read from microphone/file [default: read until the end of file/stream]", type=float, default=None, metavar="FLOAT")
group.add_option("-O", "--output-main", dest="output_main", help="Save main stream as. If omitted main stream will not be saved [default: omitted]", type=str, default=None, metavar="FILE")
group.add_option("-o", "--output-tokens", dest="output_tokens", help="Output file name format for detections. Use {N} and {start} and {end} to build file names, example: 'Det_{N}_{start}-{end}.wav'", type=str, default=None, metavar="STRING")
group.add_option("-T", "--output-type", dest="output_type", help="Audio type used to save detections and/or main stream. If not supplied will: (1). guess from extension or (2). use wav format", type=str, default=None, metavar="STRING")
group.add_option("-u", "--use-channel", dest="use_channel", help="Choose channel to use from a multi-channel audio file (requires pydub). 'left', 'right' and 'mix' are accepted values. [Default: 1 (i.e. 1st or left channel)]", type=str, default="1", metavar="STRING")
parser.add_option_group(group)
group = OptionGroup(parser, "[Tokenization options]", "Set tokenizer options and energy threshold.")
group.add_option("-a", "--analysis-window", dest="analysis_window", help="Size of analysis window in seconds [default: %default (10ms)]", type=float, default=0.01, metavar="FLOAT")
group.add_option("-n", "--min-duration", dest="min_duration", help="Min duration of a valid audio event in seconds [default: %default]", type=float, default=0.2, metavar="FLOAT")
group.add_option("-m", "--max-duration", dest="max_duration", help="Max duration of a valid audio event in seconds [default: %default]", type=float, default=5, metavar="FLOAT")
group.add_option("-s", "--max-silence", dest="max_silence", help="Max duration of a consecutive silence within a valid audio event in seconds [default: %default]", type=float, default=0.3, metavar="FLOAT")
group.add_option("-d", "--drop-trailing-silence", dest="drop_trailing_silence", help="Drop trailing silence from a detection [default: keep trailing silence]", action="store_true", default=False)
group.add_option("-e", "--energy-threshold", dest="energy_threshold", help="Log energy threshold for detection [default: %default]", type=float, default=50, metavar="FLOAT")
parser.add_option_group(group)
group = OptionGroup(parser, "[Audio parameters]", "Define audio parameters if data is read from a headerless file (raw or stdin) or you want to use different microphone parameters.")
group.add_option("-r", "--rate", dest="sampling_rate", help="Sampling rate of audio data [default: %default]", type=int, default=16000, metavar="INT")
group.add_option("-c", "--channels", dest="channels", help="Number of channels of audio data [default: %default]", type=int, default=1, metavar="INT")
group.add_option("-w", "--width", dest="sample_width", help="Number of bytes per audio sample [default: %default]", type=int, default=2, metavar="INT")
group.add_option("-I", "--input-device-index", dest="input_device_index", help="Audio device index [default: %default] - only when using PyAudio", type=int, default=None, metavar="INT")
group.add_option("-F", "--audio-frame-per-buffer", dest="frame_per_buffer", help="Audio frame per buffer [default: %default] - only when using PyAudio", type=int, default=1024, metavar="INT")
parser.add_option_group(group)
group = OptionGroup(parser, "[Do something with detections]", "Use these options to print, play or plot detections.")
group.add_option("-C", "--command", dest="command", help="Command to call when an audio detection occurs. Use $ to represent the file name to use with the command (e.g. -C 'du -h $')", default=None, type=str, metavar="STRING")
group.add_option("-E", "--echo", dest="echo", help="Play back each detection immediately using pyaudio [default: do not play]", action="store_true", default=False)
group.add_option("-p", "--plot", dest="plot", help="Plot and show audio signal and detections (requires matplotlib)", action="store_true", default=False)
group.add_option("", "--save-image", dest="save_image", help="Save plotted audio signal and detections as a picture or a PDF file (requires matplotlib)", type=str, default=None, metavar="FILE")
group.add_option("", "--printf", dest="printf", help="print detections, one per line, using a user supplied format (e.g. '[{id}]: {start} -- {end}'). Available keywords {id}, {start}, {end} and {duration}", type=str, default="{id} {start} {end}", metavar="STRING")
group.add_option("", "--time-format", dest="time_format", help="format used to print {start} and {end}. [Default= %default]. %S: absolute time in sec. %I: absolute time in ms. If at least one of (%h, %m, %s, %i) is used, convert time into hours, minutes, seconds and millis (e.g. %h:%m:%s.%i). Only required fields are printed", type=str, default="%S", metavar="STRING")
parser.add_option_group(group)
parser.add_option("-q", "--quiet", dest="quiet", help="Do not print any information about detections [default: print 'id', 'start' and 'end' of each detection]", action="store_true", default=False)
parser.add_option("-D", "--debug", dest="debug", help="Print processing operations to STDOUT", action="store_true", default=False)
parser.add_option("", "--debug-file", dest="debug_file", help="Print processing operations to FILE", type=str, default=None, metavar="FILE")
# process options
(opts, args) = parser.parse_args(argv)
if opts.input == "-":
asource = StdinAudioSource(sampling_rate = opts.sampling_rate,
sample_width = opts.sample_width,
channels = opts.channels)
#read data from a file
elif opts.input is not None:
asource = file_to_audio_source(filename=opts.input, filetype=opts.input_type, uc=opts.use_channel)
# read data from microphone via pyaudio
else:
try:
asource = PyAudioSource(sampling_rate = opts.sampling_rate,
sample_width = opts.sample_width,
channels = opts.channels,
frames_per_buffer = opts.frame_per_buffer,
input_device_index = opts.input_device_index)
except Exception:
sys.stderr.write("Cannot read data from audio device!\n")
sys.stderr.write("You should either install pyaudio or read data from STDIN\n")
sys.exit(2)
logger = logging.getLogger(LOGGER_NAME)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
if opts.quiet or not opts.debug:
# only critical messages will be printed
handler.setLevel(logging.CRITICAL)
else:
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
if opts.debug_file is not None:
logger.setLevel(logging.DEBUG)
opts.debug = True
handler = logging.FileHandler(opts.debug_file, "w")
fmt = logging.Formatter('[%(asctime)s] | %(message)s')
handler.setFormatter(fmt)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
record = opts.output_main is not None or opts.plot or opts.save_image is not None
ads = ADSFactory.ads(audio_source = asource, block_dur = opts.analysis_window, max_time = opts.max_time, record = record)
validator = AudioEnergyValidator(sample_width=asource.get_sample_width(), energy_threshold=opts.energy_threshold)
if opts.drop_trailing_silence:
mode = StreamTokenizer.DROP_TRAILING_SILENCE
else:
mode = 0
analysis_window_per_second = 1. / opts.analysis_window
tokenizer = StreamTokenizer(validator=validator, min_length=opts.min_duration * analysis_window_per_second,
max_length=int(opts.max_duration * analysis_window_per_second),
max_continuous_silence=opts.max_silence * analysis_window_per_second,
mode = mode)
observers = []
tokenizer_worker = None
if opts.output_tokens is not None:
try:
# check user format is correct
fname = opts.output_tokens.format(N=0, start=0, end=0)
# find file type for detections
tok_type = opts.output_type
if tok_type is None:
tok_type = os.path.splitext(opts.output_tokens)[1][1:]
if tok_type == "":
tok_type = "wav"
token_saver = TokenSaverWorker(name_format=opts.output_tokens, filetype=tok_type,
debug=opts.debug, logger=logger, sr=asource.get_sampling_rate(),
sw=asource.get_sample_width(),
ch=asource.get_channels())
observers.append(token_saver)
except Exception:
sys.stderr.write("Wrong format for detections file name: '{0}'\n".format(opts.output_tokens))
sys.exit(2)
if opts.echo:
try:
player = player_for(asource)
player_worker = PlayerWorker(player=player, debug=opts.debug, logger=logger)
observers.append(player_worker)
except Exception:
sys.stderr.write("Cannot get an audio player!\n")
sys.stderr.write("You should either install pyaudio or supply a command (-C option) to play audio\n")
sys.exit(2)
if opts.command is not None and len(opts.command) > 0:
cmd_worker = CommandLineWorker(command=opts.command, debug=opts.debug, logger=logger)
observers.append(cmd_worker)
if not opts.quiet or opts.plot or opts.save_image is not None:
oformat = opts.printf.replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r")
converter = seconds_to_str_fromatter(opts.time_format)
log_worker = LogWorker(print_detections = not opts.quiet, output_format=oformat,
time_formatter=converter, logger=logger, debug=opts.debug)
observers.append(log_worker)
tokenizer_worker = TokenizerWorker(ads, tokenizer, opts.analysis_window, observers)
def _save_main_stream():
# find file type
main_type = opts.output_type
if main_type is None:
main_type = os.path.splitext(opts.output_main)[1][1:]
if main_type == "":
main_type = "wav"
ads.close()
ads.rewind()
data = ads.get_audio_source().get_data_buffer()
if len(data) > 0:
save_audio_data(data=data, filename=opts.output_main, filetype=main_type, sr=asource.get_sampling_rate(),
sw = asource.get_sample_width(),
ch = asource.get_channels())
def _plot():
import numpy as np
ads.close()
ads.rewind()
data = ads.get_audio_source().get_data_buffer()
signal = AudioEnergyValidator._convert(data, asource.get_sample_width())
detections = [(det[3] , det[4]) for det in log_worker.detections]
max_amplitude = 2**(asource.get_sample_width() * 8 - 1) - 1
energy_as_amp = np.sqrt(np.exp(opts.energy_threshold * np.log(10) / 10)) / max_amplitude
plot_all(signal / max_amplitude, asource.get_sampling_rate(), energy_as_amp, detections, show = opts.plot, save_as = opts.save_image)
# start observer threads
for obs in observers:
obs.start()
# start tokenization thread
tokenizer_worker.start()
while True:
time.sleep(1)
if len(threading.enumerate()) == 1:
break
tokenizer_worker = None
if opts.output_main is not None:
_save_main_stream()
if opts.plot or opts.save_image is not None:
_plot()
return 0
except KeyboardInterrupt:
if tokenizer_worker is not None:
tokenizer_worker.stop()
for obs in observers:
obs.stop()
if opts.output_main is not None:
_save_main_stream()
if opts.plot or opts.save_image is not None:
_plot()
return 0
except Exception as e:
sys.stderr.write(program_name + ": " + str(e) + "\n")
sys.stderr.write("for help use -h\n")
return 2 | Command line options. | Below is the instruction that describes the task:
### Input:
Command line options.
### Response:
def main(argv=None):
'''Command line options.'''
program_name = os.path.basename(sys.argv[0])
program_version = version
program_build_date = "%s" % __updated__
program_version_string = '%%prog %s (%s)' % (program_version, program_build_date)
#program_usage = '''usage: spam two eggs''' # optional - will be autogenerated by optparse
program_longdesc = '''''' # optional - give further explanation about what the program does
program_license = "Copyright 2015-2018 Mohamed El Amine SEHILI \
Licensed under the General Public License (GPL) Version 3 \nhttp://www.gnu.org/licenses/"
if argv is None:
argv = sys.argv[1:]
try:
# setup option parser
parser = OptionParser(version=program_version_string, epilog=program_longdesc, description=program_license)
group = OptionGroup(parser, "[Input-Output options]")
group.add_option("-i", "--input", dest="input", help="Input audio or video file. Use - for stdin [default: read from microphone using pyaudio]", metavar="FILE")
group.add_option("-t", "--input-type", dest="input_type", help="Input audio file type. Mandatory if file name has no extension [default: %default]", type=str, default=None, metavar="String")
group.add_option("-M", "--max_time", dest="max_time", help="Max data (in seconds) to read from microphone/file [default: read until the end of file/stream]", type=float, default=None, metavar="FLOAT")
group.add_option("-O", "--output-main", dest="output_main", help="Save main stream as. If omitted main stream will not be saved [default: omitted]", type=str, default=None, metavar="FILE")
group.add_option("-o", "--output-tokens", dest="output_tokens", help="Output file name format for detections. Use {N} and {start} and {end} to build file names, example: 'Det_{N}_{start}-{end}.wav'", type=str, default=None, metavar="STRING")
group.add_option("-T", "--output-type", dest="output_type", help="Audio type used to save detections and/or main stream. If not supplied will: (1). guess from extension or (2). use wav format", type=str, default=None, metavar="STRING")
group.add_option("-u", "--use-channel", dest="use_channel", help="Choose channel to use from a multi-channel audio file (requires pydub). 'left', 'right' and 'mix' are accepted values. [Default: 1 (i.e. 1st or left channel)]", type=str, default="1", metavar="STRING")
parser.add_option_group(group)
group = OptionGroup(parser, "[Tokenization options]", "Set tokenizer options and energy threshold.")
group.add_option("-a", "--analysis-window", dest="analysis_window", help="Size of analysis window in seconds [default: %default (10ms)]", type=float, default=0.01, metavar="FLOAT")
group.add_option("-n", "--min-duration", dest="min_duration", help="Min duration of a valid audio event in seconds [default: %default]", type=float, default=0.2, metavar="FLOAT")
group.add_option("-m", "--max-duration", dest="max_duration", help="Max duration of a valid audio event in seconds [default: %default]", type=float, default=5, metavar="FLOAT")
group.add_option("-s", "--max-silence", dest="max_silence", help="Max duration of a consecutive silence within a valid audio event in seconds [default: %default]", type=float, default=0.3, metavar="FLOAT")
group.add_option("-d", "--drop-trailing-silence", dest="drop_trailing_silence", help="Drop trailing silence from a detection [default: keep trailing silence]", action="store_true", default=False)
group.add_option("-e", "--energy-threshold", dest="energy_threshold", help="Log energy threshold for detection [default: %default]", type=float, default=50, metavar="FLOAT")
parser.add_option_group(group)
group = OptionGroup(parser, "[Audio parameters]", "Define audio parameters if data is read from a headerless file (raw or stdin) or you want to use different microphone parameters.")
group.add_option("-r", "--rate", dest="sampling_rate", help="Sampling rate of audio data [default: %default]", type=int, default=16000, metavar="INT")
group.add_option("-c", "--channels", dest="channels", help="Number of channels of audio data [default: %default]", type=int, default=1, metavar="INT")
group.add_option("-w", "--width", dest="sample_width", help="Number of bytes per audio sample [default: %default]", type=int, default=2, metavar="INT")
group.add_option("-I", "--input-device-index", dest="input_device_index", help="Audio device index [default: %default] - only when using PyAudio", type=int, default=None, metavar="INT")
group.add_option("-F", "--audio-frame-per-buffer", dest="frame_per_buffer", help="Audio frame per buffer [default: %default] - only when using PyAudio", type=int, default=1024, metavar="INT")
parser.add_option_group(group)
group = OptionGroup(parser, "[Do something with detections]", "Use these options to print, play or plot detections.")
group.add_option("-C", "--command", dest="command", help="Command to call when an audio detection occurs. Use $ to represent the file name to use with the command (e.g. -C 'du -h $')", default=None, type=str, metavar="STRING")
group.add_option("-E", "--echo", dest="echo", help="Play back each detection immediately using pyaudio [default: do not play]", action="store_true", default=False)
group.add_option("-p", "--plot", dest="plot", help="Plot and show audio signal and detections (requires matplotlib)", action="store_true", default=False)
group.add_option("", "--save-image", dest="save_image", help="Save plotted audio signal and detections as a picture or a PDF file (requires matplotlib)", type=str, default=None, metavar="FILE")
group.add_option("", "--printf", dest="printf", help="print detections, one per line, using a user supplied format (e.g. '[{id}]: {start} -- {end}'). Available keywords {id}, {start}, {end} and {duration}", type=str, default="{id} {start} {end}", metavar="STRING")
group.add_option("", "--time-format", dest="time_format", help="format used to print {start} and {end}. [Default= %default]. %S: absolute time in sec. %I: absolute time in ms. If at least one of (%h, %m, %s, %i) is used, convert time into hours, minutes, seconds and millis (e.g. %h:%m:%s.%i). Only required fields are printed", type=str, default="%S", metavar="STRING")
parser.add_option_group(group)
parser.add_option("-q", "--quiet", dest="quiet", help="Do not print any information about detections [default: print 'id', 'start' and 'end' of each detection]", action="store_true", default=False)
parser.add_option("-D", "--debug", dest="debug", help="Print processing operations to STDOUT", action="store_true", default=False)
parser.add_option("", "--debug-file", dest="debug_file", help="Print processing operations to FILE", type=str, default=None, metavar="FILE")
# process options
(opts, args) = parser.parse_args(argv)
if opts.input == "-":
asource = StdinAudioSource(sampling_rate = opts.sampling_rate,
sample_width = opts.sample_width,
channels = opts.channels)
#read data from a file
elif opts.input is not None:
asource = file_to_audio_source(filename=opts.input, filetype=opts.input_type, uc=opts.use_channel)
# read data from microphone via pyaudio
else:
try:
asource = PyAudioSource(sampling_rate = opts.sampling_rate,
sample_width = opts.sample_width,
channels = opts.channels,
frames_per_buffer = opts.frame_per_buffer,
input_device_index = opts.input_device_index)
except Exception:
sys.stderr.write("Cannot read data from audio device!\n")
sys.stderr.write("You should either install pyaudio or read data from STDIN\n")
sys.exit(2)
logger = logging.getLogger(LOGGER_NAME)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
if opts.quiet or not opts.debug:
# only critical messages will be printed
handler.setLevel(logging.CRITICAL)
else:
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
if opts.debug_file is not None:
logger.setLevel(logging.DEBUG)
opts.debug = True
handler = logging.FileHandler(opts.debug_file, "w")
fmt = logging.Formatter('[%(asctime)s] | %(message)s')
handler.setFormatter(fmt)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
record = opts.output_main is not None or opts.plot or opts.save_image is not None
ads = ADSFactory.ads(audio_source = asource, block_dur = opts.analysis_window, max_time = opts.max_time, record = record)
validator = AudioEnergyValidator(sample_width=asource.get_sample_width(), energy_threshold=opts.energy_threshold)
if opts.drop_trailing_silence:
mode = StreamTokenizer.DROP_TRAILING_SILENCE
else:
mode = 0
analysis_window_per_second = 1. / opts.analysis_window
tokenizer = StreamTokenizer(validator=validator, min_length=opts.min_duration * analysis_window_per_second,
max_length=int(opts.max_duration * analysis_window_per_second),
max_continuous_silence=opts.max_silence * analysis_window_per_second,
mode = mode)
observers = []
tokenizer_worker = None
if opts.output_tokens is not None:
try:
# check user format is correct
fname = opts.output_tokens.format(N=0, start=0, end=0)
# find file type for detections
tok_type = opts.output_type
if tok_type is None:
tok_type = os.path.splitext(opts.output_tokens)[1][1:]
if tok_type == "":
tok_type = "wav"
token_saver = TokenSaverWorker(name_format=opts.output_tokens, filetype=tok_type,
debug=opts.debug, logger=logger, sr=asource.get_sampling_rate(),
sw=asource.get_sample_width(),
ch=asource.get_channels())
observers.append(token_saver)
except Exception:
sys.stderr.write("Wrong format for detections file name: '{0}'\n".format(opts.output_tokens))
sys.exit(2)
if opts.echo:
try:
player = player_for(asource)
player_worker = PlayerWorker(player=player, debug=opts.debug, logger=logger)
observers.append(player_worker)
except Exception:
sys.stderr.write("Cannot get an audio player!\n")
sys.stderr.write("You should either install pyaudio or supply a command (-C option) to play audio\n")
sys.exit(2)
if opts.command is not None and len(opts.command) > 0:
cmd_worker = CommandLineWorker(command=opts.command, debug=opts.debug, logger=logger)
observers.append(cmd_worker)
if not opts.quiet or opts.plot is not None or opts.save_image is not None:
oformat = opts.printf.replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r")
converter = seconds_to_str_fromatter(opts.time_format)
log_worker = LogWorker(print_detections = not opts.quiet, output_format=oformat,
time_formatter=converter, logger=logger, debug=opts.debug)
observers.append(log_worker)
tokenizer_worker = TokenizerWorker(ads, tokenizer, opts.analysis_window, observers)
def _save_main_stream():
# find file type
main_type = opts.output_type
if main_type is None:
main_type = os.path.splitext(opts.output_main)[1][1:]
if main_type == "":
main_type = "wav"
ads.close()
ads.rewind()
data = ads.get_audio_source().get_data_buffer()
if len(data) > 0:
save_audio_data(data=data, filename=opts.output_main, filetype=main_type, sr=asource.get_sampling_rate(),
sw = asource.get_sample_width(),
ch = asource.get_channels())
def _plot():
import numpy as np
ads.close()
ads.rewind()
data = ads.get_audio_source().get_data_buffer()
signal = AudioEnergyValidator._convert(data, asource.get_sample_width())
detections = [(det[3] , det[4]) for det in log_worker.detections]
max_amplitude = 2**(asource.get_sample_width() * 8 - 1) - 1
energy_as_amp = np.sqrt(np.exp(opts.energy_threshold * np.log(10) / 10)) / max_amplitude
plot_all(signal / max_amplitude, asource.get_sampling_rate(), energy_as_amp, detections, show = opts.plot, save_as = opts.save_image)
# start observer threads
for obs in observers:
obs.start()
# start tokenization thread
tokenizer_worker.start()
while True:
time.sleep(1)
if len(threading.enumerate()) == 1:
break
tokenizer_worker = None
if opts.output_main is not None:
_save_main_stream()
if opts.plot or opts.save_image is not None:
_plot()
return 0
except KeyboardInterrupt:
if tokenizer_worker is not None:
tokenizer_worker.stop()
for obs in observers:
obs.stop()
if opts.output_main is not None:
_save_main_stream()
if opts.plot or opts.save_image is not None:
_plot()
return 0
except Exception as e:
sys.stderr.write(program_name + ": " + str(e) + "\n")
sys.stderr.write("for help use -h\n")
return 2 |
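The `StreamTokenizer` above is built by converting the duration options, given in seconds, into counts of analysis windows (`analysis_window_per_second = 1. / opts.analysis_window`). A minimal sketch of that conversion — the 10 ms window value is an assumption for illustration (`opts.analysis_window` is defined outside this excerpt), and `round()` is used here instead of the script's bare float multiplication to guard against float truncation:

```python
# Illustrative sketch (not part of the script): turning second-based CLI
# options into StreamTokenizer frame counts. round() keeps float noise such
# as 19.999999999999996 from truncating to 19.

def durations_to_frames(analysis_window, min_duration, max_duration, max_silence):
    windows_per_second = 1.0 / analysis_window
    return {
        "min_length": round(min_duration * windows_per_second),
        "max_length": round(max_duration * windows_per_second),
        "max_continuous_silence": round(max_silence * windows_per_second),
    }

# With an assumed 10 ms analysis window and the option defaults shown above:
print(durations_to_frames(0.01, min_duration=0.2, max_duration=5, max_silence=0.3))
# {'min_length': 20, 'max_length': 500, 'max_continuous_silence': 30}
```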
def _init_edges(p2cs, c2ps):
"""Get the directed edges from GO term to GO term."""
edge_from_to = []
for parent, children in p2cs.items():
for child in children:
# if child in goids_present and parent in goids_present:
edge_from_to.append((child, parent))
for parent, children in c2ps.items():
for child in children:
# if child in goids_present and parent in goids_present:
edge_from_to.append((child, parent))
return edge_from_to | Get the directed edges from GO term to GO term. | Below is the instruction that describes the task:
### Input:
Get the directed edges from GO term to GO term.
### Response:
def _init_edges(p2cs, c2ps):
"""Get the directed edges from GO term to GO term."""
edge_from_to = []
for parent, children in p2cs.items():
for child in children:
# if child in goids_present and parent in goids_present:
edge_from_to.append((child, parent))
for parent, children in c2ps.items():
for child in children:
# if child in goids_present and parent in goids_present:
edge_from_to.append((child, parent))
return edge_from_to |
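A standalone run of the edge-building logic above, with hypothetical GO-style data: both loops append `(child, parent)` pairs keyed by the dict key, so the two maps simply concatenate their edges.

```python
# Self-contained copy of the _init_edges logic with made-up term IDs.
def init_edges(p2cs, c2ps):
    edge_from_to = []
    for parent, children in p2cs.items():
        for child in children:
            edge_from_to.append((child, parent))
    for parent, children in c2ps.items():
        for child in children:
            edge_from_to.append((child, parent))
    return edge_from_to

p2cs = {"P": ["A", "B"]}   # parent -> children
c2ps = {"Q": ["C"]}        # second map, iterated the same way
print(init_edges(p2cs, c2ps))  # [('A', 'P'), ('B', 'P'), ('C', 'Q')]
```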
def bresenham_circle_octant(radius):
"""
Uses Bresenham's algorithm to draw a single octant of a circle with thickness 1,
centered on the origin and with the given radius.
:param radius: The radius of the circle to draw
:return: A list of integer coordinates representing pixels.
Starts at (radius, 0) and ends with a pixel (x, y) where x == y.
"""
x, y = radius, 0
r2 = radius * radius
coords = []
while x >= y:
coords.append((x, y))
y += 1
if abs((x - 1) * (x - 1) + y * y - r2) < abs(x * x + y * y - r2):
x -= 1
# add a point on the line x = y at the end if it's not already there.
if coords[-1][0] != coords[-1][1]:
coords.append((coords[-1][0], coords[-1][0]))
return coords | Uses Bresenham's algorithm to draw a single octant of a circle with thickness 1,
centered on the origin and with the given radius.
:param radius: The radius of the circle to draw
:return: A list of integer coordinates representing pixels.
Starts at (radius, 0) and ends with a pixel (x, y) where x == y. | Below is the instruction that describes the task:
### Input:
Uses Bresenham's algorithm to draw a single octant of a circle with thickness 1,
centered on the origin and with the given radius.
:param radius: The radius of the circle to draw
:return: A list of integer coordinates representing pixels.
Starts at (radius, 0) and ends with a pixel (x, y) where x == y.
### Response:
def bresenham_circle_octant(radius):
"""
Uses Bresenham's algorithm to draw a single octant of a circle with thickness 1,
centered on the origin and with the given radius.
:param radius: The radius of the circle to draw
:return: A list of integer coordinates representing pixels.
Starts at (radius, 0) and ends with a pixel (x, y) where x == y.
"""
x, y = radius, 0
r2 = radius * radius
coords = []
while x >= y:
coords.append((x, y))
y += 1
if abs((x - 1) * (x - 1) + y * y - r2) < abs(x * x + y * y - r2):
x -= 1
# add a point on the line x = y at the end if it's not already there.
if coords[-1][0] != coords[-1][1]:
coords.append((coords[-1][0], coords[-1][0]))
return coords |
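Tracing the octant routine for a small radius makes the error-minimizing step concrete: at each `y` increment, `x` is decremented only when that brings `x*x + y*y` closer to `radius**2`.

```python
# Self-contained copy of the octant routine, run for radius 3.
def bresenham_circle_octant(radius):
    x, y = radius, 0
    r2 = radius * radius
    coords = []
    while x >= y:
        coords.append((x, y))
        y += 1
        if abs((x - 1) * (x - 1) + y * y - r2) < abs(x * x + y * y - r2):
            x -= 1
    # close the octant on the line x == y if needed
    if coords[-1][0] != coords[-1][1]:
        coords.append((coords[-1][0], coords[-1][0]))
    return coords

print(bresenham_circle_octant(3))  # [(3, 0), (3, 1), (2, 2)]
```

Note the final pixel always satisfies `x == y`, as the docstring promises.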
def load_go_graph(go_fname):
"""Load the GO data from an OWL file and parse into an RDF graph.
Parameters
----------
go_fname : str
Path to the GO OWL file. Can be downloaded from
http://geneontology.org/ontology/go.owl.
Returns
-------
rdflib.Graph
RDF graph containing GO data.
"""
global _go_graph
if _go_graph is None:
_go_graph = rdflib.Graph()
logger.info("Parsing GO OWL file")
_go_graph.parse(os.path.abspath(go_fname))
return _go_graph | Load the GO data from an OWL file and parse into an RDF graph.
Parameters
----------
go_fname : str
Path to the GO OWL file. Can be downloaded from
http://geneontology.org/ontology/go.owl.
Returns
-------
rdflib.Graph
RDF graph containing GO data. | Below is the instruction that describes the task:
### Input:
Load the GO data from an OWL file and parse into an RDF graph.
Parameters
----------
go_fname : str
Path to the GO OWL file. Can be downloaded from
http://geneontology.org/ontology/go.owl.
Returns
-------
rdflib.Graph
RDF graph containing GO data.
### Response:
def load_go_graph(go_fname):
"""Load the GO data from an OWL file and parse into an RDF graph.
Parameters
----------
go_fname : str
Path to the GO OWL file. Can be downloaded from
http://geneontology.org/ontology/go.owl.
Returns
-------
rdflib.Graph
RDF graph containing GO data.
"""
global _go_graph
if _go_graph is None:
_go_graph = rdflib.Graph()
logger.info("Parsing GO OWL file")
_go_graph.parse(os.path.abspath(go_fname))
return _go_graph |
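The `global _go_graph` pattern above caches the parsed graph at module level so the expensive OWL parse runs only once per process. The same pattern in isolation, with a dummy loader standing in for `rdflib.Graph` (all names here are illustrative):

```python
# Module-level memoization: the first call pays the cost, later calls reuse it.
_cache = None

def load_once(path, parse=lambda p: {"parsed": p}):
    global _cache
    if _cache is None:
        _cache = parse(path)   # expensive step happens once
    return _cache

a = load_once("go.owl")
b = load_once("go.owl")
print(a is b)  # True -- the very same object is returned
```

One limitation of this pattern, shared by the original: the cache ignores the filename, so a second call with a different path still returns the first graph.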
def _parse_response(self, response):
"""Parses the API response and raises appropriate errors if
raise_errors was set to True
"""
if not self._raise_errors:
return response
is_4xx_error = str(response.status_code)[0] == '4'
is_5xx_error = str(response.status_code)[0] == '5'
content = response.content
if response.status_code == 403:
raise AuthenticationError(content)
elif is_4xx_error:
raise APIError(content)
elif is_5xx_error:
raise ServerError(content)
return response | Parses the API response and raises appropriate errors if
raise_errors was set to True | Below is the instruction that describes the task:
### Input:
Parses the API response and raises appropriate errors if
raise_errors was set to True
### Response:
def _parse_response(self, response):
"""Parses the API response and raises appropriate errors if
raise_errors was set to True
"""
if not self._raise_errors:
return response
is_4xx_error = str(response.status_code)[0] == '4'
is_5xx_error = str(response.status_code)[0] == '5'
content = response.content
if response.status_code == 403:
raise AuthenticationError(content)
elif is_4xx_error:
raise APIError(content)
elif is_5xx_error:
raise ServerError(content)
return response |
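The routing above keys off the first digit of the status code, with 403 special-cased before the generic 4xx branch. A minimal sketch using a stand-in for the HTTP response object (the real one comes from an HTTP client library such as requests):

```python
# Stand-in response object and exception types; only the dispatch logic
# from _parse_response is reproduced here.
from collections import namedtuple

Response = namedtuple("Response", ["status_code", "content"])

class AuthenticationError(Exception): pass
class APIError(Exception): pass
class ServerError(Exception): pass

def classify(response):
    first_digit = str(response.status_code)[0]
    if response.status_code == 403:          # checked before the 4xx branch
        raise AuthenticationError(response.content)
    elif first_digit == "4":
        raise APIError(response.content)
    elif first_digit == "5":
        raise ServerError(response.content)
    return response

classify(Response(200, b"ok"))               # 2xx passes through untouched
try:
    classify(Response(404, b"missing"))
except APIError:
    print("4xx -> APIError")
```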
def dict_to_unicode(raw_dict):
"""
Ensure all keys and values in a dict are unicode.
The passed dict is assumed to have lists for all values.
"""
decoded = {}
for key, value in raw_dict.items():
decoded[to_unicode(key)] = map(
to_unicode, value)
return decoded | Ensure all keys and values in a dict are unicode.
The passed dict is assumed to have lists for all values. | Below is the instruction that describes the task:
### Input:
Ensure all keys and values in a dict are unicode.
The passed dict is assumed to have lists for all values.
### Response:
def dict_to_unicode(raw_dict):
"""
Ensure all keys and values in a dict are unicode.
The passed dict is assumed to have lists for all values.
"""
decoded = {}
for key, value in raw_dict.items():
decoded[to_unicode(key)] = map(
to_unicode, value)
return decoded |
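One portability caveat with the function above: on Python 3, `map()` returns a lazy iterator, so the values stored in `decoded` would not actually be lists. An eager variant, with an assumed `to_unicode` helper standing in for the module's own:

```python
# Eager rewrite: a list comprehension guarantees list values on Python 3.
def to_unicode(value):
    # stand-in for the module's helper (assumption, not the real implementation)
    return value.decode("utf-8") if isinstance(value, bytes) else str(value)

def dict_to_unicode(raw_dict):
    return {to_unicode(key): [to_unicode(v) for v in value]
            for key, value in raw_dict.items()}

print(dict_to_unicode({b"tags": [b"a", b"b"]}))  # {'tags': ['a', 'b']}
```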
def model_summary(model, solution=None, threshold=0.01, fva=None, names=False,
floatfmt='.3g'):
"""
Print a summary of the input and output fluxes of the model.
Parameters
----------
solution: cobra.Solution, optional
A previously solved model solution to use for generating the
summary. If none provided (default), the summary method will
resolve the model. Note that the solution object must match the
model, i.e., changes to the model such as changed bounds,
added or removed reactions are not taken into account by this
method.
threshold : float, optional
Threshold below which fluxes are not reported.
fva : pandas.DataFrame, float or None, optional
Whether or not to include flux variability analysis in the output.
If given, fva should either be a previous FVA solution matching
the model or a float between 0 and 1 representing the
fraction of the optimum objective to be searched.
names : bool, optional
Emit reaction and metabolite names rather than identifiers (default
False).
floatfmt : string, optional
Format string for floats (default '.3g').
"""
if names:
emit = attrgetter('name')
else:
emit = attrgetter('id')
objective_reactions = linear_reaction_coefficients(model)
boundary_reactions = model.exchanges
summary_rxns = set(objective_reactions.keys()).union(boundary_reactions)
if solution is None:
model.slim_optimize(error_value=None)
solution = get_solution(model, reactions=summary_rxns)
# Create a dataframe of objective fluxes
obj_fluxes = pd.DataFrame({key: solution[key.id] * value for key,
value in iteritems(objective_reactions)},
index=['flux']).T
obj_fluxes['id'] = obj_fluxes.apply(
lambda x: format_long_string(x.name.id, 15), 1)
# Build a dictionary of metabolite production from the boundary reactions
metabolites = {m for r in boundary_reactions for m in r.metabolites}
index = sorted(metabolites, key=attrgetter('id'))
metabolite_fluxes = pd.DataFrame({
'id': [format_long_string(emit(m), 15) for m in index],
'flux': zeros(len(index), dtype=float)
}, index=[m.id for m in index])
for rxn in boundary_reactions:
for met, stoich in iteritems(rxn.metabolites):
metabolite_fluxes.at[met.id, 'flux'] += stoich * solution[rxn.id]
# Calculate FVA results if requested
if fva is not None:
if len(index) != len(boundary_reactions):
LOGGER.warning(
"There exists more than one boundary reaction per metabolite. "
"Please be careful when evaluating flux ranges.")
metabolite_fluxes['fmin'] = zeros(len(index), dtype=float)
metabolite_fluxes['fmax'] = zeros(len(index), dtype=float)
if hasattr(fva, 'columns'):
fva_results = fva
else:
fva_results = flux_variability_analysis(
model, reaction_list=boundary_reactions,
fraction_of_optimum=fva)
for rxn in boundary_reactions:
for met, stoich in iteritems(rxn.metabolites):
fmin = stoich * fva_results.at[rxn.id, 'minimum']
fmax = stoich * fva_results.at[rxn.id, 'maximum']
# Correct 'max' and 'min' for negative values
if abs(fmin) <= abs(fmax):
metabolite_fluxes.at[met.id, 'fmin'] += fmin
metabolite_fluxes.at[met.id, 'fmax'] += fmax
else:
metabolite_fluxes.at[met.id, 'fmin'] += fmax
metabolite_fluxes.at[met.id, 'fmax'] += fmin
# Generate a dataframe of boundary fluxes
metabolite_fluxes = _process_flux_dataframe(
metabolite_fluxes, fva, threshold, floatfmt)
# Begin building string output table
def get_str_table(species_df, fva=False):
"""Formats a string table for each column"""
if fva:
return tabulate(
species_df.loc[:, ['id', 'flux', 'fva_fmt']].values,
floatfmt=floatfmt, tablefmt='simple',
headers=['id', 'Flux', 'Range']).split('\n')
else:
return tabulate(species_df.loc[:, ['id', 'flux']].values,
floatfmt=floatfmt, tablefmt='plain').split('\n')
in_table = get_str_table(
metabolite_fluxes[metabolite_fluxes['is_input']], fva=fva is not None)
out_table = get_str_table(
metabolite_fluxes[~metabolite_fluxes['is_input']], fva=fva is not None)
obj_table = get_str_table(obj_fluxes, fva=False)
# Print nested output table
print_(tabulate(
[entries for entries in zip_longest(in_table, out_table, obj_table)],
headers=['IN FLUXES', 'OUT FLUXES', 'OBJECTIVES'], tablefmt='simple')) | Print a summary of the input and output fluxes of the model.
Parameters
----------
solution: cobra.Solution, optional
A previously solved model solution to use for generating the
summary. If none provided (default), the summary method will
resolve the model. Note that the solution object must match the
model, i.e., changes to the model such as changed bounds,
added or removed reactions are not taken into account by this
method.
threshold : float, optional
Threshold below which fluxes are not reported.
fva : pandas.DataFrame, float or None, optional
Whether or not to include flux variability analysis in the output.
If given, fva should either be a previous FVA solution matching
the model or a float between 0 and 1 representing the
fraction of the optimum objective to be searched.
names : bool, optional
Emit reaction and metabolite names rather than identifiers (default
False).
floatfmt : string, optional
Format string for floats (default '.3g'). | Below is the instruction that describes the task:
### Input:
Print a summary of the input and output fluxes of the model.
Parameters
----------
solution: cobra.Solution, optional
A previously solved model solution to use for generating the
summary. If none provided (default), the summary method will
resolve the model. Note that the solution object must match the
model, i.e., changes to the model such as changed bounds,
added or removed reactions are not taken into account by this
method.
threshold : float, optional
Threshold below which fluxes are not reported.
fva : pandas.DataFrame, float or None, optional
Whether or not to include flux variability analysis in the output.
If given, fva should either be a previous FVA solution matching
the model or a float between 0 and 1 representing the
fraction of the optimum objective to be searched.
names : bool, optional
Emit reaction and metabolite names rather than identifiers (default
False).
floatfmt : string, optional
Format string for floats (default '.3g').
### Response:
def model_summary(model, solution=None, threshold=0.01, fva=None, names=False,
floatfmt='.3g'):
"""
Print a summary of the input and output fluxes of the model.
Parameters
----------
solution: cobra.Solution, optional
A previously solved model solution to use for generating the
summary. If none provided (default), the summary method will
resolve the model. Note that the solution object must match the
model, i.e., changes to the model such as changed bounds,
added or removed reactions are not taken into account by this
method.
threshold : float, optional
Threshold below which fluxes are not reported.
fva : pandas.DataFrame, float or None, optional
Whether or not to include flux variability analysis in the output.
If given, fva should either be a previous FVA solution matching
the model or a float between 0 and 1 representing the
fraction of the optimum objective to be searched.
names : bool, optional
Emit reaction and metabolite names rather than identifiers (default
False).
floatfmt : string, optional
Format string for floats (default '.3g').
"""
if names:
emit = attrgetter('name')
else:
emit = attrgetter('id')
objective_reactions = linear_reaction_coefficients(model)
boundary_reactions = model.exchanges
summary_rxns = set(objective_reactions.keys()).union(boundary_reactions)
if solution is None:
model.slim_optimize(error_value=None)
solution = get_solution(model, reactions=summary_rxns)
# Create a dataframe of objective fluxes
obj_fluxes = pd.DataFrame({key: solution[key.id] * value for key,
value in iteritems(objective_reactions)},
index=['flux']).T
obj_fluxes['id'] = obj_fluxes.apply(
lambda x: format_long_string(x.name.id, 15), 1)
# Build a dictionary of metabolite production from the boundary reactions
metabolites = {m for r in boundary_reactions for m in r.metabolites}
index = sorted(metabolites, key=attrgetter('id'))
metabolite_fluxes = pd.DataFrame({
'id': [format_long_string(emit(m), 15) for m in index],
'flux': zeros(len(index), dtype=float)
}, index=[m.id for m in index])
for rxn in boundary_reactions:
for met, stoich in iteritems(rxn.metabolites):
metabolite_fluxes.at[met.id, 'flux'] += stoich * solution[rxn.id]
# Calculate FVA results if requested
if fva is not None:
if len(index) != len(boundary_reactions):
LOGGER.warning(
"There exists more than one boundary reaction per metabolite. "
"Please be careful when evaluating flux ranges.")
metabolite_fluxes['fmin'] = zeros(len(index), dtype=float)
metabolite_fluxes['fmax'] = zeros(len(index), dtype=float)
if hasattr(fva, 'columns'):
fva_results = fva
else:
fva_results = flux_variability_analysis(
model, reaction_list=boundary_reactions,
fraction_of_optimum=fva)
for rxn in boundary_reactions:
for met, stoich in iteritems(rxn.metabolites):
fmin = stoich * fva_results.at[rxn.id, 'minimum']
fmax = stoich * fva_results.at[rxn.id, 'maximum']
# Correct 'max' and 'min' for negative values
if abs(fmin) <= abs(fmax):
metabolite_fluxes.at[met.id, 'fmin'] += fmin
metabolite_fluxes.at[met.id, 'fmax'] += fmax
else:
metabolite_fluxes.at[met.id, 'fmin'] += fmax
metabolite_fluxes.at[met.id, 'fmax'] += fmin
# Generate a dataframe of boundary fluxes
metabolite_fluxes = _process_flux_dataframe(
metabolite_fluxes, fva, threshold, floatfmt)
# Begin building string output table
def get_str_table(species_df, fva=False):
"""Formats a string table for each column"""
if fva:
return tabulate(
species_df.loc[:, ['id', 'flux', 'fva_fmt']].values,
floatfmt=floatfmt, tablefmt='simple',
headers=['id', 'Flux', 'Range']).split('\n')
else:
return tabulate(species_df.loc[:, ['id', 'flux']].values,
floatfmt=floatfmt, tablefmt='plain').split('\n')
in_table = get_str_table(
metabolite_fluxes[metabolite_fluxes['is_input']], fva=fva is not None)
out_table = get_str_table(
metabolite_fluxes[~metabolite_fluxes['is_input']], fva=fva is not None)
obj_table = get_str_table(obj_fluxes, fva=False)
# Print nested output table
print_(tabulate(
[entries for entries in zip_longest(in_table, out_table, obj_table)],
headers=['IN FLUXES', 'OUT FLUXES', 'OBJECTIVES'], tablefmt='simple')) |
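The final side-by-side layout works by splitting each sub-table into a list of lines and zipping those lists with `zip_longest`, which pads the shorter columns. The mechanism in isolation, stripped of tabulate (an explicit empty-string fill is used here for readability; the code above leaves the fill as `None` and lets tabulate render it):

```python
# Each sub-table is a list of pre-rendered lines; zip_longest pairs them
# row-by-row, padding the shorter table.
from itertools import zip_longest

in_table = ["o2_e  21.8", "glc_e 10.0"]
out_table = ["co2_e 22.8"]
rows = [list(entry) for entry in zip_longest(in_table, out_table, fillvalue="")]
print(rows)  # [['o2_e  21.8', 'co2_e 22.8'], ['glc_e 10.0', '']]
```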
def grok_filter_name(element):
"""Extracts the name, which may be embedded, for a Jinja2
filter node"""
e_name = None
if element.name == 'default':
if isinstance(element.node, jinja2.nodes.Getattr):
e_name = element.node.node.name
else:
e_name = element.node.name
return e_name | Extracts the name, which may be embedded, for a Jinja2
filter node | Below is the instruction that describes the task:
### Input:
Extracts the name, which may be embedded, for a Jinja2
filter node
### Response:
def grok_filter_name(element):
"""Extracts the name, which may be embedded, for a Jinja2
filter node"""
e_name = None
if element.name == 'default':
if isinstance(element.node, jinja2.nodes.Getattr):
e_name = element.node.node.name
else:
e_name = element.node.name
return e_name |
def update_records(self, domain, records):
"""
Modifies existing records for a domain.
"""
if not isinstance(records, list):
raise TypeError("Expected records of type list")
uri = "/domains/%s/records" % utils.get_id(domain)
resp, resp_body = self._async_call(uri, method="PUT",
body={"records": records},
error_class=exc.DomainRecordUpdateFailed, has_response=False)
return resp_body | Modifies existing records for a domain. | Below is the instruction that describes the task:
### Input:
Modifies existing records for a domain.
### Response:
def update_records(self, domain, records):
"""
Modifies existing records for a domain.
"""
if not isinstance(records, list):
raise TypeError("Expected records of type list")
uri = "/domains/%s/records" % utils.get_id(domain)
resp, resp_body = self._async_call(uri, method="PUT",
body={"records": records},
error_class=exc.DomainRecordUpdateFailed, has_response=False)
return resp_body |
def _asarray(self, vec):
"""Convert ``x`` to an array.
Here the indices are changed such that the "outer" indices come last
in order to have the access order as `numpy.linalg.svd` needs it.
This is the inverse of `_asvector`.
"""
shape = self.domain[0, 0].shape + self.pshape
arr = np.empty(shape, dtype=self.domain.dtype)
for i, xi in enumerate(vec):
for j, xij in enumerate(xi):
arr[..., i, j] = xij.asarray()
return arr | Convert ``x`` to an array.
Here the indices are changed such that the "outer" indices come last
in order to have the access order as `numpy.linalg.svd` needs it.
This is the inverse of `_asvector`. | Below is the instruction that describes the task:
### Input:
Convert ``x`` to an array.
Here the indices are changed such that the "outer" indices come last
in order to have the access order as `numpy.linalg.svd` needs it.
This is the inverse of `_asvector`.
### Response:
def _asarray(self, vec):
"""Convert ``x`` to an array.
Here the indices are changed such that the "outer" indices come last
in order to have the access order as `numpy.linalg.svd` needs it.
This is the inverse of `_asvector`.
"""
shape = self.domain[0, 0].shape + self.pshape
arr = np.empty(shape, dtype=self.domain.dtype)
for i, xi in enumerate(vec):
for j, xij in enumerate(xi):
arr[..., i, j] = xij.asarray()
return arr |
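The axis order matters because `numpy.linalg.svd` operates on the trailing two axes. A concrete sketch with a hypothetical 2×2 block of 3-element vectors, filled exactly the way `_asarray` fills its output:

```python
# A 2x2 grid of length-3 vectors becomes an array of shape (3, 2, 2):
# the per-element axis comes first, the "outer" (i, j) indices last.
import numpy as np

vec = [[np.array([1., 2., 3.]), np.array([4., 5., 6.])],
       [np.array([7., 8., 9.]), np.array([10., 11., 12.])]]

arr = np.empty((3, 2, 2))
for i, xi in enumerate(vec):
    for j, xij in enumerate(xi):
        arr[..., i, j] = xij

print(arr.shape)  # (3, 2, 2)
# arr[0] is the 2x2 matrix of first components:
# arr[0].tolist() == [[1.0, 4.0], [7.0, 10.0]]
```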
def cmd_fence(self, args):
'''fence commands'''
if len(args) < 1:
self.print_usage()
return
if args[0] == "enable":
self.set_fence_enabled(1)
elif args[0] == "disable":
self.set_fence_enabled(0)
elif args[0] == "load":
if len(args) != 2:
print("usage: fence load <filename>")
return
self.load_fence(args[1])
elif args[0] == "list":
self.list_fence(None)
elif args[0] == "move":
self.cmd_fence_move(args[1:])
elif args[0] == "remove":
self.cmd_fence_remove(args[1:])
elif args[0] == "save":
if len(args) != 2:
print("usage: fence save <filename>")
return
self.list_fence(args[1])
elif args[0] == "show":
if len(args) != 2:
print("usage: fence show <filename>")
return
self.fenceloader.load(args[1])
self.have_list = True
elif args[0] == "draw":
if not 'draw_lines' in self.mpstate.map_functions:
print("No map drawing available")
return
self.mpstate.map_functions['draw_lines'](self.fence_draw_callback)
print("Drawing fence on map")
elif args[0] == "clear":
self.param_set('FENCE_TOTAL', 0, 3)
else:
self.print_usage() | fence commands | Below is the instruction that describes the task:
### Input:
fence commands
### Response:
def cmd_fence(self, args):
'''fence commands'''
if len(args) < 1:
self.print_usage()
return
if args[0] == "enable":
self.set_fence_enabled(1)
elif args[0] == "disable":
self.set_fence_enabled(0)
elif args[0] == "load":
if len(args) != 2:
print("usage: fence load <filename>")
return
self.load_fence(args[1])
elif args[0] == "list":
self.list_fence(None)
elif args[0] == "move":
self.cmd_fence_move(args[1:])
elif args[0] == "remove":
self.cmd_fence_remove(args[1:])
elif args[0] == "save":
if len(args) != 2:
print("usage: fence save <filename>")
return
self.list_fence(args[1])
elif args[0] == "show":
if len(args) != 2:
print("usage: fence show <filename>")
return
self.fenceloader.load(args[1])
self.have_list = True
elif args[0] == "draw":
if not 'draw_lines' in self.mpstate.map_functions:
print("No map drawing available")
return
self.mpstate.map_functions['draw_lines'](self.fence_draw_callback)
print("Drawing fence on map")
elif args[0] == "clear":
self.param_set('FENCE_TOTAL', 0, 3)
else:
self.print_usage() |
def next_frame_id(self):
"""
Gets a byte of the next valid frame ID (1 - 255), increments the
internal _frame_id counter and wraps it back to 1 if necessary.
"""
# Python 2/3 compatible way of converting 1 to "\x01" in py2 or b"\x01"
# in py3.
fid = bytes(bytearray((self._frame_id,)))
self._frame_id += 1
if self._frame_id > 0xFF:
self._frame_id = 1
try:
del self._rx_frames[fid]
except KeyError:
pass
return fid | Gets a byte of the next valid frame ID (1 - 255), increments the
internal _frame_id counter and wraps it back to 1 if necessary. | Below is the instruction that describes the task:
### Input:
Gets a byte of the next valid frame ID (1 - 255), increments the
internal _frame_id counter and wraps it back to 1 if necessary.
### Response:
def next_frame_id(self):
"""
Gets a byte of the next valid frame ID (1 - 255), increments the
internal _frame_id counter and wraps it back to 1 if necessary.
"""
# Python 2/3 compatible way of converting 1 to "\x01" in py2 or b"\x01"
# in py3.
fid = bytes(bytearray((self._frame_id,)))
self._frame_id += 1
if self._frame_id > 0xFF:
self._frame_id = 1
try:
del self._rx_frames[fid]
except KeyError:
pass
return fid |
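The counter's wraparound behaviour in isolation: IDs run 1 through 255 (0xFF) and then restart at 1, so 0 is never emitted (skipping 0 is conventional for XBee frame IDs, where 0 means "no response requested" — an assumption here, not stated by the source). The `_rx_frames` bookkeeping is omitted.

```python
# Minimal reproduction of the ID generation and wraparound only.
class FrameCounter:
    def __init__(self):
        self._frame_id = 1

    def next_frame_id(self):
        fid = bytes(bytearray((self._frame_id,)))  # works on Python 2 and 3
        self._frame_id += 1
        if self._frame_id > 0xFF:
            self._frame_id = 1                     # wrap past 255 back to 1
        return fid

c = FrameCounter()
ids = [c.next_frame_id() for _ in range(256)]
print(ids[0], ids[254], ids[255])  # b'\x01' b'\xff' b'\x01'
```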
def calculable(self):
"""Check if class is calculable by its kwargs"""
self._thermo = ""
if self.kwargs["T"] and self.kwargs["P"]:
self._thermo = "TP"
elif self.kwargs["P"] and self.kwargs["h"] is not None:
self._thermo = "Ph"
elif self.kwargs["P"] and self.kwargs["s"] is not None:
self._thermo = "Ps"
# TODO: Add other pairs definitions options
# elif self.kwargs["P"] and self.kwargs["v"]:
# self._thermo = "Pv"
# elif self.kwargs["T"] and self.kwargs["s"] is not None:
# self._thermo = "Ts"
elif self.kwargs["h"] is not None and self.kwargs["s"] is not None:
self._thermo = "hs"
elif self.kwargs["T"] and self.kwargs["x"] is not None:
self._thermo = "Tx"
elif self.kwargs["P"] and self.kwargs["x"] is not None:
self._thermo = "Px"
return self._thermo | Check if class is calculable by its kwargs | Below is the instruction that describes the task:
### Input:
Check if class is calculable by its kwargs
### Response:
def calculable(self):
"""Check if class is calculable by its kwargs"""
self._thermo = ""
if self.kwargs["T"] and self.kwargs["P"]:
self._thermo = "TP"
elif self.kwargs["P"] and self.kwargs["h"] is not None:
self._thermo = "Ph"
elif self.kwargs["P"] and self.kwargs["s"] is not None:
self._thermo = "Ps"
# TODO: Add other pairs definitions options
# elif self.kwargs["P"] and self.kwargs["v"]:
# self._thermo = "Pv"
# elif self.kwargs["T"] and self.kwargs["s"] is not None:
# self._thermo = "Ts"
elif self.kwargs["h"] is not None and self.kwargs["s"] is not None:
self._thermo = "hs"
elif self.kwargs["T"] and self.kwargs["x"] is not None:
self._thermo = "Tx"
elif self.kwargs["P"] and self.kwargs["x"] is not None:
self._thermo = "Px"
return self._thermo |
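The pair-selection cascade in `calculable` above reduces to a pure function over the kwargs dict; this is a minimal sketch with illustrative names, covering only the branches enabled in the record:

```python
# Hedged sketch of the property-pair selection above, as a plain function
# over a dict instead of a method on the full class.
def thermo_pair(kwargs):
    if kwargs.get("T") and kwargs.get("P"):
        return "TP"
    if kwargs.get("P") and kwargs.get("h") is not None:
        return "Ph"
    if kwargs.get("P") and kwargs.get("s") is not None:
        return "Ps"
    if kwargs.get("h") is not None and kwargs.get("s") is not None:
        return "hs"
    if kwargs.get("T") and kwargs.get("x") is not None:
        return "Tx"
    if kwargs.get("P") and kwargs.get("x") is not None:
        return "Px"
    return ""  # not calculable with the given inputs

pair = thermo_pair({"T": 300, "P": 101325})  # "TP"
```

Note the `is not None` guards: they let legitimate zero values (e.g. quality `x = 0`) select a pair, while a plain truthiness test would reject them.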
def novo(args):
"""
%prog novo reads.fastq
Reference-free tGBS pipeline v1.
"""
from jcvi.assembly.kmer import jellyfish, histogram
from jcvi.assembly.preprocess import diginorm
from jcvi.formats.fasta import filter as fasta_filter, format
from jcvi.apps.cdhit import filter as cdhit_filter
p = OptionParser(novo.__doc__)
p.add_option("--technology", choices=("illumina", "454", "iontorrent"),
default="iontorrent", help="Sequencing platform")
p.set_depth(depth=50)
p.set_align(pctid=96)
p.set_home("cdhit", default="/usr/local/bin/")
p.set_home("fiona", default="/usr/local/bin/")
p.set_home("jellyfish", default="/usr/local/bin/")
p.set_cpus()
opts, args = p.parse_args(args)
if len(args) != 1:
sys.exit(not p.print_help())
fastqfile, = args
cpus = opts.cpus
depth = opts.depth
pf, sf = fastqfile.rsplit(".", 1)
diginormfile = pf + ".diginorm." + sf
if need_update(fastqfile, diginormfile):
diginorm([fastqfile, "--single", "--depth={0}".format(depth)])
keepabund = fastqfile + ".keep.abundfilt"
sh("cp -s {0} {1}".format(keepabund, diginormfile))
jf = pf + "-K23.histogram"
if need_update(diginormfile, jf):
jellyfish([diginormfile, "--prefix={0}".format(pf),
"--cpus={0}".format(cpus),
"--jellyfish_home={0}".format(opts.jellyfish_home)])
genomesize = histogram([jf, pf, "23"])
fiona = pf + ".fiona.fa"
if need_update(diginormfile, fiona):
cmd = op.join(opts.fiona_home, "fiona")
cmd += " -g {0} -nt {1} --sequencing-technology {2}".\
format(genomesize, cpus, opts.technology)
cmd += " -vv {0} {1}".format(diginormfile, fiona)
logfile = pf + ".fiona.log"
sh(cmd, outfile=logfile, errfile=logfile)
dedup = "cdhit"
pctid = opts.pctid
cons = fiona + ".P{0}.{1}.consensus.fasta".format(pctid, dedup)
if need_update(fiona, cons):
deduplicate([fiona, "--consensus", "--reads",
"--pctid={0}".format(pctid),
"--cdhit_home={0}".format(opts.cdhit_home)])
filteredfile = pf + ".filtered.fasta"
if need_update(cons, filteredfile):
covfile = pf + ".cov.fasta"
cdhit_filter([cons, "--outfile={0}".format(covfile),
"--minsize={0}".format(depth / 5)])
fasta_filter([covfile, "50", "--outfile={0}".format(filteredfile)])
finalfile = pf + ".final.fasta"
if need_update(filteredfile, finalfile):
format([filteredfile, finalfile, "--sequential=replace",
"--prefix={0}_".format(pf)]) | %prog novo reads.fastq
Reference-free tGBS pipeline v1. | Below is the instruction that describes the task:
### Input:
%prog novo reads.fastq
Reference-free tGBS pipeline v1.
### Response:
def novo(args):
"""
%prog novo reads.fastq
Reference-free tGBS pipeline v1.
"""
from jcvi.assembly.kmer import jellyfish, histogram
from jcvi.assembly.preprocess import diginorm
from jcvi.formats.fasta import filter as fasta_filter, format
from jcvi.apps.cdhit import filter as cdhit_filter
p = OptionParser(novo.__doc__)
p.add_option("--technology", choices=("illumina", "454", "iontorrent"),
default="iontorrent", help="Sequencing platform")
p.set_depth(depth=50)
p.set_align(pctid=96)
p.set_home("cdhit", default="/usr/local/bin/")
p.set_home("fiona", default="/usr/local/bin/")
p.set_home("jellyfish", default="/usr/local/bin/")
p.set_cpus()
opts, args = p.parse_args(args)
if len(args) != 1:
sys.exit(not p.print_help())
fastqfile, = args
cpus = opts.cpus
depth = opts.depth
pf, sf = fastqfile.rsplit(".", 1)
diginormfile = pf + ".diginorm." + sf
if need_update(fastqfile, diginormfile):
diginorm([fastqfile, "--single", "--depth={0}".format(depth)])
keepabund = fastqfile + ".keep.abundfilt"
sh("cp -s {0} {1}".format(keepabund, diginormfile))
jf = pf + "-K23.histogram"
if need_update(diginormfile, jf):
jellyfish([diginormfile, "--prefix={0}".format(pf),
"--cpus={0}".format(cpus),
"--jellyfish_home={0}".format(opts.jellyfish_home)])
genomesize = histogram([jf, pf, "23"])
fiona = pf + ".fiona.fa"
if need_update(diginormfile, fiona):
cmd = op.join(opts.fiona_home, "fiona")
cmd += " -g {0} -nt {1} --sequencing-technology {2}".\
format(genomesize, cpus, opts.technology)
cmd += " -vv {0} {1}".format(diginormfile, fiona)
logfile = pf + ".fiona.log"
sh(cmd, outfile=logfile, errfile=logfile)
dedup = "cdhit"
pctid = opts.pctid
cons = fiona + ".P{0}.{1}.consensus.fasta".format(pctid, dedup)
if need_update(fiona, cons):
deduplicate([fiona, "--consensus", "--reads",
"--pctid={0}".format(pctid),
"--cdhit_home={0}".format(opts.cdhit_home)])
filteredfile = pf + ".filtered.fasta"
if need_update(cons, filteredfile):
covfile = pf + ".cov.fasta"
cdhit_filter([cons, "--outfile={0}".format(covfile),
"--minsize={0}".format(depth / 5)])
fasta_filter([covfile, "50", "--outfile={0}".format(filteredfile)])
finalfile = pf + ".final.fasta"
if need_update(filteredfile, finalfile):
format([filteredfile, finalfile, "--sequential=replace",
"--prefix={0}_".format(pf)]) |
def ToByteArray(self):
"""
Serialize self and get the byte stream.
Returns:
bytes: serialized object.
"""
ms = StreamManager.GetStream()
writer = BinaryWriter(ms)
self.Serialize(writer)
retval = ms.ToArray()
StreamManager.ReleaseStream(ms)
return retval | Serialize self and get the byte stream.
Returns:
bytes: serialized object. | Below is the instruction that describes the task:
### Input:
Serialize self and get the byte stream.
Returns:
bytes: serialized object.
### Response:
def ToByteArray(self):
"""
Serialize self and get the byte stream.
Returns:
bytes: serialized object.
"""
ms = StreamManager.GetStream()
writer = BinaryWriter(ms)
self.Serialize(writer)
retval = ms.ToArray()
StreamManager.ReleaseStream(ms)
return retval |
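The `ToByteArray` record above follows a common serialize-to-buffer pattern. Here is a hedged sketch using `io.BytesIO` in place of the `StreamManager`/`BinaryWriter` pair; the `Point` class and its fixed-width layout are invented purely for illustration:

```python
import io
import struct

# Illustrative stand-in: a value object that writes itself to a stream,
# mirroring the Serialize/ToByteArray split in the record above.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def Serialize(self, writer):
        # Two little-endian int32 fields.
        writer.write(struct.pack("<ii", self.x, self.y))

    def ToByteArray(self):
        ms = io.BytesIO()      # stand-in for StreamManager.GetStream()
        self.Serialize(ms)
        return ms.getvalue()   # stand-in for ms.ToArray()

data = Point(1, 2).ToByteArray()  # 8 bytes total
```

The shape is the same as the record: acquire a stream, delegate to `Serialize`, extract the bytes, release the stream (here, `BytesIO` is simply garbage-collected).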
def copy_file(
source_path,
target_path,
allow_undo=True,
no_confirm=False,
rename_on_collision=True,
silent=False,
extra_flags=0,
hWnd=None
):
"""Perform a shell-based file copy. Copying in
this way allows the possibility of undo, auto-renaming,
and showing the "flying file" animation during the copy.
The default options allow for undo, don't automatically
clobber on a name clash, automatically rename on collision
and display the animation.
"""
return _file_operation(
shellcon.FO_COPY,
source_path,
target_path,
allow_undo,
no_confirm,
rename_on_collision,
silent,
extra_flags,
hWnd
) | Perform a shell-based file copy. Copying in
this way allows the possibility of undo, auto-renaming,
and showing the "flying file" animation during the copy.
The default options allow for undo, don't automatically
clobber on a name clash, automatically rename on collision
and display the animation. | Below is the instruction that describes the task:
### Input:
Perform a shell-based file copy. Copying in
this way allows the possibility of undo, auto-renaming,
and showing the "flying file" animation during the copy.
The default options allow for undo, don't automatically
clobber on a name clash, automatically rename on collision
and display the animation.
### Response:
def copy_file(
source_path,
target_path,
allow_undo=True,
no_confirm=False,
rename_on_collision=True,
silent=False,
extra_flags=0,
hWnd=None
):
"""Perform a shell-based file copy. Copying in
this way allows the possibility of undo, auto-renaming,
and showing the "flying file" animation during the copy.
The default options allow for undo, don't automatically
clobber on a name clash, automatically rename on collision
and display the animation.
"""
return _file_operation(
shellcon.FO_COPY,
source_path,
target_path,
allow_undo,
no_confirm,
rename_on_collision,
silent,
extra_flags,
hWnd
) |
def publish_receiver_count(self, service, routing_id):
'''Get the number of peers that would handle a particular publish
:param service: the service name
:type service: anything hash-able
:param routing_id: the id used for limiting the service handlers
:type routing_id: int
'''
peers = len(list(self._dispatcher.find_peer_routes(
const.MSG_TYPE_PUBLISH, service, routing_id)))
if self._dispatcher.locally_handles(const.MSG_TYPE_PUBLISH,
service, routing_id):
return peers + 1
return peers | Get the number of peers that would handle a particular publish
:param service: the service name
:type service: anything hash-able
:param routing_id: the id used for limiting the service handlers
:type routing_id: int | Below is the instruction that describes the task:
### Input:
Get the number of peers that would handle a particular publish
:param service: the service name
:type service: anything hash-able
:param routing_id: the id used for limiting the service handlers
:type routing_id: int
### Response:
def publish_receiver_count(self, service, routing_id):
'''Get the number of peers that would handle a particular publish
:param service: the service name
:type service: anything hash-able
:param routing_id: the id used for limiting the service handlers
:type routing_id: int
'''
peers = len(list(self._dispatcher.find_peer_routes(
const.MSG_TYPE_PUBLISH, service, routing_id)))
if self._dispatcher.locally_handles(const.MSG_TYPE_PUBLISH,
service, routing_id):
return peers + 1
return peers |
def load_http_response(cls, http_response):
"""
This method should return an instantiated class and set its response
to the requests.Response object.
"""
if not http_response.ok:
raise APIResponseError(http_response.text)
c = cls(http_response)
c.response = http_response
RateLimits.getRateLimits(cls.__name__).set(c.response.headers)
return c | This method should return an instantiated class and set its response
to the requests.Response object. | Below is the instruction that describes the task:
### Input:
This method should return an instantiated class and set its response
to the requests.Response object.
### Response:
def load_http_response(cls, http_response):
"""
This method should return an instantiated class and set its response
to the requests.Response object.
"""
if not http_response.ok:
raise APIResponseError(http_response.text)
c = cls(http_response)
c.response = http_response
RateLimits.getRateLimits(cls.__name__).set(c.response.headers)
return c |
def encrypt_files(self, file_list, force_nocompress=False, force_compress=False, armored=False, checksum=False):
"""public method for multiple file encryption with optional compression, ASCII armored formatting, and file hash digest generation"""
for the_file in file_list:
self.encrypt_file(the_file, force_nocompress, force_compress, armored, checksum) | public method for multiple file encryption with optional compression, ASCII armored formatting, and file hash digest generation | Below is the instruction that describes the task:
### Input:
public method for multiple file encryption with optional compression, ASCII armored formatting, and file hash digest generation
### Response:
def encrypt_files(self, file_list, force_nocompress=False, force_compress=False, armored=False, checksum=False):
"""public method for multiple file encryption with optional compression, ASCII armored formatting, and file hash digest generation"""
for the_file in file_list:
self.encrypt_file(the_file, force_nocompress, force_compress, armored, checksum) |
def option_parser():
"""Option Parser to give various options."""
usage = '''
$ ./crawler -d5 <url>
Here in this case it goes till depth of 5 and url is target URL to
start crawling.
'''
version = "2.0.0"
parser = optparse.OptionParser(usage=usage, version=version)
parser.add_option("-l", "--links", action="store_true",
default=False, dest="links", help="links for target url")
parser.add_option("-d", "--depth", action="store", type="int",
default=30, dest="depth", help="Maximum depth traverse")
opts, args = parser.parse_args()
if len(args) < 1:
parser.print_help()
raise SystemExit(1)
return opts, args | Option Parser to give various options. | Below is the instruction that describes the task:
### Input:
Option Parser to give various options.
### Response:
def option_parser():
"""Option Parser to give various options."""
usage = '''
$ ./crawler -d5 <url>
Here in this case it goes till depth of 5 and url is target URL to
start crawling.
'''
version = "2.0.0"
parser = optparse.OptionParser(usage=usage, version=version)
parser.add_option("-l", "--links", action="store_true",
default=False, dest="links", help="links for target url")
parser.add_option("-d", "--depth", action="store", type="int",
default=30, dest="depth", help="Maximum depth traverse")
opts, args = parser.parse_args()
if len(args) < 1:
parser.print_help()
raise SystemExit(1)
return opts, args |
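The `option_parser` record above can be exercised without touching `sys.argv` by handing `parse_args` an explicit argument list, matching the `-d5` usage shown in its docstring:

```python
import optparse

# Sketch of the parser from the record above, with the target-URL
# positional argument supplied explicitly instead of via sys.argv.
def build_parser():
    parser = optparse.OptionParser(usage="%prog [-l] [-d DEPTH] url")
    parser.add_option("-l", "--links", action="store_true",
                      default=False, dest="links", help="links for target url")
    parser.add_option("-d", "--depth", action="store", type="int",
                      default=30, dest="depth", help="Maximum depth traverse")
    return parser

opts, args = build_parser().parse_args(["-d5", "http://example.com"])
```

`optparse` accepts the attached short-option form `-d5` and, because of `type="int"`, converts the value to an integer before storing it on `opts.depth`.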
def update_current_retention_level(self, value):
"""Set a new value for the current retention level.
This updates the value of self.retain_files for an updated value of the
retention level.
Parameters
-----------
value : int
The new value to use for the retention level.
"""
# Determine the level at which output files should be kept
self.current_retention_level = value
try:
global_retention_level = \
self.cp.get_opt_tags("workflow", "file-retention-level",
self.tags+[self.name])
except ConfigParser.Error:
msg="Cannot find file-retention-level in [workflow] section "
msg+="of the configuration file. Setting a default value of "
msg+="retain all files."
logging.warn(msg)
self.retain_files = True
self.global_retention_threshold = 1
self.cp.set("workflow", "file-retention-level", "all_files")
else:
# FIXME: Are these names suitably descriptive?
retention_choices = {
'all_files' : 1,
'all_triggers' : 2,
'merged_triggers' : 3,
'results' : 4
}
try:
self.global_retention_threshold = \
retention_choices[global_retention_level]
except KeyError:
err_msg = "Cannot recognize the file-retention-level in the "
err_msg += "[workflow] section of the ini file. "
err_msg += "Got : {0}.".format(global_retention_level)
err_msg += "Valid options are: 'all_files', 'all_triggers',"
err_msg += "'merged_triggers' or 'results' "
raise ValueError(err_msg)
if self.current_retention_level == 5:
self.retain_files = True
if type(self).__name__ in Executable._warned_classes_list:
pass
else:
warn_msg = "Attribute current_retention_level has not "
warn_msg += "been set in class {0}. ".format(type(self))
warn_msg += "This value should be set explicitly. "
warn_msg += "All output from this class will be stored."
logging.warn(warn_msg)
Executable._warned_classes_list.append(type(self).__name__)
elif self.global_retention_threshold > self.current_retention_level:
self.retain_files = False
else:
self.retain_files = True | Set a new value for the current retention level.
This updates the value of self.retain_files for an updated value of the
retention level.
Parameters
-----------
value : int
The new value to use for the retention level. | Below is the instruction that describes the task:
### Input:
Set a new value for the current retention level.
This updates the value of self.retain_files for an updated value of the
retention level.
Parameters
-----------
value : int
The new value to use for the retention level.
### Response:
def update_current_retention_level(self, value):
"""Set a new value for the current retention level.
This updates the value of self.retain_files for an updated value of the
retention level.
Parameters
-----------
value : int
The new value to use for the retention level.
"""
# Determine the level at which output files should be kept
self.current_retention_level = value
try:
global_retention_level = \
self.cp.get_opt_tags("workflow", "file-retention-level",
self.tags+[self.name])
except ConfigParser.Error:
msg="Cannot find file-retention-level in [workflow] section "
msg+="of the configuration file. Setting a default value of "
msg+="retain all files."
logging.warn(msg)
self.retain_files = True
self.global_retention_threshold = 1
self.cp.set("workflow", "file-retention-level", "all_files")
else:
# FIXME: Are these names suitably descriptive?
retention_choices = {
'all_files' : 1,
'all_triggers' : 2,
'merged_triggers' : 3,
'results' : 4
}
try:
self.global_retention_threshold = \
retention_choices[global_retention_level]
except KeyError:
err_msg = "Cannot recognize the file-retention-level in the "
err_msg += "[workflow] section of the ini file. "
err_msg += "Got : {0}.".format(global_retention_level)
err_msg += "Valid options are: 'all_files', 'all_triggers',"
err_msg += "'merged_triggers' or 'results' "
raise ValueError(err_msg)
if self.current_retention_level == 5:
self.retain_files = True
if type(self).__name__ in Executable._warned_classes_list:
pass
else:
warn_msg = "Attribute current_retention_level has not "
warn_msg += "been set in class {0}. ".format(type(self))
warn_msg += "This value should be set explicitly. "
warn_msg += "All output from this class will be stored."
logging.warn(warn_msg)
Executable._warned_classes_list.append(type(self).__name__)
elif self.global_retention_threshold > self.current_retention_level:
self.retain_files = False
else:
self.retain_files = True |
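Stripped of its configuration plumbing, the retention decision in the record above is a threshold comparison; the function and names below are illustrative, not part of the original class:

```python
# Minimal sketch of the retention-level lookup and comparison above.
RETENTION_CHOICES = {
    "all_files": 1,
    "all_triggers": 2,
    "merged_triggers": 3,
    "results": 4,
}

def retain(global_level_name, current_level):
    threshold = RETENTION_CHOICES[global_level_name]
    # Level 5 always retains (the original also warns that the class
    # never set its level explicitly).
    if current_level == 5:
        return True
    # Keep the files unless the configured threshold is stricter
    # (numerically larger) than this node's own level.
    return threshold <= current_level

keep = retain("results", 3)  # False: threshold 4 > level 3
```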
def find_word_prob(word_string, word_total=sum(WORD_DISTRIBUTION.values())):
'''
Finds the relative probability of the word appearing given context of a base corpus.
Returns this probability value as a float instance.
'''
if word_string is None:
return 0
elif isinstance(word_string, str):
return WORD_DISTRIBUTION[word_string] / word_total
else:
raise InputError("string or none type variable not passed as argument to find_word_prob") | Finds the relative probability of the word appearing given context of a base corpus.
Returns this probability value as a float instance. | Below is the instruction that describes the task:
### Input:
Finds the relative probability of the word appearing given context of a base corpus.
Returns this probability value as a float instance.
### Response:
def find_word_prob(word_string, word_total=sum(WORD_DISTRIBUTION.values())):
'''
Finds the relative probability of the word appearing given context of a base corpus.
Returns this probability value as a float instance.
'''
if word_string is None:
return 0
elif isinstance(word_string, str):
return WORD_DISTRIBUTION[word_string] / word_total
else:
raise InputError("string or none type variable not passed as argument to find_word_prob") |
def _run_cnvkit_cancer(items, background):
"""Run CNVkit on a tumor/normal pair.
"""
paired = vcfutils.get_paired_bams([x["align_bam"] for x in items], items)
normal_data = [x for x in items if dd.get_sample_name(x) != paired.tumor_name]
tumor_ready, normal_ready = _match_batches(paired.tumor_data, normal_data[0] if normal_data else None)
ckouts = _run_cnvkit_shared([tumor_ready], [normal_ready] if normal_ready else [])
if not ckouts:
return items
assert len(ckouts) == 1
tumor_data = _associate_cnvkit_out(ckouts, [paired.tumor_data], is_somatic=True)
return tumor_data + normal_data | Run CNVkit on a tumor/normal pair. | Below is the instruction that describes the task:
### Input:
Run CNVkit on a tumor/normal pair.
### Response:
def _run_cnvkit_cancer(items, background):
"""Run CNVkit on a tumor/normal pair.
"""
paired = vcfutils.get_paired_bams([x["align_bam"] for x in items], items)
normal_data = [x for x in items if dd.get_sample_name(x) != paired.tumor_name]
tumor_ready, normal_ready = _match_batches(paired.tumor_data, normal_data[0] if normal_data else None)
ckouts = _run_cnvkit_shared([tumor_ready], [normal_ready] if normal_ready else [])
if not ckouts:
return items
assert len(ckouts) == 1
tumor_data = _associate_cnvkit_out(ckouts, [paired.tumor_data], is_somatic=True)
return tumor_data + normal_data |
def _islots(self):
""" Return an iterator with the inferred slots. """
if "__slots__" not in self.locals:
return None
for slots in self.igetattr("__slots__"):
# check if __slots__ is a valid type
for meth in ITER_METHODS:
try:
slots.getattr(meth)
break
except exceptions.AttributeInferenceError:
continue
else:
continue
if isinstance(slots, node_classes.Const):
# a string. Ignore the following checks,
# but yield the node, only if it has a value
if slots.value:
yield slots
continue
if not hasattr(slots, "itered"):
# we can't obtain the values, maybe a .deque?
continue
if isinstance(slots, node_classes.Dict):
values = [item[0] for item in slots.items]
else:
values = slots.itered()
if values is util.Uninferable:
continue
if not values:
# Stop the iteration, because the class
# has an empty list of slots.
return values
for elt in values:
try:
for inferred in elt.infer():
if inferred is util.Uninferable:
continue
if not isinstance(
inferred, node_classes.Const
) or not isinstance(inferred.value, str):
continue
if not inferred.value:
continue
yield inferred
except exceptions.InferenceError:
continue
return None | Return an iterator with the inferred slots. | Below is the instruction that describes the task:
### Input:
Return an iterator with the inferred slots.
### Response:
def _islots(self):
""" Return an iterator with the inferred slots. """
if "__slots__" not in self.locals:
return None
for slots in self.igetattr("__slots__"):
# check if __slots__ is a valid type
for meth in ITER_METHODS:
try:
slots.getattr(meth)
break
except exceptions.AttributeInferenceError:
continue
else:
continue
if isinstance(slots, node_classes.Const):
# a string. Ignore the following checks,
# but yield the node, only if it has a value
if slots.value:
yield slots
continue
if not hasattr(slots, "itered"):
# we can't obtain the values, maybe a .deque?
continue
if isinstance(slots, node_classes.Dict):
values = [item[0] for item in slots.items]
else:
values = slots.itered()
if values is util.Uninferable:
continue
if not values:
# Stop the iteration, because the class
# has an empty list of slots.
return values
for elt in values:
try:
for inferred in elt.infer():
if inferred is util.Uninferable:
continue
if not isinstance(
inferred, node_classes.Const
) or not isinstance(inferred.value, str):
continue
if not inferred.value:
continue
yield inferred
except exceptions.InferenceError:
continue
return None |
def save_params(
self, f=None, f_params=None, f_optimizer=None, f_history=None):
"""Saves the module's parameters, history, and optimizer,
not the whole object.
To save the whole object, use pickle.
``f_params`` and ``f_optimizer`` uses PyTorchs'
:func:`~torch.save`.
Parameters
----------
f_params : file-like object, str, None (default=None)
Path of module parameters. Pass ``None`` to not save
f_optimizer : file-like object, str, None (default=None)
Path of optimizer. Pass ``None`` to not save
f_history : file-like object, str, None (default=None)
Path to history. Pass ``None`` to not save
f : deprecated
Examples
--------
>>> before = NeuralNetClassifier(mymodule)
>>> before.save_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
>>> after = NeuralNetClassifier(mymodule).initialize()
>>> after.load_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
"""
# TODO: Remove warning in a future release
if f is not None:
warnings.warn(
"f argument was renamed to f_params and will be removed "
"in the next release. To make your code future-proof it is "
"recommended to explicitly specify keyword arguments' names "
"instead of relying on positional order.",
DeprecationWarning)
f_params = f
if f_params is not None:
if not hasattr(self, 'module_'):
raise NotInitializedError(
"Cannot save parameters of an un-initialized model. "
"Please initialize first by calling .initialize() "
"or by fitting the model with .fit(...).")
torch.save(self.module_.state_dict(), f_params)
if f_optimizer is not None:
if not hasattr(self, 'optimizer_'):
raise NotInitializedError(
"Cannot save state of an un-initialized optimizer. "
"Please initialize first by calling .initialize() "
"or by fitting the model with .fit(...).")
torch.save(self.optimizer_.state_dict(), f_optimizer)
if f_history is not None:
self.history.to_file(f_history) | Saves the module's parameters, history, and optimizer,
not the whole object.
To save the whole object, use pickle.
``f_params`` and ``f_optimizer`` uses PyTorchs'
:func:`~torch.save`.
Parameters
----------
f_params : file-like object, str, None (default=None)
Path of module parameters. Pass ``None`` to not save
f_optimizer : file-like object, str, None (default=None)
Path of optimizer. Pass ``None`` to not save
f_history : file-like object, str, None (default=None)
Path to history. Pass ``None`` to not save
f : deprecated
Examples
--------
>>> before = NeuralNetClassifier(mymodule)
>>> before.save_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
>>> after = NeuralNetClassifier(mymodule).initialize()
>>> after.load_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json') | Below is the instruction that describes the task:
### Input:
Saves the module's parameters, history, and optimizer,
not the whole object.
To save the whole object, use pickle.
``f_params`` and ``f_optimizer`` uses PyTorchs'
:func:`~torch.save`.
Parameters
----------
f_params : file-like object, str, None (default=None)
Path of module parameters. Pass ``None`` to not save
f_optimizer : file-like object, str, None (default=None)
Path of optimizer. Pass ``None`` to not save
f_history : file-like object, str, None (default=None)
Path to history. Pass ``None`` to not save
f : deprecated
Examples
--------
>>> before = NeuralNetClassifier(mymodule)
>>> before.save_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
>>> after = NeuralNetClassifier(mymodule).initialize()
>>> after.load_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
### Response:
def save_params(
self, f=None, f_params=None, f_optimizer=None, f_history=None):
"""Saves the module's parameters, history, and optimizer,
not the whole object.
To save the whole object, use pickle.
``f_params`` and ``f_optimizer`` uses PyTorchs'
:func:`~torch.save`.
Parameters
----------
f_params : file-like object, str, None (default=None)
Path of module parameters. Pass ``None`` to not save
f_optimizer : file-like object, str, None (default=None)
Path of optimizer. Pass ``None`` to not save
f_history : file-like object, str, None (default=None)
Path to history. Pass ``None`` to not save
f : deprecated
Examples
--------
>>> before = NeuralNetClassifier(mymodule)
>>> before.save_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
>>> after = NeuralNetClassifier(mymodule).initialize()
>>> after.load_params(f_params='model.pkl',
>>> f_optimizer='optimizer.pkl',
>>> f_history='history.json')
"""
# TODO: Remove warning in a future release
if f is not None:
warnings.warn(
"f argument was renamed to f_params and will be removed "
"in the next release. To make your code future-proof it is "
"recommended to explicitly specify keyword arguments' names "
"instead of relying on positional order.",
DeprecationWarning)
f_params = f
if f_params is not None:
if not hasattr(self, 'module_'):
raise NotInitializedError(
"Cannot save parameters of an un-initialized model. "
"Please initialize first by calling .initialize() "
"or by fitting the model with .fit(...).")
torch.save(self.module_.state_dict(), f_params)
if f_optimizer is not None:
if not hasattr(self, 'optimizer_'):
raise NotInitializedError(
"Cannot save state of an un-initialized optimizer. "
"Please initialize first by calling .initialize() "
"or by fitting the model with .fit(...).")
torch.save(self.optimizer_.state_dict(), f_optimizer)
if f_history is not None:
self.history.to_file(f_history) |
def import_all_modules(package, skip=None, verbose=False, prefix="", depth=0):
"""Recursively imports all subpackages, modules, and submodules of a
given package.
'package' should be an imported package, not a string.
'skip' is a list of modules or subpackages not to import.
"""
skip = [] if skip is None else skip
for ff, modname, ispkg in pkgutil.walk_packages(path=package.__path__,
prefix=prefix,
onerror=lambda x: None):
if ff.path not in package.__path__[0]: # Solves weird bug
continue
if verbose:
print('\t'*depth,modname)
if modname in skip:
if verbose:
print('\t'*depth,'*Skipping*')
continue
module = '%s.%s' % (package.__name__,modname)
subpackage = importlib.import_module(module)
if ispkg:
import_all_modules(subpackage, skip=skip,
verbose=verbose,depth=depth+1) | Recursively imports all subpackages, modules, and submodules of a
given package.
'package' should be an imported package, not a string.
'skip' is a list of modules or subpackages not to import. | Below is the instruction that describes the task:
### Input:
Recursively imports all subpackages, modules, and submodules of a
given package.
'package' should be an imported package, not a string.
'skip' is a list of modules or subpackages not to import.
### Response:
def import_all_modules(package, skip=None, verbose=False, prefix="", depth=0):
"""Recursively imports all subpackages, modules, and submodules of a
given package.
'package' should be an imported package, not a string.
'skip' is a list of modules or subpackages not to import.
"""
skip = [] if skip is None else skip
for ff, modname, ispkg in pkgutil.walk_packages(path=package.__path__,
prefix=prefix,
onerror=lambda x: None):
if ff.path not in package.__path__[0]: # Solves weird bug
continue
if verbose:
print('\t'*depth,modname)
if modname in skip:
if verbose:
print('\t'*depth,'*Skipping*')
continue
module = '%s.%s' % (package.__name__,modname)
subpackage = importlib.import_module(module)
if ispkg:
import_all_modules(subpackage, skip=skip,
verbose=verbose,depth=depth+1) |
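The walk in `import_all_modules` above is driven by `pkgutil`; a smaller sketch that only lists (rather than imports) the modules directly under a package, shown here against the standard-library `json` package:

```python
import pkgutil
import json

def list_submodules(package):
    # Name the modules/subpackages directly under `package` without
    # importing them (the record above walks and imports recursively).
    return sorted(info.name for info in pkgutil.iter_modules(package.__path__))

mods = list_submodules(json)  # includes 'decoder' and 'encoder'
```

`pkgutil.iter_modules` yields `ModuleInfo` tuples whose third field flags subpackages; the record's recursive variant re-enters itself whenever that flag is set.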
def get_tool(name):
"""
Returns an instance of a specific tool.
Parameters
----------
name : str
Name of the tool (case-insensitive).
Returns
-------
tool : MotifProgram instance
"""
tool = name.lower()
if tool not in __tools__:
raise ValueError("Tool {0} not found!\n".format(name))
t = __tools__[tool]()
if not t.is_installed():
sys.stderr.write("Tool {0} not installed!\n".format(tool))
if not t.is_configured():
sys.stderr.write("Tool {0} not configured!\n".format(tool))
return t | Returns an instance of a specific tool.
Parameters
----------
name : str
Name of the tool (case-insensitive).
Returns
-------
tool : MotifProgram instance | Below is the instruction that describes the task:
### Input:
Returns an instance of a specific tool.
Parameters
----------
name : str
Name of the tool (case-insensitive).
Returns
-------
tool : MotifProgram instance
### Response:
def get_tool(name):
"""
Returns an instance of a specific tool.
Parameters
----------
name : str
Name of the tool (case-insensitive).
Returns
-------
tool : MotifProgram instance
"""
tool = name.lower()
if tool not in __tools__:
raise ValueError("Tool {0} not found!\n".format(name))
t = __tools__[tool]()
if not t.is_installed():
sys.stderr.write("Tool {0} not installed!\n".format(tool))
if not t.is_configured():
sys.stderr.write("Tool {0} not configured!\n".format(tool))
return t |
def create_validator(data_struct_dict, name=None):
"""
create a Validator instance from data_struct_dict
:param data_struct_dict: a dict describe validator's fields, like the dict `to_dict()` method returned.
:param name: name of Validator class
:return: Validator instance
"""
if name is None:
name = 'FromDictValidator'
attrs = {}
for field_name, field_info in six.iteritems(data_struct_dict):
field_type = field_info['type']
if field_type == DictField.FIELD_TYPE_NAME and isinstance(field_info.get('validator'), dict):
field_info['validator'] = create_validator(field_info['validator'])
attrs[field_name] = create_field(field_info)
name = force_str(name)
return type(name, (Validator, ), attrs) | create a Validator instance from data_struct_dict
:param data_struct_dict: a dict describe validator's fields, like the dict `to_dict()` method returned.
:param name: name of Validator class
:return: Validator instance | Below is the instruction that describes the task:
### Input:
create a Validator instance from data_struct_dict
:param data_struct_dict: a dict describe validator's fields, like the dict `to_dict()` method returned.
:param name: name of Validator class
:return: Validator instance
### Response:
def create_validator(data_struct_dict, name=None):
"""
create a Validator instance from data_struct_dict
:param data_struct_dict: a dict describe validator's fields, like the dict `to_dict()` method returned.
:param name: name of Validator class
:return: Validator instance
"""
if name is None:
name = 'FromDictValidator'
attrs = {}
for field_name, field_info in six.iteritems(data_struct_dict):
field_type = field_info['type']
if field_type == DictField.FIELD_TYPE_NAME and isinstance(field_info.get('validator'), dict):
field_info['validator'] = create_validator(field_info['validator'])
attrs[field_name] = create_field(field_info)
name = force_str(name)
return type(name, (Validator, ), attrs) |
def get_args():
"""Get the script arguments."""
description = "bum - Download and display album art \
for mpd tracks."
arg = argparse.ArgumentParser(description=description)
arg.add_argument("--size", metavar="\"px\"",
help="what size to display the album art in.",
default=250)
arg.add_argument("--cache_dir", metavar="\"/path/to/dir\"",
help="Where to store the downloaded cover art.",
default=pathlib.Path.home() / ".cache/bum")
arg.add_argument("--version", action="store_true",
help="Print \"bum\" version.")
arg.add_argument("--port",
help="Use a custom mpd port.",
default=6600)
arg.add_argument("--server",
help="Use a remote server instead of localhost.",
default="localhost")
arg.add_argument("--no_display",
action="store_true",
help="Only download album art, don't display.")
return arg.parse_args() | Get the script arguments. | Below is the instruction that describes the task:
### Input:
Get the script arguments.
### Response:
def get_args():
"""Get the script arguments."""
description = "bum - Download and display album art \
for mpd tracks."
arg = argparse.ArgumentParser(description=description)
arg.add_argument("--size", metavar="\"px\"",
help="what size to display the album art in.",
default=250)
arg.add_argument("--cache_dir", metavar="\"/path/to/dir\"",
help="Where to store the downloaded cover art.",
default=pathlib.Path.home() / ".cache/bum")
arg.add_argument("--version", action="store_true",
help="Print \"bum\" version.")
arg.add_argument("--port",
help="Use a custom mpd port.",
default=6600)
arg.add_argument("--server",
help="Use a remote server instead of localhost.",
default="localhost")
arg.add_argument("--no_display",
action="store_true",
help="Only download album art, don't display.")
return arg.parse_args() |
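The record above builds its CLI with argparse. A self-contained sketch of the same pattern (the `build_parser` helper and its option set mirror the record but are illustrative, not the project's actual module):

```python
import argparse
import pathlib


def build_parser():
    """Build a parser mirroring the bum-style options in the record above."""
    parser = argparse.ArgumentParser(description="Download and display album art.")
    parser.add_argument("--size", type=int, default=250,
                        help="display size of the album art in pixels")
    parser.add_argument("--cache_dir", type=pathlib.Path,
                        default=pathlib.Path.home() / ".cache/bum",
                        help="where to store downloaded cover art")
    parser.add_argument("--port", type=int, default=6600,
                        help="custom mpd port")
    parser.add_argument("--server", default="localhost",
                        help="remote server instead of localhost")
    parser.add_argument("--no_display", action="store_true",
                        help="only download album art, don't display")
    return parser


# Parsing an empty argv yields the declared defaults.
args = build_parser().parse_args([])
```

Passing an explicit flag, e.g. `parse_args(["--port", "7700"])`, overrides only that default.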
def pltnp(point, v1, v2, v3):
"""
Find the nearest point on a triangular plate to a given point.
https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pltnp_c.html
:param point: A point in 3-dimensional space.
:type point: 3-Element Array of floats
:param v1: Vertices of a triangular plate.
:type v1: 3-Element Array of floats
:param v2: Vertices of a triangular plate.
:type v2: 3-Element Array of floats
:param v3: Vertices of a triangular plate.
:type v3: 3-Element Array of floats
:return: the nearest point on a triangular plate to a given point and distance
:rtype: tuple
"""
point = stypes.toDoubleVector(point)
v1 = stypes.toDoubleVector(v1)
v2 = stypes.toDoubleVector(v2)
v3 = stypes.toDoubleVector(v3)
pnear = stypes.emptyDoubleVector(3)
dist = ctypes.c_double()
libspice.pltnp_c(point, v1, v2, v3, pnear, ctypes.byref(dist))
return stypes.cVectorToPython(pnear), dist.value | Find the nearest point on a triangular plate to a given point.
https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pltnp_c.html
:param point: A point in 3-dimensional space.
:type point: 3-Element Array of floats
:param v1: Vertices of a triangular plate.
:type v1: 3-Element Array of floats
:param v2: Vertices of a triangular plate.
:type v2: 3-Element Array of floats
:param v3: Vertices of a triangular plate.
:type v3: 3-Element Array of floats
:return: the nearest point on a triangular plate to a given point and distance
:rtype: tuple | Below is the instruction that describes the task:
### Input:
Find the nearest point on a triangular plate to a given point.
https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pltnp_c.html
:param point: A point in 3-dimensional space.
:type point: 3-Element Array of floats
:param v1: Vertices of a triangular plate.
:type v1: 3-Element Array of floats
:param v2: Vertices of a triangular plate.
:type v2: 3-Element Array of floats
:param v3: Vertices of a triangular plate.
:type v3: 3-Element Array of floats
:return: the nearest point on a triangular plate to a given point and distance
:rtype: tuple
### Response:
def pltnp(point, v1, v2, v3):
"""
Find the nearest point on a triangular plate to a given point.
https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/pltnp_c.html
:param point: A point in 3-dimensional space.
:type point: 3-Element Array of floats
:param v1: Vertices of a triangular plate.
:type v1: 3-Element Array of floats
:param v2: Vertices of a triangular plate.
:type v2: 3-Element Array of floats
:param v3: Vertices of a triangular plate.
:type v3: 3-Element Array of floats
:return: the nearest point on a triangular plate to a given point and distance
:rtype: tuple
"""
point = stypes.toDoubleVector(point)
v1 = stypes.toDoubleVector(v1)
v2 = stypes.toDoubleVector(v2)
v3 = stypes.toDoubleVector(v3)
pnear = stypes.emptyDoubleVector(3)
dist = ctypes.c_double()
libspice.pltnp_c(point, v1, v2, v3, pnear, ctypes.byref(dist))
return stypes.cVectorToPython(pnear), dist.value |
def _locations_mirror(x):
"""
Mirrors the points in a list-of-list-of-...-of-list-of-points.
For example:
>>> _locations_mirror([[[1, 2], [3, 4]], [5, 6], [7, 8]])
[[[2, 1], [4, 3]], [6, 5], [8, 7]]
"""
if hasattr(x, '__iter__'):
if hasattr(x[0], '__iter__'):
return list(map(_locations_mirror, x))
else:
return list(x[::-1])
else:
return x | Mirrors the points in a list-of-list-of-...-of-list-of-points.
For example:
>>> _locations_mirror([[[1, 2], [3, 4]], [5, 6], [7, 8]])
[[[2, 1], [4, 3]], [6, 5], [8, 7]] | Below is the instruction that describes the task:
### Input:
Mirrors the points in a list-of-list-of-...-of-list-of-points.
For example:
>>> _locations_mirror([[[1, 2], [3, 4]], [5, 6], [7, 8]])
[[[2, 1], [4, 3]], [6, 5], [8, 7]]
### Response:
def _locations_mirror(x):
"""
Mirrors the points in a list-of-list-of-...-of-list-of-points.
For example:
>>> _locations_mirror([[[1, 2], [3, 4]], [5, 6], [7, 8]])
[[[2, 1], [4, 3]], [6, 5], [8, 7]]
"""
if hasattr(x, '__iter__'):
if hasattr(x[0], '__iter__'):
return list(map(_locations_mirror, x))
else:
return list(x[::-1])
else:
return x |
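The mirroring recursion above can be exercised directly; here is a standalone copy (same behavior as the record, no external dependencies) together with the docstring's own example:

```python
def locations_mirror(x):
    """Recursively reverse each innermost point, e.g. [lon, lat] -> [lat, lon]."""
    if hasattr(x, '__iter__'):
        if hasattr(x[0], '__iter__'):
            # Still a container of containers: recurse into each element.
            return list(map(locations_mirror, x))
        # Innermost list of scalars: reverse it.
        return list(x[::-1])
    # Scalars pass through unchanged.
    return x
```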
def _conv_tel_list(tel_list, entry):
"""Converts to Abook phone types"""
for tel in tel_list:
if not hasattr(tel, 'TYPE_param'):
entry['other'] = tel.value
elif tel.TYPE_param.lower() == 'home':
entry['phone'] = tel.value
elif tel.TYPE_param.lower() == 'work':
entry['workphone'] = tel.value
elif tel.TYPE_param.lower() == 'cell':
entry['mobile'] = tel.value | Converts to Abook phone types | Below is the instruction that describes the task:
### Input:
Converts to Abook phone types
### Response:
def _conv_tel_list(tel_list, entry):
"""Converts to Abook phone types"""
for tel in tel_list:
if not hasattr(tel, 'TYPE_param'):
entry['other'] = tel.value
elif tel.TYPE_param.lower() == 'home':
entry['phone'] = tel.value
elif tel.TYPE_param.lower() == 'work':
entry['workphone'] = tel.value
elif tel.TYPE_param.lower() == 'cell':
entry['mobile'] = tel.value |
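The TEL-type mapping above depends on vobject content lines; a hedged, runnable sketch using a hypothetical `Tel` test double in place of real vCard objects:

```python
class Tel:
    """Stand-in for a vobject TEL content line (hypothetical test double)."""
    def __init__(self, value, type_param=None):
        self.value = value
        if type_param is not None:
            self.TYPE_param = type_param


def conv_tel_list(tel_list, entry):
    """Map vCard TEL types onto Abook field names, as in the record above."""
    for tel in tel_list:
        if not hasattr(tel, 'TYPE_param'):
            entry['other'] = tel.value
        elif tel.TYPE_param.lower() == 'home':
            entry['phone'] = tel.value
        elif tel.TYPE_param.lower() == 'work':
            entry['workphone'] = tel.value
        elif tel.TYPE_param.lower() == 'cell':
            entry['mobile'] = tel.value


entry = {}
conv_tel_list([Tel('555-0100', 'HOME'), Tel('555-0101', 'cell'), Tel('555-0102')], entry)
```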
def oplot(self, x, y, **kw):
"""generic plotting method, overplotting any existing plot """
self.panel.oplot(x, y, **kw) | generic plotting method, overplotting any existing plot | Below is the instruction that describes the task:
### Input:
generic plotting method, overplotting any existing plot
### Response:
def oplot(self, x, y, **kw):
"""generic plotting method, overplotting any existing plot """
self.panel.oplot(x, y, **kw) |
def get_volume(self):
"""Get the current volume."""
self.request(EP_GET_VOLUME)
return 0 if self.last_response is None else self.last_response.get('payload').get('volume') | Get the current volume. | Below is the instruction that describes the task:
### Input:
Get the current volume.
### Response:
def get_volume(self):
"""Get the current volume."""
self.request(EP_GET_VOLUME)
return 0 if self.last_response is None else self.last_response.get('payload').get('volume') |
def GetFormattedEventObject(cls, event):
"""Retrieves a string representation of the event.
Args:
event (EventObject): event.
Returns:
str: string representation of the event.
"""
time_string = timelib.Timestamp.CopyToIsoFormat(event.timestamp)
lines_of_text = [
'+-' * 40,
'[Timestamp]:',
' {0:s}'.format(time_string)]
pathspec = getattr(event, 'pathspec', None)
if pathspec:
lines_of_text.append('[Pathspec]:')
attribute_string = pathspec.comparable.replace('\n', '\n ')
attribute_string = ' {0:s}\n'.format(attribute_string)
lines_of_text.append(attribute_string)
# TODO: add support for event tag after event clean up.
lines_of_text.append('[Reserved attributes]:')
out_additional = ['[Additional attributes]:']
for attribute_name, attribute_value in sorted(event.GetAttributes()):
if attribute_name not in definitions.RESERVED_VARIABLE_NAMES:
attribute_string = ' {{{0!s}}} {1!s}'.format(
attribute_name, attribute_value)
out_additional.append(attribute_string)
elif attribute_name not in ('pathspec', 'tag'):
attribute_string = ' {{{0!s}}} {1!s}'.format(
attribute_name, attribute_value)
lines_of_text.append(attribute_string)
lines_of_text.append('')
out_additional.append('')
lines_of_text.extend(out_additional)
return '\n'.join(lines_of_text) | Retrieves a string representation of the event.
Args:
event (EventObject): event.
Returns:
str: string representation of the event. | Below is the instruction that describes the task:
### Input:
Retrieves a string representation of the event.
Args:
event (EventObject): event.
Returns:
str: string representation of the event.
### Response:
def GetFormattedEventObject(cls, event):
"""Retrieves a string representation of the event.
Args:
event (EventObject): event.
Returns:
str: string representation of the event.
"""
time_string = timelib.Timestamp.CopyToIsoFormat(event.timestamp)
lines_of_text = [
'+-' * 40,
'[Timestamp]:',
' {0:s}'.format(time_string)]
pathspec = getattr(event, 'pathspec', None)
if pathspec:
lines_of_text.append('[Pathspec]:')
attribute_string = pathspec.comparable.replace('\n', '\n ')
attribute_string = ' {0:s}\n'.format(attribute_string)
lines_of_text.append(attribute_string)
# TODO: add support for event tag after event clean up.
lines_of_text.append('[Reserved attributes]:')
out_additional = ['[Additional attributes]:']
for attribute_name, attribute_value in sorted(event.GetAttributes()):
if attribute_name not in definitions.RESERVED_VARIABLE_NAMES:
attribute_string = ' {{{0!s}}} {1!s}'.format(
attribute_name, attribute_value)
out_additional.append(attribute_string)
elif attribute_name not in ('pathspec', 'tag'):
attribute_string = ' {{{0!s}}} {1!s}'.format(
attribute_name, attribute_value)
lines_of_text.append(attribute_string)
lines_of_text.append('')
out_additional.append('')
lines_of_text.extend(out_additional)
return '\n'.join(lines_of_text) |
def load_dataset_items(test_file, predict_file_lst, nonfeature_file):
"""
This function is used to read 3 kinds of data into list, 3 kinds of data are stored in files given by parameter
:param test_file: path string, the testing set used for SVm rank
:param predict_file_lst: filename lst, all prediction file output by SVM rank
:param nonfeature_file: path string, contain all the score data not used as feature (aligned with test_file)
:return: None
"""
print 'Reading baseline feature & bleu...'
with open(test_file, 'r') as reader:
for line in reader:
items = line.split(' ')
label = float(items[0])
id_list.append(items[1])
bleu_list.append(label)
word_count_list.append(float(items[2].split(':')[1]))
attri_count_list.append(float(items[10].split(':')[1]))
print 'Reading svm rankscore...'
global prediction_dict
for predict_file in predict_file_lst:
mark = predict_file.replace('predictions', '')
prediction_dict[mark] = []
with open(result_file_path + predict_file, 'r') as reader:
for line in reader:
rankscore = float(line)
prediction_dict[mark].append(rankscore)
print 'Reading NonFeature score...'
with open(nonfeature_file, 'r') as reader:
for line in reader:
nonfeature_items = line.split()
w_score = float(nonfeature_items[2].split(':')[1])
m_score = float(nonfeature_items[3].split(':')[1])
weighted_attri_list.append(w_score)
meteor_score_list.append(m_score) | This function is used to read 3 kinds of data into list, 3 kinds of data are stored in files given by parameter
:param test_file: path string, the testing set used for SVm rank
:param predict_file_lst: filename lst, all prediction file output by SVM rank
:param nonfeature_file: path string, contain all the score data not used as feature (aligned with test_file)
:return: None | Below is the instruction that describes the task:
### Input:
This function is used to read 3 kinds of data into list, 3 kinds of data are stored in files given by parameter
:param test_file: path string, the testing set used for SVm rank
:param predict_file_lst: filename lst, all prediction file output by SVM rank
:param nonfeature_file: path string, contain all the score data not used as feature (aligned with test_file)
:return: None
### Response:
def load_dataset_items(test_file, predict_file_lst, nonfeature_file):
"""
This function is used to read 3 kinds of data into list, 3 kinds of data are stored in files given by parameter
:param test_file: path string, the testing set used for SVm rank
:param predict_file_lst: filename lst, all prediction file output by SVM rank
:param nonfeature_file: path string, contain all the score data not used as feature (aligned with test_file)
:return: None
"""
print 'Reading baseline feature & bleu...'
with open(test_file, 'r') as reader:
for line in reader:
items = line.split(' ')
label = float(items[0])
id_list.append(items[1])
bleu_list.append(label)
word_count_list.append(float(items[2].split(':')[1]))
attri_count_list.append(float(items[10].split(':')[1]))
print 'Reading svm rankscore...'
global prediction_dict
for predict_file in predict_file_lst:
mark = predict_file.replace('predictions', '')
prediction_dict[mark] = []
with open(result_file_path + predict_file, 'r') as reader:
for line in reader:
rankscore = float(line)
prediction_dict[mark].append(rankscore)
print 'Reading NonFeature score...'
with open(nonfeature_file, 'r') as reader:
for line in reader:
nonfeature_items = line.split()
w_score = float(nonfeature_items[2].split(':')[1])
m_score = float(nonfeature_items[3].split(':')[1])
weighted_attri_list.append(w_score)
meteor_score_list.append(m_score) |
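The reader loops above pick individual `index:value` tokens out of SVM-rank formatted lines (`label qid idx:value …`). A simplified, dependency-free sketch of that per-line parsing step (the helper name is illustrative, not from the record):

```python
def parse_rank_line(line):
    """Split one SVM-rank style line into (label, qid, {index: value})."""
    items = line.split()
    label = float(items[0])       # first token: the target score
    qid = items[1]                # second token: the query/example id
    feats = {}
    for tok in items[2:]:         # remaining tokens: sparse feature pairs
        idx, val = tok.split(':')
        feats[int(idx)] = float(val)
    return label, qid, feats


label, qid, feats = parse_rank_line("0.42 qid:7 2:12 10:3.5")
```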
def process_net_command(self, py_db, cmd_id, seq, text):
'''Processes a command received from the Java side
@param cmd_id: the id of the command
@param seq: the sequence of the command
@param text: the text received in the command
'''
meaning = ID_TO_MEANING[str(cmd_id)]
# print('Handling %s (%s)' % (meaning, text))
method_name = meaning.lower()
on_command = getattr(self, method_name.lower(), None)
if on_command is None:
# I have no idea what this is all about
cmd = py_db.cmd_factory.make_error_message(seq, "unexpected command " + str(cmd_id))
py_db.writer.add_command(cmd)
return
py_db._main_lock.acquire()
try:
cmd = on_command(py_db, cmd_id, seq, text)
if cmd is not None:
py_db.writer.add_command(cmd)
except:
if traceback is not None and sys is not None and pydev_log_exception is not None:
pydev_log_exception()
stream = StringIO()
traceback.print_exc(file=stream)
cmd = py_db.cmd_factory.make_error_message(
seq,
"Unexpected exception in process_net_command.\nInitial params: %s. Exception: %s" % (
((cmd_id, seq, text), stream.getvalue())
)
)
if cmd is not None:
py_db.writer.add_command(cmd)
finally:
py_db._main_lock.release() | Processes a command received from the Java side
@param cmd_id: the id of the command
@param seq: the sequence of the command
@param text: the text received in the command | Below is the instruction that describes the task:
### Input:
Processes a command received from the Java side
@param cmd_id: the id of the command
@param seq: the sequence of the command
@param text: the text received in the command
### Response:
def process_net_command(self, py_db, cmd_id, seq, text):
'''Processes a command received from the Java side
@param cmd_id: the id of the command
@param seq: the sequence of the command
@param text: the text received in the command
'''
meaning = ID_TO_MEANING[str(cmd_id)]
# print('Handling %s (%s)' % (meaning, text))
method_name = meaning.lower()
on_command = getattr(self, method_name.lower(), None)
if on_command is None:
# I have no idea what this is all about
cmd = py_db.cmd_factory.make_error_message(seq, "unexpected command " + str(cmd_id))
py_db.writer.add_command(cmd)
return
py_db._main_lock.acquire()
try:
cmd = on_command(py_db, cmd_id, seq, text)
if cmd is not None:
py_db.writer.add_command(cmd)
except:
if traceback is not None and sys is not None and pydev_log_exception is not None:
pydev_log_exception()
stream = StringIO()
traceback.print_exc(file=stream)
cmd = py_db.cmd_factory.make_error_message(
seq,
"Unexpected exception in process_net_command.\nInitial params: %s. Exception: %s" % (
((cmd_id, seq, text), stream.getvalue())
)
)
if cmd is not None:
py_db.writer.add_command(cmd)
finally:
py_db._main_lock.release() |
def recall(links_true, links_pred=None):
"""recall(links_true, links_pred)
Compute the recall/sensitivity.
The recall is given by TP/(TP+FN).
Parameters
----------
links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The true (or actual) collection of links.
links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The predicted collection of links.
Returns
-------
float
The recall
"""
if _isconfusionmatrix(links_true):
confusion_matrix = links_true
v = confusion_matrix[0, 0] \
/ (confusion_matrix[0, 0] + confusion_matrix[0, 1])
else:
tp = true_positives(links_true, links_pred)
fn = false_negatives(links_true, links_pred)
v = tp / (tp + fn)
return float(v) | recall(links_true, links_pred)
Compute the recall/sensitivity.
The recall is given by TP/(TP+FN).
Parameters
----------
links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The true (or actual) collection of links.
links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The predicted collection of links.
Returns
-------
float
The recall | Below is the instruction that describes the task:
### Input:
recall(links_true, links_pred)
Compute the recall/sensitivity.
The recall is given by TP/(TP+FN).
Parameters
----------
links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The true (or actual) collection of links.
links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The predicted collection of links.
Returns
-------
float
The recall
### Response:
def recall(links_true, links_pred=None):
"""recall(links_true, links_pred)
Compute the recall/sensitivity.
The recall is given by TP/(TP+FN).
Parameters
----------
links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The true (or actual) collection of links.
links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series
The predicted collection of links.
Returns
-------
float
The recall
"""
if _isconfusionmatrix(links_true):
confusion_matrix = links_true
v = confusion_matrix[0, 0] \
/ (confusion_matrix[0, 0] + confusion_matrix[0, 1])
else:
tp = true_positives(links_true, links_pred)
fn = false_negatives(links_true, links_pred)
v = tp / (tp + fn)
return float(v) |
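The record's recall computation reduces to TP/(TP+FN); a minimal sketch of the same quantity over plain Python sets of links, without the pandas index machinery (function name is illustrative):

```python
def recall_from_sets(links_true, links_pred):
    """Recall = TP / (TP + FN) over sets of true and predicted links."""
    links_true, links_pred = set(links_true), set(links_pred)
    tp = len(links_true & links_pred)   # true links that were found
    fn = len(links_true - links_pred)   # true links that were missed
    return tp / (tp + fn)


# Three true links, two of them recovered by the prediction.
r = recall_from_sets({(1, 2), (3, 4), (5, 6)}, {(1, 2), (3, 4), (9, 9)})
```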
def _has_desired_permit(permits, acategory, astatus):
"""
return True if permits has one whose
category_code and status_code match with the given ones
"""
if permits is None:
return False
for permit in permits:
if permit.category_code == acategory and\
permit.status_code == astatus:
return True
return False | return True if permits has one whose
category_code and status_code match with the given ones | Below is the instruction that describes the task:
### Input:
return True if permits has one whose
category_code and status_code match with the given ones
### Response:
def _has_desired_permit(permits, acategory, astatus):
"""
return True if permits has one whose
category_code and status_code match with the given ones
"""
if permits is None:
return False
for permit in permits:
if permit.category_code == acategory and\
permit.status_code == astatus:
return True
return False |
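The loop above is an existence check over two attributes; a compact sketch using `any()` and a hypothetical `Permit` record type (the original's permit objects are not shown in the record):

```python
from collections import namedtuple

# Hypothetical permit record carrying the two fields the check inspects.
Permit = namedtuple('Permit', ['category_code', 'status_code'])


def has_desired_permit(permits, acategory, astatus):
    """True if any permit matches both the category and status codes."""
    if permits is None:
        return False
    return any(p.category_code == acategory and p.status_code == astatus
               for p in permits)


permits = [Permit('BUILD', 'OPEN'), Permit('DEMO', 'CLOSED')]
```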
def terminate(self):
"""Terminate all workers and threads."""
for t in self._threads:
t.quit()
self._threads = []
self._workers = [] | Terminate all workers and threads. | Below is the instruction that describes the task:
### Input:
Terminate all workers and threads.
### Response:
def terminate(self):
"""Terminate all workers and threads."""
for t in self._threads:
t.quit()
self._threads = []
self._workers = [] |
def save_stream(self, key, binary=False):
"""
Return a managed file-like object into which the calling code can write
arbitrary data.
:param key:
:return: A managed stream-like object
"""
s = io.BytesIO() if binary else io.StringIO()
yield s
self.save_value(key, s.getvalue()) | Return a managed file-like object into which the calling code can write
arbitrary data.
:param key:
:return: A managed stream-like object | Below is the instruction that describes the task:
### Input:
Return a managed file-like object into which the calling code can write
arbitrary data.
:param key:
:return: A managed stream-like object
### Response:
def save_stream(self, key, binary=False):
"""
Return a managed file-like object into which the calling code can write
arbitrary data.
:param key:
:return: A managed stream-like object
"""
s = io.BytesIO() if binary else io.StringIO()
yield s
self.save_value(key, s.getvalue()) |
def attention_bias_local_block(mesh, block_length, memory_length,
dtype=tf.int32):
"""Bias for attention for local blocks where attention to right is disallowed.
Create the bias matrix by using two separate masks, one for the memory part
which doesn't overlap with the query and second which interacts with the query
and should be disallowed to look to the right of the current query position.
Args:
mesh: a MeshTensorflow object
block_length: a mtf.Dimension
memory_length: a mtf.Dimension
dtype: a tf.dtype
Returns:
a mtf.Tensor with shape [block_length, memory_length]
"""
memory_length = mtf.Dimension(memory_length.name, block_length.size)
memory_mask = mtf.zeros(mesh, [block_length, memory_length], dtype=dtype)
mask = mtf.cast(mtf.less(mtf.range(mesh, block_length, dtype=dtype),
mtf.range(mesh, memory_length, dtype=dtype)),
dtype=dtype)
mask = mtf.cast(
mtf.concat([memory_mask, mask], memory_length.name),
dtype=tf.float32) * -1e9
return mask | Bias for attention for local blocks where attention to right is disallowed.
Create the bias matrix by using two separate masks, one for the memory part
which doesn't overlap with the query and second which interacts with the query
and should be disallowed to look to the right of the current query position.
Args:
mesh: a MeshTensorflow object
block_length: a mtf.Dimension
memory_length: a mtf.Dimension
dtype: a tf.dtype
Returns:
a mtf.Tensor with shape [block_length, memory_length] | Below is the instruction that describes the task:
### Input:
Bias for attention for local blocks where attention to right is disallowed.
Create the bias matrix by using two separate masks, one for the memory part
which doesn't overlap with the query and second which interacts with the query
and should be disallowed to look to the right of the current query position.
Args:
mesh: a MeshTensorflow object
block_length: a mtf.Dimension
memory_length: a mtf.Dimension
dtype: a tf.dtype
Returns:
a mtf.Tensor with shape [block_length, memory_length]
### Response:
def attention_bias_local_block(mesh, block_length, memory_length,
dtype=tf.int32):
"""Bias for attention for local blocks where attention to right is disallowed.
Create the bias matrix by using two separate masks, one for the memory part
which doesn't overlap with the query and second which interacts with the query
and should be disallowed to look to the right of the current query position.
Args:
mesh: a MeshTensorflow object
block_length: a mtf.Dimension
memory_length: a mtf.Dimension
dtype: a tf.dtype
Returns:
a mtf.Tensor with shape [block_length, memory_length]
"""
memory_length = mtf.Dimension(memory_length.name, block_length.size)
memory_mask = mtf.zeros(mesh, [block_length, memory_length], dtype=dtype)
mask = mtf.cast(mtf.less(mtf.range(mesh, block_length, dtype=dtype),
mtf.range(mesh, memory_length, dtype=dtype)),
dtype=dtype)
mask = mtf.cast(
mtf.concat([memory_mask, mask], memory_length.name),
dtype=tf.float32) * -1e9
return mask |
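The resulting `[block_length, memory_length]` bias can be illustrated without the Mesh TensorFlow API: zeros over the memory half, and a large negative value wherever a query position would attend to its right within its own block. A plain-Python sketch of that equivalent matrix (illustrative only, not mtf code):

```python
def local_block_attention_bias(block_length):
    """Return a [block, 2*block] bias as nested lists: 0.0 over the memory
    half, -1e9 where block position j lies to the right of query i."""
    bias = []
    for i in range(block_length):
        memory_part = [0.0] * block_length            # memory half: unmasked
        query_part = [-1e9 if i < j else 0.0          # disallow attending right
                      for j in range(block_length)]
        bias.append(memory_part + query_part)
    return bias


bias = local_block_attention_bias(4)
```

Adding this bias to attention logits before the softmax zeroes out the disallowed (rightward) positions.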
def _seqtk_fastq_prep_cl(data, in_file=None, read_num=0):
"""Provide a commandline for prep of fastq inputs with seqtk.
Handles fast conversion of fastq quality scores and trimming.
"""
needs_convert = dd.get_quality_format(data).lower() == "illumina"
trim_ends = dd.get_trim_ends(data)
seqtk = config_utils.get_program("seqtk", data["config"])
if in_file:
in_file = objectstore.cl_input(in_file)
else:
in_file = "/dev/stdin"
cmd = ""
if needs_convert:
cmd += "{seqtk} seq -Q64 -V {in_file}".format(**locals())
if trim_ends:
left_trim, right_trim = trim_ends[0:2] if data.get("read_num", read_num) == 0 else trim_ends[2:4]
if left_trim or right_trim:
trim_infile = "/dev/stdin" if needs_convert else in_file
pipe = " | " if needs_convert else ""
cmd += "{pipe}{seqtk} trimfq -b {left_trim} -e {right_trim} {trim_infile}".format(**locals())
return cmd | Provide a commandline for prep of fastq inputs with seqtk.
Handles fast conversion of fastq quality scores and trimming. | Below is the instruction that describes the task:
### Input:
Provide a commandline for prep of fastq inputs with seqtk.
Handles fast conversion of fastq quality scores and trimming.
### Response:
def _seqtk_fastq_prep_cl(data, in_file=None, read_num=0):
"""Provide a commandline for prep of fastq inputs with seqtk.
Handles fast conversion of fastq quality scores and trimming.
"""
needs_convert = dd.get_quality_format(data).lower() == "illumina"
trim_ends = dd.get_trim_ends(data)
seqtk = config_utils.get_program("seqtk", data["config"])
if in_file:
in_file = objectstore.cl_input(in_file)
else:
in_file = "/dev/stdin"
cmd = ""
if needs_convert:
cmd += "{seqtk} seq -Q64 -V {in_file}".format(**locals())
if trim_ends:
left_trim, right_trim = trim_ends[0:2] if data.get("read_num", read_num) == 0 else trim_ends[2:4]
if left_trim or right_trim:
trim_infile = "/dev/stdin" if needs_convert else in_file
pipe = " | " if needs_convert else ""
cmd += "{pipe}{seqtk} trimfq -b {left_trim} -e {right_trim} {trim_infile}".format(**locals())
return cmd |
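The command assembly above can be shown without the bcbio config/objectstore plumbing. A simplified sketch of the same pipe-building logic (the standalone function and its parameters are hypothetical; the `seqtk seq -Q64 -V` and `seqtk trimfq -b/-e` invocations follow the record):

```python
def fastq_prep_cl(in_file, needs_convert, left_trim=0, right_trim=0,
                  seqtk="seqtk"):
    """Assemble the seqtk conversion/trim pipeline as a shell string."""
    cmd = ""
    if needs_convert:
        # Convert Illumina-1.3+ (Q64) quality scores to standard encoding.
        cmd += "{seqtk} seq -Q64 -V {in_file}".format(seqtk=seqtk, in_file=in_file)
    if left_trim or right_trim:
        # If converting first, the trim step reads from the pipe.
        trim_infile = "/dev/stdin" if needs_convert else in_file
        pipe = " | " if needs_convert else ""
        cmd += "{pipe}{seqtk} trimfq -b {b} -e {e} {f}".format(
            pipe=pipe, seqtk=seqtk, b=left_trim, e=right_trim, f=trim_infile)
    return cmd


cmd = fastq_prep_cl("in.fq", needs_convert=True, left_trim=3, right_trim=2)
```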