def _getNearestMappingIndexList(fromValList, toValList):
    '''
    Finds the indices for data points that are closest to each other.

    The inputs should be in relative time, scaled from 0 to 1,
    e.g. if you have [0, .1, .5, .9] and [0, .1, .2, 1]
    will output [0, 1, 1, 2]
    '''
    indexList = []
    for fromTimestamp in fromValList:
        smallestDiff = _getSmallestDifference(toValList, fromTimestamp)
        i = toValList.index(smallestDiff)
        indexList.append(i)
    return indexList
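A self-contained sketch of the same mapping, assuming `_getSmallestDifference` returns the value in `toValList` closest to the target (the helper names below are illustrative, not the library's actual ones):

```python
def get_smallest_difference(vals, target):
    # hypothetical stand-in for _getSmallestDifference:
    # the value in vals with the smallest absolute distance to target
    return min(vals, key=lambda v: abs(v - target))

def nearest_mapping_index_list(from_vals, to_vals):
    # for each source timestamp, the index of the nearest target timestamp
    return [to_vals.index(get_smallest_difference(to_vals, t))
            for t in from_vals]

print(nearest_mapping_index_list([0, 0.5, 1], [0, 0.4, 1]))  # [0, 1, 2]
```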
def request(self):
    """
    Returns a callable and an iterable, respectively. These can be used
    to transmit a message and/or to iterate over incoming messages that
    were replied by a reply socket. Note that the iterable yields as
    many parts as were sent by repliers. Also, the sender function has a
    ``print``-like signature, accepting any number of arguments, each
    one being a part of the complete message.

    :rtype: (function, generator)
    """
    sock = self.__sock(zmq.REQ)
    return self.__send_function(sock), self.__recv_generator(sock)
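A dependency-free sketch of the callable-plus-iterable pairing the method returns; `make_channel` and its helpers are illustrative stand-ins, not the class's actual `__send_function`/`__recv_generator`, and the real version wires both ends to a `zmq.REQ` socket:

```python
def make_channel():
    buf = []

    def send(*parts):
        # print-like signature: each positional argument is one message part
        buf.append(parts)

    def recv():
        # yields each queued multi-part message in order
        while buf:
            yield buf.pop(0)

    return send, recv()

send, messages = make_channel()
send('header', 'body')
print(next(messages))  # ('header', 'body')
```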
def date(page):
    """
    Return the date, nicely formatted.
    """
    soup = BeautifulSoup(page)
    try:
        page_date = soup.find('input', attrs={'name': 'date'})['value']
        parsed_date = datetime.strptime(page_date, '%Y-%m-%d')
        return parsed_date.strftime('%a, %b %d, %Y')
    # catch only the failures we expect (missing tag or attribute, bad date
    # string) rather than a bare ``except:``, which would also swallow
    # KeyboardInterrupt and SystemExit
    except (TypeError, KeyError, ValueError):
        return None
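The parsing and formatting core of `date()` can be exercised without any HTML; a minimal sketch (the sample date string is illustrative):

```python
from datetime import datetime

# round-trip a '%Y-%m-%d' string into the display format used above
parsed = datetime.strptime('2023-07-04', '%Y-%m-%d')
print(parsed.strftime('%a, %b %d, %Y'))  # Tue, Jul 04, 2023
```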
def potential_radiation(dates, lon, lat, timezone, terrain_slope=0, terrain_slope_azimuth=0,
                        cloud_fraction=0, split=False):
    """
    Calculate potential shortwave radiation for a specific location and time.

    This routine calculates global radiation as described in:
    Liston, G. E. and Elder, K. (2006): A Meteorological Distribution System for
    High-Resolution Terrestrial Modeling (MicroMet), J. Hydrometeorol., 7, 217–234.

    Corrections for eccentricity are carried out following:
    Paltridge, G.W., Platt, C.M.R., 1976. Radiative Processes in Meteorology and Climatology.
    Elsevier Scientific Publishing Company, Amsterdam, Oxford, New York.

    Parameters
    ----------
    dates : DatetimeIndex or array-like
        The dates for which potential radiation shall be calculated
    lon : float
        Longitude (degrees)
    lat : float
        Latitude (degrees)
    timezone : float
        Time zone
    terrain_slope : float, default 0
        Terrain slope as defined in Liston & Elder (2006) (eq. 12)
    terrain_slope_azimuth : float, default 0
        Terrain slope azimuth as defined in Liston & Elder (2006) (eq. 13)
    cloud_fraction : float, default 0
        Cloud fraction between 0 and 1
    split : boolean, default False
        If True, return a DataFrame containing direct and diffuse radiation,
        otherwise return a Series containing total radiation
    """
    solar_constant = 1367.
    days_per_year = 365.25
    tropic_of_cancer = np.deg2rad(23.43697)
    solstice = 173.0

    dates = pd.DatetimeIndex(dates)
    dates_hour = np.array(dates.hour)
    dates_minute = np.array(dates.minute)
    day_of_year = np.array(dates.dayofyear)

    # compute solar declination in rad
    solar_decline = tropic_of_cancer * np.cos(2.0 * np.pi * (day_of_year - solstice) / days_per_year)

    # compute the sun hour angle in rad
    standard_meridian = timezone * 15.
    delta_lat_time = (lon - standard_meridian) * 24. / 360.
    hour_angle = np.pi * (((dates_hour + dates_minute / 60. + delta_lat_time) / 12.) - 1.)

    # get solar zenith angle
    cos_solar_zenith = (np.sin(solar_decline) * np.sin(np.deg2rad(lat))
                        + np.cos(solar_decline) * np.cos(np.deg2rad(lat)) * np.cos(hour_angle))
    cos_solar_zenith = cos_solar_zenith.clip(min=0)
    solar_zenith_angle = np.arccos(cos_solar_zenith)

    # compute transmissivities for direct and diffuse radiation using cloud fraction
    transmissivity_direct = (0.6 + 0.2 * cos_solar_zenith) * (1.0 - cloud_fraction)
    transmissivity_diffuse = (0.3 + 0.1 * cos_solar_zenith) * cloud_fraction

    # modify solar constant for eccentricity
    beta = 2. * np.pi * (day_of_year / days_per_year)
    radius_ratio = (1.00011 + 0.034221 * np.cos(beta) + 0.00128 * np.sin(beta)
                    + 0.000719 * np.cos(2. * beta) + 0.000077 * np.sin(2 * beta))
    solar_constant_times_radius_ratio = solar_constant * radius_ratio

    mu = np.arcsin(np.cos(solar_decline) * np.sin(hour_angle) / np.sin(solar_zenith_angle))
    cosi = (np.cos(terrain_slope) * cos_solar_zenith
            + np.sin(terrain_slope) * np.sin(solar_zenith_angle) * np.cos(mu - terrain_slope_azimuth))

    # get total shortwave radiation
    direct_radiation = solar_constant_times_radius_ratio * transmissivity_direct * cosi
    diffuse_radiation = solar_constant_times_radius_ratio * transmissivity_diffuse * cos_solar_zenith
    direct_radiation = direct_radiation.clip(min=0)

    df = pd.DataFrame(index=dates, data=dict(direct=direct_radiation, diffuse=diffuse_radiation))
    if split:
        return df
    else:
        return df.direct + df.diffuse
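The eccentricity correction in the middle of the routine can be checked on its own; a sketch using only the standard library, with the coefficients copied from the function above (which follow Paltridge & Platt, 1976):

```python
import math

def radius_ratio(day_of_year, days_per_year=365.25):
    # squared ratio of mean to actual Earth-Sun distance;
    # scales the solar constant for orbital eccentricity
    beta = 2.0 * math.pi * day_of_year / days_per_year
    return (1.00011 + 0.034221 * math.cos(beta) + 0.00128 * math.sin(beta)
            + 0.000719 * math.cos(2.0 * beta) + 0.000077 * math.sin(2.0 * beta))

print(round(radius_ratio(3), 4))    # near perihelion (early January): > 1
print(round(radius_ratio(185), 4))  # near aphelion (early July): < 1
```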
def send(self, message) :
    "puts a message in the outgoing queue."
    if not isinstance(message, Message) :
        raise TypeError("message must be a Message")
    #end if
    serial = ct.c_uint()
    if not dbus.dbus_connection_send(self._dbobj, message._dbobj, ct.byref(serial)) :
        raise CallFailed("dbus_connection_send")
    #end if
    return \
        serial.value
def _search_capability(self, base):
    """Given a class, return a list of all of the derived classes that
    are themselves derived from Capability."""
    if _debug: Collector._debug("_search_capability %r", base)

    rslt = []
    for cls in base.__bases__:
        if issubclass(cls, Collector):
            # use extend() rather than map(rslt.append, ...): under
            # Python 3 map() is lazy, so the original call never ran
            rslt.extend(self._search_capability(cls))
        elif issubclass(cls, Capability):
            rslt.append(cls)

    if _debug: Collector._debug("    - rslt: %r", rslt)
    return rslt
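A runnable sketch of the same base-class walk, with minimal stand-in `Capability` and `Collector` classes (the real ones come from the surrounding library; everything below is illustrative):

```python
class Capability: pass
class Collector: pass

def search_capability(base):
    # walk the immediate bases: recurse through Collector subclasses,
    # collect Capability subclasses
    rslt = []
    for cls in base.__bases__:
        if issubclass(cls, Collector):
            rslt.extend(search_capability(cls))
        elif issubclass(cls, Capability):
            rslt.append(cls)
    return rslt

class ReadCap(Capability): pass
class WriteCap(Capability): pass
class MidCollector(Collector, ReadCap): pass
class Device(MidCollector, WriteCap): pass

print([c.__name__ for c in search_capability(Device)])  # ['ReadCap', 'WriteCap']
```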
def _parse_formula(s):
    """Parse formula string."""
    scanner = re.compile(r'''
        (\s+) |          # whitespace
        (\(|\)) |        # group
        ([A-Z][a-z]*) |  # element
        (\d+) |          # number
        ([a-z]) |        # variable
        (\Z) |           # end
        (.)              # error
        ''', re.DOTALL | re.VERBOSE)

    def transform_subformula(form):
        """Extract radical if subformula is a singleton with a radical."""
        if isinstance(form, dict) and len(form) == 1:
            # A radical in a singleton subformula is interpreted as a
            # numbered radical.
            element, value = next(iteritems(form))
            if isinstance(element, Radical):
                return Radical('{}{}'.format(element.symbol, value))
        return form

    stack = []
    formula = {}
    expect_count = False

    def close(formula, count=1):
        if len(stack) == 0:
            raise ParseError('Unbalanced parenthesis group in formula')
        subformula = transform_subformula(formula)
        if isinstance(subformula, dict):
            subformula = Formula(subformula)
        formula = stack.pop()
        if subformula not in formula:
            formula[subformula] = 0
        formula[subformula] += count
        return formula

    for match in re.finditer(scanner, s):
        (whitespace, group, element, number, variable, end,
         error) = match.groups()
        if error is not None:
            raise ParseError(
                'Invalid token in formula string: {!r}'.format(match.group(0)),
                span=(match.start(), match.end()))
        elif whitespace is not None:
            continue
        elif group is not None and group == '(':
            if expect_count:
                formula = close(formula)
            stack.append(formula)
            formula = {}
            expect_count = False
        elif group is not None and group == ')':
            if expect_count:
                formula = close(formula)
            expect_count = True
        elif element is not None:
            if expect_count:
                formula = close(formula)
            stack.append(formula)
            if element in 'RX':
                formula = Radical(element)
            else:
                formula = Atom(element)
            expect_count = True
        elif number is not None and expect_count:
            formula = close(formula, int(number))
            expect_count = False
        elif variable is not None and expect_count:
            formula = close(formula, Expression(variable))
            expect_count = False
        elif end is not None:
            if expect_count:
                formula = close(formula)
        else:
            raise ParseError(
                'Invalid token in formula string: {!r}'.format(match.group(0)),
                span=(match.start(), match.end()))

    if len(stack) > 0:
        raise ParseError('Unbalanced parenthesis group in formula')
    return Formula(formula)
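A much-reduced flat variant of the scanner above, assuming only element symbols and counts (no parenthesized groups, radicals, or variables), shows the tokenizing idea:

```python
import re

def parse_flat_formula(s):
    # element symbol ([A-Z][a-z]*) followed by an optional count (\d*);
    # a missing count means 1, repeated symbols accumulate
    counts = {}
    for symbol, number in re.findall(r'([A-Z][a-z]*)(\d*)', s):
        counts[symbol] = counts.get(symbol, 0) + (int(number) if number else 1)
    return counts

print(parse_flat_formula('C6H12O6'))  # {'C': 6, 'H': 12, 'O': 6}
```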
def active(self):
    """Returns all outlets that are currently active and have sales."""
    qs = self.get_queryset()
    return qs.filter(
        models.Q(
            models.Q(start_date__isnull=True) |
            models.Q(start_date__lte=now().date())
        ) &
        models.Q(
            models.Q(end_date__isnull=True) |
            models.Q(end_date__gte=now().date())
        )
    ).distinct()
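The date-window logic expressed by the `Q` objects can be sketched in plain Python (`is_active` is an illustrative helper, not part of the manager):

```python
from datetime import date

def is_active(start_date, end_date, today):
    # a NULL (None) bound leaves that side of the date window open-ended,
    # mirroring the isnull/lte and isnull/gte pairs in the queryset filter
    return ((start_date is None or start_date <= today)
            and (end_date is None or end_date >= today))

today = date(2024, 6, 1)
print(is_active(None, None, today))                          # True
print(is_active(date(2024, 1, 1), date(2024, 3, 1), today))  # False
```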
def get_irradiance(self, surface_tilt, surface_azimuth,
                   solar_zenith, solar_azimuth, dni, ghi, dhi,
                   dni_extra=None, airmass=None, model='haydavies',
                   **kwargs):
    """
    Uses the :func:`irradiance.get_total_irradiance` function to
    calculate the plane of array irradiance components on a tilted
    surface defined by the input data and ``self.albedo``.

    For a given set of solar zenith and azimuth angles, the
    surface tilt and azimuth parameters are typically determined
    by :py:meth:`~SingleAxisTracker.singleaxis`.

    Parameters
    ----------
    surface_tilt : numeric
        Panel tilt from horizontal.
    surface_azimuth : numeric
        Panel azimuth from north.
    solar_zenith : numeric
        Solar zenith angle.
    solar_azimuth : numeric
        Solar azimuth angle.
    dni : float or Series
        Direct normal irradiance.
    ghi : float or Series
        Global horizontal irradiance.
    dhi : float or Series
        Diffuse horizontal irradiance.
    dni_extra : float or Series, default None
        Extraterrestrial direct normal irradiance.
    airmass : float or Series, default None
        Airmass.
    model : String, default 'haydavies'
        Irradiance model.
    **kwargs
        Passed to :func:`irradiance.total_irrad`.

    Returns
    -------
    poa_irradiance : DataFrame
        Column names are: ``total, beam, sky, ground``.
    """
    # not needed for all models, but this is easier
    if dni_extra is None:
        dni_extra = irradiance.get_extra_radiation(solar_zenith.index)
    if airmass is None:
        airmass = atmosphere.get_relative_airmass(solar_zenith)

    return irradiance.get_total_irradiance(surface_tilt,
                                           surface_azimuth,
                                           solar_zenith,
                                           solar_azimuth,
                                           dni, ghi, dhi,
                                           dni_extra=dni_extra,
                                           airmass=airmass,
                                           model=model,
                                           albedo=self.albedo,
                                           **kwargs)
def get_stream(self, error_callback=None, live=True):
    """ Get room stream to listen for messages.

    Kwargs:
        error_callback (func): Callback to call when an error occurs (parameters: exception)
        live (bool): If True, issue a live stream, otherwise an offline stream

    Returns:
        :class:`Stream`. Stream
    """
    self.join()
    return Stream(self, error_callback=error_callback, live=live)
def present(name,
            tag=None,
            build=None,
            load=None,
            force=False,
            insecure_registry=False,
            client_timeout=salt.utils.docker.CLIENT_TIMEOUT,
            dockerfile=None,
            sls=None,
            base='opensuse/python',
            saltenv='base',
            pillarenv=None,
            pillar=None,
            **kwargs):
    '''
    .. versionchanged:: 2018.3.0
        The ``tag`` argument has been added. It is now required unless pulling
        from a registry.

    Ensure that an image is present. The image can either be pulled from a
    Docker registry, built from a Dockerfile, loaded from a saved image, or
    built by running SLS files against a base image.

    If none of the ``build``, ``load``, or ``sls`` arguments are used, then Salt
    will pull from the :ref:`configured registries <docker-authentication>`. If
    the specified image already exists, it will not be pulled unless ``force``
    is set to ``True``. Here is an example of a state that will pull an image
    from the Docker Hub:

    .. code-block:: yaml

        myuser/myimage:
          docker_image.present:
            - tag: mytag

    tag
        Tag name for the image. Required when using ``build``, ``load``, or
        ``sls`` to create the image, but optional if pulling from a repository.

        .. versionadded:: 2018.3.0

    build
        Path to directory on the Minion containing a Dockerfile

        .. code-block:: yaml

            myuser/myimage:
              docker_image.present:
                - build: /home/myuser/docker/myimage
                - tag: mytag

            myuser/myimage:
              docker_image.present:
                - build: /home/myuser/docker/myimage
                - tag: mytag
                - dockerfile: Dockerfile.alternative

        The image will be built using :py:func:`docker.build
        <salt.modules.dockermod.build>` and the specified image name and tag
        will be applied to it.

        .. versionadded:: 2016.11.0
        .. versionchanged:: 2018.3.0
            The ``tag`` must be manually specified using the ``tag`` argument.

    load
        Loads a tar archive created with :py:func:`docker.load
        <salt.modules.dockermod.load>` (or the ``docker load`` Docker CLI
        command), and assigns it the specified repo and tag.

        .. code-block:: yaml

            myuser/myimage:
              docker_image.present:
                - load: salt://path/to/image.tar
                - tag: mytag

        .. versionchanged:: 2018.3.0
            The ``tag`` must be manually specified using the ``tag`` argument.

    force : False
        Set this parameter to ``True`` to force Salt to pull/build/load the
        image even if it is already present.

    client_timeout
        Timeout in seconds for the Docker client. This is not a timeout for
        the state, but for receiving a response from the API.

    dockerfile
        Allows for an alternative Dockerfile to be specified. The path to the
        alternative Dockerfile is relative to the build path for the Docker
        container.

        .. versionadded:: 2016.11.0

    sls
        Allow for building of image with :py:func:`docker.sls_build
        <salt.modules.dockermod.sls_build>` by specifying the SLS files with
        which to build. This can be a list or comma-separated string.

        .. code-block:: yaml

            myuser/myimage:
              docker_image.present:
                - tag: latest
                - sls:
                    - webapp1
                    - webapp2
                - base: centos
                - saltenv: base

        .. versionadded:: 2017.7.0
        .. versionchanged:: 2018.3.0
            The ``tag`` must be manually specified using the ``tag`` argument.

    base
        Base image with which to start :py:func:`docker.sls_build
        <salt.modules.dockermod.sls_build>`

        .. versionadded:: 2017.7.0

    saltenv
        Specify the environment from which to retrieve the SLS indicated by the
        `mods` parameter.

        .. versionadded:: 2017.7.0
        .. versionchanged:: 2018.3.0
            Now uses the effective saltenv if not explicitly passed. In earlier
            versions, ``base`` was assumed as a default.

    pillarenv
        Specify a Pillar environment to be used when applying states. This
        can also be set in the minion config file using the
        :conf_minion:`pillarenv` option. When neither the
        :conf_minion:`pillarenv` minion config option nor this CLI argument is
        used, all Pillar environments will be merged together.

        .. versionadded:: 2018.3.0

    pillar
        Custom Pillar values, passed as a dictionary of key-value pairs

        .. note::
            Values passed this way will override Pillar values set via
            ``pillar_roots`` or an external Pillar source.

        .. versionadded:: 2018.3.0
    '''
ret = {'name': name,
'changes': {},
'result': False,
'comment': ''}
if not isinstance(name, six.string_types):
name = six.text_type(name)
# At most one of the args that result in an image being built can be used
num_build_args = len([x for x in (build, load, sls) if x is not None])
if num_build_args > 1:
ret['comment'] = \
'Only one of \'build\', \'load\', or \'sls\' is permitted.'
return ret
elif num_build_args == 1:
# If building, we need the tag to be specified
if not tag:
ret['comment'] = (
'The \'tag\' argument is required if any one of \'build\', '
'\'load\', or \'sls\' is used.'
)
return ret
if not isinstance(tag, six.string_types):
tag = six.text_type(tag)
full_image = ':'.join((name, tag))
else:
if tag:
name = '{0}:{1}'.format(name, tag)
full_image = name
try:
image_info = __salt__['docker.inspect_image'](full_image)
except CommandExecutionError as exc:
msg = exc.__str__()
if '404' in msg:
# Image not present
image_info = None
else:
ret['comment'] = msg
return ret
if image_info is not None:
# Specified image is present
if not force:
ret['result'] = True
ret['comment'] = 'Image {0} already present'.format(full_image)
return ret
if build or sls:
action = 'built'
elif load:
action = 'loaded'
else:
action = 'pulled'
if __opts__['test']:
ret['result'] = None
if (image_info is not None and force) or image_info is None:
ret['comment'] = 'Image {0} will be {1}'.format(full_image, action)
return ret
if build:
# Get the function's default values and args
argspec = salt.utils.args.get_function_argspec(__salt__['docker.build'])
# Map any existing args from kwargs into the build_args dictionary
build_args = dict(list(zip(argspec.args, argspec.defaults)))
for k in build_args:
if k in kwargs.get('kwargs', {}):
build_args[k] = kwargs.get('kwargs', {}).get(k)
try:
# map values passed from the state to the build args
build_args['path'] = build
build_args['repository'] = name
build_args['tag'] = tag
build_args['dockerfile'] = dockerfile
image_update = __salt__['docker.build'](**build_args)
except Exception as exc:
ret['comment'] = (
'Encountered error building {0} as {1}: {2}'.format(
build, full_image, exc
)
)
return ret
if image_info is None or image_update['Id'] != image_info['Id'][:12]:
ret['changes'] = image_update
elif sls:
_locals = locals()
sls_build_kwargs = {k: _locals[k] for k in ('saltenv', 'pillarenv', 'pillar')
if _locals[k] is not None}
try:
image_update = __salt__['docker.sls_build'](repository=name,
tag=tag,
base=base,
mods=sls,
**sls_build_kwargs)
except Exception as exc:
ret['comment'] = (
'Encountered error using SLS {0} for building {1}: {2}'
.format(sls, full_image, exc)
)
return ret
if image_info is None or image_update['Id'] != image_info['Id'][:12]:
ret['changes'] = image_update
elif load:
try:
image_update = __salt__['docker.load'](path=load,
repository=name,
tag=tag)
except Exception as exc:
ret['comment'] = (
'Encountered error loading {0} as {1}: {2}'
.format(load, full_image, exc)
)
return ret
if image_info is None or image_update.get('Layers', []):
ret['changes'] = image_update
else:
try:
image_update = __salt__['docker.pull'](
name,
insecure_registry=insecure_registry,
client_timeout=client_timeout
)
except Exception as exc:
ret['comment'] = \
'Encountered error pulling {0}: {1}'.format(full_image, exc)
return ret
if (image_info is not None and image_info['Id'][:12] == image_update
.get('Layers', {})
.get('Already_Pulled', [None])[0]):
# Image was pulled again (because of force) but was also
# already there. No new image was available on the registry.
pass
elif image_info is None or image_update.get('Layers', {}).get('Pulled'):
# Only add to the changes dict if layers were pulled
ret['changes'] = image_update
error = False
try:
__salt__['docker.inspect_image'](full_image)
except CommandExecutionError as exc:
msg = exc.__str__()
if '404' not in msg:
error = 'Failed to inspect image \'{0}\' after it was {1}: {2}'.format(
full_image, action, msg
)
if error:
ret['comment'] = error
else:
ret['result'] = True
if not ret['changes']:
ret['comment'] = (
'Image \'{0}\' was {1}, but there were no changes'.format(
name, action
)
)
else:
ret['comment'] = 'Image \'{0}\' was {1}'.format(full_image, action)
return ret | .. versionchanged:: 2018.3.0
The ``tag`` argument has been added. It is now required unless pulling
from a registry.
Ensure that an image is present. The image can either be pulled from a
Docker registry, built from a Dockerfile, loaded from a saved image, or
built by running SLS files against a base image.
If none of the ``build``, ``load``, or ``sls`` arguments are used, then Salt
will pull from the :ref:`configured registries <docker-authentication>`. If
the specified image already exists, it will not be pulled unless ``force``
is set to ``True``. Here is an example of a state that will pull an image
from the Docker Hub:
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- tag: mytag
tag
Tag name for the image. Required when using ``build``, ``load``, or
``sls`` to create the image, but optional if pulling from a repository.
.. versionadded:: 2018.3.0
build
Path to directory on the Minion containing a Dockerfile
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- build: /home/myuser/docker/myimage
- tag: mytag
myuser/myimage:
docker_image.present:
- build: /home/myuser/docker/myimage
- tag: mytag
- dockerfile: Dockerfile.alternative
The image will be built using :py:func:`docker.build
<salt.modules.dockermod.build>` and the specified image name and tag
will be applied to it.
.. versionadded:: 2016.11.0
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
load
Loads a tar archive created with :py:func:`docker.load
<salt.modules.dockermod.load>` (or the ``docker load`` Docker CLI
command), and assigns it the specified repo and tag.
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- load: salt://path/to/image.tar
- tag: mytag
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
force : False
Set this parameter to ``True`` to force Salt to pull/build/load the
image even if it is already present.
client_timeout
Timeout in seconds for the Docker client. This is not a timeout for
the state, but for receiving a response from the API.
dockerfile
Allows for an alternative Dockerfile to be specified. Path to alternative
Dockerfile is relative to the build path for the Docker container.
.. versionadded:: 2016.11.0
sls
Allow for building of image with :py:func:`docker.sls_build
<salt.modules.dockermod.sls_build>` by specifying the SLS files with
which to build. This can be a list or comma-separated string.
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- tag: latest
- sls:
- webapp1
- webapp2
- base: centos
- saltenv: base
.. versionadded:: 2017.7.0
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
base
Base image with which to start :py:func:`docker.sls_build
<salt.modules.dockermod.sls_build>`
.. versionadded:: 2017.7.0
saltenv
Specify the environment from which to retrieve the SLS indicated by the
`mods` parameter.
.. versionadded:: 2017.7.0
.. versionchanged:: 2018.3.0
Now uses the effective saltenv if not explicitly passed. In earlier
versions, ``base`` was assumed as a default.
pillarenv
Specify a Pillar environment to be used when applying states. This
can also be set in the minion config file using the
:conf_minion:`pillarenv` option. When neither the
:conf_minion:`pillarenv` minion config option nor this CLI argument is
used, all Pillar environments will be merged together.
.. versionadded:: 2018.3.0
pillar
Custom Pillar values, passed as a dictionary of key-value pairs
.. note::
Values passed this way will override Pillar values set via
``pillar_roots`` or an external Pillar source.
.. versionadded:: 2018.3.0 | Below is the instruction that describes the task:
### Input:
.. versionchanged:: 2018.3.0
The ``tag`` argument has been added. It is now required unless pulling
from a registry.
Ensure that an image is present. The image can either be pulled from a
Docker registry, built from a Dockerfile, loaded from a saved image, or
built by running SLS files against a base image.
If none of the ``build``, ``load``, or ``sls`` arguments are used, then Salt
will pull from the :ref:`configured registries <docker-authentication>`. If
the specified image already exists, it will not be pulled unless ``force``
is set to ``True``. Here is an example of a state that will pull an image
from the Docker Hub:
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- tag: mytag
tag
Tag name for the image. Required when using ``build``, ``load``, or
``sls`` to create the image, but optional if pulling from a repository.
.. versionadded:: 2018.3.0
build
Path to directory on the Minion containing a Dockerfile
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- build: /home/myuser/docker/myimage
- tag: mytag
myuser/myimage:
docker_image.present:
- build: /home/myuser/docker/myimage
- tag: mytag
- dockerfile: Dockerfile.alternative
The image will be built using :py:func:`docker.build
<salt.modules.dockermod.build>` and the specified image name and tag
will be applied to it.
.. versionadded:: 2016.11.0
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
load
Loads a tar archive created with :py:func:`docker.load
<salt.modules.dockermod.load>` (or the ``docker load`` Docker CLI
command), and assigns it the specified repo and tag.
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- load: salt://path/to/image.tar
- tag: mytag
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
force : False
Set this parameter to ``True`` to force Salt to pull/build/load the
image even if it is already present.
client_timeout
Timeout in seconds for the Docker client. This is not a timeout for
the state, but for receiving a response from the API.
dockerfile
Allows for an alternative Dockerfile to be specified. Path to alternative
Dockerfile is relative to the build path for the Docker container.
.. versionadded:: 2016.11.0
sls
Allow for building of image with :py:func:`docker.sls_build
<salt.modules.dockermod.sls_build>` by specifying the SLS files with
which to build. This can be a list or comma-separated string.
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- tag: latest
- sls:
- webapp1
- webapp2
- base: centos
- saltenv: base
.. versionadded:: 2017.7.0
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
base
Base image with which to start :py:func:`docker.sls_build
<salt.modules.dockermod.sls_build>`
.. versionadded:: 2017.7.0
saltenv
Specify the environment from which to retrieve the SLS indicated by the
`mods` parameter.
.. versionadded:: 2017.7.0
.. versionchanged:: 2018.3.0
Now uses the effective saltenv if not explicitly passed. In earlier
versions, ``base`` was assumed as a default.
pillarenv
Specify a Pillar environment to be used when applying states. This
can also be set in the minion config file using the
:conf_minion:`pillarenv` option. When neither the
:conf_minion:`pillarenv` minion config option nor this CLI argument is
used, all Pillar environments will be merged together.
.. versionadded:: 2018.3.0
pillar
Custom Pillar values, passed as a dictionary of key-value pairs
.. note::
Values passed this way will override Pillar values set via
``pillar_roots`` or an external Pillar source.
.. versionadded:: 2018.3.0
### Response:
def present(name,
tag=None,
build=None,
load=None,
force=False,
insecure_registry=False,
client_timeout=salt.utils.docker.CLIENT_TIMEOUT,
dockerfile=None,
sls=None,
base='opensuse/python',
saltenv='base',
pillarenv=None,
pillar=None,
**kwargs):
'''
.. versionchanged:: 2018.3.0
The ``tag`` argument has been added. It is now required unless pulling
from a registry.
Ensure that an image is present. The image can either be pulled from a
Docker registry, built from a Dockerfile, loaded from a saved image, or
built by running SLS files against a base image.
If none of the ``build``, ``load``, or ``sls`` arguments are used, then Salt
will pull from the :ref:`configured registries <docker-authentication>`. If
the specified image already exists, it will not be pulled unless ``force``
is set to ``True``. Here is an example of a state that will pull an image
from the Docker Hub:
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- tag: mytag
tag
Tag name for the image. Required when using ``build``, ``load``, or
``sls`` to create the image, but optional if pulling from a repository.
.. versionadded:: 2018.3.0
build
Path to directory on the Minion containing a Dockerfile
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- build: /home/myuser/docker/myimage
- tag: mytag
myuser/myimage:
docker_image.present:
- build: /home/myuser/docker/myimage
- tag: mytag
- dockerfile: Dockerfile.alternative
The image will be built using :py:func:`docker.build
<salt.modules.dockermod.build>` and the specified image name and tag
will be applied to it.
.. versionadded:: 2016.11.0
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
load
Loads a tar archive created with :py:func:`docker.load
<salt.modules.dockermod.load>` (or the ``docker load`` Docker CLI
command), and assigns it the specified repo and tag.
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- load: salt://path/to/image.tar
- tag: mytag
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
force : False
Set this parameter to ``True`` to force Salt to pull/build/load the
image even if it is already present.
client_timeout
Timeout in seconds for the Docker client. This is not a timeout for
the state, but for receiving a response from the API.
dockerfile
Allows for an alternative Dockerfile to be specified. Path to alternative
Dockerfile is relative to the build path for the Docker container.
.. versionadded:: 2016.11.0
sls
Allow for building of image with :py:func:`docker.sls_build
<salt.modules.dockermod.sls_build>` by specifying the SLS files with
which to build. This can be a list or comma-separated string.
.. code-block:: yaml
myuser/myimage:
docker_image.present:
- tag: latest
- sls:
- webapp1
- webapp2
- base: centos
- saltenv: base
.. versionadded:: 2017.7.0
.. versionchanged:: 2018.3.0
The ``tag`` must be manually specified using the ``tag`` argument.
base
Base image with which to start :py:func:`docker.sls_build
<salt.modules.dockermod.sls_build>`
.. versionadded:: 2017.7.0
saltenv
Specify the environment from which to retrieve the SLS indicated by the
`mods` parameter.
.. versionadded:: 2017.7.0
.. versionchanged:: 2018.3.0
Now uses the effective saltenv if not explicitly passed. In earlier
versions, ``base`` was assumed as a default.
pillarenv
Specify a Pillar environment to be used when applying states. This
can also be set in the minion config file using the
:conf_minion:`pillarenv` option. When neither the
:conf_minion:`pillarenv` minion config option nor this CLI argument is
used, all Pillar environments will be merged together.
.. versionadded:: 2018.3.0
pillar
Custom Pillar values, passed as a dictionary of key-value pairs
.. note::
Values passed this way will override Pillar values set via
``pillar_roots`` or an external Pillar source.
.. versionadded:: 2018.3.0
'''
ret = {'name': name,
'changes': {},
'result': False,
'comment': ''}
if not isinstance(name, six.string_types):
name = six.text_type(name)
# At most one of the args that result in an image being built can be used
num_build_args = len([x for x in (build, load, sls) if x is not None])
if num_build_args > 1:
ret['comment'] = \
'Only one of \'build\', \'load\', or \'sls\' is permitted.'
return ret
elif num_build_args == 1:
# If building, we need the tag to be specified
if not tag:
ret['comment'] = (
'The \'tag\' argument is required if any one of \'build\', '
'\'load\', or \'sls\' is used.'
)
return ret
if not isinstance(tag, six.string_types):
tag = six.text_type(tag)
full_image = ':'.join((name, tag))
else:
if tag:
name = '{0}:{1}'.format(name, tag)
full_image = name
try:
image_info = __salt__['docker.inspect_image'](full_image)
except CommandExecutionError as exc:
msg = exc.__str__()
if '404' in msg:
# Image not present
image_info = None
else:
ret['comment'] = msg
return ret
if image_info is not None:
# Specified image is present
if not force:
ret['result'] = True
ret['comment'] = 'Image {0} already present'.format(full_image)
return ret
if build or sls:
action = 'built'
elif load:
action = 'loaded'
else:
action = 'pulled'
if __opts__['test']:
ret['result'] = None
if (image_info is not None and force) or image_info is None:
ret['comment'] = 'Image {0} will be {1}'.format(full_image, action)
return ret
if build:
# Get the function's default values and args
argspec = salt.utils.args.get_function_argspec(__salt__['docker.build'])
# Map any existing args from kwargs into the build_args dictionary
build_args = dict(list(zip(argspec.args, argspec.defaults)))
for k in build_args:
if k in kwargs.get('kwargs', {}):
build_args[k] = kwargs.get('kwargs', {}).get(k)
try:
# map values passed from the state to the build args
build_args['path'] = build
build_args['repository'] = name
build_args['tag'] = tag
build_args['dockerfile'] = dockerfile
image_update = __salt__['docker.build'](**build_args)
except Exception as exc:
ret['comment'] = (
'Encountered error building {0} as {1}: {2}'.format(
build, full_image, exc
)
)
return ret
if image_info is None or image_update['Id'] != image_info['Id'][:12]:
ret['changes'] = image_update
elif sls:
_locals = locals()
sls_build_kwargs = {k: _locals[k] for k in ('saltenv', 'pillarenv', 'pillar')
if _locals[k] is not None}
try:
image_update = __salt__['docker.sls_build'](repository=name,
tag=tag,
base=base,
mods=sls,
**sls_build_kwargs)
except Exception as exc:
ret['comment'] = (
'Encountered error using SLS {0} for building {1}: {2}'
.format(sls, full_image, exc)
)
return ret
if image_info is None or image_update['Id'] != image_info['Id'][:12]:
ret['changes'] = image_update
elif load:
try:
image_update = __salt__['docker.load'](path=load,
repository=name,
tag=tag)
except Exception as exc:
ret['comment'] = (
'Encountered error loading {0} as {1}: {2}'
.format(load, full_image, exc)
)
return ret
if image_info is None or image_update.get('Layers', []):
ret['changes'] = image_update
else:
try:
image_update = __salt__['docker.pull'](
name,
insecure_registry=insecure_registry,
client_timeout=client_timeout
)
except Exception as exc:
ret['comment'] = \
'Encountered error pulling {0}: {1}'.format(full_image, exc)
return ret
if (image_info is not None and image_info['Id'][:12] == image_update
.get('Layers', {})
.get('Already_Pulled', [None])[0]):
# Image was pulled again (because of force) but was also
# already there. No new image was available on the registry.
pass
elif image_info is None or image_update.get('Layers', {}).get('Pulled'):
# Only add to the changes dict if layers were pulled
ret['changes'] = image_update
error = False
try:
__salt__['docker.inspect_image'](full_image)
except CommandExecutionError as exc:
msg = exc.__str__()
if '404' not in msg:
error = 'Failed to inspect image \'{0}\' after it was {1}: {2}'.format(
full_image, action, msg
)
if error:
ret['comment'] = error
else:
ret['result'] = True
if not ret['changes']:
ret['comment'] = (
'Image \'{0}\' was {1}, but there were no changes'.format(
name, action
)
)
else:
ret['comment'] = 'Image \'{0}\' was {1}'.format(full_image, action)
return ret |
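The argument validation at the top of this state can be read in isolation. A minimal sketch of that mutual-exclusion check follows; the helper name `check_build_args` and its return shape are illustrative, not part of Salt's API:

```python
# Hedged sketch: the mutual-exclusion check docker_image.present performs on
# its 'build', 'load', and 'sls' arguments, isolated as a pure function.
def check_build_args(build=None, load=None, sls=None, tag=None):
    """Return an error string, or None if the argument combination is valid."""
    # At most one of the args that result in an image being built can be used
    num_build_args = len([x for x in (build, load, sls) if x is not None])
    if num_build_args > 1:
        return "Only one of 'build', 'load', or 'sls' is permitted."
    if num_build_args == 1 and not tag:
        return ("The 'tag' argument is required if any one of 'build', "
                "'load', or 'sls' is used.")
    return None
```

Pulling from a registry (no build args) is the only path where `tag` stays optional.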
def send(self, data):
"""Send some part of message to the socket."""
bytes_sent = self._sock.send(extract_bytes(data))
self.bytes_written += bytes_sent
return bytes_sent | Send some part of message to the socket. | Below is the instruction that describes the task:
### Input:
Send some part of message to the socket.
### Response:
def send(self, data):
"""Send some part of message to the socket."""
bytes_sent = self._sock.send(extract_bytes(data))
self.bytes_written += bytes_sent
return bytes_sent |
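The `send()` method above depends on `self._sock` and an `extract_bytes()` helper that are not shown in this row. A self-contained sketch of the byte-counting pattern follows, assuming `extract_bytes` merely normalizes its argument to `bytes`; `FakeSock` and `Connection` are stand-in names for illustration:

```python
# Hedged sketch of the send() pattern, with a fake socket in place of a real one.
def extract_bytes(data):
    # Assumption: the real helper just yields a bytes view of the payload.
    return bytes(data)

class FakeSock:
    def __init__(self):
        self.buffer = b''
    def send(self, data):
        self.buffer += data
        return len(data)

class Connection:
    def __init__(self, sock):
        self._sock = sock
        self.bytes_written = 0
    def send(self, data):
        """Send some part of message to the socket."""
        bytes_sent = self._sock.send(extract_bytes(data))
        self.bytes_written += bytes_sent
        return bytes_sent
```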
def _UpdateCampaignDSASetting(client, campaign_id, feed_id):
"""Updates the campaign DSA setting to DSA pagefeeds.
Args:
client: an AdWordsClient instance.
campaign_id: a str Campaign ID.
feed_id: a str page Feed ID.
Raises:
ValueError: If the given campaign is found not to be a dynamic search ad
campaign.
"""
# Get the CampaignService.
campaign_service = client.GetService('CampaignService', version='v201809')
selector = {
'fields': ['Id', 'Settings'],
'predicates': [{
'field': 'Id',
'operator': 'EQUALS',
'values': [campaign_id]
}]
}
response = campaign_service.get(selector)
if response['totalNumEntries']:
campaign = response['entries'][0]
else:
raise ValueError('No campaign with ID "%d" exists.' % campaign_id)
if not campaign['settings']:
raise ValueError('This is not a DSA campaign.')
dsa_setting = None
campaign_settings = campaign['settings']
for setting in campaign_settings:
if setting['Setting.Type'] == 'DynamicSearchAdsSetting':
dsa_setting = setting
break
if dsa_setting is None:
raise ValueError('This is not a DSA campaign.')
dsa_setting['pageFeed'] = {
'feedIds': [feed_id]
}
# Optional: Specify whether only the supplied URLs should be used with your
# Dynamic Search Ads.
dsa_setting['useSuppliedUrlsOnly'] = True
operation = {
'operand': {
'id': campaign_id,
'settings': campaign_settings
},
'operator': 'SET'
}
campaign_service.mutate([operation])
print('DSA page feed for campaign ID "%d" was updated with feed ID "%d".' % (
campaign_id, feed_id))
Args:
client: an AdWordsClient instance.
campaign_id: a str Campaign ID.
feed_id: a str page Feed ID.
Raises:
ValueError: If the given campaign is found not to be a dynamic search ad
campaign. | Below is the instruction that describes the task:
### Input:
Updates the campaign DSA setting to DSA pagefeeds.
Args:
client: an AdWordsClient instance.
campaign_id: a str Campaign ID.
feed_id: a str page Feed ID.
Raises:
ValueError: If the given campaign is found not to be a dynamic search ad
campaign.
### Response:
def _UpdateCampaignDSASetting(client, campaign_id, feed_id):
"""Updates the campaign DSA setting to DSA pagefeeds.
Args:
client: an AdWordsClient instance.
campaign_id: a str Campaign ID.
feed_id: a str page Feed ID.
Raises:
ValueError: If the given campaign is found not to be a dynamic search ad
campaign.
"""
# Get the CampaignService.
campaign_service = client.GetService('CampaignService', version='v201809')
selector = {
'fields': ['Id', 'Settings'],
'predicates': [{
'field': 'Id',
'operator': 'EQUALS',
'values': [campaign_id]
}]
}
response = campaign_service.get(selector)
if response['totalNumEntries']:
campaign = response['entries'][0]
else:
raise ValueError('No campaign with ID "%d" exists.' % campaign_id)
if not campaign['settings']:
raise ValueError('This is not a DSA campaign.')
dsa_setting = None
campaign_settings = campaign['settings']
for setting in campaign_settings:
if setting['Setting.Type'] == 'DynamicSearchAdsSetting':
dsa_setting = setting
break
if dsa_setting is None:
raise ValueError('This is not a DSA campaign.')
dsa_setting['pageFeed'] = {
'feedIds': [feed_id]
}
# Optional: Specify whether only the supplied URLs should be used with your
# Dynamic Search Ads.
dsa_setting['useSuppliedUrlsOnly'] = True
operation = {
'operand': {
'id': campaign_id,
'settings': campaign_settings
},
'operator': 'SET'
}
campaign_service.mutate([operation])
print('DSA page feed for campaign ID "%d" was updated with feed ID "%d".' % (
campaign_id, feed_id))
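The two request payloads this function builds for the AdWords `CampaignService` can be extracted as pure functions so their shape is inspectable without an API client. The helper names below are illustrative, not part of the googleads library:

```python
# Hedged sketch: the selector and SET-operation dicts from
# _UpdateCampaignDSASetting, as standalone builders.
def build_campaign_selector(campaign_id):
    return {
        'fields': ['Id', 'Settings'],
        'predicates': [{
            'field': 'Id',
            'operator': 'EQUALS',
            'values': [campaign_id],
        }],
    }

def build_set_operation(campaign_id, campaign_settings):
    return {
        'operand': {'id': campaign_id, 'settings': campaign_settings},
        'operator': 'SET',
    }
```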
def insert_element(self, vector, value, idx, name=''):
"""
Returns vector with vector[idx] replaced by value.
The result is undefined if the idx is larger than or equal to the vector length.
"""
instr = instructions.InsertElement(self.block, vector, value, idx,
name=name)
self._insert(instr)
return instr | Returns vector with vector[idx] replaced by value.
The result is undefined if the idx is larger than or equal to the vector length. | Below is the instruction that describes the task:
### Input:
Returns vector with vector[idx] replaced by value.
The result is undefined if the idx is larger than or equal to the vector length.
### Response:
def insert_element(self, vector, value, idx, name=''):
"""
Returns vector with vector[idx] replaced by value.
The result is undefined if the idx is larger than or equal to the vector length.
"""
instr = instructions.InsertElement(self.block, vector, value, idx,
name=name)
self._insert(instr)
return instr |
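The semantics of the emitted `insertelement` instruction can be pictured with a plain-Python analogue: a copy of the vector with position `idx` replaced by `value`. This list version is only an analogy, not how the IR builder works internally:

```python
# Hedged sketch of insert_element's semantics on a Python list.
def insert_element_py(vector, value, idx):
    result = list(vector)       # the original vector is left untouched
    result[idx] = value         # undefined in IR if idx >= vector length
    return result
```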
def add_metric(self, metric: float) -> None:
"""
Record a new value of the metric and update the various things that depend on it.
"""
new_best = ((self._best_so_far is None) or
(self._should_decrease and metric < self._best_so_far) or
(not self._should_decrease and metric > self._best_so_far))
if new_best:
self.best_epoch = self._epoch_number
self._is_best_so_far = True
self._best_so_far = metric
self._epochs_with_no_improvement = 0
else:
self._is_best_so_far = False
self._epochs_with_no_improvement += 1
self._epoch_number += 1 | Record a new value of the metric and update the various things that depend on it. | Below is the instruction that describes the task:
### Input:
Record a new value of the metric and update the various things that depend on it.
### Response:
def add_metric(self, metric: float) -> None:
"""
Record a new value of the metric and update the various things that depend on it.
"""
new_best = ((self._best_so_far is None) or
(self._should_decrease and metric < self._best_so_far) or
(not self._should_decrease and metric > self._best_so_far))
if new_best:
self.best_epoch = self._epoch_number
self._is_best_so_far = True
self._best_so_far = metric
self._epochs_with_no_improvement = 0
else:
self._is_best_so_far = False
self._epochs_with_no_improvement += 1
self._epoch_number += 1 |
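The method above only makes sense with the fields it touches. A minimal tracker class reproducing that logic follows; the class name and constructor are illustrative, not the library's actual API:

```python
# Hedged sketch: a self-contained metric tracker with the add_metric() logic.
class MetricTracker:
    def __init__(self, should_decrease):
        self._should_decrease = should_decrease  # True if lower metric is better
        self._best_so_far = None
        self._is_best_so_far = False
        self.best_epoch = None
        self._epochs_with_no_improvement = 0
        self._epoch_number = 0

    def add_metric(self, metric):
        new_best = ((self._best_so_far is None) or
                    (self._should_decrease and metric < self._best_so_far) or
                    (not self._should_decrease and metric > self._best_so_far))
        if new_best:
            self.best_epoch = self._epoch_number
            self._is_best_so_far = True
            self._best_so_far = metric
            self._epochs_with_no_improvement = 0
        else:
            self._is_best_so_far = False
            self._epochs_with_no_improvement += 1
        self._epoch_number += 1
```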
def select(self, *keys):
"""
Specify that the query result should include only certain fields. Can be called repeatedly; the fields from each call accumulate and will all be returned.
:param keys: field names to include
:rtype: Query
"""
if len(keys) == 1 and isinstance(keys[0], (list, tuple)):
keys = keys[0]
self._select += keys
return self | 指定查询返回结果中只包含某些字段。可以重复调用,每次调用的包含内容都将会被返回。
:param keys: 包含字段名
:rtype: Query | Below is the the instruction that describes the task:
### Input:
Specify that the query result should include only certain fields. Can be called repeatedly; the fields from each call accumulate and will all be returned.
:param keys: field names to include
:rtype: Query
### Response:
def select(self, *keys):
"""
Specify that the query result should include only certain fields. Can be called repeatedly; the fields from each call accumulate and will all be returned.
:param keys: field names to include
:rtype: Query
"""
if len(keys) == 1 and isinstance(keys[0], (list, tuple)):
keys = keys[0]
self._select += keys
return self |
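The key-accumulation and chaining behavior of `select()` can be demonstrated on a stripped-down `Query` class. Only `_select` and the chaining return value are modeled; everything else about the real class is omitted:

```python
# Hedged sketch: select() on a minimal Query, showing accumulation and chaining.
class Query:
    def __init__(self):
        self._select = []

    def select(self, *keys):
        # A single list/tuple argument is unpacked; repeated calls accumulate.
        if len(keys) == 1 and isinstance(keys[0], (list, tuple)):
            keys = keys[0]
        self._select += keys
        return self
```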
def _line_wrapper(self,diffs):
"""Returns iterator that splits (wraps) mdiff text lines"""
# pull from/to data and flags from mdiff iterator
for fromdata,todata,flag in diffs:
# check for context separators and pass them through
if flag is None:
yield fromdata,todata,flag
continue
(fromline,fromtext),(toline,totext) = fromdata,todata
# for each from/to line split it at the wrap column to form
# list of text lines.
fromlist,tolist = [],[]
self._split_line(fromlist,fromline,fromtext)
self._split_line(tolist,toline,totext)
# yield from/to line in pairs inserting blank lines as
# necessary when one side has more wrapped lines
while fromlist or tolist:
if fromlist:
fromdata = fromlist.pop(0)
else:
fromdata = ('',' ')
if tolist:
todata = tolist.pop(0)
else:
todata = ('',' ')
yield fromdata,todata,flag | Returns iterator that splits (wraps) mdiff text lines | Below is the instruction that describes the task:
### Input:
Returns iterator that splits (wraps) mdiff text lines
### Response:
def _line_wrapper(self,diffs):
"""Returns iterator that splits (wraps) mdiff text lines"""
# pull from/to data and flags from mdiff iterator
for fromdata,todata,flag in diffs:
# check for context separators and pass them through
if flag is None:
yield fromdata,todata,flag
continue
(fromline,fromtext),(toline,totext) = fromdata,todata
# for each from/to line split it at the wrap column to form
# list of text lines.
fromlist,tolist = [],[]
self._split_line(fromlist,fromline,fromtext)
self._split_line(tolist,toline,totext)
# yield from/to line in pairs inserting blank lines as
# necessary when one side has more wrapped lines
while fromlist or tolist:
if fromlist:
fromdata = fromlist.pop(0)
else:
fromdata = ('',' ')
if tolist:
todata = tolist.pop(0)
else:
todata = ('',' ')
yield fromdata,todata,flag |
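The pad-and-pair loop at the end of `_line_wrapper` is worth isolating: when one side wraps into more lines than the other, the shorter side is padded with `('', ' ')` placeholders so the rows stay aligned. A sketch with an illustrative function name:

```python
# Hedged sketch: pair two wrapped-line lists, padding the shorter side.
def pair_with_padding(fromlist, tolist, blank=('', ' ')):
    fromlist, tolist = list(fromlist), list(tolist)  # don't mutate callers' lists
    while fromlist or tolist:
        fromdata = fromlist.pop(0) if fromlist else blank
        todata = tolist.pop(0) if tolist else blank
        yield fromdata, todata
```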
def parse_config_h(fp, vars=None):
"""Parse a config.h-style file.
A dictionary containing name/value pairs is returned. If an
optional dictionary is passed in as the second argument, it is
used instead of a new dictionary.
"""
if vars is None:
vars = {}
define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n")
while True:
line = fp.readline()
if not line:
break
m = define_rx.match(line)
if m:
n, v = m.group(1, 2)
try:
v = int(v)
except ValueError:
pass
vars[n] = v
else:
m = undef_rx.match(line)
if m:
vars[m.group(1)] = 0
return vars | Parse a config.h-style file.
A dictionary containing name/value pairs is returned. If an
optional dictionary is passed in as the second argument, it is
used instead of a new dictionary. | Below is the instruction that describes the task:
### Input:
Parse a config.h-style file.
A dictionary containing name/value pairs is returned. If an
optional dictionary is passed in as the second argument, it is
used instead of a new dictionary.
### Response:
def parse_config_h(fp, vars=None):
"""Parse a config.h-style file.
A dictionary containing name/value pairs is returned. If an
optional dictionary is passed in as the second argument, it is
used instead of a new dictionary.
"""
if vars is None:
vars = {}
define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n")
while True:
line = fp.readline()
if not line:
break
m = define_rx.match(line)
if m:
n, v = m.group(1, 2)
try:
v = int(v)
except ValueError:
pass
vars[n] = v
else:
m = undef_rx.match(line)
if m:
vars[m.group(1)] = 0
return vars |
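Because `fp` can be any file-like object, an `io.StringIO` is enough to exercise `parse_config_h()`. The function body is repeated verbatim below so the sketch runs on its own:

```python
# Usage sketch for parse_config_h() with an in-memory file.
import io
import re

def parse_config_h(fp, vars=None):
    """Parse a config.h-style file into a name/value dictionary."""
    if vars is None:
        vars = {}
    define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
    undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n")
    while True:
        line = fp.readline()
        if not line:
            break
        m = define_rx.match(line)
        if m:
            n, v = m.group(1, 2)
            try:
                v = int(v)
            except ValueError:
                pass
            vars[n] = v
        else:
            m = undef_rx.match(line)
            if m:
                vars[m.group(1)] = 0
    return vars

sample = '#define HAVE_UNISTD_H 1\n/* #undef HAVE_FOO */\n#define VERSION "3.1"\n'
config = parse_config_h(io.StringIO(sample))
```

Note that non-integer values keep their surrounding quotes, since only `int()` conversion is attempted.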
def do_tree(self, params):
"""
\x1b[1mNAME\x1b[0m
tree - Print the tree under a given path
\x1b[1mSYNOPSIS\x1b[0m
tree [path] [max_depth]
\x1b[1mOPTIONS\x1b[0m
* path: the path (default: cwd)
* max_depth: max recursion limit (0 is no limit) (default: 0)
\x1b[1mEXAMPLES\x1b[0m
> tree
.
├── zookeeper
│ ├── config
│ ├── quota
> tree 1
.
├── zookeeper
├── foo
├── bar
"""
self.show_output(".")
for child, level in self._zk.tree(params.path, params.max_depth):
self.show_output(u"%s├── %s", u"│ " * level, child) | \x1b[1mNAME\x1b[0m
tree - Print the tree under a given path
\x1b[1mSYNOPSIS\x1b[0m
tree [path] [max_depth]
\x1b[1mOPTIONS\x1b[0m
* path: the path (default: cwd)
* max_depth: max recursion limit (0 is no limit) (default: 0)
\x1b[1mEXAMPLES\x1b[0m
> tree
.
├── zookeeper
│ ├── config
│ ├── quota
> tree 1
.
├── zookeeper
├── foo
├── bar | Below is the instruction that describes the task:
### Input:
\x1b[1mNAME\x1b[0m
tree - Print the tree under a given path
\x1b[1mSYNOPSIS\x1b[0m
tree [path] [max_depth]
\x1b[1mOPTIONS\x1b[0m
* path: the path (default: cwd)
* max_depth: max recursion limit (0 is no limit) (default: 0)
\x1b[1mEXAMPLES\x1b[0m
> tree
.
├── zookeeper
│ ├── config
│ ├── quota
> tree 1
.
├── zookeeper
├── foo
├── bar
### Response:
def do_tree(self, params):
"""
\x1b[1mNAME\x1b[0m
tree - Print the tree under a given path
\x1b[1mSYNOPSIS\x1b[0m
tree [path] [max_depth]
\x1b[1mOPTIONS\x1b[0m
* path: the path (default: cwd)
* max_depth: max recursion limit (0 is no limit) (default: 0)
\x1b[1mEXAMPLES\x1b[0m
> tree
.
├── zookeeper
│ ├── config
│ ├── quota
> tree 1
.
├── zookeeper
├── foo
├── bar
"""
self.show_output(".")
for child, level in self._zk.tree(params.path, params.max_depth):
self.show_output(u"%s├── %s", u"│ " * level, child) |
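The output format of `do_tree` can be reproduced from any iterable of `(child, level)` pairs such as the one `self._zk.tree()` yields. `render_tree` is an illustrative name; the real command writes through `self.show_output` instead of returning lines:

```python
# Hedged sketch: the tree-drawing format used by do_tree.
def render_tree(entries):
    lines = ['.']
    for child, level in entries:
        lines.append(u'%s├── %s' % (u'│ ' * level, child))
    return lines
```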
def sign(self, message):
"""Signs a message.
Args:
message: bytes, Message to be signed.
Returns:
string, The signature of the message for the given key.
"""
message = _helpers._to_bytes(message, encoding='utf-8')
return crypto.sign(self._key, message, 'sha256') | Signs a message.
Args:
message: bytes, Message to be signed.
Returns:
string, The signature of the message for the given key. | Below is the instruction that describes the task:
### Input:
Signs a message.
Args:
message: bytes, Message to be signed.
Returns:
string, The signature of the message for the given key.
### Response:
def sign(self, message):
"""Signs a message.
Args:
message: bytes, Message to be signed.
Returns:
string, The signature of the message for the given key.
"""
message = _helpers._to_bytes(message, encoding='utf-8')
return crypto.sign(self._key, message, 'sha256') |
def has_text_frame(self):
"""
Return |True| if this data label has a text frame (implying it has
custom data label text), and |False| otherwise. Assigning |True|
causes a text frame to be added if not already present. Assigning
|False| causes any existing text frame to be removed along with any
text contained in the text frame.
"""
dLbl = self._dLbl
if dLbl is None:
return False
if dLbl.xpath('c:tx/c:rich'):
return True
return False | Return |True| if this data label has a text frame (implying it has
custom data label text), and |False| otherwise. Assigning |True|
causes a text frame to be added if not already present. Assigning
|False| causes any existing text frame to be removed along with any
text contained in the text frame. | Below is the instruction that describes the task:
### Input:
Return |True| if this data label has a text frame (implying it has
custom data label text), and |False| otherwise. Assigning |True|
causes a text frame to be added if not already present. Assigning
|False| causes any existing text frame to be removed along with any
text contained in the text frame.
### Response:
def has_text_frame(self):
"""
Return |True| if this data label has a text frame (implying it has
custom data label text), and |False| otherwise. Assigning |True|
causes a text frame to be added if not already present. Assigning
|False| causes any existing text frame to be removed along with any
text contained in the text frame.
"""
dLbl = self._dLbl
if dLbl is None:
return False
if dLbl.xpath('c:tx/c:rich'):
return True
return False |
def gen_random_key(size=32):
"""
Generate a cryptographically-secure random key. This is done by using
Python 2.4's os.urandom, or PyCrypto.
"""
import os
import sys
if hasattr(os, "urandom"): # Python 2.4+
return os.urandom(size)
# Try using PyCrypto if available
try:
from Crypto.Util.randpool import RandomPool
from Crypto.Hash import SHA256
return RandomPool(hash=SHA256).get_bytes(size)
except ImportError:
print >>sys.stderr, "WARNING: The generated key will not be a cryptographically-secure key. Consider using Python 2.4+ to generate the key, or install PyCrypto."
# Stupid random generation
import random
L = []
for i in range(size):
L.append(chr(random.randint(0, 255)))
return "".join(L) | Generate a cryptographically-secure random key. This is done by using
Python 2.4's os.urandom, or PyCrypto. | Below is the instruction that describes the task:
### Input:
Generate a cryptographically-secure random key. This is done by using
Python 2.4's os.urandom, or PyCrypto.
### Response:
def gen_random_key(size=32):
"""
Generate a cryptographically-secure random key. This is done by using
Python 2.4's os.urandom, or PyCrypto.
"""
import os
import sys
if hasattr(os, "urandom"): # Python 2.4+
return os.urandom(size)
# Try using PyCrypto if available
try:
from Crypto.Util.randpool import RandomPool
from Crypto.Hash import SHA256
return RandomPool(hash=SHA256).get_bytes(size)
except ImportError:
print >>sys.stderr, "WARNING: The generated key will not be a cryptographically-secure key. Consider using Python 2.4+ to generate the key, or install PyCrypto."
# Stupid random generation
import random
L = []
for i in range(size):
L.append(chr(random.randint(0, 255)))
return "".join(L) |
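The Python 2 fallbacks above are obsolete on any modern interpreter; the secrets module (assumed here as the idiomatic replacement) provides the same cryptographically secure bytes directly:

```python
import secrets

def gen_random_key_modern(size=32):
    """Return `size` cryptographically secure random bytes."""
    return secrets.token_bytes(size)  # wraps os.urandom under the hood

key = gen_random_key_modern()
```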
def get_service_key(self, service_name, key_name):
"""
Returns the service key details.
Similar to `cf service-key`.
"""
for key in self._get_service_keys(service_name)['resources']:
if key_name == key['entity']['name']:
guid = key['metadata']['guid']
uri = "/v2/service_keys/%s" % (guid)
return self.api.get(uri)
return None | Returns the service key details.
Similar to `cf service-key`. | Below is the instruction that describes the task:
### Input:
Returns the service key details.
Similar to `cf service-key`.
### Response:
def get_service_key(self, service_name, key_name):
"""
Returns the service key details.
Similar to `cf service-key`.
"""
for key in self._get_service_keys(service_name)['resources']:
if key_name == key['entity']['name']:
guid = key['metadata']['guid']
uri = "/v2/service_keys/%s" % (guid)
return self.api.get(uri)
return None |
def get_entry_url(self, entry):
"""
Return the URL of a blog entry, relative to this page.
"""
# It could be possible this page is fetched as fallback, while the 'entry' does have a translation.
# - Currently django-fluent-pages 1.0b3 `Page.objects.get_for_path()` assigns the language of retrieval
# as current object language. The page is not assigned a fallback language instead.
# - With i18n_patterns() that would make strange URLs, such as '/en/blog/2016/05/dutch-entry-title/'
# Hence, respect the entry language as starting point to make the language consistent.
with switch_language(self, entry.get_current_language()):
return self.get_absolute_url() + entry.get_relative_url() | Return the URL of a blog entry, relative to this page. | Below is the instruction that describes the task:
### Input:
Return the URL of a blog entry, relative to this page.
### Response:
def get_entry_url(self, entry):
"""
Return the URL of a blog entry, relative to this page.
"""
# It could be possible this page is fetched as fallback, while the 'entry' does have a translation.
# - Currently django-fluent-pages 1.0b3 `Page.objects.get_for_path()` assigns the language of retrieval
# as current object language. The page is not assigned a fallback language instead.
# - With i18n_patterns() that would make strange URLs, such as '/en/blog/2016/05/dutch-entry-title/'
# Hence, respect the entry language as starting point to make the language consistent.
with switch_language(self, entry.get_current_language()):
return self.get_absolute_url() + entry.get_relative_url() |
def create(cls, *args, **kwargs) -> 'Entity':
"""Create a new record in the repository.
Also performs unique validations before creating the entity
:param args: positional arguments for the entity
:param kwargs: keyword arguments for the entity
"""
logger.debug(
f'Creating new `{cls.__name__}` object using data {kwargs}')
model_cls = repo_factory.get_model(cls)
repository = repo_factory.get_repository(cls)
try:
# Build the entity from the input arguments
# Raises validation errors, if any, at this point
entity = cls(*args, **kwargs)
# Do unique checks, create this object and return it
entity._validate_unique()
# Perform Pre-Save Actions
entity.pre_save()
# Build the model object and create it
model_obj = repository.create(model_cls.from_entity(entity))
# Update the auto fields of the entity
for field_name, field_obj in entity.meta_.declared_fields.items():
if isinstance(field_obj, Auto):
if isinstance(model_obj, dict):
field_val = model_obj[field_name]
else:
field_val = getattr(model_obj, field_name)
setattr(entity, field_name, field_val)
# Set Entity status to saved
entity.state_.mark_saved()
# Perform Post-Save Actions
entity.post_save()
return entity
except ValidationError:
# FIXME Log Exception
raise | Create a new record in the repository.
Also performs unique validations before creating the entity
:param args: positional arguments for the entity
:param kwargs: keyword arguments for the entity | Below is the instruction that describes the task:
### Input:
Create a new record in the repository.
Also performs unique validations before creating the entity
:param args: positional arguments for the entity
:param kwargs: keyword arguments for the entity
### Response:
def create(cls, *args, **kwargs) -> 'Entity':
"""Create a new record in the repository.
Also performs unique validations before creating the entity
:param args: positional arguments for the entity
:param kwargs: keyword arguments for the entity
"""
logger.debug(
f'Creating new `{cls.__name__}` object using data {kwargs}')
model_cls = repo_factory.get_model(cls)
repository = repo_factory.get_repository(cls)
try:
# Build the entity from the input arguments
# Raises validation errors, if any, at this point
entity = cls(*args, **kwargs)
# Do unique checks, create this object and return it
entity._validate_unique()
# Perform Pre-Save Actions
entity.pre_save()
# Build the model object and create it
model_obj = repository.create(model_cls.from_entity(entity))
# Update the auto fields of the entity
for field_name, field_obj in entity.meta_.declared_fields.items():
if isinstance(field_obj, Auto):
if isinstance(model_obj, dict):
field_val = model_obj[field_name]
else:
field_val = getattr(model_obj, field_name)
setattr(entity, field_name, field_val)
# Set Entity status to saved
entity.state_.mark_saved()
# Perform Post-Save Actions
entity.post_save()
return entity
except ValidationError:
# FIXME Log Exception
raise |
def getArticles(self,
page=1,
count=100,
sortBy = "rel",
sortByAsc = False,
returnInfo=ReturnInfo()):
"""
return a list of articles that match the topic page
@param page: which page of the results to return (default: 1)
@param count: number of articles to return (default: 100)
@param sortBy: how are articles sorted. Options: id (internal id), date (publishing date), cosSim (closeness to the event centroid), rel (relevance to the query), sourceImportance (manually curated score of source importance - high value, high importance), sourceImportanceRank (reverse of sourceImportance), sourceAlexaGlobalRank (global rank of the news source), sourceAlexaCountryRank (country rank of the news source), socialScore (total shares on social media), facebookShares (shares on Facebook only)
@param sortByAsc: should the results be sorted in ascending order (True) or descending (False)
@param returnInfo: what details should be included in the returned information
"""
assert page >= 1
assert count <= 100
params = {
"action": "getArticlesForTopicPage",
"resultType": "articles",
"dataType": self.topicPage["dataType"],
"articlesCount": count,
"articlesSortBy": sortBy,
"articlesSortByAsc": sortByAsc,
"page": page,
"topicPage": json.dumps(self.topicPage)
}
params.update(returnInfo.getParams("articles"))
return self.eventRegistry.jsonRequest("/json/article", params) | return a list of articles that match the topic page
@param page: which page of the results to return (default: 1)
@param count: number of articles to return (default: 100)
@param sortBy: how are articles sorted. Options: id (internal id), date (publishing date), cosSim (closeness to the event centroid), rel (relevance to the query), sourceImportance (manually curated score of source importance - high value, high importance), sourceImportanceRank (reverse of sourceImportance), sourceAlexaGlobalRank (global rank of the news source), sourceAlexaCountryRank (country rank of the news source), socialScore (total shares on social media), facebookShares (shares on Facebook only)
@param sortByAsc: should the results be sorted in ascending order (True) or descending (False)
@param returnInfo: what details should be included in the returned information | Below is the instruction that describes the task:
### Input:
return a list of articles that match the topic page
@param page: which page of the results to return (default: 1)
@param count: number of articles to return (default: 100)
@param sortBy: how are articles sorted. Options: id (internal id), date (publishing date), cosSim (closeness to the event centroid), rel (relevance to the query), sourceImportance (manually curated score of source importance - high value, high importance), sourceImportanceRank (reverse of sourceImportance), sourceAlexaGlobalRank (global rank of the news source), sourceAlexaCountryRank (country rank of the news source), socialScore (total shares on social media), facebookShares (shares on Facebook only)
@param sortByAsc: should the results be sorted in ascending order (True) or descending (False)
@param returnInfo: what details should be included in the returned information
### Response:
def getArticles(self,
page=1,
count=100,
sortBy = "rel",
sortByAsc = False,
returnInfo=ReturnInfo()):
"""
return a list of articles that match the topic page
@param page: which page of the results to return (default: 1)
@param count: number of articles to return (default: 100)
@param sortBy: how are articles sorted. Options: id (internal id), date (publishing date), cosSim (closeness to the event centroid), rel (relevance to the query), sourceImportance (manually curated score of source importance - high value, high importance), sourceImportanceRank (reverse of sourceImportance), sourceAlexaGlobalRank (global rank of the news source), sourceAlexaCountryRank (country rank of the news source), socialScore (total shares on social media), facebookShares (shares on Facebook only)
@param sortByAsc: should the results be sorted in ascending order (True) or descending (False)
@param returnInfo: what details should be included in the returned information
"""
assert page >= 1
assert count <= 100
params = {
"action": "getArticlesForTopicPage",
"resultType": "articles",
"dataType": self.topicPage["dataType"],
"articlesCount": count,
"articlesSortBy": sortBy,
"articlesSortByAsc": sortByAsc,
"page": page,
"topicPage": json.dumps(self.topicPage)
}
params.update(returnInfo.getParams("articles"))
return self.eventRegistry.jsonRequest("/json/article", params) |
def update_pareto_optimal_tuples(self, new_pareto_tuple):
"""
# this function should be optimized
Parameters
----------
new_pareto_tuple: LabelTimeSimple
Returns
-------
added: bool
whether new_pareto_tuple was added to the set of pareto-optimal tuples
"""
if new_pareto_tuple.duration() > self._walk_to_target_duration:
direct_walk_label = self._label_class.direct_walk_label(new_pareto_tuple.departure_time,
self._walk_to_target_duration)
if not direct_walk_label.dominates(new_pareto_tuple):
raise
direct_walk_label = self._label_class.direct_walk_label(new_pareto_tuple.departure_time, self._walk_to_target_duration)
if direct_walk_label.dominates(new_pareto_tuple):
return False
if self._new_paretotuple_is_dominated_by_old_tuples(new_pareto_tuple):
return False
else:
self._remove_old_tuples_dominated_by_new_and_insert_new_paretotuple(new_pareto_tuple)
return True | # this function should be optimized
Parameters
----------
new_pareto_tuple: LabelTimeSimple
Returns
-------
added: bool
whether new_pareto_tuple was added to the set of pareto-optimal tuples | Below is the instruction that describes the task:
### Input:
# this function should be optimized
Parameters
----------
new_pareto_tuple: LabelTimeSimple
Returns
-------
added: bool
whether new_pareto_tuple was added to the set of pareto-optimal tuples
### Response:
def update_pareto_optimal_tuples(self, new_pareto_tuple):
"""
# this function should be optimized
Parameters
----------
new_pareto_tuple: LabelTimeSimple
Returns
-------
added: bool
whether new_pareto_tuple was added to the set of pareto-optimal tuples
"""
if new_pareto_tuple.duration() > self._walk_to_target_duration:
direct_walk_label = self._label_class.direct_walk_label(new_pareto_tuple.departure_time,
self._walk_to_target_duration)
if not direct_walk_label.dominates(new_pareto_tuple):
raise
direct_walk_label = self._label_class.direct_walk_label(new_pareto_tuple.departure_time, self._walk_to_target_duration)
if direct_walk_label.dominates(new_pareto_tuple):
return False
if self._new_paretotuple_is_dominated_by_old_tuples(new_pareto_tuple):
return False
else:
self._remove_old_tuples_dominated_by_new_and_insert_new_paretotuple(new_pareto_tuple)
return True |
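Pareto updates hinge on a dominance test: one label dominates another when it is at least as good in every dimension. A minimal two-dimensional sketch of that logic (the (departure_time, arrival_time) tuple layout and the comparison convention are assumptions for illustration):

```python
def dominates(a, b):
    """a, b are (departure_time, arrival_time) pairs; a dominates b when it
    departs no earlier and arrives no later (convention assumed here)."""
    return a[0] >= b[0] and a[1] <= b[1]

def insert_pareto(labels, new):
    """Insert `new`, keeping only pareto-optimal labels; return (labels, added)."""
    if any(dominates(old, new) for old in labels):
        return labels, False
    # drop old labels that the new one dominates, then keep the new one
    labels = [old for old in labels if not dominates(new, old)]
    labels.append(new)
    return labels, True

labels, added = insert_pareto([(10, 30)], (12, 25))
```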
def get_changed(self, p1, p2):
"""
Return the loci that are in clusters that have changed between
partitions p1 and p2
"""
if p1 is None or p2 is None:
return list(range(len(self.insts)))
return set(flatten_list(set(p1) - set(p2))) | Return the loci that are in clusters that have changed between
partitions p1 and p2 | Below is the instruction that describes the task:
### Input:
Return the loci that are in clusters that have changed between
partitions p1 and p2
### Response:
def get_changed(self, p1, p2):
"""
Return the loci that are in clusters that have changed between
partitions p1 and p2
"""
if p1 is None or p2 is None:
return list(range(len(self.insts)))
return set(flatten_list(set(p1) - set(p2))) |
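get_changed depends on a flatten_list helper that is not shown here. A hypothetical version of it, with a worked comparison of two partitions:

```python
def flatten_list(list_of_lists):
    """Hypothetical stand-in for the flatten_list helper used above."""
    return [item for sub in list_of_lists for item in sub]

def changed_loci(p1, p2):
    """Loci belonging to clusters of p1 that are absent from p2."""
    return set(flatten_list(set(p1) - set(p2)))

p1 = [(0, 1), (2, 3)]     # partition: clusters are tuples of locus indices
p2 = [(0, 1), (2,), (3,)] # cluster (2, 3) has been split
result = changed_loci(p1, p2)
```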
def get_parsed_context(context_arg):
"""Parse input context string and returns context as dictionary."""
if not context_arg:
logger.debug("pipeline invoked without context arg set. For "
"this json parser you're looking for something "
"like: "
"pypyr pipelinename '{\"key1\":\"value1\","
"\"key2\":\"value2\"}'")
return None
logger.debug("starting")
# deserialize the input context string into json
return json.loads(context_arg) | Parse input context string and return context as dictionary. | Below is the instruction that describes the task:
### Input:
Parse input context string and return context as dictionary.
### Response:
def get_parsed_context(context_arg):
"""Parse input context string and returns context as dictionary."""
if not context_arg:
logger.debug("pipeline invoked without context arg set. For "
"this json parser you're looking for something "
"like: "
"pypyr pipelinename '{\"key1\":\"value1\","
"\"key2\":\"value2\"}'")
return None
logger.debug("starting")
# deserialize the input context string into json
return json.loads(context_arg) |
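The parser simply deserializes the CLI argument with json.loads. For example:

```python
import json

def parse_context(context_arg):
    """Deserialize the CLI context string, as in get_parsed_context above."""
    if not context_arg:
        return None  # no context supplied on the command line
    return json.loads(context_arg)

ctx = parse_context('{"key1": "value1", "key2": "value2"}')
```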
def do_rmdep(self, args):
"""Removes dependent variables currently set for plotting/tabulating etc."""
for arg in args.split():
if arg in self.curargs["dependents"]:
self.curargs["dependents"].remove(arg)
if arg in self.curargs["plottypes"]:
del self.curargs["plottypes"][arg]
if arg in self.curargs["twinplots"]:
del self.curargs["twinplots"][arg]
if arg in self.curargs["colors"]:
del self.curargs["colors"][arg]
if arg in self.curargs["labels"]:
del self.curargs["labels"][arg]
if arg in self.curargs["markers"]:
del self.curargs["markers"][arg]
if arg in self.curargs["lines"]:
del self.curargs["lines"][arg] | Removes dependent variables currently set for plotting/tabulating etc. | Below is the the instruction that describes the task:
### Input:
Removes dependent variables currently set for plotting/tabulating etc.
### Response:
def do_rmdep(self, args):
"""Removes dependent variables currently set for plotting/tabulating etc."""
for arg in args.split():
if arg in self.curargs["dependents"]:
self.curargs["dependents"].remove(arg)
if arg in self.curargs["plottypes"]:
del self.curargs["plottypes"][arg]
if arg in self.curargs["twinplots"]:
del self.curargs["twinplots"][arg]
if arg in self.curargs["colors"]:
del self.curargs["colors"][arg]
if arg in self.curargs["labels"]:
del self.curargs["labels"][arg]
if arg in self.curargs["markers"]:
del self.curargs["markers"][arg]
if arg in self.curargs["lines"]:
del self.curargs["lines"][arg] |
def get_builds(self, job_name):
""" Retrieve all builds from a job"""
if self.blacklist_jobs and job_name in self.blacklist_jobs:
logger.warning("Not getting blacklisted job: %s", job_name)
return
payload = {'depth': self.detail_depth}
url_build = urijoin(self.base_url, "job", job_name, "api", "json")
response = self.fetch(url_build, payload=payload)
return response.text | Retrieve all builds from a job | Below is the the instruction that describes the task:
### Input:
Retrieve all builds from a job
### Response:
def get_builds(self, job_name):
""" Retrieve all builds from a job"""
if self.blacklist_jobs and job_name in self.blacklist_jobs:
logger.warning("Not getting blacklisted job: %s", job_name)
return
payload = {'depth': self.detail_depth}
url_build = urijoin(self.base_url, "job", job_name, "api", "json")
response = self.fetch(url_build, payload=payload)
return response.text |
def _cr_decode(self, msg):
"""CR: Custom values"""
if int(msg[4:6]) > 0:
index = int(msg[4:6])-1
return {'values': [self._cr_one_custom_value_decode(index, msg[6:12])]}
else:
part = 6
ret = []
for i in range(Max.SETTINGS.value):
ret.append(self._cr_one_custom_value_decode(i, msg[part:part+6]))
part += 6
return {'values': ret} | CR: Custom values | Below is the instruction that describes the task:
### Input:
CR: Custom values
### Response:
def _cr_decode(self, msg):
"""CR: Custom values"""
if int(msg[4:6]) > 0:
index = int(msg[4:6])-1
return {'values': [self._cr_one_custom_value_decode(index, msg[6:12])]}
else:
part = 6
ret = []
for i in range(Max.SETTINGS.value):
ret.append(self._cr_one_custom_value_decode(i, msg[part:part+6]))
part += 6
return {'values': ret} |
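_cr_decode walks the message in fixed six-character slices, one per custom value. A sketch of that fixed-width slicing in isolation:

```python
def split_fixed(payload, width=6):
    """Split a payload into fixed-width fields, as _cr_decode slices msg."""
    return [payload[i:i + width] for i in range(0, len(payload), width)]

fields = split_fixed("000123000456")
```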
def resolve(self, ref, document=None):
"""Resolve a ref within the schema.
This is just a convenience method, since RefResolver returns both a URI
and the resolved value, and we usually just need the resolved value.
:param str ref: URI to resolve.
:param dict document: Optional schema in which to resolve the URI.
:returns: the portion of the schema that the URI references.
:see: :meth:`SchemaRefResolver.resolve`
"""
_, resolved = self.resolver.resolve(ref, document=document)
return resolved | Resolve a ref within the schema.
This is just a convenience method, since RefResolver returns both a URI
and the resolved value, and we usually just need the resolved value.
:param str ref: URI to resolve.
:param dict document: Optional schema in which to resolve the URI.
:returns: the portion of the schema that the URI references.
:see: :meth:`SchemaRefResolver.resolve` | Below is the instruction that describes the task:
### Input:
Resolve a ref within the schema.
This is just a convenience method, since RefResolver returns both a URI
and the resolved value, and we usually just need the resolved value.
:param str ref: URI to resolve.
:param dict document: Optional schema in which to resolve the URI.
:returns: the portion of the schema that the URI references.
:see: :meth:`SchemaRefResolver.resolve`
### Response:
def resolve(self, ref, document=None):
"""Resolve a ref within the schema.
This is just a convenience method, since RefResolver returns both a URI
and the resolved value, and we usually just need the resolved value.
:param str ref: URI to resolve.
:param dict document: Optional schema in which to resolve the URI.
:returns: the portion of the schema that the URI references.
:see: :meth:`SchemaRefResolver.resolve`
"""
_, resolved = self.resolver.resolve(ref, document=document)
return resolved |
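RefResolver.resolve returns a (uri, value) pair, and the convenience method above keeps only the value. A dependency-free sketch of that resolution for plain '#/...' fragment pointers (pointer escaping such as ~0/~1 is ignored here):

```python
def resolve_pointer(document, ref):
    """Resolve a '#/a/b'-style reference; return (uri, value) like RefResolver."""
    target = document
    for part in ref.lstrip("#/").split("/"):
        if part:
            target = target[part]
    return ref, target

schema = {"definitions": {"name": {"type": "string"}}}
_, resolved = resolve_pointer(schema, "#/definitions/name")
```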
def attributesGTF(inGTF):
"""
List the type of attributes in the attribute section of a GTF file
:param inGTF: GTF dataframe to be analysed
:returns: a list of attributes present in the attribute section
"""
df=pd.DataFrame(inGTF['attribute'].str.split(";").tolist())
desc=[]
for i in df.columns.tolist():
val=df[[i]].dropna()
val=pd.DataFrame(val[i].str.split(' "').tolist())[0]
val=list(set(val))
for v in val:
if len(v) > 0:
l=v.split(" ")
if len(l)>1:
l=l[1]
else:
l=l[0]
desc.append(l)
desc=list(set(desc))
finaldesc=[]
for d in desc:
if len(d) > 0:
finaldesc.append(d)
return finaldesc | List the type of attributes in the attribute section of a GTF file
:param inGTF: GTF dataframe to be analysed
:returns: a list of attributes present in the attribute section | Below is the instruction that describes the task:
### Input:
List the type of attributes in the attribute section of a GTF file
:param inGTF: GTF dataframe to be analysed
:returns: a list of attributes present in the attribute section
### Response:
def attributesGTF(inGTF):
"""
List the type of attributes in the attribute section of a GTF file
:param inGTF: GTF dataframe to be analysed
:returns: a list of attributes present in the attribute section
"""
df=pd.DataFrame(inGTF['attribute'].str.split(";").tolist())
desc=[]
for i in df.columns.tolist():
val=df[[i]].dropna()
val=pd.DataFrame(val[i].str.split(' "').tolist())[0]
val=list(set(val))
for v in val:
if len(v) > 0:
l=v.split(" ")
if len(l)>1:
l=l[1]
else:
l=l[0]
desc.append(l)
desc=list(set(desc))
finaldesc=[]
for d in desc:
if len(d) > 0:
finaldesc.append(d)
return finaldesc |
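The pandas pipeline above splits each attribute string on ';' and then takes the leading token of each chunk. A pure-Python sketch of the same key extraction on a single GTF attribute field:

```python
def attribute_keys(attribute_field):
    """Return distinct attribute names from one GTF attribute string."""
    keys = []
    for chunk in attribute_field.split(";"):
        chunk = chunk.strip()
        if chunk:
            keys.append(chunk.split(" ")[0])  # name precedes the quoted value
    return list(dict.fromkeys(keys))  # dedupe, preserving first-seen order

keys = attribute_keys('gene_id "ENSG1"; gene_name "TP53"; gene_id "ENSG1";')
```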
def status(backend):
'''print the status for all or one of the backends.
'''
print('[backend status]')
settings = read_client_secrets()
print('There are %s clients found in secrets.' %len(settings))
if 'SREGISTRY_CLIENT' in settings:
print('active: %s' %settings['SREGISTRY_CLIENT'])
update_secrets(settings)
else:
print('There is no active client.') | print the status for all or one of the backends. | Below is the instruction that describes the task:
### Input:
print the status for all or one of the backends.
### Response:
def status(backend):
'''print the status for all or one of the backends.
'''
print('[backend status]')
settings = read_client_secrets()
print('There are %s clients found in secrets.' %len(settings))
if 'SREGISTRY_CLIENT' in settings:
print('active: %s' %settings['SREGISTRY_CLIENT'])
update_secrets(settings)
else:
print('There is no active client.') |
def attachment_upload(instance, filename):
"""Stores the attachment in a "per module/appname/primary key" folder"""
return 'attachments/{app}_{model}/{pk}/{filename}'.format(
app=instance.content_object._meta.app_label,
model=instance.content_object._meta.object_name.lower(),
pk=instance.content_object.pk,
filename=filename,
) | Stores the attachment in a "per module/appname/primary key" folder | Below is the instruction that describes the task:
### Input:
Stores the attachment in a "per module/appname/primary key" folder
### Response:
def attachment_upload(instance, filename):
"""Stores the attachment in a "per module/appname/primary key" folder"""
return 'attachments/{app}_{model}/{pk}/{filename}'.format(
app=instance.content_object._meta.app_label,
model=instance.content_object._meta.object_name.lower(),
pk=instance.content_object.pk,
filename=filename,
) |
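The upload path is plain string formatting over the related object's metadata. Simulated here without Django (the app, model, and filename values are invented):

```python
def attachment_path(app, model, pk, filename):
    """Build the same 'per app/model/pk' path as attachment_upload above."""
    return 'attachments/{app}_{model}/{pk}/{filename}'.format(
        app=app, model=model.lower(), pk=pk, filename=filename)

path = attachment_path("blog", "Entry", 7, "notes.txt")
```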
def assign_private_ip_addresses(network_interface_name=None, network_interface_id=None,
private_ip_addresses=None, secondary_private_ip_address_count=None,
allow_reassignment=False, region=None, key=None,
keyid=None, profile=None):
'''
Assigns one or more secondary private IP addresses to a network interface.
network_interface_id
(string) - ID of the network interface to associate the IP with (exclusive with 'network_interface_name')
network_interface_name
(string) - Name of the network interface to associate the IP with (exclusive with 'network_interface_id')
private_ip_addresses
(list) - Assigns the specified IP addresses as secondary IP addresses to the network interface (exclusive with 'secondary_private_ip_address_count')
secondary_private_ip_address_count
(int) - The number of secondary IP addresses to assign to the network interface. (exclusive with 'private_ip_addresses')
allow_reassociation
(bool) – Allow a currently associated EIP to be re-associated with the new instance or interface.
returns
(bool) - True on success, False on failure.
CLI Example:
.. code-block:: bash
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni private_ip_addresses=private_ip
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni secondary_private_ip_address_count=2
.. versionadded:: 2017.7.0
'''
if not salt.utils.data.exactly_one((network_interface_name,
network_interface_id)):
raise SaltInvocationError("Exactly one of 'network_interface_name', "
"'network_interface_id' must be provided")
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
if network_interface_name:
try:
network_interface_id = get_network_interface_id(
network_interface_name, region=region, key=key, keyid=keyid,
profile=profile)
except boto.exception.BotoServerError as e:
log.error(e)
return False
if not network_interface_id:
log.error("Given network_interface_name '%s' cannot be mapped to "
"an network_interface_id", network_interface_name)
return False
try:
return conn.assign_private_ip_addresses(network_interface_id=network_interface_id,
private_ip_addresses=private_ip_addresses,
secondary_private_ip_address_count=secondary_private_ip_address_count,
allow_reassignment=allow_reassignment)
except boto.exception.BotoServerError as e:
log.error(e)
return False | Assigns one or more secondary private IP addresses to a network interface.
network_interface_id
(string) - ID of the network interface to associate the IP with (exclusive with 'network_interface_name')
network_interface_name
(string) - Name of the network interface to associate the IP with (exclusive with 'network_interface_id')
private_ip_addresses
(list) - Assigns the specified IP addresses as secondary IP addresses to the network interface (exclusive with 'secondary_private_ip_address_count')
secondary_private_ip_address_count
(int) - The number of secondary IP addresses to assign to the network interface. (exclusive with 'private_ip_addresses')
allow_reassociation
(bool) – Allow a currently associated EIP to be re-associated with the new instance or interface.
returns
(bool) - True on success, False on failure.
CLI Example:
.. code-block:: bash
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni private_ip_addresses=private_ip
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni secondary_private_ip_address_count=2
.. versionadded:: 2017.7.0 | Below is the instruction that describes the task:
### Input:
Assigns one or more secondary private IP addresses to a network interface.
network_interface_id
(string) - ID of the network interface to associate the IP with (exclusive with 'network_interface_name')
network_interface_name
(string) - Name of the network interface to associate the IP with (exclusive with 'network_interface_id')
private_ip_addresses
(list) - Assigns the specified IP addresses as secondary IP addresses to the network interface (exclusive with 'secondary_private_ip_address_count')
secondary_private_ip_address_count
(int) - The number of secondary IP addresses to assign to the network interface. (exclusive with 'private_ip_addresses')
allow_reassociation
(bool) – Allow a currently associated EIP to be re-associated with the new instance or interface.
returns
(bool) - True on success, False on failure.
CLI Example:
.. code-block:: bash
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni private_ip_addresses=private_ip
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni secondary_private_ip_address_count=2
.. versionadded:: 2017.7.0
### Response:
def assign_private_ip_addresses(network_interface_name=None, network_interface_id=None,
private_ip_addresses=None, secondary_private_ip_address_count=None,
allow_reassignment=False, region=None, key=None,
keyid=None, profile=None):
'''
Assigns one or more secondary private IP addresses to a network interface.
network_interface_id
(string) - ID of the network interface to associate the IP with (exclusive with 'network_interface_name')
network_interface_name
(string) - Name of the network interface to associate the IP with (exclusive with 'network_interface_id')
private_ip_addresses
(list) - Assigns the specified IP addresses as secondary IP addresses to the network interface (exclusive with 'secondary_private_ip_address_count')
secondary_private_ip_address_count
(int) - The number of secondary IP addresses to assign to the network interface. (exclusive with 'private_ip_addresses')
allow_reassociation
(bool) – Allow a currently associated EIP to be re-associated with the new instance or interface.
returns
(bool) - True on success, False on failure.
CLI Example:
.. code-block:: bash
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni private_ip_addresses=private_ip
salt myminion boto_ec2.assign_private_ip_addresses network_interface_name=my_eni secondary_private_ip_address_count=2
.. versionadded:: 2017.7.0
'''
if not salt.utils.data.exactly_one((network_interface_name,
network_interface_id)):
raise SaltInvocationError("Exactly one of 'network_interface_name', "
"'network_interface_id' must be provided")
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
if network_interface_name:
try:
network_interface_id = get_network_interface_id(
network_interface_name, region=region, key=key, keyid=keyid,
profile=profile)
except boto.exception.BotoServerError as e:
log.error(e)
return False
if not network_interface_id:
log.error("Given network_interface_name '%s' cannot be mapped to "
"an network_interface_id", network_interface_name)
return False
try:
return conn.assign_private_ip_addresses(network_interface_id=network_interface_id,
private_ip_addresses=private_ip_addresses,
secondary_private_ip_address_count=secondary_private_ip_address_count,
allow_reassignment=allow_reassignment)
except boto.exception.BotoServerError as e:
log.error(e)
return False |
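The guard at the top of `assign_private_ip_addresses` delegates to `salt.utils.data.exactly_one` to enforce that exactly one of the mutually exclusive arguments was supplied. A minimal standalone sketch of that check (a hypothetical reimplementation, not Salt's actual code):

```python
def exactly_one(items):
    # True when exactly one element of `items` is truthy -- the
    # mutual-exclusion rule applied above to network_interface_name /
    # network_interface_id. (Hypothetical reimplementation.)
    return sum(1 for item in items if item) == 1
```

With this, `exactly_one(('my_eni', None))` passes, while supplying both names or neither fails the check and triggers the `SaltInvocationError`.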
def get_base_wrappers(method='get', template_name='', predicates=(), wrappers=()):
""" basic View Wrappers used by view_config.
"""
wrappers += (preserve_view(MethodPredicate(method), *predicates),)
if template_name:
wrappers += (render_template(template_name),)
return wrappers | basic View Wrappers used by view_config. | Below is the instruction that describes the task:
### Input:
basic View Wrappers used by view_config.
### Response:
def get_base_wrappers(method='get', template_name='', predicates=(), wrappers=()):
""" basic View Wrappers used by view_config.
"""
wrappers += (preserve_view(MethodPredicate(method), *predicates),)
if template_name:
wrappers += (render_template(template_name),)
return wrappers |
def set_item_class_name(cls_obj):
"""
Return the first part of the class name of this custom generator.
This will be used for the class name of the items produced by this
generator.
Examples:
FoobarGenerator -> Foobar
QuuxGenerator -> Quux
"""
if '__tohu_items_name__' in cls_obj.__dict__:
logger.debug(f"Using item class name '{cls_obj.__tohu_items_name__}' (derived from attribute '__tohu_items_name__')")
else:
m = re.match('^(.*)Generator$', cls_obj.__name__)
if m is not None:
cls_obj.__tohu_items_name__ = m.group(1)
logger.debug(f"Using item class name '{cls_obj.__tohu_items_name__}' (derived from custom generator name)")
else:
raise ValueError("Cannot derive class name for items to be produced by custom generator. "
"Please set '__tohu_items_name__' at the top of the custom generator's "
"definition or change its name so that it ends in '...Generator'") | Return the first part of the class name of this custom generator.
This will be used for the class name of the items produced by this
generator.
Examples:
FoobarGenerator -> Foobar
QuuxGenerator -> Quux | Below is the instruction that describes the task:
### Input:
Return the first part of the class name of this custom generator.
This will be used for the class name of the items produced by this
generator.
Examples:
FoobarGenerator -> Foobar
QuuxGenerator -> Quux
### Response:
def set_item_class_name(cls_obj):
"""
Return the first part of the class name of this custom generator.
This will be used for the class name of the items produced by this
generator.
Examples:
FoobarGenerator -> Foobar
QuuxGenerator -> Quux
"""
if '__tohu_items_name__' in cls_obj.__dict__:
logger.debug(f"Using item class name '{cls_obj.__tohu_items_name__}' (derived from attribute '__tohu_items_name__')")
else:
m = re.match('^(.*)Generator$', cls_obj.__name__)
if m is not None:
cls_obj.__tohu_items_name__ = m.group(1)
logger.debug(f"Using item class name '{cls_obj.__tohu_items_name__}' (derived from custom generator name)")
else:
raise ValueError("Cannot derive class name for items to be produced by custom generator. "
"Please set '__tohu_items_name__' at the top of the custom generator's "
"definition or change its name so that it ends in '...Generator'") |
def json_2_nic(json_obj):
"""
transform JSON obj coming from Ariane to ariane_clip3 object
:param json_obj: the JSON obj coming from Ariane
:return: ariane_clip3 NIC object
"""
LOGGER.debug("NIC.json_2_nic")
return NIC(nic_id=json_obj['nicID'],
mac_address=json_obj['nicMacAddress'],
name=json_obj['nicName'],
speed=json_obj['nicSpeed'],
duplex=json_obj['nicDuplex'],
mtu=json_obj['nicMtu'],
nic_osi_id=json_obj['nicOSInstanceID'],
nic_ipa_id=json_obj['nicIPAddressID']) | transform JSON obj coming from Ariane to ariane_clip3 object
:param json_obj: the JSON obj coming from Ariane
:return: ariane_clip3 NIC object | Below is the instruction that describes the task:
### Input:
transform JSON obj coming from Ariane to ariane_clip3 object
:param json_obj: the JSON obj coming from Ariane
:return: ariane_clip3 NIC object
### Response:
def json_2_nic(json_obj):
"""
transform JSON obj coming from Ariane to ariane_clip3 object
:param json_obj: the JSON obj coming from Ariane
:return: ariane_clip3 NIC object
"""
LOGGER.debug("NIC.json_2_nic")
return NIC(nic_id=json_obj['nicID'],
mac_address=json_obj['nicMacAddress'],
name=json_obj['nicName'],
speed=json_obj['nicSpeed'],
duplex=json_obj['nicDuplex'],
mtu=json_obj['nicMtu'],
nic_osi_id=json_obj['nicOSInstanceID'],
nic_ipa_id=json_obj['nicIPAddressID']) |
def create_ospf_area_with_message_digest_auth():
"""
If you require message-digest authentication for your OSPFArea, you must
create an OSPF key chain configuration.
"""
OSPFKeyChain.create(name='secure-keychain',
key_chain_entry=[{'key': 'fookey',
'key_id': 10,
'send_key': True}])
"""
An OSPF interface is applied to a physical interface at the engines routing
node level. This configuration is done by an OSPFInterfaceSetting element. To
apply the key-chain to this configuration, add the authentication_type of
message-digest and reference to the key-chain
"""
key_chain = OSPFKeyChain('secure-keychain') # obtain resource
OSPFInterfaceSetting.create(name='authenticated-ospf',
authentication_type='message_digest',
key_chain_ref=key_chain.href)
"""
Create the OSPFArea and assign the above created OSPFInterfaceSetting.
In this example, use the default system OSPFInterfaceSetting called
'Default OSPFv2 Interface Settings'
"""
for profile in describe_ospfv2_interface_settings():
if profile.name.startswith('Default OSPF'): # Use the system default
interface_profile = profile.href
OSPFArea.create(name='area0',
interface_settings_ref=interface_profile,
area_id=0) | If you require message-digest authentication for your OSPFArea, you must
create an OSPF key chain configuration. | Below is the instruction that describes the task:
### Input:
If you require message-digest authentication for your OSPFArea, you must
create an OSPF key chain configuration.
### Response:
def create_ospf_area_with_message_digest_auth():
"""
If you require message-digest authentication for your OSPFArea, you must
create an OSPF key chain configuration.
"""
OSPFKeyChain.create(name='secure-keychain',
key_chain_entry=[{'key': 'fookey',
'key_id': 10,
'send_key': True}])
"""
An OSPF interface is applied to a physical interface at the engines routing
node level. This configuration is done by an OSPFInterfaceSetting element. To
apply the key-chain to this configuration, add the authentication_type of
message-digest and reference to the key-chain
"""
key_chain = OSPFKeyChain('secure-keychain') # obtain resource
OSPFInterfaceSetting.create(name='authenticated-ospf',
authentication_type='message_digest',
key_chain_ref=key_chain.href)
"""
Create the OSPFArea and assign the above created OSPFInterfaceSetting.
In this example, use the default system OSPFInterfaceSetting called
'Default OSPFv2 Interface Settings'
"""
for profile in describe_ospfv2_interface_settings():
if profile.name.startswith('Default OSPF'): # Use the system default
interface_profile = profile.href
OSPFArea.create(name='area0',
interface_settings_ref=interface_profile,
area_id=0) |
def play_Bars(self, bars, channels, bpm=120):
"""Play several bars (a list of Bar objects) at the same time.
A list of channels should also be provided. The tempo can be changed
by providing one or more of the NoteContainers with a bpm argument.
"""
self.notify_listeners(self.MSG_PLAY_BARS, {'bars': bars,
'channels': channels, 'bpm': bpm})
qn_length = 60.0 / bpm # length of a quarter note
tick = 0.0 # place in beat from 0.0 to bar.length
cur = [0] * len(bars) # keeps the index of the NoteContainer under
# investigation in each of the bars
playing = [] # The NoteContainers being played.
while tick < bars[0].length:
# Prepare and play a list of NoteContainers that are ready for it.
# The list `playing_new` holds both the duration and the
# NoteContainer.
playing_new = []
for (n, x) in enumerate(cur):
(start_tick, note_length, nc) = bars[n][x]
if start_tick <= tick:
self.play_NoteContainer(nc, channels[n])
playing_new.append([note_length, n])
playing.append([note_length, nc, channels[n], n])
# Change the length of a quarter note if the NoteContainer
# has a bpm attribute
if hasattr(nc, 'bpm'):
bpm = nc.bpm
qn_length = 60.0 / bpm
# Sort the list and sleep for the shortest duration
if len(playing_new) != 0:
playing_new.sort()
shortest = playing_new[-1][0]
ms = qn_length * (4.0 / shortest)
self.sleep(ms)
self.notify_listeners(self.MSG_SLEEP, {'s': ms})
else:
# If somehow, playing_new doesn't contain any notes (something
# that shouldn't happen when the bar was filled properly), we
# make sure that at least the notes that are still playing get
# handled correctly.
if len(playing) != 0:
playing.sort()
shortest = playing[-1][0]
ms = qn_length * (4.0 / shortest)
self.sleep(ms)
self.notify_listeners(self.MSG_SLEEP, {'s': ms})
else:
# warning: this could lead to some strange behaviour. OTOH.
# Leaving gaps is not the way Bar works. should we do an
# integrity check on bars first?
return {}
# Add shortest interval to tick
tick += 1.0 / shortest
# This final piece adjusts the duration in `playing` and checks if a
# NoteContainer should be stopped.
new_playing = []
for (length, nc, chan, n) in playing:
duration = 1.0 / length - 1.0 / shortest
if duration >= 0.00001:
new_playing.append([1.0 / duration, nc, chan, n])
else:
self.stop_NoteContainer(nc, chan)
if cur[n] < len(bars[n]) - 1:
cur[n] += 1
playing = new_playing
for p in playing:
self.stop_NoteContainer(p[1], p[2])
playing.remove(p)
return {'bpm': bpm} | Play several bars (a list of Bar objects) at the same time.
A list of channels should also be provided. The tempo can be changed
by providing one or more of the NoteContainers with a bpm argument. | Below is the instruction that describes the task:
### Input:
Play several bars (a list of Bar objects) at the same time.
A list of channels should also be provided. The tempo can be changed
by providing one or more of the NoteContainers with a bpm argument.
### Response:
def play_Bars(self, bars, channels, bpm=120):
"""Play several bars (a list of Bar objects) at the same time.
A list of channels should also be provided. The tempo can be changed
by providing one or more of the NoteContainers with a bpm argument.
"""
self.notify_listeners(self.MSG_PLAY_BARS, {'bars': bars,
'channels': channels, 'bpm': bpm})
qn_length = 60.0 / bpm # length of a quarter note
tick = 0.0 # place in beat from 0.0 to bar.length
cur = [0] * len(bars) # keeps the index of the NoteContainer under
# investigation in each of the bars
playing = [] # The NoteContainers being played.
while tick < bars[0].length:
# Prepare and play a list of NoteContainers that are ready for it.
# The list `playing_new` holds both the duration and the
# NoteContainer.
playing_new = []
for (n, x) in enumerate(cur):
(start_tick, note_length, nc) = bars[n][x]
if start_tick <= tick:
self.play_NoteContainer(nc, channels[n])
playing_new.append([note_length, n])
playing.append([note_length, nc, channels[n], n])
# Change the length of a quarter note if the NoteContainer
# has a bpm attribute
if hasattr(nc, 'bpm'):
bpm = nc.bpm
qn_length = 60.0 / bpm
# Sort the list and sleep for the shortest duration
if len(playing_new) != 0:
playing_new.sort()
shortest = playing_new[-1][0]
ms = qn_length * (4.0 / shortest)
self.sleep(ms)
self.notify_listeners(self.MSG_SLEEP, {'s': ms})
else:
# If somehow, playing_new doesn't contain any notes (something
# that shouldn't happen when the bar was filled properly), we
# make sure that at least the notes that are still playing get
# handled correctly.
if len(playing) != 0:
playing.sort()
shortest = playing[-1][0]
ms = qn_length * (4.0 / shortest)
self.sleep(ms)
self.notify_listeners(self.MSG_SLEEP, {'s': ms})
else:
# warning: this could lead to some strange behaviour. OTOH.
# Leaving gaps is not the way Bar works. should we do an
# integrity check on bars first?
return {}
# Add shortest interval to tick
tick += 1.0 / shortest
# This final piece adjusts the duration in `playing` and checks if a
# NoteContainer should be stopped.
new_playing = []
for (length, nc, chan, n) in playing:
duration = 1.0 / length - 1.0 / shortest
if duration >= 0.00001:
new_playing.append([1.0 / duration, nc, chan, n])
else:
self.stop_NoteContainer(nc, chan)
if cur[n] < len(bars[n]) - 1:
cur[n] += 1
playing = new_playing
for p in playing:
self.stop_NoteContainer(p[1], p[2])
playing.remove(p)
return {'bpm': bpm} |
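The sleep lengths in `play_Bars` all come from one small computation: a quarter note lasts `60/bpm` seconds, and a note whose length value is `n` (4 = quarter, 8 = eighth) lasts `4/n` quarter notes. Isolated as a pure function for clarity:

```python
def note_sleep_seconds(bpm, note_length):
    # The `qn_length * (4.0 / shortest)` expression from play_Bars.
    qn_length = 60.0 / bpm  # seconds per quarter note
    return qn_length * (4.0 / note_length)
```

At 120 bpm a quarter note (length 4) sleeps 0.5 s and an eighth note (length 8) sleeps 0.25 s.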
def get_loss(self, logits: mx.sym.Symbol, labels: mx.sym.Symbol) -> mx.sym.Symbol:
"""
Returns loss and softmax output symbols given logits and integer-coded labels.
:param logits: Shape: (batch_size * target_seq_len, target_vocab_size).
:param labels: Shape: (batch_size * target_seq_len,).
:return: Loss symbol.
"""
raise NotImplementedError() | Returns loss and softmax output symbols given logits and integer-coded labels.
:param logits: Shape: (batch_size * target_seq_len, target_vocab_size).
:param labels: Shape: (batch_size * target_seq_len,).
:return: Loss symbol. | Below is the instruction that describes the task:
### Input:
Returns loss and softmax output symbols given logits and integer-coded labels.
:param logits: Shape: (batch_size * target_seq_len, target_vocab_size).
:param labels: Shape: (batch_size * target_seq_len,).
:return: Loss symbol.
### Response:
def get_loss(self, logits: mx.sym.Symbol, labels: mx.sym.Symbol) -> mx.sym.Symbol:
"""
Returns loss and softmax output symbols given logits and integer-coded labels.
:param logits: Shape: (batch_size * target_seq_len, target_vocab_size).
:param labels: Shape: (batch_size * target_seq_len,).
:return: Loss symbol.
"""
raise NotImplementedError() |
def quick_send(self, send, echo=None, loglevel=logging.INFO):
"""Quick and dirty send that ignores background tasks. Intended for internal use.
"""
shutit = self.shutit
shutit.log('Quick send: ' + send, level=loglevel)
res = self.sendline(ShutItSendSpec(self,
send=send,
check_exit=False,
echo=echo,
fail_on_empty_before=False,
record_command=False,
ignore_background=True))
if not res:
self.expect(self.default_expect) | Quick and dirty send that ignores background tasks. Intended for internal use. | Below is the instruction that describes the task:
### Input:
Quick and dirty send that ignores background tasks. Intended for internal use.
### Response:
def quick_send(self, send, echo=None, loglevel=logging.INFO):
"""Quick and dirty send that ignores background tasks. Intended for internal use.
"""
shutit = self.shutit
shutit.log('Quick send: ' + send, level=loglevel)
res = self.sendline(ShutItSendSpec(self,
send=send,
check_exit=False,
echo=echo,
fail_on_empty_before=False,
record_command=False,
ignore_background=True))
if not res:
self.expect(self.default_expect) |
def shutdown(self):
"""
clean shutdown of the nameserver and the config file (if written)
"""
if not self.pyro_ns is None:
self.pyro_ns.shutdown()
self.pyro_ns = None
if not self.conf_fn is None:
os.remove(self.conf_fn)
self.conf_fn = None | clean shutdown of the nameserver and the config file (if written) | Below is the instruction that describes the task:
### Input:
clean shutdown of the nameserver and the config file (if written)
### Response:
def shutdown(self):
"""
clean shutdown of the nameserver and the config file (if written)
"""
if not self.pyro_ns is None:
self.pyro_ns.shutdown()
self.pyro_ns = None
if not self.conf_fn is None:
os.remove(self.conf_fn)
self.conf_fn = None |
def CER(prediction, true_labels):
"""
Calculates the classification error rate for an N-class classification problem
Parameters:
prediction (numpy.ndarray): A 1D :py:class:`numpy.ndarray` containing your
prediction
true_labels (numpy.ndarray): A 1D :py:class:`numpy.ndarray`
containing the ground truth labels for the input array, organized in the
same order.
"""
errors = (prediction != true_labels).sum()
return float(errors)/len(prediction) | Calculates the classification error rate for an N-class classification problem
Parameters:
prediction (numpy.ndarray): A 1D :py:class:`numpy.ndarray` containing your
prediction
true_labels (numpy.ndarray): A 1D :py:class:`numpy.ndarray`
containing the ground truth labels for the input array, organized in the
same order. | Below is the instruction that describes the task:
### Input:
Calculates the classification error rate for an N-class classification problem
Parameters:
prediction (numpy.ndarray): A 1D :py:class:`numpy.ndarray` containing your
prediction
true_labels (numpy.ndarray): A 1D :py:class:`numpy.ndarray`
containing the ground truth labels for the input array, organized in the
same order.
### Response:
def CER(prediction, true_labels):
"""
Calculates the classification error rate for an N-class classification problem
Parameters:
prediction (numpy.ndarray): A 1D :py:class:`numpy.ndarray` containing your
prediction
true_labels (numpy.ndarray): A 1D :py:class:`numpy.ndarray`
containing the ground truth labels for the input array, organized in the
same order.
"""
errors = (prediction != true_labels).sum()
return float(errors)/len(prediction) |
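Since `CER` is just the mismatch fraction between two equal-length label arrays, its arithmetic is easy to check. Below is a pure-Python equivalent of the numpy expression (a sketch using plain lists instead of `numpy.ndarray`):

```python
def cer(prediction, true_labels):
    # Fraction of positions where prediction and ground truth disagree.
    errors = sum(p != t for p, t in zip(prediction, true_labels))
    return float(errors) / len(prediction)
```

One wrong label out of four gives an error rate of 0.25.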
def _as_published_topic(self):
"""This stream as a PublishedTopic if it is published otherwise None
"""
oop = self.get_operator_output_port()
if not hasattr(oop, 'export'):
return
export = oop.export
if export['type'] != 'properties':
return
seen_export_type = False
topic = None
for p in export['properties']:
if p['type'] != 'rstring':
continue
if p['name'] == '__spl_exportType':
if p['values'] == ['"topic"']:
seen_export_type = True
else:
return
if p['name'] == '__spl_topic':
topic = p['values'][0]
if seen_export_type and topic is not None:
schema = None
if hasattr(oop, 'tupleAttributes'):
ta_url = oop.tupleAttributes
ta_resp = self.rest_client.make_request(ta_url)
schema = streamsx.topology.schema.StreamSchema(ta_resp['splType'])
return PublishedTopic(topic[1:-1], schema)
return | This stream as a PublishedTopic if it is published otherwise None | Below is the instruction that describes the task:
### Input:
This stream as a PublishedTopic if it is published otherwise None
### Response:
def _as_published_topic(self):
"""This stream as a PublishedTopic if it is published otherwise None
"""
oop = self.get_operator_output_port()
if not hasattr(oop, 'export'):
return
export = oop.export
if export['type'] != 'properties':
return
seen_export_type = False
topic = None
for p in export['properties']:
if p['type'] != 'rstring':
continue
if p['name'] == '__spl_exportType':
if p['values'] == ['"topic"']:
seen_export_type = True
else:
return
if p['name'] == '__spl_topic':
topic = p['values'][0]
if seen_export_type and topic is not None:
schema = None
if hasattr(oop, 'tupleAttributes'):
ta_url = oop.tupleAttributes
ta_resp = self.rest_client.make_request(ta_url)
schema = streamsx.topology.schema.StreamSchema(ta_resp['splType'])
return PublishedTopic(topic[1:-1], schema)
return |
def insert(self, i, tag1, tag2, cmd="prevtag", x=None, y=None):
""" Inserts a new rule that updates words with tag1 to tag2,
given constraints x and y, e.g., Context.append("TO < NN", "VB")
"""
if " < " in tag1 and not x and not y:
tag1, x = tag1.split(" < "); cmd="prevtag"
if " > " in tag1 and not x and not y:
x, tag1 = tag1.split(" > "); cmd="nexttag"
lazylist.insert(self, i, [tag1, tag2, cmd, x or "", y or ""]) | Inserts a new rule that updates words with tag1 to tag2,
given constraints x and y, e.g., Context.append("TO < NN", "VB") | Below is the instruction that describes the task:
### Input:
Inserts a new rule that updates words with tag1 to tag2,
given constraints x and y, e.g., Context.append("TO < NN", "VB")
### Response:
def insert(self, i, tag1, tag2, cmd="prevtag", x=None, y=None):
""" Inserts a new rule that updates words with tag1 to tag2,
given constraints x and y, e.g., Context.append("TO < NN", "VB")
"""
if " < " in tag1 and not x and not y:
tag1, x = tag1.split(" < "); cmd="prevtag"
if " > " in tag1 and not x and not y:
x, tag1 = tag1.split(" > "); cmd="nexttag"
lazylist.insert(self, i, [tag1, tag2, cmd, x or "", y or ""]) |
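The `" < "` / `" > "` shorthand handling is the interesting part of `insert`; isolating it as a pure function makes the two directions explicit (a sketch, not the pattern library's own API):

```python
def parse_rule(tag1, x=None, y=None, cmd="prevtag"):
    # Mirrors the shorthand handling in insert(): "A < B" keeps A as the
    # rule tag with B as the prevtag constraint; "A > B" keeps B as the
    # rule tag with A as the nexttag constraint.
    if " < " in tag1 and not x and not y:
        tag1, x = tag1.split(" < ")
        cmd = "prevtag"
    if " > " in tag1 and not x and not y:
        x, tag1 = tag1.split(" > ")
        cmd = "nexttag"
    return tag1, cmd, x
```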
def random_offspring(self):
"Returns an offspring with the associated weight(s)"
function_set = self.function_set
function_selection = self._function_selection_ins
function_selection.density = self.population.density
function_selection.unfeasible_functions.clear()
for i in range(self._number_tries_feasible_ind):
if self._function_selection:
func_index = function_selection.tournament()
else:
func_index = function_selection.random_function()
func = function_set[func_index]
args = self.get_args(func)
if args is None:
continue
args = [self.population.population[x].position for x in args]
f = self._random_offspring(func, args)
if f is None:
function_selection.unfeasible_functions.add(func_index)
continue
function_selection[func_index] = f.fitness
return f
raise RuntimeError("Could not find a suitable random offpsring") | Returns an offspring with the associated weight(s) | Below is the the instruction that describes the task:
### Input:
Returns an offspring with the associated weight(s)
### Response:
def random_offspring(self):
"Returns an offspring with the associated weight(s)"
function_set = self.function_set
function_selection = self._function_selection_ins
function_selection.density = self.population.density
function_selection.unfeasible_functions.clear()
for i in range(self._number_tries_feasible_ind):
if self._function_selection:
func_index = function_selection.tournament()
else:
func_index = function_selection.random_function()
func = function_set[func_index]
args = self.get_args(func)
if args is None:
continue
args = [self.population.population[x].position for x in args]
f = self._random_offspring(func, args)
if f is None:
function_selection.unfeasible_functions.add(func_index)
continue
function_selection[func_index] = f.fitness
return f
raise RuntimeError("Could not find a suitable random offpsring") |
def static_dimensions(self):
"""
Return all constant dimensions.
"""
dimensions = []
for dim in self.kdims:
if len(set(self.dimension_values(dim.name))) == 1:
dimensions.append(dim)
return dimensions | Return all constant dimensions. | Below is the instruction that describes the task:
### Input:
Return all constant dimensions.
### Response:
def static_dimensions(self):
"""
Return all constant dimensions.
"""
dimensions = []
for dim in self.kdims:
if len(set(self.dimension_values(dim.name))) == 1:
dimensions.append(dim)
return dimensions |
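The constancy test above boils down to "the set of a dimension's sampled values has size 1". A standalone sketch over plain dicts (hypothetical, independent of the element class):

```python
def static_dimension_names(columns):
    # A dimension is static when every one of its sampled values is identical.
    return [name for name, values in columns.items()
            if len(set(values)) == 1]
```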
def securityHandler(self, value):
""" sets the security handler """
if isinstance(value, abstract.BaseSecurityHandler):
if isinstance(value, security.AGOLTokenSecurityHandler):
self._securityHandler = value
self._token = value.token
self._username = value.username
self._password = value._password
self._token_url = value.token_url
elif isinstance(value, security.OAuthSecurityHandler):
self._token = value.token
self._securityHandler = value
else:
pass | sets the security handler | Below is the instruction that describes the task:
### Input:
sets the security handler
### Response:
def securityHandler(self, value):
""" sets the security handler """
if isinstance(value, abstract.BaseSecurityHandler):
if isinstance(value, security.AGOLTokenSecurityHandler):
self._securityHandler = value
self._token = value.token
self._username = value.username
self._password = value._password
self._token_url = value.token_url
elif isinstance(value, security.OAuthSecurityHandler):
self._token = value.token
self._securityHandler = value
else:
pass |
def replace_between_tags(text, repl_, start_tag, end_tag=None):
r"""
Replaces text between sentinel lines in a block of text.
Args:
text (str):
repl_ (str):
start_tag (str):
end_tag (str): (default=None)
Returns:
str: new_text
CommandLine:
python -m utool.util_str --exec-replace_between_tags
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_str import * # NOQA
>>> text = ut.codeblock(
'''
class:
# <FOO>
bar
# </FOO>
baz
''')
>>> repl_ = 'spam'
>>> start_tag = '# <FOO>'
>>> end_tag = '# </FOO>'
>>> new_text = replace_between_tags(text, repl_, start_tag, end_tag)
>>> result = ('new_text =\n%s' % (str(new_text),))
>>> print(result)
new_text =
class:
# <FOO>
spam
# </FOO>
baz
"""
new_lines = []
editing = False
lines = text.split('\n')
for line in lines:
if not editing:
new_lines.append(line)
if line.strip().startswith(start_tag):
new_lines.append(repl_)
editing = True
if end_tag is not None and line.strip().startswith(end_tag):
editing = False
new_lines.append(line)
new_text = '\n'.join(new_lines)
return new_text | r"""
Replaces text between sentinel lines in a block of text.
Args:
text (str):
repl_ (str):
start_tag (str):
end_tag (str): (default=None)
Returns:
str: new_text
CommandLine:
python -m utool.util_str --exec-replace_between_tags
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_str import * # NOQA
>>> text = ut.codeblock(
'''
class:
# <FOO>
bar
# </FOO>
baz
''')
>>> repl_ = 'spam'
>>> start_tag = '# <FOO>'
>>> end_tag = '# </FOO>'
>>> new_text = replace_between_tags(text, repl_, start_tag, end_tag)
>>> result = ('new_text =\n%s' % (str(new_text),))
>>> print(result)
new_text =
class:
# <FOO>
spam
# </FOO>
baz | Below is the instruction that describes the task:
### Input:
r"""
Replaces text between sentinel lines in a block of text.
Args:
text (str):
repl_ (str):
start_tag (str):
end_tag (str): (default=None)
Returns:
str: new_text
CommandLine:
python -m utool.util_str --exec-replace_between_tags
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_str import * # NOQA
>>> text = ut.codeblock(
'''
class:
# <FOO>
bar
# </FOO>
baz
''')
>>> repl_ = 'spam'
>>> start_tag = '# <FOO>'
>>> end_tag = '# </FOO>'
>>> new_text = replace_between_tags(text, repl_, start_tag, end_tag)
>>> result = ('new_text =\n%s' % (str(new_text),))
>>> print(result)
new_text =
class:
# <FOO>
spam
# </FOO>
baz
### Response:
def replace_between_tags(text, repl_, start_tag, end_tag=None):
r"""
Replaces text between sentinel lines in a block of text.
Args:
text (str):
repl_ (str):
start_tag (str):
end_tag (str): (default=None)
Returns:
str: new_text
CommandLine:
python -m utool.util_str --exec-replace_between_tags
Example:
>>> # DISABLE_DOCTEST
>>> from utool.util_str import * # NOQA
>>> text = ut.codeblock(
'''
class:
# <FOO>
bar
# </FOO>
baz
''')
>>> repl_ = 'spam'
>>> start_tag = '# <FOO>'
>>> end_tag = '# </FOO>'
>>> new_text = replace_between_tags(text, repl_, start_tag, end_tag)
>>> result = ('new_text =\n%s' % (str(new_text),))
>>> print(result)
new_text =
class:
# <FOO>
spam
# </FOO>
baz
"""
new_lines = []
editing = False
lines = text.split('\n')
for line in lines:
if not editing:
new_lines.append(line)
if line.strip().startswith(start_tag):
new_lines.append(repl_)
editing = True
if end_tag is not None and line.strip().startswith(end_tag):
editing = False
new_lines.append(line)
new_text = '\n'.join(new_lines)
return new_text |
def eval_condition(self, event):
"""
Evaluates the breakpoint condition, if any was set.
@type event: L{Event}
@param event: Debug event triggered by the breakpoint.
@rtype: bool
@return: C{True} to dispatch the event, C{False} otherwise.
"""
condition = self.get_condition()
if condition is True: # shortcut for unconditional breakpoints
return True
if callable(condition):
try:
return bool( condition(event) )
except Exception:
msg = ("Breakpoint condition callback %r"
" raised an exception: %s")
msg = msg % (condition, traceback.format_exc())
warnings.warn(msg, BreakpointCallbackWarning)
return False
return bool( condition ) | Evaluates the breakpoint condition, if any was set.
@type event: L{Event}
@param event: Debug event triggered by the breakpoint.
@rtype: bool
@return: C{True} to dispatch the event, C{False} otherwise. | Below is the instruction that describes the task:
### Input:
Evaluates the breakpoint condition, if any was set.
@type event: L{Event}
@param event: Debug event triggered by the breakpoint.
@rtype: bool
@return: C{True} to dispatch the event, C{False} otherwise.
### Response:
def eval_condition(self, event):
"""
Evaluates the breakpoint condition, if any was set.
@type event: L{Event}
@param event: Debug event triggered by the breakpoint.
@rtype: bool
@return: C{True} to dispatch the event, C{False} otherwise.
"""
condition = self.get_condition()
if condition is True: # shortcut for unconditional breakpoints
return True
if callable(condition):
try:
return bool( condition(event) )
except Exception:
msg = ("Breakpoint condition callback %r"
" raised an exception: %s")
msg = msg % (condition, traceback.format_exc())
warnings.warn(msg, BreakpointCallbackWarning)
return False
return bool( condition ) |
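The dispatch logic in `eval_condition` (unconditional `True`, callable predicates evaluated defensively, everything else coerced to bool) can be sketched without winappdbg's warning machinery:

```python
def eval_condition(condition, event):
    # True -> unconditional breakpoint; callables are predicates whose
    # exceptions count as "don't dispatch"; other values coerce to bool.
    if condition is True:
        return True
    if callable(condition):
        try:
            return bool(condition(event))
        except Exception:
            return False  # a failing callback never dispatches
    return bool(condition)
```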
def client(self):
"""Returns client session object"""
if self._client is None:
self._client = get_session(self.user_agent)
return self._client | Returns client session object | Below is the instruction that describes the task:
### Input:
Returns client session object
### Response:
def client(self):
"""Returns client session object"""
if self._client is None:
self._client = get_session(self.user_agent)
return self._client |
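`client` is a classic lazy-initialization property: the session is created on first access and cached in `_client` afterwards. A self-contained sketch with `get_session` stubbed out (class and counter names are hypothetical):

```python
class Api:
    def __init__(self, user_agent):
        self.user_agent = user_agent
        self._client = None
        self.build_count = 0  # instrumentation for this sketch only

    @property
    def client(self):
        # Build the session once, then reuse it on every later access.
        if self._client is None:
            self.build_count += 1
            self._client = object()  # stands in for get_session(user_agent)
        return self._client
```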
def get_notification_result(self, stream_id):
"""
Get result for specified stream
The function returns: 'Success' or 'failure reason' or ('Unregistered', timestamp)
"""
with self._connection.get_response(stream_id) as response:
if response.status == 200:
return 'Success'
else:
raw_data = response.read().decode('utf-8')
data = json.loads(raw_data)
if response.status == 410:
return data['reason'], data['timestamp']
else:
return data['reason'] | Get result for specified stream
The function returns: 'Success' or 'failure reason' or ('Unregistered', timestamp) | Below is the instruction that describes the task:
### Input:
Get result for specified stream
The function returns: 'Success' or 'failure reason' or ('Unregistered', timestamp)
### Response:
def get_notification_result(self, stream_id):
"""
Get result for specified stream
The function returns: 'Success' or 'failure reason' or ('Unregistered', timestamp)
"""
with self._connection.get_response(stream_id) as response:
if response.status == 200:
return 'Success'
else:
raw_data = response.read().decode('utf-8')
data = json.loads(raw_data)
if response.status == 410:
return data['reason'], data['timestamp']
else:
return data['reason'] |
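The branch structure above maps an APNs-style HTTP status to a result. As a pure function over the already-parsed response body it is easy to exercise (a sketch of the same mapping, decoupled from the connection object):

```python
def interpret_status(status, data=None):
    # 200 -> 'Success'; 410 -> (reason, timestamp); otherwise -> reason.
    if status == 200:
        return 'Success'
    if status == 410:
        return data['reason'], data['timestamp']
    return data['reason']
```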
def verifychain_from_cafile(self, cafile, untrusted_file=None):
"""
Does the same job as .verifychain() but using the list of anchors
from the cafile. This is useful (because more efficient) if
you have a lot of certificates to verify: doing it this way
avoids the creation of a cafile from anchors at each call.
As for .verifychain(), a list of untrusted certificates can be
passed (as a file, this time)
"""
cmd = ["openssl", "verify", "-CAfile", cafile]
if untrusted_file:
cmd += ["-untrusted", untrusted_file]
try:
pemcert = self.output(fmt="PEM")
cmdres = self._apply_ossl_cmd(cmd, pemcert)
except:
return False
return cmdres.endswith("\nOK\n") or cmdres.endswith(": OK\n") | Does the same job as .verifychain() but using the list of anchors
from the cafile. This is useful (because more efficient) if
you have a lot of certificates to verify: doing it this way
avoids the creation of a cafile from anchors at each call.
As for .verifychain(), a list of untrusted certificates can be
passed (as a file, this time) | Below is the instruction that describes the task:
### Input:
Does the same job as .verifychain() but using the list of anchors
from the cafile. This is useful (because more efficient) if
you have a lot of certificates to verify: doing it this way
avoids the creation of a cafile from anchors at each call.
As for .verifychain(), a list of untrusted certificates can be
passed (as a file, this time)
### Response:
def verifychain_from_cafile(self, cafile, untrusted_file=None):
"""
Does the same job as .verifychain() but using the list of anchors
from the cafile. This is useful (because more efficient) if
you have a lot of certificates to verify: doing it this way
avoids the creation of a cafile from anchors at each call.
As for .verifychain(), a list of untrusted certificates can be
passed (as a file, this time)
"""
cmd = ["openssl", "verify", "-CAfile", cafile]
if untrusted_file:
cmd += ["-untrusted", untrusted_file]
try:
pemcert = self.output(fmt="PEM")
cmdres = self._apply_ossl_cmd(cmd, pemcert)
except:
return False
return cmdres.endswith("\nOK\n") or cmdres.endswith(": OK\n") |
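The success check at the end of `verifychain_from_cafile` relies on the two output shapes `openssl verify` can produce. A minimal, library-free sketch of just that parsing step (the command invocation itself needs an `openssl` binary and real certificates, so it is left out here):

```python
def verify_ok(cmdres: str) -> bool:
    # Modern openssl prints "<input>: OK"; some builds end with a bare "OK" line.
    return cmdres.endswith("\nOK\n") or cmdres.endswith(": OK\n")

print(verify_ok("stdin: OK\n"))                   # True
print(verify_ok("error 20 at 0 depth lookup\n"))  # False
```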
def safe_dict(obj_or_func, **kwargs):
"""
Create a dict from any object with all attributes, but not the ones starting with an underscore _
Useful for objects, or functions returning an object, that have no __dict__ attribute, like psutil functions.
"""
if callable(obj_or_func):
res = obj_or_func(**kwargs)
else:
res = obj_or_func
if hasattr(res, '__dict__'):
return res.__dict__
attributes = [i for i in dir(res) if not i.startswith('_')]
out = {}
for a in attributes:
val = getattr(res, a)
if val and not callable(val):
out[a] = val
return out | Create a dict from any object with all attributes, but not the ones starting with an underscore _
Useful for objects, or functions returning an object, that have no __dict__ attribute, like psutil functions. | Below is the instruction that describes the task:
### Input:
Create a dict from any object with all attributes, but not the ones starting with an underscore _
Useful for objects, or functions returning an object, that have no __dict__ attribute, like psutil functions.
### Response:
def safe_dict(obj_or_func, **kwargs):
"""
Create a dict from any object with all attributes, but not the ones starting with an underscore _
Useful for objects, or functions returning an object, that have no __dict__ attribute, like psutil functions.
"""
if callable(obj_or_func):
res = obj_or_func(**kwargs)
else:
res = obj_or_func
if hasattr(res, '__dict__'):
return res.__dict__
attributes = [i for i in dir(res) if not i.startswith('_')]
out = {}
for a in attributes:
val = getattr(res, a)
if val and not callable(val):
out[a] = val
return out |
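The `dir()` fallback is the interesting branch: psutil-style results are namedtuples, which expose fields as attributes but provide no usable `__dict__`. A small standalone demonstration of that branch; note, as a caveat, that the original's `if val and not callable(val)` filter would also drop falsy values such as `0`:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
p = Point(3, 4)

# dir()-based extraction, mirroring the fallback branch of safe_dict
out = {a: getattr(p, a)
       for a in dir(p)
       if not a.startswith('_') and not callable(getattr(p, a))}
print(out)  # {'x': 3, 'y': 4}
```

Methods such as `count` and `index` are filtered out by the `callable` test, and the underscore-prefixed helpers (`_fields`, `_asdict`, ...) by the name test.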
def transform_expression_columns(df, fn=np.log2, prefix='Intensity '):
"""
Apply transformation to expression columns.
Default is log2 transform to expression columns beginning with Intensity
:param df:
:param prefix: The column prefix for expression columns
:return:
"""
df = df.copy()
mask = np.array([l.startswith(prefix) for l in df.columns.values])
df.iloc[:, mask] = fn(df.iloc[:, mask])
df.replace([np.inf, -np.inf], np.nan, inplace=True)
return df | Apply transformation to expression columns.
Default is log2 transform to expression columns beginning with Intensity
:param df:
:param prefix: The column prefix for expression columns
:return: | Below is the instruction that describes the task:
### Input:
Apply transformation to expression columns.
Default is log2 transform to expression columns beginning with Intensity
:param df:
:param prefix: The column prefix for expression columns
:return:
### Response:
def transform_expression_columns(df, fn=np.log2, prefix='Intensity '):
"""
Apply transformation to expression columns.
Default is log2 transform to expression columns beginning with Intensity
:param df:
:param prefix: The column prefix for expression columns
:return:
"""
df = df.copy()
mask = np.array([l.startswith(prefix) for l in df.columns.values])
df.iloc[:, mask] = fn(df.iloc[:, mask])
df.replace([np.inf, -np.inf], np.nan, inplace=True)
return df |
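A dependency-free sketch of the same idea, transforming only the values whose key carries the prefix, using a plain dict row in place of a DataFrame:

```python
import math

row = {"Intensity A": 8.0, "Intensity B": 0.0, "Gene": "TP53"}
prefix = "Intensity "

def safe_log2(v):
    # mimic the replace([inf, -inf], nan) step: log2(0) becomes nan, not an error
    return math.log2(v) if v > 0 else float("nan")

transformed = {k: (safe_log2(v) if k.startswith(prefix) else v)
               for k, v in row.items()}
print(transformed)  # {'Intensity A': 3.0, 'Intensity B': nan, 'Gene': 'TP53'}
```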
def read_metrics_file(path: str) -> List[Dict[str, Any]]:
"""
Reads lines metrics file and returns list of mappings of key and values.
:param path: File to read metric values from.
:return: Dictionary of metric names (e.g. perplexity-train) mapping to a list of values.
"""
with open(path) as fin:
metrics = [parse_metrics_line(i, line.strip()) for i, line in enumerate(fin, 1)]
return metrics | Reads lines metrics file and returns list of mappings of key and values.
:param path: File to read metric values from.
:return: Dictionary of metric names (e.g. perplexity-train) mapping to a list of values. | Below is the instruction that describes the task:
### Input:
Reads lines metrics file and returns list of mappings of key and values.
:param path: File to read metric values from.
:return: Dictionary of metric names (e.g. perplexity-train) mapping to a list of values.
### Response:
def read_metrics_file(path: str) -> List[Dict[str, Any]]:
"""
Reads lines metrics file and returns list of mappings of key and values.
:param path: File to read metric values from.
:return: Dictionary of metric names (e.g. perplexity-train) mapping to a list of values.
"""
with open(path) as fin:
metrics = [parse_metrics_line(i, line.strip()) for i, line in enumerate(fin, 1)]
return metrics |
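`parse_metrics_line` is not shown in the source. Assuming a common tab-separated layout of a checkpoint number followed by `name=value` pairs (an assumption, not taken from the source), a runnable sketch with a hypothetical parser:

```python
import io

def parse_metrics_line(line_number, line):
    # hypothetical parser for "<checkpoint>\tname=value\tname=value..."
    fields = line.split("\t")
    return {name: float(value)
            for name, value in (f.split("=", 1) for f in fields[1:])}

fin = io.StringIO("1\tperplexity-train=10.5\n2\tperplexity-train=9.8\n")
metrics = [parse_metrics_line(i, line.strip()) for i, line in enumerate(fin, 1)]
print(metrics)  # [{'perplexity-train': 10.5}, {'perplexity-train': 9.8}]
```

The `enumerate(fin, 1)` call is the same 1-based line numbering used in the original.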
def add_listener(self, include_value=False, item_added_func=None, item_removed_func=None):
"""
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
:param include_value: (bool), whether received events include the updated item or not (optional).
:param item_added_func: Function to be called when an item is added to this queue (optional).
:param item_removed_func: Function to be called when an item is removed from this queue (optional).
:return: (str), a registration id which is used as a key to remove the listener.
"""
request = queue_add_listener_codec.encode_request(self.name, include_value, False)
def handle_event_item(item, uuid, event_type):
item = item if include_value else None
member = self._client.cluster.get_member_by_uuid(uuid)
item_event = ItemEvent(self.name, item, event_type, member, self._to_object)
if event_type == ItemEventType.added:
if item_added_func:
item_added_func(item_event)
else:
if item_removed_func:
item_removed_func(item_event)
return self._start_listening(request,
lambda m: queue_add_listener_codec.handle(m, handle_event_item),
lambda r: queue_add_listener_codec.decode_response(r)['response'],
self.partition_key) | Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
:param include_value: (bool), whether received events include the updated item or not (optional).
:param item_added_func: Function to be called when an item is added to this queue (optional).
:param item_removed_func: Function to be called when an item is removed from this queue (optional).
:return: (str), a registration id which is used as a key to remove the listener. | Below is the instruction that describes the task:
### Input:
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
:param include_value: (bool), whether received events include the updated item or not (optional).
:param item_added_func: Function to be called when an item is added to this queue (optional).
:param item_removed_func: Function to be called when an item is removed from this queue (optional).
:return: (str), a registration id which is used as a key to remove the listener.
### Response:
def add_listener(self, include_value=False, item_added_func=None, item_removed_func=None):
"""
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
:param include_value: (bool), whether received events include the updated item or not (optional).
:param item_added_func: Function to be called when an item is added to this queue (optional).
:param item_removed_func: Function to be called when an item is removed from this queue (optional).
:return: (str), a registration id which is used as a key to remove the listener.
"""
request = queue_add_listener_codec.encode_request(self.name, include_value, False)
def handle_event_item(item, uuid, event_type):
item = item if include_value else None
member = self._client.cluster.get_member_by_uuid(uuid)
item_event = ItemEvent(self.name, item, event_type, member, self._to_object)
if event_type == ItemEventType.added:
if item_added_func:
item_added_func(item_event)
else:
if item_removed_func:
item_removed_func(item_event)
return self._start_listening(request,
lambda m: queue_add_listener_codec.handle(m, handle_event_item),
lambda r: queue_add_listener_codec.decode_response(r)['response'],
self.partition_key) |
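Stripped of the Hazelcast codec plumbing, the core of `add_listener` is a closure that routes one event stream to two optional callbacks, optionally hiding the item value. A self-contained sketch of that dispatch pattern (the names here are illustrative, not the client's actual API):

```python
from enum import Enum

class ItemEventType(Enum):
    added = 1
    removed = 2

def make_item_handler(include_value=False, on_added=None, on_removed=None):
    def handle(item, event_type):
        item = item if include_value else None  # hide the value when not requested
        if event_type is ItemEventType.added:
            if on_added:
                on_added(item)
        elif on_removed:
            on_removed(item)
    return handle

seen = []
handler = make_item_handler(include_value=True, on_added=seen.append)
handler("job-1", ItemEventType.added)
handler("job-1", ItemEventType.removed)  # no on_removed callback: ignored
print(seen)  # ['job-1']
```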
async def emit(self, record: LogRecord): # type: ignore
"""
Emit a record.
Output the record to the file, catering for rollover as described
in `do_rollover`.
"""
try:
if self.should_rollover(record):
async with self._rollover_lock:
if self.should_rollover(record):
await self.do_rollover()
await super().emit(record)
except Exception as e:
await self.handleError(record) | Emit a record.
Output the record to the file, catering for rollover as described
in `do_rollover`. | Below is the instruction that describes the task:
### Input:
Emit a record.
Output the record to the file, catering for rollover as described
in `do_rollover`.
### Response:
async def emit(self, record: LogRecord): # type: ignore
"""
Emit a record.
Output the record to the file, catering for rollover as described
in `do_rollover`.
"""
try:
if self.should_rollover(record):
async with self._rollover_lock:
if self.should_rollover(record):
await self.do_rollover()
await super().emit(record)
except Exception as e:
await self.handleError(record) |
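The check, lock, re-check shape in `emit` is classic double-checked locking: the cheap first test avoids taking the lock on every record, and the second test under the lock ensures only one coroutine actually performs the rollover. A toy model of just that behaviour:

```python
import asyncio

class Roller:
    def __init__(self):
        self._lock = asyncio.Lock()
        self._due = True
        self.rollovers = 0

    def should_rollover(self):
        return self._due

    async def do_rollover(self):
        self._due = False
        self.rollovers += 1

    async def emit(self):
        if self.should_rollover():            # cheap check without the lock
            async with self._lock:
                if self.should_rollover():    # re-check: another task may have rolled
                    await self.do_rollover()

async def main():
    r = Roller()
    await asyncio.gather(*(r.emit() for _ in range(10)))
    return r.rollovers

print(asyncio.run(main()))  # 1: exactly one rollover despite 10 concurrent emits
```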
def profiler(self):
"""Creates a dictionary from the profile scheme(s)"""
logging.info('Loading profiles')
# Initialise variables
profiledata = defaultdict(make_dict)
reverse_profiledata = dict()
profileset = set()
# Find all the unique profiles to use with a set
for sample in self.runmetadata.samples:
if sample.general.bestassemblyfile != 'NA':
if sample[self.analysistype].profile != 'NA':
profileset.add(sample[self.analysistype].profile)
# Extract the profiles for each set
for sequenceprofile in profileset:
#
if sequenceprofile not in self.meta_dict:
self.meta_dict[sequenceprofile] = dict()
reverse_profiledata[sequenceprofile] = dict()
self.meta_dict[sequenceprofile]['ND'] = dict()
# Clear the list of genes
geneset = set()
# Calculate the total number of genes in the typing scheme
for sample in self.runmetadata.samples:
if sample.general.bestassemblyfile != 'NA':
if sequenceprofile == sample[self.analysistype].profile:
geneset = {allele for allele in sample[self.analysistype].alleles}
try:
# Open the sequence profile file as a dictionary
profile = DictReader(open(sequenceprofile), dialect='excel-tab')
# Revert to standard comma separated values
except KeyError:
# Open the sequence profile file as a dictionary
profile = DictReader(open(sequenceprofile))
# Iterate through the rows
for row in profile:
# Populate the profile dictionary with profile number: {gene: allele}. Use the first field name,
# which will be either ST, or rST as the key to determine the profile number value
allele_comprehension = {gene: allele for gene, allele in row.items() if gene in geneset}
st = row[profile.fieldnames[0]]
for header, value in row.items():
value = value if value else 'ND'
if header not in geneset and header not in ['ST', 'rST']:
if st not in self.meta_dict[sequenceprofile]:
self.meta_dict[sequenceprofile][st] = dict()
if header == 'CC' or header == 'clonal_complex':
header = 'CC'
self.meta_dict[sequenceprofile][st][header] = value
self.meta_dict[sequenceprofile]['ND'][header] = 'ND'
self.meta_dict[sequenceprofile][st]['PredictedSerogroup'] = 'ND'
if header not in self.meta_headers:
self.meta_headers.append(header)
profiledata[sequenceprofile][st] = allele_comprehension
# Create a 'reverse' dictionary using the allele comprehension as the key, and
# the sequence type as the value - can be used if exact matches are ever desired
reverse_profiledata[sequenceprofile].update({frozenset(allele_comprehension.items()): st})
# Add the profile data, and gene list to each sample
for sample in self.runmetadata.samples:
if sample.general.bestassemblyfile != 'NA':
if sequenceprofile == sample[self.analysistype].profile:
# Populate the metadata with the profile data
sample[self.analysistype].profiledata = profiledata[sample[self.analysistype].profile]
sample[self.analysistype].reverse_profiledata = reverse_profiledata[sequenceprofile]
sample[self.analysistype].meta_dict = self.meta_dict[sequenceprofile]
else:
sample[self.analysistype].profiledata = 'NA'
sample[self.analysistype].reverse_profiledata = 'NA'
sample[self.analysistype].meta_dict = 'NA' | Creates a dictionary from the profile scheme(s) | Below is the instruction that describes the task:
### Input:
Creates a dictionary from the profile scheme(s)
### Response:
def profiler(self):
"""Creates a dictionary from the profile scheme(s)"""
logging.info('Loading profiles')
# Initialise variables
profiledata = defaultdict(make_dict)
reverse_profiledata = dict()
profileset = set()
# Find all the unique profiles to use with a set
for sample in self.runmetadata.samples:
if sample.general.bestassemblyfile != 'NA':
if sample[self.analysistype].profile != 'NA':
profileset.add(sample[self.analysistype].profile)
# Extract the profiles for each set
for sequenceprofile in profileset:
#
if sequenceprofile not in self.meta_dict:
self.meta_dict[sequenceprofile] = dict()
reverse_profiledata[sequenceprofile] = dict()
self.meta_dict[sequenceprofile]['ND'] = dict()
# Clear the list of genes
geneset = set()
# Calculate the total number of genes in the typing scheme
for sample in self.runmetadata.samples:
if sample.general.bestassemblyfile != 'NA':
if sequenceprofile == sample[self.analysistype].profile:
geneset = {allele for allele in sample[self.analysistype].alleles}
try:
# Open the sequence profile file as a dictionary
profile = DictReader(open(sequenceprofile), dialect='excel-tab')
# Revert to standard comma separated values
except KeyError:
# Open the sequence profile file as a dictionary
profile = DictReader(open(sequenceprofile))
# Iterate through the rows
for row in profile:
# Populate the profile dictionary with profile number: {gene: allele}. Use the first field name,
# which will be either ST, or rST as the key to determine the profile number value
allele_comprehension = {gene: allele for gene, allele in row.items() if gene in geneset}
st = row[profile.fieldnames[0]]
for header, value in row.items():
value = value if value else 'ND'
if header not in geneset and header not in ['ST', 'rST']:
if st not in self.meta_dict[sequenceprofile]:
self.meta_dict[sequenceprofile][st] = dict()
if header == 'CC' or header == 'clonal_complex':
header = 'CC'
self.meta_dict[sequenceprofile][st][header] = value
self.meta_dict[sequenceprofile]['ND'][header] = 'ND'
self.meta_dict[sequenceprofile][st]['PredictedSerogroup'] = 'ND'
if header not in self.meta_headers:
self.meta_headers.append(header)
profiledata[sequenceprofile][st] = allele_comprehension
# Create a 'reverse' dictionary using the allele comprehension as the key, and
# the sequence type as the value - can be used if exact matches are ever desired
reverse_profiledata[sequenceprofile].update({frozenset(allele_comprehension.items()): st})
# Add the profile data, and gene list to each sample
for sample in self.runmetadata.samples:
if sample.general.bestassemblyfile != 'NA':
if sequenceprofile == sample[self.analysistype].profile:
# Populate the metadata with the profile data
sample[self.analysistype].profiledata = profiledata[sample[self.analysistype].profile]
sample[self.analysistype].reverse_profiledata = reverse_profiledata[sequenceprofile]
sample[self.analysistype].meta_dict = self.meta_dict[sequenceprofile]
else:
sample[self.analysistype].profiledata = 'NA'
sample[self.analysistype].reverse_profiledata = 'NA'
sample[self.analysistype].meta_dict = 'NA' |
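The heart of `profiler` is turning a profile table into `{ST: {gene: allele}}` via `csv.DictReader`, keyed on the first field name (ST or rST). A minimal reproduction of that step on an in-memory tab-separated profile:

```python
import csv
import io

profile_text = "ST\tgene1\tgene2\tclonal_complex\n1\t5\t7\tCC-2\n"
geneset = {"gene1", "gene2"}

reader = csv.DictReader(io.StringIO(profile_text), dialect="excel-tab")
profiledata = {}
for row in reader:
    st = row[reader.fieldnames[0]]  # first column holds the profile number
    profiledata[st] = {g: a for g, a in row.items() if g in geneset}

print(profiledata)  # {'1': {'gene1': '5', 'gene2': '7'}}
```

Non-gene columns such as `clonal_complex` are filtered out by the `geneset` test, just as in the original comprehension.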
def get_qi_name(self, qi_type, band='B00', data_format=MimeType.GML):
"""
:param qi_type: type of quality indicator
:type qi_type: str
:param band: band name
:type band: str
:param data_format: format of the file
:type data_format: MimeType
:return: name of gml file
:rtype: str
"""
band = band.split('/')[-1]
if self.safe_type == EsaSafeType.OLD_TYPE:
name = _edit_name(self.tile_id, 'MSK', delete_end=True)
source_param = '{}_TL'.format('L1C' if self.data_source is DataSource.SENTINEL2_L1C else 'L2A')
name = name.replace(source_param, qi_type)
name = '{}_{}_MSIL1C'.format(name, band)
else:
name = 'MSK_{}_{}'.format(qi_type, band)
return '{}.{}'.format(name, data_format.value) | :param qi_type: type of quality indicator
:type qi_type: str
:param band: band name
:type band: str
:param data_format: format of the file
:type data_format: MimeType
:return: name of gml file
:rtype: str | Below is the instruction that describes the task:
### Input:
:param qi_type: type of quality indicator
:type qi_type: str
:param band: band name
:type band: str
:param data_format: format of the file
:type data_format: MimeType
:return: name of gml file
:rtype: str
### Response:
def get_qi_name(self, qi_type, band='B00', data_format=MimeType.GML):
"""
:param qi_type: type of quality indicator
:type qi_type: str
:param band: band name
:type band: str
:param data_format: format of the file
:type data_format: MimeType
:return: name of gml file
:rtype: str
"""
band = band.split('/')[-1]
if self.safe_type == EsaSafeType.OLD_TYPE:
name = _edit_name(self.tile_id, 'MSK', delete_end=True)
source_param = '{}_TL'.format('L1C' if self.data_source is DataSource.SENTINEL2_L1C else 'L2A')
name = name.replace(source_param, qi_type)
name = '{}_{}_MSIL1C'.format(name, band)
else:
name = 'MSK_{}_{}'.format(qi_type, band)
return '{}.{}'.format(name, data_format.value) |
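For the new (compact) SAFE naming, `get_qi_name` reduces to plain string assembly after stripping any folder prefix from the band. A sketch of that branch with illustrative inputs:

```python
def qi_name_new_type(qi_type, band="B00", extension="gml"):
    band = band.split('/')[-1]  # "R10m/B02" -> "B02"
    return "MSK_{}_{}.{}".format(qi_type, band, extension)

print(qi_name_new_type("CLOUDS", "R10m/B02"))  # MSK_CLOUDS_B02.gml
print(qi_name_new_type("DEFECT"))              # MSK_DEFECT_B00.gml
```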
def set_headline(self, level, message, timestamp=None, now_reference=None):
"""Set the persistent headline message for this service.
Args:
level (int): The level of the message (info, warning, error)
message (string): The message contents
timestamp (float): An optional monotonic value in seconds for when the message was created
now_reference (float): If timestamp is not relative to monotonic() as called from this
module then this should be now() as seen by whoever created the timestamp.
"""
if self.headline is not None and self.headline.message == message:
self.headline.created = monotonic()
self.headline.count += 1
return
msg_object = ServiceMessage(level, message, self._last_message_id, timestamp, now_reference)
self.headline = msg_object
self._last_message_id += 1 | Set the persistent headline message for this service.
Args:
level (int): The level of the message (info, warning, error)
message (string): The message contents
timestamp (float): An optional monotonic value in seconds for when the message was created
now_reference (float): If timestamp is not relative to monotonic() as called from this
module then this should be now() as seen by whoever created the timestamp. | Below is the instruction that describes the task:
### Input:
Set the persistent headline message for this service.
Args:
level (int): The level of the message (info, warning, error)
message (string): The message contents
timestamp (float): An optional monotonic value in seconds for when the message was created
now_reference (float): If timestamp is not relative to monotonic() as called from this
module then this should be now() as seen by whoever created the timestamp.
### Response:
def set_headline(self, level, message, timestamp=None, now_reference=None):
"""Set the persistent headline message for this service.
Args:
level (int): The level of the message (info, warning, error)
message (string): The message contents
timestamp (float): An optional monotonic value in seconds for when the message was created
now_reference (float): If timestamp is not relative to monotonic() as called from this
module then this should be now() as seen by whoever created the timestamp.
"""
if self.headline is not None and self.headline.message == message:
self.headline.created = monotonic()
self.headline.count += 1
return
msg_object = ServiceMessage(level, message, self._last_message_id, timestamp, now_reference)
self.headline = msg_object
self._last_message_id += 1 |
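The early-return branch in `set_headline` deduplicates repeated headlines: an identical message refreshes the timestamp and bumps a counter instead of allocating a new message object. The same idea in isolation, with a simplified message class:

```python
import time

class Headline:
    def __init__(self, message):
        self.message = message
        self.created = time.monotonic()
        self.count = 1

class Service:
    def __init__(self):
        self.headline = None

    def set_headline(self, message):
        if self.headline is not None and self.headline.message == message:
            self.headline.created = time.monotonic()  # refresh, don't replace
            self.headline.count += 1
            return
        self.headline = Headline(message)

svc = Service()
for _ in range(3):
    svc.set_headline("disk almost full")
print(svc.headline.count)  # 3
```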
def out_interactions_iter(self, nbunch=None, t=None):
"""Return an iterator over the out interactions present in a given snapshot.
Edges are returned as tuples
in the order (node, neighbor).
Parameters
----------
nbunch : iterable container, optional (default= all nodes)
A container of nodes. The container will be iterated
through once.
t : snapshot id (default=None)
If None, the method returns an iterator over the edges of the flattened graph.
Returns
-------
edge_iter : iterator
An iterator of (u,v) tuples of interaction.
Notes
-----
Nodes in nbunch that are not in the graph will be (quietly) ignored.
For directed graphs this returns the out-interaction.
Examples
--------
>>> G = dn.DynDiGraph()
>>> G.add_interaction(0,1, 0)
>>> G.add_interaction(1,2, 0)
>>> G.add_interaction(2,3,1)
>>> [e for e in G.out_interactions_iter(t=0)]
[(0, 1), (1, 2)]
>>> list(G.out_interactions_iter())
[(0, 1), (1, 2), (2, 3)]
"""
if nbunch is None:
nodes_nbrs_succ = self._succ.items()
else:
nodes_nbrs_succ = [(n, self._succ[n]) for n in self.nbunch_iter(nbunch)]
for n, nbrs in nodes_nbrs_succ:
for nbr in nbrs:
if t is not None:
if self.__presence_test(n, nbr, t):
yield (n, nbr, {"t": [t]})
else:
if nbr in self._succ[n]:
yield (n, nbr, self._succ[n][nbr]) | Return an iterator over the out interactions present in a given snapshot.
Edges are returned as tuples
in the order (node, neighbor).
Parameters
----------
nbunch : iterable container, optional (default= all nodes)
A container of nodes. The container will be iterated
through once.
t : snapshot id (default=None)
If None, the method returns an iterator over the edges of the flattened graph.
Returns
-------
edge_iter : iterator
An iterator of (u,v) tuples of interaction.
Notes
-----
Nodes in nbunch that are not in the graph will be (quietly) ignored.
For directed graphs this returns the out-interaction.
Examples
--------
>>> G = dn.DynDiGraph()
>>> G.add_interaction(0,1, 0)
>>> G.add_interaction(1,2, 0)
>>> G.add_interaction(2,3,1)
>>> [e for e in G.out_interactions_iter(t=0)]
[(0, 1), (1, 2)]
>>> list(G.out_interactions_iter())
[(0, 1), (1, 2), (2, 3)] | Below is the instruction that describes the task:
### Input:
Return an iterator over the out interactions present in a given snapshot.
Edges are returned as tuples
in the order (node, neighbor).
Parameters
----------
nbunch : iterable container, optional (default= all nodes)
A container of nodes. The container will be iterated
through once.
t : snapshot id (default=None)
If None, the method returns an iterator over the edges of the flattened graph.
Returns
-------
edge_iter : iterator
An iterator of (u,v) tuples of interaction.
Notes
-----
Nodes in nbunch that are not in the graph will be (quietly) ignored.
For directed graphs this returns the out-interaction.
Examples
--------
>>> G = dn.DynDiGraph()
>>> G.add_interaction(0,1, 0)
>>> G.add_interaction(1,2, 0)
>>> G.add_interaction(2,3,1)
>>> [e for e in G.out_interactions_iter(t=0)]
[(0, 1), (1, 2)]
>>> list(G.out_interactions_iter())
[(0, 1), (1, 2), (2, 3)]
### Response:
def out_interactions_iter(self, nbunch=None, t=None):
"""Return an iterator over the out interactions present in a given snapshot.
Edges are returned as tuples
in the order (node, neighbor).
Parameters
----------
nbunch : iterable container, optional (default= all nodes)
A container of nodes. The container will be iterated
through once.
t : snapshot id (default=None)
If None, the method returns an iterator over the edges of the flattened graph.
Returns
-------
edge_iter : iterator
An iterator of (u,v) tuples of interaction.
Notes
-----
Nodes in nbunch that are not in the graph will be (quietly) ignored.
For directed graphs this returns the out-interaction.
Examples
--------
>>> G = dn.DynDiGraph()
>>> G.add_interaction(0,1, 0)
>>> G.add_interaction(1,2, 0)
>>> G.add_interaction(2,3,1)
>>> [e for e in G.out_interactions_iter(t=0)]
[(0, 1), (1, 2)]
>>> list(G.out_interactions_iter())
[(0, 1), (1, 2), (2, 3)]
"""
if nbunch is None:
nodes_nbrs_succ = self._succ.items()
else:
nodes_nbrs_succ = [(n, self._succ[n]) for n in self.nbunch_iter(nbunch)]
for n, nbrs in nodes_nbrs_succ:
for nbr in nbrs:
if t is not None:
if self.__presence_test(n, nbr, t):
yield (n, nbr, {"t": [t]})
else:
if nbr in self._succ[n]:
yield (n, nbr, self._succ[n][nbr]) |
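The iteration is an adjacency-dict walk with an optional time filter. A standalone sketch over a plain successor mapping that reproduces the docstring's example (the membership test on `attrs["t"]` stands in for the private presence test):

```python
succ = {
    0: {1: {"t": [0]}},
    1: {2: {"t": [0]}},
    2: {3: {"t": [1]}},
    3: {},
}

def out_interactions(succ, t=None):
    for n, nbrs in succ.items():
        for nbr, attrs in nbrs.items():
            if t is None or t in attrs["t"]:  # stand-in for __presence_test
                yield (n, nbr)

print(list(out_interactions(succ, t=0)))  # [(0, 1), (1, 2)]
print(list(out_interactions(succ)))       # [(0, 1), (1, 2), (2, 3)]
```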
def _AddVolume(self, volume):
"""Adds a volume.
Args:
volume (Volume): a volume.
Raises:
KeyError: if volume is already set for the corresponding volume
identifier.
"""
if volume.identifier in self._volumes:
raise KeyError(
'Volume object already set for volume identifier: {0:s}'.format(
volume.identifier))
self._volumes[volume.identifier] = volume
self._volume_identifiers.append(volume.identifier) | Adds a volume.
Args:
volume (Volume): a volume.
Raises:
KeyError: if volume is already set for the corresponding volume
identifier. | Below is the instruction that describes the task:
### Input:
Adds a volume.
Args:
volume (Volume): a volume.
Raises:
KeyError: if volume is already set for the corresponding volume
identifier.
### Response:
def _AddVolume(self, volume):
"""Adds a volume.
Args:
volume (Volume): a volume.
Raises:
KeyError: if volume is already set for the corresponding volume
identifier.
"""
if volume.identifier in self._volumes:
raise KeyError(
'Volume object already set for volume identifier: {0:s}'.format(
volume.identifier))
self._volumes[volume.identifier] = volume
self._volume_identifiers.append(volume.identifier) |
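`_AddVolume` maintains a lookup dict alongside an ordering list, rejecting duplicate identifiers up front. A self-contained equivalent of that registry pattern:

```python
class VolumeSystem:
    def __init__(self):
        self._volumes = {}
        self._volume_identifiers = []

    def add_volume(self, identifier, volume):
        if identifier in self._volumes:
            raise KeyError(
                'Volume object already set for volume identifier: {0:s}'.format(
                    identifier))
        self._volumes[identifier] = volume
        self._volume_identifiers.append(identifier)

vs = VolumeSystem()
vs.add_volume("p1", object())
try:
    vs.add_volume("p1", object())
except KeyError:
    print("duplicate rejected")   # duplicate rejected
print(vs._volume_identifiers)     # ['p1']
```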
def finish_target(self, name):
"""Finishes progress bar for a specified target."""
# We have to write a msg about finished target
with self._lock:
pbar = self._bar(name, 100, 100)
if sys.stdout.isatty():
self.clearln()
self._print(pbar)
self._n_finished += 1
self._line = None | Finishes progress bar for a specified target. | Below is the instruction that describes the task:
### Input:
Finishes progress bar for a specified target.
### Response:
def finish_target(self, name):
"""Finishes progress bar for a specified target."""
# We have to write a msg about finished target
with self._lock:
pbar = self._bar(name, 100, 100)
if sys.stdout.isatty():
self.clearln()
self._print(pbar)
self._n_finished += 1
self._line = None |
def su(self) -> 'Gate':
"""Convert gate tensor to the special unitary group."""
rank = 2**self.qubit_nb
U = asarray(self.asoperator())
U /= np.linalg.det(U) ** (1/rank)
return Gate(tensor=U, qubits=self.qubits) | Convert gate tensor to the special unitary group. | Below is the instruction that describes the task:
### Input:
Convert gate tensor to the special unitary group.
### Response:
def su(self) -> 'Gate':
"""Convert gate tensor to the special unitary group."""
rank = 2**self.qubit_nb
U = asarray(self.asoperator())
U /= np.linalg.det(U) ** (1/rank)
return Gate(tensor=U, qubits=self.qubits) |
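Dividing by `det(U) ** (1/rank)` removes the global phase so the determinant becomes 1, giving an SU(2^n) representative of the same gate. The normalization can be checked by hand for a 2x2 (one-qubit) case without NumPy:

```python
U = [[1j, 0.0],
     [0.0, 1j]]                  # det(U) = -1, so U is unitary but not special

det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
scale = det ** (1 / 2)           # rank = 2**1 for one qubit
V = [[x / scale for x in row] for row in U]

det_v = V[0][0] * V[1][1] - V[0][1] * V[1][0]
print(abs(det_v - 1) < 1e-9)     # True: V is now in SU(2)
```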
def to_etree(source, root_tag=None):
""" Convert various representations of an XML structure to a etree Element
Args:
source -- The source object to be converted - ET.Element\ElementTree, dict or string.
Keyword args:
root_tag -- A optional parent tag in which to wrap the xml tree if no root in dict representation.
See dict_to_etree()
Returns:
An etree Element matching the source object.
>>> to_etree("<content/>") #doctest: +ELLIPSIS
<Element content at 0x...>
>>> to_etree({'document': {'title': 'foo', 'list': [{'li':1}, {'li':2}]}}) #doctest: +ELLIPSIS
<Element document at 0x...>
>>> to_etree(ET.Element('root')) #doctest: +ELLIPSIS
<Element root at 0x...>
"""
if hasattr(source, 'get_root'): #XXX:
return source.get_root()
elif isinstance(source, type(ET.Element('x'))): #XXX: # cElementTree.Element isn't exposed directly
return source
elif isinstance(source, basestring):
try:
return ET.fromstring(source)
except:
raise XMLError(source)
elif hasattr(source, 'keys'): # Dict.
return dict_to_etree(source, root_tag)
else:
raise XMLError(source) | Convert various representations of an XML structure to an etree Element
Args:
source -- The source object to be converted - ET.Element\ElementTree, dict or string.
Keyword args:
root_tag -- A optional parent tag in which to wrap the xml tree if no root in dict representation.
See dict_to_etree()
Returns:
An etree Element matching the source object.
>>> to_etree("<content/>") #doctest: +ELLIPSIS
<Element content at 0x...>
>>> to_etree({'document': {'title': 'foo', 'list': [{'li':1}, {'li':2}]}}) #doctest: +ELLIPSIS
<Element document at 0x...>
>>> to_etree(ET.Element('root')) #doctest: +ELLIPSIS
<Element root at 0x...> | Below is the instruction that describes the task:
### Input:
Convert various representations of an XML structure to an etree Element
Args:
source -- The source object to be converted - ET.Element\ElementTree, dict or string.
Keyword args:
root_tag -- A optional parent tag in which to wrap the xml tree if no root in dict representation.
See dict_to_etree()
Returns:
An etree Element matching the source object.
>>> to_etree("<content/>") #doctest: +ELLIPSIS
<Element content at 0x...>
>>> to_etree({'document': {'title': 'foo', 'list': [{'li':1}, {'li':2}]}}) #doctest: +ELLIPSIS
<Element document at 0x...>
>>> to_etree(ET.Element('root')) #doctest: +ELLIPSIS
<Element root at 0x...>
### Response:
def to_etree(source, root_tag=None):
""" Convert various representations of an XML structure to a etree Element
Args:
source -- The source object to be converted - ET.Element\ElementTree, dict or string.
Keyword args:
root_tag -- A optional parent tag in which to wrap the xml tree if no root in dict representation.
See dict_to_etree()
Returns:
A etree Element matching the source object.
>>> to_etree("<content/>") #doctest: +ELLIPSIS
<Element content at 0x...>
>>> to_etree({'document': {'title': 'foo', 'list': [{'li':1}, {'li':2}]}}) #doctest: +ELLIPSIS
<Element document at 0x...>
>>> to_etree(ET.Element('root')) #doctest: +ELLIPSIS
<Element root at 0x...>
"""
if hasattr(source, 'get_root'): #XXX:
return source.get_root()
elif isinstance(source, type(ET.Element('x'))): #XXX: # cElementTree.Element isn't exposed directly
return source
elif isinstance(source, basestring):
try:
return ET.fromstring(source)
except:
raise XMLError(source)
elif hasattr(source, 'keys'): # Dict.
return dict_to_etree(source, root_tag)
else:
raise XMLError(source) |
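Two details of `to_etree` are worth noting: the `type(ET.Element('x'))` comparison exists because `cElementTree` did not expose its `Element` class directly, and the `hasattr(source, 'keys')` duck-test routes dicts. A runnable check of the string and Element branches using only the stdlib (the dict branch is omitted because it needs `dict_to_etree`):

```python
import xml.etree.ElementTree as ET

def to_etree_min(source):
    # string and Element branches only
    if isinstance(source, type(ET.Element('x'))):
        return source
    if isinstance(source, str):
        return ET.fromstring(source)
    raise ValueError(source)

el = to_etree_min("<content/>")
print(el.tag)                  # content
print(to_etree_min(el) is el)  # True: Elements pass through unchanged
```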
def _add_call_item_to_queue(pending_work_items,
running_work_items,
work_ids,
call_queue):
"""Fills call_queue with _WorkItems from pending_work_items.
This function never blocks.
Args:
pending_work_items: A dict mapping work ids to _WorkItems e.g.
{5: <_WorkItem...>, 6: <_WorkItem...>, ...}
work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
are consumed and the corresponding _WorkItems from
pending_work_items are transformed into _CallItems and put in
call_queue.
call_queue: A ctx.Queue that will be filled with _CallItems
derived from _WorkItems.
"""
while True:
if call_queue.full():
return
try:
work_id = work_ids.get(block=False)
except queue.Empty:
return
else:
work_item = pending_work_items[work_id]
if work_item.future.set_running_or_notify_cancel():
running_work_items += [work_id]
call_queue.put(_CallItem(work_id,
work_item.fn,
work_item.args,
work_item.kwargs),
block=True)
else:
del pending_work_items[work_id]
continue | Fills call_queue with _WorkItems from pending_work_items.
This function never blocks.
Args:
pending_work_items: A dict mapping work ids to _WorkItems e.g.
{5: <_WorkItem...>, 6: <_WorkItem...>, ...}
work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
are consumed and the corresponding _WorkItems from
pending_work_items are transformed into _CallItems and put in
call_queue.
call_queue: A ctx.Queue that will be filled with _CallItems
derived from _WorkItems. | Below is the instruction that describes the task:
### Input:
Fills call_queue with _WorkItems from pending_work_items.
This function never blocks.
Args:
pending_work_items: A dict mapping work ids to _WorkItems e.g.
{5: <_WorkItem...>, 6: <_WorkItem...>, ...}
work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
are consumed and the corresponding _WorkItems from
pending_work_items are transformed into _CallItems and put in
call_queue.
call_queue: A ctx.Queue that will be filled with _CallItems
derived from _WorkItems.
### Response:
def _add_call_item_to_queue(pending_work_items,
running_work_items,
work_ids,
call_queue):
"""Fills call_queue with _WorkItems from pending_work_items.
This function never blocks.
Args:
pending_work_items: A dict mapping work ids to _WorkItems e.g.
{5: <_WorkItem...>, 6: <_WorkItem...>, ...}
work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
are consumed and the corresponding _WorkItems from
pending_work_items are transformed into _CallItems and put in
call_queue.
call_queue: A ctx.Queue that will be filled with _CallItems
derived from _WorkItems.
"""
while True:
if call_queue.full():
return
try:
work_id = work_ids.get(block=False)
except queue.Empty:
return
else:
work_item = pending_work_items[work_id]
if work_item.future.set_running_or_notify_cancel():
running_work_items += [work_id]
call_queue.put(_CallItem(work_id,
work_item.fn,
work_item.args,
work_item.kwargs),
block=True)
else:
del pending_work_items[work_id]
continue |
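The queue-filling loop in this entry can be exercised without a process pool. A minimal, runnable sketch (names are illustrative; the real `_CallItem` construction and future bookkeeping are omitted):

```python
import queue

def fill_call_queue(pending, work_ids, call_queue):
    """Drain ids from work_ids into call_queue without ever blocking (sketch)."""
    moved = []
    while True:
        if call_queue.full():
            return moved          # stop as soon as the bounded queue is full
        try:
            work_id = work_ids.get(block=False)
        except queue.Empty:
            return moved          # no more pending work ids
        call_queue.put(pending.pop(work_id), block=True)
        moved.append(work_id)

work_ids = queue.Queue()
for i in (5, 6, 7):
    work_ids.put(i)
pending = {5: "a", 6: "b", 7: "c"}
call_queue = queue.Queue(maxsize=2)   # bounded, so only two items fit
moved = fill_call_queue(pending, work_ids, call_queue)
print(moved)  # [5, 6] -- the third id stays pending because the queue filled up
```

The non-blocking `get(block=False)` / full-check pair is what lets the real executor call this from its management thread without stalling.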
def start(self):
"""Start the Consumers.
:return:
"""
if not self.connection:
self.create_connection()
while True:
try:
channel = self.connection.channel()
channel.queue.declare('simple_queue')
channel.basic.consume(self, 'simple_queue', no_ack=False)
channel.start_consuming()
if not channel.consumer_tags:
channel.close()
except amqpstorm.AMQPError as why:
LOGGER.exception(why)
self.create_connection()
except KeyboardInterrupt:
self.connection.close()
break | Start the Consumers.
:return: | Below is the instruction that describes the task:
### Input:
Start the Consumers.
:return:
### Response:
def start(self):
"""Start the Consumers.
:return:
"""
if not self.connection:
self.create_connection()
while True:
try:
channel = self.connection.channel()
channel.queue.declare('simple_queue')
channel.basic.consume(self, 'simple_queue', no_ack=False)
channel.start_consuming()
if not channel.consumer_tags:
channel.close()
except amqpstorm.AMQPError as why:
LOGGER.exception(why)
self.create_connection()
except KeyboardInterrupt:
self.connection.close()
break |
def add_resource_permissions(*args, **kwargs):
"""
This syncdb hooks takes care of adding a view permission too all our
content types.
"""
# for each of our content types
for resource in find_api_classes('v1_api', ModelResource):
auth = resource._meta.authorization
content_type = ContentType.objects.get_for_model(resource._meta.queryset.model)
if isinstance(auth, SpiffAuthorization):
conditions = auth.conditions()
operations = auth.operations()
if len(conditions) == 0:
conditions = (None,)
for condition in conditions:
for operation in operations:
# build our permission slug
if condition:
codename = "%s_%s_%s" % (operation[0], condition[0], content_type.model)
name = "Can %s %s, when %s" % (operation[1], content_type.name,
condition[1])
else:
codename = "%s_%s" % (operation[0], content_type.model)
name = "Can %s %s" % (operation[1], content_type.name)
# if it doesn't exist..
if not Permission.objects.filter(content_type=content_type, codename=codename):
# add it
Permission.objects.create(content_type=content_type,
codename=codename,
name=name[:49])
funcLog().debug("Created permission %s.%s (%s)", content_type.app_label, codename, name) | This syncdb hooks takes care of adding a view permission too all our
content types. | Below is the instruction that describes the task:
### Input:
This syncdb hooks takes care of adding a view permission too all our
content types.
### Response:
def add_resource_permissions(*args, **kwargs):
"""
This syncdb hooks takes care of adding a view permission too all our
content types.
"""
# for each of our content types
for resource in find_api_classes('v1_api', ModelResource):
auth = resource._meta.authorization
content_type = ContentType.objects.get_for_model(resource._meta.queryset.model)
if isinstance(auth, SpiffAuthorization):
conditions = auth.conditions()
operations = auth.operations()
if len(conditions) == 0:
conditions = (None,)
for condition in conditions:
for operation in operations:
# build our permission slug
if condition:
codename = "%s_%s_%s" % (operation[0], condition[0], content_type.model)
name = "Can %s %s, when %s" % (operation[1], content_type.name,
condition[1])
else:
codename = "%s_%s" % (operation[0], content_type.model)
name = "Can %s %s" % (operation[1], content_type.name)
# if it doesn't exist..
if not Permission.objects.filter(content_type=content_type, codename=codename):
# add it
Permission.objects.create(content_type=content_type,
codename=codename,
name=name[:49])
funcLog().debug("Created permission %s.%s (%s)", content_type.app_label, codename, name) |
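The codename/name construction in this entry is pure string formatting and can be isolated for testing. A hedged sketch with hypothetical sample data (no Django required; `operation` and `condition` are `(slug, verbose)` pairs as in the hook above):

```python
def permission_slug(operation, condition, model, verbose_name):
    """Build (codename, name) the way the syncdb hook does (sketch)."""
    if condition:
        codename = "%s_%s_%s" % (operation[0], condition[0], model)
        name = "Can %s %s, when %s" % (operation[1], verbose_name, condition[1])
    else:
        codename = "%s_%s" % (operation[0], model)
        name = "Can %s %s" % (operation[1], verbose_name)
    return codename, name[:49]   # Permission.name is truncated as in the hook

print(permission_slug(("view", "view"), ("owner", "owned by user"), "invoice", "invoice"))
# ('view_owner_invoice', 'Can view invoice, when owned by user')
print(permission_slug(("change", "change"), None, "invoice", "invoice"))
# ('change_invoice', 'Can change invoice')
```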
def plot_dry_adiabats(self, p=None, theta=None, **kwargs):
r'''Plot dry adiabats.
Adds dry adiabats (lines of constant potential temperature) to the
plot. The default style of these lines is dashed red lines with an
alpha value of 0.5. These can be overridden using keyword arguments.
Parameters
----------
p : array_like, optional
1-dimensional array of pressure values to be included in the dry
adiabats. If not specified, they will be linearly distributed
across the current plotted pressure range.
theta : array_like, optional
1-dimensional array of potential temperature values for dry
adiabats. By default these will be generated based on the current
temperature limits.
kwargs
Other keyword arguments to pass to
`matplotlib.collections.LineCollection`
See Also
--------
plot_moist_adiabats
`matplotlib.collections.LineCollection`
`metpy.calc.dry_lapse`
'''
for artist in self._dry_adiabats:
artist.remove()
self._dry_adiabats = []
# Determine set of starting temps if necessary
if theta is None:
xmin, xmax = self.get_xlim()
theta = np.arange(xmin, xmax + 201, 10)
# Get pressure levels based on ylims if necessary
if p is None:
p = np.linspace(*self.get_ylim())
# Assemble into data for plotting
t = calculate('T', theta=theta[:, None], p=p, p_units='hPa',
T_units='degC', theta_units='degC')
linedata = [np.vstack((ti, p)).T for ti in t]
# Add to plot
kwargs.setdefault('clip_on', True)
kwargs.setdefault('colors', '#A65300')
kwargs.setdefault('linestyles', '-')
kwargs.setdefault('alpha', 1)
kwargs.setdefault('linewidth', 0.5)
kwargs.setdefault('zorder', 1.1)
collection = LineCollection(linedata, **kwargs)
self._dry_adiabats.append(collection)
self.add_collection(collection)
theta = theta.flatten()
T_label = calculate('T', p=140, p_units='hPa', theta=theta,
T_units='degC', theta_units='degC')
for i in range(len(theta)):
text = self.text(
T_label[i], 140, '{:.0f}'.format(theta[i]),
fontsize=8, ha='left', va='center', rotation=-60,
color='#A65300', bbox={
'facecolor': 'w', 'edgecolor': 'w', 'alpha': 0,
}, zorder=1.2)
text.set_clip_on(True)
self._dry_adiabats.append(text) | r'''Plot dry adiabats.
Adds dry adiabats (lines of constant potential temperature) to the
plot. The default style of these lines is dashed red lines with an
alpha value of 0.5. These can be overridden using keyword arguments.
Parameters
----------
p : array_like, optional
1-dimensional array of pressure values to be included in the dry
adiabats. If not specified, they will be linearly distributed
across the current plotted pressure range.
theta : array_like, optional
1-dimensional array of potential temperature values for dry
adiabats. By default these will be generated based on the current
temperature limits.
kwargs
Other keyword arguments to pass to
`matplotlib.collections.LineCollection`
See Also
--------
plot_moist_adiabats
`matplotlib.collections.LineCollection`
`metpy.calc.dry_lapse` | Below is the instruction that describes the task:
### Input:
r'''Plot dry adiabats.
Adds dry adiabats (lines of constant potential temperature) to the
plot. The default style of these lines is dashed red lines with an
alpha value of 0.5. These can be overridden using keyword arguments.
Parameters
----------
p : array_like, optional
1-dimensional array of pressure values to be included in the dry
adiabats. If not specified, they will be linearly distributed
across the current plotted pressure range.
theta : array_like, optional
1-dimensional array of potential temperature values for dry
adiabats. By default these will be generated based on the current
temperature limits.
kwargs
Other keyword arguments to pass to
`matplotlib.collections.LineCollection`
See Also
--------
plot_moist_adiabats
`matplotlib.collections.LineCollection`
`metpy.calc.dry_lapse`
### Response:
def plot_dry_adiabats(self, p=None, theta=None, **kwargs):
r'''Plot dry adiabats.
Adds dry adiabats (lines of constant potential temperature) to the
plot. The default style of these lines is dashed red lines with an
alpha value of 0.5. These can be overridden using keyword arguments.
Parameters
----------
p : array_like, optional
1-dimensional array of pressure values to be included in the dry
adiabats. If not specified, they will be linearly distributed
across the current plotted pressure range.
theta : array_like, optional
1-dimensional array of potential temperature values for dry
adiabats. By default these will be generated based on the current
temperature limits.
kwargs
Other keyword arguments to pass to
`matplotlib.collections.LineCollection`
See Also
--------
plot_moist_adiabats
`matplotlib.collections.LineCollection`
`metpy.calc.dry_lapse`
'''
for artist in self._dry_adiabats:
artist.remove()
self._dry_adiabats = []
# Determine set of starting temps if necessary
if theta is None:
xmin, xmax = self.get_xlim()
theta = np.arange(xmin, xmax + 201, 10)
# Get pressure levels based on ylims if necessary
if p is None:
p = np.linspace(*self.get_ylim())
# Assemble into data for plotting
t = calculate('T', theta=theta[:, None], p=p, p_units='hPa',
T_units='degC', theta_units='degC')
linedata = [np.vstack((ti, p)).T for ti in t]
# Add to plot
kwargs.setdefault('clip_on', True)
kwargs.setdefault('colors', '#A65300')
kwargs.setdefault('linestyles', '-')
kwargs.setdefault('alpha', 1)
kwargs.setdefault('linewidth', 0.5)
kwargs.setdefault('zorder', 1.1)
collection = LineCollection(linedata, **kwargs)
self._dry_adiabats.append(collection)
self.add_collection(collection)
theta = theta.flatten()
T_label = calculate('T', p=140, p_units='hPa', theta=theta,
T_units='degC', theta_units='degC')
for i in range(len(theta)):
text = self.text(
T_label[i], 140, '{:.0f}'.format(theta[i]),
fontsize=8, ha='left', va='center', rotation=-60,
color='#A65300', bbox={
'facecolor': 'w', 'edgecolor': 'w', 'alpha': 0,
}, zorder=1.2)
text.set_clip_on(True)
self._dry_adiabats.append(text) |
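The line-data assembly in `plot_dry_adiabats` relies on NumPy broadcasting: `theta[:, None]` against a 1-D `p` yields a `(n_theta, n_p)` grid, one row per adiabat, and each row is then stacked with `p` into the `(n_p, 2)` vertex arrays that `LineCollection` expects. A self-contained sketch (the standard Poisson dry-lapse relation stands in here for the `calculate` helper):

```python
import numpy as np

theta = np.array([280.0, 300.0, 320.0])       # potential temperatures (K)
p = np.linspace(1000.0, 100.0, 5)             # pressure levels (hPa)

# (3, 1) broadcast against (5,) -> (3, 5): one temperature profile per theta
t = theta[:, None] * (p / 1000.0) ** 0.2854

# Each profile becomes an (n_p, 2) array of (T, p) vertices for LineCollection
linedata = [np.vstack((ti, p)).T for ti in t]
print(len(linedata), linedata[0].shape)  # 3 (5, 2)
```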
def _backup_dir_item(self, dir_path, process_bar):
"""
Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance
"""
self.path_helper.set_src_filepath(dir_path)
if self.path_helper.abs_src_filepath is None:
self.total_errored_items += 1
log.info("Can't backup %r", dir_path)
# self.summary(no, dir_path.stat.st_mtime, end=" ")
if dir_path.is_symlink:
self.summary("TODO Symlink: %s" % dir_path)
return
if dir_path.resolve_error is not None:
self.summary("TODO resolve error: %s" % dir_path.resolve_error)
pprint_path(dir_path)
return
if dir_path.different_path:
self.summary("TODO different path:")
pprint_path(dir_path)
return
if dir_path.is_dir:
self.summary("TODO dir: %s" % dir_path)
elif dir_path.is_file:
# self.summary("Normal file: %s", dir_path)
file_backup = FileBackup(dir_path, self.path_helper, self.backup_run)
old_backup_entry = self.fast_compare(dir_path)
if old_backup_entry is not None:
# We can just link the file from an old backup
file_backup.fast_deduplication_backup(old_backup_entry, process_bar)
else:
file_backup.deduplication_backup(process_bar)
assert file_backup.fast_backup is not None, dir_path.path
assert file_backup.file_linked is not None, dir_path.path
file_size = dir_path.stat.st_size
if file_backup.file_linked:
# os.link() was used
self.total_file_link_count += 1
self.total_stined_bytes += file_size
else:
self.total_new_file_count += 1
self.total_new_bytes += file_size
if file_backup.fast_backup:
self.total_fast_backup += 1
else:
self.summary("TODO: %s" % dir_path)
pprint_path(dir_path) | Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance | Below is the instruction that describes the task:
### Input:
Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance
### Response:
def _backup_dir_item(self, dir_path, process_bar):
"""
Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance
"""
self.path_helper.set_src_filepath(dir_path)
if self.path_helper.abs_src_filepath is None:
self.total_errored_items += 1
log.info("Can't backup %r", dir_path)
# self.summary(no, dir_path.stat.st_mtime, end=" ")
if dir_path.is_symlink:
self.summary("TODO Symlink: %s" % dir_path)
return
if dir_path.resolve_error is not None:
self.summary("TODO resolve error: %s" % dir_path.resolve_error)
pprint_path(dir_path)
return
if dir_path.different_path:
self.summary("TODO different path:")
pprint_path(dir_path)
return
if dir_path.is_dir:
self.summary("TODO dir: %s" % dir_path)
elif dir_path.is_file:
# self.summary("Normal file: %s", dir_path)
file_backup = FileBackup(dir_path, self.path_helper, self.backup_run)
old_backup_entry = self.fast_compare(dir_path)
if old_backup_entry is not None:
# We can just link the file from an old backup
file_backup.fast_deduplication_backup(old_backup_entry, process_bar)
else:
file_backup.deduplication_backup(process_bar)
assert file_backup.fast_backup is not None, dir_path.path
assert file_backup.file_linked is not None, dir_path.path
file_size = dir_path.stat.st_size
if file_backup.file_linked:
# os.link() was used
self.total_file_link_count += 1
self.total_stined_bytes += file_size
else:
self.total_new_file_count += 1
self.total_new_bytes += file_size
if file_backup.fast_backup:
self.total_fast_backup += 1
else:
self.summary("TODO: %s" % dir_path)
pprint_path(dir_path) |
async def run(self, wait_for_completion=True):
"""Run scene.
Parameters:
* wait_for_completion: If set, function will return
after device has reached target position.
"""
activate_scene = ActivateScene(
pyvlx=self.pyvlx,
wait_for_completion=wait_for_completion,
scene_id=self.scene_id)
await activate_scene.do_api_call()
if not activate_scene.success:
raise PyVLXException("Unable to activate scene") | Run scene.
Parameters:
* wait_for_completion: If set, function will return
after device has reached target position. | Below is the instruction that describes the task:
### Input:
Run scene.
Parameters:
* wait_for_completion: If set, function will return
after device has reached target position.
### Response:
async def run(self, wait_for_completion=True):
"""Run scene.
Parameters:
* wait_for_completion: If set, function will return
after device has reached target position.
"""
activate_scene = ActivateScene(
pyvlx=self.pyvlx,
wait_for_completion=wait_for_completion,
scene_id=self.scene_id)
await activate_scene.do_api_call()
if not activate_scene.success:
raise PyVLXException("Unable to activate scene") |
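The command/await pattern in this entry can be sketched with a stub in place of the network call (the class names with `Stub` are stand-ins, not the real pyvlx objects):

```python
import asyncio

class PyVLXException(Exception):
    pass

class ActivateSceneStub:
    """Stand-in for ActivateScene: records the request and always succeeds."""
    def __init__(self, scene_id, wait_for_completion=True):
        self.scene_id = scene_id
        self.wait_for_completion = wait_for_completion
        self.success = False
    async def do_api_call(self):
        await asyncio.sleep(0)   # pretend to talk to the gateway
        self.success = True

async def run_scene(scene_id, wait_for_completion=True):
    cmd = ActivateSceneStub(scene_id, wait_for_completion)
    await cmd.do_api_call()
    if not cmd.success:
        raise PyVLXException("Unable to activate scene")
    return cmd

cmd = asyncio.run(run_scene(3))
print(cmd.success)  # True
```

The success check after the await is the important part: the coroutine only raises once the API call has actually reported back.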
def listAllFiles (directory, suffix=None, abspath=False):
"""Returns the list of all files within the input directory and
all subdirectories.
"""
files = []
directory = expandPath(directory)
for dirpath, dirnames, filenames in os.walk(directory, followlinks=True):
if suffix:
filenames = [f for f in filenames if f.endswith(suffix)]
for filename in filenames:
filepath = os.path.join(dirpath, filename)
if not abspath:
filepath = os.path.relpath(filepath, start=directory)
# os.path.join(path, filename)
files.append(filepath)
return files | Returns the list of all files within the input directory and
all subdirectories. | Below is the instruction that describes the task:
### Input:
Returns the list of all files within the input directory and
all subdirectories.
### Response:
def listAllFiles (directory, suffix=None, abspath=False):
"""Returns the list of all files within the input directory and
all subdirectories.
"""
files = []
directory = expandPath(directory)
for dirpath, dirnames, filenames in os.walk(directory, followlinks=True):
if suffix:
filenames = [f for f in filenames if f.endswith(suffix)]
for filename in filenames:
filepath = os.path.join(dirpath, filename)
if not abspath:
filepath = os.path.relpath(filepath, start=directory)
# os.path.join(path, filename)
files.append(filepath)
return files |
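The walk-and-filter logic in this entry is pure stdlib and easy to verify against a temporary tree (function and file names below are illustrative):

```python
import os
import tempfile

def list_all_files(directory, suffix=None, abspath=False):
    """Minimal re-statement of listAllFiles for illustration."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(directory, followlinks=True):
        if suffix:
            filenames = [f for f in filenames if f.endswith(suffix)]
        for filename in filenames:
            filepath = os.path.join(dirpath, filename)
            if not abspath:
                filepath = os.path.relpath(filepath, start=directory)
            found.append(filepath)
    return found

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "sub"))
    for name in ("a.txt", "b.log", os.path.join("sub", "c.txt")):
        open(os.path.join(root, name), "w").close()
    txt = sorted(list_all_files(root, suffix=".txt"))
print(txt)  # e.g. ['a.txt', 'sub/c.txt'] on POSIX; b.log is filtered out
```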
def candles(self, pair, timeframe=None):
"""Return a queue containing all received candles data.
:param pair: str, Symbol pair to request data for
:param timeframe: str
:return: Queue()
"""
timeframe = '1m' if not timeframe else timeframe
key = ('candles', pair, timeframe)
return self.queue_processor.candles[key] | Return a queue containing all received candles data.
:param pair: str, Symbol pair to request data for
:param timeframe: str
:return: Queue() | Below is the instruction that describes the task:
### Input:
Return a queue containing all received candles data.
:param pair: str, Symbol pair to request data for
:param timeframe: str
:return: Queue()
### Response:
def candles(self, pair, timeframe=None):
"""Return a queue containing all received candles data.
:param pair: str, Symbol pair to request data for
:param timeframe: str
:return: Queue()
"""
timeframe = '1m' if not timeframe else timeframe
key = ('candles', pair, timeframe)
return self.queue_processor.candles[key] |
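The `('candles', pair, timeframe)` key scheme in this entry amounts to a dict of queues keyed by tuple, with `'1m'` as the default timeframe. A minimal sketch (the `QueueProcessor` itself is omitted):

```python
from collections import defaultdict
import queue

# One queue per (channel, pair, timeframe) tuple, created on first access
candle_queues = defaultdict(queue.Queue)

def candles_for(pair, timeframe=None):
    timeframe = '1m' if not timeframe else timeframe
    return candle_queues[('candles', pair, timeframe)]

q = candles_for('BTCUSD')            # implicitly the '1m' queue
q.put({'open': 1.0, 'close': 2.0})
same = candles_for('BTCUSD', '1m')   # same key -> same queue object
print(same is q)  # True
```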
def item_from_topics(key, topics):
"""Get binding from `topics` via `key`
Example:
{0} == hello --> be in hello world
{1} == world --> be in hello world
Returns:
Single topic matching the key
Raises:
IndexError (int): With number of required
arguments for the key
"""
if re.match(r"{\d+}", key):
pos = int(key.strip("{}"))
try:
binding = topics[pos]
except IndexError:
raise IndexError(pos + 1)
else:
echo("be.yaml template key not recognised")
sys.exit(PROJECT_ERROR)
return binding | Get binding from `topics` via `key`
Example:
{0} == hello --> be in hello world
{1} == world --> be in hello world
Returns:
Single topic matching the key
Raises:
IndexError (int): With number of required
arguments for the key | Below is the instruction that describes the task:
### Input:
Get binding from `topics` via `key`
Example:
{0} == hello --> be in hello world
{1} == world --> be in hello world
Returns:
Single topic matching the key
Raises:
IndexError (int): With number of required
arguments for the key
### Response:
def item_from_topics(key, topics):
"""Get binding from `topics` via `key`
Example:
{0} == hello --> be in hello world
{1} == world --> be in hello world
Returns:
Single topic matching the key
Raises:
IndexError (int): With number of required
arguments for the key
"""
if re.match(r"{\d+}", key):
pos = int(key.strip("{}"))
try:
binding = topics[pos]
except IndexError:
raise IndexError(pos + 1)
else:
echo("be.yaml template key not recognised")
sys.exit(PROJECT_ERROR)
return binding |
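The `{N}` key handling in this entry is a small regex-plus-index lookup. A sketch that raises instead of calling `sys.exit`, so it can be exercised directly (the function name is illustrative):

```python
import re

def resolve_key(key, topics):
    """Resolve a '{N}' template key against a list of topics (sketch)."""
    if re.match(r"\{\d+\}", key):
        pos = int(key.strip("{}"))
        try:
            return topics[pos]
        except IndexError:
            # as in the original: report how many topics would be required
            raise IndexError(pos + 1)
    raise ValueError("template key not recognised: %r" % key)

topics = ["hello", "world"]
print(resolve_key("{0}", topics))  # hello
print(resolve_key("{1}", topics))  # world
try:
    resolve_key("{2}", topics)
except IndexError as exc:
    print(exc.args[0])  # 3 -- three topics would be required
```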
def get_cimobject_header(obj):
"""
Return the value for the CIM-XML extension header field 'CIMObject', using
the given object.
This function implements the rules defined in DSP0200 section 6.3.7
"CIMObject". The format of the CIMObject value is similar but not identical
to a local WBEM URI (one without namespace type and authority), as defined
in DSP0207.
One difference is that DSP0207 requires a leading slash for a local WBEM
URI, e.g. '/root/cimv2:CIM_Class.k=1', while the CIMObject value has no
leading slash, e.g. 'root/cimv2:CIM_Class.k=1'.
Another difference is that the CIMObject value for instance paths has
provisions for an instance path without keys, while WBEM URIs do not have
that. Pywbem does not support that.
"""
# Local namespace path
if isinstance(obj, six.string_types):
return obj
# Local class path
if isinstance(obj, CIMClassName):
return obj.to_wbem_uri(format='cimobject')
# Local instance path
if isinstance(obj, CIMInstanceName):
return obj.to_wbem_uri(format='cimobject')
raise TypeError(
_format("Invalid object type {0} to generate CIMObject header value "
"from", type(obj))) | Return the value for the CIM-XML extension header field 'CIMObject', using
the given object.
This function implements the rules defined in DSP0200 section 6.3.7
"CIMObject". The format of the CIMObject value is similar but not identical
to a local WBEM URI (one without namespace type and authority), as defined
in DSP0207.
One difference is that DSP0207 requires a leading slash for a local WBEM
URI, e.g. '/root/cimv2:CIM_Class.k=1', while the CIMObject value has no
leading slash, e.g. 'root/cimv2:CIM_Class.k=1'.
Another difference is that the CIMObject value for instance paths has
provisions for an instance path without keys, while WBEM URIs do not have
that. Pywbem does not support that. | Below is the instruction that describes the task:
### Input:
Return the value for the CIM-XML extension header field 'CIMObject', using
the given object.
This function implements the rules defined in DSP0200 section 6.3.7
"CIMObject". The format of the CIMObject value is similar but not identical
to a local WBEM URI (one without namespace type and authority), as defined
in DSP0207.
One difference is that DSP0207 requires a leading slash for a local WBEM
URI, e.g. '/root/cimv2:CIM_Class.k=1', while the CIMObject value has no
leading slash, e.g. 'root/cimv2:CIM_Class.k=1'.
Another difference is that the CIMObject value for instance paths has
provisions for an instance path without keys, while WBEM URIs do not have
that. Pywbem does not support that.
### Response:
def get_cimobject_header(obj):
"""
Return the value for the CIM-XML extension header field 'CIMObject', using
the given object.
This function implements the rules defined in DSP0200 section 6.3.7
"CIMObject". The format of the CIMObject value is similar but not identical
to a local WBEM URI (one without namespace type and authority), as defined
in DSP0207.
One difference is that DSP0207 requires a leading slash for a local WBEM
URI, e.g. '/root/cimv2:CIM_Class.k=1', while the CIMObject value has no
leading slash, e.g. 'root/cimv2:CIM_Class.k=1'.
Another difference is that the CIMObject value for instance paths has
provisions for an instance path without keys, while WBEM URIs do not have
that. Pywbem does not support that.
"""
# Local namespace path
if isinstance(obj, six.string_types):
return obj
# Local class path
if isinstance(obj, CIMClassName):
return obj.to_wbem_uri(format='cimobject')
# Local instance path
if isinstance(obj, CIMInstanceName):
return obj.to_wbem_uri(format='cimobject')
raise TypeError(
_format("Invalid object type {0} to generate CIMObject header value "
"from", type(obj))) |
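The header rule in this entry reduces to dispatch on object type: strings pass through, path objects serialize themselves, anything else is a `TypeError`. A runnable sketch with minimal stand-ins for the pywbem path classes (not the real API):

```python
class CIMClassName:
    """Minimal stand-in for pywbem.CIMClassName (illustrative only)."""
    def __init__(self, uri):
        self._uri = uri
    def to_wbem_uri(self, format='cimobject'):
        # real pywbem renders e.g. 'root/cimv2:CIM_Class' with no leading slash
        return self._uri

def get_cimobject_header(obj):
    if isinstance(obj, str):
        return obj                              # local namespace path
    if isinstance(obj, CIMClassName):
        return obj.to_wbem_uri(format='cimobject')  # local class path
    raise TypeError(
        "Invalid object type %s to generate CIMObject header value from"
        % type(obj))

print(get_cimobject_header('root/cimv2'))                          # root/cimv2
print(get_cimobject_header(CIMClassName('root/cimv2:CIM_Class')))  # root/cimv2:CIM_Class
```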
def get_arr(self):
"""
Get the heatmap's array within the value range originally provided in ``__init__()``.
The HeatmapsOnImage object saves heatmaps internally in the value range ``(min=0.0, max=1.0)``.
This function converts the internal representation to ``(min=min_value, max=max_value)``,
where ``min_value`` and ``max_value`` are provided upon instantiation of the object.
Returns
-------
result : (H,W) ndarray or (H,W,C) ndarray
Heatmap array. Dtype is float32.
"""
if self.arr_was_2d and self.arr_0to1.shape[2] == 1:
arr = self.arr_0to1[:, :, 0]
else:
arr = self.arr_0to1
eps = np.finfo(np.float32).eps
min_is_zero = 0.0 - eps < self.min_value < 0.0 + eps
max_is_one = 1.0 - eps < self.max_value < 1.0 + eps
if min_is_zero and max_is_one:
return np.copy(arr)
else:
diff = self.max_value - self.min_value
return self.min_value + diff * arr | Get the heatmap's array within the value range originally provided in ``__init__()``.
The HeatmapsOnImage object saves heatmaps internally in the value range ``(min=0.0, max=1.0)``.
This function converts the internal representation to ``(min=min_value, max=max_value)``,
where ``min_value`` and ``max_value`` are provided upon instantiation of the object.
Returns
-------
result : (H,W) ndarray or (H,W,C) ndarray
Heatmap array. Dtype is float32. | Below is the instruction that describes the task:
### Input:
Get the heatmap's array within the value range originally provided in ``__init__()``.
The HeatmapsOnImage object saves heatmaps internally in the value range ``(min=0.0, max=1.0)``.
This function converts the internal representation to ``(min=min_value, max=max_value)``,
where ``min_value`` and ``max_value`` are provided upon instantiation of the object.
Returns
-------
result : (H,W) ndarray or (H,W,C) ndarray
Heatmap array. Dtype is float32.
### Response:
def get_arr(self):
"""
Get the heatmap's array within the value range originally provided in ``__init__()``.
The HeatmapsOnImage object saves heatmaps internally in the value range ``(min=0.0, max=1.0)``.
This function converts the internal representation to ``(min=min_value, max=max_value)``,
where ``min_value`` and ``max_value`` are provided upon instantiation of the object.
Returns
-------
result : (H,W) ndarray or (H,W,C) ndarray
Heatmap array. Dtype is float32.
"""
if self.arr_was_2d and self.arr_0to1.shape[2] == 1:
arr = self.arr_0to1[:, :, 0]
else:
arr = self.arr_0to1
eps = np.finfo(np.float32).eps
min_is_zero = 0.0 - eps < self.min_value < 0.0 + eps
max_is_one = 1.0 - eps < self.max_value < 1.0 + eps
if min_is_zero and max_is_one:
return np.copy(arr)
else:
diff = self.max_value - self.min_value
return self.min_value + diff * arr |
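The value-range mapping in `get_arr` is a single affine transform, `min_value + (max_value - min_value) * arr`, with an epsilon check so the already-normalized case is just a copy. A self-contained sketch of that mapping:

```python
import numpy as np

arr_0to1 = np.array([[0.0, 0.25], [0.5, 1.0]], dtype=np.float32)
min_value, max_value = -2.0, 2.0

eps = np.finfo(np.float32).eps
min_is_zero = 0.0 - eps < min_value < 0.0 + eps
max_is_one = 1.0 - eps < max_value < 1.0 + eps
if min_is_zero and max_is_one:
    result = np.copy(arr_0to1)                    # already in the target range
else:
    result = min_value + (max_value - min_value) * arr_0to1
print(result)  # values now span (-2.0, 2.0) instead of (0.0, 1.0)
```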
def iris_investigate(self, domains=None, data_updated_after=None, expiration_date=None,
create_date=None, active=None, **kwargs):
"""Returns back a list of domains based on the provided filters.
The following filters are available beyond what is parameterized as kwargs:
- ip: Search for domains having this IP.
- email: Search for domains with this email in their data.
- email_domain: Search for domains where the email address uses this domain.
- nameserver_host: Search for domains with this nameserver.
- nameserver_domain: Search for domains with a nameserver that has this domain.
- nameserver_ip: Search for domains with a nameserver on this IP.
- registrar: Search for domains with this registrar.
- registrant: Search for domains with this registrant name.
- registrant_org: Search for domains with this registrant organization.
- mailserver_host: Search for domains with this mailserver.
- mailserver_domain: Search for domains with a mailserver that has this domain.
- mailserver_ip: Search for domains with a mailserver on this IP.
- redirect_domain: Search for domains which redirect to this domain.
- ssl_hash: Search for domains which have an SSL certificate with this hash.
- ssl_subject: Search for domains which have an SSL certificate with this subject string.
- ssl_email: Search for domains which have an SSL certificate with this email in it.
- ssl_org: Search for domains which have an SSL certificate with this organization in it.
- google_analytics: Search for domains which have this Google Analytics code.
- adsense: Search for domains which have this AdSense code.
- tld: Filter by TLD. Must be combined with another parameter.
You can loop over results of your investigation as if it was a native Python list:
for result in api.iris_investigate(ip='199.30.228.112'): # Enables looping over all related results
api.iris_investigate(QUERY)['results_count'] Returns the number of results returned with this request
api.iris_investigate(QUERY)['total_count'] Returns the number of results available within Iris
api.iris_investigate(QUERY)['missing_domains'] Returns any domains that we were unable to find
api.iris_investigate(QUERY)['limit_exceeded'] Returns True if you've exceeded your API usage
api.iris_investigate(QUERY)['position'] Returns the position key that can be used to retrieve the next page:
next_page = api.iris_investigate(QUERY, position=api.iris_investigate(QUERY)['position'])
for enrichment in api.iris_enrich(i): # Enables looping over all returned enriched domains
"""
if not (kwargs or domains):
raise ValueError('Need to define investigation using kwarg filters or domains')
if type(domains) in (list, tuple):
domains = ','.join(domains)
if hasattr(data_updated_after, 'strftime'):
data_updated_after = data_updated_after.strftime('%Y-%m-%d')
if hasattr(expiration_date, 'strftime'):
expiration_date = expiration_date.strftime('%Y-%m-%d')
if hasattr(create_date, 'strftime'):
create_date = create_date.strftime('%Y-%m-%d')
if type(active) == bool:
active = str(active).lower()
return self._results('iris-investigate', '/v1/iris-investigate/', domain=domains,
data_updated_after=data_updated_after, expiration_date=expiration_date,
create_date=create_date, items_path=('results', ), **kwargs) | Returns back a list of domains based on the provided filters.
The following filters are available beyond what is parameterized as kwargs:
- ip: Search for domains having this IP.
- email: Search for domains with this email in their data.
- email_domain: Search for domains where the email address uses this domain.
- nameserver_host: Search for domains with this nameserver.
- nameserver_domain: Search for domains with a nameserver that has this domain.
- nameserver_ip: Search for domains with a nameserver on this IP.
- registrar: Search for domains with this registrar.
- registrant: Search for domains with this registrant name.
- registrant_org: Search for domains with this registrant organization.
- mailserver_host: Search for domains with this mailserver.
- mailserver_domain: Search for domains with a mailserver that has this domain.
- mailserver_ip: Search for domains with a mailserver on this IP.
- redirect_domain: Search for domains which redirect to this domain.
- ssl_hash: Search for domains which have an SSL certificate with this hash.
- ssl_subject: Search for domains which have an SSL certificate with this subject string.
- ssl_email: Search for domains which have an SSL certificate with this email in it.
- ssl_org: Search for domains which have an SSL certificate with this organization in it.
- google_analytics: Search for domains which have this Google Analytics code.
- adsense: Search for domains which have this AdSense code.
- tld: Filter by TLD. Must be combined with another parameter.
You can loop over results of your investigation as if it was a native Python list:
for result in api.iris_investigate(ip='199.30.228.112'): # Enables looping over all related results
api.iris_investigate(QUERY)['results_count'] Returns the number of results returned with this request
api.iris_investigate(QUERY)['total_count'] Returns the number of results available within Iris
api.iris_investigate(QUERY)['missing_domains'] Returns any domains that we were unable to find
api.iris_investigate(QUERY)['limit_exceeded'] Returns True if you've exceeded your API usage
api.iris_investigate(QUERY)['position'] Returns the position key that can be used to retrieve the next page:
next_page = api.iris_investigate(QUERY, position=api.iris_investigate(QUERY)['position'])
for enrichment in api.iris_enrich(i): # Enables looping over all returned enriched domains | Below is the instruction that describes the task:
### Input:
Returns back a list of domains based on the provided filters.
The following filters are available beyond what is parameterized as kwargs:
- ip: Search for domains having this IP.
- email: Search for domains with this email in their data.
- email_domain: Search for domains where the email address uses this domain.
- nameserver_host: Search for domains with this nameserver.
- nameserver_domain: Search for domains with a nameserver that has this domain.
- nameserver_ip: Search for domains with a nameserver on this IP.
- registrar: Search for domains with this registrar.
- registrant: Search for domains with this registrant name.
- registrant_org: Search for domains with this registrant organization.
- mailserver_host: Search for domains with this mailserver.
- mailserver_domain: Search for domains with a mailserver that has this domain.
- mailserver_ip: Search for domains with a mailserver on this IP.
- redirect_domain: Search for domains which redirect to this domain.
- ssl_hash: Search for domains which have an SSL certificate with this hash.
- ssl_subject: Search for domains which have an SSL certificate with this subject string.
- ssl_email: Search for domains which have an SSL certificate with this email in it.
- ssl_org: Search for domains which have an SSL certificate with this organization in it.
- google_analytics: Search for domains which have this Google Analytics code.
- adsense: Search for domains which have this AdSense code.
- tld: Filter by TLD. Must be combined with another parameter.
You can loop over results of your investigation as if it was a native Python list:
for result in api.iris_investigate(ip='199.30.228.112'): # Enables looping over all related results
api.iris_investigate(QUERY)['results_count'] Returns the number of results returned with this request
api.iris_investigate(QUERY)['total_count'] Returns the number of results available within Iris
api.iris_investigate(QUERY)['missing_domains'] Returns any domains that we were unable to find
api.iris_investigate(QUERY)['limit_exceeded'] Returns True if you've exceeded your API usage
api.iris_investigate(QUERY)['position'] Returns the position key that can be used to retrieve the next page:
next_page = api.iris_investigate(QUERY, position=api.iris_investigate(QUERY)['position'])
for enrichment in api.iris_enrich(i): # Enables looping over all returned enriched domains
### Response:
def iris_investigate(self, domains=None, data_updated_after=None, expiration_date=None,
create_date=None, active=None, **kwargs):
"""Returns back a list of domains based on the provided filters.
The following filters are available beyond what is parameterized as kwargs:
- ip: Search for domains having this IP.
- email: Search for domains with this email in their data.
- email_domain: Search for domains where the email address uses this domain.
- nameserver_host: Search for domains with this nameserver.
- nameserver_domain: Search for domains with a nameserver that has this domain.
- nameserver_ip: Search for domains with a nameserver on this IP.
- registrar: Search for domains with this registrar.
- registrant: Search for domains with this registrant name.
- registrant_org: Search for domains with this registrant organization.
- mailserver_host: Search for domains with this mailserver.
- mailserver_domain: Search for domains with a mailserver that has this domain.
- mailserver_ip: Search for domains with a mailserver on this IP.
- redirect_domain: Search for domains which redirect to this domain.
- ssl_hash: Search for domains which have an SSL certificate with this hash.
- ssl_subject: Search for domains which have an SSL certificate with this subject string.
- ssl_email: Search for domains which have an SSL certificate with this email in it.
- ssl_org: Search for domains which have an SSL certificate with this organization in it.
- google_analytics: Search for domains which have this Google Analytics code.
- adsense: Search for domains which have this AdSense code.
- tld: Filter by TLD. Must be combined with another parameter.
You can loop over results of your investigation as if it was a native Python list:
for result in api.iris_investigate(ip='199.30.228.112'): # Enables looping over all related results
api.iris_investigate(QUERY)['results_count'] Returns the number of results returned with this request
api.iris_investigate(QUERY)['total_count'] Returns the number of results available within Iris
api.iris_investigate(QUERY)['missing_domains'] Returns any domains that we were unable to find
api.iris_investigate(QUERY)['limit_exceeded'] Returns True if you've exceeded your API usage
api.iris_investigate(QUERY)['position'] Returns the position key that can be used to retrieve the next page:
next_page = api.iris_investigate(QUERY, position=api.iris_investigate(QUERY)['position'])
for enrichment in api.iris_enrich(i): # Enables looping over all returned enriched domains
"""
if not (kwargs or domains):
raise ValueError('Need to define investigation using kwarg filters or domains')
if type(domains) in (list, tuple):
domains = ','.join(domains)
if hasattr(data_updated_after, 'strftime'):
data_updated_after = data_updated_after.strftime('%Y-%m-%d')
if hasattr(expiration_date, 'strftime'):
expiration_date = expiration_date.strftime('%Y-%m-%d')
if hasattr(create_date, 'strftime'):
create_date = create_date.strftime('%Y-%m-%d')
if type(active) == bool:
active = str(active).lower()
return self._results('iris-investigate', '/v1/iris-investigate/', domain=domains,
data_updated_after=data_updated_after, expiration_date=expiration_date,
create_date=create_date, items_path=('results', ), **kwargs) |
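The date parameters in the response above are normalized with `strftime`; a common pitfall there is confusing `%m` (zero-padded month) with `%M` (minute). A minimal sketch with a hypothetical timestamp:

```python
from datetime import datetime

# Hypothetical timestamp, used only to illustrate the format codes.
ts = datetime(2023, 7, 4, 10, 30)

# '%M' is the minute field; '%m' is the zero-padded month, so only the
# second call produces an ISO-style calendar date.
as_minutes = ts.strftime('%Y-%M-%d')
as_date = ts.strftime('%Y-%m-%d')
```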
def receive(self):
""" receive the next PDU from the watchman service
If the client has activated subscriptions or logs then
this PDU may be a unilateral PDU sent by the service to
inform the client of a log event or subscription change.
It may also simply be the response portion of a request
initiated by query.
There are clients in production that subscribe and call
this in a loop to retrieve all subscription responses,
so care should be taken when making changes here.
"""
self._connect()
result = self.recvConn.receive()
if self._hasprop(result, "error"):
raise CommandError(result["error"])
if self._hasprop(result, "log"):
self.logs.append(result["log"])
if self._hasprop(result, "subscription"):
sub = result["subscription"]
if not (sub in self.subs):
self.subs[sub] = []
self.subs[sub].append(result)
# also accumulate in {root,sub} keyed store
root = os.path.normpath(os.path.normcase(result["root"]))
if not root in self.sub_by_root:
self.sub_by_root[root] = {}
if not sub in self.sub_by_root[root]:
self.sub_by_root[root][sub] = []
self.sub_by_root[root][sub].append(result)
return result | receive the next PDU from the watchman service
If the client has activated subscriptions or logs then
this PDU may be a unilateral PDU sent by the service to
inform the client of a log event or subscription change.
It may also simply be the response portion of a request
initiated by query.
There are clients in production that subscribe and call
this in a loop to retrieve all subscription responses,
so care should be taken when making changes here. | Below is the instruction that describes the task:
### Input:
receive the next PDU from the watchman service
If the client has activated subscriptions or logs then
this PDU may be a unilateral PDU sent by the service to
inform the client of a log event or subscription change.
It may also simply be the response portion of a request
initiated by query.
There are clients in production that subscribe and call
this in a loop to retrieve all subscription responses,
so care should be taken when making changes here.
### Response:
def receive(self):
""" receive the next PDU from the watchman service
If the client has activated subscriptions or logs then
this PDU may be a unilateral PDU sent by the service to
inform the client of a log event or subscription change.
It may also simply be the response portion of a request
initiated by query.
There are clients in production that subscribe and call
this in a loop to retrieve all subscription responses,
so care should be taken when making changes here.
"""
self._connect()
result = self.recvConn.receive()
if self._hasprop(result, "error"):
raise CommandError(result["error"])
if self._hasprop(result, "log"):
self.logs.append(result["log"])
if self._hasprop(result, "subscription"):
sub = result["subscription"]
if not (sub in self.subs):
self.subs[sub] = []
self.subs[sub].append(result)
# also accumulate in {root,sub} keyed store
root = os.path.normpath(os.path.normcase(result["root"]))
if not root in self.sub_by_root:
self.sub_by_root[root] = {}
if not sub in self.sub_by_root[root]:
self.sub_by_root[root][sub] = []
self.sub_by_root[root][sub].append(result)
return result |
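The per-subscription bookkeeping in `receive` can be sketched on its own with `defaultdict`, which removes the explicit existence checks; the PDU dict below is a hypothetical stand-in for a real watchman result:

```python
import os
from collections import defaultdict

# Minimal sketch of the two accumulators above.
subs = defaultdict(list)                               # keyed by sub name
sub_by_root = defaultdict(lambda: defaultdict(list))   # keyed by root, then sub

def accumulate(result):
    sub = result["subscription"]
    subs[sub].append(result)
    # Normalize the root path so equivalent spellings share one bucket.
    root = os.path.normpath(os.path.normcase(result["root"]))
    sub_by_root[root][sub].append(result)

accumulate({"subscription": "mysub", "root": "/repo/./src"})
```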
def get_icon_for(self, brain_or_object):
"""Get the navigation portlet icon for the brain or object
The cache key ensures that the lookup is done only once per domain name
"""
portal_types = api.get_tool("portal_types")
fti = portal_types.getTypeInfo(api.get_portal_type(brain_or_object))
icon = fti.getIcon()
if not icon:
return ""
# Always try to get the big icon for high-res displays
icon_big = icon.replace(".png", "_big.png")
# fall back to a default icon if the looked up icon does not exist
if self.context.restrictedTraverse(icon_big, None) is None:
icon_big = None
portal_url = api.get_url(api.get_portal())
title = api.get_title(brain_or_object)
html_tag = "<img title='{}' src='{}/{}' width='16' />".format(
title, portal_url, icon_big or icon)
logger.info("Generated Icon Tag for {}: {}".format(
api.get_path(brain_or_object), html_tag))
return html_tag | Get the navigation portlet icon for the brain or object
The cache key ensures that the lookup is done only once per domain name | Below is the instruction that describes the task:
### Input:
Get the navigation portlet icon for the brain or object
The cache key ensures that the lookup is done only once per domain name
### Response:
def get_icon_for(self, brain_or_object):
"""Get the navigation portlet icon for the brain or object
The cache key ensures that the lookup is done only once per domain name
"""
portal_types = api.get_tool("portal_types")
fti = portal_types.getTypeInfo(api.get_portal_type(brain_or_object))
icon = fti.getIcon()
if not icon:
return ""
# Always try to get the big icon for high-res displays
icon_big = icon.replace(".png", "_big.png")
# fall back to a default icon if the looked up icon does not exist
if self.context.restrictedTraverse(icon_big, None) is None:
icon_big = None
portal_url = api.get_url(api.get_portal())
title = api.get_title(brain_or_object)
html_tag = "<img title='{}' src='{}/{}' width='16' />".format(
title, portal_url, icon_big or icon)
logger.info("Generated Icon Tag for {}: {}".format(
api.get_path(brain_or_object), html_tag))
return html_tag |
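The icon lookup above boils down to a name rewrite plus a fallback in the `src` attribute; a self-contained sketch with hypothetical values:

```python
# Derive the high-res variant name, then fall back to the plain icon
# when the big one is unavailable (icon_big would be None in that case).
icon = "document.png"
icon_big = icon.replace(".png", "_big.png")

def icon_tag(title, portal_url, icon_big, icon):
    return "<img title='{}' src='{}/{}' width='16' />".format(
        title, portal_url, icon_big or icon)

tag = icon_tag("Report", "https://example.org", icon_big, icon)
```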
async def get_ltd_product(session, slug=None, url=None):
"""Get the product resource (JSON document) from the LSST the Docs API.
Parameters
----------
session : `aiohttp.ClientSession`
Your application's aiohttp client session.
See http://aiohttp.readthedocs.io/en/stable/client.html.
slug : `str`, optional
Slug identifying the product. This is the same as the subdomain.
For example, ``'ldm-151'`` is the slug for ``ldm-151.lsst.io``.
A full product URL can be provided instead, see ``url``.
url : `str`, optional
The full LTD Keeper URL for the product resource. For example,
``'https://keeper.lsst.codes/products/ldm-151'``. The ``slug``
can be provided instead.
Returns
-------
product : `dict`
Product dataset. See
https://ltd-keeper.lsst.io/products.html#get--products-(slug)
for fields.
"""
if url is None:
url = 'https://keeper.lsst.codes/products/{}'.format(slug)
async with session.get(url) as response:
data = await response.json()
return data | Get the product resource (JSON document) from the LSST the Docs API.
Parameters
----------
session : `aiohttp.ClientSession`
Your application's aiohttp client session.
See http://aiohttp.readthedocs.io/en/stable/client.html.
slug : `str`, optional
Slug identifying the product. This is the same as the subdomain.
For example, ``'ldm-151'`` is the slug for ``ldm-151.lsst.io``.
A full product URL can be provided instead, see ``url``.
url : `str`, optional
The full LTD Keeper URL for the product resource. For example,
``'https://keeper.lsst.codes/products/ldm-151'``. The ``slug``
can be provided instead.
Returns
-------
product : `dict`
Product dataset. See
https://ltd-keeper.lsst.io/products.html#get--products-(slug)
for fields. | Below is the instruction that describes the task:
### Input:
Get the product resource (JSON document) from the LSST the Docs API.
Parameters
----------
session : `aiohttp.ClientSession`
Your application's aiohttp client session.
See http://aiohttp.readthedocs.io/en/stable/client.html.
slug : `str`, optional
Slug identifying the product. This is the same as the subdomain.
For example, ``'ldm-151'`` is the slug for ``ldm-151.lsst.io``.
A full product URL can be provided instead, see ``url``.
url : `str`, optional
The full LTD Keeper URL for the product resource. For example,
``'https://keeper.lsst.codes/products/ldm-151'``. The ``slug``
can be provided instead.
Returns
-------
product : `dict`
Product dataset. See
https://ltd-keeper.lsst.io/products.html#get--products-(slug)
for fields.
### Response:
async def get_ltd_product(session, slug=None, url=None):
"""Get the product resource (JSON document) from the LSST the Docs API.
Parameters
----------
session : `aiohttp.ClientSession`
Your application's aiohttp client session.
See http://aiohttp.readthedocs.io/en/stable/client.html.
slug : `str`, optional
Slug identifying the product. This is the same as the subdomain.
For example, ``'ldm-151'`` is the slug for ``ldm-151.lsst.io``.
A full product URL can be provided instead, see ``url``.
url : `str`, optional
The full LTD Keeper URL for the product resource. For example,
``'https://keeper.lsst.codes/products/ldm-151'``. The ``slug``
can be provided instead.
Returns
-------
product : `dict`
Product dataset. See
https://ltd-keeper.lsst.io/products.html#get--products-(slug)
for fields.
"""
if url is None:
url = 'https://keeper.lsst.codes/products/{}'.format(slug)
async with session.get(url) as response:
data = await response.json()
return data |
def count(self, weighted=True):
"""Return numberic count of rows considered for cube response."""
return self._measures.weighted_n if weighted else self._measures.unweighted_n | Return numeric count of rows considered for cube response. | Below is the instruction that describes the task:
### Input:
Return numeric count of rows considered for cube response.
### Response:
def count(self, weighted=True):
"""Return numberic count of rows considered for cube response."""
return self._measures.weighted_n if weighted else self._measures.unweighted_n |
def _add_generate_sub_commands(self):
"""
Sub commands for generating models for usage by clients.
Currently supports Google Closure.
"""
gen_parser = self._subparsers_handle.add_parser(
name="gen",
help="generate client side model stubs, filters"
)
gen_parser.add_argument(
"-t",
"--template",
choices=['closure.model', 'closure.filter'],
default='closure.model',
required=True,
dest="template",
help="template to use for client side code generation"
)
gen_parser.add_argument(
"-m",
"--model",
required=True,
dest="models_definition",
help="path to models definition file or package"
)
gen_parser.add_argument(
"-o",
"--output",
default=".",
dest="output",
help="output path for generated code"
)
gen_parser.add_argument(
"-n",
"--namespace",
required=True,
dest="namespace",
help="namespace to use with template e.g prestans.data.model"
)
gen_parser.add_argument(
"-fn",
"--filter-namespace",
required=False,
default=None,
dest="filter_namespace",
help="filter namespace to use with template e.g prestans.data.filter"
) | Sub commands for generating models for usage by clients.
Currently supports Google Closure. | Below is the instruction that describes the task:
### Input:
Sub commands for generating models for usage by clients.
Currently supports Google Closure.
### Response:
def _add_generate_sub_commands(self):
"""
Sub commands for generating models for usage by clients.
Currently supports Google Closure.
"""
gen_parser = self._subparsers_handle.add_parser(
name="gen",
help="generate client side model stubs, filters"
)
gen_parser.add_argument(
"-t",
"--template",
choices=['closure.model', 'closure.filter'],
default='closure.model',
required=True,
dest="template",
help="template to use for client side code generation"
)
gen_parser.add_argument(
"-m",
"--model",
required=True,
dest="models_definition",
help="path to models definition file or package"
)
gen_parser.add_argument(
"-o",
"--output",
default=".",
dest="output",
help="output path for generated code"
)
gen_parser.add_argument(
"-n",
"--namespace",
required=True,
dest="namespace",
help="namespace to use with template e.g prestans.data.model"
)
gen_parser.add_argument(
"-fn",
"--filter-namespace",
required=False,
default=None,
dest="filter_namespace",
help="filter namespace to use with template e.g prestans.data.filter"
) |
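The `gen` sub-command above can be reconstructed standalone to see what namespace `argparse` produces (file and namespace names here are hypothetical):

```python
import argparse

# Rebuild just the "gen" sub-command outside the surrounding class.
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")
gen = subparsers.add_parser("gen")
gen.add_argument("-t", "--template",
                 choices=["closure.model", "closure.filter"],
                 default="closure.model", required=True, dest="template")
gen.add_argument("-m", "--model", required=True, dest="models_definition")
gen.add_argument("-o", "--output", default=".", dest="output")
gen.add_argument("-n", "--namespace", required=True, dest="namespace")
gen.add_argument("-fn", "--filter-namespace", required=False,
                 default=None, dest="filter_namespace")

args = parser.parse_args(
    ["gen", "-t", "closure.model", "-m", "models.py", "-n", "app.model"])
```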
def report_up(self, service, port):
"""
Report the given service's present node as up by creating/updating
its respective znode in Zookeeper and setting the znode's data to
the serialized representation of the node.
Waits for zookeeper to be connected before taking any action.
"""
wait_on_any(self.connected, self.shutdown)
node = Node.current(service, port)
path = self.path_of(service, node)
data = node.serialize().encode()
znode = self.client.exists(path)
if not znode:
logger.debug("ZNode at %s does not exist, creating new one.", path)
self.client.create(path, value=data, ephemeral=True, makepath=True)
elif znode.owner_session_id != self.client.client_id[0]:
logger.debug("ZNode at %s not owned by us, recreating.", path)
txn = self.client.transaction()
txn.delete(path)
txn.create(path, value=data, ephemeral=True)
txn.commit()
else:
logger.debug("Setting node value to %r", data)
self.client.set(path, data) | Report the given service's present node as up by creating/updating
its respective znode in Zookeeper and setting the znode's data to
the serialized representation of the node.
Waits for zookeeper to be connected before taking any action. | Below is the instruction that describes the task:
### Input:
Report the given service's present node as up by creating/updating
its respective znode in Zookeeper and setting the znode's data to
the serialized representation of the node.
Waits for zookeeper to be connected before taking any action.
### Response:
def report_up(self, service, port):
"""
Report the given service's present node as up by creating/updating
its respective znode in Zookeeper and setting the znode's data to
the serialized representation of the node.
Waits for zookeeper to be connected before taking any action.
"""
wait_on_any(self.connected, self.shutdown)
node = Node.current(service, port)
path = self.path_of(service, node)
data = node.serialize().encode()
znode = self.client.exists(path)
if not znode:
logger.debug("ZNode at %s does not exist, creating new one.", path)
self.client.create(path, value=data, ephemeral=True, makepath=True)
elif znode.owner_session_id != self.client.client_id[0]:
logger.debug("ZNode at %s not owned by us, recreating.", path)
txn = self.client.transaction()
txn.delete(path)
txn.create(path, value=data, ephemeral=True)
txn.commit()
else:
logger.debug("Setting node value to %r", data)
self.client.set(path, data) |
def get_starsep_RaDecDeg(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
"""Calculate separation."""
sep = deltaStarsRaDecDeg(ra1_deg, dec1_deg, ra2_deg, dec2_deg)
sgn, deg, mn, sec = degToDms(sep)
if deg != 0:
txt = '%02d:%02d:%06.3f' % (deg, mn, sec)
else:
txt = '%02d:%06.3f' % (mn, sec)
return txt | Calculate separation. | Below is the instruction that describes the task:
### Input:
Calculate separation.
### Response:
def get_starsep_RaDecDeg(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
"""Calculate separation."""
sep = deltaStarsRaDecDeg(ra1_deg, dec1_deg, ra2_deg, dec2_deg)
sgn, deg, mn, sec = degToDms(sep)
if deg != 0:
txt = '%02d:%02d:%06.3f' % (deg, mn, sec)
else:
txt = '%02d:%06.3f' % (mn, sec)
return txt |
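The two formatting branches above behave as follows (the degrees/minutes/seconds values are hypothetical):

```python
# Extract the branch logic: include the degrees field only when nonzero.
def format_sep(deg, mn, sec):
    if deg != 0:
        return '%02d:%02d:%06.3f' % (deg, mn, sec)
    return '%02d:%06.3f' % (mn, sec)

with_deg = format_sep(1, 2, 3.5)   # degrees present
no_deg = format_sep(0, 2, 3.5)     # degrees omitted
```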
def html_for_env_var(key):
"""Returns an HTML snippet for an environment variable.
Args:
key: A string representing an environment variable name.
Returns:
String HTML representing the value and variable.
"""
value = os.getenv(key)
return KEY_VALUE_TEMPLATE.format(key, value) | Returns an HTML snippet for an environment variable.
Args:
key: A string representing an environment variable name.
Returns:
String HTML representing the value and variable. | Below is the instruction that describes the task:
### Input:
Returns an HTML snippet for an environment variable.
Args:
key: A string representing an environment variable name.
Returns:
String HTML representing the value and variable.
### Response:
def html_for_env_var(key):
"""Returns an HTML snippet for an environment variable.
Args:
key: A string representing an environment variable name.
Returns:
String HTML representing the value and variable.
"""
value = os.getenv(key)
return KEY_VALUE_TEMPLATE.format(key, value) |
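`KEY_VALUE_TEMPLATE` is defined elsewhere in the source module; with a hypothetical stand-in template, the lookup-and-format flow looks like this:

```python
import os

# Hypothetical stand-in for the module's real template.
KEY_VALUE_TEMPLATE = "<b>{}</b>: {}<br>"

def html_for_env_var(key):
    # Look up the variable and interpolate key and value into the template.
    return KEY_VALUE_TEMPLATE.format(key, os.getenv(key))

os.environ["DEMO_VAR"] = "42"
snippet = html_for_env_var("DEMO_VAR")
```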
def randomDecimalField(self, model_class, field_name):
"""
Validate if the field has a `max_digits` and `decimal_places`
And generating the unique decimal number.
"""
decimal_field = model_class._meta.get_field(field_name)
max_digits = None
decimal_places = None
if decimal_field.max_digits is not None:
max_digits = decimal_field.max_digits
if decimal_field.decimal_places is not None:
decimal_places = decimal_field.decimal_places
digits = random.choice(range(100))
if max_digits is not None:
start = 0
if max_digits < start:
start = max_digits - max_digits
digits = int(
"".join([
str(x) for x in random.sample(
range(start, max_digits),
max_digits - 1
)
])
)
places = random.choice(range(10, 99))
if decimal_places is not None:
places = str(
random.choice(range(9999 * 99999))
)[:decimal_places]
return float(
str(digits)[:decimal_places] + "." + str(places)
) | Validate if the field has a `max_digits` and `decimal_places`
And generating the unique decimal number. | Below is the instruction that describes the task:
### Input:
Validate if the field has a `max_digits` and `decimal_places`
And generating the unique decimal number.
### Response:
def randomDecimalField(self, model_class, field_name):
"""
Validate if the field has a `max_digits` and `decimal_places`
And generating the unique decimal number.
"""
decimal_field = model_class._meta.get_field(field_name)
max_digits = None
decimal_places = None
if decimal_field.max_digits is not None:
max_digits = decimal_field.max_digits
if decimal_field.decimal_places is not None:
decimal_places = decimal_field.decimal_places
digits = random.choice(range(100))
if max_digits is not None:
start = 0
if max_digits < start:
start = max_digits - max_digits
digits = int(
"".join([
str(x) for x in random.sample(
range(start, max_digits),
max_digits - 1
)
])
)
places = random.choice(range(10, 99))
if decimal_places is not None:
places = str(
random.choice(range(9999 * 99999))
)[:decimal_places]
return float(
str(digits)[:decimal_places] + "." + str(places)
) |
def bandpass(self, frequency, width_q=2.0, constant_skirt=False):
'''Apply a two-pole Butterworth band-pass filter with the given central
frequency, and (3dB-point) band-width. The filter rolls off at 6dB per
octave (20dB per decade) and is described in detail in
http://musicdsp.org/files/Audio-EQ-Cookbook.txt
Parameters
----------
frequency : float
The filter's center frequency in Hz.
width_q : float, default=2.0
The filter's width as a Q-factor.
constant_skirt : bool, default=False
If True, selects constant skirt gain (peak gain = width_q).
If False, selects constant 0dB peak gain.
See Also
--------
bandreject, sinc
'''
if not is_number(frequency) or frequency <= 0:
raise ValueError("frequency must be a positive number.")
if not is_number(width_q) or width_q <= 0:
raise ValueError("width_q must be a positive number.")
if not isinstance(constant_skirt, bool):
raise ValueError("constant_skirt must be a boolean.")
effect_args = ['bandpass']
if constant_skirt:
effect_args.append('-c')
effect_args.extend(['{:f}'.format(frequency), '{:f}q'.format(width_q)])
self.effects.extend(effect_args)
self.effects_log.append('bandpass')
return self | Apply a two-pole Butterworth band-pass filter with the given central
frequency, and (3dB-point) band-width. The filter rolls off at 6dB per
octave (20dB per decade) and is described in detail in
http://musicdsp.org/files/Audio-EQ-Cookbook.txt
Parameters
----------
frequency : float
The filter's center frequency in Hz.
width_q : float, default=2.0
The filter's width as a Q-factor.
constant_skirt : bool, default=False
If True, selects constant skirt gain (peak gain = width_q).
If False, selects constant 0dB peak gain.
See Also
--------
bandreject, sinc | Below is the instruction that describes the task:
### Input:
Apply a two-pole Butterworth band-pass filter with the given central
frequency, and (3dB-point) band-width. The filter rolls off at 6dB per
octave (20dB per decade) and is described in detail in
http://musicdsp.org/files/Audio-EQ-Cookbook.txt
Parameters
----------
frequency : float
The filter's center frequency in Hz.
width_q : float, default=2.0
The filter's width as a Q-factor.
constant_skirt : bool, default=False
If True, selects constant skirt gain (peak gain = width_q).
If False, selects constant 0dB peak gain.
See Also
--------
bandreject, sinc
### Response:
def bandpass(self, frequency, width_q=2.0, constant_skirt=False):
'''Apply a two-pole Butterworth band-pass filter with the given central
frequency, and (3dB-point) band-width. The filter rolls off at 6dB per
octave (20dB per decade) and is described in detail in
http://musicdsp.org/files/Audio-EQ-Cookbook.txt
Parameters
----------
frequency : float
The filter's center frequency in Hz.
width_q : float, default=2.0
The filter's width as a Q-factor.
constant_skirt : bool, default=False
If True, selects constant skirt gain (peak gain = width_q).
If False, selects constant 0dB peak gain.
See Also
--------
bandreject, sinc
'''
if not is_number(frequency) or frequency <= 0:
raise ValueError("frequency must be a positive number.")
if not is_number(width_q) or width_q <= 0:
raise ValueError("width_q must be a positive number.")
if not isinstance(constant_skirt, bool):
raise ValueError("constant_skirt must be a boolean.")
effect_args = ['bandpass']
if constant_skirt:
effect_args.append('-c')
effect_args.extend(['{:f}'.format(frequency), '{:f}q'.format(width_q)])
self.effects.extend(effect_args)
self.effects_log.append('bandpass')
return self |
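The validated arguments are serialized into sox-style effect tokens; extracting just that assembly step:

```python
# Build the effect argument list the way the method above does: the
# trailing 'q' marks the width as a Q-factor rather than Hz.
def bandpass_args(frequency, width_q=2.0, constant_skirt=False):
    args = ['bandpass']
    if constant_skirt:
        args.append('-c')
    args.extend(['{:f}'.format(frequency), '{:f}q'.format(width_q)])
    return args

args = bandpass_args(1000.0, width_q=2.0, constant_skirt=True)
```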
def cursor_position_col(self):
"""
Current column. (0-based.)
"""
# (Don't use self.text_before_cursor to calculate this. Creating
# substrings and doing rsplit is too expensive for getting the cursor
# position.)
_, line_start_index = self._find_line_start_index(self.cursor_position)
return self.cursor_position - line_start_index | Current column. (0-based.) | Below is the instruction that describes the task:
### Input:
Current column. (0-based.)
### Response:
def cursor_position_col(self):
"""
Current column. (0-based.)
"""
# (Don't use self.text_before_cursor to calculate this. Creating
# substrings and doing rsplit is too expensive for getting the cursor
# position.)
_, line_start_index = self._find_line_start_index(self.cursor_position)
return self.cursor_position - line_start_index |
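The column computation can be sketched without the buffer machinery: find the start of the cursor's line and subtract (here `str.rfind` stands in for `_find_line_start_index`):

```python
# Column = distance from the cursor back to the start of its line.
def cursor_col(text, cursor_position):
    # rfind returns -1 when there is no newline, so +1 yields index 0.
    line_start = text.rfind('\n', 0, cursor_position) + 1
    return cursor_position - line_start

col = cursor_col("abc\ndef", 6)   # cursor sitting on the 'f'
```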
def to_dict(self):
"""Pack the load averages into a nicely-keyed dictionary."""
result = {}
for meta in self.intervals.values():
result[meta.display] = meta.value
return result | Pack the load averages into a nicely-keyed dictionary. | Below is the instruction that describes the task:
### Input:
Pack the load averages into a nicely-keyed dictionary.
### Response:
def to_dict(self):
"""Pack the load averages into a nicely-keyed dictionary."""
result = {}
for meta in self.intervals.values():
result[meta.display] = meta.value
return result |
def replace_data(self, chart_data):
"""
Use the categories and series values in the |ChartData| object
*chart_data* to replace those in the XML and Excel worksheet for this
chart.
"""
rewriter = SeriesXmlRewriterFactory(self.chart_type, chart_data)
rewriter.replace_series_data(self._chartSpace)
self._workbook.update_from_xlsx_blob(chart_data.xlsx_blob) | Use the categories and series values in the |ChartData| object
*chart_data* to replace those in the XML and Excel worksheet for this
chart. | Below is the instruction that describes the task:
### Input:
Use the categories and series values in the |ChartData| object
*chart_data* to replace those in the XML and Excel worksheet for this
chart.
### Response:
def replace_data(self, chart_data):
"""
Use the categories and series values in the |ChartData| object
*chart_data* to replace those in the XML and Excel worksheet for this
chart.
"""
rewriter = SeriesXmlRewriterFactory(self.chart_type, chart_data)
rewriter.replace_series_data(self._chartSpace)
self._workbook.update_from_xlsx_blob(chart_data.xlsx_blob) |
def convertafield(field_comm, field_val, field_iddname):
"""convert field based on field info in IDD"""
convinidd = ConvInIDD()
field_typ = field_comm.get('type', [None])[0]
conv = convinidd.conv_dict().get(field_typ, convinidd.no_type)
return conv(field_val, field_iddname) | convert field based on field info in IDD | Below is the instruction that describes the task:
### Input:
convert field based on field info in IDD
### Response:
def convertafield(field_comm, field_val, field_iddname):
"""convert field based on field info in IDD"""
convinidd = ConvInIDD()
field_typ = field_comm.get('type', [None])[0]
conv = convinidd.conv_dict().get(field_typ, convinidd.no_type)
return conv(field_val, field_iddname) |
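The `dict.get`-with-default dispatch used above is a small pattern worth isolating; the converter table below is hypothetical, not the real `ConvInIDD` one:

```python
# Dispatch on a declared type name, falling back to a pass-through
# converter when the type is unknown or missing.
def to_int(value, name): return int(value)
def to_float(value, name): return float(value)
def no_type(value, name): return value

conv_dict = {'integer': to_int, 'real': to_float}

def convert(field_typ, value, name):
    return conv_dict.get(field_typ, no_type)(value, name)

result = convert('real', '3.5', 'North Axis')
```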
def split(pipe, splitter, skip_empty=False):
''' this function works a lot like groupby but splits on given patterns,
the same behavior as str.split provides. if skip_empty is True,
split only yields pieces that have contents
Example:
splitting 1011101010101
by 10
returns ,11,,,,1
Or if skip_empty is True
splitting 1011101010101
by 10
returns 11,1
'''
splitter = tuple(splitter)
len_splitter = len(splitter)
pipe=iter(pipe)
current = deque()
tmp = []
windowed = window(pipe, len(splitter))
for i in windowed:
if i == splitter:
skip(windowed, len(splitter)-1)
yield list(current)
current.clear()
tmp = []
else:
current.append(i[0])
tmp = i
if len(current) or len(tmp):
yield list(chain(current,tmp)) | this function works a lot like groupby but splits on given patterns,
the same behavior as str.split provides. if skip_empty is True,
split only yields pieces that have contents
Example:
splitting 1011101010101
by 10
returns ,11,,,,1
Or if skip_empty is True
splitting 1011101010101
by 10
returns 11,1 | Below is the instruction that describes the task:
### Input:
this function works a lot like groupby but splits on given patterns,
the same behavior as str.split provides. if skip_empty is True,
split only yields pieces that have contents
Example:
splitting 1011101010101
by 10
returns ,11,,,,1
Or if skip_empty is True
splitting 1011101010101
by 10
returns 11,1
### Response:
def split(pipe, splitter, skip_empty=False):
''' this function works a lot like groupby but splits on given patterns,
the same behavior as str.split provides. if skip_empty is True,
split only yields pieces that have contents
Example:
splitting 1011101010101
by 10
returns ,11,,,,1
Or if skip_empty is True
splitting 1011101010101
by 10
returns 11,1
'''
splitter = tuple(splitter)
len_splitter = len(splitter)
pipe=iter(pipe)
current = deque()
tmp = []
windowed = window(pipe, len(splitter))
for i in windowed:
if i == splitter:
skip(windowed, len(splitter)-1)
yield list(current)
current.clear()
tmp = []
else:
current.append(i[0])
tmp = i
if len(current) or len(tmp):
yield list(chain(current,tmp)) |
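For strings, the docstring's example can be checked directly against `str.split`, which this generator mirrors for arbitrary iterables:

```python
# Reproduce the docstring example with str.split.
pieces = "1011101010101".split("10")
joined = ",".join(pieces)              # the ",11,,,,1" case
non_empty = [p for p in pieces if p]   # the skip_empty=True case
```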
def get_retained_chimeras(output_fp_de_novo_nonchimeras,
output_fp_ref_nonchimeras,
output_combined_fp,
chimeras_retention='union'):
""" Gets union or intersection of two supplied fasta files
output_fp_de_novo_nonchimeras: filepath of nonchimeras from de novo
usearch detection.
output_fp_ref_nonchimeras: filepath of nonchimeras from reference based
usearch detection.
output_combined_fp: filepath to write retained sequences to.
chimeras_retention: accepts either 'intersection' or 'union'. Will test
for chimeras against the full input error clustered sequence set, and
retain sequences flagged as non-chimeras by either (union) or
only those flagged as non-chimeras by both (intersection)."""
de_novo_non_chimeras = []
reference_non_chimeras = []
de_novo_nonchimeras_f = open(output_fp_de_novo_nonchimeras, "U")
reference_nonchimeras_f = open(output_fp_ref_nonchimeras, "U")
output_combined_f = open(output_combined_fp, "w")
for label, seq in parse_fasta(de_novo_nonchimeras_f):
de_novo_non_chimeras.append(label)
de_novo_nonchimeras_f.close()
for label, seq in parse_fasta(reference_nonchimeras_f):
reference_non_chimeras.append(label)
reference_nonchimeras_f.close()
de_novo_non_chimeras = set(de_novo_non_chimeras)
reference_non_chimeras = set(reference_non_chimeras)
if chimeras_retention == 'union':
all_non_chimeras = de_novo_non_chimeras.union(reference_non_chimeras)
elif chimeras_retention == 'intersection':
all_non_chimeras =\
de_novo_non_chimeras.intersection(reference_non_chimeras)
de_novo_nonchimeras_f = open(output_fp_de_novo_nonchimeras, "U")
reference_nonchimeras_f = open(output_fp_ref_nonchimeras, "U")
# Save a list of already-written labels
labels_written = []
for label, seq in parse_fasta(de_novo_nonchimeras_f):
if label in all_non_chimeras:
if label not in labels_written:
output_combined_f.write('>%s\n%s\n' % (label, seq))
labels_written.append(label)
de_novo_nonchimeras_f.close()
for label, seq in parse_fasta(reference_nonchimeras_f):
if label in all_non_chimeras:
if label not in labels_written:
output_combined_f.write('>%s\n%s\n' % (label, seq))
labels_written.append(label)
reference_nonchimeras_f.close()
output_combined_f.close()
return output_combined_fp | Gets union or intersection of two supplied fasta files
output_fp_de_novo_nonchimeras: filepath of nonchimeras from de novo
usearch detection.
output_fp_ref_nonchimeras: filepath of nonchimeras from reference based
usearch detection.
output_combined_fp: filepath to write retained sequences to.
chimeras_retention: accepts either 'intersection' or 'union'. Will test
for chimeras against the full input error clustered sequence set, and
retain sequences flagged as non-chimeras by either (union) or
only those flagged as non-chimeras by both (intersection). | Below is the instruction that describes the task:
### Input:
Gets union or intersection of two supplied fasta files
output_fp_de_novo_nonchimeras: filepath of nonchimeras from de novo
usearch detection.
output_fp_ref_nonchimeras: filepath of nonchimeras from reference based
usearch detection.
output_combined_fp: filepath to write retained sequences to.
chimeras_retention: accepts either 'intersection' or 'union'. Will test
for chimeras against the full input error clustered sequence set, and
retain sequences flagged as non-chimeras by either (union) or
only those flagged as non-chimeras by both (intersection).
### Response:
def get_retained_chimeras(output_fp_de_novo_nonchimeras,
output_fp_ref_nonchimeras,
output_combined_fp,
chimeras_retention='union'):
""" Gets union or intersection of two supplied fasta files
output_fp_de_novo_nonchimeras: filepath of nonchimeras from de novo
usearch detection.
output_fp_ref_nonchimeras: filepath of nonchimeras from reference based
usearch detection.
output_combined_fp: filepath to write retained sequences to.
chimeras_retention: accepts either 'intersection' or 'union'. Will test
for chimeras against the full input error clustered sequence set, and
retain sequences flagged as non-chimeras by either (union) or
only those flagged as non-chimeras by both (intersection)."""
de_novo_non_chimeras = []
reference_non_chimeras = []
de_novo_nonchimeras_f = open(output_fp_de_novo_nonchimeras, "U")
reference_nonchimeras_f = open(output_fp_ref_nonchimeras, "U")
output_combined_f = open(output_combined_fp, "w")
for label, seq in parse_fasta(de_novo_nonchimeras_f):
de_novo_non_chimeras.append(label)
de_novo_nonchimeras_f.close()
for label, seq in parse_fasta(reference_nonchimeras_f):
reference_non_chimeras.append(label)
reference_nonchimeras_f.close()
de_novo_non_chimeras = set(de_novo_non_chimeras)
reference_non_chimeras = set(reference_non_chimeras)
if chimeras_retention == 'union':
all_non_chimeras = de_novo_non_chimeras.union(reference_non_chimeras)
elif chimeras_retention == 'intersection':
all_non_chimeras =\
de_novo_non_chimeras.intersection(reference_non_chimeras)
de_novo_nonchimeras_f = open(output_fp_de_novo_nonchimeras, "U")
reference_nonchimeras_f = open(output_fp_ref_nonchimeras, "U")
# Save a list of already-written labels
labels_written = []
for label, seq in parse_fasta(de_novo_nonchimeras_f):
if label in all_non_chimeras:
if label not in labels_written:
output_combined_f.write('>%s\n%s\n' % (label, seq))
labels_written.append(label)
de_novo_nonchimeras_f.close()
for label, seq in parse_fasta(reference_nonchimeras_f):
if label in all_non_chimeras:
if label not in labels_written:
output_combined_f.write('>%s\n%s\n' % (label, seq))
labels_written.append(label)
reference_nonchimeras_f.close()
output_combined_f.close()
return output_combined_fp |
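The core of the function above is a set union or intersection over sequence labels, followed by a deduplicated write-out. A file-free sketch of just that logic (the function name and signature here are illustrative, not from the original module):

```python
def retained_labels(de_novo_labels, ref_labels, retention='union'):
    """Combine two lists of non-chimera sequence labels, keeping
    first-seen order and emitting each label at most once."""
    dn, ref = set(de_novo_labels), set(ref_labels)
    if retention == 'union':
        keep = dn | ref
    elif retention == 'intersection':
        keep = dn & ref
    else:
        raise ValueError("retention must be 'union' or 'intersection'")
    seen, out = set(), []
    for label in list(de_novo_labels) + list(ref_labels):
        if label in keep and label not in seen:
            seen.add(label)
            out.append(label)
    return out
```

This mirrors the original's `labels_written` bookkeeping, which prevents a sequence flagged non-chimeric by both detectors from being written twice.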
def iterators(self, frequency=None):
"""
Returns the iterators (i.e. subject_id, visit_id) that the pipeline
iterates over
Parameters
----------
frequency : str | None
A selected data frequency to use to determine which iterators are
required. If None, all input frequencies of the pipeline are
assumed
"""
iterators = set()
if frequency is None:
input_freqs = list(self.input_frequencies)
else:
input_freqs = [frequency]
for freq in input_freqs:
iterators.update(self.study.FREQUENCIES[freq])
return iterators | Returns the iterators (i.e. subject_id, visit_id) that the pipeline
iterates over
Parameters
----------
frequency : str | None
A selected data frequency to use to determine which iterators are
required. If None, all input frequencies of the pipeline are
assumed | Below is the instruction that describes the task:
### Input:
Returns the iterators (i.e. subject_id, visit_id) that the pipeline
iterates over
Parameters
----------
frequency : str | None
A selected data frequency to use to determine which iterators are
required. If None, all input frequencies of the pipeline are
assumed
### Response:
def iterators(self, frequency=None):
"""
Returns the iterators (i.e. subject_id, visit_id) that the pipeline
iterates over
Parameters
----------
frequency : str | None
A selected data frequency to use to determine which iterators are
required. If None, all input frequencies of the pipeline are
assumed
"""
iterators = set()
if frequency is None:
input_freqs = list(self.input_frequencies)
else:
input_freqs = [frequency]
for freq in input_freqs:
iterators.update(self.study.FREQUENCIES[freq])
return iterators |
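The method above folds each selected frequency's iterator tuple into one set. A self-contained sketch with a hypothetical `FREQUENCIES` map (the real mapping lives on the study class and is not shown in this excerpt):

```python
# Hypothetical frequency -> iterator-ID map, stand-in for study.FREQUENCIES.
FREQUENCIES = {
    'per_session': ('subject_id', 'visit_id'),
    'per_subject': ('subject_id',),
    'per_visit': ('visit_id',),
    'per_study': (),
}

def iterators(input_frequencies, frequency=None, freq_map=FREQUENCIES):
    """Union of the iterator IDs required by the selected frequencies."""
    freqs = list(input_frequencies) if frequency is None else [frequency]
    required = set()
    for freq in freqs:
        required.update(freq_map[freq])
    return required
```

For example, a pipeline with both per-session and per-subject inputs needs both IDs, while restricting to `frequency='per_subject'` needs only `subject_id`.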
def vcfmeltsamples(table, *samples):
"""
Melt the samples columns. E.g.::
>>> import petl as etl
>>> # activate bio extensions
... import petlx.bio
>>> table1 = (
... etl
... .fromvcf('fixture/sample.vcf')
... .vcfmeltsamples()
... )
>>> table1
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| CHROM | POS | ID | REF | ALT | QUAL | FILTER | INFO | SAMPLE | CALL |
+=======+=====+======+=====+=====+======+========+======+===========+=====================================================+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00003' | Call(sample=NA00003, CallData(GT=0/1, HQ=[3, 3])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
...
"""
result = etl.melt(table, key=VCF_HEADER, variables=samples,
variablefield='SAMPLE', valuefield='CALL')
return result | Melt the samples columns. E.g.::
>>> import petl as etl
>>> # activate bio extensions
... import petlx.bio
>>> table1 = (
... etl
... .fromvcf('fixture/sample.vcf')
... .vcfmeltsamples()
... )
>>> table1
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| CHROM | POS | ID | REF | ALT | QUAL | FILTER | INFO | SAMPLE | CALL |
+=======+=====+======+=====+=====+======+========+======+===========+=====================================================+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00003' | Call(sample=NA00003, CallData(GT=0/1, HQ=[3, 3])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
... | Below is the instruction that describes the task:
### Input:
Melt the samples columns. E.g.::
>>> import petl as etl
>>> # activate bio extensions
... import petlx.bio
>>> table1 = (
... etl
... .fromvcf('fixture/sample.vcf')
... .vcfmeltsamples()
... )
>>> table1
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| CHROM | POS | ID | REF | ALT | QUAL | FILTER | INFO | SAMPLE | CALL |
+=======+=====+======+=====+=====+======+========+======+===========+=====================================================+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00003' | Call(sample=NA00003, CallData(GT=0/1, HQ=[3, 3])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
...
### Response:
def vcfmeltsamples(table, *samples):
"""
Melt the samples columns. E.g.::
>>> import petl as etl
>>> # activate bio extensions
... import petlx.bio
>>> table1 = (
... etl
... .fromvcf('fixture/sample.vcf')
... .vcfmeltsamples()
... )
>>> table1
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| CHROM | POS | ID | REF | ALT | QUAL | FILTER | INFO | SAMPLE | CALL |
+=======+=====+======+=====+=====+======+========+======+===========+=====================================================+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 111 | None | 'A' | [C] | 9.6 | None | {} | 'NA00003' | Call(sample=NA00003, CallData(GT=0/1, HQ=[3, 3])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00001' | Call(sample=NA00001, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
| '19' | 112 | None | 'A' | [G] | 10 | None | {} | 'NA00002' | Call(sample=NA00002, CallData(GT=0|0, HQ=[10, 10])) |
+-------+-----+------+-----+-----+------+--------+------+-----------+-----------------------------------------------------+
...
"""
result = etl.melt(table, key=VCF_HEADER, variables=samples,
variablefield='SAMPLE', valuefield='CALL')
return result |
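The melt operation above turns wide per-sample columns into long (SAMPLE, CALL) rows. A rough dict-based stand-in for `etl.melt` as used here (petl's real function works on table containers, not lists of dicts; this only illustrates the reshaping):

```python
def melt(rows, key, variables=None, variablefield='SAMPLE', valuefield='CALL'):
    """Reshape wide rows into long form.
    rows: list of dicts; key: fields kept on every output row;
    variables: fields to melt (default: every non-key field)."""
    out = []
    for row in rows:
        fields = variables or [f for f in row if f not in key]
        for field in fields:
            rec = {k: row[k] for k in key}   # carry the key fields through
            rec[variablefield] = field        # e.g. the sample name
            rec[valuefield] = row[field]      # e.g. the genotype call
            out.append(rec)
    return out
```

One input variant row with two sample columns becomes two output rows, matching the shape of the table in the docstring.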
def djfrontend_jquery_scrollto(version=None):
"""
Returns the jQuery ScrollTo plugin file according to version number.
TEMPLATE_DEBUG returns full file, otherwise returns minified file.
"""
if version is None:
version = getattr(settings, 'DJFRONTEND_JQUERY_SCROLLTO', DJFRONTEND_JQUERY_SCROLLTO_DEFAULT)
if getattr(settings, 'TEMPLATE_DEBUG', False):
template = '<script src="{static}djfrontend/js/jquery/jquery.scrollTo/{v}/jquery.scrollTo.js"></script>'
else:
template = (
'<script src="//cdnjs.cloudflare.com/ajax/libs/jquery-scrollTo/{v}/jquery.scrollTo.min.js"></script>'
'<script>window.jQuery.fn.scrollTo || document.write(\'<script src="{static}djfrontend/js/jquery/jquery.scrollTo/{v}/jquery.scrollTo.min.js"><\/script>\')</script>')
return format_html(template, static=_static_url, v=version) | Returns the jQuery ScrollTo plugin file according to version number.
TEMPLATE_DEBUG returns full file, otherwise returns minified file. | Below is the instruction that describes the task:
### Input:
Returns the jQuery ScrollTo plugin file according to version number.
TEMPLATE_DEBUG returns full file, otherwise returns minified file.
### Response:
def djfrontend_jquery_scrollto(version=None):
"""
Returns the jQuery ScrollTo plugin file according to version number.
TEMPLATE_DEBUG returns full file, otherwise returns minified file.
"""
if version is None:
version = getattr(settings, 'DJFRONTEND_JQUERY_SCROLLTO', DJFRONTEND_JQUERY_SCROLLTO_DEFAULT)
if getattr(settings, 'TEMPLATE_DEBUG', False):
template = '<script src="{static}djfrontend/js/jquery/jquery.scrollTo/{v}/jquery.scrollTo.js"></script>'
else:
template = (
'<script src="//cdnjs.cloudflare.com/ajax/libs/jquery-scrollTo/{v}/jquery.scrollTo.min.js"></script>'
'<script>window.jQuery.fn.scrollTo || document.write(\'<script src="{static}djfrontend/js/jquery/jquery.scrollTo/{v}/jquery.scrollTo.min.js"><\/script>\')</script>')
return format_html(template, static=_static_url, v=version) |
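The template tag above implements the common CDN-with-local-fallback pattern: load the minified plugin from cdnjs, then `document.write` the local copy only if the plugin never attached to jQuery. A settings-free sketch of the same tag logic (function name and `static` default are illustrative):

```python
def scrollto_tags(version, debug=False, static='/static/'):
    """Build the <script> tag(s): full local file in debug mode,
    otherwise the CDN copy plus a document.write local fallback."""
    if debug:
        return ('<script src="{static}djfrontend/js/jquery/jquery.scrollTo/'
                '{v}/jquery.scrollTo.js"></script>').format(static=static, v=version)
    return (
        '<script src="//cdnjs.cloudflare.com/ajax/libs/jquery-scrollTo/'
        '{v}/jquery.scrollTo.min.js"></script>'
        "<script>window.jQuery.fn.scrollTo || document.write("
        "'<script src=\"{static}djfrontend/js/jquery/jquery.scrollTo/{v}/"
        "jquery.scrollTo.min.js\"><\\/script>')</script>"
    ).format(static=static, v=version)
```

The `window.jQuery.fn.scrollTo ||` guard is the fallback test: if the CDN script loaded, the plugin exists on `jQuery.fn` and the local copy is never requested.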