def head_object(Bucket=None, IfMatch=None, IfModifiedSince=None, IfNoneMatch=None, IfUnmodifiedSince=None, Key=None, Range=None, VersionId=None, SSECustomerAlgorithm=None, SSECustomerKey=None, SSECustomerKeyMD5=None, RequestPayer=None, PartNumber=None):
"""
The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
See also: AWS API Documentation
:example: response = client.head_object(
Bucket='string',
IfMatch='string',
IfModifiedSince=datetime(2015, 1, 1),
IfNoneMatch='string',
IfUnmodifiedSince=datetime(2015, 1, 1),
Key='string',
Range='string',
VersionId='string',
SSECustomerAlgorithm='string',
SSECustomerKey='string',
SSECustomerKeyMD5='string',
RequestPayer='requester',
PartNumber=123
)
:type Bucket: string
:param Bucket: [REQUIRED]
:type IfMatch: string
:param IfMatch: Return the object only if its entity tag (ETag) is the same as the one specified, otherwise return a 412 (precondition failed).
:type IfModifiedSince: datetime
:param IfModifiedSince: Return the object only if it has been modified since the specified time, otherwise return a 304 (not modified).
:type IfNoneMatch: string
:param IfNoneMatch: Return the object only if its entity tag (ETag) is different from the one specified, otherwise return a 304 (not modified).
:type IfUnmodifiedSince: datetime
:param IfUnmodifiedSince: Return the object only if it has not been modified since the specified time, otherwise return a 412 (precondition failed).
:type Key: string
:param Key: [REQUIRED]
:type Range: string
:param Range: Downloads the specified range bytes of an object. For more information about the HTTP Range header, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.
:type VersionId: string
:param VersionId: VersionId used to reference a specific version of the object.
:type SSECustomerAlgorithm: string
:param SSECustomerAlgorithm: Specifies the algorithm to use when encrypting the object (e.g., AES256).
:type SSECustomerKey: string
:param SSECustomerKey: Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.
:type SSECustomerKeyMD5: string
:param SSECustomerKeyMD5: Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error. This parameter is automatically populated if it is not provided, so including it is not required.
:type RequestPayer: string
:param RequestPayer: Confirms that the requester knows that she or he will be charged for the request. Bucket owners need not specify this parameter in their requests. Documentation on downloading objects from requester pays buckets can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html
:type PartNumber: integer
:param PartNumber: Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a 'ranged' HEAD request for the part specified. Useful for querying the size of the part and the number of parts in this object.
:rtype: dict
:return: {
'DeleteMarker': True|False,
'AcceptRanges': 'string',
'Expiration': 'string',
'Restore': 'string',
'LastModified': datetime(2015, 1, 1),
'ContentLength': 123,
'ETag': 'string',
'MissingMeta': 123,
'VersionId': 'string',
'CacheControl': 'string',
'ContentDisposition': 'string',
'ContentEncoding': 'string',
'ContentLanguage': 'string',
'ContentType': 'string',
'Expires': datetime(2015, 1, 1),
'WebsiteRedirectLocation': 'string',
'ServerSideEncryption': 'AES256'|'aws:kms',
'Metadata': {
'string': 'string'
},
'SSECustomerAlgorithm': 'string',
'SSECustomerKeyMD5': 'string',
'SSEKMSKeyId': 'string',
'StorageClass': 'STANDARD'|'REDUCED_REDUNDANCY'|'STANDARD_IA',
'RequestCharged': 'requester',
'ReplicationStatus': 'COMPLETE'|'PENDING'|'FAILED'|'REPLICA',
'PartsCount': 123
}
:returns:
(dict) --
DeleteMarker (boolean) -- Specifies whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.
AcceptRanges (string) --
Expiration (string) -- If the object expiration is configured (see PUT Bucket lifecycle), the response includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded.
Restore (string) -- Provides information about object restoration operation and expiration time of the restored object copy.
LastModified (datetime) -- Last modified date of the object
ContentLength (integer) -- Size of the body in bytes.
ETag (string) -- An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL
MissingMeta (integer) -- This is set to the number of metadata entries not returned in x-amz-meta headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.
VersionId (string) -- Version of the object.
CacheControl (string) -- Specifies caching behavior along the request/reply chain.
ContentDisposition (string) -- Specifies presentational information for the object.
ContentEncoding (string) -- Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
ContentLanguage (string) -- The language the content is in.
ContentType (string) -- A standard MIME type describing the format of the object data.
Expires (datetime) -- The date and time at which the object is no longer cacheable.
WebsiteRedirectLocation (string) -- If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
ServerSideEncryption (string) -- The Server-side encryption algorithm used when storing this object in S3 (e.g., AES256, aws:kms).
Metadata (dict) -- A map of metadata to store with the object in S3.
(string) --
(string) --
SSECustomerAlgorithm (string) -- If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used.
SSECustomerKeyMD5 (string) -- If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide round trip message integrity verification of the customer-provided encryption key.
SSEKMSKeyId (string) -- If present, specifies the ID of the AWS Key Management Service (KMS) master encryption key that was used for the object.
StorageClass (string) --
RequestCharged (string) -- If present, indicates that the requester was successfully charged for the request.
ReplicationStatus (string) --
PartsCount (integer) -- The count of parts this object has.
"""
pass
|
def _display(self, sent, now, chunk, mbps):
""" Display intermediate progress. """
if self.parent is not None:
self.parent._display(self.parent.offset + sent, now, chunk, mbps)
return
elapsed = now - self.startTime
if sent > 0 and self.total is not None and sent <= self.total:
eta = (self.total - sent) * elapsed.total_seconds() / sent
eta = datetime.timedelta(seconds=eta)
else:
eta = None
self.output.write(
"\r %s: Sent %s%s%s ETA: %s (%s) %s%20s\r" % (
elapsed,
util.humanize(sent),
"" if self.total is None else " of %s" % (util.humanize(self.total),),
"" if self.total is None else " (%d%%)" % (int(100 * sent / self.total),),
eta,
"" if not mbps else "%.3g Mbps " % (mbps,),
chunk or "",
" ",
)
)
self.output.flush()
|
def format(self, sql, params):
"""
Formats the SQL query to use ordinal parameters instead of named
parameters.
*sql* (|string|) is the SQL query.
*params* (|dict|) maps each named parameter (|string|) to value
(|object|). If |self.named| is "numeric", then *params* can be
simply a |sequence| of values mapped by index.
Returns a 2-|tuple| containing: the formatted SQL query (|string|),
and the ordinal parameters (|list|).
"""
if isinstance(sql, unicode):
string_type = unicode
elif isinstance(sql, bytes):
string_type = bytes
sql = sql.decode(_BYTES_ENCODING)
else:
raise TypeError("sql:{!r} is not a unicode or byte string.".format(sql))
if self.named == 'numeric':
if isinstance(params, collections.Mapping):
params = {string_type(idx): val for idx, val in iteritems(params)}
elif isinstance(params, collections.Sequence) and not isinstance(params, (unicode, bytes)):
params = {string_type(idx): val for idx, val in enumerate(params, 1)}
if not isinstance(params, collections.Mapping):
raise TypeError("params:{!r} is not a dict.".format(params))
# Find named parameters.
names = self.match.findall(sql)
# Map named parameters to ordinals.
ord_params = []
name_to_ords = {}
for name in names:
value = params[name]
if isinstance(value, tuple):
ord_params.extend(value)
if name not in name_to_ords:
name_to_ords[name] = '(' + ','.join((self.replace,) * len(value)) + ')'
else:
ord_params.append(value)
if name not in name_to_ords:
name_to_ords[name] = self.replace
# Replace named parameters with ordinals.
sql = self.match.sub(lambda m: name_to_ords[m.group(1)], sql)
# Make sure the query is returned as the proper string type.
if string_type is bytes:
sql = sql.encode(_BYTES_ENCODING)
# Return formatted SQL and new ordinal parameters.
return sql, ord_params
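The conversion above can be sketched in a few lines of modern Python. This hypothetical `to_ordinal` helper (not the original class method) handles only `:name`-style parameters and `?` placeholders, and expands tuple values into a parenthesized placeholder list for `IN` clauses, the same way the tuple branch above does:

```python
import re

# Named parameters look like ":name"; each match is rewritten to "?".
NAMED = re.compile(r':(\w+)')

def to_ordinal(sql, params):
    """Rewrite named parameters as ordinal '?' placeholders."""
    ord_params = []

    def repl(m):
        value = params[m.group(1)]
        if isinstance(value, tuple):
            # Expand tuples into "(?,?,...)" for IN clauses.
            ord_params.extend(value)
            return '(' + ','.join('?' * len(value)) + ')'
        ord_params.append(value)
        return '?'

    return NAMED.sub(repl, sql), ord_params
```

For example, `to_ordinal("SELECT * FROM t WHERE a = :a AND b IN :bs", {'a': 1, 'bs': (2, 3)})` yields the rewritten SQL plus the flat parameter list, mirroring the 2-tuple return of `format` above.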
|
def splitPrefix(name):
"""
Split the name into a tuple (I{prefix}, I{name}). The first element in
the tuple is I{None} when the name doesn't have a prefix.
@param name: A node name containing an optional prefix.
@type name: basestring
@return: A tuple containing the (2) parts of I{name}
@rtype: (I{prefix}, I{name})
"""
if isinstance(name, basestring) \
and ':' in name:
return tuple(name.split(':', 1))
else:
return (None, name)
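A Python 3 sketch of the same split, with `basestring` narrowed to `str`; `split_prefix` here is a hypothetical stand-in, not the original function:

```python
def split_prefix(name):
    """Split 'prefix:name' into (prefix, name); prefix is None if absent."""
    if isinstance(name, str) and ':' in name:
        # maxsplit=1 keeps any further colons inside the name part
        return tuple(name.split(':', 1))
    return (None, name)
```

Note that only the first colon splits, so `'a:b:c'` becomes `('a', 'b:c')`.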
|
def dropSpans(spans, text):
"""
Drop from *text* the blocks identified by the (start, end) pairs in *spans*, possibly nested.
"""
spans.sort()
res = ''
offset = 0
for s, e in spans:
if offset <= s: # handle nesting
if offset < s:
res += text[offset:s]
offset = e
res += text[offset:]
return res
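The span-dropping loop can be exercised with a self-contained copy; `drop_spans` below is a renamed sketch of the same algorithm:

```python
def drop_spans(spans, text):
    """Remove possibly-nested (start, end) regions from text."""
    spans = sorted(spans)
    res, offset = '', 0
    for s, e in spans:
        # A span starting before `offset` is nested inside an
        # already-dropped region and is skipped entirely.
        if offset <= s:
            res += text[offset:s]
            offset = e
    return res + text[offset:]
```

For instance, dropping `[(0, 2), (5, 7)]` from `'abcdefghi'` keeps indices 2-4 and 7-8; a nested span like `(2, 4)` inside `(1, 8)` has no extra effect.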
|
def add_page(self, page=None):
""" May generate and add a PDFPage separately, or use this to generate
a default page."""
if page is None:
self.page = PDFPage(self.orientation_default, self.layout_default, self.margins)
else:
self.page = page
self.page._set_index(len(self.pages))
self.pages.append(self.page)
currentfont = self.font
self.set_font(font=currentfont)
self.session._reset_colors()
|
def netconf_config_change_datastore(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
netconf_config_change = ET.SubElement(config, "netconf-config-change", xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-notifications")
datastore = ET.SubElement(netconf_config_change, "datastore")
datastore.text = kwargs.pop('datastore')
callback = kwargs.pop('callback', self._callback)
return callback(config)
|
def full_value(self):
"""Returns the full value with the path also (ie, name = value (path))
:returns: String
"""
s = self.name_value()
s += self.path_value()
s += "\n\n"
return s
|
def array_controller_by_model(self, model):
"""Returns array controller instance by model
:returns: Instance of array controller
"""
for member in self.get_members():
if member.model == model:
return member
|
def tile_read(source, bounds, tilesize, **kwargs):
"""
Read data and mask.
Parameters
----------
source : str or rasterio.io.DatasetReader
input file path or rasterio.io.DatasetReader object
bounds : list
Mercator tile bounds (left, bottom, right, top)
tilesize : int
Output image size
kwargs: dict, optional
These will be passed to the _tile_read function.
Returns
-------
out : array, int
returns pixel value.
"""
if isinstance(source, DatasetReader):
return _tile_read(source, bounds, tilesize, **kwargs)
else:
with rasterio.open(source) as src_dst:
return _tile_read(src_dst, bounds, tilesize, **kwargs)
|
def _is_monotonic(coord, axis=0):
"""
>>> _is_monotonic(np.array([0, 1, 2]))
True
>>> _is_monotonic(np.array([2, 1, 0]))
True
>>> _is_monotonic(np.array([0, 2, 1]))
False
"""
if coord.shape[axis] < 3:
return True
else:
n = coord.shape[axis]
delta_pos = (coord.take(np.arange(1, n), axis=axis) >=
coord.take(np.arange(0, n - 1), axis=axis))
delta_neg = (coord.take(np.arange(1, n), axis=axis) <=
coord.take(np.arange(0, n - 1), axis=axis))
return np.all(delta_pos) or np.all(delta_neg)
|
def set_env(self):
"""Put info about coverage into the env so that subprocesses can activate coverage."""
if self.cov_source is None:
os.environ['COV_CORE_SOURCE'] = ''
else:
os.environ['COV_CORE_SOURCE'] = UNIQUE_SEP.join(self.cov_source)
os.environ['COV_CORE_DATA_FILE'] = self.cov_data_file
os.environ['COV_CORE_CONFIG'] = self.cov_config
|
def to_regex(regex, flags=0):
"""
Given a string, this function returns a new re.RegexObject.
Given a re.RegexObject, this function just returns the same object.
:type regex: string|re.RegexObject
:param regex: A regex or a re.RegexObject
:type flags: int
:param flags: See Python's re.compile().
:rtype: re.RegexObject
:return: The Python regex object.
"""
if regex is None:
raise TypeError('None can not be cast to re.RegexObject')
if hasattr(regex, 'match'):
return regex
return re.compile(regex, flags)
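Since the function is pure standard-library code, its duck-typed pass-through behavior is easy to demonstrate with a self-contained copy:

```python
import re

def to_regex(regex, flags=0):
    """Compile a pattern string; pass compiled patterns through unchanged."""
    if regex is None:
        raise TypeError('None can not be cast to re.RegexObject')
    if hasattr(regex, 'match'):
        # Duck-typing: anything with .match() is treated as already compiled.
        return regex
    return re.compile(regex, flags)
```

Calling it twice is idempotent: `to_regex(to_regex(r'\d+'))` returns the very same compiled object.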
|
def _get_dst_resolution(self, dst_res=None):
"""Get default resolution, i.e. the highest resolution or smallest cell size."""
if dst_res is None:
dst_res = min(self._res_indices.keys())
return dst_res
|
def reply(self, user, msg, errors_as_replies=True):
"""Fetch a reply from the RiveScript brain.
Arguments:
user (str): A unique user ID for the person requesting a reply.
This could be e.g. a screen name or nickname. It's used internally
to store user variables (including topic and history), so if your
bot has multiple users each one should have a unique ID.
msg (str): The user's message. This is allowed to contain
punctuation and such, but any extraneous data such as HTML tags
should be removed in advance.
errors_as_replies (bool): When errors are encountered (such as a
deep recursion error, no reply matched, etc.) this will make the
reply be a text representation of the error message. If you set
this to ``False``, errors will instead raise an exception, such as
a ``DeepRecursionError`` or ``NoReplyError``. By default, no
exceptions are raised and errors are set in the reply instead.
Returns:
str: The reply output.
"""
return self._brain.reply(user, msg, errors_as_replies)
|
def make_attrstring(attr):
"""Returns an attribute string in the form key="val" """
attrstring = ' '.join(['%s="%s"' % (k, v) for k, v in attr.items()])
return '%s%s' % (' ' if attrstring != '' else '', attrstring)
|
def at_depth(self, depth):
"""
Returns a generator yielding all nodes in the tree at a specific depth
:param depth:
An integer >= 0 of the depth of nodes to yield
:return:
A generator yielding PolicyTreeNode objects
"""
for child in list(self.children):
if depth == 0:
yield child
else:
for grandchild in child.at_depth(depth - 1):
yield grandchild
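The depth traversal can be tried with a minimal hypothetical node class (the real `PolicyTreeNode` carries more state; only `children` matters for this generator):

```python
class Node:
    """Minimal tree node: depth 0 yields children, depth n recurses."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def at_depth(self, depth):
        for child in list(self.children):
            if depth == 0:
                yield child
            else:
                # Delegate to each child, one level shallower.
                yield from child.at_depth(depth - 1)
```

With a root holding children `a` (which holds `x`) and `b`, depth 0 yields `a` and `b`, and depth 1 yields `x`.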
|
def status(self):
"""Returns the status of the executor via probing the execution providers."""
if self.provider:
status = self.provider.status(self.engines)
else:
status = []
return status
|
def apply(self, func, skills):
"""Run a function on all skills in parallel"""
def run_item(skill):
try:
func(skill)
return True
except MsmException as e:
LOG.error('Error running {} on {}: {}'.format(
func.__name__, skill.name, repr(e)
))
return False
except Exception:
LOG.exception('Error running {} on {}:'.format(
func.__name__, skill.name
))
return False
with ThreadPool(20) as tp:
return tp.map(run_item, skills)
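The map-with-error-capture pattern is easy to try in isolation. This sketch drops the MSM-specific logging and catches any `Exception`, turning each failure into `False` while the rest of the batch proceeds:

```python
from multiprocessing.pool import ThreadPool

def apply_all(func, items, workers=4):
    """Run func over items in a thread pool; failures become False."""
    def run_item(item):
        try:
            func(item)
            return True
        except Exception:
            # A single bad item must not abort the whole batch.
            return False

    with ThreadPool(workers) as tp:
        return tp.map(run_item, items)
```

`tp.map` preserves input order, so the returned booleans line up with `items` even though execution is parallel.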
|
def complete_url(self, url):
""" Completes a given URL with this instance's URL base. """
if self.base_url:
return urlparse.urljoin(self.base_url, url)
else:
return url
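The same behavior with the Python 3 `urllib.parse` module (the code above uses the Python 2 `urlparse` name); `complete_url` here is a hypothetical free-function version that takes the base explicitly:

```python
from urllib.parse import urljoin

def complete_url(base_url, url):
    """Resolve url against base_url; absolute URLs pass through."""
    if base_url:
        return urljoin(base_url, url)
    return url
```

Note that `urljoin` leaves already-absolute URLs untouched, so callers can mix relative paths and full URLs freely.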
|
def expire_leaderboard_at_for(self, leaderboard_name, timestamp):
'''
Expire the given leaderboard at a specific UNIX timestamp. Do not use this with
leaderboards that utilize member data as there is no facility to cascade the
expiration out to the keys for the member data.
@param leaderboard_name [String] Name of the leaderboard.
@param timestamp [int] UNIX timestamp at which the leaderboard will be expired.
'''
pipeline = self.redis_connection.pipeline()
pipeline.expireat(leaderboard_name, timestamp)
pipeline.expireat(
self._ties_leaderboard_key(leaderboard_name), timestamp)
pipeline.expireat(self._member_data_key(leaderboard_name), timestamp)
pipeline.execute()
|
def request_show(self, id, **kwargs):
"https://developer.zendesk.com/rest_api/docs/core/requests#show-request"
api_path = "/api/v2/requests/{id}.json"
api_path = api_path.format(id=id)
return self.call(api_path, **kwargs)
|
def pick_peaks(nc, L=16, offset_denom=0.1):
"""Obtain peaks from a novelty curve using an adaptive threshold."""
offset = nc.mean() * float(offset_denom)
th = filters.median_filter(nc, size=L) + offset
peaks = []
for i in range(1, nc.shape[0] - 1):
# is it a peak?
if nc[i - 1] < nc[i] and nc[i] > nc[i + 1]:
# is it above the threshold?
if nc[i] > th[i]:
peaks.append(i)
return peaks
|
def at(*args, **kwargs): # pylint: disable=C0103
'''
Add a job to the queue.
The 'timespec' follows the format documented in the
at(1) manpage.
CLI Example:
.. code-block:: bash
salt '*' at.at <timespec> <cmd> [tag=<tag>] [runas=<user>]
salt '*' at.at 12:05am '/sbin/reboot' tag=reboot
salt '*' at.at '3:05am +3 days' 'bin/myscript' tag=nightly runas=jim
'''
# check args
if len(args) < 2:
return {'jobs': []}
# build job
if 'tag' in kwargs:
stdin = '### SALT: {0}\n{1}'.format(kwargs['tag'], ' '.join(args[1:]))
else:
stdin = ' '.join(args[1:])
cmd_kwargs = {'stdin': stdin, 'python_shell': False}
if 'runas' in kwargs:
cmd_kwargs['runas'] = kwargs['runas']
res = __salt__['cmd.run_all']('at "{timespec}"'.format(
timespec=args[0]
), **cmd_kwargs)
# verify job creation
if res['retcode'] > 0:
if 'bad time specification' in res['stderr']:
return {'jobs': [], 'error': 'invalid timespec'}
return {'jobs': [], 'error': res['stderr']}
else:
jobid = res['stderr'].splitlines()[1]
jobid = six.text_type(jobid.split()[1])
return atq(jobid)
|
def CopyToIsoFormat(cls, timestamp, timezone=pytz.UTC, raise_error=False):
"""Copies the timestamp to an ISO 8601 formatted string.
Args:
timestamp: The timestamp which is an integer containing the number
of microseconds since January 1, 1970, 00:00:00 UTC.
timezone: Optional timezone (instance of pytz.timezone).
raise_error: Boolean that if set to True will not absorb an OverflowError
if the timestamp is out of bounds. By default there will be
no error raised.
Returns:
A string containing an ISO 8601 formatted date and time.
"""
datetime_object = cls.CopyToDatetime(
timestamp, timezone, raise_error=raise_error)
return datetime_object.isoformat()
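The conversion can be sketched without the `CopyToDatetime` helper: microseconds since the Unix epoch, rendered as an ISO 8601 string in UTC (`micros_to_iso` is a hypothetical stand-in using only the standard library):

```python
import datetime

def micros_to_iso(timestamp):
    """Convert microseconds since the Unix epoch to an ISO 8601 string."""
    epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    # timedelta takes microseconds directly, avoiding float rounding.
    return (epoch + datetime.timedelta(microseconds=timestamp)).isoformat()
```

Because the datetime is timezone-aware, `isoformat()` appends the `+00:00` offset automatically.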
|
def API520_B(Pset, Pback, overpressure=0.1):
r'''Calculates capacity correction due to backpressure on balanced
spring-loaded PRVs in vapor service. For pilot operated valves,
this is always 1. Applicable up to 50% of the percent gauge backpressure.
For use in API 520 relief valve sizing. 1D interpolation among a table with
53 backpressures is performed.
Parameters
----------
Pset : float
Set pressure for relief [Pa]
Pback : float
Backpressure, [Pa]
overpressure : float, optional
The maximum fraction overpressure; one of 0.1, 0.16, or 0.21, []
Returns
-------
Kb : float
Correction due to vapor backpressure [-]
Notes
-----
If the calculated gauge backpressure is less than 30%, 38%, or 50% for
overpressures of 0.1, 0.16, or 0.21, a value of 1 is returned.
Percent gauge backpressure must be under 50%.
Examples
--------
Custom examples from figure 30:
>>> API520_B(1E6, 5E5)
0.7929945420944432
References
----------
.. [1] API Standard 520, Part 1 - Sizing and Selection.
'''
gauge_backpressure = (Pback-atm)/(Pset-atm)*100 # in percent
if overpressure not in [0.1, 0.16, 0.21]:
raise Exception('Only overpressures of 10%, 16%, or 21% are permitted')
if (overpressure == 0.1 and gauge_backpressure < 30) or (
overpressure == 0.16 and gauge_backpressure < 38) or (
overpressure == 0.21 and gauge_backpressure < 50):
return 1
elif gauge_backpressure > 50:
raise Exception('Gauge pressure must be < 50%')
if overpressure == 0.16:
Kb = interp(gauge_backpressure, Kb_16_over_x, Kb_16_over_y)
elif overpressure == 0.1:
Kb = interp(gauge_backpressure, Kb_10_over_x, Kb_10_over_y)
else:
Kb = 1.0 # overpressure == 0.21: no correction needed below the 50% limit
return Kb
|
r'''Calculates capacity correction due to backpressure on balanced
spring-loaded PRVs in vapor service. For pilot operated valves,
this is always 1. Applicable up to 50% gauge backpressure.
For use in API 520 relief valve sizing; 1D interpolation among a table with
53 backpressures is performed.
Parameters
----------
Pset : float
Set pressure for relief [Pa]
Pback : float
Backpressure, [Pa]
overpressure : float, optional
The maximum fraction overpressure; one of 0.1, 0.16, or 0.21, []
Returns
-------
Kb : float
Correction due to vapor backpressure [-]
Notes
-----
If the calculated gauge backpressure is less than 30%, 38%, or 50% for
overpressures of 0.1, 0.16, or 0.21, a value of 1 is returned.
Percent gauge backpressure must be under 50%.
Examples
--------
Custom examples from figure 30:
>>> API520_B(1E6, 5E5)
0.7929945420944432
References
----------
.. [1] API Standard 520, Part 1 - Sizing and Selection.
|
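The ``interp`` helper and the ``Kb_*`` tables above are defined elsewhere in the module. A minimal stand-in for the assumed behavior (clamped 1D linear table lookup) might look like:

```python
import bisect

def interp(x, xs, ys):
    # Simple 1D linear interpolation over a sorted table,
    # clamped to the endpoints (a stand-in for the helper above).
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, x)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(interp(35.0, [30.0, 40.0], [1.0, 0.8]))  # approximately 0.9
```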
def compute_logarithmic_scale(min_, max_, min_scale, max_scale):
"""Compute an optimal scale for logarithmic"""
if max_ <= 0 or min_ <= 0:
return []
min_order = int(floor(log10(min_)))
max_order = int(ceil(log10(max_)))
positions = []
amplitude = max_order - min_order
if amplitude <= 1:
return []
detail = 10.
while amplitude * detail < min_scale * 5:
detail *= 2
while amplitude * detail > max_scale * 3:
detail /= 2
for order in range(min_order, max_order + 1):
for i in range(int(detail)):
tick = (10 * i / detail or 1) * 10**order
tick = round_to_scale(tick, tick)
if min_ <= tick <= max_ and tick not in positions:
positions.append(tick)
return positions
|
Compute an optimal scale for a logarithmic axis
|
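The tick-placement idea can be sketched in isolation. This simplified version keeps only whole decades and ignores the ``detail`` subdivision and ``round_to_scale`` (which are defined elsewhere in the module):

```python
from math import floor, ceil, log10

def decade_ticks(min_, max_):
    # Place one tick at each power of ten spanning [min_, max_];
    # non-positive bounds yield no ticks, as in the function above.
    if min_ <= 0 or max_ <= 0:
        return []
    return [10 ** e
            for e in range(int(floor(log10(min_))),
                           int(ceil(log10(max_))) + 1)
            if min_ <= 10 ** e <= max_]

print(decade_ticks(5, 5000))  # [10, 100, 1000]
```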
def from_location(cls, location):
"""Try to create a Ladybug location from a location string.
Args:
location: Location string
Usage:
l = Location.from_location(location_string)
"""
if not location:
return cls()
try:
if hasattr(location, 'isLocation'):
# Ladybug location
return location
elif hasattr(location, 'Latitude'):
# Revit's location
return cls(city=str(location.Name.replace(",", " ")),
latitude=location.Latitude,
longitude=location.Longitude)
elif location.startswith('Site:'):
loc, city, latitude, longitude, time_zone, elevation = \
[x.strip() for x in re.findall(r'\r*\n*([^\r\n]*)[,|;]',
location, re.DOTALL)]
else:
try:
city, latitude, longitude, time_zone, elevation = \
[key.split(":")[-1].strip()
for key in location.split(",")]
except ValueError:
# it's just the city name
return cls(city=location)
return cls(city=city, country=None, latitude=latitude,
longitude=longitude, time_zone=time_zone,
elevation=elevation)
except Exception as e:
raise ValueError(
"Failed to create a Location from %s!\n%s" % (location, e))
|
Try to create a Ladybug location from a location string.
Args:
location: Location string
Usage:
l = Location.from_location(location_string)
|
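The comma-separated branch above can be sketched standalone. The field order (city, latitude, longitude, time zone, elevation) is taken from the code; the dict keys here are illustrative:

```python
def parse_location_string(location):
    # "City, lat, lon, tz, elev" -> dict of fields; anything that does
    # not split into five parts is treated as a bare city name,
    # mirroring the ValueError fallback above.
    try:
        city, lat, lon, tz, elev = [p.split(":")[-1].strip()
                                    for p in location.split(",")]
    except ValueError:
        return {'city': location}
    return {'city': city, 'latitude': float(lat), 'longitude': float(lon),
            'time_zone': float(tz), 'elevation': float(elev)}

print(parse_location_string("Denver, 39.76, -104.86, -7, 1650"))
```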
async def join_voice_channel(self, guild_id, channel_id):
"""
Alternative way to join a voice channel if node is known.
"""
voice_ws = self.get_voice_ws(guild_id)
await voice_ws.voice_state(guild_id, channel_id)
|
Alternative way to join a voice channel if node is known.
|
def results(context, history_log):
"""Process provided history log and results files."""
if context.obj is None:
context.obj = {}
context.obj['history_log'] = history_log
if context.invoked_subcommand is None:
context.invoke(show, item=1)
|
Process provided history log and results files.
|
def show_bounds(self, mesh=None, bounds=None, show_xaxis=True,
show_yaxis=True, show_zaxis=True, show_xlabels=True,
show_ylabels=True, show_zlabels=True, italic=False,
bold=True, shadow=False, font_size=None,
font_family=None, color=None,
xlabel='X Axis', ylabel='Y Axis', zlabel='Z Axis',
use_2d=False, grid=None, location='closest', ticks=None,
all_edges=False, corner_factor=0.5, fmt=None,
minor_ticks=False, loc=None, padding=0.0):
"""
Adds bounds axes. Shows the bounds of the most recent input
mesh unless mesh is specified.
Parameters
----------
mesh : vtkPolydata or unstructured grid, optional
Input mesh to draw bounds axes around
bounds : list or tuple, optional
Bounds to override mesh bounds.
[xmin, xmax, ymin, ymax, zmin, zmax]
show_xaxis : bool, optional
Makes x axis visible. Default True.
show_yaxis : bool, optional
Makes y axis visible. Default True.
show_zaxis : bool, optional
Makes z axis visible. Default True.
show_xlabels : bool, optional
Shows x labels. Default True.
show_ylabels : bool, optional
Shows y labels. Default True.
show_zlabels : bool, optional
Shows z labels. Default True.
italic : bool, optional
Italicises axis labels and numbers. Default False.
bold : bool, optional
Bolds axis labels and numbers. Default True.
shadow : bool, optional
Adds a black shadow to the text. Default False.
font_size : float, optional
Sets the size of the label font. Defaults to 16.
font_family : string, optional
Font family. Must be either courier, times, or arial.
color : string or 3 item list, optional
Color of all labels and axis titles. Default white.
Either a string, rgb list, or hex color string. For example:
color='white'
color='w'
color=[1, 1, 1]
color='#FFFFFF'
xlabel : string, optional
Title of the x axis. Default "X Axis"
ylabel : string, optional
Title of the y axis. Default "Y Axis"
zlabel : string, optional
Title of the z axis. Default "Z Axis"
use_2d : bool, optional
A bug with vtk 6.3 on Windows seems to cause this function
to crash; this option can be enabled for smoother plotting in
other environments.
grid : bool or str, optional
Add grid lines to the backface (``True``, ``'back'``, or
``'backface'``) or to the frontface (``'front'``,
``'frontface'``) of the axes actor.
location : str, optional
Set how the axes are drawn: either static (``'all'``),
closest triad (``'front'``), furthest triad (``'back'``),
static closest to the origin (``'origin'``), or outer
edges (``'outer'``) in relation to the camera
position. Options include: ``'all', 'front', 'back',
'origin', 'outer'``
ticks : str, optional
Set how the ticks are drawn on the axes grid. Options include:
``'inside', 'outside', 'both'``
all_edges : bool, optional
Adds an unlabeled and unticked box at the boundaries of
plot. Useful for when wanting to plot outer grids while
still retaining all edges of the boundary.
corner_factor : float, optional
If ``all_edges`` is enabled, this is the factor along each axis to
draw the default box. Default is 0.5 to show the full box.
loc : int, tuple, or list
Index of the renderer to add the actor to. For example,
``loc=2`` or ``loc=(1, 1)``. If None, selects the last
active Renderer.
padding : float, optional
An optional percent padding along each axial direction to cushion
the datasets in the scene from the axes annotations. Defaults to
no padding.
Returns
-------
cube_axes_actor : vtk.vtkCubeAxesActor
Bounds actor
Examples
--------
>>> import vtki
>>> from vtki import examples
>>> mesh = vtki.Sphere()
>>> plotter = vtki.Plotter()
>>> _ = plotter.add_mesh(mesh)
>>> _ = plotter.show_bounds(grid='front', location='outer', all_edges=True)
>>> plotter.show() # doctest:+SKIP
"""
kwargs = locals()
_ = kwargs.pop('self')
_ = kwargs.pop('loc')
self._active_renderer_index = self.loc_to_index(loc)
renderer = self.renderers[self._active_renderer_index]
renderer.show_bounds(**kwargs)
|
Adds bounds axes. Shows the bounds of the most recent input
mesh unless mesh is specified.
Parameters
----------
mesh : vtkPolydata or unstructured grid, optional
Input mesh to draw bounds axes around
bounds : list or tuple, optional
Bounds to override mesh bounds.
[xmin, xmax, ymin, ymax, zmin, zmax]
show_xaxis : bool, optional
Makes x axis visible. Default True.
show_yaxis : bool, optional
Makes y axis visible. Default True.
show_zaxis : bool, optional
Makes z axis visible. Default True.
show_xlabels : bool, optional
Shows x labels. Default True.
show_ylabels : bool, optional
Shows y labels. Default True.
show_zlabels : bool, optional
Shows z labels. Default True.
italic : bool, optional
Italicises axis labels and numbers. Default False.
bold : bool, optional
Bolds axis labels and numbers. Default True.
shadow : bool, optional
Adds a black shadow to the text. Default False.
font_size : float, optional
Sets the size of the label font. Defaults to 16.
font_family : string, optional
Font family. Must be either courier, times, or arial.
color : string or 3 item list, optional
Color of all labels and axis titles. Default white.
Either a string, rgb list, or hex color string. For example:
color='white'
color='w'
color=[1, 1, 1]
color='#FFFFFF'
xlabel : string, optional
Title of the x axis. Default "X Axis"
ylabel : string, optional
Title of the y axis. Default "Y Axis"
zlabel : string, optional
Title of the z axis. Default "Z Axis"
use_2d : bool, optional
A bug with vtk 6.3 on Windows seems to cause this function
to crash; this option can be enabled for smoother plotting in
other environments.
grid : bool or str, optional
Add grid lines to the backface (``True``, ``'back'``, or
``'backface'``) or to the frontface (``'front'``,
``'frontface'``) of the axes actor.
location : str, optional
Set how the axes are drawn: either static (``'all'``),
closest triad (``'front'``), furthest triad (``'back'``),
static closest to the origin (``'origin'``), or outer
edges (``'outer'``) in relation to the camera
position. Options include: ``'all', 'front', 'back',
'origin', 'outer'``
ticks : str, optional
Set how the ticks are drawn on the axes grid. Options include:
``'inside', 'outside', 'both'``
all_edges : bool, optional
Adds an unlabeled and unticked box at the boundaries of
plot. Useful for when wanting to plot outer grids while
still retaining all edges of the boundary.
corner_factor : float, optional
If ``all_edges`` is enabled, this is the factor along each axis to
draw the default box. Default is 0.5 to show the full box.
loc : int, tuple, or list
Index of the renderer to add the actor to. For example,
``loc=2`` or ``loc=(1, 1)``. If None, selects the last
active Renderer.
padding : float, optional
An optional percent padding along each axial direction to cushion
the datasets in the scene from the axes annotations. Defaults to
no padding.
Returns
-------
cube_axes_actor : vtk.vtkCubeAxesActor
Bounds actor
Examples
--------
>>> import vtki
>>> from vtki import examples
>>> mesh = vtki.Sphere()
>>> plotter = vtki.Plotter()
>>> _ = plotter.add_mesh(mesh)
>>> _ = plotter.show_bounds(grid='front', location='outer', all_edges=True)
>>> plotter.show() # doctest:+SKIP
|
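The delegation trick at the end of ``show_bounds`` (capturing every argument with ``locals()`` and forwarding the lot) can be shown in miniature; the names here are illustrative:

```python
def render(**kwargs):
    # Stand-in for renderer.show_bounds(**kwargs).
    return sorted(kwargs)

def configure(color='white', font_size=16, bold=True):
    # Capture all arguments at once and forward them,
    # the same pattern as kwargs = locals() above.
    kwargs = dict(locals())
    return render(**kwargs)

print(configure(font_size=12))  # ['bold', 'color', 'font_size']
```

Note that ``locals()`` must be captured before any other local variables are created, otherwise they would be forwarded too (hence the ``kwargs.pop('self')`` calls in the original).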
def _bnd(self, xloc, dist, cache):
"""Distribution bounds."""
return numpy.log(evaluation.evaluate_bound(
dist, numpy.e**xloc, cache=cache))
|
Distribution bounds.
|
def macro_body(self, node, frame):
"""Dump the function def of a macro or call block."""
frame = frame.inner()
frame.symbols.analyze_node(node)
macro_ref = MacroRef(node)
explicit_caller = None
skip_special_params = set()
args = []
for idx, arg in enumerate(node.args):
if arg.name == 'caller':
explicit_caller = idx
if arg.name in ('kwargs', 'varargs'):
skip_special_params.add(arg.name)
args.append(frame.symbols.ref(arg.name))
undeclared = find_undeclared(node.body, ('caller', 'kwargs', 'varargs'))
if 'caller' in undeclared:
# In older Jinja2 versions there was a bug that allowed caller
# to retain the special behavior even if it was mentioned in
# the argument list. However thankfully this was only really
# working if it was the last argument. So we are explicitly
# checking this now and error out if it is anywhere else in
# the argument list.
if explicit_caller is not None:
try:
node.defaults[explicit_caller - len(node.args)]
except IndexError:
self.fail('When defining macros or call blocks the '
'special "caller" argument must be omitted '
'or be given a default.', node.lineno)
else:
args.append(frame.symbols.declare_parameter('caller'))
macro_ref.accesses_caller = True
if 'kwargs' in undeclared and 'kwargs' not in skip_special_params:
args.append(frame.symbols.declare_parameter('kwargs'))
macro_ref.accesses_kwargs = True
if 'varargs' in undeclared and 'varargs' not in skip_special_params:
args.append(frame.symbols.declare_parameter('varargs'))
macro_ref.accesses_varargs = True
# macros are delayed, they never require output checks
frame.require_output_check = False
frame.symbols.analyze_node(node)
self.writeline('%s(%s):' % (self.func('macro'), ', '.join(args)), node)
self.indent()
self.buffer(frame)
self.enter_frame(frame)
self.push_parameter_definitions(frame)
for idx, arg in enumerate(node.args):
ref = frame.symbols.ref(arg.name)
self.writeline('if %s is missing:' % ref)
self.indent()
try:
default = node.defaults[idx - len(node.args)]
except IndexError:
self.writeline('%s = undefined(%r, name=%r)' % (
ref,
'parameter %r was not provided' % arg.name,
arg.name))
else:
self.writeline('%s = ' % ref)
self.visit(default, frame)
self.mark_parameter_stored(ref)
self.outdent()
self.pop_parameter_definitions()
self.blockvisit(node.body, frame)
self.return_buffer_contents(frame, force_unescaped=True)
self.leave_frame(frame, with_python_scope=True)
self.outdent()
return frame, macro_ref
|
Dump the function def of a macro or call block.
|
def get_mapping(self, doc_type=None, indices=None, raw=False):
"""
Register specific mapping definition for a specific type against one or more indices.
(See :ref:`es-guide-reference-api-admin-indices-get-mapping`)
"""
if doc_type is None and indices is None:
path = make_path("_mapping")
is_mapping = False
else:
indices = self.conn._validate_indices(indices)
if doc_type:
path = make_path(','.join(indices), doc_type, "_mapping")
is_mapping = True
else:
path = make_path(','.join(indices), "_mapping")
is_mapping = False
result = self.conn._send_request('GET', path)
if raw:
return result
from pyes.mappings import Mapper
mapper = Mapper(result, is_mapping=is_mapping,
connection=self.conn,
document_object_field=self.conn.document_object_field)
if doc_type:
return mapper.mappings[doc_type]
return mapper
|
Register specific mapping definition for a specific type against one or more indices.
(See :ref:`es-guide-reference-api-admin-indices-get-mapping`)
|
def link(self, camera):
""" Link this camera with another camera of the same type
Linked cameras keep each other's state in sync.
Parameters
----------
camera : instance of Camera
The other camera to link.
"""
cam1, cam2 = self, camera
# Remove if already linked
while cam1 in cam2._linked_cameras:
cam2._linked_cameras.remove(cam1)
while cam2 in cam1._linked_cameras:
cam1._linked_cameras.remove(cam2)
# Link both ways
cam1._linked_cameras.append(cam2)
cam2._linked_cameras.append(cam1)
|
Link this camera with another camera of the same type
Linked cameras keep each other's state in sync.
Parameters
----------
camera : instance of Camera
The other camera to link.
|
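The de-duplicate-then-relink pattern above keeps the links symmetric and idempotent. A minimal sketch with a bare-bones class:

```python
class Camera:
    # Minimal sketch of the bidirectional-link pattern above.
    def __init__(self):
        self._linked_cameras = []

    def link(self, other):
        # Remove any existing links first, then link both ways,
        # so calling link() twice never duplicates entries.
        while self in other._linked_cameras:
            other._linked_cameras.remove(self)
        while other in self._linked_cameras:
            self._linked_cameras.remove(other)
        self._linked_cameras.append(other)
        other._linked_cameras.append(self)

a, b = Camera(), Camera()
a.link(b)
a.link(b)  # relinking does not duplicate
print(len(a._linked_cameras), len(b._linked_cameras))  # 1 1
```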
def thumbnail_preview(src_path):
''' Return the path to a small thumbnail preview. '''
try:
assert(exists(src_path))
width = '1980'
dest_dir = mkdtemp(prefix='pyglass')
cmd = [QLMANAGE, '-t', '-s', width, src_path, '-o', dest_dir]
assert(check_call(cmd) == 0)
src_filename = basename(src_path)
dest_list = glob(join(dest_dir, '%s.png' % (src_filename)))
assert(dest_list)
dest_path = dest_list[0]
assert(exists(dest_path))
return dest_path
except Exception:
return None
|
Return the path to a small thumbnail preview.
|
def _update(self, rect, delta_y, force_update_margins=False):
""" Updates panels """
helper = TextHelper(self.editor)
if not self:
return
for zones_id, zone in self._panels.items():
if zones_id == Panel.Position.TOP or \
zones_id == Panel.Position.BOTTOM:
continue
panels = list(zone.values())
for panel in panels:
if panel.scrollable and delta_y:
panel.scroll(0, delta_y)
line, col = helper.cursor_position()
oline, ocol = self._cached_cursor_pos
if line != oline or col != ocol or panel.scrollable:
panel.update(0, rect.y(), panel.width(), rect.height())
self._cached_cursor_pos = helper.cursor_position()
if (rect.contains(self.editor.viewport().rect()) or
force_update_margins):
self._update_viewport_margins()
|
Updates panels
|
def low_mem_sq(m, step=100000):
"""np.dot(m, m.T) with low mem usage, by doing it in small steps"""
if not m.flags.c_contiguous:
raise ValueError('m must be C ordered for this to work with less mem.')
# -- can make this even faster with pre-allocating arrays, but not worth it
# right now
# mmt = np.zeros([m.shape[0], m.shape[0]]) #6us
# mt_tmp = np.zeros([step, m.shape[0]])
# for a in range(0, m.shape[1], step):
# mx = min(a+step, m.shape[1])
# mt_tmp[:mx-a,:] = m.T[a:mx]
# # np.dot(m_tmp, m.T, out=mmt[a:mx])
# # np.dot(m, m[a:mx].T, out=mmt[:, a:mx])
# np.dot(m[:,a:mx], mt_tmp[:mx], out=mmt)
# return mmt
mmt = np.zeros([m.shape[0], m.shape[0]]) #6us
# m_tmp = np.zeros([step, m.shape[1]])
for a in range(0, m.shape[0], step):
mx = min(a+step, m.shape[0])
# m_tmp[:] = m[a:mx]
# np.dot(m_tmp, m.T, out=mmt[a:mx])
mmt[:, a:mx] = np.dot(m, m[a:mx].T)
return mmt
|
np.dot(m, m.T) with low mem usage, by doing it in small steps
|
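The blocking idea (compute the Gram matrix one column block of the output at a time, so only a slice of the transpose is materialized) can be shown in pure Python, without NumPy:

```python
def gram_blocked(m, step=2):
    # m @ m.T computed in column blocks of the output,
    # mirroring the low-memory loop above.
    n = len(m)
    out = [[0.0] * n for _ in range(n)]
    for a in range(0, n, step):
        mx = min(a + step, n)
        for i in range(n):
            for j in range(a, mx):
                out[i][j] = sum(x * y for x, y in zip(m[i], m[j]))
    return out

m = [[1.0, 2.0], [3.0, 4.0]]
print(gram_blocked(m))  # [[5.0, 11.0], [11.0, 25.0]]
```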
def init_app(self, app, minters_entry_point_group=None,
fetchers_entry_point_group=None):
"""Flask application initialization.
Initialize:
* The CLI commands.
* Initialize the logger (Default: `app.debug`).
* Initialize the default admin object link endpoint.
(Default: `{"rec": "recordmetadata.details_view"}` if
`invenio-records` is installed, otherwise `{}`).
* Register the `pid_exists` template filter.
* Initialize extension state.
:param app: The Flask application
:param minters_entry_point_group: The minters entry point group
(Default: None).
:param fetchers_entry_point_group: The fetchers entry point group
(Default: None).
:returns: PIDStore state application.
"""
self.init_config(app)
# Initialize CLI
app.cli.add_command(cmd)
# Initialize logger
app.config.setdefault('PIDSTORE_APP_LOGGER_HANDLERS', app.debug)
if app.config['PIDSTORE_APP_LOGGER_HANDLERS']:
for handler in app.logger.handlers:
logger.addHandler(handler)
# Initialize admin object link endpoints.
try:
pkg_resources.get_distribution('invenio-records')
app.config.setdefault('PIDSTORE_OBJECT_ENDPOINTS', dict(
rec='recordmetadata.details_view',
))
except pkg_resources.DistributionNotFound:
app.config.setdefault('PIDSTORE_OBJECT_ENDPOINTS', {})
# Register template filter
app.jinja_env.filters['pid_exists'] = pid_exists
# Initialize extension state.
state = _PIDStoreState(
app=app,
minters_entry_point_group=minters_entry_point_group,
fetchers_entry_point_group=fetchers_entry_point_group,
)
app.extensions['invenio-pidstore'] = state
return state
|
Flask application initialization.
Initialize:
* The CLI commands.
* Initialize the logger (Default: `app.debug`).
* Initialize the default admin object link endpoint.
(Default: `{"rec": "recordmetadata.details_view"}` if
`invenio-records` is installed, otherwise `{}`).
* Register the `pid_exists` template filter.
* Initialize extension state.
:param app: The Flask application
:param minters_entry_point_group: The minters entry point group
(Default: None).
:param fetchers_entry_point_group: The fetchers entry point group
(Default: None).
:returns: PIDStore state application.
|
def createpath(path, mode, exists_ok=True):
"""
Create directories in the indicated path.
:param path:
:param mode:
:param exists_ok:
:return:
"""
try:
os.makedirs(path, mode)
except OSError as e:
if e.errno != errno.EEXIST or not exists_ok:
raise
|
Create directories in the indicated path.
:param path:
:param mode:
:param exists_ok:
:return:
|
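The same behavior in modern Python, where the EEXIST-absorbing pattern can be written with ``try``/``except OSError`` as above (or with ``os.makedirs(..., exist_ok=True)`` when no custom flag is needed):

```python
import errno
import os
import tempfile

def createpath(path, mode=0o755, exists_ok=True):
    # Create all directories in path; swallow the "already exists"
    # error only when exists_ok is True.
    try:
        os.makedirs(path, mode)
    except OSError as e:
        if e.errno != errno.EEXIST or not exists_ok:
            raise

d = os.path.join(tempfile.mkdtemp(), 'a', 'b')
createpath(d)
createpath(d)  # second call is a no-op, not an error
print(os.path.isdir(d))  # True
```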
def find_locales(self) -> Dict[str, gettext.GNUTranslations]:
"""
Load all compiled locales from path
:return: dict with locales
"""
translations = {}
for name in os.listdir(self.path):
if not os.path.isdir(os.path.join(self.path, name)):
continue
mo_path = os.path.join(self.path, name, 'LC_MESSAGES', self.domain + '.mo')
if os.path.exists(mo_path):
with open(mo_path, 'rb') as fp:
translations[name] = gettext.GNUTranslations(fp)
elif os.path.exists(mo_path[:-2] + 'po'):
raise RuntimeError(f"Found locale '{name}' but this language is not compiled!")
return translations
|
Load all compiled locales from path
:return: dict with locales
|
def batch_split_words(self, sentences: List[str]) -> List[List[Token]]:
"""
Spacy needs to do batch processing, or it can be really slow. This method lets you take
advantage of that if you want. The default implementation just iterates over the sentences
and calls ``split_words``, but the ``SpacyWordSplitter`` will actually do batched
processing.
"""
return [self.split_words(sentence) for sentence in sentences]
|
Spacy needs to do batch processing, or it can be really slow. This method lets you take
advantage of that if you want. The default implementation just iterates over the sentences
and calls ``split_words``, but the ``SpacyWordSplitter`` will actually do batched
processing.
|
def navigation_info(request):
'''Expose whether to display the navigation header and footer'''
if request.GET.get('wafer_hide_navigation') == "1":
nav_class = "wafer-invisible"
else:
nav_class = "wafer-visible"
context = {
'WAFER_NAVIGATION_VISIBILITY': nav_class,
}
return context
|
Expose whether to display the navigation header and footer
|
def get_precision_regex():
"""Build regular expression used to extract precision
metric from command output"""
expr = re.escape(PRECISION_FORMULA)
expr += r'=\s*(\S*)\s.*\s([A-Z]*)'
return re.compile(expr)
|
Build regular expression used to extract precision
metric from command output
|
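A standalone check of the extraction pattern above. ``PRECISION_FORMULA`` is defined elsewhere in the real module; the value used here is only a placeholder to exercise the regex:

```python
import re

# Placeholder formula -- the real PRECISION_FORMULA is defined elsewhere.
PRECISION_FORMULA = 'max(|di - hex(di)|)'

# Escape the literal formula, then capture the value and a trailing
# upper-case token, exactly as the function above builds its pattern.
expr = re.escape(PRECISION_FORMULA) + r'=\s*(\S*)\s.*\s([A-Z]*)'
regex = re.compile(expr)

m = regex.search('max(|di - hex(di)|)= 1.234e-07 over all FFT')
print(m.groups())  # ('1.234e-07', 'FFT')
```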
def _col_name(index):
"""
Converts a column index to a column name.
>>> _col_name(0)
'A'
>>> _col_name(26)
'AA'
"""
for exp in itertools.count(1):
limit = 26 ** exp
if index < limit:
return ''.join(chr(ord('A') + index // (26 ** i) % 26) for i in range(exp-1, -1, -1))
index -= limit
|
Converts a column index to a column name.
>>> _col_name(0)
'A'
>>> _col_name(26)
'AA'
|
def mqc_load_userconfig(paths=()):
""" Overwrite config defaults with user config files """
# Load and parse installation config file if we find it
mqc_load_config(os.path.join( os.path.dirname(MULTIQC_DIR), 'multiqc_config.yaml'))
# Load and parse a user config file if we find it
mqc_load_config(os.path.expanduser('~/.multiqc_config.yaml'))
# Load and parse a config file path set in an ENV variable if we find it
if os.environ.get('MULTIQC_CONFIG_PATH') is not None:
mqc_load_config( os.environ.get('MULTIQC_CONFIG_PATH') )
# Load and parse a config file in this working directory if we find it
mqc_load_config('multiqc_config.yaml')
# Custom command line config
for p in paths:
mqc_load_config(p)
|
Overwrite config defaults with user config files
|
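The lookup order above (installation config, then home directory, then environment variable, then working directory, then explicit paths) is a layered cascade where later layers win. The merge semantics can be sketched with plain dicts:

```python
def cascade_config(*layers):
    # Later layers override earlier ones, like the config
    # lookup order above (install -> home -> env -> cwd -> CLI).
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

print(cascade_config({'threads': 1, 'title': 'x'}, {'threads': 4}))
# {'threads': 4, 'title': 'x'}
```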
def middleware(self, *args, **kwargs):
"""Decorate and register middleware
:param args: captures all of the positional arguments passed in
:type args: tuple(Any)
:param kwargs: captures the keyword arguments passed in
:type kwargs: dict(Any)
:return: The middleware function to use as the decorator
:rtype: fn
"""
kwargs.setdefault('priority', 5)
kwargs.setdefault('relative', None)
kwargs.setdefault('attach_to', None)
kwargs.setdefault('with_context', False)
if len(args) == 1 and callable(args[0]):
middle_f = args[0]
self._middlewares.append(
FutureMiddleware(middle_f, args=tuple(), kwargs=kwargs))
return middle_f
def wrapper(middleware_f):
self._middlewares.append(
FutureMiddleware(middleware_f, args=args, kwargs=kwargs))
return middleware_f
return wrapper
|
Decorate and register middleware
:param args: captures all of the positional arguments passed in
:type args: tuple(Any)
:param kwargs: captures the keyword arguments passed in
:type kwargs: dict(Any)
:return: The middleware function to use as the decorator
:rtype: fn
|
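The branch on ``len(args) == 1 and callable(args[0])`` is what lets the same decorator be used bare or with arguments. A minimal sketch of that dual-mode pattern (the registry here is illustrative):

```python
registered = []

def middleware(*args, **kwargs):
    # Usable both as @middleware and as @middleware('request', ...),
    # the same dispatch as the method above.
    if len(args) == 1 and callable(args[0]):
        registered.append((args[0], (), kwargs))
        return args[0]
    def wrapper(fn):
        registered.append((fn, args, kwargs))
        return fn
    return wrapper

@middleware
def a(): pass

@middleware('request', priority=1)
def b(): pass

print([f.__name__ for f, _, _ in registered])  # ['a', 'b']
```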
def get_paths(self):
"""Get all paths from the root to the leaves.
For example, given a chain like `{'a':{'b':{'c':None}}}`,
this method would return `[['a', 'b', 'c']]`.
Returns:
A list of lists of paths.
"""
paths = []
for key, child in six.iteritems(self):
if isinstance(child, TreeMap) and child:
# current child is an intermediate node
for path in child.get_paths():
path.insert(0, key)
paths.append(path)
else:
# current child is an endpoint
paths.append([key])
return paths
|
Get all paths from the root to the leaves.
For example, given a chain like `{'a':{'b':{'c':None}}}`,
this method would return `[['a', 'b', 'c']]`.
Returns:
A list of lists of paths.
|
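The recursion above works on any nested mapping, not just ``TreeMap``; a free-function version over plain dicts behaves the same way:

```python
def get_paths(tree):
    # Root-to-leaf paths of a nested dict, as described above.
    # An empty dict or non-dict value marks a leaf.
    paths = []
    for key, child in tree.items():
        if isinstance(child, dict) and child:
            for path in get_paths(child):
                paths.append([key] + path)
        else:
            paths.append([key])
    return paths

print(get_paths({'a': {'b': {'c': None}}}))  # [['a', 'b', 'c']]
```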
def f_store(self, recursive=True, store_data=pypetconstants.STORE_DATA,
max_depth=None):
"""Stores a group node to disk
:param recursive:
Whether recursively all children should be stored too. Default is ``True``.
:param store_data:
For how to choose 'store_data' see :ref:`more-on-storing`.
:param max_depth:
In case `recursive` is `True`, you can specify the maximum depth to store
data relative to the current node. Leave `None` if you don't want to limit
the depth.
"""
traj = self._nn_interface._root_instance
storage_service = traj.v_storage_service
storage_service.store(pypetconstants.GROUP, self,
trajectory_name=traj.v_name,
recursive=recursive,
store_data=store_data,
max_depth=max_depth)
|
Stores a group node to disk
:param recursive:
Whether recursively all children should be stored too. Default is ``True``.
:param store_data:
For how to choose 'store_data' see :ref:`more-on-storing`.
:param max_depth:
In case `recursive` is `True`, you can specify the maximum depth to store
data relative to the current node. Leave `None` if you don't want to limit
the depth.
|
def _get_data(self) -> BaseFrameManager:
"""Perform the map step
Returns:
A BaseFrameManager object.
"""
def iloc(partition, row_internal_indices, col_internal_indices):
return partition.iloc[row_internal_indices, col_internal_indices]
masked_data = self.parent_data.apply_func_to_indices_both_axis(
func=iloc,
row_indices=self.index_map.values,
col_indices=self.columns_map.values,
lazy=False,
keep_remaining=False,
)
return masked_data
|
Perform the map step
Returns:
A BaseFrameManager object.
|
def _fmt_structured(d):
"""Formats '{k1:v1, k2:v2}' => 'time=... pid=... k1=v1 k2=v2'
Output is lexically sorted, *except* the time and pid always
come first, to assist with human scanning of the data.
"""
timeEntry = datetime.datetime.utcnow().strftime(
"time=%Y-%m-%dT%H:%M:%S.%f-00")
pidEntry = "pid=" + str(os.getpid())
rest = sorted('='.join([str(k), str(v)])
for (k, v) in list(d.items()))
return ' '.join([timeEntry, pidEntry] + rest)
|
Formats '{k1:v1, k2:v2}' => 'time=... pid=... k1=v1 k2=v2'
Output is lexically sorted, *except* the time and pid always
come first, to assist with human scanning of the data.
|
def arg(*args, **kwargs):
"""Return an attrib() that can be fed as a command-line argument.
This function is a wrapper for an attr.attrib to create a corresponding
command line argument for it. Use it with the same arguments as argparse's
add_argument().
Example:
>>> @attrs
... class MyFeature(Feature):
... my_number = arg('-n', '--number', default=3)
... def run(self):
... print('Your number:', self.my_number)
Now you could run it like `firefed myfeature --number 5`.
"""
metadata = {'arg_params': (args, kwargs)}
return attrib(default=arg_default(*args, **kwargs), metadata=metadata)
|
Return an attrib() that can be fed as a command-line argument.
This function is a wrapper for an attr.attrib to create a corresponding
command line argument for it. Use it with the same arguments as argparse's
add_argument().
Example:
>>> @attrs
... class MyFeature(Feature):
... my_number = arg('-n', '--number', default=3)
... def run(self):
... print('Your number:', self.my_number)
Now you could run it like `firefed myfeature --number 5`.
|
def decimal_format(value, TWOPLACES=Decimal(10) ** -2):
'Format a decimal.Decimal-like value to 2 decimal places.'
if not isinstance(value, Decimal):
value = Decimal(str(value))
return value.quantize(TWOPLACES)
|
Format a decimal.Decimal-like value to 2 decimal places.
|
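A quick standalone check of the quantization idiom. Note that ``Decimal(10) ** -2`` is the exponent that yields ``Decimal('0.01')``, i.e. two decimal places:

```python
from decimal import Decimal

TWOPLACES = Decimal(10) ** -2  # Decimal('0.01')

def decimal_format(value):
    # Coerce via str() to avoid binary-float artifacts,
    # then quantize to two decimal places.
    if not isinstance(value, Decimal):
        value = Decimal(str(value))
    return value.quantize(TWOPLACES)

print(decimal_format(3.14159))  # 3.14
```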
def maybe_inspect_zip(models):
r'''
Detect if models is a list of protocol buffer files or a ZIP file.
If the latter, return the list of protocol buffer files contained
in the archive.
'''
if not(is_zip_file(models)):
return models
if len(models) > 1:
return models
if len(models) < 1:
raise AssertionError('No models at all')
return zipfile.ZipFile(models[0]).namelist()
|
r'''
Detect if models is a list of protocol buffer files or a ZIP file.
If the latter, return the list of protocol buffer files contained
in the archive.
|
def create_session(self, session_request, protocol):
"""CreateSession.
[Preview API] Creates a session, a wrapper around a feed that can store additional metadata on the packages published to it.
:param :class:`<SessionRequest> <azure.devops.v5_0.provenance.models.SessionRequest>` session_request: The feed and metadata for the session
:param str protocol: The protocol that the session will target
:rtype: :class:`<SessionResponse> <azure.devops.v5_0.provenance.models.SessionResponse>`
"""
route_values = {}
if protocol is not None:
route_values['protocol'] = self._serialize.url('protocol', protocol, 'str')
content = self._serialize.body(session_request, 'SessionRequest')
response = self._send(http_method='POST',
location_id='503b4e54-ebf4-4d04-8eee-21c00823c2ac',
version='5.0-preview.1',
route_values=route_values,
content=content)
return self._deserialize('SessionResponse', response)
|
CreateSession.
[Preview API] Creates a session, a wrapper around a feed that can store additional metadata on the packages published to it.
:param :class:`<SessionRequest> <azure.devops.v5_0.provenance.models.SessionRequest>` session_request: The feed and metadata for the session
:param str protocol: The protocol that the session will target
:rtype: :class:`<SessionResponse> <azure.devops.v5_0.provenance.models.SessionResponse>`
|
def generate_cutV_genomic_CDR3_segs(self):
"""Add palindromic inserted nucleotides to germline V sequences.
The maximum number of palindromic insertions are appended to the
germline V segments so that delV can index directly for number of
nucleotides to delete from a segment.
Sets the attribute cutV_genomic_CDR3_segs.
"""
max_palindrome = self.max_delV_palindrome
self.cutV_genomic_CDR3_segs = []
for CDR3_V_seg in [x[1] for x in self.genV]:
if len(CDR3_V_seg) < max_palindrome:
self.cutV_genomic_CDR3_segs += [cutR_seq(CDR3_V_seg, 0, len(CDR3_V_seg))]
else:
self.cutV_genomic_CDR3_segs += [cutR_seq(CDR3_V_seg, 0, max_palindrome)]
|
Add palindromic inserted nucleotides to germline V sequences.
The maximum number of palindromic insertions are appended to the
germline V segments so that delV can index directly for number of
nucleotides to delete from a segment.
Sets the attribute cutV_genomic_CDR3_segs.
|
def get_container_service(access_token, subscription_id, resource_group, service_name):
'''Get details about an Azure Container Service
Args:
access_token (str): A valid Azure authentication token.
subscription_id (str): Azure subscription id.
resource_group (str): Azure resource group name.
service_name (str): Name of container service.
Returns:
HTTP response. JSON model.
'''
endpoint = ''.join([get_rm_endpoint(),
'/subscriptions/', subscription_id,
'/resourcegroups/', resource_group,
'/providers/Microsoft.ContainerService/ContainerServices/', service_name,
'?api-version=', ACS_API])
return do_get(endpoint, access_token)
|
Get details about an Azure Container Service
Args:
access_token (str): A valid Azure authentication token.
subscription_id (str): Azure subscription id.
resource_group (str): Azure resource group name.
service_name (str): Name of container service.
Returns:
HTTP response. JSON model.
|
def hook_drop(self):
""" Install hooks for drop operations.
"""
widget = self.widget
widget.setAcceptDrops(True)
widget.dragEnterEvent = self.dragEnterEvent
widget.dragMoveEvent = self.dragMoveEvent
widget.dragLeaveEvent = self.dragLeaveEvent
widget.dropEvent = self.dropEvent
|
Install hooks for drop operations.
|
def send(self, stanza):
        """Send a stanza somewhere.
The default implementation sends it via the `uplink` if it is defined
or raises the `NoRouteError`.
:Parameters:
- `stanza`: the stanza to send.
:Types:
- `stanza`: `pyxmpp.stanza.Stanza`"""
if self.uplink:
self.uplink.send(stanza)
else:
raise NoRouteError("No route for stanza")
|
Send a stanza somewhere.
The default implementation sends it via the `uplink` if it is defined
or raises the `NoRouteError`.
:Parameters:
- `stanza`: the stanza to send.
:Types:
- `stanza`: `pyxmpp.stanza.Stanza`
|
def update_user_password(new_pwd_user_id, new_password,**kwargs):
"""
Update a user's password
"""
#check_perm(kwargs.get('user_id'), 'edit_user')
try:
user_i = db.DBSession.query(User).filter(User.id==new_pwd_user_id).one()
user_i.password = bcrypt.hashpw(str(new_password).encode('utf-8'), bcrypt.gensalt())
return user_i
except NoResultFound:
raise ResourceNotFoundError("User (id=%s) not found"%(new_pwd_user_id))
|
Update a user's password
|
def read_samples(self, sr=None, offset=0, duration=None):
"""
Return the samples from the track in the container.
Uses librosa for resampling, if needed.
Args:
sr (int): If ``None``, uses the sampling rate given by the file,
otherwise resamples to the given sampling rate.
offset (float): The time in seconds, from where to start reading
the samples (rel. to the file start).
duration (float): The length of the samples to read in seconds.
Returns:
np.ndarray: A numpy array containing the samples as a
floating point (numpy.float32) time series.
"""
with self.container.open_if_needed(mode='r') as cnt:
samples, native_sr = cnt.get(self.key)
start_sample_index = int(offset * native_sr)
if duration is None:
end_sample_index = samples.shape[0]
else:
end_sample_index = int((offset + duration) * native_sr)
samples = samples[start_sample_index:end_sample_index]
if sr is not None and sr != native_sr:
samples = librosa.core.resample(
samples,
native_sr,
sr,
res_type='kaiser_best'
)
return samples
|
Return the samples from the track in the container.
Uses librosa for resampling, if needed.
Args:
sr (int): If ``None``, uses the sampling rate given by the file,
otherwise resamples to the given sampling rate.
offset (float): The time in seconds, from where to start reading
the samples (rel. to the file start).
duration (float): The length of the samples to read in seconds.
Returns:
np.ndarray: A numpy array containing the samples as a
floating point (numpy.float32) time series.
|
def bootstrap_repl(which_ns: str) -> types.ModuleType:
    """Bootstrap the REPL with a few useful vars and return the
    bootstrapped module so its functions can be used by the REPL
    command."""
repl_ns = runtime.Namespace.get_or_create(sym.symbol("basilisp.repl"))
ns = runtime.Namespace.get_or_create(sym.symbol(which_ns))
repl_module = importlib.import_module("basilisp.repl")
ns.add_alias(sym.symbol("basilisp.repl"), repl_ns)
ns.refer_all(repl_ns)
return repl_module
|
Bootstrap the REPL with a few useful vars and return the
bootstrapped module so its functions can be used by the REPL
command.
|
def loads(s: str, **kwargs) -> JsonObj:
""" Convert a json_str into a JsonObj
:param s: a str instance containing a JSON document
:param kwargs: arguments see: json.load for details
:return: JsonObj representing the json string
"""
if isinstance(s, (bytes, bytearray)):
s = s.decode(json.detect_encoding(s), 'surrogatepass')
return json.loads(s, object_hook=lambda pairs: JsonObj(**pairs), **kwargs)
|
Convert a json_str into a JsonObj
:param s: a str instance containing a JSON document
:param kwargs: arguments see: json.load for details
:return: JsonObj representing the json string
|
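`JsonObj` is defined elsewhere in that library; to illustrate the `object_hook` pattern the function uses, here is a sketch with `types.SimpleNamespace` standing in for `JsonObj` (the hook runs bottom-up, so nested objects are converted first):

```python
import json
from types import SimpleNamespace

# SimpleNamespace is a stand-in for JsonObj, purely to demonstrate the
# object_hook mechanism; the real loads() constructs JsonObj instances.
def loads_ns(s, **kwargs):
    return json.loads(s, object_hook=lambda pairs: SimpleNamespace(**pairs), **kwargs)

obj = loads_ns('{"name": "demo", "nested": {"x": 1}}')
print(obj.name, obj.nested.x)  # demo 1
```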
def load_profiles_from_file(self, fqfn):
"""Load profiles from file.
Args:
fqfn (str): Fully qualified file name.
"""
if self.args.verbose:
print('Loading profiles from File: {}{}{}'.format(c.Style.BRIGHT, c.Fore.MAGENTA, fqfn))
with open(fqfn, 'r+') as fh:
data = json.load(fh)
for profile in data:
# force update old profiles
self.profile_update(profile)
if self.args.action == 'validate':
self.validate(profile)
fh.seek(0)
fh.write(json.dumps(data, indent=2, sort_keys=True))
fh.truncate()
for d in data:
if d.get('profile_name') in self.profiles:
self.handle_error(
'Found a duplicate profile name ({}).'.format(d.get('profile_name'))
)
self.profiles.setdefault(
d.get('profile_name'),
{'data': d, 'ij_filename': d.get('install_json'), 'fqfn': fqfn},
)
|
Load profiles from file.
Args:
fqfn (str): Fully qualified file name.
|
def on_train_begin(self, **kwargs):
"Call watch method to log model topology, gradients & weights"
# Set self.best, method inherited from "TrackerCallback" by "SaveModelCallback"
super().on_train_begin()
# Ensure we don't call "watch" multiple times
if not WandbCallback.watch_called:
WandbCallback.watch_called = True
# Logs model topology and optionally gradients and weights
wandb.watch(self.learn.model, log=self.log)
|
Call watch method to log model topology, gradients & weights
|
def scoped_session_decorator(func):
"""Manage contexts and add debugging to db sessions."""
@wraps(func)
def wrapper(*args, **kwargs):
with sessions_scope(session):
# The session used in func comes from the funcs globals, but
# it will be a proxied thread local var from the session
# registry, and will therefore be identical to the one returned
# by the context manager above.
logger.debug("Running worker %s in scoped DB session", func.__name__)
return func(*args, **kwargs)
return wrapper
|
Manage contexts and add debugging to db sessions.
|
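The decorator above reads `session` and `sessions_scope` from its enclosing module. A self-contained sketch of the same pattern, with two assumptions: the session is passed in as a parameter rather than taken from globals, and `sessions_scope` is approximated by a commit-on-success/rollback-on-error context manager (the real one presumably wraps a SQLAlchemy scoped session):

```python
import logging
from contextlib import contextmanager
from functools import wraps

logger = logging.getLogger(__name__)

# Hypothetical stand-in for sessions_scope: commit on success, roll back on
# error.
@contextmanager
def sessions_scope(session):
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise

def scoped_session_decorator(session):
    """Parameterised variant of the decorator above."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with sessions_scope(session):
                logger.debug("Running worker %s in scoped DB session", func.__name__)
                return func(*args, **kwargs)
        return wrapper
    return decorator

class FakeSession:
    """Records whether commit was called, for demonstration."""
    committed = False
    def commit(self):
        self.committed = True
    def rollback(self):
        pass

session = FakeSession()

@scoped_session_decorator(session)
def work():
    return 42

result = work()  # commits via the context manager, then returns 42
```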
def copy(self):
    """Create a new copy of self. Does not do a deep copy of the payload.
:return: copied range
:rtype: GenomicRange
"""
return type(self)(self.chr,
self.start+self._start_offset,
self.end,
self.payload,
self.dir)
|
Create a new copy of self. Does not do a deep copy of the payload.
:return: copied range
:rtype: GenomicRange
|
def webapi_request(url, method='GET', caller=None, session=None, params=None):
"""Low level function for calling Steam's WebAPI
.. versionchanged:: 0.8.3
:param url: request url (e.g. ``https://api.steampowered.com/A/B/v001/``)
:type url: :class:`str`
:param method: HTTP method (GET or POST)
:type method: :class:`str`
:param caller: caller reference, caller.last_response is set to the last response
:param params: dict of WebAPI and endpoint specific params
:type params: :class:`dict`
:param session: an instance requests session, or one is created per call
:type session: :class:`requests.Session`
    :return: response based on parameters
:rtype: :class:`dict`, :class:`lxml.etree.Element`, :class:`str`
"""
    if method not in ('GET', 'POST'):
        raise NotImplementedError("HTTP method: %s" % repr(method))
if params is None:
params = {}
onetime = {}
for param in DEFAULT_PARAMS:
params[param] = onetime[param] = params.get(param, DEFAULT_PARAMS[param])
for param in ('raw', 'apihost', 'https', 'http_timeout'):
del params[param]
if onetime['format'] not in ('json', 'vdf', 'xml'):
        raise ValueError("Expected format to be json, vdf or xml; got %s" % onetime['format'])
for k, v in list(params.items()): # serialize some types
if isinstance(v, bool): params[k] = 1 if v else 0
elif isinstance(v, dict): params[k] = _json.dumps(v)
elif isinstance(v, list):
del params[k]
for i, lvalue in enumerate(v):
params["%s[%d]" % (k, i)] = lvalue
kwargs = {'params': params} if method == "GET" else {'data': params} # params to data for POST
if session is None: session = _make_session()
f = getattr(session, method.lower())
resp = f(url, stream=False, timeout=onetime['http_timeout'], **kwargs)
# we keep a reference of the last response instance on the caller
if caller is not None: caller.last_response = resp
# 4XX and 5XX will cause this to raise
resp.raise_for_status()
if onetime['raw']:
return resp.text
elif onetime['format'] == 'json':
return resp.json()
elif onetime['format'] == 'xml':
from lxml import etree as _etree
return _etree.fromstring(resp.content)
elif onetime['format'] == 'vdf':
import vdf as _vdf
return _vdf.loads(resp.text)
|
Low level function for calling Steam's WebAPI
.. versionchanged:: 0.8.3
:param url: request url (e.g. ``https://api.steampowered.com/A/B/v001/``)
:type url: :class:`str`
:param method: HTTP method (GET or POST)
:type method: :class:`str`
:param caller: caller reference, caller.last_response is set to the last response
:param params: dict of WebAPI and endpoint specific params
:type params: :class:`dict`
:param session: an instance requests session, or one is created per call
:type session: :class:`requests.Session`
:return: response based on parameters
:rtype: :class:`dict`, :class:`lxml.etree.Element`, :class:`str`
|
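The serialization loop in `webapi_request` (bools to 0/1, dicts to JSON strings, lists flattened into indexed keys) can be isolated into a small testable helper; a sketch of that logic on its own:

```python
import json

def serialize_params(params):
    """Flatten WebAPI params the way webapi_request does: bools become 0/1,
    dicts become JSON strings, lists become indexed keys (key[0], key[1], ...)."""
    out = {}
    for k, v in params.items():
        if isinstance(v, bool):
            out[k] = 1 if v else 0
        elif isinstance(v, dict):
            out[k] = json.dumps(v)
        elif isinstance(v, list):
            # Each list element gets its own indexed key, as Steam's WebAPI expects.
            for i, item in enumerate(v):
                out["%s[%d]" % (k, i)] = item
        else:
            out[k] = v
    return out

print(serialize_params({'flag': True, 'ids': [10, 20]}))
# {'flag': 1, 'ids[0]': 10, 'ids[1]': 20}
```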
def calculate_incorrect_name_dict(graph: BELGraph) -> Mapping[str, List[str]]:
"""Get missing names grouped by namespace."""
missing = defaultdict(list)
for namespace, name in _iterate_namespace_name(graph):
missing[namespace].append(name)
return dict(missing)
|
Get missing names grouped by namespace.
|
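The grouping idiom above (accumulate into a `defaultdict(list)`, then convert back to a plain `dict`) works on any iterable of key/value pairs; a self-contained sketch with `_iterate_namespace_name` replaced by a plain list of pairs:

```python
from collections import defaultdict

def group_by_namespace(pairs):
    """Group (namespace, name) pairs into {namespace: [names]}."""
    missing = defaultdict(list)
    for namespace, name in pairs:
        missing[namespace].append(name)
    # Convert to a plain dict so missing keys raise KeyError downstream.
    return dict(missing)

print(group_by_namespace([('HGNC', 'YFG1'), ('HGNC', 'YFG2'), ('GO', 'oops')]))
# {'HGNC': ['YFG1', 'YFG2'], 'GO': ['oops']}
```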
def state_name(self):
"""Get a human-readable value of the state
Returns:
str: Name of the current state
"""
if self.state == 1:
return 'New Issue'
elif self.state == 2:
return 'Shutdown in 1 week'
elif self.state == 3:
return 'Shutdown in 1 day'
elif self.state == 4:
return 'Pending Shutdown'
elif self.state == 5:
return 'Stopped, delete in 12 weeks'
elif self.state == 6:
return 'Instance deleted'
else:
raise ValueError('Invalid state: {}'.format(self.state))
|
Get a human-readable value of the state
Returns:
str: Name of the current state
|
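The if/elif chain above can be collapsed into a lookup table. A sketch as a free function (the original is a property on an instance with a `state` attribute):

```python
# Lookup-table variant of the state_name chain above.
STATE_NAMES = {
    1: 'New Issue',
    2: 'Shutdown in 1 week',
    3: 'Shutdown in 1 day',
    4: 'Pending Shutdown',
    5: 'Stopped, delete in 12 weeks',
    6: 'Instance deleted',
}

def state_name(state):
    try:
        return STATE_NAMES[state]
    except KeyError:
        # Preserve the original error behaviour for unknown states.
        raise ValueError('Invalid state: {}'.format(state))
```

The dict keeps the valid states in one place, which also makes the set of states reusable for validation elsewhere.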
def _validate_target(self, y):
"""
Raises a value error if the target is not a classification target.
"""
# Ignore None values
if y is None:
return
y_type = type_of_target(y)
if y_type not in ("binary", "multiclass"):
raise YellowbrickValueError((
"'{}' target type not supported, only binary and multiclass"
).format(y_type))
|
Raises a value error if the target is not a classification target.
|
def origin(self, origin):
"""Set the origin. Pass a length three tuple of floats"""
ox, oy, oz = origin[0], origin[1], origin[2]
self.SetOrigin(ox, oy, oz)
self.Modified()
|
Set the origin. Pass a length three tuple of floats
|
def migrator(self):
"""Create migrator and setup it with fake migrations."""
migrator = Migrator(self.database)
for name in self.done:
self.run_one(name, migrator)
return migrator
|
Create migrator and setup it with fake migrations.
|
def getResponsible(self):
"""Return all manager info of responsible departments
"""
managers = {}
for department in self.getDepartments():
manager = department.getManager()
if manager is None:
continue
manager_id = manager.getId()
if manager_id not in managers:
managers[manager_id] = {}
managers[manager_id]['salutation'] = safe_unicode(
manager.getSalutation())
managers[manager_id]['name'] = safe_unicode(
manager.getFullname())
managers[manager_id]['email'] = safe_unicode(
manager.getEmailAddress())
managers[manager_id]['phone'] = safe_unicode(
manager.getBusinessPhone())
managers[manager_id]['job_title'] = safe_unicode(
manager.getJobTitle())
if manager.getSignature():
managers[manager_id]['signature'] = \
'{}/Signature'.format(manager.absolute_url())
else:
managers[manager_id]['signature'] = False
managers[manager_id]['departments'] = ''
mngr_dept = managers[manager_id]['departments']
if mngr_dept:
mngr_dept += ', '
mngr_dept += safe_unicode(department.Title())
managers[manager_id]['departments'] = mngr_dept
mngr_keys = managers.keys()
mngr_info = {'ids': mngr_keys, 'dict': managers}
return mngr_info
|
Return all manager info of responsible departments
|
def create_selection():
""" Create a selection expression """
operation = Forward()
nested = Group(Suppress("(") + operation + Suppress(")")).setResultsName("nested")
select_expr = Forward()
functions = select_functions(select_expr)
maybe_nested = functions | nested | Group(var_val)
operation <<= maybe_nested + OneOrMore(oneOf("+ - * /") + maybe_nested)
select_expr <<= operation | maybe_nested
alias = Group(Suppress(upkey("as")) + var).setResultsName("alias")
full_select = Group(
Group(select_expr).setResultsName("selection") + Optional(alias)
)
return Group(
Keyword("*") | upkey("count(*)") | delimitedList(full_select)
).setResultsName("attrs")
|
Create a selection expression
|
def get_class_field(cls, field_name):
"""
Add management of dynamic fields: if a normal field cannot be retrieved,
check if it can be a dynamic field and in this case, create a copy with
the given name and associate it to the model.
"""
try:
field = super(ModelWithDynamicFieldMixin, cls).get_class_field(field_name)
except AttributeError:
# the "has_field" returned True but getattr raised... we have a DynamicField
dynamic_field = cls._get_dynamic_field_for(field_name)
field = cls._add_dynamic_field_to_model(dynamic_field, field_name)
return field
|
Add management of dynamic fields: if a normal field cannot be retrieved,
check if it can be a dynamic field and in this case, create a copy with
the given name and associate it to the model.
|
def decode(self, encoding=None, errors='strict'):
"""Decode using the codec registered for encoding. encoding defaults to the default encoding.
errors may be given to set a different error handling scheme. Default is 'strict' meaning that encoding errors
raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name
registered with codecs.register_error that is able to handle UnicodeDecodeErrors.
:param str encoding: Codec.
:param str errors: Error handling scheme.
"""
return self.__class__(super(ColorStr, self).decode(encoding, errors), keep_tags=True)
|
Decode using the codec registered for encoding. encoding defaults to the default encoding.
errors may be given to set a different error handling scheme. Default is 'strict' meaning that encoding errors
raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name
registered with codecs.register_error that is able to handle UnicodeDecodeErrors.
:param str encoding: Codec.
:param str errors: Error handling scheme.
|
def _get_chain_by_sid(sid):
"""Return None if not found."""
try:
return d1_gmn.app.models.Chain.objects.get(sid__did=sid)
except d1_gmn.app.models.Chain.DoesNotExist:
pass
|
Return None if not found.
|
def main(host):
client = capnp.TwoPartyClient(host)
# Pass "calculator" to ez_restore (there's also a `restore` function that
# takes a struct or AnyPointer as an argument), and then cast the returned
# capability to it's proper type. This casting is due to capabilities not
# having a reference to their schema
calculator = client.bootstrap().cast_as(calculator_capnp.Calculator)
'''Make a request that just evaluates the literal value 123.
What's interesting here is that evaluate() returns a "Value", which is
another interface and therefore points back to an object living on the
server. We then have to call read() on that object to read it.
However, even though we are making two RPC's, this block executes in
*one* network round trip because of promise pipelining: we do not wait
for the first call to complete before we send the second call to the
server.'''
print('Evaluating a literal... ', end="")
# Make the request. Note we are using the shorter function form (instead
# of evaluate_request), and we are passing a dictionary that represents a
# struct and its member to evaluate
eval_promise = calculator.evaluate({"literal": 123})
# This is equivalent to:
'''
request = calculator.evaluate_request()
request.expression.literal = 123
# Send it, which returns a promise for the result (without blocking).
eval_promise = request.send()
'''
# Using the promise, create a pipelined request to call read() on the
# returned object. Note that here we are using the shortened method call
# syntax read(), which is mostly just sugar for read_request().send()
read_promise = eval_promise.value.read()
# Now that we've sent all the requests, wait for the response. Until this
# point, we haven't waited at all!
response = read_promise.wait()
assert response.value == 123
print("PASS")
'''Make a request to evaluate 123 + 45 - 67.
The Calculator interface requires that we first call getOperator() to
get the addition and subtraction functions, then call evaluate() to use
them. But, once again, we can get both functions, call evaluate(), and
then read() the result -- four RPCs -- in the time of *one* network
round trip, because of promise pipelining.'''
print("Using add and subtract... ", end='')
# Get the "add" function from the server.
add = calculator.getOperator(op='add').func
# Get the "subtract" function from the server.
subtract = calculator.getOperator(op='subtract').func
# Build the request to evaluate 123 + 45 - 67. Note the form is 'evaluate'
# + '_request', where 'evaluate' is the name of the method we want to call
request = calculator.evaluate_request()
subtract_call = request.expression.init('call')
subtract_call.function = subtract
subtract_params = subtract_call.init('params', 2)
subtract_params[1].literal = 67.0
add_call = subtract_params[0].init('call')
add_call.function = add
add_params = add_call.init('params', 2)
add_params[0].literal = 123
add_params[1].literal = 45
# Send the evaluate() request, read() the result, and wait for read() to finish.
eval_promise = request.send()
read_promise = eval_promise.value.read()
response = read_promise.wait()
assert response.value == 101
print("PASS")
'''
Note: a one liner version of building the previous request (I highly
recommend not doing it this way for such a complicated structure, but I
just wanted to demonstrate it is possible to set all of the fields with a
dictionary):
eval_promise = calculator.evaluate(
{'call': {'function': subtract,
'params': [{'call': {'function': add,
'params': [{'literal': 123},
{'literal': 45}]}},
{'literal': 67.0}]}})
'''
'''Make a request to evaluate 4 * 6, then use the result in two more
requests that add 3 and 5.
Since evaluate() returns its result wrapped in a `Value`, we can pass
that `Value` back to the server in subsequent requests before the first
`evaluate()` has actually returned. Thus, this example again does only
one network round trip.'''
print("Pipelining eval() calls... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op='add').func
# Get the "multiply" function from the server.
multiply = calculator.getOperator(op='multiply').func
# Build the request to evaluate 4 * 6
request = calculator.evaluate_request()
multiply_call = request.expression.init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[0].literal = 4
multiply_params[1].literal = 6
multiply_result = request.send().value
# Use the result in two calls that add 3 and add 5.
add_3_request = calculator.evaluate_request()
add_3_call = add_3_request.expression.init("call")
add_3_call.function = add
add_3_params = add_3_call.init("params", 2)
add_3_params[0].previousResult = multiply_result
add_3_params[1].literal = 3
add_3_promise = add_3_request.send().value.read()
add_5_request = calculator.evaluate_request()
add_5_call = add_5_request.expression.init("call")
add_5_call.function = add
add_5_params = add_5_call.init("params", 2)
add_5_params[0].previousResult = multiply_result
add_5_params[1].literal = 5
add_5_promise = add_5_request.send().value.read()
# Now wait for the results.
assert add_3_promise.wait().value == 27
assert add_5_promise.wait().value == 29
print("PASS")
'''Our calculator interface supports defining functions. Here we use it
to define two functions and then make calls to them as follows:
f(x, y) = x * 100 + y
g(x) = f(x, x + 1) * 2;
f(12, 34)
g(21)
Once again, the whole thing takes only one network round trip.'''
print("Defining functions... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op='add').func
# Get the "multiply" function from the server.
multiply = calculator.getOperator(op='multiply').func
# Define f.
request = calculator.defFunction_request()
request.paramCount = 2
# Build the function body.
add_call = request.body.init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[1].parameter = 1 # y
multiply_call = add_params[0].init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[0].parameter = 0 # x
multiply_params[1].literal = 100
f = request.send().func
# Define g.
request = calculator.defFunction_request()
request.paramCount = 1
# Build the function body.
multiply_call = request.body.init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[1].literal = 2
f_call = multiply_params[0].init("call")
f_call.function = f
f_params = f_call.init("params", 2)
f_params[0].parameter = 0
add_call = f_params[1].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].parameter = 0
add_params[1].literal = 1
g = request.send().func
# OK, we've defined all our functions. Now create our eval requests.
# f(12, 34)
f_eval_request = calculator.evaluate_request()
f_call = f_eval_request.expression.init("call")
f_call.function = f
f_params = f_call.init("params", 2)
f_params[0].literal = 12
f_params[1].literal = 34
f_eval_promise = f_eval_request.send().value.read()
# g(21)
g_eval_request = calculator.evaluate_request()
g_call = g_eval_request.expression.init("call")
g_call.function = g
g_call.init('params', 1)[0].literal = 21
g_eval_promise = g_eval_request.send().value.read()
# Wait for the results.
assert f_eval_promise.wait().value == 1234
assert g_eval_promise.wait().value == 4244
print("PASS")
'''Make a request that will call back to a function defined locally.
Specifically, we will compute 2^(4 + 5). However, exponent is not
defined by the Calculator server. So, we'll implement the Function
interface locally and pass it to the server for it to use when
evaluating the expression.
This example requires two network round trips to complete, because the
server calls back to the client once before finishing. In this
particular case, this could potentially be optimized by using a tail
call on the server side -- see CallContext::tailCall(). However, to
keep the example simpler, we haven't implemented this optimization in
the sample server.'''
print("Using a callback... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op='add').func
# Build the eval request for 2^(4+5).
request = calculator.evaluate_request()
pow_call = request.expression.init("call")
pow_call.function = PowerFunction()
pow_params = pow_call.init("params", 2)
pow_params[0].literal = 2
add_call = pow_params[1].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].literal = 4
add_params[1].literal = 5
# Send the request and wait.
response = request.send().value.read().wait()
assert response.value == 512
print("PASS")
|
Make a request that just evaluates the literal value 123.
What's interesting here is that evaluate() returns a "Value", which is
another interface and therefore points back to an object living on the
server. We then have to call read() on that object to read it.
However, even though we are making two RPC's, this block executes in
*one* network round trip because of promise pipelining: we do not wait
for the first call to complete before we send the second call to the
server.
|
def _histplot_bins(column, bins=100):
"""Helper to get bins for histplot."""
col_min = np.min(column)
col_max = np.max(column)
return range(col_min, col_max + 2, max((col_max - col_min) // bins, 1))
|
Helper to get bins for histplot.
|
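Note that the helper above only works for integer-valued columns, since `range()` requires ints. A numpy-free sketch of the same logic:

```python
# numpy-free variant of _histplot_bins; integer-valued columns only,
# since range() requires ints (as does the helper above).
def histplot_bins(column, bins=100):
    col_min = min(column)
    col_max = max(column)
    # At least a step of 1, so narrow ranges still yield valid bins.
    step = max((col_max - col_min) // bins, 1)
    return range(col_min, col_max + 2, step)

print(list(histplot_bins([0, 3, 7, 9], bins=5)))  # [0, 1, 2, ..., 10]
```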
def topological_order_dfs(graph):
"""Topological sorting by depth first search
:param graph: directed graph in listlist format, cannot be listdict
:returns: list of vertices in order
:complexity: `O(|V|+|E|)`
"""
n = len(graph)
order = []
times_seen = [-1] * n
for start in range(n):
if times_seen[start] == -1:
times_seen[start] = 0
to_visit = [start]
while to_visit:
node = to_visit[-1]
children = graph[node]
if times_seen[node] == len(children):
to_visit.pop()
order.append(node)
else:
child = children[times_seen[node]]
times_seen[node] += 1
if times_seen[child] == -1:
times_seen[child] = 0
to_visit.append(child)
return order[::-1]
|
Topological sorting by depth first search
:param graph: directed graph in listlist format, cannot be listdict
:returns: list of vertices in order
:complexity: `O(|V|+|E|)`
|
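A usage sketch of the iterative DFS above on a small diamond-shaped DAG (the function is repeated verbatim so the example is self-contained):

```python
def topological_order_dfs(graph):
    """Iterative DFS topological sort; graph[u] lists the successors of u."""
    n = len(graph)
    order = []
    times_seen = [-1] * n
    for start in range(n):
        if times_seen[start] == -1:
            times_seen[start] = 0
            to_visit = [start]
            while to_visit:
                node = to_visit[-1]
                children = graph[node]
                if times_seen[node] == len(children):
                    # All children done: emit node in postorder.
                    to_visit.pop()
                    order.append(node)
                else:
                    child = children[times_seen[node]]
                    times_seen[node] += 1
                    if times_seen[child] == -1:
                        times_seen[child] = 0
                        to_visit.append(child)
    # Reversed postorder is a topological order.
    return order[::-1]

# Diamond: 0 -> 1 -> 3 and 0 -> 2 -> 3
graph = [[1, 2], [3], [3], []]
order = topological_order_dfs(graph)
print(order)  # [0, 2, 1, 3]
```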
def required_items(element, children, attributes):
"""Check an xml element to include given attributes and children.
:param element: ElementTree element
:param children: list of XPaths to check
:param attributes: list of attributes names to check
:raises NotValidXmlException: if some argument is missing
:raises NotValidXmlException: if some child is missing
"""
required_elements(element, *children)
required_attributes(element, *attributes)
|
Check an xml element to include given attributes and children.
:param element: ElementTree element
:param children: list of XPaths to check
:param attributes: list of attributes names to check
:raises NotValidXmlException: if some argument is missing
:raises NotValidXmlException: if some child is missing
|
def _recover_network_failure(self):
"""Recover from a network failure"""
if self.auto_reconnect and not self._is_closing:
connected = False
while not connected:
log_msg = "* ATTEMPTING RECONNECT"
if self._retry_new_version:
log_msg = "* RETRYING DIFFERENT DDP VERSION"
self.ddpsocket._debug_log(log_msg)
time.sleep(self.auto_reconnect_timeout)
self._init_socket()
try:
self.connect()
connected = True
if self._retry_new_version:
self._retry_new_version = False
else:
self._is_reconnecting = True
except (socket.error, WebSocketException):
pass
|
Recover from a network failure
|
def set_user_jobs(session, job_ids):
"""
Replace the currently authenticated user's list of jobs with a new list of
jobs
"""
jobs_data = {
'jobs[]': job_ids
}
response = make_put_request(session, 'self/jobs', json_data=jobs_data)
json_data = response.json()
if response.status_code == 200:
return json_data['status']
else:
raise UserJobsNotSetException(
message=json_data['message'],
error_code=json_data['error_code'],
request_id=json_data['request_id'])
|
Replace the currently authenticated user's list of jobs with a new list of
jobs
|
def is_negated(self):
"""A negated query is one in which every clause has a presence of
prohibited. These queries require some special processing to return
the expected results.
"""
return all(
clause.presence == QueryPresence.PROHIBITED for clause in self.clauses
)
|
A negated query is one in which every clause has a presence of
prohibited. These queries require some special processing to return
the expected results.
|
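The check above is a plain `all()` over the clause presences. A self-contained sketch with hypothetical stand-ins for the query model (note that `all()` over an empty clause list is `True`, so an empty query would count as negated under this logic):

```python
from enum import Enum

# Hypothetical stand-in for the QueryPresence enum assumed by is_negated().
class QueryPresence(Enum):
    OPTIONAL = 1
    REQUIRED = 2
    PROHIBITED = 3

def is_negated(presences):
    """True if every clause's presence is PROHIBITED."""
    return all(p == QueryPresence.PROHIBITED for p in presences)
```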
def find_node(self, attribute):
"""
Returns the Node with given attribute.
:param attribute: Attribute.
:type attribute: GraphModelAttribute
:return: Node.
:rtype: GraphModelNode
"""
for model in GraphModel._GraphModel__models_instances.itervalues():
for node in foundations.walkers.nodes_walker(model.root_node):
if attribute in node.get_attributes():
return node
|
Returns the Node with given attribute.
:param attribute: Attribute.
:type attribute: GraphModelAttribute
:return: Node.
:rtype: GraphModelNode
|
def associations(self, subject, object=None):
"""
Given a subject-object pair (e.g. gene id to ontology class id), return all association
objects that match.
"""
if object is None:
if self.associations_by_subj is not None:
return self.associations_by_subj[subject]
else:
return []
else:
if self.associations_by_subj_obj is not None:
return self.associations_by_subj_obj[(subject,object)]
else:
return []
|
Given a subject-object pair (e.g. gene id to ontology class id), return all association
objects that match.
|
def create_ingest_point(self, privateStreamName, publicStreamName):
"""
Creates an RTMP ingest point, which mandates that streams pushed into
the EMS have a target stream name which matches one Ingest Point
privateStreamName.
:param privateStreamName: The name that RTMP Target Stream Names must
match.
:type privateStreamName: str
:param publicStreamName: The name that is used to access the stream
pushed to the privateStreamName. The publicStreamName becomes the
streams localStreamName.
:type publicStreamName: str
:link: http://docs.evostream.com/ems_api_definition/createingestpoint
"""
return self.protocol.execute('createIngestPoint',
privateStreamName=privateStreamName,
publicStreamName=publicStreamName)
|
Creates an RTMP ingest point, which mandates that streams pushed into
the EMS have a target stream name which matches one Ingest Point
privateStreamName.
:param privateStreamName: The name that RTMP Target Stream Names must
match.
:type privateStreamName: str
:param publicStreamName: The name that is used to access the stream
pushed to the privateStreamName. The publicStreamName becomes the
streams localStreamName.
:type publicStreamName: str
:link: http://docs.evostream.com/ems_api_definition/createingestpoint
|
def update_rec(self, rec, name, value):
"""Update current GOTerm with optional record."""
# 'def' is a reserved word in python, do not use it as a Class attr.
if name == "def":
name = "defn"
# If we have a relationship, then we will split this into a further
# dictionary.
if hasattr(rec, name):
if name not in self.attrs_scalar:
if name not in self.attrs_nested:
getattr(rec, name).add(value)
else:
self._add_nested(rec, name, value)
else:
raise Exception("ATTR({NAME}) ALREADY SET({VAL})".format(
NAME=name, VAL=getattr(rec, name)))
else: # Initialize new GOTerm attr
if name in self.attrs_scalar:
setattr(rec, name, value)
elif name not in self.attrs_nested:
setattr(rec, name, set([value]))
else:
name = '_{:s}'.format(name)
setattr(rec, name, defaultdict(list))
self._add_nested(rec, name, value)
|
Update current GOTerm with optional record.
|
def write(self, output_filepath):
"""
        Serialize the ExmaraldaFile instance and write it to a file.
Parameters
----------
output_filepath : str
relative or absolute path to the Exmaralda file to be created
"""
with open(output_filepath, 'w') as out_file:
out_file.write(self.__str__())
|
Serialize the ExmaraldaFile instance and write it to a file.
Parameters
----------
output_filepath : str
relative or absolute path to the Exmaralda file to be created
|
def extended_fade_in(self, segment, duration):
"""Add a fade-in to a segment that extends the beginning of the
segment.
:param segment: Segment to fade in
:type segment: :py:class:`radiotool.composer.Segment`
:param duration: Duration of fade-in (in seconds)
:returns: The fade that has been added to the composition
:rtype: :py:class:`Fade`
"""
dur = int(duration * segment.track.samplerate)
if segment.start - dur >= 0:
segment.start -= dur
else:
raise Exception(
"Cannot create fade-in that extends "
"past the track's beginning")
if segment.comp_location - dur >= 0:
segment.comp_location -= dur
else:
raise Exception(
                "Cannot create fade-in that extends past the score's beginning")
segment.duration += dur
f = Fade(segment.track, segment.comp_location_in_seconds,
duration, 0.0, 1.0)
self.add_dynamic(f)
return f
|
Add a fade-in to a segment that extends the beginning of the
segment.
:param segment: Segment to fade in
:type segment: :py:class:`radiotool.composer.Segment`
:param duration: Duration of fade-in (in seconds)
:returns: The fade that has been added to the composition
:rtype: :py:class:`Fade`
|
def _schedule_dependencies(dag):
"""
Computes an ordering < of tasks so that for any two tasks t and t' we have that if t depends on t' then
t' < t. In words, all dependencies of a task precede the task in this ordering.
:param dag: A directed acyclic graph representing dependencies between tasks.
:type dag: DirectedGraph
    :return: A list of topologically ordered dependencies
:rtype: list(Dependency)
"""
in_degrees = dict(dag.get_indegrees())
independent_vertices = collections.deque([vertex for vertex in dag if dag.get_indegree(vertex) == 0])
topological_order = []
while independent_vertices:
v_vertex = independent_vertices.popleft()
topological_order.append(v_vertex)
for u_vertex in dag[v_vertex]:
in_degrees[u_vertex] -= 1
if in_degrees[u_vertex] == 0:
independent_vertices.append(u_vertex)
if len(topological_order) != len(dag):
raise CyclicDependencyError('Tasks do not form an acyclic graph')
return topological_order
|
Computes an ordering < of tasks so that for any two tasks t and t' we have that if t depends on t' then
t' < t. In words, all dependencies of a task precede the task in this ordering.
:param dag: A directed acyclic graph representing dependencies between tasks.
:type dag: DirectedGraph
:return: A list of topologically ordered dependencies
:rtype: list(Dependency)
|
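The `_schedule_dependencies` row above is Kahn's algorithm over a `DirectedGraph` type whose API is not shown here. A minimal self-contained sketch of the same technique, using a plain adjacency dict as a stand-in for that graph type:

```python
# Kahn's algorithm sketch: repeatedly pop zero-indegree vertices and
# decrement the indegree of their successors. `graph` maps each vertex
# to an iterable of the vertices it points to.
import collections

def topological_order(graph):
    """Return the vertices of `graph` in topological order.

    Raises ValueError if the graph contains a cycle.
    """
    in_degrees = {v: 0 for v in graph}
    for targets in graph.values():
        for u in targets:
            in_degrees[u] += 1
    # Seed the queue with vertices that have no incoming edges.
    queue = collections.deque(v for v, d in in_degrees.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in graph[v]:
            in_degrees[u] -= 1
            if in_degrees[u] == 0:
                queue.append(u)
    # If some vertex never reached indegree 0, a cycle blocked it.
    if len(order) != len(graph):
        raise ValueError('graph contains a cycle')
    return order
```

As in the original, detecting a cycle falls out for free: a cycle leaves at least one vertex with a permanently positive indegree, so the output is shorter than the vertex set.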
def stringfy(expr, sym_const=None, sym_states=None, sym_algebs=None):
    """Convert the right-hand side of an equation into CVXOPT matrix operations"""
if not sym_const:
sym_const = []
if not sym_states:
sym_states = []
if not sym_algebs:
sym_algebs = []
expr_str = []
if type(expr) in (int, float):
return expr
if expr.is_Atom:
if expr in sym_const:
expr_str = 'self.{}'.format(expr)
elif expr in sym_states:
expr_str = 'dae.x[self.{}]'.format(expr)
elif expr in sym_algebs:
expr_str = 'dae.y[self.{}]'.format(expr)
elif expr.is_Number:
if expr.is_Integer:
expr_str = str(int(expr))
else:
expr_str = str(float(expr))
# if expr.is_negative:
# expr_str = '{}'.format(expr)
# else:
# expr_str = str(expr)
else:
raise AttributeError('Unknown free symbol <{}>'.format(expr))
else:
nargs = len(expr.args)
arg_str = []
for arg in expr.args:
arg_str.append(stringfy(arg, sym_const, sym_states, sym_algebs))
if expr.is_Add:
expr_str = ''
for idx, item in enumerate(arg_str):
if idx == 0:
if len(item) > 1 and item[1] == ' ':
item = item[0] + item[2:]
if idx > 0:
if item[0] == '-':
item = ' ' + item
else:
item = ' + ' + item
expr_str += item
elif expr.is_Mul:
if nargs == 2 and expr.args[0].is_Integer: # number * matrix
if expr.args[0].is_positive:
expr_str = '{}*{}'.format(*arg_str)
elif expr.args[0] == Integer('-1'):
expr_str = '- {}'.format(arg_str[1])
else: # negative but not -1
expr_str = '{}*{}'.format(*arg_str)
else: # matrix dot multiplication
if expr.args[0] == Integer('-1'):
# bring '-' out of mul()
expr_str = ', '.join(arg_str[1:])
expr_str = '- mul(' + expr_str + ')'
else:
expr_str = ', '.join(arg_str)
expr_str = 'mul(' + expr_str + ')'
elif expr.is_Function:
expr_str = ', '.join(arg_str)
expr_str = str(expr.func) + '(' + expr_str + ')'
elif expr.is_Pow:
if arg_str[1] == '-1':
expr_str = 'div(1, {})'.format(arg_str[0])
else:
expr_str = '({})**{}'.format(*arg_str)
elif expr.is_Div:
expr_str = ', '.join(arg_str)
expr_str = 'div(' + expr_str + ')'
else:
raise NotImplementedError
return expr_str
|
Convert the right-hand side of an equation into CVXOPT matrix operations
|
def deployment_absent(name, namespace='default', **kwargs):
'''
Ensures that the named deployment is absent from the given namespace.
name
The name of the deployment
namespace
The name of the namespace
'''
ret = {'name': name,
'changes': {},
'result': False,
'comment': ''}
deployment = __salt__['kubernetes.show_deployment'](name, namespace, **kwargs)
if deployment is None:
ret['result'] = True if not __opts__['test'] else None
ret['comment'] = 'The deployment does not exist'
return ret
if __opts__['test']:
ret['comment'] = 'The deployment is going to be deleted'
ret['result'] = None
return ret
res = __salt__['kubernetes.delete_deployment'](name, namespace, **kwargs)
if res['code'] == 200:
ret['result'] = True
ret['changes'] = {
'kubernetes.deployment': {
'new': 'absent', 'old': 'present'}}
ret['comment'] = res['message']
else:
ret['comment'] = 'Something went wrong, response: {0}'.format(res)
return ret
|
Ensures that the named deployment is absent from the given namespace.
name
The name of the deployment
namespace
The name of the namespace
|
def parse_gradient_rgb_args(args):
""" Parse one or two rgb args given with --gradientrgb.
Raises InvalidArg for invalid rgb values.
Returns a tuple of (start_rgb, stop_rgb), where the stop_rgb may be
None if only one arg value was given and start_rgb may be None if
no values were given.
"""
arglen = len(args)
if arglen < 1 or arglen > 2:
raise InvalidArg(arglen, label='Expecting 1 or 2 \'-G\' flags, got')
start_rgb = try_rgb(args[0]) if args else None
stop_rgb = try_rgb(args[1]) if arglen > 1 else None
return start_rgb, stop_rgb
|
Parse one or two rgb args given with --gradientrgb.
Raises InvalidArg for invalid rgb values.
Returns a tuple of (start_rgb, stop_rgb), where the stop_rgb may be
None if only one arg value was given and start_rgb may be None if
no values were given.
|
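The helper `try_rgb` used by `parse_gradient_rgb_args` is not shown in this excerpt, so its contract is an assumption. A hedged sketch of what such a helper might look like, with the name `parse_rgb` chosen here to mark it as hypothetical:

```python
# Hypothetical try_rgb-style helper: parse an "R,G,B" string into a
# 3-tuple of ints, validating the 0-255 range. This is a guess at the
# contract, not the actual try_rgb implementation.
def parse_rgb(value):
    parts = value.split(',')
    if len(parts) != 3:
        raise ValueError('expected R,G,B, got {!r}'.format(value))
    try:
        rgb = tuple(int(p) for p in parts)
    except ValueError:
        raise ValueError('non-integer rgb component in {!r}'.format(value))
    if any(not 0 <= c <= 255 for c in rgb):
        raise ValueError('rgb values must be in 0-255: {!r}'.format(value))
    return rgb
```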
def compensate_system_time_change(self, difference): # pragma: no cover,
# pylint: disable=too-many-branches
# not with unit tests
"""Compensate a system time change of difference for all hosts/services/checks/notifs
:param difference: difference in seconds
:type difference: int
:return: None
"""
super(Alignak, self).compensate_system_time_change(difference)
    # We only need to change some values
self.program_start = max(0, self.program_start + difference)
if not hasattr(self.sched, "conf"):
# Race condition where time change before getting conf
return
# Then we compensate all host/services
for host in self.sched.hosts:
host.compensate_system_time_change(difference)
for serv in self.sched.services:
serv.compensate_system_time_change(difference)
# Now all checks and actions
for chk in list(self.sched.checks.values()):
            # Already launched checks should not be touched
if chk.status == u'scheduled' and chk.t_to_go is not None:
t_to_go = chk.t_to_go
ref = self.sched.find_item_by_id(chk.ref)
new_t = max(0, t_to_go + difference)
timeperiod = self.sched.timeperiods[ref.check_period]
if timeperiod is not None:
                # But it's not so simple: we must match the timeperiod
new_t = timeperiod.get_next_valid_time_from_t(new_t)
            # But maybe there is no new valid time! Not good :(
            # Report it as an error, with error output
if new_t is None:
chk.state = u'waitconsume'
chk.exit_status = 2
chk.output = '(Error: there is no available check time after time change!)'
chk.check_time = time.time()
chk.execution_time = 0
else:
chk.t_to_go = new_t
ref.next_chk = new_t
# Now all checks and actions
for act in list(self.sched.actions.values()):
            # Already launched actions should not be touched
if act.status == u'scheduled':
t_to_go = act.t_to_go
            # Event handlers do not have a ref
ref_id = getattr(act, 'ref', None)
new_t = max(0, t_to_go + difference)
# Notification should be check with notification_period
if act.is_a == u'notification':
ref = self.sched.find_item_by_id(ref_id)
if ref.notification_period:
                    # But it's not so simple: we must match the timeperiod
notification_period = self.sched.timeperiods[ref.notification_period]
new_t = notification_period.get_next_valid_time_from_t(new_t)
            # And adjust the creation_time variable too
act.creation_time += difference
            # But maybe there is no new valid time! Not good :(
            # Report it as an error, with error output
if new_t is None:
act.state = 'waitconsume'
act.exit_status = 2
act.output = '(Error: there is no available check time after time change!)'
act.check_time = time.time()
act.execution_time = 0
else:
act.t_to_go = new_t
|
Compensate a system time change of difference for all hosts/services/checks/notifs
:param difference: difference in seconds
:type difference: int
:return: None
|
def repair_central_directory(zipFile, is_file_instance): # source: https://bitbucket.org/openpyxl/openpyxl/src/93604327bce7aac5e8270674579af76d390e09c0/openpyxl/reader/excel.py?at=default&fileviewer=file-view-default
''' trims trailing data from the central directory
code taken from http://stackoverflow.com/a/7457686/570216, courtesy of Uri Cohen
'''
f = zipFile if is_file_instance else open(zipFile, 'rb+')
data = f.read()
pos = data.find(CENTRAL_DIRECTORY_SIGNATURE) # End of central directory signature
if (pos > 0):
sio = BytesIO(data)
sio.seek(pos + 22) # size of 'ZIP end of central directory record'
sio.truncate()
sio.seek(0)
return sio
f.seek(0)
return f
|
trims trailing data from the central directory
code taken from http://stackoverflow.com/a/7457686/570216, courtesy of Uri Cohen
|
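`repair_central_directory` above truncates a ZIP stream just past the end-of-central-directory (EOCD) record. The constant `CENTRAL_DIRECTORY_SIGNATURE` is not defined in this excerpt; the sketch below assumes it is the standard EOCD marker `PK\x05\x06`, and works on raw bytes rather than a file object:

```python
# Trim trailing junk after a ZIP archive's EOCD record. The 4-byte
# signature and the 22-byte minimum record size come from the ZIP
# format specification (PKWARE APPNOTE).
EOCD_SIGNATURE = b'PK\x05\x06'  # end-of-central-directory marker
EOCD_MIN_SIZE = 22              # fixed-size fields of the EOCD record

def trim_trailing_data(data):
    """Return `data` truncated just past the EOCD record.

    Returns `data` unchanged if no EOCD signature is found.
    """
    # rfind takes the last occurrence, in case the payload happens to
    # contain the signature bytes (the original code uses find).
    pos = data.rfind(EOCD_SIGNATURE)
    if pos < 0:
        return data
    return data[:pos + EOCD_MIN_SIZE]
```

Like the original, this assumes the archive carries no EOCD comment; an EOCD with a non-zero comment-length field would need that many extra bytes preserved.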
def PC_varExplained(Y,standardized=True):
"""
Run PCA and calculate the cumulative fraction of variance
Args:
Y: phenotype values
        standardized: if True, phenotypes are standardized
Returns:
var: cumulative distribution of variance explained
"""
# figuring out the number of latent factors
if standardized:
Y-=Y.mean(0)
Y/=Y.std(0)
covY = sp.cov(Y)
S,U = linalg.eigh(covY+1e-6*sp.eye(covY.shape[0]))
S = S[::-1]
    rv = np.array([S[0:i].sum() for i in range(1, S.shape[0] + 1)])
rv/= S.sum()
return rv
|
Run PCA and calculate the cumulative fraction of variance
Args:
Y: phenotype values
standardized: if True, phenotypes are standardized
Returns:
var: cumulative distribution of variance explained
|
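The same cumulative-variance computation can be sketched with plain numpy. One assumption made explicit here that the original leaves implicit: rows of `Y` are treated as samples and columns as phenotypes (`rowvar=False`); the sketch also copies `Y` rather than standardizing it in place:

```python
# Cumulative fraction of variance explained by the principal components:
# eigendecompose the covariance of the (optionally standardized) data,
# sort eigenvalues in descending order, and accumulate.
import numpy as np

def pc_var_explained(Y, standardized=True):
    """Return the cumulative fraction of variance explained per PC."""
    Y = np.asarray(Y, dtype=float).copy()  # avoid mutating the caller's array
    if standardized:
        Y -= Y.mean(axis=0)
        Y /= Y.std(axis=0)
    cov = np.cov(Y, rowvar=False)          # columns are variables
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending order
    return np.cumsum(eigvals) / eigvals.sum()
```

The result is monotonically non-decreasing and its last entry is 1.0 by construction, which is a quick sanity check for this kind of function.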
def export_content_groups(self, group_id, export_type, skip_notifications=None):
"""
Export content.
Begin a content export job for a course, group, or user.
You can use the {api:ProgressController#show Progress API} to track the
progress of the export. The migration's progress is linked to with the
_progress_url_ value.
When the export completes, use the {api:ContentExportsApiController#show Show content export} endpoint
to retrieve a download URL for the exported content.
"""
path = {}
data = {}
params = {}
# REQUIRED - PATH - group_id
"""ID"""
path["group_id"] = group_id
# REQUIRED - export_type
""""common_cartridge":: Export the contents of the course in the Common Cartridge (.imscc) format
"qti":: Export quizzes from a course in the QTI format
"zip":: Export files from a course, group, or user in a zip file"""
self._validate_enum(export_type, ["common_cartridge", "qti", "zip"])
data["export_type"] = export_type
# OPTIONAL - skip_notifications
"""Don't send the notifications about the export to the user. Default: false"""
if skip_notifications is not None:
data["skip_notifications"] = skip_notifications
self.logger.debug("POST /api/v1/groups/{group_id}/content_exports with query params: {params} and form data: {data}".format(params=params, data=data, **path))
return self.generic_request("POST", "/api/v1/groups/{group_id}/content_exports".format(**path), data=data, params=params, single_item=True)
|
Export content.
Begin a content export job for a course, group, or user.
You can use the {api:ProgressController#show Progress API} to track the
progress of the export. The migration's progress is linked to with the
_progress_url_ value.
When the export completes, use the {api:ContentExportsApiController#show Show content export} endpoint
to retrieve a download URL for the exported content.
|
def get_cluster_name(self):
"""
Name identifying this RabbitMQ cluster.
"""
return self._get(
url=self.url + '/api/cluster-name',
headers=self.headers,
auth=self.auth
)
|
Name identifying this RabbitMQ cluster.
|