code (string, lengths 75–104k) | docstring (string, lengths 1–46.9k) | text (string, lengths 164–112k) |
|---|---|---|
def wait(self, auth, resource, options, defer=False):
""" This is a HTTP Long Polling API which allows a user to wait on specific resources to be
updated.
Args:
auth: <cik> for authentication
resource: <ResourceID> to specify what resource to wait on.
options: Options for the wait, including a timeout in ms (max 5 min) and a start time
(null means the time the request is received)
"""
# let the server control the timeout
return self._call('wait', auth, [resource, options], defer, notimeout=True) | This is an HTTP Long Polling API which allows a user to wait on specific resources to be
updated.
Args:
auth: <cik> for authentication
resource: <ResourceID> to specify what resource to wait on.
options: Options for the wait, including a timeout in ms (max 5 min) and a start time
(null means the time the request is received) | Below is the instruction that describes the task:
### Input:
This is an HTTP Long Polling API which allows a user to wait on specific resources to be
updated.
Args:
auth: <cik> for authentication
resource: <ResourceID> to specify what resource to wait on.
options: Options for the wait, including a timeout in ms (max 5 min) and a start time
(null means the time the request is received)
### Response:
def wait(self, auth, resource, options, defer=False):
""" This is a HTTP Long Polling API which allows a user to wait on specific resources to be
updated.
Args:
auth: <cik> for authentication
resource: <ResourceID> to specify what resource to wait on.
options: Options for the wait, including a timeout in ms (max 5 min) and a start time
(null means the time the request is received)
"""
# let the server control the timeout
return self._call('wait', auth, [resource, options], defer, notimeout=True) |
def configureAutoReconnectBackoffTime(self, baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond):
"""
**Description**
Used to configure the auto-reconnect backoff timing. Should be called before connect. This is a public
facing API inherited by application level public clients.
**Syntax**
.. code:: python
# Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time.
# Connection over 20 seconds is considered stable and will reset the back off time back to its base.
myShadowClient.configureAutoReconnectBackoffTime(1, 128, 20)
myJobsClient.configureAutoReconnectBackoffTime(1, 128, 20)
**Parameters**
*baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds.
Should be less than the stableConnectionTime.
*maxReconnectQuietTimeSecond* - The maximum back off time, in seconds.
*stableConnectionTimeSecond* - The number of seconds for a connection to last to be considered as stable.
Back off time will be reset to base once the connection is stable.
**Returns**
None
"""
# AWSIoTMQTTClient.configureBackoffTime
self._AWSIoTMQTTClient.configureAutoReconnectBackoffTime(baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond) | **Description**
Used to configure the auto-reconnect backoff timing. Should be called before connect. This is a public
facing API inherited by application level public clients.
**Syntax**
.. code:: python
# Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time.
# Connection over 20 seconds is considered stable and will reset the back off time back to its base.
myShadowClient.configureAutoReconnectBackoffTime(1, 128, 20)
myJobsClient.configureAutoReconnectBackoffTime(1, 128, 20)
**Parameters**
*baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds.
Should be less than the stableConnectionTime.
*maxReconnectQuietTimeSecond* - The maximum back off time, in seconds.
*stableConnectionTimeSecond* - The number of seconds for a connection to last to be considered as stable.
Back off time will be reset to base once the connection is stable.
**Returns**
None | Below is the instruction that describes the task:
### Input:
**Description**
Used to configure the auto-reconnect backoff timing. Should be called before connect. This is a public
facing API inherited by application level public clients.
**Syntax**
.. code:: python
# Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time.
# Connection over 20 seconds is considered stable and will reset the back off time back to its base.
myShadowClient.configureAutoReconnectBackoffTime(1, 128, 20)
myJobsClient.configureAutoReconnectBackoffTime(1, 128, 20)
**Parameters**
*baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds.
Should be less than the stableConnectionTime.
*maxReconnectQuietTimeSecond* - The maximum back off time, in seconds.
*stableConnectionTimeSecond* - The number of seconds for a connection to last to be considered as stable.
Back off time will be reset to base once the connection is stable.
**Returns**
None
### Response:
def configureAutoReconnectBackoffTime(self, baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond):
"""
**Description**
Used to configure the auto-reconnect backoff timing. Should be called before connect. This is a public
facing API inherited by application level public clients.
**Syntax**
.. code:: python
# Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time.
# Connection over 20 seconds is considered stable and will reset the back off time back to its base.
myShadowClient.configureAutoReconnectBackoffTime(1, 128, 20)
myJobsClient.configureAutoReconnectBackoffTime(1, 128, 20)
**Parameters**
*baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds.
Should be less than the stableConnectionTime.
*maxReconnectQuietTimeSecond* - The maximum back off time, in seconds.
*stableConnectionTimeSecond* - The number of seconds for a connection to last to be considered as stable.
Back off time will be reset to base once the connection is stable.
**Returns**
None
"""
# AWSIoTMQTTClient.configureBackoffTime
self._AWSIoTMQTTClient.configureAutoReconnectBackoffTime(baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond) |
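The backoff semantics described above can be sketched in plain Python. This is an illustrative model only (`backoff_sequence` is a hypothetical name, and the real SDK may add jitter or other logic): each failed reconnect doubles the quiet time from the base until it is capped at the maximum, and a connection that stays up past the stable threshold resets the quiet time to the base.

```python
def backoff_sequence(base, maximum, attempts):
    """Yield successive reconnect quiet times: double each attempt, capped at maximum."""
    quiet = base
    for _ in range(attempts):
        yield quiet
        quiet = min(quiet * 2, maximum)

# With base=1s and max=128s, ten consecutive failed reconnects would wait:
waits = list(backoff_sequence(1, 128, 10))  # 1, 2, 4, ..., 128, 128, 128
```

A connection that survives past `stableConnectionTimeSecond` would restart this sequence from `base`.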
def compare_hdf_files(file1, file2):
""" Compare two hdf files.
:param file1: First file to compare.
:param file2: Second file to compare.
:returns: True if they are the same.
"""
data1 = FileToDict()
data2 = FileToDict()
scanner1 = data1.scan
scanner2 = data2.scan
with h5py.File(file1, 'r') as fh1:
fh1.visititems(scanner1)
with h5py.File(file2, 'r') as fh2:
fh2.visititems(scanner2)
return data1.contents == data2.contents | Compare two hdf files.
:param file1: First file to compare.
:param file2: Second file to compare.
:returns: True if they are the same. | Below is the instruction that describes the task:
### Input:
Compare two hdf files.
:param file1: First file to compare.
:param file2: Second file to compare.
:returns: True if they are the same.
### Response:
def compare_hdf_files(file1, file2):
""" Compare two hdf files.
:param file1: First file to compare.
:param file2: Second file to compare.
:returns: True if they are the same.
"""
data1 = FileToDict()
data2 = FileToDict()
scanner1 = data1.scan
scanner2 = data2.scan
with h5py.File(file1, 'r') as fh1:
fh1.visititems(scanner1)
with h5py.File(file2, 'r') as fh2:
fh2.visititems(scanner2)
return data1.contents == data2.contents |
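The comparison works by flattening each file into a `{path: data}` dict via `visititems`, then comparing the dicts. The same pattern can be shown without h5py; here `TreeToDict` is a hypothetical stand-in for `FileToDict`, and `visit` mimics how `h5py.Group.visititems` calls back with slash-separated paths:

```python
class TreeToDict:
    """Hypothetical stand-in for FileToDict: collects a flat {path: value} map."""
    def __init__(self):
        self.contents = {}

    def scan(self, name, obj):
        # h5py passes each visited item as (slash-separated name, object)
        self.contents[name] = obj

def visit(tree, callback, prefix=''):
    """Depth-first walk of nested dicts, mimicking h5py's Group.visititems."""
    for key, value in tree.items():
        if isinstance(value, dict):
            visit(value, callback, prefix + key + '/')
        else:
            callback(prefix + key, value)

d1, d2 = TreeToDict(), TreeToDict()
visit({'a': {'b': 1}, 'c': 2}, d1.scan)
visit({'c': 2, 'a': {'b': 1}}, d2.scan)
same = d1.contents == d2.contents  # True: insertion order does not matter
```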
def tangent_curve_single_list(obj, param_list, normalize):
""" Evaluates the curve tangent vectors at the given list of parameter values.
:param obj: input curve
:type obj: abstract.Curve
:param param_list: parameter list
:type param_list: list or tuple
:param normalize: if True, the returned vector is converted to a unit vector
:type normalize: bool
:return: a list containing "point" and "vector" pairs
:rtype: tuple
"""
ret_vector = []
for param in param_list:
temp = tangent_curve_single(obj, param, normalize)
ret_vector.append(temp)
return tuple(ret_vector) | Evaluates the curve tangent vectors at the given list of parameter values.
:param obj: input curve
:type obj: abstract.Curve
:param param_list: parameter list
:type param_list: list or tuple
:param normalize: if True, the returned vector is converted to a unit vector
:type normalize: bool
:return: a list containing "point" and "vector" pairs
:rtype: tuple | Below is the instruction that describes the task:
### Input:
Evaluates the curve tangent vectors at the given list of parameter values.
:param obj: input curve
:type obj: abstract.Curve
:param param_list: parameter list
:type param_list: list or tuple
:param normalize: if True, the returned vector is converted to a unit vector
:type normalize: bool
:return: a list containing "point" and "vector" pairs
:rtype: tuple
### Response:
def tangent_curve_single_list(obj, param_list, normalize):
""" Evaluates the curve tangent vectors at the given list of parameter values.
:param obj: input curve
:type obj: abstract.Curve
:param param_list: parameter list
:type param_list: list or tuple
:param normalize: if True, the returned vector is converted to a unit vector
:type normalize: bool
:return: a list containing "point" and "vector" pairs
:rtype: tuple
"""
ret_vector = []
for param in param_list:
temp = tangent_curve_single(obj, param, normalize)
ret_vector.append(temp)
return tuple(ret_vector) |
def add_reporting_args(parser):
"""Add reporting arguments to an argument parser.
Parameters
----------
parser: `argparse.ArgumentParser`
Returns
-------
`argparse.ArgumentGroup`
The argument group created.
"""
g = parser.add_argument_group('Reporting options')
g.add_argument('-l', '--log-file', default=None,
type = str_type, metavar = file_mv, help = textwrap.dedent("""\
Path of log file (if specified, report to stdout AND file)."""))
g.add_argument('-q', '--quiet', action='store_true',
help = 'Only output errors and warnings.')
g.add_argument('-v', '--verbose', action='store_true',
help = 'Enable verbose output. Ignored if --quiet is specified.')
return g | Add reporting arguments to an argument parser.
Parameters
----------
parser: `argparse.ArgumentParser`
Returns
-------
`argparse.ArgumentGroup`
The argument group created. | Below is the instruction that describes the task:
### Input:
Add reporting arguments to an argument parser.
Parameters
----------
parser: `argparse.ArgumentParser`
Returns
-------
`argparse.ArgumentGroup`
The argument group created.
### Response:
def add_reporting_args(parser):
"""Add reporting arguments to an argument parser.
Parameters
----------
parser: `argparse.ArgumentParser`
Returns
-------
`argparse.ArgumentGroup`
The argument group created.
"""
g = parser.add_argument_group('Reporting options')
g.add_argument('-l', '--log-file', default=None,
type = str_type, metavar = file_mv, help = textwrap.dedent("""\
Path of log file (if specified, report to stdout AND file)."""))
g.add_argument('-q', '--quiet', action='store_true',
help = 'Only output errors and warnings.')
g.add_argument('-v', '--verbose', action='store_true',
help = 'Enable verbose output. Ignored if --quiet is specified.')
return g |
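A minimal sketch of how these shared flags behave once wired into a parser. The argument group itself only affects `--help` formatting; parsed values land on the namespace as usual:

```python
import argparse

parser = argparse.ArgumentParser(prog='demo')
g = parser.add_argument_group('Reporting options')
g.add_argument('-q', '--quiet', action='store_true',
               help='Only output errors and warnings.')
g.add_argument('-v', '--verbose', action='store_true',
               help='Enable verbose output. Ignored if --quiet is specified.')

# store_true flags default to False and flip to True when present
args = parser.parse_args(['-q'])
```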
async def replace_dialog(self, dialog_id: str, options: object = None) -> DialogTurnResult:
"""
Ends the active dialog and starts a new dialog in its place. This is particularly useful
for creating loops or redirecting to another dialog.
:param dialog_id: ID of the dialog to search for.
:param options: (Optional) additional argument(s) to pass to the new dialog.
:return:
"""
# End the current dialog, giving the reason.
await self.end_active_dialog(DialogReason.ReplaceCalled)
# Start replacement dialog
return await self.begin_dialog(dialog_id, options) | Ends the active dialog and starts a new dialog in its place. This is particularly useful
for creating loops or redirecting to another dialog.
:param dialog_id: ID of the dialog to search for.
:param options: (Optional) additional argument(s) to pass to the new dialog.
:return: | Below is the instruction that describes the task:
### Input:
Ends the active dialog and starts a new dialog in its place. This is particularly useful
for creating loops or redirecting to another dialog.
:param dialog_id: ID of the dialog to search for.
:param options: (Optional) additional argument(s) to pass to the new dialog.
:return:
### Response:
async def replace_dialog(self, dialog_id: str, options: object = None) -> DialogTurnResult:
"""
Ends the active dialog and starts a new dialog in its place. This is particularly useful
for creating loops or redirecting to another dialog.
:param dialog_id: ID of the dialog to search for.
:param options: (Optional) additional argument(s) to pass to the new dialog.
:return:
"""
# End the current dialog, giving the reason.
await self.end_active_dialog(DialogReason.ReplaceCalled)
# Start replacement dialog
return await self.begin_dialog(dialog_id, options) |
async def get_link_secret_label(self) -> str:
"""
Get current link secret label from non-secret storage records; return None for no match.
:return: latest non-secret storage record for link secret label
"""
LOGGER.debug('Wallet.get_link_secret_label >>>')
if not self.handle:
LOGGER.debug('Wallet.get_link_secret_label <!< Wallet %s is closed', self.name)
raise WalletState('Wallet {} is closed'.format(self.name))
rv = None
records = await self.get_non_secret(TYPE_LINK_SECRET_LABEL)
if records:
rv = records[str(max(int(k) for k in records))].value # str to int, max, and back again
LOGGER.debug('Wallet.get_link_secret_label <<< %s', rv)
return rv | Get current link secret label from non-secret storage records; return None for no match.
:return: latest non-secret storage record for link secret label | Below is the instruction that describes the task:
### Input:
Get current link secret label from non-secret storage records; return None for no match.
:return: latest non-secret storage record for link secret label
### Response:
async def get_link_secret_label(self) -> str:
"""
Get current link secret label from non-secret storage records; return None for no match.
:return: latest non-secret storage record for link secret label
"""
LOGGER.debug('Wallet.get_link_secret_label >>>')
if not self.handle:
LOGGER.debug('Wallet.get_link_secret_label <!< Wallet %s is closed', self.name)
raise WalletState('Wallet {} is closed'.format(self.name))
rv = None
records = await self.get_non_secret(TYPE_LINK_SECRET_LABEL)
if records:
rv = records[str(max(int(k) for k in records))].value # str to int, max, and back again
LOGGER.debug('Wallet.get_link_secret_label <<< %s', rv)
return rv |
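The one-liner selecting the latest record converts keys to int before taking the max, because string comparison would order '9' after '10'. A minimal illustration with hypothetical record values:

```python
records = {'2': 'label-a', '10': 'label-c', '9': 'label-b'}

# Lexicographic max over the string keys picks the wrong record...
assert max(records) == '9'

# ...so convert to int for the comparison, then back to str for the lookup.
latest = records[str(max(int(k) for k in records))]
```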
def lstled(x, n, array):
"""
Given a number x and an array of non-decreasing floats
find the index of the largest array element less than or equal to x.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/lstled_c.html
:param x: Value to search against.
:type x: float
:param n: Number of elements in array.
:type n: int
:param array: Array of possible lower bounds
:type array: list
:return: index of the last element of array that is less than or equal to x.
:rtype: int
"""
array = stypes.toDoubleVector(array)
x = ctypes.c_double(x)
n = ctypes.c_int(n)
return libspice.lstled_c(x, n, array) | Given a number x and an array of non-decreasing floats
find the index of the largest array element less than or equal to x.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/lstled_c.html
:param x: Value to search against.
:type x: float
:param n: Number of elements in array.
:type n: int
:param array: Array of possible lower bounds
:type array: list
:return: index of the last element of array that is less than or equal to x.
:rtype: int | Below is the instruction that describes the task:
### Input:
Given a number x and an array of non-decreasing floats
find the index of the largest array element less than or equal to x.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/lstled_c.html
:param x: Value to search against.
:type x: float
:param n: Number of elements in array.
:type n: int
:param array: Array of possible lower bounds
:type array: list
:return: index of the last element of array that is less than or equal to x.
:rtype: int
### Response:
def lstled(x, n, array):
"""
Given a number x and an array of non-decreasing floats
find the index of the largest array element less than or equal to x.
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/lstled_c.html
:param x: Value to search against.
:type x: float
:param n: Number of elements in array.
:type n: int
:param array: Array of possible lower bounds
:type array: list
:return: index of the last element of array that is less than or equal to x.
:rtype: int
"""
array = stypes.toDoubleVector(array)
x = ctypes.c_double(x)
n = ctypes.c_int(n)
return libspice.lstled_c(x, n, array) |
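Since the array is required to be non-decreasing, the same lookup can be expressed in pure Python with `bisect` (a sketch of the semantics, not a replacement for the SPICE call): the last index with `array[i] <= x` is `bisect_right(array, x) - 1`, which naturally yields -1 when every element exceeds x.

```python
from bisect import bisect_right

def lstled_py(x, array):
    """Index of the last element of non-decreasing `array` that is <= x (-1 if none)."""
    return bisect_right(array, x) - 1
```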
def _get_cibfile_tmp(cibname):
'''
Get the full path of a temporary CIB-file with the name of the CIB
'''
cibfile_tmp = '{0}.tmp'.format(_get_cibfile(cibname))
log.trace('cibfile_tmp: %s', cibfile_tmp)
return cibfile_tmp | Get the full path of a temporary CIB-file with the name of the CIB | Below is the instruction that describes the task:
### Input:
Get the full path of a temporary CIB-file with the name of the CIB
### Response:
def _get_cibfile_tmp(cibname):
'''
Get the full path of a temporary CIB-file with the name of the CIB
'''
cibfile_tmp = '{0}.tmp'.format(_get_cibfile(cibname))
log.trace('cibfile_tmp: %s', cibfile_tmp)
return cibfile_tmp |
def _swap(self):
'''Swaps the alignment so that the reference becomes the query and vice-versa. Swaps their names, coordinates etc. The frame is not changed'''
self.ref_start, self.qry_start = self.qry_start, self.ref_start
self.ref_end, self.qry_end = self.qry_end, self.ref_end
self.hit_length_ref, self.hit_length_qry = self.hit_length_qry, self.hit_length_ref
self.ref_length, self.qry_length = self.qry_length, self.ref_length
self.ref_name, self.qry_name = self.qry_name, self.ref_name | Swaps the alignment so that the reference becomes the query and vice-versa. Swaps their names, coordinates etc. The frame is not changed | Below is the instruction that describes the task:
### Input:
Swaps the alignment so that the reference becomes the query and vice-versa. Swaps their names, coordinates etc. The frame is not changed
### Response:
def _swap(self):
'''Swaps the alignment so that the reference becomes the query and vice-versa. Swaps their names, coordinates etc. The frame is not changed'''
self.ref_start, self.qry_start = self.qry_start, self.ref_start
self.ref_end, self.qry_end = self.qry_end, self.ref_end
self.hit_length_ref, self.hit_length_qry = self.hit_length_qry, self.hit_length_ref
self.ref_length, self.qry_length = self.qry_length, self.ref_length
self.ref_name, self.qry_name = self.qry_name, self.ref_name |
def UseRangeIndexesOnStrings(client, database_id):
"""Showing how range queries can be performed even on strings.
"""
try:
DeleteContainerIfExists(client, database_id, COLLECTION_ID)
database_link = GetDatabaseLink(database_id)
# collections = Query_Entities(client, 'collection', parent_link = database_link)
# print(collections)
# Use range indexes on strings
# This is how you can specify a range index on strings (and numbers) for all properties.
# This is the recommended indexing policy for collections. i.e. precision -1
#indexingPolicy = {
# 'indexingPolicy': {
# 'includedPaths': [
# {
# 'indexes': [
# {
# 'kind': documents.IndexKind.Range,
# 'dataType': documents.DataType.String,
# 'precision': -1
# }
# ]
# }
# ]
# }
#}
# For demo purposes, we are going to use the default (range on numbers, hash on strings) for the whole document (/* )
# and just include a range index on strings for the "region".
collection_definition = {
'id': COLLECTION_ID,
'indexingPolicy': {
'includedPaths': [
{
'path': '/region/?',
'indexes': [
{
'kind': documents.IndexKind.Range,
'dataType': documents.DataType.String,
'precision': -1
}
]
},
{
'path': '/*'
}
]
}
}
created_Container = client.CreateContainer(database_link, collection_definition)
print(created_Container)
print("\n" + "-" * 25 + "\n6. Collection created with index policy")
print_dictionary_items(created_Container["indexingPolicy"])
collection_link = GetContainerLink(database_id, COLLECTION_ID)
client.CreateItem(collection_link, { "id" : "doc1", "region" : "USA" })
client.CreateItem(collection_link, { "id" : "doc2", "region" : "UK" })
client.CreateItem(collection_link, { "id" : "doc3", "region" : "Armenia" })
client.CreateItem(collection_link, { "id" : "doc4", "region" : "Egypt" })
# Now ordering against region is allowed. You can run the following query
query = { "query" : "SELECT * FROM r ORDER BY r.region" }
message = "Documents ordered by region"
QueryDocumentsWithCustomQuery(client, collection_link, query, message)
# You can also perform filters against string comparison like >= 'UK'. Note that you can perform a prefix query,
# the equivalent of LIKE 'U%' (i.e. >= 'U' AND < 'V')
query = { "query" : "SELECT * FROM r WHERE r.region >= 'U'" }
message = "Documents with region begining with U"
QueryDocumentsWithCustomQuery(client, collection_link, query, message)
# Cleanup
client.DeleteContainer(collection_link)
print("\n")
except errors.HTTPFailure as e:
if e.status_code == 409:
print("Entity already exists")
elif e.status_code == 404:
print("Entity doesn't exist")
else:
raise | Showing how range queries can be performed even on strings. | Below is the instruction that describes the task:
### Input:
Showing how range queries can be performed even on strings.
### Response:
def UseRangeIndexesOnStrings(client, database_id):
"""Showing how range queries can be performed even on strings.
"""
try:
DeleteContainerIfExists(client, database_id, COLLECTION_ID)
database_link = GetDatabaseLink(database_id)
# collections = Query_Entities(client, 'collection', parent_link = database_link)
# print(collections)
# Use range indexes on strings
# This is how you can specify a range index on strings (and numbers) for all properties.
# This is the recommended indexing policy for collections. i.e. precision -1
#indexingPolicy = {
# 'indexingPolicy': {
# 'includedPaths': [
# {
# 'indexes': [
# {
# 'kind': documents.IndexKind.Range,
# 'dataType': documents.DataType.String,
# 'precision': -1
# }
# ]
# }
# ]
# }
#}
# For demo purposes, we are going to use the default (range on numbers, hash on strings) for the whole document (/* )
# and just include a range index on strings for the "region".
collection_definition = {
'id': COLLECTION_ID,
'indexingPolicy': {
'includedPaths': [
{
'path': '/region/?',
'indexes': [
{
'kind': documents.IndexKind.Range,
'dataType': documents.DataType.String,
'precision': -1
}
]
},
{
'path': '/*'
}
]
}
}
created_Container = client.CreateContainer(database_link, collection_definition)
print(created_Container)
print("\n" + "-" * 25 + "\n6. Collection created with index policy")
print_dictionary_items(created_Container["indexingPolicy"])
collection_link = GetContainerLink(database_id, COLLECTION_ID)
client.CreateItem(collection_link, { "id" : "doc1", "region" : "USA" })
client.CreateItem(collection_link, { "id" : "doc2", "region" : "UK" })
client.CreateItem(collection_link, { "id" : "doc3", "region" : "Armenia" })
client.CreateItem(collection_link, { "id" : "doc4", "region" : "Egypt" })
# Now ordering against region is allowed. You can run the following query
query = { "query" : "SELECT * FROM r ORDER BY r.region" }
message = "Documents ordered by region"
QueryDocumentsWithCustomQuery(client, collection_link, query, message)
# You can also perform filters against string comparison like >= 'UK'. Note that you can perform a prefix query,
# the equivalent of LIKE 'U%' (i.e. >= 'U' AND < 'V')
query = { "query" : "SELECT * FROM r WHERE r.region >= 'U'" }
message = "Documents with region begining with U"
QueryDocumentsWithCustomQuery(client, collection_link, query, message)
# Cleanup
client.DeleteContainer(collection_link)
print("\n")
except errors.HTTPFailure as e:
if e.status_code == 409:
print("Entity already exists")
elif e.status_code == 404:
print("Entity doesn't exist")
else:
raise |
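The prefix-query trick in the sample is worth spelling out: with a range index on strings, `LIKE 'U%'` becomes the closed-open range `>= 'U' AND < 'V'`, because every string starting with 'U' sorts at or after 'U' and strictly before 'V'. The filter is plain lexicographic comparison (the region names here are illustrative):

```python
regions = ['USA', 'UK', 'Armenia', 'Egypt', 'Uganda', 'Vietnam']

# LIKE 'U%' as a string range: at or after 'U', strictly before 'V'.
# 'Vietnam' is excluded because it sorts after 'V'.
starts_with_u = [r for r in regions if 'U' <= r < 'V']
```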
def check(cls, status):
"""Checks if a status enum matches the trigger originally set, and
if so, raises the appropriate error.
Args:
status (int, enum): A protobuf enum response status to check.
Raises:
AssertionError: If trigger or error were not set.
_ApiError: If the status matches the trigger. Do not catch. Will be
caught automatically and sent back to the client.
"""
assert cls.trigger is not None, 'Invalid ErrorTrap, trigger not set'
assert cls.error is not None, 'Invalid ErrorTrap, error not set'
if status == cls.trigger:
# pylint: disable=not-callable
# cls.error will be callable at runtime
raise cls.error() | Checks if a status enum matches the trigger originally set, and
if so, raises the appropriate error.
Args:
status (int, enum): A protobuf enum response status to check.
Raises:
AssertionError: If trigger or error were not set.
_ApiError: If the status matches the trigger. Do not catch. Will be
caught automatically and sent back to the client. | Below is the instruction that describes the task:
### Input:
Checks if a status enum matches the trigger originally set, and
if so, raises the appropriate error.
Args:
status (int, enum): A protobuf enum response status to check.
Raises:
AssertionError: If trigger or error were not set.
_ApiError: If the status matches the trigger. Do not catch. Will be
caught automatically and sent back to the client.
### Response:
def check(cls, status):
"""Checks if a status enum matches the trigger originally set, and
if so, raises the appropriate error.
Args:
status (int, enum): A protobuf enum response status to check.
Raises:
AssertionError: If trigger or error were not set.
_ApiError: If the status matches the trigger. Do not catch. Will be
caught automatically and sent back to the client.
"""
assert cls.trigger is not None, 'Invalid ErrorTrap, trigger not set'
assert cls.error is not None, 'Invalid ErrorTrap, error not set'
if status == cls.trigger:
# pylint: disable=not-callable
# cls.error will be callable at runtime
raise cls.error() |
def rule_command_cmdlist_interface_o_interface_loopback_leaf_interface_loopback_leaf(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
rule = ET.SubElement(config, "rule", xmlns="urn:brocade.com:mgmt:brocade-aaa")
index_key = ET.SubElement(rule, "index")
index_key.text = kwargs.pop('index')
command = ET.SubElement(rule, "command")
cmdlist = ET.SubElement(command, "cmdlist")
interface_o = ET.SubElement(cmdlist, "interface-o")
interface_loopback_leaf = ET.SubElement(interface_o, "interface-loopback-leaf")
interface = ET.SubElement(interface_loopback_leaf, "interface")
loopback_leaf = ET.SubElement(interface, "loopback-leaf")
loopback_leaf.text = kwargs.pop('loopback_leaf')
callback = kwargs.pop('callback', self._callback)
return callback(config) | Auto Generated Code | Below is the instruction that describes the task:
### Input:
Auto Generated Code
### Response:
def rule_command_cmdlist_interface_o_interface_loopback_leaf_interface_loopback_leaf(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
rule = ET.SubElement(config, "rule", xmlns="urn:brocade.com:mgmt:brocade-aaa")
index_key = ET.SubElement(rule, "index")
index_key.text = kwargs.pop('index')
command = ET.SubElement(rule, "command")
cmdlist = ET.SubElement(command, "cmdlist")
interface_o = ET.SubElement(cmdlist, "interface-o")
interface_loopback_leaf = ET.SubElement(interface_o, "interface-loopback-leaf")
interface = ET.SubElement(interface_loopback_leaf, "interface")
loopback_leaf = ET.SubElement(interface, "loopback-leaf")
loopback_leaf.text = kwargs.pop('loopback_leaf')
callback = kwargs.pop('callback', self._callback)
return callback(config) |
def create_with_virtualenv(self, interpreter, virtualenv_options):
"""Create a virtualenv using the virtualenv lib."""
args = ['virtualenv', '--python', interpreter, self.env_path]
args.extend(virtualenv_options)
if not self.pip_installed:
args.insert(3, '--no-pip')
try:
helpers.logged_exec(args)
self.env_bin_path = os.path.join(self.env_path, 'bin')
except FileNotFoundError as error:
logger.error('Virtualenv is not installed. It is needed to create a virtualenv with '
'a different python version than fades (got {})'.format(error))
raise FadesError('virtualenv not found')
except helpers.ExecutionError as error:
error.dump_to_log(logger)
raise FadesError('virtualenv could not be run')
except Exception as error:
logger.exception("Error creating virtualenv: %s", error)
raise FadesError('General error while running virtualenv') | Create a virtualenv using the virtualenv lib. | Below is the instruction that describes the task:
### Input:
Create a virtualenv using the virtualenv lib.
### Response:
def create_with_virtualenv(self, interpreter, virtualenv_options):
"""Create a virtualenv using the virtualenv lib."""
args = ['virtualenv', '--python', interpreter, self.env_path]
args.extend(virtualenv_options)
if not self.pip_installed:
args.insert(3, '--no-pip')
try:
helpers.logged_exec(args)
self.env_bin_path = os.path.join(self.env_path, 'bin')
except FileNotFoundError as error:
logger.error('Virtualenv is not installed. It is needed to create a virtualenv with '
'a different python version than fades (got {})'.format(error))
raise FadesError('virtualenv not found')
except helpers.ExecutionError as error:
error.dump_to_log(logger)
raise FadesError('virtualenv could not be run')
except Exception as error:
logger.exception("Error creating virtualenv: %s", error)
raise FadesError('General error while running virtualenv') |
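The argument assembly is worth a note: `--no-pip` is inserted at index 3 so it lands among the options, between the interpreter and the environment path. A sketch with hypothetical values:

```python
interpreter = 'python3.11'           # hypothetical
env_path = '/tmp/example-env'        # hypothetical
virtualenv_options = ['--system-site-packages']

args = ['virtualenv', '--python', interpreter, env_path]
args.extend(virtualenv_options)
pip_installed = False
if not pip_installed:
    args.insert(3, '--no-pip')       # slots in before env_path
```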
def _discover_gui():
"""Return the most desirable of the currently registered GUIs"""
# Prefer last registered
guis = reversed(pyblish.api.registered_guis())
for gui in guis:
try:
gui = __import__(gui).show
except (ImportError, AttributeError):
continue
else:
return gui | Return the most desirable of the currently registered GUIs | Below is the instruction that describes the task:
### Input:
Return the most desirable of the currently registered GUIs
### Response:
def _discover_gui():
"""Return the most desirable of the currently registered GUIs"""
# Prefer last registered
guis = reversed(pyblish.api.registered_guis())
for gui in guis:
try:
gui = __import__(gui).show
except (ImportError, AttributeError):
continue
else:
return gui |
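The "prefer last registered" rule reduces to scanning the registration list in reverse and returning the first entry that imports cleanly. A dependency-free sketch (names hypothetical), with importability modeled as a set instead of real `__import__` calls:

```python
def discover(registered, importable):
    """Return the most recently registered GUI name that is importable, else None."""
    for name in reversed(registered):
        if name in importable:
            return name
    return None
```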
def verify(self, parents=set()):
"""
## DEBUG ONLY ##
Recursively ensures that the invariants of an interval subtree
hold.
"""
assert(isinstance(self.s_center, set))
bal = self.balance
assert abs(bal) < 2, \
"Error: Rotation should have happened, but didn't! \n{}".format(
self.print_structure(tostring=True)
)
self.refresh_balance()
assert bal == self.balance, \
"Error: self.balance not set correctly! \n{}".format(
self.print_structure(tostring=True)
)
assert self.s_center, \
"Error: s_center is empty! \n{}".format(
self.print_structure(tostring=True)
)
for iv in self.s_center:
assert hasattr(iv, 'begin')
assert hasattr(iv, 'end')
assert iv.begin < iv.end
assert iv.overlaps(self.x_center)
for parent in sorted(parents):
assert not iv.contains_point(parent), \
"Error: Overlaps ancestor ({})! \n{}\n\n{}".format(
parent, iv, self.print_structure(tostring=True)
)
if self[0]:
assert self[0].x_center < self.x_center, \
"Error: Out-of-order left child! {}".format(self.x_center)
self[0].verify(parents.union([self.x_center]))
if self[1]:
assert self[1].x_center > self.x_center, \
"Error: Out-of-order right child! {}".format(self.x_center)
self[1].verify(parents.union([self.x_center])) | ## DEBUG ONLY ##
Recursively ensures that the invariants of an interval subtree
hold. | Below is the instruction that describes the task:
### Input:
## DEBUG ONLY ##
Recursively ensures that the invariants of an interval subtree
hold.
### Response:
def verify(self, parents=set()):
"""
## DEBUG ONLY ##
Recursively ensures that the invariants of an interval subtree
hold.
"""
assert(isinstance(self.s_center, set))
bal = self.balance
assert abs(bal) < 2, \
"Error: Rotation should have happened, but didn't! \n{}".format(
self.print_structure(tostring=True)
)
self.refresh_balance()
assert bal == self.balance, \
"Error: self.balance not set correctly! \n{}".format(
self.print_structure(tostring=True)
)
assert self.s_center, \
"Error: s_center is empty! \n{}".format(
self.print_structure(tostring=True)
)
for iv in self.s_center:
assert hasattr(iv, 'begin')
assert hasattr(iv, 'end')
assert iv.begin < iv.end
assert iv.overlaps(self.x_center)
for parent in sorted(parents):
assert not iv.contains_point(parent), \
"Error: Overlaps ancestor ({})! \n{}\n\n{}".format(
parent, iv, self.print_structure(tostring=True)
)
if self[0]:
assert self[0].x_center < self.x_center, \
"Error: Out-of-order left child! {}".format(self.x_center)
self[0].verify(parents.union([self.x_center]))
if self[1]:
assert self[1].x_center > self.x_center, \
"Error: Out-of-order right child! {}".format(self.x_center)
self[1].verify(parents.union([self.x_center])) |
def set_bucket_props(self, bucket, props):
"""
Set the properties on the bucket object given
"""
bucket_type = self._get_bucket_type(bucket.bucket_type)
url = self.bucket_properties_path(bucket.name,
bucket_type=bucket_type)
headers = {'Content-Type': 'application/json'}
content = json.dumps({'props': props})
# Run the request...
status, _, body = self._request('PUT', url, headers, content)
if status == 401:
raise SecurityError('Not authorized to set bucket properties.')
elif status != 204:
raise RiakError('Error setting bucket properties.')
        return True | Set the properties on the bucket object given | Below is the instruction that describes the task:
### Input:
Set the properties on the bucket object given
### Response:
def set_bucket_props(self, bucket, props):
"""
Set the properties on the bucket object given
"""
bucket_type = self._get_bucket_type(bucket.bucket_type)
url = self.bucket_properties_path(bucket.name,
bucket_type=bucket_type)
headers = {'Content-Type': 'application/json'}
content = json.dumps({'props': props})
# Run the request...
status, _, body = self._request('PUT', url, headers, content)
if status == 401:
raise SecurityError('Not authorized to set bucket properties.')
elif status != 204:
raise RiakError('Error setting bucket properties.')
return True |
def pivot(table, left, top, value):
"""
Creates a cross-tab or pivot table from a normalised input table. Use this
function to 'denormalize' a table of normalized records.
* The table argument can be a list of dictionaries or a Table object.
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334621)
* The left argument is a tuple of headings which are displayed down the
left side of the new table.
* The top argument is a tuple of headings which are displayed across the
top of the new table.
Tuples are used so that multiple element headings and columns can be used.
E.g. To transform the list (listOfDicts):
Name, Year, Value
-----------------------
'Simon', 2004, 32
'Simon', 2005, 128
'Russel', 2004, 64
'Eric', 2004, 52
'Russel', 2005, 32
into the new list:
'Name', 2004, 2005
------------------------
'Simon', 32, 128
'Russel', 64, 32
'Eric', 52, NA
you would call pivot with the arguments:
newList = pivot(listOfDicts, ('Name',), ('Year',), 'Value')
"""
rs = {}
ysort = []
xsort = []
for row in table:
yaxis = tuple([row[c] for c in left]) # e.g. yaxis = ('Simon',)
if yaxis not in ysort: ysort.append(yaxis)
xaxis = tuple([row[c] for c in top]) # e.g. xaxis = ('2004',)
if xaxis not in xsort: xsort.append(xaxis)
try:
rs[yaxis]
except KeyError:
rs[yaxis] = {}
if xaxis not in rs[yaxis]:
rs[yaxis][xaxis] = 0
rs[yaxis][xaxis] += row[value]
"""
In the following loop we take care of missing data,
e.g 'Eric' has a value in 2004 but not in 2005
"""
for key in rs:
if len(rs[key]) - len(xsort):
for var in xsort:
if var not in rs[key].keys():
rs[key][var] = ''
headings = list(left)
headings.extend(xsort)
t = []
"""
The lists 'sortedkeys' and 'sortedvalues' make sure that
even if the field 'top' is unordered, data will be transposed correctly.
E.g. in the example above the table rows are not ordered by the year
"""
for left in ysort:
row = list(left)
sortedkeys = sorted(rs[left].keys())
sortedvalues = map(rs[left].get, sortedkeys)
row.extend(sortedvalues)
t.append(dict(zip(headings,row)))
return t | Creates a cross-tab or pivot table from a normalised input table. Use this
function to 'denormalize' a table of normalized records.
* The table argument can be a list of dictionaries or a Table object.
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334621)
* The left argument is a tuple of headings which are displayed down the
left side of the new table.
* The top argument is a tuple of headings which are displayed across the
top of the new table.
Tuples are used so that multiple element headings and columns can be used.
E.g. To transform the list (listOfDicts):
Name, Year, Value
-----------------------
'Simon', 2004, 32
'Simon', 2005, 128
'Russel', 2004, 64
'Eric', 2004, 52
'Russel', 2005, 32
into the new list:
'Name', 2004, 2005
------------------------
'Simon', 32, 128
'Russel', 64, 32
'Eric', 52, NA
you would call pivot with the arguments:
    newList = pivot(listOfDicts, ('Name',), ('Year',), 'Value') | Below is the instruction that describes the task:
### Input:
Creates a cross-tab or pivot table from a normalised input table. Use this
function to 'denormalize' a table of normalized records.
* The table argument can be a list of dictionaries or a Table object.
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334621)
* The left argument is a tuple of headings which are displayed down the
left side of the new table.
* The top argument is a tuple of headings which are displayed across the
top of the new table.
Tuples are used so that multiple element headings and columns can be used.
E.g. To transform the list (listOfDicts):
Name, Year, Value
-----------------------
'Simon', 2004, 32
'Simon', 2005, 128
'Russel', 2004, 64
'Eric', 2004, 52
'Russel', 2005, 32
into the new list:
'Name', 2004, 2005
------------------------
'Simon', 32, 128
'Russel', 64, 32
'Eric', 52, NA
you would call pivot with the arguments:
newList = pivot(listOfDicts, ('Name',), ('Year',), 'Value')
### Response:
def pivot(table, left, top, value):
"""
Creates a cross-tab or pivot table from a normalised input table. Use this
function to 'denormalize' a table of normalized records.
* The table argument can be a list of dictionaries or a Table object.
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334621)
* The left argument is a tuple of headings which are displayed down the
left side of the new table.
* The top argument is a tuple of headings which are displayed across the
top of the new table.
Tuples are used so that multiple element headings and columns can be used.
E.g. To transform the list (listOfDicts):
Name, Year, Value
-----------------------
'Simon', 2004, 32
'Simon', 2005, 128
'Russel', 2004, 64
'Eric', 2004, 52
'Russel', 2005, 32
into the new list:
'Name', 2004, 2005
------------------------
'Simon', 32, 128
'Russel', 64, 32
'Eric', 52, NA
you would call pivot with the arguments:
newList = pivot(listOfDicts, ('Name',), ('Year',), 'Value')
"""
rs = {}
ysort = []
xsort = []
for row in table:
yaxis = tuple([row[c] for c in left]) # e.g. yaxis = ('Simon',)
if yaxis not in ysort: ysort.append(yaxis)
xaxis = tuple([row[c] for c in top]) # e.g. xaxis = ('2004',)
if xaxis not in xsort: xsort.append(xaxis)
try:
rs[yaxis]
except KeyError:
rs[yaxis] = {}
if xaxis not in rs[yaxis]:
rs[yaxis][xaxis] = 0
rs[yaxis][xaxis] += row[value]
"""
In the following loop we take care of missing data,
e.g 'Eric' has a value in 2004 but not in 2005
"""
for key in rs:
if len(rs[key]) - len(xsort):
for var in xsort:
if var not in rs[key].keys():
rs[key][var] = ''
headings = list(left)
headings.extend(xsort)
t = []
"""
The lists 'sortedkeys' and 'sortedvalues' make sure that
even if the field 'top' is unordered, data will be transposed correctly.
E.g. in the example above the table rows are not ordered by the year
"""
for left in ysort:
row = list(left)
sortedkeys = sorted(rs[left].keys())
sortedvalues = map(rs[left].get, sortedkeys)
row.extend(sortedvalues)
t.append(dict(zip(headings,row)))
return t |
def export(app, local):
"""Export the data."""
print_header()
log("Preparing to export the data...")
id = str(app)
subdata_path = os.path.join("data", id, "data")
# Create the data package
os.makedirs(subdata_path)
# Copy the experiment code into a code/ subdirectory
try:
shutil.copyfile(
os.path.join("snapshots", id + "-code.zip"),
os.path.join("data", id, id + "-code.zip")
)
except:
pass
# Copy in the DATA readme.
# open(os.path.join(id, "README.txt"), "a").close()
# Save the experiment id.
with open(os.path.join("data", id, "experiment_id.md"), "a+") as file:
file.write(id)
if not local:
# Export the logs
subprocess.call(
"heroku logs " +
"-n 10000 > " + os.path.join("data", id, "server_logs.md") +
" --app " + id,
shell=True)
dump_path = dump_database(id)
subprocess.call(
"pg_restore --verbose --clean -d wallace " +
os.path.join("data", id) + "/data.dump",
shell=True)
all_tables = [
"node",
"network",
"vector",
"info",
"transformation",
"transmission",
"participant",
"notification",
"question"
]
for table in all_tables:
subprocess.call(
"psql -d wallace --command=\"\\copy " + table + " to \'" +
os.path.join(subdata_path, table) + ".csv\' csv header\"",
shell=True)
if not local:
os.remove(dump_path)
log("Zipping up the package...")
shutil.make_archive(
os.path.join("data", id + "-data"),
"zip",
os.path.join("data", id)
)
shutil.rmtree(os.path.join("data", id))
    log("Done. Data available in " + str(id) + ".zip") | Export the data. | Below is the instruction that describes the task:
### Input:
Export the data.
### Response:
def export(app, local):
"""Export the data."""
print_header()
log("Preparing to export the data...")
id = str(app)
subdata_path = os.path.join("data", id, "data")
# Create the data package
os.makedirs(subdata_path)
# Copy the experiment code into a code/ subdirectory
try:
shutil.copyfile(
os.path.join("snapshots", id + "-code.zip"),
os.path.join("data", id, id + "-code.zip")
)
except:
pass
# Copy in the DATA readme.
# open(os.path.join(id, "README.txt"), "a").close()
# Save the experiment id.
with open(os.path.join("data", id, "experiment_id.md"), "a+") as file:
file.write(id)
if not local:
# Export the logs
subprocess.call(
"heroku logs " +
"-n 10000 > " + os.path.join("data", id, "server_logs.md") +
" --app " + id,
shell=True)
dump_path = dump_database(id)
subprocess.call(
"pg_restore --verbose --clean -d wallace " +
os.path.join("data", id) + "/data.dump",
shell=True)
all_tables = [
"node",
"network",
"vector",
"info",
"transformation",
"transmission",
"participant",
"notification",
"question"
]
for table in all_tables:
subprocess.call(
"psql -d wallace --command=\"\\copy " + table + " to \'" +
os.path.join(subdata_path, table) + ".csv\' csv header\"",
shell=True)
if not local:
os.remove(dump_path)
log("Zipping up the package...")
shutil.make_archive(
os.path.join("data", id + "-data"),
"zip",
os.path.join("data", id)
)
shutil.rmtree(os.path.join("data", id))
log("Done. Data available in " + str(id) + ".zip") |
def greplines(lines, regexpr_list, reflags=0):
"""
grepfile - greps a specific file
TODO: move to util_str, rework to be core of grepfile
"""
found_lines = []
found_lxs = []
# Ensure a list
islist = isinstance(regexpr_list, (list, tuple))
islist2 = isinstance(reflags, (list, tuple))
regexpr_list_ = regexpr_list if islist else [regexpr_list]
reflags_list = reflags if islist2 else [reflags] * len(regexpr_list_)
re_list = [re.compile(pat, flags=_flags)
for pat, _flags in zip(regexpr_list_, reflags_list)]
#print('regexpr_list_ = %r' % (regexpr_list_,))
#print('re_list = %r' % (re_list,))
import numpy as np
#import utool as ut
#cumsum = ut.cumsum(map(len, lines))
cumsum = np.cumsum(list(map(len, lines)))
text = ''.join(lines)
# Search each line for each pattern
for re_ in re_list:
        # FIXME: multiline mode doesn't work
for match_object in re_.finditer(text):
lxs = np.where(match_object.start() < cumsum)[0][0:1]
if len(lxs) == 1:
lx = lxs[0]
if lx > 0:
line_start = cumsum[lx - 1]
else:
line_start = 0
line_end = cumsum[lx]
line = text[line_start:line_end]
found_lines.append(line)
found_lxs.append(lx)
return found_lines, found_lxs | grepfile - greps a specific file
TODO: move to util_str, rework to be core of grepfile | Below is the instruction that describes the task:
### Input:
grepfile - greps a specific file
TODO: move to util_str, rework to be core of grepfile
### Response:
def greplines(lines, regexpr_list, reflags=0):
"""
grepfile - greps a specific file
TODO: move to util_str, rework to be core of grepfile
"""
found_lines = []
found_lxs = []
# Ensure a list
islist = isinstance(regexpr_list, (list, tuple))
islist2 = isinstance(reflags, (list, tuple))
regexpr_list_ = regexpr_list if islist else [regexpr_list]
reflags_list = reflags if islist2 else [reflags] * len(regexpr_list_)
re_list = [re.compile(pat, flags=_flags)
for pat, _flags in zip(regexpr_list_, reflags_list)]
#print('regexpr_list_ = %r' % (regexpr_list_,))
#print('re_list = %r' % (re_list,))
import numpy as np
#import utool as ut
#cumsum = ut.cumsum(map(len, lines))
cumsum = np.cumsum(list(map(len, lines)))
text = ''.join(lines)
# Search each line for each pattern
for re_ in re_list:
        # FIXME: multiline mode doesn't work
for match_object in re_.finditer(text):
lxs = np.where(match_object.start() < cumsum)[0][0:1]
if len(lxs) == 1:
lx = lxs[0]
if lx > 0:
line_start = cumsum[lx - 1]
else:
line_start = 0
line_end = cumsum[lx]
line = text[line_start:line_end]
found_lines.append(line)
found_lxs.append(lx)
return found_lines, found_lxs |
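The cumulative-length lookup in the greplines record above does not strictly need numpy: the same line-index recovery can be done with the stdlib by bisecting the running line lengths. A single-pattern sketch (the original also accepts a list of patterns and per-pattern flags):

```python
import re
from bisect import bisect_right
from itertools import accumulate

def greplines(lines, pattern, flags=0):
    """Return (matching_lines, line_indexes) for one pattern.

    Mirrors the record above: matches are found in the joined text, and the
    line index is recovered by bisecting the cumulative line lengths instead
    of calling numpy.where on a cumsum array.
    """
    cumsum = list(accumulate(len(line) for line in lines))
    text = ''.join(lines)
    found_lines, found_lxs = [], []
    for match in re.compile(pattern, flags).finditer(text):
        # First index whose cumulative length exceeds the match start,
        # i.e. the line containing the match.
        lx = bisect_right(cumsum, match.start())
        found_lines.append(lines[lx])
        found_lxs.append(lx)
    return found_lines, found_lxs
```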
def is_in_list(self, plane_list):
"""
Checks whether the plane is identical to one of the Planes in the plane_list list of Planes
:param plane_list: List of Planes to be compared to
:return: True if the plane is in the list, False otherwise
"""
for plane in plane_list:
if self.is_same_plane_as(plane):
return True
return False | Checks whether the plane is identical to one of the Planes in the plane_list list of Planes
:param plane_list: List of Planes to be compared to
:return: True if the plane is in the list, False otherwise | Below is the instruction that describes the task:
### Input:
Checks whether the plane is identical to one of the Planes in the plane_list list of Planes
:param plane_list: List of Planes to be compared to
:return: True if the plane is in the list, False otherwise
### Response:
def is_in_list(self, plane_list):
"""
Checks whether the plane is identical to one of the Planes in the plane_list list of Planes
:param plane_list: List of Planes to be compared to
:return: True if the plane is in the list, False otherwise
"""
for plane in plane_list:
if self.is_same_plane_as(plane):
return True
return False |
def calculate_bayesian_probability(self, cat, token_score, token_tally):
"""
Calculates the bayesian probability for a given token/category
:param cat: The category we're scoring for this token
:type cat: str
:param token_score: The tally of this token for this category
:type token_score: float
:param token_tally: The tally total for this token from all categories
:type token_tally: float
:return: bayesian probability
:rtype: float
"""
# P that any given token IS in this category
prc = self.probabilities[cat]['prc']
# P that any given token is NOT in this category
prnc = self.probabilities[cat]['prnc']
# P that this token is NOT of this category
prtnc = (token_tally - token_score) / token_tally
# P that this token IS of this category
prtc = token_score / token_tally
# Assembling the parts of the bayes equation
numerator = (prtc * prc)
denominator = (numerator + (prtnc * prnc))
# Returning the calculated bayes probability unless the denom. is 0
return numerator / denominator if denominator != 0.0 else 0.0 | Calculates the bayesian probability for a given token/category
:param cat: The category we're scoring for this token
:type cat: str
:param token_score: The tally of this token for this category
:type token_score: float
:param token_tally: The tally total for this token from all categories
:type token_tally: float
:return: bayesian probability
:rtype: float | Below is the instruction that describes the task:
### Input:
Calculates the bayesian probability for a given token/category
:param cat: The category we're scoring for this token
:type cat: str
:param token_score: The tally of this token for this category
:type token_score: float
:param token_tally: The tally total for this token from all categories
:type token_tally: float
:return: bayesian probability
:rtype: float
### Response:
def calculate_bayesian_probability(self, cat, token_score, token_tally):
"""
Calculates the bayesian probability for a given token/category
:param cat: The category we're scoring for this token
:type cat: str
:param token_score: The tally of this token for this category
:type token_score: float
:param token_tally: The tally total for this token from all categories
:type token_tally: float
:return: bayesian probability
:rtype: float
"""
# P that any given token IS in this category
prc = self.probabilities[cat]['prc']
# P that any given token is NOT in this category
prnc = self.probabilities[cat]['prnc']
# P that this token is NOT of this category
prtnc = (token_tally - token_score) / token_tally
# P that this token IS of this category
prtc = token_score / token_tally
# Assembling the parts of the bayes equation
numerator = (prtc * prc)
denominator = (numerator + (prtnc * prnc))
# Returning the calculated bayes probability unless the denom. is 0
return numerator / denominator if denominator != 0.0 else 0.0 |
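Stripped of the class state, the update in the record above is a two-hypothesis Bayes rule. A standalone sketch with the priors passed in explicitly (function and parameter names here are illustrative, not part of the original API):

```python
def bayesian_probability(prc, prnc, token_score, token_tally):
    # prc / prnc: prior that an arbitrary token is / is not in the category
    # token_score: tally of this token within the category
    # token_tally: tally of this token across all categories
    prtc = token_score / token_tally                    # P(token | category)
    prtnc = (token_tally - token_score) / token_tally   # P(token | not category)
    numerator = prtc * prc
    denominator = numerator + prtnc * prnc
    # Same zero-denominator guard as the original method
    return numerator / denominator if denominator != 0.0 else 0.0
```

With uniform priors (0.5 each), a token seen 8 times out of 10 in one category scores 0.8 for that category.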
def triangle(self, params=None, **kwargs):
"""
Makes a nifty corner plot.
Uses :func:`triangle.corner`.
:param params: (optional)
Names of columns (from :attr:`StarModel.samples`)
to plot. If ``None``, then it will plot samples
of the parameters used in the MCMC fit-- that is,
mass, age, [Fe/H], and optionally distance and A_V.
:param query: (optional)
Optional query on samples.
:param extent: (optional)
Will be appropriately passed to :func:`triangle.corner`.
:param **kwargs:
Additional keyword arguments passed to :func:`triangle.corner`.
:return:
            Figure object containing corner plot.
"""
if params is None:
params = ['mass_A', 'mass_B', 'age', 'feh', 'distance', 'AV']
super(BinaryStarModel, self).triangle(params=params, **kwargs) | Makes a nifty corner plot.
Uses :func:`triangle.corner`.
:param params: (optional)
Names of columns (from :attr:`StarModel.samples`)
to plot. If ``None``, then it will plot samples
of the parameters used in the MCMC fit-- that is,
mass, age, [Fe/H], and optionally distance and A_V.
:param query: (optional)
Optional query on samples.
:param extent: (optional)
Will be appropriately passed to :func:`triangle.corner`.
:param **kwargs:
Additional keyword arguments passed to :func:`triangle.corner`.
:return:
Figure object containing corner plot. | Below is the instruction that describes the task:
### Input:
Makes a nifty corner plot.
Uses :func:`triangle.corner`.
:param params: (optional)
Names of columns (from :attr:`StarModel.samples`)
to plot. If ``None``, then it will plot samples
of the parameters used in the MCMC fit-- that is,
mass, age, [Fe/H], and optionally distance and A_V.
:param query: (optional)
Optional query on samples.
:param extent: (optional)
Will be appropriately passed to :func:`triangle.corner`.
:param **kwargs:
Additional keyword arguments passed to :func:`triangle.corner`.
:return:
Figure object containing corner plot.
### Response:
def triangle(self, params=None, **kwargs):
"""
Makes a nifty corner plot.
Uses :func:`triangle.corner`.
:param params: (optional)
Names of columns (from :attr:`StarModel.samples`)
to plot. If ``None``, then it will plot samples
of the parameters used in the MCMC fit-- that is,
mass, age, [Fe/H], and optionally distance and A_V.
:param query: (optional)
Optional query on samples.
:param extent: (optional)
Will be appropriately passed to :func:`triangle.corner`.
:param **kwargs:
Additional keyword arguments passed to :func:`triangle.corner`.
:return:
            Figure object containing corner plot.
"""
if params is None:
params = ['mass_A', 'mass_B', 'age', 'feh', 'distance', 'AV']
super(BinaryStarModel, self).triangle(params=params, **kwargs) |
def get_intent_name(handler_input):
# type: (HandlerInput) -> AnyStr
"""Return the name of the intent request.
The method retrieves the intent ``name`` from the input request, only if
the input request is an
:py:class:`ask_sdk_model.intent_request.IntentRequest`. If the input
is not an IntentRequest, a :py:class:`TypeError` is raised.
:param handler_input: The handler input instance that is generally
passed in the sdk's request and exception components
:type handler_input: ask_sdk_core.handler_input.HandlerInput
:return: Name of the intent request
:rtype: str
:raises: TypeError
"""
request = handler_input.request_envelope.request
if isinstance(request, IntentRequest):
return request.intent.name
raise TypeError("The provided request is not an IntentRequest") | Return the name of the intent request.
The method retrieves the intent ``name`` from the input request, only if
the input request is an
:py:class:`ask_sdk_model.intent_request.IntentRequest`. If the input
is not an IntentRequest, a :py:class:`TypeError` is raised.
:param handler_input: The handler input instance that is generally
passed in the sdk's request and exception components
:type handler_input: ask_sdk_core.handler_input.HandlerInput
:return: Name of the intent request
:rtype: str
:raises: TypeError | Below is the instruction that describes the task:
### Input:
Return the name of the intent request.
The method retrieves the intent ``name`` from the input request, only if
the input request is an
:py:class:`ask_sdk_model.intent_request.IntentRequest`. If the input
is not an IntentRequest, a :py:class:`TypeError` is raised.
:param handler_input: The handler input instance that is generally
passed in the sdk's request and exception components
:type handler_input: ask_sdk_core.handler_input.HandlerInput
:return: Name of the intent request
:rtype: str
:raises: TypeError
### Response:
def get_intent_name(handler_input):
# type: (HandlerInput) -> AnyStr
"""Return the name of the intent request.
The method retrieves the intent ``name`` from the input request, only if
the input request is an
:py:class:`ask_sdk_model.intent_request.IntentRequest`. If the input
is not an IntentRequest, a :py:class:`TypeError` is raised.
:param handler_input: The handler input instance that is generally
passed in the sdk's request and exception components
:type handler_input: ask_sdk_core.handler_input.HandlerInput
:return: Name of the intent request
:rtype: str
:raises: TypeError
"""
request = handler_input.request_envelope.request
if isinstance(request, IntentRequest):
return request.intent.name
raise TypeError("The provided request is not an IntentRequest") |
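The type check in the record above can be exercised without the SDK installed. The classes below are minimal hypothetical stand-ins for the `ask_sdk_model` types, and the function takes the request directly rather than unwrapping `handler_input.request_envelope.request`:

```python
# Hypothetical stand-ins for the ask_sdk_model types, for illustration only.
class Intent:
    def __init__(self, name):
        self.name = name

class IntentRequest:
    def __init__(self, intent):
        self.intent = intent

class LaunchRequest:
    """Any request type that is not an IntentRequest."""

def get_intent_name(request):
    # Same logic as the record above: only IntentRequest carries an intent name.
    if isinstance(request, IntentRequest):
        return request.intent.name
    raise TypeError("The provided request is not an IntentRequest")
```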
def _detect_sse3(self):
"Does this compiler support SSE3 intrinsics?"
self._print_support_start('SSE3')
result = self.hasfunction('__m128 v; _mm_hadd_ps(v,v)',
include='<pmmintrin.h>',
extra_postargs=['-msse3'])
self._print_support_end('SSE3', result)
return result | Does this compiler support SSE3 intrinsics? | Below is the the instruction that describes the task:
### Input:
Does this compiler support SSE3 intrinsics?
### Response:
def _detect_sse3(self):
"Does this compiler support SSE3 intrinsics?"
self._print_support_start('SSE3')
result = self.hasfunction('__m128 v; _mm_hadd_ps(v,v)',
include='<pmmintrin.h>',
extra_postargs=['-msse3'])
self._print_support_end('SSE3', result)
return result |
def on_diff(request, page_name):
"""Show the diff between two revisions."""
old = request.args.get("old", type=int)
new = request.args.get("new", type=int)
error = ""
diff = page = old_rev = new_rev = None
if not (old and new):
error = "No revisions specified."
else:
revisions = dict(
(x.revision_id, x)
for x in Revision.query.filter(
(Revision.revision_id.in_((old, new)))
& (Revision.page_id == Page.page_id)
& (Page.name == page_name)
)
)
if len(revisions) != 2:
error = "At least one of the revisions requested does not exist."
else:
new_rev = revisions[new]
old_rev = revisions[old]
page = old_rev.page
diff = unified_diff(
(old_rev.text + "\n").splitlines(True),
(new_rev.text + "\n").splitlines(True),
page.name,
page.name,
format_datetime(old_rev.timestamp),
format_datetime(new_rev.timestamp),
3,
)
return Response(
generate_template(
"action_diff.html",
error=error,
old_revision=old_rev,
new_revision=new_rev,
page=page,
diff=diff,
)
    ) | Show the diff between two revisions. | Below is the instruction that describes the task:
### Input:
Show the diff between two revisions.
### Response:
def on_diff(request, page_name):
"""Show the diff between two revisions."""
old = request.args.get("old", type=int)
new = request.args.get("new", type=int)
error = ""
diff = page = old_rev = new_rev = None
if not (old and new):
error = "No revisions specified."
else:
revisions = dict(
(x.revision_id, x)
for x in Revision.query.filter(
(Revision.revision_id.in_((old, new)))
& (Revision.page_id == Page.page_id)
& (Page.name == page_name)
)
)
if len(revisions) != 2:
error = "At least one of the revisions requested does not exist."
else:
new_rev = revisions[new]
old_rev = revisions[old]
page = old_rev.page
diff = unified_diff(
(old_rev.text + "\n").splitlines(True),
(new_rev.text + "\n").splitlines(True),
page.name,
page.name,
format_datetime(old_rev.timestamp),
format_datetime(new_rev.timestamp),
3,
)
return Response(
generate_template(
"action_diff.html",
error=error,
old_revision=old_rev,
new_revision=new_rev,
page=page,
diff=diff,
)
) |
def _list_packages(self, args):
'''
List files for an installed package
'''
packages = self._pkgdb_fun('list_packages', self.db_conn)
for package in packages:
if self.opts['verbose']:
status_msg = ','.join(package)
else:
status_msg = package[0]
            self.ui.status(status_msg) | List files for an installed package | Below is the instruction that describes the task:
### Input:
List files for an installed package
### Response:
def _list_packages(self, args):
'''
List files for an installed package
'''
packages = self._pkgdb_fun('list_packages', self.db_conn)
for package in packages:
if self.opts['verbose']:
status_msg = ','.join(package)
else:
status_msg = package[0]
self.ui.status(status_msg) |
def validate_activatable_models():
"""
Raises a ValidationError for any ActivatableModel that has ForeignKeys or OneToOneFields that will
cause cascading deletions to occur. This function also raises a ValidationError if the activatable
model has not defined a Boolean field with the field name defined by the ACTIVATABLE_FIELD_NAME variable
on the model.
"""
for model in get_activatable_models():
# Verify the activatable model has an activatable boolean field
activatable_field = next((
f for f in model._meta.fields
if f.__class__ == models.BooleanField and f.name == model.ACTIVATABLE_FIELD_NAME
), None)
if activatable_field is None:
raise ValidationError((
'Model {0} is an activatable model. It must define an activatable BooleanField that '
'has a field name of model.ACTIVATABLE_FIELD_NAME (which defaults to is_active)'.format(model)
))
# Ensure all foreign keys and onetoone fields will not result in cascade deletions if not cascade deletable
if not model.ALLOW_CASCADE_DELETE:
for field in model._meta.fields:
if field.__class__ in (models.ForeignKey, models.OneToOneField):
if field.remote_field.on_delete == models.CASCADE:
raise ValidationError((
'Model {0} is an activatable model. All ForeignKey and OneToOneFields '
'must set on_delete methods to something other than CASCADE (the default). '
                        'If you want to explicitly allow cascade deletes, then you must set the '
'ALLOW_CASCADE_DELETE=True class variable on your model.'
).format(model)) | Raises a ValidationError for any ActivatableModel that has ForeignKeys or OneToOneFields that will
cause cascading deletions to occur. This function also raises a ValidationError if the activatable
model has not defined a Boolean field with the field name defined by the ACTIVATABLE_FIELD_NAME variable
on the model. | Below is the instruction that describes the task:
### Input:
Raises a ValidationError for any ActivatableModel that has ForeignKeys or OneToOneFields that will
cause cascading deletions to occur. This function also raises a ValidationError if the activatable
model has not defined a Boolean field with the field name defined by the ACTIVATABLE_FIELD_NAME variable
on the model.
### Response:
def validate_activatable_models():
"""
Raises a ValidationError for any ActivatableModel that has ForeignKeys or OneToOneFields that will
cause cascading deletions to occur. This function also raises a ValidationError if the activatable
model has not defined a Boolean field with the field name defined by the ACTIVATABLE_FIELD_NAME variable
on the model.
"""
for model in get_activatable_models():
# Verify the activatable model has an activatable boolean field
activatable_field = next((
f for f in model._meta.fields
if f.__class__ == models.BooleanField and f.name == model.ACTIVATABLE_FIELD_NAME
), None)
if activatable_field is None:
raise ValidationError((
'Model {0} is an activatable model. It must define an activatable BooleanField that '
'has a field name of model.ACTIVATABLE_FIELD_NAME (which defaults to is_active)'.format(model)
))
# Ensure all foreign keys and onetoone fields will not result in cascade deletions if not cascade deletable
if not model.ALLOW_CASCADE_DELETE:
for field in model._meta.fields:
if field.__class__ in (models.ForeignKey, models.OneToOneField):
if field.remote_field.on_delete == models.CASCADE:
raise ValidationError((
'Model {0} is an activatable model. All ForeignKey and OneToOneFields '
'must set on_delete methods to something other than CASCADE (the default). '
                        'If you want to explicitly allow cascade deletes, then you must set the '
'ALLOW_CASCADE_DELETE=True class variable on your model.'
).format(model)) |
def _function_contents(func):
"""
The signature is as follows (should be byte/chars):
< _code_contents (see above) from func.__code__ >
,( comma separated _object_contents for function argument defaults)
,( comma separated _object_contents for any closure contents )
See also: https://docs.python.org/3/reference/datamodel.html
- func.__code__ - The code object representing the compiled function body.
- func.__defaults__ - A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value
- func.__closure__ - None or a tuple of cells that contain bindings for the function's free variables.
:Returns:
Signature contents of a function. (in bytes)
"""
contents = [_code_contents(func.__code__, func.__doc__)]
# The function contents depends on the value of defaults arguments
if func.__defaults__:
function_defaults_contents = [_object_contents(cc) for cc in func.__defaults__]
defaults = bytearray(b',(')
defaults.extend(bytearray(b',').join(function_defaults_contents))
defaults.extend(b')')
contents.append(defaults)
else:
contents.append(b',()')
# The function contents depends on the closure captured cell values.
closure = func.__closure__ or []
try:
closure_contents = [_object_contents(x.cell_contents) for x in closure]
except AttributeError:
closure_contents = []
contents.append(b',(')
contents.append(bytearray(b',').join(closure_contents))
contents.append(b')')
retval = bytearray(b'').join(contents)
return retval | The signature is as follows (should be byte/chars):
< _code_contents (see above) from func.__code__ >
,( comma separated _object_contents for function argument defaults)
,( comma separated _object_contents for any closure contents )
See also: https://docs.python.org/3/reference/datamodel.html
- func.__code__ - The code object representing the compiled function body.
- func.__defaults__ - A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value
- func.__closure__ - None or a tuple of cells that contain bindings for the function's free variables.
:Returns:
Signature contents of a function. (in bytes) | Below is the instruction that describes the task:
### Input:
The signature is as follows (should be byte/chars):
< _code_contents (see above) from func.__code__ >
,( comma separated _object_contents for function argument defaults)
,( comma separated _object_contents for any closure contents )
See also: https://docs.python.org/3/reference/datamodel.html
- func.__code__ - The code object representing the compiled function body.
- func.__defaults__ - A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value
- func.__closure__ - None or a tuple of cells that contain bindings for the function's free variables.
:Returns:
Signature contents of a function. (in bytes)
### Response:
def _function_contents(func):
"""
The signature is as follows (should be byte/chars):
< _code_contents (see above) from func.__code__ >
,( comma separated _object_contents for function argument defaults)
,( comma separated _object_contents for any closure contents )
See also: https://docs.python.org/3/reference/datamodel.html
- func.__code__ - The code object representing the compiled function body.
- func.__defaults__ - A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value
- func.__closure__ - None or a tuple of cells that contain bindings for the function's free variables.
:Returns:
Signature contents of a function. (in bytes)
"""
contents = [_code_contents(func.__code__, func.__doc__)]
# The function contents depends on the value of defaults arguments
if func.__defaults__:
function_defaults_contents = [_object_contents(cc) for cc in func.__defaults__]
defaults = bytearray(b',(')
defaults.extend(bytearray(b',').join(function_defaults_contents))
defaults.extend(b')')
contents.append(defaults)
else:
contents.append(b',()')
# The function contents depends on the closure captured cell values.
closure = func.__closure__ or []
try:
closure_contents = [_object_contents(x.cell_contents) for x in closure]
except AttributeError:
closure_contents = []
contents.append(b',(')
contents.append(bytearray(b',').join(closure_contents))
contents.append(b')')
retval = bytearray(b'').join(contents)
return retval |
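Editorial note: the record above hashes a function's signature from `__code__`, `__defaults__`, and `__closure__`. A minimal standalone sketch of the introspection it relies on (the `shift`/`add` names are illustrative, not from the record):

```python
def shift(offset):
    def add(x, base=10):
        # `base` lands in add.__defaults__; `offset` is captured in add.__closure__
        return x + base + offset
    return add

fn = shift(5)
defaults = fn.__defaults__                                      # tuple of default values
cells = tuple(c.cell_contents for c in (fn.__closure__ or ()))  # captured free variables
```

Here `defaults` is `(10,)` and `cells` is `(5,)`, which is exactly the data `_function_contents` serializes after the code object.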
def getSolution(self):
"""
Find and return a solution to the problem
Example:
>>> problem = Problem()
>>> problem.getSolution() is None
True
>>> problem.addVariables(["a"], [42])
>>> problem.getSolution()
{'a': 42}
@return: Solution for the problem
@rtype: dictionary mapping variables to values
"""
domains, constraints, vconstraints = self._getArgs()
if not domains:
return None
return self._solver.getSolution(domains, constraints, vconstraints) | Find and return a solution to the problem
Example:
>>> problem = Problem()
>>> problem.getSolution() is None
True
>>> problem.addVariables(["a"], [42])
>>> problem.getSolution()
{'a': 42}
@return: Solution for the problem
@rtype: dictionary mapping variables to values | Below is the instruction that describes the task:
### Input:
Find and return a solution to the problem
Example:
>>> problem = Problem()
>>> problem.getSolution() is None
True
>>> problem.addVariables(["a"], [42])
>>> problem.getSolution()
{'a': 42}
@return: Solution for the problem
@rtype: dictionary mapping variables to values
### Response:
def getSolution(self):
"""
Find and return a solution to the problem
Example:
>>> problem = Problem()
>>> problem.getSolution() is None
True
>>> problem.addVariables(["a"], [42])
>>> problem.getSolution()
{'a': 42}
@return: Solution for the problem
@rtype: dictionary mapping variables to values
"""
domains, constraints, vconstraints = self._getArgs()
if not domains:
return None
return self._solver.getSolution(domains, constraints, vconstraints) |
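Editorial note: `getSolution` above delegates to a pluggable solver over domains and constraints. As a rough illustration of what it computes (not the library's API, just the idea), a brute-force sketch over explicit domains:

```python
from itertools import product

def brute_force_solution(domains, constraint):
    # Try every assignment of domain values to variables; return the first
    # assignment that satisfies the constraint, or None if none does.
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        candidate = dict(zip(names, values))
        if constraint(candidate):
            return candidate
    return None

solution = brute_force_solution({'a': [1, 2, 3], 'b': [2, 3]},
                                lambda s: s['a'] + s['b'] == 5)
```

Real solvers replace the exhaustive `product` with backtracking and constraint propagation, but the contract is the same: one satisfying assignment or `None`.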
def __make_tree(self):
"""Build a tree using lxml.html.builder and our subtrees"""
# create div with "container" class
div = E.DIV(E.CLASS("container"))
# append header with title
div.append(E.H2(self.__title))
# next, iterate through subtrees appending each tree to div
for subtree in self.__subtrees:
div.append(subtree.get_html())
# Connect div to body
body = E.BODY(div)
# attach body to html
self.__htmltree = E.HTML(
E.HEAD(
E.TITLE(self.__title)
),
body
) | Build a tree using lxml.html.builder and our subtrees | Below is the the instruction that describes the task:
### Input:
Build a tree using lxml.html.builder and our subtrees
### Response:
def __make_tree(self):
"""Build a tree using lxml.html.builder and our subtrees"""
# create div with "container" class
div = E.DIV(E.CLASS("container"))
# append header with title
div.append(E.H2(self.__title))
# next, iterate through subtrees appending each tree to div
for subtree in self.__subtrees:
div.append(subtree.get_html())
# Connect div to body
body = E.BODY(div)
# attach body to html
self.__htmltree = E.HTML(
E.HEAD(
E.TITLE(self.__title)
),
body
) |
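Editorial note: `__make_tree` uses `lxml.html.builder`; the same tree shape can be sketched with the standard library's `xml.etree` (the title text is a placeholder):

```python
from xml.etree import ElementTree as ET

title_text = 'Report'
div = ET.Element('div', {'class': 'container'})
h2 = ET.SubElement(div, 'h2')
h2.text = title_text
# subtrees would be appended to `div` here, one element per section
body = ET.Element('body')
body.append(div)
html = ET.Element('html')
head = ET.SubElement(html, 'head')
title = ET.SubElement(head, 'title')
title.text = title_text
html.append(body)
markup = ET.tostring(html, encoding='unicode')
```

The resulting `markup` nests `head`/`body` under `html` exactly as the builder calls in the record do.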
def add_site(self, site_name, location_name=None, er_data=None, pmag_data=None):
"""
Create a Site object and add it to self.sites.
If a location name is provided, add the site to location.sites as well.
"""
if location_name:
location = self.find_by_name(location_name, self.locations)
if not location:
location = self.add_location(location_name)
else:
location = None
## check all declinations/azimuths/longitudes in range 0=>360.
#for key, value in er_data.items():
# er_data[key] = pmag.adjust_to_360(value, key)
new_site = Site(site_name, location, self.data_model, er_data, pmag_data)
self.sites.append(new_site)
if location:
location.sites.append(new_site)
return new_site | Create a Site object and add it to self.sites.
If a location name is provided, add the site to location.sites as well. | Below is the instruction that describes the task:
### Input:
Create a Site object and add it to self.sites.
If a location name is provided, add the site to location.sites as well.
### Response:
def add_site(self, site_name, location_name=None, er_data=None, pmag_data=None):
"""
Create a Site object and add it to self.sites.
If a location name is provided, add the site to location.sites as well.
"""
if location_name:
location = self.find_by_name(location_name, self.locations)
if not location:
location = self.add_location(location_name)
else:
location = None
## check all declinations/azimuths/longitudes in range 0=>360.
#for key, value in er_data.items():
# er_data[key] = pmag.adjust_to_360(value, key)
new_site = Site(site_name, location, self.data_model, er_data, pmag_data)
self.sites.append(new_site)
if location:
location.sites.append(new_site)
return new_site |
def power(self, n):
"""Return the compose of a operator with itself n times.
Args:
n (int): the number of times to compose with self (n>0).
Returns:
BaseOperator: the n-times composed operator.
Raises:
QiskitError: if the input and output dimensions of the operator
are not equal, or the power is not a positive integer.
"""
# NOTE: if a subclass can have negative or non-integer powers
# this method should be overridden in that class.
if not isinstance(n, (int, np.integer)) or n < 1:
raise QiskitError("Can only power with positive integer powers.")
if self._input_dim != self._output_dim:
raise QiskitError("Can only power with input_dim = output_dim.")
ret = self.copy()
for _ in range(1, n):
ret = ret.compose(self)
return ret | Return the composition of an operator with itself n times.
Args:
n (int): the number of times to compose with self (n>0).
Returns:
BaseOperator: the n-times composed operator.
Raises:
QiskitError: if the input and output dimensions of the operator
are not equal, or the power is not a positive integer. | Below is the instruction that describes the task:
### Input:
Return the composition of an operator with itself n times.
Args:
n (int): the number of times to compose with self (n>0).
Returns:
BaseOperator: the n-times composed operator.
Raises:
QiskitError: if the input and output dimensions of the operator
are not equal, or the power is not a positive integer.
### Response:
def power(self, n):
"""Return the compose of a operator with itself n times.
Args:
n (int): the number of times to compose with self (n>0).
Returns:
BaseOperator: the n-times composed operator.
Raises:
QiskitError: if the input and output dimensions of the operator
are not equal, or the power is not a positive integer.
"""
# NOTE: if a subclass can have negative or non-integer powers
# this method should be overriden in that class.
if not isinstance(n, (int, np.integer)) or n < 1:
raise QiskitError("Can only power with positive integer powers.")
if self._input_dim != self._output_dim:
raise QiskitError("Can only power with input_dim = output_dim.")
ret = self.copy()
for _ in range(1, n):
ret = ret.compose(self)
return ret |
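Editorial note: the `power` method above is plain repeated composition. The same loop can be sketched for ordinary functions:

```python
def compose_power(f, n):
    # Compose f with itself n times (n >= 1), mirroring the loop in `power`.
    if not isinstance(n, int) or n < 1:
        raise ValueError("Can only power with positive integer powers.")
    def powered(x):
        for _ in range(n):
            x = f(x)
        return x
    return powered

double = lambda x: 2 * x
times_eight = compose_power(double, 3)  # double composed with itself 3 times
```

As with the operator version, `compose_power(f, n)` only makes sense when `f`'s output can feed back into its input, which is what the `input_dim == output_dim` check enforces in the record.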
def add_virtual_columns_aitoff(self, alpha, delta, x, y, radians=True):
"""Add aitoff (https://en.wikipedia.org/wiki/Aitoff_projection) projection
:param alpha: azimuth angle
:param delta: polar angle
:param x: output name for x coordinate
:param y: output name for y coordinate
:param radians: input and output in radians (True), or degrees (False)
:return:
"""
transform = "" if radians else "*pi/180."
aitoff_alpha = "__aitoff_alpha_%s_%s" % (alpha, delta)
# sanatize
aitoff_alpha = re.sub("[^a-zA-Z_]", "_", aitoff_alpha)
self.add_virtual_column(aitoff_alpha, "arccos(cos({delta}{transform})*cos({alpha}{transform}/2))".format(**locals()))
self.add_virtual_column(x, "2*cos({delta}{transform})*sin({alpha}{transform}/2)/sinc({aitoff_alpha}/pi)/pi".format(**locals()))
self.add_virtual_column(y, "sin({delta}{transform})/sinc({aitoff_alpha}/pi)/pi".format(**locals())) | Add aitoff (https://en.wikipedia.org/wiki/Aitoff_projection) projection
:param alpha: azimuth angle
:param delta: polar angle
:param x: output name for x coordinate
:param y: output name for y coordinate
:param radians: input and output in radians (True), or degrees (False)
:return: | Below is the instruction that describes the task:
### Input:
Add aitoff (https://en.wikipedia.org/wiki/Aitoff_projection) projection
:param alpha: azimuth angle
:param delta: polar angle
:param x: output name for x coordinate
:param y: output name for y coordinate
:param radians: input and output in radians (True), or degrees (False)
:return:
### Response:
def add_virtual_columns_aitoff(self, alpha, delta, x, y, radians=True):
"""Add aitoff (https://en.wikipedia.org/wiki/Aitoff_projection) projection
:param alpha: azimuth angle
:param delta: polar angle
:param x: output name for x coordinate
:param y: output name for y coordinate
:param radians: input and output in radians (True), or degrees (False)
:return:
"""
transform = "" if radians else "*pi/180."
aitoff_alpha = "__aitoff_alpha_%s_%s" % (alpha, delta)
# sanitize
aitoff_alpha = re.sub("[^a-zA-Z_]", "_", aitoff_alpha)
self.add_virtual_column(aitoff_alpha, "arccos(cos({delta}{transform})*cos({alpha}{transform}/2))".format(**locals()))
self.add_virtual_column(x, "2*cos({delta}{transform})*sin({alpha}{transform}/2)/sinc({aitoff_alpha}/pi)/pi".format(**locals()))
self.add_virtual_column(y, "sin({delta}{transform})/sinc({aitoff_alpha}/pi)/pi".format(**locals())) |
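Editorial note: a scalar sketch of the same Aitoff formulas using the standard `math` module. `numpy.sinc` is normalized (`sinc(t) = sin(pi*t)/(pi*t)`), so the `sinc(a/pi)` in the expressions above reduces to `sin(a)/a`, which the sketch computes directly:

```python
import math

def aitoff(alpha, delta):
    # alpha, delta in radians; returns the pi-normalized (x, y) as above.
    a = math.acos(math.cos(delta) * math.cos(alpha / 2.0))
    inv_sinc = 1.0 if a == 0.0 else a / math.sin(a)  # 1/sinc(a/pi) in numpy terms
    x = 2.0 * math.cos(delta) * math.sin(alpha / 2.0) * inv_sinc / math.pi
    y = math.sin(delta) * inv_sinc / math.pi
    return x, y

center = aitoff(0.0, 0.0)       # map center
pole = aitoff(0.0, math.pi / 2)  # north pole
```

The `a == 0` guard handles the removable singularity at the map center, where the virtual-column version relies on `sinc(0) == 1`.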
def scaffold():
"""Start a new site."""
click.echo("A whole new site? Awesome.")
title = click.prompt("What's the title?")
    url = click.prompt("Great. What's the url? http://")
# Make sure that title doesn't exist.
click.echo("Got it. Creating %s..." % url) | Start a new site. | Below is the instruction that describes the task:
### Input:
Start a new site.
### Response:
def scaffold():
"""Start a new site."""
click.echo("A whole new site? Awesome.")
title = click.prompt("What's the title?")
    url = click.prompt("Great. What's the url? http://")
# Make sure that title doesn't exist.
click.echo("Got it. Creating %s..." % url) |
def create_fw(self, proj_name, pol_id, fw_id, fw_name, fw_type, rtr_id):
"""Fills up the local attributes when FW is created. """
self.tenant_name = proj_name
self.fw_id = fw_id
self.fw_name = fw_name
self.fw_created = True
self.active_pol_id = pol_id
self.fw_type = fw_type
self.router_id = rtr_id | Fills up the local attributes when FW is created. | Below is the instruction that describes the task:
### Input:
Fills up the local attributes when FW is created.
### Response:
def create_fw(self, proj_name, pol_id, fw_id, fw_name, fw_type, rtr_id):
"""Fills up the local attributes when FW is created. """
self.tenant_name = proj_name
self.fw_id = fw_id
self.fw_name = fw_name
self.fw_created = True
self.active_pol_id = pol_id
self.fw_type = fw_type
self.router_id = rtr_id |
def from_url(url):
"""
Given a URL, return a package
:param url:
:return:
"""
package_data = HTTPClient().http_request(url=url, decode=None)
return Package(raw_data=package_data) | Given a URL, return a package
:param url:
:return: | Below is the instruction that describes the task:
### Input:
Given a URL, return a package
:param url:
:return:
### Response:
def from_url(url):
"""
Given a URL, return a package
:param url:
:return:
"""
package_data = HTTPClient().http_request(url=url, decode=None)
return Package(raw_data=package_data) |
def new(params, event_shape=(), validate_args=False, name=None):
"""Create the distribution instance from a `params` vector."""
with tf.compat.v1.name_scope(name, 'IndependentPoisson',
[params, event_shape]):
params = tf.convert_to_tensor(value=params, name='params')
event_shape = dist_util.expand_to_vector(
tf.convert_to_tensor(
value=event_shape, name='event_shape', dtype_hint=tf.int32),
tensor_name='event_shape')
output_shape = tf.concat([
tf.shape(input=params)[:-1],
event_shape,
],
axis=0)
return tfd.Independent(
tfd.Poisson(
log_rate=tf.reshape(params, output_shape),
validate_args=validate_args),
reinterpreted_batch_ndims=tf.size(input=event_shape),
validate_args=validate_args) | Create the distribution instance from a `params` vector. | Below is the instruction that describes the task:
### Input:
Create the distribution instance from a `params` vector.
### Response:
def new(params, event_shape=(), validate_args=False, name=None):
"""Create the distribution instance from a `params` vector."""
with tf.compat.v1.name_scope(name, 'IndependentPoisson',
[params, event_shape]):
params = tf.convert_to_tensor(value=params, name='params')
event_shape = dist_util.expand_to_vector(
tf.convert_to_tensor(
value=event_shape, name='event_shape', dtype_hint=tf.int32),
tensor_name='event_shape')
output_shape = tf.concat([
tf.shape(input=params)[:-1],
event_shape,
],
axis=0)
return tfd.Independent(
tfd.Poisson(
log_rate=tf.reshape(params, output_shape),
validate_args=validate_args),
reinterpreted_batch_ndims=tf.size(input=event_shape),
validate_args=validate_args) |
def compare_version(value):
""" Determines if the provided version value compares with program version.
`value`
Version comparison string (e.g. ==1.0, <=1.0, >1.1)
Supported operators:
<, <=, ==, >, >=
"""
# extract parts from value
import re
res = re.match(r'(<|<=|==|>|>=)(\d{1,2}\.\d{1,2}(\.\d{1,2})?)$',
str(value).strip())
if not res:
return False
operator, value, _ = res.groups()
# break into pieces
value = tuple(int(x) for x in str(value).split('.'))
if len(value) < 3:
value += (0,)
version = __version_info__
if operator in ('<', '<='):
if version < value:
return True
if operator != '<=':
return False
elif operator in ('>=', '>'):
if version > value:
return True
if operator != '>=':
return False
return value == version | Determines if the provided version value compares with program version.
`value`
Version comparison string (e.g. ==1.0, <=1.0, >1.1)
Supported operators:
<, <=, ==, >, >= | Below is the instruction that describes the task:
### Input:
Determines if the provided version value compares with program version.
`value`
Version comparison string (e.g. ==1.0, <=1.0, >1.1)
Supported operators:
<, <=, ==, >, >=
### Response:
def compare_version(value):
""" Determines if the provided version value compares with program version.
`value`
Version comparison string (e.g. ==1.0, <=1.0, >1.1)
Supported operators:
<, <=, ==, >, >=
"""
# extract parts from value
import re
res = re.match(r'(<|<=|==|>|>=)(\d{1,2}\.\d{1,2}(\.\d{1,2})?)$',
str(value).strip())
if not res:
return False
operator, value, _ = res.groups()
# break into pieces
value = tuple(int(x) for x in str(value).split('.'))
if len(value) < 3:
value += (0,)
version = __version_info__
if operator in ('<', '<='):
if version < value:
return True
if operator != '<=':
return False
elif operator in ('>=', '>'):
if version > value:
return True
if operator != '>=':
return False
return value == version |
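Editorial note: a self-contained sketch of the comparison logic above, with the module-level `__version_info__` replaced by an explicit `version` parameter (an assumption made for illustration):

```python
import re

def compare_version(value, version=(1, 2, 0)):
    # `version` stands in for the module-level __version_info__ tuple.
    res = re.match(r'(<|<=|==|>|>=)(\d{1,2}\.\d{1,2}(\.\d{1,2})?)$',
                   str(value).strip())
    if not res:
        return False
    operator, target, _ = res.groups()
    target = tuple(int(x) for x in target.split('.'))
    if len(target) < 3:
        target += (0,)  # pad "1.2" to (1, 2, 0) for tuple comparison
    if operator in ('<', '<='):
        if version < target:
            return True
        if operator != '<=':
            return False
    elif operator in ('>=', '>'):
        if version > target:
            return True
        if operator != '>=':
            return False
    return version == target
```

The fall-through `return version == target` is what makes `<=`, `>=`, and `==` all accept an exact match, which is the subtle part of the original control flow.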
def read_metadata(self, f, objects, previous_segment=None):
"""Read segment metadata section and update object information"""
if not self.toc["kTocMetaData"]:
try:
self.ordered_objects = previous_segment.ordered_objects
except AttributeError:
raise ValueError(
"kTocMetaData is not set for segment but "
"there is no previous segment")
self.calculate_chunks()
return
if not self.toc["kTocNewObjList"]:
# In this case, there can be a list of new objects that
# are appended, or previous objects can also be repeated
# if their properties change
self.ordered_objects = [
copy(o) for o in previous_segment.ordered_objects]
log.debug("Reading metadata at %d", f.tell())
# First four bytes have number of objects in metadata
num_objects = types.Int32.read(f, self.endianness)
for obj in range(num_objects):
# Read the object path
object_path = types.String.read(f, self.endianness)
# If this is a new segment for an existing object,
# reuse the existing object, otherwise,
# create a new object and add it to the object dictionary
if object_path in objects:
obj = objects[object_path]
else:
obj = TdmsObject(object_path, self.tdms_file)
objects[object_path] = obj
# Add this segment object to the list of segment objects,
# re-using any properties from previous segments.
updating_existing = False
if not self.toc["kTocNewObjList"]:
# Search for the same object from the previous segment
# object list.
obj_index = [
i for i, o in enumerate(self.ordered_objects)
if o.tdms_object is obj]
if len(obj_index) > 0:
updating_existing = True
log.debug("Updating object in segment list")
obj_index = obj_index[0]
segment_obj = self.ordered_objects[obj_index]
if not updating_existing:
if obj._previous_segment_object is not None:
log.debug("Copying previous segment object")
segment_obj = copy(obj._previous_segment_object)
else:
log.debug("Creating a new segment object")
segment_obj = _TdmsSegmentObject(obj, self.endianness)
self.ordered_objects.append(segment_obj)
# Read the metadata for this object, updating any
# data structure information and properties.
segment_obj._read_metadata(f)
obj._previous_segment_object = segment_obj
self.calculate_chunks() | Read segment metadata section and update object information | Below is the instruction that describes the task:
### Input:
Read segment metadata section and update object information
### Response:
def read_metadata(self, f, objects, previous_segment=None):
"""Read segment metadata section and update object information"""
if not self.toc["kTocMetaData"]:
try:
self.ordered_objects = previous_segment.ordered_objects
except AttributeError:
raise ValueError(
"kTocMetaData is not set for segment but "
"there is no previous segment")
self.calculate_chunks()
return
if not self.toc["kTocNewObjList"]:
# In this case, there can be a list of new objects that
# are appended, or previous objects can also be repeated
# if their properties change
self.ordered_objects = [
copy(o) for o in previous_segment.ordered_objects]
log.debug("Reading metadata at %d", f.tell())
# First four bytes have number of objects in metadata
num_objects = types.Int32.read(f, self.endianness)
for obj in range(num_objects):
# Read the object path
object_path = types.String.read(f, self.endianness)
# If this is a new segment for an existing object,
# reuse the existing object, otherwise,
# create a new object and add it to the object dictionary
if object_path in objects:
obj = objects[object_path]
else:
obj = TdmsObject(object_path, self.tdms_file)
objects[object_path] = obj
# Add this segment object to the list of segment objects,
# re-using any properties from previous segments.
updating_existing = False
if not self.toc["kTocNewObjList"]:
# Search for the same object from the previous segment
# object list.
obj_index = [
i for i, o in enumerate(self.ordered_objects)
if o.tdms_object is obj]
if len(obj_index) > 0:
updating_existing = True
log.debug("Updating object in segment list")
obj_index = obj_index[0]
segment_obj = self.ordered_objects[obj_index]
if not updating_existing:
if obj._previous_segment_object is not None:
log.debug("Copying previous segment object")
segment_obj = copy(obj._previous_segment_object)
else:
log.debug("Creating a new segment object")
segment_obj = _TdmsSegmentObject(obj, self.endianness)
self.ordered_objects.append(segment_obj)
# Read the metadata for this object, updating any
# data structure information and properties.
segment_obj._read_metadata(f)
obj._previous_segment_object = segment_obj
self.calculate_chunks() |
def executemany(self, sql, *params):
"""Prepare a database query or command and then execute it against
all parameter sequences found in the sequence seq_of_params.
:param sql: the SQL statement to execute with optional ? parameters
:param params: sequence parameters for the markers in the SQL.
"""
fut = self._run_operation(self._impl.executemany, sql, *params)
return fut | Prepare a database query or command and then execute it against
all parameter sequences found in the sequence seq_of_params.
:param sql: the SQL statement to execute with optional ? parameters
:param params: sequence parameters for the markers in the SQL. | Below is the instruction that describes the task:
### Input:
Prepare a database query or command and then execute it against
all parameter sequences found in the sequence seq_of_params.
:param sql: the SQL statement to execute with optional ? parameters
:param params: sequence parameters for the markers in the SQL.
### Response:
def executemany(self, sql, *params):
"""Prepare a database query or command and then execute it against
all parameter sequences found in the sequence seq_of_params.
:param sql: the SQL statement to execute with optional ? parameters
:param params: sequence parameters for the markers in the SQL.
"""
fut = self._run_operation(self._impl.executemany, sql, *params)
return fut |
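Editorial note: the operation wrapped above is DB-API's `executemany` — one statement prepared once, then executed per parameter tuple. A plain `sqlite3` illustration of the same call shape:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT, age INTEGER)')
rows = [('ada', 36), ('grace', 45), ('alan', 41)]
# The INSERT is prepared once and executed for each tuple in `rows`.
conn.executemany('INSERT INTO users (name, age) VALUES (?, ?)', rows)
count = conn.execute('SELECT COUNT(*) FROM users').fetchone()[0]
conn.close()
```

The async wrapper in the record simply runs this same `executemany` off the event loop and returns a future for its completion.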
def generate_confusables():
"""Generates the confusables JSON data file from the unicode specification.
:return: True for success, raises otherwise.
:rtype: bool
"""
url = 'ftp://ftp.unicode.org/Public/security/latest/confusables.txt'
file = get(url)
confusables_matrix = defaultdict(list)
match = re.compile(r'[0-9A-F ]+\s+;\s*[0-9A-F ]+\s+;\s*\w+\s*#'
r'\*?\s*\( (.+) → (.+) \) (.+) → (.+)\t#',
re.UNICODE)
for line in file:
p = re.findall(match, line)
if p:
char1, char2, name1, name2 = p[0]
confusables_matrix[char1].append({
'c': char2,
'n': name2,
})
confusables_matrix[char2].append({
'c': char1,
'n': name1,
})
dump('confusables.json', dict(confusables_matrix)) | Generates the confusables JSON data file from the unicode specification.
:return: True for success, raises otherwise.
:rtype: bool | Below is the instruction that describes the task:
### Input:
Generates the confusables JSON data file from the unicode specification.
:return: True for success, raises otherwise.
:rtype: bool
### Response:
def generate_confusables():
"""Generates the confusables JSON data file from the unicode specification.
:return: True for success, raises otherwise.
:rtype: bool
"""
url = 'ftp://ftp.unicode.org/Public/security/latest/confusables.txt'
file = get(url)
confusables_matrix = defaultdict(list)
match = re.compile(r'[0-9A-F ]+\s+;\s*[0-9A-F ]+\s+;\s*\w+\s*#'
r'\*?\s*\( (.+) → (.+) \) (.+) → (.+)\t#',
re.UNICODE)
for line in file:
p = re.findall(match, line)
if p:
char1, char2, name1, name2 = p[0]
confusables_matrix[char1].append({
'c': char2,
'n': name2,
})
confusables_matrix[char2].append({
'c': char1,
'n': name1,
})
dump('confusables.json', dict(confusables_matrix)) |
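Editorial note: a reduced sketch of the comment-parsing step above. The sample line imitates the confusables.txt layout and is illustrative, not quoted from the real file:

```python
import re
from collections import defaultdict

# Simplified form of the pattern above: capture the "( a -> b ) NAME -> NAME" comment.
pattern = re.compile(r'\( (.+) → (.+) \) (.+) → (.+)\t#', re.UNICODE)
line = ('0441 ;\t0063 ;\tMA\t#* ( \u0441 → c ) '
        'CYRILLIC SMALL LETTER ES → LATIN SMALL LETTER C\t#')
matrix = defaultdict(list)
for char1, char2, name1, name2 in pattern.findall(line):
    # Record the confusion in both directions, as generate_confusables does.
    matrix[char1].append({'c': char2, 'n': name2})
    matrix[char2].append({'c': char1, 'n': name1})
```

Storing both directions is what lets a lookup on either the Latin `c` or the Cyrillic `\u0441` find its lookalike.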
def unlock_kinetis_abort_clear():
"""Returns the abort register clear code.
Returns:
The abort register clear code.
"""
flags = registers.AbortRegisterFlags()
flags.STKCMPCLR = 1
flags.STKERRCLR = 1
flags.WDERRCLR = 1
flags.ORUNERRCLR = 1
return flags.value | Returns the abort register clear code.
Returns:
The abort register clear code. | Below is the instruction that describes the task:
### Input:
Returns the abort register clear code.
Returns:
The abort register clear code.
### Response:
def unlock_kinetis_abort_clear():
"""Returns the abort register clear code.
Returns:
The abort register clear code.
"""
flags = registers.AbortRegisterFlags()
flags.STKCMPCLR = 1
flags.STKERRCLR = 1
flags.WDERRCLR = 1
flags.ORUNERRCLR = 1
return flags.value |
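Editorial note: the flags object above packs four clear bits into one register value. Assuming the ARM ADIv5 ABORT register layout (DAPABORT = bit 0, STKCMPCLR = bit 1, STKERRCLR = bit 2, WDERRCLR = bit 3, ORUNERRCLR = bit 4 — stated here as an assumption; check the register definition for the target part), the composed value can be sketched as:

```python
# Hypothetical bit positions following the ADIv5 ABORT register layout.
STKCMPCLR = 1 << 1
STKERRCLR = 1 << 2
WDERRCLR = 1 << 3
ORUNERRCLR = 1 << 4

# OR the four clear bits together; bit 0 (DAPABORT) is left unset.
clear_code = STKCMPCLR | STKERRCLR | WDERRCLR | ORUNERRCLR
```

Under that layout the result is `0x1E`, i.e. all sticky-error clear bits set without aborting the current transfer.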
def parse_coaches(self):
"""
Parse the home and away coaches
:returns: ``self`` on success, ``None`` otherwise
"""
lx_doc = self.html_doc()
tr = lx_doc.xpath('//tr[@id="HeadCoaches"]')[0]
for i, td in enumerate(tr):
txt = td.xpath('.//text()')
txt = ex_junk(txt, ['\n','\r'])
team = 'away' if i == 0 else 'home'
self.coaches[team] = txt[0]
return self if self.coaches else None | Parse the home and away coaches
:returns: ``self`` on success, ``None`` otherwise | Below is the instruction that describes the task:
### Input:
Parse the home and away coaches
:returns: ``self`` on success, ``None`` otherwise
### Response:
def parse_coaches(self):
"""
Parse the home and away coaches
:returns: ``self`` on success, ``None`` otherwise
"""
lx_doc = self.html_doc()
tr = lx_doc.xpath('//tr[@id="HeadCoaches"]')[0]
for i, td in enumerate(tr):
txt = td.xpath('.//text()')
txt = ex_junk(txt, ['\n','\r'])
team = 'away' if i == 0 else 'home'
self.coaches[team] = txt[0]
return self if self.coaches else None |
def GetNumberOfRows(self):
"""Retrieves the number of rows of the table.
Returns:
int: number of rows.
Raises:
IOError: if the file-like object has not been opened.
OSError: if the file-like object has not been opened.
"""
if not self._database_object:
raise IOError('Not opened.')
if self._number_of_rows is None:
self._number_of_rows = self._database_object.GetNumberOfRows(
self._table_name)
return self._number_of_rows | Retrieves the number of rows of the table.
Returns:
int: number of rows.
Raises:
IOError: if the file-like object has not been opened.
OSError: if the file-like object has not been opened. | Below is the instruction that describes the task:
### Input:
Retrieves the number of rows of the table.
Returns:
int: number of rows.
Raises:
IOError: if the file-like object has not been opened.
OSError: if the file-like object has not been opened.
### Response:
def GetNumberOfRows(self):
"""Retrieves the number of rows of the table.
Returns:
int: number of rows.
Raises:
IOError: if the file-like object has not been opened.
OSError: if the file-like object has not been opened.
"""
if not self._database_object:
raise IOError('Not opened.')
if self._number_of_rows is None:
self._number_of_rows = self._database_object.GetNumberOfRows(
self._table_name)
return self._number_of_rows |
def assert_matches(self, *args, **kwargs):
"""Assert this matches a :ref:`message spec <message spec>`.
Returns self.
"""
matcher = make_matcher(*args, **kwargs)
if not matcher.matches(self):
raise AssertionError('%r does not match %r' % (self, matcher))
return self | Assert this matches a :ref:`message spec <message spec>`.
Returns self. | Below is the instruction that describes the task:
### Input:
Assert this matches a :ref:`message spec <message spec>`.
Returns self.
### Response:
def assert_matches(self, *args, **kwargs):
"""Assert this matches a :ref:`message spec <message spec>`.
Returns self.
"""
matcher = make_matcher(*args, **kwargs)
if not matcher.matches(self):
raise AssertionError('%r does not match %r' % (self, matcher))
return self |
def listDatasetArray(self, **kwargs):
"""
API to list datasets in DBS.
:param dataset: list of datasets [dataset1,dataset2,..,dataset n] (Required if dataset_id is not presented), Max length 1000.
:type dataset: list
:param dataset_id: list of dataset_ids that are the primary keys of datasets table: [dataset_id1,dataset_id2,..,dataset_idn] (Required if dataset is not presented), Max length 1000.
:type dataset: list
:param dataset_access_type: List only datasets with that dataset access type (Optional)
:type dataset_access_type: str
:param detail: brief list or detailed list 1/0
:type detail: bool
:returns: List of dictionaries containing the following keys (dataset). If the detail option is used. The dictionary contains the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type)
:rtype: list of dicts
"""
validParameters = ['dataset', 'dataset_access_type', 'detail', 'dataset_id']
requiredParameters = {'multiple': ['dataset', 'dataset_id']}
checkInputParameter(method="listDatasetArray", parameters=kwargs.keys(), validParameters=validParameters,
requiredParameters=requiredParameters)
#set defaults
if 'detail' not in kwargs.keys():
kwargs['detail'] = False
return self.__callServer("datasetlist", data=kwargs, callmethod='POST') | API to list datasets in DBS.
:param dataset: list of datasets [dataset1,dataset2,..,dataset n] (Required if dataset_id is not presented), Max length 1000.
:type dataset: list
:param dataset_id: list of dataset_ids that are the primary keys of datasets table: [dataset_id1,dataset_id2,..,dataset_idn] (Required if dataset is not presented), Max length 1000.
:type dataset: list
:param dataset_access_type: List only datasets with that dataset access type (Optional)
:type dataset_access_type: str
:param detail: brief list or detailed list 1/0
:type detail: bool
:returns: List of dictionaries containing the following keys (dataset). If the detail option is used. The dictionary contains the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type)
:rtype: list of dicts | Below is the instruction that describes the task:
### Input:
API to list datasets in DBS.
:param dataset: list of datasets [dataset1,dataset2,..,dataset n] (Required if dataset_id is not presented), Max length 1000.
:type dataset: list
:param dataset_id: list of dataset_ids that are the primary keys of datasets table: [dataset_id1,dataset_id2,..,dataset_idn] (Required if dataset is not presented), Max length 1000.
:type dataset: list
:param dataset_access_type: List only datasets with that dataset access type (Optional)
:type dataset_access_type: str
:param detail: brief list or detailed list 1/0
:type detail: bool
:returns: List of dictionaries containing the following keys (dataset). If the detail option is used. The dictionary contains the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type)
:rtype: list of dicts
### Response:
def listDatasetArray(self, **kwargs):
"""
API to list datasets in DBS.
:param dataset: list of datasets [dataset1,dataset2,..,dataset n] (Required if dataset_id is not presented), Max length 1000.
:type dataset: list
:param dataset_id: list of dataset_ids that are the primary keys of datasets table: [dataset_id1,dataset_id2,..,dataset_idn] (Required if dataset is not presented), Max length 1000.
:type dataset_id: list
:param dataset_access_type: List only datasets with that dataset access type (Optional)
:type dataset_access_type: str
:param detail: brief list or detailed list 1/0
:type detail: bool
:returns: List of dictionaries containing the following keys (dataset). If the detail option is used, the dictionary contains the following keys (primary_ds_name, physics_group_name, acquisition_era_name, create_by, dataset_access_type, data_tier_name, last_modified_by, creation_date, processing_version, processed_ds_name, xtcrosssection, last_modification_date, dataset_id, dataset, prep_id, primary_ds_type)
:rtype: list of dicts
"""
validParameters = ['dataset', 'dataset_access_type', 'detail', 'dataset_id']
requiredParameters = {'multiple': ['dataset', 'dataset_id']}
checkInputParameter(method="listDatasetArray", parameters=kwargs.keys(), validParameters=validParameters,
requiredParameters=requiredParameters)
#set defaults
if 'detail' not in kwargs.keys():
kwargs['detail'] = False
return self.__callServer("datasetlist", data=kwargs, callmethod='POST') |
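The record above relies on a `checkInputParameter` helper whose implementation is not shown. A minimal sketch of the valid/required-"multiple" contract it implies (the function name here is a hypothetical stand-in, not the real DBS helper):

```python
def check_input_parameters(parameters, valid, required_multiple):
    """Reject unknown keys and require at least one key from a group.

    Hypothetical stand-in for the DBS checkInputParameter helper; the real
    implementation is not shown in this record.
    """
    unknown = set(parameters) - set(valid)
    if unknown:
        raise ValueError("invalid parameter(s): %s" % ", ".join(sorted(unknown)))
    if not any(name in parameters for name in required_multiple):
        raise ValueError("one of (%s) is required" % ", ".join(required_multiple))

# 'dataset' alone satisfies the dataset/dataset_id 'multiple' requirement
check_input_parameters(
    {'dataset': ['/a/b/c'], 'detail': True},
    valid=['dataset', 'dataset_access_type', 'detail', 'dataset_id'],
    required_multiple=['dataset', 'dataset_id'])
```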
def validate_param_name(name, param_type):
"""Validate that the name follows posix conventions for env variables."""
# http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_235
#
# 3.235 Name
# In the shell command language, a word consisting solely of underscores,
# digits, and alphabetics from the portable character set.
if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', name):
raise ValueError('Invalid %s: %s' % (param_type, name)) | Validate that the name follows posix conventions for env variables. | Below is the instruction that describes the task:
### Input:
Validate that the name follows posix conventions for env variables.
### Response:
def validate_param_name(name, param_type):
"""Validate that the name follows posix conventions for env variables."""
# http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_235
#
# 3.235 Name
# In the shell command language, a word consisting solely of underscores,
# digits, and alphabetics from the portable character set.
if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', name):
raise ValueError('Invalid %s: %s' % (param_type, name)) |
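The POSIX-name rule above is easy to reuse as a predicate instead of a raising validator. A small sketch (the `is_posix_name` name is ours, not part of the original module):

```python
import re

# Same portable-character-set rule as validate_param_name above:
# letters, digits, and underscores, not starting with a digit.
_NAME_RE = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$')

def is_posix_name(name):
    """Return True when `name` is a valid POSIX environment-variable name."""
    return _NAME_RE.match(name) is not None
```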
def _cursor_down(self, count=1):
"""
Moves cursor down count lines in same column. Cursor stops at bottom
margin.
"""
self.y = min(self.size[0] - 1, self.y + count) | Moves cursor down count lines in same column. Cursor stops at bottom
margin. | Below is the instruction that describes the task:
### Input:
Moves cursor down count lines in same column. Cursor stops at bottom
margin.
### Response:
def _cursor_down(self, count=1):
"""
Moves cursor down count lines in same column. Cursor stops at bottom
margin.
"""
self.y = min(self.size[0] - 1, self.y + count) |
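The bottom-margin clamp can be sketched as a pure function (hypothetical name; `self.size[0]` becomes an explicit `rows` argument):

```python
def cursor_down(y, count, rows):
    """Clamp the new row to the bottom margin (rows - 1), exactly as
    _cursor_down does with self.size[0]."""
    return min(rows - 1, y + count)
```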
def save_file(result, filename, encoding='utf8', headers=None,
convertors=None, visitor=None, writer=None, **kwargs):
"""
save query result to a csv file
visitor can be used to convert values; all values should be converted to strings
visitor function should be defined as:
def visitor(keys, values, encoding):
#return new values []
convertors is used to convert a single column value, for example:
convertors = {'field1':convert_func1, 'fields2':convert_func2}
def convert_func1(value, data):
value is the value of field1
data is the record
if visitor and convertors are both provided, only visitor is used.
headers is used to map a column name to a provided title
"""
import os
from uliweb.utils.common import simple_value
convertors = convertors or {}
headers = headers or []
writer_class = Writer
if isinstance(filename, (str, unicode)):
ext = os.path.splitext(filename)[1]
if ext == '.csv':
writer_class = Writer
elif ext == '.dict':
writer_class = DictWriter
elif ext == '.xlsx':
writer_class = XlsxWriter
def convert(k, v, data):
f = convertors.get(k)
if f:
v = f(v, data)
return v
if isinstance(result, (str, unicode)):
result = text(safe_unicode(result))
if isinstance(result, (Select, TextClause)):
result = do_(result)
_header = []
for k in result.keys():
flag = False
for x in headers:
if x['name'] == k:
_header.append(x)
flag = True
break
if not flag:
_header.append({'name':k, 'title':k})
def _data():
for row in result:
if visitor and callable(visitor):
_row = visitor(result.keys(), row.values(), encoding)
else:
_row = [convert(k, v, row) for k, v in zip(result.keys(), row.values())]
yield _row
writer = writer_class(filename, header=_header, data=_data(), **kwargs)
writer.save() | save query result to a csv file
visitor can be used to convert values; all values should be converted to strings
visitor function should be defined as:
def visitor(keys, values, encoding):
#return new values []
convertors is used to convert a single column value, for example:
convertors = {'field1':convert_func1, 'fields2':convert_func2}
def convert_func1(value, data):
value is the value of field1
data is the record
if visitor and convertors are both provided, only visitor is used.
headers is used to map a column name to a provided title | Below is the instruction that describes the task:
### Input:
save query result to a csv file
visitor can be used to convert values; all values should be converted to strings
visitor function should be defined as:
def visitor(keys, values, encoding):
#return new values []
convertors is used to convert a single column value, for example:
convertors = {'field1':convert_func1, 'fields2':convert_func2}
def convert_func1(value, data):
value is the value of field1
data is the record
if visitor and convertors are both provided, only visitor is used.
headers is used to map a column name to a provided title
### Response:
def save_file(result, filename, encoding='utf8', headers=None,
convertors=None, visitor=None, writer=None, **kwargs):
"""
save query result to a csv file
visitor can be used to convert values; all values should be converted to strings
visitor function should be defined as:
def visitor(keys, values, encoding):
#return new values []
convertors is used to convert a single column value, for example:
convertors = {'field1':convert_func1, 'fields2':convert_func2}
def convert_func1(value, data):
value is the value of field1
data is the record
if visitor and convertors are both provided, only visitor is used.
headers is used to map a column name to a provided title
"""
import os
from uliweb.utils.common import simple_value
convertors = convertors or {}
headers = headers or []
writer_class = Writer
if isinstance(filename, (str, unicode)):
ext = os.path.splitext(filename)[1]
if ext == '.csv':
writer_class = Writer
elif ext == '.dict':
writer_class = DictWriter
elif ext == '.xlsx':
writer_class = XlsxWriter
def convert(k, v, data):
f = convertors.get(k)
if f:
v = f(v, data)
return v
if isinstance(result, (str, unicode)):
result = text(safe_unicode(result))
if isinstance(result, (Select, TextClause)):
result = do_(result)
_header = []
for k in result.keys():
flag = False
for x in headers:
if x['name'] == k:
_header.append(x)
flag = True
break
if not flag:
_header.append({'name':k, 'title':k})
def _data():
for row in result:
if visitor and callable(visitor):
_row = visitor(result.keys(), row.values(), encoding)
else:
_row = [convert(k, v, row) for k, v in zip(result.keys(), row.values())]
yield _row
writer = writer_class(filename, header=_header, data=_data(), **kwargs)
writer.save() |
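The per-column `convert()`/`convertors` dispatch inside `save_file` can be sketched standalone (hypothetical function name; a record dict stands in for the row object so each convert function still sees the whole record as context):

```python
def apply_convertors(keys, values, convertors):
    """Per-column conversion as in save_file's convert(): each convert
    function receives the column value plus the whole record."""
    record = dict(zip(keys, values))
    converted = []
    for key, value in zip(keys, values):
        func = convertors.get(key)
        converted.append(func(value, record) if func else value)
    return converted
```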
def text(el, strip=True):
"""
Return the text of a ``BeautifulSoup`` element
"""
if not el:
return ""
text = el.text
if strip:
text = text.strip()
return text | Return the text of a ``BeautifulSoup`` element | Below is the instruction that describes the task:
### Input:
Return the text of a ``BeautifulSoup`` element
### Response:
def text(el, strip=True):
"""
Return the text of a ``BeautifulSoup`` element
"""
if not el:
return ""
text = el.text
if strip:
text = text.strip()
return text |
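The helper above only touches `el.text`, so its contract can be exercised without BeautifulSoup. A sketch with a stub element (both names here are ours, not part of the original module):

```python
class StubElement:
    """Minimal stand-in for a BeautifulSoup element; only .text is used."""
    def __init__(self, text):
        self.text = text

def element_text(el, strip=True):
    """Same contract as text() above: empty string for a falsy element,
    optionally stripped text otherwise."""
    if not el:
        return ""
    value = el.text
    return value.strip() if strip else value
```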
def _add_study_provenance(
self,
phenotyping_center,
colony,
project_fullname,
pipeline_name,
pipeline_stable_id,
procedure_stable_id,
procedure_name,
parameter_stable_id,
parameter_name,
statistical_method,
resource_name,
row_num
):
"""
:param phenotyping_center: str, from self.files['all']
:param colony: str, from self.files['all']
:param project_fullname: str, from self.files['all']
:param pipeline_name: str, from self.files['all']
:param pipeline_stable_id: str, from self.files['all']
:param procedure_stable_id: str, from self.files['all']
:param procedure_name: str, from self.files['all']
:param parameter_stable_id: str, from self.files['all']
:param parameter_name: str, from self.files['all']
:param statistical_method: str, from self.files['all']
:param resource_name: str, from self.files['all']
:return: study bnode
"""
provenance_model = Provenance(self.graph)
model = Model(self.graph)
# Add provenance
# A study is a blank node equal to its parts
study_bnode = self.make_id("{0}{1}{2}{3}{4}{5}{6}{7}".format(
phenotyping_center, colony, project_fullname, pipeline_stable_id,
procedure_stable_id, parameter_stable_id, statistical_method,
resource_name), '_')
model.addIndividualToGraph(
study_bnode, None, self.globaltt['study'])
# List of nodes linked to study with has_part property
study_parts = []
# Add study parts
model.addIndividualToGraph(self.resolve(procedure_stable_id), procedure_name)
study_parts.append(self.resolve(procedure_stable_id))
study_parts.append(self.resolve(statistical_method))
provenance_model.add_study_parts(study_bnode, study_parts)
# Add parameter/measure statement: study measures parameter
parameter_label = "{0} ({1})".format(parameter_name, procedure_name)
logging.info("Adding Provenance")
model.addIndividualToGraph(
self.resolve(parameter_stable_id), parameter_label)
provenance_model.add_study_measure(
study_bnode, self.resolve(parameter_stable_id))
# Add Colony
colony_bnode = self.make_id("{0}".format(colony), '_')
model.addIndividualToGraph(colony_bnode, colony)
# Add study agent
model.addIndividualToGraph(
self.resolve(phenotyping_center), phenotyping_center,
self.globaltt['organization'])
# self.graph
model.addTriple(
study_bnode, self.globaltt['has_agent'], self.resolve(phenotyping_center))
# add pipeline and project
model.addIndividualToGraph(
self.resolve(pipeline_stable_id), pipeline_name)
# self.graph
model.addTriple(
study_bnode, self.globaltt['part_of'], self.resolve(pipeline_stable_id))
model.addIndividualToGraph(
self.resolve(project_fullname), project_fullname, self.globaltt['project'])
# self.graph
model.addTriple(
study_bnode, self.globaltt['part_of'], self.resolve(project_fullname))
return study_bnode | :param phenotyping_center: str, from self.files['all']
:param colony: str, from self.files['all']
:param project_fullname: str, from self.files['all']
:param pipeline_name: str, from self.files['all']
:param pipeline_stable_id: str, from self.files['all']
:param procedure_stable_id: str, from self.files['all']
:param procedure_name: str, from self.files['all']
:param parameter_stable_id: str, from self.files['all']
:param parameter_name: str, from self.files['all']
:param statistical_method: str, from self.files['all']
:param resource_name: str, from self.files['all']
:return: study bnode | Below is the instruction that describes the task:
### Input:
:param phenotyping_center: str, from self.files['all']
:param colony: str, from self.files['all']
:param project_fullname: str, from self.files['all']
:param pipeline_name: str, from self.files['all']
:param pipeline_stable_id: str, from self.files['all']
:param procedure_stable_id: str, from self.files['all']
:param procedure_name: str, from self.files['all']
:param parameter_stable_id: str, from self.files['all']
:param parameter_name: str, from self.files['all']
:param statistical_method: str, from self.files['all']
:param resource_name: str, from self.files['all']
:return: study bnode
### Response:
def _add_study_provenance(
self,
phenotyping_center,
colony,
project_fullname,
pipeline_name,
pipeline_stable_id,
procedure_stable_id,
procedure_name,
parameter_stable_id,
parameter_name,
statistical_method,
resource_name,
row_num
):
"""
:param phenotyping_center: str, from self.files['all']
:param colony: str, from self.files['all']
:param project_fullname: str, from self.files['all']
:param pipeline_name: str, from self.files['all']
:param pipeline_stable_id: str, from self.files['all']
:param procedure_stable_id: str, from self.files['all']
:param procedure_name: str, from self.files['all']
:param parameter_stable_id: str, from self.files['all']
:param parameter_name: str, from self.files['all']
:param statistical_method: str, from self.files['all']
:param resource_name: str, from self.files['all']
:return: study bnode
"""
provenance_model = Provenance(self.graph)
model = Model(self.graph)
# Add provenance
# A study is a blank node equal to its parts
study_bnode = self.make_id("{0}{1}{2}{3}{4}{5}{6}{7}".format(
phenotyping_center, colony, project_fullname, pipeline_stable_id,
procedure_stable_id, parameter_stable_id, statistical_method,
resource_name), '_')
model.addIndividualToGraph(
study_bnode, None, self.globaltt['study'])
# List of nodes linked to study with has_part property
study_parts = []
# Add study parts
model.addIndividualToGraph(self.resolve(procedure_stable_id), procedure_name)
study_parts.append(self.resolve(procedure_stable_id))
study_parts.append(self.resolve(statistical_method))
provenance_model.add_study_parts(study_bnode, study_parts)
# Add parameter/measure statement: study measures parameter
parameter_label = "{0} ({1})".format(parameter_name, procedure_name)
logging.info("Adding Provenance")
model.addIndividualToGraph(
self.resolve(parameter_stable_id), parameter_label)
provenance_model.add_study_measure(
study_bnode, self.resolve(parameter_stable_id))
# Add Colony
colony_bnode = self.make_id("{0}".format(colony), '_')
model.addIndividualToGraph(colony_bnode, colony)
# Add study agent
model.addIndividualToGraph(
self.resolve(phenotyping_center), phenotyping_center,
self.globaltt['organization'])
# self.graph
model.addTriple(
study_bnode, self.globaltt['has_agent'], self.resolve(phenotyping_center))
# add pipeline and project
model.addIndividualToGraph(
self.resolve(pipeline_stable_id), pipeline_name)
# self.graph
model.addTriple(
study_bnode, self.globaltt['part_of'], self.resolve(pipeline_stable_id))
model.addIndividualToGraph(
self.resolve(project_fullname), project_fullname, self.globaltt['project'])
# self.graph
model.addTriple(
study_bnode, self.globaltt['part_of'], self.resolve(project_fullname))
return study_bnode |
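`make_id` above derives a stable blank-node identifier from concatenated study fields, so identical inputs always map to the same node. A hedged sketch of that pattern (the MD5 choice and function name are assumptions; the real Dipper helper may hash differently):

```python
import hashlib

def make_blank_node_id(*parts, prefix='_'):
    """Stable identifier from concatenated fields: same inputs, same id.
    Hypothetical stand-in for the make_id helper used above."""
    joined = "".join(str(p) for p in parts)
    return prefix + ":" + hashlib.md5(joined.encode("utf-8")).hexdigest()
```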
def parse_args(self, ctx, args):
"""Parse arguments sent to this command.
The code for this method is taken from MultiCommand:
https://github.com/mitsuhiko/click/blob/master/click/core.py
It is Copyright (c) 2014 by Armin Ronacher.
See the license:
https://github.com/mitsuhiko/click/blob/master/LICENSE
"""
if not args and self.no_args_is_help and not ctx.resilient_parsing:
click.echo(ctx.get_help())
ctx.exit()
return super(ActionSubcommand, self).parse_args(ctx, args) | Parse arguments sent to this command.
The code for this method is taken from MultiCommand:
https://github.com/mitsuhiko/click/blob/master/click/core.py
It is Copyright (c) 2014 by Armin Ronacher.
See the license:
https://github.com/mitsuhiko/click/blob/master/LICENSE | Below is the instruction that describes the task:
### Input:
Parse arguments sent to this command.
The code for this method is taken from MultiCommand:
https://github.com/mitsuhiko/click/blob/master/click/core.py
It is Copyright (c) 2014 by Armin Ronacher.
See the license:
https://github.com/mitsuhiko/click/blob/master/LICENSE
### Response:
def parse_args(self, ctx, args):
"""Parse arguments sent to this command.
The code for this method is taken from MultiCommand:
https://github.com/mitsuhiko/click/blob/master/click/core.py
It is Copyright (c) 2014 by Armin Ronacher.
See the license:
https://github.com/mitsuhiko/click/blob/master/LICENSE
"""
if not args and self.no_args_is_help and not ctx.resilient_parsing:
click.echo(ctx.get_help())
ctx.exit()
return super(ActionSubcommand, self).parse_args(ctx, args) |
def list_disks(disk_ids=None, scsi_addresses=None, service_instance=None):
'''
Returns a list of dict representations of the disks in an ESXi host.
The list of disks can be filtered by disk canonical names or
scsi addresses.
disk_ids:
List of disk canonical names to be retrieved. Default is None.
scsi_addresses
List of scsi addresses of disks to be retrieved. Default is None
service_instance
Service instance (vim.ServiceInstance) of the vCenter/ESXi host.
Default is None.
.. code-block:: bash
salt '*' vsphere.list_disks
salt '*' vsphere.list_disks disk_ids='[naa.00, naa.001]'
salt '*' vsphere.list_disks
scsi_addresses='[vmhba0:C0:T0:L0, vmhba1:C0:T0:L0]'
'''
host_ref = _get_proxy_target(service_instance)
hostname = __proxy__['esxi.get_details']()['esxi_host']
log.trace('Retrieving disks of host \'%s\'', hostname)
log.trace('disk ids = %s', disk_ids)
log.trace('scsi_addresses = %s', scsi_addresses)
# Default to getting all disks if no filtering is done
get_all_disks = not (disk_ids or scsi_addresses)
ret_list = []
scsi_address_to_lun = salt.utils.vmware.get_scsi_address_to_lun_map(
host_ref, hostname=hostname)
canonical_name_to_scsi_address = {
lun.canonicalName: scsi_addr
for scsi_addr, lun in six.iteritems(scsi_address_to_lun)}
for d in salt.utils.vmware.get_disks(host_ref, disk_ids, scsi_addresses,
get_all_disks):
ret_list.append({'id': d.canonicalName,
'scsi_address':
canonical_name_to_scsi_address[d.canonicalName]})
return ret_list | Returns a list of dict representations of the disks in an ESXi host.
The list of disks can be filtered by disk canonical names or
scsi addresses.
disk_ids:
List of disk canonical names to be retrieved. Default is None.
scsi_addresses
List of scsi addresses of disks to be retrieved. Default is None
service_instance
Service instance (vim.ServiceInstance) of the vCenter/ESXi host.
Default is None.
.. code-block:: bash
salt '*' vsphere.list_disks
salt '*' vsphere.list_disks disk_ids='[naa.00, naa.001]'
salt '*' vsphere.list_disks
scsi_addresses='[vmhba0:C0:T0:L0, vmhba1:C0:T0:L0]' | Below is the instruction that describes the task:
### Input:
Returns a list of dict representations of the disks in an ESXi host.
The list of disks can be filtered by disk canonical names or
scsi addresses.
disk_ids:
List of disk canonical names to be retrieved. Default is None.
scsi_addresses
List of scsi addresses of disks to be retrieved. Default is None
service_instance
Service instance (vim.ServiceInstance) of the vCenter/ESXi host.
Default is None.
.. code-block:: bash
salt '*' vsphere.list_disks
salt '*' vsphere.list_disks disk_ids='[naa.00, naa.001]'
salt '*' vsphere.list_disks
scsi_addresses='[vmhba0:C0:T0:L0, vmhba1:C0:T0:L0]'
### Response:
def list_disks(disk_ids=None, scsi_addresses=None, service_instance=None):
'''
Returns a list of dict representations of the disks in an ESXi host.
The list of disks can be filtered by disk canonical names or
scsi addresses.
disk_ids:
List of disk canonical names to be retrieved. Default is None.
scsi_addresses
List of scsi addresses of disks to be retrieved. Default is None
service_instance
Service instance (vim.ServiceInstance) of the vCenter/ESXi host.
Default is None.
.. code-block:: bash
salt '*' vsphere.list_disks
salt '*' vsphere.list_disks disk_ids='[naa.00, naa.001]'
salt '*' vsphere.list_disks
scsi_addresses='[vmhba0:C0:T0:L0, vmhba1:C0:T0:L0]'
'''
host_ref = _get_proxy_target(service_instance)
hostname = __proxy__['esxi.get_details']()['esxi_host']
log.trace('Retrieving disks of host \'%s\'', hostname)
log.trace('disk ids = %s', disk_ids)
log.trace('scsi_addresses = %s', scsi_addresses)
# Default to getting all disks if no filtering is done
get_all_disks = not (disk_ids or scsi_addresses)
ret_list = []
scsi_address_to_lun = salt.utils.vmware.get_scsi_address_to_lun_map(
host_ref, hostname=hostname)
canonical_name_to_scsi_address = {
lun.canonicalName: scsi_addr
for scsi_addr, lun in six.iteritems(scsi_address_to_lun)}
for d in salt.utils.vmware.get_disks(host_ref, disk_ids, scsi_addresses,
get_all_disks):
ret_list.append({'id': d.canonicalName,
'scsi_address':
canonical_name_to_scsi_address[d.canonicalName]})
return ret_list |
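The canonical-name lookup built in `list_disks` is a plain dict inversion. A sketch with strings in place of pyVmomi LUN objects (hypothetical function name):

```python
def canonical_name_to_scsi_address(scsi_address_to_name):
    """Invert the SCSI-address -> canonical-name map, mirroring the dict
    comprehension in list_disks above."""
    return {name: addr for addr, name in scsi_address_to_name.items()}
```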
def save_data(trigger_id, data):
"""
call the consumer and handle the data
:param trigger_id:
:param data:
:return:
"""
status = True
# consumer - the service which uses the data
default_provider.load_services()
service = TriggerService.objects.get(id=trigger_id)
service_consumer = default_provider.get_service(str(service.consumer.name.name))
kwargs = {'user': service.user}
if len(data) > 0:
getattr(service_consumer, '__init__')(service.consumer.token, **kwargs)
status = getattr(service_consumer, 'save_data')(service.id, **data)
return status | call the consumer and handle the data
:param trigger_id:
:param data:
:return: | Below is the instruction that describes the task:
### Input:
call the consumer and handle the data
:param trigger_id:
:param data:
:return:
### Response:
def save_data(trigger_id, data):
"""
call the consumer and handle the data
:param trigger_id:
:param data:
:return:
"""
status = True
# consumer - the service which uses the data
default_provider.load_services()
service = TriggerService.objects.get(id=trigger_id)
service_consumer = default_provider.get_service(str(service.consumer.name.name))
kwargs = {'user': service.user}
if len(data) > 0:
getattr(service_consumer, '__init__')(service.consumer.token, **kwargs)
status = getattr(service_consumer, 'save_data')(service.id, **data)
return status |
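The `getattr` dispatch in `save_data` can be sketched with a stub consumer (both names are hypothetical; the real consumer comes from the service registry used above):

```python
class StubConsumer:
    """Hypothetical consumer exposing the save_data hook used above."""
    def save_data(self, trigger_id, **data):
        return ('saved', trigger_id, data)

def forward_to_consumer(consumer, trigger_id, data):
    """Dispatch dynamically via getattr, as save_data() does; empty data
    short-circuits to a successful status."""
    if len(data) > 0:
        return getattr(consumer, 'save_data')(trigger_id, **data)
    return True
```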
def _nvram_file(self):
"""
Path to the nvram file
"""
return os.path.join(self.working_dir, "nvram_{:05d}".format(self.application_id)) | Path to the nvram file | Below is the instruction that describes the task:
### Input:
Path to the nvram file
### Response:
def _nvram_file(self):
"""
Path to the nvram file
"""
return os.path.join(self.working_dir, "nvram_{:05d}".format(self.application_id)) |
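The filename scheme above is just zero-padded formatting. A standalone sketch (hypothetical function name):

```python
import os

def nvram_path(working_dir, application_id):
    """Zero-pad the application id to five digits, as "nvram_{:05d}" does."""
    return os.path.join(working_dir, "nvram_{:05d}".format(application_id))
```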
def set_contents_from_filename(self, filename, headers=None, replace=True,
cb=None, num_cb=10, policy=None, md5=None,
reduced_redundancy=False,
encrypt_key=False):
"""
Store an object in S3 using the name of the Key object as the
key in S3 and the contents of the file named by 'filename'.
See set_contents_from_file method for details about the
parameters.
:type filename: string
:param filename: The name of the file that you want to put onto S3
:type headers: dict
:param headers: Additional headers to pass along with the
request to AWS.
:type replace: bool
:param replace: If True, replaces the contents of the file
if it already exists.
:type cb: function
:param cb: a callback function that will be called to report
progress on the upload. The callback should accept
two integer parameters, the first representing the
number of bytes that have been successfully
transmitted to S3 and the second representing the
size of the object to be transmitted.
:type num_cb: int
:param num_cb: (optional) If a callback is specified with
the cb parameter this parameter determines the
granularity of the callback by defining
the maximum number of times the callback will
be called during the file transfer.
:type policy: :class:`boto.s3.acl.CannedACLStrings`
:param policy: A canned ACL policy that will be applied to the
new key in S3.
:type md5: A tuple containing the hexdigest version of the MD5
checksum of the file as the first element and the
Base64-encoded version of the plain checksum as the
second element. This is the same format returned by
the compute_md5 method.
:param md5: If you need to compute the MD5 for any reason prior
to upload, it's silly to have to do it twice so this
param, if present, will be used as the MD5 values
of the file. Otherwise, the checksum will be computed.
:type reduced_redundancy: bool
:param reduced_redundancy: If True, this will set the storage
class of the new Key to be
REDUCED_REDUNDANCY. The Reduced Redundancy
Storage (RRS) feature of S3, provides lower
redundancy at lower storage cost.
:type encrypt_key: bool
:param encrypt_key: If True, the new copy of the object will
be encrypted on the server-side by S3 and
will be stored in an encrypted form while
at rest in S3.
"""
fp = open(filename, 'rb')
self.set_contents_from_file(fp, headers, replace, cb, num_cb,
policy, md5, reduced_redundancy,
encrypt_key=encrypt_key)
fp.close() | Store an object in S3 using the name of the Key object as the
key in S3 and the contents of the file named by 'filename'.
See set_contents_from_file method for details about the
parameters.
:type filename: string
:param filename: The name of the file that you want to put onto S3
:type headers: dict
:param headers: Additional headers to pass along with the
request to AWS.
:type replace: bool
:param replace: If True, replaces the contents of the file
if it already exists.
:type cb: function
:param cb: a callback function that will be called to report
progress on the upload. The callback should accept
two integer parameters, the first representing the
number of bytes that have been successfully
transmitted to S3 and the second representing the
size of the object to be transmitted.
:type num_cb: int
:param num_cb: (optional) If a callback is specified with
the cb parameter this parameter determines the
granularity of the callback by defining
the maximum number of times the callback will
be called during the file transfer.
:type policy: :class:`boto.s3.acl.CannedACLStrings`
:param policy: A canned ACL policy that will be applied to the
new key in S3.
:type md5: A tuple containing the hexdigest version of the MD5
checksum of the file as the first element and the
Base64-encoded version of the plain checksum as the
second element. This is the same format returned by
the compute_md5 method.
:param md5: If you need to compute the MD5 for any reason prior
to upload, it's silly to have to do it twice so this
param, if present, will be used as the MD5 values
of the file. Otherwise, the checksum will be computed.
:type reduced_redundancy: bool
:param reduced_redundancy: If True, this will set the storage
class of the new Key to be
REDUCED_REDUNDANCY. The Reduced Redundancy
Storage (RRS) feature of S3, provides lower
redundancy at lower storage cost.
:type encrypt_key: bool
:param encrypt_key: If True, the new copy of the object will
be encrypted on the server-side by S3 and
will be stored in an encrypted form while
at rest in S3. | Below is the instruction that describes the task:
### Input:
Store an object in S3 using the name of the Key object as the
key in S3 and the contents of the file named by 'filename'.
See set_contents_from_file method for details about the
parameters.
:type filename: string
:param filename: The name of the file that you want to put onto S3
:type headers: dict
:param headers: Additional headers to pass along with the
request to AWS.
:type replace: bool
:param replace: If True, replaces the contents of the file
if it already exists.
:type cb: function
:param cb: a callback function that will be called to report
progress on the upload. The callback should accept
two integer parameters, the first representing the
number of bytes that have been successfully
transmitted to S3 and the second representing the
size of the object to be transmitted.
:type num_cb: int
:param num_cb: (optional) If a callback is specified with
the cb parameter this parameter determines the
granularity of the callback by defining
the maximum number of times the callback will
be called during the file transfer.
:type policy: :class:`boto.s3.acl.CannedACLStrings`
:param policy: A canned ACL policy that will be applied to the
new key in S3.
:type md5: A tuple containing the hexdigest version of the MD5
checksum of the file as the first element and the
Base64-encoded version of the plain checksum as the
second element. This is the same format returned by
the compute_md5 method.
:param md5: If you need to compute the MD5 for any reason prior
to upload, it's silly to have to do it twice so this
param, if present, will be used as the MD5 values
of the file. Otherwise, the checksum will be computed.
:type reduced_redundancy: bool
:param reduced_redundancy: If True, this will set the storage
class of the new Key to be
REDUCED_REDUNDANCY. The Reduced Redundancy
Storage (RRS) feature of S3, provides lower
redundancy at lower storage cost.
:type encrypt_key: bool
:param encrypt_key: If True, the new copy of the object will
be encrypted on the server-side by S3 and
will be stored in an encrypted form while
at rest in S3.
### Response:
def set_contents_from_filename(self, filename, headers=None, replace=True,
cb=None, num_cb=10, policy=None, md5=None,
reduced_redundancy=False,
encrypt_key=False):
"""
Store an object in S3 using the name of the Key object as the
key in S3 and the contents of the file named by 'filename'.
See set_contents_from_file method for details about the
parameters.
:type filename: string
:param filename: The name of the file that you want to put onto S3
:type headers: dict
:param headers: Additional headers to pass along with the
request to AWS.
:type replace: bool
:param replace: If True, replaces the contents of the file
if it already exists.
:type cb: function
:param cb: a callback function that will be called to report
progress on the upload. The callback should accept
two integer parameters, the first representing the
number of bytes that have been successfully
transmitted to S3 and the second representing the
size of the object to be transmitted.
:type num_cb: int
:param num_cb: (optional) If a callback is specified with
the cb parameter this parameter determines the
granularity of the callback by defining
the maximum number of times the callback will
be called during the file transfer.
:type policy: :class:`boto.s3.acl.CannedACLStrings`
:param policy: A canned ACL policy that will be applied to the
new key in S3.
:type md5: A tuple containing the hexdigest version of the MD5
checksum of the file as the first element and the
Base64-encoded version of the plain checksum as the
second element. This is the same format returned by
the compute_md5 method.
:param md5: If you need to compute the MD5 for any reason prior
to upload, it's silly to have to do it twice so this
param, if present, will be used as the MD5 values
of the file. Otherwise, the checksum will be computed.
:type reduced_redundancy: bool
:param reduced_redundancy: If True, this will set the storage
class of the new Key to be
REDUCED_REDUNDANCY. The Reduced Redundancy
Storage (RRS) feature of S3, provides lower
redundancy at lower storage cost.
:type encrypt_key: bool
:param encrypt_key: If True, the new copy of the object will
be encrypted on the server-side by S3 and
will be stored in an encrypted form while
at rest in S3.
"""
fp = open(filename, 'rb')
self.set_contents_from_file(fp, headers, replace, cb, num_cb,
policy, md5, reduced_redundancy,
encrypt_key=encrypt_key)
fp.close() |
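The open/delegate/close pattern above can be sketched with a `with` block, which also closes the file when the delegate raises, something the bare `open()`/`close()` pair does not guarantee. This is a generic sketch, not the boto API:

```python
def upload_from_filename(key, filename, **kwargs):
    """Open the file and delegate to a file-object API; the context manager
    guarantees the file is closed even if the delegate raises."""
    with open(filename, 'rb') as fp:
        return key.set_contents_from_file(fp, **kwargs)
```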
def path_file_to_list(path_file):
"""
:return: A list with the paths which are stored in a text file in a line-by-
line format. Validate each path using is_valid_path
"""
paths = []
path_file_fd = file(path_file)
for line_no, line in enumerate(path_file_fd.readlines(), start=1):
line = line.strip()
if not line:
# Blank line support
continue
if line.startswith('#'):
# Comment support
continue
try:
is_valid_path(line)
paths.append(line)
except ValueError, ve:
args = (ve, path_file, line_no)
raise ValueError('%s error found in %s:%s.' % args)
return paths | :return: A list with the paths which are stored in a text file in a line-by-
line format. Validate each path using is_valid_path | Below is the instruction that describes the task:
### Input:
:return: A list with the paths which are stored in a text file in a line-by-
line format. Validate each path using is_valid_path
### Response:
def path_file_to_list(path_file):
"""
:return: A list with the paths which are stored in a text file in a line-by-
line format. Validate each path using is_valid_path
"""
paths = []
path_file_fd = file(path_file)
for line_no, line in enumerate(path_file_fd.readlines(), start=1):
line = line.strip()
if not line:
# Blank line support
continue
if line.startswith('#'):
# Comment support
continue
try:
is_valid_path(line)
paths.append(line)
except ValueError as ve:
args = (ve, path_file, line_no)
raise ValueError('%s error found in %s:%s.' % args)
return paths |
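The record above uses Python 2 syntax (`file()`, `except ValueError, ve`). A Python 3 sketch of the same line-filtering logic, with a hypothetical `is_valid_path` validator standing in for the one the source assumes:

```python
import io

def is_valid_path(path):
    # Hypothetical validator: reject empty or whitespace-padded paths.
    if not path or path != path.strip():
        raise ValueError('invalid path: %r' % path)

def path_file_to_list(path_file_fd):
    """Read paths line by line, skipping blank lines and '#' comments."""
    paths = []
    for line_no, line in enumerate(path_file_fd, start=1):
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # blank-line and comment support
        try:
            is_valid_path(line)
        except ValueError as ve:
            raise ValueError('%s error found at line %s.' % (ve, line_no))
        paths.append(line)
    return paths

paths = path_file_to_list(io.StringIO('# header\n\n/tmp/a\n/tmp/b\n'))
```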
def add_parameter(self, location='query', **kwargs):
"""Adds a new parameter to the request
:param location: the 'in' field of the parameter (e.g: 'query',
'body', 'path')
"""
kwargs.setdefault('in', location)
if kwargs['in'] != 'body':
kwargs.setdefault('type', 'string')
self['parameters'].append(kwargs) | Adds a new parameter to the request
:param location: the 'in' field of the parameter (e.g: 'query',
'body', 'path') | Below is the instruction that describes the task:
### Input:
Adds a new parameter to the request
:param location: the 'in' field of the parameter (e.g: 'query',
'body', 'path')
### Response:
def add_parameter(self, location='query', **kwargs):
"""Adds a new parameter to the request
:param location: the 'in' field of the parameter (e.g: 'query',
'body', 'path')
"""
kwargs.setdefault('in', location)
if kwargs['in'] != 'body':
kwargs.setdefault('type', 'string')
self['parameters'].append(kwargs) |
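The enclosing class is not shown in the record; a minimal dict-based stand-in (an assumption) makes the `'in'`/`'type'` defaulting visible:

```python
class Request(dict):
    # Minimal stand-in for the object add_parameter is defined on (assumed);
    # the real class presumably initialises self['parameters'] elsewhere.
    def __init__(self):
        super().__init__(parameters=[])

    def add_parameter(self, location='query', **kwargs):
        kwargs.setdefault('in', location)
        if kwargs['in'] != 'body':
            kwargs.setdefault('type', 'string')
        self['parameters'].append(kwargs)

req = Request()
req.add_parameter(name='q')                      # query param gets type 'string'
req.add_parameter(location='body', name='data')  # body param gets no default type
```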
def enable(identifier, exclude_children=False):
"""
Enable a previously disabled include type
:param identifier: module or name of the include type
:param exclude_children: disable the include type only for child processes, not the current process
The ``identifier`` can be specified in multiple ways to disable an include type.
See :py:meth:`~.DisabledIncludeTypes.disable` for details.
"""
DISABLED_TYPES.enable(identifier=identifier, exclude_children=exclude_children) | Enable a previously disabled include type
:param identifier: module or name of the include type
:param exclude_children: disable the include type only for child processes, not the current process
The ``identifier`` can be specified in multiple ways to disable an include type.
See :py:meth:`~.DisabledIncludeTypes.disable` for details. | Below is the instruction that describes the task:
### Input:
Enable a previously disabled include type
:param identifier: module or name of the include type
:param exclude_children: disable the include type only for child processes, not the current process
The ``identifier`` can be specified in multiple ways to disable an include type.
See :py:meth:`~.DisabledIncludeTypes.disable` for details.
### Response:
def enable(identifier, exclude_children=False):
"""
Enable a previously disabled include type
:param identifier: module or name of the include type
:param exclude_children: disable the include type only for child processes, not the current process
The ``identifier`` can be specified in multiple ways to disable an include type.
See :py:meth:`~.DisabledIncludeTypes.disable` for details.
"""
DISABLED_TYPES.enable(identifier=identifier, exclude_children=exclude_children) |
def weights(self):
"""Weights as described in the FS framework."""
m = self.kernel.feature_log_prob_[self._match_class_pos()]
u = self.kernel.feature_log_prob_[self._nonmatch_class_pos()]
return self._prob_inverse_transform(numpy.exp(m - u)) | Weights as described in the FS framework. | Below is the instruction that describes the task:
### Input:
Weights as described in the FS framework.
### Response:
def weights(self):
"""Weights as described in the FS framework."""
m = self.kernel.feature_log_prob_[self._match_class_pos()]
u = self.kernel.feature_log_prob_[self._nonmatch_class_pos()]
return self._prob_inverse_transform(numpy.exp(m - u)) |
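The weight here is the likelihood ratio exp(m − u) of per-feature log-probabilities. A plain-math sketch with made-up log-probabilities (the real values come from the fitted naive-Bayes kernel):

```python
import math

# Hypothetical per-feature log-probabilities for the match (m) and
# non-match (u) classes, as a naive-Bayes kernel would store them.
m = [math.log(0.9), math.log(0.2)]
u = [math.log(0.1), math.log(0.4)]

# Fellegi-Sunter style agreement weight: the likelihood ratio P(x|M)/P(x|U).
weights = [math.exp(mi - ui) for mi, ui in zip(m, u)]
```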
def print_warning(msg, color=True):
"""
Print a warning message.
:param string msg: the message
:param bool color: if ``True``, print with POSIX color
"""
if color and is_posix():
safe_print(u"%s[WARN] %s%s" % (ANSI_WARNING, msg, ANSI_END))
else:
safe_print(u"[WARN] %s" % (msg)) | Print a warning message.
:param string msg: the message
:param bool color: if ``True``, print with POSIX color | Below is the instruction that describes the task:
### Input:
Print a warning message.
:param string msg: the message
:param bool color: if ``True``, print with POSIX color
### Response:
def print_warning(msg, color=True):
"""
Print a warning message.
:param string msg: the message
:param bool color: if ``True``, print with POSIX color
"""
if color and is_posix():
safe_print(u"%s[WARN] %s%s" % (ANSI_WARNING, msg, ANSI_END))
else:
safe_print(u"[WARN] %s" % (msg)) |
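The `ANSI_WARNING`/`ANSI_END` constants are defined elsewhere in that module; a self-contained sketch with assumed escape codes, returning the string instead of printing so it can be inspected:

```python
import os

# Assumed ANSI escape codes; the real module defines its own ANSI_WARNING/ANSI_END.
ANSI_WARNING = '\033[93m'
ANSI_END = '\033[0m'

def format_warning(msg, color=True, posix=(os.name == 'posix')):
    # Wrap the message in color codes only for POSIX terminals.
    if color and posix:
        return '%s[WARN] %s%s' % (ANSI_WARNING, msg, ANSI_END)
    return '[WARN] %s' % msg

plain = format_warning('disk almost full', color=False)
colored = format_warning('disk almost full', color=True, posix=True)
```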
def dict_to_element(doc: dict, value_key: str='@', attribute_prefix: str='@') -> Element:
"""
Generates XML Element from dict.
Generates complex elements by assuming element attributes are prefixed with '@', and value is stored to plain '@'
in case of complex element. Children are sub-dicts.
For example:
{
'Doc': {
'@version': '1.2',
'A': [{'@class': 'x', 'B': {'@': 'hello', '@class': 'x2'}},
{'@class': 'y', 'B': {'@': 'world', '@class': 'y2'}}],
'C': 'value node',
}
}
is returned as follows:
<?xml version="1.0" ?>
<Doc version="1.2">
<A class="x">
<B class="x2">hello</B>
</A>
<A class="y">
<B class="y2">world</B>
</A>
<C>value node</C>
</Doc>
Args:
doc: dict. Must have a single root key.
value_key: Key to store (complex) element value. Default is '@'
attribute_prefix: Key prefix to store element attribute values. Default is '@'
Returns: xml.etree.ElementTree.Element
"""
from xml.etree import ElementTree as ET
if len(doc) != 1:
raise Exception('Invalid data dict for XML generation, document root must have single element')
for tag, data in doc.items():
el = ET.Element(tag)
assert isinstance(el, Element)
_xml_element_set_data_r(el, data, value_key, attribute_prefix)
return el | Generates XML Element from dict.
Generates complex elements by assuming element attributes are prefixed with '@', and value is stored to plain '@'
in case of complex element. Children are sub-dicts.
For example:
{
'Doc': {
'@version': '1.2',
'A': [{'@class': 'x', 'B': {'@': 'hello', '@class': 'x2'}},
{'@class': 'y', 'B': {'@': 'world', '@class': 'y2'}}],
'C': 'value node',
}
}
is returned as follows:
<?xml version="1.0" ?>
<Doc version="1.2">
<A class="x">
<B class="x2">hello</B>
</A>
<A class="y">
<B class="y2">world</B>
</A>
<C>value node</C>
</Doc>
Args:
doc: dict. Must have a single root key.
value_key: Key to store (complex) element value. Default is '@'
attribute_prefix: Key prefix to store element attribute values. Default is '@'
Returns: xml.etree.ElementTree.Element | Below is the instruction that describes the task:
### Input:
Generates XML Element from dict.
Generates complex elements by assuming element attributes are prefixed with '@', and value is stored to plain '@'
in case of complex element. Children are sub-dicts.
For example:
{
'Doc': {
'@version': '1.2',
'A': [{'@class': 'x', 'B': {'@': 'hello', '@class': 'x2'}},
{'@class': 'y', 'B': {'@': 'world', '@class': 'y2'}}],
'C': 'value node',
}
}
is returned as follows:
<?xml version="1.0" ?>
<Doc version="1.2">
<A class="x">
<B class="x2">hello</B>
</A>
<A class="y">
<B class="y2">world</B>
</A>
<C>value node</C>
</Doc>
Args:
doc: dict. Must have a single root key.
value_key: Key to store (complex) element value. Default is '@'
attribute_prefix: Key prefix to store element attribute values. Default is '@'
Returns: xml.etree.ElementTree.Element
### Response:
def dict_to_element(doc: dict, value_key: str='@', attribute_prefix: str='@') -> Element:
"""
Generates XML Element from dict.
Generates complex elements by assuming element attributes are prefixed with '@', and value is stored to plain '@'
in case of complex element. Children are sub-dicts.
For example:
{
'Doc': {
'@version': '1.2',
'A': [{'@class': 'x', 'B': {'@': 'hello', '@class': 'x2'}},
{'@class': 'y', 'B': {'@': 'world', '@class': 'y2'}}],
'C': 'value node',
}
}
is returned as follows:
<?xml version="1.0" ?>
<Doc version="1.2">
<A class="x">
<B class="x2">hello</B>
</A>
<A class="y">
<B class="y2">world</B>
</A>
<C>value node</C>
</Doc>
Args:
doc: dict. Must have a single root key.
value_key: Key to store (complex) element value. Default is '@'
attribute_prefix: Key prefix to store element attribute values. Default is '@'
Returns: xml.etree.ElementTree.Element
"""
from xml.etree import ElementTree as ET
if len(doc) != 1:
raise Exception('Invalid data dict for XML generation, document root must have single element')
for tag, data in doc.items():
el = ET.Element(tag)
assert isinstance(el, Element)
_xml_element_set_data_r(el, data, value_key, attribute_prefix)
return el |
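The recursive helper `_xml_element_set_data_r` is not included in the record; the recursion below is a simplified re-implementation of the documented behaviour (an assumption, not the original helper):

```python
from xml.etree import ElementTree as ET

def dict_to_element(doc, value_key='@', attribute_prefix='@'):
    # Simplified sketch; _xml_element_set_data_r is not shown in the source,
    # so _set_data below is a guess at its behaviour.
    (tag, data), = doc.items()
    el = ET.Element(tag)
    _set_data(el, data, value_key, attribute_prefix)
    return el

def _set_data(el, data, value_key, attribute_prefix):
    for key, value in data.items():
        if key == value_key:
            el.text = str(value)                       # complex element value
        elif key.startswith(attribute_prefix):
            el.set(key[len(attribute_prefix):], str(value))  # attribute
        else:
            children = value if isinstance(value, list) else [value]
            for child in children:
                sub = ET.SubElement(el, key)
                if isinstance(child, dict):
                    _set_data(sub, child, value_key, attribute_prefix)
                else:
                    sub.text = str(child)

el = dict_to_element({'Doc': {'@version': '1.2', 'C': 'value node'}})
xml = ET.tostring(el, encoding='unicode')
```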
def handle_stream(self, stream, address):
'''
Handle incoming streams and add messages to the incoming queue
'''
log.trace('Req client %s connected', address)
self.clients.append((stream, address))
unpacker = msgpack.Unpacker()
try:
while True:
wire_bytes = yield stream.read_bytes(4096, partial=True)
unpacker.feed(wire_bytes)
for framed_msg in unpacker:
if six.PY3:
framed_msg = salt.transport.frame.decode_embedded_strs(
framed_msg
)
header = framed_msg['head']
self.io_loop.spawn_callback(self.message_handler, stream, header, framed_msg['body'])
except StreamClosedError:
log.trace('req client disconnected %s', address)
self.clients.remove((stream, address))
except Exception as e:
log.trace('other master-side exception: %s', e)
self.clients.remove((stream, address))
stream.close() | Handle incoming streams and add messages to the incoming queue | Below is the instruction that describes the task:
### Input:
Handle incoming streams and add messages to the incoming queue
### Response:
def handle_stream(self, stream, address):
'''
Handle incoming streams and add messages to the incoming queue
'''
log.trace('Req client %s connected', address)
self.clients.append((stream, address))
unpacker = msgpack.Unpacker()
try:
while True:
wire_bytes = yield stream.read_bytes(4096, partial=True)
unpacker.feed(wire_bytes)
for framed_msg in unpacker:
if six.PY3:
framed_msg = salt.transport.frame.decode_embedded_strs(
framed_msg
)
header = framed_msg['head']
self.io_loop.spawn_callback(self.message_handler, stream, header, framed_msg['body'])
except StreamClosedError:
log.trace('req client disconnected %s', address)
self.clients.remove((stream, address))
except Exception as e:
log.trace('other master-side exception: %s', e)
self.clients.remove((stream, address))
stream.close() |
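The handler feeds arbitrary-sized chunks into a `msgpack.Unpacker`, which buffers partial frames until a whole message is available. A stdlib-only sketch of that buffering idea, using length-prefixed frames instead of msgpack:

```python
import struct

class Framer:
    """Buffers incoming bytes and yields complete length-prefixed frames,
    mirroring how msgpack.Unpacker absorbs partial reads."""
    def __init__(self):
        self.buf = b''

    def feed(self, data):
        self.buf += data

    def __iter__(self):
        while len(self.buf) >= 4:
            (length,) = struct.unpack('!I', self.buf[:4])
            if len(self.buf) < 4 + length:
                break  # frame not complete yet; wait for more bytes
            yield self.buf[4:4 + length]
            self.buf = self.buf[4 + length:]

framer = Framer()
frame = struct.pack('!I', 5) + b'hello'
framer.feed(frame[:6])          # partial read: header + 2 payload bytes
first = list(framer)            # nothing complete yet
framer.feed(frame[6:])          # rest of the payload arrives
second = list(framer)
```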
def _fnop_style(schema, op, name):
"""Set an operator's parameter representing the style of this schema."""
if is_common(schema):
if name in op.params:
del op.params[name]
return
if _is_pending(schema):
ntp = 'pending'
elif schema.style is tuple:
ntp = 'tuple'
elif schema.style is _spl_dict:
ntp = 'dict'
elif _is_namedtuple(schema.style) and hasattr(schema.style, '_splpy_namedtuple'):
ntp = 'namedtuple:' + schema.style._splpy_namedtuple
else:
return
op.params[name] = ntp | Set an operator's parameter representing the style of this schema. | Below is the instruction that describes the task:
### Input:
Set an operator's parameter representing the style of this schema.
### Response:
def _fnop_style(schema, op, name):
"""Set an operator's parameter representing the style of this schema."""
if is_common(schema):
if name in op.params:
del op.params[name]
return
if _is_pending(schema):
ntp = 'pending'
elif schema.style is tuple:
ntp = 'tuple'
elif schema.style is _spl_dict:
ntp = 'dict'
elif _is_namedtuple(schema.style) and hasattr(schema.style, '_splpy_namedtuple'):
ntp = 'namedtuple:' + schema.style._splpy_namedtuple
else:
return
op.params[name] = ntp |
def upload_panel(store, institute_id, case_name, stream):
"""Parse out HGNC symbols from a stream."""
institute_obj, case_obj = institute_and_case(store, institute_id, case_name)
raw_symbols = [line.strip().split('\t')[0] for line in stream if
line and not line.startswith('#')]
# check if supplied gene symbols exist
hgnc_symbols = []
for raw_symbol in raw_symbols:
if store.hgnc_genes(raw_symbol).count() == 0:
flash("HGNC symbol not found: {}".format(raw_symbol), 'warning')
else:
hgnc_symbols.append(raw_symbol)
return hgnc_symbols | Parse out HGNC symbols from a stream. | Below is the instruction that describes the task:
### Input:
Parse out HGNC symbols from a stream.
### Response:
def upload_panel(store, institute_id, case_name, stream):
"""Parse out HGNC symbols from a stream."""
institute_obj, case_obj = institute_and_case(store, institute_id, case_name)
raw_symbols = [line.strip().split('\t')[0] for line in stream if
line and not line.startswith('#')]
# check if supplied gene symbols exist
hgnc_symbols = []
for raw_symbol in raw_symbols:
if store.hgnc_genes(raw_symbol).count() == 0:
flash("HGNC symbol not found: {}".format(raw_symbol), 'warning')
else:
hgnc_symbols.append(raw_symbol)
return hgnc_symbols |
def show_messages(self):
"""Show all messages."""
if isinstance(self.static_message, MessageElement):
# Handle sent Message instance
string = html_header()
if self.static_message is not None:
string += self.static_message.to_html()
# Keep track of the last ID we had so we can scroll to it
self.last_id = 0
for message in self.dynamic_messages:
if message.element_id is None:
self.last_id += 1
message.element_id = str(self.last_id)
html = message.to_html(in_div_flag=True)
if html is not None:
string += html
string += html_footer()
elif (isinstance(self.static_message, str)):
# Handle sent text directly
string = self.static_message
elif self.static_message is not None:
string = str(self.static_message)
elif not self.static_message:
# handle dynamic message
# Handle sent Message instance
string = html_header()
# Keep track of the last ID we had so we can scroll to it
self.last_id = 0
for message in self.dynamic_messages:
if message.element_id is None:
self.last_id += 1
message.element_id = str(self.last_id)
html = message.to_html(in_div_flag=True)
if html is not None:
string += html
string += html_footer()
# Set HTML
self.load_html(HTML_STR_MODE, string) | Show all messages. | Below is the instruction that describes the task:
### Input:
Show all messages.
### Response:
def show_messages(self):
"""Show all messages."""
if isinstance(self.static_message, MessageElement):
# Handle sent Message instance
string = html_header()
if self.static_message is not None:
string += self.static_message.to_html()
# Keep track of the last ID we had so we can scroll to it
self.last_id = 0
for message in self.dynamic_messages:
if message.element_id is None:
self.last_id += 1
message.element_id = str(self.last_id)
html = message.to_html(in_div_flag=True)
if html is not None:
string += html
string += html_footer()
elif (isinstance(self.static_message, str)):
# Handle sent text directly
string = self.static_message
elif self.static_message is not None:
string = str(self.static_message)
elif not self.static_message:
# handle dynamic message
# Handle sent Message instance
string = html_header()
# Keep track of the last ID we had so we can scroll to it
self.last_id = 0
for message in self.dynamic_messages:
if message.element_id is None:
self.last_id += 1
message.element_id = str(self.last_id)
html = message.to_html(in_div_flag=True)
if html is not None:
string += html
string += html_footer()
# Set HTML
self.load_html(HTML_STR_MODE, string) |
def is_bool_matrix(l):
r"""Checks if l is a 2D numpy array of bools
"""
if isinstance(l, np.ndarray):
if l.ndim == 2 and (l.dtype == bool):
return True
return False | r"""Checks if l is a 2D numpy array of bools | Below is the instruction that describes the task:
### Input:
r"""Checks if l is a 2D numpy array of bools
### Response:
def is_bool_matrix(l):
r"""Checks if l is a 2D numpy array of bools
"""
if isinstance(l, np.ndarray):
if l.ndim == 2 and (l.dtype == bool):
return True
return False |
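A quick demonstration of the three checks in `is_bool_matrix` (array type, 2-D shape, bool dtype), reproduced standalone:

```python
import numpy as np

def is_bool_matrix(l):
    # True only for a 2-D numpy array with dtype bool.
    if isinstance(l, np.ndarray):
        if l.ndim == 2 and l.dtype == bool:
            return True
    return False

ok = is_bool_matrix(np.array([[True, False], [False, True]]))
wrong_dtype = is_bool_matrix(np.zeros((2, 2)))        # float matrix
wrong_ndim = is_bool_matrix(np.array([True, False]))  # 1-D bool array
```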
def export_modifications(self):
"""
Returns list modifications.
"""
if self.__modified_data__ is not None:
return self.export_data()
result = {}
for key, value in enumerate(self.__original_data__):
try:
if not value.is_modified():
continue
modifications = value.export_modifications()
except AttributeError:
continue
try:
result.update({'{}.{}'.format(key, f): v for f, v in modifications.items()})
except AttributeError:
result[key] = modifications
return result | Returns list modifications. | Below is the instruction that describes the task:
### Input:
Returns list modifications.
### Response:
def export_modifications(self):
"""
Returns list modifications.
"""
if self.__modified_data__ is not None:
return self.export_data()
result = {}
for key, value in enumerate(self.__original_data__):
try:
if not value.is_modified():
continue
modifications = value.export_modifications()
except AttributeError:
continue
try:
result.update({'{}.{}'.format(key, f): v for f, v in modifications.items()})
except AttributeError:
result[key] = modifications
return result |
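The method flattens child modifications into `'index.field'` keys. A standalone sketch of that flattening, with plain dicts standing in for the child objects' `export_modifications()` results (an assumption for illustration):

```python
def flatten_modifications(items):
    """Flatten per-index modification dicts into 'index.field' keys,
    mirroring how export_modifications builds its result."""
    result = {}
    for key, mods in enumerate(items):
        if mods is None:
            continue  # unmodified entry
        if isinstance(mods, dict):
            result.update({'%s.%s' % (key, f): v for f, v in mods.items()})
        else:
            result[key] = mods  # leaf modification, keyed by index alone
    return result

mods = flatten_modifications([None, {'name': 'Bob'}, 'replaced'])
```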
def relative_path(self, filepath, basepath=None):
"""
Convert the filepath path to a relative path against basepath. By
default basepath is self.basedir.
"""
if basepath is None:
basepath = self.basedir
if not basepath:
return filepath
if filepath.startswith(basepath):
rel = filepath[len(basepath):]
if rel and rel[0] == os.sep:
rel = rel[1:]
return rel | Convert the filepath path to a relative path against basepath. By
default basepath is self.basedir. | Below is the instruction that describes the task:
### Input:
Convert the filepath path to a relative path against basepath. By
default basepath is self.basedir.
### Response:
def relative_path(self, filepath, basepath=None):
"""
Convert the filepath path to a relative path against basepath. By
default basepath is self.basedir.
"""
if basepath is None:
basepath = self.basedir
if not basepath:
return filepath
if filepath.startswith(basepath):
rel = filepath[len(basepath):]
if rel and rel[0] == os.sep:
rel = rel[1:]
return rel |
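A standalone sketch of the same string-prefix stripping (note the method above implicitly returns `None` when `filepath` is outside `basepath`; this sketch returns the path unchanged instead, and `os.path.relpath` would climb out with `..`):

```python
import os

def relative_path(filepath, basepath):
    # String-prefix variant, as in the method above.
    if not basepath:
        return filepath
    if filepath.startswith(basepath):
        rel = filepath[len(basepath):]
        if rel and rel[0] == os.sep:
            rel = rel[1:]
        return rel
    return filepath

base = os.sep + 'srv' + os.sep + 'app'
full = os.path.join(base, 'static', 'site.css')
rel = relative_path(full, base)
```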
def evaluate(ref_intervals, ref_pitches, est_intervals, est_pitches, **kwargs):
"""Compute all metrics for the given reference and estimated annotations.
Examples
--------
>>> ref_intervals, ref_pitches = mir_eval.io.load_valued_intervals(
... 'reference.txt')
>>> est_intervals, est_pitches = mir_eval.io.load_valued_intervals(
... 'estimate.txt')
>>> scores = mir_eval.transcription.evaluate(ref_intervals, ref_pitches,
... est_intervals, est_pitches)
Parameters
----------
ref_intervals : np.ndarray, shape=(n,2)
Array of reference notes time intervals (onset and offset times)
ref_pitches : np.ndarray, shape=(n,)
Array of reference pitch values in Hertz
est_intervals : np.ndarray, shape=(m,2)
Array of estimated notes time intervals (onset and offset times)
est_pitches : np.ndarray, shape=(m,)
Array of estimated pitch values in Hertz
kwargs
Additional keyword arguments which will be passed to the
appropriate metric or preprocessing functions.
Returns
-------
scores : dict
Dictionary of scores, where the key is the metric name (str) and
the value is the (float) score achieved.
"""
# Compute all the metrics
scores = collections.OrderedDict()
# Precision, recall and f-measure taking note offsets into account
kwargs.setdefault('offset_ratio', 0.2)
orig_offset_ratio = kwargs['offset_ratio']
if kwargs['offset_ratio'] is not None:
(scores['Precision'],
scores['Recall'],
scores['F-measure'],
scores['Average_Overlap_Ratio']) = util.filter_kwargs(
precision_recall_f1_overlap, ref_intervals, ref_pitches,
est_intervals, est_pitches, **kwargs)
# Precision, recall and f-measure NOT taking note offsets into account
kwargs['offset_ratio'] = None
(scores['Precision_no_offset'],
scores['Recall_no_offset'],
scores['F-measure_no_offset'],
scores['Average_Overlap_Ratio_no_offset']) = (
util.filter_kwargs(precision_recall_f1_overlap,
ref_intervals, ref_pitches,
est_intervals, est_pitches, **kwargs))
# onset-only metrics
(scores['Onset_Precision'],
scores['Onset_Recall'],
scores['Onset_F-measure']) = (
util.filter_kwargs(onset_precision_recall_f1,
ref_intervals, est_intervals, **kwargs))
# offset-only metrics
kwargs['offset_ratio'] = orig_offset_ratio
if kwargs['offset_ratio'] is not None:
(scores['Offset_Precision'],
scores['Offset_Recall'],
scores['Offset_F-measure']) = (
util.filter_kwargs(offset_precision_recall_f1,
ref_intervals, est_intervals, **kwargs))
return scores | Compute all metrics for the given reference and estimated annotations.
Examples
--------
>>> ref_intervals, ref_pitches = mir_eval.io.load_valued_intervals(
... 'reference.txt')
>>> est_intervals, est_pitches = mir_eval.io.load_valued_intervals(
... 'estimate.txt')
>>> scores = mir_eval.transcription.evaluate(ref_intervals, ref_pitches,
... est_intervals, est_pitches)
Parameters
----------
ref_intervals : np.ndarray, shape=(n,2)
Array of reference notes time intervals (onset and offset times)
ref_pitches : np.ndarray, shape=(n,)
Array of reference pitch values in Hertz
est_intervals : np.ndarray, shape=(m,2)
Array of estimated notes time intervals (onset and offset times)
est_pitches : np.ndarray, shape=(m,)
Array of estimated pitch values in Hertz
kwargs
Additional keyword arguments which will be passed to the
appropriate metric or preprocessing functions.
Returns
-------
scores : dict
Dictionary of scores, where the key is the metric name (str) and
the value is the (float) score achieved. | Below is the instruction that describes the task:
### Input:
Compute all metrics for the given reference and estimated annotations.
Examples
--------
>>> ref_intervals, ref_pitches = mir_eval.io.load_valued_intervals(
... 'reference.txt')
>>> est_intervals, est_pitches = mir_eval.io.load_valued_intervals(
... 'estimate.txt')
>>> scores = mir_eval.transcription.evaluate(ref_intervals, ref_pitches,
... est_intervals, est_pitches)
Parameters
----------
ref_intervals : np.ndarray, shape=(n,2)
Array of reference notes time intervals (onset and offset times)
ref_pitches : np.ndarray, shape=(n,)
Array of reference pitch values in Hertz
est_intervals : np.ndarray, shape=(m,2)
Array of estimated notes time intervals (onset and offset times)
est_pitches : np.ndarray, shape=(m,)
Array of estimated pitch values in Hertz
kwargs
Additional keyword arguments which will be passed to the
appropriate metric or preprocessing functions.
Returns
-------
scores : dict
Dictionary of scores, where the key is the metric name (str) and
the value is the (float) score achieved.
### Response:
def evaluate(ref_intervals, ref_pitches, est_intervals, est_pitches, **kwargs):
"""Compute all metrics for the given reference and estimated annotations.
Examples
--------
>>> ref_intervals, ref_pitches = mir_eval.io.load_valued_intervals(
... 'reference.txt')
>>> est_intervals, est_pitches = mir_eval.io.load_valued_intervals(
... 'estimate.txt')
>>> scores = mir_eval.transcription.evaluate(ref_intervals, ref_pitches,
... est_intervals, est_pitches)
Parameters
----------
ref_intervals : np.ndarray, shape=(n,2)
Array of reference notes time intervals (onset and offset times)
ref_pitches : np.ndarray, shape=(n,)
Array of reference pitch values in Hertz
est_intervals : np.ndarray, shape=(m,2)
Array of estimated notes time intervals (onset and offset times)
est_pitches : np.ndarray, shape=(m,)
Array of estimated pitch values in Hertz
kwargs
Additional keyword arguments which will be passed to the
appropriate metric or preprocessing functions.
Returns
-------
scores : dict
Dictionary of scores, where the key is the metric name (str) and
the value is the (float) score achieved.
"""
# Compute all the metrics
scores = collections.OrderedDict()
# Precision, recall and f-measure taking note offsets into account
kwargs.setdefault('offset_ratio', 0.2)
orig_offset_ratio = kwargs['offset_ratio']
if kwargs['offset_ratio'] is not None:
(scores['Precision'],
scores['Recall'],
scores['F-measure'],
scores['Average_Overlap_Ratio']) = util.filter_kwargs(
precision_recall_f1_overlap, ref_intervals, ref_pitches,
est_intervals, est_pitches, **kwargs)
# Precision, recall and f-measure NOT taking note offsets into account
kwargs['offset_ratio'] = None
(scores['Precision_no_offset'],
scores['Recall_no_offset'],
scores['F-measure_no_offset'],
scores['Average_Overlap_Ratio_no_offset']) = (
util.filter_kwargs(precision_recall_f1_overlap,
ref_intervals, ref_pitches,
est_intervals, est_pitches, **kwargs))
# onset-only metrics
(scores['Onset_Precision'],
scores['Onset_Recall'],
scores['Onset_F-measure']) = (
util.filter_kwargs(onset_precision_recall_f1,
ref_intervals, est_intervals, **kwargs))
# offset-only metrics
kwargs['offset_ratio'] = orig_offset_ratio
if kwargs['offset_ratio'] is not None:
(scores['Offset_Precision'],
scores['Offset_Recall'],
scores['Offset_F-measure']) = (
util.filter_kwargs(offset_precision_recall_f1,
ref_intervals, est_intervals, **kwargs))
return scores |
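The onset metrics reduce to matching estimated onsets to reference onsets within a tolerance. A simplified greedy sketch of that pairing; mir_eval itself uses an optimal bipartite matching, so this is only an illustration of the idea:

```python
def greedy_onset_match(ref_onsets, est_onsets, tolerance=0.05):
    """Greedily pair each reference onset with the first unused estimate
    within +/- tolerance seconds."""
    used = set()
    matches = []
    for i, r in enumerate(ref_onsets):
        for j, e in enumerate(est_onsets):
            if j not in used and abs(r - e) <= tolerance:
                used.add(j)
                matches.append((i, j))
                break
    return matches

matches = greedy_onset_match([0.10, 0.50, 1.00], [0.12, 0.90])
precision = len(matches) / 2   # matched estimates / total estimates
recall = len(matches) / 3      # matched references / total references
```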
def run(self):
"""
Overrides the default _run() private method.
Performs the complete analysis
:return: A fully computed set of Ordinary Differential Equations that can be used for further simulation
:rtype: :class:`~means.core.problems.ODEProblem`
"""
S = self.model.stoichiometry_matrix
amat = self.model.propensities
ymat = self.model.species
n_species = len(ymat)
# dPdt is matrix of each species differentiated w.r.t. time
# The code below literally multiplies the stoichiometry matrix to a column vector of propensities
# from the right (::math::`\frac{dP}{dt} = \mathbf{Sa}`)
dPdt = S * amat
# A is a matrix of each species (rows) and the derivatives of their stoichiometry matrix rows
# against each other species
# Code below computes the matrix A, that is of size `len(ymat) x len(ymat)`, for which each entry
# ::math::`A_{ik} = \sum_j S_{ij} \frac{\partial a_j}{\partial y_k} = \mathbf{S_i} \frac{\partial \mathbf{a}}{\partial y_k}`
A = sp.Matrix(len(ymat), len(ymat), lambda i, j: 0)
for i in range(A.rows):
for k in range(A.cols):
A[i, k] = reduce(operator.add, [S[i, j] * sp.diff(amat[j], ymat[k]) for j in range(len(amat))])
# `diagA` is a matrix that has values sqrt(a[i]) on the diagonal (0 elsewhere)
diagA = sp.Matrix(len(amat), len(amat), lambda i, j: amat[i] ** sp.Rational(1,2) if i==j else 0)
# E is stoichiometry matrix times diagA
E = S * diagA
variance_terms = []
cov_matrix = []
for i in range(len(ymat)):
row = []
for j in range(len(ymat)):
if i <= j:
symbol = 'V_{0}_{1}'.format(i, j)
variance_terms.append(VarianceTerm(position=(i,j), symbol=symbol))
else:
# Since Vi,j = Vj,i, i.e. covariance are equal, we only record Vi,j but not Vj,i
symbol = 'V_{0}_{1}'.format(j, i)
variance_terms.append(VarianceTerm(position=(j,i), symbol=symbol))
row.append(symbol)
cov_matrix.append(row)
V = sp.Matrix(cov_matrix)
# Matrix of variances (diagonal) and covariances of species i and j differentiated wrt time.
# I.e. if i=j, V_ij is the variance, and if i!=j, V_ij is the covariance between species i and species j
dVdt = A * V + V * (A.T) + E * (E.T)
# build ODEProblem object
rhs_redundant = sp.Matrix([i for i in dPdt] + [i for i in dVdt])
#generate ODE terms
n_vectors = [tuple([1 if i==j else 0 for i in range(n_species)]) for j in range(n_species)]
moment_terms = [Moment(nvec,lhs) for (lhs, nvec) in zip(ymat, n_vectors)]
ode_description = moment_terms + variance_terms
non_redundant_idx = []
ode_terms = []
# remove repetitive covariances, as Vij = Vji
for i, cov in enumerate(ode_description):
if cov in ode_terms:
continue
else:
ode_terms.append(cov)
non_redundant_idx.append(i)
rhs = []
for i in non_redundant_idx:
rhs.append(rhs_redundant[i])
out_problem = ODEProblem("LNA", ode_terms, rhs, sp.Matrix(self.model.parameters))
return out_problem | Overrides the default _run() private method.
Performs the complete analysis
:return: A fully computed set of Ordinary Differential Equations that can be used for further simulation
:rtype: :class:`~means.core.problems.ODEProblem` | Below is the instruction that describes the task:
### Input:
Overrides the default _run() private method.
Performs the complete analysis
:return: A fully computed set of Ordinary Differential Equations that can be used for further simulation
:rtype: :class:`~means.core.problems.ODEProblem`
### Response:
def run(self):
"""
Overrides the default _run() private method.
Performs the complete analysis
:return: A fully computed set of Ordinary Differential Equations that can be used for further simulation
:rtype: :class:`~means.core.problems.ODEProblem`
"""
S = self.model.stoichiometry_matrix
amat = self.model.propensities
ymat = self.model.species
n_species = len(ymat)
# dPdt is matrix of each species differentiated w.r.t. time
# The code below literally multiplies the stoichiometry matrix to a column vector of propensities
# from the right (::math::`\frac{dP}{dt} = \mathbf{Sa}`)
dPdt = S * amat
# A is a matrix of each species (rows) and the derivatives of their stoichiometry matrix rows
# against each other species
# Code below computes the matrix A, that is of size `len(ymat) x len(ymat)`, for which each entry
# ::math::`A_{ik} = \sum_j S_{ij} \frac{\partial a_j}{\partial y_k} = \mathbf{S_i} \frac{\partial \mathbf{a}}{\partial y_k}`
A = sp.Matrix(len(ymat), len(ymat), lambda i, j: 0)
for i in range(A.rows):
for k in range(A.cols):
A[i, k] = reduce(operator.add, [S[i, j] * sp.diff(amat[j], ymat[k]) for j in range(len(amat))])
# `diagA` is a matrix that has values sqrt(a[i]) on the diagonal (0 elsewhere)
diagA = sp.Matrix(len(amat), len(amat), lambda i, j: amat[i] ** sp.Rational(1,2) if i==j else 0)
# E is stoichiometry matrix times diagA
E = S * diagA
variance_terms = []
cov_matrix = []
for i in range(len(ymat)):
row = []
for j in range(len(ymat)):
if i <= j:
symbol = 'V_{0}_{1}'.format(i, j)
variance_terms.append(VarianceTerm(position=(i,j), symbol=symbol))
else:
# Since Vi,j = Vj,i, i.e. covariance are equal, we only record Vi,j but not Vj,i
symbol = 'V_{0}_{1}'.format(j, i)
variance_terms.append(VarianceTerm(position=(j,i), symbol=symbol))
row.append(symbol)
cov_matrix.append(row)
V = sp.Matrix(cov_matrix)
# Matrix of variances (diagonal) and covariances of species i and j differentiated wrt time.
# I.e. if i=j, V_ij is the variance, and if i!=j, V_ij is the covariance between species i and species j
dVdt = A * V + V * (A.T) + E * (E.T)
# build ODEProblem object
rhs_redundant = sp.Matrix([i for i in dPdt] + [i for i in dVdt])
#generate ODE terms
n_vectors = [tuple([1 if i==j else 0 for i in range(n_species)]) for j in range(n_species)]
moment_terms = [Moment(nvec,lhs) for (lhs, nvec) in zip(ymat, n_vectors)]
ode_description = moment_terms + variance_terms
non_redundant_idx = []
ode_terms = []
# remove repetitive covariances, as Vij = Vji
for i, cov in enumerate(ode_description):
if cov in ode_terms:
continue
else:
ode_terms.append(cov)
non_redundant_idx.append(i)
rhs = []
for i in non_redundant_idx:
rhs.append(rhs_redundant[i])
out_problem = ODEProblem("LNA", ode_terms, rhs, sp.Matrix(self.model.parameters))
return out_problem |
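The drift matrix above is A_ik = Σ_j S_ij ∂a_j/∂y_k, built symbolically with sympy. A numeric sketch of the same quantity using central finite differences, on a toy birth-death model with assumed rates:

```python
def jacobian_drift(S, propensities, y, eps=1e-6):
    """A[i][k] = sum_j S[i][j] * d a_j / d y_k, estimated by central
    finite differences (the symbolic code above uses sympy.diff)."""
    n = len(y)
    m = len(propensities)
    A = [[0.0] * n for _ in range(n)]
    for k in range(n):
        up = list(y); up[k] += eps
        down = list(y); down[k] -= eps
        for i in range(n):
            for j in range(m):
                da = (propensities[j](up) - propensities[j](down)) / (2 * eps)
                A[i][k] += S[i][j] * da
    return A

# Toy one-species birth-death model (assumed rates): a1 = 2.0, a2 = 0.5 * y
S = [[1, -1]]
props = [lambda y: 2.0, lambda y: 0.5 * y[0]]
A = jacobian_drift(S, props, [10.0])  # expected A[0][0] = -0.5
```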
def get_resources(cls):
"""Returns Ext Resources."""
plural_mappings = resource_helper.build_plural_mappings(
{}, RESOURCE_ATTRIBUTE_MAP)
# attr.PLURALS.update(plural_mappings)
return resource_helper.build_resource_info(plural_mappings,
RESOURCE_ATTRIBUTE_MAP,
None,
register_quota=True) | Returns Ext Resources. | Below is the instruction that describes the task:
### Input:
Returns Ext Resources.
### Response:
def get_resources(cls):
"""Returns Ext Resources."""
plural_mappings = resource_helper.build_plural_mappings(
{}, RESOURCE_ATTRIBUTE_MAP)
# attr.PLURALS.update(plural_mappings)
return resource_helper.build_resource_info(plural_mappings,
RESOURCE_ATTRIBUTE_MAP,
None,
register_quota=True) |
def hincrby(self, hashkey, attribute, increment=1):
"""Emulate hincrby."""
return self._hincrby(hashkey, attribute, 'HINCRBY', int, increment) | Emulate hincrby. | Below is the instruction that describes the task:
### Input:
Emulate hincrby.
### Response:
def hincrby(self, hashkey, attribute, increment=1):
"""Emulate hincrby."""
return self._hincrby(hashkey, attribute, 'HINCRBY', int, increment) |
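The private `_hincrby` helper is not shown; a dict-based sketch of the Redis HINCRBY semantics it emulates (create hash and field on demand, add the integer increment, return the new value):

```python
def hincrby(store, hashkey, attribute, increment=1):
    """Dict-based sketch of Redis HINCRBY."""
    redis_hash = store.setdefault(hashkey, {})
    new_value = int(redis_hash.get(attribute, 0)) + increment
    redis_hash[attribute] = new_value
    return new_value

store = {}
first = hincrby(store, 'page:1', 'views')       # field created at 0, then +1
second = hincrby(store, 'page:1', 'views', 10)  # existing value +10
```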
def parse_gene_panel(path, institute='cust000', panel_id='test', panel_type='clinical', date=datetime.now(),
version=1.0, display_name=None, genes = None):
"""Parse the panel info and return a gene panel
Args:
path(str): Path to panel file
institute(str): Name of institute that owns the panel
panel_id(str): Panel id
date(datetime.datetime): Date of creation
version(float)
full_name(str): Option to have a long name
Returns:
gene_panel(dict)
"""
LOG.info("Parsing gene panel %s", panel_id)
gene_panel = {}
gene_panel['path'] = path
gene_panel['type'] = panel_type
gene_panel['date'] = date
gene_panel['panel_id'] = panel_id
gene_panel['institute'] = institute
version = version or 1.0
gene_panel['version'] = float(version)
gene_panel['display_name'] = display_name or panel_id
if not path:
panel_handle = genes
else:
panel_handle = get_file_handle(gene_panel['path'])
gene_panel['genes'] = parse_genes(gene_lines=panel_handle)
return gene_panel | Parse the panel info and return a gene panel
Args:
path(str): Path to panel file
institute(str): Name of institute that owns the panel
panel_id(str): Panel id
date(datetime.datetime): Date of creation
version(float)
full_name(str): Option to have a long name
Returns:
        gene_panel(dict) | Below is the instruction that describes the task:
### Input:
Parse the panel info and return a gene panel
Args:
path(str): Path to panel file
institute(str): Name of institute that owns the panel
panel_id(str): Panel id
date(datetime.datetime): Date of creation
version(float)
full_name(str): Option to have a long name
Returns:
gene_panel(dict)
### Response:
def parse_gene_panel(path, institute='cust000', panel_id='test', panel_type='clinical', date=datetime.now(),
version=1.0, display_name=None, genes = None):
"""Parse the panel info and return a gene panel
Args:
path(str): Path to panel file
institute(str): Name of institute that owns the panel
panel_id(str): Panel id
date(datetime.datetime): Date of creation
version(float)
full_name(str): Option to have a long name
Returns:
gene_panel(dict)
"""
LOG.info("Parsing gene panel %s", panel_id)
gene_panel = {}
gene_panel['path'] = path
gene_panel['type'] = panel_type
gene_panel['date'] = date
gene_panel['panel_id'] = panel_id
gene_panel['institute'] = institute
version = version or 1.0
gene_panel['version'] = float(version)
gene_panel['display_name'] = display_name or panel_id
if not path:
panel_handle = genes
else:
panel_handle = get_file_handle(gene_panel['path'])
gene_panel['genes'] = parse_genes(gene_lines=panel_handle)
return gene_panel |
def register(self, new_formulas, *args, **kwargs):
"""
Register formula and meta data.
* ``islinear`` - ``True`` if formula is linear, ``False`` if non-linear.
* ``args`` - position of arguments
* ``units`` - units of returns and arguments as pair of tuples
* ``isconstant`` - constant arguments not included in covariance
:param new_formulas: new formulas to add to registry.
"""
kwargs.update(zip(self.meta_names, args))
# call super method, meta must be passed as kwargs!
super(FormulaRegistry, self).register(new_formulas, **kwargs) | Register formula and meta data.
* ``islinear`` - ``True`` if formula is linear, ``False`` if non-linear.
* ``args`` - position of arguments
* ``units`` - units of returns and arguments as pair of tuples
* ``isconstant`` - constant arguments not included in covariance
    :param new_formulas: new formulas to add to registry. | Below is the instruction that describes the task:
### Input:
Register formula and meta data.
* ``islinear`` - ``True`` if formula is linear, ``False`` if non-linear.
* ``args`` - position of arguments
* ``units`` - units of returns and arguments as pair of tuples
* ``isconstant`` - constant arguments not included in covariance
:param new_formulas: new formulas to add to registry.
### Response:
def register(self, new_formulas, *args, **kwargs):
"""
Register formula and meta data.
* ``islinear`` - ``True`` if formula is linear, ``False`` if non-linear.
* ``args`` - position of arguments
* ``units`` - units of returns and arguments as pair of tuples
* ``isconstant`` - constant arguments not included in covariance
:param new_formulas: new formulas to add to registry.
"""
kwargs.update(zip(self.meta_names, args))
# call super method, meta must be passed as kwargs!
super(FormulaRegistry, self).register(new_formulas, **kwargs) |
def choose_optimizer(optimizer_name, bounds):
"""
Selects the type of local optimizer
"""
if optimizer_name == 'lbfgs':
optimizer = OptLbfgs(bounds)
elif optimizer_name == 'DIRECT':
optimizer = OptDirect(bounds)
elif optimizer_name == 'CMA':
optimizer = OptCma(bounds)
else:
raise InvalidVariableNameError('Invalid optimizer selected.')
    return optimizer | Selects the type of local optimizer | Below is the instruction that describes the task:
### Input:
Selects the type of local optimizer
### Response:
def choose_optimizer(optimizer_name, bounds):
"""
Selects the type of local optimizer
"""
if optimizer_name == 'lbfgs':
optimizer = OptLbfgs(bounds)
elif optimizer_name == 'DIRECT':
optimizer = OptDirect(bounds)
elif optimizer_name == 'CMA':
optimizer = OptCma(bounds)
else:
raise InvalidVariableNameError('Invalid optimizer selected.')
return optimizer |
def generic_loss(top_out, targets, model_hparams, vocab_size, weights_fn):
"""Compute loss numerator and denominator for one shard of output."""
del vocab_size # unused arg
logits = top_out
logits = common_attention.maybe_upcast(logits, hparams=model_hparams)
cutoff = getattr(model_hparams, "video_modality_loss_cutoff", 0.0)
return common_layers.padded_cross_entropy(
logits,
targets,
model_hparams.label_smoothing,
cutoff=cutoff,
      weights_fn=weights_fn) | Compute loss numerator and denominator for one shard of output. | Below is the instruction that describes the task:
### Input:
Compute loss numerator and denominator for one shard of output.
### Response:
def generic_loss(top_out, targets, model_hparams, vocab_size, weights_fn):
"""Compute loss numerator and denominator for one shard of output."""
del vocab_size # unused arg
logits = top_out
logits = common_attention.maybe_upcast(logits, hparams=model_hparams)
cutoff = getattr(model_hparams, "video_modality_loss_cutoff", 0.0)
return common_layers.padded_cross_entropy(
logits,
targets,
model_hparams.label_smoothing,
cutoff=cutoff,
weights_fn=weights_fn) |
def handle_text(self, item):
"""Helper method for fetching a text value."""
doc = yield from self.handle_get(item)
if doc is None:
return None
        return doc.value.c8_array.text or None | Helper method for fetching a text value. | Below is the instruction that describes the task:
### Input:
Helper method for fetching a text value.
### Response:
def handle_text(self, item):
"""Helper method for fetching a text value."""
doc = yield from self.handle_get(item)
if doc is None:
return None
return doc.value.c8_array.text or None |
def operator_relocate(self, graph, solution, op_diff_round_digits, anim):
"""applies Relocate inter-route operator to solution
Takes every node from every route and calculates savings when inserted
into all possible positions in other routes. Insertion is done at
position with max. saving and procedure starts over again with newly
created graph as input. Stops when no improvement is found.
Args
----
graph: :networkx:`NetworkX Graph Obj< >`
        A NetworkX graph is used.
solution: BaseSolution
BaseSolution instance
op_diff_round_digits: float
Precision (floating point digits) for rounding route length differences.
*Details*: In some cases when an exchange is performed on two routes with one node each,
the difference between the both solutions (before and after the exchange) is not zero.
This is due to internal rounding errors of float type. So the loop won't break
(alternating between these two solutions), we need an additional criterion to avoid
this behaviour: A threshold to handle values very close to zero as if they were zero
(for a more detailed description of the matter see http://floating-point-gui.de or
https://docs.python.org/3.5/tutorial/floatingpoint.html)
anim: AnimationDing0
AnimationDing0 object
Returns
-------
LocalSearchSolution
A solution (LocalSearchSolution class)
Notes
-----
(Inner) Loop variables:
* i: node that is checked for possible moves (position in the route `tour`, not node name)
* j: node that precedes the insert position in target route (position in the route `target_tour`, not node name)
Todo
----
* Remove ugly nested loops, convert to more efficient matrix operations
"""
# shorter var names for loop
dm = graph._matrix
dn = graph._nodes
# Relocate: Search better solutions by checking possible node moves
while True:
length_diff_best = 0
for route in solution.routes():
# exclude origin routes with single high-demand nodes (Load Areas)
if len(route._nodes) == 1:
if solution._problem._is_aggregated[str(route._nodes[0])]:
continue
# create tour by adding depot at start and end
tour = [graph._depot] + route._nodes + [graph._depot]
for target_route in solution.routes():
# exclude (origin+target) routes with single high-demand nodes (Load Areas)
if len(target_route._nodes) == 1:
if solution._problem._is_aggregated[str(target_route._nodes[0])]:
continue
target_tour = [graph._depot] + target_route._nodes + [graph._depot]
if route == target_route:
continue
n = len(route._nodes)
nt = len(target_route._nodes)+1
for i in range(0,n):
node = route._nodes[i]
for j in range(0,nt):
#target_node = target_route._nodes[j]
if target_route.can_allocate([node]):
length_diff = (-dm[dn[tour[i].name()]][dn[tour[i+1].name()]] -
dm[dn[tour[i+1].name()]][dn[tour[i+2].name()]] +
dm[dn[tour[i].name()]][dn[tour[i+2].name()]] +
dm[dn[target_tour[j].name()]][dn[tour[i+1].name()]] +
dm[dn[tour[i+1].name()]][dn[target_tour[j+1].name()]] -
dm[dn[target_tour[j].name()]][dn[target_tour[j+1].name()]])
if length_diff < length_diff_best:
length_diff_best = length_diff
node_best, target_route_best, j_best = node, target_route, j
if length_diff_best < 0:
# insert new node
target_route_best.insert([node_best], j_best)
# remove empty routes from solution
solution._routes = [route for route in solution._routes if route._nodes]
if anim is not None:
solution.draw_network(anim)
#print('Bessere Loesung gefunden:', node_best, target_node_best, target_route_best, length_diff_best)
# no improvement found
if round(length_diff_best, op_diff_round_digits) == 0:
break
return solution | applies Relocate inter-route operator to solution
Takes every node from every route and calculates savings when inserted
into all possible positions in other routes. Insertion is done at
position with max. saving and procedure starts over again with newly
created graph as input. Stops when no improvement is found.
Args
----
graph: :networkx:`NetworkX Graph Obj< >`
        A NetworkX graph is used.
solution: BaseSolution
BaseSolution instance
op_diff_round_digits: float
Precision (floating point digits) for rounding route length differences.
*Details*: In some cases when an exchange is performed on two routes with one node each,
the difference between the both solutions (before and after the exchange) is not zero.
This is due to internal rounding errors of float type. So the loop won't break
(alternating between these two solutions), we need an additional criterion to avoid
this behaviour: A threshold to handle values very close to zero as if they were zero
(for a more detailed description of the matter see http://floating-point-gui.de or
https://docs.python.org/3.5/tutorial/floatingpoint.html)
anim: AnimationDing0
AnimationDing0 object
Returns
-------
LocalSearchSolution
A solution (LocalSearchSolution class)
Notes
-----
(Inner) Loop variables:
* i: node that is checked for possible moves (position in the route `tour`, not node name)
* j: node that precedes the insert position in target route (position in the route `target_tour`, not node name)
Todo
----
        * Remove ugly nested loops, convert to more efficient matrix operations | Below is the instruction that describes the task:
### Input:
applies Relocate inter-route operator to solution
Takes every node from every route and calculates savings when inserted
into all possible positions in other routes. Insertion is done at
position with max. saving and procedure starts over again with newly
created graph as input. Stops when no improvement is found.
Args
----
graph: :networkx:`NetworkX Graph Obj< >`
        A NetworkX graph is used.
solution: BaseSolution
BaseSolution instance
op_diff_round_digits: float
Precision (floating point digits) for rounding route length differences.
*Details*: In some cases when an exchange is performed on two routes with one node each,
the difference between the both solutions (before and after the exchange) is not zero.
This is due to internal rounding errors of float type. So the loop won't break
(alternating between these two solutions), we need an additional criterion to avoid
this behaviour: A threshold to handle values very close to zero as if they were zero
(for a more detailed description of the matter see http://floating-point-gui.de or
https://docs.python.org/3.5/tutorial/floatingpoint.html)
anim: AnimationDing0
AnimationDing0 object
Returns
-------
LocalSearchSolution
A solution (LocalSearchSolution class)
Notes
-----
(Inner) Loop variables:
* i: node that is checked for possible moves (position in the route `tour`, not node name)
* j: node that precedes the insert position in target route (position in the route `target_tour`, not node name)
Todo
----
* Remove ugly nested loops, convert to more efficient matrix operations
### Response:
def operator_relocate(self, graph, solution, op_diff_round_digits, anim):
"""applies Relocate inter-route operator to solution
Takes every node from every route and calculates savings when inserted
into all possible positions in other routes. Insertion is done at
position with max. saving and procedure starts over again with newly
created graph as input. Stops when no improvement is found.
Args
----
graph: :networkx:`NetworkX Graph Obj< >`
        A NetworkX graph is used.
solution: BaseSolution
BaseSolution instance
op_diff_round_digits: float
Precision (floating point digits) for rounding route length differences.
*Details*: In some cases when an exchange is performed on two routes with one node each,
the difference between the both solutions (before and after the exchange) is not zero.
This is due to internal rounding errors of float type. So the loop won't break
(alternating between these two solutions), we need an additional criterion to avoid
this behaviour: A threshold to handle values very close to zero as if they were zero
(for a more detailed description of the matter see http://floating-point-gui.de or
https://docs.python.org/3.5/tutorial/floatingpoint.html)
anim: AnimationDing0
AnimationDing0 object
Returns
-------
LocalSearchSolution
A solution (LocalSearchSolution class)
Notes
-----
(Inner) Loop variables:
* i: node that is checked for possible moves (position in the route `tour`, not node name)
* j: node that precedes the insert position in target route (position in the route `target_tour`, not node name)
Todo
----
* Remove ugly nested loops, convert to more efficient matrix operations
"""
# shorter var names for loop
dm = graph._matrix
dn = graph._nodes
# Relocate: Search better solutions by checking possible node moves
while True:
length_diff_best = 0
for route in solution.routes():
# exclude origin routes with single high-demand nodes (Load Areas)
if len(route._nodes) == 1:
if solution._problem._is_aggregated[str(route._nodes[0])]:
continue
# create tour by adding depot at start and end
tour = [graph._depot] + route._nodes + [graph._depot]
for target_route in solution.routes():
# exclude (origin+target) routes with single high-demand nodes (Load Areas)
if len(target_route._nodes) == 1:
if solution._problem._is_aggregated[str(target_route._nodes[0])]:
continue
target_tour = [graph._depot] + target_route._nodes + [graph._depot]
if route == target_route:
continue
n = len(route._nodes)
nt = len(target_route._nodes)+1
for i in range(0,n):
node = route._nodes[i]
for j in range(0,nt):
#target_node = target_route._nodes[j]
if target_route.can_allocate([node]):
length_diff = (-dm[dn[tour[i].name()]][dn[tour[i+1].name()]] -
dm[dn[tour[i+1].name()]][dn[tour[i+2].name()]] +
dm[dn[tour[i].name()]][dn[tour[i+2].name()]] +
dm[dn[target_tour[j].name()]][dn[tour[i+1].name()]] +
dm[dn[tour[i+1].name()]][dn[target_tour[j+1].name()]] -
dm[dn[target_tour[j].name()]][dn[target_tour[j+1].name()]])
if length_diff < length_diff_best:
length_diff_best = length_diff
node_best, target_route_best, j_best = node, target_route, j
if length_diff_best < 0:
# insert new node
target_route_best.insert([node_best], j_best)
# remove empty routes from solution
solution._routes = [route for route in solution._routes if route._nodes]
if anim is not None:
solution.draw_network(anim)
#print('Bessere Loesung gefunden:', node_best, target_node_best, target_route_best, length_diff_best)
# no improvement found
if round(length_diff_best, op_diff_round_digits) == 0:
break
return solution |
def session_scope(session_cls=None):
"""Provide a transactional scope around a series of operations."""
session = session_cls() if session_cls else Session()
try:
yield session
session.commit()
except Exception:
session.rollback()
raise
finally:
        session.close() | Provide a transactional scope around a series of operations. | Below is the instruction that describes the task:
### Input:
Provide a transactional scope around a series of operations.
### Response:
def session_scope(session_cls=None):
"""Provide a transactional scope around a series of operations."""
session = session_cls() if session_cls else Session()
try:
yield session
session.commit()
except Exception:
session.rollback()
raise
finally:
session.close() |
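The commit-on-success, rollback-on-error semantics of the `session_scope` record above can be exercised with a minimal stand-in session. `FakeSession` is a hypothetical stub, not part of the original code or of SQLAlchemy; it only records which lifecycle methods were called.

```python
from contextlib import contextmanager

class FakeSession:
    """Hypothetical stand-in for a SQLAlchemy Session, used only to
    illustrate the commit/rollback semantics of session_scope."""
    def __init__(self):
        self.committed = False
        self.rolled_back = False
        self.closed = False
    def commit(self):
        self.committed = True
    def rollback(self):
        self.rolled_back = True
    def close(self):
        self.closed = True

@contextmanager
def session_scope(session_cls=FakeSession):
    # Same control flow as the record above: commit on success,
    # rollback and re-raise on error, always close.
    session = session_cls()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

# Success path: committed and closed, never rolled back.
with session_scope() as ok:
    pass
assert ok.committed and ok.closed and not ok.rolled_back

# Failure path: rolled back, closed, and the exception propagates.
try:
    with session_scope() as bad:
        raise ValueError("boom")
except ValueError:
    pass
assert bad.rolled_back and bad.closed and not bad.committed
```

The `finally: close()` clause is what guarantees the session is released on both paths.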
def encode(self, pad=106):
"""Encodes this AIT command to binary.
If pad is specified, it indicates the maximum size of the encoded
command in bytes. If the encoded command is less than pad, the
remaining bytes are set to zero.
Commands sent to ISS payloads over 1553 are limited to 64 words
(128 bytes) with 11 words (22 bytes) of CCSDS overhead (SSP
52050J, Section 3.2.3.4). This leaves 53 words (106 bytes) for
the command itself.
"""
opcode = struct.pack('>H', self.defn.opcode)
offset = len(opcode)
size = max(offset + self.defn.argsize, pad)
encoded = bytearray(size)
encoded[0:offset] = opcode
encoded[offset] = self.defn.argsize
offset += 1
index = 0
for defn in self.defn.argdefns:
if defn.fixed:
value = defn.value
else:
value = self.args[index]
index += 1
encoded[defn.slice(offset)] = defn.encode(value)
return encoded | Encodes this AIT command to binary.
If pad is specified, it indicates the maximum size of the encoded
command in bytes. If the encoded command is less than pad, the
remaining bytes are set to zero.
Commands sent to ISS payloads over 1553 are limited to 64 words
(128 bytes) with 11 words (22 bytes) of CCSDS overhead (SSP
52050J, Section 3.2.3.4). This leaves 53 words (106 bytes) for
        the command itself. | Below is the instruction that describes the task:
### Input:
Encodes this AIT command to binary.
If pad is specified, it indicates the maximum size of the encoded
command in bytes. If the encoded command is less than pad, the
remaining bytes are set to zero.
Commands sent to ISS payloads over 1553 are limited to 64 words
(128 bytes) with 11 words (22 bytes) of CCSDS overhead (SSP
52050J, Section 3.2.3.4). This leaves 53 words (106 bytes) for
the command itself.
### Response:
def encode(self, pad=106):
"""Encodes this AIT command to binary.
If pad is specified, it indicates the maximum size of the encoded
command in bytes. If the encoded command is less than pad, the
remaining bytes are set to zero.
Commands sent to ISS payloads over 1553 are limited to 64 words
(128 bytes) with 11 words (22 bytes) of CCSDS overhead (SSP
52050J, Section 3.2.3.4). This leaves 53 words (106 bytes) for
the command itself.
"""
opcode = struct.pack('>H', self.defn.opcode)
offset = len(opcode)
size = max(offset + self.defn.argsize, pad)
encoded = bytearray(size)
encoded[0:offset] = opcode
encoded[offset] = self.defn.argsize
offset += 1
index = 0
for defn in self.defn.argdefns:
if defn.fixed:
value = defn.value
else:
value = self.args[index]
index += 1
encoded[defn.slice(offset)] = defn.encode(value)
return encoded |
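The fixed-size framing used by the `encode` record (2-byte big-endian opcode, 1-byte argument size, arguments, zero padding up to `pad`) can be sketched in isolation. `encode_frame` below is a simplified, hypothetical analogue, not the AIT API; field widths match the record but the argument-definition machinery is omitted.

```python
import struct

def encode_frame(opcode, payload, pad=16):
    # Simplified sketch of the padding scheme above: 2-byte big-endian
    # opcode, 1-byte payload size, payload bytes, then zero padding so
    # the frame is at least `pad` bytes long.
    header = struct.pack('>H', opcode)
    encoded = bytearray(max(len(header) + 1 + len(payload), pad))
    encoded[0:2] = header
    encoded[2] = len(payload)
    encoded[3:3 + len(payload)] = payload
    return bytes(encoded)

frame = encode_frame(0x1234, b'\x01\x02')
assert frame[:2] == b'\x12\x34'   # opcode, big-endian
assert frame[2] == 2              # payload size byte
assert frame[3:5] == b'\x01\x02'  # payload
assert len(frame) == 16           # zero-padded to the fixed size
```

Pre-allocating a zeroed `bytearray` of the target size makes the padding implicit, as in the original.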
def head(file_path, lines=10, encoding="utf-8", printed=True,
errors='strict'):
"""
Read the first N lines of a file, defaults to 10
:param file_path: Path to file to read
:param lines: Number of lines to read in
:param encoding: defaults to utf-8 to decode as, will fail on binary
:param printed: Automatically print the lines instead of returning it
:param errors: Decoding errors: 'strict', 'ignore' or 'replace'
:return: if printed is false, the lines are returned as a list
"""
data = []
with open(file_path, "rb") as f:
for _ in range(lines):
try:
if python_version >= (2, 7):
data.append(next(f).decode(encoding, errors=errors))
else:
data.append(next(f).decode(encoding))
except StopIteration:
break
if printed:
print("".join(data))
else:
return data | Read the first N lines of a file, defaults to 10
:param file_path: Path to file to read
:param lines: Number of lines to read in
:param encoding: defaults to utf-8 to decode as, will fail on binary
:param printed: Automatically print the lines instead of returning it
:param errors: Decoding errors: 'strict', 'ignore' or 'replace'
    :return: if printed is false, the lines are returned as a list | Below is the instruction that describes the task:
### Input:
Read the first N lines of a file, defaults to 10
:param file_path: Path to file to read
:param lines: Number of lines to read in
:param encoding: defaults to utf-8 to decode as, will fail on binary
:param printed: Automatically print the lines instead of returning it
:param errors: Decoding errors: 'strict', 'ignore' or 'replace'
:return: if printed is false, the lines are returned as a list
### Response:
def head(file_path, lines=10, encoding="utf-8", printed=True,
errors='strict'):
"""
Read the first N lines of a file, defaults to 10
:param file_path: Path to file to read
:param lines: Number of lines to read in
:param encoding: defaults to utf-8 to decode as, will fail on binary
:param printed: Automatically print the lines instead of returning it
:param errors: Decoding errors: 'strict', 'ignore' or 'replace'
:return: if printed is false, the lines are returned as a list
"""
data = []
with open(file_path, "rb") as f:
for _ in range(lines):
try:
if python_version >= (2, 7):
data.append(next(f).decode(encoding, errors=errors))
else:
data.append(next(f).decode(encoding))
except StopIteration:
break
if printed:
print("".join(data))
else:
return data |
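The "first N lines" behaviour of the `head` record can also be expressed with `itertools.islice`, which stops cleanly when the file is shorter than the requested count (the same job the `StopIteration` handling does above). This is an illustrative alternative sketch, not the library's API.

```python
import io
from itertools import islice

def head_lines(fileobj, lines=10):
    """Return up to the first `lines` lines of an open text file object."""
    return list(islice(fileobj, lines))

sample = io.StringIO("a\nb\nc\nd\n")
assert head_lines(sample, 2) == ["a\n", "b\n"]
# Asking for more lines than exist is safe: islice just stops early.
assert head_lines(io.StringIO("x\n"), 10) == ["x\n"]
```

For a path on disk, the same helper works inside `with open(path, encoding="utf-8") as f:`.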
def getOverlayTransformTrackedDeviceComponent(self, ulOverlayHandle, pchComponentName, unComponentNameSize):
"""Gets the transform information when the overlay is rendering on a component."""
fn = self.function_table.getOverlayTransformTrackedDeviceComponent
punDeviceIndex = TrackedDeviceIndex_t()
result = fn(ulOverlayHandle, byref(punDeviceIndex), pchComponentName, unComponentNameSize)
        return result, punDeviceIndex | Gets the transform information when the overlay is rendering on a component. | Below is the instruction that describes the task:
### Input:
Gets the transform information when the overlay is rendering on a component.
### Response:
def getOverlayTransformTrackedDeviceComponent(self, ulOverlayHandle, pchComponentName, unComponentNameSize):
"""Gets the transform information when the overlay is rendering on a component."""
fn = self.function_table.getOverlayTransformTrackedDeviceComponent
punDeviceIndex = TrackedDeviceIndex_t()
result = fn(ulOverlayHandle, byref(punDeviceIndex), pchComponentName, unComponentNameSize)
return result, punDeviceIndex |
def buscar_ambientep44_por_finalidade_cliente(
self,
finalidade_txt,
cliente_txt):
"""Search ambiente_p44_txt environment vip
:return: Dictionary with the following structure:
::
{‘ambiente_p44_txt’:
'id':<'id_ambientevip'>,
‘finalidade’: <'finalidade_txt'>,
'cliente_txt: <'cliente_txt'>',
'ambiente_p44: <'ambiente_p44'>',}
:raise InvalidParameterError: finalidade_txt and cliente_txt is null and invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['finalidade_txt'] = finalidade_txt
vip_map['cliente_txt'] = cliente_txt
url = 'environment-vip/get/ambiente_p44_txt/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml) | Search ambiente_p44_txt environment vip
:return: Dictionary with the following structure:
::
{‘ambiente_p44_txt’:
'id':<'id_ambientevip'>,
‘finalidade’: <'finalidade_txt'>,
'cliente_txt: <'cliente_txt'>',
'ambiente_p44: <'ambiente_p44'>',}
:raise InvalidParameterError: finalidade_txt and cliente_txt is null and invalid.
:raise DataBaseError: Networkapi failed to access the database.
        :raise XMLError: Networkapi failed to generate the XML response. | Below is the instruction that describes the task:
### Input:
Search ambiente_p44_txt environment vip
:return: Dictionary with the following structure:
::
{‘ambiente_p44_txt’:
'id':<'id_ambientevip'>,
‘finalidade’: <'finalidade_txt'>,
'cliente_txt: <'cliente_txt'>',
'ambiente_p44: <'ambiente_p44'>',}
:raise InvalidParameterError: finalidade_txt and cliente_txt is null and invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
### Response:
def buscar_ambientep44_por_finalidade_cliente(
self,
finalidade_txt,
cliente_txt):
"""Search ambiente_p44_txt environment vip
:return: Dictionary with the following structure:
::
{‘ambiente_p44_txt’:
'id':<'id_ambientevip'>,
‘finalidade’: <'finalidade_txt'>,
'cliente_txt: <'cliente_txt'>',
'ambiente_p44: <'ambiente_p44'>',}
:raise InvalidParameterError: finalidade_txt and cliente_txt is null and invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['finalidade_txt'] = finalidade_txt
vip_map['cliente_txt'] = cliente_txt
url = 'environment-vip/get/ambiente_p44_txt/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml) |
def credits(self):
"""Returns either a tuple representing the credit range or a
single integer if the range is set to one value.
Use self.cred to always get the tuple.
"""
if self.cred[0] == self.cred[1]:
return self.cred[0]
return self.cred | Returns either a tuple representing the credit range or a
single integer if the range is set to one value.
        Use self.cred to always get the tuple. | Below is the instruction that describes the task:
### Input:
Returns either a tuple representing the credit range or a
single integer if the range is set to one value.
Use self.cred to always get the tuple.
### Response:
def credits(self):
"""Returns either a tuple representing the credit range or a
single integer if the range is set to one value.
Use self.cred to always get the tuple.
"""
if self.cred[0] == self.cred[1]:
return self.cred[0]
return self.cred |
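The range-collapsing behaviour of the `credits` property above can be shown standalone. `Course` here is a hypothetical holder class invented for the example; only the property body mirrors the record.

```python
class Course:
    """Hypothetical holder for a (min, max) credit range."""
    def __init__(self, cred):
        self.cred = cred

    @property
    def credits(self):
        # A degenerate range like (3, 3) collapses to the integer 3;
        # a real range is returned as the tuple itself.
        if self.cred[0] == self.cred[1]:
            return self.cred[0]
        return self.cred

assert Course((3, 3)).credits == 3
assert Course((1, 4)).credits == (1, 4)
```

Callers that always need the tuple read `self.cred` directly, exactly as the docstring advises.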
def selfoss(reset_password=False):
'''Install, update and set up selfoss.
This selfoss installation uses sqlite (selfoss-default), php5-fpm and nginx.
The connection is https-only and secured by a letsencrypt certificate. This
certificate must be created separately with task setup.server_letsencrypt.
More infos:
https://selfoss.aditu.de/
https://github.com/SSilence/selfoss/wiki
https://www.heise.de/ct/ausgabe/2016-13-RSS-Reader-Selfoss-hat-die-Nachrichtenlage-im-Blick-3228045.html
https://ct.de/yqp7
'''
hostname = re.sub(r'^[^@]+@', '', env.host) # without username if any
sitename = query_input(
question='\nEnter site-name of Your trac web service',
default=flo('selfoss.{hostname}'))
username = env.user
site_dir = flo('/home/{username}/sites/{sitename}')
checkout_latest_release_of_selfoss()
create_directory_structure(site_dir)
restored = install_selfoss(sitename, site_dir, username)
nginx_site_config(username, sitename, hostname)
enable_php5_socket_file()
if not restored or reset_password:
setup_selfoss_user(username, sitename, site_dir)
print_msg('\n## reload nginx and restart php\n')
run('sudo service nginx reload')
run('sudo service php5-fpm restart') | Install, update and set up selfoss.
This selfoss installation uses sqlite (selfoss-default), php5-fpm and nginx.
The connection is https-only and secured by a letsencrypt certificate. This
certificate must be created separately with task setup.server_letsencrypt.
More infos:
https://selfoss.aditu.de/
https://github.com/SSilence/selfoss/wiki
https://www.heise.de/ct/ausgabe/2016-13-RSS-Reader-Selfoss-hat-die-Nachrichtenlage-im-Blick-3228045.html
    https://ct.de/yqp7 | Below is the instruction that describes the task:
### Input:
Install, update and set up selfoss.
This selfoss installation uses sqlite (selfoss-default), php5-fpm and nginx.
The connection is https-only and secured by a letsencrypt certificate. This
certificate must be created separately with task setup.server_letsencrypt.
More infos:
https://selfoss.aditu.de/
https://github.com/SSilence/selfoss/wiki
https://www.heise.de/ct/ausgabe/2016-13-RSS-Reader-Selfoss-hat-die-Nachrichtenlage-im-Blick-3228045.html
https://ct.de/yqp7
### Response:
def selfoss(reset_password=False):
'''Install, update and set up selfoss.
This selfoss installation uses sqlite (selfoss-default), php5-fpm and nginx.
The connection is https-only and secured by a letsencrypt certificate. This
certificate must be created separately with task setup.server_letsencrypt.
More infos:
https://selfoss.aditu.de/
https://github.com/SSilence/selfoss/wiki
https://www.heise.de/ct/ausgabe/2016-13-RSS-Reader-Selfoss-hat-die-Nachrichtenlage-im-Blick-3228045.html
https://ct.de/yqp7
'''
hostname = re.sub(r'^[^@]+@', '', env.host) # without username if any
sitename = query_input(
question='\nEnter site-name of Your trac web service',
default=flo('selfoss.{hostname}'))
username = env.user
site_dir = flo('/home/{username}/sites/{sitename}')
checkout_latest_release_of_selfoss()
create_directory_structure(site_dir)
restored = install_selfoss(sitename, site_dir, username)
nginx_site_config(username, sitename, hostname)
enable_php5_socket_file()
if not restored or reset_password:
setup_selfoss_user(username, sitename, site_dir)
print_msg('\n## reload nginx and restart php\n')
run('sudo service nginx reload')
run('sudo service php5-fpm restart') |
def _preloading_env(self):
"""
A "stripped" jinja environment.
"""
ctx = self.env.globals
try:
ctx['random_model'] = lambda *a, **kw: None
ctx['random_models'] = lambda *a, **kw: None
yield self.env
finally:
ctx['random_model'] = jinja2.contextfunction(random_model)
        ctx['random_models'] = jinja2.contextfunction(random_models) | A "stripped" jinja environment. | Below is the instruction that describes the task:
### Input:
A "stripped" jinja environment.
### Response:
def _preloading_env(self):
"""
A "stripped" jinja environment.
"""
ctx = self.env.globals
try:
ctx['random_model'] = lambda *a, **kw: None
ctx['random_models'] = lambda *a, **kw: None
yield self.env
finally:
ctx['random_model'] = jinja2.contextfunction(random_model)
ctx['random_models'] = jinja2.contextfunction(random_models) |
def download_scans(sc, age=0, unzip=False, path='scans'):
'''Scan Downloader
Here we will attempt to download all of the scans that have completed between
now and AGE days ago.
sc = SecurityCenter5 object
age = how many days back do we want to pull? (default: 0)
unzip = Do we want to uncompress the nessus files? (default: False)
path = Path where the resulting data will be placed. (default: scans)
'''
# if the download path doesn't exist, we need to create it.
if not os.path.exists(path):
logger.debug('scan path didn\'t exist. creating it.')
os.makedirs(path)
    # Now we will need to compute the timestamp for the date that the age has
    # specified. The API expects this in a unix timestamp format.
findate = (date.today() - timedelta(days=age))
    # Let's get the list of scans that had completed within the timeframe that we
# had specified.
logger.debug('getting scan results for parsing')
resp = sc.get('scanResult', params={
'startTime': int(time.mktime(findate.timetuple())),
'fields': 'name,finishTime,downloadAvailable,repository',
})
for scan in resp.json()['response']['usable']:
# If this particular scan does not have any results (either it was a
# partial, failed, or incomplete scan) then we have nothing further to
# do and should simply ignore this scan.
if scan['downloadAvailable'] == 'false':
logger.debug('%s/"%s" not available for download' % (scan['id'],
scan['name']))
else:
# Well look, this scan actually has results, lets go ahead and pull
# them down.
logger.debug('%s/"%s" downloading' % (scan['id'], scan['name']))
scandata = sc.post('scanResult/%s/download' % scan['id'],
json={'downloadType': 'v2'})
sfin = datetime.fromtimestamp(int(scan['finishTime']))
# The filename is being computed generically here. As this will be
# used whether we extract the .nessus file out of the zipfile or
# not.
filename = '%s-%s.%s.%s' % (scan['id'],
scan['name'].replace(' ', '_'),
scan['repository']['id'],
sfin.strftime('%Y.%m.%d-%H.%M'))
if unzip:
# Unzip that .nessus file!
logger.debug('extracting %s/%s' % (scan['id'], scan['name']))
zfile = ZipFile(StringIO(buf=scandata.content))
scanfile = zfile.filelist[0]
scanfile.filename = '%s.nessus' % filename
zfile.extract(scanfile, path=path)
else:
# We want to keep it compressed, just dump to disk.
logger.debug('writing zip for %s/%s' % (scan['id'], scan['name']))
with open('%s.zip' % filename, 'wb') as zfile:
zfile.write(scandata.content)
# We're done with this scan file!
logger.info('%s/"%s" downloaded' % (scan['id'], scan['name'])) | Scan Downloader
Here we will attempt to download all of the scans that have completed between
now and AGE days ago.
sc = SecurityCenter5 object
age = how many days back do we want to pull? (default: 0)
unzip = Do we want to uncompress the nessus files? (default: False)
path = Path where the resulting data will be placed. (default: scans) | Below is the instruction that describes the task:
### Input:
Scan Downloader
Here we will attempt to download all of the scans that have completed between
now and AGE days ago.
sc = SecurityCenter5 object
age = how many days back do we want to pull? (default: 0)
unzip = Do we want to uncompress the nessus files? (default: False)
path = Path where the resulting data will be placed. (default: scans)
### Response:
def download_scans(sc, age=0, unzip=False, path='scans'):
'''Scan Downloader
Here we will attempt to download all of the scans that have completed between
now and AGE days ago.
sc = SecurityCenter5 object
age = how many days back do we want to pull? (default: 0)
unzip = Do we want to uncompress the nessus files? (default: False)
path = Path where the resulting data will be placed. (default: scans)
'''
# if the download path doesn't exist, we need to create it.
if not os.path.exists(path):
logger.debug('scan path didn\'t exist. creating it.')
os.makedirs(path)
# Now we will need to compute the timestamp for the date that the age has
# specified. The API expects this in a unix timestamp format.
findate = (date.today() - timedelta(days=age))
# Let's get the list of scans that had completed within the timeframe that we
# had specified.
logger.debug('getting scan results for parsing')
resp = sc.get('scanResult', params={
'startTime': int(time.mktime(findate.timetuple())),
'fields': 'name,finishTime,downloadAvailable,repository',
})
for scan in resp.json()['response']['usable']:
# If this particular scan does not have any results (either it was a
# partial, failed, or incomplete scan) then we have nothing further to
# do and should simply ignore this scan.
if scan['downloadAvailable'] == 'false':
logger.debug('%s/"%s" not available for download' % (scan['id'],
scan['name']))
else:
# Well look, this scan actually has results, lets go ahead and pull
# them down.
logger.debug('%s/"%s" downloading' % (scan['id'], scan['name']))
scandata = sc.post('scanResult/%s/download' % scan['id'],
json={'downloadType': 'v2'})
sfin = datetime.fromtimestamp(int(scan['finishTime']))
# The filename is being computed generically here. As this will be
# used whether we extract the .nessus file out of the zipfile or
# not.
filename = '%s-%s.%s.%s' % (scan['id'],
scan['name'].replace(' ', '_'),
scan['repository']['id'],
sfin.strftime('%Y.%m.%d-%H.%M'))
if unzip:
# Unzip that .nessus file!
logger.debug('extracting %s/%s' % (scan['id'], scan['name']))
zfile = ZipFile(StringIO(buf=scandata.content))
scanfile = zfile.filelist[0]
scanfile.filename = '%s.nessus' % filename
zfile.extract(scanfile, path=path)
else:
# We want to keep it compressed, just dump to disk.
logger.debug('writing zip for %s/%s' % (scan['id'], scan['name']))
with open('%s.zip' % filename, 'wb') as zfile:
zfile.write(scandata.content)
# We're done with this scan file!
logger.info('%s/"%s" downloaded' % (scan['id'], scan['name'])) |
def get_charset(content_type):
"""Function used to retrieve the charset from a content-type.If there is no
charset in the content type then the charset defined on DEFAULT_CHARSET
will be returned
:param content_type: A string containing a Content-Type header
:returns: A string containing the charset
"""
if not content_type:
return DEFAULT_CHARSET
matched = _get_charset_re.search(content_type)
if matched:
# Extract the charset and strip its double quotes
return matched.group('charset').replace('"', '')
return DEFAULT_CHARSET | Function used to retrieve the charset from a content-type. If there is no
charset in the content type then the charset defined on DEFAULT_CHARSET
will be returned
:param content_type: A string containing a Content-Type header
:returns: A string containing the charset | Below is the instruction that describes the task:
### Input:
Function used to retrieve the charset from a content-type. If there is no
charset in the content type then the charset defined on DEFAULT_CHARSET
will be returned
:param content_type: A string containing a Content-Type header
:returns: A string containing the charset
### Response:
def get_charset(content_type):
"""Function used to retrieve the charset from a content-type.If there is no
charset in the content type then the charset defined on DEFAULT_CHARSET
will be returned
:param content_type: A string containing a Content-Type header
:returns: A string containing the charset
"""
if not content_type:
return DEFAULT_CHARSET
matched = _get_charset_re.search(content_type)
if matched:
# Extract the charset and strip its double quotes
return matched.group('charset').replace('"', '')
return DEFAULT_CHARSET |
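As a self-contained sketch of the charset-extraction behavior documented above: the `_get_charset_re` pattern and the `DEFAULT_CHARSET` constant are not shown in this excerpt, so the regex and the `utf-8` default below are assumptions.

```python
import re

DEFAULT_CHARSET = "utf-8"  # assumed default; the original constant is not shown
# Assumed pattern: matches a `charset=` parameter, optionally double-quoted
_get_charset_re = re.compile(r'charset=(?P<charset>"?[\w-]+"?)', re.IGNORECASE)

def get_charset(content_type):
    """Return the charset of a Content-Type header, or DEFAULT_CHARSET."""
    if not content_type:
        return DEFAULT_CHARSET
    matched = _get_charset_re.search(content_type)
    if matched:
        # Extract the charset and strip its double quotes
        return matched.group("charset").replace('"', "")
    return DEFAULT_CHARSET
```

With this assumed pattern, quoted and unquoted charsets resolve identically, and a missing or empty header falls back to the default.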
def select(self, model):
"""Select nodes according to the input selector.
This can ALWAYS return multiple root elements.
"""
res = []
def doSelect(value, pre, remaining):
if not remaining:
res.append((pre, value))
else:
# For the other selectors to work, value must be a Tuple or a list at this point.
if not is_tuple(value) and not isinstance(value, list):
return
qhead, qtail = remaining[0], remaining[1:]
if isinstance(qhead, tuple) and is_tuple(value):
for alt in qhead:
if alt in value:
doSelect(value[alt], pre + [alt], qtail)
elif qhead == '*':
if isinstance(value, list):
indices = range(len(value))
reprs = [listKey(i) for i in indices]
else:
indices = value.keys()
reprs = indices
for key, rep in zip(indices, reprs):
doSelect(value[key], pre + [rep], qtail)
elif isinstance(qhead, int) and isinstance(value, list):
doSelect(value[qhead], pre + [listKey(qhead)], qtail)
elif is_tuple(value):
if qhead in value:
doSelect(value[qhead], pre + [qhead], qtail)
for selector in self.selectors:
doSelect(model, [], selector)
return QueryResult(res) | Select nodes according to the input selector.
This can ALWAYS return multiple root elements. | Below is the instruction that describes the task:
### Input:
Select nodes according to the input selector.
This can ALWAYS return multiple root elements.
### Response:
def select(self, model):
"""Select nodes according to the input selector.
This can ALWAYS return multiple root elements.
"""
res = []
def doSelect(value, pre, remaining):
if not remaining:
res.append((pre, value))
else:
# For the other selectors to work, value must be a Tuple or a list at this point.
if not is_tuple(value) and not isinstance(value, list):
return
qhead, qtail = remaining[0], remaining[1:]
if isinstance(qhead, tuple) and is_tuple(value):
for alt in qhead:
if alt in value:
doSelect(value[alt], pre + [alt], qtail)
elif qhead == '*':
if isinstance(value, list):
indices = range(len(value))
reprs = [listKey(i) for i in indices]
else:
indices = value.keys()
reprs = indices
for key, rep in zip(indices, reprs):
doSelect(value[key], pre + [rep], qtail)
elif isinstance(qhead, int) and isinstance(value, list):
doSelect(value[qhead], pre + [listKey(qhead)], qtail)
elif is_tuple(value):
if qhead in value:
doSelect(value[qhead], pre + [qhead], qtail)
for selector in self.selectors:
doSelect(model, [], selector)
return QueryResult(res) |
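The wildcard/index dispatch inside `doSelect` can be illustrated with a simplified version that walks plain dicts and lists; `is_tuple`, `listKey`, and `QueryResult` from the original are replaced with ordinary Python types here, so this is a sketch of the traversal logic rather than the real implementation.

```python
def select(model, selector):
    """Collect (path, value) pairs matching a selector list; '*' is a wildcard."""
    results = []

    def walk(value, path, remaining):
        if not remaining:
            results.append((path, value))
            return
        # Only containers can be descended into
        if not isinstance(value, (dict, list)):
            return
        head, tail = remaining[0], remaining[1:]
        if head == "*":
            # Wildcard: fan out over every index or key
            if isinstance(value, list):
                for i, item in enumerate(value):
                    walk(item, path + [i], tail)
            else:
                for key, item in value.items():
                    walk(item, path + [key], tail)
        elif isinstance(value, list) and isinstance(head, int):
            if 0 <= head < len(value):
                walk(value[head], path + [head], tail)
        elif isinstance(value, dict) and head in value:
            walk(value[head], path + [head], tail)

    walk(model, [], selector)
    return results
```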
def start_new_thread(function, args, kwargs={}):
"""Dummy implementation of thread.start_new_thread().
Compatibility is maintained by making sure that ``args`` is a
tuple and ``kwargs`` is a dictionary. If an exception is raised
and it is SystemExit (which can be done by thread.exit()) it is
caught and nothing is done; all other exceptions are printed out
by using traceback.print_exc().
If the executed function calls interrupt_main the KeyboardInterrupt will be
raised when the function returns.
"""
if type(args) != type(tuple()):
raise TypeError("2nd arg must be a tuple")
if type(kwargs) != type(dict()):
raise TypeError("3rd arg must be a dict")
global _main
_main = False
try:
function(*args, **kwargs)
except SystemExit:
pass
except:
_traceback.print_exc()
_main = True
global _interrupt
if _interrupt:
_interrupt = False
raise KeyboardInterrupt | Dummy implementation of thread.start_new_thread().
Compatibility is maintained by making sure that ``args`` is a
tuple and ``kwargs`` is a dictionary. If an exception is raised
and it is SystemExit (which can be done by thread.exit()) it is
caught and nothing is done; all other exceptions are printed out
by using traceback.print_exc().
If the executed function calls interrupt_main the KeyboardInterrupt will be
raised when the function returns. | Below is the instruction that describes the task:
### Input:
Dummy implementation of thread.start_new_thread().
Compatibility is maintained by making sure that ``args`` is a
tuple and ``kwargs`` is a dictionary. If an exception is raised
and it is SystemExit (which can be done by thread.exit()) it is
caught and nothing is done; all other exceptions are printed out
by using traceback.print_exc().
If the executed function calls interrupt_main the KeyboardInterrupt will be
raised when the function returns.
### Response:
def start_new_thread(function, args, kwargs={}):
"""Dummy implementation of thread.start_new_thread().
Compatibility is maintained by making sure that ``args`` is a
tuple and ``kwargs`` is a dictionary. If an exception is raised
and it is SystemExit (which can be done by thread.exit()) it is
caught and nothing is done; all other exceptions are printed out
by using traceback.print_exc().
If the executed function calls interrupt_main the KeyboardInterrupt will be
raised when the function returns.
"""
if type(args) != type(tuple()):
raise TypeError("2nd arg must be a tuple")
if type(kwargs) != type(dict()):
raise TypeError("3rd arg must be a dict")
global _main
_main = False
try:
function(*args, **kwargs)
except SystemExit:
pass
except:
_traceback.print_exc()
_main = True
global _interrupt
if _interrupt:
_interrupt = False
raise KeyboardInterrupt |
def _fetch_all(cls, api_key, endpoint=None, offset=0, limit=25, **kwargs):
"""
Call `self._fetch_page` for as many pages as exist.
TODO: should be extended to do async page fetches if API allows it via
exposing total value.
Returns a list of `cls` instances.
"""
output = []
qp = kwargs.copy()
limit = max(1, min(100, limit))
maximum = kwargs.get('maximum')
qp['limit'] = min(limit, maximum) if maximum is not None else limit
qp['offset'] = offset
more, total = None, None
while True:
entities, options = cls._fetch_page(
api_key=api_key, endpoint=endpoint, **qp
)
output += entities
more = options.get('more')
limit = options.get('limit')
offset = options.get('offset')
total = options.get('total')
if more is None:
if total is None or offset is None:
break
more = (limit + offset) < total
if not more or (maximum is not None and len(output) >= maximum):
break
qp['limit'] = limit
qp['offset'] = offset + limit
return output | Call `self._fetch_page` for as many pages as exist.
TODO: should be extended to do async page fetches if API allows it via
exposing total value.
Returns a list of `cls` instances. | Below is the instruction that describes the task:
### Input:
Call `self._fetch_page` for as many pages as exist.
TODO: should be extended to do async page fetches if API allows it via
exposing total value.
Returns a list of `cls` instances.
### Response:
def _fetch_all(cls, api_key, endpoint=None, offset=0, limit=25, **kwargs):
"""
Call `self._fetch_page` for as many pages as exist.
TODO: should be extended to do async page fetches if API allows it via
exposing total value.
Returns a list of `cls` instances.
"""
output = []
qp = kwargs.copy()
limit = max(1, min(100, limit))
maximum = kwargs.get('maximum')
qp['limit'] = min(limit, maximum) if maximum is not None else limit
qp['offset'] = offset
more, total = None, None
while True:
entities, options = cls._fetch_page(
api_key=api_key, endpoint=endpoint, **qp
)
output += entities
more = options.get('more')
limit = options.get('limit')
offset = options.get('offset')
total = options.get('total')
if more is None:
if total is None or offset is None:
break
more = (limit + offset) < total
if not more or (maximum is not None and len(output) >= maximum):
break
qp['limit'] = limit
qp['offset'] = offset + limit
return output |
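The offset/limit paging loop in `_fetch_all` can be exercised with a stubbed page fetcher. The stub below is hypothetical and only mimics the `limit`/`offset`/`total`/`more` option keys the loop inspects.

```python
def fetch_all(fetch_page, offset=0, limit=25, maximum=None):
    """Collect every page from a paged endpoint (sketch of the loop above)."""
    output = []
    limit = max(1, min(100, limit))
    qp = {"limit": min(limit, maximum) if maximum is not None else limit,
          "offset": offset}
    while True:
        entities, options = fetch_page(**qp)
        output += entities
        limit = options["limit"]
        offset = options["offset"]
        more = options.get("more")
        if more is None:
            # Derive `more` from the reported total when it is not given
            total = options.get("total")
            if total is None:
                break
            more = (limit + offset) < total
        if not more or (maximum is not None and len(output) >= maximum):
            break
        qp = {"limit": limit, "offset": offset + limit}
    return output

def fake_page(limit, offset):
    """Hypothetical endpoint serving 60 items in offset/limit pages."""
    data = list(range(60))
    page = data[offset:offset + limit]
    return page, {"limit": limit, "offset": offset, "total": len(data)}
```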
def resume():
"""
Resume a paused timer, re-activating it. Subsequent time accumulates in
the total.
Returns:
float: The current time.
Raises:
PausedError: If timer was not in paused state.
StoppedError: If timer was already stopped.
"""
t = timer()
if f.t.stopped:
raise StoppedError("Cannot resume stopped timer.")
if not f.t.paused:
raise PausedError("Cannot resume timer that is not paused.")
f.t.paused = False
f.t.start_t = t
f.t.last_t = t
return t | Resume a paused timer, re-activating it. Subsequent time accumulates in
the total.
Returns:
float: The current time.
Raises:
PausedError: If timer was not in paused state.
StoppedError: If timer was already stopped. | Below is the instruction that describes the task:
### Input:
Resume a paused timer, re-activating it. Subsequent time accumulates in
the total.
Returns:
float: The current time.
Raises:
PausedError: If timer was not in paused state.
StoppedError: If timer was already stopped.
### Response:
def resume():
"""
Resume a paused timer, re-activating it. Subsequent time accumulates in
the total.
Returns:
float: The current time.
Raises:
PausedError: If timer was not in paused state.
StoppedError: If timer was already stopped.
"""
t = timer()
if f.t.stopped:
raise StoppedError("Cannot resume stopped timer.")
if not f.t.paused:
raise PausedError("Cannot resume timer that is not paused.")
f.t.paused = False
f.t.start_t = t
f.t.last_t = t
return t |
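A self-contained pause/resume timer illustrating the state checks above; the error classes and the `paused`/`stopped` flags are re-declared here for the sketch, since the original module's `f.t` state object is not shown in this excerpt.

```python
import time

class StoppedError(Exception):
    pass

class PausedError(Exception):
    pass

class Timer:
    def __init__(self):
        self.paused = False
        self.stopped = False
        self.start_t = self.last_t = time.perf_counter()

    def pause(self):
        if self.stopped:
            raise StoppedError("Cannot pause stopped timer.")
        self.paused = True

    def resume(self):
        # Same guard order as the module-level resume() above
        t = time.perf_counter()
        if self.stopped:
            raise StoppedError("Cannot resume stopped timer.")
        if not self.paused:
            raise PausedError("Cannot resume timer that is not paused.")
        self.paused = False
        self.start_t = self.last_t = t
        return t
```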
def _click_autocomplete(root, text):
"""Completer generator for click applications."""
try:
parts = shlex.split(text)
except ValueError:
raise StopIteration
location, incomplete = _click_resolve_command(root, parts)
if not text.endswith(' ') and not incomplete and text:
raise StopIteration
if incomplete and not incomplete[0:2].isalnum():
for param in location.params:
if not isinstance(param, click.Option):
continue
for opt in itertools.chain(param.opts, param.secondary_opts):
if opt.startswith(incomplete):
yield completion.Completion(opt, -len(incomplete), display_meta=param.help)
elif isinstance(location, (click.MultiCommand, click.core.Group)):
ctx = click.Context(location)
commands = location.list_commands(ctx)
for command in commands:
if command.startswith(incomplete):
cmd = location.get_command(ctx, command)
yield completion.Completion(command, -len(incomplete), display_meta=cmd.short_help) | Completer generator for click applications. | Below is the instruction that describes the task:
### Input:
Completer generator for click applications.
### Response:
def _click_autocomplete(root, text):
"""Completer generator for click applications."""
try:
parts = shlex.split(text)
except ValueError:
raise StopIteration
location, incomplete = _click_resolve_command(root, parts)
if not text.endswith(' ') and not incomplete and text:
raise StopIteration
if incomplete and not incomplete[0:2].isalnum():
for param in location.params:
if not isinstance(param, click.Option):
continue
for opt in itertools.chain(param.opts, param.secondary_opts):
if opt.startswith(incomplete):
yield completion.Completion(opt, -len(incomplete), display_meta=param.help)
elif isinstance(location, (click.MultiCommand, click.core.Group)):
ctx = click.Context(location)
commands = location.list_commands(ctx)
for command in commands:
if command.startswith(incomplete):
cmd = location.get_command(ctx, command)
yield completion.Completion(command, -len(incomplete), display_meta=cmd.short_help) |
def logs_for_job(self, job_name, wait=False, poll=10): # noqa: C901 - suppress complexity warning for this method
"""Display the logs for a given training job, optionally tailing them until the
job is complete. If the output is a tty or a Jupyter cell, it will be color-coded
based on which instance the log entry is from.
Args:
job_name (str): Name of the training job to display the logs for.
wait (bool): Whether to keep looking for new log entries until the job completes (default: False).
poll (int): The interval in seconds between polling for new log entries and job completion (default: 10).
Raises:
ValueError: If waiting and the training job fails.
"""
description = self.sagemaker_client.describe_training_job(TrainingJobName=job_name)
print(secondary_training_status_message(description, None), end='')
instance_count = description['ResourceConfig']['InstanceCount']
status = description['TrainingJobStatus']
stream_names = [] # The list of log streams
positions = {} # The current position in each stream, map of stream name -> position
# Increase retries allowed (from default of 4), as we don't want waiting for a training job
# to be interrupted by a transient exception.
config = botocore.config.Config(retries={'max_attempts': 15})
client = self.boto_session.client('logs', config=config)
log_group = '/aws/sagemaker/TrainingJobs'
job_already_completed = True if status == 'Completed' or status == 'Failed' or status == 'Stopped' else False
state = LogState.TAILING if wait and not job_already_completed else LogState.COMPLETE
dot = False
color_wrap = sagemaker.logs.ColorWrap()
# The loop below implements a state machine that alternates between checking the job status and
# reading whatever is available in the logs at this point. Note, that if we were called with
# wait == False, we never check the job status.
#
# If wait == TRUE and job is not completed, the initial state is TAILING
# If wait == FALSE, the initial state is COMPLETE (doesn't matter if the job really is complete).
#
# The state table:
#
# STATE ACTIONS CONDITION NEW STATE
# ---------------- ---------------- ----------------- ----------------
# TAILING Read logs, Pause, Get status Job complete JOB_COMPLETE
# Else TAILING
# JOB_COMPLETE Read logs, Pause Any COMPLETE
# COMPLETE Read logs, Exit N/A
#
# Notes:
# - The JOB_COMPLETE state forces us to do an extra pause and read any items that got to Cloudwatch after
# the job was marked complete.
last_describe_job_call = time.time()
last_description = description
while True:
if len(stream_names) < instance_count:
# Log streams are created whenever a container starts writing to stdout/err, so this list
# may be dynamic until we have a stream for every instance.
try:
streams = client.describe_log_streams(logGroupName=log_group, logStreamNamePrefix=job_name + '/',
orderBy='LogStreamName', limit=instance_count)
stream_names = [s['logStreamName'] for s in streams['logStreams']]
positions.update([(s, sagemaker.logs.Position(timestamp=0, skip=0))
for s in stream_names if s not in positions])
except ClientError as e:
# On the very first training job run on an account, there's no log group until
# the container starts logging, so ignore any errors thrown about that
err = e.response.get('Error', {})
if err.get('Code', None) != 'ResourceNotFoundException':
raise
if len(stream_names) > 0:
if dot:
print('')
dot = False
for idx, event in sagemaker.logs.multi_stream_iter(client, log_group, stream_names, positions):
color_wrap(idx, event['message'])
ts, count = positions[stream_names[idx]]
if event['timestamp'] == ts:
positions[stream_names[idx]] = sagemaker.logs.Position(timestamp=ts, skip=count + 1)
else:
positions[stream_names[idx]] = sagemaker.logs.Position(timestamp=event['timestamp'], skip=1)
else:
dot = True
print('.', end='')
sys.stdout.flush()
if state == LogState.COMPLETE:
break
time.sleep(poll)
if state == LogState.JOB_COMPLETE:
state = LogState.COMPLETE
elif time.time() - last_describe_job_call >= 30:
description = self.sagemaker_client.describe_training_job(TrainingJobName=job_name)
last_describe_job_call = time.time()
if secondary_training_status_changed(description, last_description):
print()
print(secondary_training_status_message(description, last_description), end='')
last_description = description
status = description['TrainingJobStatus']
if status == 'Completed' or status == 'Failed' or status == 'Stopped':
print()
state = LogState.JOB_COMPLETE
if wait:
self._check_job_status(job_name, description, 'TrainingJobStatus')
if dot:
print()
# Customers are not billed for hardware provisioning, so billable time is less than total time
billable_time = (description['TrainingEndTime'] - description['TrainingStartTime']) * instance_count
print('Billable seconds:', int(billable_time.total_seconds()) + 1) | Display the logs for a given training job, optionally tailing them until the
job is complete. If the output is a tty or a Jupyter cell, it will be color-coded
based on which instance the log entry is from.
Args:
job_name (str): Name of the training job to display the logs for.
wait (bool): Whether to keep looking for new log entries until the job completes (default: False).
poll (int): The interval in seconds between polling for new log entries and job completion (default: 10).
Raises:
ValueError: If waiting and the training job fails. | Below is the instruction that describes the task:
### Input:
Display the logs for a given training job, optionally tailing them until the
job is complete. If the output is a tty or a Jupyter cell, it will be color-coded
based on which instance the log entry is from.
Args:
job_name (str): Name of the training job to display the logs for.
wait (bool): Whether to keep looking for new log entries until the job completes (default: False).
poll (int): The interval in seconds between polling for new log entries and job completion (default: 10).
Raises:
ValueError: If waiting and the training job fails.
### Response:
def logs_for_job(self, job_name, wait=False, poll=10): # noqa: C901 - suppress complexity warning for this method
"""Display the logs for a given training job, optionally tailing them until the
job is complete. If the output is a tty or a Jupyter cell, it will be color-coded
based on which instance the log entry is from.
Args:
job_name (str): Name of the training job to display the logs for.
wait (bool): Whether to keep looking for new log entries until the job completes (default: False).
poll (int): The interval in seconds between polling for new log entries and job completion (default: 10).
Raises:
ValueError: If waiting and the training job fails.
"""
description = self.sagemaker_client.describe_training_job(TrainingJobName=job_name)
print(secondary_training_status_message(description, None), end='')
instance_count = description['ResourceConfig']['InstanceCount']
status = description['TrainingJobStatus']
stream_names = [] # The list of log streams
positions = {} # The current position in each stream, map of stream name -> position
# Increase retries allowed (from default of 4), as we don't want waiting for a training job
# to be interrupted by a transient exception.
config = botocore.config.Config(retries={'max_attempts': 15})
client = self.boto_session.client('logs', config=config)
log_group = '/aws/sagemaker/TrainingJobs'
job_already_completed = True if status == 'Completed' or status == 'Failed' or status == 'Stopped' else False
state = LogState.TAILING if wait and not job_already_completed else LogState.COMPLETE
dot = False
color_wrap = sagemaker.logs.ColorWrap()
# The loop below implements a state machine that alternates between checking the job status and
# reading whatever is available in the logs at this point. Note, that if we were called with
# wait == False, we never check the job status.
#
# If wait == TRUE and job is not completed, the initial state is TAILING
# If wait == FALSE, the initial state is COMPLETE (doesn't matter if the job really is complete).
#
# The state table:
#
# STATE ACTIONS CONDITION NEW STATE
# ---------------- ---------------- ----------------- ----------------
# TAILING Read logs, Pause, Get status Job complete JOB_COMPLETE
# Else TAILING
# JOB_COMPLETE Read logs, Pause Any COMPLETE
# COMPLETE Read logs, Exit N/A
#
# Notes:
# - The JOB_COMPLETE state forces us to do an extra pause and read any items that got to Cloudwatch after
# the job was marked complete.
last_describe_job_call = time.time()
last_description = description
while True:
if len(stream_names) < instance_count:
# Log streams are created whenever a container starts writing to stdout/err, so this list
# may be dynamic until we have a stream for every instance.
try:
streams = client.describe_log_streams(logGroupName=log_group, logStreamNamePrefix=job_name + '/',
orderBy='LogStreamName', limit=instance_count)
stream_names = [s['logStreamName'] for s in streams['logStreams']]
positions.update([(s, sagemaker.logs.Position(timestamp=0, skip=0))
for s in stream_names if s not in positions])
except ClientError as e:
# On the very first training job run on an account, there's no log group until
# the container starts logging, so ignore any errors thrown about that
err = e.response.get('Error', {})
if err.get('Code', None) != 'ResourceNotFoundException':
raise
if len(stream_names) > 0:
if dot:
print('')
dot = False
for idx, event in sagemaker.logs.multi_stream_iter(client, log_group, stream_names, positions):
color_wrap(idx, event['message'])
ts, count = positions[stream_names[idx]]
if event['timestamp'] == ts:
positions[stream_names[idx]] = sagemaker.logs.Position(timestamp=ts, skip=count + 1)
else:
positions[stream_names[idx]] = sagemaker.logs.Position(timestamp=event['timestamp'], skip=1)
else:
dot = True
print('.', end='')
sys.stdout.flush()
if state == LogState.COMPLETE:
break
time.sleep(poll)
if state == LogState.JOB_COMPLETE:
state = LogState.COMPLETE
elif time.time() - last_describe_job_call >= 30:
description = self.sagemaker_client.describe_training_job(TrainingJobName=job_name)
last_describe_job_call = time.time()
if secondary_training_status_changed(description, last_description):
print()
print(secondary_training_status_message(description, last_description), end='')
last_description = description
status = description['TrainingJobStatus']
if status == 'Completed' or status == 'Failed' or status == 'Stopped':
print()
state = LogState.JOB_COMPLETE
if wait:
self._check_job_status(job_name, description, 'TrainingJobStatus')
if dot:
print()
# Customers are not billed for hardware provisioning, so billable time is less than total time
billable_time = (description['TrainingEndTime'] - description['TrainingStartTime']) * instance_count
print('Billable seconds:', int(billable_time.total_seconds()) + 1) |
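The per-stream position bookkeeping above (a timestamp plus a skip count for events sharing that timestamp) can be isolated into a small helper; `Position` is re-declared here as a namedtuple, matching how `sagemaker.logs.Position` is used in the loop.

```python
from collections import namedtuple

Position = namedtuple("Position", ["timestamp", "skip"])

def advance(position, event_timestamp):
    """Advance a stream position past one consumed log event."""
    if event_timestamp == position.timestamp:
        # Another event at the same timestamp: skip one more on the next read
        return Position(timestamp=position.timestamp, skip=position.skip + 1)
    # A newer timestamp: restart the skip counter after this event
    return Position(timestamp=event_timestamp, skip=1)
```

Tracking the skip count is what lets the loop resume a stream mid-timestamp without re-emitting events it has already printed.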
def insert_point(self, x, y):
""" Inserts a point on the path at the mouse location.
We first need to check if the mouse location is on the path.
Inserting a point is time-intensive and experimental.
"""
try:
bezier = _ctx.ximport("bezier")
except:
from nodebox.graphics import bezier
# Do a number of checks distributed along the path.
# Keep the one closest to the actual mouse location.
# Ten checks works fast but leads to imprecision in sharp corners
# and curves closely located next to each other.
# I prefer the slower but more stable approach.
n = 100
closest = None
dx0 = float("inf")
dy0 = float("inf")
for i in range(n):
t = float(i)/n
pt = self.path.point(t)
dx = abs(pt.x-x)
dy = abs(pt.y-y)
if dx+dy <= dx0+dy0:
dx0 = dx
dy0 = dy
closest = t
# Next, scan the area around the approximation.
# If the closest point is located at 0.2 on the path,
# we need to scan between 0.1 and 0.3 for a better
# approximation. If 1.5 was the best guess, scan
# 1.40, 1.41 ... 1.59 and so on.
# Each decimal precision takes 20 iterations.
decimals = [3,4]
for d in decimals:
d = 1.0/pow(10,d)
for i in range(20):
t = closest-d + float(i)*d*0.1
if t < 0.0: t = 1.0+t
if t > 1.0: t = t-1.0
pt = self.path.point(t)
dx = abs(pt.x-x)
dy = abs(pt.y-y)
if dx <= dx0 and dy <= dy0:
dx0 = dx
dy0 = dy
closest_precise = t
closest = closest_precise
# Update the points list with the inserted point.
p = bezier.insert_point(self.path, closest_precise)
i, t, pt = bezier._locate(self.path, closest_precise)
i += 1
pt = PathElement()
pt.cmd = p[i].cmd
pt.x = p[i].x
pt.y = p[i].y
pt.ctrl1 = Point(p[i].ctrl1.x, p[i].ctrl1.y)
pt.ctrl2 = Point(p[i].ctrl2.x, p[i].ctrl2.y)
pt.freehand = False
self._points.insert(i, pt)
self._points[i-1].ctrl1 = Point(p[i-1].ctrl1.x, p[i-1].ctrl1.y)
self._points[i+1].ctrl1 = Point(p[i+1].ctrl1.x, p[i+1].ctrl1.y)
self._points[i+1].ctrl2 = Point(p[i+1].ctrl2.x, p[i+1].ctrl2.y) | Inserts a point on the path at the mouse location.
We first need to check if the mouse location is on the path.
Inserting a point is time-intensive and experimental. | Below is the instruction that describes the task:
### Input:
Inserts a point on the path at the mouse location.
We first need to check if the mouse location is on the path.
Inserting a point is time-intensive and experimental.
### Response:
def insert_point(self, x, y):
""" Inserts a point on the path at the mouse location.
We first need to check if the mouse location is on the path.
Inserting a point is time-intensive and experimental.
"""
try:
bezier = _ctx.ximport("bezier")
except:
from nodebox.graphics import bezier
# Do a number of checks distributed along the path.
# Keep the one closest to the actual mouse location.
# Ten checks works fast but leads to imprecision in sharp corners
# and curves closely located next to each other.
# I prefer the slower but more stable approach.
n = 100
closest = None
dx0 = float("inf")
dy0 = float("inf")
for i in range(n):
t = float(i)/n
pt = self.path.point(t)
dx = abs(pt.x-x)
dy = abs(pt.y-y)
if dx+dy <= dx0+dy0:
dx0 = dx
dy0 = dy
closest = t
# Next, scan the area around the approximation.
# If the closest point is located at 0.2 on the path,
# we need to scan between 0.1 and 0.3 for a better
# approximation. If 1.5 was the best guess, scan
# 1.40, 1.41 ... 1.59 and so on.
# Each decimal precision takes 20 iterations.
decimals = [3,4]
for d in decimals:
d = 1.0/pow(10,d)
for i in range(20):
t = closest-d + float(i)*d*0.1
if t < 0.0: t = 1.0+t
if t > 1.0: t = t-1.0
pt = self.path.point(t)
dx = abs(pt.x-x)
dy = abs(pt.y-y)
if dx <= dx0 and dy <= dy0:
dx0 = dx
dy0 = dy
closest_precise = t
closest = closest_precise
# Update the points list with the inserted point.
p = bezier.insert_point(self.path, closest_precise)
i, t, pt = bezier._locate(self.path, closest_precise)
i += 1
pt = PathElement()
pt.cmd = p[i].cmd
pt.x = p[i].x
pt.y = p[i].y
pt.ctrl1 = Point(p[i].ctrl1.x, p[i].ctrl1.y)
pt.ctrl2 = Point(p[i].ctrl2.x, p[i].ctrl2.y)
pt.freehand = False
self._points.insert(i, pt)
self._points[i-1].ctrl1 = Point(p[i-1].ctrl1.x, p[i-1].ctrl1.y)
self._points[i+1].ctrl1 = Point(p[i+1].ctrl1.x, p[i+1].ctrl1.y)
self._points[i+1].ctrl2 = Point(p[i+1].ctrl2.x, p[i+1].ctrl2.y) |
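The coarse-then-refine parameter search used in the `insert_point` record above (scan n points along the path, then rescan a narrow window around the best guess) can be sketched in isolation. The quadratic Bezier below is a hypothetical stand-in for the NodeBox path object, not its real API:

```python
# Minimal sketch of a coarse-then-refine search for the curve parameter t
# closest to a target point. bezier_point is an assumed stand-in for
# self.path.point(t) in the record above.

def bezier_point(t, p0, p1, p2):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0]
    y = u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1]
    return x, y

def closest_t(x, y, p0, p1, p2, coarse=100, refine_steps=200):
    # Coarse pass: distribute checks along the curve, keep the best t.
    best_t, best_d = 0.0, float("inf")
    for i in range(coarse + 1):
        t = i / coarse
        px, py = bezier_point(t, p0, p1, p2)
        d = (px - x) ** 2 + (py - y) ** 2
        if d < best_d:
            best_t, best_d = t, d
    # Refine pass: scan a narrow window around the coarse estimate.
    window = 1.0 / coarse
    for i in range(refine_steps + 1):
        t = min(1.0, max(0.0, best_t - window + 2 * window * i / refine_steps))
        px, py = bezier_point(t, p0, p1, p2)
        d = (px - x) ** 2 + (py - y) ** 2
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```

For an arc from (0, 0) through control point (50, 100) to (100, 0), `closest_t(50, 50, ...)` lands on t ≈ 0.5, the apex of the curve.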
def tokens(cls, tokens):
"""
Create a Lnk object for a token range.
Args:
tokens: a list of token identifiers
"""
return cls(Lnk.TOKENS, tuple(map(int, tokens))) | Create a Lnk object for a token range.
Args:
tokens: a list of token identifiers | Below is the the instruction that describes the task:
### Input:
Create a Lnk object for a token range.
Args:
tokens: a list of token identifiers
### Response:
def tokens(cls, tokens):
"""
Create a Lnk object for a token range.
Args:
tokens: a list of token identifiers
"""
return cls(Lnk.TOKENS, tuple(map(int, tokens))) |
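A usage sketch for the `tokens` classmethod record above. The real class lives in PyDelphin; the constructor and the `TOKENS` tag value here are assumptions made only so the example runs on its own:

```python
# Hypothetical stand-in for the Lnk class; only the tokens() classmethod
# is taken from the record above.

class Lnk:
    TOKENS = '#'  # assumed tag value, not PyDelphin's actual constant

    def __init__(self, type, data):
        self.type = type
        self.data = data

    @classmethod
    def tokens(cls, tokens):
        """Create a Lnk object for a token range."""
        return cls(Lnk.TOKENS, tuple(map(int, tokens)))

# map(int, ...) means string token identifiers are coerced to ints:
lnk = Lnk.tokens(['3', '4', '5'])
```

The `tuple(map(int, tokens))` step is the point of interest: callers may pass identifiers as strings (e.g. parsed from a serialized format) and still get a hashable tuple of ints back.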
def error_perturbation(C, S):
r"""Error perturbation for given sensitivity matrix.
Parameters
----------
C : (M, M) ndarray
Count matrix
S : (M, M) ndarray or (K, M, M) ndarray
Sensitivity matrix (for scalar observable) or sensitivity
tensor for vector observable
Returns
-------
X : float or (K, K) ndarray
error-perturbation (for scalar observables) or covariance matrix
(for vector-valued observable)
"""
if len(S.shape) == 2: # Scalar observable
return error_perturbation_single(C, S)
elif len(S.shape) == 3: # Vector observable
return error_perturbation_cov(C, S)
else:
raise ValueError("Sensitivity matrix S has to be a 2d or 3d array") | r"""Error perturbation for given sensitivity matrix.
Parameters
----------
C : (M, M) ndarray
Count matrix
S : (M, M) ndarray or (K, M, M) ndarray
Sensitivity matrix (for scalar observable) or sensitivity
tensor for vector observable
Returns
-------
X : float or (K, K) ndarray
error-perturbation (for scalar observables) or covariance matrix
(for vector-valued observable) | Below is the instruction that describes the task:
### Input:
r"""Error perturbation for given sensitivity matrix.
Parameters
----------
C : (M, M) ndarray
Count matrix
S : (M, M) ndarray or (K, M, M) ndarray
Sensitivity matrix (for scalar observable) or sensitivity
tensor for vector observable
Returns
-------
X : float or (K, K) ndarray
error-perturbation (for scalar observables) or covariance matrix
(for vector-valued observable)
### Response:
def error_perturbation(C, S):
r"""Error perturbation for given sensitivity matrix.
Parameters
----------
C : (M, M) ndarray
Count matrix
S : (M, M) ndarray or (K, M, M) ndarray
Sensitivity matrix (for scalar observable) or sensitivity
tensor for vector observable
Returns
-------
X : float or (K, K) ndarray
error-perturbation (for scalar observables) or covariance matrix
(for vector-valued observable)
"""
if len(S.shape) == 2: # Scalar observable
return error_perturbation_single(C, S)
elif len(S.shape) == 3: # Vector observable
return error_perturbation_cov(C, S)
else:
raise ValueError("Sensitivity matrix S has to be a 2d or 3d array") |
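The `error_perturbation` record above dispatches on the rank of the sensitivity array: a 2-d `(M, M)` array means a scalar observable, a 3-d `(K, M, M)` array means a vector observable. A minimal sketch of just that dispatch, with the numerical branches stubbed out (the real math needs the count matrix) and a tiny `Arr` stand-in exposing only `.shape`:

```python
# Sketch of the shape-based dispatch only; branch bodies are stubs, and
# Arr is an assumed stand-in for a NumPy ndarray's .shape attribute.
from collections import namedtuple

Arr = namedtuple("Arr", "shape")

def error_perturbation(C, S):
    if len(S.shape) == 2:      # (M, M): scalar observable -> float
        return "scalar"
    elif len(S.shape) == 3:    # (K, M, M): vector observable -> (K, K) cov
        return "cov"
    raise ValueError("Sensitivity matrix S has to be a 2d or 3d array")

error_perturbation(None, Arr((4, 4)))     # scalar branch
error_perturbation(None, Arr((2, 4, 4)))  # covariance branch
```

Any other rank (a 1-d vector, a 4-d tensor) falls through to the `ValueError`, mirroring the record above.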
def count(self, searchString, category="", math=False, game=False, searchFiles=False, extension=""):
"""Counts the number of ticalc.org files containing some search term, doesn't return them"""
fileData = {}
nameData = {}
#Search the index
if searchFiles:
fileData = self.searchNamesIndex(self.fileIndex, fileData, searchString, category, math, game, extension)
else:
nameData = self.searchNamesIndex(self.nameIndex, nameData, searchString)
#Now search the other index
if searchFiles:
nameData, fileData = self.searchFilesIndex(fileData, nameData, self.nameIndex, searchString)
else:
fileData, nameData = self.searchFilesIndex(nameData, fileData, self.fileIndex, searchString, category, math, game, extension)
# Bail out if we failed to do either of those things.
if fileData is None or nameData is None:
self.repo.printd("Error: failed to load one or more of the index files for this repo. Exiting.")
self.repo.printd("Please run 'calcpkg update' and retry this command.")
sys.exit(1)
#Now obtain a count (exclude "none" elements)
count = 0
for element in nameData:
if not nameData[element] is None:
count += 1
self.repo.printd("Search for '" + searchString + "' returned " + str(count) + " result(s) in " + self.repo.name)
return count | Counts the number of ticalc.org files containing some search term, doesn't return them | Below is the instruction that describes the task:
### Input:
Counts the number of ticalc.org files containing some search term, doesn't return them
### Response:
def count(self, searchString, category="", math=False, game=False, searchFiles=False, extension=""):
"""Counts the number of ticalc.org files containing some search term, doesn't return them"""
fileData = {}
nameData = {}
#Search the index
if searchFiles:
fileData = self.searchNamesIndex(self.fileIndex, fileData, searchString, category, math, game, extension)
else:
nameData = self.searchNamesIndex(self.nameIndex, nameData, searchString)
#Now search the other index
if searchFiles:
nameData, fileData = self.searchFilesIndex(fileData, nameData, self.nameIndex, searchString)
else:
fileData, nameData = self.searchFilesIndex(nameData, fileData, self.fileIndex, searchString, category, math, game, extension)
# Bail out if we failed to do either of those things.
if fileData is None or nameData is None:
self.repo.printd("Error: failed to load one or more of the index files for this repo. Exiting.")
self.repo.printd("Please run 'calcpkg update' and retry this command.")
sys.exit(1)
#Now obtain a count (exclude "none" elements)
count = 0
for element in nameData:
if not nameData[element] is None:
count += 1
self.repo.printd("Search for '" + searchString + "' returned " + str(count) + " result(s) in " + self.repo.name)
return count |
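The final tally in the `count` record above, skipping `None` entries in a dict, is a common pattern. A compact equivalent, with hypothetical data rather than calcpkg's actual index structures:

```python
# Count dict entries whose value is not None; equivalent to the explicit
# loop at the end of the record above. The data below is made up.

def count_hits(results):
    return sum(1 for v in results.values() if v is not None)

hits = count_hits({"prog1": "match", "prog2": None, "prog3": "match"})
# hits == 2
```

A generator expression with `sum` avoids mutating a counter and reads as a single statement of intent.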